Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm+acpi-4.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management and ACPI updates from Rafael Wysocki:
"From the number of commits perspective, the biggest items are ACPICA
and cpufreq changes with the latter taking the lead (over 50 commits).

On the cpufreq front, there are many cleanups and minor fixes in the
core and governors, driver updates etc. We also have a new cpufreq
driver for Mediatek MT8173 chips.

ACPICA mostly updates its debug infrastructure and adds a number of
fixes and cleanups for good measure.

The Operating Performance Points (OPP) framework is updated with new
DT bindings and support for them among other things.

We have a few updates of the generic power domains framework and a
reorganization of the ACPI device enumeration code and bus type
operations.

And a lot of fixes and cleanups all over.

Included is one branch from the MFD tree as it contains some
PM-related driver core and ACPI PM changes a few other commits are
based on.

Specifics:

- ACPICA update to upstream revision 20150818 including method
tracing extensions to allow more in-depth AML debugging in the
kernel and a number of assorted fixes and cleanups (Bob Moore, Lv
Zheng, Markus Elfring).

- ACPI sysfs code updates and a documentation update related to AML
method tracing (Lv Zheng).

- ACPI EC driver fix related to serialized evaluations of _Qxx
methods and ACPI tools updates allowing the EC userspace tool to be
built from the kernel source (Lv Zheng).

- ACPI processor driver updates preparing it for future introduction
of CPPC support and ACPI PCC mailbox driver updates (Ashwin
Chaugule).

- ACPI interrupts enumeration fix for a regression related to the
handling of IRQ attribute conflicts between MADT and the ACPI
namespace (Jiang Liu).

- Fixes related to ACPI device PM (Mika Westerberg, Srinidhi
Kasagar).

- ACPI device registration code reorganization to separate the
sysfs-related code and bus type operations from the rest (Rafael J
Wysocki).

- Assorted cleanups in the ACPI core (Jarkko Nikula, Mathias Krause,
Andy Shevchenko, Rafael J Wysocki, Nicolas Iooss).

- ACPI cpufreq driver and ia64 cpufreq driver fixes and cleanups (Pan
Xinhui, Rafael J Wysocki).

- cpufreq core cleanups on top of the previous changes allowing it to
preserve its sysfs directories over system suspend/resume (Viresh
Kumar, Rafael J Wysocki, Sebastian Andrzej Siewior).

- cpufreq fixes and cleanups related to governors (Viresh Kumar).

- cpufreq updates (core and the cpufreq-dt driver) related to the
turbo/boost mode support (Viresh Kumar, Bartlomiej Zolnierkiewicz).

- New DT bindings for Operating Performance Points (OPP), support for
them in the OPP framework and in the cpufreq-dt driver plus related
OPP framework fixes and cleanups (Viresh Kumar).

- cpufreq powernv driver updates (Shilpasri G Bhat).

- New cpufreq driver for Mediatek MT8173 (Pi-Cheng Chen).

- Assorted cpufreq driver (speedstep-lib, sfi, integrator) cleanups
and fixes (Abhilash Jindal, Andrzej Hajda, Cristian Ardelean).

- intel_pstate driver updates including Skylake-S support, support
for enabling HW P-states per CPU and an additional vendor bypass
list entry (Kristen Carlson Accardi, Chen Yu, Ethan Zhao).

- cpuidle core fixes related to the handling of coupled idle states
(Xunlei Pang).

- intel_idle driver updates including Skylake Client support and
support for freeze-mode-specific idle states (Len Brown).

- Driver core updates related to power management (Andy Shevchenko,
Rafael J Wysocki).

- Generic power domains framework fixes and cleanups (Jon Hunter,
Geert Uytterhoeven, Rajendra Nayak, Ulf Hansson).

- Device PM QoS framework update to allow the latency tolerance
setting to be exposed to user space via sysfs (Mika Westerberg).

- devfreq support for PPMUv2 in Exynos5433 and a fix for an incorrect
exynos-ppmu DT binding (Chanwoo Choi, Javier Martinez Canillas).

- System sleep support updates (Alan Stern, Len Brown, SungEun Kim).

- rockchip-io AVS support updates (Heiko Stuebner).

- PM core clocks support fixup (Colin Ian King).

- Power capping RAPL driver update including support for Skylake H/S
and Broadwell-H (Radivoje Jovanovic, Seiichi Ikarashi).

- Generic device properties framework fixes related to the handling
of static (driver-provided) property sets (Andy Shevchenko).

- turbostat and cpupower updates (Len Brown, Shilpasri G Bhat,
Shreyas B Prabhu)"

* tag 'pm+acpi-4.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (180 commits)
cpufreq: speedstep-lib: Use monotonic clock
cpufreq: powernv: Increase the verbosity of OCC console messages
cpufreq: sfi: use kmemdup rather than duplicating its implementation
cpufreq: drop !cpufreq_driver check from cpufreq_parse_governor()
cpufreq: rename cpufreq_real_policy as cpufreq_user_policy
cpufreq: remove redundant 'policy' field from user_policy
cpufreq: remove redundant 'governor' field from user_policy
cpufreq: update user_policy.* on success
cpufreq: use memcpy() to copy policy
cpufreq: remove redundant CPUFREQ_INCOMPATIBLE notifier event
cpufreq: mediatek: Add MT8173 cpufreq driver
dt-bindings: mediatek: Add MT8173 CPU DVFS clock bindings
PM / Domains: Fix typo in description of genpd_dev_pm_detach()
PM / Domains: Remove unusable governor dummies
PM / Domains: Make pm_genpd_init() available to modules
PM / domains: Align column headers and data in pm_genpd_summary output
powercap / RAPL: disable the 2nd power limit properly
tools: cpupower: Fix error when running cpupower monitor
PM / OPP: Drop unlikely before IS_ERR(_OR_NULL)
PM / OPP: Fix static checker warning (broken 64bit big endian systems)
...

+8432 -3365
+187 -21
Documentation/acpi/method-tracing.txt
···
-/sys/module/acpi/parameters/:
+ACPICA Trace Facility
 
-trace_method_name
-	The AML method name that the user wants to trace
+Copyright (C) 2015, Intel Corporation
+Author: Lv Zheng <lv.zheng@intel.com>
 
-trace_debug_layer
-	The temporary debug_layer used when tracing the method.
-	Using 0xffffffff by default if it is 0.
 
-trace_debug_level
-	The temporary debug_level used when tracing the method.
-	Using 0x00ffffff by default if it is 0.
+Abstract:
 
-trace_state
+This document describes the functions and the interfaces of the method
+tracing facility.
+
+1. Functionalities and usage examples:
+
+   ACPICA provides method tracing capability. And two functions are
+   currently implemented using this capability.
+
+   A. Log reducer
+   ACPICA subsystem provides debugging outputs when CONFIG_ACPI_DEBUG is
+   enabled. The debugging messages which are deployed via
+   ACPI_DEBUG_PRINT() macro can be reduced at 2 levels - per-component
+   level (known as debug layer, configured via
+   /sys/module/acpi/parameters/debug_layer) and per-type level (known as
+   debug level, configured via /sys/module/acpi/parameters/debug_level).
+
+   But when the particular layer/level is applied to the control method
+   evaluations, the quantity of the debugging outputs may still be too
+   large to be put into the kernel log buffer. The idea thus is worked out
+   to only enable the particular debug layer/level (normally more detailed)
+   logs when the control method evaluation is started, and disable the
+   detailed logging when the control method evaluation is stopped.
+
+   The following command examples illustrate the usage of the "log reducer"
+   functionality:
+   a. Filter out the debug layer/level matched logs when control methods
+      are being evaluated:
+      # cd /sys/module/acpi/parameters
+      # echo "0xXXXXXXXX" > trace_debug_layer
+      # echo "0xYYYYYYYY" > trace_debug_level
+      # echo "enable" > trace_state
+   b. Filter out the debug layer/level matched logs when the specified
+      control method is being evaluated:
+      # cd /sys/module/acpi/parameters
+      # echo "0xXXXXXXXX" > trace_debug_layer
+      # echo "0xYYYYYYYY" > trace_debug_level
+      # echo "\PPPP.AAAA.TTTT.HHHH" > trace_method_name
+      # echo "method" > /sys/module/acpi/parameters/trace_state
+   c. Filter out the debug layer/level matched logs when the specified
+      control method is being evaluated for the first time:
+      # cd /sys/module/acpi/parameters
+      # echo "0xXXXXXXXX" > trace_debug_layer
+      # echo "0xYYYYYYYY" > trace_debug_level
+      # echo "\PPPP.AAAA.TTTT.HHHH" > trace_method_name
+      # echo "method-once" > /sys/module/acpi/parameters/trace_state
+   Where:
+      0xXXXXXXXX/0xYYYYYYYY: Refer to Documentation/acpi/debug.txt for
+                             possible debug layer/level masking values.
+      \PPPP.AAAA.TTTT.HHHH:  Full path of a control method that can be found
+                             in the ACPI namespace. It needn't be an entry
+                             of a control method evaluation.
+
+   B. AML tracer
+
+   There are special log entries added by the method tracing facility at
+   the "trace points" the AML interpreter starts/stops to execute a control
+   method, or an AML opcode. Note that the format of the log entries are
+   subject to change:
+   [    0.186427] exdebug-0398 ex_trace_point : Method Begin [0xf58394d8:\_SB.PCI0.LPCB.ECOK] execution.
+   [    0.186630] exdebug-0398 ex_trace_point : Opcode Begin [0xf5905c88:If] execution.
+   [    0.186820] exdebug-0398 ex_trace_point : Opcode Begin [0xf5905cc0:LEqual] execution.
+   [    0.187010] exdebug-0398 ex_trace_point : Opcode Begin [0xf5905a20:-NamePath-] execution.
+   [    0.187214] exdebug-0398 ex_trace_point : Opcode End [0xf5905a20:-NamePath-] execution.
+   [    0.187407] exdebug-0398 ex_trace_point : Opcode Begin [0xf5905f60:One] execution.
+   [    0.187594] exdebug-0398 ex_trace_point : Opcode End [0xf5905f60:One] execution.
+   [    0.187789] exdebug-0398 ex_trace_point : Opcode End [0xf5905cc0:LEqual] execution.
+   [    0.187980] exdebug-0398 ex_trace_point : Opcode Begin [0xf5905cc0:Return] execution.
+   [    0.188146] exdebug-0398 ex_trace_point : Opcode Begin [0xf5905f60:One] execution.
+   [    0.188334] exdebug-0398 ex_trace_point : Opcode End [0xf5905f60:One] execution.
+   [    0.188524] exdebug-0398 ex_trace_point : Opcode End [0xf5905cc0:Return] execution.
+   [    0.188712] exdebug-0398 ex_trace_point : Opcode End [0xf5905c88:If] execution.
+   [    0.188903] exdebug-0398 ex_trace_point : Method End [0xf58394d8:\_SB.PCI0.LPCB.ECOK] execution.
+
+   Developers can utilize these special log entries to track the AML
+   interpretation, thus can aid issue debugging and performance tuning. Note
+   that, as the "AML tracer" logs are implemented via ACPI_DEBUG_PRINT()
+   macro, CONFIG_ACPI_DEBUG is also required to be enabled for enabling
+   "AML tracer" logs.
+
+   The following command examples illustrate the usage of the "AML tracer"
+   functionality:
+   a. Filter out the method start/stop "AML tracer" logs when control
+      methods are being evaluated:
+      # cd /sys/module/acpi/parameters
+      # echo "0x80" > trace_debug_layer
+      # echo "0x10" > trace_debug_level
+      # echo "enable" > trace_state
+   b. Filter out the method start/stop "AML tracer" when the specified
+      control method is being evaluated:
+      # cd /sys/module/acpi/parameters
+      # echo "0x80" > trace_debug_layer
+      # echo "0x10" > trace_debug_level
+      # echo "\PPPP.AAAA.TTTT.HHHH" > trace_method_name
+      # echo "method" > trace_state
+   c. Filter out the method start/stop "AML tracer" logs when the specified
+      control method is being evaluated for the first time:
+      # cd /sys/module/acpi/parameters
+      # echo "0x80" > trace_debug_layer
+      # echo "0x10" > trace_debug_level
+      # echo "\PPPP.AAAA.TTTT.HHHH" > trace_method_name
+      # echo "method-once" > trace_state
+   d. Filter out the method/opcode start/stop "AML tracer" when the
+      specified control method is being evaluated:
+      # cd /sys/module/acpi/parameters
+      # echo "0x80" > trace_debug_layer
+      # echo "0x10" > trace_debug_level
+      # echo "\PPPP.AAAA.TTTT.HHHH" > trace_method_name
+      # echo "opcode" > trace_state
+   e. Filter out the method/opcode start/stop "AML tracer" when the
+      specified control method is being evaluated for the first time:
+      # cd /sys/module/acpi/parameters
+      # echo "0x80" > trace_debug_layer
+      # echo "0x10" > trace_debug_level
+      # echo "\PPPP.AAAA.TTTT.HHHH" > trace_method_name
+      # echo "opcode-once" > trace_state
+
+   Note that all above method tracing facility related module parameters can
+   be used as the boot parameters, for example:
+      acpi.trace_debug_layer=0x80 acpi.trace_debug_level=0x10 \
+      acpi.trace_method_name=\_SB.LID0._LID acpi.trace_state=opcode-once
+
+2. Interface descriptions:
+
+   All method tracing functions can be configured via ACPI module
+   parameters that are accessible at /sys/module/acpi/parameters/:
+
+   trace_method_name
+	The full path of the AML method that the user wants to trace.
+	Note that the full path shouldn't contain the trailing "_"s in its
+	name segments but may contain "\" to form an absolute path.
+
+   trace_debug_layer
+	The temporary debug_layer used when the tracing feature is enabled.
+	Using ACPI_EXECUTER (0x80) by default, which is the debug_layer
+	used to match all "AML tracer" logs.
+
+   trace_debug_level
+	The temporary debug_level used when the tracing feature is enabled.
+	Using ACPI_LV_TRACE_POINT (0x10) by default, which is the
+	debug_level used to match all "AML tracer" logs.
+
+   trace_state
 	The status of the tracing feature.
-
-	"enabled" means this feature is enabled
-	and the AML method is traced every time it's executed.
-
-	"1" means this feature is enabled and the AML method
-	will only be traced during the next execution.
-
-	"disabled" means this feature is disabled.
-	Users can enable/disable this debug tracing feature by
-	"echo string > /sys/module/acpi/parameters/trace_state".
-	"string" should be one of "enable", "disable" and "1".
+	Users can enable/disable this debug tracing feature by executing
+	the following command:
+	# echo string > /sys/module/acpi/parameters/trace_state
+	Where "string" should be one of the following:
+	"disable"
+	    Disable the method tracing feature.
+	"enable"
+	    Enable the method tracing feature.
+	    ACPICA debugging messages matching
+	    "trace_debug_layer/trace_debug_level" during any method
+	    execution will be logged.
+	"method"
+	    Enable the method tracing feature.
+	    ACPICA debugging messages matching
+	    "trace_debug_layer/trace_debug_level" during method execution
+	    of "trace_method_name" will be logged.
+	"method-once"
+	    Enable the method tracing feature.
+	    ACPICA debugging messages matching
+	    "trace_debug_layer/trace_debug_level" during method execution
+	    of "trace_method_name" will be logged only once.
+	"opcode"
+	    Enable the method tracing feature.
+	    ACPICA debugging messages matching
+	    "trace_debug_layer/trace_debug_level" during method/opcode
+	    execution of "trace_method_name" will be logged.
+	"opcode-once"
+	    Enable the method tracing feature.
+	    ACPICA debugging messages matching
+	    "trace_debug_layer/trace_debug_level" during method/opcode
+	    execution of "trace_method_name" will be logged only once.
+	Note that the difference between the "enable" and other feature
+	enabling options are:
+	1. When "enable" is specified, since
+	   "trace_debug_layer/trace_debug_level" shall apply to all control
+	   method evaluations, after configuring "trace_state" to "enable",
+	   "trace_method_name" will be reset to NULL.
+	2. When "method/opcode" is specified, if
+	   "trace_method_name" is NULL when "trace_state" is configured to
+	   these options, the "trace_debug_layer/trace_debug_level" will
+	   apply to all control method evaluations.
+2 -5
Documentation/cpu-freq/core.txt
···
 ----------------------------
 
 These are notified when a new policy is intended to be set. Each
-CPUFreq policy notifier is called three times for a policy transition:
+CPUFreq policy notifier is called twice for a policy transition:
 
 1.) During CPUFREQ_ADJUST all CPUFreq notifiers may change the limit if
     they see a need for this - may it be thermal considerations or
     hardware limitations.
 
-2.) During CPUFREQ_INCOMPATIBLE only changes may be done in order to avoid
-    hardware failure.
-
-3.) And during CPUFREQ_NOTIFY all notifiers are informed of the new policy
+2.) And during CPUFREQ_NOTIFY all notifiers are informed of the new policy
     - if two hardware drivers failed to agree on a new policy before this
     stage, the incompatible hardware shall be shut down, and the user
     informed of this.
+83
Documentation/devicetree/bindings/clock/mt8173-cpu-dvfs.txt
···
+Device Tree Clock bindings for CPU DVFS of Mediatek MT8173 SoC
+
+Required properties:
+- clocks: A list of phandle + clock-specifier pairs for the clocks listed in clock names.
+- clock-names: Should contain the following:
+	"cpu"          - The multiplexer for clock input of CPU cluster.
+	"intermediate" - A parent of "cpu" clock which is used as "intermediate" clock
+			 source (usually MAINPLL) when the original CPU PLL is under
+			 transition and not stable yet.
+	Please refer to Documentation/devicetree/bindings/clk/clock-bindings.txt for
+	generic clock consumer properties.
+- proc-supply: Regulator for Vproc of CPU cluster.
+
+Optional properties:
+- sram-supply: Regulator for Vsram of CPU cluster. When present, the cpufreq driver
+	       needs to do "voltage tracking" to step by step scale up/down Vproc and
+	       Vsram to fit SoC specific needs. When absent, the voltage scaling
+	       flow is handled by hardware, hence no software "voltage tracking" is
+	       needed.
+
+Example:
+--------
+	cpu0: cpu@0 {
+		device_type = "cpu";
+		compatible = "arm,cortex-a53";
+		reg = <0x000>;
+		enable-method = "psci";
+		cpu-idle-states = <&CPU_SLEEP_0>;
+		clocks = <&infracfg CLK_INFRA_CA53SEL>,
+			 <&apmixedsys CLK_APMIXED_MAINPLL>;
+		clock-names = "cpu", "intermediate";
+	};
+
+	cpu1: cpu@1 {
+		device_type = "cpu";
+		compatible = "arm,cortex-a53";
+		reg = <0x001>;
+		enable-method = "psci";
+		cpu-idle-states = <&CPU_SLEEP_0>;
+		clocks = <&infracfg CLK_INFRA_CA53SEL>,
+			 <&apmixedsys CLK_APMIXED_MAINPLL>;
+		clock-names = "cpu", "intermediate";
+	};
+
+	cpu2: cpu@100 {
+		device_type = "cpu";
+		compatible = "arm,cortex-a57";
+		reg = <0x100>;
+		enable-method = "psci";
+		cpu-idle-states = <&CPU_SLEEP_0>;
+		clocks = <&infracfg CLK_INFRA_CA57SEL>,
+			 <&apmixedsys CLK_APMIXED_MAINPLL>;
+		clock-names = "cpu", "intermediate";
+	};
+
+	cpu3: cpu@101 {
+		device_type = "cpu";
+		compatible = "arm,cortex-a57";
+		reg = <0x101>;
+		enable-method = "psci";
+		cpu-idle-states = <&CPU_SLEEP_0>;
+		clocks = <&infracfg CLK_INFRA_CA57SEL>,
+			 <&apmixedsys CLK_APMIXED_MAINPLL>;
+		clock-names = "cpu", "intermediate";
+	};
+
+	&cpu0 {
+		proc-supply = <&mt6397_vpca15_reg>;
+	};
+
+	&cpu1 {
+		proc-supply = <&mt6397_vpca15_reg>;
+	};
+
+	&cpu2 {
+		proc-supply = <&da9211_vcpu_reg>;
+		sram-supply = <&mt6397_vsramca7_reg>;
+	};
+
+	&cpu3 {
+		proc-supply = <&da9211_vcpu_reg>;
+		sram-supply = <&mt6397_vsramca7_reg>;
+	};
+40 -3
Documentation/devicetree/bindings/devfreq/event/exynos-ppmu.txt
···
 determining the current state of each IP.
 
 Required properties:
-- compatible: Should be "samsung,exynos-ppmu".
+- compatible: Should be "samsung,exynos-ppmu" or "samsung,exynos-ppmu-v2".
 - reg: physical base address of each PPMU and length of memory mapped region.
 
 Optional properties:
 - clock-names : the name of clock used by the PPMU, "ppmu"
 - clocks : phandles for clock specified in "clock-names" property
-- #clock-cells: should be 1.
 
-Example1 : PPMU nodes in exynos3250.dtsi are listed below.
+Example1 : PPMUv1 nodes in exynos3250.dtsi are listed below.
 
 	ppmu_dmc0: ppmu_dmc0@106a0000 {
 		compatible = "samsung,exynos-ppmu";
···
 		};
 	};
 };
+
+Example3 : PPMUv2 nodes in exynos5433.dtsi are listed below.
+
+	ppmu_d0_cpu: ppmu_d0_cpu@10480000 {
+		compatible = "samsung,exynos-ppmu-v2";
+		reg = <0x10480000 0x2000>;
+		status = "disabled";
+	};
+
+	ppmu_d0_general: ppmu_d0_general@10490000 {
+		compatible = "samsung,exynos-ppmu-v2";
+		reg = <0x10490000 0x2000>;
+		status = "disabled";
+	};
+
+	ppmu_d0_rt: ppmu_d0_rt@104a0000 {
+		compatible = "samsung,exynos-ppmu-v2";
+		reg = <0x104a0000 0x2000>;
+		status = "disabled";
+	};
+
+	ppmu_d1_cpu: ppmu_d1_cpu@104b0000 {
+		compatible = "samsung,exynos-ppmu-v2";
+		reg = <0x104b0000 0x2000>;
+		status = "disabled";
+	};
+
+	ppmu_d1_general: ppmu_d1_general@104c0000 {
+		compatible = "samsung,exynos-ppmu-v2";
+		reg = <0x104c0000 0x2000>;
+		status = "disabled";
+	};
+
+	ppmu_d1_rt: ppmu_d1_rt@104d0000 {
+		compatible = "samsung,exynos-ppmu-v2";
+		reg = <0x104d0000 0x2000>;
+		status = "disabled";
+	};
+20 -20
Documentation/devicetree/bindings/power/opp.txt → Documentation/devicetree/bindings/opp/opp.txt
···
 properties.
 
 Required properties:
-- opp-hz: Frequency in Hz
+- opp-hz: Frequency in Hz, expressed as a 64-bit big-endian integer.
 
 Optional properties:
 - opp-microvolt: voltage in micro Volts.
···
 		opp-shared;
 
 		opp00 {
-			opp-hz = <1000000000>;
+			opp-hz = /bits/ 64 <1000000000>;
 			opp-microvolt = <970000 975000 985000>;
 			opp-microamp = <70000>;
 			clock-latency-ns = <300000>;
 			opp-suspend;
 		};
 		opp01 {
-			opp-hz = <1100000000>;
+			opp-hz = /bits/ 64 <1100000000>;
 			opp-microvolt = <980000 1000000 1010000>;
 			opp-microamp = <80000>;
 			clock-latency-ns = <310000>;
 		};
 		opp02 {
-			opp-hz = <1200000000>;
+			opp-hz = /bits/ 64 <1200000000>;
 			opp-microvolt = <1025000>;
 			clock-latency-ns = <290000>;
 			turbo-mode;
···
 		 */
 
 		opp00 {
-			opp-hz = <1000000000>;
+			opp-hz = /bits/ 64 <1000000000>;
 			opp-microvolt = <970000 975000 985000>;
 			opp-microamp = <70000>;
 			clock-latency-ns = <300000>;
 			opp-suspend;
 		};
 		opp01 {
-			opp-hz = <1100000000>;
+			opp-hz = /bits/ 64 <1100000000>;
 			opp-microvolt = <980000 1000000 1010000>;
 			opp-microamp = <80000>;
 			clock-latency-ns = <310000>;
 		};
 		opp02 {
-			opp-hz = <1200000000>;
+			opp-hz = /bits/ 64 <1200000000>;
 			opp-microvolt = <1025000>;
 			opp-microamp = <90000;
 			lock-latency-ns = <290000>;
···
 		opp-shared;
 
 		opp00 {
-			opp-hz = <1000000000>;
+			opp-hz = /bits/ 64 <1000000000>;
 			opp-microvolt = <970000 975000 985000>;
 			opp-microamp = <70000>;
 			clock-latency-ns = <300000>;
 			opp-suspend;
 		};
 		opp01 {
-			opp-hz = <1100000000>;
+			opp-hz = /bits/ 64 <1100000000>;
 			opp-microvolt = <980000 1000000 1010000>;
 			opp-microamp = <80000>;
 			clock-latency-ns = <310000>;
 		};
 		opp02 {
-			opp-hz = <1200000000>;
+			opp-hz = /bits/ 64 <1200000000>;
 			opp-microvolt = <1025000>;
 			opp-microamp = <90000>;
 			clock-latency-ns = <290000>;
···
 		opp-shared;
 
 		opp10 {
-			opp-hz = <1300000000>;
+			opp-hz = /bits/ 64 <1300000000>;
 			opp-microvolt = <1045000 1050000 1055000>;
 			opp-microamp = <95000>;
 			clock-latency-ns = <400000>;
 			opp-suspend;
 		};
 		opp11 {
-			opp-hz = <1400000000>;
+			opp-hz = /bits/ 64 <1400000000>;
 			opp-microvolt = <1075000>;
 			opp-microamp = <100000>;
 			clock-latency-ns = <400000>;
 		};
 		opp12 {
-			opp-hz = <1500000000>;
+			opp-hz = /bits/ 64 <1500000000>;
 			opp-microvolt = <1010000 1100000 1110000>;
 			opp-microamp = <95000>;
 			clock-latency-ns = <400000>;
···
 		opp-shared;
 
 		opp00 {
-			opp-hz = <1000000000>;
+			opp-hz = /bits/ 64 <1000000000>;
 			opp-microvolt = <970000>, /* Supply 0 */
 					<960000>, /* Supply 1 */
 					<960000>; /* Supply 2 */
···
 		/* OR */
 
 		opp00 {
-			opp-hz = <1000000000>;
+			opp-hz = /bits/ 64 <1000000000>;
 			opp-microvolt = <970000 975000 985000>, /* Supply 0 */
 					<960000 965000 975000>, /* Supply 1 */
 					<960000 965000 975000>; /* Supply 2 */
···
 		/* OR */
 
 		opp00 {
-			opp-hz = <1000000000>;
+			opp-hz = /bits/ 64 <1000000000>;
 			opp-microvolt = <970000 975000 985000>, /* Supply 0 */
 					<960000 965000 975000>, /* Supply 1 */
 					<960000 965000 975000>; /* Supply 2 */
···
 		opp-shared;
 
 		opp00 {
-			opp-hz = <600000000>;
+			opp-hz = /bits/ 64 <600000000>;
 			...
 		};
 
 		opp01 {
-			opp-hz = <800000000>;
+			opp-hz = /bits/ 64 <800000000>;
 			...
 		};
 	};
···
 		opp-shared;
 
 		opp10 {
-			opp-hz = <1000000000>;
+			opp-hz = /bits/ 64 <1000000000>;
 			...
 		};
 
 		opp11 {
-			opp-hz = <1100000000>;
+			opp-hz = /bits/ 64 <1100000000>;
 			...
 		};
 	};
+1 -1
Documentation/devicetree/bindings/power/power_domain.txt
···
 		#power-domain-cells = <1>;
 	};
 
-	child: power-controller@12340000 {
+	child: power-controller@12341000 {
 		compatible = "foo,power-controller";
 		reg = <0x12341000 0x1000>;
 		power-domains = <&parent 0>;
+14
Documentation/devicetree/bindings/power/rockchip-io-domain.txt
···
 - compatible: should be one of:
   - "rockchip,rk3188-io-voltage-domain" for rk3188
   - "rockchip,rk3288-io-voltage-domain" for rk3288
+  - "rockchip,rk3368-io-voltage-domain" for rk3368
+  - "rockchip,rk3368-pmu-io-voltage-domain" for rk3368 pmu-domains
 - rockchip,grf: phandle to the syscon managing the "general register files"
 
···
 - sdcard-supply: The supply connected to SDMMC0_VDD.
 - wifi-supply: The supply connected to APIO3_VDD. Also known as SDIO0.
 
+Possible supplies for rk3368:
+- audio-supply: The supply connected to APIO3_VDD.
+- dvp-supply: The supply connected to DVPIO_VDD.
+- flash0-supply: The supply connected to FLASH0_VDD. Typically for eMMC.
+- gpio30-supply: The supply connected to APIO1_VDD.
+- gpio1830-supply: The supply connected to APIO4_VDD.
+- sdcard-supply: The supply connected to SDMMC0_VDD.
+- wifi-supply: The supply connected to APIO2_VDD. Also known as SDIO0.
+
+Possible supplies for rk3368 pmu-domains:
+- pmu-supply: The supply connected to PMUIO_VDD.
+- vop-supply: The supply connected to LCDC_VDD.
 
 Example:
+7
Documentation/power/devices.txt
···
 	and is entirely responsible for bringing the device back to the
 	functional state as appropriate.
 
+	Note that this direct-complete procedure applies even if the device is
+	disabled for runtime PM; only the runtime-PM status matters. It follows
+	that if a device has system-sleep callbacks but does not support runtime
+	PM, then its prepare callback must never return a positive value. This
+	is because all devices are initially set to runtime-suspended with
+	runtime PM disabled.
+
     2.	The suspend methods should quiesce the device to stop it from performing
 	I/O. They also may save the device registers and put it into the
 	appropriate low-power state, depending on the bus type the device is on,
-4
Documentation/power/runtime_pm.txt
···
   bool pm_runtime_status_suspended(struct device *dev);
     - return true if the device's runtime PM status is 'suspended'
 
-  bool pm_runtime_suspended_if_enabled(struct device *dev);
-    - return true if the device's runtime PM status is 'suspended' and its
-      'power.disable_depth' field is equal to 1
-
   void pm_runtime_allow(struct device *dev);
     - set the power.runtime_auto flag for the device and decrease its usage
       counter (used by the /sys/devices/.../power/control interface to
+12
arch/powerpc/include/asm/opal-api.h
···
 	OPAL_MSG_HMI_EVT,
 	OPAL_MSG_DPO,
 	OPAL_MSG_PRD,
+	OPAL_MSG_OCC,
 	OPAL_MSG_TYPE_MAX,
 };
 
···
 };
 
 struct opal_prd_msg;
+
+#define OCC_RESET                       0
+#define OCC_LOAD                        1
+#define OCC_THROTTLE                    2
+#define OCC_MAX_THROTTLE_STATUS         5
+
+struct opal_occ_msg {
+	__be64 type;
+	__be64 chip;
+	__be64 throttle_status;
+};
 
 /*
  * SG entries
+6
arch/x86/include/asm/msr-index.h
···
 #define MSR_PP1_ENERGY_STATUS		0x00000641
 #define MSR_PP1_POLICY			0x00000642
 
+#define MSR_CONFIG_TDP_NOMINAL		0x00000648
+#define MSR_CONFIG_TDP_LEVEL_1		0x00000649
+#define MSR_CONFIG_TDP_LEVEL_2		0x0000064A
+#define MSR_CONFIG_TDP_CONTROL		0x0000064B
+#define MSR_TURBO_ACTIVATION_RATIO	0x0000064C
+
 #define MSR_PKG_WEIGHTED_CORE_C0_RES	0x00000658
 #define MSR_PKG_ANY_CORE_C0_RES		0x00000659
 #define MSR_PKG_ANY_GFXE_C0_RES		0x0000065A
+1
arch/x86/kernel/acpi/boot.c
···
 	polarity = acpi_sci_flags & ACPI_MADT_POLARITY_MASK;
 
 	mp_override_legacy_irq(bus_irq, polarity, trigger, gsi);
+	acpi_penalize_sci_irq(bus_irq, trigger, polarity);
 
 	/*
 	 * stash over-ride to indicate we've been here
+13 -6
drivers/acpi/Kconfig
···
 	  This driver supports ACPI-controlled docking stations and removable
 	  drive bays such as the IBM Ultrabay and the Dell Module Bay.
 
+config ACPI_CPU_FREQ_PSS
+	bool
+	select THERMAL
+
+config ACPI_PROCESSOR_IDLE
+	bool
+	select CPU_IDLE
+
 config ACPI_PROCESSOR
 	tristate "Processor"
-	select THERMAL
-	select CPU_IDLE
 	depends on X86 || IA64
+	select ACPI_PROCESSOR_IDLE
+	select ACPI_CPU_FREQ_PSS
 	default y
 	help
-	  This driver installs ACPI as the idle handler for Linux and uses
-	  ACPI C2 and C3 processor states to save power on systems that
-	  support it. It is required by several flavors of cpufreq
-	  performance-state drivers.
+	  This driver adds support for the ACPI Processor package. It is required
+	  by several flavors of cpufreq performance-state, thermal, throttling and
+	  idle drivers.
 
 	  To compile this driver as a module, choose M here:
 	  the module will be called processor.
+5 -3
drivers/acpi/Makefile
···
 # Power management related files
 acpi-y				+= wakeup.o
 acpi-$(CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT) += sleep.o
-acpi-y				+= device_pm.o
+acpi-y				+= device_sysfs.o device_pm.o
 acpi-$(CONFIG_ACPI_SLEEP)	+= proc.o
 
···
 obj-$(CONFIG_ACPI_BGRT)		+= bgrt.o
 
 # processor has its own "processor." module_param namespace
-processor-y			:= processor_driver.o processor_throttling.o
-processor-y			+= processor_idle.o processor_thermal.o
+processor-y			:= processor_driver.o
+processor-$(CONFIG_ACPI_PROCESSOR_IDLE) += processor_idle.o
+processor-$(CONFIG_ACPI_CPU_FREQ_PSS)	+= processor_throttling.o \
+	processor_thermal.o
 processor-$(CONFIG_CPU_FREQ)	+= processor_perflib.o
 
 obj-$(CONFIG_ACPI_PROCESSOR_AGGREGATOR) += acpi_pad.o
-4
drivers/acpi/ac.c
··· 16 16 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 17 17 * General Public License for more details. 18 18 * 19 - * You should have received a copy of the GNU General Public License along 20 - * with this program; if not, write to the Free Software Foundation, Inc., 21 - * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. 22 - * 23 19 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 24 20 */ 25 21
-4
drivers/acpi/acpi_ipmi.c
··· 17 17 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 18 18 * General Public License for more details. 19 19 * 20 - * You should have received a copy of the GNU General Public License along 21 - * with this program; if not, write to the Free Software Foundation, Inc., 22 - * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. 23 - * 24 20 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 25 21 */ 26 22
+32 -6
drivers/acpi/acpi_lpss.c
··· 59 59 #define LPSS_CLK_DIVIDER BIT(2) 60 60 #define LPSS_LTR BIT(3) 61 61 #define LPSS_SAVE_CTX BIT(4) 62 + #define LPSS_NO_D3_DELAY BIT(5) 62 63 63 64 struct lpss_private_data; 64 65 ··· 156 155 .flags = LPSS_SAVE_CTX, 157 156 }; 158 157 158 + static const struct lpss_device_desc bsw_pwm_dev_desc = { 159 + .flags = LPSS_SAVE_CTX | LPSS_NO_D3_DELAY, 160 + }; 161 + 159 162 static const struct lpss_device_desc byt_uart_dev_desc = { 160 163 .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX, 164 + .clk_con_id = "baudclk", 165 + .prv_offset = 0x800, 166 + .setup = lpss_uart_setup, 167 + }; 168 + 169 + static const struct lpss_device_desc bsw_uart_dev_desc = { 170 + .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX 171 + | LPSS_NO_D3_DELAY, 161 172 .clk_con_id = "baudclk", 162 173 .prv_offset = 0x800, 163 174 .setup = lpss_uart_setup, ··· 190 177 .setup = byt_i2c_setup, 191 178 }; 192 179 180 + static const struct lpss_device_desc bsw_i2c_dev_desc = { 181 + .flags = LPSS_CLK | LPSS_SAVE_CTX | LPSS_NO_D3_DELAY, 182 + .prv_offset = 0x800, 183 + .setup = byt_i2c_setup, 184 + }; 185 + 193 186 static struct lpss_device_desc bsw_spi_dev_desc = { 194 - .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX, 187 + .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX 188 + | LPSS_NO_D3_DELAY, 195 189 .prv_offset = 0x400, 196 190 .setup = lpss_deassert_reset, 197 191 }; ··· 233 213 { "INT33FC", }, 234 214 235 215 /* Braswell LPSS devices */ 236 - { "80862288", LPSS_ADDR(byt_pwm_dev_desc) }, 237 - { "8086228A", LPSS_ADDR(byt_uart_dev_desc) }, 216 + { "80862288", LPSS_ADDR(bsw_pwm_dev_desc) }, 217 + { "8086228A", LPSS_ADDR(bsw_uart_dev_desc) }, 238 218 { "8086228E", LPSS_ADDR(bsw_spi_dev_desc) }, 239 - { "808622C1", LPSS_ADDR(byt_i2c_dev_desc) }, 219 + { "808622C1", LPSS_ADDR(bsw_i2c_dev_desc) }, 240 220 221 + /* Broadwell LPSS devices */ 241 222 { "INT3430", LPSS_ADDR(lpt_dev_desc) }, 242 223 { "INT3431", 
LPSS_ADDR(lpt_dev_desc) }, 243 224 { "INT3432", LPSS_ADDR(lpt_i2c_dev_desc) }, ··· 578 557 * The following delay is needed or the subsequent write operations may 579 558 * fail. The LPSS devices are actually PCI devices and the PCI spec 580 559 * expects 10ms delay before the device can be accessed after D3 to D0 581 - * transition. 560 + * transition. However some platforms like BSW does not need this delay. 582 561 */ 583 - msleep(10); 562 + unsigned int delay = 10; /* default 10ms delay */ 563 + 564 + if (pdata->dev_desc->flags & LPSS_NO_D3_DELAY) 565 + delay = 0; 566 + 567 + msleep(delay); 584 568 585 569 for (i = 0; i < LPSS_PRV_REG_COUNT; i++) { 586 570 unsigned long offset = i * sizeof(u32);
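Note: the acpi_lpss.c change gates the fixed 10 ms D3-to-D0 resume delay behind a new per-device flag, LPSS_NO_D3_DELAY, set for the Braswell descriptors. A minimal sketch of the pattern (the helper name `lpss_d3_delay_ms` is hypothetical; the flag value mirrors the patch):

```c
#include <assert.h>

/* LPSS_NO_D3_DELAY as defined in the patch (BIT(5)). */
#define LPSS_NO_D3_DELAY (1u << 5)

/* Hypothetical helper: how long to sleep after a D3->D0 transition.
 * The PCI spec expects 10 ms; BSW-class devices opt out via the flag. */
static unsigned int lpss_d3_delay_ms(unsigned int flags)
{
	return (flags & LPSS_NO_D3_DELAY) ? 0 : 10;
}
```

The driver then calls `msleep(delay)` with the computed value, exactly once, before restoring the private-context registers.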
-5
drivers/acpi/acpi_memhotplug.c
··· 16 16 * NON INFRINGEMENT. See the GNU General Public License for more 17 17 * details. 18 18 * 19 - * You should have received a copy of the GNU General Public License 20 - * along with this program; if not, write to the Free Software 21 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 22 - * 23 - * 24 19 * ACPI based HotPlug driver that supports Memory Hotplug 25 20 * This driver fields notifications from firmware for memory add 26 21 * and remove operations and alerts the VM of the affected memory
-4
drivers/acpi/acpi_pad.c
··· 12 12 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 13 * more details. 14 14 * 15 - * You should have received a copy of the GNU General Public License along with 16 - * this program; if not, write to the Free Software Foundation, Inc., 17 - * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 18 - * 19 15 */ 20 16 21 17 #include <linux/kernel.h>
+1 -1
drivers/acpi/acpi_processor.c
··· 485 485 { } 486 486 }; 487 487 488 - static struct acpi_scan_handler __refdata processor_handler = { 488 + static struct acpi_scan_handler processor_handler = { 489 489 .ids = processor_device_ids, 490 490 .attach = acpi_processor_add, 491 491 #ifdef CONFIG_ACPI_HOTPLUG_CPU
-4
drivers/acpi/acpi_video.c
··· 17 17 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 18 18 * General Public License for more details. 19 19 * 20 - * You should have received a copy of the GNU General Public License along 21 - * with this program; if not, write to the Free Software Foundation, Inc., 22 - * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. 23 - * 24 20 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 25 21 */ 26 22
+2
drivers/acpi/acpica/Makefile
··· 11 11 acpi-y := \ 12 12 dsargs.o \ 13 13 dscontrol.o \ 14 + dsdebug.o \ 14 15 dsfield.o \ 15 16 dsinit.o \ 16 17 dsmethod.o \ ··· 165 164 utmath.o \ 166 165 utmisc.o \ 167 166 utmutex.o \ 167 + utnonansi.o \ 168 168 utobject.o \ 169 169 utosi.o \ 170 170 utownerid.o \
+19 -7
drivers/acpi/acpica/acdebug.h
··· 67 67 }; 68 68 69 69 #define PARAM_LIST(pl) pl 70 - #define DBTEST_OUTPUT_LEVEL(lvl) if (acpi_gbl_db_opt_verbose) 71 - #define VERBOSE_PRINT(fp) DBTEST_OUTPUT_LEVEL(lvl) {\ 72 - acpi_os_printf PARAM_LIST(fp);} 73 70 74 71 #define EX_NO_SINGLE_STEP 1 75 72 #define EX_SINGLE_STEP 2 ··· 74 77 /* 75 78 * dbxface - external debugger interfaces 76 79 */ 77 - acpi_status acpi_db_initialize(void); 78 - 79 - void acpi_db_terminate(void); 80 - 81 80 acpi_status 82 81 acpi_db_single_step(struct acpi_walk_state *walk_state, 83 82 union acpi_parse_object *op, u32 op_type); ··· 94 101 void acpi_db_display_interfaces(char *action_arg, char *interface_name_arg); 95 102 96 103 acpi_status acpi_db_sleep(char *object_arg); 104 + 105 + void acpi_db_trace(char *enable_arg, char *method_arg, char *once_arg); 97 106 98 107 void acpi_db_display_locks(void); 99 108 ··· 255 260 256 261 char *acpi_db_get_next_token(char *string, 257 262 char **next, acpi_object_type * return_type); 263 + 264 + /* 265 + * dbobject 266 + */ 267 + void acpi_db_decode_internal_object(union acpi_operand_object *obj_desc); 268 + 269 + void 270 + acpi_db_display_internal_object(union acpi_operand_object *obj_desc, 271 + struct acpi_walk_state *walk_state); 272 + 273 + void acpi_db_decode_arguments(struct acpi_walk_state *walk_state); 274 + 275 + void acpi_db_decode_locals(struct acpi_walk_state *walk_state); 276 + 277 + void 278 + acpi_db_dump_method_info(acpi_status status, 279 + struct acpi_walk_state *walk_state); 258 280 259 281 /* 260 282 * dbstats - Generation and display of ACPI table statistics
+8
drivers/acpi/acpica/acdispat.h
··· 354 354 acpi_ds_result_push(union acpi_operand_object *object, 355 355 struct acpi_walk_state *walk_state); 356 356 357 + /* 358 + * dsdebug - parser debugging routines 359 + */ 360 + void 361 + acpi_ds_dump_method_stack(acpi_status status, 362 + struct acpi_walk_state *walk_state, 363 + union acpi_parse_object *op); 364 + 357 365 #endif /* _ACDISPAT_H_ */
+12 -8
drivers/acpi/acpica/acglobal.h
··· 58 58 59 59 ACPI_GLOBAL(struct acpi_table_header *, acpi_gbl_DSDT); 60 60 ACPI_GLOBAL(struct acpi_table_header, acpi_gbl_original_dsdt_header); 61 + ACPI_INIT_GLOBAL(u32, acpi_gbl_dsdt_index, ACPI_INVALID_TABLE_INDEX); 62 + ACPI_INIT_GLOBAL(u32, acpi_gbl_facs_index, ACPI_INVALID_TABLE_INDEX); 63 + ACPI_INIT_GLOBAL(u32, acpi_gbl_xfacs_index, ACPI_INVALID_TABLE_INDEX); 61 64 62 65 #if (!ACPI_REDUCED_HARDWARE) 63 66 ACPI_GLOBAL(struct acpi_table_facs *, acpi_gbl_FACS); 64 - ACPI_GLOBAL(struct acpi_table_facs *, acpi_gbl_facs32); 65 - ACPI_GLOBAL(struct acpi_table_facs *, acpi_gbl_facs64); 66 67 67 68 #endif /* !ACPI_REDUCED_HARDWARE */ 68 69 ··· 236 235 237 236 ACPI_GLOBAL(struct acpi_thread_state *, acpi_gbl_current_walk_list); 238 237 238 + /* Maximum number of While() loop iterations before forced abort */ 239 + 240 + ACPI_GLOBAL(u16, acpi_gbl_max_loop_iterations); 241 + 239 242 /* Control method single step flag */ 240 243 241 244 ACPI_GLOBAL(u8, acpi_gbl_cm_single_step); ··· 295 290 296 291 ACPI_GLOBAL(u32, acpi_gbl_original_dbg_level); 297 292 ACPI_GLOBAL(u32, acpi_gbl_original_dbg_layer); 298 - ACPI_GLOBAL(u32, acpi_gbl_trace_dbg_level); 299 - ACPI_GLOBAL(u32, acpi_gbl_trace_dbg_layer); 300 293 301 294 /***************************************************************************** 302 295 * ··· 312 309 ACPI_INIT_GLOBAL(u8, acpi_gbl_ignore_noop_operator, FALSE); 313 310 ACPI_INIT_GLOBAL(u8, acpi_gbl_cstyle_disassembly, TRUE); 314 311 ACPI_INIT_GLOBAL(u8, acpi_gbl_force_aml_disassembly, FALSE); 312 + ACPI_INIT_GLOBAL(u8, acpi_gbl_dm_opt_verbose, TRUE); 315 313 316 - ACPI_GLOBAL(u8, acpi_gbl_db_opt_disasm); 317 - ACPI_GLOBAL(u8, acpi_gbl_db_opt_verbose); 314 + ACPI_GLOBAL(u8, acpi_gbl_dm_opt_disasm); 315 + ACPI_GLOBAL(u8, acpi_gbl_dm_opt_listing); 318 316 ACPI_GLOBAL(u8, acpi_gbl_num_external_methods); 319 317 ACPI_GLOBAL(u32, acpi_gbl_resolved_external_methods); 320 318 ACPI_GLOBAL(struct acpi_external_list *, acpi_gbl_external_list); ··· 350 346 /* 351 347 * 
Statistic globals 352 348 */ 353 - ACPI_GLOBAL(u16, acpi_gbl_obj_type_count[ACPI_TYPE_NS_NODE_MAX + 1]); 354 - ACPI_GLOBAL(u16, acpi_gbl_node_type_count[ACPI_TYPE_NS_NODE_MAX + 1]); 349 + ACPI_GLOBAL(u16, acpi_gbl_obj_type_count[ACPI_TOTAL_TYPES]); 350 + ACPI_GLOBAL(u16, acpi_gbl_node_type_count[ACPI_TOTAL_TYPES]); 355 351 ACPI_GLOBAL(u16, acpi_gbl_obj_type_count_misc); 356 352 ACPI_GLOBAL(u16, acpi_gbl_node_type_count_misc); 357 353 ACPI_GLOBAL(u32, acpi_gbl_num_nodes);
+22
drivers/acpi/acpica/acinterp.h
··· 131 131 acpi_ex_do_debug_object(union acpi_operand_object *source_desc, 132 132 u32 level, u32 index); 133 133 134 + void 135 + acpi_ex_start_trace_method(struct acpi_namespace_node *method_node, 136 + union acpi_operand_object *obj_desc, 137 + struct acpi_walk_state *walk_state); 138 + 139 + void 140 + acpi_ex_stop_trace_method(struct acpi_namespace_node *method_node, 141 + union acpi_operand_object *obj_desc, 142 + struct acpi_walk_state *walk_state); 143 + 144 + void 145 + acpi_ex_start_trace_opcode(union acpi_parse_object *op, 146 + struct acpi_walk_state *walk_state); 147 + 148 + void 149 + acpi_ex_stop_trace_opcode(union acpi_parse_object *op, 150 + struct acpi_walk_state *walk_state); 151 + 152 + void 153 + acpi_ex_trace_point(acpi_trace_event_type type, 154 + u8 begin, u8 *aml, char *pathname); 155 + 134 156 /* 135 157 * exfield - ACPI AML (p-code) execution - field manipulation 136 158 */
+22 -6
drivers/acpi/acpica/aclocal.h
··· 174 174 */ 175 175 #ifdef ACPI_LARGE_NAMESPACE_NODE 176 176 union acpi_parse_object *op; 177 + void *method_locals; 178 + void *method_args; 177 179 u32 value; 178 180 u32 length; 181 + u8 arg_count; 182 + 179 183 #endif 180 184 }; 181 185 ··· 213 209 #define ACPI_ROOT_ORIGIN_ALLOCATED (1) 214 210 #define ACPI_ROOT_ALLOW_RESIZE (2) 215 211 216 - /* Predefined (fixed) table indexes */ 212 + /* Predefined table indexes */ 217 213 218 - #define ACPI_TABLE_INDEX_DSDT (0) 219 - #define ACPI_TABLE_INDEX_FACS (1) 220 - #define ACPI_TABLE_INDEX_X_FACS (2) 214 + #define ACPI_INVALID_TABLE_INDEX (0xFFFFFFFF) 221 215 222 216 struct acpi_find_context { 223 217 char *search_for; ··· 405 403 #define ACPI_RTYPE_ALL 0x3F 406 404 407 405 #define ACPI_NUM_RTYPES 5 /* Number of actual object types */ 406 + 407 + /* Info for running the _REG methods */ 408 + 409 + struct acpi_reg_walk_info { 410 + acpi_adr_space_type space_id; 411 + u32 reg_run_count; 412 + }; 408 413 409 414 /***************************************************************************** 410 415 * ··· 724 715 union acpi_parse_object *arg; /* arguments and contained ops */ 725 716 }; 726 717 727 - #ifdef ACPI_DISASSEMBLER 718 + #if defined(ACPI_DISASSEMBLER) || defined(ACPI_DEBUG_OUTPUT) 728 719 #define ACPI_DISASM_ONLY_MEMBERS(a) a; 729 720 #else 730 721 #define ACPI_DISASM_ONLY_MEMBERS(a) ··· 735 726 u8 descriptor_type; /* To differentiate various internal objs */\ 736 727 u8 flags; /* Type of Op */\ 737 728 u16 aml_opcode; /* AML opcode */\ 738 - u32 aml_offset; /* Offset of declaration in AML */\ 729 + u8 *aml; /* Address of declaration in AML */\ 739 730 union acpi_parse_object *next; /* Next op */\ 740 731 struct acpi_namespace_node *node; /* For use by interpreter */\ 741 732 union acpi_parse_value value; /* Value or args associated with the opcode */\ ··· 1112 1103 * Index of current thread inside all them created. 
1113 1104 */ 1114 1105 char init_args; 1106 + #ifdef ACPI_DEBUGGER 1107 + acpi_object_type arg_types[4]; 1108 + #endif 1115 1109 char *arguments[4]; 1116 1110 char num_threads_str[11]; 1117 1111 char id_of_thread_str[11]; ··· 1130 1118 #define ACPI_DB_REDIRECTABLE_OUTPUT 0x01 1131 1119 #define ACPI_DB_CONSOLE_OUTPUT 0x02 1132 1120 #define ACPI_DB_DUPLICATE_OUTPUT 0x03 1121 + 1122 + struct acpi_object_info { 1123 + u32 types[ACPI_TOTAL_TYPES]; 1124 + }; 1133 1125 1134 1126 /***************************************************************************** 1135 1127 *
+9
drivers/acpi/acpica/acmacros.h
··· 220 220 #define ACPI_MUL_32(a) _ACPI_MUL(a, 5) 221 221 #define ACPI_MOD_32(a) _ACPI_MOD(a, 32) 222 222 223 + /* Test for ASCII character */ 224 + 225 + #define ACPI_IS_ASCII(c) ((c) < 0x80) 226 + 227 + /* Signed integers */ 228 + 229 + #define ACPI_SIGN_POSITIVE 0 230 + #define ACPI_SIGN_NEGATIVE 1 231 + 223 232 /* 224 233 * Rounding macros (Power of two boundaries only) 225 234 */
+8 -5
drivers/acpi/acpica/acnamesp.h
··· 272 272 */ 273 273 u32 acpi_ns_opens_scope(acpi_object_type type); 274 274 275 - acpi_status 276 - acpi_ns_build_external_path(struct acpi_namespace_node *node, 277 - acpi_size size, char *name_buffer); 278 - 279 275 char *acpi_ns_get_external_pathname(struct acpi_namespace_node *node); 276 + 277 + u32 278 + acpi_ns_build_normalized_path(struct acpi_namespace_node *node, 279 + char *full_path, u32 path_size, u8 no_trailing); 280 + 281 + char *acpi_ns_get_normalized_pathname(struct acpi_namespace_node *node, 282 + u8 no_trailing); 280 283 281 284 char *acpi_ns_name_of_current_scope(struct acpi_walk_state *walk_state); 282 285 283 286 acpi_status 284 287 acpi_ns_handle_to_pathname(acpi_handle target_handle, 285 - struct acpi_buffer *buffer); 288 + struct acpi_buffer *buffer, u8 no_trailing); 286 289 287 290 u8 288 291 acpi_ns_pattern_match(struct acpi_namespace_node *obj_node, char *search_for);
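Note: the new `no_trailing` parameter threaded through the pathname helpers above appears to control whether the '_' padding of fixed-width ACPI name segments is kept. ACPI names are always 4 characters, padded with underscores (e.g. "_SB_"). A sketch of the trimming idea, assuming that interpretation; `trim_acpi_seg` is a hypothetical helper, not the ACPICA implementation:

```c
#include <assert.h>
#include <string.h>

/* Drop trailing '_' padding from one 4-character ACPI name segment,
 * keeping at least one character. */
static void trim_acpi_seg(char seg[5])
{
	size_t len = strlen(seg);

	while (len > 1 && seg[len - 1] == '_')
		seg[--len] = '\0';
}
```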
+1
drivers/acpi/acpica/acobject.h
··· 176 176 u8 param_count; 177 177 u8 sync_level; 178 178 union acpi_operand_object *mutex; 179 + union acpi_operand_object *node; 179 180 u8 *aml_start; 180 181 union { 181 182 acpi_internal_method implementation;
+2 -2
drivers/acpi/acpica/acparser.h
··· 225 225 /* 226 226 * psutils - parser utilities 227 227 */ 228 - union acpi_parse_object *acpi_ps_create_scope_op(void); 228 + union acpi_parse_object *acpi_ps_create_scope_op(u8 *aml); 229 229 230 230 void acpi_ps_init_op(union acpi_parse_object *op, u16 opcode); 231 231 232 - union acpi_parse_object *acpi_ps_alloc_op(u16 opcode); 232 + union acpi_parse_object *acpi_ps_alloc_op(u16 opcode, u8 *aml); 233 233 234 234 void acpi_ps_free_op(union acpi_parse_object *op); 235 235
+1 -1
drivers/acpi/acpica/acstruct.h
··· 85 85 u8 namespace_override; /* Override existing objects */ 86 86 u8 result_size; /* Total elements for the result stack */ 87 87 u8 result_count; /* Current number of occupied elements of result stack */ 88 - u32 aml_offset; 88 + u8 *aml; 89 89 u32 arg_types; 90 90 u32 method_breakpoint; /* For single stepping */ 91 91 u32 user_breakpoint; /* User AML breakpoint */
+10 -4
drivers/acpi/acpica/actables.h
··· 154 154 struct acpi_table_header *acpi_tb_copy_dsdt(u32 table_index); 155 155 156 156 void 157 - acpi_tb_install_table_with_override(u32 table_index, 158 - struct acpi_table_desc *new_table_desc, 159 - u8 override); 157 + acpi_tb_install_table_with_override(struct acpi_table_desc *new_table_desc, 158 + u8 override, u32 *table_index); 160 159 161 160 acpi_status 162 161 acpi_tb_install_fixed_table(acpi_physical_address address, 163 - char *signature, u32 table_index); 162 + char *signature, u32 *table_index); 164 163 165 164 acpi_status acpi_tb_parse_root_table(acpi_physical_address rsdp_address); 165 + 166 + u8 acpi_is_valid_signature(char *signature); 167 + 168 + /* 169 + * tbxfload 170 + */ 171 + acpi_status acpi_tb_load_namespace(void); 166 172 167 173 #endif /* __ACTABLES_H__ */
+12 -13
drivers/acpi/acpica/acutils.h
··· 167 167 #define DB_QWORD_DISPLAY 8 168 168 169 169 /* 170 + * utnonansi - Non-ANSI C library functions 171 + */ 172 + void acpi_ut_strupr(char *src_string); 173 + 174 + void acpi_ut_strlwr(char *src_string); 175 + 176 + int acpi_ut_stricmp(char *string1, char *string2); 177 + 178 + acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer); 179 + 180 + /* 170 181 * utglobal - Global data structures and procedures 171 182 */ 172 183 acpi_status acpi_ut_init_globals(void); ··· 215 204 acpi_status acpi_ut_hardware_initialize(void); 216 205 217 206 void acpi_ut_subsystem_shutdown(void); 218 - 219 - #define ACPI_IS_ASCII(c) ((c) < 0x80) 220 207 221 208 /* 222 209 * utcopy - Object construction and conversion interfaces ··· 517 508 518 509 u8 acpi_ut_is_pci_root_bridge(char *id); 519 510 520 - #if (defined ACPI_ASL_COMPILER || defined ACPI_EXEC_APP) 511 + #if (defined ACPI_ASL_COMPILER || defined ACPI_EXEC_APP || defined ACPI_NAMES_APP) 521 512 u8 acpi_ut_is_aml_table(struct acpi_table_header *table); 522 513 #endif 523 514 ··· 576 567 /* 577 568 * utstring - String and character utilities 578 569 */ 579 - void acpi_ut_strupr(char *src_string); 580 - 581 - #ifdef ACPI_ASL_COMPILER 582 - void acpi_ut_strlwr(char *src_string); 583 - 584 - int acpi_ut_stricmp(char *string1, char *string2); 585 - #endif 586 - 587 - acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer); 588 - 589 570 void acpi_ut_print_string(char *string, u16 max_length); 590 571 591 572 #if defined ACPI_ASL_COMPILER || defined ACPI_EXEC_APP
+2 -2
drivers/acpi/acpica/dsargs.c
··· 86 86 87 87 /* Allocate a new parser op to be the root of the parsed tree */ 88 88 89 - op = acpi_ps_alloc_op(AML_INT_EVAL_SUBTREE_OP); 89 + op = acpi_ps_alloc_op(AML_INT_EVAL_SUBTREE_OP, aml_start); 90 90 if (!op) { 91 91 return_ACPI_STATUS(AE_NO_MEMORY); 92 92 } ··· 129 129 130 130 /* Evaluate the deferred arguments */ 131 131 132 - op = acpi_ps_alloc_op(AML_INT_EVAL_SUBTREE_OP); 132 + op = acpi_ps_alloc_op(AML_INT_EVAL_SUBTREE_OP, aml_start); 133 133 if (!op) { 134 134 return_ACPI_STATUS(AE_NO_MEMORY); 135 135 }
+1 -1
drivers/acpi/acpica/dscontrol.c
··· 212 212 */ 213 213 control_state->control.loop_count++; 214 214 if (control_state->control.loop_count > 215 - ACPI_MAX_LOOP_ITERATIONS) { 215 + acpi_gbl_max_loop_iterations) { 216 216 status = AE_AML_INFINITE_LOOP; 217 217 break; 218 218 }
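Note: the dscontrol.c hunk replaces the compile-time ACPI_MAX_LOOP_ITERATIONS constant with the runtime-tunable global acpi_gbl_max_loop_iterations introduced in acglobal.h, so the While() runaway-loop threshold can be adjusted without rebuilding. A minimal sketch of the guard, with illustrative names (`loop_guard`, `max_iterations`) rather than ACPICA's:

```c
#include <assert.h>

typedef enum { LOOP_OK = 0, LOOP_INFINITE = 1 } loop_status;

/* Runtime-tunable limit, analogous to acpi_gbl_max_loop_iterations. */
static unsigned int max_iterations = 0xFFFF;

/* Called once per While() iteration; LOOP_INFINITE corresponds to
 * aborting with AE_AML_INFINITE_LOOP. */
static loop_status loop_guard(unsigned int *count)
{
	if (++*count > max_iterations)
		return LOOP_INFINITE;
	return LOOP_OK;
}
```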
+231
drivers/acpi/acpica/dsdebug.c
··· 1 + /****************************************************************************** 2 + * 3 + * Module Name: dsdebug - Parser/Interpreter interface - debugging 4 + * 5 + *****************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + #include "acdispat.h" 47 + #include "acnamesp.h" 48 + #ifdef ACPI_DISASSEMBLER 49 + #include "acdisasm.h" 50 + #endif 51 + #include "acinterp.h" 52 + 53 + #define _COMPONENT ACPI_DISPATCHER 54 + ACPI_MODULE_NAME("dsdebug") 55 + 56 + #if defined(ACPI_DEBUG_OUTPUT) || defined(ACPI_DEBUGGER) 57 + /* Local prototypes */ 58 + static void 59 + acpi_ds_print_node_pathname(struct acpi_namespace_node *node, 60 + const char *message); 61 + 62 + /******************************************************************************* 63 + * 64 + * FUNCTION: acpi_ds_print_node_pathname 65 + * 66 + * PARAMETERS: node - Object 67 + * message - Prefix message 68 + * 69 + * DESCRIPTION: Print an object's full namespace pathname 70 + * Manages allocation/freeing of a pathname buffer 71 + * 72 + ******************************************************************************/ 73 + 74 + static void 75 + acpi_ds_print_node_pathname(struct acpi_namespace_node *node, 76 + const char *message) 77 + { 78 + struct acpi_buffer buffer; 79 + acpi_status status; 80 + 81 + ACPI_FUNCTION_TRACE(ds_print_node_pathname); 82 + 83 + if (!node) { 84 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_DISPATCH, "[NULL NAME]")); 85 + return_VOID; 86 + } 87 + 88 + /* Convert handle to full pathname and print it (with supplied message) */ 89 + 90 + buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER; 91 + 92 + status = acpi_ns_handle_to_pathname(node, 
&buffer, TRUE); 93 + if (ACPI_SUCCESS(status)) { 94 + if (message) { 95 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_DISPATCH, "%s ", 96 + message)); 97 + } 98 + 99 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_DISPATCH, "[%s] (Node %p)", 100 + (char *)buffer.pointer, node)); 101 + ACPI_FREE(buffer.pointer); 102 + } 103 + 104 + return_VOID; 105 + } 106 + 107 + /******************************************************************************* 108 + * 109 + * FUNCTION: acpi_ds_dump_method_stack 110 + * 111 + * PARAMETERS: status - Method execution status 112 + * walk_state - Current state of the parse tree walk 113 + * op - Executing parse op 114 + * 115 + * RETURN: None 116 + * 117 + * DESCRIPTION: Called when a method has been aborted because of an error. 118 + * Dumps the method execution stack. 119 + * 120 + ******************************************************************************/ 121 + 122 + void 123 + acpi_ds_dump_method_stack(acpi_status status, 124 + struct acpi_walk_state *walk_state, 125 + union acpi_parse_object *op) 126 + { 127 + union acpi_parse_object *next; 128 + struct acpi_thread_state *thread; 129 + struct acpi_walk_state *next_walk_state; 130 + struct acpi_namespace_node *previous_method = NULL; 131 + union acpi_operand_object *method_desc; 132 + 133 + ACPI_FUNCTION_TRACE(ds_dump_method_stack); 134 + 135 + /* Ignore control codes, they are not errors */ 136 + 137 + if ((status & AE_CODE_MASK) == AE_CODE_CONTROL) { 138 + return_VOID; 139 + } 140 + 141 + /* We may be executing a deferred opcode */ 142 + 143 + if (walk_state->deferred_node) { 144 + ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH, 145 + "Executing subtree for Buffer/Package/Region\n")); 146 + return_VOID; 147 + } 148 + 149 + /* 150 + * If there is no Thread, we are not actually executing a method. 151 + * This can happen when the iASL compiler calls the interpreter 152 + * to perform constant folding. 
153 + */ 154 + thread = walk_state->thread; 155 + if (!thread) { 156 + return_VOID; 157 + } 158 + 159 + /* Display exception and method name */ 160 + 161 + ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH, 162 + "\n**** Exception %s during execution of method ", 163 + acpi_format_exception(status))); 164 + acpi_ds_print_node_pathname(walk_state->method_node, NULL); 165 + 166 + /* Display stack of executing methods */ 167 + 168 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_DISPATCH, 169 + "\n\nMethod Execution Stack:\n")); 170 + next_walk_state = thread->walk_state_list; 171 + 172 + /* Walk list of linked walk states */ 173 + 174 + while (next_walk_state) { 175 + method_desc = next_walk_state->method_desc; 176 + if (method_desc) { 177 + acpi_ex_stop_trace_method((struct acpi_namespace_node *) 178 + method_desc->method.node, 179 + method_desc, walk_state); 180 + } 181 + 182 + ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH, 183 + " Method [%4.4s] executing: ", 184 + acpi_ut_get_node_name(next_walk_state-> 185 + method_node))); 186 + 187 + /* First method is the currently executing method */ 188 + 189 + if (next_walk_state == walk_state) { 190 + if (op) { 191 + 192 + /* Display currently executing ASL statement */ 193 + 194 + next = op->common.next; 195 + op->common.next = NULL; 196 + 197 + #ifdef ACPI_DISASSEMBLER 198 + acpi_dm_disassemble(next_walk_state, op, 199 + ACPI_UINT32_MAX); 200 + #endif 201 + op->common.next = next; 202 + } 203 + } else { 204 + /* 205 + * This method has called another method 206 + * NOTE: the method call parse subtree is already deleted at this 207 + * point, so we cannot disassemble the method invocation. 
208 + */ 209 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_DISPATCH, 210 + "Call to method ")); 211 + acpi_ds_print_node_pathname(previous_method, NULL); 212 + } 213 + 214 + previous_method = next_walk_state->method_node; 215 + next_walk_state = next_walk_state->next; 216 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_DISPATCH, "\n")); 217 + } 218 + 219 + return_VOID; 220 + } 221 + 222 + #else 223 + void 224 + acpi_ds_dump_method_stack(acpi_status status, 225 + struct acpi_walk_state *walk_state, 226 + union acpi_parse_object *op) 227 + { 228 + return; 229 + } 230 + 231 + #endif
+15 -5
drivers/acpi/acpica/dsinit.c
··· 237 237 return_ACPI_STATUS(status); 238 238 } 239 239 240 + /* DSDT is always the first AML table */ 241 + 242 + if (ACPI_COMPARE_NAME(table->signature, ACPI_SIG_DSDT)) { 243 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_INIT, 244 + "\nInitializing Namespace objects:\n")); 245 + } 246 + 247 + /* Summary of objects initialized */ 248 + 240 249 ACPI_DEBUG_PRINT_RAW((ACPI_DB_INIT, 241 - "Table [%4.4s] (id %4.4X) - %4u Objects with %3u Devices, " 242 - "%3u Regions, %3u Methods (%u/%u/%u Serial/Non/Cvt)\n", 243 - table->signature, owner_id, info.object_count, 244 - info.device_count, info.op_region_count, 245 - info.method_count, info.serial_method_count, 250 + "Table [%4.4s:%8.8s] (id %.2X) - %4u Objects with %3u Devices, " 251 + "%3u Regions, %4u Methods (%u/%u/%u Serial/Non/Cvt)\n", 252 + table->signature, table->oem_table_id, owner_id, 253 + info.object_count, info.device_count, 254 + info.op_region_count, info.method_count, 255 + info.serial_method_count, 246 256 info.non_serial_method_count, 247 257 info.serialized_method_count)); 248 258
+21 -14
drivers/acpi/acpica/dsmethod.c
··· 46 46 #include "acdispat.h" 47 47 #include "acinterp.h" 48 48 #include "acnamesp.h" 49 - #ifdef ACPI_DISASSEMBLER 50 - #include "acdisasm.h" 51 - #endif 52 49 #include "acparser.h" 53 50 #include "amlcode.h" 51 + #include "acdebug.h" 54 52 55 53 #define _COMPONENT ACPI_DISPATCHER 56 54 ACPI_MODULE_NAME("dsmethod") ··· 101 103 102 104 /* Create/Init a root op for the method parse tree */ 103 105 104 - op = acpi_ps_alloc_op(AML_METHOD_OP); 106 + op = acpi_ps_alloc_op(AML_METHOD_OP, obj_desc->method.aml_start); 105 107 if (!op) { 106 108 return_ACPI_STATUS(AE_NO_MEMORY); 107 109 } ··· 203 205 * RETURN: Status 204 206 * 205 207 * DESCRIPTION: Called on method error. Invoke the global exception handler if 206 - * present, dump the method data if the disassembler is configured 208 + * present, dump the method data if the debugger is configured 207 209 * 208 210 * Note: Allows the exception handler to change the status code 209 211 * ··· 212 214 acpi_status 213 215 acpi_ds_method_error(acpi_status status, struct acpi_walk_state * walk_state) 214 216 { 217 + u32 aml_offset; 218 + 215 219 ACPI_FUNCTION_ENTRY(); 216 220 217 221 /* Ignore AE_OK and control exception codes */ ··· 234 234 * Handler can map the exception code to anything it wants, including 235 235 * AE_OK, in which case the executing method will not be aborted. 236 236 */ 237 + aml_offset = (u32)ACPI_PTR_DIFF(walk_state->aml, 238 + walk_state->parser_state. 239 + aml_start); 240 + 237 241 status = acpi_gbl_exception_handler(status, 238 242 walk_state->method_node ? 
239 243 walk_state->method_node-> 240 244 name.integer : 0, 241 245 walk_state->opcode, 242 - walk_state->aml_offset, 243 - NULL); 246 + aml_offset, NULL); 244 247 acpi_ex_enter_interpreter(); 245 248 } 246 249 247 250 acpi_ds_clear_implicit_return(walk_state); 248 251 249 - #ifdef ACPI_DISASSEMBLER 250 252 if (ACPI_FAILURE(status)) { 253 + acpi_ds_dump_method_stack(status, walk_state, walk_state->op); 251 254 252 - /* Display method locals/args if disassembler is present */ 255 + /* Display method locals/args if debugger is present */ 253 256 254 - acpi_dm_dump_method_info(status, walk_state, walk_state->op); 255 - } 257 + #ifdef ACPI_DEBUGGER 258 + acpi_db_dump_method_info(status, walk_state); 256 259 #endif 260 + } 257 261 258 262 return (status); 259 263 } ··· 331 327 if (!method_node) { 332 328 return_ACPI_STATUS(AE_NULL_ENTRY); 333 329 } 330 + 331 + acpi_ex_start_trace_method(method_node, obj_desc, walk_state); 334 332 335 333 /* Prevent wraparound of thread count */ 336 334 ··· 580 574 /* On error, we must terminate the method properly */ 581 575 582 576 acpi_ds_terminate_control_method(obj_desc, next_walk_state); 583 - if (next_walk_state) { 584 - acpi_ds_delete_walk_state(next_walk_state); 585 - } 577 + acpi_ds_delete_walk_state(next_walk_state); 586 578 587 579 return_ACPI_STATUS(status); 588 580 } ··· 829 825 acpi_ut_release_owner_id(&method_desc->method.owner_id); 830 826 } 831 827 } 828 + 829 + acpi_ex_stop_trace_method((struct acpi_namespace_node *)method_desc-> 830 + method.node, method_desc, walk_state); 832 831 833 832 return_VOID; 834 833 }
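Note: a recurring theme in this series (acstruct.h, acparser.h, dsmethod.c) is dropping the stored `aml_offset` in favor of carrying the raw AML byte pointer and computing the offset only where needed, as `acpi_ds_method_error()` now does with ACPI_PTR_DIFF before invoking the exception handler. The conversion is just a pointer difference:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the ACPI_PTR_DIFF usage in the patch: derive the offset
 * handed to the exception handler from the current AML pointer and
 * the start of the AML stream. */
static uint32_t aml_offset_of(const uint8_t *aml, const uint8_t *aml_start)
{
	return (uint32_t)(aml - aml_start);
}
```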
+20 -11
drivers/acpi/acpica/dsopcode.c
···
 	union acpi_operand_object **operand;
 	struct acpi_namespace_node *node;
 	union acpi_parse_object *next_op;
-	u32 table_index;
 	struct acpi_table_header *table;
+	u32 table_index;
 
 	ACPI_FUNCTION_TRACE_PTR(ds_eval_table_region_operands, op);
···
 		return_ACPI_STATUS(status);
 	}
 
+	operand = &walk_state->operands[0];
+
 	/*
 	 * Resolve the Signature string, oem_id string,
 	 * and oem_table_id string operands
···
 	status = acpi_ex_resolve_operands(op->common.aml_opcode,
 					  ACPI_WALK_OPERANDS, walk_state);
 	if (ACPI_FAILURE(status)) {
-		return_ACPI_STATUS(status);
+		goto cleanup;
 	}
-
-	operand = &walk_state->operands[0];
 
 	/* Find the ACPI table */
···
 				  operand[1]->string.pointer,
 				  operand[2]->string.pointer, &table_index);
 	if (ACPI_FAILURE(status)) {
-		return_ACPI_STATUS(status);
+		if (status == AE_NOT_FOUND) {
+			ACPI_ERROR((AE_INFO,
+				    "ACPI Table [%4.4s] OEM:(%s, %s) not found in RSDT/XSDT",
+				    operand[0]->string.pointer,
+				    operand[1]->string.pointer,
+				    operand[2]->string.pointer));
+		}
+		goto cleanup;
 	}
-
-	acpi_ut_remove_reference(operand[0]);
-	acpi_ut_remove_reference(operand[1]);
-	acpi_ut_remove_reference(operand[2]);
 
 	status = acpi_get_table_by_index(table_index, &table);
 	if (ACPI_FAILURE(status)) {
-		return_ACPI_STATUS(status);
+		goto cleanup;
 	}
 
 	obj_desc = acpi_ns_get_attached_object(node);
 	if (!obj_desc) {
-		return_ACPI_STATUS(AE_NOT_EXIST);
+		status = AE_NOT_EXIST;
+		goto cleanup;
 	}
 
 	obj_desc->region.address = ACPI_PTR_TO_PHYSADDR(table);
···
 	/* Now the address and length are valid for this opregion */
 
 	obj_desc->region.flags |= AOPOBJ_DATA_VALID;
+
+cleanup:
+
+	acpi_ut_remove_reference(operand[0]);
+	acpi_ut_remove_reference(operand[1]);
+	acpi_ut_remove_reference(operand[2]);
 
 	return_ACPI_STATUS(status);
 }
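The dsopcode.c change above replaces several early returns with a single `cleanup:` label, so the three operand references acquired up front are released on every exit path instead of only the success path. A minimal sketch of the idiom, using hypothetical `ref_get`/`ref_put` helpers rather than the ACPICA reference-counting API:

```c
#include <assert.h>
#include <stdlib.h>

static int live_refs; /* counts outstanding references */

static int *ref_get(void) { live_refs++; return malloc(sizeof(int)); }
static void ref_put(int *p) { live_refs--; free(p); }

/* Every exit path funnels through cleanup:, so references
 * acquired up front are released exactly once. */
static int do_work(int fail_early)
{
	int status = 0;
	int *a = ref_get();
	int *b = ref_get();

	if (fail_early) {
		status = -1;
		goto cleanup; /* an early "return -1" here would leak a and b */
	}

	/* ... main body using a and b ... */

cleanup:
	ref_put(a);
	ref_put(b);
	return status;
}
```

The same shape is why the `operand = &walk_state->operands[0]` assignment had to move above the first `goto cleanup`: the cleanup code dereferences it unconditionally.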
+1 -1
drivers/acpi/acpica/dswload.c
···
 
 	/* Create a new op */
 
-	op = acpi_ps_alloc_op(walk_state->opcode);
+	op = acpi_ps_alloc_op(walk_state->opcode, walk_state->aml);
 	if (!op) {
 		return_ACPI_STATUS(AE_NO_MEMORY);
 	}
+1 -1
drivers/acpi/acpica/dswload2.c
···
 
 	/* Create a new op */
 
-	op = acpi_ps_alloc_op(walk_state->opcode);
+	op = acpi_ps_alloc_op(walk_state->opcode, walk_state->aml);
 	if (!op) {
 		return_ACPI_STATUS(AE_NO_MEMORY);
 	}
+18 -4
drivers/acpi/acpica/evregion.c
···
 		       acpi_adr_space_type space_id)
 {
 	acpi_status status;
+	struct acpi_reg_walk_info info;
 
 	ACPI_FUNCTION_TRACE(ev_execute_reg_methods);
+
+	info.space_id = space_id;
+	info.reg_run_count = 0;
+
+	ACPI_DEBUG_PRINT_RAW((ACPI_DB_NAMES,
+			      "    Running _REG methods for SpaceId %s\n",
+			      acpi_ut_get_region_name(info.space_id)));
 
 	/*
 	 * Run all _REG methods for all Operation Regions for this space ID. This
···
 	 */
 	status = acpi_ns_walk_namespace(ACPI_TYPE_ANY, node, ACPI_UINT32_MAX,
 					ACPI_NS_WALK_UNLOCK, acpi_ev_reg_run,
-					NULL, &space_id, NULL);
+					NULL, &info, NULL);
 
 	/* Special case for EC: handle "orphan" _REG methods with no region */
 
 	if (space_id == ACPI_ADR_SPACE_EC) {
 		acpi_ev_orphan_ec_reg_method(node);
 	}
+
+	ACPI_DEBUG_PRINT_RAW((ACPI_DB_NAMES,
+			      "    Executed %u _REG methods for SpaceId %s\n",
+			      info.reg_run_count,
+			      acpi_ut_get_region_name(info.space_id)));
 
 	return_ACPI_STATUS(status);
 }
···
 {
 	union acpi_operand_object *obj_desc;
 	struct acpi_namespace_node *node;
-	acpi_adr_space_type space_id;
 	acpi_status status;
+	struct acpi_reg_walk_info *info;
 
-	space_id = *ACPI_CAST_PTR(acpi_adr_space_type, context);
+	info = ACPI_CAST_PTR(struct acpi_reg_walk_info, context);
 
 	/* Convert and validate the device handle */
···
 
 	/* Object is a Region */
 
-	if (obj_desc->region.space_id != space_id) {
+	if (obj_desc->region.space_id != info->space_id) {
 
 		/* This region is for a different address space, just ignore it */
 
 		return (AE_OK);
 	}
 
+	info->reg_run_count++;
 	status = acpi_ev_execute_reg_method(obj_desc, ACPI_REG_CONNECT);
 	return (status);
 }
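The evregion.c change swaps a bare `space_id` pointer for a small `struct acpi_reg_walk_info` threaded through the walk's opaque `context` argument, so the callback can both filter by address space and report how many _REG methods it actually ran. The pattern, sketched with a hypothetical `walk()` driver standing in for `acpi_ns_walk_namespace`:

```c
#include <assert.h>
#include <stddef.h>

struct reg_walk_info {
	int space_id;           /* filter: only count matching regions */
	unsigned int run_count; /* out: number of callbacks that matched */
};

/* Callback receives per-walk state via the void *context slot. */
static int reg_run(int region_space_id, void *context)
{
	struct reg_walk_info *info = context;

	if (region_space_id != info->space_id)
		return 0; /* different address space: ignore */

	info->run_count++;
	return 0;
}

/* Stand-in for the namespace walker: visits every region once. */
static void walk(const int *spaces, size_t n,
		 int (*cb)(int, void *), void *context)
{
	for (size_t i = 0; i < n; i++)
		cb(spaces[i], context);
}
```

Because the struct travels by pointer, the walker's signature never changes when new per-walk fields (like the counter) are added.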
-8
drivers/acpi/acpica/exconfig.c
···
 
 	ACPI_FUNCTION_TRACE(ex_load_table_op);
 
-	/* Validate lengths for the Signature, oem_id, and oem_table_id strings */
-
-	if ((operand[0]->string.length > ACPI_NAME_SIZE) ||
-	    (operand[1]->string.length > ACPI_OEM_ID_SIZE) ||
-	    (operand[2]->string.length > ACPI_OEM_TABLE_ID_SIZE)) {
-		return_ACPI_STATUS(AE_AML_STRING_LIMIT);
-	}
-
 	/* Find the ACPI table in the RSDT/XSDT */
 
 	status = acpi_tb_find_table(operand[0]->string.pointer,
+1
drivers/acpi/acpica/excreate.c
···
 
 	obj_desc->method.aml_start = aml_start;
 	obj_desc->method.aml_length = aml_length;
+	obj_desc->method.node = operand[0];
 
 	/*
 	 * Disassemble the method flags. Split off the arg_count, Serialized
+324
drivers/acpi/acpica/exdebug.c
···
 
 #include <acpi/acpi.h>
 #include "accommon.h"
+#include "acnamesp.h"
 #include "acinterp.h"
+#include "acparser.h"
 
 #define _COMPONENT          ACPI_EXECUTER
 ACPI_MODULE_NAME("exdebug")
+
+static union acpi_operand_object *acpi_gbl_trace_method_object = NULL;
+
+/* Local prototypes */
+
+#ifdef ACPI_DEBUG_OUTPUT
+static const char *acpi_ex_get_trace_event_name(acpi_trace_event_type type);
+#endif
 
 #ifndef ACPI_NO_ERROR_MESSAGES
 /*******************************************************************************
···
 *              enabled if necessary.
 *
 ******************************************************************************/
+
 void
 acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
 			u32 level, u32 index)
···
 	return_VOID;
 }
 #endif
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_ex_interpreter_trace_enabled
+ *
+ * PARAMETERS:  name                - Whether method name should be matched,
+ *                                    this should be checked before starting
+ *                                    the tracer
+ *
+ * RETURN:      TRUE if interpreter trace is enabled.
+ *
+ * DESCRIPTION: Check whether interpreter trace is enabled
+ *
+ ******************************************************************************/
+
+static u8 acpi_ex_interpreter_trace_enabled(char *name)
+{
+
+	/* Check if tracing is enabled */
+
+	if (!(acpi_gbl_trace_flags & ACPI_TRACE_ENABLED)) {
+		return (FALSE);
+	}
+
+	/*
+	 * Check if tracing is filtered:
+	 *
+	 * 1. If the tracer is started, acpi_gbl_trace_method_object should have
+	 *    been filled by the trace starter
+	 * 2. If the tracer is not started, acpi_gbl_trace_method_name should be
+	 *    matched if it is specified
+	 * 3. If the tracer is oneshot style, acpi_gbl_trace_method_name should
+	 *    not be cleared by the trace stopper during the first match
+	 */
+	if (acpi_gbl_trace_method_object) {
+		return (TRUE);
+	}
+	if (name &&
+	    (acpi_gbl_trace_method_name &&
+	     strcmp(acpi_gbl_trace_method_name, name))) {
+		return (FALSE);
+	}
+	if ((acpi_gbl_trace_flags & ACPI_TRACE_ONESHOT) &&
+	    !acpi_gbl_trace_method_name) {
+		return (FALSE);
+	}
+
+	return (TRUE);
+}
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_ex_get_trace_event_name
+ *
+ * PARAMETERS:  type            - Trace event type
+ *
+ * RETURN:      Trace event name.
+ *
+ * DESCRIPTION: Used to obtain the full trace event name.
+ *
+ ******************************************************************************/
+
+#ifdef ACPI_DEBUG_OUTPUT
+
+static const char *acpi_ex_get_trace_event_name(acpi_trace_event_type type)
+{
+	switch (type) {
+	case ACPI_TRACE_AML_METHOD:
+
+		return "Method";
+
+	case ACPI_TRACE_AML_OPCODE:
+
+		return "Opcode";
+
+	case ACPI_TRACE_AML_REGION:
+
+		return "Region";
+
+	default:
+
+		return "";
+	}
+}
+
+#endif
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_ex_trace_point
+ *
+ * PARAMETERS:  type            - Trace event type
+ *              begin           - TRUE if before execution
+ *              aml             - Executed AML address
+ *              pathname        - Object path
+ *
+ * RETURN:      None
+ *
+ * DESCRIPTION: Internal interpreter execution trace.
+ *
+ ******************************************************************************/
+
+void
+acpi_ex_trace_point(acpi_trace_event_type type,
+		    u8 begin, u8 *aml, char *pathname)
+{
+
+	ACPI_FUNCTION_NAME(ex_trace_point);
+
+	if (pathname) {
+		ACPI_DEBUG_PRINT((ACPI_DB_TRACE_POINT,
+				  "%s %s [0x%p:%s] execution.\n",
+				  acpi_ex_get_trace_event_name(type),
+				  begin ? "Begin" : "End", aml, pathname));
+	} else {
+		ACPI_DEBUG_PRINT((ACPI_DB_TRACE_POINT,
+				  "%s %s [0x%p] execution.\n",
+				  acpi_ex_get_trace_event_name(type),
+				  begin ? "Begin" : "End", aml));
+	}
+}
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_ex_start_trace_method
+ *
+ * PARAMETERS:  method_node         - Node of the method
+ *              obj_desc            - The method object
+ *              walk_state          - current state, NULL if not yet executing
+ *                                    a method.
+ *
+ * RETURN:      None
+ *
+ * DESCRIPTION: Start control method execution trace
+ *
+ ******************************************************************************/
+
+void
+acpi_ex_start_trace_method(struct acpi_namespace_node *method_node,
+			   union acpi_operand_object *obj_desc,
+			   struct acpi_walk_state *walk_state)
+{
+	acpi_status status;
+	char *pathname = NULL;
+	u8 enabled = FALSE;
+
+	ACPI_FUNCTION_NAME(ex_start_trace_method);
+
+	if (method_node) {
+		pathname = acpi_ns_get_normalized_pathname(method_node, TRUE);
+	}
+
+	status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
+	if (ACPI_FAILURE(status)) {
+		goto exit;
+	}
+
+	enabled = acpi_ex_interpreter_trace_enabled(pathname);
+	if (enabled && !acpi_gbl_trace_method_object) {
+		acpi_gbl_trace_method_object = obj_desc;
+		acpi_gbl_original_dbg_level = acpi_dbg_level;
+		acpi_gbl_original_dbg_layer = acpi_dbg_layer;
+		acpi_dbg_level = ACPI_TRACE_LEVEL_ALL;
+		acpi_dbg_layer = ACPI_TRACE_LAYER_ALL;
+
+		if (acpi_gbl_trace_dbg_level) {
+			acpi_dbg_level = acpi_gbl_trace_dbg_level;
+		}
+		if (acpi_gbl_trace_dbg_layer) {
+			acpi_dbg_layer = acpi_gbl_trace_dbg_layer;
+		}
+	}
+	(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
+
+exit:
+	if (enabled) {
+		ACPI_TRACE_POINT(ACPI_TRACE_AML_METHOD, TRUE,
+				 obj_desc ? obj_desc->method.aml_start : NULL,
+				 pathname);
+	}
+	if (pathname) {
+		ACPI_FREE(pathname);
+	}
+}
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_ex_stop_trace_method
+ *
+ * PARAMETERS:  method_node         - Node of the method
+ *              obj_desc            - The method object
+ *              walk_state          - current state, NULL if not yet executing
+ *                                    a method.
+ *
+ * RETURN:      None
+ *
+ * DESCRIPTION: Stop control method execution trace
+ *
+ ******************************************************************************/
+
+void
+acpi_ex_stop_trace_method(struct acpi_namespace_node *method_node,
+			  union acpi_operand_object *obj_desc,
+			  struct acpi_walk_state *walk_state)
+{
+	acpi_status status;
+	char *pathname = NULL;
+	u8 enabled;
+
+	ACPI_FUNCTION_NAME(ex_stop_trace_method);
+
+	if (method_node) {
+		pathname = acpi_ns_get_normalized_pathname(method_node, TRUE);
+	}
+
+	status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
+	if (ACPI_FAILURE(status)) {
+		goto exit_path;
+	}
+
+	enabled = acpi_ex_interpreter_trace_enabled(NULL);
+
+	(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
+
+	if (enabled) {
+		ACPI_TRACE_POINT(ACPI_TRACE_AML_METHOD, FALSE,
+				 obj_desc ? obj_desc->method.aml_start : NULL,
+				 pathname);
+	}
+
+	status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
+	if (ACPI_FAILURE(status)) {
+		goto exit_path;
+	}
+
+	/* Check whether the tracer should be stopped */
+
+	if (acpi_gbl_trace_method_object == obj_desc) {
+
+		/* Disable further tracing if type is one-shot */
+
+		if (acpi_gbl_trace_flags & ACPI_TRACE_ONESHOT) {
+			acpi_gbl_trace_method_name = NULL;
+		}
+
+		acpi_dbg_level = acpi_gbl_original_dbg_level;
+		acpi_dbg_layer = acpi_gbl_original_dbg_layer;
+		acpi_gbl_trace_method_object = NULL;
+	}
+
+	(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
+
+exit_path:
+	if (pathname) {
+		ACPI_FREE(pathname);
+	}
+}
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_ex_start_trace_opcode
+ *
+ * PARAMETERS:  op                  - The parser opcode object
+ *              walk_state          - current state, NULL if not yet executing
+ *                                    a method.
+ *
+ * RETURN:      None
+ *
+ * DESCRIPTION: Start opcode execution trace
+ *
+ ******************************************************************************/
+
+void
+acpi_ex_start_trace_opcode(union acpi_parse_object *op,
+			   struct acpi_walk_state *walk_state)
+{
+
+	ACPI_FUNCTION_NAME(ex_start_trace_opcode);
+
+	if (acpi_ex_interpreter_trace_enabled(NULL) &&
+	    (acpi_gbl_trace_flags & ACPI_TRACE_OPCODE)) {
+		ACPI_TRACE_POINT(ACPI_TRACE_AML_OPCODE, TRUE,
+				 op->common.aml, op->common.aml_op_name);
+	}
+}
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_ex_stop_trace_opcode
+ *
+ * PARAMETERS:  op                  - The parser opcode object
+ *              walk_state          - current state, NULL if not yet executing
+ *                                    a method.
+ *
+ * RETURN:      None
+ *
+ * DESCRIPTION: Stop opcode execution trace
+ *
+ ******************************************************************************/
+
+void
+acpi_ex_stop_trace_opcode(union acpi_parse_object *op,
+			  struct acpi_walk_state *walk_state)
+{
+
+	ACPI_FUNCTION_NAME(ex_stop_trace_opcode);
+
+	if (acpi_ex_interpreter_trace_enabled(NULL) &&
+	    (acpi_gbl_trace_flags & ACPI_TRACE_OPCODE)) {
+		ACPI_TRACE_POINT(ACPI_TRACE_AML_OPCODE, FALSE,
+				 op->common.aml, op->common.aml_op_name);
+	}
+}
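The filter in `acpi_ex_interpreter_trace_enabled` above boils down to: always trace while a method object is latched by the starter; otherwise trace only when the optional name filter matches; and refuse to re-arm a oneshot tracer once its name has been cleared by the stopper. A simplified model of that decision (the flags and globals here are stand-ins, not the ACPICA ones):

```c
#include <assert.h>
#include <string.h>

#define TRACE_ENABLED 0x01
#define TRACE_ONESHOT 0x02

static unsigned int trace_flags;
static const char *trace_method_name; /* optional name filter */
static int trace_object_latched;      /* nonzero while the tracer runs */

static int trace_enabled(const char *name)
{
	if (!(trace_flags & TRACE_ENABLED))
		return 0;
	if (trace_object_latched)	/* tracer already started */
		return 1;
	if (name && trace_method_name && strcmp(trace_method_name, name))
		return 0;		/* name filter set and not matched */
	if ((trace_flags & TRACE_ONESHOT) && !trace_method_name)
		return 0;		/* oneshot already consumed */
	return 1;
}
```

Note the ordering mirrors the kernel code: the latched-object check comes first, so a running trace keeps emitting events even for nested methods whose names do not match the filter.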
+2 -3
drivers/acpi/acpica/exdump.c
···
 	if (obj_desc->reference.class == ACPI_REFCLASS_NAME) {
 		acpi_os_printf(" %p ", obj_desc->reference.node);
 
-		status =
-		    acpi_ns_handle_to_pathname(obj_desc->reference.node,
-					       &ret_buf);
+		status = acpi_ns_handle_to_pathname(obj_desc->reference.node,
+						    &ret_buf, TRUE);
 		if (ACPI_FAILURE(status)) {
 			acpi_os_printf(" Could not convert name to pathname\n");
 		} else {
+1 -1
drivers/acpi/acpica/exresnte.c
···
 	if (!source_desc) {
 		ACPI_ERROR((AE_INFO, "No object attached to node [%4.4s] %p",
 			    node->name.ascii, node));
-		return_ACPI_STATUS(AE_AML_NO_OPERAND);
+		return_ACPI_STATUS(AE_AML_UNINITIALIZED_NODE);
 	}
 
 	/*
+11 -5
drivers/acpi/acpica/exresolv.c
···
 			    acpi_object_type * return_type,
 			    union acpi_operand_object **return_desc)
 {
-	union acpi_operand_object *obj_desc = (void *)operand;
-	struct acpi_namespace_node *node;
+	union acpi_operand_object *obj_desc = ACPI_CAST_PTR(void, operand);
+	struct acpi_namespace_node *node =
+	    ACPI_CAST_PTR(struct acpi_namespace_node, operand);
 	acpi_object_type type;
 	acpi_status status;
···
 	case ACPI_DESC_TYPE_NAMED:
 
 		type = ((struct acpi_namespace_node *)obj_desc)->type;
-		obj_desc =
-		    acpi_ns_get_attached_object((struct acpi_namespace_node *)
-						obj_desc);
+		obj_desc = acpi_ns_get_attached_object(node);
 
 		/* If we had an Alias node, use the attached object for type info */
···
 			    acpi_ns_get_attached_object((struct
 							 acpi_namespace_node *)
 							obj_desc);
 		}
+
+		if (!obj_desc) {
+			ACPI_ERROR((AE_INFO,
+				    "[%4.4s] Node is unresolved or uninitialized",
+				    acpi_ut_get_node_name(node)));
+			return_ACPI_STATUS(AE_AML_UNINITIALIZED_NODE);
+		}
 		break;
+2 -13
drivers/acpi/acpica/hwxfsleep.c
···
 
 	ACPI_FUNCTION_TRACE(acpi_set_firmware_waking_vectors);
 
-	/* If Hardware Reduced flag is set, there is no FACS */
-
-	if (acpi_gbl_reduced_hardware) {
-		return_ACPI_STATUS (AE_OK);
-	}
-
-	if (acpi_gbl_facs32) {
-		(void)acpi_hw_set_firmware_waking_vectors(acpi_gbl_facs32,
-							  physical_address,
-							  physical_address64);
-	}
-	if (acpi_gbl_facs64) {
-		(void)acpi_hw_set_firmware_waking_vectors(acpi_gbl_facs64,
+	if (acpi_gbl_FACS) {
+		(void)acpi_hw_set_firmware_waking_vectors(acpi_gbl_FACS,
 							  physical_address,
 							  physical_address64);
 	}
+3 -1
drivers/acpi/acpica/nseval.c
···
 	acpi_ex_exit_interpreter();
 
 	if (ACPI_FAILURE(status)) {
+		info->return_object = NULL;
 		goto cleanup;
 	}
···
 
 	status = acpi_ns_evaluate(info);
 
-	ACPI_DEBUG_PRINT((ACPI_DB_INIT, "Executed module-level code at %p\n",
+	ACPI_DEBUG_PRINT((ACPI_DB_INIT_NAMES,
+			  "Executed module-level code at %p\n",
 			  method_obj->method.aml_start));
 
 	/* Delete a possible implicit return value (in slack mode) */
+15 -1
drivers/acpi/acpica/nsload.c
···
 	if (ACPI_SUCCESS(status)) {
 		acpi_tb_set_table_loaded_flag(table_index, TRUE);
 	} else {
-		(void)acpi_tb_release_owner_id(table_index);
+		/*
+		 * On error, delete any namespace objects created by this table.
+		 * We cannot initialize these objects, so delete them. There are
+		 * a couple of especially bad cases:
+		 * AE_ALREADY_EXISTS - namespace collision.
+		 * AE_NOT_FOUND - the target of a Scope operator does not
+		 * exist. This target of Scope must already exist in the
+		 * namespace, as per the ACPI specification.
+		 */
+		(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
+		acpi_ns_delete_namespace_by_owner(acpi_gbl_root_table_list.
+						  tables[table_index].owner_id);
+		acpi_tb_release_owner_id(table_index);
+
+		return_ACPI_STATUS(status);
 	}
 
 unlock:
+156 -119
drivers/acpi/acpica/nsnames.c
···
 
 /*******************************************************************************
  *
- * FUNCTION:    acpi_ns_build_external_path
- *
- * PARAMETERS:  node            - NS node whose pathname is needed
- *              size            - Size of the pathname
- *              *name_buffer    - Where to return the pathname
- *
- * RETURN:      Status
- *              Places the pathname into the name_buffer, in external format
- *              (name segments separated by path separators)
- *
- * DESCRIPTION: Generate a full pathname
- *
- ******************************************************************************/
-acpi_status
-acpi_ns_build_external_path(struct acpi_namespace_node *node,
-			    acpi_size size, char *name_buffer)
-{
-	acpi_size index;
-	struct acpi_namespace_node *parent_node;
-
-	ACPI_FUNCTION_ENTRY();
-
-	/* Special case for root */
-
-	index = size - 1;
-	if (index < ACPI_NAME_SIZE) {
-		name_buffer[0] = AML_ROOT_PREFIX;
-		name_buffer[1] = 0;
-		return (AE_OK);
-	}
-
-	/* Store terminator byte, then build name backwards */
-
-	parent_node = node;
-	name_buffer[index] = 0;
-
-	while ((index > ACPI_NAME_SIZE) && (parent_node != acpi_gbl_root_node)) {
-		index -= ACPI_NAME_SIZE;
-
-		/* Put the name into the buffer */
-
-		ACPI_MOVE_32_TO_32((name_buffer + index), &parent_node->name);
-		parent_node = parent_node->parent;
-
-		/* Prefix name with the path separator */
-
-		index--;
-		name_buffer[index] = ACPI_PATH_SEPARATOR;
-	}
-
-	/* Overwrite final separator with the root prefix character */
-
-	name_buffer[index] = AML_ROOT_PREFIX;
-
-	if (index != 0) {
-		ACPI_ERROR((AE_INFO,
-			    "Could not construct external pathname; index=%u, size=%u, Path=%s",
-			    (u32) index, (u32) size, &name_buffer[size]));
-
-		return (AE_BAD_PARAMETER);
-	}
-
-	return (AE_OK);
-}
-
-/*******************************************************************************
- *
  * FUNCTION:    acpi_ns_get_external_pathname
  *
  * PARAMETERS:  node            - Namespace node whose pathname is needed
···
  *              for error and debug statements.
  *
  ******************************************************************************/
-
 char *acpi_ns_get_external_pathname(struct acpi_namespace_node *node)
 {
-	acpi_status status;
 	char *name_buffer;
-	acpi_size size;
 
 	ACPI_FUNCTION_TRACE_PTR(ns_get_external_pathname, node);
 
-	/* Calculate required buffer size based on depth below root */
-
-	size = acpi_ns_get_pathname_length(node);
-	if (!size) {
-		return_PTR(NULL);
-	}
-
-	/* Allocate a buffer to be returned to caller */
-
-	name_buffer = ACPI_ALLOCATE_ZEROED(size);
-	if (!name_buffer) {
-		ACPI_ERROR((AE_INFO, "Could not allocate %u bytes", (u32)size));
-		return_PTR(NULL);
-	}
-
-	/* Build the path in the allocated buffer */
-
-	status = acpi_ns_build_external_path(node, size, name_buffer);
-	if (ACPI_FAILURE(status)) {
-		ACPI_FREE(name_buffer);
-		return_PTR(NULL);
-	}
+	name_buffer = acpi_ns_get_normalized_pathname(node, FALSE);
 
 	return_PTR(name_buffer);
 }
···
 acpi_size acpi_ns_get_pathname_length(struct acpi_namespace_node *node)
 {
 	acpi_size size;
-	struct acpi_namespace_node *next_node;
 
 	ACPI_FUNCTION_ENTRY();
 
-	/*
-	 * Compute length of pathname as 5 * number of name segments.
-	 * Go back up the parent tree to the root
-	 */
-	size = 0;
-	next_node = node;
+	size = acpi_ns_build_normalized_path(node, NULL, 0, FALSE);
 
-	while (next_node && (next_node != acpi_gbl_root_node)) {
-		if (ACPI_GET_DESCRIPTOR_TYPE(next_node) != ACPI_DESC_TYPE_NAMED) {
-			ACPI_ERROR((AE_INFO,
-				    "Invalid Namespace Node (%p) while traversing namespace",
-				    next_node));
-			return (0);
-		}
-		size += ACPI_PATH_SEGMENT_LENGTH;
-		next_node = next_node->parent;
-	}
-
-	if (!size) {
-		size = 1;	/* Root node case */
-	}
-
-	return (size + 1);	/* +1 for null string terminator */
+	return (size);
 }
 
 /*******************************************************************************
···
 * PARAMETERS:  target_handle           - Handle of named object whose name is
 *                                        to be found
 *              buffer                  - Where the pathname is returned
+ *              no_trailing             - Remove trailing '_' for each name
+ *                                        segment
 *
 * RETURN:      Status, Buffer is filled with pathname if status is AE_OK
 *
···
 
 acpi_status
 acpi_ns_handle_to_pathname(acpi_handle target_handle,
-			   struct acpi_buffer * buffer)
+			   struct acpi_buffer * buffer, u8 no_trailing)
 {
 	acpi_status status;
 	struct acpi_namespace_node *node;
···
 
 	/* Determine size required for the caller buffer */
 
-	required_size = acpi_ns_get_pathname_length(node);
+	required_size =
+	    acpi_ns_build_normalized_path(node, NULL, 0, no_trailing);
 	if (!required_size) {
 		return_ACPI_STATUS(AE_BAD_PARAMETER);
 	}
···
 
 	/* Build the path in the caller buffer */
 
-	status =
-	    acpi_ns_build_external_path(node, required_size, buffer->pointer);
+	(void)acpi_ns_build_normalized_path(node, buffer->pointer,
+					    required_size, no_trailing);
 	if (ACPI_FAILURE(status)) {
 		return_ACPI_STATUS(status);
 	}
···
 	ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "%s [%X]\n",
 			  (char *)buffer->pointer, (u32) required_size));
 	return_ACPI_STATUS(AE_OK);
 }
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_ns_build_normalized_path
+ *
+ * PARAMETERS:  node        - Namespace node
+ *              full_path   - Where the path name is returned
+ *              path_size   - Size of returned path name buffer
+ *              no_trailing - Remove trailing '_' from each name segment
+ *
+ * RETURN:      Return 1 if the AML path is empty, otherwise returning (length
+ *              of pathname + 1) which means the 'FullPath' contains a trailing
+ *              null.
+ *
+ * DESCRIPTION: Build and return a full namespace pathname.
+ *              Note that if the size of 'FullPath' isn't large enough to
+ *              contain the namespace node's path name, the actual required
+ *              buffer length is returned, and it should be greater than
+ *              'PathSize'. So callers are able to check the returning value
+ *              to determine the buffer size of 'FullPath'.
+ *
+ ******************************************************************************/
+
+u32
+acpi_ns_build_normalized_path(struct acpi_namespace_node *node,
+			      char *full_path, u32 path_size, u8 no_trailing)
+{
+	u32 length = 0, i;
+	char name[ACPI_NAME_SIZE];
+	u8 do_no_trailing;
+	char c, *left, *right;
+	struct acpi_namespace_node *next_node;
+
+	ACPI_FUNCTION_TRACE_PTR(ns_build_normalized_path, node);
+
+#define ACPI_PATH_PUT8(path, size, byte, length)    \
+	do {                                        \
+		if ((length) < (size)) {            \
+			(path)[(length)] = (byte);  \
+		}                                   \
+		(length)++;                         \
+	} while (0)
+
+	/*
+	 * Make sure the path_size is correct, so that we don't need to
+	 * validate both full_path and path_size.
+	 */
+	if (!full_path) {
+		path_size = 0;
+	}
+
+	if (!node) {
+		goto build_trailing_null;
+	}
+
+	next_node = node;
+	while (next_node && next_node != acpi_gbl_root_node) {
+		if (next_node != node) {
+			ACPI_PATH_PUT8(full_path, path_size,
+				       AML_DUAL_NAME_PREFIX, length);
+		}
+		ACPI_MOVE_32_TO_32(name, &next_node->name);
+		do_no_trailing = no_trailing;
+		for (i = 0; i < 4; i++) {
+			c = name[4 - i - 1];
+			if (do_no_trailing && c != '_') {
+				do_no_trailing = FALSE;
+			}
+			if (!do_no_trailing) {
+				ACPI_PATH_PUT8(full_path, path_size, c, length);
+			}
+		}
+		next_node = next_node->parent;
+	}
+	ACPI_PATH_PUT8(full_path, path_size, AML_ROOT_PREFIX, length);
+
+	/* Reverse the path string */
+
+	if (length <= path_size) {
+		left = full_path;
+		right = full_path + length - 1;
+		while (left < right) {
+			c = *left;
+			*left++ = *right;
+			*right-- = c;
+		}
+	}
+
+	/* Append the trailing null */
+
+build_trailing_null:
+	ACPI_PATH_PUT8(full_path, path_size, '\0', length);
+
+#undef ACPI_PATH_PUT8
+
+	return_UINT32(length);
+}
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_ns_get_normalized_pathname
+ *
+ * PARAMETERS:  node        - Namespace node whose pathname is needed
+ *              no_trailing - Remove trailing '_' from each name segment
+ *
+ * RETURN:      Pointer to storage containing the fully qualified name of
+ *              the node, in external format (name segments separated by path
+ *              separators.)
+ *
+ * DESCRIPTION: Used to obtain the full pathname to a namespace node, usually
+ *              for error and debug statements. All trailing '_' will be
+ *              removed from the full pathname if 'NoTrailing' is specified.
+ *
+ ******************************************************************************/
+
+char *acpi_ns_get_normalized_pathname(struct acpi_namespace_node *node,
+				      u8 no_trailing)
+{
+	char *name_buffer;
+	acpi_size size;
+
+	ACPI_FUNCTION_TRACE_PTR(ns_get_normalized_pathname, node);
+
+	/* Calculate required buffer size based on depth below root */
+
+	size = acpi_ns_build_normalized_path(node, NULL, 0, no_trailing);
+	if (!size) {
+		return_PTR(NULL);
+	}
+
+	/* Allocate a buffer to be returned to caller */
+
+	name_buffer = ACPI_ALLOCATE_ZEROED(size);
+	if (!name_buffer) {
+		ACPI_ERROR((AE_INFO, "Could not allocate %u bytes", (u32)size));
+		return_PTR(NULL);
+	}
+
+	/* Build the path in the allocated buffer */
+
+	(void)acpi_ns_build_normalized_path(node, name_buffer, size,
+					    no_trailing);
+
+	return_PTR(name_buffer);
+}
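`acpi_ns_build_normalized_path` is deliberately called twice: a first call with a NULL buffer only counts bytes and returns the required length, then the caller allocates and calls again to fill the buffer. Each 4-character name segment is scanned from its end so trailing `'_'` padding can be dropped when `no_trailing` is set. A condensed sketch of that sizing-plus-trimming behavior, using a hypothetical `build_path` over a root-first segment array (so no reverse pass is needed, unlike the kernel code which walks child-to-root and reverses at the end):

```c
#include <assert.h>
#include <string.h>

/* Join fixed-width 4-char name segments into "\SB.PCI0" form,
 * optionally dropping the '_' padding at the end of each segment.
 * Returns bytes needed including the trailing NUL; writes only if
 * buf is non-NULL and large enough - the two-pass sizing idiom. */
static size_t build_path(const char segs[][5], size_t nsegs,
			 int no_trailing, char *buf, size_t size)
{
	char tmp[256];
	size_t len = 0;

	tmp[len++] = '\\';	/* root prefix */
	for (size_t i = 0; i < nsegs; i++) {
		size_t seglen = 4;

		if (no_trailing)	/* trim '_' padding from the end */
			while (seglen > 0 && segs[i][seglen - 1] == '_')
				seglen--;
		if (i > 0)
			tmp[len++] = '.';
		memcpy(tmp + len, segs[i], seglen);
		len += seglen;
	}
	tmp[len++] = '\0';

	if (buf && size >= len)
		memcpy(buf, tmp, len);
	return len;
}
```

Returning the required length even when the buffer is too small (instead of an error) is what lets the first, buffer-less call double as the sizing pass.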
+20 -22
drivers/acpi/acpica/nsparse.c
···
 {
 	union acpi_parse_object *parse_root;
 	acpi_status status;
 	u32 aml_length;
 	u8 *aml_start;
 	struct acpi_walk_state *walk_state;
 	struct acpi_table_header *table;
 	acpi_owner_id owner_id;
 
 	ACPI_FUNCTION_TRACE(ns_one_complete_parse);
 
+	status = acpi_get_table_by_index(table_index, &table);
+	if (ACPI_FAILURE(status)) {
+		return_ACPI_STATUS(status);
+	}
+
+	/* Table must consist of at least a complete header */
+
+	if (table->length < sizeof(struct acpi_table_header)) {
+		return_ACPI_STATUS(AE_BAD_HEADER);
+	}
+
+	aml_start = (u8 *)table + sizeof(struct acpi_table_header);
+	aml_length = table->length - sizeof(struct acpi_table_header);
+
 	status = acpi_tb_get_owner_id(table_index, &owner_id);
 	if (ACPI_FAILURE(status)) {
···
 
 	/* Create and init a Root Node */
 
-	parse_root = acpi_ps_create_scope_op();
+	parse_root = acpi_ps_create_scope_op(aml_start);
 	if (!parse_root) {
 		return_ACPI_STATUS(AE_NO_MEMORY);
 	}
···
 		return_ACPI_STATUS(AE_NO_MEMORY);
 	}
 
-	status = acpi_get_table_by_index(table_index, &table);
+	status = acpi_ds_init_aml_walk(walk_state, parse_root, NULL,
+				       aml_start, aml_length, NULL,
+				       (u8)pass_number);
 	if (ACPI_FAILURE(status)) {
 		acpi_ds_delete_walk_state(walk_state);
-		acpi_ps_free_op(parse_root);
-		return_ACPI_STATUS(status);
-	}
-
-	/* Table must consist of at least a complete header */
-
-	if (table->length < sizeof(struct acpi_table_header)) {
-		status = AE_BAD_HEADER;
-	} else {
-		aml_start = (u8 *) table + sizeof(struct acpi_table_header);
-		aml_length = table->length - sizeof(struct acpi_table_header);
-		status = acpi_ds_init_aml_walk(walk_state, parse_root, NULL,
-					       aml_start, aml_length, NULL,
-					       (u8) pass_number);
+		goto cleanup;
 	}
 
 	/* Found OSDT table, enable the namespace override feature */
···
 	if (ACPI_COMPARE_NAME(table->signature, ACPI_SIG_OSDT) &&
 	    pass_number == ACPI_IMODE_LOAD_PASS1) {
 		walk_state->namespace_override = TRUE;
-	}
-
-	if (ACPI_FAILURE(status)) {
-		acpi_ds_delete_walk_state(walk_state);
-		goto cleanup;
 	}
 
 	/* start_node is the default location to load the table */
+18 -1
drivers/acpi/acpica/nsutils.c
···
 
 	buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER;
 
-	status = acpi_ns_handle_to_pathname(node, &buffer);
+	status = acpi_ns_handle_to_pathname(node, &buffer, TRUE);
 	if (ACPI_SUCCESS(status)) {
 		if (message) {
 			acpi_os_printf("%s ", message);
···
 	acpi_status status;
 
 	ACPI_FUNCTION_TRACE(ns_terminate);
 
+#ifdef ACPI_EXEC_APP
+	{
+		union acpi_operand_object *prev;
+		union acpi_operand_object *next;
+
+		/* Delete any module-level code blocks */
+
+		next = acpi_gbl_module_code_list;
+		while (next) {
+			prev = next;
+			next = next->method.mutex;
+			prev->method.mutex = NULL; /* Clear the Mutex (cheated) field */
+			acpi_ut_remove_reference(prev);
+		}
+	}
+#endif
 
 	/*
 	 * Free the entire namespace -- all nodes and all objects
+6 -2
drivers/acpi/acpica/nsxfname.c
···
 		return (status);
 	}
 
-	if (name_type == ACPI_FULL_PATHNAME) {
+	if (name_type == ACPI_FULL_PATHNAME ||
+	    name_type == ACPI_FULL_PATHNAME_NO_TRAILING) {
 
 		/* Get the full pathname (From the namespace root) */
 
-		status = acpi_ns_handle_to_pathname(handle, buffer);
+		status = acpi_ns_handle_to_pathname(handle, buffer,
+						    name_type ==
+						    ACPI_FULL_PATHNAME ? FALSE :
+						    TRUE);
 		return (status);
 	}
+14 -12
drivers/acpi/acpica/psargs.c
··· 287 287 "Control Method - %p Desc %p Path=%p\n", node, 288 288 method_desc, path)); 289 289 290 - name_op = acpi_ps_alloc_op(AML_INT_NAMEPATH_OP); 290 + name_op = acpi_ps_alloc_op(AML_INT_NAMEPATH_OP, start); 291 291 if (!name_op) { 292 292 return_ACPI_STATUS(AE_NO_MEMORY); 293 293 } ··· 484 484 static union acpi_parse_object *acpi_ps_get_next_field(struct acpi_parse_state 485 485 *parser_state) 486 486 { 487 - u32 aml_offset; 487 + u8 *aml; 488 488 union acpi_parse_object *field; 489 489 union acpi_parse_object *arg = NULL; 490 490 u16 opcode; ··· 498 498 499 499 ACPI_FUNCTION_TRACE(ps_get_next_field); 500 500 501 - aml_offset = 502 - (u32)ACPI_PTR_DIFF(parser_state->aml, parser_state->aml_start); 501 + aml = parser_state->aml; 503 502 504 503 /* Determine field type */ 505 504 ··· 535 536 536 537 /* Allocate a new field op */ 537 538 538 - field = acpi_ps_alloc_op(opcode); 539 + field = acpi_ps_alloc_op(opcode, aml); 539 540 if (!field) { 540 541 return_PTR(NULL); 541 542 } 542 - 543 - field->common.aml_offset = aml_offset; 544 543 545 544 /* Decode the field type */ 546 545 ··· 601 604 * Argument for Connection operator can be either a Buffer 602 605 * (resource descriptor), or a name_string. 
603 606 */ 607 + aml = parser_state->aml; 604 608 if (ACPI_GET8(parser_state->aml) == AML_BUFFER_OP) { 605 609 parser_state->aml++; 606 610 ··· 614 616 615 617 /* Non-empty list */ 616 618 617 - arg = acpi_ps_alloc_op(AML_INT_BYTELIST_OP); 619 + arg = 620 + acpi_ps_alloc_op(AML_INT_BYTELIST_OP, aml); 618 621 if (!arg) { 619 622 acpi_ps_free_op(field); 620 623 return_PTR(NULL); ··· 664 665 665 666 parser_state->aml = pkg_end; 666 667 } else { 667 - arg = acpi_ps_alloc_op(AML_INT_NAMEPATH_OP); 668 + arg = acpi_ps_alloc_op(AML_INT_NAMEPATH_OP, aml); 668 669 if (!arg) { 669 670 acpi_ps_free_op(field); 670 671 return_PTR(NULL); ··· 729 730 730 731 /* Constants, strings, and namestrings are all the same size */ 731 732 732 - arg = acpi_ps_alloc_op(AML_BYTE_OP); 733 + arg = acpi_ps_alloc_op(AML_BYTE_OP, parser_state->aml); 733 734 if (!arg) { 734 735 return_ACPI_STATUS(AE_NO_MEMORY); 735 736 } ··· 776 777 777 778 /* Non-empty list */ 778 779 779 - arg = acpi_ps_alloc_op(AML_INT_BYTELIST_OP); 780 + arg = acpi_ps_alloc_op(AML_INT_BYTELIST_OP, 781 + parser_state->aml); 780 782 if (!arg) { 781 783 return_ACPI_STATUS(AE_NO_MEMORY); 782 784 } ··· 807 807 808 808 /* null_name or name_string */ 809 809 810 - arg = acpi_ps_alloc_op(AML_INT_NAMEPATH_OP); 810 + arg = 811 + acpi_ps_alloc_op(AML_INT_NAMEPATH_OP, 812 + parser_state->aml); 811 813 if (!arg) { 812 814 return_ACPI_STATUS(AE_NO_MEMORY); 813 815 }
+16 -16
drivers/acpi/acpica/psloop.c
··· 51 51 52 52 #include <acpi/acpi.h> 53 53 #include "accommon.h" 54 + #include "acinterp.h" 54 55 #include "acparser.h" 55 56 #include "acdispat.h" 56 57 #include "amlcode.h" ··· 126 125 */ 127 126 while (GET_CURRENT_ARG_TYPE(walk_state->arg_types) 128 127 && !walk_state->arg_count) { 129 - walk_state->aml_offset = 130 - (u32) ACPI_PTR_DIFF(walk_state->parser_state.aml, 131 - walk_state->parser_state. 132 - aml_start); 128 + walk_state->aml = walk_state->parser_state.aml; 133 129 134 130 status = 135 131 acpi_ps_get_next_arg(walk_state, ··· 138 140 } 139 141 140 142 if (arg) { 141 - arg->common.aml_offset = walk_state->aml_offset; 142 143 acpi_ps_append_arg(op, arg); 143 144 } 144 145 ··· 321 324 union acpi_operand_object *method_obj; 322 325 struct acpi_namespace_node *parent_node; 323 326 327 + ACPI_FUNCTION_TRACE(ps_link_module_code); 328 + 324 329 /* Get the tail of the list */ 325 330 326 331 prev = next = acpi_gbl_module_code_list; ··· 342 343 343 344 method_obj = acpi_ut_create_internal_object(ACPI_TYPE_METHOD); 344 345 if (!method_obj) { 345 - return; 346 + return_VOID; 346 347 } 348 + 349 + ACPI_DEBUG_PRINT((ACPI_DB_PARSE, 350 + "Create/Link new code block: %p\n", 351 + method_obj)); 347 352 348 353 if (parent_op->common.node) { 349 354 parent_node = parent_op->common.node; ··· 373 370 prev->method.mutex = method_obj; 374 371 } 375 372 } else { 373 + ACPI_DEBUG_PRINT((ACPI_DB_PARSE, 374 + "Appending to existing code block: %p\n", 375 + prev)); 376 + 376 377 prev->method.aml_length += aml_length; 377 378 } 379 + 380 + return_VOID; 378 381 } 379 382 380 383 /******************************************************************************* ··· 503 494 continue; 504 495 } 505 496 506 - op->common.aml_offset = walk_state->aml_offset; 507 - 508 - if (walk_state->op_info) { 509 - ACPI_DEBUG_PRINT((ACPI_DB_PARSE, 510 - "Opcode %4.4X [%s] Op %p Aml %p AmlOffset %5.5X\n", 511 - (u32) op->common.aml_opcode, 512 - walk_state->op_info->name, op, 513 - 
parser_state->aml, 514 - op->common.aml_offset)); 515 - } 497 + acpi_ex_start_trace_opcode(op, walk_state); 516 498 } 517 499 518 500 /*
+10 -7
drivers/acpi/acpica/psobject.c
··· 66 66 67 67 static acpi_status acpi_ps_get_aml_opcode(struct acpi_walk_state *walk_state) 68 68 { 69 + u32 aml_offset; 69 70 70 71 ACPI_FUNCTION_TRACE_PTR(ps_get_aml_opcode, walk_state); 71 72 72 - walk_state->aml_offset = 73 - (u32)ACPI_PTR_DIFF(walk_state->parser_state.aml, 74 - walk_state->parser_state.aml_start); 73 + walk_state->aml = walk_state->parser_state.aml; 75 74 walk_state->opcode = acpi_ps_peek_opcode(&(walk_state->parser_state)); 76 75 77 76 /* ··· 97 98 /* The opcode is unrecognized. Complain and skip unknown opcodes */ 98 99 99 100 if (walk_state->pass_number == 2) { 101 + aml_offset = (u32)ACPI_PTR_DIFF(walk_state->aml, 102 + walk_state-> 103 + parser_state.aml_start); 104 + 100 105 ACPI_ERROR((AE_INFO, 101 106 "Unknown opcode 0x%.2X at table offset 0x%.4X, ignoring", 102 107 walk_state->opcode, 103 - (u32)(walk_state->aml_offset + 108 + (u32)(aml_offset + 104 109 sizeof(struct acpi_table_header)))); 105 110 106 111 ACPI_DUMP_BUFFER((walk_state->parser_state.aml - 16), ··· 118 115 acpi_os_printf 119 116 ("/*\nError: Unknown opcode 0x%.2X at table offset 0x%.4X, context:\n", 120 117 walk_state->opcode, 121 - (u32)(walk_state->aml_offset + 118 + (u32)(aml_offset + 122 119 sizeof(struct acpi_table_header))); 123 120 124 121 /* Dump the context surrounding the invalid opcode */ 125 122 126 123 acpi_ut_dump_buffer(((u8 *)walk_state->parser_state. 127 124 aml - 16), 48, DB_BYTE_DISPLAY, 128 - (walk_state->aml_offset + 125 + (aml_offset + 129 126 sizeof(struct acpi_table_header) - 130 127 16)); 131 128 acpi_os_printf(" */\n"); ··· 297 294 /* Create Op structure and append to parent's argument list */ 298 295 299 296 walk_state->op_info = acpi_ps_get_opcode_info(walk_state->opcode); 300 - op = acpi_ps_alloc_op(walk_state->opcode); 297 + op = acpi_ps_alloc_op(walk_state->opcode, aml_op_start); 301 298 if (!op) { 302 299 return_ACPI_STATUS(AE_NO_MEMORY); 303 300 }
+10 -4
drivers/acpi/acpica/psparse.c
··· 147 147 return_ACPI_STATUS(AE_OK); /* OK for now */ 148 148 } 149 149 150 + acpi_ex_stop_trace_opcode(op, walk_state); 151 + 150 152 /* Delete this op and the subtree below it if asked to */ 151 153 152 154 if (((walk_state->parse_flags & ACPI_PARSE_TREE_MASK) != ··· 187 185 * op must be replaced by a placeholder return op 188 186 */ 189 187 replacement_op = 190 - acpi_ps_alloc_op(AML_INT_RETURN_VALUE_OP); 188 + acpi_ps_alloc_op(AML_INT_RETURN_VALUE_OP, 189 + op->common.aml); 191 190 if (!replacement_op) { 192 191 status = AE_NO_MEMORY; 193 192 } ··· 212 209 || (op->common.parent->common.aml_opcode == 213 210 AML_VAR_PACKAGE_OP)) { 214 211 replacement_op = 215 - acpi_ps_alloc_op(AML_INT_RETURN_VALUE_OP); 212 + acpi_ps_alloc_op(AML_INT_RETURN_VALUE_OP, 213 + op->common.aml); 216 214 if (!replacement_op) { 217 215 status = AE_NO_MEMORY; 218 216 } ··· 228 224 AML_VAR_PACKAGE_OP)) { 229 225 replacement_op = 230 226 acpi_ps_alloc_op(op->common. 231 - aml_opcode); 227 + aml_opcode, 228 + op->common.aml); 232 229 if (!replacement_op) { 233 230 status = AE_NO_MEMORY; 234 231 } else { ··· 245 240 default: 246 241 247 242 replacement_op = 248 - acpi_ps_alloc_op(AML_INT_RETURN_VALUE_OP); 243 + acpi_ps_alloc_op(AML_INT_RETURN_VALUE_OP, 244 + op->common.aml); 249 245 if (!replacement_op) { 250 246 status = AE_NO_MEMORY; 251 247 }
+5 -3
drivers/acpi/acpica/psutils.c
··· 60 60 * DESCRIPTION: Create a Scope and associated namepath op with the root name 61 61 * 62 62 ******************************************************************************/ 63 - union acpi_parse_object *acpi_ps_create_scope_op(void) 63 + union acpi_parse_object *acpi_ps_create_scope_op(u8 *aml) 64 64 { 65 65 union acpi_parse_object *scope_op; 66 66 67 - scope_op = acpi_ps_alloc_op(AML_SCOPE_OP); 67 + scope_op = acpi_ps_alloc_op(AML_SCOPE_OP, aml); 68 68 if (!scope_op) { 69 69 return (NULL); 70 70 } ··· 103 103 * FUNCTION: acpi_ps_alloc_op 104 104 * 105 105 * PARAMETERS: opcode - Opcode that will be stored in the new Op 106 + * aml - Address of the opcode 106 107 * 107 108 * RETURN: Pointer to the new Op, null on failure 108 109 * ··· 113 112 * 114 113 ******************************************************************************/ 115 114 116 - union acpi_parse_object *acpi_ps_alloc_op(u16 opcode) 115 + union acpi_parse_object *acpi_ps_alloc_op(u16 opcode, u8 *aml) 117 116 { 118 117 union acpi_parse_object *op; 119 118 const struct acpi_opcode_info *op_info; ··· 150 149 151 150 if (op) { 152 151 acpi_ps_init_op(op, opcode); 152 + op->common.aml = aml; 153 153 op->common.flags = flags; 154 154 } 155 155
+9 -114
drivers/acpi/acpica/psxface.c
··· 47 47 #include "acdispat.h" 48 48 #include "acinterp.h" 49 49 #include "actables.h" 50 + #include "acnamesp.h" 50 51 51 52 #define _COMPONENT ACPI_PARSER 52 53 ACPI_MODULE_NAME("psxface") 53 54 54 55 /* Local Prototypes */ 55 - static void acpi_ps_start_trace(struct acpi_evaluate_info *info); 56 - 57 - static void acpi_ps_stop_trace(struct acpi_evaluate_info *info); 58 - 59 56 static void 60 57 acpi_ps_update_parameter_list(struct acpi_evaluate_info *info, u16 action); 61 58 ··· 73 76 ******************************************************************************/ 74 77 75 78 acpi_status 76 - acpi_debug_trace(char *name, u32 debug_level, u32 debug_layer, u32 flags) 79 + acpi_debug_trace(const char *name, u32 debug_level, u32 debug_layer, u32 flags) 77 80 { 78 81 acpi_status status; 79 82 ··· 82 85 return (status); 83 86 } 84 87 85 - /* TBDs: Validate name, allow full path or just nameseg */ 86 - 87 - acpi_gbl_trace_method_name = *ACPI_CAST_PTR(u32, name); 88 + acpi_gbl_trace_method_name = name; 88 89 acpi_gbl_trace_flags = flags; 89 - 90 - if (debug_level) { 91 - acpi_gbl_trace_dbg_level = debug_level; 92 - } 93 - if (debug_layer) { 94 - acpi_gbl_trace_dbg_layer = debug_layer; 95 - } 90 + acpi_gbl_trace_dbg_level = debug_level; 91 + acpi_gbl_trace_dbg_layer = debug_layer; 92 + status = AE_OK; 96 93 97 94 (void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE); 98 - return (AE_OK); 99 - } 100 - 101 - /******************************************************************************* 102 - * 103 - * FUNCTION: acpi_ps_start_trace 104 - * 105 - * PARAMETERS: info - Method info struct 106 - * 107 - * RETURN: None 108 - * 109 - * DESCRIPTION: Start control method execution trace 110 - * 111 - ******************************************************************************/ 112 - 113 - static void acpi_ps_start_trace(struct acpi_evaluate_info *info) 114 - { 115 - acpi_status status; 116 - 117 - ACPI_FUNCTION_ENTRY(); 118 - 119 - status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE); 
120 - if (ACPI_FAILURE(status)) { 121 - return; 122 - } 123 - 124 - if ((!acpi_gbl_trace_method_name) || 125 - (acpi_gbl_trace_method_name != info->node->name.integer)) { 126 - goto exit; 127 - } 128 - 129 - acpi_gbl_original_dbg_level = acpi_dbg_level; 130 - acpi_gbl_original_dbg_layer = acpi_dbg_layer; 131 - 132 - acpi_dbg_level = 0x00FFFFFF; 133 - acpi_dbg_layer = ACPI_UINT32_MAX; 134 - 135 - if (acpi_gbl_trace_dbg_level) { 136 - acpi_dbg_level = acpi_gbl_trace_dbg_level; 137 - } 138 - if (acpi_gbl_trace_dbg_layer) { 139 - acpi_dbg_layer = acpi_gbl_trace_dbg_layer; 140 - } 141 - 142 - exit: 143 - (void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE); 144 - } 145 - 146 - /******************************************************************************* 147 - * 148 - * FUNCTION: acpi_ps_stop_trace 149 - * 150 - * PARAMETERS: info - Method info struct 151 - * 152 - * RETURN: None 153 - * 154 - * DESCRIPTION: Stop control method execution trace 155 - * 156 - ******************************************************************************/ 157 - 158 - static void acpi_ps_stop_trace(struct acpi_evaluate_info *info) 159 - { 160 - acpi_status status; 161 - 162 - ACPI_FUNCTION_ENTRY(); 163 - 164 - status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE); 165 - if (ACPI_FAILURE(status)) { 166 - return; 167 - } 168 - 169 - if ((!acpi_gbl_trace_method_name) || 170 - (acpi_gbl_trace_method_name != info->node->name.integer)) { 171 - goto exit; 172 - } 173 - 174 - /* Disable further tracing if type is one-shot */ 175 - 176 - if (acpi_gbl_trace_flags & 1) { 177 - acpi_gbl_trace_method_name = 0; 178 - acpi_gbl_trace_dbg_level = 0; 179 - acpi_gbl_trace_dbg_layer = 0; 180 - } 181 - 182 - acpi_dbg_level = acpi_gbl_original_dbg_level; 183 - acpi_dbg_layer = acpi_gbl_original_dbg_layer; 184 - 185 - exit: 186 - (void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE); 95 + return (status); 187 96 } 188 97 189 98 /******************************************************************************* ··· 115 212 * 116 
213 ******************************************************************************/ 117 214 118 - acpi_status acpi_ps_execute_method(struct acpi_evaluate_info *info) 215 + acpi_status acpi_ps_execute_method(struct acpi_evaluate_info * info) 119 216 { 120 217 acpi_status status; 121 218 union acpi_parse_object *op; ··· 146 243 */ 147 244 acpi_ps_update_parameter_list(info, REF_INCREMENT); 148 245 149 - /* Begin tracing if requested */ 150 - 151 - acpi_ps_start_trace(info); 152 - 153 246 /* 154 247 * Execute the method. Performs parse simultaneously 155 248 */ ··· 155 256 156 257 /* Create and init a Root Node */ 157 258 158 - op = acpi_ps_create_scope_op(); 259 + op = acpi_ps_create_scope_op(info->obj_desc->method.aml_start); 159 260 if (!op) { 160 261 status = AE_NO_MEMORY; 161 262 goto cleanup; ··· 224 325 225 326 cleanup: 226 327 acpi_ps_delete_parse_tree(op); 227 - 228 - /* End optional tracing */ 229 - 230 - acpi_ps_stop_trace(info); 231 328 232 329 /* Take away the extra reference that we gave the parameters above */ 233 330
+2 -1
drivers/acpi/acpica/rscreate.c
···
348 348 				status =
349 349 				    acpi_ns_handle_to_pathname((acpi_handle)
350 350 							       node,
351     -							       &path_buffer);
351     +							       &path_buffer,
352     +							       FALSE);
352 353 
353 354 			/* +1 to include null terminator */
354 355 
+3 -3
drivers/acpi/acpica/tbfadt.c
···
345 345 	/* Obtain the DSDT and FACS tables via their addresses within the FADT */
346 346 
347 347 	acpi_tb_install_fixed_table((acpi_physical_address) acpi_gbl_FADT.Xdsdt,
348     -				    ACPI_SIG_DSDT, ACPI_TABLE_INDEX_DSDT);
348     +				    ACPI_SIG_DSDT, &acpi_gbl_dsdt_index);
349 349 
350 350 	/* If Hardware Reduced flag is set, there is no FACS */
351 351 
···
354 354 		acpi_tb_install_fixed_table((acpi_physical_address)
355 355 					    acpi_gbl_FADT.facs,
356 356 					    ACPI_SIG_FACS,
357     -					    ACPI_TABLE_INDEX_FACS);
357     +					    &acpi_gbl_facs_index);
358 358 	}
359 359 	if (acpi_gbl_FADT.Xfacs) {
360 360 		acpi_tb_install_fixed_table((acpi_physical_address)
361 361 					    acpi_gbl_FADT.Xfacs,
362 362 					    ACPI_SIG_FACS,
363     -					    ACPI_TABLE_INDEX_X_FACS);
363     +					    &acpi_gbl_xfacs_index);
364 364 	}
365 365 }
366 366 }
+14 -1
drivers/acpi/acpica/tbfind.c
···
 68  68 acpi_tb_find_table(char *signature,
 69  69 		   char *oem_id, char *oem_table_id, u32 *table_index)
 70  70 {
 71      -	u32 i;
 72  71 	acpi_status status;
 73  72 	struct acpi_table_header header;
 73      +	u32 i;
 74  74 
 75  75 	ACPI_FUNCTION_TRACE(tb_find_table);
 76      +
 77      +	/* Validate the input table signature */
 78      +
 79      +	if (!acpi_is_valid_signature(signature)) {
 80      +		return_ACPI_STATUS(AE_BAD_SIGNATURE);
 81      +	}
 82      +
 83      +	/* Don't allow the OEM strings to be too long */
 84      +
 85      +	if ((strlen(oem_id) > ACPI_OEM_ID_SIZE) ||
 86      +	    (strlen(oem_table_id) > ACPI_OEM_TABLE_ID_SIZE)) {
 87      +		return_ACPI_STATUS(AE_AML_STRING_LIMIT);
 88      +	}
 76  89 
 77  90 	/* Normalize the input strings */
 78  91 
+21 -19
drivers/acpi/acpica/tbinstal.c
··· 100 100 * 101 101 * FUNCTION: acpi_tb_install_table_with_override 102 102 * 103 - * PARAMETERS: table_index - Index into root table array 104 - * new_table_desc - New table descriptor to install 103 + * PARAMETERS: new_table_desc - New table descriptor to install 105 104 * override - Whether override should be performed 105 + * table_index - Where the table index is returned 106 106 * 107 107 * RETURN: None 108 108 * ··· 114 114 ******************************************************************************/ 115 115 116 116 void 117 - acpi_tb_install_table_with_override(u32 table_index, 118 - struct acpi_table_desc *new_table_desc, 119 - u8 override) 117 + acpi_tb_install_table_with_override(struct acpi_table_desc *new_table_desc, 118 + u8 override, u32 *table_index) 120 119 { 120 + u32 i; 121 + acpi_status status; 121 122 122 - if (table_index >= acpi_gbl_root_table_list.current_table_count) { 123 + status = acpi_tb_get_next_table_descriptor(&i, NULL); 124 + if (ACPI_FAILURE(status)) { 123 125 return; 124 126 } 125 127 ··· 136 134 acpi_tb_override_table(new_table_desc); 137 135 } 138 136 139 - acpi_tb_init_table_descriptor(&acpi_gbl_root_table_list. 
140 - tables[table_index], 137 + acpi_tb_init_table_descriptor(&acpi_gbl_root_table_list.tables[i], 141 138 new_table_desc->address, 142 139 new_table_desc->flags, 143 140 new_table_desc->pointer); ··· 144 143 acpi_tb_print_table_header(new_table_desc->address, 145 144 new_table_desc->pointer); 146 145 146 + /* This synchronizes acpi_gbl_dsdt_index */ 147 + 148 + *table_index = i; 149 + 147 150 /* Set the global integer width (based upon revision of the DSDT) */ 148 151 149 - if (table_index == ACPI_TABLE_INDEX_DSDT) { 152 + if (i == acpi_gbl_dsdt_index) { 150 153 acpi_ut_set_integer_width(new_table_desc->pointer->revision); 151 154 } 152 155 } ··· 162 157 * PARAMETERS: address - Physical address of DSDT or FACS 163 158 * signature - Table signature, NULL if no need to 164 159 * match 165 - * table_index - Index into root table array 160 + * table_index - Where the table index is returned 166 161 * 167 162 * RETURN: Status 168 163 * ··· 173 168 174 169 acpi_status 175 170 acpi_tb_install_fixed_table(acpi_physical_address address, 176 - char *signature, u32 table_index) 171 + char *signature, u32 *table_index) 177 172 { 178 173 struct acpi_table_desc new_table_desc; 179 174 acpi_status status; ··· 205 200 goto release_and_exit; 206 201 } 207 202 208 - acpi_tb_install_table_with_override(table_index, &new_table_desc, TRUE); 203 + /* Add the table to the global root table list */ 204 + 205 + acpi_tb_install_table_with_override(&new_table_desc, TRUE, table_index); 209 206 210 207 release_and_exit: 211 208 ··· 362 355 363 356 /* Add the table to the global root table list */ 364 357 365 - status = acpi_tb_get_next_table_descriptor(&i, NULL); 366 - if (ACPI_FAILURE(status)) { 367 - goto release_and_exit; 368 - } 369 - 370 - *table_index = i; 371 - acpi_tb_install_table_with_override(i, &new_table_desc, override); 358 + acpi_tb_install_table_with_override(&new_table_desc, override, 359 + table_index); 372 360 373 361 release_and_exit: 374 362
+49 -24
drivers/acpi/acpica/tbutils.c
··· 68 68 69 69 acpi_status acpi_tb_initialize_facs(void) 70 70 { 71 + struct acpi_table_facs *facs; 71 72 72 73 /* If Hardware Reduced flag is set, there is no FACS */ 73 74 74 75 if (acpi_gbl_reduced_hardware) { 75 76 acpi_gbl_FACS = NULL; 76 77 return (AE_OK); 77 - } 78 - 79 - (void)acpi_get_table_by_index(ACPI_TABLE_INDEX_FACS, 80 - ACPI_CAST_INDIRECT_PTR(struct 81 - acpi_table_header, 82 - &acpi_gbl_facs32)); 83 - (void)acpi_get_table_by_index(ACPI_TABLE_INDEX_X_FACS, 84 - ACPI_CAST_INDIRECT_PTR(struct 85 - acpi_table_header, 86 - &acpi_gbl_facs64)); 87 - 88 - if (acpi_gbl_facs64 89 - && (!acpi_gbl_facs32 || !acpi_gbl_use32_bit_facs_addresses)) { 90 - acpi_gbl_FACS = acpi_gbl_facs64; 91 - } else if (acpi_gbl_facs32) { 92 - acpi_gbl_FACS = acpi_gbl_facs32; 78 + } else if (acpi_gbl_FADT.Xfacs && 79 + (!acpi_gbl_FADT.facs 80 + || !acpi_gbl_use32_bit_facs_addresses)) { 81 + (void)acpi_get_table_by_index(acpi_gbl_xfacs_index, 82 + ACPI_CAST_INDIRECT_PTR(struct 83 + acpi_table_header, 84 + &facs)); 85 + acpi_gbl_FACS = facs; 86 + } else if (acpi_gbl_FADT.facs) { 87 + (void)acpi_get_table_by_index(acpi_gbl_facs_index, 88 + ACPI_CAST_INDIRECT_PTR(struct 89 + acpi_table_header, 90 + &facs)); 91 + acpi_gbl_FACS = facs; 93 92 } 94 93 95 94 /* If there is no FACS, just continue. There was already an error msg */ ··· 191 192 acpi_tb_uninstall_table(table_desc); 192 193 193 194 acpi_tb_init_table_descriptor(&acpi_gbl_root_table_list. 
194 - tables[ACPI_TABLE_INDEX_DSDT], 195 + tables[acpi_gbl_dsdt_index], 195 196 ACPI_PTR_TO_PHYSADDR(new_table), 196 197 ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL, 197 198 new_table); ··· 368 369 table_entry_size); 369 370 table_entry = ACPI_ADD_PTR(u8, table, sizeof(struct acpi_table_header)); 370 371 371 - /* 372 - * First three entries in the table array are reserved for the DSDT 373 - * and 32bit/64bit FACS, which are not actually present in the 374 - * RSDT/XSDT - they come from the FADT 375 - */ 376 - acpi_gbl_root_table_list.current_table_count = 3; 377 - 378 372 /* Initialize the root table array from the RSDT/XSDT */ 379 373 380 374 for (i = 0; i < table_count; i++) { ··· 403 411 acpi_os_unmap_memory(table, length); 404 412 405 413 return_ACPI_STATUS(AE_OK); 414 + } 415 + 416 + /******************************************************************************* 417 + * 418 + * FUNCTION: acpi_is_valid_signature 419 + * 420 + * PARAMETERS: signature - Sig string to be validated 421 + * 422 + * RETURN: TRUE if signature is correct length and has valid characters 423 + * 424 + * DESCRIPTION: Validate an ACPI table signature. 425 + * 426 + ******************************************************************************/ 427 + 428 + u8 acpi_is_valid_signature(char *signature) 429 + { 430 + u32 i; 431 + 432 + /* Validate the signature length */ 433 + 434 + if (strlen(signature) != ACPI_NAME_SIZE) { 435 + return (FALSE); 436 + } 437 + 438 + /* Validate each character in the signature */ 439 + 440 + for (i = 0; i < ACPI_NAME_SIZE; i++) { 441 + if (!acpi_ut_valid_acpi_char(signature[i], i)) { 442 + return (FALSE); 443 + } 444 + } 445 + 446 + return (TRUE); 406 447 }
+59 -34
drivers/acpi/acpica/tbxfload.c
··· 51 51 #define _COMPONENT ACPI_TABLES 52 52 ACPI_MODULE_NAME("tbxfload") 53 53 54 - /* Local prototypes */ 55 - static acpi_status acpi_tb_load_namespace(void); 56 - 57 54 /******************************************************************************* 58 55 * 59 56 * FUNCTION: acpi_load_tables ··· 62 65 * DESCRIPTION: Load the ACPI tables from the RSDT/XSDT 63 66 * 64 67 ******************************************************************************/ 65 - 66 68 acpi_status __init acpi_load_tables(void) 67 69 { 68 70 acpi_status status; ··· 71 75 /* Load the namespace from the tables */ 72 76 73 77 status = acpi_tb_load_namespace(); 78 + 79 + /* Don't let single failures abort the load */ 80 + 81 + if (status == AE_CTRL_TERMINATE) { 82 + status = AE_OK; 83 + } 84 + 74 85 if (ACPI_FAILURE(status)) { 75 86 ACPI_EXCEPTION((AE_INFO, status, 76 87 "While loading namespace from ACPI tables")); ··· 100 97 * the RSDT/XSDT. 101 98 * 102 99 ******************************************************************************/ 103 - static acpi_status acpi_tb_load_namespace(void) 100 + acpi_status acpi_tb_load_namespace(void) 104 101 { 105 102 acpi_status status; 106 103 u32 i; 107 104 struct acpi_table_header *new_dsdt; 105 + struct acpi_table_desc *table; 106 + u32 tables_loaded = 0; 107 + u32 tables_failed = 0; 108 108 109 109 ACPI_FUNCTION_TRACE(tb_load_namespace); 110 110 ··· 117 111 * Load the namespace. The DSDT is required, but any SSDT and 118 112 * PSDT tables are optional. Verify the DSDT. 119 113 */ 114 + table = &acpi_gbl_root_table_list.tables[acpi_gbl_dsdt_index]; 115 + 120 116 if (!acpi_gbl_root_table_list.current_table_count || 121 - !ACPI_COMPARE_NAME(& 122 - (acpi_gbl_root_table_list. 123 - tables[ACPI_TABLE_INDEX_DSDT].signature), 124 - ACPI_SIG_DSDT) 125 - || 126 - ACPI_FAILURE(acpi_tb_validate_table 127 - (&acpi_gbl_root_table_list. 
128 - tables[ACPI_TABLE_INDEX_DSDT]))) { 117 + !ACPI_COMPARE_NAME(table->signature.ascii, ACPI_SIG_DSDT) || 118 + ACPI_FAILURE(acpi_tb_validate_table(table))) { 129 119 status = AE_NO_ACPI_TABLES; 130 120 goto unlock_and_exit; 131 121 } ··· 132 130 * array can change dynamically as tables are loaded at run-time. Note: 133 131 * .Pointer field is not validated until after call to acpi_tb_validate_table. 134 132 */ 135 - acpi_gbl_DSDT = 136 - acpi_gbl_root_table_list.tables[ACPI_TABLE_INDEX_DSDT].pointer; 133 + acpi_gbl_DSDT = table->pointer; 137 134 138 135 /* 139 136 * Optionally copy the entire DSDT to local memory (instead of simply ··· 141 140 * the DSDT. 142 141 */ 143 142 if (acpi_gbl_copy_dsdt_locally) { 144 - new_dsdt = acpi_tb_copy_dsdt(ACPI_TABLE_INDEX_DSDT); 143 + new_dsdt = acpi_tb_copy_dsdt(acpi_gbl_dsdt_index); 145 144 if (new_dsdt) { 146 145 acpi_gbl_DSDT = new_dsdt; 147 146 } ··· 158 157 159 158 /* Load and parse tables */ 160 159 161 - status = acpi_ns_load_table(ACPI_TABLE_INDEX_DSDT, acpi_gbl_root_node); 160 + status = acpi_ns_load_table(acpi_gbl_dsdt_index, acpi_gbl_root_node); 162 161 if (ACPI_FAILURE(status)) { 163 - return_ACPI_STATUS(status); 162 + ACPI_EXCEPTION((AE_INFO, status, "[DSDT] table load failed")); 163 + tables_failed++; 164 + } else { 165 + tables_loaded++; 164 166 } 165 167 166 168 /* Load any SSDT or PSDT tables. Note: Loop leaves tables locked */ 167 169 168 170 (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 169 171 for (i = 0; i < acpi_gbl_root_table_list.current_table_count; ++i) { 172 + table = &acpi_gbl_root_table_list.tables[i]; 173 + 170 174 if (!acpi_gbl_root_table_list.tables[i].address || 171 - (!ACPI_COMPARE_NAME 172 - (&(acpi_gbl_root_table_list.tables[i].signature), 173 - ACPI_SIG_SSDT) 174 - && 175 - !ACPI_COMPARE_NAME(& 176 - (acpi_gbl_root_table_list.tables[i]. 177 - signature), ACPI_SIG_PSDT) 178 - && 179 - !ACPI_COMPARE_NAME(& 180 - (acpi_gbl_root_table_list.tables[i]. 
181 - signature), ACPI_SIG_OSDT)) 182 - || 183 - ACPI_FAILURE(acpi_tb_validate_table 184 - (&acpi_gbl_root_table_list.tables[i]))) { 175 + (!ACPI_COMPARE_NAME(table->signature.ascii, ACPI_SIG_SSDT) 176 + && !ACPI_COMPARE_NAME(table->signature.ascii, 177 + ACPI_SIG_PSDT) 178 + && !ACPI_COMPARE_NAME(table->signature.ascii, 179 + ACPI_SIG_OSDT)) 180 + || ACPI_FAILURE(acpi_tb_validate_table(table))) { 185 181 continue; 186 182 } 187 183 188 184 /* Ignore errors while loading tables, get as many as possible */ 189 185 190 186 (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 191 - (void)acpi_ns_load_table(i, acpi_gbl_root_node); 187 + status = acpi_ns_load_table(i, acpi_gbl_root_node); 188 + if (ACPI_FAILURE(status)) { 189 + ACPI_EXCEPTION((AE_INFO, status, 190 + "(%4.4s:%8.8s) while loading table", 191 + table->signature.ascii, 192 + table->pointer->oem_table_id)); 193 + tables_failed++; 194 + 195 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_INIT, 196 + "Table [%4.4s:%8.8s] (id FF) - Table namespace load failed\n\n", 197 + table->signature.ascii, 198 + table->pointer->oem_table_id)); 199 + } else { 200 + tables_loaded++; 201 + } 202 + 192 203 (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 193 204 } 194 205 195 - ACPI_INFO((AE_INFO, "All ACPI Tables successfully acquired")); 206 + if (!tables_failed) { 207 + ACPI_INFO((AE_INFO, 208 + "%u ACPI AML tables successfully acquired and loaded", 209 + tables_loaded)); 210 + } else { 211 + ACPI_ERROR((AE_INFO, 212 + "%u table load failures, %u successful", 213 + tables_failed, tables_loaded)); 214 + 215 + /* Indicate at least one failure */ 216 + 217 + status = AE_CTRL_TERMINATE; 218 + } 196 219 197 220 unlock_and_exit: 198 221 (void)acpi_ut_release_mutex(ACPI_MTX_TABLES);
+31 -2
drivers/acpi/acpica/utdebug.c
··· 45 45 46 46 #include <acpi/acpi.h> 47 47 #include "accommon.h" 48 + #include "acinterp.h" 48 49 49 50 #define _COMPONENT ACPI_UTILITIES 50 51 ACPI_MODULE_NAME("utdebug") ··· 561 560 } 562 561 } 563 562 564 - #endif 563 + /******************************************************************************* 564 + * 565 + * FUNCTION: acpi_trace_point 566 + * 567 + * PARAMETERS: type - Trace event type 568 + * begin - TRUE if before execution 569 + * aml - Executed AML address 570 + * pathname - Object path 571 + * pointer - Pointer to the related object 572 + * 573 + * RETURN: None 574 + * 575 + * DESCRIPTION: Interpreter execution trace. 576 + * 577 + ******************************************************************************/ 565 578 579 + void 580 + acpi_trace_point(acpi_trace_event_type type, u8 begin, u8 *aml, char *pathname) 581 + { 582 + 583 + ACPI_FUNCTION_ENTRY(); 584 + 585 + acpi_ex_trace_point(type, begin, aml, pathname); 586 + 587 + #ifdef ACPI_USE_SYSTEM_TRACER 588 + acpi_os_trace_point(type, begin, aml, pathname); 589 + #endif 590 + } 591 + 592 + ACPI_EXPORT_SYMBOL(acpi_trace_point) 593 + #endif 566 594 #ifdef ACPI_APPLICATION 567 595 /******************************************************************************* 568 596 * ··· 605 575 * DESCRIPTION: Print error message to the console, used by applications. 606 576 * 607 577 ******************************************************************************/ 608 - 609 578 void ACPI_INTERNAL_VAR_XFACE acpi_log_error(const char *format, ...) 610 579 { 611 580 va_list args;
+3
drivers/acpi/acpica/utdelete.c
···
209 209 		acpi_ut_delete_object_desc(object->method.mutex);
210 210 		object->method.mutex = NULL;
211 211 	}
212     +	if (object->method.node) {
213     +		object->method.node = NULL;
214     +	}
212 215 	break;
213 216 
214 217 	case ACPI_TYPE_REGION:
+1 -1
drivers/acpi/acpica/utfileio.c
···
312 312 	/* Get the entire file */
313 313 
314 314 	fprintf(stderr,
315     -		"Reading ACPI table from file %10s - Length %.8u (0x%06X)\n",
315     +		"Reading ACPI table from file %12s - Length %.8u (0x%06X)\n",
316 316 		filename, file_size, file_size);
317 317 
318 318 	status = acpi_ut_read_table(file, table, &table_length);
+1 -2
drivers/acpi/acpica/utinit.c
···
204 204 	acpi_gbl_acpi_hardware_present = TRUE;
205 205 	acpi_gbl_last_owner_id_index = 0;
206 206 	acpi_gbl_next_owner_id_offset = 0;
207     -	acpi_gbl_trace_dbg_level = 0;
208     -	acpi_gbl_trace_dbg_layer = 0;
209 207 	acpi_gbl_debugger_configuration = DEBUGGER_THREADING;
210 208 	acpi_gbl_osi_mutex = NULL;
211 209 	acpi_gbl_reg_methods_executed = FALSE;
210     +	acpi_gbl_max_loop_iterations = 0xFFFF;
212 211 
213 212 	/* Hardware oriented */
214 213 
+2 -2
drivers/acpi/acpica/utmisc.c
···
 75  75 	return (FALSE);
 76  76 }
 77  77 
 78      -#if (defined ACPI_ASL_COMPILER || defined ACPI_EXEC_APP)
 78      +#if (defined ACPI_ASL_COMPILER || defined ACPI_EXEC_APP || defined ACPI_NAMES_APP)
 79  79 /*******************************************************************************
 80  80  *
 81  81  * FUNCTION: acpi_ut_is_aml_table
···
376 376 	/* Get the full pathname to the node */
377 377 
378 378 	buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER;
379     -	status = acpi_ns_handle_to_pathname(obj_handle, &buffer);
379     +	status = acpi_ns_handle_to_pathname(obj_handle, &buffer, TRUE);
380 380 	if (ACPI_FAILURE(status)) {
381 381 		return;
382 382 	}
+380
drivers/acpi/acpica/utnonansi.c
··· 1 + /******************************************************************************* 2 + * 3 + * Module Name: utnonansi - Non-ansi C library functions 4 + * 5 + ******************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + 47 + #define _COMPONENT ACPI_UTILITIES 48 + ACPI_MODULE_NAME("utnonansi") 49 + 50 + /* 51 + * Non-ANSI C library functions - strlwr, strupr, stricmp, and a 64-bit 52 + * version of strtoul. 53 + */ 54 + /******************************************************************************* 55 + * 56 + * FUNCTION: acpi_ut_strlwr (strlwr) 57 + * 58 + * PARAMETERS: src_string - The source string to convert 59 + * 60 + * RETURN: None 61 + * 62 + * DESCRIPTION: Convert a string to lowercase 63 + * 64 + ******************************************************************************/ 65 + void acpi_ut_strlwr(char *src_string) 66 + { 67 + char *string; 68 + 69 + ACPI_FUNCTION_ENTRY(); 70 + 71 + if (!src_string) { 72 + return; 73 + } 74 + 75 + /* Walk entire string, lowercasing the letters */ 76 + 77 + for (string = src_string; *string; string++) { 78 + *string = (char)tolower((int)*string); 79 + } 80 + } 81 + 82 + /******************************************************************************* 83 + * 84 + * FUNCTION: acpi_ut_strupr (strupr) 85 + * 86 + * PARAMETERS: src_string - The source string to convert 87 + * 88 + * RETURN: None 89 + * 90 + * DESCRIPTION: Convert a string to uppercase 91 + * 92 + ******************************************************************************/ 93 + 94 + void acpi_ut_strupr(char *src_string) 95 + { 96 + char *string; 97 + 98 + ACPI_FUNCTION_ENTRY(); 99 
+ 100 + if (!src_string) { 101 + return; 102 + } 103 + 104 + /* Walk entire string, uppercasing the letters */ 105 + 106 + for (string = src_string; *string; string++) { 107 + *string = (char)toupper((int)*string); 108 + } 109 + } 110 + 111 + /****************************************************************************** 112 + * 113 + * FUNCTION: acpi_ut_stricmp (stricmp) 114 + * 115 + * PARAMETERS: string1 - first string to compare 116 + * string2 - second string to compare 117 + * 118 + * RETURN: int that signifies string relationship. Zero means strings 119 + * are equal. 120 + * 121 + * DESCRIPTION: Case-insensitive string compare. Implementation of the 122 + * non-ANSI stricmp function. 123 + * 124 + ******************************************************************************/ 125 + 126 + int acpi_ut_stricmp(char *string1, char *string2) 127 + { 128 + int c1; 129 + int c2; 130 + 131 + do { 132 + c1 = tolower((int)*string1); 133 + c2 = tolower((int)*string2); 134 + 135 + string1++; 136 + string2++; 137 + } 138 + while ((c1 == c2) && (c1)); 139 + 140 + return (c1 - c2); 141 + } 142 + 143 + /******************************************************************************* 144 + * 145 + * FUNCTION: acpi_ut_strtoul64 146 + * 147 + * PARAMETERS: string - Null terminated string 148 + * base - Radix of the string: 16 or ACPI_ANY_BASE; 149 + * ACPI_ANY_BASE means 'in behalf of to_integer' 150 + * ret_integer - Where the converted integer is returned 151 + * 152 + * RETURN: Status and Converted value 153 + * 154 + * DESCRIPTION: Convert a string into an unsigned value. Performs either a 155 + * 32-bit or 64-bit conversion, depending on the current mode 156 + * of the interpreter. 157 + * 158 + * NOTE: Does not support Octal strings, not needed. 
159 + * 160 + ******************************************************************************/ 161 + 162 + acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer) 163 + { 164 + u32 this_digit = 0; 165 + u64 return_value = 0; 166 + u64 quotient; 167 + u64 dividend; 168 + u32 to_integer_op = (base == ACPI_ANY_BASE); 169 + u32 mode32 = (acpi_gbl_integer_byte_width == 4); 170 + u8 valid_digits = 0; 171 + u8 sign_of0x = 0; 172 + u8 term = 0; 173 + 174 + ACPI_FUNCTION_TRACE_STR(ut_stroul64, string); 175 + 176 + switch (base) { 177 + case ACPI_ANY_BASE: 178 + case 16: 179 + 180 + break; 181 + 182 + default: 183 + 184 + /* Invalid Base */ 185 + 186 + return_ACPI_STATUS(AE_BAD_PARAMETER); 187 + } 188 + 189 + if (!string) { 190 + goto error_exit; 191 + } 192 + 193 + /* Skip over any white space in the buffer */ 194 + 195 + while ((*string) && (isspace((int)*string) || *string == '\t')) { 196 + string++; 197 + } 198 + 199 + if (to_integer_op) { 200 + /* 201 + * Base equal to ACPI_ANY_BASE means 'ToInteger operation case'. 202 + * We need to determine if it is decimal or hexadecimal. 203 + */ 204 + if ((*string == '0') && (tolower((int)*(string + 1)) == 'x')) { 205 + sign_of0x = 1; 206 + base = 16; 207 + 208 + /* Skip over the leading '0x' */ 209 + string += 2; 210 + } else { 211 + base = 10; 212 + } 213 + } 214 + 215 + /* Any string left? Check that '0x' is not followed by white space. */ 216 + 217 + if (!(*string) || isspace((int)*string) || *string == '\t') { 218 + if (to_integer_op) { 219 + goto error_exit; 220 + } else { 221 + goto all_done; 222 + } 223 + } 224 + 225 + /* 226 + * Perform a 32-bit or 64-bit conversion, depending upon the current 227 + * execution mode of the interpreter 228 + */ 229 + dividend = (mode32) ? 
ACPI_UINT32_MAX : ACPI_UINT64_MAX; 230 + 231 + /* Main loop: convert the string to a 32- or 64-bit integer */ 232 + 233 + while (*string) { 234 + if (isdigit((int)*string)) { 235 + 236 + /* Convert ASCII 0-9 to Decimal value */ 237 + 238 + this_digit = ((u8)*string) - '0'; 239 + } else if (base == 10) { 240 + 241 + /* Digit is out of range; possible in to_integer case only */ 242 + 243 + term = 1; 244 + } else { 245 + this_digit = (u8)toupper((int)*string); 246 + if (isxdigit((int)this_digit)) { 247 + 248 + /* Convert ASCII Hex char to value */ 249 + 250 + this_digit = this_digit - 'A' + 10; 251 + } else { 252 + term = 1; 253 + } 254 + } 255 + 256 + if (term) { 257 + if (to_integer_op) { 258 + goto error_exit; 259 + } else { 260 + break; 261 + } 262 + } else if ((valid_digits == 0) && (this_digit == 0) 263 + && !sign_of0x) { 264 + 265 + /* Skip zeros */ 266 + string++; 267 + continue; 268 + } 269 + 270 + valid_digits++; 271 + 272 + if (sign_of0x 273 + && ((valid_digits > 16) 274 + || ((valid_digits > 8) && mode32))) { 275 + /* 276 + * This is to_integer operation case. 277 + * No any restrictions for string-to-integer conversion, 278 + * see ACPI spec. 
279 + */ 280 + goto error_exit; 281 + } 282 + 283 + /* Divide the digit into the correct position */ 284 + 285 + (void)acpi_ut_short_divide((dividend - (u64)this_digit), 286 + base, &quotient, NULL); 287 + 288 + if (return_value > quotient) { 289 + if (to_integer_op) { 290 + goto error_exit; 291 + } else { 292 + break; 293 + } 294 + } 295 + 296 + return_value *= base; 297 + return_value += this_digit; 298 + string++; 299 + } 300 + 301 + /* All done, normal exit */ 302 + 303 + all_done: 304 + 305 + ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Converted value: %8.8X%8.8X\n", 306 + ACPI_FORMAT_UINT64(return_value))); 307 + 308 + *ret_integer = return_value; 309 + return_ACPI_STATUS(AE_OK); 310 + 311 + error_exit: 312 + /* Base was set/validated above */ 313 + 314 + if (base == 10) { 315 + return_ACPI_STATUS(AE_BAD_DECIMAL_CONSTANT); 316 + } else { 317 + return_ACPI_STATUS(AE_BAD_HEX_CONSTANT); 318 + } 319 + } 320 + 321 + #if defined (ACPI_DEBUGGER) || defined (ACPI_APPLICATION) 322 + /******************************************************************************* 323 + * 324 + * FUNCTION: acpi_ut_safe_strcpy, acpi_ut_safe_strcat, acpi_ut_safe_strncat 325 + * 326 + * PARAMETERS: Adds a "DestSize" parameter to each of the standard string 327 + * functions. This is the size of the Destination buffer. 328 + * 329 + * RETURN: TRUE if the operation would overflow the destination buffer. 330 + * 331 + * DESCRIPTION: Safe versions of standard Clib string functions. Ensure that 332 + * the result of the operation will not overflow the output string 333 + * buffer. 334 + * 335 + * NOTE: These functions are typically only helpful for processing 336 + * user input and command lines. For most ACPICA code, the 337 + * required buffer length is precisely calculated before buffer 338 + * allocation, so the use of these functions is unnecessary. 
339 + * 340 + ******************************************************************************/ 341 + 342 + u8 acpi_ut_safe_strcpy(char *dest, acpi_size dest_size, char *source) 343 + { 344 + 345 + if (strlen(source) >= dest_size) { 346 + return (TRUE); 347 + } 348 + 349 + strcpy(dest, source); 350 + return (FALSE); 351 + } 352 + 353 + u8 acpi_ut_safe_strcat(char *dest, acpi_size dest_size, char *source) 354 + { 355 + 356 + if ((strlen(dest) + strlen(source)) >= dest_size) { 357 + return (TRUE); 358 + } 359 + 360 + strcat(dest, source); 361 + return (FALSE); 362 + } 363 + 364 + u8 365 + acpi_ut_safe_strncat(char *dest, 366 + acpi_size dest_size, 367 + char *source, acpi_size max_transfer_length) 368 + { 369 + acpi_size actual_transfer_length; 370 + 371 + actual_transfer_length = ACPI_MIN(max_transfer_length, strlen(source)); 372 + 373 + if ((strlen(dest) + actual_transfer_length) >= dest_size) { 374 + return (TRUE); 375 + } 376 + 377 + strncat(dest, source, max_transfer_length); 378 + return (FALSE); 379 + } 380 + #endif
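The new utnonansi.c file above introduces "safe" variants of the C library string functions that take an explicit destination-buffer size and refuse the operation rather than overflow. As a stand-alone sketch of that contract (simplified types, outside the ACPICA build; `safe_strcpy`/`safe_strcat` here are illustrative names, not the kernel symbols):

```c
#include <string.h>

typedef unsigned char u8;

/* Returns 1 (TRUE) if the copy would overflow dest, in which case
 * nothing is written; otherwise performs the copy and returns 0. */
static u8 safe_strcpy(char *dest, size_t dest_size, const char *source)
{
	if (strlen(source) >= dest_size)
		return 1;	/* would overflow, dest untouched */

	strcpy(dest, source);
	return 0;
}

/* Same contract for concatenation: check the combined length first. */
static u8 safe_strcat(char *dest, size_t dest_size, const char *source)
{
	if ((strlen(dest) + strlen(source)) >= dest_size)
		return 1;	/* would overflow, dest untouched */

	strcat(dest, source);
	return 0;
}
```

Note the `>=` comparisons: the terminating NUL must also fit, so a source exactly as long as the buffer is rejected. As the file's own comment says, these are mainly for user input and command lines; most ACPICA code sizes buffers exactly before allocating.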
-342
drivers/acpi/acpica/utstring.c
··· 48 48 #define _COMPONENT ACPI_UTILITIES 49 49 ACPI_MODULE_NAME("utstring") 50 50 51 - /* 52 - * Non-ANSI C library functions - strlwr, strupr, stricmp, and a 64-bit 53 - * version of strtoul. 54 - */ 55 - #ifdef ACPI_ASL_COMPILER 56 - /******************************************************************************* 57 - * 58 - * FUNCTION: acpi_ut_strlwr (strlwr) 59 - * 60 - * PARAMETERS: src_string - The source string to convert 61 - * 62 - * RETURN: None 63 - * 64 - * DESCRIPTION: Convert string to lowercase 65 - * 66 - * NOTE: This is not a POSIX function, so it appears here, not in utclib.c 67 - * 68 - ******************************************************************************/ 69 - void acpi_ut_strlwr(char *src_string) 70 - { 71 - char *string; 72 - 73 - ACPI_FUNCTION_ENTRY(); 74 - 75 - if (!src_string) { 76 - return; 77 - } 78 - 79 - /* Walk entire string, lowercasing the letters */ 80 - 81 - for (string = src_string; *string; string++) { 82 - *string = (char)tolower((int)*string); 83 - } 84 - 85 - return; 86 - } 87 - 88 - /****************************************************************************** 89 - * 90 - * FUNCTION: acpi_ut_stricmp (stricmp) 91 - * 92 - * PARAMETERS: string1 - first string to compare 93 - * string2 - second string to compare 94 - * 95 - * RETURN: int that signifies string relationship. Zero means strings 96 - * are equal. 
97 - * 98 - * DESCRIPTION: Implementation of the non-ANSI stricmp function (compare 99 - * strings with no case sensitivity) 100 - * 101 - ******************************************************************************/ 102 - 103 - int acpi_ut_stricmp(char *string1, char *string2) 104 - { 105 - int c1; 106 - int c2; 107 - 108 - do { 109 - c1 = tolower((int)*string1); 110 - c2 = tolower((int)*string2); 111 - 112 - string1++; 113 - string2++; 114 - } 115 - while ((c1 == c2) && (c1)); 116 - 117 - return (c1 - c2); 118 - } 119 - #endif 120 - 121 - /******************************************************************************* 122 - * 123 - * FUNCTION: acpi_ut_strupr (strupr) 124 - * 125 - * PARAMETERS: src_string - The source string to convert 126 - * 127 - * RETURN: None 128 - * 129 - * DESCRIPTION: Convert string to uppercase 130 - * 131 - * NOTE: This is not a POSIX function, so it appears here, not in utclib.c 132 - * 133 - ******************************************************************************/ 134 - 135 - void acpi_ut_strupr(char *src_string) 136 - { 137 - char *string; 138 - 139 - ACPI_FUNCTION_ENTRY(); 140 - 141 - if (!src_string) { 142 - return; 143 - } 144 - 145 - /* Walk entire string, uppercasing the letters */ 146 - 147 - for (string = src_string; *string; string++) { 148 - *string = (char)toupper((int)*string); 149 - } 150 - 151 - return; 152 - } 153 - 154 - /******************************************************************************* 155 - * 156 - * FUNCTION: acpi_ut_strtoul64 157 - * 158 - * PARAMETERS: string - Null terminated string 159 - * base - Radix of the string: 16 or ACPI_ANY_BASE; 160 - * ACPI_ANY_BASE means 'in behalf of to_integer' 161 - * ret_integer - Where the converted integer is returned 162 - * 163 - * RETURN: Status and Converted value 164 - * 165 - * DESCRIPTION: Convert a string into an unsigned value. Performs either a 166 - * 32-bit or 64-bit conversion, depending on the current mode 167 - * of the interpreter. 
168 - * NOTE: Does not support Octal strings, not needed. 169 - * 170 - ******************************************************************************/ 171 - 172 - acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer) 173 - { 174 - u32 this_digit = 0; 175 - u64 return_value = 0; 176 - u64 quotient; 177 - u64 dividend; 178 - u32 to_integer_op = (base == ACPI_ANY_BASE); 179 - u32 mode32 = (acpi_gbl_integer_byte_width == 4); 180 - u8 valid_digits = 0; 181 - u8 sign_of0x = 0; 182 - u8 term = 0; 183 - 184 - ACPI_FUNCTION_TRACE_STR(ut_stroul64, string); 185 - 186 - switch (base) { 187 - case ACPI_ANY_BASE: 188 - case 16: 189 - 190 - break; 191 - 192 - default: 193 - 194 - /* Invalid Base */ 195 - 196 - return_ACPI_STATUS(AE_BAD_PARAMETER); 197 - } 198 - 199 - if (!string) { 200 - goto error_exit; 201 - } 202 - 203 - /* Skip over any white space in the buffer */ 204 - 205 - while ((*string) && (isspace((int)*string) || *string == '\t')) { 206 - string++; 207 - } 208 - 209 - if (to_integer_op) { 210 - /* 211 - * Base equal to ACPI_ANY_BASE means 'ToInteger operation case'. 212 - * We need to determine if it is decimal or hexadecimal. 213 - */ 214 - if ((*string == '0') && (tolower((int)*(string + 1)) == 'x')) { 215 - sign_of0x = 1; 216 - base = 16; 217 - 218 - /* Skip over the leading '0x' */ 219 - string += 2; 220 - } else { 221 - base = 10; 222 - } 223 - } 224 - 225 - /* Any string left? Check that '0x' is not followed by white space. */ 226 - 227 - if (!(*string) || isspace((int)*string) || *string == '\t') { 228 - if (to_integer_op) { 229 - goto error_exit; 230 - } else { 231 - goto all_done; 232 - } 233 - } 234 - 235 - /* 236 - * Perform a 32-bit or 64-bit conversion, depending upon the current 237 - * execution mode of the interpreter 238 - */ 239 - dividend = (mode32) ? 
ACPI_UINT32_MAX : ACPI_UINT64_MAX; 240 - 241 - /* Main loop: convert the string to a 32- or 64-bit integer */ 242 - 243 - while (*string) { 244 - if (isdigit((int)*string)) { 245 - 246 - /* Convert ASCII 0-9 to Decimal value */ 247 - 248 - this_digit = ((u8)*string) - '0'; 249 - } else if (base == 10) { 250 - 251 - /* Digit is out of range; possible in to_integer case only */ 252 - 253 - term = 1; 254 - } else { 255 - this_digit = (u8)toupper((int)*string); 256 - if (isxdigit((int)this_digit)) { 257 - 258 - /* Convert ASCII Hex char to value */ 259 - 260 - this_digit = this_digit - 'A' + 10; 261 - } else { 262 - term = 1; 263 - } 264 - } 265 - 266 - if (term) { 267 - if (to_integer_op) { 268 - goto error_exit; 269 - } else { 270 - break; 271 - } 272 - } else if ((valid_digits == 0) && (this_digit == 0) 273 - && !sign_of0x) { 274 - 275 - /* Skip zeros */ 276 - string++; 277 - continue; 278 - } 279 - 280 - valid_digits++; 281 - 282 - if (sign_of0x 283 - && ((valid_digits > 16) 284 - || ((valid_digits > 8) && mode32))) { 285 - /* 286 - * This is to_integer operation case. 287 - * No any restrictions for string-to-integer conversion, 288 - * see ACPI spec. 
289 - */ 290 - goto error_exit; 291 - } 292 - 293 - /* Divide the digit into the correct position */ 294 - 295 - (void)acpi_ut_short_divide((dividend - (u64)this_digit), 296 - base, &quotient, NULL); 297 - 298 - if (return_value > quotient) { 299 - if (to_integer_op) { 300 - goto error_exit; 301 - } else { 302 - break; 303 - } 304 - } 305 - 306 - return_value *= base; 307 - return_value += this_digit; 308 - string++; 309 - } 310 - 311 - /* All done, normal exit */ 312 - 313 - all_done: 314 - 315 - ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Converted value: %8.8X%8.8X\n", 316 - ACPI_FORMAT_UINT64(return_value))); 317 - 318 - *ret_integer = return_value; 319 - return_ACPI_STATUS(AE_OK); 320 - 321 - error_exit: 322 - /* Base was set/validated above */ 323 - 324 - if (base == 10) { 325 - return_ACPI_STATUS(AE_BAD_DECIMAL_CONSTANT); 326 - } else { 327 - return_ACPI_STATUS(AE_BAD_HEX_CONSTANT); 328 - } 329 - } 330 - 331 51 /******************************************************************************* 332 52 * 333 53 * FUNCTION: acpi_ut_print_string ··· 62 342 * sequences. 63 343 * 64 344 ******************************************************************************/ 65 - 66 345 void acpi_ut_print_string(char *string, u16 max_length) 67 346 { 68 347 u32 i; ··· 301 582 302 583 pathname++; 303 584 } 304 - } 305 - #endif 306 - 307 - #if defined (ACPI_DEBUGGER) || defined (ACPI_APPLICATION) 308 - /******************************************************************************* 309 - * 310 - * FUNCTION: acpi_ut_safe_strcpy, acpi_ut_safe_strcat, acpi_ut_safe_strncat 311 - * 312 - * PARAMETERS: Adds a "DestSize" parameter to each of the standard string 313 - * functions. This is the size of the Destination buffer. 314 - * 315 - * RETURN: TRUE if the operation would overflow the destination buffer. 316 - * 317 - * DESCRIPTION: Safe versions of standard Clib string functions. Ensure that 318 - * the result of the operation will not overflow the output string 319 - * buffer. 
320 - * 321 - * NOTE: These functions are typically only helpful for processing 322 - * user input and command lines. For most ACPICA code, the 323 - * required buffer length is precisely calculated before buffer 324 - * allocation, so the use of these functions is unnecessary. 325 - * 326 - ******************************************************************************/ 327 - 328 - u8 acpi_ut_safe_strcpy(char *dest, acpi_size dest_size, char *source) 329 - { 330 - 331 - if (strlen(source) >= dest_size) { 332 - return (TRUE); 333 - } 334 - 335 - strcpy(dest, source); 336 - return (FALSE); 337 - } 338 - 339 - u8 acpi_ut_safe_strcat(char *dest, acpi_size dest_size, char *source) 340 - { 341 - 342 - if ((strlen(dest) + strlen(source)) >= dest_size) { 343 - return (TRUE); 344 - } 345 - 346 - strcat(dest, source); 347 - return (FALSE); 348 - } 349 - 350 - u8 351 - acpi_ut_safe_strncat(char *dest, 352 - acpi_size dest_size, 353 - char *source, acpi_size max_transfer_length) 354 - { 355 - acpi_size actual_transfer_length; 356 - 357 - actual_transfer_length = ACPI_MIN(max_transfer_length, strlen(source)); 358 - 359 - if ((strlen(dest) + actual_transfer_length) >= dest_size) { 360 - return (TRUE); 361 - } 362 - 363 - strncat(dest, source, max_transfer_length); 364 - return (FALSE); 365 585 } 366 586 #endif
+3 -9
drivers/acpi/acpica/utxface.c
··· 92 92 93 93 acpi_ut_mutex_terminate(); 94 94 95 - #ifdef ACPI_DEBUGGER 96 - 97 - /* Shut down the debugger */ 98 - 99 - acpi_db_terminate(); 100 - #endif 101 - 102 95 /* Now we can shutdown the OS-dependent layer */ 103 96 104 97 status = acpi_os_terminate(); ··· 510 517 511 518 /* Parameter validation */ 512 519 513 - if (!in_buffer || !return_buffer || (length < 16)) { 520 + if (!in_buffer || !return_buffer 521 + || (length < ACPI_PLD_REV1_BUFFER_SIZE)) { 514 522 return (AE_BAD_PARAMETER); 515 523 } 516 524 ··· 561 567 pld_info->rotation = ACPI_PLD_GET_ROTATION(&dword); 562 568 pld_info->order = ACPI_PLD_GET_ORDER(&dword); 563 569 564 - if (length >= ACPI_PLD_BUFFER_SIZE) { 570 + if (length >= ACPI_PLD_REV2_BUFFER_SIZE) { 565 571 566 572 /* Fifth 32-bit DWord (Revision 2 of _PLD) */ 567 573
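The utxface.c hunk replaces the magic length `16` with named `_PLD` buffer-size constants. A minimal sketch of the revised validation, assuming the conventional sizes (Revision 1 of `_PLD` is four 32-bit DWords, i.e. 16 bytes; Revision 2 adds a fifth DWord, 20 bytes — the constant values here are assumptions, not taken from the diff):

```c
/* Assumed values for illustration only */
#define ACPI_PLD_REV1_BUFFER_SIZE 16	/* 4 DWords, _PLD Revision 1 */
#define ACPI_PLD_REV2_BUFFER_SIZE 20	/* 5 DWords, _PLD Revision 2 */

/* Mirrors the parameter check: a buffer shorter than the Revision-1
 * minimum is rejected (AE_BAD_PARAMETER in the real code). */
static int pld_buffer_valid(const unsigned char *buf, unsigned int length)
{
	if (!buf || length < ACPI_PLD_REV1_BUFFER_SIZE)
		return 0;
	return 1;
}

/* Mirrors the later check: only decode the fifth DWord when the
 * buffer is large enough to contain the Revision-2 fields. */
static int pld_has_rev2_dword(unsigned int length)
{
	return length >= ACPI_PLD_REV2_BUFFER_SIZE;
}
```

Using the named constants makes the two different thresholds visible at a glance, where the original code compared against a bare `16` in one place and `ACPI_PLD_BUFFER_SIZE` in the other.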
-11
drivers/acpi/acpica/utxfinit.c
··· 124 124 return_ACPI_STATUS(status); 125 125 } 126 126 127 - /* If configured, initialize the AML debugger */ 128 - 129 - #ifdef ACPI_DEBUGGER 130 - status = acpi_db_initialize(); 131 - if (ACPI_FAILURE(status)) { 132 - ACPI_EXCEPTION((AE_INFO, status, 133 - "During Debugger initialization")); 134 - return_ACPI_STATUS(status); 135 - } 136 - #endif 137 - 138 127 return_ACPI_STATUS(AE_OK); 139 128 } 140 129
-4
drivers/acpi/apei/apei-base.c
··· 24 24 * but WITHOUT ANY WARRANTY; without even the implied warranty of 25 25 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 26 26 * GNU General Public License for more details. 27 - * 28 - * You should have received a copy of the GNU General Public License 29 - * along with this program; if not, write to the Free Software 30 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 31 27 */ 32 28 33 29 #include <linux/kernel.h>
-4
drivers/acpi/apei/einj.c
··· 18 18 * but WITHOUT ANY WARRANTY; without even the implied warranty of 19 19 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 20 20 * GNU General Public License for more details. 21 - * 22 - * You should have received a copy of the GNU General Public License 23 - * along with this program; if not, write to the Free Software 24 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 25 21 */ 26 22 27 23 #include <linux/kernel.h>
-4
drivers/acpi/apei/erst-dbg.c
··· 17 17 * but WITHOUT ANY WARRANTY; without even the implied warranty of 18 18 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 19 19 * GNU General Public License for more details. 20 - * 21 - * You should have received a copy of the GNU General Public License 22 - * along with this program; if not, write to the Free Software 23 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 24 20 */ 25 21 26 22 #include <linux/kernel.h>
-4
drivers/acpi/apei/erst.c
··· 18 18 * but WITHOUT ANY WARRANTY; without even the implied warranty of 19 19 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 20 20 * GNU General Public License for more details. 21 - * 22 - * You should have received a copy of the GNU General Public License 23 - * along with this program; if not, write to the Free Software 24 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 25 21 */ 26 22 27 23 #include <linux/kernel.h>
-4
drivers/acpi/apei/ghes.c
··· 23 23 * but WITHOUT ANY WARRANTY; without even the implied warranty of 24 24 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 25 25 * GNU General Public License for more details. 26 - * 27 - * You should have received a copy of the GNU General Public License 28 - * along with this program; if not, write to the Free Software 29 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 30 26 */ 31 27 32 28 #include <linux/kernel.h>
-4
drivers/acpi/apei/hest.c
··· 21 21 * but WITHOUT ANY WARRANTY; without even the implied warranty of 22 22 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 23 23 * GNU General Public License for more details. 24 - * 25 - * You should have received a copy of the GNU General Public License 26 - * along with this program; if not, write to the Free Software 27 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 28 24 */ 29 25 30 26 #include <linux/kernel.h>
-4
drivers/acpi/battery.c
··· 18 18 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 19 19 * General Public License for more details. 20 20 * 21 - * You should have received a copy of the GNU General Public License along 22 - * with this program; if not, write to the Free Software Foundation, Inc., 23 - * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. 24 - * 25 21 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 26 22 */ 27 23
-4
drivers/acpi/blacklist.c
··· 20 20 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 21 21 * General Public License for more details. 22 22 * 23 - * You should have received a copy of the GNU General Public License along 24 - * with this program; if not, write to the Free Software Foundation, Inc., 25 - * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. 26 - * 27 23 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 28 24 */ 29 25
+403 -5
drivers/acpi/bus.c
··· 15 15 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 16 16 * General Public License for more details. 17 17 * 18 - * You should have received a copy of the GNU General Public License along 19 - * with this program; if not, write to the Free Software Foundation, Inc., 20 - * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. 21 - * 22 18 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 23 19 */ 24 20 ··· 419 423 acpi_evaluate_ost(handle, type, ost_code, NULL); 420 424 } 421 425 426 + static void acpi_device_notify(acpi_handle handle, u32 event, void *data) 427 + { 428 + struct acpi_device *device = data; 429 + 430 + device->driver->ops.notify(device, event); 431 + } 432 + 433 + static void acpi_device_notify_fixed(void *data) 434 + { 435 + struct acpi_device *device = data; 436 + 437 + /* Fixed hardware devices have no handles */ 438 + acpi_device_notify(NULL, ACPI_FIXED_HARDWARE_EVENT, device); 439 + } 440 + 441 + static u32 acpi_device_fixed_event(void *data) 442 + { 443 + acpi_os_execute(OSL_NOTIFY_HANDLER, acpi_device_notify_fixed, data); 444 + return ACPI_INTERRUPT_HANDLED; 445 + } 446 + 447 + static int acpi_device_install_notify_handler(struct acpi_device *device) 448 + { 449 + acpi_status status; 450 + 451 + if (device->device_type == ACPI_BUS_TYPE_POWER_BUTTON) 452 + status = 453 + acpi_install_fixed_event_handler(ACPI_EVENT_POWER_BUTTON, 454 + acpi_device_fixed_event, 455 + device); 456 + else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON) 457 + status = 458 + acpi_install_fixed_event_handler(ACPI_EVENT_SLEEP_BUTTON, 459 + acpi_device_fixed_event, 460 + device); 461 + else 462 + status = acpi_install_notify_handler(device->handle, 463 + ACPI_DEVICE_NOTIFY, 464 + acpi_device_notify, 465 + device); 466 + 467 + if (ACPI_FAILURE(status)) 468 + return -EINVAL; 469 + return 0; 470 + } 471 + 472 + static void acpi_device_remove_notify_handler(struct acpi_device *device) 473 + { 474 + if 
(device->device_type == ACPI_BUS_TYPE_POWER_BUTTON) 475 + acpi_remove_fixed_event_handler(ACPI_EVENT_POWER_BUTTON, 476 + acpi_device_fixed_event); 477 + else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON) 478 + acpi_remove_fixed_event_handler(ACPI_EVENT_SLEEP_BUTTON, 479 + acpi_device_fixed_event); 480 + else 481 + acpi_remove_notify_handler(device->handle, ACPI_DEVICE_NOTIFY, 482 + acpi_device_notify); 483 + } 484 + 485 + /* -------------------------------------------------------------------------- 486 + Device Matching 487 + -------------------------------------------------------------------------- */ 488 + 489 + static struct acpi_device *acpi_primary_dev_companion(struct acpi_device *adev, 490 + const struct device *dev) 491 + { 492 + struct mutex *physical_node_lock = &adev->physical_node_lock; 493 + 494 + mutex_lock(physical_node_lock); 495 + if (list_empty(&adev->physical_node_list)) { 496 + adev = NULL; 497 + } else { 498 + const struct acpi_device_physical_node *node; 499 + 500 + node = list_first_entry(&adev->physical_node_list, 501 + struct acpi_device_physical_node, node); 502 + if (node->dev != dev) 503 + adev = NULL; 504 + } 505 + mutex_unlock(physical_node_lock); 506 + return adev; 507 + } 508 + 509 + /** 510 + * acpi_device_is_first_physical_node - Is given dev first physical node 511 + * @adev: ACPI companion device 512 + * @dev: Physical device to check 513 + * 514 + * Function checks if given @dev is the first physical devices attached to 515 + * the ACPI companion device. This distinction is needed in some cases 516 + * where the same companion device is shared between many physical devices. 517 + * 518 + * Note that the caller have to provide valid @adev pointer. 
+ */
+bool acpi_device_is_first_physical_node(struct acpi_device *adev,
+                                        const struct device *dev)
+{
+        return !!acpi_primary_dev_companion(adev, dev);
+}
+
+/*
+ * acpi_companion_match() - Can we match via ACPI companion device
+ * @dev: Device in question
+ *
+ * Check if the given device has an ACPI companion and if that companion has
+ * a valid list of PNP IDs, and if the device is the first (primary) physical
+ * device associated with it. Return the companion pointer if that's the case
+ * or NULL otherwise.
+ *
+ * If multiple physical devices are attached to a single ACPI companion, we need
+ * to be careful. The usage scenario for this kind of relationship is that all
+ * of the physical devices in question use resources provided by the ACPI
+ * companion. A typical case is an MFD device where all the sub-devices share
+ * the parent's ACPI companion. In such cases we can only allow the primary
+ * (first) physical device to be matched with the help of the companion's PNP
+ * IDs.
+ *
+ * Additional physical devices sharing the ACPI companion can still use
+ * resources available from it but they will be matched normally using functions
+ * provided by their bus types (and analogously for their modalias).
+ */
+struct acpi_device *acpi_companion_match(const struct device *dev)
+{
+        struct acpi_device *adev;
+
+        adev = ACPI_COMPANION(dev);
+        if (!adev)
+                return NULL;
+
+        if (list_empty(&adev->pnp.ids))
+                return NULL;
+
+        return acpi_primary_dev_companion(adev, dev);
+}
+
+/**
+ * acpi_of_match_device - Match device object using the "compatible" property.
+ * @adev: ACPI device object to match.
+ * @of_match_table: List of device IDs to match against.
+ *
+ * If @dev has an ACPI companion which has ACPI_DT_NAMESPACE_HID in its list of
+ * identifiers and a _DSD object with the "compatible" property, use that
+ * property to match against the given list of identifiers.
+ */
+static bool acpi_of_match_device(struct acpi_device *adev,
+                                 const struct of_device_id *of_match_table)
+{
+        const union acpi_object *of_compatible, *obj;
+        int i, nval;
+
+        if (!adev)
+                return false;
+
+        of_compatible = adev->data.of_compatible;
+        if (!of_match_table || !of_compatible)
+                return false;
+
+        if (of_compatible->type == ACPI_TYPE_PACKAGE) {
+                nval = of_compatible->package.count;
+                obj = of_compatible->package.elements;
+        } else { /* Must be ACPI_TYPE_STRING. */
+                nval = 1;
+                obj = of_compatible;
+        }
+        /* Now we can look for the driver DT compatible strings */
+        for (i = 0; i < nval; i++, obj++) {
+                const struct of_device_id *id;
+
+                for (id = of_match_table; id->compatible[0]; id++)
+                        if (!strcasecmp(obj->string.pointer, id->compatible))
+                                return true;
+        }
+
+        return false;
+}
+
+static bool __acpi_match_device_cls(const struct acpi_device_id *id,
+                                    struct acpi_hardware_id *hwid)
+{
+        int i, msk, byte_shift;
+        char buf[3];
+
+        if (!id->cls)
+                return false;
+
+        /* Apply class-code bitmask, before checking each class-code byte */
+        for (i = 1; i <= 3; i++) {
+                byte_shift = 8 * (3 - i);
+                msk = (id->cls_msk >> byte_shift) & 0xFF;
+                if (!msk)
+                        continue;
+
+                sprintf(buf, "%02x", (id->cls >> byte_shift) & msk);
+                if (strncmp(buf, &hwid->id[(i - 1) * 2], 2))
+                        return false;
+        }
+        return true;
+}
+
+static const struct acpi_device_id *__acpi_match_device(
+        struct acpi_device *device,
+        const struct acpi_device_id *ids,
+        const struct of_device_id *of_ids)
+{
+        const struct acpi_device_id *id;
+        struct acpi_hardware_id *hwid;
+
+        /*
+         * If the device is not present, it is unnecessary to load device
+         * driver for it.
+         */
+        if (!device || !device->status.present)
+                return NULL;
+
+        list_for_each_entry(hwid, &device->pnp.ids, list) {
+                /* First, check the ACPI/PNP IDs provided by the caller. */
+                for (id = ids; id->id[0] || id->cls; id++) {
+                        if (id->id[0] && !strcmp((char *) id->id, hwid->id))
+                                return id;
+                        else if (id->cls && __acpi_match_device_cls(id, hwid))
+                                return id;
+                }
+
+                /*
+                 * Next, check ACPI_DT_NAMESPACE_HID and try to match the
+                 * "compatible" property if found.
+                 *
+                 * The id returned by the below is not valid, but the only
+                 * caller passing non-NULL of_ids here is only interested in
+                 * whether or not the return value is NULL.
+                 */
+                if (!strcmp(ACPI_DT_NAMESPACE_HID, hwid->id)
+                    && acpi_of_match_device(device, of_ids))
+                        return id;
+        }
+        return NULL;
+}
+
+/**
+ * acpi_match_device - Match a struct device against a given list of ACPI IDs
+ * @ids: Array of struct acpi_device_id object to match against.
+ * @dev: The device structure to match.
+ *
+ * Check if @dev has a valid ACPI handle and if there is a struct acpi_device
+ * object for that handle and use that object to match against a given list of
+ * device IDs.
+ *
+ * Return a pointer to the first matching ID on success or %NULL on failure.
+ */
+const struct acpi_device_id *acpi_match_device(const struct acpi_device_id *ids,
+                                               const struct device *dev)
+{
+        return __acpi_match_device(acpi_companion_match(dev), ids, NULL);
+}
+EXPORT_SYMBOL_GPL(acpi_match_device);
+
+int acpi_match_device_ids(struct acpi_device *device,
+                          const struct acpi_device_id *ids)
+{
+        return __acpi_match_device(device, ids, NULL) ? 0 : -ENOENT;
+}
+EXPORT_SYMBOL(acpi_match_device_ids);
+
+bool acpi_driver_match_device(struct device *dev,
+                              const struct device_driver *drv)
+{
+        if (!drv->acpi_match_table)
+                return acpi_of_match_device(ACPI_COMPANION(dev),
+                                            drv->of_match_table);
+
+        return !!__acpi_match_device(acpi_companion_match(dev),
+                                     drv->acpi_match_table, drv->of_match_table);
+}
+EXPORT_SYMBOL_GPL(acpi_driver_match_device);
+
+/* --------------------------------------------------------------------------
+                          ACPI Driver Management
+   -------------------------------------------------------------------------- */
+
+/**
+ * acpi_bus_register_driver - register a driver with the ACPI bus
+ * @driver: driver being registered
+ *
+ * Registers a driver with the ACPI bus. Searches the namespace for all
+ * devices that match the driver's criteria and binds. Returns zero for
+ * success or a negative error status for failure.
+ */
+int acpi_bus_register_driver(struct acpi_driver *driver)
+{
+        int ret;
+
+        if (acpi_disabled)
+                return -ENODEV;
+        driver->drv.name = driver->name;
+        driver->drv.bus = &acpi_bus_type;
+        driver->drv.owner = driver->owner;
+
+        ret = driver_register(&driver->drv);
+        return ret;
+}
+
+EXPORT_SYMBOL(acpi_bus_register_driver);
+
+/**
+ * acpi_bus_unregister_driver - unregisters a driver with the ACPI bus
+ * @driver: driver to unregister
+ *
+ * Unregisters a driver with the ACPI bus. Searches the namespace for all
+ * devices that match the driver's criteria and unbinds.
+ */
+void acpi_bus_unregister_driver(struct acpi_driver *driver)
+{
+        driver_unregister(&driver->drv);
+}
+
+EXPORT_SYMBOL(acpi_bus_unregister_driver);
+
+/* --------------------------------------------------------------------------
+                            ACPI Bus operations
+   -------------------------------------------------------------------------- */
+
+static int acpi_bus_match(struct device *dev, struct device_driver *drv)
+{
+        struct acpi_device *acpi_dev = to_acpi_device(dev);
+        struct acpi_driver *acpi_drv = to_acpi_driver(drv);
+
+        return acpi_dev->flags.match_driver
+                && !acpi_match_device_ids(acpi_dev, acpi_drv->ids);
+}
+
+static int acpi_device_uevent(struct device *dev, struct kobj_uevent_env *env)
+{
+        return __acpi_device_uevent_modalias(to_acpi_device(dev), env);
+}
+
+static int acpi_device_probe(struct device *dev)
+{
+        struct acpi_device *acpi_dev = to_acpi_device(dev);
+        struct acpi_driver *acpi_drv = to_acpi_driver(dev->driver);
+        int ret;
+
+        if (acpi_dev->handler && !acpi_is_pnp_device(acpi_dev))
+                return -EINVAL;
+
+        if (!acpi_drv->ops.add)
+                return -ENOSYS;
+
+        ret = acpi_drv->ops.add(acpi_dev);
+        if (ret)
+                return ret;
+
+        acpi_dev->driver = acpi_drv;
+        ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+                          "Driver [%s] successfully bound to device [%s]\n",
+                          acpi_drv->name, acpi_dev->pnp.bus_id));
+
+        if (acpi_drv->ops.notify) {
+                ret = acpi_device_install_notify_handler(acpi_dev);
+                if (ret) {
+                        if (acpi_drv->ops.remove)
+                                acpi_drv->ops.remove(acpi_dev);
+
+                        acpi_dev->driver = NULL;
+                        acpi_dev->driver_data = NULL;
+                        return ret;
+                }
+        }
+
+        ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Found driver [%s] for device [%s]\n",
+                          acpi_drv->name, acpi_dev->pnp.bus_id));
+        get_device(dev);
+        return 0;
+}
+
+static int acpi_device_remove(struct device * dev)
+{
+        struct acpi_device *acpi_dev = to_acpi_device(dev);
+        struct acpi_driver *acpi_drv = acpi_dev->driver;
+
+        if (acpi_drv) {
+                if (acpi_drv->ops.notify)
+                        acpi_device_remove_notify_handler(acpi_dev);
+                if (acpi_drv->ops.remove)
+                        acpi_drv->ops.remove(acpi_dev);
+        }
+        acpi_dev->driver = NULL;
+        acpi_dev->driver_data = NULL;
+
+        put_device(dev);
+        return 0;
+}
+
+struct bus_type acpi_bus_type = {
+        .name           = "acpi",
+        .match          = acpi_bus_match,
+        .probe          = acpi_device_probe,
+        .remove         = acpi_device_remove,
+        .uevent         = acpi_device_uevent,
+};
+
 /* --------------------------------------------------------------------------
                          Initialization/Cleanup
    -------------------------------------------------------------------------- */
···
  */
        acpi_root_dir = proc_mkdir(ACPI_BUS_FILE_ROOT, NULL);
 
-       return 0;
+       result = bus_register(&acpi_bus_type);
+       if (!result)
+               return 0;
 
        /* Mimic structured exception handling */
 error1:
-4
drivers/acpi/button.c
···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 
-4
drivers/acpi/cm_sbs.c
···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 
-4
drivers/acpi/container.c
···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 #include <linux/acpi.h>
+2
drivers/acpi/debugfs.c
···
 #include <linux/debugfs.h>
 #include <linux/acpi.h>
 
+#include "internal.h"
+
 #define _COMPONENT              ACPI_SYSTEM_COMPONENT
 ACPI_MODULE_NAME("debugfs");
 
+8 -4
drivers/acpi/device_pm.c
···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
···
 
        if (dev->pm_domain)
                return -EEXIST;
+
+       /*
+        * Only attach the power domain to the first device if the
+        * companion is shared by multiple. This is to prevent doing power
+        * management twice.
+        */
+       if (!acpi_device_is_first_physical_node(adev, dev))
+               return -EBUSY;
 
        acpi_add_pm_notifier(adev, dev, acpi_pm_notify_work_func);
        dev->pm_domain = &acpi_general_pm_domain;
+521
drivers/acpi/device_sysfs.c
···
+/*
+ * drivers/acpi/device_sysfs.c - ACPI device sysfs attributes and modalias.
+ *
+ * Copyright (C) 2015, Intel Corp.
+ * Author: Mika Westerberg <mika.westerberg@linux.intel.com>
+ * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ */
+
+#include <linux/acpi.h>
+#include <linux/device.h>
+#include <linux/export.h>
+#include <linux/nls.h>
+
+#include "internal.h"
+
+/**
+ * create_pnp_modalias - Create hid/cid(s) string for modalias and uevent
+ * @acpi_dev: ACPI device object.
+ * @modalias: Buffer to print into.
+ * @size: Size of the buffer.
+ *
+ * Creates hid/cid(s) string needed for modalias and uevent
+ * e.g. on a device with hid:IBM0001 and cid:ACPI0001 you get:
+ * char *modalias: "acpi:IBM0001:ACPI0001"
+ * Return: 0: no _HID and no _CID
+ *         -EINVAL: output error
+ *         -ENOMEM: output is truncated
+ */
+static int create_pnp_modalias(struct acpi_device *acpi_dev, char *modalias,
+                               int size)
+{
+        int len;
+        int count;
+        struct acpi_hardware_id *id;
+
+        /*
+         * Since we skip ACPI_DT_NAMESPACE_HID from the modalias below, 0 should
+         * be returned if ACPI_DT_NAMESPACE_HID is the only ACPI/PNP ID in the
+         * device's list.
+         */
+        count = 0;
+        list_for_each_entry(id, &acpi_dev->pnp.ids, list)
+                if (strcmp(id->id, ACPI_DT_NAMESPACE_HID))
+                        count++;
+
+        if (!count)
+                return 0;
+
+        len = snprintf(modalias, size, "acpi:");
+        if (len <= 0)
+                return len;
+
+        size -= len;
+
+        list_for_each_entry(id, &acpi_dev->pnp.ids, list) {
+                if (!strcmp(id->id, ACPI_DT_NAMESPACE_HID))
+                        continue;
+
+                count = snprintf(&modalias[len], size, "%s:", id->id);
+                if (count < 0)
+                        return -EINVAL;
+
+                if (count >= size)
+                        return -ENOMEM;
+
+                len += count;
+                size -= count;
+        }
+        modalias[len] = '\0';
+        return len;
+}
+
+/**
+ * create_of_modalias - Creates DT compatible string for modalias and uevent
+ * @acpi_dev: ACPI device object.
+ * @modalias: Buffer to print into.
+ * @size: Size of the buffer.
+ *
+ * Expose DT compatible modalias as of:NnameTCcompatible. This function should
+ * only be called for devices having ACPI_DT_NAMESPACE_HID in their list of
+ * ACPI/PNP IDs.
+ */
+static int create_of_modalias(struct acpi_device *acpi_dev, char *modalias,
+                              int size)
+{
+        struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
+        const union acpi_object *of_compatible, *obj;
+        int len, count;
+        int i, nval;
+        char *c;
+
+        acpi_get_name(acpi_dev->handle, ACPI_SINGLE_NAME, &buf);
+        /* DT strings are all in lower case */
+        for (c = buf.pointer; *c != '\0'; c++)
+                *c = tolower(*c);
+
+        len = snprintf(modalias, size, "of:N%sT", (char *)buf.pointer);
+        ACPI_FREE(buf.pointer);
+
+        if (len <= 0)
+                return len;
+
+        of_compatible = acpi_dev->data.of_compatible;
+        if (of_compatible->type == ACPI_TYPE_PACKAGE) {
+                nval = of_compatible->package.count;
+                obj = of_compatible->package.elements;
+        } else { /* Must be ACPI_TYPE_STRING. */
+                nval = 1;
+                obj = of_compatible;
+        }
+        for (i = 0; i < nval; i++, obj++) {
+                count = snprintf(&modalias[len], size, "C%s",
+                                 obj->string.pointer);
+                if (count < 0)
+                        return -EINVAL;
+
+                if (count >= size)
+                        return -ENOMEM;
+
+                len += count;
+                size -= count;
+        }
+        modalias[len] = '\0';
+        return len;
+}
+
+int __acpi_device_uevent_modalias(struct acpi_device *adev,
+                                  struct kobj_uevent_env *env)
+{
+        int len;
+
+        if (!adev)
+                return -ENODEV;
+
+        if (list_empty(&adev->pnp.ids))
+                return 0;
+
+        if (add_uevent_var(env, "MODALIAS="))
+                return -ENOMEM;
+
+        len = create_pnp_modalias(adev, &env->buf[env->buflen - 1],
+                                  sizeof(env->buf) - env->buflen);
+        if (len < 0)
+                return len;
+
+        env->buflen += len;
+        if (!adev->data.of_compatible)
+                return 0;
+
+        if (len > 0 && add_uevent_var(env, "MODALIAS="))
+                return -ENOMEM;
+
+        len = create_of_modalias(adev, &env->buf[env->buflen - 1],
+                                 sizeof(env->buf) - env->buflen);
+        if (len < 0)
+                return len;
+
+        env->buflen += len;
+
+        return 0;
+}
+
+/**
+ * acpi_device_uevent_modalias - uevent modalias for ACPI-enumerated devices.
+ *
+ * Create the uevent modalias field for ACPI-enumerated devices.
+ *
+ * Because other buses do not support ACPI HIDs & CIDs, e.g. for a device with
+ * hid:IBM0001 and cid:ACPI0001 you get: "acpi:IBM0001:ACPI0001".
+ */
+int acpi_device_uevent_modalias(struct device *dev, struct kobj_uevent_env *env)
+{
+        return __acpi_device_uevent_modalias(acpi_companion_match(dev), env);
+}
+EXPORT_SYMBOL_GPL(acpi_device_uevent_modalias);
+
+static int __acpi_device_modalias(struct acpi_device *adev, char *buf, int size)
+{
+        int len, count;
+
+        if (!adev)
+                return -ENODEV;
+
+        if (list_empty(&adev->pnp.ids))
+                return 0;
+
+        len = create_pnp_modalias(adev, buf, size - 1);
+        if (len < 0) {
+                return len;
+        } else if (len > 0) {
+                buf[len++] = '\n';
+                size -= len;
+        }
+        if (!adev->data.of_compatible)
+                return len;
+
+        count = create_of_modalias(adev, buf + len, size - 1);
+        if (count < 0) {
+                return count;
+        } else if (count > 0) {
+                len += count;
+                buf[len++] = '\n';
+        }
+
+        return len;
+}
+
+/**
+ * acpi_device_modalias - modalias sysfs attribute for ACPI-enumerated devices.
+ *
+ * Create the modalias sysfs attribute for ACPI-enumerated devices.
+ *
+ * Because other buses do not support ACPI HIDs & CIDs, e.g. for a device with
+ * hid:IBM0001 and cid:ACPI0001 you get: "acpi:IBM0001:ACPI0001".
+ */
+int acpi_device_modalias(struct device *dev, char *buf, int size)
+{
+        return __acpi_device_modalias(acpi_companion_match(dev), buf, size);
+}
+EXPORT_SYMBOL_GPL(acpi_device_modalias);
+
+static ssize_t
+acpi_device_modalias_show(struct device *dev, struct device_attribute *attr,
+                          char *buf)
+{
+        return __acpi_device_modalias(to_acpi_device(dev), buf, 1024);
+}
+static DEVICE_ATTR(modalias, 0444, acpi_device_modalias_show, NULL);
+
+static ssize_t real_power_state_show(struct device *dev,
+                                     struct device_attribute *attr, char *buf)
+{
+        struct acpi_device *adev = to_acpi_device(dev);
+        int state;
+        int ret;
+
+        ret = acpi_device_get_power(adev, &state);
+        if (ret)
+                return ret;
+
+        return sprintf(buf, "%s\n", acpi_power_state_string(state));
+}
+
+static DEVICE_ATTR(real_power_state, 0444, real_power_state_show, NULL);
+
+static ssize_t power_state_show(struct device *dev,
+                                struct device_attribute *attr, char *buf)
+{
+        struct acpi_device *adev = to_acpi_device(dev);
+
+        return sprintf(buf, "%s\n", acpi_power_state_string(adev->power.state));
+}
+
+static DEVICE_ATTR(power_state, 0444, power_state_show, NULL);
+
+static ssize_t
+acpi_eject_store(struct device *d, struct device_attribute *attr,
+                 const char *buf, size_t count)
+{
+        struct acpi_device *acpi_device = to_acpi_device(d);
+        acpi_object_type not_used;
+        acpi_status status;
+
+        if (!count || buf[0] != '1')
+                return -EINVAL;
+
+        if ((!acpi_device->handler || !acpi_device->handler->hotplug.enabled)
+            && !acpi_device->driver)
+                return -ENODEV;
+
+        status = acpi_get_type(acpi_device->handle, &not_used);
+        if (ACPI_FAILURE(status) || !acpi_device->flags.ejectable)
+                return -ENODEV;
+
+        get_device(&acpi_device->dev);
+        status = acpi_hotplug_schedule(acpi_device, ACPI_OST_EC_OSPM_EJECT);
+        if (ACPI_SUCCESS(status))
+                return count;
+
+        put_device(&acpi_device->dev);
+        acpi_evaluate_ost(acpi_device->handle, ACPI_OST_EC_OSPM_EJECT,
+                          ACPI_OST_SC_NON_SPECIFIC_FAILURE, NULL);
+        return status == AE_NO_MEMORY ? -ENOMEM : -EAGAIN;
+}
+
+static DEVICE_ATTR(eject, 0200, NULL, acpi_eject_store);
+
+static ssize_t
+acpi_device_hid_show(struct device *dev, struct device_attribute *attr,
+                     char *buf)
+{
+        struct acpi_device *acpi_dev = to_acpi_device(dev);
+
+        return sprintf(buf, "%s\n", acpi_device_hid(acpi_dev));
+}
+static DEVICE_ATTR(hid, 0444, acpi_device_hid_show, NULL);
+
+static ssize_t acpi_device_uid_show(struct device *dev,
+                                    struct device_attribute *attr, char *buf)
+{
+        struct acpi_device *acpi_dev = to_acpi_device(dev);
+
+        return sprintf(buf, "%s\n", acpi_dev->pnp.unique_id);
+}
+static DEVICE_ATTR(uid, 0444, acpi_device_uid_show, NULL);
+
+static ssize_t acpi_device_adr_show(struct device *dev,
+                                    struct device_attribute *attr, char *buf)
+{
+        struct acpi_device *acpi_dev = to_acpi_device(dev);
+
+        return sprintf(buf, "0x%08x\n",
+                       (unsigned int)(acpi_dev->pnp.bus_address));
+}
+static DEVICE_ATTR(adr, 0444, acpi_device_adr_show, NULL);
+
+static ssize_t
+acpi_device_path_show(struct device *dev, struct device_attribute *attr,
+                      char *buf)
+{
+        struct acpi_device *acpi_dev = to_acpi_device(dev);
+        struct acpi_buffer path = {ACPI_ALLOCATE_BUFFER, NULL};
+        int result;
+
+        result = acpi_get_name(acpi_dev->handle, ACPI_FULL_PATHNAME, &path);
+        if (result)
+                goto end;
+
+        result = sprintf(buf, "%s\n", (char*)path.pointer);
+        kfree(path.pointer);
+end:
+        return result;
+}
+static DEVICE_ATTR(path, 0444, acpi_device_path_show, NULL);
+
+/* sysfs file that shows description text from the ACPI _STR method */
+static ssize_t description_show(struct device *dev,
+                                struct device_attribute *attr,
+                                char *buf)
+{
+        struct acpi_device *acpi_dev = to_acpi_device(dev);
+        int result;
+
+        if (acpi_dev->pnp.str_obj == NULL)
+                return 0;
+
+        /*
+         * The _STR object contains a Unicode identifier for a device.
+         * We need to convert to utf-8 so it can be displayed.
+         */
+        result = utf16s_to_utf8s(
+                (wchar_t *)acpi_dev->pnp.str_obj->buffer.pointer,
+                acpi_dev->pnp.str_obj->buffer.length,
+                UTF16_LITTLE_ENDIAN, buf,
+                PAGE_SIZE);
+
+        buf[result++] = '\n';
+
+        return result;
+}
+static DEVICE_ATTR(description, 0444, description_show, NULL);
+
+static ssize_t
+acpi_device_sun_show(struct device *dev, struct device_attribute *attr,
+                     char *buf)
+{
+        struct acpi_device *acpi_dev = to_acpi_device(dev);
+        acpi_status status;
+        unsigned long long sun;
+
+        status = acpi_evaluate_integer(acpi_dev->handle, "_SUN", NULL, &sun);
+        if (ACPI_FAILURE(status))
+                return -ENODEV;
+
+        return sprintf(buf, "%llu\n", sun);
+}
+static DEVICE_ATTR(sun, 0444, acpi_device_sun_show, NULL);
+
+static ssize_t status_show(struct device *dev, struct device_attribute *attr,
+                           char *buf)
+{
+        struct acpi_device *acpi_dev = to_acpi_device(dev);
+        acpi_status status;
+        unsigned long long sta;
+
+        status = acpi_evaluate_integer(acpi_dev->handle, "_STA", NULL, &sta);
+        if (ACPI_FAILURE(status))
+                return -ENODEV;
+
+        return sprintf(buf, "%llu\n", sta);
+}
+static DEVICE_ATTR_RO(status);
+
+/**
+ * acpi_device_setup_files - Create sysfs attributes of an ACPI device.
+ * @dev: ACPI device object.
+ */
+int acpi_device_setup_files(struct acpi_device *dev)
+{
+        struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
+        acpi_status status;
+        int result = 0;
+
+        /*
+         * Devices gotten from FADT don't have a "path" attribute
+         */
+        if (dev->handle) {
+                result = device_create_file(&dev->dev, &dev_attr_path);
+                if (result)
+                        goto end;
+        }
+
+        if (!list_empty(&dev->pnp.ids)) {
+                result = device_create_file(&dev->dev, &dev_attr_hid);
+                if (result)
+                        goto end;
+
+                result = device_create_file(&dev->dev, &dev_attr_modalias);
+                if (result)
+                        goto end;
+        }
+
+        /*
+         * If device has _STR, 'description' file is created
+         */
+        if (acpi_has_method(dev->handle, "_STR")) {
+                status = acpi_evaluate_object(dev->handle, "_STR",
+                                              NULL, &buffer);
+                if (ACPI_FAILURE(status))
+                        buffer.pointer = NULL;
+                dev->pnp.str_obj = buffer.pointer;
+                result = device_create_file(&dev->dev, &dev_attr_description);
+                if (result)
+                        goto end;
+        }
+
+        if (dev->pnp.type.bus_address)
+                result = device_create_file(&dev->dev, &dev_attr_adr);
+        if (dev->pnp.unique_id)
+                result = device_create_file(&dev->dev, &dev_attr_uid);
+
+        if (acpi_has_method(dev->handle, "_SUN")) {
+                result = device_create_file(&dev->dev, &dev_attr_sun);
+                if (result)
+                        goto end;
+        }
+
+        if (acpi_has_method(dev->handle, "_STA")) {
+                result = device_create_file(&dev->dev, &dev_attr_status);
+                if (result)
+                        goto end;
+        }
+
+        /*
+         * If device has _EJ0, 'eject' file is created that is used to trigger
+         * hot-removal function from userland.
+         */
+        if (acpi_has_method(dev->handle, "_EJ0")) {
+                result = device_create_file(&dev->dev, &dev_attr_eject);
+                if (result)
+                        return result;
+        }
+
+        if (dev->flags.power_manageable) {
+                result = device_create_file(&dev->dev, &dev_attr_power_state);
+                if (result)
+                        return result;
+
+                if (dev->power.flags.power_resources)
+                        result = device_create_file(&dev->dev,
+                                                    &dev_attr_real_power_state);
+        }
+
+end:
+        return result;
+}
+
+/**
+ * acpi_device_remove_files - Remove sysfs attributes of an ACPI device.
+ * @dev: ACPI device object.
+ */
+void acpi_device_remove_files(struct acpi_device *dev)
+{
+        if (dev->flags.power_manageable) {
+                device_remove_file(&dev->dev, &dev_attr_power_state);
+                if (dev->power.flags.power_resources)
+                        device_remove_file(&dev->dev,
+                                           &dev_attr_real_power_state);
+        }
+
+        /*
+         * If device has _STR, remove 'description' file
+         */
+        if (acpi_has_method(dev->handle, "_STR")) {
+                kfree(dev->pnp.str_obj);
+                device_remove_file(&dev->dev, &dev_attr_description);
+        }
+        /*
+         * If device has _EJ0, remove 'eject' file.
+         */
+        if (acpi_has_method(dev->handle, "_EJ0"))
+                device_remove_file(&dev->dev, &dev_attr_eject);
+
+        if (acpi_has_method(dev->handle, "_SUN"))
+                device_remove_file(&dev->dev, &dev_attr_sun);
+
+        if (dev->pnp.unique_id)
+                device_remove_file(&dev->dev, &dev_attr_uid);
+        if (dev->pnp.type.bus_address)
+                device_remove_file(&dev->dev, &dev_attr_adr);
+        device_remove_file(&dev->dev, &dev_attr_modalias);
+        device_remove_file(&dev->dev, &dev_attr_hid);
+        if (acpi_has_method(dev->handle, "_STA"))
+                device_remove_file(&dev->dev, &dev_attr_status);
+        if (dev->handle)
+                device_remove_file(&dev->dev, &dev_attr_path);
+}
-4
drivers/acpi/dock.c
···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 
+60 -26
drivers/acpi/ec.c
···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 
···
        u8 flags;
 };
 
+struct acpi_ec_query {
+        struct transaction transaction;
+        struct work_struct work;
+        struct acpi_ec_query_handler *handler;
+};
+
 static int acpi_ec_query(struct acpi_ec *ec, u8 *data);
 static void advance_transaction(struct acpi_ec *ec);
+static void acpi_ec_event_handler(struct work_struct *work);
+static void acpi_ec_event_processor(struct work_struct *work);
 
 struct acpi_ec *boot_ec, *first_ec;
 EXPORT_SYMBOL(first_ec);
···
 }
 EXPORT_SYMBOL_GPL(acpi_ec_remove_query_handler);
 
-static void acpi_ec_run(void *cxt)
+static struct acpi_ec_query *acpi_ec_create_query(u8 *pval)
 {
-        struct acpi_ec_query_handler *handler = cxt;
+        struct acpi_ec_query *q;
+        struct transaction *t;
 
-        if (!handler)
-                return;
+        q = kzalloc(sizeof (struct acpi_ec_query), GFP_KERNEL);
+        if (!q)
+                return NULL;
+        INIT_WORK(&q->work, acpi_ec_event_processor);
+        t = &q->transaction;
+        t->command = ACPI_EC_COMMAND_QUERY;
+        t->rdata = pval;
+        t->rlen = 1;
+        return q;
+}
+
+static void acpi_ec_delete_query(struct acpi_ec_query *q)
+{
+        if (q) {
+                if (q->handler)
+                        acpi_ec_put_query_handler(q->handler);
+                kfree(q);
+        }
+}
+
+static void acpi_ec_event_processor(struct work_struct *work)
+{
+        struct acpi_ec_query *q = container_of(work, struct acpi_ec_query, work);
+        struct acpi_ec_query_handler *handler = q->handler;
+
        ec_dbg_evt("Query(0x%02x) started", handler->query_bit);
        if (handler->func)
                handler->func(handler->data);
        else if (handler->handle)
                acpi_evaluate_object(handler->handle, NULL, NULL, NULL);
        ec_dbg_evt("Query(0x%02x) stopped", handler->query_bit);
-        acpi_ec_put_query_handler(handler);
+        acpi_ec_delete_query(q);
 }
 
 static int acpi_ec_query(struct acpi_ec *ec, u8 *data)
 {
        u8 value = 0;
        int result;
-        acpi_status status;
        struct acpi_ec_query_handler *handler;
-        struct transaction t = {.command = ACPI_EC_COMMAND_QUERY,
-                                .wdata = NULL, .rdata = &value,
-                                .wlen = 0, .rlen = 1};
+        struct acpi_ec_query *q;
+
+        q = acpi_ec_create_query(&value);
+        if (!q)
+                return -ENOMEM;
 
        /*
         * Query the EC to find out which _Qxx method we need to evaluate.
         * Note that successful completion of the query causes the ACPI_EC_SCI
         * bit to be cleared (and thus clearing the interrupt source).
         */
-        result = acpi_ec_transaction(ec, &t);
-        if (result)
-                return result;
-        if (data)
-                *data = value;
+        result = acpi_ec_transaction(ec, &q->transaction);
        if (!value)
-                return -ENODATA;
+                result = -ENODATA;
+        if (result)
+                goto err_exit;
 
        mutex_lock(&ec->mutex);
        list_for_each_entry(handler, &ec->list, node) {
                if (value == handler->query_bit) {
-                        /* have custom handler for this bit */
-                        handler = acpi_ec_get_query_handler(handler);
+                        q->handler = acpi_ec_get_query_handler(handler);
                        ec_dbg_evt("Query(0x%02x) scheduled",
-                                   handler->query_bit);
-                        status = acpi_os_execute((handler->func) ?
-                                OSL_NOTIFY_HANDLER : OSL_GPE_HANDLER,
-                                acpi_ec_run, handler);
-                        if (ACPI_FAILURE(status))
+                                   q->handler->query_bit);
+                        /*
+                         * It is reported that _Qxx are evaluated in a
+                         * parallel way on Windows:
+                         * https://bugzilla.kernel.org/show_bug.cgi?id=94411
+                         */
+                        if (!schedule_work(&q->work))
                                result = -EBUSY;
                        break;
                }
        }
        mutex_unlock(&ec->mutex);
+
+err_exit:
+        if (result && q)
+                acpi_ec_delete_query(q);
+        if (data)
+                *data = value;
        return result;
 }
 
-4
drivers/acpi/fan.c
···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 
-4
drivers/acpi/hed.c
···
  * but WITHOUT ANY WARRANTY; without even the implied warranty of
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
  * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
  */
 
 #include <linux/kernel.h>
+12 -4
drivers/acpi/internal.h
···
  * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
  * more details.
  *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc.,
- * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
  */
 
 #ifndef _ACPI_INTERNAL_H_
···
 
 #ifdef CONFIG_DEBUG_FS
 extern struct dentry *acpi_debugfs_dir;
-int acpi_debugfs_init(void);
+void acpi_debugfs_init(void);
 #else
 static inline void acpi_debugfs_init(void) { return; }
 #endif
···
                          void (*release)(struct device *));
 void acpi_init_device_object(struct acpi_device *device, acpi_handle handle,
                              int type, unsigned long long sta);
+int acpi_device_setup_files(struct acpi_device *dev);
+void acpi_device_remove_files(struct acpi_device *dev);
 void acpi_device_add_finalize(struct acpi_device *device);
 void acpi_free_pnp_ids(struct acpi_device_pnp *pnp);
 bool acpi_device_is_present(struct acpi_device *adev);
 bool acpi_device_is_battery(struct acpi_device *adev);
+bool acpi_device_is_first_physical_node(struct acpi_device *adev,
+                                        const struct device *dev);
+
+/* --------------------------------------------------------------------------
+                     Device Matching and Notification
+   -------------------------------------------------------------------------- */
+struct acpi_device *acpi_companion_match(const struct device *dev);
+int __acpi_device_uevent_modalias(struct acpi_device *adev,
+                                  struct kobj_uevent_env *env);
 
 /* --------------------------------------------------------------------------
                             Power Resource
drivers/acpi/numa.c | -4

···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
  * GNU General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  *
  */
drivers/acpi/osl.c | +11 -34

···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
  * GNU General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  *
  */
···
 
 #include <asm/io.h>
 #include <asm/uaccess.h>
+#include <asm-generic/io-64-nonatomic-lo-hi.h>
 
 #include "internal.h"
···
 static struct workqueue_struct *kacpid_wq;
 static struct workqueue_struct *kacpi_notify_wq;
 static struct workqueue_struct *kacpi_hotplug_wq;
+static bool acpi_os_initialized;
 
 /*
  * This list of permanent mappings is for memory that may be accessed from
···
 
 EXPORT_SYMBOL(acpi_os_write_port);
 
-#ifdef readq
-static inline u64 read64(const volatile void __iomem *addr)
-{
-        return readq(addr);
-}
-#else
-static inline u64 read64(const volatile void __iomem *addr)
-{
-        u64 l, h;
-        l = readl(addr);
-        h = readl(addr+4);
-        return l | (h << 32);
-}
-#endif
-
 acpi_status
 acpi_os_read_memory(acpi_physical_address phys_addr, u64 *value, u32 width)
 {
···
                 *(u32 *) value = readl(virt_addr);
                 break;
         case 64:
-                *(u64 *) value = read64(virt_addr);
+                *(u64 *) value = readq(virt_addr);
                 break;
         default:
                 BUG();
···
 
         return AE_OK;
 }
-
-#ifdef writeq
-static inline void write64(u64 val, volatile void __iomem *addr)
-{
-        writeq(val, addr);
-}
-#else
-static inline void write64(u64 val, volatile void __iomem *addr)
-{
-        writel(val, addr);
-        writel(val>>32, addr+4);
-}
-#endif
 
 acpi_status
 acpi_os_write_memory(acpi_physical_address phys_addr, u64 value, u32 width)
···
                 writel(value, virt_addr);
                 break;
         case 64:
-                write64(value, virt_addr);
+                writeq(value, virt_addr);
                 break;
         default:
                 BUG();
···
         long jiffies;
         int ret = 0;
 
+        if (!acpi_os_initialized)
+                return AE_OK;
+
         if (!sem || (units < 1))
                 return AE_BAD_PARAMETER;
···
 acpi_status acpi_os_signal_semaphore(acpi_handle handle, u32 units)
 {
         struct semaphore *sem = (struct semaphore *)handle;
+
+        if (!acpi_os_initialized)
+                return AE_OK;
 
         if (!sem || (units < 1))
                 return AE_BAD_PARAMETER;
···
                 rv = acpi_os_map_generic_address(&acpi_gbl_FADT.reset_register);
                 pr_debug(PREFIX "%s: map reset_reg status %d\n", __func__, rv);
         }
+        acpi_os_initialized = true;
 
         return AE_OK;
 }
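The deleted read64()/write64() fallbacks composed one 64-bit MMIO access out of two 32-bit ones, low word first; the patch replaces them with the generic helpers from asm-generic/io-64-nonatomic-lo-hi.h, which implement the same ordering. A minimal user-space sketch of that lo-hi composition, with plain memory standing in for an ioremap()'d region (illustrative only, not the kernel helpers themselves):

```c
#include <stdint.h>

/* Read a 64-bit value as two 32-bit halves, low word first --
 * the ordering the removed osl.c fallback (and the lo-hi generic
 * helper) uses. */
static uint64_t read64_lo_hi(const volatile uint32_t *addr)
{
    uint64_t l = addr[0];               /* was: readl(addr) */
    uint64_t h = addr[1];               /* was: readl(addr + 4) */

    return l | (h << 32);
}

/* Write the low word first, then the high word. */
static void write64_lo_hi(uint64_t val, volatile uint32_t *addr)
{
    addr[0] = (uint32_t)val;            /* was: writel(val, addr) */
    addr[1] = (uint32_t)(val >> 32);    /* was: writel(val >> 32, addr + 4) */
}
```

Note the split access is not atomic, which is why the generic headers are explicitly named "nonatomic": a device register that latches on the high-word write still sees a consistent value, but a concurrently updating counter may tear.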
drivers/acpi/pci_irq.c | -4

···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 
drivers/acpi/pci_link.c | +16 -4

···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  *
  * TBD:
···
         if (irq >= 0 && irq < ARRAY_SIZE(acpi_irq_penalty)) {
                 if (active)
                         acpi_irq_penalty[irq] += PIRQ_PENALTY_ISA_USED;
+                else
+                        acpi_irq_penalty[irq] += PIRQ_PENALTY_PCI_USING;
+        }
+}
+
+/*
+ * Penalize IRQ used by ACPI SCI. If ACPI SCI pin attributes conflict with
+ * PCI IRQ attributes, mark ACPI SCI as ISA_ALWAYS so it won't be use for
+ * PCI IRQs.
+ */
+void acpi_penalize_sci_irq(int irq, int trigger, int polarity)
+{
+        if (irq >= 0 && irq < ARRAY_SIZE(acpi_irq_penalty)) {
+                if (trigger != ACPI_MADT_TRIGGER_LEVEL ||
+                    polarity != ACPI_MADT_POLARITY_ACTIVE_LOW)
+                        acpi_irq_penalty[irq] += PIRQ_PENALTY_ISA_ALWAYS;
                 else
                         acpi_irq_penalty[irq] += PIRQ_PENALTY_PCI_USING;
         }
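The new acpi_penalize_sci_irq() treats any SCI that is not level-triggered and active-low as incompatible with PCI interrupt sharing and weights it out of use. A stand-alone sketch of just that predicate; the numeric values below are hypothetical stand-ins for the ACPI_MADT_* and PIRQ_PENALTY_* constants, which are not part of this diff:

```c
/* Hypothetical stand-ins for ACPI_MADT_* / PIRQ_PENALTY_* values. */
#define TRIGGER_LEVEL       3
#define POLARITY_ACTIVE_LOW 3
#define PENALTY_ISA_ALWAYS  100000  /* effectively: never hand to PCI */
#define PENALTY_PCI_USING   10      /* normal cost of PCI IRQ sharing */

/* Return the penalty increment the patch applies for the SCI pin. */
static int sci_penalty(int trigger, int polarity)
{
    /* PCI interrupt lines are level-triggered, active-low; an SCI
     * configured any other way clashes with that, so mark it as
     * always unusable for PCI routing. */
    if (trigger != TRIGGER_LEVEL || polarity != POLARITY_ACTIVE_LOW)
        return PENALTY_ISA_ALWAYS;

    return PENALTY_PCI_USING;
}
```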
drivers/acpi/pci_root.c | -4

···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 
drivers/acpi/pci_slot.c | -4

···
  * WITHOUT ANY WARRANTY; without even the implied warranty of
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
  */
 
 #include <linux/kernel.h>
drivers/acpi/power.c | +9 -10

 /*
- * acpi_power.c - ACPI Bus Power Management ($Revision: 39 $)
+ * drivers/acpi/power.c - ACPI Power Resources management.
  *
- * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com>
- * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>
+ * Copyright (C) 2001 - 2015 Intel Corp.
+ * Author: Andy Grover <andrew.grover@intel.com>
+ * Author: Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>
+ * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  *
···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 
···
  * ACPI power-managed devices may be controlled in two ways:
  * 1. via "Device Specific (D-State) Control"
  * 2. via "Power Resource Control".
- * This module is used to manage devices relying on Power Resource Control.
+ * The code below deals with ACPI Power Resources control.
  *
- * An ACPI "power resource object" describes a software controllable power
- * plane, clock plane, or other resource used by a power managed device.
+ * An ACPI "power resource object" represents a software controllable power
+ * plane, clock plane, or other resource depended on by a device.
+ *
  * A device may rely on multiple power resources, and a power resource
  * may be shared by multiple devices.
  */
drivers/acpi/processor_driver.c | +59 -33

···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 
···
         return NOTIFY_OK;
 }
 
-static struct notifier_block __refdata acpi_cpu_notifier = {
+static struct notifier_block acpi_cpu_notifier = {
         .notifier_call = acpi_cpu_soft_notify,
 };
 
-static int __acpi_processor_start(struct acpi_device *device)
+#ifdef CONFIG_ACPI_CPU_FREQ_PSS
+static int acpi_pss_perf_init(struct acpi_processor *pr,
+                              struct acpi_device *device)
 {
-        struct acpi_processor *pr = acpi_driver_data(device);
-        acpi_status status;
         int result = 0;
 
-        if (!pr)
-                return -ENODEV;
-
-        if (pr->flags.need_hotplug_init)
-                return 0;
-
-#ifdef CONFIG_CPU_FREQ
         acpi_processor_ppc_has_changed(pr, 0);
-#endif
+
         acpi_processor_get_throttling_info(pr);
 
         if (pr->flags.throttling)
                 pr->flags.limit = 1;
 
-        if (!cpuidle_get_driver() || cpuidle_get_driver() == &acpi_idle_driver)
-                acpi_processor_power_init(pr);
-
         pr->cdev = thermal_cooling_device_register("Processor", device,
                                                    &processor_cooling_ops);
         if (IS_ERR(pr->cdev)) {
                 result = PTR_ERR(pr->cdev);
-                goto err_power_exit;
+                return result;
         }
 
         dev_dbg(&device->dev, "registered as cooling_device%d\n",
···
                         "Failed to create sysfs link 'thermal_cooling'\n");
                 goto err_thermal_unregister;
         }
+
         result = sysfs_create_link(&pr->cdev->device.kobj,
                                    &device->dev.kobj,
                                    "device");
···
                 goto err_remove_sysfs_thermal;
         }
 
-        status = acpi_install_notify_handler(device->handle, ACPI_DEVICE_NOTIFY,
-                                             acpi_processor_notify, device);
-        if (ACPI_SUCCESS(status))
-                return 0;
-
         sysfs_remove_link(&pr->cdev->device.kobj, "device");
  err_remove_sysfs_thermal:
         sysfs_remove_link(&device->dev.kobj, "thermal_cooling");
  err_thermal_unregister:
         thermal_cooling_device_unregister(pr->cdev);
-err_power_exit:
+
+        return result;
+}
+
+static void acpi_pss_perf_exit(struct acpi_processor *pr,
+                               struct acpi_device *device)
+{
+        if (pr->cdev) {
+                sysfs_remove_link(&device->dev.kobj, "thermal_cooling");
+                sysfs_remove_link(&pr->cdev->device.kobj, "device");
+                thermal_cooling_device_unregister(pr->cdev);
+                pr->cdev = NULL;
+        }
+}
+#else
+static inline int acpi_pss_perf_init(struct acpi_processor *pr,
+                                     struct acpi_device *device)
+{
+        return 0;
+}
+
+static inline void acpi_pss_perf_exit(struct acpi_processor *pr,
+                                      struct acpi_device *device) {}
+#endif /* CONFIG_ACPI_CPU_FREQ_PSS */
+
+static int __acpi_processor_start(struct acpi_device *device)
+{
+        struct acpi_processor *pr = acpi_driver_data(device);
+        acpi_status status;
+        int result = 0;
+
+        if (!pr)
+                return -ENODEV;
+
+        if (pr->flags.need_hotplug_init)
+                return 0;
+
+        if (!cpuidle_get_driver() || cpuidle_get_driver() == &acpi_idle_driver)
+                acpi_processor_power_init(pr);
+
+        result = acpi_pss_perf_init(pr, device);
+        if (result)
+                goto err_power_exit;
+
+        status = acpi_install_notify_handler(device->handle, ACPI_DEVICE_NOTIFY,
+                                             acpi_processor_notify, device);
+        if (ACPI_SUCCESS(status))
+                return 0;
+
+err_power_exit:
         acpi_processor_power_exit(pr);
         return result;
 }
···
         pr = acpi_driver_data(device);
         if (!pr)
                 return 0;
-
         acpi_processor_power_exit(pr);
 
-        if (pr->cdev) {
-                sysfs_remove_link(&device->dev.kobj, "thermal_cooling");
-                sysfs_remove_link(&pr->cdev->device.kobj, "device");
-                thermal_cooling_device_unregister(pr->cdev);
-                pr->cdev = NULL;
-        }
+        acpi_pss_perf_exit(pr, device);
+
         return 0;
 }
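The refactoring above follows a common kernel idiom: a real implementation when CONFIG_ACPI_CPU_FREQ_PSS is set, and static inline no-op stubs otherwise, so __acpi_processor_start() needs no #ifdef at the call site. A hypothetical user-space rendering of that pattern (names and the init side effect are stand-ins):

```c
#define CONFIG_FEATURE 1  /* flip to 0 to compile the stub variant */

#if CONFIG_FEATURE
/* Real implementation: performs the registration work. */
static int feature_init(int *registered)
{
    *registered = 1;  /* stands in for e.g. cooling-device registration */
    return 0;
}
#else
/* Feature compiled out: succeed without doing anything, so callers
 * need no conditional compilation of their own. */
static inline int feature_init(int *registered)
{
    (void)registered;
    return 0;
}
#endif
```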
drivers/acpi/processor_idle.c | -4

···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 
drivers/acpi/processor_perflib.c | +2 -8

···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  */
 
 #include <linux/kernel.h>
···
         if (ignore_ppc)
                 return 0;
 
-        if (event != CPUFREQ_INCOMPATIBLE)
+        if (event != CPUFREQ_ADJUST)
                 return 0;
 
         mutex_lock(&performance_mutex);
···
 
 EXPORT_SYMBOL(acpi_processor_register_performance);
 
-void
-acpi_processor_unregister_performance(struct acpi_processor_performance
-                                      *performance, unsigned int cpu)
+void acpi_processor_unregister_performance(unsigned int cpu)
 {
         struct acpi_processor *pr;
 
drivers/acpi/processor_thermal.c | -4

···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 
drivers/acpi/processor_throttling.c | -4

···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 
drivers/acpi/property.c | +3 -2

···
 
         if (!val)
                 return obj->package.count;
-        else if (nval <= 0)
-                return -EINVAL;
 
         if (nval > obj->package.count)
                 return -EOVERFLOW;
+        else if (nval <= 0)
+                return -EINVAL;
 
         items = obj->package.elements;
+
         switch (proptype) {
         case DEV_PROP_U8:
                 ret = acpi_copy_property_array_u8(items, (u8 *)val, nval);
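The reordered checks above mean: a NULL destination still queries the item count, an oversized request now reports overflow, and only then is a non-positive count rejected. A stand-alone sketch of the resulting validation order (the function name is hypothetical; the error constants come from errno.h rather than the kernel's headers):

```c
#include <errno.h>
#include <stddef.h>

/* Mirror of the patched check order in the property-array read path:
 * NULL val -> return the available count; too-large nval -> overflow;
 * non-positive nval -> invalid; otherwise the copy would proceed. */
static int prop_read_check(size_t package_count, const void *val, long nval)
{
    if (!val)
        return (int)package_count;   /* caller asked "how many items?" */

    if (nval > (long)package_count)
        return -EOVERFLOW;           /* more items requested than exist */
    else if (nval <= 0)
        return -EINVAL;              /* nothing sensible to copy */

    return 0;                        /* would copy nval items here */
}
```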
drivers/acpi/resource.c | -4

···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 
drivers/acpi/sbs.c | -4

···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 
drivers/acpi/scan.c | -860

···
         return 0;
 }
 
-/**
- * create_pnp_modalias - Create hid/cid(s) string for modalias and uevent
- * @acpi_dev: ACPI device object.
- * @modalias: Buffer to print into.
- * @size: Size of the buffer.
- *
- * Creates hid/cid(s) string needed for modalias and uevent
- * e.g. on a device with hid:IBM0001 and cid:ACPI0001 you get:
- * char *modalias: "acpi:IBM0001:ACPI0001"
- * Return: 0: no _HID and no _CID
- *         -EINVAL: output error
- *         -ENOMEM: output is truncated
- */
-static int create_pnp_modalias(struct acpi_device *acpi_dev, char *modalias,
-                               int size)
-{
-        int len;
-        int count;
-        struct acpi_hardware_id *id;
-
-        /*
-         * Since we skip ACPI_DT_NAMESPACE_HID from the modalias below, 0 should
-         * be returned if ACPI_DT_NAMESPACE_HID is the only ACPI/PNP ID in the
-         * device's list.
-         */
-        count = 0;
-        list_for_each_entry(id, &acpi_dev->pnp.ids, list)
-                if (strcmp(id->id, ACPI_DT_NAMESPACE_HID))
-                        count++;
-
-        if (!count)
-                return 0;
-
-        len = snprintf(modalias, size, "acpi:");
-        if (len <= 0)
-                return len;
-
-        size -= len;
-
-        list_for_each_entry(id, &acpi_dev->pnp.ids, list) {
-                if (!strcmp(id->id, ACPI_DT_NAMESPACE_HID))
-                        continue;
-
-                count = snprintf(&modalias[len], size, "%s:", id->id);
-                if (count < 0)
-                        return -EINVAL;
-
-                if (count >= size)
-                        return -ENOMEM;
-
-                len += count;
-                size -= count;
-        }
-        modalias[len] = '\0';
-        return len;
-}
-
-/**
- * create_of_modalias - Creates DT compatible string for modalias and uevent
- * @acpi_dev: ACPI device object.
- * @modalias: Buffer to print into.
- * @size: Size of the buffer.
- *
- * Expose DT compatible modalias as of:NnameTCcompatible. This function should
- * only be called for devices having ACPI_DT_NAMESPACE_HID in their list of
- * ACPI/PNP IDs.
- */
-static int create_of_modalias(struct acpi_device *acpi_dev, char *modalias,
-                              int size)
-{
-        struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
-        const union acpi_object *of_compatible, *obj;
-        int len, count;
-        int i, nval;
-        char *c;
-
-        acpi_get_name(acpi_dev->handle, ACPI_SINGLE_NAME, &buf);
-        /* DT strings are all in lower case */
-        for (c = buf.pointer; *c != '\0'; c++)
-                *c = tolower(*c);
-
-        len = snprintf(modalias, size, "of:N%sT", (char *)buf.pointer);
-        ACPI_FREE(buf.pointer);
-
-        if (len <= 0)
-                return len;
-
-        of_compatible = acpi_dev->data.of_compatible;
-        if (of_compatible->type == ACPI_TYPE_PACKAGE) {
-                nval = of_compatible->package.count;
-                obj = of_compatible->package.elements;
-        } else { /* Must be ACPI_TYPE_STRING. */
-                nval = 1;
-                obj = of_compatible;
-        }
-        for (i = 0; i < nval; i++, obj++) {
-                count = snprintf(&modalias[len], size, "C%s",
-                                 obj->string.pointer);
-                if (count < 0)
-                        return -EINVAL;
-
-                if (count >= size)
-                        return -ENOMEM;
-
-                len += count;
-                size -= count;
-        }
-        modalias[len] = '\0';
-        return len;
-}
-
-/*
- * acpi_companion_match() - Can we match via ACPI companion device
- * @dev: Device in question
- *
- * Check if the given device has an ACPI companion and if that companion has
- * a valid list of PNP IDs, and if the device is the first (primary) physical
- * device associated with it. Return the companion pointer if that's the case
- * or NULL otherwise.
- *
- * If multiple physical devices are attached to a single ACPI companion, we need
- * to be careful. The usage scenario for this kind of relationship is that all
- * of the physical devices in question use resources provided by the ACPI
- * companion. A typical case is an MFD device where all the sub-devices share
- * the parent's ACPI companion. In such cases we can only allow the primary
- * (first) physical device to be matched with the help of the companion's PNP
- * IDs.
- *
- * Additional physical devices sharing the ACPI companion can still use
- * resources available from it but they will be matched normally using functions
- * provided by their bus types (and analogously for their modalias).
- */
-static struct acpi_device *acpi_companion_match(const struct device *dev)
-{
-        struct acpi_device *adev;
-        struct mutex *physical_node_lock;
-
-        adev = ACPI_COMPANION(dev);
-        if (!adev)
-                return NULL;
-
-        if (list_empty(&adev->pnp.ids))
-                return NULL;
-
-        physical_node_lock = &adev->physical_node_lock;
-        mutex_lock(physical_node_lock);
-        if (list_empty(&adev->physical_node_list)) {
-                adev = NULL;
-        } else {
-                const struct acpi_device_physical_node *node;
-
-                node = list_first_entry(&adev->physical_node_list,
-                                        struct acpi_device_physical_node, node);
-                if (node->dev != dev)
-                        adev = NULL;
-        }
-        mutex_unlock(physical_node_lock);
-
-        return adev;
-}
-
-static int __acpi_device_uevent_modalias(struct acpi_device *adev,
-                                         struct kobj_uevent_env *env)
-{
-        int len;
-
-        if (!adev)
-                return -ENODEV;
-
-        if (list_empty(&adev->pnp.ids))
-                return 0;
-
-        if (add_uevent_var(env, "MODALIAS="))
-                return -ENOMEM;
-
-        len = create_pnp_modalias(adev, &env->buf[env->buflen - 1],
-                                  sizeof(env->buf) - env->buflen);
-        if (len < 0)
-                return len;
-
-        env->buflen += len;
-        if (!adev->data.of_compatible)
-                return 0;
-
-        if (len > 0 && add_uevent_var(env, "MODALIAS="))
-                return -ENOMEM;
-
-        len = create_of_modalias(adev, &env->buf[env->buflen - 1],
-                                 sizeof(env->buf) - env->buflen);
-        if (len < 0)
-                return len;
-
-        env->buflen += len;
-
-        return 0;
-}
-
-/*
- * Creates uevent modalias field for ACPI enumerated devices.
- * Because the other buses does not support ACPI HIDs & CIDs.
- * e.g. for a device with hid:IBM0001 and cid:ACPI0001 you get:
- * "acpi:IBM0001:ACPI0001"
- */
-int acpi_device_uevent_modalias(struct device *dev, struct kobj_uevent_env *env)
-{
-        return __acpi_device_uevent_modalias(acpi_companion_match(dev), env);
-}
-EXPORT_SYMBOL_GPL(acpi_device_uevent_modalias);
-
-static int __acpi_device_modalias(struct acpi_device *adev, char *buf, int size)
-{
-        int len, count;
-
-        if (!adev)
-                return -ENODEV;
-
-        if (list_empty(&adev->pnp.ids))
-                return 0;
-
-        len = create_pnp_modalias(adev, buf, size - 1);
-        if (len < 0) {
-                return len;
-        } else if (len > 0) {
-                buf[len++] = '\n';
-                size -= len;
-        }
-        if (!adev->data.of_compatible)
-                return len;
-
-        count = create_of_modalias(adev, buf + len, size - 1);
-        if (count < 0) {
-                return count;
-        } else if (count > 0) {
-                len += count;
-                buf[len++] = '\n';
-        }
-
-        return len;
-}
-
-/*
- * Creates modalias sysfs attribute for ACPI enumerated devices.
- * Because the other buses does not support ACPI HIDs & CIDs.
- * e.g. for a device with hid:IBM0001 and cid:ACPI0001 you get:
- * "acpi:IBM0001:ACPI0001"
- */
-int acpi_device_modalias(struct device *dev, char *buf, int size)
-{
-        return __acpi_device_modalias(acpi_companion_match(dev), buf, size);
-}
-EXPORT_SYMBOL_GPL(acpi_device_modalias);
-
-static ssize_t
-acpi_device_modalias_show(struct device *dev, struct device_attribute *attr,
-                          char *buf)
-{
-        return __acpi_device_modalias(to_acpi_device(dev), buf, 1024);
-}
-static DEVICE_ATTR(modalias, 0444, acpi_device_modalias_show, NULL);
-
 bool acpi_scan_is_offline(struct acpi_device *adev, bool uevent)
 {
         struct acpi_device_physical_node *pn;
···
         unlock_device_hotplug();
 }
 
-static ssize_t real_power_state_show(struct device *dev,
-                                     struct device_attribute *attr, char *buf)
-{
-        struct acpi_device *adev = to_acpi_device(dev);
-        int state;
-        int ret;
-
-        ret = acpi_device_get_power(adev, &state);
-        if (ret)
-                return ret;
-
-        return sprintf(buf, "%s\n", acpi_power_state_string(state));
-}
-
-static DEVICE_ATTR(real_power_state, 0444, real_power_state_show, NULL);
-
-static ssize_t power_state_show(struct device *dev,
-                                struct device_attribute *attr, char *buf)
-{
-        struct acpi_device *adev = to_acpi_device(dev);
-
-        return sprintf(buf, "%s\n", acpi_power_state_string(adev->power.state));
-}
-
-static DEVICE_ATTR(power_state, 0444, power_state_show, NULL);
-
-static ssize_t
-acpi_eject_store(struct device *d, struct device_attribute *attr,
-                 const char *buf, size_t count)
-{
-        struct acpi_device *acpi_device = to_acpi_device(d);
-        acpi_object_type not_used;
-        acpi_status status;
-
-        if (!count || buf[0] != '1')
-                return -EINVAL;
-
-        if ((!acpi_device->handler || !acpi_device->handler->hotplug.enabled)
-            && !acpi_device->driver)
-                return -ENODEV;
-
-        status = acpi_get_type(acpi_device->handle, &not_used);
-        if (ACPI_FAILURE(status) || !acpi_device->flags.ejectable)
-                return -ENODEV;
-
-        get_device(&acpi_device->dev);
-        status = acpi_hotplug_schedule(acpi_device, ACPI_OST_EC_OSPM_EJECT);
-        if (ACPI_SUCCESS(status))
-                return count;
-
-        put_device(&acpi_device->dev);
-        acpi_evaluate_ost(acpi_device->handle, ACPI_OST_EC_OSPM_EJECT,
-                          ACPI_OST_SC_NON_SPECIFIC_FAILURE, NULL);
-        return status == AE_NO_MEMORY ? -ENOMEM : -EAGAIN;
-}
-
-static DEVICE_ATTR(eject, 0200, NULL, acpi_eject_store);
-
-static ssize_t
-acpi_device_hid_show(struct device *dev, struct device_attribute *attr,
-                     char *buf)
-{
-        struct acpi_device *acpi_dev = to_acpi_device(dev);
-
-        return sprintf(buf, "%s\n", acpi_device_hid(acpi_dev));
-}
-static DEVICE_ATTR(hid, 0444, acpi_device_hid_show, NULL);
-
-static ssize_t acpi_device_uid_show(struct device *dev,
-                                    struct device_attribute *attr, char *buf)
-{
-        struct acpi_device *acpi_dev = to_acpi_device(dev);
-
-        return sprintf(buf, "%s\n", acpi_dev->pnp.unique_id);
-}
-static DEVICE_ATTR(uid, 0444, acpi_device_uid_show, NULL);
-
-static ssize_t acpi_device_adr_show(struct device *dev,
-                                    struct device_attribute *attr, char *buf)
-{
-        struct acpi_device *acpi_dev = to_acpi_device(dev);
-
-        return sprintf(buf, "0x%08x\n",
-                       (unsigned int)(acpi_dev->pnp.bus_address));
-}
-static DEVICE_ATTR(adr, 0444, acpi_device_adr_show, NULL);
-
-static ssize_t
-acpi_device_path_show(struct device *dev, struct device_attribute *attr,
-                      char *buf)
-{
-        struct acpi_device *acpi_dev = to_acpi_device(dev);
-        struct acpi_buffer path = {ACPI_ALLOCATE_BUFFER, NULL};
-        int result;
-
-        result = acpi_get_name(acpi_dev->handle, ACPI_FULL_PATHNAME, &path);
-        if (result)
-                goto end;
-
-        result = sprintf(buf, "%s\n", (char*)path.pointer);
-        kfree(path.pointer);
-end:
-        return result;
-}
-static DEVICE_ATTR(path, 0444, acpi_device_path_show, NULL);
-
-/* sysfs file that shows description text from the ACPI _STR method */
-static ssize_t description_show(struct device *dev,
-                                struct device_attribute *attr,
-                                char *buf)
-{
-        struct acpi_device *acpi_dev = to_acpi_device(dev);
-        int result;
-
-        if (acpi_dev->pnp.str_obj == NULL)
-                return 0;
-
-        /*
-         * The _STR object contains a Unicode identifier for a device.
-         * We need to convert to utf-8 so it can be displayed.
-         */
-        result = utf16s_to_utf8s(
-                (wchar_t *)acpi_dev->pnp.str_obj->buffer.pointer,
-                acpi_dev->pnp.str_obj->buffer.length,
-                UTF16_LITTLE_ENDIAN, buf,
-                PAGE_SIZE);
-
-        buf[result++] = '\n';
-
-        return result;
-}
-static DEVICE_ATTR(description, 0444, description_show, NULL);
-
-static ssize_t
-acpi_device_sun_show(struct device *dev, struct device_attribute *attr,
-                     char *buf)
-{
-        struct acpi_device *acpi_dev = to_acpi_device(dev);
-        acpi_status status;
-        unsigned long long sun;
-
-        status = acpi_evaluate_integer(acpi_dev->handle, "_SUN", NULL, &sun);
-        if (ACPI_FAILURE(status))
-                return -ENODEV;
-
-        return sprintf(buf, "%llu\n", sun);
-}
-static DEVICE_ATTR(sun, 0444, acpi_device_sun_show, NULL);
-
-static ssize_t status_show(struct device *dev, struct device_attribute *attr,
-                           char *buf)
-{
-        struct acpi_device *acpi_dev = to_acpi_device(dev);
-        acpi_status status;
-        unsigned long long sta;
-
-        status = acpi_evaluate_integer(acpi_dev->handle, "_STA", NULL, &sta);
-        if (ACPI_FAILURE(status))
-                return -ENODEV;
-
-        return sprintf(buf, "%llu\n", sta);
-}
-static DEVICE_ATTR_RO(status);
-
-static int acpi_device_setup_files(struct acpi_device *dev)
-{
-        struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
-        acpi_status status;
-        int result = 0;
-
-        /*
-         * Devices gotten from FADT don't have a "path" attribute
-         */
-        if (dev->handle) {
-                result = device_create_file(&dev->dev, &dev_attr_path);
-                if (result)
-                        goto end;
-        }
-
-        if (!list_empty(&dev->pnp.ids)) {
-                result = device_create_file(&dev->dev, &dev_attr_hid);
-                if (result)
-                        goto end;
-
-                result = device_create_file(&dev->dev, &dev_attr_modalias);
-                if (result)
-                        goto end;
-        }
-
-        /*
-         * If device has _STR, 'description' file is created
-         */
-        if (acpi_has_method(dev->handle, "_STR")) {
-                status = acpi_evaluate_object(dev->handle, "_STR",
-                                              NULL, &buffer);
-                if (ACPI_FAILURE(status))
-                        buffer.pointer = NULL;
-                dev->pnp.str_obj = buffer.pointer;
-                result = device_create_file(&dev->dev, &dev_attr_description);
-                if (result)
-                        goto end;
-        }
-
-        if (dev->pnp.type.bus_address)
-                result = device_create_file(&dev->dev, &dev_attr_adr);
-        if (dev->pnp.unique_id)
-                result = device_create_file(&dev->dev, &dev_attr_uid);
-
-        if (acpi_has_method(dev->handle, "_SUN")) {
-                result = device_create_file(&dev->dev, &dev_attr_sun);
-                if (result)
-                        goto end;
-        }
-
-        if (acpi_has_method(dev->handle, "_STA")) {
-                result = device_create_file(&dev->dev, &dev_attr_status);
-                if (result)
-                        goto end;
-        }
-
-        /*
-         * If device has _EJ0, 'eject' file is created that is used to trigger
-         * hot-removal function from userland.
-         */
-        if (acpi_has_method(dev->handle, "_EJ0")) {
-                result = device_create_file(&dev->dev, &dev_attr_eject);
-                if (result)
-                        return result;
-        }
-
-        if (dev->flags.power_manageable) {
-                result = device_create_file(&dev->dev, &dev_attr_power_state);
-                if (result)
-                        return result;
-
-                if (dev->power.flags.power_resources)
-                        result = device_create_file(&dev->dev,
-                                                    &dev_attr_real_power_state);
-        }
-
-end:
-        return result;
-}
-
-static void acpi_device_remove_files(struct acpi_device *dev)
-{
-        if (dev->flags.power_manageable) {
-                device_remove_file(&dev->dev, &dev_attr_power_state);
-                if (dev->power.flags.power_resources)
-                        device_remove_file(&dev->dev,
-                                           &dev_attr_real_power_state);
-        }
-
-        /*
-         * If device has _STR, remove 'description' file
-         */
-        if (acpi_has_method(dev->handle, "_STR")) {
-                kfree(dev->pnp.str_obj);
-                device_remove_file(&dev->dev, &dev_attr_description);
-        }
-        /*
-         * If device has _EJ0, remove 'eject' file.
-         */
-        if (acpi_has_method(dev->handle, "_EJ0"))
-                device_remove_file(&dev->dev, &dev_attr_eject);
-
-        if (acpi_has_method(dev->handle, "_SUN"))
-                device_remove_file(&dev->dev, &dev_attr_sun);
-
-        if (dev->pnp.unique_id)
-                device_remove_file(&dev->dev, &dev_attr_uid);
-        if (dev->pnp.type.bus_address)
-                device_remove_file(&dev->dev, &dev_attr_adr);
-        device_remove_file(&dev->dev, &dev_attr_modalias);
-        device_remove_file(&dev->dev, &dev_attr_hid);
-        if (acpi_has_method(dev->handle, "_STA"))
-                device_remove_file(&dev->dev, &dev_attr_status);
-        if (dev->handle)
-                device_remove_file(&dev->dev, &dev_attr_path);
-}
-/* --------------------------------------------------------------------------
-                        ACPI Bus operations
-   -------------------------------------------------------------------------- */
-
-/**
- * acpi_of_match_device - Match device object using the "compatible" property.
- * @adev: ACPI device object to match.
- * @of_match_table: List of device IDs to match against.
- *
- * If @dev has an ACPI companion which has ACPI_DT_NAMESPACE_HID in its list of
- * identifiers and a _DSD object with the "compatible" property, use that
- * property to match against the given list of identifiers.
- */
-static bool acpi_of_match_device(struct acpi_device *adev,
-                                 const struct of_device_id *of_match_table)
-{
-        const union acpi_object *of_compatible, *obj;
-        int i, nval;
-
-        if (!adev)
-                return false;
-
-        of_compatible = adev->data.of_compatible;
-        if (!of_match_table || !of_compatible)
-                return false;
-
-        if (of_compatible->type == ACPI_TYPE_PACKAGE) {
-                nval = of_compatible->package.count;
-                obj = of_compatible->package.elements;
-        } else { /* Must be ACPI_TYPE_STRING. */
-                nval = 1;
-                obj = of_compatible;
-        }
-        /* Now we can look for the driver DT compatible strings */
-        for (i = 0; i < nval; i++, obj++) {
-                const struct of_device_id *id;
-
-                for (id = of_match_table; id->compatible[0]; id++)
-                        if (!strcasecmp(obj->string.pointer, id->compatible))
-                                return true;
-        }
-
-        return false;
-}
-
-static bool __acpi_match_device_cls(const struct acpi_device_id *id,
-                                    struct acpi_hardware_id *hwid)
-{
-        int i, msk, byte_shift;
-        char buf[3];
-
-        if (!id->cls)
-                return false;
-
-        /* Apply class-code bitmask, before checking each class-code byte */
-        for (i = 1; i <= 3; i++) {
-                byte_shift = 8 * (3 - i);
-                msk = (id->cls_msk >> byte_shift) & 0xFF;
-                if (!msk)
-                        continue;
-
-                sprintf(buf, "%02x", (id->cls >> byte_shift) & msk);
-                if (strncmp(buf, &hwid->id[(i - 1) * 2], 2))
-                        return false;
-        }
-        return true;
-}
-
-static const struct acpi_device_id *__acpi_match_device(
-        struct acpi_device *device,
-        const struct acpi_device_id *ids,
-        const struct of_device_id *of_ids)
-{
-        const struct acpi_device_id *id;
-        struct acpi_hardware_id *hwid;
-
-        /*
-         * If the device is not present, it is unnecessary to load device
-         * driver for it.
-         */
-        if (!device || !device->status.present)
-                return NULL;
-
-        list_for_each_entry(hwid, &device->pnp.ids, list) {
-                /* First, check the ACPI/PNP IDs provided by the caller. */
-                for (id = ids; id->id[0] || id->cls; id++) {
-                        if (id->id[0] && !strcmp((char *) id->id, hwid->id))
-                                return id;
-                        else if (id->cls && __acpi_match_device_cls(id, hwid))
-                                return id;
-                }
-
-                /*
-                 * Next, check ACPI_DT_NAMESPACE_HID and try to match the
-                 * "compatible" property if found.
814 - * 815 - * The id returned by the below is not valid, but the only 816 - * caller passing non-NULL of_ids here is only interested in 817 - * whether or not the return value is NULL. 818 - */ 819 - if (!strcmp(ACPI_DT_NAMESPACE_HID, hwid->id) 820 - && acpi_of_match_device(device, of_ids)) 821 - return id; 822 - } 823 - return NULL; 824 - } 825 - 826 - /** 827 - * acpi_match_device - Match a struct device against a given list of ACPI IDs 828 - * @ids: Array of struct acpi_device_id object to match against. 829 - * @dev: The device structure to match. 830 - * 831 - * Check if @dev has a valid ACPI handle and if there is a struct acpi_device 832 - * object for that handle and use that object to match against a given list of 833 - * device IDs. 834 - * 835 - * Return a pointer to the first matching ID on success or %NULL on failure. 836 - */ 837 - const struct acpi_device_id *acpi_match_device(const struct acpi_device_id *ids, 838 - const struct device *dev) 839 - { 840 - return __acpi_match_device(acpi_companion_match(dev), ids, NULL); 841 - } 842 - EXPORT_SYMBOL_GPL(acpi_match_device); 843 - 844 - int acpi_match_device_ids(struct acpi_device *device, 845 - const struct acpi_device_id *ids) 846 - { 847 - return __acpi_match_device(device, ids, NULL) ? 
0 : -ENOENT; 848 - } 849 - EXPORT_SYMBOL(acpi_match_device_ids); 850 - 851 - bool acpi_driver_match_device(struct device *dev, 852 - const struct device_driver *drv) 853 - { 854 - if (!drv->acpi_match_table) 855 - return acpi_of_match_device(ACPI_COMPANION(dev), 856 - drv->of_match_table); 857 - 858 - return !!__acpi_match_device(acpi_companion_match(dev), 859 - drv->acpi_match_table, drv->of_match_table); 860 - } 861 - EXPORT_SYMBOL_GPL(acpi_driver_match_device); 862 - 863 704 static void acpi_free_power_resources_lists(struct acpi_device *device) 864 705 { 865 706 int i; ··· 468 1143 acpi_free_power_resources_lists(acpi_dev); 469 1144 kfree(acpi_dev); 470 1145 } 471 - 472 - static int acpi_bus_match(struct device *dev, struct device_driver *drv) 473 - { 474 - struct acpi_device *acpi_dev = to_acpi_device(dev); 475 - struct acpi_driver *acpi_drv = to_acpi_driver(drv); 476 - 477 - return acpi_dev->flags.match_driver 478 - && !acpi_match_device_ids(acpi_dev, acpi_drv->ids); 479 - } 480 - 481 - static int acpi_device_uevent(struct device *dev, struct kobj_uevent_env *env) 482 - { 483 - return __acpi_device_uevent_modalias(to_acpi_device(dev), env); 484 - } 485 - 486 - static void acpi_device_notify(acpi_handle handle, u32 event, void *data) 487 - { 488 - struct acpi_device *device = data; 489 - 490 - device->driver->ops.notify(device, event); 491 - } 492 - 493 - static void acpi_device_notify_fixed(void *data) 494 - { 495 - struct acpi_device *device = data; 496 - 497 - /* Fixed hardware devices have no handles */ 498 - acpi_device_notify(NULL, ACPI_FIXED_HARDWARE_EVENT, device); 499 - } 500 - 501 - static u32 acpi_device_fixed_event(void *data) 502 - { 503 - acpi_os_execute(OSL_NOTIFY_HANDLER, acpi_device_notify_fixed, data); 504 - return ACPI_INTERRUPT_HANDLED; 505 - } 506 - 507 - static int acpi_device_install_notify_handler(struct acpi_device *device) 508 - { 509 - acpi_status status; 510 - 511 - if (device->device_type == ACPI_BUS_TYPE_POWER_BUTTON) 512 - status 
= 513 - acpi_install_fixed_event_handler(ACPI_EVENT_POWER_BUTTON, 514 - acpi_device_fixed_event, 515 - device); 516 - else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON) 517 - status = 518 - acpi_install_fixed_event_handler(ACPI_EVENT_SLEEP_BUTTON, 519 - acpi_device_fixed_event, 520 - device); 521 - else 522 - status = acpi_install_notify_handler(device->handle, 523 - ACPI_DEVICE_NOTIFY, 524 - acpi_device_notify, 525 - device); 526 - 527 - if (ACPI_FAILURE(status)) 528 - return -EINVAL; 529 - return 0; 530 - } 531 - 532 - static void acpi_device_remove_notify_handler(struct acpi_device *device) 533 - { 534 - if (device->device_type == ACPI_BUS_TYPE_POWER_BUTTON) 535 - acpi_remove_fixed_event_handler(ACPI_EVENT_POWER_BUTTON, 536 - acpi_device_fixed_event); 537 - else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON) 538 - acpi_remove_fixed_event_handler(ACPI_EVENT_SLEEP_BUTTON, 539 - acpi_device_fixed_event); 540 - else 541 - acpi_remove_notify_handler(device->handle, ACPI_DEVICE_NOTIFY, 542 - acpi_device_notify); 543 - } 544 - 545 - static int acpi_device_probe(struct device *dev) 546 - { 547 - struct acpi_device *acpi_dev = to_acpi_device(dev); 548 - struct acpi_driver *acpi_drv = to_acpi_driver(dev->driver); 549 - int ret; 550 - 551 - if (acpi_dev->handler && !acpi_is_pnp_device(acpi_dev)) 552 - return -EINVAL; 553 - 554 - if (!acpi_drv->ops.add) 555 - return -ENOSYS; 556 - 557 - ret = acpi_drv->ops.add(acpi_dev); 558 - if (ret) 559 - return ret; 560 - 561 - acpi_dev->driver = acpi_drv; 562 - ACPI_DEBUG_PRINT((ACPI_DB_INFO, 563 - "Driver [%s] successfully bound to device [%s]\n", 564 - acpi_drv->name, acpi_dev->pnp.bus_id)); 565 - 566 - if (acpi_drv->ops.notify) { 567 - ret = acpi_device_install_notify_handler(acpi_dev); 568 - if (ret) { 569 - if (acpi_drv->ops.remove) 570 - acpi_drv->ops.remove(acpi_dev); 571 - 572 - acpi_dev->driver = NULL; 573 - acpi_dev->driver_data = NULL; 574 - return ret; 575 - } 576 - } 577 - 578 - 
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Found driver [%s] for device [%s]\n", 579 - acpi_drv->name, acpi_dev->pnp.bus_id)); 580 - get_device(dev); 581 - return 0; 582 - } 583 - 584 - static int acpi_device_remove(struct device * dev) 585 - { 586 - struct acpi_device *acpi_dev = to_acpi_device(dev); 587 - struct acpi_driver *acpi_drv = acpi_dev->driver; 588 - 589 - if (acpi_drv) { 590 - if (acpi_drv->ops.notify) 591 - acpi_device_remove_notify_handler(acpi_dev); 592 - if (acpi_drv->ops.remove) 593 - acpi_drv->ops.remove(acpi_dev); 594 - } 595 - acpi_dev->driver = NULL; 596 - acpi_dev->driver_data = NULL; 597 - 598 - put_device(dev); 599 - return 0; 600 - } 601 - 602 - struct bus_type acpi_bus_type = { 603 - .name = "acpi", 604 - .match = acpi_bus_match, 605 - .probe = acpi_device_probe, 606 - .remove = acpi_device_remove, 607 - .uevent = acpi_device_uevent, 608 - }; 609 1146 610 1147 static void acpi_device_del(struct acpi_device *device) 611 1148 { ··· 714 1527 next = child->node.next; 715 1528 return next == head ? NULL : list_entry(next, struct acpi_device, node); 716 1529 } 717 - 718 - /* -------------------------------------------------------------------------- 719 - Driver Management 720 - -------------------------------------------------------------------------- */ 721 - /** 722 - * acpi_bus_register_driver - register a driver with the ACPI bus 723 - * @driver: driver being registered 724 - * 725 - * Registers a driver with the ACPI bus. Searches the namespace for all 726 - * devices that match the driver's criteria and binds. Returns zero for 727 - * success or a negative error status for failure. 
728 - */ 729 - int acpi_bus_register_driver(struct acpi_driver *driver) 730 - { 731 - int ret; 732 - 733 - if (acpi_disabled) 734 - return -ENODEV; 735 - driver->drv.name = driver->name; 736 - driver->drv.bus = &acpi_bus_type; 737 - driver->drv.owner = driver->owner; 738 - 739 - ret = driver_register(&driver->drv); 740 - return ret; 741 - } 742 - 743 - EXPORT_SYMBOL(acpi_bus_register_driver); 744 - 745 - /** 746 - * acpi_bus_unregister_driver - unregisters a driver with the ACPI bus 747 - * @driver: driver to unregister 748 - * 749 - * Unregisters a driver with the ACPI bus. Searches the namespace for all 750 - * devices that match the driver's criteria and unbinds. 751 - */ 752 - void acpi_bus_unregister_driver(struct acpi_driver *driver) 753 - { 754 - driver_unregister(&driver->drv); 755 - } 756 - 757 - EXPORT_SYMBOL(acpi_bus_unregister_driver); 758 1530 759 1531 /* -------------------------------------------------------------------------- 760 1532 Device Enumeration ··· 1889 2743 int __init acpi_scan_init(void) 1890 2744 { 1891 2745 int result; 1892 - 1893 - result = bus_register(&acpi_bus_type); 1894 - if (result) { 1895 - /* We don't want to quit even if we failed to add suspend/resume */ 1896 - printk(KERN_ERR PREFIX "Could not register bus type\n"); 1897 - } 1898 2746 1899 2747 acpi_pci_root_init(); 1900 2748 acpi_pci_link_init();
+99 -36
drivers/acpi/sysfs.c
···
         ACPI_DEBUG_INIT(ACPI_LV_INIT),
         ACPI_DEBUG_INIT(ACPI_LV_DEBUG_OBJECT),
         ACPI_DEBUG_INIT(ACPI_LV_INFO),
+        ACPI_DEBUG_INIT(ACPI_LV_REPAIR),
+        ACPI_DEBUG_INIT(ACPI_LV_TRACE_POINT),
 
         ACPI_DEBUG_INIT(ACPI_LV_INIT_NAMES),
         ACPI_DEBUG_INIT(ACPI_LV_PARSE),
···
 module_param_cb(debug_layer, &param_ops_debug_layer, &acpi_dbg_layer, 0644);
 module_param_cb(debug_level, &param_ops_debug_level, &acpi_dbg_level, 0644);
 
-static char trace_method_name[6];
-module_param_string(trace_method_name, trace_method_name, 6, 0644);
-static unsigned int trace_debug_layer;
-module_param(trace_debug_layer, uint, 0644);
-static unsigned int trace_debug_level;
-module_param(trace_debug_level, uint, 0644);
+static char trace_method_name[1024];
+
+int param_set_trace_method_name(const char *val, const struct kernel_param *kp)
+{
+        u32 saved_flags = 0;
+        bool is_abs_path = true;
+
+        if (*val != '\\')
+                is_abs_path = false;
+
+        if ((is_abs_path && strlen(val) > 1023) ||
+            (!is_abs_path && strlen(val) > 1022)) {
+                pr_err("%s: string parameter too long\n", kp->name);
+                return -ENOSPC;
+        }
+
+        /*
+         * It's not safe to update acpi_gbl_trace_method_name without
+         * having the tracer stopped, so we save the original tracer
+         * state and disable it.
+         */
+        saved_flags = acpi_gbl_trace_flags;
+        (void)acpi_debug_trace(NULL,
+                               acpi_gbl_trace_dbg_level,
+                               acpi_gbl_trace_dbg_layer,
+                               0);
+
+        /* This is a hack.  We can't kmalloc in early boot. */
+        if (is_abs_path)
+                strcpy(trace_method_name, val);
+        else {
+                trace_method_name[0] = '\\';
+                strcpy(trace_method_name+1, val);
+        }
+
+        /* Restore the original tracer state */
+        (void)acpi_debug_trace(trace_method_name,
+                               acpi_gbl_trace_dbg_level,
+                               acpi_gbl_trace_dbg_layer,
+                               saved_flags);
+
+        return 0;
+}
+
+static int param_get_trace_method_name(char *buffer, const struct kernel_param *kp)
+{
+        return scnprintf(buffer, PAGE_SIZE, "%s", acpi_gbl_trace_method_name);
+}
+
+static const struct kernel_param_ops param_ops_trace_method = {
+        .set = param_set_trace_method_name,
+        .get = param_get_trace_method_name,
+};
+
+static const struct kernel_param_ops param_ops_trace_attrib = {
+        .set = param_set_uint,
+        .get = param_get_uint,
+};
+
+module_param_cb(trace_method_name, &param_ops_trace_method, &trace_method_name, 0644);
+module_param_cb(trace_debug_layer, &param_ops_trace_attrib, &acpi_gbl_trace_dbg_layer, 0644);
+module_param_cb(trace_debug_level, &param_ops_trace_attrib, &acpi_gbl_trace_dbg_level, 0644);
 
 static int param_set_trace_state(const char *val, struct kernel_param *kp)
 {
-        int result = 0;
+        acpi_status status;
+        const char *method = trace_method_name;
+        u32 flags = 0;
 
-        if (!strncmp(val, "enable", sizeof("enable") - 1)) {
-                result = acpi_debug_trace(trace_method_name, trace_debug_level,
-                                          trace_debug_layer, 0);
-                if (result)
-                        result = -EBUSY;
-                goto exit;
-        }
+        /* So "xxx-once" comparison should go prior than "xxx" comparison */
+#define acpi_compare_param(val, key)    \
+        strncmp((val), (key), sizeof(key) - 1)
 
-        if (!strncmp(val, "disable", sizeof("disable") - 1)) {
-                int name = 0;
-                result = acpi_debug_trace((char *)&name, trace_debug_level,
-                                          trace_debug_layer, 0);
-                if (result)
-                        result = -EBUSY;
-                goto exit;
-        }
+        if (!acpi_compare_param(val, "enable")) {
+                method = NULL;
+                flags = ACPI_TRACE_ENABLED;
+        } else if (!acpi_compare_param(val, "disable"))
+                method = NULL;
+        else if (!acpi_compare_param(val, "method-once"))
+                flags = ACPI_TRACE_ENABLED | ACPI_TRACE_ONESHOT;
+        else if (!acpi_compare_param(val, "method"))
+                flags = ACPI_TRACE_ENABLED;
+        else if (!acpi_compare_param(val, "opcode-once"))
+                flags = ACPI_TRACE_ENABLED | ACPI_TRACE_ONESHOT | ACPI_TRACE_OPCODE;
+        else if (!acpi_compare_param(val, "opcode"))
+                flags = ACPI_TRACE_ENABLED | ACPI_TRACE_OPCODE;
+        else
+                return -EINVAL;
 
-        if (!strncmp(val, "1", 1)) {
-                result = acpi_debug_trace(trace_method_name, trace_debug_level,
-                                          trace_debug_layer, 1);
-                if (result)
-                        result = -EBUSY;
-                goto exit;
-        }
+        status = acpi_debug_trace(method,
+                                  acpi_gbl_trace_dbg_level,
+                                  acpi_gbl_trace_dbg_layer,
+                                  flags);
+        if (ACPI_FAILURE(status))
+                return -EBUSY;
 
-        result = -EINVAL;
-exit:
-        return result;
+        return 0;
 }
 
 static int param_get_trace_state(char *buffer, struct kernel_param *kp)
 {
-        if (!acpi_gbl_trace_method_name)
+        if (!(acpi_gbl_trace_flags & ACPI_TRACE_ENABLED))
                 return sprintf(buffer, "disable");
         else {
-                if (acpi_gbl_trace_flags & 1)
-                        return sprintf(buffer, "1");
-                else
+                if (acpi_gbl_trace_method_name) {
+                        if (acpi_gbl_trace_flags & ACPI_TRACE_ONESHOT)
+                                return sprintf(buffer, "method-once");
+                        else
+                                return sprintf(buffer, "method");
+                } else
                         return sprintf(buffer, "enable");
         }
         return 0;
-4
drivers/acpi/tables.c
···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
  * GNU General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  *
  */
-4
drivers/acpi/thermal.c
···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  *
  * This driver fully implements the ACPI thermal policy as described in the
-4
drivers/acpi/utils.c
···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
 
+43
drivers/base/core.c
···
 }
 EXPORT_SYMBOL_GPL(device_unregister);
 
+static struct device *prev_device(struct klist_iter *i)
+{
+        struct klist_node *n = klist_prev(i);
+        struct device *dev = NULL;
+        struct device_private *p;
+
+        if (n) {
+                p = to_device_private_parent(n);
+                dev = p->device;
+        }
+        return dev;
+}
+
 static struct device *next_device(struct klist_iter *i)
 {
         struct klist_node *n = klist_next(i);
···
         return error;
 }
 EXPORT_SYMBOL_GPL(device_for_each_child);
+
+/**
+ * device_for_each_child_reverse - device child iterator in reversed order.
+ * @parent: parent struct device.
+ * @fn: function to be called for each device.
+ * @data: data for the callback.
+ *
+ * Iterate over @parent's child devices, and call @fn for each,
+ * passing it @data.
+ *
+ * We check the return of @fn each time. If it returns anything
+ * other than 0, we break out and return that value.
+ */
+int device_for_each_child_reverse(struct device *parent, void *data,
+                                  int (*fn)(struct device *dev, void *data))
+{
+        struct klist_iter i;
+        struct device *child;
+        int error = 0;
+
+        if (!parent->p)
+                return 0;
+
+        klist_iter_init(&parent->p->klist_children, &i);
+        while ((child = prev_device(&i)) && !error)
+                error = fn(child, data);
+        klist_iter_exit(&i);
+        return error;
+}
+EXPORT_SYMBOL_GPL(device_for_each_child_reverse);
 
 /**
  * device_find_child - device iterator for locating a particular device.
+20
drivers/base/dd.c
···
  *
  * This function must be called with @dev lock held.  When called for a
  * USB interface, @dev->parent lock must be held as well.
+ *
+ * If the device has a parent, runtime-resume the parent before driver probing.
  */
 int driver_probe_device(struct device_driver *drv, struct device *dev)
 {
···
         pr_debug("bus: '%s': %s: matched device %s with driver %s\n",
                  drv->bus->name, __func__, dev_name(dev), drv->name);
 
+        if (dev->parent)
+                pm_runtime_get_sync(dev->parent);
+
         pm_runtime_barrier(dev);
         ret = really_probe(dev, drv);
         pm_request_idle(dev);
+
+        if (dev->parent)
+                pm_runtime_put(dev->parent);
 
         return ret;
 }
···
 
         device_lock(dev);
 
+        if (dev->parent)
+                pm_runtime_get_sync(dev->parent);
+
         bus_for_each_drv(dev->bus, NULL, &data, __device_attach_driver);
         dev_dbg(dev, "async probe completed\n");
 
         pm_request_idle(dev);
+
+        if (dev->parent)
+                pm_runtime_put(dev->parent);
 
         device_unlock(dev);
···
                 .want_async = false,
         };
 
+        if (dev->parent)
+                pm_runtime_get_sync(dev->parent);
+
         ret = bus_for_each_drv(dev->bus, NULL, &data,
                                __device_attach_driver);
         if (!ret && allow_async && data.have_async) {
···
         } else {
                 pm_request_idle(dev);
         }
+
+        if (dev->parent)
+                pm_runtime_put(dev->parent);
 }
 out_unlock:
         device_unlock(dev);
+1 -3
drivers/base/power/clock_ops.c
···
  * @dev: The device for the given clock
  * @ce: PM clock entry corresponding to the clock.
  */
-static inline int __pm_clk_enable(struct device *dev, struct pm_clock_entry *ce)
+static inline void __pm_clk_enable(struct device *dev, struct pm_clock_entry *ce)
 {
         int ret;
···
                         dev_err(dev, "%s: failed to enable clk %p, error %d\n",
                                 __func__, ce->clk, ret);
         }
-
-        return ret;
 }
 
 /**
+77 -311
drivers/base/power/domain.c
···
                                     stop_latency_ns, "stop");
 }
 
-static int genpd_start_dev(struct generic_pm_domain *genpd, struct device *dev)
+static int genpd_start_dev(struct generic_pm_domain *genpd, struct device *dev,
+                           bool timed)
 {
+        if (!timed)
+                return GENPD_DEV_CALLBACK(genpd, int, start, dev);
+
         return GENPD_DEV_TIMED_CALLBACK(genpd, int, start, dev,
                                         start_latency_ns, "start");
 }
···
 {
         atomic_inc(&genpd->sd_count);
         smp_mb__after_atomic();
-}
-
-static void genpd_acquire_lock(struct generic_pm_domain *genpd)
-{
-        DEFINE_WAIT(wait);
-
-        mutex_lock(&genpd->lock);
-        /*
-         * Wait for the domain to transition into either the active,
-         * or the power off state.
-         */
-        for (;;) {
-                prepare_to_wait(&genpd->status_wait_queue, &wait,
-                                TASK_UNINTERRUPTIBLE);
-                if (genpd->status == GPD_STATE_ACTIVE
-                    || genpd->status == GPD_STATE_POWER_OFF)
-                        break;
-                mutex_unlock(&genpd->lock);
-
-                schedule();
-
-                mutex_lock(&genpd->lock);
-        }
-        finish_wait(&genpd->status_wait_queue, &wait);
-}
-
-static void genpd_release_lock(struct generic_pm_domain *genpd)
-{
-        mutex_unlock(&genpd->lock);
-}
-
-static void genpd_set_active(struct generic_pm_domain *genpd)
-{
-        if (genpd->resume_count == 0)
-                genpd->status = GPD_STATE_ACTIVE;
 }
 
 static void genpd_recalc_cpu_exit_latency(struct generic_pm_domain *genpd)
···
  * resume a device belonging to it.
  */
 static int __pm_genpd_poweron(struct generic_pm_domain *genpd)
-        __releases(&genpd->lock) __acquires(&genpd->lock)
 {
         struct gpd_link *link;
-        DEFINE_WAIT(wait);
         int ret = 0;
-
-        /* If the domain's master is being waited for, we have to wait too. */
-        for (;;) {
-                prepare_to_wait(&genpd->status_wait_queue, &wait,
-                                TASK_UNINTERRUPTIBLE);
-                if (genpd->status != GPD_STATE_WAIT_MASTER)
-                        break;
-                mutex_unlock(&genpd->lock);
-
-                schedule();
-
-                mutex_lock(&genpd->lock);
-        }
-        finish_wait(&genpd->status_wait_queue, &wait);
 
         if (genpd->status == GPD_STATE_ACTIVE
             || (genpd->prepared_count > 0 && genpd->suspend_power_off))
                 return 0;
-
-        if (genpd->status != GPD_STATE_POWER_OFF) {
-                genpd_set_active(genpd);
-                return 0;
-        }
 
         if (genpd->cpuidle_data) {
                 cpuidle_pause_and_lock();
···
          */
         list_for_each_entry(link, &genpd->slave_links, slave_node) {
                 genpd_sd_counter_inc(link->master);
-                genpd->status = GPD_STATE_WAIT_MASTER;
-
-                mutex_unlock(&genpd->lock);
 
                 ret = pm_genpd_poweron(link->master);
-
-                mutex_lock(&genpd->lock);
-
-                /*
-                 * The "wait for parent" status is guaranteed not to change
-                 * while the master is powering on.
-                 */
-                genpd->status = GPD_STATE_POWER_OFF;
-                wake_up_all(&genpd->status_wait_queue);
                 if (ret) {
                         genpd_sd_counter_dec(link->master);
                         goto err;
···
                 goto err;
 
  out:
-        genpd_set_active(genpd);
-
+        genpd->status = GPD_STATE_ACTIVE;
         return 0;
 
  err:
···
         return genpd ? pm_genpd_poweron(genpd) : -EINVAL;
 }
 
-static int genpd_start_dev_no_timing(struct generic_pm_domain *genpd,
-                                     struct device *dev)
-{
-        return GENPD_DEV_CALLBACK(genpd, int, start, dev);
-}
-
 static int genpd_save_dev(struct generic_pm_domain *genpd, struct device *dev)
 {
         return GENPD_DEV_TIMED_CALLBACK(genpd, int, save_state, dev,
                                         save_state_latency_ns, "state save");
 }
 
-static int genpd_restore_dev(struct generic_pm_domain *genpd, struct device *dev)
+static int genpd_restore_dev(struct generic_pm_domain *genpd,
+                             struct device *dev, bool timed)
 {
+        if (!timed)
+                return GENPD_DEV_CALLBACK(genpd, int, restore_state, dev);
+
         return GENPD_DEV_TIMED_CALLBACK(genpd, int, restore_state, dev,
                                         restore_state_latency_ns,
                                         "state restore");
···
 }
 
 /**
- * __pm_genpd_save_device - Save the pre-suspend state of a device.
- * @pdd: Domain data of the device to save the state of.
- * @genpd: PM domain the device belongs to.
- */
-static int __pm_genpd_save_device(struct pm_domain_data *pdd,
-                                  struct generic_pm_domain *genpd)
-        __releases(&genpd->lock) __acquires(&genpd->lock)
-{
-        struct generic_pm_domain_data *gpd_data = to_gpd_data(pdd);
-        struct device *dev = pdd->dev;
-        int ret = 0;
-
-        if (gpd_data->need_restore > 0)
-                return 0;
-
-        /*
-         * If the value of the need_restore flag is still unknown at this point,
-         * we trust that pm_genpd_poweroff() has verified that the device is
-         * already runtime PM suspended.
-         */
-        if (gpd_data->need_restore < 0) {
-                gpd_data->need_restore = 1;
-                return 0;
-        }
-
-        mutex_unlock(&genpd->lock);
-
-        genpd_start_dev(genpd, dev);
-        ret = genpd_save_dev(genpd, dev);
-        genpd_stop_dev(genpd, dev);
-
-        mutex_lock(&genpd->lock);
-
-        if (!ret)
-                gpd_data->need_restore = 1;
-
-        return ret;
-}
-
-/**
- * __pm_genpd_restore_device - Restore the pre-suspend state of a device.
- * @pdd: Domain data of the device to restore the state of.
- * @genpd: PM domain the device belongs to.
- */
-static void __pm_genpd_restore_device(struct pm_domain_data *pdd,
-                                      struct generic_pm_domain *genpd)
-        __releases(&genpd->lock) __acquires(&genpd->lock)
-{
-        struct generic_pm_domain_data *gpd_data = to_gpd_data(pdd);
-        struct device *dev = pdd->dev;
-        int need_restore = gpd_data->need_restore;
-
-        gpd_data->need_restore = 0;
-        mutex_unlock(&genpd->lock);
-
-        genpd_start_dev(genpd, dev);
-
-        /*
-         * Call genpd_restore_dev() for recently added devices too (need_restore
-         * is negative then).
-         */
-        if (need_restore)
-                genpd_restore_dev(genpd, dev);
-
-        mutex_lock(&genpd->lock);
-}
-
-/**
- * genpd_abort_poweroff - Check if a PM domain power off should be aborted.
- * @genpd: PM domain to check.
- *
- * Return true if a PM domain's status changed to GPD_STATE_ACTIVE during
- * a "power off" operation, which means that a "power on" has occured in the
- * meantime, or if its resume_count field is different from zero, which means
- * that one of its devices has been resumed in the meantime.
- */
-static bool genpd_abort_poweroff(struct generic_pm_domain *genpd)
-{
-        return genpd->status == GPD_STATE_WAIT_MASTER
-                || genpd->status == GPD_STATE_ACTIVE || genpd->resume_count > 0;
-}
-
-/**
  * genpd_queue_power_off_work - Queue up the execution of pm_genpd_poweroff().
  * @genpd: PM domait to power off.
  *
···
  * @genpd: PM domain to power down.
  *
  * If all of the @genpd's devices have been suspended and all of its subdomains
- * have been powered down, run the runtime suspend callbacks provided by all of
- * the @genpd's devices' drivers and remove power from @genpd.
+ * have been powered down, remove power from @genpd.
  */
 static int pm_genpd_poweroff(struct generic_pm_domain *genpd)
-        __releases(&genpd->lock) __acquires(&genpd->lock)
 {
         struct pm_domain_data *pdd;
         struct gpd_link *link;
-        unsigned int not_suspended;
-        int ret = 0;
+        unsigned int not_suspended = 0;
 
- start:
         /*
          * Do not try to power off the domain in the following situations:
          * (1) The domain is already in the "power off" state.
-         * (2) The domain is waiting for its master to power up.
-         * (3) One of the domain's devices is being resumed right now.
-         * (4) System suspend is in progress.
+         * (2) System suspend is in progress.
          */
         if (genpd->status == GPD_STATE_POWER_OFF
-            || genpd->status == GPD_STATE_WAIT_MASTER
-            || genpd->resume_count > 0 || genpd->prepared_count > 0)
+            || genpd->prepared_count > 0)
                 return 0;
 
         if (atomic_read(&genpd->sd_count) > 0)
                 return -EBUSY;
 
-        not_suspended = 0;
         list_for_each_entry(pdd, &genpd->dev_list, list_node) {
                 enum pm_qos_flags_status stat;
···
         if (not_suspended > genpd->in_progress)
                 return -EBUSY;
 
-        if (genpd->poweroff_task) {
-                /*
-                 * Another instance of pm_genpd_poweroff() is executing
-                 * callbacks, so tell it to start over and return.
-                 */
-                genpd->status = GPD_STATE_REPEAT;
-                return 0;
-        }
-
         if (genpd->gov && genpd->gov->power_down_ok) {
                 if (!genpd->gov->power_down_ok(&genpd->domain))
                         return -EAGAIN;
-        }
-
-        genpd->status = GPD_STATE_BUSY;
-        genpd->poweroff_task = current;
-
-        list_for_each_entry_reverse(pdd, &genpd->dev_list, list_node) {
-                ret = atomic_read(&genpd->sd_count) == 0 ?
-                        __pm_genpd_save_device(pdd, genpd) : -EBUSY;
-
-                if (genpd_abort_poweroff(genpd))
-                        goto out;
-
-                if (ret) {
-                        genpd_set_active(genpd);
-                        goto out;
-                }
-
-                if (genpd->status == GPD_STATE_REPEAT) {
-                        genpd->poweroff_task = NULL;
-                        goto start;
-                }
         }
 
         if (genpd->cpuidle_data) {
···
                 cpuidle_pause_and_lock();
                 genpd->cpuidle_data->idle_state->disabled = false;
                 cpuidle_resume_and_unlock();
-                goto out;
+                return 0;
         }
 
         if (genpd->power_off) {
-                if (atomic_read(&genpd->sd_count) > 0) {
-                        ret = -EBUSY;
-                        goto out;
-                }
+                int ret;
+
+                if (atomic_read(&genpd->sd_count) > 0)
+                        return -EBUSY;
 
                 /*
                  * If sd_count > 0 at this point, one of the subdomains hasn't
···
                  * happen very often).
                  */
                 ret = genpd_power_off(genpd, true);
-                if (ret == -EBUSY) {
-                        genpd_set_active(genpd);
-                        goto out;
-                }
+                if (ret)
+                        return ret;
         }
 
         genpd->status = GPD_STATE_POWER_OFF;
···
                 genpd_queue_power_off_work(link->master);
         }
 
-out:
-        genpd->poweroff_task = NULL;
-        wake_up_all(&genpd->status_wait_queue);
-        return ret;
+        return 0;
 }
 
 /**
···
 
         genpd = container_of(work, struct generic_pm_domain, power_off_work);
 
-        genpd_acquire_lock(genpd);
+        mutex_lock(&genpd->lock);
         pm_genpd_poweroff(genpd);
-        genpd_release_lock(genpd);
+        mutex_unlock(&genpd->lock);
 }
 
 /**
···
 static int pm_genpd_runtime_suspend(struct device *dev)
 {
         struct generic_pm_domain *genpd;
-        struct generic_pm_domain_data *gpd_data;
         bool (*stop_ok)(struct device *__dev);
         int ret;
···
         if (stop_ok && !stop_ok(dev))
                 return -EBUSY;
 
-        ret = genpd_stop_dev(genpd, dev);
+        ret = genpd_save_dev(genpd, dev);
         if (ret)
                 return ret;
+
+        ret = genpd_stop_dev(genpd, dev);
+        if (ret) {
+                genpd_restore_dev(genpd, dev, true);
+                return ret;
+        }
 
         /*
          * If power.irq_safe is set, this routine will be run with interrupts
···
                 return 0;
 
         mutex_lock(&genpd->lock);
-
-        /*
-         * If we have an unknown state of the need_restore flag, it means none
-         * of the runtime PM callbacks has been invoked yet. Let's update the
-         * flag to reflect that the current state is active.
-         */
-        gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
-        if (gpd_data->need_restore < 0)
-                gpd_data->need_restore = 0;
-
         genpd->in_progress++;
         pm_genpd_poweroff(genpd);
         genpd->in_progress--;
···
 static int pm_genpd_runtime_resume(struct device *dev)
 {
         struct generic_pm_domain *genpd;
-        DEFINE_WAIT(wait);
         int ret;
+        bool timed = true;
 
         dev_dbg(dev, "%s()\n", __func__);
···
                 return -EINVAL;
 
         /* If power.irq_safe, the PM domain is never powered off. */
-        if (dev->power.irq_safe)
-                return genpd_start_dev_no_timing(genpd, dev);
+        if (dev->power.irq_safe) {
+                timed = false;
+                goto out;
+        }
 
         mutex_lock(&genpd->lock);
         ret = __pm_genpd_poweron(genpd);
-        if (ret) {
-                mutex_unlock(&genpd->lock);
-                return ret;
-        }
-        genpd->status = GPD_STATE_BUSY;
-        genpd->resume_count++;
-        for (;;) {
-                prepare_to_wait(&genpd->status_wait_queue, &wait,
-                                TASK_UNINTERRUPTIBLE);
-                /*
-                 * If current is the powering off task, we have been called
-                 * reentrantly from one of the device callbacks, so we should
-                 * not wait.
558 - */ 559 - if (!genpd->poweroff_task || genpd->poweroff_task == current) 560 - break; 561 - mutex_unlock(&genpd->lock); 562 - 563 - schedule(); 564 - 565 - mutex_lock(&genpd->lock); 566 - } 567 - finish_wait(&genpd->status_wait_queue, &wait); 568 - __pm_genpd_restore_device(dev->power.subsys_data->domain_data, genpd); 569 - genpd->resume_count--; 570 - genpd_set_active(genpd); 571 - wake_up_all(&genpd->status_wait_queue); 572 745 mutex_unlock(&genpd->lock); 746 + 747 + if (ret) 748 + return ret; 749 + 750 + out: 751 + genpd_start_dev(genpd, dev, timed); 752 + genpd_restore_dev(genpd, dev, timed); 573 753 574 754 return 0; 575 755 } ··· 667 883 { 668 884 struct gpd_link *link; 669 885 670 - if (genpd->status != GPD_STATE_POWER_OFF) 886 + if (genpd->status == GPD_STATE_ACTIVE) 671 887 return; 672 888 673 889 list_for_each_entry(link, &genpd->slave_links, slave_node) { ··· 744 960 if (resume_needed(dev, genpd)) 745 961 pm_runtime_resume(dev); 746 962 747 - genpd_acquire_lock(genpd); 963 + mutex_lock(&genpd->lock); 748 964 749 965 if (genpd->prepared_count++ == 0) { 750 966 genpd->suspended_count = 0; 751 967 genpd->suspend_power_off = genpd->status == GPD_STATE_POWER_OFF; 752 968 } 753 969 754 - genpd_release_lock(genpd); 970 + mutex_unlock(&genpd->lock); 755 971 756 972 if (genpd->suspend_power_off) { 757 973 pm_runtime_put_noidle(dev); ··· 886 1102 pm_genpd_sync_poweron(genpd, true); 887 1103 genpd->suspended_count--; 888 1104 889 - return genpd_start_dev(genpd, dev); 1105 + return genpd_start_dev(genpd, dev, true); 890 1106 } 891 1107 892 1108 /** ··· 1014 1230 if (IS_ERR(genpd)) 1015 1231 return -EINVAL; 1016 1232 1017 - return genpd->suspend_power_off ? 0 : genpd_start_dev(genpd, dev); 1233 + return genpd->suspend_power_off ? 
0 : genpd_start_dev(genpd, dev, true); 1018 1234 } 1019 1235 1020 1236 /** ··· 1108 1324 1109 1325 pm_genpd_sync_poweron(genpd, true); 1110 1326 1111 - return genpd_start_dev(genpd, dev); 1327 + return genpd_start_dev(genpd, dev, true); 1112 1328 } 1113 1329 1114 1330 /** ··· 1224 1440 gpd_data->td = *td; 1225 1441 1226 1442 gpd_data->base.dev = dev; 1227 - gpd_data->need_restore = -1; 1228 1443 gpd_data->td.constraint_changed = true; 1229 1444 gpd_data->td.effective_constraint_ns = -1; 1230 1445 gpd_data->nb.notifier_call = genpd_dev_pm_qos_notifier; ··· 1285 1502 if (IS_ERR(gpd_data)) 1286 1503 return PTR_ERR(gpd_data); 1287 1504 1288 - genpd_acquire_lock(genpd); 1505 + mutex_lock(&genpd->lock); 1289 1506 1290 1507 if (genpd->prepared_count > 0) { 1291 1508 ret = -EAGAIN; ··· 1302 1519 list_add_tail(&gpd_data->base.list_node, &genpd->dev_list); 1303 1520 1304 1521 out: 1305 - genpd_release_lock(genpd); 1522 + mutex_unlock(&genpd->lock); 1306 1523 1307 1524 if (ret) 1308 1525 genpd_free_dev_data(dev, gpd_data); ··· 1346 1563 gpd_data = to_gpd_data(pdd); 1347 1564 dev_pm_qos_remove_notifier(dev, &gpd_data->nb); 1348 1565 1349 - genpd_acquire_lock(genpd); 1566 + mutex_lock(&genpd->lock); 1350 1567 1351 1568 if (genpd->prepared_count > 0) { 1352 1569 ret = -EAGAIN; ··· 1361 1578 1362 1579 list_del_init(&pdd->list_node); 1363 1580 1364 - genpd_release_lock(genpd); 1581 + mutex_unlock(&genpd->lock); 1365 1582 1366 1583 genpd_free_dev_data(dev, gpd_data); 1367 1584 1368 1585 return 0; 1369 1586 1370 1587 out: 1371 - genpd_release_lock(genpd); 1588 + mutex_unlock(&genpd->lock); 1372 1589 dev_pm_qos_add_notifier(dev, &gpd_data->nb); 1373 1590 1374 1591 return ret; ··· 1389 1606 || genpd == subdomain) 1390 1607 return -EINVAL; 1391 1608 1392 - start: 1393 - genpd_acquire_lock(genpd); 1609 + mutex_lock(&genpd->lock); 1394 1610 mutex_lock_nested(&subdomain->lock, SINGLE_DEPTH_NESTING); 1395 - 1396 - if (subdomain->status != GPD_STATE_POWER_OFF 1397 - && subdomain->status != 
GPD_STATE_ACTIVE) { 1398 - mutex_unlock(&subdomain->lock); 1399 - genpd_release_lock(genpd); 1400 - goto start; 1401 - } 1402 1611 1403 1612 if (genpd->status == GPD_STATE_POWER_OFF 1404 1613 && subdomain->status != GPD_STATE_POWER_OFF) { ··· 1419 1644 1420 1645 out: 1421 1646 mutex_unlock(&subdomain->lock); 1422 - genpd_release_lock(genpd); 1647 + mutex_unlock(&genpd->lock); 1423 1648 1424 1649 return ret; 1425 1650 } ··· 1467 1692 if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(subdomain)) 1468 1693 return -EINVAL; 1469 1694 1470 - start: 1471 - genpd_acquire_lock(genpd); 1695 + mutex_lock(&genpd->lock); 1472 1696 1473 1697 list_for_each_entry(link, &genpd->master_links, master_node) { 1474 1698 if (link->slave != subdomain) 1475 1699 continue; 1476 1700 1477 1701 mutex_lock_nested(&subdomain->lock, SINGLE_DEPTH_NESTING); 1478 - 1479 - if (subdomain->status != GPD_STATE_POWER_OFF 1480 - && subdomain->status != GPD_STATE_ACTIVE) { 1481 - mutex_unlock(&subdomain->lock); 1482 - genpd_release_lock(genpd); 1483 - goto start; 1484 - } 1485 1702 1486 1703 list_del(&link->master_node); 1487 1704 list_del(&link->slave_node); ··· 1487 1720 break; 1488 1721 } 1489 1722 1490 - genpd_release_lock(genpd); 1723 + mutex_unlock(&genpd->lock); 1491 1724 1492 1725 return ret; 1493 1726 } ··· 1511 1744 if (IS_ERR_OR_NULL(genpd) || state < 0) 1512 1745 return -EINVAL; 1513 1746 1514 - genpd_acquire_lock(genpd); 1747 + mutex_lock(&genpd->lock); 1515 1748 1516 1749 if (genpd->cpuidle_data) { 1517 1750 ret = -EEXIST; ··· 1542 1775 genpd_recalc_cpu_exit_latency(genpd); 1543 1776 1544 1777 out: 1545 - genpd_release_lock(genpd); 1778 + mutex_unlock(&genpd->lock); 1546 1779 return ret; 1547 1780 1548 1781 err: ··· 1579 1812 if (IS_ERR_OR_NULL(genpd)) 1580 1813 return -EINVAL; 1581 1814 1582 - genpd_acquire_lock(genpd); 1815 + mutex_lock(&genpd->lock); 1583 1816 1584 1817 cpuidle_data = genpd->cpuidle_data; 1585 1818 if (!cpuidle_data) { ··· 1597 1830 kfree(cpuidle_data); 1598 1831 1599 1832 
out: 1600 - genpd_release_lock(genpd); 1833 + mutex_unlock(&genpd->lock); 1601 1834 return ret; 1602 1835 } 1603 1836 ··· 1679 1912 genpd->in_progress = 0; 1680 1913 atomic_set(&genpd->sd_count, 0); 1681 1914 genpd->status = is_off ? GPD_STATE_POWER_OFF : GPD_STATE_ACTIVE; 1682 - init_waitqueue_head(&genpd->status_wait_queue); 1683 - genpd->poweroff_task = NULL; 1684 - genpd->resume_count = 0; 1685 1915 genpd->device_count = 0; 1686 1916 genpd->max_off_time_ns = -1; 1687 1917 genpd->max_off_time_changed = true; ··· 1716 1952 list_add(&genpd->gpd_list_node, &gpd_list); 1717 1953 mutex_unlock(&gpd_list_lock); 1718 1954 } 1955 + EXPORT_SYMBOL_GPL(pm_genpd_init); 1719 1956 1720 1957 #ifdef CONFIG_PM_GENERIC_DOMAINS_OF 1721 1958 /* ··· 1890 2125 1891 2126 /** 1892 2127 * genpd_dev_pm_detach - Detach a device from its PM domain. 1893 - * @dev: Device to attach. 2128 + * @dev: Device to detach. 1894 2129 * @power_off: Currently not used 1895 2130 * 1896 2131 * Try to locate a corresponding generic PM domain, which the device was ··· 1948 2183 * Both generic and legacy Samsung-specific DT bindings are supported to keep 1949 2184 * backwards compatibility with existing DTBs. 1950 2185 * 1951 - * Returns 0 on successfully attached PM domain or negative error code. 2186 + * Returns 0 on successfully attached PM domain or negative error code. Note 2187 + * that if a power-domain exists for the device, but it cannot be found or 2188 + * turned on, then return -EPROBE_DEFER to ensure that the device is not 2189 + * probed and to re-try again later. 
1952 2190 */ 1953 2191 int genpd_dev_pm_attach(struct device *dev) 1954 2192 { ··· 1988 2220 dev_dbg(dev, "%s() failed to find PM domain: %ld\n", 1989 2221 __func__, PTR_ERR(pd)); 1990 2222 of_node_put(dev->of_node); 1991 - return PTR_ERR(pd); 2223 + return -EPROBE_DEFER; 1992 2224 } 1993 2225 1994 2226 dev_dbg(dev, "adding to PM domain %s\n", pd->name); ··· 2006 2238 dev_err(dev, "failed to add to PM domain %s: %d", 2007 2239 pd->name, ret); 2008 2240 of_node_put(dev->of_node); 2009 - return ret; 2241 + goto out; 2010 2242 } 2011 2243 2012 2244 dev->pm_domain->detach = genpd_dev_pm_detach; 2013 2245 dev->pm_domain->sync = genpd_dev_pm_sync; 2014 - pm_genpd_poweron(pd); 2246 + ret = pm_genpd_poweron(pd); 2015 2247 2016 - return 0; 2248 + out: 2249 + return ret ? -EPROBE_DEFER : 0; 2017 2250 } 2018 2251 EXPORT_SYMBOL_GPL(genpd_dev_pm_attach); 2019 2252 #endif /* CONFIG_PM_GENERIC_DOMAINS_OF */ ··· 2062 2293 { 2063 2294 static const char * const status_lookup[] = { 2064 2295 [GPD_STATE_ACTIVE] = "on", 2065 - [GPD_STATE_WAIT_MASTER] = "wait-master", 2066 - [GPD_STATE_BUSY] = "busy", 2067 - [GPD_STATE_REPEAT] = "off-in-progress", 2068 2296 [GPD_STATE_POWER_OFF] = "off" 2069 2297 }; 2070 2298 struct pm_domain_data *pm_data; ··· 2075 2309 2076 2310 if (WARN_ON(genpd->status >= ARRAY_SIZE(status_lookup))) 2077 2311 goto exit; 2078 - seq_printf(s, "%-30s %-15s ", genpd->name, status_lookup[genpd->status]); 2312 + seq_printf(s, "%-30s %-15s ", genpd->name, status_lookup[genpd->status]); 2079 2313 2080 2314 /* 2081 2315 * Modifications on the list require holding locks on both ··· 2110 2344 struct generic_pm_domain *genpd; 2111 2345 int ret = 0; 2112 2346 2113 - seq_puts(s, " domain status slaves\n"); 2114 - seq_puts(s, " /device runtime status\n"); 2347 + seq_puts(s, "domain status slaves\n"); 2348 + seq_puts(s, " /device runtime status\n"); 2115 2349 seq_puts(s, "----------------------------------------------------------------------\n"); 2116 2350 2117 2351 ret = 
mutex_lock_interruptible(&gpd_list_lock);
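The domain.c hunks above strip out the old `poweroff_task`/`status_wait_queue` machinery, leaving a domain that is simply ACTIVE or POWER_OFF with every check made under one mutex. A minimal userspace sketch of the resulting gating logic in pm_genpd_poweroff() (field names are illustrative models, not the kernel structs):

```c
#include <stdbool.h>

#define MODEL_EBUSY 16	/* stand-in for the kernel's -EBUSY */

/* Illustrative model only: after this merge a domain is either active or
 * powered off, so the early-return checks can all run under one lock. */
enum gpd_status { GPD_STATE_ACTIVE, GPD_STATE_POWER_OFF };

struct model_domain {
	enum gpd_status status;
	int prepared_count;	/* > 0 while system suspend is in progress */
	int sd_count;		/* powered-on subdomains */
	int not_suspended;	/* devices still runtime-active */
	int in_progress;	/* runtime suspends currently executing */
};

/* Mirrors the early-return structure of the new pm_genpd_poweroff(). */
static int model_poweroff(struct model_domain *d)
{
	if (d->status == GPD_STATE_POWER_OFF || d->prepared_count > 0)
		return 0;		/* already off, or suspend in progress */
	if (d->sd_count > 0)
		return -MODEL_EBUSY;	/* a subdomain is still powered on */
	if (d->not_suspended > d->in_progress)
		return -MODEL_EBUSY;	/* some device is still active */
	d->status = GPD_STATE_POWER_OFF;
	return 0;
}
```

Without GPD_STATE_BUSY/GPD_STATE_REPEAT there is no re-entrant "start over" path, which is also why `genpd_abort_poweroff()` and the `start:` label disappear in the diff.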
+1 -1
drivers/base/power/main.c
··· 1377 1377 if (dev->power.direct_complete) { 1378 1378 if (pm_runtime_status_suspended(dev)) { 1379 1379 pm_runtime_disable(dev); 1380 - if (pm_runtime_suspended_if_enabled(dev)) 1380 + if (pm_runtime_status_suspended(dev)) 1381 1381 goto Complete; 1382 1382 1383 1383 pm_runtime_enable(dev);
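The one-line main.c change matters because the old helper did more than read the runtime-PM status: pm_runtime_suspended_if_enabled() also required runtime PM to have been disabled exactly once, while this direct_complete path has just called pm_runtime_disable() itself. A hedged userspace model of the difference between the two predicates (field names are illustrative, not the real struct dev_pm_info):

```c
#include <stdbool.h>

/* Illustrative model of the two checks; not the real dev_pm_info layout. */
struct model_dev {
	bool status_suspended;	/* runtime status is "suspended" */
	int disable_depth;	/* bumped by each pm_runtime_disable() */
};

/* New check: consider only the recorded runtime-PM status. */
static bool model_status_suspended(const struct model_dev *d)
{
	return d->status_suspended;
}

/* Old helper (as modeled here): additionally insisted that runtime PM was
 * disabled exactly once, so an extra, unrelated disable made it false. */
static bool model_suspended_if_enabled(const struct model_dev *d)
{
	return d->status_suspended && d->disable_depth == 1;
}
```

Under this model the two checks diverge only when disable_depth differs from 1, which is exactly the situation the hunk above can reach after its own pm_runtime_disable() call.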
+850 -209
drivers/base/power/opp.c
··· 11 11 * published by the Free Software Foundation. 12 12 */ 13 13 14 + #include <linux/cpu.h> 14 15 #include <linux/kernel.h> 15 16 #include <linux/errno.h> 16 17 #include <linux/err.h> ··· 52 51 * order. 53 52 * @dynamic: not-created from static DT entries. 54 53 * @available: true/false - marks if this OPP as available or not 54 + * @turbo: true if turbo (boost) OPP 55 55 * @rate: Frequency in hertz 56 - * @u_volt: Nominal voltage in microvolts corresponding to this OPP 56 + * @u_volt: Target voltage in microvolts corresponding to this OPP 57 + * @u_volt_min: Minimum voltage in microvolts corresponding to this OPP 58 + * @u_volt_max: Maximum voltage in microvolts corresponding to this OPP 59 + * @u_amp: Maximum current drawn by the device in microamperes 60 + * @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's 61 + * frequency from any other OPP's frequency. 57 62 * @dev_opp: points back to the device_opp struct this opp belongs to 58 63 * @rcu_head: RCU callback head used for deferred freeing 64 + * @np: OPP's device node. 59 65 * 60 66 * This structure stores the OPP information for a given device. 61 67 */ ··· 71 63 72 64 bool available; 73 65 bool dynamic; 66 + bool turbo; 74 67 unsigned long rate; 68 + 75 69 unsigned long u_volt; 70 + unsigned long u_volt_min; 71 + unsigned long u_volt_max; 72 + unsigned long u_amp; 73 + unsigned long clock_latency_ns; 76 74 77 75 struct device_opp *dev_opp; 76 + struct rcu_head rcu_head; 77 + 78 + struct device_node *np; 79 + }; 80 + 81 + /** 82 + * struct device_list_opp - devices managed by 'struct device_opp' 83 + * @node: list node 84 + * @dev: device to which the struct object belongs 85 + * @rcu_head: RCU callback head used for deferred freeing 86 + * 87 + * This is an internal data structure maintaining the list of devices that are 88 + * managed by 'struct device_opp'. 
89 + */ 90 + struct device_list_opp { 91 + struct list_head node; 92 + const struct device *dev; 78 93 struct rcu_head rcu_head; 79 94 }; 80 95 ··· 108 77 * list. 109 78 * RCU usage: nodes are not modified in the list of device_opp, 110 79 * however addition is possible and is secured by dev_opp_list_lock 111 - * @dev: device pointer 112 80 * @srcu_head: notifier head to notify the OPP availability changes. 113 81 * @rcu_head: RCU callback head used for deferred freeing 82 + * @dev_list: list of devices that share these OPPs 114 83 * @opp_list: list of opps 84 + * @np: struct device_node pointer for opp's DT node. 85 + * @shared_opp: OPP is shared between multiple devices. 115 86 * 116 87 * This is an internal data structure maintaining the link to opps attached to 117 88 * a device. This structure is not meant to be shared to users as it is ··· 126 93 struct device_opp { 127 94 struct list_head node; 128 95 129 - struct device *dev; 130 96 struct srcu_notifier_head srcu_head; 131 97 struct rcu_head rcu_head; 98 + struct list_head dev_list; 132 99 struct list_head opp_list; 100 + 101 + struct device_node *np; 102 + unsigned long clock_latency_ns_max; 103 + bool shared_opp; 104 + struct dev_pm_opp *suspend_opp; 133 105 }; 134 106 135 107 /* ··· 154 116 "dev_opp_list_lock protection"); \ 155 117 } while (0) 156 118 119 + static struct device_list_opp *_find_list_dev(const struct device *dev, 120 + struct device_opp *dev_opp) 121 + { 122 + struct device_list_opp *list_dev; 123 + 124 + list_for_each_entry(list_dev, &dev_opp->dev_list, node) 125 + if (list_dev->dev == dev) 126 + return list_dev; 127 + 128 + return NULL; 129 + } 130 + 131 + static struct device_opp *_managed_opp(const struct device_node *np) 132 + { 133 + struct device_opp *dev_opp; 134 + 135 + list_for_each_entry_rcu(dev_opp, &dev_opp_list, node) { 136 + if (dev_opp->np == np) { 137 + /* 138 + * Multiple devices can point to the same OPP table and 139 + * so will have same node-pointer, np. 
140 + * 141 + * But the OPPs will be considered as shared only if the 142 + * OPP table contains a "opp-shared" property. 143 + */ 144 + return dev_opp->shared_opp ? dev_opp : NULL; 145 + } 146 + } 147 + 148 + return NULL; 149 + } 150 + 157 151 /** 158 152 * _find_device_opp() - find device_opp struct using device pointer 159 153 * @dev: device pointer used to lookup device OPPs ··· 202 132 */ 203 133 static struct device_opp *_find_device_opp(struct device *dev) 204 134 { 205 - struct device_opp *tmp_dev_opp, *dev_opp = ERR_PTR(-ENODEV); 135 + struct device_opp *dev_opp; 206 136 207 - if (unlikely(IS_ERR_OR_NULL(dev))) { 137 + if (IS_ERR_OR_NULL(dev)) { 208 138 pr_err("%s: Invalid parameters\n", __func__); 209 139 return ERR_PTR(-EINVAL); 210 140 } 211 141 212 - list_for_each_entry_rcu(tmp_dev_opp, &dev_opp_list, node) { 213 - if (tmp_dev_opp->dev == dev) { 214 - dev_opp = tmp_dev_opp; 215 - break; 216 - } 217 - } 142 + list_for_each_entry_rcu(dev_opp, &dev_opp_list, node) 143 + if (_find_list_dev(dev, dev_opp)) 144 + return dev_opp; 218 145 219 - return dev_opp; 146 + return ERR_PTR(-ENODEV); 220 147 } 221 148 222 149 /** ··· 239 172 opp_rcu_lockdep_assert(); 240 173 241 174 tmp_opp = rcu_dereference(opp); 242 - if (unlikely(IS_ERR_OR_NULL(tmp_opp)) || !tmp_opp->available) 175 + if (IS_ERR_OR_NULL(tmp_opp) || !tmp_opp->available) 243 176 pr_err("%s: Invalid parameters\n", __func__); 244 177 else 245 178 v = tmp_opp->u_volt; ··· 271 204 opp_rcu_lockdep_assert(); 272 205 273 206 tmp_opp = rcu_dereference(opp); 274 - if (unlikely(IS_ERR_OR_NULL(tmp_opp)) || !tmp_opp->available) 207 + if (IS_ERR_OR_NULL(tmp_opp) || !tmp_opp->available) 275 208 pr_err("%s: Invalid parameters\n", __func__); 276 209 else 277 210 f = tmp_opp->rate; ··· 279 212 return f; 280 213 } 281 214 EXPORT_SYMBOL_GPL(dev_pm_opp_get_freq); 215 + 216 + /** 217 + * dev_pm_opp_is_turbo() - Returns if opp is turbo OPP or not 218 + * @opp: opp for which turbo mode is being verified 219 + * 220 + * Turbo 
OPPs are not for normal use, and can be enabled (under certain 221 + * conditions) for short duration of times to finish high throughput work 222 + * quickly. Running on them for longer times may overheat the chip. 223 + * 224 + * Return: true if opp is turbo opp, else false. 225 + * 226 + * Locking: This function must be called under rcu_read_lock(). opp is a rcu 227 + * protected pointer. This means that opp which could have been fetched by 228 + * opp_find_freq_{exact,ceil,floor} functions is valid as long as we are 229 + * under RCU lock. The pointer returned by the opp_find_freq family must be 230 + * used in the same section as the usage of this function with the pointer 231 + * prior to unlocking with rcu_read_unlock() to maintain the integrity of the 232 + * pointer. 233 + */ 234 + bool dev_pm_opp_is_turbo(struct dev_pm_opp *opp) 235 + { 236 + struct dev_pm_opp *tmp_opp; 237 + 238 + opp_rcu_lockdep_assert(); 239 + 240 + tmp_opp = rcu_dereference(opp); 241 + if (IS_ERR_OR_NULL(tmp_opp) || !tmp_opp->available) { 242 + pr_err("%s: Invalid parameters\n", __func__); 243 + return false; 244 + } 245 + 246 + return tmp_opp->turbo; 247 + } 248 + EXPORT_SYMBOL_GPL(dev_pm_opp_is_turbo); 249 + 250 + /** 251 + * dev_pm_opp_get_max_clock_latency() - Get max clock latency in nanoseconds 252 + * @dev: device for which we do this operation 253 + * 254 + * Return: This function returns the max clock latency in nanoseconds. 255 + * 256 + * Locking: This function takes rcu_read_lock(). 
257 + */ 258 + unsigned long dev_pm_opp_get_max_clock_latency(struct device *dev) 259 + { 260 + struct device_opp *dev_opp; 261 + unsigned long clock_latency_ns; 262 + 263 + rcu_read_lock(); 264 + 265 + dev_opp = _find_device_opp(dev); 266 + if (IS_ERR(dev_opp)) 267 + clock_latency_ns = 0; 268 + else 269 + clock_latency_ns = dev_opp->clock_latency_ns_max; 270 + 271 + rcu_read_unlock(); 272 + return clock_latency_ns; 273 + } 274 + EXPORT_SYMBOL_GPL(dev_pm_opp_get_max_clock_latency); 282 275 283 276 /** 284 277 * dev_pm_opp_get_opp_count() - Get number of opps available in the opp list ··· 534 407 } 535 408 EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_floor); 536 409 410 + /* List-dev Helpers */ 411 + static void _kfree_list_dev_rcu(struct rcu_head *head) 412 + { 413 + struct device_list_opp *list_dev; 414 + 415 + list_dev = container_of(head, struct device_list_opp, rcu_head); 416 + kfree_rcu(list_dev, rcu_head); 417 + } 418 + 419 + static void _remove_list_dev(struct device_list_opp *list_dev, 420 + struct device_opp *dev_opp) 421 + { 422 + list_del(&list_dev->node); 423 + call_srcu(&dev_opp->srcu_head.srcu, &list_dev->rcu_head, 424 + _kfree_list_dev_rcu); 425 + } 426 + 427 + static struct device_list_opp *_add_list_dev(const struct device *dev, 428 + struct device_opp *dev_opp) 429 + { 430 + struct device_list_opp *list_dev; 431 + 432 + list_dev = kzalloc(sizeof(*list_dev), GFP_KERNEL); 433 + if (!list_dev) 434 + return NULL; 435 + 436 + /* Initialize list-dev */ 437 + list_dev->dev = dev; 438 + list_add_rcu(&list_dev->node, &dev_opp->dev_list); 439 + 440 + return list_dev; 441 + } 442 + 537 443 /** 538 - * _add_device_opp() - Allocate a new device OPP table 444 + * _add_device_opp() - Find device OPP table or allocate a new one 539 445 * @dev: device for which we do this operation 540 446 * 541 - * New device node which uses OPPs - used when multiple devices with OPP tables 542 - * are maintained. 
447 + * It tries to find an existing table first, if it couldn't find one, it 448 + * allocates a new OPP table and returns that. 543 449 * 544 450 * Return: valid device_opp pointer if success, else NULL. 545 451 */ 546 452 static struct device_opp *_add_device_opp(struct device *dev) 547 453 { 548 454 struct device_opp *dev_opp; 455 + struct device_list_opp *list_dev; 456 + 457 + /* Check for existing list for 'dev' first */ 458 + dev_opp = _find_device_opp(dev); 459 + if (!IS_ERR(dev_opp)) 460 + return dev_opp; 549 461 550 462 /* 551 463 * Allocate a new device OPP table. In the infrequent case where a new ··· 594 428 if (!dev_opp) 595 429 return NULL; 596 430 597 - dev_opp->dev = dev; 431 + INIT_LIST_HEAD(&dev_opp->dev_list); 432 + 433 + list_dev = _add_list_dev(dev, dev_opp); 434 + if (!list_dev) { 435 + kfree(dev_opp); 436 + return NULL; 437 + } 438 + 598 439 srcu_init_notifier_head(&dev_opp->srcu_head); 599 440 INIT_LIST_HEAD(&dev_opp->opp_list); 600 441 601 442 /* Secure the device list modification */ 602 443 list_add_rcu(&dev_opp->node, &dev_opp_list); 603 444 return dev_opp; 604 - } 605 - 606 - /** 607 - * _opp_add_dynamic() - Allocate a dynamic OPP. 608 - * @dev: device for which we do this operation 609 - * @freq: Frequency in Hz for this OPP 610 - * @u_volt: Voltage in uVolts for this OPP 611 - * @dynamic: Dynamically added OPPs. 612 - * 613 - * This function adds an opp definition to the opp list and returns status. 614 - * The opp is made available by default and it can be controlled using 615 - * dev_pm_opp_enable/disable functions and may be removed by dev_pm_opp_remove. 616 - * 617 - * NOTE: "dynamic" parameter impacts OPPs added by the of_init_opp_table and 618 - * freed by of_free_opp_table. 619 - * 620 - * Locking: The internal device_opp and opp structures are RCU protected. 621 - * Hence this function internally uses RCU updater strategy with mutex locks 622 - * to keep the integrity of the internal data structures. 
Callers should ensure 623 - * that this function is *NOT* called under RCU protection or in contexts where 624 - * mutex cannot be locked. 625 - * 626 - * Return: 627 - * 0 On success OR 628 - * Duplicate OPPs (both freq and volt are same) and opp->available 629 - * -EEXIST Freq are same and volt are different OR 630 - * Duplicate OPPs (both freq and volt are same) and !opp->available 631 - * -ENOMEM Memory allocation failure 632 - */ 633 - static int _opp_add_dynamic(struct device *dev, unsigned long freq, 634 - long u_volt, bool dynamic) 635 - { 636 - struct device_opp *dev_opp = NULL; 637 - struct dev_pm_opp *opp, *new_opp; 638 - struct list_head *head; 639 - int ret; 640 - 641 - /* allocate new OPP node */ 642 - new_opp = kzalloc(sizeof(*new_opp), GFP_KERNEL); 643 - if (!new_opp) 644 - return -ENOMEM; 645 - 646 - /* Hold our list modification lock here */ 647 - mutex_lock(&dev_opp_list_lock); 648 - 649 - /* populate the opp table */ 650 - new_opp->rate = freq; 651 - new_opp->u_volt = u_volt; 652 - new_opp->available = true; 653 - new_opp->dynamic = dynamic; 654 - 655 - /* Check for existing list for 'dev' */ 656 - dev_opp = _find_device_opp(dev); 657 - if (IS_ERR(dev_opp)) { 658 - dev_opp = _add_device_opp(dev); 659 - if (!dev_opp) { 660 - ret = -ENOMEM; 661 - goto free_opp; 662 - } 663 - 664 - head = &dev_opp->opp_list; 665 - goto list_add; 666 - } 667 - 668 - /* 669 - * Insert new OPP in order of increasing frequency 670 - * and discard if already present 671 - */ 672 - head = &dev_opp->opp_list; 673 - list_for_each_entry_rcu(opp, &dev_opp->opp_list, node) { 674 - if (new_opp->rate <= opp->rate) 675 - break; 676 - else 677 - head = &opp->node; 678 - } 679 - 680 - /* Duplicate OPPs ? */ 681 - if (new_opp->rate == opp->rate) { 682 - ret = opp->available && new_opp->u_volt == opp->u_volt ? 683 - 0 : -EEXIST; 684 - 685 - dev_warn(dev, "%s: duplicate OPPs detected. Existing: freq: %lu, volt: %lu, enabled: %d. 
New: freq: %lu, volt: %lu, enabled: %d\n", 686 - __func__, opp->rate, opp->u_volt, opp->available, 687 - new_opp->rate, new_opp->u_volt, new_opp->available); 688 - goto free_opp; 689 - } 690 - 691 - list_add: 692 - new_opp->dev_opp = dev_opp; 693 - list_add_rcu(&new_opp->node, head); 694 - mutex_unlock(&dev_opp_list_lock); 695 - 696 - /* 697 - * Notify the changes in the availability of the operable 698 - * frequency/voltage list. 699 - */ 700 - srcu_notifier_call_chain(&dev_opp->srcu_head, OPP_EVENT_ADD, new_opp); 701 - return 0; 702 - 703 - free_opp: 704 - mutex_unlock(&dev_opp_list_lock); 705 - kfree(new_opp); 706 - return ret; 707 - } 708 - 709 - /** 710 - * dev_pm_opp_add() - Add an OPP table from a table definitions 711 - * @dev: device for which we do this operation 712 - * @freq: Frequency in Hz for this OPP 713 - * @u_volt: Voltage in uVolts for this OPP 714 - * 715 - * This function adds an opp definition to the opp list and returns status. 716 - * The opp is made available by default and it can be controlled using 717 - * dev_pm_opp_enable/disable functions. 718 - * 719 - * Locking: The internal device_opp and opp structures are RCU protected. 720 - * Hence this function internally uses RCU updater strategy with mutex locks 721 - * to keep the integrity of the internal data structures. Callers should ensure 722 - * that this function is *NOT* called under RCU protection or in contexts where 723 - * mutex cannot be locked. 
724 - * 725 - * Return: 726 - * 0 On success OR 727 - * Duplicate OPPs (both freq and volt are same) and opp->available 728 - * -EEXIST Freq are same and volt are different OR 729 - * Duplicate OPPs (both freq and volt are same) and !opp->available 730 - * -ENOMEM Memory allocation failure 731 - */ 732 - int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt) 733 - { 734 - return _opp_add_dynamic(dev, freq, u_volt, true); 735 - } 736 - EXPORT_SYMBOL_GPL(dev_pm_opp_add); 737 - 738 - /** 739 - * _kfree_opp_rcu() - Free OPP RCU handler 740 - * @head: RCU head 741 - */ 742 - static void _kfree_opp_rcu(struct rcu_head *head) 743 - { 744 - struct dev_pm_opp *opp = container_of(head, struct dev_pm_opp, rcu_head); 745 - 746 - kfree_rcu(opp, rcu_head); 747 445 } 748 446 749 447 /** ··· 622 592 } 623 593 624 594 /** 595 + * _remove_device_opp() - Removes a device OPP table 596 + * @dev_opp: device OPP table to be removed. 597 + * 598 + * Removes/frees device OPP table if it doesn't contain any OPPs.
599 + */ 600 + static void _remove_device_opp(struct device_opp *dev_opp) 601 + { 602 + struct device_list_opp *list_dev; 603 + 604 + if (!list_empty(&dev_opp->opp_list)) 605 + return; 606 + 607 + list_dev = list_first_entry(&dev_opp->dev_list, struct device_list_opp, 608 + node); 609 + 610 + _remove_list_dev(list_dev, dev_opp); 611 + 612 + /* dev_list must be empty now */ 613 + WARN_ON(!list_empty(&dev_opp->dev_list)); 614 + 615 + list_del_rcu(&dev_opp->node); 616 + call_srcu(&dev_opp->srcu_head.srcu, &dev_opp->rcu_head, 617 + _kfree_device_rcu); 618 + } 619 + 620 + /** 621 + * _kfree_opp_rcu() - Free OPP RCU handler 622 + * @head: RCU head 623 + */ 624 + static void _kfree_opp_rcu(struct rcu_head *head) 625 + { 626 + struct dev_pm_opp *opp = container_of(head, struct dev_pm_opp, rcu_head); 627 + 628 + kfree_rcu(opp, rcu_head); 629 + } 630 + 631 + /** 625 632 * _opp_remove() - Remove an OPP from a table definition 626 633 * @dev_opp: points back to the device_opp struct this opp belongs to 627 634 * @opp: pointer to the OPP to remove 635 + * @notify: OPP_EVENT_REMOVE notification should be sent or not 628 636 * 629 637 * This function removes an opp definition from the opp list. 630 638 * ··· 671 603 * strategy. 672 604 */ 673 605 static void _opp_remove(struct device_opp *dev_opp, 674 - struct dev_pm_opp *opp) 606 + struct dev_pm_opp *opp, bool notify) 675 607 { 676 608 /* 677 609 * Notify the changes in the availability of the operable 678 610 * frequency/voltage list. 
679 611 */ 680 - srcu_notifier_call_chain(&dev_opp->srcu_head, OPP_EVENT_REMOVE, opp); 612 + if (notify) 613 + srcu_notifier_call_chain(&dev_opp->srcu_head, OPP_EVENT_REMOVE, opp); 681 614 list_del_rcu(&opp->node); 682 615 call_srcu(&dev_opp->srcu_head.srcu, &opp->rcu_head, _kfree_opp_rcu); 683 616 684 - if (list_empty(&dev_opp->opp_list)) { 685 - list_del_rcu(&dev_opp->node); 686 - call_srcu(&dev_opp->srcu_head.srcu, &dev_opp->rcu_head, 687 - _kfree_device_rcu); 688 - } 617 + _remove_device_opp(dev_opp); 689 618 } 690 619 691 620 /** ··· 724 659 goto unlock; 725 660 } 726 661 727 - _opp_remove(dev_opp, opp); 662 + _opp_remove(dev_opp, opp, true); 728 663 unlock: 729 664 mutex_unlock(&dev_opp_list_lock); 730 665 } 731 666 EXPORT_SYMBOL_GPL(dev_pm_opp_remove); 667 + 668 + static struct dev_pm_opp *_allocate_opp(struct device *dev, 669 + struct device_opp **dev_opp) 670 + { 671 + struct dev_pm_opp *opp; 672 + 673 + /* allocate new OPP node */ 674 + opp = kzalloc(sizeof(*opp), GFP_KERNEL); 675 + if (!opp) 676 + return NULL; 677 + 678 + INIT_LIST_HEAD(&opp->node); 679 + 680 + *dev_opp = _add_device_opp(dev); 681 + if (!*dev_opp) { 682 + kfree(opp); 683 + return NULL; 684 + } 685 + 686 + return opp; 687 + } 688 + 689 + static int _opp_add(struct device *dev, struct dev_pm_opp *new_opp, 690 + struct device_opp *dev_opp) 691 + { 692 + struct dev_pm_opp *opp; 693 + struct list_head *head = &dev_opp->opp_list; 694 + 695 + /* 696 + * Insert new OPP in order of increasing frequency and discard if 697 + * already present. 698 + * 699 + * Need to use &dev_opp->opp_list in the condition part of the 'for' 700 + * loop, don't replace it with head otherwise it will become an infinite 701 + * loop. 
702 + */ 703 + list_for_each_entry_rcu(opp, &dev_opp->opp_list, node) { 704 + if (new_opp->rate > opp->rate) { 705 + head = &opp->node; 706 + continue; 707 + } 708 + 709 + if (new_opp->rate < opp->rate) 710 + break; 711 + 712 + /* Duplicate OPPs */ 713 + dev_warn(dev, "%s: duplicate OPPs detected. Existing: freq: %lu, volt: %lu, enabled: %d. New: freq: %lu, volt: %lu, enabled: %d\n", 714 + __func__, opp->rate, opp->u_volt, opp->available, 715 + new_opp->rate, new_opp->u_volt, new_opp->available); 716 + 717 + return opp->available && new_opp->u_volt == opp->u_volt ? 718 + 0 : -EEXIST; 719 + } 720 + 721 + new_opp->dev_opp = dev_opp; 722 + list_add_rcu(&new_opp->node, head); 723 + 724 + return 0; 725 + } 726 + 727 + /** 728 + * _opp_add_dynamic() - Allocate a dynamic OPP. 729 + * @dev: device for which we do this operation 730 + * @freq: Frequency in Hz for this OPP 731 + * @u_volt: Voltage in uVolts for this OPP 732 + * @dynamic: Dynamically added OPPs. 733 + * 734 + * This function adds an opp definition to the opp list and returns status. 735 + * The opp is made available by default and it can be controlled using 736 + * dev_pm_opp_enable/disable functions and may be removed by dev_pm_opp_remove. 737 + * 738 + * NOTE: "dynamic" parameter impacts OPPs added by the of_init_opp_table and 739 + * freed by of_free_opp_table. 740 + * 741 + * Locking: The internal device_opp and opp structures are RCU protected. 742 + * Hence this function internally uses RCU updater strategy with mutex locks 743 + * to keep the integrity of the internal data structures. Callers should ensure 744 + * that this function is *NOT* called under RCU protection or in contexts where 745 + * mutex cannot be locked. 
746 + * 747 + * Return: 748 + * 0 On success OR 749 + * Duplicate OPPs (both freq and volt are same) and opp->available 750 + * -EEXIST Freq are same and volt are different OR 751 + * Duplicate OPPs (both freq and volt are same) and !opp->available 752 + * -ENOMEM Memory allocation failure 753 + */ 754 + static int _opp_add_dynamic(struct device *dev, unsigned long freq, 755 + long u_volt, bool dynamic) 756 + { 757 + struct device_opp *dev_opp; 758 + struct dev_pm_opp *new_opp; 759 + int ret; 760 + 761 + /* Hold our list modification lock here */ 762 + mutex_lock(&dev_opp_list_lock); 763 + 764 + new_opp = _allocate_opp(dev, &dev_opp); 765 + if (!new_opp) { 766 + ret = -ENOMEM; 767 + goto unlock; 768 + } 769 + 770 + /* populate the opp table */ 771 + new_opp->rate = freq; 772 + new_opp->u_volt = u_volt; 773 + new_opp->available = true; 774 + new_opp->dynamic = dynamic; 775 + 776 + ret = _opp_add(dev, new_opp, dev_opp); 777 + if (ret) 778 + goto free_opp; 779 + 780 + mutex_unlock(&dev_opp_list_lock); 781 + 782 + /* 783 + * Notify the changes in the availability of the operable 784 + * frequency/voltage list. 
785 + */ 786 + srcu_notifier_call_chain(&dev_opp->srcu_head, OPP_EVENT_ADD, new_opp); 787 + return 0; 788 + 789 + free_opp: 790 + _opp_remove(dev_opp, new_opp, false); 791 + unlock: 792 + mutex_unlock(&dev_opp_list_lock); 793 + return ret; 794 + } 795 + 796 + /* TODO: Support multiple regulators */ 797 + static int opp_get_microvolt(struct dev_pm_opp *opp, struct device *dev) 798 + { 799 + u32 microvolt[3] = {0}; 800 + int count, ret; 801 + 802 + count = of_property_count_u32_elems(opp->np, "opp-microvolt"); 803 + if (!count) 804 + return 0; 805 + 806 + /* There can be one or three elements here */ 807 + if (count != 1 && count != 3) { 808 + dev_err(dev, "%s: Invalid number of elements in opp-microvolt property (%d)\n", 809 + __func__, count); 810 + return -EINVAL; 811 + } 812 + 813 + ret = of_property_read_u32_array(opp->np, "opp-microvolt", microvolt, 814 + count); 815 + if (ret) { 816 + dev_err(dev, "%s: error parsing opp-microvolt: %d\n", __func__, 817 + ret); 818 + return -EINVAL; 819 + } 820 + 821 + opp->u_volt = microvolt[0]; 822 + opp->u_volt_min = microvolt[1]; 823 + opp->u_volt_max = microvolt[2]; 824 + 825 + return 0; 826 + } 827 + 828 + /** 829 + * _opp_add_static_v2() - Allocate static OPPs (As per 'v2' DT bindings) 830 + * @dev: device for which we do this operation 831 + * @np: device node 832 + * 833 + * This function adds an opp definition to the opp list and returns status. The 834 + * opp can be controlled using dev_pm_opp_enable/disable functions and may be 835 + * removed by dev_pm_opp_remove. 836 + * 837 + * Locking: The internal device_opp and opp structures are RCU protected. 838 + * Hence this function internally uses RCU updater strategy with mutex locks 839 + * to keep the integrity of the internal data structures. Callers should ensure 840 + * that this function is *NOT* called under RCU protection or in contexts where 841 + * mutex cannot be locked. 
842 + * 843 + * Return: 844 + * 0 On success OR 845 + * Duplicate OPPs (both freq and volt are same) and opp->available 846 + * -EEXIST Freq are same and volt are different OR 847 + * Duplicate OPPs (both freq and volt are same) and !opp->available 848 + * -ENOMEM Memory allocation failure 849 + * -EINVAL Failed parsing the OPP node 850 + */ 851 + static int _opp_add_static_v2(struct device *dev, struct device_node *np) 852 + { 853 + struct device_opp *dev_opp; 854 + struct dev_pm_opp *new_opp; 855 + u64 rate; 856 + u32 val; 857 + int ret; 858 + 859 + /* Hold our list modification lock here */ 860 + mutex_lock(&dev_opp_list_lock); 861 + 862 + new_opp = _allocate_opp(dev, &dev_opp); 863 + if (!new_opp) { 864 + ret = -ENOMEM; 865 + goto unlock; 866 + } 867 + 868 + ret = of_property_read_u64(np, "opp-hz", &rate); 869 + if (ret < 0) { 870 + dev_err(dev, "%s: opp-hz not found\n", __func__); 871 + goto free_opp; 872 + } 873 + 874 + /* 875 + * Rate is defined as an unsigned long in clk API, and so casting 876 + * explicitly to its type. Must be fixed once rate is 64 bit 877 + * guaranteed in clk API. 
878 + */ 879 + new_opp->rate = (unsigned long)rate; 880 + new_opp->turbo = of_property_read_bool(np, "turbo-mode"); 881 + 882 + new_opp->np = np; 883 + new_opp->dynamic = false; 884 + new_opp->available = true; 885 + 886 + if (!of_property_read_u32(np, "clock-latency-ns", &val)) 887 + new_opp->clock_latency_ns = val; 888 + 889 + ret = opp_get_microvolt(new_opp, dev); 890 + if (ret) 891 + goto free_opp; 892 + 893 + if (!of_property_read_u32(new_opp->np, "opp-microamp", &val)) 894 + new_opp->u_amp = val; 895 + 896 + ret = _opp_add(dev, new_opp, dev_opp); 897 + if (ret) 898 + goto free_opp; 899 + 900 + /* OPP to select on device suspend */ 901 + if (of_property_read_bool(np, "opp-suspend")) { 902 + if (dev_opp->suspend_opp) 903 + dev_warn(dev, "%s: Multiple suspend OPPs found (%lu %lu)\n", 904 + __func__, dev_opp->suspend_opp->rate, 905 + new_opp->rate); 906 + else 907 + dev_opp->suspend_opp = new_opp; 908 + } 909 + 910 + if (new_opp->clock_latency_ns > dev_opp->clock_latency_ns_max) 911 + dev_opp->clock_latency_ns_max = new_opp->clock_latency_ns; 912 + 913 + mutex_unlock(&dev_opp_list_lock); 914 + 915 + pr_debug("%s: turbo:%d rate:%lu uv:%lu uvmin:%lu uvmax:%lu latency:%lu\n", 916 + __func__, new_opp->turbo, new_opp->rate, new_opp->u_volt, 917 + new_opp->u_volt_min, new_opp->u_volt_max, 918 + new_opp->clock_latency_ns); 919 + 920 + /* 921 + * Notify the changes in the availability of the operable 922 + * frequency/voltage list. 
923 + */ 924 + srcu_notifier_call_chain(&dev_opp->srcu_head, OPP_EVENT_ADD, new_opp); 925 + return 0; 926 + 927 + free_opp: 928 + _opp_remove(dev_opp, new_opp, false); 929 + unlock: 930 + mutex_unlock(&dev_opp_list_lock); 931 + return ret; 932 + } 933 + 934 + /** 935 + * dev_pm_opp_add() - Add an OPP table from a table definitions 936 + * @dev: device for which we do this operation 937 + * @freq: Frequency in Hz for this OPP 938 + * @u_volt: Voltage in uVolts for this OPP 939 + * 940 + * This function adds an opp definition to the opp list and returns status. 941 + * The opp is made available by default and it can be controlled using 942 + * dev_pm_opp_enable/disable functions. 943 + * 944 + * Locking: The internal device_opp and opp structures are RCU protected. 945 + * Hence this function internally uses RCU updater strategy with mutex locks 946 + * to keep the integrity of the internal data structures. Callers should ensure 947 + * that this function is *NOT* called under RCU protection or in contexts where 948 + * mutex cannot be locked. 949 + * 950 + * Return: 951 + * 0 On success OR 952 + * Duplicate OPPs (both freq and volt are same) and opp->available 953 + * -EEXIST Freq are same and volt are different OR 954 + * Duplicate OPPs (both freq and volt are same) and !opp->available 955 + * -ENOMEM Memory allocation failure 956 + */ 957 + int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt) 958 + { 959 + return _opp_add_dynamic(dev, freq, u_volt, true); 960 + } 961 + EXPORT_SYMBOL_GPL(dev_pm_opp_add); 732 962 733 963 /** 734 964 * _opp_set_availability() - helper to set the availability of an opp ··· 1185 825 1186 826 #ifdef CONFIG_OF 1187 827 /** 1188 - * of_init_opp_table() - Initialize opp table from device tree 828 + * of_free_opp_table() - Free OPP table entries created from static DT entries 1189 829 * @dev: device pointer used to lookup device OPPs. 
1190 830 * 1191 - * Register the initial OPP table with the OPP library for given device. 831 + * Free OPPs created using static entries present in DT. 1192 832 * 1193 833 * Locking: The internal device_opp and opp structures are RCU protected. 1194 834 * Hence this function indirectly uses RCU updater strategy with mutex locks 1195 835 * to keep the integrity of the internal data structures. Callers should ensure 1196 836 * that this function is *NOT* called under RCU protection or in contexts where 1197 837 * mutex cannot be locked. 1198 - * 1199 - * Return: 1200 - * 0 On success OR 1201 - * Duplicate OPPs (both freq and volt are same) and opp->available 1202 - * -EEXIST Freq are same and volt are different OR 1203 - * Duplicate OPPs (both freq and volt are same) and !opp->available 1204 - * -ENOMEM Memory allocation failure 1205 - * -ENODEV when 'operating-points' property is not found or is invalid data 1206 - * in device node. 1207 - * -ENODATA when empty 'operating-points' property is found 1208 838 */ 1209 - int of_init_opp_table(struct device *dev) 839 + void of_free_opp_table(struct device *dev) 840 + { 841 + struct device_opp *dev_opp; 842 + struct dev_pm_opp *opp, *tmp; 843 + 844 + /* Hold our list modification lock here */ 845 + mutex_lock(&dev_opp_list_lock); 846 + 847 + /* Check for existing list for 'dev' */ 848 + dev_opp = _find_device_opp(dev); 849 + if (IS_ERR(dev_opp)) { 850 + int error = PTR_ERR(dev_opp); 851 + 852 + if (error != -ENODEV) 853 + WARN(1, "%s: dev_opp: %d\n", 854 + IS_ERR_OR_NULL(dev) ? 
855 + "Invalid device" : dev_name(dev), 856 + error); 857 + goto unlock; 858 + } 859 + 860 + /* Find if dev_opp manages a single device */ 861 + if (list_is_singular(&dev_opp->dev_list)) { 862 + /* Free static OPPs */ 863 + list_for_each_entry_safe(opp, tmp, &dev_opp->opp_list, node) { 864 + if (!opp->dynamic) 865 + _opp_remove(dev_opp, opp, true); 866 + } 867 + } else { 868 + _remove_list_dev(_find_list_dev(dev, dev_opp), dev_opp); 869 + } 870 + 871 + unlock: 872 + mutex_unlock(&dev_opp_list_lock); 873 + } 874 + EXPORT_SYMBOL_GPL(of_free_opp_table); 875 + 876 + void of_cpumask_free_opp_table(cpumask_var_t cpumask) 877 + { 878 + struct device *cpu_dev; 879 + int cpu; 880 + 881 + WARN_ON(cpumask_empty(cpumask)); 882 + 883 + for_each_cpu(cpu, cpumask) { 884 + cpu_dev = get_cpu_device(cpu); 885 + if (!cpu_dev) { 886 + pr_err("%s: failed to get cpu%d device\n", __func__, 887 + cpu); 888 + continue; 889 + } 890 + 891 + of_free_opp_table(cpu_dev); 892 + } 893 + } 894 + EXPORT_SYMBOL_GPL(of_cpumask_free_opp_table); 895 + 896 + /* Returns opp descriptor node from its phandle. Caller must do of_node_put() */ 897 + static struct device_node * 898 + _of_get_opp_desc_node_from_prop(struct device *dev, const struct property *prop) 899 + { 900 + struct device_node *opp_np; 901 + 902 + opp_np = of_find_node_by_phandle(be32_to_cpup(prop->value)); 903 + if (!opp_np) { 904 + dev_err(dev, "%s: Prop: %s contains invalid opp desc phandle\n", 905 + __func__, prop->name); 906 + return ERR_PTR(-EINVAL); 907 + } 908 + 909 + return opp_np; 910 + } 911 + 912 + /* Returns opp descriptor node for a device. 
Caller must do of_node_put() */ 913 + static struct device_node *_of_get_opp_desc_node(struct device *dev) 914 + { 915 + const struct property *prop; 916 + 917 + prop = of_find_property(dev->of_node, "operating-points-v2", NULL); 918 + if (!prop) 919 + return ERR_PTR(-ENODEV); 920 + if (!prop->value) 921 + return ERR_PTR(-ENODATA); 922 + 923 + /* 924 + * TODO: Support for multiple OPP tables. 925 + * 926 + * There should be only ONE phandle present in "operating-points-v2" 927 + * property. 928 + */ 929 + if (prop->length != sizeof(__be32)) { 930 + dev_err(dev, "%s: Invalid opp desc phandle\n", __func__); 931 + return ERR_PTR(-EINVAL); 932 + } 933 + 934 + return _of_get_opp_desc_node_from_prop(dev, prop); 935 + } 936 + 937 + /* Initializes OPP tables based on new bindings */ 938 + static int _of_init_opp_table_v2(struct device *dev, 939 + const struct property *prop) 940 + { 941 + struct device_node *opp_np, *np; 942 + struct device_opp *dev_opp; 943 + int ret = 0, count = 0; 944 + 945 + if (!prop->value) 946 + return -ENODATA; 947 + 948 + /* Get opp node */ 949 + opp_np = _of_get_opp_desc_node_from_prop(dev, prop); 950 + if (IS_ERR(opp_np)) 951 + return PTR_ERR(opp_np); 952 + 953 + dev_opp = _managed_opp(opp_np); 954 + if (dev_opp) { 955 + /* OPPs are already managed */ 956 + if (!_add_list_dev(dev, dev_opp)) 957 + ret = -ENOMEM; 958 + goto put_opp_np; 959 + } 960 + 961 + /* We have opp-list node now, iterate over it and add OPPs */ 962 + for_each_available_child_of_node(opp_np, np) { 963 + count++; 964 + 965 + ret = _opp_add_static_v2(dev, np); 966 + if (ret) { 967 + dev_err(dev, "%s: Failed to add OPP, %d\n", __func__, 968 + ret); 969 + goto free_table; 970 + } 971 + } 972 + 973 + /* There should be one of more OPP defined */ 974 + if (WARN_ON(!count)) { 975 + ret = -ENOENT; 976 + goto put_opp_np; 977 + } 978 + 979 + dev_opp = _find_device_opp(dev); 980 + if (WARN_ON(IS_ERR(dev_opp))) { 981 + ret = PTR_ERR(dev_opp); 982 + goto free_table; 983 + } 984 + 985 + 
dev_opp->np = opp_np; 986 + dev_opp->shared_opp = of_property_read_bool(opp_np, "opp-shared"); 987 + 988 + of_node_put(opp_np); 989 + return 0; 990 + 991 + free_table: 992 + of_free_opp_table(dev); 993 + put_opp_np: 994 + of_node_put(opp_np); 995 + 996 + return ret; 997 + } 998 + 999 + /* Initializes OPP tables based on old-deprecated bindings */ 1000 + static int _of_init_opp_table_v1(struct device *dev) 1210 1001 { 1211 1002 const struct property *prop; 1212 1003 const __be32 *val; ··· 1392 881 1393 882 return 0; 1394 883 } 1395 - EXPORT_SYMBOL_GPL(of_init_opp_table); 1396 884 1397 885 /** 1398 - * of_free_opp_table() - Free OPP table entries created from static DT entries 886 + * of_init_opp_table() - Initialize opp table from device tree 1399 887 * @dev: device pointer used to lookup device OPPs. 1400 888 * 1401 - * Free OPPs created using static entries present in DT. 889 + * Register the initial OPP table with the OPP library for given device. 1402 890 * 1403 891 * Locking: The internal device_opp and opp structures are RCU protected. 1404 892 * Hence this function indirectly uses RCU updater strategy with mutex locks 1405 893 * to keep the integrity of the internal data structures. Callers should ensure 1406 894 * that this function is *NOT* called under RCU protection or in contexts where 1407 895 * mutex cannot be locked. 896 + * 897 + * Return: 898 + * 0 On success OR 899 + * Duplicate OPPs (both freq and volt are same) and opp->available 900 + * -EEXIST Freq are same and volt are different OR 901 + * Duplicate OPPs (both freq and volt are same) and !opp->available 902 + * -ENOMEM Memory allocation failure 903 + * -ENODEV when 'operating-points' property is not found or is invalid data 904 + * in device node. 
905 + * -ENODATA when empty 'operating-points' property is found 906 + * -EINVAL when invalid entries are found in opp-v2 table 1408 907 */ 1409 - void of_free_opp_table(struct device *dev) 908 + int of_init_opp_table(struct device *dev) 1410 909 { 1411 - struct device_opp *dev_opp; 1412 - struct dev_pm_opp *opp, *tmp; 910 + const struct property *prop; 1413 911 1414 - /* Check for existing list for 'dev' */ 1415 - dev_opp = _find_device_opp(dev); 1416 - if (IS_ERR(dev_opp)) { 1417 - int error = PTR_ERR(dev_opp); 1418 - if (error != -ENODEV) 1419 - WARN(1, "%s: dev_opp: %d\n", 1420 - IS_ERR_OR_NULL(dev) ? 1421 - "Invalid device" : dev_name(dev), 1422 - error); 1423 - return; 912 + /* 913 + * OPPs have two version of bindings now. The older one is deprecated, 914 + * try for the new binding first. 915 + */ 916 + prop = of_find_property(dev->of_node, "operating-points-v2", NULL); 917 + if (!prop) { 918 + /* 919 + * Try old-deprecated bindings for backward compatibility with 920 + * older dtbs. 
921 + */ 922 + return _of_init_opp_table_v1(dev); 1424 923 } 1425 924 1426 - /* Hold our list modification lock here */ 1427 - mutex_lock(&dev_opp_list_lock); 1428 - 1429 - /* Free static OPPs */ 1430 - list_for_each_entry_safe(opp, tmp, &dev_opp->opp_list, node) { 1431 - if (!opp->dynamic) 1432 - _opp_remove(dev_opp, opp); 1433 - } 1434 - 1435 - mutex_unlock(&dev_opp_list_lock); 925 + return _of_init_opp_table_v2(dev, prop); 1436 926 } 1437 - EXPORT_SYMBOL_GPL(of_free_opp_table); 927 + EXPORT_SYMBOL_GPL(of_init_opp_table); 928 + 929 + int of_cpumask_init_opp_table(cpumask_var_t cpumask) 930 + { 931 + struct device *cpu_dev; 932 + int cpu, ret = 0; 933 + 934 + WARN_ON(cpumask_empty(cpumask)); 935 + 936 + for_each_cpu(cpu, cpumask) { 937 + cpu_dev = get_cpu_device(cpu); 938 + if (!cpu_dev) { 939 + pr_err("%s: failed to get cpu%d device\n", __func__, 940 + cpu); 941 + continue; 942 + } 943 + 944 + ret = of_init_opp_table(cpu_dev); 945 + if (ret) { 946 + pr_err("%s: couldn't find opp table for cpu:%d, %d\n", 947 + __func__, cpu, ret); 948 + 949 + /* Free all other OPPs */ 950 + of_cpumask_free_opp_table(cpumask); 951 + break; 952 + } 953 + } 954 + 955 + return ret; 956 + } 957 + EXPORT_SYMBOL_GPL(of_cpumask_init_opp_table); 958 + 959 + /* Required only for V1 bindings, as v2 can manage it from DT itself */ 960 + int set_cpus_sharing_opps(struct device *cpu_dev, cpumask_var_t cpumask) 961 + { 962 + struct device_list_opp *list_dev; 963 + struct device_opp *dev_opp; 964 + struct device *dev; 965 + int cpu, ret = 0; 966 + 967 + rcu_read_lock(); 968 + 969 + dev_opp = _find_device_opp(cpu_dev); 970 + if (IS_ERR(dev_opp)) { 971 + ret = -EINVAL; 972 + goto out_rcu_read_unlock; 973 + } 974 + 975 + for_each_cpu(cpu, cpumask) { 976 + if (cpu == cpu_dev->id) 977 + continue; 978 + 979 + dev = get_cpu_device(cpu); 980 + if (!dev) { 981 + dev_err(cpu_dev, "%s: failed to get cpu%d device\n", 982 + __func__, cpu); 983 + continue; 984 + } 985 + 986 + list_dev = _add_list_dev(dev, 
dev_opp); 987 + if (!list_dev) { 988 + dev_err(dev, "%s: failed to add list-dev for cpu%d device\n", 989 + __func__, cpu); 990 + continue; 991 + } 992 + } 993 + out_rcu_read_unlock: 994 + rcu_read_unlock(); 995 + 996 + return 0; 997 + } 998 + EXPORT_SYMBOL_GPL(set_cpus_sharing_opps); 999 + 1000 + /* 1001 + * Works only for OPP v2 bindings. 1002 + * 1003 + * cpumask should be already set to mask of cpu_dev->id. 1004 + * Returns -ENOENT if operating-points-v2 bindings aren't supported. 1005 + */ 1006 + int of_get_cpus_sharing_opps(struct device *cpu_dev, cpumask_var_t cpumask) 1007 + { 1008 + struct device_node *np, *tmp_np; 1009 + struct device *tcpu_dev; 1010 + int cpu, ret = 0; 1011 + 1012 + /* Get OPP descriptor node */ 1013 + np = _of_get_opp_desc_node(cpu_dev); 1014 + if (IS_ERR(np)) { 1015 + dev_dbg(cpu_dev, "%s: Couldn't find opp node: %ld\n", __func__, 1016 + PTR_ERR(np)); 1017 + return -ENOENT; 1018 + } 1019 + 1020 + /* OPPs are shared ? */ 1021 + if (!of_property_read_bool(np, "opp-shared")) 1022 + goto put_cpu_node; 1023 + 1024 + for_each_possible_cpu(cpu) { 1025 + if (cpu == cpu_dev->id) 1026 + continue; 1027 + 1028 + tcpu_dev = get_cpu_device(cpu); 1029 + if (!tcpu_dev) { 1030 + dev_err(cpu_dev, "%s: failed to get cpu%d device\n", 1031 + __func__, cpu); 1032 + ret = -ENODEV; 1033 + goto put_cpu_node; 1034 + } 1035 + 1036 + /* Get OPP descriptor node */ 1037 + tmp_np = _of_get_opp_desc_node(tcpu_dev); 1038 + if (IS_ERR(tmp_np)) { 1039 + dev_err(tcpu_dev, "%s: Couldn't find opp node: %ld\n", 1040 + __func__, PTR_ERR(tmp_np)); 1041 + ret = PTR_ERR(tmp_np); 1042 + goto put_cpu_node; 1043 + } 1044 + 1045 + /* CPUs are sharing opp node */ 1046 + if (np == tmp_np) 1047 + cpumask_set_cpu(cpu, cpumask); 1048 + 1049 + of_node_put(tmp_np); 1050 + } 1051 + 1052 + put_cpu_node: 1053 + of_node_put(np); 1054 + return ret; 1055 + } 1056 + EXPORT_SYMBOL_GPL(of_get_cpus_sharing_opps); 1438 1057 #endif
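The core of the opp.c rework above is `_opp_add()`, which keeps the per-device OPP list sorted by frequency and applies a duplicate policy (an exact, available duplicate is tolerated; a frequency collision with a different voltage is rejected with `-EEXIST`). A minimal sketch of that insertion policy, redone on a plain singly linked list — `struct opp` and `opp_list_add()` are hypothetical stand-ins, not the kernel types:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-in for struct dev_pm_opp: rate in Hz, voltage in uV. */
struct opp {
	unsigned long rate;
	unsigned long u_volt;
	int available;
	struct opp *next;
};

/*
 * Insert in order of increasing frequency.  A duplicate frequency is
 * tolerated (return 0, nothing inserted) only when the voltage matches
 * and the existing OPP is available; otherwise reject with -EEXIST,
 * matching the return contract documented for dev_pm_opp_add().
 */
static int opp_list_add(struct opp **head, struct opp *new_opp)
{
	struct opp **pos;

	for (pos = head; *pos; pos = &(*pos)->next) {
		if (new_opp->rate > (*pos)->rate)
			continue;
		if (new_opp->rate < (*pos)->rate)
			break;

		/* Duplicate frequency */
		return ((*pos)->available &&
			new_opp->u_volt == (*pos)->u_volt) ? 0 : -EEXIST;
	}

	new_opp->next = *pos;
	*pos = new_opp;
	return 0;
}
```

The kernel version walks an RCU-protected `list_head` rather than a raw pointer chain, but the ordering and duplicate checks are the same.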
drivers/base/power/power.h (+2)
··· 73 73 extern void pm_qos_sysfs_remove_resume_latency(struct device *dev); 74 74 extern int pm_qos_sysfs_add_flags(struct device *dev); 75 75 extern void pm_qos_sysfs_remove_flags(struct device *dev); 76 + extern int pm_qos_sysfs_add_latency_tolerance(struct device *dev); 77 + extern void pm_qos_sysfs_remove_latency_tolerance(struct device *dev); 76 78 77 79 #else /* CONFIG_PM */ 78 80
drivers/base/power/qos.c (+37)
··· 883 883 mutex_unlock(&dev_pm_qos_mtx); 884 884 return ret; 885 885 } 886 + 887 + /** 888 + * dev_pm_qos_expose_latency_tolerance - Expose latency tolerance to userspace 889 + * @dev: Device whose latency tolerance to expose 890 + */ 891 + int dev_pm_qos_expose_latency_tolerance(struct device *dev) 892 + { 893 + int ret; 894 + 895 + if (!dev->power.set_latency_tolerance) 896 + return -EINVAL; 897 + 898 + mutex_lock(&dev_pm_qos_sysfs_mtx); 899 + ret = pm_qos_sysfs_add_latency_tolerance(dev); 900 + mutex_unlock(&dev_pm_qos_sysfs_mtx); 901 + 902 + return ret; 903 + } 904 + EXPORT_SYMBOL_GPL(dev_pm_qos_expose_latency_tolerance); 905 + 906 + /** 907 + * dev_pm_qos_hide_latency_tolerance - Hide latency tolerance from userspace 908 + * @dev: Device whose latency tolerance to hide 909 + */ 910 + void dev_pm_qos_hide_latency_tolerance(struct device *dev) 911 + { 912 + mutex_lock(&dev_pm_qos_sysfs_mtx); 913 + pm_qos_sysfs_remove_latency_tolerance(dev); 914 + mutex_unlock(&dev_pm_qos_sysfs_mtx); 915 + 916 + /* Remove the request from user space now */ 917 + pm_runtime_get_sync(dev); 918 + dev_pm_qos_update_user_latency_tolerance(dev, 919 + PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT); 920 + pm_runtime_put(dev); 921 + } 922 + EXPORT_SYMBOL_GPL(dev_pm_qos_hide_latency_tolerance);
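The qos.c hunk above follows a common expose/hide pattern: refuse to expose a knob the device cannot honour (`-EINVAL` when there is no `set_latency_tolerance` hook), serialize the sysfs group add/remove under a mutex, and restore the no-constraint default on hide. A sketch of that shape with hypothetical names (`struct fake_dev`, `expose_latency_tolerance()`), using a pthread mutex in place of `dev_pm_qos_sysfs_mtx` and a flag in place of the real sysfs group:

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

#define LAT_TOL_NO_CONSTRAINT (-1)

struct fake_dev {
	void (*set_latency_tolerance)(struct fake_dev *dev, int val);
	bool exposed;			/* stands in for the sysfs group */
	int latency_tolerance;
};

static pthread_mutex_t qos_sysfs_mtx = PTHREAD_MUTEX_INITIALIZER;

/* Trivial setter used for demonstration only */
static void demo_set_latency_tolerance(struct fake_dev *dev, int val)
{
	dev->latency_tolerance = val;
}

static int expose_latency_tolerance(struct fake_dev *dev)
{
	if (!dev->set_latency_tolerance)
		return -EINVAL;		/* no hardware hook: nothing to expose */

	pthread_mutex_lock(&qos_sysfs_mtx);
	dev->exposed = true;		/* the real code merges a sysfs group */
	pthread_mutex_unlock(&qos_sysfs_mtx);
	return 0;
}

static void hide_latency_tolerance(struct fake_dev *dev)
{
	pthread_mutex_lock(&qos_sysfs_mtx);
	dev->exposed = false;
	pthread_mutex_unlock(&qos_sysfs_mtx);

	/* Drop any user-space request now that the knob is gone */
	dev->latency_tolerance = LAT_TOL_NO_CONSTRAINT;
	if (dev->set_latency_tolerance)
		dev->set_latency_tolerance(dev, LAT_TOL_NO_CONSTRAINT);
}
```

The kernel version additionally brackets the reset with `pm_runtime_get_sync()`/`pm_runtime_put()` so the device is awake when the constraint is dropped; that detail is elided here.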
drivers/base/power/sysfs.c (+11)
··· 738 738 sysfs_unmerge_group(&dev->kobj, &pm_qos_flags_attr_group); 739 739 } 740 740 741 + int pm_qos_sysfs_add_latency_tolerance(struct device *dev) 742 + { 743 + return sysfs_merge_group(&dev->kobj, 744 + &pm_qos_latency_tolerance_attr_group); 745 + } 746 + 747 + void pm_qos_sysfs_remove_latency_tolerance(struct device *dev) 748 + { 749 + sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_tolerance_attr_group); 750 + } 751 + 741 752 void rpm_sysfs_remove(struct device *dev) 742 753 { 743 754 sysfs_unmerge_group(&dev->kobj, &pm_runtime_attr_group);
drivers/base/property.c (+5 -3)
··· 27 27 */ 28 28 void device_add_property_set(struct device *dev, struct property_set *pset) 29 29 { 30 - if (pset) 31 - pset->fwnode.type = FWNODE_PDATA; 30 + if (!pset) 31 + return; 32 32 33 + pset->fwnode.type = FWNODE_PDATA; 33 34 set_secondary_fwnode(dev, &pset->fwnode); 34 35 } 35 36 EXPORT_SYMBOL_GPL(device_add_property_set); ··· 462 461 return acpi_dev_prop_read(to_acpi_node(fwnode), propname, 463 462 DEV_PROP_STRING, val, 1); 464 463 465 - return -ENXIO; 464 + return pset_prop_read_array(to_pset(fwnode), propname, 465 + DEV_PROP_STRING, val, 1); 466 466 } 467 467 EXPORT_SYMBOL_GPL(fwnode_property_read_string); 468 468
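The property.c hunk above completes a provider fallback: `fwnode_property_read_string()` asks the device tree, then ACPI, and now falls through to the built-in property set (`pset`) instead of failing with `-ENXIO`. A sketch of that primary-then-secondary lookup with hypothetical names (`struct fw_node`, `property_read_string()`); the secondary node plays the role of the `property_set` registered via `device_add_property_set()`:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

enum fwnode_kind { FW_OF, FW_ACPI, FW_PDATA };

struct kv { const char *name; const char *value; };

struct fw_node {
	enum fwnode_kind kind;
	const struct kv *props;		/* terminated by a NULL name */
};

static int node_read_string(const struct fw_node *node, const char *name,
			    const char **out)
{
	const struct kv *p;

	for (p = node->props; p && p->name; p++) {
		if (!strcmp(p->name, name)) {
			*out = p->value;
			return 0;
		}
	}
	return -ENODATA;
}

/* Ask the primary firmware node first, then the secondary (e.g. a pset) */
static int property_read_string(const struct fw_node *primary,
				const struct fw_node *secondary,
				const char *name, const char **out)
{
	int ret = primary ? node_read_string(primary, name, out) : -ENXIO;

	if (ret && secondary)
		ret = node_read_string(secondary, name, out);
	return ret;
}
```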
drivers/cpufreq/Kconfig.arm (+7)
··· 130 130 This adds the CPUFreq driver for Marvell Kirkwood 131 131 SoCs. 132 132 133 + config ARM_MT8173_CPUFREQ 134 + bool "Mediatek MT8173 CPUFreq support" 135 + depends on ARCH_MEDIATEK && REGULATOR 136 + select PM_OPP 137 + help 138 + This adds the CPUFreq driver support for Mediatek MT8173 SoC. 139 + 133 140 config ARM_OMAP2PLUS_CPUFREQ 134 141 bool "TI OMAP2+" 135 142 depends on ARCH_OMAP2PLUS
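With the Kconfig entry above, the new Mediatek driver is enabled from a board configuration like any other ARM cpufreq driver. A sketch of the relevant fragment — the dependency options follow from the `depends on` and `select` lines shown, not from any particular defconfig:

```
CONFIG_ARCH_MEDIATEK=y
CONFIG_REGULATOR=y
CONFIG_ARM_MT8173_CPUFREQ=y
# CONFIG_PM_OPP is selected automatically by ARM_MT8173_CPUFREQ
```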
drivers/cpufreq/Makefile (+1)
··· 62 62 obj-$(CONFIG_ARM_IMX6Q_CPUFREQ) += imx6q-cpufreq.o 63 63 obj-$(CONFIG_ARM_INTEGRATOR) += integrator-cpufreq.o 64 64 obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ) += kirkwood-cpufreq.o 65 + obj-$(CONFIG_ARM_MT8173_CPUFREQ) += mt8173-cpufreq.o 65 66 obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ) += omap-cpufreq.o 66 67 obj-$(CONFIG_ARM_PXA2xx_CPUFREQ) += pxa2xx-cpufreq.o 67 68 obj-$(CONFIG_PXA3xx) += pxa3xx-cpufreq.o
drivers/cpufreq/acpi-cpufreq.c (+51 -42)
··· 65 65 #define MSR_K7_HWCR_CPB_DIS (1ULL << 25) 66 66 67 67 struct acpi_cpufreq_data { 68 - struct acpi_processor_performance *acpi_data; 69 68 struct cpufreq_frequency_table *freq_table; 70 69 unsigned int resume; 71 70 unsigned int cpu_feature; 71 + unsigned int acpi_perf_cpu; 72 72 cpumask_var_t freqdomain_cpus; 73 73 }; 74 74 75 - static DEFINE_PER_CPU(struct acpi_cpufreq_data *, acfreq_data); 76 - 77 75 /* acpi_perf_data is a pointer to percpu data. */ 78 76 static struct acpi_processor_performance __percpu *acpi_perf_data; 77 + 78 + static inline struct acpi_processor_performance *to_perf_data(struct acpi_cpufreq_data *data) 79 + { 80 + return per_cpu_ptr(acpi_perf_data, data->acpi_perf_cpu); 81 + } 79 82 80 83 static struct cpufreq_driver acpi_cpufreq_driver; 81 84 ··· 147 144 148 145 static ssize_t show_freqdomain_cpus(struct cpufreq_policy *policy, char *buf) 149 146 { 150 - struct acpi_cpufreq_data *data = per_cpu(acfreq_data, policy->cpu); 147 + struct acpi_cpufreq_data *data = policy->driver_data; 151 148 152 149 return cpufreq_show_cpus(data->freqdomain_cpus, buf); 153 150 } ··· 205 202 struct acpi_processor_performance *perf; 206 203 int i; 207 204 208 - perf = data->acpi_data; 205 + perf = to_perf_data(data); 209 206 210 207 for (i = 0; i < perf->state_count; i++) { 211 208 if (value == perf->states[i].status) ··· 224 221 else 225 222 msr &= INTEL_MSR_RANGE; 226 223 227 - perf = data->acpi_data; 224 + perf = to_perf_data(data); 228 225 229 226 cpufreq_for_each_entry(pos, data->freq_table) 230 227 if (msr == perf->states[pos->driver_data].status) ··· 330 327 put_cpu(); 331 328 } 332 329 333 - static u32 get_cur_val(const struct cpumask *mask) 330 + static u32 331 + get_cur_val(const struct cpumask *mask, struct acpi_cpufreq_data *data) 334 332 { 335 333 struct acpi_processor_performance *perf; 336 334 struct drv_cmd cmd; ··· 339 335 if (unlikely(cpumask_empty(mask))) 340 336 return 0; 341 337 342 - switch (per_cpu(acfreq_data, 
cpumask_first(mask))->cpu_feature) { 338 + switch (data->cpu_feature) { 343 339 case SYSTEM_INTEL_MSR_CAPABLE: 344 340 cmd.type = SYSTEM_INTEL_MSR_CAPABLE; 345 341 cmd.addr.msr.reg = MSR_IA32_PERF_CTL; ··· 350 346 break; 351 347 case SYSTEM_IO_CAPABLE: 352 348 cmd.type = SYSTEM_IO_CAPABLE; 353 - perf = per_cpu(acfreq_data, cpumask_first(mask))->acpi_data; 349 + perf = to_perf_data(data); 354 350 cmd.addr.io.port = perf->control_register.address; 355 351 cmd.addr.io.bit_width = perf->control_register.bit_width; 356 352 break; ··· 368 364 369 365 static unsigned int get_cur_freq_on_cpu(unsigned int cpu) 370 366 { 371 - struct acpi_cpufreq_data *data = per_cpu(acfreq_data, cpu); 367 + struct acpi_cpufreq_data *data; 368 + struct cpufreq_policy *policy; 372 369 unsigned int freq; 373 370 unsigned int cached_freq; 374 371 375 372 pr_debug("get_cur_freq_on_cpu (%d)\n", cpu); 376 373 377 - if (unlikely(data == NULL || 378 - data->acpi_data == NULL || data->freq_table == NULL)) { 374 + policy = cpufreq_cpu_get(cpu); 375 + if (unlikely(!policy)) 379 376 return 0; 380 - } 381 377 382 - cached_freq = data->freq_table[data->acpi_data->state].frequency; 383 - freq = extract_freq(get_cur_val(cpumask_of(cpu)), data); 378 + data = policy->driver_data; 379 + cpufreq_cpu_put(policy); 380 + if (unlikely(!data || !data->freq_table)) 381 + return 0; 382 + 383 + cached_freq = data->freq_table[to_perf_data(data)->state].frequency; 384 + freq = extract_freq(get_cur_val(cpumask_of(cpu), data), data); 384 385 if (freq != cached_freq) { 385 386 /* 386 387 * The dreaded BIOS frequency change behind our back. 
··· 406 397 unsigned int i; 407 398 408 399 for (i = 0; i < 100; i++) { 409 - cur_freq = extract_freq(get_cur_val(mask), data); 400 + cur_freq = extract_freq(get_cur_val(mask, data), data); 410 401 if (cur_freq == freq) 411 402 return 1; 412 403 udelay(10); ··· 417 408 static int acpi_cpufreq_target(struct cpufreq_policy *policy, 418 409 unsigned int index) 419 410 { 420 - struct acpi_cpufreq_data *data = per_cpu(acfreq_data, policy->cpu); 411 + struct acpi_cpufreq_data *data = policy->driver_data; 421 412 struct acpi_processor_performance *perf; 422 413 struct drv_cmd cmd; 423 414 unsigned int next_perf_state = 0; /* Index into perf table */ 424 415 int result = 0; 425 416 426 - if (unlikely(data == NULL || 427 - data->acpi_data == NULL || data->freq_table == NULL)) { 417 + if (unlikely(data == NULL || data->freq_table == NULL)) { 428 418 return -ENODEV; 429 419 } 430 420 431 - perf = data->acpi_data; 421 + perf = to_perf_data(data); 432 422 next_perf_state = data->freq_table[index].driver_data; 433 423 if (perf->state == next_perf_state) { 434 424 if (unlikely(data->resume)) { ··· 490 482 static unsigned long 491 483 acpi_cpufreq_guess_freq(struct acpi_cpufreq_data *data, unsigned int cpu) 492 484 { 493 - struct acpi_processor_performance *perf = data->acpi_data; 485 + struct acpi_processor_performance *perf; 494 486 487 + perf = to_perf_data(data); 495 488 if (cpu_khz) { 496 489 /* search the closest match to cpu_khz */ 497 490 unsigned int i; ··· 681 672 goto err_free; 682 673 } 683 674 684 - data->acpi_data = per_cpu_ptr(acpi_perf_data, cpu); 685 - per_cpu(acfreq_data, cpu) = data; 675 + perf = per_cpu_ptr(acpi_perf_data, cpu); 676 + data->acpi_perf_cpu = cpu; 677 + policy->driver_data = data; 686 678 687 679 if (cpu_has(c, X86_FEATURE_CONSTANT_TSC)) 688 680 acpi_cpufreq_driver.flags |= CPUFREQ_CONST_LOOPS; 689 681 690 - result = acpi_processor_register_performance(data->acpi_data, cpu); 682 + result = acpi_processor_register_performance(perf, cpu); 691 683 if 
(result) 692 684 goto err_free_mask; 693 685 694 - perf = data->acpi_data; 695 686 policy->shared_type = perf->shared_type; 696 687 697 688 /* ··· 847 838 err_freqfree: 848 839 kfree(data->freq_table); 849 840 err_unreg: 850 - acpi_processor_unregister_performance(perf, cpu); 841 + acpi_processor_unregister_performance(cpu); 851 842 err_free_mask: 852 843 free_cpumask_var(data->freqdomain_cpus); 853 844 err_free: 854 845 kfree(data); 855 - per_cpu(acfreq_data, cpu) = NULL; 846 + policy->driver_data = NULL; 856 847 857 848 return result; 858 849 } 859 850 860 851 static int acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy) 861 852 { 862 - struct acpi_cpufreq_data *data = per_cpu(acfreq_data, policy->cpu); 853 + struct acpi_cpufreq_data *data = policy->driver_data; 863 854 864 855 pr_debug("acpi_cpufreq_cpu_exit\n"); 865 856 866 857 if (data) { 867 - per_cpu(acfreq_data, policy->cpu) = NULL; 868 - acpi_processor_unregister_performance(data->acpi_data, 869 - policy->cpu); 858 + policy->driver_data = NULL; 859 + acpi_processor_unregister_performance(data->acpi_perf_cpu); 870 860 free_cpumask_var(data->freqdomain_cpus); 871 861 kfree(data->freq_table); 872 862 kfree(data); ··· 876 868 877 869 static int acpi_cpufreq_resume(struct cpufreq_policy *policy) 878 870 { 879 - struct acpi_cpufreq_data *data = per_cpu(acfreq_data, policy->cpu); 871 + struct acpi_cpufreq_data *data = policy->driver_data; 880 872 881 873 pr_debug("acpi_cpufreq_resume\n"); 882 874 ··· 888 880 static struct freq_attr *acpi_cpufreq_attr[] = { 889 881 &cpufreq_freq_attr_scaling_available_freqs, 890 882 &freqdomain_cpus, 891 - NULL, /* this is a placeholder for cpb, do not remove */ 883 + #ifdef CONFIG_X86_ACPI_CPUFREQ_CPB 884 + &cpb, 885 + #endif 892 886 NULL, 893 887 }; 894 888 ··· 963 953 * only if configured. This is considered legacy code, which 964 954 * will probably be removed at some point in the future. 
965 955 */ 966 - if (check_amd_hwpstate_cpu(0)) { 967 - struct freq_attr **iter; 956 + if (!check_amd_hwpstate_cpu(0)) { 957 + struct freq_attr **attr; 968 958 969 - pr_debug("adding sysfs entry for cpb\n"); 959 + pr_debug("CPB unsupported, do not expose it\n"); 970 960 971 - for (iter = acpi_cpufreq_attr; *iter != NULL; iter++) 972 - ; 973 - 974 - /* make sure there is a terminator behind it */ 975 - if (iter[1] == NULL) 976 - *iter = &cpb; 961 + for (attr = acpi_cpufreq_attr; *attr; attr++) 962 + if (*attr == &cpb) { 963 + *attr = NULL; 964 + break; 965 + } 977 966 } 978 967 #endif 979 968 acpi_cpufreq_boost_init();
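The last hunk above inverts how the optional `cpb` sysfs attribute is handled: instead of patching it into a reserved `NULL` slot when supported, the attribute is compiled into the array and knocked out at init when the CPU lacks the feature. A sketch of that pattern with a minimal `struct freq_attr` stand-in (the kernel's real struct carries show/store callbacks):

```c
#include <assert.h>
#include <stddef.h>

struct freq_attr { const char *name; };

static struct freq_attr scaling_available_freqs = { "scaling_available_freqs" };
static struct freq_attr cpb = { "cpb" };

/* NULL-terminated attribute list; cpb is optional */
static struct freq_attr *acpi_cpufreq_attr[] = {
	&scaling_available_freqs,
	&cpb,			/* removed below when unsupported */
	NULL,
};

static void filter_cpb(int cpb_supported)
{
	struct freq_attr **attr;

	if (cpb_supported)
		return;

	for (attr = acpi_cpufreq_attr; *attr; attr++) {
		if (*attr == &cpb) {
			/* cpb is the last real entry, so this truncates */
			*attr = NULL;
			break;
		}
	}
}
```

This works because `cpb` sits immediately before the terminator, so writing `NULL` over it simply shortens the list; the old code had to hunt for the terminator and extend the array instead.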
drivers/cpufreq/cpufreq-dt.c (+62 -11)
··· 36 36 unsigned int voltage_tolerance; /* in percentage */ 37 37 }; 38 38 39 + static struct freq_attr *cpufreq_dt_attr[] = { 40 + &cpufreq_freq_attr_scaling_available_freqs, 41 + NULL, /* Extra space for boost-attr if required */ 42 + NULL, 43 + }; 44 + 39 45 static int set_target(struct cpufreq_policy *policy, unsigned int index) 40 46 { 41 47 struct dev_pm_opp *opp; ··· 190 184 191 185 static int cpufreq_init(struct cpufreq_policy *policy) 192 186 { 193 - struct cpufreq_dt_platform_data *pd; 194 187 struct cpufreq_frequency_table *freq_table; 195 188 struct device_node *np; 196 189 struct private_data *priv; ··· 198 193 struct clk *cpu_clk; 199 194 unsigned long min_uV = ~0, max_uV = 0; 200 195 unsigned int transition_latency; 196 + bool need_update = false; 201 197 int ret; 202 198 203 199 ret = allocate_resources(policy->cpu, &cpu_dev, &cpu_reg, &cpu_clk); ··· 214 208 goto out_put_reg_clk; 215 209 } 216 210 217 - /* OPPs might be populated at runtime, don't check for error here */ 218 - of_init_opp_table(cpu_dev); 211 + /* Get OPP-sharing information from "operating-points-v2" bindings */ 212 + ret = of_get_cpus_sharing_opps(cpu_dev, policy->cpus); 213 + if (ret) { 214 + /* 215 + * operating-points-v2 not supported, fallback to old method of 216 + * finding shared-OPPs for backward compatibility. 217 + */ 218 + if (ret == -ENOENT) 219 + need_update = true; 220 + else 221 + goto out_node_put; 222 + } 223 + 224 + /* 225 + * Initialize OPP tables for all policy->cpus. They will be shared by 226 + * all CPUs which have marked their CPUs shared with OPP bindings. 227 + * 228 + * For platforms not using operating-points-v2 bindings, we do this 229 + * before updating policy->cpus. Otherwise, we will end up creating 230 + * duplicate OPPs for policy->cpus. 
231 + * 232 + * OPPs might be populated at runtime, don't check for error here 233 + */ 234 + of_cpumask_init_opp_table(policy->cpus); 235 + 236 + if (need_update) { 237 + struct cpufreq_dt_platform_data *pd = cpufreq_get_driver_data(); 238 + 239 + if (!pd || !pd->independent_clocks) 240 + cpumask_setall(policy->cpus); 241 + 242 + /* 243 + * OPP tables are initialized only for policy->cpu, do it for 244 + * others as well. 245 + */ 246 + set_cpus_sharing_opps(cpu_dev, policy->cpus); 247 + 248 + of_property_read_u32(np, "clock-latency", &transition_latency); 249 + } else { 250 + transition_latency = dev_pm_opp_get_max_clock_latency(cpu_dev); 251 + } 219 252 220 253 /* 221 254 * But we need OPP table to function so if it is not there let's ··· 275 230 276 231 of_property_read_u32(np, "voltage-tolerance", &priv->voltage_tolerance); 277 232 278 - if (of_property_read_u32(np, "clock-latency", &transition_latency)) 233 + if (!transition_latency) 279 234 transition_latency = CPUFREQ_ETERNAL; 280 235 281 236 if (!IS_ERR(cpu_reg)) { ··· 336 291 goto out_free_cpufreq_table; 337 292 } 338 293 339 - policy->cpuinfo.transition_latency = transition_latency; 294 + /* Support turbo/boost mode */ 295 + if (policy_has_boost_freq(policy)) { 296 + /* This gets disabled by core on driver unregister */ 297 + ret = cpufreq_enable_boost_support(); 298 + if (ret) 299 + goto out_free_cpufreq_table; 300 + cpufreq_dt_attr[1] = &cpufreq_freq_attr_scaling_boost_freqs; 301 + } 340 302 341 - pd = cpufreq_get_driver_data(); 342 - if (!pd || !pd->independent_clocks) 343 - cpumask_setall(policy->cpus); 303 + policy->cpuinfo.transition_latency = transition_latency; 344 304 345 305 of_node_put(np); 346 306 ··· 356 306 out_free_priv: 357 307 kfree(priv); 358 308 out_free_opp: 359 - of_free_opp_table(cpu_dev); 309 + of_cpumask_free_opp_table(policy->cpus); 310 + out_node_put: 360 311 of_node_put(np); 361 312 out_put_reg_clk: 362 313 clk_put(cpu_clk); ··· 373 322 374 323 
cpufreq_cooling_unregister(priv->cdev); 375 324 dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table); 376 - of_free_opp_table(priv->cpu_dev); 325 + of_cpumask_free_opp_table(policy->related_cpus); 377 326 clk_put(policy->clk); 378 327 if (!IS_ERR(priv->cpu_reg)) 379 328 regulator_put(priv->cpu_reg); ··· 418 367 .exit = cpufreq_exit, 419 368 .ready = cpufreq_ready, 420 369 .name = "cpufreq-dt", 421 - .attr = cpufreq_generic_attr, 370 + .attr = cpufreq_dt_attr, 422 371 }; 423 372 424 373 static int dt_cpufreq_probe(struct platform_device *pdev)
+195 -224
drivers/cpufreq/cpufreq.c
···
 	return cpufreq_driver->target_index || cpufreq_driver->target;
 }
 
-/*
- * rwsem to guarantee that cpufreq driver module doesn't unload during critical
- * sections
- */
-static DECLARE_RWSEM(cpufreq_rwsem);
-
 /* internal prototypes */
 static int __cpufreq_governor(struct cpufreq_policy *policy,
 		unsigned int event);
···
  * If corresponding call cpufreq_cpu_put() isn't made, the policy wouldn't be
  * freed as that depends on the kobj count.
  *
- * It also takes a read-lock of 'cpufreq_rwsem' and doesn't put it back if a
- * valid policy is found. This is done to make sure the driver doesn't get
- * unregistered while the policy is being used.
- *
  * Return: A valid policy on success, otherwise NULL on failure.
  */
 struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu)
···
 	unsigned long flags;
 
 	if (WARN_ON(cpu >= nr_cpu_ids))
-		return NULL;
-
-	if (!down_read_trylock(&cpufreq_rwsem))
 		return NULL;
 
 	/* get the cpufreq driver */
···
 
 	read_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
-	if (!policy)
-		up_read(&cpufreq_rwsem);
-
 	return policy;
 }
 EXPORT_SYMBOL_GPL(cpufreq_cpu_get);
···
  *
  * This decrements the kobject reference count incremented earlier by calling
  * cpufreq_cpu_get().
- *
- * It also drops the read-lock of 'cpufreq_rwsem' taken at cpufreq_cpu_get().
  */
 void cpufreq_cpu_put(struct cpufreq_policy *policy)
 {
 	kobject_put(&policy->kobj);
-	up_read(&cpufreq_rwsem);
 }
 EXPORT_SYMBOL_GPL(cpufreq_cpu_put);
···
 {
 	int err = -EINVAL;
 
-	if (!cpufreq_driver)
-		goto out;
-
 	if (cpufreq_driver->setpolicy) {
 		if (!strncasecmp(str_governor, "performance", CPUFREQ_NAME_LEN)) {
 			*policy = CPUFREQ_POLICY_PERFORMANCE;
···
 
 		mutex_unlock(&cpufreq_governor_mutex);
 	}
-out:
 	return err;
 }
···
 	int ret, temp;							\
 	struct cpufreq_policy new_policy;				\
 									\
-	ret = cpufreq_get_policy(&new_policy, policy->cpu);		\
-	if (ret)							\
-		return -EINVAL;						\
+	memcpy(&new_policy, policy, sizeof(*policy));			\
 									\
 	ret = sscanf(buf, "%u", &new_policy.object);			\
 	if (ret != 1)							\
···
 	char str_governor[16];
 	struct cpufreq_policy new_policy;
 
-	ret = cpufreq_get_policy(&new_policy, policy->cpu);
-	if (ret)
-		return ret;
+	memcpy(&new_policy, policy, sizeof(*policy));
 
 	ret = sscanf(buf, "%15s", str_governor);
 	if (ret != 1)
···
 		return -EINVAL;
 
 	ret = cpufreq_set_policy(policy, &new_policy);
-
-	policy->user_policy.policy = policy->policy;
-	policy->user_policy.governor = policy->governor;
-
-	if (ret)
-		return ret;
-	else
-		return count;
+	return ret ? ret : count;
 }
 
 /**
···
 	struct freq_attr *fattr = to_attr(attr);
 	ssize_t ret;
 
-	if (!down_read_trylock(&cpufreq_rwsem))
-		return -EINVAL;
-
 	down_read(&policy->rwsem);
 
 	if (fattr->show)
···
 		ret = -EIO;
 
 	up_read(&policy->rwsem);
-	up_read(&cpufreq_rwsem);
 
 	return ret;
 }
···
 	get_online_cpus();
 
 	if (!cpu_online(policy->cpu))
-		goto unlock;
-
-	if (!down_read_trylock(&cpufreq_rwsem))
 		goto unlock;
 
 	down_write(&policy->rwsem);
···
 
 unlock_policy_rwsem:
 	up_write(&policy->rwsem);
-
-	up_read(&cpufreq_rwsem);
 unlock:
 	put_online_cpus();
···
 	}
 }
 
-static int cpufreq_add_dev_interface(struct cpufreq_policy *policy,
-				     struct device *dev)
+static int cpufreq_add_dev_interface(struct cpufreq_policy *policy)
 {
 	struct freq_attr **drv_attr;
 	int ret = 0;
···
 	return cpufreq_add_dev_symlink(policy);
 }
 
-static void cpufreq_init_policy(struct cpufreq_policy *policy)
+static int cpufreq_init_policy(struct cpufreq_policy *policy)
 {
 	struct cpufreq_governor *gov = NULL;
 	struct cpufreq_policy new_policy;
-	int ret = 0;
 
 	memcpy(&new_policy, policy, sizeof(*policy));
···
 	cpufreq_parse_governor(gov->name, &new_policy.policy, NULL);
 
 	/* set default policy */
-	ret = cpufreq_set_policy(policy, &new_policy);
-	if (ret) {
-		pr_debug("setting policy failed\n");
-		if (cpufreq_driver->exit)
-			cpufreq_driver->exit(policy);
-	}
+	return cpufreq_set_policy(policy, &new_policy);
 }
 
-static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy,
-				  unsigned int cpu, struct device *dev)
+static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy, unsigned int cpu)
 {
 	int ret = 0;
···
 	return 0;
 }
 
-static struct cpufreq_policy *cpufreq_policy_restore(unsigned int cpu)
+static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
 {
-	struct cpufreq_policy *policy;
-	unsigned long flags;
-
-	read_lock_irqsave(&cpufreq_driver_lock, flags);
-	policy = per_cpu(cpufreq_cpu_data, cpu);
-	read_unlock_irqrestore(&cpufreq_driver_lock, flags);
-
-	if (likely(policy)) {
-		/* Policy should be inactive here */
-		WARN_ON(!policy_is_inactive(policy));
-
-		down_write(&policy->rwsem);
-		policy->cpu = cpu;
-		policy->governor = NULL;
-		up_write(&policy->rwsem);
-	}
-
-	return policy;
-}
-
-static struct cpufreq_policy *cpufreq_policy_alloc(struct device *dev)
-{
+	struct device *dev = get_cpu_device(cpu);
 	struct cpufreq_policy *policy;
 	int ret;
+
+	if (WARN_ON(!dev))
+		return NULL;
 
 	policy = kzalloc(sizeof(*policy), GFP_KERNEL);
 	if (!policy)
···
 	init_completion(&policy->kobj_unregister);
 	INIT_WORK(&policy->update, handle_update);
 
-	policy->cpu = dev->id;
+	policy->cpu = cpu;
 
 	/* Set this once on allocation */
-	policy->kobj_cpu = dev->id;
+	policy->kobj_cpu = cpu;
 
 	return policy;
···
 	kfree(policy);
 }
 
-/**
- * cpufreq_add_dev - add a CPU device
- *
- * Adds the cpufreq interface for a CPU device.
- *
- * The Oracle says: try running cpufreq registration/unregistration concurrently
- * with with cpu hotplugging and all hell will break loose. Tried to clean this
- * mess up, but more thorough testing is needed. - Mathieu
- */
-static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
+static int cpufreq_online(unsigned int cpu)
 {
-	unsigned int j, cpu = dev->id;
-	int ret = -ENOMEM;
 	struct cpufreq_policy *policy;
+	bool new_policy;
 	unsigned long flags;
-	bool recover_policy = !sif;
+	unsigned int j;
+	int ret;
 
-	pr_debug("adding CPU %u\n", cpu);
-
-	if (cpu_is_offline(cpu)) {
-		/*
-		 * Only possible if we are here from the subsys_interface add
-		 * callback. A hotplug notifier will follow and we will handle
-		 * it as CPU online then. For now, just create the sysfs link,
-		 * unless there is no policy or the link is already present.
-		 */
-		policy = per_cpu(cpufreq_cpu_data, cpu);
-		return policy && !cpumask_test_and_set_cpu(cpu, policy->real_cpus)
-			? add_cpu_dev_symlink(policy, cpu) : 0;
-	}
-
-	if (!down_read_trylock(&cpufreq_rwsem))
-		return 0;
+	pr_debug("%s: bringing CPU%u online\n", __func__, cpu);
 
 	/* Check if this CPU already has a policy to manage it */
 	policy = per_cpu(cpufreq_cpu_data, cpu);
-	if (policy && !policy_is_inactive(policy)) {
+	if (policy) {
 		WARN_ON(!cpumask_test_cpu(cpu, policy->related_cpus));
-		ret = cpufreq_add_policy_cpu(policy, cpu, dev);
-		up_read(&cpufreq_rwsem);
-		return ret;
-	}
+		if (!policy_is_inactive(policy))
+			return cpufreq_add_policy_cpu(policy, cpu);
 
-	/*
-	 * Restore the saved policy when doing light-weight init and fall back
-	 * to the full init if that fails.
-	 */
-	policy = recover_policy ? cpufreq_policy_restore(cpu) : NULL;
-	if (!policy) {
-		recover_policy = false;
-		policy = cpufreq_policy_alloc(dev);
+		/* This is the only online CPU for the policy.  Start over. */
+		new_policy = false;
+		down_write(&policy->rwsem);
+		policy->cpu = cpu;
+		policy->governor = NULL;
+		up_write(&policy->rwsem);
+	} else {
+		new_policy = true;
+		policy = cpufreq_policy_alloc(cpu);
 		if (!policy)
-			goto nomem_out;
+			return -ENOMEM;
 	}
 
 	cpumask_copy(policy->cpus, cpumask_of(cpu));
···
 	ret = cpufreq_driver->init(policy);
 	if (ret) {
 		pr_debug("initialization failed\n");
-		goto err_set_policy_cpu;
+		goto out_free_policy;
 	}
 
 	down_write(&policy->rwsem);
 
-	/* related cpus should atleast have policy->cpus */
-	cpumask_or(policy->related_cpus, policy->related_cpus, policy->cpus);
-
-	/* Remember which CPUs have been present at the policy creation time. */
-	if (!recover_policy)
+	if (new_policy) {
+		/* related_cpus should at least include policy->cpus. */
+		cpumask_or(policy->related_cpus, policy->related_cpus, policy->cpus);
+		/* Remember CPUs present at the policy creation time. */
 		cpumask_and(policy->real_cpus, policy->cpus, cpu_present_mask);
+	}
 
 	/*
 	 * affected cpus must always be the one, which are online. We aren't
···
 	 */
 	cpumask_and(policy->cpus, policy->cpus, cpu_online_mask);
 
-	if (!recover_policy) {
+	if (new_policy) {
 		policy->user_policy.min = policy->min;
 		policy->user_policy.max = policy->max;
···
 		policy->cur = cpufreq_driver->get(policy->cpu);
 		if (!policy->cur) {
 			pr_err("%s: ->get() failed\n", __func__);
-			goto err_get_freq;
+			goto out_exit_policy;
 		}
 	}
···
 	blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
 			CPUFREQ_START, policy);
 
-	if (!recover_policy) {
-		ret = cpufreq_add_dev_interface(policy, dev);
+	if (new_policy) {
+		ret = cpufreq_add_dev_interface(policy);
 		if (ret)
-			goto err_out_unregister;
+			goto out_exit_policy;
 		blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
 				CPUFREQ_CREATE_POLICY, policy);
···
 		write_unlock_irqrestore(&cpufreq_driver_lock, flags);
 	}
 
-	cpufreq_init_policy(policy);
-
-	if (!recover_policy) {
-		policy->user_policy.policy = policy->policy;
-		policy->user_policy.governor = policy->governor;
+	ret = cpufreq_init_policy(policy);
+	if (ret) {
+		pr_err("%s: Failed to initialize policy for cpu: %d (%d)\n",
+		       __func__, cpu, ret);
+		/* cpufreq_policy_free() will notify based on this */
+		new_policy = false;
+		goto out_exit_policy;
 	}
+
 	up_write(&policy->rwsem);
 
 	kobject_uevent(&policy->kobj, KOBJ_ADD);
-
-	up_read(&cpufreq_rwsem);
 
 	/* Callback for handling stuff after policy is ready */
 	if (cpufreq_driver->ready)
···
 
 	return 0;
 
-err_out_unregister:
-err_get_freq:
+out_exit_policy:
 	up_write(&policy->rwsem);
 
 	if (cpufreq_driver->exit)
 		cpufreq_driver->exit(policy);
-err_set_policy_cpu:
-	cpufreq_policy_free(policy, recover_policy);
-nomem_out:
-	up_read(&cpufreq_rwsem);
+out_free_policy:
+	cpufreq_policy_free(policy, !new_policy);
+	return ret;
+}
+
+/**
+ * cpufreq_add_dev - the cpufreq interface for a CPU device.
+ * @dev: CPU device.
+ * @sif: Subsystem interface structure pointer (not used)
+ */
+static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
+{
+	unsigned cpu = dev->id;
+	int ret;
+
+	dev_dbg(dev, "%s: adding CPU%u\n", __func__, cpu);
+
+	if (cpu_online(cpu)) {
+		ret = cpufreq_online(cpu);
+	} else {
+		/*
+		 * A hotplug notifier will follow and we will handle it as CPU
+		 * online then. For now, just create the sysfs link, unless
+		 * there is no policy or the link is already present.
+		 */
+		struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);
+
+		ret = policy && !cpumask_test_and_set_cpu(cpu, policy->real_cpus)
+			? add_cpu_dev_symlink(policy, cpu) : 0;
+	}
 
 	return ret;
 }
 
-static int __cpufreq_remove_dev_prepare(struct device *dev)
+static void cpufreq_offline_prepare(unsigned int cpu)
 {
-	unsigned int cpu = dev->id;
-	int ret = 0;
 	struct cpufreq_policy *policy;
 
 	pr_debug("%s: unregistering CPU %u\n", __func__, cpu);
···
 	policy = cpufreq_cpu_get_raw(cpu);
 	if (!policy) {
 		pr_debug("%s: No cpu_data found\n", __func__);
-		return -EINVAL;
+		return;
 	}
 
 	if (has_target()) {
-		ret = __cpufreq_governor(policy, CPUFREQ_GOV_STOP);
+		int ret = __cpufreq_governor(policy, CPUFREQ_GOV_STOP);
 		if (ret)
 			pr_err("%s: Failed to stop governor\n", __func__);
 	}
···
 	/* Start governor again for active policy */
 	if (!policy_is_inactive(policy)) {
 		if (has_target()) {
-			ret = __cpufreq_governor(policy, CPUFREQ_GOV_START);
+			int ret = __cpufreq_governor(policy, CPUFREQ_GOV_START);
 			if (!ret)
 				ret = __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS);
···
 	} else if (cpufreq_driver->stop_cpu) {
 		cpufreq_driver->stop_cpu(policy);
 	}
-
-	return ret;
 }
 
-static int __cpufreq_remove_dev_finish(struct device *dev)
+static void cpufreq_offline_finish(unsigned int cpu)
 {
-	unsigned int cpu = dev->id;
-	int ret;
 	struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);
 
 	if (!policy) {
 		pr_debug("%s: No cpu_data found\n", __func__);
-		return -EINVAL;
+		return;
 	}
 
 	/* Only proceed for inactive policies */
 	if (!policy_is_inactive(policy))
-		return 0;
+		return;
 
 	/* If cpu is last user of policy, free policy */
 	if (has_target()) {
-		ret = __cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
+		int ret = __cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
 		if (ret)
 			pr_err("%s: Failed to exit governor\n", __func__);
 	}
···
 	 */
 	if (cpufreq_driver->exit)
 		cpufreq_driver->exit(policy);
-
-	return 0;
 }
 
 /**
···
 		return;
 
 	if (cpu_online(cpu)) {
-		__cpufreq_remove_dev_prepare(dev);
-		__cpufreq_remove_dev_finish(dev);
+		cpufreq_offline_prepare(cpu);
+		cpufreq_offline_finish(cpu);
 	}
 
 	cpumask_clear_cpu(cpu, policy->real_cpus);
···
 
 	memcpy(&new_policy->cpuinfo, &policy->cpuinfo, sizeof(policy->cpuinfo));
 
-	if (new_policy->min > policy->max || new_policy->max < policy->min)
+	/*
+	 * This check works well when we store new min/max freq attributes,
+	 * because new_policy is a copy of policy with one field updated.
+	 */
+	if (new_policy->min > new_policy->max)
 		return -EINVAL;
 
 	/* verify the cpu speed can be set within this limit */
···
 	/* adjust if necessary - all reasons */
 	blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
 			CPUFREQ_ADJUST, new_policy);
-
-	/* adjust if necessary - hardware incompatibility*/
-	blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
-			CPUFREQ_INCOMPATIBLE, new_policy);
 
 	/*
 	 * verify the cpu speed can be set within this limit, which might be
···
 	old_gov = policy->governor;
 	/* end old governor */
 	if (old_gov) {
-		__cpufreq_governor(policy, CPUFREQ_GOV_STOP);
+		ret = __cpufreq_governor(policy, CPUFREQ_GOV_STOP);
+		if (ret) {
+			/* This can happen due to race with other operations */
+			pr_debug("%s: Failed to Stop Governor: %s (%d)\n",
+				 __func__, old_gov->name, ret);
+			return ret;
+		}
+
 		up_write(&policy->rwsem);
-		__cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
+		ret = __cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
 		down_write(&policy->rwsem);
+
+		if (ret) {
+			pr_err("%s: Failed to Exit Governor: %s (%d)\n",
+			       __func__, old_gov->name, ret);
+			return ret;
+		}
 	}
 
 	/* start new governor */
 	policy->governor = new_policy->governor;
-	if (!__cpufreq_governor(policy, CPUFREQ_GOV_POLICY_INIT)) {
-		if (!__cpufreq_governor(policy, CPUFREQ_GOV_START))
+	ret = __cpufreq_governor(policy, CPUFREQ_GOV_POLICY_INIT);
+	if (!ret) {
+		ret = __cpufreq_governor(policy, CPUFREQ_GOV_START);
+		if (!ret)
 			goto out;
 
 		up_write(&policy->rwsem);
···
 	pr_debug("starting governor %s failed\n", policy->governor->name);
 	if (old_gov) {
 		policy->governor = old_gov;
-		__cpufreq_governor(policy, CPUFREQ_GOV_POLICY_INIT);
-		__cpufreq_governor(policy, CPUFREQ_GOV_START);
+		if (__cpufreq_governor(policy, CPUFREQ_GOV_POLICY_INIT))
+			policy->governor = NULL;
+		else
+			__cpufreq_governor(policy, CPUFREQ_GOV_START);
 	}
 
-	return -EINVAL;
+	return ret;
 
 out:
 	pr_debug("governor: change or update limits\n");
···
 	memcpy(&new_policy, policy, sizeof(*policy));
 	new_policy.min = policy->user_policy.min;
 	new_policy.max = policy->user_policy.max;
-	new_policy.policy = policy->user_policy.policy;
-	new_policy.governor = policy->user_policy.governor;
 
 	/*
 	 * BIOS might change freq behind our back
···
 					unsigned long action, void *hcpu)
 {
 	unsigned int cpu = (unsigned long)hcpu;
-	struct device *dev;
 
-	dev = get_cpu_device(cpu);
-	if (dev) {
-		switch (action & ~CPU_TASKS_FROZEN) {
-		case CPU_ONLINE:
-			cpufreq_add_dev(dev, NULL);
-			break;
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_ONLINE:
+		cpufreq_online(cpu);
+		break;
 
-		case CPU_DOWN_PREPARE:
-			__cpufreq_remove_dev_prepare(dev);
-			break;
+	case CPU_DOWN_PREPARE:
+		cpufreq_offline_prepare(cpu);
+		break;
 
-		case CPU_POST_DEAD:
-			__cpufreq_remove_dev_finish(dev);
-			break;
+	case CPU_POST_DEAD:
+		cpufreq_offline_finish(cpu);
+		break;
 
-		case CPU_DOWN_FAILED:
-			cpufreq_add_dev(dev, NULL);
-			break;
-		}
+	case CPU_DOWN_FAILED:
+		cpufreq_online(cpu);
+		break;
 	}
 	return NOTIFY_OK;
 }
···
 }
 EXPORT_SYMBOL_GPL(cpufreq_boost_supported);
 
+static int create_boost_sysfs_file(void)
+{
+	int ret;
+
+	if (!cpufreq_boost_supported())
+		return 0;
+
+	/*
+	 * Check if driver provides function to enable boost -
+	 * if not, use cpufreq_boost_set_sw as default
+	 */
+	if (!cpufreq_driver->set_boost)
+		cpufreq_driver->set_boost = cpufreq_boost_set_sw;
+
+	ret = cpufreq_sysfs_create_file(&boost.attr);
+	if (ret)
+		pr_err("%s: cannot register global BOOST sysfs file\n",
+		       __func__);
+
+	return ret;
+}
+
+static void remove_boost_sysfs_file(void)
+{
+	if (cpufreq_boost_supported())
+		cpufreq_sysfs_remove_file(&boost.attr);
+}
+
+int cpufreq_enable_boost_support(void)
+{
+	if (!cpufreq_driver)
+		return -EINVAL;
+
+	if (cpufreq_boost_supported())
+		return 0;
+
+	cpufreq_driver->boost_supported = true;
+
+	/* This will get removed on driver unregister */
+	return create_boost_sysfs_file();
+}
+EXPORT_SYMBOL_GPL(cpufreq_enable_boost_support);
+
 int cpufreq_boost_enabled(void)
 {
 	return cpufreq_driver->boost_enabled;
···
 
 	pr_debug("trying to register driver %s\n", driver_data->name);
 
+	/* Protect against concurrent CPU online/offline. */
+	get_online_cpus();
+
 	write_lock_irqsave(&cpufreq_driver_lock, flags);
 	if (cpufreq_driver) {
 		write_unlock_irqrestore(&cpufreq_driver_lock, flags);
-		return -EEXIST;
+		ret = -EEXIST;
+		goto out;
 	}
 	cpufreq_driver = driver_data;
 	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
···
 	if (driver_data->setpolicy)
 		driver_data->flags |= CPUFREQ_CONST_LOOPS;
 
-	if (cpufreq_boost_supported()) {
-		/*
-		 * Check if driver provides function to enable boost -
-		 * if not, use cpufreq_boost_set_sw as default
-		 */
-		if (!cpufreq_driver->set_boost)
-			cpufreq_driver->set_boost = cpufreq_boost_set_sw;
-
-		ret = cpufreq_sysfs_create_file(&boost.attr);
-		if (ret) {
-			pr_err("%s: cannot register global BOOST sysfs file\n",
-			       __func__);
-			goto err_null_driver;
-		}
-	}
+	ret = create_boost_sysfs_file();
+	if (ret)
+		goto err_null_driver;
 
 	ret = subsys_interface_register(&cpufreq_interface);
 	if (ret)
···
 	register_hotcpu_notifier(&cpufreq_cpu_notifier);
 	pr_debug("driver %s up and running\n", driver_data->name);
 
-	return 0;
+out:
+	put_online_cpus();
+	return ret;
+
 err_if_unreg:
 	subsys_interface_unregister(&cpufreq_interface);
 err_boost_unreg:
-	if (cpufreq_boost_supported())
-		cpufreq_sysfs_remove_file(&boost.attr);
+	remove_boost_sysfs_file();
 err_null_driver:
 	write_lock_irqsave(&cpufreq_driver_lock, flags);
 	cpufreq_driver = NULL;
 	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
-	return ret;
+	goto out;
 }
 EXPORT_SYMBOL_GPL(cpufreq_register_driver);
···
 
 	pr_debug("unregistering driver %s\n", driver->name);
 
+	/* Protect against concurrent cpu hotplug */
+	get_online_cpus();
 	subsys_interface_unregister(&cpufreq_interface);
-	if (cpufreq_boost_supported())
-		cpufreq_sysfs_remove_file(&boost.attr);
-
+	remove_boost_sysfs_file();
 	unregister_hotcpu_notifier(&cpufreq_cpu_notifier);
 
-	down_write(&cpufreq_rwsem);
 	write_lock_irqsave(&cpufreq_driver_lock, flags);
 
 	cpufreq_driver = NULL;
 
 	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
-	up_write(&cpufreq_rwsem);
+	put_online_cpus();
 
 	return 0;
 }
+7 -18
drivers/cpufreq/cpufreq_conservative.c
···
 static void cs_check_cpu(int cpu, unsigned int load)
 {
 	struct cs_cpu_dbs_info_s *dbs_info = &per_cpu(cs_cpu_dbs_info, cpu);
-	struct cpufreq_policy *policy = dbs_info->cdbs.cur_policy;
+	struct cpufreq_policy *policy = dbs_info->cdbs.shared->policy;
 	struct dbs_data *dbs_data = policy->governor_data;
 	struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
···
 	}
 }
 
-static void cs_dbs_timer(struct work_struct *work)
+static unsigned int cs_dbs_timer(struct cpu_dbs_info *cdbs,
+				 struct dbs_data *dbs_data, bool modify_all)
 {
-	struct cs_cpu_dbs_info_s *dbs_info = container_of(work,
-			struct cs_cpu_dbs_info_s, cdbs.work.work);
-	unsigned int cpu = dbs_info->cdbs.cur_policy->cpu;
-	struct cs_cpu_dbs_info_s *core_dbs_info = &per_cpu(cs_cpu_dbs_info,
-			cpu);
-	struct dbs_data *dbs_data = dbs_info->cdbs.cur_policy->governor_data;
 	struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
-	int delay = delay_for_sampling_rate(cs_tuners->sampling_rate);
-	bool modify_all = true;
 
-	mutex_lock(&core_dbs_info->cdbs.timer_mutex);
-	if (!need_load_eval(&core_dbs_info->cdbs, cs_tuners->sampling_rate))
-		modify_all = false;
-	else
-		dbs_check_cpu(dbs_data, cpu);
+	if (modify_all)
+		dbs_check_cpu(dbs_data, cdbs->shared->policy->cpu);
 
-	gov_queue_work(dbs_data, dbs_info->cdbs.cur_policy, delay, modify_all);
-	mutex_unlock(&core_dbs_info->cdbs.timer_mutex);
+	return delay_for_sampling_rate(cs_tuners->sampling_rate);
 }
 
 static int dbs_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
···
 	if (!dbs_info->enable)
 		return 0;
 
-	policy = dbs_info->cdbs.cur_policy;
+	policy = dbs_info->cdbs.shared->policy;
 
 	/*
 	 * we only care if our internally tracked freq moves outside the 'valid'
+146 -50
drivers/cpufreq/cpufreq_governor.c
···
 
 void dbs_check_cpu(struct dbs_data *dbs_data, int cpu)
 {
-	struct cpu_dbs_common_info *cdbs = dbs_data->cdata->get_cpu_cdbs(cpu);
+	struct cpu_dbs_info *cdbs = dbs_data->cdata->get_cpu_cdbs(cpu);
 	struct od_dbs_tuners *od_tuners = dbs_data->tuners;
 	struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
-	struct cpufreq_policy *policy;
+	struct cpufreq_policy *policy = cdbs->shared->policy;
 	unsigned int sampling_rate;
 	unsigned int max_load = 0;
 	unsigned int ignore_nice;
···
 		ignore_nice = cs_tuners->ignore_nice_load;
 	}
 
-	policy = cdbs->cur_policy;
-
 	/* Get Absolute Load */
 	for_each_cpu(j, policy->cpus) {
-		struct cpu_dbs_common_info *j_cdbs;
+		struct cpu_dbs_info *j_cdbs;
 		u64 cur_wall_time, cur_idle_time;
 		unsigned int idle_time, wall_time;
 		unsigned int load;
···
 static inline void __gov_queue_work(int cpu, struct dbs_data *dbs_data,
 		unsigned int delay)
 {
-	struct cpu_dbs_common_info *cdbs = dbs_data->cdata->get_cpu_cdbs(cpu);
+	struct cpu_dbs_info *cdbs = dbs_data->cdata->get_cpu_cdbs(cpu);
 
-	mod_delayed_work_on(cpu, system_wq, &cdbs->work, delay);
+	mod_delayed_work_on(cpu, system_wq, &cdbs->dwork, delay);
 }
 
 void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
···
 static inline void gov_cancel_work(struct dbs_data *dbs_data,
 		struct cpufreq_policy *policy)
 {
-	struct cpu_dbs_common_info *cdbs;
+	struct cpu_dbs_info *cdbs;
 	int i;
 
 	for_each_cpu(i, policy->cpus) {
 		cdbs = dbs_data->cdata->get_cpu_cdbs(i);
-		cancel_delayed_work_sync(&cdbs->work);
+		cancel_delayed_work_sync(&cdbs->dwork);
 	}
 }
 
 /* Will return if we need to evaluate cpu load again or not */
-bool need_load_eval(struct cpu_dbs_common_info *cdbs,
-		unsigned int sampling_rate)
+static bool need_load_eval(struct cpu_common_dbs_info *shared,
+			   unsigned int sampling_rate)
 {
-	if (policy_is_shared(cdbs->cur_policy)) {
+	if (policy_is_shared(shared->policy)) {
 		ktime_t time_now = ktime_get();
-		s64 delta_us = ktime_us_delta(time_now, cdbs->time_stamp);
+		s64 delta_us = ktime_us_delta(time_now, shared->time_stamp);
 
 		/* Do nothing if we recently have sampled */
 		if (delta_us < (s64)(sampling_rate / 2))
 			return false;
 		else
-			cdbs->time_stamp = time_now;
+			shared->time_stamp = time_now;
 	}
 
 	return true;
 }
-EXPORT_SYMBOL_GPL(need_load_eval);
+
+static void dbs_timer(struct work_struct *work)
+{
+	struct cpu_dbs_info *cdbs = container_of(work, struct cpu_dbs_info,
+						 dwork.work);
+	struct cpu_common_dbs_info *shared = cdbs->shared;
+	struct cpufreq_policy *policy = shared->policy;
+	struct dbs_data *dbs_data = policy->governor_data;
+	unsigned int sampling_rate, delay;
+	bool modify_all = true;
+
+	mutex_lock(&shared->timer_mutex);
+
+	if (dbs_data->cdata->governor == GOV_CONSERVATIVE) {
+		struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
+
+		sampling_rate = cs_tuners->sampling_rate;
+	} else {
+		struct od_dbs_tuners *od_tuners = dbs_data->tuners;
+
+		sampling_rate = od_tuners->sampling_rate;
+	}
+
+	if (!need_load_eval(cdbs->shared, sampling_rate))
+		modify_all = false;
+
+	delay = dbs_data->cdata->gov_dbs_timer(cdbs, dbs_data, modify_all);
+	gov_queue_work(dbs_data, policy, delay, modify_all);
+
+	mutex_unlock(&shared->timer_mutex);
+}
 
 static void set_sampling_rate(struct dbs_data *dbs_data,
 		unsigned int sampling_rate)
···
 	}
 }
 
+static int alloc_common_dbs_info(struct cpufreq_policy *policy,
+				 struct common_dbs_data *cdata)
+{
+	struct cpu_common_dbs_info *shared;
+	int j;
+
+	/* Allocate memory for the common information for policy->cpus */
+	shared = kzalloc(sizeof(*shared), GFP_KERNEL);
+	if (!shared)
+		return -ENOMEM;
+
+	/* Set shared for all CPUs, online+offline */
+	for_each_cpu(j, policy->related_cpus)
+		cdata->get_cpu_cdbs(j)->shared = shared;
+
+	return 0;
+}
+
+static void free_common_dbs_info(struct cpufreq_policy *policy,
+				 struct common_dbs_data *cdata)
+{
+	struct cpu_dbs_info *cdbs = cdata->get_cpu_cdbs(policy->cpu);
+	struct cpu_common_dbs_info *shared = cdbs->shared;
+	int j;
+
+	for_each_cpu(j, policy->cpus)
+		cdata->get_cpu_cdbs(j)->shared = NULL;
+
+	kfree(shared);
+}
+
 static int cpufreq_governor_init(struct cpufreq_policy *policy,
 				 struct dbs_data *dbs_data,
 				 struct common_dbs_data *cdata)
···
 	unsigned int latency;
 	int ret;
 
+	/* State should be equivalent to EXIT */
+	if (policy->governor_data)
+		return -EBUSY;
+
 	if (dbs_data) {
 		if (WARN_ON(have_governor_per_policy()))
 			return -EINVAL;
+
+		ret = alloc_common_dbs_info(policy, cdata);
+		if (ret)
+			return ret;
+
 		dbs_data->usage_count++;
 		policy->governor_data = dbs_data;
 		return 0;
···
 	if (!dbs_data)
 		return -ENOMEM;
 
+	ret = alloc_common_dbs_info(policy, cdata);
+	if (ret)
+		goto free_dbs_data;
+
 	dbs_data->cdata = cdata;
 	dbs_data->usage_count = 1;
 
 	ret = cdata->init(dbs_data, !policy->governor->initialized);
 	if (ret)
-		goto free_dbs_data;
+		goto free_common_dbs_info;
 
 	/* policy latency is in ns. Convert it to us first */
 	latency = policy->cpuinfo.transition_latency / 1000;
···
 	}
 cdata_exit:
 	cdata->exit(dbs_data, !policy->governor->initialized);
+free_common_dbs_info:
+	free_common_dbs_info(policy, cdata);
 free_dbs_data:
 	kfree(dbs_data);
 	return ret;
 }
 
-static void cpufreq_governor_exit(struct cpufreq_policy *policy,
-				  struct dbs_data *dbs_data)
+static int cpufreq_governor_exit(struct cpufreq_policy *policy,
+				 struct dbs_data *dbs_data)
 {
 	struct common_dbs_data *cdata = dbs_data->cdata;
+	struct cpu_dbs_info *cdbs = cdata->get_cpu_cdbs(policy->cpu);
+
+	/* State should be equivalent to INIT */
+	if (!cdbs->shared || cdbs->shared->policy)
+		return -EBUSY;
 
 	policy->governor_data = NULL;
 	if (!--dbs_data->usage_count) {
···
 		cdata->exit(dbs_data, policy->governor->initialized == 1);
 		kfree(dbs_data);
 	}
+
+	free_common_dbs_info(policy, cdata);
+	return 0;
 }
 
 static int cpufreq_governor_start(struct cpufreq_policy *policy,
···
 {
 	struct common_dbs_data *cdata = dbs_data->cdata;
 	unsigned int sampling_rate, ignore_nice, j, cpu = policy->cpu;
-	struct cpu_dbs_common_info *cpu_cdbs = cdata->get_cpu_cdbs(cpu);
+	struct cpu_dbs_info *cdbs = cdata->get_cpu_cdbs(cpu);
+	struct cpu_common_dbs_info *shared = cdbs->shared;
 	int io_busy = 0;
 
 	if (!policy->cur)
 		return -EINVAL;
+
+	/* State should be equivalent to INIT */
+	if (!shared || shared->policy)
+		return -EBUSY;
 
 	if (cdata->governor == GOV_CONSERVATIVE) {
 		struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
···
 		io_busy = od_tuners->io_is_busy;
 	}
 
+	shared->policy = policy;
+	shared->time_stamp = ktime_get();
+	mutex_init(&shared->timer_mutex);
+
for_each_cpu(j, policy->cpus) { 440 - struct cpu_dbs_common_info *j_cdbs = cdata->get_cpu_cdbs(j); 357 + struct cpu_dbs_info *j_cdbs = cdata->get_cpu_cdbs(j); 441 358 unsigned int prev_load; 442 359 443 - j_cdbs->cpu = j; 444 - j_cdbs->cur_policy = policy; 445 360 j_cdbs->prev_cpu_idle = 446 361 get_cpu_idle_time(j, &j_cdbs->prev_cpu_wall, io_busy); 447 362 ··· 455 366 if (ignore_nice) 456 367 j_cdbs->prev_cpu_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE]; 457 368 458 - mutex_init(&j_cdbs->timer_mutex); 459 - INIT_DEFERRABLE_WORK(&j_cdbs->work, cdata->gov_dbs_timer); 369 + INIT_DEFERRABLE_WORK(&j_cdbs->dwork, dbs_timer); 460 370 } 461 371 462 372 if (cdata->governor == GOV_CONSERVATIVE) { ··· 474 386 od_ops->powersave_bias_init_cpu(cpu); 475 387 } 476 388 477 - /* Initiate timer time stamp */ 478 - cpu_cdbs->time_stamp = ktime_get(); 479 - 480 389 gov_queue_work(dbs_data, policy, delay_for_sampling_rate(sampling_rate), 481 390 true); 482 391 return 0; 483 392 } 484 393 485 - static void cpufreq_governor_stop(struct cpufreq_policy *policy, 486 - struct dbs_data *dbs_data) 394 + static int cpufreq_governor_stop(struct cpufreq_policy *policy, 395 + struct dbs_data *dbs_data) 487 396 { 488 397 struct common_dbs_data *cdata = dbs_data->cdata; 489 398 unsigned int cpu = policy->cpu; 490 - struct cpu_dbs_common_info *cpu_cdbs = cdata->get_cpu_cdbs(cpu); 399 + struct cpu_dbs_info *cdbs = cdata->get_cpu_cdbs(cpu); 400 + struct cpu_common_dbs_info *shared = cdbs->shared; 401 + 402 + /* State should be equivalent to START */ 403 + if (!shared || !shared->policy) 404 + return -EBUSY; 405 + 406 + gov_cancel_work(dbs_data, policy); 491 407 492 408 if (cdata->governor == GOV_CONSERVATIVE) { 493 409 struct cs_cpu_dbs_info_s *cs_dbs_info = ··· 500 408 cs_dbs_info->enable = 0; 501 409 } 502 410 503 - gov_cancel_work(dbs_data, policy); 504 - 505 - mutex_destroy(&cpu_cdbs->timer_mutex); 506 - cpu_cdbs->cur_policy = NULL; 411 + shared->policy = NULL; 412 + 
mutex_destroy(&shared->timer_mutex); 413 + return 0; 507 414 } 508 415 509 - static void cpufreq_governor_limits(struct cpufreq_policy *policy, 510 - struct dbs_data *dbs_data) 416 + static int cpufreq_governor_limits(struct cpufreq_policy *policy, 417 + struct dbs_data *dbs_data) 511 418 { 512 419 struct common_dbs_data *cdata = dbs_data->cdata; 513 420 unsigned int cpu = policy->cpu; 514 - struct cpu_dbs_common_info *cpu_cdbs = cdata->get_cpu_cdbs(cpu); 421 + struct cpu_dbs_info *cdbs = cdata->get_cpu_cdbs(cpu); 515 422 516 - if (!cpu_cdbs->cur_policy) 517 - return; 423 + /* State should be equivalent to START */ 424 + if (!cdbs->shared || !cdbs->shared->policy) 425 + return -EBUSY; 518 426 519 - mutex_lock(&cpu_cdbs->timer_mutex); 520 - if (policy->max < cpu_cdbs->cur_policy->cur) 521 - __cpufreq_driver_target(cpu_cdbs->cur_policy, policy->max, 427 + mutex_lock(&cdbs->shared->timer_mutex); 428 + if (policy->max < cdbs->shared->policy->cur) 429 + __cpufreq_driver_target(cdbs->shared->policy, policy->max, 522 430 CPUFREQ_RELATION_H); 523 - else if (policy->min > cpu_cdbs->cur_policy->cur) 524 - __cpufreq_driver_target(cpu_cdbs->cur_policy, policy->min, 431 + else if (policy->min > cdbs->shared->policy->cur) 432 + __cpufreq_driver_target(cdbs->shared->policy, policy->min, 525 433 CPUFREQ_RELATION_L); 526 434 dbs_check_cpu(dbs_data, cpu); 527 - mutex_unlock(&cpu_cdbs->timer_mutex); 435 + mutex_unlock(&cdbs->shared->timer_mutex); 436 + 437 + return 0; 528 438 } 529 439 530 440 int cpufreq_governor_dbs(struct cpufreq_policy *policy, 531 441 struct common_dbs_data *cdata, unsigned int event) 532 442 { 533 443 struct dbs_data *dbs_data; 534 - int ret = 0; 444 + int ret; 535 445 536 446 /* Lock governor to block concurrent initialization of governor */ 537 447 mutex_lock(&cdata->mutex); ··· 543 449 else 544 450 dbs_data = cdata->gdbs_data; 545 451 546 - if (WARN_ON(!dbs_data && (event != CPUFREQ_GOV_POLICY_INIT))) { 452 + if (!dbs_data && (event != 
CPUFREQ_GOV_POLICY_INIT)) { 547 453 ret = -EINVAL; 548 454 goto unlock; 549 455 } ··· 553 459 ret = cpufreq_governor_init(policy, dbs_data, cdata); 554 460 break; 555 461 case CPUFREQ_GOV_POLICY_EXIT: 556 - cpufreq_governor_exit(policy, dbs_data); 462 + ret = cpufreq_governor_exit(policy, dbs_data); 557 463 break; 558 464 case CPUFREQ_GOV_START: 559 465 ret = cpufreq_governor_start(policy, dbs_data); 560 466 break; 561 467 case CPUFREQ_GOV_STOP: 562 - cpufreq_governor_stop(policy, dbs_data); 468 + ret = cpufreq_governor_stop(policy, dbs_data); 563 469 break; 564 470 case CPUFREQ_GOV_LIMITS: 565 - cpufreq_governor_limits(policy, dbs_data); 471 + ret = cpufreq_governor_limits(policy, dbs_data); 566 472 break; 473 + default: 474 + ret = -EINVAL; 567 475 } 568 476 569 477 unlock:
+22 -18
drivers/cpufreq/cpufreq_governor.h
··· 109 109 110 110 /* create helper routines */ 111 111 #define define_get_cpu_dbs_routines(_dbs_info) \ 112 - static struct cpu_dbs_common_info *get_cpu_cdbs(int cpu) \ 112 + static struct cpu_dbs_info *get_cpu_cdbs(int cpu) \ 113 113 { \ 114 114 return &per_cpu(_dbs_info, cpu).cdbs; \ 115 115 } \ ··· 128 128 * cs_*: Conservative governor 129 129 */ 130 130 131 + /* Common to all CPUs of a policy */ 132 + struct cpu_common_dbs_info { 133 + struct cpufreq_policy *policy; 134 + /* 135 + * percpu mutex that serializes governor limit change with dbs_timer 136 + * invocation. We do not want dbs_timer to run when user is changing 137 + * the governor or limits. 138 + */ 139 + struct mutex timer_mutex; 140 + ktime_t time_stamp; 141 + }; 142 + 131 143 /* Per cpu structures */ 132 - struct cpu_dbs_common_info { 133 - int cpu; 144 + struct cpu_dbs_info { 134 145 u64 prev_cpu_idle; 135 146 u64 prev_cpu_wall; 136 147 u64 prev_cpu_nice; ··· 152 141 * wake-up from idle. 153 142 */ 154 143 unsigned int prev_load; 155 - struct cpufreq_policy *cur_policy; 156 - struct delayed_work work; 157 - /* 158 - * percpu mutex that serializes governor limit change with gov_dbs_timer 159 - * invocation. We do not want gov_dbs_timer to run when user is changing 160 - * the governor or limits. 
161 - */ 162 - struct mutex timer_mutex; 163 - ktime_t time_stamp; 144 + struct delayed_work dwork; 145 + struct cpu_common_dbs_info *shared; 164 146 }; 165 147 166 148 struct od_cpu_dbs_info_s { 167 - struct cpu_dbs_common_info cdbs; 149 + struct cpu_dbs_info cdbs; 168 150 struct cpufreq_frequency_table *freq_table; 169 151 unsigned int freq_lo; 170 152 unsigned int freq_lo_jiffies; ··· 167 163 }; 168 164 169 165 struct cs_cpu_dbs_info_s { 170 - struct cpu_dbs_common_info cdbs; 166 + struct cpu_dbs_info cdbs; 171 167 unsigned int down_skip; 172 168 unsigned int requested_freq; 173 169 unsigned int enable:1; ··· 208 204 */ 209 205 struct dbs_data *gdbs_data; 210 206 211 - struct cpu_dbs_common_info *(*get_cpu_cdbs)(int cpu); 207 + struct cpu_dbs_info *(*get_cpu_cdbs)(int cpu); 212 208 void *(*get_cpu_dbs_info_s)(int cpu); 213 - void (*gov_dbs_timer)(struct work_struct *work); 209 + unsigned int (*gov_dbs_timer)(struct cpu_dbs_info *cdbs, 210 + struct dbs_data *dbs_data, 211 + bool modify_all); 214 212 void (*gov_check_cpu)(int cpu, unsigned int load); 215 213 int (*init)(struct dbs_data *dbs_data, bool notify); 216 214 void (*exit)(struct dbs_data *dbs_data, bool notify); ··· 271 265 extern struct mutex cpufreq_governor_lock; 272 266 273 267 void dbs_check_cpu(struct dbs_data *dbs_data, int cpu); 274 - bool need_load_eval(struct cpu_dbs_common_info *cdbs, 275 - unsigned int sampling_rate); 276 268 int cpufreq_governor_dbs(struct cpufreq_policy *policy, 277 269 struct common_dbs_data *cdata, unsigned int event); 278 270 void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
+32 -35
drivers/cpufreq/cpufreq_ondemand.c
··· 155 155 static void od_check_cpu(int cpu, unsigned int load) 156 156 { 157 157 struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, cpu); 158 - struct cpufreq_policy *policy = dbs_info->cdbs.cur_policy; 158 + struct cpufreq_policy *policy = dbs_info->cdbs.shared->policy; 159 159 struct dbs_data *dbs_data = policy->governor_data; 160 160 struct od_dbs_tuners *od_tuners = dbs_data->tuners; 161 161 ··· 191 191 } 192 192 } 193 193 194 - static void od_dbs_timer(struct work_struct *work) 194 + static unsigned int od_dbs_timer(struct cpu_dbs_info *cdbs, 195 + struct dbs_data *dbs_data, bool modify_all) 195 196 { 196 - struct od_cpu_dbs_info_s *dbs_info = 197 - container_of(work, struct od_cpu_dbs_info_s, cdbs.work.work); 198 - unsigned int cpu = dbs_info->cdbs.cur_policy->cpu; 199 - struct od_cpu_dbs_info_s *core_dbs_info = &per_cpu(od_cpu_dbs_info, 197 + struct cpufreq_policy *policy = cdbs->shared->policy; 198 + unsigned int cpu = policy->cpu; 199 + struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, 200 200 cpu); 201 - struct dbs_data *dbs_data = dbs_info->cdbs.cur_policy->governor_data; 202 201 struct od_dbs_tuners *od_tuners = dbs_data->tuners; 203 - int delay = 0, sample_type = core_dbs_info->sample_type; 204 - bool modify_all = true; 202 + int delay = 0, sample_type = dbs_info->sample_type; 205 203 206 - mutex_lock(&core_dbs_info->cdbs.timer_mutex); 207 - if (!need_load_eval(&core_dbs_info->cdbs, od_tuners->sampling_rate)) { 208 - modify_all = false; 204 + if (!modify_all) 209 205 goto max_delay; 210 - } 211 206 212 207 /* Common NORMAL_SAMPLE setup */ 213 - core_dbs_info->sample_type = OD_NORMAL_SAMPLE; 208 + dbs_info->sample_type = OD_NORMAL_SAMPLE; 214 209 if (sample_type == OD_SUB_SAMPLE) { 215 - delay = core_dbs_info->freq_lo_jiffies; 216 - __cpufreq_driver_target(core_dbs_info->cdbs.cur_policy, 217 - core_dbs_info->freq_lo, CPUFREQ_RELATION_H); 210 + delay = dbs_info->freq_lo_jiffies; 211 + __cpufreq_driver_target(policy, 
dbs_info->freq_lo, 212 + CPUFREQ_RELATION_H); 218 213 } else { 219 214 dbs_check_cpu(dbs_data, cpu); 220 - if (core_dbs_info->freq_lo) { 215 + if (dbs_info->freq_lo) { 221 216 /* Setup timer for SUB_SAMPLE */ 222 - core_dbs_info->sample_type = OD_SUB_SAMPLE; 223 - delay = core_dbs_info->freq_hi_jiffies; 217 + dbs_info->sample_type = OD_SUB_SAMPLE; 218 + delay = dbs_info->freq_hi_jiffies; 224 219 } 225 220 } 226 221 227 222 max_delay: 228 223 if (!delay) 229 224 delay = delay_for_sampling_rate(od_tuners->sampling_rate 230 - * core_dbs_info->rate_mult); 225 + * dbs_info->rate_mult); 231 226 232 - gov_queue_work(dbs_data, dbs_info->cdbs.cur_policy, delay, modify_all); 233 - mutex_unlock(&core_dbs_info->cdbs.timer_mutex); 227 + return delay; 234 228 } 235 229 236 230 /************************** sysfs interface ************************/ ··· 267 273 dbs_info = &per_cpu(od_cpu_dbs_info, cpu); 268 274 cpufreq_cpu_put(policy); 269 275 270 - mutex_lock(&dbs_info->cdbs.timer_mutex); 276 + mutex_lock(&dbs_info->cdbs.shared->timer_mutex); 271 277 272 - if (!delayed_work_pending(&dbs_info->cdbs.work)) { 273 - mutex_unlock(&dbs_info->cdbs.timer_mutex); 278 + if (!delayed_work_pending(&dbs_info->cdbs.dwork)) { 279 + mutex_unlock(&dbs_info->cdbs.shared->timer_mutex); 274 280 continue; 275 281 } 276 282 277 283 next_sampling = jiffies + usecs_to_jiffies(new_rate); 278 - appointed_at = dbs_info->cdbs.work.timer.expires; 284 + appointed_at = dbs_info->cdbs.dwork.timer.expires; 279 285 280 286 if (time_before(next_sampling, appointed_at)) { 281 287 282 - mutex_unlock(&dbs_info->cdbs.timer_mutex); 283 - cancel_delayed_work_sync(&dbs_info->cdbs.work); 284 - mutex_lock(&dbs_info->cdbs.timer_mutex); 288 + mutex_unlock(&dbs_info->cdbs.shared->timer_mutex); 289 + cancel_delayed_work_sync(&dbs_info->cdbs.dwork); 290 + mutex_lock(&dbs_info->cdbs.shared->timer_mutex); 285 291 286 - gov_queue_work(dbs_data, dbs_info->cdbs.cur_policy, 287 - usecs_to_jiffies(new_rate), true); 292 + 
gov_queue_work(dbs_data, policy, 293 + usecs_to_jiffies(new_rate), true); 288 294 289 295 } 290 - mutex_unlock(&dbs_info->cdbs.timer_mutex); 296 + mutex_unlock(&dbs_info->cdbs.shared->timer_mutex); 291 297 } 292 298 } 293 299 ··· 550 556 551 557 get_online_cpus(); 552 558 for_each_online_cpu(cpu) { 559 + struct cpu_common_dbs_info *shared; 560 + 553 561 if (cpumask_test_cpu(cpu, &done)) 554 562 continue; 555 563 556 - policy = per_cpu(od_cpu_dbs_info, cpu).cdbs.cur_policy; 557 - if (!policy) 564 + shared = per_cpu(od_cpu_dbs_info, cpu).cdbs.shared; 565 + if (!shared) 558 566 continue; 559 567 568 + policy = shared->policy; 560 569 cpumask_or(&done, &done, policy->cpus); 561 570 562 571 if (policy->governor != &cpufreq_gov_ondemand)
+4
drivers/cpufreq/cpufreq_opp.c
··· 75 75 } 76 76 freq_table[i].driver_data = i; 77 77 freq_table[i].frequency = rate / 1000; 78 + 79 + /* Is Boost/turbo opp ? */ 80 + if (dev_pm_opp_is_turbo(opp)) 81 + freq_table[i].flags = CPUFREQ_BOOST_FREQ; 78 82 } 79 83 80 84 freq_table[i].driver_data = i;
+1 -1
drivers/cpufreq/e_powersaver.c
··· 78 78 static int eps_acpi_exit(struct cpufreq_policy *policy) 79 79 { 80 80 if (eps_acpi_cpu_perf) { 81 - acpi_processor_unregister_performance(eps_acpi_cpu_perf, 0); 81 + acpi_processor_unregister_performance(0); 82 82 free_cpumask_var(eps_acpi_cpu_perf->shared_cpu_map); 83 83 kfree(eps_acpi_cpu_perf); 84 84 eps_acpi_cpu_perf = NULL;
+15
drivers/cpufreq/freq_table.c
··· 18 18 * FREQUENCY TABLE HELPERS * 19 19 *********************************************************************/ 20 20 21 + bool policy_has_boost_freq(struct cpufreq_policy *policy) 22 + { 23 + struct cpufreq_frequency_table *pos, *table = policy->freq_table; 24 + 25 + if (!table) 26 + return false; 27 + 28 + cpufreq_for_each_valid_entry(pos, table) 29 + if (pos->flags & CPUFREQ_BOOST_FREQ) 30 + return true; 31 + 32 + return false; 33 + } 34 + EXPORT_SYMBOL_GPL(policy_has_boost_freq); 35 + 21 36 int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy, 22 37 struct cpufreq_frequency_table *table) 23 38 {
+10 -10
drivers/cpufreq/ia64-acpi-cpufreq.c
··· 29 29 30 30 struct cpufreq_acpi_io { 31 31 struct acpi_processor_performance acpi_data; 32 - struct cpufreq_frequency_table *freq_table; 33 32 unsigned int resume; 34 33 }; 35 34 ··· 220 221 unsigned int cpu = policy->cpu; 221 222 struct cpufreq_acpi_io *data; 222 223 unsigned int result = 0; 224 + struct cpufreq_frequency_table *freq_table; 223 225 224 226 pr_debug("acpi_cpufreq_cpu_init\n"); 225 227 ··· 254 254 } 255 255 256 256 /* alloc freq_table */ 257 - data->freq_table = kzalloc(sizeof(*data->freq_table) * 257 + freq_table = kzalloc(sizeof(*freq_table) * 258 258 (data->acpi_data.state_count + 1), 259 259 GFP_KERNEL); 260 - if (!data->freq_table) { 260 + if (!freq_table) { 261 261 result = -ENOMEM; 262 262 goto err_unreg; 263 263 } ··· 276 276 for (i = 0; i <= data->acpi_data.state_count; i++) 277 277 { 278 278 if (i < data->acpi_data.state_count) { 279 - data->freq_table[i].frequency = 279 + freq_table[i].frequency = 280 280 data->acpi_data.states[i].core_frequency * 1000; 281 281 } else { 282 - data->freq_table[i].frequency = CPUFREQ_TABLE_END; 282 + freq_table[i].frequency = CPUFREQ_TABLE_END; 283 283 } 284 284 } 285 285 286 - result = cpufreq_table_validate_and_show(policy, data->freq_table); 286 + result = cpufreq_table_validate_and_show(policy, freq_table); 287 287 if (result) { 288 288 goto err_freqfree; 289 289 } ··· 311 311 return (result); 312 312 313 313 err_freqfree: 314 - kfree(data->freq_table); 314 + kfree(freq_table); 315 315 err_unreg: 316 - acpi_processor_unregister_performance(&data->acpi_data, cpu); 316 + acpi_processor_unregister_performance(cpu); 317 317 err_free: 318 318 kfree(data); 319 319 acpi_io_data[cpu] = NULL; ··· 332 332 333 333 if (data) { 334 334 acpi_io_data[policy->cpu] = NULL; 335 - acpi_processor_unregister_performance(&data->acpi_data, 336 - policy->cpu); 335 + acpi_processor_unregister_performance(policy->cpu); 336 + kfree(policy->freq_table); 337 337 kfree(data); 338 338 } 339 339
+8 -10
drivers/cpufreq/integrator-cpufreq.c
··· 98 98 /* get current setting */ 99 99 cm_osc = __raw_readl(cm_base + INTEGRATOR_HDR_OSC_OFFSET); 100 100 101 - if (machine_is_integrator()) { 101 + if (machine_is_integrator()) 102 102 vco.s = (cm_osc >> 8) & 7; 103 - } else if (machine_is_cintegrator()) { 103 + else if (machine_is_cintegrator()) 104 104 vco.s = 1; 105 - } 106 105 vco.v = cm_osc & 255; 107 106 vco.r = 22; 108 107 freqs.old = icst_hz(&cclk_params, vco) / 1000; ··· 162 163 /* detect memory etc. */ 163 164 cm_osc = __raw_readl(cm_base + INTEGRATOR_HDR_OSC_OFFSET); 164 165 165 - if (machine_is_integrator()) { 166 + if (machine_is_integrator()) 166 167 vco.s = (cm_osc >> 8) & 7; 167 - } else { 168 + else 168 169 vco.s = 1; 169 - } 170 170 vco.v = cm_osc & 255; 171 171 vco.r = 22; 172 172 ··· 201 203 struct resource *res; 202 204 203 205 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 204 - if (!res) 206 + if (!res) 205 207 return -ENODEV; 206 208 207 209 cm_base = devm_ioremap(&pdev->dev, res->start, resource_size(res)); ··· 232 234 module_platform_driver_probe(integrator_cpufreq_driver, 233 235 integrator_cpufreq_probe); 234 236 235 - MODULE_AUTHOR ("Russell M. King"); 236 - MODULE_DESCRIPTION ("cpufreq driver for ARM Integrator CPUs"); 237 - MODULE_LICENSE ("GPL"); 237 + MODULE_AUTHOR("Russell M. King"); 238 + MODULE_DESCRIPTION("cpufreq driver for ARM Integrator CPUs"); 239 + MODULE_LICENSE("GPL");
+14 -6
drivers/cpufreq/intel_pstate.c
··· 484 484 } 485 485 /************************** sysfs end ************************/ 486 486 487 - static void intel_pstate_hwp_enable(void) 487 + static void intel_pstate_hwp_enable(struct cpudata *cpudata) 488 488 { 489 - hwp_active++; 490 489 pr_info("intel_pstate: HWP enabled\n"); 491 490 492 - wrmsrl( MSR_PM_ENABLE, 0x1); 491 + wrmsrl_on_cpu(cpudata->cpu, MSR_PM_ENABLE, 0x1); 493 492 } 494 493 495 494 static int byt_get_min_pstate(void) ··· 521 522 int32_t vid_fp; 522 523 u32 vid; 523 524 524 - val = pstate << 8; 525 + val = (u64)pstate << 8; 525 526 if (limits.no_turbo && !limits.turbo_disabled) 526 527 val |= (u64)1 << 32; 527 528 ··· 610 611 { 611 612 u64 val; 612 613 613 - val = pstate << 8; 614 + val = (u64)pstate << 8; 614 615 if (limits.no_turbo && !limits.turbo_disabled) 615 616 val |= (u64)1 << 32; 616 617 ··· 908 909 ICPU(0x4c, byt_params), 909 910 ICPU(0x4e, core_params), 910 911 ICPU(0x4f, core_params), 912 + ICPU(0x5e, core_params), 911 913 ICPU(0x56, core_params), 912 914 ICPU(0x57, knl_params), 913 915 {} ··· 933 933 cpu = all_cpu_data[cpunum]; 934 934 935 935 cpu->cpu = cpunum; 936 + 937 + if (hwp_active) 938 + intel_pstate_hwp_enable(cpu); 939 + 936 940 intel_pstate_get_cpu_pstates(cpu); 937 941 938 942 init_timer_deferrable(&cpu->timer); ··· 1174 1170 {1, "ORACLE", "X4270M3 ", PPC}, 1175 1171 {1, "ORACLE", "X4270M2 ", PPC}, 1176 1172 {1, "ORACLE", "X4170M2 ", PPC}, 1173 + {1, "ORACLE", "X4170 M3", PPC}, 1174 + {1, "ORACLE", "X4275 M3", PPC}, 1175 + {1, "ORACLE", "X6-2 ", PPC}, 1176 + {1, "ORACLE", "Sudbury ", PPC}, 1177 1177 {0, "", ""}, 1178 1178 }; 1179 1179 ··· 1254 1246 return -ENOMEM; 1255 1247 1256 1248 if (static_cpu_has_safe(X86_FEATURE_HWP) && !no_hwp) 1257 - intel_pstate_hwp_enable(); 1249 + hwp_active++; 1258 1250 1259 1251 if (!hwp_active && hwp_only) 1260 1252 goto out;
+527
drivers/cpufreq/mt8173-cpufreq.c
··· 1 + /* 2 + * Copyright (c) 2015 Linaro Ltd. 3 + * Author: Pi-Cheng Chen <pi-cheng.chen@linaro.org> 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + */ 14 + 15 + #include <linux/clk.h> 16 + #include <linux/cpu.h> 17 + #include <linux/cpu_cooling.h> 18 + #include <linux/cpufreq.h> 19 + #include <linux/cpumask.h> 20 + #include <linux/of.h> 21 + #include <linux/platform_device.h> 22 + #include <linux/pm_opp.h> 23 + #include <linux/regulator/consumer.h> 24 + #include <linux/slab.h> 25 + #include <linux/thermal.h> 26 + 27 + #define MIN_VOLT_SHIFT (100000) 28 + #define MAX_VOLT_SHIFT (200000) 29 + #define MAX_VOLT_LIMIT (1150000) 30 + #define VOLT_TOL (10000) 31 + 32 + /* 33 + * The struct mtk_cpu_dvfs_info holds necessary information for doing CPU DVFS 34 + * on each CPU power/clock domain of Mediatek SoCs. Each CPU cluster in 35 + * Mediatek SoCs has two voltage inputs, Vproc and Vsram. In some cases the two 36 + * voltage inputs need to be controlled under a hardware limitation: 37 + * 100mV < Vsram - Vproc < 200mV 38 + * 39 + * When scaling the clock frequency of a CPU clock domain, the clock source 40 + * needs to be switched to another stable PLL clock temporarily until 41 + * the original PLL becomes stable at target frequency. 
42 + */ 43 + struct mtk_cpu_dvfs_info { 44 + struct device *cpu_dev; 45 + struct regulator *proc_reg; 46 + struct regulator *sram_reg; 47 + struct clk *cpu_clk; 48 + struct clk *inter_clk; 49 + struct thermal_cooling_device *cdev; 50 + int intermediate_voltage; 51 + bool need_voltage_tracking; 52 + }; 53 + 54 + static int mtk_cpufreq_voltage_tracking(struct mtk_cpu_dvfs_info *info, 55 + int new_vproc) 56 + { 57 + struct regulator *proc_reg = info->proc_reg; 58 + struct regulator *sram_reg = info->sram_reg; 59 + int old_vproc, old_vsram, new_vsram, vsram, vproc, ret; 60 + 61 + old_vproc = regulator_get_voltage(proc_reg); 62 + old_vsram = regulator_get_voltage(sram_reg); 63 + /* Vsram should not exceed the maximum allowed voltage of SoC. */ 64 + new_vsram = min(new_vproc + MIN_VOLT_SHIFT, MAX_VOLT_LIMIT); 65 + 66 + if (old_vproc < new_vproc) { 67 + /* 68 + * When scaling up voltages, Vsram and Vproc scale up step 69 + * by step. At each step, set Vsram to (Vproc + 200mV) first, 70 + * then set Vproc to (Vsram - 100mV). 71 + * Keep doing it until Vsram and Vproc hit target voltages. 72 + */ 73 + do { 74 + old_vsram = regulator_get_voltage(sram_reg); 75 + old_vproc = regulator_get_voltage(proc_reg); 76 + 77 + vsram = min(new_vsram, old_vproc + MAX_VOLT_SHIFT); 78 + 79 + if (vsram + VOLT_TOL >= MAX_VOLT_LIMIT) { 80 + vsram = MAX_VOLT_LIMIT; 81 + 82 + /* 83 + * If the target Vsram hits the maximum voltage, 84 + * try to set the exact voltage value first. 
85 + */ 86 + ret = regulator_set_voltage(sram_reg, vsram, 87 + vsram); 88 + if (ret) 89 + ret = regulator_set_voltage(sram_reg, 90 + vsram - VOLT_TOL, 91 + vsram); 92 + 93 + vproc = new_vproc; 94 + } else { 95 + ret = regulator_set_voltage(sram_reg, vsram, 96 + vsram + VOLT_TOL); 97 + 98 + vproc = vsram - MIN_VOLT_SHIFT; 99 + } 100 + if (ret) 101 + return ret; 102 + 103 + ret = regulator_set_voltage(proc_reg, vproc, 104 + vproc + VOLT_TOL); 105 + if (ret) { 106 + regulator_set_voltage(sram_reg, old_vsram, 107 + old_vsram); 108 + return ret; 109 + } 110 + } while (vproc < new_vproc || vsram < new_vsram); 111 + } else if (old_vproc > new_vproc) { 112 + /* 113 + * When scaling down voltages, Vsram and Vproc scale down step 114 + * by step. At each step, set Vproc to (Vsram - 200mV) first, 115 + * then set Vproc to (Vproc + 100mV). 116 + * Keep doing it until Vsram and Vproc hit target voltages. 117 + */ 118 + do { 119 + old_vproc = regulator_get_voltage(proc_reg); 120 + old_vsram = regulator_get_voltage(sram_reg); 121 + 122 + vproc = max(new_vproc, old_vsram - MAX_VOLT_SHIFT); 123 + ret = regulator_set_voltage(proc_reg, vproc, 124 + vproc + VOLT_TOL); 125 + if (ret) 126 + return ret; 127 + 128 + if (vproc == new_vproc) 129 + vsram = new_vsram; 130 + else 131 + vsram = max(new_vsram, vproc + MIN_VOLT_SHIFT); 132 + 133 + if (vsram + VOLT_TOL >= MAX_VOLT_LIMIT) { 134 + vsram = MAX_VOLT_LIMIT; 135 + 136 + /* 137 + * If the target Vsram hits the maximum voltage, 138 + * try to set the exact voltage value first. 
139 + */ 140 + ret = regulator_set_voltage(sram_reg, vsram, 141 + vsram); 142 + if (ret) 143 + ret = regulator_set_voltage(sram_reg, 144 + vsram - VOLT_TOL, 145 + vsram); 146 + } else { 147 + ret = regulator_set_voltage(sram_reg, vsram, 148 + vsram + VOLT_TOL); 149 + } 150 + 151 + if (ret) { 152 + regulator_set_voltage(proc_reg, old_vproc, 153 + old_vproc); 154 + return ret; 155 + } 156 + } while (vproc > new_vproc + VOLT_TOL || 157 + vsram > new_vsram + VOLT_TOL); 158 + } 159 + 160 + return 0; 161 + } 162 + 163 + static int mtk_cpufreq_set_voltage(struct mtk_cpu_dvfs_info *info, int vproc) 164 + { 165 + if (info->need_voltage_tracking) 166 + return mtk_cpufreq_voltage_tracking(info, vproc); 167 + else 168 + return regulator_set_voltage(info->proc_reg, vproc, 169 + vproc + VOLT_TOL); 170 + } 171 + 172 + static int mtk_cpufreq_set_target(struct cpufreq_policy *policy, 173 + unsigned int index) 174 + { 175 + struct cpufreq_frequency_table *freq_table = policy->freq_table; 176 + struct clk *cpu_clk = policy->clk; 177 + struct clk *armpll = clk_get_parent(cpu_clk); 178 + struct mtk_cpu_dvfs_info *info = policy->driver_data; 179 + struct device *cpu_dev = info->cpu_dev; 180 + struct dev_pm_opp *opp; 181 + long freq_hz, old_freq_hz; 182 + int vproc, old_vproc, inter_vproc, target_vproc, ret; 183 + 184 + inter_vproc = info->intermediate_voltage; 185 + 186 + old_freq_hz = clk_get_rate(cpu_clk); 187 + old_vproc = regulator_get_voltage(info->proc_reg); 188 + 189 + freq_hz = freq_table[index].frequency * 1000; 190 + 191 + rcu_read_lock(); 192 + opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_hz); 193 + if (IS_ERR(opp)) { 194 + rcu_read_unlock(); 195 + pr_err("cpu%d: failed to find OPP for %ld\n", 196 + policy->cpu, freq_hz); 197 + return PTR_ERR(opp); 198 + } 199 + vproc = dev_pm_opp_get_voltage(opp); 200 + rcu_read_unlock(); 201 + 202 + /* 203 + * If the new voltage or the intermediate voltage is higher than the 204 + * current voltage, scale up voltage first. 
205 + */ 206 + target_vproc = (inter_vproc > vproc) ? inter_vproc : vproc; 207 + if (old_vproc < target_vproc) { 208 + ret = mtk_cpufreq_set_voltage(info, target_vproc); 209 + if (ret) { 210 + pr_err("cpu%d: failed to scale up voltage!\n", 211 + policy->cpu); 212 + mtk_cpufreq_set_voltage(info, old_vproc); 213 + return ret; 214 + } 215 + } 216 + 217 + /* Reparent the CPU clock to intermediate clock. */ 218 + ret = clk_set_parent(cpu_clk, info->inter_clk); 219 + if (ret) { 220 + pr_err("cpu%d: failed to re-parent cpu clock!\n", 221 + policy->cpu); 222 + mtk_cpufreq_set_voltage(info, old_vproc); 223 + WARN_ON(1); 224 + return ret; 225 + } 226 + 227 + /* Set the original PLL to target rate. */ 228 + ret = clk_set_rate(armpll, freq_hz); 229 + if (ret) { 230 + pr_err("cpu%d: failed to scale cpu clock rate!\n", 231 + policy->cpu); 232 + clk_set_parent(cpu_clk, armpll); 233 + mtk_cpufreq_set_voltage(info, old_vproc); 234 + return ret; 235 + } 236 + 237 + /* Set parent of CPU clock back to the original PLL. */ 238 + ret = clk_set_parent(cpu_clk, armpll); 239 + if (ret) { 240 + pr_err("cpu%d: failed to re-parent cpu clock!\n", 241 + policy->cpu); 242 + mtk_cpufreq_set_voltage(info, inter_vproc); 243 + WARN_ON(1); 244 + return ret; 245 + } 246 + 247 + /* 248 + * If the new voltage is lower than the intermediate voltage or the 249 + * original voltage, scale down to the new voltage. 
250 + */ 251 + if (vproc < inter_vproc || vproc < old_vproc) { 252 + ret = mtk_cpufreq_set_voltage(info, vproc); 253 + if (ret) { 254 + pr_err("cpu%d: failed to scale down voltage!\n", 255 + policy->cpu); 256 + clk_set_parent(cpu_clk, info->inter_clk); 257 + clk_set_rate(armpll, old_freq_hz); 258 + clk_set_parent(cpu_clk, armpll); 259 + return ret; 260 + } 261 + } 262 + 263 + return 0; 264 + } 265 + 266 + static void mtk_cpufreq_ready(struct cpufreq_policy *policy) 267 + { 268 + struct mtk_cpu_dvfs_info *info = policy->driver_data; 269 + struct device_node *np = of_node_get(info->cpu_dev->of_node); 270 + 271 + if (WARN_ON(!np)) 272 + return; 273 + 274 + if (of_find_property(np, "#cooling-cells", NULL)) { 275 + info->cdev = of_cpufreq_cooling_register(np, 276 + policy->related_cpus); 277 + 278 + if (IS_ERR(info->cdev)) { 279 + dev_err(info->cpu_dev, 280 + "running cpufreq without cooling device: %ld\n", 281 + PTR_ERR(info->cdev)); 282 + 283 + info->cdev = NULL; 284 + } 285 + } 286 + 287 + of_node_put(np); 288 + } 289 + 290 + static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu) 291 + { 292 + struct device *cpu_dev; 293 + struct regulator *proc_reg = ERR_PTR(-ENODEV); 294 + struct regulator *sram_reg = ERR_PTR(-ENODEV); 295 + struct clk *cpu_clk = ERR_PTR(-ENODEV); 296 + struct clk *inter_clk = ERR_PTR(-ENODEV); 297 + struct dev_pm_opp *opp; 298 + unsigned long rate; 299 + int ret; 300 + 301 + cpu_dev = get_cpu_device(cpu); 302 + if (!cpu_dev) { 303 + pr_err("failed to get cpu%d device\n", cpu); 304 + return -ENODEV; 305 + } 306 + 307 + cpu_clk = clk_get(cpu_dev, "cpu"); 308 + if (IS_ERR(cpu_clk)) { 309 + if (PTR_ERR(cpu_clk) == -EPROBE_DEFER) 310 + pr_warn("cpu clk for cpu%d not ready, retry.\n", cpu); 311 + else 312 + pr_err("failed to get cpu clk for cpu%d\n", cpu); 313 + 314 + ret = PTR_ERR(cpu_clk); 315 + return ret; 316 + } 317 + 318 + inter_clk = clk_get(cpu_dev, "intermediate"); 319 + if (IS_ERR(inter_clk)) { 320 + if (PTR_ERR(inter_clk) 
== -EPROBE_DEFER) 321 + pr_warn("intermediate clk for cpu%d not ready, retry.\n", 322 + cpu); 323 + else 324 + pr_err("failed to get intermediate clk for cpu%d\n", 325 + cpu); 326 + 327 + ret = PTR_ERR(inter_clk); 328 + goto out_free_resources; 329 + } 330 + 331 + proc_reg = regulator_get_exclusive(cpu_dev, "proc"); 332 + if (IS_ERR(proc_reg)) { 333 + if (PTR_ERR(proc_reg) == -EPROBE_DEFER) 334 + pr_warn("proc regulator for cpu%d not ready, retry.\n", 335 + cpu); 336 + else 337 + pr_err("failed to get proc regulator for cpu%d\n", 338 + cpu); 339 + 340 + ret = PTR_ERR(proc_reg); 341 + goto out_free_resources; 342 + } 343 + 344 + /* Both presence and absence of sram regulator are valid cases. */ 345 + sram_reg = regulator_get_exclusive(cpu_dev, "sram"); 346 + 347 + ret = of_init_opp_table(cpu_dev); 348 + if (ret) { 349 + pr_warn("no OPP table for cpu%d\n", cpu); 350 + goto out_free_resources; 351 + } 352 + 353 + /* Search a safe voltage for intermediate frequency. */ 354 + rate = clk_get_rate(inter_clk); 355 + rcu_read_lock(); 356 + opp = dev_pm_opp_find_freq_ceil(cpu_dev, &rate); 357 + if (IS_ERR(opp)) { 358 + rcu_read_unlock(); 359 + pr_err("failed to get intermediate opp for cpu%d\n", cpu); 360 + ret = PTR_ERR(opp); 361 + goto out_free_opp_table; 362 + } 363 + info->intermediate_voltage = dev_pm_opp_get_voltage(opp); 364 + rcu_read_unlock(); 365 + 366 + info->cpu_dev = cpu_dev; 367 + info->proc_reg = proc_reg; 368 + info->sram_reg = IS_ERR(sram_reg) ? NULL : sram_reg; 369 + info->cpu_clk = cpu_clk; 370 + info->inter_clk = inter_clk; 371 + 372 + /* 373 + * If SRAM regulator is present, software "voltage tracking" is needed 374 + * for this CPU power domain. 
375 + */ 376 + info->need_voltage_tracking = !IS_ERR(sram_reg); 377 + 378 + return 0; 379 + 380 + out_free_opp_table: 381 + of_free_opp_table(cpu_dev); 382 + 383 + out_free_resources: 384 + if (!IS_ERR(proc_reg)) 385 + regulator_put(proc_reg); 386 + if (!IS_ERR(sram_reg)) 387 + regulator_put(sram_reg); 388 + if (!IS_ERR(cpu_clk)) 389 + clk_put(cpu_clk); 390 + if (!IS_ERR(inter_clk)) 391 + clk_put(inter_clk); 392 + 393 + return ret; 394 + } 395 + 396 + static void mtk_cpu_dvfs_info_release(struct mtk_cpu_dvfs_info *info) 397 + { 398 + if (!IS_ERR(info->proc_reg)) 399 + regulator_put(info->proc_reg); 400 + if (!IS_ERR(info->sram_reg)) 401 + regulator_put(info->sram_reg); 402 + if (!IS_ERR(info->cpu_clk)) 403 + clk_put(info->cpu_clk); 404 + if (!IS_ERR(info->inter_clk)) 405 + clk_put(info->inter_clk); 406 + 407 + of_free_opp_table(info->cpu_dev); 408 + } 409 + 410 + static int mtk_cpufreq_init(struct cpufreq_policy *policy) 411 + { 412 + struct mtk_cpu_dvfs_info *info; 413 + struct cpufreq_frequency_table *freq_table; 414 + int ret; 415 + 416 + info = kzalloc(sizeof(*info), GFP_KERNEL); 417 + if (!info) 418 + return -ENOMEM; 419 + 420 + ret = mtk_cpu_dvfs_info_init(info, policy->cpu); 421 + if (ret) { 422 + pr_err("%s failed to initialize dvfs info for cpu%d\n", 423 + __func__, policy->cpu); 424 + goto out_free_dvfs_info; 425 + } 426 + 427 + ret = dev_pm_opp_init_cpufreq_table(info->cpu_dev, &freq_table); 428 + if (ret) { 429 + pr_err("failed to init cpufreq table for cpu%d: %d\n", 430 + policy->cpu, ret); 431 + goto out_release_dvfs_info; 432 + } 433 + 434 + ret = cpufreq_table_validate_and_show(policy, freq_table); 435 + if (ret) { 436 + pr_err("%s: invalid frequency table: %d\n", __func__, ret); 437 + goto out_free_cpufreq_table; 438 + } 439 + 440 + /* CPUs in the same cluster share a clock and power domain. 
*/ 441 + cpumask_copy(policy->cpus, &cpu_topology[policy->cpu].core_sibling); 442 + policy->driver_data = info; 443 + policy->clk = info->cpu_clk; 444 + 445 + return 0; 446 + 447 + out_free_cpufreq_table: 448 + dev_pm_opp_free_cpufreq_table(info->cpu_dev, &freq_table); 449 + 450 + out_release_dvfs_info: 451 + mtk_cpu_dvfs_info_release(info); 452 + 453 + out_free_dvfs_info: 454 + kfree(info); 455 + 456 + return ret; 457 + } 458 + 459 + static int mtk_cpufreq_exit(struct cpufreq_policy *policy) 460 + { 461 + struct mtk_cpu_dvfs_info *info = policy->driver_data; 462 + 463 + cpufreq_cooling_unregister(info->cdev); 464 + dev_pm_opp_free_cpufreq_table(info->cpu_dev, &policy->freq_table); 465 + mtk_cpu_dvfs_info_release(info); 466 + kfree(info); 467 + 468 + return 0; 469 + } 470 + 471 + static struct cpufreq_driver mt8173_cpufreq_driver = { 472 + .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK, 473 + .verify = cpufreq_generic_frequency_table_verify, 474 + .target_index = mtk_cpufreq_set_target, 475 + .get = cpufreq_generic_get, 476 + .init = mtk_cpufreq_init, 477 + .exit = mtk_cpufreq_exit, 478 + .ready = mtk_cpufreq_ready, 479 + .name = "mtk-cpufreq", 480 + .attr = cpufreq_generic_attr, 481 + }; 482 + 483 + static int mt8173_cpufreq_probe(struct platform_device *pdev) 484 + { 485 + int ret; 486 + 487 + ret = cpufreq_register_driver(&mt8173_cpufreq_driver); 488 + if (ret) 489 + pr_err("failed to register mtk cpufreq driver\n"); 490 + 491 + return ret; 492 + } 493 + 494 + static struct platform_driver mt8173_cpufreq_platdrv = { 495 + .driver = { 496 + .name = "mt8173-cpufreq", 497 + }, 498 + .probe = mt8173_cpufreq_probe, 499 + }; 500 + 501 + static int mt8173_cpufreq_driver_init(void) 502 + { 503 + struct platform_device *pdev; 504 + int err; 505 + 506 + if (!of_machine_is_compatible("mediatek,mt8173")) 507 + return -ENODEV; 508 + 509 + err = platform_driver_register(&mt8173_cpufreq_platdrv); 510 + if (err) 511 + return err; 512 + 513 + /* 514 + * Since there's 
no place to hold device registration code and no 515 + * device tree based way to match cpufreq driver yet, both the driver 516 + * and the device registration codes are put here to handle defer 517 + * probing. 518 + */ 519 + pdev = platform_device_register_simple("mt8173-cpufreq", -1, NULL, 0); 520 + if (IS_ERR(pdev)) { 521 + pr_err("failed to register mtk-cpufreq platform device\n"); 522 + return PTR_ERR(pdev); 523 + } 524 + 525 + return 0; 526 + } 527 + device_initcall(mt8173_cpufreq_driver_init);
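The mt8173 driver's target path above follows a fixed ordering rule: the supply voltage is raised before the frequency goes up, and lowered only after the frequency has come down, so the rail always meets the needs of whatever frequency is currently running. A stand-alone sketch of just that ordering rule (the struct and function names are illustrative, not the driver's):

```c
#include <assert.h>

/* Hypothetical per-domain state; field names are illustrative only. */
struct dvfs_state {
	unsigned long freq_hz;
	int vproc_uv;
};

/*
 * Model of the ordering the driver enforces: scale the voltage up
 * before a frequency increase, and scale it down only after a
 * frequency decrease, so the voltage is never below what the running
 * frequency requires.
 */
static void dvfs_transition(struct dvfs_state *s,
			    unsigned long new_freq, int new_vproc)
{
	if (new_vproc > s->vproc_uv)
		s->vproc_uv = new_vproc;	/* raise voltage first */

	s->freq_hz = new_freq;			/* then switch frequency */

	if (new_vproc < s->vproc_uv)
		s->vproc_uv = new_vproc;	/* lower voltage last */
}
```

In the real driver the frequency switch itself is more involved (the CPU is reparented to the intermediate clock while ARMPLL is retuned), but the voltage bracketing around it is the same.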
+2 -2
drivers/cpufreq/powernow-k7.c
··· 421 421 return 0; 422 422 423 423 err2: 424 - acpi_processor_unregister_performance(acpi_processor_perf, 0); 424 + acpi_processor_unregister_performance(0); 425 425 err1: 426 426 free_cpumask_var(acpi_processor_perf->shared_cpu_map); 427 427 err05: ··· 661 661 { 662 662 #ifdef CONFIG_X86_POWERNOW_K7_ACPI 663 663 if (acpi_processor_perf) { 664 - acpi_processor_unregister_performance(acpi_processor_perf, 0); 664 + acpi_processor_unregister_performance(0); 665 665 free_cpumask_var(acpi_processor_perf->shared_cpu_map); 666 666 kfree(acpi_processor_perf); 667 667 }
+2 -3
drivers/cpufreq/powernow-k8.c
··· 795 795 kfree(powernow_table); 796 796 797 797 err_out: 798 - acpi_processor_unregister_performance(&data->acpi_data, data->cpu); 798 + acpi_processor_unregister_performance(data->cpu); 799 799 800 800 /* data->acpi_data.state_count informs us at ->exit() 801 801 * whether ACPI was used */ ··· 863 863 static void powernow_k8_cpu_exit_acpi(struct powernow_k8_data *data) 864 864 { 865 865 if (data->acpi_data.state_count) 866 - acpi_processor_unregister_performance(&data->acpi_data, 867 - data->cpu); 866 + acpi_processor_unregister_performance(data->cpu); 868 867 free_cpumask_var(data->acpi_data.shared_cpu_map); 869 868 } 870 869
+184 -15
drivers/cpufreq/powernv-cpufreq.c
··· 27 27 #include <linux/smp.h> 28 28 #include <linux/of.h> 29 29 #include <linux/reboot.h> 30 + #include <linux/slab.h> 30 31 31 32 #include <asm/cputhreads.h> 32 33 #include <asm/firmware.h> 33 34 #include <asm/reg.h> 34 35 #include <asm/smp.h> /* Required for cpu_sibling_mask() in UP configs */ 36 + #include <asm/opal.h> 35 37 36 38 #define POWERNV_MAX_PSTATES 256 37 39 #define PMSR_PSAFE_ENABLE (1UL << 30) 38 40 #define PMSR_SPR_EM_DISABLE (1UL << 31) 39 41 #define PMSR_MAX(x) ((x >> 32) & 0xFF) 40 - #define PMSR_LP(x) ((x >> 48) & 0xFF) 41 42 42 43 static struct cpufreq_frequency_table powernv_freqs[POWERNV_MAX_PSTATES+1]; 43 - static bool rebooting, throttled; 44 + static bool rebooting, throttled, occ_reset; 45 + 46 + static struct chip { 47 + unsigned int id; 48 + bool throttled; 49 + cpumask_t mask; 50 + struct work_struct throttle; 51 + bool restore; 52 + } *chips; 53 + 54 + static int nr_chips; 44 55 45 56 /* 46 57 * Note: The set of pstates consists of contiguous integers, the ··· 309 298 return powernv_pstate_info.max - powernv_pstate_info.nominal; 310 299 } 311 300 312 - static void powernv_cpufreq_throttle_check(unsigned int cpu) 301 + static void powernv_cpufreq_throttle_check(void *data) 313 302 { 303 + unsigned int cpu = smp_processor_id(); 314 304 unsigned long pmsr; 315 - int pmsr_pmax, pmsr_lp; 305 + int pmsr_pmax, i; 316 306 317 307 pmsr = get_pmspr(SPRN_PMSR); 308 + 309 + for (i = 0; i < nr_chips; i++) 310 + if (chips[i].id == cpu_to_chip_id(cpu)) 311 + break; 318 312 319 313 /* Check for Pmax Capping */ 320 314 pmsr_pmax = (s8)PMSR_MAX(pmsr); 321 315 if (pmsr_pmax != powernv_pstate_info.max) { 322 - throttled = true; 323 - pr_info("CPU %d Pmax is reduced to %d\n", cpu, pmsr_pmax); 324 - pr_info("Max allowed Pstate is capped\n"); 316 + if (chips[i].throttled) 317 + goto next; 318 + chips[i].throttled = true; 319 + pr_info("CPU %d on Chip %u has Pmax reduced to %d\n", cpu, 320 + chips[i].id, pmsr_pmax); 321 + } else if (chips[i].throttled) { 
322 + chips[i].throttled = false; 323 + pr_info("CPU %d on Chip %u has Pmax restored to %d\n", cpu, 324 + chips[i].id, pmsr_pmax); 325 325 } 326 326 327 - /* 328 - * Check for Psafe by reading LocalPstate 329 - * or check if Psafe_mode_active is set in PMSR. 330 - */ 331 - pmsr_lp = (s8)PMSR_LP(pmsr); 332 - if ((pmsr_lp < powernv_pstate_info.min) || 333 - (pmsr & PMSR_PSAFE_ENABLE)) { 327 + /* Check if Psafe_mode_active is set in PMSR. */ 328 + next: 329 + if (pmsr & PMSR_PSAFE_ENABLE) { 334 330 throttled = true; 335 331 pr_info("Pstate set to safe frequency\n"); 336 332 } ··· 368 350 return 0; 369 351 370 352 if (!throttled) 371 - powernv_cpufreq_throttle_check(smp_processor_id()); 353 + powernv_cpufreq_throttle_check(NULL); 372 354 373 355 freq_data.pstate_id = powernv_freqs[new_index].driver_data; 374 356 ··· 413 395 .notifier_call = powernv_cpufreq_reboot_notifier, 414 396 }; 415 397 398 + void powernv_cpufreq_work_fn(struct work_struct *work) 399 + { 400 + struct chip *chip = container_of(work, struct chip, throttle); 401 + unsigned int cpu; 402 + cpumask_var_t mask; 403 + 404 + smp_call_function_any(&chip->mask, 405 + powernv_cpufreq_throttle_check, NULL, 0); 406 + 407 + if (!chip->restore) 408 + return; 409 + 410 + chip->restore = false; 411 + cpumask_copy(mask, &chip->mask); 412 + for_each_cpu_and(cpu, mask, cpu_online_mask) { 413 + int index, tcpu; 414 + struct cpufreq_policy policy; 415 + 416 + cpufreq_get_policy(&policy, cpu); 417 + cpufreq_frequency_table_target(&policy, policy.freq_table, 418 + policy.cur, 419 + CPUFREQ_RELATION_C, &index); 420 + powernv_cpufreq_target_index(&policy, index); 421 + for_each_cpu(tcpu, policy.cpus) 422 + cpumask_clear_cpu(tcpu, mask); 423 + } 424 + } 425 + 426 + static char throttle_reason[][30] = { 427 + "No throttling", 428 + "Power Cap", 429 + "Processor Over Temperature", 430 + "Power Supply Failure", 431 + "Over Current", 432 + "OCC Reset" 433 + }; 434 + 435 + static int powernv_cpufreq_occ_msg(struct notifier_block 
*nb, 436 + unsigned long msg_type, void *_msg) 437 + { 438 + struct opal_msg *msg = _msg; 439 + struct opal_occ_msg omsg; 440 + int i; 441 + 442 + if (msg_type != OPAL_MSG_OCC) 443 + return 0; 444 + 445 + omsg.type = be64_to_cpu(msg->params[0]); 446 + 447 + switch (omsg.type) { 448 + case OCC_RESET: 449 + occ_reset = true; 450 + pr_info("OCC (On Chip Controller - enforces hard thermal/power limits) Resetting\n"); 451 + /* 452 + * powernv_cpufreq_throttle_check() is called in 453 + * target() callback which can detect the throttle state 454 + * for governors like ondemand. 455 + * But static governors will not call target() often thus 456 + * report throttling here. 457 + */ 458 + if (!throttled) { 459 + throttled = true; 460 + pr_crit("CPU frequency is throttled for duration\n"); 461 + } 462 + 463 + break; 464 + case OCC_LOAD: 465 + pr_info("OCC Loading, CPU frequency is throttled until OCC is started\n"); 466 + break; 467 + case OCC_THROTTLE: 468 + omsg.chip = be64_to_cpu(msg->params[1]); 469 + omsg.throttle_status = be64_to_cpu(msg->params[2]); 470 + 471 + if (occ_reset) { 472 + occ_reset = false; 473 + throttled = false; 474 + pr_info("OCC Active, CPU frequency is no longer throttled\n"); 475 + 476 + for (i = 0; i < nr_chips; i++) { 477 + chips[i].restore = true; 478 + schedule_work(&chips[i].throttle); 479 + } 480 + 481 + return 0; 482 + } 483 + 484 + if (omsg.throttle_status && 485 + omsg.throttle_status <= OCC_MAX_THROTTLE_STATUS) 486 + pr_info("OCC: Chip %u Pmax reduced due to %s\n", 487 + (unsigned int)omsg.chip, 488 + throttle_reason[omsg.throttle_status]); 489 + else if (!omsg.throttle_status) 490 + pr_info("OCC: Chip %u %s\n", (unsigned int)omsg.chip, 491 + throttle_reason[omsg.throttle_status]); 492 + else 493 + return 0; 494 + 495 + for (i = 0; i < nr_chips; i++) 496 + if (chips[i].id == omsg.chip) { 497 + if (!omsg.throttle_status) 498 + chips[i].restore = true; 499 + schedule_work(&chips[i].throttle); 500 + } 501 + } 502 + return 0; 503 + } 504 + 505 
+ static struct notifier_block powernv_cpufreq_opal_nb = { 506 + .notifier_call = powernv_cpufreq_occ_msg, 507 + .next = NULL, 508 + .priority = 0, 509 + }; 510 + 416 511 static void powernv_cpufreq_stop_cpu(struct cpufreq_policy *policy) 417 512 { 418 513 struct powernv_smp_call_data freq_data; ··· 545 414 .attr = powernv_cpu_freq_attr, 546 415 }; 547 416 417 + static int init_chip_info(void) 418 + { 419 + unsigned int chip[256]; 420 + unsigned int cpu, i; 421 + unsigned int prev_chip_id = UINT_MAX; 422 + 423 + for_each_possible_cpu(cpu) { 424 + unsigned int id = cpu_to_chip_id(cpu); 425 + 426 + if (prev_chip_id != id) { 427 + prev_chip_id = id; 428 + chip[nr_chips++] = id; 429 + } 430 + } 431 + 432 + chips = kmalloc_array(nr_chips, sizeof(struct chip), GFP_KERNEL); 433 + if (!chips) 434 + return -ENOMEM; 435 + 436 + for (i = 0; i < nr_chips; i++) { 437 + chips[i].id = chip[i]; 438 + chips[i].throttled = false; 439 + cpumask_copy(&chips[i].mask, cpumask_of_node(chip[i])); 440 + INIT_WORK(&chips[i].throttle, powernv_cpufreq_work_fn); 441 + chips[i].restore = false; 442 + } 443 + 444 + return 0; 445 + } 446 + 548 447 static int __init powernv_cpufreq_init(void) 549 448 { 550 449 int rc = 0; ··· 590 429 return rc; 591 430 } 592 431 432 + /* Populate chip info */ 433 + rc = init_chip_info(); 434 + if (rc) 435 + return rc; 436 + 593 437 register_reboot_notifier(&powernv_cpufreq_reboot_nb); 438 + opal_message_notifier_register(OPAL_MSG_OCC, &powernv_cpufreq_opal_nb); 594 439 return cpufreq_register_driver(&powernv_cpufreq_driver); 595 440 } 596 441 module_init(powernv_cpufreq_init); ··· 604 437 static void __exit powernv_cpufreq_exit(void) 605 438 { 606 439 unregister_reboot_notifier(&powernv_cpufreq_reboot_nb); 440 + opal_message_notifier_unregister(OPAL_MSG_OCC, 441 + &powernv_cpufreq_opal_nb); 607 442 cpufreq_unregister_driver(&powernv_cpufreq_driver); 608 443 } 609 444 module_exit(powernv_cpufreq_exit);
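The `init_chip_info()` helper added above discovers the set of chips by walking the possible CPUs in order and recording each chip id the first time it changes, which relies on CPUs of the same chip being enumerated contiguously. A user-space sketch of that discovery pass (array-based stand-ins for the CPU iteration):

```c
#include <assert.h>

#define MAX_CHIPS 256

/*
 * Sketch of the chip-discovery loop in init_chip_info(): record a chip
 * id the first time it differs from the previous CPU's.  Like the
 * kernel code, this assumes CPUs sharing a chip appear contiguously in
 * enumeration order.
 */
static int collect_chip_ids(const unsigned int *cpu_to_chip, int ncpus,
			    unsigned int *chip_ids)
{
	unsigned int prev = (unsigned int)-1;	/* mirrors UINT_MAX init */
	int nr_chips = 0, cpu;

	for (cpu = 0; cpu < ncpus; cpu++) {
		if (cpu_to_chip[cpu] != prev) {
			prev = cpu_to_chip[cpu];
			chip_ids[nr_chips++] = prev;
		}
	}
	return nr_chips;
}
```

Each discovered chip then gets its own throttle workqueue item, so OCC throttle notifications can be handled per chip rather than globally.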
+2 -2
drivers/cpufreq/ppc_cbe_cpufreq_pmi.c
··· 97 97 struct cpufreq_frequency_table *cbe_freqs; 98 98 u8 node; 99 99 100 - /* Should this really be called for CPUFREQ_ADJUST, CPUFREQ_INCOMPATIBLE 101 - * and CPUFREQ_NOTIFY policy events?) 100 + /* Should this really be called for CPUFREQ_ADJUST and CPUFREQ_NOTIFY 101 + * policy events?) 102 102 */ 103 103 if (event == CPUFREQ_START) 104 104 return 0;
+1 -3
drivers/cpufreq/sfi-cpufreq.c
··· 45 45 pentry = (struct sfi_freq_table_entry *)sb->pentry; 46 46 totallen = num_freq_table_entries * sizeof(*pentry); 47 47 48 - sfi_cpufreq_array = kzalloc(totallen, GFP_KERNEL); 48 + sfi_cpufreq_array = kmemdup(pentry, totallen, GFP_KERNEL); 49 49 if (!sfi_cpufreq_array) 50 50 return -ENOMEM; 51 - 52 - memcpy(sfi_cpufreq_array, pentry, totallen); 53 51 54 52 return 0; 55 53 }
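The sfi-cpufreq change above is a pure simplification: `kmemdup()` allocates and copies in one call, replacing the separate `kzalloc()` plus `memcpy()` pair. A user-space analogue of what `kmemdup()` does (the helper name here is ours, not a libc function):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* User-space analogue of kmemdup(): allocate len bytes and copy src
 * into them, returning NULL on allocation failure. */
static void *memdup(const void *src, size_t len)
{
	void *p = malloc(len);

	if (p)
		memcpy(p, src, len);
	return p;
}
```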
+4 -5
drivers/cpufreq/speedstep-lib.c
··· 386 386 unsigned int prev_speed; 387 387 unsigned int ret = 0; 388 388 unsigned long flags; 389 - struct timeval tv1, tv2; 389 + ktime_t tv1, tv2; 390 390 391 391 if ((!processor) || (!low_speed) || (!high_speed) || (!set_state)) 392 392 return -EINVAL; ··· 415 415 416 416 /* start latency measurement */ 417 417 if (transition_latency) 418 - do_gettimeofday(&tv1); 418 + tv1 = ktime_get(); 419 419 420 420 /* switch to high state */ 421 421 set_state(SPEEDSTEP_HIGH); 422 422 423 423 /* end latency measurement */ 424 424 if (transition_latency) 425 - do_gettimeofday(&tv2); 425 + tv2 = ktime_get(); 426 426 427 427 *high_speed = speedstep_get_frequency(processor); 428 428 if (!*high_speed) { ··· 442 442 set_state(SPEEDSTEP_LOW); 443 443 444 444 if (transition_latency) { 445 - *transition_latency = (tv2.tv_sec - tv1.tv_sec) * USEC_PER_SEC + 446 - tv2.tv_usec - tv1.tv_usec; 445 + *transition_latency = ktime_to_us(ktime_sub(tv2, tv1)); 447 446 pr_debug("transition latency is %u uSec\n", *transition_latency); 448 447 449 448 /* convert uSec to nSec and add 20% for safety reasons */
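The speedstep-lib change swaps `do_gettimeofday()` for `ktime_get()`, i.e. a monotonic clock that cannot jump if wall time is adjusted mid-measurement, and the latency arithmetic collapses into a single `ktime_to_us(ktime_sub(...))`. A user-space sketch of the same interval computation using `struct timespec` (as `clock_gettime(CLOCK_MONOTONIC, ...)` would fill in):

```c
#include <assert.h>
#include <time.h>

/* Elapsed microseconds between two monotonic timestamps; mirrors what
 * ktime_to_us(ktime_sub(tv2, tv1)) computes in the kernel change. */
static long elapsed_us(const struct timespec *t1, const struct timespec *t2)
{
	return (t2->tv_sec - t1->tv_sec) * 1000000L +
	       (t2->tv_nsec - t1->tv_nsec) / 1000L;
}
```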
+3 -5
drivers/cpuidle/coupled.c
··· 176 176 177 177 /** 178 178 * cpuidle_state_is_coupled - check if a state is part of a coupled set 179 - * @dev: struct cpuidle_device for the current cpu 180 179 * @drv: struct cpuidle_driver for the platform 181 180 * @state: index of the target state in drv->states 182 181 * 183 182 * Returns true if the target state is coupled with cpus besides this one 184 183 */ 185 - bool cpuidle_state_is_coupled(struct cpuidle_device *dev, 186 - struct cpuidle_driver *drv, int state) 184 + bool cpuidle_state_is_coupled(struct cpuidle_driver *drv, int state) 187 185 { 188 186 return drv->states[state].flags & CPUIDLE_FLAG_COUPLED; 189 187 } ··· 471 473 return entered_state; 472 474 } 473 475 entered_state = cpuidle_enter_state(dev, drv, 474 - dev->safe_state_index); 476 + drv->safe_state_index); 475 477 local_irq_disable(); 476 478 } 477 479 ··· 519 521 } 520 522 521 523 entered_state = cpuidle_enter_state(dev, drv, 522 - dev->safe_state_index); 524 + drv->safe_state_index); 523 525 local_irq_disable(); 524 526 } 525 527
+2 -2
drivers/cpuidle/cpuidle.c
··· 214 214 tick_broadcast_exit(); 215 215 } 216 216 217 - if (!cpuidle_state_is_coupled(dev, drv, entered_state)) 217 + if (!cpuidle_state_is_coupled(drv, entered_state)) 218 218 local_irq_enable(); 219 219 220 220 diff = ktime_to_us(ktime_sub(time_end, time_start)); ··· 263 263 int cpuidle_enter(struct cpuidle_driver *drv, struct cpuidle_device *dev, 264 264 int index) 265 265 { 266 - if (cpuidle_state_is_coupled(dev, drv, index)) 266 + if (cpuidle_state_is_coupled(drv, index)) 267 267 return cpuidle_enter_state_coupled(dev, drv, index); 268 268 return cpuidle_enter_state(dev, drv, index); 269 269 }
+3 -4
drivers/cpuidle/cpuidle.h
··· 34 34 extern void cpuidle_remove_sysfs(struct cpuidle_device *dev); 35 35 36 36 #ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED 37 - bool cpuidle_state_is_coupled(struct cpuidle_device *dev, 38 - struct cpuidle_driver *drv, int state); 37 + bool cpuidle_state_is_coupled(struct cpuidle_driver *drv, int state); 39 38 int cpuidle_enter_state_coupled(struct cpuidle_device *dev, 40 39 struct cpuidle_driver *drv, int next_state); 41 40 int cpuidle_coupled_register_device(struct cpuidle_device *dev); 42 41 void cpuidle_coupled_unregister_device(struct cpuidle_device *dev); 43 42 #else 44 - static inline bool cpuidle_state_is_coupled(struct cpuidle_device *dev, 45 - struct cpuidle_driver *drv, int state) 43 + static inline 44 + bool cpuidle_state_is_coupled(struct cpuidle_driver *drv, int state) 46 45 { 47 46 return false; 48 47 }
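The cpuidle.h hunk keeps callers compilable whether or not `CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED` is set by pairing the real declaration with a `static inline` stub that returns a safe default. A minimal stand-alone rendering of that pattern (`FEATURE_COUPLED` is an illustrative stand-in for the Kconfig symbol):

```c
#include <assert.h>
#include <stdbool.h>

#ifdef FEATURE_COUPLED
bool state_is_coupled(int state);	/* real implementation elsewhere */
#else
/* Stub compiled when the feature is configured out: callers need no
 * #ifdefs of their own, and the compiler can fold the branch away. */
static inline bool state_is_coupled(int state)
{
	(void)state;
	return false;
}
#endif
```

This is why the signature cleanup in the commit (dropping the unused `dev` argument) had to touch both branches of the `#ifdef`: the stub and the real prototype must stay in sync.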
+163 -7
drivers/devfreq/event/exynos-ppmu.c
··· 1 1 /* 2 2 * exynos_ppmu.c - EXYNOS PPMU (Platform Performance Monitoring Unit) support 3 3 * 4 - * Copyright (c) 2014 Samsung Electronics Co., Ltd. 4 + * Copyright (c) 2014-2015 Samsung Electronics Co., Ltd. 5 5 * Author : Chanwoo Choi <cw00.choi@samsung.com> 6 6 * 7 7 * This program is free software; you can redistribute it and/or modify ··· 82 82 PPMU_EVENT(mscl), 83 83 PPMU_EVENT(fimd0x), 84 84 PPMU_EVENT(fimd1x), 85 + 86 + /* Only for Exynos5433 SoCs */ 87 + PPMU_EVENT(d0-cpu), 88 + PPMU_EVENT(d0-general), 89 + PPMU_EVENT(d0-rt), 90 + PPMU_EVENT(d1-cpu), 91 + PPMU_EVENT(d1-general), 92 + PPMU_EVENT(d1-rt), 93 + 85 94 { /* sentinel */ }, 86 95 }; 87 96 ··· 105 96 return -EINVAL; 106 97 } 107 98 99 + /* 100 + * The devfreq-event ops structure for PPMU v1.1 101 + */ 108 102 static int exynos_ppmu_disable(struct devfreq_event_dev *edev) 109 103 { 110 104 struct exynos_ppmu *info = devfreq_event_get_drvdata(edev); ··· 212 200 .get_event = exynos_ppmu_get_event, 213 201 }; 214 202 203 + /* 204 + * The devfreq-event ops structure for PPMU v2.0 205 + */ 206 + static int exynos_ppmu_v2_disable(struct devfreq_event_dev *edev) 207 + { 208 + struct exynos_ppmu *info = devfreq_event_get_drvdata(edev); 209 + u32 pmnc, clear; 210 + 211 + /* Disable all counters */ 212 + clear = (PPMU_CCNT_MASK | PPMU_PMCNT0_MASK | PPMU_PMCNT1_MASK 213 + | PPMU_PMCNT2_MASK | PPMU_PMCNT3_MASK); 214 + 215 + __raw_writel(clear, info->ppmu.base + PPMU_V2_FLAG); 216 + __raw_writel(clear, info->ppmu.base + PPMU_V2_INTENC); 217 + __raw_writel(clear, info->ppmu.base + PPMU_V2_CNTENC); 218 + __raw_writel(clear, info->ppmu.base + PPMU_V2_CNT_RESET); 219 + 220 + __raw_writel(0x0, info->ppmu.base + PPMU_V2_CIG_CFG0); 221 + __raw_writel(0x0, info->ppmu.base + PPMU_V2_CIG_CFG1); 222 + __raw_writel(0x0, info->ppmu.base + PPMU_V2_CIG_CFG2); 223 + __raw_writel(0x0, info->ppmu.base + PPMU_V2_CIG_RESULT); 224 + __raw_writel(0x0, info->ppmu.base + PPMU_V2_CNT_AUTO); 225 + __raw_writel(0x0, info->ppmu.base + 
PPMU_V2_CH_EV0_TYPE); 226 + __raw_writel(0x0, info->ppmu.base + PPMU_V2_CH_EV1_TYPE); 227 + __raw_writel(0x0, info->ppmu.base + PPMU_V2_CH_EV2_TYPE); 228 + __raw_writel(0x0, info->ppmu.base + PPMU_V2_CH_EV3_TYPE); 229 + __raw_writel(0x0, info->ppmu.base + PPMU_V2_SM_ID_V); 230 + __raw_writel(0x0, info->ppmu.base + PPMU_V2_SM_ID_A); 231 + __raw_writel(0x0, info->ppmu.base + PPMU_V2_SM_OTHERS_V); 232 + __raw_writel(0x0, info->ppmu.base + PPMU_V2_SM_OTHERS_A); 233 + __raw_writel(0x0, info->ppmu.base + PPMU_V2_INTERRUPT_RESET); 234 + 235 + /* Disable PPMU */ 236 + pmnc = __raw_readl(info->ppmu.base + PPMU_V2_PMNC); 237 + pmnc &= ~PPMU_PMNC_ENABLE_MASK; 238 + __raw_writel(pmnc, info->ppmu.base + PPMU_V2_PMNC); 239 + 240 + return 0; 241 + } 242 + 243 + static int exynos_ppmu_v2_set_event(struct devfreq_event_dev *edev) 244 + { 245 + struct exynos_ppmu *info = devfreq_event_get_drvdata(edev); 246 + int id = exynos_ppmu_find_ppmu_id(edev); 247 + u32 pmnc, cntens; 248 + 249 + /* Enable all counters */ 250 + cntens = __raw_readl(info->ppmu.base + PPMU_V2_CNTENS); 251 + cntens |= (PPMU_CCNT_MASK | (PPMU_ENABLE << id)); 252 + __raw_writel(cntens, info->ppmu.base + PPMU_V2_CNTENS); 253 + 254 + /* Set the event of Read/Write data count */ 255 + switch (id) { 256 + case PPMU_PMNCNT0: 257 + case PPMU_PMNCNT1: 258 + case PPMU_PMNCNT2: 259 + __raw_writel(PPMU_V2_RO_DATA_CNT | PPMU_V2_WO_DATA_CNT, 260 + info->ppmu.base + PPMU_V2_CH_EVx_TYPE(id)); 261 + break; 262 + case PPMU_PMNCNT3: 263 + __raw_writel(PPMU_V2_EVT3_RW_DATA_CNT, 264 + info->ppmu.base + PPMU_V2_CH_EVx_TYPE(id)); 265 + break; 266 + } 267 + 268 + /* Reset cycle counter/performance counter and enable PPMU */ 269 + pmnc = __raw_readl(info->ppmu.base + PPMU_V2_PMNC); 270 + pmnc &= ~(PPMU_PMNC_ENABLE_MASK 271 + | PPMU_PMNC_COUNTER_RESET_MASK 272 + | PPMU_PMNC_CC_RESET_MASK 273 + | PPMU_PMNC_CC_DIVIDER_MASK 274 + | PPMU_V2_PMNC_START_MODE_MASK); 275 + pmnc |= (PPMU_ENABLE << PPMU_PMNC_ENABLE_SHIFT); 276 + pmnc |= (PPMU_ENABLE 
<< PPMU_PMNC_COUNTER_RESET_SHIFT); 277 + pmnc |= (PPMU_ENABLE << PPMU_PMNC_CC_RESET_SHIFT); 278 + pmnc |= (PPMU_V2_MODE_MANUAL << PPMU_V2_PMNC_START_MODE_SHIFT); 279 + __raw_writel(pmnc, info->ppmu.base + PPMU_V2_PMNC); 280 + 281 + return 0; 282 + } 283 + 284 + static int exynos_ppmu_v2_get_event(struct devfreq_event_dev *edev, 285 + struct devfreq_event_data *edata) 286 + { 287 + struct exynos_ppmu *info = devfreq_event_get_drvdata(edev); 288 + int id = exynos_ppmu_find_ppmu_id(edev); 289 + u32 pmnc, cntenc; 290 + u32 pmcnt_high, pmcnt_low; 291 + u64 load_count = 0; 292 + 293 + /* Disable PPMU */ 294 + pmnc = __raw_readl(info->ppmu.base + PPMU_V2_PMNC); 295 + pmnc &= ~PPMU_PMNC_ENABLE_MASK; 296 + __raw_writel(pmnc, info->ppmu.base + PPMU_V2_PMNC); 297 + 298 + /* Read cycle count and performance count */ 299 + edata->total_count = __raw_readl(info->ppmu.base + PPMU_V2_CCNT); 300 + 301 + switch (id) { 302 + case PPMU_PMNCNT0: 303 + case PPMU_PMNCNT1: 304 + case PPMU_PMNCNT2: 305 + load_count = __raw_readl(info->ppmu.base + PPMU_V2_PMNCT(id)); 306 + break; 307 + case PPMU_PMNCNT3: 308 + pmcnt_high = __raw_readl(info->ppmu.base + PPMU_V2_PMCNT3_HIGH); 309 + pmcnt_low = __raw_readl(info->ppmu.base + PPMU_V2_PMCNT3_LOW); 310 + load_count = (u64)((pmcnt_high & 0xff) << 32) + (u64)pmcnt_low; 311 + break; 312 + } 313 + edata->load_count = load_count; 314 + 315 + /* Disable all counters */ 316 + cntenc = __raw_readl(info->ppmu.base + PPMU_V2_CNTENC); 317 + cntenc |= (PPMU_CCNT_MASK | (PPMU_ENABLE << id)); 318 + __raw_writel(cntenc, info->ppmu.base + PPMU_V2_CNTENC); 319 + 320 + dev_dbg(&edev->dev, "%25s (load: %ld / %ld)\n", edev->desc->name, 321 + edata->load_count, edata->total_count); 322 + return 0; 323 + } 324 + 325 + static const struct devfreq_event_ops exynos_ppmu_v2_ops = { 326 + .disable = exynos_ppmu_v2_disable, 327 + .set_event = exynos_ppmu_v2_set_event, 328 + .get_event = exynos_ppmu_v2_get_event, 329 + }; 330 + 331 + static const struct of_device_id 
exynos_ppmu_id_match[] = { 332 + { 333 + .compatible = "samsung,exynos-ppmu", 334 + .data = (void *)&exynos_ppmu_ops, 335 + }, { 336 + .compatible = "samsung,exynos-ppmu-v2", 337 + .data = (void *)&exynos_ppmu_v2_ops, 338 + }, 339 + { /* sentinel */ }, 340 + }; 341 + 342 + static struct devfreq_event_ops *exynos_bus_get_ops(struct device_node *np) 343 + { 344 + const struct of_device_id *match; 345 + 346 + match = of_match_node(exynos_ppmu_id_match, np); 347 + return (struct devfreq_event_ops *)match->data; 348 + } 349 + 215 350 static int of_get_devfreq_events(struct device_node *np, 216 351 struct exynos_ppmu *info) 217 352 { 218 353 struct devfreq_event_desc *desc; 354 + struct devfreq_event_ops *event_ops; 219 355 struct device *dev = info->dev; 220 356 struct device_node *events_np, *node; 221 357 int i, j, count; ··· 374 214 "failed to get child node of devfreq-event devices\n"); 375 215 return -EINVAL; 376 216 } 217 + event_ops = exynos_bus_get_ops(np); 377 218 378 219 count = of_get_child_count(events_np); 379 220 desc = devm_kzalloc(dev, sizeof(*desc) * count, GFP_KERNEL); ··· 399 238 continue; 400 239 } 401 240 402 - desc[j].ops = &exynos_ppmu_ops; 241 + desc[j].ops = event_ops; 403 242 desc[j].driver_data = info; 404 243 405 244 of_property_read_string(node, "event-name", &desc[j].name); ··· 514 353 515 354 return 0; 516 355 } 517 - 518 - static struct of_device_id exynos_ppmu_id_match[] = { 519 - { .compatible = "samsung,exynos-ppmu", }, 520 - { /* sentinel */ }, 521 - }; 522 356 523 357 static struct platform_driver exynos_ppmu_driver = { 524 358 .probe = exynos_ppmu_probe,
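The exynos-ppmu change selects between the v1.1 and v2.0 ops tables by stashing an ops pointer in the `.data` field of each `of_device_id` entry and resolving it with `of_match_node()`. A simplified user-space model of that compatible-string lookup (the structs are stand-ins for the devfreq-event types):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative stand-in for the devfreq_event_ops tables. */
struct event_ops { int version; };

static const struct event_ops ppmu_v1_ops = { 1 };
static const struct event_ops ppmu_v2_ops = { 2 };

/* Match table: compatible string -> ops, like of_device_id's .data. */
static const struct {
	const char *compatible;
	const struct event_ops *ops;
} id_match[] = {
	{ "samsung,exynos-ppmu",    &ppmu_v1_ops },
	{ "samsung,exynos-ppmu-v2", &ppmu_v2_ops },
	{ NULL, NULL },		/* sentinel */
};

/* Model of what of_match_node() + the ->data cast accomplish. */
static const struct event_ops *get_ops(const char *compatible)
{
	int i;

	for (i = 0; id_match[i].compatible; i++)
		if (!strcmp(id_match[i].compatible, compatible))
			return id_match[i].ops;
	return NULL;
}
```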
+70
drivers/devfreq/event/exynos-ppmu.h
··· 26 26 PPMU_PMNCNT_MAX, 27 27 }; 28 28 29 + /*** 30 + * PPMUv1.1 Definitions 31 + */ 29 32 enum ppmu_event_type { 30 33 PPMU_RO_BUSY_CYCLE_CNT = 0x0, 31 34 PPMU_WO_BUSY_CYCLE_CNT = 0x1, ··· 92 89 /* PPMU_PMNCTx/PPMU_BETxSEL registers */ 93 90 #define PPMU_PMNCT(x) (PPMU_PMCNT0 + (0x10 * x)) 94 91 #define PPMU_BEVTxSEL(x) (PPMU_BEVT0SEL + (0x100 * x)) 92 + 93 + /*** 94 + * PPMU_V2.0 definitions 95 + */ 96 + enum ppmu_v2_mode { 97 + PPMU_V2_MODE_MANUAL = 0, 98 + PPMU_V2_MODE_AUTO = 1, 99 + PPMU_V2_MODE_CIG = 2, /* CIG (Conditional Interrupt Generation) */ 100 + }; 101 + 102 + enum ppmu_v2_event_type { 103 + PPMU_V2_RO_DATA_CNT = 0x4, 104 + PPMU_V2_WO_DATA_CNT = 0x5, 105 + 106 + PPMU_V2_EVT3_RW_DATA_CNT = 0x22, /* Only for Event3 */ 107 + }; 108 + 109 + enum ppmu_V2_reg { 110 + /* PPC control register */ 111 + PPMU_V2_PMNC = 0x04, 112 + PPMU_V2_CNTENS = 0x08, 113 + PPMU_V2_CNTENC = 0x0c, 114 + PPMU_V2_INTENS = 0x10, 115 + PPMU_V2_INTENC = 0x14, 116 + PPMU_V2_FLAG = 0x18, 117 + 118 + /* Cycle Counter and Performance Event Counter Register */ 119 + PPMU_V2_CCNT = 0x48, 120 + PPMU_V2_PMCNT0 = 0x34, 121 + PPMU_V2_PMCNT1 = 0x38, 122 + PPMU_V2_PMCNT2 = 0x3c, 123 + PPMU_V2_PMCNT3_LOW = 0x40, 124 + PPMU_V2_PMCNT3_HIGH = 0x44, 125 + 126 + /* Bus Event Generator */ 127 + PPMU_V2_CIG_CFG0 = 0x1c, 128 + PPMU_V2_CIG_CFG1 = 0x20, 129 + PPMU_V2_CIG_CFG2 = 0x24, 130 + PPMU_V2_CIG_RESULT = 0x28, 131 + PPMU_V2_CNT_RESET = 0x2c, 132 + PPMU_V2_CNT_AUTO = 0x30, 133 + PPMU_V2_CH_EV0_TYPE = 0x200, 134 + PPMU_V2_CH_EV1_TYPE = 0x204, 135 + PPMU_V2_CH_EV2_TYPE = 0x208, 136 + PPMU_V2_CH_EV3_TYPE = 0x20c, 137 + PPMU_V2_SM_ID_V = 0x220, 138 + PPMU_V2_SM_ID_A = 0x224, 139 + PPMU_V2_SM_OTHERS_V = 0x228, 140 + PPMU_V2_SM_OTHERS_A = 0x22c, 141 + PPMU_V2_INTERRUPT_RESET = 0x260, 142 + }; 143 + 144 + /* PMNC register */ 145 + #define PPMU_V2_PMNC_START_MODE_SHIFT 20 146 + #define PPMU_V2_PMNC_START_MODE_MASK (0x3 << PPMU_V2_PMNC_START_MODE_SHIFT) 147 + 148 + #define PPMU_PMNC_CC_RESET_SHIFT 2 149 + 
#define PPMU_PMNC_COUNTER_RESET_SHIFT 1 150 + #define PPMU_PMNC_ENABLE_SHIFT 0 151 + #define PPMU_PMNC_START_MODE_MASK BIT(16) 152 + #define PPMU_PMNC_CC_DIVIDER_MASK BIT(3) 153 + #define PPMU_PMNC_CC_RESET_MASK BIT(2) 154 + #define PPMU_PMNC_COUNTER_RESET_MASK BIT(1) 155 + #define PPMU_PMNC_ENABLE_MASK BIT(0) 156 + 157 + #define PPMU_V2_PMNCT(x) (PPMU_V2_PMCNT0 + (0x4 * x)) 158 + #define PPMU_V2_CH_EVx_TYPE(x) (PPMU_V2_CH_EV0_TYPE + (0x4 * x)) 95 159 96 160 #endif /* __EXYNOS_PPMU_H__ */
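The `PPMU_V2_PMNCT(x)` and `PPMU_V2_CH_EVx_TYPE(x)` macros above map a counter index to a register offset with a fixed 4-byte stride from a base register. Reproducing the counter macro stand-alone against the v2.0 offsets listed in the header:

```c
#include <assert.h>

/* From the PPMU v2.0 register map above: PMCNT0 at 0x34, then one
 * 32-bit register per counter. */
#define PPMU_V2_PMCNT0		0x34
#define PPMU_V2_PMNCT(x)	(PPMU_V2_PMCNT0 + (0x4 * (x)))
```

Counter 3 is the exception: it is a 40-bit counter split across `PMCNT3_LOW` (0x40) and `PMCNT3_HIGH` (0x44), which is why the driver's `get_event` handles `PPMU_PMNCNT3` separately instead of using this stride macro.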
+8
drivers/dma/Kconfig
··· 85 85 help 86 86 Enable support for the Intel(R) IOP Series RAID engines. 87 87 88 + config IDMA64 89 + tristate "Intel integrated DMA 64-bit support" 90 + select DMA_ENGINE 91 + select DMA_VIRTUAL_CHANNELS 92 + help 93 + Enable DMA support for Intel Low Power Subsystem such as found on 94 + Intel Skylake PCH. 95 + 88 96 source "drivers/dma/dw/Kconfig" 89 97 90 98 config AT_HDMAC
+1
drivers/dma/Makefile
··· 14 14 obj-$(CONFIG_MPC512X_DMA) += mpc512x_dma.o 15 15 obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/ 16 16 obj-$(CONFIG_MV_XOR) += mv_xor.o 17 + obj-$(CONFIG_IDMA64) += idma64.o 17 18 obj-$(CONFIG_DW_DMAC_CORE) += dw/ 18 19 obj-$(CONFIG_AT_HDMAC) += at_hdmac.o 19 20 obj-$(CONFIG_AT_XDMAC) += at_xdmac.o
+710
drivers/dma/idma64.c
··· 1 + /* 2 + * Core driver for the Intel integrated DMA 64-bit 3 + * 4 + * Copyright (C) 2015 Intel Corporation 5 + * Author: Andy Shevchenko <andriy.shevchenko@linux.intel.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #include <linux/bitops.h> 13 + #include <linux/delay.h> 14 + #include <linux/dmaengine.h> 15 + #include <linux/dma-mapping.h> 16 + #include <linux/dmapool.h> 17 + #include <linux/init.h> 18 + #include <linux/module.h> 19 + #include <linux/platform_device.h> 20 + #include <linux/slab.h> 21 + 22 + #include "idma64.h" 23 + 24 + /* Platform driver name */ 25 + #define DRV_NAME "idma64" 26 + 27 + /* For now we support only two channels */ 28 + #define IDMA64_NR_CHAN 2 29 + 30 + /* ---------------------------------------------------------------------- */ 31 + 32 + static struct device *chan2dev(struct dma_chan *chan) 33 + { 34 + return &chan->dev->device; 35 + } 36 + 37 + /* ---------------------------------------------------------------------- */ 38 + 39 + static void idma64_off(struct idma64 *idma64) 40 + { 41 + unsigned short count = 100; 42 + 43 + dma_writel(idma64, CFG, 0); 44 + 45 + channel_clear_bit(idma64, MASK(XFER), idma64->all_chan_mask); 46 + channel_clear_bit(idma64, MASK(BLOCK), idma64->all_chan_mask); 47 + channel_clear_bit(idma64, MASK(SRC_TRAN), idma64->all_chan_mask); 48 + channel_clear_bit(idma64, MASK(DST_TRAN), idma64->all_chan_mask); 49 + channel_clear_bit(idma64, MASK(ERROR), idma64->all_chan_mask); 50 + 51 + do { 52 + cpu_relax(); 53 + } while (dma_readl(idma64, CFG) & IDMA64_CFG_DMA_EN && --count); 54 + } 55 + 56 + static void idma64_on(struct idma64 *idma64) 57 + { 58 + dma_writel(idma64, CFG, IDMA64_CFG_DMA_EN); 59 + } 60 + 61 + /* ---------------------------------------------------------------------- */ 62 + 63 + static void idma64_chan_init(struct 
idma64 *idma64, struct idma64_chan *idma64c) 64 + { 65 + u32 cfghi = IDMA64C_CFGH_SRC_PER(1) | IDMA64C_CFGH_DST_PER(0); 66 + u32 cfglo = 0; 67 + 68 + /* Enforce FIFO drain when channel is suspended */ 69 + cfglo |= IDMA64C_CFGL_CH_DRAIN; 70 + 71 + /* Set default burst alignment */ 72 + cfglo |= IDMA64C_CFGL_DST_BURST_ALIGN | IDMA64C_CFGL_SRC_BURST_ALIGN; 73 + 74 + channel_writel(idma64c, CFG_LO, cfglo); 75 + channel_writel(idma64c, CFG_HI, cfghi); 76 + 77 + /* Enable interrupts */ 78 + channel_set_bit(idma64, MASK(XFER), idma64c->mask); 79 + channel_set_bit(idma64, MASK(ERROR), idma64c->mask); 80 + 81 + /* 82 + * Ensure the controller is turned on. 83 + * 84 + * The iDMA is turned off in ->probe() and loses context during system 85 + * suspend / resume cycle. That's why we have to enable it each time we 86 + * use it. 87 + */ 88 + idma64_on(idma64); 89 + } 90 + 91 + static void idma64_chan_stop(struct idma64 *idma64, struct idma64_chan *idma64c) 92 + { 93 + channel_clear_bit(idma64, CH_EN, idma64c->mask); 94 + } 95 + 96 + static void idma64_chan_start(struct idma64 *idma64, struct idma64_chan *idma64c) 97 + { 98 + struct idma64_desc *desc = idma64c->desc; 99 + struct idma64_hw_desc *hw = &desc->hw[0]; 100 + 101 + channel_writeq(idma64c, SAR, 0); 102 + channel_writeq(idma64c, DAR, 0); 103 + 104 + channel_writel(idma64c, CTL_HI, IDMA64C_CTLH_BLOCK_TS(~0UL)); 105 + channel_writel(idma64c, CTL_LO, IDMA64C_CTLL_LLP_S_EN | IDMA64C_CTLL_LLP_D_EN); 106 + 107 + channel_writeq(idma64c, LLP, hw->llp); 108 + 109 + channel_set_bit(idma64, CH_EN, idma64c->mask); 110 + } 111 + 112 + static void idma64_stop_transfer(struct idma64_chan *idma64c) 113 + { 114 + struct idma64 *idma64 = to_idma64(idma64c->vchan.chan.device); 115 + 116 + idma64_chan_stop(idma64, idma64c); 117 + } 118 + 119 + static void idma64_start_transfer(struct idma64_chan *idma64c) 120 + { 121 + struct idma64 *idma64 = to_idma64(idma64c->vchan.chan.device); 122 + struct virt_dma_desc *vdesc; 123 + 124 + /* Get
the next descriptor */ 125 + vdesc = vchan_next_desc(&idma64c->vchan); 126 + if (!vdesc) { 127 + idma64c->desc = NULL; 128 + return; 129 + } 130 + 131 + list_del(&vdesc->node); 132 + idma64c->desc = to_idma64_desc(vdesc); 133 + 134 + /* Configure the channel */ 135 + idma64_chan_init(idma64, idma64c); 136 + 137 + /* Start the channel with a new descriptor */ 138 + idma64_chan_start(idma64, idma64c); 139 + } 140 + 141 + /* ---------------------------------------------------------------------- */ 142 + 143 + static void idma64_chan_irq(struct idma64 *idma64, unsigned short c, 144 + u32 status_err, u32 status_xfer) 145 + { 146 + struct idma64_chan *idma64c = &idma64->chan[c]; 147 + struct idma64_desc *desc; 148 + unsigned long flags; 149 + 150 + spin_lock_irqsave(&idma64c->vchan.lock, flags); 151 + desc = idma64c->desc; 152 + if (desc) { 153 + if (status_err & (1 << c)) { 154 + dma_writel(idma64, CLEAR(ERROR), idma64c->mask); 155 + desc->status = DMA_ERROR; 156 + } else if (status_xfer & (1 << c)) { 157 + dma_writel(idma64, CLEAR(XFER), idma64c->mask); 158 + desc->status = DMA_COMPLETE; 159 + vchan_cookie_complete(&desc->vdesc); 160 + idma64_start_transfer(idma64c); 161 + } 162 + 163 + /* idma64_start_transfer() updates idma64c->desc */ 164 + if (idma64c->desc == NULL || desc->status == DMA_ERROR) 165 + idma64_stop_transfer(idma64c); 166 + } 167 + spin_unlock_irqrestore(&idma64c->vchan.lock, flags); 168 + } 169 + 170 + static irqreturn_t idma64_irq(int irq, void *dev) 171 + { 172 + struct idma64 *idma64 = dev; 173 + u32 status = dma_readl(idma64, STATUS_INT); 174 + u32 status_xfer; 175 + u32 status_err; 176 + unsigned short i; 177 + 178 + dev_vdbg(idma64->dma.dev, "%s: status=%#x\n", __func__, status); 179 + 180 + /* Check if we have any interrupt from the DMA controller */ 181 + if (!status) 182 + return IRQ_NONE; 183 + 184 + /* Disable interrupts */ 185 + channel_clear_bit(idma64, MASK(XFER), idma64->all_chan_mask); 186 + channel_clear_bit(idma64, MASK(ERROR), 
idma64->all_chan_mask); 187 + 188 + status_xfer = dma_readl(idma64, RAW(XFER)); 189 + status_err = dma_readl(idma64, RAW(ERROR)); 190 + 191 + for (i = 0; i < idma64->dma.chancnt; i++) 192 + idma64_chan_irq(idma64, i, status_err, status_xfer); 193 + 194 + /* Re-enable interrupts */ 195 + channel_set_bit(idma64, MASK(XFER), idma64->all_chan_mask); 196 + channel_set_bit(idma64, MASK(ERROR), idma64->all_chan_mask); 197 + 198 + return IRQ_HANDLED; 199 + } 200 + 201 + /* ---------------------------------------------------------------------- */ 202 + 203 + static struct idma64_desc *idma64_alloc_desc(unsigned int ndesc) 204 + { 205 + struct idma64_desc *desc; 206 + 207 + desc = kzalloc(sizeof(*desc), GFP_NOWAIT); 208 + if (!desc) 209 + return NULL; 210 + 211 + desc->hw = kcalloc(ndesc, sizeof(*desc->hw), GFP_NOWAIT); 212 + if (!desc->hw) { 213 + kfree(desc); 214 + return NULL; 215 + } 216 + 217 + return desc; 218 + } 219 + 220 + static void idma64_desc_free(struct idma64_chan *idma64c, 221 + struct idma64_desc *desc) 222 + { 223 + struct idma64_hw_desc *hw; 224 + 225 + if (desc->ndesc) { 226 + unsigned int i = desc->ndesc; 227 + 228 + do { 229 + hw = &desc->hw[--i]; 230 + dma_pool_free(idma64c->pool, hw->lli, hw->llp); 231 + } while (i); 232 + } 233 + 234 + kfree(desc->hw); 235 + kfree(desc); 236 + } 237 + 238 + static void idma64_vdesc_free(struct virt_dma_desc *vdesc) 239 + { 240 + struct idma64_chan *idma64c = to_idma64_chan(vdesc->tx.chan); 241 + 242 + idma64_desc_free(idma64c, to_idma64_desc(vdesc)); 243 + } 244 + 245 + static u64 idma64_hw_desc_fill(struct idma64_hw_desc *hw, 246 + struct dma_slave_config *config, 247 + enum dma_transfer_direction direction, u64 llp) 248 + { 249 + struct idma64_lli *lli = hw->lli; 250 + u64 sar, dar; 251 + u32 ctlhi = IDMA64C_CTLH_BLOCK_TS(hw->len); 252 + u32 ctllo = IDMA64C_CTLL_LLP_S_EN | IDMA64C_CTLL_LLP_D_EN; 253 + u32 src_width, dst_width; 254 + 255 + if (direction == DMA_MEM_TO_DEV) { 256 + sar = hw->phys; 257 + dar = 
config->dst_addr; 258 + ctllo |= IDMA64C_CTLL_DST_FIX | IDMA64C_CTLL_SRC_INC | 259 + IDMA64C_CTLL_FC_M2P; 260 + src_width = min_t(u32, 2, __fls(sar | hw->len)); 261 + dst_width = __fls(config->dst_addr_width); 262 + } else { /* DMA_DEV_TO_MEM */ 263 + sar = config->src_addr; 264 + dar = hw->phys; 265 + ctllo |= IDMA64C_CTLL_DST_INC | IDMA64C_CTLL_SRC_FIX | 266 + IDMA64C_CTLL_FC_P2M; 267 + src_width = __fls(config->src_addr_width); 268 + dst_width = min_t(u32, 2, __fls(dar | hw->len)); 269 + } 270 + 271 + lli->sar = sar; 272 + lli->dar = dar; 273 + 274 + lli->ctlhi = ctlhi; 275 + lli->ctllo = ctllo | 276 + IDMA64C_CTLL_SRC_MSIZE(config->src_maxburst) | 277 + IDMA64C_CTLL_DST_MSIZE(config->dst_maxburst) | 278 + IDMA64C_CTLL_DST_WIDTH(dst_width) | 279 + IDMA64C_CTLL_SRC_WIDTH(src_width); 280 + 281 + lli->llp = llp; 282 + return hw->llp; 283 + } 284 + 285 + static void idma64_desc_fill(struct idma64_chan *idma64c, 286 + struct idma64_desc *desc) 287 + { 288 + struct dma_slave_config *config = &idma64c->config; 289 + struct idma64_hw_desc *hw = &desc->hw[desc->ndesc - 1]; 290 + struct idma64_lli *lli = hw->lli; 291 + u64 llp = 0; 292 + unsigned int i = desc->ndesc; 293 + 294 + /* Fill the hardware descriptors and link them to a list */ 295 + do { 296 + hw = &desc->hw[--i]; 297 + llp = idma64_hw_desc_fill(hw, config, desc->direction, llp); 298 + desc->length += hw->len; 299 + } while (i); 300 + 301 + /* Trigger interrupt after last block */ 302 + lli->ctllo |= IDMA64C_CTLL_INT_EN; 303 + } 304 + 305 + static struct dma_async_tx_descriptor *idma64_prep_slave_sg( 306 + struct dma_chan *chan, struct scatterlist *sgl, 307 + unsigned int sg_len, enum dma_transfer_direction direction, 308 + unsigned long flags, void *context) 309 + { 310 + struct idma64_chan *idma64c = to_idma64_chan(chan); 311 + struct idma64_desc *desc; 312 + struct scatterlist *sg; 313 + unsigned int i; 314 + 315 + desc = idma64_alloc_desc(sg_len); 316 + if (!desc) 317 + return NULL; 318 + 319 + 
for_each_sg(sgl, sg, sg_len, i) { 320 + struct idma64_hw_desc *hw = &desc->hw[i]; 321 + 322 + /* Allocate DMA capable memory for hardware descriptor */ 323 + hw->lli = dma_pool_alloc(idma64c->pool, GFP_NOWAIT, &hw->llp); 324 + if (!hw->lli) { 325 + desc->ndesc = i; 326 + idma64_desc_free(idma64c, desc); 327 + return NULL; 328 + } 329 + 330 + hw->phys = sg_dma_address(sg); 331 + hw->len = sg_dma_len(sg); 332 + } 333 + 334 + desc->ndesc = sg_len; 335 + desc->direction = direction; 336 + desc->status = DMA_IN_PROGRESS; 337 + 338 + idma64_desc_fill(idma64c, desc); 339 + return vchan_tx_prep(&idma64c->vchan, &desc->vdesc, flags); 340 + } 341 + 342 + static void idma64_issue_pending(struct dma_chan *chan) 343 + { 344 + struct idma64_chan *idma64c = to_idma64_chan(chan); 345 + unsigned long flags; 346 + 347 + spin_lock_irqsave(&idma64c->vchan.lock, flags); 348 + if (vchan_issue_pending(&idma64c->vchan) && !idma64c->desc) 349 + idma64_start_transfer(idma64c); 350 + spin_unlock_irqrestore(&idma64c->vchan.lock, flags); 351 + } 352 + 353 + static size_t idma64_active_desc_size(struct idma64_chan *idma64c) 354 + { 355 + struct idma64_desc *desc = idma64c->desc; 356 + struct idma64_hw_desc *hw; 357 + size_t bytes = desc->length; 358 + u64 llp; 359 + u32 ctlhi; 360 + unsigned int i = 0; 361 + 362 + llp = channel_readq(idma64c, LLP); 363 + do { 364 + hw = &desc->hw[i]; 365 + } while ((hw->llp != llp) && (++i < desc->ndesc)); 366 + 367 + if (!i) 368 + return bytes; 369 + 370 + do { 371 + bytes -= desc->hw[--i].len; 372 + } while (i); 373 + 374 + ctlhi = channel_readl(idma64c, CTL_HI); 375 + return bytes - IDMA64C_CTLH_BLOCK_TS(ctlhi); 376 + } 377 + 378 + static enum dma_status idma64_tx_status(struct dma_chan *chan, 379 + dma_cookie_t cookie, struct dma_tx_state *state) 380 + { 381 + struct idma64_chan *idma64c = to_idma64_chan(chan); 382 + struct virt_dma_desc *vdesc; 383 + enum dma_status status; 384 + size_t bytes; 385 + unsigned long flags; 386 + 387 + status = 
dma_cookie_status(chan, cookie, state); 388 + if (status == DMA_COMPLETE) 389 + return status; 390 + 391 + spin_lock_irqsave(&idma64c->vchan.lock, flags); 392 + vdesc = vchan_find_desc(&idma64c->vchan, cookie); 393 + if (idma64c->desc && cookie == idma64c->desc->vdesc.tx.cookie) { 394 + bytes = idma64_active_desc_size(idma64c); 395 + dma_set_residue(state, bytes); 396 + status = idma64c->desc->status; 397 + } else if (vdesc) { 398 + bytes = to_idma64_desc(vdesc)->length; 399 + dma_set_residue(state, bytes); 400 + } 401 + spin_unlock_irqrestore(&idma64c->vchan.lock, flags); 402 + 403 + return status; 404 + } 405 + 406 + static void convert_burst(u32 *maxburst) 407 + { 408 + if (*maxburst) 409 + *maxburst = __fls(*maxburst); 410 + else 411 + *maxburst = 0; 412 + } 413 + 414 + static int idma64_slave_config(struct dma_chan *chan, 415 + struct dma_slave_config *config) 416 + { 417 + struct idma64_chan *idma64c = to_idma64_chan(chan); 418 + 419 + /* Check if chan will be configured for slave transfers */ 420 + if (!is_slave_direction(config->direction)) 421 + return -EINVAL; 422 + 423 + memcpy(&idma64c->config, config, sizeof(idma64c->config)); 424 + 425 + convert_burst(&idma64c->config.src_maxburst); 426 + convert_burst(&idma64c->config.dst_maxburst); 427 + 428 + return 0; 429 + } 430 + 431 + static void idma64_chan_deactivate(struct idma64_chan *idma64c) 432 + { 433 + unsigned short count = 100; 434 + u32 cfglo; 435 + 436 + cfglo = channel_readl(idma64c, CFG_LO); 437 + channel_writel(idma64c, CFG_LO, cfglo | IDMA64C_CFGL_CH_SUSP); 438 + do { 439 + udelay(1); 440 + cfglo = channel_readl(idma64c, CFG_LO); 441 + } while (!(cfglo & IDMA64C_CFGL_FIFO_EMPTY) && --count); 442 + } 443 + 444 + static void idma64_chan_activate(struct idma64_chan *idma64c) 445 + { 446 + u32 cfglo; 447 + 448 + cfglo = channel_readl(idma64c, CFG_LO); 449 + channel_writel(idma64c, CFG_LO, cfglo & ~IDMA64C_CFGL_CH_SUSP); 450 + } 451 + 452 + static int idma64_pause(struct dma_chan *chan) 453 + { 454 
+ struct idma64_chan *idma64c = to_idma64_chan(chan); 455 + unsigned long flags; 456 + 457 + spin_lock_irqsave(&idma64c->vchan.lock, flags); 458 + if (idma64c->desc && idma64c->desc->status == DMA_IN_PROGRESS) { 459 + idma64_chan_deactivate(idma64c); 460 + idma64c->desc->status = DMA_PAUSED; 461 + } 462 + spin_unlock_irqrestore(&idma64c->vchan.lock, flags); 463 + 464 + return 0; 465 + } 466 + 467 + static int idma64_resume(struct dma_chan *chan) 468 + { 469 + struct idma64_chan *idma64c = to_idma64_chan(chan); 470 + unsigned long flags; 471 + 472 + spin_lock_irqsave(&idma64c->vchan.lock, flags); 473 + if (idma64c->desc && idma64c->desc->status == DMA_PAUSED) { 474 + idma64c->desc->status = DMA_IN_PROGRESS; 475 + idma64_chan_activate(idma64c); 476 + } 477 + spin_unlock_irqrestore(&idma64c->vchan.lock, flags); 478 + 479 + return 0; 480 + } 481 + 482 + static int idma64_terminate_all(struct dma_chan *chan) 483 + { 484 + struct idma64_chan *idma64c = to_idma64_chan(chan); 485 + unsigned long flags; 486 + LIST_HEAD(head); 487 + 488 + spin_lock_irqsave(&idma64c->vchan.lock, flags); 489 + idma64_chan_deactivate(idma64c); 490 + idma64_stop_transfer(idma64c); 491 + if (idma64c->desc) { 492 + idma64_vdesc_free(&idma64c->desc->vdesc); 493 + idma64c->desc = NULL; 494 + } 495 + vchan_get_all_descriptors(&idma64c->vchan, &head); 496 + spin_unlock_irqrestore(&idma64c->vchan.lock, flags); 497 + 498 + vchan_dma_desc_free_list(&idma64c->vchan, &head); 499 + return 0; 500 + } 501 + 502 + static int idma64_alloc_chan_resources(struct dma_chan *chan) 503 + { 504 + struct idma64_chan *idma64c = to_idma64_chan(chan); 505 + 506 + /* Create a pool of consistent memory blocks for hardware descriptors */ 507 + idma64c->pool = dma_pool_create(dev_name(chan2dev(chan)), 508 + chan->device->dev, 509 + sizeof(struct idma64_lli), 8, 0); 510 + if (!idma64c->pool) { 511 + dev_err(chan2dev(chan), "No memory for descriptors\n"); 512 + return -ENOMEM; 513 + } 514 + 515 + return 0; 516 + } 517 + 518 + 
static void idma64_free_chan_resources(struct dma_chan *chan) 519 + { 520 + struct idma64_chan *idma64c = to_idma64_chan(chan); 521 + 522 + vchan_free_chan_resources(to_virt_chan(chan)); 523 + dma_pool_destroy(idma64c->pool); 524 + idma64c->pool = NULL; 525 + } 526 + 527 + /* ---------------------------------------------------------------------- */ 528 + 529 + #define IDMA64_BUSWIDTHS \ 530 + BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \ 531 + BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \ 532 + BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) 533 + 534 + static int idma64_probe(struct idma64_chip *chip) 535 + { 536 + struct idma64 *idma64; 537 + unsigned short nr_chan = IDMA64_NR_CHAN; 538 + unsigned short i; 539 + int ret; 540 + 541 + idma64 = devm_kzalloc(chip->dev, sizeof(*idma64), GFP_KERNEL); 542 + if (!idma64) 543 + return -ENOMEM; 544 + 545 + idma64->regs = chip->regs; 546 + chip->idma64 = idma64; 547 + 548 + idma64->chan = devm_kcalloc(chip->dev, nr_chan, sizeof(*idma64->chan), 549 + GFP_KERNEL); 550 + if (!idma64->chan) 551 + return -ENOMEM; 552 + 553 + idma64->all_chan_mask = (1 << nr_chan) - 1; 554 + 555 + /* Turn off iDMA controller */ 556 + idma64_off(idma64); 557 + 558 + ret = devm_request_irq(chip->dev, chip->irq, idma64_irq, IRQF_SHARED, 559 + dev_name(chip->dev), idma64); 560 + if (ret) 561 + return ret; 562 + 563 + INIT_LIST_HEAD(&idma64->dma.channels); 564 + for (i = 0; i < nr_chan; i++) { 565 + struct idma64_chan *idma64c = &idma64->chan[i]; 566 + 567 + idma64c->vchan.desc_free = idma64_vdesc_free; 568 + vchan_init(&idma64c->vchan, &idma64->dma); 569 + 570 + idma64c->regs = idma64->regs + i * IDMA64_CH_LENGTH; 571 + idma64c->mask = BIT(i); 572 + } 573 + 574 + dma_cap_set(DMA_SLAVE, idma64->dma.cap_mask); 575 + dma_cap_set(DMA_PRIVATE, idma64->dma.cap_mask); 576 + 577 + idma64->dma.device_alloc_chan_resources = idma64_alloc_chan_resources; 578 + idma64->dma.device_free_chan_resources = idma64_free_chan_resources; 579 + 580 + idma64->dma.device_prep_slave_sg = idma64_prep_slave_sg; 
581 + 582 + idma64->dma.device_issue_pending = idma64_issue_pending; 583 + idma64->dma.device_tx_status = idma64_tx_status; 584 + 585 + idma64->dma.device_config = idma64_slave_config; 586 + idma64->dma.device_pause = idma64_pause; 587 + idma64->dma.device_resume = idma64_resume; 588 + idma64->dma.device_terminate_all = idma64_terminate_all; 589 + 590 + idma64->dma.src_addr_widths = IDMA64_BUSWIDTHS; 591 + idma64->dma.dst_addr_widths = IDMA64_BUSWIDTHS; 592 + idma64->dma.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); 593 + idma64->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; 594 + 595 + idma64->dma.dev = chip->dev; 596 + 597 + ret = dma_async_device_register(&idma64->dma); 598 + if (ret) 599 + return ret; 600 + 601 + dev_info(chip->dev, "Found Intel integrated DMA 64-bit\n"); 602 + return 0; 603 + } 604 + 605 + static int idma64_remove(struct idma64_chip *chip) 606 + { 607 + struct idma64 *idma64 = chip->idma64; 608 + unsigned short i; 609 + 610 + dma_async_device_unregister(&idma64->dma); 611 + 612 + /* 613 + * Explicitly call devm_free_irq() to avoid the side effects with 614 + * the scheduled tasklets.
615 + */ 616 + devm_free_irq(chip->dev, chip->irq, idma64); 617 + 618 + for (i = 0; i < idma64->dma.chancnt; i++) { 619 + struct idma64_chan *idma64c = &idma64->chan[i]; 620 + 621 + tasklet_kill(&idma64c->vchan.task); 622 + } 623 + 624 + return 0; 625 + } 626 + 627 + /* ---------------------------------------------------------------------- */ 628 + 629 + static int idma64_platform_probe(struct platform_device *pdev) 630 + { 631 + struct idma64_chip *chip; 632 + struct device *dev = &pdev->dev; 633 + struct resource *mem; 634 + int ret; 635 + 636 + chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL); 637 + if (!chip) 638 + return -ENOMEM; 639 + 640 + chip->irq = platform_get_irq(pdev, 0); 641 + if (chip->irq < 0) 642 + return chip->irq; 643 + 644 + mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 645 + chip->regs = devm_ioremap_resource(dev, mem); 646 + if (IS_ERR(chip->regs)) 647 + return PTR_ERR(chip->regs); 648 + 649 + ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 650 + if (ret) 651 + return ret; 652 + 653 + chip->dev = dev; 654 + 655 + ret = idma64_probe(chip); 656 + if (ret) 657 + return ret; 658 + 659 + platform_set_drvdata(pdev, chip); 660 + return 0; 661 + } 662 + 663 + static int idma64_platform_remove(struct platform_device *pdev) 664 + { 665 + struct idma64_chip *chip = platform_get_drvdata(pdev); 666 + 667 + return idma64_remove(chip); 668 + } 669 + 670 + #ifdef CONFIG_PM_SLEEP 671 + 672 + static int idma64_pm_suspend(struct device *dev) 673 + { 674 + struct platform_device *pdev = to_platform_device(dev); 675 + struct idma64_chip *chip = platform_get_drvdata(pdev); 676 + 677 + idma64_off(chip->idma64); 678 + return 0; 679 + } 680 + 681 + static int idma64_pm_resume(struct device *dev) 682 + { 683 + struct platform_device *pdev = to_platform_device(dev); 684 + struct idma64_chip *chip = platform_get_drvdata(pdev); 685 + 686 + idma64_on(chip->idma64); 687 + return 0; 688 + } 689 + 690 + #endif /* CONFIG_PM_SLEEP */ 691 + 692 + 
static const struct dev_pm_ops idma64_dev_pm_ops = { 693 + SET_SYSTEM_SLEEP_PM_OPS(idma64_pm_suspend, idma64_pm_resume) 694 + }; 695 + 696 + static struct platform_driver idma64_platform_driver = { 697 + .probe = idma64_platform_probe, 698 + .remove = idma64_platform_remove, 699 + .driver = { 700 + .name = DRV_NAME, 701 + .pm = &idma64_dev_pm_ops, 702 + }, 703 + }; 704 + 705 + module_platform_driver(idma64_platform_driver); 706 + 707 + MODULE_LICENSE("GPL v2"); 708 + MODULE_DESCRIPTION("iDMA64 core driver"); 709 + MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>"); 710 + MODULE_ALIAS("platform:" DRV_NAME);
+233
drivers/dma/idma64.h
··· 1 + /* 2 + * Driver for the Intel integrated DMA 64-bit 3 + * 4 + * Copyright (C) 2015 Intel Corporation 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + */ 10 + 11 + #ifndef __DMA_IDMA64_H__ 12 + #define __DMA_IDMA64_H__ 13 + 14 + #include <linux/device.h> 15 + #include <linux/io.h> 16 + #include <linux/spinlock.h> 17 + #include <linux/types.h> 18 + 19 + #include "virt-dma.h" 20 + 21 + /* Channel registers */ 22 + 23 + #define IDMA64_CH_SAR 0x00 /* Source Address Register */ 24 + #define IDMA64_CH_DAR 0x08 /* Destination Address Register */ 25 + #define IDMA64_CH_LLP 0x10 /* Linked List Pointer */ 26 + #define IDMA64_CH_CTL_LO 0x18 /* Control Register Low */ 27 + #define IDMA64_CH_CTL_HI 0x1c /* Control Register High */ 28 + #define IDMA64_CH_SSTAT 0x20 29 + #define IDMA64_CH_DSTAT 0x28 30 + #define IDMA64_CH_SSTATAR 0x30 31 + #define IDMA64_CH_DSTATAR 0x38 32 + #define IDMA64_CH_CFG_LO 0x40 /* Configuration Register Low */ 33 + #define IDMA64_CH_CFG_HI 0x44 /* Configuration Register High */ 34 + #define IDMA64_CH_SGR 0x48 35 + #define IDMA64_CH_DSR 0x50 36 + 37 + #define IDMA64_CH_LENGTH 0x58 38 + 39 + /* Bitfields in CTL_LO */ 40 + #define IDMA64C_CTLL_INT_EN (1 << 0) /* irqs enabled? 
*/ 41 + #define IDMA64C_CTLL_DST_WIDTH(x) ((x) << 1) /* bytes per element */ 42 + #define IDMA64C_CTLL_SRC_WIDTH(x) ((x) << 4) 43 + #define IDMA64C_CTLL_DST_INC (0 << 8) /* DAR update/not */ 44 + #define IDMA64C_CTLL_DST_FIX (1 << 8) 45 + #define IDMA64C_CTLL_SRC_INC (0 << 10) /* SAR update/not */ 46 + #define IDMA64C_CTLL_SRC_FIX (1 << 10) 47 + #define IDMA64C_CTLL_DST_MSIZE(x) ((x) << 11) /* burst, #elements */ 48 + #define IDMA64C_CTLL_SRC_MSIZE(x) ((x) << 14) 49 + #define IDMA64C_CTLL_FC_M2P (1 << 20) /* mem-to-periph */ 50 + #define IDMA64C_CTLL_FC_P2M (2 << 20) /* periph-to-mem */ 51 + #define IDMA64C_CTLL_LLP_D_EN (1 << 27) /* dest block chain */ 52 + #define IDMA64C_CTLL_LLP_S_EN (1 << 28) /* src block chain */ 53 + 54 + /* Bitfields in CTL_HI */ 55 + #define IDMA64C_CTLH_BLOCK_TS(x) ((x) & ((1 << 17) - 1)) 56 + #define IDMA64C_CTLH_DONE (1 << 17) 57 + 58 + /* Bitfields in CFG_LO */ 59 + #define IDMA64C_CFGL_DST_BURST_ALIGN (1 << 0) /* dst burst align */ 60 + #define IDMA64C_CFGL_SRC_BURST_ALIGN (1 << 1) /* src burst align */ 61 + #define IDMA64C_CFGL_CH_SUSP (1 << 8) 62 + #define IDMA64C_CFGL_FIFO_EMPTY (1 << 9) 63 + #define IDMA64C_CFGL_CH_DRAIN (1 << 10) /* drain FIFO */ 64 + #define IDMA64C_CFGL_DST_OPT_BL (1 << 20) /* optimize dst burst length */ 65 + #define IDMA64C_CFGL_SRC_OPT_BL (1 << 21) /* optimize src burst length */ 66 + 67 + /* Bitfields in CFG_HI */ 68 + #define IDMA64C_CFGH_SRC_PER(x) ((x) << 0) /* src peripheral */ 69 + #define IDMA64C_CFGH_DST_PER(x) ((x) << 4) /* dst peripheral */ 70 + #define IDMA64C_CFGH_RD_ISSUE_THD(x) ((x) << 8) 71 + #define IDMA64C_CFGH_RW_ISSUE_THD(x) ((x) << 18) 72 + 73 + /* Interrupt registers */ 74 + 75 + #define IDMA64_INT_XFER 0x00 76 + #define IDMA64_INT_BLOCK 0x08 77 + #define IDMA64_INT_SRC_TRAN 0x10 78 + #define IDMA64_INT_DST_TRAN 0x18 79 + #define IDMA64_INT_ERROR 0x20 80 + 81 + #define IDMA64_RAW(x) (0x2c0 + IDMA64_INT_##x) /* r */ 82 + #define IDMA64_STATUS(x) (0x2e8 + IDMA64_INT_##x) /* r (raw & mask) 
*/ 83 + #define IDMA64_MASK(x) (0x310 + IDMA64_INT_##x) /* rw (set = irq enabled) */ 84 + #define IDMA64_CLEAR(x) (0x338 + IDMA64_INT_##x) /* w (ack, affects "raw") */ 85 + 86 + /* Common registers */ 87 + 88 + #define IDMA64_STATUS_INT 0x360 /* r */ 89 + #define IDMA64_CFG 0x398 90 + #define IDMA64_CH_EN 0x3a0 91 + 92 + /* Bitfields in CFG */ 93 + #define IDMA64_CFG_DMA_EN (1 << 0) 94 + 95 + /* Hardware descriptor for Linked List transfers */ 96 + struct idma64_lli { 97 + u64 sar; 98 + u64 dar; 99 + u64 llp; 100 + u32 ctllo; 101 + u32 ctlhi; 102 + u32 sstat; 103 + u32 dstat; 104 + }; 105 + 106 + struct idma64_hw_desc { 107 + struct idma64_lli *lli; 108 + dma_addr_t llp; 109 + dma_addr_t phys; 110 + unsigned int len; 111 + }; 112 + 113 + struct idma64_desc { 114 + struct virt_dma_desc vdesc; 115 + enum dma_transfer_direction direction; 116 + struct idma64_hw_desc *hw; 117 + unsigned int ndesc; 118 + size_t length; 119 + enum dma_status status; 120 + }; 121 + 122 + static inline struct idma64_desc *to_idma64_desc(struct virt_dma_desc *vdesc) 123 + { 124 + return container_of(vdesc, struct idma64_desc, vdesc); 125 + } 126 + 127 + struct idma64_chan { 128 + struct virt_dma_chan vchan; 129 + 130 + void __iomem *regs; 131 + 132 + /* hardware configuration */ 133 + enum dma_transfer_direction direction; 134 + unsigned int mask; 135 + struct dma_slave_config config; 136 + 137 + void *pool; 138 + struct idma64_desc *desc; 139 + }; 140 + 141 + static inline struct idma64_chan *to_idma64_chan(struct dma_chan *chan) 142 + { 143 + return container_of(chan, struct idma64_chan, vchan.chan); 144 + } 145 + 146 + #define channel_set_bit(idma64, reg, mask) \ 147 + dma_writel(idma64, reg, ((mask) << 8) | (mask)) 148 + #define channel_clear_bit(idma64, reg, mask) \ 149 + dma_writel(idma64, reg, ((mask) << 8) | 0) 150 + 151 + static inline u32 idma64c_readl(struct idma64_chan *idma64c, int offset) 152 + { 153 + return readl(idma64c->regs + offset); 154 + } 155 + 156 + static inline
void idma64c_writel(struct idma64_chan *idma64c, int offset, 157 + u32 value) 158 + { 159 + writel(value, idma64c->regs + offset); 160 + } 161 + 162 + #define channel_readl(idma64c, reg) \ 163 + idma64c_readl(idma64c, IDMA64_CH_##reg) 164 + #define channel_writel(idma64c, reg, value) \ 165 + idma64c_writel(idma64c, IDMA64_CH_##reg, (value)) 166 + 167 + static inline u64 idma64c_readq(struct idma64_chan *idma64c, int offset) 168 + { 169 + u64 l, h; 170 + 171 + l = idma64c_readl(idma64c, offset); 172 + h = idma64c_readl(idma64c, offset + 4); 173 + 174 + return l | (h << 32); 175 + } 176 + 177 + static inline void idma64c_writeq(struct idma64_chan *idma64c, int offset, 178 + u64 value) 179 + { 180 + idma64c_writel(idma64c, offset, value); 181 + idma64c_writel(idma64c, offset + 4, value >> 32); 182 + } 183 + 184 + #define channel_readq(idma64c, reg) \ 185 + idma64c_readq(idma64c, IDMA64_CH_##reg) 186 + #define channel_writeq(idma64c, reg, value) \ 187 + idma64c_writeq(idma64c, IDMA64_CH_##reg, (value)) 188 + 189 + struct idma64 { 190 + struct dma_device dma; 191 + 192 + void __iomem *regs; 193 + 194 + /* channels */ 195 + unsigned short all_chan_mask; 196 + struct idma64_chan *chan; 197 + }; 198 + 199 + static inline struct idma64 *to_idma64(struct dma_device *ddev) 200 + { 201 + return container_of(ddev, struct idma64, dma); 202 + } 203 + 204 + static inline u32 idma64_readl(struct idma64 *idma64, int offset) 205 + { 206 + return readl(idma64->regs + offset); 207 + } 208 + 209 + static inline void idma64_writel(struct idma64 *idma64, int offset, u32 value) 210 + { 211 + writel(value, idma64->regs + offset); 212 + } 213 + 214 + #define dma_readl(idma64, reg) \ 215 + idma64_readl(idma64, IDMA64_##reg) 216 + #define dma_writel(idma64, reg, value) \ 217 + idma64_writel(idma64, IDMA64_##reg, (value)) 218 + 219 + /** 220 + * struct idma64_chip - representation of DesignWare DMA controller hardware 221 + * @dev: struct device of the DMA controller 222 + * @irq: irq line 223 
+ * @regs: memory mapped I/O space 224 + * @idma64: struct idma64 that is filled by idma64_probe() 225 + */ 226 + struct idma64_chip { 227 + struct device *dev; 228 + int irq; 229 + void __iomem *regs; 230 + struct idma64 *idma64; 231 + }; 232 + 233 + #endif /* __DMA_IDMA64_H__ */
+71 -1
drivers/idle/intel_idle.c
··· 591 591 .enter = NULL } 592 592 }; 593 593 594 + static struct cpuidle_state skl_cstates[] = { 595 + { 596 + .name = "C1-SKL", 597 + .desc = "MWAIT 0x00", 598 + .flags = MWAIT2flg(0x00), 599 + .exit_latency = 2, 600 + .target_residency = 2, 601 + .enter = &intel_idle, 602 + .enter_freeze = intel_idle_freeze, }, 603 + { 604 + .name = "C1E-SKL", 605 + .desc = "MWAIT 0x01", 606 + .flags = MWAIT2flg(0x01), 607 + .exit_latency = 10, 608 + .target_residency = 20, 609 + .enter = &intel_idle, 610 + .enter_freeze = intel_idle_freeze, }, 611 + { 612 + .name = "C3-SKL", 613 + .desc = "MWAIT 0x10", 614 + .flags = MWAIT2flg(0x10) | CPUIDLE_FLAG_TLB_FLUSHED, 615 + .exit_latency = 70, 616 + .target_residency = 100, 617 + .enter = &intel_idle, 618 + .enter_freeze = intel_idle_freeze, }, 619 + { 620 + .name = "C6-SKL", 621 + .desc = "MWAIT 0x20", 622 + .flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED, 623 + .exit_latency = 75, 624 + .target_residency = 200, 625 + .enter = &intel_idle, 626 + .enter_freeze = intel_idle_freeze, }, 627 + { 628 + .name = "C7s-SKL", 629 + .desc = "MWAIT 0x33", 630 + .flags = MWAIT2flg(0x33) | CPUIDLE_FLAG_TLB_FLUSHED, 631 + .exit_latency = 124, 632 + .target_residency = 800, 633 + .enter = &intel_idle, 634 + .enter_freeze = intel_idle_freeze, }, 635 + { 636 + .name = "C8-SKL", 637 + .desc = "MWAIT 0x40", 638 + .flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED, 639 + .exit_latency = 174, 640 + .target_residency = 800, 641 + .enter = &intel_idle, 642 + .enter_freeze = intel_idle_freeze, }, 643 + { 644 + .name = "C10-SKL", 645 + .desc = "MWAIT 0x60", 646 + .flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED, 647 + .exit_latency = 890, 648 + .target_residency = 5000, 649 + .enter = &intel_idle, 650 + .enter_freeze = intel_idle_freeze, }, 651 + { 652 + .enter = NULL } 653 + }; 654 + 594 655 static struct cpuidle_state atom_cstates[] = { 595 656 { 596 657 .name = "C1E-ATM", ··· 871 810 .disable_promotion_to_c1e = true, 872 811 }; 873 812 813 + static 
const struct idle_cpu idle_cpu_skl = { 814 + .state_table = skl_cstates, 815 + .disable_promotion_to_c1e = true, 816 + }; 817 + 818 + 874 819 static const struct idle_cpu idle_cpu_avn = { 875 820 .state_table = avn_cstates, 876 821 .disable_promotion_to_c1e = true, ··· 911 844 ICPU(0x47, idle_cpu_bdw), 912 845 ICPU(0x4f, idle_cpu_bdw), 913 846 ICPU(0x56, idle_cpu_bdw), 847 + ICPU(0x4e, idle_cpu_skl), 848 + ICPU(0x5e, idle_cpu_skl), 914 849 {} 915 850 }; 916 851 MODULE_DEVICE_TABLE(x86cpu, intel_idle_ids); ··· 1034 965 for (cstate = 0; cstate < CPUIDLE_STATE_MAX; ++cstate) { 1035 966 int num_substates, mwait_hint, mwait_cstate; 1036 967 1037 - if (cpuidle_state_table[cstate].enter == NULL) 968 + if ((cpuidle_state_table[cstate].enter == NULL) && 969 + (cpuidle_state_table[cstate].enter_freeze == NULL)) 1038 970 break; 1039 971 1040 972 if (cstate + 1 > max_cstate) {
+1
drivers/mailbox/Kconfig
··· 46 46 config PCC 47 47 bool "Platform Communication Channel Driver" 48 48 depends on ACPI 49 + default n 49 50 help 50 51 ACPI 5.0+ spec defines a generic mode of communication 51 52 between the OS and a platform such as the BMC. This medium
+7 -1
drivers/mailbox/pcc.c
··· 352 352 353 353 return 0; 354 354 } 355 - device_initcall(pcc_init); 355 + 356 + /* 357 + * Make PCC init postcore so that users of this mailbox 358 + * such as the ACPI Processor driver have it available 359 + * at their init. 360 + */ 361 + postcore_initcall(pcc_init);
+23
drivers/mfd/Kconfig
··· 328 328 thermal, charger and related power management functions 329 329 on these systems. 330 330 331 + config MFD_INTEL_LPSS 332 + tristate 333 + select COMMON_CLK 334 + select MFD_CORE 335 + 336 + config MFD_INTEL_LPSS_ACPI 337 + tristate "Intel Low Power Subsystem support in ACPI mode" 338 + select MFD_INTEL_LPSS 339 + depends on X86 && ACPI 340 + help 341 + This driver supports Intel Low Power Subsystem (LPSS) devices such as 342 + I2C, SPI and HS-UART starting from Intel Sunrisepoint (Intel Skylake 343 + PCH) in ACPI mode. 344 + 345 + config MFD_INTEL_LPSS_PCI 346 + tristate "Intel Low Power Subsystem support in PCI mode" 347 + select MFD_INTEL_LPSS 348 + depends on X86 && PCI 349 + help 350 + This driver supports Intel Low Power Subsystem (LPSS) devices such as 351 + I2C, SPI and HS-UART starting from Intel Sunrisepoint (Intel Skylake 352 + PCH) in PCI mode. 353 + 331 354 config MFD_INTEL_MSIC 332 355 bool "Intel MSIC" 333 356 depends on INTEL_SCU_IPC
+3
drivers/mfd/Makefile
··· 161 161 obj-$(CONFIG_MFD_TPS65090) += tps65090.o 162 162 obj-$(CONFIG_MFD_AAT2870_CORE) += aat2870-core.o 163 163 obj-$(CONFIG_MFD_ATMEL_HLCDC) += atmel-hlcdc.o 164 + obj-$(CONFIG_MFD_INTEL_LPSS) += intel-lpss.o 165 + obj-$(CONFIG_MFD_INTEL_LPSS_PCI) += intel-lpss-pci.o 166 + obj-$(CONFIG_MFD_INTEL_LPSS_ACPI) += intel-lpss-acpi.o 164 167 obj-$(CONFIG_MFD_INTEL_MSIC) += intel_msic.o 165 168 obj-$(CONFIG_MFD_PALMAS) += palmas.o 166 169 obj-$(CONFIG_MFD_VIPERBOARD) += viperboard.o
+84
drivers/mfd/intel-lpss-acpi.c
New file:

```c
/*
 * Intel LPSS ACPI support.
 *
 * Copyright (C) 2015, Intel Corporation
 *
 * Authors: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
 *          Mika Westerberg <mika.westerberg@linux.intel.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#include <linux/acpi.h>
#include <linux/ioport.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/platform_device.h>

#include "intel-lpss.h"

static const struct intel_lpss_platform_info spt_info = {
	.clk_rate = 120000000,
};

static const struct acpi_device_id intel_lpss_acpi_ids[] = {
	/* SPT */
	{ "INT3446", (kernel_ulong_t)&spt_info },
	{ "INT3447", (kernel_ulong_t)&spt_info },
	{ }
};
MODULE_DEVICE_TABLE(acpi, intel_lpss_acpi_ids);

static int intel_lpss_acpi_probe(struct platform_device *pdev)
{
	struct intel_lpss_platform_info *info;
	const struct acpi_device_id *id;

	id = acpi_match_device(intel_lpss_acpi_ids, &pdev->dev);
	if (!id)
		return -ENODEV;

	info = devm_kmemdup(&pdev->dev, (void *)id->driver_data, sizeof(*info),
			    GFP_KERNEL);
	if (!info)
		return -ENOMEM;

	info->mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	info->irq = platform_get_irq(pdev, 0);

	pm_runtime_set_active(&pdev->dev);
	pm_runtime_enable(&pdev->dev);

	return intel_lpss_probe(&pdev->dev, info);
}

static int intel_lpss_acpi_remove(struct platform_device *pdev)
{
	intel_lpss_remove(&pdev->dev);
	pm_runtime_disable(&pdev->dev);

	return 0;
}

static INTEL_LPSS_PM_OPS(intel_lpss_acpi_pm_ops);

static struct platform_driver intel_lpss_acpi_driver = {
	.probe = intel_lpss_acpi_probe,
	.remove = intel_lpss_acpi_remove,
	.driver = {
		.name = "intel-lpss",
		.acpi_match_table = intel_lpss_acpi_ids,
		.pm = &intel_lpss_acpi_pm_ops,
	},
};

module_platform_driver(intel_lpss_acpi_driver);

MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
MODULE_AUTHOR("Mika Westerberg <mika.westerberg@linux.intel.com>");
MODULE_DESCRIPTION("Intel LPSS ACPI driver");
MODULE_LICENSE("GPL v2");
```
+113
drivers/mfd/intel-lpss-pci.c
New file:

```c
/*
 * Intel LPSS PCI support.
 *
 * Copyright (C) 2015, Intel Corporation
 *
 * Authors: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
 *          Mika Westerberg <mika.westerberg@linux.intel.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#include <linux/ioport.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>

#include "intel-lpss.h"

static int intel_lpss_pci_probe(struct pci_dev *pdev,
				const struct pci_device_id *id)
{
	struct intel_lpss_platform_info *info;
	int ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	info = devm_kmemdup(&pdev->dev, (void *)id->driver_data, sizeof(*info),
			    GFP_KERNEL);
	if (!info)
		return -ENOMEM;

	info->mem = &pdev->resource[0];
	info->irq = pdev->irq;

	/* Probably it is enough to set this for iDMA capable devices only */
	pci_set_master(pdev);

	ret = intel_lpss_probe(&pdev->dev, info);
	if (ret)
		return ret;

	pm_runtime_put(&pdev->dev);
	pm_runtime_allow(&pdev->dev);

	return 0;
}

static void intel_lpss_pci_remove(struct pci_dev *pdev)
{
	pm_runtime_forbid(&pdev->dev);
	pm_runtime_get_sync(&pdev->dev);

	intel_lpss_remove(&pdev->dev);
}

static INTEL_LPSS_PM_OPS(intel_lpss_pci_pm_ops);

static const struct intel_lpss_platform_info spt_info = {
	.clk_rate = 120000000,
};

static const struct intel_lpss_platform_info spt_uart_info = {
	.clk_rate = 120000000,
	.clk_con_id = "baudclk",
};

static const struct pci_device_id intel_lpss_pci_ids[] = {
	/* SPT-LP */
	{ PCI_VDEVICE(INTEL, 0x9d27), (kernel_ulong_t)&spt_uart_info },
	{ PCI_VDEVICE(INTEL, 0x9d28), (kernel_ulong_t)&spt_uart_info },
	{ PCI_VDEVICE(INTEL, 0x9d29), (kernel_ulong_t)&spt_info },
	{ PCI_VDEVICE(INTEL, 0x9d2a), (kernel_ulong_t)&spt_info },
	{ PCI_VDEVICE(INTEL, 0x9d60), (kernel_ulong_t)&spt_info },
	{ PCI_VDEVICE(INTEL, 0x9d61), (kernel_ulong_t)&spt_info },
	{ PCI_VDEVICE(INTEL, 0x9d62), (kernel_ulong_t)&spt_info },
	{ PCI_VDEVICE(INTEL, 0x9d63), (kernel_ulong_t)&spt_info },
	{ PCI_VDEVICE(INTEL, 0x9d64), (kernel_ulong_t)&spt_info },
	{ PCI_VDEVICE(INTEL, 0x9d65), (kernel_ulong_t)&spt_info },
	{ PCI_VDEVICE(INTEL, 0x9d66), (kernel_ulong_t)&spt_uart_info },
	/* SPT-H */
	{ PCI_VDEVICE(INTEL, 0xa127), (kernel_ulong_t)&spt_uart_info },
	{ PCI_VDEVICE(INTEL, 0xa128), (kernel_ulong_t)&spt_uart_info },
	{ PCI_VDEVICE(INTEL, 0xa129), (kernel_ulong_t)&spt_info },
	{ PCI_VDEVICE(INTEL, 0xa12a), (kernel_ulong_t)&spt_info },
	{ PCI_VDEVICE(INTEL, 0xa160), (kernel_ulong_t)&spt_info },
	{ PCI_VDEVICE(INTEL, 0xa161), (kernel_ulong_t)&spt_info },
	{ PCI_VDEVICE(INTEL, 0xa166), (kernel_ulong_t)&spt_uart_info },
	{ }
};
MODULE_DEVICE_TABLE(pci, intel_lpss_pci_ids);

static struct pci_driver intel_lpss_pci_driver = {
	.name = "intel-lpss",
	.id_table = intel_lpss_pci_ids,
	.probe = intel_lpss_pci_probe,
	.remove = intel_lpss_pci_remove,
	.driver = {
		.pm = &intel_lpss_pci_pm_ops,
	},
};

module_pci_driver(intel_lpss_pci_driver);

MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
MODULE_AUTHOR("Mika Westerberg <mika.westerberg@linux.intel.com>");
MODULE_DESCRIPTION("Intel LPSS PCI driver");
MODULE_LICENSE("GPL v2");
```
+524
drivers/mfd/intel-lpss.c
New file:

```c
/*
 * Intel Sunrisepoint LPSS core support.
 *
 * Copyright (C) 2015, Intel Corporation
 *
 * Authors: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
 *          Mika Westerberg <mika.westerberg@linux.intel.com>
 *          Heikki Krogerus <heikki.krogerus@linux.intel.com>
 *          Jarkko Nikula <jarkko.nikula@linux.intel.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#include <linux/clk.h>
#include <linux/clkdev.h>
#include <linux/clk-provider.h>
#include <linux/debugfs.h>
#include <linux/idr.h>
#include <linux/ioport.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mfd/core.h>
#include <linux/pm_qos.h>
#include <linux/pm_runtime.h>
#include <linux/seq_file.h>

#include "intel-lpss.h"

#define LPSS_DEV_OFFSET		0x000
#define LPSS_DEV_SIZE		0x200
#define LPSS_PRIV_OFFSET	0x200
#define LPSS_PRIV_SIZE		0x100
#define LPSS_IDMA64_OFFSET	0x800
#define LPSS_IDMA64_SIZE	0x800

/* Offsets from lpss->priv */
#define LPSS_PRIV_RESETS		0x04
#define LPSS_PRIV_RESETS_FUNC		BIT(2)
#define LPSS_PRIV_RESETS_IDMA		0x3

#define LPSS_PRIV_ACTIVELTR		0x10
#define LPSS_PRIV_IDLELTR		0x14

#define LPSS_PRIV_LTR_REQ		BIT(15)
#define LPSS_PRIV_LTR_SCALE_MASK	0xc00
#define LPSS_PRIV_LTR_SCALE_1US		0x800
#define LPSS_PRIV_LTR_SCALE_32US	0xc00
#define LPSS_PRIV_LTR_VALUE_MASK	0x3ff

#define LPSS_PRIV_SSP_REG		0x20
#define LPSS_PRIV_SSP_REG_DIS_DMA_FIN	BIT(0)

#define LPSS_PRIV_REMAP_ADDR_LO		0x40
#define LPSS_PRIV_REMAP_ADDR_HI		0x44

#define LPSS_PRIV_CAPS			0xfc
#define LPSS_PRIV_CAPS_NO_IDMA		BIT(8)
#define LPSS_PRIV_CAPS_TYPE_SHIFT	4
#define LPSS_PRIV_CAPS_TYPE_MASK	(0xf << LPSS_PRIV_CAPS_TYPE_SHIFT)

/* This matches the type field in CAPS register */
enum intel_lpss_dev_type {
	LPSS_DEV_I2C = 0,
	LPSS_DEV_UART,
	LPSS_DEV_SPI,
};

struct intel_lpss {
	const struct intel_lpss_platform_info *info;
	enum intel_lpss_dev_type type;
	struct clk *clk;
	struct clk_lookup *clock;
	const struct mfd_cell *cell;
	struct device *dev;
	void __iomem *priv;
	int devid;
	u32 caps;
	u32 active_ltr;
	u32 idle_ltr;
	struct dentry *debugfs;
};

static const struct resource intel_lpss_dev_resources[] = {
	DEFINE_RES_MEM_NAMED(LPSS_DEV_OFFSET, LPSS_DEV_SIZE, "lpss_dev"),
	DEFINE_RES_MEM_NAMED(LPSS_PRIV_OFFSET, LPSS_PRIV_SIZE, "lpss_priv"),
	DEFINE_RES_IRQ(0),
};

static const struct resource intel_lpss_idma64_resources[] = {
	DEFINE_RES_MEM(LPSS_IDMA64_OFFSET, LPSS_IDMA64_SIZE),
	DEFINE_RES_IRQ(0),
};

#define LPSS_IDMA64_DRIVER_NAME		"idma64"

/*
 * Cells needs to be ordered so that the iDMA is created first. This is
 * because we need to be sure the DMA is available when the host controller
 * driver is probed.
 */
static const struct mfd_cell intel_lpss_idma64_cell = {
	.name = LPSS_IDMA64_DRIVER_NAME,
	.num_resources = ARRAY_SIZE(intel_lpss_idma64_resources),
	.resources = intel_lpss_idma64_resources,
};

static const struct mfd_cell intel_lpss_i2c_cell = {
	.name = "i2c_designware",
	.num_resources = ARRAY_SIZE(intel_lpss_dev_resources),
	.resources = intel_lpss_dev_resources,
};

static const struct mfd_cell intel_lpss_uart_cell = {
	.name = "dw-apb-uart",
	.num_resources = ARRAY_SIZE(intel_lpss_dev_resources),
	.resources = intel_lpss_dev_resources,
};

static const struct mfd_cell intel_lpss_spi_cell = {
	.name = "pxa2xx-spi",
	.num_resources = ARRAY_SIZE(intel_lpss_dev_resources),
	.resources = intel_lpss_dev_resources,
};

static DEFINE_IDA(intel_lpss_devid_ida);
static struct dentry *intel_lpss_debugfs;

static int intel_lpss_request_dma_module(const char *name)
{
	static bool intel_lpss_dma_requested;

	if (intel_lpss_dma_requested)
		return 0;

	intel_lpss_dma_requested = true;
	return request_module("%s", name);
}

static void intel_lpss_cache_ltr(struct intel_lpss *lpss)
{
	lpss->active_ltr = readl(lpss->priv + LPSS_PRIV_ACTIVELTR);
	lpss->idle_ltr = readl(lpss->priv + LPSS_PRIV_IDLELTR);
}

static int intel_lpss_debugfs_add(struct intel_lpss *lpss)
{
	struct dentry *dir;

	dir = debugfs_create_dir(dev_name(lpss->dev), intel_lpss_debugfs);
	if (IS_ERR(dir))
		return PTR_ERR(dir);

	/* Cache the values into lpss structure */
	intel_lpss_cache_ltr(lpss);

	debugfs_create_x32("capabilities", S_IRUGO, dir, &lpss->caps);
	debugfs_create_x32("active_ltr", S_IRUGO, dir, &lpss->active_ltr);
	debugfs_create_x32("idle_ltr", S_IRUGO, dir, &lpss->idle_ltr);

	lpss->debugfs = dir;
	return 0;
}

static void intel_lpss_debugfs_remove(struct intel_lpss *lpss)
{
	debugfs_remove_recursive(lpss->debugfs);
}

static void intel_lpss_ltr_set(struct device *dev, s32 val)
{
	struct intel_lpss *lpss = dev_get_drvdata(dev);
	u32 ltr;

	/*
	 * Program latency tolerance (LTR) accordingly what has been asked
	 * by the PM QoS layer or disable it in case we were passed
	 * negative value or PM_QOS_LATENCY_ANY.
	 */
	ltr = readl(lpss->priv + LPSS_PRIV_ACTIVELTR);

	if (val == PM_QOS_LATENCY_ANY || val < 0) {
		ltr &= ~LPSS_PRIV_LTR_REQ;
	} else {
		ltr |= LPSS_PRIV_LTR_REQ;
		ltr &= ~LPSS_PRIV_LTR_SCALE_MASK;
		ltr &= ~LPSS_PRIV_LTR_VALUE_MASK;

		if (val > LPSS_PRIV_LTR_VALUE_MASK)
			ltr |= LPSS_PRIV_LTR_SCALE_32US | val >> 5;
		else
			ltr |= LPSS_PRIV_LTR_SCALE_1US | val;
	}

	if (ltr == lpss->active_ltr)
		return;

	writel(ltr, lpss->priv + LPSS_PRIV_ACTIVELTR);
	writel(ltr, lpss->priv + LPSS_PRIV_IDLELTR);

	/* Cache the values into lpss structure */
	intel_lpss_cache_ltr(lpss);
}

static void intel_lpss_ltr_expose(struct intel_lpss *lpss)
{
	lpss->dev->power.set_latency_tolerance = intel_lpss_ltr_set;
	dev_pm_qos_expose_latency_tolerance(lpss->dev);
}

static void intel_lpss_ltr_hide(struct intel_lpss *lpss)
{
	dev_pm_qos_hide_latency_tolerance(lpss->dev);
	lpss->dev->power.set_latency_tolerance = NULL;
}

static int intel_lpss_assign_devs(struct intel_lpss *lpss)
{
	unsigned int type;

	type = lpss->caps & LPSS_PRIV_CAPS_TYPE_MASK;
	type >>= LPSS_PRIV_CAPS_TYPE_SHIFT;

	switch (type) {
	case LPSS_DEV_I2C:
		lpss->cell = &intel_lpss_i2c_cell;
		break;
	case LPSS_DEV_UART:
		lpss->cell = &intel_lpss_uart_cell;
		break;
	case LPSS_DEV_SPI:
		lpss->cell = &intel_lpss_spi_cell;
		break;
	default:
		return -ENODEV;
	}

	lpss->type = type;

	return 0;
}

static bool intel_lpss_has_idma(const struct intel_lpss *lpss)
{
	return (lpss->caps & LPSS_PRIV_CAPS_NO_IDMA) == 0;
}

static void intel_lpss_set_remap_addr(const struct intel_lpss *lpss)
{
	resource_size_t addr = lpss->info->mem->start;

	writel(addr, lpss->priv + LPSS_PRIV_REMAP_ADDR_LO);
#if BITS_PER_LONG > 32
	writel(addr >> 32, lpss->priv + LPSS_PRIV_REMAP_ADDR_HI);
#else
	writel(0, lpss->priv + LPSS_PRIV_REMAP_ADDR_HI);
#endif
}

static void intel_lpss_deassert_reset(const struct intel_lpss *lpss)
{
	u32 value = LPSS_PRIV_RESETS_FUNC | LPSS_PRIV_RESETS_IDMA;

	/* Bring out the device from reset */
	writel(value, lpss->priv + LPSS_PRIV_RESETS);
}

static void intel_lpss_init_dev(const struct intel_lpss *lpss)
{
	u32 value = LPSS_PRIV_SSP_REG_DIS_DMA_FIN;

	intel_lpss_deassert_reset(lpss);

	if (!intel_lpss_has_idma(lpss))
		return;

	intel_lpss_set_remap_addr(lpss);

	/* Make sure that SPI multiblock DMA transfers are re-enabled */
	if (lpss->type == LPSS_DEV_SPI)
		writel(value, lpss->priv + LPSS_PRIV_SSP_REG);
}

static void intel_lpss_unregister_clock_tree(struct clk *clk)
{
	struct clk *parent;

	while (clk) {
		parent = clk_get_parent(clk);
		clk_unregister(clk);
		clk = parent;
	}
}

static int intel_lpss_register_clock_divider(struct intel_lpss *lpss,
					     const char *devname,
					     struct clk **clk)
{
	char name[32];
	struct clk *tmp = *clk;

	snprintf(name, sizeof(name), "%s-enable", devname);
	tmp = clk_register_gate(NULL, name, __clk_get_name(tmp), 0,
				lpss->priv, 0, 0, NULL);
	if (IS_ERR(tmp))
		return PTR_ERR(tmp);

	snprintf(name, sizeof(name), "%s-div", devname);
	tmp = clk_register_fractional_divider(NULL, name, __clk_get_name(tmp),
					      0, lpss->priv, 1, 15, 16, 15, 0,
					      NULL);
	if (IS_ERR(tmp))
		return PTR_ERR(tmp);
	*clk = tmp;

	snprintf(name, sizeof(name), "%s-update", devname);
	tmp = clk_register_gate(NULL, name, __clk_get_name(tmp),
				CLK_SET_RATE_PARENT, lpss->priv, 31, 0, NULL);
	if (IS_ERR(tmp))
		return PTR_ERR(tmp);
	*clk = tmp;

	return 0;
}

static int intel_lpss_register_clock(struct intel_lpss *lpss)
{
	const struct mfd_cell *cell = lpss->cell;
	struct clk *clk;
	char devname[24];
	int ret;

	if (!lpss->info->clk_rate)
		return 0;

	/* Root clock */
	clk = clk_register_fixed_rate(NULL, dev_name(lpss->dev), NULL,
				      CLK_IS_ROOT, lpss->info->clk_rate);
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	snprintf(devname, sizeof(devname), "%s.%d", cell->name, lpss->devid);

	/*
	 * Support for clock divider only if it has some preset value.
	 * Otherwise we assume that the divider is not used.
	 */
	if (lpss->type != LPSS_DEV_I2C) {
		ret = intel_lpss_register_clock_divider(lpss, devname, &clk);
		if (ret)
			goto err_clk_register;
	}

	ret = -ENOMEM;

	/* Clock for the host controller */
	lpss->clock = clkdev_create(clk, lpss->info->clk_con_id, "%s", devname);
	if (!lpss->clock)
		goto err_clk_register;

	lpss->clk = clk;

	return 0;

err_clk_register:
	intel_lpss_unregister_clock_tree(clk);

	return ret;
}

static void intel_lpss_unregister_clock(struct intel_lpss *lpss)
{
	if (IS_ERR_OR_NULL(lpss->clk))
		return;

	clkdev_drop(lpss->clock);
	intel_lpss_unregister_clock_tree(lpss->clk);
}

int intel_lpss_probe(struct device *dev,
		     const struct intel_lpss_platform_info *info)
{
	struct intel_lpss *lpss;
	int ret;

	if (!info || !info->mem || info->irq <= 0)
		return -EINVAL;

	lpss = devm_kzalloc(dev, sizeof(*lpss), GFP_KERNEL);
	if (!lpss)
		return -ENOMEM;

	lpss->priv = devm_ioremap(dev, info->mem->start + LPSS_PRIV_OFFSET,
				  LPSS_PRIV_SIZE);
	if (!lpss->priv)
		return -ENOMEM;

	lpss->info = info;
	lpss->dev = dev;
	lpss->caps = readl(lpss->priv + LPSS_PRIV_CAPS);

	dev_set_drvdata(dev, lpss);

	ret = intel_lpss_assign_devs(lpss);
	if (ret)
		return ret;

	intel_lpss_init_dev(lpss);

	lpss->devid = ida_simple_get(&intel_lpss_devid_ida, 0, 0, GFP_KERNEL);
	if (lpss->devid < 0)
		return lpss->devid;

	ret = intel_lpss_register_clock(lpss);
	if (ret)
		goto err_clk_register;

	intel_lpss_ltr_expose(lpss);

	ret = intel_lpss_debugfs_add(lpss);
	if (ret)
		dev_warn(dev, "Failed to create debugfs entries\n");

	if (intel_lpss_has_idma(lpss)) {
		/*
		 * Ensure the DMA driver is loaded before the host
		 * controller device appears, so that the host controller
		 * driver can request its DMA channels as early as
		 * possible.
		 *
		 * If the DMA module is not there that's OK as well.
		 */
		intel_lpss_request_dma_module(LPSS_IDMA64_DRIVER_NAME);

		ret = mfd_add_devices(dev, lpss->devid, &intel_lpss_idma64_cell,
				      1, info->mem, info->irq, NULL);
		if (ret)
			dev_warn(dev, "Failed to add %s, fallback to PIO\n",
				 LPSS_IDMA64_DRIVER_NAME);
	}

	ret = mfd_add_devices(dev, lpss->devid, lpss->cell,
			      1, info->mem, info->irq, NULL);
	if (ret)
		goto err_remove_ltr;

	return 0;

err_remove_ltr:
	intel_lpss_debugfs_remove(lpss);
	intel_lpss_ltr_hide(lpss);

err_clk_register:
	ida_simple_remove(&intel_lpss_devid_ida, lpss->devid);

	return ret;
}
EXPORT_SYMBOL_GPL(intel_lpss_probe);

void intel_lpss_remove(struct device *dev)
{
	struct intel_lpss *lpss = dev_get_drvdata(dev);

	mfd_remove_devices(dev);
	intel_lpss_debugfs_remove(lpss);
	intel_lpss_ltr_hide(lpss);
	intel_lpss_unregister_clock(lpss);
	ida_simple_remove(&intel_lpss_devid_ida, lpss->devid);
}
EXPORT_SYMBOL_GPL(intel_lpss_remove);

static int resume_lpss_device(struct device *dev, void *data)
{
	pm_runtime_resume(dev);
	return 0;
}

int intel_lpss_prepare(struct device *dev)
{
	/*
	 * Resume both child devices before entering system sleep. This
	 * ensures that they are in proper state before they get suspended.
	 */
	device_for_each_child_reverse(dev, NULL, resume_lpss_device);
	return 0;
}
EXPORT_SYMBOL_GPL(intel_lpss_prepare);

int intel_lpss_suspend(struct device *dev)
{
	return 0;
}
EXPORT_SYMBOL_GPL(intel_lpss_suspend);

int intel_lpss_resume(struct device *dev)
{
	struct intel_lpss *lpss = dev_get_drvdata(dev);

	intel_lpss_init_dev(lpss);

	return 0;
}
EXPORT_SYMBOL_GPL(intel_lpss_resume);

static int __init intel_lpss_init(void)
{
	intel_lpss_debugfs = debugfs_create_dir("intel_lpss", NULL);
	return 0;
}
module_init(intel_lpss_init);

static void __exit intel_lpss_exit(void)
{
	debugfs_remove(intel_lpss_debugfs);
}
module_exit(intel_lpss_exit);

MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
MODULE_AUTHOR("Mika Westerberg <mika.westerberg@linux.intel.com>");
MODULE_AUTHOR("Heikki Krogerus <heikki.krogerus@linux.intel.com>");
MODULE_AUTHOR("Jarkko Nikula <jarkko.nikula@linux.intel.com>");
MODULE_DESCRIPTION("Intel LPSS core driver");
MODULE_LICENSE("GPL v2");
```
+62
drivers/mfd/intel-lpss.h
New file:

```c
/*
 * Intel LPSS core support.
 *
 * Copyright (C) 2015, Intel Corporation
 *
 * Authors: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
 *          Mika Westerberg <mika.westerberg@linux.intel.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#ifndef __MFD_INTEL_LPSS_H
#define __MFD_INTEL_LPSS_H

struct device;
struct resource;

struct intel_lpss_platform_info {
	struct resource *mem;
	int irq;
	unsigned long clk_rate;
	const char *clk_con_id;
};

int intel_lpss_probe(struct device *dev,
		     const struct intel_lpss_platform_info *info);
void intel_lpss_remove(struct device *dev);

#ifdef CONFIG_PM
int intel_lpss_prepare(struct device *dev);
int intel_lpss_suspend(struct device *dev);
int intel_lpss_resume(struct device *dev);

#ifdef CONFIG_PM_SLEEP
#define INTEL_LPSS_SLEEP_PM_OPS			\
	.prepare = intel_lpss_prepare,		\
	.suspend = intel_lpss_suspend,		\
	.resume = intel_lpss_resume,		\
	.freeze = intel_lpss_suspend,		\
	.thaw = intel_lpss_resume,		\
	.poweroff = intel_lpss_suspend,		\
	.restore = intel_lpss_resume,
#endif

#define INTEL_LPSS_RUNTIME_PM_OPS		\
	.runtime_suspend = intel_lpss_suspend,	\
	.runtime_resume = intel_lpss_resume,

#else /* !CONFIG_PM */
#define INTEL_LPSS_SLEEP_PM_OPS
#define INTEL_LPSS_RUNTIME_PM_OPS
#endif /* CONFIG_PM */

#define INTEL_LPSS_PM_OPS(name)			\
	const struct dev_pm_ops name = {	\
		INTEL_LPSS_SLEEP_PM_OPS		\
		INTEL_LPSS_RUNTIME_PM_OPS	\
	}

#endif /* __MFD_INTEL_LPSS_H */
```
+1 -1
drivers/mfd/mfd-core.c
```diff
···
 {
 	atomic_t *cnts = NULL;
 
-	device_for_each_child(parent, &cnts, mfd_remove_devices_fn);
+	device_for_each_child_reverse(parent, &cnts, mfd_remove_devices_fn);
 	kfree(cnts);
 }
 EXPORT_SYMBOL(mfd_remove_devices);
```
+1 -1
drivers/power/avs/Kconfig
```diff
···
 
 config ROCKCHIP_IODOMAIN
 	tristate "Rockchip IO domain support"
-	depends on ARCH_ROCKCHIP && OF
+	depends on POWER_AVS && ARCH_ROCKCHIP && OF
 	help
 	  Say y here to enable support io domains on Rockchip SoCs. It is
 	  necessary for the io domain setting of the SoC to match the
```
+59
drivers/power/avs/rockchip-io-domain.c
```diff
···
 #define RK3288_SOC_CON2_FLASH0		BIT(7)
 #define RK3288_SOC_FLASH_SUPPLY_NUM	2
 
+#define RK3368_SOC_CON15		0x43c
+#define RK3368_SOC_CON15_FLASH0		BIT(14)
+#define RK3368_SOC_FLASH_SUPPLY_NUM	2
+
 struct rockchip_iodomain;
 
 /**
···
 		dev_warn(iod->dev, "couldn't update flash0 ctrl\n");
 }
 
+static void rk3368_iodomain_init(struct rockchip_iodomain *iod)
+{
+	int ret;
+	u32 val;
+
+	/* if no flash supply we should leave things alone */
+	if (!iod->supplies[RK3368_SOC_FLASH_SUPPLY_NUM].reg)
+		return;
+
+	/*
+	 * set flash0 iodomain to also use this framework
+	 * instead of a special gpio.
+	 */
+	val = RK3368_SOC_CON15_FLASH0 | (RK3368_SOC_CON15_FLASH0 << 16);
+	ret = regmap_write(iod->grf, RK3368_SOC_CON15, val);
+	if (ret < 0)
+		dev_warn(iod->dev, "couldn't update flash0 ctrl\n");
+}
+
 /*
  * On the rk3188 the io-domains are handled by a shared register with the
  * lower 8 bits being still being continuing drive-strength settings.
···
 	.init = rk3288_iodomain_init,
 };
 
+static const struct rockchip_iodomain_soc_data soc_data_rk3368 = {
+	.grf_offset = 0x900,
+	.supply_names = {
+		NULL,		/* reserved */
+		"dvp",		/* DVPIO_VDD */
+		"flash0",	/* FLASH0_VDD (emmc) */
+		"wifi",		/* APIO2_VDD (sdio0) */
+		NULL,
+		"audio",	/* APIO3_VDD */
+		"sdcard",	/* SDMMC0_VDD (sdmmc) */
+		"gpio30",	/* APIO1_VDD */
+		"gpio1830",	/* APIO4_VDD (gpujtag) */
+	},
+	.init = rk3368_iodomain_init,
+};
+
+static const struct rockchip_iodomain_soc_data soc_data_rk3368_pmu = {
+	.grf_offset = 0x100,
+	.supply_names = {
+		NULL,
+		NULL,
+		NULL,
+		NULL,
+		"pmu",	/* PMU IO domain */
+		"vop",	/* LCDC IO domain */
+	},
+};
+
 static const struct of_device_id rockchip_iodomain_match[] = {
 	{
 		.compatible = "rockchip,rk3188-io-voltage-domain",
···
 	{
 		.compatible = "rockchip,rk3288-io-voltage-domain",
 		.data = (void *)&soc_data_rk3288
+	},
+	{
+		.compatible = "rockchip,rk3368-io-voltage-domain",
+		.data = (void *)&soc_data_rk3368
+	},
+	{
+		.compatible = "rockchip,rk3368-pmu-io-voltage-domain",
+		.data = (void *)&soc_data_rk3368_pmu
 	},
 	{ /* sentinel */ },
 };
```
+6 -2
drivers/powercap/intel_rapl.c
```diff
···
 	RAPL_CPU(0x3f, rapl_defaults_hsw_server),/* Haswell servers */
 	RAPL_CPU(0x4f, rapl_defaults_hsw_server),/* Broadwell servers */
 	RAPL_CPU(0x45, rapl_defaults_core),/* Haswell ULT */
+	RAPL_CPU(0x47, rapl_defaults_core),/* Broadwell-H */
 	RAPL_CPU(0x4E, rapl_defaults_core),/* Skylake */
 	RAPL_CPU(0x4C, rapl_defaults_cht),/* Braswell/Cherryview */
 	RAPL_CPU(0x4A, rapl_defaults_tng),/* Tangier */
 	RAPL_CPU(0x56, rapl_defaults_core),/* Future Xeon */
 	RAPL_CPU(0x5A, rapl_defaults_ann),/* Annidale */
+	RAPL_CPU(0x5E, rapl_defaults_core),/* Skylake-H/S */
 	RAPL_CPU(0x57, rapl_defaults_hsw_server),/* Knights Landing */
 	{}
 };
···
 		pr_debug("remove package, undo power limit on %d: %s\n",
 			 rp->id, rd->name);
 		rapl_write_data_raw(rd, PL1_ENABLE, 0);
-		rapl_write_data_raw(rd, PL2_ENABLE, 0);
 		rapl_write_data_raw(rd, PL1_CLAMP, 0);
-		rapl_write_data_raw(rd, PL2_CLAMP, 0);
+		if (find_nr_power_limit(rd) > 1) {
+			rapl_write_data_raw(rd, PL2_ENABLE, 0);
+			rapl_write_data_raw(rd, PL2_CLAMP, 0);
+		}
 		if (rd->id == RAPL_DOMAIN_PACKAGE) {
 			rd_package = rd;
 			continue;
```
-1
drivers/video/fbdev/pxafb.c
```diff
···
 
 	switch (val) {
 	case CPUFREQ_ADJUST:
-	case CPUFREQ_INCOMPATIBLE:
 		pr_debug("min dma period: %d ps, "
 			"new clock %d kHz\n", pxafb_display_dma_period(var),
 			policy->max);
```
-1
drivers/video/fbdev/sa1100fb.c
```diff
···
 
 	switch (val) {
 	case CPUFREQ_ADJUST:
-	case CPUFREQ_INCOMPATIBLE:
 		dev_dbg(fbi->dev, "min dma period: %d ps, "
 			"new clock %d kHz\n", sa1100fb_min_dma_period(fbi),
 			policy->max);
```
+6 -10
drivers/xen/xen-acpi-processor.c
```diff
···
 
 	return 0;
 err_unregister:
-	for_each_possible_cpu(i) {
-		struct acpi_processor_performance *perf;
-		perf = per_cpu_ptr(acpi_perf_data, i);
-		acpi_processor_unregister_performance(perf, i);
-	}
+	for_each_possible_cpu(i)
+		acpi_processor_unregister_performance(i);
+
 err_out:
 	/* Freeing a NULL pointer is OK: alloc_percpu zeroes. */
 	free_acpi_perf_data();
···
 	kfree(acpi_ids_done);
 	kfree(acpi_id_present);
 	kfree(acpi_id_cst_present);
-	for_each_possible_cpu(i) {
-		struct acpi_processor_performance *perf;
-		perf = per_cpu_ptr(acpi_perf_data, i);
-		acpi_processor_unregister_performance(perf, i);
-	}
+	for_each_possible_cpu(i)
+		acpi_processor_unregister_performance(i);
+
 	free_acpi_perf_data();
 }
```
+1
include/acpi/acbuffer.h
```diff
···
  * (Intended for BIOS use only)
  */
 #define ACPI_PLD_REV1_BUFFER_SIZE               16	/* For Revision 1 of the buffer (From ACPI spec) */
+#define ACPI_PLD_REV2_BUFFER_SIZE               20	/* For Revision 2 of the buffer (From ACPI spec) */
 #define ACPI_PLD_BUFFER_SIZE                    20	/* For Revision 2 of the buffer (From ACPI spec) */
 
 /* First 32-bit dword, bits 0:32 */
```
-4
include/acpi/acconfig.h
```diff
···
 
 #define ACPI_ROOT_TABLE_SIZE_INCREMENT          4
 
-/* Maximum number of While() loop iterations before forced abort */
-
-#define ACPI_MAX_LOOP_ITERATIONS                0xFFFF
-
 /* Maximum sleep allowed via Sleep() operator */
 
 #define ACPI_MAX_SLEEP                          2000	/* 2000 millisec == two seconds */
```
+5 -2
include/acpi/acexcep.h
```diff
···
 #define AE_AML_BAD_RESOURCE_LENGTH      EXCEP_AML (0x001F)
 #define AE_AML_ILLEGAL_ADDRESS          EXCEP_AML (0x0020)
 #define AE_AML_INFINITE_LOOP            EXCEP_AML (0x0021)
+#define AE_AML_UNINITIALIZED_NODE       EXCEP_AML (0x0022)
 
-#define AE_CODE_AML_MAX                 0x0021
+#define AE_CODE_AML_MAX                 0x0022
 
 /*
  * Internal exceptions used for control
···
 	EXCEP_TXT("AE_AML_ILLEGAL_ADDRESS",
 		  "A memory, I/O, or PCI configuration address is invalid"),
 	EXCEP_TXT("AE_AML_INFINITE_LOOP",
-		  "An apparent infinite AML While loop, method was aborted")
+		  "An apparent infinite AML While loop, method was aborted"),
+	EXCEP_TXT("AE_AML_UNINITIALIZED_NODE",
+		  "A namespace node is uninitialized or unresolved")
 };
 
 static const struct acpi_exception_info acpi_gbl_exception_names_ctrl[] = {
```
+20 -1
include/acpi/acoutput.h
```diff
···
 #define ACPI_LV_DEBUG_OBJECT        0x00000002
 #define ACPI_LV_INFO                0x00000004
 #define ACPI_LV_REPAIR              0x00000008
-#define ACPI_LV_ALL_EXCEPTIONS      0x0000000F
+#define ACPI_LV_TRACE_POINT         0x00000010
+#define ACPI_LV_ALL_EXCEPTIONS      0x0000001F
 
 /* Trace verbosity level 1 [Standard Trace Level] */
···
 #define ACPI_DB_DEBUG_OBJECT        ACPI_DEBUG_LEVEL (ACPI_LV_DEBUG_OBJECT)
 #define ACPI_DB_INFO                ACPI_DEBUG_LEVEL (ACPI_LV_INFO)
 #define ACPI_DB_REPAIR              ACPI_DEBUG_LEVEL (ACPI_LV_REPAIR)
+#define ACPI_DB_TRACE_POINT         ACPI_DEBUG_LEVEL (ACPI_LV_TRACE_POINT)
 #define ACPI_DB_ALL_EXCEPTIONS      ACPI_DEBUG_LEVEL (ACPI_LV_ALL_EXCEPTIONS)
 
 /* Trace level -- also used in the global "DebugLevel" */
···
 #define ACPI_DEBUG_DEFAULT          (ACPI_LV_INFO | ACPI_LV_REPAIR)
 #define ACPI_NORMAL_DEFAULT         (ACPI_LV_INIT | ACPI_LV_DEBUG_OBJECT | ACPI_LV_REPAIR)
 #define ACPI_DEBUG_ALL              (ACPI_LV_AML_DISASSEMBLE | ACPI_LV_ALL_EXCEPTIONS | ACPI_LV_ALL)
+
+/*
+ * Global trace flags
+ */
+#define ACPI_TRACE_ENABLED          ((u32) 4)
+#define ACPI_TRACE_ONESHOT          ((u32) 2)
+#define ACPI_TRACE_OPCODE           ((u32) 1)
+
+/* Defaults for trace debugging level/layer */
+
+#define ACPI_TRACE_LEVEL_ALL        ACPI_LV_ALL
+#define ACPI_TRACE_LAYER_ALL        0x000001FF
+#define ACPI_TRACE_LEVEL_DEFAULT    ACPI_LV_TRACE_POINT
+#define ACPI_TRACE_LAYER_DEFAULT    ACPI_EXECUTER
 
 #if defined (ACPI_DEBUG_OUTPUT) || !defined (ACPI_NO_ERROR_MESSAGES)
 /*
···
 #define ACPI_DUMP_PATHNAME(a, b, c, d)  acpi_ns_dump_pathname(a, b, c, d)
 #define ACPI_DUMP_BUFFER(a, b)          acpi_ut_debug_dump_buffer((u8 *) a, b, DB_BYTE_DISPLAY, _COMPONENT)
+
+#define ACPI_TRACE_POINT(a, b, c, d)    acpi_trace_point (a, b, c, d)
 
 #else				/* ACPI_DEBUG_OUTPUT */
 /*
  * This is the non-debug case -- make everything go away,
···
 #define ACPI_DUMP_PATHNAME(a, b, c, d)
 #define ACPI_DUMP_BUFFER(a, b)
 #define ACPI_IS_DEBUG_ENABLED(level, component) 0
+#define ACPI_TRACE_POINT(a, b, c, d)
 
 /* Return macros must have a return statement at the minimum */
```
include/acpi/acpi_bus.h (-4)
···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
include/acpi/acpi_drivers.h (-4)
···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License along
- * with this program; if not, write to the Free Software Foundation, Inc.,
- * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
include/acpi/acpiosxf.h (+6)
···
 acpi_status acpi_os_set_file_offset(ACPI_FILE file, long offset, u8 from);
 #endif

+#ifndef ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_trace_point
+void
+acpi_os_trace_point(acpi_trace_event_type type,
+		    u8 begin, u8 *aml, char *pathname);
+#endif
+
 #endif /* __ACPIOSXF_H__ */
include/acpi/acpixf.h (+13 -3)
···
 /* Current ACPICA subsystem version in YYYYMMDD format */

-#define ACPI_CA_VERSION 0x20150619
+#define ACPI_CA_VERSION 0x20150818

 #include <acpi/acconfig.h>
 #include <acpi/actypes.h>
···
  * traced each time it is executed.
  */
 ACPI_INIT_GLOBAL(u32, acpi_gbl_trace_flags, 0);
-ACPI_INIT_GLOBAL(acpi_name, acpi_gbl_trace_method_name, 0);
+ACPI_INIT_GLOBAL(const char *, acpi_gbl_trace_method_name, NULL);
+ACPI_INIT_GLOBAL(u32, acpi_gbl_trace_dbg_level, ACPI_TRACE_LEVEL_DEFAULT);
+ACPI_INIT_GLOBAL(u32, acpi_gbl_trace_dbg_layer, ACPI_TRACE_LAYER_DEFAULT);

 /*
  * Runtime configuration of debug output control masks. We want the debug
···
 				 acpi_object_handler handler,
 				 void **data))
 ACPI_EXTERNAL_RETURN_STATUS(acpi_status
-			    acpi_debug_trace(char *name, u32 debug_level,
+			    acpi_debug_trace(const char *name, u32 debug_level,
					     u32 debug_layer, u32 flags))

 /*
···
 				const char *module_name,
 				u32 component_id,
 				const char *format, ...))
+
+ACPI_DBG_DEPENDENT_RETURN_VOID(void
+			       acpi_trace_point(acpi_trace_event_type type,
+						u8 begin,
+						u8 *aml, char *pathname))
 ACPI_APP_DEPENDENT_RETURN_VOID(ACPI_PRINTF_LIKE(1)
 			       void ACPI_INTERNAL_VAR_XFACE
 			       acpi_log_error(const char *format, ...))
+acpi_status acpi_initialize_debugger(void);
+
+void acpi_terminate_debugger(void);

 /*
  * Divergences
include/acpi/actbl2.h (+13 -4)
···
  * December 19, 2014
  *
  * NOTE: There are two versions of the table with the same signature --
- * the client version and the server version.
+ * the client version and the server version. The common platform_class
+ * field is used to differentiate the two types of tables.
  *
  ******************************************************************************/

-struct acpi_table_tcpa_client {
+struct acpi_table_tcpa_hdr {
 	struct acpi_table_header header;	/* Common ACPI table header */
 	u16 platform_class;
+};
+
+/*
+ * Values for platform_class above.
+ * This is how the client and server subtables are differentiated
+ */
+#define ACPI_TCPA_CLIENT_TABLE 0
+#define ACPI_TCPA_SERVER_TABLE 1
+
+struct acpi_table_tcpa_client {
 	u32 minimum_log_length;	/* Minimum length for the event log area */
 	u64 log_address;	/* Address of the event log area */
 };

 struct acpi_table_tcpa_server {
-	struct acpi_table_header header;	/* Common ACPI table header */
-	u16 platform_class;
 	u16 reserved;
 	u64 minimum_log_length;	/* Minimum length for the event log area */
 	u64 log_address;	/* Address of the event log area */
include/acpi/actypes.h (+12 -1)
···
 #define ACPI_TYPE_DEBUG_OBJECT 0x10

 #define ACPI_TYPE_EXTERNAL_MAX 0x10
+#define ACPI_NUM_TYPES (ACPI_TYPE_EXTERNAL_MAX + 1)

 /*
  * These are object types that do not map directly to the ACPI
···
 #define ACPI_TYPE_LOCAL_SCOPE 0x1B	/* 1 Name, multiple object_list Nodes */

 #define ACPI_TYPE_NS_NODE_MAX 0x1B	/* Last typecode used within a NS Node */
+#define ACPI_TOTAL_TYPES (ACPI_TYPE_NS_NODE_MAX + 1)

 /*
  * These are special object types that never appear in
···
  */
 #define ACPI_FULL_PATHNAME 0
 #define ACPI_SINGLE_NAME 1
-#define ACPI_NAME_TYPE_MAX 1
+#define ACPI_FULL_PATHNAME_NO_TRAILING 2
+#define ACPI_NAME_TYPE_MAX 2

 /*
  * Predefined Namespace items
···
 	u32 hits;
 #endif
 };
+
+/* Definitions of trace event types */
+
+typedef enum {
+	ACPI_TRACE_AML_METHOD,
+	ACPI_TRACE_AML_OPCODE,
+	ACPI_TRACE_AML_REGION
+} acpi_trace_event_type;

 /* Definitions of _OSI support */
include/acpi/platform/acenv.h (+10 -9)
···
 #ifdef ACPI_ASL_COMPILER
 #define ACPI_APPLICATION
-#define ACPI_DISASSEMBLER
 #define ACPI_DEBUG_OUTPUT
 #define ACPI_CONSTANT_EVAL_ONLY
 #define ACPI_LARGE_NAMESPACE_NODE
 #define ACPI_DATA_TABLE_DISASSEMBLY
 #define ACPI_SINGLE_THREADED
 #define ACPI_32BIT_PHYSICAL_ADDRESS
+
+#define ACPI_DISASSEMBLER 1
 #endif

 /* acpi_exec configuration. Multithreaded with full AML debugger */
···
 #endif

 /*
- * acpi_bin/acpi_dump/acpi_help/acpi_names/acpi_src/acpi_xtract/Example configuration.
- * All single threaded.
+ * acpi_bin/acpi_dump/acpi_help/acpi_names/acpi_src/acpi_xtract/Example
+ * configuration. All single threaded.
  */
 #if (defined ACPI_BIN_APP) || \
 	(defined ACPI_DUMP_APP) || \
···
 #define ACPI_USE_NATIVE_RSDP_POINTER
 #endif

-/* acpi_dump configuration. Native mapping used if provied by OSPMs */
+/* acpi_dump configuration. Native mapping used if provided by the host */

 #ifdef ACPI_DUMP_APP
 #define ACPI_USE_NATIVE_MEMORY_MAPPING
···
 #define ACPI_USE_LOCAL_CACHE
 #endif

-/* Common debug support */
+/* Common debug/disassembler support */

 #ifdef ACPI_FULL_DEBUG
-#define ACPI_DEBUGGER
 #define ACPI_DEBUG_OUTPUT
-#define ACPI_DISASSEMBLER
+#define ACPI_DEBUGGER 1
+#define ACPI_DISASSEMBLER 1
 #endif
···
  * ACPI_USE_STANDARD_HEADERS - Define this if linking to a C library and
  * the standard header files may be used.
  *
- * The ACPICA subsystem only uses low level C library functions that do not call
- * operating system services and may therefore be inlined in the code.
+ * The ACPICA subsystem only uses low level C library functions that do not
+ * call operating system services and may therefore be inlined in the code.
  *
  * It may be necessary to tailor these include files to the target
  * generation environment.
include/acpi/platform/acenvex.h (+3)
···
 #if defined(_LINUX) || defined(__linux__)
 #include <acpi/platform/aclinuxex.h>

+#elif defined(WIN32)
+#include "acwinex.h"
+
 #elif defined(_AED_EFI)
 #include "acefiex.h"
include/acpi/platform/acmsvcex.h (+54, new file)
···
+/******************************************************************************
+ *
+ * Name: acmsvcex.h - Extra VC specific defines, etc.
+ *
+ *****************************************************************************/
+
+/*
+ * Copyright (C) 2000 - 2015, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ *    substantially similar to the "NO WARRANTY" disclaimer below
+ *    ("Disclaimer") and any redistribution must be conditioned upon
+ *    including a substantially similar Disclaimer requirement for further
+ *    binary redistribution.
+ * 3. Neither the names of the above-listed copyright holders nor the names
+ *    of any contributors may be used to endorse or promote products derived
+ *    from this software without specific prior written permission.
+ *
+ * Alternatively, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2 as published by the Free
+ * Software Foundation.
+ *
+ * NO WARRANTY
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGES.
+ */
+
+#ifndef __ACMSVCEX_H__
+#define __ACMSVCEX_H__
+
+/* Debug support. */
+
+#ifdef _DEBUG
+#define _CRTDBG_MAP_ALLOC	/* Enables specific file/lineno for leaks */
+#include <crtdbg.h>
+#endif
+
+#endif /* __ACMSVCEX_H__ */
include/acpi/platform/acwinex.h (+49, new file)
···
+/******************************************************************************
+ *
+ * Name: acwinex.h - Extra OS specific defines, etc.
+ *
+ *****************************************************************************/
+
+/*
+ * Copyright (C) 2000 - 2015, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ *    substantially similar to the "NO WARRANTY" disclaimer below
+ *    ("Disclaimer") and any redistribution must be conditioned upon
+ *    including a substantially similar Disclaimer requirement for further
+ *    binary redistribution.
+ * 3. Neither the names of the above-listed copyright holders nor the names
+ *    of any contributors may be used to endorse or promote products derived
+ *    from this software without specific prior written permission.
+ *
+ * Alternatively, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2 as published by the Free
+ * Software Foundation.
+ *
+ * NO WARRANTY
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGES.
+ */
+
+#ifndef __ACWINEX_H__
+#define __ACWINEX_H__
+
+/* Windows uses VC */
+
+#endif /* __ACWINEX_H__ */
include/acpi/processor.h (+51 -8)
···
 extern int acpi_processor_register_performance(struct acpi_processor_performance
 					       *performance, unsigned int cpu);
-extern void acpi_processor_unregister_performance(struct
						  acpi_processor_performance
						  *performance,
						  unsigned int cpu);
+extern void acpi_processor_unregister_performance(unsigned int cpu);

 /* note: this locks both the calling module and the processor module
    if a _PPC object exists, rmmod is disallowed then */
···
 void acpi_processor_set_pdc(acpi_handle handle);

 /* in processor_throttling.c */
+#ifdef CONFIG_ACPI_CPU_FREQ_PSS
 int acpi_processor_tstate_has_changed(struct acpi_processor *pr);
 int acpi_processor_get_throttling_info(struct acpi_processor *pr);
 extern int acpi_processor_set_throttling(struct acpi_processor *pr,
···
					 unsigned long action);
 extern const struct file_operations acpi_processor_throttling_fops;
 extern void acpi_processor_throttling_init(void);
+#else
+static inline int acpi_processor_tstate_has_changed(struct acpi_processor *pr)
+{
+	return 0;
+}
+
+static inline int acpi_processor_get_throttling_info(struct acpi_processor *pr)
+{
+	return -ENODEV;
+}
+
+static inline int acpi_processor_set_throttling(struct acpi_processor *pr,
+						int state, bool force)
+{
+	return -ENODEV;
+}
+
+static inline void acpi_processor_reevaluate_tstate(struct acpi_processor *pr,
+						    unsigned long action) {}
+
+static inline void acpi_processor_throttling_init(void) {}
+#endif /* CONFIG_ACPI_CPU_FREQ_PSS */
+
 /* in processor_idle.c */
+extern struct cpuidle_driver acpi_idle_driver;
+#ifdef CONFIG_ACPI_PROCESSOR_IDLE
 int acpi_processor_power_init(struct acpi_processor *pr);
 int acpi_processor_power_exit(struct acpi_processor *pr);
 int acpi_processor_cst_has_changed(struct acpi_processor *pr);
 int acpi_processor_hotplug(struct acpi_processor *pr);
-extern struct cpuidle_driver acpi_idle_driver;
+#else
+static inline int acpi_processor_power_init(struct acpi_processor *pr)
+{
+	return -ENODEV;
+}

-#ifdef CONFIG_PM_SLEEP
+static inline int acpi_processor_power_exit(struct acpi_processor *pr)
+{
+	return -ENODEV;
+}
+
+static inline int acpi_processor_cst_has_changed(struct acpi_processor *pr)
+{
+	return -ENODEV;
+}
+
+static inline int acpi_processor_hotplug(struct acpi_processor *pr)
+{
+	return -ENODEV;
+}
+#endif /* CONFIG_ACPI_PROCESSOR_IDLE */
+
+#if defined(CONFIG_PM_SLEEP) & defined(CONFIG_ACPI_PROCESSOR_IDLE)
 void acpi_processor_syscore_init(void);
 void acpi_processor_syscore_exit(void);
 #else
···
 /* in processor_thermal.c */
 int acpi_processor_get_limit_info(struct acpi_processor *pr);
 extern const struct thermal_cooling_device_ops processor_cooling_ops;
-#ifdef CONFIG_CPU_FREQ
+#if defined(CONFIG_ACPI_CPU_FREQ_PSS) & defined(CONFIG_CPU_FREQ)
 void acpi_thermal_cpufreq_init(void);
 void acpi_thermal_cpufreq_exit(void);
 #else
···
 {
	return;
 }
-#endif
+#endif /* CONFIG_ACPI_CPU_FREQ_PSS */

 #endif
include/linux/acpi.h (+1 -5)
···
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
  * GNU General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */
···
 int acpi_pci_irq_enable (struct pci_dev *dev);
 void acpi_penalize_isa_irq(int irq, int active);
-
+void acpi_penalize_sci_irq(int irq, int trigger, int polarity);
 void acpi_pci_irq_disable (struct pci_dev *dev);

 extern int ec_read(u8 addr, u8 *val);
include/linux/cpufreq.h (+19 -9)
···
 	unsigned int transition_latency;
 };

-struct cpufreq_real_policy {
+struct cpufreq_user_policy {
 	unsigned int min; /* in kHz */
 	unsigned int max; /* in kHz */
-	unsigned int policy; /* see above */
-	struct cpufreq_governor *governor; /* see below */
 };

 struct cpufreq_policy {
···
 	struct work_struct update; /* if update_policy() needs to be
 				    * called, but you're in IRQ context */

-	struct cpufreq_real_policy user_policy;
+	struct cpufreq_user_policy user_policy;
 	struct cpufreq_frequency_table *freq_table;

 	struct list_head policy_list;
···
 /* Policy Notifiers */
 #define CPUFREQ_ADJUST (0)
-#define CPUFREQ_INCOMPATIBLE (1)
-#define CPUFREQ_NOTIFY (2)
-#define CPUFREQ_START (3)
-#define CPUFREQ_CREATE_POLICY (4)
-#define CPUFREQ_REMOVE_POLICY (5)
+#define CPUFREQ_NOTIFY (1)
+#define CPUFREQ_START (2)
+#define CPUFREQ_CREATE_POLICY (3)
+#define CPUFREQ_REMOVE_POLICY (4)

 #ifdef CONFIG_CPU_FREQ
 int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list);
···
 int cpufreq_boost_trigger_state(int state);
 int cpufreq_boost_supported(void);
 int cpufreq_boost_enabled(void);
+int cpufreq_enable_boost_support(void);
+bool policy_has_boost_freq(struct cpufreq_policy *policy);
 #else
 static inline int cpufreq_boost_trigger_state(int state)
 {
···
 {
 	return 0;
 }
+
+static inline int cpufreq_enable_boost_support(void)
+{
+	return -EINVAL;
+}
+
+static inline bool policy_has_boost_freq(struct cpufreq_policy *policy)
+{
+	return false;
+}
 #endif
 /* the following funtion is for cpufreq core use only */
 struct cpufreq_frequency_table *cpufreq_frequency_get_table(unsigned int cpu);

 /* the following are really really optional */
 extern struct freq_attr cpufreq_freq_attr_scaling_available_freqs;
+extern struct freq_attr cpufreq_freq_attr_scaling_boost_freqs;
 extern struct freq_attr *cpufreq_generic_attr[];
 int cpufreq_table_validate_and_show(struct cpufreq_policy *policy,
				     struct cpufreq_frequency_table *table);
include/linux/cpuidle.h (-1)
···
 	struct list_head device_list;

 #ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
-	int safe_state_index;
 	cpumask_t coupled_cpus;
 	struct cpuidle_coupled *coupled;
 #endif
include/linux/device.h (+2)
···
 extern void device_del(struct device *dev);
 extern int device_for_each_child(struct device *dev, void *data,
		     int (*fn)(struct device *dev, void *data));
+extern int device_for_each_child_reverse(struct device *dev, void *data,
+		     int (*fn)(struct device *dev, void *data));
 extern struct device *device_find_child(struct device *dev, void *data,
				int (*match)(struct device *dev, void *data));
 extern int device_rename(struct device *dev, const char *new_name);
include/linux/klist.h (+1)
···
 extern void klist_iter_init_node(struct klist *k, struct klist_iter *i,
				  struct klist_node *n);
 extern void klist_iter_exit(struct klist_iter *i);
+extern struct klist_node *klist_prev(struct klist_iter *i);
 extern struct klist_node *klist_next(struct klist_iter *i);

 #endif
include/linux/of.h (+2 -1)
···
 static inline struct device_node *to_of_node(struct fwnode_handle *fwnode)
 {
-	return fwnode ? container_of(fwnode, struct device_node, fwnode) : NULL;
+	return is_of_node(fwnode) ?
+		container_of(fwnode, struct device_node, fwnode) : NULL;
 }

 static inline bool of_have_populated_dt(void)
include/linux/pm_domain.h (-9)
···
 enum gpd_status {
 	GPD_STATE_ACTIVE = 0,	/* PM domain is active */
-	GPD_STATE_WAIT_MASTER,	/* PM domain's master is being waited for */
-	GPD_STATE_BUSY,		/* Something is happening to the PM domain */
-	GPD_STATE_REPEAT,	/* Power off in progress, to be repeated */
 	GPD_STATE_POWER_OFF,	/* PM domain is off */
 };
···
 	unsigned int in_progress;	/* Number of devices being suspended now */
 	atomic_t sd_count;	/* Number of subdomains with power "on" */
 	enum gpd_status status;	/* Current state of the domain */
-	wait_queue_head_t status_wait_queue;
-	struct task_struct *poweroff_task;	/* Powering off task */
-	unsigned int resume_count;	/* Number of devices being resumed */
 	unsigned int device_count;	/* Number of devices */
 	unsigned int suspended_count;	/* System suspend device counter */
 	unsigned int prepared_count;	/* Suspend counter of prepared devices */
···
 	struct pm_domain_data base;
 	struct gpd_timing_data td;
 	struct notifier_block nb;
-	int need_restore;
 };

 #ifdef CONFIG_PM_GENERIC_DOMAINS
···
 	return -ENOSYS;
 }
 static inline void pm_genpd_poweroff_unused(void) {}
-#define simple_qos_governor NULL
-#define pm_domain_always_on_gov NULL
 #endif

 static inline int pm_genpd_add_device(struct generic_pm_domain *genpd,
include/linux/pm_opp.h (+36)
···
 unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp);

+bool dev_pm_opp_is_turbo(struct dev_pm_opp *opp);
+
 int dev_pm_opp_get_opp_count(struct device *dev);
+unsigned long dev_pm_opp_get_max_clock_latency(struct device *dev);

 struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
					      unsigned long freq,
···
 	return 0;
 }

+static inline bool dev_pm_opp_is_turbo(struct dev_pm_opp *opp)
+{
+	return false;
+}
+
 static inline int dev_pm_opp_get_opp_count(struct device *dev)
+{
+	return 0;
+}
+
+static inline unsigned long dev_pm_opp_get_max_clock_latency(struct device *dev)
 {
 	return 0;
 }
···
 #if defined(CONFIG_PM_OPP) && defined(CONFIG_OF)
 int of_init_opp_table(struct device *dev);
 void of_free_opp_table(struct device *dev);
+int of_cpumask_init_opp_table(cpumask_var_t cpumask);
+void of_cpumask_free_opp_table(cpumask_var_t cpumask);
+int of_get_cpus_sharing_opps(struct device *cpu_dev, cpumask_var_t cpumask);
+int set_cpus_sharing_opps(struct device *cpu_dev, cpumask_var_t cpumask);
 #else
 static inline int of_init_opp_table(struct device *dev)
 {
···
 static inline void of_free_opp_table(struct device *dev)
 {
+}
+
+static inline int of_cpumask_init_opp_table(cpumask_var_t cpumask)
+{
+	return -ENOSYS;
+}
+
+static inline void of_cpumask_free_opp_table(cpumask_var_t cpumask)
+{
+}
+
+static inline int of_get_cpus_sharing_opps(struct device *cpu_dev, cpumask_var_t cpumask)
+{
+	return -ENOSYS;
+}
+
+static inline int set_cpus_sharing_opps(struct device *cpu_dev, cpumask_var_t cpumask)
+{
+	return -ENOSYS;
 }
 #endif
include/linux/pm_qos.h (+5)
···
 int dev_pm_qos_update_flags(struct device *dev, s32 mask, bool set);
 s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev);
 int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val);
+int dev_pm_qos_expose_latency_tolerance(struct device *dev);
+void dev_pm_qos_hide_latency_tolerance(struct device *dev);

 static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev)
 {
···
			{ return PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT; }
 static inline int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
			{ return 0; }
+static inline int dev_pm_qos_expose_latency_tolerance(struct device *dev)
+			{ return 0; }
+static inline void dev_pm_qos_hide_latency_tolerance(struct device *dev) {}

 static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev) { return 0; }
 static inline s32 dev_pm_qos_requested_flags(struct device *dev) { return 0; }
include/linux/pm_runtime.h (-6)
···
 	return dev->power.runtime_status == RPM_SUSPENDED;
 }

-static inline bool pm_runtime_suspended_if_enabled(struct device *dev)
-{
-	return pm_runtime_status_suspended(dev) && dev->power.disable_depth == 1;
-}
-
 static inline bool pm_runtime_enabled(struct device *dev)
 {
 	return !dev->power.disable_depth;
···
 static inline bool pm_runtime_suspended(struct device *dev) { return false; }
 static inline bool pm_runtime_active(struct device *dev) { return true; }
 static inline bool pm_runtime_status_suspended(struct device *dev) { return false; }
-static inline bool pm_runtime_suspended_if_enabled(struct device *dev) { return false; }
 static inline bool pm_runtime_enabled(struct device *dev) { return false; }

 static inline void pm_runtime_no_callbacks(struct device *dev) {}
kernel/power/Kconfig (+10)
···
 	  Turning OFF this setting is NOT recommended! If in doubt, say Y.

+config SUSPEND_SKIP_SYNC
+	bool "Skip kernel's sys_sync() on suspend to RAM/standby"
+	depends on SUSPEND
+	depends on EXPERT
+	help
+	  Skip the kernel sys_sync() before freezing user processes.
+	  Some systems prefer not to pay this cost on every invocation
+	  of suspend, or they are content with invoking sync() from
+	  user-space before invoking suspend. Say Y if that's your case.
+
 config HIBERNATE_CALLBACKS
 	bool
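For context, the new option above is off by default; a kernel that wants to skip the pre-suspend `sys_sync()` would carry a config fragment along these lines (`CONFIG_EXPERT` shown only because the option depends on it):

```
CONFIG_EXPERT=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_SKIP_SYNC=y
```

With this set, the `#ifndef CONFIG_SUSPEND_SKIP_SYNC` block added to `kernel/power/suspend.c` in this same pull compiles out, and userspace is expected to call `sync()` itself before triggering suspend.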
kernel/power/suspend.c (+2)
···
 	if (state == PM_SUSPEND_FREEZE)
 		freeze_begin();

+#ifndef CONFIG_SUSPEND_SKIP_SYNC
 	trace_suspend_resume(TPS("sync_filesystems"), 0, true);
 	printk(KERN_INFO "PM: Syncing filesystems ... ");
 	sys_sync();
 	printk("done.\n");
 	trace_suspend_resume(TPS("sync_filesystems"), 0, false);
+#endif

 	pr_debug("PM: Preparing system for sleep (%s)\n", pm_states[state]);
 	error = suspend_prepare(state);
kernel/power/wakelock.c (+15 -3)
···
 #include <linux/list.h>
 #include <linux/rbtree.h>
 #include <linux/slab.h>
+#include <linux/workqueue.h>

 #include "power.h"
···
 #define WL_GC_COUNT_MAX 100
 #define WL_GC_TIME_SEC 300

+static void __wakelocks_gc(struct work_struct *work);
 static LIST_HEAD(wakelocks_lru_list);
+static DECLARE_WORK(wakelock_work, __wakelocks_gc);
 static unsigned int wakelocks_gc_count;

 static inline void wakelocks_lru_add(struct wakelock *wl)
···
 	list_move(&wl->lru, &wakelocks_lru_list);
 }

-static void wakelocks_gc(void)
+static void __wakelocks_gc(struct work_struct *work)
 {
 	struct wakelock *wl, *aux;
 	ktime_t now;

-	if (++wakelocks_gc_count <= WL_GC_COUNT_MAX)
-		return;
+	mutex_lock(&wakelocks_lock);

 	now = ktime_get();
 	list_for_each_entry_safe_reverse(wl, aux, &wakelocks_lru_list, lru) {
···
 		}
 	}
 	wakelocks_gc_count = 0;
+
+	mutex_unlock(&wakelocks_lock);
+}
+
+static void wakelocks_gc(void)
+{
+	if (++wakelocks_gc_count <= WL_GC_COUNT_MAX)
+		return;
+
+	schedule_work(&wakelock_work);
 }
 #else /* !CONFIG_PM_WAKELOCKS_GC */
 static inline void wakelocks_lru_add(struct wakelock *wl) {}
lib/klist.c (+41)
···
 }

 /**
+ * klist_prev - Ante up prev node in list.
+ * @i: Iterator structure.
+ *
+ * First grab list lock. Decrement the reference count of the previous
+ * node, if there was one. Grab the prev node, increment its reference
+ * count, drop the lock, and return that prev node.
+ */
+struct klist_node *klist_prev(struct klist_iter *i)
+{
+	void (*put)(struct klist_node *) = i->i_klist->put;
+	struct klist_node *last = i->i_cur;
+	struct klist_node *prev;
+
+	spin_lock(&i->i_klist->k_lock);
+
+	if (last) {
+		prev = to_klist_node(last->n_node.prev);
+		if (!klist_dec_and_del(last))
+			put = NULL;
+	} else
+		prev = to_klist_node(i->i_klist->k_list.prev);
+
+	i->i_cur = NULL;
+	while (prev != to_klist_node(&i->i_klist->k_list)) {
+		if (likely(!knode_dead(prev))) {
+			kref_get(&prev->n_ref);
+			i->i_cur = prev;
+			break;
+		}
+		prev = to_klist_node(prev->n_node.prev);
+	}
+
+	spin_unlock(&i->i_klist->k_lock);
+
+	if (put && last)
+		put(last);
+	return i->i_cur;
+}
+EXPORT_SYMBOL_GPL(klist_prev);
+
+/**
  * klist_next - Ante up next node in list.
  * @i: Iterator structure.
  *
+14 -148
tools/power/acpi/Makefile
···
8 8 # as published by the Free Software Foundation; version 2
9 9 # of the License.
10 10 
11 - OUTPUT=./
12 - ifeq ("$(origin O)", "command line")
13 - OUTPUT := $(O)/
14 - endif
11 + include ../../scripts/Makefile.include
15 12 
16 - ifneq ($(OUTPUT),)
17 - # check that the output directory actually exists
18 - OUTDIR := $(shell cd $(OUTPUT) && /bin/pwd)
19 - $(if $(OUTDIR),, $(error output directory "$(OUTPUT)" does not exist))
20 - endif
13 + all: acpidump ec
14 + clean: acpidump_clean ec_clean
15 + install: acpidump_install ec_install
16 + uninstall: acpidump_uninstall ec_uninstall
21 17 
22 - SUBDIRS = tools/ec
18 + acpidump ec: FORCE
19 + 	$(call descend,tools/$@,all)
20 + acpidump_clean ec_clean:
21 + 	$(call descend,tools/$(@:_clean=),clean)
22 + acpidump_install ec_install:
23 + 	$(call descend,tools/$(@:_install=),install)
24 + acpidump_uninstall ec_uninstall:
25 + 	$(call descend,tools/$(@:_uninstall=),uninstall)
23 26 
24 - # --- CONFIGURATION BEGIN ---
25 - 
26 - # Set the following to `true' to make a unstripped, unoptimized
27 - # binary. Leave this set to `false' for production use.
28 - DEBUG ?= true
29 - 
30 - # make the build silent. Set this to something else to make it noisy again.
31 - V ?= false
32 - 
33 - # Prefix to the directories we're installing to
34 - DESTDIR ?=
35 - 
36 - # --- CONFIGURATION END ---
37 - 
38 - # Directory definitions. These are default and most probably
39 - # do not need to be changed. Please note that DESTDIR is
40 - # added in front of any of them
41 - 
42 - bindir ?= /usr/bin
43 - sbindir ?= /usr/sbin
44 - mandir ?= /usr/man
45 - 
46 - # Toolchain: what tools do we use, and what options do they need:
47 - 
48 - INSTALL = /usr/bin/install -c
49 - INSTALL_PROGRAM = ${INSTALL}
50 - INSTALL_DATA = ${INSTALL} -m 644
51 - INSTALL_SCRIPT = ${INSTALL_PROGRAM}
52 - 
53 - # If you are running a cross compiler, you may want to set this
54 - # to something more interesting, like "arm-linux-". If you want
55 - # to compile vs uClibc, that can be done here as well.
56 - CROSS = #/usr/i386-linux-uclibc/usr/bin/i386-uclibc-
57 - CC = $(CROSS)gcc
58 - LD = $(CROSS)gcc
59 - STRIP = $(CROSS)strip
60 - HOSTCC = gcc
61 - 
62 - # check if compiler option is supported
63 - cc-supports = ${shell if $(CC) ${1} -S -o /dev/null -x c /dev/null > /dev/null 2>&1; then echo "$(1)"; fi;}
64 - 
65 - # use '-Os' optimization if available, else use -O2
66 - OPTIMIZATION := $(call cc-supports,-Os,-O2)
67 - 
68 - WARNINGS := -Wall
69 - WARNINGS += $(call cc-supports,-Wstrict-prototypes)
70 - WARNINGS += $(call cc-supports,-Wdeclaration-after-statement)
71 - 
72 - KERNEL_INCLUDE := ../../../include
73 - ACPICA_INCLUDE := ../../../drivers/acpi/acpica
74 - CFLAGS += -D_LINUX -I$(KERNEL_INCLUDE) -I$(ACPICA_INCLUDE)
75 - CFLAGS += $(WARNINGS)
76 - 
77 - ifeq ($(strip $(V)),false)
78 - QUIET=@
79 - ECHO=@echo
80 - else
81 - QUIET=
82 - ECHO=@\#
83 - endif
84 - export QUIET ECHO
85 - 
86 - # if DEBUG is enabled, then we do not strip or optimize
87 - ifeq ($(strip $(DEBUG)),true)
88 - CFLAGS += -O1 -g -DDEBUG
89 - STRIPCMD = /bin/true -Since_we_are_debugging
90 - else
91 - CFLAGS += $(OPTIMIZATION) -fomit-frame-pointer
92 - STRIPCMD = $(STRIP) -s --remove-section=.note --remove-section=.comment
93 - endif
94 - 
95 - # --- ACPIDUMP BEGIN ---
96 - 
97 - vpath %.c \
98 - 	../../../drivers/acpi/acpica\
99 - 	tools/acpidump\
100 - 	common\
101 - 	os_specific/service_layers
102 - 
103 - CFLAGS += -DACPI_DUMP_APP -Itools/acpidump
104 - 
105 - DUMP_OBJS = \
106 - 	apdump.o\
107 - 	apfiles.o\
108 - 	apmain.o\
109 - 	osunixdir.o\
110 - 	osunixmap.o\
111 - 	osunixxf.o\
112 - 	tbprint.o\
113 - 	tbxfroot.o\
114 - 	utbuffer.o\
115 - 	utdebug.o\
116 - 	utexcep.o\
117 - 	utglobal.o\
118 - 	utmath.o\
119 - 	utprint.o\
120 - 	utstring.o\
121 - 	utxferror.o\
122 - 	oslibcfs.o\
123 - 	oslinuxtbl.o\
124 - 	cmfsize.o\
125 - 	getopt.o
126 - 
127 - DUMP_OBJS := $(addprefix $(OUTPUT)tools/acpidump/,$(DUMP_OBJS))
128 - 
129 - $(OUTPUT)acpidump: $(DUMP_OBJS)
130 - 	$(ECHO) " LD " $@
131 - 	$(QUIET) $(LD) $(CFLAGS) $(LDFLAGS) $(DUMP_OBJS) -L$(OUTPUT) -o $@
132 - 	$(QUIET) $(STRIPCMD) $@
133 - 
134 - $(OUTPUT)tools/acpidump/%.o: %.c
135 - 	$(ECHO) " CC " $@
136 - 	$(QUIET) $(CC) -c $(CFLAGS) -o $@ $<
137 - 
138 - # --- ACPIDUMP END ---
139 - 
140 - all: $(OUTPUT)acpidump
141 - 	echo $(OUTPUT)
142 - 
143 - clean:
144 - 	-find $(OUTPUT) \( -not -type d \) -and \( -name '*~' -o -name '*.[oas]' \) -type f -print \
145 - 	 | xargs rm -f
146 - 	-rm -f $(OUTPUT)acpidump
147 - 
148 - install-tools:
149 - 	$(INSTALL) -d $(DESTDIR)${sbindir}
150 - 	$(INSTALL_PROGRAM) $(OUTPUT)acpidump $(DESTDIR)${sbindir}
151 - 
152 - install-man:
153 - 	$(INSTALL_DATA) -D man/acpidump.8 $(DESTDIR)${mandir}/man8/acpidump.8
154 - 
155 - install: all install-tools install-man
156 - 
157 - uninstall:
158 - 	- rm -f $(DESTDIR)${sbindir}/acpidump
159 - 	- rm -f $(DESTDIR)${mandir}/man8/acpidump.8
160 - 
161 - .PHONY: all utils install-tools install-man install uninstall clean
27 + .PHONY: FORCE
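The rewritten top-level Makefile fans each `foo_clean`-style target out to its subdirectory by stripping the suffix with a GNU Make substitution reference, `$(@:_clean=)`. A throwaway standalone Makefile (the target names are copied from the diff, but this is not the kernel's build; `descend` is replaced by a plain echo) shows the idiom:

```shell
# Demonstrate the $(@:_clean=) suffix-stripping idiom: for target
# "acpidump_clean", $(@:_clean=) expands to "acpidump".
workdir=$(mktemp -d)
printf 'acpidump_clean ec_clean:\n\t@echo descending into tools/$(@:_clean=)\n' \
	> "$workdir/Makefile"
(cd "$workdir" && make -s acpidump_clean ec_clean)
# prints:
#   descending into tools/acpidump
#   descending into tools/ec
rm -r "$workdir"
```

In the real Makefile the echo is `$(call descend,tools/$(@:_clean=),clean)`, where `descend` comes from `scripts/Makefile.include` and re-invokes make in the named subdirectory.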
+92
tools/power/acpi/Makefile.config
···
1 + # tools/power/acpi/Makefile.config - ACPI tool Makefile
2 + #
3 + # Copyright (c) 2015, Intel Corporation
4 + # Author: Lv Zheng <lv.zheng@intel.com>
5 + #
6 + # This program is free software; you can redistribute it and/or
7 + # modify it under the terms of the GNU General Public License
8 + # as published by the Free Software Foundation; version 2
9 + # of the License.
10 + 
11 + include ../../../../scripts/Makefile.include
12 + 
13 + OUTPUT=./
14 + ifeq ("$(origin O)", "command line")
15 + OUTPUT := $(O)/
16 + endif
17 + 
18 + ifneq ($(OUTPUT),)
19 + # check that the output directory actually exists
20 + OUTDIR := $(shell cd $(OUTPUT) && /bin/pwd)
21 + $(if $(OUTDIR),, $(error output directory "$(OUTPUT)" does not exist))
22 + endif
23 + 
24 + # --- CONFIGURATION BEGIN ---
25 + 
26 + # Set the following to `true' to make a unstripped, unoptimized
27 + # binary. Leave this set to `false' for production use.
28 + DEBUG ?= true
29 + 
30 + # make the build silent. Set this to something else to make it noisy again.
31 + V ?= false
32 + 
33 + # Prefix to the directories we're installing to
34 + DESTDIR ?=
35 + 
36 + # --- CONFIGURATION END ---
37 + 
38 + # Directory definitions. These are default and most probably
39 + # do not need to be changed. Please note that DESTDIR is
40 + # added in front of any of them
41 + 
42 + bindir ?= /usr/bin
43 + sbindir ?= /usr/sbin
44 + mandir ?= /usr/man
45 + 
46 + # Toolchain: what tools do we use, and what options do they need:
47 + 
48 + INSTALL = /usr/bin/install -c
49 + INSTALL_PROGRAM = ${INSTALL}
50 + INSTALL_DATA = ${INSTALL} -m 644
51 + INSTALL_SCRIPT = ${INSTALL_PROGRAM}
52 + 
53 + # If you are running a cross compiler, you may want to set this
54 + # to something more interesting, like "arm-linux-". If you want
55 + # to compile vs uClibc, that can be done here as well.
56 + CROSS = #/usr/i386-linux-uclibc/usr/bin/i386-uclibc-
57 + CC = $(CROSS)gcc
58 + LD = $(CROSS)gcc
59 + STRIP = $(CROSS)strip
60 + HOSTCC = gcc
61 + 
62 + # check if compiler option is supported
63 + cc-supports = ${shell if $(CC) ${1} -S -o /dev/null -x c /dev/null > /dev/null 2>&1; then echo "$(1)"; fi;}
64 + 
65 + # use '-Os' optimization if available, else use -O2
66 + OPTIMIZATION := $(call cc-supports,-Os,-O2)
67 + 
68 + WARNINGS := -Wall
69 + WARNINGS += $(call cc-supports,-Wstrict-prototypes)
70 + WARNINGS += $(call cc-supports,-Wdeclaration-after-statement)
71 + 
72 + KERNEL_INCLUDE := ../../../include
73 + ACPICA_INCLUDE := ../../../drivers/acpi/acpica
74 + CFLAGS += -D_LINUX -I$(KERNEL_INCLUDE) -I$(ACPICA_INCLUDE)
75 + CFLAGS += $(WARNINGS)
76 + 
77 + ifeq ($(strip $(V)),false)
78 + QUIET=@
79 + ECHO=@echo
80 + else
81 + QUIET=
82 + ECHO=@\#
83 + endif
84 + 
85 + # if DEBUG is enabled, then we do not strip or optimize
86 + ifeq ($(strip $(DEBUG)),true)
87 + CFLAGS += -O1 -g -DDEBUG
88 + STRIPCMD = /bin/true -Since_we_are_debugging
89 + else
90 + CFLAGS += $(OPTIMIZATION) -fomit-frame-pointer
91 + STRIPCMD = $(STRIP) -s --remove-section=.note --remove-section=.comment
92 + endif
+37
tools/power/acpi/Makefile.rules
···
1 + # tools/power/acpi/Makefile.rules - ACPI tool Makefile
2 + #
3 + # Copyright (c) 2015, Intel Corporation
4 + # Author: Lv Zheng <lv.zheng@intel.com>
5 + #
6 + # This program is free software; you can redistribute it and/or
7 + # modify it under the terms of the GNU General Public License
8 + # as published by the Free Software Foundation; version 2
9 + # of the License.
10 + 
11 + $(OUTPUT)$(TOOL): $(TOOL_OBJS) FORCE
12 + 	$(ECHO) " LD " $@
13 + 	$(QUIET) $(LD) $(CFLAGS) $(LDFLAGS) $(TOOL_OBJS) -L$(OUTPUT) -o $@
14 + 	$(QUIET) $(STRIPCMD) $@
15 + 
16 + $(OUTPUT)%.o: %.c
17 + 	$(ECHO) " CC " $@
18 + 	$(QUIET) $(CC) -c $(CFLAGS) -o $@ $<
19 + 
20 + all: $(OUTPUT)$(TOOL)
21 + clean:
22 + 	-find $(OUTPUT) \( -not -type d \) \
23 + 	-and \( -name '*~' -o -name '*.[oas]' \) \
24 + 	-type f -print \
25 + 	| xargs rm -f
26 + 	-rm -f $(OUTPUT)$(TOOL)
27 + 
28 + install-tools:
29 + 	$(INSTALL) -d $(DESTDIR)${sbindir}
30 + 	$(INSTALL_PROGRAM) $(OUTPUT)$(TOOL) $(DESTDIR)${sbindir}
31 + uninstall-tools:
32 + 	- rm -f $(DESTDIR)${sbindir}/$(TOOL)
33 + 
34 + install: all install-tools $(EXTRA_INSTALL)
35 + uninstall: uninstall-tools $(EXTRA_UNINSTALL)
36 + 
37 + .PHONY: FORCE
+53
tools/power/acpi/tools/acpidump/Makefile
···
1 + # tools/power/acpi/tools/acpidump/Makefile - ACPI tool Makefile
2 + #
3 + # Copyright (c) 2015, Intel Corporation
4 + # Author: Lv Zheng <lv.zheng@intel.com>
5 + #
6 + # This program is free software; you can redistribute it and/or
7 + # modify it under the terms of the GNU General Public License
8 + # as published by the Free Software Foundation; version 2
9 + # of the License.
10 + 
11 + include ../../Makefile.config
12 + 
13 + TOOL = acpidump
14 + EXTRA_INSTALL = install-man
15 + EXTRA_UNINSTALL = uninstall-man
16 + 
17 + vpath %.c \
18 + 	../../../../../drivers/acpi/acpica\
19 + 	./\
20 + 	../../common\
21 + 	../../os_specific/service_layers
22 + CFLAGS += -DACPI_DUMP_APP -I.\
23 + 	-I../../../../../drivers/acpi/acpica\
24 + 	-I../../../../../include
25 + TOOL_OBJS = \
26 + 	apdump.o\
27 + 	apfiles.o\
28 + 	apmain.o\
29 + 	osunixdir.o\
30 + 	osunixmap.o\
31 + 	osunixxf.o\
32 + 	tbprint.o\
33 + 	tbxfroot.o\
34 + 	utbuffer.o\
35 + 	utdebug.o\
36 + 	utexcep.o\
37 + 	utglobal.o\
38 + 	utmath.o\
39 + 	utnonansi.o\
40 + 	utprint.o\
41 + 	utstring.o\
42 + 	utxferror.o\
43 + 	oslibcfs.o\
44 + 	oslinuxtbl.o\
45 + 	cmfsize.o\
46 + 	getopt.o
47 + 
48 + include ../../Makefile.rules
49 + 
50 + install-man: ../../man/acpidump.8
51 + 	$(INSTALL_DATA) -D $< $(DESTDIR)${mandir}/man8/acpidump.8
52 + uninstall-man:
53 + 	- rm -f $(DESTDIR)${mandir}/man8/acpidump.8
+14 -19
tools/power/acpi/tools/ec/Makefile
···
1 - ec_access: ec_access.o
2 - 	$(ECHO) " LD " $@
3 - 	$(QUIET) $(LD) $(CFLAGS) $(LDFLAGS) $< -o $@
4 - 	$(QUIET) $(STRIPCMD) $@
1 + # tools/power/acpi/tools/acpidump/Makefile - ACPI tool Makefile
2 + #
3 + # Copyright (c) 2015, Intel Corporation
4 + # Author: Lv Zheng <lv.zheng@intel.com>
5 + #
6 + # This program is free software; you can redistribute it and/or
7 + # modify it under the terms of the GNU General Public License
8 + # as published by the Free Software Foundation; version 2
9 + # of the License.
5 10 
6 - %.o: %.c
7 - 	$(ECHO) " CC " $@
8 - 	$(QUIET) $(CC) -c $(CFLAGS) -o $@ $<
11 + include ../../Makefile.config
9 12 
10 - all: ec_access
13 + TOOL = ec
14 + TOOL_OBJS = \
15 + 	ec_access.o
11 16 
12 - install:
13 - 	$(INSTALL) -d $(DESTDIR)${sbindir}
14 - 	$(INSTALL_PROGRAM) ec_access $(DESTDIR)${sbindir}
15 - 
16 - uninstall:
17 - 	- rm -f $(DESTDIR)${sbindir}/ec_access
18 - 
19 - clean:
20 - 	-rm -f $(OUTPUT)ec_access
21 - 
22 - .PHONY: all install uninstall
17 + include ../../Makefile.rules
+4
tools/power/cpupower/utils/cpufreq-set.c
···
17 17 
18 18 #include "cpufreq.h"
19 19 #include "helpers/helpers.h"
20 + #include "helpers/sysfs.h"
20 21 
21 22 #define NORM_FREQ_LEN 32
22 23 
···
317 316 
318 317 	if (!bitmask_isbitset(cpus_chosen, cpu) ||
319 318 	    cpufreq_cpu_exists(cpu))
319 + 		continue;
320 + 
321 + 	if (sysfs_is_cpu_online(cpu) != 1)
320 322 		continue;
321 323 
322 324 	printf(_("Setting cpu: %d\n"), cpu);
+2
tools/power/cpupower/utils/helpers/topology.c
···
73 73 	for (cpu = 0; cpu < cpus; cpu++) {
74 74 		cpu_top->core_info[cpu].cpu = cpu;
75 75 		cpu_top->core_info[cpu].is_online = sysfs_is_cpu_online(cpu);
76 + 		if (!cpu_top->core_info[cpu].is_online)
77 + 			continue;
76 78 		if(sysfs_topology_read_file(
77 79 			cpu,
78 80 			"physical_package_id",
-5
tools/power/x86/turbostat/turbostat.8
···
251 251 that they count at TSC rate, which is true on all processors tested to date.
252 252 
253 253 .SH REFERENCES
254 - "Intel® Turbo Boost Technology
255 - in Intel® Core™ Microarchitecture (Nehalem) Based Processors"
256 - http://download.intel.com/design/processor/applnots/320354.pdf
257 - 
258 - "Intel® 64 and IA-32 Architectures Software Developer's Manual
259 254 Volume 3B: System Programming Guide"
260 255 http://www.intel.com/products/processor/manuals/
261 256 
+88 -12
tools/power/x86/turbostat/turbostat.c
···
372 372 	if (do_rapl & RAPL_GFX)
373 373 		outp += sprintf(outp, " GFX_J");
374 374 	if (do_rapl & RAPL_DRAM)
375 - 		outp += sprintf(outp, " RAM_W");
375 + 		outp += sprintf(outp, " RAM_J");
376 376 	if (do_rapl & RAPL_PKG_PERF_STATUS)
377 377 		outp += sprintf(outp, " PKG_%%");
378 378 	if (do_rapl & RAPL_DRAM_PERF_STATUS)
···
1157 1157 
1158 1158 	get_msr(base_cpu, MSR_NHM_PLATFORM_INFO, &msr);
1159 1159 
1160 - 	fprintf(stderr, "cpu0: MSR_NHM_PLATFORM_INFO: 0x%08llx\n", msr);
1160 + 	fprintf(stderr, "cpu%d: MSR_NHM_PLATFORM_INFO: 0x%08llx\n", base_cpu, msr);
1161 1161 
1162 1162 	ratio = (msr >> 40) & 0xFF;
1163 1163 	fprintf(stderr, "%d * %.0f = %.0f MHz max efficiency frequency\n",
···
1168 1168 		ratio, bclk, ratio * bclk);
1169 1169 
1170 1170 	get_msr(base_cpu, MSR_IA32_POWER_CTL, &msr);
1171 - 	fprintf(stderr, "cpu0: MSR_IA32_POWER_CTL: 0x%08llx (C1E auto-promotion: %sabled)\n",
1172 - 		msr, msr & 0x2 ? "EN" : "DIS");
1171 + 	fprintf(stderr, "cpu%d: MSR_IA32_POWER_CTL: 0x%08llx (C1E auto-promotion: %sabled)\n",
1172 + 		base_cpu, msr, msr & 0x2 ? "EN" : "DIS");
1173 1173 
1174 1174 	return;
1175 1175 }
···
1182 1182 
1183 1183 	get_msr(base_cpu, MSR_TURBO_RATIO_LIMIT2, &msr);
1184 1184 
1185 - 	fprintf(stderr, "cpu0: MSR_TURBO_RATIO_LIMIT2: 0x%08llx\n", msr);
1185 + 	fprintf(stderr, "cpu%d: MSR_TURBO_RATIO_LIMIT2: 0x%08llx\n", base_cpu, msr);
1186 1186 
1187 1187 	ratio = (msr >> 8) & 0xFF;
1188 1188 	if (ratio)
···
1204 1204 
1205 1205 	get_msr(base_cpu, MSR_TURBO_RATIO_LIMIT1, &msr);
1206 1206 
1207 - 	fprintf(stderr, "cpu0: MSR_TURBO_RATIO_LIMIT1: 0x%08llx\n", msr);
1207 + 	fprintf(stderr, "cpu%d: MSR_TURBO_RATIO_LIMIT1: 0x%08llx\n", base_cpu, msr);
1208 1208 
1209 1209 	ratio = (msr >> 56) & 0xFF;
1210 1210 	if (ratio)
···
1256 1256 
1257 1257 	get_msr(base_cpu, MSR_TURBO_RATIO_LIMIT, &msr);
1258 1258 
1259 - 	fprintf(stderr, "cpu0: MSR_TURBO_RATIO_LIMIT: 0x%08llx\n", msr);
1259 + 	fprintf(stderr, "cpu%d: MSR_TURBO_RATIO_LIMIT: 0x%08llx\n", base_cpu, msr);
1260 1260 
1261 1261 	ratio = (msr >> 56) & 0xFF;
1262 1262 	if (ratio)
···
1312 1312 
1313 1313 	get_msr(base_cpu, MSR_NHM_TURBO_RATIO_LIMIT, &msr);
1314 1314 
1315 - 	fprintf(stderr, "cpu0: MSR_NHM_TURBO_RATIO_LIMIT: 0x%08llx\n",
1316 - 		msr);
1315 + 	fprintf(stderr, "cpu%d: MSR_NHM_TURBO_RATIO_LIMIT: 0x%08llx\n",
1316 + 		base_cpu, msr);
1317 1317 
1318 1318 	/**
1319 1319 	 * Turbo encoding in KNL is as follows:
···
1371 1371 #define SNB_C1_AUTO_UNDEMOTE (1UL << 27)
1372 1372 #define SNB_C3_AUTO_UNDEMOTE (1UL << 28)
1373 1373 
1374 - 	fprintf(stderr, "cpu0: MSR_NHM_SNB_PKG_CST_CFG_CTL: 0x%08llx", msr);
1374 + 	fprintf(stderr, "cpu%d: MSR_NHM_SNB_PKG_CST_CFG_CTL: 0x%08llx", base_cpu, msr);
1375 1375 
1376 1376 	fprintf(stderr, " (%s%s%s%s%slocked: pkg-cstate-limit=%d: %s)\n",
1377 1377 		(msr & SNB_C3_AUTO_UNDEMOTE) ? "UNdemote-C3, " : "",
···
1382 1382 		(unsigned int)msr & 7,
1383 1383 		pkg_cstate_limit_strings[pkg_cstate_limit]);
1384 1384 	return;
1385 + }
1386 + 
1387 + static void
1388 + dump_config_tdp(void)
1389 + {
1390 + 	unsigned long long msr;
1391 + 
1392 + 	get_msr(base_cpu, MSR_CONFIG_TDP_NOMINAL, &msr);
1393 + 	fprintf(stderr, "cpu%d: MSR_CONFIG_TDP_NOMINAL: 0x%08llx", base_cpu, msr);
1394 + 	fprintf(stderr, " (base_ratio=%d)\n", (unsigned int)msr & 0xEF);
1395 + 
1396 + 	get_msr(base_cpu, MSR_CONFIG_TDP_LEVEL_1, &msr);
1397 + 	fprintf(stderr, "cpu%d: MSR_CONFIG_TDP_LEVEL_1: 0x%08llx (", base_cpu, msr);
1398 + 	if (msr) {
1399 + 		fprintf(stderr, "PKG_MIN_PWR_LVL1=%d ", (unsigned int)(msr >> 48) & 0xEFFF);
1400 + 		fprintf(stderr, "PKG_MAX_PWR_LVL1=%d ", (unsigned int)(msr >> 32) & 0xEFFF);
1401 + 		fprintf(stderr, "LVL1_RATIO=%d ", (unsigned int)(msr >> 16) & 0xEF);
1402 + 		fprintf(stderr, "PKG_TDP_LVL1=%d", (unsigned int)(msr) & 0xEFFF);
1403 + 	}
1404 + 	fprintf(stderr, ")\n");
1405 + 
1406 + 	get_msr(base_cpu, MSR_CONFIG_TDP_LEVEL_2, &msr);
1407 + 	fprintf(stderr, "cpu%d: MSR_CONFIG_TDP_LEVEL_2: 0x%08llx (", base_cpu, msr);
1408 + 	if (msr) {
1409 + 		fprintf(stderr, "PKG_MIN_PWR_LVL2=%d ", (unsigned int)(msr >> 48) & 0xEFFF);
1410 + 		fprintf(stderr, "PKG_MAX_PWR_LVL2=%d ", (unsigned int)(msr >> 32) & 0xEFFF);
1411 + 		fprintf(stderr, "LVL2_RATIO=%d ", (unsigned int)(msr >> 16) & 0xEF);
1412 + 		fprintf(stderr, "PKG_TDP_LVL2=%d", (unsigned int)(msr) & 0xEFFF);
1413 + 	}
1414 + 	fprintf(stderr, ")\n");
1415 + 
1416 + 	get_msr(base_cpu, MSR_CONFIG_TDP_CONTROL, &msr);
1417 + 	fprintf(stderr, "cpu%d: MSR_CONFIG_TDP_CONTROL: 0x%08llx (", base_cpu, msr);
1418 + 	if ((msr) & 0x3)
1419 + 		fprintf(stderr, "TDP_LEVEL=%d ", (unsigned int)(msr) & 0x3);
1420 + 	fprintf(stderr, " lock=%d", (unsigned int)(msr >> 31) & 1);
1421 + 	fprintf(stderr, ")\n");
1422 + 
1423 + 	get_msr(base_cpu, MSR_TURBO_ACTIVATION_RATIO, &msr);
1424 + 	fprintf(stderr, "cpu%d: MSR_TURBO_ACTIVATION_RATIO: 0x%08llx (", base_cpu, msr);
1425 + 	fprintf(stderr, "MAX_NON_TURBO_RATIO=%d", (unsigned int)(msr) & 0xEF);
1426 + 	fprintf(stderr, " lock=%d", (unsigned int)(msr >> 31) & 1);
1427 + 	fprintf(stderr, ")\n");
1385 1428 }
1386 1429 
1387 1430 void free_all_buffers(void)
···
1916 1873 	return 0;
1917 1874 	}
1918 1875 }
1876 + int has_config_tdp(unsigned int family, unsigned int model)
1877 + {
1878 + 	if (!genuine_intel)
1879 + 		return 0;
1880 + 
1881 + 	if (family != 6)
1882 + 		return 0;
1883 + 
1884 + 	switch (model) {
1885 + 	case 0x3A: /* IVB */
1886 + 	case 0x3E: /* IVB Xeon */
1887 + 
1888 + 	case 0x3C: /* HSW */
1889 + 	case 0x3F: /* HSX */
1890 + 	case 0x45: /* HSW */
1891 + 	case 0x46: /* HSW */
1892 + 	case 0x3D: /* BDW */
1893 + 	case 0x47: /* BDW */
1894 + 	case 0x4F: /* BDX */
1895 + 	case 0x56: /* BDX-DE */
1896 + 	case 0x4E: /* SKL */
1897 + 	case 0x5E: /* SKL */
1898 + 
1899 + 	case 0x57: /* Knights Landing */
1900 + 		return 1;
1901 + 	default:
1902 + 		return 0;
1903 + 	}
1904 + }
1905 + 
1919 1906 static void
1920 1907 dump_cstate_pstate_config_info(family, model)
1921 1908 {
···
1965 1892 
1966 1893 	if (has_knl_turbo_ratio_limit(family, model))
1967 1894 		dump_knl_turbo_ratio_limits();
1895 + 
1896 + 	if (has_config_tdp(family, model))
1897 + 		dump_config_tdp();
1968 1898 
1969 1899 	dump_nhm_cst_cfg();
1970 1900 }
···
3090 3014 }
3091 3015 
3092 3016 void print_version() {
3093 - 	fprintf(stderr, "turbostat version 4.7 27-May, 2015"
3017 + 	fprintf(stderr, "turbostat version 4.7 17-June, 2015"
3094 3018 		" - Len Brown <lenb@kernel.org>\n");
3095 3019 }
3096 3020 
···
3118 3042 
3119 3043 	progname = argv[0];
3120 3044 
3121 - 	while ((opt = getopt_long_only(argc, argv, "C:c:Ddhi:JM:m:PpST:v",
3045 + 	while ((opt = getopt_long_only(argc, argv, "+C:c:Ddhi:JM:m:PpST:v",
3122 3046 				long_options, &option_index)) != -1) {
3123 3047 		switch (opt) {
3124 3048 		case 'C':
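The decoding in dump_config_tdp() above is plain shift-then-mask extraction of MSR bit-fields. As an aside, a full 8-bit field mask is 0xFF; the 0xEF and 0xEFFF masks in the hunk clear bit 4 of each field and appear to be typos in this commit. A standalone sketch of the pattern (the `msr_field` helper and the sample value are made up for illustration, not part of turbostat):

```c
#include <assert.h>
#include <stdint.h>

/* Extract "width" bits of "msr" starting at bit "shift" -- the
 * shift-then-mask pattern dump_config_tdp() applies to the
 * MSR_CONFIG_TDP_* registers. */
static unsigned int msr_field(uint64_t msr, int shift, int width)
{
	/* (1ULL << width) - 1 builds a mask of "width" one-bits. */
	return (unsigned int)((msr >> shift) & ((1ULL << width) - 1));
}
```

For example, with a made-up level-1 value holding ratio 0x18 in bits 23:16 and package TDP 0x50 in bits 14:0, `msr_field(msr, 16, 8)` yields 0x18 and `msr_field(msr, 0, 15)` yields 0x50; the 8-bit call is equivalent to masking with 0xFF, which is what the 0xEF in the hunk presumably intended.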