Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm+acpi-3.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm into next

Pull ACPI and power management updates from Rafael Wysocki:
"ACPICA is the leader this time (63 commits), followed by cpufreq (28
commits), devfreq (15 commits), system suspend/hibernation (12
commits), ACPI video and ACPI device enumeration (10 commits each).

We have no major new features this time, but there are a few
significant changes of how things work. The most visible one will
probably be that we are now going to create platform devices rather
than PNP devices by default for ACPI device objects with _HID. That
was long overdue and will be really necessary to be able to use the
same drivers for the same hardware blocks on ACPI and DT-based systems
going forward. We're not expecting fallout from this one (as usual),
but it's something to watch nevertheless.

The second change having a chance to be visible is that ACPI video
will now default to using native backlight rather than the ACPI
backlight interface which should generally help systems with broken
Win8 BIOSes. We're hoping that all problems with the native backlight
handling that we had previously have been addressed and we are in a
good enough shape to flip the default, but this change should be easy
enough to revert if need be.

In addition to that, the system suspend core has a new mechanism to
allow runtime-suspended devices to stay suspended throughout system
suspend/resume transitions if some extra conditions are met
(generally, they are related to coordination within device hierarchy).
However, enabling this feature requires cooperation from the bus type
layer and for now it has only been implemented for the ACPI PM domain
(used by ACPI-enumerated platform devices mostly today).

Also, the acpidump utility that was previously shipped as a separate
tool will now be provided by the upstream ACPICA along with the rest
of ACPICA code, which will allow it to be more up to date and better
supported, and we have one new cpuidle driver (ARM clps711x).

The rest is improvements related to certain specific use cases,
cleanups and fixes all over the place.

Specifics:

- ACPICA update to upstream version 20140424. That includes a number
of fixes and improvements related to things like GPE handling,
table loading, headers, memory mapping and unmapping, DSDT/SSDT
overriding, and the Unload() operator. The acpidump utility from
upstream ACPICA is included too. From Bob Moore, Lv Zheng, David
Box, David Binderman, and Colin Ian King.

- Fixes and cleanups related to ACPI video and backlight interfaces
from Hans de Goede. That includes blacklist entries for some new
machines and using native backlight by default.

- ACPI device enumeration changes to create platform devices rather
than PNP devices for ACPI device objects with _HID by default. PNP
devices will still be created for the ACPI device objects with
device IDs corresponding to real PNP devices, so that change should
not break things left and right, and we're expecting to see more
and more ACPI-enumerated platform devices in the future. From
Zhang Rui and Rafael J Wysocki.

- Updates for the ACPI LPSS (Low-Power Subsystem) driver allowing it
to handle system suspend/resume on Asus T100 correctly. From
Heikki Krogerus and Rafael J Wysocki.

- PM core update introducing a mechanism to allow runtime-suspended
devices to stay suspended over system suspend/resume transitions if
certain additional conditions related to coordination within device
hierarchy are met. Related PM documentation update and ACPI PM
domain support for the new feature. From Rafael J Wysocki.

- Fixes and improvements related to the "freeze" sleep state. They
affect several places including cpuidle, PM core, ACPI core, and
the ACPI battery driver. From Rafael J Wysocki and Zhang Rui.

- Miscellaneous fixes and updates of the ACPI core from Aaron Lu,
Bjørn Mork, Hanjun Guo, Lan Tianyu, and Rafael J Wysocki.

- Fixes and cleanups for the ACPI processor and ACPI PAD (Processor
Aggregator Device) drivers from Baoquan He, Manuel Schölling, Tony
Camuso, and Toshi Kani.

- System suspend/resume optimization in the ACPI battery driver from
Lan Tianyu.

- OPP (Operating Performance Points) subsystem updates from Chander
Kashyap, Mark Brown, and Nishanth Menon.

- cpufreq core fixes, updates and cleanups from Srivatsa S Bhat,
Stratos Karafotis, and Viresh Kumar.

- Updates, fixes and cleanups for the Tegra, powernow-k8, imx6q,
s5pv210, nforce2, and powernv cpufreq drivers from Brian Norris,
Jingoo Han, Paul Bolle, Philipp Zabel, Stratos Karafotis, and
Viresh Kumar.

- intel_pstate driver fixes and cleanups from Dirk Brandewie, Doug
Smythies, and Stratos Karafotis.

- Enabling the big.LITTLE cpufreq driver on arm64 from Mark Brown.

- Fix for the cpuidle menu governor from Chander Kashyap.

- New ARM clps711x cpuidle driver from Alexander Shiyan.

- Hibernate core fixes and cleanups from Chen Gang, Dan Carpenter,
Fabian Frederick, Pali Rohár, and Sebastian Capella.

- Intel RAPL (Running Average Power Limit) driver updates from Jacob
Pan.

- PNP subsystem updates from Bjorn Helgaas and Fabian Frederick.

- devfreq core updates from Chanwoo Choi and Paul Bolle.

- devfreq updates for exynos4 and exynos5 from Chanwoo Choi and
Bartlomiej Zolnierkiewicz.

- turbostat tool fix from Jean Delvare.

- cpupower tool updates from Prarit Bhargava, Ramkumar Ramachandra
and Thomas Renninger.

- New ACPI ec_access.c tool for poking at the EC in a safe way from
Thomas Renninger"

* tag 'pm+acpi-3.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (187 commits)
ACPICA: Namespace: Remove _PRP method support.
intel_pstate: Improve initial busy calculation
intel_pstate: add sample time scaling
intel_pstate: Correct rounding in busy calculation
intel_pstate: Remove C0 tracking
PM / hibernate: fixed typo in comment
ACPI: Fix x86 regression related to early mapping size limitation
ACPICA: Tables: Add mechanism to control early table checksum verification.
ACPI / scan: use platform bus type by default for _HID enumeration
ACPI / scan: always register ACPI LPSS scan handler
ACPI / scan: always register memory hotplug scan handler
ACPI / scan: always register container scan handler
ACPI / scan: Change the meaning of missing .attach() in scan handlers
ACPI / scan: introduce platform_id device PNP type flag
ACPI / scan: drop unsupported serial IDs from PNP ACPI scan handler ID list
ACPI / scan: drop IDs that do not comply with the ACPI PNP ID rule
ACPI / PNP: use device ID list for PNPACPI device enumeration
ACPI / scan: .match() callback for ACPI scan handlers
ACPI / battery: wakeup the system only when necessary
power_supply: allow power supply devices registered w/o wakeup source
...

+9463 -3813
+2 -2
Documentation/ABI/testing/sysfs-devices-system-cpu
···
 
 What:		/sys/devices/system/cpu/cpu#/cpufreq/*
 Date:		pre-git history
-Contact:	cpufreq@vger.kernel.org
+Contact:	linux-pm@vger.kernel.org
 Description:	Discover and change clock speed of CPUs
 
 		Clock scaling allows you to change the clock speed of the
···
 
 What:		/sys/devices/system/cpu/cpu#/cpufreq/freqdomain_cpus
 Date:		June 2013
-Contact:	cpufreq@vger.kernel.org
+Contact:	linux-pm@vger.kernel.org
 Description:	Discover CPUs in the same CPU frequency coordination domain
 
 		freqdomain_cpus is the list of CPUs (online+offline) that share
+20 -9
Documentation/ABI/testing/sysfs-power
···
 		subsystem.
 
 What:		/sys/power/state
-Date:		August 2006
+Date:		May 2014
 Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
-		The /sys/power/state file controls the system power state.
-		Reading from this file returns what states are supported,
-		which is hard-coded to 'freeze' (Low-Power Idle), 'standby'
-		(Power-On Suspend), 'mem' (Suspend-to-RAM), and 'disk'
-		(Suspend-to-Disk).
+		The /sys/power/state file controls system sleep states.
+		Reading from this file returns the available sleep state
+		labels, which may be "mem", "standby", "freeze" and "disk"
+		(hibernation). The meanings of the first three labels depend on
+		the relative_sleep_states command line argument as follows:
+		1) relative_sleep_states = 1
+		   "mem", "standby", "freeze" represent non-hibernation sleep
+		   states from the deepest ("mem", always present) to the
+		   shallowest ("freeze"). "standby" and "freeze" may or may
+		   not be present depending on the capabilities of the
+		   platform. "freeze" can only be present if "standby" is
+		   present.
+		2) relative_sleep_states = 0 (default)
+		   "mem" - "suspend-to-RAM", present if supported.
+		   "standby" - "power-on suspend", present if supported.
+		   "freeze" - "suspend-to-idle", always present.
 
 		Writing to this file one of these strings causes the system to
-		transition into that state. Please see the file
-		Documentation/power/states.txt for a description of each of
-		these states.
+		transition into the corresponding state, if available. See
+		Documentation/power/states.txt for a description of what
+		"suspend-to-RAM", "power-on suspend" and "suspend-to-idle" mean.
 
 What:		/sys/power/disk
 Date:		September 2006
+29
Documentation/cpu-freq/core.txt
···
 ---------
 1. CPUFreq core and interfaces
 2. CPUFreq notifiers
+3. CPUFreq Table Generation with Operating Performance Point (OPP)
 
 1. General Information
 =======================
···
 	cpu	- number of the affected CPU
 	old	- old frequency
 	new	- new frequency
+
+3. CPUFreq Table Generation with Operating Performance Point (OPP)
+==================================================================
+For details about OPP, see Documentation/power/opp.txt
+
+dev_pm_opp_init_cpufreq_table - cpufreq framework typically is initialized with
+	cpufreq_frequency_table_cpuinfo which is provided with the list of
+	frequencies that are available for operation. This function provides
+	a ready to use conversion routine to translate the OPP layer's internal
+	information about the available frequencies into a format readily
+	providable to cpufreq.
+
+	WARNING: Do not use this function in interrupt context.
+
+	Example:
+	 soc_pm_init()
+	 {
+		/* Do things */
+		r = dev_pm_opp_init_cpufreq_table(dev, &freq_table);
+		if (!r)
+			cpufreq_frequency_table_cpuinfo(policy, freq_table);
+		/* Do other things */
+	 }
+
+	NOTE: This function is available only if CONFIG_CPU_FREQ is enabled in
+	addition to CONFIG_PM_OPP.
+
+dev_pm_opp_free_cpufreq_table - Free up the table allocated by dev_pm_opp_init_cpufreq_table
+19
Documentation/cpu-freq/cpu-drivers.txt
···
 	stage. Just pass the values to this function, and the unsigned int
 	index returns the number of the frequency table entry which contains
 	the frequency the CPU shall be set to.
+
+The following macros can be used as iterators over cpufreq_frequency_table:
+
+cpufreq_for_each_entry(pos, table) - iterates over all entries of frequency
+table.
+
+cpufreq_for_each_valid_entry(pos, table) - iterates over all entries,
+excluding CPUFREQ_ENTRY_INVALID frequencies.
+Use arguments "pos" - a cpufreq_frequency_table * as a loop cursor and
+"table" - the cpufreq_frequency_table * you want to iterate over.
+
+For example:
+
+	struct cpufreq_frequency_table *pos, *driver_freq_table;
+
+	cpufreq_for_each_entry(pos, driver_freq_table) {
+		/* Do something with pos */
+		pos->frequency = ...
+	}
+2 -2
Documentation/cpu-freq/index.txt
···
 ------------
 There is a CPU frequency changing CVS commit and general list where
 you can report bugs, problems or submit patches. To post a message,
-send an email to cpufreq@vger.kernel.org, to subscribe go to
-http://vger.kernel.org/vger-lists.html#cpufreq and follow the
+send an email to linux-pm@vger.kernel.org, to subscribe go to
+http://vger.kernel.org/vger-lists.html#linux-pm and follow the
 instructions there.
 
 Links
+22 -2
Documentation/kernel-parameters.txt
···
 			unusable. The "log_buf_len" parameter may be useful
 			if you need to capture more output.
 
+	acpi_force_table_verification	[HW,ACPI]
+			Enable table checksum verification during early stage.
+			By default, this is disabled due to x86 early mapping
+			size limitation.
+
 	acpi_irq_balance [HW,ACPI]
 			ACPI will balance active IRQs
 			default in APIC mode
···
 			This feature is enabled by default.
 			This option allows to turn off the feature.
 
-	acpi_no_auto_ssdt	[HW,ACPI] Disable automatic loading of SSDT
+	acpi_no_static_ssdt	[HW,ACPI]
+			Disable installation of static SSDTs at early boot time
+			By default, SSDTs contained in the RSDT/XSDT will be
+			installed automatically and they will appear under
+			/sys/firmware/acpi/tables.
+			This option turns off this feature.
+			Note that specifying this option does not affect
+			dynamic table installation which will install SSDT
+			tables to /sys/firmware/acpi/tables/dynamic.
 
 	acpica_no_return_repair [HW, ACPI]
 			Disable AML predefined validation mechanism
···
 			[KNL, SMP] Set scheduler's default relax_domain_level.
 			See Documentation/cgroups/cpusets.txt.
 
+	relative_sleep_states=
+			[SUSPEND] Use sleep state labeling where the deepest
+			state available other than hibernation is always "mem".
+			Format: { "0" | "1" }
+			0 -- Traditional sleep state labels.
+			1 -- Relative sleep state labels.
+
 	reserve=	[KNL,BUGS] Force the kernel to ignore some iomem area
 
 	reservetop=	[X86-32]
···
 			the allocated input device; If set to 0, video driver
 			will only send out the event without touching backlight
 			brightness level.
-			default: 1
+			default: 0
 
 	virtio_mmio.device=
 			[VMMIO] Memory mapped virtio (platform) device.
+30 -4
Documentation/power/devices.txt
···
 
 Copyright (c) 2010-2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
 Copyright (c) 2010 Alan Stern <stern@rowland.harvard.edu>
+Copyright (c) 2014 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
 
 
 Most of the code in Linux is device drivers, so most of the Linux power
···
 	driver in some way for the upcoming system power transition, but it
 	should not put the device into a low-power state.
 
+	For devices supporting runtime power management, the return value of the
+	prepare callback can be used to indicate to the PM core that it may
+	safely leave the device in runtime suspend (if runtime-suspended
+	already), provided that all of the device's descendants are also left in
+	runtime suspend.  Namely, if the prepare callback returns a positive
+	number and that happens for all of the descendants of the device too,
+	and all of them (including the device itself) are runtime-suspended, the
+	PM core will skip the suspend, suspend_late and suspend_noirq suspend
+	phases as well as the resume_noirq, resume_early and resume phases of
+	the following system resume for all of these devices.  In that case,
+	the complete callback will be called directly after the prepare callback
+	and is entirely responsible for bringing the device back to the
+	functional state as appropriate.
+
     2.	The suspend methods should quiesce the device to stop it from performing
 	I/O.  They also may save the device registers and put it into the
 	appropriate low-power state, depending on the bus type the device is on,
···
 	the resume callbacks occur; it's not necessary to wait until the
 	complete phase.
 
+	Moreover, if the preceding prepare callback returned a positive number,
+	the device may have been left in runtime suspend throughout the whole
+	system suspend and resume (the suspend, suspend_late, suspend_noirq
+	phases of system suspend and the resume_noirq, resume_early, resume
+	phases of system resume may have been skipped for it).  In that case,
+	the complete callback is entirely responsible for bringing the device
+	back to the functional state after system suspend if necessary.  [For
+	example, it may need to queue up a runtime resume request for the device
+	for this purpose.]  To check if that is the case, the complete callback
+	can consult the device's power.direct_complete flag.  Namely, if that
+	flag is set when the complete callback is being run, it has been called
+	directly after the preceding prepare and special action may be required
+	to make the device work correctly afterward.
+
 At the end of these phases, drivers should be as functional as they were before
 suspending: I/O can be performed using DMA and IRQs, and the relevant clocks are
-gated on.  Even if the device was in a low-power state before the system sleep
-because of runtime power management, afterwards it should be back in its
-full-power state.  There are multiple reasons why it's best to do this; they are
-discussed in more detail in Documentation/power/runtime_pm.txt.
+gated on.
 
 However, the details here may again be platform-specific.  For example,
 some systems support multiple "run" states, and the mode in effect at
+5 -35
Documentation/power/opp.txt
···
 3. OPP Search Functions
 4. OPP Availability Control Functions
 5. OPP Data Retrieval Functions
-6. Cpufreq Table Generation
-7. Data Structures
+6. Data Structures
 
 1. Introduction
 ===============
···
 OPP library facilitates this concept in it's implementation. The following
 operational functions operate only on available opps:
 opp_find_freq_{ceil, floor}, dev_pm_opp_get_voltage, dev_pm_opp_get_freq, dev_pm_opp_get_opp_count
-and dev_pm_opp_init_cpufreq_table
 
 dev_pm_opp_find_freq_exact is meant to be used to find the opp pointer which can then
 be used for dev_pm_opp_enable/disable functions to make an opp available as required.
···
 opp_get_{voltage, freq, opp_count} fall into this category.
 
 opp_{add,enable,disable} are updaters which use mutex and implement it's own
-RCU locking mechanisms. dev_pm_opp_init_cpufreq_table acts as an updater and uses
-mutex to implment RCU updater strategy. These functions should *NOT* be called
-under RCU locks and other contexts that prevent blocking functions in RCU or
-mutex operations from working.
+RCU locking mechanisms. These functions should *NOT* be called under RCU locks
+and other contexts that prevent blocking functions in RCU or mutex operations
+from working.
 
 2. Initial OPP List Registration
 ================================
···
 	/* Do other things */
 }
 
-6. Cpufreq Table Generation
-===========================
-dev_pm_opp_init_cpufreq_table - cpufreq framework typically is initialized with
-	cpufreq_frequency_table_cpuinfo which is provided with the list of
-	frequencies that are available for operation. This function provides
-	a ready to use conversion routine to translate the OPP layer's internal
-	information about the available frequencies into a format readily
-	providable to cpufreq.
-
-	WARNING: Do not use this function in interrupt context.
-
-	Example:
-	 soc_pm_init()
-	 {
-		/* Do things */
-		r = dev_pm_opp_init_cpufreq_table(dev, &freq_table);
-		if (!r)
-			cpufreq_frequency_table_cpuinfo(policy, freq_table);
-		/* Do other things */
-	 }
-
-	NOTE: This function is available only if CONFIG_CPU_FREQ is enabled in
-	addition to CONFIG_PM as power management feature is required to
-	dynamically scale voltage and frequency in a system.
-
-dev_pm_opp_free_cpufreq_table - Free up the table allocated by dev_pm_opp_init_cpufreq_table
-
-7. Data Structures
+6. Data Structures
 ==================
 Typically an SoC contains multiple voltage domains which are variable. Each
 domain is represented by a device pointer. The relationship to OPP can be
+27 -8
Documentation/power/runtime_pm.txt
···
 
 (C) 2009-2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
 (C) 2010 Alan Stern <stern@rowland.harvard.edu>
+(C) 2014 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
 
 1. Introduction
···
   bool pm_runtime_status_suspended(struct device *dev);
     - return true if the device's runtime PM status is 'suspended'
 
+  bool pm_runtime_suspended_if_enabled(struct device *dev);
+    - return true if the device's runtime PM status is 'suspended' and its
+      'power.disable_depth' field is equal to 1
+
   void pm_runtime_allow(struct device *dev);
     - set the power.runtime_auto flag for the device and decrease its usage
       counter (used by the /sys/devices/.../power/control interface to
···
 be more efficient to leave the devices that had been suspended before the system
 suspend began in the suspended state.
 
+To this end, the PM core provides a mechanism allowing some coordination between
+different levels of device hierarchy.  Namely, if a system suspend .prepare()
+callback returns a positive number for a device, that indicates to the PM core
+that the device appears to be runtime-suspended and its state is fine, so it
+may be left in runtime suspend provided that all of its descendants are also
+left in runtime suspend.  If that happens, the PM core will not execute any
+system suspend and resume callbacks for all of those devices, except for the
+complete callback, which is then entirely responsible for handling the device
+as appropriate.  This only applies to system suspend transitions that are not
+related to hibernation (see Documentation/power/devices.txt for more
+information).
+
 The PM core does its best to reduce the probability of race conditions between
 the runtime PM and system suspend/resume (and hibernation) callbacks by carrying
 out the following operations:
 
-  * During system suspend it calls pm_runtime_get_noresume() and
-    pm_runtime_barrier() for every device right before executing the
-    subsystem-level .suspend() callback for it.  In addition to that it calls
-    __pm_runtime_disable() with 'false' as the second argument for every device
-    right before executing the subsystem-level .suspend_late() callback for it.
+  * During system suspend pm_runtime_get_noresume() is called for every device
+    right before executing the subsystem-level .prepare() callback for it and
+    pm_runtime_barrier() is called for every device right before executing the
+    subsystem-level .suspend() callback for it.  In addition to that the PM core
+    calls __pm_runtime_disable() with 'false' as the second argument for every
+    device right before executing the subsystem-level .suspend_late() callback
+    for it.
 
-  * During system resume it calls pm_runtime_enable() and pm_runtime_put()
-    for every device right after executing the subsystem-level .resume_early()
-    callback and right after executing the subsystem-level .resume() callback
+  * During system resume pm_runtime_enable() and pm_runtime_put() are called for
+    every device right after executing the subsystem-level .resume_early()
+    callback and right after executing the subsystem-level .complete() callback
     for it, respectively.
 
 7. Generic subsystem callbacks
+56 -31
Documentation/power/states.txt
···
+System Power Management Sleep States
 
-System Power Management States
+(C) 2014 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
 
+The kernel supports up to four system sleep states generically, although three
+of them depend on the platform support code to implement the low-level details
+for each state.
 
-The kernel supports four power management states generically, though
-one is generic and the other three are dependent on platform support
-code to implement the low-level details for each state.
-This file describes each state, what they are
-commonly called, what ACPI state they map to, and what string to write
-to /sys/power/state to enter that state
+The states are represented by strings that can be read or written to the
+/sys/power/state file.  Those strings may be "mem", "standby", "freeze" and
+"disk", where the last one always represents hibernation (Suspend-To-Disk) and
+the meaning of the remaining ones depends on the relative_sleep_states command
+line argument.
 
-state:		Freeze / Low-Power Idle
+For relative_sleep_states=1, the strings "mem", "standby" and "freeze" label the
+available non-hibernation sleep states from the deepest to the shallowest,
+respectively.  In that case, "mem" is always present in /sys/power/state,
+because there is at least one non-hibernation sleep state in every system.  If
+the given system supports two non-hibernation sleep states, "standby" is present
+in /sys/power/state in addition to "mem".  If the system supports three
+non-hibernation sleep states, "freeze" will be present in /sys/power/state in
+addition to "mem" and "standby".
+
+For relative_sleep_states=0, which is the default, the following descriptions
+apply.
+
+state:		Suspend-To-Idle
 ACPI state:	S0
-String:		"freeze"
+Label:		"freeze"
 
-This state is a generic, pure software, light-weight, low-power state.
-It allows more energy to be saved relative to idle by freezing user
+This state is a generic, pure software, light-weight, system sleep state.
+It allows more energy to be saved relative to runtime idle by freezing user
 space and putting all I/O devices into low-power states (possibly
 lower-power than available at run time), such that the processors can
 spend more time in their idle states.
-This state can be used for platforms without Standby/Suspend-to-RAM
+
+This state can be used for platforms without Power-On Suspend/Suspend-to-RAM
 support, or it can be used in addition to Suspend-to-RAM (memory sleep)
-to provide reduced resume latency.
+to provide reduced resume latency.  It is always supported.
 
 
 State:		Standby / Power-On Suspend
 ACPI State:	S1
-String:		"standby"
+Label:		"standby"
 
-This state offers minimal, though real, power savings, while providing
-a very low-latency transition back to a working system.  No operating
-state is lost (the CPU retains power), so the system easily starts up
+This state, if supported, offers moderate, though real, power savings, while
+providing a relatively low-latency transition back to a working system.  No
+operating state is lost (the CPU retains power), so the system easily starts up
 again where it left off.
 
-We try to put devices in a low-power state equivalent to D1, which
-also offers low power savings, but low resume latency.  Not all devices
-support D1, and those that don't are left on.
+In addition to freezing user space and putting all I/O devices into low-power
+states, which is done for Suspend-To-Idle too, nonboot CPUs are taken offline
+and all low-level system functions are suspended during transitions into this
+state.  For this reason, it should allow more energy to be saved relative to
+Suspend-To-Idle, but the resume latency will generally be greater than for that
+state.
 
 
 State:		Suspend-to-RAM
 ACPI State:	S3
-String:		"mem"
+Label:		"mem"
 
-This state offers significant power savings as everything in the
-system is put into a low-power state, except for memory, which is
-placed in self-refresh mode to retain its contents.
+This state, if supported, offers significant power savings as everything in the
+system is put into a low-power state, except for memory, which should be placed
+into the self-refresh mode to retain its contents.  All of the steps carried out
+when entering Power-On Suspend are also carried out during transitions to STR.
+Additional operations may take place depending on the platform capabilities.  In
+particular, on ACPI systems the kernel passes control to the BIOS (platform
+firmware) as the last step during STR transitions and that usually results in
+powering down some more low-level components that aren't directly controlled by
+the kernel.
 
-System and device state is saved and kept in memory.  All devices are
-suspended and put into D3.  In many cases, all peripheral buses lose
-power when entering STR, so devices must be able to handle the
-transition back to the On state.
+System and device state is saved and kept in memory.  All devices are suspended
+and put into low-power states.  In many cases, all peripheral buses lose power
+when entering STR, so devices must be able to handle the transition back to the
+"on" state.
 
-For at least ACPI, STR requires some minimal boot-strapping code to
-resume the system from STR.  This may be true on other platforms.
+For at least ACPI, STR requires some minimal boot-strapping code to resume the
+system from it.  This may be the case on other platforms too.
 
 
 State:		Suspend-to-disk
 ACPI State:	S4
-String:		"disk"
+Label:		"disk"
 
 This state offers the greatest power savings, and can be used even in
 the absence of low-level platform support for power management.  This
+4 -1
Documentation/power/swsusp.txt
···
 
 A: Try running
 
-cat `cat /proc/[0-9]*/maps | grep / | sed 's:.* /:/:' | sort -u` > /dev/null
+cat /proc/[0-9]*/maps | grep / | sed 's:.* /:/:' | sort -u | while read file
+do
+	test -f "$file" && cat "$file" > /dev/null
+done
 
 after resume. swapoff -a; swapon -a may also be useful.
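The rewritten command above streams one path per line and skips paths that no longer exist, avoiding the "argument list too long" and missing-file failures of the old backtick form. The same pattern can be demonstrated on throwaway files (the temp directory and file names here are made up for illustration):

```shell
# Create one real file and reference one missing file, then apply the
# same while-read pattern: only the existing file is read, the missing
# one is silently skipped instead of aborting the whole command.
tmpdir=$(mktemp -d)
echo "data" > "$tmpdir/present"
printf '%s\n' "$tmpdir/present" "$tmpdir/missing" | while read file
do
	test -f "$file" && cat "$file" > /dev/null
done
echo "done"
rm -rf "$tmpdir"
```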
-3
MAINTAINERS
···
 CPU FREQUENCY DRIVERS
 M:	Rafael J. Wysocki <rjw@rjwysocki.net>
 M:	Viresh Kumar <viresh.kumar@linaro.org>
-L:	cpufreq@vger.kernel.org
 L:	linux-pm@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
···
 CPU FREQUENCY DRIVERS - ARM BIG LITTLE
 M:	Viresh Kumar <viresh.kumar@linaro.org>
 M:	Sudeep Holla <sudeep.holla@arm.com>
-L:	cpufreq@vger.kernel.org
 L:	linux-pm@vger.kernel.org
 W:	http://www.arm.com/products/processors/technologies/biglittleprocessing.php
 S:	Maintained
···
 
 PNP SUPPORT
 M:	Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-M:	Bjorn Helgaas <bhelgaas@google.com>
 S:	Maintained
 F:	drivers/pnp/
 
+5 -4
arch/arm/mach-davinci/da850.c
···
 
 static int da850_round_armrate(struct clk *clk, unsigned long rate)
 {
-	int i, ret = 0, diff;
+	int ret = 0, diff;
 	unsigned int best = (unsigned int) -1;
 	struct cpufreq_frequency_table *table = cpufreq_info.freq_table;
+	struct cpufreq_frequency_table *pos;
 
 	rate /= 1000; /* convert to kHz */
 
-	for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
-		diff = table[i].frequency - rate;
+	cpufreq_for_each_entry(pos, table) {
+		diff = pos->frequency - rate;
 		if (diff < 0)
 			diff = -diff;
 
 		if (diff < best) {
 			best = diff;
-			ret = table[i].frequency;
+			ret = pos->frequency;
 		}
 	}
+56
arch/ia64/include/asm/acenv.h
···
+/*
+ * IA64 specific ACPICA environments and implementation
+ *
+ * Copyright (C) 2014, Intel Corporation
+ *   Author: Lv Zheng <lv.zheng@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef _ASM_IA64_ACENV_H
+#define _ASM_IA64_ACENV_H
+
+#include <asm/intrinsics.h>
+
+#define COMPILER_DEPENDENT_INT64	long
+#define COMPILER_DEPENDENT_UINT64	unsigned long
+
+/* Asm macros */
+
+#ifdef CONFIG_ACPI
+
+static inline int
+ia64_acpi_acquire_global_lock(unsigned int *lock)
+{
+	unsigned int old, new, val;
+	do {
+		old = *lock;
+		new = (((old & ~0x3) + 2) + ((old >> 1) & 0x1));
+		val = ia64_cmpxchg4_acq(lock, new, old);
+	} while (unlikely (val != old));
+	return (new < 3) ? -1 : 0;
+}
+
+static inline int
+ia64_acpi_release_global_lock(unsigned int *lock)
+{
+	unsigned int old, new, val;
+	do {
+		old = *lock;
+		new = old & ~0x3;
+		val = ia64_cmpxchg4_acq(lock, new, old);
+	} while (unlikely (val != old));
+	return old & 0x1;
+}
+
+#define ACPI_ACQUIRE_GLOBAL_LOCK(facs, Acq) \
+	((Acq) = ia64_acpi_acquire_global_lock(&facs->global_lock))
+
+#define ACPI_RELEASE_GLOBAL_LOCK(facs, Acq) \
+	((Acq) = ia64_acpi_release_global_lock(&facs->global_lock))
+
+#endif
+
+#endif /* _ASM_IA64_ACENV_H */
+1 -51
arch/ia64/include/asm/acpi.h
···
 #include <linux/numa.h>
 #include <asm/numa.h>

-#define COMPILER_DEPENDENT_INT64	long
-#define COMPILER_DEPENDENT_UINT64	unsigned long
-
-/*
- * Calling conventions:
- *
- * ACPI_SYSTEM_XFACE        - Interfaces to host OS (handlers, threads)
- * ACPI_EXTERNAL_XFACE      - External ACPI interfaces
- * ACPI_INTERNAL_XFACE      - Internal ACPI interfaces
- * ACPI_INTERNAL_VAR_XFACE  - Internal variable-parameter list interfaces
- */
-#define ACPI_SYSTEM_XFACE
-#define ACPI_EXTERNAL_XFACE
-#define ACPI_INTERNAL_XFACE
-#define ACPI_INTERNAL_VAR_XFACE
-
-/* Asm macros */
-
-#define ACPI_FLUSH_CPU_CACHE()
-
-static inline int
-ia64_acpi_acquire_global_lock (unsigned int *lock)
-{
-	unsigned int old, new, val;
-	do {
-		old = *lock;
-		new = (((old & ~0x3) + 2) + ((old >> 1) & 0x1));
-		val = ia64_cmpxchg4_acq(lock, new, old);
-	} while (unlikely (val != old));
-	return (new < 3) ? -1 : 0;
-}
-
-static inline int
-ia64_acpi_release_global_lock (unsigned int *lock)
-{
-	unsigned int old, new, val;
-	do {
-		old = *lock;
-		new = old & ~0x3;
-		val = ia64_cmpxchg4_acq(lock, new, old);
-	} while (unlikely (val != old));
-	return old & 0x1;
-}
-
-#define ACPI_ACQUIRE_GLOBAL_LOCK(facs, Acq) \
-	((Acq) = ia64_acpi_acquire_global_lock(&facs->global_lock))
-
-#define ACPI_RELEASE_GLOBAL_LOCK(facs, Acq) \
-	((Acq) = ia64_acpi_release_global_lock(&facs->global_lock))
-
 #ifdef CONFIG_ACPI
+extern int acpi_lapic;
 #define acpi_disabled 0	/* ACPI always enabled on IA64 */
 #define acpi_noirq 0	/* ACPI always enabled on IA64 */
 #define acpi_pci_disabled 0 /* ACPI PCI always enabled on IA64 */
···
 #endif
 #define acpi_processor_cstate_check(x) (x) /* no idle limits on IA64 :) */
 static inline void disable_acpi(void) { }
-static inline void pci_acpi_crs_quirks(void) { }

 #ifdef CONFIG_IA64_GENERIC
 const char *acpi_get_sysname (void);
+3
arch/ia64/kernel/acpi.c
···
 #define PREFIX			"ACPI: "

+int acpi_lapic;
 unsigned int acpi_cpei_override;
 unsigned int acpi_cpei_phys_cpuid;
···
 	if (ret < 1)
 		printk(KERN_ERR PREFIX
 		       "Error parsing MADT - no LAPIC entries\n");
+	else
+		acpi_lapic = 1;

 #ifdef CONFIG_SMP
 	if (available_cpus == 0) {
+5 -12
arch/mips/loongson/lemote-2f/clock.c
···
 int clk_set_rate(struct clk *clk, unsigned long rate)
 {
-	unsigned int rate_khz = rate / 1000;
+	struct cpufreq_frequency_table *pos;
 	int ret = 0;
 	int regval;
-	int i;

 	if (likely(clk->ops && clk->ops->set_rate)) {
 		unsigned long flags;
···
 	if (unlikely(clk->flags & CLK_RATE_PROPAGATES))
 		propagate_rate(clk);

-	for (i = 0; loongson2_clockmod_table[i].frequency != CPUFREQ_TABLE_END;
-	     i++) {
-		if (loongson2_clockmod_table[i].frequency ==
-		    CPUFREQ_ENTRY_INVALID)
-			continue;
-		if (rate_khz == loongson2_clockmod_table[i].frequency)
+	cpufreq_for_each_valid_entry(pos, loongson2_clockmod_table)
+		if (rate == pos->frequency)
 			break;
-	}
-	if (rate_khz != loongson2_clockmod_table[i].frequency)
+	if (rate != pos->frequency)
 		return -ENOTSUPP;

 	clk->rate = rate;

 	regval = LOONGSON_CHIPCFG0;
-	regval = (regval & ~0x7) |
-		(loongson2_clockmod_table[i].driver_data - 1);
+	regval = (regval & ~0x7) | (pos->driver_data - 1);
 	LOONGSON_CHIPCFG0 = regval;

 	return ret;
+49
arch/x86/include/asm/acenv.h
···
+/*
+ * X86 specific ACPICA environments and implementation
+ *
+ * Copyright (C) 2014, Intel Corporation
+ *   Author: Lv Zheng <lv.zheng@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef _ASM_X86_ACENV_H
+#define _ASM_X86_ACENV_H
+
+#include <asm/special_insns.h>
+
+/* Asm macros */
+
+#define ACPI_FLUSH_CPU_CACHE()	wbinvd()
+
+#ifdef CONFIG_ACPI
+
+int __acpi_acquire_global_lock(unsigned int *lock);
+int __acpi_release_global_lock(unsigned int *lock);
+
+#define ACPI_ACQUIRE_GLOBAL_LOCK(facs, Acq) \
+	((Acq) = __acpi_acquire_global_lock(&facs->global_lock))
+
+#define ACPI_RELEASE_GLOBAL_LOCK(facs, Acq) \
+	((Acq) = __acpi_release_global_lock(&facs->global_lock))
+
+/*
+ * Math helper asm macros
+ */
+#define ACPI_DIV_64_BY_32(n_hi, n_lo, d32, q32, r32) \
+	asm("divl %2;"				     \
+	    : "=a"(q32), "=d"(r32)		     \
+	    : "r"(d32),				     \
+	      "0"(n_lo), "1"(n_hi))
+
+#define ACPI_SHIFT_RIGHT_64(n_hi, n_lo) \
+	asm("shrl $1,%2	;"		\
+	    "rcrl $1,%3;"		\
+	    : "=r"(n_hi), "=r"(n_lo)	\
+	    : "0"(n_hi), "1"(n_lo))
+
+#endif
+
+#endif /* _ASM_X86_ACENV_H */
-45
arch/x86/include/asm/acpi.h
···
 #include <asm/mpspec.h>
 #include <asm/realmode.h>

-#define COMPILER_DEPENDENT_INT64   long long
-#define COMPILER_DEPENDENT_UINT64  unsigned long long
-
-/*
- * Calling conventions:
- *
- * ACPI_SYSTEM_XFACE        - Interfaces to host OS (handlers, threads)
- * ACPI_EXTERNAL_XFACE      - External ACPI interfaces
- * ACPI_INTERNAL_XFACE      - Internal ACPI interfaces
- * ACPI_INTERNAL_VAR_XFACE  - Internal variable-parameter list interfaces
- */
-#define ACPI_SYSTEM_XFACE
-#define ACPI_EXTERNAL_XFACE
-#define ACPI_INTERNAL_XFACE
-#define ACPI_INTERNAL_VAR_XFACE
-
-/* Asm macros */
-
-#define ACPI_FLUSH_CPU_CACHE()	wbinvd()
-
-int __acpi_acquire_global_lock(unsigned int *lock);
-int __acpi_release_global_lock(unsigned int *lock);
-
-#define ACPI_ACQUIRE_GLOBAL_LOCK(facs, Acq) \
-	((Acq) = __acpi_acquire_global_lock(&facs->global_lock))
-
-#define ACPI_RELEASE_GLOBAL_LOCK(facs, Acq) \
-	((Acq) = __acpi_release_global_lock(&facs->global_lock))
-
-/*
- * Math helper asm macros
- */
-#define ACPI_DIV_64_BY_32(n_hi, n_lo, d32, q32, r32) \
-	asm("divl %2;"				     \
-	    : "=a"(q32), "=d"(r32)		     \
-	    : "r"(d32),				     \
-	      "0"(n_lo), "1"(n_hi))
-
-#define ACPI_SHIFT_RIGHT_64(n_hi, n_lo) \
-	asm("shrl $1,%2	;"		\
-	    "rcrl $1,%3;"		\
-	    : "=r"(n_hi), "=r"(n_lo)	\
-	    : "0"(n_hi), "1"(n_lo))
-
 #ifdef CONFIG_ACPI
 extern int acpi_lapic;
 extern int acpi_ioapic;
+4 -3
drivers/acpi/Makefile
···
 acpi-y				+= ec.o
 acpi-$(CONFIG_ACPI_DOCK)	+= dock.o
 acpi-y				+= pci_root.o pci_link.o pci_irq.o
-acpi-$(CONFIG_X86_INTEL_LPSS)	+= acpi_lpss.o
+acpi-y				+= acpi_lpss.o
 acpi-y				+= acpi_platform.o
+acpi-y				+= acpi_pnp.o
 acpi-y				+= power.o
 acpi-y				+= event.o
 acpi-y				+= sysfs.o
···
 obj-$(CONFIG_ACPI_VIDEO)	+= video.o
 obj-$(CONFIG_ACPI_PCI_SLOT)	+= pci_slot.o
 obj-$(CONFIG_ACPI_PROCESSOR)	+= processor.o
-obj-$(CONFIG_ACPI_CONTAINER)	+= container.o
+obj-y				+= container.o
 obj-$(CONFIG_ACPI_THERMAL)	+= thermal.o
-obj-$(CONFIG_ACPI_HOTPLUG_MEMORY) += acpi_memhotplug.o
+obj-y				+= acpi_memhotplug.o
 obj-$(CONFIG_ACPI_BATTERY)	+= battery.o
 obj-$(CONFIG_ACPI_SBS)		+= sbshc.o
 obj-$(CONFIG_ACPI_SBS)		+= sbs.o
+1 -1
drivers/acpi/acpi_cmos_rtc.c
···
 		return -ENODEV;
 	}

-	return 0;
+	return 1;
 }

 static void acpi_remove_cmos_rtc_space_handler(struct acpi_device *adev)
+8 -8
drivers/acpi/acpi_extlog.c
···
 		goto err;
 	}

-	extlog_l1_hdr = acpi_os_map_memory(l1_dirbase, l1_hdr_size);
+	extlog_l1_hdr = acpi_os_map_iomem(l1_dirbase, l1_hdr_size);
 	l1_head = (struct extlog_l1_head *)extlog_l1_hdr;
 	l1_size = l1_head->total_len;
 	l1_percpu_entry = l1_head->entries;
 	elog_base = l1_head->elog_base;
 	elog_size = l1_head->elog_len;
-	acpi_os_unmap_memory(extlog_l1_hdr, l1_hdr_size);
+	acpi_os_unmap_iomem(extlog_l1_hdr, l1_hdr_size);
 	release_mem_region(l1_dirbase, l1_hdr_size);

 	/* remap L1 header again based on completed information */
···
 			(unsigned long long)l1_dirbase + l1_size);
 		goto err;
 	}
-	extlog_l1_addr = acpi_os_map_memory(l1_dirbase, l1_size);
+	extlog_l1_addr = acpi_os_map_iomem(l1_dirbase, l1_size);
 	l1_entry_base = (u64 *)((u8 *)extlog_l1_addr + l1_hdr_size);

 	/* remap elog table */
···
 			(unsigned long long)elog_base + elog_size);
 		goto err_release_l1_dir;
 	}
-	elog_addr = acpi_os_map_memory(elog_base, elog_size);
+	elog_addr = acpi_os_map_iomem(elog_base, elog_size);

 	rc = -ENOMEM;
 	/* allocate buffer to save elog record */
···
 err_release_elog:
 	if (elog_addr)
-		acpi_os_unmap_memory(elog_addr, elog_size);
+		acpi_os_unmap_iomem(elog_addr, elog_size);
 	release_mem_region(elog_base, elog_size);
 err_release_l1_dir:
 	if (extlog_l1_addr)
-		acpi_os_unmap_memory(extlog_l1_addr, l1_size);
+		acpi_os_unmap_iomem(extlog_l1_addr, l1_size);
 	release_mem_region(l1_dirbase, l1_size);
 err:
 	pr_warn(FW_BUG "Extended error log disabled because of problems parsing f/w tables\n");
···
 	mce_unregister_decode_chain(&extlog_mce_dec);
 	((struct extlog_l1_head *)extlog_l1_addr)->flags &= ~FLAG_OS_OPTIN;
 	if (extlog_l1_addr)
-		acpi_os_unmap_memory(extlog_l1_addr, l1_size);
+		acpi_os_unmap_iomem(extlog_l1_addr, l1_size);
 	if (elog_addr)
-		acpi_os_unmap_memory(elog_addr, elog_size);
+		acpi_os_unmap_iomem(elog_addr, elog_size);
 	release_mem_region(elog_base, elog_size);
 	release_mem_region(l1_dirbase, l1_size);
 	kfree(elog_buf);
+254 -52
drivers/acpi/acpi_lpss.c
···
 #include <linux/platform_device.h>
 #include <linux/platform_data/clk-lpss.h>
 #include <linux/pm_runtime.h>
+#include <linux/delay.h>

 #include "internal.h"

 ACPI_MODULE_NAME("acpi_lpss");

+#ifdef CONFIG_X86_INTEL_LPSS
+
+#define LPSS_ADDR(desc) ((unsigned long)&desc)
+
 #define LPSS_CLK_SIZE	0x04
 #define LPSS_LTR_SIZE	0x18

 /* Offsets relative to LPSS_PRIVATE_OFFSET */
+#define LPSS_CLK_DIVIDER_DEF_MASK	(BIT(1) | BIT(16))
 #define LPSS_GENERAL			0x08
 #define LPSS_GENERAL_LTR_MODE_SW	BIT(2)
 #define LPSS_GENERAL_UART_RTS_OVRD	BIT(3)
···
 #define LPSS_TX_INT			0x20
 #define LPSS_TX_INT_MASK		BIT(1)

+#define LPSS_PRV_REG_COUNT		9
+
 struct lpss_shared_clock {
 	const char *name;
 	unsigned long rate;
···
 	bool ltr_required;
 	unsigned int prv_offset;
 	size_t prv_size_override;
+	bool clk_divider;
 	bool clk_gate;
+	bool save_ctx;
 	struct lpss_shared_clock *shared_clock;
 	void (*setup)(struct lpss_private_data *pdata);
 };
···
 	resource_size_t mmio_size;
 	struct clk *clk;
 	const struct lpss_device_desc *dev_desc;
+	u32 prv_reg_ctx[LPSS_PRV_REG_COUNT];
 };

 static void lpss_uart_setup(struct lpss_private_data *pdata)
···
 	.clk_required = true,
 	.prv_offset = 0x800,
 	.ltr_required = true,
+	.clk_divider = true,
+	.clk_gate = true,
+};
+
+static struct lpss_device_desc lpt_i2c_dev_desc = {
+	.clk_required = true,
+	.prv_offset = 0x800,
+	.ltr_required = true,
 	.clk_gate = true,
 };
···
 	.clk_required = true,
 	.prv_offset = 0x800,
 	.ltr_required = true,
+	.clk_divider = true,
 	.clk_gate = true,
 	.setup = lpss_uart_setup,
 };
···
 static struct lpss_device_desc byt_pwm_dev_desc = {
 	.clk_required = true,
+	.save_ctx = true,
 	.shared_clock = &pwm_clock,
-};
-
-static struct lpss_shared_clock uart_clock = {
-	.name = "uart_clk",
-	.rate = 44236800,
 };

 static struct lpss_device_desc byt_uart_dev_desc = {
 	.clk_required = true,
 	.prv_offset = 0x800,
+	.clk_divider = true,
 	.clk_gate = true,
-	.shared_clock = &uart_clock,
+	.save_ctx = true,
 	.setup = lpss_uart_setup,
-};
-
-static struct lpss_shared_clock spi_clock = {
-	.name = "spi_clk",
-	.rate = 50000000,
 };

 static struct lpss_device_desc byt_spi_dev_desc = {
 	.clk_required = true,
 	.prv_offset = 0x400,
+	.clk_divider = true,
 	.clk_gate = true,
-	.shared_clock = &spi_clock,
+	.save_ctx = true,
 };

 static struct lpss_device_desc byt_sdio_dev_desc = {
···
 static struct lpss_device_desc byt_i2c_dev_desc = {
 	.clk_required = true,
 	.prv_offset = 0x800,
+	.save_ctx = true,
 	.shared_clock = &i2c_clock,
 };

+#else
+
+#define LPSS_ADDR(desc) (0UL)
+
+#endif /* CONFIG_X86_INTEL_LPSS */
+
 static const struct acpi_device_id acpi_lpss_device_ids[] = {
 	/* Generic LPSS devices */
-	{ "INTL9C60", (unsigned long)&lpss_dma_desc },
+	{ "INTL9C60", LPSS_ADDR(lpss_dma_desc) },

 	/* Lynxpoint LPSS devices */
-	{ "INT33C0", (unsigned long)&lpt_dev_desc },
-	{ "INT33C1", (unsigned long)&lpt_dev_desc },
-	{ "INT33C2", (unsigned long)&lpt_dev_desc },
-	{ "INT33C3", (unsigned long)&lpt_dev_desc },
-	{ "INT33C4", (unsigned long)&lpt_uart_dev_desc },
-	{ "INT33C5", (unsigned long)&lpt_uart_dev_desc },
-	{ "INT33C6", (unsigned long)&lpt_sdio_dev_desc },
+	{ "INT33C0", LPSS_ADDR(lpt_dev_desc) },
+	{ "INT33C1", LPSS_ADDR(lpt_dev_desc) },
+	{ "INT33C2", LPSS_ADDR(lpt_i2c_dev_desc) },
+	{ "INT33C3", LPSS_ADDR(lpt_i2c_dev_desc) },
+	{ "INT33C4", LPSS_ADDR(lpt_uart_dev_desc) },
+	{ "INT33C5", LPSS_ADDR(lpt_uart_dev_desc) },
+	{ "INT33C6", LPSS_ADDR(lpt_sdio_dev_desc) },
 	{ "INT33C7", },

 	/* BayTrail LPSS devices */
-	{ "80860F09", (unsigned long)&byt_pwm_dev_desc },
-	{ "80860F0A", (unsigned long)&byt_uart_dev_desc },
-	{ "80860F0E", (unsigned long)&byt_spi_dev_desc },
-	{ "80860F14", (unsigned long)&byt_sdio_dev_desc },
-	{ "80860F41", (unsigned long)&byt_i2c_dev_desc },
+	{ "80860F09", LPSS_ADDR(byt_pwm_dev_desc) },
+	{ "80860F0A", LPSS_ADDR(byt_uart_dev_desc) },
+	{ "80860F0E", LPSS_ADDR(byt_spi_dev_desc) },
+	{ "80860F14", LPSS_ADDR(byt_sdio_dev_desc) },
+	{ "80860F41", LPSS_ADDR(byt_i2c_dev_desc) },
 	{ "INT33B2", },
 	{ "INT33FC", },

-	{ "INT3430", (unsigned long)&lpt_dev_desc },
-	{ "INT3431", (unsigned long)&lpt_dev_desc },
-	{ "INT3432", (unsigned long)&lpt_dev_desc },
-	{ "INT3433", (unsigned long)&lpt_dev_desc },
-	{ "INT3434", (unsigned long)&lpt_uart_dev_desc },
-	{ "INT3435", (unsigned long)&lpt_uart_dev_desc },
-	{ "INT3436", (unsigned long)&lpt_sdio_dev_desc },
+	{ "INT3430", LPSS_ADDR(lpt_dev_desc) },
+	{ "INT3431", LPSS_ADDR(lpt_dev_desc) },
+	{ "INT3432", LPSS_ADDR(lpt_i2c_dev_desc) },
+	{ "INT3433", LPSS_ADDR(lpt_i2c_dev_desc) },
+	{ "INT3434", LPSS_ADDR(lpt_uart_dev_desc) },
+	{ "INT3435", LPSS_ADDR(lpt_uart_dev_desc) },
+	{ "INT3436", LPSS_ADDR(lpt_sdio_dev_desc) },
 	{ "INT3437", },

 	{ }
 };
+
+#ifdef CONFIG_X86_INTEL_LPSS

 static int is_memory(struct acpi_resource *res, void *not_used)
 {
···
 	const struct lpss_device_desc *dev_desc = pdata->dev_desc;
 	struct lpss_shared_clock *shared_clock = dev_desc->shared_clock;
+	const char *devname = dev_name(&adev->dev);
 	struct clk *clk = ERR_PTR(-ENODEV);
 	struct lpss_clk_data *clk_data;
-	const char *parent;
+	const char *parent, *clk_name;
+	void __iomem *prv_base;

 	if (!lpss_clk_dev)
 		lpt_register_clock_device();
···
 	if (dev_desc->clkdev_name) {
 		clk_register_clkdev(clk_data->clk, dev_desc->clkdev_name,
-				    dev_name(&adev->dev));
+				    devname);
 		return 0;
 	}
···
 		return -ENODATA;

 	parent = clk_data->name;
+	prv_base = pdata->mmio_base + dev_desc->prv_offset;

 	if (shared_clock) {
 		clk = shared_clock->clk;
···
 	}

 	if (dev_desc->clk_gate) {
-		clk = clk_register_gate(NULL, dev_name(&adev->dev), parent, 0,
-					pdata->mmio_base + dev_desc->prv_offset,
-					0, 0, NULL);
-		pdata->clk = clk;
+		clk = clk_register_gate(NULL, devname, parent, 0,
+					prv_base, 0, 0, NULL);
+		parent = devname;
+	}
+
+	if (dev_desc->clk_divider) {
+		/* Prevent division by zero */
+		if (!readl(prv_base))
+			writel(LPSS_CLK_DIVIDER_DEF_MASK, prv_base);
+
+		clk_name = kasprintf(GFP_KERNEL, "%s-div", devname);
+		if (!clk_name)
+			return -ENOMEM;
+		clk = clk_register_fractional_divider(NULL, clk_name, parent,
+						      0, prv_base,
+						      1, 15, 16, 15, 0, NULL);
+		parent = clk_name;
+
+		clk_name = kasprintf(GFP_KERNEL, "%s-update", devname);
+		if (!clk_name) {
+			kfree(parent);
+			return -ENOMEM;
+		}
+		clk = clk_register_gate(NULL, clk_name, parent,
+					CLK_SET_RATE_PARENT | CLK_SET_RATE_GATE,
+					prv_base, 31, 0, NULL);
+		kfree(parent);
+		kfree(clk_name);
 	}

 	if (IS_ERR(clk))
 		return PTR_ERR(clk);

-	clk_register_clkdev(clk, NULL, dev_name(&adev->dev));
+	pdata->clk = clk;
+	clk_register_clkdev(clk, NULL, devname);
 	return 0;
 }
···
 	struct lpss_private_data *pdata;
 	struct resource_list_entry *rentry;
 	struct list_head resource_list;
+	struct platform_device *pdev;
 	int ret;

 	dev_desc = (struct lpss_device_desc *)id->driver_data;
-	if (!dev_desc)
-		return acpi_create_platform_device(adev, id);
-
+	if (!dev_desc) {
+		pdev = acpi_create_platform_device(adev);
+		return IS_ERR_OR_NULL(pdev) ? PTR_ERR(pdev) : 1;
+	}
 	pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
 	if (!pdata)
 		return -ENOMEM;
···
 		dev_desc->setup(pdata);

 	adev->driver_data = pdata;
-	ret = acpi_create_platform_device(adev, id);
-	if (ret > 0)
-		return ret;
+	pdev = acpi_create_platform_device(adev);
+	if (!IS_ERR_OR_NULL(pdev)) {
+		device_enable_async_suspend(&pdev->dev);
+		return 1;
+	}

+	ret = PTR_ERR(pdev);
 	adev->driver_data = NULL;

 err_out:
···
 	}
 }

+#ifdef CONFIG_PM
+/**
+ * acpi_lpss_save_ctx() - Save the private registers of LPSS device
+ * @dev: LPSS device
+ *
+ * Most LPSS devices have private registers which may loose their context when
+ * the device is powered down. acpi_lpss_save_ctx() saves those registers into
+ * prv_reg_ctx array.
+ */
+static void acpi_lpss_save_ctx(struct device *dev)
+{
+	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+	unsigned int i;
+
+	for (i = 0; i < LPSS_PRV_REG_COUNT; i++) {
+		unsigned long offset = i * sizeof(u32);
+
+		pdata->prv_reg_ctx[i] = __lpss_reg_read(pdata, offset);
+		dev_dbg(dev, "saving 0x%08x from LPSS reg at offset 0x%02lx\n",
+			pdata->prv_reg_ctx[i], offset);
+	}
+}
+
+/**
+ * acpi_lpss_restore_ctx() - Restore the private registers of LPSS device
+ * @dev: LPSS device
+ *
+ * Restores the registers that were previously stored with acpi_lpss_save_ctx().
+ */
+static void acpi_lpss_restore_ctx(struct device *dev)
+{
+	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+	unsigned int i;
+
+	/*
+	 * The following delay is needed or the subsequent write operations may
+	 * fail. The LPSS devices are actually PCI devices and the PCI spec
+	 * expects 10ms delay before the device can be accessed after D3 to D0
+	 * transition.
+	 */
+	msleep(10);
+
+	for (i = 0; i < LPSS_PRV_REG_COUNT; i++) {
+		unsigned long offset = i * sizeof(u32);
+
+		__lpss_reg_write(pdata->prv_reg_ctx[i], pdata, offset);
+		dev_dbg(dev, "restoring 0x%08x to LPSS reg at offset 0x%02lx\n",
+			pdata->prv_reg_ctx[i], offset);
+	}
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int acpi_lpss_suspend_late(struct device *dev)
+{
+	int ret = pm_generic_suspend_late(dev);
+
+	if (ret)
+		return ret;
+
+	acpi_lpss_save_ctx(dev);
+	return acpi_dev_suspend_late(dev);
+}
+
+static int acpi_lpss_restore_early(struct device *dev)
+{
+	int ret = acpi_dev_resume_early(dev);
+
+	if (ret)
+		return ret;
+
+	acpi_lpss_restore_ctx(dev);
+	return pm_generic_resume_early(dev);
+}
+#endif /* CONFIG_PM_SLEEP */
+
+#ifdef CONFIG_PM_RUNTIME
+static int acpi_lpss_runtime_suspend(struct device *dev)
+{
+	int ret = pm_generic_runtime_suspend(dev);
+
+	if (ret)
+		return ret;
+
+	acpi_lpss_save_ctx(dev);
+	return acpi_dev_runtime_suspend(dev);
+}
+
+static int acpi_lpss_runtime_resume(struct device *dev)
+{
+	int ret = acpi_dev_runtime_resume(dev);
+
+	if (ret)
+		return ret;
+
+	acpi_lpss_restore_ctx(dev);
+	return pm_generic_runtime_resume(dev);
+}
+#endif /* CONFIG_PM_RUNTIME */
+#endif /* CONFIG_PM */
+
+static struct dev_pm_domain acpi_lpss_pm_domain = {
+	.ops = {
+#ifdef CONFIG_PM_SLEEP
+		.suspend_late = acpi_lpss_suspend_late,
+		.restore_early = acpi_lpss_restore_early,
+		.prepare = acpi_subsys_prepare,
+		.complete = acpi_subsys_complete,
+		.suspend = acpi_subsys_suspend,
+		.resume_early = acpi_subsys_resume_early,
+		.freeze = acpi_subsys_freeze,
+		.poweroff = acpi_subsys_suspend,
+		.poweroff_late = acpi_subsys_suspend_late,
+#endif
+#ifdef CONFIG_PM_RUNTIME
+		.runtime_suspend = acpi_lpss_runtime_suspend,
+		.runtime_resume = acpi_lpss_runtime_resume,
+#endif
+	},
+};
+
 static int acpi_lpss_platform_notify(struct notifier_block *nb,
 				     unsigned long action, void *data)
 {
···
 	struct lpss_private_data *pdata;
 	struct acpi_device *adev;
 	const struct acpi_device_id *id;
-	int ret = 0;

 	id = acpi_match_device(acpi_lpss_device_ids, &pdev->dev);
 	if (!id || !id->driver_data)
···
 		return 0;

 	pdata = acpi_driver_data(adev);
-	if (!pdata || !pdata->mmio_base || !pdata->dev_desc->ltr_required)
+	if (!pdata || !pdata->mmio_base)
 		return 0;

 	if (pdata->mmio_size < pdata->dev_desc->prv_offset + LPSS_LTR_SIZE) {
···
 		return 0;
 	}

-	if (action == BUS_NOTIFY_ADD_DEVICE)
-		ret = sysfs_create_group(&pdev->dev.kobj, &lpss_attr_group);
-	else if (action == BUS_NOTIFY_DEL_DEVICE)
-		sysfs_remove_group(&pdev->dev.kobj, &lpss_attr_group);
+	switch (action) {
+	case BUS_NOTIFY_BOUND_DRIVER:
+		if (pdata->dev_desc->save_ctx)
+			pdev->dev.pm_domain = &acpi_lpss_pm_domain;
+		break;
+	case BUS_NOTIFY_UNBOUND_DRIVER:
+		if (pdata->dev_desc->save_ctx)
+			pdev->dev.pm_domain = NULL;
+		break;
+	case BUS_NOTIFY_ADD_DEVICE:
+		if (pdata->dev_desc->ltr_required)
+			return sysfs_create_group(&pdev->dev.kobj,
+						  &lpss_attr_group);
+	case BUS_NOTIFY_DEL_DEVICE:
+		if (pdata->dev_desc->ltr_required)
+			sysfs_remove_group(&pdev->dev.kobj, &lpss_attr_group);
+	default:
+		break;
+	}

-	return ret;
+	return 0;
 }

 static struct notifier_block acpi_lpss_nb = {
···
 		acpi_scan_add_handler(&lpss_handler);
 	}
 }
+
+#else
+
+static struct acpi_scan_handler lpss_handler = {
+	.ids = acpi_lpss_device_ids,
+};
+
+void __init acpi_lpss_init(void)
+{
+	acpi_scan_add_handler(&lpss_handler);
+}
+
+#endif /* CONFIG_X86_INTEL_LPSS */
+24 -7
drivers/acpi/acpi_memhotplug.c
···
 ACPI_MODULE_NAME("acpi_memhotplug");

+static const struct acpi_device_id memory_device_ids[] = {
+	{ACPI_MEMORY_DEVICE_HID, 0},
+	{"", 0},
+};
+
+#ifdef CONFIG_ACPI_HOTPLUG_MEMORY
+
 /* Memory Device States */
 #define MEMORY_INVALID_STATE	0
 #define MEMORY_POWER_ON_STATE	1
···
 static int acpi_memory_device_add(struct acpi_device *device,
 				  const struct acpi_device_id *not_used);
 static void acpi_memory_device_remove(struct acpi_device *device);
-
-static const struct acpi_device_id memory_device_ids[] = {
-	{ACPI_MEMORY_DEVICE_HID, 0},
-	{"", 0},
-};

 static struct acpi_scan_handler memory_device_handler = {
 	.ids = memory_device_ids,
···
 void __init acpi_memory_hotplug_init(void)
 {
-	if (acpi_no_memhotplug)
+	if (acpi_no_memhotplug) {
+		memory_device_handler.attach = NULL;
+		acpi_scan_add_handler(&memory_device_handler);
 		return;
-
+	}
 	acpi_scan_add_handler_with_hotplug(&memory_device_handler, "memory");
 }
···
 	return 1;
 }
 __setup("acpi_no_memhotplug", disable_acpi_memory_hotplug);
+
+#else
+
+static struct acpi_scan_handler memory_device_handler = {
+	.ids = memory_device_ids,
+};
+
+void __init acpi_memory_hotplug_init(void)
+{
+	acpi_scan_add_handler(&memory_device_handler);
+}
+
+#endif /* CONFIG_ACPI_HOTPLUG_MEMORY */
+12 -4
drivers/acpi/acpi_pad.c
···
 	while (!kthread_should_stop()) {
 		int cpu;
-		u64 expire_time;
+		unsigned long expire_time;

 		try_to_freeze();

 		/* round robin to cpus */
-		if (last_jiffies + round_robin_time * HZ < jiffies) {
+		expire_time = last_jiffies + round_robin_time * HZ;
+		if (time_before(expire_time, jiffies)) {
 			last_jiffies = jiffies;
 			round_robin_cpu(tsk_index);
 		}
···
 				CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &cpu);
 		local_irq_enable();

-		if (jiffies > expire_time) {
+		if (time_before(expire_time, jiffies)) {
 			do_sleep = 1;
 			break;
 		}
···
 		 * borrow CPU time from this CPU and cause RT task use > 95%
 		 * CPU time. To make 'avoid starvation' work, takes a nap here.
 		 */
-		if (do_sleep)
+		if (unlikely(do_sleep))
 			schedule_timeout_killable(HZ * idle_pct / 100);
+
+		/* If an external event has set the need_resched flag, then
+		 * we need to deal with it, or this loop will continue to
+		 * spin without calling __mwait().
+		 */
+		if (unlikely(need_resched()))
+			schedule();
 	}

 	exit_round_robin(tsk_index);
+15 -36
drivers/acpi/acpi_platform.c
···
 ACPI_MODULE_NAME("platform");

-/*
- * The following ACPI IDs are known to be suitable for representing as
- * platform devices.
- */
-static const struct acpi_device_id acpi_platform_device_ids[] = {
-
-	{ "PNP0D40" },
-	{ "VPC2004" },
-	{ "BCM4752" },
-
-	/* Intel Smart Sound Technology */
-	{ "INT33C8" },
-	{ "80860F28" },
-
-	{ }
+static const struct acpi_device_id forbidden_id_list[] = {
+	{"PNP0000", 0},	/* PIC */
+	{"PNP0100", 0},	/* Timer */
+	{"PNP0200", 0},	/* AT DMA Controller */
+	{"", 0},
 };

 /**
  * acpi_create_platform_device - Create platform device for ACPI device node
  * @adev: ACPI device node to create a platform device for.
- * @id: ACPI device ID used to match @adev.
  *
  * Check if the given @adev can be represented as a platform device and, if
  * that's the case, create and register a platform device, populate its common
···
  *
  * Name of the platform device will be the same as @adev's.
  */
-int acpi_create_platform_device(struct acpi_device *adev,
-				const struct acpi_device_id *id)
+struct platform_device *acpi_create_platform_device(struct acpi_device *adev)
 {
 	struct platform_device *pdev = NULL;
 	struct acpi_device *acpi_parent;
···
 	/* If the ACPI node already has a physical device attached, skip it. */
 	if (adev->physical_node_count)
-		return 0;
+		return NULL;
+
+	if (!acpi_match_device_ids(adev, forbidden_id_list))
+		return ERR_PTR(-EINVAL);

 	INIT_LIST_HEAD(&resource_list);
 	count = acpi_dev_get_resources(adev, &resource_list, NULL, NULL);
 	if (count < 0) {
-		return 0;
+		return NULL;
 	} else if (count > 0) {
 		resources = kmalloc(count * sizeof(struct resource),
 				    GFP_KERNEL);
 		if (!resources) {
 			dev_err(&adev->dev, "No memory for resources\n");
 			acpi_dev_free_resource_list(&resource_list);
-			return -ENOMEM;
+			return ERR_PTR(-ENOMEM);
 		}
 		count = 0;
 		list_for_each_entry(rentry, &resource_list, node)
···
 	pdevinfo.num_res = count;
 	pdevinfo.acpi_node.companion = adev;
 	pdev = platform_device_register_full(&pdevinfo);
-	if (IS_ERR(pdev)) {
+	if (IS_ERR(pdev))
 		dev_err(&adev->dev, "platform device creation failed: %ld\n",
 			PTR_ERR(pdev));
-		pdev = NULL;
-	} else {
+	else
 		dev_dbg(&adev->dev, "created platform device %s\n",
 			dev_name(&pdev->dev));
-	}

 	kfree(resources);
-	return 1;
-}
-
-static struct acpi_scan_handler platform_handler = {
-	.ids = acpi_platform_device_ids,
-	.attach = acpi_create_platform_device,
-};
-
-void __init acpi_platform_init(void)
-{
-	acpi_scan_add_handler(&platform_handler);
+	return pdev;
 }
+395
drivers/acpi/acpi_pnp.c
···
+/*
+ * ACPI support for PNP bus type
+ *
+ * Copyright (C) 2014, Intel Corporation
+ * Authors: Zhang Rui <rui.zhang@intel.com>
+ *          Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/acpi.h>
+#include <linux/module.h>
+
+static const struct acpi_device_id acpi_pnp_device_ids[] = {
+	/* pata_isapnp */
+	{"PNP0600"},		/* Generic ESDI/IDE/ATA compatible hard disk controller */
+	/* floppy */
+	{"PNP0700"},
+	/* ipmi_si */
+	{"IPI0001"},
+	/* tpm_inf_pnp */
+	{"IFX0101"},		/* Infineon TPMs */
+	{"IFX0102"},		/* Infineon TPMs */
+	/*tpm_tis */
+	{"PNP0C31"},		/* TPM */
+	{"ATM1200"},		/* Atmel */
+	{"IFX0102"},		/* Infineon */
+	{"BCM0101"},		/* Broadcom */
+	{"BCM0102"},		/* Broadcom */
+	{"NSC1200"},		/* National */
+	{"ICO0102"},		/* Intel */
+	/* ide */
+	{"PNP0600"},		/* Generic ESDI/IDE/ATA compatible hard disk controller */
+	/* ns558 */
+	{"ASB16fd"},		/* AdLib NSC16 */
+	{"AZT3001"},		/* AZT1008 */
+	{"CDC0001"},		/* Opl3-SAx */
+	{"CSC0001"},		/* CS4232 */
+	{"CSC000f"},		/* CS4236 */
+	{"CSC0101"},		/* CS4327 */
+	{"CTL7001"},		/* SB16 */
+	{"CTL7002"},		/* AWE64 */
+	{"CTL7005"},		/* Vibra16 */
+	{"ENS2020"},		/* SoundscapeVIVO */
+	{"ESS0001"},		/* ES1869 */
+	{"ESS0005"},		/* ES1878 */
+	{"ESS6880"},		/* ES688 */
+	{"IBM0012"},		/* CS4232 */
+	{"OPT0001"},		/* OPTi Audio16 */
+	{"YMH0006"},		/* Opl3-SA */
+	{"YMH0022"},		/* Opl3-SAx */
+	{"PNPb02f"},		/* Generic */
+	/* i8042 kbd */
+	{"PNP0300"},
+	{"PNP0301"},
+	{"PNP0302"},
+	{"PNP0303"},
+	{"PNP0304"},
+	{"PNP0305"},
+	{"PNP0306"},
+	{"PNP0309"},
+	{"PNP030a"},
+	{"PNP030b"},
+	{"PNP0320"},
+	{"PNP0343"},
+	{"PNP0344"},
+	{"PNP0345"},
+	{"CPQA0D7"},
+	/* i8042 aux */
+	{"AUI0200"},
+	{"FJC6000"},
+	{"FJC6001"},
+	{"PNP0f03"},
+	{"PNP0f0b"},
+	{"PNP0f0e"},
+	{"PNP0f12"},
+	{"PNP0f13"},
+	{"PNP0f19"},
+	{"PNP0f1c"},
+	{"SYN0801"},
+	/* fcpnp */
+	{"AVM0900"},
+	/* radio-cadet */
+	{"MSM0c24"},		/* ADS Cadet AM/FM Radio Card */
+	/* radio-gemtek */
+	{"ADS7183"},		/* AOpen FX-3D/Pro Radio */
+	/* radio-sf16fmr2 */
+	{"MFRad13"},		/* tuner subdevice of SF16-FMD2 */
+	/* ene_ir */
+	{"ENE0100"},
+	{"ENE0200"},
+	{"ENE0201"},
+	{"ENE0202"},
+	/* fintek-cir */
+	{"FIT0002"},		/* CIR */
+	/* ite-cir */
+	{"ITE8704"},		/* Default model */
+	{"ITE8713"},		/* CIR found in EEEBox 1501U */
+	{"ITE8708"},		/* Bridged IT8512 */
+	{"ITE8709"},		/* SRAM-Bridged IT8512 */
+	/* nuvoton-cir */
+	{"WEC0530"},		/* CIR */
+	{"NTN0530"},		/* CIR for new chip's pnp id */
+	/* Winbond CIR */
+	{"WEC1022"},
+	/* wbsd */
+	{"WEC0517"},
+	{"WEC0518"},
+	/* Winbond CIR */
+	{"TCM5090"},		/* 3Com Etherlink III (TP) */
+	{"TCM5091"},		/* 3Com Etherlink III */
+	{"TCM5094"},		/* 3Com Etherlink III (combo) */
+	{"TCM5095"},		/* 3Com Etherlink III (TPO) */
+	{"TCM5098"},		/* 3Com Etherlink III (TPC) */
+	{"PNP80f7"},		/* 3Com Etherlink III compatible */
+	{"PNP80f8"},		/* 3Com Etherlink III compatible */
+	/* nsc-ircc */
+	{"NSC6001"},
+	{"HWPC224"},
+	{"IBM0071"},
+	/* smsc-ircc2 */
+	{"SMCf010"},
+	/* sb1000 */
+	{"GIC1000"},
+	/* parport_pc */
+	{"PNP0400"},		/* Standard LPT Printer Port */
+	{"PNP0401"},		/* ECP Printer Port */
+	/* apple-gmux */
+	{"APP000B"},
+	/* fujitsu-laptop.c */
+	{"FUJ02bf"},
+	{"FUJ02B1"},
+	{"FUJ02E3"},
+	/* system */
+	{"PNP0c02"},		/* General ID for reserving resources */
+	{"PNP0c01"},		/* memory controller */
+	/* rtc_cmos */
+	{"PNP0b00"},
+	{"PNP0b01"},
+	{"PNP0b02"},
+	/* c6xdigio */
+	{"PNP0400"},		/* Standard LPT Printer Port */
+	{"PNP0401"},		/* ECP Printer Port */
+	/* ni_atmio.c */
+	{"NIC1900"},
+	{"NIC2400"},
+	{"NIC2500"},
+	{"NIC2600"},
+	{"NIC2700"},
+	/* serial */
+	{"AAC000F"},		/* Archtek America Corp. Archtek SmartLink Modem 3334BT Plug & Play */
+	{"ADC0001"},		/* Anchor Datacomm BV. SXPro 144 External Data Fax Modem Plug & Play */
+	{"ADC0002"},		/* SXPro 288 External Data Fax Modem Plug & Play */
+	{"AEI0250"},		/* PROLiNK 1456VH ISA PnP K56flex Fax Modem */
+	{"AEI1240"},		/* Actiontec ISA PNP 56K X2 Fax Modem */
+	{"AKY1021"},		/* Rockwell 56K ACF II Fax+Data+Voice Modem */
+	{"AZT4001"},		/* AZT3005 PnP SOUND DEVICE */
+	{"BDP3336"},		/* Best Data Products Inc. Smart One 336F PnP Modem */
+	{"BRI0A49"},		/* Boca Complete Ofc Communicator 14.4 Data-FAX */
+	{"BRI1400"},		/* Boca Research 33,600 ACF Modem */
+	{"BRI3400"},		/* Boca 33.6 Kbps Internal FD34FSVD */
+	{"BRI0A49"},		/* Boca 33.6 Kbps Internal FD34FSVD */
+	{"BDP3336"},		/* Best Data Products Inc. Smart One 336F PnP Modem */
+	{"CPI4050"},		/* Computer Peripherals Inc.
EuroViVa CommCenter-33.6 SP PnP */ 167 + {"CTL3001"}, /* Creative Labs Phone Blaster 28.8 DSVD PnP Voice */ 168 + {"CTL3011"}, /* Creative Labs Modem Blaster 28.8 DSVD PnP Voice */ 169 + {"DAV0336"}, /* Davicom ISA 33.6K Modem */ 170 + {"DMB1032"}, /* Creative Modem Blaster Flash56 DI5601-1 */ 171 + {"DMB2001"}, /* Creative Modem Blaster V.90 DI5660 */ 172 + {"ETT0002"}, /* E-Tech CyberBULLET PC56RVP */ 173 + {"FUJ0202"}, /* Fujitsu 33600 PnP-I2 R Plug & Play */ 174 + {"FUJ0205"}, /* Fujitsu FMV-FX431 Plug & Play */ 175 + {"FUJ0206"}, /* Fujitsu 33600 PnP-I4 R Plug & Play */ 176 + {"FUJ0209"}, /* Fujitsu Fax Voice 33600 PNP-I5 R Plug & Play */ 177 + {"GVC000F"}, /* Archtek SmartLink Modem 3334BT Plug & Play */ 178 + {"GVC0303"}, /* Archtek SmartLink Modem 3334BRV 33.6K Data Fax Voice */ 179 + {"HAY0001"}, /* Hayes Optima 288 V.34-V.FC + FAX + Voice Plug & Play */ 180 + {"HAY000C"}, /* Hayes Optima 336 V.34 + FAX + Voice PnP */ 181 + {"HAY000D"}, /* Hayes Optima 336B V.34 + FAX + Voice PnP */ 182 + {"HAY5670"}, /* Hayes Accura 56K Ext Fax Modem PnP */ 183 + {"HAY5674"}, /* Hayes Accura 56K Ext Fax Modem PnP */ 184 + {"HAY5675"}, /* Hayes Accura 56K Fax Modem PnP */ 185 + {"HAYF000"}, /* Hayes 288, V.34 + FAX */ 186 + {"HAYF001"}, /* Hayes Optima 288 V.34 + FAX + Voice, Plug & Play */ 187 + {"IBM0033"}, /* IBM Thinkpad 701 Internal Modem Voice */ 188 + {"PNP4972"}, /* Intermec CV60 touchscreen port */ 189 + {"IXDC801"}, /* Intertex 28k8 33k6 Voice EXT PnP */ 190 + {"IXDC901"}, /* Intertex 33k6 56k Voice EXT PnP */ 191 + {"IXDD801"}, /* Intertex 28k8 33k6 Voice SP EXT PnP */ 192 + {"IXDD901"}, /* Intertex 33k6 56k Voice SP EXT PnP */ 193 + {"IXDF401"}, /* Intertex 28k8 33k6 Voice SP INT PnP */ 194 + {"IXDF801"}, /* Intertex 28k8 33k6 Voice SP EXT PnP */ 195 + {"IXDF901"}, /* Intertex 33k6 56k Voice SP EXT PnP */ 196 + {"KOR4522"}, /* KORTEX 28800 Externe PnP */ 197 + {"KORF661"}, /* KXPro 33.6 Vocal ASVD PnP */ 198 + {"LAS4040"}, /* LASAT Internet 33600 PnP */ 199 + 
{"LAS4540"}, /* Lasat Safire 560 PnP */ 200 + {"LAS5440"}, /* Lasat Safire 336 PnP */ 201 + {"MNP0281"}, /* Microcom TravelPorte FAST V.34 Plug & Play */ 202 + {"MNP0336"}, /* Microcom DeskPorte V.34 FAST or FAST+ Plug & Play */ 203 + {"MNP0339"}, /* Microcom DeskPorte FAST EP 28.8 Plug & Play */ 204 + {"MNP0342"}, /* Microcom DeskPorte 28.8P Plug & Play */ 205 + {"MNP0500"}, /* Microcom DeskPorte FAST ES 28.8 Plug & Play */ 206 + {"MNP0501"}, /* Microcom DeskPorte FAST ES 28.8 Plug & Play */ 207 + {"MNP0502"}, /* Microcom DeskPorte 28.8S Internal Plug & Play */ 208 + {"MOT1105"}, /* Motorola BitSURFR Plug & Play */ 209 + {"MOT1111"}, /* Motorola TA210 Plug & Play */ 210 + {"MOT1114"}, /* Motorola HMTA 200 (ISDN) Plug & Play */ 211 + {"MOT1115"}, /* Motorola BitSURFR Plug & Play */ 212 + {"MOT1190"}, /* Motorola Lifestyle 28.8 Internal */ 213 + {"MOT1501"}, /* Motorola V.3400 Plug & Play */ 214 + {"MOT1502"}, /* Motorola Lifestyle 28.8 V.34 Plug & Play */ 215 + {"MOT1505"}, /* Motorola Power 28.8 V.34 Plug & Play */ 216 + {"MOT1509"}, /* Motorola ModemSURFR External 28.8 Plug & Play */ 217 + {"MOT150A"}, /* Motorola Premier 33.6 Desktop Plug & Play */ 218 + {"MOT150F"}, /* Motorola VoiceSURFR 56K External PnP */ 219 + {"MOT1510"}, /* Motorola ModemSURFR 56K External PnP */ 220 + {"MOT1550"}, /* Motorola ModemSURFR 56K Internal PnP */ 221 + {"MOT1560"}, /* Motorola ModemSURFR Internal 28.8 Plug & Play */ 222 + {"MOT1580"}, /* Motorola Premier 33.6 Internal Plug & Play */ 223 + {"MOT15B0"}, /* Motorola OnlineSURFR 28.8 Internal Plug & Play */ 224 + {"MOT15F0"}, /* Motorola VoiceSURFR 56K Internal PnP */ 225 + {"MVX00A1"}, /* Deskline K56 Phone System PnP */ 226 + {"MVX00F2"}, /* PC Rider K56 Phone System PnP */ 227 + {"nEC8241"}, /* NEC 98NOTE SPEAKER PHONE FAX MODEM(33600bps) */ 228 + {"PMC2430"}, /* Pace 56 Voice Internal Plug & Play Modem */ 229 + {"PNP0500"}, /* Generic standard PC COM port */ 230 + {"PNP0501"}, /* Generic 16550A-compatible COM port */ 231 + 
{"PNPC000"}, /* Compaq 14400 Modem */ 232 + {"PNPC001"}, /* Compaq 2400/9600 Modem */ 233 + {"PNPC031"}, /* Dial-Up Networking Serial Cable between 2 PCs */ 234 + {"PNPC032"}, /* Dial-Up Networking Parallel Cable between 2 PCs */ 235 + {"PNPC100"}, /* Standard 9600 bps Modem */ 236 + {"PNPC101"}, /* Standard 14400 bps Modem */ 237 + {"PNPC102"}, /* Standard 28800 bps Modem */ 238 + {"PNPC103"}, /* Standard Modem */ 239 + {"PNPC104"}, /* Standard 9600 bps Modem */ 240 + {"PNPC105"}, /* Standard 14400 bps Modem */ 241 + {"PNPC106"}, /* Standard 28800 bps Modem */ 242 + {"PNPC107"}, /* Standard Modem */ 243 + {"PNPC108"}, /* Standard 9600 bps Modem */ 244 + {"PNPC109"}, /* Standard 14400 bps Modem */ 245 + {"PNPC10A"}, /* Standard 28800 bps Modem */ 246 + {"PNPC10B"}, /* Standard Modem */ 247 + {"PNPC10C"}, /* Standard 9600 bps Modem */ 248 + {"PNPC10D"}, /* Standard 14400 bps Modem */ 249 + {"PNPC10E"}, /* Standard 28800 bps Modem */ 250 + {"PNPC10F"}, /* Standard Modem */ 251 + {"PNP2000"}, /* Standard PCMCIA Card Modem */ 252 + {"ROK0030"}, /* Rockwell 33.6 DPF Internal PnP, Modular Technology 33.6 Internal PnP */ 253 + {"ROK0100"}, /* KORTEX 14400 Externe PnP */ 254 + {"ROK4120"}, /* Rockwell 28.8 */ 255 + {"ROK4920"}, /* Viking 28.8 INTERNAL Fax+Data+Voice PnP */ 256 + {"RSS00A0"}, /* Rockwell 33.6 DPF External PnP, BT Prologue 33.6 External PnP, Modular Technology 33.6 External PnP */ 257 + {"RSS0262"}, /* Viking 56K FAX INT */ 258 + {"RSS0250"}, /* K56 par,VV,Voice,Speakphone,AudioSpan,PnP */ 259 + {"SUP1310"}, /* SupraExpress 28.8 Data/Fax PnP modem */ 260 + {"SUP1381"}, /* SupraExpress 336i PnP Voice Modem */ 261 + {"SUP1421"}, /* SupraExpress 33.6 Data/Fax PnP modem */ 262 + {"SUP1590"}, /* SupraExpress 33.6 Data/Fax PnP modem */ 263 + {"SUP1620"}, /* SupraExpress 336i Sp ASVD */ 264 + {"SUP1760"}, /* SupraExpress 33.6 Data/Fax PnP modem */ 265 + {"SUP2171"}, /* SupraExpress 56i Sp Intl */ 266 + {"TEX0011"}, /* Phoebe Micro 33.6 Data Fax 1433VQH Plug & Play 
*/ 267 + {"UAC000F"}, /* Archtek SmartLink Modem 3334BT Plug & Play */ 268 + {"USR0000"}, /* 3Com Corp. Gateway Telepath IIvi 33.6 */ 269 + {"USR0002"}, /* U.S. Robotics Sporster 33.6K Fax INT PnP */ 270 + {"USR0004"}, /* Sportster Vi 14.4 PnP FAX Voicemail */ 271 + {"USR0006"}, /* U.S. Robotics 33.6K Voice INT PnP */ 272 + {"USR0007"}, /* U.S. Robotics 33.6K Voice EXT PnP */ 273 + {"USR0009"}, /* U.S. Robotics Courier V.Everything INT PnP */ 274 + {"USR2002"}, /* U.S. Robotics 33.6K Voice INT PnP */ 275 + {"USR2070"}, /* U.S. Robotics 56K Voice INT PnP */ 276 + {"USR2080"}, /* U.S. Robotics 56K Voice EXT PnP */ 277 + {"USR3031"}, /* U.S. Robotics 56K FAX INT */ 278 + {"USR3050"}, /* U.S. Robotics 56K FAX INT */ 279 + {"USR3070"}, /* U.S. Robotics 56K Voice INT PnP */ 280 + {"USR3080"}, /* U.S. Robotics 56K Voice EXT PnP */ 281 + {"USR3090"}, /* U.S. Robotics 56K Voice INT PnP */ 282 + {"USR9100"}, /* U.S. Robotics 56K Message */ 283 + {"USR9160"}, /* U.S. Robotics 56K FAX EXT PnP */ 284 + {"USR9170"}, /* U.S. Robotics 56K FAX INT PnP */ 285 + {"USR9180"}, /* U.S. Robotics 56K Voice EXT PnP */ 286 + {"USR9190"}, /* U.S. Robotics 56K Voice INT PnP */ 287 + {"WACFXXX"}, /* Wacom tablets */ 288 + {"FPI2002"}, /* Compaq touchscreen */ 289 + {"FUJ02B2"}, /* Fujitsu Stylistic touchscreens */ 290 + {"FUJ02B3"}, 291 + {"FUJ02B4"}, /* Fujitsu Stylistic LT touchscreens */ 292 + {"FUJ02B6"}, /* Passive Fujitsu Stylistic touchscreens */ 293 + {"FUJ02B7"}, 294 + {"FUJ02B8"}, 295 + {"FUJ02B9"}, 296 + {"FUJ02BC"}, 297 + {"FUJ02E5"}, /* Fujitsu Wacom Tablet PC device */ 298 + {"FUJ02E6"}, /* Fujitsu P-series tablet PC device */ 299 + {"FUJ02E7"}, /* Fujitsu Wacom 2FGT Tablet PC device */ 300 + {"FUJ02E9"}, /* Fujitsu Wacom 1FGT Tablet PC device */ 301 + {"LTS0001"}, /* LG C1 EXPRESS DUAL (C1-PB11A3) touch screen (actually a FUJ02E6 in disguise) */ 302 + {"WCI0003"}, /* Rockwell's (PORALiNK) 33600 INT PNP */ 303 + {"WEC1022"}, /* Winbond CIR port, should not be probed. 
We should keep track of it to prevent the legacy serial driver from probing it */
304 + /* scl200wdt */
305 + {"NSC0800"}, /* National Semiconductor PC87307/PC97307 watchdog component */
306 + /* mpu401 */
307 + {"PNPb006"},
308 + /* cs423x-pnpbios */
309 + {"CSC0100"},
310 + {"CSC0000"},
311 + {"GIM0100"}, /* Guillemot Turtlebeach something appears to be cs4232 compatible */
312 + /* es18xx-pnpbios */
313 + {"ESS1869"},
314 + {"ESS1879"},
315 + /* snd-opl3sa2-pnpbios */
316 + {"YMH0021"},
317 + {"NMX2210"}, /* Gateway Solo 2500 */
318 + {""},
319 + };
320 +
321 + static bool is_hex_digit(char c)
322 + {
323 + return (c >= '0' && c <= '9') || (c >= 'A' && c <= 'F');
324 + }
325 +
326 + static bool matching_id(char *idstr, char *list_id)
327 + {
328 + int i;
329 +
330 + if (memcmp(idstr, list_id, 3))
331 + return false;
332 +
333 + for (i = 3; i < 7; i++) {
334 + char c = toupper(idstr[i]);
335 +
336 + if (!is_hex_digit(c)
337 + || (list_id[i] != 'X' && c != toupper(list_id[i])))
338 + return false;
339 + }
340 + return true;
341 + }
342 +
343 + static bool acpi_pnp_match(char *idstr, const struct acpi_device_id **matchid)
344 + {
345 + const struct acpi_device_id *devid;
346 +
347 + for (devid = acpi_pnp_device_ids; devid->id[0]; devid++)
348 + if (matching_id(idstr, (char *)devid->id)) {
349 + if (matchid)
350 + *matchid = devid;
351 +
352 + return true;
353 + }
354 +
355 + return false;
356 + }
357 +
358 + static int acpi_pnp_attach(struct acpi_device *adev,
359 + const struct acpi_device_id *id)
360 + {
361 + return 1;
362 + }
363 +
364 + static struct acpi_scan_handler acpi_pnp_handler = {
365 + .ids = acpi_pnp_device_ids,
366 + .match = acpi_pnp_match,
367 + .attach = acpi_pnp_attach,
368 + };
369 +
370 + /*
371 + * For CMOS RTC devices, the PNP ACPI scan handler does not work, because
372 + * there is a CMOS RTC ACPI scan handler installed already, so we need to
373 + * check those devices and enumerate them to the PNP bus directly.
374 + */
375 + static int is_cmos_rtc_device(struct acpi_device *adev)
376 + {
377 + struct acpi_device_id ids[] = {
378 + { "PNP0B00" },
379 + { "PNP0B01" },
380 + { "PNP0B02" },
381 + {""},
382 + };
383 + return !acpi_match_device_ids(adev, ids);
384 + }
385 +
386 + bool acpi_is_pnp_device(struct acpi_device *adev)
387 + {
388 + return adev->handler == &acpi_pnp_handler || is_cmos_rtc_device(adev);
389 + }
390 + EXPORT_SYMBOL_GPL(acpi_is_pnp_device);
391 +
392 + void __init acpi_pnp_init(void)
393 + {
394 + acpi_scan_add_handler(&acpi_pnp_handler);
395 + }
+1 -1
drivers/acpi/acpi_processor.c
···
268 268 pr->apic_id = apic_id;
269 269
270 270 cpu_index = acpi_map_cpuid(pr->apic_id, pr->acpi_id);
271 - if (!cpu0_initialized) {
271 + if (!cpu0_initialized && !acpi_lapic) {
272 272 cpu0_initialized = 1;
273 273 /* Handle UP system running SMP kernel, with no LAPIC in MADT */
274 274 if ((cpu_index == -1) && (num_online_cpus() == 1))
+1
drivers/acpi/acpica/Makefile
···
135 135 rsxface.o
136 136
137 137 acpi-y += \
138 + tbdata.o \
138 139 tbfadt.o \
139 140 tbfind.o \
140 141 tbinstal.o \
+170
drivers/acpi/acpica/acapps.h
··· 1 + /****************************************************************************** 2 + * 3 + * Module Name: acapps - common include for ACPI applications/tools 4 + * 5 + *****************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2014, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #ifndef _ACAPPS 45 + #define _ACAPPS 46 + 47 + /* Common info for tool signons */ 48 + 49 + #define ACPICA_NAME "Intel ACPI Component Architecture" 50 + #define ACPICA_COPYRIGHT "Copyright (c) 2000 - 2014 Intel Corporation" 51 + 52 + #if ACPI_MACHINE_WIDTH == 64 53 + #define ACPI_WIDTH "-64" 54 + 55 + #elif ACPI_MACHINE_WIDTH == 32 56 + #define ACPI_WIDTH "-32" 57 + 58 + #else 59 + #error unknown ACPI_MACHINE_WIDTH 60 + #define ACPI_WIDTH "-??" 61 + 62 + #endif 63 + 64 + /* Macros for signons and file headers */ 65 + 66 + #define ACPI_COMMON_SIGNON(utility_name) \ 67 + "\n%s\n%s version %8.8X%s [%s]\n%s\n\n", \ 68 + ACPICA_NAME, \ 69 + utility_name, ((u32) ACPI_CA_VERSION), ACPI_WIDTH, __DATE__, \ 70 + ACPICA_COPYRIGHT 71 + 72 + #define ACPI_COMMON_HEADER(utility_name, prefix) \ 73 + "%s%s\n%s%s version %8.8X%s [%s]\n%s%s\n%s\n", \ 74 + prefix, ACPICA_NAME, \ 75 + prefix, utility_name, ((u32) ACPI_CA_VERSION), ACPI_WIDTH, __DATE__, \ 76 + prefix, ACPICA_COPYRIGHT, \ 77 + prefix 78 + 79 + /* Macros for usage messages */ 80 + 81 + #define ACPI_USAGE_HEADER(usage) \ 82 + printf ("Usage: %s\nOptions:\n", usage); 83 + 84 + #define ACPI_OPTION(name, description) \ 85 + printf (" %-18s%s\n", name, description); 86 + 87 + #define FILE_SUFFIX_DISASSEMBLY "dsl" 88 + #define ACPI_TABLE_FILE_SUFFIX ".dat" 89 + 90 + /* 91 + * getopt 92 + */ 93 + int acpi_getopt(int argc, char **argv, char *opts); 94 + 95 + int acpi_getopt_argument(int 
argc, char **argv); 96 + 97 + extern int acpi_gbl_optind; 98 + extern int acpi_gbl_opterr; 99 + extern int acpi_gbl_sub_opt_char; 100 + extern char *acpi_gbl_optarg; 101 + 102 + /* 103 + * cmfsize - Common get file size function 104 + */ 105 + u32 cm_get_file_size(FILE * file); 106 + 107 + #ifndef ACPI_DUMP_APP 108 + /* 109 + * adisasm 110 + */ 111 + acpi_status 112 + ad_aml_disassemble(u8 out_to_file, 113 + char *filename, char *prefix, char **out_filename); 114 + 115 + void ad_print_statistics(void); 116 + 117 + acpi_status ad_find_dsdt(u8 **dsdt_ptr, u32 *dsdt_length); 118 + 119 + void ad_dump_tables(void); 120 + 121 + acpi_status ad_get_local_tables(void); 122 + 123 + acpi_status 124 + ad_parse_table(struct acpi_table_header *table, 125 + acpi_owner_id * owner_id, u8 load_table, u8 external); 126 + 127 + acpi_status ad_display_tables(char *filename, struct acpi_table_header *table); 128 + 129 + acpi_status ad_display_statistics(void); 130 + 131 + /* 132 + * adwalk 133 + */ 134 + void 135 + acpi_dm_cross_reference_namespace(union acpi_parse_object *parse_tree_root, 136 + struct acpi_namespace_node *namespace_root, 137 + acpi_owner_id owner_id); 138 + 139 + void acpi_dm_dump_tree(union acpi_parse_object *origin); 140 + 141 + void acpi_dm_find_orphan_methods(union acpi_parse_object *origin); 142 + 143 + void 144 + acpi_dm_finish_namespace_load(union acpi_parse_object *parse_tree_root, 145 + struct acpi_namespace_node *namespace_root, 146 + acpi_owner_id owner_id); 147 + 148 + void 149 + acpi_dm_convert_resource_indexes(union acpi_parse_object *parse_tree_root, 150 + struct acpi_namespace_node *namespace_root); 151 + 152 + /* 153 + * adfile 154 + */ 155 + acpi_status ad_initialize(void); 156 + 157 + char *fl_generate_filename(char *input_filename, char *suffix); 158 + 159 + acpi_status 160 + fl_split_input_pathname(char *input_path, 161 + char **out_directory_path, char **out_filename); 162 + 163 + char *ad_generate_filename(char *prefix, char *table_id); 164 + 165 
+ void 166 + ad_write_table(struct acpi_table_header *table, 167 + u32 length, char *table_name, char *oem_table_id); 168 + #endif 169 + 170 + #endif /* _ACAPPS */
+3 -2
drivers/acpi/acpica/acevents.h
···
104 104 */
105 105 acpi_status
106 106 acpi_ev_create_gpe_block(struct acpi_namespace_node *gpe_device,
107 - struct acpi_generic_address *gpe_block_address,
107 + u64 address,
108 + u8 space_id,
108 109 u32 register_count,
109 - u8 gpe_block_base_number,
110 + u16 gpe_block_base_number,
110 111 u32 interrupt_number,
111 112 struct acpi_gpe_block_info **return_gpe_block);
112 113
+3 -139
drivers/acpi/acpica/acglobal.h
··· 44 44 #ifndef __ACGLOBAL_H__ 45 45 #define __ACGLOBAL_H__ 46 46 47 - /* 48 - * Ensure that the globals are actually defined and initialized only once. 49 - * 50 - * The use of these macros allows a single list of globals (here) in order 51 - * to simplify maintenance of the code. 52 - */ 53 - #ifdef DEFINE_ACPI_GLOBALS 54 - #define ACPI_GLOBAL(type,name) \ 55 - extern type name; \ 56 - type name 57 - 58 - #define ACPI_INIT_GLOBAL(type,name,value) \ 59 - type name=value 60 - 61 - #else 62 - #define ACPI_GLOBAL(type,name) \ 63 - extern type name 64 - 65 - #define ACPI_INIT_GLOBAL(type,name,value) \ 66 - extern type name 67 - #endif 68 - 69 - #ifdef DEFINE_ACPI_GLOBALS 70 - 71 - /* Public globals, available from outside ACPICA subsystem */ 72 - 73 47 /***************************************************************************** 74 48 * 75 - * Runtime configuration (static defaults that can be overriden at runtime) 49 + * Globals related to the ACPI tables 76 50 * 77 51 ****************************************************************************/ 78 52 79 - /* 80 - * Enable "slack" in the AML interpreter? Default is FALSE, and the 81 - * interpreter strictly follows the ACPI specification. Setting to TRUE 82 - * allows the interpreter to ignore certain errors and/or bad AML constructs. 
83 - * 84 - * Currently, these features are enabled by this flag: 85 - * 86 - * 1) Allow "implicit return" of last value in a control method 87 - * 2) Allow access beyond the end of an operation region 88 - * 3) Allow access to uninitialized locals/args (auto-init to integer 0) 89 - * 4) Allow ANY object type to be a source operand for the Store() operator 90 - * 5) Allow unresolved references (invalid target name) in package objects 91 - * 6) Enable warning messages for behavior that is not ACPI spec compliant 92 - */ 93 - ACPI_INIT_GLOBAL(u8, acpi_gbl_enable_interpreter_slack, FALSE); 53 + /* Master list of all ACPI tables that were found in the RSDT/XSDT */ 94 54 95 - /* 96 - * Automatically serialize all methods that create named objects? Default 97 - * is TRUE, meaning that all non_serialized methods are scanned once at 98 - * table load time to determine those that create named objects. Methods 99 - * that create named objects are marked Serialized in order to prevent 100 - * possible run-time problems if they are entered by more than one thread. 101 - */ 102 - ACPI_INIT_GLOBAL(u8, acpi_gbl_auto_serialize_methods, TRUE); 103 - 104 - /* 105 - * Create the predefined _OSI method in the namespace? Default is TRUE 106 - * because ACPI CA is fully compatible with other ACPI implementations. 107 - * Changing this will revert ACPI CA (and machine ASL) to pre-OSI behavior. 108 - */ 109 - ACPI_INIT_GLOBAL(u8, acpi_gbl_create_osi_method, TRUE); 110 - 111 - /* 112 - * Optionally use default values for the ACPI register widths. Set this to 113 - * TRUE to use the defaults, if an FADT contains incorrect widths/lengths. 114 - */ 115 - ACPI_INIT_GLOBAL(u8, acpi_gbl_use_default_register_widths, TRUE); 116 - 117 - /* 118 - * Optionally enable output from the AML Debug Object. 119 - */ 120 - ACPI_INIT_GLOBAL(u8, acpi_gbl_enable_aml_debug_object, FALSE); 121 - 122 - /* 123 - * Optionally copy the entire DSDT to local memory (instead of simply 124 - * mapping it.) 
There are some BIOSs that corrupt or replace the original 125 - * DSDT, creating the need for this option. Default is FALSE, do not copy 126 - * the DSDT. 127 - */ 128 - ACPI_INIT_GLOBAL(u8, acpi_gbl_copy_dsdt_locally, FALSE); 129 - 130 - /* 131 - * Optionally ignore an XSDT if present and use the RSDT instead. 132 - * Although the ACPI specification requires that an XSDT be used instead 133 - * of the RSDT, the XSDT has been found to be corrupt or ill-formed on 134 - * some machines. Default behavior is to use the XSDT if present. 135 - */ 136 - ACPI_INIT_GLOBAL(u8, acpi_gbl_do_not_use_xsdt, FALSE); 137 - 138 - /* 139 - * Optionally use 32-bit FADT addresses if and when there is a conflict 140 - * (address mismatch) between the 32-bit and 64-bit versions of the 141 - * address. Although ACPICA adheres to the ACPI specification which 142 - * requires the use of the corresponding 64-bit address if it is non-zero, 143 - * some machines have been found to have a corrupted non-zero 64-bit 144 - * address. Default is TRUE, favor the 32-bit addresses. 145 - */ 146 - ACPI_INIT_GLOBAL(u8, acpi_gbl_use32_bit_fadt_addresses, TRUE); 147 - 148 - /* 149 - * Optionally truncate I/O addresses to 16 bits. Provides compatibility 150 - * with other ACPI implementations. NOTE: During ACPICA initialization, 151 - * this value is set to TRUE if any Windows OSI strings have been 152 - * requested by the BIOS. 153 - */ 154 - ACPI_INIT_GLOBAL(u8, acpi_gbl_truncate_io_addresses, FALSE); 155 - 156 - /* 157 - * Disable runtime checking and repair of values returned by control methods. 158 - * Use only if the repair is causing a problem on a particular machine. 159 - */ 160 - ACPI_INIT_GLOBAL(u8, acpi_gbl_disable_auto_repair, FALSE); 161 - 162 - /* 163 - * Optionally do not load any SSDTs from the RSDT/XSDT during initialization. 164 - * This can be useful for debugging ACPI problems on some machines. 
165 - */ 166 - ACPI_INIT_GLOBAL(u8, acpi_gbl_disable_ssdt_table_load, FALSE); 167 - 168 - /* 169 - * We keep track of the latest version of Windows that has been requested by 170 - * the BIOS. 171 - */ 172 - ACPI_INIT_GLOBAL(u8, acpi_gbl_osi_data, 0); 173 - 174 - #endif /* DEFINE_ACPI_GLOBALS */ 175 - 176 - /***************************************************************************** 177 - * 178 - * ACPI Table globals 179 - * 180 - ****************************************************************************/ 181 - 182 - /* 183 - * Master list of all ACPI tables that were found in the RSDT/XSDT. 184 - */ 185 55 ACPI_GLOBAL(struct acpi_table_list, acpi_gbl_root_table_list); 186 56 187 57 /* DSDT information. Used to check for DSDT corruption */ ··· 149 279 ACPI_GLOBAL(acpi_init_handler, acpi_gbl_init_handler); 150 280 ACPI_GLOBAL(acpi_table_handler, acpi_gbl_table_handler); 151 281 ACPI_GLOBAL(void *, acpi_gbl_table_handler_context); 152 - ACPI_GLOBAL(struct acpi_walk_state *, acpi_gbl_breakpoint_walk); 153 282 ACPI_GLOBAL(acpi_interface_handler, acpi_gbl_interface_handler); 154 283 ACPI_GLOBAL(struct acpi_sci_handler_info *, acpi_gbl_sci_handler_list); 155 284 ··· 165 296 /* Misc */ 166 297 167 298 ACPI_GLOBAL(u32, acpi_gbl_original_mode); 168 - ACPI_GLOBAL(u32, acpi_gbl_rsdp_original_location); 169 299 ACPI_GLOBAL(u32, acpi_gbl_ns_lookup_count); 170 300 ACPI_GLOBAL(u32, acpi_gbl_ps_find_count); 171 301 ACPI_GLOBAL(u16, acpi_gbl_pm1_enable_register_save); ··· 351 483 ACPI_GLOBAL(u32, acpi_gbl_num_nodes); 352 484 ACPI_GLOBAL(u32, acpi_gbl_num_objects); 353 485 354 - ACPI_GLOBAL(u32, acpi_gbl_size_of_parse_tree); 355 - ACPI_GLOBAL(u32, acpi_gbl_size_of_method_trees); 356 - ACPI_GLOBAL(u32, acpi_gbl_size_of_node_entries); 357 - ACPI_GLOBAL(u32, acpi_gbl_size_of_acpi_objects); 358 - 359 486 #endif /* ACPI_DEBUGGER */ 360 487 361 488 /***************************************************************************** ··· 372 509 
****************************************************************************/ 373 510 374 511 extern const struct ah_predefined_name asl_predefined_info[]; 512 + extern const struct ah_device_id asl_device_ids[]; 375 513 376 514 #endif /* __ACGLOBAL_H__ */
+12 -5
drivers/acpi/acpica/aclocal.h
··· 450 450 struct acpi_gpe_register_info { 451 451 struct acpi_generic_address status_address; /* Address of status reg */ 452 452 struct acpi_generic_address enable_address; /* Address of enable reg */ 453 + u16 base_gpe_number; /* Base GPE number for this register */ 453 454 u8 enable_for_wake; /* GPEs to keep enabled when sleeping */ 454 455 u8 enable_for_run; /* GPEs to keep enabled when running */ 455 - u8 base_gpe_number; /* Base GPE number for this register */ 456 456 }; 457 457 458 458 /* ··· 466 466 struct acpi_gpe_xrupt_info *xrupt_block; /* Backpointer to interrupt block */ 467 467 struct acpi_gpe_register_info *register_info; /* One per GPE register pair */ 468 468 struct acpi_gpe_event_info *event_info; /* One for each GPE */ 469 - struct acpi_generic_address block_address; /* Base address of the block */ 469 + u64 address; /* Base address of the block */ 470 470 u32 register_count; /* Number of register pairs in block */ 471 471 u16 gpe_count; /* Number of individual GPEs in block */ 472 - u8 block_base_number; /* Base GPE number for this block */ 473 - u8 initialized; /* TRUE if this block is initialized */ 472 + u16 block_base_number; /* Base GPE number for this block */ 473 + u8 space_id; 474 + u8 initialized; /* TRUE if this block is initialized */ 474 475 }; 475 476 476 477 /* Information about GPE interrupt handlers, one per each interrupt level used for GPEs */ ··· 734 733 #define ACPI_DASM_MATCHOP 0x06 /* Parent opcode is a Match() operator */ 735 734 #define ACPI_DASM_LNOT_PREFIX 0x07 /* Start of a Lnot_equal (etc.) pair of opcodes */ 736 735 #define ACPI_DASM_LNOT_SUFFIX 0x08 /* End of a Lnot_equal (etc.) 
pair of opcodes */ 737 - #define ACPI_DASM_IGNORE 0x09 /* Not used at this time */ 736 + #define ACPI_DASM_HID_STRING 0x09 /* String is a _HID or _CID */ 737 + #define ACPI_DASM_IGNORE 0x0A /* Not used at this time */ 738 738 739 739 /* 740 740 * Generic operation (for example: If, While, Store) ··· 1147 1145 #ifndef ACPI_ASL_COMPILER 1148 1146 char *action; 1149 1147 #endif 1148 + }; 1149 + 1150 + struct ah_device_id { 1151 + char *name; 1152 + char *description; 1150 1153 }; 1151 1154 1152 1155 #endif /* __ACLOCAL_H__ */
+4 -6
drivers/acpi/acpica/acpredef.h
···
586 586 {{"_LID", METHOD_0ARGS,
587 587 METHOD_RETURNS(ACPI_RTYPE_INTEGER)}},
588 588
589 + {{"_LPD", METHOD_0ARGS,
590 + METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (1 Int(rev), n Pkg (2 Int) */
591 + PACKAGE_INFO(ACPI_PTYPE2_REV_FIXED, ACPI_RTYPE_INTEGER, 2, 0, 0, 0),
592 +
589 593 {{"_MAT", METHOD_0ARGS,
590 594 METHOD_RETURNS(ACPI_RTYPE_BUFFER)}},
···
701 697 {{"_PRL", METHOD_0ARGS,
702 698 METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (Refs) */
703 699 PACKAGE_INFO(ACPI_PTYPE1_VAR, ACPI_RTYPE_REFERENCE, 0, 0, 0, 0),
704 -
705 - {{"_PRP", METHOD_0ARGS,
706 - METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (Pkgs) each: 1 Str, 1 Int/Str/Pkg */
707 - PACKAGE_INFO(ACPI_PTYPE2, ACPI_RTYPE_STRING, 1,
708 - ACPI_RTYPE_INTEGER | ACPI_RTYPE_STRING |
709 - ACPI_RTYPE_PACKAGE | ACPI_RTYPE_REFERENCE, 1, 0),
710 700
711 701 {{"_PRS", METHOD_0ARGS,
712 702 METHOD_RETURNS(ACPI_RTYPE_BUFFER)}},
+49 -13
drivers/acpi/acpica/actables.h
···
 u8 *acpi_tb_scan_memory_for_rsdp(u8 *start_address, u32 length);

 /*
+ * tbdata - table data structure management
+ */
+acpi_status acpi_tb_get_next_root_index(u32 *table_index);
+
+void
+acpi_tb_init_table_descriptor(struct acpi_table_desc *table_desc,
+			      acpi_physical_address address,
+			      u8 flags, struct acpi_table_header *table);
+
+acpi_status
+acpi_tb_acquire_temp_table(struct acpi_table_desc *table_desc,
+			   acpi_physical_address address, u8 flags);
+
+void acpi_tb_release_temp_table(struct acpi_table_desc *table_desc);
+
+acpi_status acpi_tb_validate_temp_table(struct acpi_table_desc *table_desc);
+
+acpi_status
+acpi_tb_verify_temp_table(struct acpi_table_desc *table_desc, char *signature);
+
+u8 acpi_tb_is_table_loaded(u32 table_index);
+
+void acpi_tb_set_table_loaded_flag(u32 table_index, u8 is_loaded);
+
+/*
  * tbfadt - FADT parse/convert/validate
  */
 void acpi_tb_parse_fadt(u32 table_index);
···
  */
 acpi_status acpi_tb_resize_root_table_list(void);

-acpi_status acpi_tb_verify_table(struct acpi_table_desc *table_desc);
+acpi_status acpi_tb_validate_table(struct acpi_table_desc *table_desc);

-struct acpi_table_header *acpi_tb_table_override(struct acpi_table_header
-						 *table_header,
-						 struct acpi_table_desc
-						 *table_desc);
+void acpi_tb_invalidate_table(struct acpi_table_desc *table_desc);
+
+void acpi_tb_override_table(struct acpi_table_desc *old_table_desc);

 acpi_status
-acpi_tb_add_table(struct acpi_table_desc *table_desc, u32 *table_index);
+acpi_tb_acquire_table(struct acpi_table_desc *table_desc,
+		      struct acpi_table_header **table_ptr,
+		      u32 *table_length, u8 *table_flags);
+
+void
+acpi_tb_release_table(struct acpi_table_header *table,
+		      u32 table_length, u8 table_flags);
+
+acpi_status
+acpi_tb_install_standard_table(acpi_physical_address address,
+			       u8 flags,
+			       u8 reload, u8 override, u32 *table_index);

 acpi_status
 acpi_tb_store_table(acpi_physical_address address,
		     struct acpi_table_header *table,
		     u32 length, u8 flags, u32 *table_index);

-void acpi_tb_delete_table(struct acpi_table_desc *table_desc);
+void acpi_tb_uninstall_table(struct acpi_table_desc *table_desc);

 void acpi_tb_terminate(void);

···
 acpi_status acpi_tb_release_owner_id(u32 table_index);

 acpi_status acpi_tb_get_owner_id(u32 table_index, acpi_owner_id *owner_id);
-
-u8 acpi_tb_is_table_loaded(u32 table_index);
-
-void acpi_tb_set_table_loaded_flag(u32 table_index, u8 is_loaded);

 /*
  * tbutils - table manager utilities
···
 struct acpi_table_header *acpi_tb_copy_dsdt(u32 table_index);

 void
-acpi_tb_install_table(acpi_physical_address address,
-		      char *signature, u32 table_index);
+acpi_tb_install_table_with_override(u32 table_index,
+				    struct acpi_table_desc *new_table_desc,
+				    u8 override);
+
+acpi_status
+acpi_tb_install_fixed_table(acpi_physical_address address,
+			    char *signature, u32 table_index);

 acpi_status acpi_tb_parse_root_table(acpi_physical_address rsdp_address);
+8 -2
drivers/acpi/acpica/acutils.h
···
 char *acpi_ut_get_mutex_name(u32 mutex_id);

-const char *acpi_ut_get_notify_name(u32 notify_value);
-
+const char *acpi_ut_get_notify_name(u32 notify_value, acpi_object_type type);
 #endif

 char *acpi_ut_get_type_name(acpi_object_type type);
···
			  const char *message,
			  struct acpi_namespace_node *node,
			  const char *path, acpi_status lookup_status);
+
+/*
+ * Utility functions for ACPI names and IDs
+ */
+const struct ah_predefined_name *acpi_ah_match_predefined_name(char *nameseg);
+
+const struct ah_device_id *acpi_ah_match_hardware_id(char *hid);

 #endif				/* _ACUTILS_H */
+7 -6
drivers/acpi/acpica/evgpe.c
···
		if (!(gpe_register_info->enable_for_run |
		      gpe_register_info->enable_for_wake)) {
			ACPI_DEBUG_PRINT((ACPI_DB_INTERRUPTS,
-					  "Ignore disabled registers for GPE%02X-GPE%02X: "
+					  "Ignore disabled registers for GPE %02X-%02X: "
					  "RunEnable=%02X, WakeEnable=%02X\n",
					  gpe_register_info->
					  base_gpe_number,
···
		}

		ACPI_DEBUG_PRINT((ACPI_DB_INTERRUPTS,
-				  "Read registers for GPE%02X-GPE%02X: Status=%02X, Enable=%02X, "
+				  "Read registers for GPE %02X-%02X: Status=%02X, Enable=%02X, "
				  "RunEnable=%02X, WakeEnable=%02X\n",
				  gpe_register_info->base_gpe_number,
				  gpe_register_info->base_gpe_number +
···
		status = acpi_hw_clear_gpe(gpe_event_info);
		if (ACPI_FAILURE(status)) {
			ACPI_EXCEPTION((AE_INFO, status,
-					"Unable to clear GPE%02X", gpe_number));
+					"Unable to clear GPE %02X",
+					gpe_number));
			return_UINT32(ACPI_INTERRUPT_NOT_HANDLED);
		}
	}
···
	status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_DISABLE);
	if (ACPI_FAILURE(status)) {
		ACPI_EXCEPTION((AE_INFO, status,
-				"Unable to disable GPE%02X", gpe_number));
+				"Unable to disable GPE %02X", gpe_number));
		return_UINT32(ACPI_INTERRUPT_NOT_HANDLED);
	}
···
						 gpe_event_info);
		if (ACPI_FAILURE(status)) {
			ACPI_EXCEPTION((AE_INFO, status,
-					"Unable to queue handler for GPE%02X - event disabled",
+					"Unable to queue handler for GPE %02X - event disabled",
					gpe_number));
		}
		break;
···
		 * a GPE to be enabled if it has no handler or method.
		 */
		ACPI_ERROR((AE_INFO,
-			    "No handler or method for GPE%02X, disabling event",
+			    "No handler or method for GPE %02X, disabling event",
			    gpe_number));

		break;
+16 -18
drivers/acpi/acpica/evgpeblk.c
···
		/* Init the register_info for this GPE register (8 GPEs) */

-		this_register->base_gpe_number =
-		    (u8) (gpe_block->block_base_number +
-			  (i * ACPI_GPE_REGISTER_WIDTH));
+		this_register->base_gpe_number = (u16)
+		    (gpe_block->block_base_number +
+		     (i * ACPI_GPE_REGISTER_WIDTH));

-		this_register->status_address.address =
-		    gpe_block->block_address.address + i;
+		this_register->status_address.address = gpe_block->address + i;

		this_register->enable_address.address =
-		    gpe_block->block_address.address + i +
-		    gpe_block->register_count;
+		    gpe_block->address + i + gpe_block->register_count;

-		this_register->status_address.space_id =
-		    gpe_block->block_address.space_id;
-		this_register->enable_address.space_id =
-		    gpe_block->block_address.space_id;
+		this_register->status_address.space_id = gpe_block->space_id;
+		this_register->enable_address.space_id = gpe_block->space_id;
		this_register->status_address.bit_width =
		    ACPI_GPE_REGISTER_WIDTH;
		this_register->enable_address.bit_width =
···

 acpi_status
 acpi_ev_create_gpe_block(struct acpi_namespace_node *gpe_device,
-			 struct acpi_generic_address *gpe_block_address,
+			 u64 address,
+			 u8 space_id,
			 u32 register_count,
-			 u8 gpe_block_base_number,
+			 u16 gpe_block_base_number,
			 u32 interrupt_number,
			 struct acpi_gpe_block_info **return_gpe_block)
 {
···

	/* Initialize the new GPE block */

+	gpe_block->address = address;
+	gpe_block->space_id = space_id;
	gpe_block->node = gpe_device;
	gpe_block->gpe_count = (u16)(register_count * ACPI_GPE_REGISTER_WIDTH);
	gpe_block->initialized = FALSE;
	gpe_block->register_count = register_count;
	gpe_block->block_base_number = gpe_block_base_number;
-
-	ACPI_MEMCPY(&gpe_block->block_address, gpe_block_address,
-		    sizeof(struct acpi_generic_address));

	/*
	 * Create the register_info and event_info sub-structures
···
	}

	ACPI_DEBUG_PRINT_RAW((ACPI_DB_INIT,
-			      "    Initialized GPE %02X to %02X [%4.4s] %u regs on interrupt 0x%X\n",
+			      "    Initialized GPE %02X to %02X [%4.4s] %u regs on interrupt 0x%X%s\n",
			      (u32)gpe_block->block_base_number,
			      (u32)(gpe_block->block_base_number +
				    (gpe_block->gpe_count - 1)),
			      gpe_device->name.ascii, gpe_block->register_count,
-			      interrupt_number));
+			      interrupt_number,
+			      interrupt_number ==
+			      acpi_gbl_FADT.sci_interrupt ? " (SCI)" : ""));

	/* Update global count of currently available GPEs */
+8 -4
drivers/acpi/acpica/evgpeinit.c
···
	/* Install GPE Block 0 */

	status = acpi_ev_create_gpe_block(acpi_gbl_fadt_gpe_device,
-					  &acpi_gbl_FADT.xgpe0_block,
-					  register_count0, 0,
+					  acpi_gbl_FADT.xgpe0_block.
+					  address,
+					  acpi_gbl_FADT.xgpe0_block.
+					  space_id, register_count0, 0,
					  acpi_gbl_FADT.sci_interrupt,
					  &acpi_gbl_gpe_fadt_blocks[0]);
···

	status =
	    acpi_ev_create_gpe_block(acpi_gbl_fadt_gpe_device,
-				     &acpi_gbl_FADT.xgpe1_block,
-				     register_count1,
+				     acpi_gbl_FADT.xgpe1_block.
+				     address,
+				     acpi_gbl_FADT.xgpe1_block.
+				     space_id, register_count1,
				     acpi_gbl_FADT.gpe1_base,
				     acpi_gbl_FADT.
				     sci_interrupt,
+2 -1
drivers/acpi/acpica/evmisc.c
···
			  "Dispatching Notify on [%4.4s] (%s) Value 0x%2.2X (%s) Node %p\n",
			  acpi_ut_get_node_name(node),
			  acpi_ut_get_type_name(node->type), notify_value,
-			  acpi_ut_get_notify_name(notify_value), node));
+			  acpi_ut_get_notify_name(notify_value, ACPI_TYPE_ANY),
+			  node));

	status = acpi_os_execute(OSL_NOTIFY_HANDLER, acpi_ev_notify_dispatch,
				 info);
+1 -1
drivers/acpi/acpica/evsci.c
···
	ACPI_FUNCTION_TRACE(ev_sci_xrupt_handler);

	/*
-	 * We are guaranteed by the ACPI CA initialization/shutdown code that
+	 * We are guaranteed by the ACPICA initialization/shutdown code that
	 * if this interrupt handler is installed, ACPI is enabled.
	 */
+40 -21
drivers/acpi/acpica/evxface.c
···
	union acpi_operand_object *obj_desc;
	union acpi_operand_object *handler_obj;
	union acpi_operand_object *previous_handler_obj;
-	acpi_status status;
+	acpi_status status = AE_OK;
	u32 i;

	ACPI_FUNCTION_TRACE(acpi_remove_notify_handler);
···
		return_ACPI_STATUS(AE_BAD_PARAMETER);
	}

-	/* Make sure all deferred notify tasks are completed */
-
-	acpi_os_wait_events_complete();
-
-	status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
-	if (ACPI_FAILURE(status)) {
-		return_ACPI_STATUS(status);
-	}
-
	/* Root Object. Global handlers are removed here */

	if (device == ACPI_ROOT_OBJECT) {
		for (i = 0; i < ACPI_NUM_NOTIFY_TYPES; i++) {
			if (handler_type & (i + 1)) {
+				status =
+				    acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
+				if (ACPI_FAILURE(status)) {
+					return_ACPI_STATUS(status);
+				}
+
				if (!acpi_gbl_global_notify[i].handler ||
				    (acpi_gbl_global_notify[i].handler !=
				     handler)) {
···

				acpi_gbl_global_notify[i].handler = NULL;
				acpi_gbl_global_notify[i].context = NULL;
+
+				(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
+
+				/* Make sure all deferred notify tasks are completed */
+
+				acpi_os_wait_events_complete();
			}
		}

-		goto unlock_and_exit;
+		return_ACPI_STATUS(AE_OK);
	}

	/* All other objects: Are Notifies allowed on this object? */

	if (!acpi_ev_is_notify_object(node)) {
-		status = AE_TYPE;
-		goto unlock_and_exit;
+		return_ACPI_STATUS(AE_TYPE);
	}

	/* Must have an existing internal object */

	obj_desc = acpi_ns_get_attached_object(node);
	if (!obj_desc) {
-		status = AE_NOT_EXIST;
-		goto unlock_and_exit;
+		return_ACPI_STATUS(AE_NOT_EXIST);
	}

	/* Internal object exists. Find the handler and remove it */

	for (i = 0; i < ACPI_NUM_NOTIFY_TYPES; i++) {
		if (handler_type & (i + 1)) {
+			status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
+			if (ACPI_FAILURE(status)) {
+				return_ACPI_STATUS(status);
+			}
+
			handler_obj = obj_desc->common_notify.notify_list[i];
			previous_handler_obj = NULL;
···
					    handler_obj->notify.next[i];
			}

+			(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
+
+			/* Make sure all deferred notify tasks are completed */
+
+			acpi_os_wait_events_complete();
			acpi_ut_remove_reference(handler_obj);
		}
	}
+
+	return_ACPI_STATUS(status);

 unlock_and_exit:
	(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
···
	return_ACPI_STATUS(status);
 }

+ACPI_EXPORT_SYMBOL(acpi_install_sci_handler)
+
 /*******************************************************************************
  *
  * FUNCTION:    acpi_remove_sci_handler
···
  * DESCRIPTION: Remove a handler for a System Control Interrupt.
  *
  ******************************************************************************/
-
 acpi_status acpi_remove_sci_handler(acpi_sci_handler address)
 {
	struct acpi_sci_handler_info *prev_sci_handler;
···
	return_ACPI_STATUS(status);
 }

+ACPI_EXPORT_SYMBOL(acpi_remove_sci_handler)
+
 /*******************************************************************************
  *
  * FUNCTION:    acpi_install_global_event_handler
···
  *              Can be used to update event counters, etc.
  *
  ******************************************************************************/
-
 acpi_status
 acpi_install_global_event_handler(acpi_gbl_event_handler handler, void *context)
 {
···
		return_ACPI_STATUS(AE_BAD_PARAMETER);
	}

-	/* Make sure all deferred GPE tasks are completed */
-
-	acpi_os_wait_events_complete();
-
	status = acpi_ut_acquire_mutex(ACPI_MTX_EVENTS);
	if (ACPI_FAILURE(status)) {
		return_ACPI_STATUS(status);
···
		(void)acpi_ev_add_gpe_reference(gpe_event_info);
	}

+	acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
+	(void)acpi_ut_release_mutex(ACPI_MTX_EVENTS);
+
+	/* Make sure all deferred GPE tasks are completed */
+
+	acpi_os_wait_events_complete();
+
	/* Now we can free the handler object */

	ACPI_FREE(handler);
+	return_ACPI_STATUS(status);

 unlock_and_exit:
	acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
+4 -3
drivers/acpi/acpica/evxfgpe.c
···
	 * For user-installed GPE Block Devices, the gpe_block_base_number
	 * is always zero
	 */
-	status =
-	    acpi_ev_create_gpe_block(node, gpe_block_address, register_count, 0,
-				     interrupt_number, &gpe_block);
+	status = acpi_ev_create_gpe_block(node, gpe_block_address->address,
+					  gpe_block_address->space_id,
+					  register_count, 0, interrupt_number,
+					  &gpe_block);
	if (ACPI_FAILURE(status)) {
		goto unlock_and_exit;
	}
+43 -41
drivers/acpi/acpica/exconfig.c
···
			  struct acpi_walk_state *walk_state)
 {
	union acpi_operand_object *ddb_handle;
+	struct acpi_table_header *table_header;
	struct acpi_table_header *table;
-	struct acpi_table_desc table_desc;
	u32 table_index;
	acpi_status status;
	u32 length;

	ACPI_FUNCTION_TRACE(ex_load_op);
-
-	ACPI_MEMSET(&table_desc, 0, sizeof(struct acpi_table_desc));

	/* Source Object can be either an op_region or a Buffer/Field */
···

		/* Get the table header first so we can get the table length */

-		table = ACPI_ALLOCATE(sizeof(struct acpi_table_header));
-		if (!table) {
+		table_header = ACPI_ALLOCATE(sizeof(struct acpi_table_header));
+		if (!table_header) {
			return_ACPI_STATUS(AE_NO_MEMORY);
		}

		status =
		    acpi_ex_region_read(obj_desc,
					sizeof(struct acpi_table_header),
-					ACPI_CAST_PTR(u8, table));
-		length = table->length;
-		ACPI_FREE(table);
+					ACPI_CAST_PTR(u8, table_header));
+		length = table_header->length;
+		ACPI_FREE(table_header);

		if (ACPI_FAILURE(status)) {
			return_ACPI_STATUS(status);
···

		/* Allocate a buffer for the table */

-		table_desc.pointer = ACPI_ALLOCATE(length);
-		if (!table_desc.pointer) {
+		table = ACPI_ALLOCATE(length);
+		if (!table) {
			return_ACPI_STATUS(AE_NO_MEMORY);
		}

		/* Read the entire table */

		status = acpi_ex_region_read(obj_desc, length,
-					     ACPI_CAST_PTR(u8,
-							   table_desc.pointer));
+					     ACPI_CAST_PTR(u8, table));
		if (ACPI_FAILURE(status)) {
-			ACPI_FREE(table_desc.pointer);
+			ACPI_FREE(table);
			return_ACPI_STATUS(status);
		}
-
-		table_desc.address = obj_desc->region.address;
		break;

	case ACPI_TYPE_BUFFER:	/* Buffer or resolved region_field */
···

		/* Get the actual table length from the table header */

-		table =
+		table_header =
		    ACPI_CAST_PTR(struct acpi_table_header,
				  obj_desc->buffer.pointer);
-		length = table->length;
+		length = table_header->length;

		/* Table cannot extend beyond the buffer */
···
		 * Copy the table from the buffer because the buffer could be modified
		 * or even deleted in the future
		 */
-		table_desc.pointer = ACPI_ALLOCATE(length);
-		if (!table_desc.pointer) {
+		table = ACPI_ALLOCATE(length);
+		if (!table) {
			return_ACPI_STATUS(AE_NO_MEMORY);
		}

-		ACPI_MEMCPY(table_desc.pointer, table, length);
-		table_desc.address = ACPI_TO_INTEGER(table_desc.pointer);
+		ACPI_MEMCPY(table, table_header, length);
		break;

	default:
···
		return_ACPI_STATUS(AE_AML_OPERAND_TYPE);
	}

-	/* Validate table checksum (will not get validated in tb_add_table) */
-
-	status = acpi_tb_verify_checksum(table_desc.pointer, length);
-	if (ACPI_FAILURE(status)) {
-		ACPI_FREE(table_desc.pointer);
-		return_ACPI_STATUS(status);
-	}
-
-	/* Complete the table descriptor */
-
-	table_desc.length = length;
-	table_desc.flags = ACPI_TABLE_ORIGIN_ALLOCATED;
-
	/* Install the new table into the local data structures */

-	status = acpi_tb_add_table(&table_desc, &table_index);
+	ACPI_INFO((AE_INFO, "Dynamic OEM Table Load:"));
+	(void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES);
+
+	status = acpi_tb_install_standard_table(ACPI_PTR_TO_PHYSADDR(table),
+						ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL,
+						TRUE, TRUE, &table_index);
+
+	(void)acpi_ut_release_mutex(ACPI_MTX_TABLES);
	if (ACPI_FAILURE(status)) {

		/* Delete allocated table buffer */

-		acpi_tb_delete_table(&table_desc);
+		ACPI_FREE(table);
+		return_ACPI_STATUS(status);
+	}
+
+	/*
+	 * Note: Now table is "INSTALLED", it must be validated before
+	 * loading.
+	 */
+	status =
+	    acpi_tb_validate_table(&acpi_gbl_root_table_list.
+				   tables[table_index]);
+	if (ACPI_FAILURE(status)) {
		return_ACPI_STATUS(status);
	}
···
		return_ACPI_STATUS(status);
	}

-	ACPI_INFO((AE_INFO, "Dynamic OEM Table Load:"));
-	acpi_tb_print_table_header(0, table_desc.pointer);
-
	/* Remove the reference by added by acpi_ex_store above */

	acpi_ut_remove_reference(ddb_handle);
···
	/* Invoke table handler if present */

	if (acpi_gbl_table_handler) {
-		(void)acpi_gbl_table_handler(ACPI_TABLE_EVENT_LOAD,
-					     table_desc.pointer,
+		(void)acpi_gbl_table_handler(ACPI_TABLE_EVENT_LOAD, table,
					     acpi_gbl_table_handler_context);
	}
···
	struct acpi_table_header *table;

	ACPI_FUNCTION_TRACE(ex_unload_table);
+
+	/*
+	 * Temporarily emit a warning so that the ASL for the machine can be
+	 * hopefully obtained. This is to say that the Unload() operator is
+	 * extremely rare if not completely unused.
+	 */
+	ACPI_WARNING((AE_INFO, "Received request to unload an ACPI table"));

	/*
	 * Validate the handle
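The exconfig.c hunk above replaces the old build-a-descriptor-then-add flow with a two-step sequence: the table is first installed into the root table list, then explicitly validated before it can be loaded. A rough standalone model of that state change (hypothetical simplified struct and helper names, not the real ACPICA layout or kernel code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for struct acpi_table_desc: just the fields the
 * install/validate split cares about (hypothetical layout). */
struct table_desc {
	const void *address;	/* recorded at install time */
	const void *pointer;	/* NULL until validated */
	uint32_t length;
};

/* Model of the install step: record where the table lives; the body is
 * deliberately not made accessible yet ("INSTALLED" state). */
static void install_table(struct table_desc *desc, const void *addr,
			  uint32_t length)
{
	desc->address = addr;
	desc->length = length;
	desc->pointer = NULL;
}

/* Model of the validate step: make the body reachable through
 * desc->pointer so it can be loaded ("VALIDATED" state). */
static int validate_table(struct table_desc *desc)
{
	if (!desc->pointer)
		desc->pointer = desc->address; /* real code maps by origin */
	return desc->pointer ? 0 : -1;
}
```

The point of the split is that installation can fail cheaply (and the buffer be freed) before any mapping work is done, which is exactly the error path the hunk adds.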
+3 -1
drivers/acpi/acpica/exdump.c
···
	{ACPI_EXD_POINTER, ACPI_EXD_OFFSET(method.aml_start), "Aml Start"}
 };

-static struct acpi_exdump_info acpi_ex_dump_mutex[5] = {
+static struct acpi_exdump_info acpi_ex_dump_mutex[6] = {
	{ACPI_EXD_INIT, ACPI_EXD_TABLE_SIZE(acpi_ex_dump_mutex), NULL},
	{ACPI_EXD_UINT8, ACPI_EXD_OFFSET(mutex.sync_level), "Sync Level"},
+	{ACPI_EXD_UINT8, ACPI_EXD_OFFSET(mutex.original_sync_level),
+	 "Original Sync Level"},
	{ACPI_EXD_POINTER, ACPI_EXD_OFFSET(mutex.owner_thread), "Owner Thread"},
	{ACPI_EXD_UINT16, ACPI_EXD_OFFSET(mutex.acquisition_depth),
	 "Acquire Depth"},
+12 -3
drivers/acpi/acpica/hwpci.c
···
		/* Walk the list, updating the PCI device/function/bus numbers */

		status = acpi_hw_process_pci_list(pci_id, list_head);
+
+		/* Delete the list */
+
+		acpi_hw_delete_pci_list(list_head);
	}

-	/* Always delete the list */
-
-	acpi_hw_delete_pci_list(list_head);
	return_ACPI_STATUS(status);
 }
···
	while (1) {
		status = acpi_get_parent(current_device, &parent_device);
		if (ACPI_FAILURE(status)) {
+
+			/* Must delete the list before exit */
+
+			acpi_hw_delete_pci_list(*return_list_head);
			return (status);
		}
···

		list_element = ACPI_ALLOCATE(sizeof(struct acpi_pci_device));
		if (!list_element) {
+
+			/* Must delete the list before exit */
+
+			acpi_hw_delete_pci_list(*return_list_head);
			return (AE_NO_MEMORY);
		}
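The hwpci.c hunk above moves list deletion into each exit path of the list builder, so a failed build never leaks (or leaks to the caller) a half-constructed chain. The same pattern in a self-contained toy form (hypothetical `node`/`build_list` names, standing in for the `acpi_pci_device` chain):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy singly-linked list standing in for the acpi_pci_device chain. */
struct node {
	int value;
	struct node *next;
};

static void delete_list(struct node *head)
{
	while (head) {
		struct node *next = head->next;
		free(head);
		head = next;
	}
}

/* Build a list of n nodes; on any failure, free whatever was already
 * built before returning, mirroring the "must delete the list before
 * exit" comments added in the hunk. */
static int build_list(int n, struct node **out)
{
	struct node *head = NULL;
	int i;

	for (i = 0; i < n; i++) {
		struct node *elem = malloc(sizeof(*elem));
		if (!elem) {
			delete_list(head);	/* cleanup on this exit path */
			*out = NULL;
			return -1;
		}
		elem->value = i;
		elem->next = head;
		head = elem;
	}
	*out = head;
	return 0;
}
```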
+8 -5
drivers/acpi/acpica/rscreate.c
···
	void *resource;
	void *current_resource_ptr;

+	ACPI_FUNCTION_TRACE(acpi_buffer_to_resource);
+
	/*
	 * Note: we allow AE_AML_NO_RESOURCE_END_TAG, since an end tag
	 * is not required here.
···
		status = AE_OK;
	}
	if (ACPI_FAILURE(status)) {
-		return (status);
+		return_ACPI_STATUS(status);
	}

	/* Allocate a buffer for the converted resource */
···
	resource = ACPI_ALLOCATE_ZEROED(list_size_needed);
	current_resource_ptr = resource;
	if (!resource) {
-		return (AE_NO_MEMORY);
+		return_ACPI_STATUS(AE_NO_MEMORY);
	}

	/* Perform the AML-to-Resource conversion */
···
		*resource_ptr = resource;
	}

-	return (status);
+	return_ACPI_STATUS(status);
 }
+
+ACPI_EXPORT_SYMBOL(acpi_buffer_to_resource)

 /*******************************************************************************
  *
···
  *              of device resources.
  *
  ******************************************************************************/
-
 acpi_status
 acpi_rs_create_resource_list(union acpi_operand_object *aml_buffer,
-			     struct acpi_buffer * output_buffer)
+			     struct acpi_buffer *output_buffer)
 {

	acpi_status status;
+760
drivers/acpi/acpica/tbdata.c
··· 1 + /****************************************************************************** 2 + * 3 + * Module Name: tbdata - Table manager data structure functions 4 + * 5 + *****************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2014, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + #include "acnamesp.h" 47 + #include "actables.h" 48 + 49 + #define _COMPONENT ACPI_TABLES 50 + ACPI_MODULE_NAME("tbdata") 51 + 52 + /******************************************************************************* 53 + * 54 + * FUNCTION: acpi_tb_init_table_descriptor 55 + * 56 + * PARAMETERS: table_desc - Table descriptor 57 + * address - Physical address of the table 58 + * flags - Allocation flags of the table 59 + * table - Pointer to the table 60 + * 61 + * RETURN: None 62 + * 63 + * DESCRIPTION: Initialize a new table descriptor 64 + * 65 + ******************************************************************************/ 66 + void 67 + acpi_tb_init_table_descriptor(struct acpi_table_desc *table_desc, 68 + acpi_physical_address address, 69 + u8 flags, struct acpi_table_header *table) 70 + { 71 + 72 + /* 73 + * Initialize the table descriptor. Set the pointer to NULL, since the 74 + * table is not fully mapped at this time. 
75 + */ 76 + ACPI_MEMSET(table_desc, 0, sizeof(struct acpi_table_desc)); 77 + table_desc->address = address; 78 + table_desc->length = table->length; 79 + table_desc->flags = flags; 80 + ACPI_MOVE_32_TO_32(table_desc->signature.ascii, table->signature); 81 + } 82 + 83 + /******************************************************************************* 84 + * 85 + * FUNCTION: acpi_tb_acquire_table 86 + * 87 + * PARAMETERS: table_desc - Table descriptor 88 + * table_ptr - Where table is returned 89 + * table_length - Where table length is returned 90 + * table_flags - Where table allocation flags are returned 91 + * 92 + * RETURN: Status 93 + * 94 + * DESCRIPTION: Acquire an ACPI table. It can be used for tables not 95 + * maintained in the acpi_gbl_root_table_list. 96 + * 97 + ******************************************************************************/ 98 + 99 + acpi_status 100 + acpi_tb_acquire_table(struct acpi_table_desc *table_desc, 101 + struct acpi_table_header **table_ptr, 102 + u32 *table_length, u8 *table_flags) 103 + { 104 + struct acpi_table_header *table = NULL; 105 + 106 + switch (table_desc->flags & ACPI_TABLE_ORIGIN_MASK) { 107 + case ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL: 108 + 109 + table = 110 + acpi_os_map_memory(table_desc->address, table_desc->length); 111 + break; 112 + 113 + case ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL: 114 + case ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL: 115 + 116 + table = 117 + ACPI_CAST_PTR(struct acpi_table_header, 118 + table_desc->address); 119 + break; 120 + 121 + default: 122 + 123 + break; 124 + } 125 + 126 + /* Table is not valid yet */ 127 + 128 + if (!table) { 129 + return (AE_NO_MEMORY); 130 + } 131 + 132 + /* Fill the return values */ 133 + 134 + *table_ptr = table; 135 + *table_length = table_desc->length; 136 + *table_flags = table_desc->flags; 137 + return (AE_OK); 138 + } 139 + 140 + /******************************************************************************* 141 + * 142 + * FUNCTION: acpi_tb_release_table 143 + * 
144 + * PARAMETERS: table - Pointer for the table 145 + * table_length - Length for the table 146 + * table_flags - Allocation flags for the table 147 + * 148 + * RETURN: None 149 + * 150 + * DESCRIPTION: Release a table. The inverse of acpi_tb_acquire_table(). 151 + * 152 + ******************************************************************************/ 153 + 154 + void 155 + acpi_tb_release_table(struct acpi_table_header *table, 156 + u32 table_length, u8 table_flags) 157 + { 158 + 159 + switch (table_flags & ACPI_TABLE_ORIGIN_MASK) { 160 + case ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL: 161 + 162 + acpi_os_unmap_memory(table, table_length); 163 + break; 164 + 165 + case ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL: 166 + case ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL: 167 + default: 168 + 169 + break; 170 + } 171 + } 172 + 173 + /******************************************************************************* 174 + * 175 + * FUNCTION: acpi_tb_acquire_temp_table 176 + * 177 + * PARAMETERS: table_desc - Table descriptor to be acquired 178 + * address - Address of the table 179 + * flags - Allocation flags of the table 180 + * 181 + * RETURN: Status 182 + * 183 + * DESCRIPTION: This function validates the table header to obtain the length 184 + * of a table and fills the table descriptor to make its state as 185 + * "INSTALLED". Such a table descriptor is only used for verified 186 + * installation. 
187 + * 188 + ******************************************************************************/ 189 + 190 + acpi_status 191 + acpi_tb_acquire_temp_table(struct acpi_table_desc *table_desc, 192 + acpi_physical_address address, u8 flags) 193 + { 194 + struct acpi_table_header *table_header; 195 + 196 + switch (flags & ACPI_TABLE_ORIGIN_MASK) { 197 + case ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL: 198 + 199 + /* Get the length of the full table from the header */ 200 + 201 + table_header = 202 + acpi_os_map_memory(address, 203 + sizeof(struct acpi_table_header)); 204 + if (!table_header) { 205 + return (AE_NO_MEMORY); 206 + } 207 + 208 + acpi_tb_init_table_descriptor(table_desc, address, flags, 209 + table_header); 210 + acpi_os_unmap_memory(table_header, 211 + sizeof(struct acpi_table_header)); 212 + return (AE_OK); 213 + 214 + case ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL: 215 + case ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL: 216 + 217 + table_header = ACPI_CAST_PTR(struct acpi_table_header, address); 218 + if (!table_header) { 219 + return (AE_NO_MEMORY); 220 + } 221 + 222 + acpi_tb_init_table_descriptor(table_desc, address, flags, 223 + table_header); 224 + return (AE_OK); 225 + 226 + default: 227 + 228 + break; 229 + } 230 + 231 + /* Table is not valid yet */ 232 + 233 + return (AE_NO_MEMORY); 234 + } 235 + 236 + /******************************************************************************* 237 + * 238 + * FUNCTION: acpi_tb_release_temp_table 239 + * 240 + * PARAMETERS: table_desc - Table descriptor to be released 241 + * 242 + * RETURN: Status 243 + * 244 + * DESCRIPTION: The inverse of acpi_tb_acquire_temp_table(). 
245 + * 246 + *****************************************************************************/ 247 + 248 + void acpi_tb_release_temp_table(struct acpi_table_desc *table_desc) 249 + { 250 + 251 + /* 252 + * Note that the .Address is maintained by the callers of 253 + * acpi_tb_acquire_temp_table(), thus do not invoke acpi_tb_uninstall_table() 254 + * where .Address will be freed. 255 + */ 256 + acpi_tb_invalidate_table(table_desc); 257 + } 258 + 259 + /****************************************************************************** 260 + * 261 + * FUNCTION: acpi_tb_validate_table 262 + * 263 + * PARAMETERS: table_desc - Table descriptor 264 + * 265 + * RETURN: Status 266 + * 267 + * DESCRIPTION: This function is called to validate the table, the returned 268 + * table descriptor is in "VALIDATED" state. 269 + * 270 + *****************************************************************************/ 271 + 272 + acpi_status acpi_tb_validate_table(struct acpi_table_desc *table_desc) 273 + { 274 + acpi_status status = AE_OK; 275 + 276 + ACPI_FUNCTION_TRACE(tb_validate_table); 277 + 278 + /* Validate the table if necessary */ 279 + 280 + if (!table_desc->pointer) { 281 + status = acpi_tb_acquire_table(table_desc, &table_desc->pointer, 282 + &table_desc->length, 283 + &table_desc->flags); 284 + if (!table_desc->pointer) { 285 + status = AE_NO_MEMORY; 286 + } 287 + } 288 + 289 + return_ACPI_STATUS(status); 290 + } 291 + 292 + /******************************************************************************* 293 + * 294 + * FUNCTION: acpi_tb_invalidate_table 295 + * 296 + * PARAMETERS: table_desc - Table descriptor 297 + * 298 + * RETURN: None 299 + * 300 + * DESCRIPTION: Invalidate one internal ACPI table, this is the inverse of 301 + * acpi_tb_validate_table(). 
302 + * 303 + ******************************************************************************/ 304 + 305 + void acpi_tb_invalidate_table(struct acpi_table_desc *table_desc) 306 + { 307 + 308 + ACPI_FUNCTION_TRACE(tb_invalidate_table); 309 + 310 + /* Table must be validated */ 311 + 312 + if (!table_desc->pointer) { 313 + return_VOID; 314 + } 315 + 316 + acpi_tb_release_table(table_desc->pointer, table_desc->length, 317 + table_desc->flags); 318 + table_desc->pointer = NULL; 319 + 320 + return_VOID; 321 + } 322 + 323 + /****************************************************************************** 324 + * 325 + * FUNCTION: acpi_tb_validate_temp_table 326 + * 327 + * PARAMETERS: table_desc - Table descriptor 328 + * 329 + * RETURN: Status 330 + * 331 + * DESCRIPTION: This function is called to validate the table; on return, the 332 + * table descriptor is in the "VALIDATED" state. 333 + * 334 + *****************************************************************************/ 335 + 336 + acpi_status acpi_tb_validate_temp_table(struct acpi_table_desc *table_desc) 337 + { 338 + 339 + if (!table_desc->pointer && !acpi_gbl_verify_table_checksum) { 340 + /* 341 + * Only validates the header of the table. 342 + * Note that Length contains the size of the mapping after invoking 343 + * this workaround; this value is required by 344 + * acpi_tb_release_temp_table(). 345 + * We can do this because in acpi_init_table_descriptor(), the Length 346 + * field of the installed descriptor is filled with the actual 347 + * table length obtained from the table header. 
348 + */ 349 + table_desc->length = sizeof(struct acpi_table_header); 350 + } 351 + 352 + return (acpi_tb_validate_table(table_desc)); 353 + } 354 + 355 + /****************************************************************************** 356 + * 357 + * FUNCTION: acpi_tb_verify_temp_table 358 + * 359 + * PARAMETERS: table_desc - Table descriptor 360 + * signature - Table signature to verify 361 + * 362 + * RETURN: Status 363 + * 364 + * DESCRIPTION: This function is called to validate and verify the table, the 365 + * returned table descriptor is in "VALIDATED" state. 366 + * 367 + *****************************************************************************/ 368 + 369 + acpi_status 370 + acpi_tb_verify_temp_table(struct acpi_table_desc * table_desc, char *signature) 371 + { 372 + acpi_status status = AE_OK; 373 + 374 + ACPI_FUNCTION_TRACE(tb_verify_temp_table); 375 + 376 + /* Validate the table */ 377 + 378 + status = acpi_tb_validate_temp_table(table_desc); 379 + if (ACPI_FAILURE(status)) { 380 + return_ACPI_STATUS(AE_NO_MEMORY); 381 + } 382 + 383 + /* If a particular signature is expected (DSDT/FACS), it must match */ 384 + 385 + if (signature && !ACPI_COMPARE_NAME(&table_desc->signature, signature)) { 386 + ACPI_BIOS_ERROR((AE_INFO, 387 + "Invalid signature 0x%X for ACPI table, expected [%s]", 388 + table_desc->signature.integer, signature)); 389 + status = AE_BAD_SIGNATURE; 390 + goto invalidate_and_exit; 391 + } 392 + 393 + /* Verify the checksum */ 394 + 395 + if (acpi_gbl_verify_table_checksum) { 396 + status = 397 + acpi_tb_verify_checksum(table_desc->pointer, 398 + table_desc->length); 399 + if (ACPI_FAILURE(status)) { 400 + ACPI_EXCEPTION((AE_INFO, AE_NO_MEMORY, 401 + "%4.4s " ACPI_PRINTF_UINT 402 + " Attempted table install failed", 403 + acpi_ut_valid_acpi_name(table_desc-> 404 + signature. 405 + ascii) ? 
406 + table_desc->signature.ascii : "????", 407 + ACPI_FORMAT_TO_UINT(table_desc-> 408 + address))); 409 + goto invalidate_and_exit; 410 + } 411 + } 412 + 413 + return_ACPI_STATUS(AE_OK); 414 + 415 + invalidate_and_exit: 416 + acpi_tb_invalidate_table(table_desc); 417 + return_ACPI_STATUS(status); 418 + } 419 + 420 + /******************************************************************************* 421 + * 422 + * FUNCTION: acpi_tb_resize_root_table_list 423 + * 424 + * PARAMETERS: None 425 + * 426 + * RETURN: Status 427 + * 428 + * DESCRIPTION: Expand the size of global table array 429 + * 430 + ******************************************************************************/ 431 + 432 + acpi_status acpi_tb_resize_root_table_list(void) 433 + { 434 + struct acpi_table_desc *tables; 435 + u32 table_count; 436 + 437 + ACPI_FUNCTION_TRACE(tb_resize_root_table_list); 438 + 439 + /* allow_resize flag is a parameter to acpi_initialize_tables */ 440 + 441 + if (!(acpi_gbl_root_table_list.flags & ACPI_ROOT_ALLOW_RESIZE)) { 442 + ACPI_ERROR((AE_INFO, 443 + "Resize of Root Table Array is not allowed")); 444 + return_ACPI_STATUS(AE_SUPPORT); 445 + } 446 + 447 + /* Increase the Table Array size */ 448 + 449 + if (acpi_gbl_root_table_list.flags & ACPI_ROOT_ORIGIN_ALLOCATED) { 450 + table_count = acpi_gbl_root_table_list.max_table_count; 451 + } else { 452 + table_count = acpi_gbl_root_table_list.current_table_count; 453 + } 454 + 455 + tables = ACPI_ALLOCATE_ZEROED(((acpi_size) table_count + 456 + ACPI_ROOT_TABLE_SIZE_INCREMENT) * 457 + sizeof(struct acpi_table_desc)); 458 + if (!tables) { 459 + ACPI_ERROR((AE_INFO, 460 + "Could not allocate new root table array")); 461 + return_ACPI_STATUS(AE_NO_MEMORY); 462 + } 463 + 464 + /* Copy and free the previous table array */ 465 + 466 + if (acpi_gbl_root_table_list.tables) { 467 + ACPI_MEMCPY(tables, acpi_gbl_root_table_list.tables, 468 + (acpi_size) table_count * 469 + sizeof(struct acpi_table_desc)); 470 + 471 + if 
(acpi_gbl_root_table_list.flags & ACPI_ROOT_ORIGIN_ALLOCATED) { 472 + ACPI_FREE(acpi_gbl_root_table_list.tables); 473 + } 474 + } 475 + 476 + acpi_gbl_root_table_list.tables = tables; 477 + acpi_gbl_root_table_list.max_table_count = 478 + table_count + ACPI_ROOT_TABLE_SIZE_INCREMENT; 479 + acpi_gbl_root_table_list.flags |= ACPI_ROOT_ORIGIN_ALLOCATED; 480 + 481 + return_ACPI_STATUS(AE_OK); 482 + } 483 + 484 + /******************************************************************************* 485 + * 486 + * FUNCTION: acpi_tb_get_next_root_index 487 + * 488 + * PARAMETERS: table_index - Where table index is returned 489 + * 490 + * RETURN: Status and table index. 491 + * 492 + * DESCRIPTION: Allocate a new ACPI table entry to the global table list 493 + * 494 + ******************************************************************************/ 495 + 496 + acpi_status acpi_tb_get_next_root_index(u32 *table_index) 497 + { 498 + acpi_status status; 499 + 500 + /* Ensure that there is room for the table in the Root Table List */ 501 + 502 + if (acpi_gbl_root_table_list.current_table_count >= 503 + acpi_gbl_root_table_list.max_table_count) { 504 + status = acpi_tb_resize_root_table_list(); 505 + if (ACPI_FAILURE(status)) { 506 + return (status); 507 + } 508 + } 509 + 510 + *table_index = acpi_gbl_root_table_list.current_table_count; 511 + acpi_gbl_root_table_list.current_table_count++; 512 + return (AE_OK); 513 + } 514 + 515 + /******************************************************************************* 516 + * 517 + * FUNCTION: acpi_tb_terminate 518 + * 519 + * PARAMETERS: None 520 + * 521 + * RETURN: None 522 + * 523 + * DESCRIPTION: Delete all internal ACPI tables 524 + * 525 + ******************************************************************************/ 526 + 527 + void acpi_tb_terminate(void) 528 + { 529 + u32 i; 530 + 531 + ACPI_FUNCTION_TRACE(tb_terminate); 532 + 533 + (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 534 + 535 + /* Delete the individual tables */ 536 + 
537 + for (i = 0; i < acpi_gbl_root_table_list.current_table_count; i++) { 538 + acpi_tb_uninstall_table(&acpi_gbl_root_table_list.tables[i]); 539 + } 540 + 541 + /* 542 + * Delete the root table array if allocated locally. Array cannot be 543 + * mapped, so we don't need to check for that flag. 544 + */ 545 + if (acpi_gbl_root_table_list.flags & ACPI_ROOT_ORIGIN_ALLOCATED) { 546 + ACPI_FREE(acpi_gbl_root_table_list.tables); 547 + } 548 + 549 + acpi_gbl_root_table_list.tables = NULL; 550 + acpi_gbl_root_table_list.flags = 0; 551 + acpi_gbl_root_table_list.current_table_count = 0; 552 + 553 + ACPI_DEBUG_PRINT((ACPI_DB_INFO, "ACPI Tables freed\n")); 554 + 555 + (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 556 + return_VOID; 557 + } 558 + 559 + /******************************************************************************* 560 + * 561 + * FUNCTION: acpi_tb_delete_namespace_by_owner 562 + * 563 + * PARAMETERS: table_index - Table index 564 + * 565 + * RETURN: Status 566 + * 567 + * DESCRIPTION: Delete all namespace objects created when this table was loaded. 
568 + * 569 + ******************************************************************************/ 570 + 571 + acpi_status acpi_tb_delete_namespace_by_owner(u32 table_index) 572 + { 573 + acpi_owner_id owner_id; 574 + acpi_status status; 575 + 576 + ACPI_FUNCTION_TRACE(tb_delete_namespace_by_owner); 577 + 578 + status = acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 579 + if (ACPI_FAILURE(status)) { 580 + return_ACPI_STATUS(status); 581 + } 582 + 583 + if (table_index >= acpi_gbl_root_table_list.current_table_count) { 584 + 585 + /* The table index does not exist */ 586 + 587 + (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 588 + return_ACPI_STATUS(AE_NOT_EXIST); 589 + } 590 + 591 + /* Get the owner ID for this table, used to delete namespace nodes */ 592 + 593 + owner_id = acpi_gbl_root_table_list.tables[table_index].owner_id; 594 + (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 595 + 596 + /* 597 + * Need to acquire the namespace writer lock to prevent interference 598 + * with any concurrent namespace walks. The interpreter must be 599 + * released during the deletion since the acquisition of the deletion 600 + * lock may block, and also since the execution of a namespace walk 601 + * must be allowed to use the interpreter. 
602 + */ 603 + (void)acpi_ut_release_mutex(ACPI_MTX_INTERPRETER); 604 + status = acpi_ut_acquire_write_lock(&acpi_gbl_namespace_rw_lock); 605 + 606 + acpi_ns_delete_namespace_by_owner(owner_id); 607 + if (ACPI_FAILURE(status)) { 608 + return_ACPI_STATUS(status); 609 + } 610 + 611 + acpi_ut_release_write_lock(&acpi_gbl_namespace_rw_lock); 612 + 613 + status = acpi_ut_acquire_mutex(ACPI_MTX_INTERPRETER); 614 + return_ACPI_STATUS(status); 615 + } 616 + 617 + /******************************************************************************* 618 + * 619 + * FUNCTION: acpi_tb_allocate_owner_id 620 + * 621 + * PARAMETERS: table_index - Table index 622 + * 623 + * RETURN: Status 624 + * 625 + * DESCRIPTION: Allocates owner_id in table_desc 626 + * 627 + ******************************************************************************/ 628 + 629 + acpi_status acpi_tb_allocate_owner_id(u32 table_index) 630 + { 631 + acpi_status status = AE_BAD_PARAMETER; 632 + 633 + ACPI_FUNCTION_TRACE(tb_allocate_owner_id); 634 + 635 + (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 636 + if (table_index < acpi_gbl_root_table_list.current_table_count) { 637 + status = 638 + acpi_ut_allocate_owner_id(& 639 + (acpi_gbl_root_table_list. 
640 + tables[table_index].owner_id)); 641 + } 642 + 643 + (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 644 + return_ACPI_STATUS(status); 645 + } 646 + 647 + /******************************************************************************* 648 + * 649 + * FUNCTION: acpi_tb_release_owner_id 650 + * 651 + * PARAMETERS: table_index - Table index 652 + * 653 + * RETURN: Status 654 + * 655 + * DESCRIPTION: Releases owner_id in table_desc 656 + * 657 + ******************************************************************************/ 658 + 659 + acpi_status acpi_tb_release_owner_id(u32 table_index) 660 + { 661 + acpi_status status = AE_BAD_PARAMETER; 662 + 663 + ACPI_FUNCTION_TRACE(tb_release_owner_id); 664 + 665 + (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 666 + if (table_index < acpi_gbl_root_table_list.current_table_count) { 667 + acpi_ut_release_owner_id(& 668 + (acpi_gbl_root_table_list. 669 + tables[table_index].owner_id)); 670 + status = AE_OK; 671 + } 672 + 673 + (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 674 + return_ACPI_STATUS(status); 675 + } 676 + 677 + /******************************************************************************* 678 + * 679 + * FUNCTION: acpi_tb_get_owner_id 680 + * 681 + * PARAMETERS: table_index - Table index 682 + * owner_id - Where the table owner_id is returned 683 + * 684 + * RETURN: Status 685 + * 686 + * DESCRIPTION: returns owner_id for the ACPI table 687 + * 688 + ******************************************************************************/ 689 + 690 + acpi_status acpi_tb_get_owner_id(u32 table_index, acpi_owner_id * owner_id) 691 + { 692 + acpi_status status = AE_BAD_PARAMETER; 693 + 694 + ACPI_FUNCTION_TRACE(tb_get_owner_id); 695 + 696 + (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 697 + if (table_index < acpi_gbl_root_table_list.current_table_count) { 698 + *owner_id = 699 + acpi_gbl_root_table_list.tables[table_index].owner_id; 700 + status = AE_OK; 701 + } 702 + 703 + (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 704 
+ return_ACPI_STATUS(status); 705 + } 706 + 707 + /******************************************************************************* 708 + * 709 + * FUNCTION: acpi_tb_is_table_loaded 710 + * 711 + * PARAMETERS: table_index - Index into the root table 712 + * 713 + * RETURN: Table Loaded Flag 714 + * 715 + ******************************************************************************/ 716 + 717 + u8 acpi_tb_is_table_loaded(u32 table_index) 718 + { 719 + u8 is_loaded = FALSE; 720 + 721 + (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 722 + if (table_index < acpi_gbl_root_table_list.current_table_count) { 723 + is_loaded = (u8) 724 + (acpi_gbl_root_table_list.tables[table_index].flags & 725 + ACPI_TABLE_IS_LOADED); 726 + } 727 + 728 + (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 729 + return (is_loaded); 730 + } 731 + 732 + /******************************************************************************* 733 + * 734 + * FUNCTION: acpi_tb_set_table_loaded_flag 735 + * 736 + * PARAMETERS: table_index - Table index 737 + * is_loaded - TRUE if table is loaded, FALSE otherwise 738 + * 739 + * RETURN: None 740 + * 741 + * DESCRIPTION: Sets the table loaded flag to either TRUE or FALSE. 742 + * 743 + ******************************************************************************/ 744 + 745 + void acpi_tb_set_table_loaded_flag(u32 table_index, u8 is_loaded) 746 + { 747 + 748 + (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 749 + if (table_index < acpi_gbl_root_table_list.current_table_count) { 750 + if (is_loaded) { 751 + acpi_gbl_root_table_list.tables[table_index].flags |= 752 + ACPI_TABLE_IS_LOADED; 753 + } else { 754 + acpi_gbl_root_table_list.tables[table_index].flags &= 755 + ~ACPI_TABLE_IS_LOADED; 756 + } 757 + } 758 + 759 + (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 760 + }
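The tbutils.c hunk above ends with acpi_tb_resize_root_table_list(), which grows the root table array by a fixed increment, copies the old entries across, and frees the old array only if it was itself heap-allocated (the ACPI_ROOT_ORIGIN_ALLOCATED flag distinguishes the initial static array from later heap ones). A minimal sketch of that pattern, using hypothetical stand-in names (`table_list`, `resize_table_list`, `INCREMENT`) rather than the real ACPICA types:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for the ACPICA types and constants. */
#define INCREMENT 4
#define ORIGIN_ALLOCATED 0x1

struct table_desc { int dummy; };

struct table_list {
	struct table_desc *tables;
	unsigned max_count;
	unsigned current_count;
	unsigned flags;
};

/*
 * Grow the array by a fixed increment, mirroring the shape of
 * acpi_tb_resize_root_table_list(): size the copy from max_count only
 * if the current array is heap-allocated, free the old array only in
 * that case, then mark the new array as heap-allocated.
 */
static int resize_table_list(struct table_list *list)
{
	unsigned count = (list->flags & ORIGIN_ALLOCATED) ?
	    list->max_count : list->current_count;
	struct table_desc *tables =
	    calloc(count + INCREMENT, sizeof(struct table_desc));

	if (!tables)
		return -1;

	if (list->tables) {
		memcpy(tables, list->tables,
		       count * sizeof(struct table_desc));
		if (list->flags & ORIGIN_ALLOCATED)
			free(list->tables);
	}

	list->tables = tables;
	list->max_count = count + INCREMENT;
	list->flags |= ORIGIN_ALLOCATED;
	return 0;
}
```

The flag check keeps the code from calling free() on a statically allocated initial array, which is why the first resize copies only current_count entries while later resizes copy the full max_count.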
+38 -23
drivers/acpi/acpica/tbfadt.c
··· 52 52 static void 53 53 acpi_tb_init_generic_address(struct acpi_generic_address *generic_address, 54 54 u8 space_id, 55 - u8 byte_width, u64 address, char *register_name); 55 + u8 byte_width, 56 + u64 address, char *register_name, u8 flags); 56 57 57 58 static void acpi_tb_convert_fadt(void); 58 59 ··· 70 69 u16 address32; 71 70 u16 length; 72 71 u8 default_length; 73 - u8 type; 72 + u8 flags; 74 73 75 74 } acpi_fadt_info; 76 75 77 76 #define ACPI_FADT_OPTIONAL 0 78 77 #define ACPI_FADT_REQUIRED 1 79 78 #define ACPI_FADT_SEPARATE_LENGTH 2 79 + #define ACPI_FADT_GPE_REGISTER 4 80 80 81 81 static struct acpi_fadt_info fadt_info_table[] = { 82 82 {"Pm1aEventBlock", ··· 127 125 ACPI_FADT_OFFSET(gpe0_block), 128 126 ACPI_FADT_OFFSET(gpe0_block_length), 129 127 0, 130 - ACPI_FADT_SEPARATE_LENGTH}, 128 + ACPI_FADT_SEPARATE_LENGTH | ACPI_FADT_GPE_REGISTER}, 131 129 132 130 {"Gpe1Block", 133 131 ACPI_FADT_OFFSET(xgpe1_block), 134 132 ACPI_FADT_OFFSET(gpe1_block), 135 133 ACPI_FADT_OFFSET(gpe1_block_length), 136 134 0, 137 - ACPI_FADT_SEPARATE_LENGTH} 135 + ACPI_FADT_SEPARATE_LENGTH | ACPI_FADT_GPE_REGISTER} 138 136 }; 139 137 140 138 #define ACPI_FADT_INFO_ENTRIES \ ··· 191 189 static void 192 190 acpi_tb_init_generic_address(struct acpi_generic_address *generic_address, 193 191 u8 space_id, 194 - u8 byte_width, u64 address, char *register_name) 192 + u8 byte_width, 193 + u64 address, char *register_name, u8 flags) 195 194 { 196 195 u8 bit_width; 197 196 198 - /* Bit width field in the GAS is only one byte long, 255 max */ 199 - 197 + /* 198 + * Bit width field in the GAS is only one byte long, 255 max. 199 + * Check for bit_width overflow in GAS. 
200 + */ 200 201 bit_width = (u8)(byte_width * 8); 201 - 202 - if (byte_width > 31) { /* (31*8)=248 */ 203 - ACPI_ERROR((AE_INFO, 204 - "%s - 32-bit FADT register is too long (%u bytes, %u bits) " 205 - "to convert to GAS struct - 255 bits max, truncating", 206 - register_name, byte_width, (byte_width * 8))); 202 + if (byte_width > 31) { /* (31*8)=248, (32*8)=256 */ 203 + /* 204 + * No error for GPE blocks, because we do not use the bit_width 205 + * for GPEs, the legacy length (byte_width) is used instead to 206 + * allow for a large number of GPEs. 207 + */ 208 + if (!(flags & ACPI_FADT_GPE_REGISTER)) { 209 + ACPI_ERROR((AE_INFO, 210 + "%s - 32-bit FADT register is too long (%u bytes, %u bits) " 211 + "to convert to GAS struct - 255 bits max, truncating", 212 + register_name, byte_width, 213 + (byte_width * 8))); 214 + } 207 215 208 216 bit_width = 255; 209 217 } ··· 344 332 345 333 /* Obtain the DSDT and FACS tables via their addresses within the FADT */ 346 334 347 - acpi_tb_install_table((acpi_physical_address) acpi_gbl_FADT.Xdsdt, 348 - ACPI_SIG_DSDT, ACPI_TABLE_INDEX_DSDT); 335 + acpi_tb_install_fixed_table((acpi_physical_address) acpi_gbl_FADT.Xdsdt, 336 + ACPI_SIG_DSDT, ACPI_TABLE_INDEX_DSDT); 349 337 350 338 /* If Hardware Reduced flag is set, there is no FACS */ 351 339 352 340 if (!acpi_gbl_reduced_hardware) { 353 - acpi_tb_install_table((acpi_physical_address) acpi_gbl_FADT. 
354 - Xfacs, ACPI_SIG_FACS, 355 - ACPI_TABLE_INDEX_FACS); 341 + acpi_tb_install_fixed_table((acpi_physical_address) 342 + acpi_gbl_FADT.Xfacs, ACPI_SIG_FACS, 343 + ACPI_TABLE_INDEX_FACS); 356 344 } 357 345 } 358 346 ··· 462 450 struct acpi_generic_address *address64; 463 451 u32 address32; 464 452 u8 length; 453 + u8 flags; 465 454 u32 i; 466 455 467 456 /* ··· 528 515 fadt_info_table[i].length); 529 516 530 517 name = fadt_info_table[i].name; 518 + flags = fadt_info_table[i].flags; 531 519 532 520 /* 533 521 * Expand the ACPI 1.0 32-bit addresses to the ACPI 2.0 64-bit "X" ··· 568 554 [i]. 569 555 length), 570 556 (u64)address32, 571 - name); 557 + name, flags); 572 558 } else if (address64->address != (u64)address32) { 573 559 574 560 /* Address mismatch */ ··· 596 582 length), 597 583 (u64) 598 584 address32, 599 - name); 585 + name, 586 + flags); 600 587 } 601 588 } 602 589 } ··· 618 603 address64->bit_width)); 619 604 } 620 605 621 - if (fadt_info_table[i].type & ACPI_FADT_REQUIRED) { 606 + if (fadt_info_table[i].flags & ACPI_FADT_REQUIRED) { 622 607 /* 623 608 * Field is required (Pm1a_event, Pm1a_control). 624 609 * Both the address and length must be non-zero. ··· 632 617 address), 633 618 length)); 634 619 } 635 - } else if (fadt_info_table[i].type & ACPI_FADT_SEPARATE_LENGTH) { 620 + } else if (fadt_info_table[i].flags & ACPI_FADT_SEPARATE_LENGTH) { 636 621 /* 637 622 * Field is optional (Pm2_control, GPE0, GPE1) AND has its own 638 623 * length field. If present, both the address and length must ··· 741 726 (fadt_pm_info_table[i]. 742 727 register_num * 743 728 pm1_register_byte_width), 744 - "PmRegisters"); 729 + "PmRegisters", 0); 745 730 } 746 731 } 747 732 }
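The tbfadt.c hunk above hinges on a small piece of arithmetic: the GAS BitWidth field is a single byte, so 31 bytes (248 bits) is the widest register it can describe, while 32 bytes (256 bits) overflows a u8 and is clamped to 255. The new ACPI_FADT_GPE_REGISTER flag only suppresses the error message for GPE blocks, whose length is taken from byte_width instead. A sketch of just the truncation, with a hypothetical helper name:

```c
#include <assert.h>
#include <stdint.h>

/*
 * gas_bit_width() is a hypothetical stand-in for the width computation in
 * acpi_tb_init_generic_address(): (31*8)=248 still fits in the one-byte
 * GAS BitWidth field, (32*8)=256 does not, so wider registers are
 * clamped to 255.
 */
static uint8_t gas_bit_width(uint8_t byte_width)
{
	if (byte_width > 31)
		return 255;	/* truncated, as the FADT conversion does */
	return (uint8_t)(byte_width * 8);
}
```

This is why the diff keys the warning on `byte_width > 31` rather than on the computed bit width, which has already wrapped by the time it could be tested.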
+2 -2
drivers/acpi/acpica/tbfind.c
··· 99 99 /* Table is not currently mapped, map it */ 100 100 101 101 status = 102 - acpi_tb_verify_table(&acpi_gbl_root_table_list. 103 - tables[i]); 102 + acpi_tb_validate_table(&acpi_gbl_root_table_list. 103 + tables[i]); 104 104 if (ACPI_FAILURE(status)) { 105 105 return_ACPI_STATUS(status); 106 106 }
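The tbfind.c change is just the rename from acpi_tb_verify_table() to acpi_tb_validate_table(), but it shows the validate-on-first-use pattern: a descriptor carries only an address until a caller needs the contents, at which point the mapping is established and cached in .pointer. A toy version of that pattern, with simplified stand-in types (the real code maps physical memory; here a plain pointer stands in for the mapping):

```c
#include <assert.h>
#include <stddef.h>

struct header { char signature[4]; };

/* Hypothetical miniature of an ACPICA table descriptor. */
struct desc {
	struct header *pointer;	/* NULL until validated */
	struct header *address;	/* stand-in for a physical address */
};

/* Map the table lazily; repeated calls are cheap no-ops. */
static int validate(struct desc *d)
{
	if (!d->pointer)
		d->pointer = d->address; /* a real build would map memory here */
	return d->pointer ? 0 : -1;
}
```

Because validation is idempotent, loops such as the one in tbfind.c can call it unconditionally on every candidate table before touching the header.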
+358 -563
drivers/acpi/acpica/tbinstal.c
··· 43 43 44 44 #include <acpi/acpi.h> 45 45 #include "accommon.h" 46 - #include "acnamesp.h" 47 46 #include "actables.h" 48 47 49 48 #define _COMPONENT ACPI_TABLES 50 49 ACPI_MODULE_NAME("tbinstal") 51 50 52 - /****************************************************************************** 51 + /* Local prototypes */ 52 + static u8 53 + acpi_tb_compare_tables(struct acpi_table_desc *table_desc, u32 table_index); 54 + 55 + /******************************************************************************* 53 56 * 54 - * FUNCTION: acpi_tb_verify_table 57 + * FUNCTION: acpi_tb_compare_tables 55 58 * 56 - * PARAMETERS: table_desc - table 59 + * PARAMETERS: table_desc - Table 1 descriptor to be compared 60 + * table_index - Index of table 2 to be compared 57 61 * 58 - * RETURN: Status 62 + * RETURN: TRUE if both tables are identical. 59 63 * 60 - * DESCRIPTION: this function is called to verify and map table 64 + * DESCRIPTION: This function compares a table with another table that has 65 + * already been installed in the root table list. 
61 66 * 62 - *****************************************************************************/ 63 - acpi_status acpi_tb_verify_table(struct acpi_table_desc *table_desc) 67 + ******************************************************************************/ 68 + 69 + static u8 70 + acpi_tb_compare_tables(struct acpi_table_desc *table_desc, u32 table_index) 64 71 { 65 72 acpi_status status = AE_OK; 66 - 67 - ACPI_FUNCTION_TRACE(tb_verify_table); 68 - 69 - /* Map the table if necessary */ 70 - 71 - if (!table_desc->pointer) { 72 - if ((table_desc->flags & ACPI_TABLE_ORIGIN_MASK) == 73 - ACPI_TABLE_ORIGIN_MAPPED) { 74 - table_desc->pointer = 75 - acpi_os_map_memory(table_desc->address, 76 - table_desc->length); 77 - } 78 - if (!table_desc->pointer) { 79 - return_ACPI_STATUS(AE_NO_MEMORY); 80 - } 81 - } 82 - 83 - /* Always calculate checksum, ignore bad checksum if requested */ 73 + u8 is_identical; 74 + struct acpi_table_header *table; 75 + u32 table_length; 76 + u8 table_flags; 84 77 85 78 status = 86 - acpi_tb_verify_checksum(table_desc->pointer, table_desc->length); 79 + acpi_tb_acquire_table(&acpi_gbl_root_table_list.tables[table_index], 80 + &table, &table_length, &table_flags); 81 + if (ACPI_FAILURE(status)) { 82 + return (FALSE); 83 + } 87 84 88 - return_ACPI_STATUS(status); 85 + /* 86 + * Check for a table match on the entire table length, 87 + * not just the header. 88 + */ 89 + is_identical = (u8)((table_desc->length != table_length || 90 + ACPI_MEMCMP(table_desc->pointer, table, 91 + table_length)) ? 
FALSE : TRUE); 92 + 93 + /* Release the acquired table */ 94 + 95 + acpi_tb_release_table(table, table_length, table_flags); 96 + return (is_identical); 89 97 } 90 98 91 99 /******************************************************************************* 92 100 * 93 - * FUNCTION: acpi_tb_add_table 101 + * FUNCTION: acpi_tb_install_table_with_override 94 102 * 95 - * PARAMETERS: table_desc - Table descriptor 96 - * table_index - Where the table index is returned 103 + * PARAMETERS: table_index - Index into root table array 104 + * new_table_desc - New table descriptor to install 105 + * override - Whether override should be performed 97 106 * 98 - * RETURN: Status 107 + * RETURN: None 99 108 * 100 - * DESCRIPTION: This function is called to add an ACPI table. It is used to 101 - * dynamically load tables via the Load and load_table AML 102 - * operators. 109 + * DESCRIPTION: Install an ACPI table into the global data structure. The 110 + * table override mechanism is called to allow the host 111 + * OS to replace any table before it is installed in the root 112 + * table array. 103 113 * 104 114 ******************************************************************************/ 105 115 106 - acpi_status 107 - acpi_tb_add_table(struct acpi_table_desc *table_desc, u32 *table_index) 116 + void 117 + acpi_tb_install_table_with_override(u32 table_index, 118 + struct acpi_table_desc *new_table_desc, 119 + u8 override) 108 120 { 109 - u32 i; 110 - acpi_status status = AE_OK; 111 121 112 - ACPI_FUNCTION_TRACE(tb_add_table); 113 - 114 - if (!table_desc->pointer) { 115 - status = acpi_tb_verify_table(table_desc); 116 - if (ACPI_FAILURE(status) || !table_desc->pointer) { 117 - return_ACPI_STATUS(status); 118 - } 119 - } 120 - 121 - /* 122 - * Validate the incoming table signature. 123 - * 124 - * 1) Originally, we checked the table signature for "SSDT" or "PSDT". 125 - * 2) We added support for OEMx tables, signature "OEM". 
126 - * 3) Valid tables were encountered with a null signature, so we just 127 - * gave up on validating the signature, (05/2008). 128 - * 4) We encountered non-AML tables such as the MADT, which caused 129 - * interpreter errors and kernel faults. So now, we once again allow 130 - * only "SSDT", "OEMx", and now, also a null signature. (05/2011). 131 - */ 132 - if ((table_desc->pointer->signature[0] != 0x00) && 133 - (!ACPI_COMPARE_NAME(table_desc->pointer->signature, ACPI_SIG_SSDT)) 134 - && (ACPI_STRNCMP(table_desc->pointer->signature, "OEM", 3))) { 135 - ACPI_BIOS_ERROR((AE_INFO, 136 - "Table has invalid signature [%4.4s] (0x%8.8X), " 137 - "must be SSDT or OEMx", 138 - acpi_ut_valid_acpi_name(table_desc->pointer-> 139 - signature) ? 140 - table_desc->pointer->signature : "????", 141 - *(u32 *)table_desc->pointer->signature)); 142 - 143 - return_ACPI_STATUS(AE_BAD_SIGNATURE); 144 - } 145 - 146 - (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 147 - 148 - /* Check if table is already registered */ 149 - 150 - for (i = 0; i < acpi_gbl_root_table_list.current_table_count; ++i) { 151 - if (!acpi_gbl_root_table_list.tables[i].pointer) { 152 - status = 153 - acpi_tb_verify_table(&acpi_gbl_root_table_list. 154 - tables[i]); 155 - if (ACPI_FAILURE(status) 156 - || !acpi_gbl_root_table_list.tables[i].pointer) { 157 - continue; 158 - } 159 - } 160 - 161 - /* 162 - * Check for a table match on the entire table length, 163 - * not just the header. 164 - */ 165 - if (table_desc->length != 166 - acpi_gbl_root_table_list.tables[i].length) { 167 - continue; 168 - } 169 - 170 - if (ACPI_MEMCMP(table_desc->pointer, 171 - acpi_gbl_root_table_list.tables[i].pointer, 172 - acpi_gbl_root_table_list.tables[i].length)) { 173 - continue; 174 - } 175 - 176 - /* 177 - * Note: the current mechanism does not unregister a table if it is 178 - * dynamically unloaded. The related namespace entries are deleted, 179 - * but the table remains in the root table list. 
180 - * 181 - * The assumption here is that the number of different tables that 182 - * will be loaded is actually small, and there is minimal overhead 183 - * in just keeping the table in case it is needed again. 184 - * 185 - * If this assumption changes in the future (perhaps on large 186 - * machines with many table load/unload operations), tables will 187 - * need to be unregistered when they are unloaded, and slots in the 188 - * root table list should be reused when empty. 189 - */ 190 - 191 - /* 192 - * Table is already registered. 193 - * We can delete the table that was passed as a parameter. 194 - */ 195 - acpi_tb_delete_table(table_desc); 196 - *table_index = i; 197 - 198 - if (acpi_gbl_root_table_list.tables[i]. 199 - flags & ACPI_TABLE_IS_LOADED) { 200 - 201 - /* Table is still loaded, this is an error */ 202 - 203 - status = AE_ALREADY_EXISTS; 204 - goto release; 205 - } else { 206 - /* Table was unloaded, allow it to be reloaded */ 207 - 208 - table_desc->pointer = 209 - acpi_gbl_root_table_list.tables[i].pointer; 210 - table_desc->address = 211 - acpi_gbl_root_table_list.tables[i].address; 212 - status = AE_OK; 213 - goto print_header; 214 - } 122 + if (table_index >= acpi_gbl_root_table_list.current_table_count) { 123 + return; 215 124 } 216 125 217 126 /* 218 127 * ACPI Table Override: 219 - * Allow the host to override dynamically loaded tables. 220 - * NOTE: the table is fully mapped at this point, and the mapping will 221 - * be deleted by tb_table_override if the table is actually overridden. 128 + * 129 + * Before we install the table, let the host OS override it with a new 130 + * one if desired. Any table within the RSDT/XSDT can be replaced, 131 + * including the DSDT which is pointed to by the FADT. 
222 132 */ 223 - (void)acpi_tb_table_override(table_desc->pointer, table_desc); 224 - 225 - /* Add the table to the global root table list */ 226 - 227 - status = acpi_tb_store_table(table_desc->address, table_desc->pointer, 228 - table_desc->length, table_desc->flags, 229 - table_index); 230 - if (ACPI_FAILURE(status)) { 231 - goto release; 133 + if (override) { 134 + acpi_tb_override_table(new_table_desc); 232 135 } 233 136 234 - print_header: 235 - acpi_tb_print_table_header(table_desc->address, table_desc->pointer); 137 + acpi_tb_init_table_descriptor(&acpi_gbl_root_table_list. 138 + tables[table_index], 139 + new_table_desc->address, 140 + new_table_desc->flags, 141 + new_table_desc->pointer); 236 142 237 - release: 238 - (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 143 + acpi_tb_print_table_header(new_table_desc->address, 144 + new_table_desc->pointer); 145 + 146 + /* Set the global integer width (based upon revision of the DSDT) */ 147 + 148 + if (table_index == ACPI_TABLE_INDEX_DSDT) { 149 + acpi_ut_set_integer_width(new_table_desc->pointer->revision); 150 + } 151 + } 152 + 153 + /******************************************************************************* 154 + * 155 + * FUNCTION: acpi_tb_install_fixed_table 156 + * 157 + * PARAMETERS: address - Physical address of DSDT or FACS 158 + * signature - Table signature, NULL if no need to 159 + * match 160 + * table_index - Index into root table array 161 + * 162 + * RETURN: Status 163 + * 164 + * DESCRIPTION: Install a fixed ACPI table (DSDT/FACS) into the global data 165 + * structure. 
166 + * 167 + ******************************************************************************/ 168 + 169 + acpi_status 170 + acpi_tb_install_fixed_table(acpi_physical_address address, 171 + char *signature, u32 table_index) 172 + { 173 + struct acpi_table_desc new_table_desc; 174 + acpi_status status; 175 + 176 + ACPI_FUNCTION_TRACE(tb_install_fixed_table); 177 + 178 + if (!address) { 179 + ACPI_ERROR((AE_INFO, 180 + "Null physical address for ACPI table [%s]", 181 + signature)); 182 + return (AE_NO_MEMORY); 183 + } 184 + 185 + /* Fill a table descriptor for validation */ 186 + 187 + status = acpi_tb_acquire_temp_table(&new_table_desc, address, 188 + ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL); 189 + if (ACPI_FAILURE(status)) { 190 + ACPI_ERROR((AE_INFO, "Could not acquire table length at %p", 191 + ACPI_CAST_PTR(void, address))); 192 + return_ACPI_STATUS(status); 193 + } 194 + 195 + /* Validate and verify a table before installation */ 196 + 197 + status = acpi_tb_verify_temp_table(&new_table_desc, signature); 198 + if (ACPI_FAILURE(status)) { 199 + goto release_and_exit; 200 + } 201 + 202 + acpi_tb_install_table_with_override(table_index, &new_table_desc, TRUE); 203 + 204 + release_and_exit: 205 + 206 + /* Release the temporary table descriptor */ 207 + 208 + acpi_tb_release_temp_table(&new_table_desc); 239 209 return_ACPI_STATUS(status); 240 210 } 241 211 242 212 /******************************************************************************* 243 213 * 244 - * FUNCTION: acpi_tb_table_override 214 + * FUNCTION: acpi_tb_install_standard_table 245 215 * 246 - * PARAMETERS: table_header - Header for the original table 247 - * table_desc - Table descriptor initialized for the 248 - * original table. May or may not be mapped. 
216 + * PARAMETERS: address - Address of the table (might be a virtual 217 + * address depending on the table_flags) 218 + * flags - Flags for the table 219 + * reload - Whether reload should be performed 220 + * override - Whether override should be performed 221 + * table_index - Where the table index is returned 249 222 * 250 223 * RETURN: Status 224 * 254 - * DESCRIPTION: Attempt table override by calling the OSL override functions. 255 - * Note: If the table is overridden, then the entire new table 256 - * is mapped and returned by this function. 225 + * DESCRIPTION: This function is called to install an ACPI table that is 226 + * neither DSDT nor FACS (a "standard" table.) 227 + * When this function is called by "Load" or "LoadTable" opcodes, 228 + * or by acpi_load_table() API, the "Reload" parameter is set. 229 + * After successfully returning from this function, the table is 230 + * "INSTALLED" but not "VALIDATED". 
257 231 * 258 232 ******************************************************************************/ 259 233 260 - struct acpi_table_header *acpi_tb_table_override(struct acpi_table_header 261 - *table_header, 262 - struct acpi_table_desc 263 - *table_desc) 234 + acpi_status 235 + acpi_tb_install_standard_table(acpi_physical_address address, 236 + u8 flags, 237 + u8 reload, u8 override, u32 *table_index) 238 + { 239 + u32 i; 240 + acpi_status status = AE_OK; 241 + struct acpi_table_desc new_table_desc; 242 + 243 + ACPI_FUNCTION_TRACE(tb_install_standard_table); 244 + 245 + /* Acquire a temporary table descriptor for validation */ 246 + 247 + status = acpi_tb_acquire_temp_table(&new_table_desc, address, flags); 248 + if (ACPI_FAILURE(status)) { 249 + ACPI_ERROR((AE_INFO, "Could not acquire table length at %p", 250 + ACPI_CAST_PTR(void, address))); 251 + return_ACPI_STATUS(status); 252 + } 253 + 254 + /* 255 + * Optionally do not load any SSDTs from the RSDT/XSDT. This can 256 + * be useful for debugging ACPI problems on some machines. 257 + */ 258 + if (!reload && 259 + acpi_gbl_disable_ssdt_table_install && 260 + ACPI_COMPARE_NAME(&new_table_desc.signature, ACPI_SIG_SSDT)) { 261 + ACPI_INFO((AE_INFO, "Ignoring installation of %4.4s at %p", 262 + new_table_desc.signature.ascii, ACPI_CAST_PTR(void, 263 + address))); 264 + goto release_and_exit; 265 + } 266 + 267 + /* Validate and verify a table before installation */ 268 + 269 + status = acpi_tb_verify_temp_table(&new_table_desc, NULL); 270 + if (ACPI_FAILURE(status)) { 271 + goto release_and_exit; 272 + } 273 + 274 + if (reload) { 275 + /* 276 + * Validate the incoming table signature. 277 + * 278 + * 1) Originally, we checked the table signature for "SSDT" or "PSDT". 279 + * 2) We added support for OEMx tables, signature "OEM". 280 + * 3) Valid tables were encountered with a null signature, so we just 281 + * gave up on validating the signature, (05/2008). 
282 + * 4) We encountered non-AML tables such as the MADT, which caused 283 + * interpreter errors and kernel faults. So now, we once again allow 284 + * only "SSDT", "OEMx", and now, also a null signature. (05/2011). 285 + */ 286 + if ((new_table_desc.signature.ascii[0] != 0x00) && 287 + (!ACPI_COMPARE_NAME 288 + (&new_table_desc.signature, ACPI_SIG_SSDT)) 289 + && (ACPI_STRNCMP(new_table_desc.signature.ascii, "OEM", 3))) 290 + { 291 + ACPI_BIOS_ERROR((AE_INFO, 292 + "Table has invalid signature [%4.4s] (0x%8.8X), " 293 + "must be SSDT or OEMx", 294 + acpi_ut_valid_acpi_name(new_table_desc. 295 + signature. 296 + ascii) ? 297 + new_table_desc.signature. 298 + ascii : "????", 299 + new_table_desc.signature.integer)); 300 + 301 + status = AE_BAD_SIGNATURE; 302 + goto release_and_exit; 303 + } 304 + 305 + /* Check if table is already registered */ 306 + 307 + for (i = 0; i < acpi_gbl_root_table_list.current_table_count; 308 + ++i) { 309 + /* 310 + * Check for a table match on the entire table length, 311 + * not just the header. 312 + */ 313 + if (!acpi_tb_compare_tables(&new_table_desc, i)) { 314 + continue; 315 + } 316 + 317 + /* 318 + * Note: the current mechanism does not unregister a table if it is 319 + * dynamically unloaded. The related namespace entries are deleted, 320 + * but the table remains in the root table list. 321 + * 322 + * The assumption here is that the number of different tables that 323 + * will be loaded is actually small, and there is minimal overhead 324 + * in just keeping the table in case it is needed again. 325 + * 326 + * If this assumption changes in the future (perhaps on large 327 + * machines with many table load/unload operations), tables will 328 + * need to be unregistered when they are unloaded, and slots in the 329 + * root table list should be reused when empty. 330 + */ 331 + if (acpi_gbl_root_table_list.tables[i]. 
332 + flags & ACPI_TABLE_IS_LOADED) { 333 + 334 + /* Table is still loaded, this is an error */ 335 + 336 + status = AE_ALREADY_EXISTS; 337 + goto release_and_exit; 338 + } else { 339 + /* 340 + * Table was unloaded, allow it to be reloaded. 341 + * As we are going to return AE_OK to the caller, we should 342 + * take the responsibility of freeing the input descriptor. 343 + * Refill the input descriptor to ensure 344 + * acpi_tb_install_table_with_override() can be called again to 345 + * indicate the re-installation. 346 + */ 347 + acpi_tb_uninstall_table(&new_table_desc); 348 + *table_index = i; 349 + (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 350 + return_ACPI_STATUS(AE_OK); 351 + } 352 + } 353 + } 354 + 355 + /* Add the table to the global root table list */ 356 + 357 + status = acpi_tb_get_next_root_index(&i); 358 + if (ACPI_FAILURE(status)) { 359 + goto release_and_exit; 360 + } 361 + 362 + *table_index = i; 363 + acpi_tb_install_table_with_override(i, &new_table_desc, override); 364 + 365 + release_and_exit: 366 + 367 + /* Release the temporary table descriptor */ 368 + 369 + acpi_tb_release_temp_table(&new_table_desc); 370 + return_ACPI_STATUS(status); 371 + } 372 + 373 + /******************************************************************************* 374 + * 375 + * FUNCTION: acpi_tb_override_table 376 + * 377 + * PARAMETERS: old_table_desc - Validated table descriptor to be 378 + * overridden 379 + * 380 + * RETURN: None 381 + * 382 + * DESCRIPTION: Attempt table override by calling the OSL override functions. 383 + * Note: If the table is overridden, then the entire new table 384 + * is acquired and returned by this function. 385 + * Before/after invocation, the table descriptor is in a state 386 + * that is "VALIDATED". 
387 + * 388 + ******************************************************************************/ 389 + 390 + void acpi_tb_override_table(struct acpi_table_desc *old_table_desc) 264 391 { 265 392 acpi_status status; 266 - struct acpi_table_header *new_table = NULL; 267 - acpi_physical_address new_address = 0; 268 - u32 new_table_length = 0; 269 - u8 new_flags; 270 393 char *override_type; 394 + struct acpi_table_desc new_table_desc; 395 + struct acpi_table_header *table; 396 + acpi_physical_address address; 397 + u32 length; 271 398 272 399 /* (1) Attempt logical override (returns a logical address) */ 273 400 274 - status = acpi_os_table_override(table_header, &new_table); 275 - if (ACPI_SUCCESS(status) && new_table) { 276 - new_address = ACPI_PTR_TO_PHYSADDR(new_table); 277 - new_table_length = new_table->length; 278 - new_flags = ACPI_TABLE_ORIGIN_OVERRIDE; 401 + status = acpi_os_table_override(old_table_desc->pointer, &table); 402 + if (ACPI_SUCCESS(status) && table) { 403 + acpi_tb_acquire_temp_table(&new_table_desc, 404 + ACPI_PTR_TO_PHYSADDR(table), 405 + ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL); 279 406 override_type = "Logical"; 280 407 goto finish_override; 281 408 } 282 409 283 410 /* (2) Attempt physical override (returns a physical address) */ 284 411 285 - status = acpi_os_physical_table_override(table_header, 286 - &new_address, 287 - &new_table_length); 288 - if (ACPI_SUCCESS(status) && new_address && new_table_length) { 289 - 290 - /* Map the entire new table */ 291 - 292 - new_table = acpi_os_map_memory(new_address, new_table_length); 293 - if (!new_table) { 294 - ACPI_EXCEPTION((AE_INFO, AE_NO_MEMORY, 295 - "%4.4s " ACPI_PRINTF_UINT 296 - " Attempted physical table override failed", 297 - table_header->signature, 298 - ACPI_FORMAT_TO_UINT(table_desc-> 299 - address))); 300 - return (NULL); 301 - } 302 - 412 + status = acpi_os_physical_table_override(old_table_desc->pointer, 413 + &address, &length); 414 + if (ACPI_SUCCESS(status) && address && length) { 
415 + acpi_tb_acquire_temp_table(&new_table_desc, address, 416 + ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL); 303 417 override_type = "Physical"; 304 - new_flags = ACPI_TABLE_ORIGIN_MAPPED; 305 418 goto finish_override; 306 419 } 307 420 308 - return (NULL); /* There was no override */ 421 + return; /* There was no override */ 309 422 310 423 finish_override: 311 424 425 + /* Validate and verify a table before overriding */ 426 + 427 + status = acpi_tb_verify_temp_table(&new_table_desc, NULL); 428 + if (ACPI_FAILURE(status)) { 429 + return; 430 + } 431 + 312 432 ACPI_INFO((AE_INFO, "%4.4s " ACPI_PRINTF_UINT 313 433 " %s table override, new table: " ACPI_PRINTF_UINT, 314 - table_header->signature, 315 - ACPI_FORMAT_TO_UINT(table_desc->address), 316 - override_type, ACPI_FORMAT_TO_UINT(new_table))); 434 + old_table_desc->signature.ascii, 435 + ACPI_FORMAT_TO_UINT(old_table_desc->address), 436 + override_type, ACPI_FORMAT_TO_UINT(new_table_desc.address))); 317 437 318 - /* We can now unmap/delete the original table (if fully mapped) */ 438 + /* We can now uninstall the original table */ 319 439 320 - acpi_tb_delete_table(table_desc); 440 + acpi_tb_uninstall_table(old_table_desc); 321 441 322 - /* Setup descriptor for the new table */ 442 + /* 443 + * Replace the original table descriptor and keep its state as 444 + * "VALIDATED". 
445 + */ 446 + acpi_tb_init_table_descriptor(old_table_desc, new_table_desc.address, 447 + new_table_desc.flags, 448 + new_table_desc.pointer); 449 + acpi_tb_validate_temp_table(old_table_desc); 323 450 324 - table_desc->address = new_address; 325 - table_desc->pointer = new_table; 326 - table_desc->length = new_table_length; 327 - table_desc->flags = new_flags; 451 + /* Release the temporary table descriptor */ 328 452 329 - return (new_table); 330 - } 331 - 332 - /******************************************************************************* 333 - * 334 - * FUNCTION: acpi_tb_resize_root_table_list 335 - * 336 - * PARAMETERS: None 337 - * 338 - * RETURN: Status 339 - * 340 - * DESCRIPTION: Expand the size of global table array 341 - * 342 - ******************************************************************************/ 343 - 344 - acpi_status acpi_tb_resize_root_table_list(void) 345 - { 346 - struct acpi_table_desc *tables; 347 - u32 table_count; 348 - 349 - ACPI_FUNCTION_TRACE(tb_resize_root_table_list); 350 - 351 - /* allow_resize flag is a parameter to acpi_initialize_tables */ 352 - 353 - if (!(acpi_gbl_root_table_list.flags & ACPI_ROOT_ALLOW_RESIZE)) { 354 - ACPI_ERROR((AE_INFO, 355 - "Resize of Root Table Array is not allowed")); 356 - return_ACPI_STATUS(AE_SUPPORT); 357 - } 358 - 359 - /* Increase the Table Array size */ 360 - 361 - if (acpi_gbl_root_table_list.flags & ACPI_ROOT_ORIGIN_ALLOCATED) { 362 - table_count = acpi_gbl_root_table_list.max_table_count; 363 - } else { 364 - table_count = acpi_gbl_root_table_list.current_table_count; 365 - } 366 - 367 - tables = ACPI_ALLOCATE_ZEROED(((acpi_size) table_count + 368 - ACPI_ROOT_TABLE_SIZE_INCREMENT) * 369 - sizeof(struct acpi_table_desc)); 370 - if (!tables) { 371 - ACPI_ERROR((AE_INFO, 372 - "Could not allocate new root table array")); 373 - return_ACPI_STATUS(AE_NO_MEMORY); 374 - } 375 - 376 - /* Copy and free the previous table array */ 377 - 378 - if (acpi_gbl_root_table_list.tables) { 379 - 
ACPI_MEMCPY(tables, acpi_gbl_root_table_list.tables, 380 - (acpi_size) table_count * 381 - sizeof(struct acpi_table_desc)); 382 - 383 - if (acpi_gbl_root_table_list.flags & ACPI_ROOT_ORIGIN_ALLOCATED) { 384 - ACPI_FREE(acpi_gbl_root_table_list.tables); 385 - } 386 - } 387 - 388 - acpi_gbl_root_table_list.tables = tables; 389 - acpi_gbl_root_table_list.max_table_count = 390 - table_count + ACPI_ROOT_TABLE_SIZE_INCREMENT; 391 - acpi_gbl_root_table_list.flags |= ACPI_ROOT_ORIGIN_ALLOCATED; 392 - 393 - return_ACPI_STATUS(AE_OK); 453 + acpi_tb_release_temp_table(&new_table_desc); 394 454 } 395 455 396 456 /******************************************************************************* ··· 460 400 * PARAMETERS: address - Table address 461 401 * table - Table header 462 402 * length - Table length 463 - * flags - flags 403 + * flags - Install flags 404 + * table_index - Where the table index is returned 464 405 * 465 406 * RETURN: Status and table index. 466 407 * ··· 471 410 472 411 acpi_status 473 412 acpi_tb_store_table(acpi_physical_address address, 474 - struct acpi_table_header *table, 413 + struct acpi_table_header * table, 475 414 u32 length, u8 flags, u32 *table_index) 476 415 { 477 416 acpi_status status; 478 - struct acpi_table_desc *new_table; 417 + struct acpi_table_desc *table_desc; 479 418 480 - /* Ensure that there is room for the table in the Root Table List */ 481 - 482 - if (acpi_gbl_root_table_list.current_table_count >= 483 - acpi_gbl_root_table_list.max_table_count) { 484 - status = acpi_tb_resize_root_table_list(); 485 - if (ACPI_FAILURE(status)) { 486 - return (status); 487 - } 419 + status = acpi_tb_get_next_root_index(table_index); 420 + if (ACPI_FAILURE(status)) { 421 + return (status); 488 422 } 489 - 490 - new_table = 491 - &acpi_gbl_root_table_list.tables[acpi_gbl_root_table_list. 
492 - current_table_count]; 493 423 494 424 /* Initialize added table */ 495 425 496 - new_table->address = address; 497 - new_table->pointer = table; 498 - new_table->length = length; 499 - new_table->owner_id = 0; 500 - new_table->flags = flags; 501 - 502 - ACPI_MOVE_32_TO_32(&new_table->signature, table->signature); 503 - 504 - *table_index = acpi_gbl_root_table_list.current_table_count; 505 - acpi_gbl_root_table_list.current_table_count++; 426 + table_desc = &acpi_gbl_root_table_list.tables[*table_index]; 427 + acpi_tb_init_table_descriptor(table_desc, address, flags, table); 428 + table_desc->pointer = table; 506 429 return (AE_OK); 507 430 } 508 431 509 432 /******************************************************************************* 510 433 * 511 - * FUNCTION: acpi_tb_delete_table 434 + * FUNCTION: acpi_tb_uninstall_table 512 435 * 513 - * PARAMETERS: table_index - Table index 436 + * PARAMETERS: table_desc - Table descriptor 514 437 * 515 438 * RETURN: None 516 439 * ··· 502 457 * 503 458 ******************************************************************************/ 504 459 505 - void acpi_tb_delete_table(struct acpi_table_desc *table_desc) 460 + void acpi_tb_uninstall_table(struct acpi_table_desc *table_desc) 506 461 { 507 - /* Table must be mapped or allocated */ 508 - if (!table_desc->pointer) { 509 - return; 510 - } 511 - switch (table_desc->flags & ACPI_TABLE_ORIGIN_MASK) { 512 - case ACPI_TABLE_ORIGIN_MAPPED: 513 462 514 - acpi_os_unmap_memory(table_desc->pointer, table_desc->length); 515 - break; 463 + ACPI_FUNCTION_TRACE(tb_uninstall_table); 516 464 517 - case ACPI_TABLE_ORIGIN_ALLOCATED: 465 + /* Table must be installed */ 518 466 519 - ACPI_FREE(table_desc->pointer); 520 - break; 521 - 522 - /* Not mapped or allocated, there is nothing we can do */ 523 - 524 - default: 525 - 526 - return; 467 + if (!table_desc->address) { 468 + return_VOID; 527 469 } 528 470 529 - table_desc->pointer = NULL; 530 - } 471 + acpi_tb_invalidate_table(table_desc); 
531 472 532 - /******************************************************************************* 533 - * 534 - * FUNCTION: acpi_tb_terminate 535 - * 536 - * PARAMETERS: None 537 - * 538 - * RETURN: None 539 - * 540 - * DESCRIPTION: Delete all internal ACPI tables 541 - * 542 - ******************************************************************************/ 543 - 544 - void acpi_tb_terminate(void) 545 - { 546 - u32 i; 547 - 548 - ACPI_FUNCTION_TRACE(tb_terminate); 549 - 550 - (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 551 - 552 - /* Delete the individual tables */ 553 - 554 - for (i = 0; i < acpi_gbl_root_table_list.current_table_count; i++) { 555 - acpi_tb_delete_table(&acpi_gbl_root_table_list.tables[i]); 473 + if ((table_desc->flags & ACPI_TABLE_ORIGIN_MASK) == 474 + ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL) { 475 + ACPI_FREE(ACPI_CAST_PTR(void, table_desc->address)); 556 476 } 557 477 558 - /* 559 - * Delete the root table array if allocated locally. Array cannot be 560 - * mapped, so we don't need to check for that flag. 561 - */ 562 - if (acpi_gbl_root_table_list.flags & ACPI_ROOT_ORIGIN_ALLOCATED) { 563 - ACPI_FREE(acpi_gbl_root_table_list.tables); 564 - } 565 - 566 - acpi_gbl_root_table_list.tables = NULL; 567 - acpi_gbl_root_table_list.flags = 0; 568 - acpi_gbl_root_table_list.current_table_count = 0; 569 - 570 - ACPI_DEBUG_PRINT((ACPI_DB_INFO, "ACPI Tables freed\n")); 571 - (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 572 - 478 + table_desc->address = ACPI_PTR_TO_PHYSADDR(NULL); 573 479 return_VOID; 574 - } 575 - 576 - /******************************************************************************* 577 - * 578 - * FUNCTION: acpi_tb_delete_namespace_by_owner 579 - * 580 - * PARAMETERS: table_index - Table index 581 - * 582 - * RETURN: Status 583 - * 584 - * DESCRIPTION: Delete all namespace objects created when this table was loaded. 
585 - * 586 - ******************************************************************************/ 587 - 588 - acpi_status acpi_tb_delete_namespace_by_owner(u32 table_index) 589 - { 590 - acpi_owner_id owner_id; 591 - acpi_status status; 592 - 593 - ACPI_FUNCTION_TRACE(tb_delete_namespace_by_owner); 594 - 595 - status = acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 596 - if (ACPI_FAILURE(status)) { 597 - return_ACPI_STATUS(status); 598 - } 599 - 600 - if (table_index >= acpi_gbl_root_table_list.current_table_count) { 601 - 602 - /* The table index does not exist */ 603 - 604 - (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 605 - return_ACPI_STATUS(AE_NOT_EXIST); 606 - } 607 - 608 - /* Get the owner ID for this table, used to delete namespace nodes */ 609 - 610 - owner_id = acpi_gbl_root_table_list.tables[table_index].owner_id; 611 - (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 612 - 613 - /* 614 - * Need to acquire the namespace writer lock to prevent interference 615 - * with any concurrent namespace walks. The interpreter must be 616 - * released during the deletion since the acquisition of the deletion 617 - * lock may block, and also since the execution of a namespace walk 618 - * must be allowed to use the interpreter. 
619 - */ 620 - (void)acpi_ut_release_mutex(ACPI_MTX_INTERPRETER); 621 - status = acpi_ut_acquire_write_lock(&acpi_gbl_namespace_rw_lock); 622 - 623 - acpi_ns_delete_namespace_by_owner(owner_id); 624 - if (ACPI_FAILURE(status)) { 625 - return_ACPI_STATUS(status); 626 - } 627 - 628 - acpi_ut_release_write_lock(&acpi_gbl_namespace_rw_lock); 629 - 630 - status = acpi_ut_acquire_mutex(ACPI_MTX_INTERPRETER); 631 - return_ACPI_STATUS(status); 632 - } 633 - 634 - /******************************************************************************* 635 - * 636 - * FUNCTION: acpi_tb_allocate_owner_id 637 - * 638 - * PARAMETERS: table_index - Table index 639 - * 640 - * RETURN: Status 641 - * 642 - * DESCRIPTION: Allocates owner_id in table_desc 643 - * 644 - ******************************************************************************/ 645 - 646 - acpi_status acpi_tb_allocate_owner_id(u32 table_index) 647 - { 648 - acpi_status status = AE_BAD_PARAMETER; 649 - 650 - ACPI_FUNCTION_TRACE(tb_allocate_owner_id); 651 - 652 - (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 653 - if (table_index < acpi_gbl_root_table_list.current_table_count) { 654 - status = acpi_ut_allocate_owner_id 655 - (&(acpi_gbl_root_table_list.tables[table_index].owner_id)); 656 - } 657 - 658 - (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 659 - return_ACPI_STATUS(status); 660 - } 661 - 662 - /******************************************************************************* 663 - * 664 - * FUNCTION: acpi_tb_release_owner_id 665 - * 666 - * PARAMETERS: table_index - Table index 667 - * 668 - * RETURN: Status 669 - * 670 - * DESCRIPTION: Releases owner_id in table_desc 671 - * 672 - ******************************************************************************/ 673 - 674 - acpi_status acpi_tb_release_owner_id(u32 table_index) 675 - { 676 - acpi_status status = AE_BAD_PARAMETER; 677 - 678 - ACPI_FUNCTION_TRACE(tb_release_owner_id); 679 - 680 - (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 681 - if (table_index < 
acpi_gbl_root_table_list.current_table_count) { 682 - acpi_ut_release_owner_id(& 683 - (acpi_gbl_root_table_list. 684 - tables[table_index].owner_id)); 685 - status = AE_OK; 686 - } 687 - 688 - (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 689 - return_ACPI_STATUS(status); 690 - } 691 - 692 - /******************************************************************************* 693 - * 694 - * FUNCTION: acpi_tb_get_owner_id 695 - * 696 - * PARAMETERS: table_index - Table index 697 - * owner_id - Where the table owner_id is returned 698 - * 699 - * RETURN: Status 700 - * 701 - * DESCRIPTION: returns owner_id for the ACPI table 702 - * 703 - ******************************************************************************/ 704 - 705 - acpi_status acpi_tb_get_owner_id(u32 table_index, acpi_owner_id *owner_id) 706 - { 707 - acpi_status status = AE_BAD_PARAMETER; 708 - 709 - ACPI_FUNCTION_TRACE(tb_get_owner_id); 710 - 711 - (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 712 - if (table_index < acpi_gbl_root_table_list.current_table_count) { 713 - *owner_id = 714 - acpi_gbl_root_table_list.tables[table_index].owner_id; 715 - status = AE_OK; 716 - } 717 - 718 - (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 719 - return_ACPI_STATUS(status); 720 - } 721 - 722 - /******************************************************************************* 723 - * 724 - * FUNCTION: acpi_tb_is_table_loaded 725 - * 726 - * PARAMETERS: table_index - Table index 727 - * 728 - * RETURN: Table Loaded Flag 729 - * 730 - ******************************************************************************/ 731 - 732 - u8 acpi_tb_is_table_loaded(u32 table_index) 733 - { 734 - u8 is_loaded = FALSE; 735 - 736 - (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 737 - if (table_index < acpi_gbl_root_table_list.current_table_count) { 738 - is_loaded = (u8) 739 - (acpi_gbl_root_table_list.tables[table_index].flags & 740 - ACPI_TABLE_IS_LOADED); 741 - } 742 - 743 - (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 744 - return 
(is_loaded); 745 - } 746 - 747 - /******************************************************************************* 748 - * 749 - * FUNCTION: acpi_tb_set_table_loaded_flag 750 - * 751 - * PARAMETERS: table_index - Table index 752 - * is_loaded - TRUE if table is loaded, FALSE otherwise 753 - * 754 - * RETURN: None 755 - * 756 - * DESCRIPTION: Sets the table loaded flag to either TRUE or FALSE. 757 - * 758 - ******************************************************************************/ 759 - 760 - void acpi_tb_set_table_loaded_flag(u32 table_index, u8 is_loaded) 761 - { 762 - 763 - (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 764 - if (table_index < acpi_gbl_root_table_list.current_table_count) { 765 - if (is_loaded) { 766 - acpi_gbl_root_table_list.tables[table_index].flags |= 767 - ACPI_TABLE_IS_LOADED; 768 - } else { 769 - acpi_gbl_root_table_list.tables[table_index].flags &= 770 - ~ACPI_TABLE_IS_LOADED; 771 - } 772 - } 773 - 774 - (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 775 480 }
+30 -259
drivers/acpi/acpica/tbutils.c
···
 ACPI_MODULE_NAME("tbutils")
 
 /* Local prototypes */
-static acpi_status acpi_tb_validate_xsdt(acpi_physical_address address);
-
 static acpi_physical_address
 acpi_tb_get_root_table_entry(u8 *table_entry, u32 table_entry_size);
 
···
     }
 
     ACPI_MEMCPY(new_table, table_desc->pointer, table_desc->length);
-    acpi_tb_delete_table(table_desc);
-    table_desc->pointer = new_table;
-    table_desc->flags = ACPI_TABLE_ORIGIN_ALLOCATED;
+    acpi_tb_uninstall_table(table_desc);
+
+    acpi_tb_init_table_descriptor(&acpi_gbl_root_table_list.
+                                  tables[ACPI_TABLE_INDEX_DSDT],
+                                  ACPI_PTR_TO_PHYSADDR(new_table),
+                                  ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL,
+                                  new_table);
 
     ACPI_INFO((AE_INFO,
                "Forced DSDT copy: length 0x%05X copied locally, original unmapped",
                new_table->length));
 
     return (new_table);
-}
-
-/*******************************************************************************
- *
- * FUNCTION:    acpi_tb_install_table
- *
- * PARAMETERS:  address             - Physical address of DSDT or FACS
- *              signature           - Table signature, NULL if no need to
- *                                    match
- *              table_index         - Index into root table array
- *
- * RETURN:      None
- *
- * DESCRIPTION: Install an ACPI table into the global data structure. The
- *              table override mechanism is called to allow the host
- *              OS to replace any table before it is installed in the root
- *              table array.
- *
- ******************************************************************************/
-
-void
-acpi_tb_install_table(acpi_physical_address address,
-                      char *signature, u32 table_index)
-{
-    struct acpi_table_header *table;
-    struct acpi_table_header *final_table;
-    struct acpi_table_desc *table_desc;
-
-    if (!address) {
-        ACPI_ERROR((AE_INFO,
-                    "Null physical address for ACPI table [%s]",
-                    signature));
-        return;
-    }
-
-    /* Map just the table header */
-
-    table = acpi_os_map_memory(address, sizeof(struct acpi_table_header));
-    if (!table) {
-        ACPI_ERROR((AE_INFO,
-                    "Could not map memory for table [%s] at %p",
-                    signature, ACPI_CAST_PTR(void, address)));
-        return;
-    }
-
-    /* If a particular signature is expected (DSDT/FACS), it must match */
-
-    if (signature && !ACPI_COMPARE_NAME(table->signature, signature)) {
-        ACPI_BIOS_ERROR((AE_INFO,
-                         "Invalid signature 0x%X for ACPI table, expected [%s]",
-                         *ACPI_CAST_PTR(u32, table->signature),
-                         signature));
-        goto unmap_and_exit;
-    }
-
-    /*
-     * Initialize the table entry. Set the pointer to NULL, since the
-     * table is not fully mapped at this time.
-     */
-    table_desc = &acpi_gbl_root_table_list.tables[table_index];
-
-    table_desc->address = address;
-    table_desc->pointer = NULL;
-    table_desc->length = table->length;
-    table_desc->flags = ACPI_TABLE_ORIGIN_MAPPED;
-    ACPI_MOVE_32_TO_32(table_desc->signature.ascii, table->signature);
-
-    /*
-     * ACPI Table Override:
-     *
-     * Before we install the table, let the host OS override it with a new
-     * one if desired. Any table within the RSDT/XSDT can be replaced,
-     * including the DSDT which is pointed to by the FADT.
-     *
-     * NOTE: If the table is overridden, then final_table will contain a
-     * mapped pointer to the full new table. If the table is not overridden,
-     * or if there has been a physical override, then the table will be
-     * fully mapped later (in verify table). In any case, we must
-     * unmap the header that was mapped above.
-     */
-    final_table = acpi_tb_table_override(table, table_desc);
-    if (!final_table) {
-        final_table = table;    /* There was no override */
-    }
-
-    acpi_tb_print_table_header(table_desc->address, final_table);
-
-    /* Set the global integer width (based upon revision of the DSDT) */
-
-    if (table_index == ACPI_TABLE_INDEX_DSDT) {
-        acpi_ut_set_integer_width(final_table->revision);
-    }
-
-    /*
-     * If we have a physical override during this early loading of the ACPI
-     * tables, unmap the table for now. It will be mapped again later when
-     * it is actually used. This supports very early loading of ACPI tables,
-     * before virtual memory is fully initialized and running within the
-     * host OS. Note: A logical override has the ACPI_TABLE_ORIGIN_OVERRIDE
-     * flag set and will not be deleted below.
-     */
-    if (final_table != table) {
-        acpi_tb_delete_table(table_desc);
-    }
-
-unmap_and_exit:
-
-    /* Always unmap the table header that we mapped above */
-
-    acpi_os_unmap_memory(table, sizeof(struct acpi_table_header));
 }
 
 /*******************************************************************************
···
 
 /*******************************************************************************
  *
- * FUNCTION:    acpi_tb_validate_xsdt
- *
- * PARAMETERS:  address             - Physical address of the XSDT (from RSDP)
- *
- * RETURN:      Status. AE_OK if the table appears to be valid.
- *
- * DESCRIPTION: Validate an XSDT to ensure that it is of minimum size and does
- *              not contain any NULL entries. A problem that is seen in the
- *              field is that the XSDT exists, but is actually useless because
- *              of one or more (or all) NULL entries.
- *
- ******************************************************************************/
-
-static acpi_status acpi_tb_validate_xsdt(acpi_physical_address xsdt_address)
-{
-    struct acpi_table_header *table;
-    u8 *next_entry;
-    acpi_physical_address address;
-    u32 length;
-    u32 entry_count;
-    acpi_status status;
-    u32 i;
-
-    /* Get the XSDT length */
-
-    table =
-        acpi_os_map_memory(xsdt_address, sizeof(struct acpi_table_header));
-    if (!table) {
-        return (AE_NO_MEMORY);
-    }
-
-    length = table->length;
-    acpi_os_unmap_memory(table, sizeof(struct acpi_table_header));
-
-    /*
-     * Minimum XSDT length is the size of the standard ACPI header
-     * plus one physical address entry
-     */
-    if (length < (sizeof(struct acpi_table_header) + ACPI_XSDT_ENTRY_SIZE)) {
-        return (AE_INVALID_TABLE_LENGTH);
-    }
-
-    /* Map the entire XSDT */
-
-    table = acpi_os_map_memory(xsdt_address, length);
-    if (!table) {
-        return (AE_NO_MEMORY);
-    }
-
-    /* Get the number of entries and pointer to first entry */
-
-    status = AE_OK;
-    next_entry = ACPI_ADD_PTR(u8, table, sizeof(struct acpi_table_header));
-    entry_count = (u32)((table->length - sizeof(struct acpi_table_header)) /
-                        ACPI_XSDT_ENTRY_SIZE);
-
-    /* Validate each entry (physical address) within the XSDT */
-
-    for (i = 0; i < entry_count; i++) {
-        address =
-            acpi_tb_get_root_table_entry(next_entry,
-                                         ACPI_XSDT_ENTRY_SIZE);
-        if (!address) {
-
-            /* Detected a NULL entry, XSDT is invalid */
-
-            status = AE_NULL_ENTRY;
-            break;
-        }
-
-        next_entry += ACPI_XSDT_ENTRY_SIZE;
-    }
-
-    /* Unmap table */
-
-    acpi_os_unmap_memory(table, length);
-    return (status);
-}
-
-/*******************************************************************************
- *
  * FUNCTION:    acpi_tb_parse_root_table
  *
  * PARAMETERS:  rsdp                - Pointer to the RSDP
···
     u32 table_count;
     struct acpi_table_header *table;
     acpi_physical_address address;
-    acpi_physical_address rsdt_address;
     u32 length;
     u8 *table_entry;
     acpi_status status;
+    u32 table_index;
 
     ACPI_FUNCTION_TRACE(tb_parse_root_table);
 
···
         * as per the ACPI specification.
         */
        address = (acpi_physical_address) rsdp->xsdt_physical_address;
-       rsdt_address =
-           (acpi_physical_address) rsdp->rsdt_physical_address;
        table_entry_size = ACPI_XSDT_ENTRY_SIZE;
    } else {
        /* Root table is an RSDT (32-bit physical addresses) */
 
        address = (acpi_physical_address) rsdp->rsdt_physical_address;
-       rsdt_address = address;
        table_entry_size = ACPI_RSDT_ENTRY_SIZE;
    }
 
···
     * so unmap the RSDP here before mapping other tables
     */
    acpi_os_unmap_memory(rsdp, sizeof(struct acpi_table_rsdp));
-
-   /*
-    * If it is present and used, validate the XSDT for access/size
-    * and ensure that all table entries are at least non-NULL
-    */
-   if (table_entry_size == ACPI_XSDT_ENTRY_SIZE) {
-       status = acpi_tb_validate_xsdt(address);
-       if (ACPI_FAILURE(status)) {
-           ACPI_BIOS_WARNING((AE_INFO,
-                              "XSDT is invalid (%s), using RSDT",
-                              acpi_format_exception(status)));
-
-           /* Fall back to the RSDT */
-
-           address = rsdt_address;
-           table_entry_size = ACPI_RSDT_ENTRY_SIZE;
-       }
-   }
 
    /* Map the RSDT/XSDT table header to get the full table length */
 
···
    /* Initialize the root table array from the RSDT/XSDT */
 
    for (i = 0; i < table_count; i++) {
-       if (acpi_gbl_root_table_list.current_table_count >=
-           acpi_gbl_root_table_list.max_table_count) {
-
-           /* There is no more room in the root table array, attempt resize */
-
-           status = acpi_tb_resize_root_table_list();
-           if (ACPI_FAILURE(status)) {
-               ACPI_WARNING((AE_INFO,
-                             "Truncating %u table entries!",
-                             (unsigned) (table_count -
-                                         (acpi_gbl_root_table_list.
-                                          current_table_count - 2))));
-               break;
-           }
-       }
 
        /* Get the table physical address (32-bit for RSDT, 64-bit for XSDT) */
 
-       acpi_gbl_root_table_list.tables[acpi_gbl_root_table_list.
-                                       current_table_count].address =
+       address =
            acpi_tb_get_root_table_entry(table_entry, table_entry_size);
 
-       table_entry += table_entry_size;
-       acpi_gbl_root_table_list.current_table_count++;
-   }
+       /* Skip NULL entries in RSDT/XSDT */
 
-   /*
-    * It is not possible to map more than one entry in some environments,
-    * so unmap the root table here before mapping other tables
-    */
-   acpi_os_unmap_memory(table, length);
-
-   /*
-    * Complete the initialization of the root table array by examining
-    * the header of each table
-    */
-   for (i = 2; i < acpi_gbl_root_table_list.current_table_count; i++) {
-       acpi_tb_install_table(acpi_gbl_root_table_list.tables[i].
-                             address, NULL, i);
-
-       /* Special case for FADT - validate it then get the DSDT and FACS */
-
-       if (ACPI_COMPARE_NAME
-           (&acpi_gbl_root_table_list.tables[i].signature,
-            ACPI_SIG_FADT)) {
-           acpi_tb_parse_fadt(i);
+       if (!address) {
+           goto next_table;
        }
+
+       status = acpi_tb_install_standard_table(address,
+                                               ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL,
+                                               FALSE, TRUE,
+                                               &table_index);
+
+       if (ACPI_SUCCESS(status) &&
+           ACPI_COMPARE_NAME(&acpi_gbl_root_table_list.
+                             tables[table_index].signature,
+                             ACPI_SIG_FADT)) {
+           acpi_tb_parse_fadt(table_index);
+       }
+
+next_table:
+
+       table_entry += table_entry_size;
    }
+
+   acpi_os_unmap_memory(table, length);
 
    return_ACPI_STATUS(AE_OK);
 }
+9 -9
drivers/acpi/acpica/tbxface.c
··· 206 206 acpi_get_table_header(char *signature, 207 207 u32 instance, struct acpi_table_header *out_table_header) 208 208 { 209 - u32 i; 210 - u32 j; 209 + u32 i; 210 + u32 j; 211 211 struct acpi_table_header *header; 212 212 213 213 /* Parameter validation */ ··· 233 233 if (!acpi_gbl_root_table_list.tables[i].pointer) { 234 234 if ((acpi_gbl_root_table_list.tables[i].flags & 235 235 ACPI_TABLE_ORIGIN_MASK) == 236 - ACPI_TABLE_ORIGIN_MAPPED) { 236 + ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL) { 237 237 header = 238 238 acpi_os_map_memory(acpi_gbl_root_table_list. 239 239 tables[i].address, ··· 321 321 u32 instance, struct acpi_table_header **out_table, 322 322 acpi_size *tbl_size) 323 323 { 324 - u32 i; 325 - u32 j; 324 + u32 i; 325 + u32 j; 326 326 acpi_status status; 327 327 328 328 /* Parameter validation */ ··· 346 346 } 347 347 348 348 status = 349 - acpi_tb_verify_table(&acpi_gbl_root_table_list.tables[i]); 349 + acpi_tb_validate_table(&acpi_gbl_root_table_list.tables[i]); 350 350 if (ACPI_SUCCESS(status)) { 351 351 *out_table = acpi_gbl_root_table_list.tables[i].pointer; 352 352 *tbl_size = acpi_gbl_root_table_list.tables[i].length; ··· 390 390 * 391 391 ******************************************************************************/ 392 392 acpi_status 393 - acpi_get_table_by_index(u32 table_index, struct acpi_table_header **table) 393 + acpi_get_table_by_index(u32 table_index, struct acpi_table_header ** table) 394 394 { 395 395 acpi_status status; 396 396 ··· 416 416 /* Table is not mapped, map it */ 417 417 418 418 status = 419 - acpi_tb_verify_table(&acpi_gbl_root_table_list. 420 - tables[table_index]); 419 + acpi_tb_validate_table(&acpi_gbl_root_table_list. 420 + tables[table_index]); 421 421 if (ACPI_FAILURE(status)) { 422 422 (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 423 423 return_ACPI_STATUS(status);
+60 -27
drivers/acpi/acpica/tbxfload.c
··· 117 117 tables[ACPI_TABLE_INDEX_DSDT].signature), 118 118 ACPI_SIG_DSDT) 119 119 || 120 - ACPI_FAILURE(acpi_tb_verify_table 120 + ACPI_FAILURE(acpi_tb_validate_table 121 121 (&acpi_gbl_root_table_list. 122 122 tables[ACPI_TABLE_INDEX_DSDT]))) { 123 123 status = AE_NO_ACPI_TABLES; ··· 128 128 * Save the DSDT pointer for simple access. This is the mapped memory 129 129 * address. We must take care here because the address of the .Tables 130 130 * array can change dynamically as tables are loaded at run-time. Note: 131 - * .Pointer field is not validated until after call to acpi_tb_verify_table. 131 + * .Pointer field is not validated until after call to acpi_tb_validate_table. 132 132 */ 133 133 acpi_gbl_DSDT = 134 134 acpi_gbl_root_table_list.tables[ACPI_TABLE_INDEX_DSDT].pointer; ··· 174 174 (acpi_gbl_root_table_list.tables[i]. 175 175 signature), ACPI_SIG_PSDT)) 176 176 || 177 - ACPI_FAILURE(acpi_tb_verify_table 177 + ACPI_FAILURE(acpi_tb_validate_table 178 178 (&acpi_gbl_root_table_list.tables[i]))) { 179 - continue; 180 - } 181 - 182 - /* 183 - * Optionally do not load any SSDTs from the RSDT/XSDT. This can 184 - * be useful for debugging ACPI problems on some machines. 185 - */ 186 - if (acpi_gbl_disable_ssdt_table_load) { 187 - ACPI_INFO((AE_INFO, "Ignoring %4.4s at %p", 188 - acpi_gbl_root_table_list.tables[i].signature. 189 - ascii, ACPI_CAST_PTR(void, 190 - acpi_gbl_root_table_list. 191 - tables[i].address))); 192 179 continue; 193 180 } 194 181 ··· 195 208 196 209 /******************************************************************************* 197 210 * 211 + * FUNCTION: acpi_install_table 212 + * 213 + * PARAMETERS: address - Address of the ACPI table to be installed. 214 + * physical - Whether the address is a physical table 215 + * address or not 216 + * 217 + * RETURN: Status 218 + * 219 + * DESCRIPTION: Dynamically install an ACPI table. 
220 + * Note: This function should only be invoked after 221 + * acpi_initialize_tables() and before acpi_load_tables(). 222 + * 223 + ******************************************************************************/ 224 + 225 + acpi_status __init 226 + acpi_install_table(acpi_physical_address address, u8 physical) 227 + { 228 + acpi_status status; 229 + u8 flags; 230 + u32 table_index; 231 + 232 + ACPI_FUNCTION_TRACE(acpi_install_table); 233 + 234 + if (physical) { 235 + flags = ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL; 236 + } else { 237 + flags = ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL; 238 + } 239 + 240 + status = acpi_tb_install_standard_table(address, flags, 241 + FALSE, FALSE, &table_index); 242 + 243 + return_ACPI_STATUS(status); 244 + } 245 + 246 + ACPI_EXPORT_SYMBOL_INIT(acpi_install_table) 247 + 248 + /******************************************************************************* 249 + * 198 250 * FUNCTION: acpi_load_table 199 251 * 200 252 * PARAMETERS: table - Pointer to a buffer containing the ACPI ··· 248 222 * to ensure that the table is not deleted or unmapped. 
249 223 * 250 224 ******************************************************************************/ 251 - 252 225 acpi_status acpi_load_table(struct acpi_table_header *table) 253 226 { 254 227 acpi_status status; 255 - struct acpi_table_desc table_desc; 256 228 u32 table_index; 257 229 258 230 ACPI_FUNCTION_TRACE(acpi_load_table); ··· 260 236 if (!table) { 261 237 return_ACPI_STATUS(AE_BAD_PARAMETER); 262 238 } 263 - 264 - /* Init local table descriptor */ 265 - 266 - ACPI_MEMSET(&table_desc, 0, sizeof(struct acpi_table_desc)); 267 - table_desc.address = ACPI_PTR_TO_PHYSADDR(table); 268 - table_desc.pointer = table; 269 - table_desc.length = table->length; 270 - table_desc.flags = ACPI_TABLE_ORIGIN_UNKNOWN; 271 239 272 240 /* Must acquire the interpreter lock during this operation */ 273 241 ··· 271 255 /* Install the table and load it into the namespace */ 272 256 273 257 ACPI_INFO((AE_INFO, "Host-directed Dynamic ACPI Table Load:")); 274 - status = acpi_tb_add_table(&table_desc, &table_index); 258 + (void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES); 259 + 260 + status = acpi_tb_install_standard_table(ACPI_PTR_TO_PHYSADDR(table), 261 + ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL, 262 + TRUE, FALSE, &table_index); 263 + 264 + (void)acpi_ut_release_mutex(ACPI_MTX_TABLES); 265 + if (ACPI_FAILURE(status)) { 266 + goto unlock_and_exit; 267 + } 268 + 269 + /* 270 + * Note: Now table is "INSTALLED", it must be validated before 271 + * using. 272 + */ 273 + status = 274 + acpi_tb_validate_table(&acpi_gbl_root_table_list. 275 + tables[table_index]); 275 276 if (ACPI_FAILURE(status)) { 276 277 goto unlock_and_exit; 277 278 }
+64 -12
drivers/acpi/acpica/utdecode.c
··· 462 462 463 463 /* Names for Notify() values, used for debug output */ 464 464 465 - static const char *acpi_gbl_notify_value_names[ACPI_NOTIFY_MAX + 1] = { 465 + static const char *acpi_gbl_generic_notify[ACPI_NOTIFY_MAX + 1] = { 466 466 /* 00 */ "Bus Check", 467 467 /* 01 */ "Device Check", 468 468 /* 02 */ "Device Wake", ··· 473 473 /* 07 */ "Power Fault", 474 474 /* 08 */ "Capabilities Check", 475 475 /* 09 */ "Device PLD Check", 476 - /* 10 */ "Reserved", 477 - /* 11 */ "System Locality Update", 478 - /* 12 */ "Shutdown Request" 476 + /* 0A */ "Reserved", 477 + /* 0B */ "System Locality Update", 478 + /* 0C */ "Shutdown Request" 479 479 }; 480 480 481 - const char *acpi_ut_get_notify_name(u32 notify_value) 481 + static const char *acpi_gbl_device_notify[4] = { 482 + /* 80 */ "Status Change", 483 + /* 81 */ "Information Change", 484 + /* 82 */ "Device-Specific Change", 485 + /* 83 */ "Device-Specific Change" 486 + }; 487 + 488 + static const char *acpi_gbl_processor_notify[4] = { 489 + /* 80 */ "Performance Capability Change", 490 + /* 81 */ "C-State Change", 491 + /* 82 */ "Throttling Capability Change", 492 + /* 83 */ "Device-Specific Change" 493 + }; 494 + 495 + static const char *acpi_gbl_thermal_notify[4] = { 496 + /* 80 */ "Thermal Status Change", 497 + /* 81 */ "Thermal Trip Point Change", 498 + /* 82 */ "Thermal Device List Change", 499 + /* 83 */ "Thermal Relationship Change" 500 + }; 501 + 502 + const char *acpi_ut_get_notify_name(u32 notify_value, acpi_object_type type) 482 503 { 483 504 505 + /* 00 - 0C are common to all object types */ 506 + 484 507 if (notify_value <= ACPI_NOTIFY_MAX) { 485 - return (acpi_gbl_notify_value_names[notify_value]); 486 - } else if (notify_value <= ACPI_MAX_SYS_NOTIFY) { 487 - return ("Reserved"); 488 - } else if (notify_value <= ACPI_MAX_DEVICE_SPECIFIC_NOTIFY) { 489 - return ("Device Specific"); 490 - } else { 491 - return ("Hardware Specific"); 508 + return (acpi_gbl_generic_notify[notify_value]); 492 509 } 510 + 
511 + /* 0D - 7F are reserved */ 512 + 513 + if (notify_value <= ACPI_MAX_SYS_NOTIFY) { 514 + return ("Reserved"); 515 + } 516 + 517 + /* 80 - 83 are per-object-type */ 518 + 519 + if (notify_value <= 0x83) { 520 + switch (type) { 521 + case ACPI_TYPE_ANY: 522 + case ACPI_TYPE_DEVICE: 523 + return (acpi_gbl_device_notify[notify_value - 0x80]); 524 + 525 + case ACPI_TYPE_PROCESSOR: 526 + return (acpi_gbl_processor_notify[notify_value - 0x80]); 527 + 528 + case ACPI_TYPE_THERMAL: 529 + return (acpi_gbl_thermal_notify[notify_value - 0x80]); 530 + 531 + default: 532 + return ("Target object type does not support notifies"); 533 + } 534 + } 535 + 536 + /* 84 - BF are device-specific */ 537 + 538 + if (notify_value <= ACPI_MAX_DEVICE_SPECIFIC_NOTIFY) { 539 + return ("Device-Specific"); 540 + } 541 + 542 + /* C0 and above are hardware-specific */ 543 + 544 + return ("Hardware-Specific"); 493 545 } 494 546 #endif 495 547
+1 -25
drivers/acpi/acpica/utglobal.c
··· 55 55 * Static global variable initialization. 56 56 * 57 57 ******************************************************************************/ 58 - /* Debug output control masks */ 59 - u32 acpi_dbg_level = ACPI_DEBUG_DEFAULT; 60 - 61 - u32 acpi_dbg_layer = 0; 62 - 63 - /* acpi_gbl_FADT is a local copy of the FADT, converted to a common format. */ 64 - 65 - struct acpi_table_fadt acpi_gbl_FADT; 66 - u32 acpi_gbl_trace_flags; 67 - acpi_name acpi_gbl_trace_method_name; 68 - u8 acpi_gbl_system_awake_and_running; 69 - u32 acpi_current_gpe_count; 70 - 71 - /* 72 - * ACPI 5.0 introduces the concept of a "reduced hardware platform", meaning 73 - * that the ACPI hardware is no longer required. A flag in the FADT indicates 74 - * a reduced HW machine, and that flag is duplicated here for convenience. 75 - */ 76 - u8 acpi_gbl_reduced_hardware; 77 - 78 58 /* Various state name strings */ 79 - 80 59 const char *acpi_gbl_sleep_state_names[ACPI_S_STATE_COUNT] = { 81 60 "\\_S0_", 82 61 "\\_S1_", ··· 316 337 acpi_gbl_acpi_hardware_present = TRUE; 317 338 acpi_gbl_last_owner_id_index = 0; 318 339 acpi_gbl_next_owner_id_offset = 0; 319 - acpi_gbl_trace_method_name = 0; 320 340 acpi_gbl_trace_dbg_level = 0; 321 341 acpi_gbl_trace_dbg_layer = 0; 322 342 acpi_gbl_debugger_configuration = DEBUGGER_THREADING; ··· 355 377 acpi_gbl_disable_mem_tracking = FALSE; 356 378 #endif 357 379 358 - #ifdef ACPI_DEBUGGER 359 - acpi_gbl_db_terminate_threads = FALSE; 360 - #endif 380 + ACPI_DEBUGGER_EXEC(acpi_gbl_db_terminate_threads = FALSE); 361 381 362 382 return_ACPI_STATUS(AE_OK); 363 383 }
+1 -1
drivers/acpi/acpica/utstring.c
··· 353 353 } 354 354 355 355 acpi_os_printf("\""); 356 - for (i = 0; string[i] && (i < max_length); i++) { 356 + for (i = 0; (i < max_length) && string[i]; i++) { 357 357 358 358 /* Escape sequences */ 359 359
+2
drivers/acpi/acpica/utxferror.c
··· 53 53 * This module is used for the in-kernel ACPICA as well as the ACPICA 54 54 * tools/applications. 55 55 */ 56 + #ifndef ACPI_NO_ERROR_MESSAGES /* Entire module */ 56 57 /******************************************************************************* 57 58 * 58 59 * FUNCTION: acpi_error ··· 250 249 } 251 250 252 251 ACPI_EXPORT_SYMBOL(acpi_bios_warning) 252 + #endif /* ACPI_NO_ERROR_MESSAGES */
+7 -7
drivers/acpi/apei/einj.c
··· 202 202 203 203 if (!offset) 204 204 return; 205 - v = acpi_os_map_memory(paddr + offset, sizeof(*v)); 205 + v = acpi_os_map_iomem(paddr + offset, sizeof(*v)); 206 206 if (!v) 207 207 return; 208 208 sbdf = v->pcie_sbdf; ··· 210 210 sbdf >> 24, (sbdf >> 16) & 0xff, 211 211 (sbdf >> 11) & 0x1f, (sbdf >> 8) & 0x7, 212 212 v->vendor_id, v->device_id, v->rev_id); 213 - acpi_os_unmap_memory(v, sizeof(*v)); 213 + acpi_os_unmap_iomem(v, sizeof(*v)); 214 214 } 215 215 216 216 static void *einj_get_parameter_address(void) ··· 236 236 if (pa_v5) { 237 237 struct set_error_type_with_address *v5param; 238 238 239 - v5param = acpi_os_map_memory(pa_v5, sizeof(*v5param)); 239 + v5param = acpi_os_map_iomem(pa_v5, sizeof(*v5param)); 240 240 if (v5param) { 241 241 acpi5 = 1; 242 242 check_vendor_extension(pa_v5, v5param); ··· 246 246 if (param_extension && pa_v4) { 247 247 struct einj_parameter *v4param; 248 248 249 - v4param = acpi_os_map_memory(pa_v4, sizeof(*v4param)); 249 + v4param = acpi_os_map_iomem(pa_v4, sizeof(*v4param)); 250 250 if (!v4param) 251 251 return NULL; 252 252 if (v4param->reserved1 || v4param->reserved2) { 253 - acpi_os_unmap_memory(v4param, sizeof(*v4param)); 253 + acpi_os_unmap_iomem(v4param, sizeof(*v4param)); 254 254 return NULL; 255 255 } 256 256 return v4param; ··· 794 794 sizeof(struct set_error_type_with_address) : 795 795 sizeof(struct einj_parameter); 796 796 797 - acpi_os_unmap_memory(einj_param, size); 797 + acpi_os_unmap_iomem(einj_param, size); 798 798 } 799 799 apei_exec_post_unmap_gars(&ctx); 800 800 err_release: ··· 816 816 sizeof(struct set_error_type_with_address) : 817 817 sizeof(struct einj_parameter); 818 818 819 - acpi_os_unmap_memory(einj_param, size); 819 + acpi_os_unmap_iomem(einj_param, size); 820 820 } 821 821 einj_exec_ctx_init(&ctx); 822 822 apei_exec_post_unmap_gars(&ctx);
+64 -13
drivers/acpi/battery.c
··· 56 56 /* Battery power unit: 0 means mW, 1 means mA */ 57 57 #define ACPI_BATTERY_POWER_UNIT_MA 1 58 58 59 + #define ACPI_BATTERY_STATE_DISCHARGING 0x1 60 + #define ACPI_BATTERY_STATE_CHARGING 0x2 61 + #define ACPI_BATTERY_STATE_CRITICAL 0x4 62 + 59 63 #define _COMPONENT ACPI_BATTERY_COMPONENT 60 64 61 65 ACPI_MODULE_NAME("battery"); ··· 173 169 174 170 static int acpi_battery_is_charged(struct acpi_battery *battery) 175 171 { 176 - /* either charging or discharging */ 172 + /* charging, discharging or critical low */ 177 173 if (battery->state != 0) 178 174 return 0; 179 175 ··· 208 204 return -ENODEV; 209 205 switch (psp) { 210 206 case POWER_SUPPLY_PROP_STATUS: 211 - if (battery->state & 0x01) 207 + if (battery->state & ACPI_BATTERY_STATE_DISCHARGING) 212 208 val->intval = POWER_SUPPLY_STATUS_DISCHARGING; 213 - else if (battery->state & 0x02) 209 + else if (battery->state & ACPI_BATTERY_STATE_CHARGING) 214 210 val->intval = POWER_SUPPLY_STATUS_CHARGING; 215 211 else if (acpi_battery_is_charged(battery)) 216 212 val->intval = POWER_SUPPLY_STATUS_FULL; ··· 273 269 else 274 270 val->intval = 0; 275 271 break; 272 + case POWER_SUPPLY_PROP_CAPACITY_LEVEL: 273 + if (battery->state & ACPI_BATTERY_STATE_CRITICAL) 274 + val->intval = POWER_SUPPLY_CAPACITY_LEVEL_CRITICAL; 275 + else if (test_bit(ACPI_BATTERY_ALARM_PRESENT, &battery->flags) && 276 + (battery->capacity_now <= battery->alarm)) 277 + val->intval = POWER_SUPPLY_CAPACITY_LEVEL_LOW; 278 + else if (acpi_battery_is_charged(battery)) 279 + val->intval = POWER_SUPPLY_CAPACITY_LEVEL_FULL; 280 + else 281 + val->intval = POWER_SUPPLY_CAPACITY_LEVEL_NORMAL; 282 + break; 276 283 case POWER_SUPPLY_PROP_MODEL_NAME: 277 284 val->strval = battery->model_number; 278 285 break; ··· 311 296 POWER_SUPPLY_PROP_CHARGE_FULL, 312 297 POWER_SUPPLY_PROP_CHARGE_NOW, 313 298 POWER_SUPPLY_PROP_CAPACITY, 299 + POWER_SUPPLY_PROP_CAPACITY_LEVEL, 314 300 POWER_SUPPLY_PROP_MODEL_NAME, 315 301 POWER_SUPPLY_PROP_MANUFACTURER, 316 302 
POWER_SUPPLY_PROP_SERIAL_NUMBER, ··· 329 313 POWER_SUPPLY_PROP_ENERGY_FULL, 330 314 POWER_SUPPLY_PROP_ENERGY_NOW, 331 315 POWER_SUPPLY_PROP_CAPACITY, 316 + POWER_SUPPLY_PROP_CAPACITY_LEVEL, 332 317 POWER_SUPPLY_PROP_MODEL_NAME, 333 318 POWER_SUPPLY_PROP_MANUFACTURER, 334 319 POWER_SUPPLY_PROP_SERIAL_NUMBER, ··· 622 605 battery->bat.type = POWER_SUPPLY_TYPE_BATTERY; 623 606 battery->bat.get_property = acpi_battery_get_property; 624 607 625 - result = power_supply_register(&battery->device->dev, &battery->bat); 608 + result = power_supply_register_no_ws(&battery->device->dev, &battery->bat); 609 + 626 610 if (result) 627 611 return result; 628 612 return device_create_file(battery->bat.dev, &alarm_attr); ··· 714 696 } 715 697 } 716 698 717 - static int acpi_battery_update(struct acpi_battery *battery) 699 + static int acpi_battery_update(struct acpi_battery *battery, bool resume) 718 700 { 719 701 int result, old_present = acpi_battery_present(battery); 720 702 result = acpi_battery_get_status(battery); ··· 725 707 battery->update_time = 0; 726 708 return 0; 727 709 } 710 + 711 + if (resume) 712 + return 0; 713 + 728 714 if (!battery->update_time || 729 715 old_present != acpi_battery_present(battery)) { 730 716 result = acpi_battery_get_info(battery); ··· 742 720 return result; 743 721 } 744 722 result = acpi_battery_get_state(battery); 723 + if (result) 724 + return result; 745 725 acpi_battery_quirks(battery); 726 + 727 + /* 728 + * Wakeup the system if battery is critical low 729 + * or lower than the alarm level 730 + */ 731 + if ((battery->state & ACPI_BATTERY_STATE_CRITICAL) || 732 + (test_bit(ACPI_BATTERY_ALARM_PRESENT, &battery->flags) && 733 + (battery->capacity_now <= battery->alarm))) 734 + pm_wakeup_event(&battery->device->dev, 0); 735 + 746 736 return result; 747 737 } 748 738 ··· 949 915 static int acpi_battery_read(int fid, struct seq_file *seq) 950 916 { 951 917 struct acpi_battery *battery = seq->private; 952 - int result = 
acpi_battery_update(battery); 918 + int result = acpi_battery_update(battery, false); 953 919 return acpi_print_funcs[fid](seq, result); 954 920 } 955 921 ··· 1064 1030 old = battery->bat.dev; 1065 1031 if (event == ACPI_BATTERY_NOTIFY_INFO) 1066 1032 acpi_battery_refresh(battery); 1067 - acpi_battery_update(battery); 1033 + acpi_battery_update(battery, false); 1068 1034 acpi_bus_generate_netlink_event(device->pnp.device_class, 1069 1035 dev_name(&device->dev), event, 1070 1036 acpi_battery_present(battery)); ··· 1079 1045 { 1080 1046 struct acpi_battery *battery = container_of(nb, struct acpi_battery, 1081 1047 pm_nb); 1048 + int result; 1049 + 1082 1050 switch (mode) { 1083 1051 case PM_POST_HIBERNATION: 1084 1052 case PM_POST_SUSPEND: 1085 - if (battery->bat.dev) { 1086 - sysfs_remove_battery(battery); 1087 - sysfs_add_battery(battery); 1088 - } 1053 + if (!acpi_battery_present(battery)) 1054 + return 0; 1055 + 1056 + if (!battery->bat.dev) { 1057 + result = acpi_battery_get_info(battery); 1058 + if (result) 1059 + return result; 1060 + 1061 + result = sysfs_add_battery(battery); 1062 + if (result) 1063 + return result; 1064 + } else 1065 + acpi_battery_refresh(battery); 1066 + 1067 + acpi_battery_init_alarm(battery); 1068 + acpi_battery_get_state(battery); 1089 1069 break; 1090 1070 } 1091 1071 ··· 1135 1087 mutex_init(&battery->sysfs_lock); 1136 1088 if (acpi_has_method(battery->device->handle, "_BIX")) 1137 1089 set_bit(ACPI_BATTERY_XINFO_PRESENT, &battery->flags); 1138 - result = acpi_battery_update(battery); 1090 + result = acpi_battery_update(battery, false); 1139 1091 if (result) 1140 1092 goto fail; 1141 1093 #ifdef CONFIG_ACPI_PROCFS_POWER ··· 1155 1107 battery->pm_nb.notifier_call = battery_notify; 1156 1108 register_pm_notifier(&battery->pm_nb); 1157 1109 1110 + device_init_wakeup(&device->dev, 1); 1111 + 1158 1112 return result; 1159 1113 1160 1114 fail: ··· 1173 1123 1174 1124 if (!device || !acpi_driver_data(device)) 1175 1125 return -EINVAL; 1126 
+ device_init_wakeup(&device->dev, 0); 1176 1127 battery = acpi_driver_data(device); 1177 1128 unregister_pm_notifier(&battery->pm_nb); 1178 1129 #ifdef CONFIG_ACPI_PROCFS_POWER ··· 1200 1149 return -EINVAL; 1201 1150 1202 1151 battery->update_time = 0; 1203 - acpi_battery_update(battery); 1152 + acpi_battery_update(battery, true); 1204 1153 return 0; 1205 1154 } 1206 1155 #else
+41 -15
drivers/acpi/bus.c
··· 52 52 EXPORT_SYMBOL(acpi_root_dir); 53 53 54 54 #ifdef CONFIG_X86 55 + #ifdef CONFIG_ACPI_CUSTOM_DSDT 56 + static inline int set_copy_dsdt(const struct dmi_system_id *id) 57 + { 58 + return 0; 59 + } 60 + #else 55 61 static int set_copy_dsdt(const struct dmi_system_id *id) 56 62 { 57 63 printk(KERN_NOTICE "%s detected - " ··· 65 59 acpi_gbl_copy_dsdt_locally = 1; 66 60 return 0; 67 61 } 62 + #endif 68 63 69 64 static struct dmi_system_id dsdt_dmi_table[] __initdata = { 70 65 /* ··· 139 132 } 140 133 EXPORT_SYMBOL(acpi_bus_private_data_handler); 141 134 135 + int acpi_bus_attach_private_data(acpi_handle handle, void *data) 136 + { 137 + acpi_status status; 138 + 139 + status = acpi_attach_data(handle, 140 + acpi_bus_private_data_handler, data); 141 + if (ACPI_FAILURE(status)) { 142 + acpi_handle_debug(handle, "Error attaching device data\n"); 143 + return -ENODEV; 144 + } 145 + 146 + return 0; 147 + } 148 + EXPORT_SYMBOL_GPL(acpi_bus_attach_private_data); 149 + 142 150 int acpi_bus_get_private_data(acpi_handle handle, void **data) 143 151 { 144 152 acpi_status status; ··· 162 140 return -EINVAL; 163 141 164 142 status = acpi_get_data(handle, acpi_bus_private_data_handler, data); 165 - if (ACPI_FAILURE(status) || !*data) { 166 - ACPI_DEBUG_PRINT((ACPI_DB_INFO, "No context for object [%p]\n", 167 - handle)); 143 + if (ACPI_FAILURE(status)) { 144 + acpi_handle_debug(handle, "No context for object\n"); 168 145 return -ENODEV; 169 146 } 170 147 171 148 return 0; 172 149 } 173 - EXPORT_SYMBOL(acpi_bus_get_private_data); 150 + EXPORT_SYMBOL_GPL(acpi_bus_get_private_data); 151 + 152 + void acpi_bus_detach_private_data(acpi_handle handle) 153 + { 154 + acpi_detach_data(handle, acpi_bus_private_data_handler); 155 + } 156 + EXPORT_SYMBOL_GPL(acpi_bus_detach_private_data); 174 157 175 158 void acpi_bus_no_hotplug(acpi_handle handle) 176 159 { ··· 367 340 { 368 341 struct acpi_device *adev; 369 342 struct acpi_driver *driver; 370 - acpi_status status; 371 343 u32 ost_code = 
ACPI_OST_SC_NON_SPECIFIC_FAILURE; 344 + bool hotplug_event = false; 372 345 373 346 switch (type) { 374 347 case ACPI_NOTIFY_BUS_CHECK: 375 348 acpi_handle_debug(handle, "ACPI_NOTIFY_BUS_CHECK event\n"); 349 + hotplug_event = true; 376 350 break; 377 351 378 352 case ACPI_NOTIFY_DEVICE_CHECK: 379 353 acpi_handle_debug(handle, "ACPI_NOTIFY_DEVICE_CHECK event\n"); 354 + hotplug_event = true; 380 355 break; 381 356 382 357 case ACPI_NOTIFY_DEVICE_WAKE: ··· 387 358 388 359 case ACPI_NOTIFY_EJECT_REQUEST: 389 360 acpi_handle_debug(handle, "ACPI_NOTIFY_EJECT_REQUEST event\n"); 361 + hotplug_event = true; 390 362 break; 391 363 392 364 case ACPI_NOTIFY_DEVICE_CHECK_LIGHT: ··· 423 393 (driver->flags & ACPI_DRIVER_ALL_NOTIFY_EVENTS)) 424 394 driver->ops.notify(adev, type); 425 395 426 - switch (type) { 427 - case ACPI_NOTIFY_BUS_CHECK: 428 - case ACPI_NOTIFY_DEVICE_CHECK: 429 - case ACPI_NOTIFY_EJECT_REQUEST: 430 - status = acpi_hotplug_schedule(adev, type); 431 - if (ACPI_SUCCESS(status)) 432 - return; 433 - default: 434 - break; 435 - } 396 + if (hotplug_event && ACPI_SUCCESS(acpi_hotplug_schedule(adev, type))) 397 + return; 398 + 436 399 acpi_bus_put_acpi_device(adev); 437 400 return; 438 401 ··· 488 465 return; 489 466 490 467 printk(KERN_INFO PREFIX "Core revision %08x\n", ACPI_CA_VERSION); 468 + 469 + /* It's safe to verify table checksums during late stage */ 470 + acpi_gbl_verify_table_checksum = TRUE; 491 471 492 472 /* enable workarounds, unless strict ACPI spec. compliance */ 493 473 if (!acpi_strict)
+15
drivers/acpi/container.c
··· 41 41 {"", 0}, 42 42 }; 43 43 44 + #ifdef CONFIG_ACPI_CONTAINER 45 + 44 46 static int acpi_container_offline(struct container_dev *cdev) 45 47 { 46 48 struct acpi_device *adev = ACPI_COMPANION(&cdev->dev); ··· 111 109 112 110 void __init acpi_container_init(void) 113 111 { 112 + acpi_scan_add_handler(&container_handler); 113 + } 114 + 115 + #else 116 + 117 + static struct acpi_scan_handler container_handler = { 118 + .ids = container_device_ids, 119 + }; 120 + 121 + void __init acpi_container_init(void) 122 + { 114 123 acpi_scan_add_handler_with_hotplug(&container_handler, "container"); 115 124 } 125 + 126 + #endif /* CONFIG_ACPI_CONTAINER */
+39 -7
drivers/acpi/device_pm.c
··· 900 900 */ 901 901 int acpi_subsys_prepare(struct device *dev) 902 902 { 903 - /* 904 - * Devices having power.ignore_children set may still be necessary for 905 - * suspending their children in the next phase of device suspend. 906 - */ 907 - if (dev->power.ignore_children) 908 - pm_runtime_resume(dev); 903 + struct acpi_device *adev = ACPI_COMPANION(dev); 904 + u32 sys_target; 905 + int ret, state; 909 906 910 - return pm_generic_prepare(dev); 907 + ret = pm_generic_prepare(dev); 908 + if (ret < 0) 909 + return ret; 910 + 911 + if (!adev || !pm_runtime_suspended(dev) 912 + || device_may_wakeup(dev) != !!adev->wakeup.prepare_count) 913 + return 0; 914 + 915 + sys_target = acpi_target_system_state(); 916 + if (sys_target == ACPI_STATE_S0) 917 + return 1; 918 + 919 + if (adev->power.flags.dsw_present) 920 + return 0; 921 + 922 + ret = acpi_dev_pm_get_state(dev, adev, sys_target, NULL, &state); 923 + return !ret && state == adev->power.state; 911 924 } 912 925 EXPORT_SYMBOL_GPL(acpi_subsys_prepare); 926 + 927 + /** 928 + * acpi_subsys_complete - Finalize device's resume during system resume. 929 + * @dev: Device to handle. 930 + */ 931 + void acpi_subsys_complete(struct device *dev) 932 + { 933 + /* 934 + * If the device had been runtime-suspended before the system went into 935 + * the sleep state it is going out of and it has never been resumed till 936 + * now, resume it in case the firmware powered it up. 937 + */ 938 + if (dev->power.direct_complete) 939 + pm_request_resume(dev); 940 + } 941 + EXPORT_SYMBOL_GPL(acpi_subsys_complete); 913 942 914 943 /** 915 944 * acpi_subsys_suspend - Run the device driver's suspend callback. ··· 952 923 pm_runtime_resume(dev); 953 924 return pm_generic_suspend(dev); 954 925 } 926 + EXPORT_SYMBOL_GPL(acpi_subsys_suspend); 955 927 956 928 /** 957 929 * acpi_subsys_suspend_late - Suspend device using ACPI. 
··· 998 968 pm_runtime_resume(dev); 999 969 return pm_generic_freeze(dev); 1000 970 } 971 + EXPORT_SYMBOL_GPL(acpi_subsys_freeze); 1001 972 1002 973 #endif /* CONFIG_PM_SLEEP */ 1003 974 ··· 1010 979 #endif 1011 980 #ifdef CONFIG_PM_SLEEP 1012 981 .prepare = acpi_subsys_prepare, 982 + .complete = acpi_subsys_complete, 1013 983 .suspend = acpi_subsys_suspend, 1014 984 .suspend_late = acpi_subsys_suspend_late, 1015 985 .resume_early = acpi_subsys_resume_early,
+3 -15
drivers/acpi/internal.h
··· 30 30 void acpi_pci_link_init(void); 31 31 void acpi_processor_init(void); 32 32 void acpi_platform_init(void); 33 + void acpi_pnp_init(void); 33 34 int acpi_sysfs_init(void); 34 - #ifdef CONFIG_ACPI_CONTAINER 35 35 void acpi_container_init(void); 36 - #else 37 - static inline void acpi_container_init(void) {} 38 - #endif 36 + void acpi_memory_hotplug_init(void); 39 37 #ifdef CONFIG_ACPI_DOCK 40 38 void register_dock_dependent_device(struct acpi_device *adev, 41 39 acpi_handle dshandle); ··· 44 46 acpi_handle dshandle) {} 45 47 static inline int dock_notify(struct acpi_device *adev, u32 event) { return -ENODEV; } 46 48 static inline void acpi_dock_add(struct acpi_device *adev) {} 47 - #endif 48 - #ifdef CONFIG_ACPI_HOTPLUG_MEMORY 49 - void acpi_memory_hotplug_init(void); 50 - #else 51 - static inline void acpi_memory_hotplug_init(void) {} 52 49 #endif 53 50 #ifdef CONFIG_X86 54 51 void acpi_cmos_rtc_init(void); ··· 65 72 #else 66 73 static inline void acpi_debugfs_init(void) { return; } 67 74 #endif 68 - #ifdef CONFIG_X86_INTEL_LPSS 69 75 void acpi_lpss_init(void); 70 - #else 71 - static inline void acpi_lpss_init(void) {} 72 - #endif 73 76 74 77 acpi_status acpi_hotplug_schedule(struct acpi_device *adev, u32 src); 75 78 bool acpi_queue_hotplug_work(struct work_struct *work); ··· 169 180 -------------------------------------------------------------------------- */ 170 181 struct platform_device; 171 182 172 - int acpi_create_platform_device(struct acpi_device *adev, 173 - const struct acpi_device_id *id); 183 + struct platform_device *acpi_create_platform_device(struct acpi_device *adev); 174 184 175 185 /*-------------------------------------------------------------------------- 176 186 Video
+2 -2
drivers/acpi/nvs.c
··· 139 139 iounmap(entry->kaddr); 140 140 entry->unmap = false; 141 141 } else { 142 - acpi_os_unmap_memory(entry->kaddr, 143 - entry->size); 142 + acpi_os_unmap_iomem(entry->kaddr, 143 + entry->size); 144 144 } 145 145 entry->kaddr = NULL; 146 146 }
+22 -10
drivers/acpi/osl.c
··· 355 355 } 356 356 357 357 void __iomem *__init_refok 358 - acpi_os_map_memory(acpi_physical_address phys, acpi_size size) 358 + acpi_os_map_iomem(acpi_physical_address phys, acpi_size size) 359 359 { 360 360 struct acpi_ioremap *map; 361 361 void __iomem *virt; ··· 401 401 402 402 list_add_tail_rcu(&map->list, &acpi_ioremaps); 403 403 404 - out: 404 + out: 405 405 mutex_unlock(&acpi_ioremap_lock); 406 406 return map->virt + (phys - map->phys); 407 + } 408 + EXPORT_SYMBOL_GPL(acpi_os_map_iomem); 409 + 410 + void *__init_refok 411 + acpi_os_map_memory(acpi_physical_address phys, acpi_size size) 412 + { 413 + return (void *)acpi_os_map_iomem(phys, size); 407 414 } 408 415 EXPORT_SYMBOL_GPL(acpi_os_map_memory); 409 416 ··· 429 422 } 430 423 } 431 424 432 - void __ref acpi_os_unmap_memory(void __iomem *virt, acpi_size size) 425 + void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size) 433 426 { 434 427 struct acpi_ioremap *map; 435 428 ··· 449 442 mutex_unlock(&acpi_ioremap_lock); 450 443 451 444 acpi_os_map_cleanup(map); 445 + } 446 + EXPORT_SYMBOL_GPL(acpi_os_unmap_iomem); 447 + 448 + void __ref acpi_os_unmap_memory(void *virt, acpi_size size) 449 + { 450 + return acpi_os_unmap_iomem((void __iomem *)virt, size); 452 451 } 453 452 EXPORT_SYMBOL_GPL(acpi_os_unmap_memory); 454 453 ··· 477 464 if (!addr || !gas->bit_width) 478 465 return -EINVAL; 479 466 480 - virt = acpi_os_map_memory(addr, gas->bit_width / 8); 467 + virt = acpi_os_map_iomem(addr, gas->bit_width / 8); 481 468 if (!virt) 482 469 return -EIO; 483 470 ··· 1783 1770 } 1784 1771 #endif 1785 1772 1786 - static int __init acpi_no_auto_ssdt_setup(char *s) 1773 + static int __init acpi_no_static_ssdt_setup(char *s) 1787 1774 { 1788 - printk(KERN_NOTICE PREFIX "SSDT auto-load disabled\n"); 1775 + acpi_gbl_disable_ssdt_table_install = TRUE; 1776 + pr_info("ACPI: static SSDT installation disabled\n"); 1789 1777 1790 - acpi_gbl_disable_ssdt_table_load = TRUE; 1791 - 1792 - return 1; 1778 + return 0; 
1793 1779 } 1794 1780 1795 - __setup("acpi_no_auto_ssdt", acpi_no_auto_ssdt_setup); 1781 + early_param("acpi_no_static_ssdt", acpi_no_static_ssdt_setup); 1796 1782 1797 1783 static int __init acpi_disable_return_repair(char *s) 1798 1784 {
+7
drivers/acpi/processor_driver.c
··· 121 121 struct acpi_processor *pr = per_cpu(processors, cpu); 122 122 struct acpi_device *device; 123 123 124 + /* 125 + * CPU_STARTING and CPU_DYING must not sleep. Return here since 126 + * acpi_bus_get_device() may sleep. 127 + */ 128 + if (action == CPU_STARTING || action == CPU_DYING) 129 + return NOTIFY_DONE; 130 + 124 131 if (!pr || acpi_bus_get_device(pr->handle, &device)) 125 132 return NOTIFY_DONE; 126 133
+67 -9
drivers/acpi/scan.c
··· 84 84 85 85 int acpi_scan_add_handler(struct acpi_scan_handler *handler) 86 86 { 87 - if (!handler || !handler->attach) 87 + if (!handler) 88 88 return -EINVAL; 89 89 90 90 list_add_tail(&handler->list_node, &acpi_scan_handlers_list); ··· 1551 1551 */ 1552 1552 if (acpi_has_method(device->handle, "_PSC")) 1553 1553 device->power.flags.explicit_get = 1; 1554 + 1554 1555 if (acpi_has_method(device->handle, "_IRC")) 1555 1556 device->power.flags.inrush_current = 1; 1557 + 1558 + if (acpi_has_method(device->handle, "_DSW")) 1559 + device->power.flags.dsw_present = 1; 1556 1560 1557 1561 /* 1558 1562 * Enumerate supported power management states ··· 1797 1793 return; 1798 1794 } 1799 1795 1800 - if (info->valid & ACPI_VALID_HID) 1796 + if (info->valid & ACPI_VALID_HID) { 1801 1797 acpi_add_id(pnp, info->hardware_id.string); 1798 + pnp->type.platform_id = 1; 1799 + } 1802 1800 if (info->valid & ACPI_VALID_CID) { 1803 1801 cid_list = &info->compatible_id_list; 1804 1802 for (i = 0; i < cid_list->count; i++) ··· 1979 1973 { 1980 1974 const struct acpi_device_id *devid; 1981 1975 1976 + if (handler->match) 1977 + return handler->match(idstr, matchid); 1978 + 1982 1979 for (devid = handler->ids; devid->id[0]; devid++) 1983 1980 if (!strcmp((char *)devid->id, idstr)) { 1984 1981 if (matchid) ··· 2070 2061 return AE_OK; 2071 2062 } 2072 2063 2064 + static int acpi_check_spi_i2c_slave(struct acpi_resource *ares, void *data) 2065 + { 2066 + bool *is_spi_i2c_slave_p = data; 2067 + 2068 + if (ares->type != ACPI_RESOURCE_TYPE_SERIAL_BUS) 2069 + return 1; 2070 + 2071 + /* 2072 + * devices that are connected to UART still need to be enumerated to 2073 + * platform bus 2074 + */ 2075 + if (ares->data.common_serial_bus.type != ACPI_RESOURCE_SERIAL_TYPE_UART) 2076 + *is_spi_i2c_slave_p = true; 2077 + 2078 + /* no need to do more checking */ 2079 + return -1; 2080 + } 2081 + 2082 + static void acpi_default_enumeration(struct acpi_device *device) 2083 + { 2084 + struct list_head 
resource_list; 2085 + bool is_spi_i2c_slave = false; 2086 + 2087 + if (!device->pnp.type.platform_id || device->handler) 2088 + return; 2089 + 2090 + /* 2091 + * Do not enumerate SPI/I2C slaves as they will be enumerated by their 2092 + * respective parents. 2093 + */ 2094 + INIT_LIST_HEAD(&resource_list); 2095 + acpi_dev_get_resources(device, &resource_list, acpi_check_spi_i2c_slave, 2096 + &is_spi_i2c_slave); 2097 + acpi_dev_free_resource_list(&resource_list); 2098 + if (!is_spi_i2c_slave) 2099 + acpi_create_platform_device(device); 2100 + } 2101 + 2073 2102 static int acpi_scan_attach_handler(struct acpi_device *device) 2074 2103 { 2075 2104 struct acpi_hardware_id *hwid; ··· 2119 2072 2120 2073 handler = acpi_scan_match_handler(hwid->id, &devid); 2121 2074 if (handler) { 2075 + if (!handler->attach) { 2076 + device->pnp.type.platform_id = 0; 2077 + continue; 2078 + } 2122 2079 device->handler = handler; 2123 2080 ret = handler->attach(device, devid); 2124 2081 if (ret > 0) ··· 2133 2082 break; 2134 2083 } 2135 2084 } 2085 + if (!ret) 2086 + acpi_default_enumeration(device); 2087 + 2136 2088 return ret; 2137 2089 } 2138 2090 ··· 2295 2241 acpi_pci_root_init(); 2296 2242 acpi_pci_link_init(); 2297 2243 acpi_processor_init(); 2298 - acpi_platform_init(); 2299 2244 acpi_lpss_init(); 2300 2245 acpi_cmos_rtc_init(); 2301 2246 acpi_container_init(); 2302 2247 acpi_memory_hotplug_init(); 2248 + acpi_pnp_init(); 2303 2249 2304 2250 mutex_lock(&acpi_scan_lock); 2305 2251 /* ··· 2313 2259 if (result) 2314 2260 goto out; 2315 2261 2316 - result = acpi_bus_scan_fixed(); 2317 - if (result) { 2318 - acpi_detach_data(acpi_root->handle, acpi_scan_drop_device); 2319 - acpi_device_del(acpi_root); 2320 - put_device(&acpi_root->dev); 2321 - goto out; 2262 + /* Fixed feature devices do not exist on HW-reduced platform */ 2263 + if (!acpi_gbl_reduced_hardware) { 2264 + result = acpi_bus_scan_fixed(); 2265 + if (result) { 2266 + acpi_detach_data(acpi_root->handle, 2267 +
acpi_scan_drop_device); 2268 + acpi_device_del(acpi_root); 2269 + put_device(&acpi_root->dev); 2270 + goto out; 2271 + } 2322 2272 } 2323 2273 2324 2274 acpi_update_all_gpes();
+19
drivers/acpi/sleep.c
··· 89 89 { 90 90 return acpi_target_sleep_state; 91 91 } 92 + EXPORT_SYMBOL_GPL(acpi_target_system_state); 92 93 93 94 static bool pwr_btn_event_pending; 94 95 ··· 612 611 .recover = acpi_pm_finish, 613 612 }; 614 613 614 + static int acpi_freeze_begin(void) 615 + { 616 + acpi_scan_lock_acquire(); 617 + return 0; 618 + } 619 + 620 + static void acpi_freeze_end(void) 621 + { 622 + acpi_scan_lock_release(); 623 + } 624 + 625 + static const struct platform_freeze_ops acpi_freeze_ops = { 626 + .begin = acpi_freeze_begin, 627 + .end = acpi_freeze_end, 628 + }; 629 + 615 630 static void acpi_sleep_suspend_setup(void) 616 631 { 617 632 int i; ··· 638 621 639 622 suspend_set_ops(old_suspend_ordering ? 640 623 &acpi_suspend_ops_old : &acpi_suspend_ops); 624 + freeze_set_ops(&acpi_freeze_ops); 641 625 } 626 + 642 627 #else /* !CONFIG_SUSPEND */ 643 628 static inline void acpi_sleep_suspend_setup(void) {} 644 629 #endif /* !CONFIG_SUSPEND */
+23
drivers/acpi/tables.c
··· 44 44 45 45 static int acpi_apic_instance __initdata; 46 46 47 + /* 48 + * Disable table checksum verification for the early stage due to the size 49 + * limitation of the current x86 early mapping implementation. 50 + */ 51 + static bool acpi_verify_table_checksum __initdata = false; 52 + 47 53 void acpi_table_print_madt_entry(struct acpi_subtable_header *header) 48 54 { 49 55 if (!header) ··· 339 333 { 340 334 acpi_status status; 341 335 336 + if (acpi_verify_table_checksum) { 337 + pr_info("Early table checksum verification enabled\n"); 338 + acpi_gbl_verify_table_checksum = TRUE; 339 + } else { 340 + pr_info("Early table checksum verification disabled\n"); 341 + acpi_gbl_verify_table_checksum = FALSE; 342 + } 343 + 342 344 status = acpi_initialize_tables(initial_tables, ACPI_MAX_TABLES, 0); 343 345 if (ACPI_FAILURE(status)) 344 346 return -EINVAL; ··· 368 354 } 369 355 370 356 early_param("acpi_apic_instance", acpi_parse_apic_instance); 357 + 358 + static int __init acpi_force_table_verification_setup(char *s) 359 + { 360 + acpi_verify_table_checksum = true; 361 + 362 + return 0; 363 + } 364 + 365 + early_param("acpi_force_table_verification", acpi_force_table_verification_setup);
+4 -7
drivers/acpi/thermal.c
··· 925 925 if (result) 926 926 return result; 927 927 928 - status = acpi_attach_data(tz->device->handle, 929 - acpi_bus_private_data_handler, 930 - tz->thermal_zone); 931 - if (ACPI_FAILURE(status)) { 932 - pr_err(PREFIX "Error attaching device data\n"); 928 + status = acpi_bus_attach_private_data(tz->device->handle, 929 + tz->thermal_zone); 930 + if (ACPI_FAILURE(status)) 933 931 return -ENODEV; 934 - } 935 932 936 933 tz->tz_enabled = 1; 937 934 ··· 943 946 sysfs_remove_link(&tz->thermal_zone->device.kobj, "device"); 944 947 thermal_zone_device_unregister(tz->thermal_zone); 945 948 tz->thermal_zone = NULL; 946 - acpi_detach_data(tz->device->handle, acpi_bus_private_data_handler); 949 + acpi_bus_detach_private_data(tz->device->handle); 947 950 } 948 951 949 952
+52 -12
drivers/acpi/utils.c
··· 30 30 #include <linux/types.h> 31 31 #include <linux/hardirq.h> 32 32 #include <linux/acpi.h> 33 + #include <linux/dynamic_debug.h> 33 34 34 35 #include "internal.h" 35 36 ··· 458 457 EXPORT_SYMBOL(acpi_evaluate_ost); 459 458 460 459 /** 460 + * acpi_handle_path: Return the object path of handle 461 + * 462 + * Caller must free the returned buffer 463 + */ 464 + static char *acpi_handle_path(acpi_handle handle) 465 + { 466 + struct acpi_buffer buffer = { 467 + .length = ACPI_ALLOCATE_BUFFER, 468 + .pointer = NULL 469 + }; 470 + 471 + if (in_interrupt() || 472 + acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer) != AE_OK) 473 + return NULL; 474 + return buffer.pointer; 475 + } 476 + 477 + /** 461 478 * acpi_handle_printk: Print message with ACPI prefix and object path 462 479 * 463 480 * This function is called through acpi_handle_<level> macros and prints ··· 488 469 { 489 470 struct va_format vaf; 490 471 va_list args; 491 - struct acpi_buffer buffer = { 492 - .length = ACPI_ALLOCATE_BUFFER, 493 - .pointer = NULL 494 - }; 495 472 const char *path; 496 473 497 474 va_start(args, fmt); 498 475 vaf.fmt = fmt; 499 476 vaf.va = &args; 500 477 501 - if (in_interrupt() || 502 - acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer) != AE_OK) 503 - path = "<n/a>"; 504 - else 505 - path = buffer.pointer; 506 - 507 - printk("%sACPI: %s: %pV", level, path, &vaf); 478 + path = acpi_handle_path(handle); 479 + printk("%sACPI: %s: %pV", level, path ? path : "<n/a>" , &vaf); 508 480 509 481 va_end(args); 510 - kfree(buffer.pointer); 482 + kfree(path); 511 483 } 512 484 EXPORT_SYMBOL(acpi_handle_printk); 485 + 486 + #if defined(CONFIG_DYNAMIC_DEBUG) 487 + /** 488 + * __acpi_handle_debug: pr_debug with ACPI prefix and object path 489 + * 490 + * This function is called through acpi_handle_debug macro and debug 491 + * prints a message with ACPI prefix and object path. This function 492 + * acquires the global namespace mutex to obtain an object path. 
In 493 + * interrupt context, it shows the object path as <n/a>. 494 + */ 495 + void 496 + __acpi_handle_debug(struct _ddebug *descriptor, acpi_handle handle, 497 + const char *fmt, ...) 498 + { 499 + struct va_format vaf; 500 + va_list args; 501 + const char *path; 502 + 503 + va_start(args, fmt); 504 + vaf.fmt = fmt; 505 + vaf.va = &args; 506 + 507 + path = acpi_handle_path(handle); 508 + __dynamic_pr_debug(descriptor, "ACPI: %s: %pV", path ? path : "<n/a>", &vaf); 509 + 510 + va_end(args); 511 + kfree(path); 512 + } 513 + EXPORT_SYMBOL(__acpi_handle_debug); 514 + #endif 513 515 514 516 /** 515 517 * acpi_has_method: Check whether @handle has a method named @name
+189 -69
drivers/acpi/video.c
··· 68 68 MODULE_DESCRIPTION("ACPI Video Driver"); 69 69 MODULE_LICENSE("GPL"); 70 70 71 - static bool brightness_switch_enabled = 1; 71 + static bool brightness_switch_enabled; 72 72 module_param(brightness_switch_enabled, bool, 0644); 73 73 74 74 /* ··· 150 150 151 151 struct acpi_video_bus { 152 152 struct acpi_device *device; 153 + bool backlight_registered; 154 + bool backlight_notifier_registered; 153 155 u8 dos_setting; 154 156 struct acpi_video_enumerated_device *attached_array; 155 157 u8 attached_count; ··· 163 161 struct input_dev *input; 164 162 char phys[32]; /* for input device */ 165 163 struct notifier_block pm_nb; 164 + struct notifier_block backlight_nb; 166 165 }; 167 166 168 167 struct acpi_video_device_flags { ··· 476 473 }, 477 474 { 478 475 .callback = video_set_use_native_backlight, 476 + .ident = "ThinkPad W530", 477 + .matches = { 478 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 479 + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W530"), 480 + }, 481 + }, 482 + { 483 + .callback = video_set_use_native_backlight, 479 484 .ident = "ThinkPad X1 Carbon", 480 485 .matches = { 481 486 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), ··· 496 485 .matches = { 497 486 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 498 487 DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo IdeaPad Yoga 13"), 488 + }, 489 + }, 490 + { 491 + .callback = video_set_use_native_backlight, 492 + .ident = "Lenovo Yoga 2 11", 493 + .matches = { 494 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 495 + DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Yoga 2 11"), 499 496 }, 500 497 }, 501 498 { ··· 540 521 }, 541 522 { 542 523 .callback = video_set_use_native_backlight, 524 + .ident = "Acer Aspire V5-171", 525 + .matches = { 526 + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), 527 + DMI_MATCH(DMI_PRODUCT_NAME, "V5-171"), 528 + }, 529 + }, 530 + { 531 + .callback = video_set_use_native_backlight, 543 532 .ident = "Acer Aspire V5-431", 544 533 .matches = { 545 534 DMI_MATCH(DMI_SYS_VENDOR, "Acer"), 546 535 DMI_MATCH(DMI_PRODUCT_NAME, "Aspire V5-431"), 
536 + }, 537 + }, 538 + { 539 + .callback = video_set_use_native_backlight, 540 + .ident = "Acer Aspire V5-471G", 541 + .matches = { 542 + DMI_MATCH(DMI_BOARD_VENDOR, "Acer"), 543 + DMI_MATCH(DMI_PRODUCT_NAME, "Aspire V5-471G"), 547 544 }, 548 545 }, 549 546 { ··· 610 575 .matches = { 611 576 DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 612 577 DMI_MATCH(DMI_PRODUCT_NAME, "HP ZBook 17"), 578 + }, 579 + }, 580 + { 581 + .callback = video_set_use_native_backlight, 582 + .ident = "HP EliteBook 8470p", 583 + .matches = { 584 + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 585 + DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 8470p"), 613 586 }, 614 587 }, 615 588 { ··· 1701 1658 1702 1659 static void acpi_video_dev_register_backlight(struct acpi_video_device *device) 1703 1660 { 1704 - if (acpi_video_verify_backlight_support()) { 1705 - struct backlight_properties props; 1706 - struct pci_dev *pdev; 1707 - acpi_handle acpi_parent; 1708 - struct device *parent = NULL; 1709 - int result; 1710 - static int count; 1711 - char *name; 1661 + struct backlight_properties props; 1662 + struct pci_dev *pdev; 1663 + acpi_handle acpi_parent; 1664 + struct device *parent = NULL; 1665 + int result; 1666 + static int count; 1667 + char *name; 1712 1668 1713 - result = acpi_video_init_brightness(device); 1714 - if (result) 1715 - return; 1716 - name = kasprintf(GFP_KERNEL, "acpi_video%d", count); 1717 - if (!name) 1718 - return; 1719 - count++; 1669 + result = acpi_video_init_brightness(device); 1670 + if (result) 1671 + return; 1672 + name = kasprintf(GFP_KERNEL, "acpi_video%d", count); 1673 + if (!name) 1674 + return; 1675 + count++; 1720 1676 1721 - acpi_get_parent(device->dev->handle, &acpi_parent); 1677 + acpi_get_parent(device->dev->handle, &acpi_parent); 1722 1678 1723 - pdev = acpi_get_pci_dev(acpi_parent); 1724 - if (pdev) { 1725 - parent = &pdev->dev; 1726 - pci_dev_put(pdev); 1727 - } 1728 - 1729 - memset(&props, 0, sizeof(struct backlight_properties)); 1730 - props.type = 
BACKLIGHT_FIRMWARE; 1731 - props.max_brightness = device->brightness->count - 3; 1732 - device->backlight = backlight_device_register(name, 1733 - parent, 1734 - device, 1735 - &acpi_backlight_ops, 1736 - &props); 1737 - kfree(name); 1738 - if (IS_ERR(device->backlight)) 1739 - return; 1740 - 1741 - /* 1742 - * Save current brightness level in case we have to restore it 1743 - * before acpi_video_device_lcd_set_level() is called next time. 1744 - */ 1745 - device->backlight->props.brightness = 1746 - acpi_video_get_brightness(device->backlight); 1747 - 1748 - device->cooling_dev = thermal_cooling_device_register("LCD", 1749 - device->dev, &video_cooling_ops); 1750 - if (IS_ERR(device->cooling_dev)) { 1751 - /* 1752 - * Set cooling_dev to NULL so we don't crash trying to 1753 - * free it. 1754 - * Also, why the hell we are returning early and 1755 - * not attempt to register video output if cooling 1756 - * device registration failed? 1757 - * -- dtor 1758 - */ 1759 - device->cooling_dev = NULL; 1760 - return; 1761 - } 1762 - 1763 - dev_info(&device->dev->dev, "registered as cooling_device%d\n", 1764 - device->cooling_dev->id); 1765 - result = sysfs_create_link(&device->dev->dev.kobj, 1766 - &device->cooling_dev->device.kobj, 1767 - "thermal_cooling"); 1768 - if (result) 1769 - printk(KERN_ERR PREFIX "Create sysfs link\n"); 1770 - result = sysfs_create_link(&device->cooling_dev->device.kobj, 1771 - &device->dev->dev.kobj, "device"); 1772 - if (result) 1773 - printk(KERN_ERR PREFIX "Create sysfs link\n"); 1679 + pdev = acpi_get_pci_dev(acpi_parent); 1680 + if (pdev) { 1681 + parent = &pdev->dev; 1682 + pci_dev_put(pdev); 1774 1683 } 1684 + 1685 + memset(&props, 0, sizeof(struct backlight_properties)); 1686 + props.type = BACKLIGHT_FIRMWARE; 1687 + props.max_brightness = device->brightness->count - 3; 1688 + device->backlight = backlight_device_register(name, 1689 + parent, 1690 + device, 1691 + &acpi_backlight_ops, 1692 + &props); 1693 + kfree(name); 1694 + if 
(IS_ERR(device->backlight)) 1695 + return; 1696 + 1697 + /* 1698 + * Save current brightness level in case we have to restore it 1699 + * before acpi_video_device_lcd_set_level() is called next time. 1700 + */ 1701 + device->backlight->props.brightness = 1702 + acpi_video_get_brightness(device->backlight); 1703 + 1704 + device->cooling_dev = thermal_cooling_device_register("LCD", 1705 + device->dev, &video_cooling_ops); 1706 + if (IS_ERR(device->cooling_dev)) { 1707 + /* 1708 + * Set cooling_dev to NULL so we don't crash trying to free it. 1709 + * Also, why the hell we are returning early and not attempt to 1710 + * register video output if cooling device registration failed? 1711 + * -- dtor 1712 + */ 1713 + device->cooling_dev = NULL; 1714 + return; 1715 + } 1716 + 1717 + dev_info(&device->dev->dev, "registered as cooling_device%d\n", 1718 + device->cooling_dev->id); 1719 + result = sysfs_create_link(&device->dev->dev.kobj, 1720 + &device->cooling_dev->device.kobj, 1721 + "thermal_cooling"); 1722 + if (result) 1723 + printk(KERN_ERR PREFIX "Create sysfs link\n"); 1724 + result = sysfs_create_link(&device->cooling_dev->device.kobj, 1725 + &device->dev->dev.kobj, "device"); 1726 + if (result) 1727 + printk(KERN_ERR PREFIX "Create sysfs link\n"); 1775 1728 } 1776 1729 1777 1730 static int acpi_video_bus_register_backlight(struct acpi_video_bus *video) 1778 1731 { 1779 1732 struct acpi_video_device *dev; 1780 1733 1734 + if (video->backlight_registered) 1735 + return 0; 1736 + 1737 + if (!acpi_video_verify_backlight_support()) 1738 + return 0; 1739 + 1781 1740 mutex_lock(&video->device_list_lock); 1782 1741 list_for_each_entry(dev, &video->video_device_list, entry) 1783 1742 acpi_video_dev_register_backlight(dev); 1784 1743 mutex_unlock(&video->device_list_lock); 1744 + 1745 + video->backlight_registered = true; 1785 1746 1786 1747 video->pm_nb.notifier_call = acpi_video_resume; 1787 1748 video->pm_nb.priority = 0; ··· 1814 1767 static int 
acpi_video_bus_unregister_backlight(struct acpi_video_bus *video) 1815 1768 { 1816 1769 struct acpi_video_device *dev; 1817 - int error = unregister_pm_notifier(&video->pm_nb); 1770 + int error; 1771 + 1772 + if (!video->backlight_registered) 1773 + return 0; 1774 + 1775 + error = unregister_pm_notifier(&video->pm_nb); 1818 1776 1819 1777 mutex_lock(&video->device_list_lock); 1820 1778 list_for_each_entry(dev, &video->video_device_list, entry) 1821 1779 acpi_video_dev_unregister_backlight(dev); 1822 1780 mutex_unlock(&video->device_list_lock); 1781 + 1782 + video->backlight_registered = false; 1823 1783 1824 1784 return error; 1825 1785 } ··· 1921 1867 video->input = NULL; 1922 1868 } 1923 1869 1870 + static int acpi_video_backlight_notify(struct notifier_block *nb, 1871 + unsigned long val, void *bd) 1872 + { 1873 + struct backlight_device *backlight = bd; 1874 + struct acpi_video_bus *video; 1875 + 1876 + /* acpi_video_verify_backlight_support only cares about raw devices */ 1877 + if (backlight->props.type != BACKLIGHT_RAW) 1878 + return NOTIFY_DONE; 1879 + 1880 + video = container_of(nb, struct acpi_video_bus, backlight_nb); 1881 + 1882 + switch (val) { 1883 + case BACKLIGHT_REGISTERED: 1884 + if (!acpi_video_verify_backlight_support()) 1885 + acpi_video_bus_unregister_backlight(video); 1886 + break; 1887 + case BACKLIGHT_UNREGISTERED: 1888 + acpi_video_bus_register_backlight(video); 1889 + break; 1890 + } 1891 + 1892 + return NOTIFY_OK; 1893 + } 1894 + 1895 + static int acpi_video_bus_add_backlight_notify_handler( 1896 + struct acpi_video_bus *video) 1897 + { 1898 + int error; 1899 + 1900 + video->backlight_nb.notifier_call = acpi_video_backlight_notify; 1901 + video->backlight_nb.priority = 0; 1902 + error = backlight_register_notifier(&video->backlight_nb); 1903 + if (error == 0) 1904 + video->backlight_notifier_registered = true; 1905 + 1906 + return error; 1907 + } 1908 + 1909 + static int acpi_video_bus_remove_backlight_notify_handler( 1910 + struct 
acpi_video_bus *video) 1911 + { 1912 + if (!video->backlight_notifier_registered) 1913 + return 0; 1914 + 1915 + video->backlight_notifier_registered = false; 1916 + 1917 + return backlight_unregister_notifier(&video->backlight_nb); 1918 + } 1919 + 1924 1920 static int acpi_video_bus_put_devices(struct acpi_video_bus *video) 1925 1921 { 1926 1922 struct acpi_video_device *dev, *next; ··· 2052 1948 2053 1949 acpi_video_bus_register_backlight(video); 2054 1950 acpi_video_bus_add_notify_handler(video); 1951 + acpi_video_bus_add_backlight_notify_handler(video); 2055 1952 2056 1953 return 0; 2057 1954 ··· 2076 1971 2077 1972 video = acpi_driver_data(device); 2078 1973 1974 + acpi_video_bus_remove_backlight_notify_handler(video); 2079 1975 acpi_video_bus_remove_notify_handler(video); 2080 1976 acpi_video_bus_unregister_backlight(video); 2081 1977 acpi_video_bus_put_devices(video); ··· 2166 2060 return; 2167 2061 } 2168 2062 EXPORT_SYMBOL(acpi_video_unregister); 2063 + 2064 + void acpi_video_unregister_backlight(void) 2065 + { 2066 + struct acpi_video_bus *video; 2067 + 2068 + if (!register_count) 2069 + return; 2070 + 2071 + mutex_lock(&video_list_lock); 2072 + list_for_each_entry(video, &video_bus_head, entry) 2073 + acpi_video_bus_unregister_backlight(video); 2074 + mutex_unlock(&video_list_lock); 2075 + } 2076 + EXPORT_SYMBOL(acpi_video_unregister_backlight); 2169 2077 2170 2078 /* 2171 2079 * This is kind of nasty. Hardware using Intel chipsets may require
+51 -15
drivers/base/power/main.c
··· 479 479 TRACE_DEVICE(dev); 480 480 TRACE_RESUME(0); 481 481 482 - if (dev->power.syscore) 482 + if (dev->power.syscore || dev->power.direct_complete) 483 483 goto Out; 484 484 485 485 if (!dev->power.is_noirq_suspended) ··· 605 605 TRACE_DEVICE(dev); 606 606 TRACE_RESUME(0); 607 607 608 - if (dev->power.syscore) 608 + if (dev->power.syscore || dev->power.direct_complete) 609 609 goto Out; 610 610 611 611 if (!dev->power.is_late_suspended) ··· 734 734 735 735 if (dev->power.syscore) 736 736 goto Complete; 737 + 738 + if (dev->power.direct_complete) { 739 + /* Match the pm_runtime_disable() in __device_suspend(). */ 740 + pm_runtime_enable(dev); 741 + goto Complete; 742 + } 737 743 738 744 dpm_wait(dev->parent, async); 739 745 dpm_watchdog_set(&wd, dev); ··· 1013 1007 goto Complete; 1014 1008 } 1015 1009 1016 - if (dev->power.syscore) 1010 + if (dev->power.syscore || dev->power.direct_complete) 1017 1011 goto Complete; 1018 1012 1019 1013 dpm_wait_for_children(dev, async); ··· 1152 1146 goto Complete; 1153 1147 } 1154 1148 1155 - if (dev->power.syscore) 1149 + if (dev->power.syscore || dev->power.direct_complete) 1156 1150 goto Complete; 1157 1151 1158 1152 dpm_wait_for_children(dev, async); ··· 1338 1332 if (dev->power.syscore) 1339 1333 goto Complete; 1340 1334 1335 + if (dev->power.direct_complete) { 1336 + if (pm_runtime_status_suspended(dev)) { 1337 + pm_runtime_disable(dev); 1338 + if (pm_runtime_suspended_if_enabled(dev)) 1339 + goto Complete; 1340 + 1341 + pm_runtime_enable(dev); 1342 + } 1343 + dev->power.direct_complete = false; 1344 + } 1345 + 1341 1346 dpm_watchdog_set(&wd, dev); 1342 1347 device_lock(dev); 1343 1348 ··· 1399 1382 1400 1383 End: 1401 1384 if (!error) { 1385 + struct device *parent = dev->parent; 1386 + 1402 1387 dev->power.is_suspended = true; 1403 - if (dev->power.wakeup_path 1404 - && dev->parent && !dev->parent->power.ignore_children) 1405 - dev->parent->power.wakeup_path = true; 1388 + if (parent) { 1389 + 
spin_lock_irq(&parent->power.lock); 1390 + 1391 + dev->parent->power.direct_complete = false; 1392 + if (dev->power.wakeup_path 1393 + && !dev->parent->power.ignore_children) 1394 + dev->parent->power.wakeup_path = true; 1395 + 1396 + spin_unlock_irq(&parent->power.lock); 1397 + } 1406 1398 } 1407 1399 1408 1400 device_unlock(dev); ··· 1513 1487 { 1514 1488 int (*callback)(struct device *) = NULL; 1515 1489 char *info = NULL; 1516 - int error = 0; 1490 + int ret = 0; 1517 1491 1518 1492 if (dev->power.syscore) 1519 1493 return 0; ··· 1549 1523 callback = dev->driver->pm->prepare; 1550 1524 } 1551 1525 1552 - if (callback) { 1553 - error = callback(dev); 1554 - suspend_report_result(callback, error); 1555 - } 1526 + if (callback) 1527 + ret = callback(dev); 1556 1528 1557 1529 device_unlock(dev); 1558 1530 1559 - if (error) 1531 + if (ret < 0) { 1532 + suspend_report_result(callback, ret); 1560 1533 pm_runtime_put(dev); 1561 - 1562 - return error; 1534 + return ret; 1535 + } 1536 + /* 1537 + * A positive return value from ->prepare() means "this device appears 1538 + * to be runtime-suspended and its state is fine, so if it really is 1539 + * runtime-suspended, you can leave it in that state provided that you 1540 + * will do the same thing with all of its descendants". This only 1541 + * applies to suspend transitions, however. 1542 + */ 1543 + spin_lock_irq(&dev->power.lock); 1544 + dev->power.direct_complete = ret > 0 && state.event == PM_EVENT_SUSPEND; 1545 + spin_unlock_irq(&dev->power.lock); 1546 + return 0; 1563 1547 } 1564 1548 1565 1549 /**
+26 -96
drivers/base/power/opp.c
··· 15 15 #include <linux/errno.h> 16 16 #include <linux/err.h> 17 17 #include <linux/slab.h> 18 - #include <linux/cpufreq.h> 19 18 #include <linux/device.h> 20 19 #include <linux/list.h> 21 20 #include <linux/rculist.h> ··· 393 394 * to keep the integrity of the internal data structures. Callers should ensure 394 395 * that this function is *NOT* called under RCU protection or in contexts where 395 396 * mutex cannot be locked. 397 + * 398 + * Return: 399 + * 0: On success OR 400 + * Duplicate OPPs (both freq and volt are same) and opp->available 401 + * -EEXIST: Freq are same and volt are different OR 402 + * Duplicate OPPs (both freq and volt are same) and !opp->available 403 + * -ENOMEM: Memory allocation failure 396 404 */ 397 405 int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt) 398 406 { ··· 449 443 new_opp->u_volt = u_volt; 450 444 new_opp->available = true; 451 445 452 - /* Insert new OPP in order of increasing frequency */ 446 + /* 447 + * Insert new OPP in order of increasing frequency 448 + * and discard if already present 449 + */ 453 450 head = &dev_opp->opp_list; 454 451 list_for_each_entry_rcu(opp, &dev_opp->opp_list, node) { 455 - if (new_opp->rate < opp->rate) 452 + if (new_opp->rate <= opp->rate) 456 453 break; 457 454 else 458 455 head = &opp->node; 456 + } 457 + 458 + /* Duplicate OPPs ? */ 459 + if (new_opp->rate == opp->rate) { 460 + int ret = opp->available && new_opp->u_volt == opp->u_volt ? 461 + 0 : -EEXIST; 462 + 463 + dev_warn(dev, "%s: duplicate OPPs detected. Existing: freq: %lu, volt: %lu, enabled: %d. 
New: freq: %lu, volt: %lu, enabled: %d\n", 464 + __func__, opp->rate, opp->u_volt, opp->available, 465 + new_opp->rate, new_opp->u_volt, new_opp->available); 466 + mutex_unlock(&dev_opp_list_lock); 467 + kfree(new_opp); 468 + return ret; 459 469 } 460 470 461 471 list_add_rcu(&new_opp->node, head); ··· 618 596 } 619 597 EXPORT_SYMBOL_GPL(dev_pm_opp_disable); 620 598 621 - #ifdef CONFIG_CPU_FREQ 622 - /** 623 - * dev_pm_opp_init_cpufreq_table() - create a cpufreq table for a device 624 - * @dev: device for which we do this operation 625 - * @table: Cpufreq table returned back to caller 626 - * 627 - * Generate a cpufreq table for a provided device- this assumes that the 628 - * opp list is already initialized and ready for usage. 629 - * 630 - * This function allocates required memory for the cpufreq table. It is 631 - * expected that the caller does the required maintenance such as freeing 632 - * the table as required. 633 - * 634 - * Returns -EINVAL for bad pointers, -ENODEV if the device is not found, -ENOMEM 635 - * if no memory available for the operation (table is not populated), returns 0 636 - * if successful and table is populated. 637 - * 638 - * WARNING: It is important for the callers to ensure refreshing their copy of 639 - * the table if any of the mentioned functions have been invoked in the interim. 640 - * 641 - * Locking: The internal device_opp and opp structures are RCU protected. 642 - * To simplify the logic, we pretend we are updater and hold relevant mutex here 643 - * Callers should ensure that this function is *NOT* called under RCU protection 644 - * or in contexts where mutex locking cannot be used. 
645 - */ 646 - int dev_pm_opp_init_cpufreq_table(struct device *dev, 647 - struct cpufreq_frequency_table **table) 648 - { 649 - struct device_opp *dev_opp; 650 - struct dev_pm_opp *opp; 651 - struct cpufreq_frequency_table *freq_table; 652 - int i = 0; 653 - 654 - /* Pretend as if I am an updater */ 655 - mutex_lock(&dev_opp_list_lock); 656 - 657 - dev_opp = find_device_opp(dev); 658 - if (IS_ERR(dev_opp)) { 659 - int r = PTR_ERR(dev_opp); 660 - mutex_unlock(&dev_opp_list_lock); 661 - dev_err(dev, "%s: Device OPP not found (%d)\n", __func__, r); 662 - return r; 663 - } 664 - 665 - freq_table = kzalloc(sizeof(struct cpufreq_frequency_table) * 666 - (dev_pm_opp_get_opp_count(dev) + 1), GFP_KERNEL); 667 - if (!freq_table) { 668 - mutex_unlock(&dev_opp_list_lock); 669 - dev_warn(dev, "%s: Unable to allocate frequency table\n", 670 - __func__); 671 - return -ENOMEM; 672 - } 673 - 674 - list_for_each_entry(opp, &dev_opp->opp_list, node) { 675 - if (opp->available) { 676 - freq_table[i].driver_data = i; 677 - freq_table[i].frequency = opp->rate / 1000; 678 - i++; 679 - } 680 - } 681 - mutex_unlock(&dev_opp_list_lock); 682 - 683 - freq_table[i].driver_data = i; 684 - freq_table[i].frequency = CPUFREQ_TABLE_END; 685 - 686 - *table = &freq_table[0]; 687 - 688 - return 0; 689 - } 690 - EXPORT_SYMBOL_GPL(dev_pm_opp_init_cpufreq_table); 691 - 692 - /** 693 - * dev_pm_opp_free_cpufreq_table() - free the cpufreq table 694 - * @dev: device for which we do this operation 695 - * @table: table to free 696 - * 697 - * Free up the table allocated by dev_pm_opp_init_cpufreq_table 698 - */ 699 - void dev_pm_opp_free_cpufreq_table(struct device *dev, 700 - struct cpufreq_frequency_table **table) 701 - { 702 - if (!table) 703 - return; 704 - 705 - kfree(*table); 706 - *table = NULL; 707 - } 708 - EXPORT_SYMBOL_GPL(dev_pm_opp_free_cpufreq_table); 709 - #endif /* CONFIG_CPU_FREQ */ 710 - 711 599 /** 712 600 * dev_pm_opp_get_notifier() - find notifier_head of the device with opp 713 601 * 
@dev: device pointer used to lookup device OPPs. ··· 666 734 unsigned long freq = be32_to_cpup(val++) * 1000; 667 735 unsigned long volt = be32_to_cpup(val++); 668 736 669 - if (dev_pm_opp_add(dev, freq, volt)) { 737 + if (dev_pm_opp_add(dev, freq, volt)) 670 738 dev_warn(dev, "%s: Failed to add OPP %ld\n", 671 739 __func__, freq); 672 - continue; 673 - } 674 740 nr -= 2; 675 741 } 676 742
+6
drivers/base/power/wakeup.c
··· 318 318 { 319 319 int ret = 0; 320 320 321 + if (!dev) 322 + return -EINVAL; 323 + 321 324 if (enable) { 322 325 device_set_wakeup_capable(dev, true); 323 326 ret = device_wakeup_enable(dev); 324 327 } else { 328 + if (dev->power.can_wakeup) 329 + device_wakeup_disable(dev); 330 + 325 331 device_set_wakeup_capable(dev, false); 326 332 } 327 333
+2 -2
drivers/char/tpm/tpm_acpi.c
··· 95 95 96 96 log->bios_event_log_end = log->bios_event_log + len; 97 97 98 - virt = acpi_os_map_memory(start, len); 98 + virt = acpi_os_map_iomem(start, len); 99 99 if (!virt) { 100 100 kfree(log->bios_event_log); 101 101 printk("%s: ERROR - Unable to map memory\n", __func__); ··· 104 104 105 105 memcpy_fromio(log->bios_event_log, virt, len); 106 106 107 - acpi_os_unmap_memory(virt, len); 107 + acpi_os_unmap_iomem(virt, len); 108 108 return 0; 109 109 }
+1
drivers/clk/Makefile
··· 8 8 obj-$(CONFIG_COMMON_CLK) += clk-gate.o 9 9 obj-$(CONFIG_COMMON_CLK) += clk-mux.o 10 10 obj-$(CONFIG_COMMON_CLK) += clk-composite.o 11 + obj-$(CONFIG_COMMON_CLK) += clk-fractional-divider.o 11 12 12 13 # hardware specific clock types 13 14 # please keep this section sorted lexicographically by file/directory path name
+135
drivers/clk/clk-fractional-divider.c
··· 1 + /* 2 + * Copyright (C) 2014 Intel Corporation 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + * 8 + * Adjustable fractional divider clock implementation. 9 + * Output rate = (m / n) * parent_rate. 10 + */ 11 + 12 + #include <linux/clk-provider.h> 13 + #include <linux/module.h> 14 + #include <linux/device.h> 15 + #include <linux/slab.h> 16 + #include <linux/gcd.h> 17 + 18 + #define to_clk_fd(_hw) container_of(_hw, struct clk_fractional_divider, hw) 19 + 20 + static unsigned long clk_fd_recalc_rate(struct clk_hw *hw, 21 + unsigned long parent_rate) 22 + { 23 + struct clk_fractional_divider *fd = to_clk_fd(hw); 24 + unsigned long flags = 0; 25 + u32 val, m, n; 26 + u64 ret; 27 + 28 + if (fd->lock) 29 + spin_lock_irqsave(fd->lock, flags); 30 + 31 + val = clk_readl(fd->reg); 32 + 33 + if (fd->lock) 34 + spin_unlock_irqrestore(fd->lock, flags); 35 + 36 + m = (val & fd->mmask) >> fd->mshift; 37 + n = (val & fd->nmask) >> fd->nshift; 38 + 39 + ret = parent_rate * m; 40 + do_div(ret, n); 41 + 42 + return ret; 43 + } 44 + 45 + static long clk_fd_round_rate(struct clk_hw *hw, unsigned long rate, 46 + unsigned long *prate) 47 + { 48 + struct clk_fractional_divider *fd = to_clk_fd(hw); 49 + unsigned maxn = (fd->nmask >> fd->nshift) + 1; 50 + unsigned div; 51 + 52 + if (!rate || rate >= *prate) 53 + return *prate; 54 + 55 + div = gcd(*prate, rate); 56 + 57 + while ((*prate / div) > maxn) { 58 + div <<= 1; 59 + rate <<= 1; 60 + } 61 + 62 + return rate; 63 + } 64 + 65 + static int clk_fd_set_rate(struct clk_hw *hw, unsigned long rate, 66 + unsigned long parent_rate) 67 + { 68 + struct clk_fractional_divider *fd = to_clk_fd(hw); 69 + unsigned long flags = 0; 70 + unsigned long div; 71 + unsigned n, m; 72 + u32 val; 73 + 74 + div = gcd(parent_rate, rate); 75 + m = rate / div; 76 + n = parent_rate / div; 77 + 78 + if 
(fd->lock) 79 + spin_lock_irqsave(fd->lock, flags); 80 + 81 + val = clk_readl(fd->reg); 82 + val &= ~(fd->mmask | fd->nmask); 83 + val |= (m << fd->mshift) | (n << fd->nshift); 84 + clk_writel(val, fd->reg); 85 + 86 + if (fd->lock) 87 + spin_unlock_irqrestore(fd->lock, flags); 88 + 89 + return 0; 90 + } 91 + 92 + const struct clk_ops clk_fractional_divider_ops = { 93 + .recalc_rate = clk_fd_recalc_rate, 94 + .round_rate = clk_fd_round_rate, 95 + .set_rate = clk_fd_set_rate, 96 + }; 97 + EXPORT_SYMBOL_GPL(clk_fractional_divider_ops); 98 + 99 + struct clk *clk_register_fractional_divider(struct device *dev, 100 + const char *name, const char *parent_name, unsigned long flags, 101 + void __iomem *reg, u8 mshift, u8 mwidth, u8 nshift, u8 nwidth, 102 + u8 clk_divider_flags, spinlock_t *lock) 103 + { 104 + struct clk_fractional_divider *fd; 105 + struct clk_init_data init; 106 + struct clk *clk; 107 + 108 + fd = kzalloc(sizeof(*fd), GFP_KERNEL); 109 + if (!fd) { 110 + dev_err(dev, "could not allocate fractional divider clk\n"); 111 + return ERR_PTR(-ENOMEM); 112 + } 113 + 114 + init.name = name; 115 + init.ops = &clk_fractional_divider_ops; 116 + init.flags = flags | CLK_IS_BASIC; 117 + init.parent_names = parent_name ? &parent_name : NULL; 118 + init.num_parents = parent_name ? 1 : 0; 119 + 120 + fd->reg = reg; 121 + fd->mshift = mshift; 122 + fd->mmask = (BIT(mwidth) - 1) << mshift; 123 + fd->nshift = nshift; 124 + fd->nmask = (BIT(nwidth) - 1) << nshift; 125 + fd->flags = clk_divider_flags; 126 + fd->lock = lock; 127 + fd->hw.init = &init; 128 + 129 + clk = clk_register(dev, &fd->hw); 130 + if (IS_ERR(clk)) 131 + kfree(fd); 132 + 133 + return clk; 134 + } 135 + EXPORT_SYMBOL_GPL(clk_register_fractional_divider);
+4 -3
drivers/cpufreq/Kconfig.arm
··· 5 5 # big LITTLE core layer and glue drivers 6 6 config ARM_BIG_LITTLE_CPUFREQ 7 7 tristate "Generic ARM big LITTLE CPUfreq driver" 8 - depends on ARM && BIG_LITTLE && ARM_CPU_TOPOLOGY && HAVE_CLK 8 + depends on (BIG_LITTLE && ARM_CPU_TOPOLOGY) || (ARM64 && SMP) 9 + depends on HAVE_CLK 9 10 select PM_OPP 10 11 help 11 12 This enables the Generic CPUfreq driver for ARM big.LITTLE platforms. ··· 86 85 It allows usage of special frequencies for Samsung Exynos 87 86 processors if thermal conditions are appropriate. 88 87 89 - It reguires, for safe operation, thermal framework with properly 88 + It requires, for safe operation, thermal framework with properly 90 89 defined trip points. 91 90 92 91 If in doubt, say N. ··· 187 186 S3C2450 SoC. The S3C2416 supports changing the rate of the 188 187 armdiv clock source and also entering a so called dynamic 189 188 voltage scaling mode in which it is possible to reduce the 190 - core voltage of the cpu. 189 + core voltage of the CPU. 191 190 192 191 If in doubt, say N. 193 192
+2 -2
drivers/cpufreq/Kconfig.x86
··· 10 10 The driver implements an internal governor and will become 11 11 the scaling driver and governor for Sandy bridge processors. 12 12 13 - When this driver is enabled it will become the perferred 13 + When this driver is enabled it will become the preferred 14 14 scaling driver for Sandy bridge processors. 15 15 16 16 If in doubt, say N. ··· 52 52 help 53 53 The powernow-k8 driver used to provide a sysfs knob called "cpb" 54 54 to disable the Core Performance Boosting feature of AMD CPUs. This 55 - file has now been superseeded by the more generic "boost" entry. 55 + file has now been superseded by the more generic "boost" entry. 56 56 57 57 By enabling this option the acpi_cpufreq driver provides the old 58 58 entry in addition to the new boost ones, for compatibility reasons.
+2
drivers/cpufreq/Makefile
··· 1 1 # CPUfreq core 2 2 obj-$(CONFIG_CPU_FREQ) += cpufreq.o freq_table.o 3 + obj-$(CONFIG_PM_OPP) += cpufreq_opp.o 4 + 3 5 # CPUfreq stats 4 6 obj-$(CONFIG_CPU_FREQ_STAT) += cpufreq_stats.o 5 7
+4 -5
drivers/cpufreq/acpi-cpufreq.c
··· 213 213 214 214 static unsigned extract_msr(u32 msr, struct acpi_cpufreq_data *data) 215 215 { 216 - int i; 216 + struct cpufreq_frequency_table *pos; 217 217 struct acpi_processor_performance *perf; 218 218 219 219 if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) ··· 223 223 224 224 perf = data->acpi_data; 225 225 226 - for (i = 0; data->freq_table[i].frequency != CPUFREQ_TABLE_END; i++) { 227 - if (msr == perf->states[data->freq_table[i].driver_data].status) 228 - return data->freq_table[i].frequency; 229 - } 226 + cpufreq_for_each_entry(pos, data->freq_table) 227 + if (msr == perf->states[pos->driver_data].status) 228 + return pos->frequency; 230 229 return data->freq_table[0].frequency; 231 230 } 232 231
+8 -8
drivers/cpufreq/arm_big_little.c
··· 226 226 /* get the minimum frequency in the cpufreq_frequency_table */ 227 227 static inline u32 get_table_min(struct cpufreq_frequency_table *table) 228 228 { 229 - int i; 229 + struct cpufreq_frequency_table *pos; 230 230 uint32_t min_freq = ~0; 231 - for (i = 0; (table[i].frequency != CPUFREQ_TABLE_END); i++) 232 - if (table[i].frequency < min_freq) 233 - min_freq = table[i].frequency; 231 + cpufreq_for_each_entry(pos, table) 232 + if (pos->frequency < min_freq) 233 + min_freq = pos->frequency; 234 234 return min_freq; 235 235 } 236 236 237 237 /* get the maximum frequency in the cpufreq_frequency_table */ 238 238 static inline u32 get_table_max(struct cpufreq_frequency_table *table) 239 239 { 240 - int i; 240 + struct cpufreq_frequency_table *pos; 241 241 uint32_t max_freq = 0; 242 - for (i = 0; (table[i].frequency != CPUFREQ_TABLE_END); i++) 243 - if (table[i].frequency > max_freq) 244 - max_freq = table[i].frequency; 242 + cpufreq_for_each_entry(pos, table) 243 + if (pos->frequency > max_freq) 244 + max_freq = pos->frequency; 245 245 return max_freq; 246 246 } 247 247
+1 -1
drivers/cpufreq/cpufreq-nforce2.c
··· 379 379 }; 380 380 381 381 #ifdef MODULE 382 - static DEFINE_PCI_DEVICE_TABLE(nforce2_ids) = { 382 + static const struct pci_device_id nforce2_ids[] = { 383 383 { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE2 }, 384 384 {} 385 385 };
+47 -23
drivers/cpufreq/cpufreq.c
··· 354 354 void cpufreq_freq_transition_begin(struct cpufreq_policy *policy, 355 355 struct cpufreq_freqs *freqs) 356 356 { 357 + 358 + /* 359 + * Catch double invocations of _begin() which lead to self-deadlock. 360 + * ASYNC_NOTIFICATION drivers are left out because the cpufreq core 361 + * doesn't invoke _begin() on their behalf, and hence the chances of 362 + * double invocations are very low. Moreover, there are scenarios 363 + * where these checks can emit false-positive warnings in these 364 + * drivers; so we avoid that by skipping them altogether. 365 + */ 366 + WARN_ON(!(cpufreq_driver->flags & CPUFREQ_ASYNC_NOTIFICATION) 367 + && current == policy->transition_task); 368 + 357 369 wait: 358 370 wait_event(policy->transition_wait, !policy->transition_ongoing); 359 371 ··· 377 365 } 378 366 379 367 policy->transition_ongoing = true; 368 + policy->transition_task = current; 380 369 381 370 spin_unlock(&policy->transition_lock); 382 371 ··· 394 381 cpufreq_notify_post_transition(policy, freqs, transition_failed); 395 382 396 383 policy->transition_ongoing = false; 384 + policy->transition_task = NULL; 397 385 398 386 wake_up(&policy->transition_wait); 399 387 } ··· 1816 1802 * GOVERNORS * 1817 1803 *********************************************************************/ 1818 1804 1805 + static int __target_index(struct cpufreq_policy *policy, 1806 + struct cpufreq_frequency_table *freq_table, int index) 1807 + { 1808 + struct cpufreq_freqs freqs; 1809 + int retval = -EINVAL; 1810 + bool notify; 1811 + 1812 + notify = !(cpufreq_driver->flags & CPUFREQ_ASYNC_NOTIFICATION); 1813 + 1814 + if (notify) { 1815 + freqs.old = policy->cur; 1816 + freqs.new = freq_table[index].frequency; 1817 + freqs.flags = 0; 1818 + 1819 + pr_debug("%s: cpu: %d, oldfreq: %u, new freq: %u\n", 1820 + __func__, policy->cpu, freqs.old, freqs.new); 1821 + 1822 + cpufreq_freq_transition_begin(policy, &freqs); 1823 + } 1824 + 1825 + retval = cpufreq_driver->target_index(policy, index); 1826 + 
if (retval) 1827 + pr_err("%s: Failed to change cpu frequency: %d\n", __func__, 1828 + retval); 1829 + 1830 + if (notify) 1831 + cpufreq_freq_transition_end(policy, &freqs, retval); 1832 + 1833 + return retval; 1834 + } 1835 + 1819 1836 int __cpufreq_driver_target(struct cpufreq_policy *policy, 1820 1837 unsigned int target_freq, 1821 1838 unsigned int relation) 1822 1839 { 1823 - int retval = -EINVAL; 1824 1840 unsigned int old_target_freq = target_freq; 1841 + int retval = -EINVAL; 1825 1842 1826 1843 if (cpufreq_disabled()) 1827 1844 return -ENODEV; ··· 1879 1834 retval = cpufreq_driver->target(policy, target_freq, relation); 1880 1835 else if (cpufreq_driver->target_index) { 1881 1836 struct cpufreq_frequency_table *freq_table; 1882 - struct cpufreq_freqs freqs; 1883 - bool notify; 1884 1837 int index; 1885 1838 1886 1839 freq_table = cpufreq_frequency_get_table(policy->cpu); ··· 1899 1856 goto out; 1900 1857 } 1901 1858 1902 - notify = !(cpufreq_driver->flags & CPUFREQ_ASYNC_NOTIFICATION); 1903 - 1904 - if (notify) { 1905 - freqs.old = policy->cur; 1906 - freqs.new = freq_table[index].frequency; 1907 - freqs.flags = 0; 1908 - 1909 - pr_debug("%s: cpu: %d, oldfreq: %u, new freq: %u\n", 1910 - __func__, policy->cpu, freqs.old, freqs.new); 1911 - 1912 - cpufreq_freq_transition_begin(policy, &freqs); 1913 - } 1914 - 1915 - retval = cpufreq_driver->target_index(policy, index); 1916 - if (retval) 1917 - pr_err("%s: Failed to change cpu frequency: %d\n", 1918 - __func__, retval); 1919 - 1920 - if (notify) 1921 - cpufreq_freq_transition_end(policy, &freqs, retval); 1859 + retval = __target_index(policy, freq_table, index); 1922 1860 } 1923 1861 1924 1862 out:
+110
drivers/cpufreq/cpufreq_opp.c
··· 1 + /* 2 + * Generic OPP helper interface for CPUFreq drivers 3 + * 4 + * Copyright (C) 2009-2014 Texas Instruments Incorporated. 5 + * Nishanth Menon 6 + * Romit Dasgupta 7 + * Kevin Hilman 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of the GNU General Public License version 2 as 11 + * published by the Free Software Foundation. 12 + */ 13 + #include <linux/cpufreq.h> 14 + #include <linux/device.h> 15 + #include <linux/err.h> 16 + #include <linux/errno.h> 17 + #include <linux/export.h> 18 + #include <linux/kernel.h> 19 + #include <linux/pm_opp.h> 20 + #include <linux/rcupdate.h> 21 + #include <linux/slab.h> 22 + 23 + /** 24 + * dev_pm_opp_init_cpufreq_table() - create a cpufreq table for a device 25 + * @dev: device for which we do this operation 26 + * @table: Cpufreq table returned back to caller 27 + * 28 + * Generate a cpufreq table for a provided device- this assumes that the 29 + * opp list is already initialized and ready for usage. 30 + * 31 + * This function allocates required memory for the cpufreq table. It is 32 + * expected that the caller does the required maintenance such as freeing 33 + * the table as required. 34 + * 35 + * Returns -EINVAL for bad pointers, -ENODEV if the device is not found, -ENOMEM 36 + * if no memory available for the operation (table is not populated), returns 0 37 + * if successful and table is populated. 38 + * 39 + * WARNING: It is important for the callers to ensure refreshing their copy of 40 + * the table if any of the mentioned functions have been invoked in the interim. 41 + * 42 + * Locking: The internal device_opp and opp structures are RCU protected. 43 + * Since we just use the regular accessor functions to access the internal data 44 + * structures, we use RCU read lock inside this function. As a result, users of 45 + * this function DONOT need to use explicit locks for invoking. 
46 + */ 47 + int dev_pm_opp_init_cpufreq_table(struct device *dev, 48 + struct cpufreq_frequency_table **table) 49 + { 50 + struct dev_pm_opp *opp; 51 + struct cpufreq_frequency_table *freq_table = NULL; 52 + int i, max_opps, ret = 0; 53 + unsigned long rate; 54 + 55 + rcu_read_lock(); 56 + 57 + max_opps = dev_pm_opp_get_opp_count(dev); 58 + if (max_opps <= 0) { 59 + ret = max_opps ? max_opps : -ENODATA; 60 + goto out; 61 + } 62 + 63 + freq_table = kzalloc(sizeof(*freq_table) * (max_opps + 1), GFP_KERNEL); 64 + if (!freq_table) { 65 + ret = -ENOMEM; 66 + goto out; 67 + } 68 + 69 + for (i = 0, rate = 0; i < max_opps; i++, rate++) { 70 + /* find next rate */ 71 + opp = dev_pm_opp_find_freq_ceil(dev, &rate); 72 + if (IS_ERR(opp)) { 73 + ret = PTR_ERR(opp); 74 + goto out; 75 + } 76 + freq_table[i].driver_data = i; 77 + freq_table[i].frequency = rate / 1000; 78 + } 79 + 80 + freq_table[i].driver_data = i; 81 + freq_table[i].frequency = CPUFREQ_TABLE_END; 82 + 83 + *table = &freq_table[0]; 84 + 85 + out: 86 + rcu_read_unlock(); 87 + if (ret) 88 + kfree(freq_table); 89 + 90 + return ret; 91 + } 92 + EXPORT_SYMBOL_GPL(dev_pm_opp_init_cpufreq_table); 93 + 94 + /** 95 + * dev_pm_opp_free_cpufreq_table() - free the cpufreq table 96 + * @dev: device for which we do this operation 97 + * @table: table to free 98 + * 99 + * Free up the table allocated by dev_pm_opp_init_cpufreq_table 100 + */ 101 + void dev_pm_opp_free_cpufreq_table(struct device *dev, 102 + struct cpufreq_frequency_table **table) 103 + { 104 + if (!table) 105 + return; 106 + 107 + kfree(*table); 108 + *table = NULL; 109 + } 110 + EXPORT_SYMBOL_GPL(dev_pm_opp_free_cpufreq_table);
+8 -16
drivers/cpufreq/cpufreq_stats.c
··· 182 182 183 183 static int __cpufreq_stats_create_table(struct cpufreq_policy *policy) 184 184 { 185 - unsigned int i, j, count = 0, ret = 0; 185 + unsigned int i, count = 0, ret = 0; 186 186 struct cpufreq_stats *stat; 187 187 unsigned int alloc_size; 188 188 unsigned int cpu = policy->cpu; 189 - struct cpufreq_frequency_table *table; 189 + struct cpufreq_frequency_table *pos, *table; 190 190 191 191 table = cpufreq_frequency_get_table(cpu); 192 192 if (unlikely(!table)) ··· 205 205 stat->cpu = cpu; 206 206 per_cpu(cpufreq_stats_table, cpu) = stat; 207 207 208 - for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 209 - unsigned int freq = table[i].frequency; 210 - if (freq == CPUFREQ_ENTRY_INVALID) 211 - continue; 208 + cpufreq_for_each_valid_entry(pos, table) 212 209 count++; 213 - } 214 210 215 211 alloc_size = count * sizeof(int) + count * sizeof(u64); 216 212 ··· 224 228 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS 225 229 stat->trans_table = stat->freq_table + count; 226 230 #endif 227 - j = 0; 228 - for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 229 - unsigned int freq = table[i].frequency; 230 - if (freq == CPUFREQ_ENTRY_INVALID) 231 - continue; 232 - if (freq_table_get_index(stat, freq) == -1) 233 - stat->freq_table[j++] = freq; 234 - } 235 - stat->state_num = j; 231 + i = 0; 232 + cpufreq_for_each_valid_entry(pos, table) 233 + if (freq_table_get_index(stat, pos->frequency) == -1) 234 + stat->freq_table[i++] = pos->frequency; 235 + stat->state_num = i; 236 236 spin_lock(&cpufreq_stats_lock); 237 237 stat->last_time = get_jiffies_64(); 238 238 stat->last_index = freq_table_get_index(stat, policy->cur);
+3 -5
drivers/cpufreq/dbx500-cpufreq.c
··· 45 45 46 46 static int dbx500_cpufreq_probe(struct platform_device *pdev) 47 47 { 48 - int i = 0; 48 + struct cpufreq_frequency_table *pos; 49 49 50 50 freq_table = dev_get_platdata(&pdev->dev); 51 51 if (!freq_table) { ··· 60 60 } 61 61 62 62 pr_info("dbx500-cpufreq: Available frequencies:\n"); 63 - while (freq_table[i].frequency != CPUFREQ_TABLE_END) { 64 - pr_info(" %d Mhz\n", freq_table[i].frequency/1000); 65 - i++; 66 - } 63 + cpufreq_for_each_entry(pos, freq_table) 64 + pr_info(" %d Mhz\n", pos->frequency / 1000); 67 65 68 66 return cpufreq_register_driver(&dbx500_cpufreq_driver); 69 67 }
+4 -5
drivers/cpufreq/elanfreq.c
··· 147 147 static int elanfreq_cpu_init(struct cpufreq_policy *policy) 148 148 { 149 149 struct cpuinfo_x86 *c = &cpu_data(0); 150 - unsigned int i; 150 + struct cpufreq_frequency_table *pos; 151 151 152 152 /* capability check */ 153 153 if ((c->x86_vendor != X86_VENDOR_AMD) || ··· 159 159 max_freq = elanfreq_get_cpu_frequency(0); 160 160 161 161 /* table init */ 162 - for (i = 0; (elanfreq_table[i].frequency != CPUFREQ_TABLE_END); i++) { 163 - if (elanfreq_table[i].frequency > max_freq) 164 - elanfreq_table[i].frequency = CPUFREQ_ENTRY_INVALID; 165 - } 162 + cpufreq_for_each_entry(pos, elanfreq_table) 163 + if (pos->frequency > max_freq) 164 + pos->frequency = CPUFREQ_ENTRY_INVALID; 166 165 167 166 /* cpuinfo and default policy values */ 168 167 policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
+17 -15
drivers/cpufreq/exynos-cpufreq.c
··· 28 28 static int exynos_cpufreq_get_index(unsigned int freq) 29 29 { 30 30 struct cpufreq_frequency_table *freq_table = exynos_info->freq_table; 31 - int index; 31 + struct cpufreq_frequency_table *pos; 32 32 33 - for (index = 0; 34 - freq_table[index].frequency != CPUFREQ_TABLE_END; index++) 35 - if (freq_table[index].frequency == freq) 33 + cpufreq_for_each_entry(pos, freq_table) 34 + if (pos->frequency == freq) 36 35 break; 37 36 38 - if (freq_table[index].frequency == CPUFREQ_TABLE_END) 37 + if (pos->frequency == CPUFREQ_TABLE_END) 39 38 return -EINVAL; 40 39 41 - return index; 40 + return pos - freq_table; 42 41 } 43 42 44 43 static int exynos_cpufreq_scale(unsigned int target_freq) ··· 47 48 struct cpufreq_policy *policy = cpufreq_cpu_get(0); 48 49 unsigned int arm_volt, safe_arm_volt = 0; 49 50 unsigned int mpll_freq_khz = exynos_info->mpll_freq_khz; 51 + struct device *dev = exynos_info->dev; 50 52 unsigned int old_freq; 51 53 int index, old_index; 52 54 int ret = 0; ··· 89 89 /* Firstly, voltage up to increase frequency */ 90 90 ret = regulator_set_voltage(arm_regulator, arm_volt, arm_volt); 91 91 if (ret) { 92 - pr_err("%s: failed to set cpu voltage to %d\n", 93 - __func__, arm_volt); 92 + dev_err(dev, "failed to set cpu voltage to %d\n", 93 + arm_volt); 94 94 return ret; 95 95 } 96 96 } ··· 99 99 ret = regulator_set_voltage(arm_regulator, safe_arm_volt, 100 100 safe_arm_volt); 101 101 if (ret) { 102 - pr_err("%s: failed to set cpu voltage to %d\n", 103 - __func__, safe_arm_volt); 102 + dev_err(dev, "failed to set cpu voltage to %d\n", 103 + safe_arm_volt); 104 104 return ret; 105 105 } 106 106 } ··· 114 114 ret = regulator_set_voltage(arm_regulator, arm_volt, 115 115 arm_volt); 116 116 if (ret) { 117 - pr_err("%s: failed to set cpu voltage to %d\n", 118 - __func__, arm_volt); 117 + dev_err(dev, "failed to set cpu voltage to %d\n", 118 + arm_volt); 119 119 goto out; 120 120 } 121 121 } ··· 162 162 if (!exynos_info) 163 163 return -ENOMEM; 164 164 165 
+ exynos_info->dev = &pdev->dev; 166 + 165 167 if (of_machine_is_compatible("samsung,exynos4210")) { 166 168 exynos_info->type = EXYNOS_SOC_4210; 167 169 ret = exynos4210_cpufreq_init(exynos_info); ··· 185 183 goto err_vdd_arm; 186 184 187 185 if (exynos_info->set_freq == NULL) { 188 - pr_err("%s: No set_freq function (ERR)\n", __func__); 186 + dev_err(&pdev->dev, "No set_freq function (ERR)\n"); 189 187 goto err_vdd_arm; 190 188 } 191 189 192 190 arm_regulator = regulator_get(NULL, "vdd_arm"); 193 191 if (IS_ERR(arm_regulator)) { 194 - pr_err("%s: failed to get resource vdd_arm\n", __func__); 192 + dev_err(&pdev->dev, "failed to get resource vdd_arm\n"); 195 193 goto err_vdd_arm; 196 194 } 197 195 ··· 201 199 if (!cpufreq_register_driver(&exynos_driver)) 202 200 return 0; 203 201 204 - pr_err("%s: failed to register cpufreq driver\n", __func__); 202 + dev_err(&pdev->dev, "failed to register cpufreq driver\n"); 205 203 regulator_put(arm_regulator); 206 204 err_vdd_arm: 207 205 kfree(exynos_info);
+1
drivers/cpufreq/exynos-cpufreq.h
··· 42 42 43 43 struct exynos_dvfs_info { 44 44 enum exynos_soc_type type; 45 + struct device *dev; 45 46 unsigned long mpll_freq_khz; 46 47 unsigned int pll_safe_idx; 47 48 struct clk *cpu_clk;
+15 -15
drivers/cpufreq/exynos5440-cpufreq.c
··· 114 114 115 115 static int init_div_table(void) 116 116 { 117 - struct cpufreq_frequency_table *freq_tbl = dvfs_info->freq_table; 117 + struct cpufreq_frequency_table *pos, *freq_tbl = dvfs_info->freq_table; 118 118 unsigned int tmp, clk_div, ema_div, freq, volt_id; 119 - int i = 0; 120 119 struct dev_pm_opp *opp; 121 120 122 121 rcu_read_lock(); 123 - for (i = 0; freq_tbl[i].frequency != CPUFREQ_TABLE_END; i++) { 124 - 122 + cpufreq_for_each_entry(pos, freq_tbl) { 125 123 opp = dev_pm_opp_find_freq_exact(dvfs_info->dev, 126 - freq_tbl[i].frequency * 1000, true); 124 + pos->frequency * 1000, true); 127 125 if (IS_ERR(opp)) { 128 126 rcu_read_unlock(); 129 127 dev_err(dvfs_info->dev, 130 128 "failed to find valid OPP for %u KHZ\n", 131 - freq_tbl[i].frequency); 129 + pos->frequency); 132 130 return PTR_ERR(opp); 133 131 } 134 132 135 - freq = freq_tbl[i].frequency / 1000; /* In MHZ */ 133 + freq = pos->frequency / 1000; /* In MHZ */ 136 134 clk_div = ((freq / CPU_DIV_FREQ_MAX) & P0_7_CPUCLKDEV_MASK) 137 135 << P0_7_CPUCLKDEV_SHIFT; 138 136 clk_div |= ((freq / CPU_ATB_FREQ_MAX) & P0_7_ATBCLKDEV_MASK) ··· 155 157 tmp = (clk_div | ema_div | (volt_id << P0_7_VDD_SHIFT) 156 158 | ((freq / FREQ_UNIT) << P0_7_FREQ_SHIFT)); 157 159 158 - __raw_writel(tmp, dvfs_info->base + XMU_PMU_P0_7 + 4 * i); 160 + __raw_writel(tmp, dvfs_info->base + XMU_PMU_P0_7 + 4 * 161 + (pos - freq_tbl)); 159 162 } 160 163 161 164 rcu_read_unlock(); ··· 165 166 166 167 static void exynos_enable_dvfs(unsigned int cur_frequency) 167 168 { 168 - unsigned int tmp, i, cpu; 169 + unsigned int tmp, cpu; 169 170 struct cpufreq_frequency_table *freq_table = dvfs_info->freq_table; 171 + struct cpufreq_frequency_table *pos; 170 172 /* Disable DVFS */ 171 173 __raw_writel(0, dvfs_info->base + XMU_DVFS_CTRL); 172 174 ··· 182 182 __raw_writel(tmp, dvfs_info->base + XMU_PMUIRQEN); 183 183 184 184 /* Set initial performance index */ 185 - for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++) 186 - if 
(freq_table[i].frequency == cur_frequency) 185 + cpufreq_for_each_entry(pos, freq_table) 186 + if (pos->frequency == cur_frequency) 187 187 break; 188 188 189 - if (freq_table[i].frequency == CPUFREQ_TABLE_END) { 189 + if (pos->frequency == CPUFREQ_TABLE_END) { 190 190 dev_crit(dvfs_info->dev, "Boot up frequency not supported\n"); 191 191 /* Assign the highest frequency */ 192 - i = 0; 193 - cur_frequency = freq_table[i].frequency; 192 + pos = freq_table; 193 + cur_frequency = pos->frequency; 194 194 } 195 195 196 196 dev_info(dvfs_info->dev, "Setting dvfs initial frequency = %uKHZ", ··· 199 199 for (cpu = 0; cpu < CONFIG_NR_CPUS; cpu++) { 200 200 tmp = __raw_readl(dvfs_info->base + XMU_C0_3_PSTATE + cpu * 4); 201 201 tmp &= ~(P_VALUE_MASK << C0_3_PSTATE_NEW_SHIFT); 202 - tmp |= (i << C0_3_PSTATE_NEW_SHIFT); 202 + tmp |= ((pos - freq_table) << C0_3_PSTATE_NEW_SHIFT); 203 203 __raw_writel(tmp, dvfs_info->base + XMU_C0_3_PSTATE + cpu * 4); 204 204 } 205 205
+31 -33
drivers/cpufreq/freq_table.c
··· 21 21 int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy, 22 22 struct cpufreq_frequency_table *table) 23 23 { 24 + struct cpufreq_frequency_table *pos; 24 25 unsigned int min_freq = ~0; 25 26 unsigned int max_freq = 0; 26 - unsigned int i; 27 + unsigned int freq; 27 28 28 - for (i = 0; (table[i].frequency != CPUFREQ_TABLE_END); i++) { 29 - unsigned int freq = table[i].frequency; 30 - if (freq == CPUFREQ_ENTRY_INVALID) { 31 - pr_debug("table entry %u is invalid, skipping\n", i); 29 + cpufreq_for_each_valid_entry(pos, table) { 30 + freq = pos->frequency; 32 31 33 - continue; 34 - } 35 32 if (!cpufreq_boost_enabled() 36 - && (table[i].flags & CPUFREQ_BOOST_FREQ)) 33 + && (pos->flags & CPUFREQ_BOOST_FREQ)) 37 34 continue; 38 35 39 - pr_debug("table entry %u: %u kHz\n", i, freq); 36 + pr_debug("table entry %u: %u kHz\n", (int)(pos - table), freq); 40 37 if (freq < min_freq) 41 38 min_freq = freq; 42 39 if (freq > max_freq) ··· 54 57 int cpufreq_frequency_table_verify(struct cpufreq_policy *policy, 55 58 struct cpufreq_frequency_table *table) 56 59 { 57 - unsigned int next_larger = ~0, freq, i = 0; 60 + struct cpufreq_frequency_table *pos; 61 + unsigned int freq, next_larger = ~0; 58 62 bool found = false; 59 63 60 64 pr_debug("request for verification of policy (%u - %u kHz) for cpu %u\n", ··· 63 65 64 66 cpufreq_verify_within_cpu_limits(policy); 65 67 66 - for (; freq = table[i].frequency, freq != CPUFREQ_TABLE_END; i++) { 67 - if (freq == CPUFREQ_ENTRY_INVALID) 68 - continue; 68 + cpufreq_for_each_valid_entry(pos, table) { 69 + freq = pos->frequency; 70 + 69 71 if ((freq >= policy->min) && (freq <= policy->max)) { 70 72 found = true; 71 73 break; ··· 116 118 .driver_data = ~0, 117 119 .frequency = 0, 118 120 }; 119 - unsigned int i; 121 + struct cpufreq_frequency_table *pos; 122 + unsigned int freq, i = 0; 120 123 121 124 pr_debug("request for target %u kHz (relation: %u) for cpu %u\n", 122 125 target_freq, relation, policy->cpu); ··· 131 132 
break; 132 133 } 133 134 134 - for (i = 0; (table[i].frequency != CPUFREQ_TABLE_END); i++) { 135 - unsigned int freq = table[i].frequency; 136 - if (freq == CPUFREQ_ENTRY_INVALID) 137 - continue; 135 + cpufreq_for_each_valid_entry(pos, table) { 136 + freq = pos->frequency; 137 + 138 + i = pos - table; 138 139 if ((freq < policy->min) || (freq > policy->max)) 139 140 continue; 141 + if (freq == target_freq) { 142 + optimal.driver_data = i; 143 + break; 144 + } 140 145 switch (relation) { 141 146 case CPUFREQ_RELATION_H: 142 - if (freq <= target_freq) { 147 + if (freq < target_freq) { 143 148 if (freq >= optimal.frequency) { 144 149 optimal.frequency = freq; 145 150 optimal.driver_data = i; ··· 156 153 } 157 154 break; 158 155 case CPUFREQ_RELATION_L: 159 - if (freq >= target_freq) { 156 + if (freq > target_freq) { 160 157 if (freq <= optimal.frequency) { 161 158 optimal.frequency = freq; 162 159 optimal.driver_data = i; ··· 187 184 int cpufreq_frequency_table_get_index(struct cpufreq_policy *policy, 188 185 unsigned int freq) 189 186 { 190 - struct cpufreq_frequency_table *table; 191 - int i; 187 + struct cpufreq_frequency_table *pos, *table; 192 188 193 189 table = cpufreq_frequency_get_table(policy->cpu); 194 190 if (unlikely(!table)) { ··· 195 193 return -ENOENT; 196 194 } 197 195 198 - for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 199 - if (table[i].frequency == freq) 200 - return i; 201 - } 196 + cpufreq_for_each_valid_entry(pos, table) 197 + if (pos->frequency == freq) 198 + return pos - table; 202 199 203 200 return -EINVAL; 204 201 } ··· 209 208 static ssize_t show_available_freqs(struct cpufreq_policy *policy, char *buf, 210 209 bool show_boost) 211 210 { 212 - unsigned int i = 0; 213 211 ssize_t count = 0; 214 - struct cpufreq_frequency_table *table = policy->freq_table; 212 + struct cpufreq_frequency_table *pos, *table = policy->freq_table; 215 213 216 214 if (!table) 217 215 return -ENODEV; 218 216 219 - for (i = 0; (table[i].frequency != 
CPUFREQ_TABLE_END); i++) { 220 - if (table[i].frequency == CPUFREQ_ENTRY_INVALID) 221 - continue; 217 + cpufreq_for_each_valid_entry(pos, table) { 222 218 /* 223 219 * show_boost = true and driver_data = BOOST freq 224 220 * display BOOST freqs ··· 227 229 * show_boost = false and driver_data != BOOST freq 228 230 * display NON BOOST freqs 229 231 */ 230 - if (show_boost ^ (table[i].flags & CPUFREQ_BOOST_FREQ)) 232 + if (show_boost ^ (pos->flags & CPUFREQ_BOOST_FREQ)) 231 233 continue; 232 234 233 - count += sprintf(&buf[count], "%d ", table[i].frequency); 235 + count += sprintf(&buf[count], "%d ", pos->frequency); 234 236 } 235 237 count += sprintf(&buf[count], "\n"); 236 238
+39 -15
drivers/cpufreq/imx6q-cpufreq.c
···
 #include <linux/clk.h>
 #include <linux/cpu.h>
 #include <linux/cpufreq.h>
-#include <linux/delay.h>
 #include <linux/err.h>
 #include <linux/module.h>
 #include <linux/of.h>
···
 		return -ENOENT;
 	}
 
-	arm_clk = devm_clk_get(cpu_dev, "arm");
-	pll1_sys_clk = devm_clk_get(cpu_dev, "pll1_sys");
-	pll1_sw_clk = devm_clk_get(cpu_dev, "pll1_sw");
-	step_clk = devm_clk_get(cpu_dev, "step");
-	pll2_pfd2_396m_clk = devm_clk_get(cpu_dev, "pll2_pfd2_396m");
+	arm_clk = clk_get(cpu_dev, "arm");
+	pll1_sys_clk = clk_get(cpu_dev, "pll1_sys");
+	pll1_sw_clk = clk_get(cpu_dev, "pll1_sw");
+	step_clk = clk_get(cpu_dev, "step");
+	pll2_pfd2_396m_clk = clk_get(cpu_dev, "pll2_pfd2_396m");
 	if (IS_ERR(arm_clk) || IS_ERR(pll1_sys_clk) || IS_ERR(pll1_sw_clk) ||
 	    IS_ERR(step_clk) || IS_ERR(pll2_pfd2_396m_clk)) {
 		dev_err(cpu_dev, "failed to get clocks\n");
 		ret = -ENOENT;
-		goto put_node;
+		goto put_clk;
 	}
 
-	arm_reg = devm_regulator_get(cpu_dev, "arm");
-	pu_reg = devm_regulator_get(cpu_dev, "pu");
-	soc_reg = devm_regulator_get(cpu_dev, "soc");
+	arm_reg = regulator_get(cpu_dev, "arm");
+	pu_reg = regulator_get(cpu_dev, "pu");
+	soc_reg = regulator_get(cpu_dev, "soc");
 	if (IS_ERR(arm_reg) || IS_ERR(pu_reg) || IS_ERR(soc_reg)) {
 		dev_err(cpu_dev, "failed to get regulators\n");
 		ret = -ENOENT;
-		goto put_node;
+		goto put_reg;
 	}
 
 	/*
···
 	ret = of_init_opp_table(cpu_dev);
 	if (ret < 0) {
 		dev_err(cpu_dev, "failed to init OPP table: %d\n", ret);
-		goto put_node;
+		goto put_reg;
 	}
 
 	num = dev_pm_opp_get_opp_count(cpu_dev);
 	if (num < 0) {
 		ret = num;
 		dev_err(cpu_dev, "no OPP table is found: %d\n", ret);
-		goto put_node;
+		goto put_reg;
 	}
 	}
 
 	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
 	if (ret) {
 		dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
-		goto put_node;
+		goto put_reg;
 	}
 
 	/* Make imx6_soc_volt array's size same as arm opp number */
···
 free_freq_table:
 	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
-put_node:
+put_reg:
+	if (!IS_ERR(arm_reg))
+		regulator_put(arm_reg);
+	if (!IS_ERR(pu_reg))
+		regulator_put(pu_reg);
+	if (!IS_ERR(soc_reg))
+		regulator_put(soc_reg);
+put_clk:
+	if (!IS_ERR(arm_clk))
+		clk_put(arm_clk);
+	if (!IS_ERR(pll1_sys_clk))
+		clk_put(pll1_sys_clk);
+	if (!IS_ERR(pll1_sw_clk))
+		clk_put(pll1_sw_clk);
+	if (!IS_ERR(step_clk))
+		clk_put(step_clk);
+	if (!IS_ERR(pll2_pfd2_396m_clk))
+		clk_put(pll2_pfd2_396m_clk);
 	of_node_put(np);
 	return ret;
 }
···
 {
 	cpufreq_unregister_driver(&imx6q_cpufreq_driver);
 	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
+	regulator_put(arm_reg);
+	regulator_put(pu_reg);
+	regulator_put(soc_reg);
+	clk_put(arm_clk);
+	clk_put(pll1_sys_clk);
+	clk_put(pll1_sw_clk);
+	clk_put(step_clk);
+	clk_put(pll2_pfd2_396m_clk);
 
 	return 0;
 }
drivers/cpufreq/intel_pstate.c  (+37 -30)
···
 #include <asm/msr.h>
 #include <asm/cpu_device_id.h>
 
-#define SAMPLE_COUNT		3
-
 #define BYT_RATIOS	0x66a
 #define BYT_VIDS	0x66b
 #define BYT_TURBO_RATIOS	0x66c
 #define BYT_TURBO_VIDS	0x66d
 
 
-#define FRAC_BITS 6
+#define FRAC_BITS 8
 #define int_tofp(X) ((int64_t)(X) << FRAC_BITS)
 #define fp_toint(X) ((X) >> FRAC_BITS)
-#define FP_ROUNDUP(X) ((X) += 1 << FRAC_BITS)
+
 
 static inline int32_t mul_fp(int32_t x, int32_t y)
 {
···
 	int32_t core_pct_busy;
 	u64 aperf;
 	u64 mperf;
-	unsigned long long tsc;
 	int freq;
+	ktime_t time;
 };
 
 struct pstate_data {
···
 struct cpudata {
 	int cpu;
 
-	char name[64];
-
 	struct timer_list timer;
 
 	struct pstate_data pstate;
 	struct vid_data vid;
 	struct _pid pid;
 
+	ktime_t last_sample_time;
 	u64	prev_aperf;
 	u64	prev_mperf;
-	unsigned long long prev_tsc;
 	struct sample sample;
 };
···
 	pid->last_err = fp_error;
 
 	result = pterm + mul_fp(pid->integral, pid->i_gain) + dterm;
-
+	if (result >= 0)
+		result = result + (1 << (FRAC_BITS-1));
+	else
+		result = result - (1 << (FRAC_BITS-1));
 	return (signed int)fp_toint(result);
 }
···
 static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
 {
-	sprintf(cpu->name, "Intel 2nd generation core");
-
 	cpu->pstate.min_pstate = pstate_funcs.get_min();
 	cpu->pstate.max_pstate = pstate_funcs.get_max();
 	cpu->pstate.turbo_pstate = pstate_funcs.get_turbo();
···
 	intel_pstate_set_pstate(cpu, cpu->pstate.min_pstate);
 }
 
-static inline void intel_pstate_calc_busy(struct cpudata *cpu,
-					struct sample *sample)
+static inline void intel_pstate_calc_busy(struct cpudata *cpu)
 {
-	int32_t core_pct;
-	int32_t c0_pct;
+	struct sample *sample = &cpu->sample;
+	int64_t core_pct;
+	int32_t rem;
 
-	core_pct = div_fp(int_tofp((sample->aperf)),
-			int_tofp((sample->mperf)));
-	core_pct = mul_fp(core_pct, int_tofp(100));
-	FP_ROUNDUP(core_pct);
+	core_pct = int_tofp(sample->aperf) * int_tofp(100);
+	core_pct = div_u64_rem(core_pct, int_tofp(sample->mperf), &rem);
 
-	c0_pct = div_fp(int_tofp(sample->mperf), int_tofp(sample->tsc));
+	if ((rem << 1) >= int_tofp(sample->mperf))
+		core_pct += 1;
 
 	sample->freq = fp_toint(
 		mul_fp(int_tofp(cpu->pstate.max_pstate * 1000), core_pct));
 
-	sample->core_pct_busy = mul_fp(core_pct, c0_pct);
+	sample->core_pct_busy = (int32_t)core_pct;
 }
 
 static inline void intel_pstate_sample(struct cpudata *cpu)
 {
 	u64 aperf, mperf;
-	unsigned long long tsc;
 
 	rdmsrl(MSR_IA32_APERF, aperf);
 	rdmsrl(MSR_IA32_MPERF, mperf);
-	tsc = native_read_tsc();
 
 	aperf = aperf >> FRAC_BITS;
 	mperf = mperf >> FRAC_BITS;
-	tsc = tsc >> FRAC_BITS;
 
+	cpu->last_sample_time = cpu->sample.time;
+	cpu->sample.time = ktime_get();
 	cpu->sample.aperf = aperf;
 	cpu->sample.mperf = mperf;
-	cpu->sample.tsc = tsc;
 	cpu->sample.aperf -= cpu->prev_aperf;
 	cpu->sample.mperf -= cpu->prev_mperf;
-	cpu->sample.tsc -= cpu->prev_tsc;
 
-	intel_pstate_calc_busy(cpu, &cpu->sample);
+	intel_pstate_calc_busy(cpu);
 
 	cpu->prev_aperf = aperf;
 	cpu->prev_mperf = mperf;
-	cpu->prev_tsc = tsc;
 }
 
 static inline void intel_pstate_set_sample_time(struct cpudata *cpu)
···
 static inline int32_t intel_pstate_get_scaled_busy(struct cpudata *cpu)
 {
-	int32_t core_busy, max_pstate, current_pstate;
+	int32_t core_busy, max_pstate, current_pstate, sample_ratio;
+	u32 duration_us;
+	u32 sample_time;
 
 	core_busy = cpu->sample.core_pct_busy;
 	max_pstate = int_tofp(cpu->pstate.max_pstate);
 	current_pstate = int_tofp(cpu->pstate.current_pstate);
 	core_busy = mul_fp(core_busy, div_fp(max_pstate, current_pstate));
-	return FP_ROUNDUP(core_busy);
+
+	sample_time = (pid_params.sample_rate_ms * USEC_PER_MSEC);
+	duration_us = (u32) ktime_us_delta(cpu->sample.time,
+					cpu->last_sample_time);
+	if (duration_us > sample_time * 3) {
+		sample_ratio = div_fp(int_tofp(sample_time),
+				int_tofp(duration_us));
+		core_busy = mul_fp(core_busy, sample_ratio);
+	}
+
+	return core_busy;
 }
 
 static inline void intel_pstate_adjust_busy_pstate(struct cpudata *cpu)
···
 	ICPU(0x37, byt_params),
 	ICPU(0x3a, core_params),
 	ICPU(0x3c, core_params),
+	ICPU(0x3d, core_params),
 	ICPU(0x3e, core_params),
 	ICPU(0x3f, core_params),
 	ICPU(0x45, core_params),
 	ICPU(0x46, core_params),
+	ICPU(0x4f, core_params),
+	ICPU(0x56, core_params),
 	{}
 };
 MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids);
drivers/cpufreq/longhaul.c  (+5 -6)
···
 static void longhaul_setup_voltagescaling(void)
 {
+	struct cpufreq_frequency_table *freq_pos;
 	union msr_longhaul longhaul;
 	struct mV_pos minvid, maxvid, vid;
 	unsigned int j, speed, pos, kHz_step, numvscales;
···
 	/* Calculate kHz for one voltage step */
 	kHz_step = (highest_speed - min_vid_speed) / numvscales;
 
-	j = 0;
-	while (longhaul_table[j].frequency != CPUFREQ_TABLE_END) {
-		speed = longhaul_table[j].frequency;
+	cpufreq_for_each_entry(freq_pos, longhaul_table) {
+		speed = freq_pos->frequency;
 		if (speed > min_vid_speed)
 			pos = (speed - min_vid_speed) / kHz_step + minvid.pos;
 		else
 			pos = minvid.pos;
-		longhaul_table[j].driver_data |= mV_vrm_table[pos] << 8;
+		freq_pos->driver_data |= mV_vrm_table[pos] << 8;
 		vid = vrm_mV_table[mV_vrm_table[pos]];
 		printk(KERN_INFO PFX "f: %d kHz, index: %d, vid: %d mV\n",
-				speed, j, vid.mV);
-		j++;
+			speed, (int)(freq_pos - longhaul_table), vid.mV);
 	}
 
 	can_scale_voltage = 1;
drivers/cpufreq/pasemi-cpufreq.c  (+5 -5)
···
 static int pas_cpufreq_cpu_init(struct cpufreq_policy *policy)
 {
+	struct cpufreq_frequency_table *pos;
 	const u32 *max_freqp;
 	u32 max_freq;
-	int i, cur_astate;
+	int cur_astate;
 	struct resource res;
 	struct device_node *cpu, *dn;
 	int err = -ENODEV;
···
 	pr_debug("initializing frequency table\n");
 
 	/* initialize frequency table */
-	for (i=0; pas_freqs[i].frequency!=CPUFREQ_TABLE_END; i++) {
-		pas_freqs[i].frequency =
-			get_astate_freq(pas_freqs[i].driver_data) * 100000;
-		pr_debug("%d: %d\n", i, pas_freqs[i].frequency);
+	cpufreq_for_each_entry(pos, pas_freqs) {
+		pos->frequency = get_astate_freq(pos->driver_data) * 100000;
+		pr_debug("%d: %d\n", (int)(pos - pas_freqs), pos->frequency);
 	}
 
 	cur_astate = get_cur_astate(policy->cpu);
drivers/cpufreq/powernow-k6.c  (+7 -7)
···
 
 static int powernow_k6_cpu_init(struct cpufreq_policy *policy)
 {
+	struct cpufreq_frequency_table *pos;
 	unsigned int i, f;
 	unsigned khz;
···
 		}
 	}
 	if (param_max_multiplier) {
-		for (i = 0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) {
-			if (clock_ratio[i].driver_data == param_max_multiplier) {
+		cpufreq_for_each_entry(pos, clock_ratio)
+			if (pos->driver_data == param_max_multiplier) {
 				max_multiplier = param_max_multiplier;
 				goto have_max_multiplier;
 			}
-		}
 		printk(KERN_ERR "powernow-k6: invalid max_multiplier parameter, valid parameters 20, 30, 35, 40, 45, 50, 55, 60\n");
 		return -EINVAL;
 	}
···
 	param_busfreq = busfreq * 10;
 
 	/* table init */
-	for (i = 0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) {
-		f = clock_ratio[i].driver_data;
+	cpufreq_for_each_entry(pos, clock_ratio) {
+		f = pos->driver_data;
 		if (f > max_multiplier)
-			clock_ratio[i].frequency = CPUFREQ_ENTRY_INVALID;
+			pos->frequency = CPUFREQ_ENTRY_INVALID;
 		else
-			clock_ratio[i].frequency = busfreq * f;
+			pos->frequency = busfreq * f;
 	}
 
 	/* cpuinfo and default policy values */
drivers/cpufreq/powernow-k8.c  (+73 -107)
···
  * power and thermal data sheets, (e.g. 30417.pdf, 30430.pdf, 43375.pdf)
  */
 
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
 #include <linux/kernel.h>
 #include <linux/smp.h>
 #include <linux/module.h>
···
 #include <linux/mutex.h>
 #include <acpi/processor.h>
 
-#define PFX "powernow-k8: "
 #define VERSION "version 2.20.00"
 #include "powernow-k8.h"
···
 	u32 i = 0;
 
 	if ((fid & INVALID_FID_MASK) || (data->currvid & INVALID_VID_MASK)) {
-		printk(KERN_ERR PFX "internal error - overflow on fid write\n");
+		pr_err("internal error - overflow on fid write\n");
 		return 1;
 	}
···
 	do {
 		wrmsr(MSR_FIDVID_CTL, lo, data->plllock * PLL_LOCK_CONVERSION);
 		if (i++ > 100) {
-			printk(KERN_ERR PFX
-				"Hardware error - pending bit very stuck - "
-				"no further pstate changes possible\n");
+			pr_err("Hardware error - pending bit very stuck - no further pstate changes possible\n");
 			return 1;
 		}
 	} while (query_current_values_with_pending_wait(data));
···
 	count_off_irt(data);
 
 	if (savevid != data->currvid) {
-		printk(KERN_ERR PFX
-			"vid change on fid trans, old 0x%x, new 0x%x\n",
-			savevid, data->currvid);
+		pr_err("vid change on fid trans, old 0x%x, new 0x%x\n",
+		       savevid, data->currvid);
 		return 1;
 	}
 
 	if (fid != data->currfid) {
-		printk(KERN_ERR PFX
-			"fid trans failed, fid 0x%x, curr 0x%x\n", fid,
+		pr_err("fid trans failed, fid 0x%x, curr 0x%x\n", fid,
 			data->currfid);
 		return 1;
 	}
···
 	int i = 0;
 
 	if ((data->currfid & INVALID_FID_MASK) || (vid & INVALID_VID_MASK)) {
-		printk(KERN_ERR PFX "internal error - overflow on vid write\n");
+		pr_err("internal error - overflow on vid write\n");
 		return 1;
 	}
···
 	do {
 		wrmsr(MSR_FIDVID_CTL, lo, STOP_GRANT_5NS);
 		if (i++ > 100) {
-			printk(KERN_ERR PFX "internal error - pending bit "
-					"very stuck - no further pstate "
-					"changes possible\n");
+			pr_err("internal error - pending bit very stuck - no further pstate changes possible\n");
 			return 1;
 		}
 	} while (query_current_values_with_pending_wait(data));
 
 	if (savefid != data->currfid) {
-		printk(KERN_ERR PFX "fid changed on vid trans, old "
-			"0x%x new 0x%x\n",
-			savefid, data->currfid);
+		pr_err("fid changed on vid trans, old 0x%x new 0x%x\n",
+		       savefid, data->currfid);
 		return 1;
 	}
 
 	if (vid != data->currvid) {
-		printk(KERN_ERR PFX "vid trans failed, vid 0x%x, "
-				"curr 0x%x\n",
+		pr_err("vid trans failed, vid 0x%x, curr 0x%x\n",
 				vid, data->currvid);
 		return 1;
 	}
···
 		return 1;
 
 	if ((reqfid != data->currfid) || (reqvid != data->currvid)) {
-		printk(KERN_ERR PFX "failed (cpu%d): req 0x%x 0x%x, "
-				"curr 0x%x 0x%x\n",
+		pr_err("failed (cpu%d): req 0x%x 0x%x, curr 0x%x 0x%x\n",
 				smp_processor_id(),
 				reqfid, reqvid, data->currfid, data->currvid);
 		return 1;
···
 	u32 savefid = data->currfid;
 	u32 maxvid, lo, rvomult = 1;
 
-	pr_debug("ph1 (cpu%d): start, currfid 0x%x, currvid 0x%x, "
-		"reqvid 0x%x, rvo 0x%x\n",
+	pr_debug("ph1 (cpu%d): start, currfid 0x%x, currvid 0x%x, reqvid 0x%x, rvo 0x%x\n",
 		smp_processor_id(),
 		data->currfid, data->currvid, reqvid, data->rvo);
···
 		return 1;
 
 	if (savefid != data->currfid) {
-		printk(KERN_ERR PFX "ph1 err, currfid changed 0x%x\n",
-				data->currfid);
+		pr_err("ph1 err, currfid changed 0x%x\n", data->currfid);
 		return 1;
 	}
···
 	u32 fid_interval, savevid = data->currvid;
 
 	if (data->currfid == reqfid) {
-		printk(KERN_ERR PFX "ph2 null fid transition 0x%x\n",
-				data->currfid);
+		pr_err("ph2 null fid transition 0x%x\n", data->currfid);
 		return 0;
 	}
 
-	pr_debug("ph2 (cpu%d): starting, currfid 0x%x, currvid 0x%x, "
-		"reqfid 0x%x\n",
+	pr_debug("ph2 (cpu%d): starting, currfid 0x%x, currvid 0x%x, reqfid 0x%x\n",
 		smp_processor_id(),
 		data->currfid, data->currvid, reqfid);
···
 		return 1;
 
 	if (data->currfid != reqfid) {
-		printk(KERN_ERR PFX
-			"ph2: mismatch, failed fid transition, "
-			"curr 0x%x, req 0x%x\n",
+		pr_err("ph2: mismatch, failed fid transition, curr 0x%x, req 0x%x\n",
 			data->currfid, reqfid);
 		return 1;
 	}
 
 	if (savevid != data->currvid) {
-		printk(KERN_ERR PFX "ph2: vid changed, save 0x%x, curr 0x%x\n",
+		pr_err("ph2: vid changed, save 0x%x, curr 0x%x\n",
 			savevid, data->currvid);
 		return 1;
 	}
···
 		return 1;
 
 	if (savefid != data->currfid) {
-		printk(KERN_ERR PFX
-			"ph3: bad fid change, save 0x%x, curr 0x%x\n",
-			savefid, data->currfid);
+		pr_err("ph3: bad fid change, save 0x%x, curr 0x%x\n",
+		       savefid, data->currfid);
 		return 1;
 	}
 
 	if (data->currvid != reqvid) {
-		printk(KERN_ERR PFX
-			"ph3: failed vid transition\n, "
-			"req 0x%x, curr 0x%x",
-			reqvid, data->currvid);
+		pr_err("ph3: failed vid transition\n, req 0x%x, curr 0x%x",
+		       reqvid, data->currvid);
 		return 1;
 	}
 }
···
 	if ((eax & CPUID_XFAM) == CPUID_XFAM_K8) {
 		if (((eax & CPUID_USE_XFAM_XMOD) != CPUID_USE_XFAM_XMOD) ||
 		    ((eax & CPUID_XMOD) > CPUID_XMOD_REV_MASK)) {
-			printk(KERN_INFO PFX
-			       "Processor cpuid %x not supported\n", eax);
+			pr_info("Processor cpuid %x not supported\n", eax);
 			return;
 		}
 
 		eax = cpuid_eax(CPUID_GET_MAX_CAPABILITIES);
 		if (eax < CPUID_FREQ_VOLT_CAPABILITIES) {
-			printk(KERN_INFO PFX
-			       "No frequency change capabilities detected\n");
+			pr_info("No frequency change capabilities detected\n");
 			return;
 		}
 
 		cpuid(CPUID_FREQ_VOLT_CAPABILITIES, &eax, &ebx, &ecx, &edx);
 		if ((edx & P_STATE_TRANSITION_CAPABLE)
 			!= P_STATE_TRANSITION_CAPABLE) {
-			printk(KERN_INFO PFX
-				"Power state transitions not supported\n");
+			pr_info("Power state transitions not supported\n");
 			return;
 		}
 		*rc = 0;
···
 
 	for (j = 0; j < data->numps; j++) {
 		if (pst[j].vid > LEAST_VID) {
-			printk(KERN_ERR FW_BUG PFX "vid %d invalid : 0x%x\n",
-			       j, pst[j].vid);
+			pr_err(FW_BUG "vid %d invalid : 0x%x\n", j,
+			       pst[j].vid);
 			return -EINVAL;
 		}
 		if (pst[j].vid < data->rvo) {
 			/* vid + rvo >= 0 */
-			printk(KERN_ERR FW_BUG PFX "0 vid exceeded with pstate"
-			       " %d\n", j);
+			pr_err(FW_BUG "0 vid exceeded with pstate %d\n", j);
 			return -ENODEV;
 		}
 		if (pst[j].vid < maxvid + data->rvo) {
 			/* vid + rvo >= maxvid */
-			printk(KERN_ERR FW_BUG PFX "maxvid exceeded with pstate"
-			       " %d\n", j);
+			pr_err(FW_BUG "maxvid exceeded with pstate %d\n", j);
 			return -ENODEV;
 		}
 		if (pst[j].fid > MAX_FID) {
-			printk(KERN_ERR FW_BUG PFX "maxfid exceeded with pstate"
-			       " %d\n", j);
+			pr_err(FW_BUG "maxfid exceeded with pstate %d\n", j);
 			return -ENODEV;
 		}
 		if (j && (pst[j].fid < HI_FID_TABLE_BOTTOM)) {
 			/* Only first fid is allowed to be in "low" range */
-			printk(KERN_ERR FW_BUG PFX "two low fids - %d : "
-			       "0x%x\n", j, pst[j].fid);
+			pr_err(FW_BUG "two low fids - %d : 0x%x\n", j,
+			       pst[j].fid);
 			return -EINVAL;
 		}
 		if (pst[j].fid < lastfid)
 			lastfid = pst[j].fid;
 	}
 	if (lastfid & 1) {
-		printk(KERN_ERR FW_BUG PFX "lastfid invalid\n");
+		pr_err(FW_BUG "lastfid invalid\n");
 		return -EINVAL;
 	}
 	if (lastfid > LO_FID_TABLE_TOP)
-		printk(KERN_INFO FW_BUG PFX
-			"first fid not from lo freq table\n");
+		pr_info(FW_BUG "first fid not from lo freq table\n");
 
 	return 0;
 }
···
 	for (j = 0; j < data->numps; j++) {
 		if (data->powernow_table[j].frequency !=
 				CPUFREQ_ENTRY_INVALID) {
-			printk(KERN_INFO PFX
-				"fid 0x%x (%d MHz), vid 0x%x\n",
-				data->powernow_table[j].driver_data & 0xff,
-				data->powernow_table[j].frequency/1000,
-				data->powernow_table[j].driver_data >> 8);
+			pr_info("fid 0x%x (%d MHz), vid 0x%x\n",
+				data->powernow_table[j].driver_data & 0xff,
+				data->powernow_table[j].frequency/1000,
+				data->powernow_table[j].driver_data >> 8);
 		}
 	}
 	if (data->batps)
-		printk(KERN_INFO PFX "Only %d pstates on battery\n",
-				data->batps);
+		pr_info("Only %d pstates on battery\n", data->batps);
 }
 
 static int fill_powernow_table(struct powernow_k8_data *data,
···
 
 	if (data->batps) {
 		/* use ACPI support to get full speed on mains power */
-		printk(KERN_WARNING PFX
-			"Only %d pstates usable (use ACPI driver for full "
-			"range\n", data->batps);
+		pr_warn("Only %d pstates usable (use ACPI driver for full range\n",
+			data->batps);
 		data->numps = data->batps;
 	}
 
 	for (j = 1; j < data->numps; j++) {
 		if (pst[j-1].fid >= pst[j].fid) {
-			printk(KERN_ERR PFX "PST out of sequence\n");
+			pr_err("PST out of sequence\n");
 			return -EINVAL;
 		}
 	}
 
 	if (data->numps < 2) {
-		printk(KERN_ERR PFX "no p states to transition\n");
+		pr_err("no p states to transition\n");
 		return -ENODEV;
 	}
···
 	powernow_table = kzalloc((sizeof(*powernow_table)
 		* (data->numps + 1)), GFP_KERNEL);
 	if (!powernow_table) {
-		printk(KERN_ERR PFX "powernow_table memory alloc failure\n");
+		pr_err("powernow_table memory alloc failure\n");
 		return -ENOMEM;
 	}
···
 	pr_debug("table vers: 0x%x\n", psb->tableversion);
 	if (psb->tableversion != PSB_VERSION_1_4) {
-		printk(KERN_ERR FW_BUG PFX "PSB table is not v1.4\n");
+		pr_err(FW_BUG "PSB table is not v1.4\n");
 		return -ENODEV;
 	}
 
 	pr_debug("flags: 0x%x\n", psb->flags1);
 	if (psb->flags1) {
-		printk(KERN_ERR FW_BUG PFX "unknown flags\n");
+		pr_err(FW_BUG "unknown flags\n");
 		return -ENODEV;
 	}
···
 			cpst = 1;
 	}
 	if (cpst != 1) {
-		printk(KERN_ERR FW_BUG PFX "numpst must be 1\n");
+		pr_err(FW_BUG "numpst must be 1\n");
 		return -ENODEV;
 	}
···
 	 * BIOS and Kernel Developer's Guide, which is available on
 	 * www.amd.com
 	 */
-	printk(KERN_ERR FW_BUG PFX "No PSB or ACPI _PSS objects\n");
-	printk(KERN_ERR PFX "Make sure that your BIOS is up to date"
-		" and Cool'N'Quiet support is enabled in BIOS setup\n");
+	pr_err(FW_BUG "No PSB or ACPI _PSS objects\n");
+	pr_err("Make sure that your BIOS is up to date and Cool'N'Quiet support is enabled in BIOS setup\n");
 	return -ENODEV;
 }
···
 	acpi_processor_notify_smm(THIS_MODULE);
 
 	if (!zalloc_cpumask_var(&data->acpi_data.shared_cpu_map, GFP_KERNEL)) {
-		printk(KERN_ERR PFX
-				"unable to alloc powernow_k8_data cpumask\n");
+		pr_err("unable to alloc powernow_k8_data cpumask\n");
 		ret_val = -ENOMEM;
 		goto err_out_mem;
 	}
···
 	}
 
 	if (freq != (data->acpi_data.states[i].core_frequency * 1000)) {
-		printk(KERN_INFO PFX "invalid freq entries "
-			"%u kHz vs. %u kHz\n", freq,
-			(unsigned int)
+		pr_info("invalid freq entries %u kHz vs. %u kHz\n",
+			freq, (unsigned int)
 			(data->acpi_data.states[i].core_frequency
 			 * 1000));
 		invalidate_entry(powernow_table, i);
···
 			max_latency = cur_latency;
 	}
 	if (max_latency == 0) {
-		pr_err(FW_WARN PFX "Invalid zero transition latency\n");
+		pr_err(FW_WARN "Invalid zero transition latency\n");
 		max_latency = 1;
 	}
 	/* value in usecs, needs to be in nanoseconds */
···
 	checkvid = data->currvid;
 
 	if (pending_bit_stuck()) {
-		printk(KERN_ERR PFX "failing targ, change pending bit set\n");
+		pr_err("failing targ, change pending bit set\n");
 		return -EIO;
 	}
···
 		return -EIO;
 
 	pr_debug("targ: curr fid 0x%x, vid 0x%x\n",
-		data->currfid, data->currvid);
+		 data->currfid, data->currvid);
 
 	if ((checkvid != data->currvid) ||
 	    (checkfid != data->currfid)) {
-		pr_info(PFX
-		       "error - out of sync, fix 0x%x 0x%x, vid 0x%x 0x%x\n",
+		pr_info("error - out of sync, fix 0x%x 0x%x, vid 0x%x 0x%x\n",
 		       checkfid, data->currfid,
 		       checkvid, data->currvid);
 	}
···
 	ret = transition_frequency_fidvid(data, newstate);
 
 	if (ret) {
-		printk(KERN_ERR PFX "transition frequency failed\n");
+		pr_err("transition frequency failed\n");
 		mutex_unlock(&fidvid_mutex);
 		return 1;
 	}
···
 	struct init_on_cpu *init_on_cpu = _init_on_cpu;
 
 	if (pending_bit_stuck()) {
-		printk(KERN_ERR PFX "failing init, change pending bit set\n");
+		pr_err("failing init, change pending bit set\n");
 		init_on_cpu->rc = -ENODEV;
 		return;
 	}
···
 	init_on_cpu->rc = 0;
 }
 
-static const char missing_pss_msg[] =
-	KERN_ERR
-	FW_BUG PFX "No compatible ACPI _PSS objects found.\n"
-	FW_BUG PFX "First, make sure Cool'N'Quiet is enabled in the BIOS.\n"
-	FW_BUG PFX "If that doesn't help, try upgrading your BIOS.\n";
+#define MISSING_PSS_MSG \
+	FW_BUG "No compatible ACPI _PSS objects found.\n" \
+	FW_BUG "First, make sure Cool'N'Quiet is enabled in the BIOS.\n" \
+	FW_BUG "If that doesn't help, try upgrading your BIOS.\n"
 
 /* per CPU init entry point to the driver */
 static int powernowk8_cpu_init(struct cpufreq_policy *pol)
···
 	data = kzalloc(sizeof(*data), GFP_KERNEL);
 	if (!data) {
-		printk(KERN_ERR PFX "unable to alloc powernow_k8_data");
+		pr_err("unable to alloc powernow_k8_data");
 		return -ENOMEM;
 	}
···
 	 * an UP version, and is deprecated by AMD.
 	 */
 	if (num_online_cpus() != 1) {
-		printk_once(missing_pss_msg);
+		pr_err_once(MISSING_PSS_MSG);
 		goto err_out;
 	}
 	if (pol->cpu != 0) {
-		printk(KERN_ERR FW_BUG PFX "No ACPI _PSS objects for "
-		       "CPU other than CPU0. Complain to your BIOS "
-		       "vendor.\n");
+		pr_err(FW_BUG "No ACPI _PSS objects for CPU other than CPU0. Complain to your BIOS vendor.\n");
 		goto err_out;
 	}
 	rc = find_psb_table(data);
···
 	/* min/max the cpu is capable of */
 	if (cpufreq_table_validate_and_show(pol, data->powernow_table)) {
-		printk(KERN_ERR FW_BUG PFX "invalid powernow_table\n");
+		pr_err(FW_BUG "invalid powernow_table\n");
 		powernow_k8_cpu_exit_acpi(data);
 		kfree(data->powernow_table);
 		kfree(data);
···
 	pr_debug("cpu_init done, current fid 0x%x, vid 0x%x\n",
-		data->currfid, data->currvid);
+		 data->currfid, data->currvid);
 
 	/* Point all the CPUs in this policy to the same data */
 	for_each_cpu(cpu, pol->cpus)
···
 		goto request;
 
 	if (strncmp(cur_drv, drv, min_t(size_t, strlen(cur_drv), strlen(drv))))
-		pr_warn(PFX "WTF driver: %s\n", cur_drv);
+		pr_warn("WTF driver: %s\n", cur_drv);
 
 	return;
 
 request:
-	pr_warn(PFX "This CPU is not supported anymore, using acpi-cpufreq instead.\n");
+	pr_warn("This CPU is not supported anymore, using acpi-cpufreq instead.\n");
 	request_module(drv);
 }
···
 	if (ret)
 		return ret;
 
-	pr_info(PFX "Found %d %s (%d cpu cores) (" VERSION ")\n",
+	pr_info("Found %d %s (%d cpu cores) (" VERSION ")\n",
 		num_online_nodes(), boot_cpu_data.x86_model_id, supported_cpus);
 
 	return ret;
···
 	cpufreq_unregister_driver(&cpufreq_amd64_driver);
 }
 
-MODULE_AUTHOR("Paul Devriendt <paul.devriendt@amd.com> and "
-		"Mark Langsdorf <mark.langsdorf@amd.com>");
+MODULE_AUTHOR("Paul Devriendt <paul.devriendt@amd.com>");
+MODULE_AUTHOR("Mark Langsdorf <mark.langsdorf@amd.com>");
 MODULE_DESCRIPTION("AMD Athlon 64 and Opteron processor frequency driver.");
 MODULE_LICENSE("GPL");
drivers/cpufreq/powernow-k8.h  (+1 -1)
···
 	u32 vidmvs;  /* usable value calculated from mvs */
 	u32 vstable; /* voltage stabilization time, units 20 us */
 	u32 plllock; /* pll lock time, units 1 us */
-	u32 exttype; /* extended interface = 1 */
+	u32 exttype; /* extended interface = 1 */
 
 	/* keep track of the current fid / vid or pstate */
 	u32 currvid;
drivers/cpufreq/powernv-cpufreq.c  (+1 -1)
···
  * firmware for CPU 'cpu'. This value is reported through the sysfs
  * file cpuinfo_cur_freq.
  */
-unsigned int powernv_cpufreq_get(unsigned int cpu)
+static unsigned int powernv_cpufreq_get(unsigned int cpu)
 {
 	struct powernv_smp_call_data freq_data;
 
drivers/cpufreq/ppc_cbe_cpufreq.c  (+5 -4)
···
 
 static int cbe_cpufreq_cpu_init(struct cpufreq_policy *policy)
 {
+	struct cpufreq_frequency_table *pos;
 	const u32 *max_freqp;
 	u32 max_freq;
-	int i, cur_pmode;
+	int cur_pmode;
 	struct device_node *cpu;
 
 	cpu = of_get_cpu_node(policy->cpu, NULL);
···
 	pr_debug("initializing frequency table\n");
 
 	/* initialize frequency table */
-	for (i=0; cbe_freqs[i].frequency!=CPUFREQ_TABLE_END; i++) {
-		cbe_freqs[i].frequency = max_freq / cbe_freqs[i].driver_data;
-		pr_debug("%d: %d\n", i, cbe_freqs[i].frequency);
+	cpufreq_for_each_entry(pos, cbe_freqs) {
+		pos->frequency = max_freq / pos->driver_data;
+		pr_debug("%d: %d\n", (int)(pos - cbe_freqs), pos->frequency);
 	}
 
 	/* if DEBUG is enabled set_pmode() measures the latency
drivers/cpufreq/s3c2416-cpufreq.c  (+17 -23)
···
 static void __init s3c2416_cpufreq_cfg_regulator(struct s3c2416_data *s3c_freq)
 {
 	int count, v, i, found;
-	struct cpufreq_frequency_table *freq;
+	struct cpufreq_frequency_table *pos;
 	struct s3c2416_dvfs *dvfs;
 
 	count = regulator_count_voltages(s3c_freq->vddarm);
···
 		return;
 	}
 
-	freq = s3c_freq->freq_table;
-	while (count > 0 && freq->frequency != CPUFREQ_TABLE_END) {
-		if (freq->frequency == CPUFREQ_ENTRY_INVALID)
-			continue;
+	if (!count)
+		goto out;
 
-		dvfs = &s3c2416_dvfs_table[freq->driver_data];
+	cpufreq_for_each_valid_entry(pos, s3c_freq->freq_table) {
+		dvfs = &s3c2416_dvfs_table[pos->driver_data];
 		found = 0;
 
 		/* Check only the min-voltage, more is always ok on S3C2416 */
···
 
 		if (!found) {
 			pr_debug("cpufreq: %dkHz unsupported by regulator\n",
-				 freq->frequency);
-			freq->frequency = CPUFREQ_ENTRY_INVALID;
+				 pos->frequency);
+			pos->frequency = CPUFREQ_ENTRY_INVALID;
 		}
-
-		freq++;
 	}
 
+out:
 	/* Guessed */
 	s3c_freq->regulator_latency = 1 * 1000 * 1000;
 }
···
 static int __init s3c2416_cpufreq_driver_init(struct cpufreq_policy *policy)
 {
 	struct s3c2416_data *s3c_freq = &s3c2416_cpufreq;
-	struct cpufreq_frequency_table *freq;
+	struct cpufreq_frequency_table *pos;
 	struct clk *msysclk;
 	unsigned long rate;
 	int ret;
···
 	s3c_freq->regulator_latency = 0;
 #endif
 
-	freq = s3c_freq->freq_table;
-	while (freq->frequency != CPUFREQ_TABLE_END) {
+	cpufreq_for_each_entry(pos, s3c_freq->freq_table) {
 		/* special handling for dvs mode */
-		if (freq->driver_data == 0) {
+		if (pos->driver_data == 0) {
 			if (!s3c_freq->hclk) {
 				pr_debug("cpufreq: %dkHz unsupported as it would need unavailable dvs mode\n",
-					 freq->frequency);
-				freq->frequency = CPUFREQ_ENTRY_INVALID;
+					 pos->frequency);
+				pos->frequency = CPUFREQ_ENTRY_INVALID;
 			} else {
-				freq++;
 				continue;
 			}
 		}
 
 		/* Check for frequencies we can generate */
 		rate = clk_round_rate(s3c_freq->armdiv,
-				      freq->frequency * 1000);
+				      pos->frequency * 1000);
 		rate /= 1000;
-		if (rate != freq->frequency) {
+		if (rate != pos->frequency) {
 			pr_debug("cpufreq: %dkHz unsupported by clock (clk_round_rate return %lu)\n",
-				 freq->frequency, rate);
-			freq->frequency = CPUFREQ_ENTRY_INVALID;
+				 pos->frequency, rate);
+			pos->frequency = CPUFREQ_ENTRY_INVALID;
 		}
-
-		freq++;
 	}
 
 	/* Datasheet says PLL stabalisation time must be at least 300us,
drivers/cpufreq/s3c64xx-cpufreq.c  (+5 -10)
···
 		pr_err("Unable to check supported voltages\n");
 	}
 
-	freq = s3c64xx_freq_table;
-	while (count > 0 && freq->frequency != CPUFREQ_TABLE_END) {
-		if (freq->frequency == CPUFREQ_ENTRY_INVALID)
-			continue;
+	if (!count)
+		goto out;
 
+	cpufreq_for_each_valid_entry(freq, s3c64xx_freq_table) {
 		dvfs = &s3c64xx_dvfs_table[freq->driver_data];
 		found = 0;
···
 				 freq->frequency);
 			freq->frequency = CPUFREQ_ENTRY_INVALID;
 		}
-
-		freq++;
 	}
 
+out:
 	/* Guess based on having to do an I2C/SPI write; in future we
 	 * will be able to query the regulator performance here. */
 	regulator_latency = 1 * 1000 * 1000;
···
 	}
 #endif
 
-	freq = s3c64xx_freq_table;
-	while (freq->frequency != CPUFREQ_TABLE_END) {
+	cpufreq_for_each_entry(freq, s3c64xx_freq_table) {
 		unsigned long r;
 
 		/* Check for frequencies we can generate */
···
 		 * frequency is the maximum we can support. */
 		if (!vddarm && freq->frequency > clk_get_rate(policy->clk) / 1000)
 			freq->frequency = CPUFREQ_ENTRY_INVALID;
-
-		freq++;
 	}
 
 	/* Datasheet says PLL stabalisation time (if we were to use
+2 -4
drivers/cpufreq/s5pv210-cpufreq.c
··· 175 175 mutex_lock(&set_freq_lock); 176 176 177 177 if (no_cpufreq_access) { 178 - #ifdef CONFIG_PM_VERBOSE 179 - pr_err("%s:%d denied access to %s as it is disabled" 180 - "temporarily\n", __FILE__, __LINE__, __func__); 181 - #endif 178 + pr_err("Denied access to %s as it is disabled temporarily\n", 179 + __func__); 182 180 ret = -EINVAL; 183 181 goto exit; 184 182 }
+1 -1
drivers/cpufreq/speedstep-centrino.c
··· 28 28 #include <asm/cpu_device_id.h> 29 29 30 30 #define PFX "speedstep-centrino: " 31 - #define MAINTAINER "cpufreq@vger.kernel.org" 31 + #define MAINTAINER "linux-pm@vger.kernel.org" 32 32 33 33 #define INTEL_MSR_RANGE (0xffff) 34 34
+2 -7
drivers/cpufreq/tegra-cpufreq.c
··· 82 82 return ret; 83 83 } 84 84 85 - static int tegra_update_cpu_speed(struct cpufreq_policy *policy, 86 - unsigned long rate) 85 + static int tegra_target(struct cpufreq_policy *policy, unsigned int index) 87 86 { 87 + unsigned long rate = freq_table[index].frequency; 88 88 int ret = 0; 89 89 90 90 /* ··· 104 104 rate); 105 105 106 106 return ret; 107 - } 108 - 109 - static int tegra_target(struct cpufreq_policy *policy, unsigned int index) 110 - { 111 - return tegra_update_cpu_speed(policy, freq_table[index].frequency); 112 107 } 113 108 114 109 static int tegra_cpu_init(struct cpufreq_policy *policy)
+6
drivers/cpuidle/Kconfig.arm
··· 18 18 define different C-states for little and big cores through the 19 19 multiple CPU idle drivers infrastructure. 20 20 21 + config ARM_CLPS711X_CPUIDLE 22 + bool "CPU Idle Driver for CLPS711X processors" 23 + depends on ARCH_CLPS711X || COMPILE_TEST 24 + help 25 + Select this to enable cpuidle on Cirrus Logic CLPS711X SOCs. 26 + 21 27 config ARM_HIGHBANK_CPUIDLE 22 28 bool "CPU Idle Driver for Calxeda processors" 23 29 depends on ARM_PSCI
+1
drivers/cpuidle/Makefile
··· 9 9 # ARM SoC drivers 10 10 obj-$(CONFIG_ARM_ARMADA_370_XP_CPUIDLE) += cpuidle-armada-370-xp.o 11 11 obj-$(CONFIG_ARM_BIG_LITTLE_CPUIDLE) += cpuidle-big_little.o 12 + obj-$(CONFIG_ARM_CLPS711X_CPUIDLE) += cpuidle-clps711x.o 12 13 obj-$(CONFIG_ARM_HIGHBANK_CPUIDLE) += cpuidle-calxeda.o 13 14 obj-$(CONFIG_ARM_KIRKWOOD_CPUIDLE) += cpuidle-kirkwood.o 14 15 obj-$(CONFIG_ARM_ZYNQ_CPUIDLE) += cpuidle-zynq.o
+64
drivers/cpuidle/cpuidle-clps711x.c
··· 1 + /* 2 + * CLPS711X CPU idle driver 3 + * 4 + * Copyright (C) 2014 Alexander Shiyan <shc_work@mail.ru> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License as published by 8 + * the Free Software Foundation; either version 2 of the License, or 9 + * (at your option) any later version. 10 + */ 11 + 12 + #include <linux/cpuidle.h> 13 + #include <linux/err.h> 14 + #include <linux/io.h> 15 + #include <linux/module.h> 16 + #include <linux/platform_device.h> 17 + 18 + #define CLPS711X_CPUIDLE_NAME "clps711x-cpuidle" 19 + 20 + static void __iomem *clps711x_halt; 21 + 22 + static int clps711x_cpuidle_halt(struct cpuidle_device *dev, 23 + struct cpuidle_driver *drv, int index) 24 + { 25 + writel(0xaa, clps711x_halt); 26 + 27 + return index; 28 + } 29 + 30 + static struct cpuidle_driver clps711x_idle_driver = { 31 + .name = CLPS711X_CPUIDLE_NAME, 32 + .owner = THIS_MODULE, 33 + .states[0] = { 34 + .name = "HALT", 35 + .desc = "CLPS711X HALT", 36 + .enter = clps711x_cpuidle_halt, 37 + .exit_latency = 1, 38 + }, 39 + .state_count = 1, 40 + }; 41 + 42 + static int __init clps711x_cpuidle_probe(struct platform_device *pdev) 43 + { 44 + struct resource *res; 45 + 46 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 47 + clps711x_halt = devm_ioremap_resource(&pdev->dev, res); 48 + if (IS_ERR(clps711x_halt)) 49 + return PTR_ERR(clps711x_halt); 50 + 51 + return cpuidle_register(&clps711x_idle_driver, NULL); 52 + } 53 + 54 + static struct platform_driver clps711x_cpuidle_driver = { 55 + .driver = { 56 + .name = CLPS711X_CPUIDLE_NAME, 57 + .owner = THIS_MODULE, 58 + }, 59 + }; 60 + module_platform_driver_probe(clps711x_cpuidle_driver, clps711x_cpuidle_probe); 61 + 62 + MODULE_AUTHOR("Alexander Shiyan <shc_work@mail.ru>"); 63 + MODULE_DESCRIPTION("CLPS711X CPU idle driver"); 64 + MODULE_LICENSE("GPL");
+3 -2
drivers/devfreq/Kconfig
··· 70 70 depends on (CPU_EXYNOS4210 || SOC_EXYNOS4212 || SOC_EXYNOS4412) && !ARCH_MULTIPLATFORM 71 71 select ARCH_HAS_OPP 72 72 select DEVFREQ_GOV_SIMPLE_ONDEMAND 73 + select PM_OPP 73 74 help 74 75 This adds the DEVFREQ driver for Exynos4210 memory bus (vdd_int) 75 76 and Exynos4212/4412 memory interface and bus (vdd_mif + vdd_int). 76 77 It reads PPMU counters of memory controllers and adjusts 77 78 the operating frequencies and voltages with OPP support. 78 - To operate with optimal voltages, ASV support is required 79 - (CONFIG_EXYNOS_ASV). 79 + This does not yet operate with optimal voltages. 80 80 81 81 config ARM_EXYNOS5_BUS_DEVFREQ 82 82 bool "ARM Exynos5250 Bus DEVFREQ Driver" 83 83 depends on SOC_EXYNOS5250 84 84 select ARCH_HAS_OPP 85 85 select DEVFREQ_GOV_SIMPLE_ONDEMAND 86 + select PM_OPP 86 87 help 87 88 This adds the DEVFREQ driver for Exynos5250 bus interface (vdd_int). 88 89 It reads PPMU counters of memory controllers and adjusts the
+115 -10
drivers/devfreq/devfreq.c
··· 394 394 * @devfreq: the devfreq struct 395 395 * @skip: skip calling device_unregister(). 396 396 */ 397 - static void _remove_devfreq(struct devfreq *devfreq, bool skip) 397 + static void _remove_devfreq(struct devfreq *devfreq) 398 398 { 399 399 mutex_lock(&devfreq_list_lock); 400 400 if (IS_ERR(find_device_devfreq(devfreq->dev.parent))) { ··· 412 412 if (devfreq->profile->exit) 413 413 devfreq->profile->exit(devfreq->dev.parent); 414 414 415 - if (!skip && get_device(&devfreq->dev)) { 416 - device_unregister(&devfreq->dev); 417 - put_device(&devfreq->dev); 418 - } 419 - 420 415 mutex_destroy(&devfreq->lock); 421 416 kfree(devfreq); 422 417 } ··· 421 426 * @dev: the devfreq device 422 427 * 423 428 * This calls _remove_devfreq() if _remove_devfreq() is not called. 424 - * Note that devfreq_dev_release() could be called by _remove_devfreq() as 425 - * well as by others unregistering the device. 426 429 */ 427 430 static void devfreq_dev_release(struct device *dev) 428 431 { 429 432 struct devfreq *devfreq = to_devfreq(dev); 430 433 431 - _remove_devfreq(devfreq, true); 434 + _remove_devfreq(devfreq); 432 435 } 433 436 434 437 /** ··· 537 544 if (!devfreq) 538 545 return -EINVAL; 539 546 540 - _remove_devfreq(devfreq, false); 547 + device_unregister(&devfreq->dev); 548 + put_device(&devfreq->dev); 541 549 542 550 return 0; 543 551 } 544 552 EXPORT_SYMBOL(devfreq_remove_device); 553 + 554 + static int devm_devfreq_dev_match(struct device *dev, void *res, void *data) 555 + { 556 + struct devfreq **r = res; 557 + 558 + if (WARN_ON(!r || !*r)) 559 + return 0; 560 + 561 + return *r == data; 562 + } 563 + 564 + static void devm_devfreq_dev_release(struct device *dev, void *res) 565 + { 566 + devfreq_remove_device(*(struct devfreq **)res); 567 + } 568 + 569 + /** 570 + * devm_devfreq_add_device() - Resource-managed devfreq_add_device() 571 + * @dev: the device to add devfreq feature. 572 + * @profile: device-specific profile to run devfreq. 
573 + * @governor_name: name of the policy to choose frequency. 574 + * @data: private data for the governor. The devfreq framework does not 575 + * touch this value. 576 + * 577 + * This function manages automatically the memory of devfreq device using device 578 + * resource management and simplify the free operation for memory of devfreq 579 + * device. 580 + */ 581 + struct devfreq *devm_devfreq_add_device(struct device *dev, 582 + struct devfreq_dev_profile *profile, 583 + const char *governor_name, 584 + void *data) 585 + { 586 + struct devfreq **ptr, *devfreq; 587 + 588 + ptr = devres_alloc(devm_devfreq_dev_release, sizeof(*ptr), GFP_KERNEL); 589 + if (!ptr) 590 + return ERR_PTR(-ENOMEM); 591 + 592 + devfreq = devfreq_add_device(dev, profile, governor_name, data); 593 + if (IS_ERR(devfreq)) { 594 + devres_free(ptr); 595 + return ERR_PTR(-ENOMEM); 596 + } 597 + 598 + *ptr = devfreq; 599 + devres_add(dev, ptr); 600 + 601 + return devfreq; 602 + } 603 + EXPORT_SYMBOL(devm_devfreq_add_device); 604 + 605 + /** 606 + * devm_devfreq_remove_device() - Resource-managed devfreq_remove_device() 607 + * @dev: the device to add devfreq feature. 608 + * @devfreq: the devfreq instance to be removed 609 + */ 610 + void devm_devfreq_remove_device(struct device *dev, struct devfreq *devfreq) 611 + { 612 + WARN_ON(devres_release(dev, devm_devfreq_dev_release, 613 + devm_devfreq_dev_match, devfreq)); 614 + } 615 + EXPORT_SYMBOL(devm_devfreq_remove_device); 545 616 546 617 /** 547 618 * devfreq_suspend_device() - Suspend devfreq of a device. ··· 1168 1111 1169 1112 return ret; 1170 1113 } 1114 + 1115 + static void devm_devfreq_opp_release(struct device *dev, void *res) 1116 + { 1117 + devfreq_unregister_opp_notifier(dev, *(struct devfreq **)res); 1118 + } 1119 + 1120 + /** 1121 + * devm_ devfreq_register_opp_notifier() 1122 + * - Resource-managed devfreq_register_opp_notifier() 1123 + * @dev: The devfreq user device. (parent of devfreq) 1124 + * @devfreq: The devfreq object. 
1125 + */ 1126 + int devm_devfreq_register_opp_notifier(struct device *dev, 1127 + struct devfreq *devfreq) 1128 + { 1129 + struct devfreq **ptr; 1130 + int ret; 1131 + 1132 + ptr = devres_alloc(devm_devfreq_opp_release, sizeof(*ptr), GFP_KERNEL); 1133 + if (!ptr) 1134 + return -ENOMEM; 1135 + 1136 + ret = devfreq_register_opp_notifier(dev, devfreq); 1137 + if (ret) { 1138 + devres_free(ptr); 1139 + return ret; 1140 + } 1141 + 1142 + *ptr = devfreq; 1143 + devres_add(dev, ptr); 1144 + 1145 + return 0; 1146 + } 1147 + EXPORT_SYMBOL(devm_devfreq_register_opp_notifier); 1148 + 1149 + /** 1150 + * devm_devfreq_unregister_opp_notifier() 1151 + * - Resource-managed devfreq_unregister_opp_notifier() 1152 + * @dev: The devfreq user device. (parent of devfreq) 1153 + * @devfreq: The devfreq object. 1154 + */ 1155 + void devm_devfreq_unregister_opp_notifier(struct device *dev, 1156 + struct devfreq *devfreq) 1157 + { 1158 + WARN_ON(devres_release(dev, devm_devfreq_opp_release, 1159 + devm_devfreq_dev_match, devfreq)); 1160 + } 1161 + EXPORT_SYMBOL(devm_devfreq_unregister_opp_notifier); 1171 1162 1172 1163 MODULE_AUTHOR("MyungJoo Ham <myungjoo.ham@samsung.com>"); 1173 1164 MODULE_DESCRIPTION("devfreq class support");
+1 -1
drivers/devfreq/exynos/Makefile
··· 1 1 # Exynos DEVFREQ Drivers 2 - obj-$(CONFIG_ARM_EXYNOS4_BUS_DEVFREQ) += exynos4_bus.o 2 + obj-$(CONFIG_ARM_EXYNOS4_BUS_DEVFREQ) += exynos_ppmu.o exynos4_bus.o 3 3 obj-$(CONFIG_ARM_EXYNOS5_BUS_DEVFREQ) += exynos_ppmu.o exynos5_bus.o
+59 -158
drivers/devfreq/exynos/exynos4_bus.c
··· 25 25 #include <linux/regulator/consumer.h> 26 26 #include <linux/module.h> 27 27 28 - /* Exynos4 ASV has been in the mailing list, but not upstreamed, yet. */ 29 - #ifdef CONFIG_EXYNOS_ASV 30 - extern unsigned int exynos_result_of_asv; 31 - #endif 32 - 33 28 #include <mach/map.h> 34 29 30 + #include "exynos_ppmu.h" 35 31 #include "exynos4_bus.h" 36 32 37 33 #define MAX_SAFEVOLT 1200000 /* 1.2V */ ··· 40 44 /* Assume that the bus is saturated if the utilization is 40% */ 41 45 #define BUS_SATURATION_RATIO 40 42 46 43 - enum ppmu_counter { 44 - PPMU_PMNCNT0 = 0, 45 - PPMU_PMCCNT1, 46 - PPMU_PMNCNT2, 47 - PPMU_PMNCNT3, 48 - PPMU_PMNCNT_MAX, 49 - }; 50 - struct exynos4_ppmu { 51 - void __iomem *hw_base; 52 - unsigned int ccnt; 53 - unsigned int event; 54 - unsigned int count[PPMU_PMNCNT_MAX]; 55 - bool ccnt_overflow; 56 - bool count_overflow[PPMU_PMNCNT_MAX]; 57 - }; 58 - 59 47 enum busclk_level_idx { 60 48 LV_0 = 0, 61 49 LV_1, ··· 48 68 LV_4, 49 69 _LV_END 50 70 }; 71 + 72 + enum exynos_ppmu_idx { 73 + PPMU_DMC0, 74 + PPMU_DMC1, 75 + PPMU_END, 76 + }; 77 + 51 78 #define EX4210_LV_MAX LV_2 52 79 #define EX4x12_LV_MAX LV_4 53 80 #define EX4210_LV_NUM (LV_2 + 1) ··· 78 91 struct regulator *vdd_int; 79 92 struct regulator *vdd_mif; /* Exynos4412/4212 only */ 80 93 struct busfreq_opp_info curr_oppinfo; 81 - struct exynos4_ppmu dmc[2]; 94 + struct busfreq_ppmu_data ppmu_data; 82 95 83 96 struct notifier_block pm_notifier; 84 97 struct mutex lock; ··· 86 99 /* Dividers calculated at boot/probe-time */ 87 100 unsigned int dmc_divtable[_LV_END]; /* DMC0 */ 88 101 unsigned int top_divtable[_LV_END]; 89 - }; 90 - 91 - struct bus_opp_table { 92 - unsigned int idx; 93 - unsigned long clk; 94 - unsigned long volt; 95 102 }; 96 103 97 104 /* 4210 controls clock of mif and voltage of int */ ··· 505 524 return 0; 506 525 } 507 526 508 - 509 - static void busfreq_mon_reset(struct busfreq_data *data) 510 - { 511 - unsigned int i; 512 - 513 - for (i = 0; i < 2; i++) { 514 - void 
__iomem *ppmu_base = data->dmc[i].hw_base; 515 - 516 - /* Reset PPMU */ 517 - __raw_writel(0x8000000f, ppmu_base + 0xf010); 518 - __raw_writel(0x8000000f, ppmu_base + 0xf050); 519 - __raw_writel(0x6, ppmu_base + 0xf000); 520 - __raw_writel(0x0, ppmu_base + 0xf100); 521 - 522 - /* Set PPMU Event */ 523 - data->dmc[i].event = 0x6; 524 - __raw_writel(((data->dmc[i].event << 12) | 0x1), 525 - ppmu_base + 0xfc); 526 - 527 - /* Start PPMU */ 528 - __raw_writel(0x1, ppmu_base + 0xf000); 529 - } 530 - } 531 - 532 - static void exynos4_read_ppmu(struct busfreq_data *data) 533 - { 534 - int i, j; 535 - 536 - for (i = 0; i < 2; i++) { 537 - void __iomem *ppmu_base = data->dmc[i].hw_base; 538 - u32 overflow; 539 - 540 - /* Stop PPMU */ 541 - __raw_writel(0x0, ppmu_base + 0xf000); 542 - 543 - /* Update local data from PPMU */ 544 - overflow = __raw_readl(ppmu_base + 0xf050); 545 - 546 - data->dmc[i].ccnt = __raw_readl(ppmu_base + 0xf100); 547 - data->dmc[i].ccnt_overflow = overflow & (1 << 31); 548 - 549 - for (j = 0; j < PPMU_PMNCNT_MAX; j++) { 550 - data->dmc[i].count[j] = __raw_readl( 551 - ppmu_base + (0xf110 + (0x10 * j))); 552 - data->dmc[i].count_overflow[j] = overflow & (1 << j); 553 - } 554 - } 555 - 556 - busfreq_mon_reset(data); 557 - } 558 - 559 527 static int exynos4x12_get_intspec(unsigned long mifclk) 560 528 { 561 529 int i = 0; ··· 628 698 return err; 629 699 } 630 700 631 - static int exynos4_get_busier_dmc(struct busfreq_data *data) 632 - { 633 - u64 p0 = data->dmc[0].count[0]; 634 - u64 p1 = data->dmc[1].count[0]; 635 - 636 - p0 *= data->dmc[1].ccnt; 637 - p1 *= data->dmc[0].ccnt; 638 - 639 - if (data->dmc[1].ccnt == 0) 640 - return 0; 641 - 642 - if (p0 > p1) 643 - return 0; 644 - return 1; 645 - } 646 - 647 701 static int exynos4_bus_get_dev_status(struct device *dev, 648 702 struct devfreq_dev_status *stat) 649 703 { 650 704 struct busfreq_data *data = dev_get_drvdata(dev); 651 - int busier_dmc; 652 - int cycles_x2 = 2; /* 2 x cycles */ 653 - void __iomem 
*addr; 654 - u32 timing; 655 - u32 memctrl; 705 + struct busfreq_ppmu_data *ppmu_data = &data->ppmu_data; 706 + int busier; 656 707 657 - exynos4_read_ppmu(data); 658 - busier_dmc = exynos4_get_busier_dmc(data); 708 + exynos_read_ppmu(ppmu_data); 709 + busier = exynos_get_busier_ppmu(ppmu_data); 659 710 stat->current_frequency = data->curr_oppinfo.rate; 660 711 661 - if (busier_dmc) 662 - addr = S5P_VA_DMC1; 663 - else 664 - addr = S5P_VA_DMC0; 665 - 666 - memctrl = __raw_readl(addr + 0x04); /* one of DDR2/3/LPDDR2 */ 667 - timing = __raw_readl(addr + 0x38); /* CL or WL/RL values */ 668 - 669 - switch ((memctrl >> 8) & 0xf) { 670 - case 0x4: /* DDR2 */ 671 - cycles_x2 = ((timing >> 16) & 0xf) * 2; 672 - break; 673 - case 0x5: /* LPDDR2 */ 674 - case 0x6: /* DDR3 */ 675 - cycles_x2 = ((timing >> 8) & 0xf) + ((timing >> 0) & 0xf); 676 - break; 677 - default: 678 - pr_err("%s: Unknown Memory Type(%d).\n", __func__, 679 - (memctrl >> 8) & 0xf); 680 - return -EINVAL; 681 - } 682 - 683 712 /* Number of cycles spent on memory access */ 684 - stat->busy_time = data->dmc[busier_dmc].count[0] / 2 * (cycles_x2 + 2); 713 + stat->busy_time = ppmu_data->ppmu[busier].count[PPMU_PMNCNT3]; 685 714 stat->busy_time *= 100 / BUS_SATURATION_RATIO; 686 - stat->total_time = data->dmc[busier_dmc].ccnt; 715 + stat->total_time = ppmu_data->ppmu[busier].ccnt; 687 716 688 717 /* If the counters have overflown, retry */ 689 - if (data->dmc[busier_dmc].ccnt_overflow || 690 - data->dmc[busier_dmc].count_overflow[0]) 718 + if (ppmu_data->ppmu[busier].ccnt_overflow || 719 + ppmu_data->ppmu[busier].count_overflow[0]) 691 720 return -EAGAIN; 692 721 693 722 return 0; 694 - } 695 - 696 - static void exynos4_bus_exit(struct device *dev) 697 - { 698 - struct busfreq_data *data = dev_get_drvdata(dev); 699 - 700 - devfreq_unregister_opp_notifier(dev, data->devfreq); 701 723 } 702 724 703 725 static struct devfreq_dev_profile exynos4_devfreq_profile = { ··· 657 775 .polling_ms = 50, 658 776 .target = 
exynos4_bus_target, 659 777 .get_dev_status = exynos4_bus_get_dev_status, 660 - .exit = exynos4_bus_exit, 661 778 }; 662 779 663 780 static int exynos4210_init_tables(struct busfreq_data *data) ··· 718 837 data->top_divtable[i] = tmp; 719 838 } 720 839 721 - #ifdef CONFIG_EXYNOS_ASV 722 - tmp = exynos4_result_of_asv; 723 - #else 840 + /* 841 + * TODO: init tmp based on busfreq_data 842 + * (device-tree or platform-data) 843 + */ 724 844 tmp = 0; /* Max voltages for the reliability of the unknown */ 725 - #endif 726 845 727 846 pr_debug("ASV Group of Exynos4 is %d\n", tmp); 728 847 /* Use merged grouping for voltage */ ··· 803 922 data->dmc_divtable[i] = tmp; 804 923 } 805 924 806 - #ifdef CONFIG_EXYNOS_ASV 807 - tmp = exynos4_result_of_asv; 808 - #else 809 925 tmp = 0; /* Max voltages for the reliability of the unknown */ 810 - #endif 811 926 812 927 if (tmp > 8) 813 928 tmp = 0; ··· 897 1020 static int exynos4_busfreq_probe(struct platform_device *pdev) 898 1021 { 899 1022 struct busfreq_data *data; 1023 + struct busfreq_ppmu_data *ppmu_data; 900 1024 struct dev_pm_opp *opp; 901 1025 struct device *dev = &pdev->dev; 902 1026 int err = 0; ··· 908 1030 return -ENOMEM; 909 1031 } 910 1032 1033 + ppmu_data = &data->ppmu_data; 1034 + ppmu_data->ppmu_end = PPMU_END; 1035 + ppmu_data->ppmu = devm_kzalloc(dev, 1036 + sizeof(struct exynos_ppmu) * PPMU_END, 1037 + GFP_KERNEL); 1038 + if (!ppmu_data->ppmu) { 1039 + dev_err(dev, "Failed to allocate memory for exynos_ppmu\n"); 1040 + return -ENOMEM; 1041 + } 1042 + 911 1043 data->type = pdev->id_entry->driver_data; 912 - data->dmc[0].hw_base = S5P_VA_DMC0; 913 - data->dmc[1].hw_base = S5P_VA_DMC1; 1044 + ppmu_data->ppmu[PPMU_DMC0].hw_base = S5P_VA_DMC0; 1045 + ppmu_data->ppmu[PPMU_DMC1].hw_base = S5P_VA_DMC1; 914 1046 data->pm_notifier.notifier_call = exynos4_busfreq_pm_notifier_event; 915 1047 data->dev = dev; 916 1048 mutex_init(&data->lock); ··· 936 1048 dev_err(dev, "Cannot determine the device id %d\n", data->type); 937 
1049 err = -EINVAL; 938 1050 } 939 - if (err) 1051 + if (err) { 1052 + dev_err(dev, "Cannot initialize busfreq table %d\n", 1053 + data->type); 940 1054 return err; 1055 + } 941 1056 942 1057 data->vdd_int = devm_regulator_get(dev, "vdd_int"); 943 1058 if (IS_ERR(data->vdd_int)) { ··· 970 1079 971 1080 platform_set_drvdata(pdev, data); 972 1081 973 - busfreq_mon_reset(data); 974 - 975 - data->devfreq = devfreq_add_device(dev, &exynos4_devfreq_profile, 1082 + data->devfreq = devm_devfreq_add_device(dev, &exynos4_devfreq_profile, 976 1083 "simple_ondemand", NULL); 977 1084 if (IS_ERR(data->devfreq)) 978 1085 return PTR_ERR(data->devfreq); 979 1086 980 - devfreq_register_opp_notifier(dev, data->devfreq); 1087 + /* 1088 + * Start PPMU (Performance Profiling Monitoring Unit) to check 1089 + * utilization of each IP in the Exynos4 SoC. 1090 + */ 1091 + busfreq_mon_reset(ppmu_data); 981 1092 1093 + /* Register opp_notifier for Exynos4 busfreq */ 1094 + err = devm_devfreq_register_opp_notifier(dev, data->devfreq); 1095 + if (err < 0) { 1096 + dev_err(dev, "Failed to register opp notifier\n"); 1097 + return err; 1098 + } 1099 + 1100 + /* Register pm_notifier for Exynos4 busfreq */ 982 1101 err = register_pm_notifier(&data->pm_notifier); 983 1102 if (err) { 984 1103 dev_err(dev, "Failed to setup pm notifier\n"); 985 - devfreq_remove_device(data->devfreq); 986 1104 return err; 987 1105 } 988 1106 ··· 1002 1102 { 1003 1103 struct busfreq_data *data = platform_get_drvdata(pdev); 1004 1104 1105 + /* Unregister all of notifier chain */ 1005 1106 unregister_pm_notifier(&data->pm_notifier); 1006 - devfreq_remove_device(data->devfreq); 1007 1107 1008 1108 return 0; 1009 1109 } 1010 1110 1111 + #ifdef CONFIG_PM_SLEEP 1011 1112 static int exynos4_busfreq_resume(struct device *dev) 1012 1113 { 1013 1114 struct busfreq_data *data = dev_get_drvdata(dev); 1115 + struct busfreq_ppmu_data *ppmu_data = &data->ppmu_data; 1014 1116 1015 - busfreq_mon_reset(data); 1117 + 
busfreq_mon_reset(ppmu_data); 1016 1118 return 0; 1017 1119 } 1120 + #endif 1018 1121 1019 - static const struct dev_pm_ops exynos4_busfreq_pm = { 1020 - .resume = exynos4_busfreq_resume, 1021 - }; 1122 + static SIMPLE_DEV_PM_OPS(exynos4_busfreq_pm_ops, NULL, exynos4_busfreq_resume); 1022 1123 1023 1124 static const struct platform_device_id exynos4_busfreq_id[] = { 1024 1125 { "exynos4210-busfreq", TYPE_BUSF_EXYNOS4210 }, ··· 1035 1134 .driver = { 1036 1135 .name = "exynos4-busfreq", 1037 1136 .owner = THIS_MODULE, 1038 - .pm = &exynos4_busfreq_pm, 1137 + .pm = &exynos4_busfreq_pm_ops, 1039 1138 }, 1040 1139 }; 1041 1140
+36 -94
drivers/devfreq/exynos/exynos5_bus.c
··· 50 50 struct device *dev; 51 51 struct devfreq *devfreq; 52 52 struct regulator *vdd_int; 53 - struct exynos_ppmu ppmu[PPMU_END]; 53 + struct busfreq_ppmu_data ppmu_data; 54 54 unsigned long curr_freq; 55 55 bool disabled; 56 56 ··· 74 74 {LV_4, 100000, 1025000}, 75 75 {0, 0, 0}, 76 76 }; 77 - 78 - static void busfreq_mon_reset(struct busfreq_data_int *data) 79 - { 80 - unsigned int i; 81 - 82 - for (i = PPMU_RIGHT; i < PPMU_END; i++) { 83 - void __iomem *ppmu_base = data->ppmu[i].hw_base; 84 - 85 - /* Reset the performance and cycle counters */ 86 - exynos_ppmu_reset(ppmu_base); 87 - 88 - /* Setup count registers to monitor read/write transactions */ 89 - data->ppmu[i].event[PPMU_PMNCNT3] = RDWR_DATA_COUNT; 90 - exynos_ppmu_setevent(ppmu_base, PPMU_PMNCNT3, 91 - data->ppmu[i].event[PPMU_PMNCNT3]); 92 - 93 - exynos_ppmu_start(ppmu_base); 94 - } 95 - } 96 - 97 - static void exynos5_read_ppmu(struct busfreq_data_int *data) 98 - { 99 - int i, j; 100 - 101 - for (i = PPMU_RIGHT; i < PPMU_END; i++) { 102 - void __iomem *ppmu_base = data->ppmu[i].hw_base; 103 - 104 - exynos_ppmu_stop(ppmu_base); 105 - 106 - /* Update local data from PPMU */ 107 - data->ppmu[i].ccnt = __raw_readl(ppmu_base + PPMU_CCNT); 108 - 109 - for (j = PPMU_PMNCNT0; j < PPMU_PMNCNT_MAX; j++) { 110 - if (data->ppmu[i].event[j] == 0) 111 - data->ppmu[i].count[j] = 0; 112 - else 113 - data->ppmu[i].count[j] = 114 - exynos_ppmu_read(ppmu_base, j); 115 - } 116 - } 117 - 118 - busfreq_mon_reset(data); 119 - } 120 77 121 78 static int exynos5_int_setvolt(struct busfreq_data_int *data, 122 79 unsigned long volt) ··· 142 185 return err; 143 186 } 144 187 145 - static int exynos5_get_busier_dmc(struct busfreq_data_int *data) 146 - { 147 - int i, j; 148 - int busy = 0; 149 - unsigned int temp = 0; 150 - 151 - for (i = PPMU_RIGHT; i < PPMU_END; i++) { 152 - for (j = PPMU_PMNCNT0; j < PPMU_PMNCNT_MAX; j++) { 153 - if (data->ppmu[i].count[j] > temp) { 154 - temp = data->ppmu[i].count[j]; 155 - busy = i; 156 - 
} 157 - } 158 - } 159 - 160 - return busy; 161 - } 162 - 163 188 static int exynos5_int_get_dev_status(struct device *dev, 164 189 struct devfreq_dev_status *stat) 165 190 { 166 191 struct platform_device *pdev = container_of(dev, struct platform_device, 167 192 dev); 168 193 struct busfreq_data_int *data = platform_get_drvdata(pdev); 194 + struct busfreq_ppmu_data *ppmu_data = &data->ppmu_data; 169 195 int busier_dmc; 170 196 171 - exynos5_read_ppmu(data); 172 - busier_dmc = exynos5_get_busier_dmc(data); 197 + exynos_read_ppmu(ppmu_data); 198 + busier_dmc = exynos_get_busier_ppmu(ppmu_data); 173 199 174 200 stat->current_frequency = data->curr_freq; 175 201 176 202 /* Number of cycles spent on memory access */ 177 - stat->busy_time = data->ppmu[busier_dmc].count[PPMU_PMNCNT3]; 203 + stat->busy_time = ppmu_data->ppmu[busier_dmc].count[PPMU_PMNCNT3]; 178 204 stat->busy_time *= 100 / INT_BUS_SATURATION_RATIO; 179 - stat->total_time = data->ppmu[busier_dmc].ccnt; 205 + stat->total_time = ppmu_data->ppmu[busier_dmc].ccnt; 180 206 181 207 return 0; 182 - } 183 - static void exynos5_int_exit(struct device *dev) 184 - { 185 - struct platform_device *pdev = container_of(dev, struct platform_device, 186 - dev); 187 - struct busfreq_data_int *data = platform_get_drvdata(pdev); 188 - 189 - devfreq_unregister_opp_notifier(dev, data->devfreq); 190 208 } 191 209 192 210 static struct devfreq_dev_profile exynos5_devfreq_int_profile = { ··· 169 237 .polling_ms = 100, 170 238 .target = exynos5_busfreq_int_target, 171 239 .get_dev_status = exynos5_int_get_dev_status, 172 - .exit = exynos5_int_exit, 173 240 }; 174 241 175 242 static int exynos5250_init_int_tables(struct busfreq_data_int *data) ··· 246 315 static int exynos5_busfreq_int_probe(struct platform_device *pdev) 247 316 { 248 317 struct busfreq_data_int *data; 318 + struct busfreq_ppmu_data *ppmu_data; 249 319 struct dev_pm_opp *opp; 250 320 struct device *dev = &pdev->dev; 251 321 struct device_node *np; ··· 262 330 return 
-ENOMEM; 263 331 } 264 332 333 + ppmu_data = &data->ppmu_data; 334 + ppmu_data->ppmu_end = PPMU_END; 335 + ppmu_data->ppmu = devm_kzalloc(dev, 336 + sizeof(struct exynos_ppmu) * PPMU_END, 337 + GFP_KERNEL); 338 + if (!ppmu_data->ppmu) { 339 + dev_err(dev, "Failed to allocate memory for exynos_ppmu\n"); 340 + return -ENOMEM; 341 + } 342 + 265 343 np = of_find_compatible_node(NULL, NULL, "samsung,exynos5250-ppmu"); 266 344 if (np == NULL) { 267 345 pr_err("Unable to find PPMU node\n"); 268 346 return -ENOENT; 269 347 } 270 348 271 - for (i = PPMU_RIGHT; i < PPMU_END; i++) { 349 + for (i = 0; i < ppmu_data->ppmu_end; i++) { 272 350 /* map PPMU memory region */ 273 - data->ppmu[i].hw_base = of_iomap(np, i); 274 - if (data->ppmu[i].hw_base == NULL) { 351 + ppmu_data->ppmu[i].hw_base = of_iomap(np, i); 352 + if (ppmu_data->ppmu[i].hw_base == NULL) { 275 353 dev_err(&pdev->dev, "failed to map memory region\n"); 276 354 return -ENOMEM; 277 355 } ··· 332 390 333 391 platform_set_drvdata(pdev, data); 334 392 335 - busfreq_mon_reset(data); 393 + busfreq_mon_reset(ppmu_data); 336 394 337 - data->devfreq = devfreq_add_device(dev, &exynos5_devfreq_int_profile, 395 + data->devfreq = devm_devfreq_add_device(dev, &exynos5_devfreq_int_profile, 338 396 "simple_ondemand", NULL); 397 + if (IS_ERR(data->devfreq)) 398 + return PTR_ERR(data->devfreq); 339 399 340 - if (IS_ERR(data->devfreq)) { 341 - err = PTR_ERR(data->devfreq); 342 - goto err_devfreq_add; 400 + err = devm_devfreq_register_opp_notifier(dev, data->devfreq); 401 + if (err < 0) { 402 + dev_err(dev, "Failed to register opp notifier\n"); 403 + return err; 343 404 } 344 - 345 - devfreq_register_opp_notifier(dev, data->devfreq); 346 405 347 406 err = register_pm_notifier(&data->pm_notifier); 348 407 if (err) { 349 408 dev_err(dev, "Failed to setup pm notifier\n"); 350 - goto err_devfreq_add; 409 + return err; 351 410 } 352 411 353 412 /* TODO: Add a new QOS class for int/mif bus */ 354 413 pm_qos_add_request(&data->int_req, 
PM_QOS_NETWORK_THROUGHPUT, -1); 355 414 356 415 return 0; 357 - 358 - err_devfreq_add: 359 - devfreq_remove_device(data->devfreq); 360 - return err; 361 416 } 362 417 363 418 static int exynos5_busfreq_int_remove(struct platform_device *pdev) ··· 363 424 364 425 pm_qos_remove_request(&data->int_req); 365 426 unregister_pm_notifier(&data->pm_notifier); 366 - devfreq_remove_device(data->devfreq); 367 427 368 428 return 0; 369 429 } 370 430 431 + #ifdef CONFIG_PM_SLEEP 371 432 static int exynos5_busfreq_int_resume(struct device *dev) 372 433 { 373 434 struct platform_device *pdev = container_of(dev, struct platform_device, 374 435 dev); 375 436 struct busfreq_data_int *data = platform_get_drvdata(pdev); 437 + struct busfreq_ppmu_data *ppmu_data = &data->ppmu_data; 376 438 377 - busfreq_mon_reset(data); 439 + busfreq_mon_reset(ppmu_data); 378 440 return 0; 379 441 } 380 - 381 442 static const struct dev_pm_ops exynos5_busfreq_int_pm = { 382 443 .resume = exynos5_busfreq_int_resume, 383 444 }; 445 + #endif 446 + static SIMPLE_DEV_PM_OPS(exynos5_busfreq_int_pm_ops, NULL, 447 + exynos5_busfreq_int_resume); 384 448 385 449 /* platform device pointer for exynos5 devfreq device. */ 386 450 static struct platform_device *exynos5_devfreq_pdev; ··· 394 452 .driver = { 395 453 .name = "exynos5-bus-int", 396 454 .owner = THIS_MODULE, 397 - .pm = &exynos5_busfreq_int_pm, 455 + .pm = &exynos5_busfreq_int_pm_ops, 398 456 }, 399 457 }; 400 458
+60
drivers/devfreq/exynos/exynos_ppmu.c
··· 54 54 55 55 return total; 56 56 } 57 + 58 + void busfreq_mon_reset(struct busfreq_ppmu_data *ppmu_data) 59 + { 60 + unsigned int i; 61 + 62 + for (i = 0; i < ppmu_data->ppmu_end; i++) { 63 + void __iomem *ppmu_base = ppmu_data->ppmu[i].hw_base; 64 + 65 + /* Reset the performance and cycle counters */ 66 + exynos_ppmu_reset(ppmu_base); 67 + 68 + /* Setup count registers to monitor read/write transactions */ 69 + ppmu_data->ppmu[i].event[PPMU_PMNCNT3] = RDWR_DATA_COUNT; 70 + exynos_ppmu_setevent(ppmu_base, PPMU_PMNCNT3, 71 + ppmu_data->ppmu[i].event[PPMU_PMNCNT3]); 72 + 73 + exynos_ppmu_start(ppmu_base); 74 + } 75 + } 76 + 77 + void exynos_read_ppmu(struct busfreq_ppmu_data *ppmu_data) 78 + { 79 + int i, j; 80 + 81 + for (i = 0; i < ppmu_data->ppmu_end; i++) { 82 + void __iomem *ppmu_base = ppmu_data->ppmu[i].hw_base; 83 + 84 + exynos_ppmu_stop(ppmu_base); 85 + 86 + /* Update local data from PPMU */ 87 + ppmu_data->ppmu[i].ccnt = __raw_readl(ppmu_base + PPMU_CCNT); 88 + 89 + for (j = PPMU_PMNCNT0; j < PPMU_PMNCNT_MAX; j++) { 90 + if (ppmu_data->ppmu[i].event[j] == 0) 91 + ppmu_data->ppmu[i].count[j] = 0; 92 + else 93 + ppmu_data->ppmu[i].count[j] = 94 + exynos_ppmu_read(ppmu_base, j); 95 + } 96 + } 97 + 98 + busfreq_mon_reset(ppmu_data); 99 + } 100 + 101 + int exynos_get_busier_ppmu(struct busfreq_ppmu_data *ppmu_data) 102 + { 103 + unsigned int count = 0; 104 + int i, j, busy = 0; 105 + 106 + for (i = 0; i < ppmu_data->ppmu_end; i++) { 107 + for (j = PPMU_PMNCNT0; j < PPMU_PMNCNT_MAX; j++) { 108 + if (ppmu_data->ppmu[i].count[j] > count) { 109 + count = ppmu_data->ppmu[i].count[j]; 110 + busy = i; 111 + } 112 + } 113 + } 114 + 115 + return busy; 116 + }
+8
drivers/devfreq/exynos/exynos_ppmu.h
··· 69 69 bool count_overflow[PPMU_PMNCNT_MAX]; 70 70 }; 71 71 72 + struct busfreq_ppmu_data { 73 + struct exynos_ppmu *ppmu; 74 + int ppmu_end; 75 + }; 76 + 72 77 void exynos_ppmu_reset(void __iomem *ppmu_base); 73 78 void exynos_ppmu_setevent(void __iomem *ppmu_base, unsigned int ch, 74 79 unsigned int evt); 75 80 void exynos_ppmu_start(void __iomem *ppmu_base); 76 81 void exynos_ppmu_stop(void __iomem *ppmu_base); 77 82 unsigned int exynos_ppmu_read(void __iomem *ppmu_base, unsigned int ch); 83 + void busfreq_mon_reset(struct busfreq_ppmu_data *ppmu_data); 84 + void exynos_read_ppmu(struct busfreq_ppmu_data *ppmu_data); 85 + int exynos_get_busier_ppmu(struct busfreq_ppmu_data *ppmu_data); 78 86 #endif /* __DEVFREQ_EXYNOS_PPMU_H */
-9
drivers/gpu/drm/nouveau/nouveau_backlight.c
··· 31 31 */ 32 32 33 33 #include <linux/backlight.h> 34 - #include <linux/acpi.h> 35 34 36 35 #include "nouveau_drm.h" 37 36 #include "nouveau_reg.h" ··· 220 221 struct nouveau_drm *drm = nouveau_drm(dev); 221 222 struct nouveau_device *device = nv_device(drm->device); 222 223 struct drm_connector *connector; 223 - 224 - #ifdef CONFIG_ACPI 225 - if (acpi_video_backlight_support()) { 226 - NV_INFO(drm, "ACPI backlight interface available, " 227 - "not registering our own\n"); 228 - return 0; 229 - } 230 - #endif 231 224 232 225 list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 233 226 if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS &&
+8 -11
drivers/mfd/db8500-prcmu.c
···
1734 1734
1735 1735 static long round_armss_rate(unsigned long rate)
1736 1736 {
1737 + 	struct cpufreq_frequency_table *pos;
1737 1738 	long freq = 0;
1738 - 	int i = 0;
1739 1739
1740 1740 	/* cpufreq table frequencies is in KHz. */
1741 1741 	rate = rate / 1000;
1742 1742
1743 1743 	/* Find the corresponding arm opp from the cpufreq table. */
1744 - 	while (db8500_cpufreq_table[i].frequency != CPUFREQ_TABLE_END) {
1745 - 		freq = db8500_cpufreq_table[i].frequency;
1744 + 	cpufreq_for_each_entry(pos, db8500_cpufreq_table) {
1745 + 		freq = pos->frequency;
1746 1746 		if (freq == rate)
1747 1747 			break;
1748 - 		i++;
1749 1748 	}
1750 1749
1751 1750 	/* Return the last valid value, even if a match was not found. */
···
1885 1886
1886 1887 static int set_armss_rate(unsigned long rate)
1887 1888 {
1888 - 	int i = 0;
1889 + 	struct cpufreq_frequency_table *pos;
1889 1890
1890 1891 	/* cpufreq table frequencies is in KHz. */
1891 1892 	rate = rate / 1000;
1892 1893
1893 1894 	/* Find the corresponding arm opp from the cpufreq table. */
1894 - 	while (db8500_cpufreq_table[i].frequency != CPUFREQ_TABLE_END) {
1895 - 		if (db8500_cpufreq_table[i].frequency == rate)
1895 + 	cpufreq_for_each_entry(pos, db8500_cpufreq_table)
1896 + 		if (pos->frequency == rate)
1896 1897 			break;
1897 - 		i++;
1898 - 	}
1899 1898
1900 - 	if (db8500_cpufreq_table[i].frequency != rate)
1899 + 	if (pos->frequency != rate)
1901 1900 		return -EINVAL;
1902 1901
1903 1902 	/* Set the new arm opp. */
1904 - 	return db8500_prcmu_set_arm_opp(db8500_cpufreq_table[i].driver_data);
1903 + 	return db8500_prcmu_set_arm_opp(pos->driver_data);
1905 1904 }
1906 1905
1907 1906 static int set_plldsi_rate(unsigned long rate)
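Several of the conversions in this series replace open-coded `while (table[i].frequency != CPUFREQ_TABLE_END)` walks with the `cpufreq_for_each_entry()` iterator. A simplified re-creation of the pattern follows; the macro and table layout below only loosely mirror `include/linux/cpufreq.h` and are for illustration, not the kernel definitions:

```c
#include <assert.h>

#define CPUFREQ_TABLE_END ~0u

/* Minimal model of a sentinel-terminated cpufreq table */
struct cpufreq_frequency_table {
	unsigned int driver_data;
	unsigned int frequency;	/* kHz */
};

/* Walk entries until the sentinel frequency is reached */
#define cpufreq_for_each_entry(pos, table) \
	for (pos = table; pos->frequency != CPUFREQ_TABLE_END; pos++)

/* Same shape as the reworked round_armss_rate(): remember the last
 * frequency seen, stopping early on an exact match. */
static long round_rate_khz(struct cpufreq_frequency_table *table,
			   unsigned long rate)
{
	struct cpufreq_frequency_table *pos;
	long freq = 0;

	cpufreq_for_each_entry(pos, table) {
		freq = pos->frequency;
		if (freq == (long)rate)
			break;
	}
	return freq;
}
```

The iterator leaves `pos` pointing at the matching entry (or the sentinel), which is why `set_armss_rate()` above can test `pos->frequency` after the loop, and why sh_sir.c can recover an index with `pos - freq_table`.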
+5 -9
drivers/net/irda/sh_sir.c
···
217 217 static u32 sh_sir_find_sclk(struct clk *irda_clk)
218 218 {
219 219 	struct cpufreq_frequency_table *freq_table = irda_clk->freq_table;
220 + 	struct cpufreq_frequency_table *pos;
220 221 	struct clk *pclk = clk_get(NULL, "peripheral_clk");
221 222 	u32 limit, min = 0xffffffff, tmp;
222 - 	int i, index = 0;
223 + 	int index = 0;
223 224
224 225 	limit = clk_get_rate(pclk);
225 226 	clk_put(pclk);
226 227
227 228 	/* IrDA can not set over peripheral_clk */
228 - 	for (i = 0;
229 - 	     freq_table[i].frequency != CPUFREQ_TABLE_END;
230 - 	     i++) {
231 - 		u32 freq = freq_table[i].frequency;
232 -
233 - 		if (freq == CPUFREQ_ENTRY_INVALID)
234 - 			continue;
229 + 	cpufreq_for_each_valid_entry(pos, freq_table) {
230 + 		u32 freq = pos->frequency;
235 231
236 232 		/* IrDA should not over peripheral_clk */
237 233 		if (freq > limit)
···
236 240 		tmp = freq % SCLK_BASE;
237 241 		if (tmp < min) {
238 242 			min = tmp;
239 - 			index = i;
243 + 			index = pos - freq_table;
240 244 		}
241 245 	}
242 246
+9 -1
drivers/platform/x86/acer-wmi.c
···
570 570 			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5750"),
571 571 		},
572 572 	},
573 + 	{
574 + 		.callback = video_set_backlight_video_vendor,
575 + 		.ident = "Acer Aspire 5741",
576 + 		.matches = {
577 + 			DMI_MATCH(DMI_BOARD_VENDOR, "Acer"),
578 + 			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5741"),
579 + 		},
580 + 	},
573 581 	{}
574 582 };
···
2236 2228 		pr_info("Brightness must be controlled by acpi video driver\n");
2237 2229 	} else {
2238 2230 		pr_info("Disabling ACPI video driver\n");
2239 - 		acpi_video_unregister();
2231 + 		acpi_video_unregister_backlight();
2240 2232 	}
2241 2233
2242 2234 	if (wmi_has_guid(WMID_GUID3)) {
+4 -24
drivers/pnp/pnpacpi/core.c
···
30 30
31 31 static int num;
32 32
33 - /* We need only to blacklist devices that have already an acpi driver that
34 -  * can't use pnp layer. We don't need to blacklist device that are directly
35 -  * used by the kernel (PCI root, ...), as it is harmless and there were
36 -  * already present in pnpbios. But there is an exception for devices that
37 -  * have irqs (PIC, Timer) because we call acpi_register_gsi.
38 -  * Finally, only devices that have a CRS method need to be in this list.
39 -  */
40 - static struct acpi_device_id excluded_id_list[] __initdata = {
41 - 	{"PNP0C09", 0},	/* EC */
42 - 	{"PNP0C0F", 0},	/* Link device */
43 - 	{"PNP0000", 0},	/* PIC */
44 - 	{"PNP0100", 0},	/* Timer */
45 - 	{"", 0},
46 - };
47 -
48 - static inline int __init is_exclusive_device(struct acpi_device *dev)
49 - {
50 - 	return (!acpi_match_device_ids(dev, excluded_id_list));
51 - }
52 -
53 33 /*
54 34  * Compatible Device IDs
55 35  */
···
246 266 	if (!pnpid)
247 267 		return 0;
248 268
249 - 	if (is_exclusive_device(device) || !device->status.present)
269 + 	if (!device->status.present)
250 270 		return 0;
251 271
252 272 	dev = pnp_alloc_dev(&pnpacpi_protocol, num, pnpid);
···
306 326 {
307 327 	struct acpi_device *device;
308 328
309 - 	if (!acpi_bus_get_device(handle, &device))
310 - 		pnpacpi_add_device(device);
311 - 	else
329 + 	if (acpi_bus_get_device(handle, &device))
312 330 		return AE_CTRL_DEPTH;
331 + 	if (acpi_is_pnp_device(device))
332 + 		pnpacpi_add_device(device);
313 333 	return AE_OK;
314 334 }
+2 -2
drivers/pnp/resource.c
···
360 360 		return 1;
361 361
362 362 	/* check if the resource is valid */
363 - 	if (*irq < 0 || *irq > 15)
363 + 	if (*irq > 15)
364 364 		return 0;
365 365
366 366 	/* check if the resource is reserved */
···
424 424 		return 1;
425 425
426 426 	/* check if the resource is valid */
427 - 	if (*dma < 0 || *dma == 4 || *dma > 7)
427 + 	if (*dma == 4 || *dma > 7)
428 428 		return 0;
429 429
430 430 	/* check if the resource is reserved */
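The pnp/resource.c change drops the `*irq < 0` and `*dma < 0` halves of the range checks because the values are unsigned: a less-than-zero comparison on an unsigned type can never be true, so only the upper-bound tests do any work. A minimal model of the remaining checks (function names here are illustrative, not the kernel's):

```c
#include <assert.h>

/* ISA IRQ lines run 0..15; "*irq < 0" on an unsigned value is dead code */
static int irq_in_range(unsigned int irq)
{
	return irq <= 15;
}

/* ISA DMA channels run 0..7, with channel 4 reserved for cascade */
static int dma_in_range(unsigned int dma)
{
	return dma != 4 && dma <= 7;
}
```

Both functions return nonzero exactly when the original `if (...) return 0;` guards would have let the resource through.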
+13 -2
drivers/power/power_supply_core.c
···
537 537 }
538 538 #endif
539 539
540 - int power_supply_register(struct device *parent, struct power_supply *psy)
540 + int __power_supply_register(struct device *parent, struct power_supply *psy, bool ws)
541 541 {
542 542 	struct device *dev;
543 543 	int rc;
···
568 568 	}
569 569
570 570 	spin_lock_init(&psy->changed_lock);
571 - 	rc = device_init_wakeup(dev, true);
571 + 	rc = device_init_wakeup(dev, ws);
572 572 	if (rc)
573 573 		goto wakeup_init_failed;
574 574
···
606 606 success:
607 607 	return rc;
608 608 }
609 +
610 + int power_supply_register(struct device *parent, struct power_supply *psy)
611 + {
612 + 	return __power_supply_register(parent, psy, true);
613 + }
609 614 EXPORT_SYMBOL_GPL(power_supply_register);
615 +
616 + int power_supply_register_no_ws(struct device *parent, struct power_supply *psy)
617 + {
618 + 	return __power_supply_register(parent, psy, false);
619 + }
620 + EXPORT_SYMBOL_GPL(power_supply_register_no_ws);
610 621
611 622 void power_supply_unregister(struct power_supply *psy)
612 623 {
+12 -21
drivers/powercap/intel_rapl.c
···
951 951 	{ X86_VENDOR_INTEL, 6, 0x2d},/* Sandy Bridge EP */
952 952 	{ X86_VENDOR_INTEL, 6, 0x37},/* Valleyview */
953 953 	{ X86_VENDOR_INTEL, 6, 0x3a},/* Ivy Bridge */
954 - 	{ X86_VENDOR_INTEL, 6, 0x45},/* Haswell */
954 + 	{ X86_VENDOR_INTEL, 6, 0x3c},/* Haswell */
955 + 	{ X86_VENDOR_INTEL, 6, 0x3d},/* Broadwell */
956 + 	{ X86_VENDOR_INTEL, 6, 0x45},/* Haswell ULT */
955 957 	/* TODO: Add more CPU IDs after testing */
956 958 	{}
957 959 };
···
1126 1124 static int rapl_check_domain(int cpu, int domain)
1127 1125 {
1128 1126 	unsigned msr;
1129 - 	u64 val1, val2 = 0;
1130 - 	int retry = 0;
1127 + 	u64 val = 0;
1131 1128
1132 1129 	switch (domain) {
1133 1130 	case RAPL_DOMAIN_PACKAGE:
···
1145 1144 		pr_err("invalid domain id %d\n", domain);
1146 1145 		return -EINVAL;
1147 1146 	}
1148 - 	if (rdmsrl_safe_on_cpu(cpu, msr, &val1))
1147 + 	/* make sure domain counters are available and contains non-zero
1148 + 	 * values, otherwise skip it.
1149 + 	 */
1150 + 	if (rdmsrl_safe_on_cpu(cpu, msr, &val) || !val)
1149 1151 		return -ENODEV;
1150 1152
1151 - 	/* PP1/uncore/graphics domain may not be active at the time of
1152 - 	 * driver loading. So skip further checks.
1153 - 	 */
1154 - 	if (domain == RAPL_DOMAIN_PP1)
1155 - 		return 0;
1156 - 	/* energy counters roll slowly on some domains */
1157 - 	while (++retry < 10) {
1158 - 		usleep_range(10000, 15000);
1159 - 		rdmsrl_safe_on_cpu(cpu, msr, &val2);
1160 - 		if ((val1 & ENERGY_STATUS_MASK) != (val2 & ENERGY_STATUS_MASK))
1161 - 			return 0;
1162 - 	}
1163 - 	/* if energy counter does not change, report as bad domain */
1164 - 	pr_info("domain %s energy ctr %llu:%llu not working, skip\n",
1165 - 		rapl_domain_names[domain], val1, val2);
1166 -
1167 - 	return -ENODEV;
1153 + 	return 0;
1168 1154 }
1169 1155
1170 1156 /* Detect active and valid domains for the given CPU, caller must
···
1168 1180 		/* use physical package id to read counters */
1169 1181 		if (!rapl_check_domain(cpu, i))
1170 1182 			rp->domain_map |= 1 << i;
1183 + 		else
1184 + 			pr_warn("RAPL domain %s detection failed\n",
1185 + 				rapl_domain_names[i]);
1171 1186 	}
1172 1187 	rp->nr_domains = bitmap_weight(&rp->domain_map, RAPL_DOMAIN_MAX);
1173 1188 	if (!rp->nr_domains) {
+5 -15
drivers/sh/clk/core.c
···
196 196 		struct cpufreq_frequency_table *freq_table,
197 197 		unsigned long rate)
198 198 {
199 - 	int i;
199 + 	struct cpufreq_frequency_table *pos;
200 200
201 - 	for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++) {
202 - 		unsigned long freq = freq_table[i].frequency;
203 -
204 - 		if (freq == CPUFREQ_ENTRY_INVALID)
205 - 			continue;
206 -
207 - 		if (freq == rate)
208 - 			return i;
209 - 	}
201 + 	cpufreq_for_each_valid_entry(pos, freq_table)
202 + 		if (pos->frequency == rate)
203 + 			return pos - freq_table;
210 204
211 205 	return -ENOENT;
212 206 }
···
569 575 		return abs(target - *best_freq);
570 576 	}
571 577
572 - 	for (freq = parent->freq_table; freq->frequency != CPUFREQ_TABLE_END;
573 - 	     freq++) {
574 - 		if (freq->frequency == CPUFREQ_ENTRY_INVALID)
575 - 			continue;
576 -
578 + 	cpufreq_for_each_valid_entry(freq, parent->freq_table) {
577 579 		if (unlikely(freq->frequency / target <= div_min - 1)) {
578 580 			unsigned long freq_max;
579 581
+13 -20
drivers/thermal/cpu_cooling.c
···
144 144 			   unsigned int *output,
145 145 			   enum cpufreq_cooling_property property)
146 146 {
147 - 	int i, j;
147 + 	int i;
148 148 	unsigned long max_level = 0, level = 0;
149 149 	unsigned int freq = CPUFREQ_ENTRY_INVALID;
150 150 	int descend = -1;
151 - 	struct cpufreq_frequency_table *table =
151 + 	struct cpufreq_frequency_table *pos, *table =
152 152 					cpufreq_frequency_get_table(cpu);
153 153
154 154 	if (!output)
···
157 157 	if (!table)
158 158 		return -EINVAL;
159 159
160 - 	for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
161 - 		/* ignore invalid entries */
162 - 		if (table[i].frequency == CPUFREQ_ENTRY_INVALID)
163 - 			continue;
164 -
160 + 	cpufreq_for_each_valid_entry(pos, table) {
165 161 		/* ignore duplicate entry */
166 - 		if (freq == table[i].frequency)
162 + 		if (freq == pos->frequency)
167 163 			continue;
168 164
169 165 		/* get the frequency order */
170 166 		if (freq != CPUFREQ_ENTRY_INVALID && descend == -1)
171 - 			descend = !!(freq > table[i].frequency);
167 + 			descend = freq > pos->frequency;
172 168
173 - 		freq = table[i].frequency;
169 + 		freq = pos->frequency;
174 170 		max_level++;
175 171 	}
···
186 190 	if (property == GET_FREQ)
187 191 		level = descend ? input : (max_level - input);
188 192
189 - 	for (i = 0, j = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
190 - 		/* ignore invalid entry */
191 - 		if (table[i].frequency == CPUFREQ_ENTRY_INVALID)
192 - 			continue;
193 -
193 + 	i = 0;
194 + 	cpufreq_for_each_valid_entry(pos, table) {
194 195 		/* ignore duplicate entry */
195 - 		if (freq == table[i].frequency)
196 + 		if (freq == pos->frequency)
196 197 			continue;
197 198
198 199 		/* now we have a valid frequency entry */
199 - 		freq = table[i].frequency;
200 + 		freq = pos->frequency;
200 201
201 202 		if (property == GET_LEVEL && (unsigned int)input == freq) {
202 203 			/* get level by frequency */
203 - 			*output = descend ? j : (max_level - j);
204 + 			*output = descend ? i : (max_level - i);
204 205 			return 0;
205 206 		}
206 - 		if (property == GET_FREQ && level == j) {
207 + 		if (property == GET_FREQ && level == i) {
207 208 			/* get frequency by level */
208 209 			*output = freq;
209 210 			return 0;
210 211 		}
211 - 		j++;
212 + 		i++;
212 213 	}
213 214
214 215 	return -EINVAL;
+40
drivers/video/backlight/backlight.c
···
23 23
24 24 static struct list_head backlight_dev_list;
25 25 static struct mutex backlight_dev_list_mutex;
26 + static struct blocking_notifier_head backlight_notifier;
26 27
27 28 static const char *const backlight_types[] = {
28 29 	[BACKLIGHT_RAW] = "raw",
···
371 370 	list_add(&new_bd->entry, &backlight_dev_list);
372 371 	mutex_unlock(&backlight_dev_list_mutex);
373 372
373 + 	blocking_notifier_call_chain(&backlight_notifier,
374 + 				     BACKLIGHT_REGISTERED, new_bd);
375 +
374 376 	return new_bd;
375 377 }
376 378 EXPORT_SYMBOL(backlight_device_register);
···
417 413 	pmac_backlight = NULL;
418 414 	mutex_unlock(&pmac_backlight_mutex);
419 415 #endif
416 +
417 + 	blocking_notifier_call_chain(&backlight_notifier,
418 + 				     BACKLIGHT_UNREGISTERED, bd);
419 +
420 420 	mutex_lock(&bd->ops_lock);
421 421 	bd->ops = NULL;
422 422 	mutex_unlock(&bd->ops_lock);
···
444 436
445 437 	return *r == data;
446 438 }
439 +
440 + /**
441 +  * backlight_register_notifier - get notified of backlight (un)registration
442 +  * @nb: notifier block with the notifier to call on backlight (un)registration
443 +  *
444 +  * @return 0 on success, otherwise a negative error code
445 +  *
446 +  * Register a notifier to get notified when backlight devices get registered
447 +  * or unregistered.
448 +  */
449 + int backlight_register_notifier(struct notifier_block *nb)
450 + {
451 + 	return blocking_notifier_chain_register(&backlight_notifier, nb);
452 + }
453 + EXPORT_SYMBOL(backlight_register_notifier);
454 +
455 + /**
456 +  * backlight_unregister_notifier - unregister a backlight notifier
457 +  * @nb: notifier block to unregister
458 +  *
459 +  * @return 0 on success, otherwise a negative error code
460 +  *
461 +  * Register a notifier to get notified when backlight devices get registered
462 +  * or unregistered.
463 +  */
464 + int backlight_unregister_notifier(struct notifier_block *nb)
465 + {
466 + 	return blocking_notifier_chain_unregister(&backlight_notifier, nb);
467 + }
468 + EXPORT_SYMBOL(backlight_unregister_notifier);
447 469
448 470 /**
449 471  * devm_backlight_device_register - resource managed backlight_device_register()
···
582 544 	backlight_class->pm = &backlight_class_dev_pm_ops;
583 545 	INIT_LIST_HEAD(&backlight_dev_list);
584 546 	mutex_init(&backlight_dev_list_mutex);
547 + 	BLOCKING_INIT_NOTIFIER_HEAD(&backlight_notifier);
548 +
585 549 	return 0;
586 550 }
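The backlight.c change above wires a blocking notifier chain into device registration: interested code registers a callback once and is then told about every BACKLIGHT_REGISTERED / BACKLIGHT_UNREGISTERED event. A userspace sketch of that pattern follows; all names here model the idea only and are not the kernel notifier API:

```c
#include <assert.h>
#include <stddef.h>

enum bl_event { BL_REGISTERED, BL_UNREGISTERED };

typedef int (*bl_notifier_fn)(enum bl_event ev, void *dev);

#define MAX_NOTIFIERS 4
static bl_notifier_fn chain[MAX_NOTIFIERS];
static int nchain;

/* Analogue of backlight_register_notifier(): remember the callback */
static int bl_register_notifier(bl_notifier_fn fn)
{
	if (nchain == MAX_NOTIFIERS)
		return -1;
	chain[nchain++] = fn;
	return 0;
}

/* Analogue of blocking_notifier_call_chain(): invoke every callback */
static void bl_call_chain(enum bl_event ev, void *dev)
{
	for (int i = 0; i < nchain; i++)
		chain[i](ev, dev);
}

static int seen_registered;

static int count_registered(enum bl_event ev, void *dev)
{
	(void)dev;
	if (ev == BL_REGISTERED)
		seen_registered++;
	return 0;
}
```

In the kernel the same idea lets ACPI video react when a native (e.g. GPU) backlight device appears, which is what the new default backlight policy in this pull relies on.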
+1 -3
include/acpi/acpi.h
···
62 62 #include <acpi/acrestyp.h>	/* Resource Descriptor structs */
63 63 #include <acpi/acpiosxf.h>	/* OSL interfaces (ACPICA-to-OS) */
64 64 #include <acpi/acpixf.h>	/* ACPI core subsystem external interfaces */
65 - #ifdef ACPI_NATIVE_INTERFACE_HEADER
66 - #include ACPI_NATIVE_INTERFACE_HEADER
67 - #endif
65 + #include <acpi/platform/acenvex.h>	/* Extra environment-specific items */
68 66
69 67 #endif /* __ACPI_H__ */
+7 -2
include/acpi/acpi_bus.h
···
131 131 struct acpi_scan_handler {
132 132 	const struct acpi_device_id *ids;
133 133 	struct list_head list_node;
134 + 	bool (*match)(char *idstr, const struct acpi_device_id **matchid);
134 135 	int (*attach)(struct acpi_device *dev, const struct acpi_device_id *id);
135 136 	void (*detach)(struct acpi_device *dev);
136 137 	void (*bind)(struct device *phys_dev);
···
233 232 struct acpi_pnp_type {
234 233 	u32 hardware_id:1;
235 234 	u32 bus_address:1;
236 - 	u32 reserved:30;
235 + 	u32 platform_id:1;
236 + 	u32 reserved:29;
237 237 };
238 238
239 239 struct acpi_device_pnp {
···
263 261 	u32 inrush_current:1;	/* Serialize Dx->D0 */
264 262 	u32 power_removed:1;	/* Optimize Dx->D0 */
265 263 	u32 ignore_parent:1;	/* Power is independent of parent power state */
266 - 	u32 reserved:27;
264 + 	u32 dsw_present:1;	/* _DSW present? */
265 + 	u32 reserved:26;
267 266 };
268 267
269 268 struct acpi_device_power_state {
···
409 406 extern int acpi_bus_generate_netlink_event(const char*, const char*, u8, int);
410 407 void acpi_bus_private_data_handler(acpi_handle, void *);
411 408 int acpi_bus_get_private_data(acpi_handle, void **);
409 + int acpi_bus_attach_private_data(acpi_handle, void *);
410 + void acpi_bus_detach_private_data(acpi_handle);
412 411 void acpi_bus_no_hotplug(acpi_handle handle);
413 412 extern int acpi_notifier_call_chain(struct acpi_device *, u32, u32);
414 413 extern int register_acpi_notifier(struct notifier_block *);
+5
include/acpi/acpi_drivers.h
···
96 96 /* Arch-defined function to add a bus to the system */
97 97
98 98 struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root);
99 +
100 + #ifdef CONFIG_X86
99 101 void pci_acpi_crs_quirks(void);
102 + #else
103 + static inline void pci_acpi_crs_quirks(void) { }
104 + #endif
100 105
101 106 /* --------------------------------------------------------------------------
102 107                                   Processor
+3
include/acpi/acpi_io.h
···
9 9 	return ioremap_cache(phys, size);
10 10 }
11 11
12 + void __iomem *__init_refok
13 + acpi_os_map_iomem(acpi_physical_address phys, acpi_size size);
14 + void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size);
12 15 void __iomem *acpi_os_get_iomem(acpi_physical_address phys, unsigned int size);
13 16
14 17 int acpi_os_map_generic_address(struct acpi_generic_address *addr);
+565 -271
include/acpi/acpixf.h
···
46 46
47 47 /* Current ACPICA subsystem version in YYYYMMDD format */
48 48
49 - #define ACPI_CA_VERSION                 0x20140214
49 + #define ACPI_CA_VERSION                 0x20140424
50 50
51 51 #include <acpi/acconfig.h>
52 52 #include <acpi/actypes.h>
···
55 55
56 56 extern u8 acpi_gbl_permanent_mmap;
57 57
58 + /*****************************************************************************
59 +  *
60 +  * Macros used for ACPICA globals and configuration
61 +  *
62 +  ****************************************************************************/
63 +
58 64 /*
59 -  * Globals that are publically available
65 +  * Ensure that global variables are defined and initialized only once.
66 +  *
67 +  * The use of these macros allows for a single list of globals (here)
68 +  * in order to simplify maintenance of the code.
60 69  */
61 - extern u32 acpi_current_gpe_count;
62 - extern struct acpi_table_fadt acpi_gbl_FADT;
63 - extern u8 acpi_gbl_system_awake_and_running;
64 - extern u8 acpi_gbl_reduced_hardware;	/* ACPI 5.0 */
65 - extern u8 acpi_gbl_osi_data;
70 + #ifdef DEFINE_ACPI_GLOBALS
71 + #define ACPI_GLOBAL(type,name) \
72 + 	extern type name; \
73 + 	type name
66 74
67 - /* Runtime configuration of debug print levels */
75 + #define ACPI_INIT_GLOBAL(type,name,value) \
76 + 	type name=value
68 77
69 - extern u32 acpi_dbg_level;
70 - extern u32 acpi_dbg_layer;
78 + #else
79 + #ifndef ACPI_GLOBAL
80 + #define ACPI_GLOBAL(type,name) \
81 + 	extern type name
82 + #endif
71 83
72 - /* ACPICA runtime options */
73 -
74 - extern u8 acpi_gbl_auto_serialize_methods;
75 - extern u8 acpi_gbl_copy_dsdt_locally;
76 - extern u8 acpi_gbl_create_osi_method;
77 - extern u8 acpi_gbl_disable_auto_repair;
78 - extern u8 acpi_gbl_disable_ssdt_table_load;
79 - extern u8 acpi_gbl_do_not_use_xsdt;
80 - extern u8 acpi_gbl_enable_aml_debug_object;
81 - extern u8 acpi_gbl_enable_interpreter_slack;
82 - extern u32 acpi_gbl_trace_flags;
83 - extern acpi_name acpi_gbl_trace_method_name;
84 - extern u8 acpi_gbl_truncate_io_addresses;
85 - extern u8 acpi_gbl_use32_bit_fadt_addresses;
86 - extern u8 acpi_gbl_use_default_register_widths;
84 + #ifndef ACPI_INIT_GLOBAL
85 + #define ACPI_INIT_GLOBAL(type,name,value) \
86 + 	extern type name
87 + #endif
88 + #endif
87 89
88 90 /*
89 -  * Hardware-reduced prototypes. All interfaces that use these macros will
90 -  * be configured out of the ACPICA build if the ACPI_REDUCED_HARDWARE flag
91 +  * These macros configure the various ACPICA interfaces. They are
92 +  * useful for generating stub inline functions for features that are
93 +  * configured out of the current kernel or ACPICA application.
94 +  */
95 + #ifndef ACPI_EXTERNAL_RETURN_STATUS
96 + #define ACPI_EXTERNAL_RETURN_STATUS(prototype) \
97 + 	prototype;
98 + #endif
99 +
100 + #ifndef ACPI_EXTERNAL_RETURN_OK
101 + #define ACPI_EXTERNAL_RETURN_OK(prototype) \
102 + 	prototype;
103 + #endif
104 +
105 + #ifndef ACPI_EXTERNAL_RETURN_VOID
106 + #define ACPI_EXTERNAL_RETURN_VOID(prototype) \
107 + 	prototype;
108 + #endif
109 +
110 + #ifndef ACPI_EXTERNAL_RETURN_UINT32
111 + #define ACPI_EXTERNAL_RETURN_UINT32(prototype) \
112 + 	prototype;
113 + #endif
114 +
115 + #ifndef ACPI_EXTERNAL_RETURN_PTR
116 + #define ACPI_EXTERNAL_RETURN_PTR(prototype) \
117 + 	prototype;
118 + #endif
119 +
120 + /*****************************************************************************
121 +  *
122 +  * Public globals and runtime configuration options
123 +  *
124 +  ****************************************************************************/
125 +
126 + /*
127 +  * Enable "slack mode" of the AML interpreter?  Default is FALSE, and the
128 +  * interpreter strictly follows the ACPI specification. Setting to TRUE
129 +  * allows the interpreter to ignore certain errors and/or bad AML constructs.
130 +  *
131 +  * Currently, these features are enabled by this flag:
132 +  *
133 +  * 1) Allow "implicit return" of last value in a control method
134 +  * 2) Allow access beyond the end of an operation region
135 +  * 3) Allow access to uninitialized locals/args (auto-init to integer 0)
136 +  * 4) Allow ANY object type to be a source operand for the Store() operator
137 +  * 5) Allow unresolved references (invalid target name) in package objects
138 +  * 6) Enable warning messages for behavior that is not ACPI spec compliant
139 +  */
140 + ACPI_INIT_GLOBAL(u8, acpi_gbl_enable_interpreter_slack, FALSE);
141 +
142 + /*
143 +  * Automatically serialize all methods that create named objects? Default
144 +  * is TRUE, meaning that all non_serialized methods are scanned once at
145 +  * table load time to determine those that create named objects. Methods
146 +  * that create named objects are marked Serialized in order to prevent
147 +  * possible run-time problems if they are entered by more than one thread.
148 +  */
149 + ACPI_INIT_GLOBAL(u8, acpi_gbl_auto_serialize_methods, TRUE);
150 +
151 + /*
152 +  * Create the predefined _OSI method in the namespace? Default is TRUE
153 +  * because ACPICA is fully compatible with other ACPI implementations.
154 +  * Changing this will revert ACPICA (and machine ASL) to pre-OSI behavior.
155 +  */
156 + ACPI_INIT_GLOBAL(u8, acpi_gbl_create_osi_method, TRUE);
157 +
158 + /*
159 +  * Optionally use default values for the ACPI register widths. Set this to
160 +  * TRUE to use the defaults, if an FADT contains incorrect widths/lengths.
161 +  */
162 + ACPI_INIT_GLOBAL(u8, acpi_gbl_use_default_register_widths, TRUE);
163 +
164 + /*
165 +  * Whether or not to verify the table checksum before installation. Set
166 +  * this to TRUE to verify the table checksum before install it to the table
167 +  * manager. Note that enabling this option causes errors to happen in some
168 +  * OSPMs during early initialization stages. Default behavior is to do such
169 +  * verification.
170 +  */
171 + ACPI_INIT_GLOBAL(u8, acpi_gbl_verify_table_checksum, TRUE);
172 +
173 + /*
174 +  * Optionally enable output from the AML Debug Object.
175 +  */
176 + ACPI_INIT_GLOBAL(u8, acpi_gbl_enable_aml_debug_object, FALSE);
177 +
178 + /*
179 +  * Optionally copy the entire DSDT to local memory (instead of simply
180 +  * mapping it.) There are some BIOSs that corrupt or replace the original
181 +  * DSDT, creating the need for this option. Default is FALSE, do not copy
182 +  * the DSDT.
183 +  */
184 + ACPI_INIT_GLOBAL(u8, acpi_gbl_copy_dsdt_locally, FALSE);
185 +
186 + /*
187 +  * Optionally ignore an XSDT if present and use the RSDT instead.
188 +  * Although the ACPI specification requires that an XSDT be used instead
189 +  * of the RSDT, the XSDT has been found to be corrupt or ill-formed on
190 +  * some machines. Default behavior is to use the XSDT if present.
191 +  */
192 + ACPI_INIT_GLOBAL(u8, acpi_gbl_do_not_use_xsdt, FALSE);
193 +
194 + /*
195 +  * Optionally use 32-bit FADT addresses if and when there is a conflict
196 +  * (address mismatch) between the 32-bit and 64-bit versions of the
197 +  * address. Although ACPICA adheres to the ACPI specification which
198 +  * requires the use of the corresponding 64-bit address if it is non-zero,
199 +  * some machines have been found to have a corrupted non-zero 64-bit
200 +  * address. Default is TRUE, favor the 32-bit addresses.
201 +  */
202 + ACPI_INIT_GLOBAL(u8, acpi_gbl_use32_bit_fadt_addresses, TRUE);
203 +
204 + /*
205 +  * Optionally truncate I/O addresses to 16 bits. Provides compatibility
206 +  * with other ACPI implementations. NOTE: During ACPICA initialization,
207 +  * this value is set to TRUE if any Windows OSI strings have been
208 +  * requested by the BIOS.
209 +  */
210 + ACPI_INIT_GLOBAL(u8, acpi_gbl_truncate_io_addresses, FALSE);
211 +
212 + /*
213 +  * Disable runtime checking and repair of values returned by control methods.
214 +  * Use only if the repair is causing a problem on a particular machine.
215 +  */
216 + ACPI_INIT_GLOBAL(u8, acpi_gbl_disable_auto_repair, FALSE);
217 +
218 + /*
219 +  * Optionally do not install any SSDTs from the RSDT/XSDT during initialization.
220 +  * This can be useful for debugging ACPI problems on some machines.
221 +  */
222 + ACPI_INIT_GLOBAL(u8, acpi_gbl_disable_ssdt_table_install, FALSE);
223 +
224 + /*
225 +  * We keep track of the latest version of Windows that has been requested by
226 +  * the BIOS. ACPI 5.0.
227 +  */
228 + ACPI_INIT_GLOBAL(u8, acpi_gbl_osi_data, 0);
229 +
230 + /*
231 +  * ACPI 5.0 introduces the concept of a "reduced hardware platform", meaning
232 +  * that the ACPI hardware is no longer required. A flag in the FADT indicates
233 +  * a reduced HW machine, and that flag is duplicated here for convenience.
234 +  */
235 + ACPI_INIT_GLOBAL(u8, acpi_gbl_reduced_hardware, FALSE);
236 +
237 + /*
238 +  * This mechanism is used to trace a specified AML method. The method is
239 +  * traced each time it is executed.
240 +  */
241 + ACPI_INIT_GLOBAL(u32, acpi_gbl_trace_flags, 0);
242 + ACPI_INIT_GLOBAL(acpi_name, acpi_gbl_trace_method_name, 0);
243 +
244 + /*
245 +  * Runtime configuration of debug output control masks. We want the debug
246 +  * switches statically initialized so they are already set when the debugger
247 +  * is entered.
248 +  */
249 + ACPI_INIT_GLOBAL(u32, acpi_dbg_level, ACPI_DEBUG_DEFAULT);
250 + ACPI_INIT_GLOBAL(u32, acpi_dbg_layer, 0);
251 +
252 + /*
253 +  * Other miscellaneous globals
254 +  */
255 + ACPI_GLOBAL(struct acpi_table_fadt, acpi_gbl_FADT);
256 + ACPI_GLOBAL(u32, acpi_current_gpe_count);
257 + ACPI_GLOBAL(u8, acpi_gbl_system_awake_and_running);
258 +
259 + /*****************************************************************************
260 +  *
261 +  * ACPICA public interface configuration.
262 +  *
263 +  * Interfaces that are configured out of the ACPICA build are replaced
264 +  * by inlined stubs by default.
265 +  *
266 +  ****************************************************************************/
267 +
268 + /*
269 +  * Hardware-reduced prototypes (default: Not hardware reduced).
270 +  *
271 +  * All ACPICA hardware-related interfaces that use these macros will be
272 +  * configured out of the ACPICA build if the ACPI_REDUCED_HARDWARE flag
91 273  * is set to TRUE.
274 +  *
275 +  * Note: This static build option for reduced hardware is intended to
276 +  * reduce ACPICA code size if desired or necessary. However, even if this
277 +  * option is not specified, the runtime behavior of ACPICA is dependent
278 +  * on the actual FADT reduced hardware flag (HW_REDUCED_ACPI). If set,
279 +  * the flag will enable similar behavior -- ACPICA will not attempt
280 +  * to access any ACPI-relate hardware (SCI, GPEs, Fixed Events, etc.)
92 281  */
93 282 #if (!ACPI_REDUCED_HARDWARE)
94 283 #define ACPI_HW_DEPENDENT_RETURN_STATUS(prototype) \
95 - 	prototype;
284 + 	ACPI_EXTERNAL_RETURN_STATUS(prototype)
96 285
97 286 #define ACPI_HW_DEPENDENT_RETURN_OK(prototype) \
98 - 	prototype;
287 + 	ACPI_EXTERNAL_RETURN_OK(prototype)
99 288
100 289 #define ACPI_HW_DEPENDENT_RETURN_VOID(prototype) \
101 - 	prototype;
290 + 	ACPI_EXTERNAL_RETURN_VOID(prototype)
102 291
103 292 #else
104 293 #define ACPI_HW_DEPENDENT_RETURN_STATUS(prototype) \
···
302 113 #endif				/* !ACPI_REDUCED_HARDWARE */
303 114
304 115 /*
116 +  * Error message prototypes (default: error messages enabled).
117 +  *
118 +  * All interfaces related to error and warning messages
119 +  * will be configured out of the ACPICA build if the
120 +  * ACPI_NO_ERROR_MESSAGE flag is defined.
121 +  */
122 + #ifndef ACPI_NO_ERROR_MESSAGES
123 + #define ACPI_MSG_DEPENDENT_RETURN_VOID(prototype) \
124 + 	prototype;
125 +
126 + #else
127 + #define ACPI_MSG_DEPENDENT_RETURN_VOID(prototype) \
128 + 	static ACPI_INLINE prototype {return;}
129 +
130 + #endif				/* ACPI_NO_ERROR_MESSAGES */
131 +
132 + /*
133 +  * Debugging output prototypes (default: no debug output).
134 +  *
135 +  * All interfaces related to debug output messages
136 +  * will be configured out of the ACPICA build unless the
137 +  * ACPI_DEBUG_OUTPUT flag is defined.
138 +  */
139 + #ifdef ACPI_DEBUG_OUTPUT
140 + #define ACPI_DBG_DEPENDENT_RETURN_VOID(prototype) \
141 + 	prototype;
142 +
143 + #else
144 + #define ACPI_DBG_DEPENDENT_RETURN_VOID(prototype) \
145 + 	static ACPI_INLINE prototype {return;}
146 +
147 + #endif				/* ACPI_DEBUG_OUTPUT */
148 +
149 + /*****************************************************************************
150 +  *
151 +  * ACPICA public interface prototypes
152 +  *
153 +  ****************************************************************************/
154 +
155 + /*
305 156  * Initialization
306 157  */
307 - acpi_status __init
308 - acpi_initialize_tables(struct acpi_table_desc *initial_storage,
309 - 		       u32 initial_table_count, u8 allow_resize);
158 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status __init
159 + 			    acpi_initialize_tables(struct acpi_table_desc
160 + 						   *initial_storage,
161 + 						   u32 initial_table_count,
162 + 						   u8 allow_resize))
163 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status __init acpi_initialize_subsystem(void))
310 164
311 - acpi_status __init acpi_initialize_subsystem(void);
165 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status __init acpi_enable_subsystem(u32 flags))
312 166
313 - acpi_status __init acpi_enable_subsystem(u32 flags);
314 -
315 - acpi_status __init acpi_initialize_objects(u32 flags);
316 -
317 - acpi_status __init acpi_terminate(void);
167 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status __init
168 + 			    acpi_initialize_objects(u32 flags))
169 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status __init acpi_terminate(void))
318 170
319 171 /*
320 172  * Miscellaneous global interfaces
···
363 133 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable(void))
364 134 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable(void))
365 135 #ifdef ACPI_FUTURE_USAGE
366 - acpi_status acpi_subsystem_status(void);
136 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status acpi_subsystem_status(void))
367 137 #endif
368 138
369 139 #ifdef ACPI_FUTURE_USAGE
370 - acpi_status acpi_get_system_info(struct acpi_buffer *ret_buffer);
140 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status
141 + 			    acpi_get_system_info(struct acpi_buffer
142 + 						 *ret_buffer))
371 143 #endif
144 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status
145 + 			    acpi_get_statistics(struct acpi_statistics *stats))
146 + ACPI_EXTERNAL_RETURN_PTR(const char
147 + 			 *acpi_format_exception(acpi_status exception))
148 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status acpi_purge_cached_objects(void))
372 149
373 - acpi_status acpi_get_statistics(struct acpi_statistics *stats);
150 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status
151 + 			    acpi_install_interface(acpi_string interface_name))
374 152
375 - const char *acpi_format_exception(acpi_status exception);
153 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status
154 + 			    acpi_remove_interface(acpi_string interface_name))
155 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status acpi_update_interfaces(u8 action))
376 156
377 - acpi_status acpi_purge_cached_objects(void);
378 -
379 - acpi_status acpi_install_interface(acpi_string interface_name);
380 -
381 - acpi_status acpi_remove_interface(acpi_string interface_name);
382 -
383 - acpi_status acpi_update_interfaces(u8 action);
384 -
385 - u32
386 - acpi_check_address_range(acpi_adr_space_type space_id,
387 - 			 acpi_physical_address address,
388 - 			 acpi_size length, u8 warn);
389 -
390 - acpi_status
391 - acpi_decode_pld_buffer(u8 *in_buffer,
392 - 		       acpi_size length, struct acpi_pld_info **return_buffer);
157 + ACPI_EXTERNAL_RETURN_UINT32(u32
158 + 			    acpi_check_address_range(acpi_adr_space_type
159 + 						     space_id,
160 + 						     acpi_physical_address
161 + 						     address, acpi_size length,
162 + 						     u8 warn))
163 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status
164 + 			    acpi_decode_pld_buffer(u8 *in_buffer,
165 + 						   acpi_size length,
166 + 						   struct acpi_pld_info
167 + 						   **return_buffer))
393 168
394 169 /*
395 170  * ACPI table load/unload interfaces
396 171  */
397 - acpi_status acpi_load_table(struct acpi_table_header *table);
172 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status __init
173 + 			    acpi_install_table(acpi_physical_address address,
174 + 					       u8 physical))
398 175
399 - acpi_status acpi_unload_parent_table(acpi_handle object);
176 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status
177 + 			    acpi_load_table(struct acpi_table_header *table))
400 178
401 - acpi_status __init acpi_load_tables(void);
179 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status
180 + 			    acpi_unload_parent_table(acpi_handle object))
181 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status __init acpi_load_tables(void))
402 182
403 183 /*
404 184  * ACPI table manipulation interfaces
405 185  */
406 - acpi_status __init acpi_reallocate_root_table(void);
186 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status __init acpi_reallocate_root_table(void))
407 187
408 - acpi_status __init acpi_find_root_pointer(acpi_size *rsdp_address);
188 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status __init
189 + 			    acpi_find_root_pointer(acpi_size * rsdp_address))
409 190
410 - acpi_status acpi_unload_table_id(acpi_owner_id id);
411 -
412 - acpi_status
413 - acpi_get_table_header(acpi_string signature,
414 - 		      u32 instance, struct acpi_table_header *out_table_header);
415 -
416 - acpi_status
417 - acpi_get_table_with_size(acpi_string signature,
418 - 			 u32 instance, struct acpi_table_header **out_table,
419 - 			 acpi_size *tbl_size);
420 -
421 - acpi_status
422 - acpi_get_table(acpi_string signature,
423 - 	       u32 instance, struct acpi_table_header **out_table);
424 -
425 - acpi_status
426 - acpi_get_table_by_index(u32 table_index, struct acpi_table_header **out_table);
427 -
428 - acpi_status
429 - acpi_install_table_handler(acpi_table_handler handler, void *context);
430 -
431 - acpi_status acpi_remove_table_handler(acpi_table_handler handler);
191 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status
192 + 			    acpi_get_table_header(acpi_string signature,
193 + 						  u32 instance,
194 + 						  struct acpi_table_header
195 + 						  *out_table_header))
196 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status
197 + 			    acpi_get_table(acpi_string signature, u32 instance,
198 + 					   struct acpi_table_header
199 + 					   **out_table))
200 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status
201 + 			    acpi_get_table_by_index(u32 table_index,
202 + 						    struct acpi_table_header
203 + 						    **out_table))
204 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status
205 + 			    acpi_install_table_handler(acpi_table_handler
206 + 						       handler, void *context))
207 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status
208 + 			    acpi_remove_table_handler(acpi_table_handler
209 + 						      handler))
432 210
433 211 /*
434 212  * Namespace and name interfaces
435 213  */
436 - acpi_status
437 - acpi_walk_namespace(acpi_object_type type,
438 - 		    acpi_handle start_object,
439 - 		    u32 max_depth,
440 - 		    acpi_walk_callback descending_callback,
441 - 		    acpi_walk_callback ascending_callback,
442 - 		    void *context, void **return_value);
443 -
444 - acpi_status
445 - acpi_get_devices(const char *HID,
446 - 		 acpi_walk_callback user_function,
447 - 		 void *context, void **return_value);
448 -
449 - acpi_status
450 - acpi_get_name(acpi_handle object,
451 - 	      u32 name_type, struct acpi_buffer *ret_path_ptr);
452 -
453 - acpi_status
454 - acpi_get_handle(acpi_handle parent,
455 - 		acpi_string pathname, acpi_handle * ret_handle);
456 -
457 - acpi_status
458 - acpi_attach_data(acpi_handle object, acpi_object_handler handler, void *data);
459 -
460 - acpi_status acpi_detach_data(acpi_handle object, acpi_object_handler handler);
461 -
462 - acpi_status
463 - acpi_get_data_full(acpi_handle object, acpi_object_handler handler, void **data,
464 - 		   void (*callback)(void *));
465 -
466 - acpi_status
467 - acpi_get_data(acpi_handle object, acpi_object_handler handler, void **data);
468 -
469 - acpi_status
470 - acpi_debug_trace(char *name, u32 debug_level, u32 debug_layer, u32 flags);
214 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status
215 + 			    acpi_walk_namespace(acpi_object_type type,
216 + 						acpi_handle start_object,
217 + 						u32 max_depth,
218 + 						acpi_walk_callback
219 +
descending_callback, 220 + acpi_walk_callback 221 + ascending_callback, 222 + void *context, 223 + void **return_value)) 224 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 225 + acpi_get_devices(const char *HID, 226 + acpi_walk_callback user_function, 227 + void *context, 228 + void **return_value)) 229 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 230 + acpi_get_name(acpi_handle object, u32 name_type, 231 + struct acpi_buffer *ret_path_ptr)) 232 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 233 + acpi_get_handle(acpi_handle parent, 234 + acpi_string pathname, 235 + acpi_handle * ret_handle)) 236 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 237 + acpi_attach_data(acpi_handle object, 238 + acpi_object_handler handler, 239 + void *data)) 240 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 241 + acpi_detach_data(acpi_handle object, 242 + acpi_object_handler handler)) 243 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 244 + acpi_get_data(acpi_handle object, 245 + acpi_object_handler handler, 246 + void **data)) 247 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 248 + acpi_debug_trace(char *name, u32 debug_level, 249 + u32 debug_layer, u32 flags)) 471 250 472 251 /* 473 252 * Object manipulation and enumeration 474 253 */ 475 - acpi_status 476 - acpi_evaluate_object(acpi_handle object, 477 - acpi_string pathname, 478 - struct acpi_object_list *parameter_objects, 479 - struct acpi_buffer *return_object_buffer); 254 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 255 + acpi_evaluate_object(acpi_handle object, 256 + acpi_string pathname, 257 + struct acpi_object_list 258 + *parameter_objects, 259 + struct acpi_buffer 260 + *return_object_buffer)) 261 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 262 + acpi_evaluate_object_typed(acpi_handle object, 263 + acpi_string pathname, 264 + struct acpi_object_list 265 + *external_params, 266 + struct acpi_buffer 267 + *return_buffer, 268 + acpi_object_type 269 + return_type)) 270 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 271 + acpi_get_object_info(acpi_handle object, 272 + 
struct acpi_device_info 273 + **return_buffer)) 274 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status acpi_install_method(u8 *buffer)) 480 275 481 - acpi_status 482 - acpi_evaluate_object_typed(acpi_handle object, 483 - acpi_string pathname, 484 - struct acpi_object_list *external_params, 485 - struct acpi_buffer *return_buffer, 486 - acpi_object_type return_type); 276 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 277 + acpi_get_next_object(acpi_object_type type, 278 + acpi_handle parent, 279 + acpi_handle child, 280 + acpi_handle * out_handle)) 487 281 488 - acpi_status 489 - acpi_get_object_info(acpi_handle object, 490 - struct acpi_device_info **return_buffer); 282 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 283 + acpi_get_type(acpi_handle object, 284 + acpi_object_type * out_type)) 491 285 492 - acpi_status acpi_install_method(u8 *buffer); 493 - 494 - acpi_status 495 - acpi_get_next_object(acpi_object_type type, 496 - acpi_handle parent, 497 - acpi_handle child, acpi_handle * out_handle); 498 - 499 - acpi_status acpi_get_type(acpi_handle object, acpi_object_type * out_type); 500 - 501 - acpi_status acpi_get_id(acpi_handle object, acpi_owner_id * out_type); 502 - 503 - acpi_status acpi_get_parent(acpi_handle object, acpi_handle * out_handle); 286 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 287 + acpi_get_parent(acpi_handle object, 288 + acpi_handle * out_handle)) 504 289 505 290 /* 506 291 * Handler interfaces 507 292 */ 508 - acpi_status 509 - acpi_install_initialization_handler(acpi_init_handler handler, u32 function); 510 - 293 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 294 + acpi_install_initialization_handler 295 + (acpi_init_handler handler, u32 function)) 511 296 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status 512 - acpi_install_sci_handler(acpi_sci_handler 513 - address, 514 - void *context)) 297 + acpi_install_sci_handler(acpi_sci_handler 298 + address, 299 + void *context)) 515 300 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status 516 301 acpi_remove_sci_handler(acpi_sci_handler 
517 302 address)) ··· 558 313 u32 gpe_number, 559 314 acpi_gpe_handler 560 315 address)) 561 - acpi_status acpi_install_notify_handler(acpi_handle device, u32 handler_type, 562 - acpi_notify_handler handler, 563 - void *context); 564 - 565 - acpi_status 566 - acpi_remove_notify_handler(acpi_handle device, 567 - u32 handler_type, acpi_notify_handler handler); 568 - 569 - acpi_status 570 - acpi_install_address_space_handler(acpi_handle device, 571 - acpi_adr_space_type space_id, 572 - acpi_adr_space_handler handler, 573 - acpi_adr_space_setup setup, void *context); 574 - 575 - acpi_status 576 - acpi_remove_address_space_handler(acpi_handle device, 577 - acpi_adr_space_type space_id, 578 - acpi_adr_space_handler handler); 579 - 316 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 317 + acpi_install_notify_handler(acpi_handle device, 318 + u32 handler_type, 319 + acpi_notify_handler 320 + handler, 321 + void *context)) 322 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 323 + acpi_remove_notify_handler(acpi_handle device, 324 + u32 handler_type, 325 + acpi_notify_handler 326 + handler)) 327 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 328 + acpi_install_address_space_handler(acpi_handle 329 + device, 330 + acpi_adr_space_type 331 + space_id, 332 + acpi_adr_space_handler 333 + handler, 334 + acpi_adr_space_setup 335 + setup, 336 + void *context)) 337 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 338 + acpi_remove_address_space_handler(acpi_handle 339 + device, 340 + acpi_adr_space_type 341 + space_id, 342 + acpi_adr_space_handler 343 + handler)) 580 344 #ifdef ACPI_FUTURE_USAGE 581 - acpi_status acpi_install_exception_handler(acpi_exception_handler handler); 345 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 346 + acpi_install_exception_handler 347 + (acpi_exception_handler handler)) 582 348 #endif 583 - 584 - acpi_status acpi_install_interface_handler(acpi_interface_handler handler); 349 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 350 + acpi_install_interface_handler 351 + 
(acpi_interface_handler handler)) 585 352 586 353 /* 587 354 * Global Lock interfaces ··· 608 351 /* 609 352 * Interfaces to AML mutex objects 610 353 */ 611 - acpi_status 612 - acpi_acquire_mutex(acpi_handle handle, acpi_string pathname, u16 timeout); 354 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 355 + acpi_acquire_mutex(acpi_handle handle, 356 + acpi_string pathname, 357 + u16 timeout)) 613 358 614 - acpi_status acpi_release_mutex(acpi_handle handle, acpi_string pathname); 359 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 360 + acpi_release_mutex(acpi_handle handle, 361 + acpi_string pathname)) 615 362 616 363 /* 617 364 * Fixed Event interfaces ··· 695 434 acpi_status(*acpi_walk_resource_callback) (struct acpi_resource * resource, 696 435 void *context); 697 436 698 - acpi_status 699 - acpi_get_vendor_resource(acpi_handle device, 700 - char *name, 701 - struct acpi_vendor_uuid *uuid, 702 - struct acpi_buffer *ret_buffer); 703 - 704 - acpi_status 705 - acpi_get_current_resources(acpi_handle device, struct acpi_buffer *ret_buffer); 706 - 437 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 438 + acpi_get_vendor_resource(acpi_handle device, 439 + char *name, 440 + struct acpi_vendor_uuid 441 + *uuid, 442 + struct acpi_buffer 443 + *ret_buffer)) 444 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 445 + acpi_get_current_resources(acpi_handle device, 446 + struct acpi_buffer 447 + *ret_buffer)) 707 448 #ifdef ACPI_FUTURE_USAGE 708 - acpi_status 709 - acpi_get_possible_resources(acpi_handle device, struct acpi_buffer *ret_buffer); 449 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 450 + acpi_get_possible_resources(acpi_handle device, 451 + struct acpi_buffer 452 + *ret_buffer)) 710 453 #endif 711 - 712 - acpi_status 713 - acpi_get_event_resources(acpi_handle device_handle, 714 - struct acpi_buffer *ret_buffer); 715 - 716 - acpi_status 717 - acpi_walk_resource_buffer(struct acpi_buffer *buffer, 718 - acpi_walk_resource_callback user_function, 719 - void *context); 720 - 721 - acpi_status 722 
- acpi_walk_resources(acpi_handle device, 723 - char *name, 724 - acpi_walk_resource_callback user_function, void *context); 725 - 726 - acpi_status 727 - acpi_set_current_resources(acpi_handle device, struct acpi_buffer *in_buffer); 728 - 729 - acpi_status 730 - acpi_get_irq_routing_table(acpi_handle device, struct acpi_buffer *ret_buffer); 731 - 732 - acpi_status 733 - acpi_resource_to_address64(struct acpi_resource *resource, 734 - struct acpi_resource_address64 *out); 735 - 736 - acpi_status 737 - acpi_buffer_to_resource(u8 *aml_buffer, 738 - u16 aml_buffer_length, 739 - struct acpi_resource **resource_ptr); 454 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 455 + acpi_get_event_resources(acpi_handle device_handle, 456 + struct acpi_buffer 457 + *ret_buffer)) 458 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 459 + acpi_walk_resource_buffer(struct acpi_buffer 460 + *buffer, 461 + acpi_walk_resource_callback 462 + user_function, 463 + void *context)) 464 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 465 + acpi_walk_resources(acpi_handle device, char *name, 466 + acpi_walk_resource_callback 467 + user_function, void *context)) 468 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 469 + acpi_set_current_resources(acpi_handle device, 470 + struct acpi_buffer 471 + *in_buffer)) 472 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 473 + acpi_get_irq_routing_table(acpi_handle device, 474 + struct acpi_buffer 475 + *ret_buffer)) 476 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 477 + acpi_resource_to_address64(struct acpi_resource 478 + *resource, 479 + struct 480 + acpi_resource_address64 481 + *out)) 482 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 483 + acpi_buffer_to_resource(u8 *aml_buffer, 484 + u16 aml_buffer_length, 485 + struct acpi_resource 486 + **resource_ptr)) 740 487 741 488 /* 742 489 * Hardware (ACPI device) interfaces 743 490 */ 744 - acpi_status acpi_reset(void); 491 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status acpi_reset(void)) 745 492 746 - acpi_status acpi_read(u64 *value, struct 
acpi_generic_address *reg); 493 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 494 + acpi_read(u64 *value, 495 + struct acpi_generic_address *reg)) 747 496 748 - acpi_status acpi_write(u64 value, struct acpi_generic_address *reg); 497 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 498 + acpi_write(u64 value, 499 + struct acpi_generic_address *reg)) 749 500 750 501 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status 751 502 acpi_read_bit_register(u32 register_id, ··· 770 497 /* 771 498 * Sleep/Wake interfaces 772 499 */ 773 - acpi_status 774 - acpi_get_sleep_type_data(u8 sleep_state, u8 *slp_typ_a, u8 *slp_typ_b); 500 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 501 + acpi_get_sleep_type_data(u8 sleep_state, 502 + u8 *slp_typ_a, 503 + u8 *slp_typ_b)) 775 504 776 - acpi_status acpi_enter_sleep_state_prep(u8 sleep_state); 777 - 778 - acpi_status acpi_enter_sleep_state(u8 sleep_state); 505 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 506 + acpi_enter_sleep_state_prep(u8 sleep_state)) 507 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status acpi_enter_sleep_state(u8 sleep_state)) 779 508 780 509 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enter_sleep_state_s4bios(void)) 781 510 782 - acpi_status acpi_leave_sleep_state_prep(u8 sleep_state); 783 - 784 - acpi_status acpi_leave_sleep_state(u8 sleep_state); 511 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 512 + acpi_leave_sleep_state_prep(u8 sleep_state)) 513 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status acpi_leave_sleep_state(u8 sleep_state)) 785 514 786 515 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status 787 516 acpi_set_firmware_waking_vector(u32 ··· 810 535 /* 811 536 * Error/Warning output 812 537 */ 813 - ACPI_PRINTF_LIKE(3) 814 - void ACPI_INTERNAL_VAR_XFACE 815 - acpi_error(const char *module_name, u32 line_number, const char *format, ...); 816 - 817 - ACPI_PRINTF_LIKE(4) 818 - void ACPI_INTERNAL_VAR_XFACE 819 - acpi_exception(const char *module_name, 820 - u32 line_number, acpi_status status, const char *format, ...); 821 - 822 - ACPI_PRINTF_LIKE(3) 823 - 
void ACPI_INTERNAL_VAR_XFACE 824 - acpi_warning(const char *module_name, u32 line_number, const char *format, ...); 825 - 826 - ACPI_PRINTF_LIKE(3) 827 - void ACPI_INTERNAL_VAR_XFACE 828 - acpi_info(const char *module_name, u32 line_number, const char *format, ...); 829 - 830 - ACPI_PRINTF_LIKE(3) 831 - void ACPI_INTERNAL_VAR_XFACE 832 - acpi_bios_error(const char *module_name, 833 - u32 line_number, const char *format, ...); 834 - 835 - ACPI_PRINTF_LIKE(3) 836 - void ACPI_INTERNAL_VAR_XFACE 837 - acpi_bios_warning(const char *module_name, 838 - u32 line_number, const char *format, ...); 538 + ACPI_MSG_DEPENDENT_RETURN_VOID(ACPI_PRINTF_LIKE(3) 539 + void ACPI_INTERNAL_VAR_XFACE 540 + acpi_error(const char *module_name, 541 + u32 line_number, 542 + const char *format, ...)) 543 + ACPI_MSG_DEPENDENT_RETURN_VOID(ACPI_PRINTF_LIKE(4) 544 + void ACPI_INTERNAL_VAR_XFACE 545 + acpi_exception(const char *module_name, 546 + u32 line_number, 547 + acpi_status status, 548 + const char *format, ...)) 549 + ACPI_MSG_DEPENDENT_RETURN_VOID(ACPI_PRINTF_LIKE(3) 550 + void ACPI_INTERNAL_VAR_XFACE 551 + acpi_warning(const char *module_name, 552 + u32 line_number, 553 + const char *format, ...)) 554 + ACPI_MSG_DEPENDENT_RETURN_VOID(ACPI_PRINTF_LIKE(3) 555 + void ACPI_INTERNAL_VAR_XFACE 556 + acpi_info(const char *module_name, 557 + u32 line_number, 558 + const char *format, ...)) 559 + ACPI_MSG_DEPENDENT_RETURN_VOID(ACPI_PRINTF_LIKE(3) 560 + void ACPI_INTERNAL_VAR_XFACE 561 + acpi_bios_error(const char *module_name, 562 + u32 line_number, 563 + const char *format, ...)) 564 + ACPI_MSG_DEPENDENT_RETURN_VOID(ACPI_PRINTF_LIKE(3) 565 + void ACPI_INTERNAL_VAR_XFACE 566 + acpi_bios_warning(const char *module_name, 567 + u32 line_number, 568 + const char *format, ...)) 839 569 840 570 /* 841 571 * Debug output 842 572 */ 843 - #ifdef ACPI_DEBUG_OUTPUT 573 + ACPI_DBG_DEPENDENT_RETURN_VOID(ACPI_PRINTF_LIKE(6) 574 + void ACPI_INTERNAL_VAR_XFACE 575 + acpi_debug_print(u32 requested_debug_level, 
576 + u32 line_number, 577 + const char *function_name, 578 + const char *module_name, 579 + u32 component_id, 580 + const char *format, ...)) 581 + ACPI_DBG_DEPENDENT_RETURN_VOID(ACPI_PRINTF_LIKE(6) 582 + void ACPI_INTERNAL_VAR_XFACE 583 + acpi_debug_print_raw(u32 requested_debug_level, 584 + u32 line_number, 585 + const char *function_name, 586 + const char *module_name, 587 + u32 component_id, 588 + const char *format, ...)) 844 589 845 - ACPI_PRINTF_LIKE(6) 846 - void ACPI_INTERNAL_VAR_XFACE 847 - acpi_debug_print(u32 requested_debug_level, 848 - u32 line_number, 849 - const char *function_name, 850 - const char *module_name, 851 - u32 component_id, const char *format, ...); 590 + /* 591 + * Divergences 592 + */ 593 + acpi_status acpi_get_id(acpi_handle object, acpi_owner_id * out_type); 852 594 853 - ACPI_PRINTF_LIKE(6) 854 - void ACPI_INTERNAL_VAR_XFACE 855 - acpi_debug_print_raw(u32 requested_debug_level, 856 - u32 line_number, 857 - const char *function_name, 858 - const char *module_name, 859 - u32 component_id, const char *format, ...); 860 - #endif 595 + acpi_status acpi_unload_table_id(acpi_owner_id id); 596 + 597 + acpi_status 598 + acpi_get_table_with_size(acpi_string signature, 599 + u32 instance, struct acpi_table_header **out_table, 600 + acpi_size *tbl_size); 601 + 602 + acpi_status 603 + acpi_get_data_full(acpi_handle object, acpi_object_handler handler, void **data, 604 + void (*callback)(void *)); 861 605 862 606 #endif /* __ACXFACE_H__ */
+5 -6
include/acpi/actbl.h
···
 /* Masks for Flags field above */

-#define ACPI_TABLE_ORIGIN_UNKNOWN       (0)
-#define ACPI_TABLE_ORIGIN_MAPPED        (1)
-#define ACPI_TABLE_ORIGIN_ALLOCATED     (2)
-#define ACPI_TABLE_ORIGIN_OVERRIDE      (4)
-#define ACPI_TABLE_ORIGIN_MASK          (7)
-#define ACPI_TABLE_IS_LOADED            (8)
+#define ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL  (0)	/* Virtual address, external maintained */
+#define ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL (1)	/* Physical address, internally mapped */
+#define ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL  (2)	/* Virtual address, internally allocated */
+#define ACPI_TABLE_ORIGIN_MASK              (3)
+#define ACPI_TABLE_IS_LOADED                (8)

 /*
  * Get the remaining ACPI tables
+2 -2
include/acpi/actbl1.h
···
 };

 /*
- * MADT Sub-tables, correspond to Type in struct acpi_subtable_header
+ * MADT Subtables, correspond to Type in struct acpi_subtable_header
  */

 /* 0: Processor Local APIC */
···
 };

 /*
- * SRAT Sub-tables, correspond to Type in struct acpi_subtable_header
+ * SRAT Subtables, correspond to Type in struct acpi_subtable_header
  */

 /* 0: Processor Local APIC/SAPIC Affinity */
+68 -3
include/acpi/actbl2.h
···
 #define ACPI_SIG_HPET           "HPET"	/* High Precision Event Timer table */
 #define ACPI_SIG_IBFT           "IBFT"	/* iSCSI Boot Firmware Table */
 #define ACPI_SIG_IVRS           "IVRS"	/* I/O Virtualization Reporting Structure */
+#define ACPI_SIG_LPIT           "LPIT"	/* Low Power Idle Table */
 #define ACPI_SIG_MCFG           "MCFG"	/* PCI Memory Mapped Configuration table */
 #define ACPI_SIG_MCHI           "MCHI"	/* Management Controller Host Interface table */
 #define ACPI_SIG_MTMR           "MTMR"	/* MID Timer table */
···
 };

 /*
- * DMAR Sub-tables, correspond to Type in struct acpi_dmar_header
+ * DMAR Subtables, correspond to Type in struct acpi_dmar_header
  */

 /* 0: Hardware Unit Definition */
···

 /*******************************************************************************
  *
- * MCFG - PCI Memory Mapped Configuration table and sub-table
+ * LPIT - Low Power Idle Table
+ *
+ * Conforms to "ACPI Low Power Idle Table (LPIT) and _LPD Proposal (DRAFT)"
+ *
+ ******************************************************************************/
+
+struct acpi_table_lpit {
+	struct acpi_table_header header;	/* Common ACPI table header */
+};
+
+/* LPIT subtable header */
+
+struct acpi_lpit_header {
+	u32 type;		/* Subtable type */
+	u32 length;		/* Subtable length */
+	u16 unique_id;
+	u16 reserved;
+	u32 flags;
+};
+
+/* Values for subtable Type above */
+
+enum acpi_lpit_type {
+	ACPI_LPIT_TYPE_NATIVE_CSTATE = 0x00,
+	ACPI_LPIT_TYPE_SIMPLE_IO = 0x01
+};
+
+/* Masks for Flags field above */
+
+#define ACPI_LPIT_STATE_DISABLED    (1)
+#define ACPI_LPIT_NO_COUNTER        (1<<1)
+
+/*
+ * LPIT subtables, correspond to Type in struct acpi_lpit_header
+ */
+
+/* 0x00: Native C-state instruction based LPI structure */
+
+struct acpi_lpit_native {
+	struct acpi_lpit_header header;
+	struct acpi_generic_address entry_trigger;
+	u32 residency;
+	u32 latency;
+	struct acpi_generic_address residency_counter;
+	u64 counter_frequency;
+};
+
+/* 0x01: Simple I/O based LPI structure */
+
+struct acpi_lpit_io {
+	struct acpi_lpit_header header;
+	struct acpi_generic_address entry_trigger;
+	u32 trigger_action;
+	u64 trigger_value;
+	u64 trigger_mask;
+	struct acpi_generic_address minimum_idle_state;
+	u32 residency;
+	u32 latency;
+	struct acpi_generic_address residency_counter;
+	u64 counter_frequency;
+};
+
+/*******************************************************************************
+ *
+ * MCFG - PCI Memory Mapped Configuration table and subtable
  * Version 1
  *
  * Conforms to "PCI Firmware Specification", Revision 3.0, June 20, 2005
···
 };

 /*
- * SLIC Sub-tables, correspond to Type in struct acpi_slic_header
+ * SLIC Subtables, correspond to Type in struct acpi_slic_header
  */

 /* 0: Public Key Structure */
+21
include/acpi/actypes.h
···
  *
  ******************************************************************************/

+#ifdef ACPI_NO_MEM_ALLOCATIONS
+
+#define ACPI_ALLOCATE(a)        NULL
+#define ACPI_ALLOCATE_ZEROED(a) NULL
+#define ACPI_FREE(a)
+#define ACPI_MEM_TRACKING(a)
+
+#else				/* ACPI_NO_MEM_ALLOCATIONS */
+
 #ifdef ACPI_DBG_TRACK_ALLOCATIONS
 /*
  * Memory allocation tracking (used by acpi_exec to detect memory leaks)
···
 #define ACPI_MEM_TRACKING(a)

 #endif				/* ACPI_DBG_TRACK_ALLOCATIONS */
+
+#endif				/* ACPI_NO_MEM_ALLOCATIONS */

 /******************************************************************************
  *
···
  * Miscellaneous common Data Structures used by the interfaces
  */
 #define ACPI_NO_BUFFER              0
+
+#ifdef ACPI_NO_MEM_ALLOCATIONS
+
+#define ACPI_ALLOCATE_BUFFER        (acpi_size) (0)
+#define ACPI_ALLOCATE_LOCAL_BUFFER  (acpi_size) (0)
+
+#else				/* ACPI_NO_MEM_ALLOCATIONS */
+
 #define ACPI_ALLOCATE_BUFFER        (acpi_size) (-1)	/* Let ACPICA allocate buffer */
 #define ACPI_ALLOCATE_LOCAL_BUFFER  (acpi_size) (-2)	/* For internal use only (enables tracking) */
+
+#endif				/* ACPI_NO_MEM_ALLOCATIONS */

 struct acpi_buffer {
 	acpi_size length;	/* Length in bytes of the buffer */
+63
include/acpi/platform/acenvex.h
···
+/******************************************************************************
+ *
+ * Name: acenvex.h - Extra host and compiler configuration
+ *
+ *****************************************************************************/
+
+/*
+ * Copyright (C) 2000 - 2014, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ *    substantially similar to the "NO WARRANTY" disclaimer below
+ *    ("Disclaimer") and any redistribution must be conditioned upon
+ *    including a substantially similar Disclaimer requirement for further
+ *    binary redistribution.
+ * 3. Neither the names of the above-listed copyright holders nor the names
+ *    of any contributors may be used to endorse or promote products derived
+ *    from this software without specific prior written permission.
+ *
+ * Alternatively, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2 as published by the Free
+ * Software Foundation.
+ *
+ * NO WARRANTY
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGES.
+ */
+
+#ifndef __ACENVEX_H__
+#define __ACENVEX_H__
+
+/*! [Begin] no source code translation */
+
+/******************************************************************************
+ *
+ * Extra host configuration files. All ACPICA headers are included before
+ * including these files.
+ *
+ *****************************************************************************/
+
+#if defined(_LINUX) || defined(__linux__)
+#include <acpi/platform/aclinuxex.h>
+
+#endif
+
+/*! [End] no source code translation !*/
+
+#endif				/* __ACENVEX_H__ */
+11
include/acpi/platform/acgcc.h
···
  */
 #define ACPI_UNUSED_VAR __attribute__ ((unused))

+/*
+ * Some versions of gcc implement strchr() with a buggy macro. So,
+ * undef it here. Prevents error messages of this form (usually from the
+ * file getopt.c):
+ *
+ * error: logical '&&' with non-zero constant will always evaluate as true
+ */
+#ifdef strchr
+#undef strchr
+#endif
+
 #endif				/* __ACGCC_H__ */
+72 -143
include/acpi/platform/aclinux.h
··· 48 48 49 49 #define ACPI_USE_SYSTEM_CLIBRARY 50 50 #define ACPI_USE_DO_WHILE_0 51 - #define ACPI_MUTEX_TYPE ACPI_BINARY_SEMAPHORE 52 51 53 52 #ifdef __KERNEL__ 54 53 ··· 70 71 #ifdef EXPORT_ACPI_INTERFACES 71 72 #include <linux/export.h> 72 73 #endif 73 - #include <asm/acpi.h> 74 + #include <asm/acenv.h> 75 + 76 + #ifndef CONFIG_ACPI 77 + 78 + /* External globals for __KERNEL__, stubs is needed */ 79 + 80 + #define ACPI_GLOBAL(t,a) 81 + #define ACPI_INIT_GLOBAL(t,a,b) 82 + 83 + /* Generating stubs for configurable ACPICA macros */ 84 + 85 + #define ACPI_NO_MEM_ALLOCATIONS 86 + 87 + /* Generating stubs for configurable ACPICA functions */ 88 + 89 + #define ACPI_NO_ERROR_MESSAGES 90 + #undef ACPI_DEBUG_OUTPUT 91 + 92 + /* External interface for __KERNEL__, stub is needed */ 93 + 94 + #define ACPI_EXTERNAL_RETURN_STATUS(prototype) \ 95 + static ACPI_INLINE prototype {return(AE_NOT_CONFIGURED);} 96 + #define ACPI_EXTERNAL_RETURN_OK(prototype) \ 97 + static ACPI_INLINE prototype {return(AE_OK);} 98 + #define ACPI_EXTERNAL_RETURN_VOID(prototype) \ 99 + static ACPI_INLINE prototype {return;} 100 + #define ACPI_EXTERNAL_RETURN_UINT32(prototype) \ 101 + static ACPI_INLINE prototype {return(0);} 102 + #define ACPI_EXTERNAL_RETURN_PTR(prototype) \ 103 + static ACPI_INLINE prototype {return(NULL);} 104 + 105 + #endif /* CONFIG_ACPI */ 74 106 75 107 /* Host-dependent types and defines for in-kernel ACPICA */ 76 108 ··· 113 83 #define acpi_spinlock spinlock_t * 114 84 #define acpi_cpu_flags unsigned long 115 85 116 - #else /* !__KERNEL__ */ 86 + /* Use native linux version of acpi_os_allocate_zeroed */ 117 87 118 - #include <stdarg.h> 119 - #include <string.h> 120 - #include <stdlib.h> 121 - #include <ctype.h> 122 - #include <unistd.h> 123 - 124 - /* Disable kernel specific declarators */ 125 - 126 - #ifndef __init 127 - #define __init 128 - #endif 129 - 130 - #ifndef __iomem 131 - #define __iomem 132 - #endif 133 - 134 - /* Host-dependent types and defines for user-space 
ACPICA */ 135 - 136 - #define ACPI_FLUSH_CPU_CACHE() 137 - #define ACPI_CAST_PTHREAD_T(pthread) ((acpi_thread_id) (pthread)) 138 - 139 - #if defined(__ia64__) || defined(__x86_64__) || defined(__aarch64__) 140 - #define ACPI_MACHINE_WIDTH 64 141 - #define COMPILER_DEPENDENT_INT64 long 142 - #define COMPILER_DEPENDENT_UINT64 unsigned long 143 - #else 144 - #define ACPI_MACHINE_WIDTH 32 145 - #define COMPILER_DEPENDENT_INT64 long long 146 - #define COMPILER_DEPENDENT_UINT64 unsigned long long 147 - #define ACPI_USE_NATIVE_DIVIDE 148 - #endif 149 - 150 - #ifndef __cdecl 151 - #define __cdecl 152 - #endif 153 - 154 - #endif /* __KERNEL__ */ 155 - 156 - /* Linux uses GCC */ 157 - 158 - #include <acpi/platform/acgcc.h> 159 - 160 - #ifdef __KERNEL__ 161 - 162 - /* 163 - * FIXME: Inclusion of actypes.h 164 - * Linux kernel need this before defining inline OSL interfaces as 165 - * actypes.h need to be included to find ACPICA type definitions. 166 - * Since from ACPICA's perspective, the actypes.h should be included after 167 - * acenv.h (aclinux.h), this leads to a inclusion mis-ordering issue. 168 - */ 169 - #include <acpi/actypes.h> 88 + #define USE_NATIVE_ALLOCATE_ZEROED 170 89 171 90 /* 172 91 * Overrides for in-kernel ACPICA 173 92 */ 174 - acpi_status __init acpi_os_initialize(void); 175 93 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_initialize 176 - 177 - acpi_status acpi_os_terminate(void); 178 94 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_terminate 179 - 180 - /* 181 - * Memory allocation/deallocation 182 - */ 183 - 184 - /* 185 - * The irqs_disabled() check is for resume from RAM. 186 - * Interrupts are off during resume, just like they are for boot. 187 - * However, boot has (system_state != SYSTEM_RUNNING) 188 - * to quiet __might_sleep() in kmalloc() and resume does not. 189 - */ 190 - static inline void *acpi_os_allocate(acpi_size size) 191 - { 192 - return kmalloc(size, irqs_disabled()? 
GFP_ATOMIC : GFP_KERNEL); 193 - } 194 - 195 95 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_allocate 196 - 197 - /* Use native linux version of acpi_os_allocate_zeroed */ 198 - 199 - static inline void *acpi_os_allocate_zeroed(acpi_size size) 200 - { 201 - return kzalloc(size, irqs_disabled()? GFP_ATOMIC : GFP_KERNEL); 202 - } 203 - 204 96 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_allocate_zeroed 205 - #define USE_NATIVE_ALLOCATE_ZEROED 206 - 207 - static inline void acpi_os_free(void *memory) 208 - { 209 - kfree(memory); 210 - } 211 - 212 97 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_free 213 - 214 - static inline void *acpi_os_acquire_object(acpi_cache_t * cache) 215 - { 216 - return kmem_cache_zalloc(cache, 217 - irqs_disabled()? GFP_ATOMIC : GFP_KERNEL); 218 - } 219 - 220 98 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_acquire_object 221 - 222 - static inline acpi_thread_id acpi_os_get_thread_id(void) 223 - { 224 - return (acpi_thread_id) (unsigned long)current; 225 - } 226 - 227 99 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_get_thread_id 228 - 229 - #ifndef CONFIG_PREEMPT 230 - 231 - /* 232 - * Used within ACPICA to show where it is safe to preempt execution 233 - * when CONFIG_PREEMPT=n 234 - */ 235 - #define ACPI_PREEMPTION_POINT() \ 236 - do { \ 237 - if (!irqs_disabled()) \ 238 - cond_resched(); \ 239 - } while (0) 240 - 241 - #endif 242 - 243 - /* 244 - * When lockdep is enabled, the spin_lock_init() macro stringifies it's 245 - * argument and uses that as a name for the lock in debugging. 246 - * By executing spin_lock_init() in a macro the key changes from "lock" for 247 - * all locks to the name of the argument of acpi_os_create_lock(), which 248 - * prevents lockdep from reporting false positives for ACPICA locks. 249 - */ 250 - #define acpi_os_create_lock(__handle) \ 251 - ({ \ 252 - spinlock_t *lock = ACPI_ALLOCATE(sizeof(*lock)); \ 253 - if (lock) { \ 254 - *(__handle) = lock; \ 255 - spin_lock_init(*(__handle)); \ 256 - } \ 257 - lock ? 
AE_OK : AE_NO_MEMORY; \ 258 - }) 259 100 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_create_lock 260 - 261 - void __iomem *acpi_os_map_memory(acpi_physical_address where, acpi_size length); 262 - #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_map_memory 263 - 264 - void acpi_os_unmap_memory(void __iomem * logical_address, acpi_size size); 265 - #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_unmap_memory 266 101 267 102 /* 268 103 * OSL interfaces used by debugger/disassembler ··· 147 252 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_get_next_filename 148 253 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_close_directory 149 254 150 - /* 151 - * OSL interfaces added by Linux 152 - */ 153 - void early_acpi_os_unmap_memory(void __iomem * virt, acpi_size size); 255 + #else /* !__KERNEL__ */ 256 + 257 + #include <stdarg.h> 258 + #include <string.h> 259 + #include <stdlib.h> 260 + #include <ctype.h> 261 + #include <unistd.h> 262 + 263 + /* Define/disable kernel-specific declarators */ 264 + 265 + #ifndef __init 266 + #define __init 267 + #endif 268 + 269 + /* Host-dependent types and defines for user-space ACPICA */ 270 + 271 + #define ACPI_FLUSH_CPU_CACHE() 272 + #define ACPI_CAST_PTHREAD_T(pthread) ((acpi_thread_id) (pthread)) 273 + 274 + #if defined(__ia64__) || defined(__x86_64__) ||\ 275 + defined(__aarch64__) || defined(__PPC64__) 276 + #define ACPI_MACHINE_WIDTH 64 277 + #define COMPILER_DEPENDENT_INT64 long 278 + #define COMPILER_DEPENDENT_UINT64 unsigned long 279 + #else 280 + #define ACPI_MACHINE_WIDTH 32 281 + #define COMPILER_DEPENDENT_INT64 long long 282 + #define COMPILER_DEPENDENT_UINT64 unsigned long long 283 + #define ACPI_USE_NATIVE_DIVIDE 284 + #endif 285 + 286 + #ifndef __cdecl 287 + #define __cdecl 288 + #endif 154 289 155 290 #endif /* __KERNEL__ */ 291 + 292 + /* Linux uses GCC */ 293 + 294 + #include <acpi/platform/acgcc.h> 156 295 157 296 #endif /* __ACLINUX_H__ */
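The new `!CONFIG_ACPI` branch in aclinux.h above generates inline stubs for every external ACPICA interface via the `ACPI_EXTERNAL_RETURN_*` macros. A minimal user-space sketch of that stub-generator pattern follows; the `EXTERNAL_RETURN_*` names, status values and `acpi_demo_*` functions are illustrative stand-ins, not the real ACPICA definitions:

```c
#include <assert.h>

/* Illustrative status codes standing in for ACPICA's acpi_status values. */
typedef int acpi_status;
#define AE_OK             0
#define AE_NOT_CONFIGURED 1

/*
 * Stub generators in the style of ACPI_EXTERNAL_RETURN_STATUS/_UINT32:
 * each wraps a full prototype in a static inline body returning a fixed
 * value, so callers compile unchanged when the real implementation is
 * configured out.
 */
#define EXTERNAL_RETURN_STATUS(prototype) \
	static inline prototype { return AE_NOT_CONFIGURED; }
#define EXTERNAL_RETURN_UINT32(prototype) \
	static inline prototype { return 0; }

/* With the stubs in effect, these "interfaces" collapse to constants. */
EXTERNAL_RETURN_STATUS(acpi_status acpi_demo_enable(void))
EXTERNAL_RETURN_UINT32(unsigned int acpi_demo_read(void))
```

Because the macro argument is a complete prototype, the same header line serves both the configured-in (extern declaration) and configured-out (inline stub) builds.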
+112
include/acpi/platform/aclinuxex.h
··· 1 + /****************************************************************************** 2 + * 3 + * Name: aclinuxex.h - Extra OS specific defines, etc. for Linux 4 + * 5 + *****************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2014, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #ifndef __ACLINUXEX_H__ 45 + #define __ACLINUXEX_H__ 46 + 47 + #ifdef __KERNEL__ 48 + 49 + /* 50 + * Overrides for in-kernel ACPICA 51 + */ 52 + acpi_status __init acpi_os_initialize(void); 53 + 54 + acpi_status acpi_os_terminate(void); 55 + 56 + /* 57 + * The irqs_disabled() check is for resume from RAM. 58 + * Interrupts are off during resume, just like they are for boot. 59 + * However, boot has (system_state != SYSTEM_RUNNING) 60 + * to quiet __might_sleep() in kmalloc() and resume does not. 61 + */ 62 + static inline void *acpi_os_allocate(acpi_size size) 63 + { 64 + return kmalloc(size, irqs_disabled()? GFP_ATOMIC : GFP_KERNEL); 65 + } 66 + 67 + static inline void *acpi_os_allocate_zeroed(acpi_size size) 68 + { 69 + return kzalloc(size, irqs_disabled()? GFP_ATOMIC : GFP_KERNEL); 70 + } 71 + 72 + static inline void acpi_os_free(void *memory) 73 + { 74 + kfree(memory); 75 + } 76 + 77 + static inline void *acpi_os_acquire_object(acpi_cache_t * cache) 78 + { 79 + return kmem_cache_zalloc(cache, 80 + irqs_disabled()? GFP_ATOMIC : GFP_KERNEL); 81 + } 82 + 83 + static inline acpi_thread_id acpi_os_get_thread_id(void) 84 + { 85 + return (acpi_thread_id) (unsigned long)current; 86 + } 87 + 88 + /* 89 + * When lockdep is enabled, the spin_lock_init() macro stringifies it's 90 + * argument and uses that as a name for the lock in debugging. 
91 + * By executing spin_lock_init() in a macro the key changes from "lock" for 92 + * all locks to the name of the argument of acpi_os_create_lock(), which 93 + * prevents lockdep from reporting false positives for ACPICA locks. 94 + */ 95 + #define acpi_os_create_lock(__handle) \ 96 + ({ \ 97 + spinlock_t *lock = ACPI_ALLOCATE(sizeof(*lock)); \ 98 + if (lock) { \ 99 + *(__handle) = lock; \ 100 + spin_lock_init(*(__handle)); \ 101 + } \ 102 + lock ? AE_OK : AE_NO_MEMORY; \ 103 + }) 104 + 105 + /* 106 + * OSL interfaces added by Linux 107 + */ 108 + void early_acpi_os_unmap_memory(void __iomem * virt, acpi_size size); 109 + 110 + #endif /* __KERNEL__ */ 111 + 112 + #endif /* __ACLINUXEX_H__ */
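The `acpi_os_create_lock()` macro moved into aclinuxex.h above relies on a GNU statement expression so that lockdep's stringification sees the caller's variable name. A user-space analogue (with `malloc`/`pthread_mutex_t` standing in for `ACPI_ALLOCATE`/`spinlock_t`, and the `AE_*` values illustrative) looks like this:

```c
#include <assert.h>
#include <stdlib.h>
#include <pthread.h>

/* Illustrative status values mirroring the ACPICA names. */
enum { AE_OK = 0, AE_NO_MEMORY = 12 };

/*
 * Statement-expression macro in the style of acpi_os_create_lock():
 * because the body expands at each call site, a debugging layer that
 * stringifies the init argument (lockdep in the kernel) sees the
 * caller's own variable name instead of one shared "lock" name.
 */
#define demo_create_lock(__handle)				\
({								\
	pthread_mutex_t *lock = malloc(sizeof(*lock));		\
	if (lock) {						\
		*(__handle) = lock;				\
		pthread_mutex_init(*(__handle), NULL);		\
	}							\
	lock ? AE_OK : AE_NO_MEMORY;				\
})

static pthread_mutex_t *demo_lock;

/* Returns AE_OK and points demo_lock at an initialized mutex. */
static int demo_lock_setup(void)
{
	return demo_create_lock(&demo_lock);
}
```

The statement expression evaluates to the final `lock ? ... : ...` line, which is how the kernel macro yields an `acpi_status` while still running `spin_lock_init()` at the call site.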
+2
include/acpi/video.h
··· 19 19 #if (defined CONFIG_ACPI_VIDEO || defined CONFIG_ACPI_VIDEO_MODULE) 20 20 extern int acpi_video_register(void); 21 21 extern void acpi_video_unregister(void); 22 + extern void acpi_video_unregister_backlight(void); 22 23 extern int acpi_video_get_edid(struct acpi_device *device, int type, 23 24 int device_id, void **edid); 24 25 #else 25 26 static inline int acpi_video_register(void) { return 0; } 26 27 static inline void acpi_video_unregister(void) { return; } 28 + static inline void acpi_video_unregister_backlight(void) { return; } 27 29 static inline int acpi_video_get_edid(struct acpi_device *device, int type, 28 30 int device_id, void **edid) 29 31 {
+28 -2
include/linux/acpi.h
··· 37 37 38 38 #include <linux/list.h> 39 39 #include <linux/mod_devicetable.h> 40 + #include <linux/dynamic_debug.h> 40 41 41 42 #include <acpi/acpi.h> 42 43 #include <acpi/acpi_bus.h> ··· 184 183 const u8 *wdata, unsigned wdata_len, 185 184 u8 *rdata, unsigned rdata_len); 186 185 extern acpi_handle ec_get_handle(void); 186 + 187 + extern bool acpi_is_pnp_device(struct acpi_device *); 187 188 188 189 #if defined(CONFIG_ACPI_WMI) || defined(CONFIG_ACPI_WMI_MODULE) 189 190 ··· 557 554 int acpi_dev_suspend_late(struct device *dev); 558 555 int acpi_dev_resume_early(struct device *dev); 559 556 int acpi_subsys_prepare(struct device *dev); 557 + void acpi_subsys_complete(struct device *dev); 560 558 int acpi_subsys_suspend_late(struct device *dev); 561 559 int acpi_subsys_resume_early(struct device *dev); 560 + int acpi_subsys_suspend(struct device *dev); 561 + int acpi_subsys_freeze(struct device *dev); 562 562 #else 563 563 static inline int acpi_dev_suspend_late(struct device *dev) { return 0; } 564 564 static inline int acpi_dev_resume_early(struct device *dev) { return 0; } 565 565 static inline int acpi_subsys_prepare(struct device *dev) { return 0; } 566 + static inline void acpi_subsys_complete(struct device *dev) {} 566 567 static inline int acpi_subsys_suspend_late(struct device *dev) { return 0; } 567 568 static inline int acpi_subsys_resume_early(struct device *dev) { return 0; } 569 + static inline int acpi_subsys_suspend(struct device *dev) { return 0; } 570 + static inline int acpi_subsys_freeze(struct device *dev) { return 0; } 568 571 #endif 569 572 570 573 #if defined(CONFIG_ACPI) && defined(CONFIG_PM) ··· 598 589 acpi_handle_printk(const char *level, void *handle, const char *fmt, ...) 
{} 599 590 #endif /* !CONFIG_ACPI */ 600 591 592 + #if defined(CONFIG_ACPI) && defined(CONFIG_DYNAMIC_DEBUG) 593 + __printf(3, 4) 594 + void __acpi_handle_debug(struct _ddebug *descriptor, acpi_handle handle, const char *fmt, ...); 595 + #else 596 + #define __acpi_handle_debug(descriptor, handle, fmt, ...) \ 597 + acpi_handle_printk(KERN_DEBUG, handle, fmt, ##__VA_ARGS__); 598 + #endif 599 + 601 600 /* 602 601 * acpi_handle_<level>: Print message with ACPI prefix and object path 603 602 * ··· 627 610 #define acpi_handle_info(handle, fmt, ...) \ 628 611 acpi_handle_printk(KERN_INFO, handle, fmt, ##__VA_ARGS__) 629 612 630 - /* REVISIT: Support CONFIG_DYNAMIC_DEBUG when necessary */ 631 - #if defined(DEBUG) || defined(CONFIG_DYNAMIC_DEBUG) 613 + #if defined(DEBUG) 632 614 #define acpi_handle_debug(handle, fmt, ...) \ 633 615 acpi_handle_printk(KERN_DEBUG, handle, fmt, ##__VA_ARGS__) 616 + #else 617 + #if defined(CONFIG_DYNAMIC_DEBUG) 618 + #define acpi_handle_debug(handle, fmt, ...) \ 619 + do { \ 620 + DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, fmt); \ 621 + if (unlikely(descriptor.flags & _DPRINTK_FLAGS_PRINT)) \ 622 + __acpi_handle_debug(&descriptor, handle, pr_fmt(fmt), \ 623 + ##__VA_ARGS__); \ 624 + } while (0) 634 625 #else 635 626 #define acpi_handle_debug(handle, fmt, ...) \ 636 627 ({ \ ··· 646 621 acpi_handle_printk(KERN_DEBUG, handle, fmt, ##__VA_ARGS__); \ 647 622 0; \ 648 623 }) 624 + #endif 649 625 #endif 650 626 651 627 #endif /*_LINUX_ACPI_H*/
+7
include/linux/backlight.h
··· 40 40 BACKLIGHT_TYPE_MAX, 41 41 }; 42 42 43 + enum backlight_notification { 44 + BACKLIGHT_REGISTERED, 45 + BACKLIGHT_UNREGISTERED, 46 + }; 47 + 43 48 struct backlight_device; 44 49 struct fb_info; 45 50 ··· 138 133 extern void backlight_force_update(struct backlight_device *bd, 139 134 enum backlight_update_reason reason); 140 135 extern bool backlight_device_registered(enum backlight_type type); 136 + extern int backlight_register_notifier(struct notifier_block *nb); 137 + extern int backlight_unregister_notifier(struct notifier_block *nb); 141 138 142 139 #define to_backlight_device(obj) container_of(obj, struct backlight_device, dev) 143 140
+31
include/linux/clk-provider.h
··· 413 413 const char *parent_name, unsigned long flags, 414 414 unsigned int mult, unsigned int div); 415 415 416 + /** 417 + * struct clk_fractional_divider - adjustable fractional divider clock 418 + * 419 + * @hw: handle between common and hardware-specific interfaces 420 + * @reg: register containing the divider 421 + * @mshift: shift to the numerator bit field 422 + * @mmask: mask of the numerator bit field 423 + * @nshift: shift to the denominator bit field 424 + * @nmask: mask of the denominator bit field 425 + * @lock: register lock 426 + * 427 + * Clock with adjustable fractional divider affecting its output frequency. 428 + */ 429 + 430 + struct clk_fractional_divider { 431 + struct clk_hw hw; 432 + void __iomem *reg; 433 + u8 mshift; 434 + u32 mmask; 435 + u8 nshift; 436 + u32 nmask; 437 + u8 flags; 438 + spinlock_t *lock; 439 + }; 440 + 441 + extern const struct clk_ops clk_fractional_divider_ops; 442 + struct clk *clk_register_fractional_divider(struct device *dev, 443 + const char *name, const char *parent_name, unsigned long flags, 444 + void __iomem *reg, u8 mshift, u8 mwidth, u8 nshift, u8 nwidth, 445 + u8 clk_divider_flags, spinlock_t *lock); 446 + 416 447 /*** 417 448 * struct clk_composite - aggregate clock of mux, divider and gate clocks 418 449 *
+50
include/linux/cpufreq.h
··· 110 110 bool transition_ongoing; /* Tracks transition status */ 111 111 spinlock_t transition_lock; 112 112 wait_queue_head_t transition_wait; 113 + struct task_struct *transition_task; /* Task which is doing the transition */ 113 114 }; 114 115 115 116 /* Only for ACPI */ ··· 468 467 unsigned int frequency; /* kHz - doesn't need to be in ascending 469 468 * order */ 470 469 }; 470 + 471 + #if defined(CONFIG_CPU_FREQ) && defined(CONFIG_PM_OPP) 472 + int dev_pm_opp_init_cpufreq_table(struct device *dev, 473 + struct cpufreq_frequency_table **table); 474 + void dev_pm_opp_free_cpufreq_table(struct device *dev, 475 + struct cpufreq_frequency_table **table); 476 + #else 477 + static inline int dev_pm_opp_init_cpufreq_table(struct device *dev, 478 + struct cpufreq_frequency_table 479 + **table) 480 + { 481 + return -EINVAL; 482 + } 483 + 484 + static inline void dev_pm_opp_free_cpufreq_table(struct device *dev, 485 + struct cpufreq_frequency_table 486 + **table) 487 + { 488 + } 489 + #endif 490 + 491 + static inline bool cpufreq_next_valid(struct cpufreq_frequency_table **pos) 492 + { 493 + while ((*pos)->frequency != CPUFREQ_TABLE_END) 494 + if ((*pos)->frequency != CPUFREQ_ENTRY_INVALID) 495 + return true; 496 + else 497 + (*pos)++; 498 + return false; 499 + } 500 + 501 + /* 502 + * cpufreq_for_each_entry - iterate over a cpufreq_frequency_table 503 + * @pos: the cpufreq_frequency_table * to use as a loop cursor. 504 + * @table: the cpufreq_frequency_table * to iterate over. 505 + */ 506 + 507 + #define cpufreq_for_each_entry(pos, table) \ 508 + for (pos = table; pos->frequency != CPUFREQ_TABLE_END; pos++) 509 + 510 + /* 511 + * cpufreq_for_each_valid_entry - iterate over a cpufreq_frequency_table 512 + * excluding CPUFREQ_ENTRY_INVALID frequencies. 513 + * @pos: the cpufreq_frequency_table * to use as a loop cursor. 514 + * @table: the cpufreq_frequency_table * to iterate over. 
515 + */ 516 + 517 + #define cpufreq_for_each_valid_entry(pos, table) \ 518 + for (pos = table; cpufreq_next_valid(&pos); pos++) 471 519 472 520 int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy, 473 521 struct cpufreq_frequency_table *table);
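The cpufreq.h hunk above adds `cpufreq_next_valid()` and the `cpufreq_for_each_valid_entry()` iterator. A stand-alone sketch of the same cursor logic (everything `demo_`-prefixed; sentinel values are illustrative, not the kernel's constants):

```c
#include <assert.h>

/* Stand-ins for the cpufreq table markers (values are illustrative). */
#define DEMO_ENTRY_INVALID	(~0u)
#define DEMO_TABLE_END		(~0u - 1)

struct freq_entry { unsigned int frequency; };

/* Same shape as cpufreq_next_valid(): advance the cursor past invalid
 * rows and stop at the end sentinel. */
static int demo_next_valid(struct freq_entry **pos)
{
	while ((*pos)->frequency != DEMO_TABLE_END)
		if ((*pos)->frequency != DEMO_ENTRY_INVALID)
			return 1;
		else
			(*pos)++;
	return 0;
}

/* Mirrors cpufreq_for_each_valid_entry(). */
#define demo_for_each_valid_entry(pos, table) \
	for (pos = table; demo_next_valid(&pos); pos++)

static unsigned int demo_count_valid(struct freq_entry *table)
{
	struct freq_entry *pos;
	unsigned int n = 0;

	demo_for_each_valid_entry(pos, table)
		n++;
	return n;
}

/* Two usable frequencies, one invalid hole, then the end marker. */
static struct freq_entry demo_table[] = {
	{ 800000 }, { DEMO_ENTRY_INVALID }, { 1200000 }, { DEMO_TABLE_END },
};
```

Note how the helper takes `struct freq_entry **`: the `for` loop's `pos++` and the helper's skip of invalid rows share one cursor, so the loop body only ever sees valid entries.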
+34 -1
include/linux/devfreq.h
··· 181 181 const char *governor_name, 182 182 void *data); 183 183 extern int devfreq_remove_device(struct devfreq *devfreq); 184 + extern struct devfreq *devm_devfreq_add_device(struct device *dev, 185 + struct devfreq_dev_profile *profile, 186 + const char *governor_name, 187 + void *data); 188 + extern void devm_devfreq_remove_device(struct device *dev, 189 + struct devfreq *devfreq); 184 190 185 191 /* Supposed to be called by PM_SLEEP/PM_RUNTIME callbacks */ 186 192 extern int devfreq_suspend_device(struct devfreq *devfreq); ··· 199 193 struct devfreq *devfreq); 200 194 extern int devfreq_unregister_opp_notifier(struct device *dev, 201 195 struct devfreq *devfreq); 196 + extern int devm_devfreq_register_opp_notifier(struct device *dev, 197 + struct devfreq *devfreq); 198 + extern void devm_devfreq_unregister_opp_notifier(struct device *dev, 199 + struct devfreq *devfreq); 202 200 203 201 #if IS_ENABLED(CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND) 204 202 /** ··· 230 220 const char *governor_name, 231 221 void *data) 232 222 { 233 - return NULL; 223 + return ERR_PTR(-ENOSYS); 234 224 } 235 225 236 226 static inline int devfreq_remove_device(struct devfreq *devfreq) 237 227 { 238 228 return 0; 229 + } 230 + 231 + static inline struct devfreq *devm_devfreq_add_device(struct device *dev, 232 + struct devfreq_dev_profile *profile, 233 + const char *governor_name, 234 + void *data) 235 + { 236 + return ERR_PTR(-ENOSYS); 237 + } 238 + 239 + static inline void devm_devfreq_remove_device(struct device *dev, 240 + struct devfreq *devfreq) 241 + { 239 242 } 240 243 241 244 static inline int devfreq_suspend_device(struct devfreq *devfreq) ··· 279 256 return -EINVAL; 280 257 } 281 258 259 + static inline int devm_devfreq_register_opp_notifier(struct device *dev, 260 + struct devfreq *devfreq) 261 + { 262 + return -EINVAL; 263 + } 264 + 265 + static inline void devm_devfreq_unregister_opp_notifier(struct device *dev, 266 + struct devfreq *devfreq) 267 + { 268 + } 282 269 #endif /* 
CONFIG_PM_DEVFREQ */ 283 270 284 271 #endif /* __LINUX_DEVFREQ_H__ */
+28 -8
include/linux/pm.h
··· 93 93 * been registered) to recover from the race condition. 94 94 * This method is executed for all kinds of suspend transitions and is 95 95 * followed by one of the suspend callbacks: @suspend(), @freeze(), or 96 - * @poweroff(). The PM core executes subsystem-level @prepare() for all 97 - * devices before starting to invoke suspend callbacks for any of them, so 98 - * generally devices may be assumed to be functional or to respond to 99 - * runtime resume requests while @prepare() is being executed. However, 100 - * device drivers may NOT assume anything about the availability of user 101 - * space at that time and it is NOT valid to request firmware from within 102 - * @prepare() (it's too late to do that). It also is NOT valid to allocate 96 + * @poweroff(). If the transition is a suspend to memory or standby (that 97 + * is, not related to hibernation), the return value of @prepare() may be 98 + * used to indicate to the PM core to leave the device in runtime suspend 99 + * if applicable. Namely, if @prepare() returns a positive number, the PM 100 + * core will understand that as a declaration that the device appears to be 101 + * runtime-suspended and it may be left in that state during the entire 102 + * transition and during the subsequent resume if all of its descendants 103 + * are left in runtime suspend too. If that happens, @complete() will be 104 + * executed directly after @prepare() and it must ensure the proper 105 + * functioning of the device after the system resume. 106 + * The PM core executes subsystem-level @prepare() for all devices before 107 + * starting to invoke suspend callbacks for any of them, so generally 108 + * devices may be assumed to be functional or to respond to runtime resume 109 + * requests while @prepare() is being executed. 
However, device drivers 110 + * may NOT assume anything about the availability of user space at that 111 + * time and it is NOT valid to request firmware from within @prepare() 112 + * (it's too late to do that). It also is NOT valid to allocate 103 113 * substantial amounts of memory from @prepare() in the GFP_KERNEL mode. 104 114 * [To work around these limitations, drivers may register suspend and 105 115 * hibernation notifiers to be executed before the freezing of tasks.] ··· 122 112 * of the other devices that the PM core has unsuccessfully attempted to 123 113 * suspend earlier). 124 114 * The PM core executes subsystem-level @complete() after it has executed 125 - * the appropriate resume callbacks for all devices. 115 + * the appropriate resume callbacks for all devices. If the corresponding 116 + * @prepare() at the beginning of the suspend transition returned a 117 + * positive number and the device was left in runtime suspend (without 118 + * executing any suspend and resume callbacks for it), @complete() will be 119 + * the only callback executed for the device during resume. In that case, 120 + * @complete() must be prepared to do whatever is necessary to ensure the 121 + * proper functioning of the device after the system resume. To this end, 122 + * @complete() can check the power.direct_complete flag of the device to 123 + * learn whether (unset) or not (set) the previous suspend and resume 124 + * callbacks have been executed for it. 126 125 * 127 126 * @suspend: Executed before putting the system into a sleep state in which the 128 127 * contents of main memory are preserved. The exact action to perform ··· 565 546 bool is_late_suspended:1; 566 547 bool ignore_children:1; 567 548 bool early_init:1; /* Owned by the PM core */ 549 + bool direct_complete:1; /* Owned by the PM core */ 568 550 spinlock_t lock; 569 551 #ifdef CONFIG_PM_SLEEP 570 552 struct list_head entry;
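The reworked `@prepare()`/`@complete()` documentation above describes the new direct_complete mechanism: a positive return from `->prepare()` lets the PM core skip the suspend/resume callbacks of a runtime-suspended device. A toy model of that flow (field names mirror the kernel's, behavior heavily simplified):

```c
#include <assert.h>
#include <stdbool.h>

struct demo_dev {
	bool runtime_suspended;
	bool direct_complete;	/* "owned by the PM core" here too */
	int suspend_calls;
	int complete_calls;
};

/* A positive return from ->prepare() declares "leave me runtime-suspended". */
static int demo_prepare(struct demo_dev *d)
{
	return d->runtime_suspended ? 1 : 0;
}

static void demo_system_suspend(struct demo_dev *d)
{
	d->direct_complete = demo_prepare(d) > 0;
	if (!d->direct_complete)
		d->suspend_calls++;	/* ->suspend(), ->suspend_late() etc. */
}

static void demo_system_resume(struct demo_dev *d)
{
	/* ->complete() always runs; when direct_complete was set it is the
	 * only callback the device saw for the whole transition. */
	d->complete_calls++;
	d->direct_complete = false;
}

/* Run one suspend/resume cycle; report how many suspend callbacks fired. */
static int demo_skipped_callbacks(bool runtime_suspended)
{
	struct demo_dev d = { .runtime_suspended = runtime_suspended };

	demo_system_suspend(&d);
	demo_system_resume(&d);
	return d.suspend_calls;
}
```

In the real kernel the decision also depends on every descendant staying in runtime suspend and on bus-type cooperation; this sketch only captures the flag's role.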
-20
include/linux/pm_opp.h
··· 15 15 #define __LINUX_OPP_H__ 16 16 17 17 #include <linux/err.h> 18 - #include <linux/cpufreq.h> 19 18 #include <linux/notifier.h> 20 19 21 20 struct dev_pm_opp; ··· 115 116 return -EINVAL; 116 117 } 117 118 #endif 118 - 119 - #if defined(CONFIG_CPU_FREQ) && defined(CONFIG_PM_OPP) 120 - int dev_pm_opp_init_cpufreq_table(struct device *dev, 121 - struct cpufreq_frequency_table **table); 122 - void dev_pm_opp_free_cpufreq_table(struct device *dev, 123 - struct cpufreq_frequency_table **table); 124 - #else 125 - static inline int dev_pm_opp_init_cpufreq_table(struct device *dev, 126 - struct cpufreq_frequency_table **table) 127 - { 128 - return -EINVAL; 129 - } 130 - 131 - static inline 132 - void dev_pm_opp_free_cpufreq_table(struct device *dev, 133 - struct cpufreq_frequency_table **table) 134 - { 135 - } 136 - #endif /* CONFIG_CPU_FREQ */ 137 119 138 120 #endif /* __LINUX_OPP_H__ */
+6
include/linux/pm_runtime.h
··· 101 101 return dev->power.runtime_status == RPM_SUSPENDED; 102 102 } 103 103 104 + static inline bool pm_runtime_suspended_if_enabled(struct device *dev) 105 + { 106 + return pm_runtime_status_suspended(dev) && dev->power.disable_depth == 1; 107 + } 108 + 104 109 static inline bool pm_runtime_enabled(struct device *dev) 105 110 { 106 111 return !dev->power.disable_depth; ··· 155 150 static inline bool pm_runtime_suspended(struct device *dev) { return false; } 156 151 static inline bool pm_runtime_active(struct device *dev) { return true; } 157 152 static inline bool pm_runtime_status_suspended(struct device *dev) { return false; } 153 + static inline bool pm_runtime_suspended_if_enabled(struct device *dev) { return false; } 158 154 static inline bool pm_runtime_enabled(struct device *dev) { return false; } 159 155 160 156 static inline void pm_runtime_no_callbacks(struct device *dev) {}
+2
include/linux/power_supply.h
··· 264 264 265 265 extern int power_supply_register(struct device *parent, 266 266 struct power_supply *psy); 267 + extern int power_supply_register_no_ws(struct device *parent, 268 + struct power_supply *psy); 267 269 extern void power_supply_unregister(struct power_supply *psy); 268 270 extern int power_supply_powers(struct power_supply *psy, struct device *dev); 269 271
+7
include/linux/suspend.h
··· 187 187 void (*recover)(void); 188 188 }; 189 189 190 + struct platform_freeze_ops { 191 + int (*begin)(void); 192 + void (*end)(void); 193 + }; 194 + 190 195 #ifdef CONFIG_SUSPEND 191 196 /** 192 197 * suspend_set_ops - set platform dependent suspend operations ··· 199 194 */ 200 195 extern void suspend_set_ops(const struct platform_suspend_ops *ops); 201 196 extern int suspend_valid_only_mem(suspend_state_t state); 197 + extern void freeze_set_ops(const struct platform_freeze_ops *ops); 202 198 extern void freeze_wake(void); 203 199 204 200 /** ··· 226 220 227 221 static inline void suspend_set_ops(const struct platform_suspend_ops *ops) {} 228 222 static inline int pm_suspend(suspend_state_t state) { return -ENOSYS; } 223 + static inline void freeze_set_ops(const struct platform_freeze_ops *ops) {} 229 224 static inline void freeze_wake(void) {} 230 225 #endif /* !CONFIG_SUSPEND */ 231 226
+1 -2
kernel/power/Kconfig
··· 257 257 bool 258 258 259 259 config PM_OPP 260 - bool "Operating Performance Point (OPP) Layer library" 261 - depends on ARCH_HAS_OPP 260 + bool 262 261 ---help--- 263 262 SOCs have a standard set of tuples consisting of frequency and 264 263 voltage pairs that the device will support per voltage domain. This
+18 -9
kernel/power/hibernate.c
··· 35 35 static int nocompress; 36 36 static int noresume; 37 37 static int resume_wait; 38 - static int resume_delay; 38 + static unsigned int resume_delay; 39 39 static char resume_file[256] = CONFIG_PM_STD_PARTITION; 40 40 dev_t swsusp_resume_device; 41 41 sector_t swsusp_resume_block; ··· 228 228 void swsusp_show_speed(struct timeval *start, struct timeval *stop, 229 229 unsigned nr_pages, char *msg) 230 230 { 231 - s64 elapsed_centisecs64; 232 - int centisecs; 233 - int k; 234 - int kps; 231 + u64 elapsed_centisecs64; 232 + unsigned int centisecs; 233 + unsigned int k; 234 + unsigned int kps; 235 235 236 236 elapsed_centisecs64 = timeval_to_ns(stop) - timeval_to_ns(start); 237 + /* 238 + * If "(s64)elapsed_centisecs64 < 0", it will print long elapsed time, 239 + * it is obvious enough for what went wrong. 240 + */ 237 241 do_div(elapsed_centisecs64, NSEC_PER_SEC / 100); 238 242 centisecs = elapsed_centisecs64; 239 243 if (centisecs == 0) 240 244 centisecs = 1; /* avoid div-by-zero */ 241 245 k = nr_pages * (PAGE_SIZE / 1024); 242 246 kps = (k * 100) / centisecs; 243 - printk(KERN_INFO "PM: %s %d kbytes in %d.%02d seconds (%d.%02d MB/s)\n", 247 + printk(KERN_INFO "PM: %s %u kbytes in %u.%02u seconds (%u.%02u MB/s)\n", 244 248 msg, k, 245 249 centisecs / 100, centisecs % 100, 246 250 kps / 1000, (kps % 1000) / 10); ··· 599 595 case HIBERNATION_PLATFORM: 600 596 hibernation_platform_enter(); 601 597 case HIBERNATION_SHUTDOWN: 602 - kernel_power_off(); 598 + if (pm_power_off) 599 + kernel_power_off(); 603 600 break; 604 601 #ifdef CONFIG_SUSPEND 605 602 case HIBERNATION_SUSPEND: ··· 628 623 * corruption after resume. 
629 624 */ 630 625 printk(KERN_CRIT "PM: Please power down manually\n"); 631 - while(1); 626 + while (1) 627 + cpu_relax(); 632 628 } 633 629 634 630 /** ··· 1115 1109 1116 1110 static int __init resumedelay_setup(char *str) 1117 1111 { 1118 - resume_delay = simple_strtoul(str, NULL, 0); 1112 + int rc = kstrtouint(str, 0, &resume_delay); 1113 + 1114 + if (rc) 1115 + return rc; 1119 1116 return 1; 1120 1117 } 1121 1118
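The hibernate.c hunk above switches `swsusp_show_speed()` to unsigned arithmetic. The underlying computation (nanoseconds to centiseconds, pages to kbytes, rate scaled by 100 for two decimal places) can be checked in isolation; `DEMO_PAGE_SIZE` is assumed 4 KiB here:

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_NSEC_PER_SEC	1000000000ULL
#define DEMO_PAGE_SIZE		4096u

/* The arithmetic behind swsusp_show_speed(), in unsigned types. */
static unsigned int demo_speed_kps(uint64_t elapsed_ns, unsigned int nr_pages)
{
	unsigned int centisecs = elapsed_ns / (DEMO_NSEC_PER_SEC / 100);
	unsigned int k = nr_pages * (DEMO_PAGE_SIZE / 1024);

	if (centisecs == 0)
		centisecs = 1;		/* avoid div-by-zero, as above */
	return (k * 100) / centisecs;	/* kbytes/s, x100 for 2 decimals */
}
```

For example, 1024 pages (4096 KB) written in 2 s is 200 centiseconds, giving 4096 * 100 / 200 = 2048 KB/s.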
+17 -16
kernel/power/main.c
··· 279 279 struct kobject *power_kobj; 280 280 281 281 /** 282 - * state - control system power state. 282 + * state - control system sleep states. 283 283 * 284 - * show() returns what states are supported, which is hard-coded to 285 - * 'freeze' (Low-Power Idle), 'standby' (Power-On Suspend), 286 - * 'mem' (Suspend-to-RAM), and 'disk' (Suspend-to-Disk). 284 + * show() returns available sleep state labels, which may be "mem", "standby", 285 + * "freeze" and "disk" (hibernation). See Documentation/power/states.txt for a 286 + * description of what they mean. 287 287 * 288 - * store() accepts one of those strings, translates it into the 289 - * proper enumerated value, and initiates a suspend transition. 288 + * store() accepts one of those strings, translates it into the proper 289 + * enumerated value, and initiates a suspend transition. 290 290 */ 291 291 static ssize_t state_show(struct kobject *kobj, struct kobj_attribute *attr, 292 292 char *buf) 293 293 { 294 294 char *s = buf; 295 295 #ifdef CONFIG_SUSPEND 296 - int i; 296 + suspend_state_t i; 297 297 298 - for (i = 0; i < PM_SUSPEND_MAX; i++) { 299 - if (pm_states[i] && valid_state(i)) 300 - s += sprintf(s,"%s ", pm_states[i]); 301 - } 298 + for (i = PM_SUSPEND_MIN; i < PM_SUSPEND_MAX; i++) 299 + if (pm_states[i].state) 300 + s += sprintf(s,"%s ", pm_states[i].label); 301 + 302 302 #endif 303 303 #ifdef CONFIG_HIBERNATION 304 304 s += sprintf(s, "%s\n", "disk"); ··· 314 314 { 315 315 #ifdef CONFIG_SUSPEND 316 316 suspend_state_t state = PM_SUSPEND_MIN; 317 - const char * const *s; 317 + struct pm_sleep_state *s; 318 318 #endif 319 319 char *p; 320 320 int len; ··· 328 328 329 329 #ifdef CONFIG_SUSPEND 330 330 for (s = &pm_states[state]; state < PM_SUSPEND_MAX; s++, state++) 331 - if (*s && len == strlen(*s) && !strncmp(buf, *s, len)) 332 - return state; 331 + if (s->state && len == strlen(s->label) 332 + && !strncmp(buf, s->label, len)) 333 + return s->state; 333 334 #endif 334 335 335 336 return 
PM_SUSPEND_ON; ··· 448 447 449 448 #ifdef CONFIG_SUSPEND 450 449 if (state < PM_SUSPEND_MAX) 451 - return sprintf(buf, "%s\n", valid_state(state) ? 452 - pm_states[state] : "error"); 450 + return sprintf(buf, "%s\n", pm_states[state].state ? 451 + pm_states[state].label : "error"); 453 452 #endif 454 453 #ifdef CONFIG_HIBERNATION 455 454 return sprintf(buf, "disk\n");
+7 -4
kernel/power/power.h
··· 178 178 unsigned int, char *); 179 179 180 180 #ifdef CONFIG_SUSPEND 181 - /* kernel/power/suspend.c */ 182 - extern const char *const pm_states[]; 181 + struct pm_sleep_state { 182 + const char *label; 183 + suspend_state_t state; 184 + }; 183 185 184 - extern bool valid_state(suspend_state_t state); 186 + /* kernel/power/suspend.c */ 187 + extern struct pm_sleep_state pm_states[]; 188 + 185 189 extern int suspend_devices_and_enter(suspend_state_t state); 186 190 #else /* !CONFIG_SUSPEND */ 187 191 static inline int suspend_devices_and_enter(suspend_state_t state) 188 192 { 189 193 return -ENOSYS; 190 194 } 191 - static inline bool valid_state(suspend_state_t state) { return false; } 192 195 #endif /* !CONFIG_SUSPEND */ 193 196 194 197 #ifdef CONFIG_PM_TEST_SUSPEND
+76 -33
kernel/power/suspend.c
··· 31 31 32 32 #include "power.h" 33 33 34 - const char *const pm_states[PM_SUSPEND_MAX] = { 35 - [PM_SUSPEND_FREEZE] = "freeze", 36 - [PM_SUSPEND_STANDBY] = "standby", 37 - [PM_SUSPEND_MEM] = "mem", 34 + struct pm_sleep_state pm_states[PM_SUSPEND_MAX] = { 35 + [PM_SUSPEND_FREEZE] = { .label = "freeze", .state = PM_SUSPEND_FREEZE }, 36 + [PM_SUSPEND_STANDBY] = { .label = "standby", }, 37 + [PM_SUSPEND_MEM] = { .label = "mem", }, 38 38 }; 39 39 40 40 static const struct platform_suspend_ops *suspend_ops; 41 + static const struct platform_freeze_ops *freeze_ops; 41 42 42 43 static bool need_suspend_ops(suspend_state_t state) 43 44 { ··· 47 46 48 47 static DECLARE_WAIT_QUEUE_HEAD(suspend_freeze_wait_head); 49 48 static bool suspend_freeze_wake; 49 + 50 + void freeze_set_ops(const struct platform_freeze_ops *ops) 51 + { 52 + lock_system_sleep(); 53 + freeze_ops = ops; 54 + unlock_system_sleep(); 55 + } 50 56 51 57 static void freeze_begin(void) 52 58 { ··· 76 68 } 77 69 EXPORT_SYMBOL_GPL(freeze_wake); 78 70 71 + static bool valid_state(suspend_state_t state) 72 + { 73 + /* 74 + * PM_SUSPEND_STANDBY and PM_SUSPEND_MEM states need low level 75 + * support and need to be valid to the low level 76 + * implementation, no valid callback implies that none are valid. 77 + */ 78 + return suspend_ops && suspend_ops->valid && suspend_ops->valid(state); 79 + } 80 + 81 + /* 82 + * If this is set, the "mem" label always corresponds to the deepest sleep state 83 + * available, the "standby" label corresponds to the second deepest sleep state 84 + * available (if any), and the "freeze" label corresponds to the remaining 85 + * available sleep state (if there is one). 
86 + */ 87 + static bool relative_states; 88 + 89 + static int __init sleep_states_setup(char *str) 90 + { 91 + relative_states = !strncmp(str, "1", 1); 92 + if (relative_states) { 93 + pm_states[PM_SUSPEND_MEM].state = PM_SUSPEND_FREEZE; 94 + pm_states[PM_SUSPEND_FREEZE].state = 0; 95 + } 96 + return 1; 97 + } 98 + 99 + __setup("relative_sleep_states=", sleep_states_setup); 100 + 79 101 /** 80 102 * suspend_set_ops - Set the global suspend method table. 81 103 * @ops: Suspend operations to use. 82 104 */ 83 105 void suspend_set_ops(const struct platform_suspend_ops *ops) 84 106 { 107 + suspend_state_t i; 108 + int j = PM_SUSPEND_MAX - 1; 109 + 85 110 lock_system_sleep(); 111 + 86 112 suspend_ops = ops; 113 + for (i = PM_SUSPEND_MEM; i >= PM_SUSPEND_STANDBY; i--) 114 + if (valid_state(i)) 115 + pm_states[j--].state = i; 116 + else if (!relative_states) 117 + pm_states[j--].state = 0; 118 + 119 + pm_states[j--].state = PM_SUSPEND_FREEZE; 120 + while (j >= PM_SUSPEND_MIN) 121 + pm_states[j--].state = 0; 122 + 87 123 unlock_system_sleep(); 88 124 } 89 125 EXPORT_SYMBOL_GPL(suspend_set_ops); 90 - 91 - bool valid_state(suspend_state_t state) 92 - { 93 - if (state == PM_SUSPEND_FREEZE) { 94 - #ifdef CONFIG_PM_DEBUG 95 - if (pm_test_level != TEST_NONE && 96 - pm_test_level != TEST_FREEZER && 97 - pm_test_level != TEST_DEVICES && 98 - pm_test_level != TEST_PLATFORM) { 99 - printk(KERN_WARNING "Unsupported pm_test mode for " 100 - "freeze state, please choose " 101 - "none/freezer/devices/platform.\n"); 102 - return false; 103 - } 104 - #endif 105 - return true; 106 - } 107 - /* 108 - * PM_SUSPEND_STANDBY and PM_SUSPEND_MEMORY states need lowlevel 109 - * support and need to be valid to the lowlevel 110 - * implementation, no valid callback implies that none are valid. 111 - */ 112 - return suspend_ops && suspend_ops->valid && suspend_ops->valid(state); 113 - } 114 126 115 127 /** 116 128 * suspend_valid_only_mem - Generic memory-only valid callback. 
··· 299 271 error = suspend_ops->begin(state); 300 272 if (error) 301 273 goto Close; 274 + } else if (state == PM_SUSPEND_FREEZE && freeze_ops->begin) { 275 + error = freeze_ops->begin(); 276 + if (error) 277 + goto Close; 302 278 } 303 279 suspend_console(); 304 280 suspend_test_start(); ··· 328 296 Close: 329 297 if (need_suspend_ops(state) && suspend_ops->end) 330 298 suspend_ops->end(); 299 + else if (state == PM_SUSPEND_FREEZE && freeze_ops->end) 300 + freeze_ops->end(); 301 + 331 302 trace_machine_suspend(PWR_EVENT_EXIT); 332 303 return error; 333 304 ··· 365 330 { 366 331 int error; 367 332 368 - if (!valid_state(state)) 369 - return -ENODEV; 370 - 333 + if (state == PM_SUSPEND_FREEZE) { 334 + #ifdef CONFIG_PM_DEBUG 335 + if (pm_test_level != TEST_NONE && pm_test_level <= TEST_CPUS) { 336 + pr_warning("PM: Unsupported test mode for freeze state," 337 + "please choose none/freezer/devices/platform.\n"); 338 + return -EAGAIN; 339 + } 340 + #endif 341 + } else if (!valid_state(state)) { 342 + return -EINVAL; 343 + } 371 344 if (!mutex_trylock(&pm_mutex)) 372 345 return -EBUSY; 373 346 ··· 386 343 sys_sync(); 387 344 printk("done.\n"); 388 345 389 - pr_debug("PM: Preparing system for %s sleep\n", pm_states[state]); 346 + pr_debug("PM: Preparing system for %s sleep\n", pm_states[state].label); 390 347 error = suspend_prepare(state); 391 348 if (error) 392 349 goto Unlock; ··· 394 351 if (suspend_test(TEST_FREEZER)) 395 352 goto Finish; 396 353 397 - pr_debug("PM: Entering %s sleep\n", pm_states[state]); 354 + pr_debug("PM: Entering %s sleep\n", pm_states[state].label); 398 355 pm_restrict_gfp_mask(); 399 356 error = suspend_devices_and_enter(state); 400 357 pm_restore_gfp_mask();
+11 -13
kernel/power/suspend_test.c
··· 92 92 } 93 93 94 94 if (state == PM_SUSPEND_MEM) { 95 - printk(info_test, pm_states[state]); 95 + printk(info_test, pm_states[state].label); 96 96 status = pm_suspend(state); 97 97 if (status == -ENODEV) 98 98 state = PM_SUSPEND_STANDBY; 99 99 } 100 100 if (state == PM_SUSPEND_STANDBY) { 101 - printk(info_test, pm_states[state]); 101 + printk(info_test, pm_states[state].label); 102 102 status = pm_suspend(state); 103 103 } 104 104 if (status < 0) ··· 136 136 137 137 static int __init setup_test_suspend(char *value) 138 138 { 139 - unsigned i; 139 + suspend_state_t i; 140 140 141 141 /* "=mem" ==> "mem" */ 142 142 value++; 143 - for (i = 0; i < PM_SUSPEND_MAX; i++) { 144 - if (!pm_states[i]) 145 - continue; 146 - if (strcmp(pm_states[i], value) != 0) 147 - continue; 148 - test_state = (__force suspend_state_t) i; 149 - return 0; 150 - } 143 + for (i = PM_SUSPEND_MIN; i < PM_SUSPEND_MAX; i++) 144 + if (!strcmp(pm_states[i].label, value)) { 145 + test_state = pm_states[i].state; 146 + return 0; 147 + } 148 + 151 149 printk(warn_bad_state, value); 152 150 return 0; 153 151 } ··· 162 164 /* PM is initialized by now; is that state testable? */ 163 165 if (test_state == PM_SUSPEND_ON) 164 166 goto done; 165 - if (!valid_state(test_state)) { 166 - printk(warn_bad_state, pm_states[test_state]); 167 + if (!pm_states[test_state].state) { 168 + printk(warn_bad_state, pm_states[test_state].label); 167 169 goto done; 168 170 } 169 171
+1 -1
kernel/power/swap.c
··· 567 567 568 568 /** 569 569 * save_image_lzo - Save the suspend image data compressed with LZO. 570 - * @handle: Swap mam handle to use for saving the image. 570 + * @handle: Swap map handle to use for saving the image. 571 571 * @snapshot: Image to read data from. 572 572 * @nr_to_write: Number of pages to save. 573 573 */
+25 -3
tools/power/acpi/Makefile
··· 19 19 $(if $(OUTDIR),, $(error output directory "$(OUTPUT)" does not exist)) 20 20 endif 21 21 22 + SUBDIRS = tools/ec 23 + 22 24 # --- CONFIGURATION BEGIN --- 23 25 24 26 # Set the following to `true' to make a unstripped, unoptimized ··· 70 68 WARNINGS += $(call cc-supports,-Wdeclaration-after-statement) 71 69 72 70 KERNEL_INCLUDE := ../../../include 73 - CFLAGS += -D_LINUX -DDEFINE_ALTERNATE_TYPES -I$(KERNEL_INCLUDE) 71 + ACPICA_INCLUDE := ../../../drivers/acpi/acpica 72 + CFLAGS += -D_LINUX -I$(KERNEL_INCLUDE) -I$(ACPICA_INCLUDE) 74 73 CFLAGS += $(WARNINGS) 75 74 76 75 ifeq ($(strip $(V)),false) ··· 95 92 # --- ACPIDUMP BEGIN --- 96 93 97 94 vpath %.c \ 98 - tools/acpidump 95 + ../../../drivers/acpi/acpica\ 96 + tools/acpidump\ 97 + common\ 98 + os_specific/service_layers 99 + 100 + CFLAGS += -DACPI_DUMP_APP -Itools/acpidump 99 101 100 102 DUMP_OBJS = \ 101 - acpidump.o 103 + apdump.o\ 104 + apfiles.o\ 105 + apmain.o\ 106 + osunixdir.o\ 107 + osunixmap.o\ 108 + tbprint.o\ 109 + tbxfroot.o\ 110 + utbuffer.o\ 111 + utexcep.o\ 112 + utmath.o\ 113 + utstring.o\ 114 + utxferror.o\ 115 + oslinuxtbl.o\ 116 + cmfsize.o\ 117 + getopt.o 102 118 103 119 DUMP_OBJS := $(addprefix $(OUTPUT)tools/acpidump/,$(DUMP_OBJS)) 104 120
+101
tools/power/acpi/common/cmfsize.c
··· 1 + /****************************************************************************** 2 + * 3 + * Module Name: cfsize - Common get file size function 4 + * 5 + *****************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2014, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + #include "acapps.h" 47 + #include <stdio.h> 48 + 49 + #define _COMPONENT ACPI_TOOLS 50 + ACPI_MODULE_NAME("cmfsize") 51 + 52 + /******************************************************************************* 53 + * 54 + * FUNCTION: cm_get_file_size 55 + * 56 + * PARAMETERS: file - Open file descriptor 57 + * 58 + * RETURN: File Size. On error, -1 (ACPI_UINT32_MAX) 59 + * 60 + * DESCRIPTION: Get the size of a file. Uses seek-to-EOF. File must be open. 61 + * Does not disturb the current file pointer. Uses perror for 62 + * error messages. 
63 + * 64 + ******************************************************************************/ 65 + u32 cm_get_file_size(FILE * file) 66 + { 67 + long file_size; 68 + long current_offset; 69 + 70 + /* Save the current file pointer, seek to EOF to obtain file size */ 71 + 72 + current_offset = ftell(file); 73 + if (current_offset < 0) { 74 + goto offset_error; 75 + } 76 + 77 + if (fseek(file, 0, SEEK_END)) { 78 + goto seek_error; 79 + } 80 + 81 + file_size = ftell(file); 82 + if (file_size < 0) { 83 + goto offset_error; 84 + } 85 + 86 + /* Restore original file pointer */ 87 + 88 + if (fseek(file, current_offset, SEEK_SET)) { 89 + goto seek_error; 90 + } 91 + 92 + return ((u32)file_size); 93 + 94 + offset_error: 95 + perror("Could not get file offset"); 96 + return (ACPI_UINT32_MAX); 97 + 98 + seek_error: 99 + perror("Could not seek file"); 100 + return (ACPI_UINT32_MAX); 101 + }
+239
tools/power/acpi/common/getopt.c
··· 1 + /****************************************************************************** 2 + * 3 + * Module Name: getopt 4 + * 5 + *****************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2014, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + /* 45 + * ACPICA getopt() implementation 46 + * 47 + * Option strings: 48 + * "f" - Option has no arguments 49 + * "f:" - Option requires an argument 50 + * "f^" - Option has optional single-char sub-options 51 + * "f|" - Option has required single-char sub-options 52 + */ 53 + 54 + #include <stdio.h> 55 + #include <string.h> 56 + #include <acpi/acpi.h> 57 + #include "accommon.h" 58 + #include "acapps.h" 59 + 60 + #define ACPI_OPTION_ERROR(msg, badchar) \ 61 + if (acpi_gbl_opterr) {fprintf (stderr, "%s%c\n", msg, badchar);} 62 + 63 + int acpi_gbl_opterr = 1; 64 + int acpi_gbl_optind = 1; 65 + int acpi_gbl_sub_opt_char = 0; 66 + char *acpi_gbl_optarg; 67 + 68 + static int current_char_ptr = 1; 69 + 70 + /******************************************************************************* 71 + * 72 + * FUNCTION: acpi_getopt_argument 73 + * 74 + * PARAMETERS: argc, argv - from main 75 + * 76 + * RETURN: 0 if an argument was found, -1 otherwise. Sets acpi_gbl_Optarg 77 + * to point to the next argument. 78 + * 79 + * DESCRIPTION: Get the next argument. Used to obtain arguments for the 80 + * two-character options after the original call to acpi_getopt. 81 + * Note: Either the argument starts at the next character after 82 + * the option, or it is pointed to by the next argv entry. 83 + * (After call to acpi_getopt, we need to backup to the previous 84 + * argv entry). 
85 + * 86 + ******************************************************************************/ 87 + 88 + int acpi_getopt_argument(int argc, char **argv) 89 + { 90 + acpi_gbl_optind--; 91 + current_char_ptr++; 92 + 93 + if (argv[acpi_gbl_optind][(int)(current_char_ptr + 1)] != '\0') { 94 + acpi_gbl_optarg = 95 + &argv[acpi_gbl_optind++][(int)(current_char_ptr + 1)]; 96 + } else if (++acpi_gbl_optind >= argc) { 97 + ACPI_OPTION_ERROR("Option requires an argument: -", 'v'); 98 + 99 + current_char_ptr = 1; 100 + return (-1); 101 + } else { 102 + acpi_gbl_optarg = argv[acpi_gbl_optind++]; 103 + } 104 + 105 + current_char_ptr = 1; 106 + return (0); 107 + } 108 + 109 + /******************************************************************************* 110 + * 111 + * FUNCTION: acpi_getopt 112 + * 113 + * PARAMETERS: argc, argv - from main 114 + * opts - options info list 115 + * 116 + * RETURN: Option character or EOF 117 + * 118 + * DESCRIPTION: Get the next option 119 + * 120 + ******************************************************************************/ 121 + 122 + int acpi_getopt(int argc, char **argv, char *opts) 123 + { 124 + int current_char; 125 + char *opts_ptr; 126 + 127 + if (current_char_ptr == 1) { 128 + if (acpi_gbl_optind >= argc || 129 + argv[acpi_gbl_optind][0] != '-' || 130 + argv[acpi_gbl_optind][1] == '\0') { 131 + return (EOF); 132 + } else if (strcmp(argv[acpi_gbl_optind], "--") == 0) { 133 + acpi_gbl_optind++; 134 + return (EOF); 135 + } 136 + } 137 + 138 + /* Get the option */ 139 + 140 + current_char = argv[acpi_gbl_optind][current_char_ptr]; 141 + 142 + /* Make sure that the option is legal */ 143 + 144 + if (current_char == ':' || 145 + (opts_ptr = strchr(opts, current_char)) == NULL) { 146 + ACPI_OPTION_ERROR("Illegal option: -", current_char); 147 + 148 + if (argv[acpi_gbl_optind][++current_char_ptr] == '\0') { 149 + acpi_gbl_optind++; 150 + current_char_ptr = 1; 151 + } 152 + 153 + return ('?'); 154 + } 155 + 156 + /* Option requires an argument? 
*/ 157 + 158 + if (*++opts_ptr == ':') { 159 + if (argv[acpi_gbl_optind][(int)(current_char_ptr + 1)] != '\0') { 160 + acpi_gbl_optarg = 161 + &argv[acpi_gbl_optind++][(int) 162 + (current_char_ptr + 1)]; 163 + } else if (++acpi_gbl_optind >= argc) { 164 + ACPI_OPTION_ERROR("Option requires an argument: -", 165 + current_char); 166 + 167 + current_char_ptr = 1; 168 + return ('?'); 169 + } else { 170 + acpi_gbl_optarg = argv[acpi_gbl_optind++]; 171 + } 172 + 173 + current_char_ptr = 1; 174 + } 175 + 176 + /* Option has an optional argument? */ 177 + 178 + else if (*opts_ptr == '+') { 179 + if (argv[acpi_gbl_optind][(int)(current_char_ptr + 1)] != '\0') { 180 + acpi_gbl_optarg = 181 + &argv[acpi_gbl_optind++][(int) 182 + (current_char_ptr + 1)]; 183 + } else if (++acpi_gbl_optind >= argc) { 184 + acpi_gbl_optarg = NULL; 185 + } else { 186 + acpi_gbl_optarg = argv[acpi_gbl_optind++]; 187 + } 188 + 189 + current_char_ptr = 1; 190 + } 191 + 192 + /* Option has optional single-char arguments? */ 193 + 194 + else if (*opts_ptr == '^') { 195 + if (argv[acpi_gbl_optind][(int)(current_char_ptr + 1)] != '\0') { 196 + acpi_gbl_optarg = 197 + &argv[acpi_gbl_optind][(int)(current_char_ptr + 1)]; 198 + } else { 199 + acpi_gbl_optarg = "^"; 200 + } 201 + 202 + acpi_gbl_sub_opt_char = acpi_gbl_optarg[0]; 203 + acpi_gbl_optind++; 204 + current_char_ptr = 1; 205 + } 206 + 207 + /* Option has a required single-char argument? 
*/ 208 + 209 + else if (*opts_ptr == '|') { 210 + if (argv[acpi_gbl_optind][(int)(current_char_ptr + 1)] != '\0') { 211 + acpi_gbl_optarg = 212 + &argv[acpi_gbl_optind][(int)(current_char_ptr + 1)]; 213 + } else { 214 + ACPI_OPTION_ERROR 215 + ("Option requires a single-character suboption: -", 216 + current_char); 217 + 218 + current_char_ptr = 1; 219 + return ('?'); 220 + } 221 + 222 + acpi_gbl_sub_opt_char = acpi_gbl_optarg[0]; 223 + acpi_gbl_optind++; 224 + current_char_ptr = 1; 225 + } 226 + 227 + /* Option with no arguments */ 228 + 229 + else { 230 + if (argv[acpi_gbl_optind][++current_char_ptr] == '\0') { 231 + current_char_ptr = 1; 232 + acpi_gbl_optind++; 233 + } 234 + 235 + acpi_gbl_optarg = NULL; 236 + } 237 + 238 + return (current_char); 239 + }
+73 -12
tools/power/acpi/man/acpidump.8
··· 1 1 .TH ACPIDUMP 8 2 2 .SH NAME 3 - acpidump \- Dump system's ACPI tables to an ASCII file. 3 + acpidump \- dump a system's ACPI tables to an ASCII file 4 + 4 5 .SH SYNOPSIS 5 - .ft B 6 - .B acpidump > acpidump.out 6 + .B acpidump 7 + .RI [ options ] 8 + .br 9 + 7 10 .SH DESCRIPTION 8 - \fBacpidump \fP dumps the systems ACPI tables to an ASCII file 9 - appropriate for attaching to a bug report. 11 + .B acpidump 12 + dumps the system's ACPI tables to an ASCII file appropriate for 13 + attaching to a bug report. 10 14 11 15 Subsequently, they can be processed by utilities in the ACPICA package. 12 - .SS Options 13 - no options worth worrying about. 14 - .PP 15 - .SH EXAMPLE 16 + 17 + .SH OPTIONS 18 + acpidump options are as follows: 19 + .TP 20 + .B Options 21 + .TP 22 + .B \-b 23 + Dump tables to binary files 24 + .TP 25 + .B \-c 26 + Dump customized tables 27 + .TP 28 + .B \-h \-? 29 + This help message 30 + .TP 31 + .B \-o <File> 32 + Redirect output to file 33 + .TP 34 + .B \-r <Address> 35 + Dump tables from specified RSDP 36 + .TP 37 + .B \-s 38 + Print table summaries only 39 + .TP 40 + .B \-v 41 + Display version information 42 + .TP 43 + .B \-z 44 + Verbose mode 45 + .TP 46 + .B Table Options 47 + .TP 48 + .B \-a <Address> 49 + Get table via a physical address 50 + .TP 51 + .B \-f <BinaryFile> 52 + Get table via a binary file 53 + .TP 54 + .B \-n <Signature> 55 + Get table via a name/signature 56 + .TP 57 + Invocation without parameters dumps all available tables 58 + .TP 59 + Multiple mixed instances of -a, -f, and -n are supported 60 + 61 + .SH EXAMPLES 16 62 17 63 .nf 18 64 # acpidump > acpidump.out ··· 96 50 .ta 97 51 .nf 98 52 /dev/mem 53 + /sys/firmware/acpi/tables/* 99 54 /sys/firmware/acpi/tables/dynamic/* 55 + /sys/firmware/efi/systab 100 56 .fi 101 57 102 - .PP 103 58 .SH AUTHOR 104 - .nf 105 - Written by Len Brown <len.brown@intel.com> 59 + .TP 60 + Original by: 61 + Len Brown <len.brown@intel.com> 62 + .TP 63 + Written by: 64 + Chao Guan 
<chao.guan@intel.com> 65 + .TP 66 + Updated by: 67 + Bob Moore <robert.moore@intel.com> 68 + Lv Zheng <lv.zheng@intel.com> 69 + 70 + .SH SEE ALSO 71 + \&\fIacpixtract\fR\|(8), \fIiasl\fR\|(8). 72 + 73 + .SH COPYRIGHT 74 + COPYRIGHT (c) 2013, Intel Corporation.
+1329
tools/power/acpi/os_specific/service_layers/oslinuxtbl.c
··· 1 + /****************************************************************************** 2 + * 3 + * Module Name: oslinuxtbl - Linux OSL for obtaining ACPI tables 4 + * 5 + *****************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2014, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include "acpidump.h" 45 + 46 + #define _COMPONENT ACPI_OS_SERVICES 47 + ACPI_MODULE_NAME("oslinuxtbl") 48 + 49 + #ifndef PATH_MAX 50 + #define PATH_MAX 256 51 + #endif 52 + /* List of information about obtained ACPI tables */ 53 + typedef struct osl_table_info { 54 + struct osl_table_info *next; 55 + u32 instance; 56 + char signature[ACPI_NAME_SIZE]; 57 + 58 + } osl_table_info; 59 + 60 + /* Local prototypes */ 61 + 62 + static acpi_status osl_table_initialize(void); 63 + 64 + static acpi_status 65 + osl_table_name_from_file(char *filename, char *signature, u32 *instance); 66 + 67 + static acpi_status osl_add_table_to_list(char *signature, u32 instance); 68 + 69 + static acpi_status 70 + osl_read_table_from_file(char *filename, 71 + acpi_size file_offset, 72 + char *signature, struct acpi_table_header **table); 73 + 74 + static acpi_status 75 + osl_map_table(acpi_size address, 76 + char *signature, struct acpi_table_header **table); 77 + 78 + static void osl_unmap_table(struct acpi_table_header *table); 79 + 80 + static acpi_physical_address osl_find_rsdp_via_efi(void); 81 + 82 + static acpi_status osl_load_rsdp(void); 83 + 84 + static acpi_status osl_list_customized_tables(char *directory); 85 + 86 + static acpi_status 87 + osl_get_customized_table(char *pathname, 88 + char *signature, 89 + u32 instance, 90 + struct acpi_table_header **table, 91 + acpi_physical_address * address); 92 + 93 + static acpi_status 
osl_list_bios_tables(void); 94 + 95 + static acpi_status 96 + osl_get_bios_table(char *signature, 97 + u32 instance, 98 + struct acpi_table_header **table, 99 + acpi_physical_address * address); 100 + 101 + static acpi_status osl_get_last_status(acpi_status default_status); 102 + 103 + /* File locations */ 104 + 105 + #define DYNAMIC_TABLE_DIR "/sys/firmware/acpi/tables/dynamic" 106 + #define STATIC_TABLE_DIR "/sys/firmware/acpi/tables" 107 + #define EFI_SYSTAB "/sys/firmware/efi/systab" 108 + 109 + /* Should we get dynamically loaded SSDTs from DYNAMIC_TABLE_DIR? */ 110 + 111 + u8 gbl_dump_dynamic_tables = TRUE; 112 + 113 + /* Initialization flags */ 114 + 115 + u8 gbl_table_list_initialized = FALSE; 116 + 117 + /* Local copies of main ACPI tables */ 118 + 119 + struct acpi_table_rsdp gbl_rsdp; 120 + struct acpi_table_fadt *gbl_fadt = NULL; 121 + struct acpi_table_rsdt *gbl_rsdt = NULL; 122 + struct acpi_table_xsdt *gbl_xsdt = NULL; 123 + 124 + /* Table addresses */ 125 + 126 + acpi_physical_address gbl_fadt_address = 0; 127 + acpi_physical_address gbl_rsdp_address = 0; 128 + 129 + /* Revision of RSD PTR */ 130 + 131 + u8 gbl_revision = 0; 132 + 133 + struct osl_table_info *gbl_table_list_head = NULL; 134 + u32 gbl_table_count = 0; 135 + 136 + /****************************************************************************** 137 + * 138 + * FUNCTION: osl_get_last_status 139 + * 140 + * PARAMETERS: default_status - Default error status to return 141 + * 142 + * RETURN: Status; Converted from errno. 143 + * 144 + * DESCRIPTION: Get last errno and convert it to acpi_status. 
145 + * 146 + *****************************************************************************/ 147 + 148 + static acpi_status osl_get_last_status(acpi_status default_status) 149 + { 150 + 151 + switch (errno) { 152 + case EACCES: 153 + case EPERM: 154 + 155 + return (AE_ACCESS); 156 + 157 + case ENOENT: 158 + 159 + return (AE_NOT_FOUND); 160 + 161 + case ENOMEM: 162 + 163 + return (AE_NO_MEMORY); 164 + 165 + default: 166 + 167 + return (default_status); 168 + } 169 + } 170 + 171 + /****************************************************************************** 172 + * 173 + * FUNCTION: acpi_os_get_table_by_address 174 + * 175 + * PARAMETERS: address - Physical address of the ACPI table 176 + * table - Where a pointer to the table is returned 177 + * 178 + * RETURN: Status; Table buffer is returned if AE_OK. 179 + * AE_NOT_FOUND: A valid table was not found at the address 180 + * 181 + * DESCRIPTION: Get an ACPI table via a physical memory address. 182 + * 183 + *****************************************************************************/ 184 + 185 + acpi_status 186 + acpi_os_get_table_by_address(acpi_physical_address address, 187 + struct acpi_table_header ** table) 188 + { 189 + u32 table_length; 190 + struct acpi_table_header *mapped_table; 191 + struct acpi_table_header *local_table = NULL; 192 + acpi_status status = AE_OK; 193 + 194 + /* Get main ACPI tables from memory on first invocation of this function */ 195 + 196 + status = osl_table_initialize(); 197 + if (ACPI_FAILURE(status)) { 198 + return (status); 199 + } 200 + 201 + /* Map the table and validate it */ 202 + 203 + status = osl_map_table(address, NULL, &mapped_table); 204 + if (ACPI_FAILURE(status)) { 205 + return (status); 206 + } 207 + 208 + /* Copy table to local buffer and return it */ 209 + 210 + table_length = ap_get_table_length(mapped_table); 211 + if (table_length == 0) { 212 + status = AE_BAD_HEADER; 213 + goto exit; 214 + } 215 + 216 + local_table = calloc(1, table_length); 217 + if 
(!local_table) { 218 + status = AE_NO_MEMORY; 219 + goto exit; 220 + } 221 + 222 + ACPI_MEMCPY(local_table, mapped_table, table_length); 223 + 224 + exit: 225 + osl_unmap_table(mapped_table); 226 + *table = local_table; 227 + return (status); 228 + } 229 + 230 + /****************************************************************************** 231 + * 232 + * FUNCTION: acpi_os_get_table_by_name 233 + * 234 + * PARAMETERS: signature - ACPI Signature for desired table. Must be 235 + * a null terminated 4-character string. 236 + * instance - Multiple table support for SSDT/UEFI (0...n) 237 + * Must be 0 for other tables. 238 + * table - Where a pointer to the table is returned 239 + * address - Where the table physical address is returned 240 + * 241 + * RETURN: Status; Table buffer and physical address returned if AE_OK. 242 + * AE_LIMIT: Instance is beyond valid limit 243 + * AE_NOT_FOUND: A table with the signature was not found 244 + * 245 + * NOTE: Assumes the input signature is uppercase. 
246 + * 247 + *****************************************************************************/ 248 + 249 + acpi_status 250 + acpi_os_get_table_by_name(char *signature, 251 + u32 instance, 252 + struct acpi_table_header ** table, 253 + acpi_physical_address * address) 254 + { 255 + acpi_status status; 256 + 257 + /* Get main ACPI tables from memory on first invocation of this function */ 258 + 259 + status = osl_table_initialize(); 260 + if (ACPI_FAILURE(status)) { 261 + return (status); 262 + } 263 + 264 + /* Not a main ACPI table, attempt to extract it from the RSDT/XSDT */ 265 + 266 + if (!gbl_dump_customized_tables) { 267 + 268 + /* Attempt to get the table from the memory */ 269 + 270 + status = 271 + osl_get_bios_table(signature, instance, table, address); 272 + } else { 273 + /* Attempt to get the table from the static directory */ 274 + 275 + status = osl_get_customized_table(STATIC_TABLE_DIR, signature, 276 + instance, table, address); 277 + } 278 + 279 + if (ACPI_FAILURE(status) && status == AE_LIMIT) { 280 + if (gbl_dump_dynamic_tables) { 281 + 282 + /* Attempt to get a dynamic table */ 283 + 284 + status = 285 + osl_get_customized_table(DYNAMIC_TABLE_DIR, 286 + signature, instance, table, 287 + address); 288 + } 289 + } 290 + 291 + return (status); 292 + } 293 + 294 + /****************************************************************************** 295 + * 296 + * FUNCTION: osl_add_table_to_list 297 + * 298 + * PARAMETERS: signature - Table signature 299 + * instance - Table instance 300 + * 301 + * RETURN: Status; Successfully added if AE_OK. 302 + * AE_NO_MEMORY: Memory allocation error 303 + * 304 + * DESCRIPTION: Insert a table structure into OSL table list. 
305 + * 306 + *****************************************************************************/ 307 + 308 + static acpi_status osl_add_table_to_list(char *signature, u32 instance) 309 + { 310 + struct osl_table_info *new_info; 311 + struct osl_table_info *next; 312 + u32 next_instance = 0; 313 + u8 found = FALSE; 314 + 315 + new_info = calloc(1, sizeof(struct osl_table_info)); 316 + if (!new_info) { 317 + return (AE_NO_MEMORY); 318 + } 319 + 320 + ACPI_MOVE_NAME(new_info->signature, signature); 321 + 322 + if (!gbl_table_list_head) { 323 + gbl_table_list_head = new_info; 324 + } else { 325 + next = gbl_table_list_head; 326 + while (1) { 327 + if (ACPI_COMPARE_NAME(next->signature, signature)) { 328 + if (next->instance == instance) { 329 + found = TRUE; 330 + } 331 + if (next->instance >= next_instance) { 332 + next_instance = next->instance + 1; 333 + } 334 + } 335 + 336 + if (!next->next) { 337 + break; 338 + } 339 + next = next->next; 340 + } 341 + next->next = new_info; 342 + } 343 + 344 + if (found) { 345 + if (instance) { 346 + fprintf(stderr, 347 + "%4.4s: Warning unmatched table instance %d, expected %d\n", 348 + signature, instance, next_instance); 349 + } 350 + instance = next_instance; 351 + } 352 + 353 + new_info->instance = instance; 354 + gbl_table_count++; 355 + 356 + return (AE_OK); 357 + } 358 + 359 + /****************************************************************************** 360 + * 361 + * FUNCTION: acpi_os_get_table_by_index 362 + * 363 + * PARAMETERS: index - Which table to get 364 + * table - Where a pointer to the table is returned 365 + * instance - Where a pointer to the table instance no. is 366 + * returned 367 + * address - Where the table physical address is returned 368 + * 369 + * RETURN: Status; Table buffer and physical address returned if AE_OK. 370 + * AE_LIMIT: Index is beyond valid limit 371 + * 372 + * DESCRIPTION: Get an ACPI table via an index value (0 through n). Returns 373 + * AE_LIMIT when an invalid index is reached. 
Index is not 374 + * necessarily an index into the RSDT/XSDT. 375 + * 376 + *****************************************************************************/ 377 + 378 + acpi_status 379 + acpi_os_get_table_by_index(u32 index, 380 + struct acpi_table_header ** table, 381 + u32 *instance, acpi_physical_address * address) 382 + { 383 + struct osl_table_info *info; 384 + acpi_status status; 385 + u32 i; 386 + 387 + /* Get main ACPI tables from memory on first invocation of this function */ 388 + 389 + status = osl_table_initialize(); 390 + if (ACPI_FAILURE(status)) { 391 + return (status); 392 + } 393 + 394 + /* Validate Index */ 395 + 396 + if (index >= gbl_table_count) { 397 + return (AE_LIMIT); 398 + } 399 + 400 + /* Point to the table list entry specified by the Index argument */ 401 + 402 + info = gbl_table_list_head; 403 + for (i = 0; i < index; i++) { 404 + info = info->next; 405 + } 406 + 407 + /* Now we can just get the table via the signature */ 408 + 409 + status = acpi_os_get_table_by_name(info->signature, info->instance, 410 + table, address); 411 + 412 + if (ACPI_SUCCESS(status)) { 413 + *instance = info->instance; 414 + } 415 + return (status); 416 + } 417 + 418 + /****************************************************************************** 419 + * 420 + * FUNCTION: osl_find_rsdp_via_efi 421 + * 422 + * PARAMETERS: None 423 + * 424 + * RETURN: RSDP address if found 425 + * 426 + * DESCRIPTION: Find RSDP address via EFI. 
427 + * 428 + *****************************************************************************/ 429 + 430 + static acpi_physical_address osl_find_rsdp_via_efi(void) 431 + { 432 + FILE *file; 433 + char buffer[80]; 434 + unsigned long address = 0; 435 + 436 + file = fopen(EFI_SYSTAB, "r"); 437 + if (file) { 438 + while (fgets(buffer, 80, file)) { 439 + if (sscanf(buffer, "ACPI20=0x%lx", &address) == 1) { 440 + break; 441 + } 442 + } 443 + fclose(file); 444 + } 445 + 446 + return ((acpi_physical_address) (address)); 447 + } 448 + 449 + /****************************************************************************** 450 + * 451 + * FUNCTION: osl_load_rsdp 452 + * 453 + * PARAMETERS: None 454 + * 455 + * RETURN: Status 456 + * 457 + * DESCRIPTION: Scan and load RSDP. 458 + * 459 + *****************************************************************************/ 460 + 461 + static acpi_status osl_load_rsdp(void) 462 + { 463 + struct acpi_table_header *mapped_table; 464 + u8 *rsdp_address; 465 + acpi_physical_address rsdp_base; 466 + acpi_size rsdp_size; 467 + 468 + /* Get RSDP from memory */ 469 + 470 + rsdp_size = sizeof(struct acpi_table_rsdp); 471 + if (gbl_rsdp_base) { 472 + rsdp_base = gbl_rsdp_base; 473 + } else { 474 + rsdp_base = osl_find_rsdp_via_efi(); 475 + } 476 + 477 + if (!rsdp_base) { 478 + rsdp_base = ACPI_HI_RSDP_WINDOW_BASE; 479 + rsdp_size = ACPI_HI_RSDP_WINDOW_SIZE; 480 + } 481 + 482 + rsdp_address = acpi_os_map_memory(rsdp_base, rsdp_size); 483 + if (!rsdp_address) { 484 + return (osl_get_last_status(AE_BAD_ADDRESS)); 485 + } 486 + 487 + /* Search low memory for the RSDP */ 488 + 489 + mapped_table = ACPI_CAST_PTR(struct acpi_table_header, 490 + acpi_tb_scan_memory_for_rsdp(rsdp_address, 491 + rsdp_size)); 492 + if (!mapped_table) { 493 + acpi_os_unmap_memory(rsdp_address, rsdp_size); 494 + return (AE_NOT_FOUND); 495 + } 496 + 497 + gbl_rsdp_address = 498 + rsdp_base + (ACPI_CAST8(mapped_table) - rsdp_address); 499 + 500 + ACPI_MEMCPY(&gbl_rsdp, 
mapped_table, sizeof(struct acpi_table_rsdp)); 501 + acpi_os_unmap_memory(rsdp_address, rsdp_size); 502 + 503 + return (AE_OK); 504 + } 505 + 506 + /****************************************************************************** 507 + * 508 + * FUNCTION: osl_can_use_xsdt 509 + * 510 + * PARAMETERS: None 511 + * 512 + * RETURN: TRUE if XSDT is allowed to be used. 513 + * 514 + * DESCRIPTION: This function collects logic that can be used to determine if 515 + * XSDT should be used instead of RSDT. 516 + * 517 + *****************************************************************************/ 518 + 519 + static u8 osl_can_use_xsdt(void) 520 + { 521 + if (gbl_revision && !acpi_gbl_do_not_use_xsdt) { 522 + return (TRUE); 523 + } else { 524 + return (FALSE); 525 + } 526 + } 527 + 528 + /****************************************************************************** 529 + * 530 + * FUNCTION: osl_table_initialize 531 + * 532 + * PARAMETERS: None 533 + * 534 + * RETURN: Status 535 + * 536 + * DESCRIPTION: Initialize ACPI table data. Get and store main ACPI tables to 537 + * local variables. Main ACPI tables include RSDP, FADT, RSDT, 538 + * and/or XSDT. 
539 + * 540 + *****************************************************************************/ 541 + 542 + static acpi_status osl_table_initialize(void) 543 + { 544 + acpi_status status; 545 + acpi_physical_address address; 546 + 547 + if (gbl_table_list_initialized) { 548 + return (AE_OK); 549 + } 550 + 551 + /* Get RSDP from memory */ 552 + 553 + status = osl_load_rsdp(); 554 + if (ACPI_FAILURE(status)) { 555 + return (status); 556 + } 557 + 558 + /* Get XSDT from memory */ 559 + 560 + if (gbl_rsdp.revision && !gbl_do_not_dump_xsdt) { 561 + if (gbl_xsdt) { 562 + free(gbl_xsdt); 563 + gbl_xsdt = NULL; 564 + } 565 + 566 + gbl_revision = 2; 567 + status = osl_get_bios_table(ACPI_SIG_XSDT, 0, 568 + ACPI_CAST_PTR(struct 569 + acpi_table_header *, 570 + &gbl_xsdt), &address); 571 + if (ACPI_FAILURE(status)) { 572 + return (status); 573 + } 574 + } 575 + 576 + /* Get RSDT from memory */ 577 + 578 + if (gbl_rsdp.rsdt_physical_address) { 579 + if (gbl_rsdt) { 580 + free(gbl_rsdt); 581 + gbl_rsdt = NULL; 582 + } 583 + 584 + status = osl_get_bios_table(ACPI_SIG_RSDT, 0, 585 + ACPI_CAST_PTR(struct 586 + acpi_table_header *, 587 + &gbl_rsdt), &address); 588 + if (ACPI_FAILURE(status)) { 589 + return (status); 590 + } 591 + } 592 + 593 + /* Get FADT from memory */ 594 + 595 + if (gbl_fadt) { 596 + free(gbl_fadt); 597 + gbl_fadt = NULL; 598 + } 599 + 600 + status = osl_get_bios_table(ACPI_SIG_FADT, 0, 601 + ACPI_CAST_PTR(struct acpi_table_header *, 602 + &gbl_fadt), 603 + &gbl_fadt_address); 604 + if (ACPI_FAILURE(status)) { 605 + return (status); 606 + } 607 + 608 + if (!gbl_dump_customized_tables) { 609 + 610 + /* Add mandatory tables to global table list first */ 611 + 612 + status = osl_add_table_to_list(ACPI_RSDP_NAME, 0); 613 + if (ACPI_FAILURE(status)) { 614 + return (status); 615 + } 616 + 617 + status = osl_add_table_to_list(ACPI_SIG_RSDT, 0); 618 + if (ACPI_FAILURE(status)) { 619 + return (status); 620 + } 621 + 622 + if (gbl_revision == 2) { 623 + status = 
osl_add_table_to_list(ACPI_SIG_XSDT, 0); 624 + if (ACPI_FAILURE(status)) { 625 + return (status); 626 + } 627 + } 628 + 629 + status = osl_add_table_to_list(ACPI_SIG_DSDT, 0); 630 + if (ACPI_FAILURE(status)) { 631 + return (status); 632 + } 633 + 634 + status = osl_add_table_to_list(ACPI_SIG_FACS, 0); 635 + if (ACPI_FAILURE(status)) { 636 + return (status); 637 + } 638 + 639 + /* Add all tables found in the memory */ 640 + 641 + status = osl_list_bios_tables(); 642 + if (ACPI_FAILURE(status)) { 643 + return (status); 644 + } 645 + } else { 646 + /* Add all tables found in the static directory */ 647 + 648 + status = osl_list_customized_tables(STATIC_TABLE_DIR); 649 + if (ACPI_FAILURE(status)) { 650 + return (status); 651 + } 652 + } 653 + 654 + if (gbl_dump_dynamic_tables) { 655 + 656 + /* Add all dynamically loaded tables in the dynamic directory */ 657 + 658 + status = osl_list_customized_tables(DYNAMIC_TABLE_DIR); 659 + if (ACPI_FAILURE(status)) { 660 + return (status); 661 + } 662 + } 663 + 664 + gbl_table_list_initialized = TRUE; 665 + return (AE_OK); 666 + } 667 + 668 + /****************************************************************************** 669 + * 670 + * FUNCTION: osl_list_bios_tables 671 + * 672 + * PARAMETERS: None 673 + * 674 + * RETURN: Status; Table list is initialized if AE_OK. 675 + * 676 + * DESCRIPTION: Add ACPI tables to the table list from memory. 677 + * 678 + * NOTE: This works on Linux as table customization does not modify the 679 + * addresses stored in RSDP/RSDT/XSDT/FADT. 
680 + * 681 + *****************************************************************************/ 682 + 683 + static acpi_status osl_list_bios_tables(void) 684 + { 685 + struct acpi_table_header *mapped_table = NULL; 686 + u8 *table_data; 687 + u8 number_of_tables; 688 + u8 item_size; 689 + acpi_physical_address table_address = 0; 690 + acpi_status status = AE_OK; 691 + u32 i; 692 + 693 + if (osl_can_use_xsdt()) { 694 + item_size = sizeof(u64); 695 + table_data = 696 + ACPI_CAST8(gbl_xsdt) + sizeof(struct acpi_table_header); 697 + number_of_tables = 698 + (u8)((gbl_xsdt->header.length - 699 + sizeof(struct acpi_table_header)) 700 + / item_size); 701 + } else { /* Use RSDT if XSDT is not available */ 702 + 703 + item_size = sizeof(u32); 704 + table_data = 705 + ACPI_CAST8(gbl_rsdt) + sizeof(struct acpi_table_header); 706 + number_of_tables = 707 + (u8)((gbl_rsdt->header.length - 708 + sizeof(struct acpi_table_header)) 709 + / item_size); 710 + } 711 + 712 + /* Search RSDT/XSDT for the requested table */ 713 + 714 + for (i = 0; i < number_of_tables; ++i, table_data += item_size) { 715 + if (osl_can_use_xsdt()) { 716 + table_address = 717 + (acpi_physical_address) (*ACPI_CAST64(table_data)); 718 + } else { 719 + table_address = 720 + (acpi_physical_address) (*ACPI_CAST32(table_data)); 721 + } 722 + 723 + /* Skip NULL entries in RSDT/XSDT */ 724 + 725 + if (!table_address) { 726 + continue; 727 + } 728 + 729 + status = osl_map_table(table_address, NULL, &mapped_table); 730 + if (ACPI_FAILURE(status)) { 731 + return (status); 732 + } 733 + 734 + osl_add_table_to_list(mapped_table->signature, 0); 735 + osl_unmap_table(mapped_table); 736 + } 737 + 738 + return (AE_OK); 739 + } 740 + 741 + /****************************************************************************** 742 + * 743 + * FUNCTION: osl_get_bios_table 744 + * 745 + * PARAMETERS: signature - ACPI Signature for common table. Must be 746 + * a null terminated 4-character string. 
747 + * instance - Multiple table support for SSDT/UEFI (0...n) 748 + * Must be 0 for other tables. 749 + * table - Where a pointer to the table is returned 750 + * address - Where the table physical address is returned 751 + * 752 + * RETURN: Status; Table buffer and physical address returned if AE_OK. 753 + * AE_LIMIT: Instance is beyond valid limit 754 + * AE_NOT_FOUND: A table with the signature was not found 755 + * 756 + * DESCRIPTION: Get a BIOS provided ACPI table 757 + * 758 + * NOTE: Assumes the input signature is uppercase. 759 + * 760 + *****************************************************************************/ 761 + 762 + static acpi_status 763 + osl_get_bios_table(char *signature, 764 + u32 instance, 765 + struct acpi_table_header **table, 766 + acpi_physical_address * address) 767 + { 768 + struct acpi_table_header *local_table = NULL; 769 + struct acpi_table_header *mapped_table = NULL; 770 + u8 *table_data; 771 + u8 number_of_tables; 772 + u8 item_size; 773 + u32 current_instance = 0; 774 + acpi_physical_address table_address = 0; 775 + u32 table_length = 0; 776 + acpi_status status = AE_OK; 777 + u32 i; 778 + 779 + /* Handle special tables whose addresses are not in RSDT/XSDT */ 780 + 781 + if (ACPI_COMPARE_NAME(signature, ACPI_RSDP_NAME) || 782 + ACPI_COMPARE_NAME(signature, ACPI_SIG_RSDT) || 783 + ACPI_COMPARE_NAME(signature, ACPI_SIG_XSDT) || 784 + ACPI_COMPARE_NAME(signature, ACPI_SIG_DSDT) || 785 + ACPI_COMPARE_NAME(signature, ACPI_SIG_FACS)) { 786 + if (instance > 0) { 787 + return (AE_LIMIT); 788 + } 789 + 790 + /* 791 + * Get the appropriate address, either 32-bit or 64-bit. Be very 792 + * careful about the FADT length and validate table addresses. 793 + * Note: The 64-bit addresses have priority. 
794 + */ 795 + if (ACPI_COMPARE_NAME(signature, ACPI_SIG_DSDT)) { 796 + if ((gbl_fadt->header.length >= MIN_FADT_FOR_XDSDT) && 797 + gbl_fadt->Xdsdt) { 798 + table_address = 799 + (acpi_physical_address) gbl_fadt->Xdsdt; 800 + } else 801 + if ((gbl_fadt->header.length >= MIN_FADT_FOR_DSDT) 802 + && gbl_fadt->dsdt) { 803 + table_address = 804 + (acpi_physical_address) gbl_fadt->dsdt; 805 + } 806 + } else if (ACPI_COMPARE_NAME(signature, ACPI_SIG_FACS)) { 807 + if ((gbl_fadt->header.length >= MIN_FADT_FOR_XFACS) && 808 + gbl_fadt->Xfacs) { 809 + table_address = 810 + (acpi_physical_address) gbl_fadt->Xfacs; 811 + } else 812 + if ((gbl_fadt->header.length >= MIN_FADT_FOR_FACS) 813 + && gbl_fadt->facs) { 814 + table_address = 815 + (acpi_physical_address) gbl_fadt->facs; 816 + } 817 + } else if (ACPI_COMPARE_NAME(signature, ACPI_SIG_XSDT)) { 818 + if (!gbl_revision) { 819 + return (AE_BAD_SIGNATURE); 820 + } 821 + table_address = 822 + (acpi_physical_address) gbl_rsdp. 823 + xsdt_physical_address; 824 + } else if (ACPI_COMPARE_NAME(signature, ACPI_SIG_RSDT)) { 825 + table_address = 826 + (acpi_physical_address) gbl_rsdp. 
827 + rsdt_physical_address; 828 + } else { 829 + table_address = 830 + (acpi_physical_address) gbl_rsdp_address; 831 + signature = ACPI_SIG_RSDP; 832 + } 833 + 834 + /* Now we can get the requested special table */ 835 + 836 + status = osl_map_table(table_address, signature, &mapped_table); 837 + if (ACPI_FAILURE(status)) { 838 + return (status); 839 + } 840 + 841 + table_length = ap_get_table_length(mapped_table); 842 + } else { /* Case for a normal ACPI table */ 843 + 844 + if (osl_can_use_xsdt()) { 845 + item_size = sizeof(u64); 846 + table_data = 847 + ACPI_CAST8(gbl_xsdt) + 848 + sizeof(struct acpi_table_header); 849 + number_of_tables = 850 + (u8)((gbl_xsdt->header.length - 851 + sizeof(struct acpi_table_header)) 852 + / item_size); 853 + } else { /* Use RSDT if XSDT is not available */ 854 + 855 + item_size = sizeof(u32); 856 + table_data = 857 + ACPI_CAST8(gbl_rsdt) + 858 + sizeof(struct acpi_table_header); 859 + number_of_tables = 860 + (u8)((gbl_rsdt->header.length - 861 + sizeof(struct acpi_table_header)) 862 + / item_size); 863 + } 864 + 865 + /* Search RSDT/XSDT for the requested table */ 866 + 867 + for (i = 0; i < number_of_tables; ++i, table_data += item_size) { 868 + if (osl_can_use_xsdt()) { 869 + table_address = 870 + (acpi_physical_address) (*ACPI_CAST64 871 + (table_data)); 872 + } else { 873 + table_address = 874 + (acpi_physical_address) (*ACPI_CAST32 875 + (table_data)); 876 + } 877 + 878 + /* Skip NULL entries in RSDT/XSDT */ 879 + 880 + if (!table_address) { 881 + continue; 882 + } 883 + 884 + status = 885 + osl_map_table(table_address, NULL, &mapped_table); 886 + if (ACPI_FAILURE(status)) { 887 + return (status); 888 + } 889 + table_length = mapped_table->length; 890 + 891 + /* Does this table match the requested signature? 
*/ 892 + 893 + if (!ACPI_COMPARE_NAME 894 + (mapped_table->signature, signature)) { 895 + osl_unmap_table(mapped_table); 896 + mapped_table = NULL; 897 + continue; 898 + } 899 + 900 + /* Match table instance (for SSDT/UEFI tables) */ 901 + 902 + if (current_instance != instance) { 903 + osl_unmap_table(mapped_table); 904 + mapped_table = NULL; 905 + current_instance++; 906 + continue; 907 + } 908 + 909 + break; 910 + } 911 + } 912 + 913 + if (!mapped_table) { 914 + return (AE_LIMIT); 915 + } 916 + 917 + if (table_length == 0) { 918 + status = AE_BAD_HEADER; 919 + goto exit; 920 + } 921 + 922 + /* Copy table to local buffer and return it */ 923 + 924 + local_table = calloc(1, table_length); 925 + if (!local_table) { 926 + status = AE_NO_MEMORY; 927 + goto exit; 928 + } 929 + 930 + ACPI_MEMCPY(local_table, mapped_table, table_length); 931 + *address = table_address; 932 + *table = local_table; 933 + 934 + exit: 935 + osl_unmap_table(mapped_table); 936 + return (status); 937 + } 938 + 939 + /****************************************************************************** 940 + * 941 + * FUNCTION: osl_list_customized_tables 942 + * 943 + * PARAMETERS: directory - Directory that contains the tables 944 + * 945 + * RETURN: Status; Table list is initialized if AE_OK. 946 + * 947 + * DESCRIPTION: Add ACPI tables to the table list from a directory. 
948 + * 949 + *****************************************************************************/ 950 + 951 + static acpi_status osl_list_customized_tables(char *directory) 952 + { 953 + void *table_dir; 954 + u32 instance; 955 + char temp_name[ACPI_NAME_SIZE]; 956 + char *filename; 957 + acpi_status status = AE_OK; 958 + 959 + /* Open the requested directory */ 960 + 961 + table_dir = acpi_os_open_directory(directory, "*", REQUEST_FILE_ONLY); 962 + if (!table_dir) { 963 + return (osl_get_last_status(AE_NOT_FOUND)); 964 + } 965 + 966 + /* Examine all entries in this directory */ 967 + 968 + while ((filename = acpi_os_get_next_filename(table_dir))) { 969 + 970 + /* Extract table name and instance number */ 971 + 972 + status = 973 + osl_table_name_from_file(filename, temp_name, &instance); 974 + 975 + /* Ignore meaningless files */ 976 + 977 + if (ACPI_FAILURE(status)) { 978 + continue; 979 + } 980 + 981 + /* Add new info node to global table list */ 982 + 983 + status = osl_add_table_to_list(temp_name, instance); 984 + if (ACPI_FAILURE(status)) { 985 + break; 986 + } 987 + } 988 + 989 + acpi_os_close_directory(table_dir); 990 + return (status); 991 + } 992 + 993 + /****************************************************************************** 994 + * 995 + * FUNCTION: osl_map_table 996 + * 997 + * PARAMETERS: address - Address of the table in memory 998 + * signature - Optional ACPI Signature for desired table. 999 + * Null terminated 4-character string. 1000 + * table - Where a pointer to the mapped table is 1001 + * returned 1002 + * 1003 + * RETURN: Status; Mapped table is returned if AE_OK. 1004 + * AE_NOT_FOUND: A valid table was not found at the address 1005 + * 1006 + * DESCRIPTION: Map entire ACPI table into caller's address space. 
1007 + * 1008 + *****************************************************************************/ 1009 + 1010 + static acpi_status 1011 + osl_map_table(acpi_size address, 1012 + char *signature, struct acpi_table_header **table) 1013 + { 1014 + struct acpi_table_header *mapped_table; 1015 + u32 length; 1016 + 1017 + if (!address) { 1018 + return (AE_BAD_ADDRESS); 1019 + } 1020 + 1021 + /* 1022 + * Map the header so we can get the table length. 1023 + * Use sizeof (struct acpi_table_header) as: 1024 + * 1. it is bigger than 24 to include RSDP->Length 1025 + * 2. it is smaller than sizeof (struct acpi_table_rsdp) 1026 + */ 1027 + mapped_table = 1028 + acpi_os_map_memory(address, sizeof(struct acpi_table_header)); 1029 + if (!mapped_table) { 1030 + fprintf(stderr, "Could not map table header at 0x%8.8X%8.8X\n", 1031 + ACPI_FORMAT_UINT64(address)); 1032 + return (osl_get_last_status(AE_BAD_ADDRESS)); 1033 + } 1034 + 1035 + /* If specified, signature must match */ 1036 + 1037 + if (signature) { 1038 + if (ACPI_VALIDATE_RSDP_SIG(signature)) { 1039 + if (!ACPI_VALIDATE_RSDP_SIG(mapped_table->signature)) { 1040 + acpi_os_unmap_memory(mapped_table, 1041 + sizeof(struct 1042 + acpi_table_header)); 1043 + return (AE_BAD_SIGNATURE); 1044 + } 1045 + } else 1046 + if (!ACPI_COMPARE_NAME(signature, mapped_table->signature)) 1047 + { 1048 + acpi_os_unmap_memory(mapped_table, 1049 + sizeof(struct acpi_table_header)); 1050 + return (AE_BAD_SIGNATURE); 1051 + } 1052 + } 1053 + 1054 + /* Map the entire table */ 1055 + 1056 + length = ap_get_table_length(mapped_table); 1057 + acpi_os_unmap_memory(mapped_table, sizeof(struct acpi_table_header)); 1058 + if (length == 0) { 1059 + return (AE_BAD_HEADER); 1060 + } 1061 + 1062 + mapped_table = acpi_os_map_memory(address, length); 1063 + if (!mapped_table) { 1064 + fprintf(stderr, 1065 + "Could not map table at 0x%8.8X%8.8X length %8.8X\n", 1066 + ACPI_FORMAT_UINT64(address), length); 1067 + return (osl_get_last_status(AE_INVALID_TABLE_LENGTH)); 
1068 + } 1069 + 1070 + (void)ap_is_valid_checksum(mapped_table); 1071 + 1072 + *table = mapped_table; 1073 + return (AE_OK); 1074 + } 1075 + 1076 + /****************************************************************************** 1077 + * 1078 + * FUNCTION: osl_unmap_table 1079 + * 1080 + * PARAMETERS: table - A pointer to the mapped table 1081 + * 1082 + * RETURN: None 1083 + * 1084 + * DESCRIPTION: Unmap entire ACPI table. 1085 + * 1086 + *****************************************************************************/ 1087 + 1088 + static void osl_unmap_table(struct acpi_table_header *table) 1089 + { 1090 + if (table) { 1091 + acpi_os_unmap_memory(table, ap_get_table_length(table)); 1092 + } 1093 + } 1094 + 1095 + /****************************************************************************** 1096 + * 1097 + * FUNCTION: osl_table_name_from_file 1098 + * 1099 + * PARAMETERS: filename - File that contains the desired table 1100 + * signature - Pointer to 4-character buffer to store 1101 + * extracted table signature. 1102 + * instance - Pointer to integer to store extracted 1103 + * table instance number. 1104 + * 1105 + * RETURN: Status; Table name is extracted if AE_OK. 1106 + * 1107 + * DESCRIPTION: Extract table signature and instance number from a table file 1108 + * name. 
1109 + * 1110 + *****************************************************************************/ 1111 + 1112 + static acpi_status 1113 + osl_table_name_from_file(char *filename, char *signature, u32 *instance) 1114 + { 1115 + 1116 + /* Ignore meaningless files */ 1117 + 1118 + if (strlen(filename) < ACPI_NAME_SIZE) { 1119 + return (AE_BAD_SIGNATURE); 1120 + } 1121 + 1122 + /* Extract instance number */ 1123 + 1124 + if (isdigit((int)filename[ACPI_NAME_SIZE])) { 1125 + sscanf(&filename[ACPI_NAME_SIZE], "%d", instance); 1126 + } else if (strlen(filename) != ACPI_NAME_SIZE) { 1127 + return (AE_BAD_SIGNATURE); 1128 + } else { 1129 + *instance = 0; 1130 + } 1131 + 1132 + /* Extract signature */ 1133 + 1134 + ACPI_MOVE_NAME(signature, filename); 1135 + return (AE_OK); 1136 + } 1137 + 1138 + /****************************************************************************** 1139 + * 1140 + * FUNCTION: osl_read_table_from_file 1141 + * 1142 + * PARAMETERS: filename - File that contains the desired table 1143 + * file_offset - Offset of the table in file 1144 + * signature - Optional ACPI Signature for desired table. 1145 + * A null terminated 4-character string. 1146 + * table - Where a pointer to the table is returned 1147 + * 1148 + * RETURN: Status; Table buffer is returned if AE_OK. 1149 + * 1150 + * DESCRIPTION: Read a ACPI table from a file. 
1151 + * 1152 + *****************************************************************************/ 1153 + 1154 + static acpi_status 1155 + osl_read_table_from_file(char *filename, 1156 + acpi_size file_offset, 1157 + char *signature, struct acpi_table_header **table) 1158 + { 1159 + FILE *table_file; 1160 + struct acpi_table_header header; 1161 + struct acpi_table_header *local_table = NULL; 1162 + u32 table_length; 1163 + s32 count; 1164 + acpi_status status = AE_OK; 1165 + 1166 + /* Open the file */ 1167 + 1168 + table_file = fopen(filename, "rb"); 1169 + if (table_file == NULL) { 1170 + fprintf(stderr, "Could not open table file: %s\n", filename); 1171 + return (osl_get_last_status(AE_NOT_FOUND)); 1172 + } 1173 + 1174 + fseek(table_file, file_offset, SEEK_SET); 1175 + 1176 + /* Read the Table header to get the table length */ 1177 + 1178 + count = fread(&header, 1, sizeof(struct acpi_table_header), table_file); 1179 + if (count != sizeof(struct acpi_table_header)) { 1180 + fprintf(stderr, "Could not read table header: %s\n", filename); 1181 + status = AE_BAD_HEADER; 1182 + goto exit; 1183 + } 1184 + 1185 + /* If signature is specified, it must match the table */ 1186 + 1187 + if (signature) { 1188 + if (ACPI_VALIDATE_RSDP_SIG(signature)) { 1189 + if (!ACPI_VALIDATE_RSDP_SIG(header.signature)) { 1190 + fprintf(stderr, 1191 + "Incorrect RSDP signature: found %8.8s\n", 1192 + header.signature); 1193 + status = AE_BAD_SIGNATURE; 1194 + goto exit; 1195 + } 1196 + } else if (!ACPI_COMPARE_NAME(signature, header.signature)) { 1197 + fprintf(stderr, 1198 + "Incorrect signature: Expecting %4.4s, found %4.4s\n", 1199 + signature, header.signature); 1200 + status = AE_BAD_SIGNATURE; 1201 + goto exit; 1202 + } 1203 + } 1204 + 1205 + table_length = ap_get_table_length(&header); 1206 + if (table_length == 0) { 1207 + status = AE_BAD_HEADER; 1208 + goto exit; 1209 + } 1210 + 1211 + /* Read the entire table into a local buffer */ 1212 + 1213 + local_table = calloc(1, table_length); 
1214 + if (!local_table) { 1215 + fprintf(stderr, 1216 + "%4.4s: Could not allocate buffer for table of length %X\n", 1217 + header.signature, table_length); 1218 + status = AE_NO_MEMORY; 1219 + goto exit; 1220 + } 1221 + 1222 + fseek(table_file, file_offset, SEEK_SET); 1223 + 1224 + count = fread(local_table, 1, table_length, table_file); 1225 + if (count != table_length) { 1226 + fprintf(stderr, "%4.4s: Could not read table content\n", 1227 + header.signature); 1228 + status = AE_INVALID_TABLE_LENGTH; 1229 + goto exit; 1230 + } 1231 + 1232 + /* Validate checksum */ 1233 + 1234 + (void)ap_is_valid_checksum(local_table); 1235 + 1236 + exit: 1237 + fclose(table_file); 1238 + *table = local_table; 1239 + return (status); 1240 + } 1241 + 1242 + /****************************************************************************** 1243 + * 1244 + * FUNCTION: osl_get_customized_table 1245 + * 1246 + * PARAMETERS: pathname - Directory to find Linux customized table 1247 + * signature - ACPI Signature for desired table. Must be 1248 + * a null terminated 4-character string. 1249 + * instance - Multiple table support for SSDT/UEFI (0...n) 1250 + * Must be 0 for other tables. 1251 + * table - Where a pointer to the table is returned 1252 + * address - Where the table physical address is returned 1253 + * 1254 + * RETURN: Status; Table buffer is returned if AE_OK. 1255 + * AE_LIMIT: Instance is beyond valid limit 1256 + * AE_NOT_FOUND: A table with the signature was not found 1257 + * 1258 + * DESCRIPTION: Get an OS customized table. 
1259 + * 1260 + *****************************************************************************/ 1261 + 1262 + static acpi_status 1263 + osl_get_customized_table(char *pathname, 1264 + char *signature, 1265 + u32 instance, 1266 + struct acpi_table_header **table, 1267 + acpi_physical_address * address) 1268 + { 1269 + void *table_dir; 1270 + u32 current_instance = 0; 1271 + char temp_name[ACPI_NAME_SIZE]; 1272 + char table_filename[PATH_MAX]; 1273 + char *filename; 1274 + acpi_status status; 1275 + 1276 + /* Open the directory for customized tables */ 1277 + 1278 + table_dir = acpi_os_open_directory(pathname, "*", REQUEST_FILE_ONLY); 1279 + if (!table_dir) { 1280 + return (osl_get_last_status(AE_NOT_FOUND)); 1281 + } 1282 + 1283 + /* Attempt to find the table in the directory */ 1284 + 1285 + while ((filename = acpi_os_get_next_filename(table_dir))) { 1286 + 1287 + /* Ignore meaningless files */ 1288 + 1289 + if (!ACPI_COMPARE_NAME(filename, signature)) { 1290 + continue; 1291 + } 1292 + 1293 + /* Extract table name and instance number */ 1294 + 1295 + status = 1296 + osl_table_name_from_file(filename, temp_name, 1297 + &current_instance); 1298 + 1299 + /* Ignore meaningless files */ 1300 + 1301 + if (ACPI_FAILURE(status) || current_instance != instance) { 1302 + continue; 1303 + } 1304 + 1305 + /* Create the table pathname */ 1306 + 1307 + if (instance != 0) { 1308 + sprintf(table_filename, "%s/%4.4s%d", pathname, 1309 + temp_name, instance); 1310 + } else { 1311 + sprintf(table_filename, "%s/%4.4s", pathname, 1312 + temp_name); 1313 + } 1314 + break; 1315 + } 1316 + 1317 + acpi_os_close_directory(table_dir); 1318 + 1319 + if (!filename) { 1320 + return (AE_LIMIT); 1321 + } 1322 + 1323 + /* There is no physical address saved for customized tables, use zero */ 1324 + 1325 + *address = 0; 1326 + status = osl_read_table_from_file(table_filename, 0, NULL, table); 1327 + 1328 + return (status); 1329 + }
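The two helpers above that manage the global table list, osl_add_table_to_list() and osl_table_name_from_file(), encode acpidump's naming convention: repeated signatures get ascending instance numbers, and a customized-table filename such as SSDT2 decodes to signature "SSDT", instance 2. The following is a self-contained userspace sketch of that behavior only; the struct and function names here are invented for illustration and are not part of the patch:

```c
#include <assert.h>
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SIG_SIZE 4	/* mirrors ACPI_NAME_SIZE */

struct tbl_node {
	char signature[SIG_SIZE];	/* not null-terminated, as in ACPI */
	unsigned int instance;
	struct tbl_node *next;
};

static struct tbl_node *tbl_head;

/* Append a signature to the list; duplicates of the same signature get
 * the next free instance number, as osl_add_table_to_list() does when
 * several SSDTs are present. Returns the new node, or NULL on OOM. */
static struct tbl_node *tbl_add(const char *sig)
{
	unsigned int next_instance = 0;
	struct tbl_node *n, *tail = NULL;

	for (n = tbl_head; n; n = n->next) {
		if (!memcmp(n->signature, sig, SIG_SIZE) &&
		    n->instance >= next_instance)
			next_instance = n->instance + 1;
		tail = n;
	}

	n = calloc(1, sizeof(*n));
	if (!n)
		return NULL;
	memcpy(n->signature, sig, SIG_SIZE);
	n->instance = next_instance;

	if (tail)
		tail->next = n;
	else
		tbl_head = n;
	return n;
}

/* Decode "SSDT" -> ("SSDT", 0) and "SSDT2" -> ("SSDT", 2), rejecting
 * names shorter than four characters or with non-numeric suffixes,
 * like osl_table_name_from_file(). Returns 0 on success, -1 on error. */
static int tbl_name_from_file(const char *filename, char *sig,
			      unsigned int *instance)
{
	if (strlen(filename) < SIG_SIZE)
		return -1;

	if (isdigit((unsigned char)filename[SIG_SIZE])) {
		if (sscanf(&filename[SIG_SIZE], "%u", instance) != 1)
			return -1;
	} else if (strlen(filename) != SIG_SIZE) {
		return -1;	/* trailing junk that is not a number */
	} else {
		*instance = 0;
	}

	memcpy(sig, filename, SIG_SIZE);
	return 0;
}
```

With this model, adding "SSDT" three times yields instances 0, 1 and 2, matching how the patch numbers multiple SSDTs found in the RSDT/XSDT or in the static table directory.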
tools/power/acpi/os_specific/service_layers/osunixdir.c
··· 1 + /****************************************************************************** 2 + * 3 + * Module Name: osunixdir - Unix directory access interfaces 4 + * 5 + *****************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2014, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + 46 + #include <stdio.h> 47 + #include <stdlib.h> 48 + #include <string.h> 49 + #include <dirent.h> 50 + #include <fnmatch.h> 51 + #include <ctype.h> 52 + #include <sys/stat.h> 53 + 54 + /* 55 + * Allocated structure returned from os_open_directory 56 + */ 57 + typedef struct external_find_info { 58 + char *dir_pathname; 59 + DIR *dir_ptr; 60 + char temp_buffer[256]; 61 + char *wildcard_spec; 62 + char requested_file_type; 63 + 64 + } external_find_info; 65 + 66 + /******************************************************************************* 67 + * 68 + * FUNCTION: acpi_os_open_directory 69 + * 70 + * PARAMETERS: dir_pathname - Full pathname to the directory 71 + * wildcard_spec - string of the form "*.c", etc. 72 + * 73 + * RETURN: A directory "handle" to be used in subsequent search operations. 74 + * NULL returned on failure. 
75 + * 76 + * DESCRIPTION: Open a directory in preparation for a wildcard search 77 + * 78 + ******************************************************************************/ 79 + 80 + void *acpi_os_open_directory(char *dir_pathname, 81 + char *wildcard_spec, char requested_file_type) 82 + { 83 + struct external_find_info *external_info; 84 + DIR *dir; 85 + 86 + /* Allocate the info struct that will be returned to the caller */ 87 + 88 + external_info = calloc(1, sizeof(struct external_find_info)); 89 + if (!external_info) { 90 + return (NULL); 91 + } 92 + 93 + /* Get the directory stream */ 94 + 95 + dir = opendir(dir_pathname); 96 + if (!dir) { 97 + fprintf(stderr, "Cannot open directory - %s\n", dir_pathname); 98 + free(external_info); 99 + return (NULL); 100 + } 101 + 102 + /* Save the info in the return structure */ 103 + 104 + external_info->wildcard_spec = wildcard_spec; 105 + external_info->requested_file_type = requested_file_type; 106 + external_info->dir_pathname = dir_pathname; 107 + external_info->dir_ptr = dir; 108 + return (external_info); 109 + } 110 + 111 + /******************************************************************************* 112 + * 113 + * FUNCTION: acpi_os_get_next_filename 114 + * 115 + * PARAMETERS: dir_handle - Created via acpi_os_open_directory 116 + * 117 + * RETURN: Next filename matched. NULL if no more matches. 118 + * 119 + * DESCRIPTION: Get the next file in the directory that matches the wildcard 120 + * specification. 
121 + * 122 + ******************************************************************************/ 123 + 124 + char *acpi_os_get_next_filename(void *dir_handle) 125 + { 126 + struct external_find_info *external_info = dir_handle; 127 + struct dirent *dir_entry; 128 + char *temp_str; 129 + int str_len; 130 + struct stat temp_stat; 131 + int err; 132 + 133 + while ((dir_entry = readdir(external_info->dir_ptr))) { 134 + if (!fnmatch 135 + (external_info->wildcard_spec, dir_entry->d_name, 0)) { 136 + if (dir_entry->d_name[0] == '.') { 137 + continue; 138 + } 139 + 140 + str_len = strlen(dir_entry->d_name) + 141 + strlen(external_info->dir_pathname) + 2; 142 + 143 + temp_str = calloc(str_len, 1); 144 + if (!temp_str) { 145 + fprintf(stderr, 146 + "Could not allocate buffer for temporary string\n"); 147 + return (NULL); 148 + } 149 + 150 + strcpy(temp_str, external_info->dir_pathname); 151 + strcat(temp_str, "/"); 152 + strcat(temp_str, dir_entry->d_name); 153 + 154 + err = stat(temp_str, &temp_stat); 155 + if (err == -1) { 156 + fprintf(stderr, 157 + "Cannot stat file (should not happen) - %s\n", 158 + temp_str); 159 + free(temp_str); 160 + return (NULL); 161 + } 162 + 163 + free(temp_str); 164 + 165 + if ((S_ISDIR(temp_stat.st_mode) 166 + && (external_info->requested_file_type == 167 + REQUEST_DIR_ONLY)) 168 + || ((!S_ISDIR(temp_stat.st_mode) 169 + && external_info->requested_file_type == 170 + REQUEST_FILE_ONLY))) { 171 + 172 + /* copy to a temp buffer because dir_entry struct is on the stack */ 173 + 174 + strcpy(external_info->temp_buffer, 175 + dir_entry->d_name); 176 + return (external_info->temp_buffer); 177 + } 178 + } 179 + } 180 + 181 + return (NULL); 182 + } 183 + 184 + /******************************************************************************* 185 + * 186 + * FUNCTION: acpi_os_close_directory 187 + * 188 + * PARAMETERS: dir_handle - Created via acpi_os_open_directory 189 + * 190 + * RETURN: None. 
191 + * 192 + * DESCRIPTION: Close the open directory and cleanup. 193 + * 194 + ******************************************************************************/ 195 + 196 + void acpi_os_close_directory(void *dir_handle) 197 + { 198 + struct external_find_info *external_info = dir_handle; 199 + 200 + /* Close the directory and free allocations */ 201 + 202 + closedir(external_info->dir_ptr); 203 + free(dir_handle); 204 + }
tools/power/acpi/os_specific/service_layers/osunixmap.c (new file, +151 lines)
··· 1 + /****************************************************************************** 2 + * 3 + * Module Name: osunixmap - Unix OSL for file mappings 4 + * 5 + *****************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2014, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include "acpidump.h" 45 + #include <unistd.h> 46 + #include <sys/mman.h> 47 + #ifdef _free_BSD 48 + #include <sys/param.h> 49 + #endif 50 + 51 + #define _COMPONENT ACPI_OS_SERVICES 52 + ACPI_MODULE_NAME("osunixmap") 53 + 54 + #ifndef O_BINARY 55 + #define O_BINARY 0 56 + #endif 57 + #ifdef _free_BSD 58 + #define MMAP_FLAGS MAP_SHARED 59 + #else 60 + #define MMAP_FLAGS MAP_PRIVATE 61 + #endif 62 + #define SYSTEM_MEMORY "/dev/mem" 63 + /******************************************************************************* 64 + * 65 + * FUNCTION: acpi_os_get_page_size 66 + * 67 + * PARAMETERS: None 68 + * 69 + * RETURN: Page size of the platform. 70 + * 71 + * DESCRIPTION: Obtain page size of the platform. 72 + * 73 + ******************************************************************************/ 74 + static acpi_size acpi_os_get_page_size(void) 75 + { 76 + 77 + #ifdef PAGE_SIZE 78 + return PAGE_SIZE; 79 + #else 80 + return sysconf(_SC_PAGESIZE); 81 + #endif 82 + } 83 + 84 + /****************************************************************************** 85 + * 86 + * FUNCTION: acpi_os_map_memory 87 + * 88 + * PARAMETERS: where - Physical address of memory to be mapped 89 + * length - How much memory to map 90 + * 91 + * RETURN: Pointer to mapped memory. Null on error. 92 + * 93 + * DESCRIPTION: Map physical memory into local address space. 
94 + * 95 + *****************************************************************************/ 96 + 97 + void *acpi_os_map_memory(acpi_physical_address where, acpi_size length) 98 + { 99 + u8 *mapped_memory; 100 + acpi_physical_address offset; 101 + acpi_size page_size; 102 + int fd; 103 + 104 + fd = open(SYSTEM_MEMORY, O_RDONLY | O_BINARY); 105 + if (fd < 0) { 106 + fprintf(stderr, "Cannot open %s\n", SYSTEM_MEMORY); 107 + return (NULL); 108 + } 109 + 110 + /* Align the offset to use mmap */ 111 + 112 + page_size = acpi_os_get_page_size(); 113 + offset = where % page_size; 114 + 115 + /* Map the table header to get the length of the full table */ 116 + 117 + mapped_memory = mmap(NULL, (length + offset), PROT_READ, MMAP_FLAGS, 118 + fd, (where - offset)); 119 + if (mapped_memory == MAP_FAILED) { 120 + fprintf(stderr, "Cannot map %s\n", SYSTEM_MEMORY); 121 + close(fd); 122 + return (NULL); 123 + } 124 + 125 + close(fd); 126 + return (ACPI_CAST8(mapped_memory + offset)); 127 + } 128 + 129 + /****************************************************************************** 130 + * 131 + * FUNCTION: acpi_os_unmap_memory 132 + * 133 + * PARAMETERS: where - Logical address of memory to be unmapped 134 + * length - How much memory to unmap 135 + * 136 + * RETURN: None. 137 + * 138 + * DESCRIPTION: Delete a previously created mapping. Where and Length must 139 + * correspond to a previous mapping exactly. 140 + * 141 + *****************************************************************************/ 142 + 143 + void acpi_os_unmap_memory(void *where, acpi_size length) 144 + { 145 + acpi_physical_address offset; 146 + acpi_size page_size; 147 + 148 + page_size = acpi_os_get_page_size(); 149 + offset = (acpi_physical_address) where % page_size; 150 + munmap((u8 *)where - offset, (length + offset)); 151 + }
tools/power/acpi/tools/acpidump/acpidump.c (deleted, -559 lines)
··· 1 - /* 2 - * (c) Alexey Starikovskiy, Intel, 2005-2006. 3 - * All rights reserved. 4 - * 5 - * Redistribution and use in source and binary forms, with or without 6 - * modification, are permitted provided that the following conditions 7 - * are met: 8 - * 1. Redistributions of source code must retain the above copyright 9 - * notice, this list of conditions, and the following disclaimer, 10 - * without modification. 11 - * 2. Redistributions in binary form must reproduce at minimum a disclaimer 12 - * substantially similar to the "NO WARRANTY" disclaimer below 13 - * ("Disclaimer") and any redistribution must be conditioned upon 14 - * including a substantially similar Disclaimer requirement for further 15 - * binary redistribution. 16 - * 3. Neither the names of the above-listed copyright holders nor the names 17 - * of any contributors may be used to endorse or promote products derived 18 - * from this software without specific prior written permission. 19 - * 20 - * Alternatively, this software may be distributed under the terms of the 21 - * GNU General Public License ("GPL") version 2 as published by the Free 22 - * Software Foundation. 23 - * 24 - * NO WARRANTY 25 - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 26 - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 27 - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 28 - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 29 - * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 30 - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 31 - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 32 - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 33 - * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 34 - * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 35 - * POSSIBILITY OF SUCH DAMAGES. 
36 - */ 37 - 38 - #ifdef DEFINE_ALTERNATE_TYPES 39 - /* hack to enable building old application with new headers -lenb */ 40 - #define acpi_fadt_descriptor acpi_table_fadt 41 - #define acpi_rsdp_descriptor acpi_table_rsdp 42 - #define DSDT_SIG ACPI_SIG_DSDT 43 - #define FACS_SIG ACPI_SIG_FACS 44 - #define FADT_SIG ACPI_SIG_FADT 45 - #define xfirmware_ctrl Xfacs 46 - #define firmware_ctrl facs 47 - 48 - typedef int s32; 49 - typedef unsigned char u8; 50 - typedef unsigned short u16; 51 - typedef unsigned int u32; 52 - typedef unsigned long long u64; 53 - typedef long long s64; 54 - #endif 55 - 56 - #include <sys/mman.h> 57 - #include <sys/types.h> 58 - #include <sys/stat.h> 59 - #include <fcntl.h> 60 - #include <stdio.h> 61 - #include <string.h> 62 - #include <unistd.h> 63 - #include <getopt.h> 64 - 65 - #include <dirent.h> 66 - 67 - #include <acpi/acconfig.h> 68 - #include <acpi/platform/acenv.h> 69 - #include <acpi/actypes.h> 70 - #include <acpi/actbl.h> 71 - 72 - static inline u8 checksum(u8 * buffer, u32 length) 73 - { 74 - u8 sum = 0, *i = buffer; 75 - buffer += length; 76 - for (; i < buffer; sum += *(i++)); 77 - return sum; 78 - } 79 - 80 - static unsigned long psz, addr, length; 81 - static int print, connect, skip; 82 - static u8 select_sig[4]; 83 - 84 - static unsigned long read_efi_systab( void ) 85 - { 86 - char buffer[80]; 87 - unsigned long addr; 88 - FILE *f = fopen("/sys/firmware/efi/systab", "r"); 89 - if (f) { 90 - while (fgets(buffer, 80, f)) { 91 - if (sscanf(buffer, "ACPI20=0x%lx", &addr) == 1) 92 - return addr; 93 - } 94 - fclose(f); 95 - } 96 - return 0; 97 - } 98 - 99 - static u8 *acpi_map_memory(unsigned long where, unsigned length) 100 - { 101 - unsigned long offset; 102 - u8 *there; 103 - int fd = open("/dev/mem", O_RDONLY); 104 - if (fd < 0) { 105 - fprintf(stderr, "acpi_os_map_memory: cannot open /dev/mem\n"); 106 - exit(1); 107 - } 108 - offset = where % psz; 109 - there = mmap(NULL, length + offset, PROT_READ, MAP_PRIVATE, 110 - fd, 
where - offset); 111 - close(fd); 112 - if (there == MAP_FAILED) return 0; 113 - return (there + offset); 114 - } 115 - 116 - static void acpi_unmap_memory(u8 * there, unsigned length) 117 - { 118 - unsigned long offset = (unsigned long)there % psz; 119 - munmap(there - offset, length + offset); 120 - } 121 - 122 - static struct acpi_table_header *acpi_map_table(unsigned long where, char *sig) 123 - { 124 - unsigned size; 125 - struct acpi_table_header *tbl = (struct acpi_table_header *) 126 - acpi_map_memory(where, sizeof(struct acpi_table_header)); 127 - if (!tbl || (sig && memcmp(sig, tbl->signature, 4))) return 0; 128 - size = tbl->length; 129 - acpi_unmap_memory((u8 *) tbl, sizeof(struct acpi_table_header)); 130 - return (struct acpi_table_header *)acpi_map_memory(where, size); 131 - } 132 - 133 - static void acpi_unmap_table(struct acpi_table_header *tbl) 134 - { 135 - acpi_unmap_memory((u8 *)tbl, tbl->length); 136 - } 137 - 138 - static struct acpi_rsdp_descriptor *acpi_scan_for_rsdp(u8 *begin, u32 length) 139 - { 140 - struct acpi_rsdp_descriptor *rsdp; 141 - u8 *i, *end = begin + length; 142 - /* Search from given start address for the requested length */ 143 - for (i = begin; i < end; i += ACPI_RSDP_SCAN_STEP) { 144 - /* The signature and checksum must both be correct */ 145 - if (memcmp((char *)i, "RSD PTR ", 8)) continue; 146 - rsdp = (struct acpi_rsdp_descriptor *)i; 147 - /* Signature matches, check the appropriate checksum */ 148 - if (!checksum((u8 *) rsdp, (rsdp->revision < 2) ? 
149 - ACPI_RSDP_CHECKSUM_LENGTH : 150 - ACPI_RSDP_XCHECKSUM_LENGTH)) 151 - /* Checksum valid, we have found a valid RSDP */ 152 - return rsdp; 153 - } 154 - /* Searched entire block, no RSDP was found */ 155 - return 0; 156 - } 157 - 158 - /* 159 - * Output data 160 - */ 161 - static void acpi_show_data(int fd, u8 * data, int size) 162 - { 163 - char buffer[256]; 164 - int len; 165 - int i, remain = size; 166 - while (remain > 0) { 167 - len = snprintf(buffer, 256, " %04x:", size - remain); 168 - for (i = 0; i < 16 && i < remain; i++) { 169 - len += 170 - snprintf(&buffer[len], 256 - len, " %02x", data[i]); 171 - } 172 - for (; i < 16; i++) { 173 - len += snprintf(&buffer[len], 256 - len, " "); 174 - } 175 - len += snprintf(&buffer[len], 256 - len, " "); 176 - for (i = 0; i < 16 && i < remain; i++) { 177 - buffer[len++] = (isprint(data[i])) ? data[i] : '.'; 178 - } 179 - buffer[len++] = '\n'; 180 - write(fd, buffer, len); 181 - data += 16; 182 - remain -= 16; 183 - } 184 - } 185 - 186 - /* 187 - * Output ACPI table 188 - */ 189 - static void acpi_show_table(int fd, struct acpi_table_header *table, unsigned long addr) 190 - { 191 - char buff[80]; 192 - int len = snprintf(buff, 80, "%.4s @ %p\n", table->signature, (void *)addr); 193 - write(fd, buff, len); 194 - acpi_show_data(fd, (u8 *) table, table->length); 195 - buff[0] = '\n'; 196 - write(fd, buff, 1); 197 - } 198 - 199 - static void write_table(int fd, struct acpi_table_header *tbl, unsigned long addr) 200 - { 201 - static int select_done = 0; 202 - if (!select_sig[0]) { 203 - if (print) { 204 - acpi_show_table(fd, tbl, addr); 205 - } else { 206 - write(fd, tbl, tbl->length); 207 - } 208 - } else if (!select_done && !memcmp(select_sig, tbl->signature, 4)) { 209 - if (skip > 0) { 210 - --skip; 211 - return; 212 - } 213 - if (print) { 214 - acpi_show_table(fd, tbl, addr); 215 - } else { 216 - write(fd, tbl, tbl->length); 217 - } 218 - select_done = 1; 219 - } 220 - } 221 - 222 - static void acpi_dump_FADT(int fd, 
struct acpi_table_header *tbl, unsigned long xaddr) { 223 - struct acpi_fadt_descriptor x; 224 - unsigned long addr; 225 - size_t len = sizeof(struct acpi_fadt_descriptor); 226 - if (len > tbl->length) len = tbl->length; 227 - memcpy(&x, tbl, len); 228 - x.header.length = len; 229 - if (checksum((u8 *)tbl, len)) { 230 - fprintf(stderr, "Wrong checksum for FADT!\n"); 231 - } 232 - if (x.header.length >= 148 && x.Xdsdt) { 233 - addr = (unsigned long)x.Xdsdt; 234 - if (connect) { 235 - x.Xdsdt = lseek(fd, 0, SEEK_CUR); 236 - } 237 - } else if (x.header.length >= 44 && x.dsdt) { 238 - addr = (unsigned long)x.dsdt; 239 - if (connect) { 240 - x.dsdt = lseek(fd, 0, SEEK_CUR); 241 - } 242 - } else { 243 - fprintf(stderr, "No DSDT in FADT!\n"); 244 - goto no_dsdt; 245 - } 246 - tbl = acpi_map_table(addr, DSDT_SIG); 247 - if (!tbl) goto no_dsdt; 248 - if (checksum((u8 *)tbl, tbl->length)) 249 - fprintf(stderr, "Wrong checksum for DSDT!\n"); 250 - write_table(fd, tbl, addr); 251 - acpi_unmap_table(tbl); 252 - no_dsdt: 253 - if (x.header.length >= 140 && x.xfirmware_ctrl) { 254 - addr = (unsigned long)x.xfirmware_ctrl; 255 - if (connect) { 256 - x.xfirmware_ctrl = lseek(fd, 0, SEEK_CUR); 257 - } 258 - } else if (x.header.length >= 40 && x.firmware_ctrl) { 259 - addr = (unsigned long)x.firmware_ctrl; 260 - if (connect) { 261 - x.firmware_ctrl = lseek(fd, 0, SEEK_CUR); 262 - } 263 - } else { 264 - fprintf(stderr, "No FACS in FADT!\n"); 265 - goto no_facs; 266 - } 267 - tbl = acpi_map_table(addr, FACS_SIG); 268 - if (!tbl) goto no_facs; 269 - /* do not checksum FACS */ 270 - write_table(fd, tbl, addr); 271 - acpi_unmap_table(tbl); 272 - no_facs: 273 - write_table(fd, (struct acpi_table_header *)&x, xaddr); 274 - } 275 - 276 - static int acpi_dump_SDT(int fd, struct acpi_rsdp_descriptor *rsdp) 277 - { 278 - struct acpi_table_header *sdt, *tbl = 0; 279 - int xsdt = 1, i, num; 280 - char *offset; 281 - unsigned long addr; 282 - if (rsdp->revision > 1 && rsdp->xsdt_physical_address) 
{ 283 - tbl = acpi_map_table(rsdp->xsdt_physical_address, "XSDT"); 284 - } 285 - if (!tbl && rsdp->rsdt_physical_address) { 286 - xsdt = 0; 287 - tbl = acpi_map_table(rsdp->rsdt_physical_address, "RSDT"); 288 - } 289 - if (!tbl) return 0; 290 - sdt = malloc(tbl->length); 291 - memcpy(sdt, tbl, tbl->length); 292 - acpi_unmap_table(tbl); 293 - if (checksum((u8 *)sdt, sdt->length)) 294 - fprintf(stderr, "Wrong checksum for %s!\n", (xsdt)?"XSDT":"RSDT"); 295 - num = (sdt->length - sizeof(struct acpi_table_header))/((xsdt)?sizeof(u64):sizeof(u32)); 296 - offset = (char *)sdt + sizeof(struct acpi_table_header); 297 - for (i = 0; i < num; ++i, offset += ((xsdt) ? sizeof(u64) : sizeof(u32))) { 298 - addr = (xsdt) ? (unsigned long)(*(u64 *)offset): 299 - (unsigned long)(*(u32 *)offset); 300 - if (!addr) continue; 301 - tbl = acpi_map_table(addr, 0); 302 - if (!tbl) continue; 303 - if (!memcmp(tbl->signature, FADT_SIG, 4)) { 304 - acpi_dump_FADT(fd, tbl, addr); 305 - } else { 306 - if (checksum((u8 *)tbl, tbl->length)) 307 - fprintf(stderr, "Wrong checksum for generic table!\n"); 308 - write_table(fd, tbl, addr); 309 - } 310 - acpi_unmap_table(tbl); 311 - if (connect) { 312 - if (xsdt) 313 - (*(u64*)offset) = lseek(fd, 0, SEEK_CUR); 314 - else 315 - (*(u32*)offset) = lseek(fd, 0, SEEK_CUR); 316 - } 317 - } 318 - if (xsdt) { 319 - addr = (unsigned long)rsdp->xsdt_physical_address; 320 - if (connect) { 321 - rsdp->xsdt_physical_address = lseek(fd, 0, SEEK_CUR); 322 - } 323 - } else { 324 - addr = (unsigned long)rsdp->rsdt_physical_address; 325 - if (connect) { 326 - rsdp->rsdt_physical_address = lseek(fd, 0, SEEK_CUR); 327 - } 328 - } 329 - write_table(fd, sdt, addr); 330 - free (sdt); 331 - return 1; 332 - } 333 - 334 - #define DYNAMIC_SSDT "/sys/firmware/acpi/tables/dynamic" 335 - 336 - static void acpi_dump_dynamic_SSDT(int fd) 337 - { 338 - struct stat file_stat; 339 - char filename[256], *ptr; 340 - DIR *tabledir; 341 - struct dirent *entry; 342 - FILE *fp; 343 - int 
count, readcount, length; 344 - struct acpi_table_header table_header, *ptable; 345 - 346 - if (stat(DYNAMIC_SSDT, &file_stat) == -1) { 347 - /* The directory doesn't exist */ 348 - return; 349 - } 350 - tabledir = opendir(DYNAMIC_SSDT); 351 - if(!tabledir){ 352 - /*can't open the directory */ 353 - return; 354 - } 355 - 356 - while ((entry = readdir(tabledir)) != 0){ 357 - /* skip the file of . /.. */ 358 - if (entry->d_name[0] == '.') 359 - continue; 360 - 361 - sprintf(filename, "%s/%s", DYNAMIC_SSDT, entry->d_name); 362 - fp = fopen(filename, "r"); 363 - if (fp == NULL) { 364 - fprintf(stderr, "Can't open the file of %s\n", 365 - filename); 366 - continue; 367 - } 368 - /* Read the Table header to parse the table length */ 369 - count = fread(&table_header, 1, sizeof(struct acpi_table_header), fp); 370 - if (count < sizeof(table_header)) { 371 - /* the length is lessn than ACPI table header. skip it */ 372 - fclose(fp); 373 - continue; 374 - } 375 - length = table_header.length; 376 - ptr = malloc(table_header.length); 377 - fseek(fp, 0, SEEK_SET); 378 - readcount = 0; 379 - while(!feof(fp) && readcount < length) { 380 - count = fread(ptr + readcount, 1, 256, fp); 381 - readcount += count; 382 - } 383 - fclose(fp); 384 - ptable = (struct acpi_table_header *) ptr; 385 - if (checksum((u8 *) ptable, ptable->length)) 386 - fprintf(stderr, "Wrong checksum " 387 - "for dynamic SSDT table!\n"); 388 - write_table(fd, ptable, 0); 389 - free(ptr); 390 - } 391 - closedir(tabledir); 392 - return; 393 - } 394 - 395 - static void usage(const char *progname) 396 - { 397 - puts("Usage:"); 398 - printf("%s [--addr 0x1234][--table DSDT][--output filename]" 399 - "[--binary][--length 0x456][--help]\n", progname); 400 - puts("\t--addr 0x1234 or -a 0x1234 -- look for tables at this physical address"); 401 - puts("\t--table DSDT or -t DSDT -- only dump table with DSDT signature"); 402 - puts("\t--output filename or -o filename -- redirect output from stdin to filename"); 403 - 
puts("\t--binary or -b -- dump data in binary form rather than in hex-dump format"); 404 - puts("\t--length 0x456 or -l 0x456 -- works only with --addr, dump physical memory" 405 - "\n\t\tregion without trying to understand it's contents"); 406 - puts("\t--skip 2 or -s 2 -- skip 2 tables of the given name and output only 3rd one"); 407 - puts("\t--help or -h -- this help message"); 408 - exit(0); 409 - } 410 - 411 - static struct option long_options[] = { 412 - {"addr", 1, 0, 0}, 413 - {"table", 1, 0, 0}, 414 - {"output", 1, 0, 0}, 415 - {"binary", 0, 0, 0}, 416 - {"length", 1, 0, 0}, 417 - {"skip", 1, 0, 0}, 418 - {"help", 0, 0, 0}, 419 - {0, 0, 0, 0} 420 - }; 421 - int main(int argc, char **argv) 422 - { 423 - int option_index, c, fd; 424 - u8 *raw; 425 - struct acpi_rsdp_descriptor rsdpx, *x = 0; 426 - char *filename = 0; 427 - char buff[80]; 428 - memset(select_sig, 0, 4); 429 - print = 1; 430 - connect = 0; 431 - addr = length = 0; 432 - skip = 0; 433 - while (1) { 434 - option_index = 0; 435 - c = getopt_long(argc, argv, "a:t:o:bl:s:h", 436 - long_options, &option_index); 437 - if (c == -1) 438 - break; 439 - 440 - switch (c) { 441 - case 0: 442 - switch (option_index) { 443 - case 0: 444 - addr = strtoul(optarg, (char **)NULL, 16); 445 - break; 446 - case 1: 447 - memcpy(select_sig, optarg, 4); 448 - break; 449 - case 2: 450 - filename = optarg; 451 - break; 452 - case 3: 453 - print = 0; 454 - break; 455 - case 4: 456 - length = strtoul(optarg, (char **)NULL, 16); 457 - break; 458 - case 5: 459 - skip = strtoul(optarg, (char **)NULL, 10); 460 - break; 461 - case 6: 462 - usage(argv[0]); 463 - exit(0); 464 - } 465 - break; 466 - case 'a': 467 - addr = strtoul(optarg, (char **)NULL, 16); 468 - break; 469 - case 't': 470 - memcpy(select_sig, optarg, 4); 471 - break; 472 - case 'o': 473 - filename = optarg; 474 - break; 475 - case 'b': 476 - print = 0; 477 - break; 478 - case 'l': 479 - length = strtoul(optarg, (char **)NULL, 16); 480 - break; 481 - case 's': 
482 - skip = strtoul(optarg, (char **)NULL, 10); 483 - break; 484 - case 'h': 485 - usage(argv[0]); 486 - exit(0); 487 - default: 488 - printf("Unknown option!\n"); 489 - usage(argv[0]); 490 - exit(0); 491 - } 492 - } 493 - 494 - fd = STDOUT_FILENO; 495 - if (filename) { 496 - fd = creat(filename, S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH); 497 - if (fd < 0) 498 - return fd; 499 - } 500 - 501 - if (!select_sig[0] && !print) { 502 - connect = 1; 503 - } 504 - 505 - psz = sysconf(_SC_PAGESIZE); 506 - if (length && addr) { 507 - /* We know length and address, it means we just want a memory dump */ 508 - if (!(raw = acpi_map_memory(addr, length))) 509 - goto not_found; 510 - write(fd, raw, length); 511 - acpi_unmap_memory(raw, length); 512 - close(fd); 513 - return 0; 514 - } 515 - 516 - length = sizeof(struct acpi_rsdp_descriptor); 517 - if (!addr) { 518 - addr = read_efi_systab(); 519 - if (!addr) { 520 - addr = ACPI_HI_RSDP_WINDOW_BASE; 521 - length = ACPI_HI_RSDP_WINDOW_SIZE; 522 - } 523 - } 524 - 525 - if (!(raw = acpi_map_memory(addr, length)) || 526 - !(x = acpi_scan_for_rsdp(raw, length))) 527 - goto not_found; 528 - 529 - /* Find RSDP and print all found tables */ 530 - memcpy(&rsdpx, x, sizeof(struct acpi_rsdp_descriptor)); 531 - acpi_unmap_memory(raw, length); 532 - if (connect) { 533 - lseek(fd, sizeof(struct acpi_rsdp_descriptor), SEEK_SET); 534 - } 535 - if (!acpi_dump_SDT(fd, &rsdpx)) 536 - goto not_found; 537 - if (connect) { 538 - lseek(fd, 0, SEEK_SET); 539 - write(fd, x, (rsdpx.revision < 2) ? 540 - ACPI_RSDP_CHECKSUM_LENGTH : ACPI_RSDP_XCHECKSUM_LENGTH); 541 - } else if (!select_sig[0] || !memcmp("RSD PTR ", select_sig, 4)) { 542 - addr += (long)x - (long)raw; 543 - length = snprintf(buff, 80, "RSD PTR @ %p\n", (void *)addr); 544 - write(fd, buff, length); 545 - acpi_show_data(fd, (u8 *) & rsdpx, (rsdpx.revision < 2) ? 
546 - ACPI_RSDP_CHECKSUM_LENGTH : ACPI_RSDP_XCHECKSUM_LENGTH); 547 - buff[0] = '\n'; 548 - write(fd, buff, 1); 549 - } 550 - acpi_dump_dynamic_SSDT(fd); 551 - close(fd); 552 - return 0; 553 - not_found: 554 - close(fd); 555 - fprintf(stderr, "ACPI tables were not found. If you know location " 556 - "of RSD PTR table (from dmesg, etc), " 557 - "supply it with either --addr or -a option\n"); 558 - return 1; 559 - }
tools/power/acpi/tools/acpidump/acpidump.h (new file, +130 lines)
··· 1 + /****************************************************************************** 2 + * 3 + * Module Name: acpidump.h - Include file for acpi_dump utility 4 + * 5 + *****************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2014, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + /* 45 + * Global variables. Defined in main.c only, externed in all other files 46 + */ 47 + #ifdef _DECLARE_GLOBALS 48 + #define EXTERN 49 + #define INIT_GLOBAL(a,b) a=b 50 + #define DEFINE_ACPI_GLOBALS 1 51 + #else 52 + #define EXTERN extern 53 + #define INIT_GLOBAL(a,b) a 54 + #endif 55 + 56 + #include <acpi/acpi.h> 57 + #include "accommon.h" 58 + #include "actables.h" 59 + 60 + #include <stdio.h> 61 + #include <fcntl.h> 62 + #include <errno.h> 63 + #include <sys/stat.h> 64 + 65 + /* Globals */ 66 + 67 + EXTERN u8 INIT_GLOBAL(gbl_summary_mode, FALSE); 68 + EXTERN u8 INIT_GLOBAL(gbl_verbose_mode, FALSE); 69 + EXTERN u8 INIT_GLOBAL(gbl_binary_mode, FALSE); 70 + EXTERN u8 INIT_GLOBAL(gbl_dump_customized_tables, FALSE); 71 + EXTERN u8 INIT_GLOBAL(gbl_do_not_dump_xsdt, FALSE); 72 + EXTERN FILE INIT_GLOBAL(*gbl_output_file, NULL); 73 + EXTERN char INIT_GLOBAL(*gbl_output_filename, NULL); 74 + EXTERN u64 INIT_GLOBAL(gbl_rsdp_base, 0); 75 + 76 + /* Globals required for use with ACPICA modules */ 77 + 78 + #ifdef _DECLARE_GLOBALS 79 + u8 acpi_gbl_integer_byte_width = 8; 80 + #endif 81 + 82 + /* Action table used to defer requested options */ 83 + 84 + struct ap_dump_action { 85 + char *argument; 86 + u32 to_be_done; 87 + }; 88 + 89 + #define AP_MAX_ACTIONS 32 90 + 91 + #define AP_DUMP_ALL_TABLES 0 92 + #define AP_DUMP_TABLE_BY_ADDRESS 1 93 + #define AP_DUMP_TABLE_BY_NAME 2 94 + #define AP_DUMP_TABLE_BY_FILE 3 95 + 96 + #define 
AP_MAX_ACPI_FILES 256 /* Prevent infinite loops */ 97 + 98 + /* Minimum FADT sizes for various table addresses */ 99 + 100 + #define MIN_FADT_FOR_DSDT (ACPI_FADT_OFFSET (dsdt) + sizeof (u32)) 101 + #define MIN_FADT_FOR_FACS (ACPI_FADT_OFFSET (facs) + sizeof (u32)) 102 + #define MIN_FADT_FOR_XDSDT (ACPI_FADT_OFFSET (Xdsdt) + sizeof (u64)) 103 + #define MIN_FADT_FOR_XFACS (ACPI_FADT_OFFSET (Xfacs) + sizeof (u64)) 104 + 105 + /* 106 + * apdump - Table get/dump routines 107 + */ 108 + int ap_dump_table_from_file(char *pathname); 109 + 110 + int ap_dump_table_by_name(char *signature); 111 + 112 + int ap_dump_table_by_address(char *ascii_address); 113 + 114 + int ap_dump_all_tables(void); 115 + 116 + u8 ap_is_valid_header(struct acpi_table_header *table); 117 + 118 + u8 ap_is_valid_checksum(struct acpi_table_header *table); 119 + 120 + u32 ap_get_table_length(struct acpi_table_header *table); 121 + 122 + /* 123 + * apfiles - File I/O utilities 124 + */ 125 + int ap_open_output_file(char *pathname); 126 + 127 + int ap_write_to_binary_file(struct acpi_table_header *table, u32 instance); 128 + 129 + struct acpi_table_header *ap_get_table_from_file(char *pathname, 130 + u32 *file_size);
tools/power/acpi/tools/acpidump/apdump.c (new file, +451 lines)
··· 1 + /****************************************************************************** 2 + * 3 + * Module Name: apdump - Dump routines for ACPI tables (acpidump) 4 + * 5 + *****************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2014, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include "acpidump.h" 45 + 46 + /* Local prototypes */ 47 + 48 + static int 49 + ap_dump_table_buffer(struct acpi_table_header *table, 50 + u32 instance, acpi_physical_address address); 51 + 52 + /****************************************************************************** 53 + * 54 + * FUNCTION: ap_is_valid_header 55 + * 56 + * PARAMETERS: table - Pointer to table to be validated 57 + * 58 + * RETURN: TRUE if the header appears to be valid. FALSE otherwise 59 + * 60 + * DESCRIPTION: Check for a valid ACPI table header 61 + * 62 + ******************************************************************************/ 63 + 64 + u8 ap_is_valid_header(struct acpi_table_header *table) 65 + { 66 + 67 + if (!ACPI_VALIDATE_RSDP_SIG(table->signature)) { 68 + 69 + /* Make sure signature is all ASCII and a valid ACPI name */ 70 + 71 + if (!acpi_ut_valid_acpi_name(table->signature)) { 72 + fprintf(stderr, 73 + "Table signature (0x%8.8X) is invalid\n", 74 + *(u32 *)table->signature); 75 + return (FALSE); 76 + } 77 + 78 + /* Check for minimum table length */ 79 + 80 + if (table->length < sizeof(struct acpi_table_header)) { 81 + fprintf(stderr, "Table length (0x%8.8X) is invalid\n", 82 + table->length); 83 + return (FALSE); 84 + } 85 + } 86 + 87 + return (TRUE); 88 + } 89 + 90 + /****************************************************************************** 91 + * 92 + * FUNCTION: ap_is_valid_checksum 93 + * 94 + * PARAMETERS: table - Pointer to 
table to be validated 95 + * 96 + * RETURN: TRUE if the checksum appears to be valid. FALSE otherwise. 97 + * 98 + * DESCRIPTION: Check for a valid ACPI table checksum. 99 + * 100 + ******************************************************************************/ 101 + 102 + u8 ap_is_valid_checksum(struct acpi_table_header *table) 103 + { 104 + acpi_status status; 105 + struct acpi_table_rsdp *rsdp; 106 + 107 + if (ACPI_VALIDATE_RSDP_SIG(table->signature)) { 108 + /* 109 + * Checksum for RSDP. 110 + * Note: Other checksums are computed during the table dump. 111 + */ 112 + rsdp = ACPI_CAST_PTR(struct acpi_table_rsdp, table); 113 + status = acpi_tb_validate_rsdp(rsdp); 114 + } else { 115 + status = acpi_tb_verify_checksum(table, table->length); 116 + } 117 + 118 + if (ACPI_FAILURE(status)) { 119 + fprintf(stderr, "%4.4s: Warning: wrong checksum in table\n", table->signature); 120 + return (FALSE); 121 + } 122 + 123 + return (TRUE); 124 + } 125 + 126 + /****************************************************************************** 127 + * 128 + * FUNCTION: ap_get_table_length 129 + * 130 + * PARAMETERS: table - Pointer to the table 131 + * 132 + * RETURN: Table length 133 + * 134 + * DESCRIPTION: Obtain table length according to table signature.
135 + * 136 + ******************************************************************************/ 137 + 138 + u32 ap_get_table_length(struct acpi_table_header *table) 139 + { 140 + struct acpi_table_rsdp *rsdp; 141 + 142 + /* Check if table is valid */ 143 + 144 + if (!ap_is_valid_header(table)) { 145 + return (0); 146 + } 147 + 148 + if (ACPI_VALIDATE_RSDP_SIG(table->signature)) { 149 + rsdp = ACPI_CAST_PTR(struct acpi_table_rsdp, table); 150 + return (rsdp->length); 151 + } 152 + 153 + /* Normal ACPI table */ 154 + 155 + return (table->length); 156 + } 157 + 158 + /****************************************************************************** 159 + * 160 + * FUNCTION: ap_dump_table_buffer 161 + * 162 + * PARAMETERS: table - ACPI table to be dumped 163 + * instance - ACPI table instance no. to be dumped 164 + * address - Physical address of the table 165 + * 166 + * RETURN: None 167 + * 168 + * DESCRIPTION: Dump an ACPI table in standard ASCII hex format, with a 169 + * header that is compatible with the acpi_xtract utility. 170 + * 171 + ******************************************************************************/ 172 + 173 + static int 174 + ap_dump_table_buffer(struct acpi_table_header *table, 175 + u32 instance, acpi_physical_address address) 176 + { 177 + u32 table_length; 178 + 179 + table_length = ap_get_table_length(table); 180 + 181 + /* Print only the header if requested */ 182 + 183 + if (gbl_summary_mode) { 184 + acpi_tb_print_table_header(address, table); 185 + return (0); 186 + } 187 + 188 + /* Dump to binary file if requested */ 189 + 190 + if (gbl_binary_mode) { 191 + return (ap_write_to_binary_file(table, instance)); 192 + } 193 + 194 + /* 195 + * Dump the table with header for use with acpixtract utility. 196 + * Note: simplest to just always emit a 64-bit address. acpi_xtract 197 + * utility can handle this. 
198 + */ 199 + printf("%4.4s @ 0x%8.8X%8.8X\n", table->signature, 200 + ACPI_FORMAT_UINT64(address)); 201 + 202 + acpi_ut_dump_buffer(ACPI_CAST_PTR(u8, table), table_length, 203 + DB_BYTE_DISPLAY, 0); 204 + printf("\n"); 205 + return (0); 206 + } 207 + 208 + /****************************************************************************** 209 + * 210 + * FUNCTION: ap_dump_all_tables 211 + * 212 + * PARAMETERS: None 213 + * 214 + * RETURN: Status 215 + * 216 + * DESCRIPTION: Get all tables from the RSDT/XSDT (or at least all of the 217 + * tables that we can possibly get). 218 + * 219 + ******************************************************************************/ 220 + 221 + int ap_dump_all_tables(void) 222 + { 223 + struct acpi_table_header *table; 224 + u32 instance = 0; 225 + acpi_physical_address address; 226 + acpi_status status; 227 + int table_status; 228 + u32 i; 229 + 230 + /* Get and dump all available ACPI tables */ 231 + 232 + for (i = 0; i < AP_MAX_ACPI_FILES; i++) { 233 + status = 234 + acpi_os_get_table_by_index(i, &table, &instance, &address); 235 + if (ACPI_FAILURE(status)) { 236 + 237 + /* AE_LIMIT means that no more tables are available */ 238 + 239 + if (status == AE_LIMIT) { 240 + return (0); 241 + } else if (i == 0) { 242 + fprintf(stderr, 243 + "Could not get ACPI tables, %s\n", 244 + acpi_format_exception(status)); 245 + return (-1); 246 + } else { 247 + fprintf(stderr, 248 + "Could not get ACPI table at index %u, %s\n", 249 + i, acpi_format_exception(status)); 250 + continue; 251 + } 252 + } 253 + 254 + table_status = ap_dump_table_buffer(table, instance, address); 255 + free(table); 256 + 257 + if (table_status) { 258 + break; 259 + } 260 + } 261 + 262 + /* Something seriously bad happened if the loop terminates here */ 263 + 264 + return (-1); 265 + } 266 + 267 + /****************************************************************************** 268 + * 269 + * FUNCTION: ap_dump_table_by_address 270 + * 271 + * PARAMETERS: ascii_address - 
Address for requested ACPI table 272 + * 273 + * RETURN: Status 274 + * 275 + * DESCRIPTION: Get an ACPI table via a physical address and dump it. 276 + * 277 + ******************************************************************************/ 278 + 279 + int ap_dump_table_by_address(char *ascii_address) 280 + { 281 + acpi_physical_address address; 282 + struct acpi_table_header *table; 283 + acpi_status status; 284 + int table_status; 285 + u64 long_address; 286 + 287 + /* Convert argument to an integer physical address */ 288 + 289 + status = acpi_ut_strtoul64(ascii_address, 0, &long_address); 290 + if (ACPI_FAILURE(status)) { 291 + fprintf(stderr, "%s: Could not convert to a physical address\n", 292 + ascii_address); 293 + return (-1); 294 + } 295 + 296 + address = (acpi_physical_address) long_address; 297 + status = acpi_os_get_table_by_address(address, &table); 298 + if (ACPI_FAILURE(status)) { 299 + fprintf(stderr, "Could not get table at 0x%8.8X%8.8X, %s\n", 300 + ACPI_FORMAT_UINT64(address), 301 + acpi_format_exception(status)); 302 + return (-1); 303 + } 304 + 305 + table_status = ap_dump_table_buffer(table, 0, address); 306 + free(table); 307 + return (table_status); 308 + } 309 + 310 + /****************************************************************************** 311 + * 312 + * FUNCTION: ap_dump_table_by_name 313 + * 314 + * PARAMETERS: signature - Requested ACPI table signature 315 + * 316 + * RETURN: Status 317 + * 318 + * DESCRIPTION: Get an ACPI table via a signature and dump it. Handles 319 + * multiple tables with the same signature (SSDTs). 
320 + * 321 + ******************************************************************************/ 322 + 323 + int ap_dump_table_by_name(char *signature) 324 + { 325 + char local_signature[ACPI_NAME_SIZE + 1]; 326 + u32 instance; 327 + struct acpi_table_header *table; 328 + acpi_physical_address address; 329 + acpi_status status; 330 + int table_status; 331 + 332 + if (strlen(signature) != ACPI_NAME_SIZE) { 333 + fprintf(stderr, 334 + "Invalid table signature [%s]: must be exactly 4 characters\n", 335 + signature); 336 + return (-1); 337 + } 338 + 339 + /* Table signatures are expected to be uppercase */ 340 + 341 + strcpy(local_signature, signature); 342 + acpi_ut_strupr(local_signature); 343 + 344 + /* To be friendly, handle tables whose signatures do not match the name */ 345 + 346 + if (ACPI_COMPARE_NAME(local_signature, "FADT")) { 347 + strcpy(local_signature, ACPI_SIG_FADT); 348 + } else if (ACPI_COMPARE_NAME(local_signature, "MADT")) { 349 + strcpy(local_signature, ACPI_SIG_MADT); 350 + } 351 + 352 + /* Dump all instances of this signature (to handle multiple SSDTs) */ 353 + 354 + for (instance = 0; instance < AP_MAX_ACPI_FILES; instance++) { 355 + status = acpi_os_get_table_by_name(local_signature, instance, 356 + &table, &address); 357 + if (ACPI_FAILURE(status)) { 358 + 359 + /* AE_LIMIT means that no more tables are available */ 360 + 361 + if (status == AE_LIMIT) { 362 + return (0); 363 + } 364 + 365 + fprintf(stderr, 366 + "Could not get ACPI table with signature [%s], %s\n", 367 + local_signature, acpi_format_exception(status)); 368 + return (-1); 369 + } 370 + 371 + table_status = ap_dump_table_buffer(table, instance, address); 372 + free(table); 373 + 374 + if (table_status) { 375 + break; 376 + } 377 + } 378 + 379 + /* Something seriously bad happened if the loop terminates here */ 380 + 381 + return (-1); 382 + } 383 + 384 + /****************************************************************************** 385 + * 386 + * FUNCTION: ap_dump_table_from_file 
387 + * 388 + * PARAMETERS: pathname - File containing the binary ACPI table 389 + * 390 + * RETURN: Status 391 + * 392 + * DESCRIPTION: Dump an ACPI table from a binary file 393 + * 394 + ******************************************************************************/ 395 + 396 + int ap_dump_table_from_file(char *pathname) 397 + { 398 + struct acpi_table_header *table; 399 + u32 file_size = 0; 400 + int table_status = -1; 401 + 402 + /* Get the entire ACPI table from the file */ 403 + 404 + table = ap_get_table_from_file(pathname, &file_size); 405 + if (!table) { 406 + return (-1); 407 + } 408 + 409 + /* File must be at least as long as the table length */ 410 + 411 + if (table->length > file_size) { 412 + fprintf(stderr, 413 + "Table length (0x%X) is too large for input file (0x%X) %s\n", 414 + table->length, file_size, pathname); 415 + goto exit; 416 + } 417 + 418 + if (gbl_verbose_mode) { 419 + fprintf(stderr, 420 + "Input file: %s contains table [%4.4s], 0x%X (%u) bytes\n", 421 + pathname, table->signature, file_size, file_size); 422 + } 423 + 424 + table_status = ap_dump_table_buffer(table, 0, 0); 425 + 426 + exit: 427 + free(table); 428 + return (table_status); 429 + } 430 + 431 + /****************************************************************************** 432 + * 433 + * FUNCTION: acpi_os* print functions 434 + * 435 + * DESCRIPTION: Used for linkage with ACPICA modules 436 + * 437 + ******************************************************************************/ 438 + 439 + void ACPI_INTERNAL_VAR_XFACE acpi_os_printf(const char *fmt, ...) 440 + { 441 + va_list args; 442 + 443 + va_start(args, fmt); 444 + vfprintf(stdout, fmt, args); 445 + va_end(args); 446 + } 447 + 448 + void acpi_os_vprintf(const char *fmt, va_list args) 449 + { 450 + vfprintf(stdout, fmt, args); 451 + }
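ap_is_valid_checksum() above delegates the real work to acpi_tb_verify_checksum(), which implements the standard ACPI rule: every byte of a table, including the checksum byte itself, must sum to zero modulo 256. A self-contained sketch of that rule (a toy illustration, not the ACPICA implementation):

```c
#include <stddef.h>
#include <stdint.h>

/* Sum all bytes of a table; per the ACPI specification, a valid table
 * sums to 0 modulo 256 (the checksum byte is included in the sum). */
static uint8_t acpi_checksum(const uint8_t *buf, size_t len)
{
	uint8_t sum = 0;

	while (len--)
		sum = (uint8_t)(sum + *buf++);
	return sum;		/* 0 means the checksum is valid */
}

/* Value to store in the checksum byte so the whole table sums to 0 */
static uint8_t acpi_make_checksum(const uint8_t *buf, size_t len,
				  size_t checksum_offset)
{
	uint8_t sum = 0;
	size_t i;

	for (i = 0; i < len; i++)
		if (i != checksum_offset)
			sum = (uint8_t)(sum + buf[i]);
	return (uint8_t)(0 - sum);	/* two's-complement of the sum */
}

/* Build a toy 8-byte "table", seal it, and verify it round-trips */
static int checksum_demo(void)
{
	uint8_t table[8] = { 'D', 'E', 'M', 'O', 8, 0, 0, 0 };

	table[7] = acpi_make_checksum(table, sizeof(table), 7);
	return acpi_checksum(table, sizeof(table)) == 0;
}
```

The RSDP is the one exception handled separately above: it carries its own checksum layout, which is why the code routes it through acpi_tb_validate_rsdp() instead.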
+228
tools/power/acpi/tools/acpidump/apfiles.c
··· 1 + /****************************************************************************** 2 + * 3 + * Module Name: apfiles - File-related functions for acpidump utility 4 + * 5 + *****************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2014, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include "acpidump.h" 45 + #include "acapps.h" 46 + 47 + /****************************************************************************** 48 + * 49 + * FUNCTION: ap_open_output_file 50 + * 51 + * PARAMETERS: pathname - Output filename 52 + * 53 + * RETURN: Open file handle 54 + * 55 + * DESCRIPTION: Open a text output file for acpidump. Checks if file already 56 + * exists. 57 + * 58 + ******************************************************************************/ 59 + 60 + int ap_open_output_file(char *pathname) 61 + { 62 + struct stat stat_info; 63 + FILE *file; 64 + 65 + /* If file exists, prompt for overwrite */ 66 + 67 + if (!stat(pathname, &stat_info)) { 68 + fprintf(stderr, 69 + "Target path already exists, overwrite? [y|n] "); 70 + 71 + if (getchar() != 'y') { 72 + return (-1); 73 + } 74 + } 75 + 76 + /* Point stdout to the file */ 77 + 78 + file = freopen(pathname, "w", stdout); 79 + if (!file) { 80 + perror("Could not open output file"); 81 + return (-1); 82 + } 83 + 84 + /* Save the file and path */ 85 + 86 + gbl_output_file = file; 87 + gbl_output_filename = pathname; 88 + return (0); 89 + } 90 + 91 + /****************************************************************************** 92 + * 93 + * FUNCTION: ap_write_to_binary_file 94 + * 95 + * PARAMETERS: table - ACPI table to be written 96 + * instance - ACPI table instance no. 
to be written 97 + * 98 + * RETURN: Status 99 + * 100 + * DESCRIPTION: Write an ACPI table to a binary file. Builds the output 101 + * filename from the table signature. 102 + * 103 + ******************************************************************************/ 104 + 105 + int ap_write_to_binary_file(struct acpi_table_header *table, u32 instance) 106 + { 107 + char filename[ACPI_NAME_SIZE + 16]; 108 + char instance_str[16]; 109 + FILE *file; 110 + size_t actual; 111 + u32 table_length; 112 + 113 + /* Obtain table length */ 114 + 115 + table_length = ap_get_table_length(table); 116 + 117 + /* Construct lower-case filename from the table local signature */ 118 + 119 + if (ACPI_VALIDATE_RSDP_SIG(table->signature)) { 120 + ACPI_MOVE_NAME(filename, ACPI_RSDP_NAME); 121 + } else { 122 + ACPI_MOVE_NAME(filename, table->signature); 123 + } 124 + filename[0] = (char)ACPI_TOLOWER(filename[0]); 125 + filename[1] = (char)ACPI_TOLOWER(filename[1]); 126 + filename[2] = (char)ACPI_TOLOWER(filename[2]); 127 + filename[3] = (char)ACPI_TOLOWER(filename[3]); 128 + filename[ACPI_NAME_SIZE] = 0; 129 + 130 + /* Handle multiple SSDts - create different filenames for each */ 131 + 132 + if (instance > 0) { 133 + sprintf(instance_str, "%u", instance); 134 + strcat(filename, instance_str); 135 + } 136 + 137 + strcat(filename, ACPI_TABLE_FILE_SUFFIX); 138 + 139 + if (gbl_verbose_mode) { 140 + fprintf(stderr, 141 + "Writing [%4.4s] to binary file: %s 0x%X (%u) bytes\n", 142 + table->signature, filename, table->length, 143 + table->length); 144 + } 145 + 146 + /* Open the file and dump the entire table in binary mode */ 147 + 148 + file = fopen(filename, "wb"); 149 + if (!file) { 150 + perror("Could not open output file"); 151 + return (-1); 152 + } 153 + 154 + actual = fwrite(table, 1, table_length, file); 155 + if (actual != table_length) { 156 + perror("Error writing binary output file"); 157 + fclose(file); 158 + return (-1); 159 + } 160 + 161 + fclose(file); 162 + return (0); 163 + } 
164 + 165 + /****************************************************************************** 166 + * 167 + * FUNCTION: ap_get_table_from_file 168 + * 169 + * PARAMETERS: pathname - File containing the binary ACPI table 170 + * out_file_size - Where the file size is returned 171 + * 172 + * RETURN: Buffer containing the ACPI table. NULL on error. 173 + * 174 + * DESCRIPTION: Open a file and read it entirely into a new buffer 175 + * 176 + ******************************************************************************/ 177 + 178 + struct acpi_table_header *ap_get_table_from_file(char *pathname, 179 + u32 *out_file_size) 180 + { 181 + struct acpi_table_header *buffer = NULL; 182 + FILE *file; 183 + u32 file_size; 184 + size_t actual; 185 + 186 + /* Must use binary mode */ 187 + 188 + file = fopen(pathname, "rb"); 189 + if (!file) { 190 + perror("Could not open input file"); 191 + return (NULL); 192 + } 193 + 194 + /* Need file size to allocate a buffer */ 195 + 196 + file_size = cm_get_file_size(file); 197 + if (file_size == ACPI_UINT32_MAX) { 198 + fprintf(stderr, 199 + "Could not get input file size: %s\n", pathname); 200 + goto cleanup; 201 + } 202 + 203 + /* Allocate a buffer for the entire file */ 204 + 205 + buffer = calloc(1, file_size); 206 + if (!buffer) { 207 + fprintf(stderr, 208 + "Could not allocate file buffer of size: %u\n", 209 + file_size); 210 + goto cleanup; 211 + } 212 + 213 + /* Read the entire file */ 214 + 215 + actual = fread(buffer, 1, file_size, file); 216 + if (actual != file_size) { 217 + fprintf(stderr, "Could not read input file: %s\n", pathname); 218 + free(buffer); 219 + buffer = NULL; 220 + goto cleanup; 221 + } 222 + 223 + *out_file_size = file_size; 224 + 225 + cleanup: 226 + fclose(file); 227 + return (buffer); 228 + }
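ap_get_table_from_file() above sizes the file with ACPICA's cm_get_file_size() before allocating a single buffer and reading the table in one fread(). A portable stdio-only sketch of the same pattern, with fseek/ftell standing in for the ACPICA helper (the function name is hypothetical):

```c
#include <stdio.h>
#include <stdlib.h>

/* Read an entire binary file into one freshly allocated buffer.
 * Returns NULL on any failure; *out_size is set only on success. */
static unsigned char *read_whole_file(const char *pathname, long *out_size)
{
	unsigned char *buffer = NULL;
	FILE *file;
	long size = 0;

	file = fopen(pathname, "rb");	/* binary mode, as in apfiles.c */
	if (!file)
		return NULL;

	/* Determine the file size, then rewind for the read */
	if (fseek(file, 0, SEEK_END) || (size = ftell(file)) < 0 ||
	    fseek(file, 0, SEEK_SET))
		goto cleanup;

	buffer = calloc(1, size ? (size_t)size : 1);
	if (!buffer)
		goto cleanup;

	if (fread(buffer, 1, (size_t)size, file) != (size_t)size) {
		free(buffer);
		buffer = NULL;
		goto cleanup;
	}
	*out_size = size;

cleanup:
	fclose(file);
	return buffer;
}

/* Write a tiny file, read it back whole, and clean up after ourselves */
static int read_whole_file_demo(void)
{
	const char *path = "read_whole_file_demo.bin";
	unsigned char *buf;
	long size = 0;
	FILE *f;
	int ok;

	f = fopen(path, "wb");
	if (!f)
		return 0;
	fputs("DSDT", f);
	fclose(f);

	buf = read_whole_file(path, &size);
	ok = buf && size == 4 && buf[0] == 'D';
	free(buf);
	remove(path);
	return ok;
}
```

The caller in apdump.c then cross-checks table->length against the returned file size, which catches truncated dump files before any out-of-bounds read.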
+351
tools/power/acpi/tools/acpidump/apmain.c
··· 1 + /****************************************************************************** 2 + * 3 + * Module Name: apmain - Main module for the acpidump utility 4 + * 5 + *****************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2014, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #define _DECLARE_GLOBALS 45 + #include "acpidump.h" 46 + #include "acapps.h" 47 + 48 + /* 49 + * acpidump - A portable utility for obtaining system ACPI tables and dumping 50 + * them in an ASCII hex format suitable for binary extraction via acpixtract. 51 + * 52 + * Obtaining the system ACPI tables is an OS-specific operation. 53 + * 54 + * This utility can be ported to any host operating system by providing a 55 + * module containing system-specific versions of these interfaces: 56 + * 57 + * acpi_os_get_table_by_address 58 + * acpi_os_get_table_by_index 59 + * acpi_os_get_table_by_name 60 + * 61 + * See the ACPICA Reference Guide for the exact definitions of these 62 + * interfaces. 
Also, see these ACPICA source code modules for example 63 + * implementations: 64 + * 65 + * source/os_specific/service_layers/oswintbl.c 66 + * source/os_specific/service_layers/oslinuxtbl.c 67 + */ 68 + 69 + /* Local prototypes */ 70 + 71 + static void ap_display_usage(void); 72 + 73 + static int ap_do_options(int argc, char **argv); 74 + 75 + static void ap_insert_action(char *argument, u32 to_be_done); 76 + 77 + /* Table for deferred actions from command line options */ 78 + 79 + struct ap_dump_action action_table[AP_MAX_ACTIONS]; 80 + u32 current_action = 0; 81 + 82 + #define AP_UTILITY_NAME "ACPI Binary Table Dump Utility" 83 + #define AP_SUPPORTED_OPTIONS "?a:bcf:hn:o:r:svxz" 84 + 85 + /****************************************************************************** 86 + * 87 + * FUNCTION: ap_display_usage 88 + * 89 + * DESCRIPTION: Usage message for the acpi_dump utility 90 + * 91 + ******************************************************************************/ 92 + 93 + static void ap_display_usage(void) 94 + { 95 + 96 + ACPI_USAGE_HEADER("acpidump [options]"); 97 + 98 + ACPI_OPTION("-b", "Dump tables to binary files"); 99 + ACPI_OPTION("-c", "Dump customized tables"); 100 + ACPI_OPTION("-h -?", "This help message"); 101 + ACPI_OPTION("-o <File>", "Redirect output to file"); 102 + ACPI_OPTION("-r <Address>", "Dump tables from specified RSDP"); 103 + ACPI_OPTION("-s", "Print table summaries only"); 104 + ACPI_OPTION("-v", "Display version information"); 105 + ACPI_OPTION("-z", "Verbose mode"); 106 + 107 + printf("\nTable Options:\n"); 108 + 109 + ACPI_OPTION("-a <Address>", "Get table via a physical address"); 110 + ACPI_OPTION("-f <BinaryFile>", "Get table via a binary file"); 111 + ACPI_OPTION("-n <Signature>", "Get table via a name/signature"); 112 + ACPI_OPTION("-x", "Do not use but dump XSDT"); 113 + ACPI_OPTION("-x -x", "Do not use or dump XSDT"); 114 + 115 + printf("\n" 116 + "Invocation without parameters dumps all available tables\n" 117 + "Multiple 
mixed instances of -a, -f, and -n are supported\n\n"); 118 + } 119 + 120 + /****************************************************************************** 121 + * 122 + * FUNCTION: ap_insert_action 123 + * 124 + * PARAMETERS: argument - Pointer to the argument for this action 125 + * to_be_done - What to do to process this action 126 + * 127 + * RETURN: None. Exits program if action table becomes full. 128 + * 129 + * DESCRIPTION: Add an action item to the action table 130 + * 131 + ******************************************************************************/ 132 + 133 + static void ap_insert_action(char *argument, u32 to_be_done) 134 + { 135 + 136 + /* Insert action and check for table overflow */ 137 + 138 + action_table[current_action].argument = argument; 139 + action_table[current_action].to_be_done = to_be_done; 140 + 141 + current_action++; 142 + if (current_action >= AP_MAX_ACTIONS) { 143 + fprintf(stderr, "Too many table options (max %u)\n", 144 + AP_MAX_ACTIONS); 145 + exit(-1); 146 + } 147 + } 148 + 149 + /****************************************************************************** 150 + * 151 + * FUNCTION: ap_do_options 152 + * 153 + * PARAMETERS: argc/argv - Standard argc/argv 154 + * 155 + * RETURN: Status 156 + * 157 + * DESCRIPTION: Command line option processing. The main actions for getting 158 + * and dumping tables are deferred via the action table.
159 + * 160 + *****************************************************************************/ 161 + 162 + static int ap_do_options(int argc, char **argv) 163 + { 164 + int j; 165 + acpi_status status; 166 + 167 + /* Command line options */ 168 + 169 + while ((j = acpi_getopt(argc, argv, AP_SUPPORTED_OPTIONS)) != EOF) 170 + switch (j) { 171 + /* 172 + * Global options 173 + */ 174 + case 'b': /* Dump all input tables to binary files */ 175 + 176 + gbl_binary_mode = TRUE; 177 + continue; 178 + 179 + case 'c': /* Dump customized tables */ 180 + 181 + gbl_dump_customized_tables = TRUE; 182 + continue; 183 + 184 + case 'h': 185 + case '?': 186 + 187 + ap_display_usage(); 188 + exit(0); 189 + 190 + case 'o': /* Redirect output to a single file */ 191 + 192 + if (ap_open_output_file(acpi_gbl_optarg)) { 193 + exit(-1); 194 + } 195 + continue; 196 + 197 + case 'r': /* Dump tables from specified RSDP */ 198 + 199 + status = 200 + acpi_ut_strtoul64(acpi_gbl_optarg, 0, 201 + &gbl_rsdp_base); 202 + if (ACPI_FAILURE(status)) { 203 + fprintf(stderr, 204 + "%s: Could not convert to a physical address\n", 205 + acpi_gbl_optarg); 206 + exit(-1); 207 + } 208 + continue; 209 + 210 + case 's': /* Print table summaries only */ 211 + 212 + gbl_summary_mode = TRUE; 213 + continue; 214 + 215 + case 'x': /* Do not use XSDT */ 216 + 217 + if (!acpi_gbl_do_not_use_xsdt) { 218 + acpi_gbl_do_not_use_xsdt = TRUE; 219 + } else { 220 + gbl_do_not_dump_xsdt = TRUE; 221 + } 222 + continue; 223 + 224 + case 'v': /* Revision/version */ 225 + 226 + printf(ACPI_COMMON_SIGNON(AP_UTILITY_NAME)); 227 + exit(0); 228 + 229 + case 'z': /* Verbose mode */ 230 + 231 + gbl_verbose_mode = TRUE; 232 + fprintf(stderr, ACPI_COMMON_SIGNON(AP_UTILITY_NAME)); 233 + continue; 234 + 235 + /* 236 + * Table options 237 + */ 238 + case 'a': /* Get table by physical address */ 239 + 240 + ap_insert_action(acpi_gbl_optarg, 241 + AP_DUMP_TABLE_BY_ADDRESS); 242 + break; 243 + 244 + case 'f': /* Get table from a file */ 245 + 246 
+ ap_insert_action(acpi_gbl_optarg, 247 + AP_DUMP_TABLE_BY_FILE); 248 + break; 249 + 250 + case 'n': /* Get table by input name (signature) */ 251 + 252 + ap_insert_action(acpi_gbl_optarg, 253 + AP_DUMP_TABLE_BY_NAME); 254 + break; 255 + 256 + default: 257 + 258 + ap_display_usage(); 259 + exit(-1); 260 + } 261 + 262 + /* If there are no actions, this means "get/dump all tables" */ 263 + 264 + if (current_action == 0) { 265 + ap_insert_action(NULL, AP_DUMP_ALL_TABLES); 266 + } 267 + 268 + return (0); 269 + } 270 + 271 + /****************************************************************************** 272 + * 273 + * FUNCTION: main 274 + * 275 + * PARAMETERS: argc/argv - Standard argc/argv 276 + * 277 + * RETURN: Status 278 + * 279 + * DESCRIPTION: C main function for acpidump utility 280 + * 281 + ******************************************************************************/ 282 + 283 + int ACPI_SYSTEM_XFACE main(int argc, char *argv[]) 284 + { 285 + int status = 0; 286 + struct ap_dump_action *action; 287 + u32 file_size; 288 + u32 i; 289 + 290 + ACPI_DEBUG_INITIALIZE(); /* For debug version only */ 291 + 292 + /* Process command line options */ 293 + 294 + if (ap_do_options(argc, argv)) { 295 + return (-1); 296 + } 297 + 298 + /* Get/dump ACPI table(s) as requested */ 299 + 300 + for (i = 0; i < current_action; i++) { 301 + action = &action_table[i]; 302 + switch (action->to_be_done) { 303 + case AP_DUMP_ALL_TABLES: 304 + 305 + status = ap_dump_all_tables(); 306 + break; 307 + 308 + case AP_DUMP_TABLE_BY_ADDRESS: 309 + 310 + status = ap_dump_table_by_address(action->argument); 311 + break; 312 + 313 + case AP_DUMP_TABLE_BY_NAME: 314 + 315 + status = ap_dump_table_by_name(action->argument); 316 + break; 317 + 318 + case AP_DUMP_TABLE_BY_FILE: 319 + 320 + status = ap_dump_table_from_file(action->argument); 321 + break; 322 + 323 + default: 324 + 325 + fprintf(stderr, 326 + "Internal error, invalid action: 0x%X\n", 327 + action->to_be_done); 328 + return (-1); 329 + 
} 330 + 331 + if (status) { 332 + return (status); 333 + } 334 + } 335 + 336 + if (gbl_output_file) { 337 + if (gbl_verbose_mode) { 338 + 339 + /* Summary for the output file */ 340 + 341 + file_size = cm_get_file_size(gbl_output_file); 342 + fprintf(stderr, 343 + "Output file %s contains 0x%X (%u) bytes\n\n", 344 + gbl_output_filename, file_size, file_size); 345 + } 346 + 347 + fclose(gbl_output_file); 348 + } 349 + 350 + return (status); 351 + }
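The option loop in apmain.c above does not dump anything while parsing; each -a, -f, or -n request is recorded in action_table[] and main() replays them in order, which is what makes mixed repeated options work. A minimal sketch of that deferred-action pattern (the names here are hypothetical, not the acpidump structures themselves):

```c
#include <stddef.h>

/* What kind of request an option recorded, and its argument */
enum action_kind { ACT_BY_ADDRESS, ACT_BY_FILE, ACT_BY_NAME };

struct action {
	enum action_kind kind;
	const char *argument;	/* points at the argv string */
};

#define MAX_ACTIONS 32

static struct action actions[MAX_ACTIONS];
static size_t n_actions;

/* Record one request; returns 0 on success, -1 when the table is full.
 * The capacity check runs before the write, so the array can never be
 * overrun no matter how many options arrive. */
static int insert_action(enum action_kind kind, const char *argument)
{
	if (n_actions >= MAX_ACTIONS)
		return -1;
	actions[n_actions].kind = kind;
	actions[n_actions].argument = argument;
	n_actions++;
	return 0;
}
```

Replaying from a table also keeps ordering deterministic: `acpidump -n FACP -f saved.dat -a 0x7ff0000` dumps in exactly the order the options were given.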
+22
tools/power/acpi/tools/ec/Makefile
··· 1 + ec_access: ec_access.o 2 + $(ECHO) " LD " $@ 3 + $(QUIET) $(LD) $(CFLAGS) $(LDFLAGS) $< -o $@ 4 + $(QUIET) $(STRIPCMD) $@ 5 + 6 + %.o: %.c 7 + $(ECHO) " CC " $@ 8 + $(QUIET) $(CC) -c $(CFLAGS) -o $@ $< 9 + 10 + all: ec_access 11 + 12 + install: 13 + $(INSTALL) -d $(DESTDIR)${sbindir} 14 + $(INSTALL_PROGRAM) ec_access $(DESTDIR)${sbindir} 15 + 16 + uninstall: 17 + - rm -f $(DESTDIR)${sbindir}/ec_access 18 + 19 + clean: 20 + -rm -f $(OUTPUT)ec_access 21 + 22 + .PHONY: all install uninstall
+238
tools/power/acpi/tools/ec/ec_access.c
new file (EC access debug utility, all lines added):

/*
 * ec_access.c
 *
 * Copyright (C) 2010 SUSE Linux Products GmbH
 * Author:
 *      Thomas Renninger <trenn@suse.de>
 *
 * This work is licensed under the terms of the GNU GPL, version 2.
 */

#include <fcntl.h>
#include <err.h>
#include <stdio.h>
#include <stdlib.h>
#include <libgen.h>
#include <unistd.h>
#include <getopt.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/stat.h>


#define EC_SPACE_SIZE 256
#define SYSFS_PATH "/sys/kernel/debug/ec/ec0/io"

/* TBD/Enhancements:
   - Provide param for accessing different ECs (not supported by kernel yet)
*/

static int read_mode = -1;
static int sleep_time;
static int write_byte_offset = -1;
static int read_byte_offset = -1;
static uint8_t write_value = -1;

void usage(char progname[], int exit_status)
{
	printf("Usage:\n");
	printf("1) %s -r [-s sleep]\n", basename(progname));
	printf("2) %s -b byte_offset\n", basename(progname));
	printf("3) %s -w byte_offset -v value\n\n", basename(progname));

	puts("\t-r [-s sleep]      : Dump EC registers");
	puts("\t                     If sleep is given, sleep x seconds,");
	puts("\t                     re-read EC registers and show changes");
	puts("\t-b offset          : Read value at byte_offset (in hex)");
	puts("\t-w offset -v value : Write value at byte_offset");
	puts("\t-h                 : Print this help\n\n");
	puts("Offsets and values are in hexadecimal number system.");
	puts("The offset and value must be between 0 and 0xff.");
	exit(exit_status);
}

void parse_opts(int argc, char *argv[])
{
	int c;

	while ((c = getopt(argc, argv, "rs:b:w:v:h")) != -1) {

		switch (c) {
		case 'r':
			if (read_mode != -1)
				usage(argv[0], EXIT_FAILURE);
			read_mode = 1;
			break;
		case 's':
			if (read_mode != -1 && read_mode != 1)
				usage(argv[0], EXIT_FAILURE);

			sleep_time = atoi(optarg);
			if (sleep_time <= 0) {
				sleep_time = 0;
				printf("Bad sleep time: %s\n", optarg);
				usage(argv[0], EXIT_FAILURE);
			}
			break;
		case 'b':
			if (read_mode != -1)
				usage(argv[0], EXIT_FAILURE);
			read_mode = 1;
			read_byte_offset = strtoul(optarg, NULL, 16);
			break;
		case 'w':
			if (read_mode != -1)
				usage(argv[0], EXIT_FAILURE);
			read_mode = 0;
			write_byte_offset = strtoul(optarg, NULL, 16);
			break;
		case 'v':
			write_value = strtoul(optarg, NULL, 16);
			break;
		case 'h':
			usage(argv[0], EXIT_SUCCESS);
		default:
			fprintf(stderr, "Unknown option!\n");
			usage(argv[0], EXIT_FAILURE);
		}
	}
	if (read_mode == 0) {
		if (write_byte_offset < 0 ||
		    write_byte_offset >= EC_SPACE_SIZE) {
			fprintf(stderr, "Wrong byte offset 0x%.2x, valid: "
				"[0-0x%.2x]\n",
				write_byte_offset, EC_SPACE_SIZE - 1);
			usage(argv[0], EXIT_FAILURE);
		}
		if (write_value < 0 ||
		    write_value >= 255) {
			fprintf(stderr, "Wrong value 0x%.2x, valid: "
				"[0-0xff]\n", write_value);
			usage(argv[0], EXIT_FAILURE);
		}
	}
	if (read_mode == 1 && read_byte_offset != -1) {
		if (read_byte_offset < -1 ||
		    read_byte_offset >= EC_SPACE_SIZE) {
			fprintf(stderr, "Wrong byte offset 0x%.2x, valid: "
				"[0-0x%.2x]\n",
				read_byte_offset, EC_SPACE_SIZE - 1);
			usage(argv[0], EXIT_FAILURE);
		}
	}
	/* Add additional parameter checks here */
}

void dump_ec(int fd)
{
	char buf[EC_SPACE_SIZE];
	char buf2[EC_SPACE_SIZE];
	int byte_off, bytes_read;

	bytes_read = read(fd, buf, EC_SPACE_SIZE);

	if (bytes_read == -1)
		err(EXIT_FAILURE, "Could not read from %s\n", SYSFS_PATH);

	if (bytes_read != EC_SPACE_SIZE)
		fprintf(stderr, "Could only read %d bytes\n", bytes_read);

	printf("     00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F");
	for (byte_off = 0; byte_off < bytes_read; byte_off++) {
		if ((byte_off % 16) == 0)
			printf("\n%.2X: ", byte_off);
		printf(" %.2x ", (uint8_t)buf[byte_off]);
	}
	printf("\n");

	if (!sleep_time)
		return;

	printf("\n");
	lseek(fd, 0, SEEK_SET);
	sleep(sleep_time);

	bytes_read = read(fd, buf2, EC_SPACE_SIZE);

	if (bytes_read == -1)
		err(EXIT_FAILURE, "Could not read from %s\n", SYSFS_PATH);

	if (bytes_read != EC_SPACE_SIZE)
		fprintf(stderr, "Could only read %d bytes\n", bytes_read);

	printf("     00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F");
	for (byte_off = 0; byte_off < bytes_read; byte_off++) {
		if ((byte_off % 16) == 0)
			printf("\n%.2X: ", byte_off);

		if (buf[byte_off] == buf2[byte_off])
			printf(" %.2x ", (uint8_t)buf2[byte_off]);
		else
			printf("*%.2x ", (uint8_t)buf2[byte_off]);
	}
	printf("\n");
}

void read_ec_val(int fd, int byte_offset)
{
	uint8_t buf;
	int error;

	error = lseek(fd, byte_offset, SEEK_SET);
	if (error != byte_offset)
		err(EXIT_FAILURE, "Cannot set offset to 0x%.2x", byte_offset);

	error = read(fd, &buf, 1);
	if (error != 1)
		err(EXIT_FAILURE, "Could not read byte 0x%.2x from %s\n",
		    byte_offset, SYSFS_PATH);
	printf("0x%.2x\n", buf);
	return;
}

void write_ec_val(int fd, int byte_offset, uint8_t value)
{
	int error;

	error = lseek(fd, byte_offset, SEEK_SET);
	if (error != byte_offset)
		err(EXIT_FAILURE, "Cannot set offset to 0x%.2x", byte_offset);

	error = write(fd, &value, 1);
	if (error != 1)
		err(EXIT_FAILURE, "Cannot write value 0x%.2x to offset 0x%.2x",
		    value, byte_offset);
}

int main(int argc, char *argv[])
{
	int file_mode = O_RDONLY;
	int fd;

	parse_opts(argc, argv);

	if (read_mode == 0)
		file_mode = O_WRONLY;
	else if (read_mode == 1)
		file_mode = O_RDONLY;
	else
		usage(argv[0], EXIT_FAILURE);

	fd = open(SYSFS_PATH, file_mode);
	if (fd == -1)
		err(EXIT_FAILURE, "%s", SYSFS_PATH);

	if (read_mode) {
		if (read_byte_offset == -1)
			dump_ec(fd);
		else if (read_byte_offset < 0 ||
			 read_byte_offset >= EC_SPACE_SIZE)
			usage(argv[0], EXIT_FAILURE);
		else
			read_ec_val(fd, read_byte_offset);
	} else {
		write_ec_val(fd, write_byte_offset, write_value);
	}
	close(fd);

	exit(EXIT_SUCCESS);
}
+9 -3
tools/power/cpupower/Makefile
···
  LIB_MIN= 0

  PACKAGE = cpupower
- PACKAGE_BUGREPORT = cpufreq@vger.kernel.org
+ PACKAGE_BUGREPORT = linux-pm@vger.kernel.org
  LANGUAGES = de fr it cs pt

···
  $(INSTALL_DATA) -D man/cpupower.1 $(DESTDIR)${mandir}/man1/cpupower.1
  $(INSTALL_DATA) -D man/cpupower-frequency-set.1 $(DESTDIR)${mandir}/man1/cpupower-frequency-set.1
  $(INSTALL_DATA) -D man/cpupower-frequency-info.1 $(DESTDIR)${mandir}/man1/cpupower-frequency-info.1
+ $(INSTALL_DATA) -D man/cpupower-idle-set.1 $(DESTDIR)${mandir}/man1/cpupower-idle-set.1
+ $(INSTALL_DATA) -D man/cpupower-idle-info.1 $(DESTDIR)${mandir}/man1/cpupower-idle-info.1
  $(INSTALL_DATA) -D man/cpupower-set.1 $(DESTDIR)${mandir}/man1/cpupower-set.1
  $(INSTALL_DATA) -D man/cpupower-info.1 $(DESTDIR)${mandir}/man1/cpupower-info.1
  $(INSTALL_DATA) -D man/cpupower-monitor.1 $(DESTDIR)${mandir}/man1/cpupower-monitor.1
···
  - rm -f $(DESTDIR)${libdir}/libcpupower.*
  - rm -f $(DESTDIR)${includedir}/cpufreq.h
  - rm -f $(DESTDIR)${bindir}/utils/cpupower
- - rm -f $(DESTDIR)${mandir}/man1/cpufreq-set.1
- - rm -f $(DESTDIR)${mandir}/man1/cpufreq-info.1
+ - rm -f $(DESTDIR)${mandir}/man1/cpupower.1
+ - rm -f $(DESTDIR)${mandir}/man1/cpupower-frequency-set.1
+ - rm -f $(DESTDIR)${mandir}/man1/cpupower-frequency-info.1
+ - rm -f $(DESTDIR)${mandir}/man1/cpupower-set.1
+ - rm -f $(DESTDIR)${mandir}/man1/cpupower-info.1
+ - rm -f $(DESTDIR)${mandir}/man1/cpupower-monitor.1
  - for HLANG in $(LANGUAGES); do \
        rm -f $(DESTDIR)${localedir}/$$HLANG/LC_MESSAGES/cpupower.mo; \
    done;
+11 -13
tools/power/cpupower/README
···
- The cpufrequtils package (homepage:
- http://www.kernel.org/pub/linux/utils/kernel/cpufreq/cpufrequtils.html )
- consists of the following elements:
+ The cpupower package consists of the following elements:

  requirements
  ------------
···
  For both it's not explicitly checked for (yet).


- libcpufreq
+ libcpupower
  ----------

- "libcpufreq" is a library which offers a unified access method for userspace
+ "libcpupower" is a library which offers a unified access method for userspace
  tools and programs to the cpufreq core and drivers in the Linux kernel. This
  allows for code reduction in userspace tools, a clean implementation of
  the interaction to the cpufreq core, and support for both the sysfs and proc
···
  su
  make install

- should suffice on most systems. It builds default libcpufreq,
- cpufreq-set and cpufreq-info files and installs them in /usr/lib and
- /usr/bin, respectively. If you want to set up the paths differently and/or
- want to configure the package to your specific needs, you need to open
- "Makefile" with an editor of your choice and edit the block marked
- CONFIGURATION.
+ should suffice on most systems. It builds libcpupower to put in
+ /usr/lib; cpupower, cpufreq-bench_plot.sh to put in /usr/bin; and
+ cpufreq-bench to put in /usr/sbin. If you want to set up the paths
+ differently and/or want to configure the package to your specific
+ needs, you need to open "Makefile" with an editor of your choice and
+ edit the block marked CONFIGURATION.


  THANKS
  ------
  Many thanks to Mattia Dongili who wrote the autotoolization and
- libtoolization, the manpages and the italian language file for cpufrequtils;
+ libtoolization, the manpages and the italian language file for cpupower;
  to Dave Jones for his feedback and his dump_psb tool; to Bruno Ducrot for his
  powernow-k8-decode and intel_gsic tools as well as the french language file;
  and to various others commenting on the previous (pre-)releases of
- cpufrequtils.
+ cpupower.


  Dominik Brodowski
-1
tools/power/cpupower/ToDo
···
  - Use bitmask functions to parse CPU topology more robust
    (current implementation has issues on AMD)
  - Try to read out boost states and frequencies on Intel
- - Adjust README
  - Somewhere saw the ability to read power consumption of
    RAM from HW on Intel SandyBridge -> another monitor?
  - Add another c1e debug idle monitor
+1 -1
tools/power/cpupower/debug/kernel/cpufreq-test_tsc.c
···
  *  5.) if the third value, "diff_pmtmr", changes between 2. and 4., the
  *      TSC-based delay routine on the Linux kernel does not correctly
  *      handle the cpufreq transition. Please report this to
- *      cpufreq@vger.kernel.org
+ *      linux-pm@vger.kernel.org
  */

  #include <linux/kernel.h>
+3
tools/power/cpupower/man/cpupower-frequency-info.1
···
  \fB\-m\fR \fB\-\-human\fR
  human\-readable output for the \-f, \-w, \-s and \-y parameters.
  .TP
+ \fB\-n\fR \fB\-\-no-rounding\fR
+ Output frequencies and latencies without rounding off values.
+ .TP
  .SH "REMARKS"
  .LP
  By default only values of core zero are displayed. How to display settings of
+8 -2
tools/power/cpupower/man/cpupower-idle-set.1
···
  .SH "OPTIONS"
  .LP
  .TP
- \fB\-d\fR \fB\-\-disable\fR
+ \fB\-d\fR \fB\-\-disable\fR <STATE_NO>
  Disable a specific processor sleep state.
  .TP
- \fB\-e\fR \fB\-\-enable\fR
+ \fB\-e\fR \fB\-\-enable\fR <STATE_NO>
  Enable a specific processor sleep state.
+ .TP
+ \fB\-D\fR \fB\-\-disable-by-latency\fR <LATENCY>
+ Disable all idle states with an equal or higher latency than <LATENCY>.
+ .TP
+ \fB\-E\fR \fB\-\-enable-all\fR
+ Enable all idle states if not enabled already.

  .SH "REMARKS"
  .LP
+1 -1
tools/power/cpupower/man/cpupower-info.1
···
  cpupower\-info \- Shows processor power related kernel or hardware configurations
  .SH SYNOPSIS
  .ft B
- .B cpupower info [ \-b ] [ \-s ] [ \-m ]
+ .B cpupower info [ \-b ]

  .SH DESCRIPTION
  \fBcpupower info \fP shows kernel configurations or processor hardware
+1 -30
tools/power/cpupower/man/cpupower-set.1
···
  cpupower\-set \- Set processor power related kernel or hardware configurations
  .SH SYNOPSIS
  .ft B
- .B cpupower set [ \-b VAL ] [ \-s VAL ] [ \-m VAL ]
+ .B cpupower set [ \-b VAL ]


  .SH DESCRIPTION
···
  Use \fBcpupower -c all info -b\fP to verify.

  This options needs the msr kernel driver (CONFIG_X86_MSR) loaded.
- .RE
- .PP
- \-\-sched\-mc, \-m [ VAL ]
- .RE
- \-\-sched\-smt, \-s [ VAL ]
- .RS 4
- \-\-sched\-mc utilizes cores in one processor package/socket first before
- processes are scheduled to other processor packages/sockets.
-
- \-\-sched\-smt utilizes thread siblings of one processor core first before
- processes are scheduled to other cores.
-
- The impact on power consumption and performance (positiv or negativ) heavily
- depends on processor support for deep sleep states, frequency scaling and
- frequency boost modes and their dependencies between other thread siblings
- and processor cores.
-
- Taken over from kernel documentation:
-
- Adjust the kernel's multi-core scheduler support.
-
- Possible values are:
- .RS 2
- 0 - No power saving load balance (default value)
-
- 1 - Fill one thread/core/package first for long running threads
-
- 2 - Also bias task wakeups to semi-idle cpu package for power
-     savings
  .RE

  .SH "SEE ALSO"
+70 -40
tools/power/cpupower/utils/cpufreq-info.c
···
	}
  }

+ static int no_rounding;
  static void print_speed(unsigned long speed)
  {
	unsigned long tmp;

-	if (speed > 1000000) {
-		tmp = speed % 10000;
-		if (tmp >= 5000)
-			speed += 10000;
-		printf("%u.%02u GHz", ((unsigned int) speed/1000000),
-			((unsigned int) (speed%1000000)/10000));
-	} else if (speed > 100000) {
-		tmp = speed % 1000;
-		if (tmp >= 500)
-			speed += 1000;
-		printf("%u MHz", ((unsigned int) speed / 1000));
-	} else if (speed > 1000) {
-		tmp = speed % 100;
-		if (tmp >= 50)
-			speed += 100;
-		printf("%u.%01u MHz", ((unsigned int) speed/1000),
-			((unsigned int) (speed%1000)/100));
-	} else
-		printf("%lu kHz", speed);
+	if (no_rounding) {
+		if (speed > 1000000)
+			printf("%u.%06u GHz", ((unsigned int) speed/1000000),
+				((unsigned int) speed%1000000));
+		else if (speed > 100000)
+			printf("%u MHz", (unsigned int) speed);
+		else if (speed > 1000)
+			printf("%u.%03u MHz", ((unsigned int) speed/1000),
+				(unsigned int) (speed%1000));
+		else
+			printf("%lu kHz", speed);
+	} else {
+		if (speed > 1000000) {
+			tmp = speed%10000;
+			if (tmp >= 5000)
+				speed += 10000;
+			printf("%u.%02u GHz", ((unsigned int) speed/1000000),
+				((unsigned int) (speed%1000000)/10000));
+		} else if (speed > 100000) {
+			tmp = speed%1000;
+			if (tmp >= 500)
+				speed += 1000;
+			printf("%u MHz", ((unsigned int) speed/1000));
+		} else if (speed > 1000) {
+			tmp = speed%100;
+			if (tmp >= 50)
+				speed += 100;
+			printf("%u.%01u MHz", ((unsigned int) speed/1000),
+				((unsigned int) (speed%1000)/100));
+		}
+	}

	return;
  }
···
  {
	unsigned long tmp;

-	if (duration > 1000000) {
-		tmp = duration % 10000;
-		if (tmp >= 5000)
-			duration += 10000;
-		printf("%u.%02u ms", ((unsigned int) duration/1000000),
-			((unsigned int) (duration%1000000)/10000));
-	} else if (duration > 100000) {
-		tmp = duration % 1000;
-		if (tmp >= 500)
-			duration += 1000;
-		printf("%u us", ((unsigned int) duration / 1000));
-	} else if (duration > 1000) {
-		tmp = duration % 100;
-		if (tmp >= 50)
-			duration += 100;
-		printf("%u.%01u us", ((unsigned int) duration/1000),
-			((unsigned int) (duration%1000)/100));
-	} else
-		printf("%lu ns", duration);
-
+	if (no_rounding) {
+		if (duration > 1000000)
+			printf("%u.%06u ms", ((unsigned int) duration/1000000),
+				((unsigned int) duration%1000000));
+		else if (duration > 100000)
+			printf("%u us", ((unsigned int) duration/1000));
+		else if (duration > 1000)
+			printf("%u.%03u us", ((unsigned int) duration/1000),
+				((unsigned int) duration%1000));
+		else
+			printf("%lu ns", duration);
+	} else {
+		if (duration > 1000000) {
+			tmp = duration%10000;
+			if (tmp >= 5000)
+				duration += 10000;
+			printf("%u.%02u ms", ((unsigned int) duration/1000000),
+				((unsigned int) (duration%1000000)/10000));
+		} else if (duration > 100000) {
+			tmp = duration%1000;
+			if (tmp >= 500)
+				duration += 1000;
+			printf("%u us", ((unsigned int) duration / 1000));
+		} else if (duration > 1000) {
+			tmp = duration%100;
+			if (tmp >= 50)
+				duration += 100;
+			printf("%u.%01u us", ((unsigned int) duration/1000),
+				((unsigned int) (duration%1000)/100));
+		} else
+			printf("%lu ns", duration);
+	}
	return;
  }
···
	{ .name = "latency",	.has_arg = no_argument,		.flag = NULL,	.val = 'y'},
	{ .name = "proc",	.has_arg = no_argument,		.flag = NULL,	.val = 'o'},
	{ .name = "human",	.has_arg = no_argument,		.flag = NULL,	.val = 'm'},
+	{ .name = "no-rounding", .has_arg = no_argument,	.flag = NULL,	.val = 'n'},
	{ },
  };
···
	int output_param = 0;

	do {
-		ret = getopt_long(argc, argv, "oefwldpgrasmyb", info_opts, NULL);
+		ret = getopt_long(argc, argv, "oefwldpgrasmybn", info_opts,
+				  NULL);
		switch (ret) {
		case '?':
			output_param = '?';
···
				break;
			}
			human = 1;
+			break;
+		case 'n':
+			no_rounding = 1;
			break;
		default:
			fprintf(stderr, "invalid or unknown argument\n");
+69 -6
tools/power/cpupower/utils/cpuidle-set.c
···
  #include "helpers/sysfs.h"

  static struct option info_opts[] = {
-	{ .name = "disable",	.has_arg = required_argument,	.flag = NULL,	.val = 'd'},
-	{ .name = "enable",	.has_arg = required_argument,	.flag = NULL,	.val = 'e'},
+	{ .name = "disable",
+	  .has_arg = required_argument,	.flag = NULL,	.val = 'd'},
+	{ .name = "enable",
+	  .has_arg = required_argument,	.flag = NULL,	.val = 'e'},
+	{ .name = "disable-by-latency",
+	  .has_arg = required_argument,	.flag = NULL,	.val = 'D'},
+	{ .name = "enable-all",
+	  .has_arg = no_argument,	.flag = NULL,	.val = 'E'},
	{ },
  };
···
  {
	extern char *optarg;
	extern int optind, opterr, optopt;
-	int ret = 0, cont = 1, param = 0, idlestate = 0;
-	unsigned int cpu = 0;
+	int ret = 0, cont = 1, param = 0, disabled;
+	unsigned long long latency = 0, state_latency;
+	unsigned int cpu = 0, idlestate = 0, idlestates = 0;
+	char *endptr;

	do {
-		ret = getopt_long(argc, argv, "d:e:", info_opts, NULL);
+		ret = getopt_long(argc, argv, "d:e:ED:", info_opts, NULL);
		if (ret == -1)
			break;
		switch (ret) {
···
			}
			param = ret;
			idlestate = atoi(optarg);
+			break;
+		case 'D':
+			if (param) {
+				param = -1;
+				cont = 0;
+				break;
+			}
+			param = ret;
+			latency = strtoull(optarg, &endptr, 10);
+			if (*endptr != '\0') {
+				printf(_("Bad latency value: %s\n"), optarg);
+				exit(EXIT_FAILURE);
+			}
+			break;
+		case 'E':
+			if (param) {
+				param = -1;
+				cont = 0;
+				break;
+			}
+			param = ret;
			break;
		case -1:
			cont = 0;
···
		if (!bitmask_isbitset(cpus_chosen, cpu))
			continue;

-		switch (param) {
+		if (sysfs_is_cpu_online(cpu) != 1)
+			continue;

+		idlestates = sysfs_get_idlestate_count(cpu);
+		if (idlestates <= 0)
+			continue;
+
+		switch (param) {
		case 'd':
			ret = sysfs_idlestate_disable(cpu, idlestate, 1);
			if (ret == 0)
···
			else
				printf(_("Idlestate %u not enabled on CPU %u\n"),
				       idlestate, cpu);
+			break;
+		case 'D':
+			for (idlestate = 0; idlestate < idlestates; idlestate++) {
+				disabled = sysfs_is_idlestate_disabled
+					(cpu, idlestate);
+				state_latency = sysfs_get_idlestate_latency
+					(cpu, idlestate);
+				printf("CPU: %u - idlestate %u - state_latency: %llu - latency: %llu\n",
+				       cpu, idlestate, state_latency, latency);
+				if (disabled == 1 || latency > state_latency)
+					continue;
+				ret = sysfs_idlestate_disable
+					(cpu, idlestate, 1);
+				if (ret == 0)
+					printf(_("Idlestate %u disabled on CPU %u\n"), idlestate, cpu);
+			}
			break;
+		case 'E':
+			for (idlestate = 0; idlestate < idlestates; idlestate++) {
+				disabled = sysfs_is_idlestate_disabled
+					(cpu, idlestate);
+				if (disabled == 1) {
+					ret = sysfs_idlestate_disable
+						(cpu, idlestate, 0);
+					if (ret == 0)
+						printf(_("Idlestate %u enabled on CPU %u\n"), idlestate, cpu);
+				}
+			}
+			break;
		default:
			/* Not reachable with proper args checking */
+5 -37
tools/power/cpupower/utils/cpupower-info.c
···

  static struct option set_opts[] = {
	{ .name = "perf-bias", .has_arg = optional_argument, .flag = NULL, .val = 'b'},
-	{ .name = "sched-mc", .has_arg = optional_argument, .flag = NULL, .val = 'm'},
-	{ .name = "sched-smt", .has_arg = optional_argument, .flag = NULL, .val = 's'},
	{ },
  };
···

	union {
		struct {
-			int sched_mc:1;
-			int sched_smt:1;
			int perf_bias:1;
		};
		int params;
···
	textdomain(PACKAGE);

	/* parameter parsing */
-	while ((ret = getopt_long(argc, argv, "msb", set_opts, NULL)) != -1) {
+	while ((ret = getopt_long(argc, argv, "b", set_opts, NULL)) != -1) {
		switch (ret) {
		case 'b':
			if (params.perf_bias)
				print_wrong_arg_exit();
			params.perf_bias = 1;
			break;
-		case 'm':
-			if (params.sched_mc)
-				print_wrong_arg_exit();
-			params.sched_mc = 1;
-			break;
-		case 's':
-			if (params.sched_smt)
-				print_wrong_arg_exit();
-			params.sched_smt = 1;
-			break;
		default:
			print_wrong_arg_exit();
···
	/* Default is: show output of CPU 0 only */
	if (bitmask_isallclear(cpus_chosen))
		bitmask_setbit(cpus_chosen, 0);
-
-	if (params.sched_mc) {
-		ret = sysfs_get_sched("mc");
-		printf(_("System's multi core scheduler setting: "));
-		if (ret < 0)
-			/* if sysfs file is missing it's: errno == ENOENT */
-			printf(_("not supported\n"));
-		else
-			printf("%d\n", ret);
-	}
-	if (params.sched_smt) {
-		ret = sysfs_get_sched("smt");
-		printf(_("System's thread sibling scheduler setting: "));
-		if (ret < 0)
-			/* if sysfs file is missing it's: errno == ENOENT */
-			printf(_("not supported\n"));
-		else
-			printf("%d\n", ret);
-	}

	/* Add more per cpu options here */
	if (!params.perf_bias)
···
		if (params.perf_bias) {
			ret = msr_intel_get_perf_bias(cpu);
			if (ret < 0) {
-				printf(_("Could not read perf-bias value\n"));
-				break;
+				fprintf(stderr,
+					_("Could not read perf-bias value[%d]\n"), ret);
+				exit(EXIT_FAILURE);
			} else
				printf(_("perf-bias: %d\n"), ret);
		}
	}
-	return ret;
+	return 0;
  }
+2 -41
tools/power/cpupower/utils/cpupower-set.c
··· 19 19 20 20 static struct option set_opts[] = { 21 21 { .name = "perf-bias", .has_arg = required_argument, .flag = NULL, .val = 'b'}, 22 - { .name = "sched-mc", .has_arg = required_argument, .flag = NULL, .val = 'm'}, 23 - { .name = "sched-smt", .has_arg = required_argument, .flag = NULL, .val = 's'}, 24 22 { }, 25 23 }; 26 24 ··· 36 38 37 39 union { 38 40 struct { 39 - int sched_mc:1; 40 - int sched_smt:1; 41 41 int perf_bias:1; 42 42 }; 43 43 int params; 44 44 } params; 45 - int sched_mc = 0, sched_smt = 0, perf_bias = 0; 45 + int perf_bias = 0; 46 46 int ret = 0; 47 47 48 48 setlocale(LC_ALL, ""); ··· 48 52 49 53 params.params = 0; 50 54 /* parameter parsing */ 51 - while ((ret = getopt_long(argc, argv, "m:s:b:", 55 + while ((ret = getopt_long(argc, argv, "b:", 52 56 set_opts, NULL)) != -1) { 53 57 switch (ret) { 54 58 case 'b': ··· 62 66 } 63 67 params.perf_bias = 1; 64 68 break; 65 - case 'm': 66 - if (params.sched_mc) 67 - print_wrong_arg_exit(); 68 - sched_mc = atoi(optarg); 69 - if (sched_mc < 0 || sched_mc > 2) { 70 - printf(_("--sched-mc param out " 71 - "of range [0-%d]\n"), 2); 72 - print_wrong_arg_exit(); 73 - } 74 - params.sched_mc = 1; 75 - break; 76 - case 's': 77 - if (params.sched_smt) 78 - print_wrong_arg_exit(); 79 - sched_smt = atoi(optarg); 80 - if (sched_smt < 0 || sched_smt > 2) { 81 - printf(_("--sched-smt param out " 82 - "of range [0-%d]\n"), 2); 83 - print_wrong_arg_exit(); 84 - } 85 - params.sched_smt = 1; 86 - break; 87 69 default: 88 70 print_wrong_arg_exit(); 89 71 } ··· 69 95 70 96 if (!params.params) 71 97 print_wrong_arg_exit(); 72 - 73 - if (params.sched_mc) { 74 - ret = sysfs_set_sched("mc", sched_mc); 75 - if (ret) 76 - fprintf(stderr, _("Error setting sched-mc %s\n"), 77 - (ret == -ENODEV) ? "not supported" : ""); 78 - } 79 - if (params.sched_smt) { 80 - ret = sysfs_set_sched("smt", sched_smt); 81 - if (ret) 82 - fprintf(stderr, _("Error setting sched-smt %s\n"), 83 - (ret == -ENODEV) ? 
"not supported" : ""); 84 - } 85 98 86 99 /* Default is: set all CPUs */ 87 100 if (bitmask_isallclear(cpus_chosen))
+14
tools/power/cpupower/utils/cpupower.c
···
  #include <string.h>
  #include <unistd.h>
  #include <errno.h>
+ #include <sys/types.h>
+ #include <sys/stat.h>
+ #include <sys/utsname.h>

  #include "builtin.h"
  #include "helpers/helpers.h"
···
  {
	const char *cmd;
	unsigned int i, ret;
+	struct stat statbuf;
+	struct utsname uts;

	cpus_chosen = bitmask_alloc(sysconf(_SC_NPROCESSORS_CONF));
···

	get_cpu_info(0, &cpupower_cpu_info);
	run_as_root = !getuid();
+	if (run_as_root) {
+		ret = uname(&uts);
+		if (!ret && !strcmp(uts.machine, "x86_64") &&
+		    stat("/dev/cpu/0/msr", &statbuf) != 0) {
+			if (system("modprobe msr") == -1)
+				fprintf(stderr, _("MSR access not available.\n"));
+		}
+	}
+

	for (i = 0; i < ARRAY_SIZE(commands); i++) {
		struct cmd_struct *p = commands + i;
+2 -2
tools/power/x86/turbostat/turbostat.c
···
	if (get_msr(0, MSR_IA32_TEMPERATURE_TARGET, &msr))
		goto guess;

-	target_c_local = (msr >> 16) & 0x7F;
+	target_c_local = (msr >> 16) & 0xFF;

	if (verbose)
		fprintf(stderr, "cpu%d: MSR_IA32_TEMPERATURE_TARGET: 0x%08llx (%d C)\n",
			cpu, msr, target_c_local);

-	if (target_c_local < 85 || target_c_local > 127)
+	if (!target_c_local)
		goto guess;

	tcc_activation_temp = target_c_local;