Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm+acpi-3.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI and power management updates from Rafael Wysocki:
"As far as the number of commits goes, the top spot belongs to ACPI
this time with cpufreq in the second position and a handful of PM
core, PNP and cpuidle updates. They are fixes and cleanups mostly, as
usual, with a couple of new features in the mix.

The most visible change is probably that we will create struct
acpi_device objects (visible in sysfs) for all devices represented in
the ACPI tables regardless of their status and there will be a new
sysfs attribute under those objects allowing user space to check that
status via _STA.
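A quick way to see the effect from user space is to walk the acpi_device objects in sysfs. This is only a sketch: the attribute name ("status") and its location under /sys/bus/acpi/devices are assumptions based on the description above, not confirmed by this log.

```shell
# Dump the _STA-backed status attribute of every ACPI device object
# that exposes one (path and attribute name assumed, see above).
found=0
for dev in /sys/bus/acpi/devices/*; do
    [ -r "$dev/status" ] || continue
    # _STA is a bitmask: bit 0 = present, bit 3 = functioning
    sta=$(cat "$dev/status")
    printf '%s: _STA=%s\n' "${dev##*/}" "$sta"
    found=$((found + 1))
done
```

On a machine without ACPI (or an older kernel) the loop simply matches nothing and prints nothing.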

Consequently, ACPI device eject or generally hot-removal will not
delete those objects, unless the table containing the corresponding
namespace nodes is unloaded, which is extremely rare. Also ACPI
container hotplug will be handled quite a bit differently and cpufreq
will support CPU boost ("turbo") generically and not only in the
acpi-cpufreq driver.

Specifics:

- ACPI core changes to make it create a struct acpi_device object for
every device represented in the ACPI tables during all namespace
scans regardless of the current status of that device. In
accordance with this, ACPI hotplug operations will not delete those
objects, unless the underlying ACPI tables go away.

- On top of the above, new sysfs attribute for ACPI device objects
allowing user space to check device status by triggering the
execution of _STA for its ACPI object. From Srinivas Pandruvada.

- ACPI core hotplug changes reducing code duplication, integrating
the PCI root hotplug with the core and reworking container hotplug.

- ACPI core simplifications making it use ACPI_COMPANION() in the
code "glueing" ACPI device objects to "physical" devices.

- ACPICA update to upstream version 20131218. This adds support for
the DBG2 and PCCT tables to ACPICA, fixes some bugs and improves
debug facilities. From Bob Moore, Lv Zheng and Betty Dall.

- Init code change to carry out the early ACPI initialization
earlier. That should allow us to use ACPI during the timekeeping
initialization and possibly to simplify the EFI initialization too.
From Chun-Yi Lee.

- Cleanups of the inclusions of ACPI headers in many places all over
from Lv Zheng and Rashika Kheria (work in progress).

- New helper for ACPI _DSM execution and rework of the code in
drivers that uses _DSM to execute it via the new helper. From
Jiang Liu.

- New Win8 OSI blacklist entries from Takashi Iwai.

- Assorted ACPI fixes and cleanups from Al Stone, Emil Goode, Hanjun
Guo, Lan Tianyu, Masanari Iida, Oliver Neukum, Prarit Bhargava,
Rashika Kheria, Tang Chen, Zhang Rui.

- intel_pstate driver updates, including proper Baytrail support,
from Dirk Brandewie and intel_pstate documentation from Ramkumar
Ramachandra.

- Generic CPU boost ("turbo") support for cpufreq from Lukasz
Majewski.

- powernow-k6 cpufreq driver fixes from Mikulas Patocka.

- cpufreq core fixes and cleanups from Viresh Kumar, Jane Li, Mark
Brown.

- Assorted cpufreq drivers fixes and cleanups from Anson Huang, John
Tobias, Paul Bolle, Paul Walmsley, Sachin Kamat, Shawn Guo, Viresh
Kumar.

- cpuidle cleanups from Bartlomiej Zolnierkiewicz.

- Support for hibernation APM events from Bin Shi.

- Hibernation fix to avoid bringing up nonboot CPUs with ACPI EC
disabled during thaw transitions from Bjørn Mork.

- PM core fixes and cleanups from Ben Dooks, Leonardo Potenza, Ulf
Hansson.

- PNP subsystem fixes and cleanups from Dmitry Torokhov, Levente
Kurusa, Rashika Kheria.

- New tool for profiling system suspend from Todd E Brandt and a
cpupower tool cleanup from One Thousand Gnomes"

* tag 'pm+acpi-3.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (153 commits)
thermal: exynos: boost: Automatic enable/disable of BOOST feature (at Exynos4412)
cpufreq: exynos4x12: Change L0 driver data to CPUFREQ_BOOST_FREQ
Documentation: cpufreq / boost: Update BOOST documentation
cpufreq: exynos: Extend Exynos cpufreq driver to support boost
cpufreq / boost: Kconfig: Support for software-managed BOOST
acpi-cpufreq: Adjust the code to use the common boost attribute
cpufreq: Add boost frequency support in core
intel_pstate: Add trace point to report internal state.
cpufreq: introduce cpufreq_generic_get() routine
ARM: SA1100: Create dummy clk_get_rate() to avoid build failures
cpufreq: stats: create sysfs entries when cpufreq_stats is a module
cpufreq: stats: free table and remove sysfs entry in a single routine
cpufreq: stats: remove hotplug notifiers
cpufreq: stats: handle cpufreq_unregister_driver() and suspend/resume properly
cpufreq: speedstep: remove unused speedstep_get_state
platform: introduce OF style 'modalias' support for platform bus
PM / tools: new tool for suspend/resume performance optimization
ACPI: fix module autoloading for ACPI enumerated devices
ACPI: add module autoloading support for ACPI enumerated devices
ACPI: fix create_modalias() return value handling
...

+4649 -2749
+24
Documentation/ABI/testing/sysfs-devices-system-cpu
···
 		note of cpu#.
 
 		crash_notes_size: size of the note of cpu#.
+
+
+What:		/sys/devices/system/cpu/intel_pstate/max_perf_pct
+		/sys/devices/system/cpu/intel_pstate/min_perf_pct
+		/sys/devices/system/cpu/intel_pstate/no_turbo
+Date:		February 2013
+Contact:	linux-pm@vger.kernel.org
+Description:	Parameters for the Intel P-state driver
+
+		Logic for selecting the current P-state in Intel
+		Sandybridge+ processors. The three knobs control
+		limits for the P-state that will be requested by the
+		driver.
+
+		max_perf_pct: limits the maximum P state that will be requested by
+		the driver stated as a percentage of the available performance.
+
+		min_perf_pct: limits the minimum P state that will be requested by
+		the driver stated as a percentage of the available performance.
+
+		no_turbo: limits the driver to selecting P states below the turbo
+		frequency range.
+
+		More details can be found in Documentation/cpu-freq/intel-pstate.txt
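For quick inspection, the knobs documented in the ABI entry above can be read (and, as root, written) from a shell. The guard makes this a no-op on systems where intel_pstate is not the active scaling driver; the write is shown commented out.

```shell
# Read the intel_pstate limits documented in sysfs-devices-system-cpu.
P=/sys/devices/system/cpu/intel_pstate
if [ -d "$P" ]; then
    # Current limits, as percentages of available performance
    for f in max_perf_pct min_perf_pct no_turbo; do
        printf '%s = %s\n' "$f" "$(cat "$P/$f")"
    done
    # e.g. (as root) forbid turbo-range P states:
    # echo 1 > "$P/no_turbo"
else
    echo "intel_pstate driver not active on this system"
fi
```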
+1 -8
Documentation/acpi/namespace.txt
···
 named object's type in the second column). In that case the object's
 directory in sysfs will contain the 'path' attribute whose value is
 the full path to the node from the namespace root.
-struct acpi_device objects are created for the ACPI namespace nodes
-whose _STA control methods return PRESENT or FUNCTIONING. The power
-resource nodes or nodes without _STA are assumed to be both PRESENT
-and FUNCTIONING.
 F:
 The struct acpi_device object is created for a fixed hardware
 feature (as indicated by the fixed feature flag's name in the second
···
 | +-------------+-------+----------------+
 | |
 | | +- - - - - - - +- - - - - - +- - - - - - - -+
-| +-| * PNP0C0D:00 | \_SB_.LID0 | acpi:PNP0C0D: |
+| +-| PNP0C0D:00 | \_SB_.LID0 | acpi:PNP0C0D: |
 | | +- - - - - - - +- - - - - - +- - - - - - - -+
 | |
 | | +------------+------------+-----------------------+
···
 attribute (as described earlier in this document).
 NOTE: N/A indicates the device object does not have the 'path' or the
 'modalias' attribute.
-NOTE: The PNP0C0D device listed above is highlighted (marked by "*")
-to indicate it will be created only when its _STA methods return
-PRESENT or FUNCTIONING.
+13 -13
Documentation/cpu-freq/boost.txt
···
 Some CPUs support a functionality to raise the operating frequency of
 some cores in a multi-core package if certain conditions apply, mostly
 if the whole chip is not fully utilized and below it's intended thermal
-budget. This is done without operating system control by a combination
-of hardware and firmware.
+budget. The decision about boost disable/enable is made either at hardware
+(e.g. x86) or software (e.g ARM).
 On Intel CPUs this is called "Turbo Boost", AMD calls it "Turbo-Core",
 in technical documentation "Core performance boost". In Linux we use
 the term "boost" for convenience.
···
 User controlled switch
 ----------------------
 
-To allow the user to toggle the boosting functionality, the acpi-cpufreq
-driver exports a sysfs knob to disable it. There is a file:
+To allow the user to toggle the boosting functionality, the cpufreq core
+driver exports a sysfs knob to enable or disable it. There is a file:
 /sys/devices/system/cpu/cpufreq/boost
 which can either read "0" (boosting disabled) or "1" (boosting enabled).
-Reading the file is always supported, even if the processor does not
-support boosting. In this case the file will be read-only and always
-reads as "0". Explicitly changing the permissions and writing to that
-file anyway will return EINVAL.
+The file is exported only when cpufreq driver supports boosting.
+Explicitly changing the permissions and writing to that file anyway will
+return EINVAL.
 
 On supported CPUs one can write either a "0" or a "1" into this file.
 This will either disable the boost functionality on all cores in the
-whole system (0) or will allow the hardware to boost at will (1).
+whole system (0) or will allow the software or hardware to boost at will
+(1).
 
 Writing a "1" does not explicitly boost the system, but just allows the
-CPU (and the firmware) to boost at their discretion. Some implementations
-take external factors like the chip's temperature into account, so
-boosting once does not necessarily mean that it will occur every time
-even using the exact same software setup.
+CPU to boost at their discretion. Some implementations take external
+factors like the chip's temperature into account, so boosting once does
+not necessarily mean that it will occur every time even using the exact
+same software setup.
 
 
 AMD legacy cpb switch
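A minimal shell session exercising the knob described in the boost.txt changes above (the path is taken from the documentation; the write requires root and is shown commented out):

```shell
# Toggle/inspect the generic cpufreq boost switch.
B=/sys/devices/system/cpu/cpufreq/boost
if [ -f "$B" ]; then
    # "1" = boosting allowed, "0" = boosting disabled system-wide
    cat "$B"
    # echo 0 > "$B"   # as root: disallow boost on all cores
else
    # Per the updated documentation, the file is only exported when
    # the active cpufreq driver supports boosting.
    echo "boost not supported by the active cpufreq driver"
fi
```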
+40
Documentation/cpu-freq/intel-pstate.txt
···
+Intel P-state driver
+--------------------
+
+This driver implements a scaling driver with an internal governor for
+Intel Core processors. The driver follows the same model as the
+Transmeta scaling driver (longrun.c) and implements the setpolicy()
+instead of target(). Scaling drivers that implement setpolicy() are
+assumed to implement internal governors by the cpufreq core. All the
+logic for selecting the current P state is contained within the
+driver; no external governor is used by the cpufreq core.
+
+Intel SandyBridge+ processors are supported.
+
+New sysfs files for controlling P state selection have been added to
+/sys/devices/system/cpu/intel_pstate/
+
+max_perf_pct: limits the maximum P state that will be requested by
+the driver stated as a percentage of the available performance.
+
+min_perf_pct: limits the minimum P state that will be requested by
+the driver stated as a percentage of the available performance.
+
+no_turbo: limits the driver to selecting P states below the turbo
+frequency range.
+
+For contemporary Intel processors, the frequency is controlled by the
+processor itself and the P-states exposed to software are related to
+performance levels. The idea that frequency can be set to a single
+frequency is fiction for Intel Core processors. Even if the scaling
+driver selects a single P state the actual frequency the processor
+will run at is selected by the processor itself.
+
+New debugfs files have also been added to /sys/kernel/debug/pstate_snb/
+
+deadband
+d_gain_pct
+i_gain_pct
+p_gain_pct
+sample_rate_ms
+setpoint
+3
Documentation/kernel-parameters.txt
···
 			no: ACPI OperationRegions are not marked as reserved,
 			no further checks are performed.
 
+	acpi_no_memhotplug	[ACPI] Disable memory hotplug. Useful for kdump
+			kernels.
+
 	add_efi_memmap		[EFI; X86] Include EFI memory map in
 			kernel's map of available physical RAM.
 
+2
MAINTAINERS
···
 F:	drivers/pci/*acpi*
 F:	drivers/pci/*/*acpi*
 F:	drivers/pci/*/*/*acpi*
+F:	tools/power/acpi
 
 ACPI COMPONENT ARCHITECTURE (ACPICA)
 M:	Robert Moore <robert.moore@intel.com>
···
 S:	Supported
 F:	drivers/acpi/acpica/
 F:	include/acpi/
+F:	tools/power/acpi/
 
 ACPI FAN DRIVER
 M:	Zhang Rui <rui.zhang@intel.com>
+5
arch/arm/mach-exynos/common.c
···
 	platform_device_register(&exynos_cpuidle);
 }
 
+void __init exynos_cpufreq_init(void)
+{
+	platform_device_register_simple("exynos-cpufreq", -1, NULL, 0);
+}
+
 void __init exynos_init_late(void)
 {
 	if (of_machine_is_compatible("samsung,exynos5440"))
+1
arch/arm/mach-exynos/common.h
···
 void exynos4_restart(enum reboot_mode mode, const char *cmd);
 void exynos5_restart(enum reboot_mode mode, const char *cmd);
 void exynos_cpuidle_init(void);
+void exynos_cpufreq_init(void);
 void exynos_init_late(void);
 
 void exynos_firmware_init(void);
+2 -6
arch/arm/mach-exynos/cpuidle.c
···
 {
 	int new_index = index;
 
-	/* This mode only can be entered when other core's are offline */
-	if (num_online_cpus() > 1)
+	/* AFTR can only be entered when cores other than CPU0 are offline */
+	if (num_online_cpus() > 1 || dev->cpu != 0)
 		new_index = drv->safe_state_index;
 
 	if (new_index == 0)
···
 	for_each_online_cpu(cpu_id) {
 		device = &per_cpu(exynos4_cpuidle_device, cpu_id);
 		device->cpu = cpu_id;
-
-		/* Support IDLE only */
-		if (cpu_id != 0)
-			device->state_count = 1;
 
 		ret = cpuidle_register_device(device);
 		if (ret) {
+1
arch/arm/mach-exynos/mach-exynos4-dt.c
···
 static void __init exynos4_dt_machine_init(void)
 {
 	exynos_cpuidle_init();
+	exynos_cpufreq_init();
 
 	of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
 }
+1
arch/arm/mach-exynos/mach-exynos5-dt.c
···
 	}
 
 	exynos_cpuidle_init();
+	exynos_cpufreq_init();
 
 	of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
 }
+7
arch/arm/mach-sa1100/clock.c
···
 
 static DEFINE_SPINLOCK(clocks_lock);
 
+/* Dummy clk routine to build generic kernel parts that may be using them */
+unsigned long clk_get_rate(struct clk *clk)
+{
+	return 0;
+}
+EXPORT_SYMBOL(clk_get_rate);
+
 static void clk_gpio27_enable(struct clk *clk)
 {
 	/*
+1 -2
arch/ia64/hp/common/aml_nfw.c
···
  */
 
 #include <linux/module.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 #include <asm/sal.h>
 
 MODULE_AUTHOR("Bjorn Helgaas <bjorn.helgaas@hp.com>");
-1
arch/ia64/kernel/acpi.c
···
 
 #define PREFIX "ACPI: "
 
-u32 acpi_rsdt_forced;
 unsigned int acpi_cpei_override;
 unsigned int acpi_cpei_phys_cpuid;
 
+3 -56
arch/powerpc/platforms/pseries/processor_idle.c
···
 #define MAX_IDLE_STATE_COUNT	2
 
 static int max_idle_state = MAX_IDLE_STATE_COUNT - 1;
-static struct cpuidle_device __percpu *pseries_cpuidle_devices;
 static struct cpuidle_state *cpuidle_state_table;
 
 static inline void idle_loop_prolog(unsigned long *in_purr)
···
 {
 	int hotcpu = (unsigned long)hcpu;
 	struct cpuidle_device *dev =
-			per_cpu_ptr(pseries_cpuidle_devices, hotcpu);
+			per_cpu_ptr(cpuidle_devices, hotcpu);
 
 	if (dev && cpuidle_get_driver()) {
 		switch (action) {
···
 	return 0;
 }
 
-/* pseries_idle_devices_uninit(void)
- * unregister cpuidle devices and de-allocate memory
- */
-static void pseries_idle_devices_uninit(void)
-{
-	int i;
-	struct cpuidle_device *dev;
-
-	for_each_possible_cpu(i) {
-		dev = per_cpu_ptr(pseries_cpuidle_devices, i);
-		cpuidle_unregister_device(dev);
-	}
-
-	free_percpu(pseries_cpuidle_devices);
-	return;
-}
-
-/* pseries_idle_devices_init()
- * allocate, initialize and register cpuidle device
- */
-static int pseries_idle_devices_init(void)
-{
-	int i;
-	struct cpuidle_driver *drv = &pseries_idle_driver;
-	struct cpuidle_device *dev;
-
-	pseries_cpuidle_devices = alloc_percpu(struct cpuidle_device);
-	if (pseries_cpuidle_devices == NULL)
-		return -ENOMEM;
-
-	for_each_possible_cpu(i) {
-		dev = per_cpu_ptr(pseries_cpuidle_devices, i);
-		dev->state_count = drv->state_count;
-		dev->cpu = i;
-		if (cpuidle_register_device(dev)) {
-			printk(KERN_DEBUG \
-				"cpuidle_register_device %d failed!\n", i);
-			return -EIO;
-		}
-	}
-
-	return 0;
-}
-
 /*
  * pseries_idle_probe()
  * Choose state table for shared versus dedicated partition
···
 		return retval;
 
 	pseries_cpuidle_driver_init();
-	retval = cpuidle_register_driver(&pseries_idle_driver);
+	retval = cpuidle_register(&pseries_idle_driver, NULL);
 	if (retval) {
 		printk(KERN_DEBUG "Registration of pseries driver failed.\n");
-		return retval;
-	}
-
-	retval = pseries_idle_devices_init();
-	if (retval) {
-		pseries_idle_devices_uninit();
-		cpuidle_unregister_driver(&pseries_idle_driver);
 		return retval;
 	}
 
···
 {
 
 	unregister_cpu_notifier(&setup_hotplug_notifier);
-	pseries_idle_devices_uninit();
-	cpuidle_unregister_driver(&pseries_idle_driver);
+	cpuidle_unregister(&pseries_idle_driver);
 
 	return;
 }
+1 -2
arch/x86/kernel/acpi/boot.c
···
 
 #include "sleep.h" /* To include x86_acpi_suspend_lowlevel */
 static int __initdata acpi_force = 0;
-u32 acpi_rsdt_forced;
 int acpi_disabled;
 EXPORT_SYMBOL(acpi_disabled);
 
···
 	}
 	/* acpi=rsdt use RSDT instead of XSDT */
 	else if (strcmp(arg, "rsdt") == 0) {
-		acpi_rsdt_forced = 1;
+		acpi_gbl_do_not_use_xsdt = TRUE;
 	}
 	/* "acpi=noirq" disables ACPI interrupt routing */
 	else if (strcmp(arg, "noirq") == 0) {
+1 -3
arch/x86/kernel/apic/apic_flat_64.c
···
 #include <asm/apic.h>
 #include <asm/ipi.h>
 
-#ifdef CONFIG_ACPI
-#include <acpi/acpi_bus.h>
-#endif
+#include <linux/acpi.h>
 
 static struct apic apic_physflat;
 static struct apic apic_flat;
-3
arch/x86/kernel/apic/io_apic.c
···
 #include <linux/kthread.h>
 #include <linux/jiffies.h>	/* time_after() */
 #include <linux/slab.h>
-#ifdef CONFIG_ACPI
-#include <acpi/acpi_bus.h>
-#endif
 #include <linux/bootmem.h>
 #include <linux/dmar.h>
 #include <linux/hpet.h>
-1
arch/x86/pci/mmconfig-shared.c
···
 
 #include <linux/pci.h>
 #include <linux/init.h>
-#include <linux/acpi.h>
 #include <linux/sfi_acpi.h>
 #include <linux/bitmap.h>
 #include <linux/dmi.h>
-1
arch/x86/pci/mmconfig_32.c
···
 #include <linux/rcupdate.h>
 #include <asm/e820.h>
 #include <asm/pci_x86.h>
-#include <acpi/acpi.h>
 
 /* Assume systems with more busses have correct MCFG */
 #define mmcfg_virt_addr ((void __iomem *) fix_to_virt(FIX_PCIE_MCFG))
+1 -2
arch/x86/platform/olpc/olpc-xo15-sci.c
···
 #include <linux/power_supply.h>
 #include <linux/olpc-ec.h>
 
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 #include <asm/olpc.h>
 
 #define DRV_NAME "olpc-xo15-sci"
+1 -2
drivers/acpi/ac.c
···
 #include <linux/delay.h>
 #include <linux/platform_device.h>
 #include <linux/power_supply.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 
 #define PREFIX "ACPI: "
 
+12 -50
drivers/acpi/acpi_extlog.c
···
 
 #include <linux/module.h>
 #include <linux/acpi.h>
-#include <acpi/acpi_bus.h>
 #include <linux/cper.h>
 #include <linux/ratelimit.h>
 #include <linux/edac.h>
···
 #define EXT_ELOG_ENTRY_MASK	GENMASK_ULL(51, 0) /* elog entry address mask */
 
 #define EXTLOG_DSM_REV		0x0
-#define EXTLOG_FN_QUERY		0x0
 #define EXTLOG_FN_ADDR		0x1
 
 #define FLAG_OS_OPTIN		BIT(0)
-#define EXTLOG_QUERY_L1_EXIST	BIT(1)
 #define ELOG_ENTRY_VALID	(1ULL<<63)
 #define ELOG_ENTRY_LEN		0x1000
···
 
 static int old_edac_report_status;
 
-static u8 extlog_dsm_uuid[] = "663E35AF-CC10-41A4-88EA-5470AF055295";
+static u8 extlog_dsm_uuid[] __initdata = "663E35AF-CC10-41A4-88EA-5470AF055295";
 
 /* L1 table related physical address */
 static u64 elog_base;
···
 	return NOTIFY_STOP;
 }
 
-static int extlog_get_dsm(acpi_handle handle, int rev, int func, u64 *ret)
+static bool __init extlog_get_l1addr(void)
 {
-	struct acpi_buffer buf = {ACPI_ALLOCATE_BUFFER, NULL};
-	struct acpi_object_list input;
-	union acpi_object params[4], *obj;
 	u8 uuid[16];
-	int i;
+	acpi_handle handle;
+	union acpi_object *obj;
 
 	acpi_str_to_uuid(extlog_dsm_uuid, uuid);
-	input.count = 4;
-	input.pointer = params;
-	params[0].type = ACPI_TYPE_BUFFER;
-	params[0].buffer.length = 16;
-	params[0].buffer.pointer = uuid;
-	params[1].type = ACPI_TYPE_INTEGER;
-	params[1].integer.value = rev;
-	params[2].type = ACPI_TYPE_INTEGER;
-	params[2].integer.value = func;
-	params[3].type = ACPI_TYPE_PACKAGE;
-	params[3].package.count = 0;
-	params[3].package.elements = NULL;
-
-	if (ACPI_FAILURE(acpi_evaluate_object(handle, "_DSM", &input, &buf)))
-		return -1;
-
-	*ret = 0;
-	obj = (union acpi_object *)buf.pointer;
-	if (obj->type == ACPI_TYPE_INTEGER) {
-		*ret = obj->integer.value;
-	} else if (obj->type == ACPI_TYPE_BUFFER) {
-		if (obj->buffer.length <= 8) {
-			for (i = 0; i < obj->buffer.length; i++)
-				*ret |= (obj->buffer.pointer[i] << (i * 8));
-		}
-	}
-	kfree(buf.pointer);
-
-	return 0;
-}
-
-static bool extlog_get_l1addr(void)
-{
-	acpi_handle handle;
-	u64 ret;
 
 	if (ACPI_FAILURE(acpi_get_handle(NULL, "\\_SB", &handle)))
 		return false;
-
-	if (extlog_get_dsm(handle, EXTLOG_DSM_REV, EXTLOG_FN_QUERY, &ret) ||
-	    !(ret & EXTLOG_QUERY_L1_EXIST))
+	if (!acpi_check_dsm(handle, uuid, EXTLOG_DSM_REV, 1 << EXTLOG_FN_ADDR))
 		return false;
-
-	if (extlog_get_dsm(handle, EXTLOG_DSM_REV, EXTLOG_FN_ADDR, &ret))
+	obj = acpi_evaluate_dsm_typed(handle, uuid, EXTLOG_DSM_REV,
+				      EXTLOG_FN_ADDR, NULL, ACPI_TYPE_INTEGER);
+	if (!obj) {
 		return false;
+	} else {
+		l1_dirbase = obj->integer.value;
+		ACPI_FREE(obj);
+	}
 
-	l1_dirbase = ret;
 	/* Spec says L1 directory must be 4K aligned, bail out if it isn't */
 	if (l1_dirbase & ((1 << 12) - 1)) {
 		pr_warn(FW_BUG "L1 Directory is invalid at physical %llx\n",
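The acpi_extlog.c conversion above boils down to a reusable pattern for the new helpers: probe for support with acpi_check_dsm(), then fetch a typed result with acpi_evaluate_dsm_typed(). A minimal kernel-side sketch of that pattern follows; the UUID string, revision, and function index here are illustrative placeholders, not values from any real table.

```c
/* Sketch of the new _DSM helper usage pattern (kernel code, not a
 * standalone program). UUID, revision 0 and function index 1 are
 * hypothetical examples. */
#include <linux/acpi.h>

static u64 example_dsm_query(acpi_handle handle, const u8 *uuid)
{
	union acpi_object *obj;
	u64 value = 0;

	/* _DSM function 0 reports which functions are implemented;
	 * acpi_check_dsm() tests the bit for function 1 here. */
	if (!acpi_check_dsm(handle, uuid, 0, 1 << 1))
		return 0;

	/* Evaluate function 1 and insist on an integer return. */
	obj = acpi_evaluate_dsm_typed(handle, uuid, 0, 1, NULL,
				      ACPI_TYPE_INTEGER);
	if (obj) {
		value = obj->integer.value;
		ACPI_FREE(obj);	/* caller owns the returned object */
	}
	return value;
}
```

Compared with open-coding acpi_evaluate_object() as the removed extlog_get_dsm() did, the helper centralizes argument packing and return-type checking in one place.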
+19 -8
drivers/acpi/acpi_memhotplug.c
···
 
 static int acpi_bind_memblk(struct memory_block *mem, void *arg)
 {
-	return acpi_bind_one(&mem->dev, (acpi_handle)arg);
+	return acpi_bind_one(&mem->dev, arg);
 }
 
 static int acpi_bind_memory_blocks(struct acpi_memory_info *info,
-				   acpi_handle handle)
+				   struct acpi_device *adev)
 {
 	return walk_memory_range(acpi_meminfo_start_pfn(info),
-				 acpi_meminfo_end_pfn(info), (void *)handle,
+				 acpi_meminfo_end_pfn(info), adev,
 				 acpi_bind_memblk);
 }
···
 	return 0;
 }
 
-static void acpi_unbind_memory_blocks(struct acpi_memory_info *info,
-				      acpi_handle handle)
+static void acpi_unbind_memory_blocks(struct acpi_memory_info *info)
 {
 	walk_memory_range(acpi_meminfo_start_pfn(info),
 			  acpi_meminfo_end_pfn(info), NULL, acpi_unbind_memblk);
···
 		if (result && result != -EEXIST)
 			continue;
 
-		result = acpi_bind_memory_blocks(info, handle);
+		result = acpi_bind_memory_blocks(info, mem_device->device);
 		if (result) {
-			acpi_unbind_memory_blocks(info, handle);
+			acpi_unbind_memory_blocks(info);
 			return -ENODEV;
 		}
 
···
 		if (nid == NUMA_NO_NODE)
 			nid = memory_add_physaddr_to_nid(info->start_addr);
 
-		acpi_unbind_memory_blocks(info, handle);
+		acpi_unbind_memory_blocks(info);
 		remove_memory(nid, info->start_addr, info->length);
 		list_del(&info->list);
 		kfree(info);
···
 	acpi_memory_device_free(mem_device);
 }
 
+static bool __initdata acpi_no_memhotplug;
+
 void __init acpi_memory_hotplug_init(void)
 {
+	if (acpi_no_memhotplug)
+		return;
+
 	acpi_scan_add_handler_with_hotplug(&memory_device_handler, "memory");
 }
+
+static int __init disable_acpi_memory_hotplug(char *str)
+{
+	acpi_no_memhotplug = true;
+	return 1;
+}
+__setup("acpi_no_memhotplug", disable_acpi_memory_hotplug);
+1 -2
drivers/acpi/acpi_pad.c
···
 #include <linux/cpu.h>
 #include <linux/clockchips.h>
 #include <linux/slab.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 #include <asm/mwait.h>
 
 #define ACPI_PROCESSOR_AGGREGATOR_CLASS	"acpi_pad"
+15 -11
drivers/acpi/acpi_processor.c
···
 	union acpi_object object = { 0 };
 	struct acpi_buffer buffer = { sizeof(union acpi_object), &object };
 	struct acpi_processor *pr = acpi_driver_data(device);
-	int cpu_index, device_declaration = 0;
+	int apic_id, cpu_index, device_declaration = 0;
 	acpi_status status = AE_OK;
 	static int cpu0_initialized;
 	unsigned long long value;
···
 		device_declaration = 1;
 		pr->acpi_id = value;
 	}
-	pr->apic_id = acpi_get_apicid(pr->handle, device_declaration,
-					pr->acpi_id);
-	cpu_index = acpi_map_cpuid(pr->apic_id, pr->acpi_id);
 
-	/* Handle UP system running SMP kernel, with no LAPIC in MADT */
-	if (!cpu0_initialized && (cpu_index == -1) &&
-	    (num_online_cpus() == 1)) {
-		cpu_index = 0;
+	apic_id = acpi_get_apicid(pr->handle, device_declaration, pr->acpi_id);
+	if (apic_id < 0) {
+		acpi_handle_err(pr->handle, "failed to get CPU APIC ID.\n");
+		return -ENODEV;
 	}
+	pr->apic_id = apic_id;
 
-	cpu0_initialized = 1;
-
+	cpu_index = acpi_map_cpuid(pr->apic_id, pr->acpi_id);
+	if (!cpu0_initialized) {
+		cpu0_initialized = 1;
+		/* Handle UP system running SMP kernel, with no LAPIC in MADT */
+		if ((cpu_index == -1) && (num_online_cpus() == 1))
+			cpu_index = 0;
+	}
 	pr->id = cpu_index;
 
 	/*
···
 		if (ret)
 			return ret;
 	}
+
 	/*
 	 * On some boxes several processors use the same processor bus id.
 	 * But they are located in different scope. For example:
···
 		goto err;
 	}
 
-	result = acpi_bind_one(dev, pr->handle);
+	result = acpi_bind_one(dev, device);
 	if (result)
 		goto err;
 
-1
drivers/acpi/acpica/acdebug.h
···
 ACPI_HW_DEPENDENT_RETURN_VOID(void
 			      acpi_db_generate_gpe(char *gpe_arg,
 						   char *block_arg))
-
 ACPI_HW_DEPENDENT_RETURN_VOID(void acpi_db_generate_sci(void))
 
 /*
+5 -4
drivers/acpi/acpica/acevents.h
···
 
 ACPI_HW_DEPENDENT_RETURN_OK(acpi_status
 			    acpi_ev_acquire_global_lock(u16 timeout))
-
 ACPI_HW_DEPENDENT_RETURN_OK(acpi_status acpi_ev_release_global_lock(void))
- acpi_status acpi_ev_remove_global_lock_handler(void);
+acpi_status acpi_ev_remove_global_lock_handler(void);
···
 ACPI_HW_DEPENDENT_RETURN_VOID(void
 			      acpi_ev_update_gpes(acpi_owner_id table_owner_id))
 
- acpi_status
+acpi_status
 acpi_ev_match_gpe_method(acpi_handle obj_handle,
 			 u32 level, void *context, void **return_value);
···
 acpi_ev_get_gpe_device(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
 		       struct acpi_gpe_block_info *gpe_block, void *context);
 
-struct acpi_gpe_xrupt_info *acpi_ev_get_gpe_xrupt_block(u32 interrupt_number);
+acpi_status
+acpi_ev_get_gpe_xrupt_block(u32 interrupt_number,
+			    struct acpi_gpe_xrupt_info **gpe_xrupt_block);
 
 acpi_status acpi_ev_delete_gpe_xrupt(struct acpi_gpe_xrupt_info *gpe_xrupt);
+30
drivers/acpi/acpica/acglobal.h
···
 u8 ACPI_INIT_GLOBAL(acpi_gbl_copy_dsdt_locally, FALSE);
 
 /*
+ * Optionally ignore an XSDT if present and use the RSDT instead.
+ * Although the ACPI specification requires that an XSDT be used instead
+ * of the RSDT, the XSDT has been found to be corrupt or ill-formed on
+ * some machines. Default behavior is to use the XSDT if present.
+ */
+u8 ACPI_INIT_GLOBAL(acpi_gbl_do_not_use_xsdt, FALSE);
+
+/*
+ * Optionally use 32-bit FADT addresses if and when there is a conflict
+ * (address mismatch) between the 32-bit and 64-bit versions of the
+ * address. Although ACPICA adheres to the ACPI specification which
+ * requires the use of the corresponding 64-bit address if it is non-zero,
+ * some machines have been found to have a corrupted non-zero 64-bit
+ * address. Default is FALSE, do not favor the 32-bit addresses.
+ */
+u8 ACPI_INIT_GLOBAL(acpi_gbl_use32_bit_fadt_addresses, FALSE);
+
+/*
  * Optionally truncate I/O addresses to 16 bits. Provides compatibility
  * with other ACPI implementations. NOTE: During ACPICA initialization,
  * this value is set to TRUE if any Windows OSI strings have been
···
 ACPI_EXTERN u32 acpi_gbl_size_of_acpi_objects;
 
 #endif				/* ACPI_DEBUGGER */
+
+/*****************************************************************************
+ *
+ * Application globals
+ *
+ ****************************************************************************/
+
+#ifdef ACPI_APPLICATION
+
+ACPI_FILE ACPI_INIT_GLOBAL(acpi_gbl_debug_file, NULL);
+
+#endif				/* ACPI_APPLICATION */
 
 /*****************************************************************************
  *
+5 -4
drivers/acpi/acpica/aclocal.h
···
 	struct acpi_external_list *next;
 	u32 value;
 	u16 length;
+	u16 flags;
 	u8 type;
-	u8 flags;
-	u8 resolved;
-	u8 emitted;
 };
 
 /* Values for Flags field above */
 
-#define ACPI_IPATH_ALLOCATED	0x01
+#define ACPI_EXT_RESOLVED_REFERENCE	0x01	/* Object was resolved during cross ref */
+#define ACPI_EXT_ORIGIN_FROM_FILE	0x02	/* External came from a file */
+#define ACPI_EXT_INTERNAL_PATH_ALLOCATED 0x04	/* Deallocate internal path on completion */
+#define ACPI_EXT_EXTERNAL_EMITTED	0x08	/* External() statement has been emitted */
 
 struct acpi_external_file {
 	char *path;
+1 -1
drivers/acpi/acpica/dsfield.c
···
 	 * operation_region not found. Generate an External for it, and
 	 * insert the name into the namespace.
 	 */
-	acpi_dm_add_to_external_list(op, path, ACPI_TYPE_REGION, 0);
+	acpi_dm_add_op_to_external_list(op, path, ACPI_TYPE_REGION, 0, 0);
 	status = acpi_ns_lookup(walk_state->scope_info, path, ACPI_TYPE_REGION,
 				ACPI_IMODE_LOAD_PASS1, ACPI_NS_SEARCH_PARENT,
 				walk_state, node);
+10 -11
drivers/acpi/acpica/dsutils.c
··· 727 727 index++; 728 728 } 729 729 730 + ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH, 731 + "NumOperands %d, ArgCount %d, Index %d\n", 732 + walk_state->num_operands, arg_count, index)); 733 + 734 + /* Create the interpreter arguments, in reverse order */ 735 + 730 736 index--; 731 - 732 - /* It is the appropriate order to get objects from the Result stack */ 733 - 734 737 for (i = 0; i < arg_count; i++) { 735 738 arg = arguments[index]; 736 - 737 - /* Force the filling of the operand stack in inverse order */ 738 - 739 - walk_state->operand_index = (u8) index; 739 + walk_state->operand_index = (u8)index; 740 740 741 741 status = acpi_ds_create_operand(walk_state, arg, index); 742 742 if (ACPI_FAILURE(status)) { 743 743 goto cleanup; 744 744 } 745 745 746 - index--; 747 - 748 746 ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH, 749 - "Arg #%u (%p) done, Arg1=%p\n", index, arg, 750 - first_arg)); 747 + "Created Arg #%u (%p) %u args total\n", 748 + index, arg, arg_count)); 749 + index--; 751 750 } 752 751 753 752 return_ACPI_STATUS(status);
+2 -2
drivers/acpi/acpica/dswload.c
··· 181 181 * Target of Scope() not found. Generate an External for it, and 182 182 * insert the name into the namespace. 183 183 */ 184 - acpi_dm_add_to_external_list(op, path, ACPI_TYPE_DEVICE, 185 - 0); 184 + acpi_dm_add_op_to_external_list(op, path, 185 + ACPI_TYPE_DEVICE, 0, 0); 186 186 status = 187 187 acpi_ns_lookup(walk_state->scope_info, path, 188 188 object_type, ACPI_IMODE_LOAD_PASS1,
+4 -4
drivers/acpi/acpica/evgpeblk.c
··· 87 87 return_ACPI_STATUS(status); 88 88 } 89 89 90 - gpe_xrupt_block = acpi_ev_get_gpe_xrupt_block(interrupt_number); 91 - if (!gpe_xrupt_block) { 92 - status = AE_NO_MEMORY; 90 + status = 91 + acpi_ev_get_gpe_xrupt_block(interrupt_number, &gpe_xrupt_block); 92 + if (ACPI_FAILURE(status)) { 93 93 goto unlock_and_exit; 94 94 } 95 95 ··· 112 112 acpi_os_release_lock(acpi_gbl_gpe_lock, flags); 113 113 114 114 unlock_and_exit: 115 - status = acpi_ut_release_mutex(ACPI_MTX_EVENTS); 115 + (void)acpi_ut_release_mutex(ACPI_MTX_EVENTS); 116 116 return_ACPI_STATUS(status); 117 117 } 118 118
+15 -9
drivers/acpi/acpica/evgpeutil.c
··· 197 197 * FUNCTION: acpi_ev_get_gpe_xrupt_block 198 198 * 199 199 * PARAMETERS: interrupt_number - Interrupt for a GPE block 200 + * gpe_xrupt_block - Where the block is returned 200 201 * 201 - * RETURN: A GPE interrupt block 202 + * RETURN: Status 202 203 * 203 204 * DESCRIPTION: Get or Create a GPE interrupt block. There is one interrupt 204 205 * block per unique interrupt level used for GPEs. Should be ··· 208 207 * 209 208 ******************************************************************************/ 210 209 211 - struct acpi_gpe_xrupt_info *acpi_ev_get_gpe_xrupt_block(u32 interrupt_number) 210 + acpi_status 211 + acpi_ev_get_gpe_xrupt_block(u32 interrupt_number, 212 + struct acpi_gpe_xrupt_info ** gpe_xrupt_block) 212 213 { 213 214 struct acpi_gpe_xrupt_info *next_gpe_xrupt; 214 215 struct acpi_gpe_xrupt_info *gpe_xrupt; ··· 224 221 next_gpe_xrupt = acpi_gbl_gpe_xrupt_list_head; 225 222 while (next_gpe_xrupt) { 226 223 if (next_gpe_xrupt->interrupt_number == interrupt_number) { 227 - return_PTR(next_gpe_xrupt); 224 + *gpe_xrupt_block = next_gpe_xrupt; 225 + return_ACPI_STATUS(AE_OK); 228 226 } 229 227 230 228 next_gpe_xrupt = next_gpe_xrupt->next; ··· 235 231 236 232 gpe_xrupt = ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_gpe_xrupt_info)); 237 233 if (!gpe_xrupt) { 238 - return_PTR(NULL); 234 + return_ACPI_STATUS(AE_NO_MEMORY); 239 235 } 240 236 241 237 gpe_xrupt->interrupt_number = interrupt_number; ··· 254 250 } else { 255 251 acpi_gbl_gpe_xrupt_list_head = gpe_xrupt; 256 252 } 253 + 257 254 acpi_os_release_lock(acpi_gbl_gpe_lock, flags); 258 255 259 256 /* Install new interrupt handler if not SCI_INT */ ··· 264 259 acpi_ev_gpe_xrupt_handler, 265 260 gpe_xrupt); 266 261 if (ACPI_FAILURE(status)) { 267 - ACPI_ERROR((AE_INFO, 268 - "Could not install GPE interrupt handler at level 0x%X", 269 - interrupt_number)); 270 - return_PTR(NULL); 262 + ACPI_EXCEPTION((AE_INFO, status, 263 + "Could not install GPE interrupt handler at level 0x%X", 264 + 
interrupt_number)); 265 + return_ACPI_STATUS(status); 271 266 } 272 267 } 273 268 274 - return_PTR(gpe_xrupt); 269 + *gpe_xrupt_block = gpe_xrupt; 270 + return_ACPI_STATUS(AE_OK); 275 271 } 276 272 277 273 /*******************************************************************************
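The interface change above is a classic one: `acpi_ev_get_gpe_xrupt_block` now returns a status and delivers its result through an out-parameter, so allocation failure and handler-installation failure become distinct codes instead of one ambiguous `NULL`. A generic stand-alone sketch of the pattern (all names hypothetical, not ACPICA code):

```c
#include <assert.h>
#include <stddef.h>

typedef int demo_status;            /* stand-in for acpi_status */
#define DEMO_OK        0
#define DEMO_NO_MEMORY 1

struct demo_block { int interrupt_number; };

/* Return a status; deliver the block through *out on success. The
 * caller can now distinguish "not found / allocation failed" from
 * other failure modes, which a bare NULL return could not express. */
static demo_status get_block(int irq, struct demo_block **out)
{
    static struct demo_block cached = { 9 };

    if (irq == cached.interrupt_number) {
        *out = &cached;
        return DEMO_OK;
    }
    return DEMO_NO_MEMORY;  /* e.g. allocating a new block failed */
}
```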
+2 -1
drivers/acpi/acpica/exresnte.c
··· 124 124 } 125 125 126 126 if (!source_desc) { 127 - ACPI_ERROR((AE_INFO, "No object attached to node %p", node)); 127 + ACPI_ERROR((AE_INFO, "No object attached to node [%4.4s] %p", 128 + node->name.ascii, node)); 128 129 return_ACPI_STATUS(AE_AML_NO_OPERAND); 129 130 } 130 131
+13 -10
drivers/acpi/acpica/nsxfeval.c
··· 84 84 acpi_object_type return_type) 85 85 { 86 86 acpi_status status; 87 - u8 must_free = FALSE; 87 + u8 free_buffer_on_error = FALSE; 88 88 89 89 ACPI_FUNCTION_TRACE(acpi_evaluate_object_typed); 90 90 ··· 95 95 } 96 96 97 97 if (return_buffer->length == ACPI_ALLOCATE_BUFFER) { 98 - must_free = TRUE; 98 + free_buffer_on_error = TRUE; 99 99 } 100 100 101 101 /* Evaluate the object */ 102 102 103 - status = 104 - acpi_evaluate_object(handle, pathname, external_params, 105 - return_buffer); 103 + status = acpi_evaluate_object(handle, pathname, 104 + external_params, return_buffer); 106 105 if (ACPI_FAILURE(status)) { 107 106 return_ACPI_STATUS(status); 108 107 } ··· 134 135 pointer)->type), 135 136 acpi_ut_get_type_name(return_type))); 136 137 137 - if (must_free) { 138 - 139 - /* Caller used ACPI_ALLOCATE_BUFFER, free the return buffer */ 140 - 141 - ACPI_FREE_BUFFER(*return_buffer); 138 + if (free_buffer_on_error) { 139 + /* 140 + * Free a buffer created via ACPI_ALLOCATE_BUFFER. 141 + * Note: We use acpi_os_free here because acpi_os_allocate was used 142 + * to allocate the buffer. This purposefully bypasses the 143 + * (optionally enabled) allocation tracking mechanism since we 144 + * only want to track internal allocations. 145 + */ 146 + acpi_os_free(return_buffer->pointer); 142 147 return_buffer->pointer = NULL; 143 148 } 144 149
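The `free_buffer_on_error` logic above follows a common ownership rule: if the callee allocated the caller's return buffer and a later check fails, the callee frees the buffer with the allocator's matching free function and NULLs the pointer, so the caller never sees a dangling allocation on error. A generic sketch using the C library allocator (names and error codes hypothetical):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Evaluate, then type-check; on mismatch, release the buffer we
 * allocated on the caller's behalf and clear the pointer. */
static int evaluate_typed(int want_type, int got_type, char **buffer)
{
    *buffer = malloc(16);
    if (!*buffer)
        return -1;                  /* allocation failure */
    strcpy(*buffer, "result");

    if (got_type != want_type) {
        free(*buffer);              /* matching free for the allocator used */
        *buffer = NULL;
        return -2;                  /* type mismatch */
    }
    return 0;
}
```

In the real change, `acpi_os_free` pairs with `acpi_os_allocate` specifically to bypass the optional internal allocation-tracking mechanism.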
+49 -2
drivers/acpi/acpica/psopinfo.c
··· 71 71 72 72 const struct acpi_opcode_info *acpi_ps_get_opcode_info(u16 opcode) 73 73 { 74 + #ifdef ACPI_DEBUG_OUTPUT 75 + const char *opcode_name = "Unknown AML opcode"; 76 + #endif 77 + 74 78 ACPI_FUNCTION_NAME(ps_get_opcode_info); 75 79 76 80 /* ··· 96 92 return (&acpi_gbl_aml_op_info 97 93 [acpi_gbl_long_op_index[(u8)opcode]]); 98 94 } 95 + #if defined ACPI_ASL_COMPILER && defined ACPI_DEBUG_OUTPUT 96 + #include "asldefine.h" 97 + 98 + switch (opcode) { 99 + case AML_RAW_DATA_BYTE: 100 + opcode_name = "-Raw Data Byte-"; 101 + break; 102 + 103 + case AML_RAW_DATA_WORD: 104 + opcode_name = "-Raw Data Word-"; 105 + break; 106 + 107 + case AML_RAW_DATA_DWORD: 108 + opcode_name = "-Raw Data Dword-"; 109 + break; 110 + 111 + case AML_RAW_DATA_QWORD: 112 + opcode_name = "-Raw Data Qword-"; 113 + break; 114 + 115 + case AML_RAW_DATA_BUFFER: 116 + opcode_name = "-Raw Data Buffer-"; 117 + break; 118 + 119 + case AML_RAW_DATA_CHAIN: 120 + opcode_name = "-Raw Data Buffer Chain-"; 121 + break; 122 + 123 + case AML_PACKAGE_LENGTH: 124 + opcode_name = "-Package Length-"; 125 + break; 126 + 127 + case AML_UNASSIGNED_OPCODE: 128 + opcode_name = "-Unassigned Opcode-"; 129 + break; 130 + 131 + case AML_DEFAULT_ARG_OP: 132 + opcode_name = "-Default Arg-"; 133 + break; 134 + 135 + default: 136 + break; 137 + } 138 + #endif 99 139 100 140 /* Unknown AML opcode */ 101 141 102 - ACPI_DEBUG_PRINT((ACPI_DB_EXEC, 103 - "Unknown AML opcode [%4.4X]\n", opcode)); 142 + ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "%s [%4.4X]\n", opcode_name, opcode)); 104 143 105 144 return (&acpi_gbl_aml_op_info[_UNK]); 106 145 }
+184 -153
drivers/acpi/acpica/tbfadt.c
··· 56 56 57 57 static void acpi_tb_convert_fadt(void); 58 58 59 - static void acpi_tb_validate_fadt(void); 60 - 61 59 static void acpi_tb_setup_fadt_registers(void); 60 + 61 + static u64 62 + acpi_tb_select_address(char *register_name, u32 address32, u64 address64); 62 63 63 64 /* Table for conversion of FADT to common internal format and FADT validation */ 64 65 ··· 176 175 * space_id - ACPI Space ID for this register 177 176 * byte_width - Width of this register 178 177 * address - Address of the register 178 + * register_name - ASCII name of the ACPI register 179 179 * 180 180 * RETURN: None 181 181 * ··· 218 216 generic_address->bit_width = bit_width; 219 217 generic_address->bit_offset = 0; 220 218 generic_address->access_width = 0; /* Access width ANY */ 219 + } 220 + 221 + /******************************************************************************* 222 + * 223 + * FUNCTION: acpi_tb_select_address 224 + * 225 + * PARAMETERS: register_name - ASCII name of the ACPI register 226 + * address32 - 32-bit address of the register 227 + * address64 - 64-bit address of the register 228 + * 229 + * RETURN: The resolved 64-bit address 230 + * 231 + * DESCRIPTION: Select between 32-bit and 64-bit versions of addresses within 232 + * the FADT. Used for the FACS and DSDT addresses. 233 + * 234 + * NOTES: 235 + * 236 + * Check for FACS and DSDT address mismatches. An address mismatch between 237 + * the 32-bit and 64-bit address fields (FIRMWARE_CTRL/X_FIRMWARE_CTRL and 238 + * DSDT/X_DSDT) could be a corrupted address field or it might indicate 239 + * the presence of two FACS or two DSDT tables. 240 + * 241 + * November 2013: 242 + * By default, as per the ACPI specification, a valid 64-bit address is 243 + * used regardless of the value of the 32-bit address. However, this 244 + * behavior can be overridden via the acpi_gbl_use32_bit_fadt_addresses flag. 
245 + * 246 + ******************************************************************************/ 247 + 248 + static u64 249 + acpi_tb_select_address(char *register_name, u32 address32, u64 address64) 250 + { 251 + 252 + if (!address64) { 253 + 254 + /* 64-bit address is zero, use 32-bit address */ 255 + 256 + return ((u64)address32); 257 + } 258 + 259 + if (address32 && (address64 != (u64)address32)) { 260 + 261 + /* Address mismatch between 32-bit and 64-bit versions */ 262 + 263 + ACPI_BIOS_WARNING((AE_INFO, 264 + "32/64X %s address mismatch in FADT: " 265 + "0x%8.8X/0x%8.8X%8.8X, using %u-bit address", 266 + register_name, address32, 267 + ACPI_FORMAT_UINT64(address64), 268 + acpi_gbl_use32_bit_fadt_addresses ? 32 : 269 + 64)); 270 + 271 + /* 32-bit address override */ 272 + 273 + if (acpi_gbl_use32_bit_fadt_addresses) { 274 + return ((u64)address32); 275 + } 276 + } 277 + 278 + /* Default is to use the 64-bit address */ 279 + 280 + return (address64); 221 281 } 222 282 223 283 /******************************************************************************* ··· 395 331 396 332 acpi_tb_convert_fadt(); 397 333 398 - /* Validate FADT values now, before we make any changes */ 399 - 400 - acpi_tb_validate_fadt(); 401 - 402 334 /* Initialize the global ACPI register structures */ 403 335 404 336 acpi_tb_setup_fadt_registers(); ··· 404 344 * 405 345 * FUNCTION: acpi_tb_convert_fadt 406 346 * 407 - * PARAMETERS: None, uses acpi_gbl_FADT 347 + * PARAMETERS: none - acpi_gbl_FADT is used. 408 348 * 409 349 * RETURN: None 410 350 * 411 351 * DESCRIPTION: Converts all versions of the FADT to a common internal format. 412 - * Expand 32-bit addresses to 64-bit as necessary. 352 + * Expand 32-bit addresses to 64-bit as necessary. Also validate 353 + * important fields within the FADT. 413 354 * 414 - * NOTE: acpi_gbl_FADT must be of size (struct acpi_table_fadt), 415 - * and must contain a copy of the actual FADT. 
355 + * NOTE: acpi_gbl_FADT must be of size (struct acpi_table_fadt), and must 356 + * contain a copy of the actual BIOS-provided FADT. 416 357 * 417 358 * Notes on 64-bit register addresses: 418 359 * 419 360 * After this FADT conversion, later ACPICA code will only use the 64-bit "X" 420 361 * fields of the FADT for all ACPI register addresses. 421 362 * 422 - * The 64-bit "X" fields are optional extensions to the original 32-bit FADT 363 + * The 64-bit X fields are optional extensions to the original 32-bit FADT 423 364 * V1.0 fields. Even if they are present in the FADT, they are optional and 424 365 * are unused if the BIOS sets them to zero. Therefore, we must copy/expand 425 - * 32-bit V1.0 fields if the corresponding X field is zero. 366 + * 32-bit V1.0 fields to the 64-bit X fields if the the 64-bit X field is 367 + * originally zero. 426 368 * 427 - * For ACPI 1.0 FADTs, all 32-bit address fields are expanded to the 428 - * corresponding "X" fields in the internal FADT. 369 + * For ACPI 1.0 FADTs (that contain no 64-bit addresses), all 32-bit address 370 + * fields are expanded to the corresponding 64-bit X fields in the internal 371 + * common FADT. 429 372 * 430 373 * For ACPI 2.0+ FADTs, all valid (non-zero) 32-bit address fields are expanded 431 - * to the corresponding 64-bit X fields. For compatibility with other ACPI 432 - * implementations, we ignore the 64-bit field if the 32-bit field is valid, 433 - * regardless of whether the host OS is 32-bit or 64-bit. 374 + * to the corresponding 64-bit X fields, if the 64-bit field is originally 375 + * zero. Adhering to the ACPI specification, we completely ignore the 32-bit 376 + * field if the 64-bit field is valid, regardless of whether the host OS is 377 + * 32-bit or 64-bit. 
378 + * 379 + * Possible additional checks: 380 + * (acpi_gbl_FADT.pm1_event_length >= 4) 381 + * (acpi_gbl_FADT.pm1_control_length >= 2) 382 + * (acpi_gbl_FADT.pm_timer_length >= 4) 383 + * Gpe block lengths must be multiple of 2 434 384 * 435 385 ******************************************************************************/ 436 386 437 387 static void acpi_tb_convert_fadt(void) 438 388 { 389 + char *name; 439 390 struct acpi_generic_address *address64; 440 391 u32 address32; 392 + u8 length; 441 393 u32 i; 442 - 443 - /* 444 - * Expand the 32-bit FACS and DSDT addresses to 64-bit as necessary. 445 - * Later code will always use the X 64-bit field. Also, check for an 446 - * address mismatch between the 32-bit and 64-bit address fields 447 - * (FIRMWARE_CTRL/X_FIRMWARE_CTRL, DSDT/X_DSDT) which would indicate 448 - * the presence of two FACS or two DSDT tables. 449 - */ 450 - if (!acpi_gbl_FADT.Xfacs) { 451 - acpi_gbl_FADT.Xfacs = (u64) acpi_gbl_FADT.facs; 452 - } else if (acpi_gbl_FADT.facs && 453 - (acpi_gbl_FADT.Xfacs != (u64) acpi_gbl_FADT.facs)) { 454 - ACPI_WARNING((AE_INFO, 455 - "32/64 FACS address mismatch in FADT - two FACS tables!")); 456 - } 457 - 458 - if (!acpi_gbl_FADT.Xdsdt) { 459 - acpi_gbl_FADT.Xdsdt = (u64) acpi_gbl_FADT.dsdt; 460 - } else if (acpi_gbl_FADT.dsdt && 461 - (acpi_gbl_FADT.Xdsdt != (u64) acpi_gbl_FADT.dsdt)) { 462 - ACPI_WARNING((AE_INFO, 463 - "32/64 DSDT address mismatch in FADT - two DSDT tables!")); 464 - } 465 394 466 395 /* 467 396 * For ACPI 1.0 FADTs (revision 1 or 2), ensure that reserved fields which ··· 470 421 acpi_gbl_FADT.boot_flags = 0; 471 422 } 472 423 473 - /* Update the local FADT table header length */ 474 - 424 + /* 425 + * Now we can update the local FADT length to the length of the 426 + * current FADT version as defined by the ACPI specification. 427 + * Thus, we will have a common FADT internally. 
428 + */ 475 429 acpi_gbl_FADT.header.length = sizeof(struct acpi_table_fadt); 476 430 477 431 /* 478 - * Expand the ACPI 1.0 32-bit addresses to the ACPI 2.0 64-bit "X" 479 - * generic address structures as necessary. Later code will always use 480 - * the 64-bit address structures. 481 - * 482 - * March 2009: 483 - * We now always use the 32-bit address if it is valid (non-null). This 484 - * is not in accordance with the ACPI specification which states that 485 - * the 64-bit address supersedes the 32-bit version, but we do this for 486 - * compatibility with other ACPI implementations. Most notably, in the 487 - * case where both the 32 and 64 versions are non-null, we use the 32-bit 488 - * version. This is the only address that is guaranteed to have been 489 - * tested by the BIOS manufacturer. 432 + * Expand the 32-bit FACS and DSDT addresses to 64-bit as necessary. 433 + * Later ACPICA code will always use the X 64-bit field. 490 434 */ 491 - for (i = 0; i < ACPI_FADT_INFO_ENTRIES; i++) { 492 - address32 = *ACPI_ADD_PTR(u32, 493 - &acpi_gbl_FADT, 494 - fadt_info_table[i].address32); 435 + acpi_gbl_FADT.Xfacs = acpi_tb_select_address("FACS", 436 + acpi_gbl_FADT.facs, 437 + acpi_gbl_FADT.Xfacs); 495 438 496 - address64 = ACPI_ADD_PTR(struct acpi_generic_address, 497 - &acpi_gbl_FADT, 498 - fadt_info_table[i].address64); 499 - 500 - /* 501 - * If both 32- and 64-bit addresses are valid (non-zero), 502 - * they must match. 503 - */ 504 - if (address64->address && address32 && 505 - (address64->address != (u64)address32)) { 506 - ACPI_BIOS_ERROR((AE_INFO, 507 - "32/64X address mismatch in FADT/%s: " 508 - "0x%8.8X/0x%8.8X%8.8X, using 32", 509 - fadt_info_table[i].name, address32, 510 - ACPI_FORMAT_UINT64(address64-> 511 - address))); 512 - } 513 - 514 - /* Always use 32-bit address if it is valid (non-null) */ 515 - 516 - if (address32) { 517 - /* 518 - * Copy the 32-bit address to the 64-bit GAS structure. 
The 519 - * Space ID is always I/O for 32-bit legacy address fields 520 - */ 521 - acpi_tb_init_generic_address(address64, 522 - ACPI_ADR_SPACE_SYSTEM_IO, 523 - *ACPI_ADD_PTR(u8, 524 - &acpi_gbl_FADT, 525 - fadt_info_table 526 - [i].length), 527 - (u64) address32, 528 - fadt_info_table[i].name); 529 - } 530 - } 531 - } 532 - 533 - /******************************************************************************* 534 - * 535 - * FUNCTION: acpi_tb_validate_fadt 536 - * 537 - * PARAMETERS: table - Pointer to the FADT to be validated 538 - * 539 - * RETURN: None 540 - * 541 - * DESCRIPTION: Validate various important fields within the FADT. If a problem 542 - * is found, issue a message, but no status is returned. 543 - * Used by both the table manager and the disassembler. 544 - * 545 - * Possible additional checks: 546 - * (acpi_gbl_FADT.pm1_event_length >= 4) 547 - * (acpi_gbl_FADT.pm1_control_length >= 2) 548 - * (acpi_gbl_FADT.pm_timer_length >= 4) 549 - * Gpe block lengths must be multiple of 2 550 - * 551 - ******************************************************************************/ 552 - 553 - static void acpi_tb_validate_fadt(void) 554 - { 555 - char *name; 556 - struct acpi_generic_address *address64; 557 - u8 length; 558 - u32 i; 559 - 560 - /* 561 - * Check for FACS and DSDT address mismatches. An address mismatch between 562 - * the 32-bit and 64-bit address fields (FIRMWARE_CTRL/X_FIRMWARE_CTRL and 563 - * DSDT/X_DSDT) would indicate the presence of two FACS or two DSDT tables. 
564 - */ 565 - if (acpi_gbl_FADT.facs && 566 - (acpi_gbl_FADT.Xfacs != (u64)acpi_gbl_FADT.facs)) { 567 - ACPI_BIOS_WARNING((AE_INFO, 568 - "32/64X FACS address mismatch in FADT - " 569 - "0x%8.8X/0x%8.8X%8.8X, using 32", 570 - acpi_gbl_FADT.facs, 571 - ACPI_FORMAT_UINT64(acpi_gbl_FADT.Xfacs))); 572 - 573 - acpi_gbl_FADT.Xfacs = (u64)acpi_gbl_FADT.facs; 574 - } 575 - 576 - if (acpi_gbl_FADT.dsdt && 577 - (acpi_gbl_FADT.Xdsdt != (u64)acpi_gbl_FADT.dsdt)) { 578 - ACPI_BIOS_WARNING((AE_INFO, 579 - "32/64X DSDT address mismatch in FADT - " 580 - "0x%8.8X/0x%8.8X%8.8X, using 32", 581 - acpi_gbl_FADT.dsdt, 582 - ACPI_FORMAT_UINT64(acpi_gbl_FADT.Xdsdt))); 583 - 584 - acpi_gbl_FADT.Xdsdt = (u64)acpi_gbl_FADT.dsdt; 585 - } 439 + acpi_gbl_FADT.Xdsdt = acpi_tb_select_address("DSDT", 440 + acpi_gbl_FADT.dsdt, 441 + acpi_gbl_FADT.Xdsdt); 586 442 587 443 /* If Hardware Reduced flag is set, we are all done */ 588 444 ··· 499 545 500 546 for (i = 0; i < ACPI_FADT_INFO_ENTRIES; i++) { 501 547 /* 502 - * Generate pointer to the 64-bit address, get the register 503 - * length (width) and the register name 548 + * Get the 32-bit and 64-bit addresses, as well as the register 549 + * length and register name. 504 550 */ 551 + address32 = *ACPI_ADD_PTR(u32, 552 + &acpi_gbl_FADT, 553 + fadt_info_table[i].address32); 554 + 505 555 address64 = ACPI_ADD_PTR(struct acpi_generic_address, 506 556 &acpi_gbl_FADT, 507 557 fadt_info_table[i].address64); 508 - length = 509 - *ACPI_ADD_PTR(u8, &acpi_gbl_FADT, 510 - fadt_info_table[i].length); 558 + 559 + length = *ACPI_ADD_PTR(u8, 560 + &acpi_gbl_FADT, 561 + fadt_info_table[i].length); 562 + 511 563 name = fadt_info_table[i].name; 564 + 565 + /* 566 + * Expand the ACPI 1.0 32-bit addresses to the ACPI 2.0 64-bit "X" 567 + * generic address structures as necessary. Later code will always use 568 + * the 64-bit address structures. 
569 + * 570 + * November 2013: 571 + * Now always use the 64-bit address if it is valid (non-zero), in 572 + * accordance with the ACPI specification which states that a 64-bit 573 + * address supersedes the 32-bit version. This behavior can be 574 + * overridden by the acpi_gbl_use32_bit_fadt_addresses flag. 575 + * 576 + * During 64-bit address construction and verification, 577 + * these cases are handled: 578 + * 579 + * Address32 zero, Address64 [don't care] - Use Address64 580 + * 581 + * Address32 non-zero, Address64 zero - Copy/use Address32 582 + * Address32 non-zero == Address64 non-zero - Use Address64 583 + * Address32 non-zero != Address64 non-zero - Warning, use Address64 584 + * 585 + * Override: if acpi_gbl_use32_bit_fadt_addresses is TRUE, and: 586 + * Address32 non-zero != Address64 non-zero - Warning, copy/use Address32 587 + * 588 + * Note: space_id is always I/O for 32-bit legacy address fields 589 + */ 590 + if (address32) { 591 + if (!address64->address) { 592 + 593 + /* 64-bit address is zero, use 32-bit address */ 594 + 595 + acpi_tb_init_generic_address(address64, 596 + ACPI_ADR_SPACE_SYSTEM_IO, 597 + *ACPI_ADD_PTR(u8, 598 + &acpi_gbl_FADT, 599 + fadt_info_table 600 + [i]. 601 + length), 602 + (u64)address32, 603 + name); 604 + } else if (address64->address != (u64)address32) { 605 + 606 + /* Address mismatch */ 607 + 608 + ACPI_BIOS_WARNING((AE_INFO, 609 + "32/64X address mismatch in FADT/%s: " 610 + "0x%8.8X/0x%8.8X%8.8X, using %u-bit address", 611 + name, address32, 612 + ACPI_FORMAT_UINT64 613 + (address64->address), 614 + acpi_gbl_use32_bit_fadt_addresses 615 + ? 32 : 64)); 616 + 617 + if (acpi_gbl_use32_bit_fadt_addresses) { 618 + 619 + /* 32-bit address override */ 620 + 621 + acpi_tb_init_generic_address(address64, 622 + ACPI_ADR_SPACE_SYSTEM_IO, 623 + *ACPI_ADD_PTR 624 + (u8, 625 + &acpi_gbl_FADT, 626 + fadt_info_table 627 + [i]. 
628 + length), 629 + (u64) 630 + address32, 631 + name); 632 + } 633 + } 634 + } 512 635 513 636 /* 514 637 * For each extended field, check for length mismatch between the
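The 32/64-bit address cases enumerated in the tbfadt.c comments above condense into a small pure function. A stand-alone sketch of the same decision table (hypothetical names, not the ACPICA implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Decision table, mirroring the comments in the diff:
 *   Address64 zero                      -> promote Address32
 *   Address32 zero, Address64 non-zero  -> use Address64
 *   both non-zero, equal                -> use Address64
 *   both non-zero, mismatch             -> use Address64, unless the
 *                                          32-bit override flag is set */
static uint64_t select_address(uint32_t a32, uint64_t a64,
                               int use_32bit_override)
{
    if (!a64)
        return (uint64_t)a32;

    if (a32 && a64 != (uint64_t)a32 && use_32bit_override)
        return (uint64_t)a32;       /* mismatch + override */

    return a64;                     /* spec default: 64-bit wins */
}
```

Only the mismatch-with-override case deviates from the ACPI-specified behavior, and the real code warns via `ACPI_BIOS_WARNING` whenever the two fields disagree.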
+122 -92
drivers/acpi/acpica/tbutils.c
··· 49 49 ACPI_MODULE_NAME("tbutils") 50 50 51 51 /* Local prototypes */ 52 + static acpi_status acpi_tb_validate_xsdt(acpi_physical_address address); 53 + 52 54 static acpi_physical_address 53 55 acpi_tb_get_root_table_entry(u8 *table_entry, u32 table_entry_size); 54 - 55 - /******************************************************************************* 56 - * 57 - * FUNCTION: acpi_tb_check_xsdt 58 - * 59 - * PARAMETERS: address - Pointer to the XSDT 60 - * 61 - * RETURN: status 62 - * AE_OK - XSDT is okay 63 - * AE_NO_MEMORY - can't map XSDT 64 - * AE_INVALID_TABLE_LENGTH - invalid table length 65 - * AE_NULL_ENTRY - XSDT has NULL entry 66 - * 67 - * DESCRIPTION: validate XSDT 68 - ******************************************************************************/ 69 - 70 - static acpi_status 71 - acpi_tb_check_xsdt(acpi_physical_address address) 72 - { 73 - struct acpi_table_header *table; 74 - u32 length; 75 - u64 xsdt_entry_address; 76 - u8 *table_entry; 77 - u32 table_count; 78 - int i; 79 - 80 - table = acpi_os_map_memory(address, sizeof(struct acpi_table_header)); 81 - if (!table) 82 - return AE_NO_MEMORY; 83 - 84 - length = table->length; 85 - acpi_os_unmap_memory(table, sizeof(struct acpi_table_header)); 86 - if (length < sizeof(struct acpi_table_header)) 87 - return AE_INVALID_TABLE_LENGTH; 88 - 89 - table = acpi_os_map_memory(address, length); 90 - if (!table) 91 - return AE_NO_MEMORY; 92 - 93 - /* Calculate the number of tables described in XSDT */ 94 - table_count = 95 - (u32) ((table->length - 96 - sizeof(struct acpi_table_header)) / sizeof(u64)); 97 - table_entry = 98 - ACPI_CAST_PTR(u8, table) + sizeof(struct acpi_table_header); 99 - for (i = 0; i < table_count; i++) { 100 - ACPI_MOVE_64_TO_64(&xsdt_entry_address, table_entry); 101 - if (!xsdt_entry_address) { 102 - /* XSDT has NULL entry */ 103 - break; 104 - } 105 - table_entry += sizeof(u64); 106 - } 107 - acpi_os_unmap_memory(table, length); 108 - 109 - if (i < table_count) 110 - return 
AE_NULL_ENTRY; 111 - else 112 - return AE_OK; 113 - } 114 56 115 57 #if (!ACPI_REDUCED_HARDWARE) 116 58 /******************************************************************************* ··· 325 383 * Get the table physical address (32-bit for RSDT, 64-bit for XSDT): 326 384 * Note: Addresses are 32-bit aligned (not 64) in both RSDT and XSDT 327 385 */ 328 - if (table_entry_size == sizeof(u32)) { 386 + if (table_entry_size == ACPI_RSDT_ENTRY_SIZE) { 329 387 /* 330 388 * 32-bit platform, RSDT: Return 32-bit table entry 331 389 * 64-bit platform, RSDT: Expand 32-bit to 64-bit and return ··· 357 415 358 416 /******************************************************************************* 359 417 * 418 + * FUNCTION: acpi_tb_validate_xsdt 419 + * 420 + * PARAMETERS: address - Physical address of the XSDT (from RSDP) 421 + * 422 + * RETURN: Status. AE_OK if the table appears to be valid. 423 + * 424 + * DESCRIPTION: Validate an XSDT to ensure that it is of minimum size and does 425 + * not contain any NULL entries. A problem that is seen in the 426 + * field is that the XSDT exists, but is actually useless because 427 + * of one or more (or all) NULL entries. 
428 + * 429 + ******************************************************************************/ 430 + 431 + static acpi_status acpi_tb_validate_xsdt(acpi_physical_address xsdt_address) 432 + { 433 + struct acpi_table_header *table; 434 + u8 *next_entry; 435 + acpi_physical_address address; 436 + u32 length; 437 + u32 entry_count; 438 + acpi_status status; 439 + u32 i; 440 + 441 + /* Get the XSDT length */ 442 + 443 + table = 444 + acpi_os_map_memory(xsdt_address, sizeof(struct acpi_table_header)); 445 + if (!table) { 446 + return (AE_NO_MEMORY); 447 + } 448 + 449 + length = table->length; 450 + acpi_os_unmap_memory(table, sizeof(struct acpi_table_header)); 451 + 452 + /* 453 + * Minimum XSDT length is the size of the standard ACPI header 454 + * plus one physical address entry 455 + */ 456 + if (length < (sizeof(struct acpi_table_header) + ACPI_XSDT_ENTRY_SIZE)) { 457 + return (AE_INVALID_TABLE_LENGTH); 458 + } 459 + 460 + /* Map the entire XSDT */ 461 + 462 + table = acpi_os_map_memory(xsdt_address, length); 463 + if (!table) { 464 + return (AE_NO_MEMORY); 465 + } 466 + 467 + /* Get the number of entries and pointer to first entry */ 468 + 469 + status = AE_OK; 470 + next_entry = ACPI_ADD_PTR(u8, table, sizeof(struct acpi_table_header)); 471 + entry_count = (u32)((table->length - sizeof(struct acpi_table_header)) / 472 + ACPI_XSDT_ENTRY_SIZE); 473 + 474 + /* Validate each entry (physical address) within the XSDT */ 475 + 476 + for (i = 0; i < entry_count; i++) { 477 + address = 478 + acpi_tb_get_root_table_entry(next_entry, 479 + ACPI_XSDT_ENTRY_SIZE); 480 + if (!address) { 481 + 482 + /* Detected a NULL entry, XSDT is invalid */ 483 + 484 + status = AE_NULL_ENTRY; 485 + break; 486 + } 487 + 488 + next_entry += ACPI_XSDT_ENTRY_SIZE; 489 + } 490 + 491 + /* Unmap table */ 492 + 493 + acpi_os_unmap_memory(table, length); 494 + return (status); 495 + } 496 + 497 + /******************************************************************************* 498 + * 360 499 * 
FUNCTION: acpi_tb_parse_root_table 361 500 * 362 501 * PARAMETERS: rsdp - Pointer to the RSDP ··· 461 438 u32 table_count; 462 439 struct acpi_table_header *table; 463 440 acpi_physical_address address; 464 - acpi_physical_address uninitialized_var(rsdt_address); 465 441 u32 length; 466 442 u8 *table_entry; 467 443 acpi_status status; 468 444 469 445 ACPI_FUNCTION_TRACE(tb_parse_root_table); 470 446 471 - /* 472 - * Map the entire RSDP and extract the address of the RSDT or XSDT 473 - */ 447 + /* Map the entire RSDP and extract the address of the RSDT or XSDT */ 448 + 474 449 rsdp = acpi_os_map_memory(rsdp_address, sizeof(struct acpi_table_rsdp)); 475 450 if (!rsdp) { 476 451 return_ACPI_STATUS(AE_NO_MEMORY); ··· 478 457 ACPI_CAST_PTR(struct acpi_table_header, 479 458 rsdp)); 480 459 481 - /* Differentiate between RSDT and XSDT root tables */ 460 + /* Use XSDT if present and not overridden. Otherwise, use RSDT */ 482 461 483 - if (rsdp->revision > 1 && rsdp->xsdt_physical_address 484 - && !acpi_rsdt_forced) { 462 + if ((rsdp->revision > 1) && 463 + rsdp->xsdt_physical_address && !acpi_gbl_do_not_use_xsdt) { 485 464 /* 486 - * Root table is an XSDT (64-bit physical addresses). We must use the 487 - * XSDT if the revision is > 1 and the XSDT pointer is present, as per 488 - * the ACPI specification. 465 + * RSDP contains an XSDT (64-bit physical addresses). We must use 466 + * the XSDT if the revision is > 1 and the XSDT pointer is present, 467 + * as per the ACPI specification. 
489 468 */ 490 469 address = (acpi_physical_address) rsdp->xsdt_physical_address; 491 - table_entry_size = sizeof(u64); 492 - rsdt_address = (acpi_physical_address) 493 - rsdp->rsdt_physical_address; 470 + table_entry_size = ACPI_XSDT_ENTRY_SIZE; 494 471 } else { 495 472 /* Root table is an RSDT (32-bit physical addresses) */ 496 473 497 474 address = (acpi_physical_address) rsdp->rsdt_physical_address; 498 - table_entry_size = sizeof(u32); 475 + table_entry_size = ACPI_RSDT_ENTRY_SIZE; 499 476 } 500 477 501 478 /* ··· 502 483 */ 503 484 acpi_os_unmap_memory(rsdp, sizeof(struct acpi_table_rsdp)); 504 485 505 - if (table_entry_size == sizeof(u64)) { 506 - if (acpi_tb_check_xsdt(address) == AE_NULL_ENTRY) { 507 - /* XSDT has NULL entry, RSDT is used */ 508 - address = rsdt_address; 509 - table_entry_size = sizeof(u32); 510 - ACPI_WARNING((AE_INFO, "BIOS XSDT has NULL entry, " 511 - "using RSDT")); 486 + /* 487 + * If it is present and used, validate the XSDT for access/size 488 + * and ensure that all table entries are at least non-NULL 489 + */ 490 + if (table_entry_size == ACPI_XSDT_ENTRY_SIZE) { 491 + status = acpi_tb_validate_xsdt(address); 492 + if (ACPI_FAILURE(status)) { 493 + ACPI_BIOS_WARNING((AE_INFO, 494 + "XSDT is invalid (%s), using RSDT", 495 + acpi_format_exception(status))); 496 + 497 + /* Fall back to the RSDT */ 498 + 499 + address = 500 + (acpi_physical_address) rsdp->rsdt_physical_address; 501 + table_entry_size = ACPI_RSDT_ENTRY_SIZE; 512 502 } 513 503 } 504 + 514 505 /* Map the RSDT/XSDT table header to get the full table length */ 515 506 516 507 table = acpi_os_map_memory(address, sizeof(struct acpi_table_header)); ··· 530 501 531 502 acpi_tb_print_table_header(address, table); 532 503 533 - /* Get the length of the full table, verify length and map entire table */ 534 - 504 + /* 505 + * Validate length of the table, and map entire table. 506 + * Minimum length table must contain at least one entry. 
507 + */ 535 508 length = table->length; 536 509 acpi_os_unmap_memory(table, sizeof(struct acpi_table_header)); 537 510 538 - if (length < sizeof(struct acpi_table_header)) { 511 + if (length < (sizeof(struct acpi_table_header) + table_entry_size)) { 539 512 ACPI_BIOS_ERROR((AE_INFO, 540 513 "Invalid table length 0x%X in RSDT/XSDT", 541 514 length)); ··· 557 526 return_ACPI_STATUS(status); 558 527 } 559 528 560 - /* Calculate the number of tables described in the root table */ 529 + /* Get the number of entries and pointer to first entry */ 561 530 562 531 table_count = (u32)((table->length - sizeof(struct acpi_table_header)) / 563 532 table_entry_size); 533 + table_entry = ACPI_ADD_PTR(u8, table, sizeof(struct acpi_table_header)); 534 + 564 535 /* 565 536 * First two entries in the table array are reserved for the DSDT 566 537 * and FACS, which are not actually present in the RSDT/XSDT - they 567 538 * come from the FADT 568 539 */ 569 - table_entry = 570 - ACPI_CAST_PTR(u8, table) + sizeof(struct acpi_table_header); 571 540 acpi_gbl_root_table_list.current_table_count = 2; 572 541 573 - /* 574 - * Initialize the root table array from the RSDT/XSDT 575 - */ 542 + /* Initialize the root table array from the RSDT/XSDT */ 543 + 576 544 for (i = 0; i < table_count; i++) { 577 545 if (acpi_gbl_root_table_list.current_table_count >= 578 546 acpi_gbl_root_table_list.max_table_count) { ··· 614 584 acpi_tb_install_table(acpi_gbl_root_table_list.tables[i]. 615 585 address, NULL, i); 616 586 617 - /* Special case for FADT - get the DSDT and FACS */ 587 + /* Special case for FADT - validate it then get the DSDT and FACS */ 618 588 619 589 if (ACPI_COMPARE_NAME 620 590 (&acpi_gbl_root_table_list.tables[i].signature,
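The core of the new `acpi_tb_validate_xsdt` is the NULL-entry scan: any zero physical address makes the whole XSDT unusable and triggers the RSDT fallback. A minimal stand-alone version of that scan (hypothetical names, operating on an in-memory array rather than mapped firmware tables):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Walk an array of 64-bit root-table entries; a single NULL entry
 * invalidates the table, which is the condition the kernel uses to
 * fall back from the XSDT to the RSDT. */
static int xsdt_entries_valid(const uint64_t *entries, size_t count)
{
    size_t i;

    for (i = 0; i < count; i++) {
        if (!entries[i])
            return 0;       /* NULL entry: XSDT unusable */
    }
    return 1;
}
```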
+13 -6
drivers/acpi/acpica/utaddress.c
···
    while (range_info) {
        /*
-        * Check if the requested Address/Length overlaps this address_range.
-        * Four cases to consider:
+        * Check if the requested address/length overlaps this
+        * address range. There are four cases to consider:
         *
-        * 1) Input address/length is contained completely in the address range
+        * 1) Input address/length is contained completely in the
+        *    address range
         * 2) Input address/length overlaps range at the range start
         * 3) Input address/length overlaps range at the range end
         * 4) Input address/length completely encompasses the range
···
                    region_node);

                ACPI_WARNING((AE_INFO,
-                         "0x%p-0x%p %s conflicts with Region %s %d",
+                         "%s range 0x%p-0x%p conflicts with OpRegion 0x%p-0x%p (%s)",
+                         acpi_ut_get_region_name(space_id),
                          ACPI_CAST_PTR(void, address),
                          ACPI_CAST_PTR(void, end_address),
-                         acpi_ut_get_region_name(space_id),
-                         pathname, overlap_count));
+                         ACPI_CAST_PTR(void,
+                                   range_info->start_address),
+                         ACPI_CAST_PTR(void,
+                                   range_info->end_address),
+                         pathname));
                ACPI_FREE(pathname);
            }
        }
+7 -3
drivers/acpi/acpica/utalloc.c
···
        return (AE_BUFFER_OVERFLOW);

    case ACPI_ALLOCATE_BUFFER:
-
-       /* Allocate a new buffer */
-
+       /*
+        * Allocate a new buffer. We directly call acpi_os_allocate here to
+        * purposefully bypass the (optionally enabled) internal allocation
+        * tracking mechanism since we only want to track internal
+        * allocations. Note: The caller should use acpi_os_free to free this
+        * buffer created via ACPI_ALLOCATE_BUFFER.
+        */
        buffer->pointer = acpi_os_allocate(required_length);
        break;
+6 -6
drivers/acpi/acpica/utcache.c
···
    ACPI_FUNCTION_NAME(os_acquire_object);

    if (!cache) {
-       return (NULL);
+       return_PTR(NULL);
    }

    status = acpi_ut_acquire_mutex(ACPI_MTX_CACHES);
    if (ACPI_FAILURE(status)) {
-       return (NULL);
+       return_PTR(NULL);
    }

    ACPI_MEM_TRACKING(cache->requests++);
···
        status = acpi_ut_release_mutex(ACPI_MTX_CACHES);
        if (ACPI_FAILURE(status)) {
-           return (NULL);
+           return_PTR(NULL);
        }

        /* Clear (zero) the previously used Object */
···
        status = acpi_ut_release_mutex(ACPI_MTX_CACHES);
        if (ACPI_FAILURE(status)) {
-           return (NULL);
+           return_PTR(NULL);
        }

        object = ACPI_ALLOCATE_ZEROED(cache->object_size);
        if (!object) {
-           return (NULL);
+           return_PTR(NULL);
        }
    }

-   return (object);
+   return_PTR(object);
}
#endif  /* ACPI_USE_LOCAL_CACHE */
+2 -2
drivers/acpi/acpica/utdebug.c
···
     */
    acpi_os_printf("%9s-%04ld ", module_name, line_number);

-#ifdef ACPI_EXEC_APP
+#ifdef ACPI_APPLICATION
    /*
-    * For acpi_exec only, emit the thread ID and nesting level.
+    * For acpi_exec/iASL only, emit the thread ID and nesting level.
     * Note: nesting level is really only useful during a single-thread
     * execution. Otherwise, multiple threads will keep resetting the
     * level.
-4
drivers/acpi/acpica/utglobal.c
···
/* Public globals */

ACPI_EXPORT_SYMBOL(acpi_gbl_FADT)
-
ACPI_EXPORT_SYMBOL(acpi_dbg_level)
-
ACPI_EXPORT_SYMBOL(acpi_dbg_layer)
-
ACPI_EXPORT_SYMBOL(acpi_gpe_count)
-
ACPI_EXPORT_SYMBOL(acpi_current_gpe_count)
+10 -2
drivers/acpi/acpica/utxfinit.c
···

    /* If configured, initialize the AML debugger */

-   ACPI_DEBUGGER_EXEC(status = acpi_db_initialize());
-   return_ACPI_STATUS(status);
+#ifdef ACPI_DEBUGGER
+   status = acpi_db_initialize();
+   if (ACPI_FAILURE(status)) {
+       ACPI_EXCEPTION((AE_INFO, status,
+               "During Debugger initialization"));
+       return_ACPI_STATUS(status);
+   }
+#endif
+
+   return_ACPI_STATUS(AE_OK);
}

ACPI_EXPORT_SYMBOL_INIT(acpi_initialize_subsystem)
-1
drivers/acpi/apei/apei-base.c
···
#include <linux/module.h>
#include <linux/init.h>
#include <linux/acpi.h>
-#include <linux/acpi_io.h>
#include <linux/slab.h>
#include <linux/io.h>
#include <linux/kref.h>
-1
drivers/acpi/apei/apei-internal.h
···

#include <linux/cper.h>
#include <linux/acpi.h>
-#include <linux/acpi_io.h>

struct apei_exec_context;
-1
drivers/acpi/apei/einj.c
···
#include <linux/nmi.h>
#include <linux/delay.h>
#include <linux/mm.h>
-#include <acpi/acpi.h>
#include <asm/unaligned.h>

#include "apei-internal.h"
-1
drivers/acpi/apei/ghes.c
···
#include <linux/module.h>
#include <linux/init.h>
#include <linux/acpi.h>
-#include <linux/acpi_io.h>
#include <linux/io.h>
#include <linux/interrupt.h>
#include <linux/timer.h>
+1 -2
drivers/acpi/battery.c
···
#include <linux/suspend.h>
#include <asm/unaligned.h>

-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
#include <linux/power_supply.h>

#define PREFIX "ACPI: "
+50 -1
drivers/acpi/blacklist.c
···
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/acpi.h>
-#include <acpi/acpi_bus.h>
#include <linux/dmi.h>

#include "internal.h"
···
    .matches = {
            DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
            DMI_MATCH(DMI_PRODUCT_VERSION, "2349D15"),
+   },
+},
+{
+   .callback = dmi_disable_osi_win8,
+   .ident = "HP ProBook 2013 models",
+   .matches = {
+           DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+           DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook "),
+           DMI_MATCH(DMI_PRODUCT_NAME, " G1"),
+   },
+},
+{
+   .callback = dmi_disable_osi_win8,
+   .ident = "HP EliteBook 2013 models",
+   .matches = {
+           DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+           DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook "),
+           DMI_MATCH(DMI_PRODUCT_NAME, " G1"),
+   },
+},
+{
+   .callback = dmi_disable_osi_win8,
+   .ident = "HP ZBook 14",
+   .matches = {
+           DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+           DMI_MATCH(DMI_PRODUCT_NAME, "HP ZBook 14"),
+   },
+},
+{
+   .callback = dmi_disable_osi_win8,
+   .ident = "HP ZBook 15",
+   .matches = {
+           DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+           DMI_MATCH(DMI_PRODUCT_NAME, "HP ZBook 15"),
+   },
+},
+{
+   .callback = dmi_disable_osi_win8,
+   .ident = "HP ZBook 17",
+   .matches = {
+           DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+           DMI_MATCH(DMI_PRODUCT_NAME, "HP ZBook 17"),
+   },
+},
+{
+   .callback = dmi_disable_osi_win8,
+   .ident = "HP EliteBook 8780w",
+   .matches = {
+           DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+           DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 8780w"),
    },
},
+5 -72
drivers/acpi/bus.c
···
#include <asm/mpspec.h>
#endif
#include <linux/pci.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
#include <acpi/apei.h>
#include <linux/dmi.h>
#include <linux/suspend.h>
···
struct acpi_device *acpi_root;
struct proc_dir_entry *acpi_root_dir;
EXPORT_SYMBOL(acpi_root_dir);
-
-#define STRUCT_TO_INT(s)   (*((int*)&s))
-

#ifdef CONFIG_X86
static int set_copy_dsdt(const struct dmi_system_id *id)
···
    if (ACPI_FAILURE(status))
        return -ENODEV;

-   STRUCT_TO_INT(device->status) = (int) sta;
+   acpi_set_device_status(device, sta);

    if (device->status.functional && !device->status.present) {
        ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Device [%s] status [%08x]: "
                 "functional but not present;\n",
-                device->pnp.bus_id,
-                (u32) STRUCT_TO_INT(device->status)));
+                device->pnp.bus_id, (u32)sta));
    }

    ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Device [%s] status [%08x]\n",
-            device->pnp.bus_id,
-            (u32) STRUCT_TO_INT(device->status)));
+            device->pnp.bus_id, (u32)sta));
    return 0;
}
EXPORT_SYMBOL(acpi_bus_get_status);
···
                Notification Handling
   -------------------------------------------------------------------------- */

-static void acpi_bus_check_device(acpi_handle handle)
-{
-   struct acpi_device *device;
-   acpi_status status;
-   struct acpi_device_status old_status;
-
-   if (acpi_bus_get_device(handle, &device))
-       return;
-   if (!device)
-       return;
-
-   old_status = device->status;
-
-   /*
-    * Make sure this device's parent is present before we go about
-    * messing with the device.
-    */
-   if (device->parent && !device->parent->status.present) {
-       device->status = device->parent->status;
-       return;
-   }
-
-   status = acpi_bus_get_status(device);
-   if (ACPI_FAILURE(status))
-       return;
-
-   if (STRUCT_TO_INT(old_status) == STRUCT_TO_INT(device->status))
-       return;
-
-   /*
-    * Device Insertion/Removal
-    */
-   if ((device->status.present) && !(old_status.present)) {
-       ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Device insertion detected\n"));
-       /* TBD: Handle device insertion */
-   } else if (!(device->status.present) && (old_status.present)) {
-       ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Device removal detected\n"));
-       /* TBD: Handle device removal */
-   }
-}
-
-static void acpi_bus_check_scope(acpi_handle handle)
-{
-   /* Status Change? */
-   acpi_bus_check_device(handle);
-
-   /*
-    * TBD: Enumerate child devices within this device's scope and
-    * run acpi_bus_check_device()'s on them.
-    */
-}
-
/**
 * acpi_bus_notify
 * ---------------
···
    switch (type) {

    case ACPI_NOTIFY_BUS_CHECK:
-       acpi_bus_check_scope(handle);
-       /*
-        * TBD: We'll need to outsource certain events to non-ACPI
-        * drivers via the device manager (device.c).
-        */
+       /* TBD */
        break;

    case ACPI_NOTIFY_DEVICE_CHECK:
-       acpi_bus_check_device(handle);
-       /*
-        * TBD: We'll need to outsource certain events to non-ACPI
-        * drivers via the device manager (device.c).
-        */
+       /* TBD */
        break;

    case ACPI_NOTIFY_DEVICE_WAKE:
+1 -20
drivers/acpi/button.c
···
#include <linux/seq_file.h>
#include <linux/input.h>
#include <linux/slab.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
#include <acpi/button.h>

#define PREFIX "ACPI: "
···
    struct input_dev *input;
    char phys[32];          /* for input device */
    unsigned long pushed;
-   bool wakeup_enabled;
};

static BLOCKING_NOTIFIER_HEAD(acpi_lid_notifier);
···
        lid_device = device;
    }

-   if (device->wakeup.flags.valid) {
-       /* Button's GPE is run-wake GPE */
-       acpi_enable_gpe(device->wakeup.gpe_device,
-               device->wakeup.gpe_number);
-       if (!device_may_wakeup(&device->dev)) {
-           device_set_wakeup_enable(&device->dev, true);
-           button->wakeup_enabled = true;
-       }
-   }
-
    printk(KERN_INFO PREFIX "%s [%s]\n", name, acpi_device_bid(device));
    return 0;
···
static int acpi_button_remove(struct acpi_device *device)
{
    struct acpi_button *button = acpi_driver_data(device);
-
-   if (device->wakeup.flags.valid) {
-       acpi_disable_gpe(device->wakeup.gpe_device,
-                device->wakeup.gpe_number);
-       if (button->wakeup_enabled)
-           device_set_wakeup_enable(&device->dev, false);
-   }

    acpi_button_remove_fs(device);
    input_unregister_device(button->input);
+50 -5
drivers/acpi/container.c
···
 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 */
#include <linux/acpi.h>
-
-#include "internal.h"
+#include <linux/container.h>

#include "internal.h"
···
    {"", 0},
};

-static int container_device_attach(struct acpi_device *device,
+static int acpi_container_offline(struct container_dev *cdev)
+{
+   struct acpi_device *adev = ACPI_COMPANION(&cdev->dev);
+   struct acpi_device *child;
+
+   /* Check all of the dependent devices' physical companions. */
+   list_for_each_entry(child, &adev->children, node)
+       if (!acpi_scan_is_offline(child, false))
+           return -EBUSY;
+
+   return 0;
+}
+
+static void acpi_container_release(struct device *dev)
+{
+   kfree(to_container_dev(dev));
+}
+
+static int container_device_attach(struct acpi_device *adev,
                   const struct acpi_device_id *not_used)
{
-   /* This is necessary for container hotplug to work. */
+   struct container_dev *cdev;
+   struct device *dev;
+   int ret;
+
+   cdev = kzalloc(sizeof(*cdev), GFP_KERNEL);
+   if (!cdev)
+       return -ENOMEM;
+
+   cdev->offline = acpi_container_offline;
+   dev = &cdev->dev;
+   dev->bus = &container_subsys;
+   dev_set_name(dev, "%s", dev_name(&adev->dev));
+   ACPI_COMPANION_SET(dev, adev);
+   dev->release = acpi_container_release;
+   ret = device_register(dev);
+   if (ret)
+       return ret;
+
+   adev->driver_data = dev;
    return 1;
+}
+
+static void container_device_detach(struct acpi_device *adev)
+{
+   struct device *dev = acpi_driver_data(adev);
+
+   adev->driver_data = NULL;
+   if (dev)
+       device_unregister(dev);
}

static struct acpi_scan_handler container_handler = {
    .ids = container_device_ids,
    .attach = container_device_attach,
+   .detach = container_device_detach,
    .hotplug = {
        .enabled = true,
-       .mode = AHM_CONTAINER,
+       .demand_offline = true,
    },
};
+1 -1
drivers/acpi/custom_method.c
···
#include <linux/kernel.h>
#include <linux/uaccess.h>
#include <linux/debugfs.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>

#include "internal.h"
+1 -1
drivers/acpi/debugfs.c
···
#include <linux/export.h>
#include <linux/init.h>
#include <linux/debugfs.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>

#define _COMPONENT      ACPI_SYSTEM_COMPONENT
ACPI_MODULE_NAME("debugfs");
+18 -4
drivers/acpi/device_pm.c
···
        return -EINVAL;

    device->power.state = ACPI_STATE_UNKNOWN;
+   if (!acpi_device_is_present(device))
+       return 0;

    result = acpi_device_get_power(device, &state);
    if (result)
···
    return ret;
}

-int acpi_bus_update_power(acpi_handle handle, int *state_p)
+int acpi_device_update_power(struct acpi_device *device, int *state_p)
{
-   struct acpi_device *device;
    int state;
    int result;

-   result = acpi_bus_get_device(handle, &device);
-   if (result)
+   if (device->power.state == ACPI_STATE_UNKNOWN) {
+       result = acpi_bus_init_power(device);
+       if (!result && state_p)
+           *state_p = device->power.state;
+
        return result;
+   }

    result = acpi_device_get_power(device, &state);
    if (result)
···
        *state_p = state;

    return 0;
+}
+
+int acpi_bus_update_power(acpi_handle handle, int *state_p)
+{
+   struct acpi_device *device;
+   int result;
+
+   result = acpi_bus_get_device(handle, &device);
+   return result ? result : acpi_device_update_power(device, state_p);
}
EXPORT_SYMBOL_GPL(acpi_bus_update_power);
+5 -11
drivers/acpi/dock.c
···
#include <linux/jiffies.h>
#include <linux/stddef.h>
#include <linux/acpi.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+
+#include "internal.h"

#define PREFIX "ACPI: "
···
 */
static void dock_create_acpi_device(acpi_handle handle)
{
-   struct acpi_device *device;
+   struct acpi_device *device = NULL;
    int ret;

-   if (acpi_bus_get_device(handle, &device)) {
-       /*
-        * no device created for this object,
-        * so we should create one.
-        */
+   acpi_bus_get_device(handle, &device);
+   if (!acpi_device_enumerated(device)) {
        ret = acpi_bus_scan(handle);
        if (ret)
            pr_debug("error adding bus, %x\n", -ret);
···

void __init acpi_dock_init(void)
{
-   if (acpi_disabled)
-       return;
-
    /* look for dock stations and bays */
    acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
                ACPI_UINT32_MAX, find_dock_and_bay, NULL, NULL, NULL);
+3 -29
drivers/acpi/ec.c
···
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/slab.h>
-#include <asm/io.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
#include <linux/dmi.h>
+#include <asm/io.h>

#include "internal.h"
···
static unsigned int ec_storm_threshold  __read_mostly = 8;
module_param(ec_storm_threshold, uint, 0644);
MODULE_PARM_DESC(ec_storm_threshold, "Maxim false GPE numbers not considered as GPE storm");
-
-/* If we find an EC via the ECDT, we need to keep a ptr to its context */
-/* External interfaces use first EC only, so remember */
-typedef int (*acpi_ec_query_func) (void *data);

struct acpi_ec_query_handler {
    struct list_head node;
···

    return acpi_ec_transaction(ec, &t);
}
-
-/*
- * Externally callable EC access functions. For now, assume 1 EC only
- */
-int ec_burst_enable(void)
-{
-   if (!first_ec)
-       return -ENODEV;
-   return acpi_ec_burst_enable(first_ec);
-}
-
-EXPORT_SYMBOL(ec_burst_enable);
-
-int ec_burst_disable(void)
-{
-   if (!first_ec)
-       return -ENODEV;
-   return acpi_ec_burst_disable(first_ec);
-}
-
-EXPORT_SYMBOL(ec_burst_disable);

int ec_read(u8 addr, u8 *val)
{
···
            pr_err("Fail in evaluating the _REG object"
                   " of EC device. Broken bios is suspected.\n");
        } else {
+           acpi_disable_gpe(NULL, ec->gpe);
            acpi_remove_gpe_handler(NULL, ec->gpe,
                        &acpi_ec_gpe_handler);
-           acpi_disable_gpe(NULL, ec->gpe);
            return -ENODEV;
        }
    }
+1 -1
drivers/acpi/ec_sys.c
···
    .llseek = default_llseek,
};

-int acpi_ec_add_debugfs(struct acpi_ec *ec, unsigned int ec_device_count)
+static int acpi_ec_add_debugfs(struct acpi_ec *ec, unsigned int ec_device_count)
{
    struct dentry *dev_dir;
    char name[64];
+1 -1
drivers/acpi/event.c
···
#include <linux/init.h>
#include <linux/poll.h>
#include <linux/gfp.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
#include <net/netlink.h>
#include <net/genetlink.h>
+1 -2
drivers/acpi/fan.c
···
#include <linux/types.h>
#include <asm/uaccess.h>
#include <linux/thermal.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>

#define PREFIX "ACPI: "
+58 -103
drivers/acpi/glue.c
···
{
    if (acpi_disabled)
        return -ENODEV;
-   if (type && type->match && type->find_device) {
+   if (type && type->match && type->find_companion) {
        down_write(&bus_type_sem);
        list_add_tail(&type->list, &bus_type_list);
        up_write(&bus_type_sem);
···
#define FIND_CHILD_MIN_SCORE    1
#define FIND_CHILD_MAX_SCORE    2

-static acpi_status acpi_dev_present(acpi_handle handle, u32 lvl_not_used,
-                   void *not_used, void **ret_p)
-{
-   struct acpi_device *adev = NULL;
-
-   acpi_bus_get_device(handle, &adev);
-   if (adev) {
-       *ret_p = handle;
-       return AE_CTRL_TERMINATE;
-   }
-   return AE_OK;
-}
-
-static int do_find_child_checks(acpi_handle handle, bool is_bridge)
+static int find_child_checks(struct acpi_device *adev, bool check_children)
{
    bool sta_present = true;
    unsigned long long sta;
    acpi_status status;

-   status = acpi_evaluate_integer(handle, "_STA", NULL, &sta);
+   status = acpi_evaluate_integer(adev->handle, "_STA", NULL, &sta);
    if (status == AE_NOT_FOUND)
        sta_present = false;
    else if (ACPI_FAILURE(status) || !(sta & ACPI_STA_DEVICE_ENABLED))
        return -ENODEV;

-   if (is_bridge) {
-       void *test = NULL;
+   if (check_children && list_empty(&adev->children))
+       return -ENODEV;

-       /* Check if this object has at least one child device. */
-       acpi_walk_namespace(ACPI_TYPE_DEVICE, handle, 1,
-                   acpi_dev_present, NULL, NULL, &test);
-       if (!test)
-           return -ENODEV;
-   }
    return sta_present ? FIND_CHILD_MAX_SCORE : FIND_CHILD_MIN_SCORE;
}

-struct find_child_context {
-   u64 addr;
-   bool is_bridge;
-   acpi_handle ret;
-   int ret_score;
-};
-
-static acpi_status do_find_child(acpi_handle handle, u32 lvl_not_used,
-                void *data, void **not_used)
+struct acpi_device *acpi_find_child_device(struct acpi_device *parent,
+                      u64 address, bool check_children)
{
-   struct find_child_context *context = data;
-   unsigned long long addr;
-   acpi_status status;
-   int score;
+   struct acpi_device *adev, *ret = NULL;
+   int ret_score = 0;

-   status = acpi_evaluate_integer(handle, METHOD_NAME__ADR, NULL, &addr);
-   if (ACPI_FAILURE(status) || addr != context->addr)
-       return AE_OK;
+   if (!parent)
+       return NULL;

-   if (!context->ret) {
-       /* This is the first matching object. Save its handle. */
-       context->ret = handle;
-       return AE_OK;
+   list_for_each_entry(adev, &parent->children, node) {
+       unsigned long long addr;
+       acpi_status status;
+       int score;
+
+       status = acpi_evaluate_integer(adev->handle, METHOD_NAME__ADR,
+                          NULL, &addr);
+       if (ACPI_FAILURE(status) || addr != address)
+           continue;
+
+       if (!ret) {
+           /* This is the first matching object.  Save it. */
+           ret = adev;
+           continue;
+       }
+       /*
+        * There is more than one matching device object with the same
+        * _ADR value.  That really is unexpected, so we are kind of
+        * beyond the scope of the spec here.  We have to choose which
+        * one to return, though.
+        *
+        * First, check if the previously found object is good enough
+        * and return it if so.  Second, do the same for the object that
+        * we've just found.
+        */
+       if (!ret_score) {
+           ret_score = find_child_checks(ret, check_children);
+           if (ret_score == FIND_CHILD_MAX_SCORE)
+               return ret;
+       }
+       score = find_child_checks(adev, check_children);
+       if (score == FIND_CHILD_MAX_SCORE) {
+           return adev;
+       } else if (score > ret_score) {
+           ret = adev;
+           ret_score = score;
+       }
    }
-   /*
-    * There is more than one matching object with the same _ADR value.
-    * That really is unexpected, so we are kind of beyond the scope of the
-    * spec here.  We have to choose which one to return, though.
-    *
-    * First, check if the previously found object is good enough and return
-    * its handle if so.  Second, check the same for the object that we've
-    * just found.
-    */
-   if (!context->ret_score) {
-       score = do_find_child_checks(context->ret, context->is_bridge);
-       if (score == FIND_CHILD_MAX_SCORE)
-           return AE_CTRL_TERMINATE;
-       else
-           context->ret_score = score;
-   }
-   score = do_find_child_checks(handle, context->is_bridge);
-   if (score == FIND_CHILD_MAX_SCORE) {
-       context->ret = handle;
-       return AE_CTRL_TERMINATE;
-   } else if (score > context->ret_score) {
-       context->ret = handle;
-       context->ret_score = score;
-   }
-   return AE_OK;
+   return ret;
}
-
-acpi_handle acpi_find_child(acpi_handle parent, u64 addr, bool is_bridge)
-{
-   if (parent) {
-       struct find_child_context context = {
-           .addr = addr,
-           .is_bridge = is_bridge,
-       };
-
-       acpi_walk_namespace(ACPI_TYPE_DEVICE, parent, 1, do_find_child,
-                   NULL, &context, NULL);
-       return context.ret;
-   }
-   return NULL;
-}
-EXPORT_SYMBOL_GPL(acpi_find_child);
+EXPORT_SYMBOL_GPL(acpi_find_child_device);

static void acpi_physnode_link_name(char *buf, unsigned int node_id)
{
···
        strcpy(buf, PHYSICAL_NODE_STRING);
}

-int acpi_bind_one(struct device *dev, acpi_handle handle)
+int acpi_bind_one(struct device *dev, struct acpi_device *acpi_dev)
{
-   struct acpi_device *acpi_dev = NULL;
    struct acpi_device_physical_node *physical_node, *pn;
    char physical_node_name[PHYSICAL_NODE_NAME_SIZE];
    struct list_head *physnode_list;
···
    int retval = -EINVAL;

    if (ACPI_COMPANION(dev)) {
-       if (handle) {
+       if (acpi_dev) {
            dev_warn(dev, "ACPI companion already set\n");
            return -EINVAL;
        } else {
            acpi_dev = ACPI_COMPANION(dev);
        }
-   } else {
-       acpi_bus_get_device(handle, &acpi_dev);
    }
    if (!acpi_dev)
        return -EINVAL;
···
}
EXPORT_SYMBOL_GPL(acpi_unbind_one);

-void acpi_preset_companion(struct device *dev, acpi_handle parent, u64 addr)
-{
-   struct acpi_device *adev;
-
-   if (!acpi_bus_get_device(acpi_get_child(parent, addr), &adev))
-       ACPI_COMPANION_SET(dev, adev);
-}
-EXPORT_SYMBOL_GPL(acpi_preset_companion);
-
static int acpi_platform_notify(struct device *dev)
{
    struct acpi_bus_type *type = acpi_get_bus_type(dev);
-   acpi_handle handle;
    int ret;

    ret = acpi_bind_one(dev, NULL);
    if (ret && type) {
-       ret = type->find_device(dev, &handle);
-       if (ret) {
+       struct acpi_device *adev;
+
+       adev = type->find_companion(dev);
+       if (!adev) {
            DBG("Unable to get handle for %s\n", dev_name(dev));
+           ret = -ENODEV;
            goto out;
        }
-       ret = acpi_bind_one(dev, handle);
+       ret = acpi_bind_one(dev, adev);
        if (ret)
            goto out;
    }
-2
drivers/acpi/hed.c
···
#include <linux/module.h>
#include <linux/init.h>
#include <linux/acpi.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
#include <acpi/hed.h>

static struct acpi_device_id acpi_hed_ids[] = {
+16 -3
drivers/acpi/internal.h
···
int acpi_scan_init(void);
void acpi_pci_root_init(void);
void acpi_pci_link_init(void);
-void acpi_pci_root_hp_init(void);
void acpi_processor_init(void);
void acpi_platform_init(void);
int acpi_sysfs_init(void);
···
static inline void acpi_lpss_init(void) {}
#endif

+bool acpi_queue_hotplug_work(struct work_struct *work);
+bool acpi_scan_is_offline(struct acpi_device *adev, bool uevent);
+
/* --------------------------------------------------------------------------
                     Device Node Initialization / Removal
   -------------------------------------------------------------------------- */
···
             int type, unsigned long long sta);
void acpi_device_add_finalize(struct acpi_device *device);
void acpi_free_pnp_ids(struct acpi_device_pnp *pnp);
-int acpi_bind_one(struct device *dev, acpi_handle handle);
+int acpi_bind_one(struct device *dev, struct acpi_device *adev);
int acpi_unbind_one(struct device *dev);
-void acpi_bus_device_eject(void *data, u32 ost_src);
+bool acpi_device_is_present(struct acpi_device *adev);

/* --------------------------------------------------------------------------
                                  Power Resource
   -------------------------------------------------------------------------- */
···
int acpi_power_get_inferred_state(struct acpi_device *device, int *state);
int acpi_power_on_resources(struct acpi_device *device, int state);
int acpi_power_transition(struct acpi_device *device, int state);
+
+int acpi_device_update_power(struct acpi_device *device, int *state_p);

int acpi_wakeup_device_init(void);
void acpi_early_processor_set_pdc(void);
···
extern struct acpi_ec *first_ec;

+/* If we find an EC via the ECDT, we need to keep a ptr to its context */
+/* External interfaces use first EC only, so remember */
+typedef int (*acpi_ec_query_func) (void *data);
+
int acpi_ec_init(void);
int acpi_ec_ecdt_probe(void);
int acpi_boot_ec_enable(void);
void acpi_ec_block_transactions(void);
void acpi_ec_unblock_transactions(void);
void acpi_ec_unblock_transactions_early(void);
+int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
+                 acpi_handle handle, acpi_ec_query_func func,
+                 void *data);
+void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit);
+

/*--------------------------------------------------------------------------
                                  Suspend/Resume
-1
drivers/acpi/numa.c
···
#include <linux/errno.h>
#include <linux/acpi.h>
#include <linux/numa.h>
-#include <acpi/acpi_bus.h>

#define PREFIX "ACPI: "
+2 -1
drivers/acpi/nvs.c
···
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/acpi.h>
-#include <linux/acpi_io.h>
+
+#include "internal.h"

/* ACPI NVS regions, APEI may use it */
+7 -7
drivers/acpi/osl.c
···
#include <linux/workqueue.h>
#include <linux/nmi.h>
#include <linux/acpi.h>
-#include <linux/acpi_io.h>
#include <linux/efi.h>
#include <linux/ioport.h>
#include <linux/list.h>
···
#include <asm/io.h>
#include <asm/uaccess.h>

-#include <acpi/acpi.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/processor.h>
#include "internal.h"

#define _COMPONENT      ACPI_OS_SERVICES
···
static int all_tables_size;

/* Copied from acpica/tbutils.c:acpi_tb_checksum() */
-u8 __init acpi_table_checksum(u8 *buffer, u32 length)
+static u8 __init acpi_table_checksum(u8 *buffer, u32 length)
{
    u8 sum = 0;
    u8 *end = buffer + length;
···
    return AE_OK;
}

+bool acpi_queue_hotplug_work(struct work_struct *work)
+{
+   return queue_work(kacpi_hotplug_wq, work);
+}

acpi_status
acpi_os_create_semaphore(u32 max_units, u32 initial_units, acpi_handle * handle)
···
        jiffies = MAX_SCHEDULE_TIMEOUT;
    else
        jiffies = msecs_to_jiffies(timeout);
- 
+
    ret = down_timeout(sem, jiffies);
    if (ret)
        status = AE_TIME;
···
{
    kacpid_wq = alloc_workqueue("kacpid", 0, 1);
    kacpi_notify_wq = alloc_workqueue("kacpi_notify", 0, 1);
-   kacpi_hotplug_wq = alloc_workqueue("kacpi_hotplug", 0, 1);
+   kacpi_hotplug_wq = alloc_ordered_workqueue("kacpi_hotplug", 0);
    BUG_ON(!kacpid_wq);
    BUG_ON(!kacpi_notify_wq);
    BUG_ON(!kacpi_hotplug_wq);
-2
drivers/acpi/pci_irq.c
···
#include <linux/pci.h>
#include <linux/acpi.h>
#include <linux/slab.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>

#define PREFIX "ACPI: "
+2 -2
drivers/acpi/pci_link.c
··· 39 39 #include <linux/pci.h> 40 40 #include <linux/mutex.h> 41 41 #include <linux/slab.h> 42 + #include <linux/acpi.h> 42 43 43 - #include <acpi/acpi_bus.h> 44 - #include <acpi/acpi_drivers.h> 44 + #include "internal.h" 45 45 46 46 #define PREFIX "ACPI: " 47 47
+12 -114
drivers/acpi/pci_root.c
··· 35 35 #include <linux/pci-aspm.h> 36 36 #include <linux/acpi.h> 37 37 #include <linux/slab.h> 38 - #include <acpi/acpi_bus.h> 39 - #include <acpi/acpi_drivers.h> 40 - #include <acpi/apei.h> 38 + #include <acpi/apei.h> /* for acpi_hest_init() */ 41 39 42 40 #include "internal.h" 43 41 ··· 48 50 static int acpi_pci_root_add(struct acpi_device *device, 49 51 const struct acpi_device_id *not_used); 50 52 static void acpi_pci_root_remove(struct acpi_device *device); 53 + 54 + static int acpi_pci_root_scan_dependent(struct acpi_device *adev) 55 + { 56 + acpiphp_check_host_bridge(adev->handle); 57 + return 0; 58 + } 51 59 52 60 #define ACPI_PCIE_REQ_SUPPORT (OSC_PCI_EXT_CONFIG_SUPPORT \ 53 61 | OSC_PCI_ASPM_SUPPORT \ ··· 70 66 .attach = acpi_pci_root_add, 71 67 .detach = acpi_pci_root_remove, 72 68 .hotplug = { 73 - .ignore = true, 69 + .enabled = true, 70 + .scan_dependent = acpi_pci_root_scan_dependent, 74 71 }, 75 72 }; 76 73 ··· 635 630 void __init acpi_pci_root_init(void) 636 631 { 637 632 acpi_hest_init(); 638 - 639 - if (!acpi_pci_disabled) { 640 - pci_acpi_crs_quirks(); 641 - acpi_scan_add_handler(&pci_root_handler); 642 - } 643 - } 644 - /* Support root bridge hotplug */ 645 - 646 - static void handle_root_bridge_insertion(acpi_handle handle) 647 - { 648 - struct acpi_device *device; 649 - 650 - if (!acpi_bus_get_device(handle, &device)) { 651 - dev_printk(KERN_DEBUG, &device->dev, 652 - "acpi device already exists; ignoring notify\n"); 633 + if (acpi_pci_disabled) 653 634 return; 654 - } 655 635 656 - if (acpi_bus_scan(handle)) 657 - acpi_handle_err(handle, "cannot add bridge to acpi list\n"); 658 - } 659 - 660 - static void hotplug_event_root(void *data, u32 type) 661 - { 662 - acpi_handle handle = data; 663 - struct acpi_pci_root *root; 664 - 665 - acpi_scan_lock_acquire(); 666 - 667 - root = acpi_pci_find_root(handle); 668 - 669 - switch (type) { 670 - case ACPI_NOTIFY_BUS_CHECK: 671 - /* bus enumerate */ 672 - acpi_handle_printk(KERN_DEBUG, handle, 673 - 
"Bus check notify on %s\n", __func__); 674 - if (root) 675 - acpiphp_check_host_bridge(handle); 676 - else 677 - handle_root_bridge_insertion(handle); 678 - 679 - break; 680 - 681 - case ACPI_NOTIFY_DEVICE_CHECK: 682 - /* device check */ 683 - acpi_handle_printk(KERN_DEBUG, handle, 684 - "Device check notify on %s\n", __func__); 685 - if (!root) 686 - handle_root_bridge_insertion(handle); 687 - break; 688 - 689 - case ACPI_NOTIFY_EJECT_REQUEST: 690 - /* request device eject */ 691 - acpi_handle_printk(KERN_DEBUG, handle, 692 - "Device eject notify on %s\n", __func__); 693 - if (!root) 694 - break; 695 - 696 - get_device(&root->device->dev); 697 - 698 - acpi_scan_lock_release(); 699 - 700 - acpi_bus_device_eject(root->device, ACPI_NOTIFY_EJECT_REQUEST); 701 - return; 702 - default: 703 - acpi_handle_warn(handle, 704 - "notify_handler: unknown event type 0x%x\n", 705 - type); 706 - break; 707 - } 708 - 709 - acpi_scan_lock_release(); 710 - } 711 - 712 - static void handle_hotplug_event_root(acpi_handle handle, u32 type, 713 - void *context) 714 - { 715 - acpi_hotplug_execute(hotplug_event_root, handle, type); 716 - } 717 - 718 - static acpi_status __init 719 - find_root_bridges(acpi_handle handle, u32 lvl, void *context, void **rv) 720 - { 721 - acpi_status status; 722 - int *count = (int *)context; 723 - 724 - if (!acpi_is_root_bridge(handle)) 725 - return AE_OK; 726 - 727 - (*count)++; 728 - 729 - status = acpi_install_notify_handler(handle, ACPI_SYSTEM_NOTIFY, 730 - handle_hotplug_event_root, NULL); 731 - if (ACPI_FAILURE(status)) 732 - acpi_handle_printk(KERN_DEBUG, handle, 733 - "notify handler is not installed, exit status: %u\n", 734 - (unsigned int)status); 735 - else 736 - acpi_handle_printk(KERN_DEBUG, handle, 737 - "notify handler is installed\n"); 738 - 739 - return AE_OK; 740 - } 741 - 742 - void __init acpi_pci_root_hp_init(void) 743 - { 744 - int num = 0; 745 - 746 - acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT, 747 - ACPI_UINT32_MAX, 
find_root_bridges, NULL, &num, NULL); 748 - 749 - printk(KERN_DEBUG "Found %d acpi root devices\n", num); 636 + pci_acpi_crs_quirks(); 637 + acpi_scan_add_handler_with_hotplug(&pci_root_handler, "pci_root"); 750 638 }
+1
drivers/acpi/pci_slot.c
··· 35 35 #include <linux/pci.h> 36 36 #include <linux/acpi.h> 37 37 #include <linux/dmi.h> 38 + #include <linux/pci-acpi.h> 38 39 39 40 static bool debug; 40 41 static int check_sta_before_sun;
+1 -2
drivers/acpi/power.c
··· 42 42 #include <linux/slab.h> 43 43 #include <linux/pm_runtime.h> 44 44 #include <linux/sysfs.h> 45 - #include <acpi/acpi_bus.h> 46 - #include <acpi/acpi_drivers.h> 45 + #include <linux/acpi.h> 47 46 #include "sleep.h" 48 47 #include "internal.h" 49 48
+2 -3
drivers/acpi/proc.c
··· 3 3 #include <linux/export.h> 4 4 #include <linux/suspend.h> 5 5 #include <linux/bcd.h> 6 + #include <linux/acpi.h> 6 7 #include <asm/uaccess.h> 7 8 8 - #include <acpi/acpi_bus.h> 9 - #include <acpi/acpi_drivers.h> 10 - 11 9 #include "sleep.h" 10 + #include "internal.h" 12 11 13 12 #define _COMPONENT ACPI_SYSTEM_COMPONENT 14 13
+1 -2
drivers/acpi/processor_core.c
··· 10 10 #include <linux/export.h> 11 11 #include <linux/dmi.h> 12 12 #include <linux/slab.h> 13 - 14 - #include <acpi/acpi_drivers.h> 13 + #include <linux/acpi.h> 15 14 #include <acpi/processor.h> 16 15 17 16 #include "internal.h"
+4 -4
drivers/acpi/processor_driver.c
··· 224 224 225 225 static int acpi_processor_start(struct device *dev) 226 226 { 227 - struct acpi_device *device; 227 + struct acpi_device *device = ACPI_COMPANION(dev); 228 228 229 - if (acpi_bus_get_device(ACPI_HANDLE(dev), &device)) 229 + if (!device) 230 230 return -ENODEV; 231 231 232 232 return __acpi_processor_start(device); ··· 234 234 235 235 static int acpi_processor_stop(struct device *dev) 236 236 { 237 - struct acpi_device *device; 237 + struct acpi_device *device = ACPI_COMPANION(dev); 238 238 struct acpi_processor *pr; 239 239 240 - if (acpi_bus_get_device(ACPI_HANDLE(dev), &device)) 240 + if (!device) 241 241 return 0; 242 242 243 243 acpi_remove_notify_handler(device->handle, ACPI_DEVICE_NOTIFY,
+17 -20
drivers/acpi/processor_idle.c
··· 35 35 #include <linux/clockchips.h> 36 36 #include <linux/cpuidle.h> 37 37 #include <linux/syscore_ops.h> 38 + #include <acpi/processor.h> 38 39 39 40 /* 40 41 * Include the apic definitions for x86 to have the APIC timer related defines ··· 46 45 #ifdef CONFIG_X86 47 46 #include <asm/apic.h> 48 47 #endif 49 - 50 - #include <acpi/acpi_bus.h> 51 - #include <acpi/processor.h> 52 48 53 49 #define PREFIX "ACPI: " 54 50 ··· 211 213 212 214 static void acpi_processor_resume(void) 213 215 { 214 - u32 resumed_bm_rld; 216 + u32 resumed_bm_rld = 0; 215 217 216 218 acpi_read_bit_register(ACPI_BITREG_BUS_MASTER_RLD, &resumed_bm_rld); 217 219 if (resumed_bm_rld == saved_bm_rld) ··· 596 598 case ACPI_STATE_C2: 597 599 if (!cx->address) 598 600 break; 599 - cx->valid = 1; 601 + cx->valid = 1; 600 602 break; 601 603 602 604 case ACPI_STATE_C3: ··· 778 780 if (unlikely(!pr)) 779 781 return -EINVAL; 780 782 783 + #ifdef CONFIG_HOTPLUG_CPU 784 + if ((cx->type != ACPI_STATE_C1) && (num_online_cpus() > 1) && 785 + !pr->flags.has_cst && 786 + !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED)) 787 + return acpi_idle_enter_c1(dev, drv, CPUIDLE_DRIVER_STATE_START); 788 + #endif 789 + 781 790 /* 782 791 * Must be done before busmaster disable as we might need to 783 792 * access HPET ! 
··· 825 820 826 821 if (unlikely(!pr)) 827 822 return -EINVAL; 823 + 824 + #ifdef CONFIG_HOTPLUG_CPU 825 + if ((cx->type != ACPI_STATE_C1) && (num_online_cpus() > 1) && 826 + !pr->flags.has_cst && 827 + !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED)) 828 + return acpi_idle_enter_c1(dev, drv, CPUIDLE_DRIVER_STATE_START); 829 + #endif 828 830 829 831 if (!cx->bm_sts_skip && acpi_idle_bm_check()) { 830 832 if (drv->safe_state_index >= 0) { ··· 929 917 if (!cx->valid) 930 918 continue; 931 919 932 - #ifdef CONFIG_HOTPLUG_CPU 933 - if ((cx->type != ACPI_STATE_C1) && (num_online_cpus() > 1) && 934 - !pr->flags.has_cst && 935 - !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED)) 936 - continue; 937 - #endif 938 920 per_cpu(acpi_cstate[count], dev->cpu) = cx; 939 921 940 922 count++; 941 923 if (count == CPUIDLE_STATE_MAX) 942 924 break; 943 925 } 944 - 945 - dev->state_count = count; 946 926 947 927 if (!count) 948 928 return -EINVAL; ··· 975 971 976 972 if (!cx->valid) 977 973 continue; 978 - 979 - #ifdef CONFIG_HOTPLUG_CPU 980 - if ((cx->type != ACPI_STATE_C1) && (num_online_cpus() > 1) && 981 - !pr->flags.has_cst && 982 - !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED)) 983 - continue; 984 - #endif 985 974 986 975 state = &drv->states[count]; 987 976 snprintf(state->name, CPUIDLE_NAME_LEN, "C%d", i);
+2 -5
drivers/acpi/processor_perflib.c
··· 31 31 #include <linux/init.h> 32 32 #include <linux/cpufreq.h> 33 33 #include <linux/slab.h> 34 - 34 + #include <linux/acpi.h> 35 + #include <acpi/processor.h> 35 36 #ifdef CONFIG_X86 36 37 #include <asm/cpufeature.h> 37 38 #endif 38 - 39 - #include <acpi/acpi_bus.h> 40 - #include <acpi/acpi_drivers.h> 41 - #include <acpi/processor.h> 42 39 43 40 #define PREFIX "ACPI: " 44 41
+4 -7
drivers/acpi/processor_thermal.c
··· 30 30 #include <linux/module.h> 31 31 #include <linux/init.h> 32 32 #include <linux/cpufreq.h> 33 - 34 - #include <asm/uaccess.h> 35 - 36 - #include <acpi/acpi_bus.h> 33 + #include <linux/acpi.h> 37 34 #include <acpi/processor.h> 38 - #include <acpi/acpi_drivers.h> 35 + #include <asm/uaccess.h> 39 36 40 37 #define PREFIX "ACPI: " 41 38 ··· 183 186 184 187 #endif 185 188 186 - /* thermal coolign device callbacks */ 189 + /* thermal cooling device callbacks */ 187 190 static int acpi_processor_max_state(struct acpi_processor *pr) 188 191 { 189 192 int max_state = 0; 190 193 191 194 /* 192 195 * There exists four states according to 193 - * cpufreq_thermal_reduction_ptg. 0, 1, 2, 3 196 + * cpufreq_thermal_reduction_pctg. 0, 1, 2, 3 194 197 */ 195 198 max_state += cpufreq_get_max_state(pr->id); 196 199 if (pr->flags.throttling)
+2 -5
drivers/acpi/processor_throttling.c
··· 32 32 #include <linux/init.h> 33 33 #include <linux/sched.h> 34 34 #include <linux/cpufreq.h> 35 - 35 + #include <linux/acpi.h> 36 + #include <acpi/processor.h> 36 37 #include <asm/io.h> 37 38 #include <asm/uaccess.h> 38 - 39 - #include <acpi/acpi_bus.h> 40 - #include <acpi/acpi_drivers.h> 41 - #include <acpi/processor.h> 42 39 43 40 #define PREFIX "ACPI: " 44 41
+1 -2
drivers/acpi/sbshc.c
··· 8 8 * the Free Software Foundation version 2. 9 9 */ 10 10 11 - #include <acpi/acpi_bus.h> 12 - #include <acpi/acpi_drivers.h> 11 + #include <linux/acpi.h> 13 12 #include <linux/wait.h> 14 13 #include <linux/slab.h> 15 14 #include <linux/delay.h>
+390 -240
drivers/acpi/scan.c
··· 12 12 #include <linux/dmi.h> 13 13 #include <linux/nls.h> 14 14 15 - #include <acpi/acpi_drivers.h> 15 + #include <asm/pgtable.h> 16 16 17 17 #include "internal.h" 18 18 19 19 #define _COMPONENT ACPI_BUS_COMPONENT 20 20 ACPI_MODULE_NAME("scan"); 21 - #define STRUCT_TO_INT(s) (*((int*)&s)) 22 21 extern struct acpi_device *acpi_root; 23 22 24 23 #define ACPI_BUS_CLASS "system_bus" ··· 25 26 #define ACPI_BUS_DEVICE_NAME "System Bus" 26 27 27 28 #define ACPI_IS_ROOT_DEVICE(device) (!(device)->parent) 29 + 30 + #define INVALID_ACPI_HANDLE ((acpi_handle)empty_zero_page) 28 31 29 32 /* 30 33 * If set, devices will be hot-removed even if they cannot be put offline ··· 86 85 * Creates hid/cid(s) string needed for modalias and uevent 87 86 * e.g. on a device with hid:IBM0001 and cid:ACPI0001 you get: 88 87 * char *modalias: "acpi:IBM0001:ACPI0001" 88 + * Return: 0: no _HID and no _CID 89 + * -EINVAL: output error 90 + * -ENOMEM: output is truncated 89 91 */ 90 92 static int create_modalias(struct acpi_device *acpi_dev, char *modalias, 91 93 int size) ··· 105 101 106 102 list_for_each_entry(id, &acpi_dev->pnp.ids, list) { 107 103 count = snprintf(&modalias[len], size, "%s:", id->id); 108 - if (count < 0 || count >= size) 109 - return -EINVAL; 104 + if (count < 0) 105 + return -EINVAL; 106 + if (count >= size) 107 + return -ENOMEM; 110 108 len += count; 111 109 size -= count; 112 110 } ··· 117 111 return len; 118 112 } 119 113 114 + /* 115 + * Creates uevent modalias field for ACPI enumerated devices. 116 + * Because the other buses does not support ACPI HIDs & CIDs. 117 + * e.g. 
for a device with hid:IBM0001 and cid:ACPI0001 you get: 118 + * "acpi:IBM0001:ACPI0001" 119 + */ 120 + int acpi_device_uevent_modalias(struct device *dev, struct kobj_uevent_env *env) 121 + { 122 + struct acpi_device *acpi_dev; 123 + int len; 124 + 125 + acpi_dev = ACPI_COMPANION(dev); 126 + if (!acpi_dev) 127 + return -ENODEV; 128 + 129 + /* Fall back to bus specific way of modalias exporting */ 130 + if (list_empty(&acpi_dev->pnp.ids)) 131 + return -ENODEV; 132 + 133 + if (add_uevent_var(env, "MODALIAS=")) 134 + return -ENOMEM; 135 + len = create_modalias(acpi_dev, &env->buf[env->buflen - 1], 136 + sizeof(env->buf) - env->buflen); 137 + if (len <= 0) 138 + return len; 139 + env->buflen += len; 140 + return 0; 141 + } 142 + EXPORT_SYMBOL_GPL(acpi_device_uevent_modalias); 143 + 144 + /* 145 + * Creates modalias sysfs attribute for ACPI enumerated devices. 146 + * Because the other buses does not support ACPI HIDs & CIDs. 147 + * e.g. for a device with hid:IBM0001 and cid:ACPI0001 you get: 148 + * "acpi:IBM0001:ACPI0001" 149 + */ 150 + int acpi_device_modalias(struct device *dev, char *buf, int size) 151 + { 152 + struct acpi_device *acpi_dev; 153 + int len; 154 + 155 + acpi_dev = ACPI_COMPANION(dev); 156 + if (!acpi_dev) 157 + return -ENODEV; 158 + 159 + /* Fall back to bus specific way of modalias exporting */ 160 + if (list_empty(&acpi_dev->pnp.ids)) 161 + return -ENODEV; 162 + 163 + len = create_modalias(acpi_dev, buf, size -1); 164 + if (len <= 0) 165 + return len; 166 + buf[len++] = '\n'; 167 + return len; 168 + } 169 + EXPORT_SYMBOL_GPL(acpi_device_modalias); 170 + 120 171 static ssize_t 121 172 acpi_device_modalias_show(struct device *dev, struct device_attribute *attr, char *buf) { 122 173 struct acpi_device *acpi_dev = to_acpi_device(dev); 123 174 int len; 124 175 125 - /* Device has no HID and no CID or string is >1024 */ 126 176 len = create_modalias(acpi_dev, buf, 1024); 127 177 if (len <= 0) 128 - return 0; 178 + return len; 129 179 buf[len++] = '\n'; 
130 180 return len; 131 181 } 132 182 static DEVICE_ATTR(modalias, 0444, acpi_device_modalias_show, NULL); 183 + 184 + bool acpi_scan_is_offline(struct acpi_device *adev, bool uevent) 185 + { 186 + struct acpi_device_physical_node *pn; 187 + bool offline = true; 188 + 189 + mutex_lock(&adev->physical_node_lock); 190 + 191 + list_for_each_entry(pn, &adev->physical_node_list, node) 192 + if (device_supports_offline(pn->dev) && !pn->dev->offline) { 193 + if (uevent) 194 + kobject_uevent(&pn->dev->kobj, KOBJ_CHANGE); 195 + 196 + offline = false; 197 + break; 198 + } 199 + 200 + mutex_unlock(&adev->physical_node_lock); 201 + return offline; 202 + } 133 203 134 204 static acpi_status acpi_bus_offline(acpi_handle handle, u32 lvl, void *data, 135 205 void **ret_p) ··· 277 195 return AE_OK; 278 196 } 279 197 280 - static int acpi_scan_hot_remove(struct acpi_device *device) 198 + static int acpi_scan_try_to_offline(struct acpi_device *device) 281 199 { 282 200 acpi_handle handle = device->handle; 283 - struct device *errdev; 201 + struct device *errdev = NULL; 284 202 acpi_status status; 285 - unsigned long long sta; 286 - 287 - /* If there is no handle, the device node has been unregistered. */ 288 - if (!handle) { 289 - dev_dbg(&device->dev, "ACPI handle missing\n"); 290 - put_device(&device->dev); 291 - return -EINVAL; 292 - } 293 203 294 204 /* 295 205 * Carry out two passes here and ignore errors in the first pass, ··· 292 218 * 293 219 * If the first pass is successful, the second one isn't needed, though. 
294 220 */ 295 - errdev = NULL; 296 221 status = acpi_walk_namespace(ACPI_TYPE_ANY, handle, ACPI_UINT32_MAX, 297 222 NULL, acpi_bus_offline, (void *)false, 298 223 (void **)&errdev); ··· 299 226 dev_warn(errdev, "Offline disabled.\n"); 300 227 acpi_walk_namespace(ACPI_TYPE_ANY, handle, ACPI_UINT32_MAX, 301 228 acpi_bus_online, NULL, NULL, NULL); 302 - put_device(&device->dev); 303 229 return -EPERM; 304 230 } 305 231 acpi_bus_offline(handle, 0, (void *)false, (void **)&errdev); ··· 317 245 acpi_walk_namespace(ACPI_TYPE_ANY, handle, 318 246 ACPI_UINT32_MAX, acpi_bus_online, 319 247 NULL, NULL, NULL); 320 - put_device(&device->dev); 321 248 return -EBUSY; 322 249 } 250 + } 251 + return 0; 252 + } 253 + 254 + static int acpi_scan_hot_remove(struct acpi_device *device) 255 + { 256 + acpi_handle handle = device->handle; 257 + unsigned long long sta; 258 + acpi_status status; 259 + 260 + if (device->handler->hotplug.demand_offline && !acpi_force_hot_remove) { 261 + if (!acpi_scan_is_offline(device, true)) 262 + return -EBUSY; 263 + } else { 264 + int error = acpi_scan_try_to_offline(device); 265 + if (error) 266 + return error; 323 267 } 324 268 325 269 ACPI_DEBUG_PRINT((ACPI_DB_INFO, 326 270 "Hot-removing device %s...\n", dev_name(&device->dev))); 327 271 328 272 acpi_bus_trim(device); 329 - 330 - /* Device node has been unregistered. 
*/ 331 - put_device(&device->dev); 332 - device = NULL; 333 273 334 274 acpi_evaluate_lck(handle, 0); 335 275 /* ··· 369 285 return 0; 370 286 } 371 287 372 - void acpi_bus_device_eject(void *data, u32 ost_src) 288 + static int acpi_scan_device_not_present(struct acpi_device *adev) 373 289 { 374 - struct acpi_device *device = data; 375 - acpi_handle handle = device->handle; 376 - u32 ost_code = ACPI_OST_SC_NON_SPECIFIC_FAILURE; 377 - int error; 378 - 379 - lock_device_hotplug(); 380 - mutex_lock(&acpi_scan_lock); 381 - 382 - if (ost_src == ACPI_NOTIFY_EJECT_REQUEST) 383 - acpi_evaluate_hotplug_ost(handle, ACPI_NOTIFY_EJECT_REQUEST, 384 - ACPI_OST_SC_EJECT_IN_PROGRESS, NULL); 385 - 386 - if (device->handler && device->handler->hotplug.mode == AHM_CONTAINER) 387 - kobject_uevent(&device->dev.kobj, KOBJ_OFFLINE); 388 - 389 - error = acpi_scan_hot_remove(device); 390 - if (error == -EPERM) { 391 - goto err_support; 392 - } else if (error) { 393 - goto err_out; 290 + if (!acpi_device_enumerated(adev)) { 291 + dev_warn(&adev->dev, "Still not present\n"); 292 + return -EALREADY; 394 293 } 395 - 396 - out: 397 - mutex_unlock(&acpi_scan_lock); 398 - unlock_device_hotplug(); 399 - return; 400 - 401 - err_support: 402 - ost_code = ACPI_OST_SC_EJECT_NOT_SUPPORTED; 403 - err_out: 404 - acpi_evaluate_hotplug_ost(handle, ost_src, ost_code, NULL); 405 - goto out; 294 + acpi_bus_trim(adev); 295 + return 0; 406 296 } 407 297 408 - static void acpi_scan_bus_device_check(void *data, u32 ost_source) 298 + static int acpi_scan_device_check(struct acpi_device *adev) 409 299 { 410 - acpi_handle handle = data; 411 - struct acpi_device *device = NULL; 412 - u32 ost_code = ACPI_OST_SC_NON_SPECIFIC_FAILURE; 413 300 int error; 414 301 415 - lock_device_hotplug(); 416 - mutex_lock(&acpi_scan_lock); 417 - 418 - if (ost_source != ACPI_NOTIFY_BUS_CHECK) { 419 - acpi_bus_get_device(handle, &device); 420 - if (device) { 421 - dev_warn(&device->dev, "Attempt to re-insert\n"); 422 - goto out; 302 + 
acpi_bus_get_status(adev); 303 + if (adev->status.present || adev->status.functional) { 304 + /* 305 + * This function is only called for device objects for which 306 + * matching scan handlers exist. The only situation in which 307 + * the scan handler is not attached to this device object yet 308 + * is when the device has just appeared (either it wasn't 309 + * present at all before or it was removed and then added 310 + * again). 311 + */ 312 + if (adev->handler) { 313 + dev_warn(&adev->dev, "Already enumerated\n"); 314 + return -EALREADY; 423 315 } 316 + error = acpi_bus_scan(adev->handle); 317 + if (error) { 318 + dev_warn(&adev->dev, "Namespace scan failure\n"); 319 + return error; 320 + } 321 + if (!adev->handler) { 322 + dev_warn(&adev->dev, "Enumeration failure\n"); 323 + error = -ENODEV; 324 + } 325 + } else { 326 + error = acpi_scan_device_not_present(adev); 424 327 } 425 - error = acpi_bus_scan(handle); 426 - if (error) { 427 - acpi_handle_warn(handle, "Namespace scan failure\n"); 428 - goto out; 429 - } 430 - error = acpi_bus_get_device(handle, &device); 431 - if (error) { 432 - acpi_handle_warn(handle, "Missing device node object\n"); 433 - goto out; 434 - } 435 - ost_code = ACPI_OST_SC_SUCCESS; 436 - if (device->handler && device->handler->hotplug.mode == AHM_CONTAINER) 437 - kobject_uevent(&device->dev.kobj, KOBJ_ONLINE); 438 - 439 - out: 440 - acpi_evaluate_hotplug_ost(handle, ost_source, ost_code, NULL); 441 - mutex_unlock(&acpi_scan_lock); 442 - unlock_device_hotplug(); 328 + return error; 443 329 } 444 330 445 - static void acpi_hotplug_unsupported(acpi_handle handle, u32 type) 331 + static int acpi_scan_bus_check(struct acpi_device *adev) 446 332 { 447 - u32 ost_status; 333 + struct acpi_scan_handler *handler = adev->handler; 334 + struct acpi_device *child; 335 + int error; 448 336 449 - switch (type) { 337 + acpi_bus_get_status(adev); 338 + if (!(adev->status.present || adev->status.functional)) { 339 + acpi_scan_device_not_present(adev); 
340 + return 0; 341 + } 342 + if (handler && handler->hotplug.scan_dependent) 343 + return handler->hotplug.scan_dependent(adev); 344 + 345 + error = acpi_bus_scan(adev->handle); 346 + if (error) { 347 + dev_warn(&adev->dev, "Namespace scan failure\n"); 348 + return error; 349 + } 350 + list_for_each_entry(child, &adev->children, node) { 351 + error = acpi_scan_bus_check(child); 352 + if (error) 353 + return error; 354 + } 355 + return 0; 356 + } 357 + 358 + static void acpi_device_hotplug(void *data, u32 src) 359 + { 360 + u32 ost_code = ACPI_OST_SC_NON_SPECIFIC_FAILURE; 361 + struct acpi_device *adev = data; 362 + int error; 363 + 364 + lock_device_hotplug(); 365 + mutex_lock(&acpi_scan_lock); 366 + 367 + /* 368 + * The device object's ACPI handle cannot become invalid as long as we 369 + * are holding acpi_scan_lock, but it may have become invalid before 370 + * that lock was acquired. 371 + */ 372 + if (adev->handle == INVALID_ACPI_HANDLE) 373 + goto out; 374 + 375 + switch (src) { 450 376 case ACPI_NOTIFY_BUS_CHECK: 451 - acpi_handle_debug(handle, 452 - "ACPI_NOTIFY_BUS_CHECK event: unsupported\n"); 453 - ost_status = ACPI_OST_SC_INSERT_NOT_SUPPORTED; 377 + error = acpi_scan_bus_check(adev); 454 378 break; 455 379 case ACPI_NOTIFY_DEVICE_CHECK: 456 - acpi_handle_debug(handle, 457 - "ACPI_NOTIFY_DEVICE_CHECK event: unsupported\n"); 458 - ost_status = ACPI_OST_SC_INSERT_NOT_SUPPORTED; 380 + error = acpi_scan_device_check(adev); 459 381 break; 460 382 case ACPI_NOTIFY_EJECT_REQUEST: 461 - acpi_handle_debug(handle, 462 - "ACPI_NOTIFY_EJECT_REQUEST event: unsupported\n"); 463 - ost_status = ACPI_OST_SC_EJECT_NOT_SUPPORTED; 383 + case ACPI_OST_EC_OSPM_EJECT: 384 + error = acpi_scan_hot_remove(adev); 464 385 break; 465 386 default: 466 - /* non-hotplug event; possibly handled by other handler */ 467 - return; 387 + error = -EINVAL; 388 + break; 468 389 } 390 + if (!error) 391 + ost_code = ACPI_OST_SC_SUCCESS; 469 392 470 - acpi_evaluate_hotplug_ost(handle, type, 
ost_status, NULL); 393 + out: 394 + acpi_evaluate_hotplug_ost(adev->handle, src, ost_code, NULL); 395 + put_device(&adev->dev); 396 + mutex_unlock(&acpi_scan_lock); 397 + unlock_device_hotplug(); 471 398 } 472 399 473 400 static void acpi_hotplug_notify_cb(acpi_handle handle, u32 type, void *data) 474 401 { 402 + u32 ost_code = ACPI_OST_SC_NON_SPECIFIC_FAILURE; 475 403 struct acpi_scan_handler *handler = data; 476 404 struct acpi_device *adev; 477 405 acpi_status status; 478 406 479 - if (!handler->hotplug.enabled) 480 - return acpi_hotplug_unsupported(handle, type); 407 + if (acpi_bus_get_device(handle, &adev)) 408 + goto err_out; 481 409 482 410 switch (type) { 483 411 case ACPI_NOTIFY_BUS_CHECK: ··· 500 404 break; 501 405 case ACPI_NOTIFY_EJECT_REQUEST: 502 406 acpi_handle_debug(handle, "ACPI_NOTIFY_EJECT_REQUEST event\n"); 503 - if (acpi_bus_get_device(handle, &adev)) 407 + if (!handler->hotplug.enabled) { 408 + acpi_handle_err(handle, "Eject disabled\n"); 409 + ost_code = ACPI_OST_SC_EJECT_NOT_SUPPORTED; 504 410 goto err_out; 505 - 506 - get_device(&adev->dev); 507 - status = acpi_hotplug_execute(acpi_bus_device_eject, adev, type); 508 - if (ACPI_SUCCESS(status)) 509 - return; 510 - 511 - put_device(&adev->dev); 512 - goto err_out; 411 + } 412 + acpi_evaluate_hotplug_ost(handle, ACPI_NOTIFY_EJECT_REQUEST, 413 + ACPI_OST_SC_EJECT_IN_PROGRESS, NULL); 414 + break; 513 415 default: 514 416 /* non-hotplug event; possibly handled by other handler */ 515 417 return; 516 418 } 517 - status = acpi_hotplug_execute(acpi_scan_bus_device_check, handle, type); 419 + get_device(&adev->dev); 420 + status = acpi_hotplug_execute(acpi_device_hotplug, adev, type); 518 421 if (ACPI_SUCCESS(status)) 519 422 return; 520 423 424 + put_device(&adev->dev); 425 + 521 426 err_out: 522 - acpi_evaluate_hotplug_ost(handle, type, 523 - ACPI_OST_SC_NON_SPECIFIC_FAILURE, NULL); 427 + acpi_evaluate_hotplug_ost(handle, type, ost_code, NULL); 524 428 } 525 429 526 430 static ssize_t 
real_power_state_show(struct device *dev, ··· 571 475 acpi_evaluate_hotplug_ost(acpi_device->handle, ACPI_OST_EC_OSPM_EJECT, 572 476 ACPI_OST_SC_EJECT_IN_PROGRESS, NULL); 573 477 get_device(&acpi_device->dev); 574 - status = acpi_hotplug_execute(acpi_bus_device_eject, acpi_device, 478 + status = acpi_hotplug_execute(acpi_device_hotplug, acpi_device, 575 479 ACPI_OST_EC_OSPM_EJECT); 576 480 if (ACPI_SUCCESS(status)) 577 481 return count; ··· 663 567 } 664 568 static DEVICE_ATTR(sun, 0444, acpi_device_sun_show, NULL); 665 569 570 + static ssize_t status_show(struct device *dev, struct device_attribute *attr, 571 + char *buf) { 572 + struct acpi_device *acpi_dev = to_acpi_device(dev); 573 + acpi_status status; 574 + unsigned long long sta; 575 + 576 + status = acpi_evaluate_integer(acpi_dev->handle, "_STA", NULL, &sta); 577 + if (ACPI_FAILURE(status)) 578 + return -ENODEV; 579 + 580 + return sprintf(buf, "%llu\n", sta); 581 + } 582 + static DEVICE_ATTR_RO(status); 583 + 666 584 static int acpi_device_setup_files(struct acpi_device *dev) 667 585 { 668 586 struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL}; ··· 730 620 goto end; 731 621 } else { 732 622 dev->pnp.sun = (unsigned long)-1; 623 + } 624 + 625 + if (acpi_has_method(dev->handle, "_STA")) { 626 + result = device_create_file(&dev->dev, &dev_attr_status); 627 + if (result) 628 + goto end; 733 629 } 734 630 735 631 /* ··· 793 677 device_remove_file(&dev->dev, &dev_attr_adr); 794 678 device_remove_file(&dev->dev, &dev_attr_modalias); 795 679 device_remove_file(&dev->dev, &dev_attr_hid); 680 + if (acpi_has_method(dev->handle, "_STA")) 681 + device_remove_file(&dev->dev, &dev_attr_status); 796 682 if (dev->handle) 797 683 device_remove_file(&dev->dev, &dev_attr_path); 798 684 } ··· 900 782 return -ENOMEM; 901 783 len = create_modalias(acpi_dev, &env->buf[env->buflen - 1], 902 784 sizeof(env->buf) - env->buflen); 903 - if (len >= (sizeof(env->buf) - env->buflen)) 904 - return -ENOMEM; 785 + if (len <= 0) 786 + 
return len; 905 787 env->buflen += len; 906 788 return 0; 907 789 } ··· 1025 907 .uevent = acpi_device_uevent, 1026 908 }; 1027 909 1028 - static void acpi_bus_data_handler(acpi_handle handle, void *context) 910 + static void acpi_device_del(struct acpi_device *device) 1029 911 { 1030 - /* Intentionally empty. */ 912 + mutex_lock(&acpi_device_lock); 913 + if (device->parent) 914 + list_del(&device->node); 915 + 916 + list_del(&device->wakeup_list); 917 + mutex_unlock(&acpi_device_lock); 918 + 919 + acpi_power_add_remove_device(device, false); 920 + acpi_device_remove_files(device); 921 + if (device->remove) 922 + device->remove(device); 923 + 924 + device_del(&device->dev); 925 + } 926 + 927 + static LIST_HEAD(acpi_device_del_list); 928 + static DEFINE_MUTEX(acpi_device_del_lock); 929 + 930 + static void acpi_device_del_work_fn(struct work_struct *work_not_used) 931 + { 932 + for (;;) { 933 + struct acpi_device *adev; 934 + 935 + mutex_lock(&acpi_device_del_lock); 936 + 937 + if (list_empty(&acpi_device_del_list)) { 938 + mutex_unlock(&acpi_device_del_lock); 939 + break; 940 + } 941 + adev = list_first_entry(&acpi_device_del_list, 942 + struct acpi_device, del_list); 943 + list_del(&adev->del_list); 944 + 945 + mutex_unlock(&acpi_device_del_lock); 946 + 947 + acpi_device_del(adev); 948 + /* 949 + * Drop references to all power resources that might have been 950 + * used by the device. 951 + */ 952 + acpi_power_transition(adev, ACPI_STATE_D3_COLD); 953 + put_device(&adev->dev); 954 + } 955 + } 956 + 957 + /** 958 + * acpi_scan_drop_device - Drop an ACPI device object. 959 + * @handle: Handle of an ACPI namespace node, not used. 960 + * @context: Address of the ACPI device object to drop. 961 + * 962 + * This is invoked by acpi_ns_delete_node() during the removal of the ACPI 963 + * namespace node the device object pointed to by @context is attached to. 
964 + * 965 + * The unregistration is carried out asynchronously to avoid running 966 + * acpi_device_del() under the ACPICA's namespace mutex and the list is used to 967 + * ensure the correct ordering (the device objects must be unregistered in the 968 + * same order in which the corresponding namespace nodes are deleted). 969 + */ 970 + static void acpi_scan_drop_device(acpi_handle handle, void *context) 971 + { 972 + static DECLARE_WORK(work, acpi_device_del_work_fn); 973 + struct acpi_device *adev = context; 974 + 975 + mutex_lock(&acpi_device_del_lock); 976 + 977 + /* 978 + * Use the ACPI hotplug workqueue which is ordered, so this work item 979 + * won't run after any hotplug work items submitted subsequently. That 980 + * prevents attempts to register device objects identical to those being 981 + * deleted from happening concurrently (such attempts result from 982 + * hotplug events handled via the ACPI hotplug workqueue). It also will 983 + * run after all of the work items submitted previosuly, which helps 984 + * those work items to ensure that they are not accessing stale device 985 + * objects. 986 + */ 987 + if (list_empty(&acpi_device_del_list)) 988 + acpi_queue_hotplug_work(&work); 989 + 990 + list_add_tail(&adev->del_list, &acpi_device_del_list); 991 + /* Make acpi_ns_validate_handle() return NULL for this handle. 
*/ 992 + adev->handle = INVALID_ACPI_HANDLE; 993 + 994 + mutex_unlock(&acpi_device_del_lock); 1031 995 } 1032 996 1033 997 int acpi_bus_get_device(acpi_handle handle, struct acpi_device **device) ··· 1119 919 if (!device) 1120 920 return -EINVAL; 1121 921 1122 - status = acpi_get_data(handle, acpi_bus_data_handler, (void **)device); 922 + status = acpi_get_data(handle, acpi_scan_drop_device, (void **)device); 1123 923 if (ACPI_FAILURE(status) || !*device) { 1124 924 ACPI_DEBUG_PRINT((ACPI_DB_INFO, "No context for object [%p]\n", 1125 925 handle)); ··· 1139 939 if (device->handle) { 1140 940 acpi_status status; 1141 941 1142 - status = acpi_attach_data(device->handle, acpi_bus_data_handler, 942 + status = acpi_attach_data(device->handle, acpi_scan_drop_device, 1143 943 device); 1144 944 if (ACPI_FAILURE(status)) { 1145 945 acpi_handle_err(device->handle, ··· 1157 957 INIT_LIST_HEAD(&device->node); 1158 958 INIT_LIST_HEAD(&device->wakeup_list); 1159 959 INIT_LIST_HEAD(&device->physical_node_list); 960 + INIT_LIST_HEAD(&device->del_list); 1160 961 mutex_init(&device->physical_node_lock); 1161 962 1162 963 new_bus_id = kzalloc(sizeof(struct acpi_device_bus_id), GFP_KERNEL); ··· 1221 1020 mutex_unlock(&acpi_device_lock); 1222 1021 1223 1022 err_detach: 1224 - acpi_detach_data(device->handle, acpi_bus_data_handler); 1023 + acpi_detach_data(device->handle, acpi_scan_drop_device); 1225 1024 return result; 1226 - } 1227 - 1228 - static void acpi_device_unregister(struct acpi_device *device) 1229 - { 1230 - mutex_lock(&acpi_device_lock); 1231 - if (device->parent) 1232 - list_del(&device->node); 1233 - 1234 - list_del(&device->wakeup_list); 1235 - mutex_unlock(&acpi_device_lock); 1236 - 1237 - acpi_detach_data(device->handle, acpi_bus_data_handler); 1238 - 1239 - acpi_power_add_remove_device(device, false); 1240 - acpi_device_remove_files(device); 1241 - if (device->remove) 1242 - device->remove(device); 1243 - 1244 - device_del(&device->dev); 1245 - /* 1246 - * Transition 
the device to D3cold to drop the reference counts of all 1247 - * power resources the device depends on and turn off the ones that have 1248 - * no more references. 1249 - */ 1250 - acpi_device_set_power(device, ACPI_STATE_D3_COLD); 1251 - device->handle = NULL; 1252 - put_device(&device->dev); 1253 1025 } 1254 1026 1255 1027 /* -------------------------------------------------------------------------- ··· 1798 1624 device->device_type = type; 1799 1625 device->handle = handle; 1800 1626 device->parent = acpi_bus_get_parent(handle); 1801 - STRUCT_TO_INT(device->status) = sta; 1627 + acpi_set_device_status(device, sta); 1802 1628 acpi_device_get_busid(device); 1803 1629 acpi_set_pnp_ids(handle, &device->pnp, type); 1804 1630 acpi_bus_get_flags(device); 1805 1631 device->flags.match_driver = false; 1632 + device->flags.initialized = true; 1633 + device->flags.visited = false; 1806 1634 device_initialize(&device->dev); 1807 1635 dev_set_uevent_suppress(&device->dev, true); 1808 1636 } ··· 1889 1713 return 0; 1890 1714 } 1891 1715 1716 + bool acpi_device_is_present(struct acpi_device *adev) 1717 + { 1718 + if (adev->status.present || adev->status.functional) 1719 + return true; 1720 + 1721 + adev->flags.initialized = false; 1722 + return false; 1723 + } 1724 + 1892 1725 static bool acpi_scan_handler_matching(struct acpi_scan_handler *handler, 1893 1726 char *idstr, 1894 1727 const struct acpi_device_id **matchid) ··· 1957 1772 */ 1958 1773 list_for_each_entry(hwid, &pnp.ids, list) { 1959 1774 handler = acpi_scan_match_handler(hwid->id, NULL); 1960 - if (handler && !handler->hotplug.ignore) { 1775 + if (handler) { 1961 1776 acpi_install_notify_handler(handle, ACPI_SYSTEM_NOTIFY, 1962 1777 acpi_hotplug_notify_cb, handler); 1963 1778 break; ··· 1990 1805 } 1991 1806 1992 1807 acpi_scan_init_hotplug(handle, type); 1993 - 1994 - if (!(sta & ACPI_STA_DEVICE_PRESENT) && 1995 - !(sta & ACPI_STA_DEVICE_FUNCTIONING)) { 1996 - struct acpi_device_wakeup wakeup; 1997 - 1998 - if 
(acpi_has_method(handle, "_PRW")) { 1999 - acpi_bus_extract_wakeup_device_power_package(handle, 2000 - &wakeup); 2001 - acpi_power_resources_list_free(&wakeup.resources); 2002 - } 2003 - return AE_CTRL_DEPTH; 2004 - } 2005 1808 2006 1809 acpi_add_single_object(&device, handle, type, sta); 2007 1810 if (!device) ··· 2025 1852 return ret; 2026 1853 } 2027 1854 2028 - static acpi_status acpi_bus_device_attach(acpi_handle handle, u32 lvl_not_used, 2029 - void *not_used, void **ret_not_used) 1855 + static void acpi_bus_attach(struct acpi_device *device) 2030 1856 { 2031 - struct acpi_device *device; 2032 - unsigned long long sta_not_used; 1857 + struct acpi_device *child; 2033 1858 int ret; 2034 1859 2035 - /* 2036 - * Ignore errors ignored by acpi_bus_check_add() to avoid terminating 2037 - * namespace walks prematurely. 2038 - */ 2039 - if (acpi_bus_type_and_status(handle, &ret, &sta_not_used)) 2040 - return AE_OK; 2041 - 2042 - if (acpi_bus_get_device(handle, &device)) 2043 - return AE_CTRL_DEPTH; 2044 - 1860 + acpi_bus_get_status(device); 1861 + /* Skip devices that are not present. */ 1862 + if (!acpi_device_is_present(device)) { 1863 + device->flags.visited = false; 1864 + return; 1865 + } 2045 1866 if (device->handler) 2046 - return AE_OK; 1867 + goto ok; 2047 1868 1869 + if (!device->flags.initialized) { 1870 + acpi_bus_update_power(device, NULL); 1871 + device->flags.initialized = true; 1872 + } 1873 + device->flags.visited = false; 2048 1874 ret = acpi_scan_attach_handler(device); 2049 1875 if (ret < 0) 2050 - return AE_CTRL_DEPTH; 1876 + return; 2051 1877 2052 1878 device->flags.match_driver = true; 2053 - if (ret > 0) 2054 - return AE_OK; 1879 + if (!ret) { 1880 + ret = device_attach(&device->dev); 1881 + if (ret < 0) 1882 + return; 1883 + } 1884 + device->flags.visited = true; 2055 1885 2056 - ret = device_attach(&device->dev); 2057 - return ret >= 0 ? 
AE_OK : AE_CTRL_DEPTH; 1886 + ok: 1887 + list_for_each_entry(child, &device->children, node) 1888 + acpi_bus_attach(child); 2058 1889 } 2059 1890 2060 1891 /** ··· 2078 1901 int acpi_bus_scan(acpi_handle handle) 2079 1902 { 2080 1903 void *device = NULL; 2081 - int error = 0; 2082 1904 2083 1905 if (ACPI_SUCCESS(acpi_bus_check_add(handle, 0, NULL, &device))) 2084 1906 acpi_walk_namespace(ACPI_TYPE_ANY, handle, ACPI_UINT32_MAX, 2085 1907 acpi_bus_check_add, NULL, NULL, &device); 2086 1908 2087 - if (!device) 2088 - error = -ENODEV; 2089 - else if (ACPI_SUCCESS(acpi_bus_device_attach(handle, 0, NULL, NULL))) 2090 - acpi_walk_namespace(ACPI_TYPE_ANY, handle, ACPI_UINT32_MAX, 2091 - acpi_bus_device_attach, NULL, NULL, NULL); 2092 - 2093 - return error; 1909 + if (device) { 1910 + acpi_bus_attach(device); 1911 + return 0; 1912 + } 1913 + return -ENODEV; 2094 1914 } 2095 1915 EXPORT_SYMBOL(acpi_bus_scan); 2096 1916 2097 - static acpi_status acpi_bus_device_detach(acpi_handle handle, u32 lvl_not_used, 2098 - void *not_used, void **ret_not_used) 2099 - { 2100 - struct acpi_device *device = NULL; 2101 - 2102 - if (!acpi_bus_get_device(handle, &device)) { 2103 - struct acpi_scan_handler *dev_handler = device->handler; 2104 - 2105 - if (dev_handler) { 2106 - if (dev_handler->detach) 2107 - dev_handler->detach(device); 2108 - 2109 - device->handler = NULL; 2110 - } else { 2111 - device_release_driver(&device->dev); 2112 - } 2113 - } 2114 - return AE_OK; 2115 - } 2116 - 2117 - static acpi_status acpi_bus_remove(acpi_handle handle, u32 lvl_not_used, 2118 - void *not_used, void **ret_not_used) 2119 - { 2120 - struct acpi_device *device = NULL; 2121 - 2122 - if (!acpi_bus_get_device(handle, &device)) 2123 - acpi_device_unregister(device); 2124 - 2125 - return AE_OK; 2126 - } 2127 - 2128 1917 /** 2129 - * acpi_bus_trim - Remove ACPI device node and all of its descendants 2130 - * @start: Root of the ACPI device nodes subtree to remove. 
1918 + * acpi_bus_trim - Detach scan handlers and drivers from ACPI device objects. 1919 + * @adev: Root of the ACPI namespace scope to walk. 2131 1920 * 2132 1921 * Must be called under acpi_scan_lock. 2133 1922 */ 2134 - void acpi_bus_trim(struct acpi_device *start) 1923 + void acpi_bus_trim(struct acpi_device *adev) 2135 1924 { 1925 + struct acpi_scan_handler *handler = adev->handler; 1926 + struct acpi_device *child; 1927 + 1928 + list_for_each_entry_reverse(child, &adev->children, node) 1929 + acpi_bus_trim(child); 1930 + 1931 + if (handler) { 1932 + if (handler->detach) 1933 + handler->detach(adev); 1934 + 1935 + adev->handler = NULL; 1936 + } else { 1937 + device_release_driver(&adev->dev); 1938 + } 2136 1939 /* 2137 - * Execute acpi_bus_device_detach() as a post-order callback to detach 2138 - * all ACPI drivers from the device nodes being removed. 1940 + * Most likely, the device is going away, so put it into D3cold before 1941 + * that. 2139 1942 */ 2140 - acpi_walk_namespace(ACPI_TYPE_ANY, start->handle, ACPI_UINT32_MAX, NULL, 2141 - acpi_bus_device_detach, NULL, NULL); 2142 - acpi_bus_device_detach(start->handle, 0, NULL, NULL); 2143 - /* 2144 - * Execute acpi_bus_remove() as a post-order callback to remove device 2145 - * nodes in the given namespace scope. 
2146 - */ 2147 - acpi_walk_namespace(ACPI_TYPE_ANY, start->handle, ACPI_UINT32_MAX, NULL, 2148 - acpi_bus_remove, NULL, NULL); 2149 - acpi_bus_remove(start->handle, 0, NULL, NULL); 1943 + acpi_device_set_power(adev, ACPI_STATE_D3_COLD); 1944 + adev->flags.initialized = false; 1945 + adev->flags.visited = false; 2150 1946 } 2151 1947 EXPORT_SYMBOL_GPL(acpi_bus_trim); 2152 1948 ··· 2197 2047 2198 2048 result = acpi_bus_scan_fixed(); 2199 2049 if (result) { 2200 - acpi_device_unregister(acpi_root); 2050 + acpi_detach_data(acpi_root->handle, acpi_scan_drop_device); 2051 + acpi_device_del(acpi_root); 2052 + put_device(&acpi_root->dev); 2201 2053 goto out; 2202 2054 } 2203 2055 2204 2056 acpi_update_all_gpes(); 2205 - 2206 - acpi_pci_root_hp_init(); 2207 2057 2208 2058 out: 2209 2059 mutex_unlock(&acpi_scan_lock);
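The reworked acpi_bus_trim() above replaces the two acpi_walk_namespace() passes with plain recursion over the child list, detaching children in reverse order before the parent. The traversal order can be sketched in userspace (hypothetical `node` type standing in for struct acpi_device; not kernel code):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical miniature of the device tree: each node knows its children. */
struct node {
    char name;
    struct node *children[4];
    int nchildren;
};

static void trim(struct node *n, char *log)
{
    /* Children go first, last-registered child first -- the analogue of
     * list_for_each_entry_reverse() over adev->children -- and only then
     * is the node itself "detached" (here: appended to the log). */
    for (int i = n->nchildren - 1; i >= 0; i--)
        trim(n->children[i], log);
    size_t len = strlen(log);
    log[len] = n->name;
    log[len + 1] = '\0';
}

/* Build r -> (a, b, c) and report the order in which nodes are trimmed. */
const char *trim_order(void)
{
    static char log[8];
    static struct node a = { 'a' }, b = { 'b' }, c = { 'c' };
    static struct node r = { 'r', { &a, &b, &c }, 3 };

    log[0] = '\0';
    trim(&r, log);
    return log;
}
```

Trimming r with children a, b, c yields "cbar": deepest and last-registered first, the parent last, so a handler's detach never runs before its descendants'.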
+2 -12
drivers/acpi/sleep.c
··· 18 18 #include <linux/reboot.h> 19 19 #include <linux/acpi.h> 20 20 #include <linux/module.h> 21 - 22 21 #include <asm/io.h> 23 - 24 - #include <acpi/acpi_bus.h> 25 - #include <acpi/acpi_drivers.h> 26 22 27 23 #include "internal.h" 28 24 #include "sleep.h" ··· 666 670 /* Reprogram control registers */ 667 671 acpi_leave_sleep_state_prep(ACPI_STATE_S4); 668 672 /* Check the hardware signature */ 669 - if (facs && s4_hardware_signature != facs->hardware_signature) { 670 - printk(KERN_EMERG "ACPI: Hardware changed while hibernated, " 671 - "cannot resume!\n"); 672 - panic("ACPI S4 hardware signature mismatch"); 673 - } 673 + if (facs && s4_hardware_signature != facs->hardware_signature) 674 + pr_crit("ACPI: Hardware changed while hibernated, success doubtful!\n"); 674 675 /* Restore the NVS memory area */ 675 676 suspend_nvs_restore(); 676 677 /* Allow EC transactions to happen. */ ··· 798 805 char supported[ACPI_S_STATE_COUNT * 3 + 1]; 799 806 char *pos = supported; 800 807 int i; 801 - 802 - if (acpi_disabled) 803 - return 0; 804 808 805 809 acpi_sleep_dmi_check(); 806 810
+1 -1
drivers/acpi/sysfs.c
··· 5 5 #include <linux/init.h> 6 6 #include <linux/kernel.h> 7 7 #include <linux/moduleparam.h> 8 - #include <acpi/acpi_drivers.h> 8 + #include <linux/acpi.h> 9 9 10 10 #include "internal.h" 11 11
+6 -5
drivers/acpi/tables.c
··· 278 278 279 279 /** 280 280 * acpi_table_parse - find table with @id, run @handler on it 281 - * 282 281 * @id: table id to find 283 282 * @handler: handler to run 284 283 * 285 284 * Scan the ACPI System Descriptor Table (STD) for a table matching @id, 286 - * run @handler on it. Return 0 if table found, return on if not. 285 + * run @handler on it. 286 + * 287 + * Return 0 if table found, -errno if not. 287 288 */ 288 289 int __init acpi_table_parse(char *id, acpi_tbl_table_handler handler) 289 290 { ··· 294 293 if (acpi_disabled) 295 294 return -ENODEV; 296 295 297 - if (!handler) 296 + if (!id || !handler) 298 297 return -EINVAL; 299 298 300 299 if (strncmp(id, ACPI_SIG_MADT, 4) == 0) ··· 307 306 early_acpi_os_unmap_memory(table, tbl_size); 308 307 return 0; 309 308 } else 310 - return 1; 309 + return -ENODEV; 311 310 } 312 311 313 312 /* ··· 352 351 353 352 status = acpi_initialize_tables(initial_tables, ACPI_MAX_TABLES, 0); 354 353 if (ACPI_FAILURE(status)) 355 - return 1; 354 + return -EINVAL; 356 355 357 356 check_multiple_madt(); 358 357 return 0;
+3 -4
drivers/acpi/thermal.c
··· 41 41 #include <linux/kmod.h> 42 42 #include <linux/reboot.h> 43 43 #include <linux/device.h> 44 - #include <asm/uaccess.h> 45 44 #include <linux/thermal.h> 46 - #include <acpi/acpi_bus.h> 47 - #include <acpi/acpi_drivers.h> 45 + #include <linux/acpi.h> 46 + #include <asm/uaccess.h> 48 47 49 48 #define PREFIX "ACPI: " 50 49 ··· 861 862 return acpi_thermal_cooling_device_cb(thermal, cdev, false); 862 863 } 863 864 864 - static const struct thermal_zone_device_ops acpi_thermal_zone_ops = { 865 + static struct thermal_zone_device_ops acpi_thermal_zone_ops = { 865 866 .bind = acpi_thermal_bind_cooling_device, 866 867 .unbind = acpi_thermal_unbind_cooling_device, 867 868 .get_temp = thermal_get_temp,
+97 -2
drivers/acpi/utils.c
··· 30 30 #include <linux/types.h> 31 31 #include <linux/hardirq.h> 32 32 #include <linux/acpi.h> 33 - #include <acpi/acpi_bus.h> 34 - #include <acpi/acpi_drivers.h> 35 33 36 34 #include "internal.h" 37 35 ··· 572 574 573 575 return status; 574 576 } 577 + 578 + /** 579 + * acpi_evaluate_dsm - evaluate device's _DSM method 580 + * @handle: ACPI device handle 581 + * @uuid: UUID of requested functions, should be 16 bytes 582 + * @rev: revision number of requested function 583 + * @func: requested function number 584 + * @argv4: the function specific parameter 585 + * 586 + * Evaluate device's _DSM method with specified UUID, revision id and 587 + * function number. Caller needs to free the returned object. 588 + * 589 + * Though ACPI defines the fourth parameter for _DSM should be a package, 590 + * some old BIOSes do expect a buffer or an integer etc. 591 + */ 592 + union acpi_object * 593 + acpi_evaluate_dsm(acpi_handle handle, const u8 *uuid, int rev, int func, 594 + union acpi_object *argv4) 595 + { 596 + acpi_status ret; 597 + struct acpi_buffer buf = {ACPI_ALLOCATE_BUFFER, NULL}; 598 + union acpi_object params[4]; 599 + struct acpi_object_list input = { 600 + .count = 4, 601 + .pointer = params, 602 + }; 603 + 604 + params[0].type = ACPI_TYPE_BUFFER; 605 + params[0].buffer.length = 16; 606 + params[0].buffer.pointer = (char *)uuid; 607 + params[1].type = ACPI_TYPE_INTEGER; 608 + params[1].integer.value = rev; 609 + params[2].type = ACPI_TYPE_INTEGER; 610 + params[2].integer.value = func; 611 + if (argv4) { 612 + params[3] = *argv4; 613 + } else { 614 + params[3].type = ACPI_TYPE_PACKAGE; 615 + params[3].package.count = 0; 616 + params[3].package.elements = NULL; 617 + } 618 + 619 + ret = acpi_evaluate_object(handle, "_DSM", &input, &buf); 620 + if (ACPI_SUCCESS(ret)) 621 + return (union acpi_object *)buf.pointer; 622 + 623 + if (ret != AE_NOT_FOUND) 624 + acpi_handle_warn(handle, 625 + "failed to evaluate _DSM (0x%x)\n", ret); 626 + 627 + return NULL; 628 + } 
629 + EXPORT_SYMBOL(acpi_evaluate_dsm); 630 + 631 + /** 632 + * acpi_check_dsm - check if _DSM method supports requested functions. 633 + * @handle: ACPI device handle 634 + * @uuid: UUID of requested functions, should be 16 bytes at least 635 + * @rev: revision number of requested functions 636 + * @funcs: bitmap of requested functions 638 + * 639 + * Evaluate device's _DSM method to check whether it supports requested 640 + * functions. Currently only supports up to 64 functions, which should be 641 + * enough for now. 642 + */ 643 + bool acpi_check_dsm(acpi_handle handle, const u8 *uuid, int rev, u64 funcs) 644 + { 645 + int i; 646 + u64 mask = 0; 647 + union acpi_object *obj; 648 + 649 + if (funcs == 0) 650 + return false; 651 + 652 + obj = acpi_evaluate_dsm(handle, uuid, rev, 0, NULL); 653 + if (!obj) 654 + return false; 655 + 656 + /* For compatibility, old BIOSes may return an integer */ 657 + if (obj->type == ACPI_TYPE_INTEGER) 658 + mask = obj->integer.value; 659 + else if (obj->type == ACPI_TYPE_BUFFER) 660 + for (i = 0; i < obj->buffer.length && i < 8; i++) 661 + mask |= (((u8)obj->buffer.pointer[i]) << (i * 8)); 662 + ACPI_FREE(obj); 663 + 664 + /* 665 + * Bit 0 indicates whether there's support for any functions other than 666 + * function 0 for the specified UUID and revision. 667 + */ 668 + if ((mask & 0x1) && (mask & funcs) == funcs) 669 + return true; 670 + 671 + return false; 672 + } 673 + EXPORT_SYMBOL(acpi_check_dsm);
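The buffer-to-mask fold in acpi_check_dsm() above can be exercised in isolation. A hypothetical userspace sketch (not the kernel helpers themselves); note that it widens each byte to 64 bits before shifting, whereas the hunk above shifts a u8 (promoted to int), which is undefined for the upper bytes of the reply:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Fold up to 8 reply bytes from _DSM function 0 into a capability
 * mask, least significant byte first. Casting to uint64_t before
 * the shift keeps shifts of 32 bits and beyond well defined. */
uint64_t dsm_mask(const uint8_t *buf, size_t len)
{
    uint64_t mask = 0;

    for (size_t i = 0; i < len && i < 8; i++)
        mask |= (uint64_t)buf[i] << (i * 8);
    return mask;
}

/* Bit 0 says "some function beyond 0 exists"; every requested
 * function bit must also be present in the mask. */
bool dsm_supports(uint64_t mask, uint64_t funcs)
{
    return funcs != 0 && (mask & 0x1) && (mask & funcs) == funcs;
}
```

For example, a three-byte reply { 0x01, 0x00, 0x80 } yields the mask 0x800001: function 0 support plus function 23.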
+2 -3
drivers/acpi/video.c
··· 37 37 #include <linux/pci.h> 38 38 #include <linux/pci_ids.h> 39 39 #include <linux/slab.h> 40 - #include <asm/uaccess.h> 41 40 #include <linux/dmi.h> 42 - #include <acpi/acpi_bus.h> 43 - #include <acpi/acpi_drivers.h> 44 41 #include <linux/suspend.h> 42 + #include <linux/acpi.h> 45 43 #include <acpi/video.h> 44 + #include <asm/uaccess.h> 46 45 47 46 #include "internal.h" 48 47
+1 -1
drivers/acpi/video_detect.c
··· 50 50 51 51 static acpi_status 52 52 acpi_backlight_cap_match(acpi_handle handle, u32 level, void *context, 53 - void **retyurn_value) 53 + void **return_value) 54 54 { 55 55 long *cap = context; 56 56
-1
drivers/acpi/wakeup.c
··· 5 5 6 6 #include <linux/init.h> 7 7 #include <linux/acpi.h> 8 - #include <acpi/acpi_drivers.h> 9 8 #include <linux/kernel.h> 10 9 #include <linux/types.h> 11 10
+13 -15
drivers/ata/libata-acpi.c
··· 20 20 #include <scsi/scsi_device.h> 21 21 #include "libata.h" 22 22 23 - #include <acpi/acpi_bus.h> 24 - 25 23 unsigned int ata_acpi_gtf_filter = ATA_ACPI_FILTER_DEFAULT; 26 24 module_param_named(acpi_gtf_filter, ata_acpi_gtf_filter, int, 0644); 27 25 MODULE_PARM_DESC(acpi_gtf_filter, "filter mask for ACPI _GTF commands, set to filter out (0x1=set xfermode, 0x2=lock/freeze lock, 0x4=DIPM, 0x8=FPDMA non-zero offset, 0x10=FPDMA DMA Setup FIS auto-activate)"); ··· 178 180 /* bind acpi handle to pata port */ 179 181 void ata_acpi_bind_port(struct ata_port *ap) 180 182 { 181 - acpi_handle host_handle = ACPI_HANDLE(ap->host->dev); 183 + struct acpi_device *host_companion = ACPI_COMPANION(ap->host->dev); 182 184 183 - if (libata_noacpi || ap->flags & ATA_FLAG_ACPI_SATA || !host_handle) 185 + if (libata_noacpi || ap->flags & ATA_FLAG_ACPI_SATA || !host_companion) 184 186 return; 185 187 186 - acpi_preset_companion(&ap->tdev, host_handle, ap->port_no); 188 + acpi_preset_companion(&ap->tdev, host_companion, ap->port_no); 187 189 188 190 if (ata_acpi_gtm(ap, &ap->__acpi_init_gtm) == 0) 189 191 ap->pflags |= ATA_PFLAG_INIT_GTM_VALID; ··· 196 198 void ata_acpi_bind_dev(struct ata_device *dev) 197 199 { 198 200 struct ata_port *ap = dev->link->ap; 199 - acpi_handle port_handle = ACPI_HANDLE(&ap->tdev); 200 - acpi_handle host_handle = ACPI_HANDLE(ap->host->dev); 201 - acpi_handle parent_handle; 201 + struct acpi_device *port_companion = ACPI_COMPANION(&ap->tdev); 202 + struct acpi_device *host_companion = ACPI_COMPANION(ap->host->dev); 203 + struct acpi_device *parent; 202 204 u64 adr; 203 205 204 206 /* 205 - * For both sata/pata devices, host handle is required. 206 - * For pata device, port handle is also required. 207 + * For both sata/pata devices, host companion device is required. 208 + * For pata device, port companion device is also required. 
207 209 */ 208 - if (libata_noacpi || !host_handle || 209 - (!(ap->flags & ATA_FLAG_ACPI_SATA) && !port_handle)) 210 + if (libata_noacpi || !host_companion || 211 + (!(ap->flags & ATA_FLAG_ACPI_SATA) && !port_companion)) 210 212 return; 211 213 212 214 if (ap->flags & ATA_FLAG_ACPI_SATA) { ··· 214 216 adr = SATA_ADR(ap->port_no, NO_PORT_MULT); 215 217 else 216 218 adr = SATA_ADR(ap->port_no, dev->link->pmp); 217 - parent_handle = host_handle; 219 + parent = host_companion; 218 220 } else { 219 221 adr = dev->devno; 220 - parent_handle = port_handle; 222 + parent = port_companion; 221 223 } 222 224 223 - acpi_preset_companion(&dev->tdev, parent_handle, adr); 225 + acpi_preset_companion(&dev->tdev, parent, adr); 224 226 225 227 register_hotplug_dock_device(ata_dev_acpi_handle(dev), 226 228 &ata_acpi_dev_dock_ops, dev, NULL, NULL);
+2 -3
drivers/ata/pata_acpi.c
··· 12 12 #include <linux/delay.h> 13 13 #include <linux/device.h> 14 14 #include <linux/gfp.h> 15 - #include <scsi/scsi_host.h> 16 - #include <acpi/acpi_bus.h> 17 - 15 + #include <linux/acpi.h> 18 16 #include <linux/libata.h> 19 17 #include <linux/ata.h> 18 + #include <scsi/scsi_host.h> 20 19 21 20 #define DRV_NAME "pata_acpi" 22 21 #define DRV_VERSION "0.2.3"
+1 -1
drivers/base/Makefile
··· 4 4 driver.o class.o platform.o \ 5 5 cpu.o firmware.o init.o map.o devres.o \ 6 6 attribute_container.o transport_class.o \ 7 - topology.o 7 + topology.o container.o 8 8 obj-$(CONFIG_DEVTMPFS) += devtmpfs.o 9 9 obj-$(CONFIG_DMA_CMA) += dma-contiguous.o 10 10 obj-y += power/
+1
drivers/base/base.h
··· 100 100 #endif 101 101 extern int platform_bus_init(void); 102 102 extern void cpu_dev_init(void); 103 + extern void container_dev_init(void); 103 104 104 105 struct kobject *virtual_device_parent(struct device *dev); 105 106
+44
drivers/base/container.c
··· 1 + /* 2 + * System bus type for containers. 3 + * 4 + * Copyright (C) 2013, Intel Corporation 5 + * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #include <linux/container.h> 13 + 14 + #include "base.h" 15 + 16 + #define CONTAINER_BUS_NAME "container" 17 + 18 + static int trivial_online(struct device *dev) 19 + { 20 + return 0; 21 + } 22 + 23 + static int container_offline(struct device *dev) 24 + { 25 + struct container_dev *cdev = to_container_dev(dev); 26 + 27 + return cdev->offline ? cdev->offline(cdev) : 0; 28 + } 29 + 30 + struct bus_type container_subsys = { 31 + .name = CONTAINER_BUS_NAME, 32 + .dev_name = CONTAINER_BUS_NAME, 33 + .online = trivial_online, 34 + .offline = container_offline, 35 + }; 36 + 37 + void __init container_dev_init(void) 38 + { 39 + int ret; 40 + 41 + ret = subsys_system_register(&container_subsys, NULL); 42 + if (ret) 43 + pr_err("%s() failed: %d\n", __func__, ret); 44 + }
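The new container bus above treats a missing ->offline callback as trivially successful. The optional-callback dispatch, in a hypothetical userspace miniature (invented `cdev` type, not the kernel's struct container_dev):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for struct container_dev: one optional hook. */
struct cdev {
    int (*offline)(struct cdev *cdev);
};

/* No callback registered means nothing can refuse the offline, so
 * report success -- the shape of container_offline() above. */
int do_offline(struct cdev *cdev)
{
    return cdev->offline ? cdev->offline(cdev) : 0;
}

/* A driver that vetoes the offline, returning -EBUSY (-16). */
int veto_offline(struct cdev *cdev)
{
    (void)cdev;
    return -16;
}
```

A device with no hook goes offline with 0; one registered with veto_offline() reports -16.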
+1
drivers/base/init.c
··· 33 33 platform_bus_init(); 34 34 cpu_dev_init(); 35 35 memory_dev_init(); 36 + container_dev_init(); 36 37 }
+15 -1
drivers/base/platform.c
··· 677 677 char *buf) 678 678 { 679 679 struct platform_device *pdev = to_platform_device(dev); 680 - int len = snprintf(buf, PAGE_SIZE, "platform:%s\n", pdev->name); 680 + int len; 681 + 682 + len = of_device_get_modalias(dev, buf, PAGE_SIZE -1); 683 + if (len != -ENODEV) 684 + return len; 685 + 686 + len = acpi_device_modalias(dev, buf, PAGE_SIZE -1); 687 + if (len != -ENODEV) 688 + return len; 689 + 690 + len = snprintf(buf, PAGE_SIZE, "platform:%s\n", pdev->name); 681 691 682 692 return (len >= PAGE_SIZE) ? (PAGE_SIZE - 1) : len; 683 693 } ··· 706 696 707 697 /* Some devices have extra OF data and an OF-style MODALIAS */ 708 698 rc = of_device_uevent_modalias(dev, env); 699 + if (rc != -ENODEV) 700 + return rc; 701 + 702 + rc = acpi_device_uevent_modalias(dev, env); 709 703 if (rc != -ENODEV) 710 704 return rc; 711 705
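The modalias paths above now try each firmware interface in turn, treating -ENODEV as "this device is not described by that interface" and any other result (a length or a real error) as final. A hypothetical userspace sketch of that chaining (invented provider functions, not the kernel APIs):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define ENODEV 19

/* Two hypothetical providers: each fills the buffer or answers
 * -ENODEV to say "not mine, try the next source". */
static int of_modalias(const char *of_id, char *buf, size_t len)
{
    return of_id ? snprintf(buf, len, "of:%s", of_id) : -ENODEV;
}

static int acpi_modalias(const char *acpi_id, char *buf, size_t len)
{
    return acpi_id ? snprintf(buf, len, "acpi:%s", acpi_id) : -ENODEV;
}

/* The chaining used by modalias_show(): only -ENODEV falls through
 * to the next provider; the plain platform name is the last resort. */
int modalias(const char *of_id, const char *acpi_id, const char *name,
             char *buf, size_t len)
{
    int n = of_modalias(of_id, buf, len);
    if (n != -ENODEV)
        return n;
    n = acpi_modalias(acpi_id, buf, len);
    if (n != -ENODEV)
        return n;
    return snprintf(buf, len, "platform:%s", name);
}
```

A device with an OF id wins the first slot; one with only an ACPI id gets "acpi:…"; one with neither falls back to "platform:<name>".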
+25 -5
drivers/base/power/clock_ops.c
··· 33 33 }; 34 34 35 35 /** 36 + * pm_clk_enable - Enable a clock, reporting any errors 37 + * @dev: The device for the given clock 38 + * @clk: The clock being enabled. 39 + */ 40 + static inline int __pm_clk_enable(struct device *dev, struct clk *clk) 41 + { 42 + int ret = clk_enable(clk); 43 + if (ret) 44 + dev_err(dev, "%s: failed to enable clk %p, error %d\n", 45 + __func__, clk, ret); 46 + 47 + return ret; 48 + } 49 + 50 + /** 36 51 * pm_clk_acquire - Acquire a device clock. 37 52 * @dev: Device whose clock is to be acquired. 38 53 * @ce: PM clock entry corresponding to the clock. ··· 58 43 if (IS_ERR(ce->clk)) { 59 44 ce->status = PCE_STATUS_ERROR; 60 45 } else { 46 + clk_prepare(ce->clk); 61 47 ce->status = PCE_STATUS_ACQUIRED; 62 48 dev_dbg(dev, "Clock %s managed by runtime PM.\n", ce->con_id); 63 49 } ··· 115 99 116 100 if (ce->status < PCE_STATUS_ERROR) { 117 101 if (ce->status == PCE_STATUS_ENABLED) 118 - clk_disable_unprepare(ce->clk); 102 + clk_disable(ce->clk); 119 103 120 - if (ce->status >= PCE_STATUS_ACQUIRED) 104 + if (ce->status >= PCE_STATUS_ACQUIRED) { 105 + clk_unprepare(ce->clk); 121 106 clk_put(ce->clk); 107 + } 122 108 } 123 109 124 110 kfree(ce->con_id); ··· 267 249 struct pm_subsys_data *psd = dev_to_psd(dev); 268 250 struct pm_clock_entry *ce; 269 251 unsigned long flags; 252 + int ret; 270 253 271 254 dev_dbg(dev, "%s()\n", __func__); 272 255 ··· 278 259 279 260 list_for_each_entry(ce, &psd->clock_list, node) { 280 261 if (ce->status < PCE_STATUS_ERROR) { 281 - clk_enable(ce->clk); 282 - ce->status = PCE_STATUS_ENABLED; 262 + ret = __pm_clk_enable(dev, ce->clk); 263 + if (!ret) 264 + ce->status = PCE_STATUS_ENABLED; 283 265 } 284 266 } 285 267 ··· 396 376 spin_lock_irqsave(&psd->lock, flags); 397 377 398 378 list_for_each_entry(ce, &psd->clock_list, node) 399 - clk_enable(ce->clk); 379 + __pm_clk_enable(dev, ce->clk); 400 380 401 381 spin_unlock_irqrestore(&psd->lock, flags); 402 382
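The clock_ops.c change above stops marking a clock PCE_STATUS_ENABLED when clk_enable() fails, so a later disable cannot unbalance the enable count. The status-only-on-success pattern, in a hypothetical userspace sketch (invented types mirroring the pm_clock_entry state machine):

```c
#include <assert.h>
#include <stdio.h>

enum pce_status {
    PCE_STATUS_NONE,
    PCE_STATUS_ACQUIRED,
    PCE_STATUS_ENABLED,
    PCE_STATUS_ERROR,
};

struct clk_entry {
    enum pce_status status;
    int fail;               /* simulate a failing clk_enable() */
};

/* Analogue of __pm_clk_enable(): report the error, return it. */
static int pm_clk_enable_one(struct clk_entry *ce)
{
    int ret = ce->fail ? -5 /* -EIO */ : 0;

    if (ret)
        fprintf(stderr, "failed to enable clk, error %d\n", ret);
    return ret;
}

/* The resume-path fix: promote the status to ENABLED only when the
 * enable actually succeeded. */
void pm_clk_resume_one(struct clk_entry *ce)
{
    if (ce->status < PCE_STATUS_ERROR) {
        if (!pm_clk_enable_one(ce))
            ce->status = PCE_STATUS_ENABLED;
    }
}
```

An entry whose enable fails stays PCE_STATUS_ACQUIRED, so the matching suspend path will not call clk_disable() on a clock that was never enabled.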
+2 -2
drivers/base/power/generic_ops.c
··· 10 10 #include <linux/pm_runtime.h> 11 11 #include <linux/export.h> 12 12 13 - #ifdef CONFIG_PM_RUNTIME 13 + #ifdef CONFIG_PM 14 14 /** 15 15 * pm_generic_runtime_suspend - Generic runtime suspend callback for subsystems. 16 16 * @dev: Device to suspend. ··· 48 48 return ret; 49 49 } 50 50 EXPORT_SYMBOL_GPL(pm_generic_runtime_resume); 51 - #endif /* CONFIG_PM_RUNTIME */ 51 + #endif /* CONFIG_PM */ 52 52 53 53 #ifdef CONFIG_PM_SLEEP 54 54 /**
+9 -2
drivers/char/apm-emulation.c
··· 531 531 { 532 532 struct apm_user *as; 533 533 int err; 534 + unsigned long apm_event; 534 535 535 536 /* short-cut emergency suspends */ 536 537 if (atomic_read(&userspace_notification_inhibit)) ··· 539 538 540 539 switch (event) { 541 540 case PM_SUSPEND_PREPARE: 541 + case PM_HIBERNATION_PREPARE: 542 + apm_event = (event == PM_SUSPEND_PREPARE) ? 543 + APM_USER_SUSPEND : APM_USER_HIBERNATION; 542 544 /* 543 545 * Queue an event to all "writer" users that we want 544 546 * to suspend and need their ack. ··· 554 550 as->writer && as->suser) { 555 551 as->suspend_state = SUSPEND_PENDING; 556 552 atomic_inc(&suspend_acks_pending); 557 - queue_add_event(&as->queue, APM_USER_SUSPEND); 553 + queue_add_event(&as->queue, apm_event); 558 554 } 559 555 } 560 556 ··· 605 601 return notifier_from_errno(err); 606 602 607 603 case PM_POST_SUSPEND: 604 + case PM_POST_HIBERNATION: 605 + apm_event = (event == PM_POST_SUSPEND) ? 606 + APM_NORMAL_RESUME : APM_HIBERNATION_RESUME; 608 607 /* 609 608 * Anyone on the APM queues will think we're still suspended. 610 609 * Send a message so everyone knows we're now awake again. 611 610 */ 612 - queue_event(APM_NORMAL_RESUME); 611 + queue_event(apm_event); 613 612 614 613 /* 615 614 * Finally, wake up anyone who is sleeping on the suspend.
+2 -5
drivers/char/hpet.c
··· 34 34 #include <linux/uaccess.h> 35 35 #include <linux/slab.h> 36 36 #include <linux/io.h> 37 - 37 + #include <linux/acpi.h> 38 + #include <linux/hpet.h> 38 39 #include <asm/current.h> 39 40 #include <asm/irq.h> 40 41 #include <asm/div64.h> 41 - 42 - #include <linux/acpi.h> 43 - #include <acpi/acpi_bus.h> 44 - #include <linux/hpet.h> 45 42 46 43 /* 47 44 * The High Precision Event Timer driver.
+1 -1
drivers/char/tpm/tpm_acpi.c
··· 23 23 #include <linux/security.h> 24 24 #include <linux/module.h> 25 25 #include <linux/slab.h> 26 - #include <acpi/acpi.h> 26 + #include <linux/acpi.h> 27 27 28 28 #include "tpm.h" 29 29 #include "tpm_eventlog.h"
+140 -262
drivers/char/tpm/tpm_ppi.c
··· 1 1 #include <linux/acpi.h> 2 - #include <acpi/acpi_drivers.h> 3 2 #include "tpm.h" 4 - 5 - static const u8 tpm_ppi_uuid[] = { 6 - 0xA6, 0xFA, 0xDD, 0x3D, 7 - 0x1B, 0x36, 8 - 0xB4, 0x4E, 9 - 0xA4, 0x24, 10 - 0x8D, 0x10, 0x08, 0x9D, 0x16, 0x53 11 - }; 12 - static char *tpm_device_name = "TPM"; 13 3 14 4 #define TPM_PPI_REVISION_ID 1 15 5 #define TPM_PPI_FN_VERSION 1 ··· 14 24 #define PPI_VS_REQ_END 255 15 25 #define PPI_VERSION_LEN 3 16 26 27 + static const u8 tpm_ppi_uuid[] = { 28 + 0xA6, 0xFA, 0xDD, 0x3D, 29 + 0x1B, 0x36, 30 + 0xB4, 0x4E, 31 + 0xA4, 0x24, 32 + 0x8D, 0x10, 0x08, 0x9D, 0x16, 0x53 33 + }; 34 + 35 + static char tpm_ppi_version[PPI_VERSION_LEN + 1]; 36 + static acpi_handle tpm_ppi_handle; 37 + 17 38 static acpi_status ppi_callback(acpi_handle handle, u32 level, void *context, 18 39 void **return_value) 19 40 { 20 - acpi_status status = AE_OK; 21 - struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; 41 + union acpi_object *obj; 22 42 23 - if (ACPI_SUCCESS(acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer))) { 24 - if (strstr(buffer.pointer, context) != NULL) { 25 - *return_value = handle; 26 - status = AE_CTRL_TERMINATE; 27 - } 28 - kfree(buffer.pointer); 43 + if (!acpi_check_dsm(handle, tpm_ppi_uuid, TPM_PPI_REVISION_ID, 44 + 1 << TPM_PPI_FN_VERSION)) 45 + return AE_OK; 46 + 47 + /* Cache version string */ 48 + obj = acpi_evaluate_dsm_typed(handle, tpm_ppi_uuid, 49 + TPM_PPI_REVISION_ID, TPM_PPI_FN_VERSION, 50 + NULL, ACPI_TYPE_STRING); 51 + if (obj) { 52 + strlcpy(tpm_ppi_version, obj->string.pointer, 53 + PPI_VERSION_LEN + 1); 54 + ACPI_FREE(obj); 29 55 } 30 56 31 - return status; 57 + *return_value = handle; 58 + 59 + return AE_CTRL_TERMINATE; 32 60 } 33 61 34 - static inline void ppi_assign_params(union acpi_object params[4], 35 - u64 function_num) 62 + static inline union acpi_object * 63 + tpm_eval_dsm(int func, acpi_object_type type, union acpi_object *argv4) 36 64 { 37 - params[0].type = ACPI_TYPE_BUFFER; 38 - 
params[0].buffer.length = sizeof(tpm_ppi_uuid); 39 - params[0].buffer.pointer = (char *)tpm_ppi_uuid; 40 - params[1].type = ACPI_TYPE_INTEGER; 41 - params[1].integer.value = TPM_PPI_REVISION_ID; 42 - params[2].type = ACPI_TYPE_INTEGER; 43 - params[2].integer.value = function_num; 44 - params[3].type = ACPI_TYPE_PACKAGE; 45 - params[3].package.count = 0; 46 - params[3].package.elements = NULL; 65 + BUG_ON(!tpm_ppi_handle); 66 + return acpi_evaluate_dsm_typed(tpm_ppi_handle, tpm_ppi_uuid, 67 + TPM_PPI_REVISION_ID, func, argv4, type); 47 68 } 48 69 49 70 static ssize_t tpm_show_ppi_version(struct device *dev, 50 71 struct device_attribute *attr, char *buf) 51 72 { 52 - acpi_handle handle; 53 - acpi_status status; 54 - struct acpi_object_list input; 55 - struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL }; 56 - union acpi_object params[4]; 57 - union acpi_object *obj; 58 - 59 - input.count = 4; 60 - ppi_assign_params(params, TPM_PPI_FN_VERSION); 61 - input.pointer = params; 62 - status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT, 63 - ACPI_UINT32_MAX, ppi_callback, NULL, 64 - tpm_device_name, &handle); 65 - if (ACPI_FAILURE(status)) 66 - return -ENXIO; 67 - 68 - status = acpi_evaluate_object_typed(handle, "_DSM", &input, &output, 69 - ACPI_TYPE_STRING); 70 - if (ACPI_FAILURE(status)) 71 - return -ENOMEM; 72 - obj = (union acpi_object *)output.pointer; 73 - status = scnprintf(buf, PAGE_SIZE, "%s\n", obj->string.pointer); 74 - kfree(output.pointer); 75 - return status; 73 + return scnprintf(buf, PAGE_SIZE, "%s\n", tpm_ppi_version); 76 74 } 77 75 78 76 static ssize_t tpm_show_ppi_request(struct device *dev, 79 77 struct device_attribute *attr, char *buf) 80 78 { 81 - acpi_handle handle; 82 - acpi_status status; 83 - struct acpi_object_list input; 84 - struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL }; 85 - union acpi_object params[4]; 86 - union acpi_object *ret_obj; 79 + ssize_t size = -EINVAL; 80 + union acpi_object *obj; 87 81 88 - 
-    input.count = 4;
-    ppi_assign_params(params, TPM_PPI_FN_GETREQ);
-    input.pointer = params;
-    status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
-                     ACPI_UINT32_MAX, ppi_callback, NULL,
-                     tpm_device_name, &handle);
-    if (ACPI_FAILURE(status))
+    obj = tpm_eval_dsm(TPM_PPI_FN_GETREQ, ACPI_TYPE_PACKAGE, NULL);
+    if (!obj)
         return -ENXIO;
 
-    status = acpi_evaluate_object_typed(handle, "_DSM", &input, &output,
-                        ACPI_TYPE_PACKAGE);
-    if (ACPI_FAILURE(status))
-        return -ENOMEM;
     /*
      * output.pointer should be of package type, including two integers.
      * The first is function return code, 0 means success and 1 means
      * error. The second is pending TPM operation requested by the OS, 0
      * means none and >0 means operation value.
      */
-    ret_obj = ((union acpi_object *)output.pointer)->package.elements;
-    if (ret_obj->type == ACPI_TYPE_INTEGER) {
-        if (ret_obj->integer.value) {
-            status = -EFAULT;
-            goto cleanup;
-        }
-        ret_obj++;
-        if (ret_obj->type == ACPI_TYPE_INTEGER)
-            status = scnprintf(buf, PAGE_SIZE, "%llu\n",
-                       ret_obj->integer.value);
+    if (obj->package.count == 2 &&
+        obj->package.elements[0].type == ACPI_TYPE_INTEGER &&
+        obj->package.elements[1].type == ACPI_TYPE_INTEGER) {
+        if (obj->package.elements[0].integer.value)
+            size = -EFAULT;
         else
-            status = -EINVAL;
-    } else {
-        status = -EINVAL;
+            size = scnprintf(buf, PAGE_SIZE, "%llu\n",
+                     obj->package.elements[1].integer.value);
     }
-cleanup:
-    kfree(output.pointer);
-    return status;
+
+    ACPI_FREE(obj);
+
+    return size;
 }
 
 static ssize_t tpm_store_ppi_request(struct device *dev,
                      struct device_attribute *attr,
                      const char *buf, size_t count)
 {
-    char version[PPI_VERSION_LEN + 1];
-    acpi_handle handle;
-    acpi_status status;
-    struct acpi_object_list input;
-    struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
-    union acpi_object params[4];
-    union acpi_object obj;
     u32 req;
     u64 ret;
+    int func = TPM_PPI_FN_SUBREQ;
+    union acpi_object *obj, tmp;
+    union acpi_object argv4 = ACPI_INIT_DSM_ARGV4(1, &tmp);
 
-    input.count = 4;
-    ppi_assign_params(params, TPM_PPI_FN_VERSION);
-    input.pointer = params;
-    status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
-                     ACPI_UINT32_MAX, ppi_callback, NULL,
-                     tpm_device_name, &handle);
-    if (ACPI_FAILURE(status))
-        return -ENXIO;
-
-    status = acpi_evaluate_object_typed(handle, "_DSM", &input, &output,
-                        ACPI_TYPE_STRING);
-    if (ACPI_FAILURE(status))
-        return -ENOMEM;
-    strlcpy(version,
-        ((union acpi_object *)output.pointer)->string.pointer,
-        PPI_VERSION_LEN + 1);
-    kfree(output.pointer);
-    output.length = ACPI_ALLOCATE_BUFFER;
-    output.pointer = NULL;
     /*
      * the function to submit TPM operation request to pre-os environment
      * is updated with function index from SUBREQ to SUBREQ2 since PPI
      * version 1.1
      */
-    if (strcmp(version, "1.1") < 0)
-        params[2].integer.value = TPM_PPI_FN_SUBREQ;
-    else
-        params[2].integer.value = TPM_PPI_FN_SUBREQ2;
+    if (acpi_check_dsm(tpm_ppi_handle, tpm_ppi_uuid, TPM_PPI_REVISION_ID,
+               1 << TPM_PPI_FN_SUBREQ2))
+        func = TPM_PPI_FN_SUBREQ2;
+
     /*
      * PPI spec defines params[3].type as ACPI_TYPE_PACKAGE. Some BIOS
      * accept buffer/string/integer type, but some BIOS accept buffer/
      * string/package type. For PPI version 1.0 and 1.1, use buffer type
      * for compatibility, and use package type since 1.2 according to spec.
      */
-    if (strcmp(version, "1.2") < 0) {
-        params[3].type = ACPI_TYPE_BUFFER;
-        params[3].buffer.length = sizeof(req);
-        sscanf(buf, "%d", &req);
-        params[3].buffer.pointer = (char *)&req;
+    if (strcmp(tpm_ppi_version, "1.2") < 0) {
+        if (sscanf(buf, "%d", &req) != 1)
+            return -EINVAL;
+        argv4.type = ACPI_TYPE_BUFFER;
+        argv4.buffer.length = sizeof(req);
+        argv4.buffer.pointer = (u8 *)&req;
     } else {
-        params[3].package.count = 1;
-        obj.type = ACPI_TYPE_INTEGER;
-        sscanf(buf, "%llu", &obj.integer.value);
-        params[3].package.elements = &obj;
+        tmp.type = ACPI_TYPE_INTEGER;
+        if (sscanf(buf, "%llu", &tmp.integer.value) != 1)
+            return -EINVAL;
     }
 
-    status = acpi_evaluate_object_typed(handle, "_DSM", &input, &output,
-                        ACPI_TYPE_INTEGER);
-    if (ACPI_FAILURE(status))
-        return -ENOMEM;
-    ret = ((union acpi_object *)output.pointer)->integer.value;
+    obj = tpm_eval_dsm(func, ACPI_TYPE_INTEGER, &argv4);
+    if (!obj) {
+        return -ENXIO;
+    } else {
+        ret = obj->integer.value;
+        ACPI_FREE(obj);
+    }
+
     if (ret == 0)
-        status = (acpi_status)count;
-    else if (ret == 1)
-        status = -EPERM;
-    else
-        status = -EFAULT;
-    kfree(output.pointer);
-    return status;
+        return (acpi_status)count;
+
+    return (ret == 1) ? -EPERM : -EFAULT;
 }
 
 static ssize_t tpm_show_ppi_transition_action(struct device *dev,
                           struct device_attribute *attr,
                           char *buf)
 {
-    char version[PPI_VERSION_LEN + 1];
-    acpi_handle handle;
-    acpi_status status;
-    struct acpi_object_list input;
-    struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
-    union acpi_object params[4];
     u32 ret;
-    char *info[] = {
+    acpi_status status;
+    union acpi_object *obj = NULL;
+    union acpi_object tmp = {
+        .buffer.type = ACPI_TYPE_BUFFER,
+        .buffer.length = 0,
+        .buffer.pointer = NULL
+    };
+
+    static char *info[] = {
         "None",
         "Shutdown",
         "Reboot",
         "OS Vendor-specific",
         "Error",
     };
-    input.count = 4;
-    ppi_assign_params(params, TPM_PPI_FN_VERSION);
-    input.pointer = params;
-    status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
-                     ACPI_UINT32_MAX, ppi_callback, NULL,
-                     tpm_device_name, &handle);
-    if (ACPI_FAILURE(status))
-        return -ENXIO;
 
-    status = acpi_evaluate_object_typed(handle, "_DSM", &input, &output,
-                        ACPI_TYPE_STRING);
-    if (ACPI_FAILURE(status))
-        return -ENOMEM;
-    strlcpy(version,
-        ((union acpi_object *)output.pointer)->string.pointer,
-        PPI_VERSION_LEN + 1);
     /*
      * PPI spec defines params[3].type as empty package, but some platforms
      * (e.g. Capella with PPI 1.0) need integer/string/buffer type, so for
      * compatibility, define params[3].type as buffer, if PPI version < 1.2
      */
-    if (strcmp(version, "1.2") < 0) {
-        params[3].type = ACPI_TYPE_BUFFER;
-        params[3].buffer.length = 0;
-        params[3].buffer.pointer = NULL;
+    if (strcmp(tpm_ppi_version, "1.2") < 0)
+        obj = &tmp;
+    obj = tpm_eval_dsm(TPM_PPI_FN_GETACT, ACPI_TYPE_INTEGER, obj);
+    if (!obj) {
+        return -ENXIO;
+    } else {
+        ret = obj->integer.value;
+        ACPI_FREE(obj);
     }
-    params[2].integer.value = TPM_PPI_FN_GETACT;
-    kfree(output.pointer);
-    output.length = ACPI_ALLOCATE_BUFFER;
-    output.pointer = NULL;
-    status = acpi_evaluate_object_typed(handle, "_DSM", &input, &output,
-                        ACPI_TYPE_INTEGER);
-    if (ACPI_FAILURE(status))
-        return -ENOMEM;
-    ret = ((union acpi_object *)output.pointer)->integer.value;
+
     if (ret < ARRAY_SIZE(info) - 1)
         status = scnprintf(buf, PAGE_SIZE, "%d: %s\n", ret, info[ret]);
     else
         status = scnprintf(buf, PAGE_SIZE, "%d: %s\n", ret,
                    info[ARRAY_SIZE(info)-1]);
-    kfree(output.pointer);
     return status;
 }
···
                      struct device_attribute *attr,
                      char *buf)
 {
-    acpi_handle handle;
-    acpi_status status;
-    struct acpi_object_list input;
-    struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
-    union acpi_object params[4];
-    union acpi_object *ret_obj;
-    u64 req;
+    acpi_status status = -EINVAL;
+    union acpi_object *obj, *ret_obj;
+    u64 req, res;
 
-    input.count = 4;
-    ppi_assign_params(params, TPM_PPI_FN_GETRSP);
-    input.pointer = params;
-    status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
-                     ACPI_UINT32_MAX, ppi_callback, NULL,
-                     tpm_device_name, &handle);
-    if (ACPI_FAILURE(status))
+    obj = tpm_eval_dsm(TPM_PPI_FN_GETRSP, ACPI_TYPE_PACKAGE, NULL);
+    if (!obj)
         return -ENXIO;
 
-    status = acpi_evaluate_object_typed(handle, "_DSM", &input, &output,
-                        ACPI_TYPE_PACKAGE);
-    if (ACPI_FAILURE(status))
-        return -ENOMEM;
     /*
      * parameter output.pointer should be of package type, including
      * 3 integers. The first means function return code, the second means
···
      * the most recent TPM operation request. Only if the first is 0, and
      * the second integer is not 0, the response makes sense.
      */
-    ret_obj = ((union acpi_object *)output.pointer)->package.elements;
-    if (ret_obj->type != ACPI_TYPE_INTEGER) {
-        status = -EINVAL;
+    ret_obj = obj->package.elements;
+    if (obj->package.count < 3 ||
+        ret_obj[0].type != ACPI_TYPE_INTEGER ||
+        ret_obj[1].type != ACPI_TYPE_INTEGER ||
+        ret_obj[2].type != ACPI_TYPE_INTEGER)
         goto cleanup;
-    }
-    if (ret_obj->integer.value) {
+
+    if (ret_obj[0].integer.value) {
         status = -EFAULT;
         goto cleanup;
     }
-    ret_obj++;
-    if (ret_obj->type != ACPI_TYPE_INTEGER) {
-        status = -EINVAL;
-        goto cleanup;
-    }
-    if (ret_obj->integer.value) {
-        req = ret_obj->integer.value;
-        ret_obj++;
-        if (ret_obj->type != ACPI_TYPE_INTEGER) {
-            status = -EINVAL;
-            goto cleanup;
-        }
-        if (ret_obj->integer.value == 0)
+
+    req = ret_obj[1].integer.value;
+    res = ret_obj[2].integer.value;
+    if (req) {
+        if (res == 0)
             status = scnprintf(buf, PAGE_SIZE, "%llu %s\n", req,
                        "0: Success");
-        else if (ret_obj->integer.value == 0xFFFFFFF0)
+        else if (res == 0xFFFFFFF0)
             status = scnprintf(buf, PAGE_SIZE, "%llu %s\n", req,
                        "0xFFFFFFF0: User Abort");
-        else if (ret_obj->integer.value == 0xFFFFFFF1)
+        else if (res == 0xFFFFFFF1)
             status = scnprintf(buf, PAGE_SIZE, "%llu %s\n", req,
                        "0xFFFFFFF1: BIOS Failure");
-        else if (ret_obj->integer.value >= 1 &&
-             ret_obj->integer.value <= 0x00000FFF)
+        else if (res >= 1 && res <= 0x00000FFF)
             status = scnprintf(buf, PAGE_SIZE, "%llu %llu: %s\n",
-                       req, ret_obj->integer.value,
-                       "Corresponding TPM error");
+                       req, res, "Corresponding TPM error");
         else
             status = scnprintf(buf, PAGE_SIZE, "%llu %llu: %s\n",
-                       req, ret_obj->integer.value,
-                       "Error");
+                       req, res, "Error");
     } else {
         status = scnprintf(buf, PAGE_SIZE, "%llu: %s\n",
-                   ret_obj->integer.value, "No Recent Request");
+                   req, "No Recent Request");
     }
+
 cleanup:
-    kfree(output.pointer);
+    ACPI_FREE(obj);
     return status;
 }
 
 static ssize_t show_ppi_operations(char *buf, u32 start, u32 end)
 {
-    char *str = buf;
-    char version[PPI_VERSION_LEN + 1];
-    acpi_handle handle;
-    acpi_status status;
-    struct acpi_object_list input;
-    struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
-    union acpi_object params[4];
-    union acpi_object obj;
     int i;
     u32 ret;
-    char *info[] = {
+    char *str = buf;
+    union acpi_object *obj, tmp;
+    union acpi_object argv = ACPI_INIT_DSM_ARGV4(1, &tmp);
+
+    static char *info[] = {
         "Not implemented",
         "BIOS only",
         "Blocked for OS by BIOS",
         "User required",
         "User not required",
     };
-    input.count = 4;
-    ppi_assign_params(params, TPM_PPI_FN_VERSION);
-    input.pointer = params;
-    status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
-                     ACPI_UINT32_MAX, ppi_callback, NULL,
-                     tpm_device_name, &handle);
-    if (ACPI_FAILURE(status))
-        return -ENXIO;
 
-    status = acpi_evaluate_object_typed(handle, "_DSM", &input, &output,
-                        ACPI_TYPE_STRING);
-    if (ACPI_FAILURE(status))
-        return -ENOMEM;
-
-    strlcpy(version,
-        ((union acpi_object *)output.pointer)->string.pointer,
-        PPI_VERSION_LEN + 1);
-    kfree(output.pointer);
-    output.length = ACPI_ALLOCATE_BUFFER;
-    output.pointer = NULL;
-    if (strcmp(version, "1.2") < 0)
+    if (!acpi_check_dsm(tpm_ppi_handle, tpm_ppi_uuid, TPM_PPI_REVISION_ID,
+                1 << TPM_PPI_FN_GETOPR))
         return -EPERM;
 
-    params[2].integer.value = TPM_PPI_FN_GETOPR;
-    params[3].package.count = 1;
-    obj.type = ACPI_TYPE_INTEGER;
-    params[3].package.elements = &obj;
+    tmp.integer.type = ACPI_TYPE_INTEGER;
     for (i = start; i <= end; i++) {
-        obj.integer.value = i;
-        status = acpi_evaluate_object_typed(handle, "_DSM",
-                &input, &output, ACPI_TYPE_INTEGER);
-        if (ACPI_FAILURE(status))
+        tmp.integer.value = i;
+        obj = tpm_eval_dsm(TPM_PPI_FN_GETOPR, ACPI_TYPE_INTEGER, &argv);
+        if (!obj) {
             return -ENOMEM;
+        } else {
+            ret = obj->integer.value;
+            ACPI_FREE(obj);
+        }
 
-        ret = ((union acpi_object *)output.pointer)->integer.value;
         if (ret > 0 && ret < ARRAY_SIZE(info))
             str += scnprintf(str, PAGE_SIZE, "%d %d: %s\n",
                      i, ret, info[ret]);
-        kfree(output.pointer);
-        output.length = ACPI_ALLOCATE_BUFFER;
-        output.pointer = NULL;
     }
+
     return str - buf;
 }
···
 
 int tpm_add_ppi(struct kobject *parent)
 {
+    /* Cache TPM ACPI handle and version string */
+    acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT, ACPI_UINT32_MAX,
+                ppi_callback, NULL, NULL, &tpm_ppi_handle);
+    if (tpm_ppi_handle == NULL)
+        return -ENODEV;
+
     return sysfs_create_group(parent, &ppi_attr_grp);
 }
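The reworked tpm_show_ppi_response() above prints a fixed mapping from the BIOS response code to a human-readable string. That mapping can be exercised in isolation; the sketch below is a user-space mirror of just the scnprintf() cases in the diff (the helper name `ppi_response_str` and the standalone setting are assumptions, not kernel code):

```c
#include <stdint.h>

/* Hypothetical user-space mirror of the response-code cases in
 * tpm_show_ppi_response(): 0 is success, 0xFFFFFFF0 is a user abort,
 * 0xFFFFFFF1 a BIOS failure, 1..0xFFF a TPM error, anything else "Error". */
static const char *ppi_response_str(uint64_t res)
{
	if (res == 0)
		return "0: Success";
	if (res == 0xFFFFFFF0)
		return "0xFFFFFFF0: User Abort";
	if (res == 0xFFFFFFF1)
		return "0xFFFFFFF1: BIOS Failure";
	if (res >= 1 && res <= 0x00000FFF)
		return "Corresponding TPM error";
	return "Error";
}
```

The sysfs handler prepends the pending-request number before this string; only the code-to-string mapping is modelled here.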
+6 -1
drivers/cpufreq/Kconfig
···
 config CPU_FREQ_GOV_COMMON
 	bool
 
+config CPU_FREQ_BOOST_SW
+	bool
+	depends on THERMAL
+
 config CPU_FREQ_STAT
 	tristate "CPU frequency translation statistics"
 	default y
···
 
 config GENERIC_CPUFREQ_CPU0
 	tristate "Generic CPU0 cpufreq driver"
-	depends on HAVE_CLK && REGULATOR && PM_OPP && OF
+	depends on HAVE_CLK && REGULATOR && OF
+	select PM_OPP
 	help
 	  This adds a generic cpufreq driver for CPU0 frequency management.
 	  It supports both uniprocessor (UP) and symmetric multiprocessor (SMP)
+22 -5
drivers/cpufreq/Kconfig.arm
···
 
 config ARM_BIG_LITTLE_CPUFREQ
 	tristate "Generic ARM big LITTLE CPUfreq driver"
-	depends on ARM_CPU_TOPOLOGY && PM_OPP && HAVE_CLK
+	depends on ARM && BIG_LITTLE && ARM_CPU_TOPOLOGY && HAVE_CLK
+	select PM_OPP
 	help
 	  This enables the Generic CPUfreq driver for ARM big.LITTLE platforms.
 
···
 config ARM_EXYNOS5440_CPUFREQ
 	bool "SAMSUNG EXYNOS5440"
 	depends on SOC_EXYNOS5440
-	depends on HAVE_CLK && PM_OPP && OF
+	depends on HAVE_CLK && OF
+	select PM_OPP
 	default y
 	help
 	  This adds the CPUFreq driver for Samsung EXYNOS5440
 	  SoC. The nature of exynos5440 clock controller is
 	  different than previous exynos controllers so not using
 	  the common exynos framework.
+
+	  If in doubt, say N.
+
+config ARM_EXYNOS_CPU_FREQ_BOOST_SW
+	bool "EXYNOS Frequency Overclocking - Software"
+	depends on ARM_EXYNOS_CPUFREQ
+	select CPU_FREQ_BOOST_SW
+	select EXYNOS_THERMAL
+	help
+	  This driver supports software managed overclocking (BOOST).
+	  It allows usage of special frequencies for Samsung Exynos
+	  processors if thermal conditions are appropriate.
+
+	  It requires, for safe operation, a thermal framework with properly
+	  defined trip points.
 
 	  If in doubt, say N.
···
 	  If in doubt, say N.
 
 config ARM_IMX6Q_CPUFREQ
-	tristate "Freescale i.MX6Q cpufreq support"
-	depends on SOC_IMX6Q
+	tristate "Freescale i.MX6 cpufreq support"
+	depends on ARCH_MXC
 	depends on REGULATOR_ANATOP
 	help
-	  This adds cpufreq driver support for Freescale i.MX6Q SOC.
+	  This adds cpufreq driver support for Freescale i.MX6 series SoCs.
 
 	  If in doubt, say N.
+29 -57
drivers/cpufreq/acpi-cpufreq.c
···
 static struct cpufreq_driver acpi_cpufreq_driver;
 
 static unsigned int acpi_pstate_strict;
-static bool boost_enabled, boost_supported;
 static struct msr __percpu *msrs;
 
 static bool boost_state(unsigned int cpu)
···
 	wrmsr_on_cpus(cpumask, msr_addr, msrs);
 }
 
-static ssize_t _store_boost(const char *buf, size_t count)
+static int _store_boost(int val)
 {
-	int ret;
-	unsigned long val = 0;
-
-	if (!boost_supported)
-		return -EINVAL;
-
-	ret = kstrtoul(buf, 10, &val);
-	if (ret || (val > 1))
-		return -EINVAL;
-
-	if ((val && boost_enabled) || (!val && !boost_enabled))
-		return count;
-
 	get_online_cpus();
-
 	boost_set_msrs(val, cpu_online_mask);
-
 	put_online_cpus();
-
-	boost_enabled = val;
 	pr_debug("Core Boosting %sabled.\n", val ? "en" : "dis");
 
-	return count;
+	return 0;
 }
-
-static ssize_t store_global_boost(struct kobject *kobj, struct attribute *attr,
-				  const char *buf, size_t count)
-{
-	return _store_boost(buf, count);
-}
-
-static ssize_t show_global_boost(struct kobject *kobj,
-				 struct attribute *attr, char *buf)
-{
-	return sprintf(buf, "%u\n", boost_enabled);
-}
-
-static struct global_attr global_boost = __ATTR(boost, 0644,
-						show_global_boost,
-						store_global_boost);
 
 static ssize_t show_freqdomain_cpus(struct cpufreq_policy *policy, char *buf)
 {
···
 cpufreq_freq_attr_ro(freqdomain_cpus);
 
 #ifdef CONFIG_X86_ACPI_CPUFREQ_CPB
+static ssize_t store_boost(const char *buf, size_t count)
+{
+	int ret;
+	unsigned long val = 0;
+
+	if (!acpi_cpufreq_driver.boost_supported)
+		return -EINVAL;
+
+	ret = kstrtoul(buf, 10, &val);
+	if (ret || (val > 1))
+		return -EINVAL;
+
+	_store_boost((int) val);
+
+	return count;
+}
+
 static ssize_t store_cpb(struct cpufreq_policy *policy, const char *buf,
 			 size_t count)
 {
-	return _store_boost(buf, count);
+	return store_boost(buf, count);
 }
 
 static ssize_t show_cpb(struct cpufreq_policy *policy, char *buf)
 {
-	return sprintf(buf, "%u\n", boost_enabled);
+	return sprintf(buf, "%u\n", acpi_cpufreq_driver.boost_enabled);
 }
 
 cpufreq_freq_attr_rw(cpb);
···
 	switch (action) {
 	case CPU_UP_PREPARE:
 	case CPU_UP_PREPARE_FROZEN:
-		boost_set_msrs(boost_enabled, cpumask);
+		boost_set_msrs(acpi_cpufreq_driver.boost_enabled, cpumask);
 		break;
 
 	case CPU_DOWN_PREPARE:
···
 	.resume		= acpi_cpufreq_resume,
 	.name		= "acpi-cpufreq",
 	.attr		= acpi_cpufreq_attr,
+	.set_boost	= _store_boost,
 };
 
 static void __init acpi_cpufreq_boost_init(void)
···
 		if (!msrs)
 			return;
 
-		boost_supported = true;
-		boost_enabled = boost_state(0);
-
+		acpi_cpufreq_driver.boost_supported = true;
+		acpi_cpufreq_driver.boost_enabled = boost_state(0);
 		get_online_cpus();
 
 		/* Force all MSRs to the same value */
-		boost_set_msrs(boost_enabled, cpu_online_mask);
+		boost_set_msrs(acpi_cpufreq_driver.boost_enabled,
+			       cpu_online_mask);
 
 		register_cpu_notifier(&boost_nb);
 
 		put_online_cpus();
-	} else
-		global_boost.attr.mode = 0444;
-
-	/* We create the boost file in any case, though for systems without
-	 * hardware support it will be read-only and hardwired to return 0.
-	 */
-	if (cpufreq_sysfs_create_file(&(global_boost.attr)))
-		pr_warn(PFX "could not register global boost sysfs file\n");
-	else
-		pr_debug("registered global boost sysfs file\n");
+	}
 }
 
 static void __exit acpi_cpufreq_boost_exit(void)
 {
-	cpufreq_sysfs_remove_file(&(global_boost.attr));
-
 	if (msrs) {
 		unregister_cpu_notifier(&boost_nb);
 
···
 		*iter = &cpb;
 	}
 #endif
+	acpi_cpufreq_boost_init();
 
 	ret = cpufreq_register_driver(&acpi_cpufreq_driver);
 	if (ret)
 		free_acpi_perf_data();
-	else
-		acpi_cpufreq_boost_init();
 
 	return ret;
 }
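The new store_boost() above accepts only the values 0 and 1 and forwards the parsed value to _store_boost(). A user-space sketch of that input validation (the function name is an assumption; the kernel uses kstrtoul(), which additionally rejects trailing garbage that plain strtoul() would ignore):

```c
#include <errno.h>
#include <stdlib.h>

/* Hypothetical mirror of the input check in store_boost(): parse a
 * decimal string and accept only the values 0 and 1, else -EINVAL. */
static int parse_boost_input(const char *buf, int *enable)
{
	char *end;
	unsigned long val = strtoul(buf, &end, 10);

	if (end == buf || val > 1)
		return -EINVAL;

	*enable = (int)val;
	return 0;
}
```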
+2 -1
drivers/cpufreq/arm_big_little.c
···
 static struct cpufreq_driver bL_cpufreq_driver = {
 	.name			= "arm-big-little",
 	.flags			= CPUFREQ_STICKY |
-					CPUFREQ_HAVE_GOVERNOR_PER_POLICY,
+					CPUFREQ_HAVE_GOVERNOR_PER_POLICY |
+					CPUFREQ_NEED_INITIAL_FREQ_CHECK,
 	.verify			= cpufreq_generic_frequency_table_verify,
 	.target_index		= bL_cpufreq_set_target,
 	.get			= bL_cpufreq_get_rate,
+5 -12
drivers/cpufreq/at32ap-cpufreq.c
···
 #include <linux/export.h>
 #include <linux/slab.h>
 
-static struct clk *cpuclk;
 static struct cpufreq_frequency_table *freq_table;
-
-static unsigned int at32_get_speed(unsigned int cpu)
-{
-	/* No SMP support */
-	if (cpu)
-		return 0;
-	return (unsigned int)((clk_get_rate(cpuclk) + 500) / 1000);
-}
 
 static unsigned int ref_freq;
 static unsigned long loops_per_jiffy_ref;
···
 {
 	unsigned int old_freq, new_freq;
 
-	old_freq = at32_get_speed(0);
+	old_freq = policy->cur;
 	new_freq = freq_table[index].frequency;
 
 	if (!ref_freq) {
···
 	if (old_freq < new_freq)
 		boot_cpu_data.loops_per_jiffy = cpufreq_scale(
 				loops_per_jiffy_ref, ref_freq, new_freq);
-	clk_set_rate(cpuclk, new_freq * 1000);
+	clk_set_rate(policy->clk, new_freq * 1000);
 	if (new_freq < old_freq)
 		boot_cpu_data.loops_per_jiffy = cpufreq_scale(
 				loops_per_jiffy_ref, ref_freq, new_freq);
···
 static int at32_cpufreq_driver_init(struct cpufreq_policy *policy)
 {
 	unsigned int frequency, rate, min_freq;
+	static struct clk *cpuclk;
 	int retval, steps, i;
 
 	if (policy->cpu != 0)
···
 		frequency /= 2;
 	}
 
+	policy->clk = cpuclk;
 	freq_table[steps - 1].frequency = CPUFREQ_TABLE_END;
 
 	retval = cpufreq_table_validate_and_show(policy, freq_table);
···
 	.init		= at32_cpufreq_driver_init,
 	.verify		= cpufreq_generic_frequency_table_verify,
 	.target_index	= at32_set_target,
-	.get		= at32_get_speed,
+	.get		= cpufreq_generic_get,
 	.flags		= CPUFREQ_STICKY,
 };
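The at32ap hunks above rescale loops_per_jiffy across a frequency change with cpufreq_scale(loops_per_jiffy_ref, ref_freq, new_freq), which is old * new_freq / ref_freq in integer arithmetic. A plain-C sketch of that calculation (the helper name is made up; the intermediate is widened to 64 bits here so the multiplication cannot overflow on 32-bit targets, which the kernel handles with arch-specific math):

```c
#include <stdint.h>

/* Sketch of the loops-per-jiffy rescaling done around clk_set_rate():
 * scale `old` by new_freq/ref_freq using a 64-bit intermediate. */
static unsigned long scale_loops_per_jiffy(unsigned long old,
					   unsigned int ref_freq,
					   unsigned int new_freq)
{
	return (unsigned long)(((uint64_t)old * new_freq) / ref_freq);
}
```

Doubling the clock doubles the calibrated busy-loop count, halving it halves the count, which is why the driver applies the scale before raising the clock and after lowering it.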
+3 -7
drivers/cpufreq/cpufreq-cpu0.c
···
 static struct regulator *cpu_reg;
 static struct cpufreq_frequency_table *freq_table;
 
-static unsigned int cpu0_get_speed(unsigned int cpu)
-{
-	return clk_get_rate(cpu_clk) / 1000;
-}
-
 static int cpu0_set_target(struct cpufreq_policy *policy, unsigned int index)
 {
 	struct dev_pm_opp *opp;
···
 	int ret;
 
 	freq_Hz = clk_round_rate(cpu_clk, freq_table[index].frequency * 1000);
-	if (freq_Hz < 0)
+	if (freq_Hz <= 0)
 		freq_Hz = freq_table[index].frequency * 1000;
 
 	freq_exact = freq_Hz;
···
 
 static int cpu0_cpufreq_init(struct cpufreq_policy *policy)
 {
+	policy->clk = cpu_clk;
 	return cpufreq_generic_init(policy, freq_table, transition_latency);
 }
 
···
 	.flags = CPUFREQ_STICKY,
 	.verify = cpufreq_generic_frequency_table_verify,
 	.target_index = cpu0_set_target,
-	.get = cpu0_get_speed,
+	.get = cpufreq_generic_get,
 	.init = cpu0_cpufreq_init,
 	.exit = cpufreq_generic_exit,
 	.name = "generic_cpu0",
+199 -19
drivers/cpufreq/cpufreq.c
···
 static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);
 static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data_fallback);
 static DEFINE_RWLOCK(cpufreq_driver_lock);
-static DEFINE_MUTEX(cpufreq_governor_lock);
+DEFINE_MUTEX(cpufreq_governor_lock);
 static LIST_HEAD(cpufreq_policy_list);
 
 #ifdef CONFIG_HOTPLUG_CPU
···
 	return 0;
 }
 EXPORT_SYMBOL_GPL(cpufreq_generic_init);
+
+unsigned int cpufreq_generic_get(unsigned int cpu)
+{
+	struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);
+
+	if (!policy || IS_ERR(policy->clk)) {
+		pr_err("%s: No %s associated to cpu: %d\n", __func__,
+		       policy ? "clk" : "policy", cpu);
+		return 0;
+	}
+
+	return clk_get_rate(policy->clk) / 1000;
+}
+EXPORT_SYMBOL_GPL(cpufreq_generic_get);
 
 struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu)
 {
···
 }
 EXPORT_SYMBOL_GPL(cpufreq_notify_transition);
 
+/* Do post notifications when there are chances that transition has failed */
+void cpufreq_notify_post_transition(struct cpufreq_policy *policy,
+		struct cpufreq_freqs *freqs, int transition_failed)
+{
+	cpufreq_notify_transition(policy, freqs, CPUFREQ_POSTCHANGE);
+	if (!transition_failed)
+		return;
+
+	swap(freqs->old, freqs->new);
+	cpufreq_notify_transition(policy, freqs, CPUFREQ_PRECHANGE);
+	cpufreq_notify_transition(policy, freqs, CPUFREQ_POSTCHANGE);
+}
+EXPORT_SYMBOL_GPL(cpufreq_notify_post_transition);
+
 
 /*********************************************************************
  *                          SYSFS INTERFACE                          *
  *********************************************************************/
+ssize_t show_boost(struct kobject *kobj,
+		   struct attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", cpufreq_driver->boost_enabled);
+}
+
+static ssize_t store_boost(struct kobject *kobj, struct attribute *attr,
+			   const char *buf, size_t count)
+{
+	int ret, enable;
+
+	ret = sscanf(buf, "%d", &enable);
+	if (ret != 1 || enable < 0 || enable > 1)
+		return -EINVAL;
+
+	if (cpufreq_boost_trigger_state(enable)) {
+		pr_err("%s: Cannot %s BOOST!\n", __func__,
+		       enable ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	pr_debug("%s: cpufreq BOOST %s\n", __func__,
+		 enable ? "enabled" : "disabled");
+
+	return count;
+}
+define_one_global_rw(boost);
 
 static struct cpufreq_governor *__find_governor(const char *str_governor)
 {
···
 	struct kobject *kobj;
 	struct completion *cmp;
 
+	blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
+			CPUFREQ_REMOVE_POLICY, policy);
+
 	down_read(&policy->rwsem);
 	kobj = &policy->kobj;
 	cmp = &policy->kobj_unregister;
···
 		goto err_set_policy_cpu;
 	}
 
+	write_lock_irqsave(&cpufreq_driver_lock, flags);
+	for_each_cpu(j, policy->cpus)
+		per_cpu(cpufreq_cpu_data, j) = policy;
+	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
+
 	if (cpufreq_driver->get) {
 		policy->cur = cpufreq_driver->get(policy->cpu);
 		if (!policy->cur) {
 			pr_err("%s: ->get() failed\n", __func__);
 			goto err_get_freq;
+		}
+	}
+
+	/*
+	 * Sometimes boot loaders set CPU frequency to a value outside of
+	 * frequency table present with cpufreq core. In such cases CPU might be
+	 * unstable if it has to run on that frequency for long duration of time
+	 * and so it's better to set it to a frequency which is specified in
+	 * freq-table. This also makes cpufreq stats inconsistent as
+	 * cpufreq-stats would fail to register because current frequency of CPU
+	 * isn't found in freq-table.
+	 *
+	 * Because we don't want this change to affect boot process badly, we go
+	 * for the next freq which is >= policy->cur ('cur' must be set by now,
+	 * otherwise we will end up setting freq to lowest of the table as 'cur'
+	 * is initialized to zero).
+	 *
+	 * We are passing target-freq as "policy->cur - 1" otherwise
+	 * __cpufreq_driver_target() would simply fail, as policy->cur will be
+	 * equal to target-freq.
+	 */
+	if ((cpufreq_driver->flags & CPUFREQ_NEED_INITIAL_FREQ_CHECK)
+	    && has_target()) {
+		/* Are we running at unknown frequency ? */
+		ret = cpufreq_frequency_table_get_index(policy, policy->cur);
+		if (ret == -EINVAL) {
+			/* Warn user and fix it */
+			pr_warn("%s: CPU%d: Running at unlisted freq: %u KHz\n",
+				__func__, policy->cpu, policy->cur);
+			ret = __cpufreq_driver_target(policy, policy->cur - 1,
+				CPUFREQ_RELATION_L);
+
+			/*
+			 * Reaching here after boot in a few seconds may not
+			 * mean that system will remain stable at "unknown"
+			 * frequency for longer duration. Hence, a BUG_ON().
+			 */
+			BUG_ON(ret);
+			pr_warn("%s: CPU%d: Unlisted initial frequency changed to: %u KHz\n",
+				__func__, policy->cpu, policy->cur);
 		}
 	}
 
···
 	}
 #endif
 
-	write_lock_irqsave(&cpufreq_driver_lock, flags);
-	for_each_cpu(j, policy->cpus)
-		per_cpu(cpufreq_cpu_data, j) = policy;
-	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
-
 	if (!frozen) {
 		ret = cpufreq_add_dev_interface(policy, dev);
 		if (ret)
 			goto err_out_unregister;
+		blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
+				CPUFREQ_CREATE_POLICY, policy);
 	}
 
 	write_lock_irqsave(&cpufreq_driver_lock, flags);
···
 	return 0;
 
 err_out_unregister:
+err_get_freq:
 	write_lock_irqsave(&cpufreq_driver_lock, flags);
 	for_each_cpu(j, policy->cpus)
 		per_cpu(cpufreq_cpu_data, j) = NULL;
 	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
-err_get_freq:
 	if (cpufreq_driver->exit)
 		cpufreq_driver->exit(policy);
 err_set_policy_cpu:
···
 			pr_err("%s: Failed to change cpu frequency: %d\n",
 			       __func__, retval);
 
-		if (notify) {
-			/*
-			 * Notify with old freq in case we failed to change
-			 * frequency
-			 */
-			if (retval)
-				freqs.new = freqs.old;
-
-			cpufreq_notify_transition(policy, &freqs,
-					CPUFREQ_POSTCHANGE);
-		}
+		if (notify)
+			cpufreq_notify_post_transition(policy, &freqs, retval);
 	}
 
 out:
···
 };
 
 /*********************************************************************
+ *                               BOOST                               *
+ *********************************************************************/
+static int cpufreq_boost_set_sw(int state)
+{
+	struct cpufreq_frequency_table *freq_table;
+	struct cpufreq_policy *policy;
+	int ret = -EINVAL;
+
+	list_for_each_entry(policy, &cpufreq_policy_list, policy_list) {
+		freq_table = cpufreq_frequency_get_table(policy->cpu);
+		if (freq_table) {
+			ret = cpufreq_frequency_table_cpuinfo(policy,
+							freq_table);
+			if (ret) {
+				pr_err("%s: Policy frequency update failed\n",
+				       __func__);
+				break;
+			}
+			policy->user_policy.max = policy->max;
+			__cpufreq_governor(policy, CPUFREQ_GOV_LIMITS);
+		}
+	}
+
+	return ret;
+}
+
+int cpufreq_boost_trigger_state(int state)
+{
+	unsigned long flags;
+	int ret = 0;
+
+	if (cpufreq_driver->boost_enabled == state)
+		return 0;
+
+	write_lock_irqsave(&cpufreq_driver_lock, flags);
+	cpufreq_driver->boost_enabled = state;
+	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
+
+	ret = cpufreq_driver->set_boost(state);
+	if (ret) {
+		write_lock_irqsave(&cpufreq_driver_lock, flags);
+		cpufreq_driver->boost_enabled = !state;
+		write_unlock_irqrestore(&cpufreq_driver_lock, flags);
+
+		pr_err("%s: Cannot %s BOOST\n", __func__,
+		       state ? "enable" : "disable");
+	}
+
+	return ret;
+}
+
+int cpufreq_boost_supported(void)
+{
+	if (likely(cpufreq_driver))
+		return cpufreq_driver->boost_supported;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(cpufreq_boost_supported);
+
+int cpufreq_boost_enabled(void)
+{
+	return cpufreq_driver->boost_enabled;
+}
+EXPORT_SYMBOL_GPL(cpufreq_boost_enabled);
+
+/*********************************************************************
  *               REGISTER / UNREGISTER CPUFREQ DRIVER                *
  *********************************************************************/
 
···
 	cpufreq_driver = driver_data;
 	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
+	if (cpufreq_boost_supported()) {
+		/*
+		 * Check if driver provides function to enable boost -
+		 * if not, use cpufreq_boost_set_sw as default
+		 */
+		if (!cpufreq_driver->set_boost)
+			cpufreq_driver->set_boost = cpufreq_boost_set_sw;
+
+		ret = cpufreq_sysfs_create_file(&boost.attr);
+		if (ret) {
+			pr_err("%s: cannot register global BOOST sysfs file\n",
+			       __func__);
+			goto err_null_driver;
+		}
+	}
+
 	ret = subsys_interface_register(&cpufreq_interface);
 	if (ret)
-		goto err_null_driver;
+		goto err_boost_unreg;
 
 	if (!(cpufreq_driver->flags & CPUFREQ_STICKY)) {
 		int i;
···
 	return 0;
 err_if_unreg:
 	subsys_interface_unregister(&cpufreq_interface);
+err_boost_unreg:
+	if (cpufreq_boost_supported())
+		cpufreq_sysfs_remove_file(&boost.attr);
 err_null_driver:
 	write_lock_irqsave(&cpufreq_driver_lock, flags);
 	cpufreq_driver = NULL;
···
 	pr_debug("unregistering driver %s\n", driver->name);
 
 	subsys_interface_unregister(&cpufreq_interface);
+	if (cpufreq_boost_supported())
+		cpufreq_sysfs_remove_file(&boost.attr);
+
 	unregister_hotcpu_notifier(&cpufreq_cpu_notifier);
 
 	down_write(&cpufreq_rwsem);
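cpufreq_notify_post_transition(), added in the hunks above, always sends POSTCHANGE; when the frequency change failed it swaps freqs->old/new and replays a PRECHANGE/POSTCHANGE pair, so notifier listeners observe a transition back to the original frequency. A stand-alone sketch of that event ordering (every type and name below is a stand-in for the kernel machinery; only the sequencing follows the diff):

```c
/* Stub notifier machinery: record each (phase, target-frequency) event. */
enum phase { PRECHANGE, POSTCHANGE };

struct freqs { unsigned int old, new_; };

static enum phase ev_phase[8];
static unsigned int ev_freq[8];
static int ev_count;

static void notify(const struct freqs *f, enum phase p)
{
	ev_phase[ev_count] = p;
	ev_freq[ev_count] = f->new_;
	ev_count++;
}

/* Mirrors cpufreq_notify_post_transition(): POSTCHANGE always; on
 * failure, swap old/new and announce the rollback as a fresh
 * PRECHANGE/POSTCHANGE pair. */
static void notify_post_transition(struct freqs *f, int transition_failed)
{
	notify(f, POSTCHANGE);
	if (!transition_failed)
		return;

	unsigned int tmp = f->old;
	f->old = f->new_;
	f->new_ = tmp;
	notify(f, PRECHANGE);
	notify(f, POSTCHANGE);
}
```

A failed 800 MHz -> 1.2 GHz change thus produces three events: POSTCHANGE to 1.2 GHz, then PRECHANGE and POSTCHANGE back to 800 MHz.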
+5 -1
drivers/cpufreq/cpufreq_governor.c
···
 {
 	int i;
 
+	mutex_lock(&cpufreq_governor_lock);
 	if (!policy->governor_enabled)
-		return;
+		goto out_unlock;
 
 	if (!all_cpus) {
 		/*
···
 		for_each_cpu(i, policy->cpus)
 			__gov_queue_work(i, dbs_data, delay);
 	}
+
+out_unlock:
+	mutex_unlock(&cpufreq_governor_lock);
 }
 EXPORT_SYMBOL_GPL(gov_queue_work);
 
+2
drivers/cpufreq/cpufreq_governor.h
···
 	return sprintf(buf, "%u\n", dbs_data->min_sampling_rate);	\
 }
 
+extern struct mutex cpufreq_governor_lock;
+
 void dbs_check_cpu(struct dbs_data *dbs_data, int cpu);
 bool need_load_eval(struct cpu_dbs_common_info *cdbs,
 		    unsigned int sampling_rate);
+48 -61
drivers/cpufreq/cpufreq_stats.c
··· 151 151 return -1; 152 152 } 153 153 154 - /* should be called late in the CPU removal sequence so that the stats 155 - * memory is still available in case someone tries to use it. 156 - */ 157 - static void cpufreq_stats_free_table(unsigned int cpu) 154 + static void __cpufreq_stats_free_table(struct cpufreq_policy *policy) 158 155 { 159 - struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table, cpu); 156 + struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table, policy->cpu); 160 157 161 - if (stat) { 162 - pr_debug("%s: Free stat table\n", __func__); 163 - kfree(stat->time_in_state); 164 - kfree(stat); 165 - per_cpu(cpufreq_stats_table, cpu) = NULL; 166 - } 158 + if (!stat) 159 + return; 160 + 161 + pr_debug("%s: Free stat table\n", __func__); 162 + 163 + sysfs_remove_group(&policy->kobj, &stats_attr_group); 164 + kfree(stat->time_in_state); 165 + kfree(stat); 166 + per_cpu(cpufreq_stats_table, policy->cpu) = NULL; 167 167 } 168 168 169 - /* must be called early in the CPU removal sequence (before 170 - * cpufreq_remove_dev) so that policy is still valid. 
171 - */ 172 - static void cpufreq_stats_free_sysfs(unsigned int cpu) 169 + static void cpufreq_stats_free_table(unsigned int cpu) 173 170 { 174 - struct cpufreq_policy *policy = cpufreq_cpu_get(cpu); 171 + struct cpufreq_policy *policy; 175 172 173 + policy = cpufreq_cpu_get(cpu); 176 174 if (!policy) 177 175 return; 178 176 179 - if (!cpufreq_frequency_get_table(cpu)) 180 - goto put_ref; 177 + if (cpufreq_frequency_get_table(policy->cpu)) 178 + __cpufreq_stats_free_table(policy); 181 179 182 - if (!policy_is_shared(policy)) { 183 - pr_debug("%s: Free sysfs stat\n", __func__); 184 - sysfs_remove_group(&policy->kobj, &stats_attr_group); 185 - } 186 - 187 - put_ref: 188 180 cpufreq_cpu_put(policy); 189 181 } 190 182 191 - static int cpufreq_stats_create_table(struct cpufreq_policy *policy, 183 + static int __cpufreq_stats_create_table(struct cpufreq_policy *policy, 192 184 struct cpufreq_frequency_table *table) 193 185 { 194 186 unsigned int i, j, count = 0, ret = 0; ··· 253 261 return ret; 254 262 } 255 263 264 + static void cpufreq_stats_create_table(unsigned int cpu) 265 + { 266 + struct cpufreq_policy *policy; 267 + struct cpufreq_frequency_table *table; 268 + 269 + /* 270 + * "likely(!policy)" because normally cpufreq_stats will be registered 271 + * before cpufreq driver 272 + */ 273 + policy = cpufreq_cpu_get(cpu); 274 + if (likely(!policy)) 275 + return; 276 + 277 + table = cpufreq_frequency_get_table(policy->cpu); 278 + if (likely(table)) 279 + __cpufreq_stats_create_table(policy, table); 280 + 281 + cpufreq_cpu_put(policy); 282 + } 283 + 256 284 static void cpufreq_stats_update_policy_cpu(struct cpufreq_policy *policy) 257 285 { 258 286 struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table, ··· 289 277 static int cpufreq_stat_notifier_policy(struct notifier_block *nb, 290 278 unsigned long val, void *data) 291 279 { 292 - int ret; 280 + int ret = 0; 293 281 struct cpufreq_policy *policy = data; 294 282 struct cpufreq_frequency_table *table; 295 283 
unsigned int cpu = policy->cpu; ··· 299 287 return 0; 300 288 } 301 289 302 - if (val != CPUFREQ_NOTIFY) 303 - return 0; 304 290 table = cpufreq_frequency_get_table(cpu); 305 291 if (!table) 306 292 return 0; 307 - ret = cpufreq_stats_create_table(policy, table); 308 - if (ret) 309 - return ret; 310 - return 0; 293 + 294 + if (val == CPUFREQ_CREATE_POLICY) 295 + ret = __cpufreq_stats_create_table(policy, table); 296 + else if (val == CPUFREQ_REMOVE_POLICY) 297 + __cpufreq_stats_free_table(policy); 298 + 299 + return ret; 311 300 } 312 301 313 302 static int cpufreq_stat_notifier_trans(struct notifier_block *nb, ··· 347 334 return 0; 348 335 } 349 336 350 - static int cpufreq_stat_cpu_callback(struct notifier_block *nfb, 351 - unsigned long action, 352 - void *hcpu) 353 - { 354 - unsigned int cpu = (unsigned long)hcpu; 355 - 356 - switch (action) { 357 - case CPU_DOWN_PREPARE: 358 - cpufreq_stats_free_sysfs(cpu); 359 - break; 360 - case CPU_DEAD: 361 - cpufreq_stats_free_table(cpu); 362 - break; 363 - } 364 - return NOTIFY_OK; 365 - } 366 - 367 - /* priority=1 so this will get called before cpufreq_remove_dev */ 368 - static struct notifier_block cpufreq_stat_cpu_notifier __refdata = { 369 - .notifier_call = cpufreq_stat_cpu_callback, 370 - .priority = 1, 371 - }; 372 - 373 337 static struct notifier_block notifier_policy_block = { 374 338 .notifier_call = cpufreq_stat_notifier_policy 375 339 }; ··· 366 376 if (ret) 367 377 return ret; 368 378 369 - register_hotcpu_notifier(&cpufreq_stat_cpu_notifier); 379 + for_each_online_cpu(cpu) 380 + cpufreq_stats_create_table(cpu); 370 381 371 382 ret = cpufreq_register_notifier(&notifier_trans_block, 372 383 CPUFREQ_TRANSITION_NOTIFIER); 373 384 if (ret) { 374 385 cpufreq_unregister_notifier(&notifier_policy_block, 375 386 CPUFREQ_POLICY_NOTIFIER); 376 - unregister_hotcpu_notifier(&cpufreq_stat_cpu_notifier); 377 387 for_each_online_cpu(cpu) 378 388 cpufreq_stats_free_table(cpu); 379 389 return ret; ··· 389 399 
CPUFREQ_POLICY_NOTIFIER); 390 400 cpufreq_unregister_notifier(&notifier_trans_block, 391 401 CPUFREQ_TRANSITION_NOTIFIER); 392 - unregister_hotcpu_notifier(&cpufreq_stat_cpu_notifier); 393 - for_each_online_cpu(cpu) { 402 + for_each_online_cpu(cpu) 394 403 cpufreq_stats_free_table(cpu); 395 - cpufreq_stats_free_sysfs(cpu); 396 - } 397 404 } 398 405 399 406 MODULE_AUTHOR("Zou Nan hai <nanhai.zou@intel.com>");
+5 -11
drivers/cpufreq/davinci-cpufreq.c
··· 58 58 return 0; 59 59 } 60 60 61 - static unsigned int davinci_getspeed(unsigned int cpu) 62 - { 63 - if (cpu) 64 - return 0; 65 - 66 - return clk_get_rate(cpufreq.armclk) / 1000; 67 - } 68 - 69 61 static int davinci_target(struct cpufreq_policy *policy, unsigned int idx) 70 62 { 71 63 struct davinci_cpufreq_config *pdata = cpufreq.dev->platform_data; ··· 65 73 unsigned int old_freq, new_freq; 66 74 int ret = 0; 67 75 68 - old_freq = davinci_getspeed(0); 76 + old_freq = policy->cur; 69 77 new_freq = pdata->freq_table[idx].frequency; 70 78 71 79 /* if moving to higher frequency, up the voltage beforehand */ ··· 108 116 return result; 109 117 } 110 118 119 + policy->clk = cpufreq.armclk; 120 + 111 121 /* 112 122 * Time measurement across the target() function yields ~1500-1800us 113 123 * time taken with no drivers on notification list. ··· 120 126 } 121 127 122 128 static struct cpufreq_driver davinci_driver = { 123 - .flags = CPUFREQ_STICKY, 129 + .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK, 124 130 .verify = davinci_verify_speed, 125 131 .target_index = davinci_target, 126 - .get = davinci_getspeed, 132 + .get = cpufreq_generic_get, 127 133 .init = davinci_cpu_init, 128 134 .exit = cpufreq_generic_exit, 129 135 .name = "davinci",
+4 -18
drivers/cpufreq/dbx500-cpufreq.c
··· 26 26 return clk_set_rate(armss_clk, freq_table[index].frequency * 1000); 27 27 } 28 28 29 - static unsigned int dbx500_cpufreq_getspeed(unsigned int cpu) 30 - { 31 - int i = 0; 32 - unsigned long freq = clk_get_rate(armss_clk) / 1000; 33 - 34 - /* The value is rounded to closest frequency in the defined table. */ 35 - while (freq_table[i + 1].frequency != CPUFREQ_TABLE_END) { 36 - if (freq < freq_table[i].frequency + 37 - (freq_table[i + 1].frequency - freq_table[i].frequency) / 2) 38 - return freq_table[i].frequency; 39 - i++; 40 - } 41 - 42 - return freq_table[i].frequency; 43 - } 44 - 45 29 static int dbx500_cpufreq_init(struct cpufreq_policy *policy) 46 30 { 31 + policy->clk = armss_clk; 47 32 return cpufreq_generic_init(policy, freq_table, 20 * 1000); 48 33 } 49 34 50 35 static struct cpufreq_driver dbx500_cpufreq_driver = { 51 - .flags = CPUFREQ_STICKY | CPUFREQ_CONST_LOOPS, 36 + .flags = CPUFREQ_STICKY | CPUFREQ_CONST_LOOPS | 37 + CPUFREQ_NEED_INITIAL_FREQ_CHECK, 52 38 .verify = cpufreq_generic_frequency_table_verify, 53 39 .target_index = dbx500_cpufreq_target, 54 - .get = dbx500_cpufreq_getspeed, 40 + .get = cpufreq_generic_get, 55 41 .init = dbx500_cpufreq_init, 56 42 .name = "DBX500", 57 43 .attr = cpufreq_generic_attr,
+18 -10
drivers/cpufreq/exynos-cpufreq.c
··· 17 17 #include <linux/regulator/consumer.h> 18 18 #include <linux/cpufreq.h> 19 19 #include <linux/suspend.h> 20 + #include <linux/platform_device.h> 20 21 21 22 #include <plat/cpu.h> 22 23 ··· 30 29 static unsigned int locking_frequency; 31 30 static bool frequency_locked; 32 31 static DEFINE_MUTEX(cpufreq_lock); 33 - 34 - static unsigned int exynos_getspeed(unsigned int cpu) 35 - { 36 - return clk_get_rate(exynos_info->cpu_clk) / 1000; 37 - } 38 32 39 33 static int exynos_cpufreq_get_index(unsigned int freq) 40 34 { ··· 210 214 211 215 static int exynos_cpufreq_cpu_init(struct cpufreq_policy *policy) 212 216 { 217 + policy->clk = exynos_info->cpu_clk; 213 218 return cpufreq_generic_init(policy, exynos_info->freq_table, 100000); 214 219 } 215 220 216 221 static struct cpufreq_driver exynos_driver = { 217 - .flags = CPUFREQ_STICKY, 222 + .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK, 218 223 .verify = cpufreq_generic_frequency_table_verify, 219 224 .target_index = exynos_target, 220 - .get = exynos_getspeed, 225 + .get = cpufreq_generic_get, 221 226 .init = exynos_cpufreq_cpu_init, 222 227 .exit = cpufreq_generic_exit, 223 228 .name = "exynos_cpufreq", 224 229 .attr = cpufreq_generic_attr, 230 + #ifdef CONFIG_ARM_EXYNOS_CPU_FREQ_BOOST_SW 231 + .boost_supported = true, 232 + #endif 225 233 #ifdef CONFIG_PM 226 234 .suspend = exynos_cpufreq_suspend, 227 235 .resume = exynos_cpufreq_resume, 228 236 #endif 229 237 }; 230 238 231 - static int __init exynos_cpufreq_init(void) 239 + static int exynos_cpufreq_probe(struct platform_device *pdev) 232 240 { 233 241 int ret = -EINVAL; 234 242 ··· 263 263 goto err_vdd_arm; 264 264 } 265 265 266 - locking_frequency = exynos_getspeed(0); 266 + locking_frequency = clk_get_rate(exynos_info->cpu_clk) / 1000; 267 267 268 268 register_pm_notifier(&exynos_cpufreq_nb); 269 269 ··· 281 281 kfree(exynos_info); 282 282 return -EINVAL; 283 283 } 284 - late_initcall(exynos_cpufreq_init); 284 + 285 + static struct 
platform_driver exynos_cpufreq_platdrv = { 286 + .driver = { 287 + .name = "exynos-cpufreq", 288 + .owner = THIS_MODULE, 289 + }, 290 + .probe = exynos_cpufreq_probe, 291 + }; 292 + module_platform_driver(exynos_cpufreq_platdrv);
+1 -1
drivers/cpufreq/exynos4x12-cpufreq.c
··· 30 30 }; 31 31 32 32 static struct cpufreq_frequency_table exynos4x12_freq_table[] = { 33 - {L0, CPUFREQ_ENTRY_INVALID}, 33 + {CPUFREQ_BOOST_FREQ, 1500 * 1000}, 34 34 {L1, 1400 * 1000}, 35 35 {L2, 1300 * 1000}, 36 36 {L3, 1200 * 1000},
+10 -64
drivers/cpufreq/exynos5250-cpufreq.c
··· 101 101 cpu_relax(); 102 102 } 103 103 104 - static void set_apll(unsigned int new_index, 105 - unsigned int old_index) 104 + static void set_apll(unsigned int index) 106 105 { 107 - unsigned int tmp, pdiv; 106 + unsigned int tmp; 107 + unsigned int freq = apll_freq_5250[index].freq; 108 108 109 - /* 1. MUX_CORE_SEL = MPLL, ARMCLK uses MPLL for lock time */ 109 + /* MUX_CORE_SEL = MPLL, ARMCLK uses MPLL for lock time */ 110 110 clk_set_parent(moutcore, mout_mpll); 111 111 112 112 do { ··· 115 115 tmp &= 0x7; 116 116 } while (tmp != 0x2); 117 117 118 - /* 2. Set APLL Lock time */ 119 - pdiv = ((apll_freq_5250[new_index].mps >> 8) & 0x3f); 118 + clk_set_rate(mout_apll, freq * 1000); 120 119 121 - __raw_writel((pdiv * 250), EXYNOS5_APLL_LOCK); 122 - 123 - /* 3. Change PLL PMS values */ 124 - tmp = __raw_readl(EXYNOS5_APLL_CON0); 125 - tmp &= ~((0x3ff << 16) | (0x3f << 8) | (0x7 << 0)); 126 - tmp |= apll_freq_5250[new_index].mps; 127 - __raw_writel(tmp, EXYNOS5_APLL_CON0); 128 - 129 - /* 4. wait_lock_time */ 130 - do { 131 - cpu_relax(); 132 - tmp = __raw_readl(EXYNOS5_APLL_CON0); 133 - } while (!(tmp & (0x1 << 29))); 134 - 135 - /* 5. MUX_CORE_SEL = APLL */ 120 + /* MUX_CORE_SEL = APLL */ 136 121 clk_set_parent(moutcore, mout_apll); 137 122 138 123 do { ··· 125 140 tmp = __raw_readl(EXYNOS5_CLKMUX_STATCPU); 126 141 tmp &= (0x7 << 16); 127 142 } while (tmp != (0x1 << 16)); 128 - 129 - } 130 - 131 - static bool exynos5250_pms_change(unsigned int old_index, unsigned int new_index) 132 - { 133 - unsigned int old_pm = apll_freq_5250[old_index].mps >> 8; 134 - unsigned int new_pm = apll_freq_5250[new_index].mps >> 8; 135 - 136 - return (old_pm == new_pm) ? 0 : 1; 137 143 } 138 144 139 145 static void exynos5250_set_frequency(unsigned int old_index, 140 146 unsigned int new_index) 141 147 { 142 - unsigned int tmp; 143 - 144 148 if (old_index > new_index) { 145 - if (!exynos5250_pms_change(old_index, new_index)) { 146 - /* 1. 
Change the system clock divider values */ 147 - set_clkdiv(new_index); 148 - /* 2. Change just s value in apll m,p,s value */ 149 - tmp = __raw_readl(EXYNOS5_APLL_CON0); 150 - tmp &= ~(0x7 << 0); 151 - tmp |= apll_freq_5250[new_index].mps & 0x7; 152 - __raw_writel(tmp, EXYNOS5_APLL_CON0); 153 - 154 - } else { 155 - /* Clock Configuration Procedure */ 156 - /* 1. Change the system clock divider values */ 157 - set_clkdiv(new_index); 158 - /* 2. Change the apll m,p,s value */ 159 - set_apll(new_index, old_index); 160 - } 149 + set_clkdiv(new_index); 150 + set_apll(new_index); 161 151 } else if (old_index < new_index) { 162 - if (!exynos5250_pms_change(old_index, new_index)) { 163 - /* 1. Change just s value in apll m,p,s value */ 164 - tmp = __raw_readl(EXYNOS5_APLL_CON0); 165 - tmp &= ~(0x7 << 0); 166 - tmp |= apll_freq_5250[new_index].mps & 0x7; 167 - __raw_writel(tmp, EXYNOS5_APLL_CON0); 168 - /* 2. Change the system clock divider values */ 169 - set_clkdiv(new_index); 170 - } else { 171 - /* Clock Configuration Procedure */ 172 - /* 1. Change the apll m,p,s value */ 173 - set_apll(new_index, old_index); 174 - /* 2. Change the system clock divider values */ 175 - set_clkdiv(new_index); 176 - } 152 + set_apll(new_index); 153 + set_clkdiv(new_index); 177 154 } 178 155 } 179 156 ··· 168 221 info->volt_table = exynos5250_volt_table; 169 222 info->freq_table = exynos5250_freq_table; 170 223 info->set_freq = exynos5250_set_frequency; 171 - info->need_apll_change = exynos5250_pms_change; 172 224 173 225 return 0; 174 226
+16 -20
drivers/cpufreq/exynos5440-cpufreq.c
··· 100 100 struct resource *mem; 101 101 int irq; 102 102 struct clk *cpu_clk; 103 - unsigned int cur_frequency; 104 103 unsigned int latency; 105 104 struct cpufreq_frequency_table *freq_table; 106 105 unsigned int freq_count; ··· 164 165 return 0; 165 166 } 166 167 167 - static void exynos_enable_dvfs(void) 168 + static void exynos_enable_dvfs(unsigned int cur_frequency) 168 169 { 169 170 unsigned int tmp, i, cpu; 170 171 struct cpufreq_frequency_table *freq_table = dvfs_info->freq_table; ··· 183 184 184 185 /* Set initial performance index */ 185 186 for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++) 186 - if (freq_table[i].frequency == dvfs_info->cur_frequency) 187 + if (freq_table[i].frequency == cur_frequency) 187 188 break; 188 189 189 190 if (freq_table[i].frequency == CPUFREQ_TABLE_END) { 190 191 dev_crit(dvfs_info->dev, "Boot up frequency not supported\n"); 191 192 /* Assign the highest frequency */ 192 193 i = 0; 193 - dvfs_info->cur_frequency = freq_table[i].frequency; 194 + cur_frequency = freq_table[i].frequency; 194 195 } 195 196 196 197 dev_info(dvfs_info->dev, "Setting dvfs initial frequency = %uKHZ", 197 - dvfs_info->cur_frequency); 198 + cur_frequency); 198 199 199 200 for (cpu = 0; cpu < CONFIG_NR_CPUS; cpu++) { 200 201 tmp = __raw_readl(dvfs_info->base + XMU_C0_3_PSTATE + cpu * 4); ··· 208 209 dvfs_info->base + XMU_DVFS_CTRL); 209 210 } 210 211 211 - static unsigned int exynos_getspeed(unsigned int cpu) 212 - { 213 - return dvfs_info->cur_frequency; 214 - } 215 - 216 212 static int exynos_target(struct cpufreq_policy *policy, unsigned int index) 217 213 { 218 214 unsigned int tmp; ··· 216 222 217 223 mutex_lock(&cpufreq_lock); 218 224 219 - freqs.old = dvfs_info->cur_frequency; 225 + freqs.old = policy->cur; 220 226 freqs.new = freq_table[index].frequency; 221 227 222 228 cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); ··· 244 250 goto skip_work; 245 251 246 252 mutex_lock(&cpufreq_lock); 247 - freqs.old = 
dvfs_info->cur_frequency; 253 + freqs.old = policy->cur; 248 254 249 255 cur_pstate = __raw_readl(dvfs_info->base + XMU_P_STATUS); 250 256 if (cur_pstate >> C0_3_PSTATE_VALID_SHIFT & 0x1) ··· 254 260 255 261 if (likely(index < dvfs_info->freq_count)) { 256 262 freqs.new = freq_table[index].frequency; 257 - dvfs_info->cur_frequency = freqs.new; 258 263 } else { 259 264 dev_crit(dvfs_info->dev, "New frequency out of range\n"); 260 - freqs.new = dvfs_info->cur_frequency; 265 + freqs.new = freqs.old; 261 266 } 262 267 cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); 263 268 ··· 300 307 301 308 static int exynos_cpufreq_cpu_init(struct cpufreq_policy *policy) 302 309 { 310 + policy->clk = dvfs_info->cpu_clk; 303 311 return cpufreq_generic_init(policy, dvfs_info->freq_table, 304 312 dvfs_info->latency); 305 313 } 306 314 307 315 static struct cpufreq_driver exynos_driver = { 308 - .flags = CPUFREQ_STICKY | CPUFREQ_ASYNC_NOTIFICATION, 316 + .flags = CPUFREQ_STICKY | CPUFREQ_ASYNC_NOTIFICATION | 317 + CPUFREQ_NEED_INITIAL_FREQ_CHECK, 309 318 .verify = cpufreq_generic_frequency_table_verify, 310 319 .target_index = exynos_target, 311 - .get = exynos_getspeed, 320 + .get = cpufreq_generic_get, 312 321 .init = exynos_cpufreq_cpu_init, 313 322 .exit = cpufreq_generic_exit, 314 323 .name = CPUFREQ_NAME, ··· 330 335 int ret = -EINVAL; 331 336 struct device_node *np; 332 337 struct resource res; 338 + unsigned int cur_frequency; 333 339 334 340 np = pdev->dev.of_node; 335 341 if (!np) ··· 387 391 goto err_free_table; 388 392 } 389 393 390 - dvfs_info->cur_frequency = clk_get_rate(dvfs_info->cpu_clk); 391 - if (!dvfs_info->cur_frequency) { 394 + cur_frequency = clk_get_rate(dvfs_info->cpu_clk); 395 + if (!cur_frequency) { 392 396 dev_err(dvfs_info->dev, "Failed to get clock rate\n"); 393 397 ret = -EINVAL; 394 398 goto err_free_table; 395 399 } 396 - dvfs_info->cur_frequency /= 1000; 400 + cur_frequency /= 1000; 397 401 398 402 INIT_WORK(&dvfs_info->irq_work, 
exynos_cpufreq_work); 399 403 ret = devm_request_irq(dvfs_info->dev, dvfs_info->irq, ··· 410 414 goto err_free_table; 411 415 } 412 416 413 - exynos_enable_dvfs(); 417 + exynos_enable_dvfs(cur_frequency); 414 418 ret = cpufreq_register_driver(&exynos_driver); 415 419 if (ret) { 416 420 dev_err(dvfs_info->dev,
+71 -7
drivers/cpufreq/freq_table.c
··· 32 32 33 33 continue; 34 34 } 35 + if (!cpufreq_boost_enabled() 36 + && table[i].driver_data == CPUFREQ_BOOST_FREQ) 37 + continue; 38 + 35 39 pr_debug("table entry %u: %u kHz, %u driver_data\n", 36 40 i, freq, table[i].driver_data); 37 41 if (freq < min_freq) ··· 182 178 } 183 179 EXPORT_SYMBOL_GPL(cpufreq_frequency_table_target); 184 180 181 + int cpufreq_frequency_table_get_index(struct cpufreq_policy *policy, 182 + unsigned int freq) 183 + { 184 + struct cpufreq_frequency_table *table; 185 + int i; 186 + 187 + table = cpufreq_frequency_get_table(policy->cpu); 188 + if (unlikely(!table)) { 189 + pr_debug("%s: Unable to find frequency table\n", __func__); 190 + return -ENOENT; 191 + } 192 + 193 + for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 194 + if (table[i].frequency == freq) 195 + return i; 196 + } 197 + 198 + return -EINVAL; 199 + } 200 + EXPORT_SYMBOL_GPL(cpufreq_frequency_table_get_index); 201 + 185 202 static DEFINE_PER_CPU(struct cpufreq_frequency_table *, cpufreq_show_table); 203 + 186 204 /** 187 205 * show_available_freqs - show available frequencies for the specified CPU 188 206 */ 189 - static ssize_t show_available_freqs(struct cpufreq_policy *policy, char *buf) 207 + static ssize_t show_available_freqs(struct cpufreq_policy *policy, char *buf, 208 + bool show_boost) 190 209 { 191 210 unsigned int i = 0; 192 211 unsigned int cpu = policy->cpu; ··· 224 197 for (i = 0; (table[i].frequency != CPUFREQ_TABLE_END); i++) { 225 198 if (table[i].frequency == CPUFREQ_ENTRY_INVALID) 226 199 continue; 200 + /* 201 + * show_boost = true and driver_data = BOOST freq 202 + * display BOOST freqs 203 + * 204 + * show_boost = false and driver_data = BOOST freq 205 + * show_boost = true and driver_data != BOOST freq 206 + * continue - do not display anything 207 + * 208 + * show_boost = false and driver_data != BOOST freq 209 + * display NON BOOST freqs 210 + */ 211 + if (show_boost ^ (table[i].driver_data == CPUFREQ_BOOST_FREQ)) 212 + continue; 213 
+ 227 214 count += sprintf(&buf[count], "%d ", table[i].frequency); 228 215 } 229 216 count += sprintf(&buf[count], "\n"); ··· 246 205 247 206 } 248 207 249 - struct freq_attr cpufreq_freq_attr_scaling_available_freqs = { 250 - .attr = { .name = "scaling_available_frequencies", 251 - .mode = 0444, 252 - }, 253 - .show = show_available_freqs, 254 - }; 208 + #define cpufreq_attr_available_freq(_name) \ 209 + struct freq_attr cpufreq_freq_attr_##_name##_freqs = \ 210 + __ATTR_RO(_name##_frequencies) 211 + 212 + /** 213 + * show_scaling_available_frequencies - show available normal frequencies for 214 + * the specified CPU 215 + */ 216 + static ssize_t scaling_available_frequencies_show(struct cpufreq_policy *policy, 217 + char *buf) 218 + { 219 + return show_available_freqs(policy, buf, false); 220 + } 221 + cpufreq_attr_available_freq(scaling_available); 255 222 EXPORT_SYMBOL_GPL(cpufreq_freq_attr_scaling_available_freqs); 223 + 224 + /** 225 + * show_available_boost_freqs - show available boost frequencies for 226 + * the specified CPU 227 + */ 228 + static ssize_t scaling_boost_frequencies_show(struct cpufreq_policy *policy, 229 + char *buf) 230 + { 231 + return show_available_freqs(policy, buf, true); 232 + } 233 + cpufreq_attr_available_freq(scaling_boost); 234 + EXPORT_SYMBOL_GPL(cpufreq_freq_attr_scaling_boost_freqs); 256 235 257 236 struct freq_attr *cpufreq_generic_attr[] = { 258 237 &cpufreq_freq_attr_scaling_available_freqs, 238 + #ifdef CONFIG_CPU_FREQ_BOOST_SW 239 + &cpufreq_freq_attr_scaling_boost_freqs, 240 + #endif 259 241 NULL, 260 242 }; 261 243 EXPORT_SYMBOL_GPL(cpufreq_generic_attr);
+96 -38
drivers/cpufreq/imx6q-cpufreq.c
··· 35 35 static struct cpufreq_frequency_table *freq_table; 36 36 static unsigned int transition_latency; 37 37 38 - static unsigned int imx6q_get_speed(unsigned int cpu) 39 - { 40 - return clk_get_rate(arm_clk) / 1000; 41 - } 38 + static u32 *imx6_soc_volt; 39 + static u32 soc_opp_count; 42 40 43 41 static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index) 44 42 { ··· 67 69 68 70 /* scaling up? scale voltage before frequency */ 69 71 if (new_freq > old_freq) { 72 + ret = regulator_set_voltage_tol(pu_reg, imx6_soc_volt[index], 0); 73 + if (ret) { 74 + dev_err(cpu_dev, "failed to scale vddpu up: %d\n", ret); 75 + return ret; 76 + } 77 + ret = regulator_set_voltage_tol(soc_reg, imx6_soc_volt[index], 0); 78 + if (ret) { 79 + dev_err(cpu_dev, "failed to scale vddsoc up: %d\n", ret); 80 + return ret; 81 + } 70 82 ret = regulator_set_voltage_tol(arm_reg, volt, 0); 71 83 if (ret) { 72 84 dev_err(cpu_dev, 73 85 "failed to scale vddarm up: %d\n", ret); 74 86 return ret; 75 - } 76 - 77 - /* 78 - * Need to increase vddpu and vddsoc for safety 79 - * if we are about to run at 1.2 GHz. 
80 - */ 81 - if (new_freq == FREQ_1P2_GHZ / 1000) { 82 - regulator_set_voltage_tol(pu_reg, 83 - PU_SOC_VOLTAGE_HIGH, 0); 84 - regulator_set_voltage_tol(soc_reg, 85 - PU_SOC_VOLTAGE_HIGH, 0); 86 87 } 87 88 } 88 89 ··· 117 120 "failed to scale vddarm down: %d\n", ret); 118 121 ret = 0; 119 122 } 120 - 121 - if (old_freq == FREQ_1P2_GHZ / 1000) { 122 - regulator_set_voltage_tol(pu_reg, 123 - PU_SOC_VOLTAGE_NORMAL, 0); 124 - regulator_set_voltage_tol(soc_reg, 125 - PU_SOC_VOLTAGE_NORMAL, 0); 123 + ret = regulator_set_voltage_tol(soc_reg, imx6_soc_volt[index], 0); 124 + if (ret) { 125 + dev_warn(cpu_dev, "failed to scale vddsoc down: %d\n", ret); 126 + ret = 0; 127 + } 128 + ret = regulator_set_voltage_tol(pu_reg, imx6_soc_volt[index], 0); 129 + if (ret) { 130 + dev_warn(cpu_dev, "failed to scale vddpu down: %d\n", ret); 131 + ret = 0; 126 132 } 127 133 } 128 134 ··· 134 134 135 135 static int imx6q_cpufreq_init(struct cpufreq_policy *policy) 136 136 { 137 + policy->clk = arm_clk; 137 138 return cpufreq_generic_init(policy, freq_table, transition_latency); 138 139 } 139 140 140 141 static struct cpufreq_driver imx6q_cpufreq_driver = { 142 + .flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK, 141 143 .verify = cpufreq_generic_frequency_table_verify, 142 144 .target_index = imx6q_set_target, 143 - .get = imx6q_get_speed, 145 + .get = cpufreq_generic_get, 144 146 .init = imx6q_cpufreq_init, 145 147 .exit = cpufreq_generic_exit, 146 148 .name = "imx6q-cpufreq", ··· 155 153 struct dev_pm_opp *opp; 156 154 unsigned long min_volt, max_volt; 157 155 int num, ret; 156 + const struct property *prop; 157 + const __be32 *val; 158 + u32 nr, i, j; 158 159 159 160 cpu_dev = get_cpu_device(0); 160 161 if (!cpu_dev) { ··· 192 187 goto put_node; 193 188 } 194 189 195 - /* We expect an OPP table supplied by platform */ 190 + /* 191 + * We expect an OPP table supplied by platform. 192 + * Just, incase the platform did not supply the OPP 193 + * table, it will try to get it. 
194 + */ 196 195 num = dev_pm_opp_get_opp_count(cpu_dev); 197 196 if (num < 0) { 198 - ret = num; 199 - dev_err(cpu_dev, "no OPP table is found: %d\n", ret); 200 - goto put_node; 197 + ret = of_init_opp_table(cpu_dev); 198 + if (ret < 0) { 199 + dev_err(cpu_dev, "failed to init OPP table: %d\n", ret); 200 + goto put_node; 201 + } 202 + 203 + num = dev_pm_opp_get_opp_count(cpu_dev); 204 + if (num < 0) { 205 + ret = num; 206 + dev_err(cpu_dev, "no OPP table is found: %d\n", ret); 207 + goto put_node; 208 + } 201 209 } 202 210 203 211 ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table); ··· 219 201 goto put_node; 220 202 } 221 203 204 + /* Make imx6_soc_volt array's size same as arm opp number */ 205 + imx6_soc_volt = devm_kzalloc(cpu_dev, sizeof(*imx6_soc_volt) * num, GFP_KERNEL); 206 + if (imx6_soc_volt == NULL) { 207 + ret = -ENOMEM; 208 + goto free_freq_table; 209 + } 210 + 211 + prop = of_find_property(np, "fsl,soc-operating-points", NULL); 212 + if (!prop || !prop->value) 213 + goto soc_opp_out; 214 + 215 + /* 216 + * Each OPP is a set of tuples consisting of frequency and 217 + * voltage like <freq-kHz vol-uV>. 
218 + */ 219 + nr = prop->length / sizeof(u32); 220 + if (nr % 2 || (nr / 2) < num) 221 + goto soc_opp_out; 222 + 223 + for (j = 0; j < num; j++) { 224 + val = prop->value; 225 + for (i = 0; i < nr / 2; i++) { 226 + unsigned long freq = be32_to_cpup(val++); 227 + unsigned long volt = be32_to_cpup(val++); 228 + if (freq_table[j].frequency == freq) { 229 + imx6_soc_volt[soc_opp_count++] = volt; 230 + break; 231 + } 232 + } 233 + } 234 + 235 + soc_opp_out: 236 + /* use fixed soc opp volt if no valid soc opp info found in dtb */ 237 + if (soc_opp_count != num) { 238 + dev_warn(cpu_dev, "can NOT find valid fsl,soc-operating-points property in dtb, use default value!\n"); 239 + for (j = 0; j < num; j++) 240 + imx6_soc_volt[j] = PU_SOC_VOLTAGE_NORMAL; 241 + if (freq_table[num - 1].frequency * 1000 == FREQ_1P2_GHZ) 242 + imx6_soc_volt[num - 1] = PU_SOC_VOLTAGE_HIGH; 243 + } 244 + 222 245 if (of_property_read_u32(np, "clock-latency", &transition_latency)) 223 246 transition_latency = CPUFREQ_ETERNAL; 247 + 248 + /* 249 + * Calculate the ramp time for max voltage change in the 250 + * VDDSOC and VDDPU regulators. 
251 + */ 252 + ret = regulator_set_voltage_time(soc_reg, imx6_soc_volt[0], imx6_soc_volt[num - 1]); 253 + if (ret > 0) 254 + transition_latency += ret * 1000; 255 + ret = regulator_set_voltage_time(pu_reg, imx6_soc_volt[0], imx6_soc_volt[num - 1]); 256 + if (ret > 0) 257 + transition_latency += ret * 1000; 224 258 225 259 /* 226 260 * OPP is maintained in order of increasing frequency, and ··· 290 220 ret = regulator_set_voltage_time(arm_reg, min_volt, max_volt); 291 221 if (ret > 0) 292 222 transition_latency += ret * 1000; 293 - 294 - /* Count vddpu and vddsoc latency in for 1.2 GHz support */ 295 - if (freq_table[num].frequency == FREQ_1P2_GHZ / 1000) { 296 - ret = regulator_set_voltage_time(pu_reg, PU_SOC_VOLTAGE_NORMAL, 297 - PU_SOC_VOLTAGE_HIGH); 298 - if (ret > 0) 299 - transition_latency += ret * 1000; 300 - ret = regulator_set_voltage_time(soc_reg, PU_SOC_VOLTAGE_NORMAL, 301 - PU_SOC_VOLTAGE_HIGH); 302 - if (ret > 0) 303 - transition_latency += ret * 1000; 304 - } 305 223 306 224 ret = cpufreq_register_driver(&imx6q_cpufreq_driver); 307 225 if (ret) {
+1
drivers/cpufreq/integrator-cpufreq.c
··· 190 190 } 191 191 192 192 static struct cpufreq_driver integrator_driver = { 193 + .flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK, 193 194 .verify = integrator_verify_policy, 194 195 .target = integrator_set_target, 195 196 .get = integrator_get,
+76 -13
drivers/cpufreq/intel_pstate.c
··· 35 35 #define SAMPLE_COUNT 3 36 36 37 37 #define BYT_RATIOS 0x66a 38 + #define BYT_VIDS 0x66b 38 39 39 40 #define FRAC_BITS 8 40 41 #define int_tofp(X) ((int64_t)(X) << FRAC_BITS) ··· 51 50 return div_s64((int64_t)x << FRAC_BITS, (int64_t)y); 52 51 } 53 52 53 + static u64 energy_divisor; 54 + 54 55 struct sample { 55 56 int32_t core_pct_busy; 56 57 u64 aperf; ··· 65 62 int min_pstate; 66 63 int max_pstate; 67 64 int turbo_pstate; 65 + }; 66 + 67 + struct vid_data { 68 + int32_t min; 69 + int32_t max; 70 + int32_t ratio; 68 71 }; 69 72 70 73 struct _pid { ··· 91 82 struct timer_list timer; 92 83 93 84 struct pstate_data pstate; 85 + struct vid_data vid; 94 86 struct _pid pid; 95 - 96 - int min_pstate_count; 97 87 98 88 u64 prev_aperf; 99 89 u64 prev_mperf; ··· 114 106 int (*get_max)(void); 115 107 int (*get_min)(void); 116 108 int (*get_turbo)(void); 117 - void (*set)(int pstate); 109 + void (*set)(struct cpudata*, int pstate); 110 + void (*get_vid)(struct cpudata *); 118 111 }; 119 112 120 113 struct cpu_defaults { ··· 367 358 return (value >> 16) & 0xFF; 368 359 } 369 360 361 + static void byt_set_pstate(struct cpudata *cpudata, int pstate) 362 + { 363 + u64 val; 364 + int32_t vid_fp; 365 + u32 vid; 366 + 367 + val = pstate << 8; 368 + if (limits.no_turbo) 369 + val |= (u64)1 << 32; 370 + 371 + vid_fp = cpudata->vid.min + mul_fp( 372 + int_tofp(pstate - cpudata->pstate.min_pstate), 373 + cpudata->vid.ratio); 374 + 375 + vid_fp = clamp_t(int32_t, vid_fp, cpudata->vid.min, cpudata->vid.max); 376 + vid = fp_toint(vid_fp); 377 + 378 + val |= vid; 379 + 380 + wrmsrl(MSR_IA32_PERF_CTL, val); 381 + } 382 + 383 + static void byt_get_vid(struct cpudata *cpudata) 384 + { 385 + u64 value; 386 + 387 + rdmsrl(BYT_VIDS, value); 388 + cpudata->vid.min = int_tofp((value >> 8) & 0x7f); 389 + cpudata->vid.max = int_tofp((value >> 16) & 0x7f); 390 + cpudata->vid.ratio = div_fp( 391 + cpudata->vid.max - cpudata->vid.min, 392 + int_tofp(cpudata->pstate.max_pstate - 393 + 
cpudata->pstate.min_pstate)); 394 + } 395 + 396 + 370 397 static int core_get_min_pstate(void) 371 398 { 372 399 u64 value; ··· 429 384 return ret; 430 385 } 431 386 432 - static void core_set_pstate(int pstate) 387 + static void core_set_pstate(struct cpudata *cpudata, int pstate) 433 388 { 434 389 u64 val; 435 390 ··· 470 425 .get_max = byt_get_max_pstate, 471 426 .get_min = byt_get_min_pstate, 472 427 .get_turbo = byt_get_max_pstate, 473 - .set = core_set_pstate, 428 + .set = byt_set_pstate, 429 + .get_vid = byt_get_vid, 474 430 }, 475 431 }; 476 432 ··· 508 462 509 463 cpu->pstate.current_pstate = pstate; 510 464 511 - pstate_funcs.set(pstate); 465 + pstate_funcs.set(cpu, pstate); 512 466 } 513 467 514 468 static inline void intel_pstate_pstate_increase(struct cpudata *cpu, int steps) ··· 534 488 cpu->pstate.max_pstate = pstate_funcs.get_max(); 535 489 cpu->pstate.turbo_pstate = pstate_funcs.get_turbo(); 536 490 491 + if (pstate_funcs.get_vid) 492 + pstate_funcs.get_vid(cpu); 493 + 537 494 /* 538 495 * goto max pstate so we don't slow up boot if we are built-in if we are 539 496 * a module we will take care of it during normal operation ··· 561 512 562 513 rdmsrl(MSR_IA32_APERF, aperf); 563 514 rdmsrl(MSR_IA32_MPERF, mperf); 515 + 564 516 cpu->sample_ptr = (cpu->sample_ptr + 1) % SAMPLE_COUNT; 565 517 cpu->samples[cpu->sample_ptr].aperf = aperf; 566 518 cpu->samples[cpu->sample_ptr].mperf = mperf; ··· 606 556 ctl = pid_calc(pid, busy_scaled); 607 557 608 558 steps = abs(ctl); 559 + 609 560 if (ctl < 0) 610 561 intel_pstate_pstate_increase(cpu, steps); 611 562 else ··· 616 565 static void intel_pstate_timer_func(unsigned long __data) 617 566 { 618 567 struct cpudata *cpu = (struct cpudata *) __data; 568 + struct sample *sample; 569 + u64 energy; 619 570 620 571 intel_pstate_sample(cpu); 572 + 573 + sample = &cpu->samples[cpu->sample_ptr]; 574 + rdmsrl(MSR_PKG_ENERGY_STATUS, energy); 575 + 621 576 intel_pstate_adjust_busy_pstate(cpu); 622 577 623 - if 
(cpu->pstate.current_pstate == cpu->pstate.min_pstate) { 624 - cpu->min_pstate_count++; 625 - if (!(cpu->min_pstate_count % 5)) { 626 - intel_pstate_set_pstate(cpu, cpu->pstate.max_pstate); 627 - } 628 - } else 629 - cpu->min_pstate_count = 0; 578 + trace_pstate_sample(fp_toint(sample->core_pct_busy), 579 + fp_toint(intel_pstate_get_scaled_busy(cpu)), 580 + cpu->pstate.current_pstate, 581 + sample->mperf, 582 + sample->aperf, 583 + div64_u64(energy, energy_divisor), 584 + sample->freq); 630 585 631 586 intel_pstate_set_sample_time(cpu); 632 587 } ··· 839 782 pstate_funcs.get_min = funcs->get_min; 840 783 pstate_funcs.get_turbo = funcs->get_turbo; 841 784 pstate_funcs.set = funcs->set; 785 + pstate_funcs.get_vid = funcs->get_vid; 842 786 } 843 787 844 788 #if IS_ENABLED(CONFIG_ACPI) ··· 913 855 int cpu, rc = 0; 914 856 const struct x86_cpu_id *id; 915 857 struct cpu_defaults *cpu_info; 858 + u64 units; 916 859 917 860 if (no_load) 918 861 return -ENODEV; ··· 947 888 if (rc) 948 889 goto out; 949 890 891 + rdmsrl(MSR_RAPL_POWER_UNIT, units); 892 + energy_divisor = 1 << ((units >> 8) & 0x1f); /* bits{12:8} */ 893 + 950 894 intel_pstate_debug_expose_params(); 951 895 intel_pstate_sysfs_expose_params(); 896 + 952 897 return rc; 953 898 out: 954 899 get_online_cpus();
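The new Baytrail support above computes a voltage ID by linear interpolation between the min and max P-state in 8-bit fixed point (`FRAC_BITS 8`). A minimal userspace sketch of that arithmetic, mirroring the driver's `int_tofp`/`mul_fp`/`div_fp` helpers — the sample values in the test are hypothetical, not real Baytrail MSR contents:

```c
#include <stdint.h>
#include <assert.h>

#define FRAC_BITS 8
#define int_tofp(X) ((int64_t)(X) << FRAC_BITS)
#define fp_toint(X) ((X) >> FRAC_BITS)

static inline int32_t mul_fp(int32_t x, int32_t y)
{
	return (int32_t)(((int64_t)x * (int64_t)y) >> FRAC_BITS);
}

static inline int32_t div_fp(int32_t x, int32_t y)
{
	return (int32_t)(((int64_t)x << FRAC_BITS) / y);
}

static int32_t clamp32(int32_t v, int32_t lo, int32_t hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/* Linear interpolation of the VID between min and max P-state, the way
 * byt_set_pstate() uses the ratio that byt_get_vid() caches. */
int byt_vid_for_pstate(int pstate, int min_pstate, int max_pstate,
		       int min_vid, int max_vid)
{
	int32_t vmin = (int32_t)int_tofp(min_vid);
	int32_t vmax = (int32_t)int_tofp(max_vid);
	int32_t ratio = div_fp(vmax - vmin,
			       (int32_t)int_tofp(max_pstate - min_pstate));
	int32_t vid_fp = vmin + mul_fp((int32_t)int_tofp(pstate - min_pstate),
				       ratio);

	/* Out-of-range requests are clamped, as in the driver. */
	return (int)fp_toint(clamp32(vid_fp, vmin, vmax));
}
```

The clamp matters: a turbo P-state above `max_pstate` must not extrapolate the voltage past `vid.max`.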
+1
drivers/cpufreq/kirkwood-cpufreq.c
··· 97 97 } 98 98 99 99 static struct cpufreq_driver kirkwood_cpufreq_driver = { 100 + .flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK, 100 101 .get = kirkwood_cpufreq_get_cpu_frequency, 101 102 .verify = cpufreq_generic_frequency_table_verify, 102 103 .target_index = kirkwood_cpufreq_target,
+5 -10
drivers/cpufreq/loongson2_cpufreq.c
···
 
 static uint nowait;
 
-static struct clk *cpuclk;
-
 static void (*saved_cpu_wait) (void);
 
 static int loongson2_cpu_freq_notifier(struct notifier_block *nb,
···
 	current_cpu_data.udelay_val = loops_per_jiffy;
 
 	return 0;
-}
-
-static unsigned int loongson2_cpufreq_get(unsigned int cpu)
-{
-	return clk_get_rate(cpuclk);
 }
 
 /*
···
 	set_cpus_allowed_ptr(current, &cpus_allowed);
 
 	/* setting the cpu frequency */
-	clk_set_rate(cpuclk, freq);
+	clk_set_rate(policy->clk, freq);
 
 	return 0;
 }
 
 static int loongson2_cpufreq_cpu_init(struct cpufreq_policy *policy)
 {
+	static struct clk *cpuclk;
 	int i;
 	unsigned long rate;
 	int ret;
···
 		return ret;
 	}
 
+	policy->clk = cpuclk;
 	return cpufreq_generic_init(policy, &loongson2_clockmod_table[0], 0);
 }
 
 static int loongson2_cpufreq_exit(struct cpufreq_policy *policy)
 {
 	cpufreq_frequency_table_put_attr(policy->cpu);
-	clk_put(cpuclk);
+	clk_put(policy->clk);
 	return 0;
 }
···
 	.init		= loongson2_cpufreq_cpu_init,
 	.verify		= cpufreq_generic_frequency_table_verify,
 	.target_index	= loongson2_cpufreq_target,
-	.get		= loongson2_cpufreq_get,
+	.get		= cpufreq_generic_get,
 	.exit		= loongson2_cpufreq_exit,
 	.attr		= cpufreq_generic_attr,
 };
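Many drivers in this pull (loongson2 here, and omap, ppc-corenet, s3c, s5pv210, spear, tegra, unicore2 below) stop caching their CPU clock in a file-local variable and instead store it in the policy, so one shared generic getter can report the frequency. A toy sketch of the idea — the struct and helper names only mimic the kernel's, this is not the kernel API:

```c
#include <stddef.h>
#include <assert.h>

/* Stand-ins for the kernel's clk and cpufreq_policy objects. */
struct clk { unsigned long rate_hz; };
struct policy { struct clk *clk; };

static unsigned long clk_get_rate(struct clk *clk)
{
	return clk->rate_hz;
}

/* One getter shared by every driver that fills in policy->clk,
 * replacing a per-driver copy of clk_get_rate(...) / 1000. */
unsigned int generic_get_khz(struct policy *policy)
{
	if (!policy->clk)
		return 0;
	return (unsigned int)(clk_get_rate(policy->clk) / 1000);
}
```

The design win is that each driver's private `get` callback (and its static clock pointer) disappears; the driver only has to set `policy->clk` in its init path.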
+11 -23
drivers/cpufreq/omap-cpufreq.c
··· 36 36 37 37 static struct cpufreq_frequency_table *freq_table; 38 38 static atomic_t freq_table_users = ATOMIC_INIT(0); 39 - static struct clk *mpu_clk; 40 39 static struct device *mpu_dev; 41 40 static struct regulator *mpu_reg; 42 - 43 - static unsigned int omap_getspeed(unsigned int cpu) 44 - { 45 - unsigned long rate; 46 - 47 - if (cpu >= NR_CPUS) 48 - return 0; 49 - 50 - rate = clk_get_rate(mpu_clk) / 1000; 51 - return rate; 52 - } 53 41 54 42 static int omap_target(struct cpufreq_policy *policy, unsigned int index) 55 43 { ··· 46 58 unsigned long freq, volt = 0, volt_old = 0, tol = 0; 47 59 unsigned int old_freq, new_freq; 48 60 49 - old_freq = omap_getspeed(policy->cpu); 61 + old_freq = policy->cur; 50 62 new_freq = freq_table[index].frequency; 51 63 52 64 freq = new_freq * 1000; 53 - ret = clk_round_rate(mpu_clk, freq); 65 + ret = clk_round_rate(policy->clk, freq); 54 66 if (IS_ERR_VALUE(ret)) { 55 67 dev_warn(mpu_dev, 56 68 "CPUfreq: Cannot find matching frequency for %lu\n", ··· 88 100 } 89 101 } 90 102 91 - ret = clk_set_rate(mpu_clk, new_freq * 1000); 103 + ret = clk_set_rate(policy->clk, new_freq * 1000); 92 104 93 105 /* scaling down? 
scale voltage after frequency */ 94 106 if (mpu_reg && (new_freq < old_freq)) { ··· 96 108 if (r < 0) { 97 109 dev_warn(mpu_dev, "%s: unable to scale voltage down.\n", 98 110 __func__); 99 - clk_set_rate(mpu_clk, old_freq * 1000); 111 + clk_set_rate(policy->clk, old_freq * 1000); 100 112 return r; 101 113 } 102 114 } ··· 114 126 { 115 127 int result; 116 128 117 - mpu_clk = clk_get(NULL, "cpufreq_ck"); 118 - if (IS_ERR(mpu_clk)) 119 - return PTR_ERR(mpu_clk); 129 + policy->clk = clk_get(NULL, "cpufreq_ck"); 130 + if (IS_ERR(policy->clk)) 131 + return PTR_ERR(policy->clk); 120 132 121 133 if (!freq_table) { 122 134 result = dev_pm_opp_init_cpufreq_table(mpu_dev, &freq_table); ··· 137 149 138 150 freq_table_free(); 139 151 fail: 140 - clk_put(mpu_clk); 152 + clk_put(policy->clk); 141 153 return result; 142 154 } 143 155 ··· 145 157 { 146 158 cpufreq_frequency_table_put_attr(policy->cpu); 147 159 freq_table_free(); 148 - clk_put(mpu_clk); 160 + clk_put(policy->clk); 149 161 return 0; 150 162 } 151 163 152 164 static struct cpufreq_driver omap_driver = { 153 - .flags = CPUFREQ_STICKY, 165 + .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK, 154 166 .verify = cpufreq_generic_frequency_table_verify, 155 167 .target_index = omap_target, 156 - .get = omap_getspeed, 168 + .get = cpufreq_generic_get, 157 169 .init = omap_cpu_init, 158 170 .exit = omap_cpu_exit, 159 171 .name = "omap",
+7 -11
drivers/cpufreq/pcc-cpufreq.c
···
 			cpu, target_freq,
 			(pcch_virt_addr + pcc_cpu_data->input_offset));
 
+	freqs.old = policy->cur;
 	freqs.new = target_freq;
 	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
 
···
 	memset_io((pcch_virt_addr + pcc_cpu_data->input_offset), 0, BUF_SZ);
 
 	status = ioread16(&pcch_hdr->status);
+	iowrite16(0, &pcch_hdr->status);
+
+	cpufreq_notify_post_transition(policy, &freqs, status != CMD_COMPLETE);
+	spin_unlock(&pcc_lock);
+
 	if (status != CMD_COMPLETE) {
 		pr_debug("target: FAILED for cpu %d, with status: 0x%x\n",
 			 cpu, status);
-		goto cmd_incomplete;
+		return -EINVAL;
 	}
-	iowrite16(0, &pcch_hdr->status);
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
 	pr_debug("target: was SUCCESSFUL for cpu %d\n", cpu);
-	spin_unlock(&pcc_lock);
 
 	return 0;
-
-cmd_incomplete:
-	freqs.new = freqs.old;
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
-	iowrite16(0, &pcch_hdr->status);
-	spin_unlock(&pcc_lock);
-	return -EINVAL;
 }
 
 static int pcc_get_offset(int cpu)
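pcc-cpufreq (and powernow-k8 below) switch to `cpufreq_notify_post_transition()`, which folds the "roll `freqs.new` back to `freqs.old` on failure, then send POSTCHANGE" dance into one call. A hedged sketch of the semantics that call encapsulates — this models the behavior, it is not the kernel implementation:

```c
#include <assert.h>

struct freqs { unsigned int old_khz, new_khz; };

/* Models cpufreq_notify_post_transition(): if the switch failed, the
 * effective frequency did not change, so the value announced in the
 * POSTCHANGE notification reverts to the old one. */
unsigned int post_transition_freq(struct freqs *freqs, int failed)
{
	if (failed)
		freqs->new_khz = freqs->old_khz;
	return freqs->new_khz;	/* value announced in POSTCHANGE */
}
```

Centralizing this removes the per-driver `cmd_incomplete:`-style error labels that each repeated the rollback slightly differently.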
+121 -28
drivers/cpufreq/powernow-k6.c
··· 26 26 static unsigned int busfreq; /* FSB, in 10 kHz */ 27 27 static unsigned int max_multiplier; 28 28 29 + static unsigned int param_busfreq = 0; 30 + static unsigned int param_max_multiplier = 0; 31 + 32 + module_param_named(max_multiplier, param_max_multiplier, uint, S_IRUGO); 33 + MODULE_PARM_DESC(max_multiplier, "Maximum multiplier (allowed values: 20 30 35 40 45 50 55 60)"); 34 + 35 + module_param_named(bus_frequency, param_busfreq, uint, S_IRUGO); 36 + MODULE_PARM_DESC(bus_frequency, "Bus frequency in kHz"); 29 37 30 38 /* Clock ratio multiplied by 10 - see table 27 in AMD#23446 */ 31 39 static struct cpufreq_frequency_table clock_ratio[] = { 32 - {45, /* 000 -> 4.5x */ 0}, 33 - {50, /* 001 -> 5.0x */ 0}, 34 - {40, /* 010 -> 4.0x */ 0}, 35 - {55, /* 011 -> 5.5x */ 0}, 36 - {20, /* 100 -> 2.0x */ 0}, 37 - {30, /* 101 -> 3.0x */ 0}, 38 40 {60, /* 110 -> 6.0x */ 0}, 41 + {55, /* 011 -> 5.5x */ 0}, 42 + {50, /* 001 -> 5.0x */ 0}, 43 + {45, /* 000 -> 4.5x */ 0}, 44 + {40, /* 010 -> 4.0x */ 0}, 39 45 {35, /* 111 -> 3.5x */ 0}, 46 + {30, /* 101 -> 3.0x */ 0}, 47 + {20, /* 100 -> 2.0x */ 0}, 40 48 {0, CPUFREQ_TABLE_END} 41 49 }; 42 50 51 + static const u8 index_to_register[8] = { 6, 3, 1, 0, 2, 7, 5, 4 }; 52 + static const u8 register_to_index[8] = { 3, 2, 4, 1, 7, 6, 0, 5 }; 53 + 54 + static const struct { 55 + unsigned freq; 56 + unsigned mult; 57 + } usual_frequency_table[] = { 58 + { 400000, 40 }, // 100 * 4 59 + { 450000, 45 }, // 100 * 4.5 60 + { 475000, 50 }, // 95 * 5 61 + { 500000, 50 }, // 100 * 5 62 + { 506250, 45 }, // 112.5 * 4.5 63 + { 533500, 55 }, // 97 * 5.5 64 + { 550000, 55 }, // 100 * 5.5 65 + { 562500, 50 }, // 112.5 * 5 66 + { 570000, 60 }, // 95 * 6 67 + { 600000, 60 }, // 100 * 6 68 + { 618750, 55 }, // 112.5 * 5.5 69 + { 660000, 55 }, // 120 * 5.5 70 + { 675000, 60 }, // 112.5 * 6 71 + { 720000, 60 }, // 120 * 6 72 + }; 73 + 74 + #define FREQ_RANGE 3000 43 75 44 76 /** 45 77 * powernow_k6_get_cpu_multiplier - returns the current FSB 
multiplier 46 78 * 47 - * Returns the current setting of the frequency multiplier. Core clock 79 + * Returns the current setting of the frequency multiplier. Core clock 48 80 * speed is frequency of the Front-Side Bus multiplied with this value. 49 81 */ 50 82 static int powernow_k6_get_cpu_multiplier(void) 51 83 { 52 - u64 invalue = 0; 84 + unsigned long invalue = 0; 53 85 u32 msrval; 86 + 87 + local_irq_disable(); 54 88 55 89 msrval = POWERNOW_IOPORT + 0x1; 56 90 wrmsr(MSR_K6_EPMR, msrval, 0); /* enable the PowerNow port */ ··· 92 58 msrval = POWERNOW_IOPORT + 0x0; 93 59 wrmsr(MSR_K6_EPMR, msrval, 0); /* disable it again */ 94 60 95 - return clock_ratio[(invalue >> 5)&7].driver_data; 61 + local_irq_enable(); 62 + 63 + return clock_ratio[register_to_index[(invalue >> 5)&7]].driver_data; 96 64 } 97 65 66 + static void powernow_k6_set_cpu_multiplier(unsigned int best_i) 67 + { 68 + unsigned long outvalue, invalue; 69 + unsigned long msrval; 70 + unsigned long cr0; 71 + 72 + /* we now need to transform best_i to the BVC format, see AMD#23446 */ 73 + 74 + /* 75 + * The processor doesn't respond to inquiry cycles while changing the 76 + * frequency, so we must disable cache. 77 + */ 78 + local_irq_disable(); 79 + cr0 = read_cr0(); 80 + write_cr0(cr0 | X86_CR0_CD); 81 + wbinvd(); 82 + 83 + outvalue = (1<<12) | (1<<10) | (1<<9) | (index_to_register[best_i]<<5); 84 + 85 + msrval = POWERNOW_IOPORT + 0x1; 86 + wrmsr(MSR_K6_EPMR, msrval, 0); /* enable the PowerNow port */ 87 + invalue = inl(POWERNOW_IOPORT + 0x8); 88 + invalue = invalue & 0x1f; 89 + outvalue = outvalue | invalue; 90 + outl(outvalue, (POWERNOW_IOPORT + 0x8)); 91 + msrval = POWERNOW_IOPORT + 0x0; 92 + wrmsr(MSR_K6_EPMR, msrval, 0); /* disable it again */ 93 + 94 + write_cr0(cr0); 95 + local_irq_enable(); 96 + } 98 97 99 98 /** 100 99 * powernow_k6_target - set the PowerNow! 
multiplier ··· 138 71 static int powernow_k6_target(struct cpufreq_policy *policy, 139 72 unsigned int best_i) 140 73 { 141 - unsigned long outvalue = 0, invalue = 0; 142 - unsigned long msrval; 143 74 struct cpufreq_freqs freqs; 144 75 145 76 if (clock_ratio[best_i].driver_data > max_multiplier) { ··· 150 85 151 86 cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); 152 87 153 - /* we now need to transform best_i to the BVC format, see AMD#23446 */ 154 - 155 - outvalue = (1<<12) | (1<<10) | (1<<9) | (best_i<<5); 156 - 157 - msrval = POWERNOW_IOPORT + 0x1; 158 - wrmsr(MSR_K6_EPMR, msrval, 0); /* enable the PowerNow port */ 159 - invalue = inl(POWERNOW_IOPORT + 0x8); 160 - invalue = invalue & 0xf; 161 - outvalue = outvalue | invalue; 162 - outl(outvalue , (POWERNOW_IOPORT + 0x8)); 163 - msrval = POWERNOW_IOPORT + 0x0; 164 - wrmsr(MSR_K6_EPMR, msrval, 0); /* disable it again */ 88 + powernow_k6_set_cpu_multiplier(best_i); 165 89 166 90 cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); 167 91 168 92 return 0; 169 93 } 170 94 171 - 172 95 static int powernow_k6_cpu_init(struct cpufreq_policy *policy) 173 96 { 174 97 unsigned int i, f; 98 + unsigned khz; 175 99 176 100 if (policy->cpu != 0) 177 101 return -ENODEV; 178 102 179 - /* get frequencies */ 180 - max_multiplier = powernow_k6_get_cpu_multiplier(); 181 - busfreq = cpu_khz / max_multiplier; 103 + max_multiplier = 0; 104 + khz = cpu_khz; 105 + for (i = 0; i < ARRAY_SIZE(usual_frequency_table); i++) { 106 + if (khz >= usual_frequency_table[i].freq - FREQ_RANGE && 107 + khz <= usual_frequency_table[i].freq + FREQ_RANGE) { 108 + khz = usual_frequency_table[i].freq; 109 + max_multiplier = usual_frequency_table[i].mult; 110 + break; 111 + } 112 + } 113 + if (param_max_multiplier) { 114 + for (i = 0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) { 115 + if (clock_ratio[i].driver_data == param_max_multiplier) { 116 + max_multiplier = param_max_multiplier; 117 + goto have_max_multiplier; 118 
+ } 119 + } 120 + printk(KERN_ERR "powernow-k6: invalid max_multiplier parameter, valid parameters 20, 30, 35, 40, 45, 50, 55, 60\n"); 121 + return -EINVAL; 122 + } 123 + 124 + if (!max_multiplier) { 125 + printk(KERN_WARNING "powernow-k6: unknown frequency %u, cannot determine current multiplier\n", khz); 126 + printk(KERN_WARNING "powernow-k6: use module parameters max_multiplier and bus_frequency\n"); 127 + return -EOPNOTSUPP; 128 + } 129 + 130 + have_max_multiplier: 131 + param_max_multiplier = max_multiplier; 132 + 133 + if (param_busfreq) { 134 + if (param_busfreq >= 50000 && param_busfreq <= 150000) { 135 + busfreq = param_busfreq / 10; 136 + goto have_busfreq; 137 + } 138 + printk(KERN_ERR "powernow-k6: invalid bus_frequency parameter, allowed range 50000 - 150000 kHz\n"); 139 + return -EINVAL; 140 + } 141 + 142 + busfreq = khz / max_multiplier; 143 + have_busfreq: 144 + param_busfreq = busfreq * 10; 182 145 183 146 /* table init */ 184 147 for (i = 0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) { ··· 218 125 } 219 126 220 127 /* cpuinfo and default policy values */ 221 - policy->cpuinfo.transition_latency = 200000; 128 + policy->cpuinfo.transition_latency = 500000; 222 129 223 130 return cpufreq_table_validate_and_show(policy, clock_ratio); 224 131 }
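The rewritten powernow-k6 keeps `clock_ratio[]` sorted by frequency and maps table positions to the BVC register encoding through two small permutation arrays, which must be exact inverses of each other; it also snaps the measured `cpu_khz` onto a known frequency within ±`FREQ_RANGE` (3000 kHz) to recover the maximum multiplier. Both properties can be checked standalone — tables and constants copied from the diff, logic paraphrased:

```c
#include <assert.h>

static const unsigned char index_to_register[8] = { 6, 3, 1, 0, 2, 7, 5, 4 };
static const unsigned char register_to_index[8] = { 3, 2, 4, 1, 7, 6, 0, 5 };

/* Round-trip through both tables; returns 1 iff they are inverse permutations. */
int tables_are_inverse(void)
{
	int i;

	for (i = 0; i < 8; i++)
		if (register_to_index[index_to_register[i]] != i)
			return 0;
	return 1;
}

#define FREQ_RANGE 3000

/* Snap a measured cpu_khz to a known frequency, as the new init code does
 * with usual_frequency_table[]. Returns 0 when nothing matches (the driver
 * then falls back to the max_multiplier/bus_frequency module parameters). */
unsigned int snap_khz(unsigned int khz)
{
	static const unsigned int usual[] = {
		400000, 450000, 475000, 500000, 506250, 533500, 550000,
		562500, 570000, 600000, 618750, 660000, 675000, 720000,
	};
	unsigned int i;

	for (i = 0; i < sizeof(usual) / sizeof(usual[0]); i++)
		if (khz >= usual[i] - FREQ_RANGE && khz <= usual[i] + FREQ_RANGE)
			return usual[i];
	return 0;
}
```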
+1 -6
drivers/cpufreq/powernow-k8.c
···
 	cpufreq_cpu_put(policy);
 
 	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
-
 	res = transition_fid_vid(data, fid, vid);
-	if (res)
-		freqs.new = freqs.old;
-	else
-		freqs.new = find_khz_freq_from_fid(data->currfid);
+	cpufreq_notify_post_transition(policy, &freqs, res);
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
 	return res;
 }
+4 -13
drivers/cpufreq/ppc-corenet-cpufreq.c
···
 
 /**
  * struct cpu_data - per CPU data struct
- * @clk: the clk of CPU
  * @parent: the parent node of cpu clock
  * @table: frequency table
  */
 struct cpu_data {
-	struct clk *clk;
 	struct device_node *parent;
 	struct cpufreq_frequency_table *table;
 };
···
 	return cpumask_of(0);
 }
 #endif
-
-static unsigned int corenet_cpufreq_get_speed(unsigned int cpu)
-{
-	struct cpu_data *data = per_cpu(cpu_data, cpu);
-
-	return clk_get_rate(data->clk) / 1000;
-}
 
 /* reduce the duplicated frequencies in frequency table */
 static void freq_table_redup(struct cpufreq_frequency_table *freq_table,
···
 		goto err_np;
 	}
 
-	data->clk = of_clk_get(np, 0);
-	if (IS_ERR(data->clk)) {
+	policy->clk = of_clk_get(np, 0);
+	if (IS_ERR(policy->clk)) {
 		pr_err("%s: no clock information\n", __func__);
 		goto err_nomem2;
 	}
···
 	struct cpu_data *data = per_cpu(cpu_data, policy->cpu);
 
 	parent = of_clk_get(data->parent, data->table[index].driver_data);
-	return clk_set_parent(data->clk, parent);
+	return clk_set_parent(policy->clk, parent);
 }
 
 static struct cpufreq_driver ppc_corenet_cpufreq_driver = {
···
 	.exit		= __exit_p(corenet_cpufreq_cpu_exit),
 	.verify		= cpufreq_generic_frequency_table_verify,
 	.target_index	= corenet_cpufreq_target,
-	.get		= corenet_cpufreq_get_speed,
+	.get		= cpufreq_generic_get,
 	.attr		= cpufreq_generic_attr,
 };
+1
drivers/cpufreq/pxa2xx-cpufreq.c
···
 }
 
 static struct cpufreq_driver pxa_cpufreq_driver = {
+	.flags	= CPUFREQ_NEED_INITIAL_FREQ_CHECK,
 	.verify	= cpufreq_generic_frequency_table_verify,
 	.target_index = pxa_set_target,
 	.init	= pxa_cpufreq_init,
+1
drivers/cpufreq/pxa3xx-cpufreq.c
···
 }
 
 static struct cpufreq_driver pxa3xx_cpufreq_driver = {
+	.flags		= CPUFREQ_NEED_INITIAL_FREQ_CHECK,
 	.verify		= cpufreq_generic_frequency_table_verify,
 	.target_index	= pxa3xx_cpufreq_set,
 	.init		= pxa3xx_cpufreq_init,
+1 -1
drivers/cpufreq/s3c2416-cpufreq.c
···
 }
 
 static struct cpufreq_driver s3c2416_cpufreq_driver = {
-	.flags		= 0,
+	.flags		= CPUFREQ_NEED_INITIAL_FREQ_CHECK,
 	.verify		= cpufreq_generic_frequency_table_verify,
 	.target_index	= s3c2416_cpufreq_set_target,
 	.get		= s3c2416_cpufreq_get_speed,
+2 -4
drivers/cpufreq/s3c2440-cpufreq.c
···
 #include <linux/err.h>
 #include <linux/io.h>
 
-#include <mach/hardware.h>
-
 #include <asm/mach/arch.h>
 #include <asm/mach/map.h>
···
  * specified in @cfg. The values are stored in @cfg for later use
  * by the relevant set routine if the request settings can be reached.
  */
-int s3c2440_cpufreq_calcdivs(struct s3c_cpufreq_config *cfg)
+static int s3c2440_cpufreq_calcdivs(struct s3c_cpufreq_config *cfg)
 {
 	unsigned int hdiv, pdiv;
 	unsigned long hclk, fclk, armclk;
···
 	return ret;
 }
 
-struct s3c_cpufreq_info s3c2440_cpufreq_info = {
+static struct s3c_cpufreq_info s3c2440_cpufreq_info = {
 	.max	= {
 		.fclk	= 400000000,
 		.hclk	= 133333333,
+5 -9
drivers/cpufreq/s3c24xx-cpufreq.c
··· 355 355 return -EINVAL; 356 356 } 357 357 358 - static unsigned int s3c_cpufreq_get(unsigned int cpu) 359 - { 360 - return clk_get_rate(clk_arm) / 1000; 361 - } 362 - 363 358 struct clk *s3c_cpufreq_clk_get(struct device *dev, const char *name) 364 359 { 365 360 struct clk *clk; ··· 368 373 369 374 static int s3c_cpufreq_init(struct cpufreq_policy *policy) 370 375 { 376 + policy->clk = clk_arm; 371 377 return cpufreq_generic_init(policy, ftab, cpu_cur.info->latency); 372 378 } 373 379 ··· 404 408 { 405 409 suspend_pll.frequency = clk_get_rate(_clk_mpll); 406 410 suspend_pll.driver_data = __raw_readl(S3C2410_MPLLCON); 407 - suspend_freq = s3c_cpufreq_get(0) * 1000; 411 + suspend_freq = clk_get_rate(clk_arm); 408 412 409 413 return 0; 410 414 } ··· 444 448 #endif 445 449 446 450 static struct cpufreq_driver s3c24xx_driver = { 447 - .flags = CPUFREQ_STICKY, 451 + .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK, 448 452 .target = s3c_cpufreq_target, 449 - .get = s3c_cpufreq_get, 453 + .get = cpufreq_generic_get, 450 454 .init = s3c_cpufreq_init, 451 455 .suspend = s3c_cpufreq_suspend, 452 456 .resume = s3c_cpufreq_resume, ··· 505 509 return 0; 506 510 } 507 511 508 - int __init s3c_cpufreq_auto_io(void) 512 + static int __init s3c_cpufreq_auto_io(void) 509 513 { 510 514 int ret; 511 515
+13 -22
drivers/cpufreq/s3c64xx-cpufreq.c
··· 19 19 #include <linux/regulator/consumer.h> 20 20 #include <linux/module.h> 21 21 22 - static struct clk *armclk; 23 22 static struct regulator *vddarm; 24 23 static unsigned long regulator_latency; 25 24 ··· 53 54 }; 54 55 #endif 55 56 56 - static unsigned int s3c64xx_cpufreq_get_speed(unsigned int cpu) 57 - { 58 - if (cpu != 0) 59 - return 0; 60 - 61 - return clk_get_rate(armclk) / 1000; 62 - } 63 - 64 57 static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy, 65 58 unsigned int index) 66 59 { ··· 60 69 unsigned int old_freq, new_freq; 61 70 int ret; 62 71 63 - old_freq = clk_get_rate(armclk) / 1000; 72 + old_freq = clk_get_rate(policy->clk) / 1000; 64 73 new_freq = s3c64xx_freq_table[index].frequency; 65 74 dvfs = &s3c64xx_dvfs_table[s3c64xx_freq_table[index].driver_data]; 66 75 ··· 77 86 } 78 87 #endif 79 88 80 - ret = clk_set_rate(armclk, new_freq * 1000); 89 + ret = clk_set_rate(policy->clk, new_freq * 1000); 81 90 if (ret < 0) { 82 91 pr_err("Failed to set rate %dkHz: %d\n", 83 92 new_freq, ret); ··· 92 101 if (ret != 0) { 93 102 pr_err("Failed to set VDDARM for %dkHz: %d\n", 94 103 new_freq, ret); 95 - if (clk_set_rate(armclk, old_freq * 1000) < 0) 104 + if (clk_set_rate(policy->clk, old_freq * 1000) < 0) 96 105 pr_err("Failed to restore original clock rate\n"); 97 106 98 107 return ret; ··· 101 110 #endif 102 111 103 112 pr_debug("Set actual frequency %lukHz\n", 104 - clk_get_rate(armclk) / 1000); 113 + clk_get_rate(policy->clk) / 1000); 105 114 106 115 return 0; 107 116 } ··· 160 169 return -ENODEV; 161 170 } 162 171 163 - armclk = clk_get(NULL, "armclk"); 164 - if (IS_ERR(armclk)) { 172 + policy->clk = clk_get(NULL, "armclk"); 173 + if (IS_ERR(policy->clk)) { 165 174 pr_err("Unable to obtain ARMCLK: %ld\n", 166 - PTR_ERR(armclk)); 167 - return PTR_ERR(armclk); 175 + PTR_ERR(policy->clk)); 176 + return PTR_ERR(policy->clk); 168 177 } 169 178 170 179 #ifdef CONFIG_REGULATOR ··· 184 193 unsigned long r; 185 194 186 195 /* Check for 
frequencies we can generate */ 187 - r = clk_round_rate(armclk, freq->frequency * 1000); 196 + r = clk_round_rate(policy->clk, freq->frequency * 1000); 188 197 r /= 1000; 189 198 if (r != freq->frequency) { 190 199 pr_debug("%dkHz unsupported by clock\n", ··· 194 203 195 204 /* If we have no regulator then assume startup 196 205 * frequency is the maximum we can support. */ 197 - if (!vddarm && freq->frequency > s3c64xx_cpufreq_get_speed(0)) 206 + if (!vddarm && freq->frequency > clk_get_rate(policy->clk) / 1000) 198 207 freq->frequency = CPUFREQ_ENTRY_INVALID; 199 208 200 209 freq++; ··· 210 219 pr_err("Failed to configure frequency table: %d\n", 211 220 ret); 212 221 regulator_put(vddarm); 213 - clk_put(armclk); 222 + clk_put(policy->clk); 214 223 } 215 224 216 225 return ret; 217 226 } 218 227 219 228 static struct cpufreq_driver s3c64xx_cpufreq_driver = { 220 - .flags = 0, 229 + .flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK, 221 230 .verify = cpufreq_generic_frequency_table_verify, 222 231 .target_index = s3c64xx_cpufreq_set_target, 223 - .get = s3c64xx_cpufreq_get_speed, 232 + .get = cpufreq_generic_get, 224 233 .init = s3c64xx_cpufreq_driver_init, 225 234 .name = "s3c", 226 235 };
+7 -16
drivers/cpufreq/s5pv210-cpufreq.c
··· 23 23 #include <mach/map.h> 24 24 #include <mach/regs-clock.h> 25 25 26 - static struct clk *cpu_clk; 27 26 static struct clk *dmc0_clk; 28 27 static struct clk *dmc1_clk; 29 28 static DEFINE_MUTEX(set_freq_lock); ··· 163 164 __raw_writel(tmp1, reg); 164 165 } 165 166 166 - static unsigned int s5pv210_getspeed(unsigned int cpu) 167 - { 168 - if (cpu) 169 - return 0; 170 - 171 - return clk_get_rate(cpu_clk) / 1000; 172 - } 173 - 174 167 static int s5pv210_target(struct cpufreq_policy *policy, unsigned int index) 175 168 { 176 169 unsigned long reg; ··· 184 193 goto exit; 185 194 } 186 195 187 - old_freq = s5pv210_getspeed(0); 196 + old_freq = policy->cur; 188 197 new_freq = s5pv210_freq_table[index].frequency; 189 198 190 199 /* Finding current running level index */ ··· 462 471 unsigned long mem_type; 463 472 int ret; 464 473 465 - cpu_clk = clk_get(NULL, "armclk"); 466 - if (IS_ERR(cpu_clk)) 467 - return PTR_ERR(cpu_clk); 474 + policy->clk = clk_get(NULL, "armclk"); 475 + if (IS_ERR(policy->clk)) 476 + return PTR_ERR(policy->clk); 468 477 469 478 dmc0_clk = clk_get(NULL, "sclk_dmc0"); 470 479 if (IS_ERR(dmc0_clk)) { ··· 507 516 out_dmc1: 508 517 clk_put(dmc0_clk); 509 518 out_dmc0: 510 - clk_put(cpu_clk); 519 + clk_put(policy->clk); 511 520 return ret; 512 521 } 513 522 ··· 551 560 } 552 561 553 562 static struct cpufreq_driver s5pv210_driver = { 554 - .flags = CPUFREQ_STICKY, 563 + .flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK, 555 564 .verify = cpufreq_generic_frequency_table_verify, 556 565 .target_index = s5pv210_target, 557 - .get = s5pv210_getspeed, 566 + .get = cpufreq_generic_get, 558 567 .init = s5pv210_cpu_init, 559 568 .name = "s5pv210", 560 569 #ifdef CONFIG_PM
+1 -1
drivers/cpufreq/sa1100-cpufreq.c
···
 }
 
 static struct cpufreq_driver sa1100_driver __refdata = {
-	.flags		= CPUFREQ_STICKY,
+	.flags		= CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
 	.verify		= cpufreq_generic_frequency_table_verify,
 	.target_index	= sa1100_target,
 	.get		= sa11x0_getspeed,
+1 -1
drivers/cpufreq/sa1110-cpufreq.c
···
 /* sa1110_driver needs __refdata because it must remain after init registers
  * it with cpufreq_register_driver() */
 static struct cpufreq_driver sa1110_driver __refdata = {
-	.flags		= CPUFREQ_STICKY,
+	.flags		= CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
 	.verify		= cpufreq_generic_frequency_table_verify,
 	.target_index	= sa1110_target,
 	.get		= sa11x0_getspeed,
+4 -8
drivers/cpufreq/spear-cpufreq.c
···
 	u32 cnt;
 } spear_cpufreq;
 
-static unsigned int spear_cpufreq_get(unsigned int cpu)
-{
-	return clk_get_rate(spear_cpufreq.clk) / 1000;
-}
-
 static struct clk *spear1340_cpu_get_possible_parent(unsigned long newfreq)
 {
 	struct clk *sys_pclk;
···
 	}
 
 	newfreq = clk_round_rate(srcclk, newfreq * mult);
-	if (newfreq < 0) {
+	if (newfreq <= 0) {
 		pr_err("clk_round_rate failed for cpu src clock\n");
 		return newfreq;
 	}
···
 
 static int spear_cpufreq_init(struct cpufreq_policy *policy)
 {
+	policy->clk = spear_cpufreq.clk;
 	return cpufreq_generic_init(policy, spear_cpufreq.freq_tbl,
 			spear_cpufreq.transition_latency);
 }
 
 static struct cpufreq_driver spear_cpufreq_driver = {
 	.name		= "cpufreq-spear",
-	.flags		= CPUFREQ_STICKY,
+	.flags		= CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
 	.verify		= cpufreq_generic_frequency_table_verify,
 	.target_index	= spear_cpufreq_target,
-	.get		= spear_cpufreq_get,
+	.get		= cpufreq_generic_get,
 	.init		= spear_cpufreq_init,
 	.exit		= cpufreq_generic_exit,
 	.attr		= cpufreq_generic_attr,
-32
drivers/cpufreq/speedstep-smi.c
···
 }
 
 /**
- * speedstep_get_state - set the SpeedStep state
- * @state: processor frequency state (SPEEDSTEP_LOW or SPEEDSTEP_HIGH)
- *
- */
-static int speedstep_get_state(void)
-{
-	u32 function = GET_SPEEDSTEP_STATE;
-	u32 result, state, edi, command, dummy;
-
-	command = (smi_sig & 0xffffff00) | (smi_cmd & 0xff);
-
-	pr_debug("trying to determine current setting with command %x "
-		"at port %x\n", command, smi_port);
-
-	__asm__ __volatile__(
-		"push %%ebp\n"
-		"out %%al, (%%dx)\n"
-		"pop %%ebp\n"
-		: "=a" (result),
-		  "=b" (state), "=D" (edi),
-		  "=c" (dummy), "=d" (dummy), "=S" (dummy)
-		: "a" (command), "b" (function), "c" (0),
-		  "d" (smi_port), "S" (0), "D" (0)
-	);
-
-	pr_debug("state is %x, result is %x\n", state, result);
-
-	return state & 1;
-}
-
-
-/**
  * speedstep_set_state - set the SpeedStep state
  * @state: new processor frequency state (SPEEDSTEP_LOW or SPEEDSTEP_HIGH)
  *
+9 -40
drivers/cpufreq/tegra-cpufreq.c
··· 47 47 static struct clk *pll_p_clk; 48 48 static struct clk *emc_clk; 49 49 50 - static unsigned long target_cpu_speed[NUM_CPUS]; 51 50 static DEFINE_MUTEX(tegra_cpu_lock); 52 51 static bool is_suspended; 53 - 54 - static unsigned int tegra_getspeed(unsigned int cpu) 55 - { 56 - unsigned long rate; 57 - 58 - if (cpu >= NUM_CPUS) 59 - return 0; 60 - 61 - rate = clk_get_rate(cpu_clk) / 1000; 62 - return rate; 63 - } 64 52 65 53 static int tegra_cpu_clk_set_rate(unsigned long rate) 66 54 { ··· 91 103 { 92 104 int ret = 0; 93 105 94 - if (tegra_getspeed(0) == rate) 95 - return ret; 96 - 97 106 /* 98 107 * Vote on memory bus frequency based on cpu frequency 99 108 * This sets the minimum frequency, display or avp may request higher ··· 110 125 return ret; 111 126 } 112 127 113 - static unsigned long tegra_cpu_highest_speed(void) 114 - { 115 - unsigned long rate = 0; 116 - int i; 117 - 118 - for_each_online_cpu(i) 119 - rate = max(rate, target_cpu_speed[i]); 120 - return rate; 121 - } 122 - 123 128 static int tegra_target(struct cpufreq_policy *policy, unsigned int index) 124 129 { 125 - unsigned int freq; 126 - int ret = 0; 130 + int ret = -EBUSY; 127 131 128 132 mutex_lock(&tegra_cpu_lock); 129 133 130 - if (is_suspended) 131 - goto out; 134 + if (!is_suspended) 135 + ret = tegra_update_cpu_speed(policy, 136 + freq_table[index].frequency); 132 137 133 - freq = freq_table[index].frequency; 134 - 135 - target_cpu_speed[policy->cpu] = freq; 136 - 137 - ret = tegra_update_cpu_speed(policy, tegra_cpu_highest_speed()); 138 - 139 - out: 140 138 mutex_unlock(&tegra_cpu_lock); 141 139 return ret; 142 140 } ··· 133 165 is_suspended = true; 134 166 pr_info("Tegra cpufreq suspend: setting frequency to %d kHz\n", 135 167 freq_table[0].frequency); 136 - tegra_update_cpu_speed(policy, freq_table[0].frequency); 168 + if (clk_get_rate(cpu_clk) / 1000 != freq_table[0].frequency) 169 + tegra_update_cpu_speed(policy, freq_table[0].frequency); 137 170 cpufreq_cpu_put(policy); 138 171 } 
else if (event == PM_POST_SUSPEND) { 139 172 is_suspended = false; ··· 158 189 clk_prepare_enable(emc_clk); 159 190 clk_prepare_enable(cpu_clk); 160 191 161 - target_cpu_speed[policy->cpu] = tegra_getspeed(policy->cpu); 162 - 163 192 /* FIXME: what's the actual transition time? */ 164 193 ret = cpufreq_generic_init(policy, freq_table, 300 * 1000); 165 194 if (ret) { ··· 169 202 if (policy->cpu == 0) 170 203 register_pm_notifier(&tegra_cpu_pm_notifier); 171 204 205 + policy->clk = cpu_clk; 172 206 return 0; 173 207 } 174 208 ··· 182 214 } 183 215 184 216 static struct cpufreq_driver tegra_cpufreq_driver = { 217 + .flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK, 185 218 .verify = cpufreq_generic_frequency_table_verify, 186 219 .target_index = tegra_target, 187 - .get = tegra_getspeed, 220 + .get = cpufreq_generic_get, 188 221 .init = tegra_cpu_init, 189 222 .exit = tegra_cpu_exit, 190 223 .name = "tegra",
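With `target_cpu_speed[]` and `tegra_cpu_highest_speed()` gone, `tegra_target()` reduces to "fail while suspended, otherwise set the indexed frequency under the lock". A compressed model of that control flow — the names mimic the driver and the locking is elided, so this is an illustration, not kernel code:

```c
#include <assert.h>

struct freq_entry { unsigned int khz; };

static int is_suspended;
static unsigned int current_khz;

/* Models the simplified tegra_target(): -16 (-EBUSY) while suspended,
 * otherwise apply the table entry and report success. */
int tegra_target_model(const struct freq_entry *table, unsigned int index)
{
	if (is_suspended)
		return -16;
	current_khz = table[index].khz;
	return 0;
}
```

The per-CPU voting was only needed when each core could request a speed independently; with one shared clock and the core cpufreq layer already serializing requests, the max-over-CPUs pass was dead weight.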
+13 -20
drivers/cpufreq/unicore2-cpufreq.c
···
  * published by the Free Software Foundation.
  */
 
+#include <linux/err.h>
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <linux/init.h>
···
 	return 0;
 }
 
-static unsigned int ucv2_getspeed(unsigned int cpu)
-{
-	struct clk *mclk = clk_get(NULL, "MAIN_CLK");
-
-	if (cpu)
-		return 0;
-	return clk_get_rate(mclk)/1000;
-}
-
 static int ucv2_target(struct cpufreq_policy *policy,
 		       unsigned int target_freq,
 		       unsigned int relation)
 {
-	unsigned int cur = ucv2_getspeed(0);
 	struct cpufreq_freqs freqs;
-	struct clk *mclk = clk_get(NULL, "MAIN_CLK");
+	int ret;
+
+	freqs.old = policy->cur;
+	freqs.new = target_freq;
 
 	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+	ret = clk_set_rate(policy->clk, target_freq * 1000);
+	cpufreq_notify_post_transition(policy, &freqs, ret);
 
-	if (!clk_set_rate(mclk, target_freq * 1000)) {
-		freqs.old = cur;
-		freqs.new = target_freq;
-	}
-
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
-
-	return 0;
+	return ret;
 }
 
 static int __init ucv2_cpu_init(struct cpufreq_policy *policy)
 {
 	if (policy->cpu != 0)
 		return -EINVAL;
+
 	policy->min = policy->cpuinfo.min_freq = 250000;
 	policy->max = policy->cpuinfo.max_freq = 1000000;
 	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
+	policy->clk = clk_get(NULL, "MAIN_CLK");
+	if (IS_ERR(policy->clk))
+		return PTR_ERR(policy->clk);
 	return 0;
 }
···
 	.flags		= CPUFREQ_STICKY,
 	.verify		= ucv2_verify_speed,
 	.target		= ucv2_target,
-	.get		= ucv2_getspeed,
+	.get		= cpufreq_generic_get,
 	.init		= ucv2_cpu_init,
 	.name		= "UniCore-II",
 };
+1 -1
drivers/firmware/Kconfig
··· 113 113 114 114 config ISCSI_IBFT_FIND 115 115 bool "iSCSI Boot Firmware Table Attributes" 116 - depends on X86 116 + depends on X86 && ACPI 117 117 default n 118 118 help 119 119 This option enables the kernel to find the region of memory
-1
drivers/gpu/drm/gma500/opregion.c
··· 22 22 * 23 23 */ 24 24 #include <linux/acpi.h> 25 - #include <linux/acpi_io.h> 26 25 #include "psb_drv.h" 27 26 #include "psb_intel_reg.h" 28 27
+1 -2
drivers/gpu/drm/i915/Makefile
··· 38 38 intel_ringbuffer.o \ 39 39 intel_overlay.o \ 40 40 intel_sprite.o \ 41 - intel_opregion.o \ 42 41 intel_sideband.o \ 43 42 intel_uncore.o \ 44 43 dvo_ch7xxx.o \ ··· 50 51 51 52 i915-$(CONFIG_COMPAT) += i915_ioc32.o 52 53 53 - i915-$(CONFIG_ACPI) += intel_acpi.o 54 + i915-$(CONFIG_ACPI) += intel_acpi.o intel_opregion.o 54 55 55 56 i915-$(CONFIG_DRM_I915_FBDEV) += intel_fbdev.o 56 57
+2 -1
drivers/gpu/drm/i915/i915_drv.h
··· 2339 2339 2340 2340 /* intel_opregion.c */ 2341 2341 struct intel_encoder; 2342 - extern int intel_opregion_setup(struct drm_device *dev); 2343 2342 #ifdef CONFIG_ACPI 2343 + extern int intel_opregion_setup(struct drm_device *dev); 2344 2344 extern void intel_opregion_init(struct drm_device *dev); 2345 2345 extern void intel_opregion_fini(struct drm_device *dev); 2346 2346 extern void intel_opregion_asle_intr(struct drm_device *dev); ··· 2349 2349 extern int intel_opregion_notify_adapter(struct drm_device *dev, 2350 2350 pci_power_t state); 2351 2351 #else 2352 + static inline int intel_opregion_setup(struct drm_device *dev) { return 0; } 2352 2353 static inline void intel_opregion_init(struct drm_device *dev) { return; } 2353 2354 static inline void intel_opregion_fini(struct drm_device *dev) { return; } 2354 2355 static inline void intel_opregion_asle_intr(struct drm_device *dev) { return; }
+29 -115
drivers/gpu/drm/i915/intel_acpi.c
···
  #include <linux/pci.h>
  #include <linux/acpi.h>
  #include <linux/vga_switcheroo.h>
- #include <acpi/acpi_drivers.h>
-
  #include <drm/drmP.h>
  #include "i915_drv.h"

  #define INTEL_DSM_REVISION_ID 1 /* For Calpella anyway... */
-
- #define INTEL_DSM_FN_SUPPORTED_FUNCTIONS 0 /* No args */
  #define INTEL_DSM_FN_PLATFORM_MUX_INFO 1 /* No args */

  static struct intel_dsm_priv {
···
          0xa8, 0x54,
          0x0f, 0x13, 0x17, 0xb0, 0x1c, 0x2c
  };
-
- static int intel_dsm(acpi_handle handle, int func)
- {
-         struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
-         struct acpi_object_list input;
-         union acpi_object params[4];
-         union acpi_object *obj;
-         u32 result;
-         int ret = 0;
-
-         input.count = 4;
-         input.pointer = params;
-         params[0].type = ACPI_TYPE_BUFFER;
-         params[0].buffer.length = sizeof(intel_dsm_guid);
-         params[0].buffer.pointer = (char *)intel_dsm_guid;
-         params[1].type = ACPI_TYPE_INTEGER;
-         params[1].integer.value = INTEL_DSM_REVISION_ID;
-         params[2].type = ACPI_TYPE_INTEGER;
-         params[2].integer.value = func;
-         params[3].type = ACPI_TYPE_PACKAGE;
-         params[3].package.count = 0;
-         params[3].package.elements = NULL;
-
-         ret = acpi_evaluate_object(handle, "_DSM", &input, &output);
-         if (ret) {
-                 DRM_DEBUG_DRIVER("failed to evaluate _DSM: %d\n", ret);
-                 return ret;
-         }
-
-         obj = (union acpi_object *)output.pointer;
-
-         result = 0;
-         switch (obj->type) {
-         case ACPI_TYPE_INTEGER:
-                 result = obj->integer.value;
-                 break;
-
-         case ACPI_TYPE_BUFFER:
-                 if (obj->buffer.length == 4) {
-                         result = (obj->buffer.pointer[0] |
-                                 (obj->buffer.pointer[1] << 8) |
-                                 (obj->buffer.pointer[2] << 16) |
-                                 (obj->buffer.pointer[3] << 24));
-                         break;
-                 }
-         default:
-                 ret = -EINVAL;
-                 break;
-         }
-         if (result == 0x80000002)
-                 ret = -ENODEV;
-
-         kfree(output.pointer);
-         return ret;
- }

  static char *intel_dsm_port_name(u8 id)
  {
···

  static void intel_dsm_platform_mux_info(void)
  {
-         struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
-         struct acpi_object_list input;
-         union acpi_object params[4];
-         union acpi_object *pkg;
-         int i, ret;
+         int i;
+         union acpi_object *pkg, *connector_count;

-         input.count = 4;
-         input.pointer = params;
-         params[0].type = ACPI_TYPE_BUFFER;
-         params[0].buffer.length = sizeof(intel_dsm_guid);
-         params[0].buffer.pointer = (char *)intel_dsm_guid;
-         params[1].type = ACPI_TYPE_INTEGER;
-         params[1].integer.value = INTEL_DSM_REVISION_ID;
-         params[2].type = ACPI_TYPE_INTEGER;
-         params[2].integer.value = INTEL_DSM_FN_PLATFORM_MUX_INFO;
-         params[3].type = ACPI_TYPE_PACKAGE;
-         params[3].package.count = 0;
-         params[3].package.elements = NULL;
-
-         ret = acpi_evaluate_object(intel_dsm_priv.dhandle, "_DSM", &input,
-                         &output);
-         if (ret) {
-                 DRM_DEBUG_DRIVER("failed to evaluate _DSM: %d\n", ret);
-                 goto out;
+         pkg = acpi_evaluate_dsm_typed(intel_dsm_priv.dhandle, intel_dsm_guid,
+                         INTEL_DSM_REVISION_ID, INTEL_DSM_FN_PLATFORM_MUX_INFO,
+                         NULL, ACPI_TYPE_PACKAGE);
+         if (!pkg) {
+                 DRM_DEBUG_DRIVER("failed to evaluate _DSM\n");
+                 return;
          }

-         pkg = (union acpi_object *)output.pointer;
-
-         if (pkg->type == ACPI_TYPE_PACKAGE) {
-                 union acpi_object *connector_count = &pkg->package.elements[0];
-                 DRM_DEBUG_DRIVER("MUX info connectors: %lld\n",
-                         (unsigned long long)connector_count->integer.value);
-                 for (i = 1; i < pkg->package.count; i++) {
-                         union acpi_object *obj = &pkg->package.elements[i];
-                         union acpi_object *connector_id =
-                                 &obj->package.elements[0];
-                         union acpi_object *info = &obj->package.elements[1];
-                         DRM_DEBUG_DRIVER("Connector id: 0x%016llx\n",
-                                 (unsigned long long)connector_id->integer.value);
-                         DRM_DEBUG_DRIVER(" port id: %s\n",
-                                 intel_dsm_port_name(info->buffer.pointer[0]));
-                         DRM_DEBUG_DRIVER(" display mux info: %s\n",
-                                 intel_dsm_mux_type(info->buffer.pointer[1]));
-                         DRM_DEBUG_DRIVER(" aux/dc mux info: %s\n",
-                                 intel_dsm_mux_type(info->buffer.pointer[2]));
-                         DRM_DEBUG_DRIVER(" hpd mux info: %s\n",
-                                 intel_dsm_mux_type(info->buffer.pointer[3]));
-                 }
+         connector_count = &pkg->package.elements[0];
+         DRM_DEBUG_DRIVER("MUX info connectors: %lld\n",
+                 (unsigned long long)connector_count->integer.value);
+         for (i = 1; i < pkg->package.count; i++) {
+                 union acpi_object *obj = &pkg->package.elements[i];
+                 union acpi_object *connector_id = &obj->package.elements[0];
+                 union acpi_object *info = &obj->package.elements[1];
+                 DRM_DEBUG_DRIVER("Connector id: 0x%016llx\n",
+                         (unsigned long long)connector_id->integer.value);
+                 DRM_DEBUG_DRIVER(" port id: %s\n",
+                         intel_dsm_port_name(info->buffer.pointer[0]));
+                 DRM_DEBUG_DRIVER(" display mux info: %s\n",
+                         intel_dsm_mux_type(info->buffer.pointer[1]));
+                 DRM_DEBUG_DRIVER(" aux/dc mux info: %s\n",
+                         intel_dsm_mux_type(info->buffer.pointer[2]));
+                 DRM_DEBUG_DRIVER(" hpd mux info: %s\n",
+                         intel_dsm_mux_type(info->buffer.pointer[3]));
          }

- out:
-         kfree(output.pointer);
+         ACPI_FREE(pkg);
  }

  static bool intel_dsm_pci_probe(struct pci_dev *pdev)
  {
          acpi_handle dhandle;
-         int ret;

          dhandle = ACPI_HANDLE(&pdev->dev);
          if (!dhandle)
                  return false;

-         if (!acpi_has_method(dhandle, "_DSM")) {
+         if (!acpi_check_dsm(dhandle, intel_dsm_guid, INTEL_DSM_REVISION_ID,
+                         1 << INTEL_DSM_FN_PLATFORM_MUX_INFO)) {
                  DRM_DEBUG_KMS("no _DSM method for intel device\n");
                  return false;
          }

-         ret = intel_dsm(dhandle, INTEL_DSM_FN_SUPPORTED_FUNCTIONS);
-         if (ret < 0) {
-                 DRM_DEBUG_KMS("failed to get supported _DSM functions\n");
-                 return false;
-         }
-
          intel_dsm_priv.dhandle = dhandle;
-
          intel_dsm_platform_mux_info();
+
          return true;
  }
-1
drivers/gpu/drm/i915/intel_opregion.c
··· 28 28 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 29 29 30 30 #include <linux/acpi.h> 31 - #include <linux/acpi_io.h> 32 31 #include <acpi/video.h> 33 32 34 33 #include <drm/drmP.h>
+16 -32
drivers/gpu/drm/nouveau/core/subdev/mxm/base.c
···
          0xB8, 0x9C, 0x79, 0xB6, 0x2F, 0xD5, 0x56, 0x65
  };
  u32 mxms_args[] = { 0x00000000 };
- union acpi_object args[4] = {
-         /* _DSM MUID */
-         { .buffer.type = 3,
-           .buffer.length = sizeof(muid),
-           .buffer.pointer = muid,
-         },
-         /* spec says this can be zero to mean "highest revision", but
-          * of course there's at least one bios out there which fails
-          * unless you pass in exactly the version it supports..
-          */
-         { .integer.type = ACPI_TYPE_INTEGER,
-           .integer.value = (version & 0xf0) << 4 | (version & 0x0f),
-         },
-         /* MXMS function */
-         { .integer.type = ACPI_TYPE_INTEGER,
-           .integer.value = 0x00000010,
-         },
-         /* Pointer to MXMS arguments */
-         { .buffer.type = ACPI_TYPE_BUFFER,
-           .buffer.length = sizeof(mxms_args),
-           .buffer.pointer = (char *)mxms_args,
-         },
+ union acpi_object argv4 = {
+         .buffer.type = ACPI_TYPE_BUFFER,
+         .buffer.length = sizeof(mxms_args),
+         .buffer.pointer = (char *)mxms_args,
  };
- struct acpi_object_list list = { ARRAY_SIZE(args), args };
- struct acpi_buffer retn = { ACPI_ALLOCATE_BUFFER, NULL };
  union acpi_object *obj;
  acpi_handle handle;
- int ret;
+ int rev;

  handle = ACPI_HANDLE(&device->pdev->dev);
  if (!handle)
          return false;

- ret = acpi_evaluate_object(handle, "_DSM", &list, &retn);
- if (ret) {
-         nv_debug(mxm, "DSM MXMS failed: %d\n", ret);
+ /*
+  * spec says this can be zero to mean "highest revision", but
+  * of course there's at least one bios out there which fails
+  * unless you pass in exactly the version it supports..
+  */
+ rev = (version & 0xf0) << 4 | (version & 0x0f);
+ obj = acpi_evaluate_dsm(handle, muid, rev, 0x00000010, &argv4);
+ if (!obj) {
+         nv_debug(mxm, "DSM MXMS failed\n");
          return false;
  }

- obj = retn.pointer;
  if (obj->type == ACPI_TYPE_BUFFER) {
          mxm->mxms = kmemdup(obj->buffer.pointer,
                          obj->buffer.length, GFP_KERNEL);
- } else
- if (obj->type == ACPI_TYPE_INTEGER) {
+ } else if (obj->type == ACPI_TYPE_INTEGER) {
          nv_debug(mxm, "DSM MXMS returned 0x%llx\n", obj->integer.value);
  }

- kfree(obj);
+ ACPI_FREE(obj);
  return mxm->mxms != NULL;
  }
  #endif
+37 -99
drivers/gpu/drm/nouveau/nouveau_acpi.c
···
  #include <linux/pci.h>
  #include <linux/acpi.h>
  #include <linux/slab.h>
- #include <acpi/acpi_drivers.h>
- #include <acpi/acpi_bus.h>
- #include <acpi/video.h>
- #include <acpi/acpi.h>
  #include <linux/mxm-wmi.h>
-
  #include <linux/vga_switcheroo.h>
-
  #include <drm/drm_edid.h>
+ #include <acpi/video.h>

  #include "nouveau_drm.h"
  #include "nouveau_acpi.h"
···

  static int nouveau_optimus_dsm(acpi_handle handle, int func, int arg, uint32_t *result)
  {
-         struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
-         struct acpi_object_list input;
-         union acpi_object params[4];
+         int i;
          union acpi_object *obj;
-         int i, err;
          char args_buff[4];
+         union acpi_object argv4 = {
+                 .buffer.type = ACPI_TYPE_BUFFER,
+                 .buffer.length = 4,
+                 .buffer.pointer = args_buff
+         };

-         input.count = 4;
-         input.pointer = params;
-         params[0].type = ACPI_TYPE_BUFFER;
-         params[0].buffer.length = sizeof(nouveau_op_dsm_muid);
-         params[0].buffer.pointer = (char *)nouveau_op_dsm_muid;
-         params[1].type = ACPI_TYPE_INTEGER;
-         params[1].integer.value = 0x00000100;
-         params[2].type = ACPI_TYPE_INTEGER;
-         params[2].integer.value = func;
-         params[3].type = ACPI_TYPE_BUFFER;
-         params[3].buffer.length = 4;
          /* ACPI is little endian, AABBCCDD becomes {DD,CC,BB,AA} */
          for (i = 0; i < 4; i++)
                  args_buff[i] = (arg >> i * 8) & 0xFF;
-         params[3].buffer.pointer = args_buff;

-         err = acpi_evaluate_object(handle, "_DSM", &input, &output);
-         if (err) {
-                 printk(KERN_INFO "failed to evaluate _DSM: %d\n", err);
-                 return err;
-         }
-
-         obj = (union acpi_object *)output.pointer;
-
-         if (obj->type == ACPI_TYPE_INTEGER)
-                 if (obj->integer.value == 0x80000002) {
-                         return -ENODEV;
-                 }
-
-         if (obj->type == ACPI_TYPE_BUFFER) {
-                 if (obj->buffer.length == 4 && result) {
-                         *result = 0;
+         *result = 0;
+         obj = acpi_evaluate_dsm_typed(handle, nouveau_op_dsm_muid, 0x00000100,
+                         func, &argv4, ACPI_TYPE_BUFFER);
+         if (!obj) {
+                 acpi_handle_info(handle, "failed to evaluate _DSM\n");
+                 return AE_ERROR;
+         } else {
+                 if (obj->buffer.length == 4) {
                          *result |= obj->buffer.pointer[0];
                          *result |= (obj->buffer.pointer[1] << 8);
                          *result |= (obj->buffer.pointer[2] << 16);
                          *result |= (obj->buffer.pointer[3] << 24);
                  }
+                 ACPI_FREE(obj);
          }

-         kfree(output.pointer);
          return 0;
  }

- static int nouveau_dsm(acpi_handle handle, int func, int arg, uint32_t *result)
+ static int nouveau_dsm(acpi_handle handle, int func, int arg)
  {
-         struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
-         struct acpi_object_list input;
-         union acpi_object params[4];
+         int ret = 0;
          union acpi_object *obj;
-         int err;
+         union acpi_object argv4 = {
+                 .integer.type = ACPI_TYPE_INTEGER,
+                 .integer.value = arg,
+         };

-         input.count = 4;
-         input.pointer = params;
-         params[0].type = ACPI_TYPE_BUFFER;
-         params[0].buffer.length = sizeof(nouveau_dsm_muid);
-         params[0].buffer.pointer = (char *)nouveau_dsm_muid;
-         params[1].type = ACPI_TYPE_INTEGER;
-         params[1].integer.value = 0x00000102;
-         params[2].type = ACPI_TYPE_INTEGER;
-         params[2].integer.value = func;
-         params[3].type = ACPI_TYPE_INTEGER;
-         params[3].integer.value = arg;
-
-         err = acpi_evaluate_object(handle, "_DSM", &input, &output);
-         if (err) {
-                 printk(KERN_INFO "failed to evaluate _DSM: %d\n", err);
-                 return err;
-         }
-
-         obj = (union acpi_object *)output.pointer;
-
-         if (obj->type == ACPI_TYPE_INTEGER)
+         obj = acpi_evaluate_dsm_typed(handle, nouveau_dsm_muid, 0x00000102,
+                         func, &argv4, ACPI_TYPE_INTEGER);
+         if (!obj) {
+                 acpi_handle_info(handle, "failed to evaluate _DSM\n");
+                 return AE_ERROR;
+         } else {
                  if (obj->integer.value == 0x80000002)
-                         return -ENODEV;
-
-         if (obj->type == ACPI_TYPE_BUFFER) {
-                 if (obj->buffer.length == 4 && result) {
-                         *result = 0;
-                         *result |= obj->buffer.pointer[0];
-                         *result |= (obj->buffer.pointer[1] << 8);
-                         *result |= (obj->buffer.pointer[2] << 16);
-                         *result |= (obj->buffer.pointer[3] << 24);
-                 }
+                         ret = -ENODEV;
+                 ACPI_FREE(obj);
          }

-         kfree(output.pointer);
-         return 0;
- }
-
- /* Returns 1 if a DSM function is usable and 0 otherwise */
- static int nouveau_test_dsm(acpi_handle test_handle,
-                 int (*dsm_func)(acpi_handle, int, int, uint32_t *),
-                 int sfnc)
- {
-         u32 result = 0;
-
-         /* Function 0 returns a Buffer containing available functions. The args
-          * parameter is ignored for function 0, so just put 0 in it */
-         if (dsm_func(test_handle, 0, 0, &result))
-                 return 0;
-
-         /* ACPI Spec v4 9.14.1: if bit 0 is zero, no function is supported. If
-          * the n-th bit is enabled, function n is supported */
-         return result & 1 && result & (1 << sfnc);
+         return ret;
  }

  static int nouveau_dsm_switch_mux(acpi_handle handle, int mux_id)
  {
          mxm_wmi_call_mxmx(mux_id == NOUVEAU_DSM_LED_STAMINA ? MXM_MXDS_ADAPTER_IGD : MXM_MXDS_ADAPTER_0);
          mxm_wmi_call_mxds(mux_id == NOUVEAU_DSM_LED_STAMINA ? MXM_MXDS_ADAPTER_IGD : MXM_MXDS_ADAPTER_0);
-         return nouveau_dsm(handle, NOUVEAU_DSM_LED, mux_id, NULL);
+         return nouveau_dsm(handle, NOUVEAU_DSM_LED, mux_id);
  }

  static int nouveau_dsm_set_discrete_state(acpi_handle handle, enum vga_switcheroo_state state)
···
                  arg = NOUVEAU_DSM_POWER_SPEED;
          else
                  arg = NOUVEAU_DSM_POWER_STAMINA;
-         nouveau_dsm(handle, NOUVEAU_DSM_POWER, arg, NULL);
+         nouveau_dsm(handle, NOUVEAU_DSM_POWER, arg);
          return 0;
  }
···
                  nouveau_dsm_priv.other_handle = dhandle;
                  return false;
          }
-         if (nouveau_test_dsm(dhandle, nouveau_dsm, NOUVEAU_DSM_POWER))
+         if (acpi_check_dsm(dhandle, nouveau_dsm_muid, 0x00000102,
+                         1 << NOUVEAU_DSM_POWER))
                  retval |= NOUVEAU_DSM_HAS_MUX;

-         if (nouveau_test_dsm(dhandle, nouveau_optimus_dsm,
-                         NOUVEAU_DSM_OPTIMUS_CAPS))
+         if (acpi_check_dsm(dhandle, nouveau_op_dsm_muid, 0x00000100,
+                         1 << NOUVEAU_DSM_OPTIMUS_CAPS))
                  retval |= NOUVEAU_DSM_HAS_OPT;

          if (retval & NOUVEAU_DSM_HAS_OPT) {
+1 -5
drivers/gpu/drm/radeon/radeon_acpi.c
··· 25 25 #include <linux/acpi.h> 26 26 #include <linux/slab.h> 27 27 #include <linux/power_supply.h> 28 - #include <acpi/acpi_drivers.h> 29 - #include <acpi/acpi_bus.h> 28 + #include <linux/vga_switcheroo.h> 30 29 #include <acpi/video.h> 31 - 32 30 #include <drm/drmP.h> 33 31 #include <drm/drm_crtc_helper.h> 34 32 #include "radeon.h" 35 33 #include "radeon_acpi.h" 36 34 #include "atom.h" 37 - 38 - #include <linux/vga_switcheroo.h> 39 35 40 36 #define ACPI_AC_CLASS "ac_adapter" 41 37
+6 -20
drivers/hid/i2c-hid/i2c-hid.c
··· 850 850 0xF7, 0xF6, 0xDF, 0x3C, 0x67, 0x42, 0x55, 0x45, 851 851 0xAD, 0x05, 0xB3, 0x0A, 0x3D, 0x89, 0x38, 0xDE, 852 852 }; 853 - union acpi_object params[4]; 854 - struct acpi_object_list input; 853 + union acpi_object *obj; 855 854 struct acpi_device *adev; 856 - unsigned long long value; 857 855 acpi_handle handle; 858 856 859 857 handle = ACPI_HANDLE(&client->dev); 860 858 if (!handle || acpi_bus_get_device(handle, &adev)) 861 859 return -ENODEV; 862 860 863 - input.count = ARRAY_SIZE(params); 864 - input.pointer = params; 865 - 866 - params[0].type = ACPI_TYPE_BUFFER; 867 - params[0].buffer.length = sizeof(i2c_hid_guid); 868 - params[0].buffer.pointer = i2c_hid_guid; 869 - params[1].type = ACPI_TYPE_INTEGER; 870 - params[1].integer.value = 1; 871 - params[2].type = ACPI_TYPE_INTEGER; 872 - params[2].integer.value = 1; /* HID function */ 873 - params[3].type = ACPI_TYPE_PACKAGE; 874 - params[3].package.count = 0; 875 - params[3].package.elements = NULL; 876 - 877 - if (ACPI_FAILURE(acpi_evaluate_integer(handle, "_DSM", &input, 878 - &value))) { 861 + obj = acpi_evaluate_dsm_typed(handle, i2c_hid_guid, 1, 1, NULL, 862 + ACPI_TYPE_INTEGER); 863 + if (!obj) { 879 864 dev_err(&client->dev, "device _DSM execution failed\n"); 880 865 return -ENODEV; 881 866 } 882 867 883 - pdata->hid_descriptor_address = value; 868 + pdata->hid_descriptor_address = obj->integer.value; 869 + ACPI_FREE(obj); 884 870 885 871 return 0; 886 872 }
-2
drivers/hv/vmbus_drv.c
··· 30 30 #include <linux/sysctl.h> 31 31 #include <linux/slab.h> 32 32 #include <linux/acpi.h> 33 - #include <acpi/acpi_bus.h> 34 33 #include <linux/completion.h> 35 34 #include <linux/hyperv.h> 36 35 #include <linux/kernel_stat.h> ··· 37 38 #include <asm/hypervisor.h> 38 39 #include <asm/mshyperv.h> 39 40 #include "hyperv_vmbus.h" 40 - 41 41 42 42 static struct acpi_device *hv_acpi_dev; 43 43
+1 -2
drivers/hwmon/acpi_power_meter.c
··· 30 30 #include <linux/sched.h> 31 31 #include <linux/time.h> 32 32 #include <linux/err.h> 33 - #include <acpi/acpi_drivers.h> 34 - #include <acpi/acpi_bus.h> 33 + #include <linux/acpi.h> 35 34 36 35 #define ACPI_POWER_METER_NAME "power_meter" 37 36 ACPI_MODULE_NAME(ACPI_POWER_METER_NAME);
+1 -5
drivers/hwmon/asus_atk0110.c
··· 16 16 #include <linux/dmi.h> 17 17 #include <linux/jiffies.h> 18 18 #include <linux/err.h> 19 - 20 - #include <acpi/acpi.h> 21 - #include <acpi/acpi_drivers.h> 22 - #include <acpi/acpi_bus.h> 23 - 19 + #include <linux/acpi.h> 24 20 25 21 #define ATK_HID "ATK0110" 26 22
+11
drivers/i2c/i2c-core.c
··· 104 104 static int i2c_device_uevent(struct device *dev, struct kobj_uevent_env *env) 105 105 { 106 106 struct i2c_client *client = to_i2c_client(dev); 107 + int rc; 108 + 109 + rc = acpi_device_uevent_modalias(dev, env); 110 + if (rc != -ENODEV) 111 + return rc; 107 112 108 113 if (add_uevent_var(env, "MODALIAS=%s%s", 109 114 I2C_MODULE_PREFIX, client->name)) ··· 414 409 show_modalias(struct device *dev, struct device_attribute *attr, char *buf) 415 410 { 416 411 struct i2c_client *client = to_i2c_client(dev); 412 + int len; 413 + 414 + len = acpi_device_modalias(dev, buf, PAGE_SIZE -1); 415 + if (len != -ENODEV) 416 + return len; 417 + 417 418 return sprintf(buf, "%s%s\n", I2C_MODULE_PREFIX, client->name); 418 419 } 419 420
+11 -1
drivers/ide/ide-acpi.c
··· 14 14 #include <linux/errno.h> 15 15 #include <linux/kernel.h> 16 16 #include <linux/slab.h> 17 - #include <acpi/acpi.h> 18 17 #include <linux/ide.h> 19 18 #include <linux/pci.h> 20 19 #include <linux/dmi.h> ··· 95 96 bool ide_port_acpi(ide_hwif_t *hwif) 96 97 { 97 98 return ide_noacpi == 0 && hwif->acpidata; 99 + } 100 + 101 + static acpi_handle acpi_get_child(acpi_handle handle, u64 addr) 102 + { 103 + struct acpi_device *adev; 104 + 105 + if (!handle || acpi_bus_get_device(handle, &adev)) 106 + return NULL; 107 + 108 + adev = acpi_find_child_device(adev, addr, false); 109 + return adev ? adev->handle : NULL; 98 110 } 99 111 100 112 /**
+3 -29
drivers/idle/intel_idle.c
··· 635 635 */ 636 636 static int intel_idle_cpu_init(int cpu) 637 637 { 638 - int cstate; 639 638 struct cpuidle_device *dev; 640 639 641 640 dev = per_cpu_ptr(intel_idle_cpuidle_devices, cpu); 642 - 643 - dev->state_count = 1; 644 - 645 - for (cstate = 0; cstate < CPUIDLE_STATE_MAX; ++cstate) { 646 - int num_substates, mwait_hint, mwait_cstate, mwait_substate; 647 - 648 - if (cpuidle_state_table[cstate].enter == NULL) 649 - break; 650 - 651 - if (cstate + 1 > max_cstate) { 652 - printk(PREFIX "max_cstate %d reached\n", max_cstate); 653 - break; 654 - } 655 - 656 - mwait_hint = flg2MWAIT(cpuidle_state_table[cstate].flags); 657 - mwait_cstate = MWAIT_HINT2CSTATE(mwait_hint); 658 - mwait_substate = MWAIT_HINT2SUBSTATE(mwait_hint); 659 - 660 - /* does the state exist in CPUID.MWAIT? */ 661 - num_substates = (mwait_substates >> ((mwait_cstate + 1) * 4)) 662 - & MWAIT_SUBSTATE_MASK; 663 - 664 - /* if sub-state in table is not enumerated by CPUID */ 665 - if ((mwait_substate + 1) > num_substates) 666 - continue; 667 - 668 - dev->state_count += 1; 669 - } 670 641 671 642 dev->cpu = cpu; 672 643 ··· 649 678 650 679 if (icpu->auto_demotion_disable_flags) 651 680 smp_call_function_single(cpu, auto_demotion_disable, NULL, 1); 681 + 682 + if (icpu->disable_promotion_to_c1e) 683 + smp_call_function_single(cpu, c1e_promotion_disable, NULL, 1); 652 684 653 685 return 0; 654 686 }
+1 -1
drivers/input/misc/atlas_btns.c
··· 28 28 #include <linux/init.h> 29 29 #include <linux/input.h> 30 30 #include <linux/types.h> 31 + #include <linux/acpi.h> 31 32 #include <asm/uaccess.h> 32 - #include <acpi/acpi_drivers.h> 33 33 34 34 #define ACPI_ATLAS_NAME "Atlas ACPI" 35 35 #define ACPI_ATLAS_CLASS "Atlas"
-1
drivers/iommu/amd_iommu_init.c
··· 26 26 #include <linux/msi.h> 27 27 #include <linux/amd-iommu.h> 28 28 #include <linux/export.h> 29 - #include <acpi/acpi.h> 30 29 #include <asm/pci-direct.h> 31 30 #include <asm/iommu.h> 32 31 #include <asm/gart.h>
+2 -2
drivers/iommu/intel_irq_remapping.c
··· 6 6 #include <linux/hpet.h> 7 7 #include <linux/pci.h> 8 8 #include <linux/irq.h> 9 + #include <linux/intel-iommu.h> 10 + #include <linux/acpi.h> 9 11 #include <asm/io_apic.h> 10 12 #include <asm/smp.h> 11 13 #include <asm/cpu.h> 12 - #include <linux/intel-iommu.h> 13 - #include <acpi/acpi.h> 14 14 #include <asm/irq_remapping.h> 15 15 #include <asm/pci-direct.h> 16 16 #include <asm/msidef.h>
+1 -1
drivers/mmc/core/sdio_bus.c
··· 308 308 struct mmc_host *host = func->card->host; 309 309 u64 addr = (host->slotno << 16) | func->num; 310 310 311 - acpi_preset_companion(&func->dev, ACPI_HANDLE(host->parent), addr); 311 + acpi_preset_companion(&func->dev, ACPI_COMPANION(host->parent), addr); 312 312 } 313 313 #else 314 314 static inline void sdio_acpi_set_handle(struct sdio_func *func) {}
+3
drivers/of/device.c
··· 85 85 int cplen, i; 86 86 ssize_t tsize, csize, repend; 87 87 88 + if ((!dev) || (!dev->of_node)) 89 + return -ENODEV; 90 + 88 91 /* Name & Type */ 89 92 csize = snprintf(str, len, "of:N%sT%s", dev->of_node->name, 90 93 dev->of_node->type);
+1 -1
drivers/pci/hotplug/acpiphp_glue.c
··· 494 494 495 495 acpi_bus_scan(handle); 496 496 acpi_bus_get_device(handle, &adev); 497 - if (adev) 497 + if (acpi_device_enumerated(adev)) 498 498 acpi_device_set_power(adev, ACPI_STATE_D0); 499 499 } 500 500
+1 -2
drivers/pci/hotplug/acpiphp_ibm.c
··· 31 31 #include <linux/slab.h> 32 32 #include <linux/module.h> 33 33 #include <linux/kernel.h> 34 - #include <acpi/acpi_bus.h> 35 34 #include <linux/sysfs.h> 36 35 #include <linux/kobject.h> 37 - #include <asm/uaccess.h> 38 36 #include <linux/moduleparam.h> 39 37 #include <linux/pci.h> 38 + #include <asm/uaccess.h> 40 39 41 40 #include "acpiphp.h" 42 41 #include "../pci.h"
-2
drivers/pci/hotplug/pciehp.h
··· 162 162 } 163 163 164 164 #ifdef CONFIG_ACPI 165 - #include <acpi/acpi.h> 166 - #include <acpi/acpi_bus.h> 167 165 #include <linux/pci-acpi.h> 168 166 169 167 void __init pciehp_acpi_slot_detection_init(void);
-1
drivers/pci/ioapic.c
··· 20 20 #include <linux/module.h> 21 21 #include <linux/acpi.h> 22 22 #include <linux/slab.h> 23 - #include <acpi/acpi_bus.h> 24 23 25 24 struct ioapic { 26 25 acpi_handle handle;
+6 -11
drivers/pci/pci-acpi.c
··· 12 12 #include <linux/pci.h> 13 13 #include <linux/module.h> 14 14 #include <linux/pci-aspm.h> 15 - #include <acpi/acpi.h> 16 - #include <acpi/acpi_bus.h> 17 - 18 15 #include <linux/pci-acpi.h> 19 16 #include <linux/pm_runtime.h> 20 17 #include <linux/pm_qos.h> ··· 303 306 } 304 307 305 308 /* ACPI bus type */ 306 - static int acpi_pci_find_device(struct device *dev, acpi_handle *handle) 309 + static struct acpi_device *acpi_pci_find_companion(struct device *dev) 307 310 { 308 311 struct pci_dev *pci_dev = to_pci_dev(dev); 309 - bool is_bridge; 312 + bool check_children; 310 313 u64 addr; 311 314 312 315 /* ··· 314 317 * is set only after acpi_pci_find_device() has been called for the 315 318 * given device. 316 319 */ 317 - is_bridge = pci_dev->hdr_type == PCI_HEADER_TYPE_BRIDGE 320 + check_children = pci_dev->hdr_type == PCI_HEADER_TYPE_BRIDGE 318 321 || pci_dev->hdr_type == PCI_HEADER_TYPE_CARDBUS; 319 322 /* Please ref to ACPI spec for the syntax of _ADR */ 320 323 addr = (PCI_SLOT(pci_dev->devfn) << 16) | PCI_FUNC(pci_dev->devfn); 321 - *handle = acpi_find_child(ACPI_HANDLE(dev->parent), addr, is_bridge); 322 - if (!*handle) 323 - return -ENODEV; 324 - return 0; 324 + return acpi_find_child_device(ACPI_COMPANION(dev->parent), addr, 325 + check_children); 325 326 } 326 327 327 328 static void pci_acpi_setup(struct device *dev) ··· 362 367 static struct acpi_bus_type acpi_pci_bus = { 363 368 .name = "PCI", 364 369 .match = pci_acpi_bus_match, 365 - .find_device = acpi_pci_find_device, 370 + .find_companion = acpi_pci_find_companion, 366 371 .setup = pci_acpi_setup, 367 372 .cleanup = pci_acpi_cleanup, 368 373 };
+35 -91
drivers/pci/pci-label.c
···
  #include <linux/nls.h>
  #include <linux/acpi.h>
  #include <linux/pci-acpi.h>
- #include <acpi/acpi_bus.h>
  #include "pci.h"

  #define DEVICE_LABEL_DSM 0x07
···
  };

  enum acpi_attr_enum {
-         ACPI_ATTR_NONE = 0,
          ACPI_ATTR_LABEL_SHOW,
          ACPI_ATTR_INDEX_SHOW,
  };
···
  static void dsm_label_utf16s_to_utf8s(union acpi_object *obj, char *buf)
  {
          int len;
-         len = utf16s_to_utf8s((const wchar_t *)obj->
-                         package.elements[1].string.pointer,
-                         obj->package.elements[1].string.length,
+         len = utf16s_to_utf8s((const wchar_t *)obj->string.pointer,
+                         obj->string.length,
                          UTF16_LITTLE_ENDIAN,
                          buf, PAGE_SIZE);
          buf[len] = '\n';
  }

  static int
- dsm_get_label(acpi_handle handle, int func,
-                 struct acpi_buffer *output,
-                 char *buf, enum acpi_attr_enum attribute)
+ dsm_get_label(struct device *dev, char *buf, enum acpi_attr_enum attr)
  {
-         struct acpi_object_list input;
-         union acpi_object params[4];
-         union acpi_object *obj;
-         int len = 0;
+         acpi_handle handle;
+         union acpi_object *obj, *tmp;
+         int len = -1;

-         int err;
-
-         input.count = 4;
-         input.pointer = params;
-         params[0].type = ACPI_TYPE_BUFFER;
-         params[0].buffer.length = sizeof(device_label_dsm_uuid);
-         params[0].buffer.pointer = (char *)device_label_dsm_uuid;
-         params[1].type = ACPI_TYPE_INTEGER;
-         params[1].integer.value = 0x02;
-         params[2].type = ACPI_TYPE_INTEGER;
-         params[2].integer.value = func;
-         params[3].type = ACPI_TYPE_PACKAGE;
-         params[3].package.count = 0;
-         params[3].package.elements = NULL;
-
-         err = acpi_evaluate_object(handle, "_DSM", &input, output);
-         if (err)
+         handle = ACPI_HANDLE(dev);
+         if (!handle)
                  return -1;

-         obj = (union acpi_object *)output->pointer;
+         obj = acpi_evaluate_dsm(handle, device_label_dsm_uuid, 0x2,
+                         DEVICE_LABEL_DSM, NULL);
+         if (!obj)
+                 return -1;

-         switch (obj->type) {
-         case ACPI_TYPE_PACKAGE:
-                 if (obj->package.count != 2)
-                         break;
-                 len = obj->package.elements[0].integer.value;
-                 if (buf) {
-                         if (attribute == ACPI_ATTR_INDEX_SHOW)
-                                 scnprintf(buf, PAGE_SIZE, "%llu\n",
-                                         obj->package.elements[0].integer.value);
-                         else if (attribute == ACPI_ATTR_LABEL_SHOW)
-                                 dsm_label_utf16s_to_utf8s(obj, buf);
-                         kfree(output->pointer);
-                         return strlen(buf);
-                 }
-                 kfree(output->pointer);
-                 return len;
-                 break;
-         default:
-                 kfree(output->pointer);
+         tmp = obj->package.elements;
+         if (obj->type == ACPI_TYPE_PACKAGE && obj->package.count == 2 &&
+             tmp[0].type == ACPI_TYPE_INTEGER &&
+             tmp[1].type == ACPI_TYPE_STRING) {
+                 /*
+                  * The second string element is optional even when
+                  * this _DSM is implemented; when not implemented,
+                  * this entry must return a null string.
+                  */
+                 if (attr == ACPI_ATTR_INDEX_SHOW)
+                         scnprintf(buf, PAGE_SIZE, "%llu\n", tmp->integer.value);
+                 else if (attr == ACPI_ATTR_LABEL_SHOW)
+                         dsm_label_utf16s_to_utf8s(tmp + 1, buf);
+                 len = strlen(buf) > 0 ? strlen(buf) : -1;
          }
-         return -1;
+
+         ACPI_FREE(obj);
+
+         return len;
  }

  static bool
  device_has_dsm(struct device *dev)
  {
          acpi_handle handle;
-         struct acpi_buffer output = {ACPI_ALLOCATE_BUFFER, NULL};

          handle = ACPI_HANDLE(dev);
-
          if (!handle)
-                 return FALSE;
+                 return false;

-         if (dsm_get_label(handle, DEVICE_LABEL_DSM, &output, NULL,
-                         ACPI_ATTR_NONE) > 0)
-                 return TRUE;
-
-         return FALSE;
+         return !!acpi_check_dsm(handle, device_label_dsm_uuid, 0x2,
+                         1 << DEVICE_LABEL_DSM);
  }

  static umode_t
···
  static ssize_t
  acpilabel_show(struct device *dev, struct device_attribute *attr, char *buf)
  {
-         struct acpi_buffer output = {ACPI_ALLOCATE_BUFFER, NULL};
-         acpi_handle handle;
-         int length;
-
-         handle = ACPI_HANDLE(dev);
-
-         if (!handle)
-                 return -1;
-
-         length = dsm_get_label(handle, DEVICE_LABEL_DSM,
-                         &output, buf, ACPI_ATTR_LABEL_SHOW);
-
-         if (length < 1)
-                 return -1;
-
-         return length;
+         return dsm_get_label(dev, buf, ACPI_ATTR_LABEL_SHOW);
  }

  static ssize_t
  acpiindex_show(struct device *dev, struct device_attribute *attr, char *buf)
  {
-         struct acpi_buffer output = {ACPI_ALLOCATE_BUFFER, NULL};
-         acpi_handle handle;
-         int length;
-
-         handle = ACPI_HANDLE(dev);
-
-         if (!handle)
-                 return -1;
-
-         length = dsm_get_label(handle, DEVICE_LABEL_DSM,
-                         &output, buf, ACPI_ATTR_INDEX_SHOW);
-
-         if (length < 0)
-                 return -1;
-
-         return length;
-
+         return dsm_get_label(dev, buf, ACPI_ATTR_INDEX_SHOW);
  }

  static struct device_attribute acpi_attr_label = {
drivers/platform/x86/acer-wmi.c  (-2)
···
 #include <linux/slab.h>
 #include <linux/input.h>
 #include <linux/input/sparse-keymap.h>
-
-#include <acpi/acpi_drivers.h>
 #include <acpi/video.h>
 
 MODULE_AUTHOR("Carlos Corbacho");

drivers/platform/x86/asus-laptop.c  (+1 -2)
···
 #include <linux/rfkill.h>
 #include <linux/slab.h>
 #include <linux/dmi.h>
-#include <acpi/acpi_drivers.h>
-#include <acpi/acpi_bus.h>
+#include <linux/acpi.h>
 
 #define ASUS_LAPTOP_VERSION "0.42"

drivers/platform/x86/asus-wmi.c  (+1 -2)
···
 #include <linux/seq_file.h>
 #include <linux/platform_device.h>
 #include <linux/thermal.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 #include <acpi/video.h>
 
 #include "asus-wmi.h"

drivers/platform/x86/classmate-laptop.c  (+1 -2)
···
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/workqueue.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 #include <linux/backlight.h>
 #include <linux/input.h>
 #include <linux/rfkill.h>
 
 MODULE_LICENSE("GPL");
-
 
 struct cmpc_accel {
 	int sensitivity;

drivers/platform/x86/dell-wmi-aio.c  (-1)
···
 #include <linux/types.h>
 #include <linux/input.h>
 #include <linux/input/sparse-keymap.h>
-#include <acpi/acpi_drivers.h>
 #include <linux/acpi.h>
 #include <linux/string.h>

drivers/platform/x86/dell-wmi.c  (-1)
···
 #include <linux/types.h>
 #include <linux/input.h>
 #include <linux/input/sparse-keymap.h>
-#include <acpi/acpi_drivers.h>
 #include <linux/acpi.h>
 #include <linux/string.h>
 #include <linux/dmi.h>
drivers/platform/x86/eeepc-laptop.c  (+1 -2)
···
 #include <linux/hwmon.h>
 #include <linux/hwmon-sysfs.h>
 #include <linux/slab.h>
-#include <acpi/acpi_drivers.h>
-#include <acpi/acpi_bus.h>
+#include <linux/acpi.h>
 #include <linux/uaccess.h>
 #include <linux/input.h>
 #include <linux/input/sparse-keymap.h>

drivers/platform/x86/eeepc-wmi.c  (+1 -1)
···
 #include <linux/input/sparse-keymap.h>
 #include <linux/dmi.h>
 #include <linux/fb.h>
-#include <acpi/acpi_bus.h>
+#include <linux/acpi.h>
 
 #include "asus-wmi.h"

drivers/platform/x86/hp_accel.c  (+1 -1)
···
 #include <linux/uaccess.h>
 #include <linux/leds.h>
 #include <linux/atomic.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 #include "../../misc/lis3lv02d/lis3lv02d.h"
 
 #define DRIVER_NAME "hp_accel"

drivers/platform/x86/ideapad-laptop.c  (+1 -2)
···
 #include <linux/module.h>
 #include <linux/init.h>
 #include <linux/types.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 #include <linux/rfkill.h>
 #include <linux/platform_device.h>
 #include <linux/input.h>

drivers/platform/x86/intel-rst.c  (+1 -1)
···
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/slab.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 
 MODULE_LICENSE("GPL");

drivers/platform/x86/intel-smartconnect.c  (+1 -1)
···
 #include <linux/init.h>
 #include <linux/module.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 
 MODULE_LICENSE("GPL");
drivers/platform/x86/intel_menlow.c  (+1 -3)
···
 #include <linux/types.h>
 #include <linux/pci.h>
 #include <linux/pm.h>
-
 #include <linux/thermal.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 
 MODULE_AUTHOR("Thomas Sujith");
 MODULE_AUTHOR("Zhang Rui");

drivers/platform/x86/intel_oaktrail.c  (-3)
···
 #include <linux/platform_device.h>
 #include <linux/dmi.h>
 #include <linux/rfkill.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
-
 
 #define DRIVER_NAME "intel_oaktrail"
 #define DRIVER_VERSION "0.4ac1"

drivers/platform/x86/mxm-wmi.c  (+1 -2)
···
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/init.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 
 MODULE_AUTHOR("Dave Airlie");
 MODULE_DESCRIPTION("MXM WMI Driver");

drivers/platform/x86/panasonic-laptop.c  (+1 -3)
···
 #include <linux/seq_file.h>
 #include <linux/uaccess.h>
 #include <linux/slab.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 #include <linux/input.h>
 #include <linux/input/sparse-keymap.h>
-
 
 #ifndef ACPI_HOTKEY_COMPONENT
 #define ACPI_HOTKEY_COMPONENT 0x10000000

drivers/platform/x86/pvpanic.c  (+1 -2)
···
 #include <linux/module.h>
 #include <linux/init.h>
 #include <linux/types.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 
 MODULE_AUTHOR("Hu Tao <hutao@cn.fujitsu.com>");
 MODULE_DESCRIPTION("pvpanic device driver");

drivers/platform/x86/samsung-q10.c  (+1 -1)
···
 #include <linux/platform_device.h>
 #include <linux/backlight.h>
 #include <linux/dmi.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 
 #define SAMSUNGQ10_BL_MAX_INTENSITY 7

drivers/platform/x86/sony-laptop.c  (+1 -3)
···
 #include <linux/workqueue.h>
 #include <linux/acpi.h>
 #include <linux/slab.h>
-#include <acpi/acpi_drivers.h>
-#include <acpi/acpi_bus.h>
-#include <asm/uaccess.h>
 #include <linux/sonypi.h>
 #include <linux/sony-laptop.h>
 #include <linux/rfkill.h>
···
 #include <linux/poll.h>
 #include <linux/miscdevice.h>
 #endif
+#include <asm/uaccess.h>
 
 #define dprintk(fmt, ...) \
 do { \

drivers/platform/x86/tc1100-wmi.c  (+1 -3)
···
 #include <linux/slab.h>
 #include <linux/init.h>
 #include <linux/types.h>
-#include <acpi/acpi.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 #include <linux/platform_device.h>
 
 #define GUID "C364AC71-36DB-495A-8494-B439D472A505"
drivers/platform/x86/thinkpad_acpi.c  (+4 -10)
···
 #include <linux/freezer.h>
 #include <linux/delay.h>
 #include <linux/slab.h>
-
 #include <linux/nvram.h>
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
···
 #include <linux/input.h>
 #include <linux/leds.h>
 #include <linux/rfkill.h>
-#include <asm/uaccess.h>
-
 #include <linux/dmi.h>
 #include <linux/jiffies.h>
 #include <linux/workqueue.h>
-
+#include <linux/acpi.h>
+#include <linux/pci_ids.h>
+#include <linux/thinkpad_acpi.h>
 #include <sound/core.h>
 #include <sound/control.h>
 #include <sound/initval.h>
-
-#include <acpi/acpi_drivers.h>
-
-#include <linux/pci_ids.h>
-
-#include <linux/thinkpad_acpi.h>
+#include <asm/uaccess.h>
 
 /* ThinkPad CMOS commands */
 #define TP_CMOS_VOLUME_DOWN	0

drivers/platform/x86/toshiba_acpi.c  (+1 -3)
···
 #include <linux/slab.h>
 #include <linux/workqueue.h>
 #include <linux/i8042.h>
-
+#include <linux/acpi.h>
 #include <asm/uaccess.h>
-
-#include <acpi/acpi_drivers.h>
 
 MODULE_AUTHOR("John Belmonte");
 MODULE_DESCRIPTION("Toshiba Laptop ACPI Extras Driver");

drivers/platform/x86/toshiba_bluetooth.c  (+1 -3)
···
 #include <linux/module.h>
 #include <linux/init.h>
 #include <linux/types.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 
 MODULE_AUTHOR("Jes Sorensen <Jes.Sorensen@gmail.com>");
 MODULE_DESCRIPTION("Toshiba Laptop ACPI Bluetooth Enable Driver");
 MODULE_LICENSE("GPL");
-
 
 static int toshiba_bt_rfkill_add(struct acpi_device *device);
 static int toshiba_bt_rfkill_remove(struct acpi_device *device);

drivers/platform/x86/wmi.c  (-2)
···
 #include <linux/acpi.h>
 #include <linux/slab.h>
 #include <linux/module.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
 
 ACPI_MODULE_NAME("wmi");
 MODULE_AUTHOR("Carlos Corbacho");

drivers/platform/x86/xo15-ebook.c  (+1 -2)
···
 #include <linux/init.h>
 #include <linux/types.h>
 #include <linux/input.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 
 #define MODULE_NAME "xo15-ebook"
drivers/pnp/card.c  (+1)
···
 	error = device_register(&card->dev);
 	if (error) {
 		dev_err(&card->dev, "could not register (err=%d)\n", error);
+		put_device(&card->dev);
 		return error;
 	}
drivers/pnp/pnpacpi/core.c  (+17 -16)
···
 #include <linux/pnp.h>
 #include <linux/slab.h>
 #include <linux/mod_devicetable.h>
-#include <acpi/acpi_bus.h>
 
 #include "../base.h"
 #include "pnpacpi.h"
···
 	struct pnp_dev *dev;
 	char *pnpid;
 	struct acpi_hardware_id *id;
+	int error;
 
 	/* Skip devices that are already bound */
 	if (device->physical_node_count)
···
 	/* clear out the damaged flags */
 	if (!dev->active)
 		pnp_init_resources(dev);
-	pnp_add_device(dev);
+
+	error = pnp_add_device(dev);
+	if (error) {
+		put_device(&dev->dev);
+		return error;
+	}
+
 	num++;
 
-	return AE_OK;
+	return 0;
 }
 
 static acpi_status __init pnpacpi_add_device_handler(acpi_handle handle,
···
 	       && compare_pnp_id(pnp->id, acpi_device_hid(acpi));
 }
 
-static int __init acpi_pnp_find_device(struct device *dev, acpi_handle *handle)
+static struct acpi_device * __init acpi_pnp_find_companion(struct device *dev)
 {
-	struct device *adev;
-	struct acpi_device *acpi;
+	dev = bus_find_device(&acpi_bus_type, NULL, to_pnp_dev(dev),
+			      acpi_pnp_match);
+	if (!dev)
+		return NULL;
 
-	adev = bus_find_device(&acpi_bus_type, NULL,
-			       to_pnp_dev(dev), acpi_pnp_match);
-	if (!adev)
-		return -ENODEV;
-
-	acpi = to_acpi_device(adev);
-	*handle = acpi->handle;
-	put_device(adev);
-	return 0;
+	put_device(dev);
+	return to_acpi_device(dev);
 }
 
 /* complete initialization of a PNPACPI device includes having
···
 static struct acpi_bus_type __initdata acpi_pnp_bus = {
 	.name	     = "PNP",
 	.match	     = acpi_pnp_bus_match,
-	.find_device = acpi_pnp_find_device,
+	.find_companion = acpi_pnp_find_companion,
 };
 
 int pnpacpi_disabled __initdata;
drivers/pnp/pnpacpi/pnpacpi.h  (-1)
···
 #ifndef ACPI_PNP_H
 #define ACPI_PNP_H
 
-#include <acpi/acpi_bus.h>
 #include <linux/acpi.h>
 #include <linux/pnp.h>

drivers/pnp/pnpbios/core.c  (+9 -3)
···
 	struct list_head *pos;
 	struct pnp_dev *dev;
 	char id[8];
+	int error;
 
 	/* check if the device is already added */
 	list_for_each(pos, &pnpbios_protocol.devices) {
 		dev = list_entry(pos, struct pnp_dev, protocol_list);
 		if (dev->number == node->handle)
-			return -1;
+			return -EEXIST;
 	}
 
 	pnp_eisa_id_to_string(node->eisa_id & PNP_EISA_ID_MASK, id);
 	dev = pnp_alloc_dev(&pnpbios_protocol, node->handle, id);
 	if (!dev)
-		return -1;
+		return -ENOMEM;
 
 	pnpbios_parse_data_stream(dev, node);
 	dev->active = pnp_is_active(dev);
···
 	if (!dev->active)
 		pnp_init_resources(dev);
 
-	pnp_add_device(dev);
+	error = pnp_add_device(dev);
+	if (error) {
+		put_device(&dev->dev);
+		return error;
+	}
+
 	pnpbios_interface_attach_device(node);
 
 	return 0;
drivers/pnp/resource.c  (+1 -1)
···
  * option registration
  */
 
-struct pnp_option *pnp_build_option(struct pnp_dev *dev, unsigned long type,
+static struct pnp_option *pnp_build_option(struct pnp_dev *dev, unsigned long type,
 				    unsigned int option_flags)
 {
 	struct pnp_option *option;

drivers/sfi/sfi_acpi.c  (+1 -3)
···
 #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
 
 #include <linux/kernel.h>
-#include <acpi/acpi.h>
-
-#include <linux/sfi.h>
+#include <linux/sfi_acpi.h>
 #include "sfi_core.h"
drivers/spi/spi.c  (+10)
···
 modalias_show(struct device *dev, struct device_attribute *a, char *buf)
 {
 	const struct spi_device	*spi = to_spi_device(dev);
+	int len;
+
+	len = acpi_device_modalias(dev, buf, PAGE_SIZE - 1);
+	if (len != -ENODEV)
+		return len;
 
 	return sprintf(buf, "%s%s\n", SPI_MODULE_PREFIX, spi->modalias);
 }
···
 static int spi_uevent(struct device *dev, struct kobj_uevent_env *env)
 {
 	const struct spi_device		*spi = to_spi_device(dev);
+	int rc;
+
+	rc = acpi_device_uevent_modalias(dev, env);
+	if (rc != -ENODEV)
+		return rc;
 
 	add_uevent_var(env, "MODALIAS=%s%s", SPI_MODULE_PREFIX, spi->modalias);
 	return 0;
drivers/staging/quickstart/quickstart.c  (+1 -1)
···
 #include <linux/module.h>
 #include <linux/init.h>
 #include <linux/types.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 #include <linux/platform_device.h>
 #include <linux/input.h>

drivers/thermal/samsung/exynos_tmu_data.c  (+6 -6)
···
 #define EXYNOS4412_TMU_DATA \
 	.threshold_falling = 10, \
-	.trigger_levels[0] = 85, \
-	.trigger_levels[1] = 103, \
+	.trigger_levels[0] = 70, \
+	.trigger_levels[1] = 95, \
 	.trigger_levels[2] = 110, \
 	.trigger_levels[3] = 120, \
 	.trigger_enable[0] = true, \
···
 	.second_point_trim = 85, \
 	.default_temp_offset = 50, \
 	.freq_tab[0] = { \
-		.freq_clip_max = 800 * 1000, \
-		.temp_level = 85, \
+		.freq_clip_max = 1400 * 1000, \
+		.temp_level = 70, \
 	}, \
 	.freq_tab[1] = { \
-		.freq_clip_max = 200 * 1000, \
-		.temp_level = 103, \
+		.freq_clip_max = 400 * 1000, \
+		.temp_level = 95, \
 	}, \
 	.freq_tab_count = 2, \
 	.registers = &exynos4412_tmu_registers, \
drivers/usb/core/usb-acpi.c  (+20 -21)
···
 #include <linux/acpi.h>
 #include <linux/pci.h>
 #include <linux/usb/hcd.h>
-#include <acpi/acpi_bus.h>
 
 #include "usb.h"
···
 	return ret;
 }
 
-static int usb_acpi_find_device(struct device *dev, acpi_handle *handle)
+static struct acpi_device *usb_acpi_find_companion(struct device *dev)
 {
 	struct usb_device *udev;
 	acpi_handle *parent_handle;
···
 			break;
 		}
 
-		return -ENODEV;
+		return NULL;
 	}
 
 	/* root hub's parent is the usb hcd. */
-	parent_handle = ACPI_HANDLE(dev->parent);
-	*handle = acpi_get_child(parent_handle, udev->portnum);
-	if (!*handle)
-		return -ENODEV;
-	return 0;
+	return acpi_find_child_device(ACPI_COMPANION(dev->parent),
+				      udev->portnum, false);
 	} else if (is_usb_port(dev)) {
+		struct acpi_device *adev = NULL;
+
 		sscanf(dev_name(dev), "port%d", &port_num);
 		/* Get the struct usb_device point of port's hub */
 		udev = to_usb_device(dev->parent->parent);
···
 			raw_port_num = usb_hcd_find_raw_port_number(hcd,
 								    port_num);
-			*handle = acpi_get_child(ACPI_HANDLE(&udev->dev),
-						 raw_port_num);
-			if (!*handle)
-				return -ENODEV;
+			adev = acpi_find_child_device(ACPI_COMPANION(&udev->dev),
+						      raw_port_num, false);
+			if (!adev)
+				return NULL;
 		} else {
 			parent_handle =
 				usb_get_hub_port_acpi_handle(udev->parent,
 							     udev->portnum);
 			if (!parent_handle)
-				return -ENODEV;
+				return NULL;
 
-			*handle = acpi_get_child(parent_handle, port_num);
-			if (!*handle)
-				return -ENODEV;
+			acpi_bus_get_device(parent_handle, &adev);
+			adev = acpi_find_child_device(adev, port_num, false);
+			if (!adev)
+				return NULL;
 		}
-		usb_acpi_check_port_connect_type(udev, *handle, port_num);
-	} else
-		return -ENODEV;
+
+		usb_acpi_check_port_connect_type(udev, adev->handle, port_num);
+		return adev;
+	}
 
-	return 0;
+	return NULL;
 }
···
 static struct acpi_bus_type usb_acpi_bus = {
 	.name = "USB",
 	.match = usb_acpi_bus_match,
-	.find_device = usb_acpi_find_device,
+	.find_companion = usb_acpi_find_companion,
 };
 
 int usb_acpi_register(void)
drivers/xen/xen-acpi-cpuhotplug.c  (+5 -6)
···
 #include <linux/cpu.h>
 #include <linux/acpi.h>
 #include <linux/uaccess.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
 #include <acpi/processor.h>
-
 #include <xen/acpi.h>
 #include <xen/interface/platform.h>
 #include <asm/xen/hypercall.h>
···
 		if (!is_processor_present(handle))
 			break;
 
-		if (!acpi_bus_get_device(handle, &device))
+		acpi_bus_get_device(handle, &device);
+		if (acpi_device_enumerated(device))
 			break;
 
 		result = acpi_bus_scan(handle);
···
 			pr_err(PREFIX "Unable to add the device\n");
 			break;
 		}
-		result = acpi_bus_get_device(handle, &device);
-		if (result) {
+		device = NULL;
+		acpi_bus_get_device(handle, &device);
+		if (!acpi_device_enumerated(device)) {
 			pr_err(PREFIX "Missing device object\n");
 			break;
 		}

drivers/xen/xen-acpi-memhotplug.c  (+4 -4)
···
 #include <linux/init.h>
 #include <linux/types.h>
 #include <linux/acpi.h>
-#include <acpi/acpi_drivers.h>
 #include <xen/acpi.h>
 #include <xen/interface/platform.h>
 #include <asm/xen/hypercall.h>
···
 	acpi_scan_lock_acquire();
 
 	acpi_bus_get_device(handle, &device);
-	if (device)
+	if (acpi_device_enumerated(device))
 		goto end;
 
 	/*
···
 		result = -EINVAL;
 		goto out;
 	}
-	result = acpi_bus_get_device(handle, &device);
-	if (result) {
+	device = NULL;
+	acpi_bus_get_device(handle, &device);
+	if (!acpi_device_enumerated(device)) {
 		pr_warn(PREFIX "Missing device object\n");
 		result = -EINVAL;
 		goto out;

drivers/xen/xen-acpi-pad.c  (+2 -3)
···
 #include <linux/kernel.h>
 #include <linux/types.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
-#include <asm/xen/hypercall.h>
+#include <linux/acpi.h>
 #include <xen/interface/version.h>
 #include <xen/xen-ops.h>
+#include <asm/xen/hypercall.h>
 
 #define ACPI_PROCESSOR_AGGREGATOR_CLASS "acpi_pad"
 #define ACPI_PROCESSOR_AGGREGATOR_DEVICE_NAME "Processor Aggregator"

drivers/xen/xen-acpi-processor.c  (+1 -3)
···
 #include <linux/module.h>
 #include <linux/types.h>
 #include <linux/syscore_ops.h>
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>
+#include <linux/acpi.h>
 #include <acpi/processor.h>
-
 #include <xen/xen.h>
 #include <xen/interface/platform.h>
 #include <asm/xen/hypercall.h>
include/acpi/acpi_bus.h  (+46 -18)
···
 #include <linux/device.h>
 
-#include <acpi/acpi.h>
-
 /* TBD: Make dynamic */
 #define ACPI_MAX_HANDLES	10
 struct acpi_handle_list {
···
 bool acpi_bay_match(acpi_handle handle);
 bool acpi_dock_match(acpi_handle handle);
 
+bool acpi_check_dsm(acpi_handle handle, const u8 *uuid, int rev, u64 funcs);
+union acpi_object *acpi_evaluate_dsm(acpi_handle handle, const u8 *uuid,
+			int rev, int func, union acpi_object *argv4);
+
+static inline union acpi_object *
+acpi_evaluate_dsm_typed(acpi_handle handle, const u8 *uuid, int rev, int func,
+			union acpi_object *argv4, acpi_object_type type)
+{
+	union acpi_object *obj;
+
+	obj = acpi_evaluate_dsm(handle, uuid, rev, func, argv4);
+	if (obj && obj->type != type) {
+		ACPI_FREE(obj);
+		obj = NULL;
+	}
+
+	return obj;
+}
+
+#define ACPI_INIT_DSM_ARGV4(cnt, eles)			\
+	{						\
+	  .package.type = ACPI_TYPE_PACKAGE,		\
+	  .package.count = (cnt),			\
+	  .package.elements = (eles)			\
+	}
+
 #ifdef CONFIG_ACPI
 
 #include <linux/proc_fs.h>
···
  * -----------------
  */
 
-enum acpi_hotplug_mode {
-	AHM_GENERIC = 0,
-	AHM_CONTAINER,
-	AHM_COUNT
-};
-
 struct acpi_hotplug_profile {
 	struct kobject kobj;
+	int (*scan_dependent)(struct acpi_device *adev);
 	bool enabled:1;
-	bool ignore:1;
-	enum acpi_hotplug_mode mode;
+	bool demand_offline:1;
 };
 
 static inline struct acpi_hotplug_profile *to_acpi_hotplug_profile(
···
 	u32 ejectable:1;
 	u32 power_manageable:1;
 	u32 match_driver:1;
+	u32 initialized:1;
+	u32 visited:1;
 	u32 no_hotplug:1;
-	u32 reserved:26;
+	u32 reserved:24;
 };
 
 /* File System */
···
 	struct list_head children;
 	struct list_head node;
 	struct list_head wakeup_list;
+	struct list_head del_list;
 	struct acpi_device_status status;
 	struct acpi_device_flags flags;
 	struct acpi_device_pnp pnp;
···
 #define to_acpi_device(d)	container_of(d, struct acpi_device, dev)
 #define to_acpi_driver(d)	container_of(d, struct acpi_driver, drv)
 
+static inline void acpi_set_device_status(struct acpi_device *adev, u32 sta)
+{
+	*((u32 *)&adev->status) = sta;
+}
+
 /* acpi_device.dev.bus == &acpi_bus_type */
 extern struct bus_type acpi_bus_type;
···
 int acpi_create_dir(struct acpi_device *);
 void acpi_remove_dir(struct acpi_device *);
 
+static inline bool acpi_device_enumerated(struct acpi_device *adev)
+{
+	return adev && adev->flags.initialized && adev->flags.visited;
+}
+
 typedef void (*acpi_hp_callback)(void *data, u32 src);
 
 acpi_status acpi_hotplug_execute(acpi_hp_callback func, void *data, u32 src);
···
 	struct list_head list;
 	const char *name;
 	bool (*match)(struct device *dev);
-	int (*find_device) (struct device *, acpi_handle *);
+	struct acpi_device * (*find_companion)(struct device *);
 	void (*setup)(struct device *);
 	void (*cleanup)(struct device *);
 };
···
 };
 
 /* helper */
-acpi_handle acpi_find_child(acpi_handle, u64, bool);
-static inline acpi_handle acpi_get_child(acpi_handle handle, u64 addr)
-{
-	return acpi_find_child(handle, addr, false);
-}
-void acpi_preset_companion(struct device *dev, acpi_handle parent, u64 addr);
+
+struct acpi_device *acpi_find_child_device(struct acpi_device *parent,
+					   u64 address, bool check_children);
 int acpi_is_root_bridge(acpi_handle);
 struct acpi_pci_root *acpi_pci_find_root(acpi_handle handle);
include/acpi/acpi_drivers.h  (-3)
···
 #ifndef __ACPI_DRIVERS_H__
 #define __ACPI_DRIVERS_H__
 
-#include <linux/acpi.h>
-#include <acpi/acpi_bus.h>
-
 #define ACPI_MAX_STRING	80
include/acpi/acpixf.h  (+13 -26)
···
 /* Current ACPICA subsystem version in YYYYMMDD format */
 
-#define ACPI_CA_VERSION                 0x20131115
+#define ACPI_CA_VERSION                 0x20131218
 
 #include <acpi/acconfig.h>
 #include <acpi/actypes.h>
···
 #include <acpi/acbuffer.h>
 
 extern u8 acpi_gbl_permanent_mmap;
-extern u32 acpi_rsdt_forced;
 
 /*
  * Globals that are publically available
···
 /* ACPICA runtime options */
 
-extern u8 acpi_gbl_enable_interpreter_slack;
 extern u8 acpi_gbl_all_methods_serialized;
-extern u8 acpi_gbl_create_osi_method;
-extern u8 acpi_gbl_use_default_register_widths;
-extern acpi_name acpi_gbl_trace_method_name;
-extern u32 acpi_gbl_trace_flags;
-extern bool acpi_gbl_enable_aml_debug_object;
 extern u8 acpi_gbl_copy_dsdt_locally;
-extern u8 acpi_gbl_truncate_io_addresses;
+extern u8 acpi_gbl_create_osi_method;
 extern u8 acpi_gbl_disable_auto_repair;
 extern u8 acpi_gbl_disable_ssdt_table_load;
+extern u8 acpi_gbl_do_not_use_xsdt;
+extern bool acpi_gbl_enable_aml_debug_object;
+extern u8 acpi_gbl_enable_interpreter_slack;
+extern u32 acpi_gbl_trace_flags;
+extern acpi_name acpi_gbl_trace_method_name;
+extern u8 acpi_gbl_truncate_io_addresses;
+extern u8 acpi_gbl_use32_bit_fadt_addresses;
+extern u8 acpi_gbl_use_default_register_widths;
 
 /*
  * Hardware-reduced prototypes. All interfaces that use these macros will
···
  * Miscellaneous global interfaces
  */
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable(void))
-
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable(void))
 #ifdef ACPI_FUTURE_USAGE
-	acpi_status acpi_subsystem_status(void);
+acpi_status acpi_subsystem_status(void);
 #endif
 
 #ifdef ACPI_FUTURE_USAGE
···
 				acpi_install_sci_handler(acpi_sci_handler
 							 address,
 							 void *context))
-
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
 				acpi_remove_sci_handler(acpi_sci_handler
 							address))
-
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
 				acpi_install_global_event_handler
 				(acpi_gbl_event_handler handler,
 				 void *context))
-
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
 				acpi_install_fixed_event_handler(u32
 								 acpi_event,
···
 								 handler,
 								 void
 								 *context))
-
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
 				acpi_remove_fixed_event_handler(u32 acpi_event,
 								acpi_event_handler
 								handler))
-
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
 				acpi_install_gpe_handler(acpi_handle
 							 gpe_device,
···
 							 acpi_gpe_handler
 							 address,
 							 void *context))
-
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
 				acpi_remove_gpe_handler(acpi_handle gpe_device,
 							u32 gpe_number,
 							acpi_gpe_handler
 							address))
 acpi_status acpi_install_notify_handler(acpi_handle device, u32 handler_type,
-					 acpi_notify_handler handler,
-					 void *context);
+					acpi_notify_handler handler,
+					void *context);
 
 acpi_status
 acpi_remove_notify_handler(acpi_handle device,
···
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
 				acpi_disable_event(u32 event, u32 flags))
-
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_clear_event(u32 event))
 
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
···
 						 parent_device,
 						 acpi_handle gpe_device,
 						 u32 gpe_number))
-
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
 				acpi_set_gpe_wake_mask(acpi_handle gpe_device,
 						       u32 gpe_number,
 						       u8 action))
-
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
 				acpi_get_gpe_status(acpi_handle gpe_device,
 						    u32 gpe_number,
 						    acpi_event_status
 						    *event_status))
-
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable_all_gpes(void))
-
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_runtime_gpes(void))
 
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
···
 						  *gpe_block_address,
 						  u32 register_count,
 						  u32 interrupt_number))
-
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
 				acpi_remove_gpe_block(acpi_handle gpe_device))
 
···
 #ifdef ACPI_FUTURE_USAGE
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
 				acpi_get_timer_resolution(u32 *resolution))
-
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_get_timer(u32 *ticks))
 
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status

include/acpi/actbl.h  (+3)
···
 	u64 table_offset_entry[1];	/* Array of pointers to ACPI tables */
 };
 
+#define ACPI_RSDT_ENTRY_SIZE        (sizeof (u32))
+#define ACPI_XSDT_ENTRY_SIZE        (sizeof (u64))
+
 /*******************************************************************************
  *
  * FACS - Firmware ACPI Control Structure (FACS)

include/acpi/actbl2.h  (+5)
···
 	u32 info_count;
 };
 
+struct acpi_dbg2_header {
+	u32 info_offset;
+	u32 info_count;
+};
+
 /* Debug Device Information Subtable */
 
 struct acpi_dbg2_device {

include/acpi/actbl3.h  (+12 -3)
···
 struct acpi_table_pcct {
 	struct acpi_table_header header;	/* Common ACPI table header */
 	u32 flags;
-	u32 latency;
-	u32 reserved;
+	u64 reserved;
 };
 
 /* Values for Flags field above */
 
 #define ACPI_PCCT_DOORBELL              1
 
+/* Values for subtable type in struct acpi_subtable_header */
+
+enum acpi_pcct_type {
+	ACPI_PCCT_TYPE_GENERIC_SUBSPACE = 0,
+	ACPI_PCCT_TYPE_RESERVED = 1	/* 1 and greater are reserved */
+};
+
 /*
- * PCCT subtables
+ * PCCT Subtables, correspond to Type in struct acpi_subtable_header
  */
 
 /* 0: Generic Communications Subspace */
···
 	struct acpi_generic_address doorbell_register;
 	u64 preserve_mask;
 	u64 write_mask;
+	u32 latency;
+	u32 max_access_rate;
+	u16 min_turnaround_time;
 };
include/acpi/actypes.h  (+2 -10)
···
  * Miscellaneous common Data Structures used by the interfaces
  */
 #define ACPI_NO_BUFFER              0
-#define ACPI_ALLOCATE_BUFFER        (acpi_size) (-1)
-#define ACPI_ALLOCATE_LOCAL_BUFFER  (acpi_size) (-2)
+#define ACPI_ALLOCATE_BUFFER        (acpi_size) (-1)	/* Let ACPICA allocate buffer */
+#define ACPI_ALLOCATE_LOCAL_BUFFER  (acpi_size) (-2)	/* For internal use only (enables tracking) */
 
 struct acpi_buffer {
 	acpi_size length;	/* Length in bytes of the buffer */
 	void *pointer;		/* pointer to buffer */
 };
-
-/*
- * Free a buffer created in an struct acpi_buffer via ACPI_ALLOCATE_BUFFER.
- * Note: We use acpi_os_free here because acpi_os_allocate was used to allocate
- * the buffer. This purposefully bypasses the internal allocation tracking
- * mechanism (if it is enabled).
- */
-#define ACPI_FREE_BUFFER(b)         acpi_os_free((b).pointer)
 
 /*
  * name_type for acpi_get_name

include/acpi/platform/acenv.h  (+15 -5)
···
 #endif
 
 /*
- * acpi_bin/acpi_dump/acpi_src/acpi_xtract configuration. All single
+ * acpi_bin/acpi_dump/acpi_src/acpi_xtract/Example configuration. All single
  * threaded, with no debug output.
  */
-#if (defined ACPI_BIN_APP)   || \
-	(defined ACPI_DUMP_APP)  || \
-	(defined ACPI_SRC_APP)   || \
-	(defined ACPI_XTRACT_APP)
+#if (defined ACPI_BIN_APP)      || \
+	(defined ACPI_DUMP_APP)     || \
+	(defined ACPI_SRC_APP)      || \
+	(defined ACPI_XTRACT_APP)   || \
+	(defined ACPI_EXAMPLE_APP)
 #define ACPI_APPLICATION
 #define ACPI_SINGLE_THREADED
 #endif
···
 #define ACPI_TOLOWER(c) acpi_ut_to_lower ((int) (c))
 
 #endif				/* ACPI_USE_SYSTEM_CLIBRARY */
+
+#ifndef ACPI_FILE
+#ifdef ACPI_APPLICATION
+#include <stdio.h>
+#define ACPI_FILE              FILE *
+#else
+#define ACPI_FILE              void *
+#endif				/* ACPI_APPLICATION */
+#endif				/* ACPI_FILE */
 
 #endif				/* __ACENV_H__ */
-4
include/acpi/platform/aclinux.h
··· 239 239 */ 240 240 void early_acpi_os_unmap_memory(void __iomem * virt, acpi_size size); 241 241 242 - void acpi_os_gpe_count(u32 gpe_number); 243 - 244 - void acpi_os_fixed_event_count(u32 fixed_event_number); 245 - 246 242 #endif /* __KERNEL__ */ 247 243 248 244 #endif /* __ACLINUX_H__ */
+23 -1
include/linux/acpi.h
··· 42 42 #include <acpi/acpi_bus.h> 43 43 #include <acpi/acpi_drivers.h> 44 44 #include <acpi/acpi_numa.h> 45 + #include <acpi/acpi_io.h> 45 46 #include <asm/acpi.h> 46 47 47 48 static inline acpi_handle acpi_device_handle(struct acpi_device *adev) ··· 53 52 #define ACPI_COMPANION(dev) ((dev)->acpi_node.companion) 54 53 #define ACPI_COMPANION_SET(dev, adev) ACPI_COMPANION(dev) = (adev) 55 54 #define ACPI_HANDLE(dev) acpi_device_handle(ACPI_COMPANION(dev)) 55 + 56 + static inline void acpi_preset_companion(struct device *dev, 57 + struct acpi_device *parent, u64 addr) 58 + { 59 + ACPI_COMPANION_SET(dev, acpi_find_child_device(parent, addr, NULL)); 60 + } 56 61 57 62 static inline const char *acpi_dev_name(struct acpi_device *adev) 58 63 { ··· 416 409 return !!acpi_match_device(drv->acpi_match_table, dev); 417 410 } 418 411 412 + int acpi_device_uevent_modalias(struct device *, struct kobj_uevent_env *); 413 + int acpi_device_modalias(struct device *, char *, int); 414 + 419 415 #define ACPI_PTR(_ptr) (_ptr) 420 416 421 417 #else /* !CONFIG_ACPI */ ··· 470 460 static inline int acpi_table_parse(char *id, 471 461 int (*handler)(struct acpi_table_header *)) 472 462 { 473 - return -1; 463 + return -ENODEV; 474 464 } 475 465 476 466 static inline int acpi_nvs_register(__u64 start, __u64 size) ··· 496 486 const struct device_driver *drv) 497 487 { 498 488 return false; 489 + } 490 + 491 + static inline int acpi_device_uevent_modalias(struct device *dev, 492 + struct kobj_uevent_env *env) 493 + { 494 + return -ENODEV; 495 + } 496 + 497 + static inline int acpi_device_modalias(struct device *dev, 498 + char *buf, int size) 499 + { 500 + return -ENODEV; 499 501 } 500 502 501 503 #define ACPI_PTR(_ptr) (NULL)
-1
include/linux/acpi_io.h include/acpi/acpi_io.h
··· 2 2 #define _ACPI_IO_H_ 3 3 4 4 #include <linux/io.h> 5 - #include <acpi/acpi.h> 6 5 7 6 static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys, 8 7 acpi_size size)
+25
include/linux/container.h
··· 1 + /* 2 + * Definitions for container bus type. 3 + * 4 + * Copyright (C) 2013, Intel Corporation 5 + * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #include <linux/device.h> 13 + 14 + /* drivers/base/power/container.c */ 15 + extern struct bus_type container_subsys; 16 + 17 + struct container_dev { 18 + struct device dev; 19 + int (*offline)(struct container_dev *cdev); 20 + }; 21 + 22 + static inline struct container_dev *to_container_dev(struct device *dev) 23 + { 24 + return container_of(dev, struct container_dev, dev); 25 + }
+42
include/linux/cpufreq.h
··· 11 11 #ifndef _LINUX_CPUFREQ_H 12 12 #define _LINUX_CPUFREQ_H 13 13 14 + #include <linux/clk.h> 14 15 #include <linux/cpumask.h> 15 16 #include <linux/completion.h> 16 17 #include <linux/kobject.h> ··· 67 66 unsigned int cpu; /* cpu nr of CPU managing this policy */ 68 67 unsigned int last_cpu; /* cpu nr of previous CPU that managed 69 68 * this policy */ 69 + struct clk *clk; 70 70 struct cpufreq_cpuinfo cpuinfo;/* see above */ 71 71 72 72 unsigned int min; /* in kHz */ ··· 227 225 int (*suspend) (struct cpufreq_policy *policy); 228 226 int (*resume) (struct cpufreq_policy *policy); 229 227 struct freq_attr **attr; 228 + 229 + /* platform specific boost support code */ 230 + bool boost_supported; 231 + bool boost_enabled; 232 + int (*set_boost) (int state); 230 233 }; 231 234 232 235 /* flags */ ··· 258 251 * can handle them specially. 259 252 */ 260 253 #define CPUFREQ_ASYNC_NOTIFICATION (1 << 4) 254 + 255 + /* 256 + * Set by drivers which want cpufreq core to check if CPU is running at a 257 + * frequency present in freq-table exposed by the driver. For these drivers if 258 + * CPU is found running at an out of table freq, we will try to set it to a freq 259 + * from the table. And if that fails, we will stop further boot process by 260 + * issuing a BUG_ON(). 
261 + */ 262 + #define CPUFREQ_NEED_INITIAL_FREQ_CHECK (1 << 5) 261 263 262 264 int cpufreq_register_driver(struct cpufreq_driver *driver_data); 263 265 int cpufreq_unregister_driver(struct cpufreq_driver *driver_data); ··· 315 299 #define CPUFREQ_NOTIFY (2) 316 300 #define CPUFREQ_START (3) 317 301 #define CPUFREQ_UPDATE_POLICY_CPU (4) 302 + #define CPUFREQ_CREATE_POLICY (5) 303 + #define CPUFREQ_REMOVE_POLICY (6) 318 304 319 305 #ifdef CONFIG_CPU_FREQ 320 306 int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list); ··· 324 306 325 307 void cpufreq_notify_transition(struct cpufreq_policy *policy, 326 308 struct cpufreq_freqs *freqs, unsigned int state); 309 + void cpufreq_notify_post_transition(struct cpufreq_policy *policy, 310 + struct cpufreq_freqs *freqs, int transition_failed); 327 311 328 312 #else /* CONFIG_CPU_FREQ */ 329 313 static inline int cpufreq_register_notifier(struct notifier_block *nb, ··· 440 420 441 421 #define CPUFREQ_ENTRY_INVALID ~0 442 422 #define CPUFREQ_TABLE_END ~1 423 + #define CPUFREQ_BOOST_FREQ ~2 443 424 444 425 struct cpufreq_frequency_table { 445 426 unsigned int driver_data; /* driver specific data, not used by core */ ··· 460 439 unsigned int target_freq, 461 440 unsigned int relation, 462 441 unsigned int *index); 442 + int cpufreq_frequency_table_get_index(struct cpufreq_policy *policy, 443 + unsigned int freq); 463 444 464 445 void cpufreq_frequency_table_update_policy_cpu(struct cpufreq_policy *policy); 465 446 ssize_t cpufreq_show_cpus(const struct cpumask *mask, char *buf); 466 447 448 + #ifdef CONFIG_CPU_FREQ 449 + int cpufreq_boost_trigger_state(int state); 450 + int cpufreq_boost_supported(void); 451 + int cpufreq_boost_enabled(void); 452 + #else 453 + static inline int cpufreq_boost_trigger_state(int state) 454 + { 455 + return 0; 456 + } 457 + static inline int cpufreq_boost_supported(void) 458 + { 459 + return 0; 460 + } 461 + static inline int cpufreq_boost_enabled(void) 462 + { 463 + return 0; 
464 + } 465 + #endif 467 466 /* the following function is for cpufreq core use only */ 468 467 struct cpufreq_frequency_table *cpufreq_frequency_get_table(unsigned int cpu); 469 468 ··· 496 455 int cpufreq_table_validate_and_show(struct cpufreq_policy *policy, 497 456 struct cpufreq_frequency_table *table); 498 457 458 + unsigned int cpufreq_generic_get(unsigned int cpu); 499 459 int cpufreq_generic_init(struct cpufreq_policy *policy, 500 460 struct cpufreq_frequency_table *table, 501 461 unsigned int transition_latency);
+2 -6
include/linux/ide.h
··· 18 18 #include <linux/completion.h> 19 19 #include <linux/pm.h> 20 20 #include <linux/mutex.h> 21 - #ifdef CONFIG_BLK_DEV_IDEACPI 22 - #include <acpi/acpi.h> 23 - #endif 24 - #include <asm/byteorder.h> 25 - #include <asm/io.h> 26 - 27 21 /* for request_sense */ 28 22 #include <linux/cdrom.h> 23 + #include <asm/byteorder.h> 24 + #include <asm/io.h> 29 25 30 26 #if defined(CONFIG_CRIS) || defined(CONFIG_FRV) || defined(CONFIG_MN10300) 31 27 # define SUPPORT_VLB_SYNC 0
+1 -1
include/linux/iscsi_ibft.h
··· 21 21 #ifndef ISCSI_IBFT_H 22 22 #define ISCSI_IBFT_H 23 23 24 - #include <acpi/acpi.h> 24 + #include <linux/acpi.h> 25 25 26 26 /* 27 27 * Logical location of iSCSI Boot Format Table.
+6
include/linux/of_device.h
··· 64 64 static inline void of_device_uevent(struct device *dev, 65 65 struct kobj_uevent_env *env) { } 66 66 67 + static inline int of_device_get_modalias(struct device *dev, 68 + char *str, ssize_t len) 69 + { 70 + return -ENODEV; 71 + } 72 + 67 73 static inline int of_device_uevent_modalias(struct device *dev, 68 74 struct kobj_uevent_env *env) 69 75 {
+1 -2
include/linux/pci_hotplug.h
··· 175 175 }; 176 176 177 177 #ifdef CONFIG_ACPI 178 - #include <acpi/acpi.h> 179 - #include <acpi/acpi_bus.h> 178 + #include <linux/acpi.h> 180 179 int pci_get_hp_params(struct pci_dev *dev, struct hotplug_params *hpp); 181 180 int acpi_get_hp_hw_control_from_firmware(struct pci_dev *dev, u32 flags); 182 181 int acpi_pci_check_ejectable(struct pci_bus *pbus, acpi_handle handle);
+21
include/linux/pm.h
··· 311 311 #define SET_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) 312 312 #endif 313 313 314 + #ifdef CONFIG_PM_SLEEP 315 + #define SET_LATE_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \ 316 + .suspend_late = suspend_fn, \ 317 + .resume_early = resume_fn, \ 318 + .freeze_late = suspend_fn, \ 319 + .thaw_early = resume_fn, \ 320 + .poweroff_late = suspend_fn, \ 321 + .restore_early = resume_fn, 322 + #else 323 + #define SET_LATE_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) 324 + #endif 325 + 314 326 #ifdef CONFIG_PM_RUNTIME 315 327 #define SET_RUNTIME_PM_OPS(suspend_fn, resume_fn, idle_fn) \ 316 328 .runtime_suspend = suspend_fn, \ ··· 330 318 .runtime_idle = idle_fn, 331 319 #else 332 320 #define SET_RUNTIME_PM_OPS(suspend_fn, resume_fn, idle_fn) 321 + #endif 322 + 323 + #ifdef CONFIG_PM 324 + #define SET_PM_RUNTIME_PM_OPS(suspend_fn, resume_fn, idle_fn) \ 325 + .runtime_suspend = suspend_fn, \ 326 + .runtime_resume = resume_fn, \ 327 + .runtime_idle = idle_fn, 328 + #else 329 + #define SET_PM_RUNTIME_PM_OPS(suspend_fn, resume_fn, idle_fn) 333 330 #endif 334 331 335 332 /*
+8 -4
include/linux/pm_runtime.h
··· 23 23 usage_count */ 24 24 #define RPM_AUTO 0x08 /* Use autosuspend_delay */ 25 25 26 + #ifdef CONFIG_PM 27 + extern int pm_generic_runtime_suspend(struct device *dev); 28 + extern int pm_generic_runtime_resume(struct device *dev); 29 + #else 30 + static inline int pm_generic_runtime_suspend(struct device *dev) { return 0; } 31 + static inline int pm_generic_runtime_resume(struct device *dev) { return 0; } 32 + #endif 33 + 26 34 #ifdef CONFIG_PM_RUNTIME 27 35 28 36 extern struct workqueue_struct *pm_wq; ··· 45 37 extern void __pm_runtime_disable(struct device *dev, bool check_resume); 46 38 extern void pm_runtime_allow(struct device *dev); 47 39 extern void pm_runtime_forbid(struct device *dev); 48 - extern int pm_generic_runtime_suspend(struct device *dev); 49 - extern int pm_generic_runtime_resume(struct device *dev); 50 40 extern void pm_runtime_no_callbacks(struct device *dev); 51 41 extern void pm_runtime_irq_safe(struct device *dev); 52 42 extern void __pm_runtime_use_autosuspend(struct device *dev, bool use); ··· 148 142 static inline bool pm_runtime_status_suspended(struct device *dev) { return false; } 149 143 static inline bool pm_runtime_enabled(struct device *dev) { return false; } 150 144 151 - static inline int pm_generic_runtime_suspend(struct device *dev) { return 0; } 152 - static inline int pm_generic_runtime_resume(struct device *dev) { return 0; } 153 145 static inline void pm_runtime_no_callbacks(struct device *dev) {} 154 146 static inline void pm_runtime_irq_safe(struct device *dev) {} 155 147
+4 -1
include/linux/sfi_acpi.h
··· 59 59 #ifndef _LINUX_SFI_ACPI_H 60 60 #define _LINUX_SFI_ACPI_H 61 61 62 + #include <linux/acpi.h> 63 + #include <linux/sfi.h> 64 + 62 65 #ifdef CONFIG_SFI 63 - #include <acpi/acpi.h> /* struct acpi_table_header */ 66 + #include <acpi/acpi.h> /* FIXME: inclusion should be removed */ 64 67 65 68 extern int sfi_acpi_table_parse(char *signature, char *oem_id, 66 69 char *oem_table_id,
+1 -1
include/linux/tboot.h
··· 34 34 }; 35 35 36 36 #ifdef CONFIG_INTEL_TXT 37 - #include <acpi/acpi.h> 37 + #include <linux/acpi.h> 38 38 /* used to communicate between tboot and the launched kernel */ 39 39 40 40 #define TB_KEY_SIZE 64 /* 512 bits */
+53
include/trace/events/power.h
··· 35 35 TP_ARGS(state, cpu_id) 36 36 ); 37 37 38 + TRACE_EVENT(pstate_sample, 39 + 40 + TP_PROTO(u32 core_busy, 41 + u32 scaled_busy, 42 + u32 state, 43 + u64 mperf, 44 + u64 aperf, 45 + u32 energy, 46 + u32 freq 47 + ), 48 + 49 + TP_ARGS(core_busy, 50 + scaled_busy, 51 + state, 52 + mperf, 53 + aperf, 54 + energy, 55 + freq 56 + ), 57 + 58 + TP_STRUCT__entry( 59 + __field(u32, core_busy) 60 + __field(u32, scaled_busy) 61 + __field(u32, state) 62 + __field(u64, mperf) 63 + __field(u64, aperf) 64 + __field(u32, energy) 65 + __field(u32, freq) 66 + 67 + ), 68 + 69 + TP_fast_assign( 70 + __entry->core_busy = core_busy; 71 + __entry->scaled_busy = scaled_busy; 72 + __entry->state = state; 73 + __entry->mperf = mperf; 74 + __entry->aperf = aperf; 75 + __entry->energy = energy; 76 + __entry->freq = freq; 77 + ), 78 + 79 + TP_printk("core_busy=%lu scaled=%lu state=%lu mperf=%llu aperf=%llu energy=%lu freq=%lu ", 80 + (unsigned long)__entry->core_busy, 81 + (unsigned long)__entry->scaled_busy, 82 + (unsigned long)__entry->state, 83 + (unsigned long long)__entry->mperf, 84 + (unsigned long long)__entry->aperf, 85 + (unsigned long)__entry->energy, 86 + (unsigned long)__entry->freq 87 + ) 88 + 89 + ); 90 + 38 91 /* This file can get included multiple times, TRACE_HEADER_MULTI_READ at top */ 39 92 #ifndef _PWR_EVENT_AVOID_DOUBLE_DEFINING 40 93 #define _PWR_EVENT_AVOID_DOUBLE_DEFINING
+2
include/uapi/linux/apm_bios.h
··· 67 67 #define APM_USER_SUSPEND 0x000a 68 68 #define APM_STANDBY_RESUME 0x000b 69 69 #define APM_CAPABILITY_CHANGE 0x000c 70 + #define APM_USER_HIBERNATION 0x000d 71 + #define APM_HIBERNATION_RESUME 0x000e 70 72 71 73 /* 72 74 * Error codes
+1 -1
init/main.c
··· 563 563 init_timers(); 564 564 hrtimers_init(); 565 565 softirq_init(); 566 + acpi_early_init(); 566 567 timekeeping_init(); 567 568 time_init(); 568 569 sched_clock_postinit(); ··· 641 640 642 641 check_bugs(); 643 642 644 - acpi_early_init(); /* before LAPIC and SMP init */ 645 643 sfi_init_late(); 646 644 647 645 if (efi_enabled(EFI_RUNTIME_SERVICES)) {
+4 -3
kernel/power/hibernate.c
··· 82 82 83 83 unlock_system_sleep(); 84 84 } 85 + EXPORT_SYMBOL_GPL(hibernation_set_ops); 85 86 86 87 static bool entering_platform_hibernation; 87 88 ··· 294 293 error); 295 294 /* Restore control flow magically appears here */ 296 295 restore_processor_state(); 297 - if (!in_suspend) { 296 + if (!in_suspend) 298 297 events_check_enabled = false; 299 - platform_leave(platform_mode); 300 - } 298 + 299 + platform_leave(platform_mode); 301 300 302 301 Power_up: 303 302 syscore_resume();
+1446
scripts/analyze_suspend.py
··· 1 + #!/usr/bin/python 2 + # 3 + # Tool for analyzing suspend/resume timing 4 + # Copyright (c) 2013, Intel Corporation. 5 + # 6 + # This program is free software; you can redistribute it and/or modify it 7 + # under the terms and conditions of the GNU General Public License, 8 + # version 2, as published by the Free Software Foundation. 9 + # 10 + # This program is distributed in the hope it will be useful, but WITHOUT 11 + # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 + # FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 + # more details. 14 + # 15 + # You should have received a copy of the GNU General Public License along with 16 + # this program; if not, write to the Free Software Foundation, Inc., 17 + # 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 18 + # 19 + # Authors: 20 + # Todd Brandt <todd.e.brandt@linux.intel.com> 21 + # 22 + # Description: 23 + # This tool is designed to assist kernel and OS developers in optimizing 24 + # their linux stack's suspend/resume time. Using a kernel image built 25 + # with a few extra options enabled, the tool will execute a suspend and 26 + # will capture dmesg and ftrace data until resume is complete. This data 27 + # is transformed into a device timeline and a callgraph to give a quick 28 + # and detailed view of which devices and callbacks are taking the most 29 + # time in suspend/resume. The output is a single html file which can be 30 + # viewed in firefox or chrome. 31 + # 32 + # The following kernel build options are required: 33 + # CONFIG_PM_DEBUG=y 34 + # CONFIG_PM_SLEEP_DEBUG=y 35 + # CONFIG_FTRACE=y 36 + # CONFIG_FUNCTION_TRACER=y 37 + # CONFIG_FUNCTION_GRAPH_TRACER=y 38 + # 39 + # The following additional kernel parameters are required: 40 + # (e.g. in file /etc/default/grub) 41 + # GRUB_CMDLINE_LINUX_DEFAULT="... initcall_debug log_buf_len=16M ..." 
42 + # 43 + 44 + import sys 45 + import time 46 + import os 47 + import string 48 + import re 49 + import array 50 + import platform 51 + import datetime 52 + import struct 53 + 54 + # -- classes -- 55 + 56 + class SystemValues: 57 + testdir = "." 58 + tpath = "/sys/kernel/debug/tracing/" 59 + mempath = "/dev/mem" 60 + powerfile = "/sys/power/state" 61 + suspendmode = "mem" 62 + prefix = "test" 63 + teststamp = "" 64 + dmesgfile = "" 65 + ftracefile = "" 66 + htmlfile = "" 67 + rtcwake = False 68 + def setOutputFile(self): 69 + if((self.htmlfile == "") and (self.dmesgfile != "")): 70 + m = re.match(r"(?P<name>.*)_dmesg\.txt$", self.dmesgfile) 71 + if(m): 72 + self.htmlfile = m.group("name")+".html" 73 + if((self.htmlfile == "") and (self.ftracefile != "")): 74 + m = re.match(r"(?P<name>.*)_ftrace\.txt$", self.ftracefile) 75 + if(m): 76 + self.htmlfile = m.group("name")+".html" 77 + if(self.htmlfile == ""): 78 + self.htmlfile = "output.html" 79 + def initTestOutput(self): 80 + hostname = platform.node() 81 + if(hostname != ""): 82 + self.prefix = hostname 83 + v = os.popen("cat /proc/version").read().strip() 84 + kver = string.split(v)[2] 85 + self.testdir = os.popen("date \"+suspend-%m%d%y-%H%M%S\"").read().strip() 86 + self.teststamp = "# "+self.testdir+" "+self.prefix+" "+self.suspendmode+" "+kver 87 + self.dmesgfile = self.testdir+"/"+self.prefix+"_"+self.suspendmode+"_dmesg.txt" 88 + self.ftracefile = self.testdir+"/"+self.prefix+"_"+self.suspendmode+"_ftrace.txt" 89 + self.htmlfile = self.testdir+"/"+self.prefix+"_"+self.suspendmode+".html" 90 + os.mkdir(self.testdir) 91 + 92 + class Data: 93 + altdevname = dict() 94 + usedmesg = False 95 + useftrace = False 96 + notestrun = False 97 + verbose = False 98 + phases = [] 99 + dmesg = {} # root data structure 100 + start = 0.0 101 + end = 0.0 102 + stamp = {'time': "", 'host': "", 'mode': ""} 103 + id = 0 104 + tSuspended = 0.0 105 + fwValid = False 106 + fwSuspend = 0 107 + fwResume = 0 108 + def 
initialize(self): 109 + self.dmesg = { # dmesg log data 110 + 'suspend_general': {'list': dict(), 'start': -1.0, 'end': -1.0, 111 + 'row': 0, 'color': "#CCFFCC", 'order': 0}, 112 + 'suspend_early': {'list': dict(), 'start': -1.0, 'end': -1.0, 113 + 'row': 0, 'color': "green", 'order': 1}, 114 + 'suspend_noirq': {'list': dict(), 'start': -1.0, 'end': -1.0, 115 + 'row': 0, 'color': "#00FFFF", 'order': 2}, 116 + 'suspend_cpu': {'list': dict(), 'start': -1.0, 'end': -1.0, 117 + 'row': 0, 'color': "blue", 'order': 3}, 118 + 'resume_cpu': {'list': dict(), 'start': -1.0, 'end': -1.0, 119 + 'row': 0, 'color': "red", 'order': 4}, 120 + 'resume_noirq': {'list': dict(), 'start': -1.0, 'end': -1.0, 121 + 'row': 0, 'color': "orange", 'order': 5}, 122 + 'resume_early': {'list': dict(), 'start': -1.0, 'end': -1.0, 123 + 'row': 0, 'color': "yellow", 'order': 6}, 124 + 'resume_general': {'list': dict(), 'start': -1.0, 'end': -1.0, 125 + 'row': 0, 'color': "#FFFFCC", 'order': 7} 126 + } 127 + self.phases = self.sortedPhases() 128 + def normalizeTime(self): 129 + tSus = tRes = self.tSuspended 130 + if self.fwValid: 131 + tSus -= -self.fwSuspend / 1000000000.0 132 + tRes -= self.fwResume / 1000000000.0 133 + self.tSuspended = 0.0 134 + self.start -= tSus 135 + self.end -= tRes 136 + for phase in self.phases: 137 + zero = tRes 138 + if "suspend" in phase: 139 + zero = tSus 140 + p = self.dmesg[phase] 141 + p['start'] -= zero 142 + p['end'] -= zero 143 + list = p['list'] 144 + for name in list: 145 + d = list[name] 146 + d['start'] -= zero 147 + d['end'] -= zero 148 + if('ftrace' in d): 149 + cg = d['ftrace'] 150 + cg.start -= zero 151 + cg.end -= zero 152 + for line in cg.list: 153 + line.time -= zero 154 + if self.fwValid: 155 + fws = -self.fwSuspend / 1000000000.0 156 + fwr = self.fwResume / 1000000000.0 157 + list = dict() 158 + self.id += 1 159 + devid = "dc%d" % self.id 160 + list["firmware-suspend"] = \ 161 + {'start': fws, 'end': 0, 'pid': 0, 'par': "", 162 + 'length': -fws, 
'row': 0, 'id': devid }; 163 + self.id += 1 164 + devid = "dc%d" % self.id 165 + list["firmware-resume"] = \ 166 + {'start': 0, 'end': fwr, 'pid': 0, 'par': "", 167 + 'length': fwr, 'row': 0, 'id': devid }; 168 + self.dmesg['BIOS'] = \ 169 + {'list': list, 'start': fws, 'end': fwr, 170 + 'row': 0, 'color': "purple", 'order': 4} 171 + self.dmesg['resume_cpu']['order'] += 1 172 + self.dmesg['resume_noirq']['order'] += 1 173 + self.dmesg['resume_early']['order'] += 1 174 + self.dmesg['resume_general']['order'] += 1 175 + self.phases = self.sortedPhases() 176 + def vprint(self, msg): 177 + if(self.verbose): 178 + print(msg) 179 + def dmesgSortVal(self, phase): 180 + return self.dmesg[phase]['order'] 181 + def sortedPhases(self): 182 + return sorted(self.dmesg, key=self.dmesgSortVal) 183 + def sortedDevices(self, phase): 184 + list = self.dmesg[phase]['list'] 185 + slist = [] 186 + tmp = dict() 187 + for devname in list: 188 + dev = list[devname] 189 + tmp[dev['start']] = devname 190 + for t in sorted(tmp): 191 + slist.append(tmp[t]) 192 + return slist 193 + def fixupInitcalls(self, phase, end): 194 + # if any calls never returned, clip them at system resume end 195 + phaselist = self.dmesg[phase]['list'] 196 + for devname in phaselist: 197 + dev = phaselist[devname] 198 + if(dev['end'] < 0): 199 + dev['end'] = end 200 + self.vprint("%s (%s): callback didn't return" % (devname, phase)) 201 + def fixupInitcallsThatDidntReturn(self): 202 + # if any calls never returned, clip them at system resume end 203 + for phase in self.phases: 204 + self.fixupInitcalls(phase, self.dmesg['resume_general']['end']) 205 + if(phase == "resume_general"): 206 + break 207 + def newAction(self, phase, name, pid, parent, start, end): 208 + self.id += 1 209 + devid = "dc%d" % self.id 210 + list = self.dmesg[phase]['list'] 211 + length = -1.0 212 + if(start >= 0 and end >= 0): 213 + length = end - start 214 + list[name] = {'start': start, 'end': end, 'pid': pid, 'par': parent, 215 + 'length': 
length, 'row': 0, 'id': devid } 216 + def deviceIDs(self, devlist, phase): 217 + idlist = [] 218 + for p in self.phases: 219 + if(p[0] != phase[0]): 220 + continue 221 + list = data.dmesg[p]['list'] 222 + for devname in list: 223 + if devname in devlist: 224 + idlist.append(list[devname]['id']) 225 + return idlist 226 + def deviceParentID(self, devname, phase): 227 + pdev = "" 228 + pdevid = "" 229 + for p in self.phases: 230 + if(p[0] != phase[0]): 231 + continue 232 + list = data.dmesg[p]['list'] 233 + if devname in list: 234 + pdev = list[devname]['par'] 235 + for p in self.phases: 236 + if(p[0] != phase[0]): 237 + continue 238 + list = data.dmesg[p]['list'] 239 + if pdev in list: 240 + return list[pdev]['id'] 241 + return pdev 242 + def deviceChildrenIDs(self, devname, phase): 243 + devlist = [] 244 + for p in self.phases: 245 + if(p[0] != phase[0]): 246 + continue 247 + list = data.dmesg[p]['list'] 248 + for child in list: 249 + if(list[child]['par'] == devname): 250 + devlist.append(child) 251 + return self.deviceIDs(devlist, phase) 252 + 253 + class FTraceLine: 254 + time = 0.0 255 + length = 0.0 256 + fcall = False 257 + freturn = False 258 + fevent = False 259 + depth = 0 260 + name = "" 261 + def __init__(self, t, m, d): 262 + self.time = float(t) 263 + # check to see if this is a trace event 264 + em = re.match(r"^ *\/\* *(?P<msg>.*) \*\/ *$", m) 265 + if(em): 266 + self.name = em.group("msg") 267 + self.fevent = True 268 + return 269 + # convert the duration to seconds 270 + if(d): 271 + self.length = float(d)/1000000 272 + # the indentation determines the depth 273 + match = re.match(r"^(?P<d> *)(?P<o>.*)$", m) 274 + if(not match): 275 + return 276 + self.depth = self.getDepth(match.group('d')) 277 + m = match.group('o') 278 + # function return 279 + if(m[0] == '}'): 280 + self.freturn = True 281 + if(len(m) > 1): 282 + # includes comment with function name 283 + match = re.match(r"^} *\/\* *(?P<n>.*) *\*\/$", m) 284 + if(match): 285 + self.name = 
match.group('n') 286 + # function call 287 + else: 288 + self.fcall = True 289 + # function call with children 290 + if(m[-1] == '{'): 291 + match = re.match(r"^(?P<n>.*) *\(.*", m) 292 + if(match): 293 + self.name = match.group('n') 294 + # function call with no children (leaf) 295 + elif(m[-1] == ';'): 296 + self.freturn = True 297 + match = re.match(r"^(?P<n>.*) *\(.*", m) 298 + if(match): 299 + self.name = match.group('n') 300 + # something else (possibly a trace marker) 301 + else: 302 + self.name = m 303 + def getDepth(self, str): 304 + return len(str)/2 305 + 306 + class FTraceCallGraph: 307 + start = -1.0 308 + end = -1.0 309 + list = [] 310 + invalid = False 311 + depth = 0 312 + def __init__(self): 313 + self.start = -1.0 314 + self.end = -1.0 315 + self.list = [] 316 + self.depth = 0 317 + def setDepth(self, line): 318 + if(line.fcall and not line.freturn): 319 + line.depth = self.depth 320 + self.depth += 1 321 + elif(line.freturn and not line.fcall): 322 + self.depth -= 1 323 + line.depth = self.depth 324 + else: 325 + line.depth = self.depth 326 + def addLine(self, line, match): 327 + if(not self.invalid): 328 + self.setDepth(line) 329 + if(line.depth == 0 and line.freturn): 330 + self.end = line.time 331 + self.list.append(line) 332 + return True 333 + if(self.invalid): 334 + return False 335 + if(len(self.list) >= 1000000 or self.depth < 0): 336 + first = self.list[0] 337 + self.list = [] 338 + self.list.append(first) 339 + self.invalid = True 340 + id = "task %s cpu %s" % (match.group("pid"), match.group("cpu")) 341 + window = "(%f - %f)" % (self.start, line.time) 342 + data.vprint("Too much data for "+id+" "+window+", ignoring this callback") 343 + return False 344 + self.list.append(line) 345 + if(self.start < 0): 346 + self.start = line.time 347 + return False 348 + def sanityCheck(self): 349 + stack = dict() 350 + cnt = 0 351 + for l in self.list: 352 + if(l.fcall and not l.freturn): 353 + stack[l.depth] = l 354 + cnt += 1 355 + elif(l.freturn 
and not l.fcall): 356 + if(not stack[l.depth]): 357 + return False 358 + stack[l.depth].length = l.length 359 + stack[l.depth] = 0 360 + l.length = 0 361 + cnt -= 1 362 + if(cnt == 0): 363 + return True 364 + return False 365 + def debugPrint(self, filename): 366 + if(filename == "stdout"): 367 + print("[%f - %f]" % (self.start, self.end)) 368 + for l in self.list: 369 + if(l.freturn and l.fcall): 370 + print("%f (%02d): %s(); (%.3f us)" % (l.time, l.depth, l.name, l.length*1000000)) 371 + elif(l.freturn): 372 + print("%f (%02d): %s} (%.3f us)" % (l.time, l.depth, l.name, l.length*1000000)) 373 + else: 374 + print("%f (%02d): %s() { (%.3f us)" % (l.time, l.depth, l.name, l.length*1000000)) 375 + print(" ") 376 + else: 377 + fp = open(filename, 'w') 378 + print(filename) 379 + for l in self.list: 380 + if(l.freturn and l.fcall): 381 + fp.write("%f (%02d): %s(); (%.3f us)\n" % (l.time, l.depth, l.name, l.length*1000000)) 382 + elif(l.freturn): 383 + fp.write("%f (%02d): %s} (%.3f us)\n" % (l.time, l.depth, l.name, l.length*1000000)) 384 + else: 385 + fp.write("%f (%02d): %s() { (%.3f us)\n" % (l.time, l.depth, l.name, l.length*1000000)) 386 + fp.close() 387 + 388 + class Timeline: 389 + html = {} 390 + scaleH = 0.0 # height of the timescale row as a percent of the timeline height 391 + rowH = 0.0 # height of each row in percent of the timeline height 392 + row_height_pixels = 30 393 + maxrows = 0 394 + height = 0 395 + def __init__(self): 396 + self.html = { 397 + 'timeline': "", 398 + 'legend': "", 399 + 'scale': "" 400 + } 401 + def setRows(self, rows): 402 + self.maxrows = int(rows) 403 + self.scaleH = 100.0/float(self.maxrows) 404 + self.height = self.maxrows*self.row_height_pixels 405 + r = float(self.maxrows - 1) 406 + if(r < 1.0): 407 + r = 1.0 408 + self.rowH = (100.0 - self.scaleH)/r 409 + 410 + # -- global objects -- 411 + 412 + sysvals = SystemValues() 413 + data = Data() 414 + 415 + # -- functions -- 416 + 417 + # Function: initFtrace 418 + # Description: 
419 + # Configure ftrace to capture a function trace during suspend/resume 420 + def initFtrace(): 421 + global sysvals 422 + 423 + print("INITIALIZING FTRACE...") 424 + # turn trace off 425 + os.system("echo 0 > "+sysvals.tpath+"tracing_on") 426 + # set the trace clock to global 427 + os.system("echo global > "+sysvals.tpath+"trace_clock") 428 + # set trace buffer to a huge value 429 + os.system("echo nop > "+sysvals.tpath+"current_tracer") 430 + os.system("echo 100000 > "+sysvals.tpath+"buffer_size_kb") 431 + # clear the trace buffer 432 + os.system("echo \"\" > "+sysvals.tpath+"trace") 433 + # set trace type 434 + os.system("echo function_graph > "+sysvals.tpath+"current_tracer") 435 + os.system("echo \"\" > "+sysvals.tpath+"set_ftrace_filter") 436 + # set trace format options 437 + os.system("echo funcgraph-abstime > "+sysvals.tpath+"trace_options") 438 + os.system("echo funcgraph-proc > "+sysvals.tpath+"trace_options") 439 + # focus only on device suspend and resume 440 + os.system("cat "+sysvals.tpath+"available_filter_functions | grep dpm_run_callback > "+sysvals.tpath+"set_graph_function") 441 + 442 + # Function: verifyFtrace 443 + # Description: 444 + # Check that ftrace is working on the system 445 + def verifyFtrace(): 446 + global sysvals 447 + files = ["available_filter_functions", "buffer_size_kb", 448 + "current_tracer", "set_ftrace_filter", 449 + "trace", "trace_marker"] 450 + for f in files: 451 + if(os.path.exists(sysvals.tpath+f) == False): 452 + return False 453 + return True 454 + 455 + def parseStamp(line): 456 + global data, sysvals 457 + stampfmt = r"# suspend-(?P<m>[0-9]{2})(?P<d>[0-9]{2})(?P<y>[0-9]{2})-"+\ 458 + "(?P<H>[0-9]{2})(?P<M>[0-9]{2})(?P<S>[0-9]{2})"+\ 459 + " (?P<host>.*) (?P<mode>.*) (?P<kernel>.*)$" 460 + m = re.match(stampfmt, line) 461 + if(m): 462 + dt = datetime.datetime(int(m.group("y"))+2000, int(m.group("m")), 463 + int(m.group("d")), int(m.group("H")), int(m.group("M")), 464 + int(m.group("S"))) 465 + 
data.stamp['time'] = dt.strftime("%B %d %Y, %I:%M:%S %p") 466 + data.stamp['host'] = m.group("host") 467 + data.stamp['mode'] = m.group("mode") 468 + data.stamp['kernel'] = m.group("kernel") 469 + sysvals.suspendmode = data.stamp['mode'] 470 + 471 + # Function: analyzeTraceLog 472 + # Description: 473 + # Analyse an ftrace log output file generated from this app during 474 + # the execution phase. Create an "ftrace" structure in memory for 475 + # subsequent formatting in the html output file 476 + def analyzeTraceLog(): 477 + global sysvals, data 478 + 479 + # the ftrace data is tied to the dmesg data 480 + if(not data.usedmesg): 481 + return 482 + 483 + # read through the ftrace and parse the data 484 + data.vprint("Analyzing the ftrace data...") 485 + ftrace_line_fmt = r"^ *(?P<time>[0-9\.]*) *\| *(?P<cpu>[0-9]*)\)"+\ 486 + " *(?P<proc>.*)-(?P<pid>[0-9]*) *\|"+\ 487 + "[ +!]*(?P<dur>[0-9\.]*) .*\| (?P<msg>.*)" 488 + ftemp = dict() 489 + inthepipe = False 490 + tf = open(sysvals.ftracefile, 'r') 491 + count = 0 492 + for line in tf: 493 + count = count + 1 494 + # grab the time stamp if it's valid 495 + if(count == 1): 496 + parseStamp(line) 497 + continue 498 + # parse only valid lines 499 + m = re.match(ftrace_line_fmt, line) 500 + if(not m): 501 + continue 502 + m_time = m.group("time") 503 + m_pid = m.group("pid") 504 + m_msg = m.group("msg") 505 + m_dur = m.group("dur") 506 + if(m_time and m_pid and m_msg): 507 + t = FTraceLine(m_time, m_msg, m_dur) 508 + pid = int(m_pid) 509 + else: 510 + continue 511 + # the line should be a call, return, or event 512 + if(not t.fcall and not t.freturn and not t.fevent): 513 + continue 514 + # only parse the ftrace data during suspend/resume 515 + if(not inthepipe): 516 + # look for the suspend start marker 517 + if(t.fevent): 518 + if(t.name == "SUSPEND START"): 519 + data.vprint("SUSPEND START %f %s:%d" % (t.time, sysvals.ftracefile, count)) 520 + inthepipe = True 521 + continue 522 + else: 523 + # look for the resume 
end marker 524 + if(t.fevent): 525 + if(t.name == "RESUME COMPLETE"): 526 + data.vprint("RESUME COMPLETE %f %s:%d" % (t.time, sysvals.ftracefile, count)) 527 + inthepipe = False 528 + break 529 + continue 530 + # create a callgraph object for the data 531 + if(pid not in ftemp): 532 + ftemp[pid] = FTraceCallGraph() 533 + # when the call is finished, see which device matches it 534 + if(ftemp[pid].addLine(t, m)): 535 + if(not ftemp[pid].sanityCheck()): 536 + id = "task %s cpu %s" % (pid, m.group("cpu")) 537 + data.vprint("Sanity check failed for "+id+", ignoring this callback") 538 + continue 539 + callstart = ftemp[pid].start 540 + callend = ftemp[pid].end 541 + for p in data.phases: 542 + if(data.dmesg[p]['start'] <= callstart and callstart <= data.dmesg[p]['end']): 543 + list = data.dmesg[p]['list'] 544 + for devname in list: 545 + dev = list[devname] 546 + if(pid == dev['pid'] and callstart <= dev['start'] and callend >= dev['end']): 547 + data.vprint("%15s [%f - %f] %s(%d)" % (p, callstart, callend, devname, pid)) 548 + dev['ftrace'] = ftemp[pid] 549 + break 550 + ftemp[pid] = FTraceCallGraph() 551 + tf.close() 552 + 553 + # Function: sortKernelLog 554 + # Description: 555 + # The dmesg output log sometimes comes with lines that have 556 + # timestamps out of order. 
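analyzeTraceLog() above matches function_graph lines as formatted with the funcgraph-abstime and funcgraph-proc options enabled in initFtrace(). A standalone Python 3 sketch of the same line split; the sample line below is synthetic, shaped to that format rather than captured from a real trace:

```python
import re

# Same line format analyzeTraceLog() matches (funcgraph-abstime + funcgraph-proc).
FTRACE_LINE = re.compile(
    r"^ *(?P<time>[0-9.]*) *\| *(?P<cpu>[0-9]*)\)"
    r" *(?P<proc>.*)-(?P<pid>[0-9]*) *\|"
    r"[ +!]*(?P<dur>[0-9.]*) .*\| (?P<msg>.*)")

def parse_graph_line(line):
    """Split one function_graph line into named fields, or return None."""
    m = FTRACE_LINE.match(line)
    if not m:
        return None
    return m.groupdict()
```

The msg field is what the parser then classifies as a call (trailing `{`), a return (`}`), or a trace_marker event.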
This could cause issues since a call 557 + # could accidentally end up in the wrong phase 558 + def sortKernelLog(): 559 + global sysvals, data 560 + lf = open(sysvals.dmesgfile, 'r') 561 + dmesglist = [] 562 + count = 0 563 + for line in lf: 564 + line = line.replace("\r\n", "") 565 + if(count == 0): 566 + parseStamp(line) 567 + elif(count == 1): 568 + m = re.match(r"# fwsuspend (?P<s>[0-9]*) fwresume (?P<r>[0-9]*)$", line) 569 + if(m): 570 + data.fwSuspend = int(m.group("s")) 571 + data.fwResume = int(m.group("r")) 572 + if(data.fwSuspend > 0 or data.fwResume > 0): 573 + data.fwValid = True 574 + if(re.match(r".*(\[ *)(?P<ktime>[0-9\.]*)(\]) (?P<msg>.*)", line)): 575 + dmesglist.append(line) 576 + count += 1 577 + lf.close() 578 + last = "" 579 + 580 + # fix lines with the same time stamp and function with the call and return swapped 581 + for line in dmesglist: 582 + mc = re.match(r".*(\[ *)(?P<t>[0-9\.]*)(\]) calling (?P<f>.*)\+ @ .*, parent: .*", line) 583 + mr = re.match(r".*(\[ *)(?P<t>[0-9\.]*)(\]) call (?P<f>.*)\+ returned .* after (?P<dt>.*) usecs", last) 584 + if(mc and mr and (mc.group("t") == mr.group("t")) and (mc.group("f") == mr.group("f"))): 585 + i = dmesglist.index(last) 586 + j = dmesglist.index(line) 587 + dmesglist[i] = line 588 + dmesglist[j] = last 589 + last = line 590 + return dmesglist 591 + 592 + # Function: analyzeKernelLog 593 + # Description: 594 + # Analyse a dmesg log output file generated from this app during 595 + # the execution phase. 
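sortKernelLog() above repairs a dmesg quirk where a callback's "returned" line can land immediately before its "calling" line with an identical timestamp. The same fix can be sketched standalone (helper name and sample lines invented); this version swaps by position rather than via list.index() as the original does, which sidesteps mispairing when duplicate lines exist:

```python
import re

# Simplified forms of the call/return patterns used by sortKernelLog().
CALL_RE = re.compile(r".*\[ *(?P<t>[0-9.]*)\] calling (?P<f>.*)\+ @ .*, parent: .*")
RET_RE = re.compile(r".*\[ *(?P<t>[0-9.]*)\] call (?P<f>.*)\+ returned .* after (?P<dt>.*) usecs")

def fix_swapped_pairs(lines):
    """Swap adjacent return/call pairs sharing a timestamp and function so
    the 'calling' line always precedes its 'returned' line."""
    out = list(lines)
    for i in range(1, len(out)):
        mr = RET_RE.match(out[i - 1])
        mc = CALL_RE.match(out[i])
        if (mc and mr and mc.group("t") == mr.group("t")
                and mc.group("f") == mr.group("f")):
            out[i - 1], out[i] = out[i], out[i - 1]
    return out
```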
Create a set of device structures in memory 596 + # for subsequent formatting in the html output file 597 + def analyzeKernelLog(): 598 + global sysvals, data 599 + 600 + print("PROCESSING DATA") 601 + data.vprint("Analyzing the dmesg data...") 602 + if(os.path.exists(sysvals.dmesgfile) == False): 603 + print("ERROR: %s doesn't exist") % sysvals.dmesgfile 604 + return False 605 + 606 + lf = sortKernelLog() 607 + phase = "suspend_runtime" 608 + 609 + dm = { 610 + 'suspend_general': r"PM: Syncing filesystems.*", 611 + 'suspend_early': r"PM: suspend of devices complete after.*", 612 + 'suspend_noirq': r"PM: late suspend of devices complete after.*", 613 + 'suspend_cpu': r"PM: noirq suspend of devices complete after.*", 614 + 'resume_cpu': r"ACPI: Low-level resume complete.*", 615 + 'resume_noirq': r"ACPI: Waking up from system sleep state.*", 616 + 'resume_early': r"PM: noirq resume of devices complete after.*", 617 + 'resume_general': r"PM: early resume of devices complete after.*", 618 + 'resume_complete': r".*Restarting tasks \.\.\..*", 619 + } 620 + if(sysvals.suspendmode == "standby"): 621 + dm['resume_cpu'] = r"PM: Restoring platform NVS memory" 622 + elif(sysvals.suspendmode == "disk"): 623 + dm['suspend_early'] = r"PM: freeze of devices complete after.*" 624 + dm['suspend_noirq'] = r"PM: late freeze of devices complete after.*" 625 + dm['suspend_cpu'] = r"PM: noirq freeze of devices complete after.*" 626 + dm['resume_cpu'] = r"PM: Restoring platform NVS memory" 627 + dm['resume_early'] = r"PM: noirq restore of devices complete after.*" 628 + dm['resume_general'] = r"PM: early restore of devices complete after.*" 629 + 630 + action_start = 0.0 631 + for line in lf: 632 + # -- preprocessing -- 633 + # parse each dmesg line into the time and message 634 + m = re.match(r".*(\[ *)(?P<ktime>[0-9\.]*)(\]) (?P<msg>.*)", line) 635 + if(m): 636 + ktime = float(m.group("ktime")) 637 + msg = m.group("msg") 638 + else: 639 + print line 640 + continue 641 + 642 + # -- phase 
changes -- 643 + # suspend_general start 644 + if(re.match(dm['suspend_general'], msg)): 645 + phase = "suspend_general" 646 + data.dmesg[phase]['start'] = ktime 647 + data.start = ktime 648 + # action start: syncing filesystems 649 + action_start = ktime 650 + # suspend_early start 651 + elif(re.match(dm['suspend_early'], msg)): 652 + data.dmesg["suspend_general"]['end'] = ktime 653 + phase = "suspend_early" 654 + data.dmesg[phase]['start'] = ktime 655 + # suspend_noirq start 656 + elif(re.match(dm['suspend_noirq'], msg)): 657 + data.dmesg["suspend_early"]['end'] = ktime 658 + phase = "suspend_noirq" 659 + data.dmesg[phase]['start'] = ktime 660 + # suspend_cpu start 661 + elif(re.match(dm['suspend_cpu'], msg)): 662 + data.dmesg["suspend_noirq"]['end'] = ktime 663 + phase = "suspend_cpu" 664 + data.dmesg[phase]['start'] = ktime 665 + # resume_cpu start 666 + elif(re.match(dm['resume_cpu'], msg)): 667 + data.tSuspended = ktime 668 + data.dmesg["suspend_cpu"]['end'] = ktime 669 + phase = "resume_cpu" 670 + data.dmesg[phase]['start'] = ktime 671 + # resume_noirq start 672 + elif(re.match(dm['resume_noirq'], msg)): 673 + data.dmesg["resume_cpu"]['end'] = ktime 674 + phase = "resume_noirq" 675 + data.dmesg[phase]['start'] = ktime 676 + # action end: ACPI resume 677 + data.newAction("resume_cpu", "ACPI", -1, "", action_start, ktime) 678 + # resume_early start 679 + elif(re.match(dm['resume_early'], msg)): 680 + data.dmesg["resume_noirq"]['end'] = ktime 681 + phase = "resume_early" 682 + data.dmesg[phase]['start'] = ktime 683 + # resume_general start 684 + elif(re.match(dm['resume_general'], msg)): 685 + data.dmesg["resume_early"]['end'] = ktime 686 + phase = "resume_general" 687 + data.dmesg[phase]['start'] = ktime 688 + # resume complete start 689 + elif(re.match(dm['resume_complete'], msg)): 690 + data.dmesg["resume_general"]['end'] = ktime 691 + data.end = ktime 692 + phase = "resume_runtime" 693 + break 694 + 695 + # -- device callbacks -- 696 + if(phase in 
data.phases): 697 + # device init call 698 + if(re.match(r"calling (?P<f>.*)\+ @ .*, parent: .*", msg)): 699 + sm = re.match(r"calling (?P<f>.*)\+ @ (?P<n>.*), parent: (?P<p>.*)", msg); 700 + f = sm.group("f") 701 + n = sm.group("n") 702 + p = sm.group("p") 703 + if(f and n and p): 704 + data.newAction(phase, f, int(n), p, ktime, -1) 705 + # device init return 706 + elif(re.match(r"call (?P<f>.*)\+ returned .* after (?P<t>.*) usecs", msg)): 707 + sm = re.match(r"call (?P<f>.*)\+ returned .* after (?P<t>.*) usecs(?P<a>.*)", msg); 708 + f = sm.group("f") 709 + t = sm.group("t") 710 + list = data.dmesg[phase]['list'] 711 + if(f in list): 712 + dev = list[f] 713 + dev['length'] = int(t) 714 + dev['end'] = ktime 715 + data.vprint("%15s [%f - %f] %s(%d) %s" % 716 + (phase, dev['start'], dev['end'], f, dev['pid'], dev['par'])) 717 + 718 + # -- phase specific actions -- 719 + if(phase == "suspend_general"): 720 + if(re.match(r"PM: Preparing system for mem sleep.*", msg)): 721 + data.newAction(phase, "filesystem-sync", -1, "", action_start, ktime) 722 + elif(re.match(r"Freezing user space processes .*", msg)): 723 + action_start = ktime 724 + elif(re.match(r"Freezing remaining freezable tasks.*", msg)): 725 + data.newAction(phase, "freeze-user-processes", -1, "", action_start, ktime) 726 + action_start = ktime 727 + elif(re.match(r"PM: Entering (?P<mode>[a-z,A-Z]*) sleep.*", msg)): 728 + data.newAction(phase, "freeze-tasks", -1, "", action_start, ktime) 729 + elif(phase == "suspend_cpu"): 730 + m = re.match(r"smpboot: CPU (?P<cpu>[0-9]*) is now offline", msg) 731 + if(m): 732 + cpu = "CPU"+m.group("cpu") 733 + data.newAction(phase, cpu, -1, "", action_start, ktime) 734 + action_start = ktime 735 + elif(re.match(r"ACPI: Preparing to enter system sleep state.*", msg)): 736 + action_start = ktime 737 + elif(re.match(r"Disabling non-boot CPUs .*", msg)): 738 + data.newAction(phase, "ACPI", -1, "", action_start, ktime) 739 + action_start = ktime 740 + elif(phase == 
"resume_cpu"): 741 + m = re.match(r"CPU(?P<cpu>[0-9]*) is up", msg) 742 + if(m): 743 + cpu = "CPU"+m.group("cpu") 744 + data.newAction(phase, cpu, -1, "", action_start, ktime) 745 + action_start = ktime 746 + elif(re.match(r"Enabling non-boot CPUs .*", msg)): 747 + action_start = ktime 748 + 749 + # fill in any missing phases 750 + lp = "suspend_general" 751 + for p in data.phases: 752 + if(p == "suspend_general"): 753 + continue 754 + if(data.dmesg[p]['start'] < 0): 755 + data.dmesg[p]['start'] = data.dmesg[lp]['end'] 756 + if(p == "resume_cpu"): 757 + data.tSuspended = data.dmesg[lp]['end'] 758 + if(data.dmesg[p]['end'] < 0): 759 + data.dmesg[p]['end'] = data.dmesg[p]['start'] 760 + lp = p 761 + 762 + data.fixupInitcallsThatDidntReturn() 763 + return True 764 + 765 + # Function: setTimelineRows 766 + # Description: 767 + # Organize the device or thread lists into the smallest 768 + # number of rows possible, with no entry overlapping 769 + # Arguments: 770 + # list: the list to sort (dmesg or ftrace) 771 + # sortedkeys: sorted key list to use 772 + def setTimelineRows(list, sortedkeys): 773 + global data 774 + 775 + # clear all rows and set them to undefined 776 + remaining = len(list) 777 + rowdata = dict() 778 + row = 0 779 + for item in list: 780 + list[item]['row'] = -1 781 + 782 + # try to pack each row with as many ranges as possible 783 + while(remaining > 0): 784 + if(row not in rowdata): 785 + rowdata[row] = [] 786 + for item in sortedkeys: 787 + if(list[item]['row'] < 0): 788 + s = list[item]['start'] 789 + e = list[item]['end'] 790 + valid = True 791 + for ritem in rowdata[row]: 792 + rs = ritem['start'] 793 + re = ritem['end'] 794 + if(not (((s <= rs) and (e <= rs)) or ((s >= re) and (e >= re)))): 795 + valid = False 796 + break 797 + if(valid): 798 + rowdata[row].append(list[item]) 799 + list[item]['row'] = row 800 + remaining -= 1 801 + row += 1 802 + return row 803 + 804 + # Function: createTimeScale 805 + # Description: 806 + # Create timescale 
lines for the dmesg and ftrace timelines 807 + # Arguments: 808 + # t0: start time (suspend begin) 809 + # tMax: end time (resume end) 810 + # tSuspend: time when suspend occurs 811 + def createTimeScale(t0, tMax, tSuspended): 812 + global data 813 + timescale = "<div class=\"t\" style=\"right:{0}%\">{1}</div>\n" 814 + output = '<div id="timescale">\n' 815 + 816 + # set scale for timeline 817 + tTotal = tMax - t0 818 + tS = 0.1 819 + if(tTotal <= 0): 820 + return output 821 + if(tTotal > 4): 822 + tS = 1 823 + if(tSuspended < 0): 824 + for i in range(int(tTotal/tS)+1): 825 + pos = "%0.3f" % (100 - ((float(i)*tS*100)/tTotal)) 826 + if(i > 0): 827 + val = "%0.f" % (float(i)*tS*1000) 828 + else: 829 + val = "" 830 + output += timescale.format(pos, val) 831 + else: 832 + tSuspend = tSuspended - t0 833 + divTotal = int(tTotal/tS) + 1 834 + divSuspend = int(tSuspend/tS) 835 + s0 = (tSuspend - tS*divSuspend)*100/tTotal 836 + for i in range(divTotal): 837 + pos = "%0.3f" % (100 - ((float(i)*tS*100)/tTotal) - s0) 838 + if((i == 0) and (s0 < 3)): 839 + val = "" 840 + elif(i == divSuspend): 841 + val = "S/R" 842 + else: 843 + val = "%0.f" % (float(i-divSuspend)*tS*1000) 844 + output += timescale.format(pos, val) 845 + output += '</div>\n' 846 + return output 847 + 848 + # Function: createHTML 849 + # Description: 850 + # Create the output html file. 
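createTimeScale() above positions each tick as a CSS `right` percentage of the total span, with labels in milliseconds. A Python 3 sketch of just the simple branch (tSuspended < 0); the helper name is invented and the suspend-marker branch is omitted:

```python
def timescale_ticks(t0, t_max, step=0.1):
    """Tick (position, label) pairs as in createTimeScale(): position is the
    CSS 'right' offset as a percentage of the span, label is ms from t0."""
    total = t_max - t0
    ticks = []
    if total <= 0:
        return ticks
    for i in range(int(total / step) + 1):
        pos = 100 - (i * step * 100) / total
        label = "%0.f" % (i * step * 1000) if i > 0 else ""
        ticks.append((round(pos, 3), label))
    return ticks
```

The original widens the step from 0.1 s to 1 s when the span exceeds 4 s to keep the labels readable.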
851 + def createHTML(): 852 + global sysvals, data 853 + 854 + data.normalizeTime() 855 + 856 + # html function templates 857 + headline_stamp = '<div class="stamp">{0} {1} {2} {3}</div>\n' 858 + html_zoombox = '<center><button id="zoomin">ZOOM IN</button><button id="zoomout">ZOOM OUT</button><button id="zoomdef">ZOOM 1:1</button></center>\n<div id="dmesgzoombox" class="zoombox">\n' 859 + html_timeline = '<div id="{0}" class="timeline" style="height:{1}px">\n' 860 + html_device = '<div id="{0}" title="{1}" class="thread" style="left:{2}%;top:{3}%;height:{4}%;width:{5}%;">{6}</div>\n' 861 + html_phase = '<div class="phase" style="left:{0}%;width:{1}%;top:{2}%;height:{3}%;background-color:{4}">{5}</div>\n' 862 + html_legend = '<div class="square" style="left:{0}%;background-color:{1}">&nbsp;{2}</div>\n' 863 + html_timetotal = '<table class="time1">\n<tr>'\ 864 + '<td class="gray">{2} Suspend Time: <b>{0} ms</b></td>'\ 865 + '<td class="gray">{2} Resume Time: <b>{1} ms</b></td>'\ 866 + '</tr>\n</table>\n' 867 + html_timegroups = '<table class="time2">\n<tr>'\ 868 + '<td class="green">Kernel Suspend: {0} ms</td>'\ 869 + '<td class="purple">Firmware Suspend: {1} ms</td>'\ 870 + '<td class="purple">Firmware Resume: {2} ms</td>'\ 871 + '<td class="yellow">Kernel Resume: {3} ms</td>'\ 872 + '</tr>\n</table>\n' 873 + 874 + # device timeline (dmesg) 875 + if(data.usedmesg): 876 + data.vprint("Creating Device Timeline...") 877 + devtl = Timeline() 878 + 879 + # Generate the header for this timeline 880 + t0 = data.start 881 + tMax = data.end 882 + tTotal = tMax - t0 883 + if(tTotal == 0): 884 + print("ERROR: No timeline data") 885 + sys.exit() 886 + suspend_time = "%.0f"%(-data.start*1000) 887 + resume_time = "%.0f"%(data.end*1000) 888 + if data.fwValid: 889 + devtl.html['timeline'] = html_timetotal.format(suspend_time, resume_time, "Total") 890 + sktime = "%.3f"%((data.dmesg['suspend_cpu']['end'] - data.dmesg['suspend_general']['start'])*1000) 891 + sftime = 
"%.3f"%(data.fwSuspend / 1000000.0) 892 + rftime = "%.3f"%(data.fwResume / 1000000.0) 893 + rktime = "%.3f"%((data.dmesg['resume_general']['end'] - data.dmesg['resume_cpu']['start'])*1000) 894 + devtl.html['timeline'] += html_timegroups.format(sktime, sftime, rftime, rktime) 895 + else: 896 + devtl.html['timeline'] = html_timetotal.format(suspend_time, resume_time, "Kernel") 897 + 898 + # determine the maximum number of rows we need to draw 899 + timelinerows = 0 900 + for phase in data.dmesg: 901 + list = data.dmesg[phase]['list'] 902 + rows = setTimelineRows(list, list) 903 + data.dmesg[phase]['row'] = rows 904 + if(rows > timelinerows): 905 + timelinerows = rows 906 + 907 + # calculate the timeline height and create its bounding box 908 + devtl.setRows(timelinerows + 1) 909 + devtl.html['timeline'] += html_zoombox; 910 + devtl.html['timeline'] += html_timeline.format("dmesg", devtl.height); 911 + 912 + # draw the colored boxes for each of the phases 913 + for b in data.dmesg: 914 + phase = data.dmesg[b] 915 + left = "%.3f" % (((phase['start']-data.start)*100)/tTotal) 916 + width = "%.3f" % (((phase['end']-phase['start'])*100)/tTotal) 917 + devtl.html['timeline'] += html_phase.format(left, width, "%.3f"%devtl.scaleH, "%.3f"%(100-devtl.scaleH), data.dmesg[b]['color'], "") 918 + 919 + # draw the time scale, try to make the number of labels readable 920 + devtl.html['scale'] = createTimeScale(t0, tMax, data.tSuspended) 921 + devtl.html['timeline'] += devtl.html['scale'] 922 + for b in data.dmesg: 923 + phaselist = data.dmesg[b]['list'] 924 + for d in phaselist: 925 + name = d 926 + if(d in data.altdevname): 927 + name = data.altdevname[d] 928 + dev = phaselist[d] 929 + height = (100.0 - devtl.scaleH)/data.dmesg[b]['row'] 930 + top = "%.3f" % ((dev['row']*height) + devtl.scaleH) 931 + left = "%.3f" % (((dev['start']-data.start)*100)/tTotal) 932 + width = "%.3f" % (((dev['end']-dev['start'])*100)/tTotal) 933 + len = " (%0.3f ms) " % ((dev['end']-dev['start'])*1000) 
934 + color = "rgba(204,204,204,0.5)" 935 + devtl.html['timeline'] += html_device.format(dev['id'], name+len+b, left, top, "%.3f"%height, width, name) 936 + 937 + # timeline is finished 938 + devtl.html['timeline'] += "</div>\n</div>\n" 939 + 940 + # draw a legend which describes the phases by color 941 + devtl.html['legend'] = "<div class=\"legend\">\n" 942 + pdelta = 100.0/data.phases.__len__() 943 + pmargin = pdelta / 4.0 944 + for phase in data.phases: 945 + order = "%.2f" % ((data.dmesg[phase]['order'] * pdelta) + pmargin) 946 + name = string.replace(phase, "_", " &nbsp;") 947 + devtl.html['legend'] += html_legend.format(order, data.dmesg[phase]['color'], name) 948 + devtl.html['legend'] += "</div>\n" 949 + 950 + hf = open(sysvals.htmlfile, 'w') 951 + thread_height = 0 952 + 953 + # write the html header first (html head, css code, everything up to the start of body) 954 + html_header = "<!DOCTYPE html>\n<html>\n<head>\n\ 955 + <meta http-equiv=\"content-type\" content=\"text/html; charset=UTF-8\">\n\ 956 + <title>AnalyzeSuspend</title>\n\ 957 + <style type='text/css'>\n\ 958 + body {overflow-y: scroll;}\n\ 959 + .stamp {width: 100%;text-align:center;background-color:gray;line-height:30px;color:white;font: 25px Arial;}\n\ 960 + .callgraph {margin-top: 30px;box-shadow: 5px 5px 20px black;}\n\ 961 + .callgraph article * {padding-left: 28px;}\n\ 962 + h1 {color:black;font: bold 30px Times;}\n\ 963 + table {width:100%;}\n\ 964 + .gray {background-color:rgba(80,80,80,0.1);}\n\ 965 + .green {background-color:rgba(204,255,204,0.4);}\n\ 966 + .purple {background-color:rgba(128,0,128,0.2);}\n\ 967 + .yellow {background-color:rgba(255,255,204,0.4);}\n\ 968 + .time1 {font: 22px Arial;border:1px solid;}\n\ 969 + .time2 {font: 15px Arial;border-bottom:1px solid;border-left:1px solid;border-right:1px solid;}\n\ 970 + td {text-align: center;}\n\ 971 + .tdhl {color: red;}\n\ 972 + .hide {display: none;}\n\ 973 + .pf {display: none;}\n\ 974 + .pf:checked + label {background: 
url(\'data:image/svg+xml;utf,<?xml version=\"1.0\" standalone=\"no\"?><svg xmlns=\"http://www.w3.org/2000/svg\" height=\"18\" width=\"18\" version=\"1.1\"><circle cx=\"9\" cy=\"9\" r=\"8\" stroke=\"black\" stroke-width=\"1\" fill=\"white\"/><rect x=\"4\" y=\"8\" width=\"10\" height=\"2\" style=\"fill:black;stroke-width:0\"/><rect x=\"8\" y=\"4\" width=\"2\" height=\"10\" style=\"fill:black;stroke-width:0\"/></svg>\') no-repeat left center;}\n\ 975 + .pf:not(:checked) ~ label {background: url(\'data:image/svg+xml;utf,<?xml version=\"1.0\" standalone=\"no\"?><svg xmlns=\"http://www.w3.org/2000/svg\" height=\"18\" width=\"18\" version=\"1.1\"><circle cx=\"9\" cy=\"9\" r=\"8\" stroke=\"black\" stroke-width=\"1\" fill=\"white\"/><rect x=\"4\" y=\"8\" width=\"10\" height=\"2\" style=\"fill:black;stroke-width:0\"/></svg>\') no-repeat left center;}\n\ 976 + .pf:checked ~ *:not(:nth-child(2)) {display: none;}\n\ 977 + .zoombox {position: relative; width: 100%; overflow-x: scroll;}\n\ 978 + .timeline {position: relative; font-size: 14px;cursor: pointer;width: 100%; overflow: hidden; background-color:#dddddd;}\n\ 979 + .thread {position: absolute; height: "+"%.3f"%thread_height+"%; overflow: hidden; line-height: 30px; border:1px solid;text-align:center;white-space:nowrap;background-color:rgba(204,204,204,0.5);}\n\ 980 + .thread:hover {background-color:white;border:1px solid red;z-index:10;}\n\ 981 + .phase {position: absolute;overflow: hidden;border:0px;text-align:center;}\n\ 982 + .t {position: absolute; top: 0%; height: 100%; border-right:1px solid black;}\n\ 983 + .legend {position: relative; width: 100%; height: 40px; text-align: center;margin-bottom:20px}\n\ 984 + .legend .square {position:absolute;top:10px; width: 0px;height: 20px;border:1px solid;padding-left:20px;}\n\ 985 + button {height:40px;width:200px;margin-bottom:20px;margin-top:20px;font-size:24px;}\n\ 986 + </style>\n</head>\n<body>\n" 987 + hf.write(html_header) 988 + 989 + # write the test title and general 
info header 990 + if(data.stamp['time'] != ""): 991 + hf.write(headline_stamp.format(data.stamp['host'], 992 + data.stamp['kernel'], data.stamp['mode'], data.stamp['time'])) 993 + 994 + # write the dmesg data (device timeline) 995 + if(data.usedmesg): 996 + hf.write(devtl.html['timeline']) 997 + hf.write(devtl.html['legend']) 998 + hf.write('<div id="devicedetail"></div>\n') 999 + hf.write('<div id="devicetree"></div>\n') 1000 + 1001 + # write the ftrace data (callgraph) 1002 + if(data.useftrace): 1003 + hf.write('<section id="callgraphs" class="callgraph">\n') 1004 + # write out the ftrace data converted to html 1005 + html_func_top = '<article id="{0}" class="atop" style="background-color:{1}">\n<input type="checkbox" class="pf" id="f{2}" checked/><label for="f{2}">{3} {4}</label>\n' 1006 + html_func_start = '<article>\n<input type="checkbox" class="pf" id="f{0}" checked/><label for="f{0}">{1} {2}</label>\n' 1007 + html_func_end = '</article>\n' 1008 + html_func_leaf = '<article>{0} {1}</article>\n' 1009 + num = 0 1010 + for p in data.phases: 1011 + list = data.dmesg[p]['list'] 1012 + for devname in data.sortedDevices(p): 1013 + if('ftrace' not in list[devname]): 1014 + continue 1015 + name = devname 1016 + if(devname in data.altdevname): 1017 + name = data.altdevname[devname] 1018 + devid = list[devname]['id'] 1019 + cg = list[devname]['ftrace'] 1020 + flen = "(%.3f ms)" % ((cg.end - cg.start)*1000) 1021 + hf.write(html_func_top.format(devid, data.dmesg[p]['color'], num, name+" "+p, flen)) 1022 + num += 1 1023 + for line in cg.list: 1024 + if(line.length < 0.000000001): 1025 + flen = "" 1026 + else: 1027 + flen = "(%.3f ms)" % (line.length*1000) 1028 + if(line.freturn and line.fcall): 1029 + hf.write(html_func_leaf.format(line.name, flen)) 1030 + elif(line.freturn): 1031 + hf.write(html_func_end) 1032 + else: 1033 + hf.write(html_func_start.format(num, line.name, flen)) 1034 + num += 1 1035 + hf.write(html_func_end) 1036 + hf.write("\n\n </section>\n") 1037 + # 
write the footer and close 1038 + addScriptCode(hf) 1039 + hf.write("</body>\n</html>\n") 1040 + hf.close() 1041 + return True 1042 + 1043 + def addScriptCode(hf): 1044 + global data 1045 + 1046 + t0 = (data.start - data.tSuspended) * 1000 1047 + tMax = (data.end - data.tSuspended) * 1000 1048 + # create an array in javascript memory with the device details 1049 + detail = ' var bounds = [%f,%f];\n' % (t0, tMax) 1050 + detail += ' var d = [];\n' 1051 + dfmt = ' d["%s"] = { n:"%s", p:"%s", c:[%s] };\n'; 1052 + for p in data.dmesg: 1053 + list = data.dmesg[p]['list'] 1054 + for d in list: 1055 + parent = data.deviceParentID(d, p) 1056 + idlist = data.deviceChildrenIDs(d, p) 1057 + idstr = "" 1058 + for i in idlist: 1059 + if(idstr == ""): 1060 + idstr += '"'+i+'"' 1061 + else: 1062 + idstr += ', '+'"'+i+'"' 1063 + detail += dfmt % (list[d]['id'], d, parent, idstr) 1064 + 1065 + # add the code which will manipulate the data in the browser 1066 + script_code = \ 1067 + '<script type="text/javascript">\n'+detail+\ 1068 + ' var filter = [];\n'\ 1069 + ' var table = [];\n'\ 1070 + ' function deviceParent(devid) {\n'\ 1071 + ' var devlist = [];\n'\ 1072 + ' if(filter.indexOf(devid) < 0) filter[filter.length] = devid;\n'\ 1073 + ' if(d[devid].p in d)\n'\ 1074 + ' devlist = deviceParent(d[devid].p);\n'\ 1075 + ' else if(d[devid].p != "")\n'\ 1076 + ' devlist = [d[devid].p];\n'\ 1077 + ' devlist[devlist.length] = d[devid].n;\n'\ 1078 + ' return devlist;\n'\ 1079 + ' }\n'\ 1080 + ' function deviceChildren(devid, column, row) {\n'\ 1081 + ' if(!(devid in d)) return;\n'\ 1082 + ' if(filter.indexOf(devid) < 0) filter[filter.length] = devid;\n'\ 1083 + ' var cell = {name: d[devid].n, span: 1};\n'\ 1084 + ' var span = 0;\n'\ 1085 + ' if(column >= table.length) table[column] = [];\n'\ 1086 + ' table[column][row] = cell;\n'\ 1087 + ' for(var i = 0; i < d[devid].c.length; i++) {\n'\ 1088 + ' var cid = d[devid].c[i];\n'\ 1089 + ' span += deviceChildren(cid, column+1, row+span);\n'\ 
1090 + ' }\n'\ 1091 + ' if(span == 0) span = 1;\n'\ 1092 + ' table[column][row].span = span;\n'\ 1093 + ' return span;\n'\ 1094 + ' }\n'\ 1095 + ' function deviceTree(devid, resume) {\n'\ 1096 + ' var html = "<table border=1>";\n'\ 1097 + ' filter = [];\n'\ 1098 + ' table = [];\n'\ 1099 + ' plist = deviceParent(devid);\n'\ 1100 + ' var devidx = plist.length - 1;\n'\ 1101 + ' for(var i = 0; i < devidx; i++)\n'\ 1102 + ' table[i] = [{name: plist[i], span: 1}];\n'\ 1103 + ' deviceChildren(devid, devidx, 0);\n'\ 1104 + ' for(var i = 0; i < devidx; i++)\n'\ 1105 + ' table[i][0].span = table[devidx][0].span;\n'\ 1106 + ' for(var row = 0; row < table[0][0].span; row++) {\n'\ 1107 + ' html += "<tr>";\n'\ 1108 + ' for(var col = 0; col < table.length; col++)\n'\ 1109 + ' if(row in table[col]) {\n'\ 1110 + ' var cell = table[col][row];\n'\ 1111 + ' var args = "";\n'\ 1112 + ' if(cell.span > 1)\n'\ 1113 + ' args += " rowspan="+cell.span;\n'\ 1114 + ' if((col == devidx) && (row == 0))\n'\ 1115 + ' args += " class=tdhl";\n'\ 1116 + ' if(resume)\n'\ 1117 + ' html += "<td"+args+">"+cell.name+" &rarr;</td>";\n'\ 1118 + ' else\n'\ 1119 + ' html += "<td"+args+">&larr; "+cell.name+"</td>";\n'\ 1120 + ' }\n'\ 1121 + ' html += "</tr>";\n'\ 1122 + ' }\n'\ 1123 + ' html += "</table>";\n'\ 1124 + ' return html;\n'\ 1125 + ' }\n'\ 1126 + ' function zoomTimeline() {\n'\ 1127 + ' var timescale = document.getElementById("timescale");\n'\ 1128 + ' var dmesg = document.getElementById("dmesg");\n'\ 1129 + ' var zoombox = document.getElementById("dmesgzoombox");\n'\ 1130 + ' var val = parseFloat(dmesg.style.width);\n'\ 1131 + ' var newval = 100;\n'\ 1132 + ' var sh = window.outerWidth / 2;\n'\ 1133 + ' if(this.id == "zoomin") {\n'\ 1134 + ' newval = val * 1.2;\n'\ 1135 + ' if(newval > 40000) newval = 40000;\n'\ 1136 + ' dmesg.style.width = newval+"%";\n'\ 1137 + ' zoombox.scrollLeft = ((zoombox.scrollLeft + sh) * newval / val) - sh;\n'\ 1138 + ' } else if (this.id == "zoomout") {\n'\ 1139 + ' 
newval = val / 1.2;\n'\ 1140 + ' if(newval < 100) newval = 100;\n'\ 1141 + ' dmesg.style.width = newval+"%";\n'\ 1142 + ' zoombox.scrollLeft = ((zoombox.scrollLeft + sh) * newval / val) - sh;\n'\ 1143 + ' } else {\n'\ 1144 + ' zoombox.scrollLeft = 0;\n'\ 1145 + ' dmesg.style.width = "100%";\n'\ 1146 + ' }\n'\ 1147 + ' var html = "";\n'\ 1148 + ' var t0 = bounds[0];\n'\ 1149 + ' var tMax = bounds[1];\n'\ 1150 + ' var tTotal = tMax - t0;\n'\ 1151 + ' var wTotal = tTotal * 100.0 / newval;\n'\ 1152 + ' for(var tS = 1000; (wTotal / tS) < 3; tS /= 10);\n'\ 1153 + ' if(tS < 1) tS = 1;\n'\ 1154 + ' for(var s = ((t0 / tS)|0) * tS; s < tMax; s += tS) {\n'\ 1155 + ' var pos = (tMax - s) * 100.0 / tTotal;\n'\ 1156 + ' var name = (s == 0)?"S/R":(s+"ms");\n'\ 1157 + ' html += \"<div class=\\\"t\\\" style=\\\"right:\"+pos+\"%\\\">\"+name+\"</div>\";\n'\ 1158 + ' }\n'\ 1159 + ' timescale.innerHTML = html;\n'\ 1160 + ' }\n'\ 1161 + ' function deviceDetail() {\n'\ 1162 + ' var devtitle = document.getElementById("devicedetail");\n'\ 1163 + ' devtitle.innerHTML = "<h1>"+this.title+"</h1>";\n'\ 1164 + ' var devtree = document.getElementById("devicetree");\n'\ 1165 + ' devtree.innerHTML = deviceTree(this.id, (this.title.indexOf("resume") >= 0));\n'\ 1166 + ' var cglist = document.getElementById("callgraphs");\n'\ 1167 + ' if(!cglist) return;\n'\ 1168 + ' var cg = cglist.getElementsByClassName("atop");\n'\ 1169 + ' for (var i = 0; i < cg.length; i++) {\n'\ 1170 + ' if(filter.indexOf(cg[i].id) >= 0) {\n'\ 1171 + ' cg[i].style.display = "block";\n'\ 1172 + ' } else {\n'\ 1173 + ' cg[i].style.display = "none";\n'\ 1174 + ' }\n'\ 1175 + ' }\n'\ 1176 + ' }\n'\ 1177 + ' window.addEventListener("load", function () {\n'\ 1178 + ' var dmesg = document.getElementById("dmesg");\n'\ 1179 + ' dmesg.style.width = "100%"\n'\ 1180 + ' document.getElementById("zoomin").onclick = zoomTimeline;\n'\ 1181 + ' document.getElementById("zoomout").onclick = zoomTimeline;\n'\ 1182 + ' 
document.getElementById("zoomdef").onclick = zoomTimeline;\n'\ 1183 + ' var dev = dmesg.getElementsByClassName("thread");\n'\ 1184 + ' for (var i = 0; i < dev.length; i++) {\n'\ 1185 + ' dev[i].onclick = deviceDetail;\n'\ 1186 + ' }\n'\ 1187 + ' zoomTimeline();\n'\ 1188 + ' });\n'\ 1189 + '</script>\n' 1190 + hf.write(script_code); 1191 + 1192 + # Function: executeSuspend 1193 + # Description: 1194 + # Execute system suspend through the sysfs interface 1195 + def executeSuspend(): 1196 + global sysvals, data 1197 + 1198 + detectUSB() 1199 + pf = open(sysvals.powerfile, 'w') 1200 + # clear the kernel ring buffer just as we start 1201 + os.system("dmesg -C") 1202 + # start ftrace 1203 + if(data.useftrace): 1204 + print("START TRACING") 1205 + os.system("echo 1 > "+sysvals.tpath+"tracing_on") 1206 + os.system("echo SUSPEND START > "+sysvals.tpath+"trace_marker") 1207 + # initiate suspend 1208 + if(sysvals.rtcwake): 1209 + print("SUSPEND START") 1210 + os.system("rtcwake -s 10 -m "+sysvals.suspendmode) 1211 + else: 1212 + print("SUSPEND START (press a key to resume)") 1213 + pf.write(sysvals.suspendmode) 1214 + # execution will pause here 1215 + pf.close() 1216 + # return from suspend 1217 + print("RESUME COMPLETE") 1218 + # stop ftrace 1219 + if(data.useftrace): 1220 + os.system("echo RESUME COMPLETE > "+sysvals.tpath+"trace_marker") 1221 + os.system("echo 0 > "+sysvals.tpath+"tracing_on") 1222 + print("CAPTURING FTRACE") 1223 + os.system("echo \""+sysvals.teststamp+"\" > "+sysvals.ftracefile) 1224 + os.system("cat "+sysvals.tpath+"trace >> "+sysvals.ftracefile) 1225 + # grab a copy of the dmesg output 1226 + print("CAPTURING DMESG") 1227 + os.system("echo \""+sysvals.teststamp+"\" > "+sysvals.dmesgfile) 1228 + os.system("dmesg -c >> "+sysvals.dmesgfile) 1229 + 1230 + # Function: detectUSB 1231 + # Description: 1232 + # Detect all the USB hosts and devices currently connected 1233 + def detectUSB(): 1234 + global sysvals, data 1235 + 1236 + for dirname, dirnames, 
filenames in os.walk("/sys/devices"): 1237 + if(re.match(r".*/usb[0-9]*.*", dirname) and 1238 + "idVendor" in filenames and "idProduct" in filenames): 1239 + vid = os.popen("cat %s/idVendor 2>/dev/null" % dirname).read().replace('\n', '') 1240 + pid = os.popen("cat %s/idProduct 2>/dev/null" % dirname).read().replace('\n', '') 1241 + product = os.popen("cat %s/product 2>/dev/null" % dirname).read().replace('\n', '') 1242 + name = dirname.split('/')[-1] 1243 + if(len(product) > 0): 1244 + data.altdevname[name] = "%s [%s]" % (product, name) 1245 + else: 1246 + data.altdevname[name] = "%s:%s [%s]" % (vid, pid, name) 1247 + 1248 + def getModes(): 1249 + global sysvals 1250 + modes = "" 1251 + if(os.path.exists(sysvals.powerfile)): 1252 + fp = open(sysvals.powerfile, 'r') 1253 + modes = string.split(fp.read()) 1254 + fp.close() 1255 + return modes 1256 + 1257 + # Function: statusCheck 1258 + # Description: 1259 + # Verify that the requested command and options will work 1260 + def statusCheck(dryrun): 1261 + global sysvals, data 1262 + res = dict() 1263 + 1264 + if(data.notestrun): 1265 + print("SUCCESS: The command should run!") 1266 + return 1267 + 1268 + # check we have root access 1269 + check = "YES" 1270 + if(os.environ['USER'] != "root"): 1271 + if(not dryrun): 1272 + doError("root access is required", False) 1273 + check = "NO" 1274 + res[" have root access: "] = check 1275 + 1276 + # check sysfs is mounted 1277 + check = "YES" 1278 + if(not os.path.exists(sysvals.powerfile)): 1279 + if(not dryrun): 1280 + doError("sysfs must be mounted", False) 1281 + check = "NO" 1282 + res[" is sysfs mounted: "] = check 1283 + 1284 + # check target mode is a valid mode 1285 + check = "YES" 1286 + modes = getModes() 1287 + if(sysvals.suspendmode not in modes): 1288 + if(not dryrun): 1289 + doError("%s is not a valid power mode" % sysvals.suspendmode, False) 1290 + check = "NO" 1291 + res[" is "+sysvals.suspendmode+" a power mode: "] = check 1292 + 1293 + # check if ftrace is 
available 1294 + if(data.useftrace): 1295 + check = "YES" 1296 + if(not verifyFtrace()): 1297 + if(not dryrun): 1298 + doError("ftrace is not configured", False) 1299 + check = "NO" 1300 + res[" is ftrace usable: "] = check 1301 + 1302 + # check if rtcwake 1303 + if(sysvals.rtcwake): 1304 + check = "YES" 1305 + version = os.popen("rtcwake -V 2>/dev/null").read() 1306 + if(not version.startswith("rtcwake")): 1307 + if(not dryrun): 1308 + doError("rtcwake is not installed", False) 1309 + check = "NO" 1310 + res[" is rtcwake usable: "] = check 1311 + 1312 + if(dryrun): 1313 + status = True 1314 + print("Checking if system can run the current command:") 1315 + for r in res: 1316 + print("%s\t%s" % (r, res[r])) 1317 + if(res[r] != "YES"): 1318 + status = False 1319 + if(status): 1320 + print("SUCCESS: The command should run!") 1321 + else: 1322 + print("FAILURE: The command won't run!") 1323 + 1324 + def printHelp(): 1325 + global sysvals 1326 + modes = getModes() 1327 + 1328 + print("") 1329 + print("AnalyzeSuspend") 1330 + print("Usage: sudo analyze_suspend.py <options>") 1331 + print("") 1332 + print("Description:") 1333 + print(" Initiates a system suspend/resume while capturing dmesg") 1334 + print(" and (optionally) ftrace data to analyze device timing") 1335 + print("") 1336 + print(" Generates output files in subdirectory: suspend-mmddyy-HHMMSS") 1337 + print(" HTML output: <hostname>_<mode>.html") 1338 + print(" raw dmesg output: <hostname>_<mode>_dmesg.txt") 1339 + print(" raw ftrace output (with -f): <hostname>_<mode>_ftrace.txt") 1340 + print("") 1341 + print("Options:") 1342 + print(" [general]") 1343 + print(" -h Print this help text") 1344 + print(" -verbose Print extra information during execution and analysis") 1345 + print(" -status Test to see if the system is enabled to run this tool") 1346 + print(" -modes List available suspend modes") 1347 + print(" -m mode Mode to initiate for suspend %s (default: %s)") % (modes, sysvals.suspendmode) 1348 + 
+	print("    -rtcwake  Use rtcwake to autoresume after 10 seconds (default: disabled)")
+	print("    -f        Use ftrace to create device callgraphs (default: disabled)")
+	print("  [re-analyze data from previous runs]")
+	print("    -dmesg dmesgfile    Create HTML timeline from dmesg file")
+	print("    -ftrace ftracefile  Create HTML callgraph from ftrace file")
+	print("")
+	return True
+
+def doError(msg, help):
+	print("ERROR: %s" % msg)
+	if(help == True):
+		printHelp()
+	sys.exit()
+
+# -- script main --
+# loop through the command line arguments
+cmd = ""
+args = iter(sys.argv[1:])
+for arg in args:
+	if(arg == "-m"):
+		try:
+			val = args.next()
+		except:
+			doError("No mode supplied", True)
+		sysvals.suspendmode = val
+	elif(arg == "-f"):
+		data.useftrace = True
+	elif(arg == "-modes"):
+		cmd = "modes"
+	elif(arg == "-status"):
+		cmd = "status"
+	elif(arg == "-verbose"):
+		data.verbose = True
+	elif(arg == "-rtcwake"):
+		sysvals.rtcwake = True
+	elif(arg == "-dmesg"):
+		try:
+			val = args.next()
+		except:
+			doError("No dmesg file supplied", True)
+		data.notestrun = True
+		data.usedmesg = True
+		sysvals.dmesgfile = val
+	elif(arg == "-ftrace"):
+		try:
+			val = args.next()
+		except:
+			doError("No ftrace file supplied", True)
+		data.notestrun = True
+		data.useftrace = True
+		sysvals.ftracefile = val
+	elif(arg == "-h"):
+		printHelp()
+		sys.exit()
+	else:
+		doError("Invalid argument: "+arg, True)
+
+# just run a utility command and exit
+if(cmd != ""):
+	if(cmd == "status"):
+		statusCheck(True)
+	elif(cmd == "modes"):
+		modes = getModes()
+		print(modes)
+	sys.exit()
+
+data.initialize()
+
+# if instructed, re-analyze existing data files
+if(data.notestrun):
+	sysvals.setOutputFile()
+	data.vprint("Output file: %s" % sysvals.htmlfile)
+	if(sysvals.dmesgfile != ""):
+		analyzeKernelLog()
+	if(sysvals.ftracefile != ""):
+		analyzeTraceLog()
+	createHTML()
+	sys.exit()
+
+# verify that we can run a test
+data.usedmesg = True
+statusCheck(False)
+
+# prepare for the test
+if(data.useftrace):
+	initFtrace()
+sysvals.initTestOutput()
+
+data.vprint("Output files:\n    %s" % sysvals.dmesgfile)
+if(data.useftrace):
+	data.vprint("    %s" % sysvals.ftracefile)
+data.vprint("    %s" % sysvals.htmlfile)
+
+# execute the test
+executeSuspend()
+analyzeKernelLog()
+if(data.useftrace):
+	analyzeTraceLog()
+createHTML()
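Two of the script's helpers are pure string transforms, so their behavior can be shown in isolation. The sketch below (function names are ours, not the script's) mirrors getModes()'s whitespace split of /sys/power/state and detectUSB()'s device-label fallback; the mode and device values are illustrative:

```python
def parse_suspend_modes(state_text):
    """Mirror getModes(): /sys/power/state is one line of
    whitespace-separated mode names, e.g. "freeze mem disk"."""
    return state_text.split()

def usb_device_label(name, vid, pid, product):
    """Mirror detectUSB()'s altdevname rule: prefer the sysfs
    'product' string, else fall back to idVendor:idProduct."""
    if len(product) > 0:
        return "%s [%s]" % (product, name)
    return "%s:%s [%s]" % (vid, pid, name)

print(parse_suspend_modes("freeze mem disk\n"))          # ['freeze', 'mem', 'disk']
print(usb_device_label("1-1.2", "8087", "0024", "Hub"))  # Hub [1-1.2]
print(usb_device_label("1-1.3", "046d", "c52b", ""))     # 046d:c52b [1-1.3]
```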
tools/Makefile (+13 -3)
···
 help:
 	@echo 'Possible targets:'
 	@echo ''
+	@echo '  acpi       - ACPI tools'
 	@echo '  cgroup     - cgroup tools'
 	@echo '  cpupower   - a tool for all things x86 CPU power'
 	@echo '  firewire   - the userspace part of nosy, an IEEE-1394 traffic sniffer'
···
 	@echo '    the respective build directory.'
 	@echo '  clean: a summary clean target to clean _all_ folders'

+acpi: FORCE
+	$(call descend,power/$@)
+
 cpupower: FORCE
 	$(call descend,power/$@)
···
 tmon: FORCE
 	$(call descend,thermal/$@)

+acpi_install:
+	$(call descend,power/$(@:_install=),install)
+
 cpupower_install:
 	$(call descend,power/$(@:_install=),install)
···
 tmon_install:
 	$(call descend,thermal/$(@:_install=),install)

-install: cgroup_install cpupower_install firewire_install lguest_install \
+install: acpi_install cgroup_install cpupower_install firewire_install lguest_install \
 	perf_install selftests_install turbostat_install usb_install \
 	virtio_install vm_install net_install x86_energy_perf_policy_install \
 	tmon
+
+acpi_clean:
+	$(call descend,power/acpi,clean)

 cpupower_clean:
 	$(call descend,power/cpupower,clean)
···
 tmon_clean:
 	$(call descend,thermal/tmon,clean)

-clean: cgroup_clean cpupower_clean firewire_clean lguest_clean perf_clean \
-	selftests_clean turbostat_clean usb_clean virtio_clean \
+clean: acpi_clean cgroup_clean cpupower_clean firewire_clean lguest_clean \
+	perf_clean selftests_clean turbostat_clean usb_clean virtio_clean \
 	vm_clean net_clean x86_energy_perf_policy_clean tmon_clean

 .PHONY: FORCE
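The new `acpi_install` rule reuses the Makefile's `$(@:_install=)` idiom: strip the `_install` suffix from the target name, then descend into `power/<tool>`. A minimal sketch of that substitution in Python (the helper name is ours):

```python
def descend_dir(target):
    """Mimic $(call descend,power/$(@:_install=),install): drop a
    trailing '_install' from the make target to get the tools/power
    subdirectory to descend into."""
    suffix = "_install"
    if target.endswith(suffix):
        target = target[:-len(suffix)]
    return "power/" + target

print(descend_dir("acpi_install"))      # power/acpi
print(descend_dir("cpupower_install"))  # power/cpupower
```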
tools/power/acpi/Makefile (+137 -12)
···
-PROG=	acpidump
-SRCS=	acpidump.c
+# tools/power/acpi/Makefile - ACPI tool Makefile
+#
+# Copyright (c) 2013, Intel Corporation
+#   Author: Lv Zheng <lv.zheng@intel.com>
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; version 2
+# of the License.
+
+OUTPUT=./
+ifeq ("$(origin O)", "command line")
+	OUTPUT := $(O)/
+endif
+
+ifneq ($(OUTPUT),)
+# check that the output directory actually exists
+OUTDIR := $(shell cd $(OUTPUT) && /bin/pwd)
+$(if $(OUTDIR),, $(error output directory "$(OUTPUT)" does not exist))
+endif
+
+# --- CONFIGURATION BEGIN ---
+
+# Set the following to `true' to make an unstripped, unoptimized
+# binary. Leave this set to `false' for production use.
+DEBUG ?=	true
+
+# make the build silent. Set this to something else to make it noisy again.
+V ?=		false
+
+# Prefix to the directories we're installing to
+DESTDIR ?=
+
+# --- CONFIGURATION END ---
+
+# Directory definitions. These are default and most probably
+# do not need to be changed. Please note that DESTDIR is
+# added in front of any of them
+
+bindir ?=	/usr/bin
+sbindir ?=	/usr/sbin
+mandir ?=	/usr/man
+
+# Toolchain: what tools do we use, and what options do they need:
+
+INSTALL = /usr/bin/install -c
+INSTALL_PROGRAM = ${INSTALL}
+INSTALL_DATA = ${INSTALL} -m 644
+INSTALL_SCRIPT = ${INSTALL_PROGRAM}
+
+# If you are running a cross compiler, you may want to set this
+# to something more interesting, like "arm-linux-". If you want
+# to compile vs uClibc, that can be done here as well.
+CROSS = #/usr/i386-linux-uclibc/usr/bin/i386-uclibc-
+CC = $(CROSS)gcc
+LD = $(CROSS)gcc
+STRIP = $(CROSS)strip
+HOSTCC = gcc
+
+# check if compiler option is supported
+cc-supports = ${shell if $(CC) ${1} -S -o /dev/null -x c /dev/null > /dev/null 2>&1; then echo "$(1)"; fi;}
+
+# use '-Os' optimization if available, else use -O2
+OPTIMIZATION := $(call cc-supports,-Os,-O2)
+
+WARNINGS := -Wall
+WARNINGS += $(call cc-supports,-Wstrict-prototypes)
+WARNINGS += $(call cc-supports,-Wdeclaration-after-statement)
+
 KERNEL_INCLUDE := ../../../include
-CFLAGS += -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Os -s -D_LINUX -DDEFINE_ALTERNATE_TYPES -I$(KERNEL_INCLUDE)
+CFLAGS += -D_LINUX -DDEFINE_ALTERNATE_TYPES -I$(KERNEL_INCLUDE)
+CFLAGS += $(WARNINGS)

-all: acpidump
-$(PROG) : $(SRCS)
-	$(CC) $(CFLAGS) $(SRCS) -o $(PROG)
+ifeq ($(strip $(V)),false)
+	QUIET=@
+	ECHO=@echo
+else
+	QUIET=
+	ECHO=@\#
+endif
+export QUIET ECHO

-CLEANFILES= $(PROG)
+# if DEBUG is enabled, then we do not strip or optimize
+ifeq ($(strip $(DEBUG)),true)
+	CFLAGS += -O1 -g -DDEBUG
+	STRIPCMD = /bin/true -Since_we_are_debugging
+else
+	CFLAGS += $(OPTIMIZATION) -fomit-frame-pointer
+	STRIPCMD = $(STRIP) -s --remove-section=.note --remove-section=.comment
+endif

-clean :
-	rm -f $(CLEANFILES) $(patsubst %.c,%.o, $(SRCS)) *~
+# --- ACPIDUMP BEGIN ---

-install :
-	install acpidump /usr/sbin/acpidump
-	install acpidump.8 /usr/share/man/man8
+vpath %.c \
+	tools/acpidump
+
+DUMP_OBJS = \
+	acpidump.o
+
+DUMP_OBJS := $(addprefix $(OUTPUT)tools/acpidump/,$(DUMP_OBJS))
+
+$(OUTPUT)acpidump: $(DUMP_OBJS)
+	$(ECHO) "  LD      " $@
+	$(QUIET) $(LD) $(CFLAGS) $(LDFLAGS) $(DUMP_OBJS) -L$(OUTPUT) -o $@
+	$(QUIET) $(STRIPCMD) $@
+
+$(OUTPUT)tools/acpidump/%.o: %.c
+	$(ECHO) "  CC      " $@
+	$(QUIET) $(CC) -c $(CFLAGS) -o $@ $<
+
+# --- ACPIDUMP END ---
+
+all: $(OUTPUT)acpidump
+	echo $(OUTPUT)
+
+clean:
+	-find $(OUTPUT) \( -not -type d \) -and \( -name '*~' -o -name '*.[oas]' \) -type f -print \
+	 | xargs rm -f
+	-rm -f $(OUTPUT)acpidump
+
+install-tools:
+	$(INSTALL) -d $(DESTDIR)${sbindir}
+	$(INSTALL_PROGRAM) $(OUTPUT)acpidump $(DESTDIR)${sbindir}
+
+install-man:
+	$(INSTALL_DATA) -D man/acpidump.8 $(DESTDIR)${mandir}/man8/acpidump.8
+
+install: all install-tools install-man
+
+uninstall:
+	- rm -f $(DESTDIR)${sbindir}/acpidump
+	- rm -f $(DESTDIR)${mandir}/man8/acpidump.8
+
+.PHONY: all utils install-tools install-man install uninstall clean
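The `O=` handling at the top of this Makefile resolves the output directory via `cd $(OUTPUT) && /bin/pwd` and aborts the build if the directory is missing. The same check is easy to sketch in Python (the function name is illustrative, not part of the build):

```python
import os

def resolve_output_dir(output="./"):
    """Mimic the Makefile's OUTDIR check: resolve the requested
    output directory to an absolute path, failing with an error
    message when the directory does not exist."""
    if not os.path.isdir(output):
        raise SystemExit('output directory "%s" does not exist' % output)
    return os.path.realpath(output)

print(resolve_output_dir("."))  # absolute path of the current directory
```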
renamed: tools/power/acpi/acpidump.8 -> tools/power/acpi/man/acpidump.8
renamed: tools/power/acpi/acpidump.c -> tools/power/acpi/tools/acpidump/acpidump.c
tools/power/cpupower/debug/kernel/cpufreq-test_tsc.c (+1 -4)
···
 #include <linux/module.h>
 #include <linux/init.h>
 #include <linux/delay.h>
-
+#include <linux/acpi.h>
 #include <asm/io.h>
-
-#include <acpi/acpi_bus.h>
-#include <acpi/acpi_drivers.h>

 static int pm_tmr_ioport = 0;
tools/power/cpupower/utils/cpufreq-set.c (+1 -1)
···
 		print_unknown_arg();
 		return -EINVAL;
 	}
-	if ((sscanf(optarg, "%s", gov)) != 1) {
+	if ((sscanf(optarg, "%19s", gov)) != 1) {
 		print_unknown_arg();
 		return -EINVAL;
 	}
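The one-character fix above matters because an unbounded `%s` lets `sscanf` write past the end of the fixed-size `gov` buffer on a long governor name, while `%19s` stops after 19 characters plus the terminating NUL (implying a 20-byte buffer). A Python model of the bounded conversion, purely to illustrate the semantics (not the tool's code):

```python
def scan_bounded_string(text, width=19):
    """Model C's sscanf(s, "%19s", buf): skip leading whitespace,
    then take at most `width` non-whitespace characters."""
    stripped = text.lstrip()
    if not stripped:
        return None  # sscanf would return 0: no conversion
    token = stripped.split()[0]
    return token[:width]

print(scan_bounded_string("  ondemand"))  # ondemand
print(scan_bounded_string("x" * 40))      # truncated to 19 'x' characters
```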