Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm+acpi-4.4-rc1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management and ACPI updates from Rafael Wysocki:
"Quite a few new features are included this time.

First off, the Collaborative Processor Performance Control interface
(version 2) defined by ACPI will now be supported on ARM64 along with
a cpufreq frontend for CPU performance scaling.

Second, ACPI gets a new infrastructure for the early probing of IRQ
chips and clock sources (along the lines of the existing similar
mechanism for DT).
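The DT mechanism being mirrored here (CLOCKSOURCE_OF_DECLARE() and friends) boils down to a table of {identifier, init function} pairs that the core walks very early at boot, running each init whose identifier the firmware actually describes. A minimal userspace sketch of that pattern (the names and the "GTDT" entry are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Each early driver contributes an (id, init) pair; at boot the core
 * scans the table and runs the init functions whose id matches what
 * the firmware (DT or, with this series, ACPI) actually describes. */
typedef int (*probe_fn)(void);

struct probe_entry {
	const char *id;		/* e.g. a DT compatible or an ACPI table signature */
	probe_fn init;
};

static int timer_inits;			/* counts how often the init ran */
static int toy_timer_init(void) { timer_inits++; return 0; }

static const struct probe_entry probe_table[] = {
	{ "GTDT", toy_timer_init },	/* hypothetical timer entry */
};

/* Model of the early probe pass: run every matching entry and report
 * how many initialized successfully. */
static int probe_all(const char *id)
{
	int matched = 0;

	for (size_t i = 0; i < sizeof(probe_table) / sizeof(probe_table[0]); i++)
		if (strcmp(probe_table[i].id, id) == 0 &&
		    probe_table[i].init() == 0)
			matched++;
	return matched;
}
```

In the kernel the table lives in a dedicated linker section populated by registration macros rather than a literal array; the sketch only mirrors the match-then-init flow.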

Next, the ACPI core and the generic device properties API will now
support a recently introduced hierarchical properties extension of the
_DSD (Device Specific Data) ACPI device configuration object. If the
ACPI platform firmware uses that extension to organize device
properties in a hierarchical way, the kernel will automatically handle
it and make those properties available to device drivers via the
generic device properties API.
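As a rough illustration of what "hierarchical" means here, consider a userspace model in which a firmware node carries its own key/value list plus named child nodes; a driver can then descend into a child and query it like any other node. All names below are illustrative stand-ins, not the kernel's fwnode API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of hierarchical device properties. */
struct prop { const char *key, *value; };

struct fwnode {
	const char *name;
	const struct prop *props;
	size_t nprops;
	const struct fwnode *children;
	size_t nchildren;
};

static const char *node_prop(const struct fwnode *n, const char *key)
{
	for (size_t i = 0; i < n->nprops; i++)
		if (strcmp(n->props[i].key, key) == 0)
			return n->props[i].value;
	return NULL;		/* property not present on this node */
}

static const struct fwnode *node_child(const struct fwnode *n, const char *name)
{
	for (size_t i = 0; i < n->nchildren; i++)
		if (strcmp(n->children[i].name, name) == 0)
			return &n->children[i];
	return NULL;		/* no such child node */
}
```

A NIC node with a hypothetical "port0" child carrying {"phy-mode", "rgmii"} would be queried by first fetching the child and then reading the child's property through the same accessor used for top-level properties.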

It will also be possible to build ACPICA's AML interpreter
debugger into the kernel now and use that to diagnose AML-related
problems more efficiently. In the future, this should make it
possible to single-step AML execution and do similar things.
Interesting stuff, although somewhat experimental at this point.

Finally, the PM core gets a new mechanism that can be used by device
drivers to distinguish between suspend-to-RAM (based on platform
firmware support) and suspend-to-idle (or other variants of system
suspend the platform firmware is not involved in) and possibly
optimize their device suspend/resume handling accordingly.
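In driver terms this boils down to a boolean query in the suspend path. The sketch below models the idea in userspace: pm_suspend_via_firmware() is a local stub standing in for the new PM core helper (treat the exact kernel interface name as an assumption), and struct device / my_dev_* are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace stand-ins for the kernel interfaces. */
static bool firmware_assisted;		/* models the PM core's flag */
static bool pm_suspend_via_firmware(void) { return firmware_assisted; }

struct device { int state_saved; };

static int my_dev_save_state(struct device *dev)
{
	dev->state_saved = 1;
	return 0;
}

/* A suspend callback that skips its own (potentially slow) state save
 * when the platform firmware will manage the transition anyway. */
static int my_dev_suspend(struct device *dev)
{
	if (pm_suspend_via_firmware())
		return 0;		/* firmware-assisted suspend-to-RAM */
	return my_dev_save_state(dev);	/* suspend-to-idle: save it ourselves */
}
```

The resume side would mirror this with the corresponding "was firmware involved" query before restoring state.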

In addition to that, some existing features are re-organized quite
substantially.

First, the ACPI-based handling of PCI host bridges on x86 and ia64 is
unified and the common code goes into the ACPI core (so as to reduce
code duplication and eliminate non-essential differences between the
two architectures in that area).

Second, the Operating Performance Points (OPP) framework is
reorganized to make the code easier to find and follow.

Next, the cpufreq core's sysfs interface is reorganized to get rid of
the "primary CPU" concept for configurations in which the same
performance scaling settings are shared between multiple CPUs.

Finally, some interfaces that aren't necessary any more are dropped
from the generic power domains framework.

On top of the above we have some minor extensions, cleanups and bug
fixes in multiple places, as usual.

Specifics:

- ACPICA update to upstream revision 20150930 (Bob Moore, Lv Zheng).

The most significant change is to allow the AML debugger to be
built into the kernel. On top of that there is an update related
to the NFIT table (the ACPI persistent memory interface) and a few
fixes and cleanups.

- ACPI CPPC2 (Collaborative Processor Performance Control v2) support
along with a cpufreq frontend (Ashwin Chaugule).

This can only be enabled on ARM64 at this point.

- New ACPI infrastructure for the early probing of IRQ chips and
clock sources (Marc Zyngier).

- Support for a new hierarchical properties extension of the ACPI
_DSD (Device Specific Data) device configuration object allowing
the kernel to handle hierarchical properties (provided by the
platform firmware this way) automatically and make them available
to device drivers via the generic device properties interface
(Rafael Wysocki).

- Generic device properties API extension to obtain the index of a
given string value in an array of strings, along the lines of
of_property_match_string(), but working for all of the supported
firmware node types, and support for the "dma-names" device
property based on it (Mika Westerberg).
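The semantics are easy to model: return the array index of the first entry equal to the requested string, or a negative value when it is absent. A userspace sketch (the kernel helper operates on firmware-node properties and returns specific errno values; this illustrative version just returns -1):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative model of the string-index lookup. */
static int match_string_index(const char **values, size_t count,
			      const char *value)
{
	for (size_t i = 0; i < count; i++)
		if (strcmp(values[i], value) == 0)
			return (int)i;
	return -1;	/* not found */
}
```

A driver asking for the "tx" channel of a device whose "dma-names" property is {"rx", "tx"} would get index 1 back and use it to select the matching descriptor.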

- ACPI core fix to parse the MADT (Multiple APIC Description Table)
entries in the order expected by platform firmware (and mandated by
the specification) to avoid confusion on systems with more than 255
logical CPUs (Lukasz Anaczkowski).

- Consolidation of the ACPI-based handling of PCI host bridges on x86
and ia64 (Jiang Liu).

- ACPI core fixes to ensure that the correct IRQ number is used to
represent the SCI (System Control Interrupt) in the cases when it
has been re-mapped (Chen Yu).

- New ACPI backlight quirk for Lenovo IdeaPad S405 (Hans de Goede).

- ACPI EC driver fixes (Lv Zheng).

- Assorted ACPI fixes and cleanups (Dan Carpenter, Insu Yun, Jiri
Kosina, Rami Rosen, Rasmus Villemoes).

- New mechanism in the PM core allowing drivers to check if the
platform firmware is going to be involved in the upcoming system
suspend or if it has been involved in the suspend the system is
resuming from at the moment (Rafael Wysocki).

This should allow drivers to optimize their suspend/resume handling
in some cases and the changes include a couple of users of it (the
i8042 input driver, PCI PM).

- PCI PM fix to prevent runtime-suspended devices with PME enabled
from being resumed during system suspend even if they aren't
configured to wake up the system from sleep (Rafael Wysocki).

- New mechanism to report the number of a wakeup IRQ that woke up the
system from sleep last time (Alexandra Yates).
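From user space the reported number can simply be read back out of sysfs; the attribute is /sys/power/pm_wakeup_irq, documented in the ABI diff further down. A hedged C sketch, with the parsing split out from the file access:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Parse the decimal IRQ number the attribute reports. */
static long parse_irq(const char *buf)
{
	return strtol(buf, NULL, 10);
}

/* Read /sys/power/pm_wakeup_irq; returns -1 when the attribute is
 * unreadable (no recent wakeup recorded, or not running on Linux). */
static long read_wakeup_irq(void)
{
	char buf[32];
	FILE *f = fopen("/sys/power/pm_wakeup_irq", "r");

	if (!f)
		return -1;
	if (!fgets(buf, sizeof(buf), f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	return parse_irq(buf);
}
```

Comparing the value against /proc/interrupts then identifies the device whose IRQ line triggered the (possibly spurious) wakeup.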

- Removal of unused interfaces from the generic power domains
framework and fixes related to latency measurements in that code
(Ulf Hansson, Daniel Lezcano).

- cpufreq core sysfs interface rework to make it handle CPUs that
share performance scaling settings (represented by a common cpufreq
policy object) more symmetrically (Viresh Kumar).

This should help to simplify the CPU offline/online handling among
other things.
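The reworked layout names each sysfs directory after the policy rather than a "primary" CPU; per the commit titles below, a policy directory is suffixed with the first CPU in the policy's related_cpus mask. A toy model of that naming rule (policy_dir_name() is an illustrative name, not a kernel function):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Toy model: the shared policy's directory is "policyX", with X the
 * lowest-numbered CPU in related_cpus. */
static int policy_dir_name(char *buf, size_t len,
			   const int *related_cpus, size_t ncpus)
{
	int first = related_cpus[0];

	for (size_t i = 1; i < ncpus; i++)
		if (related_cpus[i] < first)
			first = related_cpus[i];
	return snprintf(buf, len, "policy%d", first);
}
```

CPUs 2 and 3 sharing one policy would thus both be served by a single cpu/cpufreq/policy2 directory, so offlining or onlining either CPU no longer moves the settings around.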

- cpufreq core fixes and cleanups (Viresh Kumar).

- intel_pstate fixes related to the Turbo Activation Ratio (TAR)
mechanism on client platforms which causes the turbo P-states range
to vary depending on platform firmware settings (Srinivas
Pandruvada).

- intel_pstate sysfs interface fix (Prarit Bhargava).

- Assorted cpufreq driver (imx, tegra20, powernv, integrator) fixes
and cleanups (Bai Ping, Bartlomiej Zolnierkiewicz, Shilpasri G
Bhat, Luis de Bethencourt).

- cpuidle mvebu driver cleanups (Russell King).

- OPP (Operating Performance Points) framework code reorganization to
make it more maintainable (Viresh Kumar).

- Intel Broxton support for the RAPL (Running Average Power Limits)
power capping driver (Amy Wiles).

- Assorted power management code fixes and cleanups (Dan Carpenter,
Geert Uytterhoeven, Geliang Tang, Luis de Bethencourt, Rasmus
Villemoes)"

* tag 'pm+acpi-4.4-rc1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (108 commits)
cpufreq: postfix policy directory with the first CPU in related_cpus
cpufreq: create cpu/cpufreq/policyX directories
cpufreq: remove cpufreq_sysfs_{create|remove}_file()
cpufreq: create cpu/cpufreq at boot time
cpufreq: Use cpumask_copy instead of cpumask_or to copy a mask
cpufreq: ondemand: Drop unnecessary locks from update_sampling_rate()
PM / Domains: Merge measurements for PM QoS device latencies
PM / Domains: Don't measure ->start|stop() latency in system PM callbacks
PM / clk: Fix broken build due to non-matching code and header #ifdefs
ACPI / Documentation: add copy_dsdt to ACPI format options
ACPI / sysfs: correctly check failing memory allocation
ACPI / video: Add a quirk to force native backlight on Lenovo IdeaPad S405
ACPI / CPPC: Fix potential memory leak
ACPI / CPPC: signedness bug in register_pcc_channel()
ACPI / PAD: power_saving_thread() is not freezable
ACPI / PM: Fix incorrect wakeup IRQ setting during suspend-to-idle
ACPI: Using correct irq when waiting for events
ACPI: Use correct IRQ when uninstalling ACPI interrupt handler
cpuidle: mvebu: disable the bind/unbind attributes and use builtin_platform_driver
cpuidle: mvebu: clean up multiple platform drivers
...

+13980 -2217
+12
Documentation/ABI/testing/sysfs-power
···
 		Writing a "1" enables this printing while writing a "0"
 		disables it.  The default value is "0".  Reading from this file
 		will display the current value.
+
+What:		/sys/power/pm_wakeup_irq
+Date:		April 2015
+Contact:	Alexandra Yates <alexandra.yates@linux.intel.org>
+Description:
+		The /sys/power/pm_wakeup_irq file reports to user space the IRQ
+		number of the first wakeup interrupt (that is, the first
+		interrupt from an IRQ line armed for system wakeup) seen by the
+		kernel during the most recent system suspend/resume cycle.
+
+		This output is useful for system wakeup diagnostics of spurious
+		wakeup interrupts.
+5 -1
Documentation/kernel-parameters.txt
···
 
 	acpi=		[HW,ACPI,X86,ARM64]
 			Advanced Configuration and Power Interface
-			Format: { force | off | strict | noirq | rsdt }
+			Format: { force | off | strict | noirq | rsdt |
+				  copy_dsdt }
 			force -- enable ACPI if default was off
 			off -- disable ACPI if default was on
 			noirq -- do not use ACPI for IRQ routing
···
 		hwp_only
 			Only load intel_pstate on systems which support
 			hardware P state control (HWP) if available.
+		no_acpi
+			Don't use ACPI processor performance control objects
+			_PSS and _PPC specified limits.
 
 	intremap=	[X86-64, Intel-IOMMU]
 			on	enable Interrupt Remapping (default)
+1 -1
arch/arm/kernel/time.c
···
 #ifdef CONFIG_COMMON_CLK
 		of_clk_init(NULL);
 #endif
-		clocksource_of_init();
+		clocksource_probe();
 	}
 }
+1 -1
arch/arm/mach-imx/mach-imx6q.c
···
 		return;
 	}
 
-	if (of_init_opp_table(cpu_dev)) {
+	if (dev_pm_opp_of_add_table(cpu_dev)) {
 		pr_warn("failed to init OPP table\n");
 		goto put_node;
 	}
+2 -2
arch/arm/mach-omap2/timer.c
···
 void __init omap4_local_timer_init(void)
 {
 	omap4_sync32k_timer_init();
-	clocksource_of_init();
+	clocksource_probe();
 }
 #else
 void __init omap4_local_timer_init(void)
···
 	omap4_sync32k_timer_init();
 	realtime_counter_init();
 
-	clocksource_of_init();
+	clocksource_probe();
 }
 #endif /* CONFIG_SOC_OMAP5 || CONFIG_SOC_DRA7XX */
+1 -1
arch/arm/mach-rockchip/rockchip.c
···
 }
 
 	of_clk_init(NULL);
-	clocksource_of_init();
+	clocksource_probe();
 }
 
 static void __init rockchip_dt_init(void)
+1 -1
arch/arm/mach-shmobile/setup-r8a7779.c
···
 static void __init r8a7779_init_time(void)
 {
 	r8a7779_clocks_init(r8a7779_read_mode_pins());
-	clocksource_of_init();
+	clocksource_probe();
 }
 
 static const char *const r8a7779_compat_dt[] __initconst = {
+1 -1
arch/arm/mach-shmobile/setup-rcar-gen2.c
···
 #endif /* CONFIG_ARM_ARCH_TIMER */
 
 	rcar_gen2_clocks_init(mode);
-	clocksource_of_init();
+	clocksource_probe();
 }
 
 struct memory_reserve_config {
+1 -1
arch/arm/mach-spear/spear13xx.c
···
 	clk_put(pclk);
 
 	spear_setup_of_timer();
-	clocksource_of_init();
+	clocksource_probe();
 }
+1 -1
arch/arm/mach-sunxi/sunxi.c
···
 	of_clk_init(NULL);
 	if (IS_ENABLED(CONFIG_RESET_CONTROLLER))
 		sun6i_reset_init();
-	clocksource_of_init();
+	clocksource_probe();
 }
 
 DT_MACHINE_START(SUN6I_DT, "Allwinner sun6i (A31) Family")
+1 -1
arch/arm/mach-u300/core.c
···
 DT_MACHINE_START(U300_DT, "U300 S335/B335 (Device Tree)")
 	.map_io		= u300_map_io,
 	.init_irq	= u300_init_irq_dt,
-	.init_time	= clocksource_of_init,
+	.init_time	= clocksource_probe,
 	.init_machine	= u300_init_machine_dt,
 	.restart	= u300_restart,
 	.dt_compat	= u300_board_compat,
+1 -1
arch/arm/mach-ux500/timer.c
···
 
 dt_fail:
 	clksrc_dbx500_prcmu_init(prcmu_timer_base);
-	clocksource_of_init();
+	clocksource_probe();
 }
+1 -1
arch/arm/mach-zynq/common.c
···
 
 	zynq_clock_init();
 	of_clk_init(NULL);
-	clocksource_of_init();
+	clocksource_probe();
 }
 
 static struct map_desc zynq_cortex_a9_scu_map __initdata = {
-1
arch/arm64/include/asm/acpi.h
···
 #ifndef _ASM_ACPI_H
 #define _ASM_ACPI_H
 
-#include <linux/irqchip/arm-gic-acpi.h>
 #include <linux/mm.h>
 #include <linux/psci.h>
-13
arch/arm64/include/asm/irq.h
···
 #ifndef __ASM_IRQ_H
 #define __ASM_IRQ_H
 
-#include <linux/irqchip/arm-gic-acpi.h>
-
 #include <asm-generic/irq.h>
 
 struct pt_regs;
 
 extern void set_handle_irq(void (*handle_irq)(struct pt_regs *));
-
-static inline void acpi_irq_init(void)
-{
-	/*
-	 * Hardcode ACPI IRQ chip initialization to GICv2 for now.
-	 * Proper irqchip infrastructure will be implemented along with
-	 * incoming GICv2m|GICv3|ITS bits.
-	 */
-	acpi_gic_init();
-}
-#define acpi_irq_init acpi_irq_init
 
 #endif
-25
arch/arm64/kernel/acpi.c
···
 	}
 }
 
-void __init acpi_gic_init(void)
-{
-	struct acpi_table_header *table;
-	acpi_status status;
-	acpi_size tbl_size;
-	int err;
-
-	if (acpi_disabled)
-		return;
-
-	status = acpi_get_table_with_size(ACPI_SIG_MADT, 0, &table, &tbl_size);
-	if (ACPI_FAILURE(status)) {
-		const char *msg = acpi_format_exception(status);
-
-		pr_err("Failed to get MADT table, %s\n", msg);
-		return;
-	}
-
-	err = gic_v2_acpi_init(table);
-	if (err)
-		pr_err("Failed to initialize GIC IRQ controller");
-
-	early_acpi_os_unmap_memory((char *)table, tbl_size);
-}
-
 #ifdef CONFIG_ACPI_APEI
 pgprot_t arch_apei_get_mem_attribute(phys_addr_t addr)
 {
+1 -7
arch/arm64/kernel/time.c
···
 	u32 arch_timer_rate;
 
 	of_clk_init(NULL);
-	clocksource_of_init();
+	clocksource_probe();
 
 	tick_setup_hrtimer_broadcast();
-
-	/*
-	 * Since ACPI or FDT will only one be available in the system,
-	 * we can use acpi_generic_timer_init() here safely
-	 */
-	acpi_generic_timer_init();
 
 	arch_timer_rate = arch_timer_get_rate();
 	if (!arch_timer_rate)
-5
arch/ia64/include/asm/pci.h
···
 #define pci_legacy_read platform_pci_legacy_read
 #define pci_legacy_write platform_pci_legacy_write
 
-struct iospace_resource {
-	struct list_head list;
-	struct resource res;
-};
-
 struct pci_controller {
 	struct acpi_device *companion;
 	void *iommu;
+103 -271
arch/ia64/pci/pci.c
··· 115 115 .write = pci_write, 116 116 }; 117 117 118 - /* Called by ACPI when it finds a new root bus. */ 119 - 120 - static struct pci_controller *alloc_pci_controller(int seg) 121 - { 122 - struct pci_controller *controller; 123 - 124 - controller = kzalloc(sizeof(*controller), GFP_KERNEL); 125 - if (!controller) 126 - return NULL; 127 - 128 - controller->segment = seg; 129 - return controller; 130 - } 131 - 132 118 struct pci_root_info { 133 - struct acpi_device *bridge; 134 - struct pci_controller *controller; 135 - struct list_head resources; 136 - struct resource *res; 137 - resource_size_t *res_offset; 138 - unsigned int res_num; 119 + struct acpi_pci_root_info common; 120 + struct pci_controller controller; 139 121 struct list_head io_resources; 140 - char *name; 141 122 }; 142 123 143 - static unsigned int 144 - new_space (u64 phys_base, int sparse) 124 + static unsigned int new_space(u64 phys_base, int sparse) 145 125 { 146 126 u64 mmio_base; 147 127 int i; ··· 148 168 return i; 149 169 } 150 170 151 - static u64 add_io_space(struct pci_root_info *info, 152 - struct acpi_resource_address64 *addr) 171 + static int add_io_space(struct device *dev, struct pci_root_info *info, 172 + struct resource_entry *entry) 153 173 { 154 - struct iospace_resource *iospace; 155 - struct resource *resource; 174 + struct resource_entry *iospace; 175 + struct resource *resource, *res = entry->res; 156 176 char *name; 157 177 unsigned long base, min, max, base_port; 158 178 unsigned int sparse = 0, space_nr, len; 159 179 160 - len = strlen(info->name) + 32; 161 - iospace = kzalloc(sizeof(*iospace) + len, GFP_KERNEL); 180 + len = strlen(info->common.name) + 32; 181 + iospace = resource_list_create_entry(NULL, len); 162 182 if (!iospace) { 163 - dev_err(&info->bridge->dev, 164 - "PCI: No memory for %s I/O port space\n", 165 - info->name); 166 - goto out; 183 + dev_err(dev, "PCI: No memory for %s I/O port space\n", 184 + info->common.name); 185 + return -ENOMEM; 167 186 } 168 
187 169 - name = (char *)(iospace + 1); 170 - 171 - min = addr->address.minimum; 172 - max = min + addr->address.address_length - 1; 173 - if (addr->info.io.translation_type == ACPI_SPARSE_TRANSLATION) 188 + if (res->flags & IORESOURCE_IO_SPARSE) 174 189 sparse = 1; 175 - 176 - space_nr = new_space(addr->address.translation_offset, sparse); 190 + space_nr = new_space(entry->offset, sparse); 177 191 if (space_nr == ~0) 178 192 goto free_resource; 179 193 194 + name = (char *)(iospace + 1); 195 + min = res->start - entry->offset; 196 + max = res->end - entry->offset; 180 197 base = __pa(io_space[space_nr].mmio_base); 181 198 base_port = IO_SPACE_BASE(space_nr); 182 - snprintf(name, len, "%s I/O Ports %08lx-%08lx", info->name, 183 - base_port + min, base_port + max); 199 + snprintf(name, len, "%s I/O Ports %08lx-%08lx", info->common.name, 200 + base_port + min, base_port + max); 184 201 185 202 /* 186 203 * The SDM guarantees the legacy 0-64K space is sparse, but if the ··· 187 210 if (space_nr == 0) 188 211 sparse = 1; 189 212 190 - resource = &iospace->res; 213 + resource = iospace->res; 191 214 resource->name = name; 192 215 resource->flags = IORESOURCE_MEM; 193 216 resource->start = base + (sparse ? IO_SPACE_SPARSE_ENCODING(min) : min); 194 217 resource->end = base + (sparse ? 
IO_SPACE_SPARSE_ENCODING(max) : max); 195 218 if (insert_resource(&iomem_resource, resource)) { 196 - dev_err(&info->bridge->dev, 197 - "can't allocate host bridge io space resource %pR\n", 198 - resource); 219 + dev_err(dev, 220 + "can't allocate host bridge io space resource %pR\n", 221 + resource); 199 222 goto free_resource; 200 223 } 201 224 202 - list_add_tail(&iospace->list, &info->io_resources); 203 - return base_port; 225 + entry->offset = base_port; 226 + res->start = min + base_port; 227 + res->end = max + base_port; 228 + resource_list_add_tail(iospace, &info->io_resources); 229 + 230 + return 0; 204 231 205 232 free_resource: 206 - kfree(iospace); 207 - out: 208 - return ~0; 233 + resource_list_free_entry(iospace); 234 + return -ENOSPC; 209 235 } 210 236 211 - static acpi_status resource_to_window(struct acpi_resource *resource, 212 - struct acpi_resource_address64 *addr) 237 + /* 238 + * An IO port or MMIO resource assigned to a PCI host bridge may be 239 + * consumed by the host bridge itself or available to its child 240 + * bus/devices. The ACPI specification defines a bit (Producer/Consumer) 241 + * to tell whether the resource is consumed by the host bridge itself, 242 + * but firmware hasn't used that bit consistently, so we can't rely on it. 243 + * 244 + * On x86 and IA64 platforms, all IO port and MMIO resources are assumed 245 + * to be available to child bus/devices except one special case: 246 + * IO port [0xCF8-0xCFF] is consumed by the host bridge itself 247 + * to access PCI configuration space. 248 + * 249 + * So explicitly filter out PCI CFG IO ports[0xCF8-0xCFF]. 
250 + */ 251 + static bool resource_is_pcicfg_ioport(struct resource *res) 213 252 { 214 - acpi_status status; 215 - 216 - /* 217 - * We're only interested in _CRS descriptors that are 218 - * - address space descriptors for memory or I/O space 219 - * - non-zero size 220 - */ 221 - status = acpi_resource_to_address64(resource, addr); 222 - if (ACPI_SUCCESS(status) && 223 - (addr->resource_type == ACPI_MEMORY_RANGE || 224 - addr->resource_type == ACPI_IO_RANGE) && 225 - addr->address.address_length) 226 - return AE_OK; 227 - 228 - return AE_ERROR; 253 + return (res->flags & IORESOURCE_IO) && 254 + res->start == 0xCF8 && res->end == 0xCFF; 229 255 } 230 256 231 - static acpi_status count_window(struct acpi_resource *resource, void *data) 257 + static int pci_acpi_root_prepare_resources(struct acpi_pci_root_info *ci) 232 258 { 233 - unsigned int *windows = (unsigned int *) data; 234 - struct acpi_resource_address64 addr; 235 - acpi_status status; 236 - 237 - status = resource_to_window(resource, &addr); 238 - if (ACPI_SUCCESS(status)) 239 - (*windows)++; 240 - 241 - return AE_OK; 242 - } 243 - 244 - static acpi_status add_window(struct acpi_resource *res, void *data) 245 - { 246 - struct pci_root_info *info = data; 247 - struct resource *resource; 248 - struct acpi_resource_address64 addr; 249 - acpi_status status; 250 - unsigned long flags, offset = 0; 251 - struct resource *root; 252 - 253 - /* Return AE_OK for non-window resources to keep scanning for more */ 254 - status = resource_to_window(res, &addr); 255 - if (!ACPI_SUCCESS(status)) 256 - return AE_OK; 257 - 258 - if (addr.resource_type == ACPI_MEMORY_RANGE) { 259 - flags = IORESOURCE_MEM; 260 - root = &iomem_resource; 261 - offset = addr.address.translation_offset; 262 - } else if (addr.resource_type == ACPI_IO_RANGE) { 263 - flags = IORESOURCE_IO; 264 - root = &ioport_resource; 265 - offset = add_io_space(info, &addr); 266 - if (offset == ~0) 267 - return AE_OK; 268 - } else 269 - return AE_OK; 270 - 271 - 
resource = &info->res[info->res_num]; 272 - resource->name = info->name; 273 - resource->flags = flags; 274 - resource->start = addr.address.minimum + offset; 275 - resource->end = resource->start + addr.address.address_length - 1; 276 - info->res_offset[info->res_num] = offset; 277 - 278 - if (insert_resource(root, resource)) { 279 - dev_err(&info->bridge->dev, 280 - "can't allocate host bridge window %pR\n", 281 - resource); 282 - } else { 283 - if (offset) 284 - dev_info(&info->bridge->dev, "host bridge window %pR " 285 - "(PCI address [%#llx-%#llx])\n", 286 - resource, 287 - resource->start - offset, 288 - resource->end - offset); 289 - else 290 - dev_info(&info->bridge->dev, 291 - "host bridge window %pR\n", resource); 292 - } 293 - /* HP's firmware has a hack to work around a Windows bug. 294 - * Ignore these tiny memory ranges */ 295 - if (!((resource->flags & IORESOURCE_MEM) && 296 - (resource->end - resource->start < 16))) 297 - pci_add_resource_offset(&info->resources, resource, 298 - info->res_offset[info->res_num]); 299 - 300 - info->res_num++; 301 - return AE_OK; 302 - } 303 - 304 - static void free_pci_root_info_res(struct pci_root_info *info) 305 - { 306 - struct iospace_resource *iospace, *tmp; 307 - 308 - list_for_each_entry_safe(iospace, tmp, &info->io_resources, list) 309 - kfree(iospace); 310 - 311 - kfree(info->name); 312 - kfree(info->res); 313 - info->res = NULL; 314 - kfree(info->res_offset); 315 - info->res_offset = NULL; 316 - info->res_num = 0; 317 - kfree(info->controller); 318 - info->controller = NULL; 319 - } 320 - 321 - static void __release_pci_root_info(struct pci_root_info *info) 322 - { 323 - int i; 259 + struct device *dev = &ci->bridge->dev; 260 + struct pci_root_info *info; 324 261 struct resource *res; 325 - struct iospace_resource *iospace; 262 + struct resource_entry *entry, *tmp; 263 + int status; 326 264 327 - list_for_each_entry(iospace, &info->io_resources, list) 328 - release_resource(&iospace->res); 329 - 330 - for (i 
= 0; i < info->res_num; i++) { 331 - res = &info->res[i]; 332 - 333 - if (!res->parent) 334 - continue; 335 - 336 - if (!(res->flags & (IORESOURCE_MEM | IORESOURCE_IO))) 337 - continue; 338 - 339 - release_resource(res); 265 + status = acpi_pci_probe_root_resources(ci); 266 + if (status > 0) { 267 + info = container_of(ci, struct pci_root_info, common); 268 + resource_list_for_each_entry_safe(entry, tmp, &ci->resources) { 269 + res = entry->res; 270 + if (res->flags & IORESOURCE_MEM) { 271 + /* 272 + * HP's firmware has a hack to work around a 273 + * Windows bug. Ignore these tiny memory ranges. 274 + */ 275 + if (resource_size(res) <= 16) { 276 + resource_list_del(entry); 277 + insert_resource(&iomem_resource, 278 + entry->res); 279 + resource_list_add_tail(entry, 280 + &info->io_resources); 281 + } 282 + } else if (res->flags & IORESOURCE_IO) { 283 + if (resource_is_pcicfg_ioport(entry->res)) 284 + resource_list_destroy_entry(entry); 285 + else if (add_io_space(dev, info, entry)) 286 + resource_list_destroy_entry(entry); 287 + } 288 + } 340 289 } 341 290 342 - free_pci_root_info_res(info); 291 + return status; 292 + } 293 + 294 + static void pci_acpi_root_release_info(struct acpi_pci_root_info *ci) 295 + { 296 + struct pci_root_info *info; 297 + struct resource_entry *entry, *tmp; 298 + 299 + info = container_of(ci, struct pci_root_info, common); 300 + resource_list_for_each_entry_safe(entry, tmp, &info->io_resources) { 301 + release_resource(entry->res); 302 + resource_list_destroy_entry(entry); 303 + } 343 304 kfree(info); 344 305 } 345 306 346 - static void release_pci_root_info(struct pci_host_bridge *bridge) 347 - { 348 - struct pci_root_info *info = bridge->release_data; 349 - 350 - __release_pci_root_info(info); 351 - } 352 - 353 - static int 354 - probe_pci_root_info(struct pci_root_info *info, struct acpi_device *device, 355 - int busnum, int domain) 356 - { 357 - char *name; 358 - 359 - name = kmalloc(16, GFP_KERNEL); 360 - if (!name) 361 - return 
-ENOMEM; 362 - 363 - sprintf(name, "PCI Bus %04x:%02x", domain, busnum); 364 - info->bridge = device; 365 - info->name = name; 366 - 367 - acpi_walk_resources(device->handle, METHOD_NAME__CRS, count_window, 368 - &info->res_num); 369 - if (info->res_num) { 370 - info->res = 371 - kzalloc_node(sizeof(*info->res) * info->res_num, 372 - GFP_KERNEL, info->controller->node); 373 - if (!info->res) { 374 - kfree(name); 375 - return -ENOMEM; 376 - } 377 - 378 - info->res_offset = 379 - kzalloc_node(sizeof(*info->res_offset) * info->res_num, 380 - GFP_KERNEL, info->controller->node); 381 - if (!info->res_offset) { 382 - kfree(name); 383 - kfree(info->res); 384 - info->res = NULL; 385 - return -ENOMEM; 386 - } 387 - 388 - info->res_num = 0; 389 - acpi_walk_resources(device->handle, METHOD_NAME__CRS, 390 - add_window, info); 391 - } else 392 - kfree(name); 393 - 394 - return 0; 395 - } 307 + static struct acpi_pci_root_ops pci_acpi_root_ops = { 308 + .pci_ops = &pci_root_ops, 309 + .release_info = pci_acpi_root_release_info, 310 + .prepare_resources = pci_acpi_root_prepare_resources, 311 + }; 396 312 397 313 struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root) 398 314 { 399 315 struct acpi_device *device = root->device; 400 - int domain = root->segment; 401 - int bus = root->secondary.start; 402 - struct pci_controller *controller; 403 - struct pci_root_info *info = NULL; 404 - int busnum = root->secondary.start; 405 - struct pci_bus *pbus; 406 - int ret; 407 - 408 - controller = alloc_pci_controller(domain); 409 - if (!controller) 410 - return NULL; 411 - 412 - controller->companion = device; 413 - controller->node = acpi_get_node(device->handle); 316 + struct pci_root_info *info; 414 317 415 318 info = kzalloc(sizeof(*info), GFP_KERNEL); 416 319 if (!info) { 417 320 dev_err(&device->dev, 418 - "pci_bus %04x:%02x: ignored (out of memory)\n", 419 - domain, busnum); 420 - kfree(controller); 321 + "pci_bus %04x:%02x: ignored (out of memory)\n", 322 + root->segment, 
(int)root->secondary.start); 421 323 return NULL; 422 324 } 423 325 424 - info->controller = controller; 326 + info->controller.segment = root->segment; 327 + info->controller.companion = device; 328 + info->controller.node = acpi_get_node(device->handle); 425 329 INIT_LIST_HEAD(&info->io_resources); 426 - INIT_LIST_HEAD(&info->resources); 427 - 428 - ret = probe_pci_root_info(info, device, busnum, domain); 429 - if (ret) { 430 - kfree(info->controller); 431 - kfree(info); 432 - return NULL; 433 - } 434 - /* insert busn resource at first */ 435 - pci_add_resource(&info->resources, &root->secondary); 436 - /* 437 - * See arch/x86/pci/acpi.c. 438 - * The desired pci bus might already be scanned in a quirk. We 439 - * should handle the case here, but it appears that IA64 hasn't 440 - * such quirk. So we just ignore the case now. 441 - */ 442 - pbus = pci_create_root_bus(NULL, bus, &pci_root_ops, controller, 443 - &info->resources); 444 - if (!pbus) { 445 - pci_free_resource_list(&info->resources); 446 - __release_pci_root_info(info); 447 - return NULL; 448 - } 449 - 450 - pci_set_host_bridge_release(to_pci_host_bridge(pbus->bridge), 451 - release_pci_root_info, info); 452 - pci_scan_child_bus(pbus); 453 - return pbus; 330 + return acpi_pci_root_create(root, &pci_acpi_root_ops, 331 + &info->common, &info->controller); 454 332 } 455 333 456 334 int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
+1 -1
arch/microblaze/kernel/setup.c
···
 {
 	of_clk_init(NULL);
 	setup_cpuinfo_clk();
-	clocksource_of_init();
+	clocksource_probe();
 }
 
 #ifdef CONFIG_DEBUG_FS
+1 -1
arch/mips/pistachio/time.c
···
 	struct clk *clk;
 
 	of_clk_init(NULL);
-	clocksource_of_init();
+	clocksource_probe();
 
 	np = of_get_cpu_node(0, NULL);
 	if (!np) {
+1 -1
arch/mips/ralink/clk.c
···
 	pr_info("CPU Clock: %ldMHz\n", clk_get_rate(clk) / 1000000);
 	mips_hpt_frequency = clk_get_rate(clk) / 2;
 	clk_put(clk);
-	clocksource_of_init();
+	clocksource_probe();
 }
+1 -1
arch/nios2/kernel/time.c
···
 	if (count < 2)
 		panic("%d timer is found, it needs 2 timers in system\n", count);
 
-	clocksource_of_init();
+	clocksource_probe();
 }
 
 CLOCKSOURCE_OF_DECLARE(nios2_timer, ALTR_TIMER_COMPATIBLE, nios2_time_init);
+7
arch/x86/include/asm/msr-index.h
···
 #define MSR_GFX_PERF_LIMIT_REASONS	0x000006B0
 #define MSR_RING_PERF_LIMIT_REASONS	0x000006B1
 
+/* Config TDP MSRs */
+#define MSR_CONFIG_TDP_NOMINAL		0x00000648
+#define MSR_CONFIG_TDP_LEVEL1		0x00000649
+#define MSR_CONFIG_TDP_LEVEL2		0x0000064A
+#define MSR_CONFIG_TDP_CONTROL		0x0000064B
+#define MSR_TURBO_ACTIVATION_RATIO	0x0000064C
+
 /* Hardware P state interface */
 #define MSR_PPERF			0x0000064e
 #define MSR_PERF_LIMIT_REASONS		0x0000064f
+18 -4
arch/x86/kernel/acpi/boot.c
···
 {
 	int count;
 	int x2count = 0;
+	int ret;
+	struct acpi_subtable_proc madt_proc[2];
 
 	if (!cpu_has_apic)
 		return -ENODEV;
···
 				      acpi_parse_sapic, MAX_LOCAL_APIC);
 
 	if (!count) {
-		x2count = acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_X2APIC,
-					acpi_parse_x2apic, MAX_LOCAL_APIC);
-		count = acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_APIC,
-					acpi_parse_lapic, MAX_LOCAL_APIC);
+		memset(madt_proc, 0, sizeof(madt_proc));
+		madt_proc[0].id = ACPI_MADT_TYPE_LOCAL_APIC;
+		madt_proc[0].handler = acpi_parse_lapic;
+		madt_proc[1].id = ACPI_MADT_TYPE_LOCAL_X2APIC;
+		madt_proc[1].handler = acpi_parse_x2apic;
+		ret = acpi_table_parse_entries_array(ACPI_SIG_MADT,
+				sizeof(struct acpi_table_madt),
+				madt_proc, ARRAY_SIZE(madt_proc), MAX_LOCAL_APIC);
+		if (ret < 0) {
+			printk(KERN_ERR PREFIX
+					"Error parsing LAPIC/X2APIC entries\n");
+			return ret;
+		}
+
+		x2count = madt_proc[0].count;
+		count = madt_proc[1].count;
 	}
 	if (!count && !x2count) {
 		printk(KERN_ERR PREFIX "No LAPIC entries present\n");
+87 -207
arch/x86/pci/acpi.c
··· 4 4 #include <linux/irq.h> 5 5 #include <linux/dmi.h> 6 6 #include <linux/slab.h> 7 + #include <linux/pci-acpi.h> 7 8 #include <asm/numa.h> 8 9 #include <asm/pci_x86.h> 9 10 10 11 struct pci_root_info { 11 - struct acpi_device *bridge; 12 - char name[16]; 12 + struct acpi_pci_root_info common; 13 13 struct pci_sysdata sd; 14 14 #ifdef CONFIG_PCI_MMCONFIG 15 15 bool mcfg_added; 16 - u16 segment; 17 16 u8 start_bus; 18 17 u8 end_bus; 19 18 #endif ··· 177 178 return 0; 178 179 } 179 180 180 - static int setup_mcfg_map(struct pci_root_info *info, u16 seg, u8 start, 181 - u8 end, phys_addr_t addr) 181 + static int setup_mcfg_map(struct acpi_pci_root_info *ci) 182 182 { 183 - int result; 184 - struct device *dev = &info->bridge->dev; 183 + int result, seg; 184 + struct pci_root_info *info; 185 + struct acpi_pci_root *root = ci->root; 186 + struct device *dev = &ci->bridge->dev; 185 187 186 - info->start_bus = start; 187 - info->end_bus = end; 188 + info = container_of(ci, struct pci_root_info, common); 189 + info->start_bus = (u8)root->secondary.start; 190 + info->end_bus = (u8)root->secondary.end; 188 191 info->mcfg_added = false; 192 + seg = info->sd.domain; 189 193 190 194 /* return success if MMCFG is not in use */ 191 195 if (raw_pci_ext_ops && raw_pci_ext_ops != &pci_mmcfg) ··· 197 195 if (!(pci_probe & PCI_PROBE_MMCONF)) 198 196 return check_segment(seg, dev, "MMCONFIG is disabled,"); 199 197 200 - result = pci_mmconfig_insert(dev, seg, start, end, addr); 198 + result = pci_mmconfig_insert(dev, seg, info->start_bus, info->end_bus, 199 + root->mcfg_addr); 201 200 if (result == 0) { 202 201 /* enable MMCFG if it hasn't been enabled yet */ 203 202 if (raw_pci_ext_ops == NULL) ··· 211 208 return 0; 212 209 } 213 210 214 - static void teardown_mcfg_map(struct pci_root_info *info) 211 + static void teardown_mcfg_map(struct acpi_pci_root_info *ci) 215 212 { 213 + struct pci_root_info *info; 214 + 215 + info = container_of(ci, struct pci_root_info, common); 216 216 if 
(info->mcfg_added) { 217 - pci_mmconfig_delete(info->segment, info->start_bus, 218 - info->end_bus); 217 + pci_mmconfig_delete(info->sd.domain, 218 + info->start_bus, info->end_bus); 219 219 info->mcfg_added = false; 220 220 } 221 221 } 222 222 #else 223 - static int setup_mcfg_map(struct pci_root_info *info, 224 - u16 seg, u8 start, u8 end, 225 - phys_addr_t addr) 223 + static int setup_mcfg_map(struct acpi_pci_root_info *ci) 226 224 { 227 225 return 0; 228 226 } 229 - static void teardown_mcfg_map(struct pci_root_info *info) 227 + 228 + static void teardown_mcfg_map(struct acpi_pci_root_info *ci) 230 229 { 231 230 } 232 231 #endif 233 232 234 - static void validate_resources(struct device *dev, struct list_head *crs_res, 235 - unsigned long type) 233 + static int pci_acpi_root_get_node(struct acpi_pci_root *root) 236 234 { 237 - LIST_HEAD(list); 238 - struct resource *res1, *res2, *root = NULL; 239 - struct resource_entry *tmp, *entry, *entry2; 235 + int busnum = root->secondary.start; 236 + struct acpi_device *device = root->device; 237 + int node = acpi_get_node(device->handle); 240 238 241 - BUG_ON((type & (IORESOURCE_MEM | IORESOURCE_IO)) == 0); 242 - root = (type & IORESOURCE_MEM) ? 
&iomem_resource : &ioport_resource; 243 - 244 - list_splice_init(crs_res, &list); 245 - resource_list_for_each_entry_safe(entry, tmp, &list) { 246 - bool free = false; 247 - resource_size_t end; 248 - 249 - res1 = entry->res; 250 - if (!(res1->flags & type)) 251 - goto next; 252 - 253 - /* Exclude non-addressable range or non-addressable portion */ 254 - end = min(res1->end, root->end); 255 - if (end <= res1->start) { 256 - dev_info(dev, "host bridge window %pR (ignored, not CPU addressable)\n", 257 - res1); 258 - free = true; 259 - goto next; 260 - } else if (res1->end != end) { 261 - dev_info(dev, "host bridge window %pR ([%#llx-%#llx] ignored, not CPU addressable)\n", 262 - res1, (unsigned long long)end + 1, 263 - (unsigned long long)res1->end); 264 - res1->end = end; 265 - } 266 - 267 - resource_list_for_each_entry(entry2, crs_res) { 268 - res2 = entry2->res; 269 - if (!(res2->flags & type)) 270 - continue; 271 - 272 - /* 273 - * I don't like throwing away windows because then 274 - * our resources no longer match the ACPI _CRS, but 275 - * the kernel resource tree doesn't allow overlaps. 
276 - */ 277 - if (resource_overlaps(res1, res2)) { 278 - res2->start = min(res1->start, res2->start); 279 - res2->end = max(res1->end, res2->end); 280 - dev_info(dev, "host bridge window expanded to %pR; %pR ignored\n", 281 - res2, res1); 282 - free = true; 283 - goto next; 284 - } 285 - } 286 - 287 - next: 288 - resource_list_del(entry); 289 - if (free) 290 - resource_list_free_entry(entry); 291 - else 292 - resource_list_add_tail(entry, crs_res); 239 + if (node == NUMA_NO_NODE) { 240 + node = x86_pci_root_bus_node(busnum); 241 + if (node != 0 && node != NUMA_NO_NODE) 242 + dev_info(&device->dev, FW_BUG "no _PXM; falling back to node %d from hardware (may be inconsistent with ACPI node numbers)\n", 243 + node); 293 244 } 245 + if (node != NUMA_NO_NODE && !node_online(node)) 246 + node = NUMA_NO_NODE; 247 + 248 + return node; 294 249 } 295 250 296 - static void add_resources(struct pci_root_info *info, 297 - struct list_head *resources, 298 - struct list_head *crs_res) 251 + static int pci_acpi_root_init_info(struct acpi_pci_root_info *ci) 299 252 { 300 - struct resource_entry *entry, *tmp; 301 - struct resource *res, *conflict, *root = NULL; 302 - 303 - validate_resources(&info->bridge->dev, crs_res, IORESOURCE_MEM); 304 - validate_resources(&info->bridge->dev, crs_res, IORESOURCE_IO); 305 - 306 - resource_list_for_each_entry_safe(entry, tmp, crs_res) { 307 - res = entry->res; 308 - if (res->flags & IORESOURCE_MEM) 309 - root = &iomem_resource; 310 - else if (res->flags & IORESOURCE_IO) 311 - root = &ioport_resource; 312 - else 313 - BUG_ON(res); 314 - 315 - conflict = insert_resource_conflict(root, res); 316 - if (conflict) { 317 - dev_info(&info->bridge->dev, 318 - "ignoring host bridge window %pR (conflicts with %s %pR)\n", 319 - res, conflict->name, conflict); 320 - resource_list_destroy_entry(entry); 321 - } 322 - } 323 - 324 - list_splice_tail(crs_res, resources); 253 + return setup_mcfg_map(ci); 325 254 } 326 255 327 - static void 
release_pci_root_info(struct pci_host_bridge *bridge) 256 + static void pci_acpi_root_release_info(struct acpi_pci_root_info *ci) 328 257 { 329 - struct resource *res; 330 - struct resource_entry *entry; 331 - struct pci_root_info *info = bridge->release_data; 332 - 333 - resource_list_for_each_entry(entry, &bridge->windows) { 334 - res = entry->res; 335 - if (res->parent && 336 - (res->flags & (IORESOURCE_MEM | IORESOURCE_IO))) 337 - release_resource(res); 338 - } 339 - 340 - teardown_mcfg_map(info); 341 - kfree(info); 258 + teardown_mcfg_map(ci); 259 + kfree(container_of(ci, struct pci_root_info, common)); 342 260 } 343 261 344 262 /* ··· 282 358 res->start == 0xCF8 && res->end == 0xCFF; 283 359 } 284 360 285 - static void probe_pci_root_info(struct pci_root_info *info, 286 - struct acpi_device *device, 287 - int busnum, int domain, 288 - struct list_head *list) 361 + static int pci_acpi_root_prepare_resources(struct acpi_pci_root_info *ci) 289 362 { 290 - int ret; 363 + struct acpi_device *device = ci->bridge; 364 + int busnum = ci->root->secondary.start; 291 365 struct resource_entry *entry, *tmp; 366 + int status; 292 367 293 - sprintf(info->name, "PCI Bus %04x:%02x", domain, busnum); 294 - info->bridge = device; 295 - ret = acpi_dev_get_resources(device, list, 296 - acpi_dev_filter_resource_type_cb, 297 - (void *)(IORESOURCE_IO | IORESOURCE_MEM)); 298 - if (ret < 0) 299 - dev_warn(&device->dev, 300 - "failed to parse _CRS method, error code %d\n", ret); 301 - else if (ret == 0) 302 - dev_dbg(&device->dev, 303 - "no IO and memory resources present in _CRS\n"); 304 - else 305 - resource_list_for_each_entry_safe(entry, tmp, list) { 306 - if ((entry->res->flags & IORESOURCE_DISABLED) || 307 - resource_is_pcicfg_ioport(entry->res)) 368 + status = acpi_pci_probe_root_resources(ci); 369 + if (pci_use_crs) { 370 + resource_list_for_each_entry_safe(entry, tmp, &ci->resources) 371 + if (resource_is_pcicfg_ioport(entry->res)) 308 372 resource_list_destroy_entry(entry); 
309 - else 310 - entry->res->name = info->name; 311 - } 373 + return status; 374 + } 375 + 376 + resource_list_for_each_entry_safe(entry, tmp, &ci->resources) { 377 + dev_printk(KERN_DEBUG, &device->dev, 378 + "host bridge window %pR (ignored)\n", entry->res); 379 + resource_list_destroy_entry(entry); 380 + } 381 + x86_pci_root_bus_resources(busnum, &ci->resources); 382 + 383 + return 0; 312 384 } 385 + 386 + static struct acpi_pci_root_ops acpi_pci_root_ops = { 387 + .pci_ops = &pci_root_ops, 388 + .init_info = pci_acpi_root_init_info, 389 + .release_info = pci_acpi_root_release_info, 390 + .prepare_resources = pci_acpi_root_prepare_resources, 391 + }; 313 392 314 393 struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root) 315 394 { 316 - struct acpi_device *device = root->device; 317 - struct pci_root_info *info; 318 395 int domain = root->segment; 319 396 int busnum = root->secondary.start; 320 - struct resource_entry *res_entry; 321 - LIST_HEAD(crs_res); 322 - LIST_HEAD(resources); 397 + int node = pci_acpi_root_get_node(root); 323 398 struct pci_bus *bus; 324 - struct pci_sysdata *sd; 325 - int node; 326 399 327 400 if (pci_ignore_seg) 328 - domain = 0; 401 + root->segment = domain = 0; 329 402 330 403 if (domain && !pci_domains_supported) { 331 404 printk(KERN_WARNING "pci_bus %04x:%02x: " ··· 331 410 return NULL; 332 411 } 333 412 334 - node = acpi_get_node(device->handle); 335 - if (node == NUMA_NO_NODE) { 336 - node = x86_pci_root_bus_node(busnum); 337 - if (node != 0 && node != NUMA_NO_NODE) 338 - dev_info(&device->dev, FW_BUG "no _PXM; falling back to node %d from hardware (may be inconsistent with ACPI node numbers)\n", 339 - node); 340 - } 341 - 342 - if (node != NUMA_NO_NODE && !node_online(node)) 343 - node = NUMA_NO_NODE; 344 - 345 - info = kzalloc_node(sizeof(*info), GFP_KERNEL, node); 346 - if (!info) { 347 - printk(KERN_WARNING "pci_bus %04x:%02x: " 348 - "ignored (out of memory)\n", domain, busnum); 349 - return NULL; 350 - } 351 - 352 - 
sd = &info->sd; 353 - sd->domain = domain; 354 - sd->node = node; 355 - sd->companion = device; 356 - 357 413 bus = pci_find_bus(domain, busnum); 358 414 if (bus) { 359 415 /* 360 416 * If the desired bus has been scanned already, replace 361 417 * its bus->sysdata. 362 418 */ 363 - memcpy(bus->sysdata, sd, sizeof(*sd)); 364 - kfree(info); 419 + struct pci_sysdata sd = { 420 + .domain = domain, 421 + .node = node, 422 + .companion = root->device 423 + }; 424 + 425 + memcpy(bus->sysdata, &sd, sizeof(sd)); 365 426 } else { 366 - /* insert busn res at first */ 367 - pci_add_resource(&resources, &root->secondary); 427 + struct pci_root_info *info; 368 428 369 - /* 370 - * _CRS with no apertures is normal, so only fall back to 371 - * defaults or native bridge info if we're ignoring _CRS. 372 - */ 373 - probe_pci_root_info(info, device, busnum, domain, &crs_res); 374 - if (pci_use_crs) { 375 - add_resources(info, &resources, &crs_res); 376 - } else { 377 - resource_list_for_each_entry(res_entry, &crs_res) 378 - dev_printk(KERN_DEBUG, &device->dev, 379 - "host bridge window %pR (ignored)\n", 380 - res_entry->res); 381 - resource_list_free(&crs_res); 382 - x86_pci_root_bus_resources(busnum, &resources); 383 - } 384 - 385 - if (!setup_mcfg_map(info, domain, (u8)root->secondary.start, 386 - (u8)root->secondary.end, root->mcfg_addr)) 387 - bus = pci_create_root_bus(NULL, busnum, &pci_root_ops, 388 - sd, &resources); 389 - 390 - if (bus) { 391 - pci_scan_child_bus(bus); 392 - pci_set_host_bridge_release( 393 - to_pci_host_bridge(bus->bridge), 394 - release_pci_root_info, info); 395 - } else { 396 - resource_list_free(&resources); 397 - teardown_mcfg_map(info); 398 - kfree(info); 429 + info = kzalloc_node(sizeof(*info), GFP_KERNEL, node); 430 + if (!info) 431 + dev_err(&root->device->dev, 432 + "pci_bus %04x:%02x: ignored (out of memory)\n", 433 + domain, busnum); 434 + else { 435 + info->sd.domain = domain; 436 + info->sd.node = node; 437 + info->sd.companion = root->device; 
438 + bus = acpi_pci_root_create(root, &acpi_pci_root_ops, 439 + &info->common, &info->sd); 399 440 } 400 441 } 401 442 ··· 369 486 list_for_each_entry(child, &bus->children, node) 370 487 pcie_bus_configure_settings(child); 371 488 } 372 - 373 - if (bus && node != NUMA_NO_NODE) 374 - dev_printk(KERN_DEBUG, &bus->dev, "on NUMA node %d\n", node); 375 489 376 490 return bus; 377 491 }
+1 -1
arch/xtensa/kernel/time.c
···
 	local_timer_setup(0);
 	setup_irq(this_cpu_ptr(&ccount_timer)->evt.irq, &timer_irqaction);
 	sched_clock_register(ccount_sched_clock_read, 32, ccount_freq);
-	clocksource_of_init();
+	clocksource_probe();
 }
 
 /*
+26 -3
drivers/acpi/Kconfig
···
 config ACPI_CCA_REQUIRED
 	bool
 
+config ACPI_DEBUGGER
+	bool "In-kernel debugger (EXPERIMENTAL)"
+	select ACPI_DEBUG
+	help
+	  Enable in-kernel debugging facilities: statistics, internal
+	  object dump, single step control method execution.
+	  This is still under development, currently enabling this only
+	  results in the compilation of the ACPICA debugger files.
+
 config ACPI_SLEEP
 	bool
 	depends on SUSPEND || HIBERNATION
···
 	bool
 	select CPU_IDLE
 
+config ACPI_CPPC_LIB
+	bool
+	depends on ACPI_PROCESSOR
+	depends on !ACPI_CPU_FREQ_PSS
+	select MAILBOX
+	select PCC
+	help
+	  If this option is enabled, this file implements common functionality
+	  to parse CPPC tables as described in the ACPI 5.1+ spec. The
+	  routines implemented are meant to be used by other
+	  drivers to control CPU performance using CPPC semantics.
+	  If your platform does not support CPPC in firmware,
+	  leave this option disabled.
+
 config ACPI_PROCESSOR
 	tristate "Processor"
-	depends on X86 || IA64
-	select ACPI_PROCESSOR_IDLE
-	select ACPI_CPU_FREQ_PSS
+	depends on X86 || IA64 || ARM64
+	select ACPI_PROCESSOR_IDLE if X86 || IA64
+	select ACPI_CPU_FREQ_PSS if X86 || IA64
 	default y
 	help
 	  This driver adds support for the ACPI Processor package. It is required
+1
drivers/acpi/Makefile
···
 obj-$(CONFIG_ACPI_EC_DEBUGFS)	+= ec_sys.o
 obj-$(CONFIG_ACPI_CUSTOM_METHOD)+= custom_method.o
 obj-$(CONFIG_ACPI_BGRT)		+= bgrt.o
+obj-$(CONFIG_ACPI_CPPC_LIB)	+= cppc_acpi.o
 
 # processor has its own "processor." module_param namespace
 processor-y			:= processor_driver.o
+1 -1
drivers/acpi/acpi_lpss.c
···
 #ifdef CONFIG_PM
 #ifdef CONFIG_PM_SLEEP
 	.prepare = acpi_subsys_prepare,
-	.complete = acpi_subsys_complete,
+	.complete = pm_complete_with_resume_check,
 	.suspend = acpi_subsys_suspend,
 	.suspend_late = acpi_lpss_suspend_late,
 	.resume_early = acpi_lpss_resume_early,
-2
drivers/acpi/acpi_pad.c
···
 	while (!kthread_should_stop()) {
 		unsigned long expire_time;
 
-		try_to_freeze();
-
 		/* round robin to cpus */
 		expire_time = last_jiffies + round_robin_time * HZ;
 		if (time_before(expire_time, jiffies)) {
+2 -2
drivers/acpi/acpi_pnp.c
···
 	{""},
 };
 
-static bool matching_id(char *idstr, char *list_id)
+static bool matching_id(const char *idstr, const char *list_id)
 {
 	int i;
 
···
 	return true;
 }
 
-static bool acpi_pnp_match(char *idstr, const struct acpi_device_id **matchid)
+static bool acpi_pnp_match(const char *idstr, const struct acpi_device_id **matchid)
 {
 	const struct acpi_device_id *devid;
 
+18
drivers/acpi/acpi_processor.c
···
    -------------------------------------------------------------------------- */
 
 #ifdef CONFIG_ACPI_HOTPLUG_CPU
+int __weak acpi_map_cpu(acpi_handle handle,
+			phys_cpuid_t physid, int *pcpu)
+{
+	return -ENODEV;
+}
+
+int __weak acpi_unmap_cpu(int cpu)
+{
+	return -ENODEV;
+}
+
+int __weak arch_register_cpu(int cpu)
+{
+	return -ENODEV;
+}
+
+void __weak arch_unregister_cpu(int cpu) {}
+
 static int acpi_processor_hotadd_init(struct acpi_processor *pr)
 {
 	unsigned long long sta;
+17 -1
drivers/acpi/acpica/Makefile
···
 	rsaddr.o \
 	rscalc.o \
 	rscreate.o \
-	rsdump.o \
 	rsdumpinfo.o \
 	rsinfo.o \
 	rsio.o \
···
 	utxferror.o \
 	utxfmutex.o
 
+acpi-$(CONFIG_ACPI_DEBUGGER) += \
+	dbcmds.o \
+	dbconvert.o \
+	dbdisply.o \
+	dbexec.o \
+	dbhistry.o \
+	dbinput.o \
+	dbmethod.o \
+	dbnames.o \
+	dbobject.o \
+	dbstats.o \
+	dbutils.o \
+	dbxface.o \
+	rsdump.o \
+
 acpi-$(ACPI_FUTURE_USAGE) += \
+	dbfileio.o \
+	dbtest.o \
 	utcache.o \
 	utfileio.o \
 	utprint.o \
+1 -1
drivers/acpi/acpica/acapps.h
···
 acpi_os_printf ("  %-18s%s\n", name, description);
 
 #define FILE_SUFFIX_DISASSEMBLY     "dsl"
-#define ACPI_TABLE_FILE_SUFFIX      ".dat"
+#define FILE_SUFFIX_BINARY_TABLE    ".dat"	/* Needs the dot */
 
 /*
  * getopt
+6
drivers/acpi/acpica/acdebug.h
···
 #ifndef __ACDEBUG_H__
 #define __ACDEBUG_H__
 
+/* The debugger is used in conjunction with the disassembler most of time */
+
+#ifdef ACPI_DISASSEMBLER
+#include "acdisasm.h"
+#endif
+
 #define ACPI_DEBUG_BUFFER_SIZE	0x4000	/* 16K buffer for return objects */
 
 struct acpi_db_command_info {
+6 -1
drivers/acpi/acpica/acglobal.h
···
 
 #ifdef ACPI_DEBUGGER
 
-ACPI_INIT_GLOBAL(u8, acpi_gbl_db_terminate_threads, FALSE);
 ACPI_INIT_GLOBAL(u8, acpi_gbl_abort_method, FALSE);
 ACPI_INIT_GLOBAL(u8, acpi_gbl_method_executing, FALSE);
+ACPI_INIT_GLOBAL(acpi_thread_id, acpi_gbl_db_thread_id, ACPI_INVALID_THREAD_ID);
 
 ACPI_GLOBAL(u8, acpi_gbl_db_opt_no_ini_methods);
 ACPI_GLOBAL(u8, acpi_gbl_db_opt_no_region_support);
···
 ACPI_GLOBAL(u32, acpi_gbl_db_debug_level);
 ACPI_GLOBAL(u32, acpi_gbl_db_console_debug_level);
 ACPI_GLOBAL(struct acpi_namespace_node *, acpi_gbl_db_scope_node);
+ACPI_GLOBAL(u8, acpi_gbl_db_terminate_loop);
+ACPI_GLOBAL(u8, acpi_gbl_db_threads_terminated);
 
 ACPI_GLOBAL(char *, acpi_gbl_db_args[ACPI_DEBUGGER_MAX_ARGS]);
 ACPI_GLOBAL(acpi_object_type, acpi_gbl_db_arg_types[ACPI_DEBUGGER_MAX_ARGS]);
···
 ACPI_GLOBAL(u16, acpi_gbl_node_type_count_misc);
 ACPI_GLOBAL(u32, acpi_gbl_num_nodes);
 ACPI_GLOBAL(u32, acpi_gbl_num_objects);
+
+ACPI_GLOBAL(acpi_mutex, acpi_gbl_db_command_ready);
+ACPI_GLOBAL(acpi_mutex, acpi_gbl_db_command_complete);
 
 #endif				/* ACPI_DEBUGGER */
 
-2
drivers/acpi/acpica/acinterp.h
···
 acpi_ex_dump_operands(union acpi_operand_object **operands,
 		      const char *opcode_name, u32 num_opcodes);
 
-#ifdef ACPI_FUTURE_USAGE
 void
 acpi_ex_dump_object_descriptor(union acpi_operand_object *object, u32 flags);
 
 void acpi_ex_dump_namespace_node(struct acpi_namespace_node *node, u32 flags);
-#endif				/* ACPI_FUTURE_USAGE */
 
 /*
  * exnames - AML namestring support
+16 -6
drivers/acpi/acpica/aclocal.h
···
 #define ACPI_MTX_EVENTS			3	/* Data for ACPI events */
 #define ACPI_MTX_CACHES			4	/* Internal caches, general purposes */
 #define ACPI_MTX_MEMORY			5	/* Debug memory tracking lists */
-#define ACPI_MTX_DEBUG_CMD_COMPLETE	6	/* AML debugger */
-#define ACPI_MTX_DEBUG_CMD_READY	7	/* AML debugger */
 
-#define ACPI_MAX_MUTEX			7
+#define ACPI_MAX_MUTEX			5
 #define ACPI_NUM_MUTEX			ACPI_MAX_MUTEX+1
 
 /* Lock structure for reader/writer interfaces */
···
 /* This Thread ID means that the mutex is not in use (unlocked) */
 
 #define ACPI_MUTEX_NOT_ACQUIRED		(acpi_thread_id) 0
+
+/* This Thread ID means an invalid thread ID */
+
+#ifdef ACPI_OS_INVALID_THREAD_ID
+#define ACPI_INVALID_THREAD_ID		ACPI_OS_INVALID_THREAD_ID
+#else
+#define ACPI_INVALID_THREAD_ID		((acpi_thread_id) 0xFFFFFFFF)
+#endif
 
 /* Table for the global mutexes */
···
 #define ACPI_BTYPE_BUFFER_FIELD		0x00002000
 #define ACPI_BTYPE_DDB_HANDLE		0x00004000
 #define ACPI_BTYPE_DEBUG_OBJECT		0x00008000
-#define ACPI_BTYPE_REFERENCE		0x00010000
+#define ACPI_BTYPE_REFERENCE_OBJECT	0x00010000	/* From Index(), ref_of(), etc (type6_opcodes) */
 #define ACPI_BTYPE_RESOURCE		0x00020000
+#define ACPI_BTYPE_NAMED_REFERENCE	0x00040000	/* Generic unresolved Name or Namepath */
 
 #define ACPI_BTYPE_COMPUTE_DATA		(ACPI_BTYPE_INTEGER | ACPI_BTYPE_STRING | ACPI_BTYPE_BUFFER)
 
 #define ACPI_BTYPE_DATA			(ACPI_BTYPE_COMPUTE_DATA | ACPI_BTYPE_PACKAGE)
-#define ACPI_BTYPE_DATA_REFERENCE	(ACPI_BTYPE_DATA | ACPI_BTYPE_REFERENCE | ACPI_BTYPE_DDB_HANDLE)
+
+/* Used by Copy, de_ref_of, Store, Printf, Fprintf */
+
+#define ACPI_BTYPE_DATA_REFERENCE	(ACPI_BTYPE_DATA | ACPI_BTYPE_REFERENCE_OBJECT | ACPI_BTYPE_DDB_HANDLE)
 #define ACPI_BTYPE_DEVICE_OBJECTS	(ACPI_BTYPE_DEVICE | ACPI_BTYPE_THERMAL | ACPI_BTYPE_PROCESSOR)
 #define ACPI_BTYPE_OBJECTS_AND_REFS	0x0001FFFF	/* ARG or LOCAL */
 #define ACPI_BTYPE_ALL_OBJECTS		0x0000FFFF
···
 #define ACPI_PARSEOP_PARAMLIST		0x02
 #define ACPI_PARSEOP_EMPTY_TERMLIST	0x04
 #define ACPI_PARSEOP_PREDEF_CHECKED	0x08
-#define ACPI_PARSEOP_SPECIAL		0x10
+#define ACPI_PARSEOP_CLOSING_PAREN	0x10
 #define ACPI_PARSEOP_COMPOUND		0x20
 #define ACPI_PARSEOP_ASSIGNMENT		0x40
-4
drivers/acpi/acpica/acnamesp.h
···
 /*
  * nsdump - Namespace dump/print utilities
  */
-#ifdef ACPI_FUTURE_USAGE
 void acpi_ns_dump_tables(acpi_handle search_base, u32 max_depth);
-#endif				/* ACPI_FUTURE_USAGE */
 
 void acpi_ns_dump_entry(acpi_handle handle, u32 debug_level);
 
···
 acpi_ns_dump_one_object(acpi_handle obj_handle,
 			u32 level, void *context, void **return_value);
 
-#ifdef ACPI_FUTURE_USAGE
 void
 acpi_ns_dump_objects(acpi_object_type type,
 		     u8 display_type,
···
 		     u8 display_type,
 		     u32 max_depth,
 		     acpi_owner_id owner_id, acpi_handle start_handle);
-#endif				/* ACPI_FUTURE_USAGE */
 
 /*
  * nseval - Namespace evaluation functions
+2 -2
drivers/acpi/acpica/acopcode.h
···
 #define ARGI_ARG4                       ARG_NONE
 #define ARGI_ARG5                       ARG_NONE
 #define ARGI_ARG6                       ARG_NONE
-#define ARGI_BANK_FIELD_OP              ARGI_INVALID_OPCODE
+#define ARGI_BANK_FIELD_OP              ARGI_LIST1 (ARGI_INTEGER)
 #define ARGI_BIT_AND_OP                 ARGI_LIST3 (ARGI_INTEGER,   ARGI_INTEGER,       ARGI_TARGETREF)
 #define ARGI_BIT_NAND_OP                ARGI_LIST3 (ARGI_INTEGER,   ARGI_INTEGER,       ARGI_TARGETREF)
 #define ARGI_BIT_NOR_OP                 ARGI_LIST3 (ARGI_INTEGER,   ARGI_INTEGER,       ARGI_TARGETREF)
···
 #define ARGI_SLEEP_OP                   ARGI_LIST1 (ARGI_INTEGER)
 #define ARGI_STALL_OP                   ARGI_LIST1 (ARGI_INTEGER)
 #define ARGI_STATICSTRING_OP            ARGI_INVALID_OPCODE
-#define ARGI_STORE_OP                   ARGI_LIST2 (ARGI_DATAREFOBJ, ARGI_TARGETREF)
+#define ARGI_STORE_OP                   ARGI_LIST2 (ARGI_DATAREFOBJ, ARGI_STORE_TARGET)
 #define ARGI_STRING_OP                  ARGI_INVALID_OPCODE
 #define ARGI_SUBTRACT_OP                ARGI_LIST3 (ARGI_INTEGER,   ARGI_INTEGER,       ARGI_TARGETREF)
 #define ARGI_THERMAL_ZONE_OP            ARGI_INVALID_OPCODE
-4
drivers/acpi/acpica/acparser.h
···
 
 union acpi_parse_object *acpi_ps_get_arg(union acpi_parse_object *op, u32 argn);
 
-#ifdef ACPI_FUTURE_USAGE
 union acpi_parse_object *acpi_ps_get_depth_next(union acpi_parse_object *origin,
 						union acpi_parse_object *op);
-#endif				/* ACPI_FUTURE_USAGE */
 
 /*
  * pswalk - parse tree walk routines
···
 
 u8 acpi_ps_is_leading_char(u32 c);
 
-#ifdef ACPI_FUTURE_USAGE
 u32 acpi_ps_get_name(union acpi_parse_object *op);
-#endif				/* ACPI_FUTURE_USAGE */
 
 void acpi_ps_set_name(union acpi_parse_object *op, u32 name);
 
-2
drivers/acpi/acpica/acutils.h
···
 acpi_ut_free_and_track(void *address,
 		       u32 component, const char *module, u32 line);
 
-#ifdef ACPI_FUTURE_USAGE
 void acpi_ut_dump_allocation_info(void);
-#endif				/* ACPI_FUTURE_USAGE */
 
 void acpi_ut_dump_allocations(u32 component, const char *module);
 
+6 -5
drivers/acpi/acpica/amlcode.h
···
 #define ARGI_TARGETREF              0x0F	/* Target, subject to implicit conversion */
 #define ARGI_FIXED_TARGET           0x10	/* Target, no implicit conversion */
 #define ARGI_SIMPLE_TARGET          0x11	/* Name, Local, Arg -- no implicit conversion */
+#define ARGI_STORE_TARGET           0x12	/* Target for store is TARGETREF + package objects */
 
 /* Multiple/complex types */
 
-#define ARGI_DATAOBJECT             0x12	/* Buffer, String, package or reference to a node - Used only by size_of operator */
-#define ARGI_COMPLEXOBJ             0x13	/* Buffer, String, or package (Used by INDEX op only) */
-#define ARGI_REF_OR_STRING          0x14	/* Reference or String (Used by DEREFOF op only) */
-#define ARGI_REGION_OR_BUFFER       0x15	/* Used by LOAD op only */
-#define ARGI_DATAREFOBJ             0x16
+#define ARGI_DATAOBJECT             0x13	/* Buffer, String, package or reference to a node - Used only by size_of operator */
+#define ARGI_COMPLEXOBJ             0x14	/* Buffer, String, or package (Used by INDEX op only) */
+#define ARGI_REF_OR_STRING          0x15	/* Reference or String (Used by DEREFOF op only) */
+#define ARGI_REGION_OR_BUFFER       0x16	/* Used by LOAD op only */
+#define ARGI_DATAREFOBJ             0x17
 
 /* Note: types above can expand to 0x1F maximum */
 
+1187
drivers/acpi/acpica/dbcmds.c
··· 1 + /******************************************************************************* 2 + * 3 + * Module Name: dbcmds - Miscellaneous debug commands and output routines 4 + * 5 + ******************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + #include "acevents.h" 47 + #include "acdebug.h" 48 + #include "acnamesp.h" 49 + #include "acresrc.h" 50 + #include "actables.h" 51 + 52 + #define _COMPONENT ACPI_CA_DEBUGGER 53 + ACPI_MODULE_NAME("dbcmds") 54 + 55 + /* Local prototypes */ 56 + static void 57 + acpi_dm_compare_aml_resources(u8 *aml1_buffer, 58 + acpi_rsdesc_size aml1_buffer_length, 59 + u8 *aml2_buffer, 60 + acpi_rsdesc_size aml2_buffer_length); 61 + 62 + static acpi_status 63 + acpi_dm_test_resource_conversion(struct acpi_namespace_node *node, char *name); 64 + 65 + static acpi_status 66 + acpi_db_resource_callback(struct acpi_resource *resource, void *context); 67 + 68 + static acpi_status 69 + acpi_db_device_resources(acpi_handle obj_handle, 70 + u32 nesting_level, void *context, void **return_value); 71 + 72 + static void acpi_db_do_one_sleep_state(u8 sleep_state); 73 + 74 + static char *acpi_db_trace_method_name = NULL; 75 + 76 + /******************************************************************************* 77 + * 78 + * FUNCTION: acpi_db_convert_to_node 79 + * 80 + * PARAMETERS: in_string - String to convert 81 + * 82 + * RETURN: Pointer to a NS node 83 + * 84 + * DESCRIPTION: Convert a string to a valid NS pointer. Handles numeric or 85 + * alphanumeric strings. 
86 + * 87 + ******************************************************************************/ 88 + 89 + struct acpi_namespace_node *acpi_db_convert_to_node(char *in_string) 90 + { 91 + struct acpi_namespace_node *node; 92 + acpi_size address; 93 + 94 + if ((*in_string >= 0x30) && (*in_string <= 0x39)) { 95 + 96 + /* Numeric argument, convert */ 97 + 98 + address = strtoul(in_string, NULL, 16); 99 + node = ACPI_TO_POINTER(address); 100 + if (!acpi_os_readable(node, sizeof(struct acpi_namespace_node))) { 101 + acpi_os_printf("Address %p is invalid", node); 102 + return (NULL); 103 + } 104 + 105 + /* Make sure pointer is valid NS node */ 106 + 107 + if (ACPI_GET_DESCRIPTOR_TYPE(node) != ACPI_DESC_TYPE_NAMED) { 108 + acpi_os_printf 109 + ("Address %p is not a valid namespace node [%s]\n", 110 + node, acpi_ut_get_descriptor_name(node)); 111 + return (NULL); 112 + } 113 + } else { 114 + /* 115 + * Alpha argument: The parameter is a name string that must be 116 + * resolved to a Namespace object. 117 + */ 118 + node = acpi_db_local_ns_lookup(in_string); 119 + if (!node) { 120 + acpi_os_printf 121 + ("Could not find [%s] in namespace, defaulting to root node\n", 122 + in_string); 123 + node = acpi_gbl_root_node; 124 + } 125 + } 126 + 127 + return (node); 128 + } 129 + 130 + /******************************************************************************* 131 + * 132 + * FUNCTION: acpi_db_sleep 133 + * 134 + * PARAMETERS: object_arg - Desired sleep state (0-5). NULL means 135 + * invoke all possible sleep states. 
136 + * 137 + * RETURN: Status 138 + * 139 + * DESCRIPTION: Simulate sleep/wake sequences 140 + * 141 + ******************************************************************************/ 142 + 143 + acpi_status acpi_db_sleep(char *object_arg) 144 + { 145 + u8 sleep_state; 146 + u32 i; 147 + 148 + ACPI_FUNCTION_TRACE(acpi_db_sleep); 149 + 150 + /* Null input (no arguments) means to invoke all sleep states */ 151 + 152 + if (!object_arg) { 153 + acpi_os_printf("Invoking all possible sleep states, 0-%d\n", 154 + ACPI_S_STATES_MAX); 155 + 156 + for (i = 0; i <= ACPI_S_STATES_MAX; i++) { 157 + acpi_db_do_one_sleep_state((u8)i); 158 + } 159 + 160 + return_ACPI_STATUS(AE_OK); 161 + } 162 + 163 + /* Convert argument to binary and invoke the sleep state */ 164 + 165 + sleep_state = (u8)strtoul(object_arg, NULL, 0); 166 + acpi_db_do_one_sleep_state(sleep_state); 167 + return_ACPI_STATUS(AE_OK); 168 + } 169 + 170 + /******************************************************************************* 171 + * 172 + * FUNCTION: acpi_db_do_one_sleep_state 173 + * 174 + * PARAMETERS: sleep_state - Desired sleep state (0-5) 175 + * 176 + * RETURN: None 177 + * 178 + * DESCRIPTION: Simulate a sleep/wake sequence 179 + * 180 + ******************************************************************************/ 181 + 182 + static void acpi_db_do_one_sleep_state(u8 sleep_state) 183 + { 184 + acpi_status status; 185 + u8 sleep_type_a; 186 + u8 sleep_type_b; 187 + 188 + /* Validate parameter */ 189 + 190 + if (sleep_state > ACPI_S_STATES_MAX) { 191 + acpi_os_printf("Sleep state %d out of range (%d max)\n", 192 + sleep_state, ACPI_S_STATES_MAX); 193 + return; 194 + } 195 + 196 + acpi_os_printf("\n---- Invoking sleep state S%d (%s):\n", 197 + sleep_state, acpi_gbl_sleep_state_names[sleep_state]); 198 + 199 + /* Get the values for the sleep type registers (for display only) */ 200 + 201 + status = 202 + acpi_get_sleep_type_data(sleep_state, &sleep_type_a, &sleep_type_b); 203 + if (ACPI_FAILURE(status)) 
{ 204 + acpi_os_printf("Could not evaluate [%s] method, %s\n", 205 + acpi_gbl_sleep_state_names[sleep_state], 206 + acpi_format_exception(status)); 207 + return; 208 + } 209 + 210 + acpi_os_printf 211 + ("Register values for sleep state S%d: Sleep-A: %.2X, Sleep-B: %.2X\n", 212 + sleep_state, sleep_type_a, sleep_type_b); 213 + 214 + /* Invoke the various sleep/wake interfaces */ 215 + 216 + acpi_os_printf("**** Sleep: Prepare to sleep (S%d) ****\n", 217 + sleep_state); 218 + status = acpi_enter_sleep_state_prep(sleep_state); 219 + if (ACPI_FAILURE(status)) { 220 + goto error_exit; 221 + } 222 + 223 + acpi_os_printf("**** Sleep: Going to sleep (S%d) ****\n", sleep_state); 224 + status = acpi_enter_sleep_state(sleep_state); 225 + if (ACPI_FAILURE(status)) { 226 + goto error_exit; 227 + } 228 + 229 + acpi_os_printf("**** Wake: Prepare to return from sleep (S%d) ****\n", 230 + sleep_state); 231 + status = acpi_leave_sleep_state_prep(sleep_state); 232 + if (ACPI_FAILURE(status)) { 233 + goto error_exit; 234 + } 235 + 236 + acpi_os_printf("**** Wake: Return from sleep (S%d) ****\n", 237 + sleep_state); 238 + status = acpi_leave_sleep_state(sleep_state); 239 + if (ACPI_FAILURE(status)) { 240 + goto error_exit; 241 + } 242 + 243 + return; 244 + 245 + error_exit: 246 + ACPI_EXCEPTION((AE_INFO, status, "During invocation of sleep state S%d", 247 + sleep_state)); 248 + } 249 + 250 + /******************************************************************************* 251 + * 252 + * FUNCTION: acpi_db_display_locks 253 + * 254 + * PARAMETERS: None 255 + * 256 + * RETURN: None 257 + * 258 + * DESCRIPTION: Display information about internal mutexes. 
259 + * 260 + ******************************************************************************/ 261 + 262 + void acpi_db_display_locks(void) 263 + { 264 + u32 i; 265 + 266 + for (i = 0; i < ACPI_MAX_MUTEX; i++) { 267 + acpi_os_printf("%26s : %s\n", acpi_ut_get_mutex_name(i), 268 + acpi_gbl_mutex_info[i].thread_id == 269 + ACPI_MUTEX_NOT_ACQUIRED ? "Unlocked" : "Locked"); 270 + } 271 + } 272 + 273 + /******************************************************************************* 274 + * 275 + * FUNCTION: acpi_db_display_table_info 276 + * 277 + * PARAMETERS: table_arg - Name of table to be displayed 278 + * 279 + * RETURN: None 280 + * 281 + * DESCRIPTION: Display information about loaded tables. Current 282 + * implementation displays all loaded tables. 283 + * 284 + ******************************************************************************/ 285 + 286 + void acpi_db_display_table_info(char *table_arg) 287 + { 288 + u32 i; 289 + struct acpi_table_desc *table_desc; 290 + acpi_status status; 291 + 292 + /* Header */ 293 + 294 + acpi_os_printf("Idx ID Status Type " 295 + "TableHeader (Sig, Address, Length, Misc)\n"); 296 + 297 + /* Walk the entire root table list */ 298 + 299 + for (i = 0; i < acpi_gbl_root_table_list.current_table_count; i++) { 300 + table_desc = &acpi_gbl_root_table_list.tables[i]; 301 + 302 + /* Index and Table ID */ 303 + 304 + acpi_os_printf("%3u %.2u ", i, table_desc->owner_id); 305 + 306 + /* Decode the table flags */ 307 + 308 + if (!(table_desc->flags & ACPI_TABLE_IS_LOADED)) { 309 + acpi_os_printf("NotLoaded "); 310 + } else { 311 + acpi_os_printf(" Loaded "); 312 + } 313 + 314 + switch (table_desc->flags & ACPI_TABLE_ORIGIN_MASK) { 315 + case ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL: 316 + 317 + acpi_os_printf("External/virtual "); 318 + break; 319 + 320 + case ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL: 321 + 322 + acpi_os_printf("Internal/physical "); 323 + break; 324 + 325 + case ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL: 326 + 327 + 
acpi_os_printf("Internal/virtual "); 328 + break; 329 + 330 + default: 331 + 332 + acpi_os_printf("INVALID TYPE "); 333 + break; 334 + } 335 + 336 + /* Make sure that the table is mapped */ 337 + 338 + status = acpi_tb_validate_table(table_desc); 339 + if (ACPI_FAILURE(status)) { 340 + return; 341 + } 342 + 343 + /* Dump the table header */ 344 + 345 + if (table_desc->pointer) { 346 + acpi_tb_print_table_header(table_desc->address, 347 + table_desc->pointer); 348 + } else { 349 + /* If the pointer is null, the table has been unloaded */ 350 + 351 + ACPI_INFO((AE_INFO, "%4.4s - Table has been unloaded", 352 + table_desc->signature.ascii)); 353 + } 354 + } 355 + } 356 + 357 + /******************************************************************************* 358 + * 359 + * FUNCTION: acpi_db_unload_acpi_table 360 + * 361 + * PARAMETERS: object_name - Namespace pathname for an object that 362 + * is owned by the table to be unloaded 363 + * 364 + * RETURN: None 365 + * 366 + * DESCRIPTION: Unload an ACPI table, via any namespace node that is owned 367 + * by the table. 
368 + * 369 + ******************************************************************************/ 370 + 371 + void acpi_db_unload_acpi_table(char *object_name) 372 + { 373 + struct acpi_namespace_node *node; 374 + acpi_status status; 375 + 376 + /* Translate name to a Named object */ 377 + 378 + node = acpi_db_convert_to_node(object_name); 379 + if (!node) { 380 + return; 381 + } 382 + 383 + status = acpi_unload_parent_table(ACPI_CAST_PTR(acpi_handle, node)); 384 + if (ACPI_SUCCESS(status)) { 385 + acpi_os_printf("Parent of [%s] (%p) unloaded and uninstalled\n", 386 + object_name, node); 387 + } else { 388 + acpi_os_printf("%s, while unloading parent table of [%s]\n", 389 + acpi_format_exception(status), object_name); 390 + } 391 + } 392 + 393 + /******************************************************************************* 394 + * 395 + * FUNCTION: acpi_db_send_notify 396 + * 397 + * PARAMETERS: name - Name of ACPI object to send the notify to 398 + * value - Value of the notify to send. 399 + * 400 + * RETURN: None 401 + * 402 + * DESCRIPTION: Send an ACPI notification. The value specified is sent to the 403 + * named object as an ACPI notify. 
404 + * 405 + ******************************************************************************/ 406 + 407 + void acpi_db_send_notify(char *name, u32 value) 408 + { 409 + struct acpi_namespace_node *node; 410 + acpi_status status; 411 + 412 + /* Translate name to a Named object */ 413 + 414 + node = acpi_db_convert_to_node(name); 415 + if (!node) { 416 + return; 417 + } 418 + 419 + /* Dispatch the notify if legal */ 420 + 421 + if (acpi_ev_is_notify_object(node)) { 422 + status = acpi_ev_queue_notify_request(node, value); 423 + if (ACPI_FAILURE(status)) { 424 + acpi_os_printf("Could not queue notify\n"); 425 + } 426 + } else { 427 + acpi_os_printf("Named object [%4.4s] Type %s, " 428 + "must be Device/Thermal/Processor type\n", 429 + acpi_ut_get_node_name(node), 430 + acpi_ut_get_type_name(node->type)); 431 + } 432 + } 433 + 434 + /******************************************************************************* 435 + * 436 + * FUNCTION: acpi_db_display_interfaces 437 + * 438 + * PARAMETERS: action_arg - Null, "install", or "remove" 439 + * interface_name_arg - Name for install/remove options 440 + * 441 + * RETURN: None 442 + * 443 + * DESCRIPTION: Display or modify the global _OSI interface list 444 + * 445 + ******************************************************************************/ 446 + 447 + void acpi_db_display_interfaces(char *action_arg, char *interface_name_arg) 448 + { 449 + struct acpi_interface_info *next_interface; 450 + char *sub_string; 451 + acpi_status status; 452 + 453 + /* If no arguments, just display current interface list */ 454 + 455 + if (!action_arg) { 456 + (void)acpi_os_acquire_mutex(acpi_gbl_osi_mutex, 457 + ACPI_WAIT_FOREVER); 458 + 459 + next_interface = acpi_gbl_supported_interfaces; 460 + while (next_interface) { 461 + if (!(next_interface->flags & ACPI_OSI_INVALID)) { 462 + acpi_os_printf("%s\n", next_interface->name); 463 + } 464 + 465 + next_interface = next_interface->next; 466 + } 467 + 468 + 
acpi_os_release_mutex(acpi_gbl_osi_mutex); 469 + return; 470 + } 471 + 472 + /* If action_arg exists, so must interface_name_arg */ 473 + 474 + if (!interface_name_arg) { 475 + acpi_os_printf("Missing Interface Name argument\n"); 476 + return; 477 + } 478 + 479 + /* Uppercase the action for match below */ 480 + 481 + acpi_ut_strupr(action_arg); 482 + 483 + /* install - install an interface */ 484 + 485 + sub_string = strstr("INSTALL", action_arg); 486 + if (sub_string) { 487 + status = acpi_install_interface(interface_name_arg); 488 + if (ACPI_FAILURE(status)) { 489 + acpi_os_printf("%s, while installing \"%s\"\n", 490 + acpi_format_exception(status), 491 + interface_name_arg); 492 + } 493 + return; 494 + } 495 + 496 + /* remove - remove an interface */ 497 + 498 + sub_string = strstr("REMOVE", action_arg); 499 + if (sub_string) { 500 + status = acpi_remove_interface(interface_name_arg); 501 + if (ACPI_FAILURE(status)) { 502 + acpi_os_printf("%s, while removing \"%s\"\n", 503 + acpi_format_exception(status), 504 + interface_name_arg); 505 + } 506 + return; 507 + } 508 + 509 + /* Invalid action_arg */ 510 + 511 + acpi_os_printf("Invalid action argument: %s\n", action_arg); 512 + return; 513 + } 514 + 515 + /******************************************************************************* 516 + * 517 + * FUNCTION: acpi_db_display_template 518 + * 519 + * PARAMETERS: buffer_arg - Buffer name or address 520 + * 521 + * RETURN: None 522 + * 523 + * DESCRIPTION: Dump a buffer that contains a resource template 524 + * 525 + ******************************************************************************/ 526 + 527 + void acpi_db_display_template(char *buffer_arg) 528 + { 529 + struct acpi_namespace_node *node; 530 + acpi_status status; 531 + struct acpi_buffer return_buffer; 532 + 533 + /* Translate buffer_arg to a Named object */ 534 + 535 + node = acpi_db_convert_to_node(buffer_arg); 536 + if (!node || (node == acpi_gbl_root_node)) { 537 + acpi_os_printf("Invalid argument: 
%s\n", buffer_arg); 538 + return; 539 + } 540 + 541 + /* We must have a buffer object */ 542 + 543 + if (node->type != ACPI_TYPE_BUFFER) { 544 + acpi_os_printf 545 + ("Not a Buffer object, cannot be a template: %s\n", 546 + buffer_arg); 547 + return; 548 + } 549 + 550 + return_buffer.length = ACPI_DEBUG_BUFFER_SIZE; 551 + return_buffer.pointer = acpi_gbl_db_buffer; 552 + 553 + /* Attempt to convert the raw buffer to a resource list */ 554 + 555 + status = acpi_rs_create_resource_list(node->object, &return_buffer); 556 + 557 + acpi_db_set_output_destination(ACPI_DB_REDIRECTABLE_OUTPUT); 558 + acpi_dbg_level |= ACPI_LV_RESOURCES; 559 + 560 + if (ACPI_FAILURE(status)) { 561 + acpi_os_printf 562 + ("Could not convert Buffer to a resource list: %s, %s\n", 563 + buffer_arg, acpi_format_exception(status)); 564 + goto dump_buffer; 565 + } 566 + 567 + /* Now we can dump the resource list */ 568 + 569 + acpi_rs_dump_resource_list(ACPI_CAST_PTR(struct acpi_resource, 570 + return_buffer.pointer)); 571 + 572 + dump_buffer: 573 + acpi_os_printf("\nRaw data buffer:\n"); 574 + acpi_ut_debug_dump_buffer((u8 *)node->object->buffer.pointer, 575 + node->object->buffer.length, 576 + DB_BYTE_DISPLAY, ACPI_UINT32_MAX); 577 + 578 + acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT); 579 + return; 580 + } 581 + 582 + /******************************************************************************* 583 + * 584 + * FUNCTION: acpi_dm_compare_aml_resources 585 + * 586 + * PARAMETERS: aml1_buffer - Contains first resource list 587 + * aml1_buffer_length - Length of first resource list 588 + * aml2_buffer - Contains second resource list 589 + * aml2_buffer_length - Length of second resource list 590 + * 591 + * RETURN: None 592 + * 593 + * DESCRIPTION: Compare two AML resource lists, descriptor by descriptor (in 594 + * order to isolate a miscompare to an individual resource) 595 + * 596 + ******************************************************************************/ 597 + 598 + static void 
599 + acpi_dm_compare_aml_resources(u8 *aml1_buffer, 600 + acpi_rsdesc_size aml1_buffer_length, 601 + u8 *aml2_buffer, 602 + acpi_rsdesc_size aml2_buffer_length) 603 + { 604 + u8 *aml1; 605 + u8 *aml2; 606 + u8 *aml1_end; 607 + u8 *aml2_end; 608 + acpi_rsdesc_size aml1_length; 609 + acpi_rsdesc_size aml2_length; 610 + acpi_rsdesc_size offset = 0; 611 + u8 resource_type; 612 + u32 count = 0; 613 + u32 i; 614 + 615 + /* Compare overall buffer sizes (may be different due to size rounding) */ 616 + 617 + if (aml1_buffer_length != aml2_buffer_length) { 618 + acpi_os_printf("**** Buffer length mismatch in converted " 619 + "AML: Original %X, New %X ****\n", 620 + aml1_buffer_length, aml2_buffer_length); 621 + } 622 + 623 + aml1 = aml1_buffer; 624 + aml2 = aml2_buffer; 625 + aml1_end = aml1_buffer + aml1_buffer_length; 626 + aml2_end = aml2_buffer + aml2_buffer_length; 627 + 628 + /* Walk the descriptor lists, comparing each descriptor */ 629 + 630 + while ((aml1 < aml1_end) && (aml2 < aml2_end)) { 631 + 632 + /* Get the lengths of each descriptor */ 633 + 634 + aml1_length = acpi_ut_get_descriptor_length(aml1); 635 + aml2_length = acpi_ut_get_descriptor_length(aml2); 636 + resource_type = acpi_ut_get_resource_type(aml1); 637 + 638 + /* Check for descriptor length match */ 639 + 640 + if (aml1_length != aml2_length) { 641 + acpi_os_printf 642 + ("**** Length mismatch in descriptor [%.2X] type %2.2X, " 643 + "Offset %8.8X Len1 %X, Len2 %X ****\n", count, 644 + resource_type, offset, aml1_length, aml2_length); 645 + } 646 + 647 + /* Check for descriptor byte match */ 648 + 649 + else if (memcmp(aml1, aml2, aml1_length)) { 650 + acpi_os_printf 651 + ("**** Data mismatch in descriptor [%.2X] type %2.2X, " 652 + "Offset %8.8X ****\n", count, resource_type, 653 + offset); 654 + 655 + for (i = 0; i < aml1_length; i++) { 656 + if (aml1[i] != aml2[i]) { 657 + acpi_os_printf 658 + ("Mismatch at byte offset %.2X: is %2.2X, " 659 + "should be %2.2X\n", i, aml2[i], 660 + aml1[i]); 661 
+ } 662 + } 663 + } 664 + 665 + /* Exit on end_tag descriptor */ 666 + 667 + if (resource_type == ACPI_RESOURCE_NAME_END_TAG) { 668 + return; 669 + } 670 + 671 + /* Point to next descriptor in each buffer */ 672 + 673 + count++; 674 + offset += aml1_length; 675 + aml1 += aml1_length; 676 + aml2 += aml2_length; 677 + } 678 + } 679 + 680 + /******************************************************************************* 681 + * 682 + * FUNCTION: acpi_dm_test_resource_conversion 683 + * 684 + * PARAMETERS: node - Parent device node 685 + * name - resource method name (_CRS) 686 + * 687 + * RETURN: Status 688 + * 689 + * DESCRIPTION: Compare the original AML with a conversion of the AML to 690 + * internal resource list, then back to AML. 691 + * 692 + ******************************************************************************/ 693 + 694 + static acpi_status 695 + acpi_dm_test_resource_conversion(struct acpi_namespace_node *node, char *name) 696 + { 697 + acpi_status status; 698 + struct acpi_buffer return_buffer; 699 + struct acpi_buffer resource_buffer; 700 + struct acpi_buffer new_aml; 701 + union acpi_object *original_aml; 702 + 703 + acpi_os_printf("Resource Conversion Comparison:\n"); 704 + 705 + new_aml.length = ACPI_ALLOCATE_LOCAL_BUFFER; 706 + return_buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER; 707 + resource_buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER; 708 + 709 + /* Get the original _CRS AML resource template */ 710 + 711 + status = acpi_evaluate_object(node, name, NULL, &return_buffer); 712 + if (ACPI_FAILURE(status)) { 713 + acpi_os_printf("Could not obtain %s: %s\n", 714 + name, acpi_format_exception(status)); 715 + return (status); 716 + } 717 + 718 + /* Get the AML resource template, converted to internal resource structs */ 719 + 720 + status = acpi_get_current_resources(node, &resource_buffer); 721 + if (ACPI_FAILURE(status)) { 722 + acpi_os_printf("AcpiGetCurrentResources failed: %s\n", 723 + acpi_format_exception(status)); 724 + goto exit1; 725 + } 
726 + 727 + /* Convert internal resource list to external AML resource template */ 728 + 729 + status = acpi_rs_create_aml_resources(&resource_buffer, &new_aml); 730 + if (ACPI_FAILURE(status)) { 731 + acpi_os_printf("AcpiRsCreateAmlResources failed: %s\n", 732 + acpi_format_exception(status)); 733 + goto exit2; 734 + } 735 + 736 + /* Compare original AML to the newly created AML resource list */ 737 + 738 + original_aml = return_buffer.pointer; 739 + 740 + acpi_dm_compare_aml_resources(original_aml->buffer.pointer, 741 + (acpi_rsdesc_size) original_aml->buffer. 742 + length, new_aml.pointer, 743 + (acpi_rsdesc_size) new_aml.length); 744 + 745 + /* Cleanup and exit */ 746 + 747 + ACPI_FREE(new_aml.pointer); 748 + exit2: 749 + ACPI_FREE(resource_buffer.pointer); 750 + exit1: 751 + ACPI_FREE(return_buffer.pointer); 752 + return (status); 753 + } 754 + 755 + /******************************************************************************* 756 + * 757 + * FUNCTION: acpi_db_resource_callback 758 + * 759 + * PARAMETERS: acpi_walk_resource_callback 760 + * 761 + * RETURN: Status 762 + * 763 + * DESCRIPTION: Simple callback to exercise acpi_walk_resources and 764 + * acpi_walk_resource_buffer. 765 + * 766 + ******************************************************************************/ 767 + 768 + static acpi_status 769 + acpi_db_resource_callback(struct acpi_resource *resource, void *context) 770 + { 771 + 772 + return (AE_OK); 773 + } 774 + 775 + /******************************************************************************* 776 + * 777 + * FUNCTION: acpi_db_device_resources 778 + * 779 + * PARAMETERS: acpi_walk_callback 780 + * 781 + * RETURN: Status 782 + * 783 + * DESCRIPTION: Display the _PRT/_CRS/_PRS resources for a device object. 
784 + * 785 + ******************************************************************************/ 786 + 787 + static acpi_status 788 + acpi_db_device_resources(acpi_handle obj_handle, 789 + u32 nesting_level, void *context, void **return_value) 790 + { 791 + struct acpi_namespace_node *node; 792 + struct acpi_namespace_node *prt_node = NULL; 793 + struct acpi_namespace_node *crs_node = NULL; 794 + struct acpi_namespace_node *prs_node = NULL; 795 + struct acpi_namespace_node *aei_node = NULL; 796 + char *parent_path; 797 + struct acpi_buffer return_buffer; 798 + acpi_status status; 799 + 800 + node = ACPI_CAST_PTR(struct acpi_namespace_node, obj_handle); 801 + parent_path = acpi_ns_get_external_pathname(node); 802 + if (!parent_path) { 803 + return (AE_NO_MEMORY); 804 + } 805 + 806 + /* Get handles to the resource methods for this device */ 807 + 808 + (void)acpi_get_handle(node, METHOD_NAME__PRT, 809 + ACPI_CAST_PTR(acpi_handle, &prt_node)); 810 + (void)acpi_get_handle(node, METHOD_NAME__CRS, 811 + ACPI_CAST_PTR(acpi_handle, &crs_node)); 812 + (void)acpi_get_handle(node, METHOD_NAME__PRS, 813 + ACPI_CAST_PTR(acpi_handle, &prs_node)); 814 + (void)acpi_get_handle(node, METHOD_NAME__AEI, 815 + ACPI_CAST_PTR(acpi_handle, &aei_node)); 816 + 817 + if (!prt_node && !crs_node && !prs_node && !aei_node) { 818 + goto cleanup; /* Nothing to do */ 819 + } 820 + 821 + acpi_os_printf("\nDevice: %s\n", parent_path); 822 + 823 + /* Prepare for a return object of arbitrary size */ 824 + 825 + return_buffer.pointer = acpi_gbl_db_buffer; 826 + return_buffer.length = ACPI_DEBUG_BUFFER_SIZE; 827 + 828 + /* _PRT */ 829 + 830 + if (prt_node) { 831 + acpi_os_printf("Evaluating _PRT\n"); 832 + 833 + status = 834 + acpi_evaluate_object(prt_node, NULL, NULL, &return_buffer); 835 + if (ACPI_FAILURE(status)) { 836 + acpi_os_printf("Could not evaluate _PRT: %s\n", 837 + acpi_format_exception(status)); 838 + goto get_crs; 839 + } 840 + 841 + return_buffer.pointer = acpi_gbl_db_buffer; 842 + 
return_buffer.length = ACPI_DEBUG_BUFFER_SIZE; 843 + 844 + status = acpi_get_irq_routing_table(node, &return_buffer); 845 + if (ACPI_FAILURE(status)) { 846 + acpi_os_printf("GetIrqRoutingTable failed: %s\n", 847 + acpi_format_exception(status)); 848 + goto get_crs; 849 + } 850 + 851 + acpi_rs_dump_irq_list(ACPI_CAST_PTR(u8, acpi_gbl_db_buffer)); 852 + } 853 + 854 + /* _CRS */ 855 + 856 + get_crs: 857 + if (crs_node) { 858 + acpi_os_printf("Evaluating _CRS\n"); 859 + 860 + return_buffer.pointer = acpi_gbl_db_buffer; 861 + return_buffer.length = ACPI_DEBUG_BUFFER_SIZE; 862 + 863 + status = 864 + acpi_evaluate_object(crs_node, NULL, NULL, &return_buffer); 865 + if (ACPI_FAILURE(status)) { 866 + acpi_os_printf("Could not evaluate _CRS: %s\n", 867 + acpi_format_exception(status)); 868 + goto get_prs; 869 + } 870 + 871 + /* This code exercises the acpi_walk_resources interface */ 872 + 873 + status = acpi_walk_resources(node, METHOD_NAME__CRS, 874 + acpi_db_resource_callback, NULL); 875 + if (ACPI_FAILURE(status)) { 876 + acpi_os_printf("AcpiWalkResources failed: %s\n", 877 + acpi_format_exception(status)); 878 + goto get_prs; 879 + } 880 + 881 + /* Get the _CRS resource list (test ALLOCATE buffer) */ 882 + 883 + return_buffer.pointer = NULL; 884 + return_buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER; 885 + 886 + status = acpi_get_current_resources(node, &return_buffer); 887 + if (ACPI_FAILURE(status)) { 888 + acpi_os_printf("AcpiGetCurrentResources failed: %s\n", 889 + acpi_format_exception(status)); 890 + goto get_prs; 891 + } 892 + 893 + /* This code exercises the acpi_walk_resource_buffer interface */ 894 + 895 + status = acpi_walk_resource_buffer(&return_buffer, 896 + acpi_db_resource_callback, 897 + NULL); 898 + if (ACPI_FAILURE(status)) { 899 + acpi_os_printf("AcpiWalkResourceBuffer failed: %s\n", 900 + acpi_format_exception(status)); 901 + goto end_crs; 902 + } 903 + 904 + /* Dump the _CRS resource list */ 905 + 906 + acpi_rs_dump_resource_list(ACPI_CAST_PTR(struct 
acpi_resource, 907 + return_buffer. 908 + pointer)); 909 + 910 + /* 911 + * Perform comparison of original AML to newly created AML. This 912 + * tests both the AML->Resource conversion and the Resource->AML 913 + * conversion. 914 + */ 915 + (void)acpi_dm_test_resource_conversion(node, METHOD_NAME__CRS); 916 + 917 + /* Execute _SRS with the resource list */ 918 + 919 + acpi_os_printf("Evaluating _SRS\n"); 920 + 921 + status = acpi_set_current_resources(node, &return_buffer); 922 + if (ACPI_FAILURE(status)) { 923 + acpi_os_printf("AcpiSetCurrentResources failed: %s\n", 924 + acpi_format_exception(status)); 925 + goto end_crs; 926 + } 927 + 928 + end_crs: 929 + ACPI_FREE(return_buffer.pointer); 930 + } 931 + 932 + /* _PRS */ 933 + 934 + get_prs: 935 + if (prs_node) { 936 + acpi_os_printf("Evaluating _PRS\n"); 937 + 938 + return_buffer.pointer = acpi_gbl_db_buffer; 939 + return_buffer.length = ACPI_DEBUG_BUFFER_SIZE; 940 + 941 + status = 942 + acpi_evaluate_object(prs_node, NULL, NULL, &return_buffer); 943 + if (ACPI_FAILURE(status)) { 944 + acpi_os_printf("Could not evaluate _PRS: %s\n", 945 + acpi_format_exception(status)); 946 + goto get_aei; 947 + } 948 + 949 + return_buffer.pointer = acpi_gbl_db_buffer; 950 + return_buffer.length = ACPI_DEBUG_BUFFER_SIZE; 951 + 952 + status = acpi_get_possible_resources(node, &return_buffer); 953 + if (ACPI_FAILURE(status)) { 954 + acpi_os_printf("AcpiGetPossibleResources failed: %s\n", 955 + acpi_format_exception(status)); 956 + goto get_aei; 957 + } 958 + 959 + acpi_rs_dump_resource_list(ACPI_CAST_PTR 960 + (struct acpi_resource, 961 + acpi_gbl_db_buffer)); 962 + } 963 + 964 + /* _AEI */ 965 + 966 + get_aei: 967 + if (aei_node) { 968 + acpi_os_printf("Evaluating _AEI\n"); 969 + 970 + return_buffer.pointer = acpi_gbl_db_buffer; 971 + return_buffer.length = ACPI_DEBUG_BUFFER_SIZE; 972 + 973 + status = 974 + acpi_evaluate_object(aei_node, NULL, NULL, &return_buffer); 975 + if (ACPI_FAILURE(status)) { 976 + acpi_os_printf("Could 
not evaluate _AEI: %s\n", 977 + acpi_format_exception(status)); 978 + goto cleanup; 979 + } 980 + 981 + return_buffer.pointer = acpi_gbl_db_buffer; 982 + return_buffer.length = ACPI_DEBUG_BUFFER_SIZE; 983 + 984 + status = acpi_get_event_resources(node, &return_buffer); 985 + if (ACPI_FAILURE(status)) { 986 + acpi_os_printf("AcpiGetEventResources failed: %s\n", 987 + acpi_format_exception(status)); 988 + goto cleanup; 989 + } 990 + 991 + acpi_rs_dump_resource_list(ACPI_CAST_PTR 992 + (struct acpi_resource, 993 + acpi_gbl_db_buffer)); 994 + } 995 + 996 + cleanup: 997 + ACPI_FREE(parent_path); 998 + return (AE_OK); 999 + } 1000 + 1001 + /******************************************************************************* 1002 + * 1003 + * FUNCTION: acpi_db_display_resources 1004 + * 1005 + * PARAMETERS: object_arg - String object name or object pointer. 1006 + * NULL or "*" means "display resources for 1007 + * all devices" 1008 + * 1009 + * RETURN: None 1010 + * 1011 + * DESCRIPTION: Display the resource objects associated with a device. 
1012 + * 1013 + ******************************************************************************/ 1014 + 1015 + void acpi_db_display_resources(char *object_arg) 1016 + { 1017 + struct acpi_namespace_node *node; 1018 + 1019 + acpi_db_set_output_destination(ACPI_DB_REDIRECTABLE_OUTPUT); 1020 + acpi_dbg_level |= ACPI_LV_RESOURCES; 1021 + 1022 + /* Asterisk means "display resources for all devices" */ 1023 + 1024 + if (!object_arg || (!strcmp(object_arg, "*"))) { 1025 + (void)acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT, 1026 + ACPI_UINT32_MAX, 1027 + acpi_db_device_resources, NULL, NULL, 1028 + NULL); 1029 + } else { 1030 + /* Convert string to object pointer */ 1031 + 1032 + node = acpi_db_convert_to_node(object_arg); 1033 + if (node) { 1034 + if (node->type != ACPI_TYPE_DEVICE) { 1035 + acpi_os_printf 1036 + ("%4.4s: Name is not a device object (%s)\n", 1037 + node->name.ascii, 1038 + acpi_ut_get_type_name(node->type)); 1039 + } else { 1040 + (void)acpi_db_device_resources(node, 0, NULL, 1041 + NULL); 1042 + } 1043 + } 1044 + } 1045 + 1046 + acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT); 1047 + } 1048 + 1049 + #if (!ACPI_REDUCED_HARDWARE) 1050 + /******************************************************************************* 1051 + * 1052 + * FUNCTION: acpi_db_generate_gpe 1053 + * 1054 + * PARAMETERS: gpe_arg - Raw GPE number, ascii string 1055 + * block_arg - GPE block number, ascii string 1056 + * 0 or 1 for FADT GPE blocks 1057 + * 1058 + * RETURN: None 1059 + * 1060 + * DESCRIPTION: Simulate firing of a GPE 1061 + * 1062 + ******************************************************************************/ 1063 + 1064 + void acpi_db_generate_gpe(char *gpe_arg, char *block_arg) 1065 + { 1066 + u32 block_number = 0; 1067 + u32 gpe_number; 1068 + struct acpi_gpe_event_info *gpe_event_info; 1069 + 1070 + gpe_number = strtoul(gpe_arg, NULL, 0); 1071 + 1072 + /* 1073 + * If no block arg, or block arg == 0 or 1, use the FADT-defined 1074 + * GPE blocks. 
1075 + */ 1076 + if (block_arg) { 1077 + block_number = strtoul(block_arg, NULL, 0); 1078 + if (block_number == 1) { 1079 + block_number = 0; 1080 + } 1081 + } 1082 + 1083 + gpe_event_info = 1084 + acpi_ev_get_gpe_event_info(ACPI_TO_POINTER(block_number), 1085 + gpe_number); 1086 + if (!gpe_event_info) { 1087 + acpi_os_printf("Invalid GPE\n"); 1088 + return; 1089 + } 1090 + 1091 + (void)acpi_ev_gpe_dispatch(NULL, gpe_event_info, gpe_number); 1092 + } 1093 + 1094 + /******************************************************************************* 1095 + * 1096 + * FUNCTION: acpi_db_generate_sci 1097 + * 1098 + * PARAMETERS: None 1099 + * 1100 + * RETURN: None 1101 + * 1102 + * DESCRIPTION: Simulate an SCI -- just call the SCI dispatch. 1103 + * 1104 + ******************************************************************************/ 1105 + 1106 + void acpi_db_generate_sci(void) 1107 + { 1108 + acpi_ev_sci_dispatch(); 1109 + } 1110 + 1111 + #endif /* !ACPI_REDUCED_HARDWARE */ 1112 + 1113 + /******************************************************************************* 1114 + * 1115 + * FUNCTION: acpi_db_trace 1116 + * 1117 + * PARAMETERS: enable_arg - ENABLE/AML to enable tracer 1118 + * DISABLE to disable tracer 1119 + * method_arg - Method to trace 1120 + * once_arg - Whether trace once 1121 + * 1122 + * RETURN: None 1123 + * 1124 + * DESCRIPTION: Control method tracing facility 1125 + * 1126 + ******************************************************************************/ 1127 + 1128 + void acpi_db_trace(char *enable_arg, char *method_arg, char *once_arg) 1129 + { 1130 + u32 debug_level = 0; 1131 + u32 debug_layer = 0; 1132 + u32 flags = 0; 1133 + 1134 + if (enable_arg) { 1135 + acpi_ut_strupr(enable_arg); 1136 + } 1137 + 1138 + if (once_arg) { 1139 + acpi_ut_strupr(once_arg); 1140 + } 1141 + 1142 + if (method_arg) { 1143 + if (acpi_db_trace_method_name) { 1144 + ACPI_FREE(acpi_db_trace_method_name); 1145 + acpi_db_trace_method_name = NULL; 1146 + } 1147 + 1148 + 
acpi_db_trace_method_name = 1149 + ACPI_ALLOCATE(strlen(method_arg) + 1); 1150 + if (!acpi_db_trace_method_name) { 1151 + acpi_os_printf("Failed to allocate method name (%s)\n", 1152 + method_arg); 1153 + return; 1154 + } 1155 + 1156 + strcpy(acpi_db_trace_method_name, method_arg); 1157 + } 1158 + 1159 + if (!strcmp(enable_arg, "ENABLE") || 1160 + !strcmp(enable_arg, "METHOD") || !strcmp(enable_arg, "OPCODE")) { 1161 + if (!strcmp(enable_arg, "ENABLE")) { 1162 + 1163 + /* Inherit current console settings */ 1164 + 1165 + debug_level = acpi_gbl_db_console_debug_level; 1166 + debug_layer = acpi_dbg_layer; 1167 + } else { 1168 + /* Restrict console output to trace points only */ 1169 + 1170 + debug_level = ACPI_LV_TRACE_POINT; 1171 + debug_layer = ACPI_EXECUTER; 1172 + } 1173 + 1174 + flags = ACPI_TRACE_ENABLED; 1175 + 1176 + if (!strcmp(enable_arg, "OPCODE")) { 1177 + flags |= ACPI_TRACE_OPCODE; 1178 + } 1179 + 1180 + if (once_arg && !strcmp(once_arg, "ONCE")) { 1181 + flags |= ACPI_TRACE_ONESHOT; 1182 + } 1183 + } 1184 + 1185 + (void)acpi_debug_trace(acpi_db_trace_method_name, 1186 + debug_level, debug_layer, flags); 1187 + }
+484
drivers/acpi/acpica/dbconvert.c
··· 1 + /******************************************************************************* 2 + * 3 + * Module Name: dbconvert - debugger miscellaneous conversion routines 4 + * 5 + ******************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + #include "acdebug.h" 47 + 48 + #define _COMPONENT ACPI_CA_DEBUGGER 49 + ACPI_MODULE_NAME("dbconvert") 50 + 51 + #define DB_DEFAULT_PKG_ELEMENTS 33 52 + /******************************************************************************* 53 + * 54 + * FUNCTION: acpi_db_hex_char_to_value 55 + * 56 + * PARAMETERS: hex_char - Ascii Hex digit, 0-9|a-f|A-F 57 + * return_value - Where the converted value is returned 58 + * 59 + * RETURN: Status 60 + * 61 + * DESCRIPTION: Convert a single hex character to a 4-bit number (0-15). 62 + * 63 + ******************************************************************************/ 64 + acpi_status acpi_db_hex_char_to_value(int hex_char, u8 *return_value) 65 + { 66 + u8 value; 67 + 68 + /* Digit must be ascii [0-9a-fA-F] */ 69 + 70 + if (!isxdigit(hex_char)) { 71 + return (AE_BAD_HEX_CONSTANT); 72 + } 73 + 74 + if (hex_char <= 0x39) { 75 + value = (u8)(hex_char - 0x30); 76 + } else { 77 + value = (u8)(toupper(hex_char) - 0x37); 78 + } 79 + 80 + *return_value = value; 81 + return (AE_OK); 82 + } 83 + 84 + /******************************************************************************* 85 + * 86 + * FUNCTION: acpi_db_hex_byte_to_binary 87 + * 88 + * PARAMETERS: hex_byte - Double hex digit (0x00 - 0xFF) in format: 89 + * hi_byte then lo_byte. 
90 + * return_value - Where the converted value is returned 91 + * 92 + * RETURN: Status 93 + * 94 + * DESCRIPTION: Convert two hex characters to an 8-bit number (0 - 255). 95 + * 96 + ******************************************************************************/ 97 + 98 + static acpi_status acpi_db_hex_byte_to_binary(char *hex_byte, u8 *return_value) 99 + { 100 + u8 local0; 101 + u8 local1; 102 + acpi_status status; 103 + 104 + /* High byte */ 105 + 106 + status = acpi_db_hex_char_to_value(hex_byte[0], &local0); 107 + if (ACPI_FAILURE(status)) { 108 + return (status); 109 + } 110 + 111 + /* Low byte */ 112 + 113 + status = acpi_db_hex_char_to_value(hex_byte[1], &local1); 114 + if (ACPI_FAILURE(status)) { 115 + return (status); 116 + } 117 + 118 + *return_value = (u8)((local0 << 4) | local1); 119 + return (AE_OK); 120 + } 121 + 122 + /******************************************************************************* 123 + * 124 + * FUNCTION: acpi_db_convert_to_buffer 125 + * 126 + * PARAMETERS: string - Input string to be converted 127 + * object - Where the buffer object is returned 128 + * 129 + * RETURN: Status 130 + * 131 + * DESCRIPTION: Convert a string to a buffer object. String is treated as a list 132 + * of buffer elements, each separated by a space or comma. 
133 + * 134 + ******************************************************************************/ 135 + 136 + static acpi_status 137 + acpi_db_convert_to_buffer(char *string, union acpi_object *object) 138 + { 139 + u32 i; 140 + u32 j; 141 + u32 length; 142 + u8 *buffer; 143 + acpi_status status; 144 + 145 + /* Generate the final buffer length */ 146 + 147 + for (i = 0, length = 0; string[i];) { 148 + i += 2; 149 + length++; 150 + 151 + while (string[i] && ((string[i] == ',') || (string[i] == ' '))) { 152 + i++; 153 + } 154 + } 155 + 156 + buffer = ACPI_ALLOCATE(length); 157 + if (!buffer) { 158 + return (AE_NO_MEMORY); 159 + } 160 + 161 + /* Convert the command line bytes to the buffer */ 162 + 163 + for (i = 0, j = 0; string[i];) { 164 + status = acpi_db_hex_byte_to_binary(&string[i], &buffer[j]); 165 + if (ACPI_FAILURE(status)) { 166 + ACPI_FREE(buffer); 167 + return (status); 168 + } 169 + 170 + j++; 171 + i += 2; 172 + while (string[i] && ((string[i] == ',') || (string[i] == ' '))) { 173 + i++; 174 + } 175 + } 176 + 177 + object->type = ACPI_TYPE_BUFFER; 178 + object->buffer.pointer = buffer; 179 + object->buffer.length = length; 180 + return (AE_OK); 181 + } 182 + 183 + /******************************************************************************* 184 + * 185 + * FUNCTION: acpi_db_convert_to_package 186 + * 187 + * PARAMETERS: string - Input string to be converted 188 + * object - Where the package object is returned 189 + * 190 + * RETURN: Status 191 + * 192 + * DESCRIPTION: Convert a string to a package object. Handles nested packages 193 + * via recursion with acpi_db_convert_to_object. 
 *
 ******************************************************************************/

acpi_status acpi_db_convert_to_package(char *string, union acpi_object *object)
{
	char *this;
	char *next;
	u32 i;
	acpi_object_type type;
	union acpi_object *elements;
	acpi_status status;

	elements =
	    ACPI_ALLOCATE_ZEROED(DB_DEFAULT_PKG_ELEMENTS *
				 sizeof(union acpi_object));
	if (!elements) {
		return (AE_NO_MEMORY);
	}

	this = string;
	for (i = 0; i < (DB_DEFAULT_PKG_ELEMENTS - 1); i++) {
		this = acpi_db_get_next_token(this, &next, &type);
		if (!this) {
			break;
		}

		/* Recursive call to convert each package element */

		status = acpi_db_convert_to_object(type, this, &elements[i]);
		if (ACPI_FAILURE(status)) {
			acpi_db_delete_objects(i + 1, elements);
			ACPI_FREE(elements);
			return (status);
		}

		this = next;
	}

	object->type = ACPI_TYPE_PACKAGE;
	object->package.count = i;
	object->package.elements = elements;
	return (AE_OK);
}

/*******************************************************************************
 *
 * FUNCTION:    acpi_db_convert_to_object
 *
 * PARAMETERS:  type                - Object type as determined by parser
 *              string              - Input string to be converted
 *              object              - Where the new object is returned
 *
 * RETURN:      Status
 *
 * DESCRIPTION: Convert a typed and tokenized string to a union acpi_object.
 *              Typing:
 *              1) String objects were surrounded by quotes.
 *              2) Buffer objects were surrounded by parentheses.
 *              3) Package objects were surrounded by brackets "[]".
 *              4) All standalone tokens are treated as integers.
 *
 ******************************************************************************/

acpi_status
acpi_db_convert_to_object(acpi_object_type type,
			  char *string, union acpi_object *object)
{
	acpi_status status = AE_OK;

	switch (type) {
	case ACPI_TYPE_STRING:

		object->type = ACPI_TYPE_STRING;
		object->string.pointer = string;
		object->string.length = (u32)strlen(string);
		break;

	case ACPI_TYPE_BUFFER:

		status = acpi_db_convert_to_buffer(string, object);
		break;

	case ACPI_TYPE_PACKAGE:

		status = acpi_db_convert_to_package(string, object);
		break;

	default:

		object->type = ACPI_TYPE_INTEGER;
		status = acpi_ut_strtoul64(string, 16, &object->integer.value);
		break;
	}

	return (status);
}

/*******************************************************************************
 *
 * FUNCTION:    acpi_db_encode_pld_buffer
 *
 * PARAMETERS:  pld_info            - _PLD buffer struct (Using local struct)
 *
 * RETURN:      Encoded _PLD buffer suitable for a _PLD return value
 *
 * DESCRIPTION: Bit-packs a _PLD buffer struct.
Used to test the _PLD macros 296 + * 297 + ******************************************************************************/ 298 + 299 + u8 *acpi_db_encode_pld_buffer(struct acpi_pld_info *pld_info) 300 + { 301 + u32 *buffer; 302 + u32 dword; 303 + 304 + buffer = ACPI_ALLOCATE_ZEROED(ACPI_PLD_BUFFER_SIZE); 305 + if (!buffer) { 306 + return (NULL); 307 + } 308 + 309 + /* First 32 bits */ 310 + 311 + dword = 0; 312 + ACPI_PLD_SET_REVISION(&dword, pld_info->revision); 313 + ACPI_PLD_SET_IGNORE_COLOR(&dword, pld_info->ignore_color); 314 + ACPI_PLD_SET_RED(&dword, pld_info->red); 315 + ACPI_PLD_SET_GREEN(&dword, pld_info->green); 316 + ACPI_PLD_SET_BLUE(&dword, pld_info->blue); 317 + ACPI_MOVE_32_TO_32(&buffer[0], &dword); 318 + 319 + /* Second 32 bits */ 320 + 321 + dword = 0; 322 + ACPI_PLD_SET_WIDTH(&dword, pld_info->width); 323 + ACPI_PLD_SET_HEIGHT(&dword, pld_info->height); 324 + ACPI_MOVE_32_TO_32(&buffer[1], &dword); 325 + 326 + /* Third 32 bits */ 327 + 328 + dword = 0; 329 + ACPI_PLD_SET_USER_VISIBLE(&dword, pld_info->user_visible); 330 + ACPI_PLD_SET_DOCK(&dword, pld_info->dock); 331 + ACPI_PLD_SET_LID(&dword, pld_info->lid); 332 + ACPI_PLD_SET_PANEL(&dword, pld_info->panel); 333 + ACPI_PLD_SET_VERTICAL(&dword, pld_info->vertical_position); 334 + ACPI_PLD_SET_HORIZONTAL(&dword, pld_info->horizontal_position); 335 + ACPI_PLD_SET_SHAPE(&dword, pld_info->shape); 336 + ACPI_PLD_SET_ORIENTATION(&dword, pld_info->group_orientation); 337 + ACPI_PLD_SET_TOKEN(&dword, pld_info->group_token); 338 + ACPI_PLD_SET_POSITION(&dword, pld_info->group_position); 339 + ACPI_PLD_SET_BAY(&dword, pld_info->bay); 340 + ACPI_MOVE_32_TO_32(&buffer[2], &dword); 341 + 342 + /* Fourth 32 bits */ 343 + 344 + dword = 0; 345 + ACPI_PLD_SET_EJECTABLE(&dword, pld_info->ejectable); 346 + ACPI_PLD_SET_OSPM_EJECT(&dword, pld_info->ospm_eject_required); 347 + ACPI_PLD_SET_CABINET(&dword, pld_info->cabinet_number); 348 + ACPI_PLD_SET_CARD_CAGE(&dword, pld_info->card_cage_number); 349 + 
ACPI_PLD_SET_REFERENCE(&dword, pld_info->reference); 350 + ACPI_PLD_SET_ROTATION(&dword, pld_info->rotation); 351 + ACPI_PLD_SET_ORDER(&dword, pld_info->order); 352 + ACPI_MOVE_32_TO_32(&buffer[3], &dword); 353 + 354 + if (pld_info->revision >= 2) { 355 + 356 + /* Fifth 32 bits */ 357 + 358 + dword = 0; 359 + ACPI_PLD_SET_VERT_OFFSET(&dword, pld_info->vertical_offset); 360 + ACPI_PLD_SET_HORIZ_OFFSET(&dword, pld_info->horizontal_offset); 361 + ACPI_MOVE_32_TO_32(&buffer[4], &dword); 362 + } 363 + 364 + return (ACPI_CAST_PTR(u8, buffer)); 365 + } 366 + 367 + /******************************************************************************* 368 + * 369 + * FUNCTION: acpi_db_dump_pld_buffer 370 + * 371 + * PARAMETERS: obj_desc - Object returned from _PLD method 372 + * 373 + * RETURN: None. 374 + * 375 + * DESCRIPTION: Dumps formatted contents of a _PLD return buffer. 376 + * 377 + ******************************************************************************/ 378 + 379 + #define ACPI_PLD_OUTPUT "%20s : %-6X\n" 380 + 381 + void acpi_db_dump_pld_buffer(union acpi_object *obj_desc) 382 + { 383 + union acpi_object *buffer_desc; 384 + struct acpi_pld_info *pld_info; 385 + u8 *new_buffer; 386 + acpi_status status; 387 + 388 + /* Object must be of type Package with at least one Buffer element */ 389 + 390 + if (obj_desc->type != ACPI_TYPE_PACKAGE) { 391 + return; 392 + } 393 + 394 + buffer_desc = &obj_desc->package.elements[0]; 395 + if (buffer_desc->type != ACPI_TYPE_BUFFER) { 396 + return; 397 + } 398 + 399 + /* Convert _PLD buffer to local _PLD struct */ 400 + 401 + status = acpi_decode_pld_buffer(buffer_desc->buffer.pointer, 402 + buffer_desc->buffer.length, &pld_info); 403 + if (ACPI_FAILURE(status)) { 404 + return; 405 + } 406 + 407 + /* Encode local _PLD struct back to a _PLD buffer */ 408 + 409 + new_buffer = acpi_db_encode_pld_buffer(pld_info); 410 + if (!new_buffer) { 411 + return; 412 + } 413 + 414 + /* The two bit-packed buffers should match */ 415 + 416 + if 
(memcmp(new_buffer, buffer_desc->buffer.pointer, 417 + buffer_desc->buffer.length)) { 418 + acpi_os_printf 419 + ("Converted _PLD buffer does not compare. New:\n"); 420 + 421 + acpi_ut_dump_buffer(new_buffer, 422 + buffer_desc->buffer.length, DB_BYTE_DISPLAY, 423 + 0); 424 + } 425 + 426 + /* First 32-bit dword */ 427 + 428 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Revision", pld_info->revision); 429 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_IgnoreColor", 430 + pld_info->ignore_color); 431 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Red", pld_info->red); 432 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Green", pld_info->green); 433 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Blue", pld_info->blue); 434 + 435 + /* Second 32-bit dword */ 436 + 437 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Width", pld_info->width); 438 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Height", pld_info->height); 439 + 440 + /* Third 32-bit dword */ 441 + 442 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_UserVisible", 443 + pld_info->user_visible); 444 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Dock", pld_info->dock); 445 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Lid", pld_info->lid); 446 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Panel", pld_info->panel); 447 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_VerticalPosition", 448 + pld_info->vertical_position); 449 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_HorizontalPosition", 450 + pld_info->horizontal_position); 451 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Shape", pld_info->shape); 452 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_GroupOrientation", 453 + pld_info->group_orientation); 454 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_GroupToken", 455 + pld_info->group_token); 456 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_GroupPosition", 457 + pld_info->group_position); 458 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Bay", pld_info->bay); 459 + 460 + /* Fourth 32-bit dword */ 461 + 462 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Ejectable", pld_info->ejectable); 463 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_EjectRequired", 464 + 
pld_info->ospm_eject_required); 465 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_CabinetNumber", 466 + pld_info->cabinet_number); 467 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_CardCageNumber", 468 + pld_info->card_cage_number); 469 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Reference", pld_info->reference); 470 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Rotation", pld_info->rotation); 471 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Order", pld_info->order); 472 + 473 + /* Fifth 32-bit dword */ 474 + 475 + if (buffer_desc->buffer.length > 16) { 476 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_VerticalOffset", 477 + pld_info->vertical_offset); 478 + acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_HorizontalOffset", 479 + pld_info->horizontal_offset); 480 + } 481 + 482 + ACPI_FREE(pld_info); 483 + ACPI_FREE(new_buffer); 484 + }
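The two-stage hex parsing used near the top of this file (acpi_db_hex_char_to_value feeding acpi_db_hex_byte_to_binary, with acpi_db_convert_to_buffer skipping runs of spaces and commas between byte tokens) can be exercised as a small stand-alone sketch. This is an illustrative re-implementation for experimenting outside ACPICA; `hex_nibble` and `hex_string_to_bytes` are hypothetical names, not ACPICA API:

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>

/* Convert one ASCII hex digit to its 4-bit value (0-15), or -1 if invalid. */
static int hex_nibble(int c)
{
	if (!isxdigit(c))
		return -1;
	/* '0'-'9' are 0x30-0x39; 'A'-'F' are 0x41-0x46, so subtracting
	 * 0x37 (i.e. 'A' - 10) maps them to 10-15 -- the same offsets the
	 * ACPICA routine uses. */
	return (c <= '9') ? (c - '0') : (toupper(c) - ('A' - 10));
}

/* Parse a string of two-character hex bytes separated by spaces/commas.
 * Returns the number of bytes written, or -1 on a bad digit or overflow. */
static int hex_string_to_bytes(const char *s, unsigned char *out, size_t max)
{
	size_t i = 0, n = 0;

	while (s[i]) {
		int hi = hex_nibble(s[i]);
		int lo = hex_nibble(s[i + 1]);

		if (hi < 0 || lo < 0 || n >= max)
			return -1;
		out[n++] = (unsigned char)((hi << 4) | lo);

		i += 2;
		while (s[i] == ',' || s[i] == ' ')
			i++;	/* skip any run of separators, as ACPICA does */
	}
	return (int)n;
}
```

As in the original, an odd trailing digit fails cleanly because the lookahead hits the terminating NUL, which is not a hex digit.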
drivers/acpi/acpica/dbdisply.c
··· 1 + /******************************************************************************* 2 + * 3 + * Module Name: dbdisply - debug display commands 4 + * 5 + ******************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + #include "amlcode.h" 47 + #include "acdispat.h" 48 + #include "acnamesp.h" 49 + #include "acparser.h" 50 + #include "acinterp.h" 51 + #include "acdebug.h" 52 + 53 + #define _COMPONENT ACPI_CA_DEBUGGER 54 + ACPI_MODULE_NAME("dbdisply") 55 + 56 + /* Local prototypes */ 57 + static void acpi_db_dump_parser_descriptor(union acpi_parse_object *op); 58 + 59 + static void *acpi_db_get_pointer(void *target); 60 + 61 + static acpi_status 62 + acpi_db_display_non_root_handlers(acpi_handle obj_handle, 63 + u32 nesting_level, 64 + void *context, void **return_value); 65 + 66 + /* 67 + * System handler information. 68 + * Used for Handlers command, in acpi_db_display_handlers. 
69 + */ 70 + #define ACPI_PREDEFINED_PREFIX "%25s (%.2X) : " 71 + #define ACPI_HANDLER_NAME_STRING "%30s : " 72 + #define ACPI_HANDLER_PRESENT_STRING "%-9s (%p)\n" 73 + #define ACPI_HANDLER_PRESENT_STRING2 "%-9s (%p)" 74 + #define ACPI_HANDLER_NOT_PRESENT_STRING "%-9s\n" 75 + 76 + /* All predefined Address Space IDs */ 77 + 78 + static acpi_adr_space_type acpi_gbl_space_id_list[] = { 79 + ACPI_ADR_SPACE_SYSTEM_MEMORY, 80 + ACPI_ADR_SPACE_SYSTEM_IO, 81 + ACPI_ADR_SPACE_PCI_CONFIG, 82 + ACPI_ADR_SPACE_EC, 83 + ACPI_ADR_SPACE_SMBUS, 84 + ACPI_ADR_SPACE_CMOS, 85 + ACPI_ADR_SPACE_PCI_BAR_TARGET, 86 + ACPI_ADR_SPACE_IPMI, 87 + ACPI_ADR_SPACE_GPIO, 88 + ACPI_ADR_SPACE_GSBUS, 89 + ACPI_ADR_SPACE_DATA_TABLE, 90 + ACPI_ADR_SPACE_FIXED_HARDWARE 91 + }; 92 + 93 + /* Global handler information */ 94 + 95 + typedef struct acpi_handler_info { 96 + void *handler; 97 + char *name; 98 + 99 + } acpi_handler_info; 100 + 101 + static struct acpi_handler_info acpi_gbl_handler_list[] = { 102 + {&acpi_gbl_global_notify[0].handler, "System Notifications"}, 103 + {&acpi_gbl_global_notify[1].handler, "Device Notifications"}, 104 + {&acpi_gbl_table_handler, "ACPI Table Events"}, 105 + {&acpi_gbl_exception_handler, "Control Method Exceptions"}, 106 + {&acpi_gbl_interface_handler, "OSI Invocations"} 107 + }; 108 + 109 + /******************************************************************************* 110 + * 111 + * FUNCTION: acpi_db_get_pointer 112 + * 113 + * PARAMETERS: target - Pointer to string to be converted 114 + * 115 + * RETURN: Converted pointer 116 + * 117 + * DESCRIPTION: Convert an ascii pointer value to a real value 118 + * 119 + ******************************************************************************/ 120 + 121 + static void *acpi_db_get_pointer(void *target) 122 + { 123 + void *obj_ptr; 124 + acpi_size address; 125 + 126 + address = strtoul(target, NULL, 16); 127 + obj_ptr = ACPI_TO_POINTER(address); 128 + return (obj_ptr); 129 + } 130 + 131 + 
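The conversion acpi_db_get_pointer performs below (an address typed as ASCII hex at the debugger prompt becomes a real pointer via strtoul) can be sketched in isolation. `parse_pointer` is an illustrative name, not part of the ACPICA sources:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse an ASCII hex address into an address-sized integer, then
 * reinterpret it as a pointer -- the same strtoul(target, NULL, 16)
 * idiom used by acpi_db_get_pointer. */
static void *parse_pointer(const char *target)
{
	/* Base 16: strtoul() also accepts an optional "0x" prefix */
	uintptr_t address = (uintptr_t)strtoul(target, NULL, 16);

	return (void *)address;
}
```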
/******************************************************************************* 132 + * 133 + * FUNCTION: acpi_db_dump_parser_descriptor 134 + * 135 + * PARAMETERS: op - A parser Op descriptor 136 + * 137 + * RETURN: None 138 + * 139 + * DESCRIPTION: Display a formatted parser object 140 + * 141 + ******************************************************************************/ 142 + 143 + static void acpi_db_dump_parser_descriptor(union acpi_parse_object *op) 144 + { 145 + const struct acpi_opcode_info *info; 146 + 147 + info = acpi_ps_get_opcode_info(op->common.aml_opcode); 148 + 149 + acpi_os_printf("Parser Op Descriptor:\n"); 150 + acpi_os_printf("%20.20s : %4.4X\n", "Opcode", op->common.aml_opcode); 151 + 152 + ACPI_DEBUG_ONLY_MEMBERS(acpi_os_printf("%20.20s : %s\n", "Opcode Name", 153 + info->name)); 154 + 155 + acpi_os_printf("%20.20s : %p\n", "Value/ArgList", op->common.value.arg); 156 + acpi_os_printf("%20.20s : %p\n", "Parent", op->common.parent); 157 + acpi_os_printf("%20.20s : %p\n", "NextOp", op->common.next); 158 + } 159 + 160 + /******************************************************************************* 161 + * 162 + * FUNCTION: acpi_db_decode_and_display_object 163 + * 164 + * PARAMETERS: target - String with object to be displayed. Names 165 + * and hex pointers are supported. 
166 + * output_type - Byte, Word, Dword, or Qword (B|W|D|Q) 167 + * 168 + * RETURN: None 169 + * 170 + * DESCRIPTION: Display a formatted ACPI object 171 + * 172 + ******************************************************************************/ 173 + 174 + void acpi_db_decode_and_display_object(char *target, char *output_type) 175 + { 176 + void *obj_ptr; 177 + struct acpi_namespace_node *node; 178 + union acpi_operand_object *obj_desc; 179 + u32 display = DB_BYTE_DISPLAY; 180 + char buffer[80]; 181 + struct acpi_buffer ret_buf; 182 + acpi_status status; 183 + u32 size; 184 + 185 + if (!target) { 186 + return; 187 + } 188 + 189 + /* Decode the output type */ 190 + 191 + if (output_type) { 192 + acpi_ut_strupr(output_type); 193 + if (output_type[0] == 'W') { 194 + display = DB_WORD_DISPLAY; 195 + } else if (output_type[0] == 'D') { 196 + display = DB_DWORD_DISPLAY; 197 + } else if (output_type[0] == 'Q') { 198 + display = DB_QWORD_DISPLAY; 199 + } 200 + } 201 + 202 + ret_buf.length = sizeof(buffer); 203 + ret_buf.pointer = buffer; 204 + 205 + /* Differentiate between a number and a name */ 206 + 207 + if ((target[0] >= 0x30) && (target[0] <= 0x39)) { 208 + obj_ptr = acpi_db_get_pointer(target); 209 + if (!acpi_os_readable(obj_ptr, 16)) { 210 + acpi_os_printf 211 + ("Address %p is invalid in this address space\n", 212 + obj_ptr); 213 + return; 214 + } 215 + 216 + /* Decode the object type */ 217 + 218 + switch (ACPI_GET_DESCRIPTOR_TYPE(obj_ptr)) { 219 + case ACPI_DESC_TYPE_NAMED: 220 + 221 + /* This is a namespace Node */ 222 + 223 + if (!acpi_os_readable 224 + (obj_ptr, sizeof(struct acpi_namespace_node))) { 225 + acpi_os_printf 226 + ("Cannot read entire Named object at address %p\n", 227 + obj_ptr); 228 + return; 229 + } 230 + 231 + node = obj_ptr; 232 + goto dump_node; 233 + 234 + case ACPI_DESC_TYPE_OPERAND: 235 + 236 + /* This is a ACPI OPERAND OBJECT */ 237 + 238 + if (!acpi_os_readable 239 + (obj_ptr, sizeof(union acpi_operand_object))) { 240 + acpi_os_printf 
			    ("Cannot read entire ACPI object at address %p\n",
			     obj_ptr);
				return;
			}

			acpi_ut_debug_dump_buffer(obj_ptr,
						  sizeof(union
							 acpi_operand_object),
						  display, ACPI_UINT32_MAX);
			acpi_ex_dump_object_descriptor(obj_ptr, 1);
			break;

		case ACPI_DESC_TYPE_PARSER:

			/* This is a Parser Op object */

			if (!acpi_os_readable
			    (obj_ptr, sizeof(union acpi_parse_object))) {
				acpi_os_printf
				    ("Cannot read entire Parser object at address %p\n",
				     obj_ptr);
				return;
			}

			acpi_ut_debug_dump_buffer(obj_ptr,
						  sizeof(union
							 acpi_parse_object),
						  display, ACPI_UINT32_MAX);
			acpi_db_dump_parser_descriptor((union acpi_parse_object
							*)obj_ptr);
			break;

		default:

			/* Is not a recognizable object */

			acpi_os_printf
			    ("Not a known ACPI internal object, descriptor type %2.2X\n",
			     ACPI_GET_DESCRIPTOR_TYPE(obj_ptr));

			size = 16;
			if (acpi_os_readable(obj_ptr, 64)) {
				size = 64;
			}

			/* Just dump some memory */

			acpi_ut_debug_dump_buffer(obj_ptr, size, display,
						  ACPI_UINT32_MAX);
			break;
		}

		return;
	}

	/* The parameter is a name string that must be resolved to a Named obj */

	node = acpi_db_local_ns_lookup(target);
	if (!node) {
		return;
	}

dump_node:
	/* Now dump the NS node */

	status = acpi_get_name(node, ACPI_FULL_PATHNAME_NO_TRAILING, &ret_buf);
	if (ACPI_FAILURE(status)) {
		acpi_os_printf("Could not convert name to pathname\n");
	} else {
		acpi_os_printf("Object (%p) Pathname: %s\n",
			       node, (char *)ret_buf.pointer);
	}

	if (!acpi_os_readable(node, sizeof(struct acpi_namespace_node))) {
		acpi_os_printf("Invalid Named object at address %p\n", node);
		return;
	}

acpi_ut_debug_dump_buffer((void *)node, 322 + sizeof(struct acpi_namespace_node), display, 323 + ACPI_UINT32_MAX); 324 + acpi_ex_dump_namespace_node(node, 1); 325 + 326 + obj_desc = acpi_ns_get_attached_object(node); 327 + if (obj_desc) { 328 + acpi_os_printf("\nAttached Object (%p):\n", obj_desc); 329 + if (!acpi_os_readable 330 + (obj_desc, sizeof(union acpi_operand_object))) { 331 + acpi_os_printf 332 + ("Invalid internal ACPI Object at address %p\n", 333 + obj_desc); 334 + return; 335 + } 336 + 337 + acpi_ut_debug_dump_buffer((void *)obj_desc, 338 + sizeof(union acpi_operand_object), 339 + display, ACPI_UINT32_MAX); 340 + acpi_ex_dump_object_descriptor(obj_desc, 1); 341 + } 342 + } 343 + 344 + /******************************************************************************* 345 + * 346 + * FUNCTION: acpi_db_display_method_info 347 + * 348 + * PARAMETERS: start_op - Root of the control method parse tree 349 + * 350 + * RETURN: None 351 + * 352 + * DESCRIPTION: Display information about the current method 353 + * 354 + ******************************************************************************/ 355 + 356 + void acpi_db_display_method_info(union acpi_parse_object *start_op) 357 + { 358 + struct acpi_walk_state *walk_state; 359 + union acpi_operand_object *obj_desc; 360 + struct acpi_namespace_node *node; 361 + union acpi_parse_object *root_op; 362 + union acpi_parse_object *op; 363 + const struct acpi_opcode_info *op_info; 364 + u32 num_ops = 0; 365 + u32 num_operands = 0; 366 + u32 num_operators = 0; 367 + u32 num_remaining_ops = 0; 368 + u32 num_remaining_operands = 0; 369 + u32 num_remaining_operators = 0; 370 + u8 count_remaining = FALSE; 371 + 372 + walk_state = acpi_ds_get_current_walk_state(acpi_gbl_current_walk_list); 373 + if (!walk_state) { 374 + acpi_os_printf("There is no method currently executing\n"); 375 + return; 376 + } 377 + 378 + obj_desc = walk_state->method_desc; 379 + node = walk_state->method_node; 380 + 381 + acpi_os_printf("Currently 
executing control method is [%4.4s]\n", 382 + acpi_ut_get_node_name(node)); 383 + acpi_os_printf("%X Arguments, SyncLevel = %X\n", 384 + (u32)obj_desc->method.param_count, 385 + (u32)obj_desc->method.sync_level); 386 + 387 + root_op = start_op; 388 + while (root_op->common.parent) { 389 + root_op = root_op->common.parent; 390 + } 391 + 392 + op = root_op; 393 + 394 + while (op) { 395 + if (op == start_op) { 396 + count_remaining = TRUE; 397 + } 398 + 399 + num_ops++; 400 + if (count_remaining) { 401 + num_remaining_ops++; 402 + } 403 + 404 + /* Decode the opcode */ 405 + 406 + op_info = acpi_ps_get_opcode_info(op->common.aml_opcode); 407 + switch (op_info->class) { 408 + case AML_CLASS_ARGUMENT: 409 + 410 + if (count_remaining) { 411 + num_remaining_operands++; 412 + } 413 + 414 + num_operands++; 415 + break; 416 + 417 + case AML_CLASS_UNKNOWN: 418 + 419 + /* Bad opcode or ASCII character */ 420 + 421 + continue; 422 + 423 + default: 424 + 425 + if (count_remaining) { 426 + num_remaining_operators++; 427 + } 428 + 429 + num_operators++; 430 + break; 431 + } 432 + 433 + op = acpi_ps_get_depth_next(start_op, op); 434 + } 435 + 436 + acpi_os_printf 437 + ("Method contains: %X AML Opcodes - %X Operators, %X Operands\n", 438 + num_ops, num_operators, num_operands); 439 + 440 + acpi_os_printf 441 + ("Remaining to execute: %X AML Opcodes - %X Operators, %X Operands\n", 442 + num_remaining_ops, num_remaining_operators, 443 + num_remaining_operands); 444 + } 445 + 446 + /******************************************************************************* 447 + * 448 + * FUNCTION: acpi_db_display_locals 449 + * 450 + * PARAMETERS: None 451 + * 452 + * RETURN: None 453 + * 454 + * DESCRIPTION: Display all locals for the currently running control method 455 + * 456 + ******************************************************************************/ 457 + 458 + void acpi_db_display_locals(void) 459 + { 460 + struct acpi_walk_state *walk_state; 461 + 462 + walk_state = 
acpi_ds_get_current_walk_state(acpi_gbl_current_walk_list); 463 + if (!walk_state) { 464 + acpi_os_printf("There is no method currently executing\n"); 465 + return; 466 + } 467 + 468 + acpi_db_decode_locals(walk_state); 469 + } 470 + 471 + /******************************************************************************* 472 + * 473 + * FUNCTION: acpi_db_display_arguments 474 + * 475 + * PARAMETERS: None 476 + * 477 + * RETURN: None 478 + * 479 + * DESCRIPTION: Display all arguments for the currently running control method 480 + * 481 + ******************************************************************************/ 482 + 483 + void acpi_db_display_arguments(void) 484 + { 485 + struct acpi_walk_state *walk_state; 486 + 487 + walk_state = acpi_ds_get_current_walk_state(acpi_gbl_current_walk_list); 488 + if (!walk_state) { 489 + acpi_os_printf("There is no method currently executing\n"); 490 + return; 491 + } 492 + 493 + acpi_db_decode_arguments(walk_state); 494 + } 495 + 496 + /******************************************************************************* 497 + * 498 + * FUNCTION: acpi_db_display_results 499 + * 500 + * PARAMETERS: None 501 + * 502 + * RETURN: None 503 + * 504 + * DESCRIPTION: Display current contents of a method result stack 505 + * 506 + ******************************************************************************/ 507 + 508 + void acpi_db_display_results(void) 509 + { 510 + u32 i; 511 + struct acpi_walk_state *walk_state; 512 + union acpi_operand_object *obj_desc; 513 + u32 result_count = 0; 514 + struct acpi_namespace_node *node; 515 + union acpi_generic_state *frame; 516 + u32 index; /* Index onto current frame */ 517 + 518 + walk_state = acpi_ds_get_current_walk_state(acpi_gbl_current_walk_list); 519 + if (!walk_state) { 520 + acpi_os_printf("There is no method currently executing\n"); 521 + return; 522 + } 523 + 524 + obj_desc = walk_state->method_desc; 525 + node = walk_state->method_node; 526 + 527 + if (walk_state->results) { 528 + 
result_count = walk_state->result_count; 529 + } 530 + 531 + acpi_os_printf("Method [%4.4s] has %X stacked result objects\n", 532 + acpi_ut_get_node_name(node), result_count); 533 + 534 + /* From the top element of result stack */ 535 + 536 + frame = walk_state->results; 537 + index = (result_count - 1) % ACPI_RESULTS_FRAME_OBJ_NUM; 538 + 539 + for (i = 0; i < result_count; i++) { 540 + obj_desc = frame->results.obj_desc[index]; 541 + acpi_os_printf("Result%u: ", i); 542 + acpi_db_display_internal_object(obj_desc, walk_state); 543 + 544 + if (index == 0) { 545 + frame = frame->results.next; 546 + index = ACPI_RESULTS_FRAME_OBJ_NUM; 547 + } 548 + 549 + index--; 550 + } 551 + } 552 + 553 + /******************************************************************************* 554 + * 555 + * FUNCTION: acpi_db_display_calling_tree 556 + * 557 + * PARAMETERS: None 558 + * 559 + * RETURN: None 560 + * 561 + * DESCRIPTION: Display current calling tree of nested control methods 562 + * 563 + ******************************************************************************/ 564 + 565 + void acpi_db_display_calling_tree(void) 566 + { 567 + struct acpi_walk_state *walk_state; 568 + struct acpi_namespace_node *node; 569 + 570 + walk_state = acpi_ds_get_current_walk_state(acpi_gbl_current_walk_list); 571 + if (!walk_state) { 572 + acpi_os_printf("There is no method currently executing\n"); 573 + return; 574 + } 575 + 576 + node = walk_state->method_node; 577 + acpi_os_printf("Current Control Method Call Tree\n"); 578 + 579 + while (walk_state) { 580 + node = walk_state->method_node; 581 + acpi_os_printf(" [%4.4s]\n", acpi_ut_get_node_name(node)); 582 + 583 + walk_state = walk_state->next; 584 + } 585 + } 586 + 587 + /******************************************************************************* 588 + * 589 + * FUNCTION: acpi_db_display_object_type 590 + * 591 + * PARAMETERS: name - User entered NS node handle or name 592 + * 593 + * RETURN: None 594 + * 595 + * DESCRIPTION: Display 
type of an arbitrary NS node 596 + * 597 + ******************************************************************************/ 598 + 599 + void acpi_db_display_object_type(char *name) 600 + { 601 + struct acpi_namespace_node *node; 602 + struct acpi_device_info *info; 603 + acpi_status status; 604 + u32 i; 605 + 606 + node = acpi_db_convert_to_node(name); 607 + if (!node) { 608 + return; 609 + } 610 + 611 + status = acpi_get_object_info(ACPI_CAST_PTR(acpi_handle, node), &info); 612 + if (ACPI_FAILURE(status)) { 613 + acpi_os_printf("Could not get object info, %s\n", 614 + acpi_format_exception(status)); 615 + return; 616 + } 617 + 618 + if (info->valid & ACPI_VALID_ADR) { 619 + acpi_os_printf("ADR: %8.8X%8.8X, STA: %8.8X, Flags: %X\n", 620 + ACPI_FORMAT_UINT64(info->address), 621 + info->current_status, info->flags); 622 + } 623 + if (info->valid & ACPI_VALID_SXDS) { 624 + acpi_os_printf("S1D-%2.2X S2D-%2.2X S3D-%2.2X S4D-%2.2X\n", 625 + info->highest_dstates[0], 626 + info->highest_dstates[1], 627 + info->highest_dstates[2], 628 + info->highest_dstates[3]); 629 + } 630 + if (info->valid & ACPI_VALID_SXWS) { 631 + acpi_os_printf 632 + ("S0W-%2.2X S1W-%2.2X S2W-%2.2X S3W-%2.2X S4W-%2.2X\n", 633 + info->lowest_dstates[0], info->lowest_dstates[1], 634 + info->lowest_dstates[2], info->lowest_dstates[3], 635 + info->lowest_dstates[4]); 636 + } 637 + 638 + if (info->valid & ACPI_VALID_HID) { 639 + acpi_os_printf("HID: %s\n", info->hardware_id.string); 640 + } 641 + 642 + if (info->valid & ACPI_VALID_UID) { 643 + acpi_os_printf("UID: %s\n", info->unique_id.string); 644 + } 645 + 646 + if (info->valid & ACPI_VALID_SUB) { 647 + acpi_os_printf("SUB: %s\n", info->subsystem_id.string); 648 + } 649 + 650 + if (info->valid & ACPI_VALID_CID) { 651 + for (i = 0; i < info->compatible_id_list.count; i++) { 652 + acpi_os_printf("CID %u: %s\n", i, 653 + info->compatible_id_list.ids[i].string); 654 + } 655 + } 656 + 657 + ACPI_FREE(info); 658 + } 659 + 660 + 
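The routine above dereferences each acpi_device_info field only after checking the corresponding ACPI_VALID_* bit in info->valid. That guard pattern can be sketched stand-alone; the flag values and struct here are deliberately simplified stand-ins, not the real acpi_device_info layout from actypes.h:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative flag values; the real ACPI_VALID_* constants differ. */
#define VALID_ADR 0x0002
#define VALID_HID 0x0004
#define VALID_UID 0x0008

struct device_info {		/* simplified stand-in for acpi_device_info */
	uint32_t valid;		/* bitmask: which optional fields are set */
	uint64_t address;	/* meaningful only when VALID_ADR is set */
	const char *hid;	/* meaningful only when VALID_HID is set */
	const char *uid;	/* meaningful only when VALID_UID is set */
};

/* Print only the fields the firmware actually supplied, mirroring the
 * "if (info->valid & ACPI_VALID_X)" guards in acpi_db_display_object_type.
 * Returns the number of fields printed. */
static int print_valid_fields(const struct device_info *info)
{
	int n = 0;

	if (info->valid & VALID_ADR) {
		printf("ADR: %llx\n", (unsigned long long)info->address);
		n++;
	}
	if (info->valid & VALID_HID) {
		printf("HID: %s\n", info->hid);
		n++;
	}
	if (info->valid & VALID_UID) {
		printf("UID: %s\n", info->uid);
		n++;
	}
	return n;
}
```

The point of the bitmask is that unset fields may contain garbage or NULL, so they must never be read unless their validity bit is set.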
/*******************************************************************************
 *
 * FUNCTION:    acpi_db_display_result_object
 *
 * PARAMETERS:  obj_desc            - Object to be displayed
 *              walk_state          - Current walk state
 *
 * RETURN:      None
 *
 * DESCRIPTION: Display the result of an AML opcode
 *
 * Note: Currently only displays the result object if we are single stepping.
 * However, this output may be useful in other contexts and could be enabled
 * to do so if needed.
 *
 ******************************************************************************/

void
acpi_db_display_result_object(union acpi_operand_object *obj_desc,
			      struct acpi_walk_state *walk_state)
{

	/* Only display if single stepping */

	if (!acpi_gbl_cm_single_step) {
		return;
	}

	acpi_os_printf("ResultObj: ");
	acpi_db_display_internal_object(obj_desc, walk_state);
	acpi_os_printf("\n");
}

/*******************************************************************************
 *
 * FUNCTION:    acpi_db_display_argument_object
 *
 * PARAMETERS:  obj_desc            - Object to be displayed
 *              walk_state          - Current walk state
 *
 * RETURN:      None
 *
 * DESCRIPTION: Display an argument object for the current AML opcode
 *
 ******************************************************************************/

void
acpi_db_display_argument_object(union acpi_operand_object *obj_desc,
				struct acpi_walk_state *walk_state)
{

	if (!acpi_gbl_cm_single_step) {
		return;
	}

	acpi_os_printf("ArgObj: ");
	acpi_db_display_internal_object(obj_desc, walk_state);
}

#if (!ACPI_REDUCED_HARDWARE)
/*******************************************************************************
 *
 * FUNCTION:    acpi_db_display_gpes
 *
 * PARAMETERS:  None
 *
726 + * RETURN: None 727 + * 728 + * DESCRIPTION: Display the current GPE structures 729 + * 730 + ******************************************************************************/ 731 + 732 + void acpi_db_display_gpes(void) 733 + { 734 + struct acpi_gpe_block_info *gpe_block; 735 + struct acpi_gpe_xrupt_info *gpe_xrupt_info; 736 + struct acpi_gpe_event_info *gpe_event_info; 737 + struct acpi_gpe_register_info *gpe_register_info; 738 + char *gpe_type; 739 + struct acpi_gpe_notify_info *notify; 740 + u32 gpe_index; 741 + u32 block = 0; 742 + u32 i; 743 + u32 j; 744 + u32 count; 745 + char buffer[80]; 746 + struct acpi_buffer ret_buf; 747 + acpi_status status; 748 + 749 + ret_buf.length = sizeof(buffer); 750 + ret_buf.pointer = buffer; 751 + 752 + block = 0; 753 + 754 + /* Walk the GPE lists */ 755 + 756 + gpe_xrupt_info = acpi_gbl_gpe_xrupt_list_head; 757 + while (gpe_xrupt_info) { 758 + gpe_block = gpe_xrupt_info->gpe_block_list_head; 759 + while (gpe_block) { 760 + status = acpi_get_name(gpe_block->node, 761 + ACPI_FULL_PATHNAME_NO_TRAILING, 762 + &ret_buf); 763 + if (ACPI_FAILURE(status)) { 764 + acpi_os_printf 765 + ("Could not convert name to pathname\n"); 766 + } 767 + 768 + if (gpe_block->node == acpi_gbl_fadt_gpe_device) { 769 + gpe_type = "FADT-defined GPE block"; 770 + } else { 771 + gpe_type = "GPE Block Device"; 772 + } 773 + 774 + acpi_os_printf 775 + ("\nBlock %u - Info %p DeviceNode %p [%s] - %s\n", 776 + block, gpe_block, gpe_block->node, buffer, 777 + gpe_type); 778 + 779 + acpi_os_printf(" Registers: %u (%u GPEs)\n", 780 + gpe_block->register_count, 781 + gpe_block->gpe_count); 782 + 783 + acpi_os_printf 784 + (" GPE range: 0x%X to 0x%X on interrupt %u\n", 785 + gpe_block->block_base_number, 786 + gpe_block->block_base_number + 787 + (gpe_block->gpe_count - 1), 788 + gpe_xrupt_info->interrupt_number); 789 + 790 + acpi_os_printf 791 + (" RegisterInfo: %p Status %8.8X%8.8X Enable %8.8X%8.8X\n", 792 + gpe_block->register_info, 793 + 
ACPI_FORMAT_UINT64(gpe_block->register_info-> 794 + status_address.address), 795 + ACPI_FORMAT_UINT64(gpe_block->register_info-> 796 + enable_address.address)); 797 + 798 + acpi_os_printf(" EventInfo: %p\n", 799 + gpe_block->event_info); 800 + 801 + /* Examine each GPE Register within the block */ 802 + 803 + for (i = 0; i < gpe_block->register_count; i++) { 804 + gpe_register_info = 805 + &gpe_block->register_info[i]; 806 + 807 + acpi_os_printf(" Reg %u: (GPE %.2X-%.2X) " 808 + "RunEnable %2.2X WakeEnable %2.2X" 809 + " Status %8.8X%8.8X Enable %8.8X%8.8X\n", 810 + i, 811 + gpe_register_info-> 812 + base_gpe_number, 813 + gpe_register_info-> 814 + base_gpe_number + 815 + (ACPI_GPE_REGISTER_WIDTH - 1), 816 + gpe_register_info-> 817 + enable_for_run, 818 + gpe_register_info-> 819 + enable_for_wake, 820 + ACPI_FORMAT_UINT64 821 + (gpe_register_info-> 822 + status_address.address), 823 + ACPI_FORMAT_UINT64 824 + (gpe_register_info-> 825 + enable_address.address)); 826 + 827 + /* Now look at the individual GPEs in this byte register */ 828 + 829 + for (j = 0; j < ACPI_GPE_REGISTER_WIDTH; j++) { 830 + gpe_index = 831 + (i * ACPI_GPE_REGISTER_WIDTH) + j; 832 + gpe_event_info = 833 + &gpe_block->event_info[gpe_index]; 834 + 835 + if (ACPI_GPE_DISPATCH_TYPE 836 + (gpe_event_info->flags) == 837 + ACPI_GPE_DISPATCH_NONE) { 838 + 839 + /* This GPE is not used (no method or handler), ignore it */ 840 + 841 + continue; 842 + } 843 + 844 + acpi_os_printf 845 + (" GPE %.2X: %p RunRefs %2.2X Flags %2.2X (", 846 + gpe_block->block_base_number + 847 + gpe_index, gpe_event_info, 848 + gpe_event_info->runtime_count, 849 + gpe_event_info->flags); 850 + 851 + /* Decode the flags byte */ 852 + 853 + if (gpe_event_info-> 854 + flags & ACPI_GPE_LEVEL_TRIGGERED) { 855 + acpi_os_printf("Level, "); 856 + } else { 857 + acpi_os_printf("Edge, "); 858 + } 859 + 860 + if (gpe_event_info-> 861 + flags & ACPI_GPE_CAN_WAKE) { 862 + acpi_os_printf("CanWake, "); 863 + } else { 864 + 
acpi_os_printf("RunOnly, "); 865 + } 866 + 867 + switch (ACPI_GPE_DISPATCH_TYPE 868 + (gpe_event_info->flags)) { 869 + case ACPI_GPE_DISPATCH_NONE: 870 + 871 + acpi_os_printf("NotUsed"); 872 + break; 873 + 874 + case ACPI_GPE_DISPATCH_METHOD: 875 + 876 + acpi_os_printf("Method"); 877 + break; 878 + 879 + case ACPI_GPE_DISPATCH_HANDLER: 880 + 881 + acpi_os_printf("Handler"); 882 + break; 883 + 884 + case ACPI_GPE_DISPATCH_NOTIFY: 885 + 886 + count = 0; 887 + notify = 888 + gpe_event_info->dispatch. 889 + notify_list; 890 + while (notify) { 891 + count++; 892 + notify = notify->next; 893 + } 894 + 895 + acpi_os_printf 896 + ("Implicit Notify on %u devices", 897 + count); 898 + break; 899 + 900 + case ACPI_GPE_DISPATCH_RAW_HANDLER: 901 + 902 + acpi_os_printf("RawHandler"); 903 + break; 904 + 905 + default: 906 + 907 + acpi_os_printf("UNKNOWN: %X", 908 + ACPI_GPE_DISPATCH_TYPE 909 + (gpe_event_info-> 910 + flags)); 911 + break; 912 + } 913 + 914 + acpi_os_printf(")\n"); 915 + } 916 + } 917 + 918 + block++; 919 + gpe_block = gpe_block->next; 920 + } 921 + 922 + gpe_xrupt_info = gpe_xrupt_info->next; 923 + } 924 + } 925 + #endif /* !ACPI_REDUCED_HARDWARE */ 926 + 927 + /******************************************************************************* 928 + * 929 + * FUNCTION: acpi_db_display_handlers 930 + * 931 + * PARAMETERS: None 932 + * 933 + * RETURN: None 934 + * 935 + * DESCRIPTION: Display the currently installed global handlers 936 + * 937 + ******************************************************************************/ 938 + 939 + void acpi_db_display_handlers(void) 940 + { 941 + union acpi_operand_object *obj_desc; 942 + union acpi_operand_object *handler_obj; 943 + acpi_adr_space_type space_id; 944 + u32 i; 945 + 946 + /* Operation region handlers */ 947 + 948 + acpi_os_printf("\nOperation Region Handlers at the namespace root:\n"); 949 + 950 + obj_desc = acpi_ns_get_attached_object(acpi_gbl_root_node); 951 + if (obj_desc) { 952 + for (i = 0; i < 
ACPI_ARRAY_LENGTH(acpi_gbl_space_id_list); i++) { 953 + space_id = acpi_gbl_space_id_list[i]; 954 + handler_obj = obj_desc->device.handler; 955 + 956 + acpi_os_printf(ACPI_PREDEFINED_PREFIX, 957 + acpi_ut_get_region_name((u8)space_id), 958 + space_id); 959 + 960 + while (handler_obj) { 961 + if (acpi_gbl_space_id_list[i] == 962 + handler_obj->address_space.space_id) { 963 + acpi_os_printf 964 + (ACPI_HANDLER_PRESENT_STRING, 965 + (handler_obj->address_space. 966 + handler_flags & 967 + ACPI_ADDR_HANDLER_DEFAULT_INSTALLED) 968 + ? "Default" : "User", 969 + handler_obj->address_space. 970 + handler); 971 + 972 + goto found_handler; 973 + } 974 + 975 + handler_obj = handler_obj->address_space.next; 976 + } 977 + 978 + /* There is no handler for this space_id */ 979 + 980 + acpi_os_printf("None\n"); 981 + 982 + found_handler: ; 983 + } 984 + 985 + /* Find all handlers for user-defined space_IDs */ 986 + 987 + handler_obj = obj_desc->device.handler; 988 + while (handler_obj) { 989 + if (handler_obj->address_space.space_id >= 990 + ACPI_USER_REGION_BEGIN) { 991 + acpi_os_printf(ACPI_PREDEFINED_PREFIX, 992 + "User-defined ID", 993 + handler_obj->address_space. 994 + space_id); 995 + acpi_os_printf(ACPI_HANDLER_PRESENT_STRING, 996 + (handler_obj->address_space. 997 + handler_flags & 998 + ACPI_ADDR_HANDLER_DEFAULT_INSTALLED) 999 + ? "Default" : "User", 1000 + handler_obj->address_space. 1001 + handler); 1002 + } 1003 + 1004 + handler_obj = handler_obj->address_space.next; 1005 + } 1006 + } 1007 + #if (!ACPI_REDUCED_HARDWARE) 1008 + 1009 + /* Fixed event handlers */ 1010 + 1011 + acpi_os_printf("\nFixed Event Handlers:\n"); 1012 + 1013 + for (i = 0; i < ACPI_NUM_FIXED_EVENTS; i++) { 1014 + acpi_os_printf(ACPI_PREDEFINED_PREFIX, 1015 + acpi_ut_get_event_name(i), i); 1016 + if (acpi_gbl_fixed_event_handlers[i].handler) { 1017 + acpi_os_printf(ACPI_HANDLER_PRESENT_STRING, "User", 1018 + acpi_gbl_fixed_event_handlers[i]. 
1019 + handler); 1020 + } else { 1021 + acpi_os_printf(ACPI_HANDLER_NOT_PRESENT_STRING, "None"); 1022 + } 1023 + } 1024 + 1025 + #endif /* !ACPI_REDUCED_HARDWARE */ 1026 + 1027 + /* Miscellaneous global handlers */ 1028 + 1029 + acpi_os_printf("\nMiscellaneous Global Handlers:\n"); 1030 + 1031 + for (i = 0; i < ACPI_ARRAY_LENGTH(acpi_gbl_handler_list); i++) { 1032 + acpi_os_printf(ACPI_HANDLER_NAME_STRING, 1033 + acpi_gbl_handler_list[i].name); 1034 + 1035 + if (acpi_gbl_handler_list[i].handler) { 1036 + acpi_os_printf(ACPI_HANDLER_PRESENT_STRING, "User", 1037 + acpi_gbl_handler_list[i].handler); 1038 + } else { 1039 + acpi_os_printf(ACPI_HANDLER_NOT_PRESENT_STRING, "None"); 1040 + } 1041 + } 1042 + 1043 + /* Other handlers that are installed throughout the namespace */ 1044 + 1045 + acpi_os_printf("\nOperation Region Handlers for specific devices:\n"); 1046 + 1047 + (void)acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT, 1048 + ACPI_UINT32_MAX, 1049 + acpi_db_display_non_root_handlers, NULL, NULL, 1050 + NULL); 1051 + } 1052 + 1053 + /******************************************************************************* 1054 + * 1055 + * FUNCTION: acpi_db_display_non_root_handlers 1056 + * 1057 + * PARAMETERS: acpi_walk_callback 1058 + * 1059 + * RETURN: Status 1060 + * 1061 + * DESCRIPTION: Display information about all handlers installed for a 1062 + * device object. 
1063 + * 1064 + ******************************************************************************/ 1065 + 1066 + static acpi_status 1067 + acpi_db_display_non_root_handlers(acpi_handle obj_handle, 1068 + u32 nesting_level, 1069 + void *context, void **return_value) 1070 + { 1071 + struct acpi_namespace_node *node = 1072 + ACPI_CAST_PTR(struct acpi_namespace_node, obj_handle); 1073 + union acpi_operand_object *obj_desc; 1074 + union acpi_operand_object *handler_obj; 1075 + char *pathname; 1076 + 1077 + obj_desc = acpi_ns_get_attached_object(node); 1078 + if (!obj_desc) { 1079 + return (AE_OK); 1080 + } 1081 + 1082 + pathname = acpi_ns_get_external_pathname(node); 1083 + if (!pathname) { 1084 + return (AE_OK); 1085 + } 1086 + 1087 + /* Display all handlers associated with this device */ 1088 + 1089 + handler_obj = obj_desc->device.handler; 1090 + while (handler_obj) { 1091 + acpi_os_printf(ACPI_PREDEFINED_PREFIX, 1092 + acpi_ut_get_region_name((u8)handler_obj-> 1093 + address_space.space_id), 1094 + handler_obj->address_space.space_id); 1095 + 1096 + acpi_os_printf(ACPI_HANDLER_PRESENT_STRING2, 1097 + (handler_obj->address_space.handler_flags & 1098 + ACPI_ADDR_HANDLER_DEFAULT_INSTALLED) ? "Default" 1099 + : "User", handler_obj->address_space.handler); 1100 + 1101 + acpi_os_printf(" Device Name: %s (%p)\n", pathname, node); 1102 + 1103 + handler_obj = handler_obj->address_space.next; 1104 + } 1105 + 1106 + ACPI_FREE(pathname); 1107 + return (AE_OK); 1108 + }
drivers/acpi/acpica/dbexec.c
/*******************************************************************************
 *
 * Module Name: dbexec - debugger control method execution
 *
 ******************************************************************************/

/*
 * Copyright (C) 2000 - 2015, Intel Corp.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions, and the following disclaimer,
 *    without modification.
 * 2. Redistributions in binary form must reproduce at minimum a disclaimer
 *    substantially similar to the "NO WARRANTY" disclaimer below
 *    ("Disclaimer") and any redistribution must be conditioned upon
 *    including a substantially similar Disclaimer requirement for further
 *    binary redistribution.
 * 3. Neither the names of the above-listed copyright holders nor the names
 *    of any contributors may be used to endorse or promote products derived
 *    from this software without specific prior written permission.
 *
 * Alternatively, this software may be distributed under the terms of the
 * GNU General Public License ("GPL") version 2 as published by the Free
 * Software Foundation.
 *
 * NO WARRANTY
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
 * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGES.
 */

#include <acpi/acpi.h>
#include "accommon.h"
#include "acdebug.h"
#include "acnamesp.h"

#define _COMPONENT          ACPI_CA_DEBUGGER
ACPI_MODULE_NAME("dbexec")

static struct acpi_db_method_info acpi_gbl_db_method_info;

/* Local prototypes */

static acpi_status
acpi_db_execute_method(struct acpi_db_method_info *info,
		       struct acpi_buffer *return_obj);

static acpi_status acpi_db_execute_setup(struct acpi_db_method_info *info);

static u32 acpi_db_get_outstanding_allocations(void);

static void ACPI_SYSTEM_XFACE acpi_db_method_thread(void *context);

static acpi_status
acpi_db_execution_walk(acpi_handle obj_handle,
		       u32 nesting_level, void *context, void **return_value);

/*******************************************************************************
 *
 * FUNCTION:    acpi_db_delete_objects
 *
 * PARAMETERS:  count           - Count of objects in the list
 *              objects         - Array of ACPI_OBJECTs to be deleted
 *
 * RETURN:      None
 *
 * DESCRIPTION: Delete a list of ACPI_OBJECTS. Handles packages and nested
 *              packages via recursion.
 *
 ******************************************************************************/

void acpi_db_delete_objects(u32 count, union acpi_object *objects)
{
	u32 i;

	for (i = 0; i < count; i++) {
		switch (objects[i].type) {
		case ACPI_TYPE_BUFFER:

			ACPI_FREE(objects[i].buffer.pointer);
			break;

		case ACPI_TYPE_PACKAGE:

			/* Recursive call to delete package elements */

			acpi_db_delete_objects(objects[i].package.count,
					       objects[i].package.elements);

			/* Free the elements array */

			ACPI_FREE(objects[i].package.elements);
			break;

		default:

			break;
		}
	}
}

/*******************************************************************************
 *
 * FUNCTION:    acpi_db_execute_method
 *
 * PARAMETERS:  info            - Valid info segment
 *              return_obj      - Where to put return object
 *
 * RETURN:      Status
 *
 * DESCRIPTION: Execute a control method.
 *
 ******************************************************************************/

static acpi_status
acpi_db_execute_method(struct acpi_db_method_info *info,
		       struct acpi_buffer *return_obj)
{
	acpi_status status;
	struct acpi_object_list param_objects;
	union acpi_object params[ACPI_DEBUGGER_MAX_ARGS + 1];
	u32 i;

	ACPI_FUNCTION_TRACE(db_execute_method);

	if (acpi_gbl_db_output_to_file && !acpi_dbg_level) {
		acpi_os_printf("Warning: debug output is not enabled!\n");
	}

	param_objects.count = 0;
	param_objects.pointer = NULL;

	/* Pass through any command-line arguments */

	if (info->args && info->args[0]) {

		/* Get arguments passed on the command line */

		for (i = 0; (info->args[i] && *(info->args[i])); i++) {

			/* Convert input string (token) to an actual union acpi_object */

			status = acpi_db_convert_to_object(info->types[i],
							   info->args[i],
							   &params[i]);
			if (ACPI_FAILURE(status)) {
				ACPI_EXCEPTION((AE_INFO, status,
						"While parsing method arguments"));
				goto cleanup;
			}
		}

		param_objects.count = i;
		param_objects.pointer = params;
	}

	/* Prepare for a return object of arbitrary size */

	return_obj->pointer = acpi_gbl_db_buffer;
	return_obj->length = ACPI_DEBUG_BUFFER_SIZE;

	/* Do the actual method execution */

	acpi_gbl_method_executing = TRUE;
	status = acpi_evaluate_object(NULL, info->pathname,
				      &param_objects, return_obj);

	acpi_gbl_cm_single_step = FALSE;
	acpi_gbl_method_executing = FALSE;

	if (ACPI_FAILURE(status)) {
		ACPI_EXCEPTION((AE_INFO, status,
				"while executing %s from debugger",
				info->pathname));

		if (status == AE_BUFFER_OVERFLOW) {
			ACPI_ERROR((AE_INFO,
				    "Possible overflow of internal debugger "
				    "buffer (size 0x%X needed 0x%X)",
				    ACPI_DEBUG_BUFFER_SIZE,
				    (u32)return_obj->length));
		}
	}

cleanup:
	acpi_db_delete_objects(param_objects.count, params);
	return_ACPI_STATUS(status);
}

/*******************************************************************************
 *
 * FUNCTION:    acpi_db_execute_setup
 *
 * PARAMETERS:  info            - Valid method info
 *
 * RETURN:      Status
 *
 * DESCRIPTION: Setup info segment prior to method execution
 *
 ******************************************************************************/

static acpi_status acpi_db_execute_setup(struct acpi_db_method_info *info)
{
	acpi_status status;

	ACPI_FUNCTION_NAME(db_execute_setup);

	/* Concatenate the current scope to the supplied name */

	info->pathname[0] = 0;
	if ((info->name[0] != '\\') && (info->name[0] != '/')) {
		if (acpi_ut_safe_strcat(info->pathname, sizeof(info->pathname),
					acpi_gbl_db_scope_buf)) {
			status = AE_BUFFER_OVERFLOW;
			goto error_exit;
		}
	}

	if (acpi_ut_safe_strcat(info->pathname, sizeof(info->pathname),
				info->name)) {
		status = AE_BUFFER_OVERFLOW;
		goto error_exit;
	}

	acpi_db_prep_namestring(info->pathname);

	acpi_db_set_output_destination(ACPI_DB_DUPLICATE_OUTPUT);
	acpi_os_printf("Evaluating %s\n", info->pathname);

	if (info->flags & EX_SINGLE_STEP) {
		acpi_gbl_cm_single_step = TRUE;
		acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT);
	} else {
		/* No single step, allow redirection to a file */

		acpi_db_set_output_destination(ACPI_DB_REDIRECTABLE_OUTPUT);
	}

	return (AE_OK);

error_exit:

	ACPI_EXCEPTION((AE_INFO, status, "During setup for method execution"));
	return (status);
}

#ifdef ACPI_DBG_TRACK_ALLOCATIONS
u32 acpi_db_get_cache_info(struct acpi_memory_list *cache)
{

	return (cache->total_allocated - cache->total_freed -
		cache->current_depth);
}
#endif

/*******************************************************************************
 *
 * FUNCTION:    acpi_db_get_outstanding_allocations
 *
 * PARAMETERS:  None
 *
 * RETURN:      Current global allocation count minus cache entries
 *
 * DESCRIPTION: Determine the current number of "outstanding" allocations --
 *              those allocations that have not been freed and also are not
 *              in one of the various object caches.
 *
 ******************************************************************************/

static u32 acpi_db_get_outstanding_allocations(void)
{
	u32 outstanding = 0;

#ifdef ACPI_DBG_TRACK_ALLOCATIONS

	outstanding += acpi_db_get_cache_info(acpi_gbl_state_cache);
	outstanding += acpi_db_get_cache_info(acpi_gbl_ps_node_cache);
	outstanding += acpi_db_get_cache_info(acpi_gbl_ps_node_ext_cache);
	outstanding += acpi_db_get_cache_info(acpi_gbl_operand_cache);
#endif

	return (outstanding);
}

/*******************************************************************************
 *
 * FUNCTION:    acpi_db_execution_walk
 *
 * PARAMETERS:  WALK_CALLBACK
 *
 * RETURN:      Status
 *
 * DESCRIPTION: Execute a control method. Name is relative to the current
 *              scope.
 *
 ******************************************************************************/

static acpi_status
acpi_db_execution_walk(acpi_handle obj_handle,
		       u32 nesting_level, void *context, void **return_value)
{
	union acpi_operand_object *obj_desc;
	struct acpi_namespace_node *node =
	    (struct acpi_namespace_node *)obj_handle;
	struct acpi_buffer return_obj;
	acpi_status status;

	obj_desc = acpi_ns_get_attached_object(node);
	if (obj_desc->method.param_count) {
		return (AE_OK);
	}

	return_obj.pointer = NULL;
	return_obj.length = ACPI_ALLOCATE_BUFFER;

	acpi_ns_print_node_pathname(node, "Evaluating");

	/* Do the actual method execution */

	acpi_os_printf("\n");
	acpi_gbl_method_executing = TRUE;

	status = acpi_evaluate_object(node, NULL, NULL, &return_obj);

	acpi_os_printf("Evaluation of [%4.4s] returned %s\n",
		       acpi_ut_get_node_name(node),
		       acpi_format_exception(status));

	acpi_gbl_method_executing = FALSE;
	return (AE_OK);
}

/*******************************************************************************
 *
 * FUNCTION:    acpi_db_execute
 *
 * PARAMETERS:  name            - Name of method to execute
 *              args            - Parameters to the method
 *              types           -
 *              flags           - single step/no single step
 *
 * RETURN:      None
 *
 * DESCRIPTION: Execute a control method. Name is relative to the current
 *              scope.
 *
 ******************************************************************************/

void
acpi_db_execute(char *name, char **args, acpi_object_type * types, u32 flags)
{
	acpi_status status;
	struct acpi_buffer return_obj;
	char *name_string;

#ifdef ACPI_DEBUG_OUTPUT
	u32 previous_allocations;
	u32 allocations;
#endif

	/*
	 * Allow only one execution from the debugger at a time, or
	 * single-step execution will deadlock on the interpreter mutexes.
	 */
	if (acpi_gbl_method_executing) {
		acpi_os_printf("Only one debugger execution is allowed.\n");
		return;
	}
#ifdef ACPI_DEBUG_OUTPUT
	/* Memory allocation tracking */

	previous_allocations = acpi_db_get_outstanding_allocations();
#endif

	if (*name == '*') {
		(void)acpi_walk_namespace(ACPI_TYPE_METHOD, ACPI_ROOT_OBJECT,
					  ACPI_UINT32_MAX,
					  acpi_db_execution_walk, NULL, NULL,
					  NULL);
		return;
	} else {
		name_string = ACPI_ALLOCATE(strlen(name) + 1);
		if (!name_string) {
			return;
		}

		memset(&acpi_gbl_db_method_info, 0,
		       sizeof(struct acpi_db_method_info));

		strcpy(name_string, name);
		acpi_ut_strupr(name_string);
		acpi_gbl_db_method_info.name = name_string;
		acpi_gbl_db_method_info.args = args;
		acpi_gbl_db_method_info.types = types;
		acpi_gbl_db_method_info.flags = flags;

		return_obj.pointer = NULL;
		return_obj.length = ACPI_ALLOCATE_BUFFER;

		status = acpi_db_execute_setup(&acpi_gbl_db_method_info);
		if (ACPI_FAILURE(status)) {
			ACPI_FREE(name_string);
			return;
		}

		/* Get the NS node, determines existence also */

		status = acpi_get_handle(NULL, acpi_gbl_db_method_info.pathname,
					 &acpi_gbl_db_method_info.method);
		if (ACPI_SUCCESS(status)) {
			status =
			    acpi_db_execute_method(&acpi_gbl_db_method_info,
						   &return_obj);
		}
		ACPI_FREE(name_string);
	}

	/*
	 * Allow any handlers in separate threads to complete.
	 * (Such as Notify handlers invoked from AML executed above).
	 */
	acpi_os_sleep((u64)10);

#ifdef ACPI_DEBUG_OUTPUT

	/* Memory allocation tracking */

	allocations =
	    acpi_db_get_outstanding_allocations() - previous_allocations;

	acpi_db_set_output_destination(ACPI_DB_DUPLICATE_OUTPUT);

	if (allocations > 0) {
		acpi_os_printf
		    ("0x%X Outstanding allocations after evaluation of %s\n",
		     allocations, acpi_gbl_db_method_info.pathname);
	}
#endif

	if (ACPI_FAILURE(status)) {
		acpi_os_printf("Evaluation of %s failed with status %s\n",
			       acpi_gbl_db_method_info.pathname,
			       acpi_format_exception(status));
	} else {
		/* Display a return object, if any */

		if (return_obj.length) {
			acpi_os_printf("Evaluation of %s returned object %p, "
				       "external buffer length %X\n",
				       acpi_gbl_db_method_info.pathname,
				       return_obj.pointer,
				       (u32)return_obj.length);

			acpi_db_dump_external_object(return_obj.pointer, 1);

			/* Dump a _PLD buffer if present */

			if (ACPI_COMPARE_NAME
			    ((ACPI_CAST_PTR
			      (struct acpi_namespace_node,
			       acpi_gbl_db_method_info.method)->name.ascii),
			     METHOD_NAME__PLD)) {
				acpi_db_dump_pld_buffer(return_obj.pointer);
			}
		} else {
			acpi_os_printf
			    ("No object was returned from evaluation of %s\n",
			     acpi_gbl_db_method_info.pathname);
		}
	}

	acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT);
}

/*******************************************************************************
 *
 * FUNCTION:    acpi_db_method_thread
 *
 * PARAMETERS:  context         - Execution info segment
 *
 * RETURN:      None
 *
 * DESCRIPTION: Debugger method execution thread. Executes the control method
 *              described by the info segment, num_loops times.
 *
 ******************************************************************************/

static void ACPI_SYSTEM_XFACE acpi_db_method_thread(void *context)
{
	acpi_status status;
	struct acpi_db_method_info *info = context;
	struct acpi_db_method_info local_info;
	u32 i;
	u8 allow;
	struct acpi_buffer return_obj;

	/*
	 * acpi_gbl_db_method_info.Arguments will be passed as method arguments.
	 * Prevent acpi_gbl_db_method_info from being modified by multiple threads
	 * concurrently.
	 *
	 * Note: The arguments we are passing are used by the ASL test suite
	 * (aslts). Do not change them without updating the tests.
	 */
	(void)acpi_os_wait_semaphore(info->info_gate, 1, ACPI_WAIT_FOREVER);

	if (info->init_args) {
		acpi_db_uint32_to_hex_string(info->num_created,
					     info->index_of_thread_str);
		acpi_db_uint32_to_hex_string((u32)acpi_os_get_thread_id(),
					     info->id_of_thread_str);
	}

	if (info->threads && (info->num_created < info->num_threads)) {
		info->threads[info->num_created++] = acpi_os_get_thread_id();
	}

	local_info = *info;
	local_info.args = local_info.arguments;
	local_info.arguments[0] = local_info.num_threads_str;
	local_info.arguments[1] = local_info.id_of_thread_str;
	local_info.arguments[2] = local_info.index_of_thread_str;
	local_info.arguments[3] = NULL;

	local_info.types = local_info.arg_types;

	(void)acpi_os_signal_semaphore(info->info_gate, 1);

	for (i = 0; i < info->num_loops; i++) {
		status = acpi_db_execute_method(&local_info, &return_obj);
		if (ACPI_FAILURE(status)) {
			acpi_os_printf
			    ("%s During evaluation of %s at iteration %X\n",
			     acpi_format_exception(status), info->pathname, i);
			if (status == AE_ABORT_METHOD) {
				break;
			}
		}
#if 0
		if ((i % 100) == 0) {
			acpi_os_printf("%u loops, Thread 0x%x\n",
				       i, acpi_os_get_thread_id());
		}

		if (return_obj.length) {
			acpi_os_printf
			    ("Evaluation of %s returned object %p Buflen %X\n",
			     info->pathname, return_obj.pointer,
			     (u32)return_obj.length);
			acpi_db_dump_external_object(return_obj.pointer, 1);
		}
#endif
	}

	/* Signal our completion */

	allow = 0;
	(void)acpi_os_wait_semaphore(info->thread_complete_gate,
				     1, ACPI_WAIT_FOREVER);
	info->num_completed++;

	if (info->num_completed == info->num_threads) {

		/* Do signal for main thread once only */
		allow = 1;
	}

	(void)acpi_os_signal_semaphore(info->thread_complete_gate, 1);

	if (allow) {
		status = acpi_os_signal_semaphore(info->main_thread_gate, 1);
		if (ACPI_FAILURE(status)) {
			acpi_os_printf
			    ("Could not signal debugger thread sync semaphore, %s\n",
			     acpi_format_exception(status));
		}
	}
}

/*******************************************************************************
 *
 * FUNCTION:    acpi_db_create_execution_threads
 *
 * PARAMETERS:  num_threads_arg - Number of threads to create
 *              num_loops_arg   - Loop count for the thread(s)
 *              method_name_arg - Control method to execute
 *
 * RETURN:      None
 *
 * DESCRIPTION: Create threads to execute method(s)
 *
 ******************************************************************************/

void
acpi_db_create_execution_threads(char *num_threads_arg,
				 char *num_loops_arg, char *method_name_arg)
{
	acpi_status status;
	u32 num_threads;
	u32 num_loops;
	u32 i;
	u32 size;
	acpi_mutex main_thread_gate;
	acpi_mutex thread_complete_gate;
	acpi_mutex info_gate;

	/* Get the arguments */

	num_threads = strtoul(num_threads_arg, NULL, 0);
	num_loops = strtoul(num_loops_arg, NULL, 0);

	if (!num_threads || !num_loops) {
		acpi_os_printf("Bad argument: Threads %X, Loops %X\n",
			       num_threads, num_loops);
		return;
	}

	/*
	 * Create the semaphore for synchronization of
	 * the created threads with the main thread.
	 */
	status = acpi_os_create_semaphore(1, 0, &main_thread_gate);
	if (ACPI_FAILURE(status)) {
		acpi_os_printf("Could not create semaphore for "
			       "synchronization with the main thread, %s\n",
			       acpi_format_exception(status));
		return;
	}

	/*
	 * Create the semaphore for synchronization
	 * between the created threads.
	 */
	status = acpi_os_create_semaphore(1, 1, &thread_complete_gate);
	if (ACPI_FAILURE(status)) {
		acpi_os_printf("Could not create semaphore for "
			       "synchronization between the created threads, %s\n",
			       acpi_format_exception(status));

		(void)acpi_os_delete_semaphore(main_thread_gate);
		return;
	}

	status = acpi_os_create_semaphore(1, 1, &info_gate);
	if (ACPI_FAILURE(status)) {
		acpi_os_printf("Could not create semaphore for "
			       "synchronization of AcpiGbl_DbMethodInfo, %s\n",
			       acpi_format_exception(status));

		(void)acpi_os_delete_semaphore(thread_complete_gate);
		(void)acpi_os_delete_semaphore(main_thread_gate);
		return;
	}

	memset(&acpi_gbl_db_method_info, 0, sizeof(struct acpi_db_method_info));

	/* Array to store IDs of threads */

	acpi_gbl_db_method_info.num_threads = num_threads;
	size = sizeof(acpi_thread_id) * acpi_gbl_db_method_info.num_threads;

	acpi_gbl_db_method_info.threads = acpi_os_allocate(size);
	if (acpi_gbl_db_method_info.threads == NULL) {
		acpi_os_printf("No memory for thread IDs array\n");
		(void)acpi_os_delete_semaphore(main_thread_gate);
		(void)acpi_os_delete_semaphore(thread_complete_gate);
		(void)acpi_os_delete_semaphore(info_gate);
		return;
	}
	memset(acpi_gbl_db_method_info.threads, 0, size);

	/* Setup the context to be passed to each thread */

	acpi_gbl_db_method_info.name = method_name_arg;
	acpi_gbl_db_method_info.flags = 0;
	acpi_gbl_db_method_info.num_loops = num_loops;
	acpi_gbl_db_method_info.main_thread_gate = main_thread_gate;
	acpi_gbl_db_method_info.thread_complete_gate = thread_complete_gate;
	acpi_gbl_db_method_info.info_gate = info_gate;

	/* Init arguments to be passed to method */

	acpi_gbl_db_method_info.init_args = 1;
	acpi_gbl_db_method_info.args = acpi_gbl_db_method_info.arguments;
	acpi_gbl_db_method_info.arguments[0] =
	    acpi_gbl_db_method_info.num_threads_str;
	acpi_gbl_db_method_info.arguments[1] =
	    acpi_gbl_db_method_info.id_of_thread_str;
	acpi_gbl_db_method_info.arguments[2] =
	    acpi_gbl_db_method_info.index_of_thread_str;
	acpi_gbl_db_method_info.arguments[3] = NULL;

	acpi_gbl_db_method_info.types = acpi_gbl_db_method_info.arg_types;
	acpi_gbl_db_method_info.arg_types[0] = ACPI_TYPE_INTEGER;
	acpi_gbl_db_method_info.arg_types[1] = ACPI_TYPE_INTEGER;
	acpi_gbl_db_method_info.arg_types[2] = ACPI_TYPE_INTEGER;

	acpi_db_uint32_to_hex_string(num_threads,
				     acpi_gbl_db_method_info.num_threads_str);

	status = acpi_db_execute_setup(&acpi_gbl_db_method_info);
	if (ACPI_FAILURE(status)) {
		goto cleanup_and_exit;
	}

	/* Get the NS node, determines existence also */

	status = acpi_get_handle(NULL, acpi_gbl_db_method_info.pathname,
				 &acpi_gbl_db_method_info.method);
	if (ACPI_FAILURE(status)) {
		acpi_os_printf("%s Could not get handle for %s\n",
			       acpi_format_exception(status),
			       acpi_gbl_db_method_info.pathname);
		goto cleanup_and_exit;
	}

	/* Create the threads */

	acpi_os_printf("Creating %X threads to execute %X times each\n",
		       num_threads, num_loops);

	for (i = 0; i < num_threads; i++) {
		status =
		    acpi_os_execute(OSL_DEBUGGER_EXEC_THREAD,
				    acpi_db_method_thread,
				    &acpi_gbl_db_method_info);
		if (ACPI_FAILURE(status)) {
			break;
		}
	}

	/* Wait for all threads to complete */

	(void)acpi_os_wait_semaphore(main_thread_gate, 1, ACPI_WAIT_FOREVER);

	acpi_db_set_output_destination(ACPI_DB_DUPLICATE_OUTPUT);
	acpi_os_printf("All threads (%X) have completed\n", num_threads);
	acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT);

cleanup_and_exit:

	/* Cleanup and exit */

	(void)acpi_os_delete_semaphore(main_thread_gate);
	(void)acpi_os_delete_semaphore(thread_complete_gate);
	(void)acpi_os_delete_semaphore(info_gate);

	acpi_os_free(acpi_gbl_db_method_info.threads);
	acpi_gbl_db_method_info.threads = NULL;
}
drivers/acpi/acpica/dbfileio.c | 256 +
··· 1 + /******************************************************************************* 2 + * 3 + * Module Name: dbfileio - Debugger file I/O commands. These can't usually 4 + * be used when running the debugger in Ring 0 (Kernel mode) 5 + * 6 + ******************************************************************************/ 7 + 8 + /* 9 + * Copyright (C) 2000 - 2015, Intel Corp. 10 + * All rights reserved. 11 + * 12 + * Redistribution and use in source and binary forms, with or without 13 + * modification, are permitted provided that the following conditions 14 + * are met: 15 + * 1. Redistributions of source code must retain the above copyright 16 + * notice, this list of conditions, and the following disclaimer, 17 + * without modification. 18 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 19 + * substantially similar to the "NO WARRANTY" disclaimer below 20 + * ("Disclaimer") and any redistribution must be conditioned upon 21 + * including a substantially similar Disclaimer requirement for further 22 + * binary redistribution. 23 + * 3. Neither the names of the above-listed copyright holders nor the names 24 + * of any contributors may be used to endorse or promote products derived 25 + * from this software without specific prior written permission. 26 + * 27 + * Alternatively, this software may be distributed under the terms of the 28 + * GNU General Public License ("GPL") version 2 as published by the Free 29 + * Software Foundation. 30 + * 31 + * NO WARRANTY 32 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 33 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 34 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 35 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 36 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 37 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 38 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 39 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 40 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 41 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 42 + * POSSIBILITY OF SUCH DAMAGES. 43 + */ 44 + 45 + #include <acpi/acpi.h> 46 + #include "accommon.h" 47 + #include "acdebug.h" 48 + #include "actables.h" 49 + 50 + #define _COMPONENT ACPI_CA_DEBUGGER 51 + ACPI_MODULE_NAME("dbfileio") 52 + 53 + #ifdef ACPI_DEBUGGER 54 + /******************************************************************************* 55 + * 56 + * FUNCTION: acpi_db_close_debug_file 57 + * 58 + * PARAMETERS: None 59 + * 60 + * RETURN: None 61 + * 62 + * DESCRIPTION: If open, close the current debug output file 63 + * 64 + ******************************************************************************/ 65 + void acpi_db_close_debug_file(void) 66 + { 67 + 68 + #ifdef ACPI_APPLICATION 69 + 70 + if (acpi_gbl_debug_file) { 71 + fclose(acpi_gbl_debug_file); 72 + acpi_gbl_debug_file = NULL; 73 + acpi_gbl_db_output_to_file = FALSE; 74 + acpi_os_printf("Debug output file %s closed\n", 75 + acpi_gbl_db_debug_filename); 76 + } 77 + #endif 78 + } 79 + 80 + /******************************************************************************* 81 + * 82 + * FUNCTION: acpi_db_open_debug_file 83 + * 84 + * PARAMETERS: name - Filename to open 85 + * 86 + * RETURN: None 87 + * 88 + * DESCRIPTION: Open a file where debug output will be directed. 
89 + * 90 + ******************************************************************************/ 91 + 92 + void acpi_db_open_debug_file(char *name) 93 + { 94 + 95 + #ifdef ACPI_APPLICATION 96 + 97 + acpi_db_close_debug_file(); 98 + acpi_gbl_debug_file = fopen(name, "w+"); 99 + if (!acpi_gbl_debug_file) { 100 + acpi_os_printf("Could not open debug file %s\n", name); 101 + return; 102 + } 103 + 104 + acpi_os_printf("Debug output file %s opened\n", name); 105 + strncpy(acpi_gbl_db_debug_filename, name, 106 + sizeof(acpi_gbl_db_debug_filename)); 107 + acpi_gbl_db_output_to_file = TRUE; 108 + 109 + #endif 110 + } 111 + #endif 112 + 113 + #ifdef ACPI_APPLICATION 114 + #include "acapps.h" 115 + 116 + /******************************************************************************* 117 + * 118 + * FUNCTION: ae_local_load_table 119 + * 120 + * PARAMETERS: table - pointer to a buffer containing the entire 121 + * table to be loaded 122 + * 123 + * RETURN: Status 124 + * 125 + * DESCRIPTION: This function is called to load a table from the caller's 126 + * buffer. The buffer must contain an entire ACPI Table including 127 + * a valid header. The header fields will be verified, and if it 128 + * is determined that the table is invalid, the call will fail. 
129 + * 130 + ******************************************************************************/ 131 + 132 + static acpi_status ae_local_load_table(struct acpi_table_header *table) 133 + { 134 + acpi_status status = AE_OK; 135 + 136 + ACPI_FUNCTION_TRACE(ae_local_load_table); 137 + 138 + #if 0 139 + /* struct acpi_table_desc table_info; */ 140 + 141 + if (!table) { 142 + return_ACPI_STATUS(AE_BAD_PARAMETER); 143 + } 144 + 145 + table_info.pointer = table; 146 + status = acpi_tb_recognize_table(&table_info, ACPI_TABLE_ALL); 147 + if (ACPI_FAILURE(status)) { 148 + return_ACPI_STATUS(status); 149 + } 150 + 151 + /* Install the new table into the local data structures */ 152 + 153 + status = acpi_tb_init_table_descriptor(&table_info); 154 + if (ACPI_FAILURE(status)) { 155 + if (status == AE_ALREADY_EXISTS) { 156 + 157 + /* Table already exists, no error */ 158 + 159 + status = AE_OK; 160 + } 161 + 162 + /* Free table allocated by acpi_tb_get_table */ 163 + 164 + acpi_tb_delete_single_table(&table_info); 165 + return_ACPI_STATUS(status); 166 + } 167 + #if (!defined (ACPI_NO_METHOD_EXECUTION) && !defined (ACPI_CONSTANT_EVAL_ONLY)) 168 + 169 + status = 170 + acpi_ns_load_table(table_info.installed_desc, acpi_gbl_root_node); 171 + if (ACPI_FAILURE(status)) { 172 + 173 + /* Uninstall table and free the buffer */ 174 + 175 + acpi_tb_delete_tables_by_type(ACPI_TABLE_ID_DSDT); 176 + return_ACPI_STATUS(status); 177 + } 178 + #endif 179 + #endif 180 + 181 + return_ACPI_STATUS(status); 182 + } 183 + #endif 184 + 185 + /******************************************************************************* 186 + * 187 + * FUNCTION: acpi_db_get_table_from_file 188 + * 189 + * PARAMETERS: filename - File where table is located 190 + * return_table - Where a pointer to the table is returned 191 + * 192 + * RETURN: Status 193 + * 194 + * DESCRIPTION: Load an ACPI table from a file 195 + * 196 + ******************************************************************************/ 197 + 198 + 
acpi_status 199 + acpi_db_get_table_from_file(char *filename, 200 + struct acpi_table_header **return_table, 201 + u8 must_be_aml_file) 202 + { 203 + #ifdef ACPI_APPLICATION 204 + acpi_status status; 205 + struct acpi_table_header *table; 206 + u8 is_aml_table = TRUE; 207 + 208 + status = acpi_ut_read_table_from_file(filename, &table); 209 + if (ACPI_FAILURE(status)) { 210 + return (status); 211 + } 212 + 213 + if (must_be_aml_file) { 214 + is_aml_table = acpi_ut_is_aml_table(table); 215 + if (!is_aml_table) { 216 + ACPI_EXCEPTION((AE_INFO, AE_OK, 217 + "Input for -e is not an AML table: " 218 + "\"%4.4s\" (must be DSDT/SSDT)", 219 + table->signature)); 220 + return (AE_TYPE); 221 + } 222 + } 223 + 224 + if (is_aml_table) { 225 + 226 + /* Attempt to recognize and install the table */ 227 + 228 + status = ae_local_load_table(table); 229 + if (ACPI_FAILURE(status)) { 230 + if (status == AE_ALREADY_EXISTS) { 231 + acpi_os_printf 232 + ("Table %4.4s is already installed\n", 233 + table->signature); 234 + } else { 235 + acpi_os_printf("Could not install table, %s\n", 236 + acpi_format_exception(status)); 237 + } 238 + 239 + return (status); 240 + } 241 + 242 + acpi_tb_print_table_header(0, table); 243 + 244 + fprintf(stderr, 245 + "Acpi table [%4.4s] successfully installed and loaded\n", 246 + table->signature); 247 + } 248 + 249 + acpi_gbl_acpi_hardware_present = FALSE; 250 + if (return_table) { 251 + *return_table = table; 252 + } 253 + 254 + #endif /* ACPI_APPLICATION */ 255 + return (AE_OK); 256 + }
drivers/acpi/acpica/dbhistry.c | 239 +
··· 1 + /****************************************************************************** 2 + * 3 + * Module Name: dbhistry - debugger HISTORY command 4 + * 5 + *****************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + #include "acdebug.h" 47 + 48 + #define _COMPONENT ACPI_CA_DEBUGGER 49 + ACPI_MODULE_NAME("dbhistry") 50 + 51 + #define HI_NO_HISTORY 0 52 + #define HI_RECORD_HISTORY 1 53 + #define HISTORY_SIZE 40 54 + typedef struct history_info { 55 + char *command; 56 + u32 cmd_num; 57 + 58 + } HISTORY_INFO; 59 + 60 + static HISTORY_INFO acpi_gbl_history_buffer[HISTORY_SIZE]; 61 + static u16 acpi_gbl_lo_history = 0; 62 + static u16 acpi_gbl_num_history = 0; 63 + static u16 acpi_gbl_next_history_index = 0; 64 + u32 acpi_gbl_next_cmd_num = 1; 65 + 66 + /******************************************************************************* 67 + * 68 + * FUNCTION: acpi_db_add_to_history 69 + * 70 + * PARAMETERS: command_line - Command to add 71 + * 72 + * RETURN: None 73 + * 74 + * DESCRIPTION: Add a command line to the history buffer. 75 + * 76 + ******************************************************************************/ 77 + 78 + void acpi_db_add_to_history(char *command_line) 79 + { 80 + u16 cmd_len; 81 + u16 buffer_len; 82 + 83 + /* Put command into the next available slot */ 84 + 85 + cmd_len = (u16)strlen(command_line); 86 + if (!cmd_len) { 87 + return; 88 + } 89 + 90 + if (acpi_gbl_history_buffer[acpi_gbl_next_history_index].command != 91 + NULL) { 92 + buffer_len = 93 + (u16) 94 + strlen(acpi_gbl_history_buffer[acpi_gbl_next_history_index]. 
95 + command); 96 + 97 + if (cmd_len > buffer_len) { 98 + acpi_os_free(acpi_gbl_history_buffer 99 + [acpi_gbl_next_history_index].command); 100 + acpi_gbl_history_buffer[acpi_gbl_next_history_index]. 101 + command = acpi_os_allocate(cmd_len + 1); 102 + } 103 + } else { 104 + acpi_gbl_history_buffer[acpi_gbl_next_history_index].command = 105 + acpi_os_allocate(cmd_len + 1); 106 + } 107 + 108 + strcpy(acpi_gbl_history_buffer[acpi_gbl_next_history_index].command, 109 + command_line); 110 + 111 + acpi_gbl_history_buffer[acpi_gbl_next_history_index].cmd_num = 112 + acpi_gbl_next_cmd_num; 113 + 114 + /* Adjust indexes */ 115 + 116 + if ((acpi_gbl_num_history == HISTORY_SIZE) && 117 + (acpi_gbl_next_history_index == acpi_gbl_lo_history)) { 118 + acpi_gbl_lo_history++; 119 + if (acpi_gbl_lo_history >= HISTORY_SIZE) { 120 + acpi_gbl_lo_history = 0; 121 + } 122 + } 123 + 124 + acpi_gbl_next_history_index++; 125 + if (acpi_gbl_next_history_index >= HISTORY_SIZE) { 126 + acpi_gbl_next_history_index = 0; 127 + } 128 + 129 + acpi_gbl_next_cmd_num++; 130 + if (acpi_gbl_num_history < HISTORY_SIZE) { 131 + acpi_gbl_num_history++; 132 + } 133 + } 134 + 135 + /******************************************************************************* 136 + * 137 + * FUNCTION: acpi_db_display_history 138 + * 139 + * PARAMETERS: None 140 + * 141 + * RETURN: None 142 + * 143 + * DESCRIPTION: Display the contents of the history buffer 144 + * 145 + ******************************************************************************/ 146 + 147 + void acpi_db_display_history(void) 148 + { 149 + u32 i; 150 + u16 history_index; 151 + 152 + history_index = acpi_gbl_lo_history; 153 + 154 + /* Dump entire history buffer */ 155 + 156 + for (i = 0; i < acpi_gbl_num_history; i++) { 157 + if (acpi_gbl_history_buffer[history_index].command) { 158 + acpi_os_printf("%3ld %s\n", 159 + acpi_gbl_history_buffer[history_index]. 160 + cmd_num, 161 + acpi_gbl_history_buffer[history_index]. 
162 + command); 163 + } 164 + 165 + history_index++; 166 + if (history_index >= HISTORY_SIZE) { 167 + history_index = 0; 168 + } 169 + } 170 + } 171 + 172 + /******************************************************************************* 173 + * 174 + * FUNCTION: acpi_db_get_from_history 175 + * 176 + * PARAMETERS: command_num_arg - String containing the number of the 177 + * command to be retrieved 178 + * 179 + * RETURN: Pointer to the retrieved command. Null on error. 180 + * 181 + * DESCRIPTION: Get a command from the history buffer 182 + * 183 + ******************************************************************************/ 184 + 185 + char *acpi_db_get_from_history(char *command_num_arg) 186 + { 187 + u32 cmd_num; 188 + 189 + if (command_num_arg == NULL) { 190 + cmd_num = acpi_gbl_next_cmd_num - 1; 191 + } 192 + 193 + else { 194 + cmd_num = strtoul(command_num_arg, NULL, 0); 195 + } 196 + 197 + return (acpi_db_get_history_by_index(cmd_num)); 198 + } 199 + 200 + /******************************************************************************* 201 + * 202 + * FUNCTION: acpi_db_get_history_by_index 203 + * 204 + * PARAMETERS: cmd_num - Index of the desired history entry. 205 + * Values are 0...(acpi_gbl_next_cmd_num - 1) 206 + * 207 + * RETURN: Pointer to the retrieved command. Null on error. 
208 + * 209 + * DESCRIPTION: Get a command from the history buffer 210 + * 211 + ******************************************************************************/ 212 + 213 + char *acpi_db_get_history_by_index(u32 cmd_num) 214 + { 215 + u32 i; 216 + u16 history_index; 217 + 218 + /* Search history buffer */ 219 + 220 + history_index = acpi_gbl_lo_history; 221 + for (i = 0; i < acpi_gbl_num_history; i++) { 222 + if (acpi_gbl_history_buffer[history_index].cmd_num == cmd_num) { 223 + 224 + /* Found the command, return it */ 225 + 226 + return (acpi_gbl_history_buffer[history_index].command); 227 + } 228 + 229 + /* History buffer is circular */ 230 + 231 + history_index++; 232 + if (history_index >= HISTORY_SIZE) { 233 + history_index = 0; 234 + } 235 + } 236 + 237 + acpi_os_printf("Invalid history number: %u\n", cmd_num); 238 + return (NULL); 239 + }
drivers/acpi/acpica/dbinput.c | 1267 +
··· 1 + /******************************************************************************* 2 + * 3 + * Module Name: dbinput - user front-end to the AML debugger 4 + * 5 + ******************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + #include "acdebug.h" 47 + 48 + #define _COMPONENT ACPI_CA_DEBUGGER 49 + ACPI_MODULE_NAME("dbinput") 50 + 51 + /* Local prototypes */ 52 + static u32 acpi_db_get_line(char *input_buffer); 53 + 54 + static u32 acpi_db_match_command(char *user_command); 55 + 56 + static void acpi_db_single_thread(void); 57 + 58 + static void acpi_db_display_command_info(char *command, u8 display_all); 59 + 60 + static void acpi_db_display_help(char *command); 61 + 62 + static u8 63 + acpi_db_match_command_help(char *command, 64 + const struct acpi_db_command_help *help); 65 + 66 + /* 67 + * Top-level debugger commands. 
68 + * 69 + * This list of commands must match the string table below it 70 + */ 71 + enum acpi_ex_debugger_commands { 72 + CMD_NOT_FOUND = 0, 73 + CMD_NULL, 74 + CMD_ALLOCATIONS, 75 + CMD_ARGS, 76 + CMD_ARGUMENTS, 77 + CMD_BREAKPOINT, 78 + CMD_BUSINFO, 79 + CMD_CALL, 80 + CMD_DEBUG, 81 + CMD_DISASSEMBLE, 82 + CMD_DISASM, 83 + CMD_DUMP, 84 + CMD_EVALUATE, 85 + CMD_EXECUTE, 86 + CMD_EXIT, 87 + CMD_FIND, 88 + CMD_GO, 89 + CMD_HANDLERS, 90 + CMD_HELP, 91 + CMD_HELP2, 92 + CMD_HISTORY, 93 + CMD_HISTORY_EXE, 94 + CMD_HISTORY_LAST, 95 + CMD_INFORMATION, 96 + CMD_INTEGRITY, 97 + CMD_INTO, 98 + CMD_LEVEL, 99 + CMD_LIST, 100 + CMD_LOCALS, 101 + CMD_LOCKS, 102 + CMD_METHODS, 103 + CMD_NAMESPACE, 104 + CMD_NOTIFY, 105 + CMD_OBJECTS, 106 + CMD_OSI, 107 + CMD_OWNER, 108 + CMD_PATHS, 109 + CMD_PREDEFINED, 110 + CMD_PREFIX, 111 + CMD_QUIT, 112 + CMD_REFERENCES, 113 + CMD_RESOURCES, 114 + CMD_RESULTS, 115 + CMD_SET, 116 + CMD_STATS, 117 + CMD_STOP, 118 + CMD_TABLES, 119 + CMD_TEMPLATE, 120 + CMD_TRACE, 121 + CMD_TREE, 122 + CMD_TYPE, 123 + #ifdef ACPI_APPLICATION 124 + CMD_ENABLEACPI, 125 + CMD_EVENT, 126 + CMD_GPE, 127 + CMD_GPES, 128 + CMD_SCI, 129 + CMD_SLEEP, 130 + 131 + CMD_CLOSE, 132 + CMD_LOAD, 133 + CMD_OPEN, 134 + CMD_UNLOAD, 135 + 136 + CMD_TERMINATE, 137 + CMD_THREADS, 138 + 139 + CMD_TEST, 140 + #endif 141 + }; 142 + 143 + #define CMD_FIRST_VALID 2 144 + 145 + /* Second parameter is the required argument count */ 146 + 147 + static const struct acpi_db_command_info acpi_gbl_db_commands[] = { 148 + {"<NOT FOUND>", 0}, 149 + {"<NULL>", 0}, 150 + {"ALLOCATIONS", 0}, 151 + {"ARGS", 0}, 152 + {"ARGUMENTS", 0}, 153 + {"BREAKPOINT", 1}, 154 + {"BUSINFO", 0}, 155 + {"CALL", 0}, 156 + {"DEBUG", 1}, 157 + {"DISASSEMBLE", 1}, 158 + {"DISASM", 1}, 159 + {"DUMP", 1}, 160 + {"EVALUATE", 1}, 161 + {"EXECUTE", 1}, 162 + {"EXIT", 0}, 163 + {"FIND", 1}, 164 + {"GO", 0}, 165 + {"HANDLERS", 0}, 166 + {"HELP", 0}, 167 + {"?", 0}, 168 + {"HISTORY", 0}, 169 + {"!", 1}, 170 + {"!!", 0}, 171 + 
{"INFORMATION", 0}, 172 + {"INTEGRITY", 0}, 173 + {"INTO", 0}, 174 + {"LEVEL", 0}, 175 + {"LIST", 0}, 176 + {"LOCALS", 0}, 177 + {"LOCKS", 0}, 178 + {"METHODS", 0}, 179 + {"NAMESPACE", 0}, 180 + {"NOTIFY", 2}, 181 + {"OBJECTS", 0}, 182 + {"OSI", 0}, 183 + {"OWNER", 1}, 184 + {"PATHS", 0}, 185 + {"PREDEFINED", 0}, 186 + {"PREFIX", 0}, 187 + {"QUIT", 0}, 188 + {"REFERENCES", 1}, 189 + {"RESOURCES", 0}, 190 + {"RESULTS", 0}, 191 + {"SET", 3}, 192 + {"STATS", 1}, 193 + {"STOP", 0}, 194 + {"TABLES", 0}, 195 + {"TEMPLATE", 1}, 196 + {"TRACE", 1}, 197 + {"TREE", 0}, 198 + {"TYPE", 1}, 199 + #ifdef ACPI_APPLICATION 200 + {"ENABLEACPI", 0}, 201 + {"EVENT", 1}, 202 + {"GPE", 1}, 203 + {"GPES", 0}, 204 + {"SCI", 0}, 205 + {"SLEEP", 0}, 206 + 207 + {"CLOSE", 0}, 208 + {"LOAD", 1}, 209 + {"OPEN", 1}, 210 + {"UNLOAD", 1}, 211 + 212 + {"TERMINATE", 0}, 213 + {"THREADS", 3}, 214 + 215 + {"TEST", 1}, 216 + #endif 217 + {NULL, 0} 218 + }; 219 + 220 + /* 221 + * Help for all debugger commands. First argument is the number of lines 222 + * of help to output for the command. 
223 + */ 224 + static const struct acpi_db_command_help acpi_gbl_db_command_help[] = { 225 + {0, "\nGeneral-Purpose Commands:", "\n"}, 226 + {1, " Allocations", "Display list of current memory allocations\n"}, 227 + {2, " Dump <Address>|<Namepath>", "\n"}, 228 + {0, " [Byte|Word|Dword|Qword]", 229 + "Display ACPI objects or memory\n"}, 230 + {1, " Handlers", "Info about global handlers\n"}, 231 + {1, " Help [Command]", "This help screen or individual command\n"}, 232 + {1, " History", "Display command history buffer\n"}, 233 + {1, " Level <DebugLevel>] [console]", 234 + "Get/Set debug level for file or console\n"}, 235 + {1, " Locks", "Current status of internal mutexes\n"}, 236 + {1, " Osi [Install|Remove <name>]", 237 + "Display or modify global _OSI list\n"}, 238 + {1, " Quit or Exit", "Exit this command\n"}, 239 + {8, " Stats <SubCommand>", 240 + "Display namespace and memory statistics\n"}, 241 + {1, " Allocations", "Display list of current memory allocations\n"}, 242 + {1, " Memory", "Dump internal memory lists\n"}, 243 + {1, " Misc", "Namespace search and mutex stats\n"}, 244 + {1, " Objects", "Summary of namespace objects\n"}, 245 + {1, " Sizes", "Sizes for each of the internal objects\n"}, 246 + {1, " Stack", "Display CPU stack usage\n"}, 247 + {1, " Tables", "Info about current ACPI table(s)\n"}, 248 + {1, " Tables", "Display info about loaded ACPI tables\n"}, 249 + {1, " ! <CommandNumber>", "Execute command from history buffer\n"}, 250 + {1, " !!", "Execute last command again\n"}, 251 + 252 + {0, "\nNamespace Access Commands:", "\n"}, 253 + {1, " Businfo", "Display system bus info\n"}, 254 + {1, " Disassemble <Method>", "Disassemble a control method\n"}, 255 + {1, " Find <AcpiName> (? 
is wildcard)", 256 + "Find ACPI name(s) with wildcards\n"}, 257 + {1, " Integrity", "Validate namespace integrity\n"}, 258 + {1, " Methods", "Display list of loaded control methods\n"}, 259 + {1, " Namespace [Object] [Depth]", 260 + "Display loaded namespace tree/subtree\n"}, 261 + {1, " Notify <Object> <Value>", "Send a notification on Object\n"}, 262 + {1, " Objects [ObjectType]", 263 + "Display summary of all objects or just given type\n"}, 264 + {1, " Owner <OwnerId> [Depth]", 265 + "Display loaded namespace by object owner\n"}, 266 + {1, " Paths", "Display full pathnames of namespace objects\n"}, 267 + {1, " Predefined", "Check all predefined names\n"}, 268 + {1, " Prefix [<Namepath>]", "Set or Get current execution prefix\n"}, 269 + {1, " References <Addr>", "Find all references to object at addr\n"}, 270 + {1, " Resources [DeviceName]", 271 + "Display Device resources (no arg = all devices)\n"}, 272 + {1, " Set N <NamedObject> <Value>", "Set value for named integer\n"}, 273 + {1, " Template <Object>", "Format/dump a Buffer/ResourceTemplate\n"}, 274 + {1, " Type <Object>", "Display object type\n"}, 275 + 276 + {0, "\nControl Method Execution Commands:", "\n"}, 277 + {1, " Arguments (or Args)", "Display method arguments\n"}, 278 + {1, " Breakpoint <AmlOffset>", "Set an AML execution breakpoint\n"}, 279 + {1, " Call", "Run to next control method invocation\n"}, 280 + {1, " Debug <Namepath> [Arguments]", "Single Step a control method\n"}, 281 + {6, " Evaluate", "Synonym for Execute\n"}, 282 + {5, " Execute <Namepath> [Arguments]", "Execute control method\n"}, 283 + {1, " Hex Integer", "Integer method argument\n"}, 284 + {1, " \"Ascii String\"", "String method argument\n"}, 285 + {1, " (Hex Byte List)", "Buffer method argument\n"}, 286 + {1, " [Package Element List]", "Package method argument\n"}, 287 + {1, " Go", "Allow method to run to completion\n"}, 288 + {1, " Information", "Display info about the current method\n"}, 289 + {1, " Into", "Step into (not over) 
a method call\n"}, 290 + {1, " List [# of Aml Opcodes]", "Display method ASL statements\n"}, 291 + {1, " Locals", "Display method local variables\n"}, 292 + {1, " Results", "Display method result stack\n"}, 293 + {1, " Set <A|L> <#> <Value>", "Set method data (Arguments/Locals)\n"}, 294 + {1, " Stop", "Terminate control method\n"}, 295 + {5, " Trace <State> [<Namepath>] [Once]", 296 + "Trace control method execution\n"}, 297 + {1, " Enable", "Enable all messages\n"}, 298 + {1, " Disable", "Disable tracing\n"}, 299 + {1, " Method", "Enable method execution messages\n"}, 300 + {1, " Opcode", "Enable opcode execution messages\n"}, 301 + {1, " Tree", "Display control method calling tree\n"}, 302 + {1, " <Enter>", "Single step next AML opcode (over calls)\n"}, 303 + 304 + #ifdef ACPI_APPLICATION 305 + {0, "\nHardware Simulation Commands:", "\n"}, 306 + {1, " EnableAcpi", "Enable ACPI (hardware) mode\n"}, 307 + {1, " Event <F|G> <Value>", "Generate AcpiEvent (Fixed/GPE)\n"}, 308 + {1, " Gpe <GpeNum> [GpeBlockDevice]", "Simulate a GPE\n"}, 309 + {1, " Gpes", "Display info on all GPE devices\n"}, 310 + {1, " Sci", "Generate an SCI\n"}, 311 + {1, " Sleep [SleepState]", "Simulate sleep/wake sequence(s) (0-5)\n"}, 312 + 313 + {0, "\nFile I/O Commands:", "\n"}, 314 + {1, " Close", "Close debug output file\n"}, 315 + {1, " Load <Input Filename>", "Load ACPI table from a file\n"}, 316 + {1, " Open <Output Filename>", "Open a file for debug output\n"}, 317 + {1, " Unload <Namepath>", 318 + "Unload an ACPI table via namespace object\n"}, 319 + 320 + {0, "\nUser Space Commands:", "\n"}, 321 + {1, " Terminate", "Delete namespace and all internal objects\n"}, 322 + {1, " Thread <Threads><Loops><NamePath>", 323 + "Spawn threads to execute method(s)\n"}, 324 + 325 + {0, "\nDebug Test Commands:", "\n"}, 326 + {3, " Test <TestName>", "Invoke a debug test\n"}, 327 + {1, " Objects", "Read/write/compare all namespace data objects\n"}, 328 + {1, " Predefined", 329 + "Execute all ACPI 
predefined names (_STA, etc.)\n"}, 330 + #endif 331 + {0, NULL, NULL} 332 + }; 333 + 334 + /******************************************************************************* 335 + * 336 + * FUNCTION: acpi_db_match_command_help 337 + * 338 + * PARAMETERS: command - Command string to match 339 + * help - Help table entry to attempt match 340 + * 341 + * RETURN: TRUE if command matched, FALSE otherwise 342 + * 343 + * DESCRIPTION: Attempt to match a command in the help table in order to 344 + * print help information for a single command. 345 + * 346 + ******************************************************************************/ 347 + 348 + static u8 349 + acpi_db_match_command_help(char *command, 350 + const struct acpi_db_command_help *help) 351 + { 352 + char *invocation = help->invocation; 353 + u32 line_count; 354 + 355 + /* Valid commands in the help table begin with a couple of spaces */ 356 + 357 + if (*invocation != ' ') { 358 + return (FALSE); 359 + } 360 + 361 + while (*invocation == ' ') { 362 + invocation++; 363 + } 364 + 365 + /* Match command name (full command or substring) */ 366 + 367 + while ((*command) && (*invocation) && (*invocation != ' ')) { 368 + if (tolower((int)*command) != tolower((int)*invocation)) { 369 + return (FALSE); 370 + } 371 + 372 + invocation++; 373 + command++; 374 + } 375 + 376 + /* Print the appropriate number of help lines */ 377 + 378 + line_count = help->line_count; 379 + while (line_count) { 380 + acpi_os_printf("%-38s : %s", help->invocation, 381 + help->description); 382 + help++; 383 + line_count--; 384 + } 385 + 386 + return (TRUE); 387 + } 388 + 389 + /******************************************************************************* 390 + * 391 + * FUNCTION: acpi_db_display_command_info 392 + * 393 + * PARAMETERS: command - Command string to match 394 + * display_all - Display all matching commands, or just 395 + * the first one (substring match) 396 + * 397 + * RETURN: None 398 + * 399 + * DESCRIPTION: Display help 
information for a Debugger command. 400 + * 401 + ******************************************************************************/ 402 + 403 + static void acpi_db_display_command_info(char *command, u8 display_all) 404 + { 405 + const struct acpi_db_command_help *next; 406 + u8 matched; 407 + 408 + next = acpi_gbl_db_command_help; 409 + while (next->invocation) { 410 + matched = acpi_db_match_command_help(command, next); 411 + if (!display_all && matched) { 412 + return; 413 + } 414 + 415 + next++; 416 + } 417 + } 418 + 419 + /******************************************************************************* 420 + * 421 + * FUNCTION: acpi_db_display_help 422 + * 423 + * PARAMETERS: command - Optional command string to display help. 424 + * If not specified, all debugger command 425 + * help strings are displayed 426 + * 427 + * RETURN: None 428 + * 429 + * DESCRIPTION: Display help for a single debugger command, or all of them. 430 + * 431 + ******************************************************************************/ 432 + 433 + static void acpi_db_display_help(char *command) 434 + { 435 + const struct acpi_db_command_help *next = acpi_gbl_db_command_help; 436 + 437 + if (!command) { 438 + 439 + /* No argument to help, display help for all commands */ 440 + 441 + while (next->invocation) { 442 + acpi_os_printf("%-38s%s", next->invocation, 443 + next->description); 444 + next++; 445 + } 446 + } else { 447 + /* Display help for all commands that match the substring */ 448 + 449 + acpi_db_display_command_info(command, TRUE); 450 + } 451 + } 452 + 453 + /******************************************************************************* 454 + * 455 + * FUNCTION: acpi_db_get_next_token 456 + * 457 + * PARAMETERS: string - Command buffer 458 + * next - Return value, end of next token 459 + * 460 + * RETURN: Pointer to the start of the next token. 461 + * 462 + * DESCRIPTION: Command line parsing.
Get the next token on the command line 463 + * 464 + ******************************************************************************/ 465 + 466 + char *acpi_db_get_next_token(char *string, 467 + char **next, acpi_object_type * return_type) 468 + { 469 + char *start; 470 + u32 depth; 471 + acpi_object_type type = ACPI_TYPE_INTEGER; 472 + 473 + /* At end of buffer? */ 474 + 475 + if (!string || !(*string)) { 476 + return (NULL); 477 + } 478 + 479 + /* Remove any spaces at the beginning */ 480 + 481 + if (*string == ' ') { 482 + while (*string && (*string == ' ')) { 483 + string++; 484 + } 485 + 486 + if (!(*string)) { 487 + return (NULL); 488 + } 489 + } 490 + 491 + switch (*string) { 492 + case '"': 493 + 494 + /* This is a quoted string, scan until closing quote */ 495 + 496 + string++; 497 + start = string; 498 + type = ACPI_TYPE_STRING; 499 + 500 + /* Find end of string */ 501 + 502 + while (*string && (*string != '"')) { 503 + string++; 504 + } 505 + break; 506 + 507 + case '(': 508 + 509 + /* This is the start of a buffer, scan until closing paren */ 510 + 511 + string++; 512 + start = string; 513 + type = ACPI_TYPE_BUFFER; 514 + 515 + /* Find end of buffer */ 516 + 517 + while (*string && (*string != ')')) { 518 + string++; 519 + } 520 + break; 521 + 522 + case '[': 523 + 524 + /* This is the start of a package, scan until closing bracket */ 525 + 526 + string++; 527 + depth = 1; 528 + start = string; 529 + type = ACPI_TYPE_PACKAGE; 530 + 531 + /* Find end of package (closing bracket) */ 532 + 533 + while (*string) { 534 + 535 + /* Handle String package elements */ 536 + 537 + if (*string == '"') { 538 + /* Find end of string */ 539 + 540 + string++; 541 + while (*string && (*string != '"')) { 542 + string++; 543 + } 544 + if (!(*string)) { 545 + break; 546 + } 547 + } else if (*string == '[') { 548 + depth++; /* A nested package declaration */ 549 + } else if (*string == ']') { 550 + depth--; 551 + if (depth == 0) { /* Found final package closing bracket */ 
552 + break; 553 + } 554 + } 555 + 556 + string++; 557 + } 558 + break; 559 + 560 + default: 561 + 562 + start = string; 563 + 564 + /* Find end of token */ 565 + 566 + while (*string && (*string != ' ')) { 567 + string++; 568 + } 569 + break; 570 + } 571 + 572 + if (!(*string)) { 573 + *next = NULL; 574 + } else { 575 + *string = 0; 576 + *next = string + 1; 577 + } 578 + 579 + *return_type = type; 580 + return (start); 581 + } 582 + 583 + /******************************************************************************* 584 + * 585 + * FUNCTION: acpi_db_get_line 586 + * 587 + * PARAMETERS: input_buffer - Command line buffer 588 + * 589 + * RETURN: Count of arguments to the command 590 + * 591 + * DESCRIPTION: Get the next command line from the user. Gets entire line 592 + * up to the next newline 593 + * 594 + ******************************************************************************/ 595 + 596 + static u32 acpi_db_get_line(char *input_buffer) 597 + { 598 + u32 i; 599 + u32 count; 600 + char *next; 601 + char *this; 602 + 603 + if (acpi_ut_safe_strcpy 604 + (acpi_gbl_db_parsed_buf, sizeof(acpi_gbl_db_parsed_buf), 605 + input_buffer)) { 606 + acpi_os_printf 607 + ("Buffer overflow while parsing input line (max %u characters)\n", 608 + sizeof(acpi_gbl_db_parsed_buf)); 609 + return (0); 610 + } 611 + 612 + this = acpi_gbl_db_parsed_buf; 613 + for (i = 0; i < ACPI_DEBUGGER_MAX_ARGS; i++) { 614 + acpi_gbl_db_args[i] = acpi_db_get_next_token(this, &next, 615 + &acpi_gbl_db_arg_types 616 + [i]); 617 + if (!acpi_gbl_db_args[i]) { 618 + break; 619 + } 620 + 621 + this = next; 622 + } 623 + 624 + /* Uppercase the actual command */ 625 + 626 + if (acpi_gbl_db_args[0]) { 627 + acpi_ut_strupr(acpi_gbl_db_args[0]); 628 + } 629 + 630 + count = i; 631 + if (count) { 632 + count--; /* Number of args only */ 633 + } 634 + 635 + return (count); 636 + } 637 + 638 + /******************************************************************************* 639 + * 640 + * FUNCTION: 
acpi_db_match_command 641 + * 642 + * PARAMETERS: user_command - User command line 643 + * 644 + * RETURN: Index into command array, -1 if not found 645 + * 646 + * DESCRIPTION: Search command array for a command match 647 + * 648 + ******************************************************************************/ 649 + 650 + static u32 acpi_db_match_command(char *user_command) 651 + { 652 + u32 i; 653 + 654 + if (!user_command || user_command[0] == 0) { 655 + return (CMD_NULL); 656 + } 657 + 658 + for (i = CMD_FIRST_VALID; acpi_gbl_db_commands[i].name; i++) { 659 + if (strstr(acpi_gbl_db_commands[i].name, user_command) == 660 + acpi_gbl_db_commands[i].name) { 661 + return (i); 662 + } 663 + } 664 + 665 + /* Command not recognized */ 666 + 667 + return (CMD_NOT_FOUND); 668 + } 669 + 670 + /******************************************************************************* 671 + * 672 + * FUNCTION: acpi_db_command_dispatch 673 + * 674 + * PARAMETERS: input_buffer - Command line buffer 675 + * walk_state - Current walk 676 + * op - Current (executing) parse op 677 + * 678 + * RETURN: Status 679 + * 680 + * DESCRIPTION: Command dispatcher. 681 + * 682 + ******************************************************************************/ 683 + 684 + acpi_status 685 + acpi_db_command_dispatch(char *input_buffer, 686 + struct acpi_walk_state * walk_state, 687 + union acpi_parse_object * op) 688 + { 689 + u32 temp; 690 + u32 command_index; 691 + u32 param_count; 692 + char *command_line; 693 + acpi_status status = AE_CTRL_TRUE; 694 + 695 + /* If acpi_terminate has been called, terminate this thread */ 696 + 697 + if (acpi_gbl_db_terminate_loop) { 698 + return (AE_CTRL_TERMINATE); 699 + } 700 + 701 + /* Find command and add to the history buffer */ 702 + 703 + param_count = acpi_db_get_line(input_buffer); 704 + command_index = acpi_db_match_command(acpi_gbl_db_args[0]); 705 + temp = 0; 706 + 707 + /* 708 + * We don't want to add the !! command to the history buffer. 
It 709 + * would cause an infinite loop because it would always be the 710 + * previous command. 711 + */ 712 + if (command_index != CMD_HISTORY_LAST) { 713 + acpi_db_add_to_history(input_buffer); 714 + } 715 + 716 + /* Verify that we have the minimum number of params */ 717 + 718 + if (param_count < acpi_gbl_db_commands[command_index].min_args) { 719 + acpi_os_printf 720 + ("%u parameters entered, [%s] requires %u parameters\n", 721 + param_count, acpi_gbl_db_commands[command_index].name, 722 + acpi_gbl_db_commands[command_index].min_args); 723 + 724 + acpi_db_display_command_info(acpi_gbl_db_commands 725 + [command_index].name, FALSE); 726 + return (AE_CTRL_TRUE); 727 + } 728 + 729 + /* Decode and dispatch the command */ 730 + 731 + switch (command_index) { 732 + case CMD_NULL: 733 + 734 + if (op) { 735 + return (AE_OK); 736 + } 737 + break; 738 + 739 + case CMD_ALLOCATIONS: 740 + 741 + #ifdef ACPI_DBG_TRACK_ALLOCATIONS 742 + acpi_ut_dump_allocations((u32)-1, NULL); 743 + #endif 744 + break; 745 + 746 + case CMD_ARGS: 747 + case CMD_ARGUMENTS: 748 + 749 + acpi_db_display_arguments(); 750 + break; 751 + 752 + case CMD_BREAKPOINT: 753 + 754 + acpi_db_set_method_breakpoint(acpi_gbl_db_args[1], walk_state, 755 + op); 756 + break; 757 + 758 + case CMD_BUSINFO: 759 + 760 + acpi_db_get_bus_info(); 761 + break; 762 + 763 + case CMD_CALL: 764 + 765 + acpi_db_set_method_call_breakpoint(op); 766 + status = AE_OK; 767 + break; 768 + 769 + case CMD_DEBUG: 770 + 771 + acpi_db_execute(acpi_gbl_db_args[1], 772 + &acpi_gbl_db_args[2], &acpi_gbl_db_arg_types[2], 773 + EX_SINGLE_STEP); 774 + break; 775 + 776 + case CMD_DISASSEMBLE: 777 + case CMD_DISASM: 778 + 779 + (void)acpi_db_disassemble_method(acpi_gbl_db_args[1]); 780 + break; 781 + 782 + case CMD_DUMP: 783 + 784 + acpi_db_decode_and_display_object(acpi_gbl_db_args[1], 785 + acpi_gbl_db_args[2]); 786 + break; 787 + 788 + case CMD_EVALUATE: 789 + case CMD_EXECUTE: 790 + 791 + acpi_db_execute(acpi_gbl_db_args[1], 792 + 
&acpi_gbl_db_args[2], &acpi_gbl_db_arg_types[2], 793 + EX_NO_SINGLE_STEP); 794 + break; 795 + 796 + case CMD_FIND: 797 + 798 + status = acpi_db_find_name_in_namespace(acpi_gbl_db_args[1]); 799 + break; 800 + 801 + case CMD_GO: 802 + 803 + acpi_gbl_cm_single_step = FALSE; 804 + return (AE_OK); 805 + 806 + case CMD_HANDLERS: 807 + 808 + acpi_db_display_handlers(); 809 + break; 810 + 811 + case CMD_HELP: 812 + case CMD_HELP2: 813 + 814 + acpi_db_display_help(acpi_gbl_db_args[1]); 815 + break; 816 + 817 + case CMD_HISTORY: 818 + 819 + acpi_db_display_history(); 820 + break; 821 + 822 + case CMD_HISTORY_EXE: /* ! command */ 823 + 824 + command_line = acpi_db_get_from_history(acpi_gbl_db_args[1]); 825 + if (!command_line) { 826 + return (AE_CTRL_TRUE); 827 + } 828 + 829 + status = acpi_db_command_dispatch(command_line, walk_state, op); 830 + return (status); 831 + 832 + case CMD_HISTORY_LAST: /* !! command */ 833 + 834 + command_line = acpi_db_get_from_history(NULL); 835 + if (!command_line) { 836 + return (AE_CTRL_TRUE); 837 + } 838 + 839 + status = acpi_db_command_dispatch(command_line, walk_state, op); 840 + return (status); 841 + 842 + case CMD_INFORMATION: 843 + 844 + acpi_db_display_method_info(op); 845 + break; 846 + 847 + case CMD_INTEGRITY: 848 + 849 + acpi_db_check_integrity(); 850 + break; 851 + 852 + case CMD_INTO: 853 + 854 + if (op) { 855 + acpi_gbl_cm_single_step = TRUE; 856 + return (AE_OK); 857 + } 858 + break; 859 + 860 + case CMD_LEVEL: 861 + 862 + if (param_count == 0) { 863 + acpi_os_printf 864 + ("Current debug level for file output is: %8.8lX\n", 865 + acpi_gbl_db_debug_level); 866 + acpi_os_printf 867 + ("Current debug level for console output is: %8.8lX\n", 868 + acpi_gbl_db_console_debug_level); 869 + } else if (param_count == 2) { 870 + temp = acpi_gbl_db_console_debug_level; 871 + acpi_gbl_db_console_debug_level = 872 + strtoul(acpi_gbl_db_args[1], NULL, 16); 873 + acpi_os_printf 874 + ("Debug Level for console output was %8.8lX, now 
%8.8lX\n", 875 + temp, acpi_gbl_db_console_debug_level); 876 + } else { 877 + temp = acpi_gbl_db_debug_level; 878 + acpi_gbl_db_debug_level = 879 + strtoul(acpi_gbl_db_args[1], NULL, 16); 880 + acpi_os_printf 881 + ("Debug Level for file output was %8.8lX, now %8.8lX\n", 882 + temp, acpi_gbl_db_debug_level); 883 + } 884 + break; 885 + 886 + case CMD_LIST: 887 + 888 + acpi_db_disassemble_aml(acpi_gbl_db_args[1], op); 889 + break; 890 + 891 + case CMD_LOCKS: 892 + 893 + acpi_db_display_locks(); 894 + break; 895 + 896 + case CMD_LOCALS: 897 + 898 + acpi_db_display_locals(); 899 + break; 900 + 901 + case CMD_METHODS: 902 + 903 + status = acpi_db_display_objects("METHOD", acpi_gbl_db_args[1]); 904 + break; 905 + 906 + case CMD_NAMESPACE: 907 + 908 + acpi_db_dump_namespace(acpi_gbl_db_args[1], 909 + acpi_gbl_db_args[2]); 910 + break; 911 + 912 + case CMD_NOTIFY: 913 + 914 + temp = strtoul(acpi_gbl_db_args[2], NULL, 0); 915 + acpi_db_send_notify(acpi_gbl_db_args[1], temp); 916 + break; 917 + 918 + case CMD_OBJECTS: 919 + 920 + acpi_ut_strupr(acpi_gbl_db_args[1]); 921 + status = 922 + acpi_db_display_objects(acpi_gbl_db_args[1], 923 + acpi_gbl_db_args[2]); 924 + break; 925 + 926 + case CMD_OSI: 927 + 928 + acpi_db_display_interfaces(acpi_gbl_db_args[1], 929 + acpi_gbl_db_args[2]); 930 + break; 931 + 932 + case CMD_OWNER: 933 + 934 + acpi_db_dump_namespace_by_owner(acpi_gbl_db_args[1], 935 + acpi_gbl_db_args[2]); 936 + break; 937 + 938 + case CMD_PATHS: 939 + 940 + acpi_db_dump_namespace_paths(); 941 + break; 942 + 943 + case CMD_PREFIX: 944 + 945 + acpi_db_set_scope(acpi_gbl_db_args[1]); 946 + break; 947 + 948 + case CMD_REFERENCES: 949 + 950 + acpi_db_find_references(acpi_gbl_db_args[1]); 951 + break; 952 + 953 + case CMD_RESOURCES: 954 + 955 + acpi_db_display_resources(acpi_gbl_db_args[1]); 956 + break; 957 + 958 + case CMD_RESULTS: 959 + 960 + acpi_db_display_results(); 961 + break; 962 + 963 + case CMD_SET: 964 + 965 + acpi_db_set_method_data(acpi_gbl_db_args[1], 966 + 
acpi_gbl_db_args[2], 967 + acpi_gbl_db_args[3]); 968 + break; 969 + 970 + case CMD_STATS: 971 + 972 + status = acpi_db_display_statistics(acpi_gbl_db_args[1]); 973 + break; 974 + 975 + case CMD_STOP: 976 + 977 + return (AE_NOT_IMPLEMENTED); 978 + 979 + case CMD_TABLES: 980 + 981 + acpi_db_display_table_info(acpi_gbl_db_args[1]); 982 + break; 983 + 984 + case CMD_TEMPLATE: 985 + 986 + acpi_db_display_template(acpi_gbl_db_args[1]); 987 + break; 988 + 989 + case CMD_TRACE: 990 + 991 + acpi_db_trace(acpi_gbl_db_args[1], acpi_gbl_db_args[2], 992 + acpi_gbl_db_args[3]); 993 + break; 994 + 995 + case CMD_TREE: 996 + 997 + acpi_db_display_calling_tree(); 998 + break; 999 + 1000 + case CMD_TYPE: 1001 + 1002 + acpi_db_display_object_type(acpi_gbl_db_args[1]); 1003 + break; 1004 + 1005 + #ifdef ACPI_APPLICATION 1006 + 1007 + /* Hardware simulation commands. */ 1008 + 1009 + case CMD_ENABLEACPI: 1010 + #if (!ACPI_REDUCED_HARDWARE) 1011 + 1012 + status = acpi_enable(); 1013 + if (ACPI_FAILURE(status)) { 1014 + acpi_os_printf("AcpiEnable failed (Status=%X)\n", 1015 + status); 1016 + return (status); 1017 + } 1018 + #endif /* !ACPI_REDUCED_HARDWARE */ 1019 + break; 1020 + 1021 + case CMD_EVENT: 1022 + 1023 + acpi_os_printf("Event command not implemented\n"); 1024 + break; 1025 + 1026 + case CMD_GPE: 1027 + 1028 + acpi_db_generate_gpe(acpi_gbl_db_args[1], acpi_gbl_db_args[2]); 1029 + break; 1030 + 1031 + case CMD_GPES: 1032 + 1033 + acpi_db_display_gpes(); 1034 + break; 1035 + 1036 + case CMD_SCI: 1037 + 1038 + acpi_db_generate_sci(); 1039 + break; 1040 + 1041 + case CMD_SLEEP: 1042 + 1043 + status = acpi_db_sleep(acpi_gbl_db_args[1]); 1044 + break; 1045 + 1046 + /* File I/O commands. 
*/ 1047 + 1048 + case CMD_CLOSE: 1049 + 1050 + acpi_db_close_debug_file(); 1051 + break; 1052 + 1053 + case CMD_LOAD: 1054 + 1055 + status = 1056 + acpi_db_get_table_from_file(acpi_gbl_db_args[1], NULL, 1057 + FALSE); 1058 + break; 1059 + 1060 + case CMD_OPEN: 1061 + 1062 + acpi_db_open_debug_file(acpi_gbl_db_args[1]); 1063 + break; 1064 + 1065 + /* User space commands. */ 1066 + 1067 + case CMD_TERMINATE: 1068 + 1069 + acpi_db_set_output_destination(ACPI_DB_REDIRECTABLE_OUTPUT); 1070 + acpi_ut_subsystem_shutdown(); 1071 + 1072 + /* 1073 + * TBD: [Restructure] Need some way to re-initialize without 1074 + * re-creating the semaphores! 1075 + */ 1076 + 1077 + acpi_gbl_db_terminate_loop = TRUE; 1078 + /* acpi_initialize (NULL); */ 1079 + break; 1080 + 1081 + case CMD_THREADS: 1082 + 1083 + acpi_db_create_execution_threads(acpi_gbl_db_args[1], 1084 + acpi_gbl_db_args[2], 1085 + acpi_gbl_db_args[3]); 1086 + break; 1087 + 1088 + /* Debug test commands. */ 1089 + 1090 + case CMD_PREDEFINED: 1091 + 1092 + acpi_db_check_predefined_names(); 1093 + break; 1094 + 1095 + case CMD_TEST: 1096 + 1097 + acpi_db_execute_test(acpi_gbl_db_args[1]); 1098 + break; 1099 + 1100 + case CMD_UNLOAD: 1101 + 1102 + acpi_db_unload_acpi_table(acpi_gbl_db_args[1]); 1103 + break; 1104 + #endif 1105 + 1106 + case CMD_EXIT: 1107 + case CMD_QUIT: 1108 + 1109 + if (op) { 1110 + acpi_os_printf("Method execution terminated\n"); 1111 + return (AE_CTRL_TERMINATE); 1112 + } 1113 + 1114 + if (!acpi_gbl_db_output_to_file) { 1115 + acpi_dbg_level = ACPI_DEBUG_DEFAULT; 1116 + } 1117 + #ifdef ACPI_APPLICATION 1118 + acpi_db_close_debug_file(); 1119 + #endif 1120 + acpi_gbl_db_terminate_loop = TRUE; 1121 + return (AE_CTRL_TERMINATE); 1122 + 1123 + case CMD_NOT_FOUND: 1124 + default: 1125 + 1126 + acpi_os_printf("%s: unknown command\n", acpi_gbl_db_args[0]); 1127 + return (AE_CTRL_TRUE); 1128 + } 1129 + 1130 + if (ACPI_SUCCESS(status)) { 1131 + status = AE_CTRL_TRUE; 1132 + } 1133 + 1134 + return (status); 1135 
+ } 1136 + 1137 + /******************************************************************************* 1138 + * 1139 + * FUNCTION: acpi_db_execute_thread 1140 + * 1141 + * PARAMETERS: context - Not used 1142 + * 1143 + * RETURN: None 1144 + * 1145 + * DESCRIPTION: Debugger execute thread. Waits for a command line, then 1146 + * simply dispatches it. 1147 + * 1148 + ******************************************************************************/ 1149 + 1150 + void ACPI_SYSTEM_XFACE acpi_db_execute_thread(void *context) 1151 + { 1152 + acpi_status status = AE_OK; 1153 + acpi_status Mstatus; 1154 + 1155 + while (status != AE_CTRL_TERMINATE && !acpi_gbl_db_terminate_loop) { 1156 + acpi_gbl_method_executing = FALSE; 1157 + acpi_gbl_step_to_next_call = FALSE; 1158 + 1159 + Mstatus = acpi_os_acquire_mutex(acpi_gbl_db_command_ready, 1160 + ACPI_WAIT_FOREVER); 1161 + if (ACPI_FAILURE(Mstatus)) { 1162 + return; 1163 + } 1164 + 1165 + status = 1166 + acpi_db_command_dispatch(acpi_gbl_db_line_buf, NULL, NULL); 1167 + 1168 + acpi_os_release_mutex(acpi_gbl_db_command_complete); 1169 + } 1170 + acpi_gbl_db_threads_terminated = TRUE; 1171 + } 1172 + 1173 + /******************************************************************************* 1174 + * 1175 + * FUNCTION: acpi_db_single_thread 1176 + * 1177 + * PARAMETERS: None 1178 + * 1179 + * RETURN: None 1180 + * 1181 + * DESCRIPTION: Debugger execute thread. Waits for a command line, then 1182 + * simply dispatches it. 
1183 + * 1184 + ******************************************************************************/ 1185 + 1186 + static void acpi_db_single_thread(void) 1187 + { 1188 + 1189 + acpi_gbl_method_executing = FALSE; 1190 + acpi_gbl_step_to_next_call = FALSE; 1191 + 1192 + (void)acpi_db_command_dispatch(acpi_gbl_db_line_buf, NULL, NULL); 1193 + } 1194 + 1195 + /******************************************************************************* 1196 + * 1197 + * FUNCTION: acpi_db_user_commands 1198 + * 1199 + * PARAMETERS: prompt - User prompt (depends on mode) 1200 + * op - Current executing parse op 1201 + * 1202 + * RETURN: None 1203 + * 1204 + * DESCRIPTION: Command line execution for the AML debugger. Commands are 1205 + * matched and dispatched here. 1206 + * 1207 + ******************************************************************************/ 1208 + 1209 + acpi_status acpi_db_user_commands(char prompt, union acpi_parse_object *op) 1210 + { 1211 + acpi_status status = AE_OK; 1212 + 1213 + acpi_os_printf("\n"); 1214 + 1215 + /* TBD: [Restructure] Need a separate command line buffer for step mode */ 1216 + 1217 + while (!acpi_gbl_db_terminate_loop) { 1218 + 1219 + /* Force output to console until a command is entered */ 1220 + 1221 + acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT); 1222 + 1223 + /* Different prompt if method is executing */ 1224 + 1225 + if (!acpi_gbl_method_executing) { 1226 + acpi_os_printf("%1c ", ACPI_DEBUGGER_COMMAND_PROMPT); 1227 + } else { 1228 + acpi_os_printf("%1c ", ACPI_DEBUGGER_EXECUTE_PROMPT); 1229 + } 1230 + 1231 + /* Get the user input line */ 1232 + 1233 + status = acpi_os_get_line(acpi_gbl_db_line_buf, 1234 + ACPI_DB_LINE_BUFFER_SIZE, NULL); 1235 + if (ACPI_FAILURE(status)) { 1236 + ACPI_EXCEPTION((AE_INFO, status, 1237 + "While parsing command line")); 1238 + return (status); 1239 + } 1240 + 1241 + /* Check for single or multithreaded debug */ 1242 + 1243 + if (acpi_gbl_debugger_configuration & DEBUGGER_MULTI_THREADED) { 1244 + /* 
1245 + * Signal the debug thread that we have a command to execute, 1246 + * and wait for the command to complete. 1247 + */ 1248 + acpi_os_release_mutex(acpi_gbl_db_command_ready); 1249 + if (ACPI_FAILURE(status)) { 1250 + return (status); 1251 + } 1252 + 1253 + status = 1254 + acpi_os_acquire_mutex(acpi_gbl_db_command_complete, 1255 + ACPI_WAIT_FOREVER); 1256 + if (ACPI_FAILURE(status)) { 1257 + return (status); 1258 + } 1259 + } else { 1260 + /* Just call to the command line interpreter */ 1261 + 1262 + acpi_db_single_thread(); 1263 + } 1264 + } 1265 + 1266 + return (status); 1267 + }
+369
drivers/acpi/acpica/dbmethod.c
··· 1 + /******************************************************************************* 2 + * 3 + * Module Name: dbmethod - Debug commands for control methods 4 + * 5 + ******************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + #include "acdispat.h" 47 + #include "acnamesp.h" 48 + #include "acdebug.h" 49 + #include "acparser.h" 50 + #include "acpredef.h" 51 + 52 + #define _COMPONENT ACPI_CA_DEBUGGER 53 + ACPI_MODULE_NAME("dbmethod") 54 + 55 + /******************************************************************************* 56 + * 57 + * FUNCTION: acpi_db_set_method_breakpoint 58 + * 59 + * PARAMETERS: location - AML offset of breakpoint 60 + * walk_state - Current walk info 61 + * op - Current Op (from parse walk) 62 + * 63 + * RETURN: None 64 + * 65 + * DESCRIPTION: Set a breakpoint in a control method at the specified 66 + * AML offset 67 + * 68 + ******************************************************************************/ 69 + void 70 + acpi_db_set_method_breakpoint(char *location, 71 + struct acpi_walk_state *walk_state, 72 + union acpi_parse_object *op) 73 + { 74 + u32 address; 75 + u32 aml_offset; 76 + 77 + if (!op) { 78 + acpi_os_printf("There is no method currently executing\n"); 79 + return; 80 + } 81 + 82 + /* Get and verify the breakpoint address */ 83 + 84 + address = strtoul(location, NULL, 16); 85 + aml_offset = (u32)ACPI_PTR_DIFF(op->common.aml, 86 + walk_state->parser_state.aml_start); 87 + if (address <= aml_offset) { 88 + acpi_os_printf("Breakpoint %X is beyond current address %X\n", 89 + address, aml_offset); 90 + } 91 + 92 + /* Save breakpoint in current walk */ 93 + 94 + 
walk_state->user_breakpoint = address; 95 + acpi_os_printf("Breakpoint set at AML offset %X\n", address); 96 + } 97 + 98 + /******************************************************************************* 99 + * 100 + * FUNCTION: acpi_db_set_method_call_breakpoint 101 + * 102 + * PARAMETERS: op - Current Op (from parse walk) 103 + * 104 + * RETURN: None 105 + * 106 + * DESCRIPTION: Set a breakpoint to occur at the next control method 107 + * invocation (single-step over calls) 108 + * 109 + ******************************************************************************/ 110 + 111 + void acpi_db_set_method_call_breakpoint(union acpi_parse_object *op) 112 + { 113 + 114 + if (!op) { 115 + acpi_os_printf("There is no method currently executing\n"); 116 + return; 117 + } 118 + 119 + acpi_gbl_step_to_next_call = TRUE; 120 + } 121 + 122 + /******************************************************************************* 123 + * 124 + * FUNCTION: acpi_db_set_method_data 125 + * 126 + * PARAMETERS: type_arg - L for local, A for argument, N for 127 + * namespace node. index_arg - which one 128 + * value_arg - Value to set. 129 + * 130 + * RETURN: None 131 + * 132 + * DESCRIPTION: Set a local or argument for the running control method. 133 + * NOTE: The only object type supported is Integer.
134 + * 135 + ******************************************************************************/ 136 + 137 + void acpi_db_set_method_data(char *type_arg, char *index_arg, char *value_arg) 138 + { 139 + char type; 140 + u32 index; 141 + u32 value; 142 + struct acpi_walk_state *walk_state; 143 + union acpi_operand_object *obj_desc; 144 + acpi_status status; 145 + struct acpi_namespace_node *node; 146 + 147 + /* Validate type_arg */ 148 + 149 + acpi_ut_strupr(type_arg); 150 + type = type_arg[0]; 151 + if ((type != 'L') && (type != 'A') && (type != 'N')) { 152 + acpi_os_printf("Invalid SET operand: %s\n", type_arg); 153 + return; 154 + } 155 + 156 + value = strtoul(value_arg, NULL, 16); 157 + 158 + if (type == 'N') { 159 + node = acpi_db_convert_to_node(index_arg); 160 + if (!node) { 161 + return; 162 + } 163 + 164 + if (node->type != ACPI_TYPE_INTEGER) { 165 + acpi_os_printf("Can only set Integer nodes\n"); 166 + return; 167 + } 168 + obj_desc = node->object; 169 + obj_desc->integer.value = value; 170 + return; 171 + } 172 + 173 + /* Get the index and value */ 174 + 175 + index = strtoul(index_arg, NULL, 16); 176 + 177 + walk_state = acpi_ds_get_current_walk_state(acpi_gbl_current_walk_list); 178 + if (!walk_state) { 179 + acpi_os_printf("There is no method currently executing\n"); 180 + return; 181 + } 182 + 183 + /* Create and initialize the new object */ 184 + 185 + obj_desc = acpi_ut_create_integer_object((u64)value); 186 + if (!obj_desc) { 187 + acpi_os_printf("Could not create an internal object\n"); 188 + return; 189 + } 190 + 191 + /* Store the new object into the target */ 192 + 193 + switch (type) { 194 + case 'A': 195 + 196 + /* Set a method argument */ 197 + 198 + if (index > ACPI_METHOD_MAX_ARG) { 199 + acpi_os_printf("Arg%u - Invalid argument name\n", 200 + index); 201 + goto cleanup; 202 + } 203 + 204 + status = acpi_ds_store_object_to_local(ACPI_REFCLASS_ARG, 205 + index, obj_desc, 206 + walk_state); 207 + if (ACPI_FAILURE(status)) { 208 + goto cleanup; 
209 + } 210 + 211 + obj_desc = walk_state->arguments[index].object; 212 + 213 + acpi_os_printf("Arg%u: ", index); 214 + acpi_db_display_internal_object(obj_desc, walk_state); 215 + break; 216 + 217 + case 'L': 218 + 219 + /* Set a method local */ 220 + 221 + if (index > ACPI_METHOD_MAX_LOCAL) { 222 + acpi_os_printf 223 + ("Local%u - Invalid local variable name\n", index); 224 + goto cleanup; 225 + } 226 + 227 + status = acpi_ds_store_object_to_local(ACPI_REFCLASS_LOCAL, 228 + index, obj_desc, 229 + walk_state); 230 + if (ACPI_FAILURE(status)) { 231 + goto cleanup; 232 + } 233 + 234 + obj_desc = walk_state->local_variables[index].object; 235 + 236 + acpi_os_printf("Local%u: ", index); 237 + acpi_db_display_internal_object(obj_desc, walk_state); 238 + break; 239 + 240 + default: 241 + 242 + break; 243 + } 244 + 245 + cleanup: 246 + acpi_ut_remove_reference(obj_desc); 247 + } 248 + 249 + /******************************************************************************* 250 + * 251 + * FUNCTION: acpi_db_disassemble_aml 252 + * 253 + * PARAMETERS: statements - Number of statements to disassemble 254 + * op - Current Op (from parse walk) 255 + * 256 + * RETURN: None 257 + * 258 + * DESCRIPTION: Display disassembled AML (ASL) starting from Op for the number 259 + * of statements specified. 
260 + * 261 + ******************************************************************************/ 262 + 263 + void acpi_db_disassemble_aml(char *statements, union acpi_parse_object *op) 264 + { 265 + u32 num_statements = 8; 266 + 267 + if (!op) { 268 + acpi_os_printf("There is no method currently executing\n"); 269 + return; 270 + } 271 + 272 + if (statements) { 273 + num_statements = strtoul(statements, NULL, 0); 274 + } 275 + #ifdef ACPI_DISASSEMBLER 276 + acpi_dm_disassemble(NULL, op, num_statements); 277 + #endif 278 + } 279 + 280 + /******************************************************************************* 281 + * 282 + * FUNCTION: acpi_db_disassemble_method 283 + * 284 + * PARAMETERS: name - Name of control method 285 + * 286 + * RETURN: Status 287 + * 288 + * DESCRIPTION: Fully parse and then display the disassembled AML (ASL) 289 + * for the specified control method. 290 + * 291 + ******************************************************************************/ 292 + 293 + acpi_status acpi_db_disassemble_method(char *name) 294 + { 295 + acpi_status status; 296 + union acpi_parse_object *op; 297 + struct acpi_walk_state *walk_state; 298 + union acpi_operand_object *obj_desc; 299 + struct acpi_namespace_node *method; 300 + 301 + method = acpi_db_convert_to_node(name); 302 + if (!method) { 303 + return (AE_BAD_PARAMETER); 304 + } 305 + 306 + if (method->type != ACPI_TYPE_METHOD) { 307 + ACPI_ERROR((AE_INFO, "%s (%s): Object must be a control method", 308 + name, acpi_ut_get_type_name(method->type))); 309 + return (AE_BAD_PARAMETER); 310 + } 311 + 312 + obj_desc = method->object; 313 + 314 + op = acpi_ps_create_scope_op(obj_desc->method.aml_start); 315 + if (!op) { 316 + return (AE_NO_MEMORY); 317 + } 318 + 319 + /* Create and initialize a new walk state */ 320 + 321 + walk_state = acpi_ds_create_walk_state(0, op, NULL, NULL); 322 + if (!walk_state) { 323 + return (AE_NO_MEMORY); 324 + } 325 + 326 + status = acpi_ds_init_aml_walk(walk_state, op, NULL, 327 +
obj_desc->method.aml_start, 328 + obj_desc->method.aml_length, NULL, 329 + ACPI_IMODE_LOAD_PASS1); 330 + if (ACPI_FAILURE(status)) { 331 + return (status); 332 + } 333 + 334 + status = acpi_ut_allocate_owner_id(&obj_desc->method.owner_id); 335 + walk_state->owner_id = obj_desc->method.owner_id; 336 + 337 + /* Push start scope on scope stack and make it current */ 338 + 339 + status = acpi_ds_scope_stack_push(method, method->type, walk_state); 340 + if (ACPI_FAILURE(status)) { 341 + return (status); 342 + } 343 + 344 + /* Parse the entire method AML including deferred operators */ 345 + 346 + walk_state->parse_flags &= ~ACPI_PARSE_DELETE_TREE; 347 + walk_state->parse_flags |= ACPI_PARSE_DISASSEMBLE; 348 + 349 + status = acpi_ps_parse_aml(walk_state); 350 + 351 + #ifdef ACPI_DISASSEMBLER 352 + (void)acpi_dm_parse_deferred_ops(op); 353 + 354 + /* Now we can disassemble the method */ 355 + 356 + acpi_gbl_dm_opt_verbose = FALSE; 357 + acpi_dm_disassemble(NULL, op, 0); 358 + acpi_gbl_dm_opt_verbose = TRUE; 359 + #endif 360 + 361 + acpi_ps_delete_parse_tree(op); 362 + 363 + /* Method cleanup */ 364 + 365 + acpi_ns_delete_namespace_subtree(method); 366 + acpi_ns_delete_namespace_by_owner(obj_desc->method.owner_id); 367 + acpi_ut_release_owner_id(&obj_desc->method.owner_id); 368 + return (AE_OK); 369 + }
drivers/acpi/acpica/dbnames.c (947 lines added)
··· 1 + /******************************************************************************* 2 + * 3 + * Module Name: dbnames - Debugger commands for the acpi namespace 4 + * 5 + ******************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + #include "acnamesp.h" 47 + #include "acdebug.h" 48 + #include "acpredef.h" 49 + 50 + #define _COMPONENT ACPI_CA_DEBUGGER 51 + ACPI_MODULE_NAME("dbnames") 52 + 53 + /* Local prototypes */ 54 + static acpi_status 55 + acpi_db_walk_and_match_name(acpi_handle obj_handle, 56 + u32 nesting_level, 57 + void *context, void **return_value); 58 + 59 + static acpi_status 60 + acpi_db_walk_for_predefined_names(acpi_handle obj_handle, 61 + u32 nesting_level, 62 + void *context, void **return_value); 63 + 64 + static acpi_status 65 + acpi_db_walk_for_specific_objects(acpi_handle obj_handle, 66 + u32 nesting_level, 67 + void *context, void **return_value); 68 + 69 + static acpi_status 70 + acpi_db_walk_for_object_counts(acpi_handle obj_handle, 71 + u32 nesting_level, 72 + void *context, void **return_value); 73 + 74 + static acpi_status 75 + acpi_db_integrity_walk(acpi_handle obj_handle, 76 + u32 nesting_level, void *context, void **return_value); 77 + 78 + static acpi_status 79 + acpi_db_walk_for_references(acpi_handle obj_handle, 80 + u32 nesting_level, 81 + void *context, void **return_value); 82 + 83 + static acpi_status 84 + acpi_db_bus_walk(acpi_handle obj_handle, 85 + u32 nesting_level, void *context, void **return_value); 86 + 87 + /* 88 + * Arguments for the Objects command 89 + * These object types map directly to the ACPI_TYPES 90 + */ 91 + static struct acpi_db_argument_info 
acpi_db_object_types[] = { 92 + {"ANY"}, 93 + {"INTEGERS"}, 94 + {"STRINGS"}, 95 + {"BUFFERS"}, 96 + {"PACKAGES"}, 97 + {"FIELDS"}, 98 + {"DEVICES"}, 99 + {"EVENTS"}, 100 + {"METHODS"}, 101 + {"MUTEXES"}, 102 + {"REGIONS"}, 103 + {"POWERRESOURCES"}, 104 + {"PROCESSORS"}, 105 + {"THERMALZONES"}, 106 + {"BUFFERFIELDS"}, 107 + {"DDBHANDLES"}, 108 + {"DEBUG"}, 109 + {"REGIONFIELDS"}, 110 + {"BANKFIELDS"}, 111 + {"INDEXFIELDS"}, 112 + {"REFERENCES"}, 113 + {"ALIASES"}, 114 + {"METHODALIASES"}, 115 + {"NOTIFY"}, 116 + {"ADDRESSHANDLER"}, 117 + {"RESOURCE"}, 118 + {"RESOURCEFIELD"}, 119 + {"SCOPES"}, 120 + {NULL} /* Must be null terminated */ 121 + }; 122 + 123 + /******************************************************************************* 124 + * 125 + * FUNCTION: acpi_db_set_scope 126 + * 127 + * PARAMETERS: name - New scope path 128 + * 129 + * RETURN: Status 130 + * 131 + * DESCRIPTION: Set the "current scope" as maintained by this utility. 132 + * The scope is used as a prefix to ACPI paths. 
133 + * 134 + ******************************************************************************/ 135 + 136 + void acpi_db_set_scope(char *name) 137 + { 138 + acpi_status status; 139 + struct acpi_namespace_node *node; 140 + 141 + if (!name || name[0] == 0) { 142 + acpi_os_printf("Current scope: %s\n", acpi_gbl_db_scope_buf); 143 + return; 144 + } 145 + 146 + acpi_db_prep_namestring(name); 147 + 148 + if (ACPI_IS_ROOT_PREFIX(name[0])) { 149 + 150 + /* Validate new scope from the root */ 151 + 152 + status = acpi_ns_get_node(acpi_gbl_root_node, name, 153 + ACPI_NS_NO_UPSEARCH, &node); 154 + if (ACPI_FAILURE(status)) { 155 + goto error_exit; 156 + } 157 + 158 + acpi_gbl_db_scope_buf[0] = 0; 159 + } else { 160 + /* Validate new scope relative to old scope */ 161 + 162 + status = acpi_ns_get_node(acpi_gbl_db_scope_node, name, 163 + ACPI_NS_NO_UPSEARCH, &node); 164 + if (ACPI_FAILURE(status)) { 165 + goto error_exit; 166 + } 167 + } 168 + 169 + /* Build the final pathname */ 170 + 171 + if (acpi_ut_safe_strcat 172 + (acpi_gbl_db_scope_buf, sizeof(acpi_gbl_db_scope_buf), name)) { 173 + status = AE_BUFFER_OVERFLOW; 174 + goto error_exit; 175 + } 176 + 177 + if (acpi_ut_safe_strcat 178 + (acpi_gbl_db_scope_buf, sizeof(acpi_gbl_db_scope_buf), "\\")) { 179 + status = AE_BUFFER_OVERFLOW; 180 + goto error_exit; 181 + } 182 + 183 + acpi_gbl_db_scope_node = node; 184 + acpi_os_printf("New scope: %s\n", acpi_gbl_db_scope_buf); 185 + return; 186 + 187 + error_exit: 188 + 189 + acpi_os_printf("Could not attach scope: %s, %s\n", 190 + name, acpi_format_exception(status)); 191 + } 192 + 193 + /******************************************************************************* 194 + * 195 + * FUNCTION: acpi_db_dump_namespace 196 + * 197 + * PARAMETERS: start_arg - Node to begin namespace dump 198 + * depth_arg - Maximum tree depth to be dumped 199 + * 200 + * RETURN: None 201 + * 202 + * DESCRIPTION: Dump entire namespace or a subtree. 
Each node is displayed 203 + * with type and other information. 204 + * 205 + ******************************************************************************/ 206 + 207 + void acpi_db_dump_namespace(char *start_arg, char *depth_arg) 208 + { 209 + acpi_handle subtree_entry = acpi_gbl_root_node; 210 + u32 max_depth = ACPI_UINT32_MAX; 211 + 212 + /* No argument given, just start at the root and dump entire namespace */ 213 + 214 + if (start_arg) { 215 + subtree_entry = acpi_db_convert_to_node(start_arg); 216 + if (!subtree_entry) { 217 + return; 218 + } 219 + 220 + /* Now we can check for the depth argument */ 221 + 222 + if (depth_arg) { 223 + max_depth = strtoul(depth_arg, NULL, 0); 224 + } 225 + } 226 + 227 + acpi_db_set_output_destination(ACPI_DB_DUPLICATE_OUTPUT); 228 + acpi_os_printf("ACPI Namespace (from %4.4s (%p) subtree):\n", 229 + ((struct acpi_namespace_node *)subtree_entry)->name. 230 + ascii, subtree_entry); 231 + 232 + /* Display the subtree */ 233 + 234 + acpi_db_set_output_destination(ACPI_DB_REDIRECTABLE_OUTPUT); 235 + acpi_ns_dump_objects(ACPI_TYPE_ANY, ACPI_DISPLAY_SUMMARY, max_depth, 236 + ACPI_OWNER_ID_MAX, subtree_entry); 237 + acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT); 238 + } 239 + 240 + /******************************************************************************* 241 + * 242 + * FUNCTION: acpi_db_dump_namespace_paths 243 + * 244 + * PARAMETERS: None 245 + * 246 + * RETURN: None 247 + * 248 + * DESCRIPTION: Dump entire namespace with full object pathnames and object 249 + * type information. Alternative to "namespace" command. 
250 + * 251 + ******************************************************************************/ 252 + 253 + void acpi_db_dump_namespace_paths(void) 254 + { 255 + 256 + acpi_db_set_output_destination(ACPI_DB_DUPLICATE_OUTPUT); 257 + acpi_os_printf("ACPI Namespace (from root):\n"); 258 + 259 + /* Display the entire namespace */ 260 + 261 + acpi_db_set_output_destination(ACPI_DB_REDIRECTABLE_OUTPUT); 262 + acpi_ns_dump_object_paths(ACPI_TYPE_ANY, ACPI_DISPLAY_SUMMARY, 263 + ACPI_UINT32_MAX, ACPI_OWNER_ID_MAX, 264 + acpi_gbl_root_node); 265 + 266 + acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT); 267 + } 268 + 269 + /******************************************************************************* 270 + * 271 + * FUNCTION: acpi_db_dump_namespace_by_owner 272 + * 273 + * PARAMETERS: owner_arg - Owner ID whose nodes will be displayed 274 + * depth_arg - Maximum tree depth to be dumped 275 + * 276 + * RETURN: None 277 + * 278 + * DESCRIPTION: Dump elements of the namespace that are owned by the owner_id. 
279 + * 280 + ******************************************************************************/ 281 + 282 + void acpi_db_dump_namespace_by_owner(char *owner_arg, char *depth_arg) 283 + { 284 + acpi_handle subtree_entry = acpi_gbl_root_node; 285 + u32 max_depth = ACPI_UINT32_MAX; 286 + acpi_owner_id owner_id; 287 + 288 + owner_id = (acpi_owner_id) strtoul(owner_arg, NULL, 0); 289 + 290 + /* Now we can check for the depth argument */ 291 + 292 + if (depth_arg) { 293 + max_depth = strtoul(depth_arg, NULL, 0); 294 + } 295 + 296 + acpi_db_set_output_destination(ACPI_DB_DUPLICATE_OUTPUT); 297 + acpi_os_printf("ACPI Namespace by owner %X:\n", owner_id); 298 + 299 + /* Display the subtree */ 300 + 301 + acpi_db_set_output_destination(ACPI_DB_REDIRECTABLE_OUTPUT); 302 + acpi_ns_dump_objects(ACPI_TYPE_ANY, ACPI_DISPLAY_SUMMARY, max_depth, 303 + owner_id, subtree_entry); 304 + acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT); 305 + } 306 + 307 + /******************************************************************************* 308 + * 309 + * FUNCTION: acpi_db_walk_and_match_name 310 + * 311 + * PARAMETERS: Callback from walk_namespace 312 + * 313 + * RETURN: Status 314 + * 315 + * DESCRIPTION: Find a particular name/names within the namespace. Wildcards 316 + * are supported -- '?' matches any character. 
317 + * 318 + ******************************************************************************/ 319 + 320 + static acpi_status 321 + acpi_db_walk_and_match_name(acpi_handle obj_handle, 322 + u32 nesting_level, 323 + void *context, void **return_value) 324 + { 325 + acpi_status status; 326 + char *requested_name = (char *)context; 327 + u32 i; 328 + struct acpi_buffer buffer; 329 + struct acpi_walk_info info; 330 + 331 + /* Check for a name match */ 332 + 333 + for (i = 0; i < 4; i++) { 334 + 335 + /* Wildcard support */ 336 + 337 + if ((requested_name[i] != '?') && 338 + (requested_name[i] != ((struct acpi_namespace_node *) 339 + obj_handle)->name.ascii[i])) { 340 + 341 + /* No match, just exit */ 342 + 343 + return (AE_OK); 344 + } 345 + } 346 + 347 + /* Get the full pathname to this object */ 348 + 349 + buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER; 350 + status = acpi_ns_handle_to_pathname(obj_handle, &buffer, TRUE); 351 + if (ACPI_FAILURE(status)) { 352 + acpi_os_printf("Could Not get pathname for object %p\n", 353 + obj_handle); 354 + } else { 355 + info.owner_id = ACPI_OWNER_ID_MAX; 356 + info.debug_level = ACPI_UINT32_MAX; 357 + info.display_type = ACPI_DISPLAY_SUMMARY | ACPI_DISPLAY_SHORT; 358 + 359 + acpi_os_printf("%32s", (char *)buffer.pointer); 360 + (void)acpi_ns_dump_one_object(obj_handle, nesting_level, &info, 361 + NULL); 362 + ACPI_FREE(buffer.pointer); 363 + } 364 + 365 + return (AE_OK); 366 + } 367 + 368 + /******************************************************************************* 369 + * 370 + * FUNCTION: acpi_db_find_name_in_namespace 371 + * 372 + * PARAMETERS: name_arg - The 4-character ACPI name to find. 373 + * wildcards are supported. 
374 + * 375 + * RETURN: None 376 + * 377 + * DESCRIPTION: Search the namespace for a given name (with wildcards) 378 + * 379 + ******************************************************************************/ 380 + 381 + acpi_status acpi_db_find_name_in_namespace(char *name_arg) 382 + { 383 + char acpi_name[5] = "____"; 384 + char *acpi_name_ptr = acpi_name; 385 + 386 + if (strlen(name_arg) > ACPI_NAME_SIZE) { 387 + acpi_os_printf("Name must be no longer than 4 characters\n"); 388 + return (AE_OK); 389 + } 390 + 391 + /* Pad out name with underscores as necessary to create a 4-char name */ 392 + 393 + acpi_ut_strupr(name_arg); 394 + while (*name_arg) { 395 + *acpi_name_ptr = *name_arg; 396 + acpi_name_ptr++; 397 + name_arg++; 398 + } 399 + 400 + /* Walk the namespace from the root */ 401 + 402 + (void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT, 403 + ACPI_UINT32_MAX, acpi_db_walk_and_match_name, 404 + NULL, acpi_name, NULL); 405 + 406 + acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT); 407 + return (AE_OK); 408 + } 409 + 410 + /******************************************************************************* 411 + * 412 + * FUNCTION: acpi_db_walk_for_predefined_names 413 + * 414 + * PARAMETERS: Callback from walk_namespace 415 + * 416 + * RETURN: Status 417 + * 418 + * DESCRIPTION: Detect and display predefined ACPI names (names that start with 419 + * an underscore) 420 + * 421 + ******************************************************************************/ 422 + 423 + static acpi_status 424 + acpi_db_walk_for_predefined_names(acpi_handle obj_handle, 425 + u32 nesting_level, 426 + void *context, void **return_value) 427 + { 428 + struct acpi_namespace_node *node = 429 + (struct acpi_namespace_node *)obj_handle; 430 + u32 *count = (u32 *)context; 431 + const union acpi_predefined_info *predefined; 432 + const union acpi_predefined_info *package = NULL; 433 + char *pathname; 434 + char string_buffer[48]; 435 + 436 + predefined = 
acpi_ut_match_predefined_method(node->name.ascii); 437 + if (!predefined) { 438 + return (AE_OK); 439 + } 440 + 441 + pathname = acpi_ns_get_external_pathname(node); 442 + if (!pathname) { 443 + return (AE_OK); 444 + } 445 + 446 + /* If method returns a package, the info is in the next table entry */ 447 + 448 + if (predefined->info.expected_btypes & ACPI_RTYPE_PACKAGE) { 449 + package = predefined + 1; 450 + } 451 + 452 + acpi_ut_get_expected_return_types(string_buffer, 453 + predefined->info.expected_btypes); 454 + 455 + acpi_os_printf("%-32s Arguments %X, Return Types: %s", pathname, 456 + METHOD_GET_ARG_COUNT(predefined->info.argument_list), 457 + string_buffer); 458 + 459 + if (package) { 460 + acpi_os_printf(" (PkgType %2.2X, ObjType %2.2X, Count %2.2X)", 461 + package->ret_info.type, 462 + package->ret_info.object_type1, 463 + package->ret_info.count1); 464 + } 465 + 466 + acpi_os_printf("\n"); 467 + 468 + /* Check that the declared argument count matches the ACPI spec */ 469 + 470 + acpi_ns_check_acpi_compliance(pathname, node, predefined); 471 + 472 + ACPI_FREE(pathname); 473 + (*count)++; 474 + return (AE_OK); 475 + } 476 + 477 + /******************************************************************************* 478 + * 479 + * FUNCTION: acpi_db_check_predefined_names 480 + * 481 + * PARAMETERS: None 482 + * 483 + * RETURN: None 484 + * 485 + * DESCRIPTION: Validate all predefined names in the namespace 486 + * 487 + ******************************************************************************/ 488 + 489 + void acpi_db_check_predefined_names(void) 490 + { 491 + u32 count = 0; 492 + 493 + /* Search all nodes in namespace */ 494 + 495 + (void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT, 496 + ACPI_UINT32_MAX, 497 + acpi_db_walk_for_predefined_names, NULL, 498 + (void *)&count, NULL); 499 + 500 + acpi_os_printf("Found %u predefined names in the namespace\n", count); 501 + } 502 + 503 + 
/******************************************************************************* 504 + * 505 + * FUNCTION: acpi_db_walk_for_object_counts 506 + * 507 + * PARAMETERS: Callback from walk_namespace 508 + * 509 + * RETURN: Status 510 + * 511 + * DESCRIPTION: Display short info about objects in the namespace 512 + * 513 + ******************************************************************************/ 514 + 515 + static acpi_status 516 + acpi_db_walk_for_object_counts(acpi_handle obj_handle, 517 + u32 nesting_level, 518 + void *context, void **return_value) 519 + { 520 + struct acpi_object_info *info = (struct acpi_object_info *)context; 521 + struct acpi_namespace_node *node = 522 + (struct acpi_namespace_node *)obj_handle; 523 + 524 + if (node->type > ACPI_TYPE_NS_NODE_MAX) { 525 + acpi_os_printf("[%4.4s]: Unknown object type %X\n", 526 + node->name.ascii, node->type); 527 + } else { 528 + info->types[node->type]++; 529 + } 530 + 531 + return (AE_OK); 532 + } 533 + 534 + /******************************************************************************* 535 + * 536 + * FUNCTION: acpi_db_walk_for_specific_objects 537 + * 538 + * PARAMETERS: Callback from walk_namespace 539 + * 540 + * RETURN: Status 541 + * 542 + * DESCRIPTION: Display short info about objects in the namespace 543 + * 544 + ******************************************************************************/ 545 + 546 + static acpi_status 547 + acpi_db_walk_for_specific_objects(acpi_handle obj_handle, 548 + u32 nesting_level, 549 + void *context, void **return_value) 550 + { 551 + struct acpi_walk_info *info = (struct acpi_walk_info *)context; 552 + struct acpi_buffer buffer; 553 + acpi_status status; 554 + 555 + info->count++; 556 + 557 + /* Get and display the full pathname to this object */ 558 + 559 + buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER; 560 + status = acpi_ns_handle_to_pathname(obj_handle, &buffer, TRUE); 561 + if (ACPI_FAILURE(status)) { 562 + acpi_os_printf("Could Not get pathname for object %p\n", 
563 + obj_handle); 564 + return (AE_OK); 565 + } 566 + 567 + acpi_os_printf("%32s", (char *)buffer.pointer); 568 + ACPI_FREE(buffer.pointer); 569 + 570 + /* Dump short info about the object */ 571 + 572 + (void)acpi_ns_dump_one_object(obj_handle, nesting_level, info, NULL); 573 + return (AE_OK); 574 + } 575 + 576 + /******************************************************************************* 577 + * 578 + * FUNCTION: acpi_db_display_objects 579 + * 580 + * PARAMETERS: obj_type_arg - Type of object to display 581 + * display_count_arg - Max depth to display 582 + * 583 + * RETURN: None 584 + * 585 + * DESCRIPTION: Display objects in the namespace of the requested type 586 + * 587 + ******************************************************************************/ 588 + 589 + acpi_status acpi_db_display_objects(char *obj_type_arg, char *display_count_arg) 590 + { 591 + struct acpi_walk_info info; 592 + acpi_object_type type; 593 + struct acpi_object_info *object_info; 594 + u32 i; 595 + u32 total_objects = 0; 596 + 597 + /* No argument means display summary/count of all object types */ 598 + 599 + if (!obj_type_arg) { 600 + object_info = 601 + ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_object_info)); 602 + 603 + /* Walk the namespace from the root */ 604 + 605 + (void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT, 606 + ACPI_UINT32_MAX, 607 + acpi_db_walk_for_object_counts, NULL, 608 + (void *)object_info, NULL); 609 + 610 + acpi_os_printf("\nSummary of namespace objects:\n\n"); 611 + 612 + for (i = 0; i < ACPI_TOTAL_TYPES; i++) { 613 + acpi_os_printf("%8u %s\n", object_info->types[i], 614 + acpi_ut_get_type_name(i)); 615 + 616 + total_objects += object_info->types[i]; 617 + } 618 + 619 + acpi_os_printf("\n%8u Total namespace objects\n\n", 620 + total_objects); 621 + 622 + ACPI_FREE(object_info); 623 + return (AE_OK); 624 + } 625 + 626 + /* Get the object type */ 627 + 628 + type = acpi_db_match_argument(obj_type_arg, acpi_db_object_types); 629 + if (type == 
ACPI_TYPE_NOT_FOUND) { 630 + acpi_os_printf("Invalid or unsupported argument\n"); 631 + return (AE_OK); 632 + } 633 + 634 + acpi_db_set_output_destination(ACPI_DB_DUPLICATE_OUTPUT); 635 + acpi_os_printf 636 + ("Objects of type [%s] defined in the current ACPI Namespace:\n", 637 + acpi_ut_get_type_name(type)); 638 + 639 + acpi_db_set_output_destination(ACPI_DB_REDIRECTABLE_OUTPUT); 640 + 641 + info.count = 0; 642 + info.owner_id = ACPI_OWNER_ID_MAX; 643 + info.debug_level = ACPI_UINT32_MAX; 644 + info.display_type = ACPI_DISPLAY_SUMMARY | ACPI_DISPLAY_SHORT; 645 + 646 + /* Walk the namespace from the root */ 647 + 648 + (void)acpi_walk_namespace(type, ACPI_ROOT_OBJECT, ACPI_UINT32_MAX, 649 + acpi_db_walk_for_specific_objects, NULL, 650 + (void *)&info, NULL); 651 + 652 + acpi_os_printf 653 + ("\nFound %u objects of type [%s] in the current ACPI Namespace\n", 654 + info.count, acpi_ut_get_type_name(type)); 655 + 656 + acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT); 657 + return (AE_OK); 658 + } 659 + 660 + /******************************************************************************* 661 + * 662 + * FUNCTION: acpi_db_integrity_walk 663 + * 664 + * PARAMETERS: Callback from walk_namespace 665 + * 666 + * RETURN: Status 667 + * 668 + * DESCRIPTION: Examine one NS node for valid values. 
669 + * 670 + ******************************************************************************/ 671 + 672 + static acpi_status 673 + acpi_db_integrity_walk(acpi_handle obj_handle, 674 + u32 nesting_level, void *context, void **return_value) 675 + { 676 + struct acpi_integrity_info *info = 677 + (struct acpi_integrity_info *)context; 678 + struct acpi_namespace_node *node = 679 + (struct acpi_namespace_node *)obj_handle; 680 + union acpi_operand_object *object; 681 + u8 alias = TRUE; 682 + 683 + info->nodes++; 684 + 685 + /* Verify the NS node, and dereference aliases */ 686 + 687 + while (alias) { 688 + if (ACPI_GET_DESCRIPTOR_TYPE(node) != ACPI_DESC_TYPE_NAMED) { 689 + acpi_os_printf 690 + ("Invalid Descriptor Type for Node %p [%s] - " 691 + "is %2.2X should be %2.2X\n", node, 692 + acpi_ut_get_descriptor_name(node), 693 + ACPI_GET_DESCRIPTOR_TYPE(node), 694 + ACPI_DESC_TYPE_NAMED); 695 + return (AE_OK); 696 + } 697 + 698 + if ((node->type == ACPI_TYPE_LOCAL_ALIAS) || 699 + (node->type == ACPI_TYPE_LOCAL_METHOD_ALIAS)) { 700 + node = (struct acpi_namespace_node *)node->object; 701 + } else { 702 + alias = FALSE; 703 + } 704 + } 705 + 706 + if (node->type > ACPI_TYPE_LOCAL_MAX) { 707 + acpi_os_printf("Invalid Object Type for Node %p, Type = %X\n", 708 + node, node->type); 709 + return (AE_OK); 710 + } 711 + 712 + if (!acpi_ut_valid_acpi_name(node->name.ascii)) { 713 + acpi_os_printf("Invalid AcpiName for Node %p\n", node); 714 + return (AE_OK); 715 + } 716 + 717 + object = acpi_ns_get_attached_object(node); 718 + if (object) { 719 + info->objects++; 720 + if (ACPI_GET_DESCRIPTOR_TYPE(object) != ACPI_DESC_TYPE_OPERAND) { 721 + acpi_os_printf 722 + ("Invalid Descriptor Type for Object %p [%s]\n", 723 + object, acpi_ut_get_descriptor_name(object)); 724 + } 725 + } 726 + 727 + return (AE_OK); 728 + } 729 + 730 + /******************************************************************************* 731 + * 732 + * FUNCTION: acpi_db_check_integrity 733 + * 734 + * PARAMETERS: 
None 735 + * 736 + * RETURN: None 737 + * 738 + * DESCRIPTION: Check entire namespace for data structure integrity 739 + * 740 + ******************************************************************************/ 741 + 742 + void acpi_db_check_integrity(void) 743 + { 744 + struct acpi_integrity_info info = { 0, 0 }; 745 + 746 + /* Search all nodes in namespace */ 747 + 748 + (void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT, 749 + ACPI_UINT32_MAX, acpi_db_integrity_walk, NULL, 750 + (void *)&info, NULL); 751 + 752 + acpi_os_printf("Verified %u namespace nodes with %u Objects\n", 753 + info.nodes, info.objects); 754 + } 755 + 756 + /******************************************************************************* 757 + * 758 + * FUNCTION: acpi_db_walk_for_references 759 + * 760 + * PARAMETERS: Callback from walk_namespace 761 + * 762 + * RETURN: Status 763 + * 764 + * DESCRIPTION: Check if this namespace object refers to the target object 765 + * that is passed in as the context value. 
766 + * 767 + * Note: Currently doesn't check subobjects within the Node's object 768 + * 769 + ******************************************************************************/ 770 + 771 + static acpi_status 772 + acpi_db_walk_for_references(acpi_handle obj_handle, 773 + u32 nesting_level, 774 + void *context, void **return_value) 775 + { 776 + union acpi_operand_object *obj_desc = 777 + (union acpi_operand_object *)context; 778 + struct acpi_namespace_node *node = 779 + (struct acpi_namespace_node *)obj_handle; 780 + 781 + /* Check for match against the namespace node itself */ 782 + 783 + if (node == (void *)obj_desc) { 784 + acpi_os_printf("Object is a Node [%4.4s]\n", 785 + acpi_ut_get_node_name(node)); 786 + } 787 + 788 + /* Check for match against the object attached to the node */ 789 + 790 + if (acpi_ns_get_attached_object(node) == obj_desc) { 791 + acpi_os_printf("Reference at Node->Object %p [%4.4s]\n", 792 + node, acpi_ut_get_node_name(node)); 793 + } 794 + 795 + return (AE_OK); 796 + } 797 + 798 + /******************************************************************************* 799 + * 800 + * FUNCTION: acpi_db_find_references 801 + * 802 + * PARAMETERS: object_arg - String with hex value of the object 803 + * 804 + * RETURN: None 805 + * 806 + * DESCRIPTION: Search namespace for all references to the input object 807 + * 808 + ******************************************************************************/ 809 + 810 + void acpi_db_find_references(char *object_arg) 811 + { 812 + union acpi_operand_object *obj_desc; 813 + acpi_size address; 814 + 815 + /* Convert string to object pointer */ 816 + 817 + address = strtoul(object_arg, NULL, 16); 818 + obj_desc = ACPI_TO_POINTER(address); 819 + 820 + /* Search all nodes in namespace */ 821 + 822 + (void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT, 823 + ACPI_UINT32_MAX, acpi_db_walk_for_references, 824 + NULL, (void *)obj_desc, NULL); 825 + } 826 + 827 + 
/******************************************************************************* 828 + * 829 + * FUNCTION: acpi_db_bus_walk 830 + * 831 + * PARAMETERS: Callback from walk_namespace 832 + * 833 + * RETURN: Status 834 + * 835 + * DESCRIPTION: Display info about device objects that have a corresponding 836 + * _PRT method. 837 + * 838 + ******************************************************************************/ 839 + 840 + static acpi_status 841 + acpi_db_bus_walk(acpi_handle obj_handle, 842 + u32 nesting_level, void *context, void **return_value) 843 + { 844 + struct acpi_namespace_node *node = 845 + (struct acpi_namespace_node *)obj_handle; 846 + acpi_status status; 847 + struct acpi_buffer buffer; 848 + struct acpi_namespace_node *temp_node; 849 + struct acpi_device_info *info; 850 + u32 i; 851 + 852 + if ((node->type != ACPI_TYPE_DEVICE) && 853 + (node->type != ACPI_TYPE_PROCESSOR)) { 854 + return (AE_OK); 855 + } 856 + 857 + /* Exit if there is no _PRT under this device */ 858 + 859 + status = acpi_get_handle(node, METHOD_NAME__PRT, 860 + ACPI_CAST_PTR(acpi_handle, &temp_node)); 861 + if (ACPI_FAILURE(status)) { 862 + return (AE_OK); 863 + } 864 + 865 + /* Get the full path to this device object */ 866 + 867 + buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER; 868 + status = acpi_ns_handle_to_pathname(obj_handle, &buffer, TRUE); 869 + if (ACPI_FAILURE(status)) { 870 + acpi_os_printf("Could Not get pathname for object %p\n", 871 + obj_handle); 872 + return (AE_OK); 873 + } 874 + 875 + status = acpi_get_object_info(obj_handle, &info); 876 + if (ACPI_FAILURE(status)) { 877 + return (AE_OK); 878 + } 879 + 880 + /* Display the full path */ 881 + 882 + acpi_os_printf("%-32s Type %X", (char *)buffer.pointer, node->type); 883 + ACPI_FREE(buffer.pointer); 884 + 885 + if (info->flags & ACPI_PCI_ROOT_BRIDGE) { 886 + acpi_os_printf(" - Is PCI Root Bridge"); 887 + } 888 + acpi_os_printf("\n"); 889 + 890 + /* _PRT info */ 891 + 892 + acpi_os_printf("_PRT: %p\n", temp_node); 893 + 
894 + /* Dump _ADR, _HID, _UID, _CID */ 895 + 896 + if (info->valid & ACPI_VALID_ADR) { 897 + acpi_os_printf("_ADR: %8.8X%8.8X\n", 898 + ACPI_FORMAT_UINT64(info->address)); 899 + } else { 900 + acpi_os_printf("_ADR: <Not Present>\n"); 901 + } 902 + 903 + if (info->valid & ACPI_VALID_HID) { 904 + acpi_os_printf("_HID: %s\n", info->hardware_id.string); 905 + } else { 906 + acpi_os_printf("_HID: <Not Present>\n"); 907 + } 908 + 909 + if (info->valid & ACPI_VALID_UID) { 910 + acpi_os_printf("_UID: %s\n", info->unique_id.string); 911 + } else { 912 + acpi_os_printf("_UID: <Not Present>\n"); 913 + } 914 + 915 + if (info->valid & ACPI_VALID_CID) { 916 + for (i = 0; i < info->compatible_id_list.count; i++) { 917 + acpi_os_printf("_CID: %s\n", 918 + info->compatible_id_list.ids[i].string); 919 + } 920 + } else { 921 + acpi_os_printf("_CID: <Not Present>\n"); 922 + } 923 + 924 + ACPI_FREE(info); 925 + return (AE_OK); 926 + } 927 + 928 + /******************************************************************************* 929 + * 930 + * FUNCTION: acpi_db_get_bus_info 931 + * 932 + * PARAMETERS: None 933 + * 934 + * RETURN: None 935 + * 936 + * DESCRIPTION: Display info about system busses. 937 + * 938 + ******************************************************************************/ 939 + 940 + void acpi_db_get_bus_info(void) 941 + { 942 + /* Search all nodes in namespace */ 943 + 944 + (void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT, 945 + ACPI_UINT32_MAX, acpi_db_bus_walk, NULL, NULL, 946 + NULL); 947 + }
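The dbnames.c commands above share a simple fixed-width naming scheme: debugger input is upper-cased and padded with underscores to the 4-character ACPI name size (acpi_db_find_name_in_namespace), and '?' in the requested name acts as a single-character wildcard during the namespace walk (acpi_db_walk_and_match_name). As an editorial illustration only, here is a minimal standalone sketch of that matching logic; the helper names `prep_name` and `name_matches` are hypothetical and not part of ACPICA:

```c
#include <ctype.h>
#include <string.h>

#define ACPI_NAME_SIZE 4

/* Upper-case a user-typed name and pad it with '_' to exactly 4 chars,
 * mirroring the preparation step in acpi_db_find_name_in_namespace. */
static void prep_name(const char *arg, char out[ACPI_NAME_SIZE + 1])
{
	size_t i;

	memset(out, '_', ACPI_NAME_SIZE);
	out[ACPI_NAME_SIZE] = '\0';
	for (i = 0; i < ACPI_NAME_SIZE && arg[i]; i++) {
		out[i] = (char)toupper((unsigned char)arg[i]);
	}
}

/* Compare a 4-char pattern against a 4-char node name; '?' in the
 * pattern matches any character, as in acpi_db_walk_and_match_name. */
static int name_matches(const char pattern[ACPI_NAME_SIZE],
			const char name[ACPI_NAME_SIZE])
{
	int i;

	for (i = 0; i < ACPI_NAME_SIZE; i++) {
		if (pattern[i] != '?' && pattern[i] != name[i]) {
			return 0;
		}
	}
	return 1;
}
```

So typing `pci` at the debugger prompt would effectively search for `PCI_`, and a pattern like `_?RS` would match both `_CRS` and `_PRS`.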
drivers/acpi/acpica/dbobject.c (533 lines added)
··· 1 + /******************************************************************************* 2 + * 3 + * Module Name: dbobject - ACPI object decode and display 4 + * 5 + ******************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + #include "acnamesp.h" 47 + #include "acdebug.h" 48 + 49 + #define _COMPONENT ACPI_CA_DEBUGGER 50 + ACPI_MODULE_NAME("dbobject") 51 + 52 + /* Local prototypes */ 53 + static void acpi_db_decode_node(struct acpi_namespace_node *node); 54 + 55 + /******************************************************************************* 56 + * 57 + * FUNCTION: acpi_db_dump_method_info 58 + * 59 + * PARAMETERS: status - Method execution status 60 + * walk_state - Current state of the parse tree walk 61 + * 62 + * RETURN: None 63 + * 64 + * DESCRIPTION: Called when a method has been aborted because of an error. 65 + * Dumps the method execution stack, and the method locals/args, 66 + * and disassembles the AML opcode that failed. 67 + * 68 + ******************************************************************************/ 69 + 70 + void 71 + acpi_db_dump_method_info(acpi_status status, struct acpi_walk_state *walk_state) 72 + { 73 + struct acpi_thread_state *thread; 74 + 75 + /* Ignore control codes, they are not errors */ 76 + 77 + if ((status & AE_CODE_MASK) == AE_CODE_CONTROL) { 78 + return; 79 + } 80 + 81 + /* We may be executing a deferred opcode */ 82 + 83 + if (walk_state->deferred_node) { 84 + acpi_os_printf("Executing subtree for Buffer/Package/Region\n"); 85 + return; 86 + } 87 + 88 + /* 89 + * If there is no Thread, we are not actually executing a method. 
90 + * This can happen when the iASL compiler calls the interpreter 91 + * to perform constant folding. 92 + */ 93 + thread = walk_state->thread; 94 + if (!thread) { 95 + return; 96 + } 97 + 98 + /* Display the method locals and arguments */ 99 + 100 + acpi_os_printf("\n"); 101 + acpi_db_decode_locals(walk_state); 102 + acpi_os_printf("\n"); 103 + acpi_db_decode_arguments(walk_state); 104 + acpi_os_printf("\n"); 105 + } 106 + 107 + /******************************************************************************* 108 + * 109 + * FUNCTION: acpi_db_decode_internal_object 110 + * 111 + * PARAMETERS: obj_desc - Object to be displayed 112 + * 113 + * RETURN: None 114 + * 115 + * DESCRIPTION: Short display of an internal object. Numbers/Strings/Buffers. 116 + * 117 + ******************************************************************************/ 118 + 119 + void acpi_db_decode_internal_object(union acpi_operand_object *obj_desc) 120 + { 121 + u32 i; 122 + 123 + if (!obj_desc) { 124 + acpi_os_printf(" Uninitialized"); 125 + return; 126 + } 127 + 128 + if (ACPI_GET_DESCRIPTOR_TYPE(obj_desc) != ACPI_DESC_TYPE_OPERAND) { 129 + acpi_os_printf(" %p [%s]", obj_desc, 130 + acpi_ut_get_descriptor_name(obj_desc)); 131 + return; 132 + } 133 + 134 + acpi_os_printf(" %s", acpi_ut_get_object_type_name(obj_desc)); 135 + 136 + switch (obj_desc->common.type) { 137 + case ACPI_TYPE_INTEGER: 138 + 139 + acpi_os_printf(" %8.8X%8.8X", 140 + ACPI_FORMAT_UINT64(obj_desc->integer.value)); 141 + break; 142 + 143 + case ACPI_TYPE_STRING: 144 + 145 + acpi_os_printf("(%u) \"%.24s", 146 + obj_desc->string.length, 147 + obj_desc->string.pointer); 148 + 149 + if (obj_desc->string.length > 24) { 150 + acpi_os_printf("..."); 151 + } else { 152 + acpi_os_printf("\""); 153 + } 154 + break; 155 + 156 + case ACPI_TYPE_BUFFER: 157 + 158 + acpi_os_printf("(%u)", obj_desc->buffer.length); 159 + for (i = 0; (i < 8) && (i < obj_desc->buffer.length); i++) { 160 + acpi_os_printf(" %2.2X", 
obj_desc->buffer.pointer[i]); 161 + } 162 + break; 163 + 164 + default: 165 + 166 + acpi_os_printf(" %p", obj_desc); 167 + break; 168 + } 169 + } 170 + 171 + /******************************************************************************* 172 + * 173 + * FUNCTION: acpi_db_decode_node 174 + * 175 + * PARAMETERS: node - Object to be displayed 176 + * 177 + * RETURN: None 178 + * 179 + * DESCRIPTION: Short display of a namespace node 180 + * 181 + ******************************************************************************/ 182 + 183 + static void acpi_db_decode_node(struct acpi_namespace_node *node) 184 + { 185 + 186 + acpi_os_printf("<Node> Name %4.4s", 187 + acpi_ut_get_node_name(node)); 188 + 189 + if (node->flags & ANOBJ_METHOD_ARG) { 190 + acpi_os_printf(" [Method Arg]"); 191 + } 192 + if (node->flags & ANOBJ_METHOD_LOCAL) { 193 + acpi_os_printf(" [Method Local]"); 194 + } 195 + 196 + switch (node->type) { 197 + 198 + /* These types have no attached object */ 199 + 200 + case ACPI_TYPE_DEVICE: 201 + 202 + acpi_os_printf(" Device"); 203 + break; 204 + 205 + case ACPI_TYPE_THERMAL: 206 + 207 + acpi_os_printf(" Thermal Zone"); 208 + break; 209 + 210 + default: 211 + 212 + acpi_db_decode_internal_object(acpi_ns_get_attached_object 213 + (node)); 214 + break; 215 + } 216 + } 217 + 218 + /******************************************************************************* 219 + * 220 + * FUNCTION: acpi_db_display_internal_object 221 + * 222 + * PARAMETERS: obj_desc - Object to be displayed 223 + * walk_state - Current walk state 224 + * 225 + * RETURN: None 226 + * 227 + * DESCRIPTION: Short display of an internal object 228 + * 229 + ******************************************************************************/ 230 + 231 + void 232 + acpi_db_display_internal_object(union acpi_operand_object *obj_desc, 233 + struct acpi_walk_state *walk_state) 234 + { 235 + u8 type; 236 + 237 + acpi_os_printf("%p ", obj_desc); 238 + 239 + if (!obj_desc) { 240 + acpi_os_printf("<Null 
Object>\n"); 241 + return; 242 + } 243 + 244 + /* Decode the object type */ 245 + 246 + switch (ACPI_GET_DESCRIPTOR_TYPE(obj_desc)) { 247 + case ACPI_DESC_TYPE_PARSER: 248 + 249 + acpi_os_printf("<Parser> "); 250 + break; 251 + 252 + case ACPI_DESC_TYPE_NAMED: 253 + 254 + acpi_db_decode_node((struct acpi_namespace_node *)obj_desc); 255 + break; 256 + 257 + case ACPI_DESC_TYPE_OPERAND: 258 + 259 + type = obj_desc->common.type; 260 + if (type > ACPI_TYPE_LOCAL_MAX) { 261 + acpi_os_printf(" Type %X [Invalid Type]", (u32)type); 262 + return; 263 + } 264 + 265 + /* Decode the ACPI object type */ 266 + 267 + switch (obj_desc->common.type) { 268 + case ACPI_TYPE_LOCAL_REFERENCE: 269 + 270 + acpi_os_printf("[%s] ", 271 + acpi_ut_get_reference_name(obj_desc)); 272 + 273 + /* Decode the refererence */ 274 + 275 + switch (obj_desc->reference.class) { 276 + case ACPI_REFCLASS_LOCAL: 277 + 278 + acpi_os_printf("%X ", 279 + obj_desc->reference.value); 280 + if (walk_state) { 281 + obj_desc = walk_state->local_variables 282 + [obj_desc->reference.value].object; 283 + acpi_os_printf("%p", obj_desc); 284 + acpi_db_decode_internal_object 285 + (obj_desc); 286 + } 287 + break; 288 + 289 + case ACPI_REFCLASS_ARG: 290 + 291 + acpi_os_printf("%X ", 292 + obj_desc->reference.value); 293 + if (walk_state) { 294 + obj_desc = walk_state->arguments 295 + [obj_desc->reference.value].object; 296 + acpi_os_printf("%p", obj_desc); 297 + acpi_db_decode_internal_object 298 + (obj_desc); 299 + } 300 + break; 301 + 302 + case ACPI_REFCLASS_INDEX: 303 + 304 + switch (obj_desc->reference.target_type) { 305 + case ACPI_TYPE_BUFFER_FIELD: 306 + 307 + acpi_os_printf("%p", 308 + obj_desc->reference. 309 + object); 310 + acpi_db_decode_internal_object 311 + (obj_desc->reference.object); 312 + break; 313 + 314 + case ACPI_TYPE_PACKAGE: 315 + 316 + acpi_os_printf("%p", 317 + obj_desc->reference. 
318 + where); 319 + if (!obj_desc->reference.where) { 320 + acpi_os_printf 321 + (" Uninitialized WHERE pointer"); 322 + } else { 323 + acpi_db_decode_internal_object(* 324 + (obj_desc-> 325 + reference. 326 + where)); 327 + } 328 + break; 329 + 330 + default: 331 + 332 + acpi_os_printf 333 + ("Unknown index target type"); 334 + break; 335 + } 336 + break; 337 + 338 + case ACPI_REFCLASS_REFOF: 339 + 340 + if (!obj_desc->reference.object) { 341 + acpi_os_printf 342 + ("Uninitialized reference subobject pointer"); 343 + break; 344 + } 345 + 346 + /* Reference can be to a Node or an Operand object */ 347 + 348 + switch (ACPI_GET_DESCRIPTOR_TYPE 349 + (obj_desc->reference.object)) { 350 + case ACPI_DESC_TYPE_NAMED: 351 + 352 + acpi_db_decode_node(obj_desc->reference. 353 + object); 354 + break; 355 + 356 + case ACPI_DESC_TYPE_OPERAND: 357 + 358 + acpi_db_decode_internal_object 359 + (obj_desc->reference.object); 360 + break; 361 + 362 + default: 363 + break; 364 + } 365 + break; 366 + 367 + case ACPI_REFCLASS_NAME: 368 + 369 + acpi_db_decode_node(obj_desc->reference.node); 370 + break; 371 + 372 + case ACPI_REFCLASS_DEBUG: 373 + case ACPI_REFCLASS_TABLE: 374 + 375 + acpi_os_printf("\n"); 376 + break; 377 + 378 + default: /* Unknown reference class */ 379 + 380 + acpi_os_printf("%2.2X\n", 381 + obj_desc->reference.class); 382 + break; 383 + } 384 + break; 385 + 386 + default: 387 + 388 + acpi_os_printf("<Obj> "); 389 + acpi_db_decode_internal_object(obj_desc); 390 + break; 391 + } 392 + break; 393 + 394 + default: 395 + 396 + acpi_os_printf("<Not a valid ACPI Object Descriptor> [%s]", 397 + acpi_ut_get_descriptor_name(obj_desc)); 398 + break; 399 + } 400 + 401 + acpi_os_printf("\n"); 402 + } 403 + 404 + /******************************************************************************* 405 + * 406 + * FUNCTION: acpi_db_decode_locals 407 + * 408 + * PARAMETERS: walk_state - State for current method 409 + * 410 + * RETURN: None 411 + * 412 + * DESCRIPTION: Display all locals 
for the currently running control method 413 + * 414 + ******************************************************************************/ 415 + 416 + void acpi_db_decode_locals(struct acpi_walk_state *walk_state) 417 + { 418 + u32 i; 419 + union acpi_operand_object *obj_desc; 420 + struct acpi_namespace_node *node; 421 + u8 display_locals = FALSE; 422 + 423 + obj_desc = walk_state->method_desc; 424 + node = walk_state->method_node; 425 + 426 + if (!node) { 427 + acpi_os_printf 428 + ("No method node (Executing subtree for buffer or opregion)\n"); 429 + return; 430 + } 431 + 432 + if (node->type != ACPI_TYPE_METHOD) { 433 + acpi_os_printf("Executing subtree for Buffer/Package/Region\n"); 434 + return; 435 + } 436 + 437 + /* Are any locals actually set? */ 438 + 439 + for (i = 0; i < ACPI_METHOD_NUM_LOCALS; i++) { 440 + obj_desc = walk_state->local_variables[i].object; 441 + if (obj_desc) { 442 + display_locals = TRUE; 443 + break; 444 + } 445 + } 446 + 447 + /* If any are set, only display the ones that are set */ 448 + 449 + if (display_locals) { 450 + acpi_os_printf 451 + ("\nInitialized Local Variables for method [%4.4s]:\n", 452 + acpi_ut_get_node_name(node)); 453 + 454 + for (i = 0; i < ACPI_METHOD_NUM_LOCALS; i++) { 455 + obj_desc = walk_state->local_variables[i].object; 456 + if (obj_desc) { 457 + acpi_os_printf(" Local%X: ", i); 458 + acpi_db_display_internal_object(obj_desc, 459 + walk_state); 460 + } 461 + } 462 + } else { 463 + acpi_os_printf 464 + ("No Local Variables are initialized for method [%4.4s]\n", 465 + acpi_ut_get_node_name(node)); 466 + } 467 + } 468 + 469 + /******************************************************************************* 470 + * 471 + * FUNCTION: acpi_db_decode_arguments 472 + * 473 + * PARAMETERS: walk_state - State for current method 474 + * 475 + * RETURN: None 476 + * 477 + * DESCRIPTION: Display all arguments for the currently running control method 478 + * 479 + 
******************************************************************************/ 480 + 481 + void acpi_db_decode_arguments(struct acpi_walk_state *walk_state) 482 + { 483 + u32 i; 484 + union acpi_operand_object *obj_desc; 485 + struct acpi_namespace_node *node; 486 + u8 display_args = FALSE; 487 + 488 + node = walk_state->method_node; 489 + obj_desc = walk_state->method_desc; 490 + 491 + if (!node) { 492 + acpi_os_printf 493 + ("No method node (Executing subtree for buffer or opregion)\n"); 494 + return; 495 + } 496 + 497 + if (node->type != ACPI_TYPE_METHOD) { 498 + acpi_os_printf("Executing subtree for Buffer/Package/Region\n"); 499 + return; 500 + } 501 + 502 + /* Are any arguments actually set? */ 503 + 504 + for (i = 0; i < ACPI_METHOD_NUM_ARGS; i++) { 505 + obj_desc = walk_state->arguments[i].object; 506 + if (obj_desc) { 507 + display_args = TRUE; 508 + break; 509 + } 510 + } 511 + 512 + /* If any are set, only display the ones that are set */ 513 + 514 + if (display_args) { 515 + acpi_os_printf("Initialized Arguments for Method [%4.4s]: " 516 + "(%X arguments defined for method invocation)\n", 517 + acpi_ut_get_node_name(node), 518 + obj_desc->method.param_count); 519 + 520 + for (i = 0; i < ACPI_METHOD_NUM_ARGS; i++) { 521 + obj_desc = walk_state->arguments[i].object; 522 + if (obj_desc) { 523 + acpi_os_printf(" Arg%u: ", i); 524 + acpi_db_display_internal_object(obj_desc, 525 + walk_state); 526 + } 527 + } 528 + } else { 529 + acpi_os_printf 530 + ("No Arguments are initialized for method [%4.4s]\n", 531 + acpi_ut_get_node_name(node)); 532 + } 533 + }
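The decode routines above (`acpi_db_decode_internal_object` in particular) all follow the same pattern: switch on the object's type tag and emit a short one-line summary, truncating strings at 24 characters and buffers at 8 bytes. The following is a minimal standalone sketch of that pattern; the `struct obj` and `decode_object` names are illustrative stand-ins, not ACPICA's `union acpi_operand_object` API.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical miniature of the acpi_db_decode_internal_object switch:
 * a tagged union decoded into a short one-line description. */
enum obj_type { OBJ_INTEGER, OBJ_STRING, OBJ_BUFFER };

struct obj {
	enum obj_type type;
	unsigned long long integer;
	const char *string;
	const unsigned char *buffer;
	unsigned int length;
};

/* Decode 'o' into out, mimicking the debugger's truncation rules:
 * strings are cut at 24 characters, buffers at 8 bytes. */
static void decode_object(const struct obj *o, char *out, size_t size)
{
	switch (o->type) {
	case OBJ_INTEGER:
		/* Same %8.8X%8.8X split the kernel uses via ACPI_FORMAT_UINT64 */
		snprintf(out, size, "Integer %8.8X%8.8X",
			 (unsigned)(o->integer >> 32),
			 (unsigned)(o->integer & 0xFFFFFFFF));
		break;
	case OBJ_STRING:
		snprintf(out, size, "String (%u) \"%.24s%s", o->length,
			 o->string, o->length > 24 ? "..." : "\"");
		break;
	case OBJ_BUFFER: {
		int n = snprintf(out, size, "Buffer (%u)", o->length);
		for (unsigned int i = 0; i < 8 && i < o->length; i++)
			n += snprintf(out + n, size - n, " %2.2X",
				      o->buffer[i]);
		break;
	}
	}
}
```

The truncation keeps the per-object output to a single line, which matters because these decoders are invoked for every local and argument when a method aborts.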
drivers/acpi/acpica/dbstats.c
··· 1 + /******************************************************************************* 2 + * 3 + * Module Name: dbstats - Generation and display of ACPI table statistics 4 + * 5 + ******************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + #include "acdebug.h" 47 + #include "acnamesp.h" 48 + 49 + #define _COMPONENT ACPI_CA_DEBUGGER 50 + ACPI_MODULE_NAME("dbstats") 51 + 52 + /* Local prototypes */ 53 + static void acpi_db_count_namespace_objects(void); 54 + 55 + static void acpi_db_enumerate_object(union acpi_operand_object *obj_desc); 56 + 57 + static acpi_status 58 + acpi_db_classify_one_object(acpi_handle obj_handle, 59 + u32 nesting_level, 60 + void *context, void **return_value); 61 + 62 + #if defined ACPI_DBG_TRACK_ALLOCATIONS || defined ACPI_USE_LOCAL_CACHE 63 + static void acpi_db_list_info(struct acpi_memory_list *list); 64 + #endif 65 + 66 + /* 67 + * Statistics subcommands 68 + */ 69 + static struct acpi_db_argument_info acpi_db_stat_types[] = { 70 + {"ALLOCATIONS"}, 71 + {"OBJECTS"}, 72 + {"MEMORY"}, 73 + {"MISC"}, 74 + {"TABLES"}, 75 + {"SIZES"}, 76 + {"STACK"}, 77 + {NULL} /* Must be null terminated */ 78 + }; 79 + 80 + #define CMD_STAT_ALLOCATIONS 0 81 + #define CMD_STAT_OBJECTS 1 82 + #define CMD_STAT_MEMORY 2 83 + #define CMD_STAT_MISC 3 84 + #define CMD_STAT_TABLES 4 85 + #define CMD_STAT_SIZES 5 86 + #define CMD_STAT_STACK 6 87 + 88 + #if defined ACPI_DBG_TRACK_ALLOCATIONS || defined ACPI_USE_LOCAL_CACHE 89 + /******************************************************************************* 90 + * 91 + * FUNCTION: acpi_db_list_info 92 + * 93 + * PARAMETERS: list - Memory list/cache to be 
displayed 94 + * 95 + * RETURN: None 96 + * 97 + * DESCRIPTION: Display information about the input memory list or cache. 98 + * 99 + ******************************************************************************/ 100 + 101 + static void acpi_db_list_info(struct acpi_memory_list *list) 102 + { 103 + #ifdef ACPI_DBG_TRACK_ALLOCATIONS 104 + u32 outstanding; 105 + #endif 106 + 107 + acpi_os_printf("\n%s\n", list->list_name); 108 + 109 + /* max_depth > 0 indicates a cache object */ 110 + 111 + if (list->max_depth > 0) { 112 + acpi_os_printf 113 + (" Cache: [Depth MaxD Avail Size] " 114 + "%8.2X %8.2X %8.2X %8.2X\n", list->current_depth, 115 + list->max_depth, list->max_depth - list->current_depth, 116 + (list->current_depth * list->object_size)); 117 + } 118 + #ifdef ACPI_DBG_TRACK_ALLOCATIONS 119 + if (list->max_depth > 0) { 120 + acpi_os_printf 121 + (" Cache: [Requests Hits Misses ObjSize] " 122 + "%8.2X %8.2X %8.2X %8.2X\n", list->requests, list->hits, 123 + list->requests - list->hits, list->object_size); 124 + } 125 + 126 + outstanding = acpi_db_get_cache_info(list); 127 + 128 + if (list->object_size) { 129 + acpi_os_printf 130 + (" Mem: [Alloc Free Max CurSize Outstanding] " 131 + "%8.2X %8.2X %8.2X %8.2X %8.2X\n", list->total_allocated, 132 + list->total_freed, list->max_occupied, 133 + outstanding * list->object_size, outstanding); 134 + } else { 135 + acpi_os_printf 136 + (" Mem: [Alloc Free Max CurSize Outstanding Total] " 137 + "%8.2X %8.2X %8.2X %8.2X %8.2X %8.2X\n", 138 + list->total_allocated, list->total_freed, 139 + list->max_occupied, list->current_total_size, outstanding, 140 + list->total_size); 141 + } 142 + #endif 143 + } 144 + #endif 145 + 146 + /******************************************************************************* 147 + * 148 + * FUNCTION: acpi_db_enumerate_object 149 + * 150 + * PARAMETERS: obj_desc - Object to be counted 151 + * 152 + * RETURN: None 153 + * 154 + * DESCRIPTION: Add this object to the global counts, by object type. 
155 + * Limited recursion handles subobjects and packages, and this 156 + * is probably acceptable within the AML debugger only. 157 + * 158 + ******************************************************************************/ 159 + 160 + static void acpi_db_enumerate_object(union acpi_operand_object *obj_desc) 161 + { 162 + u32 i; 163 + 164 + if (!obj_desc) { 165 + return; 166 + } 167 + 168 + /* Enumerate this object first */ 169 + 170 + acpi_gbl_num_objects++; 171 + 172 + if (obj_desc->common.type > ACPI_TYPE_NS_NODE_MAX) { 173 + acpi_gbl_obj_type_count_misc++; 174 + } else { 175 + acpi_gbl_obj_type_count[obj_desc->common.type]++; 176 + } 177 + 178 + /* Count the sub-objects */ 179 + 180 + switch (obj_desc->common.type) { 181 + case ACPI_TYPE_PACKAGE: 182 + 183 + for (i = 0; i < obj_desc->package.count; i++) { 184 + acpi_db_enumerate_object(obj_desc->package.elements[i]); 185 + } 186 + break; 187 + 188 + case ACPI_TYPE_DEVICE: 189 + 190 + acpi_db_enumerate_object(obj_desc->device.notify_list[0]); 191 + acpi_db_enumerate_object(obj_desc->device.notify_list[1]); 192 + acpi_db_enumerate_object(obj_desc->device.handler); 193 + break; 194 + 195 + case ACPI_TYPE_BUFFER_FIELD: 196 + 197 + if (acpi_ns_get_secondary_object(obj_desc)) { 198 + acpi_gbl_obj_type_count[ACPI_TYPE_BUFFER_FIELD]++; 199 + } 200 + break; 201 + 202 + case ACPI_TYPE_REGION: 203 + 204 + acpi_gbl_obj_type_count[ACPI_TYPE_LOCAL_REGION_FIELD]++; 205 + acpi_db_enumerate_object(obj_desc->region.handler); 206 + break; 207 + 208 + case ACPI_TYPE_POWER: 209 + 210 + acpi_db_enumerate_object(obj_desc->power_resource. 211 + notify_list[0]); 212 + acpi_db_enumerate_object(obj_desc->power_resource. 
213 + notify_list[1]); 214 + break; 215 + 216 + case ACPI_TYPE_PROCESSOR: 217 + 218 + acpi_db_enumerate_object(obj_desc->processor.notify_list[0]); 219 + acpi_db_enumerate_object(obj_desc->processor.notify_list[1]); 220 + acpi_db_enumerate_object(obj_desc->processor.handler); 221 + break; 222 + 223 + case ACPI_TYPE_THERMAL: 224 + 225 + acpi_db_enumerate_object(obj_desc->thermal_zone.notify_list[0]); 226 + acpi_db_enumerate_object(obj_desc->thermal_zone.notify_list[1]); 227 + acpi_db_enumerate_object(obj_desc->thermal_zone.handler); 228 + break; 229 + 230 + default: 231 + 232 + break; 233 + } 234 + } 235 + 236 + /******************************************************************************* 237 + * 238 + * FUNCTION: acpi_db_classify_one_object 239 + * 240 + * PARAMETERS: Callback for walk_namespace 241 + * 242 + * RETURN: Status 243 + * 244 + * DESCRIPTION: Enumerate both the object descriptor (including subobjects) and 245 + * the parent namespace node. 246 + * 247 + ******************************************************************************/ 248 + 249 + static acpi_status 250 + acpi_db_classify_one_object(acpi_handle obj_handle, 251 + u32 nesting_level, 252 + void *context, void **return_value) 253 + { 254 + struct acpi_namespace_node *node; 255 + union acpi_operand_object *obj_desc; 256 + u32 type; 257 + 258 + acpi_gbl_num_nodes++; 259 + 260 + node = (struct acpi_namespace_node *)obj_handle; 261 + obj_desc = acpi_ns_get_attached_object(node); 262 + 263 + acpi_db_enumerate_object(obj_desc); 264 + 265 + type = node->type; 266 + if (type > ACPI_TYPE_NS_NODE_MAX) { 267 + acpi_gbl_node_type_count_misc++; 268 + } else { 269 + acpi_gbl_node_type_count[type]++; 270 + } 271 + 272 + return (AE_OK); 273 + 274 + #ifdef ACPI_FUTURE_IMPLEMENTATION 275 + 276 + /* TBD: These need to be counted during the initial parsing phase */ 277 + 278 + if (acpi_ps_is_named_op(op->opcode)) { 279 + num_nodes++; 280 + } 281 + 282 + if (is_method) { 283 + num_method_elements++; 284 + } 285 
+ 286 + num_grammar_elements++; 287 + op = acpi_ps_get_depth_next(root, op); 288 + 289 + size_of_parse_tree = (num_grammar_elements - num_method_elements) * 290 + (u32)sizeof(union acpi_parse_object); 291 + size_of_method_trees = 292 + num_method_elements * (u32)sizeof(union acpi_parse_object); 293 + size_of_node_entries = 294 + num_nodes * (u32)sizeof(struct acpi_namespace_node); 295 + size_of_acpi_objects = 296 + num_nodes * (u32)sizeof(union acpi_operand_object); 297 + #endif 298 + } 299 + 300 + /******************************************************************************* 301 + * 302 + * FUNCTION: acpi_db_count_namespace_objects 303 + * 304 + * PARAMETERS: None 305 + * 306 + * RETURN: None 307 + * 308 + * DESCRIPTION: Count and classify the entire namespace, including all 309 + * namespace nodes and attached objects. 310 + * 311 + ******************************************************************************/ 312 + 313 + static void acpi_db_count_namespace_objects(void) 314 + { 315 + u32 i; 316 + 317 + acpi_gbl_num_nodes = 0; 318 + acpi_gbl_num_objects = 0; 319 + 320 + acpi_gbl_obj_type_count_misc = 0; 321 + for (i = 0; i < (ACPI_TYPE_NS_NODE_MAX - 1); i++) { 322 + acpi_gbl_obj_type_count[i] = 0; 323 + acpi_gbl_node_type_count[i] = 0; 324 + } 325 + 326 + (void)acpi_ns_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT, 327 + ACPI_UINT32_MAX, FALSE, 328 + acpi_db_classify_one_object, NULL, NULL, 329 + NULL); 330 + } 331 + 332 + /******************************************************************************* 333 + * 334 + * FUNCTION: acpi_db_display_statistics 335 + * 336 + * PARAMETERS: type_arg - Subcommand 337 + * 338 + * RETURN: Status 339 + * 340 + * DESCRIPTION: Display various statistics 341 + * 342 + ******************************************************************************/ 343 + 344 + acpi_status acpi_db_display_statistics(char *type_arg) 345 + { 346 + u32 i; 347 + u32 temp; 348 + 349 + acpi_ut_strupr(type_arg); 350 + temp = 
acpi_db_match_argument(type_arg, acpi_db_stat_types); 351 + if (temp == ACPI_TYPE_NOT_FOUND) { 352 + acpi_os_printf("Invalid or unsupported argument\n"); 353 + return (AE_OK); 354 + } 355 + 356 + switch (temp) { 357 + case CMD_STAT_ALLOCATIONS: 358 + 359 + #ifdef ACPI_DBG_TRACK_ALLOCATIONS 360 + acpi_ut_dump_allocation_info(); 361 + #endif 362 + break; 363 + 364 + case CMD_STAT_TABLES: 365 + 366 + acpi_os_printf("ACPI Table Information (not implemented):\n\n"); 367 + break; 368 + 369 + case CMD_STAT_OBJECTS: 370 + 371 + acpi_db_count_namespace_objects(); 372 + 373 + acpi_os_printf 374 + ("\nObjects defined in the current namespace:\n\n"); 375 + 376 + acpi_os_printf("%16.16s %10.10s %10.10s\n", 377 + "ACPI_TYPE", "NODES", "OBJECTS"); 378 + 379 + for (i = 0; i < ACPI_TYPE_NS_NODE_MAX; i++) { 380 + acpi_os_printf("%16.16s % 10ld% 10ld\n", 381 + acpi_ut_get_type_name(i), 382 + acpi_gbl_node_type_count[i], 383 + acpi_gbl_obj_type_count[i]); 384 + } 385 + acpi_os_printf("%16.16s % 10ld% 10ld\n", "Misc/Unknown", 386 + acpi_gbl_node_type_count_misc, 387 + acpi_gbl_obj_type_count_misc); 388 + 389 + acpi_os_printf("%16.16s % 10ld% 10ld\n", "TOTALS:", 390 + acpi_gbl_num_nodes, acpi_gbl_num_objects); 391 + break; 392 + 393 + case CMD_STAT_MEMORY: 394 + 395 + #ifdef ACPI_DBG_TRACK_ALLOCATIONS 396 + acpi_os_printf 397 + ("\n----Object Statistics (all in hex)---------\n"); 398 + 399 + acpi_db_list_info(acpi_gbl_global_list); 400 + acpi_db_list_info(acpi_gbl_ns_node_list); 401 + #endif 402 + 403 + #ifdef ACPI_USE_LOCAL_CACHE 404 + acpi_os_printf 405 + ("\n----Cache Statistics (all in hex)---------\n"); 406 + acpi_db_list_info(acpi_gbl_operand_cache); 407 + acpi_db_list_info(acpi_gbl_ps_node_cache); 408 + acpi_db_list_info(acpi_gbl_ps_node_ext_cache); 409 + acpi_db_list_info(acpi_gbl_state_cache); 410 + #endif 411 + 412 + break; 413 + 414 + case CMD_STAT_MISC: 415 + 416 + acpi_os_printf("\nMiscellaneous Statistics:\n\n"); 417 + acpi_os_printf("Calls to AcpiPsFind:.. 
........% 7ld\n", 418 + acpi_gbl_ps_find_count); 419 + acpi_os_printf("Calls to AcpiNsLookup:..........% 7ld\n", 420 + acpi_gbl_ns_lookup_count); 421 + 422 + acpi_os_printf("\n"); 423 + 424 + acpi_os_printf("Mutex usage:\n\n"); 425 + for (i = 0; i < ACPI_NUM_MUTEX; i++) { 426 + acpi_os_printf("%-28s: % 7ld\n", 427 + acpi_ut_get_mutex_name(i), 428 + acpi_gbl_mutex_info[i].use_count); 429 + } 430 + break; 431 + 432 + case CMD_STAT_SIZES: 433 + 434 + acpi_os_printf("\nInternal object sizes:\n\n"); 435 + 436 + acpi_os_printf("Common %3d\n", 437 + sizeof(struct acpi_object_common)); 438 + acpi_os_printf("Number %3d\n", 439 + sizeof(struct acpi_object_integer)); 440 + acpi_os_printf("String %3d\n", 441 + sizeof(struct acpi_object_string)); 442 + acpi_os_printf("Buffer %3d\n", 443 + sizeof(struct acpi_object_buffer)); 444 + acpi_os_printf("Package %3d\n", 445 + sizeof(struct acpi_object_package)); 446 + acpi_os_printf("BufferField %3d\n", 447 + sizeof(struct acpi_object_buffer_field)); 448 + acpi_os_printf("Device %3d\n", 449 + sizeof(struct acpi_object_device)); 450 + acpi_os_printf("Event %3d\n", 451 + sizeof(struct acpi_object_event)); 452 + acpi_os_printf("Method %3d\n", 453 + sizeof(struct acpi_object_method)); 454 + acpi_os_printf("Mutex %3d\n", 455 + sizeof(struct acpi_object_mutex)); 456 + acpi_os_printf("Region %3d\n", 457 + sizeof(struct acpi_object_region)); 458 + acpi_os_printf("PowerResource %3d\n", 459 + sizeof(struct acpi_object_power_resource)); 460 + acpi_os_printf("Processor %3d\n", 461 + sizeof(struct acpi_object_processor)); 462 + acpi_os_printf("ThermalZone %3d\n", 463 + sizeof(struct acpi_object_thermal_zone)); 464 + acpi_os_printf("RegionField %3d\n", 465 + sizeof(struct acpi_object_region_field)); 466 + acpi_os_printf("BankField %3d\n", 467 + sizeof(struct acpi_object_bank_field)); 468 + acpi_os_printf("IndexField %3d\n", 469 + sizeof(struct acpi_object_index_field)); 470 + acpi_os_printf("Reference %3d\n", 471 + sizeof(struct 
acpi_object_reference)); 472 + acpi_os_printf("Notify %3d\n", 473 + sizeof(struct acpi_object_notify_handler)); 474 + acpi_os_printf("AddressSpace %3d\n", 475 + sizeof(struct acpi_object_addr_handler)); 476 + acpi_os_printf("Extra %3d\n", 477 + sizeof(struct acpi_object_extra)); 478 + acpi_os_printf("Data %3d\n", 479 + sizeof(struct acpi_object_data)); 480 + 481 + acpi_os_printf("\n"); 482 + 483 + acpi_os_printf("ParseObject %3d\n", 484 + sizeof(struct acpi_parse_obj_common)); 485 + acpi_os_printf("ParseObjectNamed %3d\n", 486 + sizeof(struct acpi_parse_obj_named)); 487 + acpi_os_printf("ParseObjectAsl %3d\n", 488 + sizeof(struct acpi_parse_obj_asl)); 489 + acpi_os_printf("OperandObject %3d\n", 490 + sizeof(union acpi_operand_object)); 491 + acpi_os_printf("NamespaceNode %3d\n", 492 + sizeof(struct acpi_namespace_node)); 493 + acpi_os_printf("AcpiObject %3d\n", 494 + sizeof(union acpi_object)); 495 + 496 + acpi_os_printf("\n"); 497 + 498 + acpi_os_printf("Generic State %3d\n", 499 + sizeof(union acpi_generic_state)); 500 + acpi_os_printf("Common State %3d\n", 501 + sizeof(struct acpi_common_state)); 502 + acpi_os_printf("Control State %3d\n", 503 + sizeof(struct acpi_control_state)); 504 + acpi_os_printf("Update State %3d\n", 505 + sizeof(struct acpi_update_state)); 506 + acpi_os_printf("Scope State %3d\n", 507 + sizeof(struct acpi_scope_state)); 508 + acpi_os_printf("Parse Scope %3d\n", 509 + sizeof(struct acpi_pscope_state)); 510 + acpi_os_printf("Package State %3d\n", 511 + sizeof(struct acpi_pkg_state)); 512 + acpi_os_printf("Thread State %3d\n", 513 + sizeof(struct acpi_thread_state)); 514 + acpi_os_printf("Result Values %3d\n", 515 + sizeof(struct acpi_result_values)); 516 + acpi_os_printf("Notify Info %3d\n", 517 + sizeof(struct acpi_notify_info)); 518 + break; 519 + 520 + case CMD_STAT_STACK: 521 + #if defined(ACPI_DEBUG_OUTPUT) 522 + 523 + temp = 524 + (u32)ACPI_PTR_DIFF(acpi_gbl_entry_stack_pointer, 525 + acpi_gbl_lowest_stack_pointer); 526 + 527 + 
acpi_os_printf("\nSubsystem Stack Usage:\n\n"); 528 + acpi_os_printf("Entry Stack Pointer %p\n", 529 + acpi_gbl_entry_stack_pointer); 530 + acpi_os_printf("Lowest Stack Pointer %p\n", 531 + acpi_gbl_lowest_stack_pointer); 532 + acpi_os_printf("Stack Use %X (%u)\n", temp, 533 + temp); 534 + acpi_os_printf("Deepest Procedure Nesting %u\n", 535 + acpi_gbl_deepest_nesting); 536 + #endif 537 + break; 538 + 539 + default: 540 + 541 + break; 542 + } 543 + 544 + acpi_os_printf("\n"); 545 + return (AE_OK); 546 + }
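The OBJECTS statistics path above hinges on `acpi_db_enumerate_object`: walk each attached object, bump a per-type counter, and recurse into sub-objects (package elements, notify lists, handlers), silently skipping NULL slots. A self-contained sketch of that tally, with illustrative types in place of ACPICA's `union acpi_operand_object`:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in object types; ACPICA tallies into acpi_gbl_obj_type_count[]. */
enum type { T_INTEGER, T_STRING, T_PACKAGE, T_MAX };

struct object {
	enum type type;
	struct object **elements;	/* package sub-objects */
	unsigned int count;		/* number of package elements */
};

static unsigned long type_count[T_MAX];
static unsigned long num_objects;

/* Count this object, then recurse into package elements.
 * NULL elements (uninitialized slots) are ignored, as in the kernel. */
static void enumerate_object(const struct object *obj)
{
	if (!obj)
		return;

	num_objects++;
	type_count[obj->type]++;

	if (obj->type == T_PACKAGE)
		for (unsigned int i = 0; i < obj->count; i++)
			enumerate_object(obj->elements[i]);
}
```

As the function header in dbstats.c notes, unbounded recursion like this is acceptable only inside the debugger, where stack depth is known to be modest.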
drivers/acpi/acpica/dbtest.c
··· 1 + /******************************************************************************* 2 + * 3 + * Module Name: dbtest - Various debug-related tests 4 + * 5 + ******************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + #include "acdebug.h" 47 + #include "acnamesp.h" 48 + #include "acpredef.h" 49 + 50 + #define _COMPONENT ACPI_CA_DEBUGGER 51 + ACPI_MODULE_NAME("dbtest") 52 + 53 + /* Local prototypes */ 54 + static void acpi_db_test_all_objects(void); 55 + 56 + static acpi_status 57 + acpi_db_test_one_object(acpi_handle obj_handle, 58 + u32 nesting_level, void *context, void **return_value); 59 + 60 + static acpi_status 61 + acpi_db_test_integer_type(struct acpi_namespace_node *node, u32 bit_length); 62 + 63 + static acpi_status 64 + acpi_db_test_buffer_type(struct acpi_namespace_node *node, u32 bit_length); 65 + 66 + static acpi_status 67 + acpi_db_test_string_type(struct acpi_namespace_node *node, u32 byte_length); 68 + 69 + static acpi_status 70 + acpi_db_read_from_object(struct acpi_namespace_node *node, 71 + acpi_object_type expected_type, 72 + union acpi_object **value); 73 + 74 + static acpi_status 75 + acpi_db_write_to_object(struct acpi_namespace_node *node, 76 + union acpi_object *value); 77 + 78 + static void acpi_db_evaluate_all_predefined_names(char *count_arg); 79 + 80 + static acpi_status 81 + acpi_db_evaluate_one_predefined_name(acpi_handle obj_handle, 82 + u32 nesting_level, 83 + void *context, void **return_value); 84 + 85 + /* 86 + * Test subcommands 87 + */ 88 + static struct acpi_db_argument_info acpi_db_test_types[] = { 89 + {"OBJECTS"}, 90 + {"PREDEFINED"}, 91 + 
{NULL} /* Must be null terminated */ 92 + }; 93 + 94 + #define CMD_TEST_OBJECTS 0 95 + #define CMD_TEST_PREDEFINED 1 96 + 97 + #define BUFFER_FILL_VALUE 0xFF 98 + 99 + /* 100 + * Support for the special debugger read/write control methods. 101 + * These methods are installed into the current namespace and are 102 + * used to read and write the various namespace objects. The point 103 + * is to force the AML interpreter to do all of the work. 104 + */ 105 + #define ACPI_DB_READ_METHOD "\\_T98" 106 + #define ACPI_DB_WRITE_METHOD "\\_T99" 107 + 108 + static acpi_handle read_handle = NULL; 109 + static acpi_handle write_handle = NULL; 110 + 111 + /* ASL Definitions of the debugger read/write control methods */ 112 + 113 + #if 0 114 + definition_block("ssdt.aml", "SSDT", 2, "Intel", "DEBUG", 0x00000001) 115 + { 116 + method(_T98, 1, not_serialized) { /* Read */ 117 + return (de_ref_of(arg0)) 118 + } 119 + } 120 + 121 + definition_block("ssdt2.aml", "SSDT", 2, "Intel", "DEBUG", 0x00000001) 122 + { 123 + method(_T99, 2, not_serialized) { /* Write */ 124 + store(arg1, arg0) 125 + } 126 + } 127 + #endif 128 + 129 + static unsigned char read_method_code[] = { 130 + 0x53, 0x53, 0x44, 0x54, 0x2E, 0x00, 0x00, 0x00, /* 00000000 "SSDT...." */ 131 + 0x02, 0xC9, 0x49, 0x6E, 0x74, 0x65, 0x6C, 0x00, /* 00000008 "..Intel." */ 132 + 0x44, 0x45, 0x42, 0x55, 0x47, 0x00, 0x00, 0x00, /* 00000010 "DEBUG..." */ 133 + 0x01, 0x00, 0x00, 0x00, 0x49, 0x4E, 0x54, 0x4C, /* 00000018 "....INTL" */ 134 + 0x18, 0x12, 0x13, 0x20, 0x14, 0x09, 0x5F, 0x54, /* 00000020 "... .._T" */ 135 + 0x39, 0x38, 0x01, 0xA4, 0x83, 0x68 /* 00000028 "98...h" */ 136 + }; 137 + 138 + static unsigned char write_method_code[] = { 139 + 0x53, 0x53, 0x44, 0x54, 0x2E, 0x00, 0x00, 0x00, /* 00000000 "SSDT...." */ 140 + 0x02, 0x15, 0x49, 0x6E, 0x74, 0x65, 0x6C, 0x00, /* 00000008 "..Intel." */ 141 + 0x44, 0x45, 0x42, 0x55, 0x47, 0x00, 0x00, 0x00, /* 00000010 "DEBUG..." 
*/ 142 + 0x01, 0x00, 0x00, 0x00, 0x49, 0x4E, 0x54, 0x4C, /* 00000018 "....INTL" */ 143 + 0x18, 0x12, 0x13, 0x20, 0x14, 0x09, 0x5F, 0x54, /* 00000020 "... .._T" */ 144 + 0x39, 0x39, 0x02, 0x70, 0x69, 0x68 /* 00000028 "99.pih" */ 145 + }; 146 + 147 + /******************************************************************************* 148 + * 149 + * FUNCTION: acpi_db_execute_test 150 + * 151 + * PARAMETERS: type_arg - Subcommand 152 + * 153 + * RETURN: None 154 + * 155 + * DESCRIPTION: Execute various debug tests. 156 + * 157 + * Note: Code is prepared for future expansion of the TEST command. 158 + * 159 + ******************************************************************************/ 160 + 161 + void acpi_db_execute_test(char *type_arg) 162 + { 163 + u32 temp; 164 + 165 + acpi_ut_strupr(type_arg); 166 + temp = acpi_db_match_argument(type_arg, acpi_db_test_types); 167 + if (temp == ACPI_TYPE_NOT_FOUND) { 168 + acpi_os_printf("Invalid or unsupported argument\n"); 169 + return; 170 + } 171 + 172 + switch (temp) { 173 + case CMD_TEST_OBJECTS: 174 + 175 + acpi_db_test_all_objects(); 176 + break; 177 + 178 + case CMD_TEST_PREDEFINED: 179 + 180 + acpi_db_evaluate_all_predefined_names(NULL); 181 + break; 182 + 183 + default: 184 + break; 185 + } 186 + } 187 + 188 + /******************************************************************************* 189 + * 190 + * FUNCTION: acpi_db_test_all_objects 191 + * 192 + * PARAMETERS: None 193 + * 194 + * RETURN: None 195 + * 196 + * DESCRIPTION: This test implements the OBJECTS subcommand. It exercises the 197 + * namespace by reading/writing/comparing all data objects such 198 + * as integers, strings, buffers, fields, buffer fields, etc. 
199 + * 200 + ******************************************************************************/ 201 + 202 + static void acpi_db_test_all_objects(void) 203 + { 204 + acpi_status status; 205 + 206 + /* Install the debugger read-object control method if necessary */ 207 + 208 + if (!read_handle) { 209 + status = acpi_install_method(read_method_code); 210 + if (ACPI_FAILURE(status)) { 211 + acpi_os_printf 212 + ("%s, Could not install debugger read method\n", 213 + acpi_format_exception(status)); 214 + return; 215 + } 216 + 217 + status = 218 + acpi_get_handle(NULL, ACPI_DB_READ_METHOD, &read_handle); 219 + if (ACPI_FAILURE(status)) { 220 + acpi_os_printf 221 + ("Could not obtain handle for debug method %s\n", 222 + ACPI_DB_READ_METHOD); 223 + return; 224 + } 225 + } 226 + 227 + /* Install the debugger write-object control method if necessary */ 228 + 229 + if (!write_handle) { 230 + status = acpi_install_method(write_method_code); 231 + if (ACPI_FAILURE(status)) { 232 + acpi_os_printf 233 + ("%s, Could not install debugger write method\n", 234 + acpi_format_exception(status)); 235 + return; 236 + } 237 + 238 + status = 239 + acpi_get_handle(NULL, ACPI_DB_WRITE_METHOD, &write_handle); 240 + if (ACPI_FAILURE(status)) { 241 + acpi_os_printf 242 + ("Could not obtain handle for debug method %s\n", 243 + ACPI_DB_WRITE_METHOD); 244 + return; 245 + } 246 + } 247 + 248 + /* Walk the entire namespace, testing each supported named data object */ 249 + 250 + (void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT, 251 + ACPI_UINT32_MAX, acpi_db_test_one_object, 252 + NULL, NULL, NULL); 253 + } 254 + 255 + /******************************************************************************* 256 + * 257 + * FUNCTION: acpi_db_test_one_object 258 + * 259 + * PARAMETERS: acpi_walk_callback 260 + * 261 + * RETURN: Status 262 + * 263 + * DESCRIPTION: Test one namespace object. Supported types are Integer, 264 + * String, Buffer, buffer_field, and field_unit. 
All other object 265 + * types are simply ignored. 266 + * 267 + * Note: Support for Packages is not implemented. 268 + * 269 + ******************************************************************************/ 270 + 271 + static acpi_status 272 + acpi_db_test_one_object(acpi_handle obj_handle, 273 + u32 nesting_level, void *context, void **return_value) 274 + { 275 + struct acpi_namespace_node *node; 276 + union acpi_operand_object *obj_desc; 277 + union acpi_operand_object *region_obj; 278 + acpi_object_type local_type; 279 + u32 bit_length = 0; 280 + u32 byte_length = 0; 281 + acpi_status status = AE_OK; 282 + 283 + node = ACPI_CAST_PTR(struct acpi_namespace_node, obj_handle); 284 + obj_desc = node->object; 285 + 286 + /* 287 + * For the supported types, get the actual bit length or 288 + * byte length. Map the type to one of Integer/String/Buffer. 289 + */ 290 + switch (node->type) { 291 + case ACPI_TYPE_INTEGER: 292 + 293 + /* Integer width is either 32 or 64 */ 294 + 295 + local_type = ACPI_TYPE_INTEGER; 296 + bit_length = acpi_gbl_integer_bit_width; 297 + break; 298 + 299 + case ACPI_TYPE_STRING: 300 + 301 + local_type = ACPI_TYPE_STRING; 302 + byte_length = obj_desc->string.length; 303 + break; 304 + 305 + case ACPI_TYPE_BUFFER: 306 + 307 + local_type = ACPI_TYPE_BUFFER; 308 + byte_length = obj_desc->buffer.length; 309 + bit_length = byte_length * 8; 310 + break; 311 + 312 + case ACPI_TYPE_FIELD_UNIT: 313 + case ACPI_TYPE_BUFFER_FIELD: 314 + case ACPI_TYPE_LOCAL_REGION_FIELD: 315 + case ACPI_TYPE_LOCAL_INDEX_FIELD: 316 + case ACPI_TYPE_LOCAL_BANK_FIELD: 317 + 318 + local_type = ACPI_TYPE_INTEGER; 319 + if (obj_desc) { 320 + /* 321 + * Returned object will be a Buffer if the field length 322 + * is larger than the size of an Integer (32 or 64 bits 323 + * depending on the DSDT version). 
324 + */ 325 + bit_length = obj_desc->common_field.bit_length; 326 + byte_length = ACPI_ROUND_BITS_UP_TO_BYTES(bit_length); 327 + if (bit_length > acpi_gbl_integer_bit_width) { 328 + local_type = ACPI_TYPE_BUFFER; 329 + } 330 + } 331 + break; 332 + 333 + default: 334 + 335 + /* Ignore all other types */ 336 + 337 + return (AE_OK); 338 + } 339 + 340 + /* Emit the common prefix: Type:Name */ 341 + 342 + acpi_os_printf("%14s: %4.4s", 343 + acpi_ut_get_type_name(node->type), node->name.ascii); 344 + if (!obj_desc) { 345 + acpi_os_printf(" Ignoring, no attached object\n"); 346 + return (AE_OK); 347 + } 348 + 349 + /* 350 + * Check for unsupported region types. Note: acpi_exec simulates 351 + * access to system_memory, system_IO, PCI_Config, and EC. 352 + */ 353 + switch (node->type) { 354 + case ACPI_TYPE_LOCAL_REGION_FIELD: 355 + 356 + region_obj = obj_desc->field.region_obj; 357 + switch (region_obj->region.space_id) { 358 + case ACPI_ADR_SPACE_SYSTEM_MEMORY: 359 + case ACPI_ADR_SPACE_SYSTEM_IO: 360 + case ACPI_ADR_SPACE_PCI_CONFIG: 361 + case ACPI_ADR_SPACE_EC: 362 + 363 + break; 364 + 365 + default: 366 + 367 + acpi_os_printf 368 + (" %s space is not supported [%4.4s]\n", 369 + acpi_ut_get_region_name(region_obj->region. 
370 + space_id), 371 + region_obj->region.node->name.ascii); 372 + return (AE_OK); 373 + } 374 + break; 375 + 376 + default: 377 + break; 378 + } 379 + 380 + /* At this point, we have resolved the object to one of the major types */ 381 + 382 + switch (local_type) { 383 + case ACPI_TYPE_INTEGER: 384 + 385 + status = acpi_db_test_integer_type(node, bit_length); 386 + break; 387 + 388 + case ACPI_TYPE_STRING: 389 + 390 + status = acpi_db_test_string_type(node, byte_length); 391 + break; 392 + 393 + case ACPI_TYPE_BUFFER: 394 + 395 + status = acpi_db_test_buffer_type(node, bit_length); 396 + break; 397 + 398 + default: 399 + 400 + acpi_os_printf(" Ignoring, type not implemented (%2.2X)", 401 + local_type); 402 + break; 403 + } 404 + 405 + switch (node->type) { 406 + case ACPI_TYPE_LOCAL_REGION_FIELD: 407 + 408 + region_obj = obj_desc->field.region_obj; 409 + acpi_os_printf(" (%s)", 410 + acpi_ut_get_region_name(region_obj->region. 411 + space_id)); 412 + break; 413 + 414 + default: 415 + break; 416 + } 417 + 418 + acpi_os_printf("\n"); 419 + return (status); 420 + } 421 + 422 + /******************************************************************************* 423 + * 424 + * FUNCTION: acpi_db_test_integer_type 425 + * 426 + * PARAMETERS: node - Parent NS node for the object 427 + * bit_length - Actual length of the object. Used for 428 + * support of arbitrary length field_unit 429 + * and buffer_field objects. 430 + * 431 + * RETURN: Status 432 + * 433 + * DESCRIPTION: Test read/write for an Integer-valued object. Performs a 434 + * write/read/compare of an arbitrary new value, then performs 435 + * a write/read/compare of the original value. 
436 + * 437 + ******************************************************************************/ 438 + 439 + static acpi_status 440 + acpi_db_test_integer_type(struct acpi_namespace_node *node, u32 bit_length) 441 + { 442 + union acpi_object *temp1 = NULL; 443 + union acpi_object *temp2 = NULL; 444 + union acpi_object *temp3 = NULL; 445 + union acpi_object write_value; 446 + u64 value_to_write; 447 + acpi_status status; 448 + 449 + if (bit_length > 64) { 450 + acpi_os_printf(" Invalid length for an Integer: %u", 451 + bit_length); 452 + return (AE_OK); 453 + } 454 + 455 + /* Read the original value */ 456 + 457 + status = acpi_db_read_from_object(node, ACPI_TYPE_INTEGER, &temp1); 458 + if (ACPI_FAILURE(status)) { 459 + return (status); 460 + } 461 + 462 + acpi_os_printf(" (%4.4X/%3.3X) %8.8X%8.8X", 463 + bit_length, ACPI_ROUND_BITS_UP_TO_BYTES(bit_length), 464 + ACPI_FORMAT_UINT64(temp1->integer.value)); 465 + 466 + value_to_write = ACPI_UINT64_MAX >> (64 - bit_length); 467 + if (temp1->integer.value == value_to_write) { 468 + value_to_write = 0; 469 + } 470 + 471 + /* Write a new value */ 472 + 473 + write_value.type = ACPI_TYPE_INTEGER; 474 + write_value.integer.value = value_to_write; 475 + status = acpi_db_write_to_object(node, &write_value); 476 + if (ACPI_FAILURE(status)) { 477 + goto exit; 478 + } 479 + 480 + /* Ensure that we can read back the new value */ 481 + 482 + status = acpi_db_read_from_object(node, ACPI_TYPE_INTEGER, &temp2); 483 + if (ACPI_FAILURE(status)) { 484 + goto exit; 485 + } 486 + 487 + if (temp2->integer.value != value_to_write) { 488 + acpi_os_printf(" MISMATCH 2: %8.8X%8.8X, expecting %8.8X%8.8X", 489 + ACPI_FORMAT_UINT64(temp2->integer.value), 490 + ACPI_FORMAT_UINT64(value_to_write)); 491 + } 492 + 493 + /* Write back the original value */ 494 + 495 + write_value.integer.value = temp1->integer.value; 496 + status = acpi_db_write_to_object(node, &write_value); 497 + if (ACPI_FAILURE(status)) { 498 + goto exit; 499 + } 500 + 501 + /* 
Ensure that we can read back the original value */ 502 + 503 + status = acpi_db_read_from_object(node, ACPI_TYPE_INTEGER, &temp3); 504 + if (ACPI_FAILURE(status)) { 505 + goto exit; 506 + } 507 + 508 + if (temp3->integer.value != temp1->integer.value) { 509 + acpi_os_printf(" MISMATCH 3: %8.8X%8.8X, expecting %8.8X%8.8X", 510 + ACPI_FORMAT_UINT64(temp3->integer.value), 511 + ACPI_FORMAT_UINT64(temp1->integer.value)); 512 + } 513 + 514 + exit: 515 + if (temp1) { 516 + acpi_os_free(temp1); 517 + } 518 + if (temp2) { 519 + acpi_os_free(temp2); 520 + } 521 + if (temp3) { 522 + acpi_os_free(temp3); 523 + } 524 + return (AE_OK); 525 + } 526 + 527 + /******************************************************************************* 528 + * 529 + * FUNCTION: acpi_db_test_buffer_type 530 + * 531 + * PARAMETERS: node - Parent NS node for the object 532 + * bit_length - Actual length of the object. 533 + * 534 + * RETURN: Status 535 + * 536 + * DESCRIPTION: Test read/write for a Buffer-valued object. Performs a 537 + * write/read/compare of an arbitrary new value, then performs 538 + * a write/read/compare of the original value. 
539 + * 540 + ******************************************************************************/ 541 + 542 + static acpi_status 543 + acpi_db_test_buffer_type(struct acpi_namespace_node *node, u32 bit_length) 544 + { 545 + union acpi_object *temp1 = NULL; 546 + union acpi_object *temp2 = NULL; 547 + union acpi_object *temp3 = NULL; 548 + u8 *buffer; 549 + union acpi_object write_value; 550 + acpi_status status; 551 + u32 byte_length; 552 + u32 i; 553 + u8 extra_bits; 554 + 555 + byte_length = ACPI_ROUND_BITS_UP_TO_BYTES(bit_length); 556 + if (byte_length == 0) { 557 + acpi_os_printf(" Ignoring zero length buffer"); 558 + return (AE_OK); 559 + } 560 + 561 + /* Allocate a local buffer */ 562 + 563 + buffer = ACPI_ALLOCATE_ZEROED(byte_length); 564 + if (!buffer) { 565 + return (AE_NO_MEMORY); 566 + } 567 + 568 + /* Read the original value */ 569 + 570 + status = acpi_db_read_from_object(node, ACPI_TYPE_BUFFER, &temp1); 571 + if (ACPI_FAILURE(status)) { 572 + goto exit; 573 + } 574 + 575 + /* Emit a few bytes of the buffer */ 576 + 577 + acpi_os_printf(" (%4.4X/%3.3X)", bit_length, temp1->buffer.length); 578 + for (i = 0; ((i < 4) && (i < byte_length)); i++) { 579 + acpi_os_printf(" %2.2X", temp1->buffer.pointer[i]); 580 + } 581 + acpi_os_printf("... "); 582 + 583 + /* 584 + * Write a new value. 585 + * 586 + * Handle possible extra bits at the end of the buffer. Can 587 + * happen for field_units larger than an integer, but the bit 588 + * count is not an integral number of bytes. Zero out the 589 + * unused bits. 
590 + */ 591 + memset(buffer, BUFFER_FILL_VALUE, byte_length); 592 + extra_bits = bit_length % 8; 593 + if (extra_bits) { 594 + buffer[byte_length - 1] = ACPI_MASK_BITS_ABOVE(extra_bits); 595 + } 596 + 597 + write_value.type = ACPI_TYPE_BUFFER; 598 + write_value.buffer.length = byte_length; 599 + write_value.buffer.pointer = buffer; 600 + 601 + status = acpi_db_write_to_object(node, &write_value); 602 + if (ACPI_FAILURE(status)) { 603 + goto exit; 604 + } 605 + 606 + /* Ensure that we can read back the new value */ 607 + 608 + status = acpi_db_read_from_object(node, ACPI_TYPE_BUFFER, &temp2); 609 + if (ACPI_FAILURE(status)) { 610 + goto exit; 611 + } 612 + 613 + if (memcmp(temp2->buffer.pointer, buffer, byte_length)) { 614 + acpi_os_printf(" MISMATCH 2: New buffer value"); 615 + } 616 + 617 + /* Write back the original value */ 618 + 619 + write_value.buffer.length = byte_length; 620 + write_value.buffer.pointer = temp1->buffer.pointer; 621 + 622 + status = acpi_db_write_to_object(node, &write_value); 623 + if (ACPI_FAILURE(status)) { 624 + goto exit; 625 + } 626 + 627 + /* Ensure that we can read back the original value */ 628 + 629 + status = acpi_db_read_from_object(node, ACPI_TYPE_BUFFER, &temp3); 630 + if (ACPI_FAILURE(status)) { 631 + goto exit; 632 + } 633 + 634 + if (memcmp(temp1->buffer.pointer, temp3->buffer.pointer, byte_length)) { 635 + acpi_os_printf(" MISMATCH 3: While restoring original buffer"); 636 + } 637 + 638 + exit: 639 + ACPI_FREE(buffer); 640 + if (temp1) { 641 + acpi_os_free(temp1); 642 + } 643 + if (temp2) { 644 + acpi_os_free(temp2); 645 + } 646 + if (temp3) { 647 + acpi_os_free(temp3); 648 + } 649 + return (status); 650 + } 651 + 652 + /******************************************************************************* 653 + * 654 + * FUNCTION: acpi_db_test_string_type 655 + * 656 + * PARAMETERS: node - Parent NS node for the object 657 + * byte_length - Actual length of the object. 
658 + * 659 + * RETURN: Status 660 + * 661 + * DESCRIPTION: Test read/write for a String-valued object. Performs a 662 + * write/read/compare of an arbitrary new value, then performs 663 + * a write/read/compare of the original value. 664 + * 665 + ******************************************************************************/ 666 + 667 + static acpi_status 668 + acpi_db_test_string_type(struct acpi_namespace_node *node, u32 byte_length) 669 + { 670 + union acpi_object *temp1 = NULL; 671 + union acpi_object *temp2 = NULL; 672 + union acpi_object *temp3 = NULL; 673 + char *value_to_write = "Test String from AML Debugger"; 674 + union acpi_object write_value; 675 + acpi_status status; 676 + 677 + /* Read the original value */ 678 + 679 + status = acpi_db_read_from_object(node, ACPI_TYPE_STRING, &temp1); 680 + if (ACPI_FAILURE(status)) { 681 + return (status); 682 + } 683 + 684 + acpi_os_printf(" (%4.4X/%3.3X) \"%s\"", (temp1->string.length * 8), 685 + temp1->string.length, temp1->string.pointer); 686 + 687 + /* Write a new value */ 688 + 689 + write_value.type = ACPI_TYPE_STRING; 690 + write_value.string.length = strlen(value_to_write); 691 + write_value.string.pointer = value_to_write; 692 + 693 + status = acpi_db_write_to_object(node, &write_value); 694 + if (ACPI_FAILURE(status)) { 695 + goto exit; 696 + } 697 + 698 + /* Ensure that we can read back the new value */ 699 + 700 + status = acpi_db_read_from_object(node, ACPI_TYPE_STRING, &temp2); 701 + if (ACPI_FAILURE(status)) { 702 + goto exit; 703 + } 704 + 705 + if (strcmp(temp2->string.pointer, value_to_write)) { 706 + acpi_os_printf(" MISMATCH 2: %s, expecting %s", 707 + temp2->string.pointer, value_to_write); 708 + } 709 + 710 + /* Write back the original value */ 711 + 712 + write_value.string.length = strlen(temp1->string.pointer); 713 + write_value.string.pointer = temp1->string.pointer; 714 + 715 + status = acpi_db_write_to_object(node, &write_value); 716 + if (ACPI_FAILURE(status)) { 717 + goto exit; 718 
+ } 719 + 720 + /* Ensure that we can read back the original value */ 721 + 722 + status = acpi_db_read_from_object(node, ACPI_TYPE_STRING, &temp3); 723 + if (ACPI_FAILURE(status)) { 724 + goto exit; 725 + } 726 + 727 + if (strcmp(temp1->string.pointer, temp3->string.pointer)) { 728 + acpi_os_printf(" MISMATCH 3: %s, expecting %s", 729 + temp3->string.pointer, temp1->string.pointer); 730 + } 731 + 732 + exit: 733 + if (temp1) { 734 + acpi_os_free(temp1); 735 + } 736 + if (temp2) { 737 + acpi_os_free(temp2); 738 + } 739 + if (temp3) { 740 + acpi_os_free(temp3); 741 + } 742 + return (status); 743 + } 744 + 745 + /******************************************************************************* 746 + * 747 + * FUNCTION: acpi_db_read_from_object 748 + * 749 + * PARAMETERS: node - Parent NS node for the object 750 + * expected_type - Object type expected from the read 751 + * value - Where the value read is returned 752 + * 753 + * RETURN: Status 754 + * 755 + * DESCRIPTION: Performs a read from the specified object by invoking the 756 + * special debugger control method that reads the object. Thus, 757 + * the AML interpreter is doing all of the work, increasing the 758 + * validity of the test. 
759 + * 760 + ******************************************************************************/ 761 + 762 + static acpi_status 763 + acpi_db_read_from_object(struct acpi_namespace_node *node, 764 + acpi_object_type expected_type, 765 + union acpi_object **value) 766 + { 767 + union acpi_object *ret_value; 768 + struct acpi_object_list param_objects; 769 + union acpi_object params[2]; 770 + struct acpi_buffer return_obj; 771 + acpi_status status; 772 + 773 + params[0].type = ACPI_TYPE_LOCAL_REFERENCE; 774 + params[0].reference.actual_type = node->type; 775 + params[0].reference.handle = ACPI_CAST_PTR(acpi_handle, node); 776 + 777 + param_objects.count = 1; 778 + param_objects.pointer = params; 779 + 780 + return_obj.length = ACPI_ALLOCATE_BUFFER; 781 + 782 + acpi_gbl_method_executing = TRUE; 783 + status = acpi_evaluate_object(read_handle, NULL, 784 + &param_objects, &return_obj); 785 + acpi_gbl_method_executing = FALSE; 786 + 787 + if (ACPI_FAILURE(status)) { 788 + acpi_os_printf("Could not read from object, %s", 789 + acpi_format_exception(status)); 790 + return (status); 791 + } 792 + 793 + ret_value = (union acpi_object *)return_obj.pointer; 794 + 795 + switch (ret_value->type) { 796 + case ACPI_TYPE_INTEGER: 797 + case ACPI_TYPE_BUFFER: 798 + case ACPI_TYPE_STRING: 799 + /* 800 + * Did we receive the type we wanted? Most important for the 801 + * Integer/Buffer case (when a field is larger than an Integer, 802 + * it should return a Buffer). 
803 + */ 804 + if (ret_value->type != expected_type) { 805 + acpi_os_printf 806 + (" Type mismatch: Expected %s, Received %s", 807 + acpi_ut_get_type_name(expected_type), 808 + acpi_ut_get_type_name(ret_value->type)); 809 + 810 + return (AE_TYPE); 811 + } 812 + 813 + *value = ret_value; 814 + break; 815 + 816 + default: 817 + 818 + acpi_os_printf(" Unsupported return object type, %s", 819 + acpi_ut_get_type_name(ret_value->type)); 820 + 821 + acpi_os_free(return_obj.pointer); 822 + return (AE_TYPE); 823 + } 824 + 825 + return (status); 826 + } 827 + 828 + /******************************************************************************* 829 + * 830 + * FUNCTION: acpi_db_write_to_object 831 + * 832 + * PARAMETERS: node - Parent NS node for the object 833 + * value - Value to be written 834 + * 835 + * RETURN: Status 836 + * 837 + * DESCRIPTION: Performs a write to the specified object by invoking the 838 + * special debugger control method that writes the object. Thus, 839 + * the AML interpreter is doing all of the work, increasing the 840 + * validity of the test. 
841 + * 842 + ******************************************************************************/ 843 + 844 + static acpi_status 845 + acpi_db_write_to_object(struct acpi_namespace_node *node, 846 + union acpi_object *value) 847 + { 848 + struct acpi_object_list param_objects; 849 + union acpi_object params[2]; 850 + acpi_status status; 851 + 852 + params[0].type = ACPI_TYPE_LOCAL_REFERENCE; 853 + params[0].reference.actual_type = node->type; 854 + params[0].reference.handle = ACPI_CAST_PTR(acpi_handle, node); 855 + 856 + /* Copy the incoming user parameter */ 857 + 858 + memcpy(&params[1], value, sizeof(union acpi_object)); 859 + 860 + param_objects.count = 2; 861 + param_objects.pointer = params; 862 + 863 + acpi_gbl_method_executing = TRUE; 864 + status = acpi_evaluate_object(write_handle, NULL, &param_objects, NULL); 865 + acpi_gbl_method_executing = FALSE; 866 + 867 + if (ACPI_FAILURE(status)) { 868 + acpi_os_printf("Could not write to object, %s", 869 + acpi_format_exception(status)); 870 + } 871 + 872 + return (status); 873 + } 874 + 875 + /******************************************************************************* 876 + * 877 + * FUNCTION: acpi_db_evaluate_all_predefined_names 878 + * 879 + * PARAMETERS: count_arg - Max number of methods to execute 880 + * 881 + * RETURN: None 882 + * 883 + * DESCRIPTION: Namespace batch execution. Execute predefined names in the 884 + * namespace, up to the max count, if specified. 
885 + * 886 + ******************************************************************************/ 887 + 888 + static void acpi_db_evaluate_all_predefined_names(char *count_arg) 889 + { 890 + struct acpi_db_execute_walk info; 891 + 892 + info.count = 0; 893 + info.max_count = ACPI_UINT32_MAX; 894 + 895 + if (count_arg) { 896 + info.max_count = strtoul(count_arg, NULL, 0); 897 + } 898 + 899 + /* Search all nodes in namespace */ 900 + 901 + (void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT, 902 + ACPI_UINT32_MAX, 903 + acpi_db_evaluate_one_predefined_name, NULL, 904 + (void *)&info, NULL); 905 + 906 + acpi_os_printf("Evaluated %u predefined names in the namespace\n", 907 + info.count); 908 + } 909 + 910 + /******************************************************************************* 911 + * 912 + * FUNCTION: acpi_db_evaluate_one_predefined_name 913 + * 914 + * PARAMETERS: Callback from walk_namespace 915 + * 916 + * RETURN: Status 917 + * 918 + * DESCRIPTION: Batch execution module. Currently only executes predefined 919 + * ACPI names. 
920 + * 921 + ******************************************************************************/ 922 + 923 + static acpi_status 924 + acpi_db_evaluate_one_predefined_name(acpi_handle obj_handle, 925 + u32 nesting_level, 926 + void *context, void **return_value) 927 + { 928 + struct acpi_namespace_node *node = 929 + (struct acpi_namespace_node *)obj_handle; 930 + struct acpi_db_execute_walk *info = 931 + (struct acpi_db_execute_walk *)context; 932 + char *pathname; 933 + const union acpi_predefined_info *predefined; 934 + struct acpi_device_info *obj_info; 935 + struct acpi_object_list param_objects; 936 + union acpi_object params[ACPI_METHOD_NUM_ARGS]; 937 + union acpi_object *this_param; 938 + struct acpi_buffer return_obj; 939 + acpi_status status; 940 + u16 arg_type_list; 941 + u8 arg_count; 942 + u8 arg_type; 943 + u32 i; 944 + 945 + /* The name must be a predefined ACPI name */ 946 + 947 + predefined = acpi_ut_match_predefined_method(node->name.ascii); 948 + if (!predefined) { 949 + return (AE_OK); 950 + } 951 + 952 + if (node->type == ACPI_TYPE_LOCAL_SCOPE) { 953 + return (AE_OK); 954 + } 955 + 956 + pathname = acpi_ns_get_external_pathname(node); 957 + if (!pathname) { 958 + return (AE_OK); 959 + } 960 + 961 + /* Get the object info for number of method parameters */ 962 + 963 + status = acpi_get_object_info(obj_handle, &obj_info); 964 + if (ACPI_FAILURE(status)) { 965 + ACPI_FREE(pathname); 966 + return (status); 967 + } 968 + 969 + param_objects.count = 0; 970 + param_objects.pointer = NULL; 971 + 972 + if (obj_info->type == ACPI_TYPE_METHOD) { 973 + 974 + /* Setup default parameters (with proper types) */ 975 + 976 + arg_type_list = predefined->info.argument_list; 977 + arg_count = METHOD_GET_ARG_COUNT(arg_type_list); 978 + 979 + /* 980 + * Setup the ACPI-required number of arguments, regardless of what 981 + * the actual method defines. If there is a difference, then the 982 + * method is wrong and a warning will be issued during execution. 
983 + */ 984 + this_param = params; 985 + for (i = 0; i < arg_count; i++) { 986 + arg_type = METHOD_GET_NEXT_TYPE(arg_type_list); 987 + this_param->type = arg_type; 988 + 989 + switch (arg_type) { 990 + case ACPI_TYPE_INTEGER: 991 + 992 + this_param->integer.value = 1; 993 + break; 994 + 995 + case ACPI_TYPE_STRING: 996 + 997 + this_param->string.pointer = 998 + "This is the default argument string"; 999 + this_param->string.length = 1000 + strlen(this_param->string.pointer); 1001 + break; 1002 + 1003 + case ACPI_TYPE_BUFFER: 1004 + 1005 + this_param->buffer.pointer = (u8 *)params; /* just a garbage buffer */ 1006 + this_param->buffer.length = 48; 1007 + break; 1008 + 1009 + case ACPI_TYPE_PACKAGE: 1010 + 1011 + this_param->package.elements = NULL; 1012 + this_param->package.count = 0; 1013 + break; 1014 + 1015 + default: 1016 + 1017 + acpi_os_printf 1018 + ("%s: Unsupported argument type: %u\n", 1019 + pathname, arg_type); 1020 + break; 1021 + } 1022 + 1023 + this_param++; 1024 + } 1025 + 1026 + param_objects.count = arg_count; 1027 + param_objects.pointer = params; 1028 + } 1029 + 1030 + ACPI_FREE(obj_info); 1031 + return_obj.pointer = NULL; 1032 + return_obj.length = ACPI_ALLOCATE_BUFFER; 1033 + 1034 + /* Do the actual method execution */ 1035 + 1036 + acpi_gbl_method_executing = TRUE; 1037 + 1038 + status = acpi_evaluate_object(node, NULL, &param_objects, &return_obj); 1039 + 1040 + acpi_os_printf("%-32s returned %s\n", 1041 + pathname, acpi_format_exception(status)); 1042 + acpi_gbl_method_executing = FALSE; 1043 + ACPI_FREE(pathname); 1044 + 1045 + /* Ignore status from method execution */ 1046 + 1047 + status = AE_OK; 1048 + 1049 + /* Update count, check if we have executed enough methods */ 1050 + 1051 + info->count++; 1052 + if (info->count >= info->max_count) { 1053 + status = AE_CTRL_TERMINATE; 1054 + } 1055 + 1056 + return (status); 1057 + }
drivers/acpi/acpica/dbutils.c
1 + /******************************************************************************* 2 + * 3 + * Module Name: dbutils - AML debugger utilities 4 + * 5 + ******************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + #include "acnamesp.h" 47 + #include "acdebug.h" 48 + 49 + #define _COMPONENT ACPI_CA_DEBUGGER 50 + ACPI_MODULE_NAME("dbutils") 51 + 52 + /* Local prototypes */ 53 + #ifdef ACPI_OBSOLETE_FUNCTIONS 54 + acpi_status acpi_db_second_pass_parse(union acpi_parse_object *root); 55 + 56 + void acpi_db_dump_buffer(u32 address); 57 + #endif 58 + 59 + static char *gbl_hex_to_ascii = "0123456789ABCDEF"; 60 + 61 + /******************************************************************************* 62 + * 63 + * FUNCTION: acpi_db_match_argument 64 + * 65 + * PARAMETERS: user_argument - User command line 66 + * arguments - Array of commands to match against 67 + * 68 + * RETURN: Index into command array or ACPI_TYPE_NOT_FOUND if not found 69 + * 70 + * DESCRIPTION: Search command array for a command match 71 + * 72 + ******************************************************************************/ 73 + 74 + acpi_object_type 75 + acpi_db_match_argument(char *user_argument, 76 + struct acpi_db_argument_info *arguments) 77 + { 78 + u32 i; 79 + 80 + if (!user_argument || user_argument[0] == 0) { 81 + return (ACPI_TYPE_NOT_FOUND); 82 + } 83 + 84 + for (i = 0; arguments[i].name; i++) { 85 + if (strstr(arguments[i].name, user_argument) == 86 + arguments[i].name) { 87 + return (i); 88 + } 89 + } 90 + 91 + /* Argument not recognized */ 92 + 93 + return (ACPI_TYPE_NOT_FOUND); 94 + } 95 + 96 + 
/******************************************************************************* 97 + * 98 + * FUNCTION: acpi_db_set_output_destination 99 + * 100 + * PARAMETERS: output_flags - Current flags word 101 + * 102 + * RETURN: None 103 + * 104 + * DESCRIPTION: Set the current destination for debugger output. Also sets 105 + * the debug output level accordingly. 106 + * 107 + ******************************************************************************/ 108 + 109 + void acpi_db_set_output_destination(u32 output_flags) 110 + { 111 + 112 + acpi_gbl_db_output_flags = (u8)output_flags; 113 + 114 + if ((output_flags & ACPI_DB_REDIRECTABLE_OUTPUT) && 115 + acpi_gbl_db_output_to_file) { 116 + acpi_dbg_level = acpi_gbl_db_debug_level; 117 + } else { 118 + acpi_dbg_level = acpi_gbl_db_console_debug_level; 119 + } 120 + } 121 + 122 + /******************************************************************************* 123 + * 124 + * FUNCTION: acpi_db_dump_external_object 125 + * 126 + * PARAMETERS: obj_desc - External ACPI object to dump 127 + * level - Nesting level. 
128 + * 129 + * RETURN: None 130 + * 131 + * DESCRIPTION: Dump the contents of an ACPI external object 132 + * 133 + ******************************************************************************/ 134 + 135 + void acpi_db_dump_external_object(union acpi_object *obj_desc, u32 level) 136 + { 137 + u32 i; 138 + 139 + if (!obj_desc) { 140 + acpi_os_printf("[Null Object]\n"); 141 + return; 142 + } 143 + 144 + for (i = 0; i < level; i++) { 145 + acpi_os_printf(" "); 146 + } 147 + 148 + switch (obj_desc->type) { 149 + case ACPI_TYPE_ANY: 150 + 151 + acpi_os_printf("[Null Object] (Type=0)\n"); 152 + break; 153 + 154 + case ACPI_TYPE_INTEGER: 155 + 156 + acpi_os_printf("[Integer] = %8.8X%8.8X\n", 157 + ACPI_FORMAT_UINT64(obj_desc->integer.value)); 158 + break; 159 + 160 + case ACPI_TYPE_STRING: 161 + 162 + acpi_os_printf("[String] Length %.2X = ", 163 + obj_desc->string.length); 164 + acpi_ut_print_string(obj_desc->string.pointer, ACPI_UINT8_MAX); 165 + acpi_os_printf("\n"); 166 + break; 167 + 168 + case ACPI_TYPE_BUFFER: 169 + 170 + acpi_os_printf("[Buffer] Length %.2X = ", 171 + obj_desc->buffer.length); 172 + if (obj_desc->buffer.length) { 173 + if (obj_desc->buffer.length > 16) { 174 + acpi_os_printf("\n"); 175 + } 176 + acpi_ut_debug_dump_buffer(ACPI_CAST_PTR 177 + (u8, 178 + obj_desc->buffer.pointer), 179 + obj_desc->buffer.length, 180 + DB_BYTE_DISPLAY, _COMPONENT); 181 + } else { 182 + acpi_os_printf("\n"); 183 + } 184 + break; 185 + 186 + case ACPI_TYPE_PACKAGE: 187 + 188 + acpi_os_printf("[Package] Contains %u Elements:\n", 189 + obj_desc->package.count); 190 + 191 + for (i = 0; i < obj_desc->package.count; i++) { 192 + acpi_db_dump_external_object(&obj_desc->package. 
193 + elements[i], level + 1); 194 + } 195 + break; 196 + 197 + case ACPI_TYPE_LOCAL_REFERENCE: 198 + 199 + acpi_os_printf("[Object Reference] = "); 200 + acpi_db_display_internal_object(obj_desc->reference.handle, 201 + NULL); 202 + break; 203 + 204 + case ACPI_TYPE_PROCESSOR: 205 + 206 + acpi_os_printf("[Processor]\n"); 207 + break; 208 + 209 + case ACPI_TYPE_POWER: 210 + 211 + acpi_os_printf("[Power Resource]\n"); 212 + break; 213 + 214 + default: 215 + 216 + acpi_os_printf("[Unknown Type] %X\n", obj_desc->type); 217 + break; 218 + } 219 + } 220 + 221 + /******************************************************************************* 222 + * 223 + * FUNCTION: acpi_db_prep_namestring 224 + * 225 + * PARAMETERS: name - String to prepare 226 + * 227 + * RETURN: None 228 + * 229 + * DESCRIPTION: Translate all forward slashes and dots to backslashes. 230 + * 231 + ******************************************************************************/ 232 + 233 + void acpi_db_prep_namestring(char *name) 234 + { 235 + 236 + if (!name) { 237 + return; 238 + } 239 + 240 + acpi_ut_strupr(name); 241 + 242 + /* Convert a leading forward slash to a backslash */ 243 + 244 + if (*name == '/') { 245 + *name = '\\'; 246 + } 247 + 248 + /* Ignore a leading backslash, this is the root prefix */ 249 + 250 + if (ACPI_IS_ROOT_PREFIX(*name)) { 251 + name++; 252 + } 253 + 254 + /* Convert all slash path separators to dots */ 255 + 256 + while (*name) { 257 + if ((*name == '/') || (*name == '\\')) { 258 + *name = '.'; 259 + } 260 + 261 + name++; 262 + } 263 + } 264 + 265 + /******************************************************************************* 266 + * 267 + * FUNCTION: acpi_db_local_ns_lookup 268 + * 269 + * PARAMETERS: name - Name to lookup 270 + * 271 + * RETURN: Pointer to a namespace node, null on failure 272 + * 273 + * DESCRIPTION: Lookup a name in the ACPI namespace 274 + * 275 + * Note: Currently begins search from the root. 
Could be enhanced to use 276 + * the current prefix (scope) node as the search beginning point. 277 + * 278 + ******************************************************************************/ 279 + 280 + struct acpi_namespace_node *acpi_db_local_ns_lookup(char *name) 281 + { 282 + char *internal_path; 283 + acpi_status status; 284 + struct acpi_namespace_node *node = NULL; 285 + 286 + acpi_db_prep_namestring(name); 287 + 288 + /* Build an internal namestring */ 289 + 290 + status = acpi_ns_internalize_name(name, &internal_path); 291 + if (ACPI_FAILURE(status)) { 292 + acpi_os_printf("Invalid namestring: %s\n", name); 293 + return (NULL); 294 + } 295 + 296 + /* 297 + * Lookup the name. 298 + * (Uses root node as the search starting point) 299 + */ 300 + status = acpi_ns_lookup(NULL, internal_path, ACPI_TYPE_ANY, 301 + ACPI_IMODE_EXECUTE, 302 + ACPI_NS_NO_UPSEARCH | ACPI_NS_DONT_OPEN_SCOPE, 303 + NULL, &node); 304 + if (ACPI_FAILURE(status)) { 305 + acpi_os_printf("Could not locate name: %s, %s\n", 306 + name, acpi_format_exception(status)); 307 + } 308 + 309 + ACPI_FREE(internal_path); 310 + return (node); 311 + } 312 + 313 + /******************************************************************************* 314 + * 315 + * FUNCTION: acpi_db_uint32_to_hex_string 316 + * 317 + * PARAMETERS: value - The value to be converted to string 318 + * buffer - Buffer for result (not less than 11 bytes) 319 + * 320 + * RETURN: None 321 + * 322 + * DESCRIPTION: Convert the unsigned 32-bit value to the hexadecimal image 323 + * 324 + * NOTE: It is the caller's responsibility to ensure that the length of buffer 325 + * is sufficient. 
326 + * 327 + ******************************************************************************/ 328 + 329 + void acpi_db_uint32_to_hex_string(u32 value, char *buffer) 330 + { 331 + int i; 332 + 333 + if (value == 0) { 334 + strcpy(buffer, "0"); 335 + return; 336 + } 337 + 338 + buffer[8] = '\0'; 339 + 340 + for (i = 7; i >= 0; i--) { 341 + buffer[i] = gbl_hex_to_ascii[value & 0x0F]; 342 + value = value >> 4; 343 + } 344 + } 345 + 346 + #ifdef ACPI_OBSOLETE_FUNCTIONS 347 + /******************************************************************************* 348 + * 349 + * FUNCTION: acpi_db_second_pass_parse 350 + * 351 + * PARAMETERS: root - Root of the parse tree 352 + * 353 + * RETURN: Status 354 + * 355 + * DESCRIPTION: Second pass parse of the ACPI tables. We need to wait until 356 + * second pass to parse the control methods 357 + * 358 + ******************************************************************************/ 359 + 360 + acpi_status acpi_db_second_pass_parse(union acpi_parse_object *root) 361 + { 362 + union acpi_parse_object *op = root; 363 + union acpi_parse_object *method; 364 + union acpi_parse_object *search_op; 365 + union acpi_parse_object *start_op; 366 + acpi_status status = AE_OK; 367 + u32 base_aml_offset; 368 + struct acpi_walk_state *walk_state; 369 + 370 + ACPI_FUNCTION_ENTRY(); 371 + 372 + acpi_os_printf("Pass two parse ....\n"); 373 + 374 + while (op) { 375 + if (op->common.aml_opcode == AML_METHOD_OP) { 376 + method = op; 377 + 378 + /* Create a new walk state for the parse */ 379 + 380 + walk_state = 381 + acpi_ds_create_walk_state(0, NULL, NULL, NULL); 382 + if (!walk_state) { 383 + return (AE_NO_MEMORY); 384 + } 385 + 386 + /* Init the Walk State */ 387 + 388 + walk_state->parser_state.aml = 389 + walk_state->parser_state.aml_start = 390 + method->named.data; 391 + walk_state->parser_state.aml_end = 392 + walk_state->parser_state.pkg_end = 393 + method->named.data + method->named.length; 394 + walk_state->parser_state.start_scope = op; 395 
+ 396 + walk_state->descending_callback = 397 + acpi_ds_load1_begin_op; 398 + walk_state->ascending_callback = acpi_ds_load1_end_op; 399 + 400 + /* Perform the AML parse */ 401 + 402 + status = acpi_ps_parse_aml(walk_state); 403 + 404 + base_aml_offset = 405 + (method->common.value.arg)->common.aml_offset + 1; 406 + start_op = (method->common.value.arg)->common.next; 407 + search_op = start_op; 408 + 409 + while (search_op) { 410 + search_op->common.aml_offset += base_aml_offset; 411 + search_op = 412 + acpi_ps_get_depth_next(start_op, search_op); 413 + } 414 + } 415 + 416 + if (op->common.aml_opcode == AML_REGION_OP) { 417 + 418 + /* TBD: [Investigate] this isn't quite the right thing to do! */ 419 + /* 420 + * 421 + * Method = (ACPI_DEFERRED_OP *) Op; 422 + * Status = acpi_ps_parse_aml (Op, Method->Body, Method->body_length); 423 + */ 424 + } 425 + 426 + if (ACPI_FAILURE(status)) { 427 + break; 428 + } 429 + 430 + op = acpi_ps_get_depth_next(root, op); 431 + } 432 + 433 + return (status); 434 + } 435 + 436 + /******************************************************************************* 437 + * 438 + * FUNCTION: acpi_db_dump_buffer 439 + * 440 + * PARAMETERS: address - Pointer to the buffer 441 + * 442 + * RETURN: None 443 + * 444 + * DESCRIPTION: Print a portion of a buffer 445 + * 446 + ******************************************************************************/ 447 + 448 + void acpi_db_dump_buffer(u32 address) 449 + { 450 + 451 + acpi_os_printf("\nLocation %X:\n", address); 452 + 453 + acpi_dbg_level |= ACPI_LV_TABLES; 454 + acpi_ut_debug_dump_buffer(ACPI_TO_POINTER(address), 64, DB_BYTE_DISPLAY, 455 + ACPI_UINT32_MAX); 456 + } 457 + #endif
+513
drivers/acpi/acpica/dbxface.c
··· 1 + /******************************************************************************* 2 + * 3 + * Module Name: dbxface - AML Debugger external interfaces 4 + * 5 + ******************************************************************************/ 6 + 7 + /* 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 + * All rights reserved. 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions 13 + * are met: 14 + * 1. Redistributions of source code must retain the above copyright 15 + * notice, this list of conditions, and the following disclaimer, 16 + * without modification. 17 + * 2. Redistributions in binary form must reproduce at minimum a disclaimer 18 + * substantially similar to the "NO WARRANTY" disclaimer below 19 + * ("Disclaimer") and any redistribution must be conditioned upon 20 + * including a substantially similar Disclaimer requirement for further 21 + * binary redistribution. 22 + * 3. Neither the names of the above-listed copyright holders nor the names 23 + * of any contributors may be used to endorse or promote products derived 24 + * from this software without specific prior written permission. 25 + * 26 + * Alternatively, this software may be distributed under the terms of the 27 + * GNU General Public License ("GPL") version 2 as published by the Free 28 + * Software Foundation. 29 + * 30 + * NO WARRANTY 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 36 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 37 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 38 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, 39 + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 40 + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 41 + * POSSIBILITY OF SUCH DAMAGES. 42 + */ 43 + 44 + #include <acpi/acpi.h> 45 + #include "accommon.h" 46 + #include "amlcode.h" 47 + #include "acdebug.h" 48 + 49 + #define _COMPONENT ACPI_CA_DEBUGGER 50 + ACPI_MODULE_NAME("dbxface") 51 + 52 + /* Local prototypes */ 53 + static acpi_status 54 + acpi_db_start_command(struct acpi_walk_state *walk_state, 55 + union acpi_parse_object *op); 56 + 57 + #ifdef ACPI_OBSOLETE_FUNCTIONS 58 + void acpi_db_method_end(struct acpi_walk_state *walk_state); 59 + #endif 60 + 61 + /******************************************************************************* 62 + * 63 + * FUNCTION: acpi_db_start_command 64 + * 65 + * PARAMETERS: walk_state - Current walk 66 + * op - Current executing Op, from AML interpreter 67 + * 68 + * RETURN: Status 69 + * 70 + * DESCRIPTION: Enter debugger command loop 71 + * 72 + ******************************************************************************/ 73 + 74 + static acpi_status 75 + acpi_db_start_command(struct acpi_walk_state *walk_state, 76 + union acpi_parse_object *op) 77 + { 78 + acpi_status status; 79 + 80 + /* TBD: [Investigate] are there namespace locking issues here? 
*/ 81 + 82 + /* acpi_ut_release_mutex (ACPI_MTX_NAMESPACE); */ 83 + 84 + /* Go into the command loop and await next user command */ 85 + 86 + acpi_gbl_method_executing = TRUE; 87 + status = AE_CTRL_TRUE; 88 + while (status == AE_CTRL_TRUE) { 89 + if (acpi_gbl_debugger_configuration == DEBUGGER_MULTI_THREADED) { 90 + 91 + /* Handshake with the front-end that gets user command lines */ 92 + 93 + acpi_os_release_mutex(acpi_gbl_db_command_complete); 94 + 95 + status = 96 + acpi_os_acquire_mutex(acpi_gbl_db_command_ready, 97 + ACPI_WAIT_FOREVER); 98 + if (ACPI_FAILURE(status)) { 99 + return (status); 100 + } 101 + } else { 102 + /* Single threaded, we must get a command line ourselves */ 103 + 104 + /* Force output to console until a command is entered */ 105 + 106 + acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT); 107 + 108 + /* Different prompt if method is executing */ 109 + 110 + if (!acpi_gbl_method_executing) { 111 + acpi_os_printf("%1c ", 112 + ACPI_DEBUGGER_COMMAND_PROMPT); 113 + } else { 114 + acpi_os_printf("%1c ", 115 + ACPI_DEBUGGER_EXECUTE_PROMPT); 116 + } 117 + 118 + /* Get the user input line */ 119 + 120 + status = acpi_os_get_line(acpi_gbl_db_line_buf, 121 + ACPI_DB_LINE_BUFFER_SIZE, 122 + NULL); 123 + if (ACPI_FAILURE(status)) { 124 + ACPI_EXCEPTION((AE_INFO, status, 125 + "While parsing command line")); 126 + return (status); 127 + } 128 + } 129 + 130 + status = 131 + acpi_db_command_dispatch(acpi_gbl_db_line_buf, walk_state, 132 + op); 133 + } 134 + 135 + /* acpi_ut_acquire_mutex (ACPI_MTX_NAMESPACE); */ 136 + 137 + return (status); 138 + } 139 + 140 + /******************************************************************************* 141 + * 142 + * FUNCTION: acpi_db_single_step 143 + * 144 + * PARAMETERS: walk_state - Current walk 145 + * op - Current executing op (from aml interpreter) 146 + * opcode_class - Class of the current AML Opcode 147 + * 148 + * RETURN: Status 149 + * 150 + * DESCRIPTION: Called just before execution of an AML 
opcode. 151 + * 152 + ******************************************************************************/ 153 + 154 + acpi_status 155 + acpi_db_single_step(struct acpi_walk_state * walk_state, 156 + union acpi_parse_object * op, u32 opcode_class) 157 + { 158 + union acpi_parse_object *next; 159 + acpi_status status = AE_OK; 160 + u32 original_debug_level; 161 + union acpi_parse_object *display_op; 162 + union acpi_parse_object *parent_op; 163 + u32 aml_offset; 164 + 165 + ACPI_FUNCTION_ENTRY(); 166 + 167 + #ifndef ACPI_APPLICATION 168 + if (acpi_gbl_db_thread_id != acpi_os_get_thread_id()) { 169 + return (AE_OK); 170 + } 171 + #endif 172 + 173 + /* Check the abort flag */ 174 + 175 + if (acpi_gbl_abort_method) { 176 + acpi_gbl_abort_method = FALSE; 177 + return (AE_ABORT_METHOD); 178 + } 179 + 180 + aml_offset = (u32)ACPI_PTR_DIFF(op->common.aml, 181 + walk_state->parser_state.aml_start); 182 + 183 + /* Check for single-step breakpoint */ 184 + 185 + if (walk_state->method_breakpoint && 186 + (walk_state->method_breakpoint <= aml_offset)) { 187 + 188 + /* Check if the breakpoint has been reached or passed */ 189 + /* Hit the breakpoint, resume single step, reset breakpoint */ 190 + 191 + acpi_os_printf("***Break*** at AML offset %X\n", aml_offset); 192 + acpi_gbl_cm_single_step = TRUE; 193 + acpi_gbl_step_to_next_call = FALSE; 194 + walk_state->method_breakpoint = 0; 195 + } 196 + 197 + /* Check for user breakpoint (Must be on exact Aml offset) */ 198 + 199 + else if (walk_state->user_breakpoint && 200 + (walk_state->user_breakpoint == aml_offset)) { 201 + acpi_os_printf("***UserBreakpoint*** at AML offset %X\n", 202 + aml_offset); 203 + acpi_gbl_cm_single_step = TRUE; 204 + acpi_gbl_step_to_next_call = FALSE; 205 + walk_state->method_breakpoint = 0; 206 + } 207 + 208 + /* 209 + * Check if this is an opcode that we are interested in -- 210 + * namely, opcodes that have arguments 211 + */ 212 + if (op->common.aml_opcode == AML_INT_NAMEDFIELD_OP) { 213 + return (AE_OK); 
214 + } 215 + 216 + switch (opcode_class) { 217 + case AML_CLASS_UNKNOWN: 218 + case AML_CLASS_ARGUMENT: /* constants, literals, etc. do nothing */ 219 + 220 + return (AE_OK); 221 + 222 + default: 223 + 224 + /* All other opcodes -- continue */ 225 + break; 226 + } 227 + 228 + /* 229 + * Under certain debug conditions, display this opcode and its operands 230 + */ 231 + if ((acpi_gbl_db_output_to_file) || 232 + (acpi_gbl_cm_single_step) || (acpi_dbg_level & ACPI_LV_PARSE)) { 233 + if ((acpi_gbl_db_output_to_file) || 234 + (acpi_dbg_level & ACPI_LV_PARSE)) { 235 + acpi_os_printf 236 + ("\n[AmlDebug] Next AML Opcode to execute:\n"); 237 + } 238 + 239 + /* 240 + * Display this op (and only this op - zero out the NEXT field 241 + * temporarily, and disable parser trace output for the duration of 242 + * the display because we don't want the extraneous debug output) 243 + */ 244 + original_debug_level = acpi_dbg_level; 245 + acpi_dbg_level &= ~(ACPI_LV_PARSE | ACPI_LV_FUNCTIONS); 246 + next = op->common.next; 247 + op->common.next = NULL; 248 + 249 + display_op = op; 250 + parent_op = op->common.parent; 251 + if (parent_op) { 252 + if ((walk_state->control_state) && 253 + (walk_state->control_state->common.state == 254 + ACPI_CONTROL_PREDICATE_EXECUTING)) { 255 + /* 256 + * We are executing the predicate of an IF or WHILE statement 257 + * Search upwards for the containing IF or WHILE so that the 258 + * entire predicate can be displayed. 
259 + */ 260 + while (parent_op) { 261 + if ((parent_op->common.aml_opcode == 262 + AML_IF_OP) 263 + || (parent_op->common.aml_opcode == 264 + AML_WHILE_OP)) { 265 + display_op = parent_op; 266 + break; 267 + } 268 + parent_op = parent_op->common.parent; 269 + } 270 + } else { 271 + while (parent_op) { 272 + if ((parent_op->common.aml_opcode == 273 + AML_IF_OP) 274 + || (parent_op->common.aml_opcode == 275 + AML_ELSE_OP) 276 + || (parent_op->common.aml_opcode == 277 + AML_SCOPE_OP) 278 + || (parent_op->common.aml_opcode == 279 + AML_METHOD_OP) 280 + || (parent_op->common.aml_opcode == 281 + AML_WHILE_OP)) { 282 + break; 283 + } 284 + display_op = parent_op; 285 + parent_op = parent_op->common.parent; 286 + } 287 + } 288 + } 289 + 290 + /* Now we can display it */ 291 + 292 + #ifdef ACPI_DISASSEMBLER 293 + acpi_dm_disassemble(walk_state, display_op, ACPI_UINT32_MAX); 294 + #endif 295 + 296 + if ((op->common.aml_opcode == AML_IF_OP) || 297 + (op->common.aml_opcode == AML_WHILE_OP)) { 298 + if (walk_state->control_state->common.value) { 299 + acpi_os_printf 300 + ("Predicate = [True], IF block was executed\n"); 301 + } else { 302 + acpi_os_printf 303 + ("Predicate = [False], Skipping IF block\n"); 304 + } 305 + } else if (op->common.aml_opcode == AML_ELSE_OP) { 306 + acpi_os_printf 307 + ("Predicate = [False], ELSE block was executed\n"); 308 + } 309 + 310 + /* Restore everything */ 311 + 312 + op->common.next = next; 313 + acpi_os_printf("\n"); 314 + if ((acpi_gbl_db_output_to_file) || 315 + (acpi_dbg_level & ACPI_LV_PARSE)) { 316 + acpi_os_printf("\n"); 317 + } 318 + acpi_dbg_level = original_debug_level; 319 + } 320 + 321 + /* If we are not single stepping, just continue executing the method */ 322 + 323 + if (!acpi_gbl_cm_single_step) { 324 + return (AE_OK); 325 + } 326 + 327 + /* 328 + * If we are executing a step-to-call command, 329 + * Check if this is a method call. 
330 + */ 331 + if (acpi_gbl_step_to_next_call) { 332 + if (op->common.aml_opcode != AML_INT_METHODCALL_OP) { 333 + 334 + /* Not a method call, just keep executing */ 335 + 336 + return (AE_OK); 337 + } 338 + 339 + /* Found a method call, stop executing */ 340 + 341 + acpi_gbl_step_to_next_call = FALSE; 342 + } 343 + 344 + /* 345 + * If the next opcode is a method call, we will "step over" it 346 + * by default. 347 + */ 348 + if (op->common.aml_opcode == AML_INT_METHODCALL_OP) { 349 + 350 + /* Force no more single stepping while executing called method */ 351 + 352 + acpi_gbl_cm_single_step = FALSE; 353 + 354 + /* 355 + * Set the breakpoint on/before the call, it will stop execution 356 + * as soon as we return 357 + */ 358 + walk_state->method_breakpoint = 1; /* Must be non-zero! */ 359 + } 360 + 361 + status = acpi_db_start_command(walk_state, op); 362 + 363 + /* User commands complete, continue execution of the interrupted method */ 364 + 365 + return (status); 366 + } 367 + 368 + /******************************************************************************* 369 + * 370 + * FUNCTION: acpi_initialize_debugger 371 + * 372 + * PARAMETERS: None 373 + * 374 + * RETURN: Status 375 + * 376 + * DESCRIPTION: Init and start debugger 377 + * 378 + ******************************************************************************/ 379 + 380 + acpi_status acpi_initialize_debugger(void) 381 + { 382 + acpi_status status; 383 + 384 + ACPI_FUNCTION_TRACE(acpi_initialize_debugger); 385 + 386 + /* Init globals */ 387 + 388 + acpi_gbl_db_buffer = NULL; 389 + acpi_gbl_db_filename = NULL; 390 + acpi_gbl_db_output_to_file = FALSE; 391 + 392 + acpi_gbl_db_debug_level = ACPI_LV_VERBOSITY2; 393 + acpi_gbl_db_console_debug_level = ACPI_NORMAL_DEFAULT | ACPI_LV_TABLES; 394 + acpi_gbl_db_output_flags = ACPI_DB_CONSOLE_OUTPUT; 395 + 396 + acpi_gbl_db_opt_no_ini_methods = FALSE; 397 + 398 + acpi_gbl_db_buffer = acpi_os_allocate(ACPI_DEBUG_BUFFER_SIZE); 399 + if (!acpi_gbl_db_buffer) { 400 + 
return_ACPI_STATUS(AE_NO_MEMORY); 401 + } 402 + memset(acpi_gbl_db_buffer, 0, ACPI_DEBUG_BUFFER_SIZE); 403 + 404 + /* Initial scope is the root */ 405 + 406 + acpi_gbl_db_scope_buf[0] = AML_ROOT_PREFIX; 407 + acpi_gbl_db_scope_buf[1] = 0; 408 + acpi_gbl_db_scope_node = acpi_gbl_root_node; 409 + 410 + /* Initialize user commands loop */ 411 + 412 + acpi_gbl_db_terminate_loop = FALSE; 413 + 414 + /* 415 + * If configured for multi-thread support, the debug executor runs in 416 + * a separate thread so that the front end can be in another address 417 + * space, environment, or even another machine. 418 + */ 419 + if (acpi_gbl_debugger_configuration & DEBUGGER_MULTI_THREADED) { 420 + 421 + /* These were created with one unit, grab it */ 422 + 423 + status = acpi_os_acquire_mutex(acpi_gbl_db_command_complete, 424 + ACPI_WAIT_FOREVER); 425 + if (ACPI_FAILURE(status)) { 426 + acpi_os_printf("Could not get debugger mutex\n"); 427 + return_ACPI_STATUS(status); 428 + } 429 + 430 + status = acpi_os_acquire_mutex(acpi_gbl_db_command_ready, 431 + ACPI_WAIT_FOREVER); 432 + if (ACPI_FAILURE(status)) { 433 + acpi_os_printf("Could not get debugger mutex\n"); 434 + return_ACPI_STATUS(status); 435 + } 436 + 437 + /* Create the debug execution thread to execute commands */ 438 + 439 + acpi_gbl_db_threads_terminated = FALSE; 440 + status = acpi_os_execute(OSL_DEBUGGER_MAIN_THREAD, 441 + acpi_db_execute_thread, NULL); 442 + if (ACPI_FAILURE(status)) { 443 + ACPI_EXCEPTION((AE_INFO, status, 444 + "Could not start debugger thread")); 445 + acpi_gbl_db_threads_terminated = TRUE; 446 + return_ACPI_STATUS(status); 447 + } 448 + } else { 449 + acpi_gbl_db_thread_id = acpi_os_get_thread_id(); 450 + } 451 + 452 + return_ACPI_STATUS(AE_OK); 453 + } 454 + 455 + ACPI_EXPORT_SYMBOL(acpi_initialize_debugger) 456 + 457 + /******************************************************************************* 458 + * 459 + * FUNCTION: acpi_terminate_debugger 460 + * 461 + * PARAMETERS: None 462 + * 463 + * 
RETURN: None 464 + * 465 + * DESCRIPTION: Stop debugger 466 + * 467 + ******************************************************************************/ 468 + void acpi_terminate_debugger(void) 469 + { 470 + 471 + /* Terminate the AML Debugger */ 472 + 473 + acpi_gbl_db_terminate_loop = TRUE; 474 + 475 + if (acpi_gbl_debugger_configuration & DEBUGGER_MULTI_THREADED) { 476 + acpi_os_release_mutex(acpi_gbl_db_command_ready); 477 + 478 + /* Wait the AML Debugger threads */ 479 + 480 + while (!acpi_gbl_db_threads_terminated) { 481 + acpi_os_sleep(100); 482 + } 483 + } 484 + 485 + if (acpi_gbl_db_buffer) { 486 + acpi_os_free(acpi_gbl_db_buffer); 487 + acpi_gbl_db_buffer = NULL; 488 + } 489 + 490 + /* Ensure that debug output is now disabled */ 491 + 492 + acpi_gbl_db_output_flags = ACPI_DB_DISABLE_OUTPUT; 493 + } 494 + 495 + ACPI_EXPORT_SYMBOL(acpi_terminate_debugger) 496 + 497 + /******************************************************************************* 498 + * 499 + * FUNCTION: acpi_set_debugger_thread_id 500 + * 501 + * PARAMETERS: thread_id - Debugger thread ID 502 + * 503 + * RETURN: None 504 + * 505 + * DESCRIPTION: Set debugger thread ID 506 + * 507 + ******************************************************************************/ 508 + void acpi_set_debugger_thread_id(acpi_thread_id thread_id) 509 + { 510 + acpi_gbl_db_thread_id = thread_id; 511 + } 512 + 513 + ACPI_EXPORT_SYMBOL(acpi_set_debugger_thread_id)
+1 -1
drivers/acpi/acpica/evxface.c
··· 405 405 } 406 406 407 407 ACPI_EXPORT_SYMBOL(acpi_install_exception_handler) 408 - #endif /* ACPI_FUTURE_USAGE */ 408 + #endif 409 409 410 410 #if (!ACPI_REDUCED_HARDWARE) 411 411 /*******************************************************************************
+1
drivers/acpi/acpica/exconvrt.c
··· 618 618 break; 619 619 620 620 case ARGI_TARGETREF: 621 + case ARGI_STORE_TARGET: 621 622 622 623 switch (destination_type) { 623 624 case ACPI_TYPE_INTEGER:
-1
drivers/acpi/acpica/exresolv.c
··· 209 209 * (i.e., dereference the package index) 210 210 * Delete the ref object, increment the returned object 211 211 */ 212 - acpi_ut_remove_reference(stack_desc); 213 212 acpi_ut_add_reference(obj_desc); 214 213 *stack_ptr = obj_desc; 215 214 } else {
+2
drivers/acpi/acpica/exresop.c
··· 307 307 case ARGI_TARGETREF: /* Allows implicit conversion rules before store */ 308 308 case ARGI_FIXED_TARGET: /* No implicit conversion before store to target */ 309 309 case ARGI_SIMPLE_TARGET: /* Name, Local, or arg - no implicit conversion */ 310 + case ARGI_STORE_TARGET: 311 + 310 312 /* 311 313 * Need an operand of type ACPI_TYPE_LOCAL_REFERENCE 312 314 * A Namespace Node is OK as-is
+91 -29
drivers/acpi/acpica/exstore.c
··· 137 137 /* Destination is not a Reference object */ 138 138 139 139 ACPI_ERROR((AE_INFO, 140 - "Target is not a Reference or Constant object - %s [%p]", 140 + "Target is not a Reference or Constant object - [%s] %p", 141 141 acpi_ut_get_object_type_name(dest_desc), 142 142 dest_desc)); 143 143 ··· 189 189 * displayed and otherwise has no effect -- see ACPI Specification 190 190 */ 191 191 ACPI_DEBUG_PRINT((ACPI_DB_EXEC, 192 - "**** Write to Debug Object: Object %p %s ****:\n\n", 192 + "**** Write to Debug Object: Object %p [%s] ****:\n\n", 193 193 source_desc, 194 194 acpi_ut_get_object_type_name(source_desc))); 195 195 ··· 341 341 /* All other types are invalid */ 342 342 343 343 ACPI_ERROR((AE_INFO, 344 - "Source must be Integer/Buffer/String type, not %s", 344 + "Source must be type [Integer/Buffer/String], found [%s]", 345 345 acpi_ut_get_object_type_name(source_desc))); 346 346 return_ACPI_STATUS(AE_AML_OPERAND_TYPE); 347 347 } ··· 352 352 break; 353 353 354 354 default: 355 - ACPI_ERROR((AE_INFO, "Target is not a Package or BufferField")); 356 - status = AE_AML_OPERAND_TYPE; 355 + ACPI_ERROR((AE_INFO, 356 + "Target is not of type [Package/BufferField]")); 357 + status = AE_AML_TARGET_TYPE; 357 358 break; 358 359 } 359 360 ··· 374 373 * 375 374 * DESCRIPTION: Store the object to the named object. 376 375 * 377 - * The Assignment of an object to a named object is handled here 378 - * The value passed in will replace the current value (if any) 379 - * with the input value. 376 + * The assignment of an object to a named object is handled here. 377 + * The value passed in will replace the current value (if any) 378 + * with the input value. 380 379 * 381 - * When storing into an object the data is converted to the 382 - * target object type then stored in the object. This means 383 - * that the target object type (for an initialized target) will 384 - * not be changed by a store operation. A copy_object can change 385 - * the target type, however. 
380 + * When storing into an object the data is converted to the 381 + * target object type then stored in the object. This means 382 + * that the target object type (for an initialized target) will 383 + * not be changed by a store operation. A copy_object can change 384 + * the target type, however. 386 385 * 387 - * The implicit_conversion flag is set to NO/FALSE only when 388 - * storing to an arg_x -- as per the rules of the ACPI spec. 386 + * The implicit_conversion flag is set to NO/FALSE only when 387 + * storing to an arg_x -- as per the rules of the ACPI spec. 389 388 * 390 - * Assumes parameters are already validated. 389 + * Assumes parameters are already validated. 391 390 * 392 391 ******************************************************************************/ 393 392 ··· 409 408 target_type = acpi_ns_get_type(node); 410 409 target_desc = acpi_ns_get_attached_object(node); 411 410 412 - ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Storing %p (%s) to node %p (%s)\n", 411 + ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Storing %p [%s] to node %p [%s]\n", 413 412 source_desc, 414 413 acpi_ut_get_object_type_name(source_desc), node, 415 414 acpi_ut_get_type_name(target_type))); 415 + 416 + /* Only limited target types possible for everything except copy_object */ 417 + 418 + if (walk_state->opcode != AML_COPY_OP) { 419 + /* 420 + * Only copy_object allows all object types to be overwritten. For 421 + * target_ref(s), there are restrictions on the object types that 422 + * are allowed. 
423 + * 424 + * Allowable operations/typing for Store: 425 + * 426 + * 1) Simple Store 427 + * Integer --> Integer (Named/Local/Arg) 428 + * String --> String (Named/Local/Arg) 429 + * Buffer --> Buffer (Named/Local/Arg) 430 + * Package --> Package (Named/Local/Arg) 431 + * 432 + * 2) Store with implicit conversion 433 + * Integer --> String or Buffer (Named) 434 + * String --> Integer or Buffer (Named) 435 + * Buffer --> Integer or String (Named) 436 + */ 437 + switch (target_type) { 438 + case ACPI_TYPE_PACKAGE: 439 + /* 440 + * Here, can only store a package to an existing package. 441 + * Storing a package to a Local/Arg is OK, and handled 442 + * elsewhere. 443 + */ 444 + if (walk_state->opcode == AML_STORE_OP) { 445 + if (source_desc->common.type != 446 + ACPI_TYPE_PACKAGE) { 447 + ACPI_ERROR((AE_INFO, 448 + "Cannot assign type [%s] to [Package] " 449 + "(source must be type Pkg)", 450 + acpi_ut_get_object_type_name 451 + (source_desc))); 452 + 453 + return_ACPI_STATUS(AE_AML_TARGET_TYPE); 454 + } 455 + break; 456 + } 457 + 458 + /* Fallthrough */ 459 + 460 + case ACPI_TYPE_DEVICE: 461 + case ACPI_TYPE_EVENT: 462 + case ACPI_TYPE_MUTEX: 463 + case ACPI_TYPE_REGION: 464 + case ACPI_TYPE_POWER: 465 + case ACPI_TYPE_PROCESSOR: 466 + case ACPI_TYPE_THERMAL: 467 + 468 + ACPI_ERROR((AE_INFO, 469 + "Target must be [Buffer/Integer/String/Reference], found [%s] (%4.4s)", 470 + acpi_ut_get_type_name(node->type), 471 + node->name.ascii)); 472 + 473 + return_ACPI_STATUS(AE_AML_TARGET_TYPE); 474 + 475 + default: 476 + break; 477 + } 478 + } 416 479 417 480 /* 418 481 * Resolve the source object to an actual value ··· 490 425 /* Do the actual store operation */ 491 426 492 427 switch (target_type) { 493 - case ACPI_TYPE_INTEGER: 494 - case ACPI_TYPE_STRING: 495 - case ACPI_TYPE_BUFFER: 496 428 /* 497 429 * The simple data types all support implicit source operand 498 430 * conversion before the store. 
499 431 */ 432 + case ACPI_TYPE_INTEGER: 433 + case ACPI_TYPE_STRING: 434 + case ACPI_TYPE_BUFFER: 500 435 501 436 if ((walk_state->opcode == AML_COPY_OP) || !implicit_conversion) { 502 437 /* ··· 532 467 new_desc->common.type); 533 468 534 469 ACPI_DEBUG_PRINT((ACPI_DB_EXEC, 535 - "Store %s into %s via Convert/Attach\n", 470 + "Store type [%s] into [%s] via Convert/Attach\n", 536 471 acpi_ut_get_object_type_name 537 472 (source_desc), 538 473 acpi_ut_get_object_type_name ··· 556 491 557 492 default: 558 493 /* 559 - * No conversions for all other types. Directly store a copy of 560 - * the source object. This is the ACPI spec-defined behavior for 561 - * the copy_object operator. 494 + * copy_object operator: No conversions for all other types. 495 + * Instead, directly store a copy of the source object. 562 496 * 563 - * NOTE: For the Store operator, this is a departure from the 564 - * ACPI spec, which states "If conversion is impossible, abort 565 - * the running control method". Instead, this code implements 566 - * "If conversion is impossible, treat the Store operation as 567 - * a CopyObject". 497 + * This is the ACPI spec-defined behavior for the copy_object 498 + * operator. (Note, for this default case, all normal 499 + * Store/Target operations exited above with an error). 568 500 */ 569 501 status = acpi_ex_store_direct_to_node(source_desc, node, 570 502 walk_state);
+3 -2
drivers/acpi/acpica/exstoren.c
··· 122 122 /* Conversion successful but still not a valid type */ 123 123 124 124 ACPI_ERROR((AE_INFO, 125 - "Cannot assign type %s to %s (must be type Int/Str/Buf)", 125 + "Cannot assign type [%s] to [%s] (must be type Int/Str/Buf)", 126 126 acpi_ut_get_object_type_name(source_desc), 127 127 acpi_ut_get_type_name(target_type))); 128 + 128 129 status = AE_AML_OPERAND_TYPE; 129 130 } 130 131 break; ··· 276 275 /* 277 276 * All other types come here. 278 277 */ 279 - ACPI_WARNING((AE_INFO, "Store into type %s not implemented", 278 + ACPI_WARNING((AE_INFO, "Store into type [%s] not implemented", 280 279 acpi_ut_get_object_type_name(dest_desc))); 281 280 282 281 status = AE_NOT_IMPLEMENTED;
-6
drivers/acpi/acpica/nsdump.c
··· 60 60 61 61 #if defined(ACPI_DEBUG_OUTPUT) || defined(ACPI_DEBUGGER) 62 62 63 - #ifdef ACPI_FUTURE_USAGE 64 63 static acpi_status 65 64 acpi_ns_dump_one_object_path(acpi_handle obj_handle, 66 65 u32 level, void *context, void **return_value); ··· 67 68 static acpi_status 68 69 acpi_ns_get_max_depth(acpi_handle obj_handle, 69 70 u32 level, void *context, void **return_value); 70 - #endif /* ACPI_FUTURE_USAGE */ 71 71 72 72 /******************************************************************************* 73 73 * ··· 623 625 return (AE_OK); 624 626 } 625 627 626 - #ifdef ACPI_FUTURE_USAGE 627 628 /******************************************************************************* 628 629 * 629 630 * FUNCTION: acpi_ns_dump_objects ··· 677 680 678 681 (void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE); 679 682 } 680 - #endif /* ACPI_FUTURE_USAGE */ 681 683 682 - #ifdef ACPI_FUTURE_USAGE 683 684 /******************************************************************************* 684 685 * 685 686 * FUNCTION: acpi_ns_dump_one_object_path, acpi_ns_get_max_depth ··· 805 810 806 811 (void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE); 807 812 } 808 - #endif /* ACPI_FUTURE_USAGE */ 809 813 810 814 /******************************************************************************* 811 815 *
+1 -1
drivers/acpi/acpica/nspredef.c
··· 226 226 { 227 227 union acpi_operand_object *return_object = *return_object_ptr; 228 228 acpi_status status = AE_OK; 229 - char type_buffer[48]; /* Room for 5 types */ 229 + char type_buffer[96]; /* Room for 10 types */ 230 230 231 231 /* A Namespace node should not get here, but make sure */ 232 232
-2
drivers/acpi/acpica/pstree.c
··· 183 183 } 184 184 } 185 185 186 - #ifdef ACPI_FUTURE_USAGE 187 186 /******************************************************************************* 188 187 * 189 188 * FUNCTION: acpi_ps_get_depth_next ··· 316 317 return (child); 317 318 } 318 319 #endif 319 - #endif /* ACPI_FUTURE_USAGE */
-2
drivers/acpi/acpica/psutils.c
··· 205 205 /* 206 206 * Get op's name (4-byte name segment) or 0 if unnamed 207 207 */ 208 - #ifdef ACPI_FUTURE_USAGE 209 208 u32 acpi_ps_get_name(union acpi_parse_object * op) 210 209 { 211 210 ··· 218 219 219 220 return (op->named.name); 220 221 } 221 - #endif /* ACPI_FUTURE_USAGE */ 222 222 223 223 /* 224 224 * Set op's name
-3
drivers/acpi/acpica/rsdump.c
··· 51 51 /* 52 52 * All functions in this module are used by the AML Debugger only 53 53 */ 54 - #if defined(ACPI_DEBUGGER) 55 54 /* Local prototypes */ 56 55 static void acpi_rs_out_string(char *title, char *value); 57 56 ··· 564 565 acpi_os_printf("%25s%2.2X : %4.4X\n", "Word", i, data[i]); 565 566 } 566 567 } 567 - 568 - #endif
-2
drivers/acpi/acpica/rsutils.c
··· 564 564 * 565 565 ******************************************************************************/ 566 566 567 - #ifdef ACPI_FUTURE_USAGE 568 567 acpi_status 569 568 acpi_rs_get_prs_method_data(struct acpi_namespace_node *node, 570 569 struct acpi_buffer *ret_buffer) ··· 595 596 acpi_ut_remove_reference(obj_desc); 596 597 return_ACPI_STATUS(status); 597 598 } 598 - #endif /* ACPI_FUTURE_USAGE */ 599 599 600 600 /******************************************************************************* 601 601 *
+2 -2
drivers/acpi/acpica/rsxface.c
··· 220 220 } 221 221 222 222 ACPI_EXPORT_SYMBOL(acpi_get_current_resources) 223 - #ifdef ACPI_FUTURE_USAGE 223 + 224 224 /******************************************************************************* 225 225 * 226 226 * FUNCTION: acpi_get_possible_resources ··· 262 262 } 263 263 264 264 ACPI_EXPORT_SYMBOL(acpi_get_possible_resources) 265 - #endif /* ACPI_FUTURE_USAGE */ 265 + 266 266 /******************************************************************************* 267 267 * 268 268 * FUNCTION: acpi_set_current_resources
+17 -4
drivers/acpi/acpica/utdecode.c
··· 232 232 233 233 char *acpi_ut_get_object_type_name(union acpi_operand_object *obj_desc) 234 234 { 235 + ACPI_FUNCTION_TRACE(ut_get_object_type_name); 235 236 236 237 if (!obj_desc) { 237 - return ("[NULL Object Descriptor]"); 238 + ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Null Object Descriptor\n")); 239 + return_PTR("[NULL Object Descriptor]"); 238 240 } 239 241 240 - return (acpi_ut_get_type_name(obj_desc->common.type)); 242 + /* These descriptor types share a common area */ 243 + 244 + if ((ACPI_GET_DESCRIPTOR_TYPE(obj_desc) != ACPI_DESC_TYPE_OPERAND) && 245 + (ACPI_GET_DESCRIPTOR_TYPE(obj_desc) != ACPI_DESC_TYPE_NAMED)) { 246 + ACPI_DEBUG_PRINT((ACPI_DB_EXEC, 247 + "Invalid object descriptor type: 0x%2.2X [%s] (%p)\n", 248 + ACPI_GET_DESCRIPTOR_TYPE(obj_desc), 249 + acpi_ut_get_descriptor_name(obj_desc), 250 + obj_desc)); 251 + 252 + return_PTR("Invalid object"); 253 + } 254 + 255 + return_PTR(acpi_ut_get_type_name(obj_desc->common.type)); 241 256 } 242 257 243 258 /******************************************************************************* ··· 422 407 "ACPI_MTX_Events", 423 408 "ACPI_MTX_Caches", 424 409 "ACPI_MTX_Memory", 425 - "ACPI_MTX_CommandComplete", 426 - "ACPI_MTX_CommandReady" 427 410 }; 428 411 429 412 char *acpi_ut_get_mutex_name(u32 mutex_id)
+6
drivers/acpi/acpica/utfileio.c
··· 45 45 #include "accommon.h" 46 46 #include "actables.h" 47 47 #include "acapps.h" 48 + #include "errno.h" 48 49 49 50 #ifdef ACPI_ASL_COMPILER 50 51 #include "aslcompiler.h" ··· 302 301 file = fopen(filename, "rb"); 303 302 if (!file) { 304 303 perror("Could not open input file"); 304 + 305 + if (errno == ENOENT) { 306 + return (AE_NOT_EXIST); 307 + } 308 + 305 309 return (status); 306 310 } 307 311
+13 -2
drivers/acpi/acpica/utinit.c
··· 241 241 acpi_gbl_disable_mem_tracking = FALSE; 242 242 #endif 243 243 244 - ACPI_DEBUGGER_EXEC(acpi_gbl_db_terminate_threads = FALSE); 245 - 246 244 return_ACPI_STATUS(AE_OK); 247 245 } 248 246 ··· 281 283 void acpi_ut_subsystem_shutdown(void) 282 284 { 283 285 ACPI_FUNCTION_TRACE(ut_subsystem_shutdown); 286 + 287 + /* Just exit if subsystem is already shutdown */ 288 + 289 + if (acpi_gbl_shutdown) { 290 + ACPI_ERROR((AE_INFO, "ACPI Subsystem is already terminated")); 291 + return_VOID; 292 + } 293 + 294 + /* Subsystem appears active, go ahead and shut it down */ 295 + 296 + acpi_gbl_shutdown = TRUE; 297 + acpi_gbl_startup_flags = 0; 298 + ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Shutting down ACPI Subsystem\n")); 284 299 285 300 #ifndef ACPI_ASL_COMPILER 286 301
+21
drivers/acpi/acpica/utmutex.c
··· 108 108 /* Create the reader/writer lock for namespace access */ 109 109 110 110 status = acpi_ut_create_rw_lock(&acpi_gbl_namespace_rw_lock); 111 + if (ACPI_FAILURE(status)) { 112 + return_ACPI_STATUS(status); 113 + } 114 + #ifdef ACPI_DEBUGGER 115 + 116 + /* Debugger Support */ 117 + 118 + status = acpi_os_create_mutex(&acpi_gbl_db_command_ready); 119 + if (ACPI_FAILURE(status)) { 120 + return_ACPI_STATUS(status); 121 + } 122 + 123 + status = acpi_os_create_mutex(&acpi_gbl_db_command_complete); 124 + #endif 125 + 111 126 return_ACPI_STATUS(status); 112 127 } 113 128 ··· 162 147 /* Delete the reader/writer lock */ 163 148 164 149 acpi_ut_delete_rw_lock(&acpi_gbl_namespace_rw_lock); 150 + 151 + #ifdef ACPI_DEBUGGER 152 + acpi_os_delete_mutex(acpi_gbl_db_command_ready); 153 + acpi_os_delete_mutex(acpi_gbl_db_command_complete); 154 + #endif 155 + 165 156 return_VOID; 166 157 } 167 158
+1 -18
drivers/acpi/acpica/utxface.c
··· 67 67 68 68 ACPI_FUNCTION_TRACE(acpi_terminate); 69 69 70 - /* Just exit if subsystem is already shutdown */ 71 - 72 - if (acpi_gbl_shutdown) { 73 - ACPI_ERROR((AE_INFO, "ACPI Subsystem is already terminated")); 74 - return_ACPI_STATUS(AE_OK); 75 - } 76 - 77 - /* Subsystem appears active, go ahead and shut it down */ 78 - 79 - acpi_gbl_shutdown = TRUE; 80 - acpi_gbl_startup_flags = 0; 81 - ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Shutting down ACPI Subsystem\n")); 82 - 83 - /* Terminate the AML Debugger if present */ 84 - 85 - ACPI_DEBUGGER_EXEC(acpi_gbl_db_terminate_threads = TRUE); 86 - 87 70 /* Shutdown and free all resources */ 88 71 89 72 acpi_ut_subsystem_shutdown(); ··· 253 270 } 254 271 255 272 ACPI_EXPORT_SYMBOL(acpi_install_initialization_handler) 256 - #endif /* ACPI_FUTURE_USAGE */ 273 + #endif 257 274 258 275 /***************************************************************************** 259 276 *
+733
drivers/acpi/cppc_acpi.c
··· 1 + /* 2 + * CPPC (Collaborative Processor Performance Control) methods used by CPUfreq drivers. 3 + * 4 + * (C) Copyright 2014, 2015 Linaro Ltd. 5 + * Author: Ashwin Chaugule <ashwin.chaugule@linaro.org> 6 + * 7 + * This program is free software; you can redistribute it and/or 8 + * modify it under the terms of the GNU General Public License 9 + * as published by the Free Software Foundation; version 2 10 + * of the License. 11 + * 12 + * CPPC describes a few methods for controlling CPU performance using 13 + * information from a per CPU table called CPC. This table is described in 14 + * the ACPI v5.0+ specification. The table consists of a list of 15 + * registers which may be memory mapped or hardware registers and also may 16 + * include some static integer values. 17 + * 18 + * CPU performance is on an abstract continuous scale as against a discretized 19 + * P-state scale which is tied to CPU frequency only. In brief, the basic 20 + * operation involves: 21 + * 22 + * - OS makes a CPU performance request. (Can provide min and max bounds) 23 + * 24 + * - Platform (such as BMC) is free to optimize request within requested bounds 25 + * depending on power/thermal budgets etc. 26 + * 27 + * - Platform conveys its decision back to OS 28 + * 29 + * The communication between OS and platform occurs through another medium 30 + * called (PCC) Platform Communication Channel. This is a generic mailbox like 31 + * mechanism which includes doorbell semantics to indicate register updates. 32 + * See drivers/mailbox/pcc.c for details on PCC. 33 + * 34 + * Finer details about the PCC and CPPC spec are available in the ACPI v5.1 and 35 + * above specifications. 36 + */ 37 + 38 + #define pr_fmt(fmt) "ACPI CPPC: " fmt 39 + 40 + #include <linux/cpufreq.h> 41 + #include <linux/delay.h> 42 + 43 + #include <acpi/cppc_acpi.h> 44 + /* 45 + * Lock to provide mutually exclusive access to the PCC 46 + * channel. e.g. 
When the remote updates the shared region 47 + * with new data, the reader needs to be protected from 48 + * other CPUs activity on the same channel. 49 + */ 50 + static DEFINE_SPINLOCK(pcc_lock); 51 + 52 + /* 53 + * The cpc_desc structure contains the ACPI register details 54 + * as described in the per CPU _CPC tables. The details 55 + * include the type of register (e.g. PCC, System IO, FFH etc.) 56 + * and destination addresses which lets us READ/WRITE CPU performance 57 + * information using the appropriate I/O methods. 58 + */ 59 + static DEFINE_PER_CPU(struct cpc_desc *, cpc_desc_ptr); 60 + 61 + /* This layer handles all the PCC specifics for CPPC. */ 62 + static struct mbox_chan *pcc_channel; 63 + static void __iomem *pcc_comm_addr; 64 + static u64 comm_base_addr; 65 + static int pcc_subspace_idx = -1; 66 + static u16 pcc_cmd_delay; 67 + static bool pcc_channel_acquired; 68 + 69 + /* 70 + * Arbitrary Retries in case the remote processor is slow to respond 71 + * to PCC commands. 72 + */ 73 + #define NUM_RETRIES 500 74 + 75 + static int send_pcc_cmd(u16 cmd) 76 + { 77 + int retries, result = -EIO; 78 + struct acpi_pcct_hw_reduced *pcct_ss = pcc_channel->con_priv; 79 + struct acpi_pcct_shared_memory *generic_comm_base = 80 + (struct acpi_pcct_shared_memory *) pcc_comm_addr; 81 + u32 cmd_latency = pcct_ss->latency; 82 + 83 + /* Min time OS should wait before sending next command. */ 84 + udelay(pcc_cmd_delay); 85 + 86 + /* Write to the shared comm region. */ 87 + writew(cmd, &generic_comm_base->command); 88 + 89 + /* Flip CMD COMPLETE bit */ 90 + writew(0, &generic_comm_base->status); 91 + 92 + /* Ring doorbell */ 93 + result = mbox_send_message(pcc_channel, &cmd); 94 + if (result < 0) { 95 + pr_err("Err sending PCC mbox message. cmd:%d, ret:%d\n", 96 + cmd, result); 97 + return result; 98 + } 99 + 100 + /* Wait for a nominal time to let platform process command. 
*/ 101 + udelay(cmd_latency); 102 + 103 + /* Retry in case the remote processor was too slow to catch up. */ 104 + for (retries = NUM_RETRIES; retries > 0; retries--) { 105 + if (readw_relaxed(&generic_comm_base->status) & PCC_CMD_COMPLETE) { 106 + result = 0; 107 + break; 108 + } 109 + } 110 + 111 + mbox_client_txdone(pcc_channel, result); 112 + return result; 113 + } 114 + 115 + static void cppc_chan_tx_done(struct mbox_client *cl, void *msg, int ret) 116 + { 117 + if (ret) 118 + pr_debug("TX did not complete: CMD sent:%x, ret:%d\n", 119 + *(u16 *)msg, ret); 120 + else 121 + pr_debug("TX completed. CMD sent:%x, ret:%d\n", 122 + *(u16 *)msg, ret); 123 + } 124 + 125 + struct mbox_client cppc_mbox_cl = { 126 + .tx_done = cppc_chan_tx_done, 127 + .knows_txdone = true, 128 + }; 129 + 130 + static int acpi_get_psd(struct cpc_desc *cpc_ptr, acpi_handle handle) 131 + { 132 + int result = -EFAULT; 133 + acpi_status status = AE_OK; 134 + struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL}; 135 + struct acpi_buffer format = {sizeof("NNNNN"), "NNNNN"}; 136 + struct acpi_buffer state = {0, NULL}; 137 + union acpi_object *psd = NULL; 138 + struct acpi_psd_package *pdomain; 139 + 140 + status = acpi_evaluate_object_typed(handle, "_PSD", NULL, &buffer, 141 + ACPI_TYPE_PACKAGE); 142 + if (ACPI_FAILURE(status)) 143 + return -ENODEV; 144 + 145 + psd = buffer.pointer; 146 + if (!psd || psd->package.count != 1) { 147 + pr_debug("Invalid _PSD data\n"); 148 + goto end; 149 + } 150 + 151 + pdomain = &(cpc_ptr->domain_info); 152 + 153 + state.length = sizeof(struct acpi_psd_package); 154 + state.pointer = pdomain; 155 + 156 + status = acpi_extract_package(&(psd->package.elements[0]), 157 + &format, &state); 158 + if (ACPI_FAILURE(status)) { 159 + pr_debug("Invalid _PSD data for CPU:%d\n", cpc_ptr->cpu_id); 160 + goto end; 161 + } 162 + 163 + if (pdomain->num_entries != ACPI_PSD_REV0_ENTRIES) { 164 + pr_debug("Unknown _PSD:num_entries for CPU:%d\n", cpc_ptr->cpu_id); 165 + goto end; 
166 + } 167 + 168 + if (pdomain->revision != ACPI_PSD_REV0_REVISION) { 169 + pr_debug("Unknown _PSD:revision for CPU: %d\n", cpc_ptr->cpu_id); 170 + goto end; 171 + } 172 + 173 + if (pdomain->coord_type != DOMAIN_COORD_TYPE_SW_ALL && 174 + pdomain->coord_type != DOMAIN_COORD_TYPE_SW_ANY && 175 + pdomain->coord_type != DOMAIN_COORD_TYPE_HW_ALL) { 176 + pr_debug("Invalid _PSD:coord_type for CPU:%d\n", cpc_ptr->cpu_id); 177 + goto end; 178 + } 179 + 180 + result = 0; 181 + end: 182 + kfree(buffer.pointer); 183 + return result; 184 + } 185 + 186 + /** 187 + * acpi_get_psd_map - Map the CPUs in a common freq domain. 188 + * @all_cpu_data: Ptrs to CPU specific CPPC data including PSD info. 189 + * 190 + * Return: 0 for success or negative value for err. 191 + */ 192 + int acpi_get_psd_map(struct cpudata **all_cpu_data) 193 + { 194 + int count_target; 195 + int retval = 0; 196 + unsigned int i, j; 197 + cpumask_var_t covered_cpus; 198 + struct cpudata *pr, *match_pr; 199 + struct acpi_psd_package *pdomain; 200 + struct acpi_psd_package *match_pdomain; 201 + struct cpc_desc *cpc_ptr, *match_cpc_ptr; 202 + 203 + if (!zalloc_cpumask_var(&covered_cpus, GFP_KERNEL)) 204 + return -ENOMEM; 205 + 206 + /* 207 + * Now that we have _PSD data from all CPUs, lets setup P-state 208 + * domain info. 
209 + */ 210 + for_each_possible_cpu(i) { 211 + pr = all_cpu_data[i]; 212 + if (!pr) 213 + continue; 214 + 215 + if (cpumask_test_cpu(i, covered_cpus)) 216 + continue; 217 + 218 + cpc_ptr = per_cpu(cpc_desc_ptr, i); 219 + if (!cpc_ptr) 220 + continue; 221 + 222 + pdomain = &(cpc_ptr->domain_info); 223 + cpumask_set_cpu(i, pr->shared_cpu_map); 224 + cpumask_set_cpu(i, covered_cpus); 225 + if (pdomain->num_processors <= 1) 226 + continue; 227 + 228 + /* Validate the Domain info */ 229 + count_target = pdomain->num_processors; 230 + if (pdomain->coord_type == DOMAIN_COORD_TYPE_SW_ALL) 231 + pr->shared_type = CPUFREQ_SHARED_TYPE_ALL; 232 + else if (pdomain->coord_type == DOMAIN_COORD_TYPE_HW_ALL) 233 + pr->shared_type = CPUFREQ_SHARED_TYPE_HW; 234 + else if (pdomain->coord_type == DOMAIN_COORD_TYPE_SW_ANY) 235 + pr->shared_type = CPUFREQ_SHARED_TYPE_ANY; 236 + 237 + for_each_possible_cpu(j) { 238 + if (i == j) 239 + continue; 240 + 241 + match_cpc_ptr = per_cpu(cpc_desc_ptr, j); 242 + if (!match_cpc_ptr) 243 + continue; 244 + 245 + match_pdomain = &(match_cpc_ptr->domain_info); 246 + if (match_pdomain->domain != pdomain->domain) 247 + continue; 248 + 249 + /* Here i and j are in the same domain */ 250 + if (match_pdomain->num_processors != count_target) { 251 + retval = -EFAULT; 252 + goto err_ret; 253 + } 254 + 255 + if (pdomain->coord_type != match_pdomain->coord_type) { 256 + retval = -EFAULT; 257 + goto err_ret; 258 + } 259 + 260 + cpumask_set_cpu(j, covered_cpus); 261 + cpumask_set_cpu(j, pr->shared_cpu_map); 262 + } 263 + 264 + for_each_possible_cpu(j) { 265 + if (i == j) 266 + continue; 267 + 268 + match_pr = all_cpu_data[j]; 269 + if (!match_pr) 270 + continue; 271 + 272 + match_cpc_ptr = per_cpu(cpc_desc_ptr, j); 273 + if (!match_cpc_ptr) 274 + continue; 275 + 276 + match_pdomain = &(match_cpc_ptr->domain_info); 277 + if (match_pdomain->domain != pdomain->domain) 278 + continue; 279 + 280 + match_pr->shared_type = pr->shared_type; 281 + 
cpumask_copy(match_pr->shared_cpu_map, 282 + pr->shared_cpu_map); 283 + } 284 + } 285 + 286 + err_ret: 287 + for_each_possible_cpu(i) { 288 + pr = all_cpu_data[i]; 289 + if (!pr) 290 + continue; 291 + 292 + /* Assume no coordination on any error parsing domain info */ 293 + if (retval) { 294 + cpumask_clear(pr->shared_cpu_map); 295 + cpumask_set_cpu(i, pr->shared_cpu_map); 296 + pr->shared_type = CPUFREQ_SHARED_TYPE_ALL; 297 + } 298 + } 299 + 300 + free_cpumask_var(covered_cpus); 301 + return retval; 302 + } 303 + EXPORT_SYMBOL_GPL(acpi_get_psd_map); 304 + 305 + static int register_pcc_channel(int pcc_subspace_idx) 306 + { 307 + struct acpi_pcct_subspace *cppc_ss; 308 + unsigned int len; 309 + 310 + if (pcc_subspace_idx >= 0) { 311 + pcc_channel = pcc_mbox_request_channel(&cppc_mbox_cl, 312 + pcc_subspace_idx); 313 + 314 + if (IS_ERR(pcc_channel)) { 315 + pr_err("Failed to find PCC communication channel\n"); 316 + return -ENODEV; 317 + } 318 + 319 + /* 320 + * The PCC mailbox controller driver should 321 + * have parsed the PCCT (global table of all 322 + * PCC channels) and stored pointers to the 323 + * subspace communication region in con_priv. 324 + */ 325 + cppc_ss = pcc_channel->con_priv; 326 + 327 + if (!cppc_ss) { 328 + pr_err("No PCC subspace found for CPPC\n"); 329 + return -ENODEV; 330 + } 331 + 332 + /* 333 + * This is the shared communication region 334 + * for the OS and Platform to communicate over. 335 + */ 336 + comm_base_addr = cppc_ss->base_address; 337 + len = cppc_ss->length; 338 + pcc_cmd_delay = cppc_ss->min_turnaround_time; 339 + 340 + pcc_comm_addr = acpi_os_ioremap(comm_base_addr, len); 341 + if (!pcc_comm_addr) { 342 + pr_err("Failed to ioremap PCC comm region mem\n"); 343 + return -ENOMEM; 344 + } 345 + 346 + /* Set flag so that we dont come here for each CPU. */ 347 + pcc_channel_acquired = true; 348 + } 349 + 350 + return 0; 351 + } 352 + 353 + /* 354 + * An example CPC table looks like the following. 
355 + * 356 + * Name(_CPC, Package() 357 + * { 358 + * 17, 359 + * NumEntries 360 + * 1, 361 + * // Revision 362 + * ResourceTemplate(){Register(PCC, 32, 0, 0x120, 2)}, 363 + * // Highest Performance 364 + * ResourceTemplate(){Register(PCC, 32, 0, 0x124, 2)}, 365 + * // Nominal Performance 366 + * ResourceTemplate(){Register(PCC, 32, 0, 0x128, 2)}, 367 + * // Lowest Nonlinear Performance 368 + * ResourceTemplate(){Register(PCC, 32, 0, 0x12C, 2)}, 369 + * // Lowest Performance 370 + * ResourceTemplate(){Register(PCC, 32, 0, 0x130, 2)}, 371 + * // Guaranteed Performance Register 372 + * ResourceTemplate(){Register(PCC, 32, 0, 0x110, 2)}, 373 + * // Desired Performance Register 374 + * ResourceTemplate(){Register(SystemMemory, 0, 0, 0, 0)}, 375 + * .. 376 + * .. 377 + * .. 378 + * 379 + * } 380 + * Each Register() encodes how to access that specific register. 381 + * e.g. a sample PCC entry has the following encoding: 382 + * 383 + * Register ( 384 + * PCC, 385 + * AddressSpaceKeyword 386 + * 8, 387 + * //RegisterBitWidth 388 + * 8, 389 + * //RegisterBitOffset 390 + * 0x30, 391 + * //RegisterAddress 392 + * 9 393 + * //AccessSize (subspace ID) 394 + * 0 395 + * ) 396 + * } 397 + */ 398 + 399 + /** 400 + * acpi_cppc_processor_probe - Search for per CPU _CPC objects. 401 + * @pr: Ptr to acpi_processor containing this CPUs logical Id. 402 + * 403 + * Return: 0 for success or negative value for err. 404 + */ 405 + int acpi_cppc_processor_probe(struct acpi_processor *pr) 406 + { 407 + struct acpi_buffer output = {ACPI_ALLOCATE_BUFFER, NULL}; 408 + union acpi_object *out_obj, *cpc_obj; 409 + struct cpc_desc *cpc_ptr; 410 + struct cpc_reg *gas_t; 411 + acpi_handle handle = pr->handle; 412 + unsigned int num_ent, i, cpc_rev; 413 + acpi_status status; 414 + int ret = -EFAULT; 415 + 416 + /* Parse the ACPI _CPC table for this cpu. 
*/ 417 + status = acpi_evaluate_object_typed(handle, "_CPC", NULL, &output, 418 + ACPI_TYPE_PACKAGE); 419 + if (ACPI_FAILURE(status)) { 420 + ret = -ENODEV; 421 + goto out_buf_free; 422 + } 423 + 424 + out_obj = (union acpi_object *) output.pointer; 425 + 426 + cpc_ptr = kzalloc(sizeof(struct cpc_desc), GFP_KERNEL); 427 + if (!cpc_ptr) { 428 + ret = -ENOMEM; 429 + goto out_buf_free; 430 + } 431 + 432 + /* First entry is NumEntries. */ 433 + cpc_obj = &out_obj->package.elements[0]; 434 + if (cpc_obj->type == ACPI_TYPE_INTEGER) { 435 + num_ent = cpc_obj->integer.value; 436 + } else { 437 + pr_debug("Unexpected entry type(%d) for NumEntries\n", 438 + cpc_obj->type); 439 + goto out_free; 440 + } 441 + 442 + /* Only support CPPCv2. Bail otherwise. */ 443 + if (num_ent != CPPC_NUM_ENT) { 444 + pr_debug("Firmware exports %d entries. Expected: %d\n", 445 + num_ent, CPPC_NUM_ENT); 446 + goto out_free; 447 + } 448 + 449 + /* Second entry should be revision. */ 450 + cpc_obj = &out_obj->package.elements[1]; 451 + if (cpc_obj->type == ACPI_TYPE_INTEGER) { 452 + cpc_rev = cpc_obj->integer.value; 453 + } else { 454 + pr_debug("Unexpected entry type(%d) for Revision\n", 455 + cpc_obj->type); 456 + goto out_free; 457 + } 458 + 459 + if (cpc_rev != CPPC_REV) { 460 + pr_debug("Firmware exports revision:%d. Expected:%d\n", 461 + cpc_rev, CPPC_REV); 462 + goto out_free; 463 + } 464 + 465 + /* Iterate through remaining entries in _CPC */ 466 + for (i = 2; i < num_ent; i++) { 467 + cpc_obj = &out_obj->package.elements[i]; 468 + 469 + if (cpc_obj->type == ACPI_TYPE_INTEGER) { 470 + cpc_ptr->cpc_regs[i-2].type = ACPI_TYPE_INTEGER; 471 + cpc_ptr->cpc_regs[i-2].cpc_entry.int_value = cpc_obj->integer.value; 472 + } else if (cpc_obj->type == ACPI_TYPE_BUFFER) { 473 + gas_t = (struct cpc_reg *) 474 + cpc_obj->buffer.pointer; 475 + 476 + /* 477 + * The PCC Subspace index is encoded inside 478 + * the CPC table entries. 
The same PCC index 479 + * will be used for all the PCC entries, 480 + * so extract it only once. 481 + */ 482 + if (gas_t->space_id == ACPI_ADR_SPACE_PLATFORM_COMM) { 483 + if (pcc_subspace_idx < 0) 484 + pcc_subspace_idx = gas_t->access_width; 485 + else if (pcc_subspace_idx != gas_t->access_width) { 486 + pr_debug("Mismatched PCC ids.\n"); 487 + goto out_free; 488 + } 489 + } else if (gas_t->space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY) { 490 + /* Support only PCC and SYS MEM type regs */ 491 + pr_debug("Unsupported register type: %d\n", gas_t->space_id); 492 + goto out_free; 493 + } 494 + 495 + cpc_ptr->cpc_regs[i-2].type = ACPI_TYPE_BUFFER; 496 + memcpy(&cpc_ptr->cpc_regs[i-2].cpc_entry.reg, gas_t, sizeof(*gas_t)); 497 + } else { 498 + pr_debug("Err in entry:%d in CPC table of CPU:%d \n", i, pr->id); 499 + goto out_free; 500 + } 501 + } 502 + /* Store CPU Logical ID */ 503 + cpc_ptr->cpu_id = pr->id; 504 + 505 + /* Plug it into this CPUs CPC descriptor. */ 506 + per_cpu(cpc_desc_ptr, pr->id) = cpc_ptr; 507 + 508 + /* Parse PSD data for this CPU */ 509 + ret = acpi_get_psd(cpc_ptr, handle); 510 + if (ret) 511 + goto out_free; 512 + 513 + /* Register PCC channel once for all CPUs. */ 514 + if (!pcc_channel_acquired) { 515 + ret = register_pcc_channel(pcc_subspace_idx); 516 + if (ret) 517 + goto out_free; 518 + } 519 + 520 + /* Everything looks okay */ 521 + pr_debug("Parsed CPC struct for CPU: %d\n", pr->id); 522 + 523 + kfree(output.pointer); 524 + return 0; 525 + 526 + out_free: 527 + kfree(cpc_ptr); 528 + 529 + out_buf_free: 530 + kfree(output.pointer); 531 + return ret; 532 + } 533 + EXPORT_SYMBOL_GPL(acpi_cppc_processor_probe); 534 + 535 + /** 536 + * acpi_cppc_processor_exit - Cleanup CPC structs. 537 + * @pr: Ptr to acpi_processor containing this CPUs logical Id. 
538 + * 539 + * Return: Void 540 + */ 541 + void acpi_cppc_processor_exit(struct acpi_processor *pr) 542 + { 543 + struct cpc_desc *cpc_ptr; 544 + cpc_ptr = per_cpu(cpc_desc_ptr, pr->id); 545 + kfree(cpc_ptr); 546 + } 547 + EXPORT_SYMBOL_GPL(acpi_cppc_processor_exit); 548 + 549 + static u64 get_phys_addr(struct cpc_reg *reg) 550 + { 551 + /* PCC communication addr space begins at byte offset 0x8. */ 552 + if (reg->space_id == ACPI_ADR_SPACE_PLATFORM_COMM) 553 + return (u64)comm_base_addr + 0x8 + reg->address; 554 + else 555 + return reg->address; 556 + } 557 + 558 + static void cpc_read(struct cpc_reg *reg, u64 *val) 559 + { 560 + u64 addr = get_phys_addr(reg); 561 + 562 + acpi_os_read_memory((acpi_physical_address)addr, 563 + val, reg->bit_width); 564 + } 565 + 566 + static void cpc_write(struct cpc_reg *reg, u64 val) 567 + { 568 + u64 addr = get_phys_addr(reg); 569 + 570 + acpi_os_write_memory((acpi_physical_address)addr, 571 + val, reg->bit_width); 572 + } 573 + 574 + /** 575 + * cppc_get_perf_caps - Get a CPUs performance capabilities. 576 + * @cpunum: CPU from which to get capabilities info. 577 + * @perf_caps: ptr to cppc_perf_caps. See cppc_acpi.h 578 + * 579 + * Return: 0 for success with perf_caps populated else -ERRNO. 
580 + */ 581 + int cppc_get_perf_caps(int cpunum, struct cppc_perf_caps *perf_caps) 582 + { 583 + struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpunum); 584 + struct cpc_register_resource *highest_reg, *lowest_reg, *ref_perf, 585 + *nom_perf; 586 + u64 high, low, ref, nom; 587 + int ret = 0; 588 + 589 + if (!cpc_desc) { 590 + pr_debug("No CPC descriptor for CPU:%d\n", cpunum); 591 + return -ENODEV; 592 + } 593 + 594 + highest_reg = &cpc_desc->cpc_regs[HIGHEST_PERF]; 595 + lowest_reg = &cpc_desc->cpc_regs[LOWEST_PERF]; 596 + ref_perf = &cpc_desc->cpc_regs[REFERENCE_PERF]; 597 + nom_perf = &cpc_desc->cpc_regs[NOMINAL_PERF]; 598 + 599 + spin_lock(&pcc_lock); 600 + 601 + /* Are any of the regs PCC ?*/ 602 + if ((highest_reg->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) || 603 + (lowest_reg->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) || 604 + (ref_perf->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) || 605 + (nom_perf->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM)) { 606 + /* Ring doorbell once to update PCC subspace */ 607 + if (send_pcc_cmd(CMD_READ)) { 608 + ret = -EIO; 609 + goto out_err; 610 + } 611 + } 612 + 613 + cpc_read(&highest_reg->cpc_entry.reg, &high); 614 + perf_caps->highest_perf = high; 615 + 616 + cpc_read(&lowest_reg->cpc_entry.reg, &low); 617 + perf_caps->lowest_perf = low; 618 + 619 + cpc_read(&ref_perf->cpc_entry.reg, &ref); 620 + perf_caps->reference_perf = ref; 621 + 622 + cpc_read(&nom_perf->cpc_entry.reg, &nom); 623 + perf_caps->nominal_perf = nom; 624 + 625 + if (!ref) 626 + perf_caps->reference_perf = perf_caps->nominal_perf; 627 + 628 + if (!high || !low || !nom) 629 + ret = -EFAULT; 630 + 631 + out_err: 632 + spin_unlock(&pcc_lock); 633 + return ret; 634 + } 635 + EXPORT_SYMBOL_GPL(cppc_get_perf_caps); 636 + 637 + /** 638 + * cppc_get_perf_ctrs - Read a CPUs performance feedback counters. 639 + * @cpunum: CPU from which to read counters. 640 + * @perf_fb_ctrs: ptr to cppc_perf_fb_ctrs. 
See cppc_acpi.h 641 + * 642 + * Return: 0 for success with perf_fb_ctrs populated else -ERRNO. 643 + */ 644 + int cppc_get_perf_ctrs(int cpunum, struct cppc_perf_fb_ctrs *perf_fb_ctrs) 645 + { 646 + struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpunum); 647 + struct cpc_register_resource *delivered_reg, *reference_reg; 648 + u64 delivered, reference; 649 + int ret = 0; 650 + 651 + if (!cpc_desc) { 652 + pr_debug("No CPC descriptor for CPU:%d\n", cpunum); 653 + return -ENODEV; 654 + } 655 + 656 + delivered_reg = &cpc_desc->cpc_regs[DELIVERED_CTR]; 657 + reference_reg = &cpc_desc->cpc_regs[REFERENCE_CTR]; 658 + 659 + spin_lock(&pcc_lock); 660 + 661 + /* Are any of the regs PCC ?*/ 662 + if ((delivered_reg->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) || 663 + (reference_reg->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM)) { 664 + /* Ring doorbell once to update PCC subspace */ 665 + if (send_pcc_cmd(CMD_READ)) { 666 + ret = -EIO; 667 + goto out_err; 668 + } 669 + } 670 + 671 + cpc_read(&delivered_reg->cpc_entry.reg, &delivered); 672 + cpc_read(&reference_reg->cpc_entry.reg, &reference); 673 + 674 + if (!delivered || !reference) { 675 + ret = -EFAULT; 676 + goto out_err; 677 + } 678 + 679 + perf_fb_ctrs->delivered = delivered; 680 + perf_fb_ctrs->reference = reference; 681 + 682 + perf_fb_ctrs->delivered -= perf_fb_ctrs->prev_delivered; 683 + perf_fb_ctrs->reference -= perf_fb_ctrs->prev_reference; 684 + 685 + perf_fb_ctrs->prev_delivered = delivered; 686 + perf_fb_ctrs->prev_reference = reference; 687 + 688 + out_err: 689 + spin_unlock(&pcc_lock); 690 + return ret; 691 + } 692 + EXPORT_SYMBOL_GPL(cppc_get_perf_ctrs); 693 + 694 + /** 695 + * cppc_set_perf - Set a CPUs performance controls. 696 + * @cpu: CPU for which to set performance controls. 697 + * @perf_ctrls: ptr to cppc_perf_ctrls. See cppc_acpi.h 698 + * 699 + * Return: 0 for success, -ERRNO otherwise. 
700 + */ 701 + int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls) 702 + { 703 + struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu); 704 + struct cpc_register_resource *desired_reg; 705 + int ret = 0; 706 + 707 + if (!cpc_desc) { 708 + pr_debug("No CPC descriptor for CPU:%d\n", cpu); 709 + return -ENODEV; 710 + } 711 + 712 + desired_reg = &cpc_desc->cpc_regs[DESIRED_PERF]; 713 + 714 + spin_lock(&pcc_lock); 715 + 716 + /* 717 + * Skip writing MIN/MAX until Linux knows how to come up with 718 + * useful values. 719 + */ 720 + cpc_write(&desired_reg->cpc_entry.reg, perf_ctrls->desired_perf); 721 + 722 + /* Is this a PCC reg ?*/ 723 + if (desired_reg->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) { 724 + /* Ring doorbell so Remote can get our perf request. */ 725 + if (send_pcc_cmd(CMD_WRITE)) 726 + ret = -EIO; 727 + } 728 + 729 + spin_unlock(&pcc_lock); 730 + 731 + return ret; 732 + } 733 + EXPORT_SYMBOL_GPL(cppc_set_perf);
+1 -18
drivers/acpi/device_pm.c
···
963 963 	EXPORT_SYMBOL_GPL(acpi_subsys_prepare);
964 964 
965 965 	/**
966   - 	 * acpi_subsys_complete - Finalize device's resume during system resume.
967   - 	 * @dev: Device to handle.
968   - 	 */
969   - 	void acpi_subsys_complete(struct device *dev)
970   - 	{
971   - 		pm_generic_complete(dev);
972   - 		/*
973   - 		 * If the device had been runtime-suspended before the system went into
974   - 		 * the sleep state it is going out of and it has never been resumed till
975   - 		 * now, resume it in case the firmware powered it up.
976   - 		 */
977   - 		if (dev->power.direct_complete)
978   - 			pm_request_resume(dev);
979   - 	}
980   - 	EXPORT_SYMBOL_GPL(acpi_subsys_complete);
981   - 
982   - 	/**
983 966 	 * acpi_subsys_suspend - Run the device driver's suspend callback.
984 967 	 * @dev: Device to handle.
985 968 	 *
···
1030 1047 	.runtime_resume = acpi_subsys_runtime_resume,
1031 1048 	#ifdef CONFIG_PM_SLEEP
1032 1049 	.prepare = acpi_subsys_prepare,
1033    - 	.complete = acpi_subsys_complete,
1050    + 	.complete = pm_complete_with_resume_check,
1034 1051 	.suspend = acpi_subsys_suspend,
1035 1052 	.suspend_late = acpi_subsys_suspend_late,
1036 1053 	.resume_early = acpi_subsys_resume_early,
+108 -12
drivers/acpi/device_sysfs.c
··· 26 26 27 27 #include "internal.h" 28 28 29 + static ssize_t acpi_object_path(acpi_handle handle, char *buf) 30 + { 31 + struct acpi_buffer path = {ACPI_ALLOCATE_BUFFER, NULL}; 32 + int result; 33 + 34 + result = acpi_get_name(handle, ACPI_FULL_PATHNAME, &path); 35 + if (result) 36 + return result; 37 + 38 + result = sprintf(buf, "%s\n", (char*)path.pointer); 39 + kfree(path.pointer); 40 + return result; 41 + } 42 + 43 + struct acpi_data_node_attr { 44 + struct attribute attr; 45 + ssize_t (*show)(struct acpi_data_node *, char *); 46 + ssize_t (*store)(struct acpi_data_node *, const char *, size_t count); 47 + }; 48 + 49 + #define DATA_NODE_ATTR(_name) \ 50 + static struct acpi_data_node_attr data_node_##_name = \ 51 + __ATTR(_name, 0444, data_node_show_##_name, NULL) 52 + 53 + static ssize_t data_node_show_path(struct acpi_data_node *dn, char *buf) 54 + { 55 + return acpi_object_path(dn->handle, buf); 56 + } 57 + 58 + DATA_NODE_ATTR(path); 59 + 60 + static struct attribute *acpi_data_node_default_attrs[] = { 61 + &data_node_path.attr, 62 + NULL 63 + }; 64 + 65 + #define to_data_node(k) container_of(k, struct acpi_data_node, kobj) 66 + #define to_attr(a) container_of(a, struct acpi_data_node_attr, attr) 67 + 68 + static ssize_t acpi_data_node_attr_show(struct kobject *kobj, 69 + struct attribute *attr, char *buf) 70 + { 71 + struct acpi_data_node *dn = to_data_node(kobj); 72 + struct acpi_data_node_attr *dn_attr = to_attr(attr); 73 + 74 + return dn_attr->show ? 
dn_attr->show(dn, buf) : -ENXIO; 75 + } 76 + 77 + static const struct sysfs_ops acpi_data_node_sysfs_ops = { 78 + .show = acpi_data_node_attr_show, 79 + }; 80 + 81 + static void acpi_data_node_release(struct kobject *kobj) 82 + { 83 + struct acpi_data_node *dn = to_data_node(kobj); 84 + complete(&dn->kobj_done); 85 + } 86 + 87 + static struct kobj_type acpi_data_node_ktype = { 88 + .sysfs_ops = &acpi_data_node_sysfs_ops, 89 + .default_attrs = acpi_data_node_default_attrs, 90 + .release = acpi_data_node_release, 91 + }; 92 + 93 + static void acpi_expose_nondev_subnodes(struct kobject *kobj, 94 + struct acpi_device_data *data) 95 + { 96 + struct list_head *list = &data->subnodes; 97 + struct acpi_data_node *dn; 98 + 99 + if (list_empty(list)) 100 + return; 101 + 102 + list_for_each_entry(dn, list, sibling) { 103 + int ret; 104 + 105 + init_completion(&dn->kobj_done); 106 + ret = kobject_init_and_add(&dn->kobj, &acpi_data_node_ktype, 107 + kobj, dn->name); 108 + if (ret) 109 + acpi_handle_err(dn->handle, "Failed to expose (%d)\n", ret); 110 + else 111 + acpi_expose_nondev_subnodes(&dn->kobj, &dn->data); 112 + } 113 + } 114 + 115 + static void acpi_hide_nondev_subnodes(struct acpi_device_data *data) 116 + { 117 + struct list_head *list = &data->subnodes; 118 + struct acpi_data_node *dn; 119 + 120 + if (list_empty(list)) 121 + return; 122 + 123 + list_for_each_entry_reverse(dn, list, sibling) { 124 + acpi_hide_nondev_subnodes(&dn->data); 125 + kobject_put(&dn->kobj); 126 + } 127 + } 128 + 29 129 /** 30 130 * create_pnp_modalias - Create hid/cid(s) string for modalias and uevent 31 131 * @acpi_dev: ACPI device object. 
··· 423 323 } 424 324 static DEVICE_ATTR(adr, 0444, acpi_device_adr_show, NULL); 425 325 426 - static ssize_t 427 - acpi_device_path_show(struct device *dev, struct device_attribute *attr, char *buf) { 326 + static ssize_t acpi_device_path_show(struct device *dev, 327 + struct device_attribute *attr, char *buf) 328 + { 428 329 struct acpi_device *acpi_dev = to_acpi_device(dev); 429 - struct acpi_buffer path = {ACPI_ALLOCATE_BUFFER, NULL}; 430 - int result; 431 330 432 - result = acpi_get_name(acpi_dev->handle, ACPI_FULL_PATHNAME, &path); 433 - if (result) 434 - goto end; 435 - 436 - result = sprintf(buf, "%s\n", (char*)path.pointer); 437 - kfree(path.pointer); 438 - end: 439 - return result; 331 + return acpi_object_path(acpi_dev->handle, buf); 440 332 } 441 333 static DEVICE_ATTR(path, 0444, acpi_device_path_show, NULL); 442 334 ··· 567 475 &dev_attr_real_power_state); 568 476 } 569 477 478 + acpi_expose_nondev_subnodes(&dev->dev.kobj, &dev->data); 479 + 570 480 end: 571 481 return result; 572 482 } ··· 579 485 */ 580 486 void acpi_device_remove_files(struct acpi_device *dev) 581 487 { 488 + acpi_hide_nondev_subnodes(&dev->data); 489 + 582 490 if (dev->flags.power_manageable) { 583 491 device_remove_file(&dev->dev, &dev_attr_power_state); 584 492 if (dev->power.flags.power_resources)
+74 -41
drivers/acpi/ec.c
··· 441 441 442 442 static bool acpi_ec_guard_event(struct acpi_ec *ec) 443 443 { 444 + bool guarded = true; 445 + unsigned long flags; 446 + 447 + spin_lock_irqsave(&ec->lock, flags); 448 + /* 449 + * If firmware SCI_EVT clearing timing is "event", we actually 450 + * don't know when the SCI_EVT will be cleared by firmware after 451 + * evaluating _Qxx, so we need to re-check SCI_EVT after waiting an 452 + * acceptable period. 453 + * 454 + * The guarding period begins when EC_FLAGS_QUERY_PENDING is 455 + * flagged, which means SCI_EVT check has just been performed. 456 + * But if the current transaction is ACPI_EC_COMMAND_QUERY, the 457 + * guarding should have already been performed (via 458 + * EC_FLAGS_QUERY_GUARDING) and should not be applied so that the 459 + * ACPI_EC_COMMAND_QUERY transaction can be transitioned into 460 + * ACPI_EC_COMMAND_POLL state immediately. 461 + */ 444 462 if (ec_event_clearing == ACPI_EC_EVT_TIMING_STATUS || 445 463 ec_event_clearing == ACPI_EC_EVT_TIMING_QUERY || 446 464 !test_bit(EC_FLAGS_QUERY_PENDING, &ec->flags) || 447 465 (ec->curr && ec->curr->command == ACPI_EC_COMMAND_QUERY)) 448 - return false; 449 - 450 - /* 451 - * Postpone the query submission to allow the firmware to proceed, 452 - * we shouldn't check SCI_EVT before the firmware reflagging it. 453 - */ 454 - return true; 466 + guarded = false; 467 + spin_unlock_irqrestore(&ec->lock, flags); 468 + return guarded; 455 469 } 456 470 457 471 static int ec_transaction_polled(struct acpi_ec *ec) ··· 611 597 unsigned long guard = usecs_to_jiffies(ec_polling_guard); 612 598 unsigned long timeout = ec->timestamp + guard; 613 599 600 + /* Ensure guarding period before polling EC status */ 614 601 do { 615 602 if (ec_busy_polling) { 616 603 /* Perform busy polling */ ··· 621 606 } else { 622 607 /* 623 608 * Perform wait polling 624 - * 625 - * For SCI_EVT clearing timing of "event", 626 - * performing guarding before re-checking the 627 - * SCI_EVT. 
Otherwise, such guarding is not needed 628 - * due to the old practices. 609 + * 1. Wait the transaction to be completed by the 610 + * GPE handler after the transaction enters 611 + * ACPI_EC_COMMAND_POLL state. 612 + * 2. A special guarding logic is also required 613 + * for event clearing mode "event" before the 614 + * transaction enters ACPI_EC_COMMAND_POLL 615 + * state. 629 616 */ 630 617 if (!ec_transaction_polled(ec) && 631 618 !acpi_ec_guard_event(ec)) ··· 637 620 guard)) 638 621 return 0; 639 622 } 640 - /* Guard the register accesses for the polling modes */ 641 623 } while (time_before(jiffies, timeout)); 642 624 return -ETIME; 643 625 } ··· 945 929 return handler; 946 930 } 947 931 932 + static struct acpi_ec_query_handler * 933 + acpi_ec_get_query_handler_by_value(struct acpi_ec *ec, u8 value) 934 + { 935 + struct acpi_ec_query_handler *handler; 936 + bool found = false; 937 + 938 + mutex_lock(&ec->mutex); 939 + list_for_each_entry(handler, &ec->list, node) { 940 + if (value == handler->query_bit) { 941 + found = true; 942 + break; 943 + } 944 + } 945 + mutex_unlock(&ec->mutex); 946 + return found ? 
acpi_ec_get_query_handler(handler) : NULL; 947 + } 948 + 948 949 static void acpi_ec_query_handler_release(struct kref *kref) 949 950 { 950 951 struct acpi_ec_query_handler *handler = ··· 997 964 } 998 965 EXPORT_SYMBOL_GPL(acpi_ec_add_query_handler); 999 966 1000 - void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit) 967 + static void acpi_ec_remove_query_handlers(struct acpi_ec *ec, 968 + bool remove_all, u8 query_bit) 1001 969 { 1002 970 struct acpi_ec_query_handler *handler, *tmp; 1003 971 LIST_HEAD(free_list); 1004 972 1005 973 mutex_lock(&ec->mutex); 1006 974 list_for_each_entry_safe(handler, tmp, &ec->list, node) { 1007 - if (query_bit == handler->query_bit) { 975 + if (remove_all || query_bit == handler->query_bit) { 1008 976 list_del_init(&handler->node); 1009 977 list_add(&handler->node, &free_list); 1010 978 } ··· 1013 979 mutex_unlock(&ec->mutex); 1014 980 list_for_each_entry_safe(handler, tmp, &free_list, node) 1015 981 acpi_ec_put_query_handler(handler); 982 + } 983 + 984 + void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit) 985 + { 986 + acpi_ec_remove_query_handlers(ec, false, query_bit); 1016 987 } 1017 988 EXPORT_SYMBOL_GPL(acpi_ec_remove_query_handler); 1018 989 ··· 1064 1025 { 1065 1026 u8 value = 0; 1066 1027 int result; 1067 - struct acpi_ec_query_handler *handler; 1068 1028 struct acpi_ec_query *q; 1069 1029 1070 1030 q = acpi_ec_create_query(&value); ··· 1081 1043 if (result) 1082 1044 goto err_exit; 1083 1045 1084 - mutex_lock(&ec->mutex); 1085 - result = -ENODATA; 1086 - list_for_each_entry(handler, &ec->list, node) { 1087 - if (value == handler->query_bit) { 1088 - result = 0; 1089 - q->handler = acpi_ec_get_query_handler(handler); 1090 - ec_dbg_evt("Query(0x%02x) scheduled", 1091 - q->handler->query_bit); 1092 - /* 1093 - * It is reported that _Qxx are evaluated in a 1094 - * parallel way on Windows: 1095 - * https://bugzilla.kernel.org/show_bug.cgi?id=94411 1096 - */ 1097 - if (!schedule_work(&q->work)) 
1098 - result = -EBUSY; 1099 - break; 1100 - } 1046 + q->handler = acpi_ec_get_query_handler_by_value(ec, value); 1047 + if (!q->handler) { 1048 + result = -ENODATA; 1049 + goto err_exit; 1101 1050 } 1102 - mutex_unlock(&ec->mutex); 1051 + 1052 + /* 1053 + * It is reported that _Qxx are evaluated in a parallel way on 1054 + * Windows: 1055 + * https://bugzilla.kernel.org/show_bug.cgi?id=94411 1056 + * 1057 + * Put this log entry before schedule_work() in order to make 1058 + * it appearing before any other log entries occurred during the 1059 + * work queue execution. 1060 + */ 1061 + ec_dbg_evt("Query(0x%02x) scheduled", value); 1062 + if (!schedule_work(&q->work)) { 1063 + ec_dbg_evt("Query(0x%02x) overlapped", value); 1064 + result = -EBUSY; 1065 + } 1103 1066 1104 1067 err_exit: 1105 1068 if (result && q) ··· 1393 1354 static int acpi_ec_remove(struct acpi_device *device) 1394 1355 { 1395 1356 struct acpi_ec *ec; 1396 - struct acpi_ec_query_handler *handler, *tmp; 1397 1357 1398 1358 if (!device) 1399 1359 return -EINVAL; 1400 1360 1401 1361 ec = acpi_driver_data(device); 1402 1362 ec_remove_handlers(ec); 1403 - mutex_lock(&ec->mutex); 1404 - list_for_each_entry_safe(handler, tmp, &ec->list, node) { 1405 - list_del(&handler->node); 1406 - kfree(handler); 1407 - } 1408 - mutex_unlock(&ec->mutex); 1363 + acpi_ec_remove_query_handlers(ec, true, 0); 1409 1364 release_region(ec->data_addr, 1); 1410 1365 release_region(ec->command_addr, 1); 1411 1366 device->driver_data = NULL;
+2 -3
drivers/acpi/glue.c
···
351 351 	return 0;
352 352 }
353 353 
354   - int __init init_acpi_device_notify(void)
354   + void __init init_acpi_device_notify(void)
355 355 {
356 356 	if (platform_notify || platform_notify_remove) {
357 357 		printk(KERN_ERR PREFIX "Can't use platform_notify\n");
358   - 		return 0;
358   + 		return;
359 359 	}
360 360 	platform_notify = acpi_platform_notify;
361 361 	platform_notify_remove = acpi_platform_notify_remove;
362   - 	return 0;
363 362 }
+3 -3
drivers/acpi/internal.h
···
21 21 #define PREFIX "ACPI: "
22 22 
23 23 acpi_status acpi_os_initialize1(void);
24   - int init_acpi_device_notify(void);
24   + void init_acpi_device_notify(void);
25 25 int acpi_scan_init(void);
26 26 void acpi_pci_root_init(void);
27 27 void acpi_pci_link_init(void);
···
179 179 #endif
180 180 
181 181 #ifdef CONFIG_ACPI_SLEEP
182   - int acpi_sleep_proc_init(void);
182   + void acpi_sleep_proc_init(void);
183 183 int suspend_nvs_alloc(void);
184 184 void suspend_nvs_free(void);
185 185 int suspend_nvs_save(void);
186 186 void suspend_nvs_restore(void);
187 187 #else
188   - static inline int acpi_sleep_proc_init(void) { return 0; }
188   + static inline void acpi_sleep_proc_init(void) {}
189 189 static inline int suspend_nvs_alloc(void) { return 0; }
190 190 static inline void suspend_nvs_free(void) {}
191 191 static inline int suspend_nvs_save(void) { return 0; }
+3 -3
drivers/acpi/nfit.c
···
706 706 	flags & ACPI_NFIT_MEM_SAVE_FAILED ? "save_fail " : "",
707 707 	flags & ACPI_NFIT_MEM_RESTORE_FAILED ? "restore_fail " : "",
708 708 	flags & ACPI_NFIT_MEM_FLUSH_FAILED ? "flush_fail " : "",
709   - 	flags & ACPI_NFIT_MEM_ARMED ? "not_armed " : "",
709   + 	flags & ACPI_NFIT_MEM_NOT_ARMED ? "not_armed " : "",
710 710 	flags & ACPI_NFIT_MEM_HEALTH_OBSERVED ? "smart_event " : "");
711 711 }
712 712 static DEVICE_ATTR_RO(flags);
···
815 815 		flags |= NDD_ALIASING;
816 816 
817 817 	mem_flags = __to_nfit_memdev(nfit_mem)->flags;
818   - 	if (mem_flags & ACPI_NFIT_MEM_ARMED)
818   + 	if (mem_flags & ACPI_NFIT_MEM_NOT_ARMED)
819 819 		flags |= NDD_UNARMED;
820 820 
821 821 	rc = acpi_nfit_add_dimm(acpi_desc, nfit_mem, device_handle);
···
839 839 	mem_flags & ACPI_NFIT_MEM_SAVE_FAILED ? " save_fail" : "",
840 840 	mem_flags & ACPI_NFIT_MEM_RESTORE_FAILED ? " restore_fail":"",
841 841 	mem_flags & ACPI_NFIT_MEM_FLUSH_FAILED ? " flush_fail" : "",
842   - 	mem_flags & ACPI_NFIT_MEM_ARMED ? " not_armed" : "");
842   + 	mem_flags & ACPI_NFIT_MEM_NOT_ARMED ? " not_armed" : "");
843 843 
844 844 }
845 845 
+1 -1
drivers/acpi/nfit.h
···
24 24 #define UUID_NFIT_DIMM "4309ac30-0d11-11e4-9191-0800200c9a66"
25 25 #define ACPI_NFIT_MEM_FAILED_MASK (ACPI_NFIT_MEM_SAVE_FAILED \
26 26 	| ACPI_NFIT_MEM_RESTORE_FAILED | ACPI_NFIT_MEM_FLUSH_FAILED \
27   - 	| ACPI_NFIT_MEM_ARMED)
27   + 	| ACPI_NFIT_MEM_NOT_ARMED)
28 28 
29 29 enum nfit_uuids {
30 30 	NFIT_SPA_VOLATILE,
+11 -13
drivers/acpi/osl.c
···
66 66 /* stuff for debugger support */
67 67 int acpi_in_debugger;
68 68 EXPORT_SYMBOL(acpi_in_debugger);
69   - 
70   - extern char line_buf[80];
71 69 #endif /*ENABLE_DEBUGGER */
72 70 
73 71 static int (*__acpi_os_prepare_sleep)(u8 sleep_state, u32 pm1a_ctrl,
···
79 81 static struct workqueue_struct *kacpi_notify_wq;
80 82 static struct workqueue_struct *kacpi_hotplug_wq;
81 83 static bool acpi_os_initialized;
84   + unsigned int acpi_sci_irq = INVALID_ACPI_IRQ;
82 85 
83 86 /*
84 87  * This list of permanent mappings is for memory that may be accessed from
···
855 856 		acpi_irq_handler = NULL;
856 857 		return AE_NOT_ACQUIRED;
857 858 	}
859   + 	acpi_sci_irq = irq;
858 860 
859 861 	return AE_OK;
860 862 }
861 863 
862   - acpi_status acpi_os_remove_interrupt_handler(u32 irq, acpi_osd_handler handler)
864   + acpi_status acpi_os_remove_interrupt_handler(u32 gsi, acpi_osd_handler handler)
863 865 {
864   - 	if (irq != acpi_gbl_FADT.sci_interrupt)
866   + 	if (gsi != acpi_gbl_FADT.sci_interrupt || !acpi_sci_irq_valid())
865 867 		return AE_BAD_PARAMETER;
866 868 
867   - 	free_irq(irq, acpi_irq);
869   + 	free_irq(acpi_sci_irq, acpi_irq);
868 870 	acpi_irq_handler = NULL;
871   + 	acpi_sci_irq = INVALID_ACPI_IRQ;
869 872 
870 873 	return AE_OK;
871 874 }
···
1181 1180 	 * Make sure the GPE handler or the fixed event handler is not used
1182 1181 	 * on another CPU after removal.
1183 1182 	 */
1184    - 	if (acpi_irq_handler)
1185    - 		synchronize_hardirq(acpi_gbl_FADT.sci_interrupt);
1183    + 	if (acpi_sci_irq_valid())
1184    + 		synchronize_hardirq(acpi_sci_irq);
1186 1185 	flush_workqueue(kacpid_wq);
1187 1186 	flush_workqueue(kacpi_notify_wq);
1188 1187 }
···
1346 1345 	return AE_OK;
1347 1346 }
1348 1347 
1349    - #ifdef ACPI_FUTURE_USAGE
1350    - u32 acpi_os_get_line(char *buffer)
1348    + acpi_status acpi_os_get_line(char *buffer, u32 buffer_length, u32 *bytes_read)
1351 1349 {
1352    - 
1353 1350 #ifdef ENABLE_DEBUGGER
1354 1351 	if (acpi_in_debugger) {
1355 1352 		u32 chars;
1356 1353 
1357    - 		kdb_read(buffer, sizeof(line_buf));
1354    + 		kdb_read(buffer, buffer_length);
1358 1355 
1359 1356 		/* remove the CR kdb includes */
1360 1357 		chars = strlen(buffer) - 1;
···
1360 1361 	}
1361 1362 #endif
1362 1363 
1363    - 	return 0;
1364    + 	return AE_OK;
1364 1365 }
1365    - #endif /* ACPI_FUTURE_USAGE */
1366 1366 
1367 1367 acpi_status acpi_os_signal(u32 function, void *info)
1368 1368 {
+204
drivers/acpi/pci_root.c
··· 652 652 kfree(root); 653 653 } 654 654 655 + /* 656 + * Following code to support acpi_pci_root_create() is copied from 657 + * arch/x86/pci/acpi.c and modified so it could be reused by x86, IA64 658 + * and ARM64. 659 + */ 660 + static void acpi_pci_root_validate_resources(struct device *dev, 661 + struct list_head *resources, 662 + unsigned long type) 663 + { 664 + LIST_HEAD(list); 665 + struct resource *res1, *res2, *root = NULL; 666 + struct resource_entry *tmp, *entry, *entry2; 667 + 668 + BUG_ON((type & (IORESOURCE_MEM | IORESOURCE_IO)) == 0); 669 + root = (type & IORESOURCE_MEM) ? &iomem_resource : &ioport_resource; 670 + 671 + list_splice_init(resources, &list); 672 + resource_list_for_each_entry_safe(entry, tmp, &list) { 673 + bool free = false; 674 + resource_size_t end; 675 + 676 + res1 = entry->res; 677 + if (!(res1->flags & type)) 678 + goto next; 679 + 680 + /* Exclude non-addressable range or non-addressable portion */ 681 + end = min(res1->end, root->end); 682 + if (end <= res1->start) { 683 + dev_info(dev, "host bridge window %pR (ignored, not CPU addressable)\n", 684 + res1); 685 + free = true; 686 + goto next; 687 + } else if (res1->end != end) { 688 + dev_info(dev, "host bridge window %pR ([%#llx-%#llx] ignored, not CPU addressable)\n", 689 + res1, (unsigned long long)end + 1, 690 + (unsigned long long)res1->end); 691 + res1->end = end; 692 + } 693 + 694 + resource_list_for_each_entry(entry2, resources) { 695 + res2 = entry2->res; 696 + if (!(res2->flags & type)) 697 + continue; 698 + 699 + /* 700 + * I don't like throwing away windows because then 701 + * our resources no longer match the ACPI _CRS, but 702 + * the kernel resource tree doesn't allow overlaps. 
703 + */ 704 + if (resource_overlaps(res1, res2)) { 705 + res2->start = min(res1->start, res2->start); 706 + res2->end = max(res1->end, res2->end); 707 + dev_info(dev, "host bridge window expanded to %pR; %pR ignored\n", 708 + res2, res1); 709 + free = true; 710 + goto next; 711 + } 712 + } 713 + 714 + next: 715 + resource_list_del(entry); 716 + if (free) 717 + resource_list_free_entry(entry); 718 + else 719 + resource_list_add_tail(entry, resources); 720 + } 721 + } 722 + 723 + int acpi_pci_probe_root_resources(struct acpi_pci_root_info *info) 724 + { 725 + int ret; 726 + struct list_head *list = &info->resources; 727 + struct acpi_device *device = info->bridge; 728 + struct resource_entry *entry, *tmp; 729 + unsigned long flags; 730 + 731 + flags = IORESOURCE_IO | IORESOURCE_MEM | IORESOURCE_MEM_8AND16BIT; 732 + ret = acpi_dev_get_resources(device, list, 733 + acpi_dev_filter_resource_type_cb, 734 + (void *)flags); 735 + if (ret < 0) 736 + dev_warn(&device->dev, 737 + "failed to parse _CRS method, error code %d\n", ret); 738 + else if (ret == 0) 739 + dev_dbg(&device->dev, 740 + "no IO and memory resources present in _CRS\n"); 741 + else { 742 + resource_list_for_each_entry_safe(entry, tmp, list) { 743 + if (entry->res->flags & IORESOURCE_DISABLED) 744 + resource_list_destroy_entry(entry); 745 + else 746 + entry->res->name = info->name; 747 + } 748 + acpi_pci_root_validate_resources(&device->dev, list, 749 + IORESOURCE_MEM); 750 + acpi_pci_root_validate_resources(&device->dev, list, 751 + IORESOURCE_IO); 752 + } 753 + 754 + return ret; 755 + } 756 + 757 + static void pci_acpi_root_add_resources(struct acpi_pci_root_info *info) 758 + { 759 + struct resource_entry *entry, *tmp; 760 + struct resource *res, *conflict, *root = NULL; 761 + 762 + resource_list_for_each_entry_safe(entry, tmp, &info->resources) { 763 + res = entry->res; 764 + if (res->flags & IORESOURCE_MEM) 765 + root = &iomem_resource; 766 + else if (res->flags & IORESOURCE_IO) 767 + root = 
&ioport_resource; 768 + else 769 + continue; 770 + 771 + conflict = insert_resource_conflict(root, res); 772 + if (conflict) { 773 + dev_info(&info->bridge->dev, 774 + "ignoring host bridge window %pR (conflicts with %s %pR)\n", 775 + res, conflict->name, conflict); 776 + resource_list_destroy_entry(entry); 777 + } 778 + } 779 + } 780 + 781 + static void __acpi_pci_root_release_info(struct acpi_pci_root_info *info) 782 + { 783 + struct resource *res; 784 + struct resource_entry *entry, *tmp; 785 + 786 + if (!info) 787 + return; 788 + 789 + resource_list_for_each_entry_safe(entry, tmp, &info->resources) { 790 + res = entry->res; 791 + if (res->parent && 792 + (res->flags & (IORESOURCE_MEM | IORESOURCE_IO))) 793 + release_resource(res); 794 + resource_list_destroy_entry(entry); 795 + } 796 + 797 + info->ops->release_info(info); 798 + } 799 + 800 + static void acpi_pci_root_release_info(struct pci_host_bridge *bridge) 801 + { 802 + struct resource *res; 803 + struct resource_entry *entry; 804 + 805 + resource_list_for_each_entry(entry, &bridge->windows) { 806 + res = entry->res; 807 + if (res->parent && 808 + (res->flags & (IORESOURCE_MEM | IORESOURCE_IO))) 809 + release_resource(res); 810 + } 811 + __acpi_pci_root_release_info(bridge->release_data); 812 + } 813 + 814 + struct pci_bus *acpi_pci_root_create(struct acpi_pci_root *root, 815 + struct acpi_pci_root_ops *ops, 816 + struct acpi_pci_root_info *info, 817 + void *sysdata) 818 + { 819 + int ret, busnum = root->secondary.start; 820 + struct acpi_device *device = root->device; 821 + int node = acpi_get_node(device->handle); 822 + struct pci_bus *bus; 823 + 824 + info->root = root; 825 + info->bridge = device; 826 + info->ops = ops; 827 + INIT_LIST_HEAD(&info->resources); 828 + snprintf(info->name, sizeof(info->name), "PCI Bus %04x:%02x", 829 + root->segment, busnum); 830 + 831 + if (ops->init_info && ops->init_info(info)) 832 + goto out_release_info; 833 + if (ops->prepare_resources) 834 + ret = 
ops->prepare_resources(info); 835 + else 836 + ret = acpi_pci_probe_root_resources(info); 837 + if (ret < 0) 838 + goto out_release_info; 839 + 840 + pci_acpi_root_add_resources(info); 841 + pci_add_resource(&info->resources, &root->secondary); 842 + bus = pci_create_root_bus(NULL, busnum, ops->pci_ops, 843 + sysdata, &info->resources); 844 + if (!bus) 845 + goto out_release_info; 846 + 847 + pci_scan_child_bus(bus); 848 + pci_set_host_bridge_release(to_pci_host_bridge(bus->bridge), 849 + acpi_pci_root_release_info, info); 850 + if (node != NUMA_NO_NODE) 851 + dev_printk(KERN_DEBUG, &bus->dev, "on NUMA node %d\n", node); 852 + return bus; 853 + 854 + out_release_info: 855 + __acpi_pci_root_release_info(info); 856 + return NULL; 857 + } 858 + 655 859 void __init acpi_pci_root_init(void) 656 860 { 657 861 acpi_hest_init();
+1 -3
drivers/acpi/proc.c
···
144 144 	.release = single_release,
145 145 };
146 146 
147   - int __init acpi_sleep_proc_init(void)
147   + void __init acpi_sleep_proc_init(void)
148 148 {
149 149 	/* 'wakeup device' [R/W] */
150 150 	proc_create("wakeup", S_IFREG | S_IRUGO | S_IWUSR,
151 151 		    acpi_root_dir, &acpi_system_wakeup_device_fops);
152   - 
153   - 	return 0;
154 152 }
+6
drivers/acpi/processor_driver.c
···
242 242 	if (pr->flags.need_hotplug_init)
243 243 		return 0;
244 244 
245   + 	result = acpi_cppc_processor_probe(pr);
246   + 	if (result)
247   + 		return -ENODEV;
248   + 
245 249 	if (!cpuidle_get_driver() || cpuidle_get_driver() == &acpi_idle_driver)
246 250 		acpi_processor_power_init(pr);
247 251 
···
290 286 	acpi_processor_power_exit(pr);
291 287 
292 288 	acpi_pss_perf_exit(pr, device);
289   + 
290   + 	acpi_cppc_processor_exit(pr);
293 291 
294 292 	return 0;
295 293 }
+363 -74
drivers/acpi/property.c
··· 19 19 20 20 #include "internal.h" 21 21 22 + static int acpi_data_get_property_array(struct acpi_device_data *data, 23 + const char *name, 24 + acpi_object_type type, 25 + const union acpi_object **obj); 26 + 22 27 /* ACPI _DSD device properties UUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301 */ 23 28 static const u8 prp_uuid[16] = { 24 29 0x14, 0xd8, 0xff, 0xda, 0xba, 0x6e, 0x8c, 0x4d, 25 30 0x8a, 0x91, 0xbc, 0x9b, 0xbf, 0x4a, 0xa3, 0x01 26 31 }; 32 + /* ACPI _DSD data subnodes UUID: dbb8e3e6-5886-4ba6-8795-1319f52a966b */ 33 + static const u8 ads_uuid[16] = { 34 + 0xe6, 0xe3, 0xb8, 0xdb, 0x86, 0x58, 0xa6, 0x4b, 35 + 0x87, 0x95, 0x13, 0x19, 0xf5, 0x2a, 0x96, 0x6b 36 + }; 37 + 38 + static bool acpi_enumerate_nondev_subnodes(acpi_handle scope, 39 + const union acpi_object *desc, 40 + struct acpi_device_data *data); 41 + static bool acpi_extract_properties(const union acpi_object *desc, 42 + struct acpi_device_data *data); 43 + 44 + static bool acpi_nondev_subnode_ok(acpi_handle scope, 45 + const union acpi_object *link, 46 + struct list_head *list) 47 + { 48 + struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER }; 49 + struct acpi_data_node *dn; 50 + acpi_handle handle; 51 + acpi_status status; 52 + 53 + dn = kzalloc(sizeof(*dn), GFP_KERNEL); 54 + if (!dn) 55 + return false; 56 + 57 + dn->name = link->package.elements[0].string.pointer; 58 + dn->fwnode.type = FWNODE_ACPI_DATA; 59 + INIT_LIST_HEAD(&dn->data.subnodes); 60 + 61 + status = acpi_get_handle(scope, link->package.elements[1].string.pointer, 62 + &handle); 63 + if (ACPI_FAILURE(status)) 64 + goto fail; 65 + 66 + status = acpi_evaluate_object_typed(handle, NULL, NULL, &buf, 67 + ACPI_TYPE_PACKAGE); 68 + if (ACPI_FAILURE(status)) 69 + goto fail; 70 + 71 + if (acpi_extract_properties(buf.pointer, &dn->data)) 72 + dn->handle = handle; 73 + 74 + /* 75 + * The scope for the subnode object lookup is the one of the namespace 76 + * node (device) containing the object that has returned the package. 
77 + * That is, it's the scope of that object's parent. 78 + */ 79 + status = acpi_get_parent(handle, &scope); 80 + if (ACPI_SUCCESS(status) 81 + && acpi_enumerate_nondev_subnodes(scope, buf.pointer, &dn->data)) 82 + dn->handle = handle; 83 + 84 + if (dn->handle) { 85 + dn->data.pointer = buf.pointer; 86 + list_add_tail(&dn->sibling, list); 87 + return true; 88 + } 89 + 90 + acpi_handle_debug(handle, "Invalid properties/subnodes data, skipping\n"); 91 + 92 + fail: 93 + ACPI_FREE(buf.pointer); 94 + kfree(dn); 95 + return false; 96 + } 97 + 98 + static int acpi_add_nondev_subnodes(acpi_handle scope, 99 + const union acpi_object *links, 100 + struct list_head *list) 101 + { 102 + bool ret = false; 103 + int i; 104 + 105 + for (i = 0; i < links->package.count; i++) { 106 + const union acpi_object *link; 107 + 108 + link = &links->package.elements[i]; 109 + /* Only two elements allowed, both must be strings. */ 110 + if (link->package.count == 2 111 + && link->package.elements[0].type == ACPI_TYPE_STRING 112 + && link->package.elements[1].type == ACPI_TYPE_STRING 113 + && acpi_nondev_subnode_ok(scope, link, list)) 114 + ret = true; 115 + } 116 + 117 + return ret; 118 + } 119 + 120 + static bool acpi_enumerate_nondev_subnodes(acpi_handle scope, 121 + const union acpi_object *desc, 122 + struct acpi_device_data *data) 123 + { 124 + int i; 125 + 126 + /* Look for the ACPI data subnodes UUID. */ 127 + for (i = 0; i < desc->package.count; i += 2) { 128 + const union acpi_object *uuid, *links; 129 + 130 + uuid = &desc->package.elements[i]; 131 + links = &desc->package.elements[i + 1]; 132 + 133 + /* 134 + * The first element must be a UUID and the second one must be 135 + * a package. 
136 + */ 137 + if (uuid->type != ACPI_TYPE_BUFFER || uuid->buffer.length != 16 138 + || links->type != ACPI_TYPE_PACKAGE) 139 + break; 140 + 141 + if (memcmp(uuid->buffer.pointer, ads_uuid, sizeof(ads_uuid))) 142 + continue; 143 + 144 + return acpi_add_nondev_subnodes(scope, links, &data->subnodes); 145 + } 146 + 147 + return false; 148 + } 27 149 28 150 static bool acpi_property_value_ok(const union acpi_object *value) 29 151 { ··· 203 81 const union acpi_object *of_compatible; 204 82 int ret; 205 83 206 - ret = acpi_dev_get_property_array(adev, "compatible", ACPI_TYPE_STRING, 207 - &of_compatible); 84 + ret = acpi_data_get_property_array(&adev->data, "compatible", 85 + ACPI_TYPE_STRING, &of_compatible); 208 86 if (ret) { 209 87 ret = acpi_dev_get_property(adev, "compatible", 210 88 ACPI_TYPE_STRING, &of_compatible); ··· 222 100 adev->flags.of_compatible_ok = 1; 223 101 } 224 102 225 - void acpi_init_properties(struct acpi_device *adev) 103 + static bool acpi_extract_properties(const union acpi_object *desc, 104 + struct acpi_device_data *data) 226 105 { 227 - struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER }; 228 - bool acpi_of = false; 229 - struct acpi_hardware_id *hwid; 230 - const union acpi_object *desc; 231 - acpi_status status; 232 106 int i; 233 107 234 - /* 235 - * Check if ACPI_DT_NAMESPACE_HID is present and inthat case we fill in 236 - * Device Tree compatible properties for this device. 237 - */ 238 - list_for_each_entry(hwid, &adev->pnp.ids, list) { 239 - if (!strcmp(hwid->id, ACPI_DT_NAMESPACE_HID)) { 240 - acpi_of = true; 241 - break; 242 - } 243 - } 244 - 245 - status = acpi_evaluate_object_typed(adev->handle, "_DSD", NULL, &buf, 246 - ACPI_TYPE_PACKAGE); 247 - if (ACPI_FAILURE(status)) 248 - goto out; 249 - 250 - desc = buf.pointer; 251 108 if (desc->package.count % 2) 252 - goto fail; 109 + return false; 253 110 254 111 /* Look for the device properties UUID. 
 */
 255 112 	for (i = 0; i < desc->package.count; i += 2) {
···
 255 154 		if (!acpi_properties_format_valid(properties))
 256 155 			break;
 257 156
 258     -		adev->data.pointer = buf.pointer;
 259     -		adev->data.properties = properties;
 260     -
 261     -		if (acpi_of)
 262     -			acpi_init_of_compatible(adev);
 263     -
 264     -		goto out;
     157 +		data->properties = properties;
     158 +		return true;
 265 159 	}
 266 160
 267     - fail:
 268     -	dev_dbg(&adev->dev, "Returned _DSD data is not valid, skipping\n");
 269     -	ACPI_FREE(buf.pointer);
     161 +	return false;
     162 + }
     163 +
     164 + void acpi_init_properties(struct acpi_device *adev)
     165 + {
     166 +	struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
     167 +	struct acpi_hardware_id *hwid;
     168 +	acpi_status status;
     169 +	bool acpi_of = false;
     170 +
     171 +	INIT_LIST_HEAD(&adev->data.subnodes);
     172 +
     173 +	/*
     174 +	 * Check if ACPI_DT_NAMESPACE_HID is present and in that case we fill in
     175 +	 * Device Tree compatible properties for this device.
     176 +	 */
     177 +	list_for_each_entry(hwid, &adev->pnp.ids, list) {
     178 +		if (!strcmp(hwid->id, ACPI_DT_NAMESPACE_HID)) {
     179 +			acpi_of = true;
     180 +			break;
     181 +		}
     182 +	}
     183 +
     184 +	status = acpi_evaluate_object_typed(adev->handle, "_DSD", NULL, &buf,
     185 +					    ACPI_TYPE_PACKAGE);
     186 +	if (ACPI_FAILURE(status))
     187 +		goto out;
     188 +
     189 +	if (acpi_extract_properties(buf.pointer, &adev->data)) {
     190 +		adev->data.pointer = buf.pointer;
     191 +		if (acpi_of)
     192 +			acpi_init_of_compatible(adev);
     193 +	}
     194 +	if (acpi_enumerate_nondev_subnodes(adev->handle, buf.pointer, &adev->data))
     195 +		adev->data.pointer = buf.pointer;
     196 +
     197 +	if (!adev->data.pointer) {
     198 +		acpi_handle_debug(adev->handle, "Invalid _DSD data, skipping\n");
     199 +		ACPI_FREE(buf.pointer);
     200 +	}
 270 201
 271 202  out:
 272 203 	if (acpi_of && !adev->flags.of_compatible_ok)
···
 306 173 			ACPI_DT_NAMESPACE_HID " requires 'compatible' property\n");
 307 174  }
 308 175
     176 + static void acpi_destroy_nondev_subnodes(struct list_head *list)
     177 + {
     178 +	struct acpi_data_node *dn, *next;
     179 +
     180 +	if (list_empty(list))
     181 +		return;
     182 +
     183 +	list_for_each_entry_safe_reverse(dn, next, list, sibling) {
     184 +		acpi_destroy_nondev_subnodes(&dn->data.subnodes);
     185 +		wait_for_completion(&dn->kobj_done);
     186 +		list_del(&dn->sibling);
     187 +		ACPI_FREE((void *)dn->data.pointer);
     188 +		kfree(dn);
     189 +	}
     190 + }
     191 +
 309 192  void acpi_free_properties(struct acpi_device *adev)
 310 193  {
     194 +	acpi_destroy_nondev_subnodes(&adev->data.subnodes);
 311 195 	ACPI_FREE((void *)adev->data.pointer);
 312 196 	adev->data.of_compatible = NULL;
 313 197 	adev->data.pointer = NULL;
···
 332 182  }
 333 183
 334 184  /**
 335     - * acpi_dev_get_property - return an ACPI property with given name
 336     - * @adev: ACPI device to get property
     185 + * acpi_data_get_property - return an ACPI property with given name
     186 + * @data: ACPI device data object to get the property from
 337 187   * @name: Name of the property
 338 188   * @type: Expected property type
 339 189   * @obj: Location to store the property value (if not %NULL)
···
 342 192   * object at the location pointed to by @obj if found.
 343 193   *
 344 194   * Callers must not attempt to free the returned objects. These objects will be
 345     - * freed by the ACPI core automatically during the removal of @adev.
     195 + * freed by the ACPI core automatically during the removal of @data.
 346 196   *
 347 197   * Return: %0 if property with @name has been found (success),
 348 198   *	%-EINVAL if the arguments are invalid,
 349 199   *	%-ENODATA if the property doesn't exist,
 350 200   *	%-EPROTO if the property value type doesn't match @type.
 351 201   */
 352     - int acpi_dev_get_property(struct acpi_device *adev, const char *name,
 353     -			  acpi_object_type type, const union acpi_object **obj)
     202 + static int acpi_data_get_property(struct acpi_device_data *data,
     203 +				  const char *name, acpi_object_type type,
     204 +				  const union acpi_object **obj)
 354 205  {
 355 206 	const union acpi_object *properties;
 356 207 	int i;
 357 208
 358     -	if (!adev || !name)
     209 +	if (!data || !name)
 359 210 		return -EINVAL;
 360 211
 361     -	if (!adev->data.pointer || !adev->data.properties)
     212 +	if (!data->pointer || !data->properties)
 362 213 		return -ENODATA;
 363 214
 364     -	properties = adev->data.properties;
     215 +	properties = data->properties;
 365 216 	for (i = 0; i < properties->package.count; i++) {
 366 217 		const union acpi_object *propname, *propvalue;
 367 218 		const union acpi_object *property;
···
 383 232 	}
 384 233 	return -ENODATA;
 385 234  }
 386     - EXPORT_SYMBOL_GPL(acpi_dev_get_property);
 387 235
 388 236  /**
 389     - * acpi_dev_get_property_array - return an ACPI array property with given name
 390     - * @adev: ACPI device to get property
     237 + * acpi_dev_get_property - return an ACPI property with given name.
     238 + * @adev: ACPI device to get the property from.
     239 + * @name: Name of the property.
     240 + * @type: Expected property type.
     241 + * @obj: Location to store the property value (if not %NULL).
     242 + */
     243 + int acpi_dev_get_property(struct acpi_device *adev, const char *name,
     244 +			  acpi_object_type type, const union acpi_object **obj)
     245 + {
     246 +	return adev ? acpi_data_get_property(&adev->data, name, type, obj) : -EINVAL;
     247 + }
     248 + EXPORT_SYMBOL_GPL(acpi_dev_get_property);
     249 +
     250 + static struct acpi_device_data *acpi_device_data_of_node(struct fwnode_handle *fwnode)
     251 + {
     252 +	if (fwnode->type == FWNODE_ACPI) {
     253 +		struct acpi_device *adev = to_acpi_device_node(fwnode);
     254 +		return &adev->data;
     255 +	} else if (fwnode->type == FWNODE_ACPI_DATA) {
     256 +		struct acpi_data_node *dn = to_acpi_data_node(fwnode);
     257 +		return &dn->data;
     258 +	}
     259 +	return NULL;
     260 + }
     261 +
     262 + /**
     263 + * acpi_node_prop_get - return an ACPI property with given name.
     264 + * @fwnode: Firmware node to get the property from.
     265 + * @propname: Name of the property.
     266 + * @valptr: Location to store a pointer to the property value (if not %NULL).
     267 + */
     268 + int acpi_node_prop_get(struct fwnode_handle *fwnode, const char *propname,
     269 +			void **valptr)
     270 + {
     271 +	return acpi_data_get_property(acpi_device_data_of_node(fwnode),
     272 +				      propname, ACPI_TYPE_ANY,
     273 +				      (const union acpi_object **)valptr);
     274 + }
     275 +
     276 + /**
     277 + * acpi_data_get_property_array - return an ACPI array property with given name
     278 + * @data: ACPI data object to get the property from
 391 279   * @name: Name of the property
 392 280   * @type: Expected type of array elements
 393 281   * @obj: Location to store a pointer to the property value (if not NULL)
···
 435 245   * ACPI object at the location pointed to by @obj if found.
 436 246   *
 437 247   * Callers must not attempt to free the returned objects. Those objects will be
 438     - * freed by the ACPI core automatically during the removal of @adev.
     248 + * freed by the ACPI core automatically during the removal of @data.
 439 249   *
 440 250   * Return: %0 if array property (package) with @name has been found (success),
 441 251   *	%-EINVAL if the arguments are invalid,
···
 443 253   *	%-EPROTO if the property is not a package or the type of its elements
 444 254   *	doesn't match @type.
 445 255   */
 446     - int acpi_dev_get_property_array(struct acpi_device *adev, const char *name,
 447     -				acpi_object_type type,
 448     -				const union acpi_object **obj)
     256 + static int acpi_data_get_property_array(struct acpi_device_data *data,
     257 +					const char *name,
     258 +					acpi_object_type type,
     259 +					const union acpi_object **obj)
 449 260  {
 450 261 	const union acpi_object *prop;
 451 262 	int ret, i;
 452 263
 453     -	ret = acpi_dev_get_property(adev, name, ACPI_TYPE_PACKAGE, &prop);
     264 +	ret = acpi_data_get_property(data, name, ACPI_TYPE_PACKAGE, &prop);
 454 265 	if (ret)
 455 266 		return ret;
 456 267
···
 466 275
 467 276 	return 0;
 468 277  }
 469     - EXPORT_SYMBOL_GPL(acpi_dev_get_property_array);
 470 278
 471 279  /**
 472     - * acpi_dev_get_property_reference - returns handle to the referenced object
 473     - * @adev: ACPI device to get property
 474     - * @name: Name of the property
     280 + * acpi_data_get_property_reference - returns handle to the referenced object
     281 + * @data: ACPI device data object containing the property
     282 + * @propname: Name of the property
 475 283   * @index: Index of the reference to return
 476 284   * @args: Location to store the returned reference with optional arguments
 477 285   *
···
 484 294   *
 485 295   * Return: %0 on success, negative error code on failure.
 486 296   */
 487     - int acpi_dev_get_property_reference(struct acpi_device *adev,
 488     -				    const char *name, size_t index,
 489     -				    struct acpi_reference_args *args)
     297 + static int acpi_data_get_property_reference(struct acpi_device_data *data,
     298 +					    const char *propname, size_t index,
     299 +					    struct acpi_reference_args *args)
 490 300  {
 491 301 	const union acpi_object *element, *end;
 492 302 	const union acpi_object *obj;
 493 303 	struct acpi_device *device;
 494 304 	int ret, idx = 0;
 495 305
 496     -	ret = acpi_dev_get_property(adev, name, ACPI_TYPE_ANY, &obj);
     306 +	ret = acpi_data_get_property(data, propname, ACPI_TYPE_ANY, &obj);
 497 307 	if (ret)
 498 308 		return ret;
 499 309
···
 568 378
 569 379 	return -EPROTO;
 570 380  }
 571     - EXPORT_SYMBOL_GPL(acpi_dev_get_property_reference);
 572 381
 573     - int acpi_dev_prop_get(struct acpi_device *adev, const char *propname,
 574     -		      void **valptr)
     382 + /**
     383 + * acpi_node_get_property_reference - get a handle to the referenced object.
     384 + * @fwnode: Firmware node to get the property from.
     385 + * @name: Name of the property.
     386 + * @index: Index of the reference to return.
     387 + * @args: Location to store the returned reference with optional arguments.
     388 + */
     389 + int acpi_node_get_property_reference(struct fwnode_handle *fwnode,
     390 +				     const char *name, size_t index,
     391 +				     struct acpi_reference_args *args)
 575 392  {
 576     -	return acpi_dev_get_property(adev, propname, ACPI_TYPE_ANY,
 577     -				     (const union acpi_object **)valptr);
 578     - }
     393 +	struct acpi_device_data *data = acpi_device_data_of_node(fwnode);
 579 394
 580     - int acpi_dev_prop_read_single(struct acpi_device *adev, const char *propname,
 581     -			      enum dev_prop_type proptype, void *val)
     395 +	return data ? acpi_data_get_property_reference(data, name, index, args) : -EINVAL;
     396 + }
     397 + EXPORT_SYMBOL_GPL(acpi_node_get_property_reference);
     398 +
     399 + static int acpi_data_prop_read_single(struct acpi_device_data *data,
     400 +				      const char *propname,
     401 +				      enum dev_prop_type proptype, void *val)
 582 402  {
 583 403 	const union acpi_object *obj;
 584 404 	int ret;
···
 597 397 		return -EINVAL;
 598 398
 599 399 	if (proptype >= DEV_PROP_U8 && proptype <= DEV_PROP_U64) {
 600     -		ret = acpi_dev_get_property(adev, propname, ACPI_TYPE_INTEGER, &obj);
     400 +		ret = acpi_data_get_property(data, propname, ACPI_TYPE_INTEGER, &obj);
 601 401 		if (ret)
 602 402 			return ret;
 603 403
···
 622 422 			break;
 623 423 		}
 624 424 	} else if (proptype == DEV_PROP_STRING) {
 625     -		ret = acpi_dev_get_property(adev, propname, ACPI_TYPE_STRING, &obj);
     425 +		ret = acpi_data_get_property(data, propname, ACPI_TYPE_STRING, &obj);
 626 426 		if (ret)
 627 427 			return ret;
 628 428
···
 631 431 		ret = -EINVAL;
 632 432 	}
 633 433 	return ret;
     434 + }
     435 +
     436 + int acpi_dev_prop_read_single(struct acpi_device *adev, const char *propname,
     437 +			      enum dev_prop_type proptype, void *val)
     438 + {
     439 +	return adev ? acpi_data_prop_read_single(&adev->data, propname, proptype, val) : -EINVAL;
 634 440  }
 635 441
 636 442  static int acpi_copy_property_array_u8(const union acpi_object *items, u8 *val,
···
 715 509 	return 0;
 716 510  }
 717 511
 718     - int acpi_dev_prop_read(struct acpi_device *adev, const char *propname,
 719     -		       enum dev_prop_type proptype, void *val, size_t nval)
     512 + static int acpi_data_prop_read(struct acpi_device_data *data,
     513 +			       const char *propname,
     514 +			       enum dev_prop_type proptype,
     515 +			       void *val, size_t nval)
 720 516  {
 721 517 	const union acpi_object *obj;
 722 518 	const union acpi_object *items;
 723 519 	int ret;
 724 520
 725 521 	if (val && nval == 1) {
 726     -		ret = acpi_dev_prop_read_single(adev, propname, proptype, val);
     522 +		ret = acpi_data_prop_read_single(data, propname, proptype, val);
 727 523 		if (!ret)
 728 524 			return ret;
 729 525 	}
 730 526
 731     -	ret = acpi_dev_get_property_array(adev, propname, ACPI_TYPE_ANY, &obj);
     527 +	ret = acpi_data_get_property_array(data, propname, ACPI_TYPE_ANY, &obj);
 732 528 	if (ret)
 733 529 		return ret;
 734 530
···
 765 557 		break;
 766 558 	}
 767 559 	return ret;
     560 + }
     561 +
     562 + int acpi_dev_prop_read(struct acpi_device *adev, const char *propname,
     563 +		       enum dev_prop_type proptype, void *val, size_t nval)
     564 + {
     565 +	return adev ? acpi_data_prop_read(&adev->data, propname, proptype, val, nval) : -EINVAL;
     566 + }
     567 +
     568 + /**
     569 + * acpi_node_prop_read - retrieve the value of an ACPI property with given name.
     570 + * @fwnode: Firmware node to get the property from.
     571 + * @propname: Name of the property.
     572 + * @proptype: Expected property type.
     573 + * @val: Location to store the property value (if not %NULL).
     574 + * @nval: Size of the array pointed to by @val.
     575 + *
     576 + * If @val is %NULL, return the number of array elements comprising the value
     577 + * of the property. Otherwise, read at most @nval values to the array at the
     578 + * location pointed to by @val.
     579 + */
     580 + int acpi_node_prop_read(struct fwnode_handle *fwnode, const char *propname,
     581 +			enum dev_prop_type proptype, void *val, size_t nval)
     582 + {
     583 +	return acpi_data_prop_read(acpi_device_data_of_node(fwnode),
     584 +				   propname, proptype, val, nval);
     585 + }
     586 +
     587 + /**
     588 + * acpi_get_next_subnode - Return the next child node handle for a device.
     589 + * @dev: Device to find the next child node for.
     590 + * @child: Handle to one of the device's child nodes or a null handle.
     591 + */
     592 + struct fwnode_handle *acpi_get_next_subnode(struct device *dev,
     593 +					    struct fwnode_handle *child)
     594 + {
     595 +	struct acpi_device *adev = ACPI_COMPANION(dev);
     596 +	struct list_head *head, *next;
     597 +
     598 +	if (!adev)
     599 +		return NULL;
     600 +
     601 +	if (!child || child->type == FWNODE_ACPI) {
     602 +		head = &adev->children;
     603 +		if (list_empty(head))
     604 +			goto nondev;
     605 +
     606 +		if (child) {
     607 +			adev = to_acpi_device_node(child);
     608 +			next = adev->node.next;
     609 +			if (next == head) {
     610 +				child = NULL;
     611 +				goto nondev;
     612 +			}
     613 +			adev = list_entry(next, struct acpi_device, node);
     614 +		} else {
     615 +			adev = list_first_entry(head, struct acpi_device, node);
     616 +		}
     617 +		return acpi_fwnode_handle(adev);
     618 +	}
     619 +
     620 +  nondev:
     621 +	if (!child || child->type == FWNODE_ACPI_DATA) {
     622 +		struct acpi_data_node *dn;
     623 +
     624 +		head = &adev->data.subnodes;
     625 +		if (list_empty(head))
     626 +			return NULL;
     627 +
     628 +		if (child) {
     629 +			dn = to_acpi_data_node(child);
     630 +			next = dn->sibling.next;
     631 +			if (next == head)
     632 +				return NULL;
     633 +
     634 +			dn = list_entry(next, struct acpi_data_node, sibling);
     635 +		} else {
     636 +			dn = list_first_entry(head, struct acpi_data_node, sibling);
     637 +		}
     638 +		return &dn->fwnode;
     639 +	}
     640 +	return NULL;
 768 641  }
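The new acpi_data_get_property() above resolves a name against the flattened _DSD name/value pairs and reports -EINVAL, -ENODATA or -EPROTO depending on what went wrong. A minimal userspace sketch of that lookup contract (the struct and type tags here are illustrative stand-ins, not the kernel's `union acpi_object`):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for a _DSD-style name/value entry. */
struct prop { const char *name; int type; long value; };

enum { TYPE_ANY = 0, TYPE_INT = 1, TYPE_STRING = 2 };

/* Mirror of the lookup logic: -EINVAL for bad arguments, -ENODATA when
 * the name is absent, -EPROTO when the stored type does not match. */
static int prop_get(const struct prop *props, int count, const char *name,
                    int type, long *out)
{
	int i;

	if (!props || !name)
		return -22;		/* -EINVAL */
	for (i = 0; i < count; i++) {
		if (strcmp(props[i].name, name))
			continue;
		if (type != TYPE_ANY && props[i].type != type)
			return -71;	/* -EPROTO */
		if (out)
			*out = props[i].value;
		return 0;
	}
	return -61;			/* -ENODATA */
}
```

The same three-way error split is what lets callers such as acpi_data_prop_read_single() distinguish "property missing" from "property has the wrong type".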
+6 -3
drivers/acpi/resource.c
···
 119 119  EXPORT_SYMBOL_GPL(acpi_dev_resource_memory);
 120 120
 121 121  static void acpi_dev_ioresource_flags(struct resource *res, u64 len,
 122     -				      u8 io_decode)
     122 +				      u8 io_decode, u8 translation_type)
 123 123  {
 124 124 	res->flags = IORESOURCE_IO;
 125 125
···
 131 131
 132 132 	if (io_decode == ACPI_DECODE_16)
 133 133 		res->flags |= IORESOURCE_IO_16BIT_ADDR;
     134 +	if (translation_type == ACPI_SPARSE_TRANSLATION)
     135 +		res->flags |= IORESOURCE_IO_SPARSE;
 134 136  }
 135 137
 136 138  static void acpi_dev_get_ioresource(struct resource *res, u64 start, u64 len,
···
 140 138  {
 141 139 	res->start = start;
 142 140 	res->end = start + len - 1;
 143     -	acpi_dev_ioresource_flags(res, len, io_decode);
     141 +	acpi_dev_ioresource_flags(res, len, io_decode, 0);
 144 142  }
 145 143
 146 144  /**
···
 233 231 		acpi_dev_memresource_flags(res, len, wp);
 234 232 		break;
 235 233 	case ACPI_IO_RANGE:
 236     -		acpi_dev_ioresource_flags(res, len, iodec);
     234 +		acpi_dev_ioresource_flags(res, len, iodec,
     235 +					  addr->info.io.translation_type);
 237 236 		break;
 238 237 	case ACPI_BUS_NUMBER_RANGE:
 239 238 		res->flags = IORESOURCE_BUS;
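The resource.c hunk above just ORs one more bit into `res->flags` when the address-space descriptor reports a sparse I/O translation. A self-contained sketch of that flag computation, using stand-in constants rather than the kernel's real `IORESOURCE_*`/`ACPI_*` values:

```c
#include <assert.h>

/* Stand-in flag and enum values for illustration only; the kernel's
 * constants live in linux/ioport.h and the ACPICA headers. */
enum {
	IORES_IO       = 1 << 0,
	IORES_DISABLED = 1 << 1,
	IORES_16BIT    = 1 << 2,
	IORES_SPARSE   = 1 << 3,
};
enum { DECODE_16 = 0, DECODE_10 = 1 };
enum { DENSE_TRANSLATION = 0, SPARSE_TRANSLATION = 1 };

static unsigned int io_flags(unsigned long len, int io_decode,
                             int translation_type)
{
	unsigned int flags = IORES_IO;

	if (len == 0)
		flags |= IORES_DISABLED;	/* zero-length window */
	if (io_decode == DECODE_16)
		flags |= IORES_16BIT;
	if (translation_type == SPARSE_TRANSLATION)
		flags |= IORES_SPARSE;		/* the new bit in this hunk */
	return flags;
}
```

Callers that have no translation information (like acpi_dev_get_ioresource() above) simply pass the dense/zero value, so their behaviour is unchanged.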
+43 -24
drivers/acpi/scan.c
···
 695 695 	return result;
 696 696  }
 697 697
 698     - struct acpi_device *acpi_get_next_child(struct device *dev,
 699     -					struct acpi_device *child)
 700     - {
 701     -	struct acpi_device *adev = ACPI_COMPANION(dev);
 702     -	struct list_head *head, *next;
 703     -
 704     -	if (!adev)
 705     -		return NULL;
 706     -
 707     -	head = &adev->children;
 708     -	if (list_empty(head))
 709     -		return NULL;
 710     -
 711     -	if (!child)
 712     -		return list_first_entry(head, struct acpi_device, node);
 713     -
 714     -	next = child->node.next;
 715     -	return next == head ? NULL : list_entry(next, struct acpi_device, node);
 716     - }
 717     -
 718 698  /* --------------------------------------------------------------------------
 719 699                               Device Enumeration
 720 700     -------------------------------------------------------------------------- */
···
 1164 1184 	if (!id)
 1165 1185 		return;
 1166 1186
 1167      -	id->id = kstrdup(dev_id, GFP_KERNEL);
      1187 +	id->id = kstrdup_const(dev_id, GFP_KERNEL);
 1168 1188 	if (!id->id) {
 1169 1189 		kfree(id);
 1170 1190 		return;
···
 1302 1322 	struct acpi_hardware_id *id, *tmp;
 1303 1323
 1304 1324 	list_for_each_entry_safe(id, tmp, &pnp->ids, list) {
 1305      -		kfree(id->id);
      1325 +		kfree_const(id->id);
 1306 1326 		kfree(id);
 1307 1327 	}
 1308 1328 	kfree(pnp->unique_id);
···
 1452 1472  }
 1453 1473
 1454 1474  static bool acpi_scan_handler_matching(struct acpi_scan_handler *handler,
 1455      -				       char *idstr,
      1475 +				       const char *idstr,
 1456 1476 				       const struct acpi_device_id **matchid)
 1457 1477  {
 1458 1478 	const struct acpi_device_id *devid;
···
 1471 1491 	return false;
 1472 1492  }
 1473 1493
 1474      - static struct acpi_scan_handler *acpi_scan_match_handler(char *idstr,
      1494 + static struct acpi_scan_handler *acpi_scan_match_handler(const char *idstr,
 1475 1495 		const struct acpi_device_id **matchid)
 1476 1496  {
 1477 1497 	struct acpi_scan_handler *handler;
···
 1912 1932   out:
 1913 1933 	mutex_unlock(&acpi_scan_lock);
 1914 1934 	return result;
      1935 + }
      1936 +
      1937 + static struct acpi_probe_entry *ape;
      1938 + static int acpi_probe_count;
      1939 + static DEFINE_SPINLOCK(acpi_probe_lock);
      1940 +
      1941 + static int __init acpi_match_madt(struct acpi_subtable_header *header,
      1942 +				  const unsigned long end)
      1943 + {
      1944 +	if (!ape->subtable_valid || ape->subtable_valid(header, ape))
      1945 +		if (!ape->probe_subtbl(header, end))
      1946 +			acpi_probe_count++;
      1947 +
      1948 +	return 0;
      1949 + }
      1950 +
      1951 + int __init __acpi_probe_device_table(struct acpi_probe_entry *ap_head, int nr)
      1952 + {
      1953 +	int count = 0;
      1954 +
      1955 +	if (acpi_disabled)
      1956 +		return 0;
      1957 +
      1958 +	spin_lock(&acpi_probe_lock);
      1959 +	for (ape = ap_head; nr; ape++, nr--) {
      1960 +		if (ACPI_COMPARE_NAME(ACPI_SIG_MADT, ape->id)) {
      1961 +			acpi_probe_count = 0;
      1962 +			acpi_table_parse_madt(ape->type, acpi_match_madt, 0);
      1963 +			count += acpi_probe_count;
      1964 +		} else {
      1965 +			int res;
      1966 +			res = acpi_table_parse(ape->id, ape->probe_table);
      1967 +			if (!res)
      1968 +				count++;
      1969 +		}
      1970 +	}
      1971 +	spin_unlock(&acpi_probe_lock);
      1972 +
      1973 +	return count;
 1915 1974  }
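__acpi_probe_device_table() above is the ACPI counterpart of the DT early-probe walk described in the merge message: it scans a linker-collected array of probe entries, runs each probe whose table is present, and returns how many succeeded. A stripped-down userspace sketch of that walk, with an illustrative `struct probe_entry` in place of the kernel's `struct acpi_probe_entry`:

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-in: a 4-character table signature plus a probe
 * callback that returns 0 on success, as in the kernel. */
struct probe_entry {
	char id[5];
	int (*probe)(void);
};

/* Run every probe whose table is present and count the successes,
 * the way __acpi_probe_device_table() does for non-MADT entries. */
static int probe_device_table(const struct probe_entry *e, int nr,
                              const char *present_sig)
{
	int count = 0;

	for (; nr; e++, nr--) {
		if (strcmp(e->id, present_sig))
			continue;	/* table not present, skip */
		if (!e->probe())
			count++;
	}
	return count;
}

static int probe_ok(void)   { return 0; }
static int probe_fail(void) { return 1; }

static const struct probe_entry demo[] = {
	{ "APIC", probe_ok },
	{ "GTDT", probe_ok },	/* signature absent below, never run */
	{ "APIC", probe_fail },	/* runs but fails, not counted */
};
```

The MADT path in the real function is the same idea, except that one table can contain many subtables, so the per-subtable successes are accumulated via acpi_match_madt().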
+7 -2
drivers/acpi/sleep.c
···
 487 487 		pr_err("ACPI does not support sleep state S%u\n", acpi_state);
 488 488 		return -ENOSYS;
 489 489 	}
     490 +	if (acpi_state > ACPI_STATE_S1)
     491 +		pm_set_suspend_via_firmware();
 490 492
 491 493 	acpi_pm_start(acpi_state);
 492 494 	return 0;
···
 524 522 		if (error)
 525 523 			return error;
 526 524 		pr_info(PREFIX "Low-level resume complete\n");
     525 +		pm_set_resume_via_firmware();
 527 526 		break;
 528 527 	}
 529 528 	trace_suspend_resume(TPS("acpi_suspend"), acpi_state, false);
···
 635 632 	acpi_enable_wakeup_devices(ACPI_STATE_S0);
 636 633 	acpi_enable_all_wakeup_gpes();
 637 634 	acpi_os_wait_events_complete();
 638      -	enable_irq_wake(acpi_gbl_FADT.sci_interrupt);
      635 +	if (acpi_sci_irq_valid())
      636 +		enable_irq_wake(acpi_sci_irq);
 639 637 	return 0;
 640 638  }
 641 639
 642 640  static void acpi_freeze_restore(void)
 643 641  {
 644 642 	acpi_disable_wakeup_devices(ACPI_STATE_S0);
 645      -	disable_irq_wake(acpi_gbl_FADT.sci_interrupt);
      643 +	if (acpi_sci_irq_valid())
      644 +		disable_irq_wake(acpi_sci_irq);
 646 645 	acpi_enable_all_runtime_gpes();
 647 646  }
 648 647
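The sleep.c hunk is where the new "suspend via firmware" mechanism from the merge message gets fed: states deeper than S1 mark the transition as firmware-assisted, so drivers can later tell suspend-to-RAM apart from suspend-to-idle. A userspace mock of that flag bookkeeping pattern (all names here are illustrative, not the kernel's actual `pm_*_via_firmware()` API):

```c
#include <assert.h>

static unsigned int pm_flags;

#define PM_SUSPEND_VIA_FW	(1U << 0)

static void set_suspend_via_firmware(void)
{
	pm_flags |= PM_SUSPEND_VIA_FW;
}

static int suspend_via_firmware(void)
{
	return !!(pm_flags & PM_SUSPEND_VIA_FW);
}

/* Mirrors the acpi_suspend_begin() change above: only sleep states
 * deeper than S1 go through platform firmware. */
static void suspend_begin(int acpi_state)
{
	if (acpi_state > 1)	/* stand-in for ACPI_STATE_S1 */
		set_suspend_via_firmware();
}
```

A driver-side caller would then branch on the query helper during its resume callback to decide whether device state survived in hardware or must be fully reprogrammed.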
+3
drivers/acpi/sysfs.c
···
 878 878 		return result;
 879 879
 880 880 	hotplug_kobj = kobject_create_and_add("hotplug", acpi_kobj);
     881 +	if (!hotplug_kobj)
     882 +		return -ENOMEM;
     883 +
 881 884 	result = sysfs_create_file(hotplug_kobj, &force_remove_attr.attr);
 882 885 	if (result)
 883 886 		return result;
+76 -18
drivers/acpi/tables.c
···
 210 210 	}
 211 211  }
 212 212
 213     - int __init
 214     - acpi_parse_entries(char *id, unsigned long table_size,
 215     -		   acpi_tbl_entry_handler handler,
     213 + /**
     214 + * acpi_parse_entries_array - find and handle the subtables matching @proc
     215 + *
     216 + * @id: table id (for debugging purposes)
     217 + * @table_size: size of a single entry
     218 + * @table_header: start of the table to parse
     219 + * @proc: array of acpi_subtable_proc structs, each pairing an entry id
     220 + *	with the handler associated with it
     221 + * @proc_num: number of elements in @proc
     222 + * @max_entries: maximum number of entries to process
     223 + *
     224 + * For each element of @proc, find the subtables with a matching id and run
     225 + * the associated handler on them. The assumption is that there is only a
     226 + * single handler for a particular entry id.
     227 + *
     228 + * On success, returns the sum of all matching entries for all proc
     229 + * handlers. Otherwise, -ENODEV or -EINVAL is returned.
     230 + */
     231 + static int __init
     232 + acpi_parse_entries_array(char *id, unsigned long table_size,
 216 233 		struct acpi_table_header *table_header,
 217     -		   int entry_id, unsigned int max_entries)
     234 +		struct acpi_subtable_proc *proc, int proc_num,
     235 +		unsigned int max_entries)
 218 236  {
 219 237 	struct acpi_subtable_header *entry;
 220     -	int count = 0;
 221 238 	unsigned long table_end;
     239 +	int count = 0;
     240 +	int i;
 222 241
 223 242 	if (acpi_disabled)
 224 243 		return -ENODEV;
 225 244
 226     -	if (!id || !handler)
     245 +	if (!id)
 227 246 		return -EINVAL;
 228 247
 229 248 	if (!table_size)
···
 262 243
 263 244 	while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) <
 264 245 	       table_end) {
 265     -		if (entry->type == entry_id
 266     -		    && (!max_entries || count < max_entries)) {
 267     -			if (handler(entry, table_end))
     246 +		if (max_entries && count >= max_entries)
     247 +			break;
     248 +
     249 +		for (i = 0; i < proc_num; i++) {
     250 +			if (entry->type != proc[i].id)
     251 +				continue;
     252 +			if (!proc[i].handler ||
     253 +			    proc[i].handler(entry, table_end))
 268 254 				return -EINVAL;
 269 255
 270     -			count++;
     256 +			proc[i].count++;
     257 +			break;
 271 258 		}
     259 +		if (i != proc_num)
     260 +			count++;
 272 261
 273 262 		/*
 274 263 		 * If entry->length is 0, break from this loop to avoid
 275 264 		 * infinite loop.
 276 265 		 */
 277 266 		if (entry->length == 0) {
 278     -			pr_err("[%4.4s:0x%02x] Invalid zero length\n", id, entry_id);
     267 +			pr_err("[%4.4s:0x%02x] Invalid zero length\n", id, proc->id);
 279 268 			return -EINVAL;
 280 269 		}
 281 270
···
 293 266
 294 267 	if (max_entries && count > max_entries) {
 295 268 		pr_warn("[%4.4s:0x%02x] ignored %i entries of %i found\n",
 296     -			id, entry_id, count - max_entries, count);
     269 +			id, proc->id, count - max_entries, count);
 297 270 	}
 298 271
 299 272 	return count;
 300 273  }
 301 274
 302 275  int __init
 303     - acpi_table_parse_entries(char *id,
     276 + acpi_parse_entries(char *id,
     277 +	unsigned long table_size,
     278 +	acpi_tbl_entry_handler handler,
     279 +	struct acpi_table_header *table_header,
     280 +	int entry_id, unsigned int max_entries)
     281 + {
     282 +	struct acpi_subtable_proc proc = {
     283 +		.id = entry_id,
     284 +		.handler = handler,
     285 +	};
     286 +
     287 +	return acpi_parse_entries_array(id, table_size, table_header,
     288 +			&proc, 1, max_entries);
     289 + }
     290 +
     291 + int __init
     292 + acpi_table_parse_entries_array(char *id,
 304 293 	unsigned long table_size,
 305     -	int entry_id,
 306     -	acpi_tbl_entry_handler handler,
     294 +	struct acpi_subtable_proc *proc, int proc_num,
 307 295 	unsigned int max_entries)
 308 296  {
 309 297 	struct acpi_table_header *table_header = NULL;
···
 329 287 	if (acpi_disabled)
 330 288 		return -ENODEV;
 331 289
 332     -	if (!id || !handler)
     290 +	if (!id)
 333 291 		return -EINVAL;
 334 292
 335 293 	if (!strncmp(id, ACPI_SIG_MADT, 4))
···
 341 299 		return -ENODEV;
 342 300 	}
 343 301
 344     -	count = acpi_parse_entries(id, table_size, handler, table_header,
 345     -				   entry_id, max_entries);
     302 +	count = acpi_parse_entries_array(id, table_size, table_header,
     303 +			proc, proc_num, max_entries);
 346 304
 347 305 	early_acpi_os_unmap_memory((char *)table_header, tbl_size);
 348 306 	return count;
     307 + }
     308 +
     309 + int __init
     310 + acpi_table_parse_entries(char *id,
     311 +	unsigned long table_size,
     312 +	int entry_id,
     313 +	acpi_tbl_entry_handler handler,
     314 +	unsigned int max_entries)
     315 + {
     316 +	struct acpi_subtable_proc proc = {
     317 +		.id = entry_id,
     318 +		.handler = handler,
     319 +	};
     320 +
     321 +	return acpi_table_parse_entries_array(id, table_size, &proc, 1,
     322 +			max_entries);
 349 323  }
 350 324
 351 325  int __init
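acpi_parse_entries_array() generalizes the old single-handler walk: each subtable entry is dispatched to the first element of @proc with a matching id, successes are counted per handler and overall, and a failing handler aborts the parse. A compact userspace sketch of that dispatch loop (types are simplified stand-ins for `struct acpi_subtable_proc` and the subtable stream):

```c
#include <assert.h>

struct subtable { int type; };

/* Illustrative stand-in for struct acpi_subtable_proc. */
struct subtable_proc {
	int id;
	int (*handler)(const struct subtable *);
	int count;
};

static int parse_entries_array(const struct subtable *e, int nent,
                               struct subtable_proc *proc, int proc_num)
{
	int count = 0, i, n;

	for (n = 0; n < nent; n++, e++) {
		for (i = 0; i < proc_num; i++) {
			if (e->type != proc[i].id)
				continue;
			if (!proc[i].handler || proc[i].handler(e))
				return -22;	/* -EINVAL */
			proc[i].count++;	/* per-handler tally */
			break;
		}
		if (i != proc_num)
			count++;		/* overall tally */
	}
	return count;
}

static int ok_handler(const struct subtable *e) { (void)e; return 0; }
```

Entries whose type matches no element of @proc are simply skipped, which is what lets several subsystems register interest in different subtable ids of the same table in one pass.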
+9
drivers/acpi/video_detect.c
···
 244 244
 245 245 	/* Non win8 machines which need native backlight nevertheless */
 246 246 	{
     247 +		/* https://bugzilla.redhat.com/show_bug.cgi?id=1201530 */
     248 +		.callback = video_detect_force_native,
     249 +		.ident = "Lenovo Ideapad S405",
     250 +		.matches = {
     251 +			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
     252 +			DMI_MATCH(DMI_BOARD_NAME, "Lenovo IdeaPad S405"),
     253 +		},
     254 +	},
     255 +	{
 247 256 		/* https://bugzilla.redhat.com/show_bug.cgi?id=1187004 */
 248 257 		.callback = video_detect_force_native,
 249 258 		.ident = "Lenovo Ideapad Z570",
+1 -1
drivers/base/power/Makefile
···
 1 1  obj-$(CONFIG_PM)	+= sysfs.o generic_ops.o common.o qos.o runtime.o wakeirq.o
 2 2  obj-$(CONFIG_PM_SLEEP)	+= main.o wakeup.o
 3 3  obj-$(CONFIG_PM_TRACE_RTC)	+= trace.o
 4   -  obj-$(CONFIG_PM_OPP)	+= opp.o
   4 +  obj-$(CONFIG_PM_OPP)	+= opp/
 5 5  obj-$(CONFIG_PM_GENERIC_DOMAINS)	+= domain.o domain_governor.o
 6 6  obj-$(CONFIG_HAVE_CLK)	+= clock_ops.o
 7 7
+3 -3
drivers/base/power/clock_ops.c
···
 17 17  #include <linux/err.h>
 18 18  #include <linux/pm_runtime.h>
 19 19
 20    - #ifdef CONFIG_PM
    20 + #ifdef CONFIG_PM_CLK
 21 21
 22 22  enum pce_status {
 23 23 	PCE_STATUS_NONE = 0,
···
 404 404 	return pm_generic_runtime_resume(dev);
 405 405  }
 406 406
 407    - #else /* !CONFIG_PM */
    407 + #else /* !CONFIG_PM_CLK */
 408 408
 409 409  /**
 410 410   * enable_clock - Enable a device clock.
···
 484 484 	return 0;
 485 485  }
 486 486
 487    - #endif /* !CONFIG_PM */
    487 + #endif /* !CONFIG_PM_CLK */
 488 488
 489 489  /**
 490 490   * pm_clk_add_notifier - Add bus type notifier for power management clocks.
+73 -295
drivers/base/power/domain.c
···
 34 34 		__ret;							\
 35 35  })
 36 36
 37    - #define GENPD_DEV_TIMED_CALLBACK(genpd, type, callback, dev, field, name)	\
 38    - ({										\
 39    -	ktime_t __start = ktime_get();						\
 40    -	type __retval = GENPD_DEV_CALLBACK(genpd, type, callback, dev);		\
 41    -	s64 __elapsed = ktime_to_ns(ktime_sub(ktime_get(), __start));		\
 42    -	struct gpd_timing_data *__td = &dev_gpd_data(dev)->td;			\
 43    -	if (!__retval && __elapsed > __td->field) {				\
 44    -		__td->field = __elapsed;					\
 45    -		dev_dbg(dev, name " latency exceeded, new value %lld ns\n",	\
 46    -			__elapsed);						\
 47    -		genpd->max_off_time_changed = true;				\
 48    -		__td->constraint_changed = true;				\
 49    -	}									\
 50    -	__retval;								\
 51    - })
 52    -
 53 37  static LIST_HEAD(gpd_list);
 54 38  static DEFINE_MUTEX(gpd_list_lock);
 55    -
 56    - static struct generic_pm_domain *pm_genpd_lookup_name(const char *domain_name)
 57    - {
 58    -	struct generic_pm_domain *genpd = NULL, *gpd;
 59    -
 60    -	if (IS_ERR_OR_NULL(domain_name))
 61    -		return NULL;
 62    -
 63    -	mutex_lock(&gpd_list_lock);
 64    -	list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
 65    -		if (!strcmp(gpd->name, domain_name)) {
 66    -			genpd = gpd;
 67    -			break;
 68    -		}
 69    -	}
 70    -	mutex_unlock(&gpd_list_lock);
 71    -	return genpd;
 72    - }
 73 39
 74 40  /*
 75 41   * Get the generic PM domain for a particular struct device.
···
 76 110
 77 111  static int genpd_stop_dev(struct generic_pm_domain *genpd, struct device *dev)
 78 112  {
 79     -	return GENPD_DEV_TIMED_CALLBACK(genpd, int, stop, dev,
 80     -					stop_latency_ns, "stop");
    113 +	return GENPD_DEV_CALLBACK(genpd, int, stop, dev);
 81 114  }
 82 115
 83     - static int genpd_start_dev(struct generic_pm_domain *genpd, struct device *dev,
 84     -			   bool timed)
    116 + static int genpd_start_dev(struct generic_pm_domain *genpd, struct device *dev)
 85 117  {
 86     -	if (!timed)
 87     -		return GENPD_DEV_CALLBACK(genpd, int, start, dev);
 88     -
 89     -	return GENPD_DEV_TIMED_CALLBACK(genpd, int, start, dev,
 90     -					start_latency_ns, "start");
    118 +	return GENPD_DEV_CALLBACK(genpd, int, start, dev);
 91 119  }
 92 120
 93 121  static bool genpd_sd_counter_dec(struct generic_pm_domain *genpd)
···
 98 138  {
 99 139 	atomic_inc(&genpd->sd_count);
 100 140 	smp_mb__after_atomic();
 101     - }
 102     -
 103     - static void genpd_recalc_cpu_exit_latency(struct generic_pm_domain *genpd)
 104     - {
 105     -	s64 usecs64;
 106     -
 107     -	if (!genpd->cpuidle_data)
 108     -		return;
 109     -
 110     -	usecs64 = genpd->power_on_latency_ns;
 111     -	do_div(usecs64, NSEC_PER_USEC);
 112     -	usecs64 += genpd->cpuidle_data->saved_exit_latency;
 113     -	genpd->cpuidle_data->idle_state->exit_latency = usecs64;
 114 141  }
 115 142
 116 143  static int genpd_power_on(struct generic_pm_domain *genpd, bool timed)
···
 123 176
 124 177 	genpd->power_on_latency_ns = elapsed_ns;
 125 178 	genpd->max_off_time_changed = true;
 126     -	genpd_recalc_cpu_exit_latency(genpd);
 127 179 	pr_debug("%s: Power-%s latency exceeded, new value %lld ns\n",
 128 180 		 genpd->name, "on", elapsed_ns);
···
 159 213  }
 160 214
 161 215  /**
 162     - * genpd_queue_power_off_work - Queue up the execution of pm_genpd_poweroff().
    216 + * genpd_queue_power_off_work - Queue up the execution of genpd_poweroff().
 163 217   * @genpd: PM domain to power off.
 164 218   *
 165     - * Queue up the execution of pm_genpd_poweroff() unless it's already been done
    219 + * Queue up the execution of genpd_poweroff() unless it's already been done
 166 220   * before.
 167 221   */
 168 222  static void genpd_queue_power_off_work(struct generic_pm_domain *genpd)
···
 170 224 	queue_work(pm_wq, &genpd->power_off_work);
 171 225  }
 172 226
    227 + static int genpd_poweron(struct generic_pm_domain *genpd);
    228 +
 173 229  /**
 174     - * __pm_genpd_poweron - Restore power to a given PM domain and its masters.
    230 + * __genpd_poweron - Restore power to a given PM domain and its masters.
 175 231   * @genpd: PM domain to power up.
 176 232   *
 177 233   * Restore power to @genpd and all of its masters so that it is possible to
 178 234   * resume a device belonging to it.
 179 235   */
 180     - static int __pm_genpd_poweron(struct generic_pm_domain *genpd)
    236 + static int __genpd_poweron(struct generic_pm_domain *genpd)
 181 237  {
 182 238 	struct gpd_link *link;
 183 239 	int ret = 0;
···
 187 239 	if (genpd->status == GPD_STATE_ACTIVE
 188 240 	    || (genpd->prepared_count > 0 && genpd->suspend_power_off))
 189 241 		return 0;
 190     -
 191     -	if (genpd->cpuidle_data) {
 192     -		cpuidle_pause_and_lock();
 193     -		genpd->cpuidle_data->idle_state->disabled = true;
 194     -		cpuidle_resume_and_unlock();
 195     -		goto out;
 196     -	}
 197 242
 198 243 	/*
 199 244 	 * The list is guaranteed not to change while the loop below is being
···
 196 255 	list_for_each_entry(link, &genpd->slave_links, slave_node) {
 197 256 		genpd_sd_counter_inc(link->master);
 198 257
 199     -		ret = pm_genpd_poweron(link->master);
    258 +		ret = genpd_poweron(link->master);
 200 259 		if (ret) {
 201 260 			genpd_sd_counter_dec(link->master);
 202 261 			goto err;
···
 207 266 	if (ret)
 208 267 		goto err;
 209 268
 210     -  out:
 211 269 	genpd->status = GPD_STATE_ACTIVE;
 212 270 	return 0;
···
 222 282  }
 223 283
 224 284  /**
 225     - * pm_genpd_poweron - Restore power to a given PM domain and its masters.
    285 + * genpd_poweron - Restore power to a given PM domain and its masters.
 226 286   * @genpd: PM domain to power up.
 227 287   */
 228     - int pm_genpd_poweron(struct generic_pm_domain *genpd)
    288 + static int genpd_poweron(struct generic_pm_domain *genpd)
 229 289  {
 230 290 	int ret;
 231 291
 232 292 	mutex_lock(&genpd->lock);
 233     -	ret = __pm_genpd_poweron(genpd);
    293 +	ret = __genpd_poweron(genpd);
 234 294 	mutex_unlock(&genpd->lock);
 235 295 	return ret;
 236 296  }
 237 297
 238     - /**
 239     -  * pm_genpd_name_poweron - Restore power to a given PM domain and its masters.
 240     -  * @domain_name: Name of the PM domain to power up.
 241     -  */
 242     - int pm_genpd_name_poweron(const char *domain_name)
 243     - {
 244     -	struct generic_pm_domain *genpd;
 245     -
 246     -	genpd = pm_genpd_lookup_name(domain_name);
 247     -	return genpd ? pm_genpd_poweron(genpd) : -EINVAL;
 248     - }
 249     -
 250 298  static int genpd_save_dev(struct generic_pm_domain *genpd, struct device *dev)
 251 299  {
 252     -	return GENPD_DEV_TIMED_CALLBACK(genpd, int, save_state, dev,
 253     -					save_state_latency_ns, "state save");
    300 +	return GENPD_DEV_CALLBACK(genpd, int, save_state, dev);
 254 301  }
 255 302
 256 303  static int genpd_restore_dev(struct generic_pm_domain *genpd,
 257     -			     struct device *dev, bool timed)
    304 +			     struct device *dev)
 258 305  {
 259     -	if (!timed)
 260     -		return GENPD_DEV_CALLBACK(genpd, int, restore_state, dev);
 261     -
 262     -	return GENPD_DEV_TIMED_CALLBACK(genpd, int, restore_state, dev,
 263     -					restore_state_latency_ns,
 264     -					"state restore");
    306 +	return GENPD_DEV_CALLBACK(genpd, int, restore_state, dev);
 265 307  }
 266 308
 267 309  static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
···
 287 365  }
 288 366
 289 367  /**
 290     - * pm_genpd_poweroff - Remove power from a given PM domain.
    368 + * genpd_poweroff - Remove power from a given PM domain.
 291 369   * @genpd: PM domain to power down.
    370 + * @is_async: PM domain is powered down from a scheduled work
 292 371   *
 293 372   * If all of the @genpd's devices have been suspended and all of its subdomains
 294 373   * have been powered down, remove power from @genpd.
 295 374   */
 296     - static int pm_genpd_poweroff(struct generic_pm_domain *genpd)
    375 + static int genpd_poweroff(struct generic_pm_domain *genpd, bool is_async)
 297 376  {
 298 377 	struct pm_domain_data *pdd;
 299 378 	struct gpd_link *link;
···
 326 403 			not_suspended++;
 327 404 	}
 328 405
 329     -	if (not_suspended > genpd->in_progress)
    406 +	if (not_suspended > 1 || (not_suspended == 1 && is_async))
 330 407 		return -EBUSY;
 331 408
 332 409 	if (genpd->gov && genpd->gov->power_down_ok) {
 333 410 		if (!genpd->gov->power_down_ok(&genpd->domain))
 334 411 			return -EAGAIN;
 335 412 	}
 336     -
 337     -	if (genpd->cpuidle_data) {
 338     -		/*
 339     -		 * If cpuidle_data is set, cpuidle should turn the domain off
 340     -		 * when the CPU in it is idle. In that case we don't decrement
 341     -		 * the subdomain counts of the master domains, so that power is
 342     -		 * not removed from the current domain prematurely as a result
 343     -		 * of cutting off the masters' power.
 344     -		 */
 345     -		genpd->status = GPD_STATE_POWER_OFF;
 346     -		cpuidle_pause_and_lock();
 347     -		genpd->cpuidle_data->idle_state->disabled = false;
 348     -		cpuidle_resume_and_unlock();
 349     -		return 0;
 350     -	}
 351 413
 352 414 	if (genpd->power_off) {
···
 342 434
 343 435 	/*
 344 436 	 * If sd_count > 0 at this point, one of the subdomains hasn't
 345     -	 * managed to call pm_genpd_poweron() for the master yet after
 346     -	 * incrementing it. In that case pm_genpd_poweron() will wait
    437 +	 * managed to call genpd_poweron() for the master yet after
    438 +	 * incrementing it. In that case genpd_poweron() will wait
 347 439 	 * for us to drop the lock, so we can call .power_off() and let
 348     -	 * the pm_genpd_poweron() restore power for us (this shouldn't
    440 +	 * the genpd_poweron() restore power for us (this shouldn't
 349 441 	 * happen very often).
 350 442 	 */
 351 443 	ret = genpd_power_off(genpd, true);
···
 374 466 	genpd = container_of(work, struct generic_pm_domain, power_off_work);
 375 467
 376 468 	mutex_lock(&genpd->lock);
 377     -	pm_genpd_poweroff(genpd);
    469 +	genpd_poweroff(genpd, true);
 378 470 	mutex_unlock(&genpd->lock);
 379 471  }
 380 472
···
 390 482  {
 391 483 	struct generic_pm_domain *genpd;
 392 484 	bool (*stop_ok)(struct device *__dev);
    485 +	struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
    486 +	ktime_t time_start;
    487 +	s64 elapsed_ns;
 393 488 	int ret;
 394 489
 395 490 	dev_dbg(dev, "%s()\n", __func__);
···
 405 494 	if (stop_ok && !stop_ok(dev))
 406 495 		return -EBUSY;
 407 496
    497 +	/* Measure suspend latency. */
    498 +	time_start = ktime_get();
    499 +
 408 500 	ret = genpd_save_dev(genpd, dev);
 409 501 	if (ret)
 410 502 		return ret;
 411 503
 412 504 	ret = genpd_stop_dev(genpd, dev);
 413 505 	if (ret) {
 414     -		genpd_restore_dev(genpd, dev, true);
    506 +		genpd_restore_dev(genpd, dev);
 415 507 		return ret;
    508 +	}
    509 +
    510 +	/* Update suspend latency value if the measured time exceeds it. */
    511 +	elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
    512 +	if (elapsed_ns > td->suspend_latency_ns) {
    513 +		td->suspend_latency_ns = elapsed_ns;
    514 +		dev_dbg(dev, "suspend latency exceeded, %lld ns\n",
    515 +			elapsed_ns);
    516 +		genpd->max_off_time_changed = true;
    517 +		td->constraint_changed = true;
 416 518 	}
 417 519
 418 520 	/*
···
 436 512 		return 0;
 437 513
 438 514 	mutex_lock(&genpd->lock);
 439     -	genpd->in_progress++;
 440     -	pm_genpd_poweroff(genpd);
 441     -	genpd->in_progress--;
    515 +	genpd_poweroff(genpd, false);
 442 516 	mutex_unlock(&genpd->lock);
 443 517
 444 518 	return 0;
···
 453 531  static int pm_genpd_runtime_resume(struct device *dev)
 454 532  {
 455 533 	struct generic_pm_domain *genpd;
    534 +	struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
    535 +	ktime_t time_start;
    536 +	s64 elapsed_ns;
 456 537 	int ret;
 457 538 	bool timed = true;
 458 539
···
 472 547 	}
 473 548
 474 549 	mutex_lock(&genpd->lock);
 475     -	ret = __pm_genpd_poweron(genpd);
    550 +	ret = __genpd_poweron(genpd);
 476 551 	mutex_unlock(&genpd->lock);
 477 552
 478 553 	if (ret)
 479 554 		return ret;
 480 555
 481 556   out:
 482     -	genpd_start_dev(genpd, dev, timed);
 483     -	genpd_restore_dev(genpd, dev, timed);
    557 +	/* Measure resume latency. */
    558 +	if (timed)
    559 +		time_start = ktime_get();
    560 +
    561 +	genpd_start_dev(genpd, dev);
    562 +	genpd_restore_dev(genpd, dev);
    563 +
    564 +	/* Update resume latency value if the measured time exceeds it. */
    565 +	if (timed) {
    566 +		elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
    567 +		if (elapsed_ns > td->resume_latency_ns) {
    568 +			td->resume_latency_ns = elapsed_ns;
    569 +			dev_dbg(dev, "resume latency exceeded, %lld ns\n",
    570 +				elapsed_ns);
    571 +			genpd->max_off_time_changed = true;
    572 +			td->constraint_changed = true;
    573 +		}
    574 +	}
 484 575
 485 576 	return 0;
 486 577  }
···
 510 569  __setup("pd_ignore_unused", pd_ignore_unused_setup);
 511 570
 512 571  /**
 513     - * pm_genpd_poweroff_unused - Power off all PM domains with no devices in use.
572 + * genpd_poweroff_unused - Power off all PM domains with no devices in use. 514 573 */ 515 - void pm_genpd_poweroff_unused(void) 574 + static int __init genpd_poweroff_unused(void) 516 575 { 517 576 struct generic_pm_domain *genpd; 518 577 519 578 if (pd_ignore_unused) { 520 579 pr_warn("genpd: Not disabling unused power domains\n"); 521 - return; 580 + return 0; 522 581 } 523 582 524 583 mutex_lock(&gpd_list_lock); ··· 527 586 genpd_queue_power_off_work(genpd); 528 587 529 588 mutex_unlock(&gpd_list_lock); 530 - } 531 589 532 - static int __init genpd_poweroff_unused(void) 533 - { 534 - pm_genpd_poweroff_unused(); 535 590 return 0; 536 591 } 537 592 late_initcall(genpd_poweroff_unused); ··· 701 764 702 765 /* 703 766 * The PM domain must be in the GPD_STATE_ACTIVE state at this point, 704 - * so pm_genpd_poweron() will return immediately, but if the device 767 + * so genpd_poweron() will return immediately, but if the device 705 768 * is suspended (e.g. it's been stopped by genpd_stop_dev()), we need 706 769 * to make it operational. 707 770 */ ··· 827 890 pm_genpd_sync_poweron(genpd, true); 828 891 genpd->suspended_count--; 829 892 830 - return genpd_start_dev(genpd, dev, true); 893 + return genpd_start_dev(genpd, dev); 831 894 } 832 895 833 896 /** ··· 955 1018 if (IS_ERR(genpd)) 956 1019 return -EINVAL; 957 1020 958 - return genpd->suspend_power_off ? 0 : genpd_start_dev(genpd, dev, true); 1021 + return genpd->suspend_power_off ? 1022 + 0 : genpd_start_dev(genpd, dev); 959 1023 } 960 1024 961 1025 /** ··· 1050 1112 1051 1113 pm_genpd_sync_poweron(genpd, true); 1052 1114 1053 - return genpd_start_dev(genpd, dev, true); 1115 + return genpd_start_dev(genpd, dev); 1054 1116 } 1055 1117 1056 1118 /** ··· 1255 1317 } 1256 1318 1257 1319 /** 1258 - * __pm_genpd_name_add_device - Find I/O PM domain and add a device to it. 1259 - * @domain_name: Name of the PM domain to add the device to. 1260 - * @dev: Device to be added. 
1261 - * @td: Set of PM QoS timing parameters to attach to the device. 1262 - */ 1263 - int __pm_genpd_name_add_device(const char *domain_name, struct device *dev, 1264 - struct gpd_timing_data *td) 1265 - { 1266 - return __pm_genpd_add_device(pm_genpd_lookup_name(domain_name), dev, td); 1267 - } 1268 - 1269 - /** 1270 1320 * pm_genpd_remove_device - Remove a device from an I/O PM domain. 1271 1321 * @genpd: PM domain to remove the device from. 1272 1322 * @dev: Device to be removed. ··· 1355 1429 } 1356 1430 1357 1431 /** 1358 - * pm_genpd_add_subdomain_names - Add a subdomain to an I/O PM domain. 1359 - * @master_name: Name of the master PM domain to add the subdomain to. 1360 - * @subdomain_name: Name of the subdomain to be added. 1361 - */ 1362 - int pm_genpd_add_subdomain_names(const char *master_name, 1363 - const char *subdomain_name) 1364 - { 1365 - struct generic_pm_domain *master = NULL, *subdomain = NULL, *gpd; 1366 - 1367 - if (IS_ERR_OR_NULL(master_name) || IS_ERR_OR_NULL(subdomain_name)) 1368 - return -EINVAL; 1369 - 1370 - mutex_lock(&gpd_list_lock); 1371 - list_for_each_entry(gpd, &gpd_list, gpd_list_node) { 1372 - if (!master && !strcmp(gpd->name, master_name)) 1373 - master = gpd; 1374 - 1375 - if (!subdomain && !strcmp(gpd->name, subdomain_name)) 1376 - subdomain = gpd; 1377 - 1378 - if (master && subdomain) 1379 - break; 1380 - } 1381 - mutex_unlock(&gpd_list_lock); 1382 - 1383 - return pm_genpd_add_subdomain(master, subdomain); 1384 - } 1385 - 1386 - /** 1387 1432 * pm_genpd_remove_subdomain - Remove a subdomain from an I/O PM domain. 1388 1433 * @genpd: Master PM domain to remove the subdomain from. 1389 1434 * @subdomain: Subdomain to be removed. ··· 1399 1502 mutex_unlock(&genpd->lock); 1400 1503 1401 1504 return ret; 1402 - } 1403 - 1404 - /** 1405 - * pm_genpd_attach_cpuidle - Connect the given PM domain with cpuidle. 1406 - * @genpd: PM domain to be connected with cpuidle. 1407 - * @state: cpuidle state this domain can disable/enable. 
1408 - * 1409 - * Make a PM domain behave as though it contained a CPU core, that is, instead 1410 - * of calling its power down routine it will enable the given cpuidle state so 1411 - * that the cpuidle subsystem can power it down (if possible and desirable). 1412 - */ 1413 - int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state) 1414 - { 1415 - struct cpuidle_driver *cpuidle_drv; 1416 - struct gpd_cpuidle_data *cpuidle_data; 1417 - struct cpuidle_state *idle_state; 1418 - int ret = 0; 1419 - 1420 - if (IS_ERR_OR_NULL(genpd) || state < 0) 1421 - return -EINVAL; 1422 - 1423 - mutex_lock(&genpd->lock); 1424 - 1425 - if (genpd->cpuidle_data) { 1426 - ret = -EEXIST; 1427 - goto out; 1428 - } 1429 - cpuidle_data = kzalloc(sizeof(*cpuidle_data), GFP_KERNEL); 1430 - if (!cpuidle_data) { 1431 - ret = -ENOMEM; 1432 - goto out; 1433 - } 1434 - cpuidle_drv = cpuidle_driver_ref(); 1435 - if (!cpuidle_drv) { 1436 - ret = -ENODEV; 1437 - goto err_drv; 1438 - } 1439 - if (cpuidle_drv->state_count <= state) { 1440 - ret = -EINVAL; 1441 - goto err; 1442 - } 1443 - idle_state = &cpuidle_drv->states[state]; 1444 - if (!idle_state->disabled) { 1445 - ret = -EAGAIN; 1446 - goto err; 1447 - } 1448 - cpuidle_data->idle_state = idle_state; 1449 - cpuidle_data->saved_exit_latency = idle_state->exit_latency; 1450 - genpd->cpuidle_data = cpuidle_data; 1451 - genpd_recalc_cpu_exit_latency(genpd); 1452 - 1453 - out: 1454 - mutex_unlock(&genpd->lock); 1455 - return ret; 1456 - 1457 - err: 1458 - cpuidle_driver_unref(); 1459 - 1460 - err_drv: 1461 - kfree(cpuidle_data); 1462 - goto out; 1463 - } 1464 - 1465 - /** 1466 - * pm_genpd_name_attach_cpuidle - Find PM domain and connect cpuidle to it. 1467 - * @name: Name of the domain to connect to cpuidle. 1468 - * @state: cpuidle state this domain can manipulate. 
1469 - */ 1470 - int pm_genpd_name_attach_cpuidle(const char *name, int state) 1471 - { 1472 - return pm_genpd_attach_cpuidle(pm_genpd_lookup_name(name), state); 1473 - } 1474 - 1475 - /** 1476 - * pm_genpd_detach_cpuidle - Remove the cpuidle connection from a PM domain. 1477 - * @genpd: PM domain to remove the cpuidle connection from. 1478 - * 1479 - * Remove the cpuidle connection set up by pm_genpd_attach_cpuidle() from the 1480 - * given PM domain. 1481 - */ 1482 - int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd) 1483 - { 1484 - struct gpd_cpuidle_data *cpuidle_data; 1485 - struct cpuidle_state *idle_state; 1486 - int ret = 0; 1487 - 1488 - if (IS_ERR_OR_NULL(genpd)) 1489 - return -EINVAL; 1490 - 1491 - mutex_lock(&genpd->lock); 1492 - 1493 - cpuidle_data = genpd->cpuidle_data; 1494 - if (!cpuidle_data) { 1495 - ret = -ENODEV; 1496 - goto out; 1497 - } 1498 - idle_state = cpuidle_data->idle_state; 1499 - if (!idle_state->disabled) { 1500 - ret = -EAGAIN; 1501 - goto out; 1502 - } 1503 - idle_state->exit_latency = cpuidle_data->saved_exit_latency; 1504 - cpuidle_driver_unref(); 1505 - genpd->cpuidle_data = NULL; 1506 - kfree(cpuidle_data); 1507 - 1508 - out: 1509 - mutex_unlock(&genpd->lock); 1510 - return ret; 1511 - } 1512 - 1513 - /** 1514 - * pm_genpd_name_detach_cpuidle - Find PM domain and disconnect cpuidle from it. 1515 - * @name: Name of the domain to disconnect cpuidle from. 1516 - */ 1517 - int pm_genpd_name_detach_cpuidle(const char *name) 1518 - { 1519 - return pm_genpd_detach_cpuidle(pm_genpd_lookup_name(name)); 1520 1505 } 1521 1506 1522 1507 /* Default device callbacks for generic PM domains. */ ··· 1467 1688 mutex_init(&genpd->lock); 1468 1689 genpd->gov = gov; 1469 1690 INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn); 1470 - genpd->in_progress = 0; 1471 1691 atomic_set(&genpd->sd_count, 0); 1472 1692 genpd->status = is_off ? 
GPD_STATE_POWER_OFF : GPD_STATE_ACTIVE; 1473 1693 genpd->device_count = 0; ··· 1801 2023 1802 2024 dev->pm_domain->detach = genpd_dev_pm_detach; 1803 2025 dev->pm_domain->sync = genpd_dev_pm_sync; 1804 - ret = pm_genpd_poweron(pd); 2026 + ret = genpd_poweron(pd); 1805 2027 1806 2028 out: 1807 2029 return ret ? -EPROBE_DEFER : 0;
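The diff above replaces the old GENPD_DEV_TIMED_CALLBACK bookkeeping with explicit measurement: wrap the suspend/resume path in a ktime pair and record the elapsed time only when it exceeds the previously observed worst case, flagging the governor to re-evaluate. A minimal host-side C sketch of that accounting pattern (the struct and function names here are hypothetical stand-ins, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Worst-case latency bookkeeping, modeled on the genpd change above:
 * record a measured elapsed time only when it exceeds the previously
 * observed maximum, and note that constraints must be re-evaluated. */
struct timing_data {
	int64_t suspend_latency_ns;	/* worst case observed so far */
	int constraint_changed;		/* tells a governor to recompute */
};

static void update_latency(struct timing_data *td, int64_t elapsed_ns)
{
	if (elapsed_ns > td->suspend_latency_ns) {
		td->suspend_latency_ns = elapsed_ns;
		td->constraint_changed = 1;
	}
}
```

In the kernel code, `elapsed_ns` comes from `ktime_to_ns(ktime_sub(ktime_get(), time_start))` around the save/stop (or start/restore) calls; the sketch only shows the max-update step.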
+2 -4
drivers/base/power/domain_governor.c
···
 		     dev_update_qos_constraint);
 
 	if (constraint_ns > 0) {
-		constraint_ns -= td->save_state_latency_ns +
-				td->stop_latency_ns +
-				td->start_latency_ns +
-				td->restore_state_latency_ns;
+		constraint_ns -= td->suspend_latency_ns +
+				td->resume_latency_ns;
 		if (constraint_ns == 0)
 			return false;
 	}
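The governor change above follows from the domain.c rework: with the four per-callback latencies (stop/start/save/restore) collapsed into two measured figures, the QoS headroom check subtracts only the suspend and resume latencies. A simplified host-side sketch of the predicate (the function name is hypothetical, and real kernel code carries negative remainders into further comparisons rather than rejecting them here):

```c
#include <stdint.h>

/* Simplified version of the governor's headroom test: a domain may be
 * powered off only if the device's QoS constraint still leaves time for
 * the measured suspend + resume round trip. */
static int off_allowed(int64_t constraint_ns,
		       int64_t suspend_latency_ns, int64_t resume_latency_ns)
{
	if (constraint_ns > 0) {
		constraint_ns -= suspend_latency_ns + resume_latency_ns;
		if (constraint_ns <= 0)
			return 0;	/* no headroom left: stay powered */
	}
	return 1;
}
```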
+23 -6
drivers/base/power/generic_ops.c
···
 #include <linux/pm.h>
 #include <linux/pm_runtime.h>
 #include <linux/export.h>
+#include <linux/suspend.h>
 
 #ifdef CONFIG_PM
 /**
···
 
 	if (drv && drv->pm && drv->pm->complete)
 		drv->pm->complete(dev);
-
-	/*
-	 * Let runtime PM try to suspend devices that haven't been in use before
-	 * going into the system-wide sleep state we're resuming from.
-	 */
-	pm_request_idle(dev);
 }
+
+/**
+ * pm_complete_with_resume_check - Complete a device power transition.
+ * @dev: Device to handle.
+ *
+ * Complete a device power transition during a system-wide power transition and
+ * optionally schedule a runtime resume of the device if the system resume in
+ * progress has been initiated by the platform firmware and the device had its
+ * power.direct_complete flag set.
+ */
+void pm_complete_with_resume_check(struct device *dev)
+{
+	pm_generic_complete(dev);
+	/*
+	 * If the device had been runtime-suspended before the system went into
+	 * the sleep state it is going out of and it has never been resumed till
+	 * now, resume it in case the firmware powered it up.
+	 */
+	if (dev->power.direct_complete && pm_resume_via_firmware())
+		pm_request_resume(dev);
+}
+EXPORT_SYMBOL_GPL(pm_complete_with_resume_check);
 #endif /* CONFIG_PM_SLEEP */
+30 -333
drivers/base/power/opp.c → drivers/base/power/opp/core.c
···
  * published by the Free Software Foundation.
  */
 
-#include <linux/cpu.h>
-#include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/err.h>
 #include <linux/slab.h>
 #include <linux/device.h>
-#include <linux/list.h>
-#include <linux/rculist.h>
-#include <linux/rcupdate.h>
-#include <linux/pm_opp.h>
 #include <linux/of.h>
 #include <linux/export.h>
 
-/*
- * Internal data structure organization with the OPP layer library is as
- * follows:
- * dev_opp_list (root)
- *	|- device 1 (represents voltage domain 1)
- *	|	|- opp 1 (availability, freq, voltage)
- *	|	|- opp 2 ..
- *	...	...
- *	|	`- opp n ..
- *	|- device 2 (represents the next voltage domain)
- *	...
- *	`- device m (represents mth voltage domain)
- * device 1, 2.. are represented by dev_opp structure while each opp
- * is represented by the opp structure.
- */
-
-/**
- * struct dev_pm_opp - Generic OPP description structure
- * @node:	opp list node. The nodes are maintained throughout the lifetime
- *		of boot. It is expected only an optimal set of OPPs are
- *		added to the library by the SoC framework.
- *		RCU usage: opp list is traversed with RCU locks. node
- *		modification is possible realtime, hence the modifications
- *		are protected by the dev_opp_list_lock for integrity.
- *		IMPORTANT: the opp nodes should be maintained in increasing
- *		order.
- * @dynamic:	not-created from static DT entries.
- * @available:	true/false - marks if this OPP as available or not
- * @turbo:	true if turbo (boost) OPP
- * @rate:	Frequency in hertz
- * @u_volt:	Target voltage in microvolts corresponding to this OPP
- * @u_volt_min:	Minimum voltage in microvolts corresponding to this OPP
- * @u_volt_max:	Maximum voltage in microvolts corresponding to this OPP
- * @u_amp:	Maximum current drawn by the device in microamperes
- * @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's
- *		frequency from any other OPP's frequency.
- * @dev_opp:	points back to the device_opp struct this opp belongs to
- * @rcu_head:	RCU callback head used for deferred freeing
- * @np:		OPP's device node.
- *
- * This structure stores the OPP information for a given device.
- */
-struct dev_pm_opp {
-	struct list_head node;
-
-	bool available;
-	bool dynamic;
-	bool turbo;
-	unsigned long rate;
-
-	unsigned long u_volt;
-	unsigned long u_volt_min;
-	unsigned long u_volt_max;
-	unsigned long u_amp;
-	unsigned long clock_latency_ns;
-
-	struct device_opp *dev_opp;
-	struct rcu_head rcu_head;
-
-	struct device_node *np;
-};
-
-/**
- * struct device_list_opp - devices managed by 'struct device_opp'
- * @node:	list node
- * @dev:	device to which the struct object belongs
- * @rcu_head:	RCU callback head used for deferred freeing
- *
- * This is an internal data structure maintaining the list of devices that are
- * managed by 'struct device_opp'.
- */
-struct device_list_opp {
-	struct list_head node;
-	const struct device *dev;
-	struct rcu_head rcu_head;
-};
-
-/**
- * struct device_opp - Device opp structure
- * @node:	list node - contains the devices with OPPs that
- *		have been registered. Nodes once added are not modified in this
- *		list.
- *		RCU usage: nodes are not modified in the list of device_opp,
- *		however addition is possible and is secured by dev_opp_list_lock
- * @srcu_head:	notifier head to notify the OPP availability changes.
- * @rcu_head:	RCU callback head used for deferred freeing
- * @dev_list:	list of devices that share these OPPs
- * @opp_list:	list of opps
- * @np:		struct device_node pointer for opp's DT node.
- * @shared_opp: OPP is shared between multiple devices.
- *
- * This is an internal data structure maintaining the link to opps attached to
- * a device. This structure is not meant to be shared to users as it is
- * meant for book keeping and private to OPP library.
- *
- * Because the opp structures can be used from both rcu and srcu readers, we
- * need to wait for the grace period of both of them before freeing any
- * resources. And so we have used kfree_rcu() from within call_srcu() handlers.
- */
-struct device_opp {
-	struct list_head node;
-
-	struct srcu_notifier_head srcu_head;
-	struct rcu_head rcu_head;
-	struct list_head dev_list;
-	struct list_head opp_list;
-
-	struct device_node *np;
-	unsigned long clock_latency_ns_max;
-	bool shared_opp;
-	struct dev_pm_opp *suspend_opp;
-};
+#include "opp.h"
 
 /*
  * The root of the list of all devices. All device_opp structures branch off
···
  * is a RCU protected pointer. This means that device_opp is valid as long
  * as we are under RCU lock.
  */
-static struct device_opp *_find_device_opp(struct device *dev)
+struct device_opp *_find_device_opp(struct device *dev)
 {
 	struct device_opp *dev_opp;
 
···
 		     _kfree_list_dev_rcu);
 }
 
-static struct device_list_opp *_add_list_dev(const struct device *dev,
-					     struct device_opp *dev_opp)
+struct device_list_opp *_add_list_dev(const struct device *dev,
+				      struct device_opp *dev_opp)
 {
 	struct device_list_opp *list_dev;
 
···
  * The opp is made available by default and it can be controlled using
  * dev_pm_opp_enable/disable functions and may be removed by dev_pm_opp_remove.
  *
- * NOTE: "dynamic" parameter impacts OPPs added by the of_init_opp_table and
- * freed by of_free_opp_table.
+ * NOTE: "dynamic" parameter impacts OPPs added by the dev_pm_opp_of_add_table
+ * and freed by dev_pm_opp_of_remove_table.
  *
  * Locking: The internal device_opp and opp structures are RCU protected.
  * Hence this function internally uses RCU updater strategy with mutex locks
···
 
 #ifdef CONFIG_OF
 /**
- * of_free_opp_table() - Free OPP table entries created from static DT entries
+ * dev_pm_opp_of_remove_table() - Free OPP table entries created from static DT
+ *				  entries
  * @dev: device pointer used to lookup device OPPs.
  *
  * Free OPPs created using static entries present in DT.
···
  * that this function is *NOT* called under RCU protection or in contexts where
  * mutex cannot be locked.
  */
-void of_free_opp_table(struct device *dev)
+void dev_pm_opp_of_remove_table(struct device *dev)
 {
 	struct device_opp *dev_opp;
 	struct dev_pm_opp *opp, *tmp;
···
 unlock:
 	mutex_unlock(&dev_opp_list_lock);
 }
-EXPORT_SYMBOL_GPL(of_free_opp_table);
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);
 
-void of_cpumask_free_opp_table(cpumask_var_t cpumask)
+/* Returns opp descriptor node for a device, caller must do of_node_put() */
+struct device_node *_of_get_opp_desc_node(struct device *dev)
 {
-	struct device *cpu_dev;
-	int cpu;
-
-	WARN_ON(cpumask_empty(cpumask));
-
-	for_each_cpu(cpu, cpumask) {
-		cpu_dev = get_cpu_device(cpu);
-		if (!cpu_dev) {
-			pr_err("%s: failed to get cpu%d device\n", __func__,
-			       cpu);
-			continue;
-		}
-
-		of_free_opp_table(cpu_dev);
-	}
-}
-EXPORT_SYMBOL_GPL(of_cpumask_free_opp_table);
-
-/* Returns opp descriptor node from its phandle. Caller must do of_node_put() */
-static struct device_node *
-_of_get_opp_desc_node_from_prop(struct device *dev, const struct property *prop)
-{
-	struct device_node *opp_np;
-
-	opp_np = of_find_node_by_phandle(be32_to_cpup(prop->value));
-	if (!opp_np) {
-		dev_err(dev, "%s: Prop: %s contains invalid opp desc phandle\n",
-			__func__, prop->name);
-		return ERR_PTR(-EINVAL);
-	}
-
-	return opp_np;
-}
-
-/* Returns opp descriptor node for a device. Caller must do of_node_put() */
-static struct device_node *_of_get_opp_desc_node(struct device *dev)
-{
-	const struct property *prop;
-
-	prop = of_find_property(dev->of_node, "operating-points-v2", NULL);
-	if (!prop)
-		return ERR_PTR(-ENODEV);
-	if (!prop->value)
-		return ERR_PTR(-ENODATA);
-
 	/*
 	 * TODO: Support for multiple OPP tables.
 	 *
 	 * There should be only ONE phandle present in "operating-points-v2"
 	 * property.
 	 */
-	if (prop->length != sizeof(__be32)) {
-		dev_err(dev, "%s: Invalid opp desc phandle\n", __func__);
-		return ERR_PTR(-EINVAL);
-	}
 
-	return _of_get_opp_desc_node_from_prop(dev, prop);
+	return of_parse_phandle(dev->of_node, "operating-points-v2", 0);
 }
 
 /* Initializes OPP tables based on new bindings */
-static int _of_init_opp_table_v2(struct device *dev,
-				 const struct property *prop)
+static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
 {
-	struct device_node *opp_np, *np;
+	struct device_node *np;
 	struct device_opp *dev_opp;
 	int ret = 0, count = 0;
-
-	if (!prop->value)
-		return -ENODATA;
-
-	/* Get opp node */
-	opp_np = _of_get_opp_desc_node_from_prop(dev, prop);
-	if (IS_ERR(opp_np))
-		return PTR_ERR(opp_np);
 
 	dev_opp = _managed_opp(opp_np);
 	if (dev_opp) {
 		/* OPPs are already managed */
 		if (!_add_list_dev(dev, dev_opp))
 			ret = -ENOMEM;
-		goto put_opp_np;
+		return ret;
 	}
 
 	/* We have opp-list node now, iterate over it and add OPPs */
···
 	}
 
 	/* There should be one of more OPP defined */
-	if (WARN_ON(!count)) {
-		ret = -ENOENT;
-		goto put_opp_np;
-	}
+	if (WARN_ON(!count))
+		return -ENOENT;
 
 	dev_opp = _find_device_opp(dev);
 	if (WARN_ON(IS_ERR(dev_opp))) {
···
 	dev_opp->np = opp_np;
 	dev_opp->shared_opp = of_property_read_bool(opp_np, "opp-shared");
 
-	of_node_put(opp_np);
 	return 0;
 
 free_table:
-	of_free_opp_table(dev);
-put_opp_np:
-	of_node_put(opp_np);
+	dev_pm_opp_of_remove_table(dev);
 
 	return ret;
 }
 
 /* Initializes OPP tables based on old-deprecated bindings */
-static int _of_init_opp_table_v1(struct device *dev)
+static int _of_add_opp_table_v1(struct device *dev)
 {
 	const struct property *prop;
 	const __be32 *val;
···
 }
 
 /**
- * of_init_opp_table() - Initialize opp table from device tree
+ * dev_pm_opp_of_add_table() - Initialize opp table from device tree
  * @dev: device pointer used to lookup device OPPs.
  *
  * Register the initial OPP table with the OPP library for given device.
···
  * -ENODATA	when empty 'operating-points' property is found
  * -EINVAL	when invalid entries are found in opp-v2 table
  */
-int of_init_opp_table(struct device *dev)
+int dev_pm_opp_of_add_table(struct device *dev)
 {
-	const struct property *prop;
+	struct device_node *opp_np;
+	int ret;
 
 	/*
 	 * OPPs have two version of bindings now. The older one is deprecated,
 	 * try for the new binding first.
 	 */
-	prop = of_find_property(dev->of_node, "operating-points-v2", NULL);
-	if (!prop) {
+	opp_np = _of_get_opp_desc_node(dev);
+	if (!opp_np) {
 		/*
 		 * Try old-deprecated bindings for backward compatibility with
 		 * older dtbs.
 		 */
-		return _of_init_opp_table_v1(dev);
+		return _of_add_opp_table_v1(dev);
 	}
 
-	return _of_init_opp_table_v2(dev, prop);
-}
-EXPORT_SYMBOL_GPL(of_init_opp_table);
-
-int of_cpumask_init_opp_table(cpumask_var_t cpumask)
-{
-	struct device *cpu_dev;
-	int cpu, ret = 0;
-
-	WARN_ON(cpumask_empty(cpumask));
-
-	for_each_cpu(cpu, cpumask) {
-		cpu_dev = get_cpu_device(cpu);
-		if (!cpu_dev) {
-			pr_err("%s: failed to get cpu%d device\n", __func__,
-			       cpu);
-			continue;
-		}
-
-		ret = of_init_opp_table(cpu_dev);
-		if (ret) {
-			pr_err("%s: couldn't find opp table for cpu:%d, %d\n",
-			       __func__, cpu, ret);
-
-			/* Free all other OPPs */
-			of_cpumask_free_opp_table(cpumask);
-			break;
-		}
-	}
+	ret = _of_add_opp_table_v2(dev, opp_np);
+	of_node_put(opp_np);
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(of_cpumask_init_opp_table);
-
-/* Required only for V1 bindings, as v2 can manage it from DT itself */
-int set_cpus_sharing_opps(struct device *cpu_dev, cpumask_var_t cpumask)
-{
-	struct device_list_opp *list_dev;
-	struct device_opp *dev_opp;
-	struct device *dev;
-	int cpu, ret = 0;
-
-	rcu_read_lock();
-
-	dev_opp = _find_device_opp(cpu_dev);
-	if (IS_ERR(dev_opp)) {
-		ret = -EINVAL;
-		goto out_rcu_read_unlock;
-	}
-
-	for_each_cpu(cpu, cpumask) {
-		if (cpu == cpu_dev->id)
-			continue;
-
-		dev = get_cpu_device(cpu);
-		if (!dev) {
-			dev_err(cpu_dev, "%s: failed to get cpu%d device\n",
-				__func__, cpu);
-			continue;
-		}
-
-		list_dev = _add_list_dev(dev, dev_opp);
-		if (!list_dev) {
-			dev_err(dev, "%s: failed to add list-dev for cpu%d device\n",
-				__func__, cpu);
-			continue;
-		}
-	}
-out_rcu_read_unlock:
-	rcu_read_unlock();
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(set_cpus_sharing_opps);
-
-/*
- * Works only for OPP v2 bindings.
- *
- * cpumask should be already set to mask of cpu_dev->id.
- * Returns -ENOENT if operating-points-v2 bindings aren't supported.
- */
-int of_get_cpus_sharing_opps(struct device *cpu_dev, cpumask_var_t cpumask)
-{
-	struct device_node *np, *tmp_np;
-	struct device *tcpu_dev;
-	int cpu, ret = 0;
-
-	/* Get OPP descriptor node */
-	np = _of_get_opp_desc_node(cpu_dev);
-	if (IS_ERR(np)) {
-		dev_dbg(cpu_dev, "%s: Couldn't find opp node: %ld\n", __func__,
-			PTR_ERR(np));
-		return -ENOENT;
-	}
-
-	/* OPPs are shared ? */
-	if (!of_property_read_bool(np, "opp-shared"))
-		goto put_cpu_node;
-
-	for_each_possible_cpu(cpu) {
-		if (cpu == cpu_dev->id)
-			continue;
-
-		tcpu_dev = get_cpu_device(cpu);
-		if (!tcpu_dev) {
-			dev_err(cpu_dev, "%s: failed to get cpu%d device\n",
-				__func__, cpu);
-			ret = -ENODEV;
-			goto put_cpu_node;
-		}
-
-		/* Get OPP descriptor node */
-		tmp_np = _of_get_opp_desc_node(tcpu_dev);
-		if (IS_ERR(tmp_np)) {
-			dev_err(tcpu_dev, "%s: Couldn't find opp node: %ld\n",
-				__func__, PTR_ERR(tmp_np));
-			ret = PTR_ERR(tmp_np);
-			goto put_cpu_node;
-		}
-
-		/* CPUs are sharing opp node */
-		if (np == tmp_np)
-			cpumask_set_cpu(cpu, cpumask);
-
-		of_node_put(tmp_np);
-	}
-
-put_cpu_node:
-	of_node_put(np);
-	return ret;
-}
-EXPORT_SYMBOL_GPL(of_get_cpus_sharing_opps);
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_add_table);
 #endif
+2
drivers/base/power/opp/Makefile
···
+ccflags-$(CONFIG_DEBUG_DRIVER)	:= -DDEBUG
+obj-y				+= core.o cpu.o
+267
drivers/base/power/opp/cpu.c
··· 1 + /* 2 + * Generic OPP helper interface for CPU device 3 + * 4 + * Copyright (C) 2009-2014 Texas Instruments Incorporated. 5 + * Nishanth Menon 6 + * Romit Dasgupta 7 + * Kevin Hilman 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of the GNU General Public License version 2 as 11 + * published by the Free Software Foundation. 12 + */ 13 + #include <linux/cpu.h> 14 + #include <linux/cpufreq.h> 15 + #include <linux/err.h> 16 + #include <linux/errno.h> 17 + #include <linux/export.h> 18 + #include <linux/of.h> 19 + #include <linux/slab.h> 20 + 21 + #include "opp.h" 22 + 23 + #ifdef CONFIG_CPU_FREQ 24 + 25 + /** 26 + * dev_pm_opp_init_cpufreq_table() - create a cpufreq table for a device 27 + * @dev: device for which we do this operation 28 + * @table: Cpufreq table returned back to caller 29 + * 30 + * Generate a cpufreq table for a provided device- this assumes that the 31 + * opp list is already initialized and ready for usage. 32 + * 33 + * This function allocates required memory for the cpufreq table. It is 34 + * expected that the caller does the required maintenance such as freeing 35 + * the table as required. 36 + * 37 + * Returns -EINVAL for bad pointers, -ENODEV if the device is not found, -ENOMEM 38 + * if no memory available for the operation (table is not populated), returns 0 39 + * if successful and table is populated. 40 + * 41 + * WARNING: It is important for the callers to ensure refreshing their copy of 42 + * the table if any of the mentioned functions have been invoked in the interim. 43 + * 44 + * Locking: The internal device_opp and opp structures are RCU protected. 45 + * Since we just use the regular accessor functions to access the internal data 46 + * structures, we use RCU read lock inside this function. As a result, users of 47 + * this function DONOT need to use explicit locks for invoking. 
48 + */ 49 + int dev_pm_opp_init_cpufreq_table(struct device *dev, 50 + struct cpufreq_frequency_table **table) 51 + { 52 + struct dev_pm_opp *opp; 53 + struct cpufreq_frequency_table *freq_table = NULL; 54 + int i, max_opps, ret = 0; 55 + unsigned long rate; 56 + 57 + rcu_read_lock(); 58 + 59 + max_opps = dev_pm_opp_get_opp_count(dev); 60 + if (max_opps <= 0) { 61 + ret = max_opps ? max_opps : -ENODATA; 62 + goto out; 63 + } 64 + 65 + freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_ATOMIC); 66 + if (!freq_table) { 67 + ret = -ENOMEM; 68 + goto out; 69 + } 70 + 71 + for (i = 0, rate = 0; i < max_opps; i++, rate++) { 72 + /* find next rate */ 73 + opp = dev_pm_opp_find_freq_ceil(dev, &rate); 74 + if (IS_ERR(opp)) { 75 + ret = PTR_ERR(opp); 76 + goto out; 77 + } 78 + freq_table[i].driver_data = i; 79 + freq_table[i].frequency = rate / 1000; 80 + 81 + /* Is Boost/turbo opp ? */ 82 + if (dev_pm_opp_is_turbo(opp)) 83 + freq_table[i].flags = CPUFREQ_BOOST_FREQ; 84 + } 85 + 86 + freq_table[i].driver_data = i; 87 + freq_table[i].frequency = CPUFREQ_TABLE_END; 88 + 89 + *table = &freq_table[0]; 90 + 91 + out: 92 + rcu_read_unlock(); 93 + if (ret) 94 + kfree(freq_table); 95 + 96 + return ret; 97 + } 98 + EXPORT_SYMBOL_GPL(dev_pm_opp_init_cpufreq_table); 99 + 100 + /** 101 + * dev_pm_opp_free_cpufreq_table() - free the cpufreq table 102 + * @dev: device for which we do this operation 103 + * @table: table to free 104 + * 105 + * Free up the table allocated by dev_pm_opp_init_cpufreq_table 106 + */ 107 + void dev_pm_opp_free_cpufreq_table(struct device *dev, 108 + struct cpufreq_frequency_table **table) 109 + { 110 + if (!table) 111 + return; 112 + 113 + kfree(*table); 114 + *table = NULL; 115 + } 116 + EXPORT_SYMBOL_GPL(dev_pm_opp_free_cpufreq_table); 117 + #endif /* CONFIG_CPU_FREQ */ 118 + 119 + /* Required only for V1 bindings, as v2 can manage it from DT itself */ 120 + int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask) 121 + { 
122 + struct device_list_opp *list_dev; 123 + struct device_opp *dev_opp; 124 + struct device *dev; 125 + int cpu, ret = 0; 126 + 127 + rcu_read_lock(); 128 + 129 + dev_opp = _find_device_opp(cpu_dev); 130 + if (IS_ERR(dev_opp)) { 131 + ret = -EINVAL; 132 + goto out_rcu_read_unlock; 133 + } 134 + 135 + for_each_cpu(cpu, cpumask) { 136 + if (cpu == cpu_dev->id) 137 + continue; 138 + 139 + dev = get_cpu_device(cpu); 140 + if (!dev) { 141 + dev_err(cpu_dev, "%s: failed to get cpu%d device\n", 142 + __func__, cpu); 143 + continue; 144 + } 145 + 146 + list_dev = _add_list_dev(dev, dev_opp); 147 + if (!list_dev) { 148 + dev_err(dev, "%s: failed to add list-dev for cpu%d device\n", 149 + __func__, cpu); 150 + continue; 151 + } 152 + } 153 + out_rcu_read_unlock: 154 + rcu_read_unlock(); 155 + 156 + return ret; 157 + } 158 + EXPORT_SYMBOL_GPL(dev_pm_opp_set_sharing_cpus); 159 + 160 + #ifdef CONFIG_OF 161 + void dev_pm_opp_of_cpumask_remove_table(cpumask_var_t cpumask) 162 + { 163 + struct device *cpu_dev; 164 + int cpu; 165 + 166 + WARN_ON(cpumask_empty(cpumask)); 167 + 168 + for_each_cpu(cpu, cpumask) { 169 + cpu_dev = get_cpu_device(cpu); 170 + if (!cpu_dev) { 171 + pr_err("%s: failed to get cpu%d device\n", __func__, 172 + cpu); 173 + continue; 174 + } 175 + 176 + dev_pm_opp_of_remove_table(cpu_dev); 177 + } 178 + } 179 + EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_remove_table); 180 + 181 + int dev_pm_opp_of_cpumask_add_table(cpumask_var_t cpumask) 182 + { 183 + struct device *cpu_dev; 184 + int cpu, ret = 0; 185 + 186 + WARN_ON(cpumask_empty(cpumask)); 187 + 188 + for_each_cpu(cpu, cpumask) { 189 + cpu_dev = get_cpu_device(cpu); 190 + if (!cpu_dev) { 191 + pr_err("%s: failed to get cpu%d device\n", __func__, 192 + cpu); 193 + continue; 194 + } 195 + 196 + ret = dev_pm_opp_of_add_table(cpu_dev); 197 + if (ret) { 198 + pr_err("%s: couldn't find opp table for cpu:%d, %d\n", 199 + __func__, cpu, ret); 200 + 201 + /* Free all other OPPs */ 202 + 
dev_pm_opp_of_cpumask_remove_table(cpumask); 203 + break; 204 + } 205 + } 206 + 207 + return ret; 208 + } 209 + EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_add_table); 210 + 211 + /* 212 + * Works only for OPP v2 bindings. 213 + * 214 + * cpumask should be already set to mask of cpu_dev->id. 215 + * Returns -ENOENT if operating-points-v2 bindings aren't supported. 216 + */ 217 + int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask) 218 + { 219 + struct device_node *np, *tmp_np; 220 + struct device *tcpu_dev; 221 + int cpu, ret = 0; 222 + 223 + /* Get OPP descriptor node */ 224 + np = _of_get_opp_desc_node(cpu_dev); 225 + if (!np) { 226 + dev_dbg(cpu_dev, "%s: Couldn't find cpu_dev node.\n", __func__); 227 + return -ENOENT; 228 + } 229 + 230 + /* OPPs are shared ? */ 231 + if (!of_property_read_bool(np, "opp-shared")) 232 + goto put_cpu_node; 233 + 234 + for_each_possible_cpu(cpu) { 235 + if (cpu == cpu_dev->id) 236 + continue; 237 + 238 + tcpu_dev = get_cpu_device(cpu); 239 + if (!tcpu_dev) { 240 + dev_err(cpu_dev, "%s: failed to get cpu%d device\n", 241 + __func__, cpu); 242 + ret = -ENODEV; 243 + goto put_cpu_node; 244 + } 245 + 246 + /* Get OPP descriptor node */ 247 + tmp_np = _of_get_opp_desc_node(tcpu_dev); 248 + if (!tmp_np) { 249 + dev_err(tcpu_dev, "%s: Couldn't find tcpu_dev node.\n", 250 + __func__); 251 + ret = -ENOENT; 252 + goto put_cpu_node; 253 + } 254 + 255 + /* CPUs are sharing opp node */ 256 + if (np == tmp_np) 257 + cpumask_set_cpu(cpu, cpumask); 258 + 259 + of_node_put(tmp_np); 260 + } 261 + 262 + put_cpu_node: 263 + of_node_put(np); 264 + return ret; 265 + } 266 + EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_sharing_cpus); 267 + #endif
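The table-building loop in dev_pm_opp_init_cpufreq_table() above amounts to filling a sentinel-terminated array from an ascending OPP list. A minimal userspace sketch of that shape (hypothetical names; it indexes a pre-sorted rate array directly instead of walking with dev_pm_opp_find_freq_ceil(), and FREQ_TABLE_END stands in for CPUFREQ_TABLE_END):

```c
#include <assert.h>
#include <stdlib.h>

#define FREQ_TABLE_END (~0u)   /* stands in for CPUFREQ_TABLE_END */

struct freq_entry {
	unsigned int driver_data;
	unsigned int frequency;    /* kHz */
};

/*
 * Build a sentinel-terminated frequency table from an ascending list of
 * OPP rates in Hz. The +1 slot mirrors the kcalloc(max_opps + 1, ...)
 * in the patch, which reserves room for the end marker.
 */
static struct freq_entry *build_freq_table(const unsigned long *rates_hz,
					   int nr_opps)
{
	struct freq_entry *t;
	int i;

	if (nr_opps <= 0)
		return NULL;

	t = calloc(nr_opps + 1, sizeof(*t));    /* +1 for the sentinel */
	if (!t)
		return NULL;

	for (i = 0; i < nr_opps; i++) {
		t[i].driver_data = i;
		t[i].frequency = rates_hz[i] / 1000;    /* Hz -> kHz */
	}
	t[i].driver_data = i;
	t[i].frequency = FREQ_TABLE_END;
	return t;
}
```

Callers free the returned table themselves, which is exactly the maintenance contract the kerneldoc above spells out.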
+143
drivers/base/power/opp/opp.h
··· 1 + /* 2 + * Generic OPP Interface 3 + * 4 + * Copyright (C) 2009-2010 Texas Instruments Incorporated. 5 + * Nishanth Menon 6 + * Romit Dasgupta 7 + * Kevin Hilman 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of the GNU General Public License version 2 as 11 + * published by the Free Software Foundation. 12 + */ 13 + 14 + #ifndef __DRIVER_OPP_H__ 15 + #define __DRIVER_OPP_H__ 16 + 17 + #include <linux/device.h> 18 + #include <linux/kernel.h> 19 + #include <linux/list.h> 20 + #include <linux/pm_opp.h> 21 + #include <linux/rculist.h> 22 + #include <linux/rcupdate.h> 23 + 24 + /* 25 + * Internal data structure organization with the OPP layer library is as 26 + * follows: 27 + * dev_opp_list (root) 28 + * |- device 1 (represents voltage domain 1) 29 + * | |- opp 1 (availability, freq, voltage) 30 + * | |- opp 2 .. 31 + * ... ... 32 + * | `- opp n .. 33 + * |- device 2 (represents the next voltage domain) 34 + * ... 35 + * `- device m (represents mth voltage domain) 36 + * device 1, 2.. are represented by dev_opp structure while each opp 37 + * is represented by the opp structure. 38 + */ 39 + 40 + /** 41 + * struct dev_pm_opp - Generic OPP description structure 42 + * @node: opp list node. The nodes are maintained throughout the lifetime 43 + * of boot. It is expected only an optimal set of OPPs are 44 + * added to the library by the SoC framework. 45 + * RCU usage: opp list is traversed with RCU locks. node 46 + * modification is possible realtime, hence the modifications 47 + * are protected by the dev_opp_list_lock for integrity. 48 + * IMPORTANT: the opp nodes should be maintained in increasing 49 + * order. 50 + * @dynamic: not-created from static DT entries. 
51 + * @available: true/false - marks whether this OPP is available 52 + * @turbo: true if turbo (boost) OPP 53 + * @rate: Frequency in hertz 54 + * @u_volt: Target voltage in microvolts corresponding to this OPP 55 + * @u_volt_min: Minimum voltage in microvolts corresponding to this OPP 56 + * @u_volt_max: Maximum voltage in microvolts corresponding to this OPP 57 + * @u_amp: Maximum current drawn by the device in microamperes 58 + * @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's 59 + * frequency from any other OPP's frequency. 60 + * @dev_opp: points back to the device_opp struct this opp belongs to 61 + * @rcu_head: RCU callback head used for deferred freeing 62 + * @np: OPP's device node. 63 + * 64 + * This structure stores the OPP information for a given device. 65 + */ 66 + struct dev_pm_opp { 67 + struct list_head node; 68 + 69 + bool available; 70 + bool dynamic; 71 + bool turbo; 72 + unsigned long rate; 73 + 74 + unsigned long u_volt; 75 + unsigned long u_volt_min; 76 + unsigned long u_volt_max; 77 + unsigned long u_amp; 78 + unsigned long clock_latency_ns; 79 + 80 + struct device_opp *dev_opp; 81 + struct rcu_head rcu_head; 82 + 83 + struct device_node *np; 84 + }; 85 + 86 + /** 87 + * struct device_list_opp - devices managed by 'struct device_opp' 88 + * @node: list node 89 + * @dev: device to which the struct object belongs 90 + * @rcu_head: RCU callback head used for deferred freeing 91 + * 92 + * This is an internal data structure maintaining the list of devices that are 93 + * managed by 'struct device_opp'. 94 + */ 95 + struct device_list_opp { 96 + struct list_head node; 97 + const struct device *dev; 98 + struct rcu_head rcu_head; 99 + }; 100 + 101 + /** 102 + * struct device_opp - Device opp structure 103 + * @node: list node - contains the devices with OPPs that 104 + * have been registered. Nodes once added are not modified in this 105 + * list. 
106 + * RCU usage: nodes are not modified in the list of device_opp, 107 + * however addition is possible and is secured by dev_opp_list_lock 108 + * @srcu_head: notifier head to notify of OPP availability changes. 109 + * @rcu_head: RCU callback head used for deferred freeing 110 + * @dev_list: list of devices that share these OPPs 111 + * @opp_list: list of opps 112 + * @np: struct device_node pointer for opp's DT node. 113 + * @shared_opp: OPP is shared between multiple devices. 114 + * 115 + * This is an internal data structure maintaining the link to opps attached to 116 + * a device. This structure is not meant to be shared with users, as it is 117 + * for internal bookkeeping, private to the OPP library. 118 + * 119 + * Because the opp structures can be used from both rcu and srcu readers, we 120 + * need to wait for the grace period of both of them before freeing any 121 + * resources. And so we have used kfree_rcu() from within call_srcu() handlers. 122 + */ 123 + struct device_opp { 124 + struct list_head node; 125 + 126 + struct srcu_notifier_head srcu_head; 127 + struct rcu_head rcu_head; 128 + struct list_head dev_list; 129 + struct list_head opp_list; 130 + 131 + struct device_node *np; 132 + unsigned long clock_latency_ns_max; 133 + bool shared_opp; 134 + struct dev_pm_opp *suspend_opp; 135 + }; 136 + 137 + /* Routines internal to opp core */ 138 + struct device_opp *_find_device_opp(struct device *dev); 139 + struct device_list_opp *_add_list_dev(const struct device *dev, 140 + struct device_opp *dev_opp); 141 + struct device_node *_of_get_opp_desc_node(struct device *dev); 142 + 143 + #endif /* __DRIVER_OPP_H__ */
+14 -2
drivers/base/power/wakeup.c
··· 25 25 */ 26 26 bool events_check_enabled __read_mostly; 27 27 28 + /* First wakeup IRQ seen by the kernel in the last cycle. */ 29 + unsigned int pm_wakeup_irq __read_mostly; 30 + 28 31 /* If set and the system is suspending, terminate the suspend. */ 29 32 static bool pm_abort_suspend __read_mostly; 30 33 ··· 94 91 if (!ws) 95 92 return NULL; 96 93 97 - wakeup_source_prepare(ws, name ? kstrdup(name, GFP_KERNEL) : NULL); 94 + wakeup_source_prepare(ws, name ? kstrdup_const(name, GFP_KERNEL) : NULL); 98 95 return ws; 99 96 } 100 97 EXPORT_SYMBOL_GPL(wakeup_source_create); ··· 157 154 158 155 wakeup_source_drop(ws); 159 156 wakeup_source_record(ws); 160 - kfree(ws->name); 157 + kfree_const(ws->name); 161 158 kfree(ws); 162 159 } 163 160 EXPORT_SYMBOL_GPL(wakeup_source_destroy); ··· 871 868 void pm_wakeup_clear(void) 872 869 { 873 870 pm_abort_suspend = false; 871 + pm_wakeup_irq = 0; 872 + } 873 + 874 + void pm_system_irq_wakeup(unsigned int irq_number) 875 + { 876 + if (pm_wakeup_irq == 0) { 877 + pm_wakeup_irq = irq_number; 878 + pm_system_wakeup(); 879 + } 874 880 } 875 881 876 882 /**
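The wakeup.c hunk above introduces pm_wakeup_irq, a latch that records only the first wakeup interrupt of a suspend cycle and is cleared by pm_wakeup_clear(). The latch logic can be sketched in plain C (hypothetical names; the counter stands in for the pm_system_wakeup() side effect):

```c
#include <assert.h>

/*
 * Latch only the first wakeup IRQ per suspend cycle, as
 * pm_system_irq_wakeup() does. Subsequent IRQs in the same cycle are
 * ignored until the analogue of pm_wakeup_clear() resets the latch.
 */
static unsigned int wakeup_irq;    /* 0 means "no wakeup IRQ seen yet" */
static int wakeups;                /* counts wakeup notifications */

static void system_irq_wakeup(unsigned int irq)
{
	if (wakeup_irq == 0) {
		wakeup_irq = irq;
		wakeups++;    /* stands in for pm_system_wakeup() */
	}
}

static void wakeup_clear(void)
{
	wakeup_irq = 0;
}
```

A driver can then read the latched value after resume to tell which interrupt ended the sleep, which is the point of exporting pm_wakeup_irq.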
+76 -12
drivers/base/property.c
··· 134 134 if (is_of_node(fwnode)) 135 135 return of_property_read_bool(to_of_node(fwnode), propname); 136 136 else if (is_acpi_node(fwnode)) 137 - return !acpi_dev_prop_get(to_acpi_node(fwnode), propname, NULL); 137 + return !acpi_node_prop_get(fwnode, propname, NULL); 138 138 139 139 return !!pset_prop_get(to_pset(fwnode), propname); 140 140 } ··· 287 287 } 288 288 EXPORT_SYMBOL_GPL(device_property_read_string); 289 289 290 + /** 291 + * device_property_match_string - find a string in an array and return index 292 + * @dev: Device to get the property of 293 + * @propname: Name of the property holding the array 294 + * @string: String to look for 295 + * 296 + * Find a given string in a string array and if it is found return the 297 + * index back. 298 + * 299 + * Return: %0 if the property was found (success), 300 + * %-EINVAL if given arguments are not valid, 301 + * %-ENODATA if the property does not have a value, 302 + * %-EPROTO if the property is not an array of strings, 303 + * %-ENXIO if no suitable firmware interface is present. 304 + */ 305 + int device_property_match_string(struct device *dev, const char *propname, 306 + const char *string) 307 + { 308 + return fwnode_property_match_string(dev_fwnode(dev), propname, string); 309 + } 310 + EXPORT_SYMBOL_GPL(device_property_match_string); 311 + 290 312 #define OF_DEV_PROP_READ_ARRAY(node, propname, type, val, nval) \ 291 313 (val) ? 
of_property_read_##type##_array((node), (propname), (val), (nval)) \ 292 314 : of_property_count_elems_of_size((node), (propname), sizeof(type)) ··· 320 298 _ret_ = OF_DEV_PROP_READ_ARRAY(to_of_node(_fwnode_), _propname_, \ 321 299 _type_, _val_, _nval_); \ 322 300 else if (is_acpi_node(_fwnode_)) \ 323 - _ret_ = acpi_dev_prop_read(to_acpi_node(_fwnode_), _propname_, \ 324 - _proptype_, _val_, _nval_); \ 301 + _ret_ = acpi_node_prop_read(_fwnode_, _propname_, _proptype_, \ 302 + _val_, _nval_); \ 325 303 else if (is_pset(_fwnode_)) \ 326 304 _ret_ = pset_prop_read_array(to_pset(_fwnode_), _propname_, \ 327 305 _proptype_, _val_, _nval_); \ ··· 462 440 propname, val, nval) : 463 441 of_property_count_strings(to_of_node(fwnode), propname); 464 442 else if (is_acpi_node(fwnode)) 465 - return acpi_dev_prop_read(to_acpi_node(fwnode), propname, 466 - DEV_PROP_STRING, val, nval); 443 + return acpi_node_prop_read(fwnode, propname, DEV_PROP_STRING, 444 + val, nval); 467 445 else if (is_pset(fwnode)) 468 446 return pset_prop_read_array(to_pset(fwnode), propname, 469 447 DEV_PROP_STRING, val, nval); ··· 492 470 if (is_of_node(fwnode)) 493 471 return of_property_read_string(to_of_node(fwnode), propname, val); 494 472 else if (is_acpi_node(fwnode)) 495 - return acpi_dev_prop_read(to_acpi_node(fwnode), propname, 496 - DEV_PROP_STRING, val, 1); 473 + return acpi_node_prop_read(fwnode, propname, DEV_PROP_STRING, 474 + val, 1); 497 475 498 476 return pset_prop_read_array(to_pset(fwnode), propname, 499 477 DEV_PROP_STRING, val, 1); 500 478 } 501 479 EXPORT_SYMBOL_GPL(fwnode_property_read_string); 480 + 481 + /** 482 + * fwnode_property_match_string - find a string in an array and return index 483 + * @fwnode: Firmware node to get the property of 484 + * @propname: Name of the property holding the array 485 + * @string: String to look for 486 + * 487 + * Find a given string in a string array and if it is found return the 488 + * index back. 
489 + * 490 + * Return: %0 if the property was found (success), 491 + * %-EINVAL if given arguments are not valid, 492 + * %-ENODATA if the property does not have a value, 493 + * %-EPROTO if the property is not an array of strings, 494 + * %-ENXIO if no suitable firmware interface is present. 495 + */ 496 + int fwnode_property_match_string(struct fwnode_handle *fwnode, 497 + const char *propname, const char *string) 498 + { 499 + const char **values; 500 + int nval, ret, i; 501 + 502 + nval = fwnode_property_read_string_array(fwnode, propname, NULL, 0); 503 + if (nval < 0) 504 + return nval; 505 + 506 + values = kcalloc(nval, sizeof(*values), GFP_KERNEL); 507 + if (!values) 508 + return -ENOMEM; 509 + 510 + ret = fwnode_property_read_string_array(fwnode, propname, values, nval); 511 + if (ret < 0) 512 + goto out; 513 + 514 + ret = -ENODATA; 515 + for (i = 0; i < nval; i++) { 516 + if (!strcmp(values[i], string)) { 517 + ret = i; 518 + break; 519 + } 520 + } 521 + out: 522 + kfree(values); 523 + return ret; 524 + } 525 + EXPORT_SYMBOL_GPL(fwnode_property_match_string); 502 526 503 527 /** 504 528 * device_get_next_child_node - Return the next child node handle for a device ··· 561 493 if (node) 562 494 return &node->fwnode; 563 495 } else if (IS_ENABLED(CONFIG_ACPI)) { 564 - struct acpi_device *node; 565 - 566 - node = acpi_get_next_child(dev, to_acpi_node(child)); 567 - if (node) 568 - return acpi_fwnode_handle(node); 496 + return acpi_get_next_subnode(dev, child); 569 497 } 570 498 return NULL; 571 499 }
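After reading the string array, fwnode_property_match_string() above does a plain linear scan and returns the matching index. The core of that scan, as a standalone sketch (hypothetical name; -1 stands in for the -ENODATA "not found" return):

```c
#include <assert.h>
#include <string.h>

/*
 * Return the index of `s` in `values[0..n)`, or -1 if it is absent:
 * the same strcmp() loop fwnode_property_match_string() runs over the
 * array it read with fwnode_property_read_string_array().
 */
static int match_string_index(const char *const *values, int n,
			      const char *s)
{
	int i;

	for (i = 0; i < n; i++)
		if (!strcmp(values[i], s))
			return i;
	return -1;
}
```

The two-pass structure in the kernel function (count first, then read, then scan) exists because the array length is not known until the firmware node is queried.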
+9
drivers/clocksource/Kconfig
··· 2 2 3 3 config CLKSRC_OF 4 4 bool 5 + select CLKSRC_PROBE 6 + 7 + config CLKSRC_ACPI 8 + bool 9 + select CLKSRC_PROBE 10 + 11 + config CLKSRC_PROBE 12 + bool 5 13 6 14 config CLKSRC_I8253 7 15 bool ··· 131 123 config ARM_ARCH_TIMER 132 124 bool 133 125 select CLKSRC_OF if OF 126 + select CLKSRC_ACPI if ACPI 134 127 135 128 config ARM_ARCH_TIMER_EVTSTREAM 136 129 bool "Support for ARM architected timer event stream generation"
+1 -1
drivers/clocksource/Makefile
··· 1 - obj-$(CONFIG_CLKSRC_OF) += clksrc-of.o 1 + obj-$(CONFIG_CLKSRC_PROBE) += clksrc-probe.o 2 2 obj-$(CONFIG_ATMEL_PIT) += timer-atmel-pit.o 3 3 obj-$(CONFIG_ATMEL_ST) += timer-atmel-st.o 4 4 obj-$(CONFIG_ATMEL_TCB_CLKSRC) += tcb_clksrc.o
+1 -9
drivers/clocksource/arm_arch_timer.c
··· 864 864 arch_timer_init(); 865 865 return 0; 866 866 } 867 - 868 - /* Initialize all the generic timers presented in GTDT */ 869 - void __init acpi_generic_timer_init(void) 870 - { 871 - if (acpi_disabled) 872 - return; 873 - 874 - acpi_table_parse(ACPI_SIG_GTDT, arch_timer_acpi_init); 875 - } 867 + CLOCKSOURCE_ACPI_DECLARE(arch_timer, ACPI_SIG_GTDT, arch_timer_acpi_init); 876 868 #endif
+5 -1
drivers/clocksource/clksrc-of.c drivers/clocksource/clksrc-probe.c
··· 14 14 * along with this program. If not, see <http://www.gnu.org/licenses/>. 15 15 */ 16 16 17 + #include <linux/acpi.h> 17 18 #include <linux/init.h> 18 19 #include <linux/of.h> 19 20 #include <linux/clocksource.h> ··· 24 23 static const struct of_device_id __clksrc_of_table_sentinel 25 24 __used __section(__clksrc_of_table_end); 26 25 27 - void __init clocksource_of_init(void) 26 + void __init clocksource_probe(void) 28 27 { 29 28 struct device_node *np; 30 29 const struct of_device_id *match; ··· 39 38 init_func(np); 40 39 clocksources++; 41 40 } 41 + 42 + clocksources += acpi_probe_device_table(clksrc); 43 + 42 44 if (!clocksources) 43 45 pr_crit("%s: no matching clocksources found\n", __func__); 44 46 }
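clocksource_probe() above walks a linker-built table of init functions (now fed by both OF and ACPI declarations) and counts how many clocksources came up. The table-driven pattern, reduced to a userspace sketch with a NULL sentinel and hypothetical sample entries:

```c
#include <assert.h>
#include <stddef.h>

/* Walk a NULL-terminated table of init functions, counting successes,
 * in the style of clocksource_probe(). */
typedef int (*init_fn)(void);

static int probe_all(const init_fn *table)
{
	int count = 0;

	while (*table)
		if ((*table++)() == 0)
			count++;
	return count;
}

/* Sample entries (hypothetical): one timer that probes, one that fails. */
static int ok_init(void)   { return 0; }
static int fail_init(void) { return -1; }
```

In the kernel the table is assembled at link time from CLOCKSOURCE_OF_DECLARE()/CLOCKSOURCE_ACPI_DECLARE() sections rather than written out by hand.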
+17
drivers/cpufreq/Kconfig.arm
··· 227 227 This add the CPUFreq driver support for Intel PXA2xx SOCs. 228 228 229 229 If in doubt, say N. 230 + 231 + config ACPI_CPPC_CPUFREQ 232 + tristate "CPUFreq driver based on the ACPI CPPC spec" 233 + depends on ACPI 234 + select ACPI_CPPC_LIB 235 + default n 236 + help 237 + This adds a CPUFreq driver which uses CPPC methods 238 + as described in the ACPIv5.1 spec. CPPC stands for 239 + Collaborative Processor Performance Controls. It 240 + is based on an abstract continuous scale of CPU 241 + performance values which allows the remote power 242 + processor to flexibly optimize for power and 243 + performance. CPPC relies on power management firmware 244 + support for its operation. 245 + 246 + If in doubt, say N.
+1
drivers/cpufreq/Kconfig.x86
··· 5 5 config X86_INTEL_PSTATE 6 6 bool "Intel P state control" 7 7 depends on X86 8 + select ACPI_PROCESSOR if ACPI 8 9 help 9 10 This driver provides a P state for Intel core processors. 10 11 The driver implements an internal governor and will become
+2 -1
drivers/cpufreq/Makefile
··· 1 1 # CPUfreq core 2 2 obj-$(CONFIG_CPU_FREQ) += cpufreq.o freq_table.o 3 - obj-$(CONFIG_PM_OPP) += cpufreq_opp.o 4 3 5 4 # CPUfreq stats 6 5 obj-$(CONFIG_CPU_FREQ_STAT) += cpufreq_stats.o ··· 75 76 obj-$(CONFIG_ARM_TEGRA20_CPUFREQ) += tegra20-cpufreq.o 76 77 obj-$(CONFIG_ARM_TEGRA124_CPUFREQ) += tegra124-cpufreq.o 77 78 obj-$(CONFIG_ARM_VEXPRESS_SPC_CPUFREQ) += vexpress-spc-cpufreq.o 79 + obj-$(CONFIG_ACPI_CPPC_CPUFREQ) += cppc_cpufreq.o 80 + 78 81 79 82 ################################################################################## 80 83 # PowerPC platform drivers
+1 -1
drivers/cpufreq/arm_big_little.h
··· 28 28 29 29 /* 30 30 * This must set opp table for cpu_dev in a similar way as done by 31 - * of_init_opp_table(). 31 + * dev_pm_opp_of_add_table(). 32 32 */ 33 33 int (*init_opp_table)(struct device *cpu_dev); 34 34
+2 -2
drivers/cpufreq/arm_big_little_dt.c
··· 54 54 return -ENOENT; 55 55 } 56 56 57 - ret = of_init_opp_table(cpu_dev); 57 + ret = dev_pm_opp_of_add_table(cpu_dev); 58 58 of_node_put(np); 59 59 60 60 return ret; ··· 82 82 .name = "dt-bl", 83 83 .get_transition_latency = dt_get_transition_latency, 84 84 .init_opp_table = dt_init_opp_table, 85 - .free_opp_table = of_free_opp_table, 85 + .free_opp_table = dev_pm_opp_of_remove_table, 86 86 }; 87 87 88 88 static int generic_bL_probe(struct platform_device *pdev)
+176
drivers/cpufreq/cppc_cpufreq.c
··· 1 + /* 2 + * CPPC (Collaborative Processor Performance Control) driver for 3 + * interfacing with the CPUfreq layer and governors. See 4 + * cppc_acpi.c for CPPC specific methods. 5 + * 6 + * (C) Copyright 2014, 2015 Linaro Ltd. 7 + * Author: Ashwin Chaugule <ashwin.chaugule@linaro.org> 8 + * 9 + * This program is free software; you can redistribute it and/or 10 + * modify it under the terms of the GNU General Public License 11 + * as published by the Free Software Foundation; version 2 12 + * of the License. 13 + */ 14 + 15 + #define pr_fmt(fmt) "CPPC Cpufreq:" fmt 16 + 17 + #include <linux/kernel.h> 18 + #include <linux/module.h> 19 + #include <linux/delay.h> 20 + #include <linux/cpu.h> 21 + #include <linux/cpufreq.h> 22 + #include <linux/vmalloc.h> 23 + 24 + #include <acpi/cppc_acpi.h> 25 + 26 + /* 27 + * These structs contain information parsed from per CPU 28 + * ACPI _CPC structures. 29 + * e.g. For each CPU the highest, lowest supported 30 + * performance capabilities, desired performance level 31 + * requested etc. 32 + */ 33 + static struct cpudata **all_cpu_data; 34 + 35 + static int cppc_cpufreq_set_target(struct cpufreq_policy *policy, 36 + unsigned int target_freq, 37 + unsigned int relation) 38 + { 39 + struct cpudata *cpu; 40 + struct cpufreq_freqs freqs; 41 + int ret = 0; 42 + 43 + cpu = all_cpu_data[policy->cpu]; 44 + 45 + cpu->perf_ctrls.desired_perf = target_freq; 46 + freqs.old = policy->cur; 47 + freqs.new = target_freq; 48 + 49 + cpufreq_freq_transition_begin(policy, &freqs); 50 + ret = cppc_set_perf(cpu->cpu, &cpu->perf_ctrls); 51 + cpufreq_freq_transition_end(policy, &freqs, ret != 0); 52 + 53 + if (ret) 54 + pr_debug("Failed to set target on CPU:%d. 
ret:%d\n", 55 + cpu->cpu, ret); 56 + 57 + return ret; 58 + } 59 + 60 + static int cppc_verify_policy(struct cpufreq_policy *policy) 61 + { 62 + cpufreq_verify_within_cpu_limits(policy); 63 + return 0; 64 + } 65 + 66 + static void cppc_cpufreq_stop_cpu(struct cpufreq_policy *policy) 67 + { 68 + int cpu_num = policy->cpu; 69 + struct cpudata *cpu = all_cpu_data[cpu_num]; 70 + int ret; 71 + 72 + cpu->perf_ctrls.desired_perf = cpu->perf_caps.lowest_perf; 73 + 74 + ret = cppc_set_perf(cpu_num, &cpu->perf_ctrls); 75 + if (ret) 76 + pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n", 77 + cpu->perf_caps.lowest_perf, cpu_num, ret); 78 + } 79 + 80 + static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy) 81 + { 82 + struct cpudata *cpu; 83 + unsigned int cpu_num = policy->cpu; 84 + int ret = 0; 85 + 86 + cpu = all_cpu_data[policy->cpu]; 87 + 88 + cpu->cpu = cpu_num; 89 + ret = cppc_get_perf_caps(policy->cpu, &cpu->perf_caps); 90 + 91 + if (ret) { 92 + pr_debug("Err reading CPU%d perf capabilities. ret:%d\n", 93 + cpu_num, ret); 94 + return ret; 95 + } 96 + 97 + policy->min = cpu->perf_caps.lowest_perf; 98 + policy->max = cpu->perf_caps.highest_perf; 99 + policy->cpuinfo.min_freq = policy->min; 100 + policy->cpuinfo.max_freq = policy->max; 101 + 102 + if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) 103 + cpumask_copy(policy->cpus, cpu->shared_cpu_map); 104 + else { 105 + /* Support only SW_ANY for now. */ 106 + pr_debug("Unsupported CPU co-ord type\n"); 107 + return -EFAULT; 108 + } 109 + 110 + cpumask_set_cpu(policy->cpu, policy->cpus); 111 + cpu->cur_policy = policy; 112 + 113 + /* Set policy->cur to max now. The governors will adjust later. */ 114 + policy->cur = cpu->perf_ctrls.desired_perf = cpu->perf_caps.highest_perf; 115 + 116 + ret = cppc_set_perf(cpu_num, &cpu->perf_ctrls); 117 + if (ret) 118 + pr_debug("Err setting perf value:%d on CPU:%d. 
ret:%d\n", 119 + cpu->perf_caps.highest_perf, cpu_num, ret); 120 + 121 + return ret; 122 + } 123 + 124 + static struct cpufreq_driver cppc_cpufreq_driver = { 125 + .flags = CPUFREQ_CONST_LOOPS, 126 + .verify = cppc_verify_policy, 127 + .target = cppc_cpufreq_set_target, 128 + .init = cppc_cpufreq_cpu_init, 129 + .stop_cpu = cppc_cpufreq_stop_cpu, 130 + .name = "cppc_cpufreq", 131 + }; 132 + 133 + static int __init cppc_cpufreq_init(void) 134 + { 135 + int i, ret = 0; 136 + struct cpudata *cpu; 137 + 138 + if (acpi_disabled) 139 + return -ENODEV; 140 + 141 + all_cpu_data = kzalloc(sizeof(void *) * num_possible_cpus(), GFP_KERNEL); 142 + if (!all_cpu_data) 143 + return -ENOMEM; 144 + 145 + for_each_possible_cpu(i) { 146 + all_cpu_data[i] = kzalloc(sizeof(struct cpudata), GFP_KERNEL); 147 + if (!all_cpu_data[i]) 148 + goto out; 149 + 150 + cpu = all_cpu_data[i]; 151 + if (!zalloc_cpumask_var(&cpu->shared_cpu_map, GFP_KERNEL)) 152 + goto out; 153 + } 154 + 155 + ret = acpi_get_psd_map(all_cpu_data); 156 + if (ret) { 157 + pr_debug("Error parsing PSD data. Aborting cpufreq registration.\n"); 158 + goto out; 159 + } 160 + 161 + ret = cpufreq_register_driver(&cppc_cpufreq_driver); 162 + if (ret) 163 + goto out; 164 + 165 + return ret; 166 + 167 + out: 168 + for_each_possible_cpu(i) 169 + if (all_cpu_data[i]) 170 + kfree(all_cpu_data[i]); 171 + 172 + kfree(all_cpu_data); 173 + return -ENODEV; 174 + } 175 + 176 + late_initcall(cppc_cpufreq_init);
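cppc_cpufreq_cpu_init() above pins policy->min/max to the lowest_perf/highest_perf capabilities read via cppc_get_perf_caps(), so any desired performance request must land inside that range. A hypothetical clamp helper (not part of the patch) makes the invariant explicit:

```c
#include <assert.h>

/* Capability bounds as read from the ACPI _CPC object (sketch of the
 * fields cppc_get_perf_caps() fills in). */
struct perf_caps {
	unsigned int lowest_perf;
	unsigned int highest_perf;
};

/* Clamp a requested performance level into [lowest, highest], the
 * range the CPPC driver exposes as policy->min/max. */
static unsigned int clamp_desired_perf(unsigned int req,
				       const struct perf_caps *caps)
{
	if (req < caps->lowest_perf)
		return caps->lowest_perf;
	if (req > caps->highest_perf)
		return caps->highest_perf;
	return req;
}
```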
+5 -5
drivers/cpufreq/cpufreq-dt.c
··· 216 216 } 217 217 218 218 /* Get OPP-sharing information from "operating-points-v2" bindings */ 219 - ret = of_get_cpus_sharing_opps(cpu_dev, policy->cpus); 219 + ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, policy->cpus); 220 220 if (ret) { 221 221 /* 222 222 * operating-points-v2 not supported, fallback to old method of ··· 238 238 * 239 239 * OPPs might be populated at runtime, don't check for error here 240 240 */ 241 - of_cpumask_init_opp_table(policy->cpus); 241 + dev_pm_opp_of_cpumask_add_table(policy->cpus); 242 242 243 243 /* 244 244 * But we need OPP table to function so if it is not there let's ··· 261 261 * OPP tables are initialized only for policy->cpu, do it for 262 262 * others as well. 263 263 */ 264 - ret = set_cpus_sharing_opps(cpu_dev, policy->cpus); 264 + ret = dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus); 265 265 if (ret) 266 266 dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n", 267 267 __func__, ret); ··· 368 368 out_free_priv: 369 369 kfree(priv); 370 370 out_free_opp: 371 - of_cpumask_free_opp_table(policy->cpus); 371 + dev_pm_opp_of_cpumask_remove_table(policy->cpus); 372 372 out_node_put: 373 373 of_node_put(np); 374 374 out_put_reg_clk: ··· 385 385 386 386 cpufreq_cooling_unregister(priv->cdev); 387 387 dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table); 388 - of_cpumask_free_opp_table(policy->related_cpus); 388 + dev_pm_opp_of_cpumask_remove_table(policy->related_cpus); 389 389 clk_put(policy->clk); 390 390 if (!IS_ERR(priv->cpu_reg)) 391 391 regulator_put(priv->cpu_reg);
+20 -92
drivers/cpufreq/cpufreq.c
··· 843 843 844 844 down_write(&policy->rwsem); 845 845 846 - /* Updating inactive policies is invalid, so avoid doing that. */ 847 - if (unlikely(policy_is_inactive(policy))) { 848 - ret = -EBUSY; 849 - goto unlock_policy_rwsem; 850 - } 851 - 852 846 if (fattr->store) 853 847 ret = fattr->store(policy, buf, count); 854 848 else 855 849 ret = -EIO; 856 850 857 - unlock_policy_rwsem: 858 851 up_write(&policy->rwsem); 859 852 unlock: 860 853 put_online_cpus(); ··· 872 879 .default_attrs = default_attrs, 873 880 .release = cpufreq_sysfs_release, 874 881 }; 875 - 876 - struct kobject *cpufreq_global_kobject; 877 - EXPORT_SYMBOL(cpufreq_global_kobject); 878 - 879 - static int cpufreq_global_kobject_usage; 880 - 881 - int cpufreq_get_global_kobject(void) 882 - { 883 - if (!cpufreq_global_kobject_usage++) 884 - return kobject_add(cpufreq_global_kobject, 885 - &cpu_subsys.dev_root->kobj, "%s", "cpufreq"); 886 - 887 - return 0; 888 - } 889 - EXPORT_SYMBOL(cpufreq_get_global_kobject); 890 - 891 - void cpufreq_put_global_kobject(void) 892 - { 893 - if (!--cpufreq_global_kobject_usage) 894 - kobject_del(cpufreq_global_kobject); 895 - } 896 - EXPORT_SYMBOL(cpufreq_put_global_kobject); 897 - 898 - int cpufreq_sysfs_create_file(const struct attribute *attr) 899 - { 900 - int ret = cpufreq_get_global_kobject(); 901 - 902 - if (!ret) { 903 - ret = sysfs_create_file(cpufreq_global_kobject, attr); 904 - if (ret) 905 - cpufreq_put_global_kobject(); 906 - } 907 - 908 - return ret; 909 - } 910 - EXPORT_SYMBOL(cpufreq_sysfs_create_file); 911 - 912 - void cpufreq_sysfs_remove_file(const struct attribute *attr) 913 - { 914 - sysfs_remove_file(cpufreq_global_kobject, attr); 915 - cpufreq_put_global_kobject(); 916 - } 917 - EXPORT_SYMBOL(cpufreq_sysfs_remove_file); 918 882 919 883 static int add_cpu_dev_symlink(struct cpufreq_policy *policy, int cpu) 920 884 { ··· 910 960 911 961 /* Some related CPUs might not be present (physically hotplugged) */ 912 962 for_each_cpu(j, policy->real_cpus) { 
913 - if (j == policy->kobj_cpu) 914 - continue; 915 - 916 963 ret = add_cpu_dev_symlink(policy, j); 917 964 if (ret) 918 965 break; ··· 923 976 unsigned int j; 924 977 925 978 /* Some related CPUs might not be present (physically hotplugged) */ 926 - for_each_cpu(j, policy->real_cpus) { 927 - if (j == policy->kobj_cpu) 928 - continue; 929 - 979 + for_each_cpu(j, policy->real_cpus) 930 980 remove_cpu_dev_symlink(policy, j); 931 - } 932 981 } 933 982 934 983 static int cpufreq_add_dev_interface(struct cpufreq_policy *policy) ··· 1022 1079 { 1023 1080 struct device *dev = get_cpu_device(cpu); 1024 1081 struct cpufreq_policy *policy; 1025 - int ret; 1026 1082 1027 1083 if (WARN_ON(!dev)) 1028 1084 return NULL; ··· 1039 1097 if (!zalloc_cpumask_var(&policy->real_cpus, GFP_KERNEL)) 1040 1098 goto err_free_rcpumask; 1041 1099 1042 - ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq, &dev->kobj, 1043 - "cpufreq"); 1044 - if (ret) { 1045 - pr_err("%s: failed to init policy->kobj: %d\n", __func__, ret); 1046 - goto err_free_real_cpus; 1047 - } 1048 - 1100 + kobject_init(&policy->kobj, &ktype_cpufreq); 1049 1101 INIT_LIST_HEAD(&policy->policy_list); 1050 1102 init_rwsem(&policy->rwsem); 1051 1103 spin_lock_init(&policy->transition_lock); ··· 1048 1112 INIT_WORK(&policy->update, handle_update); 1049 1113 1050 1114 policy->cpu = cpu; 1051 - 1052 - /* Set this once on allocation */ 1053 - policy->kobj_cpu = cpu; 1054 - 1055 1115 return policy; 1056 1116 1057 - err_free_real_cpus: 1058 - free_cpumask_var(policy->real_cpus); 1059 1117 err_free_rcpumask: 1060 1118 free_cpumask_var(policy->related_cpus); 1061 1119 err_free_cpumask: ··· 1151 1221 1152 1222 if (new_policy) { 1153 1223 /* related_cpus should at least include policy->cpus. */ 1154 - cpumask_or(policy->related_cpus, policy->related_cpus, policy->cpus); 1224 + cpumask_copy(policy->related_cpus, policy->cpus); 1155 1225 /* Remember CPUs present at the policy creation time. 
*/ 1156 1226 cpumask_and(policy->real_cpus, policy->cpus, cpu_present_mask); 1227 + 1228 + /* Name and add the kobject */ 1229 + ret = kobject_add(&policy->kobj, cpufreq_global_kobject, 1230 + "policy%u", 1231 + cpumask_first(policy->related_cpus)); 1232 + if (ret) { 1233 + pr_err("%s: failed to add policy->kobj: %d\n", __func__, 1234 + ret); 1235 + goto out_exit_policy; 1236 + } 1157 1237 } 1158 1238 1159 1239 /* ··· 1407 1467 return; 1408 1468 } 1409 1469 1410 - if (cpu != policy->kobj_cpu) { 1411 - remove_cpu_dev_symlink(policy, cpu); 1412 - } else { 1413 - /* 1414 - * The CPU owning the policy object is going away. Move it to 1415 - * another suitable CPU. 1416 - */ 1417 - unsigned int new_cpu = cpumask_first(policy->real_cpus); 1418 - struct device *new_dev = get_cpu_device(new_cpu); 1419 - 1420 - dev_dbg(dev, "%s: Moving policy object to CPU%u\n", __func__, new_cpu); 1421 - 1422 - sysfs_remove_link(&new_dev->kobj, "cpufreq"); 1423 - policy->kobj_cpu = new_cpu; 1424 - WARN_ON(kobject_move(&policy->kobj, &new_dev->kobj)); 1425 - } 1470 + remove_cpu_dev_symlink(policy, cpu); 1426 1471 } 1427 1472 1428 1473 static void handle_update(struct work_struct *work) ··· 2350 2425 if (!cpufreq_driver->set_boost) 2351 2426 cpufreq_driver->set_boost = cpufreq_boost_set_sw; 2352 2427 2353 - ret = cpufreq_sysfs_create_file(&boost.attr); 2428 + ret = sysfs_create_file(cpufreq_global_kobject, &boost.attr); 2354 2429 if (ret) 2355 2430 pr_err("%s: cannot register global BOOST sysfs file\n", 2356 2431 __func__); ··· 2361 2436 static void remove_boost_sysfs_file(void) 2362 2437 { 2363 2438 if (cpufreq_boost_supported()) 2364 - cpufreq_sysfs_remove_file(&boost.attr); 2439 + sysfs_remove_file(cpufreq_global_kobject, &boost.attr); 2365 2440 } 2366 2441 2367 2442 int cpufreq_enable_boost_support(void) ··· 2509 2584 .shutdown = cpufreq_suspend, 2510 2585 }; 2511 2586 2587 + struct kobject *cpufreq_global_kobject; 2588 + EXPORT_SYMBOL(cpufreq_global_kobject); 2589 + 2512 2590 static int 
__init cpufreq_core_init(void) 2513 2591 { 2514 2592 if (cpufreq_disabled()) 2515 2593 return -ENODEV; 2516 2594 2517 - cpufreq_global_kobject = kobject_create(); 2595 + cpufreq_global_kobject = kobject_create_and_add("cpufreq", &cpu_subsys.dev_root->kobj); 2518 2596 BUG_ON(!cpufreq_global_kobject); 2519 2597 2520 2598 register_syscore_ops(&cpufreq_syscore_ops);
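The cpufreq core hunk above names each policy kobject after the first CPU in its `related_cpus` mask (`"policy%u"`, `cpumask_first(policy->related_cpus)`) instead of tying the kobject to one owning CPU. A minimal userspace sketch of that naming rule, with `mask_first()` as a hypothetical stand-in for the kernel's `cpumask_first()`:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for cpumask_first(): index of the lowest set bit,
 * or past-the-end (like nr_cpu_ids) for an empty mask. */
static unsigned int mask_first(unsigned long mask)
{
	unsigned int i;

	for (i = 0; i < 8 * sizeof(mask); i++)
		if (mask & (1UL << i))
			return i;
	return 8 * sizeof(mask);
}

/* Name a policy directory the way kobject_add(..., "policy%u",
 * cpumask_first(policy->related_cpus)) does in the hunk above. */
static void policy_name(unsigned long related_cpus, char *buf, size_t len)
{
	snprintf(buf, len, "policy%u", mask_first(related_cpus));
}
```

Because the name is derived from the stable `related_cpus` mask, the policy directory no longer has to be moved with `kobject_move()` when its owning CPU goes offline, which is exactly what the removed `kobj_cpu` branch used to do.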
+18 -13
drivers/cpufreq/cpufreq_conservative.c
··· 23 23 24 24 static DEFINE_PER_CPU(struct cs_cpu_dbs_info_s, cs_cpu_dbs_info); 25 25 26 + static int cs_cpufreq_governor_dbs(struct cpufreq_policy *policy, 27 + unsigned int event); 28 + 29 + #ifndef CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE 30 + static 31 + #endif 32 + struct cpufreq_governor cpufreq_gov_conservative = { 33 + .name = "conservative", 34 + .governor = cs_cpufreq_governor_dbs, 35 + .max_transition_latency = TRANSITION_LATENCY_LIMIT, 36 + .owner = THIS_MODULE, 37 + }; 38 + 26 39 static inline unsigned int get_freq_target(struct cs_dbs_tuners *cs_tuners, 27 40 struct cpufreq_policy *policy) 28 41 { ··· 132 119 struct cpufreq_freqs *freq = data; 133 120 struct cs_cpu_dbs_info_s *dbs_info = 134 121 &per_cpu(cs_cpu_dbs_info, freq->cpu); 135 - struct cpufreq_policy *policy; 122 + struct cpufreq_policy *policy = cpufreq_cpu_get_raw(freq->cpu); 136 123 137 - if (!dbs_info->enable) 124 + if (!policy) 138 125 return 0; 139 126 140 - policy = dbs_info->cdbs.shared->policy; 127 + /* policy isn't governed by conservative governor */ 128 + if (policy->governor != &cpufreq_gov_conservative) 129 + return 0; 141 130 142 131 /* 143 132 * we only care if our internally tracked freq moves outside the 'valid' ··· 381 366 { 382 367 return cpufreq_governor_dbs(policy, &cs_dbs_cdata, event); 383 368 } 384 - 385 - #ifndef CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE 386 - static 387 - #endif 388 - struct cpufreq_governor cpufreq_gov_conservative = { 389 - .name = "conservative", 390 - .governor = cs_cpufreq_governor_dbs, 391 - .max_transition_latency = TRANSITION_LATENCY_LIMIT, 392 - .owner = THIS_MODULE, 393 - }; 394 369 395 370 static int __init cpufreq_gov_dbs_init(void) 396 371 {
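The conservative-governor hunk above replaces the per-CPU `dbs_info->enable` flag with a direct check that the policy's governor pointer is `&cpufreq_gov_conservative` before the transition notifier acts (which is why the governor struct is hoisted above the notifier). A minimal sketch of that pointer-identity test, using hypothetical simplified `struct policy`/`struct governor` types:

```c
#include <stddef.h>

struct governor { const char *name; };
struct policy  { struct governor *governor; unsigned int cur; };

static struct governor conservative = { "conservative" };

/* Returns 1 if the notifier should act for this policy, mirroring the
 * check that replaced the per-CPU enable flag in the hunk above. */
static int notifier_should_act(const struct policy *p,
			       const struct governor *g)
{
	if (!p)			/* no policy registered for this CPU */
		return 0;
	return p->governor == g;	/* is the policy governed by us? */
}
```

Comparing the governor pointer avoids a separate flag that could go stale between governor start/stop and notifier invocation.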
+6 -26
drivers/cpufreq/cpufreq_governor.c
··· 348 348 set_sampling_rate(dbs_data, max(dbs_data->min_sampling_rate, 349 349 latency * LATENCY_MULTIPLIER)); 350 350 351 - if (!have_governor_per_policy()) { 352 - if (WARN_ON(cpufreq_get_global_kobject())) { 353 - ret = -EINVAL; 354 - goto cdata_exit; 355 - } 351 + if (!have_governor_per_policy()) 356 352 cdata->gdbs_data = dbs_data; 357 - } 358 353 359 354 ret = sysfs_create_group(get_governor_parent_kobj(policy), 360 355 get_sysfs_attr(dbs_data)); 361 356 if (ret) 362 - goto put_kobj; 357 + goto reset_gdbs_data; 363 358 364 359 policy->governor_data = dbs_data; 365 360 366 361 return 0; 367 362 368 - put_kobj: 369 - if (!have_governor_per_policy()) { 363 + reset_gdbs_data: 364 + if (!have_governor_per_policy()) 370 365 cdata->gdbs_data = NULL; 371 - cpufreq_put_global_kobject(); 372 - } 373 - cdata_exit: 374 366 cdata->exit(dbs_data, !policy->governor->initialized); 375 367 free_common_dbs_info: 376 368 free_common_dbs_info(policy, cdata); ··· 386 394 sysfs_remove_group(get_governor_parent_kobj(policy), 387 395 get_sysfs_attr(dbs_data)); 388 396 389 - if (!have_governor_per_policy()) { 397 + if (!have_governor_per_policy()) 390 398 cdata->gdbs_data = NULL; 391 - cpufreq_put_global_kobject(); 392 - } 393 399 394 400 cdata->exit(dbs_data, policy->governor->initialized == 1); 395 401 kfree(dbs_data); ··· 453 463 cdata->get_cpu_dbs_info_s(cpu); 454 464 455 465 cs_dbs_info->down_skip = 0; 456 - cs_dbs_info->enable = 1; 457 466 cs_dbs_info->requested_freq = policy->cur; 458 467 } else { 459 468 struct od_ops *od_ops = cdata->gov_ops; ··· 471 482 static int cpufreq_governor_stop(struct cpufreq_policy *policy, 472 483 struct dbs_data *dbs_data) 473 484 { 474 - struct common_dbs_data *cdata = dbs_data->cdata; 475 - unsigned int cpu = policy->cpu; 476 - struct cpu_dbs_info *cdbs = cdata->get_cpu_cdbs(cpu); 485 + struct cpu_dbs_info *cdbs = dbs_data->cdata->get_cpu_cdbs(policy->cpu); 477 486 struct cpu_common_dbs_info *shared = cdbs->shared; 478 487 479 488 /* State 
should be equivalent to START */ ··· 479 492 return -EBUSY; 480 493 481 494 gov_cancel_work(dbs_data, policy); 482 - 483 - if (cdata->governor == GOV_CONSERVATIVE) { 484 - struct cs_cpu_dbs_info_s *cs_dbs_info = 485 - cdata->get_cpu_dbs_info_s(cpu); 486 - 487 - cs_dbs_info->enable = 0; 488 - } 489 495 490 496 shared->policy = NULL; 491 497 mutex_destroy(&shared->timer_mutex);
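The cpufreq_governor.c hunk above simplifies the init error path: with the global-kobject get/put calls gone, the only unwind step left before `cdata->exit()` is resetting `cdata->gdbs_data` (the renamed `reset_gdbs_data` label). A condensed sketch of that goto-unwind shape, with hypothetical flags standing in for the real sysfs call and exit hook:

```c
#include <stddef.h>

struct cdata { void *gdbs_data; int exited; };

/* Simplified init: on sysfs failure, undo the shared-pointer assignment
 * and run the exit hook, mirroring the reset_gdbs_data label above. */
static int governor_init(struct cdata *c, void *dbs_data,
			 int per_policy, int sysfs_ok)
{
	if (!per_policy)
		c->gdbs_data = dbs_data;

	if (!sysfs_ok)
		goto reset_gdbs_data;
	return 0;

reset_gdbs_data:
	if (!per_policy)
		c->gdbs_data = NULL;
	c->exited = 1;		/* stands in for cdata->exit(...) */
	return -1;
}
```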
-1
drivers/cpufreq/cpufreq_governor.h
··· 170 170 struct cpu_dbs_info cdbs; 171 171 unsigned int down_skip; 172 172 unsigned int requested_freq; 173 - unsigned int enable:1; 174 173 }; 175 174 176 175 /* Per policy Governors sysfs tunables */
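The header hunk above drops the `enable:1` bitfield from the conservative governor's per-CPU state, since liveness is now derived from `policy->governor`. A sketch of the before/after struct shapes (field names taken from the hunk; layout is illustrative only):

```c
/* Before: a 1-bit enable flag tracked whether the notifier should run. */
struct cs_dbs_info_old {
	unsigned int down_skip;
	unsigned int requested_freq;
	unsigned int enable:1;
};

/* After: the flag is gone; the governor-pointer check replaces it. */
struct cs_dbs_info_new {
	unsigned int down_skip;
	unsigned int requested_freq;
};
```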
+1 -9
drivers/cpufreq/cpufreq_ondemand.c
··· 267 267 dbs_info = &per_cpu(od_cpu_dbs_info, cpu); 268 268 cpufreq_cpu_put(policy); 269 269 270 - mutex_lock(&dbs_info->cdbs.shared->timer_mutex); 271 - 272 - if (!delayed_work_pending(&dbs_info->cdbs.dwork)) { 273 - mutex_unlock(&dbs_info->cdbs.shared->timer_mutex); 270 + if (!delayed_work_pending(&dbs_info->cdbs.dwork)) 274 271 continue; 275 - } 276 272 277 273 next_sampling = jiffies + usecs_to_jiffies(new_rate); 278 274 appointed_at = dbs_info->cdbs.dwork.timer.expires; 279 275 280 276 if (time_before(next_sampling, appointed_at)) { 281 - 282 - mutex_unlock(&dbs_info->cdbs.shared->timer_mutex); 283 277 cancel_delayed_work_sync(&dbs_info->cdbs.dwork); 284 - mutex_lock(&dbs_info->cdbs.shared->timer_mutex); 285 278 286 279 gov_queue_work(dbs_data, policy, 287 280 usecs_to_jiffies(new_rate), true); 288 281 289 282 } 290 - mutex_unlock(&dbs_info->cdbs.shared->timer_mutex); 291 283 } 292 284 } 293 285
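The ondemand hunk above drops the `timer_mutex` juggling around the sampling-rate update: a pending work item is cancelled and requeued only if the new deadline lands before the already-appointed one, using the kernel's wrap-safe `time_before()` comparison. A minimal sketch of that decision, with `jiffies_t` as a hypothetical alias:

```c
typedef unsigned long jiffies_t;

/* Wrap-safe "a is before b", as the kernel's time_before() macro. */
static int time_before_j(jiffies_t a, jiffies_t b)
{
	return (long)(a - b) < 0;
}

/* Decide whether a queued sample must be requeued when the sampling
 * rate changes: only if the new deadline precedes the appointed one,
 * matching the time_before(next_sampling, appointed_at) check above. */
static int needs_requeue(jiffies_t now, jiffies_t new_rate_jiffies,
			 jiffies_t appointed_at)
{
	jiffies_t next_sampling = now + new_rate_jiffies;

	return time_before_j(next_sampling, appointed_at);
}
```

The signed subtraction makes the comparison correct even across jiffies counter wraparound, which a plain `<` would get wrong.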
-114
drivers/cpufreq/cpufreq_opp.c
··· 1 - /* 2 - * Generic OPP helper interface for CPUFreq drivers 3 - * 4 - * Copyright (C) 2009-2014 Texas Instruments Incorporated. 5 - * Nishanth Menon 6 - * Romit Dasgupta 7 - * Kevin Hilman 8 - * 9 - * This program is free software; you can redistribute it and/or modify 10 - * it under the terms of the GNU General Public License version 2 as 11 - * published by the Free Software Foundation. 12 - */ 13 - #include <linux/cpufreq.h> 14 - #include <linux/device.h> 15 - #include <linux/err.h> 16 - #include <linux/errno.h> 17 - #include <linux/export.h> 18 - #include <linux/kernel.h> 19 - #include <linux/pm_opp.h> 20 - #include <linux/rcupdate.h> 21 - #include <linux/slab.h> 22 - 23 - /** 24 - * dev_pm_opp_init_cpufreq_table() - create a cpufreq table for a device 25 - * @dev: device for which we do this operation 26 - * @table: Cpufreq table returned back to caller 27 - * 28 - * Generate a cpufreq table for a provided device- this assumes that the 29 - * opp list is already initialized and ready for usage. 30 - * 31 - * This function allocates required memory for the cpufreq table. It is 32 - * expected that the caller does the required maintenance such as freeing 33 - * the table as required. 34 - * 35 - * Returns -EINVAL for bad pointers, -ENODEV if the device is not found, -ENOMEM 36 - * if no memory available for the operation (table is not populated), returns 0 37 - * if successful and table is populated. 38 - * 39 - * WARNING: It is important for the callers to ensure refreshing their copy of 40 - * the table if any of the mentioned functions have been invoked in the interim. 41 - * 42 - * Locking: The internal device_opp and opp structures are RCU protected. 43 - * Since we just use the regular accessor functions to access the internal data 44 - * structures, we use RCU read lock inside this function. As a result, users of 45 - * this function DONOT need to use explicit locks for invoking. 
46 - */ 47 - int dev_pm_opp_init_cpufreq_table(struct device *dev, 48 - struct cpufreq_frequency_table **table) 49 - { 50 - struct dev_pm_opp *opp; 51 - struct cpufreq_frequency_table *freq_table = NULL; 52 - int i, max_opps, ret = 0; 53 - unsigned long rate; 54 - 55 - rcu_read_lock(); 56 - 57 - max_opps = dev_pm_opp_get_opp_count(dev); 58 - if (max_opps <= 0) { 59 - ret = max_opps ? max_opps : -ENODATA; 60 - goto out; 61 - } 62 - 63 - freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_ATOMIC); 64 - if (!freq_table) { 65 - ret = -ENOMEM; 66 - goto out; 67 - } 68 - 69 - for (i = 0, rate = 0; i < max_opps; i++, rate++) { 70 - /* find next rate */ 71 - opp = dev_pm_opp_find_freq_ceil(dev, &rate); 72 - if (IS_ERR(opp)) { 73 - ret = PTR_ERR(opp); 74 - goto out; 75 - } 76 - freq_table[i].driver_data = i; 77 - freq_table[i].frequency = rate / 1000; 78 - 79 - /* Is Boost/turbo opp ? */ 80 - if (dev_pm_opp_is_turbo(opp)) 81 - freq_table[i].flags = CPUFREQ_BOOST_FREQ; 82 - } 83 - 84 - freq_table[i].driver_data = i; 85 - freq_table[i].frequency = CPUFREQ_TABLE_END; 86 - 87 - *table = &freq_table[0]; 88 - 89 - out: 90 - rcu_read_unlock(); 91 - if (ret) 92 - kfree(freq_table); 93 - 94 - return ret; 95 - } 96 - EXPORT_SYMBOL_GPL(dev_pm_opp_init_cpufreq_table); 97 - 98 - /** 99 - * dev_pm_opp_free_cpufreq_table() - free the cpufreq table 100 - * @dev: device for which we do this operation 101 - * @table: table to free 102 - * 103 - * Free up the table allocated by dev_pm_opp_init_cpufreq_table 104 - */ 105 - void dev_pm_opp_free_cpufreq_table(struct device *dev, 106 - struct cpufreq_frequency_table **table) 107 - { 108 - if (!table) 109 - return; 110 - 111 - kfree(*table); 112 - *table = NULL; 113 - } 114 - EXPORT_SYMBOL_GPL(dev_pm_opp_free_cpufreq_table);
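The deleted cpufreq_opp.c above implemented `dev_pm_opp_init_cpufreq_table()`: walk the OPP list in ascending frequency order and emit a cpufreq table in kHz, terminated by `CPUFREQ_TABLE_END`. A self-contained sketch of that table-building shape (no RCU or OPP lookups; `TABLE_END` is a hypothetical stand-in for the sentinel):

```c
#include <stdlib.h>

#define TABLE_END ~0u

struct freq_entry { unsigned int driver_data; unsigned int frequency; };

/* Build a cpufreq-style table (kHz) from a sorted list of OPP rates
 * (Hz), terminated by TABLE_END, loosely following the removed
 * dev_pm_opp_init_cpufreq_table(). Caller frees the table. */
static struct freq_entry *build_freq_table(const unsigned long *rates_hz,
					   int n)
{
	struct freq_entry *t = calloc(n + 1, sizeof(*t));
	int i;

	if (!t)
		return NULL;
	for (i = 0; i < n; i++) {
		t[i].driver_data = i;
		t[i].frequency = rates_hz[i] / 1000;	/* Hz -> kHz */
	}
	t[n].driver_data = n;
	t[n].frequency = TABLE_END;	/* sentinel ends iteration */
	return t;
}
```

The real helper additionally tagged turbo OPPs with `CPUFREQ_BOOST_FREQ` and took the RCU read lock around the OPP accessors, as the removed comment block describes.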
+3 -3
drivers/cpufreq/exynos5440-cpufreq.c
··· 360 360 goto err_put_node; 361 361 } 362 362 363 - ret = of_init_opp_table(dvfs_info->dev); 363 + ret = dev_pm_opp_of_add_table(dvfs_info->dev); 364 364 if (ret) { 365 365 dev_err(dvfs_info->dev, "failed to init OPP table: %d\n", ret); 366 366 goto err_put_node; ··· 424 424 err_free_table: 425 425 dev_pm_opp_free_cpufreq_table(dvfs_info->dev, &dvfs_info->freq_table); 426 426 err_free_opp: 427 - of_free_opp_table(dvfs_info->dev); 427 + dev_pm_opp_of_remove_table(dvfs_info->dev); 428 428 err_put_node: 429 429 of_node_put(np); 430 430 dev_err(&pdev->dev, "%s: failed initialization\n", __func__); ··· 435 435 { 436 436 cpufreq_unregister_driver(&exynos_driver); 437 437 dev_pm_opp_free_cpufreq_table(dvfs_info->dev, &dvfs_info->freq_table); 438 - of_free_opp_table(dvfs_info->dev); 438 + dev_pm_opp_of_remove_table(dvfs_info->dev); 439 439 return 0; 440 440 } 441 441
+48 -8
drivers/cpufreq/imx6q-cpufreq.c
··· 30 30 static struct clk *step_clk; 31 31 static struct clk *pll2_pfd2_396m_clk; 32 32 33 + /* clk used by i.MX6UL */ 34 + static struct clk *pll2_bus_clk; 35 + static struct clk *secondary_sel_clk; 36 + 33 37 static struct device *cpu_dev; 34 38 static bool free_opp; 35 39 static struct cpufreq_frequency_table *freq_table; ··· 95 91 * The setpoints are selected per PLL/PDF frequencies, so we need to 96 92 * reprogram PLL for frequency scaling. The procedure of reprogramming 97 93 * PLL1 is as below. 98 - * 94 + * For i.MX6UL, it has a secondary clk mux, the cpu frequency change 95 + * flow is slightly different from other i.MX6 OSC. 96 + * The cpu frequeny change flow for i.MX6(except i.MX6UL) is as below: 99 97 * - Enable pll2_pfd2_396m_clk and reparent pll1_sw_clk to it 100 98 * - Reprogram pll1_sys_clk and reparent pll1_sw_clk back to it 101 99 * - Disable pll2_pfd2_396m_clk 102 100 */ 103 - clk_set_parent(step_clk, pll2_pfd2_396m_clk); 104 - clk_set_parent(pll1_sw_clk, step_clk); 105 - if (freq_hz > clk_get_rate(pll2_pfd2_396m_clk)) { 106 - clk_set_rate(pll1_sys_clk, new_freq * 1000); 101 + if (of_machine_is_compatible("fsl,imx6ul")) { 102 + /* 103 + * When changing pll1_sw_clk's parent to pll1_sys_clk, 104 + * CPU may run at higher than 528MHz, this will lead to 105 + * the system unstable if the voltage is lower than the 106 + * voltage of 528MHz, so lower the CPU frequency to one 107 + * half before changing CPU frequency. 
108 + */ 109 + clk_set_rate(arm_clk, (old_freq >> 1) * 1000); 107 110 clk_set_parent(pll1_sw_clk, pll1_sys_clk); 111 + if (freq_hz > clk_get_rate(pll2_pfd2_396m_clk)) 112 + clk_set_parent(secondary_sel_clk, pll2_bus_clk); 113 + else 114 + clk_set_parent(secondary_sel_clk, pll2_pfd2_396m_clk); 115 + clk_set_parent(step_clk, secondary_sel_clk); 116 + clk_set_parent(pll1_sw_clk, step_clk); 117 + } else { 118 + clk_set_parent(step_clk, pll2_pfd2_396m_clk); 119 + clk_set_parent(pll1_sw_clk, step_clk); 120 + if (freq_hz > clk_get_rate(pll2_pfd2_396m_clk)) { 121 + clk_set_rate(pll1_sys_clk, new_freq * 1000); 122 + clk_set_parent(pll1_sw_clk, pll1_sys_clk); 123 + } 108 124 } 109 125 110 126 /* Ensure the arm clock divider is what we expect */ ··· 210 186 goto put_clk; 211 187 } 212 188 189 + if (of_machine_is_compatible("fsl,imx6ul")) { 190 + pll2_bus_clk = clk_get(cpu_dev, "pll2_bus"); 191 + secondary_sel_clk = clk_get(cpu_dev, "secondary_sel"); 192 + if (IS_ERR(pll2_bus_clk) || IS_ERR(secondary_sel_clk)) { 193 + dev_err(cpu_dev, "failed to get clocks specific to imx6ul\n"); 194 + ret = -ENOENT; 195 + goto put_clk; 196 + } 197 + } 198 + 213 199 arm_reg = regulator_get(cpu_dev, "arm"); 214 200 pu_reg = regulator_get_optional(cpu_dev, "pu"); 215 201 soc_reg = regulator_get(cpu_dev, "soc"); ··· 236 202 */ 237 203 num = dev_pm_opp_get_opp_count(cpu_dev); 238 204 if (num < 0) { 239 - ret = of_init_opp_table(cpu_dev); 205 + ret = dev_pm_opp_of_add_table(cpu_dev); 240 206 if (ret < 0) { 241 207 dev_err(cpu_dev, "failed to init OPP table: %d\n", ret); 242 208 goto put_reg; ··· 346 312 dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table); 347 313 out_free_opp: 348 314 if (free_opp) 349 - of_free_opp_table(cpu_dev); 315 + dev_pm_opp_of_remove_table(cpu_dev); 350 316 put_reg: 351 317 if (!IS_ERR(arm_reg)) 352 318 regulator_put(arm_reg); ··· 365 331 clk_put(step_clk); 366 332 if (!IS_ERR(pll2_pfd2_396m_clk)) 367 333 clk_put(pll2_pfd2_396m_clk); 334 + if (!IS_ERR(pll2_bus_clk)) 335 + 
clk_put(pll2_bus_clk); 336 + if (!IS_ERR(secondary_sel_clk)) 337 + clk_put(secondary_sel_clk); 368 338 of_node_put(np); 369 339 return ret; 370 340 } ··· 378 340 cpufreq_unregister_driver(&imx6q_cpufreq_driver); 379 341 dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table); 380 342 if (free_opp) 381 - of_free_opp_table(cpu_dev); 343 + dev_pm_opp_of_remove_table(cpu_dev); 382 344 regulator_put(arm_reg); 383 345 if (!IS_ERR(pu_reg)) 384 346 regulator_put(pu_reg); ··· 388 350 clk_put(pll1_sw_clk); 389 351 clk_put(step_clk); 390 352 clk_put(pll2_pfd2_396m_clk); 353 + clk_put(pll2_bus_clk); 354 + clk_put(secondary_sel_clk); 391 355 392 356 return 0; 393 357 }
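The imx6q hunk above adds an i.MX6UL-specific reparenting sequence: the ARM clock is first halved (to stay within the lower voltage rail), then the secondary mux is pointed at `pll2_bus` or `pll2_pfd2_396m` depending on the target frequency. A minimal model of that ordering, with hypothetical `struct clks` fields standing in for the real clk handles:

```c
#include <string.h>

struct clks {
	unsigned long arm_hz;
	const char *step_parent;
};

/* Model of the i.MX6UL path above: halve the ARM clock before moving
 * pll1_sw_clk off pll1, then pick the step-clk source by target rate. */
static void imx6ul_prepare(struct clks *c, unsigned long old_hz,
			   unsigned long target_hz, unsigned long pfd396_hz)
{
	c->arm_hz = old_hz / 2;	/* avoid >528 MHz at low voltage */
	c->step_parent = (target_hz > pfd396_hz) ? "pll2_bus"
						 : "pll2_pfd2_396m";
}
```

The ordering matters: reparenting first could briefly run the CPU above 528 MHz at a voltage only valid for the lower rate, which the added comment in the diff calls out as a stability hazard.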
+2
drivers/cpufreq/integrator-cpufreq.c
··· 221 221 { }, 222 222 }; 223 223 224 + MODULE_DEVICE_TABLE(of, integrator_cpufreq_match); 225 + 224 226 static struct platform_driver integrator_cpufreq_driver = { 225 227 .driver = { 226 228 .name = "integrator-cpufreq",
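The integrator hunk above adds `MODULE_DEVICE_TABLE(of, ...)`, which exports the driver's OF match table as module aliases so udev can autoload the module when a matching `compatible` string appears. The matching itself is a walk over a NULL-terminated table; a minimal sketch with a hypothetical one-field `struct of_id`:

```c
#include <stddef.h>
#include <string.h>

struct of_id { const char *compatible; };

/* Walk a NULL-terminated match table the way the OF core does when
 * deciding whether a driver binds to a device node. */
static const struct of_id *of_match(const struct of_id *tbl,
				    const char *compat)
{
	for (; tbl->compatible; tbl++)
		if (strcmp(tbl->compatible, compat) == 0)
			return tbl;
	return NULL;
}
```

Without the `MODULE_DEVICE_TABLE` line the driver still binds when built in or loaded manually, but automatic module loading from the device tree does not work, which is what this two-line fix addresses.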
+337 -53
drivers/cpufreq/intel_pstate.c
··· 34 34 #include <asm/cpu_device_id.h> 35 35 #include <asm/cpufeature.h> 36 36 37 + #if IS_ENABLED(CONFIG_ACPI) 38 + #include <acpi/processor.h> 39 + #endif 40 + 37 41 #define BYT_RATIOS 0x66a 38 42 #define BYT_VIDS 0x66b 39 43 #define BYT_TURBO_RATIOS 0x66c ··· 46 42 #define FRAC_BITS 8 47 43 #define int_tofp(X) ((int64_t)(X) << FRAC_BITS) 48 44 #define fp_toint(X) ((X) >> FRAC_BITS) 49 - 50 45 51 46 static inline int32_t mul_fp(int32_t x, int32_t y) 52 47 { ··· 81 78 int current_pstate; 82 79 int min_pstate; 83 80 int max_pstate; 81 + int max_pstate_physical; 84 82 int scaling; 85 83 int turbo_pstate; 86 84 }; ··· 117 113 u64 prev_mperf; 118 114 u64 prev_tsc; 119 115 struct sample sample; 116 + #if IS_ENABLED(CONFIG_ACPI) 117 + struct acpi_processor_performance acpi_perf_data; 118 + #endif 120 119 }; 121 120 122 121 static struct cpudata **all_cpu_data; ··· 134 127 135 128 struct pstate_funcs { 136 129 int (*get_max)(void); 130 + int (*get_max_physical)(void); 137 131 int (*get_min)(void); 138 132 int (*get_turbo)(void); 139 133 int (*get_scaling)(void); ··· 150 142 static struct pstate_adjust_policy pid_params; 151 143 static struct pstate_funcs pstate_funcs; 152 144 static int hwp_active; 145 + static int no_acpi_perf; 153 146 154 147 struct perf_limits { 155 148 int no_turbo; ··· 163 154 int max_sysfs_pct; 164 155 int min_policy_pct; 165 156 int min_sysfs_pct; 157 + int max_perf_ctl; 158 + int min_perf_ctl; 166 159 }; 167 160 168 - static struct perf_limits limits = { 161 + static struct perf_limits performance_limits = { 162 + .no_turbo = 0, 163 + .turbo_disabled = 0, 164 + .max_perf_pct = 100, 165 + .max_perf = int_tofp(1), 166 + .min_perf_pct = 100, 167 + .min_perf = int_tofp(1), 168 + .max_policy_pct = 100, 169 + .max_sysfs_pct = 100, 170 + .min_policy_pct = 0, 171 + .min_sysfs_pct = 0, 172 + }; 173 + 174 + static struct perf_limits powersave_limits = { 169 175 .no_turbo = 0, 170 176 .turbo_disabled = 0, 171 177 .max_perf_pct = 100, ··· 191 167 
.max_sysfs_pct = 100, 192 168 .min_policy_pct = 0, 193 169 .min_sysfs_pct = 0, 170 + .max_perf_ctl = 0, 171 + .min_perf_ctl = 0, 194 172 }; 173 + 174 + #ifdef CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE 175 + static struct perf_limits *limits = &performance_limits; 176 + #else 177 + static struct perf_limits *limits = &powersave_limits; 178 + #endif 179 + 180 + #if IS_ENABLED(CONFIG_ACPI) 181 + /* 182 + * The max target pstate ratio is a 8 bit value in both PLATFORM_INFO MSR and 183 + * in TURBO_RATIO_LIMIT MSR, which pstate driver stores in max_pstate and 184 + * max_turbo_pstate fields. The PERF_CTL MSR contains 16 bit value for P state 185 + * ratio, out of it only high 8 bits are used. For example 0x1700 is setting 186 + * target ratio 0x17. The _PSS control value stores in a format which can be 187 + * directly written to PERF_CTL MSR. But in intel_pstate driver this shift 188 + * occurs during write to PERF_CTL (E.g. for cores core_set_pstate()). 189 + * This function converts the _PSS control value to intel pstate driver format 190 + * for comparison and assignment. 
191 + */ 192 + static int convert_to_native_pstate_format(struct cpudata *cpu, int index) 193 + { 194 + return cpu->acpi_perf_data.states[index].control >> 8; 195 + } 196 + 197 + static int intel_pstate_init_perf_limits(struct cpufreq_policy *policy) 198 + { 199 + struct cpudata *cpu; 200 + int ret; 201 + bool turbo_absent = false; 202 + int max_pstate_index; 203 + int min_pss_ctl, max_pss_ctl, turbo_pss_ctl; 204 + int i; 205 + 206 + cpu = all_cpu_data[policy->cpu]; 207 + 208 + pr_debug("intel_pstate: default limits 0x%x 0x%x 0x%x\n", 209 + cpu->pstate.min_pstate, cpu->pstate.max_pstate, 210 + cpu->pstate.turbo_pstate); 211 + 212 + if (!cpu->acpi_perf_data.shared_cpu_map && 213 + zalloc_cpumask_var_node(&cpu->acpi_perf_data.shared_cpu_map, 214 + GFP_KERNEL, cpu_to_node(policy->cpu))) { 215 + return -ENOMEM; 216 + } 217 + 218 + ret = acpi_processor_register_performance(&cpu->acpi_perf_data, 219 + policy->cpu); 220 + if (ret) 221 + return ret; 222 + 223 + /* 224 + * Check if the control value in _PSS is for PERF_CTL MSR, which should 225 + * guarantee that the states returned by it map to the states in our 226 + * list directly. 227 + */ 228 + if (cpu->acpi_perf_data.control_register.space_id != 229 + ACPI_ADR_SPACE_FIXED_HARDWARE) 230 + return -EIO; 231 + 232 + pr_debug("intel_pstate: CPU%u - ACPI _PSS perf data\n", policy->cpu); 233 + for (i = 0; i < cpu->acpi_perf_data.state_count; i++) 234 + pr_debug(" %cP%d: %u MHz, %u mW, 0x%x\n", 235 + (i == cpu->acpi_perf_data.state ? 
'*' : ' '), i, 236 + (u32) cpu->acpi_perf_data.states[i].core_frequency, 237 + (u32) cpu->acpi_perf_data.states[i].power, 238 + (u32) cpu->acpi_perf_data.states[i].control); 239 + 240 + /* 241 + * If there is only one entry _PSS, simply ignore _PSS and continue as 242 + * usual without taking _PSS into account 243 + */ 244 + if (cpu->acpi_perf_data.state_count < 2) 245 + return 0; 246 + 247 + turbo_pss_ctl = convert_to_native_pstate_format(cpu, 0); 248 + min_pss_ctl = convert_to_native_pstate_format(cpu, 249 + cpu->acpi_perf_data.state_count - 1); 250 + /* Check if there is a turbo freq in _PSS */ 251 + if (turbo_pss_ctl <= cpu->pstate.max_pstate && 252 + turbo_pss_ctl > cpu->pstate.min_pstate) { 253 + pr_debug("intel_pstate: no turbo range exists in _PSS\n"); 254 + limits->no_turbo = limits->turbo_disabled = 1; 255 + cpu->pstate.turbo_pstate = cpu->pstate.max_pstate; 256 + turbo_absent = true; 257 + } 258 + 259 + /* Check if the max non turbo p state < Intel P state max */ 260 + max_pstate_index = turbo_absent ? 0 : 1; 261 + max_pss_ctl = convert_to_native_pstate_format(cpu, max_pstate_index); 262 + if (max_pss_ctl < cpu->pstate.max_pstate && 263 + max_pss_ctl > cpu->pstate.min_pstate) 264 + cpu->pstate.max_pstate = max_pss_ctl; 265 + 266 + /* check If min perf > Intel P State min */ 267 + if (min_pss_ctl > cpu->pstate.min_pstate && 268 + min_pss_ctl < cpu->pstate.max_pstate) { 269 + cpu->pstate.min_pstate = min_pss_ctl; 270 + policy->cpuinfo.min_freq = min_pss_ctl * cpu->pstate.scaling; 271 + } 272 + 273 + if (turbo_absent) 274 + policy->cpuinfo.max_freq = cpu->pstate.max_pstate * 275 + cpu->pstate.scaling; 276 + else { 277 + policy->cpuinfo.max_freq = cpu->pstate.turbo_pstate * 278 + cpu->pstate.scaling; 279 + /* 280 + * The _PSS table doesn't contain whole turbo frequency range. 281 + * This just contains +1 MHZ above the max non turbo frequency, 282 + * with control value corresponding to max turbo ratio. 
But 283 + * when cpufreq set policy is called, it will call with this 284 + * max frequency, which will cause a reduced performance as 285 + * this driver uses real max turbo frequency as the max 286 + * frequeny. So correct this frequency in _PSS table to 287 + * correct max turbo frequency based on the turbo ratio. 288 + * Also need to convert to MHz as _PSS freq is in MHz. 289 + */ 290 + cpu->acpi_perf_data.states[0].core_frequency = 291 + turbo_pss_ctl * 100; 292 + } 293 + 294 + pr_debug("intel_pstate: Updated limits using _PSS 0x%x 0x%x 0x%x\n", 295 + cpu->pstate.min_pstate, cpu->pstate.max_pstate, 296 + cpu->pstate.turbo_pstate); 297 + pr_debug("intel_pstate: policy max_freq=%d Khz min_freq = %d KHz\n", 298 + policy->cpuinfo.max_freq, policy->cpuinfo.min_freq); 299 + 300 + return 0; 301 + } 302 + 303 + static int intel_pstate_exit_perf_limits(struct cpufreq_policy *policy) 304 + { 305 + struct cpudata *cpu; 306 + 307 + if (!no_acpi_perf) 308 + return 0; 309 + 310 + cpu = all_cpu_data[policy->cpu]; 311 + acpi_processor_unregister_performance(policy->cpu); 312 + return 0; 313 + } 314 + 315 + #else 316 + static int intel_pstate_init_perf_limits(struct cpufreq_policy *policy) 317 + { 318 + return 0; 319 + } 320 + 321 + static int intel_pstate_exit_perf_limits(struct cpufreq_policy *policy) 322 + { 323 + return 0; 324 + } 325 + #endif 195 326 196 327 static inline void pid_reset(struct _pid *pid, int setpoint, int busy, 197 328 int deadband, int integral) { ··· 434 255 435 256 cpu = all_cpu_data[0]; 436 257 rdmsrl(MSR_IA32_MISC_ENABLE, misc_en); 437 - limits.turbo_disabled = 258 + limits->turbo_disabled = 438 259 (misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE || 439 260 cpu->pstate.max_pstate == cpu->pstate.turbo_pstate); 440 261 } ··· 453 274 454 275 for_each_online_cpu(cpu) { 455 276 rdmsrl_on_cpu(cpu, MSR_HWP_REQUEST, &value); 456 - adj_range = limits.min_perf_pct * range / 100; 277 + adj_range = limits->min_perf_pct * range / 100; 457 278 min = hw_min + 
adj_range; 458 279 value &= ~HWP_MIN_PERF(~0L); 459 280 value |= HWP_MIN_PERF(min); 460 281 461 - adj_range = limits.max_perf_pct * range / 100; 282 + adj_range = limits->max_perf_pct * range / 100; 462 283 max = hw_min + adj_range; 463 - if (limits.no_turbo) { 284 + if (limits->no_turbo) { 464 285 hw_max = HWP_GUARANTEED_PERF(cap); 465 286 if (hw_max < max) 466 287 max = hw_max; ··· 529 350 static ssize_t show_##file_name \ 530 351 (struct kobject *kobj, struct attribute *attr, char *buf) \ 531 352 { \ 532 - return sprintf(buf, "%u\n", limits.object); \ 353 + return sprintf(buf, "%u\n", limits->object); \ 533 354 } 534 355 535 356 static ssize_t show_turbo_pct(struct kobject *kobj, ··· 565 386 ssize_t ret; 566 387 567 388 update_turbo_state(); 568 - if (limits.turbo_disabled) 569 - ret = sprintf(buf, "%u\n", limits.turbo_disabled); 389 + if (limits->turbo_disabled) 390 + ret = sprintf(buf, "%u\n", limits->turbo_disabled); 570 391 else 571 - ret = sprintf(buf, "%u\n", limits.no_turbo); 392 + ret = sprintf(buf, "%u\n", limits->no_turbo); 572 393 573 394 return ret; 574 395 } ··· 584 405 return -EINVAL; 585 406 586 407 update_turbo_state(); 587 - if (limits.turbo_disabled) { 408 + if (limits->turbo_disabled) { 588 409 pr_warn("intel_pstate: Turbo disabled by BIOS or unavailable on processor\n"); 589 410 return -EPERM; 590 411 } 591 412 592 - limits.no_turbo = clamp_t(int, input, 0, 1); 413 + limits->no_turbo = clamp_t(int, input, 0, 1); 593 414 594 415 if (hwp_active) 595 416 intel_pstate_hwp_set(); ··· 607 428 if (ret != 1) 608 429 return -EINVAL; 609 430 610 - limits.max_sysfs_pct = clamp_t(int, input, 0 , 100); 611 - limits.max_perf_pct = min(limits.max_policy_pct, limits.max_sysfs_pct); 612 - limits.max_perf_pct = max(limits.min_policy_pct, limits.max_perf_pct); 613 - limits.max_perf_pct = max(limits.min_perf_pct, limits.max_perf_pct); 614 - limits.max_perf = div_fp(int_tofp(limits.max_perf_pct), int_tofp(100)); 431 + limits->max_sysfs_pct = clamp_t(int, input, 0 
, 100); 432 + limits->max_perf_pct = min(limits->max_policy_pct, 433 + limits->max_sysfs_pct); 434 + limits->max_perf_pct = max(limits->min_policy_pct, 435 + limits->max_perf_pct); 436 + limits->max_perf_pct = max(limits->min_perf_pct, 437 + limits->max_perf_pct); 438 + limits->max_perf = div_fp(int_tofp(limits->max_perf_pct), 439 + int_tofp(100)); 615 440 616 441 if (hwp_active) 617 442 intel_pstate_hwp_set(); ··· 632 449 if (ret != 1) 633 450 return -EINVAL; 634 451 635 - limits.min_sysfs_pct = clamp_t(int, input, 0 , 100); 636 - limits.min_perf_pct = max(limits.min_policy_pct, limits.min_sysfs_pct); 637 - limits.min_perf_pct = min(limits.max_policy_pct, limits.min_perf_pct); 638 - limits.min_perf_pct = min(limits.max_perf_pct, limits.min_perf_pct); 639 - limits.min_perf = div_fp(int_tofp(limits.min_perf_pct), int_tofp(100)); 452 + limits->min_sysfs_pct = clamp_t(int, input, 0 , 100); 453 + limits->min_perf_pct = max(limits->min_policy_pct, 454 + limits->min_sysfs_pct); 455 + limits->min_perf_pct = min(limits->max_policy_pct, 456 + limits->min_perf_pct); 457 + limits->min_perf_pct = min(limits->max_perf_pct, 458 + limits->min_perf_pct); 459 + limits->min_perf = div_fp(int_tofp(limits->min_perf_pct), 460 + int_tofp(100)); 640 461 641 462 if (hwp_active) 642 463 intel_pstate_hwp_set(); ··· 720 533 u32 vid; 721 534 722 535 val = (u64)pstate << 8; 723 - if (limits.no_turbo && !limits.turbo_disabled) 536 + if (limits->no_turbo && !limits->turbo_disabled) 724 537 val |= (u64)1 << 32; 725 538 726 539 vid_fp = cpudata->vid.min + mul_fp( ··· 778 591 return (value >> 40) & 0xFF; 779 592 } 780 593 781 - static int core_get_max_pstate(void) 594 + static int core_get_max_pstate_physical(void) 782 595 { 783 596 u64 value; 784 597 785 598 rdmsrl(MSR_PLATFORM_INFO, value); 786 599 return (value >> 8) & 0xFF; 600 + } 601 + 602 + static int core_get_max_pstate(void) 603 + { 604 + u64 tar; 605 + u64 plat_info; 606 + int max_pstate; 607 + int err; 608 + 609 + 
rdmsrl(MSR_PLATFORM_INFO, plat_info); 610 + max_pstate = (plat_info >> 8) & 0xFF; 611 + 612 + err = rdmsrl_safe(MSR_TURBO_ACTIVATION_RATIO, &tar); 613 + if (!err) { 614 + /* Do some sanity checking for safety */ 615 + if (plat_info & 0x600000000) { 616 + u64 tdp_ctrl; 617 + u64 tdp_ratio; 618 + int tdp_msr; 619 + 620 + err = rdmsrl_safe(MSR_CONFIG_TDP_CONTROL, &tdp_ctrl); 621 + if (err) 622 + goto skip_tar; 623 + 624 + tdp_msr = MSR_CONFIG_TDP_NOMINAL + tdp_ctrl; 625 + err = rdmsrl_safe(tdp_msr, &tdp_ratio); 626 + if (err) 627 + goto skip_tar; 628 + 629 + if (tdp_ratio - 1 == tar) { 630 + max_pstate = tar; 631 + pr_debug("max_pstate=TAC %x\n", max_pstate); 632 + } else { 633 + goto skip_tar; 634 + } 635 + } 636 + } 637 + 638 + skip_tar: 639 + return max_pstate; 787 640 } 788 641 789 642 static int core_get_turbo_pstate(void) ··· 849 622 u64 val; 850 623 851 624 val = (u64)pstate << 8; 852 - if (limits.no_turbo && !limits.turbo_disabled) 625 + if (limits->no_turbo && !limits->turbo_disabled) 853 626 val |= (u64)1 << 32; 854 627 855 628 wrmsrl_on_cpu(cpudata->cpu, MSR_IA32_PERF_CTL, val); ··· 879 652 }, 880 653 .funcs = { 881 654 .get_max = core_get_max_pstate, 655 + .get_max_physical = core_get_max_pstate_physical, 882 656 .get_min = core_get_min_pstate, 883 657 .get_turbo = core_get_turbo_pstate, 884 658 .get_scaling = core_get_scaling, ··· 898 670 }, 899 671 .funcs = { 900 672 .get_max = byt_get_max_pstate, 673 + .get_max_physical = byt_get_max_pstate, 901 674 .get_min = byt_get_min_pstate, 902 675 .get_turbo = byt_get_turbo_pstate, 903 676 .set = byt_set_pstate, ··· 918 689 }, 919 690 .funcs = { 920 691 .get_max = core_get_max_pstate, 692 + .get_max_physical = core_get_max_pstate_physical, 921 693 .get_min = core_get_min_pstate, 922 694 .get_turbo = knl_get_turbo_pstate, 923 695 .get_scaling = core_get_scaling, ··· 932 702 int max_perf_adj; 933 703 int min_perf; 934 704 935 - if (limits.no_turbo || limits.turbo_disabled) 705 + if (limits->no_turbo || 
 		limits->turbo_disabled)
 		max_perf = cpu->pstate.max_pstate;
 
 	/*
···
 	 * policy, or by cpu specific default values determined through
 	 * experimentation.
 	 */
-	max_perf_adj = fp_toint(mul_fp(int_tofp(max_perf), limits.max_perf));
-	*max = clamp_t(int, max_perf_adj,
-			cpu->pstate.min_pstate, cpu->pstate.turbo_pstate);
+	if (limits->max_perf_ctl && limits->max_sysfs_pct >=
+	    limits->max_policy_pct) {
+		*max = limits->max_perf_ctl;
+	} else {
+		max_perf_adj = fp_toint(mul_fp(int_tofp(max_perf),
+					limits->max_perf));
+		*max = clamp_t(int, max_perf_adj, cpu->pstate.min_pstate,
+			       cpu->pstate.turbo_pstate);
+	}
 
-	min_perf = fp_toint(mul_fp(int_tofp(max_perf), limits.min_perf));
-	*min = clamp_t(int, min_perf, cpu->pstate.min_pstate, max_perf);
+	if (limits->min_perf_ctl) {
+		*min = limits->min_perf_ctl;
+	} else {
+		min_perf = fp_toint(mul_fp(int_tofp(max_perf),
+					limits->min_perf));
+		*min = clamp_t(int, min_perf, cpu->pstate.min_pstate, max_perf);
+	}
 }
 
 static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate, bool force)
···
 {
 	cpu->pstate.min_pstate = pstate_funcs.get_min();
 	cpu->pstate.max_pstate = pstate_funcs.get_max();
+	cpu->pstate.max_pstate_physical = pstate_funcs.get_max_physical();
 	cpu->pstate.turbo_pstate = pstate_funcs.get_turbo();
 	cpu->pstate.scaling = pstate_funcs.get_scaling();
 
···
 
 	sample->freq = fp_toint(
 		mul_fp(int_tofp(
-			cpu->pstate.max_pstate * cpu->pstate.scaling / 100),
+			cpu->pstate.max_pstate_physical *
+			cpu->pstate.scaling / 100),
 			core_pct));
 
 	sample->core_pct_busy = (int32_t)core_pct;
···
 	 * specified pstate.
 	 */
 	core_busy = cpu->sample.core_pct_busy;
-	max_pstate = int_tofp(cpu->pstate.max_pstate);
+	max_pstate = int_tofp(cpu->pstate.max_pstate_physical);
 	current_pstate = int_tofp(cpu->pstate.current_pstate);
 	core_busy = mul_fp(core_busy, div_fp(max_pstate, current_pstate));
 
···
 
 static int intel_pstate_set_policy(struct cpufreq_policy *policy)
 {
+#if IS_ENABLED(CONFIG_ACPI)
+	struct cpudata *cpu;
+	int i;
+#endif
+	pr_debug("intel_pstate: %s max %u policy->max %u\n", __func__,
+		 policy->cpuinfo.max_freq, policy->max);
 	if (!policy->cpuinfo.max_freq)
 		return -ENODEV;
 
 	if (policy->policy == CPUFREQ_POLICY_PERFORMANCE &&
 	    policy->max >= policy->cpuinfo.max_freq) {
-		limits.min_policy_pct = 100;
-		limits.min_perf_pct = 100;
-		limits.min_perf = int_tofp(1);
-		limits.max_policy_pct = 100;
-		limits.max_perf_pct = 100;
-		limits.max_perf = int_tofp(1);
-		limits.no_turbo = 0;
+		pr_debug("intel_pstate: set performance\n");
+		limits = &performance_limits;
 		return 0;
 	}
 
-	limits.min_policy_pct = (policy->min * 100) / policy->cpuinfo.max_freq;
-	limits.min_policy_pct = clamp_t(int, limits.min_policy_pct, 0 , 100);
-	limits.max_policy_pct = (policy->max * 100) / policy->cpuinfo.max_freq;
-	limits.max_policy_pct = clamp_t(int, limits.max_policy_pct, 0 , 100);
+	pr_debug("intel_pstate: set powersave\n");
+	limits = &powersave_limits;
+	limits->min_policy_pct = (policy->min * 100) / policy->cpuinfo.max_freq;
+	limits->min_policy_pct = clamp_t(int, limits->min_policy_pct, 0 , 100);
+	limits->max_policy_pct = (policy->max * 100) / policy->cpuinfo.max_freq;
+	limits->max_policy_pct = clamp_t(int, limits->max_policy_pct, 0 , 100);
 
 	/* Normalize user input to [min_policy_pct, max_policy_pct] */
-	limits.min_perf_pct = max(limits.min_policy_pct, limits.min_sysfs_pct);
-	limits.min_perf_pct = min(limits.max_policy_pct, limits.min_perf_pct);
-	limits.max_perf_pct = min(limits.max_policy_pct, limits.max_sysfs_pct);
-	limits.max_perf_pct = max(limits.min_policy_pct, limits.max_perf_pct);
+	limits->min_perf_pct = max(limits->min_policy_pct,
+				   limits->min_sysfs_pct);
+	limits->min_perf_pct = min(limits->max_policy_pct,
+				   limits->min_perf_pct);
+	limits->max_perf_pct = min(limits->max_policy_pct,
+				   limits->max_sysfs_pct);
+	limits->max_perf_pct = max(limits->min_policy_pct,
+				   limits->max_perf_pct);
 
 	/* Make sure min_perf_pct <= max_perf_pct */
-	limits.min_perf_pct = min(limits.max_perf_pct, limits.min_perf_pct);
+	limits->min_perf_pct = min(limits->max_perf_pct, limits->min_perf_pct);
 
-	limits.min_perf = div_fp(int_tofp(limits.min_perf_pct), int_tofp(100));
-	limits.max_perf = div_fp(int_tofp(limits.max_perf_pct), int_tofp(100));
+	limits->min_perf = div_fp(int_tofp(limits->min_perf_pct),
+				  int_tofp(100));
+	limits->max_perf = div_fp(int_tofp(limits->max_perf_pct),
+				  int_tofp(100));
+
+#if IS_ENABLED(CONFIG_ACPI)
+	cpu = all_cpu_data[policy->cpu];
+	for (i = 0; i < cpu->acpi_perf_data.state_count; i++) {
+		int control;
+
+		control = convert_to_native_pstate_format(cpu, i);
+		if (control * cpu->pstate.scaling == policy->max)
+			limits->max_perf_ctl = control;
+		if (control * cpu->pstate.scaling == policy->min)
+			limits->min_perf_ctl = control;
+	}
+
+	pr_debug("intel_pstate: max %u policy_max %u perf_ctl [0x%x-0x%x]\n",
+		 policy->cpuinfo.max_freq, policy->max, limits->min_perf_ctl,
+		 limits->max_perf_ctl);
+#endif
 
 	if (hwp_active)
 		intel_pstate_hwp_set();
···
 
 	cpu = all_cpu_data[policy->cpu];
 
-	if (limits.min_perf_pct == 100 && limits.max_perf_pct == 100)
+	if (limits->min_perf_pct == 100 && limits->max_perf_pct == 100)
 		policy->policy = CPUFREQ_POLICY_PERFORMANCE;
 	else
 		policy->policy = CPUFREQ_POLICY_POWERSAVE;
···
 	policy->cpuinfo.min_freq = cpu->pstate.min_pstate * cpu->pstate.scaling;
 	policy->cpuinfo.max_freq =
 		cpu->pstate.turbo_pstate * cpu->pstate.scaling;
+	if (!no_acpi_perf)
+		intel_pstate_init_perf_limits(policy);
+	/*
+	 * If there is no acpi perf data or error, we ignore and use Intel P
+	 * state calculated limits, So this is not fatal error.
+	 */
 	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
 	cpumask_set_cpu(policy->cpu, policy->cpus);
 
 	return 0;
+}
+
+static int intel_pstate_cpu_exit(struct cpufreq_policy *policy)
+{
+	return intel_pstate_exit_perf_limits(policy);
 }
 
 static struct cpufreq_driver intel_pstate_driver = {
···
 	.setpolicy = intel_pstate_set_policy,
 	.get = intel_pstate_get,
 	.init = intel_pstate_cpu_init,
+	.exit = intel_pstate_cpu_exit,
 	.stop_cpu = intel_pstate_stop_cpu,
 	.name = "intel_pstate",
 };
···
 static void copy_cpu_funcs(struct pstate_funcs *funcs)
 {
 	pstate_funcs.get_max = funcs->get_max;
+	pstate_funcs.get_max_physical = funcs->get_max_physical;
 	pstate_funcs.get_min = funcs->get_min;
 	pstate_funcs.get_turbo = funcs->get_turbo;
 	pstate_funcs.get_scaling = funcs->get_scaling;
···
 }
 
 #if IS_ENABLED(CONFIG_ACPI)
-#include <acpi/processor.h>
 
 static bool intel_pstate_no_acpi_pss(void)
 {
···
 		force_load = 1;
 	if (!strcmp(str, "hwp_only"))
 		hwp_only = 1;
+	if (!strcmp(str, "no_acpi"))
+		no_acpi_perf = 1;
+
 	return 0;
 }
 early_param("intel_pstate", intel_pstate_setup);
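The percentage-to-pstate math that intel_pstate keeps in `limits->min_perf`/`limits->max_perf` is fixed-point arithmetic. The following is a minimal user-space sketch of that scaling-and-clamping step, assuming the driver's 8-bit fractional format (`FRAC_BITS`); the helper name `scaled_max_pstate` and `clamp_int` are illustrative, not kernel symbols.

```c
#include <assert.h>
#include <stdint.h>

#define FRAC_BITS 8
#define int_tofp(X) ((int64_t)(X) << FRAC_BITS)
#define fp_toint(X) ((X) >> FRAC_BITS)

static int64_t mul_fp(int64_t x, int64_t y) { return (x * y) >> FRAC_BITS; }
static int64_t div_fp(int64_t x, int64_t y) { return (x << FRAC_BITS) / y; }

static int clamp_int(int val, int lo, int hi)
{
	return val < lo ? lo : (val > hi ? hi : val);
}

/* Mimic the else-branch of the min/max computation above: convert a
 * percentage to a fixed-point fraction, scale max_perf by it, then clamp
 * the result into [min_pstate, turbo_pstate]. */
static int scaled_max_pstate(int max_perf, int max_perf_pct,
			     int min_pstate, int turbo_pstate)
{
	int64_t max_perf_frac = div_fp(int_tofp(max_perf_pct), int_tofp(100));
	int adj = fp_toint(mul_fp(int_tofp(max_perf), max_perf_frac));

	return clamp_int(adj, min_pstate, turbo_pstate);
}
```

For example, with a 3.2 GHz part (`max_perf` = 32) limited to 50%, this yields pstate 16, and the clamp keeps the result from dropping below the hardware minimum.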
+3 -3
drivers/cpufreq/mt8173-cpufreq.c
···
 	/* Both presence and absence of sram regulator are valid cases. */
 	sram_reg = regulator_get_exclusive(cpu_dev, "sram");
 
-	ret = of_init_opp_table(cpu_dev);
+	ret = dev_pm_opp_of_add_table(cpu_dev);
 	if (ret) {
 		pr_warn("no OPP table for cpu%d\n", cpu);
 		goto out_free_resources;
···
 	return 0;
 
 out_free_opp_table:
-	of_free_opp_table(cpu_dev);
+	dev_pm_opp_of_remove_table(cpu_dev);
 
 out_free_resources:
 	if (!IS_ERR(proc_reg))
···
 	if (!IS_ERR(info->inter_clk))
 		clk_put(info->inter_clk);
 
-	of_free_opp_table(info->cpu_dev);
+	dev_pm_opp_of_remove_table(info->cpu_dev);
 }
 
 static int mtk_cpufreq_init(struct cpufreq_policy *policy)
+8 -2
drivers/cpufreq/powernv-cpufreq.c
···
 		if (chips[i].throttled)
 			goto next;
 		chips[i].throttled = true;
-		pr_info("CPU %d on Chip %u has Pmax reduced to %d\n", cpu,
-			chips[i].id, pmsr_pmax);
+		if (pmsr_pmax < powernv_pstate_info.nominal)
+			pr_crit("CPU %d on Chip %u has Pmax reduced below nominal frequency (%d < %d)\n",
+				cpu, chips[i].id, pmsr_pmax,
+				powernv_pstate_info.nominal);
+		else
+			pr_info("CPU %d on Chip %u has Pmax reduced below turbo frequency (%d < %d)\n",
+				cpu, chips[i].id, pmsr_pmax,
+				powernv_pstate_info.max);
 	} else if (chips[i].throttled) {
 		chips[i].throttled = false;
 		pr_info("CPU %d on Chip %u has Pmax restored to %d\n", cpu,
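The change above splits one log message into two severities. A hypothetical helper capturing that decision (throttling below nominal is critical, between nominal and turbo merely informational) could look like this; the enum and function names are illustrative only:

```c
#include <assert.h>

/* Sketch of the severity choice powernv-cpufreq now makes when OCC
 * throttling lowers Pmax. */
enum throttle_severity { THROTTLE_INFO, THROTTLE_CRIT };

static enum throttle_severity pmax_severity(int pmsr_pmax, int nominal)
{
	/* Below nominal: performance guarantee broken -> critical. */
	return pmsr_pmax < nominal ? THROTTLE_CRIT : THROTTLE_INFO;
}
```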
-2
drivers/cpufreq/tegra20-cpufreq.c
···
 	.exit = tegra_cpu_exit,
 	.name = "tegra",
 	.attr = cpufreq_generic_attr,
-#ifdef CONFIG_PM
 	.suspend = cpufreq_generic_suspend,
-#endif
 };
 
 static int __init tegra_cpufreq_init(void)
+24 -28
drivers/cpuidle/cpuidle-mvebu-v7.c
···
 
 static int mvebu_v7_cpuidle_probe(struct platform_device *pdev)
 {
+	const struct platform_device_id *id = pdev->id_entry;
+
+	if (!id)
+		return -EINVAL;
+
 	mvebu_v7_cpu_suspend = pdev->dev.platform_data;
 
-	if (!strcmp(pdev->dev.driver->name, "cpuidle-armada-xp"))
-		return cpuidle_register(&armadaxp_idle_driver, NULL);
-	else if (!strcmp(pdev->dev.driver->name, "cpuidle-armada-370"))
-		return cpuidle_register(&armada370_idle_driver, NULL);
-	else if (!strcmp(pdev->dev.driver->name, "cpuidle-armada-38x"))
-		return cpuidle_register(&armada38x_idle_driver, NULL);
-	else
-		return -EINVAL;
+	return cpuidle_register((struct cpuidle_driver *)id->driver_data, NULL);
 }
 
-static struct platform_driver armadaxp_cpuidle_plat_driver = {
-	.driver = {
+static const struct platform_device_id mvebu_cpuidle_ids[] = {
+	{
 		.name = "cpuidle-armada-xp",
-	},
-	.probe = mvebu_v7_cpuidle_probe,
-};
-
-module_platform_driver(armadaxp_cpuidle_plat_driver);
-
-static struct platform_driver armada370_cpuidle_plat_driver = {
-	.driver = {
+		.driver_data = (unsigned long)&armadaxp_idle_driver,
+	}, {
 		.name = "cpuidle-armada-370",
-	},
-	.probe = mvebu_v7_cpuidle_probe,
-};
-
-module_platform_driver(armada370_cpuidle_plat_driver);
-
-static struct platform_driver armada38x_cpuidle_plat_driver = {
-	.driver = {
+		.driver_data = (unsigned long)&armada370_idle_driver,
+	}, {
 		.name = "cpuidle-armada-38x",
+		.driver_data = (unsigned long)&armada38x_idle_driver,
 	},
-	.probe = mvebu_v7_cpuidle_probe,
+	{}
 };
 
-module_platform_driver(armada38x_cpuidle_plat_driver);
+static struct platform_driver mvebu_cpuidle_driver = {
+	.probe = mvebu_v7_cpuidle_probe,
+	.driver = {
+		.name = "cpuidle-mbevu",
+		.suppress_bind_attrs = true,
+	},
+	.id_table = mvebu_cpuidle_ids,
+};
+
+builtin_platform_driver(mvebu_cpuidle_driver);
 
 MODULE_AUTHOR("Gregory CLEMENT <gregory.clement@free-electrons.com>");
 MODULE_DESCRIPTION("Marvell EBU v7 cpuidle driver");
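The refactoring above collapses three near-identical platform drivers into one by moving the per-variant payload into an id table's `driver_data`. A self-contained sketch of that matching pattern (names here are illustrative, not the kernel's; the bus-side loop stands in for what the platform bus does before calling probe):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct sketch_device_id {
	const char *name;
	unsigned long driver_data;	/* per-variant payload */
};

/* Stand-ins for the three per-SoC cpuidle driver objects. */
static int xp_driver, a370_driver, a38x_driver;

static const struct sketch_device_id sketch_ids[] = {
	{ "cpuidle-armada-xp",  (unsigned long)&xp_driver },
	{ "cpuidle-armada-370", (unsigned long)&a370_driver },
	{ "cpuidle-armada-38x", (unsigned long)&a38x_driver },
	{ NULL, 0 }	/* sentinel, like the {} terminator above */
};

/* What the bus does on the driver's behalf: match the device name against
 * the table and hand the matching entry to probe. */
static const struct sketch_device_id *sketch_match(const char *devname)
{
	const struct sketch_device_id *id;

	for (id = sketch_ids; id->name; id++)
		if (!strcmp(id->name, devname))
			return id;
	return NULL;
}
```

Probe then casts `driver_data` back to the variant-specific object, which is exactly what the single `mvebu_v7_cpuidle_probe()` now does.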
+16 -7
drivers/dma/acpi-dma.c
···
 #include <linux/ioport.h>
 #include <linux/acpi.h>
 #include <linux/acpi_dma.h>
+#include <linux/property.h>
 
 static LIST_HEAD(acpi_dma_list);
 static DEFINE_MUTEX(acpi_dma_lock);
···
 * translate the names "tx" and "rx" here based on the most common case where
 * the first FixedDMA descriptor is TX and second is RX.
 *
+ * If the device has "dma-names" property the FixedDMA descriptor indices
+ * are retrieved based on those. Otherwise the function falls back using
+ * hardcoded indices.
+ *
 * Return:
 * Pointer to appropriate dma channel on success or an error pointer.
 */
struct dma_chan *acpi_dma_request_slave_chan_by_name(struct device *dev,
		const char *name)
{
-	size_t index;
+	int index;
 
-	if (!strcmp(name, "tx"))
-		index = 0;
-	else if (!strcmp(name, "rx"))
-		index = 1;
-	else
-		return ERR_PTR(-ENODEV);
+	index = device_property_match_string(dev, "dma-names", name);
+	if (index < 0) {
+		if (!strcmp(name, "tx"))
+			index = 0;
+		else if (!strcmp(name, "rx"))
+			index = 1;
+		else
+			return ERR_PTR(-ENODEV);
+	}
 
+	dev_dbg(dev, "found DMA channel \"%s\" at index %d\n", name, index);
	return acpi_dma_request_slave_chan_by_index(dev, index);
}
EXPORT_SYMBOL_GPL(acpi_dma_request_slave_chan_by_name);
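The lookup order introduced above, property first with a hardcoded fallback, can be sketched in isolation. Here the result of the `"dma-names"` property match is modelled as a precomputed `prop_index` (negative when the property is absent or has no match); the function name is illustrative:

```c
#include <assert.h>
#include <string.h>

/* Sketch of acpi_dma_request_slave_chan_by_name()'s index selection:
 * prefer the "dma-names" property match, fall back to the historical
 * tx=0 / rx=1 convention for FixedDMA descriptors. */
static int fixed_dma_index(int prop_index, const char *name)
{
	if (prop_index >= 0)
		return prop_index;	/* property-driven index wins */
	if (!strcmp(name, "tx"))
		return 0;
	if (!strcmp(name, "rx"))
		return 1;
	return -1;	/* would become ERR_PTR(-ENODEV) in the driver */
}
```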
+110 -45
drivers/gpio/gpiolib-acpi.c
···
 	struct acpi_gpio_info info;
 	int index;
 	int pin_index;
+	bool active_low;
+	struct acpi_device *adev;
 	struct gpio_desc *desc;
 	int n;
 };
···
 	return 1;
 }
 
+static int acpi_gpio_resource_lookup(struct acpi_gpio_lookup *lookup,
+				     struct acpi_gpio_info *info)
+{
+	struct list_head res_list;
+	int ret;
+
+	INIT_LIST_HEAD(&res_list);
+
+	ret = acpi_dev_get_resources(lookup->adev, &res_list, acpi_find_gpio,
+				     lookup);
+	if (ret < 0)
+		return ret;
+
+	acpi_dev_free_resource_list(&res_list);
+
+	if (!lookup->desc)
+		return -ENOENT;
+
+	if (info) {
+		*info = lookup->info;
+		if (lookup->active_low)
+			info->active_low = lookup->active_low;
+	}
+	return 0;
+}
+
+static int acpi_gpio_property_lookup(struct fwnode_handle *fwnode,
+				     const char *propname, int index,
+				     struct acpi_gpio_lookup *lookup)
+{
+	struct acpi_reference_args args;
+	int ret;
+
+	memset(&args, 0, sizeof(args));
+	ret = acpi_node_get_property_reference(fwnode, propname, index, &args);
+	if (ret) {
+		struct acpi_device *adev = to_acpi_device_node(fwnode);
+
+		if (!adev)
+			return ret;
+
+		if (!acpi_get_driver_gpio_data(adev, propname, index, &args))
+			return ret;
+	}
+	/*
+	 * The property was found and resolved, so need to lookup the GPIO based
+	 * on returned args.
+	 */
+	lookup->adev = args.adev;
+	if (args.nargs >= 2) {
+		lookup->index = args.args[0];
+		lookup->pin_index = args.args[1];
+		/* 3rd argument, if present is used to specify active_low. */
+		if (args.nargs >= 3)
+			lookup->active_low = !!args.args[2];
+	}
+	return 0;
+}
+
 /**
  * acpi_get_gpiod_by_index() - get a GPIO descriptor from device resources
  * @adev: pointer to a ACPI device to get GPIO from
···
 			  struct acpi_gpio_info *info)
 {
 	struct acpi_gpio_lookup lookup;
-	struct list_head resource_list;
-	bool active_low = false;
 	int ret;
 
 	if (!adev)
···
 	lookup.index = index;
 
 	if (propname) {
-		struct acpi_reference_args args;
-
 		dev_dbg(&adev->dev, "GPIO: looking up %s\n", propname);
 
-		memset(&args, 0, sizeof(args));
-		ret = acpi_dev_get_property_reference(adev, propname,
-						      index, &args);
-		if (ret) {
-			bool found = acpi_get_driver_gpio_data(adev, propname,
-							       index, &args);
-			if (!found)
-				return ERR_PTR(ret);
-		}
+		ret = acpi_gpio_property_lookup(acpi_fwnode_handle(adev),
+						propname, index, &lookup);
+		if (ret)
+			return ERR_PTR(ret);
 
-		/*
-		 * The property was found and resolved so need to
-		 * lookup the GPIO based on returned args instead.
-		 */
-		adev = args.adev;
-		if (args.nargs >= 2) {
-			lookup.index = args.args[0];
-			lookup.pin_index = args.args[1];
-			/*
-			 * 3rd argument, if present is used to
-			 * specify active_low.
-			 */
-			if (args.nargs >= 3)
-				active_low = !!args.args[2];
-		}
-
-		dev_dbg(&adev->dev, "GPIO: _DSD returned %s %zd %llu %llu %llu\n",
-			dev_name(&adev->dev), args.nargs,
-			args.args[0], args.args[1], args.args[2]);
+		dev_dbg(&adev->dev, "GPIO: _DSD returned %s %d %d %u\n",
+			dev_name(&lookup.adev->dev), lookup.index,
+			lookup.pin_index, lookup.active_low);
 	} else {
 		dev_dbg(&adev->dev, "GPIO: looking up %d in _CRS\n", index);
+		lookup.adev = adev;
 	}
 
-	INIT_LIST_HEAD(&resource_list);
-	ret = acpi_dev_get_resources(adev, &resource_list, acpi_find_gpio,
-				     &lookup);
-	if (ret < 0)
+	ret = acpi_gpio_resource_lookup(&lookup, info);
+	return ret ? ERR_PTR(ret) : lookup.desc;
+}
+
+/**
+ * acpi_node_get_gpiod() - get a GPIO descriptor from ACPI resources
+ * @fwnode: pointer to an ACPI firmware node to get the GPIO information from
+ * @propname: Property name of the GPIO
+ * @index: index of GpioIo/GpioInt resource (starting from %0)
+ * @info: info pointer to fill in (optional)
+ *
+ * If @fwnode is an ACPI device object, call %acpi_get_gpiod_by_index() for it.
+ * Otherwise (ie. it is a data-only non-device object), use the property-based
+ * GPIO lookup to get to the GPIO resource with the relevant information and use
+ * that to obtain the GPIO descriptor to return.
+ */
+struct gpio_desc *acpi_node_get_gpiod(struct fwnode_handle *fwnode,
+				      const char *propname, int index,
+				      struct acpi_gpio_info *info)
+{
+	struct acpi_gpio_lookup lookup;
+	struct acpi_device *adev;
+	int ret;
+
+	adev = to_acpi_device_node(fwnode);
+	if (adev)
+		return acpi_get_gpiod_by_index(adev, propname, index, info);
+
+	if (!is_acpi_data_node(fwnode))
+		return ERR_PTR(-ENODEV);
+
+	if (!propname)
+		return ERR_PTR(-EINVAL);
+
+	memset(&lookup, 0, sizeof(lookup));
+	lookup.index = index;
+
+	ret = acpi_gpio_property_lookup(fwnode, propname, index, &lookup);
+	if (ret)
 		return ERR_PTR(ret);
 
-	acpi_dev_free_resource_list(&resource_list);
-
-	if (lookup.desc && info) {
-		*info = lookup.info;
-		if (active_low)
-			info->active_low = active_low;
-	}
-
-	return lookup.desc ? lookup.desc : ERR_PTR(-ENOENT);
+	ret = acpi_gpio_resource_lookup(&lookup, info);
+	return ret ? ERR_PTR(ret) : lookup.desc;
 }
 
 /**
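The heart of the new `acpi_gpio_property_lookup()` is how it unpacks a resolved `_DSD` GPIO property reference: argument 0 selects the GpioIo/GpioInt resource, argument 1 the pin within it, and an optional argument 2 overrides active-low. A minimal sketch of that unpacking, with illustrative struct and function names:

```c
#include <assert.h>

/* Sketch of the _DSD GPIO reference argument layout used above:
 * args[0] = resource index, args[1] = pin index, args[2] = active_low. */
struct gpio_lookup_sketch {
	int index;
	int pin_index;
	int active_low;
};

static void unpack_dsd_gpio_args(struct gpio_lookup_sketch *l,
				 const unsigned long long *args, int nargs)
{
	if (nargs >= 2) {
		l->index = (int)args[0];
		l->pin_index = (int)args[1];
		/* 3rd argument is optional; leave the default otherwise. */
		if (nargs >= 3)
			l->active_low = !!args[2];
	}
}
```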
+1 -2
drivers/gpio/gpiolib.c
···
 	} else if (is_acpi_node(fwnode)) {
 		struct acpi_gpio_info info;
 
-		desc = acpi_get_gpiod_by_index(to_acpi_node(fwnode), propname, 0,
-					       &info);
+		desc = acpi_node_get_gpiod(fwnode, propname, 0, &info);
 		if (!IS_ERR(desc))
 			active_low = info.active_low;
 	}
+9 -1
drivers/gpio/gpiolib.h
···
 struct gpio_desc *acpi_get_gpiod_by_index(struct acpi_device *adev,
 					  const char *propname, int index,
 					  struct acpi_gpio_info *info);
+struct gpio_desc *acpi_node_get_gpiod(struct fwnode_handle *fwnode,
+				      const char *propname, int index,
+				      struct acpi_gpio_info *info);
 
 int acpi_gpio_count(struct device *dev, const char *con_id);
 #else
···
 {
 	return ERR_PTR(-ENOSYS);
 }
-
+static inline struct gpio_desc *
+acpi_node_get_gpiod(struct fwnode_handle *fwnode, const char *propname,
+		    int index, struct acpi_gpio_info *info)
+{
+	return ERR_PTR(-ENXIO);
+}
 static inline int acpi_gpio_count(struct device *dev, const char *con_id)
 {
 	return -ENODEV;
+27 -5
drivers/input/serio/i8042.c
···
 #include <linux/platform_device.h>
 #include <linux/i8042.h>
 #include <linux/slab.h>
+#include <linux/suspend.h>
 
 #include <asm/io.h>
···
 {
 	int i;
 
-	i8042_controller_reset(true);
+	if (pm_suspend_via_firmware())
+		i8042_controller_reset(true);
 
 	/* Set up serio interrupts for system wakeup. */
 	for (i = 0; i < I8042_NUM_PORTS; i++) {
···
 	return 0;
 }
 
+static int i8042_pm_resume_noirq(struct device *dev)
+{
+	if (!pm_resume_via_firmware())
+		i8042_interrupt(0, NULL);
+
+	return 0;
+}
+
 static int i8042_pm_resume(struct device *dev)
 {
+	bool force_reset;
 	int i;
 
 	for (i = 0; i < I8042_NUM_PORTS; i++) {
···
 	}
 
 	/*
-	 * On resume from S2R we always try to reset the controller
-	 * to bring it in a sane state. (In case of S2D we expect
-	 * BIOS to reset the controller for us.)
+	 * If platform firmware was not going to be involved in suspend, we did
+	 * not restore the controller state to whatever it had been at boot
+	 * time, so we do not need to do anything.
 	 */
-	return i8042_controller_resume(true);
+	if (!pm_suspend_via_firmware())
+		return 0;
+
+	/*
+	 * We only need to reset the controller if we are resuming after handing
+	 * off control to the platform firmware, otherwise we can simply restore
+	 * the mode.
+	 */
+	force_reset = pm_resume_via_firmware();
+
+	return i8042_controller_resume(force_reset);
 }
 
 static int i8042_pm_thaw(struct device *dev)
···
 
 static const struct dev_pm_ops i8042_pm_ops = {
 	.suspend = i8042_pm_suspend,
+	.resume_noirq = i8042_pm_resume_noirq,
 	.resume = i8042_pm_resume,
 	.thaw = i8042_pm_thaw,
 	.poweroff = i8042_pm_reset,
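This is the first user of the new PM-core suspend-to-RAM vs. suspend-to-idle distinction mentioned in the merge message. The decision `i8042_pm_resume()` now makes can be flattened into a small table; the enum and function names below are illustrative, not kernel symbols:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of i8042_pm_resume()'s controller handling, driven by the new
 * pm_suspend_via_firmware()/pm_resume_via_firmware() helpers. */
enum i8042_resume_action {
	I8042_SKIP,		/* suspend-to-idle: state was never touched */
	I8042_RESTORE_MODE,	/* firmware suspend, firmware-less resume */
	I8042_FORCE_RESET	/* firmware handled resume: full reset */
};

static enum i8042_resume_action
resume_action(bool suspended_via_fw, bool resumed_via_fw)
{
	if (!suspended_via_fw)
		return I8042_SKIP;
	return resumed_via_fw ? I8042_FORCE_RESET : I8042_RESTORE_MODE;
}
```

The payoff is that suspend-to-idle resume no longer pays for a controller reset the firmware never made necessary.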
+37 -36
drivers/irqchip/irq-gic.c
···
 #include <linux/irqchip.h>
 #include <linux/irqchip/chained_irq.h>
 #include <linux/irqchip/arm-gic.h>
-#include <linux/irqchip/arm-gic-acpi.h>
 
 #include <asm/cputype.h>
 #include <asm/irq.h>
···
 #endif
 
 #ifdef CONFIG_ACPI
-static phys_addr_t dist_phy_base, cpu_phy_base __initdata;
+static phys_addr_t cpu_phy_base __initdata;
 
 static int __init
 gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
···
 	return 0;
 }
 
-static int __init
-gic_acpi_parse_madt_distributor(struct acpi_subtable_header *header,
-				const unsigned long end)
+/* The things you have to do to just *count* something... */
+static int __init acpi_dummy_func(struct acpi_subtable_header *header,
+				  const unsigned long end)
 {
-	struct acpi_madt_generic_distributor *dist;
-
-	dist = (struct acpi_madt_generic_distributor *)header;
-
-	if (BAD_MADT_ENTRY(dist, end))
-		return -EINVAL;
-
-	dist_phy_base = dist->base_address;
 	return 0;
 }
 
-int __init
-gic_v2_acpi_init(struct acpi_table_header *table)
+static bool __init acpi_gic_redist_is_present(void)
 {
+	return acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_REDISTRIBUTOR,
+				     acpi_dummy_func, 0) > 0;
+}
+
+static bool __init gic_validate_dist(struct acpi_subtable_header *header,
+				     struct acpi_probe_entry *ape)
+{
+	struct acpi_madt_generic_distributor *dist;
+	dist = (struct acpi_madt_generic_distributor *)header;
+
+	return (dist->version == ape->driver_data &&
+		(dist->version != ACPI_MADT_GIC_VERSION_NONE ||
+		 !acpi_gic_redist_is_present()));
+}
+
+#define ACPI_GICV2_DIST_MEM_SIZE	(SZ_4K)
+#define ACPI_GIC_CPU_IF_MEM_SIZE	(SZ_8K)
+
+static int __init gic_v2_acpi_init(struct acpi_subtable_header *header,
+				   const unsigned long end)
+{
+	struct acpi_madt_generic_distributor *dist;
 	void __iomem *cpu_base, *dist_base;
 	struct fwnode_handle *domain_handle;
 	int count;
 
 	/* Collect CPU base addresses */
-	count = acpi_parse_entries(ACPI_SIG_MADT,
-				   sizeof(struct acpi_table_madt),
-				   gic_acpi_parse_madt_cpu, table,
-				   ACPI_MADT_TYPE_GENERIC_INTERRUPT, 0);
+	count = acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_INTERRUPT,
+				      gic_acpi_parse_madt_cpu, 0);
 	if (count <= 0) {
 		pr_err("No valid GICC entries exist\n");
-		return -EINVAL;
-	}
-
-	/*
-	 * Find distributor base address. We expect one distributor entry since
-	 * ACPI 5.1 spec neither support multi-GIC instances nor GIC cascade.
-	 */
-	count = acpi_parse_entries(ACPI_SIG_MADT,
-				   sizeof(struct acpi_table_madt),
-				   gic_acpi_parse_madt_distributor, table,
-				   ACPI_MADT_TYPE_GENERIC_DISTRIBUTOR, 0);
-	if (count <= 0) {
-		pr_err("No valid GICD entries exist\n");
-		return -EINVAL;
-	} else if (count > 1) {
-		pr_err("More than one GICD entry detected\n");
 		return -EINVAL;
 	}
 
···
 		return -ENOMEM;
 	}
 
-	dist_base = ioremap(dist_phy_base, ACPI_GICV2_DIST_MEM_SIZE);
+	dist = (struct acpi_madt_generic_distributor *)header;
+	dist_base = ioremap(dist->base_address, ACPI_GICV2_DIST_MEM_SIZE);
 	if (!dist_base) {
 		pr_err("Unable to map GICD registers\n");
 		iounmap(cpu_base);
···
 	acpi_set_irq_model(ACPI_IRQ_MODEL_GIC, domain_handle);
 	return 0;
 }
+IRQCHIP_ACPI_DECLARE(gic_v2, ACPI_MADT_TYPE_GENERIC_DISTRIBUTOR,
+		     gic_validate_dist, ACPI_MADT_GIC_VERSION_V2,
+		     gic_v2_acpi_init);
+IRQCHIP_ACPI_DECLARE(gic_v2_maybe, ACPI_MADT_TYPE_GENERIC_DISTRIBUTOR,
+		     gic_validate_dist, ACPI_MADT_GIC_VERSION_NONE,
+		     gic_v2_acpi_init);
#endif
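The interesting part of this conversion to the new ACPI early-probe infrastructure is the `gic_validate_dist()` predicate: a MADT distributor entry matches a probe entry when the versions agree, and a `VERSION_NONE` entry is only accepted as GICv2 if no redistributor (GICR) entries exist, since redistributors imply GICv3. A self-contained sketch, with illustrative constants:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the ACPI_MADT_GIC_VERSION_* values. */
enum { GIC_VERSION_NONE = 0, GIC_VERSION_V2 = 2, GIC_VERSION_V3 = 3 };

/* Sketch of gic_validate_dist(): does this distributor entry match the
 * version a probe entry is declared for? */
static bool dist_matches(int dist_version, int wanted_version,
			 bool redist_present)
{
	return dist_version == wanted_version &&
	       (dist_version != GIC_VERSION_NONE || !redist_present);
}
```

Declaring the same init function twice (once for `V2`, once for `NONE`) lets pre-ACPI-6.0 tables that omit the version field still land on the v2 driver when the redistributor check rules out v3.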
+2 -3
drivers/irqchip/irqchip.c
···
 * warranty of any kind, whether express or implied.
 */
 
-#include <linux/acpi_irq.h>
+#include <linux/acpi.h>
 #include <linux/init.h>
 #include <linux/of_irq.h>
 #include <linux/irqchip.h>
···
 void __init irqchip_init(void)
 {
 	of_irq_init(__irqchip_of_table);
-
-	acpi_irq_init();
+	acpi_probe_device_table(irqchip);
 }
+7
drivers/pci/pci-driver.c
···
 	return pci_dev_keep_suspended(to_pci_dev(dev));
 }
 
+static void pci_pm_complete(struct device *dev)
+{
+	pci_dev_complete_resume(to_pci_dev(dev));
+	pm_complete_with_resume_check(dev);
+}
 
 #else /* !CONFIG_PM_SLEEP */
 
 #define pci_pm_prepare	NULL
+#define pci_pm_complete	NULL
 
 #endif /* !CONFIG_PM_SLEEP */
 
···
 
 static const struct dev_pm_ops pci_dev_pm_ops = {
 	.prepare = pci_pm_prepare,
+	.complete = pci_pm_complete,
 	.suspend = pci_pm_suspend,
 	.resume = pci_pm_resume,
 	.freeze = pci_pm_freeze,
+59 -11
drivers/pci/pci.c
···
 	mutex_unlock(&pci_pme_list_mutex);
 }
 
-/**
- * pci_pme_active - enable or disable PCI device's PME# function
- * @dev: PCI device to handle.
- * @enable: 'true' to enable PME# generation; 'false' to disable it.
- *
- * The caller must verify that the device is capable of generating PME# before
- * calling this function with @enable equal to 'true'.
- */
-void pci_pme_active(struct pci_dev *dev, bool enable)
+static void __pci_pme_active(struct pci_dev *dev, bool enable)
 {
 	u16 pmcsr;
 
···
 		pmcsr &= ~PCI_PM_CTRL_PME_ENABLE;
 
 	pci_write_config_word(dev, dev->pm_cap + PCI_PM_CTRL, pmcsr);
+}
+
+/**
+ * pci_pme_active - enable or disable PCI device's PME# function
+ * @dev: PCI device to handle.
+ * @enable: 'true' to enable PME# generation; 'false' to disable it.
+ *
+ * The caller must verify that the device is capable of generating PME# before
+ * calling this function with @enable equal to 'true'.
+ */
+void pci_pme_active(struct pci_dev *dev, bool enable)
+{
+	__pci_pme_active(dev, enable);
 
 	/*
 	 * PCI (as opposed to PCIe) PME requires that the device have
···
 * reconfigured due to wakeup settings difference between system and runtime
 * suspend and the current power state of it is suitable for the upcoming
 * (system) transition.
+ *
+ * If the device is not configured for system wakeup, disable PME for it before
+ * returning 'true' to prevent it from waking up the system unnecessarily.
 */
bool pci_dev_keep_suspended(struct pci_dev *pci_dev)
{
	struct device *dev = &pci_dev->dev;

	if (!pm_runtime_suspended(dev)
-	    || (device_can_wakeup(dev) && !device_may_wakeup(dev))
+	    || pci_target_state(pci_dev) != pci_dev->current_state
	    || platform_pci_need_resume(pci_dev))
		return false;

-	return pci_target_state(pci_dev) == pci_dev->current_state;
+	/*
+	 * At this point the device is good to go unless it's been configured
+	 * to generate PME at the runtime suspend time, but it is not supposed
+	 * to wake up the system. In that case, simply disable PME for it
+	 * (it will have to be re-enabled on exit from system resume).
+	 *
+	 * If the device's power state is D3cold and the platform check above
+	 * hasn't triggered, the device's configuration is suitable and we don't
+	 * need to manipulate it at all.
+	 */
+	spin_lock_irq(&dev->power.lock);
+
+	if (pm_runtime_suspended(dev) && pci_dev->current_state < PCI_D3cold &&
+	    !device_may_wakeup(dev))
+		__pci_pme_active(pci_dev, false);
+
+	spin_unlock_irq(&dev->power.lock);
+	return true;
+}
+
+/**
+ * pci_dev_complete_resume - Finalize resume from system sleep for a device.
+ * @pci_dev: Device to handle.
+ *
+ * If the device is runtime suspended and wakeup-capable, enable PME for it as
+ * it might have been disabled during the prepare phase of system suspend if
+ * the device was not configured for system wakeup.
+ */
+void pci_dev_complete_resume(struct pci_dev *pci_dev)
+{
+	struct device *dev = &pci_dev->dev;
+
+	if (!pci_dev_run_wake(pci_dev))
+		return;
+
+	spin_lock_irq(&dev->power.lock);
+
+	if (pm_runtime_suspended(dev) && pci_dev->current_state < PCI_D3cold)
+		__pci_pme_active(pci_dev, true);
+
+	spin_unlock_irq(&dev->power.lock);
 }
 
 void pci_config_pm_runtime_get(struct pci_dev *pdev)
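The reworked `pci_dev_keep_suspended()` now makes two decisions at once: whether a runtime-suspended device can skip the system-suspend callbacks, and whether its PME# should be disarmed so it cannot wake the system it was never allowed to wake. A flattened, self-contained sketch of that flow (all hardware and platform state reduced to booleans; names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the decision pair in pci_dev_keep_suspended() above. */
struct keep_suspended_result {
	bool keep;		/* skip system suspend callbacks? */
	bool disable_pme;	/* disarm PME# until resume completes? */
};

static struct keep_suspended_result
keep_suspended(bool runtime_suspended, bool in_target_state,
	       bool platform_needs_resume, bool may_wakeup, bool d3cold)
{
	struct keep_suspended_result r = { false, false };

	/* Any of these forces the normal suspend path. */
	if (!runtime_suspended || !in_target_state || platform_needs_resume)
		return r;

	r.keep = true;
	/* Armed for runtime wakeup but not allowed to wake the system:
	 * disarm PME# (re-enabled in the complete phase on resume). */
	if (!d3cold && !may_wakeup)
		r.disable_pme = true;
	return r;
}
```

The matching `pci_dev_complete_resume()` is the undo step, which is why this patch also wires up a `.complete` callback in pci-driver.c.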
+1
drivers/pci/pci.h
···
 int pci_finish_runtime_suspend(struct pci_dev *dev);
 int __pci_pme_wakeup(struct pci_dev *dev, void *ign);
 bool pci_dev_keep_suspended(struct pci_dev *dev);
+void pci_dev_complete_resume(struct pci_dev *pci_dev);
 void pci_config_pm_runtime_get(struct pci_dev *dev);
 void pci_config_pm_runtime_put(struct pci_dev *dev);
 void pci_pm_init(struct pci_dev *dev);
+2 -2
drivers/pnp/pnpacpi/core.c
···
 };
 EXPORT_SYMBOL(pnpacpi_protocol);
 
-static char *__init pnpacpi_get_id(struct acpi_device *device)
+static const char *__init pnpacpi_get_id(struct acpi_device *device)
 {
 	struct acpi_hardware_id *id;
 
···
 static int __init pnpacpi_add_device(struct acpi_device *device)
 {
 	struct pnp_dev *dev;
-	char *pnpid;
+	const char *pnpid;
 	struct acpi_hardware_id *id;
 	int error;
 
+1
drivers/power/avs/rockchip-io-domain.c
···
 	},
 	{ /* sentinel */ },
 };
+MODULE_DEVICE_TABLE(of, rockchip_iodomain_match);
 
 static int rockchip_iodomain_probe(struct platform_device *pdev)
 {
+1
drivers/powercap/intel_rapl.c
···
 	RAPL_CPU(0x4A, rapl_defaults_tng),/* Tangier */
 	RAPL_CPU(0x56, rapl_defaults_core),/* Future Xeon */
 	RAPL_CPU(0x5A, rapl_defaults_ann),/* Annidale */
+	RAPL_CPU(0X5C, rapl_defaults_core),/* Broxton */
 	RAPL_CPU(0x5E, rapl_defaults_core),/* Skylake-H/S */
 	RAPL_CPU(0x57, rapl_defaults_hsw_server),/* Knights Landing */
 	{}
-1
drivers/soc/dove/pmu.c
···
 
 		__pmu_domain_register(domain, np);
 	}
-	pm_genpd_poweroff_unused();
 
 	/* Loss of the interrupt controller is not a fatal error. */
 	parent_irq = irq_of_parse_and_map(pmu->of_node, 0);
+5 -2
include/acpi/acexcep.h
···
 #define AE_AML_ILLEGAL_ADDRESS          EXCEP_AML (0x0020)
 #define AE_AML_INFINITE_LOOP            EXCEP_AML (0x0021)
 #define AE_AML_UNINITIALIZED_NODE       EXCEP_AML (0x0022)
+#define AE_AML_TARGET_TYPE              EXCEP_AML (0x0023)
 
-#define AE_CODE_AML_MAX                 0x0022
+#define AE_CODE_AML_MAX                 0x0023
 
 /*
  * Internal exceptions used for control
···
 	EXCEP_TXT("AE_AML_INFINITE_LOOP",
 		  "An apparent infinite AML While loop, method was aborted"),
 	EXCEP_TXT("AE_AML_UNINITIALIZED_NODE",
-		  "A namespace node is uninitialized or unresolved")
+		  "A namespace node is uninitialized or unresolved"),
+	EXCEP_TXT("AE_AML_TARGET_TYPE",
+		  "A target operand of an incorrect type was encountered")
 };
 
 static const struct acpi_exception_info acpi_gbl_exception_names_ctrl[] = {
+33 -4
include/acpi/acpi_bus.h
···
 struct acpi_scan_handler {
 	const struct acpi_device_id *ids;
 	struct list_head list_node;
-	bool (*match)(char *idstr, const struct acpi_device_id **matchid);
+	bool (*match)(const char *idstr, const struct acpi_device_id **matchid);
 	int (*attach)(struct acpi_device *dev, const struct acpi_device_id *id);
 	void (*detach)(struct acpi_device *dev);
 	void (*bind)(struct device *phys_dev);
···
 
 struct acpi_hardware_id {
 	struct list_head list;
-	char *id;
+	const char *id;
 };
 
 struct acpi_pnp_type {
···
 	const union acpi_object *pointer;
 	const union acpi_object *properties;
 	const union acpi_object *of_compatible;
+	struct list_head subnodes;
 };
 
 struct acpi_gpio_mapping;
···
 	struct list_head physical_node_list;
 	struct mutex physical_node_lock;
 	void (*remove)(struct acpi_device *);
+};
+
+/* Non-device subnode */
+struct acpi_data_node {
+	const char *name;
+	acpi_handle handle;
+	struct fwnode_handle fwnode;
+	struct acpi_device_data data;
+	struct list_head sibling;
+	struct kobject kobj;
+	struct completion kobj_done;
 };
 
 static inline bool acpi_check_dma(struct acpi_device *adev, bool *coherent)
···
 
 static inline bool is_acpi_node(struct fwnode_handle *fwnode)
 {
+	return fwnode && (fwnode->type == FWNODE_ACPI
+		|| fwnode->type == FWNODE_ACPI_DATA);
+}
+
+static inline bool is_acpi_device_node(struct fwnode_handle *fwnode)
+{
 	return fwnode && fwnode->type == FWNODE_ACPI;
 }
 
-static inline struct acpi_device *to_acpi_node(struct fwnode_handle *fwnode)
+static inline struct acpi_device *to_acpi_device_node(struct fwnode_handle *fwnode)
 {
-	return is_acpi_node(fwnode) ?
+	return is_acpi_device_node(fwnode) ?
 		container_of(fwnode, struct acpi_device, fwnode) : NULL;
+}
+
+static inline bool is_acpi_data_node(struct fwnode_handle *fwnode)
+{
+	return fwnode && fwnode->type == FWNODE_ACPI_DATA;
+}
+
+static inline struct acpi_data_node *to_acpi_data_node(struct fwnode_handle *fwnode)
+{
+	return is_acpi_data_node(fwnode) ?
+		container_of(fwnode, struct acpi_data_node, fwnode) : NULL;
 }
 
 static inline struct fwnode_handle *acpi_fwnode_handle(struct acpi_device *adev)
+2 -1
include/acpi/acpiosxf.h
···
 	OSL_GLOBAL_LOCK_HANDLER,
 	OSL_NOTIFY_HANDLER,
 	OSL_GPE_HANDLER,
-	OSL_DEBUGGER_THREAD,
+	OSL_DEBUGGER_MAIN_THREAD,
+	OSL_DEBUGGER_EXEC_THREAD,
 	OSL_EC_POLL_HANDLER,
 	OSL_EC_BURST_HANDLER
 } acpi_execute_type;
+3 -11
include/acpi/acpixf.h
···

 /* Current ACPICA subsystem version in YYYYMMDD format */

-#define ACPI_CA_VERSION                 0x20150818
+#define ACPI_CA_VERSION                 0x20150930

 #include <acpi/acconfig.h>
 #include <acpi/actypes.h>
···
  */
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable(void))
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable(void))
-#ifdef ACPI_FUTURE_USAGE
 ACPI_EXTERNAL_RETURN_STATUS(acpi_status acpi_subsystem_status(void))
-#endif

-#ifdef ACPI_FUTURE_USAGE
 ACPI_EXTERNAL_RETURN_STATUS(acpi_status
			     acpi_get_system_info(struct acpi_buffer
						  *ret_buffer))
-#endif
 ACPI_EXTERNAL_RETURN_STATUS(acpi_status
			     acpi_get_statistics(struct acpi_statistics *stats))
 ACPI_EXTERNAL_RETURN_PTR(const char
···
							      space_id,
							      acpi_adr_space_handler
							      handler))
-#ifdef ACPI_FUTURE_USAGE
 ACPI_EXTERNAL_RETURN_STATUS(acpi_status
			     acpi_install_exception_handler
			     (acpi_exception_handler handler))
-#endif
 ACPI_EXTERNAL_RETURN_STATUS(acpi_status
			     acpi_install_interface_handler
			     (acpi_interface_handler handler))
···
			    acpi_get_current_resources(acpi_handle device,
						       struct acpi_buffer
						       *ret_buffer))
-#ifdef ACPI_FUTURE_USAGE
 ACPI_EXTERNAL_RETURN_STATUS(acpi_status
			     acpi_get_possible_resources(acpi_handle device,
							 struct acpi_buffer
							 *ret_buffer))
-#endif
 ACPI_EXTERNAL_RETURN_STATUS(acpi_status
			     acpi_get_event_resources(acpi_handle device_handle,
						      struct acpi_buffer
···
 /*
  * ACPI Timer interfaces
  */
-#ifdef ACPI_FUTURE_USAGE
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
				 acpi_get_timer_resolution(u32 *resolution))
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_get_timer(u32 *ticks))
···
				 acpi_get_timer_duration(u32 start_ticks,
							 u32 end_ticks,
							 u32 *time_elapsed))
-#endif				/* ACPI_FUTURE_USAGE */

 /*
  * Error/Warning output
···
							 acpi_object_handler handler,
							 void **data,
							 void (*callback)(void *)))
+
+void acpi_set_debugger_thread_id(acpi_thread_id thread_id);

 #endif				/* __ACXFACE_H__ */
+1 -1
include/acpi/actbl1.h
···
 #define ACPI_NFIT_MEM_SAVE_FAILED       (1)     /* 00: Last SAVE to Memory Device failed */
 #define ACPI_NFIT_MEM_RESTORE_FAILED    (1<<1)  /* 01: Last RESTORE from Memory Device failed */
 #define ACPI_NFIT_MEM_FLUSH_FAILED      (1<<2)  /* 02: Platform flush failed */
-#define ACPI_NFIT_MEM_ARMED             (1<<3)  /* 03: Memory Device observed to be not armed */
+#define ACPI_NFIT_MEM_NOT_ARMED         (1<<3)  /* 03: Memory Device is not armed */
 #define ACPI_NFIT_MEM_HEALTH_OBSERVED   (1<<4)  /* 04: Memory Device observed SMART/health events */
 #define ACPI_NFIT_MEM_HEALTH_ENABLED    (1<<5)  /* 05: SMART/health events enabled */
+138
include/acpi/cppc_acpi.h
···
+/*
+ * CPPC (Collaborative Processor Performance Control) methods used
+ * by CPUfreq drivers.
+ *
+ * (C) Copyright 2014, 2015 Linaro Ltd.
+ * Author: Ashwin Chaugule <ashwin.chaugule@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; version 2
+ * of the License.
+ */
+
+#ifndef _CPPC_ACPI_H
+#define _CPPC_ACPI_H
+
+#include <linux/acpi.h>
+#include <linux/mailbox_controller.h>
+#include <linux/mailbox_client.h>
+#include <linux/types.h>
+
+#include <acpi/processor.h>
+
+/* Only support CPPCv2 for now. */
+#define CPPC_NUM_ENT	21
+#define CPPC_REV	2
+
+#define PCC_CMD_COMPLETE	1
+#define MAX_CPC_REG_ENT	19
+
+/* CPPC specific PCC commands. */
+#define CMD_READ	0
+#define CMD_WRITE	1
+
+/* Each register has the folowing format. */
+struct cpc_reg {
+	u8 descriptor;
+	u16 length;
+	u8 space_id;
+	u8 bit_width;
+	u8 bit_offset;
+	u8 access_width;
+	u64 __iomem address;
+} __packed;
+
+/*
+ * Each entry in the CPC table is either
+ * of type ACPI_TYPE_BUFFER or
+ * ACPI_TYPE_INTEGER.
+ */
+struct cpc_register_resource {
+	acpi_object_type type;
+	union {
+		struct cpc_reg reg;
+		u64 int_value;
+	} cpc_entry;
+};
+
+/* Container to hold the CPC details for each CPU */
+struct cpc_desc {
+	int num_entries;
+	int version;
+	int cpu_id;
+	struct cpc_register_resource cpc_regs[MAX_CPC_REG_ENT];
+	struct acpi_psd_package domain_info;
+};
+
+/* These are indexes into the per-cpu cpc_regs[]. Order is important. */
+enum cppc_regs {
+	HIGHEST_PERF,
+	NOMINAL_PERF,
+	LOW_NON_LINEAR_PERF,
+	LOWEST_PERF,
+	GUARANTEED_PERF,
+	DESIRED_PERF,
+	MIN_PERF,
+	MAX_PERF,
+	PERF_REDUC_TOLERANCE,
+	TIME_WINDOW,
+	CTR_WRAP_TIME,
+	REFERENCE_CTR,
+	DELIVERED_CTR,
+	PERF_LIMITED,
+	ENABLE,
+	AUTO_SEL_ENABLE,
+	AUTO_ACT_WINDOW,
+	ENERGY_PERF,
+	REFERENCE_PERF,
+};
+
+/*
+ * Categorization of registers as described
+ * in the ACPI v.5.1 spec.
+ * XXX: Only filling up ones which are used by governors
+ * today.
+ */
+struct cppc_perf_caps {
+	u32 highest_perf;
+	u32 nominal_perf;
+	u32 reference_perf;
+	u32 lowest_perf;
+};
+
+struct cppc_perf_ctrls {
+	u32 max_perf;
+	u32 min_perf;
+	u32 desired_perf;
+};
+
+struct cppc_perf_fb_ctrs {
+	u64 reference;
+	u64 prev_reference;
+	u64 delivered;
+	u64 prev_delivered;
+};
+
+/* Per CPU container for runtime CPPC management. */
+struct cpudata {
+	int cpu;
+	struct cppc_perf_caps perf_caps;
+	struct cppc_perf_ctrls perf_ctrls;
+	struct cppc_perf_fb_ctrs perf_fb_ctrs;
+	struct cpufreq_policy *cur_policy;
+	unsigned int shared_type;
+	cpumask_var_t shared_cpu_map;
+};
+
+extern int cppc_get_perf_ctrs(int cpu, struct cppc_perf_fb_ctrs *perf_fb_ctrs);
+extern int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls);
+extern int cppc_get_perf_caps(int cpu, struct cppc_perf_caps *caps);
+extern int acpi_get_psd_map(struct cpudata **);
+
+/* Methods to interact with the PCC mailbox controller. */
+extern struct mbox_chan *
+	pcc_mbox_request_channel(struct mbox_client *, unsigned int);
+extern int mbox_send_message(struct mbox_chan *chan, void *mssg);
+
+#endif /* _CPPC_ACPI_H*/
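The key shape in the new header is `struct cpc_register_resource`: each CPC table entry is either a plain integer or a register descriptor that has to be read through its address space. The sketch below mirrors that two-way dispatch in stand-alone user-space C; it is illustrative only, and every `mock_`-prefixed name (plus the `fake` read callback in the usage note) is invented here rather than taken from the kernel.

```c
#include <assert.h>
#include <stdint.h>

/* User-space mock-up of cpc_register_resource; the two type tags
 * stand in for ACPI_TYPE_INTEGER and ACPI_TYPE_BUFFER. */
enum mock_type { MOCK_TYPE_INTEGER, MOCK_TYPE_BUFFER };

struct mock_cpc_reg {
	uint8_t space_id;
	uint8_t bit_width;
	uint64_t address;
};

struct mock_cpc_register_resource {
	enum mock_type type;
	union {
		struct mock_cpc_reg reg;	/* used when type is BUFFER */
		uint64_t int_value;		/* used when type is INTEGER */
	} cpc_entry;
};

/* A CPC read either returns the constant directly or goes through
 * the supplied register accessor, mirroring the table's two entry kinds. */
static uint64_t mock_cpc_read(const struct mock_cpc_register_resource *res,
			      uint64_t (*read_reg)(uint64_t address))
{
	if (res->type == MOCK_TYPE_INTEGER)
		return res->cpc_entry.int_value;
	return read_reg(res->cpc_entry.reg.address);
}
```

In the real driver the accessor would be a PCC-mailbox or MMIO read keyed off `space_id`; here it is just a function pointer so the dispatch logic can be exercised in isolation.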
+4 -4
include/acpi/platform/acenv.h
···

 #ifdef ACPI_LIBRARY
 #define ACPI_USE_LOCAL_CACHE
-#define ACPI_FUTURE_USAGE
+#define ACPI_FULL_DEBUG
 #endif

 /* Common for all ACPICA applications */
···
  * multi-threaded if ACPI_APPLICATION is not set.
  */
 #ifndef DEBUGGER_THREADING
-#ifdef ACPI_APPLICATION
-#define DEBUGGER_THREADING          DEBUGGER_SINGLE_THREADED
+#if !defined (ACPI_APPLICATION) || defined (ACPI_EXEC_APP)
+#define DEBUGGER_THREADING          DEBUGGER_MULTI_THREADED

 #else
-#define DEBUGGER_THREADING          DEBUGGER_MULTI_THREADED
+#define DEBUGGER_THREADING          DEBUGGER_SINGLE_THREADED
 #endif
 #endif				/* !DEBUGGER_THREADING */
+5 -2
include/acpi/platform/aclinux.h
···

 #define ACPI_USE_SYSTEM_INTTYPES

-/* Compile for reduced hardware mode only with this kernel config */
+/* Kernel specific ACPICA configuration */

 #ifdef CONFIG_ACPI_REDUCED_HARDWARE_ONLY
 #define ACPI_REDUCED_HARDWARE 1
+#endif
+
+#ifdef CONFIG_ACPI_DEBUGGER
+#define ACPI_DEBUGGER
 #endif

 #include <linux/string.h>
···
  * OSL interfaces used by utilities
  */
 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_redirect_output
-#define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_get_line
 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_get_table_by_name
 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_get_table_by_index
 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_get_table_by_address
+5
include/acpi/platform/aclinuxex.h
···
 		lock ? AE_OK : AE_NO_MEMORY; \
 	})

+static inline u8 acpi_os_readable(void *pointer, acpi_size length)
+{
+	return TRUE;
+}
+
 /*
  * OSL interfaces added by Linux
  */
+14
include/acpi/processor.h
···
 int acpi_map_cpuid(phys_cpuid_t phys_id, u32 acpi_id);
 int acpi_get_cpuid(acpi_handle, int type, u32 acpi_id);

+#ifdef CONFIG_ACPI_CPPC_LIB
+extern int acpi_cppc_processor_probe(struct acpi_processor *pr);
+extern void acpi_cppc_processor_exit(struct acpi_processor *pr);
+#else
+static inline int acpi_cppc_processor_probe(struct acpi_processor *pr)
+{
+	return 0;
+}
+static inline void acpi_cppc_processor_exit(struct acpi_processor *pr)
+{
+	return;
+}
+#endif	/* CONFIG_ACPI_CPPC_LIB */
+
 /* in processor_pdc.c */
 void acpi_processor_set_pdc(acpi_handle handle);
+12
include/asm-generic/vmlinux.lds.h
···
 #define CPUIDLE_METHOD_OF_TABLES() OF_TABLE(CONFIG_CPU_IDLE, cpuidle_method)
 #define EARLYCON_OF_TABLES()	OF_TABLE(CONFIG_SERIAL_EARLYCON, earlycon)

+#ifdef CONFIG_ACPI
+#define ACPI_PROBE_TABLE(name)						\
+	. = ALIGN(8);							\
+	VMLINUX_SYMBOL(__##name##_acpi_probe_table) = .;		\
+	*(__##name##_acpi_probe_table)					\
+	VMLINUX_SYMBOL(__##name##_acpi_probe_table_end) = .;
+#else
+#define ACPI_PROBE_TABLE(name)
+#endif
+
 #define KERNEL_DTB()							\
 	STRUCT_ALIGN();							\
 	VMLINUX_SYMBOL(__dtb_start) = .;				\
···
 	CPUIDLE_METHOD_OF_TABLES()					\
 	KERNEL_DTB()							\
 	IRQCHIP_OF_MATCH_TABLE()					\
+	ACPI_PROBE_TABLE(irqchip)					\
+	ACPI_PROBE_TABLE(clksrc)					\
 	EARLYCON_TABLE()						\
 	EARLYCON_OF_TABLES()
+132 -25
include/linux/acpi.h
···
 	return adev ? adev->handle : NULL;
 }

-#define ACPI_COMPANION(dev)		to_acpi_node((dev)->fwnode)
+#define ACPI_COMPANION(dev)		to_acpi_device_node((dev)->fwnode)
 #define ACPI_COMPANION_SET(dev, adev)	set_primary_fwnode(dev, (adev) ? \
 	acpi_fwnode_handle(adev) : NULL)
 #define ACPI_HANDLE(dev)		acpi_device_handle(ACPI_COMPANION(dev))
···

 static inline bool has_acpi_companion(struct device *dev)
 {
-	return is_acpi_node(dev->fwnode);
+	return is_acpi_device_node(dev->fwnode);
 }

 static inline void acpi_preset_companion(struct device *dev,
···
 	(!entry) || (unsigned long)entry + sizeof(*entry) > end ||	\
 	((struct acpi_subtable_header *)entry)->length < sizeof(*entry))

+struct acpi_subtable_proc {
+	int id;
+	acpi_tbl_entry_handler handler;
+	int count;
+};
+
 char * __acpi_map_table (unsigned long phys_addr, unsigned long size);
 void __acpi_unmap_table(char *map, unsigned long size);
 int early_acpi_boot_init(void);
···
 			      struct acpi_table_header *table_header,
 			      int entry_id, unsigned int max_entries);
 int __init acpi_table_parse_entries(char *id, unsigned long table_size,
-			      int entry_id,
-			      acpi_tbl_entry_handler handler,
-			      unsigned int max_entries);
+				    int entry_id,
+				    acpi_tbl_entry_handler handler,
+				    unsigned int max_entries);
+int __init acpi_table_parse_entries(char *id, unsigned long table_size,
+				    int entry_id,
+				    acpi_tbl_entry_handler handler,
+				    unsigned int max_entries);
+int __init acpi_table_parse_entries_array(char *id, unsigned long table_size,
+			      struct acpi_subtable_proc *proc, int proc_num,
+			      unsigned int max_entries);
 int acpi_table_parse_madt(enum acpi_madt_type id,
 			  acpi_tbl_entry_handler handler,
 			  unsigned int max_entries);
···
 void acpi_irq_stats_init(void);
 extern u32 acpi_irq_handled;
 extern u32 acpi_irq_not_handled;
+extern unsigned int acpi_sci_irq;
+#define INVALID_ACPI_IRQ	((unsigned)-1)
+static inline bool acpi_sci_irq_valid(void)
+{
+	return acpi_sci_irq != INVALID_ACPI_IRQ;
+}

 extern int sbf_port;
 extern unsigned long acpi_realmode_flags;
···
 	return false;
 }

-static inline struct acpi_device *to_acpi_node(struct fwnode_handle *fwnode)
+static inline bool is_acpi_device_node(struct fwnode_handle *fwnode)
+{
+	return false;
+}
+
+static inline struct acpi_device *to_acpi_device_node(struct fwnode_handle *fwnode)
+{
+	return NULL;
+}
+
+static inline bool is_acpi_data_node(struct fwnode_handle *fwnode)
+{
+	return false;
+}
+
+static inline struct acpi_data_node *to_acpi_data_node(struct fwnode_handle *fwnode)
 {
 	return NULL;
 }
···
 #ifdef CONFIG_ACPI
 int acpi_dev_get_property(struct acpi_device *adev, const char *name,
 			  acpi_object_type type, const union acpi_object **obj);
-int acpi_dev_get_property_array(struct acpi_device *adev, const char *name,
-				acpi_object_type type,
-				const union acpi_object **obj);
-int acpi_dev_get_property_reference(struct acpi_device *adev,
-				    const char *name, size_t index,
-				    struct acpi_reference_args *args);
+int acpi_node_get_property_reference(struct fwnode_handle *fwnode,
+				     const char *name, size_t index,
+				     struct acpi_reference_args *args);

-int acpi_dev_prop_get(struct acpi_device *adev, const char *propname,
-		      void **valptr);
+int acpi_node_prop_get(struct fwnode_handle *fwnode, const char *propname,
+		       void **valptr);
 int acpi_dev_prop_read_single(struct acpi_device *adev, const char *propname,
 			      enum dev_prop_type proptype, void *val);
+int acpi_node_prop_read(struct fwnode_handle *fwnode, const char *propname,
+			enum dev_prop_type proptype, void *val, size_t nval);
 int acpi_dev_prop_read(struct acpi_device *adev, const char *propname,
 		       enum dev_prop_type proptype, void *val, size_t nval);

-struct acpi_device *acpi_get_next_child(struct device *dev,
-					struct acpi_device *child);
+struct fwnode_handle *acpi_get_next_subnode(struct device *dev,
+					    struct fwnode_handle *subnode);
+
+struct acpi_probe_entry;
+typedef bool (*acpi_probe_entry_validate_subtbl)(struct acpi_subtable_header *,
+						 struct acpi_probe_entry *);
+
+#define ACPI_TABLE_ID_LEN	5
+
+/**
+ * struct acpi_probe_entry - boot-time probing entry
+ * @id:			ACPI table name
+ * @type:		Optional subtable type to match
+ *			(if @id contains subtables)
+ * @subtable_valid:	Optional callback to check the validity of
+ *			the subtable
+ * @probe_table:	Callback to the driver being probed when table
+ *			match is successful
+ * @probe_subtbl:	Callback to the driver being probed when table and
+ *			subtable match (and optional callback is successful)
+ * @driver_data:	Sideband data provided back to the driver
+ */
+struct acpi_probe_entry {
+	__u8 id[ACPI_TABLE_ID_LEN];
+	__u8 type;
+	acpi_probe_entry_validate_subtbl subtable_valid;
+	union {
+		acpi_tbl_table_handler probe_table;
+		acpi_tbl_entry_handler probe_subtbl;
+	};
+	kernel_ulong_t driver_data;
+};
+
+#define ACPI_DECLARE_PROBE_ENTRY(table, name, table_id, subtable, valid, data, fn)	\
+	static const struct acpi_probe_entry __acpi_probe_##name	\
+		__used __section(__##table##_acpi_probe_table)		\
+		= {							\
+			.id = table_id,					\
+			.type = subtable,				\
+			.subtable_valid = valid,			\
+			.probe_table = (acpi_tbl_table_handler)fn,	\
+			.driver_data = data,				\
+		}
+
+#define ACPI_PROBE_TABLE(name)		__##name##_acpi_probe_table
+#define ACPI_PROBE_TABLE_END(name)	__##name##_acpi_probe_table_end
+
+int __acpi_probe_device_table(struct acpi_probe_entry *start, int nr);
+
+#define acpi_probe_device_table(t)					\
+	({								\
+		extern struct acpi_probe_entry ACPI_PROBE_TABLE(t),	\
+					       ACPI_PROBE_TABLE_END(t);	\
+		__acpi_probe_device_table(&ACPI_PROBE_TABLE(t),		\
+					  (&ACPI_PROBE_TABLE_END(t) -	\
+					   &ACPI_PROBE_TABLE(t)));	\
+	})
 #else
 static inline int acpi_dev_get_property(struct acpi_device *adev,
 					const char *name, acpi_object_type type,
···
 {
 	return -ENXIO;
 }
-static inline int acpi_dev_get_property_array(struct acpi_device *adev,
-					      const char *name,
-					      acpi_object_type type,
-					      const union acpi_object **obj)
+
+static inline int acpi_node_get_property_reference(struct fwnode_handle *fwnode,
+				const char *name, const char *cells_name,
+				size_t index, struct acpi_reference_args *args)
 {
 	return -ENXIO;
 }
-static inline int acpi_dev_get_property_reference(struct acpi_device *adev,
-				const char *name, const char *cells_name,
-				size_t index, struct acpi_reference_args *args)
+
+static inline int acpi_node_prop_get(struct fwnode_handle *fwnode,
+				     const char *propname,
+				     void **valptr)
 {
 	return -ENXIO;
 }
···
 	return -ENXIO;
 }

+static inline int acpi_node_prop_read(struct fwnode_handle *fwnode,
+				      const char *propname,
+				      enum dev_prop_type proptype,
+				      void *val, size_t nval)
+{
+	return -ENXIO;
+}
+
 static inline int acpi_dev_prop_read(struct acpi_device *adev,
 				     const char *propname,
 				     enum dev_prop_type proptype,
···
 	return -ENXIO;
 }

-static inline struct acpi_device *acpi_get_next_child(struct device *dev,
-						      struct acpi_device *child)
+static inline struct fwnode_handle *acpi_get_next_subnode(struct device *dev,
+						struct fwnode_handle *subnode)
 {
 	return NULL;
 }

+#define ACPI_DECLARE_PROBE_ENTRY(table, name, table_id, subtable, validate, data, fn) \
+	static const void * __acpi_table_##name[]			\
+		__attribute__((unused))					\
+		= { (void *) table_id,					\
+		    (void *) subtable,					\
+		    (void *) valid,					\
+		    (void *) fn,					\
+		    (void *) data }
+
+#define acpi_probe_device_table(t)	({ int __r = 0; __r;})
 #endif

 #endif	/*_LINUX_ACPI_H*/
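The new `acpi_probe_entry` machinery works like the existing OF match tables: `ACPI_DECLARE_PROBE_ENTRY` drops an entry into a dedicated linker section (laid out by the `ACPI_PROBE_TABLE` macro added to `vmlinux.lds.h` above), and `__acpi_probe_device_table()` walks that section at boot, firing the callback of every entry whose table signature matches. The stand-alone sketch below models that walk over a plain array instead of a linker section; all `mock_`-prefixed names are invented for illustration and are not the kernel's.

```c
#include <assert.h>
#include <string.h>

/* Simplified probe entry: a 4-char ACPI table signature (plus NUL)
 * and an init callback, mirroring acpi_probe_entry's shape. */
#define MOCK_TABLE_ID_LEN 5

struct mock_probe_entry {
	char id[MOCK_TABLE_ID_LEN];
	int (*probe)(void);		/* returns 0 on success */
};

/* Walk the entry array and invoke every entry whose signature matches
 * the table the firmware actually provided; return how many probes
 * succeeded, loosely like __acpi_probe_device_table(). */
static int mock_probe_table_walk(const struct mock_probe_entry *start,
				 int nr, const char *present_sig)
{
	int matched = 0;

	for (int i = 0; i < nr; i++)
		if (!strncmp(start[i].id, present_sig, MOCK_TABLE_ID_LEN - 1))
			matched += (start[i].probe() == 0);
	return matched;
}
```

In the kernel the `start`/`nr` pair comes from the `__##name##_acpi_probe_table` section bounds, so drivers register entries simply by being linked in, with no central registration list to edit.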
-10
include/linux/acpi_irq.h
···
-#ifndef _LINUX_ACPI_IRQ_H
-#define _LINUX_ACPI_IRQ_H
-
-#include <linux/irq.h>
-
-#ifndef acpi_irq_init
-static inline void acpi_irq_init(void) { }
-#endif
-
-#endif /* _LINUX_ACPI_IRQ_H */
+5 -8
include/linux/clocksource.h
···
 #define CLOCKSOURCE_OF_DECLARE(name, compat, fn)			\
 	OF_DECLARE_1(clksrc, name, compat, fn)

-#ifdef CONFIG_CLKSRC_OF
-extern void clocksource_of_init(void);
+#ifdef CONFIG_CLKSRC_PROBE
+extern void clocksource_probe(void);
 #else
-static inline void clocksource_of_init(void) {}
+static inline void clocksource_probe(void) {}
 #endif

-#ifdef CONFIG_ACPI
-void acpi_generic_timer_init(void);
-#else
-static inline void acpi_generic_timer_init(void) { }
-#endif
+#define CLOCKSOURCE_ACPI_DECLARE(name, table_id, fn)			\
+	ACPI_DECLARE_PROBE_ENTRY(clksrc, name, table_id, 0, NULL, 0, fn)

 #endif /* _LINUX_CLOCKSOURCE_H */
-5
include/linux/cpufreq.h
···
 	unsigned int		shared_type; /* ACPI: ANY or ALL affected CPUs
 						should set cpufreq */
 	unsigned int		cpu;    /* cpu managing this policy, must be online */
-	unsigned int		kobj_cpu; /* cpu managing sysfs files, can be offline */

 	struct clk		*clk;
 	struct cpufreq_cpuinfo	cpuinfo;/* see above */
···

 /* /sys/devices/system/cpu/cpufreq: entry point for global variables */
 extern struct kobject *cpufreq_global_kobject;
-int cpufreq_get_global_kobject(void);
-void cpufreq_put_global_kobject(void);
-int cpufreq_sysfs_create_file(const struct attribute *attr);
-void cpufreq_sysfs_remove_file(const struct attribute *attr);

 #ifdef CONFIG_CPU_FREQ
 unsigned int cpufreq_get(unsigned int cpu);
+1
include/linux/fwnode.h
···
 	FWNODE_INVALID = 0,
 	FWNODE_OF,
 	FWNODE_ACPI,
+	FWNODE_ACPI_DATA,
 	FWNODE_PDATA,
 	FWNODE_IRQCHIP,
 };
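With `FWNODE_ACPI_DATA` added, the `acpi_bus.h` hunk earlier in this diff redefines "is an ACPI node" as the union of device nodes and the new data-only `_DSD` subnodes, with separate `is_acpi_device_node()` / `is_acpi_data_node()` predicates. The stand-alone sketch below replays that predicate split outside the kernel; the `mock_`-prefixed names are invented here.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Mirror of the extended fwnode type enum above. */
enum mock_fwnode_type {
	MOCK_FWNODE_INVALID = 0,
	MOCK_FWNODE_OF,
	MOCK_FWNODE_ACPI,
	MOCK_FWNODE_ACPI_DATA,
};

struct mock_fwnode { enum mock_fwnode_type type; };

/* True only for a real ACPI device object. */
static bool mock_is_acpi_device_node(const struct mock_fwnode *fw)
{
	return fw && fw->type == MOCK_FWNODE_ACPI;
}

/* True only for a data-only _DSD subnode. */
static bool mock_is_acpi_data_node(const struct mock_fwnode *fw)
{
	return fw && fw->type == MOCK_FWNODE_ACPI_DATA;
}

/* The old catch-all: any ACPI-backed fwnode, device or data. */
static bool mock_is_acpi_node(const struct mock_fwnode *fw)
{
	return mock_is_acpi_device_node(fw) || mock_is_acpi_data_node(fw);
}
```

The split matters for callers like `ACPI_COMPANION()`, which must only ever hand back a `struct acpi_device` and therefore switches to the device-only predicate in this series.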
+1
include/linux/ioport.h
···
 /* PnP I/O specific bits (IORESOURCE_BITS) */
 #define IORESOURCE_IO_16BIT_ADDR	(1<<0)
 #define IORESOURCE_IO_FIXED		(1<<1)
+#define IORESOURCE_IO_SPARSE		(1<<2)

 /* PCI ROM control bits (IORESOURCE_BITS) */
 #define IORESOURCE_ROM_ENABLE		(1<<0)	/* ROM is enabled, same as PCI_ROM_ADDRESS_ENABLE */
+17
include/linux/irqchip.h
···
 #ifndef _LINUX_IRQCHIP_H
 #define _LINUX_IRQCHIP_H

+#include <linux/acpi.h>
 #include <linux/of.h>

 /*
···
  * @fn: initialization function
  */
 #define IRQCHIP_DECLARE(name, compat, fn) OF_DECLARE_2(irqchip, name, compat, fn)
+
+/*
+ * This macro must be used by the different irqchip drivers to declare
+ * the association between their version and their initialization function.
+ *
+ * @name: name that must be unique accross all IRQCHIP_ACPI_DECLARE of the
+ * same file.
+ * @subtable: Subtable to be identified in MADT
+ * @validate: Function to be called on that subtable to check its validity.
+ *            Can be NULL.
+ * @data: data to be checked by the validate function.
+ * @fn: initialization function
+ */
+#define IRQCHIP_ACPI_DECLARE(name, subtable, validate, data, fn)	\
+	ACPI_DECLARE_PROBE_ENTRY(irqchip, name, ACPI_SIG_MADT,		\
+				 subtable, validate, data, fn)

 #ifdef CONFIG_IRQCHIP
 void irqchip_init(void);
-31
include/linux/irqchip/arm-gic-acpi.h
···
-/*
- * Copyright (C) 2014, Linaro Ltd.
- * Author: Tomasz Nowicki <tomasz.nowicki@linaro.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef ARM_GIC_ACPI_H_
-#define ARM_GIC_ACPI_H_
-
-#ifdef CONFIG_ACPI
-
-/*
- * Hard code here, we can not get memory size from MADT (but FDT does),
- * Actually no need to do that, because this size can be inferred
- * from GIC spec.
- */
-#define ACPI_GICV2_DIST_MEM_SIZE	(SZ_4K)
-#define ACPI_GIC_CPU_IF_MEM_SIZE	(SZ_8K)
-
-struct acpi_table_header;
-
-int gic_v2_acpi_init(struct acpi_table_header *table);
-void acpi_gic_init(void);
-#else
-static inline void acpi_gic_init(void) { }
-#endif
-
-#endif /* ARM_GIC_ACPI_H_ */
+24
include/linux/pci-acpi.h
···
 	return ACPI_HANDLE(dev);
 }

+struct acpi_pci_root;
+struct acpi_pci_root_ops;
+
+struct acpi_pci_root_info {
+	struct acpi_pci_root		*root;
+	struct acpi_device		*bridge;
+	struct acpi_pci_root_ops	*ops;
+	struct list_head		resources;
+	char				name[16];
+};
+
+struct acpi_pci_root_ops {
+	struct pci_ops *pci_ops;
+	int (*init_info)(struct acpi_pci_root_info *info);
+	void (*release_info)(struct acpi_pci_root_info *info);
+	int (*prepare_resources)(struct acpi_pci_root_info *info);
+};
+
+extern int acpi_pci_probe_root_resources(struct acpi_pci_root_info *info);
+extern struct pci_bus *acpi_pci_root_create(struct acpi_pci_root *root,
+					    struct acpi_pci_root_ops *ops,
+					    struct acpi_pci_root_info *info,
+					    void *sd);
+
 void acpi_pci_add_bus(struct pci_bus *bus);
 void acpi_pci_remove_bus(struct pci_bus *bus);
+1
include/linux/pm.h
···
 extern int pm_generic_poweroff_late(struct device *dev);
 extern int pm_generic_poweroff(struct device *dev);
 extern void pm_generic_complete(struct device *dev);
+extern void pm_complete_with_resume_check(struct device *dev);

 #else /* !CONFIG_PM_SLEEP */
+2 -68
include/linux/pm_domain.h
···
 #include <linux/err.h>
 #include <linux/of.h>
 #include <linux/notifier.h>
-#include <linux/cpuidle.h>

 /* Defines used for the flags field in the struct generic_pm_domain */
 #define GENPD_FLAG_PM_CLK	(1U << 0) /* PM domain uses PM clk */
···
 	bool (*active_wakeup)(struct device *dev);
 };

-struct gpd_cpuidle_data {
-	unsigned int saved_exit_latency;
-	struct cpuidle_state *idle_state;
-};
-
 struct generic_pm_domain {
 	struct dev_pm_domain domain;	/* PM domain operations */
 	struct list_head gpd_list_node;	/* Node in the global PM domains list */
···
 	struct dev_power_governor *gov;
 	struct work_struct power_off_work;
 	const char *name;
-	unsigned int in_progress;	/* Number of devices being suspended now */
 	atomic_t sd_count;	/* Number of subdomains with power "on" */
 	enum gpd_status status;	/* Current state of the domain */
 	unsigned int device_count;	/* Number of devices */
···
 	s64 max_off_time_ns;	/* Maximum allowed "suspended" time. */
 	bool max_off_time_changed;
 	bool cached_power_down_ok;
-	struct gpd_cpuidle_data *cpuidle_data;
 	int (*attach_dev)(struct generic_pm_domain *domain,
 			  struct device *dev);
 	void (*detach_dev)(struct generic_pm_domain *domain,
···
 };

 struct gpd_timing_data {
-	s64 stop_latency_ns;
-	s64 start_latency_ns;
-	s64 save_state_latency_ns;
-	s64 restore_state_latency_ns;
+	s64 suspend_latency_ns;
+	s64 resume_latency_ns;
 	s64 effective_constraint_ns;
 	bool constraint_changed;
 	bool cached_stop_ok;
···
 				  struct device *dev,
 				  struct gpd_timing_data *td);

-extern int __pm_genpd_name_add_device(const char *domain_name,
-				      struct device *dev,
-				      struct gpd_timing_data *td);
-
 extern int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 				  struct device *dev);
 extern int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 				  struct generic_pm_domain *new_subdomain);
-extern int pm_genpd_add_subdomain_names(const char *master_name,
-					const char *subdomain_name);
 extern int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 				     struct generic_pm_domain *target);
-extern int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state);
-extern int pm_genpd_name_attach_cpuidle(const char *name, int state);
-extern int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd);
-extern int pm_genpd_name_detach_cpuidle(const char *name);
 extern void pm_genpd_init(struct generic_pm_domain *genpd,
 			  struct dev_power_governor *gov, bool is_off);
-
-extern int pm_genpd_poweron(struct generic_pm_domain *genpd);
-extern int pm_genpd_name_poweron(const char *domain_name);
-extern void pm_genpd_poweroff_unused(void);

 extern struct dev_power_governor simple_qos_governor;
 extern struct dev_power_governor pm_domain_always_on_gov;
···
 {
 	return -ENOSYS;
 }
-static inline int __pm_genpd_name_add_device(const char *domain_name,
-					     struct device *dev,
-					     struct gpd_timing_data *td)
-{
-	return -ENOSYS;
-}
 static inline int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 					 struct device *dev)
 {
···
 {
 	return -ENOSYS;
 }
-static inline int pm_genpd_add_subdomain_names(const char *master_name,
-					       const char *subdomain_name)
-{
-	return -ENOSYS;
-}
 static inline int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 					    struct generic_pm_domain *target)
-{
-	return -ENOSYS;
-}
-static inline int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int st)
-{
-	return -ENOSYS;
-}
-static inline int pm_genpd_name_attach_cpuidle(const char *name, int state)
-{
-	return -ENOSYS;
-}
-static inline int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd)
-{
-	return -ENOSYS;
-}
-static inline int pm_genpd_name_detach_cpuidle(const char *name)
 {
 	return -ENOSYS;
 }
···
 			    struct dev_power_governor *gov, bool is_off)
 {
 }
-static inline int pm_genpd_poweron(struct generic_pm_domain *genpd)
-{
-	return -ENOSYS;
-}
-static inline int pm_genpd_name_poweron(const char *domain_name)
-{
-	return -ENOSYS;
-}
-static inline void pm_genpd_poweroff_unused(void) {}
 #endif

 static inline int pm_genpd_add_device(struct generic_pm_domain *genpd,
 				      struct device *dev)
 {
 	return __pm_genpd_add_device(genpd, dev, NULL);
-}
-
-static inline int pm_genpd_name_add_device(const char *domain_name,
-					   struct device *dev)
-{
-	return __pm_genpd_name_add_device(domain_name, dev, NULL);
 }

 #ifdef CONFIG_PM_GENERIC_DOMAINS_SLEEP
+19 -19
include/linux/pm_opp.h
···
 #endif	/* CONFIG_PM_OPP */

 #if defined(CONFIG_PM_OPP) && defined(CONFIG_OF)
-int of_init_opp_table(struct device *dev);
-void of_free_opp_table(struct device *dev);
-int of_cpumask_init_opp_table(cpumask_var_t cpumask);
-void of_cpumask_free_opp_table(cpumask_var_t cpumask);
-int of_get_cpus_sharing_opps(struct device *cpu_dev, cpumask_var_t cpumask);
-int set_cpus_sharing_opps(struct device *cpu_dev, cpumask_var_t cpumask);
+int dev_pm_opp_of_add_table(struct device *dev);
+void dev_pm_opp_of_remove_table(struct device *dev);
+int dev_pm_opp_of_cpumask_add_table(cpumask_var_t cpumask);
+void dev_pm_opp_of_cpumask_remove_table(cpumask_var_t cpumask);
+int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask);
+int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask);
 #else
-static inline int of_init_opp_table(struct device *dev)
+static inline int dev_pm_opp_of_add_table(struct device *dev)
 {
 	return -EINVAL;
 }

-static inline void of_free_opp_table(struct device *dev)
+static inline void dev_pm_opp_of_remove_table(struct device *dev)
 {
 }

-static inline int of_cpumask_init_opp_table(cpumask_var_t cpumask)
-{
-	return -ENOSYS;
-}
-
-static inline void of_cpumask_free_opp_table(cpumask_var_t cpumask)
-{
-}
-
-static inline int of_get_cpus_sharing_opps(struct device *cpu_dev, cpumask_var_t cpumask)
+static inline int dev_pm_opp_of_cpumask_add_table(cpumask_var_t cpumask)
 {
 	return -ENOSYS;
 }

-static inline int set_cpus_sharing_opps(struct device *cpu_dev, cpumask_var_t cpumask)
+static inline void dev_pm_opp_of_cpumask_remove_table(cpumask_var_t cpumask)
+{
+}
+
+static inline int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
+{
+	return -ENOSYS;
+}
+
+static inline int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
 {
 	return -ENOSYS;
 }
include/linux/property.h (+4)
···
 				   const char **val, size_t nval);
 int device_property_read_string(struct device *dev, const char *propname,
 				const char **val);
+int device_property_match_string(struct device *dev,
+				 const char *propname, const char *string);
 
 bool fwnode_property_present(struct fwnode_handle *fwnode, const char *propname);
 int fwnode_property_read_u8_array(struct fwnode_handle *fwnode,
···
 			  size_t nval);
 int fwnode_property_read_string(struct fwnode_handle *fwnode,
 				const char *propname, const char **val);
+int fwnode_property_match_string(struct fwnode_handle *fwnode,
+				 const char *propname, const char *string);
 
 struct fwnode_handle *device_get_next_child_node(struct device *dev,
 						 struct fwnode_handle *child);
include/linux/suspend.h (+39)
···
 extern void suspend_set_ops(const struct platform_suspend_ops *ops);
 extern int suspend_valid_only_mem(suspend_state_t state);
 
+extern unsigned int pm_suspend_global_flags;
+
+#define PM_SUSPEND_FLAG_FW_SUSPEND	(1 << 0)
+#define PM_SUSPEND_FLAG_FW_RESUME	(1 << 1)
+
+static inline void pm_suspend_clear_flags(void)
+{
+	pm_suspend_global_flags = 0;
+}
+
+static inline void pm_set_suspend_via_firmware(void)
+{
+	pm_suspend_global_flags |= PM_SUSPEND_FLAG_FW_SUSPEND;
+}
+
+static inline void pm_set_resume_via_firmware(void)
+{
+	pm_suspend_global_flags |= PM_SUSPEND_FLAG_FW_RESUME;
+}
+
+static inline bool pm_suspend_via_firmware(void)
+{
+	return !!(pm_suspend_global_flags & PM_SUSPEND_FLAG_FW_SUSPEND);
+}
+
+static inline bool pm_resume_via_firmware(void)
+{
+	return !!(pm_suspend_global_flags & PM_SUSPEND_FLAG_FW_RESUME);
+}
+
 /* Suspend-to-idle state machnine. */
 enum freeze_state {
 	FREEZE_STATE_NONE,	/* Not suspended/suspending. */
···
 extern int pm_suspend(suspend_state_t state);
 #else /* !CONFIG_SUSPEND */
 #define suspend_valid_only_mem	NULL
+
+static inline void pm_suspend_clear_flags(void) {}
+static inline void pm_set_suspend_via_firmware(void) {}
+static inline void pm_set_resume_via_firmware(void) {}
+static inline bool pm_suspend_via_firmware(void) { return false; }
+static inline bool pm_resume_via_firmware(void) { return false; }
 
 static inline void suspend_set_ops(const struct platform_suspend_ops *ops) {}
 static inline int pm_suspend(suspend_state_t state) { return -ENOSYS; }
···
 /* drivers/base/power/wakeup.c */
 extern bool events_check_enabled;
+extern unsigned int pm_wakeup_irq;
 
 extern bool pm_wakeup_pending(void);
 extern void pm_system_wakeup(void);
 extern void pm_wakeup_clear(void);
+extern void pm_system_irq_wakeup(unsigned int irq_number);
 extern bool pm_get_wakeup_count(unsigned int *count, bool block);
 extern bool pm_save_wakeup_count(unsigned int count);
 extern void pm_wakep_autosleep_enabled(bool set);
···
 static inline bool pm_wakeup_pending(void) { return false; }
 static inline void pm_system_wakeup(void) {}
 static inline void pm_wakeup_clear(void) {}
+static inline void pm_system_irq_wakeup(unsigned int irq_number) {}
 
 static inline void lock_system_sleep(void) {}
 static inline void unlock_system_sleep(void) {}
kernel/irq/pm.c (+1 -1)
···
 		desc->istate |= IRQS_SUSPENDED | IRQS_PENDING;
 		desc->depth++;
 		irq_disable(desc);
-		pm_system_wakeup();
+		pm_system_irq_wakeup(irq_desc_get_irq(desc));
 		return true;
 	}
 	return false;
kernel/power/hibernate.c (+1 -1)
···
  * contents of memory is restored from the saved image.
  *
  * If this is successful, control reappears in the restored target kernel in
- * hibernation_snaphot() which returns to hibernate(). Otherwise, the routine
+ * hibernation_snapshot() which returns to hibernate(). Otherwise, the routine
  * attempts to recover gracefully and make the kernel return to the normal mode
  * of operation.
  */
kernel/power/main.c (+17)
···
 {
 	pm_print_times_enabled = !!initcall_debug;
 }
+
+static ssize_t pm_wakeup_irq_show(struct kobject *kobj,
+					struct kobj_attribute *attr,
+					char *buf)
+{
+	return pm_wakeup_irq ? sprintf(buf, "%u\n", pm_wakeup_irq) : -ENODATA;
+}
+
+static ssize_t pm_wakeup_irq_store(struct kobject *kobj,
+					struct kobj_attribute *attr,
+					const char *buf, size_t n)
+{
+	return -EINVAL;
+}
+power_attr(pm_wakeup_irq);
+
 #else /* !CONFIG_PM_SLEEP_DEBUG */
 static inline void pm_print_times_init(void) {}
 #endif /* CONFIG_PM_SLEEP_DEBUG */
···
 #endif
 #ifdef CONFIG_PM_SLEEP_DEBUG
 	&pm_print_times_attr.attr,
+	&pm_wakeup_irq_attr.attr,
 #endif
 #endif
 #ifdef CONFIG_FREEZER
kernel/power/suspend.c (+4)
···
 const char *pm_labels[] = { "mem", "standby", "freeze", NULL };
 const char *pm_states[PM_SUSPEND_MAX];
 
+unsigned int pm_suspend_global_flags;
+EXPORT_SYMBOL_GPL(pm_suspend_global_flags);
+
 static const struct platform_suspend_ops *suspend_ops;
 static const struct platform_freeze_ops *freeze_ops;
 static DECLARE_WAIT_QUEUE_HEAD(suspend_freeze_wait_head);
···
 #endif
 
 	pr_debug("PM: Preparing system for sleep (%s)\n", pm_states[state]);
+	pm_suspend_clear_flags();
 	error = suspend_prepare(state);
 	if (error)
 		goto Unlock;
tools/power/acpi/tools/acpidump/apfiles.c (+1 -1)
···
 		strcat(filename, instance_str);
 	}
 
-	strcat(filename, ACPI_TABLE_FILE_SUFFIX);
+	strcat(filename, FILE_SUFFIX_BINARY_TABLE);
 
 	if (gbl_verbose_mode) {
 		acpi_log_error