Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm+acpi-3.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI and power management updates from Rafael Wysocki:
"We have a few new features this time, including a new SFI-based
cpufreq driver, a new devfreq driver for Tegra Activity Monitor, a new
devfreq class for providing its governors with raw utilization data
and a new ACPI driver for AMD SoCs.

Still, the majority of changes here are reworks of existing code to
make it more straightforward or to prepare it for implementing new
features on top of it. The primary example is the rework of ACPI
resources handling from Jiang Liu, Thomas Gleixner and Lv Zheng with
support for IOAPIC hotplug implemented on top of it, but there are
quite a few changes of this kind in the cpufreq core, ACPICA,
ACPI EC driver, ACPI processor driver and the generic power domains
core code too.

The most active developer is Viresh Kumar with his cpufreq changes.

Specifics:

- Rework of the core ACPI resources parsing code to fix issues in it
and make using resource offsets more convenient, plus consolidation
of some resource-handling code in a couple of places that had grown
analogous data structures and code to cover the same gap in the
core (Jiang Liu, Thomas Gleixner, Lv Zheng).

- ACPI-based IOAPIC hotplug support on top of the resources handling
rework (Jiang Liu, Yinghai Lu).

- ACPICA update to upstream release 20150204 including an interrupt
handling rework that allows drivers to install raw handlers for
ACPI GPEs which then become entirely responsible for the given GPE
and the ACPICA core code won't touch it (Lv Zheng, David E Box,
Octavian Purdila).

- ACPI EC driver rework to fix several concurrency issues and other
problems related to events handling on top of the ACPICA's new
support for raw GPE handlers (Lv Zheng).

- New ACPI driver for AMD SoCs analogous to the LPSS (Low-Power
Subsystem) driver for Intel chips (Ken Xue).

- Two minor fixes of the ACPI LPSS driver (Heikki Krogerus, Jarkko
Nikula).

- Two new blacklist entries for machines (Samsung 730U3E/740U3E and
510R) where the native backlight interface doesn't work correctly
while the ACPI one does (Hans de Goede).

- Rework of the ACPI processor driver's handling of idle states to
make the code more straightforward and less bloated overall (Rafael
J Wysocki).

- Assorted minor fixes related to ACPI and SFI (Andreas Ruprecht,
Andy Shevchenko, Hanjun Guo, Jan Beulich, Rafael J Wysocki, Yaowei
Bai).

- PCI core power management modification to avoid resuming (some)
runtime-suspended devices during system suspend if they are in the
right states already (Rafael J Wysocki).

- New SFI-based cpufreq driver for Intel platforms using SFI
(Srinidhi Kasagar).

- cpufreq core fixes, cleanups and simplifications (Viresh Kumar,
Doug Anderson, Wolfram Sang).

- SkyLake CPU support and other updates for the intel_pstate driver
(Kristen Carlson Accardi, Srinivas Pandruvada).

- cpufreq-dt driver cleanup (Markus Elfring).

- Init fix for the ARM big.LITTLE cpuidle driver (Sudeep Holla).

- Generic power domains core code fixes and cleanups (Ulf Hansson).

- Operating Performance Points (OPP) core code cleanups and kernel
documentation update (Nishanth Menon).

- New debugfs interface to make the list of PM QoS constraints
available to user space (Nishanth Menon).

- New devfreq driver for Tegra Activity Monitor (Tomeu Vizoso).

- New devfreq class (devfreq_event) to provide raw utilization data
to devfreq governors (Chanwoo Choi).

- Assorted minor fixes and cleanups related to power management
(Andreas Ruprecht, Krzysztof Kozlowski, Rickard Strandqvist, Pavel
Machek, Todd E Brandt, Wonhong Kwon).

- turbostat updates (Len Brown) and cpupower Makefile improvement
(Sriram Raghunathan)"

* tag 'pm+acpi-3.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (151 commits)
tools/power turbostat: relax dependency on APERF_MSR
tools/power turbostat: relax dependency on invariant TSC
Merge branch 'pci/host-generic' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci into acpi-resources
tools/power turbostat: decode MSR_*_PERF_LIMIT_REASONS
tools/power turbostat: relax dependency on root permission
ACPI / video: Add disable_native_backlight quirk for Samsung 510R
ACPI / PM: Remove unneeded nested #ifdef
USB / PM: Remove unneeded #ifdef and associated dead code
intel_pstate: provide option to only use intel_pstate with HWP
ACPI / EC: Add GPE reference counting debugging messages
ACPI / EC: Add query flushing support
ACPI / EC: Refine command storm prevention support
ACPI / EC: Add command flushing support.
ACPI / EC: Introduce STARTED/STOPPED flags to replace BLOCKED flag
ACPI: add AMD ACPI2Platform device support for x86 system
ACPI / table: remove duplicate NULL check for the handler of acpi_table_parse()
ACPI / EC: Update revision due to raw handler mode.
ACPI / EC: Reduce ec_poll() by referencing the last register access timestamp.
ACPI / EC: Fix several GPE handling issues by deploying ACPI_GPE_DISPATCH_RAW_HANDLER mode.
ACPICA: Events: Enable APIs to allow interrupt/polling adaptive request based GPE handling model
...

+5325 -1837
+1 -1
Documentation/acpi/enumeration.txt
···
 		.owner	= THIS_MODULE,
 		.pm	= &mpu3050_pm,
 		.of_match_table = mpu3050_of_match,
-		.acpi_match_table ACPI_PTR(mpu3050_acpi_match),
+		.acpi_match_table = ACPI_PTR(mpu3050_acpi_match),
 	},
 	.probe	= mpu3050_probe,
 	.remove	= mpu3050_remove,
+8
Documentation/cpu-freq/intel-pstate.txt
···
 no_turbo: limits the driver to selecting P states below the turbo
 frequency range.
 
+turbo_pct: displays the percentage of the total performance that
+is supported by hardware that is in the turbo range. This number
+is independent of whether turbo has been disabled or not.
+
+num_pstates: displays the number of pstates that are supported
+by hardware. This number is independent of whether turbo has
+been disabled or not.
+
 For contemporary Intel processors, the frequency is controlled by the
 processor itself and the P-states exposed to software are related to
 performance levels. The idea that frequency can be set to a single
+110
Documentation/devicetree/bindings/devfreq/event/exynos-ppmu.txt
···
+
+* Samsung Exynos PPMU (Platform Performance Monitoring Unit) device
+
+The Samsung Exynos SoC has a PPMU (Platform Performance Monitoring Unit) for
+each IP. The PPMU provides the primitive values to get performance data. These
+PPMU events provide information of the SoC's behaviors that you may
+use to analyze system performance, to make behaviors visible and to count
+usages of each IP (DMC, CPU, RIGHTBUS, LEFTBUS, CAM interface, LCD, G3D, MFC).
+The Exynos PPMU driver uses the devfreq-event class to provide event data
+to various devfreq devices. The devfreq devices would use the event data when
+determining the current state of each IP.
+
+Required properties:
+- compatible: Should be "samsung,exynos-ppmu".
+- reg: physical base address of each PPMU and length of memory mapped region.
+
+Optional properties:
+- clock-names : the name of clock used by the PPMU, "ppmu"
+- clocks : phandles for clock specified in "clock-names" property
+- #clock-cells: should be 1.
+
+Example1 : PPMU nodes in exynos3250.dtsi are listed below.
+
+	ppmu_dmc0: ppmu_dmc0@106a0000 {
+		compatible = "samsung,exynos-ppmu";
+		reg = <0x106a0000 0x2000>;
+		status = "disabled";
+	};
+
+	ppmu_dmc1: ppmu_dmc1@106b0000 {
+		compatible = "samsung,exynos-ppmu";
+		reg = <0x106b0000 0x2000>;
+		status = "disabled";
+	};
+
+	ppmu_cpu: ppmu_cpu@106c0000 {
+		compatible = "samsung,exynos-ppmu";
+		reg = <0x106c0000 0x2000>;
+		status = "disabled";
+	};
+
+	ppmu_rightbus: ppmu_rightbus@112a0000 {
+		compatible = "samsung,exynos-ppmu";
+		reg = <0x112a0000 0x2000>;
+		clocks = <&cmu CLK_PPMURIGHT>;
+		clock-names = "ppmu";
+		status = "disabled";
+	};
+
+	ppmu_leftbus: ppmu_leftbus0@116a0000 {
+		compatible = "samsung,exynos-ppmu";
+		reg = <0x116a0000 0x2000>;
+		clocks = <&cmu CLK_PPMULEFT>;
+		clock-names = "ppmu";
+		status = "disabled";
+	};
+
+Example2 : Events of each PPMU node in exynos3250-rinato.dts are listed below.
+
+&ppmu_dmc0 {
+	status = "okay";
+
+	events {
+		ppmu_dmc0_3: ppmu-event3-dmc0 {
+			event-name = "ppmu-event3-dmc0";
+		};
+
+		ppmu_dmc0_2: ppmu-event2-dmc0 {
+			event-name = "ppmu-event2-dmc0";
+		};
+
+		ppmu_dmc0_1: ppmu-event1-dmc0 {
+			event-name = "ppmu-event1-dmc0";
+		};
+
+		ppmu_dmc0_0: ppmu-event0-dmc0 {
+			event-name = "ppmu-event0-dmc0";
+		};
+	};
+};
+
+&ppmu_dmc1 {
+	status = "okay";
+
+	events {
+		ppmu_dmc1_3: ppmu-event3-dmc1 {
+			event-name = "ppmu-event3-dmc1";
+		};
+	};
+};
+
+&ppmu_leftbus {
+	status = "okay";
+
+	events {
+		ppmu_leftbus_3: ppmu-event3-leftbus {
+			event-name = "ppmu-event3-leftbus";
+		};
+	};
+};
+
+&ppmu_rightbus {
+	status = "okay";
+
+	events {
+		ppmu_rightbus_3: ppmu-event3-rightbus {
+			event-name = "ppmu-event3-rightbus";
+		};
+	};
+};
+3
Documentation/kernel-parameters.txt
···
 			no_hwp
 			  Do not enable hardware P state control (HWP)
 			  if available.
+			hwp_only
+			  Only load intel_pstate on systems which support
+			  hardware P state control (HWP) if available.
 
 	intremap=	[X86-64, Intel-IOMMU]
 			on	enable Interrupt Remapping (default)
+4
Documentation/power/s2ram.txt
···
 hardware during resume operations where a value can be set that will
 survive a reboot.
 
+pm_trace is not compatible with asynchronous suspend, so it turns
+asynchronous suspend off (which may work around timing or
+ordering-sensitive bugs).
+
 Consequence is that after a resume (even if it is successful) your system
 clock will have a value corresponding to the magic number instead of the
 correct date/time! It is therefore advisable to use a program like ntp-date
+2 -2
MAINTAINERS
···
 F:	drivers/pnp/pnpacpi/
 F:	include/linux/acpi.h
 F:	include/acpi/
-F:	Documentation/acpi
+F:	Documentation/acpi/
 F:	Documentation/ABI/testing/sysfs-bus-acpi
 F:	drivers/pci/*acpi*
 F:	drivers/pci/*/*acpi*
 F:	drivers/pci/*/*/*acpi*
-F:	tools/power/acpi
+F:	tools/power/acpi/
 
 ACPI COMPONENT ARCHITECTURE (ACPICA)
 M:	Robert Moore <robert.moore@intel.com>
+2 -3
arch/arm/kernel/bios32.c
···
 static int pcibios_init_resources(int busnr, struct pci_sys_data *sys)
 {
 	int ret;
-	struct pci_host_bridge_window *window;
+	struct resource_entry *window;
 
 	if (list_empty(&sys->resources)) {
 		pci_add_resource_offset(&sys->resources,
 					&iomem_resource, sys->mem_offset);
 	}
 
-	list_for_each_entry(window, &sys->resources, list) {
+	resource_list_for_each_entry(window, &sys->resources)
 		if (resource_type(window->res) == IORESOURCE_IO)
 			return 0;
-	}
 
 	sys->io_res.start = (busnr * SZ_64K) ? : pcibios_min_io;
 	sys->io_res.end = (busnr + 1) * SZ_64K - 1;
+3 -3
arch/ia64/kernel/acpi-ext.c
···
 	status = acpi_resource_to_address64(resource, &addr);
 	if (ACPI_SUCCESS(status) &&
 	    addr.resource_type == ACPI_MEMORY_RANGE &&
-	    addr.address_length &&
+	    addr.address.address_length &&
 	    addr.producer_consumer == ACPI_CONSUMER) {
-		space->base = addr.minimum;
-		space->length = addr.address_length;
+		space->base = addr.address.minimum;
+		space->length = addr.address.address_length;
 		return AE_CTRL_TERMINATE;
 	}
 	return AE_OK;		/* keep looking */
-6
arch/ia64/kernel/acpi.c
···
 
 static int __init acpi_parse_madt(struct acpi_table_header *table)
 {
-	if (!table)
-		return -EINVAL;
-
 	acpi_madt = (struct acpi_table_madt *)table;
 
 	acpi_madt_rev = acpi_madt->header.revision;
···
 {
 	struct acpi_table_header *fadt_header;
 	struct acpi_table_fadt *fadt;
-
-	if (!table)
-		return -EINVAL;
 
 	fadt_header = (struct acpi_table_header *)table;
 	if (fadt_header->revision != 3)
+7 -7
arch/ia64/pci/pci.c
···
 
 	name = (char *)(iospace + 1);
 
-	min = addr->minimum;
-	max = min + addr->address_length - 1;
+	min = addr->address.minimum;
+	max = min + addr->address.address_length - 1;
 	if (addr->info.io.translation_type == ACPI_SPARSE_TRANSLATION)
 		sparse = 1;
 
-	space_nr = new_space(addr->translation_offset, sparse);
+	space_nr = new_space(addr->address.translation_offset, sparse);
 	if (space_nr == ~0)
 		goto free_resource;
 
···
 	if (ACPI_SUCCESS(status) &&
 	    (addr->resource_type == ACPI_MEMORY_RANGE ||
 	     addr->resource_type == ACPI_IO_RANGE) &&
-	    addr->address_length &&
+	    addr->address.address_length &&
 	    addr->producer_consumer == ACPI_PRODUCER)
 		return AE_OK;
 
···
 	if (addr.resource_type == ACPI_MEMORY_RANGE) {
 		flags = IORESOURCE_MEM;
 		root = &iomem_resource;
-		offset = addr.translation_offset;
+		offset = addr.address.translation_offset;
 	} else if (addr.resource_type == ACPI_IO_RANGE) {
 		flags = IORESOURCE_IO;
 		root = &ioport_resource;
···
 	resource = &info->res[info->res_num];
 	resource->name = info->name;
 	resource->flags = flags;
-	resource->start = addr.minimum + offset;
-	resource->end = resource->start + addr.address_length - 1;
+	resource->start = addr.address.minimum + offset;
+	resource->end = resource->start + addr.address.address_length - 1;
 	info->res_offset[info->res_num] = offset;
 
 	if (insert_resource(root, resource)) {
+11
arch/x86/Kconfig
···
 	  things like clock tree (common clock framework) and pincontrol
 	  which are needed by the LPSS peripheral drivers.
 
+config X86_AMD_PLATFORM_DEVICE
+	bool "AMD ACPI2Platform devices support"
+	depends on ACPI
+	select COMMON_CLK
+	select PINCTRL
+	---help---
+	  Select to interpret AMD specific ACPI device to platform device
+	  such as I2C, UART, GPIO found on AMD Carrizo and later chipsets.
+	  I2C and UART depend on COMMON_CLK to set clock. GPIO driver is
+	  implemented under PINCTRL subsystem.
+
 config IOSF_MBI
 	tristate "Intel SoC IOSF Sideband support for SoC platforms"
 	depends on PCI
-2
arch/x86/include/asm/pci_x86.h
···
 extern int (*pcibios_enable_irq)(struct pci_dev *dev);
 extern void (*pcibios_disable_irq)(struct pci_dev *dev);
 
-extern bool mp_should_keep_irq(struct device *dev);
-
 struct pci_raw_ops {
 	int (*read)(unsigned int domain, unsigned int bus, unsigned int devfn,
 						int reg, int len, u32 *val);
+5
arch/x86/include/uapi/asm/msr-index.h
···
 #define MSR_CC6_DEMOTION_POLICY_CONFIG	0x00000668
 #define MSR_MC6_DEMOTION_POLICY_CONFIG	0x00000669
 
+#define MSR_CORE_PERF_LIMIT_REASONS	0x00000690
+#define MSR_GFX_PERF_LIMIT_REASONS	0x000006B0
+#define MSR_RING_PERF_LIMIT_REASONS	0x000006B1
+
 /* Hardware P state interface */
 #define MSR_PPERF			0x0000064e
 #define MSR_PERF_LIMIT_REASONS		0x0000064f
···
 
 #define MSR_IA32_PERF_STATUS		0x00000198
 #define MSR_IA32_PERF_CTL		0x00000199
+#define INTEL_PERF_CTL_MASK		0xffff
 #define MSR_AMD_PSTATE_DEF_BASE		0xc0010064
 #define MSR_AMD_PERF_STATUS		0xc0010063
 #define MSR_AMD_PERF_CTL		0xc0010062
+2 -14
arch/x86/kernel/acpi/boot.c
···
 
 static int __init acpi_parse_sbf(struct acpi_table_header *table)
 {
-	struct acpi_table_boot *sb;
-
-	sb = (struct acpi_table_boot *)table;
-	if (!sb) {
-		printk(KERN_WARNING PREFIX "Unable to map SBF\n");
-		return -ENODEV;
-	}
+	struct acpi_table_boot *sb = (struct acpi_table_boot *)table;
 
 	sbf_port = sb->cmos_index;	/* Save CMOS port */
 
···
 
 static int __init acpi_parse_hpet(struct acpi_table_header *table)
 {
-	struct acpi_table_hpet *hpet_tbl;
-
-	hpet_tbl = (struct acpi_table_hpet *)table;
-	if (!hpet_tbl) {
-		printk(KERN_WARNING PREFIX "Unable to map HPET\n");
-		return -ENODEV;
-	}
+	struct acpi_table_hpet *hpet_tbl = (struct acpi_table_hpet *)table;
 
 	if (hpet_tbl->address.space_id != ACPI_SPACE_MEM) {
 		printk(KERN_WARNING PREFIX "HPET timers must be located in "
+94 -205
arch/x86/pci/acpi.c
···
 struct pci_root_info {
 	struct acpi_device *bridge;
 	char name[16];
-	unsigned int res_num;
-	struct resource *res;
-	resource_size_t *res_offset;
 	struct pci_sysdata sd;
 #ifdef CONFIG_PCI_MMCONFIG
 	bool mcfg_added;
···
 }
 #endif
 
-static acpi_status resource_to_addr(struct acpi_resource *resource,
-				    struct acpi_resource_address64 *addr)
+static void validate_resources(struct device *dev, struct list_head *crs_res,
+			       unsigned long type)
 {
-	acpi_status status;
-	struct acpi_resource_memory24 *memory24;
-	struct acpi_resource_memory32 *memory32;
-	struct acpi_resource_fixed_memory32 *fixed_memory32;
+	LIST_HEAD(list);
+	struct resource *res1, *res2, *root = NULL;
+	struct resource_entry *tmp, *entry, *entry2;
 
-	memset(addr, 0, sizeof(*addr));
-	switch (resource->type) {
-	case ACPI_RESOURCE_TYPE_MEMORY24:
-		memory24 = &resource->data.memory24;
-		addr->resource_type = ACPI_MEMORY_RANGE;
-		addr->minimum = memory24->minimum;
-		addr->address_length = memory24->address_length;
-		addr->maximum = addr->minimum + addr->address_length - 1;
-		return AE_OK;
-	case ACPI_RESOURCE_TYPE_MEMORY32:
-		memory32 = &resource->data.memory32;
-		addr->resource_type = ACPI_MEMORY_RANGE;
-		addr->minimum = memory32->minimum;
-		addr->address_length = memory32->address_length;
-		addr->maximum = addr->minimum + addr->address_length - 1;
-		return AE_OK;
-	case ACPI_RESOURCE_TYPE_FIXED_MEMORY32:
-		fixed_memory32 = &resource->data.fixed_memory32;
-		addr->resource_type = ACPI_MEMORY_RANGE;
-		addr->minimum = fixed_memory32->address;
-		addr->address_length = fixed_memory32->address_length;
-		addr->maximum = addr->minimum + addr->address_length - 1;
-		return AE_OK;
-	case ACPI_RESOURCE_TYPE_ADDRESS16:
-	case ACPI_RESOURCE_TYPE_ADDRESS32:
-	case ACPI_RESOURCE_TYPE_ADDRESS64:
-		status = acpi_resource_to_address64(resource, addr);
-		if (ACPI_SUCCESS(status) &&
-		    (addr->resource_type == ACPI_MEMORY_RANGE ||
-		    addr->resource_type == ACPI_IO_RANGE) &&
-		    addr->address_length > 0) {
-			return AE_OK;
-		}
-		break;
-	}
-	return AE_ERROR;
-}
+	BUG_ON((type & (IORESOURCE_MEM | IORESOURCE_IO)) == 0);
+	root = (type & IORESOURCE_MEM) ? &iomem_resource : &ioport_resource;
 
-static acpi_status count_resource(struct acpi_resource *acpi_res, void *data)
-{
-	struct pci_root_info *info = data;
-	struct acpi_resource_address64 addr;
-	acpi_status status;
+	list_splice_init(crs_res, &list);
+	resource_list_for_each_entry_safe(entry, tmp, &list) {
+		bool free = false;
+		resource_size_t end;
 
-	status = resource_to_addr(acpi_res, &addr);
-	if (ACPI_SUCCESS(status))
-		info->res_num++;
-	return AE_OK;
-}
-
-static acpi_status setup_resource(struct acpi_resource *acpi_res, void *data)
-{
-	struct pci_root_info *info = data;
-	struct resource *res;
-	struct acpi_resource_address64 addr;
-	acpi_status status;
-	unsigned long flags;
-	u64 start, orig_end, end;
-
-	status = resource_to_addr(acpi_res, &addr);
-	if (!ACPI_SUCCESS(status))
-		return AE_OK;
-
-	if (addr.resource_type == ACPI_MEMORY_RANGE) {
-		flags = IORESOURCE_MEM;
-		if (addr.info.mem.caching == ACPI_PREFETCHABLE_MEMORY)
-			flags |= IORESOURCE_PREFETCH;
-	} else if (addr.resource_type == ACPI_IO_RANGE) {
-		flags = IORESOURCE_IO;
-	} else
-		return AE_OK;
-
-	start = addr.minimum + addr.translation_offset;
-	orig_end = end = addr.maximum + addr.translation_offset;
-
-	/* Exclude non-addressable range or non-addressable portion of range */
-	end = min(end, (u64)iomem_resource.end);
-	if (end <= start) {
-		dev_info(&info->bridge->dev,
-			 "host bridge window [%#llx-%#llx] "
-			 "(ignored, not CPU addressable)\n", start, orig_end);
-		return AE_OK;
-	} else if (orig_end != end) {
-		dev_info(&info->bridge->dev,
-			 "host bridge window [%#llx-%#llx] "
-			 "([%#llx-%#llx] ignored, not CPU addressable)\n",
-			 start, orig_end, end + 1, orig_end);
-	}
-
-	res = &info->res[info->res_num];
-	res->name = info->name;
-	res->flags = flags;
-	res->start = start;
-	res->end = end;
-	info->res_offset[info->res_num] = addr.translation_offset;
-	info->res_num++;
-
-	if (!pci_use_crs)
-		dev_printk(KERN_DEBUG, &info->bridge->dev,
-			   "host bridge window %pR (ignored)\n", res);
-
-	return AE_OK;
-}
-
-static void coalesce_windows(struct pci_root_info *info, unsigned long type)
-{
-	int i, j;
-	struct resource *res1, *res2;
-
-	for (i = 0; i < info->res_num; i++) {
-		res1 = &info->res[i];
+		res1 = entry->res;
 		if (!(res1->flags & type))
-			continue;
+			goto next;
 
-		for (j = i + 1; j < info->res_num; j++) {
-			res2 = &info->res[j];
+		/* Exclude non-addressable range or non-addressable portion */
+		end = min(res1->end, root->end);
+		if (end <= res1->start) {
+			dev_info(dev, "host bridge window %pR (ignored, not CPU addressable)\n",
+				 res1);
+			free = true;
+			goto next;
+		} else if (res1->end != end) {
+			dev_info(dev, "host bridge window %pR ([%#llx-%#llx] ignored, not CPU addressable)\n",
+				 res1, (unsigned long long)end + 1,
+				 (unsigned long long)res1->end);
+			res1->end = end;
+		}
+
+		resource_list_for_each_entry(entry2, crs_res) {
+			res2 = entry2->res;
 			if (!(res2->flags & type))
 				continue;
···
 			if (resource_overlaps(res1, res2)) {
 				res2->start = min(res1->start, res2->start);
 				res2->end = max(res1->end, res2->end);
-				dev_info(&info->bridge->dev,
-					 "host bridge window expanded to %pR; %pR ignored\n",
+				dev_info(dev, "host bridge window expanded to %pR; %pR ignored\n",
 					 res2, res1);
-				res1->flags = 0;
+				free = true;
+				goto next;
 			}
 		}
+
+next:
+		resource_list_del(entry);
+		if (free)
+			resource_list_free_entry(entry);
+		else
+			resource_list_add_tail(entry, crs_res);
 	}
 }
 
 static void add_resources(struct pci_root_info *info,
-			  struct list_head *resources)
+			  struct list_head *resources,
+			  struct list_head *crs_res)
 {
-	int i;
-	struct resource *res, *root, *conflict;
+	struct resource_entry *entry, *tmp;
+	struct resource *res, *conflict, *root = NULL;
 
-	coalesce_windows(info, IORESOURCE_MEM);
-	coalesce_windows(info, IORESOURCE_IO);
+	validate_resources(&info->bridge->dev, crs_res, IORESOURCE_MEM);
+	validate_resources(&info->bridge->dev, crs_res, IORESOURCE_IO);
 
-	for (i = 0; i < info->res_num; i++) {
-		res = &info->res[i];
-
+	resource_list_for_each_entry_safe(entry, tmp, crs_res) {
+		res = entry->res;
 		if (res->flags & IORESOURCE_MEM)
 			root = &iomem_resource;
 		else if (res->flags & IORESOURCE_IO)
 			root = &ioport_resource;
 		else
-			continue;
+			BUG_ON(res);
 
 		conflict = insert_resource_conflict(root, res);
-		if (conflict)
+		if (conflict) {
 			dev_info(&info->bridge->dev,
 				 "ignoring host bridge window %pR (conflicts with %s %pR)\n",
 				 res, conflict->name, conflict);
-		else
-			pci_add_resource_offset(resources, res,
-					info->res_offset[i]);
-	}
-}
-
-static void free_pci_root_info_res(struct pci_root_info *info)
-{
-	kfree(info->res);
-	info->res = NULL;
-	kfree(info->res_offset);
-	info->res_offset = NULL;
-	info->res_num = 0;
-}
-
-static void __release_pci_root_info(struct pci_root_info *info)
-{
-	int i;
-	struct resource *res;
-
-	for (i = 0; i < info->res_num; i++) {
-		res = &info->res[i];
-
-		if (!res->parent)
-			continue;
-
-		if (!(res->flags & (IORESOURCE_MEM | IORESOURCE_IO)))
-			continue;
-
-		release_resource(res);
+			resource_list_destroy_entry(entry);
+		}
 	}
 
-	free_pci_root_info_res(info);
-
-	teardown_mcfg_map(info);
-
-	kfree(info);
+	list_splice_tail(crs_res, resources);
 }
 
 static void release_pci_root_info(struct pci_host_bridge *bridge)
 {
+	struct resource *res;
+	struct resource_entry *entry;
 	struct pci_root_info *info = bridge->release_data;
 
-	__release_pci_root_info(info);
+	resource_list_for_each_entry(entry, &bridge->windows) {
+		res = entry->res;
+		if (res->parent &&
+		    (res->flags & (IORESOURCE_MEM | IORESOURCE_IO)))
+			release_resource(res);
+	}
+
+	teardown_mcfg_map(info);
+	kfree(info);
 }
 
 static void probe_pci_root_info(struct pci_root_info *info,
 				struct acpi_device *device,
-				int busnum, int domain)
+				int busnum, int domain,
+				struct list_head *list)
 {
-	size_t size;
+	int ret;
+	struct resource_entry *entry;
 
 	sprintf(info->name, "PCI Bus %04x:%02x", domain, busnum);
 	info->bridge = device;
-
-	info->res_num = 0;
-	acpi_walk_resources(device->handle, METHOD_NAME__CRS, count_resource,
-				info);
-	if (!info->res_num)
-		return;
-
-	size = sizeof(*info->res) * info->res_num;
-	info->res = kzalloc_node(size, GFP_KERNEL, info->sd.node);
-	if (!info->res) {
-		info->res_num = 0;
-		return;
-	}
-
-	size = sizeof(*info->res_offset) * info->res_num;
-	info->res_num = 0;
-	info->res_offset = kzalloc_node(size, GFP_KERNEL, info->sd.node);
-	if (!info->res_offset) {
-		kfree(info->res);
-		info->res = NULL;
-		return;
-	}
-
-	acpi_walk_resources(device->handle, METHOD_NAME__CRS, setup_resource,
-				info);
+	ret = acpi_dev_get_resources(device, list,
+				     acpi_dev_filter_resource_type_cb,
+				     (void *)(IORESOURCE_IO | IORESOURCE_MEM));
+	if (ret < 0)
+		dev_warn(&device->dev,
+			 "failed to parse _CRS method, error code %d\n", ret);
+	else if (ret == 0)
+		dev_dbg(&device->dev,
+			"no IO and memory resources present in _CRS\n");
+	else
+		resource_list_for_each_entry(entry, list)
+			entry->res->name = info->name;
 }
 
 struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
···
 	struct pci_root_info *info;
 	int domain = root->segment;
 	int busnum = root->secondary.start;
+	struct resource_entry *res_entry;
+	LIST_HEAD(crs_res);
 	LIST_HEAD(resources);
 	struct pci_bus *bus;
 	struct pci_sysdata *sd;
···
 		memcpy(bus->sysdata, sd, sizeof(*sd));
 		kfree(info);
 	} else {
-		probe_pci_root_info(info, device, busnum, domain);
-
 		/* insert busn res at first */
 		pci_add_resource(&resources, &root->secondary);
+
 		/*
 		 * _CRS with no apertures is normal, so only fall back to
 		 * defaults or native bridge info if we're ignoring _CRS.
 		 */
-		if (pci_use_crs)
-			add_resources(info, &resources);
-		else {
-			free_pci_root_info_res(info);
+		probe_pci_root_info(info, device, busnum, domain, &crs_res);
+		if (pci_use_crs) {
+			add_resources(info, &resources, &crs_res);
+		} else {
+			resource_list_for_each_entry(res_entry, &crs_res)
+				dev_printk(KERN_DEBUG, &device->dev,
+					   "host bridge window %pR (ignored)\n",
+					   res_entry->res);
+			resource_list_free(&crs_res);
 			x86_pci_root_bus_resources(busnum, &resources);
 		}
 
···
 				to_pci_host_bridge(bus->bridge),
 				release_pci_root_info, info);
 	} else {
-		pci_free_resource_list(&resources);
-		__release_pci_root_info(info);
+		resource_list_free(&resources);
+		teardown_mcfg_map(info);
+		kfree(info);
 	}
 }
 
+2 -2
arch/x86/pci/bus_numa.c
···
 {
 	struct pci_root_info *info = x86_find_pci_root_info(bus);
 	struct pci_root_res *root_res;
-	struct pci_host_bridge_window *window;
+	struct resource_entry *window;
 	bool found = false;
 
 	if (!info)
···
 		 bus);
 
 	/* already added by acpi ? */
-	list_for_each_entry(window, resources, list)
+	resource_list_for_each_entry(window, resources)
 		if (window->res->flags & IORESOURCE_BUS) {
 			found = true;
 			break;
+28 -6
arch/x86/pci/common.c
···
 	}
 }
 
+/*
+ * Some device drivers assume dev->irq won't change after calling
+ * pci_disable_device(). So delay releasing of IRQ resource to driver
+ * unbinding time. Otherwise it will break PM subsystem and drivers
+ * like xen-pciback etc.
+ */
+static int pci_irq_notifier(struct notifier_block *nb, unsigned long action,
+			    void *data)
+{
+	struct pci_dev *dev = to_pci_dev(data);
+
+	if (action != BUS_NOTIFY_UNBOUND_DRIVER)
+		return NOTIFY_DONE;
+
+	if (pcibios_disable_irq)
+		pcibios_disable_irq(dev);
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block pci_irq_nb = {
+	.notifier_call = pci_irq_notifier,
+	.priority = INT_MIN,
+};
+
 int __init pcibios_init(void)
 {
 	if (!raw_pci_ops) {
···
 
 	if (pci_bf_sort >= pci_force_bf)
 		pci_sort_breadthfirst();
+
+	bus_register_notifier(&pci_bus_type, &pci_irq_nb);
+
 	return 0;
 }
···
 	if (!pci_dev_msi_enabled(dev))
 		return pcibios_enable_irq(dev);
 	return 0;
-}
-
-void pcibios_disable_device (struct pci_dev *dev)
-{
-	if (!pci_dev_msi_enabled(dev) && pcibios_disable_irq)
-		pcibios_disable_irq(dev);
 }
 
 int pci_ext_cfg_avail(void)
+2 -2
arch/x86/pci/intel_mid_pci.c
···
 
 static void intel_mid_pci_irq_disable(struct pci_dev *dev)
 {
-	if (!mp_should_keep_irq(&dev->dev) && dev->irq_managed &&
-	    dev->irq > 0) {
+	if (dev->irq_managed && dev->irq > 0) {
 		mp_unmap_irq(dev->irq);
 		dev->irq_managed = 0;
+		dev->irq = 0;
 	}
 }
 
+1 -14
arch/x86/pci/irq.c
···
 	return 0;
 }
 
-bool mp_should_keep_irq(struct device *dev)
-{
-	if (dev->power.is_prepared)
-		return true;
-#ifdef CONFIG_PM
-	if (dev->power.runtime_status == RPM_SUSPENDING)
-		return true;
-#endif
-
-	return false;
-}
-
 static void pirq_disable_irq(struct pci_dev *dev)
 {
-	if (io_apic_assign_pci_irqs && !mp_should_keep_irq(&dev->dev) &&
-	    dev->irq_managed && dev->irq) {
+	if (io_apic_assign_pci_irqs && dev->irq_managed && dev->irq) {
 		mp_unmap_irq(dev->irq);
 		dev->irq = 0;
 		dev->irq_managed = 0;
+3 -3
arch/x86/pci/mmconfig-shared.c
··· 397 397 398 398 status = acpi_resource_to_address64(res, &address); 399 399 if (ACPI_FAILURE(status) || 400 - (address.address_length <= 0) || 400 + (address.address.address_length <= 0) || 401 401 (address.resource_type != ACPI_MEMORY_RANGE)) 402 402 return AE_OK; 403 403 404 - if ((mcfg_res->start >= address.minimum) && 405 - (mcfg_res->end < (address.minimum + address.address_length))) { 404 + if ((mcfg_res->start >= address.address.minimum) && 405 + (mcfg_res->end < (address.address.minimum + address.address.address_length))) { 406 406 mcfg_res->flags = 1; 407 407 return AE_CTRL_TERMINATE; 408 408 }
+6
drivers/acpi/Kconfig
··· 315 315 To compile this driver as a module, choose M here: 316 316 the module will be called acpi_memhotplug. 317 317 318 + config ACPI_HOTPLUG_IOAPIC 319 + bool 320 + depends on PCI 321 + depends on X86_IO_APIC 322 + default y 323 + 318 324 config ACPI_SBS 319 325 tristate "Smart Battery System" 320 326 depends on X86
+2 -1
drivers/acpi/Makefile
··· 40 40 acpi-y += ec.o 41 41 acpi-$(CONFIG_ACPI_DOCK) += dock.o 42 42 acpi-y += pci_root.o pci_link.o pci_irq.o 43 - acpi-y += acpi_lpss.o 43 + acpi-y += acpi_lpss.o acpi_apd.o 44 44 acpi-y += acpi_platform.o 45 45 acpi-y += acpi_pnp.o 46 46 acpi-y += int340x_thermal.o ··· 70 70 obj-y += container.o 71 71 obj-$(CONFIG_ACPI_THERMAL) += thermal.o 72 72 obj-y += acpi_memhotplug.o 73 + obj-$(CONFIG_ACPI_HOTPLUG_IOAPIC) += ioapic.o 73 74 obj-$(CONFIG_ACPI_BATTERY) += battery.o 74 75 obj-$(CONFIG_ACPI_SBS) += sbshc.o 75 76 obj-$(CONFIG_ACPI_SBS) += sbs.o
+150
drivers/acpi/acpi_apd.c
··· 1 + /* 2 + * AMD ACPI support for ACPI2platform device. 3 + * 4 + * Copyright (c) 2014,2015 AMD Corporation. 5 + * Authors: Ken Xue <Ken.Xue@amd.com> 6 + * Wu, Jeff <Jeff.Wu@amd.com> 7 + * 8 + * This program is free software; you can redistribute it and/or modify 9 + * it under the terms of the GNU General Public License version 2 as 10 + * published by the Free Software Foundation. 11 + */ 12 + 13 + #include <linux/clk-provider.h> 14 + #include <linux/platform_device.h> 15 + #include <linux/pm_domain.h> 16 + #include <linux/clkdev.h> 17 + #include <linux/acpi.h> 18 + #include <linux/err.h> 19 + #include <linux/clk.h> 20 + #include <linux/pm.h> 21 + 22 + #include "internal.h" 23 + 24 + ACPI_MODULE_NAME("acpi_apd"); 25 + struct apd_private_data; 26 + 27 + /** 28 + * ACPI_APD_SYSFS : add device attributes in sysfs 29 + * ACPI_APD_PM : attach power domain to device 30 + */ 31 + #define ACPI_APD_SYSFS BIT(0) 32 + #define ACPI_APD_PM BIT(1) 33 + 34 + /** 35 + * struct apd_device_desc - a descriptor for apd device 36 + * @flags: device flags like %ACPI_APD_SYSFS, %ACPI_APD_PM 37 + * @fixed_clk_rate: fixed rate input clock source for acpi device; 38 + * 0 means no fixed rate input clock source 39 + * @setup: a hook routine to set device resource during create platform device 40 + * 41 + * Device description defined as acpi_device_id.driver_data 42 + */ 43 + struct apd_device_desc { 44 + unsigned int flags; 45 + unsigned int fixed_clk_rate; 46 + int (*setup)(struct apd_private_data *pdata); 47 + }; 48 + 49 + struct apd_private_data { 50 + struct clk *clk; 51 + struct acpi_device *adev; 52 + const struct apd_device_desc *dev_desc; 53 + }; 54 + 55 + #ifdef CONFIG_X86_AMD_PLATFORM_DEVICE 56 + #define APD_ADDR(desc) ((unsigned long)&desc) 57 + 58 + static int acpi_apd_setup(struct apd_private_data *pdata) 59 + { 60 + const struct apd_device_desc *dev_desc = pdata->dev_desc; 61 + struct clk *clk = ERR_PTR(-ENODEV); 62 + 63 + if (dev_desc->fixed_clk_rate) { 64 + clk = clk_register_fixed_rate(&pdata->adev->dev, 65 + dev_name(&pdata->adev->dev), 66 + NULL, CLK_IS_ROOT, 67 + dev_desc->fixed_clk_rate); 68 + clk_register_clkdev(clk, NULL, dev_name(&pdata->adev->dev)); 69 + pdata->clk = clk; 70 + } 71 + 72 + return 0; 73 + } 74 + 75 + static struct apd_device_desc cz_i2c_desc = { 76 + .setup = acpi_apd_setup, 77 + .fixed_clk_rate = 133000000, 78 + }; 79 + 80 + static struct apd_device_desc cz_uart_desc = { 81 + .setup = acpi_apd_setup, 82 + .fixed_clk_rate = 48000000, 83 + }; 84 + 85 + #else 86 + 87 + #define APD_ADDR(desc) (0UL) 88 + 89 + #endif /* CONFIG_X86_AMD_PLATFORM_DEVICE */ 90 + 91 + /** 92 + * Create platform device during acpi scan attach handle. 93 + * Return value > 0 on success of creating device. 94 + */ 95 + static int acpi_apd_create_device(struct acpi_device *adev, 96 + const struct acpi_device_id *id) 97 + { 98 + const struct apd_device_desc *dev_desc = (void *)id->driver_data; 99 + struct apd_private_data *pdata; 100 + struct platform_device *pdev; 101 + int ret; 102 + 103 + if (!dev_desc) { 104 + pdev = acpi_create_platform_device(adev); 105 + return IS_ERR_OR_NULL(pdev) ? PTR_ERR(pdev) : 1; 106 + } 107 + 108 + pdata = kzalloc(sizeof(*pdata), GFP_KERNEL); 109 + if (!pdata) 110 + return -ENOMEM; 111 + 112 + pdata->adev = adev; 113 + pdata->dev_desc = dev_desc; 114 + 115 + if (dev_desc->setup) { 116 + ret = dev_desc->setup(pdata); 117 + if (ret) 118 + goto err_out; 119 + } 120 + 121 + adev->driver_data = pdata; 122 + pdev = acpi_create_platform_device(adev); 123 + if (!IS_ERR_OR_NULL(pdev)) 124 + return 1; 125 + 126 + ret = PTR_ERR(pdev); 127 + adev->driver_data = NULL; 128 + 129 + err_out: 130 + kfree(pdata); 131 + return ret; 132 + } 133 + 134 + static const struct acpi_device_id acpi_apd_device_ids[] = { 135 + /* Generic apd devices */ 136 + { "AMD0010", APD_ADDR(cz_i2c_desc) }, 137 + { "AMD0020", APD_ADDR(cz_uart_desc) }, 138 + { "AMD0030", }, 139 + { } 140 + }; 141 + 142 + static struct acpi_scan_handler apd_handler = { 143 + .ids = acpi_apd_device_ids, 144 + .attach = acpi_apd_create_device, 145 + }; 146 + 147 + void __init acpi_apd_init(void) 148 + { 149 + acpi_scan_add_handler(&apd_handler); 150 + }
+7 -5
drivers/acpi/acpi_lpss.c
··· 125 125 }; 126 126 127 127 static struct lpss_device_desc lpt_i2c_dev_desc = { 128 - .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_LTR, 128 + .flags = LPSS_CLK | LPSS_LTR, 129 129 .prv_offset = 0x800, 130 130 }; 131 131 ··· 307 307 { 308 308 struct lpss_device_desc *dev_desc; 309 309 struct lpss_private_data *pdata; 310 - struct resource_list_entry *rentry; 310 + struct resource_entry *rentry; 311 311 struct list_head resource_list; 312 312 struct platform_device *pdev; 313 313 int ret; ··· 327 327 goto err_out; 328 328 329 329 list_for_each_entry(rentry, &resource_list, node) 330 - if (resource_type(&rentry->res) == IORESOURCE_MEM) { 330 + if (resource_type(rentry->res) == IORESOURCE_MEM) { 331 331 if (dev_desc->prv_size_override) 332 332 pdata->mmio_size = dev_desc->prv_size_override; 333 333 else 334 - pdata->mmio_size = resource_size(&rentry->res); 335 - pdata->mmio_base = ioremap(rentry->res.start, 334 + pdata->mmio_size = resource_size(rentry->res); 335 + pdata->mmio_base = ioremap(rentry->res->start, 336 336 pdata->mmio_size); 337 + if (!pdata->mmio_base) 338 + goto err_out; 337 339 break; 338 340 } 339 341
+4 -4
drivers/acpi/acpi_memhotplug.c
··· 101 101 /* Can we combine the resource range information? */ 102 102 if ((info->caching == address64.info.mem.caching) && 103 103 (info->write_protect == address64.info.mem.write_protect) && 104 - (info->start_addr + info->length == address64.minimum)) { 105 - info->length += address64.address_length; 104 + (info->start_addr + info->length == address64.address.minimum)) { 105 + info->length += address64.address.address_length; 106 106 return AE_OK; 107 107 } 108 108 } ··· 114 114 INIT_LIST_HEAD(&new->list); 115 115 new->caching = address64.info.mem.caching; 116 116 new->write_protect = address64.info.mem.write_protect; 117 - new->start_addr = address64.minimum; 118 - new->length = address64.address_length; 117 + new->start_addr = address64.address.minimum; 118 + new->length = address64.address.address_length; 119 119 list_add_tail(&new->list, &mem_device->res_list); 120 120 121 121 return AE_OK;
+2 -2
drivers/acpi/acpi_platform.c
··· 45 45 struct platform_device *pdev = NULL; 46 46 struct acpi_device *acpi_parent; 47 47 struct platform_device_info pdevinfo; 48 - struct resource_list_entry *rentry; 48 + struct resource_entry *rentry; 49 49 struct list_head resource_list; 50 50 struct resource *resources = NULL; 51 51 int count; ··· 71 71 } 72 72 count = 0; 73 73 list_for_each_entry(rentry, &resource_list, node) 74 - resources[count++] = rentry->res; 74 + resources[count++] = *rentry->res; 75 75 76 76 acpi_dev_free_resource_list(&resource_list); 77 77 }
+2 -2
drivers/acpi/acpica/acapps.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 47 47 /* Common info for tool signons */ 48 48 49 49 #define ACPICA_NAME "Intel ACPI Component Architecture" 50 - #define ACPICA_COPYRIGHT "Copyright (c) 2000 - 2014 Intel Corporation" 50 + #define ACPICA_COPYRIGHT "Copyright (c) 2000 - 2015 Intel Corporation" 51 51 52 52 #if ACPI_MACHINE_WIDTH == 64 53 53 #define ACPI_WIDTH "-64"
+1 -1
drivers/acpi/acpica/accommon.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/acdebug.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/acdispat.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -3
drivers/acpi/acpica/acevents.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 142 142 */ 143 143 acpi_status 144 144 acpi_ev_walk_gpe_list(acpi_gpe_callback gpe_walk_callback, void *context); 145 - 146 - u8 acpi_ev_valid_gpe_event(struct acpi_gpe_event_info *gpe_event_info); 147 145 148 146 acpi_status 149 147 acpi_ev_get_gpe_device(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
+1 -1
drivers/acpi/acpica/acglobal.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/achware.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/acinterp.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/aclocal.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/acmacros.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/acnamesp.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/acobject.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/acopcode.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/acparser.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/acpredef.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/acresrc.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/acstruct.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/actables.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/acutils.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/amlcode.h
··· 7 7 *****************************************************************************/ 8 8 9 9 /* 10 - * Copyright (C) 2000 - 2014, Intel Corp. 10 + * Copyright (C) 2000 - 2015, Intel Corp. 11 11 * All rights reserved. 12 12 * 13 13 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/amlresrc.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/dsargs.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2014, Intel Corp. 9 + * Copyright (C) 2000 - 2015, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/dscontrol.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2014, Intel Corp. 9 + * Copyright (C) 2000 - 2015, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/dsfield.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/dsinit.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/dsmethod.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/dsmthdat.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/dsobject.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/dsopcode.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/dsutils.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/dswexec.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2014, Intel Corp. 9 + * Copyright (C) 2000 - 2015, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/dswload.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/dswload2.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/dswscope.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/dswstate.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/evevent.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/evglock.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+83 -81
drivers/acpi/acpica/evgpe.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 113 113 acpi_status status; 114 114 115 115 ACPI_FUNCTION_TRACE(ev_enable_gpe); 116 - 117 - /* 118 - * We will only allow a GPE to be enabled if it has either an associated 119 - * method (_Lxx/_Exx) or a handler, or is using the implicit notify 120 - * feature. Otherwise, the GPE will be immediately disabled by 121 - * acpi_ev_gpe_dispatch the first time it fires. 122 - */ 123 - if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) == 124 - ACPI_GPE_DISPATCH_NONE) { 125 - return_ACPI_STATUS(AE_NO_HANDLER); 126 - } 127 116 128 117 /* Clear the GPE (of stale events) */ 129 118 ··· 328 339 { 329 340 acpi_status status; 330 341 struct acpi_gpe_block_info *gpe_block; 342 + struct acpi_namespace_node *gpe_device; 331 343 struct acpi_gpe_register_info *gpe_register_info; 344 + struct acpi_gpe_event_info *gpe_event_info; 345 + u32 gpe_number; 346 + struct acpi_gpe_handler_info *gpe_handler_info; 332 347 u32 int_status = ACPI_INTERRUPT_NOT_HANDLED; 333 348 u8 enabled_status_byte; 334 349 u32 status_reg; ··· 360 367 361 368 gpe_block = gpe_xrupt_list->gpe_block_list_head; 362 369 while (gpe_block) { 370 + gpe_device = gpe_block->node; 371 + 363 372 /* 364 373 * Read all of the 8-bit GPE status and enable registers in this GPE 365 374 * block, saving all of them. Find all currently active GP events. ··· 437 442 438 443 /* Examine one GPE bit */ 439 444 445 + gpe_event_info = 446 + &gpe_block-> 447 + event_info[((acpi_size) i * 448 + ACPI_GPE_REGISTER_WIDTH) + j]; 449 + gpe_number = 450 + j + gpe_register_info->base_gpe_number; 451 + 440 452 if (enabled_status_byte & (1 << j)) { 441 - /* 442 - * Found an active GPE. Dispatch the event to a handler 443 - * or method. 444 - */ 445 - int_status |= 446 - acpi_ev_gpe_dispatch(gpe_block-> 447 - node, 448 - &gpe_block-> 449 - event_info[((acpi_size) i * ACPI_GPE_REGISTER_WIDTH) + j], j + gpe_register_info->base_gpe_number); 453 + 454 + /* Invoke global event handler if present */ 455 + 456 + acpi_gpe_count++; 457 + if (acpi_gbl_global_event_handler) { 458 + acpi_gbl_global_event_handler 459 + (ACPI_EVENT_TYPE_GPE, 460 + gpe_device, gpe_number, 461 + acpi_gbl_global_event_handler_context); 462 + } 463 + 464 + /* Found an active GPE */ 465 + 466 + if (ACPI_GPE_DISPATCH_TYPE 467 + (gpe_event_info->flags) == 468 + ACPI_GPE_DISPATCH_RAW_HANDLER) { 469 + 470 + /* Dispatch the event to a raw handler */ 471 + 472 + gpe_handler_info = 473 + gpe_event_info->dispatch. 474 + handler; 475 + 476 + /* 477 + * There is no protection around the namespace node 478 + * and the GPE handler to ensure a safe destruction 479 + * because: 480 + * 1. The namespace node is expected to always 481 + * exist after loading a table. 482 + * 2. The GPE handler is expected to be flushed by 483 + * acpi_os_wait_events_complete() before the 484 + * destruction. 485 + */ 486 + acpi_os_release_lock 487 + (acpi_gbl_gpe_lock, flags); 488 + int_status |= 489 + gpe_handler_info-> 490 + address(gpe_device, 491 + gpe_number, 492 + gpe_handler_info-> 493 + context); 494 + flags = 495 + acpi_os_acquire_lock 496 + (acpi_gbl_gpe_lock); 497 + } else { 498 + /* 499 + * Dispatch the event to a standard handler or 500 + * method. 501 + */ 502 + int_status |= 503 + acpi_ev_gpe_dispatch 504 + (gpe_device, gpe_event_info, 505 + gpe_number); 506 + } 450 507 } 451 508 } 452 509 } ··· 531 484 static void ACPI_SYSTEM_XFACE acpi_ev_asynch_execute_gpe_method(void *context) 532 485 { 533 486 struct acpi_gpe_event_info *gpe_event_info = context; 534 - acpi_status status; 535 - struct acpi_gpe_event_info *local_gpe_event_info; 487 + acpi_status status = AE_OK; 536 488 struct acpi_evaluate_info *info; 537 489 struct acpi_gpe_notify_info *notify; 538 490 539 491 ACPI_FUNCTION_TRACE(ev_asynch_execute_gpe_method); 540 492 541 - /* Allocate a local GPE block */ 542 - 543 - local_gpe_event_info = 544 - ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_gpe_event_info)); 545 - if (!local_gpe_event_info) { 546 - ACPI_EXCEPTION((AE_INFO, AE_NO_MEMORY, "while handling a GPE")); 547 - return_VOID; 548 - } 549 - 550 - status = acpi_ut_acquire_mutex(ACPI_MTX_EVENTS); 551 - if (ACPI_FAILURE(status)) { 552 - ACPI_FREE(local_gpe_event_info); 553 - return_VOID; 554 - } 555 - 556 - /* Must revalidate the gpe_number/gpe_block */ 557 - 558 - if (!acpi_ev_valid_gpe_event(gpe_event_info)) { 559 - status = acpi_ut_release_mutex(ACPI_MTX_EVENTS); 560 - ACPI_FREE(local_gpe_event_info); 561 - return_VOID; 562 - } 563 - 564 - /* 565 - * Take a snapshot of the GPE info for this level - we copy the info to 566 - * prevent a race condition with remove_handler/remove_block. 567 - */ 568 - ACPI_MEMCPY(local_gpe_event_info, gpe_event_info, 569 - sizeof(struct acpi_gpe_event_info)); 570 - 571 - status = acpi_ut_release_mutex(ACPI_MTX_EVENTS); 572 - if (ACPI_FAILURE(status)) { 573 - ACPI_FREE(local_gpe_event_info); 574 - return_VOID; 575 - } 576 - 577 493 /* Do the correct dispatch - normal method or implicit notify */ 578 494 579 - switch (local_gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) { 495 + switch (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags)) { 580 496 case ACPI_GPE_DISPATCH_NOTIFY: 581 497 /* 582 498 * Implicit notify. ··· 552 542 * June 2012: Expand implicit notify mechanism to support 553 543 * notifies on multiple device objects. 554 544 */ 555 - notify = local_gpe_event_info->dispatch.notify_list; 545 + notify = gpe_event_info->dispatch.notify_list; 556 546 while (ACPI_SUCCESS(status) && notify) { 557 547 status = 558 548 acpi_ev_queue_notify_request(notify->device_node, ··· 576 566 * _Lxx/_Exx control method that corresponds to this GPE 577 567 */ 578 568 info->prefix_node = 579 - local_gpe_event_info->dispatch.method_node; 569 + gpe_event_info->dispatch.method_node; 580 570 info->flags = ACPI_IGNORE_RETURN_VALUE; 581 571 582 572 status = acpi_ns_evaluate(info); ··· 586 576 if (ACPI_FAILURE(status)) { 587 577 ACPI_EXCEPTION((AE_INFO, status, 588 578 "while evaluating GPE method [%4.4s]", 589 - acpi_ut_get_node_name 590 - (local_gpe_event_info->dispatch. 591 - method_node))); 579 + acpi_ut_get_node_name(gpe_event_info-> 580 + dispatch. 581 + method_node))); 592 582 } 593 583 break; 594 584 595 585 default: 596 586 597 - return_VOID; /* Should never happen */ 587 + goto error_exit; /* Should never happen */ 598 588 } 599 589 600 590 /* Defer enabling of GPE until all notify handlers are done */ 601 591 602 592 status = acpi_os_execute(OSL_NOTIFY_HANDLER, 603 - acpi_ev_asynch_enable_gpe, 604 - local_gpe_event_info); 605 - if (ACPI_FAILURE(status)) { 606 - ACPI_FREE(local_gpe_event_info); 593 + acpi_ev_asynch_enable_gpe, gpe_event_info); 594 + if (ACPI_SUCCESS(status)) { 595 + return_VOID; 607 596 } 597 + 598 + error_exit: 599 + acpi_ev_asynch_enable_gpe(gpe_event_info); 608 600 return_VOID; 609 601 } ··· 634 622 (void)acpi_ev_finish_gpe(gpe_event_info); 635 623 acpi_os_release_lock(acpi_gbl_gpe_lock, flags); 636 624 637 - ACPI_FREE(gpe_event_info); 638 625 return; 639 626 } ··· 703 692 704 693 ACPI_FUNCTION_TRACE(ev_gpe_dispatch); 705 694 706 - /* Invoke global event handler if present */ 707 - 708 - acpi_gpe_count++; 709 - if (acpi_gbl_global_event_handler) { 710 - acpi_gbl_global_event_handler(ACPI_EVENT_TYPE_GPE, gpe_device, 711 - gpe_number, 712 - acpi_gbl_global_event_handler_context); 713 - } 714 - 715 695 /* 716 696 * Always disable the GPE so that it does not keep firing before 717 697 * any asynchronous activity completes (either from the execution ··· 743 741 * If there is neither a handler nor a method, leave the GPE 744 742 * disabled. 745 743 */ 746 - switch (gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) { 744 + switch (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags)) { 747 745 case ACPI_GPE_DISPATCH_HANDLER: 748 746 749 747 /* Invoke the installed handler (at interrupt level) */
+6 -4
drivers/acpi/acpica/evgpeblk.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 474 474 * Ignore GPEs that have no corresponding _Lxx/_Exx method 475 475 * and GPEs that are used to wake the system 476 476 */ 477 - if (((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) == 477 + if ((ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) == 478 478 ACPI_GPE_DISPATCH_NONE) 479 - || ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) 480 - == ACPI_GPE_DISPATCH_HANDLER) 479 + || (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) == 480 + ACPI_GPE_DISPATCH_HANDLER) 481 + || (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) == 482 + ACPI_GPE_DISPATCH_RAW_HANDLER) 481 483 || (gpe_event_info->flags & ACPI_GPE_CAN_WAKE)) { 482 484 continue; 483 485 }
+6 -4
drivers/acpi/acpica/evgpeinit.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 401 401 return_ACPI_STATUS(AE_OK); 402 402 } 403 403 404 - if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) == 405 - ACPI_GPE_DISPATCH_HANDLER) { 404 + if ((ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) == 405 + ACPI_GPE_DISPATCH_HANDLER) || 406 + (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) == 407 + ACPI_GPE_DISPATCH_RAW_HANDLER)) { 406 408 407 409 /* If there is already a handler, ignore this GPE method */ 408 410 409 411 return_ACPI_STATUS(AE_OK); 410 412 } 411 413 412 - if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) == 414 + if (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) == 413 415 ACPI_GPE_DISPATCH_METHOD) { 414 416 /* 415 417 * If there is already a method, ignore this method. But check
+7 -54
drivers/acpi/acpica/evgpeutil.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 104 104 unlock_and_exit: 105 105 acpi_os_release_lock(acpi_gbl_gpe_lock, flags); 106 106 return_ACPI_STATUS(status); 107 - } 108 - 109 - /******************************************************************************* 110 - * 111 - * FUNCTION: acpi_ev_valid_gpe_event 112 - * 113 - * PARAMETERS: gpe_event_info - Info for this GPE 114 - * 115 - * RETURN: TRUE if the gpe_event is valid 116 - * 117 - * DESCRIPTION: Validate a GPE event. DO NOT CALL FROM INTERRUPT LEVEL. 118 - * Should be called only when the GPE lists are semaphore locked 119 - * and not subject to change. 120 - * 121 - ******************************************************************************/ 122 - 123 - u8 acpi_ev_valid_gpe_event(struct acpi_gpe_event_info *gpe_event_info) 124 - { 125 - struct acpi_gpe_xrupt_info *gpe_xrupt_block; 126 - struct acpi_gpe_block_info *gpe_block; 127 - 128 - ACPI_FUNCTION_ENTRY(); 129 - 130 - /* No need for spin lock since we are not changing any list elements */ 131 - 132 - /* Walk the GPE interrupt levels */ 133 - 134 - gpe_xrupt_block = acpi_gbl_gpe_xrupt_list_head; 135 - while (gpe_xrupt_block) { 136 - gpe_block = gpe_xrupt_block->gpe_block_list_head; 137 - 138 - /* Walk the GPE blocks on this interrupt level */ 139 - 140 - while (gpe_block) { 141 - if ((&gpe_block->event_info[0] <= gpe_event_info) && 142 - (&gpe_block->event_info[gpe_block->gpe_count] > 143 - gpe_event_info)) { 144 - return (TRUE); 145 - } 146 - 147 - gpe_block = gpe_block->next; 148 - } 149 - 150 - gpe_xrupt_block = gpe_xrupt_block->next; 151 - } 152 - 153 - return (FALSE); 154 107 } 155 108 156 109 /******************************************************************************* ··· 324 371 ACPI_GPE_REGISTER_WIDTH) 325 372 + j]; 326 373 327 - if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) == 328 - ACPI_GPE_DISPATCH_HANDLER) { 374 + if ((ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) == 375 + ACPI_GPE_DISPATCH_HANDLER) || 376 + (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) == 377 + ACPI_GPE_DISPATCH_RAW_HANDLER)) { 329 378 330 379 /* Delete an installed handler block */ 331 380 ··· 335 380 gpe_event_info->dispatch.handler = NULL; 336 381 gpe_event_info->flags &= 337 382 ~ACPI_GPE_DISPATCH_MASK; 338 - } else 339 - if ((gpe_event_info-> 340 - flags & ACPI_GPE_DISPATCH_MASK) == 341 - ACPI_GPE_DISPATCH_NOTIFY) { 383 + } else if (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) 384 + == ACPI_GPE_DISPATCH_NOTIFY) { 342 385 343 386 /* Delete the implicit notification device list */ 344 387
+1 -1
drivers/acpi/acpica/evhandler.c

···
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
···

The identical copyright-year-only change (2014 → 2015, +1 -1 each) also applies to:
drivers/acpi/acpica/evmisc.c
drivers/acpi/acpica/evregion.c
drivers/acpi/acpica/evrgnini.c
drivers/acpi/acpica/evsci.c
+113 -19
drivers/acpi/acpica/evxface.c

···
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
···
 
 #define _COMPONENT          ACPI_EVENTS
 ACPI_MODULE_NAME("evxface")
+#if (!ACPI_REDUCED_HARDWARE)
+/* Local prototypes */
+static acpi_status
+acpi_ev_install_gpe_handler(acpi_handle gpe_device,
+			    u32 gpe_number,
+			    u32 type,
+			    u8 is_raw_handler,
+			    acpi_gpe_handler address, void *context);
+
+#endif
 
 
 /*******************************************************************************
···
  *              handlers.
  *
  ******************************************************************************/
+
 acpi_status
 acpi_install_notify_handler(acpi_handle device,
 			    u32 handler_type,
···
 
 /*******************************************************************************
  *
- * FUNCTION:    acpi_install_gpe_handler
+ * FUNCTION:    acpi_ev_install_gpe_handler
  *
  * PARAMETERS:  gpe_device      - Namespace node for the GPE (NULL for FADT
  *                                defined GPEs)
  *              gpe_number      - The GPE number within the GPE block
  *              type            - Whether this GPE should be treated as an
  *                                edge- or level-triggered interrupt.
+ *              is_raw_handler  - Whether this GPE should be handled using
+ *                                the special GPE handler mode.
  *              address         - Address of the handler
  *              context         - Value passed to the handler on each GPE
  *
  * RETURN:      Status
  *
- * DESCRIPTION: Install a handler for a General Purpose Event.
+ * DESCRIPTION: Internal function to install a handler for a General Purpose
+ *              Event.
  *
  ******************************************************************************/
-acpi_status
-acpi_install_gpe_handler(acpi_handle gpe_device,
-			 u32 gpe_number,
-			 u32 type, acpi_gpe_handler address, void *context)
+static acpi_status
+acpi_ev_install_gpe_handler(acpi_handle gpe_device,
+			    u32 gpe_number,
+			    u32 type,
+			    u8 is_raw_handler,
+			    acpi_gpe_handler address, void *context)
 {
 	struct acpi_gpe_event_info *gpe_event_info;
 	struct acpi_gpe_handler_info *handler;
 	acpi_status status;
 	acpi_cpu_flags flags;
 
-	ACPI_FUNCTION_TRACE(acpi_install_gpe_handler);
+	ACPI_FUNCTION_TRACE(ev_install_gpe_handler);
 
 	/* Parameter validation */
 
···
 
 	/* Make sure that there isn't a handler there already */
 
-	if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) ==
-	    ACPI_GPE_DISPATCH_HANDLER) {
+	if ((ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
+	     ACPI_GPE_DISPATCH_HANDLER) ||
+	    (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
+	     ACPI_GPE_DISPATCH_RAW_HANDLER)) {
 		status = AE_ALREADY_EXISTS;
 		goto free_and_exit;
 	}
···
 	 * automatically during initialization, in which case it has to be
 	 * disabled now to avoid spurious execution of the handler.
 	 */
-	if (((handler->original_flags & ACPI_GPE_DISPATCH_METHOD) ||
-	     (handler->original_flags & ACPI_GPE_DISPATCH_NOTIFY)) &&
-	    gpe_event_info->runtime_count) {
+	if (((ACPI_GPE_DISPATCH_TYPE(handler->original_flags) ==
+	      ACPI_GPE_DISPATCH_METHOD) ||
+	     (ACPI_GPE_DISPATCH_TYPE(handler->original_flags) ==
+	      ACPI_GPE_DISPATCH_NOTIFY)) && gpe_event_info->runtime_count) {
 		handler->originally_enabled = TRUE;
 		(void)acpi_ev_remove_gpe_reference(gpe_event_info);
 
···
 
 	gpe_event_info->flags &=
 	    ~(ACPI_GPE_XRUPT_TYPE_MASK | ACPI_GPE_DISPATCH_MASK);
-	gpe_event_info->flags |= (u8)(type | ACPI_GPE_DISPATCH_HANDLER);
+	gpe_event_info->flags |=
+	    (u8)(type |
+		 (is_raw_handler ? ACPI_GPE_DISPATCH_RAW_HANDLER :
+		  ACPI_GPE_DISPATCH_HANDLER));
 
 	acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
 
···
 	goto unlock_and_exit;
 }
 
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_install_gpe_handler
+ *
+ * PARAMETERS:  gpe_device      - Namespace node for the GPE (NULL for FADT
+ *                                defined GPEs)
+ *              gpe_number      - The GPE number within the GPE block
+ *              type            - Whether this GPE should be treated as an
+ *                                edge- or level-triggered interrupt.
+ *              address         - Address of the handler
+ *              context         - Value passed to the handler on each GPE
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: Install a handler for a General Purpose Event.
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_install_gpe_handler(acpi_handle gpe_device,
+			 u32 gpe_number,
+			 u32 type, acpi_gpe_handler address, void *context)
+{
+	acpi_status status;
+
+	ACPI_FUNCTION_TRACE(acpi_install_gpe_handler);
+
+	status =
+	    acpi_ev_install_gpe_handler(gpe_device, gpe_number, type, FALSE,
+					address, context);
+
+	return_ACPI_STATUS(status);
+}
+
 ACPI_EXPORT_SYMBOL(acpi_install_gpe_handler)
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_install_gpe_raw_handler
+ *
+ * PARAMETERS:  gpe_device      - Namespace node for the GPE (NULL for FADT
+ *                                defined GPEs)
+ *              gpe_number      - The GPE number within the GPE block
+ *              type            - Whether this GPE should be treated as an
+ *                                edge- or level-triggered interrupt.
+ *              address         - Address of the handler
+ *              context         - Value passed to the handler on each GPE
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: Install a handler for a General Purpose Event.
+ *
+ ******************************************************************************/
+acpi_status
+acpi_install_gpe_raw_handler(acpi_handle gpe_device,
+			     u32 gpe_number,
+			     u32 type, acpi_gpe_handler address, void *context)
+{
+	acpi_status status;
+
+	ACPI_FUNCTION_TRACE(acpi_install_gpe_raw_handler);
+
+	status = acpi_ev_install_gpe_handler(gpe_device, gpe_number, type, TRUE,
+					     address, context);
+
+	return_ACPI_STATUS(status);
+}
+
+ACPI_EXPORT_SYMBOL(acpi_install_gpe_raw_handler)
 
 /*******************************************************************************
  *
···
 
 	/* Make sure that a handler is indeed installed */
 
-	if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) !=
-	    ACPI_GPE_DISPATCH_HANDLER) {
+	if ((ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) !=
+	     ACPI_GPE_DISPATCH_HANDLER) &&
+	    (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) !=
+	     ACPI_GPE_DISPATCH_RAW_HANDLER)) {
 		status = AE_NOT_EXIST;
 		goto unlock_and_exit;
 	}
···
 	/* Remove the handler */
 
 	handler = gpe_event_info->dispatch.handler;
+	gpe_event_info->dispatch.handler = NULL;
 
 	/* Restore Method node (if any), set dispatch flags */
 
···
 	 * enabled, it should be enabled at this point to restore the
 	 * post-initialization configuration.
 	 */
-	if (((handler->original_flags & ACPI_GPE_DISPATCH_METHOD) ||
-	     (handler->original_flags & ACPI_GPE_DISPATCH_NOTIFY)) &&
-	    handler->originally_enabled) {
+	if (((ACPI_GPE_DISPATCH_TYPE(handler->original_flags) ==
+	      ACPI_GPE_DISPATCH_METHOD) ||
+	     (ACPI_GPE_DISPATCH_TYPE(handler->original_flags) ==
+	      ACPI_GPE_DISPATCH_NOTIFY)) && handler->originally_enabled) {
 		(void)acpi_ev_add_gpe_reference(gpe_event_info);
 	}
 
+1 -1
drivers/acpi/acpica/evxfevnt.c

···
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
+118 -5
drivers/acpi/acpica/evxfgpe.c

···
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
···
 	 */
 	gpe_event_info = acpi_ev_get_gpe_event_info(gpe_device, gpe_number);
 	if (gpe_event_info) {
-		if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) !=
+		if (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) !=
 		    ACPI_GPE_DISPATCH_NONE) {
 			status = acpi_ev_add_gpe_reference(gpe_event_info);
 		} else {
···
 
 ACPI_EXPORT_SYMBOL(acpi_disable_gpe)
 
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_set_gpe
+ *
+ * PARAMETERS:  gpe_device          - Parent GPE Device. NULL for GPE0/GPE1
+ *              gpe_number          - GPE level within the GPE block
+ *              action              - ACPI_GPE_ENABLE or ACPI_GPE_DISABLE
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: Enable or disable an individual GPE. This function bypasses
+ *              the reference count mechanism used in the acpi_enable_gpe(),
+ *              acpi_disable_gpe() interfaces.
+ *              This API is typically used by the GPE raw handler mode driver
+ *              to switch between the polling mode and the interrupt mode after
+ *              the driver has enabled the GPE.
+ *              The APIs should be invoked in this order:
+ *               acpi_enable_gpe()              <- Ensure the reference count > 0
+ *               acpi_set_gpe(ACPI_GPE_DISABLE) <- Enter polling mode
+ *               acpi_set_gpe(ACPI_GPE_ENABLE)  <- Leave polling mode
+ *               acpi_disable_gpe()             <- Decrease the reference count
+ *
+ * Note: If a GPE is shared by 2 silicon components, then both the drivers
+ *       should support GPE polling mode or disabling the GPE for long period
+ *       for one driver may break the other. So use it with care since all
+ *       firmware _Lxx/_Exx handlers currently rely on the GPE interrupt mode.
+ *
+ ******************************************************************************/
+acpi_status acpi_set_gpe(acpi_handle gpe_device, u32 gpe_number, u8 action)
+{
+	struct acpi_gpe_event_info *gpe_event_info;
+	acpi_status status;
+	acpi_cpu_flags flags;
+
+	ACPI_FUNCTION_TRACE(acpi_set_gpe);
+
+	flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock);
+
+	/* Ensure that we have a valid GPE number */
+
+	gpe_event_info = acpi_ev_get_gpe_event_info(gpe_device, gpe_number);
+	if (!gpe_event_info) {
+		status = AE_BAD_PARAMETER;
+		goto unlock_and_exit;
+	}
+
+	/* Perform the action */
+
+	switch (action) {
+	case ACPI_GPE_ENABLE:
+
+		status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE);
+		break;
+
+	case ACPI_GPE_DISABLE:
+
+		status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_DISABLE);
+		break;
+
+	default:
+
+		status = AE_BAD_PARAMETER;
+		break;
+	}
+
+unlock_and_exit:
+	acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
+	return_ACPI_STATUS(status);
+}
+
+ACPI_EXPORT_SYMBOL(acpi_set_gpe)
 
 /*******************************************************************************
  *
···
 	 * known as an "implicit notify". Note: The GPE is assumed to be
 	 * level-triggered (for windows compatibility).
 	 */
-	if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) ==
+	if (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
 	    ACPI_GPE_DISPATCH_NONE) {
 		/*
 		 * This is the first device for implicit notify on this GPE.
···
 	 * If we already have an implicit notify on this GPE, add
 	 * this device to the notify list.
 	 */
-	if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) ==
+	if (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) ==
 	    ACPI_GPE_DISPATCH_NOTIFY) {
 
 		/* Ensure that the device is not already in the list */
···
 
 ACPI_EXPORT_SYMBOL(acpi_get_gpe_status)
 
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_finish_gpe
+ *
+ * PARAMETERS:  gpe_device          - Namespace node for the GPE Block
+ *                                    (NULL for FADT defined GPEs)
+ *              gpe_number          - GPE level within the GPE block
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: Clear and conditionally reenable a GPE. This completes the GPE
+ *              processing. Intended for use by asynchronous host-installed
+ *              GPE handlers. The GPE is only reenabled if the enable_for_run
+ *              bit is set in the GPE info.
+ *
+ ******************************************************************************/
+acpi_status acpi_finish_gpe(acpi_handle gpe_device, u32 gpe_number)
+{
+	struct acpi_gpe_event_info *gpe_event_info;
+	acpi_status status;
+	acpi_cpu_flags flags;
+
+	ACPI_FUNCTION_TRACE(acpi_finish_gpe);
+
+	flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock);
+
+	/* Ensure that we have a valid GPE number */
+
+	gpe_event_info = acpi_ev_get_gpe_event_info(gpe_device, gpe_number);
+	if (!gpe_event_info) {
+		status = AE_BAD_PARAMETER;
+		goto unlock_and_exit;
+	}
+
+	status = acpi_ev_finish_gpe(gpe_event_info);
+
+unlock_and_exit:
+	acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
+	return_ACPI_STATUS(status);
+}
+
+ACPI_EXPORT_SYMBOL(acpi_finish_gpe)
+
 /******************************************************************************
  *
  * FUNCTION:    acpi_disable_all_gpes
···
  *              all GPE blocks.
  *
  ******************************************************************************/
-
 acpi_status acpi_enable_all_wakeup_gpes(void)
 {
 	acpi_status status;
Copyright-year-only updates (2000 - 2014 → 2000 - 2015 in the header comment, +1 -1 each):
drivers/acpi/acpica/evxfregn.c
drivers/acpi/acpica/exconfig.c
drivers/acpi/acpica/exconvrt.c
drivers/acpi/acpica/excreate.c
drivers/acpi/acpica/exdebug.c
drivers/acpi/acpica/exdump.c
drivers/acpi/acpica/exfield.c
drivers/acpi/acpica/exfldio.c
drivers/acpi/acpica/exmisc.c
drivers/acpi/acpica/exmutex.c
drivers/acpi/acpica/exnames.c
drivers/acpi/acpica/exoparg1.c
drivers/acpi/acpica/exoparg2.c
drivers/acpi/acpica/exoparg3.c
drivers/acpi/acpica/exoparg6.c
drivers/acpi/acpica/exprep.c
drivers/acpi/acpica/exregion.c
drivers/acpi/acpica/exresnte.c
drivers/acpi/acpica/exresolv.c
drivers/acpi/acpica/exresop.c
drivers/acpi/acpica/exstore.c
drivers/acpi/acpica/exstoren.c
drivers/acpi/acpica/exstorob.c
drivers/acpi/acpica/exsystem.c
drivers/acpi/acpica/exutils.c
drivers/acpi/acpica/hwacpi.c
drivers/acpi/acpica/hwesleep.c
+7 -3
drivers/acpi/acpica/hwgpe.c

···
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
···
 acpi_hw_enable_wakeup_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
 				struct acpi_gpe_block_info *gpe_block,
 				void *context);
+
+static acpi_status
+acpi_hw_gpe_enable_write(u8 enable_mask,
+			 struct acpi_gpe_register_info *gpe_register_info);
 
 /******************************************************************************
  *
···
 	status = acpi_hw_write(enable_mask, &gpe_register_info->enable_address);
 	if (ACPI_SUCCESS(status) && (action & ACPI_GPE_SAVE_MASK)) {
-		gpe_register_info->enable_mask = enable_mask;
+		gpe_register_info->enable_mask = (u8)enable_mask;
 	}
 	return (status);
 }
···
 	/* GPE currently handled? */
 
-	if ((gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK) !=
+	if (ACPI_GPE_DISPATCH_TYPE(gpe_event_info->flags) !=
 	    ACPI_GPE_DISPATCH_NONE) {
 		local_event_status |= ACPI_EVENT_FLAG_HAS_HANDLER;
 	}
Copyright-year-only updates (2000 - 2014 → 2000 - 2015 in the header comment, +1 -1 each):
drivers/acpi/acpica/hwpci.c
drivers/acpi/acpica/hwregs.c
drivers/acpi/acpica/hwsleep.c
drivers/acpi/acpica/hwtimer.c
drivers/acpi/acpica/hwvalid.c
drivers/acpi/acpica/hwxface.c
drivers/acpi/acpica/hwxfsleep.c
drivers/acpi/acpica/nsaccess.c
drivers/acpi/acpica/nsalloc.c
drivers/acpi/acpica/nsarguments.c
drivers/acpi/acpica/nsconvert.c
drivers/acpi/acpica/nsdump.c
drivers/acpi/acpica/nsdumpdv.c
drivers/acpi/acpica/nseval.c
drivers/acpi/acpica/nsinit.c
drivers/acpi/acpica/nsload.c
drivers/acpi/acpica/nsnames.c
drivers/acpi/acpica/nsobject.c
drivers/acpi/acpica/nsparse.c
drivers/acpi/acpica/nspredef.c
drivers/acpi/acpica/nsprepkg.c
+1 -1
drivers/acpi/acpica/nsrepair.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/nsrepair2.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2014, Intel Corp. 9 + * Copyright (C) 2000 - 2015, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/nssearch.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/nsutils.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2014, Intel Corp. 9 + * Copyright (C) 2000 - 2015, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/nswalk.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/nsxfeval.c
··· 6 6 ******************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2014, Intel Corp. 9 + * Copyright (C) 2000 - 2015, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/nsxfname.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2014, Intel Corp. 9 + * Copyright (C) 2000 - 2015, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
drivers/acpi/acpica/nsxfobj.c (+1 -45)

Copyright year updated to 2015; the unused acpi_get_id() interface and its export are removed:

- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.

 #define _COMPONENT          ACPI_NAMESPACE
 ACPI_MODULE_NAME("nsxfobj")
-
-/*******************************************************************************
- *
- * FUNCTION:    acpi_get_id
- *
- * PARAMETERS:  Handle          - Handle of object whose id is desired
- *              ret_id          - Where the id will be placed
- *
- * RETURN:      Status
- *
- * DESCRIPTION: This routine returns the owner id associated with a handle
- *
- ******************************************************************************/
-acpi_status acpi_get_id(acpi_handle handle, acpi_owner_id * ret_id)
-{
-	struct acpi_namespace_node *node;
-	acpi_status status;
-
-	/* Parameter Validation */
-
-	if (!ret_id) {
-		return (AE_BAD_PARAMETER);
-	}
-
-	status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
-	if (ACPI_FAILURE(status)) {
-		return (status);
-	}
-
-	/* Convert and validate the handle */
-
-	node = acpi_ns_validate_handle(handle);
-	if (!node) {
-		(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
-		return (AE_BAD_PARAMETER);
-	}
-
-	*ret_id = node->owner_id;
-
-	status = acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
-	return (status);
-}
-
-ACPI_EXPORT_SYMBOL(acpi_get_id)
The following 11 files receive only the 2014 to 2015 copyright-year update (+1 -1 each):

drivers/acpi/acpica/: psargs.c, psloop.c, psobject.c, psopcode.c, psopinfo.c, psparse.c, psscope.c, pstree.c, psutils.c, pswalk.c, psxface.c
drivers/acpi/acpica/rsaddr.c (+6 -5)

Copyright year updated to 2015; the four address-conversion tables switch to the new nested address sub-structure:

-	{ACPI_RSC_MOVE16, ACPI_RS_OFFSET(data.address16.granularity),
+	{ACPI_RSC_MOVE16, ACPI_RS_OFFSET(data.address16.address.granularity),
 	 AML_OFFSET(address16.granularity),
 	 5},

-	{ACPI_RSC_MOVE32, ACPI_RS_OFFSET(data.address32.granularity),
+	{ACPI_RSC_MOVE32, ACPI_RS_OFFSET(data.address32.address.granularity),
 	 AML_OFFSET(address32.granularity),
 	 5},

-	{ACPI_RSC_MOVE64, ACPI_RS_OFFSET(data.address64.granularity),
+	{ACPI_RSC_MOVE64, ACPI_RS_OFFSET(data.address64.address.granularity),
 	 AML_OFFSET(address64.granularity),
 	 5},

-	{ACPI_RSC_MOVE64, ACPI_RS_OFFSET(data.ext_address64.granularity),
+	{ACPI_RSC_MOVE64,
+	 ACPI_RS_OFFSET(data.ext_address64.address.granularity),
 	 AML_OFFSET(ext_address64.granularity),
 	 6}
 };
The following 3 files receive only the 2014 to 2015 copyright-year update (+1 -1 each):

drivers/acpi/acpica/: rscalc.c, rscreate.c, rsdump.c
drivers/acpi/acpica/rsdumpinfo.c (+31 -30)

Copyright year updated to 2015; the resource-dump tables for the 16/32/64-bit and extended address descriptors switch to the nested address sub-structure. The acpi_rs_dump_address16 table changes as follows:

 	{ACPI_RSD_TITLE, ACPI_RSD_TABLE_SIZE(acpi_rs_dump_address16),
 	 "16-Bit WORD Address Space", NULL},
 	{ACPI_RSD_ADDRESS, 0, NULL, NULL},
-	{ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.granularity), "Granularity",
-	 NULL},
-	{ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.minimum), "Address Minimum",
-	 NULL},
-	{ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.maximum), "Address Maximum",
-	 NULL},
-	{ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.translation_offset),
+	{ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.address.granularity),
+	 "Granularity", NULL},
+	{ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.address.minimum),
+	 "Address Minimum", NULL},
+	{ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.address.maximum),
+	 "Address Maximum", NULL},
+	{ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.address.translation_offset),
 	 "Translation Offset", NULL},
-	{ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.address_length),
+	{ACPI_RSD_UINT16, ACPI_RSD_OFFSET(address16.address.address_length),
 	 "Address Length", NULL},
 	{ACPI_RSD_SOURCE, ACPI_RSD_OFFSET(address16.resource_source), NULL, NULL}
 };

The acpi_rs_dump_address32 and acpi_rs_dump_address64 tables receive the identical transformation with ACPI_RSD_UINT32 and ACPI_RSD_UINT64 entries. So does acpi_rs_dump_ext_address64, whose rewrapped ext_address64.address.translation_offset entry spans two lines and which keeps its trailing ext_address64.type_specific "Type-Specific Attribute" entry.
The following 8 files receive only the 2014 to 2015 copyright-year update (+1 -1 each):

drivers/acpi/acpica/: rsinfo.c, rsio.c, rsirq.c, rslist.c, rsmemory.c, rsmisc.c, rsserial.c, rsutils.c
drivers/acpi/acpica/rsxface.c (+6 -6)

Copyright year updated to 2015; the field-copy helper macro now references the nested address sub-structure:

 	ACPI_COPY_FIELD(out, in, min_address_fixed);         \
 	ACPI_COPY_FIELD(out, in, max_address_fixed);         \
 	ACPI_COPY_FIELD(out, in, info);                      \
-	ACPI_COPY_FIELD(out, in, granularity);               \
-	ACPI_COPY_FIELD(out, in, minimum);                   \
-	ACPI_COPY_FIELD(out, in, maximum);                   \
-	ACPI_COPY_FIELD(out, in, translation_offset);        \
-	ACPI_COPY_FIELD(out, in, address_length);            \
+	ACPI_COPY_FIELD(out, in, address.granularity);       \
+	ACPI_COPY_FIELD(out, in, address.minimum);           \
+	ACPI_COPY_FIELD(out, in, address.maximum);           \
+	ACPI_COPY_FIELD(out, in, address.translation_offset);\
+	ACPI_COPY_FIELD(out, in, address.address_length);    \
 	ACPI_COPY_FIELD(out, in, resource_source);
 /* Local prototypes */
The following 6 files receive only the 2014 to 2015 copyright-year update (+1 -1 each):

drivers/acpi/acpica/: tbdata.c, tbfadt.c, tbfind.c, tbinstal.c, tbprint.c, tbutils.c
drivers/acpi/acpica/tbxface.c (+1 -40)

Copyright year updated to 2015; the unused acpi_unload_table_id() interface and its export are removed after ACPI_EXPORT_SYMBOL(acpi_get_table_header):

- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.

 ACPI_EXPORT_SYMBOL(acpi_get_table_header)
-
-/*******************************************************************************
- *
- * FUNCTION:    acpi_unload_table_id
- *
- * PARAMETERS:  id              - Owner ID of the table to be removed.
- *
- * RETURN:      Status
- *
- * DESCRIPTION: This routine is used to force the unload of a table (by id)
- *
- ******************************************************************************/
-acpi_status acpi_unload_table_id(acpi_owner_id id)
-{
-	int i;
-	acpi_status status = AE_NOT_EXIST;
-
-	ACPI_FUNCTION_TRACE(acpi_unload_table_id);
-
-	/* Find table in the global table list */
-	for (i = 0; i < acpi_gbl_root_table_list.current_table_count; ++i) {
-		if (id != acpi_gbl_root_table_list.tables[i].owner_id) {
-			continue;
-		}
-		/*
-		 * Delete all namespace objects owned by this table. Note that these
-		 * objects can appear anywhere in the namespace by virtue of the AML
-		 * "Scope" operator. Thus, we need to track ownership by an ID, not
-		 * simply a position within the hierarchy
-		 */
-		acpi_tb_delete_namespace_by_owner(i);
-		status = acpi_tb_release_owner_id(i);
-		acpi_tb_set_table_loaded_flag(i, FALSE);
-		break;
-	}
-	return_ACPI_STATUS(status);
-}
-
-ACPI_EXPORT_SYMBOL(acpi_unload_table_id)
The following 27 files receive only the 2014 to 2015 copyright-year update (+1 -1 each):

drivers/acpi/acpica/: tbxfload.c, tbxfroot.c, utaddress.c, utalloc.c, utbuffer.c, utcache.c, utcopy.c, utdebug.c, utdecode.c, utdelete.c, uterror.c, uteval.c, utexcep.c, utfileio.c, utglobal.c, uthex.c, utids.c, utinit.c, utlock.c, utmath.c, utmisc.c, utmutex.c, utobject.c, utosi.c, utownerid.c, utpredef.c, utprint.c
+1 -1
drivers/acpi/acpica/utresrc.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/utstate.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/utstring.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/uttrack.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/utuuid.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/utxface.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/utxferror.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/utxfinit.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/acpica/utxfmutex.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
-2
drivers/acpi/device_pm.c
··· 1027 1027 1028 1028 static struct dev_pm_domain acpi_general_pm_domain = { 1029 1029 .ops = { 1030 - #ifdef CONFIG_PM 1031 1030 .runtime_suspend = acpi_subsys_runtime_suspend, 1032 1031 .runtime_resume = acpi_subsys_runtime_resume, 1033 1032 #ifdef CONFIG_PM_SLEEP ··· 1039 1040 .poweroff = acpi_subsys_suspend, 1040 1041 .poweroff_late = acpi_subsys_suspend_late, 1041 1042 .restore_early = acpi_subsys_resume_early, 1042 - #endif 1043 1043 #endif 1044 1044 }, 1045 1045 };
+418 -132
drivers/acpi/ec.c
··· 1 1 /* 2 - * ec.c - ACPI Embedded Controller Driver (v2.2) 2 + * ec.c - ACPI Embedded Controller Driver (v3) 3 3 * 4 - * Copyright (C) 2001-2014 Intel Corporation 5 - * Author: 2014 Lv Zheng <lv.zheng@intel.com> 4 + * Copyright (C) 2001-2015 Intel Corporation 5 + * Author: 2014, 2015 Lv Zheng <lv.zheng@intel.com> 6 6 * 2006, 2007 Alexey Starikovskiy <alexey.y.starikovskiy@intel.com> 7 7 * 2006 Denis Sadykov <denis.m.sadykov@intel.com> 8 8 * 2004 Luming Yu <luming.yu@intel.com> ··· 31 31 32 32 /* Uncomment next line to get verbose printout */ 33 33 /* #define DEBUG */ 34 + #define DEBUG_REF 0 34 35 #define pr_fmt(fmt) "ACPI : EC: " fmt 35 36 36 37 #include <linux/kernel.h> ··· 72 71 #define ACPI_EC_DELAY 500 /* Wait 500ms max. during EC ops */ 73 72 #define ACPI_EC_UDELAY_GLK 1000 /* Wait 1ms max. to get global lock */ 74 73 #define ACPI_EC_MSI_UDELAY 550 /* Wait 550us for MSI EC */ 74 + #define ACPI_EC_UDELAY_POLL 1000 /* Wait 1ms for EC transaction polling */ 75 75 #define ACPI_EC_CLEAR_MAX 100 /* Maximum number of events to query 76 76 * when trying to clear the EC */ 77 77 78 78 enum { 79 - EC_FLAGS_QUERY_PENDING, /* Query is pending */ 80 - EC_FLAGS_GPE_STORM, /* GPE storm detected */ 79 + EC_FLAGS_EVENT_ENABLED, /* Event is enabled */ 80 + EC_FLAGS_EVENT_PENDING, /* Event is pending */ 81 + EC_FLAGS_EVENT_DETECTED, /* Event is detected */ 81 82 EC_FLAGS_HANDLERS_INSTALLED, /* Handlers for GPE and 82 83 * OpReg are installed */ 83 - EC_FLAGS_BLOCKED, /* Transactions are blocked */ 84 + EC_FLAGS_STARTED, /* Driver is started */ 85 + EC_FLAGS_STOPPED, /* Driver is stopped */ 86 + EC_FLAGS_COMMAND_STORM, /* GPE storms occurred to the 87 + * current command processing */ 84 88 }; 85 89 86 90 #define ACPI_EC_COMMAND_POLL 0x01 /* Available for command byte */ 87 91 #define ACPI_EC_COMMAND_COMPLETE 0x02 /* Completed last byte */ 92 + 93 + #define ec_debug_ref(ec, fmt, ...) 
\ 94 + do { \ 95 + if (DEBUG_REF) \ 96 + pr_debug("%lu: " fmt, ec->reference_count, \ 97 + ## __VA_ARGS__); \ 98 + } while (0) 88 99 89 100 /* ec.c is compiled in acpi namespace so this shows up as acpi.ec_delay param */ 90 101 static unsigned int ec_delay __read_mostly = ACPI_EC_DELAY; ··· 118 105 acpi_handle handle; 119 106 void *data; 120 107 u8 query_bit; 108 + struct kref kref; 121 109 }; 122 110 123 111 struct transaction { ··· 131 117 u8 wlen; 132 118 u8 rlen; 133 119 u8 flags; 120 + unsigned long timestamp; 134 121 }; 122 + 123 + static int acpi_ec_query(struct acpi_ec *ec, u8 *data); 124 + static void advance_transaction(struct acpi_ec *ec); 135 125 136 126 struct acpi_ec *boot_ec, *first_ec; 137 127 EXPORT_SYMBOL(first_ec); ··· 147 129 static int EC_FLAGS_QUERY_HANDSHAKE; /* Needs QR_EC issued when SCI_EVT set */ 148 130 149 131 /* -------------------------------------------------------------------------- 150 - * Transaction Management 132 + * Device Flags 133 + * -------------------------------------------------------------------------- */ 134 + 135 + static bool acpi_ec_started(struct acpi_ec *ec) 136 + { 137 + return test_bit(EC_FLAGS_STARTED, &ec->flags) && 138 + !test_bit(EC_FLAGS_STOPPED, &ec->flags); 139 + } 140 + 141 + static bool acpi_ec_flushed(struct acpi_ec *ec) 142 + { 143 + return ec->reference_count == 1; 144 + } 145 + 146 + static bool acpi_ec_has_pending_event(struct acpi_ec *ec) 147 + { 148 + return test_bit(EC_FLAGS_EVENT_DETECTED, &ec->flags) || 149 + test_bit(EC_FLAGS_EVENT_PENDING, &ec->flags); 150 + } 151 + 152 + /* -------------------------------------------------------------------------- 153 + * EC Registers 151 154 * -------------------------------------------------------------------------- */ 152 155 153 156 static inline u8 acpi_ec_read_status(struct acpi_ec *ec) ··· 190 151 { 191 152 u8 x = inb(ec->data_addr); 192 153 154 + ec->curr->timestamp = jiffies; 193 155 pr_debug("EC_DATA(R) = 0x%2.2x\n", x); 194 156 return x; 195 157 
} ··· 199 159 { 200 160 pr_debug("EC_SC(W) = 0x%2.2x\n", command); 201 161 outb(command, ec->command_addr); 162 + ec->curr->timestamp = jiffies; 202 163 } 203 164 204 165 static inline void acpi_ec_write_data(struct acpi_ec *ec, u8 data) 205 166 { 206 167 pr_debug("EC_DATA(W) = 0x%2.2x\n", data); 207 168 outb(data, ec->data_addr); 169 + ec->curr->timestamp = jiffies; 208 170 } 209 171 210 172 #ifdef DEBUG ··· 230 188 #define acpi_ec_cmd_string(cmd) "UNDEF" 231 189 #endif 232 190 191 + /* -------------------------------------------------------------------------- 192 + * GPE Registers 193 + * -------------------------------------------------------------------------- */ 194 + 195 + static inline bool acpi_ec_is_gpe_raised(struct acpi_ec *ec) 196 + { 197 + acpi_event_status gpe_status = 0; 198 + 199 + (void)acpi_get_gpe_status(NULL, ec->gpe, &gpe_status); 200 + return (gpe_status & ACPI_EVENT_FLAG_SET) ? true : false; 201 + } 202 + 203 + static inline void acpi_ec_enable_gpe(struct acpi_ec *ec, bool open) 204 + { 205 + if (open) 206 + acpi_enable_gpe(NULL, ec->gpe); 207 + else { 208 + BUG_ON(ec->reference_count < 1); 209 + acpi_set_gpe(NULL, ec->gpe, ACPI_GPE_ENABLE); 210 + } 211 + if (acpi_ec_is_gpe_raised(ec)) { 212 + /* 213 + * On some platforms, EN=1 writes cannot trigger GPE. So 214 + * software need to manually trigger a pseudo GPE event on 215 + * EN=1 writes. 216 + */ 217 + pr_debug("***** Polling quirk *****\n"); 218 + advance_transaction(ec); 219 + } 220 + } 221 + 222 + static inline void acpi_ec_disable_gpe(struct acpi_ec *ec, bool close) 223 + { 224 + if (close) 225 + acpi_disable_gpe(NULL, ec->gpe); 226 + else { 227 + BUG_ON(ec->reference_count < 1); 228 + acpi_set_gpe(NULL, ec->gpe, ACPI_GPE_DISABLE); 229 + } 230 + } 231 + 232 + static inline void acpi_ec_clear_gpe(struct acpi_ec *ec) 233 + { 234 + /* 235 + * GPE STS is a W1C register, which means: 236 + * 1. 
Software can clear it without worrying about clearing other 237 + * GPEs' STS bits when the hardware sets them in parallel. 238 + * 2. As long as software can ensure only clearing it when it is 239 + * set, hardware won't set it in parallel. 240 + * So software can clear GPE in any contexts. 241 + * Warning: do not move the check into advance_transaction() as the 242 + * EC commands will be sent without GPE raised. 243 + */ 244 + if (!acpi_ec_is_gpe_raised(ec)) 245 + return; 246 + acpi_clear_gpe(NULL, ec->gpe); 247 + } 248 + 249 + /* -------------------------------------------------------------------------- 250 + * Transaction Management 251 + * -------------------------------------------------------------------------- */ 252 + 253 + static void acpi_ec_submit_request(struct acpi_ec *ec) 254 + { 255 + ec->reference_count++; 256 + if (ec->reference_count == 1) 257 + acpi_ec_enable_gpe(ec, true); 258 + } 259 + 260 + static void acpi_ec_complete_request(struct acpi_ec *ec) 261 + { 262 + bool flushed = false; 263 + 264 + ec->reference_count--; 265 + if (ec->reference_count == 0) 266 + acpi_ec_disable_gpe(ec, true); 267 + flushed = acpi_ec_flushed(ec); 268 + if (flushed) 269 + wake_up(&ec->wait); 270 + } 271 + 272 + static void acpi_ec_set_storm(struct acpi_ec *ec, u8 flag) 273 + { 274 + if (!test_bit(flag, &ec->flags)) { 275 + acpi_ec_disable_gpe(ec, false); 276 + pr_debug("+++++ Polling enabled +++++\n"); 277 + set_bit(flag, &ec->flags); 278 + } 279 + } 280 + 281 + static void acpi_ec_clear_storm(struct acpi_ec *ec, u8 flag) 282 + { 283 + if (test_bit(flag, &ec->flags)) { 284 + clear_bit(flag, &ec->flags); 285 + acpi_ec_enable_gpe(ec, false); 286 + pr_debug("+++++ Polling disabled +++++\n"); 287 + } 288 + } 289 + 290 + /* 291 + * acpi_ec_submit_flushable_request() - Increase the reference count unless 292 + * the flush operation is not in 293 + * progress 294 + * @ec: the EC device 295 + * @allow_event: whether event should be handled 296 + * 297 + * This function 
must be used before taking a new action that should hold 298 + * the reference count. If this function returns false, then the action 299 + * must be discarded or it will prevent the flush operation from being 300 + * completed. 301 + * 302 + * During flushing, QR_EC command need to pass this check when there is a 303 + * pending event, so that the reference count held for the pending event 304 + * can be decreased by the completion of the QR_EC command. 305 + */ 306 + static bool acpi_ec_submit_flushable_request(struct acpi_ec *ec, 307 + bool allow_event) 308 + { 309 + if (!acpi_ec_started(ec)) { 310 + if (!allow_event || !acpi_ec_has_pending_event(ec)) 311 + return false; 312 + } 313 + acpi_ec_submit_request(ec); 314 + return true; 315 + } 316 + 317 + static void acpi_ec_submit_event(struct acpi_ec *ec) 318 + { 319 + if (!test_bit(EC_FLAGS_EVENT_DETECTED, &ec->flags) || 320 + !test_bit(EC_FLAGS_EVENT_ENABLED, &ec->flags)) 321 + return; 322 + /* Hold reference for pending event */ 323 + if (!acpi_ec_submit_flushable_request(ec, true)) 324 + return; 325 + ec_debug_ref(ec, "Increase event\n"); 326 + if (!test_and_set_bit(EC_FLAGS_EVENT_PENDING, &ec->flags)) { 327 + pr_debug("***** Event query started *****\n"); 328 + schedule_work(&ec->work); 329 + return; 330 + } 331 + acpi_ec_complete_request(ec); 332 + ec_debug_ref(ec, "Decrease event\n"); 333 + } 334 + 335 + static void acpi_ec_complete_event(struct acpi_ec *ec) 336 + { 337 + if (ec->curr->command == ACPI_EC_COMMAND_QUERY) { 338 + clear_bit(EC_FLAGS_EVENT_PENDING, &ec->flags); 339 + pr_debug("***** Event query stopped *****\n"); 340 + /* Unhold reference for pending event */ 341 + acpi_ec_complete_request(ec); 342 + ec_debug_ref(ec, "Decrease event\n"); 343 + /* Check if there is another SCI_EVT detected */ 344 + acpi_ec_submit_event(ec); 345 + } 346 + } 347 + 348 + static void acpi_ec_submit_detection(struct acpi_ec *ec) 349 + { 350 + /* Hold reference for query submission */ 351 + if 
(!acpi_ec_submit_flushable_request(ec, false)) 352 + return; 353 + ec_debug_ref(ec, "Increase query\n"); 354 + if (!test_and_set_bit(EC_FLAGS_EVENT_DETECTED, &ec->flags)) { 355 + pr_debug("***** Event detection blocked *****\n"); 356 + acpi_ec_submit_event(ec); 357 + return; 358 + } 359 + acpi_ec_complete_request(ec); 360 + ec_debug_ref(ec, "Decrease query\n"); 361 + } 362 + 363 + static void acpi_ec_complete_detection(struct acpi_ec *ec) 364 + { 365 + if (ec->curr->command == ACPI_EC_COMMAND_QUERY) { 366 + clear_bit(EC_FLAGS_EVENT_DETECTED, &ec->flags); 367 + pr_debug("***** Event detection unblocked *****\n"); 368 + /* Unhold reference for query submission */ 369 + acpi_ec_complete_request(ec); 370 + ec_debug_ref(ec, "Decrease query\n"); 371 + } 372 + } 373 + 374 + static void acpi_ec_enable_event(struct acpi_ec *ec) 375 + { 376 + unsigned long flags; 377 + 378 + spin_lock_irqsave(&ec->lock, flags); 379 + set_bit(EC_FLAGS_EVENT_ENABLED, &ec->flags); 380 + /* 381 + * An event may be pending even with SCI_EVT=0, so QR_EC should 382 + * always be issued right after started. 383 + */ 384 + acpi_ec_submit_detection(ec); 385 + spin_unlock_irqrestore(&ec->lock, flags); 386 + } 387 + 233 388 static int ec_transaction_completed(struct acpi_ec *ec) 234 389 { 235 390 unsigned long flags; ··· 439 200 return ret; 440 201 } 441 202 442 - static bool advance_transaction(struct acpi_ec *ec) 203 + static void advance_transaction(struct acpi_ec *ec) 443 204 { 444 205 struct transaction *t; 445 206 u8 status; ··· 447 208 448 209 pr_debug("===== %s (%d) =====\n", 449 210 in_interrupt() ? "IRQ" : "TASK", smp_processor_id()); 211 + /* 212 + * By always clearing STS before handling all indications, we can 213 + * ensure a hardware STS 0->1 change after this clearing can always 214 + * trigger a GPE interrupt. 
215 + */ 216 + acpi_ec_clear_gpe(ec); 450 217 status = acpi_ec_read_status(ec); 451 218 t = ec->curr; 452 219 if (!t) ··· 468 223 t->rdata[t->ri++] = acpi_ec_read_data(ec); 469 224 if (t->rlen == t->ri) { 470 225 t->flags |= ACPI_EC_COMMAND_COMPLETE; 226 + acpi_ec_complete_event(ec); 471 227 if (t->command == ACPI_EC_COMMAND_QUERY) 472 228 pr_debug("***** Command(%s) hardware completion *****\n", 473 229 acpi_ec_cmd_string(t->command)); ··· 479 233 } else if (t->wlen == t->wi && 480 234 (status & ACPI_EC_FLAG_IBF) == 0) { 481 235 t->flags |= ACPI_EC_COMMAND_COMPLETE; 236 + acpi_ec_complete_event(ec); 482 237 wakeup = true; 483 238 } 484 - return wakeup; 239 + goto out; 485 240 } else { 486 241 if (EC_FLAGS_QUERY_HANDSHAKE && 487 242 !(status & ACPI_EC_FLAG_SCI) && 488 243 (t->command == ACPI_EC_COMMAND_QUERY)) { 489 244 t->flags |= ACPI_EC_COMMAND_POLL; 245 + acpi_ec_complete_detection(ec); 490 246 t->rdata[t->ri++] = 0x00; 491 247 t->flags |= ACPI_EC_COMMAND_COMPLETE; 248 + acpi_ec_complete_event(ec); 492 249 pr_debug("***** Command(%s) software completion *****\n", 493 250 acpi_ec_cmd_string(t->command)); 494 251 wakeup = true; 495 252 } else if ((status & ACPI_EC_FLAG_IBF) == 0) { 496 253 acpi_ec_write_cmd(ec, t->command); 497 254 t->flags |= ACPI_EC_COMMAND_POLL; 255 + acpi_ec_complete_detection(ec); 498 256 } else 499 257 goto err; 500 - return wakeup; 258 + goto out; 501 259 } 502 260 err: 503 261 /* ··· 509 259 * otherwise will take a not handled IRQ as a false one. 
510 260 */ 511 261 if (!(status & ACPI_EC_FLAG_SCI)) { 512 - if (in_interrupt() && t) 513 - ++t->irq_count; 262 + if (in_interrupt() && t) { 263 + if (t->irq_count < ec_storm_threshold) 264 + ++t->irq_count; 265 + /* Allow triggering on 0 threshold */ 266 + if (t->irq_count == ec_storm_threshold) 267 + acpi_ec_set_storm(ec, EC_FLAGS_COMMAND_STORM); 268 + } 514 269 } 515 - return wakeup; 270 + out: 271 + if (status & ACPI_EC_FLAG_SCI) 272 + acpi_ec_submit_detection(ec); 273 + if (wakeup && in_interrupt()) 274 + wake_up(&ec->wait); 516 275 } 517 276 518 277 static void start_transaction(struct acpi_ec *ec) 519 278 { 520 279 ec->curr->irq_count = ec->curr->wi = ec->curr->ri = 0; 521 280 ec->curr->flags = 0; 522 - (void)advance_transaction(ec); 523 - } 524 - 525 - static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data); 526 - 527 - static int ec_check_sci_sync(struct acpi_ec *ec, u8 state) 528 - { 529 - if (state & ACPI_EC_FLAG_SCI) { 530 - if (!test_and_set_bit(EC_FLAGS_QUERY_PENDING, &ec->flags)) 531 - return acpi_ec_sync_query(ec, NULL); 532 - } 533 - return 0; 281 + ec->curr->timestamp = jiffies; 282 + advance_transaction(ec); 534 283 } 535 284 536 285 static int ec_poll(struct acpi_ec *ec) ··· 540 291 while (repeat--) { 541 292 unsigned long delay = jiffies + 542 293 msecs_to_jiffies(ec_delay); 294 + unsigned long usecs = ACPI_EC_UDELAY_POLL; 543 295 do { 544 296 /* don't sleep with disabled interrupts */ 545 297 if (EC_FLAGS_MSI || irqs_disabled()) { 546 - udelay(ACPI_EC_MSI_UDELAY); 298 + usecs = ACPI_EC_MSI_UDELAY; 299 + udelay(usecs); 547 300 if (ec_transaction_completed(ec)) 548 301 return 0; 549 302 } else { 550 303 if (wait_event_timeout(ec->wait, 551 304 ec_transaction_completed(ec), 552 - msecs_to_jiffies(1))) 305 + usecs_to_jiffies(usecs))) 553 306 return 0; 554 307 } 555 308 spin_lock_irqsave(&ec->lock, flags); 556 - (void)advance_transaction(ec); 309 + if (time_after(jiffies, 310 + ec->curr->timestamp + 311 + usecs_to_jiffies(usecs))) 312 + 
advance_transaction(ec); 557 313 spin_unlock_irqrestore(&ec->lock, flags); 558 314 } while (time_before(jiffies, delay)); 559 315 pr_debug("controller reset, restart transaction\n"); ··· 579 325 udelay(ACPI_EC_MSI_UDELAY); 580 326 /* start transaction */ 581 327 spin_lock_irqsave(&ec->lock, tmp); 328 + /* Enable GPE for command processing (IBF=0/OBF=1) */ 329 + if (!acpi_ec_submit_flushable_request(ec, true)) { 330 + ret = -EINVAL; 331 + goto unlock; 332 + } 333 + ec_debug_ref(ec, "Increase command\n"); 582 334 /* following two actions should be kept atomic */ 583 335 ec->curr = t; 584 336 pr_debug("***** Command(%s) started *****\n", 585 337 acpi_ec_cmd_string(t->command)); 586 338 start_transaction(ec); 587 - if (ec->curr->command == ACPI_EC_COMMAND_QUERY) { 588 - clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags); 589 - pr_debug("***** Event stopped *****\n"); 590 - } 591 339 spin_unlock_irqrestore(&ec->lock, tmp); 592 340 ret = ec_poll(ec); 593 341 spin_lock_irqsave(&ec->lock, tmp); 342 + if (t->irq_count == ec_storm_threshold) 343 + acpi_ec_clear_storm(ec, EC_FLAGS_COMMAND_STORM); 594 344 pr_debug("***** Command(%s) stopped *****\n", 595 345 acpi_ec_cmd_string(t->command)); 596 346 ec->curr = NULL; 347 + /* Disable GPE for command processing (IBF=0/OBF=1) */ 348 + acpi_ec_complete_request(ec); 349 + ec_debug_ref(ec, "Decrease command\n"); 350 + unlock: 597 351 spin_unlock_irqrestore(&ec->lock, tmp); 598 352 return ret; 599 353 } ··· 616 354 if (t->rdata) 617 355 memset(t->rdata, 0, t->rlen); 618 356 mutex_lock(&ec->mutex); 619 - if (test_bit(EC_FLAGS_BLOCKED, &ec->flags)) { 620 - status = -EINVAL; 621 - goto unlock; 622 - } 623 357 if (ec->global_lock) { 624 358 status = acpi_acquire_global_lock(ACPI_EC_UDELAY_GLK, &glk); 625 359 if (ACPI_FAILURE(status)) { ··· 623 365 goto unlock; 624 366 } 625 367 } 626 - /* disable GPE during transaction if storm is detected */ 627 - if (test_bit(EC_FLAGS_GPE_STORM, &ec->flags)) { 628 - /* It has to be disabled, so that it 
doesn't trigger. */ 629 - acpi_disable_gpe(NULL, ec->gpe); 630 - } 631 368 632 369 status = acpi_ec_transaction_unlocked(ec, t); 633 370 634 - /* check if we received SCI during transaction */ 635 - ec_check_sci_sync(ec, acpi_ec_read_status(ec)); 636 - if (test_bit(EC_FLAGS_GPE_STORM, &ec->flags)) { 371 + if (test_bit(EC_FLAGS_COMMAND_STORM, &ec->flags)) 637 372 msleep(1); 638 - /* It is safe to enable the GPE outside of the transaction. */ 639 - acpi_enable_gpe(NULL, ec->gpe); 640 - } else if (t->irq_count > ec_storm_threshold) { 641 - pr_info("GPE storm detected(%d GPEs), " 642 - "transactions will use polling mode\n", 643 - t->irq_count); 644 - set_bit(EC_FLAGS_GPE_STORM, &ec->flags); 645 - } 646 373 if (ec->global_lock) 647 374 acpi_release_global_lock(glk); 648 375 unlock: ··· 743 500 u8 value = 0; 744 501 745 502 for (i = 0; i < ACPI_EC_CLEAR_MAX; i++) { 746 - status = acpi_ec_sync_query(ec, &value); 503 + status = acpi_ec_query(ec, &value); 747 504 if (status || !value) 748 505 break; 749 506 } ··· 752 509 pr_warn("Warning: Maximum of %d stale EC events cleared\n", i); 753 510 else 754 511 pr_info("%d stale EC events cleared\n", i); 512 + } 513 + 514 + static void acpi_ec_start(struct acpi_ec *ec, bool resuming) 515 + { 516 + unsigned long flags; 517 + 518 + spin_lock_irqsave(&ec->lock, flags); 519 + if (!test_and_set_bit(EC_FLAGS_STARTED, &ec->flags)) { 520 + pr_debug("+++++ Starting EC +++++\n"); 521 + /* Enable GPE for event processing (SCI_EVT=1) */ 522 + if (!resuming) { 523 + acpi_ec_submit_request(ec); 524 + ec_debug_ref(ec, "Increase driver\n"); 525 + } 526 + pr_info("+++++ EC started +++++\n"); 527 + } 528 + spin_unlock_irqrestore(&ec->lock, flags); 529 + } 530 + 531 + static bool acpi_ec_stopped(struct acpi_ec *ec) 532 + { 533 + unsigned long flags; 534 + bool flushed; 535 + 536 + spin_lock_irqsave(&ec->lock, flags); 537 + flushed = acpi_ec_flushed(ec); 538 + spin_unlock_irqrestore(&ec->lock, flags); 539 + return flushed; 540 + } 541 + 542 + static 
void acpi_ec_stop(struct acpi_ec *ec, bool suspending) 543 + { 544 + unsigned long flags; 545 + 546 + spin_lock_irqsave(&ec->lock, flags); 547 + if (acpi_ec_started(ec)) { 548 + pr_debug("+++++ Stopping EC +++++\n"); 549 + set_bit(EC_FLAGS_STOPPED, &ec->flags); 550 + spin_unlock_irqrestore(&ec->lock, flags); 551 + wait_event(ec->wait, acpi_ec_stopped(ec)); 552 + spin_lock_irqsave(&ec->lock, flags); 553 + /* Disable GPE for event processing (SCI_EVT=1) */ 554 + if (!suspending) { 555 + acpi_ec_complete_request(ec); 556 + ec_debug_ref(ec, "Decrease driver\n"); 557 + } 558 + clear_bit(EC_FLAGS_STARTED, &ec->flags); 559 + clear_bit(EC_FLAGS_STOPPED, &ec->flags); 560 + pr_info("+++++ EC stopped +++++\n"); 561 + } 562 + spin_unlock_irqrestore(&ec->lock, flags); 755 563 } 756 564 757 565 void acpi_ec_block_transactions(void) ··· 814 520 815 521 mutex_lock(&ec->mutex); 816 522 /* Prevent transactions from being carried out */ 817 - set_bit(EC_FLAGS_BLOCKED, &ec->flags); 523 + acpi_ec_stop(ec, true); 818 524 mutex_unlock(&ec->mutex); 819 525 } 820 526 ··· 825 531 if (!ec) 826 532 return; 827 533 828 - mutex_lock(&ec->mutex); 829 534 /* Allow transactions to be carried out again */ 830 - clear_bit(EC_FLAGS_BLOCKED, &ec->flags); 535 + acpi_ec_start(ec, true); 831 536 832 537 if (EC_FLAGS_CLEAR_ON_RESUME) 833 538 acpi_ec_clear(ec); 834 - 835 - mutex_unlock(&ec->mutex); 836 539 } 837 540 838 541 void acpi_ec_unblock_transactions_early(void) ··· 839 548 * atomic context during wakeup, so we don't need to acquire the mutex). 
840 549 */ 841 550 if (first_ec) 842 - clear_bit(EC_FLAGS_BLOCKED, &first_ec->flags); 843 - } 844 - 845 - static int acpi_ec_query_unlocked(struct acpi_ec *ec, u8 *data) 846 - { 847 - int result; 848 - u8 d; 849 - struct transaction t = {.command = ACPI_EC_COMMAND_QUERY, 850 - .wdata = NULL, .rdata = &d, 851 - .wlen = 0, .rlen = 1}; 852 - 853 - if (!ec || !data) 854 - return -EINVAL; 855 - /* 856 - * Query the EC to find out which _Qxx method we need to evaluate. 857 - * Note that successful completion of the query causes the ACPI_EC_SCI 858 - * bit to be cleared (and thus clearing the interrupt source). 859 - */ 860 - result = acpi_ec_transaction_unlocked(ec, &t); 861 - if (result) 862 - return result; 863 - if (!d) 864 - return -ENODATA; 865 - *data = d; 866 - return 0; 551 + acpi_ec_start(first_ec, true); 867 552 } 868 553 869 554 /* -------------------------------------------------------------------------- 870 555 Event Management 871 556 -------------------------------------------------------------------------- */ 557 + static struct acpi_ec_query_handler * 558 + acpi_ec_get_query_handler(struct acpi_ec_query_handler *handler) 559 + { 560 + if (handler) 561 + kref_get(&handler->kref); 562 + return handler; 563 + } 564 + 565 + static void acpi_ec_query_handler_release(struct kref *kref) 566 + { 567 + struct acpi_ec_query_handler *handler = 568 + container_of(kref, struct acpi_ec_query_handler, kref); 569 + 570 + kfree(handler); 571 + } 572 + 573 + static void acpi_ec_put_query_handler(struct acpi_ec_query_handler *handler) 574 + { 575 + kref_put(&handler->kref, acpi_ec_query_handler_release); 576 + } 577 + 872 578 int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit, 873 579 acpi_handle handle, acpi_ec_query_func func, 874 580 void *data) ··· 881 593 handler->func = func; 882 594 handler->data = data; 883 595 mutex_lock(&ec->mutex); 596 + kref_init(&handler->kref); 884 597 list_add(&handler->node, &ec->list); 885 598 mutex_unlock(&ec->mutex); 886 599 
return 0; ··· 891 602 void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit) 892 603 { 893 604 struct acpi_ec_query_handler *handler, *tmp; 605 + LIST_HEAD(free_list); 894 606 895 607 mutex_lock(&ec->mutex); 896 608 list_for_each_entry_safe(handler, tmp, &ec->list, node) { 897 609 if (query_bit == handler->query_bit) { 898 - list_del(&handler->node); 899 - kfree(handler); 610 + list_del_init(&handler->node); 611 + list_add(&handler->node, &free_list); 900 612 } 901 613 } 902 614 mutex_unlock(&ec->mutex); 615 + list_for_each_entry(handler, &free_list, node) 616 + acpi_ec_put_query_handler(handler); 903 617 } 904 618 EXPORT_SYMBOL_GPL(acpi_ec_remove_query_handler); 905 619 ··· 918 626 else if (handler->handle) 919 627 acpi_evaluate_object(handler->handle, NULL, NULL, NULL); 920 628 pr_debug("##### Query(0x%02x) stopped #####\n", handler->query_bit); 921 - kfree(handler); 629 + acpi_ec_put_query_handler(handler); 922 630 } 923 631 924 - static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data) 632 + static int acpi_ec_query(struct acpi_ec *ec, u8 *data) 925 633 { 926 634 u8 value = 0; 927 - int status; 928 - struct acpi_ec_query_handler *handler, *copy; 635 + int result; 636 + acpi_status status; 637 + struct acpi_ec_query_handler *handler; 638 + struct transaction t = {.command = ACPI_EC_COMMAND_QUERY, 639 + .wdata = NULL, .rdata = &value, 640 + .wlen = 0, .rlen = 1}; 929 641 930 - status = acpi_ec_query_unlocked(ec, &value); 642 + /* 643 + * Query the EC to find out which _Qxx method we need to evaluate. 644 + * Note that successful completion of the query causes the ACPI_EC_SCI 645 + * bit to be cleared (and thus clearing the interrupt source). 
+	 */
+	result = acpi_ec_transaction(ec, &t);
+	if (result)
+		return result;
 	if (data)
 		*data = value;
-	if (status)
-		return status;
+	if (!value)
+		return -ENODATA;
 
+	mutex_lock(&ec->mutex);
 	list_for_each_entry(handler, &ec->list, node) {
 		if (value == handler->query_bit) {
 			/* have custom handler for this bit */
-			copy = kmalloc(sizeof(*handler), GFP_KERNEL);
-			if (!copy)
-				return -ENOMEM;
-			memcpy(copy, handler, sizeof(*copy));
+			handler = acpi_ec_get_query_handler(handler);
 			pr_debug("##### Query(0x%02x) scheduled #####\n",
 				 handler->query_bit);
-			return acpi_os_execute((copy->func) ?
+			status = acpi_os_execute((handler->func) ?
 				OSL_NOTIFY_HANDLER : OSL_GPE_HANDLER,
-				acpi_ec_run, copy);
+				acpi_ec_run, handler);
+			if (ACPI_FAILURE(status))
+				result = -EBUSY;
+			break;
 		}
 	}
-	return 0;
-}
-
-static void acpi_ec_gpe_query(void *ec_cxt)
-{
-	struct acpi_ec *ec = ec_cxt;
-
-	if (!ec)
-		return;
-	mutex_lock(&ec->mutex);
-	acpi_ec_sync_query(ec, NULL);
 	mutex_unlock(&ec->mutex);
+	return result;
 }
 
-static int ec_check_sci(struct acpi_ec *ec, u8 state)
+static void acpi_ec_gpe_poller(struct work_struct *work)
 {
-	if (state & ACPI_EC_FLAG_SCI) {
-		if (!test_and_set_bit(EC_FLAGS_QUERY_PENDING, &ec->flags)) {
-			pr_debug("***** Event started *****\n");
-			return acpi_os_execute(OSL_NOTIFY_HANDLER,
-				acpi_ec_gpe_query, ec);
-		}
-	}
-	return 0;
+	struct acpi_ec *ec = container_of(work, struct acpi_ec, work);
+
+	pr_debug("***** Event poller started *****\n");
+	acpi_ec_query(ec, NULL);
+	pr_debug("***** Event poller stopped *****\n");
 }
 
 static u32 acpi_ec_gpe_handler(acpi_handle gpe_device,
···
 	struct acpi_ec *ec = data;
 
 	spin_lock_irqsave(&ec->lock, flags);
-	if (advance_transaction(ec))
-		wake_up(&ec->wait);
+	advance_transaction(ec);
 	spin_unlock_irqrestore(&ec->lock, flags);
-	ec_check_sci(ec, acpi_ec_read_status(ec));
-	return ACPI_INTERRUPT_HANDLED | ACPI_REENABLE_GPE;
+	return ACPI_INTERRUPT_HANDLED;
 }
 
 /* --------------------------------------------------------------------------
···
 
 	if (!ec)
 		return NULL;
-	ec->flags = 1 << EC_FLAGS_QUERY_PENDING;
 	mutex_init(&ec->mutex);
 	init_waitqueue_head(&ec->wait);
 	INIT_LIST_HEAD(&ec->list);
 	spin_lock_init(&ec->lock);
+	INIT_WORK(&ec->work, acpi_ec_gpe_poller);
 	return ec;
 }
 
···
 
 	if (test_bit(EC_FLAGS_HANDLERS_INSTALLED, &ec->flags))
 		return 0;
-	status = acpi_install_gpe_handler(NULL, ec->gpe,
+	status = acpi_install_gpe_raw_handler(NULL, ec->gpe,
 					  ACPI_GPE_EDGE_TRIGGERED,
 					  &acpi_ec_gpe_handler, ec);
 	if (ACPI_FAILURE(status))
 		return -ENODEV;
 
-	acpi_enable_gpe(NULL, ec->gpe);
+	acpi_ec_start(ec, false);
 	status = acpi_install_address_space_handler(ec->handle,
 						    ACPI_ADR_SPACE_EC,
 						    &acpi_ec_space_handler,
···
 			pr_err("Fail in evaluating the _REG object"
 			       " of EC device. Broken bios is suspected.\n");
 		} else {
-			acpi_disable_gpe(NULL, ec->gpe);
+			acpi_ec_stop(ec, false);
 			acpi_remove_gpe_handler(NULL, ec->gpe,
 				&acpi_ec_gpe_handler);
 			return -ENODEV;
···
 {
 	if (!test_bit(EC_FLAGS_HANDLERS_INSTALLED, &ec->flags))
 		return;
-	acpi_disable_gpe(NULL, ec->gpe);
+	acpi_ec_stop(ec, false);
 	if (ACPI_FAILURE(acpi_remove_address_space_handler(ec->handle,
 				ACPI_ADR_SPACE_EC, &acpi_ec_space_handler)))
 		pr_err("failed to remove space handler\n");
···
 	ret = ec_install_handlers(ec);
 
 	/* EC is fully operational, allow queries */
-	clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags);
+	acpi_ec_enable_event(ec);
 
 	/* Clear stale _Q events if hardware might require that */
-	if (EC_FLAGS_CLEAR_ON_RESUME) {
-		mutex_lock(&ec->mutex);
+	if (EC_FLAGS_CLEAR_ON_RESUME)
 		acpi_ec_clear(ec);
-		mutex_unlock(&ec->mutex);
-	}
 	return ret;
 }
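The key change in the EC query path above is that the old kmalloc/memcpy copy of the query handler is replaced by reference counting (`acpi_ec_get_query_handler()`), so the handler can outlive the walk of `ec->list` without being duplicated. A minimal userspace sketch of that get/put pattern (all names hypothetical; the kernel side uses a `kref` and holds `ec->mutex` around the list walk):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for a reference-counted EC query handler. */
struct query_handler {
	int query_bit;
	int refcount;	/* the kernel uses a kref; a plain counter suffices here */
};

static struct query_handler *handler_alloc(int query_bit)
{
	struct query_handler *h = malloc(sizeof(*h));

	h->query_bit = query_bit;
	h->refcount = 1;	/* the handler list itself holds one reference */
	return h;
}

static struct query_handler *handler_get(struct query_handler *h)
{
	h->refcount++;	/* taken while the list lock is held */
	return h;
}

/* Returns 1 when the last reference is dropped and the handler is freed. */
static int handler_put(struct query_handler *h)
{
	if (--h->refcount == 0) {
		free(h);
		return 1;
	}
	return 0;
}
```

The deferred work item takes a reference before the lock is dropped and puts it when it finishes, so unregistering a handler while a query is in flight cannot free it early.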
+11
drivers/acpi/internal.h
···
 int acpi_sysfs_init(void);
 void acpi_container_init(void);
 void acpi_memory_hotplug_init(void);
+#ifdef CONFIG_ACPI_HOTPLUG_IOAPIC
+int acpi_ioapic_add(struct acpi_pci_root *root);
+int acpi_ioapic_remove(struct acpi_pci_root *root);
+#else
+static inline int acpi_ioapic_add(struct acpi_pci_root *root) { return 0; }
+static inline int acpi_ioapic_remove(struct acpi_pci_root *root) { return 0; }
+#endif
 #ifdef CONFIG_ACPI_DOCK
 void register_dock_dependent_device(struct acpi_device *adev,
 				    acpi_handle dshandle);
···
 static inline void acpi_debugfs_init(void) { return; }
 #endif
 void acpi_lpss_init(void);
+
+void acpi_apd_init(void);
 
 acpi_status acpi_hotplug_schedule(struct acpi_device *adev, u32 src);
 bool acpi_queue_hotplug_work(struct work_struct *work);
···
 	unsigned long data_addr;
 	unsigned long global_lock;
 	unsigned long flags;
+	unsigned long reference_count;
 	struct mutex mutex;
 	wait_queue_head_t wait;
 	struct list_head list;
 	struct transaction *curr;
 	spinlock_t lock;
+	struct work_struct work;
};
 
extern struct acpi_ec *first_ec;
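The header change above follows the usual kernel idiom for optional features: real prototypes under the config option, and `static inline` stubs otherwise, so callers never need their own `#ifdef`. A compilable toy version of the same idiom (names invented, with a fake config macro standing in for `CONFIG_ACPI_HOTPLUG_IOAPIC`):

```c
#include <assert.h>

struct fake_root { int dummy; };	/* stand-in for struct acpi_pci_root */

#ifdef FAKE_CONFIG_HOTPLUG_IOAPIC
/* Real implementation would live in its own .c file. */
int fake_ioapic_add(struct fake_root *root);
#else
/* Stub compiles away entirely; the caller is identical either way. */
static inline int fake_ioapic_add(struct fake_root *root)
{
	(void)root;
	return 0;
}
#endif
```

With the option disabled, call sites such as `fake_ioapic_add(&root)` still compile and simply succeed as no-ops, which is exactly how `acpi_ioapic_add()` is wired into `pci_root.c` later in this series.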
+229
drivers/acpi/ioapic.c
···
+/*
+ * IOAPIC/IOxAPIC/IOSAPIC driver
+ *
+ * Copyright (C) 2009 Fujitsu Limited.
+ * (c) Copyright 2009 Hewlett-Packard Development Company, L.P.
+ *
+ * Copyright (C) 2014 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Based on original drivers/pci/ioapic.c
+ *	Yinghai Lu <yinghai@kernel.org>
+ *	Jiang Liu <jiang.liu@intel.com>
+ */
+
+/*
+ * This driver manages I/O APICs added by hotplug after boot.
+ * We try to claim all I/O APIC devices, but those present at boot were
+ * registered when we parsed the ACPI MADT.
+ */
+
+#define pr_fmt(fmt) "ACPI : IOAPIC: " fmt
+
+#include <linux/slab.h>
+#include <linux/acpi.h>
+#include <linux/pci.h>
+#include <acpi/acpi.h>
+
+struct acpi_pci_ioapic {
+	acpi_handle	root_handle;
+	acpi_handle	handle;
+	u32		gsi_base;
+	struct resource	res;
+	struct pci_dev	*pdev;
+	struct list_head list;
+};
+
+static LIST_HEAD(ioapic_list);
+static DEFINE_MUTEX(ioapic_list_lock);
+
+static acpi_status setup_res(struct acpi_resource *acpi_res, void *data)
+{
+	struct resource *res = data;
+	struct resource_win win;
+
+	res->flags = 0;
+	if (acpi_dev_filter_resource_type(acpi_res, IORESOURCE_MEM) == 0)
+		return AE_OK;
+
+	if (!acpi_dev_resource_memory(acpi_res, res)) {
+		if (acpi_dev_resource_address_space(acpi_res, &win) ||
+		    acpi_dev_resource_ext_address_space(acpi_res, &win))
+			*res = win.res;
+	}
+	if ((res->flags & IORESOURCE_PREFETCH) ||
+	    (res->flags & IORESOURCE_DISABLED))
+		res->flags = 0;
+
+	return AE_CTRL_TERMINATE;
+}
+
+static bool acpi_is_ioapic(acpi_handle handle, char **type)
+{
+	acpi_status status;
+	struct acpi_device_info *info;
+	char *hid = NULL;
+	bool match = false;
+
+	if (!acpi_has_method(handle, "_GSB"))
+		return false;
+
+	status = acpi_get_object_info(handle, &info);
+	if (ACPI_SUCCESS(status)) {
+		if (info->valid & ACPI_VALID_HID)
+			hid = info->hardware_id.string;
+		if (hid) {
+			if (strcmp(hid, "ACPI0009") == 0) {
+				*type = "IOxAPIC";
+				match = true;
+			} else if (strcmp(hid, "ACPI000A") == 0) {
+				*type = "IOAPIC";
+				match = true;
+			}
+		}
+		kfree(info);
+	}
+
+	return match;
+}
+
+static acpi_status handle_ioapic_add(acpi_handle handle, u32 lvl,
+				     void *context, void **rv)
+{
+	acpi_status status;
+	unsigned long long gsi_base;
+	struct acpi_pci_ioapic *ioapic;
+	struct pci_dev *dev = NULL;
+	struct resource *res = NULL;
+	char *type = NULL;
+
+	if (!acpi_is_ioapic(handle, &type))
+		return AE_OK;
+
+	mutex_lock(&ioapic_list_lock);
+	list_for_each_entry(ioapic, &ioapic_list, list)
+		if (ioapic->handle == handle) {
+			mutex_unlock(&ioapic_list_lock);
+			return AE_OK;
+		}
+
+	status = acpi_evaluate_integer(handle, "_GSB", NULL, &gsi_base);
+	if (ACPI_FAILURE(status)) {
+		acpi_handle_warn(handle, "failed to evaluate _GSB method\n");
+		goto exit;
+	}
+
+	ioapic = kzalloc(sizeof(*ioapic), GFP_KERNEL);
+	if (!ioapic) {
+		pr_err("cannot allocate memory for new IOAPIC\n");
+		goto exit;
+	} else {
+		ioapic->root_handle = (acpi_handle)context;
+		ioapic->handle = handle;
+		ioapic->gsi_base = (u32)gsi_base;
+		INIT_LIST_HEAD(&ioapic->list);
+	}
+
+	if (acpi_ioapic_registered(handle, (u32)gsi_base))
+		goto done;
+
+	dev = acpi_get_pci_dev(handle);
+	if (dev && pci_resource_len(dev, 0)) {
+		if (pci_enable_device(dev) < 0)
+			goto exit_put;
+		pci_set_master(dev);
+		if (pci_request_region(dev, 0, type))
+			goto exit_disable;
+		res = &dev->resource[0];
+		ioapic->pdev = dev;
+	} else {
+		pci_dev_put(dev);
+		dev = NULL;
+
+		res = &ioapic->res;
+		acpi_walk_resources(handle, METHOD_NAME__CRS, setup_res, res);
+		if (res->flags == 0) {
+			acpi_handle_warn(handle, "failed to get resource\n");
+			goto exit_free;
+		} else if (request_resource(&iomem_resource, res)) {
+			acpi_handle_warn(handle, "failed to insert resource\n");
+			goto exit_free;
+		}
+	}
+
+	if (acpi_register_ioapic(handle, res->start, (u32)gsi_base)) {
+		acpi_handle_warn(handle, "failed to register IOAPIC\n");
+		goto exit_release;
+	}
+done:
+	list_add(&ioapic->list, &ioapic_list);
+	mutex_unlock(&ioapic_list_lock);
+
+	if (dev)
+		dev_info(&dev->dev, "%s at %pR, GSI %u\n",
+			 type, res, (u32)gsi_base);
+	else
+		acpi_handle_info(handle, "%s at %pR, GSI %u\n",
+				 type, res, (u32)gsi_base);
+
+	return AE_OK;
+
+exit_release:
+	if (dev)
+		pci_release_region(dev, 0);
+	else
+		release_resource(res);
+exit_disable:
+	if (dev)
+		pci_disable_device(dev);
+exit_put:
+	pci_dev_put(dev);
+exit_free:
+	kfree(ioapic);
+exit:
+	mutex_unlock(&ioapic_list_lock);
+	*(acpi_status *)rv = AE_ERROR;
+	return AE_OK;
+}
+
+int acpi_ioapic_add(struct acpi_pci_root *root)
+{
+	acpi_status status, retval = AE_OK;
+
+	status = acpi_walk_namespace(ACPI_TYPE_DEVICE, root->device->handle,
+				     UINT_MAX, handle_ioapic_add, NULL,
+				     root->device->handle, (void **)&retval);
+
+	return ACPI_SUCCESS(status) && ACPI_SUCCESS(retval) ? 0 : -ENODEV;
+}
+
+int acpi_ioapic_remove(struct acpi_pci_root *root)
+{
+	int retval = 0;
+	struct acpi_pci_ioapic *ioapic, *tmp;
+
+	mutex_lock(&ioapic_list_lock);
+	list_for_each_entry_safe(ioapic, tmp, &ioapic_list, list) {
+		if (root->device->handle != ioapic->root_handle)
+			continue;
+
+		if (acpi_unregister_ioapic(ioapic->handle, ioapic->gsi_base))
+			retval = -EBUSY;
+
+		if (ioapic->pdev) {
+			pci_release_region(ioapic->pdev, 0);
+			pci_disable_device(ioapic->pdev);
+			pci_dev_put(ioapic->pdev);
+		} else if (ioapic->res.flags && ioapic->res.parent) {
+			release_resource(&ioapic->res);
+		}
+		list_del(&ioapic->list);
+		kfree(ioapic);
+	}
+	mutex_unlock(&ioapic_list_lock);
+
+	return retval;
+}
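The core bookkeeping in ioapic.c is a mutex-protected list keyed by ACPI handle: `handle_ioapic_add()` skips handles already on the list, and `acpi_ioapic_remove()` tears down every entry belonging to one root bridge. A small single-threaded sketch of that registry (plain pointers stand in for `acpi_handle`; the kernel additionally serializes with `ioapic_list_lock` and releases PCI/resource state per entry):

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace model of the ioapic_list entry. */
struct entry {
	void *root_handle;
	void *handle;
	unsigned int gsi_base;
	struct entry *next;
};

static struct entry *registry;

/* Returns 1 if a new entry was added, 0 if @handle was already claimed. */
static int registry_add(void *root, void *handle, unsigned int gsi_base)
{
	struct entry *e;

	for (e = registry; e; e = e->next)
		if (e->handle == handle)
			return 0;	/* duplicate, like handle_ioapic_add() */

	e = malloc(sizeof(*e));
	e->root_handle = root;
	e->handle = handle;
	e->gsi_base = gsi_base;
	e->next = registry;
	registry = e;
	return 1;
}

/* Remove every entry registered under @root; returns how many were freed. */
static int registry_remove_root(void *root)
{
	struct entry **pp = &registry, *e;
	int n = 0;

	while ((e = *pp)) {
		if (e->root_handle == root) {
			*pp = e->next;	/* unlink before freeing */
			free(e);
			n++;
		} else {
			pp = &e->next;
		}
	}
	return n;
}
```

Keying removal on `root_handle` rather than the IOAPIC handle is what lets `acpi_pci_root_remove()` tear down all IOAPICs below one host bridge in a single call.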
+2 -10
drivers/acpi/numa.c
···
 
 static int __init acpi_parse_slit(struct acpi_table_header *table)
 {
-	struct acpi_table_slit *slit;
-
-	if (!table)
-		return -EINVAL;
-
-	slit = (struct acpi_table_slit *)table;
+	struct acpi_table_slit *slit = (struct acpi_table_slit *)table;
 
 	if (!slit_valid(slit)) {
 		printk(KERN_INFO "ACPI: SLIT table looks invalid. Not used.\n");
···
 
 static int __init acpi_parse_srat(struct acpi_table_header *table)
 {
-	struct acpi_table_srat *srat;
-	if (!table)
-		return -EINVAL;
+	struct acpi_table_srat *srat = (struct acpi_table_srat *)table;
 
-	srat = (struct acpi_table_srat *)table;
 	acpi_srat_revision = srat->header.revision;
 
 	/* Real work done in acpi_table_parse_srat below. */
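The numa.c hunks only drop redundant NULL checks (the table-parsing core never invokes these handlers with a NULL table); the real sanity gate is the `slit_valid()` call that survives. Its check is worth seeing concretely: the SLIT is a `d × d` distance matrix whose diagonal must be the local distance (10) and whose off-diagonal entries must be strictly larger. A self-contained sketch of that validation (simplified from the kernel's `slit_valid()`, which also checks the table length):

```c
#include <assert.h>

#define LOCAL_DISTANCE 10	/* SLIT distance from a node to itself */

/*
 * Validate a flattened d x d SLIT distance matrix: entry[d*i + j] is the
 * distance from node i to node j. Diagonal must be LOCAL_DISTANCE and every
 * remote distance must be strictly greater, else the table is rejected.
 */
static int slit_matrix_valid(const unsigned char *entry, int d)
{
	int i, j;

	for (i = 0; i < d; i++) {
		for (j = 0; j < d; j++) {
			unsigned char val = entry[d * i + j];

			if (i == j) {
				if (val != LOCAL_DISTANCE)
					return 0;
			} else if (val <= LOCAL_DISTANCE) {
				return 0;
			}
		}
	}
	return 1;
}
```

A firmware table failing this check is simply ignored, which is why the "SLIT table looks invalid. Not used." message exists.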
+1 -8
drivers/acpi/pci_irq.c
···
 	if (!pin || !dev->irq_managed || dev->irq <= 0)
 		return;
 
-	/* Keep IOAPIC pin configuration when suspending */
-	if (dev->dev.power.is_prepared)
-		return;
-#ifdef CONFIG_PM
-	if (dev->dev.power.runtime_status == RPM_SUSPENDING)
-		return;
-#endif
-
 	entry = acpi_pci_irq_lookup(dev, pin);
 	if (!entry)
 		return;
···
 	if (gsi >= 0) {
 		acpi_unregister_gsi(gsi);
 		dev->irq_managed = 0;
+		dev->irq = 0;
 	}
 }
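The pci_irq.c change tightens the disable path: besides clearing `irq_managed`, it now zeroes `dev->irq` so nothing downstream can reuse a stale GSI number after `acpi_unregister_gsi()`. A toy model of that invariant (illustrative struct and functions only, not the kernel's `struct pci_dev` API):

```c
#include <assert.h>

/* Toy model of the dev->irq / dev->irq_managed bookkeeping. */
struct toy_pci_dev {
	int irq;		/* 0 means "no IRQ assigned" */
	int irq_managed;	/* set when ACPI registered the GSI for us */
};

static void toy_enable_irq(struct toy_pci_dev *dev, int gsi)
{
	dev->irq = gsi;
	dev->irq_managed = 1;
}

/* Returns 0 on release, -1 when there is nothing to release. */
static int toy_disable_irq(struct toy_pci_dev *dev)
{
	if (!dev->irq_managed || dev->irq <= 0)
		return -1;
	dev->irq_managed = 0;
	dev->irq = 0;	/* the patch's addition: drop the stale number too */
	return 0;
}
```

The guard at the top mirrors the patched `acpi_pci_irq_disable()` entry check, which makes a double-disable a harmless no-op.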
+6 -3
drivers/acpi/pci_root.c
···
 	if (ACPI_FAILURE(status))
 		return AE_OK;
 
-	if ((address.address_length > 0) &&
+	if ((address.address.address_length > 0) &&
 	    (address.resource_type == ACPI_BUS_NUMBER_RANGE)) {
-		res->start = address.minimum;
-		res->end = address.minimum + address.address_length - 1;
+		res->start = address.address.minimum;
+		res->end = address.address.minimum + address.address.address_length - 1;
 	}
 
 	return AE_OK;
···
 	if (hotadd) {
 		pcibios_resource_survey_bus(root->bus);
 		pci_assign_unassigned_root_bus_resources(root->bus);
+		acpi_ioapic_add(root);
 	}
 
 	pci_lock_rescan_remove();
···
 	pci_lock_rescan_remove();
 
 	pci_stop_root_bus(root->bus);
+
+	WARN_ON(acpi_ioapic_remove(root));
 
 	device_set_run_wake(root->bus->bridge, false);
 	pci_acpi_remove_bus_pm_notifier(device);
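The `address.minimum` → `address.address.minimum` churn in pci_root.c tracks an ACPICA layout change: the fields common to 16/32/64-bit address descriptors moved into a nested attribute struct so one decode path serves all widths. A simplified model of the new shape and the bus-range decode performed above (struct and function names are illustrative, not the real ACPICA definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Nested attribute block shared by all address-descriptor widths,
 * modeled on ACPICA's acpi_address64_attribute. */
struct addr64_attr {
	uint64_t granularity;
	uint64_t minimum;
	uint64_t maximum;
	uint64_t translation_offset;
	uint64_t address_length;
};

struct resource_addr64 {
	uint8_t resource_type;		/* e.g. bus-number range */
	struct addr64_attr address;	/* fields that used to be inlined */
};

/* Bus-number range decode, as in the patched hunk:
 * [minimum, minimum + address_length - 1]. */
static int decode_bus_range(const struct resource_addr64 *a,
			    uint64_t *start, uint64_t *end)
{
	if (a->address.address_length == 0)
		return -1;	/* empty range carries no bus numbers */
	*start = a->address.minimum;
	*end = a->address.minimum + a->address.address_length - 1;
	return 0;
}
```

Pushing the common fields one level down means a single `struct addr64_attr *` can be passed around regardless of descriptor width, which is exactly what the new `acpi_decode_space()` in resource.c exploits.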
+114 -9
drivers/acpi/processor_core.c
···
  *
  *	Alex Chiang <achiang@hp.com>
  *	- Unified x86/ia64 implementations
+ *
+ * I/O APIC hotplug support
+ *	Yinghai Lu <yinghai@kernel.org>
+ *	Jiang Liu <jiang.liu@intel.com>
  */
 #include <linux/export.h>
 #include <linux/acpi.h>
···
 
 #define _COMPONENT		ACPI_PROCESSOR_COMPONENT
 ACPI_MODULE_NAME("processor_core");
+
+static struct acpi_table_madt *get_madt_table(void)
+{
+	static struct acpi_table_madt *madt;
+	static int read_madt;
+
+	if (!read_madt) {
+		if (ACPI_FAILURE(acpi_get_table(ACPI_SIG_MADT, 0,
+					(struct acpi_table_header **)&madt)))
+			madt = NULL;
+		read_madt++;
+	}
+
+	return madt;
+}
 
 static int map_lapic_id(struct acpi_subtable_header *entry,
 		 u32 acpi_id, int *apic_id)
···
 static int map_madt_entry(int type, u32 acpi_id)
 {
 	unsigned long madt_end, entry;
-	static struct acpi_table_madt *madt;
-	static int read_madt;
 	int phys_id = -1;	/* CPU hardware ID */
+	struct acpi_table_madt *madt;
 
-	if (!read_madt) {
-		if (ACPI_FAILURE(acpi_get_table(ACPI_SIG_MADT, 0,
-					(struct acpi_table_header **)&madt)))
-			madt = NULL;
-		read_madt++;
-	}
-
+	madt = get_madt_table();
 	if (!madt)
 		return phys_id;
 
···
 	return acpi_map_cpuid(phys_id, acpi_id);
 }
 EXPORT_SYMBOL_GPL(acpi_get_cpuid);
+
+#ifdef CONFIG_ACPI_HOTPLUG_IOAPIC
+static int get_ioapic_id(struct acpi_subtable_header *entry, u32 gsi_base,
+			 u64 *phys_addr, int *ioapic_id)
+{
+	struct acpi_madt_io_apic *ioapic = (struct acpi_madt_io_apic *)entry;
+
+	if (ioapic->global_irq_base != gsi_base)
+		return 0;
+
+	*phys_addr = ioapic->address;
+	*ioapic_id = ioapic->id;
+	return 1;
+}
+
+static int parse_madt_ioapic_entry(u32 gsi_base, u64 *phys_addr)
+{
+	struct acpi_subtable_header *hdr;
+	unsigned long madt_end, entry;
+	struct acpi_table_madt *madt;
+	int apic_id = -1;
+
+	madt = get_madt_table();
+	if (!madt)
+		return apic_id;
+
+	entry = (unsigned long)madt;
+	madt_end = entry + madt->header.length;
+
+	/* Parse all entries looking for a match. */
+	entry += sizeof(struct acpi_table_madt);
+	while (entry + sizeof(struct acpi_subtable_header) < madt_end) {
+		hdr = (struct acpi_subtable_header *)entry;
+		if (hdr->type == ACPI_MADT_TYPE_IO_APIC &&
+		    get_ioapic_id(hdr, gsi_base, phys_addr, &apic_id))
+			break;
+		else
+			entry += hdr->length;
+	}
+
+	return apic_id;
+}
+
+static int parse_mat_ioapic_entry(acpi_handle handle, u32 gsi_base,
+				  u64 *phys_addr)
+{
+	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+	struct acpi_subtable_header *header;
+	union acpi_object *obj;
+	int apic_id = -1;
+
+	if (ACPI_FAILURE(acpi_evaluate_object(handle, "_MAT", NULL, &buffer)))
+		goto exit;
+
+	if (!buffer.length || !buffer.pointer)
+		goto exit;
+
+	obj = buffer.pointer;
+	if (obj->type != ACPI_TYPE_BUFFER ||
+	    obj->buffer.length < sizeof(struct acpi_subtable_header))
+		goto exit;
+
+	header = (struct acpi_subtable_header *)obj->buffer.pointer;
+	if (header->type == ACPI_MADT_TYPE_IO_APIC)
+		get_ioapic_id(header, gsi_base, phys_addr, &apic_id);
+
+exit:
+	kfree(buffer.pointer);
+	return apic_id;
+}
+
+/**
+ * acpi_get_ioapic_id - Get IOAPIC ID and physical address matching @gsi_base
+ * @handle:	ACPI object for IOAPIC device
+ * @gsi_base:	GSI base to match with
+ * @phys_addr:	Pointer to store physical address of matching IOAPIC record
+ *
+ * Walk resources returned by the _MAT method, then the ACPI MADT table, to
+ * search for an ACPI IOAPIC record matching @gsi_base.
+ * Return the IOAPIC id and store the physical address in @phys_addr if a
+ * match is found, otherwise return <0.
+ */
+int acpi_get_ioapic_id(acpi_handle handle, u32 gsi_base, u64 *phys_addr)
+{
+	int apic_id;
+
+	apic_id = parse_mat_ioapic_entry(handle, gsi_base, phys_addr);
+	if (apic_id == -1)
+		apic_id = parse_madt_ioapic_entry(gsi_base, phys_addr);
+
+	return apic_id;
+}
+#endif /* CONFIG_ACPI_HOTPLUG_IOAPIC */
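`parse_madt_ioapic_entry()` above is a standard MADT subtable walk: advance through variable-length records by each header's `length` field and stop at the first IOAPIC record whose GSI base matches. A self-contained userspace version of the same loop over a raw byte buffer (struct layouts simplified from the real, packed ACPI subtables; field offsets happen to coincide here):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MADT_TYPE_IO_APIC 1	/* subtable type 1 is the I/O APIC record */

struct sub_hdr {
	uint8_t type;
	uint8_t length;		/* total length of this subtable, in bytes */
};

struct madt_ioapic {
	struct sub_hdr hdr;
	uint8_t id;
	uint8_t reserved;
	uint32_t address;	/* MMIO base of the I/O APIC */
	uint32_t gsi_base;	/* first GSI this I/O APIC handles */
};

/*
 * Walk the subtable region [buf, buf + len) looking for an IOAPIC record
 * whose GSI base matches; return its id and store its MMIO address,
 * or return -1 if no record matches.
 */
static int find_ioapic(const uint8_t *buf, size_t len, uint32_t gsi_base,
		       uint64_t *phys_addr)
{
	size_t off = 0;

	while (off + sizeof(struct sub_hdr) < len) {
		const struct sub_hdr *h = (const struct sub_hdr *)(buf + off);

		if (h->type == MADT_TYPE_IO_APIC) {
			struct madt_ioapic io;

			memcpy(&io, buf + off, sizeof(io));
			if (io.gsi_base == gsi_base) {
				*phys_addr = io.address;
				return io.id;
			}
		}
		off += h->length;	/* variable-length records */
	}
	return -1;
}
```

The kernel tries the device's `_MAT` method first and only falls back to this boot-time MADT walk, since a hot-added IOAPIC may not appear in the static table at all.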
+52 -130
drivers/acpi/processor_idle.c
···
 }
 
 /**
- * acpi_idle_do_entry - a helper function that does C2 and C3 type entry
+ * acpi_idle_do_entry - enter idle state using the appropriate method
  * @cx: cstate data
  *
  * Caller disables interrupt before call and enables interrupt after return.
  */
-static inline void acpi_idle_do_entry(struct acpi_processor_cx *cx)
+static void acpi_idle_do_entry(struct acpi_processor_cx *cx)
 {
-	/* Don't trace irqs off for idle */
-	stop_critical_timings();
 	if (cx->entry_method == ACPI_CSTATE_FFH) {
 		/* Call into architectural FFH based C-state */
 		acpi_processor_ffh_cstate_enter(cx);
···
 	   gets asserted in time to freeze execution properly. */
 		inl(acpi_gbl_FADT.xpm_timer_block.address);
 	}
-	start_critical_timings();
 }
-
-/**
- * acpi_idle_enter_c1 - enters an ACPI C1 state-type
- * @dev: the target CPU
- * @drv: cpuidle driver containing cpuidle state info
- * @index: index of target state
- *
- * This is equivalent to the HALT instruction.
- */
-static int acpi_idle_enter_c1(struct cpuidle_device *dev,
-		struct cpuidle_driver *drv, int index)
-{
-	struct acpi_processor *pr;
-	struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
-
-	pr = __this_cpu_read(processors);
-
-	if (unlikely(!pr))
-		return -EINVAL;
-
-	lapic_timer_state_broadcast(pr, cx, 1);
-	acpi_idle_do_entry(cx);
-
-	lapic_timer_state_broadcast(pr, cx, 0);
-
-	return index;
-}
-
 
 /**
  * acpi_idle_play_dead - enters an ACPI state for long-term idle (i.e. off-lining)
···
 	return 0;
 }
 
-/**
- * acpi_idle_enter_simple - enters an ACPI state without BM handling
- * @dev: the target CPU
- * @drv: cpuidle driver with cpuidle state information
- * @index: the index of suggested state
- */
-static int acpi_idle_enter_simple(struct cpuidle_device *dev,
-		struct cpuidle_driver *drv, int index)
+static bool acpi_idle_fallback_to_c1(struct acpi_processor *pr)
 {
-	struct acpi_processor *pr;
-	struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
-
-	pr = __this_cpu_read(processors);
-
-	if (unlikely(!pr))
-		return -EINVAL;
-
-#ifdef CONFIG_HOTPLUG_CPU
-	if ((cx->type != ACPI_STATE_C1) && (num_online_cpus() > 1) &&
-	    !pr->flags.has_cst &&
-	    !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED))
-		return acpi_idle_enter_c1(dev, drv, CPUIDLE_DRIVER_STATE_START);
-#endif
-
-	/*
-	 * Must be done before busmaster disable as we might need to
-	 * access HPET !
-	 */
-	lapic_timer_state_broadcast(pr, cx, 1);
-
-	if (cx->type == ACPI_STATE_C3)
-		ACPI_FLUSH_CPU_CACHE();
-
-	/* Tell the scheduler that we are going deep-idle: */
-	sched_clock_idle_sleep_event();
-	acpi_idle_do_entry(cx);
-
-	sched_clock_idle_wakeup_event(0);
-
-	lapic_timer_state_broadcast(pr, cx, 0);
-	return index;
+	return IS_ENABLED(CONFIG_HOTPLUG_CPU) && num_online_cpus() > 1 &&
+		!(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED) &&
+		!pr->flags.has_cst;
 }
 
 static int c3_cpu_count;
···
 
 /**
  * acpi_idle_enter_bm - enters C3 with proper BM handling
- * @dev: the target CPU
- * @drv: cpuidle driver containing state data
- * @index: the index of suggested state
- *
- * If BM is detected, the deepest non-C3 idle state is entered instead.
+ * @pr: Target processor
+ * @cx: Target state context
  */
-static int acpi_idle_enter_bm(struct cpuidle_device *dev,
-		struct cpuidle_driver *drv, int index)
+static void acpi_idle_enter_bm(struct acpi_processor *pr,
+			       struct acpi_processor_cx *cx)
 {
-	struct acpi_processor *pr;
-	struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
-
-	pr = __this_cpu_read(processors);
-
-	if (unlikely(!pr))
-		return -EINVAL;
-
-#ifdef CONFIG_HOTPLUG_CPU
-	if ((cx->type != ACPI_STATE_C1) && (num_online_cpus() > 1) &&
-	    !pr->flags.has_cst &&
-	    !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED))
-		return acpi_idle_enter_c1(dev, drv, CPUIDLE_DRIVER_STATE_START);
-#endif
-
-	if (!cx->bm_sts_skip && acpi_idle_bm_check()) {
-		if (drv->safe_state_index >= 0) {
-			return drv->states[drv->safe_state_index].enter(dev,
-						drv, drv->safe_state_index);
-		} else {
-			acpi_safe_halt();
-			return -EBUSY;
-		}
-	}
-
 	acpi_unlazy_tlb(smp_processor_id());
 
-	/* Tell the scheduler that we are going deep-idle: */
-	sched_clock_idle_sleep_event();
 	/*
 	 * Must be done before busmaster disable as we might need to
 	 * access HPET !
···
 	/*
 	 * disable bus master
 	 * bm_check implies we need ARB_DIS
-	 * !bm_check implies we need cache flush
 	 * bm_control implies whether we can do ARB_DIS
 	 *
 	 * That leaves a case where bm_check is set and bm_control is
 	 * not set. In that case we cannot do much, we enter C3
 	 * without doing anything.
 	 */
-	if (pr->flags.bm_check && pr->flags.bm_control) {
+	if (pr->flags.bm_control) {
 		raw_spin_lock(&c3_lock);
 		c3_cpu_count++;
 		/* Disable bus master arbitration when all CPUs are in C3 */
 		if (c3_cpu_count == num_online_cpus())
 			acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 1);
 		raw_spin_unlock(&c3_lock);
-	} else if (!pr->flags.bm_check) {
-		ACPI_FLUSH_CPU_CACHE();
 	}
 
 	acpi_idle_do_entry(cx);
 
 	/* Re-enable bus master arbitration */
-	if (pr->flags.bm_check && pr->flags.bm_control) {
+	if (pr->flags.bm_control) {
 		raw_spin_lock(&c3_lock);
 		acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 0);
 		c3_cpu_count--;
 		raw_spin_unlock(&c3_lock);
 	}
 
-	sched_clock_idle_wakeup_event(0);
+	lapic_timer_state_broadcast(pr, cx, 0);
+}
+
+static int acpi_idle_enter(struct cpuidle_device *dev,
+			   struct cpuidle_driver *drv, int index)
+{
+	struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
+	struct acpi_processor *pr;
+
+	pr = __this_cpu_read(processors);
+	if (unlikely(!pr))
+		return -EINVAL;
+
+	if (cx->type != ACPI_STATE_C1) {
+		if (acpi_idle_fallback_to_c1(pr)) {
+			index = CPUIDLE_DRIVER_STATE_START;
+			cx = per_cpu(acpi_cstate[index], dev->cpu);
+		} else if (cx->type == ACPI_STATE_C3 && pr->flags.bm_check) {
+			if (cx->bm_sts_skip || !acpi_idle_bm_check()) {
+				acpi_idle_enter_bm(pr, cx);
+				return index;
+			} else if (drv->safe_state_index >= 0) {
+				index = drv->safe_state_index;
+				cx = per_cpu(acpi_cstate[index], dev->cpu);
+			} else {
+				acpi_safe_halt();
+				return -EBUSY;
+			}
+		}
+	}
+
+	lapic_timer_state_broadcast(pr, cx, 1);
+
+	if (cx->type == ACPI_STATE_C3)
+		ACPI_FLUSH_CPU_CACHE();
+
+	acpi_idle_do_entry(cx);
 
 	lapic_timer_state_broadcast(pr, cx, 0);
+
 	return index;
 }
 
···
 		strncpy(state->desc, cx->desc, CPUIDLE_DESC_LEN);
 		state->exit_latency = cx->latency;
 		state->target_residency = cx->latency * latency_factor;
+		state->enter = acpi_idle_enter;
 
 		state->flags = 0;
-		switch (cx->type) {
-		case ACPI_STATE_C1:
-
-			state->enter = acpi_idle_enter_c1;
+		if (cx->type == ACPI_STATE_C1 || cx->type == ACPI_STATE_C2) {
 			state->enter_dead = acpi_idle_play_dead;
 			drv->safe_state_index = count;
-			break;
-
-		case ACPI_STATE_C2:
-			state->enter = acpi_idle_enter_simple;
-			state->enter_dead = acpi_idle_play_dead;
-			drv->safe_state_index = count;
-			break;
-
-		case ACPI_STATE_C3:
-			state->enter = pr->flags.bm_check ?
-				acpi_idle_enter_bm :
-				acpi_idle_enter_simple;
-			break;
 		}
 
 		count++;
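The processor_idle.c rework collapses three `->enter` callbacks into one `acpi_idle_enter()` whose decision ladder picks the actual path: demote to C1 on multiprocessor systems without a safe C2+, take the bus-master path for C3 when `bm_check` is set, or fall back to the safe state (or a plain halt) when bus-master activity is detected. A decision-only model of that ladder, with the entry side effects stripped out and `bm_sts_skip` folded into one "activity" flag (all names illustrative):

```c
#include <assert.h>

enum path { ENTER_AS_IS, FALLBACK_C1, ENTER_BM, SAFE_STATE, SAFE_HALT };

struct idle_request {
	int type;		/* 1 = C1, 2 = C2, 3 = C3 */
	int many_cpus;		/* num_online_cpus() > 1 */
	int c2_mp_supported;	/* FADT says C2+ is MP-safe */
	int has_cst;		/* _CST provided the states */
	int bm_check;		/* bus-master status must be consulted */
	int bm_activity;	/* bus-master traffic seen (BM_STS) */
	int have_safe_state;	/* drv->safe_state_index >= 0 */
};

/* Mirrors the control flow of the unified acpi_idle_enter(). */
static enum path pick_path(const struct idle_request *r)
{
	if (r->type == 1)
		return ENTER_AS_IS;	/* C1 needs no special handling */
	if (r->many_cpus && !r->c2_mp_supported && !r->has_cst)
		return FALLBACK_C1;	/* deep states unsafe on this MP box */
	if (r->type == 3 && r->bm_check) {
		if (!r->bm_activity)
			return ENTER_BM;	/* C3 with ARB_DIS dance */
		if (r->have_safe_state)
			return SAFE_STATE;	/* demote to the safe state */
		return SAFE_HALT;		/* nothing safer than HLT */
	}
	return ENTER_AS_IS;
}
```

Keeping the ladder in one function is what lets the registration loop set `state->enter = acpi_idle_enter` unconditionally instead of switching on the C-state type.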
+220 -139
drivers/acpi/resource.c
···
 #define valid_IRQ(i) (true)
 #endif
 
-static unsigned long acpi_dev_memresource_flags(u64 len, u8 write_protect,
-						bool window)
+static bool acpi_dev_resource_len_valid(u64 start, u64 end, u64 len, bool io)
 {
-	unsigned long flags = IORESOURCE_MEM;
+	u64 reslen = end - start + 1;
 
-	if (len == 0)
-		flags |= IORESOURCE_DISABLED;
+	/*
+	 * CHECKME: len might be required to check versus a minimum
+	 * length as well. 1 for io is fine, but for memory it does
+	 * not make any sense at all.
+	 */
+	if (len && reslen && reslen == len && start <= end)
+		return true;
+
+	pr_info("ACPI: invalid or unassigned resource %s [%016llx - %016llx] length [%016llx]\n",
+		io ? "io" : "mem", start, end, len);
+
+	return false;
+}
+
+static void acpi_dev_memresource_flags(struct resource *res, u64 len,
+				       u8 write_protect)
+{
+	res->flags = IORESOURCE_MEM;
+
+	if (!acpi_dev_resource_len_valid(res->start, res->end, len, false))
+		res->flags |= IORESOURCE_DISABLED | IORESOURCE_UNSET;
 
 	if (write_protect == ACPI_READ_WRITE_MEMORY)
-		flags |= IORESOURCE_MEM_WRITEABLE;
-
-	if (window)
-		flags |= IORESOURCE_WINDOW;
-
-	return flags;
+		res->flags |= IORESOURCE_MEM_WRITEABLE;
 }
 
 static void acpi_dev_get_memresource(struct resource *res, u64 start, u64 len,
···
 {
 	res->start = start;
 	res->end = start + len - 1;
-	res->flags = acpi_dev_memresource_flags(len, write_protect, false);
+	acpi_dev_memresource_flags(res, len, write_protect);
 }
 
 /**
···
  * Check if the given ACPI resource object represents a memory resource and
  * if that's the case, use the information in it to populate the generic
  * resource object pointed to by @res.
+ *
+ * Return:
+ * 1) false with res->flags setting to zero: not the expected resource type
+ * 2) false with IORESOURCE_DISABLED in res->flags: valid unassigned resource
+ * 3) true: valid assigned resource
  */
 bool acpi_dev_resource_memory(struct acpi_resource *ares, struct resource *res)
 {
···
 	switch (ares->type) {
 	case ACPI_RESOURCE_TYPE_MEMORY24:
 		memory24 = &ares->data.memory24;
-		if (!memory24->minimum && !memory24->address_length)
-			return false;
-		acpi_dev_get_memresource(res, memory24->minimum,
-					 memory24->address_length,
+		acpi_dev_get_memresource(res, memory24->minimum << 8,
+					 memory24->address_length << 8,
 					 memory24->write_protect);
 		break;
 	case ACPI_RESOURCE_TYPE_MEMORY32:
 		memory32 = &ares->data.memory32;
-		if (!memory32->minimum && !memory32->address_length)
-			return false;
 		acpi_dev_get_memresource(res, memory32->minimum,
 					 memory32->address_length,
 					 memory32->write_protect);
 		break;
 	case ACPI_RESOURCE_TYPE_FIXED_MEMORY32:
 		fixed_memory32 = &ares->data.fixed_memory32;
-		if (!fixed_memory32->address && !fixed_memory32->address_length)
-			return false;
 		acpi_dev_get_memresource(res, fixed_memory32->address,
 					 fixed_memory32->address_length,
 					 fixed_memory32->write_protect);
 		break;
 	default:
+		res->flags = 0;
 		return false;
 	}
-	return true;
+
+	return !(res->flags & IORESOURCE_DISABLED);
 }
 EXPORT_SYMBOL_GPL(acpi_dev_resource_memory);
 
-static unsigned int acpi_dev_ioresource_flags(u64 start, u64 end, u8 io_decode,
-					      bool window)
+static void acpi_dev_ioresource_flags(struct resource *res, u64 len,
+				      u8 io_decode)
 {
-	int flags = IORESOURCE_IO;
+	res->flags = IORESOURCE_IO;
+
+	if (!acpi_dev_resource_len_valid(res->start, res->end, len, true))
+		res->flags |= IORESOURCE_DISABLED | IORESOURCE_UNSET;
+
+	if (res->end >= 0x10003)
+		res->flags |= IORESOURCE_DISABLED | IORESOURCE_UNSET;
 
 	if (io_decode == ACPI_DECODE_16)
-		flags |= IORESOURCE_IO_16BIT_ADDR;
-
-	if (start > end || end >= 0x10003)
-		flags |= IORESOURCE_DISABLED;
-
-	if (window)
-		flags |= IORESOURCE_WINDOW;
-
-	return flags;
+		res->flags |= IORESOURCE_IO_16BIT_ADDR;
 }
 
 static void acpi_dev_get_ioresource(struct resource *res, u64 start, u64 len,
 				    u8 io_decode)
 {
-	u64 end = start + len - 1;
-
 	res->start = start;
-	res->end = end;
-	res->flags = acpi_dev_ioresource_flags(start, end, io_decode, false);
+	res->end = start + len - 1;
+	acpi_dev_ioresource_flags(res, len, io_decode);
 }
 
 /**
···
  * Check if the given ACPI resource object represents an I/O resource and
  * if that's the case, use the information in it to populate the generic
  * resource object pointed to by @res.
+ *
+ * Return:
+ * 1) false with res->flags setting to zero: not the expected resource type
+ * 2) false with IORESOURCE_DISABLED in res->flags: valid unassigned resource
+ * 3) true: valid assigned resource
  */
 bool acpi_dev_resource_io(struct acpi_resource *ares, struct resource *res)
 {
···
 	switch (ares->type) {
 	case ACPI_RESOURCE_TYPE_IO:
 		io = &ares->data.io;
-		if (!io->minimum && !io->address_length)
-			return false;
 		acpi_dev_get_ioresource(res, io->minimum,
 					io->address_length,
 					io->io_decode);
 		break;
 	case ACPI_RESOURCE_TYPE_FIXED_IO:
 		fixed_io = &ares->data.fixed_io;
-		if (!fixed_io->address && !fixed_io->address_length)
-			return false;
 		acpi_dev_get_ioresource(res, fixed_io->address,
 					fixed_io->address_length,
 					ACPI_DECODE_10);
 		break;
 	default:
+		res->flags = 0;
 		return false;
 	}
-	return true;
+
+	return !(res->flags & IORESOURCE_DISABLED);
 }
 EXPORT_SYMBOL_GPL(acpi_dev_resource_io);
 
-/**
- * acpi_dev_resource_address_space - Extract ACPI address space information.
- * @ares: Input ACPI resource object.
- * @res: Output generic resource object.
- *
- * Check if the given ACPI resource object represents an address space resource
- * and if that's the case, use the information in it to populate the generic
- * resource object pointed to by @res.
- */
-bool acpi_dev_resource_address_space(struct acpi_resource *ares,
-				     struct resource *res)
+static bool acpi_decode_space(struct resource_win *win,
+			      struct acpi_resource_address *addr,
+			      struct acpi_address64_attribute *attr)
 {
-	acpi_status status;
-	struct acpi_resource_address64 addr;
-	bool window;
-	u64 len;
-	u8 io_decode;
+	u8 iodec = attr->granularity == 0xfff ? ACPI_DECODE_10 : ACPI_DECODE_16;
+	bool wp = addr->info.mem.write_protect;
+	u64 len = attr->address_length;
+	struct resource *res = &win->res;
 
-	switch (ares->type) {
-	case ACPI_RESOURCE_TYPE_ADDRESS16:
-	case ACPI_RESOURCE_TYPE_ADDRESS32:
-	case ACPI_RESOURCE_TYPE_ADDRESS64:
-		break;
-	default:
-		return false;
+	/*
+	 * Filter out invalid descriptor according to ACPI Spec 5.0, section
+	 * 6.4.3.5 Address Space Resource Descriptors.
+	 */
+	if ((addr->min_address_fixed != addr->max_address_fixed && len) ||
+	    (addr->min_address_fixed && addr->max_address_fixed && !len))
+		pr_debug("ACPI: Invalid address space min_addr_fix %d, max_addr_fix %d, len %llx\n",
+			 addr->min_address_fixed, addr->max_address_fixed, len);
+
+	res->start = attr->minimum;
+	res->end = attr->maximum;
+
+	/*
+	 * For bridges that translate addresses across the bridge,
+	 * translation_offset is the offset that must be added to the
+	 * address on the secondary side to obtain the address on the
+	 * primary side. Non-bridge devices must list 0 for all Address
+	 * Translation offset bits.
+	 */
+	if (addr->producer_consumer == ACPI_PRODUCER) {
+		res->start += attr->translation_offset;
+		res->end += attr->translation_offset;
+	} else if (attr->translation_offset) {
+		pr_debug("ACPI: translation_offset(%lld) is invalid for non-bridge device.\n",
+			 attr->translation_offset);
 	}
 
-	status = acpi_resource_to_address64(ares, &addr);
-	if (ACPI_FAILURE(status))
-		return false;
-
-	res->start = addr.minimum;
-	res->end = addr.maximum;
-	window = addr.producer_consumer == ACPI_PRODUCER;
-
-	switch(addr.resource_type) {
+	switch (addr->resource_type) {
 	case ACPI_MEMORY_RANGE:
-		len = addr.maximum - addr.minimum + 1;
-		res->flags = acpi_dev_memresource_flags(len,
-						addr.info.mem.write_protect,
-						window);
+		acpi_dev_memresource_flags(res, len, wp);
 		break;
 	case ACPI_IO_RANGE:
-		io_decode = addr.granularity == 0xfff ?
-				ACPI_DECODE_10 : ACPI_DECODE_16;
-		res->flags = acpi_dev_ioresource_flags(addr.minimum,
-						       addr.maximum,
-						       io_decode, window);
+		acpi_dev_ioresource_flags(res, len, iodec);
 		break;
 	case ACPI_BUS_NUMBER_RANGE:
 		res->flags = IORESOURCE_BUS;
 		break;
 	default:
-		res->flags = 0;
+		return false;
 	}
 
-	return true;
+	win->offset = attr->translation_offset;
+
+	if (addr->producer_consumer == ACPI_PRODUCER)
+		res->flags |= IORESOURCE_WINDOW;
+
+	if (addr->info.mem.caching == ACPI_PREFETCHABLE_MEMORY)
+		res->flags |= IORESOURCE_PREFETCH;
+
+	return !(res->flags & IORESOURCE_DISABLED);
+}
+
+/**
+ * acpi_dev_resource_address_space - Extract ACPI address space information.
+ * @ares: Input ACPI resource object.
+ * @win: Output generic resource object.
+ *
+ * Check if the given ACPI resource object represents an address space resource
+ * and if that's the case, use the information in it to populate the generic
+ * resource object pointed to by @win.
+ *
+ * Return:
+ * 1) false with win->res.flags setting to zero: not the expected resource type
+ * 2) false with IORESOURCE_DISABLED in win->res.flags: valid unassigned
+ *    resource
+ * 3) true: valid assigned resource
+ */
+bool acpi_dev_resource_address_space(struct acpi_resource *ares,
+				     struct resource_win *win)
+{
+	struct acpi_resource_address64 addr;
+
+	win->res.flags = 0;
+	if (ACPI_FAILURE(acpi_resource_to_address64(ares, &addr)))
+		return false;
+
+	return acpi_decode_space(win, (struct acpi_resource_address *)&addr,
+				 &addr.address);
 }
 EXPORT_SYMBOL_GPL(acpi_dev_resource_address_space);
 
 /**
  * acpi_dev_resource_ext_address_space - Extract ACPI address space information.
  * @ares: Input ACPI resource object.
- * @res: Output generic resource object.
+ * @win: Output generic resource object.
  *
  * Check if the given ACPI resource object represents an extended address space
  * resource and if that's the case, use the information in it to populate the
270 + * 271 + * Return: 272 + * 1) false with win->res.flags setting to zero: not the expected resource type 273 + * 2) false with IORESOURCE_DISABLED in win->res.flags: valid unassigned 274 + * resource 275 + * 3) true: valid assigned resource 257 276 */ 258 277 bool acpi_dev_resource_ext_address_space(struct acpi_resource *ares, 259 - struct resource *res) 278 + struct resource_win *win) 260 279 { 261 280 struct acpi_resource_extended_address64 *ext_addr; 262 - bool window; 263 - u64 len; 264 - u8 io_decode; 265 281 282 + win->res.flags = 0; 266 283 if (ares->type != ACPI_RESOURCE_TYPE_EXTENDED_ADDRESS64) 267 284 return false; 268 285 269 286 ext_addr = &ares->data.ext_address64; 270 287 271 - res->start = ext_addr->minimum; 272 - res->end = ext_addr->maximum; 273 - window = ext_addr->producer_consumer == ACPI_PRODUCER; 274 - 275 - switch(ext_addr->resource_type) { 276 - case ACPI_MEMORY_RANGE: 277 - len = ext_addr->maximum - ext_addr->minimum + 1; 278 - res->flags = acpi_dev_memresource_flags(len, 279 - ext_addr->info.mem.write_protect, 280 - window); 281 - break; 282 - case ACPI_IO_RANGE: 283 - io_decode = ext_addr->granularity == 0xfff ? 
284 - ACPI_DECODE_10 : ACPI_DECODE_16; 285 - res->flags = acpi_dev_ioresource_flags(ext_addr->minimum, 286 - ext_addr->maximum, 287 - io_decode, window); 288 - break; 289 - case ACPI_BUS_NUMBER_RANGE: 290 - res->flags = IORESOURCE_BUS; 291 - break; 292 - default: 293 - res->flags = 0; 294 - } 295 - 296 - return true; 288 + return acpi_decode_space(win, (struct acpi_resource_address *)ext_addr, 289 + &ext_addr->address); 297 290 } 298 291 EXPORT_SYMBOL_GPL(acpi_dev_resource_ext_address_space); 299 292 ··· 333 310 { 334 311 res->start = gsi; 335 312 res->end = gsi; 336 - res->flags = IORESOURCE_IRQ | IORESOURCE_DISABLED; 313 + res->flags = IORESOURCE_IRQ | IORESOURCE_DISABLED | IORESOURCE_UNSET; 337 314 } 338 315 339 316 static void acpi_dev_get_irqresource(struct resource *res, u32 gsi, ··· 392 369 * represented by the resource and populate the generic resource object pointed 393 370 * to by @res accordingly. If the registration of the GSI is not successful, 394 371 * IORESOURCE_DISABLED will be set it that object's flags. 
372 + * 373 + * Return: 374 + * 1) false with res->flags setting to zero: not the expected resource type 375 + * 2) false with IORESOURCE_DISABLED in res->flags: valid unassigned resource 376 + * 3) true: valid assigned resource 395 377 */ 396 378 bool acpi_dev_resource_interrupt(struct acpi_resource *ares, int index, 397 379 struct resource *res) ··· 430 402 ext_irq->sharable, false); 431 403 break; 432 404 default: 405 + res->flags = 0; 433 406 return false; 434 407 } 435 408 ··· 444 415 */ 445 416 void acpi_dev_free_resource_list(struct list_head *list) 446 417 { 447 - struct resource_list_entry *rentry, *re; 448 - 449 - list_for_each_entry_safe(rentry, re, list, node) { 450 - list_del(&rentry->node); 451 - kfree(rentry); 452 - } 418 + resource_list_free(list); 453 419 } 454 420 EXPORT_SYMBOL_GPL(acpi_dev_free_resource_list); 455 421 ··· 456 432 int error; 457 433 }; 458 434 459 - static acpi_status acpi_dev_new_resource_entry(struct resource *r, 435 + static acpi_status acpi_dev_new_resource_entry(struct resource_win *win, 460 436 struct res_proc_context *c) 461 437 { 462 - struct resource_list_entry *rentry; 438 + struct resource_entry *rentry; 463 439 464 - rentry = kmalloc(sizeof(*rentry), GFP_KERNEL); 440 + rentry = resource_list_create_entry(NULL, 0); 465 441 if (!rentry) { 466 442 c->error = -ENOMEM; 467 443 return AE_NO_MEMORY; 468 444 } 469 - rentry->res = *r; 470 - list_add_tail(&rentry->node, c->list); 445 + *rentry->res = win->res; 446 + rentry->offset = win->offset; 447 + resource_list_add_tail(rentry, c->list); 471 448 c->count++; 472 449 return AE_OK; 473 450 } ··· 477 452 void *context) 478 453 { 479 454 struct res_proc_context *c = context; 480 - struct resource r; 455 + struct resource_win win; 456 + struct resource *res = &win.res; 481 457 int i; 482 458 483 459 if (c->preproc) { ··· 493 467 } 494 468 } 495 469 496 - memset(&r, 0, sizeof(r)); 470 + memset(&win, 0, sizeof(win)); 497 471 498 - if (acpi_dev_resource_memory(ares, &r) 499 - || 
acpi_dev_resource_io(ares, &r) 500 - || acpi_dev_resource_address_space(ares, &r) 501 - || acpi_dev_resource_ext_address_space(ares, &r)) 502 - return acpi_dev_new_resource_entry(&r, c); 472 + if (acpi_dev_resource_memory(ares, res) 473 + || acpi_dev_resource_io(ares, res) 474 + || acpi_dev_resource_address_space(ares, &win) 475 + || acpi_dev_resource_ext_address_space(ares, &win)) 476 + return acpi_dev_new_resource_entry(&win, c); 503 477 504 - for (i = 0; acpi_dev_resource_interrupt(ares, i, &r); i++) { 478 + for (i = 0; acpi_dev_resource_interrupt(ares, i, res); i++) { 505 479 acpi_status status; 506 480 507 - status = acpi_dev_new_resource_entry(&r, c); 481 + status = acpi_dev_new_resource_entry(&win, c); 508 482 if (ACPI_FAILURE(status)) 509 483 return status; 510 484 } ··· 529 503 * returned as the final error code. 530 504 * 531 505 * The resultant struct resource objects are put on the list pointed to by 532 - * @list, that must be empty initially, as members of struct resource_list_entry 506 + * @list, that must be empty initially, as members of struct resource_entry 533 507 * objects. Callers of this routine should use %acpi_dev_free_resource_list() to 534 508 * free that list. 535 509 * ··· 564 538 return c.count; 565 539 } 566 540 EXPORT_SYMBOL_GPL(acpi_dev_get_resources); 541 + 542 + /** 543 + * acpi_dev_filter_resource_type - Filter ACPI resource according to resource 544 + * types 545 + * @ares: Input ACPI resource object. 546 + * @types: Valid resource types of IORESOURCE_XXX 547 + * 548 + * This is a hepler function to support acpi_dev_get_resources(), which filters 549 + * ACPI resource objects according to resource types. 
550 + */ 551 + int acpi_dev_filter_resource_type(struct acpi_resource *ares, 552 + unsigned long types) 553 + { 554 + unsigned long type = 0; 555 + 556 + switch (ares->type) { 557 + case ACPI_RESOURCE_TYPE_MEMORY24: 558 + case ACPI_RESOURCE_TYPE_MEMORY32: 559 + case ACPI_RESOURCE_TYPE_FIXED_MEMORY32: 560 + type = IORESOURCE_MEM; 561 + break; 562 + case ACPI_RESOURCE_TYPE_IO: 563 + case ACPI_RESOURCE_TYPE_FIXED_IO: 564 + type = IORESOURCE_IO; 565 + break; 566 + case ACPI_RESOURCE_TYPE_IRQ: 567 + case ACPI_RESOURCE_TYPE_EXTENDED_IRQ: 568 + type = IORESOURCE_IRQ; 569 + break; 570 + case ACPI_RESOURCE_TYPE_DMA: 571 + case ACPI_RESOURCE_TYPE_FIXED_DMA: 572 + type = IORESOURCE_DMA; 573 + break; 574 + case ACPI_RESOURCE_TYPE_GENERIC_REGISTER: 575 + type = IORESOURCE_REG; 576 + break; 577 + case ACPI_RESOURCE_TYPE_ADDRESS16: 578 + case ACPI_RESOURCE_TYPE_ADDRESS32: 579 + case ACPI_RESOURCE_TYPE_ADDRESS64: 580 + case ACPI_RESOURCE_TYPE_EXTENDED_ADDRESS64: 581 + if (ares->data.address.resource_type == ACPI_MEMORY_RANGE) 582 + type = IORESOURCE_MEM; 583 + else if (ares->data.address.resource_type == ACPI_IO_RANGE) 584 + type = IORESOURCE_IO; 585 + else if (ares->data.address.resource_type == 586 + ACPI_BUS_NUMBER_RANGE) 587 + type = IORESOURCE_BUS; 588 + break; 589 + default: 590 + break; 591 + } 592 + 593 + return (type & types) ? 0 : 1; 594 + } 595 + EXPORT_SYMBOL_GPL(acpi_dev_filter_resource_type);
drivers/acpi/scan.c (+1)
··· 2544 2544 acpi_pci_link_init(); 2545 2545 acpi_processor_init(); 2546 2546 acpi_lpss_init(); 2547 + acpi_apd_init(); 2547 2548 acpi_cmos_rtc_init(); 2548 2549 acpi_container_init(); 2549 2550 acpi_memory_hotplug_init();
drivers/acpi/sleep.c (+1, -1)
··· 321 321 {}, 322 322 }; 323 323 324 - static void acpi_sleep_dmi_check(void) 324 + static void __init acpi_sleep_dmi_check(void) 325 325 { 326 326 int year; 327 327
drivers/acpi/video.c (+18)
··· 522 522 DMI_MATCH(DMI_PRODUCT_NAME, "370R4E/370R4V/370R5E/3570RE/370R5V"), 523 523 }, 524 524 }, 525 + { 526 + /* https://bugzilla.redhat.com/show_bug.cgi?id=1186097 */ 527 + .callback = video_disable_native_backlight, 528 + .ident = "SAMSUNG 3570R/370R/470R/450R/510R/4450RV", 529 + .matches = { 530 + DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."), 531 + DMI_MATCH(DMI_PRODUCT_NAME, "3570R/370R/470R/450R/510R/4450RV"), 532 + }, 533 + }, 534 + { 535 + /* https://bugzilla.redhat.com/show_bug.cgi?id=1094948 */ 536 + .callback = video_disable_native_backlight, 537 + .ident = "SAMSUNG 730U3E/740U3E", 538 + .matches = { 539 + DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."), 540 + DMI_MATCH(DMI_PRODUCT_NAME, "730U3E/740U3E"), 541 + }, 542 + }, 525 543 526 544 { 527 545 /* https://bugzilla.redhat.com/show_bug.cgi?id=1163574 */
drivers/base/power/common.c (+6, -12)
··· 19 19 * @dev: Device to handle. 20 20 * 21 21 * If power.subsys_data is NULL, point it to a new object, otherwise increment 22 - * its reference counter. Return 1 if a new object has been created, otherwise 23 - * return 0 or error code. 22 + * its reference counter. Return 0 if new object has been created or refcount 23 + * increased, otherwise negative error code. 24 24 */ 25 25 int dev_pm_get_subsys_data(struct device *dev) 26 26 { ··· 56 56 * @dev: Device to handle. 57 57 * 58 58 * If the reference counter of power.subsys_data is zero after dropping the 59 - * reference, power.subsys_data is removed. Return 1 if that happens or 0 60 - * otherwise. 59 + * reference, power.subsys_data is removed. 61 60 */ 62 - int dev_pm_put_subsys_data(struct device *dev) 61 + void dev_pm_put_subsys_data(struct device *dev) 63 62 { 64 63 struct pm_subsys_data *psd; 65 - int ret = 1; 66 64 67 65 spin_lock_irq(&dev->power.lock); 68 66 ··· 68 70 if (!psd) 69 71 goto out; 70 72 71 - if (--psd->refcount == 0) { 73 + if (--psd->refcount == 0) 72 74 dev->power.subsys_data = NULL; 73 - } else { 75 + else 74 76 psd = NULL; 75 - ret = 0; 76 - } 77 77 78 78 out: 79 79 spin_unlock_irq(&dev->power.lock); 80 80 kfree(psd); 81 - 82 - return ret; 83 81 } 84 82 EXPORT_SYMBOL_GPL(dev_pm_put_subsys_data); 85 83
drivers/base/power/domain.c (+65, -92)
··· 344 344 struct device *dev; 345 345 346 346 gpd_data = container_of(nb, struct generic_pm_domain_data, nb); 347 - 348 - mutex_lock(&gpd_data->lock); 349 347 dev = gpd_data->base.dev; 350 - if (!dev) { 351 - mutex_unlock(&gpd_data->lock); 352 - return NOTIFY_DONE; 353 - } 354 - mutex_unlock(&gpd_data->lock); 355 348 356 349 for (;;) { 357 350 struct generic_pm_domain *genpd; ··· 1377 1384 1378 1385 #endif /* CONFIG_PM_SLEEP */ 1379 1386 1380 - static struct generic_pm_domain_data *__pm_genpd_alloc_dev_data(struct device *dev) 1387 + static struct generic_pm_domain_data *genpd_alloc_dev_data(struct device *dev, 1388 + struct generic_pm_domain *genpd, 1389 + struct gpd_timing_data *td) 1381 1390 { 1382 1391 struct generic_pm_domain_data *gpd_data; 1392 + int ret; 1393 + 1394 + ret = dev_pm_get_subsys_data(dev); 1395 + if (ret) 1396 + return ERR_PTR(ret); 1383 1397 1384 1398 gpd_data = kzalloc(sizeof(*gpd_data), GFP_KERNEL); 1385 - if (!gpd_data) 1386 - return NULL; 1399 + if (!gpd_data) { 1400 + ret = -ENOMEM; 1401 + goto err_put; 1402 + } 1387 1403 1388 - mutex_init(&gpd_data->lock); 1404 + if (td) 1405 + gpd_data->td = *td; 1406 + 1407 + gpd_data->base.dev = dev; 1408 + gpd_data->need_restore = -1; 1409 + gpd_data->td.constraint_changed = true; 1410 + gpd_data->td.effective_constraint_ns = -1; 1389 1411 gpd_data->nb.notifier_call = genpd_dev_pm_qos_notifier; 1390 - dev_pm_qos_add_notifier(dev, &gpd_data->nb); 1412 + 1413 + spin_lock_irq(&dev->power.lock); 1414 + 1415 + if (dev->power.subsys_data->domain_data) { 1416 + ret = -EINVAL; 1417 + goto err_free; 1418 + } 1419 + 1420 + dev->power.subsys_data->domain_data = &gpd_data->base; 1421 + dev->pm_domain = &genpd->domain; 1422 + 1423 + spin_unlock_irq(&dev->power.lock); 1424 + 1391 1425 return gpd_data; 1426 + 1427 + err_free: 1428 + spin_unlock_irq(&dev->power.lock); 1429 + kfree(gpd_data); 1430 + err_put: 1431 + dev_pm_put_subsys_data(dev); 1432 + return ERR_PTR(ret); 1392 1433 } 1393 1434 1394 - static void 
__pm_genpd_free_dev_data(struct device *dev, 1395 - struct generic_pm_domain_data *gpd_data) 1435 + static void genpd_free_dev_data(struct device *dev, 1436 + struct generic_pm_domain_data *gpd_data) 1396 1437 { 1397 - dev_pm_qos_remove_notifier(dev, &gpd_data->nb); 1438 + spin_lock_irq(&dev->power.lock); 1439 + 1440 + dev->pm_domain = NULL; 1441 + dev->power.subsys_data->domain_data = NULL; 1442 + 1443 + spin_unlock_irq(&dev->power.lock); 1444 + 1398 1445 kfree(gpd_data); 1446 + dev_pm_put_subsys_data(dev); 1399 1447 } 1400 1448 1401 1449 /** ··· 1448 1414 int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev, 1449 1415 struct gpd_timing_data *td) 1450 1416 { 1451 - struct generic_pm_domain_data *gpd_data_new, *gpd_data = NULL; 1452 - struct pm_domain_data *pdd; 1417 + struct generic_pm_domain_data *gpd_data; 1453 1418 int ret = 0; 1454 1419 1455 1420 dev_dbg(dev, "%s()\n", __func__); ··· 1456 1423 if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(dev)) 1457 1424 return -EINVAL; 1458 1425 1459 - gpd_data_new = __pm_genpd_alloc_dev_data(dev); 1460 - if (!gpd_data_new) 1461 - return -ENOMEM; 1426 + gpd_data = genpd_alloc_dev_data(dev, genpd, td); 1427 + if (IS_ERR(gpd_data)) 1428 + return PTR_ERR(gpd_data); 1462 1429 1463 1430 genpd_acquire_lock(genpd); 1464 1431 ··· 1467 1434 goto out; 1468 1435 } 1469 1436 1470 - list_for_each_entry(pdd, &genpd->dev_list, list_node) 1471 - if (pdd->dev == dev) { 1472 - ret = -EINVAL; 1473 - goto out; 1474 - } 1475 - 1476 - ret = dev_pm_get_subsys_data(dev); 1437 + ret = genpd->attach_dev ? 
genpd->attach_dev(genpd, dev) : 0; 1477 1438 if (ret) 1478 1439 goto out; 1479 1440 1480 1441 genpd->device_count++; 1481 1442 genpd->max_off_time_changed = true; 1482 1443 1483 - spin_lock_irq(&dev->power.lock); 1484 - 1485 - dev->pm_domain = &genpd->domain; 1486 - if (dev->power.subsys_data->domain_data) { 1487 - gpd_data = to_gpd_data(dev->power.subsys_data->domain_data); 1488 - } else { 1489 - gpd_data = gpd_data_new; 1490 - dev->power.subsys_data->domain_data = &gpd_data->base; 1491 - } 1492 - gpd_data->refcount++; 1493 - if (td) 1494 - gpd_data->td = *td; 1495 - 1496 - spin_unlock_irq(&dev->power.lock); 1497 - 1498 - if (genpd->attach_dev) 1499 - genpd->attach_dev(genpd, dev); 1500 - 1501 - mutex_lock(&gpd_data->lock); 1502 - gpd_data->base.dev = dev; 1503 1444 list_add_tail(&gpd_data->base.list_node, &genpd->dev_list); 1504 - gpd_data->need_restore = -1; 1505 - gpd_data->td.constraint_changed = true; 1506 - gpd_data->td.effective_constraint_ns = -1; 1507 - mutex_unlock(&gpd_data->lock); 1508 1445 1509 1446 out: 1510 1447 genpd_release_lock(genpd); 1511 1448 1512 - if (gpd_data != gpd_data_new) 1513 - __pm_genpd_free_dev_data(dev, gpd_data_new); 1449 + if (ret) 1450 + genpd_free_dev_data(dev, gpd_data); 1451 + else 1452 + dev_pm_qos_add_notifier(dev, &gpd_data->nb); 1514 1453 1515 1454 return ret; 1516 1455 } ··· 1509 1504 { 1510 1505 struct generic_pm_domain_data *gpd_data; 1511 1506 struct pm_domain_data *pdd; 1512 - bool remove = false; 1513 1507 int ret = 0; 1514 1508 1515 1509 dev_dbg(dev, "%s()\n", __func__); ··· 1517 1513 || IS_ERR_OR_NULL(dev->pm_domain) 1518 1514 || pd_to_genpd(dev->pm_domain) != genpd) 1519 1515 return -EINVAL; 1516 + 1517 + /* The above validation also means we have existing domain_data. 
*/ 1518 + pdd = dev->power.subsys_data->domain_data; 1519 + gpd_data = to_gpd_data(pdd); 1520 + dev_pm_qos_remove_notifier(dev, &gpd_data->nb); 1520 1521 1521 1522 genpd_acquire_lock(genpd); 1522 1523 ··· 1536 1527 if (genpd->detach_dev) 1537 1528 genpd->detach_dev(genpd, dev); 1538 1529 1539 - spin_lock_irq(&dev->power.lock); 1540 - 1541 - dev->pm_domain = NULL; 1542 - pdd = dev->power.subsys_data->domain_data; 1543 1530 list_del_init(&pdd->list_node); 1544 - gpd_data = to_gpd_data(pdd); 1545 - if (--gpd_data->refcount == 0) { 1546 - dev->power.subsys_data->domain_data = NULL; 1547 - remove = true; 1548 - } 1549 - 1550 - spin_unlock_irq(&dev->power.lock); 1551 - 1552 - mutex_lock(&gpd_data->lock); 1553 - pdd->dev = NULL; 1554 - mutex_unlock(&gpd_data->lock); 1555 1531 1556 1532 genpd_release_lock(genpd); 1557 1533 1558 - dev_pm_put_subsys_data(dev); 1559 - if (remove) 1560 - __pm_genpd_free_dev_data(dev, gpd_data); 1534 + genpd_free_dev_data(dev, gpd_data); 1561 1535 1562 1536 return 0; 1563 1537 1564 1538 out: 1565 1539 genpd_release_lock(genpd); 1540 + dev_pm_qos_add_notifier(dev, &gpd_data->nb); 1566 1541 1567 1542 return ret; 1568 1543 } 1569 - 1570 - /** 1571 - * pm_genpd_dev_need_restore - Set/unset the device's "need restore" flag. 1572 - * @dev: Device to set/unset the flag for. 1573 - * @val: The new value of the device's "need restore" flag. 1574 - */ 1575 - void pm_genpd_dev_need_restore(struct device *dev, bool val) 1576 - { 1577 - struct pm_subsys_data *psd; 1578 - unsigned long flags; 1579 - 1580 - spin_lock_irqsave(&dev->power.lock, flags); 1581 - 1582 - psd = dev_to_psd(dev); 1583 - if (psd && psd->domain_data) 1584 - to_gpd_data(psd->domain_data)->need_restore = val ? 1 : 0; 1585 - 1586 - spin_unlock_irqrestore(&dev->power.lock, flags); 1587 - } 1588 - EXPORT_SYMBOL_GPL(pm_genpd_dev_need_restore); 1589 1544 1590 1545 /** 1591 1546 * pm_genpd_add_subdomain - Add a subdomain to an I/O PM domain.
drivers/base/power/opp.c (+150, -44)
··· 117 117 } while (0) 118 118 119 119 /** 120 - * find_device_opp() - find device_opp struct using device pointer 120 + * _find_device_opp() - find device_opp struct using device pointer 121 121 * @dev: device pointer used to lookup device OPPs 122 122 * 123 123 * Search list of device OPPs for one containing matching device. Does a RCU 124 124 * reader operation to grab the pointer needed. 125 125 * 126 - * Returns pointer to 'struct device_opp' if found, otherwise -ENODEV or 126 + * Return: pointer to 'struct device_opp' if found, otherwise -ENODEV or 127 127 * -EINVAL based on type of error. 128 128 * 129 129 * Locking: This function must be called under rcu_read_lock(). device_opp 130 130 * is a RCU protected pointer. This means that device_opp is valid as long 131 131 * as we are under RCU lock. 132 132 */ 133 - static struct device_opp *find_device_opp(struct device *dev) 133 + static struct device_opp *_find_device_opp(struct device *dev) 134 134 { 135 135 struct device_opp *tmp_dev_opp, *dev_opp = ERR_PTR(-ENODEV); 136 136 ··· 153 153 * dev_pm_opp_get_voltage() - Gets the voltage corresponding to an available opp 154 154 * @opp: opp for which voltage has to be returned for 155 155 * 156 - * Return voltage in micro volt corresponding to the opp, else 156 + * Return: voltage in micro volt corresponding to the opp, else 157 157 * return 0 158 158 * 159 159 * Locking: This function must be called under rcu_read_lock(). 
opp is a rcu ··· 169 169 struct dev_pm_opp *tmp_opp; 170 170 unsigned long v = 0; 171 171 172 + opp_rcu_lockdep_assert(); 173 + 172 174 tmp_opp = rcu_dereference(opp); 173 175 if (unlikely(IS_ERR_OR_NULL(tmp_opp)) || !tmp_opp->available) 174 176 pr_err("%s: Invalid parameters\n", __func__); ··· 185 183 * dev_pm_opp_get_freq() - Gets the frequency corresponding to an available opp 186 184 * @opp: opp for which frequency has to be returned for 187 185 * 188 - * Return frequency in hertz corresponding to the opp, else 186 + * Return: frequency in hertz corresponding to the opp, else 189 187 * return 0 190 188 * 191 189 * Locking: This function must be called under rcu_read_lock(). opp is a rcu ··· 201 199 struct dev_pm_opp *tmp_opp; 202 200 unsigned long f = 0; 203 201 202 + opp_rcu_lockdep_assert(); 203 + 204 204 tmp_opp = rcu_dereference(opp); 205 205 if (unlikely(IS_ERR_OR_NULL(tmp_opp)) || !tmp_opp->available) 206 206 pr_err("%s: Invalid parameters\n", __func__); ··· 217 213 * dev_pm_opp_get_opp_count() - Get number of opps available in the opp list 218 214 * @dev: device for which we do this operation 219 215 * 220 - * This function returns the number of available opps if there are any, 216 + * Return: This function returns the number of available opps if there are any, 221 217 * else returns 0 if none or the corresponding error value. 222 218 * 223 219 * Locking: This function takes rcu_read_lock(). ··· 230 226 231 227 rcu_read_lock(); 232 228 233 - dev_opp = find_device_opp(dev); 229 + dev_opp = _find_device_opp(dev); 234 230 if (IS_ERR(dev_opp)) { 235 231 count = PTR_ERR(dev_opp); 236 232 dev_err(dev, "%s: device OPP not found (%d)\n", ··· 255 251 * @freq: frequency to search for 256 252 * @available: true/false - match for available opp 257 253 * 258 - * Searches for exact match in the opp list and returns pointer to the matching 259 - * opp if found, else returns ERR_PTR in case of error and should be handled 260 - * using IS_ERR. 
Error return values can be: 254 + * Return: Searches for exact match in the opp list and returns pointer to the 255 + * matching opp if found, else returns ERR_PTR in case of error and should 256 + * be handled using IS_ERR. Error return values can be: 261 257 * EINVAL: for bad pointer 262 258 * ERANGE: no match found for search 263 259 * ENODEV: if device not found in list of registered devices ··· 284 280 285 281 opp_rcu_lockdep_assert(); 286 282 287 - dev_opp = find_device_opp(dev); 283 + dev_opp = _find_device_opp(dev); 288 284 if (IS_ERR(dev_opp)) { 289 285 int r = PTR_ERR(dev_opp); 290 286 dev_err(dev, "%s: device OPP not found (%d)\n", __func__, r); ··· 311 307 * Search for the matching ceil *available* OPP from a starting freq 312 308 * for a device. 313 309 * 314 - * Returns matching *opp and refreshes *freq accordingly, else returns 310 + * Return: matching *opp and refreshes *freq accordingly, else returns 315 311 * ERR_PTR in case of error and should be handled using IS_ERR. Error return 316 312 * values can be: 317 313 * EINVAL: for bad pointer ··· 337 333 return ERR_PTR(-EINVAL); 338 334 } 339 335 340 - dev_opp = find_device_opp(dev); 336 + dev_opp = _find_device_opp(dev); 341 337 if (IS_ERR(dev_opp)) 342 338 return ERR_CAST(dev_opp); 343 339 ··· 361 357 * Search for the matching floor *available* OPP from a starting freq 362 358 * for a device. 363 359 * 364 - * Returns matching *opp and refreshes *freq accordingly, else returns 360 + * Return: matching *opp and refreshes *freq accordingly, else returns 365 361 * ERR_PTR in case of error and should be handled using IS_ERR. 
Error return 366 362 * values can be: 367 363 * EINVAL: for bad pointer ··· 387 383 return ERR_PTR(-EINVAL); 388 384 } 389 385 390 - dev_opp = find_device_opp(dev); 386 + dev_opp = _find_device_opp(dev); 391 387 if (IS_ERR(dev_opp)) 392 388 return ERR_CAST(dev_opp); 393 389 ··· 407 403 } 408 404 EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_floor); 409 405 410 - static struct device_opp *add_device_opp(struct device *dev) 406 + /** 407 + * _add_device_opp() - Allocate a new device OPP table 408 + * @dev: device for which we do this operation 409 + * 410 + * New device node which uses OPPs - used when multiple devices with OPP tables 411 + * are maintained. 412 + * 413 + * Return: valid device_opp pointer if success, else NULL. 414 + */ 415 + static struct device_opp *_add_device_opp(struct device *dev) 411 416 { 412 417 struct device_opp *dev_opp; 413 418 ··· 437 424 return dev_opp; 438 425 } 439 426 440 - static int dev_pm_opp_add_dynamic(struct device *dev, unsigned long freq, 441 - unsigned long u_volt, bool dynamic) 427 + /** 428 + * _opp_add_dynamic() - Allocate a dynamic OPP. 429 + * @dev: device for which we do this operation 430 + * @freq: Frequency in Hz for this OPP 431 + * @u_volt: Voltage in uVolts for this OPP 432 + * @dynamic: Dynamically added OPPs. 433 + * 434 + * This function adds an opp definition to the opp list and returns status. 435 + * The opp is made available by default and it can be controlled using 436 + * dev_pm_opp_enable/disable functions and may be removed by dev_pm_opp_remove. 437 + * 438 + * NOTE: "dynamic" parameter impacts OPPs added by the of_init_opp_table and 439 + * freed by of_free_opp_table. 440 + * 441 + * Locking: The internal device_opp and opp structures are RCU protected. 442 + * Hence this function internally uses RCU updater strategy with mutex locks 443 + * to keep the integrity of the internal data structures. 
Callers should ensure 444 + * that this function is *NOT* called under RCU protection or in contexts where 445 + * mutex cannot be locked. 446 + * 447 + * Return: 448 + * 0 On success OR 449 + * Duplicate OPPs (both freq and volt are same) and opp->available 450 + * -EEXIST Freq are same and volt are different OR 451 + * Duplicate OPPs (both freq and volt are same) and !opp->available 452 + * -ENOMEM Memory allocation failure 453 + */ 454 + static int _opp_add_dynamic(struct device *dev, unsigned long freq, 455 + long u_volt, bool dynamic) 442 456 { 443 457 struct device_opp *dev_opp = NULL; 444 458 struct dev_pm_opp *opp, *new_opp; ··· 489 449 new_opp->dynamic = dynamic; 490 450 491 451 /* Check for existing list for 'dev' */ 492 - dev_opp = find_device_opp(dev); 452 + dev_opp = _find_device_opp(dev); 493 453 if (IS_ERR(dev_opp)) { 494 - dev_opp = add_device_opp(dev); 454 + dev_opp = _add_device_opp(dev); 495 455 if (!dev_opp) { 496 456 ret = -ENOMEM; 497 457 goto free_opp; ··· 559 519 * mutex cannot be locked. 
560 520 * 561 521 * Return: 562 - * 0: On success OR 522 + * 0 On success OR 563 523 * Duplicate OPPs (both freq and volt are same) and opp->available 564 - * -EEXIST: Freq are same and volt are different OR 524 + * -EEXIST Freq are same and volt are different OR 565 525 * Duplicate OPPs (both freq and volt are same) and !opp->available 566 - * -ENOMEM: Memory allocation failure 526 + * -ENOMEM Memory allocation failure 567 527 */ 568 528 int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt) 569 529 { 570 - return dev_pm_opp_add_dynamic(dev, freq, u_volt, true); 530 + return _opp_add_dynamic(dev, freq, u_volt, true); 571 531 } 572 532 EXPORT_SYMBOL_GPL(dev_pm_opp_add); 573 533 574 - static void kfree_opp_rcu(struct rcu_head *head) 534 + /** 535 + * _kfree_opp_rcu() - Free OPP RCU handler 536 + * @head: RCU head 537 + */ 538 + static void _kfree_opp_rcu(struct rcu_head *head) 575 539 { 576 540 struct dev_pm_opp *opp = container_of(head, struct dev_pm_opp, rcu_head); 577 541 578 542 kfree_rcu(opp, rcu_head); 579 543 } 580 544 581 - static void kfree_device_rcu(struct rcu_head *head) 545 + /** 546 + * _kfree_device_rcu() - Free device_opp RCU handler 547 + * @head: RCU head 548 + */ 549 + static void _kfree_device_rcu(struct rcu_head *head) 582 550 { 583 551 struct device_opp *device_opp = container_of(head, struct device_opp, rcu_head); 584 552 585 553 kfree_rcu(device_opp, rcu_head); 586 554 } 587 555 588 - static void __dev_pm_opp_remove(struct device_opp *dev_opp, 589 - struct dev_pm_opp *opp) 556 + /** 557 + * _opp_remove() - Remove an OPP from a table definition 558 + * @dev_opp: points back to the device_opp struct this opp belongs to 559 + * @opp: pointer to the OPP to remove 560 + * 561 + * This function removes an opp definition from the opp list. 562 + * 563 + * Locking: The internal device_opp and opp structures are RCU protected. 564 + * It is assumed that the caller holds required mutex for an RCU updater 565 + * strategy. 
566 + */ 567 + static void _opp_remove(struct device_opp *dev_opp, 568 + struct dev_pm_opp *opp) 590 569 { 591 570 /* 592 571 * Notify the changes in the availability of the operable ··· 613 554 */ 614 555 srcu_notifier_call_chain(&dev_opp->srcu_head, OPP_EVENT_REMOVE, opp); 615 556 list_del_rcu(&opp->node); 616 - call_srcu(&dev_opp->srcu_head.srcu, &opp->rcu_head, kfree_opp_rcu); 557 + call_srcu(&dev_opp->srcu_head.srcu, &opp->rcu_head, _kfree_opp_rcu); 617 558 618 559 if (list_empty(&dev_opp->opp_list)) { 619 560 list_del_rcu(&dev_opp->node); 620 561 call_srcu(&dev_opp->srcu_head.srcu, &dev_opp->rcu_head, 621 - kfree_device_rcu); 562 + _kfree_device_rcu); 622 563 } 623 564 } 624 565 ··· 628 569 * @freq: OPP to remove with matching 'freq' 629 570 * 630 571 * This function removes an opp from the opp list. 572 + * 573 + * Locking: The internal device_opp and opp structures are RCU protected. 574 + * Hence this function internally uses RCU updater strategy with mutex locks 575 + * to keep the integrity of the internal data structures. Callers should ensure 576 + * that this function is *NOT* called under RCU protection or in contexts where 577 + * mutex cannot be locked. 
631 578 */ 632 579 void dev_pm_opp_remove(struct device *dev, unsigned long freq) 633 580 { ··· 644 579 /* Hold our list modification lock here */ 645 580 mutex_lock(&dev_opp_list_lock); 646 581 647 - dev_opp = find_device_opp(dev); 582 + dev_opp = _find_device_opp(dev); 648 583 if (IS_ERR(dev_opp)) 649 584 goto unlock; 650 585 ··· 661 596 goto unlock; 662 597 } 663 598 664 - __dev_pm_opp_remove(dev_opp, opp); 599 + _opp_remove(dev_opp, opp); 665 600 unlock: 666 601 mutex_unlock(&dev_opp_list_lock); 667 602 } 668 603 EXPORT_SYMBOL_GPL(dev_pm_opp_remove); 669 604 670 605 /** 671 - * opp_set_availability() - helper to set the availability of an opp 606 + * _opp_set_availability() - helper to set the availability of an opp 672 607 * @dev: device for which we do this operation 673 608 * @freq: OPP frequency to modify availability 674 609 * @availability_req: availability status requested for this opp ··· 676 611 * Set the availability of an OPP with an RCU operation, opp_{enable,disable} 677 612 * share a common logic which is isolated here. 678 613 * 679 - * Returns -EINVAL for bad pointers, -ENOMEM if no memory available for the 614 + * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the 680 615 * copy operation, returns 0 if no modifcation was done OR modification was 681 616 * successful. 682 617 * ··· 686 621 * that this function is *NOT* called under RCU protection or in contexts where 687 622 * mutex locking or synchronize_rcu() blocking calls cannot be used. 
688 623 */ 689 - static int opp_set_availability(struct device *dev, unsigned long freq, 690 - bool availability_req) 624 + static int _opp_set_availability(struct device *dev, unsigned long freq, 625 + bool availability_req) 691 626 { 692 627 struct device_opp *dev_opp; 693 628 struct dev_pm_opp *new_opp, *tmp_opp, *opp = ERR_PTR(-ENODEV); ··· 703 638 mutex_lock(&dev_opp_list_lock); 704 639 705 640 /* Find the device_opp */ 706 - dev_opp = find_device_opp(dev); 641 + dev_opp = _find_device_opp(dev); 707 642 if (IS_ERR(dev_opp)) { 708 643 r = PTR_ERR(dev_opp); 709 644 dev_warn(dev, "%s: Device OPP not found (%d)\n", __func__, r); ··· 733 668 734 669 list_replace_rcu(&opp->node, &new_opp->node); 735 670 mutex_unlock(&dev_opp_list_lock); 736 - call_srcu(&dev_opp->srcu_head.srcu, &opp->rcu_head, kfree_opp_rcu); 671 + call_srcu(&dev_opp->srcu_head.srcu, &opp->rcu_head, _kfree_opp_rcu); 737 672 738 673 /* Notify the change of the OPP availability */ 739 674 if (availability_req) ··· 765 700 * integrity of the internal data structures. Callers should ensure that 766 701 * this function is *NOT* called under RCU protection or in contexts where 767 702 * mutex locking or synchronize_rcu() blocking calls cannot be used. 703 + * 704 + * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the 705 + * copy operation, returns 0 if no modifcation was done OR modification was 706 + * successful. 768 707 */ 769 708 int dev_pm_opp_enable(struct device *dev, unsigned long freq) 770 709 { 771 - return opp_set_availability(dev, freq, true); 710 + return _opp_set_availability(dev, freq, true); 772 711 } 773 712 EXPORT_SYMBOL_GPL(dev_pm_opp_enable); 774 713 ··· 791 722 * integrity of the internal data structures. Callers should ensure that 792 723 * this function is *NOT* called under RCU protection or in contexts where 793 724 * mutex locking or synchronize_rcu() blocking calls cannot be used. 
725 + * 726 + * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the 727 + * copy operation, returns 0 if no modifcation was done OR modification was 728 + * successful. 794 729 */ 795 730 int dev_pm_opp_disable(struct device *dev, unsigned long freq) 796 731 { 797 - return opp_set_availability(dev, freq, false); 732 + return _opp_set_availability(dev, freq, false); 798 733 } 799 734 EXPORT_SYMBOL_GPL(dev_pm_opp_disable); 800 735 801 736 /** 802 737 * dev_pm_opp_get_notifier() - find notifier_head of the device with opp 803 738 * @dev: device pointer used to lookup device OPPs. 739 + * 740 + * Return: pointer to notifier head if found, otherwise -ENODEV or 741 + * -EINVAL based on type of error casted as pointer. value must be checked 742 + * with IS_ERR to determine valid pointer or error result. 743 + * 744 + * Locking: This function must be called under rcu_read_lock(). dev_opp is a RCU 745 + * protected pointer. The reason for the same is that the opp pointer which is 746 + * returned will remain valid for use with opp_get_{voltage, freq} only while 747 + * under the locked area. The pointer returned must be used prior to unlocking 748 + * with rcu_read_unlock() to maintain the integrity of the pointer. 804 749 */ 805 750 struct srcu_notifier_head *dev_pm_opp_get_notifier(struct device *dev) 806 751 { 807 - struct device_opp *dev_opp = find_device_opp(dev); 752 + struct device_opp *dev_opp = _find_device_opp(dev); 808 753 809 754 if (IS_ERR(dev_opp)) 810 755 return ERR_CAST(dev_opp); /* matching type */ 811 756 812 757 return &dev_opp->srcu_head; 813 758 } 759 + EXPORT_SYMBOL_GPL(dev_pm_opp_get_notifier); 814 760 815 761 #ifdef CONFIG_OF 816 762 /** ··· 833 749 * @dev: device pointer used to lookup device OPPs. 834 750 * 835 751 * Register the initial OPP table with the OPP library for given device. 752 + * 753 + * Locking: The internal device_opp and opp structures are RCU protected. 
754 + * Hence this function indirectly uses RCU updater strategy with mutex locks 755 + * to keep the integrity of the internal data structures. Callers should ensure 756 + * that this function is *NOT* called under RCU protection or in contexts where 757 + * mutex cannot be locked. 758 + * 759 + * Return: 760 + * 0 On success OR 761 + * Duplicate OPPs (both freq and volt are same) and opp->available 762 + * -EEXIST Freq are same and volt are different OR 763 + * Duplicate OPPs (both freq and volt are same) and !opp->available 764 + * -ENOMEM Memory allocation failure 765 + * -ENODEV when 'operating-points' property is not found or is invalid data 766 + * in device node. 767 + * -ENODATA when empty 'operating-points' property is found 836 768 */ 837 769 int of_init_opp_table(struct device *dev) 838 770 { ··· 877 777 unsigned long freq = be32_to_cpup(val++) * 1000; 878 778 unsigned long volt = be32_to_cpup(val++); 879 779 880 - if (dev_pm_opp_add_dynamic(dev, freq, volt, false)) 780 + if (_opp_add_dynamic(dev, freq, volt, false)) 881 781 dev_warn(dev, "%s: Failed to add OPP %ld\n", 882 782 __func__, freq); 883 783 nr -= 2; ··· 892 792 * @dev: device pointer used to lookup device OPPs. 893 793 * 894 794 * Free OPPs created using static entries present in DT. 795 + * 796 + * Locking: The internal device_opp and opp structures are RCU protected. 797 + * Hence this function indirectly uses RCU updater strategy with mutex locks 798 + * to keep the integrity of the internal data structures. Callers should ensure 799 + * that this function is *NOT* called under RCU protection or in contexts where 800 + * mutex cannot be locked. 
895 801 */ 896 802 void of_free_opp_table(struct device *dev) 897 803 { ··· 905 799 struct dev_pm_opp *opp, *tmp; 906 800 907 801 /* Check for existing list for 'dev' */ 908 - dev_opp = find_device_opp(dev); 802 + dev_opp = _find_device_opp(dev); 909 803 if (IS_ERR(dev_opp)) { 910 804 int error = PTR_ERR(dev_opp); 911 805 if (error != -ENODEV) ··· 922 816 /* Free static OPPs */ 923 817 list_for_each_entry_safe(opp, tmp, &dev_opp->opp_list, node) { 924 818 if (!opp->dynamic) 925 - __dev_pm_opp_remove(dev_opp, opp); 819 + _opp_remove(dev_opp, opp); 926 820 } 927 821 928 822 mutex_unlock(&dev_opp_list_lock);
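The OPP hunks above follow one pattern throughout: writers take `dev_opp_list_lock`, unlink an entry with `list_del_rcu()`, and hand the actual free to `call_srcu()` so in-flight readers are never freed out from under them. Below is a minimal userspace sketch of that shape, assuming a plain mutex and an explicit pending-free list in place of the kernel's SRCU machinery; every `opp_*` name here is illustrative, not a kernel API.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Userspace analogue of the _opp_remove() pattern: unlink under a
 * mutex, defer the free until readers are done (call_srcu() plays
 * that role in the kernel; a pending list stands in for it here). */
struct opp {
	unsigned long freq;
	struct opp *next;
};

struct opp_list {
	pthread_mutex_t lock;	/* stands in for dev_opp_list_lock */
	struct opp *head;
	struct opp *pending;	/* unlinked nodes awaiting "grace period" */
};

void opp_list_init(struct opp_list *l)
{
	pthread_mutex_init(&l->lock, NULL);
	l->head = NULL;
	l->pending = NULL;
}

int opp_add(struct opp_list *l, unsigned long freq)
{
	struct opp *n = malloc(sizeof(*n));

	if (!n)
		return -1;
	n->freq = freq;
	pthread_mutex_lock(&l->lock);
	n->next = l->head;
	l->head = n;
	pthread_mutex_unlock(&l->lock);
	return 0;
}

/* Unlink under the lock; the free itself is deferred. */
int opp_remove(struct opp_list *l, unsigned long freq)
{
	struct opp **pp, *n;

	pthread_mutex_lock(&l->lock);
	for (pp = &l->head; (n = *pp); pp = &n->next) {
		if (n->freq == freq) {
			*pp = n->next;	/* readers may still hold n */
			n->next = l->pending;
			l->pending = n;
			pthread_mutex_unlock(&l->lock);
			return 0;
		}
	}
	pthread_mutex_unlock(&l->lock);
	return -1;		/* like -ENODEV in the kernel code */
}

/* Called once all readers have finished: the grace-period analogue. */
void opp_flush_pending(struct opp_list *l)
{
	pthread_mutex_lock(&l->lock);
	while (l->pending) {
		struct opp *n = l->pending;

		l->pending = n->next;
		free(n);
	}
	pthread_mutex_unlock(&l->lock);
}

int opp_count(struct opp_list *l)
{
	struct opp *n;
	int c = 0;

	pthread_mutex_lock(&l->lock);
	for (n = l->head; n; n = n->next)
		c++;
	pthread_mutex_unlock(&l->lock);
	return c;
}
```

This is also why the kernel-doc comments added above insist the functions must not be called under RCU protection: the writer side sleeps on a mutex, which a reader-side critical section may not do.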
+4
drivers/base/power/qos.c
··· 64 64 struct pm_qos_flags *pqf; 65 65 s32 val; 66 66 67 + lockdep_assert_held(&dev->power.lock); 68 + 67 69 if (IS_ERR_OR_NULL(qos)) 68 70 return PM_QOS_FLAGS_UNDEFINED; 69 71 ··· 106 104 */ 107 105 s32 __dev_pm_qos_read_value(struct device *dev) 108 106 { 107 + lockdep_assert_held(&dev->power.lock); 108 + 109 109 return IS_ERR_OR_NULL(dev->power.qos) ? 110 110 0 : pm_qos_read_value(&dev->power.qos->resume_latency); 111 111 }
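The qos.c hunk adds `lockdep_assert_held(&dev->power.lock)` to helpers whose callers are required to hold the device's power lock. A minimal userspace sketch of the same idea, assuming pthreads and a single-threaded demo (real lockdep tracks far more state); the `cm_*` names are invented for illustration, and the check returns a boolean here rather than warning:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Record the owner when a lock is taken so helpers that require the
 * lock can verify the invariant instead of silently racing. */
struct checked_mutex {
	pthread_mutex_t lock;
	pthread_t owner;
	bool held;
};

void cm_init(struct checked_mutex *m)
{
	pthread_mutex_init(&m->lock, NULL);
	m->held = false;
}

void cm_lock(struct checked_mutex *m)
{
	pthread_mutex_lock(&m->lock);
	m->owner = pthread_self();
	m->held = true;
}

void cm_unlock(struct checked_mutex *m)
{
	m->held = false;
	pthread_mutex_unlock(&m->lock);
}

/* Like lockdep_assert_held(): true only if *this* thread holds it. */
bool cm_assert_held(struct checked_mutex *m)
{
	return m->held && pthread_equal(m->owner, pthread_self());
}
```

Placing the assertion at the top of `__dev_pm_qos_read_value()` documents the locking contract in code, so a mistaken lock-free caller trips a warning in testing instead of corrupting state in the field.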
+2 -2
drivers/char/hpet.c
··· 976 976 status = acpi_resource_to_address64(res, &addr); 977 977 978 978 if (ACPI_SUCCESS(status)) { 979 - hdp->hd_phys_address = addr.minimum; 980 - hdp->hd_address = ioremap(addr.minimum, addr.address_length); 979 + hdp->hd_phys_address = addr.address.minimum; 980 + hdp->hd_address = ioremap(addr.address.minimum, addr.address.address_length); 981 981 982 982 if (hpet_is_known(hdp)) { 983 983 iounmap(hdp->hd_address);
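The hpet.c fix tracks the ACPICA resources rework: the generic address fields moved into an embedded `address` struct, so `addr.minimum` became `addr.address.minimum`. A compilable sketch of that layout change, with `_sketch` types standing in for the real ACPICA definitions (field names follow the diff; the actual struct has more members):

```c
#include <assert.h>

/* Simplified stand-in for the reworked ACPICA address resource:
 * the common fields now live in an embedded struct. */
struct acpi_address64_attribute_sketch {
	unsigned long long granularity;
	unsigned long long minimum;
	unsigned long long maximum;
	unsigned long long translation_offset;
	unsigned long long address_length;
};

struct acpi_resource_address64_sketch {
	unsigned char resource_type;
	struct acpi_address64_attribute_sketch address;	/* embedded */
};

unsigned long long sketch_phys_base(const struct acpi_resource_address64_sketch *r)
{
	return r->address.minimum;	/* was r->minimum before the rework */
}
```

Every accessor of the moved fields has to be updated in the same patch, which is why this mechanical hunk rides along with the core resources series.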
+10
drivers/cpufreq/Kconfig.x86
··· 57 57 By enabling this option the acpi_cpufreq driver provides the old 58 58 entry in addition to the new boost ones, for compatibility reasons. 59 59 60 + config X86_SFI_CPUFREQ 61 + tristate "SFI Performance-States driver" 62 + depends on X86_INTEL_MID && SFI 63 + help 64 + This adds a CPUFreq driver for some Silvermont based Intel Atom 65 + architectures like Z34xx and Z35xx which enumerate processor 66 + performance states through SFI. 67 + 68 + If in doubt, say N. 69 + 60 70 config ELAN_CPUFREQ 61 71 tristate "AMD Elan SC400 and SC410" 62 72 depends on MELAN
+1
drivers/cpufreq/Makefile
··· 41 41 obj-$(CONFIG_X86_CPUFREQ_NFORCE2) += cpufreq-nforce2.o 42 42 obj-$(CONFIG_X86_INTEL_PSTATE) += intel_pstate.o 43 43 obj-$(CONFIG_X86_AMD_FREQ_SENSITIVITY) += amd_freq_sensitivity.o 44 + obj-$(CONFIG_X86_SFI_CPUFREQ) += sfi-cpufreq.o 44 45 45 46 ################################################################################## 46 47 # ARM SoC drivers
+1 -2
drivers/cpufreq/cpufreq-dt.c
··· 320 320 { 321 321 struct private_data *priv = policy->driver_data; 322 322 323 - if (priv->cdev) 324 - cpufreq_cooling_unregister(priv->cdev); 323 + cpufreq_cooling_unregister(priv->cdev); 325 324 dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table); 326 325 of_free_opp_table(priv->cpu_dev); 327 326 clk_put(policy->clk);
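The cpufreq-dt hunk drops the `if (priv->cdev)` guard because `cpufreq_cooling_unregister()` tolerates NULL, the same convention as `kfree(NULL)`. A tiny sketch of writing a teardown helper that way, with illustrative `_sketch` names and a call counter added purely for the demo:

```c
#include <assert.h>
#include <stdlib.h>

struct cooling_dev_sketch {
	int id;
};

static int unregister_calls;	/* demo-only: count real unregistrations */

/* NULL is a no-op, so callers need no guard of their own. */
void cooling_unregister_sketch(struct cooling_dev_sketch *cdev)
{
	if (!cdev)
		return;
	unregister_calls++;
	free(cdev);
}
```

Pushing the NULL check into the callee once removes a conditional from every caller's error/teardown path.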
+71 -103
drivers/cpufreq/cpufreq.c
··· 27 27 #include <linux/mutex.h> 28 28 #include <linux/slab.h> 29 29 #include <linux/suspend.h> 30 + #include <linux/syscore_ops.h> 30 31 #include <linux/tick.h> 31 32 #include <trace/events/power.h> 33 + 34 + /* Macros to iterate over lists */ 35 + /* Iterate over online CPUs policies */ 36 + static LIST_HEAD(cpufreq_policy_list); 37 + #define for_each_policy(__policy) \ 38 + list_for_each_entry(__policy, &cpufreq_policy_list, policy_list) 39 + 40 + /* Iterate over governors */ 41 + static LIST_HEAD(cpufreq_governor_list); 42 + #define for_each_governor(__governor) \ 43 + list_for_each_entry(__governor, &cpufreq_governor_list, governor_list) 32 44 33 45 /** 34 46 * The "cpufreq driver" - the arch- or hardware-dependent low ··· 52 40 static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data_fallback); 53 41 static DEFINE_RWLOCK(cpufreq_driver_lock); 54 42 DEFINE_MUTEX(cpufreq_governor_lock); 55 - static LIST_HEAD(cpufreq_policy_list); 56 43 57 44 /* This one keeps track of the previously set governor of a removed CPU */ 58 45 static DEFINE_PER_CPU(char[CPUFREQ_NAME_LEN], cpufreq_cpu_governor); ··· 73 62 /* internal prototypes */ 74 63 static int __cpufreq_governor(struct cpufreq_policy *policy, 75 64 unsigned int event); 76 - static unsigned int __cpufreq_get(unsigned int cpu); 65 + static unsigned int __cpufreq_get(struct cpufreq_policy *policy); 77 66 static void handle_update(struct work_struct *work); 78 67 79 68 /** ··· 104 93 { 105 94 off = 1; 106 95 } 107 - static LIST_HEAD(cpufreq_governor_list); 108 96 static DEFINE_MUTEX(cpufreq_governor_mutex); 109 97 110 98 bool have_governor_per_policy(void) ··· 212 202 struct cpufreq_policy *policy = NULL; 213 203 unsigned long flags; 214 204 215 - if (cpufreq_disabled() || (cpu >= nr_cpu_ids)) 205 + if (cpu >= nr_cpu_ids) 216 206 return NULL; 217 207 218 208 if (!down_read_trylock(&cpufreq_rwsem)) ··· 239 229 240 230 void cpufreq_cpu_put(struct cpufreq_policy *policy) 241 231 { 242 - if (cpufreq_disabled()) 
243 - return; 244 - 245 232 kobject_put(&policy->kobj); 246 233 up_read(&cpufreq_rwsem); 247 234 } ··· 256 249 * systems as each CPU might be scaled differently. So, use the arch 257 250 * per-CPU loops_per_jiffy value wherever possible. 258 251 */ 259 - #ifndef CONFIG_SMP 260 - static unsigned long l_p_j_ref; 261 - static unsigned int l_p_j_ref_freq; 262 - 263 252 static void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci) 264 253 { 254 + #ifndef CONFIG_SMP 255 + static unsigned long l_p_j_ref; 256 + static unsigned int l_p_j_ref_freq; 257 + 265 258 if (ci->flags & CPUFREQ_CONST_LOOPS) 266 259 return; 267 260 ··· 277 270 pr_debug("scaling loops_per_jiffy to %lu for frequency %u kHz\n", 278 271 loops_per_jiffy, ci->new); 279 272 } 280 - } 281 - #else 282 - static inline void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci) 283 - { 284 - return; 285 - } 286 273 #endif 274 + } 287 275 288 276 static void __cpufreq_notify_transition(struct cpufreq_policy *policy, 289 277 struct cpufreq_freqs *freqs, unsigned int state) ··· 434 432 } 435 433 define_one_global_rw(boost); 436 434 437 - static struct cpufreq_governor *__find_governor(const char *str_governor) 435 + static struct cpufreq_governor *find_governor(const char *str_governor) 438 436 { 439 437 struct cpufreq_governor *t; 440 438 441 - list_for_each_entry(t, &cpufreq_governor_list, governor_list) 439 + for_each_governor(t) 442 440 if (!strncasecmp(str_governor, t->name, CPUFREQ_NAME_LEN)) 443 441 return t; 444 442 ··· 465 463 *policy = CPUFREQ_POLICY_POWERSAVE; 466 464 err = 0; 467 465 } 468 - } else if (has_target()) { 466 + } else { 469 467 struct cpufreq_governor *t; 470 468 471 469 mutex_lock(&cpufreq_governor_mutex); 472 470 473 - t = __find_governor(str_governor); 471 + t = find_governor(str_governor); 474 472 475 473 if (t == NULL) { 476 474 int ret; ··· 480 478 mutex_lock(&cpufreq_governor_mutex); 481 479 482 480 if (ret == 0) 483 - t = __find_governor(str_governor); 481 + t = 
find_governor(str_governor); 484 482 } 485 483 486 484 if (t != NULL) { ··· 515 513 show_one(scaling_min_freq, min); 516 514 show_one(scaling_max_freq, max); 517 515 518 - static ssize_t show_scaling_cur_freq( 519 - struct cpufreq_policy *policy, char *buf) 516 + static ssize_t show_scaling_cur_freq(struct cpufreq_policy *policy, char *buf) 520 517 { 521 518 ssize_t ret; 522 519 ··· 564 563 static ssize_t show_cpuinfo_cur_freq(struct cpufreq_policy *policy, 565 564 char *buf) 566 565 { 567 - unsigned int cur_freq = __cpufreq_get(policy->cpu); 566 + unsigned int cur_freq = __cpufreq_get(policy); 568 567 if (!cur_freq) 569 568 return sprintf(buf, "<unknown>"); 570 569 return sprintf(buf, "%u\n", cur_freq); ··· 640 639 goto out; 641 640 } 642 641 643 - list_for_each_entry(t, &cpufreq_governor_list, governor_list) { 642 + for_each_governor(t) { 644 643 if (i >= (ssize_t) ((PAGE_SIZE / sizeof(char)) 645 644 - (CPUFREQ_NAME_LEN + 2))) 646 645 goto out; ··· 903 902 904 903 /* set up files for this cpu device */ 905 904 drv_attr = cpufreq_driver->attr; 906 - while ((drv_attr) && (*drv_attr)) { 905 + while (drv_attr && *drv_attr) { 907 906 ret = sysfs_create_file(&policy->kobj, &((*drv_attr)->attr)); 908 907 if (ret) 909 908 return ret; ··· 937 936 memcpy(&new_policy, policy, sizeof(*policy)); 938 937 939 938 /* Update governor of new_policy to the governor used before hotplug */ 940 - gov = __find_governor(per_cpu(cpufreq_cpu_governor, policy->cpu)); 939 + gov = find_governor(per_cpu(cpufreq_cpu_governor, policy->cpu)); 941 940 if (gov) 942 941 pr_debug("Restoring governor %s for cpu %d\n", 943 942 policy->governor->name, policy->cpu); ··· 959 958 } 960 959 } 961 960 962 - #ifdef CONFIG_HOTPLUG_CPU 963 961 static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy, 964 962 unsigned int cpu, struct device *dev) 965 963 { ··· 996 996 997 997 return sysfs_create_link(&dev->kobj, &policy->kobj, "cpufreq"); 998 998 } 999 - #endif 1000 999 1001 1000 static struct 
cpufreq_policy *cpufreq_policy_restore(unsigned int cpu) 1002 1001 { ··· 1032 1033 init_rwsem(&policy->rwsem); 1033 1034 spin_lock_init(&policy->transition_lock); 1034 1035 init_waitqueue_head(&policy->transition_wait); 1036 + init_completion(&policy->kobj_unregister); 1037 + INIT_WORK(&policy->update, handle_update); 1035 1038 1036 1039 return policy; 1037 1040 ··· 1092 1091 } 1093 1092 1094 1093 down_write(&policy->rwsem); 1095 - 1096 - policy->last_cpu = policy->cpu; 1097 1094 policy->cpu = cpu; 1098 - 1099 1095 up_write(&policy->rwsem); 1100 - 1101 - blocking_notifier_call_chain(&cpufreq_policy_notifier_list, 1102 - CPUFREQ_UPDATE_POLICY_CPU, policy); 1103 1096 1104 1097 return 0; 1105 1098 } ··· 1105 1110 struct cpufreq_policy *policy; 1106 1111 unsigned long flags; 1107 1112 bool recover_policy = cpufreq_suspended; 1108 - #ifdef CONFIG_HOTPLUG_CPU 1109 - struct cpufreq_policy *tpolicy; 1110 - #endif 1111 1113 1112 1114 if (cpu_is_offline(cpu)) 1113 1115 return 0; 1114 1116 1115 1117 pr_debug("adding CPU %u\n", cpu); 1116 1118 1117 - #ifdef CONFIG_SMP 1118 1119 /* check whether a different CPU already registered this 1119 1120 * CPU because it is in the same boat. 
*/ 1120 - policy = cpufreq_cpu_get(cpu); 1121 - if (unlikely(policy)) { 1122 - cpufreq_cpu_put(policy); 1121 + policy = cpufreq_cpu_get_raw(cpu); 1122 + if (unlikely(policy)) 1123 1123 return 0; 1124 - } 1125 - #endif 1126 1124 1127 1125 if (!down_read_trylock(&cpufreq_rwsem)) 1128 1126 return 0; 1129 1127 1130 - #ifdef CONFIG_HOTPLUG_CPU 1131 1128 /* Check if this cpu was hot-unplugged earlier and has siblings */ 1132 1129 read_lock_irqsave(&cpufreq_driver_lock, flags); 1133 - list_for_each_entry(tpolicy, &cpufreq_policy_list, policy_list) { 1134 - if (cpumask_test_cpu(cpu, tpolicy->related_cpus)) { 1130 + for_each_policy(policy) { 1131 + if (cpumask_test_cpu(cpu, policy->related_cpus)) { 1135 1132 read_unlock_irqrestore(&cpufreq_driver_lock, flags); 1136 - ret = cpufreq_add_policy_cpu(tpolicy, cpu, dev); 1133 + ret = cpufreq_add_policy_cpu(policy, cpu, dev); 1137 1134 up_read(&cpufreq_rwsem); 1138 1135 return ret; 1139 1136 } 1140 1137 } 1141 1138 read_unlock_irqrestore(&cpufreq_driver_lock, flags); 1142 - #endif 1143 1139 1144 1140 /* 1145 1141 * Restore the saved policy when doing light-weight init and fall back ··· 1156 1170 policy->cpu = cpu; 1157 1171 1158 1172 cpumask_copy(policy->cpus, cpumask_of(cpu)); 1159 - 1160 - init_completion(&policy->kobj_unregister); 1161 - INIT_WORK(&policy->update, handle_update); 1162 1173 1163 1174 /* call driver. 
From then on the cpufreq must be able 1164 1175 * to accept all calls to ->verify and ->setpolicy for this CPU ··· 1354 1371 pr_err("%s: Failed to stop governor\n", __func__); 1355 1372 return ret; 1356 1373 } 1357 - } 1358 1374 1359 - if (!cpufreq_driver->setpolicy) 1360 1375 strncpy(per_cpu(cpufreq_cpu_governor, cpu), 1361 1376 policy->governor->name, CPUFREQ_NAME_LEN); 1377 + } 1362 1378 1363 1379 down_read(&policy->rwsem); 1364 1380 cpus = cpumask_weight(policy->cpus); ··· 1398 1416 unsigned long flags; 1399 1417 struct cpufreq_policy *policy; 1400 1418 1401 - read_lock_irqsave(&cpufreq_driver_lock, flags); 1419 + write_lock_irqsave(&cpufreq_driver_lock, flags); 1402 1420 policy = per_cpu(cpufreq_cpu_data, cpu); 1403 - read_unlock_irqrestore(&cpufreq_driver_lock, flags); 1421 + per_cpu(cpufreq_cpu_data, cpu) = NULL; 1422 + write_unlock_irqrestore(&cpufreq_driver_lock, flags); 1404 1423 1405 1424 if (!policy) { 1406 1425 pr_debug("%s: No cpu_data found\n", __func__); ··· 1456 1473 } 1457 1474 } 1458 1475 1459 - per_cpu(cpufreq_cpu_data, cpu) = NULL; 1460 1476 return 0; 1461 1477 } 1462 1478 ··· 1492 1510 /** 1493 1511 * cpufreq_out_of_sync - If actual and saved CPU frequency differs, we're 1494 1512 * in deep trouble. 1495 - * @cpu: cpu number 1496 - * @old_freq: CPU frequency the kernel thinks the CPU runs at 1513 + * @policy: policy managing CPUs 1497 1514 * @new_freq: CPU frequency the CPU actually runs at 1498 1515 * 1499 1516 * We adjust to current frequency first, and need to clean up later. 1500 1517 * So either call to cpufreq_update_policy() or schedule handle_update()). 
1501 1518 */ 1502 - static void cpufreq_out_of_sync(unsigned int cpu, unsigned int old_freq, 1519 + static void cpufreq_out_of_sync(struct cpufreq_policy *policy, 1503 1520 unsigned int new_freq) 1504 1521 { 1505 - struct cpufreq_policy *policy; 1506 1522 struct cpufreq_freqs freqs; 1507 - unsigned long flags; 1508 1523 1509 1524 pr_debug("Warning: CPU frequency out of sync: cpufreq and timing core thinks of %u, is %u kHz\n", 1510 - old_freq, new_freq); 1525 + policy->cur, new_freq); 1511 1526 1512 - freqs.old = old_freq; 1527 + freqs.old = policy->cur; 1513 1528 freqs.new = new_freq; 1514 - 1515 - read_lock_irqsave(&cpufreq_driver_lock, flags); 1516 - policy = per_cpu(cpufreq_cpu_data, cpu); 1517 - read_unlock_irqrestore(&cpufreq_driver_lock, flags); 1518 1529 1519 1530 cpufreq_freq_transition_begin(policy, &freqs); 1520 1531 cpufreq_freq_transition_end(policy, &freqs, 0); ··· 1558 1583 } 1559 1584 EXPORT_SYMBOL(cpufreq_quick_get_max); 1560 1585 1561 - static unsigned int __cpufreq_get(unsigned int cpu) 1586 + static unsigned int __cpufreq_get(struct cpufreq_policy *policy) 1562 1587 { 1563 - struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu); 1564 1588 unsigned int ret_freq = 0; 1565 1589 1566 1590 if (!cpufreq_driver->get) 1567 1591 return ret_freq; 1568 1592 1569 - ret_freq = cpufreq_driver->get(cpu); 1593 + ret_freq = cpufreq_driver->get(policy->cpu); 1570 1594 1571 1595 if (ret_freq && policy->cur && 1572 1596 !(cpufreq_driver->flags & CPUFREQ_CONST_LOOPS)) { 1573 1597 /* verify no discrepancy between actual and 1574 1598 saved value exists */ 1575 1599 if (unlikely(ret_freq != policy->cur)) { 1576 - cpufreq_out_of_sync(cpu, policy->cur, ret_freq); 1600 + cpufreq_out_of_sync(policy, ret_freq); 1577 1601 schedule_work(&policy->update); 1578 1602 } 1579 1603 } ··· 1593 1619 1594 1620 if (policy) { 1595 1621 down_read(&policy->rwsem); 1596 - ret_freq = __cpufreq_get(cpu); 1622 + ret_freq = __cpufreq_get(policy); 1597 1623 up_read(&policy->rwsem); 
1598 1624 1599 1625 cpufreq_cpu_put(policy); ··· 1656 1682 1657 1683 pr_debug("%s: Suspending Governors\n", __func__); 1658 1684 1659 - list_for_each_entry(policy, &cpufreq_policy_list, policy_list) { 1685 + for_each_policy(policy) { 1660 1686 if (__cpufreq_governor(policy, CPUFREQ_GOV_STOP)) 1661 1687 pr_err("%s: Failed to stop governor for policy: %p\n", 1662 1688 __func__, policy); ··· 1690 1716 1691 1717 pr_debug("%s: Resuming Governors\n", __func__); 1692 1718 1693 - list_for_each_entry(policy, &cpufreq_policy_list, policy_list) { 1719 + for_each_policy(policy) { 1694 1720 if (cpufreq_driver->resume && cpufreq_driver->resume(policy)) 1695 1721 pr_err("%s: Failed to resume driver: %p\n", __func__, 1696 1722 policy); ··· 1980 2006 } 1981 2007 EXPORT_SYMBOL_GPL(cpufreq_driver_target); 1982 2008 1983 - /* 1984 - * when "event" is CPUFREQ_GOV_LIMITS 1985 - */ 1986 - 1987 2009 static int __cpufreq_governor(struct cpufreq_policy *policy, 1988 2010 unsigned int event) 1989 2011 { ··· 2077 2107 2078 2108 governor->initialized = 0; 2079 2109 err = -EBUSY; 2080 - if (__find_governor(governor->name) == NULL) { 2110 + if (!find_governor(governor->name)) { 2081 2111 err = 0; 2082 2112 list_add(&governor->governor_list, &cpufreq_governor_list); 2083 2113 } ··· 2277 2307 policy->cur = new_policy.cur; 2278 2308 } else { 2279 2309 if (policy->cur != new_policy.cur && has_target()) 2280 - cpufreq_out_of_sync(cpu, policy->cur, 2281 - new_policy.cur); 2310 + cpufreq_out_of_sync(policy, new_policy.cur); 2282 2311 } 2283 2312 } 2284 2313 ··· 2333 2364 struct cpufreq_policy *policy; 2334 2365 int ret = -EINVAL; 2335 2366 2336 - list_for_each_entry(policy, &cpufreq_policy_list, policy_list) { 2367 + for_each_policy(policy) { 2337 2368 freq_table = cpufreq_frequency_get_table(policy->cpu); 2338 2369 if (freq_table) { 2339 2370 ret = cpufreq_frequency_table_cpuinfo(policy, ··· 2423 2454 2424 2455 pr_debug("trying to register driver %s\n", driver_data->name); 2425 2456 2426 - if 
(driver_data->setpolicy) 2427 - driver_data->flags |= CPUFREQ_CONST_LOOPS; 2428 - 2429 2457 write_lock_irqsave(&cpufreq_driver_lock, flags); 2430 2458 if (cpufreq_driver) { 2431 2459 write_unlock_irqrestore(&cpufreq_driver_lock, flags); ··· 2430 2464 } 2431 2465 cpufreq_driver = driver_data; 2432 2466 write_unlock_irqrestore(&cpufreq_driver_lock, flags); 2467 + 2468 + if (driver_data->setpolicy) 2469 + driver_data->flags |= CPUFREQ_CONST_LOOPS; 2433 2470 2434 2471 if (cpufreq_boost_supported()) { 2435 2472 /* ··· 2454 2485 if (ret) 2455 2486 goto err_boost_unreg; 2456 2487 2457 - if (!(cpufreq_driver->flags & CPUFREQ_STICKY)) { 2458 - int i; 2459 - ret = -ENODEV; 2460 - 2461 - /* check for at least one working CPU */ 2462 - for (i = 0; i < nr_cpu_ids; i++) 2463 - if (cpu_possible(i) && per_cpu(cpufreq_cpu_data, i)) { 2464 - ret = 0; 2465 - break; 2466 - } 2467 - 2488 + if (!(cpufreq_driver->flags & CPUFREQ_STICKY) && 2489 + list_empty(&cpufreq_policy_list)) { 2468 2490 /* if all ->init() calls failed, unregister */ 2469 - if (ret) { 2470 - pr_debug("no CPU initialized for driver %s\n", 2471 - driver_data->name); 2472 - goto err_if_unreg; 2473 - } 2491 + pr_debug("%s: No CPU initialized for driver %s\n", __func__, 2492 + driver_data->name); 2493 + goto err_if_unreg; 2474 2494 } 2475 2495 2476 2496 register_hotcpu_notifier(&cpufreq_cpu_notifier); ··· 2514 2556 } 2515 2557 EXPORT_SYMBOL_GPL(cpufreq_unregister_driver); 2516 2558 2559 + /* 2560 + * Stop cpufreq at shutdown to make sure it isn't holding any locks 2561 + * or mutexes when secondary CPUs are halted. 
2562 + */ 2563 + static struct syscore_ops cpufreq_syscore_ops = { 2564 + .shutdown = cpufreq_suspend, 2565 + }; 2566 + 2517 2567 static int __init cpufreq_core_init(void) 2518 2568 { 2519 2569 if (cpufreq_disabled()) ··· 2529 2563 2530 2564 cpufreq_global_kobject = kobject_create(); 2531 2565 BUG_ON(!cpufreq_global_kobject); 2566 + 2567 + register_syscore_ops(&cpufreq_syscore_ops); 2532 2568 2533 2569 return 0; 2534 2570 }
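A recurring cleanup in the cpufreq.c hunk is replacing open-coded `list_for_each_entry()` calls with the new `for_each_policy()` and `for_each_governor()` macros, which also lets the list heads stay private to cpufreq.c. A userspace sketch of the same trick, assuming a simplified singly linked list in place of the kernel's `list_head` (all `_sketch` names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct policy_sketch {
	unsigned int cpu;
	struct policy_sketch *next;	/* simplified list linkage */
};

/* The list head stays file-local; call sites only see the macro. */
static struct policy_sketch *policy_list_sketch;

#define for_each_policy_sketch(__p) \
	for ((__p) = policy_list_sketch; (__p); (__p) = (__p)->next)

void policy_add_sketch(struct policy_sketch *p)
{
	p->next = policy_list_sketch;
	policy_list_sketch = p;
}

unsigned int policy_count_sketch(void)
{
	struct policy_sketch *p;
	unsigned int n = 0;

	for_each_policy_sketch(p)
		n++;
	return n;
}
```

Named iteration macros make call sites read as intent ("for each policy") rather than mechanism, which is exactly what the driver-registration check above exploits by testing `list_empty(&cpufreq_policy_list)` instead of scanning per-CPU data.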
+97 -122
drivers/cpufreq/cpufreq_stats.c
··· 18 18 static spinlock_t cpufreq_stats_lock; 19 19 20 20 struct cpufreq_stats { 21 - unsigned int cpu; 22 21 unsigned int total_trans; 23 22 unsigned long long last_time; 24 23 unsigned int max_state; ··· 30 31 #endif 31 32 }; 32 33 33 - static DEFINE_PER_CPU(struct cpufreq_stats *, cpufreq_stats_table); 34 - 35 - struct cpufreq_stats_attribute { 36 - struct attribute attr; 37 - ssize_t(*show) (struct cpufreq_stats *, char *); 38 - }; 39 - 40 - static int cpufreq_stats_update(unsigned int cpu) 34 + static int cpufreq_stats_update(struct cpufreq_stats *stats) 41 35 { 42 - struct cpufreq_stats *stat; 43 - unsigned long long cur_time; 36 + unsigned long long cur_time = get_jiffies_64(); 44 37 45 - cur_time = get_jiffies_64(); 46 38 spin_lock(&cpufreq_stats_lock); 47 - stat = per_cpu(cpufreq_stats_table, cpu); 48 - if (stat->time_in_state) 49 - stat->time_in_state[stat->last_index] += 50 - cur_time - stat->last_time; 51 - stat->last_time = cur_time; 39 + stats->time_in_state[stats->last_index] += cur_time - stats->last_time; 40 + stats->last_time = cur_time; 52 41 spin_unlock(&cpufreq_stats_lock); 53 42 return 0; 54 43 } 55 44 56 45 static ssize_t show_total_trans(struct cpufreq_policy *policy, char *buf) 57 46 { 58 - struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table, policy->cpu); 59 - if (!stat) 60 - return 0; 61 - return sprintf(buf, "%d\n", 62 - per_cpu(cpufreq_stats_table, stat->cpu)->total_trans); 47 + return sprintf(buf, "%d\n", policy->stats->total_trans); 63 48 } 64 49 65 50 static ssize_t show_time_in_state(struct cpufreq_policy *policy, char *buf) 66 51 { 52 + struct cpufreq_stats *stats = policy->stats; 67 53 ssize_t len = 0; 68 54 int i; 69 - struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table, policy->cpu); 70 - if (!stat) 71 - return 0; 72 - cpufreq_stats_update(stat->cpu); 73 - for (i = 0; i < stat->state_num; i++) { 74 - len += sprintf(buf + len, "%u %llu\n", stat->freq_table[i], 55 + 56 + cpufreq_stats_update(stats); 57 + for (i = 0; i 
< stats->state_num; i++) { 58 + len += sprintf(buf + len, "%u %llu\n", stats->freq_table[i], 75 59 (unsigned long long) 76 - jiffies_64_to_clock_t(stat->time_in_state[i])); 60 + jiffies_64_to_clock_t(stats->time_in_state[i])); 77 61 } 78 62 return len; 79 63 } ··· 64 82 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS 65 83 static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf) 66 84 { 85 + struct cpufreq_stats *stats = policy->stats; 67 86 ssize_t len = 0; 68 87 int i, j; 69 88 70 - struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table, policy->cpu); 71 - if (!stat) 72 - return 0; 73 - cpufreq_stats_update(stat->cpu); 74 89 len += snprintf(buf + len, PAGE_SIZE - len, " From : To\n"); 75 90 len += snprintf(buf + len, PAGE_SIZE - len, " : "); 76 - for (i = 0; i < stat->state_num; i++) { 91 + for (i = 0; i < stats->state_num; i++) { 77 92 if (len >= PAGE_SIZE) 78 93 break; 79 94 len += snprintf(buf + len, PAGE_SIZE - len, "%9u ", 80 - stat->freq_table[i]); 95 + stats->freq_table[i]); 81 96 } 82 97 if (len >= PAGE_SIZE) 83 98 return PAGE_SIZE; 84 99 85 100 len += snprintf(buf + len, PAGE_SIZE - len, "\n"); 86 101 87 - for (i = 0; i < stat->state_num; i++) { 102 + for (i = 0; i < stats->state_num; i++) { 88 103 if (len >= PAGE_SIZE) 89 104 break; 90 105 91 106 len += snprintf(buf + len, PAGE_SIZE - len, "%9u: ", 92 - stat->freq_table[i]); 107 + stats->freq_table[i]); 93 108 94 - for (j = 0; j < stat->state_num; j++) { 109 + for (j = 0; j < stats->state_num; j++) { 95 110 if (len >= PAGE_SIZE) 96 111 break; 97 112 len += snprintf(buf + len, PAGE_SIZE - len, "%9u ", 98 - stat->trans_table[i*stat->max_state+j]); 113 + stats->trans_table[i*stats->max_state+j]); 99 114 } 100 115 if (len >= PAGE_SIZE) 101 116 break; ··· 121 142 .name = "stats" 122 143 }; 123 144 124 - static int freq_table_get_index(struct cpufreq_stats *stat, unsigned int freq) 145 + static int freq_table_get_index(struct cpufreq_stats *stats, unsigned int freq) 125 146 { 126 147 int index; 127 - 
-	for (index = 0; index < stat->max_state; index++)
-		if (stat->freq_table[index] == freq)
+	for (index = 0; index < stats->max_state; index++)
+		if (stats->freq_table[index] == freq)
 			return index;
 	return -1;
 }
 
 static void __cpufreq_stats_free_table(struct cpufreq_policy *policy)
 {
-	struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table, policy->cpu);
+	struct cpufreq_stats *stats = policy->stats;
 
-	if (!stat)
+	/* Already freed */
+	if (!stats)
 		return;
 
-	pr_debug("%s: Free stat table\n", __func__);
+	pr_debug("%s: Free stats table\n", __func__);
 
 	sysfs_remove_group(&policy->kobj, &stats_attr_group);
-	kfree(stat->time_in_state);
-	kfree(stat);
-	per_cpu(cpufreq_stats_table, policy->cpu) = NULL;
+	kfree(stats->time_in_state);
+	kfree(stats);
+	policy->stats = NULL;
 }
 
 static void cpufreq_stats_free_table(unsigned int cpu)
···
 	if (!policy)
 		return;
 
-	if (cpufreq_frequency_get_table(policy->cpu))
-		__cpufreq_stats_free_table(policy);
+	__cpufreq_stats_free_table(policy);
 
 	cpufreq_cpu_put(policy);
 }
 
 static int __cpufreq_stats_create_table(struct cpufreq_policy *policy)
 {
-	unsigned int i, count = 0, ret = 0;
-	struct cpufreq_stats *stat;
+	unsigned int i = 0, count = 0, ret = -ENOMEM;
+	struct cpufreq_stats *stats;
 	unsigned int alloc_size;
 	unsigned int cpu = policy->cpu;
 	struct cpufreq_frequency_table *pos, *table;
 
+	/* We need cpufreq table for creating stats table */
 	table = cpufreq_frequency_get_table(cpu);
 	if (unlikely(!table))
 		return 0;
 
-	if (per_cpu(cpufreq_stats_table, cpu))
-		return -EBUSY;
-	stat = kzalloc(sizeof(*stat), GFP_KERNEL);
-	if ((stat) == NULL)
+	/* stats already initialized */
+	if (policy->stats)
+		return -EEXIST;
+
+	stats = kzalloc(sizeof(*stats), GFP_KERNEL);
+	if (!stats)
 		return -ENOMEM;
 
-	ret = sysfs_create_group(&policy->kobj, &stats_attr_group);
-	if (ret)
-		goto error_out;
-
-	stat->cpu = cpu;
-	per_cpu(cpufreq_stats_table, cpu) = stat;
-
+	/* Find total allocation size */
 	cpufreq_for_each_valid_entry(pos, table)
 		count++;
 
···
 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS
 	alloc_size += count * count * sizeof(int);
 #endif
-	stat->max_state = count;
-	stat->time_in_state = kzalloc(alloc_size, GFP_KERNEL);
-	if (!stat->time_in_state) {
-		ret = -ENOMEM;
-		goto error_alloc;
-	}
-	stat->freq_table = (unsigned int *)(stat->time_in_state + count);
+
+	/* Allocate memory for time_in_state/freq_table/trans_table in one go */
+	stats->time_in_state = kzalloc(alloc_size, GFP_KERNEL);
+	if (!stats->time_in_state)
+		goto free_stat;
+
+	stats->freq_table = (unsigned int *)(stats->time_in_state + count);
 
 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS
-	stat->trans_table = stat->freq_table + count;
+	stats->trans_table = stats->freq_table + count;
 #endif
-	i = 0;
+
+	stats->max_state = count;
+
+	/* Find valid-unique entries */
 	cpufreq_for_each_valid_entry(pos, table)
-		if (freq_table_get_index(stat, pos->frequency) == -1)
-			stat->freq_table[i++] = pos->frequency;
-	stat->state_num = i;
-	spin_lock(&cpufreq_stats_lock);
-	stat->last_time = get_jiffies_64();
-	stat->last_index = freq_table_get_index(stat, policy->cur);
-	spin_unlock(&cpufreq_stats_lock);
-	return 0;
-error_alloc:
-	sysfs_remove_group(&policy->kobj, &stats_attr_group);
-error_out:
-	kfree(stat);
-	per_cpu(cpufreq_stats_table, cpu) = NULL;
+		if (freq_table_get_index(stats, pos->frequency) == -1)
+			stats->freq_table[i++] = pos->frequency;
+
+	stats->state_num = i;
+	stats->last_time = get_jiffies_64();
+	stats->last_index = freq_table_get_index(stats, policy->cur);
+
+	policy->stats = stats;
+	ret = sysfs_create_group(&policy->kobj, &stats_attr_group);
+	if (!ret)
+		return 0;
+
+	/* We failed, release resources */
+	policy->stats = NULL;
+	kfree(stats->time_in_state);
+free_stat:
+	kfree(stats);
+
 	return ret;
 }
 
···
 	cpufreq_cpu_put(policy);
 }
 
-static void cpufreq_stats_update_policy_cpu(struct cpufreq_policy *policy)
-{
-	struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table,
-			policy->last_cpu);
-
-	pr_debug("Updating stats_table for new_cpu %u from last_cpu %u\n",
-			policy->cpu, policy->last_cpu);
-	per_cpu(cpufreq_stats_table, policy->cpu) = per_cpu(cpufreq_stats_table,
-			policy->last_cpu);
-	per_cpu(cpufreq_stats_table, policy->last_cpu) = NULL;
-	stat->cpu = policy->cpu;
-}
-
 static int cpufreq_stat_notifier_policy(struct notifier_block *nb,
 		unsigned long val, void *data)
 {
 	int ret = 0;
 	struct cpufreq_policy *policy = data;
-
-	if (val == CPUFREQ_UPDATE_POLICY_CPU) {
-		cpufreq_stats_update_policy_cpu(policy);
-		return 0;
-	}
 
 	if (val == CPUFREQ_CREATE_POLICY)
 		ret = __cpufreq_stats_create_table(policy);
···
 		unsigned long val, void *data)
 {
 	struct cpufreq_freqs *freq = data;
-	struct cpufreq_stats *stat;
+	struct cpufreq_policy *policy = cpufreq_cpu_get(freq->cpu);
+	struct cpufreq_stats *stats;
 	int old_index, new_index;
 
+	if (!policy) {
+		pr_err("%s: No policy found\n", __func__);
+		return 0;
+	}
+
 	if (val != CPUFREQ_POSTCHANGE)
-		return 0;
+		goto put_policy;
 
-	stat = per_cpu(cpufreq_stats_table, freq->cpu);
-	if (!stat)
-		return 0;
+	if (!policy->stats) {
+		pr_debug("%s: No stats found\n", __func__);
+		goto put_policy;
+	}
 
-	old_index = stat->last_index;
-	new_index = freq_table_get_index(stat, freq->new);
+	stats = policy->stats;
 
-	/* We can't do stat->time_in_state[-1]= .. */
+	old_index = stats->last_index;
+	new_index = freq_table_get_index(stats, freq->new);
+
+	/* We can't do stats->time_in_state[-1]= .. */
 	if (old_index == -1 || new_index == -1)
-		return 0;
-
-	cpufreq_stats_update(freq->cpu);
+		goto put_policy;
 
 	if (old_index == new_index)
-		return 0;
+		goto put_policy;
 
-	spin_lock(&cpufreq_stats_lock);
-	stat->last_index = new_index;
+	cpufreq_stats_update(stats);
+
+	stats->last_index = new_index;
 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS
-	stat->trans_table[old_index * stat->max_state + new_index]++;
+	stats->trans_table[old_index * stats->max_state + new_index]++;
 #endif
-	stat->total_trans++;
-	spin_unlock(&cpufreq_stats_lock);
+	stats->total_trans++;
+
+put_policy:
+	cpufreq_cpu_put(policy);
 	return 0;
 }
 
···
 }
 
 MODULE_AUTHOR("Zou Nan hai <nanhai.zou@intel.com>");
-MODULE_DESCRIPTION("'cpufreq_stats' - A driver to export cpufreq stats "
-		"through sysfs filesystem");
+MODULE_DESCRIPTION("Export cpufreq stats via sysfs");
 MODULE_LICENSE("GPL");
 
 module_init(cpufreq_stats_init);
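The stats rework above keeps a single kzalloc() holding time_in_state, freq_table and the flattened count × count trans_table, carving the later tables out of the first pointer so one kfree() releases everything. A minimal userspace sketch of that layout (the `struct stats` fields mirror the kernel's, but this harness and its names are illustrative, not the driver itself):

```c
#include <stdlib.h>

/* One block holds time_in_state (u64 per state), then freq_table
 * (unsigned int per state), then the count x count trans_table. */
struct stats {
	unsigned int max_state;
	unsigned long long *time_in_state;
	unsigned int *freq_table;
	int *trans_table;
};

static int stats_alloc(struct stats *s, unsigned int count)
{
	size_t sz = count * sizeof(*s->time_in_state)
		  + count * sizeof(*s->freq_table)
		  + (size_t)count * count * sizeof(*s->trans_table);

	s->time_in_state = calloc(1, sz);	/* zeroed, like kzalloc() */
	if (!s->time_in_state)
		return -1;

	/* Carve the trailing tables out of the same allocation. */
	s->freq_table = (unsigned int *)(s->time_in_state + count);
	s->trans_table = (int *)(s->freq_table + count);
	s->max_state = count;
	return 0;
}

static void stats_free(struct stats *s)
{
	free(s->time_in_state);	/* one free() releases all three tables */
}
```

A transition from state `old` to state `new` then bumps `trans_table[old * max_state + new]`, exactly as in the notifier above.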
drivers/cpufreq/intel_pstate.c (+51, -4)
···
 	int32_t min_perf;
 	int max_policy_pct;
 	int max_sysfs_pct;
+	int min_policy_pct;
+	int min_sysfs_pct;
 };
 
 static struct perf_limits limits = {
···
 	.min_perf = 0,
 	.max_policy_pct = 100,
 	.max_sysfs_pct = 100,
+	.min_policy_pct = 0,
+	.min_sysfs_pct = 0,
 };
 
 static inline void pid_reset(struct _pid *pid, int setpoint, int busy,
···
 		return sprintf(buf, "%u\n", limits.object);	\
 }
 
+static ssize_t show_turbo_pct(struct kobject *kobj,
+				struct attribute *attr, char *buf)
+{
+	struct cpudata *cpu;
+	int total, no_turbo, turbo_pct;
+	uint32_t turbo_fp;
+
+	cpu = all_cpu_data[0];
+
+	total = cpu->pstate.turbo_pstate - cpu->pstate.min_pstate + 1;
+	no_turbo = cpu->pstate.max_pstate - cpu->pstate.min_pstate + 1;
+	turbo_fp = div_fp(int_tofp(no_turbo), int_tofp(total));
+	turbo_pct = 100 - fp_toint(mul_fp(turbo_fp, int_tofp(100)));
+	return sprintf(buf, "%u\n", turbo_pct);
+}
+
+static ssize_t show_num_pstates(struct kobject *kobj,
+				struct attribute *attr, char *buf)
+{
+	struct cpudata *cpu;
+	int total;
+
+	cpu = all_cpu_data[0];
+	total = cpu->pstate.turbo_pstate - cpu->pstate.min_pstate + 1;
+	return sprintf(buf, "%u\n", total);
+}
+
 static ssize_t show_no_turbo(struct kobject *kobj,
 			struct attribute *attr, char *buf)
 {
···
 	ret = sscanf(buf, "%u", &input);
 	if (ret != 1)
 		return -EINVAL;
-	limits.min_perf_pct = clamp_t(int, input, 0 , 100);
+
+	limits.min_sysfs_pct = clamp_t(int, input, 0 , 100);
+	limits.min_perf_pct = max(limits.min_policy_pct, limits.min_sysfs_pct);
 	limits.min_perf = div_fp(int_tofp(limits.min_perf_pct), int_tofp(100));
 
 	if (hwp_active)
···
 define_one_global_rw(no_turbo);
 define_one_global_rw(max_perf_pct);
 define_one_global_rw(min_perf_pct);
+define_one_global_ro(turbo_pct);
+define_one_global_ro(num_pstates);
 
 static struct attribute *intel_pstate_attributes[] = {
 	&no_turbo.attr,
 	&max_perf_pct.attr,
 	&min_perf_pct.attr,
+	&turbo_pct.attr,
+	&num_pstates.attr,
 	NULL
 };
···
 	ICPU(0x46, core_params),
 	ICPU(0x47, core_params),
 	ICPU(0x4c, byt_params),
+	ICPU(0x4e, core_params),
 	ICPU(0x4f, core_params),
 	ICPU(0x56, core_params),
 	{}
···
 	if (!policy->cpuinfo.max_freq)
 		return -ENODEV;
 
-	if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) {
+	if (policy->policy == CPUFREQ_POLICY_PERFORMANCE &&
+	    policy->max >= policy->cpuinfo.max_freq) {
+		limits.min_policy_pct = 100;
 		limits.min_perf_pct = 100;
 		limits.min_perf = int_tofp(1);
 		limits.max_policy_pct = 100;
···
 		return 0;
 	}
 
-	limits.min_perf_pct = (policy->min * 100) / policy->cpuinfo.max_freq;
-	limits.min_perf_pct = clamp_t(int, limits.min_perf_pct, 0 , 100);
+	limits.min_policy_pct = (policy->min * 100) / policy->cpuinfo.max_freq;
+	limits.min_policy_pct = clamp_t(int, limits.min_policy_pct, 0 , 100);
+	limits.min_perf_pct = max(limits.min_policy_pct, limits.min_sysfs_pct);
 	limits.min_perf = div_fp(int_tofp(limits.min_perf_pct), int_tofp(100));
 
 	limits.max_policy_pct = (policy->max * 100) / policy->cpuinfo.max_freq;
···
 
 static int __initdata no_load;
 static int __initdata no_hwp;
+static int __initdata hwp_only;
 static unsigned int force_load;
 
 static int intel_pstate_msrs_not_valid(void)
···
 	if (cpu_has(c,X86_FEATURE_HWP) && !no_hwp)
 		intel_pstate_hwp_enable();
 
+	if (!hwp_active && hwp_only)
+		goto out;
+
 	rc = cpufreq_register_driver(&intel_pstate_driver);
 	if (rc)
 		goto out;
···
 		no_hwp = 1;
 	if (!strcmp(str, "force"))
 		force_load = 1;
+	if (!strcmp(str, "hwp_only"))
+		hwp_only = 1;
 	return 0;
 }
 early_param("intel_pstate", intel_pstate_setup);
drivers/cpufreq/ls1x-cpufreq.c (-1)
···
 static struct platform_driver ls1x_cpufreq_platdrv = {
 	.driver = {
 		.name	= "ls1x-cpufreq",
-		.owner	= THIS_MODULE,
 	},
 	.probe	= ls1x_cpufreq_probe,
 	.remove	= ls1x_cpufreq_remove,
drivers/cpufreq/sfi-cpufreq.c (new file, +136)
/*
 * SFI Performance States Driver
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * Author: Vishwesh M Rudramuni <vishwesh.m.rudramuni@intel.com>
 * Author: Srinidhi Kasagar <srinidhi.kasagar@intel.com>
 */

#include <linux/cpufreq.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sfi.h>
#include <linux/slab.h>
#include <linux/smp.h>

#include <asm/msr.h>

struct cpufreq_frequency_table *freq_table;
static struct sfi_freq_table_entry *sfi_cpufreq_array;
static int num_freq_table_entries;

static int sfi_parse_freq(struct sfi_table_header *table)
{
	struct sfi_table_simple *sb;
	struct sfi_freq_table_entry *pentry;
	int totallen;

	sb = (struct sfi_table_simple *)table;
	num_freq_table_entries = SFI_GET_NUM_ENTRIES(sb,
			struct sfi_freq_table_entry);
	if (num_freq_table_entries <= 1) {
		pr_err("No p-states discovered\n");
		return -ENODEV;
	}

	pentry = (struct sfi_freq_table_entry *)sb->pentry;
	totallen = num_freq_table_entries * sizeof(*pentry);

	sfi_cpufreq_array = kzalloc(totallen, GFP_KERNEL);
	if (!sfi_cpufreq_array)
		return -ENOMEM;

	memcpy(sfi_cpufreq_array, pentry, totallen);

	return 0;
}

static int sfi_cpufreq_target(struct cpufreq_policy *policy, unsigned int index)
{
	unsigned int next_perf_state = 0; /* Index into perf table */
	u32 lo, hi;

	next_perf_state = policy->freq_table[index].driver_data;

	rdmsr_on_cpu(policy->cpu, MSR_IA32_PERF_CTL, &lo, &hi);
	lo = (lo & ~INTEL_PERF_CTL_MASK) |
		((u32) sfi_cpufreq_array[next_perf_state].ctrl_val &
		INTEL_PERF_CTL_MASK);
	wrmsr_on_cpu(policy->cpu, MSR_IA32_PERF_CTL, lo, hi);

	return 0;
}

static int sfi_cpufreq_cpu_init(struct cpufreq_policy *policy)
{
	policy->shared_type = CPUFREQ_SHARED_TYPE_HW;
	policy->cpuinfo.transition_latency = 100000;	/* 100us */

	return cpufreq_table_validate_and_show(policy, freq_table);
}

static struct cpufreq_driver sfi_cpufreq_driver = {
	.flags		= CPUFREQ_CONST_LOOPS,
	.verify		= cpufreq_generic_frequency_table_verify,
	.target_index	= sfi_cpufreq_target,
	.init		= sfi_cpufreq_cpu_init,
	.name		= "sfi-cpufreq",
	.attr		= cpufreq_generic_attr,
};

static int __init sfi_cpufreq_init(void)
{
	int ret, i;

	/* parse the freq table from SFI */
	ret = sfi_table_parse(SFI_SIG_FREQ, NULL, NULL, sfi_parse_freq);
	if (ret)
		return ret;

	freq_table = kzalloc(sizeof(*freq_table) *
			(num_freq_table_entries + 1), GFP_KERNEL);
	if (!freq_table) {
		ret = -ENOMEM;
		goto err_free_array;
	}

	for (i = 0; i < num_freq_table_entries; i++) {
		freq_table[i].driver_data = i;
		freq_table[i].frequency = sfi_cpufreq_array[i].freq_mhz * 1000;
	}
	freq_table[i].frequency = CPUFREQ_TABLE_END;

	ret = cpufreq_register_driver(&sfi_cpufreq_driver);
	if (ret)
		goto err_free_tbl;

	return ret;

err_free_tbl:
	kfree(freq_table);
err_free_array:
	kfree(sfi_cpufreq_array);
	return ret;
}
late_initcall(sfi_cpufreq_init);

static void __exit sfi_cpufreq_exit(void)
{
	cpufreq_unregister_driver(&sfi_cpufreq_driver);
	kfree(freq_table);
	kfree(sfi_cpufreq_array);
}
module_exit(sfi_cpufreq_exit);

MODULE_AUTHOR("Vishwesh M Rudramuni <vishwesh.m.rudramuni@intel.com>");
MODULE_DESCRIPTION("SFI Performance-States Driver");
MODULE_LICENSE("GPL");
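sfi_cpufreq_init() above converts the parsed SFI entries into a cpufreq frequency table: one slot per entry (driver_data pointing back into the SFI array, frequency converted from MHz to kHz) plus a sentinel terminator. A userspace sketch of that construction, with illustrative names and a stand-in sentinel value rather than the kernel's CPUFREQ_TABLE_END:

```c
#include <stdlib.h>

#define TABLE_END (~0u)	/* stands in for CPUFREQ_TABLE_END */

struct freq_entry {
	unsigned int driver_data;	/* index back into the source array */
	unsigned int frequency;		/* kHz */
};

/* Build an n-entry table plus one sentinel, zeroed like kzalloc(). */
static struct freq_entry *build_freq_table(const unsigned int *freq_mhz, int n)
{
	struct freq_entry *t = calloc(n + 1, sizeof(*t));
	int i;

	if (!t)
		return NULL;

	for (i = 0; i < n; i++) {
		t[i].driver_data = i;
		t[i].frequency = freq_mhz[i] * 1000;	/* MHz -> kHz */
	}
	t[i].frequency = TABLE_END;	/* terminator, never a real rate */
	return t;
}
```

Consumers walk the table until they hit the sentinel, which is why the allocation is sized for one extra entry.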
drivers/cpuidle/cpuidle-big_little.c (+4)
···
 	 */
 	if (!of_match_node(compatible_machine_match, root))
 		return -ENODEV;
+
+	if (!mcpm_is_available())
+		return -EUNATCH;
+
 	/*
 	 * For now the differentiation between little and big cores
 	 * is based on the part number. A7 cores are considered little
drivers/devfreq/Kconfig (+12)
···
 	  It reads PPMU counters of memory controllers and adjusts the
 	  operating frequencies and voltages with OPP support.
 
+config ARM_TEGRA_DEVFREQ
+	tristate "Tegra DEVFREQ Driver"
+	depends on ARCH_TEGRA_124_SOC
+	select DEVFREQ_GOV_SIMPLE_ONDEMAND
+	select PM_OPP
+	help
+	  This adds the DEVFREQ driver for the Tegra family of SoCs.
+	  It reads ACTMON counters of memory controllers and adjusts the
+	  operating frequencies and voltages with OPP support.
+
+source "drivers/devfreq/event/Kconfig"
+
 endif # PM_DEVFREQ
drivers/devfreq/Makefile (+5)
 obj-$(CONFIG_PM_DEVFREQ)	+= devfreq.o
+obj-$(CONFIG_PM_DEVFREQ_EVENT)	+= devfreq-event.o
 obj-$(CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND)	+= governor_simpleondemand.o
 obj-$(CONFIG_DEVFREQ_GOV_PERFORMANCE)	+= governor_performance.o
 obj-$(CONFIG_DEVFREQ_GOV_POWERSAVE)	+= governor_powersave.o
···
 # DEVFREQ Drivers
 obj-$(CONFIG_ARM_EXYNOS4_BUS_DEVFREQ)	+= exynos/
 obj-$(CONFIG_ARM_EXYNOS5_BUS_DEVFREQ)	+= exynos/
+obj-$(CONFIG_ARM_TEGRA_DEVFREQ)	+= tegra-devfreq.o
+
+# DEVFREQ Event Drivers
+obj-$(CONFIG_PM_DEVFREQ_EVENT)	+= event/
drivers/devfreq/devfreq-event.c (new file, +494)
/*
 * devfreq-event: a framework to provide raw data and events of devfreq devices
 *
 * Copyright (C) 2015 Samsung Electronics
 * Author: Chanwoo Choi <cw00.choi@samsung.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This driver is based on drivers/devfreq/devfreq.c.
 */

#include <linux/devfreq-event.h>
#include <linux/kernel.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/list.h>
#include <linux/of.h>

static struct class *devfreq_event_class;

/* The list of all devfreq-event devices */
static LIST_HEAD(devfreq_event_list);
static DEFINE_MUTEX(devfreq_event_list_lock);

#define to_devfreq_event(DEV) container_of(DEV, struct devfreq_event_dev, dev)

/**
 * devfreq_event_enable_edev() - Enable the devfreq-event dev and increase
 *				 the enable_count of devfreq-event dev.
 * @edev	: the devfreq-event device
 *
 * Note that this function increases the enable_count and enables the
 * devfreq-event device. The devfreq-event device should be enabled before
 * a devfreq device uses it.
 */
int devfreq_event_enable_edev(struct devfreq_event_dev *edev)
{
	int ret = 0;

	if (!edev || !edev->desc)
		return -EINVAL;

	mutex_lock(&edev->lock);
	if (edev->desc->ops && edev->desc->ops->enable
			&& edev->enable_count == 0) {
		ret = edev->desc->ops->enable(edev);
		if (ret < 0)
			goto err;
	}
	edev->enable_count++;
err:
	mutex_unlock(&edev->lock);

	return ret;
}
EXPORT_SYMBOL_GPL(devfreq_event_enable_edev);

/**
 * devfreq_event_disable_edev() - Disable the devfreq-event dev and decrease
 *				  the enable_count of the devfreq-event dev.
 * @edev	: the devfreq-event device
 *
 * Note that this function decreases the enable_count and disables the
 * devfreq-event device. After the devfreq-event device is disabled,
 * devfreq devices can't use it for get/set/reset operations.
 */
int devfreq_event_disable_edev(struct devfreq_event_dev *edev)
{
	int ret = 0;

	if (!edev || !edev->desc)
		return -EINVAL;

	mutex_lock(&edev->lock);
	if (edev->enable_count <= 0) {
		dev_warn(&edev->dev, "unbalanced enable_count\n");
		ret = -EIO;
		goto err;
	}

	if (edev->desc->ops && edev->desc->ops->disable
			&& edev->enable_count == 1) {
		ret = edev->desc->ops->disable(edev);
		if (ret < 0)
			goto err;
	}
	edev->enable_count--;
err:
	mutex_unlock(&edev->lock);

	return ret;
}
EXPORT_SYMBOL_GPL(devfreq_event_disable_edev);

/**
 * devfreq_event_is_enabled() - Check whether the devfreq-event dev is
 *				enabled or not.
 * @edev	: the devfreq-event device
 *
 * Note that this function checks whether the devfreq-event dev is enabled:
 * it returns true if the device is enabled and false if it is disabled.
 */
bool devfreq_event_is_enabled(struct devfreq_event_dev *edev)
{
	bool enabled = false;

	if (!edev || !edev->desc)
		return enabled;

	mutex_lock(&edev->lock);

	if (edev->enable_count > 0)
		enabled = true;

	mutex_unlock(&edev->lock);

	return enabled;
}
EXPORT_SYMBOL_GPL(devfreq_event_is_enabled);

/**
 * devfreq_event_set_event() - Set event to devfreq-event dev to start.
 * @edev	: the devfreq-event device
 *
 * Note that this function sets the event on the devfreq-event device so
 * that it starts collecting event data, which can be of various event types.
 */
int devfreq_event_set_event(struct devfreq_event_dev *edev)
{
	int ret;

	if (!edev || !edev->desc)
		return -EINVAL;

	if (!edev->desc->ops || !edev->desc->ops->set_event)
		return -EINVAL;

	if (!devfreq_event_is_enabled(edev))
		return -EPERM;

	mutex_lock(&edev->lock);
	ret = edev->desc->ops->set_event(edev);
	mutex_unlock(&edev->lock);

	return ret;
}
EXPORT_SYMBOL_GPL(devfreq_event_set_event);

/**
 * devfreq_event_get_event() - Get {load|total}_count from devfreq-event dev.
 * @edev	: the devfreq-event device
 * @edata	: the calculated data of devfreq-event device
 *
 * Note that this function gets the calculated event data from the
 * devfreq-event dev after stopping its whole counting sequence.
 */
int devfreq_event_get_event(struct devfreq_event_dev *edev,
			struct devfreq_event_data *edata)
{
	int ret;

	if (!edev || !edev->desc)
		return -EINVAL;

	if (!edev->desc->ops || !edev->desc->ops->get_event)
		return -EINVAL;

	if (!devfreq_event_is_enabled(edev))
		return -EINVAL;

	edata->total_count = edata->load_count = 0;

	mutex_lock(&edev->lock);
	ret = edev->desc->ops->get_event(edev, edata);
	if (ret < 0)
		edata->total_count = edata->load_count = 0;
	mutex_unlock(&edev->lock);

	return ret;
}
EXPORT_SYMBOL_GPL(devfreq_event_get_event);

/**
 * devfreq_event_reset_event() - Reset all operations of devfreq-event dev.
 * @edev	: the devfreq-event device
 *
 * Note that this function stops all operations of the devfreq-event dev and
 * resets the current event data to put the device back into its initial state.
 */
int devfreq_event_reset_event(struct devfreq_event_dev *edev)
{
	int ret = 0;

	if (!edev || !edev->desc)
		return -EINVAL;

	if (!devfreq_event_is_enabled(edev))
		return -EPERM;

	mutex_lock(&edev->lock);
	if (edev->desc->ops && edev->desc->ops->reset)
		ret = edev->desc->ops->reset(edev);
	mutex_unlock(&edev->lock);

	return ret;
}
EXPORT_SYMBOL_GPL(devfreq_event_reset_event);

/**
 * devfreq_event_get_edev_by_phandle() - Get the devfreq-event dev from
 *					 devicetree.
 * @dev		: the pointer to the given device
 * @index	: the index into list of devfreq-event device
 *
 * Note that this function returns a pointer to the devfreq-event device.
 */
struct devfreq_event_dev *devfreq_event_get_edev_by_phandle(struct device *dev,
						int index)
{
	struct device_node *node;
	struct devfreq_event_dev *edev;

	if (!dev->of_node) {
		dev_err(dev, "device does not have a device node entry\n");
		return ERR_PTR(-EINVAL);
	}

	node = of_parse_phandle(dev->of_node, "devfreq-events", index);
	if (!node) {
		dev_err(dev, "failed to get phandle in %s node\n",
			dev->of_node->full_name);
		return ERR_PTR(-ENODEV);
	}

	mutex_lock(&devfreq_event_list_lock);
	list_for_each_entry(edev, &devfreq_event_list, node) {
		if (!strcmp(edev->desc->name, node->name))
			goto out;
	}
	edev = NULL;
out:
	mutex_unlock(&devfreq_event_list_lock);

	if (!edev) {
		dev_err(dev, "unable to get devfreq-event device : %s\n",
			node->name);
		of_node_put(node);
		return ERR_PTR(-ENODEV);
	}

	of_node_put(node);

	return edev;
}
EXPORT_SYMBOL_GPL(devfreq_event_get_edev_by_phandle);

/**
 * devfreq_event_get_edev_count() - Get the count of devfreq-event dev
 * @dev		: the pointer to the given device
 *
 * Note that this function returns the count of devfreq-event devices.
 */
int devfreq_event_get_edev_count(struct device *dev)
{
	int count;

	if (!dev->of_node) {
		dev_err(dev, "device does not have a device node entry\n");
		return -EINVAL;
	}

	count = of_property_count_elems_of_size(dev->of_node, "devfreq-events",
						sizeof(u32));
	if (count < 0) {
		dev_err(dev,
			"failed to get the count of devfreq-event in %s node\n",
			dev->of_node->full_name);
		return count;
	}

	return count;
}
EXPORT_SYMBOL_GPL(devfreq_event_get_edev_count);

static void devfreq_event_release_edev(struct device *dev)
{
	struct devfreq_event_dev *edev = to_devfreq_event(dev);

	kfree(edev);
}

/**
 * devfreq_event_add_edev() - Add new devfreq-event device.
 * @dev		: the device owning the devfreq-event device being created
 * @desc	: the devfreq-event device's descriptor, which includes the
 *		  essential data for the devfreq-event device
 *
 * Note that this function adds a new devfreq-event device to the
 * devfreq-event class list and registers its device.
 */
struct devfreq_event_dev *devfreq_event_add_edev(struct device *dev,
						struct devfreq_event_desc *desc)
{
	struct devfreq_event_dev *edev;
	static atomic_t event_no = ATOMIC_INIT(0);
	int ret;

	if (!dev || !desc)
		return ERR_PTR(-EINVAL);

	if (!desc->name || !desc->ops)
		return ERR_PTR(-EINVAL);

	if (!desc->ops->set_event || !desc->ops->get_event)
		return ERR_PTR(-EINVAL);

	edev = kzalloc(sizeof(struct devfreq_event_dev), GFP_KERNEL);
	if (!edev)
		return ERR_PTR(-ENOMEM);

	mutex_init(&edev->lock);
	edev->desc = desc;
	edev->enable_count = 0;
	edev->dev.parent = dev;
	edev->dev.class = devfreq_event_class;
	edev->dev.release = devfreq_event_release_edev;

	dev_set_name(&edev->dev, "event.%d", atomic_inc_return(&event_no) - 1);
	ret = device_register(&edev->dev);
	if (ret < 0) {
		put_device(&edev->dev);
		return ERR_PTR(ret);
	}
	dev_set_drvdata(&edev->dev, edev);

	INIT_LIST_HEAD(&edev->node);

	mutex_lock(&devfreq_event_list_lock);
	list_add(&edev->node, &devfreq_event_list);
	mutex_unlock(&devfreq_event_list_lock);

	return edev;
}
EXPORT_SYMBOL_GPL(devfreq_event_add_edev);

/**
 * devfreq_event_remove_edev() - Remove the devfreq-event device registered.
 * @edev	: the devfreq-event device
 *
 * Note that this function removes the registered devfreq-event device.
 */
int devfreq_event_remove_edev(struct devfreq_event_dev *edev)
{
	if (!edev)
		return -EINVAL;

	WARN_ON(edev->enable_count);

	mutex_lock(&devfreq_event_list_lock);
	list_del(&edev->node);
	mutex_unlock(&devfreq_event_list_lock);

	device_unregister(&edev->dev);

	return 0;
}
EXPORT_SYMBOL_GPL(devfreq_event_remove_edev);

static int devm_devfreq_event_match(struct device *dev, void *res, void *data)
{
	struct devfreq_event_dev **r = res;

	if (WARN_ON(!r || !*r))
		return 0;

	return *r == data;
}

static void devm_devfreq_event_release(struct device *dev, void *res)
{
	devfreq_event_remove_edev(*(struct devfreq_event_dev **)res);
}

/**
 * devm_devfreq_event_add_edev() - Resource-managed devfreq_event_add_edev()
 * @dev		: the device owning the devfreq-event device being created
 * @desc	: the devfreq-event device's descriptor, which includes the
 *		  essential data for the devfreq-event device
 *
 * Note that this function automatically manages the memory of the
 * devfreq-event device using device resource management, simplifying the
 * free operation for that memory.
 */
struct devfreq_event_dev *devm_devfreq_event_add_edev(struct device *dev,
						struct devfreq_event_desc *desc)
{
	struct devfreq_event_dev **ptr, *edev;

	ptr = devres_alloc(devm_devfreq_event_release, sizeof(*ptr), GFP_KERNEL);
	if (!ptr)
		return ERR_PTR(-ENOMEM);

	edev = devfreq_event_add_edev(dev, desc);
	if (IS_ERR(edev)) {
		devres_free(ptr);
		return ERR_PTR(-ENOMEM);
	}

	*ptr = edev;
	devres_add(dev, ptr);

	return edev;
}
EXPORT_SYMBOL_GPL(devm_devfreq_event_add_edev);

/**
 * devm_devfreq_event_remove_edev()- Resource-managed devfreq_event_remove_edev()
 * @dev		: the device owning the devfreq-event device being created
 * @edev	: the devfreq-event device
 *
 * Note that this function automatically manages the memory of the
 * devfreq-event device using device resource management.
 */
void devm_devfreq_event_remove_edev(struct device *dev,
				struct devfreq_event_dev *edev)
{
	WARN_ON(devres_release(dev, devm_devfreq_event_release,
			       devm_devfreq_event_match, edev));
}
EXPORT_SYMBOL_GPL(devm_devfreq_event_remove_edev);

/*
 * Device attributes for devfreq-event class.
 */
static ssize_t name_show(struct device *dev, struct device_attribute *attr,
			 char *buf)
{
	struct devfreq_event_dev *edev = to_devfreq_event(dev);

	if (!edev || !edev->desc)
		return -EINVAL;

	return sprintf(buf, "%s\n", edev->desc->name);
}
static DEVICE_ATTR_RO(name);

static ssize_t enable_count_show(struct device *dev,
				 struct device_attribute *attr, char *buf)
{
	struct devfreq_event_dev *edev = to_devfreq_event(dev);

	if (!edev || !edev->desc)
		return -EINVAL;

	return sprintf(buf, "%d\n", edev->enable_count);
}
static DEVICE_ATTR_RO(enable_count);

static struct attribute *devfreq_event_attrs[] = {
	&dev_attr_name.attr,
	&dev_attr_enable_count.attr,
	NULL,
};
ATTRIBUTE_GROUPS(devfreq_event);

static int __init devfreq_event_init(void)
{
	devfreq_event_class = class_create(THIS_MODULE, "devfreq-event");
	if (IS_ERR(devfreq_event_class)) {
		pr_err("%s: couldn't create class\n", __FILE__);
		return PTR_ERR(devfreq_event_class);
	}

	devfreq_event_class->dev_groups = devfreq_event_groups;

	return 0;
}
subsys_initcall(devfreq_event_init);

static void __exit devfreq_event_exit(void)
{
	class_destroy(devfreq_event_class);
}
module_exit(devfreq_event_exit);

MODULE_AUTHOR("Chanwoo Choi <cw00.choi@samsung.com>");
MODULE_DESCRIPTION("DEVFREQ-Event class support");
MODULE_LICENSE("GPL");
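devfreq_event_enable_edev()/devfreq_event_disable_edev() above follow the usual refcounted-enable convention: the hardware enable and disable callbacks fire only on the 0 → 1 and 1 → 0 transitions of enable_count. A minimal userspace model of that convention (locking omitted, names illustrative):

```c
struct edev {
	int enable_count;
	int hw_enabled;		/* what ops->enable/disable would toggle */
};

static int edev_enable(struct edev *e)
{
	if (e->enable_count == 0)
		e->hw_enabled = 1;	/* stands in for ops->enable(edev) */
	e->enable_count++;
	return 0;
}

static int edev_disable(struct edev *e)
{
	if (e->enable_count <= 0)
		return -1;		/* unbalanced enable_count */
	if (e->enable_count == 1)
		e->hw_enabled = 0;	/* stands in for ops->disable(edev) */
	e->enable_count--;
	return 0;
}
```

With two callers holding the device enabled, the first disable leaves the hardware running; only the last balanced disable actually turns it off, which is why the kernel code warns on an unbalanced enable_count.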
drivers/devfreq/event/Kconfig (new file, +25)
menuconfig PM_DEVFREQ_EVENT
	bool "DEVFREQ-Event device Support"
	help
	  A devfreq-event device provides raw data and events that
	  indicate the current state of the hardware it monitors. The
	  provided data is used to monitor the state of the device and
	  determine a suitable amount of resource, reducing waste.

	  A devfreq-event device can support various types of events
	  (e.g. raw data, utilization, latency, bandwidth). The events
	  may be used by devfreq governors and other subsystems.

if PM_DEVFREQ_EVENT

config DEVFREQ_EVENT_EXYNOS_PPMU
	bool "EXYNOS PPMU (Platform Performance Monitoring Unit) DEVFREQ event Driver"
	depends on ARCH_EXYNOS
	select PM_OPP
	help
	  This adds the devfreq-event driver for Exynos SoCs. It provides
	  PPMU (Platform Performance Monitoring Unit) counters to estimate
	  the utilization of each module.

endif # PM_DEVFREQ_EVENT
drivers/devfreq/event/Makefile (new file, +2)
# Exynos DEVFREQ Event Drivers
obj-$(CONFIG_DEVFREQ_EVENT_EXYNOS_PPMU) += exynos-ppmu.o
drivers/devfreq/event/exynos-ppmu.c (new file, +374; listing truncated)
/*
 * exynos_ppmu.c - EXYNOS PPMU (Platform Performance Monitoring Unit) support
 *
 * Copyright (c) 2014 Samsung Electronics Co., Ltd.
 * Author : Chanwoo Choi <cw00.choi@samsung.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This driver is based on drivers/devfreq/exynos/exynos_ppmu.c
 */

#include <linux/clk.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/suspend.h>
#include <linux/devfreq-event.h>

#include "exynos-ppmu.h"

struct exynos_ppmu_data {
	void __iomem *base;
	struct clk *clk;
};

struct exynos_ppmu {
	struct devfreq_event_dev **edev;
	struct devfreq_event_desc *desc;
	unsigned int num_events;

	struct device *dev;
	struct mutex lock;

	struct exynos_ppmu_data ppmu;
};

#define PPMU_EVENT(name)			\
	{ "ppmu-event0-"#name, PPMU_PMNCNT0 },	\
	{ "ppmu-event1-"#name, PPMU_PMNCNT1 },	\
	{ "ppmu-event2-"#name, PPMU_PMNCNT2 },	\
	{ "ppmu-event3-"#name, PPMU_PMNCNT3 }

struct __exynos_ppmu_events {
	char *name;
	int id;
} ppmu_events[] = {
	/* For Exynos3250, Exynos4 and Exynos5260 */
	PPMU_EVENT(g3d),
	PPMU_EVENT(fsys),

	/* For Exynos4 SoCs and Exynos3250 */
	PPMU_EVENT(dmc0),
	PPMU_EVENT(dmc1),
	PPMU_EVENT(cpu),
	PPMU_EVENT(rightbus),
	PPMU_EVENT(leftbus),
	PPMU_EVENT(lcd0),
	PPMU_EVENT(camif),

	/* Only for Exynos3250 and Exynos5260 */
	PPMU_EVENT(mfc),

	/* Only for Exynos4 SoCs */
	PPMU_EVENT(mfc-left),
	PPMU_EVENT(mfc-right),

	/* Only for Exynos5260 SoCs */
	PPMU_EVENT(drex0-s0),
	PPMU_EVENT(drex0-s1),
	PPMU_EVENT(drex1-s0),
	PPMU_EVENT(drex1-s1),
	PPMU_EVENT(eagle),
	PPMU_EVENT(kfc),
	PPMU_EVENT(isp),
	PPMU_EVENT(fimc),
	PPMU_EVENT(gscl),
	PPMU_EVENT(mscl),
	PPMU_EVENT(fimd0x),
	PPMU_EVENT(fimd1x),
	{ /* sentinel */ },
};

static int exynos_ppmu_find_ppmu_id(struct devfreq_event_dev *edev)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(ppmu_events); i++)
		if (!strcmp(edev->desc->name, ppmu_events[i].name))
			return ppmu_events[i].id;

	return -EINVAL;
}

static int exynos_ppmu_disable(struct devfreq_event_dev *edev)
{
	struct exynos_ppmu *info = devfreq_event_get_drvdata(edev);
	u32 pmnc;

	/* Disable all counters */
	__raw_writel(PPMU_CCNT_MASK |
		     PPMU_PMCNT0_MASK |
		     PPMU_PMCNT1_MASK |
		     PPMU_PMCNT2_MASK |
		     PPMU_PMCNT3_MASK,
		     info->ppmu.base + PPMU_CNTENC);

	/* Disable PPMU */
	pmnc = __raw_readl(info->ppmu.base + PPMU_PMNC);
	pmnc &= ~PPMU_PMNC_ENABLE_MASK;
	__raw_writel(pmnc, info->ppmu.base + PPMU_PMNC);

	return 0;
}

static int exynos_ppmu_set_event(struct devfreq_event_dev *edev)
{
	struct exynos_ppmu *info = devfreq_event_get_drvdata(edev);
	int id = exynos_ppmu_find_ppmu_id(edev);
	u32 pmnc, cntens;

	if (id < 0)
		return id;

	/* Enable specific counter */
	cntens = __raw_readl(info->ppmu.base + PPMU_CNTENS);
	cntens |= (PPMU_CCNT_MASK | (PPMU_ENABLE << id));
	__raw_writel(cntens, info->ppmu.base + PPMU_CNTENS);

	/* Set the event of Read/Write data count */
	__raw_writel(PPMU_RO_DATA_CNT | PPMU_WO_DATA_CNT,
		     info->ppmu.base + PPMU_BEVTxSEL(id));

	/* Reset cycle counter/performance counter and enable PPMU */
	pmnc = __raw_readl(info->ppmu.base + PPMU_PMNC);
	pmnc &= ~(PPMU_PMNC_ENABLE_MASK
		  | PPMU_PMNC_COUNTER_RESET_MASK
		  | PPMU_PMNC_CC_RESET_MASK);
	pmnc |= (PPMU_ENABLE << PPMU_PMNC_ENABLE_SHIFT);
	pmnc |= (PPMU_ENABLE << PPMU_PMNC_COUNTER_RESET_SHIFT);
	pmnc |= (PPMU_ENABLE << PPMU_PMNC_CC_RESET_SHIFT);
	__raw_writel(pmnc, info->ppmu.base + PPMU_PMNC);

	return 0;
}

static int exynos_ppmu_get_event(struct devfreq_event_dev *edev,
				 struct devfreq_event_data *edata)
{
	struct exynos_ppmu *info = devfreq_event_get_drvdata(edev);
	int id = exynos_ppmu_find_ppmu_id(edev);
	u32 pmnc, cntenc;

	if (id < 0)
		return -EINVAL;

	/* Disable PPMU */
	pmnc = __raw_readl(info->ppmu.base + PPMU_PMNC);
	pmnc &= ~PPMU_PMNC_ENABLE_MASK;
	__raw_writel(pmnc, info->ppmu.base + PPMU_PMNC);

	/* Read cycle count */
	edata->total_count = __raw_readl(info->ppmu.base + PPMU_CCNT);

	/* Read performance count */
	switch (id) {
	case PPMU_PMNCNT0:
	case PPMU_PMNCNT1:
	case PPMU_PMNCNT2:
		edata->load_count
			= __raw_readl(info->ppmu.base + PPMU_PMNCT(id));
		break;
	case PPMU_PMNCNT3:
		edata->load_count =
			((__raw_readl(info->ppmu.base + PPMU_PMCNT3_HIGH) << 8)
			| __raw_readl(info->ppmu.base + PPMU_PMCNT3_LOW));
		break;
	default:
		return -EINVAL;
	}

	/* Disable specific counter */
	cntenc = __raw_readl(info->ppmu.base + PPMU_CNTENC);
	cntenc |= (PPMU_CCNT_MASK | (PPMU_ENABLE << id));
	__raw_writel(cntenc, info->ppmu.base + PPMU_CNTENC);

	dev_dbg(&edev->dev, "%s (event: %ld/%ld)\n", edev->desc->name,
		edata->load_count, edata->total_count);

	return 0;
}

static struct devfreq_event_ops exynos_ppmu_ops = {
	.disable = exynos_ppmu_disable,
	.set_event = exynos_ppmu_set_event,
	.get_event = exynos_ppmu_get_event,
};

static int of_get_devfreq_events(struct device_node *np,
				 struct exynos_ppmu *info)
{
	struct devfreq_event_desc *desc;
	struct device *dev = info->dev;
	struct device_node *events_np, *node;
	int i, j, count;

	events_np = of_get_child_by_name(np, "events");
	if (!events_np) {
		dev_err(dev,
			"failed to get child node of devfreq-event devices\n");
		return -EINVAL;
	}

	count = of_get_child_count(events_np);
	desc = devm_kzalloc(dev, sizeof(*desc) * count, GFP_KERNEL);
	if (!desc)
		return -ENOMEM;
	info->num_events = count;

	j = 0;
	for_each_child_of_node(events_np, node) {
		for (i = 0; i < ARRAY_SIZE(ppmu_events); i++) {
			if (!ppmu_events[i].name)
				continue;

			if (!of_node_cmp(node->name, ppmu_events[i].name))
				break;
		}

		if (i == ARRAY_SIZE(ppmu_events)) {
			dev_warn(dev,
				 "don't know how to configure events : %s\n",
				 node->name);
			continue;
		}

		desc[j].ops = &exynos_ppmu_ops;
		desc[j].driver_data = info;

		of_property_read_string(node, "event-name", &desc[j].name);

		j++;

		of_node_put(node);
	}
	info->desc = desc;

	of_node_put(events_np);

	return 0;
}

static int exynos_ppmu_parse_dt(struct exynos_ppmu *info)
{
	struct device *dev = info->dev;
	struct device_node *np = dev->of_node;
	int ret = 0;

	if (!np) {
		dev_err(dev, "failed to find devicetree node\n");
		return -EINVAL;
	}

	/* Maps the memory mapped IO to control PPMU register */
	info->ppmu.base = of_iomap(np, 0);
	if (IS_ERR_OR_NULL(info->ppmu.base)) {
		dev_err(dev, "failed to map memory region\n");
		return -ENOMEM;
	}

	info->ppmu.clk = devm_clk_get(dev, "ppmu");
	if (IS_ERR(info->ppmu.clk)) {
		info->ppmu.clk = NULL;
		dev_warn(dev, "cannot get PPMU clock\n");
	}

	ret = of_get_devfreq_events(np, info);
	if (ret < 0) {
		dev_err(dev, "failed to parse exynos ppmu dt node\n");
		goto err;
	}

	return 0;

err:
	iounmap(info->ppmu.base);

	return ret;
}

static int exynos_ppmu_probe(struct platform_device *pdev)
{
	struct exynos_ppmu *info;
	struct devfreq_event_dev **edev;
	struct devfreq_event_desc *desc;
	int i, ret = 0, size;

	info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
	if (!info)
		return -ENOMEM;

	mutex_init(&info->lock);
	info->dev = &pdev->dev;

	/* Parse dt data to get resource */
	ret = exynos_ppmu_parse_dt(info);
	if (ret < 0) {
		dev_err(&pdev->dev,
			"failed to parse devicetree for resource\n");
		return ret;
	}
	desc = info->desc;

	size = sizeof(struct devfreq_event_dev *) * info->num_events;
	info->edev = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
	if (!info->edev) {
		dev_err(&pdev->dev,
			"failed to allocate memory devfreq-event devices\n");
		return -ENOMEM;
	}
	edev = info->edev;
	platform_set_drvdata(pdev, info);

	for (i = 0; i < info->num_events; i++) {
		edev[i] = devm_devfreq_event_add_edev(&pdev->dev, &desc[i]);
		if (IS_ERR(edev)) {
			ret = PTR_ERR(edev);
			dev_err(&pdev->dev,
				"failed to add devfreq-event device\n");
			goto err;
		}
	}

	clk_prepare_enable(info->ppmu.clk);

	return 0;
err:
	iounmap(info->ppmu.base);

	return ret;
}

static int exynos_ppmu_remove(struct platform_device *pdev)
{
	struct exynos_ppmu *info = platform_get_drvdata(pdev);

	clk_disable_unprepare(info->ppmu.clk);
	iounmap(info->ppmu.base);

	return 0;
}

static struct of_device_id exynos_ppmu_id_match[] = {
	{ .compatible = "samsung,exynos-ppmu", },
	{ /* sentinel */ },
};

static struct platform_driver exynos_ppmu_driver = {
	.probe	= exynos_ppmu_probe,
	.remove	= exynos_ppmu_remove,
	.driver = {
		.name	= "exynos-ppmu",
		.of_match_table = exynos_ppmu_id_match,
	},
};
module_platform_driver(exynos_ppmu_driver);

MODULE_DESCRIPTION("Exynos PPMU(Platform Performance Monitoring Unit) driver");
MODULE_AUTHOR("Chanwoo Choi <cw00.choi@samsung.com>");
MODULE_LICENSE("GPL");
+93
drivers/devfreq/event/exynos-ppmu.h
/*
 * exynos_ppmu.h - EXYNOS PPMU header file
 *
 * Copyright (c) 2015 Samsung Electronics Co., Ltd.
 * Author : Chanwoo Choi <cw00.choi@samsung.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#ifndef __EXYNOS_PPMU_H__
#define __EXYNOS_PPMU_H__

enum ppmu_state {
	PPMU_DISABLE = 0,
	PPMU_ENABLE,
};

enum ppmu_counter {
	PPMU_PMNCNT0 = 0,
	PPMU_PMNCNT1,
	PPMU_PMNCNT2,
	PPMU_PMNCNT3,

	PPMU_PMNCNT_MAX,
};

enum ppmu_event_type {
	PPMU_RO_BUSY_CYCLE_CNT	= 0x0,
	PPMU_WO_BUSY_CYCLE_CNT	= 0x1,
	PPMU_RW_BUSY_CYCLE_CNT	= 0x2,
	PPMU_RO_REQUEST_CNT	= 0x3,
	PPMU_WO_REQUEST_CNT	= 0x4,
	PPMU_RO_DATA_CNT	= 0x5,
	PPMU_WO_DATA_CNT	= 0x6,
	PPMU_RO_LATENCY		= 0x12,
	PPMU_WO_LATENCY		= 0x16,
};

enum ppmu_reg {
	/* PPC control register */
	PPMU_PMNC		= 0x00,
	PPMU_CNTENS		= 0x10,
	PPMU_CNTENC		= 0x20,
	PPMU_INTENS		= 0x30,
	PPMU_INTENC		= 0x40,
	PPMU_FLAG		= 0x50,

	/* Cycle Counter and Performance Event Counter Register */
	PPMU_CCNT		= 0x100,
	PPMU_PMCNT0		= 0x110,
	PPMU_PMCNT1		= 0x120,
	PPMU_PMCNT2		= 0x130,
	PPMU_PMCNT3_HIGH	= 0x140,
	PPMU_PMCNT3_LOW		= 0x150,

	/* Bus Event Generator */
	PPMU_BEVT0SEL		= 0x1000,
	PPMU_BEVT1SEL		= 0x1100,
	PPMU_BEVT2SEL		= 0x1200,
	PPMU_BEVT3SEL		= 0x1300,
	PPMU_COUNTER_RESET	= 0x1810,
	PPMU_READ_OVERFLOW_CNT	= 0x1810,
	PPMU_READ_UNDERFLOW_CNT	= 0x1814,
	PPMU_WRITE_OVERFLOW_CNT	= 0x1850,
	PPMU_WRITE_UNDERFLOW_CNT = 0x1854,
	PPMU_READ_PENDING_CNT	= 0x1880,
	PPMU_WRITE_PENDING_CNT	= 0x1884
};

/* PMNC register */
#define PPMU_PMNC_CC_RESET_SHIFT	2
#define PPMU_PMNC_COUNTER_RESET_SHIFT	1
#define PPMU_PMNC_ENABLE_SHIFT		0
#define PPMU_PMNC_START_MODE_MASK	BIT(16)
#define PPMU_PMNC_CC_DIVIDER_MASK	BIT(3)
#define PPMU_PMNC_CC_RESET_MASK		BIT(2)
#define PPMU_PMNC_COUNTER_RESET_MASK	BIT(1)
#define PPMU_PMNC_ENABLE_MASK		BIT(0)

/* CNTENS/CNTENC/INTENS/INTENC/FLAG register */
#define PPMU_CCNT_MASK			BIT(31)
#define PPMU_PMCNT3_MASK		BIT(3)
#define PPMU_PMCNT2_MASK		BIT(2)
#define PPMU_PMCNT1_MASK		BIT(1)
#define PPMU_PMCNT0_MASK		BIT(0)

/* PPMU_PMNCTx/PPMU_BETxSEL registers */
#define PPMU_PMNCT(x)			(PPMU_PMCNT0 + (0x10 * x))
#define PPMU_BEVTxSEL(x)		(PPMU_BEVT0SEL + (0x100 * x))

#endif /* __EXYNOS_PPMU_H__ */
+718
drivers/devfreq/tegra-devfreq.c
/*
 * A devfreq driver for NVIDIA Tegra SoCs
 *
 * Copyright (c) 2014 NVIDIA CORPORATION. All rights reserved.
 * Copyright (C) 2014 Google, Inc
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms and conditions of the GNU General Public License,
 * version 2, as published by the Free Software Foundation.
 *
 * This program is distributed in the hope it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 *
 */

#include <linux/clk.h>
#include <linux/cpufreq.h>
#include <linux/devfreq.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/reset.h>

#include "governor.h"

#define ACTMON_GLB_STATUS					0x0
#define ACTMON_GLB_PERIOD_CTRL					0x4

#define ACTMON_DEV_CTRL						0x0
#define ACTMON_DEV_CTRL_K_VAL_SHIFT				10
#define ACTMON_DEV_CTRL_ENB_PERIODIC				BIT(18)
#define ACTMON_DEV_CTRL_AVG_BELOW_WMARK_EN			BIT(20)
#define ACTMON_DEV_CTRL_AVG_ABOVE_WMARK_EN			BIT(21)
#define ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_NUM_SHIFT	23
#define ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_NUM_SHIFT	26
#define ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN		BIT(29)
#define ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN		BIT(30)
#define ACTMON_DEV_CTRL_ENB					BIT(31)

#define ACTMON_DEV_UPPER_WMARK					0x4
#define ACTMON_DEV_LOWER_WMARK					0x8
#define ACTMON_DEV_INIT_AVG					0xc
#define ACTMON_DEV_AVG_UPPER_WMARK				0x10
#define ACTMON_DEV_AVG_LOWER_WMARK				0x14
#define ACTMON_DEV_COUNT_WEIGHT					0x18
#define ACTMON_DEV_AVG_COUNT					0x20
#define ACTMON_DEV_INTR_STATUS					0x24

#define ACTMON_INTR_STATUS_CLEAR				0xffffffff

#define ACTMON_DEV_INTR_CONSECUTIVE_UPPER			BIT(31)
#define ACTMON_DEV_INTR_CONSECUTIVE_LOWER			BIT(30)

#define ACTMON_ABOVE_WMARK_WINDOW				1
#define ACTMON_BELOW_WMARK_WINDOW				3
#define ACTMON_BOOST_FREQ_STEP					16000

/* activity counter is incremented every 256 memory transactions, and each
 * transaction takes 4 EMC clocks for Tegra124; So the COUNT_WEIGHT is
 * 4 * 256 = 1024.
 */
#define ACTMON_COUNT_WEIGHT					0x400

/*
 * ACTMON_AVERAGE_WINDOW_LOG2: default value for @DEV_CTRL_K_VAL, which
 * translates to 2 ^ (K_VAL + 1). ex: 2 ^ (6 + 1) = 128
 */
#define ACTMON_AVERAGE_WINDOW_LOG2			6
#define ACTMON_SAMPLING_PERIOD				12 /* ms */
#define ACTMON_DEFAULT_AVG_BAND				6  /* 1/10 of % */

#define KHZ 1000

/* Assume that the bus is saturated if the utilization is 25% */
#define BUS_SATURATION_RATIO					25

/**
 * struct tegra_devfreq_device_config - configuration specific to an ACTMON
 * device
 *
 * Coefficients and thresholds are in %
 */
struct tegra_devfreq_device_config {
	u32		offset;
	u32		irq_mask;

	unsigned int	boost_up_coeff;
	unsigned int	boost_down_coeff;
	unsigned int	boost_up_threshold;
	unsigned int	boost_down_threshold;
	u32		avg_dependency_threshold;
};

enum tegra_actmon_device {
	MCALL = 0,
	MCCPU,
};

static struct tegra_devfreq_device_config actmon_device_configs[] = {
	{
		/* MCALL */
		.offset = 0x1c0,
		.irq_mask = 1 << 26,
		.boost_up_coeff = 200,
		.boost_down_coeff = 50,
		.boost_up_threshold = 60,
		.boost_down_threshold = 40,
	},
	{
		/* MCCPU */
		.offset = 0x200,
		.irq_mask = 1 << 25,
		.boost_up_coeff = 800,
		.boost_down_coeff = 90,
		.boost_up_threshold = 27,
		.boost_down_threshold = 10,
		.avg_dependency_threshold = 50000,
	},
};

/**
 * struct tegra_devfreq_device - state specific to an ACTMON device
 *
 * Frequencies are in kHz.
 */
struct tegra_devfreq_device {
	const struct tegra_devfreq_device_config *config;

	void __iomem	*regs;
	u32		avg_band_freq;
	u32		avg_count;

	unsigned long	target_freq;
	unsigned long	boost_freq;
};

struct tegra_devfreq {
	struct devfreq		*devfreq;

	struct platform_device	*pdev;
	struct reset_control	*reset;
	struct clk		*clock;
	void __iomem		*regs;

	spinlock_t		lock;

	struct clk		*emc_clock;
	unsigned long		max_freq;
	unsigned long		cur_freq;
	struct notifier_block	rate_change_nb;

	struct tegra_devfreq_device devices[ARRAY_SIZE(actmon_device_configs)];
};

struct tegra_actmon_emc_ratio {
	unsigned long cpu_freq;
	unsigned long emc_freq;
};

static struct tegra_actmon_emc_ratio actmon_emc_ratios[] = {
	{ 1400000, ULONG_MAX },
	{ 1200000,    750000 },
	{ 1100000,    600000 },
	{ 1000000,    500000 },
	{  800000,    375000 },
	{  500000,    200000 },
	{  250000,    100000 },
};

static unsigned long do_percent(unsigned long val, unsigned int pct)
{
	return val * pct / 100;
}

static void tegra_devfreq_update_avg_wmark(struct tegra_devfreq_device *dev)
{
	u32 avg = dev->avg_count;
	u32 band = dev->avg_band_freq * ACTMON_SAMPLING_PERIOD;

	writel(avg + band, dev->regs + ACTMON_DEV_AVG_UPPER_WMARK);
	avg = max(avg, band);
	writel(avg - band, dev->regs + ACTMON_DEV_AVG_LOWER_WMARK);
}

static void tegra_devfreq_update_wmark(struct tegra_devfreq *tegra,
				       struct tegra_devfreq_device *dev)
{
	u32 val = tegra->cur_freq * ACTMON_SAMPLING_PERIOD;

	writel(do_percent(val, dev->config->boost_up_threshold),
	       dev->regs + ACTMON_DEV_UPPER_WMARK);

	writel(do_percent(val, dev->config->boost_down_threshold),
	       dev->regs + ACTMON_DEV_LOWER_WMARK);
}

static void actmon_write_barrier(struct tegra_devfreq *tegra)
{
	/* ensure the update has reached the ACTMON */
	wmb();
	readl(tegra->regs + ACTMON_GLB_STATUS);
}

static irqreturn_t actmon_isr(int irq, void *data)
{
	struct tegra_devfreq *tegra = data;
	struct tegra_devfreq_device *dev = NULL;
	unsigned long flags;
	u32 val;
	unsigned int i;

	val = readl(tegra->regs + ACTMON_GLB_STATUS);

	for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
		if (val & tegra->devices[i].config->irq_mask) {
			dev = tegra->devices + i;
			break;
		}
	}

	if (!dev)
		return IRQ_NONE;

	spin_lock_irqsave(&tegra->lock, flags);

	dev->avg_count = readl(dev->regs + ACTMON_DEV_AVG_COUNT);
	tegra_devfreq_update_avg_wmark(dev);

	val = readl(dev->regs + ACTMON_DEV_INTR_STATUS);
	if (val & ACTMON_DEV_INTR_CONSECUTIVE_UPPER) {
		val = readl(dev->regs + ACTMON_DEV_CTRL) |
			ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN |
			ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;

		/*
		 * new_boost = min(old_boost * up_coef + step, max_freq)
		 */
		dev->boost_freq = do_percent(dev->boost_freq,
					     dev->config->boost_up_coeff);
		dev->boost_freq += ACTMON_BOOST_FREQ_STEP;
		if (dev->boost_freq >= tegra->max_freq) {
			dev->boost_freq = tegra->max_freq;
			val &= ~ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN;
		}
		writel(val, dev->regs + ACTMON_DEV_CTRL);
	} else if (val & ACTMON_DEV_INTR_CONSECUTIVE_LOWER) {
		val = readl(dev->regs + ACTMON_DEV_CTRL) |
			ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN |
			ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;

		/*
		 * new_boost = old_boost * down_coef
		 * or 0 if (old_boost * down_coef < step / 2)
		 */
		dev->boost_freq = do_percent(dev->boost_freq,
					     dev->config->boost_down_coeff);
		if (dev->boost_freq < (ACTMON_BOOST_FREQ_STEP >> 1)) {
			dev->boost_freq = 0;
			val &= ~ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;
		}
		writel(val, dev->regs + ACTMON_DEV_CTRL);
	}

	if (dev->config->avg_dependency_threshold) {
		val = readl(dev->regs + ACTMON_DEV_CTRL);
		if (dev->avg_count >= dev->config->avg_dependency_threshold)
			val |= ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;
		else if (dev->boost_freq == 0)
			val &= ~ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;
		writel(val, dev->regs + ACTMON_DEV_CTRL);
	}

	writel(ACTMON_INTR_STATUS_CLEAR, dev->regs + ACTMON_DEV_INTR_STATUS);

	actmon_write_barrier(tegra);

	spin_unlock_irqrestore(&tegra->lock, flags);

	return IRQ_WAKE_THREAD;
}

static unsigned long actmon_cpu_to_emc_rate(struct tegra_devfreq *tegra,
					    unsigned long cpu_freq)
{
	unsigned int i;
	struct tegra_actmon_emc_ratio *ratio = actmon_emc_ratios;

	for (i = 0; i < ARRAY_SIZE(actmon_emc_ratios); i++, ratio++) {
		if (cpu_freq >= ratio->cpu_freq) {
			if (ratio->emc_freq >= tegra->max_freq)
				return tegra->max_freq;
			else
				return ratio->emc_freq;
		}
	}

	return 0;
}

static void actmon_update_target(struct tegra_devfreq *tegra,
				 struct tegra_devfreq_device *dev)
{
	unsigned long cpu_freq = 0;
	unsigned long static_cpu_emc_freq = 0;
	unsigned int avg_sustain_coef;
	unsigned long flags;

	if (dev->config->avg_dependency_threshold) {
		cpu_freq = cpufreq_get(0);
		static_cpu_emc_freq = actmon_cpu_to_emc_rate(tegra, cpu_freq);
	}

	spin_lock_irqsave(&tegra->lock, flags);

	dev->target_freq = dev->avg_count / ACTMON_SAMPLING_PERIOD;
	avg_sustain_coef = 100 * 100 / dev->config->boost_up_threshold;
	dev->target_freq = do_percent(dev->target_freq, avg_sustain_coef);
	dev->target_freq += dev->boost_freq;

	if (dev->avg_count >= dev->config->avg_dependency_threshold)
		dev->target_freq = max(dev->target_freq, static_cpu_emc_freq);

	spin_unlock_irqrestore(&tegra->lock, flags);
}

static irqreturn_t actmon_thread_isr(int irq, void *data)
{
	struct tegra_devfreq *tegra = data;

	mutex_lock(&tegra->devfreq->lock);
	update_devfreq(tegra->devfreq);
	mutex_unlock(&tegra->devfreq->lock);

	return IRQ_HANDLED;
}

static int tegra_actmon_rate_notify_cb(struct notifier_block *nb,
				       unsigned long action, void *ptr)
{
	struct clk_notifier_data *data = ptr;
	struct tegra_devfreq *tegra = container_of(nb, struct tegra_devfreq,
						   rate_change_nb);
	unsigned int i;
	unsigned long flags;

	spin_lock_irqsave(&tegra->lock, flags);

	switch (action) {
	case POST_RATE_CHANGE:
		tegra->cur_freq = data->new_rate / KHZ;

		for (i = 0; i < ARRAY_SIZE(tegra->devices); i++)
			tegra_devfreq_update_wmark(tegra, tegra->devices + i);

		actmon_write_barrier(tegra);
		break;
	case PRE_RATE_CHANGE:
		/* fall through */
	case ABORT_RATE_CHANGE:
		break;
	};

	spin_unlock_irqrestore(&tegra->lock, flags);

	return NOTIFY_OK;
}

static void tegra_actmon_configure_device(struct tegra_devfreq *tegra,
					  struct tegra_devfreq_device *dev)
{
	u32 val;

	dev->avg_band_freq = tegra->max_freq * ACTMON_DEFAULT_AVG_BAND / KHZ;
	dev->target_freq = tegra->cur_freq;

	dev->avg_count = tegra->cur_freq * ACTMON_SAMPLING_PERIOD;
	writel(dev->avg_count, dev->regs + ACTMON_DEV_INIT_AVG);

	tegra_devfreq_update_avg_wmark(dev);
	tegra_devfreq_update_wmark(tegra, dev);

	writel(ACTMON_COUNT_WEIGHT, dev->regs + ACTMON_DEV_COUNT_WEIGHT);
	writel(ACTMON_INTR_STATUS_CLEAR, dev->regs + ACTMON_DEV_INTR_STATUS);

	val = 0;
	val |= ACTMON_DEV_CTRL_ENB_PERIODIC |
	       ACTMON_DEV_CTRL_AVG_ABOVE_WMARK_EN |
	       ACTMON_DEV_CTRL_AVG_BELOW_WMARK_EN;
	val |= (ACTMON_AVERAGE_WINDOW_LOG2 - 1)
		<< ACTMON_DEV_CTRL_K_VAL_SHIFT;
	val |= (ACTMON_BELOW_WMARK_WINDOW - 1)
		<< ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_NUM_SHIFT;
	val |= (ACTMON_ABOVE_WMARK_WINDOW - 1)
		<< ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_NUM_SHIFT;
	val |= ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN |
	       ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN;

	writel(val, dev->regs + ACTMON_DEV_CTRL);

	actmon_write_barrier(tegra);

	val = readl(dev->regs + ACTMON_DEV_CTRL);
	val |= ACTMON_DEV_CTRL_ENB;
	writel(val, dev->regs + ACTMON_DEV_CTRL);

	actmon_write_barrier(tegra);
}

static int tegra_devfreq_suspend(struct device *dev)
{
	struct platform_device *pdev;
	struct tegra_devfreq *tegra;
	struct tegra_devfreq_device *actmon_dev;
	unsigned int i;
	u32 val;

	pdev = container_of(dev, struct platform_device, dev);
	tegra = platform_get_drvdata(pdev);

	for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
		actmon_dev = &tegra->devices[i];

		val = readl(actmon_dev->regs + ACTMON_DEV_CTRL);
		val &= ~ACTMON_DEV_CTRL_ENB;
		writel(val, actmon_dev->regs + ACTMON_DEV_CTRL);

		writel(ACTMON_INTR_STATUS_CLEAR,
		       actmon_dev->regs + ACTMON_DEV_INTR_STATUS);

		actmon_write_barrier(tegra);
	}

	return 0;
}

static int tegra_devfreq_resume(struct device *dev)
{
	struct platform_device *pdev;
	struct tegra_devfreq *tegra;
	struct tegra_devfreq_device *actmon_dev;
	unsigned int i;

	pdev = container_of(dev, struct platform_device, dev);
	tegra = platform_get_drvdata(pdev);

	for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
		actmon_dev = &tegra->devices[i];

		tegra_actmon_configure_device(tegra, actmon_dev);
	}

	return 0;
}

static int tegra_devfreq_target(struct device *dev, unsigned long *freq,
				u32 flags)
{
	struct platform_device *pdev;
	struct tegra_devfreq *tegra;
	struct dev_pm_opp *opp;
	unsigned long rate = *freq * KHZ;

	pdev = container_of(dev, struct platform_device, dev);
	tegra = platform_get_drvdata(pdev);

	rcu_read_lock();
	opp = devfreq_recommended_opp(dev, &rate, flags);
	if (IS_ERR(opp)) {
		rcu_read_unlock();
		dev_err(dev, "Failed to find opp for %lu KHz\n", *freq);
		return PTR_ERR(opp);
	}
	rate = dev_pm_opp_get_freq(opp);
	rcu_read_unlock();

	/* TODO: Once we have per-user clk constraints, set a floor */
	clk_set_rate(tegra->emc_clock, rate);

	/* TODO: Set voltage as well */

	return 0;
}

static int tegra_devfreq_get_dev_status(struct device *dev,
					struct devfreq_dev_status *stat)
{
	struct platform_device *pdev;
	struct tegra_devfreq *tegra;
	struct tegra_devfreq_device *actmon_dev;

	pdev = container_of(dev, struct platform_device, dev);
	tegra = platform_get_drvdata(pdev);

	stat->current_frequency = tegra->cur_freq;

	/* To be used by the tegra governor */
	stat->private_data = tegra;

	/* The below are to be used by the other governors */

	actmon_dev = &tegra->devices[MCALL];

	/* Number of cycles spent on memory access */
	stat->busy_time = actmon_dev->avg_count;

	/* The bus can be considered to be saturated way before 100% */
	stat->busy_time *= 100 / BUS_SATURATION_RATIO;

	/* Number of cycles in a sampling period */
	stat->total_time = ACTMON_SAMPLING_PERIOD * tegra->cur_freq;

	return 0;
}

static int tegra_devfreq_get_target(struct devfreq *devfreq,
				    unsigned long *freq)
{
	struct devfreq_dev_status stat;
	struct tegra_devfreq *tegra;
	struct tegra_devfreq_device *dev;
	unsigned long target_freq = 0;
	unsigned int i;
	int err;

	err = devfreq->profile->get_dev_status(devfreq->dev.parent, &stat);
	if (err)
		return err;

	tegra = stat.private_data;

	for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
		dev = &tegra->devices[i];

		actmon_update_target(tegra, dev);

		target_freq = max(target_freq, dev->target_freq);
	}

	*freq = target_freq;

	return 0;
}

static int tegra_devfreq_event_handler(struct devfreq *devfreq,
				       unsigned int event, void *data)
{
	return 0;
}

static struct devfreq_governor tegra_devfreq_governor = {
	.name = "tegra",
	.get_target_freq = tegra_devfreq_get_target,
	.event_handler = tegra_devfreq_event_handler,
};

static struct devfreq_dev_profile tegra_devfreq_profile = {
	.polling_ms	= 0,
	.target		= tegra_devfreq_target,
	.get_dev_status	= tegra_devfreq_get_dev_status,
};

static int tegra_devfreq_probe(struct platform_device *pdev)
{
	struct tegra_devfreq *tegra;
	struct tegra_devfreq_device *dev;
	struct resource *res;
	unsigned long max_freq;
	unsigned int i;
	int irq;
	int err;

	tegra = devm_kzalloc(&pdev->dev, sizeof(*tegra), GFP_KERNEL);
	if (!tegra)
		return -ENOMEM;

	spin_lock_init(&tegra->lock);

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!res) {
		dev_err(&pdev->dev, "Failed to get regs resource\n");
		return -ENODEV;
	}

	tegra->regs = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(tegra->regs)) {
		dev_err(&pdev->dev, "Failed to get IO memory\n");
		return PTR_ERR(tegra->regs);
	}

	tegra->reset = devm_reset_control_get(&pdev->dev, "actmon");
	if (IS_ERR(tegra->reset)) {
		dev_err(&pdev->dev, "Failed to get reset\n");
		return PTR_ERR(tegra->reset);
	}

	tegra->clock = devm_clk_get(&pdev->dev, "actmon");
	if (IS_ERR(tegra->clock)) {
		dev_err(&pdev->dev, "Failed to get actmon clock\n");
		return PTR_ERR(tegra->clock);
	}

	tegra->emc_clock = devm_clk_get(&pdev->dev, "emc");
	if (IS_ERR(tegra->emc_clock)) {
		dev_err(&pdev->dev, "Failed to get emc clock\n");
		return PTR_ERR(tegra->emc_clock);
	}

	err = of_init_opp_table(&pdev->dev);
	if (err) {
		dev_err(&pdev->dev, "Failed to init operating point table\n");
		return err;
	}

	tegra->rate_change_nb.notifier_call = tegra_actmon_rate_notify_cb;
	err = clk_notifier_register(tegra->emc_clock, &tegra->rate_change_nb);
	if (err) {
		dev_err(&pdev->dev,
			"Failed to register rate change notifier\n");
		return err;
	}

	reset_control_assert(tegra->reset);

	err = clk_prepare_enable(tegra->clock);
	if (err) {
		reset_control_deassert(tegra->reset);
		return err;
	}

	reset_control_deassert(tegra->reset);

	max_freq = clk_round_rate(tegra->emc_clock, ULONG_MAX);
	tegra->max_freq = max_freq / KHZ;

	clk_set_rate(tegra->emc_clock, max_freq);

	tegra->cur_freq = clk_get_rate(tegra->emc_clock) / KHZ;

	writel(ACTMON_SAMPLING_PERIOD - 1,
	       tegra->regs + ACTMON_GLB_PERIOD_CTRL);

	for (i = 0; i < ARRAY_SIZE(actmon_device_configs); i++) {
		dev = tegra->devices + i;
		dev->config = actmon_device_configs + i;
		dev->regs = tegra->regs + dev->config->offset;

		tegra_actmon_configure_device(tegra, tegra->devices + i);
	}

	err = devfreq_add_governor(&tegra_devfreq_governor);
	if (err) {
		dev_err(&pdev->dev, "Failed to add governor\n");
		return err;
	}

	tegra_devfreq_profile.initial_freq = clk_get_rate(tegra->emc_clock);
	tegra->devfreq = devm_devfreq_add_device(&pdev->dev,
						 &tegra_devfreq_profile,
						 "tegra",
						 NULL);

	irq = platform_get_irq(pdev, 0);
	err = devm_request_threaded_irq(&pdev->dev, irq, actmon_isr,
					actmon_thread_isr, IRQF_SHARED,
					"tegra-devfreq", tegra);
	if (err) {
		dev_err(&pdev->dev, "Interrupt request failed\n");
		return err;
	}

	platform_set_drvdata(pdev, tegra);

	return 0;
}

static int tegra_devfreq_remove(struct platform_device *pdev)
{
	struct tegra_devfreq *tegra = platform_get_drvdata(pdev);

	clk_notifier_unregister(tegra->emc_clock, &tegra->rate_change_nb);

	clk_disable_unprepare(tegra->clock);

	return 0;
}

static SIMPLE_DEV_PM_OPS(tegra_devfreq_pm_ops,
			 tegra_devfreq_suspend,
			 tegra_devfreq_resume);

static struct of_device_id tegra_devfreq_of_match[] = {
	{ .compatible = "nvidia,tegra124-actmon" },
	{ },
};

static struct platform_driver tegra_devfreq_driver = {
	.probe	= tegra_devfreq_probe,
	.remove	= tegra_devfreq_remove,
	.driver = {
		.name		= "tegra-devfreq",
		.owner		= THIS_MODULE,
		.of_match_table = tegra_devfreq_of_match,
		.pm		= &tegra_devfreq_pm_ops,
	},
};
module_platform_driver(tegra_devfreq_driver);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Tegra devfreq driver");
MODULE_AUTHOR("Tomeu Vizoso <tomeu.vizoso@collabora.com>");
MODULE_DEVICE_TABLE(of, tegra_devfreq_of_match);
+5 -5
drivers/dma/acpi-dma.c
···
 {
 	const struct acpi_csrt_shared_info *si;
 	struct list_head resource_list;
-	struct resource_list_entry *rentry;
+	struct resource_entry *rentry;
 	resource_size_t mem = 0, irq = 0;
 	int ret;
···
 		return 0;

 	list_for_each_entry(rentry, &resource_list, node) {
-		if (resource_type(&rentry->res) == IORESOURCE_MEM)
-			mem = rentry->res.start;
-		else if (resource_type(&rentry->res) == IORESOURCE_IRQ)
-			irq = rentry->res.start;
+		if (resource_type(rentry->res) == IORESOURCE_MEM)
+			mem = rentry->res->start;
+		else if (resource_type(rentry->res) == IORESOURCE_IRQ)
+			irq = rentry->res->start;
 	}

 	acpi_dev_free_resource_list(&resource_list);
+2 -2
drivers/hv/vmbus_drv.c
···
 		break;

 	case ACPI_RESOURCE_TYPE_ADDRESS64:
-		hyperv_mmio.start = res->data.address64.minimum;
-		hyperv_mmio.end = res->data.address64.maximum;
+		hyperv_mmio.start = res->data.address64.address.minimum;
+		hyperv_mmio.end = res->data.address64.address.maximum;
 		break;
 	}
+2 -2
drivers/mailbox/pcc.c
···
 	ret = acpi_pcc_probe();

 	if (ret) {
-		pr_err("ACPI PCC probe failed.\n");
+		pr_debug("ACPI PCC probe failed.\n");
 		return -ENODEV;
 	}
···
 			pcc_mbox_probe, NULL, 0, NULL, 0);

 	if (!pcc_pdev) {
-		pr_err("Err creating PCC platform bundle\n");
+		pr_debug("Err creating PCC platform bundle\n");
 		return -ENODEV;
 	}
+2 -2
drivers/of/of_pci.c
···
 			unsigned char busno, unsigned char bus_max,
 			struct list_head *resources, resource_size_t *io_base)
 {
-	struct pci_host_bridge_window *window;
+	struct resource_entry *window;
 	struct resource *res;
 	struct resource *bus_range;
 	struct of_pci_range range;
···
 conversion_failed:
 	kfree(res);
 parse_failed:
-	list_for_each_entry(window, resources, list)
+	resource_list_for_each_entry(window, resources)
 		kfree(window->res);
 	pci_free_resource_list(resources);
 	kfree(bus_range);
+6 -12
drivers/pci/bus.c
···
 void pci_add_resource_offset(struct list_head *resources, struct resource *res,
 			     resource_size_t offset)
 {
-	struct pci_host_bridge_window *window;
+	struct resource_entry *entry;

-	window = kzalloc(sizeof(struct pci_host_bridge_window), GFP_KERNEL);
-	if (!window) {
+	entry = resource_list_create_entry(res, 0);
+	if (!entry) {
 		printk(KERN_ERR "PCI: can't add host bridge window %pR\n", res);
 		return;
 	}

-	window->res = res;
-	window->offset = offset;
-	list_add_tail(&window->list, resources);
+	entry->offset = offset;
+	resource_list_add_tail(entry, resources);
 }
 EXPORT_SYMBOL(pci_add_resource_offset);
···
 void pci_free_resource_list(struct list_head *resources)
 {
-	struct pci_host_bridge_window *window, *tmp;
-
-	list_for_each_entry_safe(window, tmp, resources, list) {
-		list_del(&window->list);
-		kfree(window);
-	}
+	resource_list_free(resources);
 }
 EXPORT_SYMBOL(pci_free_resource_list);
+4 -4
drivers/pci/host-bridge.c
···
 			     struct resource *res)
 {
 	struct pci_host_bridge *bridge = find_pci_host_bridge(bus);
-	struct pci_host_bridge_window *window;
+	struct resource_entry *window;
 	resource_size_t offset = 0;

-	list_for_each_entry(window, &bridge->windows, list) {
+	resource_list_for_each_entry(window, &bridge->windows) {
 		if (resource_contains(window->res, res)) {
 			offset = window->offset;
 			break;
···
 			     struct pci_bus_region *region)
 {
 	struct pci_host_bridge *bridge = find_pci_host_bridge(bus);
-	struct pci_host_bridge_window *window;
+	struct resource_entry *window;
 	resource_size_t offset = 0;

-	list_for_each_entry(window, &bridge->windows, list) {
+	resource_list_for_each_entry(window, &bridge->windows) {
 		struct pci_bus_region bus_region;

 		if (resource_type(res) != resource_type(window->res))
+2 -2
drivers/pci/host/pci-host-generic.c
···
 	struct device *dev = pci->host.dev.parent;
 	struct device_node *np = dev->of_node;
 	resource_size_t iobase;
-	struct pci_host_bridge_window *win;
+	struct resource_entry *win;

 	err = of_pci_get_host_bridge_resources(np, 0, 0xff, &pci->resources,
 					       &iobase);
 	if (err)
 		return err;

-	list_for_each_entry(win, &pci->resources, list) {
+	resource_list_for_each_entry(win, &pci->resources) {
 		struct resource *parent, *res = win->res;

 		switch (resource_type(res)) {
+2 -2
drivers/pci/host/pci-versatile.c
···
 	int err, mem = 1, res_valid = 0;
 	struct device_node *np = dev->of_node;
 	resource_size_t iobase;
-	struct pci_host_bridge_window *win;
+	struct resource_entry *win;

 	err = of_pci_get_host_bridge_resources(np, 0, 0xff, res, &iobase);
 	if (err)
 		return err;

-	list_for_each_entry(win, res, list) {
+	resource_list_for_each_entry(win, res) {
 		struct resource *parent, *res = win->res;

 		switch (resource_type(res)) {
+2 -2
drivers/pci/host/pci-xgene.c
···
 				 struct list_head *res,
 				 resource_size_t io_base)
 {
-	struct pci_host_bridge_window *window;
+	struct resource_entry *window;
 	struct device *dev = port->dev;
 	int ret;

-	list_for_each_entry(window, res, list) {
+	resource_list_for_each_entry(window, res) {
 		struct resource *res = window->res;
 		u64 restype = resource_type(res);
+2 -2
drivers/pci/host/pcie-xilinx.c
···
 	resource_size_t offset;
 	struct of_pci_range_parser parser;
 	struct of_pci_range range;
-	struct pci_host_bridge_window *win;
+	struct resource_entry *win;
 	int err = 0, mem_resno = 0;

 	/* Get the ranges */
···
 free_resources:
 	release_child_resources(&iomem_resource);
-	list_for_each_entry(win, &port->resources, list)
+	resource_list_for_each_entry(win, &port->resources)
 		devm_kfree(dev, win->res);
 	pci_free_resource_list(&port->resources);
+7 -6
drivers/pci/hotplug/sgi_hotplug.c
···
 	struct slot *slot = bss_hotplug_slot->private;
 	struct pci_dev *dev, *temp;
 	int rc;
-	acpi_owner_id ssdt_id = 0;
+	acpi_handle ssdt_hdl = NULL;

 	/* Acquire update access to the bus */
 	mutex_lock(&sn_hotplug_mutex);
···
 		if (ACPI_SUCCESS(ret) &&
 		    (adr>>16) == (slot->device_num + 1)) {
 			/* retain the owner id */
-			acpi_get_id(chandle, &ssdt_id);
+			ssdt_hdl = chandle;

 			ret = acpi_bus_get_device(chandle,
 						  &device);
···
 	pci_unlock_rescan_remove();

 	/* Remove the SSDT for the slot from the ACPI namespace */
-	if (SN_ACPI_BASE_SUPPORT() && ssdt_id) {
+	if (SN_ACPI_BASE_SUPPORT() && ssdt_hdl) {
 		acpi_status ret;
-		ret = acpi_unload_table_id(ssdt_id);
+		ret = acpi_unload_parent_table(ssdt_hdl);
 		if (ACPI_FAILURE(ret)) {
-			printk(KERN_ERR "%s: acpi_unload_table_id failed (0x%x) for id %d\n",
-				__func__, ret, ssdt_id);
+			acpi_handle_err(ssdt_hdl,
+					"%s: acpi_unload_parent_table failed (0x%x)\n",
+					__func__, ret);
 			/* try to continue on */
 		}
 	}
+17
drivers/pci/pci-acpi.c
···
 	return 0;
 }

+static bool acpi_pci_need_resume(struct pci_dev *dev)
+{
+	struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
+
+	if (!adev || !acpi_device_power_manageable(adev))
+		return false;
+
+	if (device_may_wakeup(&dev->dev) != !!adev->wakeup.prepare_count)
+		return true;
+
+	if (acpi_target_system_state() == ACPI_STATE_S0)
+		return false;
+
+	return !!adev->power.flags.dsw_present;
+}
+
 static struct pci_platform_pm_ops acpi_pci_platform_pm = {
 	.is_manageable = acpi_pci_power_manageable,
 	.set_state = acpi_pci_set_power_state,
 	.choose_state = acpi_pci_choose_state,
 	.sleep_wake = acpi_pci_sleep_wake,
 	.run_wake = acpi_pci_run_wake,
+	.need_resume = acpi_pci_need_resume,
 };

 void acpi_pci_add_bus(struct pci_bus *bus)
+6 -5
drivers/pci/pci-driver.c
···
 static int pci_pm_prepare(struct device *dev)
 {
 	struct device_driver *drv = dev->driver;
-	int error = 0;

 	/*
 	 * Devices having power.ignore_children set may still be necessary for
···
 	if (dev->power.ignore_children)
 		pm_runtime_resume(dev);

-	if (drv && drv->pm && drv->pm->prepare)
-		error = drv->pm->prepare(dev);
-
-	return error;
+	if (drv && drv->pm && drv->pm->prepare) {
+		int error = drv->pm->prepare(dev);
+		if (error)
+			return error;
+	}
+	return pci_dev_keep_suspended(to_pci_dev(dev));
 }
+26
drivers/pci/pci.c
···
 		pci_platform_pm->run_wake(dev, enable) : -ENODEV;
 }

+static inline bool platform_pci_need_resume(struct pci_dev *dev)
+{
+	return pci_platform_pm ? pci_platform_pm->need_resume(dev) : false;
+}
+
 /**
  * pci_raw_set_power_state - Use PCI PM registers to set the power state of
  *                           given PCI device
···
 	return false;
 }
 EXPORT_SYMBOL_GPL(pci_dev_run_wake);
+
+/**
+ * pci_dev_keep_suspended - Check if the device can stay in the suspended state.
+ * @pci_dev: Device to check.
+ *
+ * Return 'true' if the device is runtime-suspended, it doesn't have to be
+ * reconfigured due to wakeup settings difference between system and runtime
+ * suspend and the current power state of it is suitable for the upcoming
+ * (system) transition.
+ */
+bool pci_dev_keep_suspended(struct pci_dev *pci_dev)
+{
+	struct device *dev = &pci_dev->dev;
+
+	if (!pm_runtime_suspended(dev)
+	    || (device_can_wakeup(dev) && !device_may_wakeup(dev))
+	    || platform_pci_need_resume(pci_dev))
+		return false;
+
+	return pci_target_state(pci_dev) == pci_dev->current_state;
+}

 void pci_config_pm_runtime_get(struct pci_dev *pdev)
 {
+6
drivers/pci/pci.h
···
  *               for given device (the device's wake-up capability has to be
  *               enabled by @sleep_wake for this feature to work)
  *
+ * @need_resume: returns 'true' if the given device (which is currently
+ *               suspended) needs to be resumed to be configured for system
+ *               wakeup.
+ *
  * If given platform is generally capable of power managing PCI devices, all of
  * these callbacks are mandatory.
  */
···
 	pci_power_t (*choose_state)(struct pci_dev *dev);
 	int (*sleep_wake)(struct pci_dev *dev, bool enable);
 	int (*run_wake)(struct pci_dev *dev, bool enable);
+	bool (*need_resume)(struct pci_dev *dev);
 };

 int pci_set_platform_pm(struct pci_platform_pm_ops *ops);
···
 void pci_disable_enabled_device(struct pci_dev *dev);
 int pci_finish_runtime_suspend(struct pci_dev *dev);
 int __pci_pme_wakeup(struct pci_dev *dev, void *ign);
+bool pci_dev_keep_suspended(struct pci_dev *dev);
 void pci_config_pm_runtime_get(struct pci_dev *dev);
 void pci_config_pm_runtime_put(struct pci_dev *dev);
 void pci_pm_init(struct pci_dev *dev);
+5 -5
drivers/pci/probe.c
···
 	int error;
 	struct pci_host_bridge *bridge;
 	struct pci_bus *b, *b2;
-	struct pci_host_bridge_window *window, *n;
+	struct resource_entry *window, *n;
 	struct resource *res;
 	resource_size_t offset;
 	char bus_addr[64];
···
 	printk(KERN_INFO "PCI host bridge to bus %s\n", dev_name(&b->dev));

 	/* Add initial resources to the bus */
-	list_for_each_entry_safe(window, n, resources, list) {
-		list_move_tail(&window->list, &bridge->windows);
+	resource_list_for_each_entry_safe(window, n, resources) {
+		list_move_tail(&window->node, &bridge->windows);
 		res = window->res;
 		offset = window->offset;
 		if (res->flags & IORESOURCE_BUS)
···
 struct pci_bus *pci_scan_root_bus(struct device *parent, int bus,
 		struct pci_ops *ops, void *sysdata, struct list_head *resources)
 {
-	struct pci_host_bridge_window *window;
+	struct resource_entry *window;
 	bool found = false;
 	struct pci_bus *b;
 	int max;

-	list_for_each_entry(window, resources, list)
+	resource_list_for_each_entry(window, resources)
 		if (window->res->flags & IORESOURCE_BUS) {
 			found = true;
 			break;
+23 -22
drivers/pnp/pnpacpi/rsparser.c
···
 	struct pnp_dev *dev = data;
 	struct acpi_resource_dma *dma;
 	struct acpi_resource_vendor_typed *vendor_typed;
-	struct resource r = {0};
+	struct resource_win win = {{0}, 0};
+	struct resource *r = &win.res;
 	int i, flags;

-	if (acpi_dev_resource_address_space(res, &r)
-	    || acpi_dev_resource_ext_address_space(res, &r)) {
-		pnp_add_resource(dev, &r);
+	if (acpi_dev_resource_address_space(res, &win)
+	    || acpi_dev_resource_ext_address_space(res, &win)) {
+		pnp_add_resource(dev, &win.res);
 		return AE_OK;
 	}

-	r.flags = 0;
-	if (acpi_dev_resource_interrupt(res, 0, &r)) {
-		pnpacpi_add_irqresource(dev, &r);
-		for (i = 1; acpi_dev_resource_interrupt(res, i, &r); i++)
-			pnpacpi_add_irqresource(dev, &r);
+	r->flags = 0;
+	if (acpi_dev_resource_interrupt(res, 0, r)) {
+		pnpacpi_add_irqresource(dev, r);
+		for (i = 1; acpi_dev_resource_interrupt(res, i, r); i++)
+			pnpacpi_add_irqresource(dev, r);

 		if (i > 1) {
 			/*
···
 			}
 		}
 		return AE_OK;
-	} else if (r.flags & IORESOURCE_DISABLED) {
+	} else if (r->flags & IORESOURCE_DISABLED) {
 		pnp_add_irq_resource(dev, 0, IORESOURCE_DISABLED);
 		return AE_OK;
 	}
···
 	case ACPI_RESOURCE_TYPE_MEMORY24:
 	case ACPI_RESOURCE_TYPE_MEMORY32:
 	case ACPI_RESOURCE_TYPE_FIXED_MEMORY32:
-		if (acpi_dev_resource_memory(res, &r))
-			pnp_add_resource(dev, &r);
+		if (acpi_dev_resource_memory(res, r))
+			pnp_add_resource(dev, r);
 		break;
 	case ACPI_RESOURCE_TYPE_IO:
 	case ACPI_RESOURCE_TYPE_FIXED_IO:
-		if (acpi_dev_resource_io(res, &r))
-			pnp_add_resource(dev, &r);
+		if (acpi_dev_resource_io(res, r))
+			pnp_add_resource(dev, r);
 		break;
 	case ACPI_RESOURCE_TYPE_DMA:
 		dma = &res->data.dma;
···
 	if (p->resource_type == ACPI_MEMORY_RANGE) {
 		if (p->info.mem.write_protect == ACPI_READ_WRITE_MEMORY)
 			flags = IORESOURCE_MEM_WRITEABLE;
-		pnp_register_mem_resource(dev, option_flags, p->minimum,
-					  p->minimum, 0, p->address_length,
+		pnp_register_mem_resource(dev, option_flags, p->address.minimum,
+					  p->address.minimum, 0, p->address.address_length,
 					  flags);
 	} else if (p->resource_type == ACPI_IO_RANGE)
-		pnp_register_port_resource(dev, option_flags, p->minimum,
-					   p->minimum, 0, p->address_length,
+		pnp_register_port_resource(dev, option_flags, p->address.minimum,
+					   p->address.minimum, 0, p->address.address_length,
 					   IORESOURCE_IO_FIXED);
 }
···
 	if (p->resource_type == ACPI_MEMORY_RANGE) {
 		if (p->info.mem.write_protect == ACPI_READ_WRITE_MEMORY)
 			flags = IORESOURCE_MEM_WRITEABLE;
-		pnp_register_mem_resource(dev, option_flags, p->minimum,
-					  p->minimum, 0, p->address_length,
+		pnp_register_mem_resource(dev, option_flags, p->address.minimum,
+					  p->address.minimum, 0, p->address.address_length,
 					  flags);
 	} else if (p->resource_type == ACPI_IO_RANGE)
-		pnp_register_port_resource(dev, option_flags, p->minimum,
-					   p->minimum, 0, p->address_length,
+		pnp_register_port_resource(dev, option_flags, p->address.minimum,
+					   p->address.minimum, 0, p->address.address_length,
 					   IORESOURCE_IO_FIXED);
 }
+2 -2
drivers/sfi/sfi_core.c
···
  * Check for common case that we can re-use mapping to SYST,
  * which requires syst_pa, syst_va to be initialized.
  */
-struct sfi_table_header *sfi_map_table(u64 pa)
+static struct sfi_table_header *sfi_map_table(u64 pa)
 {
 	struct sfi_table_header *th;
 	u32 length;
···
  * Undoes effect of sfi_map_table() by unmapping table
  * if it did not completely fit on same page as SYST.
  */
-void sfi_unmap_table(struct sfi_table_header *th)
+static void sfi_unmap_table(struct sfi_table_header *th)
 {
 	if (!TABLE_ON_PAGE(syst_va, th, th->len))
 		sfi_unmap_memory(th, TABLE_ON_PAGE(th, th, th->len) ?
-12
drivers/usb/core/hub.c
···
 	return status;
 }

-#ifdef CONFIG_PM
-
 int usb_remote_wakeup(struct usb_device *udev)
 {
 	int status = 0;
···
 	dev_dbg(&port_dev->dev, "resume, status %d\n", ret);
 	return connect_change;
 }
-
-#else
-
-static int hub_handle_remote_wakeup(struct usb_hub *hub, unsigned int port,
-				    u16 portstatus, u16 portchange)
-{
-	return 0;
-}
-
-#endif

 static int check_ports_changed(struct usb_hub *hub)
 {
+4 -4
drivers/xen/xen-acpi-memhotplug.c
···
 	list_for_each_entry(info, &mem_device->res_list, list) {
 		if ((info->caching == address64.info.mem.caching) &&
 		    (info->write_protect == address64.info.mem.write_protect) &&
-		    (info->start_addr + info->length == address64.minimum)) {
-			info->length += address64.address_length;
+		    (info->start_addr + info->length == address64.address.minimum)) {
+			info->length += address64.address.address_length;
 			return AE_OK;
 		}
 	}
···
 	INIT_LIST_HEAD(&new->list);
 	new->caching = address64.info.mem.caching;
 	new->write_protect = address64.info.mem.write_protect;
-	new->start_addr = address64.minimum;
-	new->length = address64.address_length;
+	new->start_addr = address64.address.minimum;
+	new->length = address64.address.address_length;
 	list_add_tail(&new->list, &mem_device->res_list);

 	return AE_OK;
+1 -1
include/acpi/acbuffer.h
···
  *****************************************************************************/

 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/acconfig.h
···
  *****************************************************************************/

 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/acexcep.h
···
  *****************************************************************************/

 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/acnames.h
···
  *****************************************************************************/

 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/acoutput.h
···
  *****************************************************************************/

 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/acpi.h
···
  *****************************************************************************/

 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/acpiosxf.h
···
  *****************************************************************************/

 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
+10 -8
include/acpi/acpixf.h
···
  *****************************************************************************/

 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
···

 /* Current ACPICA subsystem version in YYYYMMDD format */

-#define ACPI_CA_VERSION                 0x20141107
+#define ACPI_CA_VERSION                 0x20150204

 #include <acpi/acconfig.h>
 #include <acpi/actypes.h>
···
 						  address,
 						  void *context))
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
+				acpi_install_gpe_raw_handler(acpi_handle
+							     gpe_device,
+							     u32 gpe_number,
+							     u32 type,
+							     acpi_gpe_handler
+							     address,
+							     void *context))
+ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
 				acpi_remove_gpe_handler(acpi_handle gpe_device,
 							u32 gpe_number,
 							acpi_gpe_handler
···
  * Divergences
  */
 ACPI_GLOBAL(u8, acpi_gbl_permanent_mmap);
-
-ACPI_EXTERNAL_RETURN_STATUS(acpi_status
-			    acpi_get_id(acpi_handle object,
-					acpi_owner_id * out_type))
-
-ACPI_EXTERNAL_RETURN_STATUS(acpi_status acpi_unload_table_id(acpi_owner_id id))

 ACPI_EXTERNAL_RETURN_STATUS(acpi_status
 			    acpi_get_table_with_size(acpi_string signature,
+29 -21
include/acpi/acrestyp.h
···
  *****************************************************************************/

 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
···
 	u8 max_address_fixed; \
 	union acpi_resource_attribute info;

-struct acpi_resource_address {
-ACPI_RESOURCE_ADDRESS_COMMON};
-
-struct acpi_resource_address16 {
-	ACPI_RESOURCE_ADDRESS_COMMON u16 granularity;
+struct acpi_address16_attribute {
+	u16 granularity;
 	u16 minimum;
 	u16 maximum;
 	u16 translation_offset;
 	u16 address_length;
-	struct acpi_resource_source resource_source;
 };

-struct acpi_resource_address32 {
-	ACPI_RESOURCE_ADDRESS_COMMON u32 granularity;
+struct acpi_address32_attribute {
+	u32 granularity;
 	u32 minimum;
 	u32 maximum;
 	u32 translation_offset;
 	u32 address_length;
-	struct acpi_resource_source resource_source;
 };

-struct acpi_resource_address64 {
-	ACPI_RESOURCE_ADDRESS_COMMON u64 granularity;
-	u64 minimum;
-	u64 maximum;
-	u64 translation_offset;
-	u64 address_length;
-	struct acpi_resource_source resource_source;
-};
-
-struct acpi_resource_extended_address64 {
-	ACPI_RESOURCE_ADDRESS_COMMON u8 revision_ID;
+struct acpi_address64_attribute {
 	u64 granularity;
 	u64 minimum;
 	u64 maximum;
 	u64 translation_offset;
 	u64 address_length;
+};
+
+struct acpi_resource_address {
+	ACPI_RESOURCE_ADDRESS_COMMON};
+
+struct acpi_resource_address16 {
+	ACPI_RESOURCE_ADDRESS_COMMON struct acpi_address16_attribute address;
+	struct acpi_resource_source resource_source;
+};
+
+struct acpi_resource_address32 {
+	ACPI_RESOURCE_ADDRESS_COMMON struct acpi_address32_attribute address;
+	struct acpi_resource_source resource_source;
+};
+
+struct acpi_resource_address64 {
+	ACPI_RESOURCE_ADDRESS_COMMON struct acpi_address64_attribute address;
+	struct acpi_resource_source resource_source;
+};
+
+struct acpi_resource_extended_address64 {
+	ACPI_RESOURCE_ADDRESS_COMMON u8 revision_ID;
+	struct acpi_address64_attribute address;
 	u64 type_specific;
 };
+1 -1
include/acpi/actbl.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/actbl1.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/actbl2.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/actbl3.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+8 -6
include/acpi/actypes.h
···
  *****************************************************************************/

 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
···
 /*
  * GPE info flags - Per GPE
  * +-------+-+-+---+
- * | 7:4   |3|2|1:0|
+ * | 7:5   |4|3|2:0|
  * +-------+-+-+---+
  *   |      | | |
  *   |      | | +-- Type of dispatch:to method, handler, notify, or none
···
 #define ACPI_GPE_DISPATCH_METHOD        (u8) 0x01
 #define ACPI_GPE_DISPATCH_HANDLER       (u8) 0x02
 #define ACPI_GPE_DISPATCH_NOTIFY        (u8) 0x03
-#define ACPI_GPE_DISPATCH_MASK          (u8) 0x03
+#define ACPI_GPE_DISPATCH_RAW_HANDLER   (u8) 0x04
+#define ACPI_GPE_DISPATCH_MASK          (u8) 0x07
+#define ACPI_GPE_DISPATCH_TYPE(flags)   ((u8) ((flags) & ACPI_GPE_DISPATCH_MASK))

-#define ACPI_GPE_LEVEL_TRIGGERED        (u8) 0x04
+#define ACPI_GPE_LEVEL_TRIGGERED        (u8) 0x08
 #define ACPI_GPE_EDGE_TRIGGERED         (u8) 0x00
-#define ACPI_GPE_XRUPT_TYPE_MASK        (u8) 0x04
+#define ACPI_GPE_XRUPT_TYPE_MASK        (u8) 0x08

-#define ACPI_GPE_CAN_WAKE               (u8) 0x08
+#define ACPI_GPE_CAN_WAKE               (u8) 0x10

 /*
  * Flags for GPE and Lock interfaces
+1 -1
include/acpi/platform/acenv.h
···
  *****************************************************************************/

 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/platform/acenvex.h
···
  *****************************************************************************/

 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/platform/acgcc.h
···
  *****************************************************************************/

 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/platform/aclinux.h
···
  *****************************************************************************/

 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/platform/aclinuxex.h
···
  *****************************************************************************/

 /*
- * Copyright (C) 2000 - 2014, Intel Corp.
+ * Copyright (C) 2000 - 2015, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
+15 -7
include/linux/acpi.h
···

 #include <linux/errno.h>
 #include <linux/ioport.h>	/* for struct resource */
+#include <linux/resource_ext.h>
 #include <linux/device.h>
 #include <linux/property.h>
···
 int acpi_map_cpu(acpi_handle handle, int physid, int *pcpu);
 int acpi_unmap_cpu(int cpu);
 #endif				/* CONFIG_ACPI_HOTPLUG_CPU */
+
+#ifdef CONFIG_ACPI_HOTPLUG_IOAPIC
+int acpi_get_ioapic_id(acpi_handle handle, u32 gsi_base, u64 *phys_addr);
+#endif

 int acpi_register_ioapic(acpi_handle handle, u64 phys_addr, u32 gsi_base);
 int acpi_unregister_ioapic(acpi_handle handle, u32 gsi_base);
···
 bool acpi_dev_resource_memory(struct acpi_resource *ares, struct resource *res);
 bool acpi_dev_resource_io(struct acpi_resource *ares, struct resource *res);
 bool acpi_dev_resource_address_space(struct acpi_resource *ares,
-				     struct resource *res);
+				     struct resource_win *win);
 bool acpi_dev_resource_ext_address_space(struct acpi_resource *ares,
-					 struct resource *res);
+					 struct resource_win *win);
 unsigned long acpi_dev_irq_flags(u8 triggering, u8 polarity, u8 shareable);
 bool acpi_dev_resource_interrupt(struct acpi_resource *ares, int index,
 				 struct resource *res);
-
-struct resource_list_entry {
-	struct list_head node;
-	struct resource res;
-};

 void acpi_dev_free_resource_list(struct list_head *list);
 int acpi_dev_get_resources(struct acpi_device *adev, struct list_head *list,
 			   int (*preproc)(struct acpi_resource *, void *),
 			   void *preproc_data);
+int acpi_dev_filter_resource_type(struct acpi_resource *ares,
+				  unsigned long types);
+
+static inline int acpi_dev_filter_resource_type_cb(struct acpi_resource *ares,
+						   void *arg)
+{
+	return acpi_dev_filter_resource_type(ares, (unsigned long)arg);
+}

 int acpi_check_resource_conflict(const struct resource *res);
+5 -5
include/linux/cpufreq.h
···
 	unsigned int		shared_type; /* ACPI: ANY or ALL affected CPUs
 						should set cpufreq */
 	unsigned int		cpu;	/* cpu nr of CPU managing this policy */
-	unsigned int		last_cpu; /* cpu nr of previous CPU that managed
-					   * this policy */
 	struct clk		*clk;
 	struct cpufreq_cpuinfo	cpuinfo;/* see above */
···
 	spinlock_t		transition_lock;
 	wait_queue_head_t	transition_wait;
 	struct task_struct	*transition_task; /* Task which is doing the transition */
+
+	/* cpufreq-stats */
+	struct cpufreq_stats	*stats;

 	/* For cpufreq driver's internal use */
 	void			*driver_data;
···
 #define CPUFREQ_INCOMPATIBLE		(1)
 #define CPUFREQ_NOTIFY			(2)
 #define CPUFREQ_START			(3)
-#define CPUFREQ_UPDATE_POLICY_CPU	(4)
-#define CPUFREQ_CREATE_POLICY		(5)
-#define CPUFREQ_REMOVE_POLICY		(6)
+#define CPUFREQ_CREATE_POLICY		(4)
+#define CPUFREQ_REMOVE_POLICY		(5)

 #ifdef CONFIG_CPU_FREQ
 int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list);
+196
include/linux/devfreq-event.h
··· 1 + /* 2 + * devfreq-event: a framework to provide raw data and events of devfreq devices 3 + * 4 + * Copyright (C) 2014 Samsung Electronics 5 + * Author: Chanwoo Choi <cw00.choi@samsung.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #ifndef __LINUX_DEVFREQ_EVENT_H__ 13 + #define __LINUX_DEVFREQ_EVENT_H__ 14 + 15 + #include <linux/device.h> 16 + 17 + /** 18 + * struct devfreq_event_dev - the devfreq-event device 19 + * 20 + * @node : Contains the devfreq-event devices that have been registered. 21 + * @dev : the device registered by devfreq-event class. dev.parent is 22 + * the device using devfreq-event. 23 + * @lock : a mutex to protect accessing devfreq-event. 24 + * @enable_count: the number of times the enable function has been called. 25 + * @desc : the description for devfreq-event device. 26 + * 27 + * This structure contains devfreq-event device information. 28 + */ 29 + struct devfreq_event_dev { 30 + struct list_head node; 31 + 32 + struct device dev; 33 + struct mutex lock; 34 + u32 enable_count; 35 + 36 + const struct devfreq_event_desc *desc; 37 + }; 38 + 39 + /** 40 + * struct devfreq_event_data - the devfreq-event data 41 + * 42 + * @load_count : load count of devfreq-event device for the given period. 43 + * @total_count : total count of devfreq-event device for the given period. 44 + * each count may represent a clock cycle, a time unit 45 + * (ns/us/...), or anything the device driver wants. 46 + * Generally, utilization is load_count / total_count. 47 + * 48 + * This structure contains the data of the devfreq-event device for a polling period. 
49 + */ 50 + struct devfreq_event_data { 51 + unsigned long load_count; 52 + unsigned long total_count; 53 + }; 54 + 55 + /** 56 + * struct devfreq_event_ops - the operations of devfreq-event device 57 + * 58 + * @enable : Enable the devfreq-event device. 59 + * @disable : Disable the devfreq-event device. 60 + * @reset : Reset all settings of the devfreq-event device. 61 + * @set_event : Set the specific event type for the devfreq-event device. 62 + * @get_event : Get the result of the devfreq-event device with specific 63 + * event type. 64 + * 65 + * This structure contains devfreq-event device operations which can be 66 + * implemented by devfreq-event device drivers. 67 + */ 68 + struct devfreq_event_ops { 69 + /* Optional functions */ 70 + int (*enable)(struct devfreq_event_dev *edev); 71 + int (*disable)(struct devfreq_event_dev *edev); 72 + int (*reset)(struct devfreq_event_dev *edev); 73 + 74 + /* Mandatory functions */ 75 + int (*set_event)(struct devfreq_event_dev *edev); 76 + int (*get_event)(struct devfreq_event_dev *edev, 77 + struct devfreq_event_data *edata); 78 + }; 79 + 80 + /** 81 + * struct devfreq_event_desc - the descriptor of devfreq-event device 82 + * 83 + * @name : the name of devfreq-event device. 84 + * @driver_data : the private data for devfreq-event driver. 85 + * @ops : the operations to control the devfreq-event device. 86 + * 87 + * Each devfreq-event device is described with this structure. 88 + * This structure contains the various data for devfreq-event device. 
89 + */ 90 + struct devfreq_event_desc { 91 + const char *name; 92 + void *driver_data; 93 + 94 + struct devfreq_event_ops *ops; 95 + }; 96 + 97 + #if defined(CONFIG_PM_DEVFREQ_EVENT) 98 + extern int devfreq_event_enable_edev(struct devfreq_event_dev *edev); 99 + extern int devfreq_event_disable_edev(struct devfreq_event_dev *edev); 100 + extern bool devfreq_event_is_enabled(struct devfreq_event_dev *edev); 101 + extern int devfreq_event_set_event(struct devfreq_event_dev *edev); 102 + extern int devfreq_event_get_event(struct devfreq_event_dev *edev, 103 + struct devfreq_event_data *edata); 104 + extern int devfreq_event_reset_event(struct devfreq_event_dev *edev); 105 + extern struct devfreq_event_dev *devfreq_event_get_edev_by_phandle( 106 + struct device *dev, int index); 107 + extern int devfreq_event_get_edev_count(struct device *dev); 108 + extern struct devfreq_event_dev *devfreq_event_add_edev(struct device *dev, 109 + struct devfreq_event_desc *desc); 110 + extern int devfreq_event_remove_edev(struct devfreq_event_dev *edev); 111 + extern struct devfreq_event_dev *devm_devfreq_event_add_edev(struct device *dev, 112 + struct devfreq_event_desc *desc); 113 + extern void devm_devfreq_event_remove_edev(struct device *dev, 114 + struct devfreq_event_dev *edev); 115 + static inline void *devfreq_event_get_drvdata(struct devfreq_event_dev *edev) 116 + { 117 + return edev->desc->driver_data; 118 + } 119 + #else 120 + static inline int devfreq_event_enable_edev(struct devfreq_event_dev *edev) 121 + { 122 + return -EINVAL; 123 + } 124 + 125 + static inline int devfreq_event_disable_edev(struct devfreq_event_dev *edev) 126 + { 127 + return -EINVAL; 128 + } 129 + 130 + static inline bool devfreq_event_is_enabled(struct devfreq_event_dev *edev) 131 + { 132 + return false; 133 + } 134 + 135 + static inline int devfreq_event_set_event(struct devfreq_event_dev *edev) 136 + { 137 + return -EINVAL; 138 + } 139 + 140 + static inline int devfreq_event_get_event(struct 
devfreq_event_dev *edev, 141 + struct devfreq_event_data *edata) 142 + { 143 + return -EINVAL; 144 + } 145 + 146 + static inline int devfreq_event_reset_event(struct devfreq_event_dev *edev) 147 + { 148 + return -EINVAL; 149 + } 150 + 151 + static inline void *devfreq_event_get_drvdata(struct devfreq_event_dev *edev) 152 + { 153 + return ERR_PTR(-EINVAL); 154 + } 155 + 156 + static inline struct devfreq_event_dev *devfreq_event_get_edev_by_phandle( 157 + struct device *dev, int index) 158 + { 159 + return ERR_PTR(-EINVAL); 160 + } 161 + 162 + static inline int devfreq_event_get_edev_count(struct device *dev) 163 + { 164 + return -EINVAL; 165 + } 166 + 167 + static inline struct devfreq_event_dev *devfreq_event_add_edev(struct device *dev, 168 + struct devfreq_event_desc *desc) 169 + { 170 + return ERR_PTR(-EINVAL); 171 + } 172 + 173 + static inline int devfreq_event_remove_edev(struct devfreq_event_dev *edev) 174 + { 175 + return -EINVAL; 176 + } 177 + 178 + static inline struct devfreq_event_dev *devm_devfreq_event_add_edev( 179 + struct device *dev, 180 + struct devfreq_event_desc *desc) 181 + { 182 + return ERR_PTR(-EINVAL); 183 + } 184 + 185 + static inline void devm_devfreq_event_remove_edev(struct device *dev, 186 + struct devfreq_event_dev *edev) 187 + { 188 + } 189 + 190 + static inline void *devfreq_event_get_drvdata(struct devfreq_event_dev *edev) 191 + { 192 + return NULL; 193 + } 194 + #endif /* CONFIG_PM_DEVFREQ_EVENT */ 195 + 196 + #endif /* __LINUX_DEVFREQ_EVENT_H__ */
+2 -7
include/linux/pci.h
··· 29 29 #include <linux/atomic.h> 30 30 #include <linux/device.h> 31 31 #include <linux/io.h> 32 + #include <linux/resource_ext.h> 32 33 #include <uapi/linux/pci.h> 33 34 34 35 #include <linux/pci_ids.h> ··· 400 399 return (pdev->error_state != pci_channel_io_normal); 401 400 } 402 401 403 - struct pci_host_bridge_window { 404 - struct list_head list; 405 - struct resource *res; /* host bridge aperture (CPU address) */ 406 - resource_size_t offset; /* bus address + offset = CPU address */ 407 - }; 408 - 409 402 struct pci_host_bridge { 410 403 struct device dev; 411 404 struct pci_bus *bus; /* root bus */ 412 - struct list_head windows; /* pci_host_bridge_windows */ 405 + struct list_head windows; /* resource_entry */ 413 406 void (*release_fn)(struct pci_host_bridge *); 414 407 void *release_data; 415 408 };
+1 -1
include/linux/pm.h
··· 597 597 598 598 extern void update_pm_runtime_accounting(struct device *dev); 599 599 extern int dev_pm_get_subsys_data(struct device *dev); 600 - extern int dev_pm_put_subsys_data(struct device *dev); 600 + extern void dev_pm_put_subsys_data(struct device *dev); 601 601 602 602 /* 603 603 * Power domains provide callbacks that are executed during system suspend,
-4
include/linux/pm_domain.h
··· 113 113 struct pm_domain_data base; 114 114 struct gpd_timing_data td; 115 115 struct notifier_block nb; 116 - struct mutex lock; 117 - unsigned int refcount; 118 116 int need_restore; 119 117 }; 120 118 ··· 138 140 139 141 extern int pm_genpd_remove_device(struct generic_pm_domain *genpd, 140 142 struct device *dev); 141 - extern void pm_genpd_dev_need_restore(struct device *dev, bool val); 142 143 extern int pm_genpd_add_subdomain(struct generic_pm_domain *genpd, 143 144 struct generic_pm_domain *new_subdomain); 144 145 extern int pm_genpd_add_subdomain_names(const char *master_name, ··· 184 187 { 185 188 return -ENOSYS; 186 189 } 187 - static inline void pm_genpd_dev_need_restore(struct device *dev, bool val) {} 188 190 static inline int pm_genpd_add_subdomain(struct generic_pm_domain *genpd, 189 191 struct generic_pm_domain *new_sd) 190 192 {
+77
include/linux/resource_ext.h
··· 1 + /* 2 + * Copyright (C) 2015, Intel Corporation 3 + * Author: Jiang Liu <jiang.liu@linux.intel.com> 4 + * 5 + * This program is free software; you can redistribute it and/or modify it 6 + * under the terms and conditions of the GNU General Public License, 7 + * version 2, as published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope it will be useful, but WITHOUT 10 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 + * more details. 13 + */ 14 + #ifndef _LINUX_RESOURCE_EXT_H 15 + #define _LINUX_RESOURCE_EXT_H 16 + #include <linux/types.h> 17 + #include <linux/list.h> 18 + #include <linux/ioport.h> 19 + #include <linux/slab.h> 20 + 21 + /* Represent resource window for bridge devices */ 22 + struct resource_win { 23 + struct resource res; /* In master (CPU) address space */ 24 + resource_size_t offset; /* Translation offset for bridge */ 25 + }; 26 + 27 + /* 28 + * Common resource list management data structure and interfaces to support 29 + * ACPI, PNP and PCI host bridge etc. 
30 + */ 31 + struct resource_entry { 32 + struct list_head node; 33 + struct resource *res; /* In master (CPU) address space */ 34 + resource_size_t offset; /* Translation offset for bridge */ 35 + struct resource __res; /* Default storage for res */ 36 + }; 37 + 38 + extern struct resource_entry * 39 + resource_list_create_entry(struct resource *res, size_t extra_size); 40 + extern void resource_list_free(struct list_head *head); 41 + 42 + static inline void resource_list_add(struct resource_entry *entry, 43 + struct list_head *head) 44 + { 45 + list_add(&entry->node, head); 46 + } 47 + 48 + static inline void resource_list_add_tail(struct resource_entry *entry, 49 + struct list_head *head) 50 + { 51 + list_add_tail(&entry->node, head); 52 + } 53 + 54 + static inline void resource_list_del(struct resource_entry *entry) 55 + { 56 + list_del(&entry->node); 57 + } 58 + 59 + static inline void resource_list_free_entry(struct resource_entry *entry) 60 + { 61 + kfree(entry); 62 + } 63 + 64 + static inline void 65 + resource_list_destroy_entry(struct resource_entry *entry) 66 + { 67 + resource_list_del(entry); 68 + resource_list_free_entry(entry); 69 + } 70 + 71 + #define resource_list_for_each_entry(entry, list) \ 72 + list_for_each_entry((entry), (list), node) 73 + 74 + #define resource_list_for_each_entry_safe(entry, tmp, list) \ 75 + list_for_each_entry_safe((entry), (tmp), (list), node) 76 + 77 + #endif /* _LINUX_RESOURCE_EXT_H */
+89 -2
kernel/power/qos.c
··· 41 41 #include <linux/platform_device.h> 42 42 #include <linux/init.h> 43 43 #include <linux/kernel.h> 44 + #include <linux/debugfs.h> 45 + #include <linux/seq_file.h> 44 46 45 47 #include <linux/uaccess.h> 46 48 #include <linux/export.h> ··· 183 181 { 184 182 c->target_value = value; 185 183 } 184 + 185 + static inline int pm_qos_get_value(struct pm_qos_constraints *c); 186 + static int pm_qos_dbg_show_requests(struct seq_file *s, void *unused) 187 + { 188 + struct pm_qos_object *qos = (struct pm_qos_object *)s->private; 189 + struct pm_qos_constraints *c; 190 + struct pm_qos_request *req; 191 + char *type; 192 + unsigned long flags; 193 + int tot_reqs = 0; 194 + int active_reqs = 0; 195 + 196 + if (IS_ERR_OR_NULL(qos)) { 197 + pr_err("%s: bad qos param!\n", __func__); 198 + return -EINVAL; 199 + } 200 + c = qos->constraints; 201 + if (IS_ERR_OR_NULL(c)) { 202 + pr_err("%s: Bad constraints on qos?\n", __func__); 203 + return -EINVAL; 204 + } 205 + 206 + /* Lock to ensure we have a snapshot */ 207 + spin_lock_irqsave(&pm_qos_lock, flags); 208 + if (plist_head_empty(&c->list)) { 209 + seq_puts(s, "Empty!\n"); 210 + goto out; 211 + } 212 + 213 + switch (c->type) { 214 + case PM_QOS_MIN: 215 + type = "Minimum"; 216 + break; 217 + case PM_QOS_MAX: 218 + type = "Maximum"; 219 + break; 220 + case PM_QOS_SUM: 221 + type = "Sum"; 222 + break; 223 + default: 224 + type = "Unknown"; 225 + } 226 + 227 + plist_for_each_entry(req, &c->list, node) { 228 + char *state = "Default"; 229 + 230 + if ((req->node).prio != c->default_value) { 231 + active_reqs++; 232 + state = "Active"; 233 + } 234 + tot_reqs++; 235 + seq_printf(s, "%d: %d: %s\n", tot_reqs, 236 + (req->node).prio, state); 237 + } 238 + 239 + seq_printf(s, "Type=%s, Value=%d, Requests: active=%d / total=%d\n", 240 + type, pm_qos_get_value(c), active_reqs, tot_reqs); 241 + 242 + out: 243 + spin_unlock_irqrestore(&pm_qos_lock, flags); 244 + return 0; 245 + } 246 + 247 + static int pm_qos_dbg_open(struct inode *inode, 
struct file *file) 248 + { 249 + return single_open(file, pm_qos_dbg_show_requests, 250 + inode->i_private); 251 + } 252 + 253 + static const struct file_operations pm_qos_debug_fops = { 254 + .open = pm_qos_dbg_open, 255 + .read = seq_read, 256 + .llseek = seq_lseek, 257 + .release = single_release, 258 + }; 186 259 187 260 /** 188 261 * pm_qos_update_target - manages the constraints list and calls the notifiers ··· 586 509 EXPORT_SYMBOL_GPL(pm_qos_remove_notifier); 587 510 588 511 /* User space interface to PM QoS classes via misc devices */ 589 - static int register_pm_qos_misc(struct pm_qos_object *qos) 512 + static int register_pm_qos_misc(struct pm_qos_object *qos, struct dentry *d) 590 513 { 591 514 qos->pm_qos_power_miscdev.minor = MISC_DYNAMIC_MINOR; 592 515 qos->pm_qos_power_miscdev.name = qos->name; 593 516 qos->pm_qos_power_miscdev.fops = &pm_qos_power_fops; 517 + 518 + if (d) { 519 + (void)debugfs_create_file(qos->name, S_IRUGO, d, 520 + (void *)qos, &pm_qos_debug_fops); 521 + } 594 522 595 523 return misc_register(&qos->pm_qos_power_miscdev); 596 524 } ··· 690 608 { 691 609 int ret = 0; 692 610 int i; 611 + struct dentry *d; 693 612 694 613 BUILD_BUG_ON(ARRAY_SIZE(pm_qos_array) != PM_QOS_NUM_CLASSES); 695 614 615 + d = debugfs_create_dir("pm_qos", NULL); 616 + if (IS_ERR_OR_NULL(d)) 617 + d = NULL; 618 + 696 619 for (i = PM_QOS_CPU_DMA_LATENCY; i < PM_QOS_NUM_CLASSES; i++) { 697 - ret = register_pm_qos_misc(pm_qos_array[i]); 620 + ret = register_pm_qos_misc(pm_qos_array[i], d); 698 621 if (ret < 0) { 699 622 printk(KERN_ERR "pm_qos_param: %s setup failed\n", 700 623 pm_qos_array[i]->name);
+6 -5
kernel/power/snapshot.c
··· 1472 1472 /** 1473 1473 * free_unnecessary_pages - Release preallocated pages not needed for the image 1474 1474 */ 1475 - static void free_unnecessary_pages(void) 1475 + static unsigned long free_unnecessary_pages(void) 1476 1476 { 1477 - unsigned long save, to_free_normal, to_free_highmem; 1477 + unsigned long save, to_free_normal, to_free_highmem, free; 1478 1478 1479 1479 save = count_data_pages(); 1480 1480 if (alloc_normal >= save) { ··· 1495 1495 else 1496 1496 to_free_normal = 0; 1497 1497 } 1498 + free = to_free_normal + to_free_highmem; 1498 1499 1499 1500 memory_bm_position_reset(&copy_bm); 1500 1501 ··· 1519 1518 swsusp_unset_page_free(page); 1520 1519 __free_page(page); 1521 1520 } 1521 + 1522 + return free; 1522 1523 } 1523 1524 1524 1525 /** ··· 1710 1707 * pages in memory, but we have allocated more. Release the excessive 1711 1708 * ones now. 1712 1709 */ 1713 - free_unnecessary_pages(); 1710 + pages -= free_unnecessary_pages(); 1714 1711 1715 1712 out: 1716 1713 stop = ktime_get(); ··· 2313 2310 free_image_page(buffer, PG_UNSAFE_CLEAR); 2314 2311 } 2315 2312 #else 2316 - static inline int get_safe_write_buffer(void) { return 0; } 2317 - 2318 2313 static unsigned int 2319 2314 count_highmem_image_pages(struct memory_bitmap *bm) { return 0; } 2320 2315
+25
kernel/resource.c
··· 22 22 #include <linux/device.h> 23 23 #include <linux/pfn.h> 24 24 #include <linux/mm.h> 25 + #include <linux/resource_ext.h> 25 26 #include <asm/io.h> 26 27 27 28 ··· 1529 1528 1530 1529 return err; 1531 1530 } 1531 + 1532 + struct resource_entry *resource_list_create_entry(struct resource *res, 1533 + size_t extra_size) 1534 + { 1535 + struct resource_entry *entry; 1536 + 1537 + entry = kzalloc(sizeof(*entry) + extra_size, GFP_KERNEL); 1538 + if (entry) { 1539 + INIT_LIST_HEAD(&entry->node); 1540 + entry->res = res ? res : &entry->__res; 1541 + } 1542 + 1543 + return entry; 1544 + } 1545 + EXPORT_SYMBOL(resource_list_create_entry); 1546 + 1547 + void resource_list_free(struct list_head *head) 1548 + { 1549 + struct resource_entry *entry, *tmp; 1550 + 1551 + list_for_each_entry_safe(entry, tmp, head, node) 1552 + resource_list_destroy_entry(entry); 1553 + } 1554 + EXPORT_SYMBOL(resource_list_free); 1532 1555 1533 1556 static int __init strict_iomem(char *str) 1534 1557 {
+1
kernel/trace/power-traces.c
··· 13 13 #define CREATE_TRACE_POINTS 14 14 #include <trace/events/power.h> 15 15 16 + EXPORT_TRACEPOINT_SYMBOL_GPL(suspend_resume); 16 17 EXPORT_TRACEPOINT_SYMBOL_GPL(cpu_idle); 17 18
+1 -1
tools/power/acpi/common/cmfsize.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
tools/power/acpi/common/getopt.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
tools/power/acpi/os_specific/service_layers/oslibcfs.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
tools/power/acpi/os_specific/service_layers/oslinuxtbl.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
tools/power/acpi/os_specific/service_layers/osunixdir.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
tools/power/acpi/os_specific/service_layers/osunixmap.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
tools/power/acpi/os_specific/service_layers/osunixxf.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
tools/power/acpi/tools/acpidump/acpidump.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
tools/power/acpi/tools/acpidump/apdump.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
tools/power/acpi/tools/acpidump/apfiles.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
tools/power/acpi/tools/acpidump/apmain.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2014, Intel Corp. 8 + * Copyright (C) 2000 - 2015, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
tools/power/cpupower/Makefile
··· 209 209 210 210 $(OUTPUT)cpupower: $(UTIL_OBJS) $(OUTPUT)libcpupower.so.$(LIB_MAJ) 211 211 $(ECHO) " CC " $@ 212 - $(QUIET) $(CC) $(CFLAGS) $(LDFLAGS) $(UTIL_OBJS) -lcpupower -lrt -lpci -L$(OUTPUT) -o $@ 212 + $(QUIET) $(CC) $(CFLAGS) $(LDFLAGS) $(UTIL_OBJS) -lcpupower -Wl,-rpath=./ -lrt -lpci -L$(OUTPUT) -o $@ 213 213 $(QUIET) $(STRIPCMD) $@ 214 214 215 215 $(OUTPUT)po/$(PACKAGE).pot: $(UTIL_SRC)
+44 -20
tools/power/x86/turbostat/turbostat.8
··· 12 12 .RB [ "\-i interval_sec" ] 13 13 .SH DESCRIPTION 14 14 \fBturbostat \fP reports processor topology, frequency, 15 - idle power-state statistics, temperature and power on modern X86 processors. 16 - Either \fBcommand\fP is forked and statistics are printed 17 - upon its completion, or statistics are printed periodically. 15 + idle power-state statistics, temperature and power on X86 processors. 16 + There are two ways to invoke turbostat. 17 + The first method is to supply a 18 + \fBcommand\fP, which is forked and statistics are printed 19 + upon its completion. 20 + The second method is to omit the command, 21 + and turbostat will print statistics every 5 seconds. 22 + The 5-second interval can be changed using the -i option. 18 23 19 - \fBturbostat \fP 20 - must be run on root, and 21 - minimally requires that the processor 22 - supports an "invariant" TSC, plus the APERF and MPERF MSRs. 23 - Additional information is reported depending on hardware counter support. 24 - 24 + Some information is not available on older processors. 25 25 .SS Options 26 26 The \fB-p\fP option limits output to the 1st thread in 1st core of each package. 27 27 .PP ··· 130 130 ... 131 131 .fi 132 132 The \fBmax efficiency\fP frequency, a.k.a. Low Frequency Mode, is the frequency 133 - available at the minimum package voltage. The \fBTSC frequency\fP is the nominal 134 - maximum frequency of the processor if turbo-mode were not available. This frequency 133 + available at the minimum package voltage. The \fBTSC frequency\fP is the base 134 + frequency of the processor -- this should match the brand string 135 + in /proc/cpuinfo. This base frequency 135 136 should be sustainable on all CPUs indefinitely, given nominal power and cooling. 136 137 The remaining rows show what maximum turbo frequency is possible 137 - depending on the number of idle cores. Note that this information is 138 - not available on all processors. 138 + depending on the number of idle cores. Note that not all information is 139 + available on all processors.
139 140 .SH FORK EXAMPLE 140 141 If turbostat is invoked with a command, it will fork that command 141 142 and output the statistics gathered when the command exits. ··· 177 176 178 177 .B "turbostat " 179 178 must be run as root. 179 + Alternatively, non-root users can be enabled to run turbostat this way: 180 + 181 + # setcap cap_sys_rawio=ep ./turbostat 182 + 183 + # chmod +r /dev/cpu/*/msr 180 184 181 185 .B "turbostat " 182 186 reads hardware counters, but doesn't write them. ··· 190 184 191 185 \fBturbostat \fP 192 186 may work poorly on Linux-2.6.20 through 2.6.29, 193 - as \fBacpi-cpufreq \fPperiodically cleared the APERF and MPERF 187 + as \fBacpi-cpufreq \fPperiodically cleared the APERF and MPERF MSRs 194 188 in those kernels. 195 189 196 - If the TSC column does not make sense, then 197 - the other numbers will also make no sense. 198 - Turbostat is lightweight, and its data collection is not atomic. 199 - These issues are usually caused by an extremely short measurement 200 - interval (much less than 1 second), or system activity that prevents 201 - turbostat from being able to run on all CPUS to quickly collect data. 190 + AVG_MHz = APERF_delta/measurement_interval. This is the actual 191 + number of elapsed cycles divided by the entire sample interval -- 192 + including idle time. Note that this calculation is resilient 193 + to systems lacking a non-stop TSC. 194 + 195 + TSC_MHz = TSC_delta/measurement_interval. 196 + On a system with an invariant TSC, this value will be constant 197 + and will closely match the base frequency value shown 198 + in the brand string in /proc/cpuinfo. On a system where 199 + the TSC stops in idle, TSC_MHz will drop 200 + below the processor's base frequency.
201 + 202 + %Busy = MPERF_delta/TSC_delta 203 + 204 + Bzy_MHz = TSC_delta*APERF_delta/MPERF_delta/measurement_interval 205 + 206 + Note that these calculations depend on TSC_delta, so they 207 + are not reliable during intervals when the TSC is not running at the base frequency. 208 + 209 + Turbostat data collection is not atomic. 210 + Extremely short measurement intervals (much less than 1 second), 211 + or system activity that prevents turbostat from being able 212 + to run on all CPUs to quickly collect data, will result in 213 + inconsistent results. 202 214 203 215 The APERF, MPERF MSRs are defined to count non-halted cycles. 204 216 Although it is not guaranteed by the architecture, turbostat assumes
+263 -73
tools/power/x86/turbostat/turbostat.c
··· 38 38 #include <ctype.h> 39 39 #include <sched.h> 40 40 #include <cpuid.h> 41 + #include <linux/capability.h> 42 + #include <errno.h> 41 43 42 44 char *proc_stat = "/proc/stat"; 43 45 unsigned int interval_sec = 5; /* set with -i interval_sec */ ··· 61 59 unsigned int units = 1000000; /* MHz etc */ 62 60 unsigned int genuine_intel; 63 61 unsigned int has_invariant_tsc; 64 - unsigned int do_nehalem_platform_info; 65 - unsigned int do_nehalem_turbo_ratio_limit; 62 + unsigned int do_nhm_platform_info; 63 + unsigned int do_nhm_turbo_ratio_limit; 66 64 unsigned int do_ivt_turbo_ratio_limit; 67 65 unsigned int extra_msr_offset32; 68 66 unsigned int extra_msr_offset64; ··· 83 81 unsigned int tcc_activation_temp_override; 84 82 double rapl_power_units, rapl_energy_units, rapl_time_units; 85 83 double rapl_joule_counter_range; 84 + unsigned int do_core_perf_limit_reasons; 85 + unsigned int do_gfx_perf_limit_reasons; 86 + unsigned int do_ring_perf_limit_reasons; 86 87 87 88 #define RAPL_PKG (1 << 0) 88 89 /* 0x610 MSR_PKG_POWER_LIMIT */ ··· 256 251 sprintf(pathname, "/dev/cpu/%d/msr", cpu); 257 252 fd = open(pathname, O_RDONLY); 258 253 if (fd < 0) 259 - return -1; 254 + err(-1, "%s open failed, try chown or chmod +r /dev/cpu/*/msr, or run as root", pathname); 260 255 261 256 retval = pread(fd, msr, sizeof *msr, offset); 262 257 close(fd); 263 258 264 - if (retval != sizeof *msr) { 265 - fprintf(stderr, "%s offset 0x%llx read failed\n", pathname, (unsigned long long)offset); 266 - return -1; 267 - } 259 + if (retval != sizeof *msr) 260 + err(-1, "%s offset 0x%llx read failed", pathname, (unsigned long long)offset); 268 261 269 262 return 0; 270 263 } ··· 284 281 outp += sprintf(outp, " CPU"); 285 282 if (has_aperf) 286 283 outp += sprintf(outp, " Avg_MHz"); 287 - if (do_nhm_cstates) 284 + if (has_aperf) 288 285 outp += sprintf(outp, " %%Busy"); 289 286 if (has_aperf) 290 287 outp += sprintf(outp, " Bzy_MHz"); ··· 340 337 outp += sprintf(outp, " PKG_%%"); 341 338 if 
(do_rapl & RAPL_DRAM_PERF_STATUS) 342 339 outp += sprintf(outp, " RAM_%%"); 343 - } else { 340 + } else if (do_rapl && rapl_joules) { 344 341 if (do_rapl & RAPL_PKG) 345 342 outp += sprintf(outp, " Pkg_J"); 346 343 if (do_rapl & RAPL_CORES) ··· 460 457 outp += sprintf(outp, "%8d", t->cpu_id); 461 458 } 462 459 463 - /* AvgMHz */ 460 + /* Avg_MHz */ 464 461 if (has_aperf) 465 462 outp += sprintf(outp, "%8.0f", 466 463 1.0 / units * t->aperf / interval_float); 467 464 468 - /* %c0 */ 469 - if (do_nhm_cstates) { 465 + /* %Busy */ 466 + if (has_aperf) { 470 467 if (!skip_c0) 471 468 outp += sprintf(outp, "%8.2f", 100.0 * t->mperf/t->tsc); 472 469 else 473 470 outp += sprintf(outp, "********"); 474 471 } 475 472 476 - /* BzyMHz */ 473 + /* Bzy_MHz */ 477 474 if (has_aperf) 478 475 outp += sprintf(outp, "%8.0f", 479 476 1.0 * t->tsc / units * t->aperf / t->mperf / interval_float); 480 477 481 - /* TSC */ 478 + /* TSC_MHz */ 482 479 outp += sprintf(outp, "%8.0f", 1.0 * t->tsc/units/interval_float); 483 480 484 481 /* SMI */ ··· 564 561 outp += sprintf(outp, fmt8, 100.0 * p->rapl_pkg_perf_status * rapl_time_units / interval_float); 565 562 if (do_rapl & RAPL_DRAM_PERF_STATUS) 566 563 outp += sprintf(outp, fmt8, 100.0 * p->rapl_dram_perf_status * rapl_time_units / interval_float); 567 - } else { 564 + } else if (do_rapl && rapl_joules) { 568 565 if (do_rapl & RAPL_PKG) 569 566 outp += sprintf(outp, fmt8, 570 567 p->energy_pkg * rapl_energy_units); ··· 581 578 outp += sprintf(outp, fmt8, 100.0 * p->rapl_pkg_perf_status * rapl_time_units / interval_float); 582 579 if (do_rapl & RAPL_DRAM_PERF_STATUS) 583 580 outp += sprintf(outp, fmt8, 100.0 * p->rapl_dram_perf_status * rapl_time_units / interval_float); 584 - outp += sprintf(outp, fmt8, interval_float); 585 581 582 + outp += sprintf(outp, fmt8, interval_float); 586 583 } 587 584 done: 588 585 outp += sprintf(outp, "\n"); ··· 673 670 674 671 old->c1 = new->c1 - old->c1; 675 672 676 - if ((new->aperf > old->aperf) && 
-		    (new->mperf > old->mperf)) {
-		old->aperf = new->aperf - old->aperf;
-		old->mperf = new->mperf - old->mperf;
-	} else {
+	if (has_aperf) {
+		if ((new->aperf > old->aperf) && (new->mperf > old->mperf)) {
+			old->aperf = new->aperf - old->aperf;
+			old->mperf = new->mperf - old->mperf;
+		} else {
 
-		if (!aperf_mperf_unstable) {
-			fprintf(stderr, "%s: APERF or MPERF went backwards *\n", progname);
-			fprintf(stderr, "* Frequency results do not cover entire interval *\n");
-			fprintf(stderr, "* fix this by running Linux-2.6.30 or later *\n");
+			if (!aperf_mperf_unstable) {
+				fprintf(stderr, "%s: APERF or MPERF went backwards *\n", progname);
+				fprintf(stderr, "* Frequency results do not cover entire interval *\n");
+				fprintf(stderr, "* fix this by running Linux-2.6.30 or later *\n");
 
-			aperf_mperf_unstable = 1;
+				aperf_mperf_unstable = 1;
+			}
+			/*
+			 * mperf delta is likely a huge "positive" number
+			 * can not use it for calculating c0 time
+			 */
+			skip_c0 = 1;
+			skip_c1 = 1;
 		}
-		/*
-		 * mperf delta is likely a huge "positive" number
-		 * can not use it for calculating c0 time
-		 */
-		skip_c0 = 1;
-		skip_c1 = 1;
 	}
 
···
 	unsigned long long msr;
 	unsigned int ratio;
 
-	if (!do_nehalem_platform_info)
+	if (!do_nhm_platform_info)
 		return;
 
 	get_msr(0, MSR_NHM_PLATFORM_INFO, &msr);
···
 	}
 	fprintf(stderr, ")\n");
 
-	if (!do_nehalem_turbo_ratio_limit)
+	if (!do_nhm_turbo_ratio_limit)
 		return;
 
 	get_msr(0, MSR_NHM_TURBO_RATIO_LIMIT, &msr);
···
 	if (ratio)
 		fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 1 active cores\n",
 			ratio, bclk, ratio * bclk);
+
 }
 
 void free_all_buffers(void)
···
 	struct stat sb;
 
 	if (stat("/dev/cpu/0/msr", &sb))
-		err(-5, "no /dev/cpu/0/msr\n"
-		"Try \"# modprobe msr\"");
+		err(-5, "no /dev/cpu/0/msr, Try \"# modprobe msr\" ");
 }
 
-void check_super_user()
+void check_permissions()
 {
-	if (getuid() != 0)
-		errx(-6, "must be root");
+	struct __user_cap_header_struct cap_header_data;
+	cap_user_header_t cap_header = &cap_header_data;
+	struct __user_cap_data_struct cap_data_data;
+	cap_user_data_t cap_data = &cap_data_data;
+	extern int capget(cap_user_header_t hdrp, cap_user_data_t datap);
+	int do_exit = 0;
+
+	/* check for CAP_SYS_RAWIO */
+	cap_header->pid = getpid();
+	cap_header->version = _LINUX_CAPABILITY_VERSION;
+	if (capget(cap_header, cap_data) < 0)
+		err(-6, "capget(2) failed");
+
+	if ((cap_data->effective & (1 << CAP_SYS_RAWIO)) == 0) {
+		do_exit++;
+		warnx("capget(CAP_SYS_RAWIO) failed,"
+			" try \"# setcap cap_sys_rawio=ep %s\"", progname);
+	}
+
+	/* test file permissions */
+	if (euidaccess("/dev/cpu/0/msr", R_OK)) {
+		do_exit++;
+		warn("/dev/cpu/0/msr open failed, try chown or chmod +r /dev/cpu/*/msr");
+	}
+
+	/* if all else fails, tell them to be root */
+	if (do_exit)
+		if (getuid() != 0)
+			warnx("... or simply run as root");
+
+	if (do_exit)
+		exit(-6);
 }
 
-int has_nehalem_turbo_ratio_limit(unsigned int family, unsigned int model)
+/*
+ * NHM adds support for additional MSRs:
+ *
+ * MSR_SMI_COUNT                0x00000034
+ *
+ * MSR_NHM_PLATFORM_INFO        0x000000ce
+ * MSR_NHM_SNB_PKG_CST_CFG_CTL  0x000000e2
+ *
+ * MSR_PKG_C3_RESIDENCY         0x000003f8
+ * MSR_PKG_C6_RESIDENCY         0x000003f9
+ * MSR_CORE_C3_RESIDENCY        0x000003fc
+ * MSR_CORE_C6_RESIDENCY        0x000003fd
+ *
+ */
+int has_nhm_msrs(unsigned int family, unsigned int model)
 {
 	if (!genuine_intel)
 		return 0;
···
 	case 0x3D:	/* BDW */
 	case 0x4F:	/* BDX */
 	case 0x56:	/* BDX-DE */
-		return 1;
 	case 0x2E:	/* Nehalem-EX Xeon - Beckton */
 	case 0x2F:	/* Westmere-EX Xeon - Eagleton */
+		return 1;
 	default:
 		return 0;
+	}
+}
+int has_nhm_turbo_ratio_limit(unsigned int family, unsigned int model)
+{
+	if (!has_nhm_msrs(family, model))
+		return 0;
+
+	switch (model) {
+	/* Nehalem compatible, but do not include turbo-ratio limit support */
+	case 0x2E:	/* Nehalem-EX Xeon - Beckton */
+	case 0x2F:	/* Westmere-EX Xeon - Eagleton */
+		return 0;
+	default:
+		return 1;
 	}
 }
 int has_ivt_turbo_ratio_limit(unsigned int family, unsigned int model)
···
 	}
 	fprintf(stderr, "cpu%d: MSR_IA32_ENERGY_PERF_BIAS: 0x%08llx (%s)\n", cpu, msr, epb_string);
 
+	return 0;
+}
+
+/*
+ * print_perf_limit()
+ */
+int print_perf_limit(struct thread_data *t, struct core_data *c, struct pkg_data *p)
+{
+	unsigned long long msr;
+	int cpu;
+
+	cpu = t->cpu_id;
+
+	/* per-package */
+	if (!(t->flags & CPU_IS_FIRST_THREAD_IN_CORE) || !(t->flags & CPU_IS_FIRST_CORE_IN_PACKAGE))
+		return 0;
+
+	if (cpu_migrate(cpu)) {
+		fprintf(stderr, "Could not migrate to CPU %d\n", cpu);
+		return -1;
+	}
+
+	if (do_core_perf_limit_reasons) {
+		get_msr(cpu, MSR_CORE_PERF_LIMIT_REASONS, &msr);
+		fprintf(stderr, "cpu%d: MSR_CORE_PERF_LIMIT_REASONS, 0x%08llx", cpu, msr);
+		fprintf(stderr, " (Active: %s%s%s%s%s%s%s%s%s%s%s%s%s%s)",
+			(msr & 1 << 0) ? "PROCHOT, " : "",
+			(msr & 1 << 1) ? "ThermStatus, " : "",
+			(msr & 1 << 2) ? "bit2, " : "",
+			(msr & 1 << 4) ? "Graphics, " : "",
+			(msr & 1 << 5) ? "Auto-HWP, " : "",
+			(msr & 1 << 6) ? "VR-Therm, " : "",
+			(msr & 1 << 8) ? "Amps, " : "",
+			(msr & 1 << 9) ? "CorePwr, " : "",
+			(msr & 1 << 10) ? "PkgPwrL1, " : "",
+			(msr & 1 << 11) ? "PkgPwrL2, " : "",
+			(msr & 1 << 12) ? "MultiCoreTurbo, " : "",
+			(msr & 1 << 13) ? "Transitions, " : "",
+			(msr & 1 << 14) ? "bit14, " : "",
+			(msr & 1 << 15) ? "bit15, " : "");
+		fprintf(stderr, " (Logged: %s%s%s%s%s%s%s%s%s%s%s%s%s%s)\n",
+			(msr & 1 << 16) ? "PROCHOT, " : "",
+			(msr & 1 << 17) ? "ThermStatus, " : "",
+			(msr & 1 << 18) ? "bit18, " : "",
+			(msr & 1 << 20) ? "Graphics, " : "",
+			(msr & 1 << 21) ? "Auto-HWP, " : "",
+			(msr & 1 << 22) ? "VR-Therm, " : "",
+			(msr & 1 << 24) ? "Amps, " : "",
+			(msr & 1 << 25) ? "CorePwr, " : "",
+			(msr & 1 << 26) ? "PkgPwrL1, " : "",
+			(msr & 1 << 27) ? "PkgPwrL2, " : "",
+			(msr & 1 << 28) ? "MultiCoreTurbo, " : "",
+			(msr & 1 << 29) ? "Transitions, " : "",
+			(msr & 1 << 30) ? "bit30, " : "",
+			(msr & 1 << 31) ? "bit31, " : "");
+
+	}
+	if (do_gfx_perf_limit_reasons) {
+		get_msr(cpu, MSR_GFX_PERF_LIMIT_REASONS, &msr);
+		fprintf(stderr, "cpu%d: MSR_GFX_PERF_LIMIT_REASONS, 0x%08llx", cpu, msr);
+		fprintf(stderr, " (Active: %s%s%s%s%s%s%s%s)",
+			(msr & 1 << 0) ? "PROCHOT, " : "",
+			(msr & 1 << 1) ? "ThermStatus, " : "",
+			(msr & 1 << 4) ? "Graphics, " : "",
+			(msr & 1 << 6) ? "VR-Therm, " : "",
+			(msr & 1 << 8) ? "Amps, " : "",
+			(msr & 1 << 9) ? "GFXPwr, " : "",
+			(msr & 1 << 10) ? "PkgPwrL1, " : "",
+			(msr & 1 << 11) ? "PkgPwrL2, " : "");
+		fprintf(stderr, " (Logged: %s%s%s%s%s%s%s%s)\n",
+			(msr & 1 << 16) ? "PROCHOT, " : "",
+			(msr & 1 << 17) ? "ThermStatus, " : "",
+			(msr & 1 << 20) ? "Graphics, " : "",
+			(msr & 1 << 22) ? "VR-Therm, " : "",
+			(msr & 1 << 24) ? "Amps, " : "",
+			(msr & 1 << 25) ? "GFXPwr, " : "",
+			(msr & 1 << 26) ? "PkgPwrL1, " : "",
+			(msr & 1 << 27) ? "PkgPwrL2, " : "");
+	}
+	if (do_ring_perf_limit_reasons) {
+		get_msr(cpu, MSR_RING_PERF_LIMIT_REASONS, &msr);
+		fprintf(stderr, "cpu%d: MSR_RING_PERF_LIMIT_REASONS, 0x%08llx", cpu, msr);
+		fprintf(stderr, " (Active: %s%s%s%s%s%s)",
+			(msr & 1 << 0) ? "PROCHOT, " : "",
+			(msr & 1 << 1) ? "ThermStatus, " : "",
+			(msr & 1 << 6) ? "VR-Therm, " : "",
+			(msr & 1 << 8) ? "Amps, " : "",
+			(msr & 1 << 10) ? "PkgPwrL1, " : "",
+			(msr & 1 << 11) ? "PkgPwrL2, " : "");
+		fprintf(stderr, " (Logged: %s%s%s%s%s%s)\n",
+			(msr & 1 << 16) ? "PROCHOT, " : "",
+			(msr & 1 << 17) ? "ThermStatus, " : "",
+			(msr & 1 << 22) ? "VR-Therm, " : "",
+			(msr & 1 << 24) ? "Amps, " : "",
+			(msr & 1 << 26) ? "PkgPwrL1, " : "",
+			(msr & 1 << 27) ? "PkgPwrL2, " : "");
+	}
 	return 0;
 }
···
 	fprintf(stderr, "RAPL: %.0f sec. Joule Counter Range, at %.0f Watts\n", rapl_joule_counter_range, tdp);
 
 	return;
+}
+
+void perf_limit_reasons_probe(unsigned int family, unsigned int model)
+{
+	if (!genuine_intel)
+		return;
+
+	if (family != 6)
+		return;
+
+	switch (model) {
+	case 0x3C:	/* HSW */
+	case 0x45:	/* HSW */
+	case 0x46:	/* HSW */
+		do_gfx_perf_limit_reasons = 1;
+	case 0x3F:	/* HSX */
+		do_core_perf_limit_reasons = 1;
+		do_ring_perf_limit_reasons = 1;
+	default:
+		return;
+	}
 }
 
 int print_thermal(struct thread_data *t, struct core_data *c, struct pkg_data *p)
···
 	return 0;
 }
 
+/*
+ * SNB adds support for additional MSRs:
+ *
+ * MSR_PKG_C7_RESIDENCY         0x000003fa
+ * MSR_CORE_C7_RESIDENCY        0x000003fe
+ * MSR_PKG_C2_RESIDENCY         0x0000060d
+ */
 
-int is_snb(unsigned int family, unsigned int model)
+int has_snb_msrs(unsigned int family, unsigned int model)
 {
 	if (!genuine_intel)
 		return 0;
···
 	return 0;
 }
 
-int has_c8_c9_c10(unsigned int family, unsigned int model)
+/*
+ * HSW adds support for additional MSRs:
+ *
+ * MSR_PKG_C8_RESIDENCY         0x00000630
+ * MSR_PKG_C9_RESIDENCY         0x00000631
+ * MSR_PKG_C10_RESIDENCY        0x00000632
+ */
+int has_hsw_msrs(unsigned int family, unsigned int model)
 {
 	if (!genuine_intel)
 		return 0;
···
 double discover_bclk(unsigned int family, unsigned int model)
 {
-	if (is_snb(family, model))
+	if (has_snb_msrs(family, model))
 		return 100.00;
 	else if (is_slm(family, model))
 		return slm_bclk();
···
 	}
 
 	/* Temperature Target MSR is Nehalem and newer only */
-	if (!do_nehalem_platform_info)
+	if (!do_nhm_platform_info)
 		goto guess;
 
 	if (get_msr(0, MSR_IA32_TEMPERATURE_TARGET, &msr))
···
 	ebx = ecx = edx = 0;
 	__get_cpuid(0x80000000, &max_level, &ebx, &ecx, &edx);
 
-	if (max_level < 0x80000007)
-		errx(1, "CPUID: no invariant TSC (max_level 0x%x)", max_level);
+	if (max_level >= 0x80000007) {
 
-	/*
-	 * Non-Stop TSC is advertised by CPUID.EAX=0x80000007: EDX.bit8
-	 * this check is valid for both Intel and AMD
-	 */
-	__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx);
-	has_invariant_tsc = edx & (1 << 8);
-
-	if (!has_invariant_tsc)
-		errx(1, "No invariant TSC");
+		/*
+		 * Non-Stop TSC is advertised by CPUID.EAX=0x80000007: EDX.bit8
+		 * this check is valid for both Intel and AMD
+		 */
+		__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx);
+		has_invariant_tsc = edx & (1 << 8);
+	}
 
 	/*
 	 * APERF/MPERF is advertised by CPUID.EAX=0x6: ECX.bit0
···
 	has_epb = ecx & (1 << 3);
 
 	if (verbose)
-		fprintf(stderr, "CPUID(6): %s%s%s%s\n",
-			has_aperf ? "APERF" : "No APERF!",
-			do_dts ? ", DTS" : "",
-			do_ptm ? ", PTM" : "",
-			has_epb ? ", EPB" : "");
+		fprintf(stderr, "CPUID(6): %sAPERF, %sDTS, %sPTM, %sEPB\n",
+			has_aperf ? "" : "No ",
+			do_dts ? "" : "No ",
+			do_ptm ? "" : "No ",
+			has_epb ? "" : "No ");
 
-	if (!has_aperf)
-		errx(-1, "No APERF");
-
-	do_nehalem_platform_info = genuine_intel && has_invariant_tsc;
-	do_nhm_cstates = genuine_intel;	/* all Intel w/ non-stop TSC have NHM counters */
-	do_smi = do_nhm_cstates;
-	do_snb_cstates = is_snb(family, model);
-	do_c8_c9_c10 = has_c8_c9_c10(family, model);
+	do_nhm_platform_info = do_nhm_cstates = do_smi = has_nhm_msrs(family, model);
+	do_snb_cstates = has_snb_msrs(family, model);
+	do_c8_c9_c10 = has_hsw_msrs(family, model);
 	do_slm_cstates = is_slm(family, model);
 	bclk = discover_bclk(family, model);
 
-	do_nehalem_turbo_ratio_limit = has_nehalem_turbo_ratio_limit(family, model);
+	do_nhm_turbo_ratio_limit = has_nhm_turbo_ratio_limit(family, model);
 	do_ivt_turbo_ratio_limit = has_ivt_turbo_ratio_limit(family, model);
 	rapl_probe(family, model);
+	perf_limit_reasons_probe(family, model);
 
 	return;
 }
···
 void turbostat_init()
 {
-	check_cpuid();
-
 	check_dev_msr();
-	check_super_user();
+	check_permissions();
+	check_cpuid();
 
 	setup_all_buffers();
···
 	if (verbose)
 		for_all_cpus(print_epb, ODD_COUNTERS);
+
+	if (verbose)
+		for_all_cpus(print_perf_limit, ODD_COUNTERS);
 
 	if (verbose)
 		for_all_cpus(print_rapl, ODD_COUNTERS);
···
 	cmdline(argc, argv);
 
 	if (verbose)
-		fprintf(stderr, "turbostat v3.7 Feb 6, 2014"
+		fprintf(stderr, "turbostat v3.9 23-Jan, 2015"
 			" - Len Brown <lenb@kernel.org>\n");
 
 	turbostat_init();