Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'acpi-4.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI updates from Rafael Wysocki:
"The new feaures here are the support for ACPI overlays (allowing ACPI
tables to be loaded at any time from EFI variables or via configfs)
and the LPI (Low-Power Idle) support. Also notable is the ACPI-based
NUMA support for ARM64.

Apart from that we have two new drivers, for the DPTF (Dynamic Power
and Thermal Framework) power participant device and for the Intel
Broxton WhiskeyCove PMIC, some more PMIC-related changes, support for
the Boot Error Record Table (BERT) in APEI and support for
platform-initiated graceful shutdown.

Plus two new pieces of documentation and the usual assorted fixes and
cleanups in quite a few places.

Specifics:

- Support for ACPI SSDT overlays allowing Secondary System
Description Tables (SSDTs) to be loaded at any time from EFI
variables or via configfs (Octavian Purdila, Mika Westerberg).

- Support for the ACPI LPI (Low-Power Idle) feature introduced in
ACPI 6.0 and allowing processor idle states to be represented in
ACPI tables in a hierarchical way (with the help of Processor
Container objects) and support for ACPI idle states management on
ARM64, based on LPI (Sudeep Holla).

- General improvements of ACPI support for NUMA and ARM64 support for
ACPI-based NUMA (Hanjun Guo, David Daney, Robert Richter).

- General improvements of the ACPI table upgrade mechanism and ARM64
support for that feature (Aleksey Makarov, Jon Masters).

- Support for the Boot Error Record Table (BERT) in APEI and
improvements of kernel messages printed by the error injection code
(Huang Ying, Borislav Petkov).

- New driver for the Intel Broxton WhiskeyCove PMIC operation region
and support for the REGS operation region on Broxton, PMIC code
cleanups (Bin Gao, Felipe Balbi, Paul Gortmaker).

- New driver for the power participant device which is part of the
Dynamic Power and Thermal Framework (DPTF) and DPTF-related code
reorganization (Srinivas Pandruvada).

- Support for the platform-initiated graceful shutdown feature
introduced in ACPI 6.1 (Prashanth Prakash).

- ACPI button driver update related to lid input events generated
automatically on initialization and system resume that have been
problematic for some time (Lv Zheng).

- ACPI EC driver cleanups (Lv Zheng).

- Documentation of the ACPICA release automation process and the
in-kernel ACPI AML debugger (Lv Zheng).

- New blacklist entry and two fixes for the ACPI backlight driver
(Alex Hung, Arvind Yadav, Ralf Gerbig).

- Cleanups of the ACPI pci_slot driver (Joe Perches, Paul Gortmaker).

- ACPI CPPC code changes to make it more robust against possible
defects in ACPI tables and new symbol definitions for PCC (Hoan
Tran).

- System reboot code modification to execute the ACPI _PTS (Prepare
To Sleep) method in addition to _TTS (Ocean He).

- ACPICA-related change to carry out lock ordering checks in ACPICA
if ACPICA debug is enabled in the kernel (Lv Zheng).

- Assorted minor fixes and cleanups (Andy Shevchenko, Baoquan He,
Bhaktipriya Shridhar, Paul Gortmaker, Rafael Wysocki)"

* tag 'acpi-4.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (71 commits)
ACPI: enable ACPI_PROCESSOR_IDLE on ARM64
arm64: add support for ACPI Low Power Idle(LPI)
drivers: firmware: psci: initialise idle states using ACPI LPI
cpuidle: introduce CPU_PM_CPU_IDLE_ENTER macro for ARM{32, 64}
arm64: cpuidle: drop __init section marker to arm_cpuidle_init
ACPI / processor_idle: Add support for Low Power Idle(LPI) states
ACPI / processor_idle: introduce ACPI_PROCESSOR_CSTATE
ACPI / DPTF: move int340x_thermal.c to the DPTF folder
ACPI / DPTF: Add DPTF power participant driver
ACPI / lpat: make it explicitly non-modular
ACPI / dock: make dock explicitly non-modular
ACPI / PCI: make pci_slot explicitly non-modular
ACPI / PMIC: remove modular references from non-modular code
ACPICA: Linux: Enable ACPI_MUTEX_DEBUG for Linux kernel
ACPI: Rename configfs.c to acpi_configfs.c to prevent link error
ACPI / debugger: Add AML debugger documentation
ACPI: Add documentation describing ACPICA release automation
ACPI: add support for loading SSDTs via configfs
ACPI: add support for configfs
efi / ACPI: load SSTDs from EFI variables
...

+3454 -607
+36
Documentation/ABI/testing/configfs-acpi
···
+What:		/config/acpi
+Date:		July 2016
+KernelVersion:	4.8
+Contact:	linux-acpi@vger.kernel.org
+Description:
+		This represents the ACPI subsystem entry point directory. It
+		contains sub-groups corresponding to ACPI configurable options.
+
+What:		/config/acpi/table
+Date:		July 2016
+KernelVersion:	4.8
+Description:
+
+		This group contains the configuration for user defined ACPI
+		tables. The attributes of a user defined table are:
+
+		aml	- a binary attribute that the user can use to
+			fill in the ACPI aml definitions. Once the aml
+			data is written to this file and the file is
+			closed the table will be loaded and ACPI devices
+			will be enumerated. To check if the operation is
+			successful the user must check the error code
+			for close(). If the operation is successful,
+			subsequent writes to this attribute will fail.
+
+		The rest of the attributes are read-only and are valid only
+		after the table has been loaded by filling the aml entry:
+
+		signature - ASCII table signature
+		length - length of table in bytes, including the header
+		revision - ACPI Specification minor version number
+		oem_id - ASCII OEM identification
+		oem_table_id - ASCII OEM table identification
+		oem_revision - OEM revision number
+		asl_compiler_id - ASCII ASL compiler vendor ID
+		asl_compiler_revision - ASL compiler version
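The close()-reports-the-result behaviour described above lends itself to a small helper. The following is a minimal sketch, assuming configfs is mounted at /config; the function name load_ssdt and the table name are illustrative, not part of the interface:

```shell
# load_ssdt: create a configfs table group and stream an AML blob into
# its "aml" attribute. The kernel reports load failures through the
# close() of the attribute file, so the exit status of `cat` (which
# closes the file when done) is the result of the whole operation.
#   $1 = path to a compiled .aml file
#   $2 = configfs table root (defaults to /config/acpi/table)
load_ssdt() {
    aml="$1"
    root="${2:-/config/acpi/table}"
    name="$(basename "$aml" .aml)"

    mkdir "$root/$name" || return 1   # creates the table group
    cat "$aml" > "$root/$name/aml"    # table is loaded on close()
}
```

On a real system a successful load also makes the read-only attributes (signature, length, oem_id, ...) valid, and any further write to aml fails as described above.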
+66
Documentation/acpi/aml-debugger.txt
···
+The AML Debugger
+
+Copyright (C) 2016, Intel Corporation
+Author: Lv Zheng <lv.zheng@intel.com>
+
+
+This document describes the usage of the AML debugger embedded in the Linux
+kernel.
+
+1. Build the debugger
+
+The following kernel configuration items are required to enable the AML
+debugger interface from the Linux kernel:
+
+   CONFIG_ACPI_DEBUGGER=y
+   CONFIG_ACPI_DEBUGGER_USER=m
+
+The userspace utilities can be built from the kernel source tree using
+the following commands:
+
+   $ cd tools
+   $ make acpi
+
+The resultant userspace tool binary is then located at:
+
+   tools/acpi/power/acpi/acpidbg/acpidbg
+
+It can be installed to system directories by running "make install" (as a
+sufficiently privileged user).
+
+2. Start the userspace debugger interface
+
+After booting the kernel with the debugger built-in, the debugger can be
+started by using the following commands:
+
+   # mount -t debugfs none /sys/kernel/debug
+   # modprobe acpi_dbg
+   # tools/acpi/power/acpi/acpidbg/acpidbg
+
+That spawns the interactive AML debugger environment where you can execute
+debugger commands.
+
+The commands are documented in the "ACPICA Overview and Programmer Reference"
+that can be downloaded from
+
+   https://acpica.org/documentation
+
+The detailed debugger commands reference is located in Chapter 12 "ACPICA
+Debugger Reference". The "help" command can be used for a quick reference.
+
+3. Stop the userspace debugger interface
+
+The interactive debugger interface can be closed by pressing Ctrl+C or using
+the "quit" or "exit" commands. When finished, unload the module with:
+
+   # rmmod acpi_dbg
+
+The module unloading may fail if there is an acpidbg instance running.
+
+4. Run the debugger in a script
+
+It may be useful to run the AML debugger in a test script. "acpidbg" supports
+this in a special "batch" mode. For example, the following command outputs
+the entire ACPI namespace:
+
+   # acpidbg -b "namespace"
+262
Documentation/acpi/linuxized-acpica.txt
···
+Linuxized ACPICA - Introduction to ACPICA Release Automation
+
+Copyright (C) 2013-2016, Intel Corporation
+Author: Lv Zheng <lv.zheng@intel.com>
+
+
+Abstract:
+
+This document describes the ACPICA project and the relationship between
+ACPICA and Linux. It also describes how ACPICA code in drivers/acpi/acpica,
+include/acpi and tools/power/acpi is automatically updated to follow the
+upstream.
+
+
+1. ACPICA Project
+
+The ACPI Component Architecture (ACPICA) project provides an operating
+system (OS)-independent reference implementation of the Advanced
+Configuration and Power Interface Specification (ACPI). It has been
+adapted by various host OSes. By directly integrating ACPICA, Linux can
+also benefit from the application experiences of ACPICA from other host
+OSes.
+
+The homepage of the ACPICA project is www.acpica.org; it is maintained and
+supported by Intel Corporation.
+
+The following figure depicts the Linux ACPI subsystem where the ACPICA
+adaptation is included:
+
+   [Figure 1. Linux ACPI Software Components -- ASCII diagram: the ACPICA
+    components (Table Management, Namespace Management, Event Management,
+    Resource Management, Hardware Management) nested, together with the
+    OS Service Layer, inside the Linux/ACPI components (Device Enumeration,
+    Power Management, Thermal Management, Drivers for ACPI Devices, ...)]
+
+NOTE:
+   A. OS Service Layer - Provided by Linux to offer OS dependent
+      implementation of the predefined ACPICA interfaces (acpi_os_*).
+        include/acpi/acpiosxf.h
+        drivers/acpi/osl.c
+        include/acpi/platform
+        include/asm/acenv.h
+   B. ACPICA Functionality - Released from the ACPICA code base to offer
+      OS independent implementation of the ACPICA interfaces (acpi_*).
+        drivers/acpi/acpica
+        include/acpi/ac*.h
+        tools/power/acpi
+   C. Linux/ACPI Functionality - Providing Linux specific ACPI
+      functionality to the other Linux kernel subsystems and user space
+      programs.
+        drivers/acpi
+        include/linux/acpi.h
+        include/linux/acpi*.h
+        include/acpi
+        tools/power/acpi
+   D. Architecture Specific ACPICA/ACPI Functionalities - Provided by the
+      ACPI subsystem to offer architecture specific implementation of the
+      ACPI interfaces. They are Linux specific components and are out of
+      the scope of this document.
+        include/asm/acpi.h
+        include/asm/acpi*.h
+        arch/*/acpi
+
+2. ACPICA Release
+
+The ACPICA project maintains its code base at the following repository URL:
+https://github.com/acpica/acpica.git. As a rule, a release is made every
+month.
+
+As the coding style adopted by the ACPICA project is not acceptable by
+Linux, there is a release process to convert the ACPICA git commits into
+Linux patches. The patches generated by this process are referred to as
+"linuxized ACPICA patches". The release process is carried out on a local
+copy of the ACPICA git repository. Each commit in the monthly release is
+converted into a linuxized ACPICA patch. Together, they form the monthly
+ACPICA release patchset for the Linux ACPI community. This process is
+illustrated in the following figure:
+
+   [Figure 2. ACPICA -> Linux Upstream Process -- ASCII flow chart:
+    acpica/master commits are run through the linuxize repo utilities to
+    produce "old" and "new" linuxized acpica trees; the linuxize patch
+    utility turns their difference into the linuxized ACPICA patches,
+    which pass through Linux ACPI community review into
+    linux-pm/linux-next and, via the Linux merge window, into
+    linux/master]
+
+NOTE:
+   A. Linuxize Utilities - Provided by the ACPICA repository, including a
+      utility located in the source/tools/acpisrc folder and a number of
+      scripts located in the generate/linux folder.
+   B. acpica / master - "master" branch of the git repository at
+      <https://github.com/acpica/acpica.git>.
+   C. linux-pm / linux-next - "linux-next" branch of the git repository at
+      <http://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git>.
+   D. linux / master - "master" branch of the git repository at
+      <http://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git>.
+
+Before the linuxized ACPICA patches are sent to the Linux ACPI community
+for review, there is a quality assurance build test process to reduce
+porting issues. Currently this build process only takes care of the
+following kernel configuration options:
+   CONFIG_ACPI/CONFIG_ACPI_DEBUG/CONFIG_ACPI_DEBUGGER
+
+3. ACPICA Divergences
+
+Ideally, all of the ACPICA commits should be converted into Linux patches
+automatically without manual modifications, the "linux / master" tree should
+contain the ACPICA code that exactly corresponds to the ACPICA code
+contained in the "new linuxized acpica" tree and it should be possible to run
+the release process fully automatically.
+
+As a matter of fact, however, there are source code differences between
+the ACPICA code in Linux and the upstream ACPICA code, referred to as
+"ACPICA Divergences".
+
+The various sources of ACPICA divergences include:
+   1. Legacy divergences - Before the current ACPICA release process was
+      established, there already had been divergences between Linux and
+      ACPICA. Over the past several years those divergences have been
+      greatly reduced, but there still are several ones and it takes time
+      to figure out the underlying reasons for their existence.
+   2. Manual modifications - Any manual modification (e.g. coding style
+      fixes) made directly in the Linux sources obviously hurts the ACPICA
+      release automation. Thus it is recommended to fix such issues in the
+      ACPICA upstream source code and generate the linuxized fix using the
+      ACPICA release utilities (please refer to Section 4 below for the
+      details).
+   3. Linux specific features - Sometimes it's impossible to use the
+      current ACPICA APIs to implement features required by the Linux
+      kernel, so Linux developers occasionally have to change ACPICA code
+      directly. Those changes may not be acceptable by ACPICA upstream and
+      in such cases they are left as committed ACPICA divergences unless
+      the ACPICA side can implement new mechanisms as replacements for
+      them.
+   4. ACPICA release fixups - ACPICA only tests commits using a set of the
+      user space simulation utilities, thus the linuxized ACPICA patches
+      may break the Linux kernel, leaving us with build/boot failures. In
+      order to avoid breaking Linux bisection, fixes are applied directly
+      to the linuxized ACPICA patches during the release process. When the
+      release fixups are backported to the upstream ACPICA sources, they
+      must follow the upstream ACPICA rules and so further modifications
+      may appear. That may result in the appearance of new divergences.
+   5. Fast tracking of ACPICA commits - Some ACPICA commits are regression
+      fixes or stable-candidate material, so they are applied in advance
+      with respect to the ACPICA release process. If such commits are
+      reverted or rebased on the ACPICA side in order to offer better
+      solutions, new ACPICA divergences are generated.
+
+4. ACPICA Development
+
+This paragraph guides Linux developers to use the ACPICA upstream release
+utilities to obtain Linux patches corresponding to upstream ACPICA commits
+before they become available from the ACPICA release process.
+
+   1. Cherry-pick an ACPICA commit
+
+   First you need to git clone the ACPICA repository and the ACPICA change
+   you want to cherry pick must be committed into the local repository.
+
+   Then the gen-patch.sh command can help to cherry-pick an ACPICA commit
+   from the ACPICA local repository:
+
+   $ git clone https://github.com/acpica/acpica
+   $ cd acpica
+   $ generate/linux/gen-patch.sh -u [commit ID]
+
+   Here the commit ID is the ACPICA local repository commit ID you want to
+   cherry pick. It can be omitted if the commit is "HEAD".
+
+   2. Cherry-pick recent ACPICA commits
+
+   Sometimes you need to rebase your code on top of the most recent ACPICA
+   changes that haven't been applied to Linux yet.
+
+   You can generate the ACPICA release series yourself and rebase your code
+   on top of the generated ACPICA release patches:
+
+   $ git clone https://github.com/acpica/acpica
+   $ cd acpica
+   $ generate/linux/make-patches.sh -u [commit ID]
+
+   The commit ID should be the last ACPICA commit accepted by Linux.
+   Usually, it is the commit modifying ACPI_CA_VERSION. It can be found by
+   executing "git blame source/include/acpixf.h" and referencing the line
+   that contains "ACPI_CA_VERSION".
+
+   3. Inspect the current divergences
+
+   If you have local copies of both Linux and upstream ACPICA, you can
+   generate a diff file indicating the state of the current divergences:
+
+   # git clone https://github.com/acpica/acpica
+   # git clone http://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
+   # cd acpica
+   # generate/linux/divergences.sh -s ../linux
+172
Documentation/acpi/ssdt-overlays.txt
···
+
+In order to support ACPI open-ended hardware configurations (e.g. development
+boards) we need a way to augment the ACPI configuration provided by the firmware
+image. A common example is connecting sensors on I2C / SPI buses on development
+boards.
+
+Although this can be accomplished by creating a kernel platform driver or
+recompiling the firmware image with updated ACPI tables, neither is practical:
+the former proliferates board specific kernel code while the latter requires
+access to firmware tools which are often not publicly available.
+
+Because ACPI supports external references in AML code a more practical
+way to augment firmware ACPI configuration is by dynamically loading
+user defined SSDT tables that contain the board specific information.
+
+For example, to enumerate a Bosch BMA222E accelerometer on the I2C bus of the
+Minnowboard MAX development board exposed via the LSE connector [1], the
+following ASL code can be used:
+
+DefinitionBlock ("minnowmax.aml", "SSDT", 1, "Vendor", "Accel", 0x00000003)
+{
+    External (\_SB.I2C6, DeviceObj)
+
+    Scope (\_SB.I2C6)
+    {
+        Device (STAC)
+        {
+            Name (_ADR, Zero)
+            Name (_HID, "BMA222E")
+
+            Method (_CRS, 0, Serialized)
+            {
+                Name (RBUF, ResourceTemplate ()
+                {
+                    I2cSerialBus (0x0018, ControllerInitiated, 0x00061A80,
+                                  AddressingMode7Bit, "\\_SB.I2C6", 0x00,
+                                  ResourceConsumer, ,)
+                    GpioInt (Edge, ActiveHigh, Exclusive, PullDown, 0x0000,
+                             "\\_SB.GPO2", 0x00, ResourceConsumer, , )
+                    { // Pin list
+                        0
+                    }
+                })
+                Return (RBUF)
+            }
+        }
+    }
+}
+
+which can then be compiled to AML binary format:
+
+$ iasl minnowmax.asl
+
+Intel ACPI Component Architecture
+ASL Optimizing Compiler version 20140214-64 [Mar 29 2014]
+Copyright (c) 2000 - 2014 Intel Corporation
+
+ASL Input:     minnowmax.asl - 30 lines, 614 bytes, 7 keywords
+AML Output:    minnowmax.aml - 165 bytes, 6 named objects, 1 executable opcodes
+
+[1] http://wiki.minnowboard.org/MinnowBoard_MAX#Low_Speed_Expansion_Connector_.28Top.29
+
+The resulting AML code can then be loaded by the kernel using one of the methods
+below.
+
+== Loading ACPI SSDTs from initrd ==
+
+This option allows loading of user defined SSDTs from initrd and it is useful
+when the system does not support EFI or when there is not enough EFI storage.
+
+It works in a similar way to the initrd based ACPI table override/upgrade: the
+SSDT aml code must be placed in the first, uncompressed, initrd under the
+"kernel/firmware/acpi" path. Multiple files can be used and this will translate
+into loading multiple tables. Only SSDT and OEM tables are allowed. See
+initrd_table_override.txt for more details.
+
+Here is an example:
+
+# Add the raw ACPI tables to an uncompressed cpio archive.
+# They must be put into a /kernel/firmware/acpi directory inside the
+# cpio archive.
+# The uncompressed cpio archive must be the first.
+# Other, typically compressed cpio archives, must be
+# concatenated on top of the uncompressed one.
+mkdir -p kernel/firmware/acpi
+cp ssdt.aml kernel/firmware/acpi
+
+# Create the uncompressed cpio archive and concatenate the original initrd
+# on top:
+find kernel | cpio -H newc --create > /boot/instrumented_initrd
+cat /boot/initrd >>/boot/instrumented_initrd
+
+== Loading ACPI SSDTs from EFI variables ==
+
+This is the preferred method, when EFI is supported on the platform, because it
+allows a persistent, OS independent way of storing the user defined SSDTs. There
+is also work underway to implement EFI support for loading user defined SSDTs
+and using this method will make it easier to convert to the EFI loading
+mechanism when it arrives.
+
+In order to load SSDTs from an EFI variable the efivar_ssdt kernel command line
+parameter can be used. The argument for the option is the variable name to
+use. If there are multiple variables with the same name but with different
+vendor GUIDs, all of them will be loaded.
+
+In order to store the AML code in an EFI variable the efivarfs filesystem can be
+used. It is enabled and mounted by default in /sys/firmware/efi/efivars in all
+recent distributions.
+
+Creating a new file in /sys/firmware/efi/efivars will automatically create a new
+EFI variable. Updating a file in /sys/firmware/efi/efivars will update the EFI
+variable. Please note that the file name needs to be specially formatted as
+"Name-GUID" and that the first 4 bytes in the file (little-endian format)
+represent the attributes of the EFI variable (see EFI_VARIABLE_MASK in
+include/linux/efi.h). Writing to the file must also be done with one write
+operation.
+
+For example, you can use the following shell script to create/update an EFI
+variable with the content from a given file:
+
+#!/bin/sh -e
+
+while ! [ -z "$1" ]; do
+	case "$1" in
+	"-f") filename="$2"; shift;;
+	"-g") guid="$2"; shift;;
+	*) name="$1";;
+	esac
+	shift
+done
+
+usage() {
+	echo "Syntax: ${0##*/} -f filename [ -g guid ] name"
+	exit 1
+}
+
+[ -n "$name" -a -f "$filename" ] || usage
+
+EFIVARFS="/sys/firmware/efi/efivars"
+
+[ -d "$EFIVARFS" ] || exit 2
+
+if stat -tf $EFIVARFS | grep -q -v de5e81e4; then
+	mount -t efivarfs none $EFIVARFS
+fi
+
+# try to pick up an existing GUID
+[ -n "$guid" ] || guid=$(find "$EFIVARFS" -name "$name-*" | head -n1 | cut -f2- -d-)
+
+# use a randomly generated GUID
+[ -n "$guid" ] || guid="$(cat /proc/sys/kernel/random/uuid)"
+
+# efivarfs expects all of the data in one write
+tmp=$(mktemp)
+/bin/echo -ne "\007\000\000\000" | cat - $filename > $tmp
+dd if=$tmp of="$EFIVARFS/$name-$guid" bs=$(stat -c %s $tmp)
+rm $tmp
+
+== Loading ACPI SSDTs from configfs ==
+
+This option allows loading of user defined SSDTs from userspace via the configfs
+interface. The CONFIG_ACPI_CONFIGFS option must be selected and configfs must be
+mounted. In the following examples, we assume that configfs has been mounted in
+/config.
+
+New tables can be loaded by creating new directories in /config/acpi/table/ and
+writing the SSDT aml code to the aml attribute:
+
+cd /config/acpi/table
+mkdir my_ssdt
+cat ~/ssdt.aml > my_ssdt/aml
+10
Documentation/kernel-parameters.txt
···
 	bootmem_debug	[KNL] Enable bootmem allocator debug messages.
 
+	bert_disable	[ACPI]
+			Disable BERT OS support on buggy BIOSes.
+
 	bttv.card=	[HW,V4L] bttv (bt848 + bt878 based grabber cards)
 	bttv.radio=	Most important insmod options are available as
 			kernel args too.
···
 			related feature. For example, you can do debugging of
 			Address Range Mirroring feature even if your box
 			doesn't support it.
+
+	efivar_ssdt=	[EFI; X86] Name of an EFI variable that contains an SSDT
+			that is to be dynamically loaded by Linux. If there are
+			multiple variables with the same name but with different
+			vendor GUIDs, all of them will be loaded. See
+			Documentation/acpi/ssdt-overlays.txt for details.
+
 
 	eisa_irq_edge=	[PARISC,HW]
 			See header of drivers/parisc/eisa.c.
+1
MAINTAINERS
···
 F:	include/acpi/
 F:	Documentation/acpi/
 F:	Documentation/ABI/testing/sysfs-bus-acpi
+F:	Documentation/ABI/testing/configfs-acpi
 F:	drivers/pci/*acpi*
 F:	drivers/pci/*/*acpi*
 F:	drivers/pci/*/*/*acpi*
+1
arch/arm64/Kconfig
···
 	select ACPI_GENERIC_GSI if ACPI
 	select ACPI_REDUCED_HARDWARE_ONLY if ACPI
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
+	select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
 	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
 	select ARCH_HAS_ELF_RANDOMIZE
 	select ARCH_HAS_GCOV_PROFILE_ALL
+10
arch/arm64/include/asm/acpi.h
···
 pgprot_t arch_apei_get_mem_attribute(phys_addr_t addr);
 #endif
 
+#ifdef CONFIG_ACPI_NUMA
+int arm64_acpi_numa_init(void);
+int acpi_numa_get_nid(unsigned int cpu, u64 hwid);
+#else
+static inline int arm64_acpi_numa_init(void) { return -ENOSYS; }
+static inline int acpi_numa_get_nid(unsigned int cpu, u64 hwid) { return NUMA_NO_NODE; }
+#endif /* CONFIG_ACPI_NUMA */
+
+#define ACPI_TABLE_UPGRADE_MAX_PHYS MEMBLOCK_ALLOC_ACCESSIBLE
+
 #endif /*_ASM_ACPI_H*/
+2
arch/arm64/include/asm/numa.h
···
 
 #ifdef CONFIG_NUMA
 
+#define NR_NODE_MEMBLKS		(MAX_NUMNODES * 2)
+
 /* currently, arm64 implements flat NUMA topology */
 #define parent_node(node)	(node)
 
+1
arch/arm64/kernel/Makefile
···
 arm64-obj-$(CONFIG_PCI)			+= pci.o
 arm64-obj-$(CONFIG_ARMV8_DEPRECATED)	+= armv8_deprecated.o
 arm64-obj-$(CONFIG_ACPI)		+= acpi.o
+arm64-obj-$(CONFIG_ACPI_NUMA)		+= acpi_numa.o
 arm64-obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL)	+= acpi_parking_protocol.o
 arm64-obj-$(CONFIG_PARAVIRT)		+= paravirt.o
 arm64-obj-$(CONFIG_RANDOMIZE_BASE)	+= kaslr.o
+112
arch/arm64/kernel/acpi_numa.c
···
+/*
+ * ACPI 5.1 based NUMA setup for ARM64
+ * Lots of code was borrowed from arch/x86/mm/srat.c
+ *
+ * Copyright 2004 Andi Kleen, SuSE Labs.
+ * Copyright (C) 2013-2016, Linaro Ltd.
+ *		Author: Hanjun Guo <hanjun.guo@linaro.org>
+ *
+ * Reads the ACPI SRAT table to figure out what memory belongs to which CPUs.
+ *
+ * Called from acpi_numa_init while reading the SRAT and SLIT tables.
+ * Assumes all memory regions belonging to a single proximity domain
+ * are in one chunk. Holes between them will be included in the node.
+ */
+
+#define pr_fmt(fmt) "ACPI: NUMA: " fmt
+
+#include <linux/acpi.h>
+#include <linux/bitmap.h>
+#include <linux/bootmem.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/memblock.h>
+#include <linux/mmzone.h>
+#include <linux/module.h>
+#include <linux/topology.h>
+
+#include <acpi/processor.h>
+#include <asm/numa.h>
+
+static int cpus_in_srat;
+
+struct __node_cpu_hwid {
+	u32 node_id;    /* logical node containing this CPU */
+	u64 cpu_hwid;   /* MPIDR for this CPU */
+};
+
+static struct __node_cpu_hwid early_node_cpu_hwid[NR_CPUS] = {
+[0 ... NR_CPUS - 1] = {NUMA_NO_NODE, PHYS_CPUID_INVALID} };
+
+int acpi_numa_get_nid(unsigned int cpu, u64 hwid)
+{
+	int i;
+
+	for (i = 0; i < cpus_in_srat; i++) {
+		if (hwid == early_node_cpu_hwid[i].cpu_hwid)
+			return early_node_cpu_hwid[i].node_id;
+	}
+
+	return NUMA_NO_NODE;
+}
+
+/* Callback for Proximity Domain -> ACPI processor UID mapping */
+void __init acpi_numa_gicc_affinity_init(struct acpi_srat_gicc_affinity *pa)
+{
+	int pxm, node;
+	phys_cpuid_t mpidr;
+
+	if (srat_disabled())
+		return;
+
+	if (pa->header.length < sizeof(struct acpi_srat_gicc_affinity)) {
+		pr_err("SRAT: Invalid SRAT header length: %d\n",
+			pa->header.length);
+		bad_srat();
+		return;
+	}
+
+	if (!(pa->flags & ACPI_SRAT_GICC_ENABLED))
+		return;
+
+	if (cpus_in_srat >= NR_CPUS) {
+		pr_warn_once("SRAT: cpu_to_node_map[%d] is too small, may not be able to use all cpus\n",
+			     NR_CPUS);
+		return;
+	}
+
+	pxm = pa->proximity_domain;
+	node = acpi_map_pxm_to_node(pxm);
+
+	if (node == NUMA_NO_NODE || node >= MAX_NUMNODES) {
+		pr_err("SRAT: Too many proximity domains %d\n", pxm);
+		bad_srat();
+		return;
+	}
+
+	mpidr = acpi_map_madt_entry(pa->acpi_processor_uid);
+	if (mpidr == PHYS_CPUID_INVALID) {
+		pr_err("SRAT: PXM %d with ACPI ID %d has no valid MPIDR in MADT\n",
+			pxm, pa->acpi_processor_uid);
+		bad_srat();
+		return;
+	}
+
+	early_node_cpu_hwid[cpus_in_srat].node_id = node;
+	early_node_cpu_hwid[cpus_in_srat].cpu_hwid = mpidr;
+	node_set(node, numa_nodes_parsed);
+	cpus_in_srat++;
+	pr_info("SRAT: PXM %d -> MPIDR 0x%Lx -> Node %d\n",
+		pxm, mpidr, node);
+}
+
+int __init arm64_acpi_numa_init(void)
+{
+	int ret;
+
+	ret = acpi_numa_init();
+	if (ret)
+		return ret;
+
+	return srat_disabled() ? -EINVAL : 0;
+}
+19 -1
arch/arm64/kernel/cpuidle.c
···
   * published by the Free Software Foundation.
   */

+ #include <linux/acpi.h>
+ #include <linux/cpuidle.h>
+ #include <linux/cpu_pm.h>
  #include <linux/of.h>
  #include <linux/of_device.h>

  #include <asm/cpuidle.h>
  #include <asm/cpu_ops.h>

- int __init arm_cpuidle_init(unsigned int cpu)
+ int arm_cpuidle_init(unsigned int cpu)
  {
      int ret = -EOPNOTSUPP;
···
      return cpu_ops[cpu]->cpu_suspend(index);
  }
+
+ #ifdef CONFIG_ACPI
+
+ #include <acpi/processor.h>
+
+ int acpi_processor_ffh_lpi_probe(unsigned int cpu)
+ {
+     return arm_cpuidle_init(cpu);
+ }
+
+ int acpi_processor_ffh_lpi_enter(struct acpi_lpi_state *lpi)
+ {
+     return CPU_PM_CPU_IDLE_ENTER(arm_cpuidle_suspend, lpi->index);
+ }
+ #endif
+4 -2
arch/arm64/kernel/setup.c
···
      efi_init();
      arm64_memblock_init();

+     paging_init();
+
+     acpi_table_upgrade();
+
      /* Parse the ACPI tables for possible boot-time configuration */
      acpi_boot_table_init();
-
-     paging_init();

      if (acpi_disabled)
          unflatten_device_tree();
+2
arch/arm64/kernel/smp.c
···
       */
      acpi_set_mailbox_entry(cpu_count, processor);

+     early_map_cpu_to_node(cpu_count, acpi_numa_get_nid(cpu_count, hwid));
+
      cpu_count++;
  }
+17 -11
arch/arm64/mm/numa.c
···
   * along with this program. If not, see <http://www.gnu.org/licenses/>.
   */

+ #include <linux/acpi.h>
  #include <linux/bootmem.h>
  #include <linux/memblock.h>
  #include <linux/module.h>
···
  static int numa_distance_cnt;
  static u8 *numa_distance;
- static int numa_off;
+ static bool numa_off;

  static __init int numa_parse_early_param(char *opt)
  {
···
          return -EINVAL;
      if (!strncmp(opt, "off", 3)) {
          pr_info("%s\n", "NUMA turned off");
-         numa_off = 1;
+         numa_off = true;
      }
      return 0;
  }
···
   * numa_add_memblk - Set node id to memblk
   * @nid: NUMA node ID of the new memblk
   * @start: Start address of the new memblk
-  * @size: Size of the new memblk
+  * @end: End address of the new memblk
   *
   * RETURNS:
   * 0 on success, -errno on failure.
   */
- int __init numa_add_memblk(int nid, u64 start, u64 size)
+ int __init numa_add_memblk(int nid, u64 start, u64 end)
  {
      int ret;

-     ret = memblock_set_node(start, size, &memblock.memory, nid);
+     ret = memblock_set_node(start, (end - start), &memblock.memory, nid);
      if (ret < 0) {
          pr_err("NUMA: memblock [0x%llx - 0x%llx] failed to add on node %d\n",
-                start, (start + size - 1), nid);
+                start, (end - 1), nid);
          return ret;
      }

      node_set(nid, numa_nodes_parsed);
      pr_info("NUMA: Adding memblock [0x%llx - 0x%llx] on node %d\n",
-             start, (start + size - 1), nid);
+             start, (end - 1), nid);
      return ret;
  }
···
      int ret;
      struct memblock_region *mblk;

-     pr_info("%s\n", "No NUMA configuration found");
+     if (numa_off)
+         pr_info("NUMA disabled\n"); /* Forced off on command line. */
+     else
+         pr_info("No NUMA configuration found\n");
      pr_info("NUMA: Faking a node at [mem %#018Lx-%#018Lx]\n",
              0LLU, PFN_PHYS(max_pfn) - 1);

      for_each_memblock(memory, mblk) {
-         ret = numa_add_memblk(0, mblk->base, mblk->size);
+         ret = numa_add_memblk(0, mblk->base, mblk->base + mblk->size);
          if (!ret)
              continue;
···
          return ret;
      }

-     numa_off = 1;
+     numa_off = true;
      return 0;
  }
···
  void __init arm64_numa_init(void)
  {
      if (!numa_off) {
-         if (!numa_init(of_numa_init))
+         if (!acpi_disabled && !numa_init(arm64_acpi_numa_init))
+             return;
+         if (acpi_disabled && !numa_init(of_numa_init))
              return;
      }
+3
arch/ia64/include/asm/acpi.h
···
          }
      }
  }
+
+ extern void acpi_numa_fixup(void);
+
  #endif /* CONFIG_ACPI_NUMA */

  #endif /*__KERNEL__*/
+1 -1
arch/ia64/kernel/acpi.c
···
      return 0;
  }

- void __init acpi_numa_arch_fixup(void)
+ void __init acpi_numa_fixup(void)
  {
      int i, j, node_from, node_to;

+1
arch/ia64/kernel/setup.c
···
      early_acpi_boot_init();
  # ifdef CONFIG_ACPI_NUMA
      acpi_numa_init();
+     acpi_numa_fixup();
  # ifdef CONFIG_ACPI_HOTPLUG_CPU
      prefill_possible_map();
  # endif
+1
arch/x86/Kconfig
···
      select ANON_INODES
      select ARCH_CLOCKSOURCE_DATA
      select ARCH_DISCARD_MEMBLOCK
+     select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
      select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
      select ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS
      select ARCH_HAS_DEVMEM_IS_ALLOWED
+2 -1
arch/x86/include/asm/acpi.h
···
  #define ARCH_HAS_POWER_INIT 1

  #ifdef CONFIG_ACPI_NUMA
- extern int acpi_numa;
  extern int x86_acpi_numa_init(void);
  #endif /* CONFIG_ACPI_NUMA */
···
      return PAGE_KERNEL;
  }
  #endif
+
+ #define ACPI_TABLE_UPGRADE_MAX_PHYS (max_low_pfn_mapped << PAGE_SHIFT)

  #endif /* _ASM_X86_ACPI_H */
+1 -8
arch/x86/kernel/setup.c
···
      memblock_free(ramdisk_image, ramdisk_end - ramdisk_image);
  }

- static void __init early_initrd_acpi_init(void)
- {
-     early_acpi_table_init((void *)initrd_start, initrd_end - initrd_start);
- }
  #else
  static void __init early_reserve_initrd(void)
  {
  }
  static void __init reserve_initrd(void)
- {
- }
- static void __init early_initrd_acpi_init(void)
  {
  }
  #endif /* CONFIG_BLK_DEV_INITRD */
···

      reserve_initrd();

-     early_initrd_acpi_init();
+     acpi_table_upgrade();

      vsmp_init();
+1 -1
arch/x86/mm/numa.c
···
  /* Common code for 32 and 64-bit NUMA */
+ #include <linux/acpi.h>
  #include <linux/kernel.h>
  #include <linux/mm.h>
  #include <linux/string.h>
···
  #include <asm/e820.h>
  #include <asm/proto.h>
  #include <asm/dma.h>
- #include <asm/acpi.h>
  #include <asm/amd_nb.h>

  #include "numa_internal.h"
+2 -114
arch/x86/mm/srat.c
··· 15 15 #include <linux/bitmap.h> 16 16 #include <linux/module.h> 17 17 #include <linux/topology.h> 18 - #include <linux/bootmem.h> 19 - #include <linux/memblock.h> 20 18 #include <linux/mm.h> 21 19 #include <asm/proto.h> 22 20 #include <asm/numa.h> 23 21 #include <asm/e820.h> 24 22 #include <asm/apic.h> 25 23 #include <asm/uv/uv.h> 26 - 27 - int acpi_numa __initdata; 28 - 29 - static __init int setup_node(int pxm) 30 - { 31 - return acpi_map_pxm_to_node(pxm); 32 - } 33 - 34 - static __init void bad_srat(void) 35 - { 36 - printk(KERN_ERR "SRAT: SRAT not used.\n"); 37 - acpi_numa = -1; 38 - } 39 - 40 - static __init inline int srat_disabled(void) 41 - { 42 - return acpi_numa < 0; 43 - } 44 - 45 - /* 46 - * Callback for SLIT parsing. pxm_to_node() returns NUMA_NO_NODE for 47 - * I/O localities since SRAT does not list them. I/O localities are 48 - * not supported at this point. 49 - */ 50 - void __init acpi_numa_slit_init(struct acpi_table_slit *slit) 51 - { 52 - int i, j; 53 - 54 - for (i = 0; i < slit->locality_count; i++) { 55 - const int from_node = pxm_to_node(i); 56 - 57 - if (from_node == NUMA_NO_NODE) 58 - continue; 59 - 60 - for (j = 0; j < slit->locality_count; j++) { 61 - const int to_node = pxm_to_node(j); 62 - 63 - if (to_node == NUMA_NO_NODE) 64 - continue; 65 - 66 - numa_set_distance(from_node, to_node, 67 - slit->entry[slit->locality_count * i + j]); 68 - } 69 - } 70 - } 71 24 72 25 /* Callback for Proximity Domain -> x2APIC mapping */ 73 26 void __init ··· 44 91 pxm, apic_id); 45 92 return; 46 93 } 47 - node = setup_node(pxm); 94 + node = acpi_map_pxm_to_node(pxm); 48 95 if (node < 0) { 49 96 printk(KERN_ERR "SRAT: Too many proximity domains %x\n", pxm); 50 97 bad_srat(); ··· 57 104 } 58 105 set_apicid_to_node(apic_id, node); 59 106 node_set(node, numa_nodes_parsed); 60 - acpi_numa = 1; 61 107 printk(KERN_INFO "SRAT: PXM %u -> APIC 0x%04x -> Node %u\n", 62 108 pxm, apic_id, node); 63 109 } ··· 79 127 pxm = pa->proximity_domain_lo; 80 128 if 
(acpi_srat_revision >= 2) 81 129 pxm |= *((unsigned int*)pa->proximity_domain_hi) << 8; 82 - node = setup_node(pxm); 130 + node = acpi_map_pxm_to_node(pxm); 83 131 if (node < 0) { 84 132 printk(KERN_ERR "SRAT: Too many proximity domains %x\n", pxm); 85 133 bad_srat(); ··· 98 146 99 147 set_apicid_to_node(apic_id, node); 100 148 node_set(node, numa_nodes_parsed); 101 - acpi_numa = 1; 102 149 printk(KERN_INFO "SRAT: PXM %u -> APIC 0x%02x -> Node %u\n", 103 150 pxm, apic_id, node); 104 151 } 105 - 106 - #ifdef CONFIG_MEMORY_HOTPLUG 107 - static inline int save_add_info(void) {return 1;} 108 - #else 109 - static inline int save_add_info(void) {return 0;} 110 - #endif 111 - 112 - /* Callback for parsing of the Proximity Domain <-> Memory Area mappings */ 113 - int __init 114 - acpi_numa_memory_affinity_init(struct acpi_srat_mem_affinity *ma) 115 - { 116 - u64 start, end; 117 - u32 hotpluggable; 118 - int node, pxm; 119 - 120 - if (srat_disabled()) 121 - goto out_err; 122 - if (ma->header.length != sizeof(struct acpi_srat_mem_affinity)) 123 - goto out_err_bad_srat; 124 - if ((ma->flags & ACPI_SRAT_MEM_ENABLED) == 0) 125 - goto out_err; 126 - hotpluggable = ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE; 127 - if (hotpluggable && !save_add_info()) 128 - goto out_err; 129 - 130 - start = ma->base_address; 131 - end = start + ma->length; 132 - pxm = ma->proximity_domain; 133 - if (acpi_srat_revision <= 1) 134 - pxm &= 0xff; 135 - 136 - node = setup_node(pxm); 137 - if (node < 0) { 138 - printk(KERN_ERR "SRAT: Too many proximity domains.\n"); 139 - goto out_err_bad_srat; 140 - } 141 - 142 - if (numa_add_memblk(node, start, end) < 0) 143 - goto out_err_bad_srat; 144 - 145 - node_set(node, numa_nodes_parsed); 146 - 147 - pr_info("SRAT: Node %u PXM %u [mem %#010Lx-%#010Lx]%s%s\n", 148 - node, pxm, 149 - (unsigned long long) start, (unsigned long long) end - 1, 150 - hotpluggable ? " hotplug" : "", 151 - ma->flags & ACPI_SRAT_MEM_NON_VOLATILE ? 
" non-volatile" : ""); 152 - 153 - /* Mark hotplug range in memblock. */ 154 - if (hotpluggable && memblock_mark_hotplug(start, ma->length)) 155 - pr_warn("SRAT: Failed to mark hotplug range [mem %#010Lx-%#010Lx] in memblock\n", 156 - (unsigned long long)start, (unsigned long long)end - 1); 157 - 158 - max_possible_pfn = max(max_possible_pfn, PFN_UP(end - 1)); 159 - 160 - return 0; 161 - out_err_bad_srat: 162 - bad_srat(); 163 - out_err: 164 - return -1; 165 - } 166 - 167 - void __init acpi_numa_arch_fixup(void) {} 168 152 169 153 int __init x86_acpi_numa_init(void) 170 154 {
+26 -4
drivers/acpi/Kconfig
···
      bool
      select THERMAL

+ config ACPI_PROCESSOR_CSTATE
+     def_bool y
+     depends on IA64 || X86
+
  config ACPI_PROCESSOR_IDLE
      bool
      select CPU_IDLE
···
  config ACPI_PROCESSOR
      tristate "Processor"
      depends on X86 || IA64 || ARM64
-     select ACPI_PROCESSOR_IDLE if X86 || IA64
+     select ACPI_PROCESSOR_IDLE
      select ACPI_CPU_FREQ_PSS if X86 || IA64
      default y
      help
···
  config ACPI_NUMA
      bool "NUMA support"
      depends on NUMA
-     depends on (X86 || IA64)
-     default y if IA64_GENERIC || IA64_SGI_SN2
+     depends on (X86 || IA64 || ARM64)
+     default y if IA64_GENERIC || IA64_SGI_SN2 || ARM64

  config ACPI_CUSTOM_DSDT_FILE
      string "Custom DSDT Table file to include"
···
      bool
      default ACPI_CUSTOM_DSDT_FILE != ""

+ config ARCH_HAS_ACPI_TABLE_UPGRADE
+     def_bool n
+
  config ACPI_TABLE_UPGRADE
      bool "Allow upgrading ACPI tables via initrd"
-     depends on BLK_DEV_INITRD && X86
+     depends on BLK_DEV_INITRD && ARCH_HAS_ACPI_TABLE_UPGRADE
      default y
      help
        This option provides functionality to upgrade arbitrary ACPI tables
···
        issue.

  source "drivers/acpi/apei/Kconfig"
+ source "drivers/acpi/dptf/Kconfig"

  config ACPI_EXTLOG
      tristate "Extended Error Log support"
···
      help
        This config adds ACPI operation region support for XPower AXP288 PMIC.

+ config BXT_WC_PMIC_OPREGION
+     bool "ACPI operation region support for BXT WhiskeyCove PMIC"
+     depends on INTEL_SOC_PMIC
+     help
+       This config adds ACPI operation region support for BXT WhiskeyCove PMIC.
+
  endif
+
+ config ACPI_CONFIGFS
+     tristate "ACPI configfs support"
+     select CONFIGFS_FS
+     help
+       Select this option to enable support for ACPI configuration from
+       userspace. The configurable ACPI groups will be visible under
+       /config/acpi, assuming configfs is mounted under /config.

  endif # ACPI
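Not part of the commit: the new ACPI_CONFIGFS option is typically exercised along these lines. This is a usage sketch, assuming configfs is mounted at the common location /sys/kernel/config (not /config as in the help text) and that `my-ssdt.aml` is a compiled SSDT image; it needs root and suitable hardware/firmware.

```shell
# Load an SSDT overlay at runtime through configfs.
modprobe acpi_configfs

# Each directory under acpi/table represents one loadable table.
mkdir /sys/kernel/config/acpi/table/my-ssdt
cat my-ssdt.aml > /sys/kernel/config/acpi/table/my-ssdt/aml

# Read back the header fields the driver exposes for the loaded table.
cat /sys/kernel/config/acpi/table/my-ssdt/signature
cat /sys/kernel/config/acpi/table/my-ssdt/length
```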
+4 -1
drivers/acpi/Makefile
···
  acpi-y += acpi_platform.o
  acpi-y += acpi_pnp.o
  acpi-$(CONFIG_ARM_AMBA) += acpi_amba.o
- acpi-y += int340x_thermal.o
  acpi-y += power.o
  acpi-y += event.o
  acpi-$(CONFIG_ACPI_REDUCED_HARDWARE_ONLY) += evged.o
···
  obj-$(CONFIG_PMIC_OPREGION) += pmic/intel_pmic.o
  obj-$(CONFIG_CRC_PMIC_OPREGION) += pmic/intel_pmic_crc.o
  obj-$(CONFIG_XPOWER_PMIC_OPREGION) += pmic/intel_pmic_xpower.o
+ obj-$(CONFIG_BXT_WC_PMIC_OPREGION) += pmic/intel_pmic_bxtwc.o
+
+ obj-$(CONFIG_ACPI_CONFIGFS) += acpi_configfs.o

  video-objs += acpi_video.o video_detect.o
+ obj-y += dptf/
+267
drivers/acpi/acpi_configfs.c
··· 1 + /* 2 + * ACPI configfs support 3 + * 4 + * Copyright (c) 2016 Intel Corporation 5 + * 6 + * This program is free software; you can redistribute it and/or modify it 7 + * under the terms of the GNU General Public License version 2 as published by 8 + * the Free Software Foundation. 9 + */ 10 + 11 + #define pr_fmt(fmt) "ACPI configfs: " fmt 12 + 13 + #include <linux/init.h> 14 + #include <linux/module.h> 15 + #include <linux/configfs.h> 16 + #include <linux/acpi.h> 17 + 18 + static struct config_group *acpi_table_group; 19 + 20 + struct acpi_table { 21 + struct config_item cfg; 22 + struct acpi_table_header *header; 23 + }; 24 + 25 + static ssize_t acpi_table_aml_write(struct config_item *cfg, 26 + const void *data, size_t size) 27 + { 28 + const struct acpi_table_header *header = data; 29 + struct acpi_table *table; 30 + int ret; 31 + 32 + table = container_of(cfg, struct acpi_table, cfg); 33 + 34 + if (table->header) { 35 + pr_err("table already loaded\n"); 36 + return -EBUSY; 37 + } 38 + 39 + if (header->length != size) { 40 + pr_err("invalid table length\n"); 41 + return -EINVAL; 42 + } 43 + 44 + if (memcmp(header->signature, ACPI_SIG_SSDT, 4)) { 45 + pr_err("invalid table signature\n"); 46 + return -EINVAL; 47 + } 48 + 49 + table = container_of(cfg, struct acpi_table, cfg); 50 + 51 + table->header = kmemdup(header, header->length, GFP_KERNEL); 52 + if (!table->header) 53 + return -ENOMEM; 54 + 55 + ret = acpi_load_table(table->header); 56 + if (ret) { 57 + kfree(table->header); 58 + table->header = NULL; 59 + } 60 + 61 + return ret; 62 + } 63 + 64 + static inline struct acpi_table_header *get_header(struct config_item *cfg) 65 + { 66 + struct acpi_table *table = container_of(cfg, struct acpi_table, cfg); 67 + 68 + if (!table->header) 69 + pr_err("table not loaded\n"); 70 + 71 + return table->header; 72 + } 73 + 74 + static ssize_t acpi_table_aml_read(struct config_item *cfg, 75 + void *data, size_t size) 76 + { 77 + struct acpi_table_header *h = 
get_header(cfg); 78 + 79 + if (!h) 80 + return -EINVAL; 81 + 82 + if (data) 83 + memcpy(data, h, h->length); 84 + 85 + return h->length; 86 + } 87 + 88 + #define MAX_ACPI_TABLE_SIZE (128 * 1024) 89 + 90 + CONFIGFS_BIN_ATTR(acpi_table_, aml, NULL, MAX_ACPI_TABLE_SIZE); 91 + 92 + struct configfs_bin_attribute *acpi_table_bin_attrs[] = { 93 + &acpi_table_attr_aml, 94 + NULL, 95 + }; 96 + 97 + ssize_t acpi_table_signature_show(struct config_item *cfg, char *str) 98 + { 99 + struct acpi_table_header *h = get_header(cfg); 100 + 101 + if (!h) 102 + return -EINVAL; 103 + 104 + return sprintf(str, "%.*s\n", ACPI_NAME_SIZE, h->signature); 105 + } 106 + 107 + ssize_t acpi_table_length_show(struct config_item *cfg, char *str) 108 + { 109 + struct acpi_table_header *h = get_header(cfg); 110 + 111 + if (!h) 112 + return -EINVAL; 113 + 114 + return sprintf(str, "%d\n", h->length); 115 + } 116 + 117 + ssize_t acpi_table_revision_show(struct config_item *cfg, char *str) 118 + { 119 + struct acpi_table_header *h = get_header(cfg); 120 + 121 + if (!h) 122 + return -EINVAL; 123 + 124 + return sprintf(str, "%d\n", h->revision); 125 + } 126 + 127 + ssize_t acpi_table_oem_id_show(struct config_item *cfg, char *str) 128 + { 129 + struct acpi_table_header *h = get_header(cfg); 130 + 131 + if (!h) 132 + return -EINVAL; 133 + 134 + return sprintf(str, "%.*s\n", ACPI_OEM_ID_SIZE, h->oem_id); 135 + } 136 + 137 + ssize_t acpi_table_oem_table_id_show(struct config_item *cfg, char *str) 138 + { 139 + struct acpi_table_header *h = get_header(cfg); 140 + 141 + if (!h) 142 + return -EINVAL; 143 + 144 + return sprintf(str, "%.*s\n", ACPI_OEM_TABLE_ID_SIZE, h->oem_table_id); 145 + } 146 + 147 + ssize_t acpi_table_oem_revision_show(struct config_item *cfg, char *str) 148 + { 149 + struct acpi_table_header *h = get_header(cfg); 150 + 151 + if (!h) 152 + return -EINVAL; 153 + 154 + return sprintf(str, "%d\n", h->oem_revision); 155 + } 156 + 157 + ssize_t acpi_table_asl_compiler_id_show(struct config_item 
*cfg, char *str) 158 + { 159 + struct acpi_table_header *h = get_header(cfg); 160 + 161 + if (!h) 162 + return -EINVAL; 163 + 164 + return sprintf(str, "%.*s\n", ACPI_NAME_SIZE, h->asl_compiler_id); 165 + } 166 + 167 + ssize_t acpi_table_asl_compiler_revision_show(struct config_item *cfg, 168 + char *str) 169 + { 170 + struct acpi_table_header *h = get_header(cfg); 171 + 172 + if (!h) 173 + return -EINVAL; 174 + 175 + return sprintf(str, "%d\n", h->asl_compiler_revision); 176 + } 177 + 178 + CONFIGFS_ATTR_RO(acpi_table_, signature); 179 + CONFIGFS_ATTR_RO(acpi_table_, length); 180 + CONFIGFS_ATTR_RO(acpi_table_, revision); 181 + CONFIGFS_ATTR_RO(acpi_table_, oem_id); 182 + CONFIGFS_ATTR_RO(acpi_table_, oem_table_id); 183 + CONFIGFS_ATTR_RO(acpi_table_, oem_revision); 184 + CONFIGFS_ATTR_RO(acpi_table_, asl_compiler_id); 185 + CONFIGFS_ATTR_RO(acpi_table_, asl_compiler_revision); 186 + 187 + struct configfs_attribute *acpi_table_attrs[] = { 188 + &acpi_table_attr_signature, 189 + &acpi_table_attr_length, 190 + &acpi_table_attr_revision, 191 + &acpi_table_attr_oem_id, 192 + &acpi_table_attr_oem_table_id, 193 + &acpi_table_attr_oem_revision, 194 + &acpi_table_attr_asl_compiler_id, 195 + &acpi_table_attr_asl_compiler_revision, 196 + NULL, 197 + }; 198 + 199 + static struct config_item_type acpi_table_type = { 200 + .ct_owner = THIS_MODULE, 201 + .ct_bin_attrs = acpi_table_bin_attrs, 202 + .ct_attrs = acpi_table_attrs, 203 + }; 204 + 205 + static struct config_item *acpi_table_make_item(struct config_group *group, 206 + const char *name) 207 + { 208 + struct acpi_table *table; 209 + 210 + table = kzalloc(sizeof(*table), GFP_KERNEL); 211 + if (!table) 212 + return ERR_PTR(-ENOMEM); 213 + 214 + config_item_init_type_name(&table->cfg, name, &acpi_table_type); 215 + return &table->cfg; 216 + } 217 + 218 + struct configfs_group_operations acpi_table_group_ops = { 219 + .make_item = acpi_table_make_item, 220 + }; 221 + 222 + static struct config_item_type acpi_tables_type = { 
223 + .ct_owner = THIS_MODULE, 224 + .ct_group_ops = &acpi_table_group_ops, 225 + }; 226 + 227 + static struct config_item_type acpi_root_group_type = { 228 + .ct_owner = THIS_MODULE, 229 + }; 230 + 231 + static struct configfs_subsystem acpi_configfs = { 232 + .su_group = { 233 + .cg_item = { 234 + .ci_namebuf = "acpi", 235 + .ci_type = &acpi_root_group_type, 236 + }, 237 + }, 238 + .su_mutex = __MUTEX_INITIALIZER(acpi_configfs.su_mutex), 239 + }; 240 + 241 + static int __init acpi_configfs_init(void) 242 + { 243 + int ret; 244 + struct config_group *root = &acpi_configfs.su_group; 245 + 246 + config_group_init(root); 247 + 248 + ret = configfs_register_subsystem(&acpi_configfs); 249 + if (ret) 250 + return ret; 251 + 252 + acpi_table_group = configfs_register_default_group(root, "table", 253 + &acpi_tables_type); 254 + return PTR_ERR_OR_ZERO(acpi_table_group); 255 + } 256 + module_init(acpi_configfs_init); 257 + 258 + static void __exit acpi_configfs_exit(void) 259 + { 260 + configfs_unregister_default_group(acpi_table_group); 261 + configfs_unregister_subsystem(&acpi_configfs); 262 + } 263 + module_exit(acpi_configfs_exit); 264 + 265 + MODULE_AUTHOR("Octavian Purdila <octavian.purdila@intel.com>"); 266 + MODULE_DESCRIPTION("ACPI configfs support"); 267 + MODULE_LICENSE("GPL v2");
+1 -3
drivers/acpi/acpi_lpat.c
···
   * GNU General Public License for more details.
   */

- #include <linux/module.h>
+ #include <linux/export.h>
  #include <linux/acpi.h>
  #include <acpi/acpi_lpat.h>
···
      }
  }
  EXPORT_SYMBOL_GPL(acpi_lpat_free_conversion_table);
-
- MODULE_LICENSE("GPL");
+3
drivers/acpi/acpi_video.c
···
      union acpi_object *dod = NULL;
      union acpi_object *obj;

+     if (!video->cap._DOD)
+         return AE_NOT_EXIST;
+
      status = acpi_evaluate_object(video->device->handle, "_DOD", NULL, &buffer);
      if (!ACPI_SUCCESS(status)) {
          ACPI_EXCEPTION((AE_INFO, status, "Evaluating _DOD"));
+1 -1
drivers/acpi/apei/Makefile
···
  obj-$(CONFIG_ACPI_APEI_EINJ) += einj.o
  obj-$(CONFIG_ACPI_APEI_ERST_DEBUG) += erst-dbg.o

- apei-y := apei-base.o hest.o erst.o
+ apei-y := apei-base.o hest.o erst.o bert.o
+1 -1
drivers/acpi/apei/apei-internal.h
···
  /*
   * apei-internal.h - ACPI Platform Error Interface internal
-  * definations.
+  * definitions.
   */

  #ifndef APEI_INTERNAL_H
+150
drivers/acpi/apei/bert.c
···
+ /*
+  * APEI Boot Error Record Table (BERT) support
+  *
+  * Copyright 2011 Intel Corp.
+  * Author: Huang Ying <ying.huang@intel.com>
+  *
+  * Under normal circumstances, when a hardware error occurs, the error
+  * handler receives control and processes the error. This gives OSPM a
+  * chance to process the error condition, report it, and optionally attempt
+  * recovery. In some cases, the system is unable to process an error.
+  * For example, system firmware or a management controller may choose to
+  * reset the system or the system might experience an uncontrolled crash
+  * or reset. The boot error source is used to report unhandled errors that
+  * occurred in a previous boot. This mechanism is described in the BERT
+  * table.
+  *
+  * For more information about BERT, please refer to ACPI Specification
+  * version 4.0, section 17.3.1
+  *
+  * This file is licensed under GPLv2.
+  *
+  */
+
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/init.h>
+ #include <linux/acpi.h>
+ #include <linux/io.h>
+
+ #include "apei-internal.h"
+
+ #undef pr_fmt
+ #define pr_fmt(fmt) "BERT: " fmt
+
+ static int bert_disable;
+
+ static void __init bert_print_all(struct acpi_bert_region *region,
+                                   unsigned int region_len)
+ {
+     struct acpi_hest_generic_status *estatus =
+         (struct acpi_hest_generic_status *)region;
+     int remain = region_len;
+     u32 estatus_len;
+
+     if (!estatus->block_status)
+         return;
+
+     while (remain > sizeof(struct acpi_bert_region)) {
+         if (cper_estatus_check(estatus)) {
+             pr_err(FW_BUG "Invalid error record.\n");
+             return;
+         }
+
+         estatus_len = cper_estatus_len(estatus);
+         if (remain < estatus_len) {
+             pr_err(FW_BUG "Truncated status block (length: %u).\n",
+                    estatus_len);
+             return;
+         }
+
+         pr_info_once("Error records from previous boot:\n");
+
+         cper_estatus_print(KERN_INFO HW_ERR, estatus);
+
+         /*
+          * Because the boot error source is "one-time polled" type,
+          * clear Block Status of current Generic Error Status Block,
+          * once it's printed.
+          */
+         estatus->block_status = 0;
+
+         estatus = (void *)estatus + estatus_len;
+         /* No more error records. */
+         if (!estatus->block_status)
+             return;
+
+         remain -= estatus_len;
+     }
+ }
+
+ static int __init setup_bert_disable(char *str)
+ {
+     bert_disable = 1;
+
+     return 0;
+ }
+ __setup("bert_disable", setup_bert_disable);
+
+ static int __init bert_check_table(struct acpi_table_bert *bert_tab)
+ {
+     if (bert_tab->header.length < sizeof(struct acpi_table_bert) ||
+         bert_tab->region_length < sizeof(struct acpi_bert_region))
+         return -EINVAL;
+
+     return 0;
+ }
+
+ static int __init bert_init(void)
+ {
+     struct acpi_bert_region *boot_error_region;
+     struct acpi_table_bert *bert_tab;
+     unsigned int region_len;
+     acpi_status status;
+     int rc = 0;
+
+     if (acpi_disabled)
+         return 0;
+
+     if (bert_disable) {
+         pr_info("Boot Error Record Table support is disabled.\n");
+         return 0;
+     }
+
+     status = acpi_get_table(ACPI_SIG_BERT, 0, (struct acpi_table_header **)&bert_tab);
+     if (status == AE_NOT_FOUND)
+         return 0;
+
+     if (ACPI_FAILURE(status)) {
+         pr_err("get table failed, %s.\n", acpi_format_exception(status));
+         return -EINVAL;
+     }
+
+     rc = bert_check_table(bert_tab);
+     if (rc) {
+         pr_err(FW_BUG "table invalid.\n");
+         return rc;
+     }
+
+     region_len = bert_tab->region_length;
+     if (!request_mem_region(bert_tab->address, region_len, "APEI BERT")) {
+         pr_err("Can't request iomem region <%016llx-%016llx>.\n",
+                (unsigned long long)bert_tab->address,
+                (unsigned long long)bert_tab->address + region_len - 1);
+         return -EIO;
+     }
+
+     boot_error_region = ioremap_cache(bert_tab->address, region_len);
+     if (boot_error_region) {
+         bert_print_all(boot_error_region, region_len);
+         iounmap(boot_error_region);
+     } else {
+         rc = -ENOMEM;
+     }
+
+     release_mem_region(bert_tab->address, region_len);
+
+     return rc;
+ }
+
+ late_initcall(bert_init);
+36 -21
drivers/acpi/apei/einj.c
··· 33 33 34 34 #include "apei-internal.h" 35 35 36 - #define EINJ_PFX "EINJ: " 36 + #undef pr_fmt 37 + #define pr_fmt(fmt) "EINJ: " fmt 37 38 38 39 #define SPIN_UNIT 100 /* 100ns */ 39 40 /* Firmware should respond within 1 milliseconds */ ··· 180 179 static int einj_timedout(u64 *t) 181 180 { 182 181 if ((s64)*t < SPIN_UNIT) { 183 - pr_warning(FW_WARN EINJ_PFX 184 - "Firmware does not respond in time\n"); 182 + pr_warning(FW_WARN "Firmware does not respond in time\n"); 185 183 return 1; 186 184 } 187 185 *t -= SPIN_UNIT; ··· 307 307 r = request_mem_region(trigger_paddr, sizeof(*trigger_tab), 308 308 "APEI EINJ Trigger Table"); 309 309 if (!r) { 310 - pr_err(EINJ_PFX 311 - "Can not request [mem %#010llx-%#010llx] for Trigger table\n", 310 + pr_err("Can not request [mem %#010llx-%#010llx] for Trigger table\n", 312 311 (unsigned long long)trigger_paddr, 313 312 (unsigned long long)trigger_paddr + 314 313 sizeof(*trigger_tab) - 1); ··· 315 316 } 316 317 trigger_tab = ioremap_cache(trigger_paddr, sizeof(*trigger_tab)); 317 318 if (!trigger_tab) { 318 - pr_err(EINJ_PFX "Failed to map trigger table!\n"); 319 + pr_err("Failed to map trigger table!\n"); 319 320 goto out_rel_header; 320 321 } 321 322 rc = einj_check_trigger_header(trigger_tab); 322 323 if (rc) { 323 - pr_warning(FW_BUG EINJ_PFX 324 - "The trigger error action table is invalid\n"); 324 + pr_warning(FW_BUG "Invalid trigger error action table.\n"); 325 325 goto out_rel_header; 326 326 } 327 327 ··· 334 336 table_size - sizeof(*trigger_tab), 335 337 "APEI EINJ Trigger Table"); 336 338 if (!r) { 337 - pr_err(EINJ_PFX 338 - "Can not request [mem %#010llx-%#010llx] for Trigger Table Entry\n", 339 + pr_err("Can not request [mem %#010llx-%#010llx] for Trigger Table Entry\n", 339 340 (unsigned long long)trigger_paddr + sizeof(*trigger_tab), 340 341 (unsigned long long)trigger_paddr + table_size - 1); 341 342 goto out_rel_header; ··· 342 345 iounmap(trigger_tab); 343 346 trigger_tab = ioremap_cache(trigger_paddr, 
table_size); 344 347 if (!trigger_tab) { 345 - pr_err(EINJ_PFX "Failed to map trigger table!\n"); 348 + pr_err("Failed to map trigger table!\n"); 346 349 goto out_rel_entry; 347 350 } 348 351 trigger_entry = (struct acpi_whea_header *) ··· 692 695 struct dentry *fentry; 693 696 struct apei_exec_context ctx; 694 697 695 - if (acpi_disabled) 698 + if (acpi_disabled) { 699 + pr_warn("ACPI disabled.\n"); 696 700 return -ENODEV; 701 + } 697 702 698 703 status = acpi_get_table(ACPI_SIG_EINJ, 0, 699 704 (struct acpi_table_header **)&einj_tab); 700 - if (status == AE_NOT_FOUND) 705 + if (status == AE_NOT_FOUND) { 706 + pr_warn("EINJ table not found.\n"); 701 707 return -ENODEV; 708 + } 702 709 else if (ACPI_FAILURE(status)) { 703 - const char *msg = acpi_format_exception(status); 704 - pr_err(EINJ_PFX "Failed to get table, %s\n", msg); 710 + pr_err("Failed to get EINJ table: %s\n", 711 + acpi_format_exception(status)); 705 712 return -EINVAL; 706 713 } 707 714 708 715 rc = einj_check_table(einj_tab); 709 716 if (rc) { 710 - pr_warning(FW_BUG EINJ_PFX "EINJ table is invalid\n"); 717 + pr_warn(FW_BUG "Invalid EINJ table.n"); 711 718 return -EINVAL; 712 719 } 713 720 714 721 rc = -ENOMEM; 715 722 einj_debug_dir = debugfs_create_dir("einj", apei_get_debugfs_dir()); 716 - if (!einj_debug_dir) 723 + if (!einj_debug_dir) { 724 + pr_err("Error creating debugfs node.\n"); 717 725 goto err_cleanup; 726 + } 727 + 718 728 fentry = debugfs_create_file("available_error_type", S_IRUSR, 719 729 einj_debug_dir, NULL, 720 730 &available_error_type_fops); 721 731 if (!fentry) 722 732 goto err_cleanup; 733 + 723 734 fentry = debugfs_create_file("error_type", S_IRUSR | S_IWUSR, 724 735 einj_debug_dir, NULL, &error_type_fops); 725 736 if (!fentry) ··· 740 735 apei_resources_init(&einj_resources); 741 736 einj_exec_ctx_init(&ctx); 742 737 rc = apei_exec_collect_resources(&ctx, &einj_resources); 743 - if (rc) 738 + if (rc) { 739 + pr_err("Error collecting EINJ resources.\n"); 744 740 goto 
err_fini; 741 + } 742 + 745 743 rc = apei_resources_request(&einj_resources, "APEI EINJ"); 746 - if (rc) 744 + if (rc) { 745 + pr_err("Error requesting memory/port resources.\n"); 747 746 goto err_fini; 747 + } 748 + 748 749 rc = apei_exec_pre_map_gars(&ctx); 749 - if (rc) 750 + if (rc) { 751 + pr_err("Error pre-mapping GARs.\n"); 750 752 goto err_release; 753 + } 751 754 752 755 rc = -ENOMEM; 753 756 einj_param = einj_get_parameter_address(); ··· 800 787 goto err_unmap; 801 788 } 802 789 803 - pr_info(EINJ_PFX "Error INJection is initialized.\n"); 790 + pr_info("Error INJection is initialized.\n"); 804 791 805 792 return 0; 806 793 ··· 811 798 sizeof(struct einj_parameter); 812 799 813 800 acpi_os_unmap_iomem(einj_param, size); 801 + pr_err("Error creating param extension debugfs nodes.\n"); 814 802 } 815 803 apei_exec_post_unmap_gars(&ctx); 816 804 err_release: ··· 819 805 err_fini: 820 806 apei_resources_fini(&einj_resources); 821 807 err_cleanup: 808 + pr_err("Error creating primary debugfs nodes.\n"); 822 809 debugfs_remove_recursive(einj_debug_dir); 823 810 824 811 return rc;
+84 -15
drivers/acpi/bus.c
··· 30 30 #include <linux/acpi.h> 31 31 #include <linux/slab.h> 32 32 #include <linux/regulator/machine.h> 33 + #include <linux/workqueue.h> 34 + #include <linux/reboot.h> 35 + #include <linux/delay.h> 33 36 #ifdef CONFIG_X86 34 37 #include <asm/mpspec.h> 35 38 #endif ··· 177 174 EXPORT_SYMBOL_GPL(acpi_bus_detach_private_data); 178 175 179 176 static void acpi_print_osc_error(acpi_handle handle, 180 - struct acpi_osc_context *context, char *error) 177 + struct acpi_osc_context *context, char *error) 181 178 { 182 - struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER}; 183 179 int i; 184 180 185 - if (ACPI_FAILURE(acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer))) 186 - printk(KERN_DEBUG "%s: %s\n", context->uuid_str, error); 187 - else { 188 - printk(KERN_DEBUG "%s (%s): %s\n", 189 - (char *)buffer.pointer, context->uuid_str, error); 190 - kfree(buffer.pointer); 191 - } 192 - printk(KERN_DEBUG "_OSC request data:"); 181 + acpi_handle_debug(handle, "(%s): %s\n", context->uuid_str, error); 182 + 183 + pr_debug("_OSC request data:"); 193 184 for (i = 0; i < context->cap.length; i += sizeof(u32)) 194 - printk(" %x", *((u32 *)(context->cap.pointer + i))); 195 - printk("\n"); 185 + pr_debug(" %x", *((u32 *)(context->cap.pointer + i))); 186 + 187 + pr_debug("\n"); 196 188 } 197 189 198 190 acpi_status acpi_str_to_uuid(char *str, u8 *uuid) ··· 300 302 EXPORT_SYMBOL(acpi_run_osc); 301 303 302 304 bool osc_sb_apei_support_acked; 305 + 306 + /* 307 + * ACPI 6.0 Section 8.4.4.2 Idle State Coordination 308 + * OSPM supports platform coordinated low power idle(LPI) states 309 + */ 310 + bool osc_pc_lpi_support_confirmed; 311 + EXPORT_SYMBOL_GPL(osc_pc_lpi_support_confirmed); 312 + 303 313 static u8 sb_uuid_str[] = "0811B06E-4A27-44F9-8D60-3CBBC22E7B48"; 304 314 static void acpi_bus_osc_support(void) 305 315 { ··· 328 322 capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_PPC_OST_SUPPORT; 329 323 330 324 capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_HOTPLUG_OST_SUPPORT; 325 + capbuf[OSC_SUPPORT_DWORD] |= 
OSC_SB_PCLPI_SUPPORT; 331 326 332 327 if (!ghes_disable) 333 328 capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_APEI_SUPPORT; ··· 336 329 return; 337 330 if (ACPI_SUCCESS(acpi_run_osc(handle, &context))) { 338 331 u32 *capbuf_ret = context.ret.pointer; 339 - if (context.ret.length > OSC_SUPPORT_DWORD) 332 + if (context.ret.length > OSC_SUPPORT_DWORD) { 340 333 osc_sb_apei_support_acked = 341 334 capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_APEI_SUPPORT; 335 + osc_pc_lpi_support_confirmed = 336 + capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_PCLPI_SUPPORT; 337 + } 342 338 kfree(context.ret.pointer); 343 339 } 344 340 /* do we need to check other returned cap? Sounds no */ ··· 483 473 else 484 474 acpi_remove_notify_handler(device->handle, ACPI_DEVICE_NOTIFY, 485 475 acpi_device_notify); 476 + } 477 + 478 + /* Handle events targeting \_SB device (at present only graceful shutdown) */ 479 + 480 + #define ACPI_SB_NOTIFY_SHUTDOWN_REQUEST 0x81 481 + #define ACPI_SB_INDICATE_INTERVAL 10000 482 + 483 + static void sb_notify_work(struct work_struct *dummy) 484 + { 485 + acpi_handle sb_handle; 486 + 487 + orderly_poweroff(true); 488 + 489 + /* 490 + * After initiating graceful shutdown, the ACPI spec requires OSPM 491 + * to evaluate _OST method once every 10seconds to indicate that 492 + * the shutdown is in progress 493 + */ 494 + acpi_get_handle(NULL, "\\_SB", &sb_handle); 495 + while (1) { 496 + pr_info("Graceful shutdown in progress.\n"); 497 + acpi_evaluate_ost(sb_handle, ACPI_OST_EC_OSPM_SHUTDOWN, 498 + ACPI_OST_SC_OS_SHUTDOWN_IN_PROGRESS, NULL); 499 + msleep(ACPI_SB_INDICATE_INTERVAL); 500 + } 501 + } 502 + 503 + static void acpi_sb_notify(acpi_handle handle, u32 event, void *data) 504 + { 505 + static DECLARE_WORK(acpi_sb_work, sb_notify_work); 506 + 507 + if (event == ACPI_SB_NOTIFY_SHUTDOWN_REQUEST) { 508 + if (!work_busy(&acpi_sb_work)) 509 + schedule_work(&acpi_sb_work); 510 + } else 511 + pr_warn("event %x is not supported by \\_SB device\n", event); 512 + } 513 + 514 + static int 
__init acpi_setup_sb_notify_handler(void) 515 + { 516 + acpi_handle sb_handle; 517 + 518 + if (ACPI_FAILURE(acpi_get_handle(NULL, "\\_SB", &sb_handle))) 519 + return -ENXIO; 520 + 521 + if (ACPI_FAILURE(acpi_install_notify_handler(sb_handle, ACPI_DEVICE_NOTIFY, 522 + acpi_sb_notify, NULL))) 523 + return -EINVAL; 524 + 525 + return 0; 486 526 } 487 527 488 528 /* -------------------------------------------------------------------------- ··· 1021 961 /** 1022 962 * acpi_subsystem_init - Finalize the early initialization of ACPI. 1023 963 * 1024 - * Switch over the platform to the ACPI mode (if possible), initialize the 1025 - * handling of ACPI events, install the interrupt and global lock handlers. 964 + * Switch over the platform to the ACPI mode (if possible). 1026 965 * 1027 966 * Doing this too early is generally unsafe, but at the same time it needs to be 1028 967 * done before all things that really depend on ACPI. The right spot appears to ··· 1047 988 */ 1048 989 regulator_has_full_constraints(); 1049 990 } 991 + } 992 + 993 + static acpi_status acpi_bus_table_handler(u32 event, void *table, void *context) 994 + { 995 + acpi_scan_table_handler(event, table, context); 996 + 997 + return acpi_sysfs_table_handler(event, table, context); 1050 998 } 1051 999 1052 1000 static int __init acpi_bus_init(void) ··· 1109 1043 * _PDC control method may load dynamic SSDT tables, 1110 1044 * and we need to install the table handler before that. 1111 1045 */ 1046 + status = acpi_install_table_handler(acpi_bus_table_handler, NULL); 1047 + 1112 1048 acpi_sysfs_init(); 1113 1049 1114 1050 acpi_early_processor_set_pdc(); ··· 1192 1124 acpi_sleep_proc_init(); 1193 1125 acpi_wakeup_device_init(); 1194 1126 acpi_debugger_init(); 1127 + acpi_setup_sb_notify_handler(); 1195 1128 return 0; 1196 1129 } 1197 1130
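The bus.c hunk above only reads the PCLPI/APEI bits out of the `_OSC` return buffer after checking that the returned length actually covers the support DWORD. That guard can be modeled in a standalone C sketch; the constant values below match my reading of the kernel headers but are illustrative, and `osc_bit_acked()` is a hypothetical helper, not a kernel function:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins for the kernel's _OSC buffer layout and bits. */
#define OSC_SUPPORT_DWORD    1
#define OSC_SB_APEI_SUPPORT  0x00000010u
#define OSC_SB_PCLPI_SUPPORT 0x00000080u

/*
 * Model of the guarded read in acpi_bus_osc_support(): a capability bit
 * is trusted only if the returned buffer is long enough to contain the
 * support DWORD; a short buffer means "not acknowledged".
 */
static int osc_bit_acked(const uint32_t *capbuf_ret, size_t len_dwords,
			 uint32_t bit)
{
	if (len_dwords <= OSC_SUPPORT_DWORD)
		return 0;	/* too short: bit cannot have been acked */
	return (capbuf_ret[OSC_SUPPORT_DWORD] & bit) != 0;
}
```

This is why `osc_pc_lpi_support_confirmed` is set inside the new `if (context.ret.length > OSC_SUPPORT_DWORD)` block rather than unconditionally.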
+107 -42
drivers/acpi/button.c
··· 53 53 #define ACPI_BUTTON_DEVICE_NAME_LID "Lid Switch" 54 54 #define ACPI_BUTTON_TYPE_LID 0x05 55 55 56 + #define ACPI_BUTTON_LID_INIT_IGNORE 0x00 57 + #define ACPI_BUTTON_LID_INIT_OPEN 0x01 58 + #define ACPI_BUTTON_LID_INIT_METHOD 0x02 59 + 56 60 #define _COMPONENT ACPI_BUTTON_COMPONENT 57 61 ACPI_MODULE_NAME("button"); 58 62 ··· 109 105 110 106 static BLOCKING_NOTIFIER_HEAD(acpi_lid_notifier); 111 107 static struct acpi_device *lid_device; 108 + static u8 lid_init_state = ACPI_BUTTON_LID_INIT_METHOD; 112 109 113 110 /* -------------------------------------------------------------------------- 114 111 FS Interface (/proc) ··· 118 113 static struct proc_dir_entry *acpi_button_dir; 119 114 static struct proc_dir_entry *acpi_lid_dir; 120 115 116 + static int acpi_lid_evaluate_state(struct acpi_device *device) 117 + { 118 + unsigned long long lid_state; 119 + acpi_status status; 120 + 121 + status = acpi_evaluate_integer(device->handle, "_LID", NULL, &lid_state); 122 + if (ACPI_FAILURE(status)) 123 + return -ENODEV; 124 + 125 + return lid_state ? 1 : 0; 126 + } 127 + 128 + static int acpi_lid_notify_state(struct acpi_device *device, int state) 129 + { 130 + struct acpi_button *button = acpi_driver_data(device); 131 + int ret; 132 + 133 + /* input layer checks if event is redundant */ 134 + input_report_switch(button->input, SW_LID, !state); 135 + input_sync(button->input); 136 + 137 + if (state) 138 + pm_wakeup_event(&device->dev, 0); 139 + 140 + ret = blocking_notifier_call_chain(&acpi_lid_notifier, state, device); 141 + if (ret == NOTIFY_DONE) 142 + ret = blocking_notifier_call_chain(&acpi_lid_notifier, state, 143 + device); 144 + if (ret == NOTIFY_DONE || ret == NOTIFY_OK) { 145 + /* 146 + * It is also regarded as success if the notifier_chain 147 + * returns NOTIFY_OK or NOTIFY_DONE. 
148 + */ 149 + ret = 0; 150 + } 151 + return ret; 152 + } 153 + 121 154 static int acpi_button_state_seq_show(struct seq_file *seq, void *offset) 122 155 { 123 156 struct acpi_device *device = seq->private; 124 - acpi_status status; 125 - unsigned long long state; 157 + int state; 126 158 127 - status = acpi_evaluate_integer(device->handle, "_LID", NULL, &state); 159 + state = acpi_lid_evaluate_state(device); 128 160 seq_printf(seq, "state: %s\n", 129 - ACPI_FAILURE(status) ? "unsupported" : 130 - (state ? "open" : "closed")); 161 + state < 0 ? "unsupported" : (state ? "open" : "closed")); 131 162 return 0; 132 163 } 133 164 ··· 272 231 273 232 int acpi_lid_open(void) 274 233 { 275 - acpi_status status; 276 - unsigned long long state; 277 - 278 234 if (!lid_device) 279 235 return -ENODEV; 280 236 281 - status = acpi_evaluate_integer(lid_device->handle, "_LID", NULL, 282 - &state); 283 - if (ACPI_FAILURE(status)) 284 - return -ENODEV; 285 - 286 - return !!state; 237 + return acpi_lid_evaluate_state(lid_device); 287 238 } 288 239 EXPORT_SYMBOL(acpi_lid_open); 289 240 290 - static int acpi_lid_send_state(struct acpi_device *device) 241 + static int acpi_lid_update_state(struct acpi_device *device) 291 242 { 292 - struct acpi_button *button = acpi_driver_data(device); 293 - unsigned long long state; 294 - acpi_status status; 295 - int ret; 243 + int state; 296 244 297 - status = acpi_evaluate_integer(device->handle, "_LID", NULL, &state); 298 - if (ACPI_FAILURE(status)) 299 - return -ENODEV; 245 + state = acpi_lid_evaluate_state(device); 246 + if (state < 0) 247 + return state; 300 248 301 - /* input layer checks if event is redundant */ 302 - input_report_switch(button->input, SW_LID, !state); 303 - input_sync(button->input); 249 + return acpi_lid_notify_state(device, state); 250 + } 304 251 305 - if (state) 306 - pm_wakeup_event(&device->dev, 0); 307 - 308 - ret = blocking_notifier_call_chain(&acpi_lid_notifier, state, device); 309 - if (ret == NOTIFY_DONE) 310 - ret 
= blocking_notifier_call_chain(&acpi_lid_notifier, state, 311 - device); 312 - if (ret == NOTIFY_DONE || ret == NOTIFY_OK) { 313 - /* 314 - * It is also regarded as success if the notifier_chain 315 - * returns NOTIFY_OK or NOTIFY_DONE. 316 - */ 317 - ret = 0; 252 + static void acpi_lid_initialize_state(struct acpi_device *device) 253 + { 254 + switch (lid_init_state) { 255 + case ACPI_BUTTON_LID_INIT_OPEN: 256 + (void)acpi_lid_notify_state(device, 1); 257 + break; 258 + case ACPI_BUTTON_LID_INIT_METHOD: 259 + (void)acpi_lid_update_state(device); 260 + break; 261 + case ACPI_BUTTON_LID_INIT_IGNORE: 262 + default: 263 + break; 318 264 } 319 - return ret; 320 265 } 321 266 322 267 static void acpi_button_notify(struct acpi_device *device, u32 event) ··· 317 290 case ACPI_BUTTON_NOTIFY_STATUS: 318 291 input = button->input; 319 292 if (button->type == ACPI_BUTTON_TYPE_LID) { 320 - acpi_lid_send_state(device); 293 + acpi_lid_update_state(device); 321 294 } else { 322 295 int keycode; 323 296 ··· 362 335 363 336 button->suspended = false; 364 337 if (button->type == ACPI_BUTTON_TYPE_LID) 365 - return acpi_lid_send_state(device); 338 + acpi_lid_initialize_state(device); 366 339 return 0; 367 340 } 368 341 #endif ··· 443 416 if (error) 444 417 goto err_remove_fs; 445 418 if (button->type == ACPI_BUTTON_TYPE_LID) { 446 - acpi_lid_send_state(device); 419 + acpi_lid_initialize_state(device); 447 420 /* 448 421 * This assumes there's only one lid device, or if there are 449 422 * more we only care about the last one... 
··· 472 445 kfree(button); 473 446 return 0; 474 447 } 448 + 449 + static int param_set_lid_init_state(const char *val, struct kernel_param *kp) 450 + { 451 + int result = 0; 452 + 453 + if (!strncmp(val, "open", sizeof("open") - 1)) { 454 + lid_init_state = ACPI_BUTTON_LID_INIT_OPEN; 455 + pr_info("Notify initial lid state as open\n"); 456 + } else if (!strncmp(val, "method", sizeof("method") - 1)) { 457 + lid_init_state = ACPI_BUTTON_LID_INIT_METHOD; 458 + pr_info("Notify initial lid state with _LID return value\n"); 459 + } else if (!strncmp(val, "ignore", sizeof("ignore") - 1)) { 460 + lid_init_state = ACPI_BUTTON_LID_INIT_IGNORE; 461 + pr_info("Do not notify initial lid state\n"); 462 + } else 463 + result = -EINVAL; 464 + return result; 465 + } 466 + 467 + static int param_get_lid_init_state(char *buffer, struct kernel_param *kp) 468 + { 469 + switch (lid_init_state) { 470 + case ACPI_BUTTON_LID_INIT_OPEN: 471 + return sprintf(buffer, "open"); 472 + case ACPI_BUTTON_LID_INIT_METHOD: 473 + return sprintf(buffer, "method"); 474 + case ACPI_BUTTON_LID_INIT_IGNORE: 475 + return sprintf(buffer, "ignore"); 476 + default: 477 + return sprintf(buffer, "invalid"); 478 + } 479 + return 0; 480 + } 481 + 482 + module_param_call(lid_init_state, 483 + param_set_lid_init_state, param_get_lid_init_state, 484 + NULL, 0644); 485 + MODULE_PARM_DESC(lid_init_state, "Behavior for reporting LID initial state"); 475 486 476 487 module_acpi_driver(acpi_button_driver);
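The new `lid_init_state` module parameter in button.c accepts `open`, `method`, or `ignore` via prefix `strncmp()` matching. A minimal userspace model of that parsing logic, with `parse_lid_init_state()` as a hypothetical stand-in for `param_set_lid_init_state()` (returning -1 where the kernel returns -EINVAL):

```c
#include <string.h>

/* Mirrors the three ACPI_BUTTON_LID_INIT_* states added in this hunk. */
enum lid_init { LID_INIT_IGNORE, LID_INIT_OPEN, LID_INIT_METHOD };

static int parse_lid_init_state(const char *val)
{
	/* Prefix match, like the kernel's strncmp(val, "open", sizeof("open") - 1),
	 * so a trailing newline from sysfs writes still matches. */
	if (!strncmp(val, "open", strlen("open")))
		return LID_INIT_OPEN;
	if (!strncmp(val, "method", strlen("method")))
		return LID_INIT_METHOD;
	if (!strncmp(val, "ignore", strlen("ignore")))
		return LID_INIT_IGNORE;
	return -1;	/* kernel: -EINVAL */
}
```

Note the deliberate prefix semantics: `"open\n"` (as written through `/sys/module/.../parameters/`) still selects `LID_INIT_OPEN`.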
+15 -9
drivers/acpi/cppc_acpi.c
··· 299 299 continue; 300 300 301 301 cpc_ptr = per_cpu(cpc_desc_ptr, i); 302 - if (!cpc_ptr) 303 - continue; 302 + if (!cpc_ptr) { 303 + retval = -EFAULT; 304 + goto err_ret; 305 + } 304 306 305 307 pdomain = &(cpc_ptr->domain_info); 306 308 cpumask_set_cpu(i, pr->shared_cpu_map); ··· 324 322 continue; 325 323 326 324 match_cpc_ptr = per_cpu(cpc_desc_ptr, j); 327 - if (!match_cpc_ptr) 328 - continue; 325 + if (!match_cpc_ptr) { 326 + retval = -EFAULT; 327 + goto err_ret; 328 + } 329 329 330 330 match_pdomain = &(match_cpc_ptr->domain_info); 331 331 if (match_pdomain->domain != pdomain->domain) ··· 357 353 continue; 358 354 359 355 match_cpc_ptr = per_cpu(cpc_desc_ptr, j); 360 - if (!match_cpc_ptr) 361 - continue; 356 + if (!match_cpc_ptr) { 357 + retval = -EFAULT; 358 + goto err_ret; 359 + } 362 360 363 361 match_pdomain = &(match_cpc_ptr->domain_info); 364 362 if (match_pdomain->domain != pdomain->domain) ··· 601 595 /* Store CPU Logical ID */ 602 596 cpc_ptr->cpu_id = pr->id; 603 597 604 - /* Plug it into this CPUs CPC descriptor. */ 605 - per_cpu(cpc_desc_ptr, pr->id) = cpc_ptr; 606 - 607 598 /* Parse PSD data for this CPU */ 608 599 ret = acpi_get_psd(cpc_ptr, handle); 609 600 if (ret) ··· 612 609 if (ret) 613 610 goto out_free; 614 611 } 612 + 613 + /* Plug PSD data into this CPUs CPC descriptor. */ 614 + per_cpu(cpc_desc_ptr, pr->id) = cpc_ptr; 615 615 616 616 /* Everything looks okay */ 617 617 pr_debug("Parsed CPC struct for CPU: %d\n", pr->id);
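The cppc_acpi.c change moves the `per_cpu(cpc_desc_ptr, ...)` assignment to after all parsing succeeds, so other CPUs walking the shared map never see a half-initialized descriptor (and now get a hard -EFAULT instead of silently skipping it). A toy publish-last model, with all names hypothetical:

```c
#include <stddef.h>

struct cpc_desc { int psd_ok; };

/* Stand-in for the per_cpu(cpc_desc_ptr, cpu) slot array. */
static struct cpc_desc *cpc_desc_ptr[4];

/* Fake parse step: succeeds or fails per the caller's flag. */
static int parse_psd(struct cpc_desc *d, int ok)
{
	d->psd_ok = ok;
	return ok ? 0 : -1;
}

/*
 * Publish the descriptor pointer only once every parse step has
 * succeeded -- mirroring the hunk that moves the per_cpu assignment
 * below acpi_get_psd().
 */
static int cpc_register(int cpu, struct cpc_desc *d, int psd_ok)
{
	if (parse_psd(d, psd_ok))
		return -1;		/* failure: slot stays NULL */
	cpc_desc_ptr[cpu] = d;		/* publish last */
	return 0;
}
```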
+1 -6
drivers/acpi/dock.c
··· 21 21 */ 22 22 23 23 #include <linux/kernel.h> 24 - #include <linux/module.h> 24 + #include <linux/moduleparam.h> 25 25 #include <linux/slab.h> 26 26 #include <linux/init.h> 27 27 #include <linux/types.h> ··· 33 33 34 34 #include "internal.h" 35 35 36 - #define ACPI_DOCK_DRIVER_DESCRIPTION "ACPI Dock Station Driver" 37 - 38 36 ACPI_MODULE_NAME("dock"); 39 - MODULE_AUTHOR("Kristen Carlson Accardi"); 40 - MODULE_DESCRIPTION(ACPI_DOCK_DRIVER_DESCRIPTION); 41 - MODULE_LICENSE("GPL"); 42 37 43 38 static bool immediate_undock = 1; 44 39 module_param(immediate_undock, bool, 0644);
+15
drivers/acpi/dptf/Kconfig
··· 1 + config DPTF_POWER 2 + tristate "DPTF Platform Power Participant" 3 + depends on X86 4 + help 5 + This driver adds support for Dynamic Platform and Thermal Framework 6 + (DPTF) Platform Power Participant device (INT3407) support. 7 + This participant is responsible for exposing platform telemetry: 8 + max_platform_power 9 + platform_power_source 10 + adapter_rating 11 + battery_steady_power 12 + charger_type 13 + 14 + To compile this driver as a module, choose M here: 15 + the module will be called dptf_power.
+4
drivers/acpi/dptf/Makefile
··· 1 + obj-$(CONFIG_ACPI) += int340x_thermal.o 2 + obj-$(CONFIG_DPTF_POWER) += dptf_power.o 3 + 4 + ccflags-y += -Idrivers/acpi
+128
drivers/acpi/dptf/dptf_power.c
··· 1 + /* 2 + * dptf_power: DPTF platform power driver 3 + * Copyright (c) 2016, Intel Corporation. 4 + * 5 + * This program is free software; you can redistribute it and/or modify it 6 + * under the terms and conditions of the GNU General Public License, 7 + * version 2, as published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope it will be useful, but WITHOUT 10 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 + * more details. 13 + * 14 + */ 15 + 16 + #include <linux/kernel.h> 17 + #include <linux/module.h> 18 + #include <linux/acpi.h> 19 + #include <linux/platform_device.h> 20 + 21 + /* 22 + * Presentation of attributes which are defined for INT3407. They are: 23 + * PMAX : Maximum platform powe 24 + * PSRC : Platform power source 25 + * ARTG : Adapter rating 26 + * CTYP : Charger type 27 + * PBSS : Battery steady power 28 + */ 29 + #define DPTF_POWER_SHOW(name, object) \ 30 + static ssize_t name##_show(struct device *dev,\ 31 + struct device_attribute *attr,\ 32 + char *buf)\ 33 + {\ 34 + struct platform_device *pdev = to_platform_device(dev);\ 35 + struct acpi_device *acpi_dev = platform_get_drvdata(pdev);\ 36 + unsigned long long val;\ 37 + acpi_status status;\ 38 + \ 39 + status = acpi_evaluate_integer(acpi_dev->handle, #object,\ 40 + NULL, &val);\ 41 + if (ACPI_SUCCESS(status))\ 42 + return sprintf(buf, "%d\n", (int)val);\ 43 + else \ 44 + return -EINVAL;\ 45 + } 46 + 47 + DPTF_POWER_SHOW(max_platform_power_mw, PMAX) 48 + DPTF_POWER_SHOW(platform_power_source, PSRC) 49 + DPTF_POWER_SHOW(adapter_rating_mw, ARTG) 50 + DPTF_POWER_SHOW(battery_steady_power_mw, PBSS) 51 + DPTF_POWER_SHOW(charger_type, CTYP) 52 + 53 + static DEVICE_ATTR_RO(max_platform_power_mw); 54 + static DEVICE_ATTR_RO(platform_power_source); 55 + static DEVICE_ATTR_RO(adapter_rating_mw); 56 + static DEVICE_ATTR_RO(battery_steady_power_mw); 57 + 
static DEVICE_ATTR_RO(charger_type); 58 + 59 + static struct attribute *dptf_power_attrs[] = { 60 + &dev_attr_max_platform_power_mw.attr, 61 + &dev_attr_platform_power_source.attr, 62 + &dev_attr_adapter_rating_mw.attr, 63 + &dev_attr_battery_steady_power_mw.attr, 64 + &dev_attr_charger_type.attr, 65 + NULL 66 + }; 67 + 68 + static struct attribute_group dptf_power_attribute_group = { 69 + .attrs = dptf_power_attrs, 70 + .name = "dptf_power" 71 + }; 72 + 73 + static int dptf_power_add(struct platform_device *pdev) 74 + { 75 + struct acpi_device *acpi_dev; 76 + acpi_status status; 77 + unsigned long long ptype; 78 + int result; 79 + 80 + acpi_dev = ACPI_COMPANION(&(pdev->dev)); 81 + if (!acpi_dev) 82 + return -ENODEV; 83 + 84 + status = acpi_evaluate_integer(acpi_dev->handle, "PTYP", NULL, &ptype); 85 + if (ACPI_FAILURE(status)) 86 + return -ENODEV; 87 + 88 + if (ptype != 0x11) 89 + return -ENODEV; 90 + 91 + result = sysfs_create_group(&pdev->dev.kobj, 92 + &dptf_power_attribute_group); 93 + if (result) 94 + return result; 95 + 96 + platform_set_drvdata(pdev, acpi_dev); 97 + 98 + return 0; 99 + } 100 + 101 + static int dptf_power_remove(struct platform_device *pdev) 102 + { 103 + 104 + sysfs_remove_group(&pdev->dev.kobj, &dptf_power_attribute_group); 105 + 106 + return 0; 107 + } 108 + 109 + static const struct acpi_device_id int3407_device_ids[] = { 110 + {"INT3407", 0}, 111 + {"", 0}, 112 + }; 113 + MODULE_DEVICE_TABLE(acpi, int3407_device_ids); 114 + 115 + static struct platform_driver dptf_power_driver = { 116 + .probe = dptf_power_add, 117 + .remove = dptf_power_remove, 118 + .driver = { 119 + .name = "DPTF Platform Power", 120 + .acpi_match_table = int3407_device_ids, 121 + }, 122 + }; 123 + 124 + module_platform_driver(dptf_power_driver); 125 + 126 + MODULE_AUTHOR("Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>"); 127 + MODULE_LICENSE("GPL v2"); 128 + MODULE_DESCRIPTION("ACPI DPTF platform power driver");
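The `DPTF_POWER_SHOW()` macro above stamps out one sysfs `show` function per ACPI object (PMAX, PSRC, ...). The same token-pasting/stringizing pattern can be sketched outside the kernel; `fake_acpi_evaluate()` and its return values are invented for illustration:

```c
#include <stdio.h>
#include <string.h>

/* Fake evaluator standing in for acpi_evaluate_integer(); values invented. */
static int fake_acpi_evaluate(const char *object, unsigned long long *val)
{
	if (!strcmp(object, "PMAX")) { *val = 45000; return 0; }
	if (!strcmp(object, "PSRC")) { *val = 1; return 0; }
	return -1;
}

/* One macro generates a formatter per object: ## pastes the function
 * name, # stringizes the ACPI object name. */
#define DPTF_SHOW(name, object)						\
static int name##_show(char *buf, size_t len)				\
{									\
	unsigned long long val;						\
	if (fake_acpi_evaluate(#object, &val))				\
		return -1;						\
	return snprintf(buf, len, "%d\n", (int)val);			\
}

DPTF_SHOW(max_platform_power_mw, PMAX)
DPTF_SHOW(platform_power_source, PSRC)
```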
+53 -53
drivers/acpi/ec.c
··· 1359 1359 } 1360 1360 } 1361 1361 1362 - static int acpi_ec_add(struct acpi_device *device) 1362 + static struct acpi_ec *acpi_ec_alloc(void) 1363 1363 { 1364 - struct acpi_ec *ec = NULL; 1365 - int ret; 1366 - 1367 - strcpy(acpi_device_name(device), ACPI_EC_DEVICE_NAME); 1368 - strcpy(acpi_device_class(device), ACPI_EC_CLASS); 1364 + struct acpi_ec *ec; 1369 1365 1370 1366 /* Check for boot EC */ 1371 1367 if (boot_ec) { ··· 1372 1376 first_ec = NULL; 1373 1377 } else { 1374 1378 ec = make_acpi_ec(); 1375 - if (!ec) 1376 - return -ENOMEM; 1377 1379 } 1380 + return ec; 1381 + } 1382 + 1383 + static int acpi_ec_add(struct acpi_device *device) 1384 + { 1385 + struct acpi_ec *ec = NULL; 1386 + int ret; 1387 + 1388 + strcpy(acpi_device_name(device), ACPI_EC_DEVICE_NAME); 1389 + strcpy(acpi_device_class(device), ACPI_EC_CLASS); 1390 + 1391 + ec = acpi_ec_alloc(); 1392 + if (!ec) 1393 + return -ENOMEM; 1378 1394 if (ec_parse_device(device->handle, 0, ec, NULL) != 1379 1395 AE_CTRL_TERMINATE) { 1380 1396 kfree(ec); ··· 1473 1465 int __init acpi_ec_dsdt_probe(void) 1474 1466 { 1475 1467 acpi_status status; 1468 + struct acpi_ec *ec; 1469 + int ret; 1476 1470 1477 - if (boot_ec) 1478 - return 0; 1479 - 1471 + ec = acpi_ec_alloc(); 1472 + if (!ec) 1473 + return -ENOMEM; 1480 1474 /* 1481 1475 * Finding EC from DSDT if there is no ECDT EC available. When this 1482 1476 * function is invoked, ACPI tables have been fully loaded, we can 1483 1477 * walk namespace now. 
1484 1478 */ 1485 - boot_ec = make_acpi_ec(); 1486 - if (!boot_ec) 1487 - return -ENOMEM; 1488 1479 status = acpi_get_devices(ec_device_ids[0].id, 1489 - ec_parse_device, boot_ec, NULL); 1490 - if (ACPI_FAILURE(status) || !boot_ec->handle) 1491 - return -ENODEV; 1492 - if (!ec_install_handlers(boot_ec)) { 1493 - first_ec = boot_ec; 1494 - return 0; 1480 + ec_parse_device, ec, NULL); 1481 + if (ACPI_FAILURE(status) || !ec->handle) { 1482 + ret = -ENODEV; 1483 + goto error; 1495 1484 } 1496 - return -EFAULT; 1485 + ret = ec_install_handlers(ec); 1486 + 1487 + error: 1488 + if (ret) 1489 + kfree(ec); 1490 + else 1491 + first_ec = boot_ec = ec; 1492 + return ret; 1497 1493 } 1498 1494 1499 1495 #if 0 ··· 1541 1529 return 0; 1542 1530 } 1543 1531 1532 + /* 1533 + * Some ECDTs contain wrong register addresses. 1534 + * MSI MS-171F 1535 + * https://bugzilla.kernel.org/show_bug.cgi?id=12461 1536 + */ 1544 1537 static int ec_correct_ecdt(const struct dmi_system_id *id) 1545 1538 { 1546 1539 pr_debug("Detected system needing ECDT address correction.\n"); ··· 1554 1537 } 1555 1538 1556 1539 static struct dmi_system_id ec_dmi_table[] __initdata = { 1557 - { 1558 - ec_correct_ecdt, "Asus L4R", { 1559 - DMI_MATCH(DMI_BIOS_VERSION, "1008.006"), 1560 - DMI_MATCH(DMI_PRODUCT_NAME, "L4R"), 1561 - DMI_MATCH(DMI_BOARD_NAME, "L4R") }, NULL}, 1562 - { 1563 - ec_correct_ecdt, "Asus M6R", { 1564 - DMI_MATCH(DMI_BIOS_VERSION, "0207"), 1565 - DMI_MATCH(DMI_PRODUCT_NAME, "M6R"), 1566 - DMI_MATCH(DMI_BOARD_NAME, "M6R") }, NULL}, 1567 1540 { 1568 1541 ec_correct_ecdt, "MSI MS-171F", { 1569 1542 DMI_MATCH(DMI_SYS_VENDOR, "Micro-Star"), ··· 1566 1559 1567 1560 int __init acpi_ec_ecdt_probe(void) 1568 1561 { 1569 - int ret = 0; 1562 + int ret; 1570 1563 acpi_status status; 1571 1564 struct acpi_table_ecdt *ecdt_ptr; 1565 + struct acpi_ec *ec; 1572 1566 1573 - boot_ec = make_acpi_ec(); 1574 - if (!boot_ec) 1567 + ec = acpi_ec_alloc(); 1568 + if (!ec) 1575 1569 return -ENOMEM; 1576 1570 /* 1577 
1571 * Generate a boot ec context ··· 1596 1588 1597 1589 pr_info("EC description table is found, configuring boot EC\n"); 1598 1590 if (EC_FLAGS_CORRECT_ECDT) { 1599 - /* 1600 - * Asus L4R, Asus M6R 1601 - * https://bugzilla.kernel.org/show_bug.cgi?id=9399 1602 - * MSI MS-171F 1603 - * https://bugzilla.kernel.org/show_bug.cgi?id=12461 1604 - */ 1605 - boot_ec->command_addr = ecdt_ptr->data.address; 1606 - boot_ec->data_addr = ecdt_ptr->control.address; 1591 + ec->command_addr = ecdt_ptr->data.address; 1592 + ec->data_addr = ecdt_ptr->control.address; 1607 1593 } else { 1608 - boot_ec->command_addr = ecdt_ptr->control.address; 1609 - boot_ec->data_addr = ecdt_ptr->data.address; 1594 + ec->command_addr = ecdt_ptr->control.address; 1595 + ec->data_addr = ecdt_ptr->data.address; 1610 1596 } 1611 - boot_ec->gpe = ecdt_ptr->gpe; 1612 - boot_ec->handle = ACPI_ROOT_OBJECT; 1613 - ret = ec_install_handlers(boot_ec); 1614 - if (!ret) 1615 - first_ec = boot_ec; 1597 + ec->gpe = ecdt_ptr->gpe; 1598 + ec->handle = ACPI_ROOT_OBJECT; 1599 + ret = ec_install_handlers(ec); 1616 1600 error: 1617 - if (ret) { 1618 - kfree(boot_ec); 1619 - boot_ec = NULL; 1620 - } 1601 + if (ret) 1602 + kfree(ec); 1603 + else 1604 + first_ec = boot_ec = ec; 1621 1605 return ret; 1622 1606 } 1623 1607
drivers/acpi/int340x_thermal.c drivers/acpi/dptf/int340x_thermal.c
+3
drivers/acpi/internal.h
··· 87 87 void acpi_device_hotplug(struct acpi_device *adev, u32 src); 88 88 bool acpi_scan_is_offline(struct acpi_device *adev, bool uevent); 89 89 90 + acpi_status acpi_sysfs_table_handler(u32 event, void *table, void *context); 91 + void acpi_scan_table_handler(u32 event, void *table, void *context); 92 + 90 93 /* -------------------------------------------------------------------------- 91 94 Device Node Initialization / Removal 92 95 -------------------------------------------------------------------------- */
+172 -54
drivers/acpi/numa.c
··· 18 18 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 19 19 * 20 20 */ 21 + 22 + #define pr_fmt(fmt) "ACPI: " fmt 23 + 21 24 #include <linux/module.h> 22 25 #include <linux/init.h> 23 26 #include <linux/kernel.h> 24 27 #include <linux/types.h> 25 28 #include <linux/errno.h> 26 29 #include <linux/acpi.h> 30 + #include <linux/bootmem.h> 31 + #include <linux/memblock.h> 27 32 #include <linux/numa.h> 28 33 #include <linux/nodemask.h> 29 34 #include <linux/topology.h> 30 - 31 - #define PREFIX "ACPI: " 32 - 33 - #define ACPI_NUMA 0x80000000 34 - #define _COMPONENT ACPI_NUMA 35 - ACPI_MODULE_NAME("numa"); 36 35 37 36 static nodemask_t nodes_found_map = NODE_MASK_NONE; 38 37 ··· 42 43 = { [0 ... MAX_NUMNODES - 1] = PXM_INVAL }; 43 44 44 45 unsigned char acpi_srat_revision __initdata; 46 + int acpi_numa __initdata; 45 47 46 48 int pxm_to_node(int pxm) 47 49 { ··· 128 128 static void __init 129 129 acpi_table_print_srat_entry(struct acpi_subtable_header *header) 130 130 { 131 - 132 - ACPI_FUNCTION_NAME("acpi_table_print_srat_entry"); 133 - 134 - if (!header) 135 - return; 136 - 137 131 switch (header->type) { 138 - 139 132 case ACPI_SRAT_TYPE_CPU_AFFINITY: 140 - #ifdef ACPI_DEBUG_OUTPUT 141 133 { 142 134 struct acpi_srat_cpu_affinity *p = 143 135 (struct acpi_srat_cpu_affinity *)header; 144 - ACPI_DEBUG_PRINT((ACPI_DB_INFO, 145 - "SRAT Processor (id[0x%02x] eid[0x%02x]) in proximity domain %d %s\n", 146 - p->apic_id, p->local_sapic_eid, 147 - p->proximity_domain_lo, 148 - (p->flags & ACPI_SRAT_CPU_ENABLED)? 149 - "enabled" : "disabled")); 136 + pr_debug("SRAT Processor (id[0x%02x] eid[0x%02x]) in proximity domain %d %s\n", 137 + p->apic_id, p->local_sapic_eid, 138 + p->proximity_domain_lo, 139 + (p->flags & ACPI_SRAT_CPU_ENABLED) ? 
140 + "enabled" : "disabled"); 150 141 } 151 - #endif /* ACPI_DEBUG_OUTPUT */ 152 142 break; 153 143 154 144 case ACPI_SRAT_TYPE_MEMORY_AFFINITY: 155 - #ifdef ACPI_DEBUG_OUTPUT 156 145 { 157 146 struct acpi_srat_mem_affinity *p = 158 147 (struct acpi_srat_mem_affinity *)header; 159 - ACPI_DEBUG_PRINT((ACPI_DB_INFO, 160 - "SRAT Memory (0x%lx length 0x%lx) in proximity domain %d %s%s%s\n", 161 - (unsigned long)p->base_address, 162 - (unsigned long)p->length, 163 - p->proximity_domain, 164 - (p->flags & ACPI_SRAT_MEM_ENABLED)? 165 - "enabled" : "disabled", 166 - (p->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE)? 167 - " hot-pluggable" : "", 168 - (p->flags & ACPI_SRAT_MEM_NON_VOLATILE)? 169 - " non-volatile" : "")); 148 + pr_debug("SRAT Memory (0x%lx length 0x%lx) in proximity domain %d %s%s%s\n", 149 + (unsigned long)p->base_address, 150 + (unsigned long)p->length, 151 + p->proximity_domain, 152 + (p->flags & ACPI_SRAT_MEM_ENABLED) ? 153 + "enabled" : "disabled", 154 + (p->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE) ? 155 + " hot-pluggable" : "", 156 + (p->flags & ACPI_SRAT_MEM_NON_VOLATILE) ? 157 + " non-volatile" : ""); 170 158 } 171 - #endif /* ACPI_DEBUG_OUTPUT */ 172 159 break; 173 160 174 161 case ACPI_SRAT_TYPE_X2APIC_CPU_AFFINITY: 175 - #ifdef ACPI_DEBUG_OUTPUT 176 162 { 177 163 struct acpi_srat_x2apic_cpu_affinity *p = 178 164 (struct acpi_srat_x2apic_cpu_affinity *)header; 179 - ACPI_DEBUG_PRINT((ACPI_DB_INFO, 180 - "SRAT Processor (x2apicid[0x%08x]) in" 181 - " proximity domain %d %s\n", 182 - p->apic_id, 183 - p->proximity_domain, 184 - (p->flags & ACPI_SRAT_CPU_ENABLED) ? 185 - "enabled" : "disabled")); 165 + pr_debug("SRAT Processor (x2apicid[0x%08x]) in proximity domain %d %s\n", 166 + p->apic_id, 167 + p->proximity_domain, 168 + (p->flags & ACPI_SRAT_CPU_ENABLED) ? 
169 + "enabled" : "disabled"); 186 170 } 187 - #endif /* ACPI_DEBUG_OUTPUT */ 188 171 break; 172 + 173 + case ACPI_SRAT_TYPE_GICC_AFFINITY: 174 + { 175 + struct acpi_srat_gicc_affinity *p = 176 + (struct acpi_srat_gicc_affinity *)header; 177 + pr_debug("SRAT Processor (acpi id[0x%04x]) in proximity domain %d %s\n", 178 + p->acpi_processor_uid, 179 + p->proximity_domain, 180 + (p->flags & ACPI_SRAT_GICC_ENABLED) ? 181 + "enabled" : "disabled"); 182 + } 183 + break; 184 + 189 185 default: 190 - printk(KERN_WARNING PREFIX 191 - "Found unsupported SRAT entry (type = 0x%x)\n", 192 - header->type); 186 + pr_warn("Found unsupported SRAT entry (type = 0x%x)\n", 187 + header->type); 193 188 break; 194 189 } 195 190 } ··· 212 217 return 1; 213 218 } 214 219 220 + void __init bad_srat(void) 221 + { 222 + pr_err("SRAT: SRAT not used.\n"); 223 + acpi_numa = -1; 224 + } 225 + 226 + int __init srat_disabled(void) 227 + { 228 + return acpi_numa < 0; 229 + } 230 + 231 + #if defined(CONFIG_X86) || defined(CONFIG_ARM64) 232 + /* 233 + * Callback for SLIT parsing. pxm_to_node() returns NUMA_NO_NODE for 234 + * I/O localities since SRAT does not list them. I/O localities are 235 + * not supported at this point. 
236 + */ 237 + void __init acpi_numa_slit_init(struct acpi_table_slit *slit) 238 + { 239 + int i, j; 240 + 241 + for (i = 0; i < slit->locality_count; i++) { 242 + const int from_node = pxm_to_node(i); 243 + 244 + if (from_node == NUMA_NO_NODE) 245 + continue; 246 + 247 + for (j = 0; j < slit->locality_count; j++) { 248 + const int to_node = pxm_to_node(j); 249 + 250 + if (to_node == NUMA_NO_NODE) 251 + continue; 252 + 253 + numa_set_distance(from_node, to_node, 254 + slit->entry[slit->locality_count * i + j]); 255 + } 256 + } 257 + } 258 + 259 + /* 260 + * Default callback for parsing of the Proximity Domain <-> Memory 261 + * Area mappings 262 + */ 263 + int __init 264 + acpi_numa_memory_affinity_init(struct acpi_srat_mem_affinity *ma) 265 + { 266 + u64 start, end; 267 + u32 hotpluggable; 268 + int node, pxm; 269 + 270 + if (srat_disabled()) 271 + goto out_err; 272 + if (ma->header.length < sizeof(struct acpi_srat_mem_affinity)) { 273 + pr_err("SRAT: Unexpected header length: %d\n", 274 + ma->header.length); 275 + goto out_err_bad_srat; 276 + } 277 + if ((ma->flags & ACPI_SRAT_MEM_ENABLED) == 0) 278 + goto out_err; 279 + hotpluggable = ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE; 280 + if (hotpluggable && !IS_ENABLED(CONFIG_MEMORY_HOTPLUG)) 281 + goto out_err; 282 + 283 + start = ma->base_address; 284 + end = start + ma->length; 285 + pxm = ma->proximity_domain; 286 + if (acpi_srat_revision <= 1) 287 + pxm &= 0xff; 288 + 289 + node = acpi_map_pxm_to_node(pxm); 290 + if (node == NUMA_NO_NODE || node >= MAX_NUMNODES) { 291 + pr_err("SRAT: Too many proximity domains.\n"); 292 + goto out_err_bad_srat; 293 + } 294 + 295 + if (numa_add_memblk(node, start, end) < 0) { 296 + pr_err("SRAT: Failed to add memblk to node %u [mem %#010Lx-%#010Lx]\n", 297 + node, (unsigned long long) start, 298 + (unsigned long long) end - 1); 299 + goto out_err_bad_srat; 300 + } 301 + 302 + node_set(node, numa_nodes_parsed); 303 + 304 + pr_info("SRAT: Node %u PXM %u [mem %#010Lx-%#010Lx]%s%s\n", 
305 + node, pxm, 306 + (unsigned long long) start, (unsigned long long) end - 1, 307 + hotpluggable ? " hotplug" : "", 308 + ma->flags & ACPI_SRAT_MEM_NON_VOLATILE ? " non-volatile" : ""); 309 + 310 + /* Mark hotplug range in memblock. */ 311 + if (hotpluggable && memblock_mark_hotplug(start, ma->length)) 312 + pr_warn("SRAT: Failed to mark hotplug range [mem %#010Lx-%#010Lx] in memblock\n", 313 + (unsigned long long)start, (unsigned long long)end - 1); 314 + 315 + max_possible_pfn = max(max_possible_pfn, PFN_UP(end - 1)); 316 + 317 + return 0; 318 + out_err_bad_srat: 319 + bad_srat(); 320 + out_err: 321 + return -EINVAL; 322 + } 323 + #endif /* defined(CONFIG_X86) || defined (CONFIG_ARM64) */ 324 + 215 325 static int __init acpi_parse_slit(struct acpi_table_header *table) 216 326 { 217 327 struct acpi_table_slit *slit = (struct acpi_table_slit *)table; 218 328 219 329 if (!slit_valid(slit)) { 220 - printk(KERN_INFO "ACPI: SLIT table looks invalid. Not used.\n"); 330 + pr_info("SLIT table looks invalid. 
Not used.\n"); 221 331 return -EINVAL; 222 332 } 223 333 acpi_numa_slit_init(slit); ··· 333 233 void __init __weak 334 234 acpi_numa_x2apic_affinity_init(struct acpi_srat_x2apic_cpu_affinity *pa) 335 235 { 336 - printk(KERN_WARNING PREFIX 337 - "Found unsupported x2apic [0x%08x] SRAT entry\n", pa->apic_id); 338 - return; 236 + pr_warn("Found unsupported x2apic [0x%08x] SRAT entry\n", pa->apic_id); 339 237 } 340 - 341 238 342 239 static int __init 343 240 acpi_parse_x2apic_affinity(struct acpi_subtable_header *header, ··· 368 271 369 272 /* let architecture-dependent part to do it */ 370 273 acpi_numa_processor_affinity_init(processor_affinity); 274 + 275 + return 0; 276 + } 277 + 278 + static int __init 279 + acpi_parse_gicc_affinity(struct acpi_subtable_header *header, 280 + const unsigned long end) 281 + { 282 + struct acpi_srat_gicc_affinity *processor_affinity; 283 + 284 + processor_affinity = (struct acpi_srat_gicc_affinity *)header; 285 + if (!processor_affinity) 286 + return -EINVAL; 287 + 288 + acpi_table_print_srat_entry(header); 289 + 290 + /* let architecture-dependent part to do it */ 291 + acpi_numa_gicc_affinity_init(processor_affinity); 371 292 372 293 return 0; 373 294 } ··· 434 319 { 435 320 int cnt = 0; 436 321 322 + if (acpi_disabled) 323 + return -EINVAL; 324 + 437 325 /* 438 326 * Should not limit number with cpu num that is from NR_CPUS or nr_cpus= 439 327 * SRAT cpu entries could have different order with that in MADT. 
··· 445 327 446 328 /* SRAT: Static Resource Affinity Table */ 447 329 if (!acpi_table_parse(ACPI_SIG_SRAT, acpi_parse_srat)) { 448 - struct acpi_subtable_proc srat_proc[2]; 330 + struct acpi_subtable_proc srat_proc[3]; 449 331 450 332 memset(srat_proc, 0, sizeof(srat_proc)); 451 333 srat_proc[0].id = ACPI_SRAT_TYPE_CPU_AFFINITY; 452 334 srat_proc[0].handler = acpi_parse_processor_affinity; 453 335 srat_proc[1].id = ACPI_SRAT_TYPE_X2APIC_CPU_AFFINITY; 454 336 srat_proc[1].handler = acpi_parse_x2apic_affinity; 337 + srat_proc[2].id = ACPI_SRAT_TYPE_GICC_AFFINITY; 338 + srat_proc[2].handler = acpi_parse_gicc_affinity; 455 339 456 340 acpi_table_parse_entries_array(ACPI_SIG_SRAT, 457 341 sizeof(struct acpi_table_srat), ··· 466 346 467 347 /* SLIT: System Locality Information Table */ 468 348 acpi_table_parse(ACPI_SIG_SLIT, acpi_parse_slit); 469 - 470 - acpi_numa_arch_fixup(); 471 349 472 350 if (cnt < 0) 473 351 return cnt;
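The hunk above grows `srat_proc[]` from two entries to three so the same SRAT walk that already dispatched CPU and x2APIC affinity subtables also dispatches ARM64 GICC affinity entries. As a rough user-space sketch (all types and names here are invented, not the kernel's), the dispatch pattern looks like this: walk variable-length subtables and hand each one to the handler whose id matches its type, so supporting a new subtable kind is just one more slot in the array.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the ACPI subtable header and the
 * acpi_subtable_proc dispatch array used by the SRAT parser. */
struct subtable_header { int type; size_t length; };

struct subtable_proc {
	int id;
	int (*handler)(struct subtable_header *h);
};

/* Walk the buffer of variable-length subtables; for each one, call
 * the first handler whose id matches the subtable type.  Returns the
 * number of entries successfully handled. */
static int parse_entries(void *buf, size_t len,
			 struct subtable_proc *proc, int nproc)
{
	struct subtable_header *h = buf;
	size_t off = 0;
	int i, total = 0;

	while (off + sizeof(*h) <= len && h->length) {
		for (i = 0; i < nproc; i++) {
			if (proc[i].id != h->type)
				continue;
			if (!proc[i].handler(h))
				total++;
			break;
		}
		off += h->length;
		h = (struct subtable_header *)((char *)buf + off);
	}
	return total;
}

static int count_ok(struct subtable_header *h) { (void)h; return 0; }

/* Three subtables: two known types (0 and 2), one unknown (5). */
static int demo(void)
{
	struct subtable_header entries[3] = {
		{ 0, sizeof(entries[0]) },	/* e.g. CPU affinity  */
		{ 2, sizeof(entries[0]) },	/* e.g. GICC affinity */
		{ 5, sizeof(entries[0]) },	/* unknown: skipped   */
	};
	struct subtable_proc proc[2] = {
		{ .id = 0, .handler = count_ok },
		{ .id = 2, .handler = count_ok },
	};
	return parse_entries(entries, sizeof(entries), proc, 2);
}
```

Unknown subtable types fall through untouched, which is why adding the GICC handler cannot break existing x86 SRAT parsing.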
+13 -30
drivers/acpi/pci_slot.c
··· 22 22 * General Public License for more details. 23 23 */ 24 24 25 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 26 + 25 27 #include <linux/kernel.h> 26 - #include <linux/module.h> 27 28 #include <linux/init.h> 28 29 #include <linux/slab.h> 29 30 #include <linux/types.h> ··· 34 33 #include <linux/dmi.h> 35 34 #include <linux/pci-acpi.h> 36 35 37 - static bool debug; 38 36 static int check_sta_before_sun; 39 - 40 - #define DRIVER_VERSION "0.1" 41 - #define DRIVER_AUTHOR "Alex Chiang <achiang@hp.com>" 42 - #define DRIVER_DESC "ACPI PCI Slot Detection Driver" 43 - MODULE_AUTHOR(DRIVER_AUTHOR); 44 - MODULE_DESCRIPTION(DRIVER_DESC); 45 - MODULE_LICENSE("GPL"); 46 - MODULE_PARM_DESC(debug, "Debugging mode enabled or not"); 47 - module_param(debug, bool, 0644); 48 37 49 38 #define _COMPONENT ACPI_PCI_COMPONENT 50 39 ACPI_MODULE_NAME("pci_slot"); 51 - 52 - #define MY_NAME "pci_slot" 53 - #define err(format, arg...) pr_err("%s: " format , MY_NAME , ## arg) 54 - #define info(format, arg...) pr_info("%s: " format , MY_NAME , ## arg) 55 - #define dbg(format, arg...) 
\ 56 - do { \ 57 - if (debug) \ 58 - pr_debug("%s: " format, MY_NAME , ## arg); \ 59 - } while (0) 60 40 61 41 #define SLOT_NAME_SIZE 21 /* Inspired by #define in acpiphp.h */ 62 42 ··· 58 76 struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; 59 77 60 78 acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer); 61 - dbg("Checking slot on path: %s\n", (char *)buffer.pointer); 79 + pr_debug("Checking slot on path: %s\n", (char *)buffer.pointer); 62 80 63 81 if (check_sta_before_sun) { 64 82 /* If SxFy doesn't have _STA, we just assume it's there */ ··· 69 87 70 88 status = acpi_evaluate_integer(handle, "_ADR", NULL, &adr); 71 89 if (ACPI_FAILURE(status)) { 72 - dbg("_ADR returned %d on %s\n", status, (char *)buffer.pointer); 90 + pr_debug("_ADR returned %d on %s\n", 91 + status, (char *)buffer.pointer); 73 92 goto out; 74 93 } 75 94 76 95 /* No _SUN == not a slot == bail */ 77 96 status = acpi_evaluate_integer(handle, "_SUN", NULL, sun); 78 97 if (ACPI_FAILURE(status)) { 79 - dbg("_SUN returned %d on %s\n", status, (char *)buffer.pointer); 98 + pr_debug("_SUN returned %d on %s\n", 99 + status, (char *)buffer.pointer); 80 100 goto out; 81 101 } 82 102 ··· 116 132 } 117 133 118 134 slot = kmalloc(sizeof(*slot), GFP_KERNEL); 119 - if (!slot) { 120 - err("%s: cannot allocate memory\n", __func__); 135 + if (!slot) 121 136 return AE_OK; 122 - } 123 137 124 138 snprintf(name, sizeof(name), "%llu", sun); 125 139 pci_slot = pci_create_slot(pci_bus, device, name, NULL); 126 140 if (IS_ERR(pci_slot)) { 127 - err("pci_create_slot returned %ld\n", PTR_ERR(pci_slot)); 141 + pr_err("pci_create_slot returned %ld\n", PTR_ERR(pci_slot)); 128 142 kfree(slot); 129 143 return AE_OK; 130 144 } ··· 132 150 133 151 get_device(&pci_bus->dev); 134 152 135 - dbg("pci_slot: %p, pci_bus: %x, device: %d, name: %s\n", 136 - pci_slot, pci_bus->number, device, name); 153 + pr_debug("%p, pci_bus: %x, device: %d, name: %s\n", 154 + pci_slot, pci_bus->number, device, name); 137 155 138 156 return 
AE_OK; 139 157 } ··· 168 186 169 187 static int do_sta_before_sun(const struct dmi_system_id *d) 170 188 { 171 - info("%s detected: will evaluate _STA before calling _SUN\n", d->ident); 189 + pr_info("%s detected: will evaluate _STA before calling _SUN\n", 190 + d->ident); 172 191 check_sta_before_sun = 1; 173 192 return 0; 174 193 }
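The pci_slot.c hunks replace the driver's homegrown `err()`/`info()`/`dbg()` macros with the standard `pr_err()`/`pr_info()`/`pr_debug()` helpers, with `#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt` supplying the module-name prefix centrally. A minimal user-space sketch of that mechanism (with `KBUILD_MODNAME` faked here; in the kernel the build system defines it per object, and output is captured in a buffer instead of going to printk so it is testable):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define KBUILD_MODNAME "pci_slot"

/* Must be defined before the pr_*() macros are used; every format
 * string is rewritten at compile time to carry the prefix. */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

static char log_buf[128];
#define pr_info(fmt, ...) \
	snprintf(log_buf, sizeof(log_buf), pr_fmt(fmt), ##__VA_ARGS__)

static const char *demo(void)
{
	pr_info("%s detected\n", "Some DMI board");
	return log_buf;
}
```

Because the prefix is applied by string-literal concatenation inside `pr_fmt()`, every call site sheds its hand-written `"%s: " ... MY_NAME` boilerplate, which is most of the `-30` lines in this file.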
+74 -10
drivers/acpi/pmic/intel_pmic.c
··· 13 13 * GNU General Public License for more details. 14 14 */ 15 15 16 - #include <linux/module.h> 16 + #include <linux/export.h> 17 17 #include <linux/acpi.h> 18 18 #include <linux/regmap.h> 19 19 #include <acpi/acpi_lpat.h> ··· 21 21 22 22 #define PMIC_POWER_OPREGION_ID 0x8d 23 23 #define PMIC_THERMAL_OPREGION_ID 0x8c 24 + #define PMIC_REGS_OPREGION_ID 0x8f 25 + 26 + struct intel_pmic_regs_handler_ctx { 27 + unsigned int val; 28 + u16 addr; 29 + }; 24 30 25 31 struct intel_pmic_opregion { 26 32 struct mutex lock; 27 33 struct acpi_lpat_conversion_table *lpat_table; 28 34 struct regmap *regmap; 29 35 struct intel_pmic_opregion_data *data; 36 + struct intel_pmic_regs_handler_ctx ctx; 30 37 }; 31 38 32 39 static int pmic_get_reg_bit(int address, struct pmic_table *table, ··· 138 131 } 139 132 140 133 static int pmic_thermal_pen(struct intel_pmic_opregion *opregion, int reg, 141 - u32 function, u64 *value) 134 + int bit, u32 function, u64 *value) 142 135 { 143 136 struct intel_pmic_opregion_data *d = opregion->data; 144 137 struct regmap *regmap = opregion->regmap; ··· 147 140 return -ENXIO; 148 141 149 142 if (function == ACPI_READ) 150 - return d->get_policy(regmap, reg, value); 143 + return d->get_policy(regmap, reg, bit, value); 151 144 152 145 if (*value != 0 && *value != 1) 153 146 return -EINVAL; 154 147 155 - return d->update_policy(regmap, reg, *value); 148 + return d->update_policy(regmap, reg, bit, *value); 156 149 } 157 150 158 151 static bool pmic_thermal_is_temp(int address) ··· 177 170 { 178 171 struct intel_pmic_opregion *opregion = region_context; 179 172 struct intel_pmic_opregion_data *d = opregion->data; 180 - int reg, result; 173 + int reg, bit, result; 181 174 182 175 if (bits != 32 || !value64) 183 176 return AE_BAD_PARAMETER; 184 177 185 178 result = pmic_get_reg_bit(address, d->thermal_table, 186 - d->thermal_table_count, &reg, NULL); 179 + d->thermal_table_count, &reg, &bit); 187 180 if (result == -ENOENT) 188 181 return 
AE_BAD_PARAMETER; 189 182 ··· 194 187 else if (pmic_thermal_is_aux(address)) 195 188 result = pmic_thermal_aux(opregion, reg, function, value64); 196 189 else if (pmic_thermal_is_pen(address)) 197 - result = pmic_thermal_pen(opregion, reg, function, value64); 190 + result = pmic_thermal_pen(opregion, reg, bit, 191 + function, value64); 198 192 else 199 193 result = -EINVAL; 200 194 201 195 mutex_unlock(&opregion->lock); 196 + 197 + if (result < 0) { 198 + if (result == -EINVAL) 199 + return AE_BAD_PARAMETER; 200 + else 201 + return AE_ERROR; 202 + } 203 + 204 + return AE_OK; 205 + } 206 + 207 + static acpi_status intel_pmic_regs_handler(u32 function, 208 + acpi_physical_address address, u32 bits, u64 *value64, 209 + void *handler_context, void *region_context) 210 + { 211 + struct intel_pmic_opregion *opregion = region_context; 212 + int result = 0; 213 + 214 + switch (address) { 215 + case 0: 216 + return AE_OK; 217 + case 1: 218 + opregion->ctx.addr |= (*value64 & 0xff) << 8; 219 + return AE_OK; 220 + case 2: 221 + opregion->ctx.addr |= *value64 & 0xff; 222 + return AE_OK; 223 + case 3: 224 + opregion->ctx.val = *value64 & 0xff; 225 + return AE_OK; 226 + case 4: 227 + if (*value64) { 228 + result = regmap_write(opregion->regmap, opregion->ctx.addr, 229 + opregion->ctx.val); 230 + } else { 231 + result = regmap_read(opregion->regmap, opregion->ctx.addr, 232 + &opregion->ctx.val); 233 + if (result == 0) 234 + *value64 = opregion->ctx.val; 235 + } 236 + memset(&opregion->ctx, 0x00, sizeof(opregion->ctx)); 237 + } 202 238 203 239 if (result < 0) { 204 240 if (result == -EINVAL) ··· 292 242 acpi_remove_address_space_handler(handle, PMIC_POWER_OPREGION_ID, 293 243 intel_pmic_power_handler); 294 244 ret = -ENODEV; 295 - goto out_error; 245 + goto out_remove_power_handler; 246 + } 247 + 248 + status = acpi_install_address_space_handler(handle, 249 + PMIC_REGS_OPREGION_ID, intel_pmic_regs_handler, NULL, 250 + opregion); 251 + if (ACPI_FAILURE(status)) { 252 + ret = 
-ENODEV; 253 + goto out_remove_thermal_handler; 296 254 } 297 255 298 256 opregion->data = d; 299 257 return 0; 258 + 259 + out_remove_thermal_handler: 260 + acpi_remove_address_space_handler(handle, PMIC_THERMAL_OPREGION_ID, 261 + intel_pmic_thermal_handler); 262 + 263 + out_remove_power_handler: 264 + acpi_remove_address_space_handler(handle, PMIC_POWER_OPREGION_ID, 265 + intel_pmic_power_handler); 300 266 301 267 out_error: 302 268 acpi_lpat_free_conversion_table(opregion->lpat_table); 303 269 return ret; 304 270 } 305 271 EXPORT_SYMBOL_GPL(intel_pmic_install_opregion_handler); 306 - 307 - MODULE_LICENSE("GPL");
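The new `intel_pmic_regs_handler()` above implements a small byte-at-a-time protocol over virtual field offsets: AML writes the high address byte at offset 1, the low byte at offset 2, the value at offset 3, and then offset 4 commits the operation (nonzero triggers a regmap write, zero a read whose result is returned in `*value64`), after which the context is cleared. A user-space model of that state machine, with the regmap faked as a flat array and the ACPI plumbing stripped out:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

static uint8_t fake_regmap[0x10000];	/* stand-in for the real regmap */

struct regs_ctx { unsigned int val; uint16_t addr; };
static struct regs_ctx ctx;

static int regs_handler(unsigned int field, uint64_t *value64)
{
	switch (field) {
	case 1:					/* address high byte */
		ctx.addr |= (*value64 & 0xff) << 8;
		break;
	case 2:					/* address low byte */
		ctx.addr |= *value64 & 0xff;
		break;
	case 3:					/* value to write */
		ctx.val = *value64 & 0xff;
		break;
	case 4:					/* commit */
		if (*value64)
			fake_regmap[ctx.addr] = ctx.val;
		else
			*value64 = fake_regmap[ctx.addr];
		memset(&ctx, 0, sizeof(ctx));	/* ready for next op */
		break;
	}
	return 0;
}

/* Write 0xab to register 0x4f39, then read it back. */
static uint64_t demo(void)
{
	uint64_t v;

	v = 0x4f; regs_handler(1, &v);
	v = 0x39; regs_handler(2, &v);
	v = 0xab; regs_handler(3, &v);
	v = 1;    regs_handler(4, &v);	/* commit: write */

	v = 0x4f; regs_handler(1, &v);
	v = 0x39; regs_handler(2, &v);
	v = 0;    regs_handler(4, &v);	/* commit: read */
	return v;
}
```

Clearing the context on every commit is what makes consecutive operations independent; the OR-based address assembly only works because `ctx.addr` starts from zero each time.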
+2 -2
drivers/acpi/pmic/intel_pmic.h
··· 12 12 int (*update_power)(struct regmap *r, int reg, int bit, bool on); 13 13 int (*get_raw_temp)(struct regmap *r, int reg); 14 14 int (*update_aux)(struct regmap *r, int reg, int raw_temp); 15 - int (*get_policy)(struct regmap *r, int reg, u64 *value); 16 - int (*update_policy)(struct regmap *r, int reg, int enable); 15 + int (*get_policy)(struct regmap *r, int reg, int bit, u64 *value); 16 + int (*update_policy)(struct regmap *r, int reg, int bit, int enable); 17 17 struct pmic_table *power_table; 18 18 int power_table_count; 19 19 struct pmic_table *thermal_table;
+420
drivers/acpi/pmic/intel_pmic_bxtwc.c
··· 1 + /* 2 + * intel_pmic_bxtwc.c - Intel BXT WhiskeyCove PMIC operation region driver 3 + * 4 + * Copyright (C) 2015 Intel Corporation. All rights reserved. 5 + * 6 + * This program is free software; you can redistribute it and/or 7 + * modify it under the terms of the GNU General Public License version 8 + * 2 as published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + */ 15 + 16 + #include <linux/init.h> 17 + #include <linux/acpi.h> 18 + #include <linux/mfd/intel_soc_pmic.h> 19 + #include <linux/regmap.h> 20 + #include <linux/platform_device.h> 21 + #include "intel_pmic.h" 22 + 23 + #define WHISKEY_COVE_ALRT_HIGH_BIT_MASK 0x0F 24 + #define WHISKEY_COVE_ADC_HIGH_BIT(x) (((x & 0x0F) << 8)) 25 + #define WHISKEY_COVE_ADC_CURSRC(x) (((x & 0xF0) >> 4)) 26 + #define VR_MODE_DISABLED 0 27 + #define VR_MODE_AUTO BIT(0) 28 + #define VR_MODE_NORMAL BIT(1) 29 + #define VR_MODE_SWITCH BIT(2) 30 + #define VR_MODE_ECO (BIT(0)|BIT(1)) 31 + #define VSWITCH2_OUTPUT BIT(5) 32 + #define VSWITCH1_OUTPUT BIT(4) 33 + #define VUSBPHY_CHARGE BIT(1) 34 + 35 + static struct pmic_table power_table[] = { 36 + { 37 + .address = 0x0, 38 + .reg = 0x63, 39 + .bit = VR_MODE_AUTO, 40 + }, /* VDD1 -> VDD1CNT */ 41 + { 42 + .address = 0x04, 43 + .reg = 0x65, 44 + .bit = VR_MODE_AUTO, 45 + }, /* VDD2 -> VDD2CNT */ 46 + { 47 + .address = 0x08, 48 + .reg = 0x67, 49 + .bit = VR_MODE_AUTO, 50 + }, /* VDD3 -> VDD3CNT */ 51 + { 52 + .address = 0x0c, 53 + .reg = 0x6d, 54 + .bit = VR_MODE_AUTO, 55 + }, /* VLFX -> VFLEXCNT */ 56 + { 57 + .address = 0x10, 58 + .reg = 0x6f, 59 + .bit = VR_MODE_NORMAL, 60 + }, /* VP1A -> VPROG1ACNT */ 61 + { 62 + .address = 0x14, 63 + .reg = 0x70, 64 + .bit = VR_MODE_NORMAL, 65 + }, /* VP1B -> VPROG1BCNT */ 66 + { 67 + 
.address = 0x18, 68 + .reg = 0x71, 69 + .bit = VR_MODE_NORMAL, 70 + }, /* VP1C -> VPROG1CCNT */ 71 + { 72 + .address = 0x1c, 73 + .reg = 0x72, 74 + .bit = VR_MODE_NORMAL, 75 + }, /* VP1D -> VPROG1DCNT */ 76 + { 77 + .address = 0x20, 78 + .reg = 0x73, 79 + .bit = VR_MODE_NORMAL, 80 + }, /* VP2A -> VPROG2ACNT */ 81 + { 82 + .address = 0x24, 83 + .reg = 0x74, 84 + .bit = VR_MODE_NORMAL, 85 + }, /* VP2B -> VPROG2BCNT */ 86 + { 87 + .address = 0x28, 88 + .reg = 0x75, 89 + .bit = VR_MODE_NORMAL, 90 + }, /* VP2C -> VPROG2CCNT */ 91 + { 92 + .address = 0x2c, 93 + .reg = 0x76, 94 + .bit = VR_MODE_NORMAL, 95 + }, /* VP3A -> VPROG3ACNT */ 96 + { 97 + .address = 0x30, 98 + .reg = 0x77, 99 + .bit = VR_MODE_NORMAL, 100 + }, /* VP3B -> VPROG3BCNT */ 101 + { 102 + .address = 0x34, 103 + .reg = 0x78, 104 + .bit = VSWITCH2_OUTPUT, 105 + }, /* VSW2 -> VLD0CNT Bit 5*/ 106 + { 107 + .address = 0x38, 108 + .reg = 0x78, 109 + .bit = VSWITCH1_OUTPUT, 110 + }, /* VSW1 -> VLD0CNT Bit 4 */ 111 + { 112 + .address = 0x3c, 113 + .reg = 0x78, 114 + .bit = VUSBPHY_CHARGE, 115 + }, /* VUPY -> VLDOCNT Bit 1 */ 116 + { 117 + .address = 0x40, 118 + .reg = 0x7b, 119 + .bit = VR_MODE_NORMAL, 120 + }, /* VRSO -> VREFSOCCNT*/ 121 + { 122 + .address = 0x44, 123 + .reg = 0xA0, 124 + .bit = VR_MODE_NORMAL, 125 + }, /* VP1E -> VPROG1ECNT */ 126 + { 127 + .address = 0x48, 128 + .reg = 0xA1, 129 + .bit = VR_MODE_NORMAL, 130 + }, /* VP1F -> VPROG1FCNT */ 131 + { 132 + .address = 0x4c, 133 + .reg = 0xA2, 134 + .bit = VR_MODE_NORMAL, 135 + }, /* VP2D -> VPROG2DCNT */ 136 + { 137 + .address = 0x50, 138 + .reg = 0xA3, 139 + .bit = VR_MODE_NORMAL, 140 + }, /* VP4A -> VPROG4ACNT */ 141 + { 142 + .address = 0x54, 143 + .reg = 0xA4, 144 + .bit = VR_MODE_NORMAL, 145 + }, /* VP4B -> VPROG4BCNT */ 146 + { 147 + .address = 0x58, 148 + .reg = 0xA5, 149 + .bit = VR_MODE_NORMAL, 150 + }, /* VP4C -> VPROG4CCNT */ 151 + { 152 + .address = 0x5c, 153 + .reg = 0xA6, 154 + .bit = VR_MODE_NORMAL, 155 + }, /* VP4D -> VPROG4DCNT */ 
156 + { 157 + .address = 0x60, 158 + .reg = 0xA7, 159 + .bit = VR_MODE_NORMAL, 160 + }, /* VP5A -> VPROG5ACNT */ 161 + { 162 + .address = 0x64, 163 + .reg = 0xA8, 164 + .bit = VR_MODE_NORMAL, 165 + }, /* VP5B -> VPROG5BCNT */ 166 + { 167 + .address = 0x68, 168 + .reg = 0xA9, 169 + .bit = VR_MODE_NORMAL, 170 + }, /* VP6A -> VPROG6ACNT */ 171 + { 172 + .address = 0x6c, 173 + .reg = 0xAA, 174 + .bit = VR_MODE_NORMAL, 175 + }, /* VP6B -> VPROG6BCNT */ 176 + { 177 + .address = 0x70, 178 + .reg = 0x36, 179 + .bit = BIT(2), 180 + }, /* SDWN_N -> MODEMCTRL Bit 2 */ 181 + { 182 + .address = 0x74, 183 + .reg = 0x36, 184 + .bit = BIT(0), 185 + } /* MOFF -> MODEMCTRL Bit 0 */ 186 + }; 187 + 188 + static struct pmic_table thermal_table[] = { 189 + { 190 + .address = 0x00, 191 + .reg = 0x4F39 192 + }, 193 + { 194 + .address = 0x04, 195 + .reg = 0x4F24 196 + }, 197 + { 198 + .address = 0x08, 199 + .reg = 0x4F26 200 + }, 201 + { 202 + .address = 0x0c, 203 + .reg = 0x4F3B 204 + }, 205 + { 206 + .address = 0x10, 207 + .reg = 0x4F28 208 + }, 209 + { 210 + .address = 0x14, 211 + .reg = 0x4F2A 212 + }, 213 + { 214 + .address = 0x18, 215 + .reg = 0x4F3D 216 + }, 217 + { 218 + .address = 0x1c, 219 + .reg = 0x4F2C 220 + }, 221 + { 222 + .address = 0x20, 223 + .reg = 0x4F2E 224 + }, 225 + { 226 + .address = 0x24, 227 + .reg = 0x4F3F 228 + }, 229 + { 230 + .address = 0x28, 231 + .reg = 0x4F30 232 + }, 233 + { 234 + .address = 0x30, 235 + .reg = 0x4F41 236 + }, 237 + { 238 + .address = 0x34, 239 + .reg = 0x4F32 240 + }, 241 + { 242 + .address = 0x3c, 243 + .reg = 0x4F43 244 + }, 245 + { 246 + .address = 0x40, 247 + .reg = 0x4F34 248 + }, 249 + { 250 + .address = 0x48, 251 + .reg = 0x4F6A, 252 + .bit = 0, 253 + }, 254 + { 255 + .address = 0x4C, 256 + .reg = 0x4F6A, 257 + .bit = 1 258 + }, 259 + { 260 + .address = 0x50, 261 + .reg = 0x4F6A, 262 + .bit = 2 263 + }, 264 + { 265 + .address = 0x54, 266 + .reg = 0x4F6A, 267 + .bit = 4 268 + }, 269 + { 270 + .address = 0x58, 271 + .reg = 0x4F6A, 272 
+ .bit = 5 273 + }, 274 + { 275 + .address = 0x5C, 276 + .reg = 0x4F6A, 277 + .bit = 3 278 + }, 279 + }; 280 + 281 + static int intel_bxtwc_pmic_get_power(struct regmap *regmap, int reg, 282 + int bit, u64 *value) 283 + { 284 + int data; 285 + 286 + if (regmap_read(regmap, reg, &data)) 287 + return -EIO; 288 + 289 + *value = (data & bit) ? 1 : 0; 290 + return 0; 291 + } 292 + 293 + static int intel_bxtwc_pmic_update_power(struct regmap *regmap, int reg, 294 + int bit, bool on) 295 + { 296 + u8 val, mask = bit; 297 + 298 + if (on) 299 + val = 0xFF; 300 + else 301 + val = 0x0; 302 + 303 + return regmap_update_bits(regmap, reg, mask, val); 304 + } 305 + 306 + static int intel_bxtwc_pmic_get_raw_temp(struct regmap *regmap, int reg) 307 + { 308 + unsigned int val, adc_val, reg_val; 309 + u8 temp_l, temp_h, cursrc; 310 + unsigned long rlsb; 311 + static const unsigned long rlsb_array[] = { 312 + 0, 260420, 130210, 65100, 32550, 16280, 313 + 8140, 4070, 2030, 0, 260420, 130210 }; 314 + 315 + if (regmap_read(regmap, reg, &val)) 316 + return -EIO; 317 + temp_l = (u8) val; 318 + 319 + if (regmap_read(regmap, (reg - 1), &val)) 320 + return -EIO; 321 + temp_h = (u8) val; 322 + 323 + reg_val = temp_l | WHISKEY_COVE_ADC_HIGH_BIT(temp_h); 324 + cursrc = WHISKEY_COVE_ADC_CURSRC(temp_h); 325 + rlsb = rlsb_array[cursrc]; 326 + adc_val = reg_val * rlsb / 1000; 327 + 328 + return adc_val; 329 + } 330 + 331 + static int 332 + intel_bxtwc_pmic_update_aux(struct regmap *regmap, int reg, int raw) 333 + { 334 + u32 bsr_num; 335 + u16 resi_val, count = 0, thrsh = 0; 336 + u8 alrt_h, alrt_l, cursel = 0; 337 + 338 + bsr_num = raw; 339 + bsr_num /= (1 << 5); 340 + 341 + count = fls(bsr_num) - 1; 342 + 343 + cursel = clamp_t(s8, (count - 7), 0, 7); 344 + thrsh = raw / (1 << (4 + cursel)); 345 + 346 + resi_val = (cursel << 9) | thrsh; 347 + alrt_h = (resi_val >> 8) & WHISKEY_COVE_ALRT_HIGH_BIT_MASK; 348 + if (regmap_update_bits(regmap, 349 + reg - 1, 350 + WHISKEY_COVE_ALRT_HIGH_BIT_MASK, 351 + 
alrt_h)) 352 + return -EIO; 353 + 354 + alrt_l = (u8)resi_val; 355 + return regmap_write(regmap, reg, alrt_l); 356 + } 357 + 358 + static int 359 + intel_bxtwc_pmic_get_policy(struct regmap *regmap, int reg, int bit, u64 *value) 360 + { 361 + u8 mask = BIT(bit); 362 + unsigned int val; 363 + 364 + if (regmap_read(regmap, reg, &val)) 365 + return -EIO; 366 + 367 + *value = (val & mask) >> bit; 368 + return 0; 369 + } 370 + 371 + static int 372 + intel_bxtwc_pmic_update_policy(struct regmap *regmap, 373 + int reg, int bit, int enable) 374 + { 375 + u8 mask = BIT(bit), val = enable << bit; 376 + 377 + return regmap_update_bits(regmap, reg, mask, val); 378 + } 379 + 380 + static struct intel_pmic_opregion_data intel_bxtwc_pmic_opregion_data = { 381 + .get_power = intel_bxtwc_pmic_get_power, 382 + .update_power = intel_bxtwc_pmic_update_power, 383 + .get_raw_temp = intel_bxtwc_pmic_get_raw_temp, 384 + .update_aux = intel_bxtwc_pmic_update_aux, 385 + .get_policy = intel_bxtwc_pmic_get_policy, 386 + .update_policy = intel_bxtwc_pmic_update_policy, 387 + .power_table = power_table, 388 + .power_table_count = ARRAY_SIZE(power_table), 389 + .thermal_table = thermal_table, 390 + .thermal_table_count = ARRAY_SIZE(thermal_table), 391 + }; 392 + 393 + static int intel_bxtwc_pmic_opregion_probe(struct platform_device *pdev) 394 + { 395 + struct intel_soc_pmic *pmic = dev_get_drvdata(pdev->dev.parent); 396 + 397 + return intel_pmic_install_opregion_handler(&pdev->dev, 398 + ACPI_HANDLE(pdev->dev.parent), 399 + pmic->regmap, 400 + &intel_bxtwc_pmic_opregion_data); 401 + } 402 + 403 + static struct platform_device_id bxt_wc_opregion_id_table[] = { 404 + { .name = "bxt_wcove_region" }, 405 + {}, 406 + }; 407 + 408 + static struct platform_driver intel_bxtwc_pmic_opregion_driver = { 409 + .probe = intel_bxtwc_pmic_opregion_probe, 410 + .driver = { 411 + .name = "bxt_whiskey_cove_pmic", 412 + }, 413 + .id_table = bxt_wc_opregion_id_table, 414 + }; 415 + 416 + static int __init 
intel_bxtwc_pmic_opregion_driver_init(void) 417 + { 418 + return platform_driver_register(&intel_bxtwc_pmic_opregion_driver); 419 + } 420 + device_initcall(intel_bxtwc_pmic_opregion_driver_init);
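In the new driver's `intel_bxtwc_pmic_get_raw_temp()`, the register one below `reg` packs two things: its top nibble selects the current source (an index into the `rlsb_array` resolution table) and its low nibble supplies ADC bits 11:8, which are combined with the low byte and scaled. That decode step, lifted out into a standalone sketch so the arithmetic is visible (the macros and table are copied from the diff; the `regmap_read()` calls are replaced by plain parameters):

```c
#include <assert.h>
#include <stdint.h>

#define WHISKEY_COVE_ADC_HIGH_BIT(x)	(((x) & 0x0F) << 8)
#define WHISKEY_COVE_ADC_CURSRC(x)	(((x) & 0xF0) >> 4)

/* Combine the low byte and the high register's low nibble into a
 * 12-bit ADC value, then scale by the resolution implied by the
 * current-source selector in the high register's top nibble. */
static unsigned int decode_adc(uint8_t temp_h, uint8_t temp_l)
{
	static const unsigned long rlsb_array[] = {
		0, 260420, 130210, 65100, 32550, 16280,
		8140, 4070, 2030, 0, 260420, 130210 };
	unsigned int reg_val = temp_l | WHISKEY_COVE_ADC_HIGH_BIT(temp_h);
	unsigned long rlsb = rlsb_array[WHISKEY_COVE_ADC_CURSRC(temp_h)];

	return reg_val * rlsb / 1000;
}
```

For example, `temp_h = 0x21` selects current source 2 (resolution 130210) and contributes `0x100` to the raw value.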
+3 -2
drivers/acpi/pmic/intel_pmic_crc.c
··· 141 141 regmap_update_bits(regmap, reg - 1, 0x3, raw >> 8) ? -EIO : 0; 142 142 } 143 143 144 - static int intel_crc_pmic_get_policy(struct regmap *regmap, int reg, u64 *value) 144 + static int intel_crc_pmic_get_policy(struct regmap *regmap, 145 + int reg, int bit, u64 *value) 145 146 { 146 147 int pen; 147 148 ··· 153 152 } 154 153 155 154 static int intel_crc_pmic_update_policy(struct regmap *regmap, 156 - int reg, int enable) 155 + int reg, int bit, int enable) 157 156 { 158 157 int alert0; 159 158
+2 -5
drivers/acpi/pmic/intel_pmic_xpower.c
··· 13 13 * GNU General Public License for more details. 14 14 */ 15 15 16 - #include <linux/module.h> 16 + #include <linux/init.h> 17 17 #include <linux/acpi.h> 18 18 #include <linux/mfd/axp20x.h> 19 19 #include <linux/regmap.h> ··· 262 262 { 263 263 return platform_driver_register(&intel_xpower_pmic_opregion_driver); 264 264 } 265 - module_init(intel_xpower_pmic_opregion_driver_init); 266 - 267 - MODULE_DESCRIPTION("XPower AXP288 ACPI operation region driver"); 268 - MODULE_LICENSE("GPL"); 265 + device_initcall(intel_xpower_pmic_opregion_driver_init);
+22 -4
drivers/acpi/processor_core.c
··· 108 108 return -EINVAL; 109 109 } 110 110 111 - static phys_cpuid_t map_madt_entry(int type, u32 acpi_id) 111 + static phys_cpuid_t map_madt_entry(struct acpi_table_madt *madt, 112 + int type, u32 acpi_id) 112 113 { 113 114 unsigned long madt_end, entry; 114 115 phys_cpuid_t phys_id = PHYS_CPUID_INVALID; /* CPU hardware ID */ 115 - struct acpi_table_madt *madt; 116 116 117 - madt = get_madt_table(); 118 117 if (!madt) 119 118 return phys_id; 120 119 ··· 142 143 entry += header->length; 143 144 } 144 145 return phys_id; 146 + } 147 + 148 + phys_cpuid_t __init acpi_map_madt_entry(u32 acpi_id) 149 + { 150 + struct acpi_table_madt *madt = NULL; 151 + acpi_size tbl_size; 152 + phys_cpuid_t rv; 153 + 154 + acpi_get_table_with_size(ACPI_SIG_MADT, 0, 155 + (struct acpi_table_header **)&madt, 156 + &tbl_size); 157 + if (!madt) 158 + return PHYS_CPUID_INVALID; 159 + 160 + rv = map_madt_entry(madt, 1, acpi_id); 161 + 162 + early_acpi_os_unmap_memory(madt, tbl_size); 163 + 164 + return rv; 145 165 } 146 166 147 167 static phys_cpuid_t map_mat_entry(acpi_handle handle, int type, u32 acpi_id) ··· 203 185 204 186 phys_id = map_mat_entry(handle, type, acpi_id); 205 187 if (invalid_phys_cpuid(phys_id)) 206 - phys_id = map_madt_entry(type, acpi_id); 188 + phys_id = map_madt_entry(get_madt_table(), type, acpi_id); 207 189 208 190 return phys_id; 209 191 }
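The processor_core.c change turns `map_madt_entry()` from a function that fetched the cached MADT itself into one that takes the table pointer as an argument, so the new early-boot `acpi_map_madt_entry()` (which must map and unmap the table by hand) and the normal path (which passes `get_madt_table()`) share one lookup. A loose user-space sketch of that refactoring pattern, with all types invented:

```c
#include <assert.h>
#include <stddef.h>

struct fake_madt_entry { unsigned int acpi_id; unsigned long phys_id; };
struct fake_madt { const struct fake_madt_entry *entries; size_t count; };

#define PHYS_CPUID_INVALID ((unsigned long)-1)

/* Shared core: operates on whatever table the caller hands in, so it
 * works equally for an early temporary mapping and a cached table. */
static unsigned long map_entry(const struct fake_madt *madt,
			       unsigned int acpi_id)
{
	size_t i;

	if (!madt)
		return PHYS_CPUID_INVALID;
	for (i = 0; i < madt->count; i++)
		if (madt->entries[i].acpi_id == acpi_id)
			return madt->entries[i].phys_id;
	return PHYS_CPUID_INVALID;
}

static const struct fake_madt_entry ents[] = { { 1, 0x100 }, { 2, 0x200 } };
static const struct fake_madt cached = { ents, 2 };
```

Keeping the NULL check inside the core means both callers can pass whatever their table-acquisition step produced without duplicating the failure handling.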
+1 -1
drivers/acpi/processor_driver.c
··· 90 90 pr->performance_platform_limit); 91 91 break; 92 92 case ACPI_PROCESSOR_NOTIFY_POWER: 93 - acpi_processor_cst_has_changed(pr); 93 + acpi_processor_power_state_has_changed(pr); 94 94 acpi_bus_generate_netlink_event(device->pnp.device_class, 95 95 dev_name(&device->dev), event, 0); 96 96 break;
+462 -80
drivers/acpi/processor_idle.c
··· 59 59 60 60 static DEFINE_PER_CPU(struct cpuidle_device *, acpi_cpuidle_device); 61 61 62 + struct cpuidle_driver acpi_idle_driver = { 63 + .name = "acpi_idle", 64 + .owner = THIS_MODULE, 65 + }; 66 + 67 + #ifdef CONFIG_ACPI_PROCESSOR_CSTATE 62 68 static 63 69 DEFINE_PER_CPU(struct acpi_processor_cx * [CPUIDLE_STATE_MAX], acpi_cstate); 64 70 ··· 302 296 int i, ret = 0; 303 297 struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; 304 298 union acpi_object *cst; 305 - 306 299 307 300 if (nocst) 308 301 return -ENODEV; ··· 575 570 return (working); 576 571 } 577 572 578 - static int acpi_processor_get_power_info(struct acpi_processor *pr) 573 + static int acpi_processor_get_cstate_info(struct acpi_processor *pr) 579 574 { 580 575 unsigned int i; 581 576 int result; ··· 809 804 acpi_idle_do_entry(cx); 810 805 } 811 806 812 - struct cpuidle_driver acpi_idle_driver = { 813 - .name = "acpi_idle", 814 - .owner = THIS_MODULE, 815 - }; 816 - 817 - /** 818 - * acpi_processor_setup_cpuidle_cx - prepares and configures CPUIDLE 819 - * device i.e. per-cpu data 820 - * 821 - * @pr: the ACPI processor 822 - * @dev : the cpuidle device 823 - */ 824 807 static int acpi_processor_setup_cpuidle_cx(struct acpi_processor *pr, 825 808 struct cpuidle_device *dev) 826 809 { 827 810 int i, count = CPUIDLE_DRIVER_STATE_START; 828 811 struct acpi_processor_cx *cx; 829 - 830 - if (!pr->flags.power_setup_done) 831 - return -EINVAL; 832 - 833 - if (pr->flags.power == 0) { 834 - return -EINVAL; 835 - } 836 - 837 - if (!dev) 838 - return -EINVAL; 839 - 840 - dev->cpu = pr->id; 841 812 842 813 if (max_cstate == 0) 843 814 max_cstate = 1; ··· 837 856 return 0; 838 857 } 839 858 840 - /** 841 - * acpi_processor_setup_cpuidle states- prepares and configures cpuidle 842 - * global state data i.e. 
idle routines 843 - * 844 - * @pr: the ACPI processor 845 - */ 846 - static int acpi_processor_setup_cpuidle_states(struct acpi_processor *pr) 859 + static int acpi_processor_setup_cstates(struct acpi_processor *pr) 847 860 { 848 861 int i, count = CPUIDLE_DRIVER_STATE_START; 849 862 struct acpi_processor_cx *cx; 850 863 struct cpuidle_state *state; 851 864 struct cpuidle_driver *drv = &acpi_idle_driver; 852 - 853 - if (!pr->flags.power_setup_done) 854 - return -EINVAL; 855 - 856 - if (pr->flags.power == 0) 857 - return -EINVAL; 858 - 859 - drv->safe_state_index = -1; 860 - for (i = CPUIDLE_DRIVER_STATE_START; i < CPUIDLE_STATE_MAX; i++) { 861 - drv->states[i].name[0] = '\0'; 862 - drv->states[i].desc[0] = '\0'; 863 - } 864 865 865 866 if (max_cstate == 0) 866 867 max_cstate = 1; ··· 855 892 856 893 state = &drv->states[count]; 857 894 snprintf(state->name, CPUIDLE_NAME_LEN, "C%d", i); 858 - strncpy(state->desc, cx->desc, CPUIDLE_DESC_LEN); 895 + strlcpy(state->desc, cx->desc, CPUIDLE_DESC_LEN); 859 896 state->exit_latency = cx->latency; 860 897 state->target_residency = cx->latency * latency_factor; 861 898 state->enter = acpi_idle_enter; ··· 888 925 return 0; 889 926 } 890 927 928 + static inline void acpi_processor_cstate_first_run_checks(void) 929 + { 930 + acpi_status status; 931 + static int first_run; 932 + 933 + if (first_run) 934 + return; 935 + dmi_check_system(processor_power_dmi_table); 936 + max_cstate = acpi_processor_cstate_check(max_cstate); 937 + if (max_cstate < ACPI_C_STATES_MAX) 938 + pr_notice("ACPI: processor limited to max C-state %d\n", 939 + max_cstate); 940 + first_run++; 941 + 942 + if (acpi_gbl_FADT.cst_control && !nocst) { 943 + status = acpi_os_write_port(acpi_gbl_FADT.smi_command, 944 + acpi_gbl_FADT.cst_control, 8); 945 + if (ACPI_FAILURE(status)) 946 + ACPI_EXCEPTION((AE_INFO, status, 947 + "Notifying BIOS of _CST ability failed")); 948 + } 949 + } 950 + #else 951 + 952 + static inline int disabled_by_idle_boot_param(void) { return 
0; } 953 + static inline void acpi_processor_cstate_first_run_checks(void) { } 954 + static int acpi_processor_get_cstate_info(struct acpi_processor *pr) 955 + { 956 + return -ENODEV; 957 + } 958 + 959 + static int acpi_processor_setup_cpuidle_cx(struct acpi_processor *pr, 960 + struct cpuidle_device *dev) 961 + { 962 + return -EINVAL; 963 + } 964 + 965 + static int acpi_processor_setup_cstates(struct acpi_processor *pr) 966 + { 967 + return -EINVAL; 968 + } 969 + 970 + #endif /* CONFIG_ACPI_PROCESSOR_CSTATE */ 971 + 972 + struct acpi_lpi_states_array { 973 + unsigned int size; 974 + unsigned int composite_states_size; 975 + struct acpi_lpi_state *entries; 976 + struct acpi_lpi_state *composite_states[ACPI_PROCESSOR_MAX_POWER]; 977 + }; 978 + 979 + static int obj_get_integer(union acpi_object *obj, u32 *value) 980 + { 981 + if (obj->type != ACPI_TYPE_INTEGER) 982 + return -EINVAL; 983 + 984 + *value = obj->integer.value; 985 + return 0; 986 + } 987 + 988 + static int acpi_processor_evaluate_lpi(acpi_handle handle, 989 + struct acpi_lpi_states_array *info) 990 + { 991 + acpi_status status; 992 + int ret = 0; 993 + int pkg_count, state_idx = 1, loop; 994 + struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; 995 + union acpi_object *lpi_data; 996 + struct acpi_lpi_state *lpi_state; 997 + 998 + status = acpi_evaluate_object(handle, "_LPI", NULL, &buffer); 999 + if (ACPI_FAILURE(status)) { 1000 + ACPI_DEBUG_PRINT((ACPI_DB_INFO, "No _LPI, giving up\n")); 1001 + return -ENODEV; 1002 + } 1003 + 1004 + lpi_data = buffer.pointer; 1005 + 1006 + /* There must be at least 4 elements = 3 elements + 1 package */ 1007 + if (!lpi_data || lpi_data->type != ACPI_TYPE_PACKAGE || 1008 + lpi_data->package.count < 4) { 1009 + pr_debug("not enough elements in _LPI\n"); 1010 + ret = -ENODATA; 1011 + goto end; 1012 + } 1013 + 1014 + pkg_count = lpi_data->package.elements[2].integer.value; 1015 + 1016 + /* Validate number of power states. 
*/ 1017 + if (pkg_count < 1 || pkg_count != lpi_data->package.count - 3) { 1018 + pr_debug("count given by _LPI is not valid\n"); 1019 + ret = -ENODATA; 1020 + goto end; 1021 + } 1022 + 1023 + lpi_state = kcalloc(pkg_count, sizeof(*lpi_state), GFP_KERNEL); 1024 + if (!lpi_state) { 1025 + ret = -ENOMEM; 1026 + goto end; 1027 + } 1028 + 1029 + info->size = pkg_count; 1030 + info->entries = lpi_state; 1031 + 1032 + /* LPI States start at index 3 */ 1033 + for (loop = 3; state_idx <= pkg_count; loop++, state_idx++, lpi_state++) { 1034 + union acpi_object *element, *pkg_elem, *obj; 1035 + 1036 + element = &lpi_data->package.elements[loop]; 1037 + if (element->type != ACPI_TYPE_PACKAGE || element->package.count < 7) 1038 + continue; 1039 + 1040 + pkg_elem = element->package.elements; 1041 + 1042 + obj = pkg_elem + 6; 1043 + if (obj->type == ACPI_TYPE_BUFFER) { 1044 + struct acpi_power_register *reg; 1045 + 1046 + reg = (struct acpi_power_register *)obj->buffer.pointer; 1047 + if (reg->space_id != ACPI_ADR_SPACE_SYSTEM_IO && 1048 + reg->space_id != ACPI_ADR_SPACE_FIXED_HARDWARE) 1049 + continue; 1050 + 1051 + lpi_state->address = reg->address; 1052 + lpi_state->entry_method = 1053 + reg->space_id == ACPI_ADR_SPACE_FIXED_HARDWARE ? 1054 + ACPI_CSTATE_FFH : ACPI_CSTATE_SYSTEMIO; 1055 + } else if (obj->type == ACPI_TYPE_INTEGER) { 1056 + lpi_state->entry_method = ACPI_CSTATE_INTEGER; 1057 + lpi_state->address = obj->integer.value; 1058 + } else { 1059 + continue; 1060 + } 1061 + 1062 + /* elements[7,8] skipped for now i.e. Residency/Usage counter*/ 1063 + 1064 + obj = pkg_elem + 9; 1065 + if (obj->type == ACPI_TYPE_STRING) 1066 + strlcpy(lpi_state->desc, obj->string.pointer, 1067 + ACPI_CX_DESC_LEN); 1068 + 1069 + lpi_state->index = state_idx; 1070 + if (obj_get_integer(pkg_elem + 0, &lpi_state->min_residency)) { 1071 + pr_debug("No min. 
residency found, assuming 10 us\n"); 1072 + lpi_state->min_residency = 10; 1073 + } 1074 + 1075 + if (obj_get_integer(pkg_elem + 1, &lpi_state->wake_latency)) { 1076 + pr_debug("No wakeup residency found, assuming 10 us\n"); 1077 + lpi_state->wake_latency = 10; 1078 + } 1079 + 1080 + if (obj_get_integer(pkg_elem + 2, &lpi_state->flags)) 1081 + lpi_state->flags = 0; 1082 + 1083 + if (obj_get_integer(pkg_elem + 3, &lpi_state->arch_flags)) 1084 + lpi_state->arch_flags = 0; 1085 + 1086 + if (obj_get_integer(pkg_elem + 4, &lpi_state->res_cnt_freq)) 1087 + lpi_state->res_cnt_freq = 1; 1088 + 1089 + if (obj_get_integer(pkg_elem + 5, &lpi_state->enable_parent_state)) 1090 + lpi_state->enable_parent_state = 0; 1091 + } 1092 + 1093 + acpi_handle_debug(handle, "Found %d power states\n", state_idx); 1094 + end: 1095 + kfree(buffer.pointer); 1096 + return ret; 1097 + } 1098 + 1099 + /* 1100 + * flat_state_cnt - the number of composite LPI states after the process of flattening 1101 + */ 1102 + static int flat_state_cnt; 1103 + 1104 + /** 1105 + * combine_lpi_states - combine local and parent LPI states to form a composite LPI state 1106 + * 1107 + * @local: local LPI state 1108 + * @parent: parent LPI state 1109 + * @result: composite LPI state 1110 + */ 1111 + static bool combine_lpi_states(struct acpi_lpi_state *local, 1112 + struct acpi_lpi_state *parent, 1113 + struct acpi_lpi_state *result) 1114 + { 1115 + if (parent->entry_method == ACPI_CSTATE_INTEGER) { 1116 + if (!parent->address) /* 0 means autopromotable */ 1117 + return false; 1118 + result->address = local->address + parent->address; 1119 + } else { 1120 + result->address = parent->address; 1121 + } 1122 + 1123 + result->min_residency = max(local->min_residency, parent->min_residency); 1124 + result->wake_latency = local->wake_latency + parent->wake_latency; 1125 + result->enable_parent_state = parent->enable_parent_state; 1126 + result->entry_method = local->entry_method; 1127 + 1128 + result->flags = 
parent->flags; 1129 + result->arch_flags = parent->arch_flags; 1130 + result->index = parent->index; 1131 + 1132 + strlcpy(result->desc, local->desc, ACPI_CX_DESC_LEN); 1133 + strlcat(result->desc, "+", ACPI_CX_DESC_LEN); 1134 + strlcat(result->desc, parent->desc, ACPI_CX_DESC_LEN); 1135 + return true; 1136 + } 1137 + 1138 + #define ACPI_LPI_STATE_FLAGS_ENABLED BIT(0) 1139 + 1140 + static void stash_composite_state(struct acpi_lpi_states_array *curr_level, 1141 + struct acpi_lpi_state *t) 1142 + { 1143 + curr_level->composite_states[curr_level->composite_states_size++] = t; 1144 + } 1145 + 1146 + static int flatten_lpi_states(struct acpi_processor *pr, 1147 + struct acpi_lpi_states_array *curr_level, 1148 + struct acpi_lpi_states_array *prev_level) 1149 + { 1150 + int i, j, state_count = curr_level->size; 1151 + struct acpi_lpi_state *p, *t = curr_level->entries; 1152 + 1153 + curr_level->composite_states_size = 0; 1154 + for (j = 0; j < state_count; j++, t++) { 1155 + struct acpi_lpi_state *flpi; 1156 + 1157 + if (!(t->flags & ACPI_LPI_STATE_FLAGS_ENABLED)) 1158 + continue; 1159 + 1160 + if (flat_state_cnt >= ACPI_PROCESSOR_MAX_POWER) { 1161 + pr_warn("Limiting number of LPI states to max (%d)\n", 1162 + ACPI_PROCESSOR_MAX_POWER); 1163 + pr_warn("Please increase ACPI_PROCESSOR_MAX_POWER if needed.\n"); 1164 + break; 1165 + } 1166 + 1167 + flpi = &pr->power.lpi_states[flat_state_cnt]; 1168 + 1169 + if (!prev_level) { /* leaf/processor node */ 1170 + memcpy(flpi, t, sizeof(*t)); 1171 + stash_composite_state(curr_level, flpi); 1172 + flat_state_cnt++; 1173 + continue; 1174 + } 1175 + 1176 + for (i = 0; i < prev_level->composite_states_size; i++) { 1177 + p = prev_level->composite_states[i]; 1178 + if (t->index <= p->enable_parent_state && 1179 + combine_lpi_states(p, t, flpi)) { 1180 + stash_composite_state(curr_level, flpi); 1181 + flat_state_cnt++; 1182 + flpi++; 1183 + } 1184 + } 1185 + } 1186 + 1187 + kfree(curr_level->entries); 1188 + return 0; 1189 + } 1190 + 
1191 + static int acpi_processor_get_lpi_info(struct acpi_processor *pr) 1192 + { 1193 + int ret, i; 1194 + acpi_status status; 1195 + acpi_handle handle = pr->handle, pr_ahandle; 1196 + struct acpi_device *d = NULL; 1197 + struct acpi_lpi_states_array info[2], *tmp, *prev, *curr; 1198 + 1199 + if (!osc_pc_lpi_support_confirmed) 1200 + return -EOPNOTSUPP; 1201 + 1202 + if (!acpi_has_method(handle, "_LPI")) 1203 + return -EINVAL; 1204 + 1205 + flat_state_cnt = 0; 1206 + prev = &info[0]; 1207 + curr = &info[1]; 1208 + handle = pr->handle; 1209 + ret = acpi_processor_evaluate_lpi(handle, prev); 1210 + if (ret) 1211 + return ret; 1212 + flatten_lpi_states(pr, prev, NULL); 1213 + 1214 + status = acpi_get_parent(handle, &pr_ahandle); 1215 + while (ACPI_SUCCESS(status)) { 1216 + acpi_bus_get_device(pr_ahandle, &d); 1217 + handle = pr_ahandle; 1218 + 1219 + if (strcmp(acpi_device_hid(d), ACPI_PROCESSOR_CONTAINER_HID)) 1220 + break; 1221 + 1222 + /* can be optional ? */ 1223 + if (!acpi_has_method(handle, "_LPI")) 1224 + break; 1225 + 1226 + ret = acpi_processor_evaluate_lpi(handle, curr); 1227 + if (ret) 1228 + break; 1229 + 1230 + /* flatten all the LPI states in this level of hierarchy */ 1231 + flatten_lpi_states(pr, curr, prev); 1232 + 1233 + tmp = prev, prev = curr, curr = tmp; 1234 + 1235 + status = acpi_get_parent(handle, &pr_ahandle); 1236 + } 1237 + 1238 + pr->power.count = flat_state_cnt; 1239 + /* reset the index after flattening */ 1240 + for (i = 0; i < pr->power.count; i++) 1241 + pr->power.lpi_states[i].index = i; 1242 + 1243 + /* Tell driver that _LPI is supported. 
*/ 1244 + pr->flags.has_lpi = 1; 1245 + pr->flags.power = 1; 1246 + 1247 + return 0; 1248 + } 1249 + 1250 + int __weak acpi_processor_ffh_lpi_probe(unsigned int cpu) 1251 + { 1252 + return -ENODEV; 1253 + } 1254 + 1255 + int __weak acpi_processor_ffh_lpi_enter(struct acpi_lpi_state *lpi) 1256 + { 1257 + return -ENODEV; 1258 + } 1259 + 1260 + /** 1261 + * acpi_idle_lpi_enter - enters an ACPI any LPI state 1262 + * @dev: the target CPU 1263 + * @drv: cpuidle driver containing cpuidle state info 1264 + * @index: index of target state 1265 + * 1266 + * Return: 0 for success or negative value for error 1267 + */ 1268 + static int acpi_idle_lpi_enter(struct cpuidle_device *dev, 1269 + struct cpuidle_driver *drv, int index) 1270 + { 1271 + struct acpi_processor *pr; 1272 + struct acpi_lpi_state *lpi; 1273 + 1274 + pr = __this_cpu_read(processors); 1275 + 1276 + if (unlikely(!pr)) 1277 + return -EINVAL; 1278 + 1279 + lpi = &pr->power.lpi_states[index]; 1280 + if (lpi->entry_method == ACPI_CSTATE_FFH) 1281 + return acpi_processor_ffh_lpi_enter(lpi); 1282 + 1283 + return -EINVAL; 1284 + } 1285 + 1286 + static int acpi_processor_setup_lpi_states(struct acpi_processor *pr) 1287 + { 1288 + int i; 1289 + struct acpi_lpi_state *lpi; 1290 + struct cpuidle_state *state; 1291 + struct cpuidle_driver *drv = &acpi_idle_driver; 1292 + 1293 + if (!pr->flags.has_lpi) 1294 + return -EOPNOTSUPP; 1295 + 1296 + for (i = 0; i < pr->power.count && i < CPUIDLE_STATE_MAX; i++) { 1297 + lpi = &pr->power.lpi_states[i]; 1298 + 1299 + state = &drv->states[i]; 1300 + snprintf(state->name, CPUIDLE_NAME_LEN, "LPI-%d", i); 1301 + strlcpy(state->desc, lpi->desc, CPUIDLE_DESC_LEN); 1302 + state->exit_latency = lpi->wake_latency; 1303 + state->target_residency = lpi->min_residency; 1304 + if (lpi->arch_flags) 1305 + state->flags |= CPUIDLE_FLAG_TIMER_STOP; 1306 + state->enter = acpi_idle_lpi_enter; 1307 + drv->safe_state_index = i; 1308 + } 1309 + 1310 + drv->state_count = i; 1311 + 1312 + return 0; 1313 + 
} 1314 + 1315 + /** 1316 + * acpi_processor_setup_cpuidle_states- prepares and configures cpuidle 1317 + * global state data i.e. idle routines 1318 + * 1319 + * @pr: the ACPI processor 1320 + */ 1321 + static int acpi_processor_setup_cpuidle_states(struct acpi_processor *pr) 1322 + { 1323 + int i; 1324 + struct cpuidle_driver *drv = &acpi_idle_driver; 1325 + 1326 + if (!pr->flags.power_setup_done || !pr->flags.power) 1327 + return -EINVAL; 1328 + 1329 + drv->safe_state_index = -1; 1330 + for (i = CPUIDLE_DRIVER_STATE_START; i < CPUIDLE_STATE_MAX; i++) { 1331 + drv->states[i].name[0] = '\0'; 1332 + drv->states[i].desc[0] = '\0'; 1333 + } 1334 + 1335 + if (pr->flags.has_lpi) 1336 + return acpi_processor_setup_lpi_states(pr); 1337 + 1338 + return acpi_processor_setup_cstates(pr); 1339 + } 1340 + 1341 + /** 1342 + * acpi_processor_setup_cpuidle_dev - prepares and configures CPUIDLE 1343 + * device i.e. per-cpu data 1344 + * 1345 + * @pr: the ACPI processor 1346 + * @dev : the cpuidle device 1347 + */ 1348 + static int acpi_processor_setup_cpuidle_dev(struct acpi_processor *pr, 1349 + struct cpuidle_device *dev) 1350 + { 1351 + if (!pr->flags.power_setup_done || !pr->flags.power || !dev) 1352 + return -EINVAL; 1353 + 1354 + dev->cpu = pr->id; 1355 + if (pr->flags.has_lpi) 1356 + return acpi_processor_ffh_lpi_probe(pr->id); 1357 + 1358 + return acpi_processor_setup_cpuidle_cx(pr, dev); 1359 + } 1360 + 1361 + static int acpi_processor_get_power_info(struct acpi_processor *pr) 1362 + { 1363 + int ret; 1364 + 1365 + ret = acpi_processor_get_lpi_info(pr); 1366 + if (ret) 1367 + ret = acpi_processor_get_cstate_info(pr); 1368 + 1369 + return ret; 1370 + } 1371 + 891 1372 int acpi_processor_hotplug(struct acpi_processor *pr) 892 1373 { 893 1374 int ret = 0; ··· 1340 933 if (disabled_by_idle_boot_param()) 1341 934 return 0; 1342 935 1343 - if (nocst) 1344 - return -ENODEV; 1345 - 1346 936 if (!pr->flags.power_setup_done) 1347 937 return -ENODEV; 1348 938 1349 939 dev = 
per_cpu(acpi_cpuidle_device, pr->id); 1350 940 cpuidle_pause_and_lock(); 1351 941 cpuidle_disable_device(dev); 1352 - acpi_processor_get_power_info(pr); 1353 - if (pr->flags.power) { 1354 - acpi_processor_setup_cpuidle_cx(pr, dev); 942 + ret = acpi_processor_get_power_info(pr); 943 + if (!ret && pr->flags.power) { 944 + acpi_processor_setup_cpuidle_dev(pr, dev); 1355 945 ret = cpuidle_enable_device(dev); 1356 946 } 1357 947 cpuidle_resume_and_unlock(); ··· 1356 952 return ret; 1357 953 } 1358 954 1359 - int acpi_processor_cst_has_changed(struct acpi_processor *pr) 955 + int acpi_processor_power_state_has_changed(struct acpi_processor *pr) 1360 956 { 1361 957 int cpu; 1362 958 struct acpi_processor *_pr; ··· 1364 960 1365 961 if (disabled_by_idle_boot_param()) 1366 962 return 0; 1367 - 1368 - if (nocst) 1369 - return -ENODEV; 1370 963 1371 964 if (!pr->flags.power_setup_done) 1372 965 return -ENODEV; ··· 1401 1000 acpi_processor_get_power_info(_pr); 1402 1001 if (_pr->flags.power) { 1403 1002 dev = per_cpu(acpi_cpuidle_device, cpu); 1404 - acpi_processor_setup_cpuidle_cx(_pr, dev); 1003 + acpi_processor_setup_cpuidle_dev(_pr, dev); 1405 1004 cpuidle_enable_device(dev); 1406 1005 } 1407 1006 } ··· 1416 1015 1417 1016 int acpi_processor_power_init(struct acpi_processor *pr) 1418 1017 { 1419 - acpi_status status; 1420 1018 int retval; 1421 1019 struct cpuidle_device *dev; 1422 - static int first_run; 1423 1020 1424 1021 if (disabled_by_idle_boot_param()) 1425 1022 return 0; 1426 1023 1427 - if (!first_run) { 1428 - dmi_check_system(processor_power_dmi_table); 1429 - max_cstate = acpi_processor_cstate_check(max_cstate); 1430 - if (max_cstate < ACPI_C_STATES_MAX) 1431 - printk(KERN_NOTICE 1432 - "ACPI: processor limited to max C-state %d\n", 1433 - max_cstate); 1434 - first_run++; 1435 - } 1024 + acpi_processor_cstate_first_run_checks(); 1436 1025 1437 - if (acpi_gbl_FADT.cst_control && !nocst) { 1438 - status = 1439 - acpi_os_write_port(acpi_gbl_FADT.smi_command, 
acpi_gbl_FADT.cst_control, 8); 1440 - if (ACPI_FAILURE(status)) { 1441 - ACPI_EXCEPTION((AE_INFO, status, 1442 - "Notifying BIOS of _CST ability failed")); 1443 - } 1444 - } 1445 - 1446 - acpi_processor_get_power_info(pr); 1447 - pr->flags.power_setup_done = 1; 1026 + if (!acpi_processor_get_power_info(pr)) 1027 + pr->flags.power_setup_done = 1; 1448 1028 1449 1029 /* 1450 1030 * Install the idle handler if processor power management is supported. ··· 1448 1066 return -ENOMEM; 1449 1067 per_cpu(acpi_cpuidle_device, pr->id) = dev; 1450 1068 1451 - acpi_processor_setup_cpuidle_cx(pr, dev); 1069 + acpi_processor_setup_cpuidle_dev(pr, dev); 1452 1070 1453 1071 /* Register per-cpu cpuidle_device. Cpuidle driver 1454 1072 * must already be registered before registering device
+74 -7
drivers/acpi/scan.c
··· 494 494 device_del(&device->dev); 495 495 } 496 496 497 + static BLOCKING_NOTIFIER_HEAD(acpi_reconfig_chain); 498 + 497 499 static LIST_HEAD(acpi_device_del_list); 498 500 static DEFINE_MUTEX(acpi_device_del_lock); 499 501 ··· 515 513 list_del(&adev->del_list); 516 514 517 515 mutex_unlock(&acpi_device_del_lock); 516 + 517 + blocking_notifier_call_chain(&acpi_reconfig_chain, 518 + ACPI_RECONFIG_DEVICE_REMOVE, adev); 518 519 519 520 acpi_device_del(adev); 520 521 /* ··· 1411 1406 acpi_bus_get_flags(device); 1412 1407 device->flags.match_driver = false; 1413 1408 device->flags.initialized = true; 1414 - device->flags.visited = false; 1409 + acpi_device_clear_enumerated(device); 1415 1410 device_initialize(&device->dev); 1416 1411 dev_set_uevent_suppress(&device->dev, true); 1417 1412 acpi_init_coherency(device); ··· 1681 1676 bool is_spi_i2c_slave = false; 1682 1677 1683 1678 /* 1684 - * Do not enemerate SPI/I2C slaves as they will be enuerated by their 1679 + * Do not enumerate SPI/I2C slaves as they will be enumerated by their 1685 1680 * respective parents. 1686 1681 */ 1687 1682 INIT_LIST_HEAD(&resource_list); 1688 1683 acpi_dev_get_resources(device, &resource_list, acpi_check_spi_i2c_slave, 1689 1684 &is_spi_i2c_slave); 1690 1685 acpi_dev_free_resource_list(&resource_list); 1691 - if (!is_spi_i2c_slave) 1686 + if (!is_spi_i2c_slave) { 1692 1687 acpi_create_platform_device(device); 1688 + acpi_device_set_enumerated(device); 1689 + } else { 1690 + blocking_notifier_call_chain(&acpi_reconfig_chain, 1691 + ACPI_RECONFIG_DEVICE_ADD, device); 1692 + } 1693 1693 } 1694 1694 1695 1695 static const struct acpi_device_id generic_device_ids[] = { ··· 1761 1751 acpi_bus_get_status(device); 1762 1752 /* Skip devices that are not present. 
*/ 1763 1753 if (!acpi_device_is_present(device)) { 1764 - device->flags.visited = false; 1754 + acpi_device_clear_enumerated(device); 1765 1755 device->flags.power_manageable = 0; 1766 1756 return; 1767 1757 } ··· 1776 1766 1777 1767 device->flags.initialized = true; 1778 1768 } 1779 - device->flags.visited = false; 1769 + 1780 1770 ret = acpi_scan_attach_handler(device); 1781 1771 if (ret < 0) 1782 1772 return; ··· 1790 1780 if (!ret && device->pnp.type.platform_id) 1791 1781 acpi_default_enumeration(device); 1792 1782 } 1793 - device->flags.visited = true; 1794 1783 1795 1784 ok: 1796 1785 list_for_each_entry(child, &device->children, node) ··· 1881 1872 */ 1882 1873 acpi_device_set_power(adev, ACPI_STATE_D3_COLD); 1883 1874 adev->flags.initialized = false; 1884 - adev->flags.visited = false; 1875 + acpi_device_clear_enumerated(adev); 1885 1876 } 1886 1877 EXPORT_SYMBOL_GPL(acpi_bus_trim); 1887 1878 ··· 1924 1915 1925 1916 return result < 0 ? result : 0; 1926 1917 } 1918 + 1919 + static bool acpi_scan_initialized; 1927 1920 1928 1921 int __init acpi_scan_init(void) 1929 1922 { ··· 1971 1960 1972 1961 acpi_update_all_gpes(); 1973 1962 1963 + acpi_scan_initialized = true; 1964 + 1974 1965 out: 1975 1966 mutex_unlock(&acpi_scan_lock); 1976 1967 return result; ··· 2016 2003 2017 2004 return count; 2018 2005 } 2006 + 2007 + struct acpi_table_events_work { 2008 + struct work_struct work; 2009 + void *table; 2010 + u32 event; 2011 + }; 2012 + 2013 + static void acpi_table_events_fn(struct work_struct *work) 2014 + { 2015 + struct acpi_table_events_work *tew; 2016 + 2017 + tew = container_of(work, struct acpi_table_events_work, work); 2018 + 2019 + if (tew->event == ACPI_TABLE_EVENT_LOAD) { 2020 + acpi_scan_lock_acquire(); 2021 + acpi_bus_scan(ACPI_ROOT_OBJECT); 2022 + acpi_scan_lock_release(); 2023 + } 2024 + 2025 + kfree(tew); 2026 + } 2027 + 2028 + void acpi_scan_table_handler(u32 event, void *table, void *context) 2029 + { 2030 + struct acpi_table_events_work *tew; 
2031 + 2032 + if (!acpi_scan_initialized) 2033 + return; 2034 + 2035 + if (event != ACPI_TABLE_EVENT_LOAD) 2036 + return; 2037 + 2038 + tew = kmalloc(sizeof(*tew), GFP_KERNEL); 2039 + if (!tew) 2040 + return; 2041 + 2042 + INIT_WORK(&tew->work, acpi_table_events_fn); 2043 + tew->table = table; 2044 + tew->event = event; 2045 + 2046 + schedule_work(&tew->work); 2047 + } 2048 + 2049 + int acpi_reconfig_notifier_register(struct notifier_block *nb) 2050 + { 2051 + return blocking_notifier_chain_register(&acpi_reconfig_chain, nb); 2052 + } 2053 + EXPORT_SYMBOL(acpi_reconfig_notifier_register); 2054 + 2055 + int acpi_reconfig_notifier_unregister(struct notifier_block *nb) 2056 + { 2057 + return blocking_notifier_chain_unregister(&acpi_reconfig_chain, nb); 2058 + } 2059 + EXPORT_SYMBOL(acpi_reconfig_notifier_unregister);
+23 -6
drivers/acpi/sleep.c
··· 47 47 } 48 48 } 49 49 50 - static int tts_notify_reboot(struct notifier_block *this, 50 + static void acpi_sleep_pts_switch(u32 acpi_state) 51 + { 52 + acpi_status status; 53 + 54 + status = acpi_execute_simple_method(NULL, "\\_PTS", acpi_state); 55 + if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) { 56 + /* 57 + * OS can't evaluate the _PTS object correctly. Some warning 58 + * message will be printed. But it won't break anything. 59 + */ 60 + printk(KERN_NOTICE "Failure in evaluating _PTS object\n"); 61 + } 62 + } 63 + 64 + static int sleep_notify_reboot(struct notifier_block *this, 51 65 unsigned long code, void *x) 52 66 { 53 67 acpi_sleep_tts_switch(ACPI_STATE_S5); 68 + 69 + acpi_sleep_pts_switch(ACPI_STATE_S5); 70 + 54 71 return NOTIFY_DONE; 55 72 } 56 73 57 - static struct notifier_block tts_notifier = { 58 - .notifier_call = tts_notify_reboot, 74 + static struct notifier_block sleep_notifier = { 75 + .notifier_call = sleep_notify_reboot, 59 76 .next = NULL, 60 77 .priority = 0, 61 78 }; ··· 916 899 pr_info(PREFIX "(supports%s)\n", supported); 917 900 918 901 /* 919 - * Register the tts_notifier to reboot notifier list so that the _TTS 920 - * object can also be evaluated when the system enters S5. 902 + * Register the sleep_notifier to reboot notifier list so that the _TTS 903 + * and _PTS object can also be evaluated when the system enters S5. 921 904 */ 922 - register_reboot_notifier(&tts_notifier); 905 + register_reboot_notifier(&sleep_notifier); 923 906 return 0; 924 907 }
+2 -4
drivers/acpi/sysfs.c
··· 378 378 return; 379 379 } 380 380 381 - static acpi_status 382 - acpi_sysfs_table_handler(u32 event, void *table, void *context) 381 + acpi_status acpi_sysfs_table_handler(u32 event, void *table, void *context) 383 382 { 384 383 struct acpi_table_attr *table_attr; 385 384 ··· 451 452 452 453 kobject_uevent(tables_kobj, KOBJ_ADD); 453 454 kobject_uevent(dynamic_tables_kobj, KOBJ_ADD); 454 - status = acpi_install_table_handler(acpi_sysfs_table_handler, NULL); 455 455 456 - return ACPI_FAILURE(status) ? -EINVAL : 0; 456 + return 0; 457 457 err_dynamic_tables: 458 458 kobject_put(tables_kobj); 459 459 err:
+9 -14
drivers/acpi/tables.c
··· 34 34 #include <linux/bootmem.h> 35 35 #include <linux/earlycpio.h> 36 36 #include <linux/memblock.h> 37 + #include <linux/initrd.h> 38 + #include <linux/acpi.h> 37 39 #include "internal.h" 38 40 39 41 #ifdef CONFIG_ACPI_CUSTOM_DSDT ··· 483 481 484 482 #define MAP_CHUNK_SIZE (NR_FIX_BTMAPS << PAGE_SHIFT) 485 483 486 - static void __init acpi_table_initrd_init(void *data, size_t size) 484 + void __init acpi_table_upgrade(void) 487 485 { 486 + void *data = (void *)initrd_start; 487 + size_t size = initrd_end - initrd_start; 488 488 int sig, no, table_nr = 0, total_offset = 0; 489 489 long offset = 0; 490 490 struct acpi_table_header *table; ··· 544 540 return; 545 541 546 542 acpi_tables_addr = 547 - memblock_find_in_range(0, max_low_pfn_mapped << PAGE_SHIFT, 543 + memblock_find_in_range(0, ACPI_TABLE_UPGRADE_MAX_PHYS, 548 544 all_tables_size, PAGE_SIZE); 549 545 if (!acpi_tables_addr) { 550 546 WARN_ON(1); ··· 582 578 clen = size; 583 579 if (clen > MAP_CHUNK_SIZE - slop) 584 580 clen = MAP_CHUNK_SIZE - slop; 585 - dest_p = early_ioremap(dest_addr & PAGE_MASK, 586 - clen + slop); 581 + dest_p = early_memremap(dest_addr & PAGE_MASK, 582 + clen + slop); 587 583 memcpy(dest_p + slop, src_p, clen); 588 - early_iounmap(dest_p, clen + slop); 584 + early_memunmap(dest_p, clen + slop); 589 585 src_p += clen; 590 586 dest_addr += clen; 591 587 size -= clen; ··· 700 696 } 701 697 } 702 698 #else 703 - static void __init acpi_table_initrd_init(void *data, size_t size) 704 - { 705 - } 706 - 707 699 static acpi_status 708 700 acpi_table_initrd_override(struct acpi_table_header *existing_table, 709 701 acpi_physical_address *address, ··· 740 740 if (*new_table != NULL) 741 741 acpi_table_taint(existing_table); 742 742 return AE_OK; 743 - } 744 - 745 - void __init early_acpi_table_init(void *data, size_t size) 746 - { 747 - acpi_table_initrd_init(data, size); 748 743 } 749 744 750 745 /*
+2 -1
drivers/acpi/thermal.c
··· 1259 1259 return -ENODEV; 1260 1260 } 1261 1261 1262 - acpi_thermal_pm_queue = create_workqueue("acpi_thermal_pm"); 1262 + acpi_thermal_pm_queue = alloc_workqueue("acpi_thermal_pm", 1263 + WQ_HIGHPRI | WQ_MEM_RECLAIM, 0); 1263 1264 if (!acpi_thermal_pm_queue) 1264 1265 return -ENODEV; 1265 1266
+8
drivers/acpi/video_detect.c
··· 167 167 DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X201s"), 168 168 }, 169 169 }, 170 + { 171 + .callback = video_detect_force_video, 172 + .ident = "ThinkPad X201T", 173 + .matches = { 174 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 175 + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X201T"), 176 + }, 177 + }, 170 178 171 179 /* The native backlight controls do not work on some older machines */ 172 180 {
+6 -20
drivers/cpuidle/cpuidle-arm.c
··· 36 36 static int arm_enter_idle_state(struct cpuidle_device *dev, 37 37 struct cpuidle_driver *drv, int idx) 38 38 { 39 - int ret; 40 - 41 - if (!idx) { 42 - cpu_do_idle(); 43 - return idx; 44 - } 45 - 46 - ret = cpu_pm_enter(); 47 - if (!ret) { 48 - /* 49 - * Pass idle state index to cpu_suspend which in turn will 50 - * call the CPU ops suspend protocol with idle index as a 51 - * parameter. 52 - */ 53 - ret = arm_cpuidle_suspend(idx); 54 - 55 - cpu_pm_exit(); 56 - } 57 - 58 - return ret ? -1 : idx; 39 + /* 40 + * Pass idle state index to arm_cpuidle_suspend which in turn 41 + * will call the CPU ops suspend protocol with idle index as a 42 + * parameter. 43 + */ 44 + return CPU_PM_CPU_IDLE_ENTER(arm_cpuidle_suspend, idx); 59 45 } 60 46 61 47 static struct cpuidle_driver arm_idle_driver = {
+96
drivers/firmware/efi/efi.c
··· 24 24 #include <linux/of_fdt.h> 25 25 #include <linux/io.h> 26 26 #include <linux/platform_device.h> 27 + #include <linux/slab.h> 28 + #include <linux/acpi.h> 29 + #include <linux/ucs2_string.h> 27 30 28 31 #include <asm/early_ioremap.h> 29 32 ··· 198 195 efivars_unregister(&generic_efivars); 199 196 } 200 197 198 + #if IS_ENABLED(CONFIG_ACPI) 199 + #define EFIVAR_SSDT_NAME_MAX 16 200 + static char efivar_ssdt[EFIVAR_SSDT_NAME_MAX] __initdata; 201 + static int __init efivar_ssdt_setup(char *str) 202 + { 203 + if (strlen(str) < sizeof(efivar_ssdt)) 204 + memcpy(efivar_ssdt, str, strlen(str)); 205 + else 206 + pr_warn("efivar_ssdt: name too long: %s\n", str); 207 + return 0; 208 + } 209 + __setup("efivar_ssdt=", efivar_ssdt_setup); 210 + 211 + static __init int efivar_ssdt_iter(efi_char16_t *name, efi_guid_t vendor, 212 + unsigned long name_size, void *data) 213 + { 214 + struct efivar_entry *entry; 215 + struct list_head *list = data; 216 + char utf8_name[EFIVAR_SSDT_NAME_MAX]; 217 + int limit = min_t(unsigned long, EFIVAR_SSDT_NAME_MAX, name_size); 218 + 219 + ucs2_as_utf8(utf8_name, name, limit - 1); 220 + if (strncmp(utf8_name, efivar_ssdt, limit) != 0) 221 + return 0; 222 + 223 + entry = kmalloc(sizeof(*entry), GFP_KERNEL); 224 + if (!entry) 225 + return 0; 226 + 227 + memcpy(entry->var.VariableName, name, name_size); 228 + memcpy(&entry->var.VendorGuid, &vendor, sizeof(efi_guid_t)); 229 + 230 + efivar_entry_add(entry, list); 231 + 232 + return 0; 233 + } 234 + 235 + static __init int efivar_ssdt_load(void) 236 + { 237 + LIST_HEAD(entries); 238 + struct efivar_entry *entry, *aux; 239 + unsigned long size; 240 + void *data; 241 + int ret; 242 + 243 + ret = efivar_init(efivar_ssdt_iter, &entries, true, &entries); 244 + 245 + list_for_each_entry_safe(entry, aux, &entries, list) { 246 + pr_info("loading SSDT from variable %s-%pUl\n", efivar_ssdt, 247 + &entry->var.VendorGuid); 248 + 249 + list_del(&entry->list); 250 + 251 + ret = efivar_entry_size(entry, &size); 
252 + if (ret) { 253 + pr_err("failed to get var size\n"); 254 + goto free_entry; 255 + } 256 + 257 + data = kmalloc(size, GFP_KERNEL); 258 + if (!data) 259 + goto free_entry; 260 + 261 + ret = efivar_entry_get(entry, NULL, &size, data); 262 + if (ret) { 263 + pr_err("failed to get var data\n"); 264 + goto free_data; 265 + } 266 + 267 + ret = acpi_load_table(data); 268 + if (ret) { 269 + pr_err("failed to load table: %d\n", ret); 270 + goto free_data; 271 + } 272 + 273 + goto free_entry; 274 + 275 + free_data: 276 + kfree(data); 277 + 278 + free_entry: 279 + kfree(entry); 280 + } 281 + 282 + return ret; 283 + } 284 + #else 285 + static inline int efivar_ssdt_load(void) { return 0; } 286 + #endif 287 + 201 288 /* 202 289 * We register the efi subsystem with the firmware subsystem and the 203 290 * efivars subsystem with the efi subsystem, if the system was booted with ··· 310 217 error = generic_ops_register(); 311 218 if (error) 312 219 goto err_put; 220 + 221 + if (efi_enabled(EFI_RUNTIME_SERVICES)) 222 + efivar_ssdt_load(); 313 223 314 224 error = sysfs_create_group(efi_kobj, &efi_subsys_attr_group); 315 225 if (error) {
+59 -7
drivers/firmware/psci.c
··· 13 13 14 14 #define pr_fmt(fmt) "psci: " fmt 15 15 16 + #include <linux/acpi.h> 16 17 #include <linux/arm-smccc.h> 17 18 #include <linux/cpuidle.h> 18 19 #include <linux/errno.h> ··· 257 256 u32 *psci_states; 258 257 struct device_node *state_node; 259 258 260 - /* 261 - * If the PSCI cpu_suspend function hook has not been initialized 262 - * idle states must not be enabled, so bail out 263 - */ 264 - if (!psci_ops.cpu_suspend) 265 - return -EOPNOTSUPP; 266 - 267 259 /* Count idle states */ 268 260 while ((state_node = of_parse_phandle(cpu_node, "cpu-idle-states", 269 261 count))) { ··· 304 310 return ret; 305 311 } 306 312 313 + #ifdef CONFIG_ACPI 314 + #include <acpi/processor.h> 315 + 316 + static int __maybe_unused psci_acpi_cpu_init_idle(unsigned int cpu) 317 + { 318 + int i, count; 319 + u32 *psci_states; 320 + struct acpi_lpi_state *lpi; 321 + struct acpi_processor *pr = per_cpu(processors, cpu); 322 + 323 + if (unlikely(!pr || !pr->flags.has_lpi)) 324 + return -EINVAL; 325 + 326 + count = pr->power.count - 1; 327 + if (count <= 0) 328 + return -ENODEV; 329 + 330 + psci_states = kcalloc(count, sizeof(*psci_states), GFP_KERNEL); 331 + if (!psci_states) 332 + return -ENOMEM; 333 + 334 + for (i = 0; i < count; i++) { 335 + u32 state; 336 + 337 + lpi = &pr->power.lpi_states[i + 1]; 338 + /* 339 + * Only bits[31:0] represent a PSCI power_state while 340 + * bits[63:32] must be 0x0 as per ARM ACPI FFH Specification 341 + */ 342 + state = lpi->address; 343 + if (!psci_power_state_is_valid(state)) { 344 + pr_warn("Invalid PSCI power state %#x\n", state); 345 + kfree(psci_states); 346 + return -EINVAL; 347 + } 348 + psci_states[i] = state; 349 + } 350 + /* Idle states parsed correctly, initialize per-cpu pointer */ 351 + per_cpu(psci_power_state, cpu) = psci_states; 352 + return 0; 353 + } 354 + #else 355 + static int __maybe_unused psci_acpi_cpu_init_idle(unsigned int cpu) 356 + { 357 + return -EINVAL; 358 + } 359 + #endif 360 + 307 361 int 
psci_cpu_init_idle(unsigned int cpu) 308 362 { 309 363 struct device_node *cpu_node; 310 364 int ret; 365 + 366 + /* 367 + * If the PSCI cpu_suspend function hook has not been initialized 368 + * idle states must not be enabled, so bail out 369 + */ 370 + if (!psci_ops.cpu_suspend) 371 + return -EOPNOTSUPP; 372 + 373 + if (!acpi_disabled) 374 + return psci_acpi_cpu_init_idle(cpu); 311 375 312 376 cpu_node = of_get_cpu_node(cpu, NULL); 313 377 if (!cpu_node)
+137 -38
drivers/i2c/i2c-core.c
··· 107 107 acpi_handle device_handle; 108 108 }; 109 109 110 - static int acpi_i2c_find_address(struct acpi_resource *ares, void *data) 110 + static int acpi_i2c_fill_info(struct acpi_resource *ares, void *data) 111 111 { 112 112 struct acpi_i2c_lookup *lookup = data; 113 113 struct i2c_board_info *info = lookup->info; 114 114 struct acpi_resource_i2c_serialbus *sb; 115 - acpi_handle adapter_handle; 116 115 acpi_status status; 117 116 118 117 if (info->addr || ares->type != ACPI_RESOURCE_TYPE_SERIAL_BUS) ··· 121 122 if (sb->type != ACPI_RESOURCE_SERIAL_TYPE_I2C) 122 123 return 1; 123 124 124 - /* 125 - * Extract the ResourceSource and make sure that the handle matches 126 - * with the I2C adapter handle. 127 - */ 128 125 status = acpi_get_handle(lookup->device_handle, 129 126 sb->resource_source.string_ptr, 130 - &adapter_handle); 131 - if (ACPI_SUCCESS(status) && adapter_handle == lookup->adapter_handle) { 132 - info->addr = sb->slave_address; 133 - if (sb->access_mode == ACPI_I2C_10BIT_MODE) 134 - info->flags |= I2C_CLIENT_TEN; 135 - } 127 + &lookup->adapter_handle); 128 + if (!ACPI_SUCCESS(status)) 129 + return 1; 130 + 131 + info->addr = sb->slave_address; 132 + if (sb->access_mode == ACPI_I2C_10BIT_MODE) 133 + info->flags |= I2C_CLIENT_TEN; 136 134 137 135 return 1; 138 136 } 139 137 140 - static acpi_status acpi_i2c_add_device(acpi_handle handle, u32 level, 141 - void *data, void **return_value) 138 + static int acpi_i2c_get_info(struct acpi_device *adev, 139 + struct i2c_board_info *info, 140 + acpi_handle *adapter_handle) 142 141 { 143 - struct i2c_adapter *adapter = data; 144 142 struct list_head resource_list; 145 - struct acpi_i2c_lookup lookup; 146 143 struct resource_entry *entry; 147 - struct i2c_board_info info; 148 - struct acpi_device *adev; 144 + struct acpi_i2c_lookup lookup; 149 145 int ret; 150 146 151 - if (acpi_bus_get_device(handle, &adev)) 152 - return AE_OK; 153 - if (acpi_bus_get_status(adev) || !adev->status.present) 154 - return AE_OK; 
+    if (acpi_bus_get_status(adev) || !adev->status.present ||
+        acpi_device_enumerated(adev))
+        return -EINVAL;
 
-    memset(&info, 0, sizeof(info));
-    info.fwnode = acpi_fwnode_handle(adev);
+    memset(info, 0, sizeof(*info));
+    info->fwnode = acpi_fwnode_handle(adev);
 
     memset(&lookup, 0, sizeof(lookup));
-    lookup.adapter_handle = ACPI_HANDLE(&adapter->dev);
-    lookup.device_handle = handle;
-    lookup.info = &info;
+    lookup.device_handle = acpi_device_handle(adev);
+    lookup.info = info;
 
-    /*
-     * Look up for I2cSerialBus resource with ResourceSource that
-     * matches with this adapter.
-     */
+    /* Look up for I2cSerialBus resource */
     INIT_LIST_HEAD(&resource_list);
     ret = acpi_dev_get_resources(adev, &resource_list,
-                                 acpi_i2c_find_address, &lookup);
+                                 acpi_i2c_fill_info, &lookup);
     acpi_dev_free_resource_list(&resource_list);
 
-    if (ret < 0 || !info.addr)
-        return AE_OK;
+    if (ret < 0 || !info->addr)
+        return -EINVAL;
+
+    *adapter_handle = lookup.adapter_handle;
 
     /* Then fill IRQ number if any */
     ret = acpi_dev_get_resources(adev, &resource_list, NULL, NULL);
     if (ret < 0)
-        return AE_OK;
+        return -EINVAL;
 
     resource_list_for_each_entry(entry, &resource_list) {
         if (resource_type(entry->res) == IORESOURCE_IRQ) {
-            info.irq = entry->res->start;
+            info->irq = entry->res->start;
             break;
         }
     }
 
     acpi_dev_free_resource_list(&resource_list);
 
+    strlcpy(info->type, dev_name(&adev->dev), sizeof(info->type));
+
+    return 0;
+}
+
+static void acpi_i2c_register_device(struct i2c_adapter *adapter,
+                                     struct acpi_device *adev,
+                                     struct i2c_board_info *info)
+{
     adev->power.flags.ignore_parent = true;
-    strlcpy(info.type, dev_name(&adev->dev), sizeof(info.type));
-    if (!i2c_new_device(adapter, &info)) {
+    acpi_device_set_enumerated(adev);
+
+    if (!i2c_new_device(adapter, info)) {
         adev->power.flags.ignore_parent = false;
         dev_err(&adapter->dev,
             "failed to add I2C device %s from ACPI\n",
             dev_name(&adev->dev));
     }
+}
+
+static acpi_status acpi_i2c_add_device(acpi_handle handle, u32 level,
+                                       void *data, void **return_value)
+{
+    struct i2c_adapter *adapter = data;
+    struct acpi_device *adev;
+    acpi_handle adapter_handle;
+    struct i2c_board_info info;
+
+    if (acpi_bus_get_device(handle, &adev))
+        return AE_OK;
+
+    if (acpi_i2c_get_info(adev, &info, &adapter_handle))
+        return AE_OK;
+
+    if (adapter_handle != ACPI_HANDLE(&adapter->dev))
+        return AE_OK;
+
+    acpi_i2c_register_device(adapter, adev, &info);
 
     return AE_OK;
 }
···
         dev_warn(&adap->dev, "failed to enumerate I2C slaves\n");
 }
 
+static int acpi_i2c_match_adapter(struct device *dev, void *data)
+{
+    struct i2c_adapter *adapter = i2c_verify_adapter(dev);
+
+    if (!adapter)
+        return 0;
+
+    return ACPI_HANDLE(dev) == (acpi_handle)data;
+}
+
+static int acpi_i2c_match_device(struct device *dev, void *data)
+{
+    return ACPI_COMPANION(dev) == data;
+}
+
+static struct i2c_adapter *acpi_i2c_find_adapter_by_handle(acpi_handle handle)
+{
+    struct device *dev;
+
+    dev = bus_find_device(&i2c_bus_type, NULL, handle,
+                          acpi_i2c_match_adapter);
+    return dev ? i2c_verify_adapter(dev) : NULL;
+}
+
+static struct i2c_client *acpi_i2c_find_client_by_adev(struct acpi_device *adev)
+{
+    struct device *dev;
+
+    dev = bus_find_device(&i2c_bus_type, NULL, adev, acpi_i2c_match_device);
+    return dev ? i2c_verify_client(dev) : NULL;
+}
+
+static int acpi_i2c_notify(struct notifier_block *nb, unsigned long value,
+                           void *arg)
+{
+    struct acpi_device *adev = arg;
+    struct i2c_board_info info;
+    acpi_handle adapter_handle;
+    struct i2c_adapter *adapter;
+    struct i2c_client *client;
+
+    switch (value) {
+    case ACPI_RECONFIG_DEVICE_ADD:
+        if (acpi_i2c_get_info(adev, &info, &adapter_handle))
+            break;
+
+        adapter = acpi_i2c_find_adapter_by_handle(adapter_handle);
+        if (!adapter)
+            break;
+
+        acpi_i2c_register_device(adapter, adev, &info);
+        break;
+    case ACPI_RECONFIG_DEVICE_REMOVE:
+        if (!acpi_device_enumerated(adev))
+            break;
+
+        client = acpi_i2c_find_client_by_adev(adev);
+        if (!client)
+            break;
+
+        i2c_unregister_device(client);
+        put_device(&client->dev);
+        break;
+    }
+
+    return NOTIFY_OK;
+}
+
+static struct notifier_block i2c_acpi_notifier = {
+    .notifier_call = acpi_i2c_notify,
+};
 #else /* CONFIG_ACPI */
 static inline void acpi_i2c_register_devices(struct i2c_adapter *adap) { }
+extern struct notifier_block i2c_acpi_notifier;
 #endif /* CONFIG_ACPI */
 
 #ifdef CONFIG_ACPI_I2C_OPREGION
···
 {
     if (client->dev.of_node)
         of_node_clear_flag(client->dev.of_node, OF_POPULATED);
+    if (ACPI_COMPANION(&client->dev))
+        acpi_device_clear_enumerated(ACPI_COMPANION(&client->dev));
     device_unregister(&client->dev);
 }
 EXPORT_SYMBOL_GPL(i2c_unregister_device);
···
 
     if (IS_ENABLED(CONFIG_OF_DYNAMIC))
         WARN_ON(of_reconfig_notifier_register(&i2c_of_notifier));
+    if (IS_ENABLED(CONFIG_ACPI))
+        WARN_ON(acpi_reconfig_notifier_register(&i2c_acpi_notifier));
 
     return 0;
 
···
 
 static void __exit i2c_exit(void)
 {
+    if (IS_ENABLED(CONFIG_ACPI))
+        WARN_ON(acpi_reconfig_notifier_unregister(&i2c_acpi_notifier));
     if (IS_ENABLED(CONFIG_OF_DYNAMIC))
         WARN_ON(of_reconfig_notifier_unregister(&i2c_of_notifier));
     i2c_del_driver(&dummy_driver);
+2 -2
drivers/of/of_numa.c
···
         pr_debug("NUMA: base = %llx len = %llx, node = %u\n",
                  rsrc.start, rsrc.end - rsrc.start + 1, nid);
 
-        r = numa_add_memblk(nid, rsrc.start,
-                            rsrc.end - rsrc.start + 1);
+
+        r = numa_add_memblk(nid, rsrc.start, rsrc.end + 1);
         if (r)
             break;
     }
+93 -7
drivers/spi/spi.c
···
 
     if (spi->dev.of_node)
         of_node_clear_flag(spi->dev.of_node, OF_POPULATED);
+    if (ACPI_COMPANION(&spi->dev))
+        acpi_device_clear_enumerated(ACPI_COMPANION(&spi->dev));
     device_unregister(&spi->dev);
 }
 EXPORT_SYMBOL_GPL(spi_unregister_device);
···
     return 1;
 }
 
-static acpi_status acpi_spi_add_device(acpi_handle handle, u32 level,
-                                       void *data, void **return_value)
+static acpi_status acpi_register_spi_device(struct spi_master *master,
+                                            struct acpi_device *adev)
 {
-    struct spi_master *master = data;
     struct list_head resource_list;
-    struct acpi_device *adev;
     struct spi_device *spi;
     int ret;
 
-    if (acpi_bus_get_device(handle, &adev))
-        return AE_OK;
-    if (acpi_bus_get_status(adev) || !adev->status.present)
+    if (acpi_bus_get_status(adev) || !adev->status.present ||
+        acpi_device_enumerated(adev))
         return AE_OK;
 
     spi = spi_alloc_device(master);
···
     if (spi->irq < 0)
         spi->irq = acpi_dev_gpio_irq_get(adev, 0);
 
+    acpi_device_set_enumerated(adev);
+
     adev->power.flags.ignore_parent = true;
     strlcpy(spi->modalias, acpi_device_hid(adev), sizeof(spi->modalias));
     if (spi_add_device(spi)) {
···
     }
 
     return AE_OK;
+}
+
+static acpi_status acpi_spi_add_device(acpi_handle handle, u32 level,
+                                       void *data, void **return_value)
+{
+    struct spi_master *master = data;
+    struct acpi_device *adev;
+
+    if (acpi_bus_get_device(handle, &adev))
+        return AE_OK;
+
+    return acpi_register_spi_device(master, adev);
 }
 
 static void acpi_register_spi_devices(struct spi_master *master)
···
 extern struct notifier_block spi_of_notifier;
 #endif /* IS_ENABLED(CONFIG_OF_DYNAMIC) */
 
+#if IS_ENABLED(CONFIG_ACPI)
+static int spi_acpi_master_match(struct device *dev, const void *data)
+{
+    return ACPI_COMPANION(dev->parent) == data;
+}
+
+static int spi_acpi_device_match(struct device *dev, void *data)
+{
+    return ACPI_COMPANION(dev) == data;
+}
+
+static struct spi_master *acpi_spi_find_master_by_adev(struct acpi_device *adev)
+{
+    struct device *dev;
+
+    dev = class_find_device(&spi_master_class, NULL, adev,
+                            spi_acpi_master_match);
+    if (!dev)
+        return NULL;
+
+    return container_of(dev, struct spi_master, dev);
+}
+
+static struct spi_device *acpi_spi_find_device_by_adev(struct acpi_device *adev)
+{
+    struct device *dev;
+
+    dev = bus_find_device(&spi_bus_type, NULL, adev, spi_acpi_device_match);
+
+    return dev ? to_spi_device(dev) : NULL;
+}
+
+static int acpi_spi_notify(struct notifier_block *nb, unsigned long value,
+                           void *arg)
+{
+    struct acpi_device *adev = arg;
+    struct spi_master *master;
+    struct spi_device *spi;
+
+    switch (value) {
+    case ACPI_RECONFIG_DEVICE_ADD:
+        master = acpi_spi_find_master_by_adev(adev->parent);
+        if (!master)
+            break;
+
+        acpi_register_spi_device(master, adev);
+        put_device(&master->dev);
+        break;
+    case ACPI_RECONFIG_DEVICE_REMOVE:
+        if (!acpi_device_enumerated(adev))
+            break;
+
+        spi = acpi_spi_find_device_by_adev(adev);
+        if (!spi)
+            break;
+
+        spi_unregister_device(spi);
+        put_device(&spi->dev);
+        break;
+    }
+
+    return NOTIFY_OK;
+}
+
+static struct notifier_block spi_acpi_notifier = {
+    .notifier_call = acpi_spi_notify,
+};
+#else
+extern struct notifier_block spi_acpi_notifier;
+#endif
+
 static int __init spi_init(void)
 {
     int status;
···
 
     if (IS_ENABLED(CONFIG_OF_DYNAMIC))
         WARN_ON(of_reconfig_notifier_register(&spi_of_notifier));
+    if (IS_ENABLED(CONFIG_ACPI))
+        WARN_ON(acpi_reconfig_notifier_register(&spi_acpi_notifier));
 
     return 0;
 
+4
include/acpi/acpi_numa.h
···
 extern int node_to_pxm(int);
 extern int acpi_map_pxm_to_node(int);
 extern unsigned char acpi_srat_revision;
+extern int acpi_numa __initdata;
+
+extern void bad_srat(void);
+extern int srat_disabled(void);
 
 #endif /* CONFIG_ACPI_NUMA */
 #endif /* __ACP_NUMA_H */
+1 -6
include/acpi/cppc_acpi.h
···
 #define _CPPC_ACPI_H
 
 #include <linux/acpi.h>
-#include <linux/mailbox_controller.h>
-#include <linux/mailbox_client.h>
 #include <linux/types.h>
 
+#include <acpi/pcc.h>
 #include <acpi/processor.h>
 
 /* Only support CPPCv2 for now. */
···
 extern int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls);
 extern int cppc_get_perf_caps(int cpu, struct cppc_perf_caps *caps);
 extern int acpi_get_psd_map(struct cpudata **);
-
-/* Methods to interact with the PCC mailbox controller. */
-extern struct mbox_chan *
-    pcc_mbox_request_channel(struct mbox_client *, unsigned int);
 
 #endif /* _CPPC_ACPI_H*/
+29
include/acpi/pcc.h
···
+/*
+ * PCC (Platform Communications Channel) methods
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; version 2
+ * of the License.
+ */
+
+#ifndef _PCC_H
+#define _PCC_H
+
+#include <linux/mailbox_controller.h>
+#include <linux/mailbox_client.h>
+
+#ifdef CONFIG_PCC
+extern struct mbox_chan *pcc_mbox_request_channel(struct mbox_client *cl,
+                                                  int subspace_id);
+extern void pcc_mbox_free_channel(struct mbox_chan *chan);
+#else
+static inline struct mbox_chan *pcc_mbox_request_channel(struct mbox_client *cl,
+                                                         int subspace_id)
+{
+    return NULL;
+}
+static inline void pcc_mbox_free_channel(struct mbox_chan *chan) { }
+#endif
+
+#endif /* _PCC_H */
+4
include/acpi/platform/aclinux.h
···
 #define ACPI_DEBUGGER
 #endif
 
+#ifdef CONFIG_ACPI_DEBUG
+#define ACPI_MUTEX_DEBUG
+#endif
+
 #include <linux/string.h>
 #include <linux/kernel.h>
 #include <linux/ctype.h>
+23 -4
include/acpi/processor.h
···
 #define ACPI_CSTATE_SYSTEMIO 0
 #define ACPI_CSTATE_FFH 1
 #define ACPI_CSTATE_HALT 2
+#define ACPI_CSTATE_INTEGER 3
 
 #define ACPI_CX_DESC_LEN 32
 
···
     char desc[ACPI_CX_DESC_LEN];
 };
 
+struct acpi_lpi_state {
+    u32 min_residency;
+    u32 wake_latency; /* worst case */
+    u32 flags;
+    u32 arch_flags;
+    u32 res_cnt_freq;
+    u32 enable_parent_state;
+    u64 address;
+    u8 index;
+    u8 entry_method;
+    char desc[ACPI_CX_DESC_LEN];
+};
+
 struct acpi_processor_power {
     int count;
-    struct acpi_processor_cx states[ACPI_PROCESSOR_MAX_POWER];
+    union {
+        struct acpi_processor_cx states[ACPI_PROCESSOR_MAX_POWER];
+        struct acpi_lpi_state lpi_states[ACPI_PROCESSOR_MAX_POWER];
+    };
     int timer_broadcast_on_state;
 };
···
     u8 bm_control:1;
     u8 bm_check:1;
     u8 has_cst:1;
+    u8 has_lpi:1;
     u8 power_setup_done:1;
     u8 bm_rld_set:1;
     u8 need_hotplug_init:1;
···
 DECLARE_PER_CPU(struct acpi_processor *, processors);
 extern struct acpi_processor_errata errata;
 
-#ifdef ARCH_HAS_POWER_INIT
+#if defined(ARCH_HAS_POWER_INIT) && defined(CONFIG_ACPI_PROCESSOR_CSTATE)
 void acpi_processor_power_init_bm_check(struct acpi_processor_flags *flags,
                                         unsigned int cpu);
 int acpi_processor_ffh_cstate_probe(unsigned int cpu,
···
 
 /* in processor_core.c */
 phys_cpuid_t acpi_get_phys_id(acpi_handle, int type, u32 acpi_id);
+phys_cpuid_t acpi_map_madt_entry(u32 acpi_id);
 int acpi_map_cpuid(phys_cpuid_t phys_id, u32 acpi_id);
 int acpi_get_cpuid(acpi_handle, int type, u32 acpi_id);
···
 #ifdef CONFIG_ACPI_PROCESSOR_IDLE
 int acpi_processor_power_init(struct acpi_processor *pr);
 int acpi_processor_power_exit(struct acpi_processor *pr);
-int acpi_processor_cst_has_changed(struct acpi_processor *pr);
+int acpi_processor_power_state_has_changed(struct acpi_processor *pr);
 int acpi_processor_hotplug(struct acpi_processor *pr);
 #else
 static inline int acpi_processor_power_init(struct acpi_processor *pr)
···
     return -ENODEV;
 }
 
-static inline int acpi_processor_cst_has_changed(struct acpi_processor *pr)
+static inline int acpi_processor_power_state_has_changed(struct acpi_processor *pr)
 {
     return -ENODEV;
 }
+1 -1
include/acpi/video.h
···
                                  struct acpi_video_device_brightness **dev_br,
                                  int *pmax_level);
 #else
-static inline int acpi_video_register(void) { return 0; }
+static inline int acpi_video_register(void) { return -ENODEV; }
 static inline void acpi_video_unregister(void) { return; }
 static inline int acpi_video_get_edid(struct acpi_device *device, int type,
                                       int device_id, void **edid)
+62 -4
include/linux/acpi.h
···
 int acpi_mps_check (void);
 int acpi_numa_init (void);
 
-void early_acpi_table_init(void *data, size_t size);
 int acpi_table_init (void);
 int acpi_table_parse(char *id, acpi_tbl_table_handler handler);
 int __init acpi_parse_entries(char *id, unsigned long table_size,
···
 int acpi_parse_mcfg (struct acpi_table_header *header);
 void acpi_table_print_madt_entry (struct acpi_subtable_header *madt);
 
-/* the following four functions are architecture-dependent */
+/* the following numa functions are architecture-dependent */
 void acpi_numa_slit_init (struct acpi_table_slit *slit);
+
+#if defined(CONFIG_X86) || defined(CONFIG_IA64)
 void acpi_numa_processor_affinity_init (struct acpi_srat_cpu_affinity *pa);
+#else
+static inline void
+acpi_numa_processor_affinity_init(struct acpi_srat_cpu_affinity *pa) { }
+#endif
+
 void acpi_numa_x2apic_affinity_init(struct acpi_srat_x2apic_cpu_affinity *pa);
+
+#ifdef CONFIG_ARM64
+void acpi_numa_gicc_affinity_init(struct acpi_srat_gicc_affinity *pa);
+#else
+static inline void
+acpi_numa_gicc_affinity_init(struct acpi_srat_gicc_affinity *pa) { }
+#endif
+
 int acpi_numa_memory_affinity_init (struct acpi_srat_mem_affinity *ma);
-void acpi_numa_arch_fixup(void);
 
 #ifndef PHYS_CPUID_INVALID
 typedef u32 phys_cpuid_t;
···
 #define OSC_SB_HOTPLUG_OST_SUPPORT	0x00000008
 #define OSC_SB_APEI_SUPPORT		0x00000010
 #define OSC_SB_CPC_SUPPORT		0x00000020
+#define OSC_SB_CPCV2_SUPPORT		0x00000040
+#define OSC_SB_PCLPI_SUPPORT		0x00000080
+#define OSC_SB_OSLPI_SUPPORT		0x00000100
 
 extern bool osc_sb_apei_support_acked;
+extern bool osc_pc_lpi_support_confirmed;
 
 /* PCI Host Bridge _OSC: Capabilities DWORD 2: Support Field */
 #define OSC_PCI_EXT_CONFIG_SUPPORT	0x00000001
···
 struct platform_device *acpi_create_platform_device(struct acpi_device *);
 #define ACPI_PTR(_ptr)	(_ptr)
 
+static inline void acpi_device_set_enumerated(struct acpi_device *adev)
+{
+    adev->flags.visited = true;
+}
+
+static inline void acpi_device_clear_enumerated(struct acpi_device *adev)
+{
+    adev->flags.visited = false;
+}
+
+enum acpi_reconfig_event {
+    ACPI_RECONFIG_DEVICE_ADD = 0,
+    ACPI_RECONFIG_DEVICE_REMOVE,
+};
+
+int acpi_reconfig_notifier_register(struct notifier_block *nb);
+int acpi_reconfig_notifier_unregister(struct notifier_block *nb);
+
 #else	/* !CONFIG_ACPI */
 
 #define acpi_disabled 1
···
     return NULL;
 }
 
-static inline void early_acpi_table_init(void *data, size_t size) { }
 static inline void acpi_early_init(void) { }
 static inline void acpi_subsystem_init(void) { }
···
 }
 
 #define ACPI_PTR(_ptr)	(NULL)
+
+static inline void acpi_device_set_enumerated(struct acpi_device *adev)
+{
+}
+
+static inline void acpi_device_clear_enumerated(struct acpi_device *adev)
+{
+}
+
+static inline int acpi_reconfig_notifier_register(struct notifier_block *nb)
+{
+    return -EINVAL;
+}
+
+static inline int acpi_reconfig_notifier_unregister(struct notifier_block *nb)
+{
+    return -EINVAL;
+}
 
 #endif	/* !CONFIG_ACPI */
···
               (void *) data }
 
 #define acpi_probe_device_table(t)	({ int __r = 0; __r;})
+#endif
+
+#ifdef CONFIG_ACPI_TABLE_UPGRADE
+void acpi_table_upgrade(void);
+#else
+static inline void acpi_table_upgrade(void) { }
 #endif
 
 #endif	/*_LINUX_ACPI_H*/
+18
include/linux/cpuidle.h
···
 #define CPUIDLE_DRIVER_STATE_START	0
 #endif
 
+#define CPU_PM_CPU_IDLE_ENTER(low_level_idle_enter, idx)	\
+({							\
+    int __ret;						\
+							\
+    if (!idx) {						\
+        cpu_do_idle();					\
+        return idx;					\
+    }							\
+							\
+    __ret = cpu_pm_enter();				\
+    if (!__ret) {					\
+        __ret = low_level_idle_enter(idx);		\
+        cpu_pm_exit();					\
+    }							\
+							\
+    __ret ? -1 : idx;					\
+})
+
 #endif /* _LINUX_CPUIDLE_H */
+4 -3
tools/power/acpi/Makefile.config
···
 # to something more interesting, like "arm-linux-".  If you want
 # to compile vs uClibc, that can be done here as well.
 CROSS = #/usr/i386-linux-uclibc/usr/bin/i386-uclibc-
-CC = $(CROSS)gcc
-LD = $(CROSS)gcc
-STRIP = $(CROSS)strip
+CROSS_COMPILE ?= $(CROSS)
+CC = $(CROSS_COMPILE)gcc
+LD = $(CROSS_COMPILE)gcc
+STRIP = $(CROSS_COMPILE)strip
 HOSTCC = gcc
 
 # check if compiler option is supported