Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'acpi-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI updates from Rafael Wysocki:
"These are usual ACPICA code updates (although there are more of them
than in the last few releases), a noticeable EC driver update (which
mostly consists of cleanups, though), the device enumeration quirks
handling rework from Hans, some updates eliminating unnecessary CPU
cache flushing in some places (processor idle and system-wide PM code)
and a bunch of assorted cleanups and fixes.

Specifics:

- Update ACPICA code in the kernel to the 20211217 upstream release
including the following changes:

- iASL/Disassembler: Additional support for NHLT table (Bob
Moore).
- Change a return_ACPI_STATUS (AE_BAD_PARAMETER) (Bob Moore).
- Fix a couple of warnings under MSVC (Bob Moore).
- iASL: Add TDEL table to both compiler/disassembler (Bob Moore).
- iASL/NHLT table: "Specific Data" field support (Bob Moore).
- Use original data_table_region pointer for accesses (Jessica
Clarke).
- Use original pointer for virtual origin tables (Jessica Clarke).
- Macros: Remove ACPI_PHYSADDR_TO_PTR (Jessica Clarke).
- Avoid subobject buffer overflow when validating RSDP signature
(Jessica Clarke).
- iASL: Add support for AGDI table (Ilkka Koskinen).
- Hardware: Do not flush CPU cache when entering S4 and S5 (Kirill
A. Shutemov).
- Expand the ACPI_ACCESS_ definitions (Mark Langsdorf).
- Utilities: Avoid deleting the same object twice in a row (Rafael
Wysocki).
- Executer: Fix REFCLASS_REFOF case in acpi_ex_opcode_1A_0T_1R()
(Rafael Wysocki).
- Fix AEST Processor generic resource substructure data field byte
length (Shuuichirou Ishii).
- Fix wrong interpretation of PCC address (Sudeep Holla).
- Add support for PCC Opregion special context data (Sudeep
Holla).

- Implement OperationRegion handler for PCC Type 3 subtype (Sudeep
Holla).

- Introduce acpi_fetch_acpi_dev() as a replacement for
acpi_bus_get_device() and use it in the ACPI subsystem (Rafael
Wysocki).

- Avoid using _CID for device enumeration if _HID is missing or
invalid (Rafael Wysocki).

- Rework quirk handling during ACPI device enumeration and add some
new quirks for known broken platforms (Hans de Goede).

- Avoid unnecessary or redundant CPU cache flushing during system PM
transitions (Kirill A. Shutemov).

- Add PM debug messages related to power resources (Rafael Wysocki).

- Fix kernel-doc comment in the PCI host bridge ACPI driver (Yang
Li).

- Rework flushing of EC work while suspended to idle and clean up the
handling of events in the ACPI EC driver (Rafael Wysocki).

- Prohibit ec_sys module parameter write_support from being used when
the system is locked down (Hans de Goede).

- Make the ACPI processor thermal driver use cpufreq_cpu_get() to
check for presence of cpufreq policy (Manfred Spraul).

- Avoid unnecessary CPU cache flushing in the ACPI processor idle
driver (Kirill A. Shutemov).

- Replace kernel.h with the necessary inclusions in the ACPI
processor driver (Andy Shevchenko).

- Use swap() instead of open coding it in the ACPI processor idle
driver (Guo Zhengkui).

- Fix the handling of defective LPAT in the ACPI xpower PMIC driver
and clean up some definitions of PMIC data structures (Hans de
Goede).

- Fix outdated comment in the ACPI DPTF driver (Sumeet Pawnikar).

- Add AEST to the list of known ACPI table signatures (Shuuichirou
Ishii).

- Make ACPI NUMA code take hotpluggable memblocks into account when
CONFIG_MEMORY_HOTPLUG is not set (Vitaly Kuznetsov).

- Use default_groups in kobj_type in the ACPI sysfs code (Greg
Kroah-Hartman).

- Rearrange _CPC structure documentation (Andy Shevchenko).

- Drop an always true check from the ACPI thermal driver (Adam
Borowski).

- Add new "not charging" quirk for Lenovo ThinkPads to the ACPI
battery driver (Thomas Weißschuh)"

* tag 'acpi-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (64 commits)
ACPI: PCC: Implement OperationRegion handler for the PCC Type 3 subtype
ACPI / x86: Skip AC and battery devices on x86 Android tablets with broken DSDTs
ACPI / x86: Introduce an acpi_quirk_skip_acpi_ac_and_battery() helper
ACPI: processor: thermal: avoid cpufreq_get_policy()
serdev: Do not instantiate serdevs on boards with known bogus DSDT entries
i2c: acpi: Do not instantiate I2C-clients on boards with known bogus DSDT entries
ACPI / x86: Add acpi_quirk_skip_[i2c_client|serdev]_enumeration() helpers
ACPI: scan: Create platform device for BCM4752 and LNV4752 ACPI nodes
PCI/ACPI: Fix acpi_pci_osc_control_set() kernel-doc comment
ACPI: battery: Add the ThinkPad "Not Charging" quirk
ACPI: sysfs: use default_groups in kobj_type
ACPICA: Update version to 20211217
ACPICA: iASL/NHLT table: "Specific Data" field support
ACPICA: iASL: Add support for AGDI table
ACPICA: iASL: Add TDEL table to both compiler/disassembler
ACPICA: Fixed a couple of warnings under MSVC
ACPICA: Change a return_ACPI_STATUS (AE_BAD_PARAMETER)
ACPICA: Hardware: Do not flush CPU cache when entering S4 and S5
ACPICA: Add support for PCC Opregion special context data
ACPICA: Fix wrong interpretation of PCC address
...

+1210 -590
+17
drivers/acpi/Kconfig
··· 524 524 bool 525 525 endif 526 526 527 + config ACPI_PCC 528 + bool "ACPI PCC Address Space" 529 + depends on PCC 530 + default y 531 + help 532 + The PCC Address Space also referred as PCC Operation Region pertains 533 + to the region of PCC subspace that succeeds the PCC signature. 534 + 535 + The PCC Operation Region works in conjunction with the PCC Table 536 + (Platform Communications Channel Table). PCC subspaces that are 537 + marked for use as PCC Operation Regions must not be used as PCC 538 + subspaces for the standard ACPI features such as CPPC, RASF, PDTT and 539 + MPST. These standard features must always use the PCC Table instead. 540 + 541 + Enable this feature if you want to set up and install the PCC Address 542 + Space handler to handle PCC OpRegion in the firmware. 543 + 527 544 source "drivers/acpi/pmic/Kconfig" 528 545 529 546 config ACPI_VIOT
+1
drivers/acpi/Makefile
··· 67 67 acpi-$(CONFIG_ACPI_GENERIC_GSI) += irq.o 68 68 acpi-$(CONFIG_ACPI_WATCHDOG) += acpi_watchdog.o 69 69 acpi-$(CONFIG_ACPI_PRMT) += prmt.o 70 + acpi-$(CONFIG_ACPI_PCC) += acpi_pcc.o 70 71 71 72 # Address translation 72 73 acpi-$(CONFIG_ACPI_ADXL) += acpi_adxl.o
+3 -40
drivers/acpi/ac.c
··· 48 48 }; 49 49 MODULE_DEVICE_TABLE(acpi, ac_device_ids); 50 50 51 - /* Lists of PMIC ACPI HIDs with an (often better) native charger driver */ 52 - static const struct acpi_ac_bl acpi_ac_blacklist[] = { 53 - { "INT33F4", -1 }, /* X-Powers AXP288 PMIC */ 54 - { "INT34D3", 3 }, /* Intel Cherrytrail Whiskey Cove PMIC */ 55 - }; 56 - 57 51 #ifdef CONFIG_PM_SLEEP 58 52 static int acpi_ac_resume(struct device *dev); 59 53 #endif 60 54 static SIMPLE_DEV_PM_OPS(acpi_ac_pm, NULL, acpi_ac_resume); 61 55 62 56 static int ac_sleep_before_get_state_ms; 63 - static int ac_check_pmic = 1; 64 57 static int ac_only; 65 58 66 59 static struct acpi_driver acpi_ac_driver = { ··· 193 200 return 0; 194 201 } 195 202 196 - static int __init ac_do_not_check_pmic_quirk(const struct dmi_system_id *d) 197 - { 198 - ac_check_pmic = 0; 199 - return 0; 200 - } 201 - 202 203 static int __init ac_only_quirk(const struct dmi_system_id *d) 203 204 { 204 205 ac_only = 1; ··· 202 215 /* Please keep this list alphabetically sorted */ 203 216 static const struct dmi_system_id ac_dmi_table[] __initconst = { 204 217 { 205 - /* ECS EF20EA, AXP288 PMIC but uses separate fuel-gauge */ 206 - .callback = ac_do_not_check_pmic_quirk, 207 - .matches = { 208 - DMI_MATCH(DMI_PRODUCT_NAME, "EF20EA"), 209 - }, 210 - }, 211 - { 212 218 /* Kodlix GK45 returning incorrect state */ 213 219 .callback = ac_only_quirk, 214 220 .matches = { 215 221 DMI_MATCH(DMI_PRODUCT_NAME, "GK45"), 216 - }, 217 - }, 218 - { 219 - /* Lenovo Ideapad Miix 320, AXP288 PMIC, separate fuel-gauge */ 220 - .callback = ac_do_not_check_pmic_quirk, 221 - .matches = { 222 - DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 223 - DMI_MATCH(DMI_PRODUCT_NAME, "80XF"), 224 - DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo MIIX 320-10ICR"), 225 222 }, 226 223 }, 227 224 { ··· 312 341 313 342 static int __init acpi_ac_init(void) 314 343 { 315 - unsigned int i; 316 344 int result; 317 345 318 346 if (acpi_disabled) 319 347 return -ENODEV; 320 348 321 - 
dmi_check_system(ac_dmi_table); 349 + if (acpi_quirk_skip_acpi_ac_and_battery()) 350 + return -ENODEV; 322 351 323 - if (ac_check_pmic) { 324 - for (i = 0; i < ARRAY_SIZE(acpi_ac_blacklist); i++) 325 - if (acpi_dev_present(acpi_ac_blacklist[i].hid, "1", 326 - acpi_ac_blacklist[i].hrv)) { 327 - pr_info("found native %s PMIC, not loading\n", 328 - acpi_ac_blacklist[i].hid); 329 - return -ENODEV; 330 - } 331 - } 352 + dmi_check_system(ac_dmi_table); 332 353 333 354 result = acpi_bus_register_driver(&acpi_ac_driver); 334 355 if (result < 0)
+120
drivers/acpi/acpi_pcc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Author: Sudeep Holla <sudeep.holla@arm.com> 4 + * Copyright 2021 Arm Limited 5 + * 6 + * The PCC Address Space also referred as PCC Operation Region pertains to the 7 + * region of PCC subspace that succeeds the PCC signature. The PCC Operation 8 + * Region works in conjunction with the PCC Table(Platform Communications 9 + * Channel Table). PCC subspaces that are marked for use as PCC Operation 10 + * Regions must not be used as PCC subspaces for the standard ACPI features 11 + * such as CPPC, RASF, PDTT and MPST. These standard features must always use 12 + * the PCC Table instead. 13 + * 14 + * This driver sets up the PCC Address Space and installs an handler to enable 15 + * handling of PCC OpRegion in the firmware. 16 + * 17 + */ 18 + #include <linux/kernel.h> 19 + #include <linux/acpi.h> 20 + #include <linux/completion.h> 21 + #include <linux/idr.h> 22 + #include <linux/io.h> 23 + 24 + #include <acpi/pcc.h> 25 + 26 + struct pcc_data { 27 + struct pcc_mbox_chan *pcc_chan; 28 + void __iomem *pcc_comm_addr; 29 + struct completion done; 30 + struct mbox_client cl; 31 + struct acpi_pcc_info ctx; 32 + }; 33 + 34 + struct acpi_pcc_info pcc_ctx; 35 + 36 + static void pcc_rx_callback(struct mbox_client *cl, void *m) 37 + { 38 + struct pcc_data *data = container_of(cl, struct pcc_data, cl); 39 + 40 + complete(&data->done); 41 + } 42 + 43 + static acpi_status 44 + acpi_pcc_address_space_setup(acpi_handle region_handle, u32 function, 45 + void *handler_context, void **region_context) 46 + { 47 + struct pcc_data *data; 48 + struct acpi_pcc_info *ctx = handler_context; 49 + struct pcc_mbox_chan *pcc_chan; 50 + 51 + data = kzalloc(sizeof(*data), GFP_KERNEL); 52 + if (!data) 53 + return AE_NO_MEMORY; 54 + 55 + data->cl.rx_callback = pcc_rx_callback; 56 + data->cl.knows_txdone = true; 57 + data->ctx.length = ctx->length; 58 + data->ctx.subspace_id = ctx->subspace_id; 59 + data->ctx.internal_buffer = 
ctx->internal_buffer; 60 + 61 + init_completion(&data->done); 62 + data->pcc_chan = pcc_mbox_request_channel(&data->cl, ctx->subspace_id); 63 + if (IS_ERR(data->pcc_chan)) { 64 + pr_err("Failed to find PCC channel for subspace %d\n", 65 + ctx->subspace_id); 66 + return AE_NOT_FOUND; 67 + } 68 + 69 + pcc_chan = data->pcc_chan; 70 + data->pcc_comm_addr = acpi_os_ioremap(pcc_chan->shmem_base_addr, 71 + pcc_chan->shmem_size); 72 + if (!data->pcc_comm_addr) { 73 + pr_err("Failed to ioremap PCC comm region mem for %d\n", 74 + ctx->subspace_id); 75 + return AE_NO_MEMORY; 76 + } 77 + 78 + *region_context = data; 79 + return AE_OK; 80 + } 81 + 82 + static acpi_status 83 + acpi_pcc_address_space_handler(u32 function, acpi_physical_address addr, 84 + u32 bits, acpi_integer *value, 85 + void *handler_context, void *region_context) 86 + { 87 + int ret; 88 + struct pcc_data *data = region_context; 89 + 90 + reinit_completion(&data->done); 91 + 92 + /* Write to Shared Memory */ 93 + memcpy_toio(data->pcc_comm_addr, (void *)value, data->ctx.length); 94 + 95 + ret = mbox_send_message(data->pcc_chan->mchan, NULL); 96 + if (ret < 0) 97 + return AE_ERROR; 98 + 99 + if (data->pcc_chan->mchan->mbox->txdone_irq) 100 + wait_for_completion(&data->done); 101 + 102 + mbox_client_txdone(data->pcc_chan->mchan, ret); 103 + 104 + memcpy_fromio(value, data->pcc_comm_addr, data->ctx.length); 105 + 106 + return AE_OK; 107 + } 108 + 109 + void __init acpi_init_pcc(void) 110 + { 111 + acpi_status status; 112 + 113 + status = acpi_install_address_space_handler(ACPI_ROOT_OBJECT, 114 + ACPI_ADR_SPACE_PLATFORM_COMM, 115 + &acpi_pcc_address_space_handler, 116 + &acpi_pcc_address_space_setup, 117 + &pcc_ctx); 118 + if (ACPI_FAILURE(status)) 119 + pr_alert("OperationRegion handler could not be installed\n"); 120 + }
+2 -3
drivers/acpi/acpi_video.c
··· 1733 1733 { 1734 1734 struct acpi_device *device = context; 1735 1735 struct acpi_device *sibling; 1736 - int result; 1737 1736 1738 1737 if (handle == device->handle) 1739 1738 return AE_CTRL_TERMINATE; 1740 1739 1741 - result = acpi_bus_get_device(handle, &sibling); 1742 - if (result) 1740 + sibling = acpi_fetch_acpi_dev(handle); 1741 + if (!sibling) 1743 1742 return AE_OK; 1744 1743 1745 1744 if (!strcmp(acpi_device_name(sibling), ACPI_VIDEO_BUS_NAME))
+5
drivers/acpi/acpica/acevents.h
··· 224 224 void *handler_context, void **region_context); 225 225 226 226 acpi_status 227 + acpi_ev_data_table_region_setup(acpi_handle handle, 228 + u32 function, 229 + void *handler_context, void **region_context); 230 + 231 + acpi_status 227 232 acpi_ev_default_region_setup(acpi_handle handle, 228 233 u32 function, 229 234 void *handler_context, void **region_context);
+1
drivers/acpi/acpica/acobject.h
··· 138 138 union acpi_operand_object *next; 139 139 acpi_physical_address address; 140 140 u32 length; 141 + void *pointer; /* Only for data table regions */ 141 142 }; 142 143 143 144 struct acpi_object_method {
+6 -2
drivers/acpi/acpica/actables.h
··· 35 35 36 36 acpi_status 37 37 acpi_tb_acquire_temp_table(struct acpi_table_desc *table_desc, 38 - acpi_physical_address address, u8 flags); 38 + acpi_physical_address address, 39 + u8 flags, struct acpi_table_header *table); 39 40 40 41 void acpi_tb_release_temp_table(struct acpi_table_desc *table_desc); 41 42 ··· 87 86 acpi_status 88 87 acpi_tb_install_standard_table(acpi_physical_address address, 89 88 u8 flags, 89 + struct acpi_table_header *table, 90 90 u8 reload, u8 override, u32 *table_index); 91 91 92 92 void acpi_tb_uninstall_table(struct acpi_table_desc *table_desc); ··· 97 95 98 96 acpi_status 99 97 acpi_tb_install_and_load_table(acpi_physical_address address, 100 - u8 flags, u8 override, u32 *table_index); 98 + u8 flags, 99 + struct acpi_table_header *table, 100 + u8 override, u32 *table_index); 101 101 102 102 acpi_status acpi_tb_unload_table(u32 table_index); 103 103
+1
drivers/acpi/acpica/dsopcode.c
··· 531 531 532 532 obj_desc->region.address = ACPI_PTR_TO_PHYSADDR(table); 533 533 obj_desc->region.length = table->length; 534 + obj_desc->region.pointer = table; 534 535 535 536 ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "RgnObj %p Addr %8.8X%8.8X Len %X\n", 536 537 obj_desc,
+1 -1
drivers/acpi/acpica/evhandler.c
··· 386 386 case ACPI_ADR_SPACE_DATA_TABLE: 387 387 388 388 handler = acpi_ex_data_table_space_handler; 389 - setup = NULL; 389 + setup = acpi_ev_data_table_region_setup; 390 390 break; 391 391 392 392 default:
+10
drivers/acpi/acpica/evregion.c
··· 162 162 return_ACPI_STATUS(AE_NOT_EXIST); 163 163 } 164 164 165 + if (region_obj->region.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) { 166 + struct acpi_pcc_info *ctx = 167 + handler_desc->address_space.context; 168 + 169 + ctx->internal_buffer = 170 + field_obj->field.internal_pcc_buffer; 171 + ctx->length = (u16)region_obj->region.length; 172 + ctx->subspace_id = (u8)region_obj->region.address; 173 + } 174 + 165 175 /* 166 176 * We must exit the interpreter because the region setup will 167 177 * potentially execute control methods (for example, the _REG method
+52
drivers/acpi/acpica/evrgnini.c
··· 408 408 409 409 /******************************************************************************* 410 410 * 411 + * FUNCTION: acpi_ev_data_table_region_setup 412 + * 413 + * PARAMETERS: handle - Region we are interested in 414 + * function - Start or stop 415 + * handler_context - Address space handler context 416 + * region_context - Region specific context 417 + * 418 + * RETURN: Status 419 + * 420 + * DESCRIPTION: Setup a data_table_region 421 + * 422 + * MUTEX: Assumes namespace is not locked 423 + * 424 + ******************************************************************************/ 425 + 426 + acpi_status 427 + acpi_ev_data_table_region_setup(acpi_handle handle, 428 + u32 function, 429 + void *handler_context, void **region_context) 430 + { 431 + union acpi_operand_object *region_desc = 432 + (union acpi_operand_object *)handle; 433 + struct acpi_data_table_space_context *local_region_context; 434 + 435 + ACPI_FUNCTION_TRACE(ev_data_table_region_setup); 436 + 437 + if (function == ACPI_REGION_DEACTIVATE) { 438 + if (*region_context) { 439 + ACPI_FREE(*region_context); 440 + *region_context = NULL; 441 + } 442 + return_ACPI_STATUS(AE_OK); 443 + } 444 + 445 + /* Create a new context */ 446 + 447 + local_region_context = 448 + ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_data_table_space_context)); 449 + if (!(local_region_context)) { 450 + return_ACPI_STATUS(AE_NO_MEMORY); 451 + } 452 + 453 + /* Save the data table pointer for use in the handler */ 454 + 455 + local_region_context->pointer = region_desc->region.pointer; 456 + 457 + *region_context = local_region_context; 458 + return_ACPI_STATUS(AE_OK); 459 + } 460 + 461 + /******************************************************************************* 462 + * 411 463 * FUNCTION: acpi_ev_default_region_setup 412 464 * 413 465 * PARAMETERS: handle - Region we are interested in
+1 -1
drivers/acpi/acpica/exconfig.c
··· 411 411 acpi_ex_exit_interpreter(); 412 412 status = acpi_tb_install_and_load_table(ACPI_PTR_TO_PHYSADDR(table), 413 413 ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL, 414 - TRUE, &table_index); 414 + table, TRUE, &table_index); 415 415 acpi_ex_enter_interpreter(); 416 416 if (ACPI_FAILURE(status)) { 417 417
+1
drivers/acpi/acpica/excreate.c
··· 279 279 obj_desc->region.space_id = space_id; 280 280 obj_desc->region.address = 0; 281 281 obj_desc->region.length = 0; 282 + obj_desc->region.pointer = NULL; 282 283 obj_desc->region.node = node; 283 284 obj_desc->region.handler = NULL; 284 285 obj_desc->common.flags &=
+1 -6
drivers/acpi/acpica/exfield.c
··· 330 330 obj_desc->field.base_byte_offset, 331 331 source_desc->buffer.pointer, data_length); 332 332 333 - if ((obj_desc->field.region_obj->region.address == 334 - PCC_MASTER_SUBSPACE 335 - && MASTER_SUBSPACE_COMMAND(obj_desc->field. 336 - base_byte_offset)) 337 - || GENERIC_SUBSPACE_COMMAND(obj_desc->field. 338 - base_byte_offset)) { 333 + if (MASTER_SUBSPACE_COMMAND(obj_desc->field.base_byte_offset)) { 339 334 340 335 /* Perform the write */ 341 336
+2 -1
drivers/acpi/acpica/exoparg1.c
··· 1007 1007 (walk_state, return_desc, 1008 1008 &temp_desc); 1009 1009 if (ACPI_FAILURE(status)) { 1010 - goto cleanup; 1010 + return_ACPI_STATUS 1011 + (status); 1011 1012 } 1012 1013 1013 1014 return_desc = temp_desc;
+11 -4
drivers/acpi/acpica/exregion.c
··· 509 509 u64 *value, 510 510 void *handler_context, void *region_context) 511 511 { 512 + struct acpi_data_table_space_context *mapping; 513 + char *pointer; 514 + 512 515 ACPI_FUNCTION_TRACE(ex_data_table_space_handler); 516 + 517 + mapping = (struct acpi_data_table_space_context *) region_context; 518 + pointer = ACPI_CAST_PTR(char, mapping->pointer) + 519 + (address - ACPI_PTR_TO_PHYSADDR(mapping->pointer)); 513 520 514 521 /* 515 522 * Perform the memory read or write. The bit_width was already ··· 525 518 switch (function) { 526 519 case ACPI_READ: 527 520 528 - memcpy(ACPI_CAST_PTR(char, value), 529 - ACPI_PHYSADDR_TO_PTR(address), ACPI_DIV_8(bit_width)); 521 + memcpy(ACPI_CAST_PTR(char, value), pointer, 522 + ACPI_DIV_8(bit_width)); 530 523 break; 531 524 532 525 case ACPI_WRITE: 533 526 534 - memcpy(ACPI_PHYSADDR_TO_PTR(address), 535 - ACPI_CAST_PTR(char, value), ACPI_DIV_8(bit_width)); 527 + memcpy(pointer, ACPI_CAST_PTR(char, value), 528 + ACPI_DIV_8(bit_width)); 536 529 break; 537 530 538 531 default:
+3 -1
drivers/acpi/acpica/hwesleep.c
··· 104 104 105 105 /* Flush caches, as per ACPI specification */ 106 106 107 - ACPI_FLUSH_CPU_CACHE(); 107 + if (sleep_state < ACPI_STATE_S4) { 108 + ACPI_FLUSH_CPU_CACHE(); 109 + } 108 110 109 111 status = acpi_os_enter_sleep(sleep_state, sleep_control, 0); 110 112 if (status == AE_CTRL_TERMINATE) {
+3 -1
drivers/acpi/acpica/hwsleep.c
··· 110 110 111 111 /* Flush caches, as per ACPI specification */ 112 112 113 - ACPI_FLUSH_CPU_CACHE(); 113 + if (sleep_state < ACPI_STATE_S4) { 114 + ACPI_FLUSH_CPU_CACHE(); 115 + } 114 116 115 117 status = acpi_os_enter_sleep(sleep_state, pm1a_control, pm1b_control); 116 118 if (status == AE_CTRL_TERMINATE) {
-2
drivers/acpi/acpica/hwxfsleep.c
··· 162 162 return_ACPI_STATUS(status); 163 163 } 164 164 165 - ACPI_FLUSH_CPU_CACHE(); 166 - 167 165 status = acpi_hw_write_port(acpi_gbl_FADT.smi_command, 168 166 (u32)acpi_gbl_FADT.s4_bios_request, 8); 169 167 if (ACPI_FAILURE(status)) {
+63 -30
drivers/acpi/acpica/tbdata.c
··· 89 89 { 90 90 91 91 /* 92 - * Initialize the table descriptor. Set the pointer to NULL, since the 93 - * table is not fully mapped at this time. 92 + * Initialize the table descriptor. Set the pointer to NULL for external 93 + * tables, since the table is not fully mapped at this time. 94 94 */ 95 95 memset(table_desc, 0, sizeof(struct acpi_table_desc)); 96 96 table_desc->address = address; 97 97 table_desc->length = table->length; 98 98 table_desc->flags = flags; 99 99 ACPI_MOVE_32_TO_32(table_desc->signature.ascii, table->signature); 100 + 101 + switch (table_desc->flags & ACPI_TABLE_ORIGIN_MASK) { 102 + case ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL: 103 + case ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL: 104 + 105 + table_desc->pointer = table; 106 + break; 107 + 108 + case ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL: 109 + default: 110 + 111 + break; 112 + } 100 113 } 101 114 102 115 /******************************************************************************* ··· 145 132 case ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL: 146 133 case ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL: 147 134 148 - table = ACPI_CAST_PTR(struct acpi_table_header, 149 - ACPI_PHYSADDR_TO_PTR(table_desc-> 150 - address)); 135 + table = table_desc->pointer; 151 136 break; 152 137 153 138 default: ··· 207 196 * PARAMETERS: table_desc - Table descriptor to be acquired 208 197 * address - Address of the table 209 198 * flags - Allocation flags of the table 199 + * table - Pointer to the table (required for virtual 200 + * origins, optional for physical) 210 201 * 211 202 * RETURN: Status 212 203 * ··· 221 208 222 209 acpi_status 223 210 acpi_tb_acquire_temp_table(struct acpi_table_desc *table_desc, 224 - acpi_physical_address address, u8 flags) 211 + acpi_physical_address address, 212 + u8 flags, struct acpi_table_header *table) 225 213 { 226 - struct acpi_table_header *table_header; 214 + u8 mapped_table = FALSE; 227 215 228 216 switch (flags & ACPI_TABLE_ORIGIN_MASK) { 229 217 case ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL: 230 
218 231 219 /* Get the length of the full table from the header */ 232 220 233 - table_header = 234 - acpi_os_map_memory(address, 235 - sizeof(struct acpi_table_header)); 236 - if (!table_header) { 237 - return (AE_NO_MEMORY); 221 + if (!table) { 222 + table = 223 + acpi_os_map_memory(address, 224 + sizeof(struct 225 + acpi_table_header)); 226 + if (!table) { 227 + return (AE_NO_MEMORY); 228 + } 229 + 230 + mapped_table = TRUE; 238 231 } 239 232 240 - acpi_tb_init_table_descriptor(table_desc, address, flags, 241 - table_header); 242 - acpi_os_unmap_memory(table_header, 243 - sizeof(struct acpi_table_header)); 244 - return (AE_OK); 233 + break; 245 234 246 235 case ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL: 247 236 case ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL: 248 237 249 - table_header = ACPI_CAST_PTR(struct acpi_table_header, 250 - ACPI_PHYSADDR_TO_PTR(address)); 251 - if (!table_header) { 252 - return (AE_NO_MEMORY); 238 + if (!table) { 239 + return (AE_BAD_PARAMETER); 253 240 } 254 241 255 - acpi_tb_init_table_descriptor(table_desc, address, flags, 256 - table_header); 257 - return (AE_OK); 242 + break; 258 243 259 244 default: 260 245 261 - break; 246 + /* Table is not valid yet */ 247 + 248 + return (AE_NO_MEMORY); 262 249 } 263 250 264 - /* Table is not valid yet */ 251 + acpi_tb_init_table_descriptor(table_desc, address, flags, table); 252 + if (mapped_table) { 253 + acpi_os_unmap_memory(table, sizeof(struct acpi_table_header)); 254 + } 265 255 266 - return (AE_NO_MEMORY); 256 + return (AE_OK); 267 257 } 268 258 269 259 /******************************************************************************* ··· 351 335 352 336 acpi_tb_release_table(table_desc->pointer, table_desc->length, 353 337 table_desc->flags); 354 - table_desc->pointer = NULL; 338 + 339 + switch (table_desc->flags & ACPI_TABLE_ORIGIN_MASK) { 340 + case ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL: 341 + 342 + table_desc->pointer = NULL; 343 + break; 344 + 345 + case ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL: 346 + case 
ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL: 347 + default: 348 + 349 + break; 350 + } 355 351 356 352 return_VOID; 357 353 } ··· 987 959 * 988 960 * PARAMETERS: address - Physical address of the table 989 961 * flags - Allocation flags of the table 962 + * table - Pointer to the table (required for 963 + * virtual origins, optional for 964 + * physical) 990 965 * override - Whether override should be performed 991 966 * table_index - Where table index is returned 992 967 * ··· 1001 970 1002 971 acpi_status 1003 972 acpi_tb_install_and_load_table(acpi_physical_address address, 1004 - u8 flags, u8 override, u32 *table_index) 973 + u8 flags, 974 + struct acpi_table_header *table, 975 + u8 override, u32 *table_index) 1005 976 { 1006 977 acpi_status status; 1007 978 u32 i; ··· 1012 979 1013 980 /* Install the table and load it into the namespace */ 1014 981 1015 - status = acpi_tb_install_standard_table(address, flags, TRUE, 982 + status = acpi_tb_install_standard_table(address, flags, table, TRUE, 1016 983 override, &i); 1017 984 if (ACPI_FAILURE(status)) { 1018 985 goto exit;
+3 -3
drivers/acpi/acpica/tbfadt.c
··· 313 313 acpi_tb_install_standard_table((acpi_physical_address)acpi_gbl_FADT. 314 314 Xdsdt, 315 315 ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL, 316 - FALSE, TRUE, &acpi_gbl_dsdt_index); 316 + NULL, FALSE, TRUE, &acpi_gbl_dsdt_index); 317 317 318 318 /* If Hardware Reduced flag is set, there is no FACS */ 319 319 ··· 322 322 acpi_tb_install_standard_table((acpi_physical_address) 323 323 acpi_gbl_FADT.facs, 324 324 ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL, 325 - FALSE, TRUE, 325 + NULL, FALSE, TRUE, 326 326 &acpi_gbl_facs_index); 327 327 } 328 328 if (acpi_gbl_FADT.Xfacs) { 329 329 acpi_tb_install_standard_table((acpi_physical_address) 330 330 acpi_gbl_FADT.Xfacs, 331 331 ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL, 332 - FALSE, TRUE, 332 + NULL, FALSE, TRUE, 333 333 &acpi_gbl_xfacs_index); 334 334 } 335 335 }
+11 -4
drivers/acpi/acpica/tbinstal.c
··· 79 79 * PARAMETERS: address - Address of the table (might be a virtual 80 80 * address depending on the table_flags) 81 81 * flags - Flags for the table 82 + * table - Pointer to the table (required for virtual 83 + * origins, optional for physical) 82 84 * reload - Whether reload should be performed 83 85 * override - Whether override should be performed 84 86 * table_index - Where the table index is returned ··· 98 96 acpi_status 99 97 acpi_tb_install_standard_table(acpi_physical_address address, 100 98 u8 flags, 99 + struct acpi_table_header *table, 101 100 u8 reload, u8 override, u32 *table_index) 102 101 { 103 102 u32 i; ··· 109 106 110 107 /* Acquire a temporary table descriptor for validation */ 111 108 112 - status = acpi_tb_acquire_temp_table(&new_table_desc, address, flags); 109 + status = 110 + acpi_tb_acquire_temp_table(&new_table_desc, address, flags, table); 113 111 if (ACPI_FAILURE(status)) { 114 112 ACPI_ERROR((AE_INFO, 115 113 "Could not acquire table length at %8.8X%8.8X", ··· 213 209 if (ACPI_SUCCESS(status) && table) { 214 210 acpi_tb_acquire_temp_table(&new_table_desc, 215 211 ACPI_PTR_TO_PHYSADDR(table), 216 - ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL); 212 + ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL, 213 + table); 217 214 ACPI_ERROR_ONLY(override_type = "Logical"); 218 215 goto finish_override; 219 216 } ··· 225 220 &address, &length); 226 221 if (ACPI_SUCCESS(status) && address && length) { 227 222 acpi_tb_acquire_temp_table(&new_table_desc, address, 228 - ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL); 223 + ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL, 224 + NULL); 229 225 ACPI_ERROR_ONLY(override_type = "Physical"); 230 226 goto finish_override; 231 227 } ··· 295 289 296 290 if ((table_desc->flags & ACPI_TABLE_ORIGIN_MASK) == 297 291 ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL) { 298 - ACPI_FREE(ACPI_PHYSADDR_TO_PTR(table_desc->address)); 292 + ACPI_FREE(table_desc->pointer); 293 + table_desc->pointer = NULL; 299 294 } 300 295 301 296 table_desc->address = 
ACPI_PTR_TO_PHYSADDR(NULL);
+2 -1
drivers/acpi/acpica/tbprint.c
··· 101 101 ACPI_INFO(("%-4.4s 0x%8.8X%8.8X %06X", 102 102 header->signature, ACPI_FORMAT_UINT64(address), 103 103 header->length)); 104 - } else if (ACPI_VALIDATE_RSDP_SIG(header->signature)) { 104 + } else if (ACPI_VALIDATE_RSDP_SIG(ACPI_CAST_PTR(struct acpi_table_rsdp, 105 + header)->signature)) { 105 106 106 107 /* RSDP has no common fields */ 107 108
+1 -1
drivers/acpi/acpica/tbutils.c
··· 328 328 329 329 status = acpi_tb_install_standard_table(address, 330 330 ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL, 331 - FALSE, TRUE, 331 + NULL, FALSE, TRUE, 332 332 &table_index); 333 333 334 334 if (ACPI_SUCCESS(status) &&
+38 -14
drivers/acpi/acpica/tbxfload.c
··· 227 227 * 228 228 * FUNCTION: acpi_install_table 229 229 * 230 - * PARAMETERS: address - Address of the ACPI table to be installed. 231 - * physical - Whether the address is a physical table 232 - * address or not 230 + * PARAMETERS: table - Pointer to the ACPI table to be installed. 233 231 * 234 232 * RETURN: Status 235 233 * ··· 238 240 ******************************************************************************/ 239 241 240 242 acpi_status ACPI_INIT_FUNCTION 241 - acpi_install_table(acpi_physical_address address, u8 physical) 243 + acpi_install_table(struct acpi_table_header *table) 242 244 { 243 245 acpi_status status; 244 - u8 flags; 245 246 u32 table_index; 246 247 247 248 ACPI_FUNCTION_TRACE(acpi_install_table); 248 249 249 - if (physical) { 250 - flags = ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL; 251 - } else { 252 - flags = ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL; 253 - } 254 - 255 - status = acpi_tb_install_standard_table(address, flags, 256 - FALSE, FALSE, &table_index); 250 + status = acpi_tb_install_standard_table(ACPI_PTR_TO_PHYSADDR(table), 251 + ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL, 252 + table, FALSE, FALSE, 253 + &table_index); 257 254 258 255 return_ACPI_STATUS(status); 259 256 } 260 257 261 258 ACPI_EXPORT_SYMBOL_INIT(acpi_install_table) 259 + 260 + /******************************************************************************* 261 + * 262 + * FUNCTION: acpi_install_physical_table 263 + * 264 + * PARAMETERS: address - Address of the ACPI table to be installed. 265 + * 266 + * RETURN: Status 267 + * 268 + * DESCRIPTION: Dynamically install an ACPI table. 269 + * Note: This function should only be invoked after 270 + * acpi_initialize_tables() and before acpi_load_tables(). 
271 + * 272 + ******************************************************************************/ 273 + acpi_status ACPI_INIT_FUNCTION 274 + acpi_install_physical_table(acpi_physical_address address) 275 + { 276 + acpi_status status; 277 + u32 table_index; 278 + 279 + ACPI_FUNCTION_TRACE(acpi_install_physical_table); 280 + 281 + status = acpi_tb_install_standard_table(address, 282 + ACPI_TABLE_ORIGIN_INTERNAL_PHYSICAL, 283 + NULL, FALSE, FALSE, 284 + &table_index); 285 + 286 + return_ACPI_STATUS(status); 287 + } 288 + 289 + ACPI_EXPORT_SYMBOL_INIT(acpi_install_physical_table) 262 290 263 291 /******************************************************************************* 264 292 * ··· 322 298 ACPI_INFO(("Host-directed Dynamic ACPI Table Load:")); 323 299 status = acpi_tb_install_and_load_table(ACPI_PTR_TO_PHYSADDR(table), 324 300 ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL, 325 - FALSE, &table_index); 301 + table, FALSE, &table_index); 326 302 if (table_idx) { 327 303 *table_idx = table_index; 328 304 }
+1
drivers/acpi/acpica/utdelete.c
··· 422 422 ACPI_WARNING((AE_INFO, 423 423 "Obj %p, Reference Count is already zero, cannot decrement\n", 424 424 object)); 425 + return; 425 426 } 426 427 427 428 ACPI_DEBUG_PRINT_RAW((ACPI_DB_ALLOCATIONS,
+16 -30
drivers/acpi/battery.c
··· 52 52 static int battery_bix_broken_package; 53 53 static int battery_notification_delay_ms; 54 54 static int battery_ac_is_broken; 55 - static int battery_check_pmic = 1; 55 + static int battery_quirk_notcharging; 56 56 static unsigned int cache_time = 1000; 57 57 module_param(cache_time, uint, 0644); 58 58 MODULE_PARM_DESC(cache_time, "cache time in milliseconds"); ··· 63 63 }; 64 64 65 65 MODULE_DEVICE_TABLE(acpi, battery_device_ids); 66 - 67 - /* Lists of PMIC ACPI HIDs with an (often better) native battery driver */ 68 - static const char * const acpi_battery_blacklist[] = { 69 - "INT33F4", /* X-Powers AXP288 PMIC */ 70 - }; 71 66 72 67 enum { 73 68 ACPI_BATTERY_ALARM_PRESENT, ··· 212 217 val->intval = POWER_SUPPLY_STATUS_CHARGING; 213 218 else if (acpi_battery_is_charged(battery)) 214 219 val->intval = POWER_SUPPLY_STATUS_FULL; 220 + else if (battery_quirk_notcharging) 221 + val->intval = POWER_SUPPLY_STATUS_NOT_CHARGING; 215 222 else 216 223 val->intval = POWER_SUPPLY_STATUS_UNKNOWN; 217 224 break; ··· 1101 1104 return 0; 1102 1105 } 1103 1106 1104 - static int __init 1105 - battery_do_not_check_pmic_quirk(const struct dmi_system_id *d) 1107 + static int __init battery_quirk_not_charging(const struct dmi_system_id *d) 1106 1108 { 1107 - battery_check_pmic = 0; 1109 + battery_quirk_notcharging = 1; 1108 1110 return 0; 1109 1111 } 1110 1112 ··· 1136 1140 }, 1137 1141 }, 1138 1142 { 1139 - /* ECS EF20EA, AXP288 PMIC but uses separate fuel-gauge */ 1140 - .callback = battery_do_not_check_pmic_quirk, 1141 - .matches = { 1142 - DMI_MATCH(DMI_PRODUCT_NAME, "EF20EA"), 1143 - }, 1144 - }, 1145 - { 1146 - /* Lenovo Ideapad Miix 320, AXP288 PMIC, separate fuel-gauge */ 1147 - .callback = battery_do_not_check_pmic_quirk, 1143 + /* 1144 + * On Lenovo ThinkPads the BIOS specification defines 1145 + * a state when the bits for charging and discharging 1146 + * are both set to 0. That state is "Not Charging". 
1147 + */ 1148 + .callback = battery_quirk_not_charging, 1149 + .ident = "Lenovo ThinkPad", 1148 1150 .matches = { 1149 1151 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 1150 - DMI_MATCH(DMI_PRODUCT_NAME, "80XF"), 1151 - DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo MIIX 320-10ICR"), 1152 + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad"), 1152 1153 }, 1153 1154 }, 1154 1155 {}, ··· 1272 1279 1273 1280 static void __init acpi_battery_init_async(void *unused, async_cookie_t cookie) 1274 1281 { 1275 - unsigned int i; 1276 1282 int result; 1277 1283 1278 - dmi_check_system(bat_dmi_table); 1284 + if (acpi_quirk_skip_acpi_ac_and_battery()) 1285 + return; 1279 1286 1280 - if (battery_check_pmic) { 1281 - for (i = 0; i < ARRAY_SIZE(acpi_battery_blacklist); i++) 1282 - if (acpi_dev_present(acpi_battery_blacklist[i], "1", -1)) { 1283 - pr_info("found native %s PMIC, not loading\n", 1284 - acpi_battery_blacklist[i]); 1285 - return; 1286 - } 1287 - } 1287 + dmi_check_system(bat_dmi_table); 1288 1288 1289 1289 result = acpi_bus_register_driver(&acpi_battery_driver); 1290 1290 battery_driver_registered = (result == 0);
+3 -2
drivers/acpi/bus.c
··· 98 98 acpi_status status; 99 99 unsigned long long sta; 100 100 101 - if (acpi_device_always_present(device)) { 102 - acpi_set_device_status(device, ACPI_STA_DEFAULT); 101 + if (acpi_device_override_status(device, &sta)) { 102 + acpi_set_device_status(device, sta); 103 103 return 0; 104 104 } 105 105 ··· 1320 1320 pr_debug("%s: kset create error\n", __func__); 1321 1321 1322 1322 init_prmt(); 1323 + acpi_init_pcc(); 1323 1324 result = acpi_bus_init(); 1324 1325 if (result) { 1325 1326 kobject_put(acpi_kobj);
+23 -39
drivers/acpi/cppc_acpi.c
··· 179 179 &lowest_freq.attr, 180 180 NULL 181 181 }; 182 + ATTRIBUTE_GROUPS(cppc); 182 183 183 184 static struct kobj_type cppc_ktype = { 184 185 .sysfs_ops = &kobj_sysfs_ops, 185 - .default_attrs = cppc_attrs, 186 + .default_groups = cppc_groups, 186 187 }; 187 188 188 189 static int check_pcc_chan(int pcc_ss_id, bool chk_err_bit) ··· 605 604 /* 606 605 * An example CPC table looks like the following. 607 606 * 608 - * Name(_CPC, Package() 609 - * { 610 - * 17, 611 - * NumEntries 612 - * 1, 613 - * // Revision 614 - * ResourceTemplate(){Register(PCC, 32, 0, 0x120, 2)}, 615 - * // Highest Performance 616 - * ResourceTemplate(){Register(PCC, 32, 0, 0x124, 2)}, 617 - * // Nominal Performance 618 - * ResourceTemplate(){Register(PCC, 32, 0, 0x128, 2)}, 619 - * // Lowest Nonlinear Performance 620 - * ResourceTemplate(){Register(PCC, 32, 0, 0x12C, 2)}, 621 - * // Lowest Performance 622 - * ResourceTemplate(){Register(PCC, 32, 0, 0x130, 2)}, 623 - * // Guaranteed Performance Register 624 - * ResourceTemplate(){Register(PCC, 32, 0, 0x110, 2)}, 625 - * // Desired Performance Register 626 - * ResourceTemplate(){Register(SystemMemory, 0, 0, 0, 0)}, 627 - * .. 628 - * .. 629 - * .. 630 - * 631 - * } 607 + * Name (_CPC, Package() { 608 + * 17, // NumEntries 609 + * 1, // Revision 610 + * ResourceTemplate() {Register(PCC, 32, 0, 0x120, 2)}, // Highest Performance 611 + * ResourceTemplate() {Register(PCC, 32, 0, 0x124, 2)}, // Nominal Performance 612 + * ResourceTemplate() {Register(PCC, 32, 0, 0x128, 2)}, // Lowest Nonlinear Performance 613 + * ResourceTemplate() {Register(PCC, 32, 0, 0x12C, 2)}, // Lowest Performance 614 + * ResourceTemplate() {Register(PCC, 32, 0, 0x130, 2)}, // Guaranteed Performance Register 615 + * ResourceTemplate() {Register(PCC, 32, 0, 0x110, 2)}, // Desired Performance Register 616 + * ResourceTemplate() {Register(SystemMemory, 0, 0, 0, 0)}, 617 + * ... 618 + * ... 619 + * ... 
620 + * } 632 621 * Each Register() encodes how to access that specific register. 633 622 * e.g. a sample PCC entry has the following encoding: 634 623 * 635 - * Register ( 636 - * PCC, 637 - * AddressSpaceKeyword 638 - * 8, 639 - * //RegisterBitWidth 640 - * 8, 641 - * //RegisterBitOffset 642 - * 0x30, 643 - * //RegisterAddress 644 - * 9 645 - * //AccessSize (subspace ID) 646 - * 0 647 - * ) 648 - * } 624 + * Register ( 625 + * PCC, // AddressSpaceKeyword 626 + * 8, // RegisterBitWidth 627 + * 8, // RegisterBitOffset 628 + * 0x30, // RegisterAddress 629 + * 9, // AccessSize (subspace ID) 630 + * ) 649 631 */ 650 632 651 633 #ifndef init_freq_invariance_cppc
+13 -18
drivers/acpi/device_pm.c
··· 285 285 286 286 int acpi_bus_set_power(acpi_handle handle, int state) 287 287 { 288 - struct acpi_device *device; 289 - int result; 288 + struct acpi_device *device = acpi_fetch_acpi_dev(handle); 290 289 291 - result = acpi_bus_get_device(handle, &device); 292 - if (result) 293 - return result; 290 + if (device) 291 + return acpi_device_set_power(device, state); 294 292 295 - return acpi_device_set_power(device, state); 293 + return -ENODEV; 296 294 } 297 295 EXPORT_SYMBOL(acpi_bus_set_power); 298 296 ··· 408 410 409 411 int acpi_bus_update_power(acpi_handle handle, int *state_p) 410 412 { 411 - struct acpi_device *device; 412 - int result; 413 + struct acpi_device *device = acpi_fetch_acpi_dev(handle); 413 414 414 - result = acpi_bus_get_device(handle, &device); 415 - return result ? result : acpi_device_update_power(device, state_p); 415 + if (device) 416 + return acpi_device_update_power(device, state_p); 417 + 418 + return -ENODEV; 416 419 } 417 420 EXPORT_SYMBOL_GPL(acpi_bus_update_power); 418 421 419 422 bool acpi_bus_power_manageable(acpi_handle handle) 420 423 { 421 - struct acpi_device *device; 422 - int result; 424 + struct acpi_device *device = acpi_fetch_acpi_dev(handle); 423 425 424 - result = acpi_bus_get_device(handle, &device); 425 - return result ? false : device->flags.power_manageable; 426 + return device && device->flags.power_manageable; 426 427 } 427 428 EXPORT_SYMBOL(acpi_bus_power_manageable); 428 429 ··· 540 543 541 544 bool acpi_bus_can_wakeup(acpi_handle handle) 542 545 { 543 - struct acpi_device *device; 544 - int result; 546 + struct acpi_device *device = acpi_fetch_acpi_dev(handle); 545 547 546 - result = acpi_bus_get_device(handle, &device); 547 - return result ? false : device->wakeup.flags.valid; 548 + return device && device->wakeup.flags.valid; 548 549 } 549 550 EXPORT_SYMBOL(acpi_bus_can_wakeup); 550 551
+2 -1
drivers/acpi/device_sysfs.c
··· 53 53 &data_node_path.attr, 54 54 NULL 55 55 }; 56 + ATTRIBUTE_GROUPS(acpi_data_node_default); 56 57 57 58 #define to_data_node(k) container_of(k, struct acpi_data_node, kobj) 58 59 #define to_attr(a) container_of(a, struct acpi_data_node_attr, attr) ··· 80 79 81 80 static struct kobj_type acpi_data_node_ktype = { 82 81 .sysfs_ops = &acpi_data_node_sysfs_ops, 83 - .default_attrs = acpi_data_node_default_attrs, 82 + .default_groups = acpi_data_node_default_groups, 84 83 .release = acpi_data_node_release, 85 84 }; 86 85
+1 -2
drivers/acpi/dock.c
··· 489 489 struct device_attribute *attr, char *buf) 490 490 { 491 491 struct dock_station *dock_station = dev->platform_data; 492 - struct acpi_device *adev = NULL; 492 + struct acpi_device *adev = acpi_fetch_acpi_dev(dock_station->handle); 493 493 494 - acpi_bus_get_device(dock_station->handle, &adev); 495 494 return sysfs_emit(buf, "%u\n", acpi_device_enumerated(adev)); 496 495 } 497 496 static DEVICE_ATTR_RO(docked);
+1 -1
drivers/acpi/dptf/dptf_pch_fivr.c
··· 46 46 } 47 47 48 48 /* 49 - * Presentation of attributes which are defined for INT1045 49 + * Presentation of attributes which are defined for INTC10xx 50 50 * They are: 51 51 * freq_mhz_low_clock : Set PCH FIVR switching freq for 52 52 * FIVR clock 19.2MHz and 24MHz
+184 -133
drivers/acpi/ec.c
··· 92 92 93 93 enum { 94 94 EC_FLAGS_QUERY_ENABLED, /* Query is enabled */ 95 - EC_FLAGS_QUERY_PENDING, /* Query is pending */ 96 - EC_FLAGS_QUERY_GUARDING, /* Guard for SCI_EVT check */ 97 95 EC_FLAGS_EVENT_HANDLER_INSTALLED, /* Event handler installed */ 98 96 EC_FLAGS_EC_HANDLER_INSTALLED, /* OpReg handler installed */ 99 97 EC_FLAGS_QUERY_METHODS_INSTALLED, /* _Qxx handlers installed */ ··· 164 166 struct transaction transaction; 165 167 struct work_struct work; 166 168 struct acpi_ec_query_handler *handler; 169 + struct acpi_ec *ec; 167 170 }; 168 171 169 - static int acpi_ec_query(struct acpi_ec *ec, u8 *data); 170 - static void advance_transaction(struct acpi_ec *ec, bool interrupt); 172 + static int acpi_ec_submit_query(struct acpi_ec *ec); 173 + static bool advance_transaction(struct acpi_ec *ec, bool interrupt); 171 174 static void acpi_ec_event_handler(struct work_struct *work); 172 - static void acpi_ec_event_processor(struct work_struct *work); 173 175 174 176 struct acpi_ec *first_ec; 175 177 EXPORT_SYMBOL(first_ec); ··· 441 443 return true; 442 444 } 443 445 444 - static void acpi_ec_submit_query(struct acpi_ec *ec) 446 + static bool acpi_ec_submit_event(struct acpi_ec *ec) 445 447 { 446 448 acpi_ec_mask_events(ec); 447 449 if (!acpi_ec_event_enabled(ec)) 448 - return; 449 - if (!test_and_set_bit(EC_FLAGS_QUERY_PENDING, &ec->flags)) { 450 + return false; 451 + 452 + if (ec->event_state == EC_EVENT_READY) { 450 453 ec_dbg_evt("Command(%s) submitted/blocked", 451 454 acpi_ec_cmd_string(ACPI_EC_COMMAND_QUERY)); 452 - ec->nr_pending_queries++; 453 - queue_work(ec_wq, &ec->work); 455 + 456 + ec->event_state = EC_EVENT_IN_PROGRESS; 457 + /* 458 + * If events_to_process is greater than 0 at this point, the 459 + * while () loop in acpi_ec_event_handler() is still running 460 + * and incrementing events_to_process will cause it to invoke 461 + * acpi_ec_submit_query() once more, so it is not necessary to 462 + * queue up the event work to start the same 
loop again. 463 + */ 464 + if (ec->events_to_process++ > 0) 465 + return true; 466 + 467 + ec->events_in_progress++; 468 + return queue_work(ec_wq, &ec->work); 454 469 } 470 + 471 + /* 472 + * The event handling work has not been completed yet, so it needs to be 473 + * flushed. 474 + */ 475 + return true; 455 476 } 456 477 457 - static void acpi_ec_complete_query(struct acpi_ec *ec) 478 + static void acpi_ec_complete_event(struct acpi_ec *ec) 458 479 { 459 - if (test_and_clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags)) 480 + if (ec->event_state == EC_EVENT_IN_PROGRESS) 481 + ec->event_state = EC_EVENT_COMPLETE; 482 + } 483 + 484 + static void acpi_ec_close_event(struct acpi_ec *ec) 485 + { 486 + if (ec->event_state != EC_EVENT_READY) 460 487 ec_dbg_evt("Command(%s) unblocked", 461 488 acpi_ec_cmd_string(ACPI_EC_COMMAND_QUERY)); 489 + 490 + ec->event_state = EC_EVENT_READY; 462 491 acpi_ec_unmask_events(ec); 463 492 } 464 493 ··· 512 487 */ 513 488 static void acpi_ec_clear(struct acpi_ec *ec) 514 489 { 515 - int i, status; 516 - u8 value = 0; 490 + int i; 517 491 518 492 for (i = 0; i < ACPI_EC_CLEAR_MAX; i++) { 519 - status = acpi_ec_query(ec, &value); 520 - if (status || !value) 493 + if (acpi_ec_submit_query(ec)) 521 494 break; 522 495 } 523 496 if (unlikely(i == ACPI_EC_CLEAR_MAX)) ··· 541 518 #ifdef CONFIG_PM_SLEEP 542 519 static void __acpi_ec_flush_work(void) 543 520 { 544 - drain_workqueue(ec_wq); /* flush ec->work */ 521 + flush_workqueue(ec_wq); /* flush ec->work */ 545 522 flush_workqueue(ec_query_wq); /* flush queries */ 546 523 } 547 524 ··· 572 549 573 550 static bool acpi_ec_guard_event(struct acpi_ec *ec) 574 551 { 575 - bool guarded = true; 576 552 unsigned long flags; 553 + bool guarded; 577 554 578 555 spin_lock_irqsave(&ec->lock, flags); 579 556 /* ··· 582 559 * evaluating _Qxx, so we need to re-check SCI_EVT after waiting an 583 560 * acceptable period. 
584 561 * 585 - * The guarding period begins when EC_FLAGS_QUERY_PENDING is 586 - * flagged, which means SCI_EVT check has just been performed. 587 - * But if the current transaction is ACPI_EC_COMMAND_QUERY, the 588 - * guarding should have already been performed (via 589 - * EC_FLAGS_QUERY_GUARDING) and should not be applied so that the 590 - * ACPI_EC_COMMAND_QUERY transaction can be transitioned into 591 - * ACPI_EC_COMMAND_POLL state immediately. 562 + * The guarding period is applicable if the event state is not 563 + * EC_EVENT_READY, but otherwise if the current transaction is of the 564 + * ACPI_EC_COMMAND_QUERY type, the guarding should have elapsed already 565 + * and it should not be applied to let the transaction transition into 566 + * the ACPI_EC_COMMAND_POLL state immediately. 592 567 */ 593 - if (ec_event_clearing == ACPI_EC_EVT_TIMING_STATUS || 594 - ec_event_clearing == ACPI_EC_EVT_TIMING_QUERY || 595 - !test_bit(EC_FLAGS_QUERY_PENDING, &ec->flags) || 596 - (ec->curr && ec->curr->command == ACPI_EC_COMMAND_QUERY)) 597 - guarded = false; 568 + guarded = ec_event_clearing == ACPI_EC_EVT_TIMING_EVENT && 569 + ec->event_state != EC_EVENT_READY && 570 + (!ec->curr || ec->curr->command != ACPI_EC_COMMAND_QUERY); 598 571 spin_unlock_irqrestore(&ec->lock, flags); 599 572 return guarded; 600 573 } ··· 622 603 static inline void ec_transaction_transition(struct acpi_ec *ec, unsigned long flag) 623 604 { 624 605 ec->curr->flags |= flag; 625 - if (ec->curr->command == ACPI_EC_COMMAND_QUERY) { 626 - if (ec_event_clearing == ACPI_EC_EVT_TIMING_STATUS && 627 - flag == ACPI_EC_COMMAND_POLL) 628 - acpi_ec_complete_query(ec); 629 - if (ec_event_clearing == ACPI_EC_EVT_TIMING_QUERY && 630 - flag == ACPI_EC_COMMAND_COMPLETE) 631 - acpi_ec_complete_query(ec); 632 - if (ec_event_clearing == ACPI_EC_EVT_TIMING_EVENT && 633 - flag == ACPI_EC_COMMAND_COMPLETE) 634 - set_bit(EC_FLAGS_QUERY_GUARDING, &ec->flags); 606 + 607 + if (ec->curr->command != ACPI_EC_COMMAND_QUERY) 
608 + return; 609 + 610 + switch (ec_event_clearing) { 611 + case ACPI_EC_EVT_TIMING_STATUS: 612 + if (flag == ACPI_EC_COMMAND_POLL) 613 + acpi_ec_close_event(ec); 614 + 615 + return; 616 + 617 + case ACPI_EC_EVT_TIMING_QUERY: 618 + if (flag == ACPI_EC_COMMAND_COMPLETE) 619 + acpi_ec_close_event(ec); 620 + 621 + return; 622 + 623 + case ACPI_EC_EVT_TIMING_EVENT: 624 + if (flag == ACPI_EC_COMMAND_COMPLETE) 625 + acpi_ec_complete_event(ec); 635 626 } 636 627 } 637 628 ··· 655 626 acpi_ec_mask_events(ec); 656 627 } 657 628 658 - static void advance_transaction(struct acpi_ec *ec, bool interrupt) 629 + static bool advance_transaction(struct acpi_ec *ec, bool interrupt) 659 630 { 660 631 struct transaction *t = ec->curr; 661 632 bool wakeup = false; 633 + bool ret = false; 662 634 u8 status; 663 635 664 636 ec_dbg_stm("%s (%d)", interrupt ? "IRQ" : "TASK", smp_processor_id()); ··· 687 657 */ 688 658 if (!t || !(t->flags & ACPI_EC_COMMAND_POLL)) { 689 659 if (ec_event_clearing == ACPI_EC_EVT_TIMING_EVENT && 690 - (!ec->nr_pending_queries || 691 - test_bit(EC_FLAGS_QUERY_GUARDING, &ec->flags))) { 692 - clear_bit(EC_FLAGS_QUERY_GUARDING, &ec->flags); 693 - acpi_ec_complete_query(ec); 694 - } 660 + ec->event_state == EC_EVENT_COMPLETE) 661 + acpi_ec_close_event(ec); 662 + 695 663 if (!t) 696 664 goto out; 697 665 } ··· 724 696 725 697 out: 726 698 if (status & ACPI_EC_FLAG_SCI) 727 - acpi_ec_submit_query(ec); 699 + ret = acpi_ec_submit_event(ec); 728 700 729 701 if (wakeup && interrupt) 730 702 wake_up(&ec->wait); 703 + 704 + return ret; 731 705 } 732 706 733 707 static void start_transaction(struct acpi_ec *ec) ··· 1133 1103 } 1134 1104 EXPORT_SYMBOL_GPL(acpi_ec_remove_query_handler); 1135 1105 1136 - static struct acpi_ec_query *acpi_ec_create_query(u8 *pval) 1106 + static void acpi_ec_event_processor(struct work_struct *work) 1107 + { 1108 + struct acpi_ec_query *q = container_of(work, struct acpi_ec_query, work); 1109 + struct acpi_ec_query_handler *handler = 
q->handler; 1110 + struct acpi_ec *ec = q->ec; 1111 + 1112 + ec_dbg_evt("Query(0x%02x) started", handler->query_bit); 1113 + 1114 + if (handler->func) 1115 + handler->func(handler->data); 1116 + else if (handler->handle) 1117 + acpi_evaluate_object(handler->handle, NULL, NULL, NULL); 1118 + 1119 + ec_dbg_evt("Query(0x%02x) stopped", handler->query_bit); 1120 + 1121 + spin_lock_irq(&ec->lock); 1122 + ec->queries_in_progress--; 1123 + spin_unlock_irq(&ec->lock); 1124 + 1125 + acpi_ec_put_query_handler(handler); 1126 + kfree(q); 1127 + } 1128 + 1129 + static struct acpi_ec_query *acpi_ec_create_query(struct acpi_ec *ec, u8 *pval) 1137 1130 { 1138 1131 struct acpi_ec_query *q; 1139 1132 struct transaction *t; ··· 1164 1111 q = kzalloc(sizeof (struct acpi_ec_query), GFP_KERNEL); 1165 1112 if (!q) 1166 1113 return NULL; 1114 + 1167 1115 INIT_WORK(&q->work, acpi_ec_event_processor); 1168 1116 t = &q->transaction; 1169 1117 t->command = ACPI_EC_COMMAND_QUERY; 1170 1118 t->rdata = pval; 1171 1119 t->rlen = 1; 1120 + q->ec = ec; 1172 1121 return q; 1173 1122 } 1174 1123 1175 - static void acpi_ec_delete_query(struct acpi_ec_query *q) 1124 + static int acpi_ec_submit_query(struct acpi_ec *ec) 1176 1125 { 1177 - if (q) { 1178 - if (q->handler) 1179 - acpi_ec_put_query_handler(q->handler); 1180 - kfree(q); 1181 - } 1182 - } 1183 - 1184 - static void acpi_ec_event_processor(struct work_struct *work) 1185 - { 1186 - struct acpi_ec_query *q = container_of(work, struct acpi_ec_query, work); 1187 - struct acpi_ec_query_handler *handler = q->handler; 1188 - 1189 - ec_dbg_evt("Query(0x%02x) started", handler->query_bit); 1190 - if (handler->func) 1191 - handler->func(handler->data); 1192 - else if (handler->handle) 1193 - acpi_evaluate_object(handler->handle, NULL, NULL, NULL); 1194 - ec_dbg_evt("Query(0x%02x) stopped", handler->query_bit); 1195 - acpi_ec_delete_query(q); 1196 - } 1197 - 1198 - static int acpi_ec_query(struct acpi_ec *ec, u8 *data) 1199 - { 1126 + struct acpi_ec_query 
*q; 1200 1127 u8 value = 0; 1201 1128 int result; 1202 - struct acpi_ec_query *q; 1203 1129 1204 - q = acpi_ec_create_query(&value); 1130 + q = acpi_ec_create_query(ec, &value); 1205 1131 if (!q) 1206 1132 return -ENOMEM; 1207 1133 ··· 1190 1158 * bit to be cleared (and thus clearing the interrupt source). 1191 1159 */ 1192 1160 result = acpi_ec_transaction(ec, &q->transaction); 1193 - if (!value) 1194 - result = -ENODATA; 1195 1161 if (result) 1196 1162 goto err_exit; 1163 + 1164 + if (!value) { 1165 + result = -ENODATA; 1166 + goto err_exit; 1167 + } 1197 1168 1198 1169 q->handler = acpi_ec_get_query_handler_by_value(ec, value); 1199 1170 if (!q->handler) { ··· 1205 1170 } 1206 1171 1207 1172 /* 1208 - * It is reported that _Qxx are evaluated in a parallel way on 1209 - * Windows: 1173 + * It is reported that _Qxx are evaluated in a parallel way on Windows: 1210 1174 * https://bugzilla.kernel.org/show_bug.cgi?id=94411 1211 1175 * 1212 - * Put this log entry before schedule_work() in order to make 1213 - * it appearing before any other log entries occurred during the 1214 - * work queue execution. 1176 + * Put this log entry before queue_work() to make it appear in the log 1177 + * before any other messages emitted during workqueue handling. 
1215 1178 */ 1216 1179 ec_dbg_evt("Query(0x%02x) scheduled", value); 1217 - if (!queue_work(ec_query_wq, &q->work)) { 1218 - ec_dbg_evt("Query(0x%02x) overlapped", value); 1219 - result = -EBUSY; 1220 - } 1180 + 1181 + spin_lock_irq(&ec->lock); 1182 + 1183 + ec->queries_in_progress++; 1184 + queue_work(ec_query_wq, &q->work); 1185 + 1186 + spin_unlock_irq(&ec->lock); 1187 + 1188 + return 0; 1221 1189 1222 1190 err_exit: 1223 - if (result) 1224 - acpi_ec_delete_query(q); 1225 - if (data) 1226 - *data = value; 1191 + kfree(q); 1192 + 1227 1193 return result; 1228 - } 1229 - 1230 - static void acpi_ec_check_event(struct acpi_ec *ec) 1231 - { 1232 - unsigned long flags; 1233 - 1234 - if (ec_event_clearing == ACPI_EC_EVT_TIMING_EVENT) { 1235 - if (ec_guard(ec)) { 1236 - spin_lock_irqsave(&ec->lock, flags); 1237 - /* 1238 - * Take care of the SCI_EVT unless no one else is 1239 - * taking care of it. 1240 - */ 1241 - if (!ec->curr) 1242 - advance_transaction(ec, false); 1243 - spin_unlock_irqrestore(&ec->lock, flags); 1244 - } 1245 - } 1246 1194 } 1247 1195 1248 1196 static void acpi_ec_event_handler(struct work_struct *work) 1249 1197 { 1250 - unsigned long flags; 1251 1198 struct acpi_ec *ec = container_of(work, struct acpi_ec, work); 1252 1199 1253 1200 ec_dbg_evt("Event started"); 1254 1201 1255 - spin_lock_irqsave(&ec->lock, flags); 1256 - while (ec->nr_pending_queries) { 1257 - spin_unlock_irqrestore(&ec->lock, flags); 1258 - (void)acpi_ec_query(ec, NULL); 1259 - spin_lock_irqsave(&ec->lock, flags); 1260 - ec->nr_pending_queries--; 1261 - /* 1262 - * Before exit, make sure that this work item can be 1263 - * scheduled again. There might be QR_EC failures, leaving 1264 - * EC_FLAGS_QUERY_PENDING uncleared and preventing this work 1265 - * item from being scheduled again. 
1266 - */ 1267 - if (!ec->nr_pending_queries) { 1268 - if (ec_event_clearing == ACPI_EC_EVT_TIMING_STATUS || 1269 - ec_event_clearing == ACPI_EC_EVT_TIMING_QUERY) 1270 - acpi_ec_complete_query(ec); 1271 - } 1202 + spin_lock_irq(&ec->lock); 1203 + 1204 + while (ec->events_to_process) { 1205 + spin_unlock_irq(&ec->lock); 1206 + 1207 + acpi_ec_submit_query(ec); 1208 + 1209 + spin_lock_irq(&ec->lock); 1210 + ec->events_to_process--; 1272 1211 } 1273 - spin_unlock_irqrestore(&ec->lock, flags); 1212 + 1213 + /* 1214 + * Before exit, make sure that it will be possible to queue up the 1215 + * event handling work again regardless of whether or not the query 1216 + * queued up above is processed successfully. 1217 + */ 1218 + if (ec_event_clearing == ACPI_EC_EVT_TIMING_EVENT) 1219 + acpi_ec_complete_event(ec); 1220 + else 1221 + acpi_ec_close_event(ec); 1222 + 1223 + spin_unlock_irq(&ec->lock); 1274 1224 1275 1225 ec_dbg_evt("Event stopped"); 1276 1226 1277 - acpi_ec_check_event(ec); 1227 + if (ec_event_clearing == ACPI_EC_EVT_TIMING_EVENT && ec_guard(ec)) { 1228 + spin_lock_irq(&ec->lock); 1229 + 1230 + /* Take care of SCI_EVT unless someone else is doing that. */ 1231 + if (!ec->curr) 1232 + advance_transaction(ec, false); 1233 + 1234 + spin_unlock_irq(&ec->lock); 1235 + } 1236 + 1237 + spin_lock_irq(&ec->lock); 1238 + ec->events_in_progress--; 1239 + spin_unlock_irq(&ec->lock); 1278 1240 } 1279 1241 1280 1242 static void acpi_ec_handle_interrupt(struct acpi_ec *ec) ··· 2053 2021 2054 2022 bool acpi_ec_dispatch_gpe(void) 2055 2023 { 2056 - u32 ret; 2024 + bool work_in_progress = false; 2057 2025 2058 2026 if (!first_ec) 2059 2027 return acpi_any_gpe_status_set(U32_MAX); ··· 2069 2037 * Dispatch the EC GPE in-band, but do not report wakeup in any case 2070 2038 * to allow the caller to process events properly after that. 
2071 2039 */ 2072 - ret = acpi_dispatch_gpe(NULL, first_ec->gpe); 2073 - if (ret == ACPI_INTERRUPT_HANDLED) 2074 - pm_pr_dbg("ACPI EC GPE dispatched\n"); 2040 + spin_lock_irq(&first_ec->lock); 2075 2041 2076 - /* Flush the event and query workqueues. */ 2077 - acpi_ec_flush_work(); 2042 + if (acpi_ec_gpe_status_set(first_ec)) 2043 + work_in_progress = advance_transaction(first_ec, false); 2044 + 2045 + spin_unlock_irq(&first_ec->lock); 2046 + 2047 + if (!work_in_progress) 2048 + return false; 2049 + 2050 + pm_pr_dbg("ACPI EC GPE dispatched\n"); 2051 + 2052 + /* Drain EC work. */ 2053 + do { 2054 + acpi_ec_flush_work(); 2055 + 2056 + pm_pr_dbg("ACPI EC work flushed\n"); 2057 + 2058 + spin_lock_irq(&first_ec->lock); 2059 + 2060 + work_in_progress = first_ec->events_in_progress + 2061 + first_ec->queries_in_progress > 0; 2062 + 2063 + spin_unlock_irq(&first_ec->lock); 2064 + } while (work_in_progress && !pm_wakeup_pending()); 2078 2065 2079 2066 return false; 2080 2067 }
+1 -1
drivers/acpi/ec_sys.c
··· 19 19 MODULE_LICENSE("GPL"); 20 20 21 21 static bool write_support; 22 - module_param(write_support, bool, 0644); 22 + module_param_hw(write_support, bool, other, 0644); 23 23 MODULE_PARM_DESC(write_support, "Dangerous, reboot and removal of battery may " 24 24 "be needed."); 25 25
+11 -1
drivers/acpi/internal.h
··· 166 166 /* -------------------------------------------------------------------------- 167 167 Embedded Controller 168 168 -------------------------------------------------------------------------- */ 169 + 170 + enum acpi_ec_event_state { 171 + EC_EVENT_READY = 0, /* Event work can be submitted */ 172 + EC_EVENT_IN_PROGRESS, /* Event work is pending or being processed */ 173 + EC_EVENT_COMPLETE, /* Event work processing has completed */ 174 + }; 175 + 169 176 struct acpi_ec { 170 177 acpi_handle handle; 171 178 int gpe; ··· 189 182 spinlock_t lock; 190 183 struct work_struct work; 191 184 unsigned long timestamp; 192 - unsigned long nr_pending_queries; 185 + enum acpi_ec_event_state event_state; 186 + unsigned int events_to_process; 187 + unsigned int events_in_progress; 188 + unsigned int queries_in_progress; 193 189 bool busy_polling; 194 190 unsigned int polling_guard; 195 191 };
+2 -3
drivers/acpi/numa/srat.c
··· 254 254 } 255 255 if ((ma->flags & ACPI_SRAT_MEM_ENABLED) == 0) 256 256 goto out_err; 257 - hotpluggable = ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE; 258 - if (hotpluggable && !IS_ENABLED(CONFIG_MEMORY_HOTPLUG)) 259 - goto out_err; 257 + hotpluggable = IS_ENABLED(CONFIG_MEMORY_HOTPLUG) && 258 + (ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE); 260 259 261 260 start = ma->base_address; 262 261 end = start + ma->length;
+4 -8
drivers/acpi/pci_link.c
··· 606 606 int acpi_pci_link_allocate_irq(acpi_handle handle, int index, int *triggering, 607 607 int *polarity, char **name) 608 608 { 609 - int result; 610 - struct acpi_device *device; 609 + struct acpi_device *device = acpi_fetch_acpi_dev(handle); 611 610 struct acpi_pci_link *link; 612 611 613 - result = acpi_bus_get_device(handle, &device); 614 - if (result) { 612 + if (!device) { 615 613 acpi_handle_err(handle, "Invalid link device\n"); 616 614 return -1; 617 615 } ··· 656 658 */ 657 659 int acpi_pci_link_free_irq(acpi_handle handle) 658 660 { 659 - struct acpi_device *device; 661 + struct acpi_device *device = acpi_fetch_acpi_dev(handle); 660 662 struct acpi_pci_link *link; 661 - acpi_status result; 662 663 663 - result = acpi_bus_get_device(handle, &device); 664 - if (result) { 664 + if (!device) { 665 665 acpi_handle_err(handle, "Invalid link device\n"); 666 666 return -1; 667 667 }
+5 -7
drivers/acpi/pci_root.c
··· 67 67 */ 68 68 int acpi_is_root_bridge(acpi_handle handle) 69 69 { 70 + struct acpi_device *device = acpi_fetch_acpi_dev(handle); 70 71 int ret; 71 - struct acpi_device *device; 72 72 73 - ret = acpi_bus_get_device(handle, &device); 74 - if (ret) 73 + if (!device) 75 74 return 0; 76 75 77 76 ret = acpi_match_device_ids(device, root_device_ids); ··· 214 215 215 216 struct acpi_pci_root *acpi_pci_find_root(acpi_handle handle) 216 217 { 218 + struct acpi_device *device = acpi_fetch_acpi_dev(handle); 217 219 struct acpi_pci_root *root; 218 - struct acpi_device *device; 219 220 220 - if (acpi_bus_get_device(handle, &device) || 221 - acpi_match_device_ids(device, root_device_ids)) 221 + if (!device || acpi_match_device_ids(device, root_device_ids)) 222 222 return NULL; 223 223 224 224 root = acpi_driver_data(device); ··· 322 324 * acpi_pci_osc_control_set - Request control of PCI root _OSC features. 323 325 * @handle: ACPI handle of a PCI root bridge (or PCIe Root Complex). 324 326 * @mask: Mask of _OSC bits to request control of, place to store control mask. 325 - * @req: Mask of _OSC bits the control of is essential to the caller. 327 + * @support: _OSC supported capability. 326 328 * 327 329 * Run _OSC query for @mask and if that is successful, compare the returned 328 330 * mask of control bits with @req. If all of the @req bits are set in the
+7 -7
drivers/acpi/pmic/intel_pmic.c
··· 25 25 struct mutex lock; 26 26 struct acpi_lpat_conversion_table *lpat_table; 27 27 struct regmap *regmap; 28 - struct intel_pmic_opregion_data *data; 28 + const struct intel_pmic_opregion_data *data; 29 29 struct intel_pmic_regs_handler_ctx ctx; 30 30 }; 31 31 ··· 53 53 { 54 54 struct intel_pmic_opregion *opregion = region_context; 55 55 struct regmap *regmap = opregion->regmap; 56 - struct intel_pmic_opregion_data *d = opregion->data; 56 + const struct intel_pmic_opregion_data *d = opregion->data; 57 57 int reg, bit, result; 58 58 59 59 if (bits != 32 || !value64) ··· 95 95 return 0; 96 96 } 97 97 98 - temp = acpi_lpat_raw_to_temp(opregion->lpat_table, raw_temp); 98 + temp = opregion->data->lpat_raw_to_temp(opregion->lpat_table, raw_temp); 99 99 if (temp < 0) 100 100 return temp; 101 101 ··· 135 135 static int pmic_thermal_pen(struct intel_pmic_opregion *opregion, int reg, 136 136 int bit, u32 function, u64 *value) 137 137 { 138 - struct intel_pmic_opregion_data *d = opregion->data; 138 + const struct intel_pmic_opregion_data *d = opregion->data; 139 139 struct regmap *regmap = opregion->regmap; 140 140 141 141 if (!d->get_policy || !d->update_policy) ··· 171 171 void *handler_context, void *region_context) 172 172 { 173 173 struct intel_pmic_opregion *opregion = region_context; 174 - struct intel_pmic_opregion_data *d = opregion->data; 174 + const struct intel_pmic_opregion_data *d = opregion->data; 175 175 int reg, bit, result; 176 176 177 177 if (bits != 32 || !value64) ··· 255 255 256 256 int intel_pmic_install_opregion_handler(struct device *dev, acpi_handle handle, 257 257 struct regmap *regmap, 258 - struct intel_pmic_opregion_data *d) 258 + const struct intel_pmic_opregion_data *d) 259 259 { 260 260 acpi_status status = AE_OK; 261 261 struct intel_pmic_opregion *opregion; ··· 344 344 int intel_soc_pmic_exec_mipi_pmic_seq_element(u16 i2c_address, u32 reg_address, 345 345 u32 value, u32 mask) 346 346 { 347 - struct intel_pmic_opregion_data *d; 347 + 
const struct intel_pmic_opregion_data *d; 348 348 int ret; 349 349 350 350 if (!intel_pmic_opregion) {
+7 -1
drivers/acpi/pmic/intel_pmic.h
··· 2 2 #ifndef __INTEL_PMIC_H 3 3 #define __INTEL_PMIC_H 4 4 5 + #include <acpi/acpi_lpat.h> 6 + 5 7 struct pmic_table { 6 8 int address; /* operation region address */ 7 9 int reg; /* corresponding thermal register */ ··· 19 17 int (*update_policy)(struct regmap *r, int reg, int bit, int enable); 20 18 int (*exec_mipi_pmic_seq_element)(struct regmap *r, u16 i2c_address, 21 19 u32 reg_address, u32 value, u32 mask); 20 + int (*lpat_raw_to_temp)(struct acpi_lpat_conversion_table *lpat_table, 21 + int raw); 22 22 struct pmic_table *power_table; 23 23 int power_table_count; 24 24 struct pmic_table *thermal_table; ··· 29 25 int pmic_i2c_address; 30 26 }; 31 27 32 - int intel_pmic_install_opregion_handler(struct device *dev, acpi_handle handle, struct regmap *regmap, struct intel_pmic_opregion_data *d); 28 + int intel_pmic_install_opregion_handler(struct device *dev, acpi_handle handle, 29 + struct regmap *regmap, 30 + const struct intel_pmic_opregion_data *d); 33 31 34 32 #endif
+2 -1
drivers/acpi/pmic/intel_pmic_bxtwc.c
··· 369 369 return regmap_update_bits(regmap, reg, mask, val); 370 370 } 371 371 372 - static struct intel_pmic_opregion_data intel_bxtwc_pmic_opregion_data = { 372 + static const struct intel_pmic_opregion_data intel_bxtwc_pmic_opregion_data = { 373 373 .get_power = intel_bxtwc_pmic_get_power, 374 374 .update_power = intel_bxtwc_pmic_update_power, 375 375 .get_raw_temp = intel_bxtwc_pmic_get_raw_temp, 376 376 .update_aux = intel_bxtwc_pmic_update_aux, 377 377 .get_policy = intel_bxtwc_pmic_get_policy, 378 378 .update_policy = intel_bxtwc_pmic_update_policy, 379 + .lpat_raw_to_temp = acpi_lpat_raw_to_temp, 379 380 .power_table = power_table, 380 381 .power_table_count = ARRAY_SIZE(power_table), 381 382 .thermal_table = thermal_table,
+2 -1
drivers/acpi/pmic/intel_pmic_bytcrc.c
··· 271 271 return 0; 272 272 } 273 273 274 - static struct intel_pmic_opregion_data intel_crc_pmic_opregion_data = { 274 + static const struct intel_pmic_opregion_data intel_crc_pmic_opregion_data = { 275 275 .get_power = intel_crc_pmic_get_power, 276 276 .update_power = intel_crc_pmic_update_power, 277 277 .get_raw_temp = intel_crc_pmic_get_raw_temp, 278 278 .update_aux = intel_crc_pmic_update_aux, 279 279 .get_policy = intel_crc_pmic_get_policy, 280 280 .update_policy = intel_crc_pmic_update_policy, 281 + .lpat_raw_to_temp = acpi_lpat_raw_to_temp, 281 282 .power_table = power_table, 282 283 .power_table_count= ARRAY_SIZE(power_table), 283 284 .thermal_table = thermal_table,
+2 -1
drivers/acpi/pmic/intel_pmic_chtcrc.c
··· 23 23 * intel_soc_pmic_exec_mipi_pmic_seq_element work on devices with a 24 24 * CHT Crystal Cove PMIC. 25 25 */ 26 - static struct intel_pmic_opregion_data intel_chtcrc_pmic_opregion_data = { 26 + static const struct intel_pmic_opregion_data intel_chtcrc_pmic_opregion_data = { 27 + .lpat_raw_to_temp = acpi_lpat_raw_to_temp, 27 28 .pmic_i2c_address = 0x6e, 28 29 }; 29 30
+2 -1
drivers/acpi/pmic/intel_pmic_chtdc_ti.c
··· 94 94 return ((buf[0] & 0x03) << 8) | buf[1]; 95 95 } 96 96 97 - static struct intel_pmic_opregion_data chtdc_ti_pmic_opregion_data = { 97 + static const struct intel_pmic_opregion_data chtdc_ti_pmic_opregion_data = { 98 98 .get_power = chtdc_ti_pmic_get_power, 99 99 .update_power = chtdc_ti_pmic_update_power, 100 100 .get_raw_temp = chtdc_ti_pmic_get_raw_temp, 101 + .lpat_raw_to_temp = acpi_lpat_raw_to_temp, 101 102 .power_table = chtdc_ti_power_table, 102 103 .power_table_count = ARRAY_SIZE(chtdc_ti_power_table), 103 104 .thermal_table = chtdc_ti_thermal_table,
+2 -1
drivers/acpi/pmic/intel_pmic_chtwc.c
··· 253 253 * The thermal table and ops are empty, we do not support the Thermal opregion 254 254 * (DPTF) due to lacking documentation. 255 255 */ 256 - static struct intel_pmic_opregion_data intel_cht_wc_pmic_opregion_data = { 256 + static const struct intel_pmic_opregion_data intel_cht_wc_pmic_opregion_data = { 257 257 .get_power = intel_cht_wc_pmic_get_power, 258 258 .update_power = intel_cht_wc_pmic_update_power, 259 259 .exec_mipi_pmic_seq_element = intel_cht_wc_exec_mipi_pmic_seq_element, 260 + .lpat_raw_to_temp = acpi_lpat_raw_to_temp, 260 261 .power_table = power_table, 261 262 .power_table_count = ARRAY_SIZE(power_table), 262 263 };
+23 -1
drivers/acpi/pmic/intel_pmic_xpower.c
··· 293 293 return ret; 294 294 } 295 295 296 - static struct intel_pmic_opregion_data intel_xpower_pmic_opregion_data = { 296 + static int intel_xpower_lpat_raw_to_temp(struct acpi_lpat_conversion_table *lpat_table, 297 + int raw) 298 + { 299 + struct acpi_lpat first = lpat_table->lpat[0]; 300 + struct acpi_lpat last = lpat_table->lpat[lpat_table->lpat_count - 1]; 301 + 302 + /* 303 + * Some LPAT tables in the ACPI Device for the AXP288 PMIC for some 304 + * reason only describe a small temperature range, e.g. 27° - 37° 305 + * Celcius. Resulting in errors when the tablet is idle in a cool room. 306 + * 307 + * To avoid these errors clamp the raw value to be inside the LPAT. 308 + */ 309 + if (first.raw < last.raw) 310 + raw = clamp(raw, first.raw, last.raw); 311 + else 312 + raw = clamp(raw, last.raw, first.raw); 313 + 314 + return acpi_lpat_raw_to_temp(lpat_table, raw); 315 + } 316 + 317 + static const struct intel_pmic_opregion_data intel_xpower_pmic_opregion_data = { 297 318 .get_power = intel_xpower_pmic_get_power, 298 319 .update_power = intel_xpower_pmic_update_power, 299 320 .get_raw_temp = intel_xpower_pmic_get_raw_temp, 300 321 .exec_mipi_pmic_seq_element = intel_xpower_exec_mipi_pmic_seq_element, 322 + .lpat_raw_to_temp = intel_xpower_lpat_raw_to_temp, 301 323 .power_table = power_table, 302 324 .power_table_count = ARRAY_SIZE(power_table), 303 325 .thermal_table = thermal_table,
+15 -4
drivers/acpi/power.c
··· 81 81 82 82 static struct acpi_power_resource *acpi_power_get_context(acpi_handle handle) 83 83 { 84 - struct acpi_device *device; 84 + struct acpi_device *device = acpi_fetch_acpi_dev(handle); 85 85 86 - if (acpi_bus_get_device(handle, &device)) 86 + if (!device) 87 87 return NULL; 88 88 89 89 return to_power_resource(device); ··· 716 716 717 717 mutex_lock(&acpi_device_lock); 718 718 719 + dev_dbg(&dev->dev, "Enabling wakeup power (count %d)\n", 720 + dev->wakeup.prepare_count); 721 + 719 722 if (dev->wakeup.prepare_count++) 720 723 goto out; 721 724 ··· 737 734 if (err) { 738 735 acpi_power_off_list(&dev->wakeup.resources); 739 736 dev->wakeup.prepare_count = 0; 737 + goto out; 740 738 } 739 + 740 + dev_dbg(&dev->dev, "Wakeup power enabled\n"); 741 741 742 742 out: 743 743 mutex_unlock(&acpi_device_lock); ··· 762 756 return -EINVAL; 763 757 764 758 mutex_lock(&acpi_device_lock); 759 + 760 + dev_dbg(&dev->dev, "Disabling wakeup power (count %d)\n", 761 + dev->wakeup.prepare_count); 765 762 766 763 /* Do nothing if wakeup power has not been enabled for this device. */ 767 764 if (dev->wakeup.prepare_count <= 0) ··· 791 782 if (err) { 792 783 dev_err(&dev->dev, "Cannot turn off wakeup power resources\n"); 793 784 dev->wakeup.flags.valid = 0; 785 + goto out; 794 786 } 787 + 788 + dev_dbg(&dev->dev, "Wakeup power disabled\n"); 795 789 796 790 out: 797 791 mutex_unlock(&acpi_device_lock); ··· 928 916 929 917 struct acpi_device *acpi_add_power_resource(acpi_handle handle) 930 918 { 919 + struct acpi_device *device = acpi_fetch_acpi_dev(handle); 931 920 struct acpi_power_resource *resource; 932 - struct acpi_device *device = NULL; 933 921 union acpi_object acpi_object; 934 922 struct acpi_buffer buffer = { sizeof(acpi_object), &acpi_object }; 935 923 acpi_status status; 936 924 u8 state_dummy; 937 925 int result; 938 926 939 - acpi_bus_get_device(handle, &device); 940 927 if (device) 941 928 return device; 942 929
+7 -3
drivers/acpi/processor_driver.c
··· 98 98 struct acpi_processor *pr = per_cpu(processors, cpu); 99 99 struct acpi_device *device; 100 100 101 - if (!pr || acpi_bus_get_device(pr->handle, &device)) 101 + if (!pr) 102 102 return 0; 103 + 104 + device = acpi_fetch_acpi_dev(pr->handle); 105 + if (!device) 106 + return 0; 107 + 103 108 /* 104 109 * CPU got physically hotplugged and onlined for the first time: 105 110 * Initialize missing things. ··· 130 125 static int acpi_soft_cpu_dead(unsigned int cpu) 131 126 { 132 127 struct acpi_processor *pr = per_cpu(processors, cpu); 133 - struct acpi_device *device; 134 128 135 - if (!pr || acpi_bus_get_device(pr->handle, &device)) 129 + if (!pr || !acpi_fetch_acpi_dev(pr->handle)) 136 130 return 0; 137 131 138 132 acpi_processor_reevaluate_tstate(pr, true);
+5 -6
drivers/acpi/processor_idle.c
··· 20 20 #include <linux/tick.h> 21 21 #include <linux/cpuidle.h> 22 22 #include <linux/cpu.h> 23 + #include <linux/minmax.h> 23 24 #include <acpi/processor.h> 24 25 25 26 /* ··· 401 400 static void acpi_cst_latency_swap(void *a, void *b, int n) 402 401 { 403 402 struct acpi_processor_cx *x = a, *y = b; 404 - u32 tmp; 405 403 406 404 if (!(x->valid && y->valid)) 407 405 return; 408 - tmp = x->latency; 409 - x->latency = y->latency; 410 - y->latency = tmp; 406 + swap(x->latency, y->latency); 411 407 } 412 408 413 409 static int acpi_processor_power_verify(struct acpi_processor *pr) ··· 565 567 { 566 568 struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu); 567 569 568 - ACPI_FLUSH_CPU_CACHE(); 570 + if (cx->type == ACPI_STATE_C3) 571 + ACPI_FLUSH_CPU_CACHE(); 569 572 570 573 while (1) { 571 574 ··· 1100 1101 1101 1102 status = acpi_get_parent(handle, &pr_ahandle); 1102 1103 while (ACPI_SUCCESS(status)) { 1103 - acpi_bus_get_device(pr_ahandle, &d); 1104 + d = acpi_fetch_acpi_dev(pr_ahandle); 1104 1105 handle = pr_ahandle; 1105 1106 1106 1107 if (strcmp(acpi_device_hid(d), ACPI_PROCESSOR_CONTAINER_HID))
+10 -3
drivers/acpi/processor_thermal.c
··· 53 53 54 54 static int cpu_has_cpufreq(unsigned int cpu) 55 55 { 56 - struct cpufreq_policy policy; 57 - if (!acpi_processor_cpufreq_init || cpufreq_get_policy(&policy, cpu)) 56 + struct cpufreq_policy *policy; 57 + 58 + if (!acpi_processor_cpufreq_init) 58 59 return 0; 59 - return 1; 60 + 61 + policy = cpufreq_cpu_get(cpu); 62 + if (policy) { 63 + cpufreq_cpu_put(policy); 64 + return 1; 65 + } 66 + return 0; 60 67 } 61 68 62 69 static int cpufreq_get_max_state(unsigned int cpu)
+5 -6
drivers/acpi/property.c
··· 687 687 if (index) 688 688 return -EINVAL; 689 689 690 - ret = acpi_bus_get_device(obj->reference.handle, &device); 691 - if (ret) 692 - return ret == -ENODEV ? -EINVAL : ret; 690 + device = acpi_fetch_acpi_dev(obj->reference.handle); 691 + if (!device) 692 + return -EINVAL; 693 693 694 694 args->fwnode = acpi_fwnode_handle(device); 695 695 args->nargs = 0; ··· 719 719 if (element->type == ACPI_TYPE_LOCAL_REFERENCE) { 720 720 struct fwnode_handle *ref_fwnode; 721 721 722 - ret = acpi_bus_get_device(element->reference.handle, 723 - &device); 724 - if (ret) 722 + device = acpi_fetch_acpi_dev(element->reference.handle); 723 + if (!device) 725 724 return -EINVAL; 726 725 727 726 nargs = 0;
+2 -2
drivers/acpi/resource.c
··· 791 791 { 792 792 struct resource *res = context; 793 793 struct acpi_device **consumer = (struct acpi_device **) ret; 794 - struct acpi_device *adev; 794 + struct acpi_device *adev = acpi_fetch_acpi_dev(handle); 795 795 796 - if (acpi_bus_get_device(handle, &adev)) 796 + if (!adev) 797 797 return AE_OK; 798 798 799 799 if (acpi_dev_consumes_res(adev, res)) {
+39 -18
drivers/acpi/scan.c
··· 135 135 static acpi_status acpi_bus_offline(acpi_handle handle, u32 lvl, void *data, 136 136 void **ret_p) 137 137 { 138 - struct acpi_device *device = NULL; 138 + struct acpi_device *device = acpi_fetch_acpi_dev(handle); 139 139 struct acpi_device_physical_node *pn; 140 140 bool second_pass = (bool)data; 141 141 acpi_status status = AE_OK; 142 142 143 - if (acpi_bus_get_device(handle, &device)) 143 + if (!device) 144 144 return AE_OK; 145 145 146 146 if (device->handler && !device->handler->hotplug.enabled) { ··· 180 180 static acpi_status acpi_bus_online(acpi_handle handle, u32 lvl, void *data, 181 181 void **ret_p) 182 182 { 183 - struct acpi_device *device = NULL; 183 + struct acpi_device *device = acpi_fetch_acpi_dev(handle); 184 184 struct acpi_device_physical_node *pn; 185 185 186 - if (acpi_bus_get_device(handle, &device)) 186 + if (!device) 187 187 return AE_OK; 188 188 189 189 mutex_lock(&device->physical_node_lock); ··· 599 599 } 600 600 EXPORT_SYMBOL(acpi_bus_get_device); 601 601 602 + /** 603 + * acpi_fetch_acpi_dev - Retrieve ACPI device object. 604 + * @handle: ACPI handle associated with the requested ACPI device object. 605 + * 606 + * Return a pointer to the ACPI device object associated with @handle, if 607 + * present, or NULL otherwise. 608 + */ 609 + struct acpi_device *acpi_fetch_acpi_dev(acpi_handle handle) 610 + { 611 + return handle_to_device(handle, NULL); 612 + } 613 + EXPORT_SYMBOL_GPL(acpi_fetch_acpi_dev); 614 + 602 615 static void get_acpi_device(void *dev) 603 616 { 604 617 acpi_dev_get(dev); ··· 818 805 819 806 static struct acpi_device *acpi_bus_get_parent(acpi_handle handle) 820 807 { 821 - struct acpi_device *device = NULL; 808 + struct acpi_device *device; 822 809 acpi_status status; 823 810 824 811 /* ··· 833 820 status = acpi_get_parent(handle, &handle); 834 821 if (ACPI_FAILURE(status)) 835 822 return status == AE_NULL_ENTRY ? 
NULL : acpi_root; 836 - } while (acpi_bus_get_device(handle, &device)); 823 + 824 + device = acpi_fetch_acpi_dev(handle); 825 + } while (!device); 837 826 return device; 838 827 } 839 828 ··· 1361 1346 if (info->valid & ACPI_VALID_HID) { 1362 1347 acpi_add_id(pnp, info->hardware_id.string); 1363 1348 pnp->type.platform_id = 1; 1364 - } 1365 - if (info->valid & ACPI_VALID_CID) { 1366 - cid_list = &info->compatible_id_list; 1367 - for (i = 0; i < cid_list->count; i++) 1368 - acpi_add_id(pnp, cid_list->ids[i].string); 1349 + if (info->valid & ACPI_VALID_CID) { 1350 + cid_list = &info->compatible_id_list; 1351 + for (i = 0; i < cid_list->count; i++) 1352 + acpi_add_id(pnp, cid_list->ids[i].string); 1353 + } 1369 1354 } 1370 1355 if (info->valid & ACPI_VALID_ADR) { 1371 1356 pnp->bus_address = info->address; ··· 1716 1701 { 1717 1702 struct list_head resource_list; 1718 1703 bool is_serial_bus_slave = false; 1704 + static const struct acpi_device_id ignore_serial_bus_ids[] = { 1719 1705 /* 1720 1706 * These devices have multiple I2cSerialBus resources and an i2c-client 1721 1707 * must be instantiated for each, each with its own i2c_device_id. ··· 1725 1709 * drivers/platform/x86/i2c-multi-instantiate.c driver, which knows 1726 1710 * which i2c_device_id to use for each resource. 1727 1711 */ 1728 - static const struct acpi_device_id i2c_multi_instantiate_ids[] = { 1729 1712 {"BSG1160", }, 1730 1713 {"BSG2150", }, 1731 1714 {"INT33FE", }, 1732 1715 {"INT3515", }, 1716 + /* 1717 + * HIDs of device with an UartSerialBusV2 resource for which userspace 1718 + * expects a regular tty cdev to be created (instead of the in kernel 1719 + * serdev) and which have a kernel driver which expects a platform_dev 1720 + * such as the rfkill-gpio driver. 
1721 + */ 1722 + {"BCM4752", }, 1723 + {"LNV4752", }, 1733 1724 {} 1734 1725 }; 1735 1726 ··· 1750 1727 fwnode_property_present(&device->fwnode, "baud"))) 1751 1728 return true; 1752 1729 1753 - /* Instantiate a pdev for the i2c-multi-instantiate drv to bind to */ 1754 - if (!acpi_match_device_ids(device, i2c_multi_instantiate_ids)) 1730 + if (!acpi_match_device_ids(device, ignore_serial_bus_ids)) 1755 1731 return false; 1756 1732 1757 1733 INIT_LIST_HEAD(&resource_list); ··· 2037 2015 static acpi_status acpi_bus_check_add(acpi_handle handle, bool check_dep, 2038 2016 struct acpi_device **adev_p) 2039 2017 { 2040 - struct acpi_device *device = NULL; 2018 + struct acpi_device *device = acpi_fetch_acpi_dev(handle); 2041 2019 acpi_object_type acpi_type; 2042 2020 int type; 2043 2021 2044 - acpi_bus_get_device(handle, &device); 2045 2022 if (device) 2046 2023 goto out; 2047 2024 ··· 2598 2577 if (result) 2599 2578 goto out; 2600 2579 2601 - result = acpi_bus_get_device(ACPI_ROOT_OBJECT, &acpi_root); 2602 - if (result) 2580 + acpi_root = acpi_fetch_acpi_dev(ACPI_ROOT_OBJECT); 2581 + if (!acpi_root) 2603 2582 goto out; 2604 2583 2605 2584 /* Fixed feature devices do not exist on HW-reduced platform */
-5
drivers/acpi/sleep.c
··· 73 73 acpi_set_waking_vector(acpi_wakeup_address); 74 74 75 75 } 76 - ACPI_FLUSH_CPU_CACHE(); 77 76 #endif 78 77 pr_info("Preparing to enter system sleep state S%d\n", acpi_state); 79 78 acpi_enable_wakeup_devices(acpi_state); ··· 565 566 u32 acpi_state = acpi_target_sleep_state; 566 567 int error; 567 568 568 - ACPI_FLUSH_CPU_CACHE(); 569 - 570 569 trace_suspend_resume(TPS("acpi_suspend"), acpi_state, true); 571 570 switch (acpi_state) { 572 571 case ACPI_STATE_S1: ··· 899 902 static int acpi_hibernation_enter(void) 900 903 { 901 904 acpi_status status = AE_OK; 902 - 903 - ACPI_FLUSH_CPU_CACHE(); 904 905 905 906 /* This shouldn't return. If it returns, we have a problem */ 906 907 status = acpi_enter_sleep_state(ACPI_STATE_S4);
+2 -1
drivers/acpi/sysfs.c
··· 939 939 &hotplug_enabled_attr.attr, 940 940 NULL 941 941 }; 942 + ATTRIBUTE_GROUPS(hotplug_profile); 942 943 943 944 static struct kobj_type acpi_hotplug_profile_ktype = { 944 945 .sysfs_ops = &kobj_sysfs_ops, 945 - .default_attrs = hotplug_profile_attrs, 946 + .default_groups = hotplug_profile_groups, 946 947 }; 947 948 948 949 void acpi_sysfs_add_hotplug_profile(struct acpi_hotplug_profile *hotplug,
+3 -3
drivers/acpi/tables.c
··· 500 500 ACPI_SIG_WDDT, ACPI_SIG_WDRT, ACPI_SIG_DSDT, ACPI_SIG_FADT, 501 501 ACPI_SIG_PSDT, ACPI_SIG_RSDT, ACPI_SIG_XSDT, ACPI_SIG_SSDT, 502 502 ACPI_SIG_IORT, ACPI_SIG_NFIT, ACPI_SIG_HMAT, ACPI_SIG_PPTT, 503 - ACPI_SIG_NHLT }; 503 + ACPI_SIG_NHLT, ACPI_SIG_AEST }; 504 504 505 505 #define ACPI_HEADER_SIZE sizeof(struct acpi_table_header) 506 506 ··· 723 723 /* 724 724 * Mark the table to avoid being used in 725 725 * acpi_table_initrd_override(). Though this is not possible 726 - * because override is disabled in acpi_install_table(). 726 + * because override is disabled in acpi_install_physical_table(). 727 727 */ 728 728 if (test_and_set_bit(table_index, acpi_initrd_installed)) { 729 729 acpi_os_unmap_memory(table, ACPI_HEADER_SIZE); ··· 734 734 table->signature, table->oem_id, 735 735 table->oem_table_id); 736 736 acpi_os_unmap_memory(table, ACPI_HEADER_SIZE); 737 - acpi_install_table(acpi_tables_addr + table_offset, TRUE); 737 + acpi_install_physical_table(acpi_tables_addr + table_offset); 738 738 next_table: 739 739 table_offset += table_length; 740 740 table_index++;
+4 -7
drivers/acpi/thermal.c
··· 697 697 struct acpi_device *device = cdev->devdata; 698 698 struct acpi_thermal *tz = thermal->devdata; 699 699 struct acpi_device *dev; 700 - acpi_status status; 701 700 acpi_handle handle; 702 701 int i; 703 702 int j; ··· 714 715 for (i = 0; i < tz->trips.passive.devices.count; 715 716 i++) { 716 717 handle = tz->trips.passive.devices.handles[i]; 717 - status = acpi_bus_get_device(handle, &dev); 718 - if (ACPI_FAILURE(status) || dev != device) 718 + dev = acpi_fetch_acpi_dev(handle); 719 + if (dev != device) 719 720 continue; 720 721 if (bind) 721 722 result = ··· 740 741 j < tz->trips.active[i].devices.count; 741 742 j++) { 742 743 handle = tz->trips.active[i].devices.handles[j]; 743 - status = acpi_bus_get_device(handle, &dev); 744 - if (ACPI_FAILURE(status) || dev != device) 744 + dev = acpi_fetch_acpi_dev(handle); 745 + if (dev != device) 745 746 continue; 746 747 if (bind) 747 748 result = thermal_zone_bind_cooling_device ··· 1097 1098 return -EINVAL; 1098 1099 1099 1100 for (i = 0; i < ACPI_THERMAL_MAX_ACTIVE; i++) { 1100 - if (!(&tz->trips.active[i])) 1101 - break; 1102 1101 if (!tz->trips.active[i].flags.valid) 1103 1102 break; 1104 1103 tz->trips.active[i].flags.enabled = 1;
+2 -4
drivers/acpi/video_detect.c
··· 59 59 static acpi_status 60 60 find_video(acpi_handle handle, u32 lvl, void *context, void **rv) 61 61 { 62 + struct acpi_device *acpi_dev = acpi_fetch_acpi_dev(handle); 62 63 long *cap = context; 63 64 struct pci_dev *dev; 64 - struct acpi_device *acpi_dev; 65 65 66 66 static const struct acpi_device_id video_ids[] = { 67 67 {ACPI_VIDEO_HID, 0}, 68 68 {"", 0}, 69 69 }; 70 - if (acpi_bus_get_device(handle, &acpi_dev)) 71 - return AE_OK; 72 70 73 - if (!acpi_match_device_ids(acpi_dev, video_ids)) { 71 + if (acpi_dev && !acpi_match_device_ids(acpi_dev, video_ids)) { 74 72 dev = acpi_get_pci_dev(handle); 75 73 if (!dev) 76 74 return AE_OK;
+2 -2
drivers/acpi/x86/s2idle.c
··· 293 293 294 294 for (i = 0; i < lpi_constraints_table_size; ++i) { 295 295 acpi_handle handle = lpi_constraints_table[i].handle; 296 - struct acpi_device *adev; 296 + struct acpi_device *adev = acpi_fetch_acpi_dev(handle); 297 297 298 - if (!handle || acpi_bus_get_device(handle, &adev)) 298 + if (!adev) 299 299 continue; 300 300 301 301 acpi_handle_debug(handle,
+268 -37
drivers/acpi/x86/utils.c
··· 8 8 * Copyright (C) 2013-2015 Intel Corporation. All rights reserved. 9 9 */ 10 10 11 + #define pr_fmt(fmt) "ACPI: " fmt 12 + 11 13 #include <linux/acpi.h> 12 14 #include <linux/dmi.h> 15 + #include <linux/platform_device.h> 13 16 #include <asm/cpu_device_id.h> 14 17 #include <asm/intel-family.h> 15 18 #include "../internal.h" ··· 25 22 * Some BIOS-es (temporarily) hide specific APCI devices to work around Windows 26 23 * driver bugs. We use DMI matching to match known cases of this. 27 24 * 28 - * We work around this by always reporting ACPI_STA_DEFAULT for these 29 - * devices. Note this MUST only be done for devices where this is safe. 25 + * Likewise sometimes some not-actually present devices are sometimes 26 + * reported as present, which may cause issues. 30 27 * 31 - * This forcing of devices to be present is limited to specific CPU (SoC) 32 - * models both to avoid potentially causing trouble on other models and 33 - * because some HIDs are re-used on different SoCs for completely 34 - * different devices. 28 + * We work around this by using the below quirk list to override the status 29 + * reported by the _STA method with a fixed value (ACPI_STA_DEFAULT or 0). 30 + * Note this MUST only be done for devices where this is safe. 31 + * 32 + * This status overriding is limited to specific CPU (SoC) models both to 33 + * avoid potentially causing trouble on other models and because some HIDs 34 + * are re-used on different SoCs for completely different devices. 35 35 */ 36 - struct always_present_id { 36 + struct override_status_id { 37 37 struct acpi_device_id hid[2]; 38 38 struct x86_cpu_id cpu_ids[2]; 39 39 struct dmi_system_id dmi_ids[2]; /* Optional */ 40 40 const char *uid; 41 + const char *path; 42 + unsigned long long status; 41 43 }; 42 44 43 - #define X86_MATCH(model) X86_MATCH_INTEL_FAM6_MODEL(model, NULL) 44 - 45 - #define ENTRY(hid, uid, cpu_models, dmi...) { \ 45 + #define ENTRY(status, hid, uid, path, cpu_model, dmi...) 
{ \ 46 46 { { hid, }, {} }, \ 47 - { cpu_models, {} }, \ 47 + { X86_MATCH_INTEL_FAM6_MODEL(cpu_model, NULL), {} }, \ 48 48 { { .matches = dmi }, {} }, \ 49 49 uid, \ 50 + path, \ 51 + status, \ 50 52 } 51 53 52 - static const struct always_present_id always_present_ids[] = { 54 + #define PRESENT_ENTRY_HID(hid, uid, cpu_model, dmi...) \ 55 + ENTRY(ACPI_STA_DEFAULT, hid, uid, NULL, cpu_model, dmi) 56 + 57 + #define NOT_PRESENT_ENTRY_HID(hid, uid, cpu_model, dmi...) \ 58 + ENTRY(0, hid, uid, NULL, cpu_model, dmi) 59 + 60 + #define PRESENT_ENTRY_PATH(path, cpu_model, dmi...) \ 61 + ENTRY(ACPI_STA_DEFAULT, "", NULL, path, cpu_model, dmi) 62 + 63 + #define NOT_PRESENT_ENTRY_PATH(path, cpu_model, dmi...) \ 64 + ENTRY(0, "", NULL, path, cpu_model, dmi) 65 + 66 + static const struct override_status_id override_status_ids[] = { 53 67 /* 54 68 * Bay / Cherry Trail PWM directly poked by GPU driver in win10, 55 69 * but Linux uses a separate PWM driver, harmless if not used. 56 70 */ 57 - ENTRY("80860F09", "1", X86_MATCH(ATOM_SILVERMONT), {}), 58 - ENTRY("80862288", "1", X86_MATCH(ATOM_AIRMONT), {}), 71 + PRESENT_ENTRY_HID("80860F09", "1", ATOM_SILVERMONT, {}), 72 + PRESENT_ENTRY_HID("80862288", "1", ATOM_AIRMONT, {}), 59 73 60 - /* Lenovo Yoga Book uses PWM2 for keyboard backlight control */ 61 - ENTRY("80862289", "2", X86_MATCH(ATOM_AIRMONT), { 62 - DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X9"), 63 - }), 74 + /* The Xiaomi Mi Pad 2 uses PWM2 for touchkeys backlight control */ 75 + PRESENT_ENTRY_HID("80862289", "2", ATOM_AIRMONT, { 76 + DMI_MATCH(DMI_SYS_VENDOR, "Xiaomi Inc"), 77 + DMI_MATCH(DMI_PRODUCT_NAME, "Mipad2"), 78 + }), 79 + 64 80 /* 65 81 * The INT0002 device is necessary to clear wakeup interrupt sources 66 82 * on Cherry Trail devices, without it we get nobody cared IRQ msgs. 
67 83 */ 68 - ENTRY("INT0002", "1", X86_MATCH(ATOM_AIRMONT), {}), 84 + PRESENT_ENTRY_HID("INT0002", "1", ATOM_AIRMONT, {}), 69 85 /* 70 86 * On the Dell Venue 11 Pro 7130 and 7139, the DSDT hides 71 87 * the touchscreen ACPI device until a certain time 72 88 * after _SB.PCI0.GFX0.LCD.LCD1._ON gets called has passed 73 89 * *and* _STA has been called at least 3 times since. 74 90 */ 75 - ENTRY("SYNA7500", "1", X86_MATCH(HASWELL_L), { 91 + PRESENT_ENTRY_HID("SYNA7500", "1", HASWELL_L, { 76 92 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 77 93 DMI_MATCH(DMI_PRODUCT_NAME, "Venue 11 Pro 7130"), 78 94 }), 79 - ENTRY("SYNA7500", "1", X86_MATCH(HASWELL_L), { 95 + PRESENT_ENTRY_HID("SYNA7500", "1", HASWELL_L, { 80 96 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 81 97 DMI_MATCH(DMI_PRODUCT_NAME, "Venue 11 Pro 7139"), 82 98 }), ··· 103 81 /* 104 82 * The GPD win BIOS dated 20170221 has disabled the accelerometer, the 105 83 * drivers sometimes cause crashes under Windows and this is how the 106 - * manufacturer has solved this :| Note that the the DMI data is less 107 - * generic then it seems, a board_vendor of "AMI Corporation" is quite 108 - * rare and a board_name of "Default String" also is rare. 84 + * manufacturer has solved this :| The DMI match may not seem unique, 85 + * but it is. In the 67000+ DMI decode dumps from linux-hardware.org 86 + * only 116 have board_vendor set to "AMI Corporation" and of those 116 87 + * only the GPD win and pocket entries' board_name is "Default string". 109 88 * 110 89 * Unfortunately the GPD pocket also uses these strings and its BIOS 111 90 * was copy-pasted from the GPD win, so it has a disabled KIOX000A 112 91 * node which we should not enable, thus we also check the BIOS date. 
113 92 */ 114 - ENTRY("KIOX000A", "1", X86_MATCH(ATOM_AIRMONT), { 93 + PRESENT_ENTRY_HID("KIOX000A", "1", ATOM_AIRMONT, { 115 94 DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"), 116 95 DMI_MATCH(DMI_BOARD_NAME, "Default string"), 117 96 DMI_MATCH(DMI_PRODUCT_NAME, "Default string"), 118 97 DMI_MATCH(DMI_BIOS_DATE, "02/21/2017") 119 98 }), 120 - ENTRY("KIOX000A", "1", X86_MATCH(ATOM_AIRMONT), { 99 + PRESENT_ENTRY_HID("KIOX000A", "1", ATOM_AIRMONT, { 121 100 DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"), 122 101 DMI_MATCH(DMI_BOARD_NAME, "Default string"), 123 102 DMI_MATCH(DMI_PRODUCT_NAME, "Default string"), 124 103 DMI_MATCH(DMI_BIOS_DATE, "03/20/2017") 125 104 }), 126 - ENTRY("KIOX000A", "1", X86_MATCH(ATOM_AIRMONT), { 105 + PRESENT_ENTRY_HID("KIOX000A", "1", ATOM_AIRMONT, { 127 106 DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"), 128 107 DMI_MATCH(DMI_BOARD_NAME, "Default string"), 129 108 DMI_MATCH(DMI_PRODUCT_NAME, "Default string"), 130 109 DMI_MATCH(DMI_BIOS_DATE, "05/25/2017") 131 110 }), 111 + 112 + /* 113 + * The GPD win/pocket have a PCI wifi card, but its DSDT has the SDIO 114 + * mmc controller enabled and that has a child-device which _PS3 115 + * method sets a GPIO causing the PCI wifi card to turn off. 116 + * See above remark about uniqueness of the DMI match. 
117 + */ 118 + NOT_PRESENT_ENTRY_PATH("\\_SB_.PCI0.SDHB.BRC1", ATOM_AIRMONT, { 119 + DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"), 120 + DMI_EXACT_MATCH(DMI_BOARD_NAME, "Default string"), 121 + DMI_EXACT_MATCH(DMI_BOARD_SERIAL, "Default string"), 122 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Default string"), 123 + }), 132 124 }; 133 125 134 - bool acpi_device_always_present(struct acpi_device *adev) 126 + bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *status) 135 127 { 136 128 bool ret = false; 137 129 unsigned int i; 138 130 139 - for (i = 0; i < ARRAY_SIZE(always_present_ids); i++) { 140 - if (acpi_match_device_ids(adev, always_present_ids[i].hid)) 131 + for (i = 0; i < ARRAY_SIZE(override_status_ids); i++) { 132 + if (!x86_match_cpu(override_status_ids[i].cpu_ids)) 141 133 continue; 142 134 143 - if (!adev->pnp.unique_id || 144 - strcmp(adev->pnp.unique_id, always_present_ids[i].uid)) 135 + if (override_status_ids[i].dmi_ids[0].matches[0].slot && 136 + !dmi_check_system(override_status_ids[i].dmi_ids)) 145 137 continue; 146 138 147 - if (!x86_match_cpu(always_present_ids[i].cpu_ids)) 148 - continue; 139 + if (override_status_ids[i].path) { 140 + struct acpi_buffer path = { ACPI_ALLOCATE_BUFFER, NULL }; 141 + bool match; 149 142 150 - if (always_present_ids[i].dmi_ids[0].matches[0].slot && 151 - !dmi_check_system(always_present_ids[i].dmi_ids)) 152 - continue; 143 + if (acpi_get_name(adev->handle, ACPI_FULL_PATHNAME, &path)) 144 + continue; 153 145 146 + match = strcmp((char *)path.pointer, override_status_ids[i].path) == 0; 147 + kfree(path.pointer); 148 + 149 + if (!match) 150 + continue; 151 + } else { 152 + if (acpi_match_device_ids(adev, override_status_ids[i].hid)) 153 + continue; 154 + 155 + if (!adev->pnp.unique_id || 156 + strcmp(adev->pnp.unique_id, override_status_ids[i].uid)) 157 + continue; 158 + } 159 + 160 + *status = override_status_ids[i].status; 154 161 ret = true; 155 162 break; 156 163 } ··· 211 160 { 212 
 	return x86_match_cpu(storage_d3_cpu_ids);
 }
+
+/*
+ * x86 ACPI boards which ship with only Android as their factory image usually
+ * declare a whole bunch of bogus I2C devices in their ACPI tables and sometimes
+ * there are issues with serdev devices on these boards too, e.g. the resource
+ * points to the wrong serdev_controller.
+ *
+ * Instantiating I2C / serdev devs for these bogus devs causes various issues,
+ * e.g. GPIO/IRQ resource conflicts because sometimes drivers do bind to them.
+ * The Android x86 kernel fork shipped on these devices has some special code
+ * to remove the bogus I2C clients (and AFAICT serdevs are ignored completely).
+ *
+ * The acpi_quirk_skip_*_enumeration() functions below are used by the I2C or
+ * serdev code to skip instantiating any I2C or serdev devs on broken boards.
+ *
+ * In case of I2C an exception is made for HIDs on the i2c_acpi_known_good_ids
+ * list. These are known to always be correct (and in case of the audio-codecs
+ * the drivers heavily rely on the codec being enumerated through ACPI).
+ *
+ * Note these boards typically do actually have I2C and serdev devices,
+ * just different ones than the ones described in their DSDT. The devices
+ * which are actually present are manually instantiated by the
+ * drivers/platform/x86/x86-android-tablets.c kernel module.
+ */
+#define ACPI_QUIRK_SKIP_I2C_CLIENTS		BIT(0)
+#define ACPI_QUIRK_UART1_TTY_UART2_SKIP		BIT(1)
+#define ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY	BIT(2)
+#define ACPI_QUIRK_USE_ACPI_AC_AND_BATTERY	BIT(3)
+
+static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+	/*
+	 * 1. Devices with only the skip / don't-skip AC and battery quirks,
+	 *    sorted alphabetically.
+	 */
+	{
+		/* ECS EF20EA, AXP288 PMIC but uses separate fuel-gauge */
+		.matches = {
+			DMI_MATCH(DMI_PRODUCT_NAME, "EF20EA"),
+		},
+		.driver_data = (void *)ACPI_QUIRK_USE_ACPI_AC_AND_BATTERY
+	},
+	{
+		/* Lenovo Ideapad Miix 320, AXP288 PMIC, separate fuel-gauge */
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "80XF"),
+			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo MIIX 320-10ICR"),
+		},
+		.driver_data = (void *)ACPI_QUIRK_USE_ACPI_AC_AND_BATTERY
+	},
+
+	/*
+	 * 2. Devices which also have the skip i2c/serdev quirks and which
+	 *    need the x86-android-tablets module to properly work.
+	 */
+#if IS_ENABLED(CONFIG_X86_ANDROID_TABLETS)
+	{
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "ME176C"),
+		},
+		.driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
+					ACPI_QUIRK_UART1_TTY_UART2_SKIP |
+					ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
+	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "TF103C"),
+		},
+		.driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
+					ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
+	},
+	{
+		/* Whitelabel (sold as various brands) TM800A550L */
+		.matches = {
+			DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+			DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
+			/* Above strings are too generic, also match on BIOS version */
+			DMI_MATCH(DMI_BIOS_VERSION, "ZY-8-BI-PX4S70VTR400-X423B-005-D"),
+		},
+		.driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
+					ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
+	},
+#endif
+	{}
+};
+
+#if IS_ENABLED(CONFIG_X86_ANDROID_TABLETS)
+static const struct acpi_device_id i2c_acpi_known_good_ids[] = {
+	{ "10EC5640", 0 }, /* RealTek ALC5640 audio codec */
+	{ "INT33F4", 0 },  /* X-Powers AXP288 PMIC */
+	{ "INT33FD", 0 },  /* Intel Crystal Cove PMIC */
+	{ "NPCE69A", 0 },  /* Asus Transformer keyboard dock */
+	{}
+};
+
+bool acpi_quirk_skip_i2c_client_enumeration(struct acpi_device *adev)
+{
+	const struct dmi_system_id *dmi_id;
+	long quirks;
+
+	dmi_id = dmi_first_match(acpi_quirk_skip_dmi_ids);
+	if (!dmi_id)
+		return false;
+
+	quirks = (unsigned long)dmi_id->driver_data;
+	if (!(quirks & ACPI_QUIRK_SKIP_I2C_CLIENTS))
+		return false;
+
+	return acpi_match_device_ids(adev, i2c_acpi_known_good_ids);
+}
+EXPORT_SYMBOL_GPL(acpi_quirk_skip_i2c_client_enumeration);
+
+int acpi_quirk_skip_serdev_enumeration(struct device *controller_parent, bool *skip)
+{
+	struct acpi_device *adev = ACPI_COMPANION(controller_parent);
+	const struct dmi_system_id *dmi_id;
+	long quirks = 0;
+
+	*skip = false;
+
+	/* !dev_is_platform() to not match on PNP enumerated debug UARTs */
+	if (!adev || !adev->pnp.unique_id || !dev_is_platform(controller_parent))
+		return 0;
+
+	dmi_id = dmi_first_match(acpi_quirk_skip_dmi_ids);
+	if (dmi_id)
+		quirks = (unsigned long)dmi_id->driver_data;
+
+	if (quirks & ACPI_QUIRK_UART1_TTY_UART2_SKIP) {
+		if (!strcmp(adev->pnp.unique_id, "1"))
+			return -ENODEV; /* Create tty cdev instead of serdev */
+
+		if (!strcmp(adev->pnp.unique_id, "2"))
+			*skip = true;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(acpi_quirk_skip_serdev_enumeration);
+#endif
+
+/* Lists of PMIC ACPI HIDs with an (often better) native charger driver */
+static const struct {
+	const char *hid;
+	int hrv;
+} acpi_skip_ac_and_battery_pmic_ids[] = {
+	{ "INT33F4", -1 }, /* X-Powers AXP288 PMIC */
+	{ "INT34D3",  3 }, /* Intel Cherrytrail Whiskey Cove PMIC */
+};
+
+bool acpi_quirk_skip_acpi_ac_and_battery(void)
+{
+	const struct dmi_system_id *dmi_id;
+	long quirks = 0;
+	int i;
+
+	dmi_id = dmi_first_match(acpi_quirk_skip_dmi_ids);
+	if (dmi_id)
+		quirks = (unsigned long)dmi_id->driver_data;
+
+	if (quirks & ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY)
+		return true;
+
+	if (quirks & ACPI_QUIRK_USE_ACPI_AC_AND_BATTERY)
+		return false;
+
+	for (i = 0; i < ARRAY_SIZE(acpi_skip_ac_and_battery_pmic_ids); i++) {
+		if (acpi_dev_present(acpi_skip_ac_and_battery_pmic_ids[i].hid, "1",
+				     acpi_skip_ac_and_battery_pmic_ids[i].hrv)) {
+			pr_info_once("found native %s PMIC, skipping ACPI AC and battery devices\n",
+				     acpi_skip_ac_and_battery_pmic_ids[i].hid);
+			return true;
+		}
+	}
+
+	return false;
+}
+EXPORT_SYMBOL_GPL(acpi_quirk_skip_acpi_ac_and_battery);
drivers/i2c/i2c-core-acpi.c (+7)
···
				     struct acpi_device *adev,
				     struct i2c_board_info *info)
 {
+	/*
+	 * Skip registration on boards where the ACPI tables are
+	 * known to contain bogus I2C devices.
+	 */
+	if (acpi_quirk_skip_i2c_client_enumeration(adev))
+		return;
+
 	adev->power.flags.ignore_parent = true;
 	acpi_device_set_enumerated(adev);
drivers/mmc/host/sdhci-acpi.c (+6 -72)
···
 #include <linux/mmc/slot-gpio.h>

 #ifdef CONFIG_X86
-#include <asm/cpu_device_id.h>
-#include <asm/intel-family.h>
+#include <linux/platform_data/x86/soc.h>
 #include <asm/iosf_mbi.h>
-#include <linux/pci.h>
 #endif

 #include "sdhci.h"
···

 #ifdef CONFIG_X86

-static bool sdhci_acpi_byt(void)
-{
-	static const struct x86_cpu_id byt[] = {
-		X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT, NULL),
-		{}
-	};
-
-	return x86_match_cpu(byt);
-}
-
-static bool sdhci_acpi_cht(void)
-{
-	static const struct x86_cpu_id cht[] = {
-		X86_MATCH_INTEL_FAM6_MODEL(ATOM_AIRMONT, NULL),
-		{}
-	};
-
-	return x86_match_cpu(cht);
-}
-
 #define BYT_IOSF_SCCEP			0x63
 #define BYT_IOSF_OCP_NETCTRL0		0x1078
 #define BYT_IOSF_OCP_TIMEOUT_BASE	GENMASK(10, 8)
···
 {
 	u32 val = 0;

-	if (!sdhci_acpi_byt())
+	if (!soc_intel_is_byt())
 		return;

 	if (iosf_mbi_read(BYT_IOSF_SCCEP, MBI_CR_READ, BYT_IOSF_OCP_NETCTRL0,
···

 static bool sdhci_acpi_byt_defer(struct device *dev)
 {
-	if (!sdhci_acpi_byt())
+	if (!soc_intel_is_byt())
 		return false;

 	if (!iosf_mbi_available())
···
 	return false;
 }

-static bool sdhci_acpi_cht_pci_wifi(unsigned int vendor, unsigned int device,
-				    unsigned int slot, unsigned int parent_slot)
-{
-	struct pci_dev *dev, *parent, *from = NULL;
-
-	while (1) {
-		dev = pci_get_device(vendor, device, from);
-		pci_dev_put(from);
-		if (!dev)
-			break;
-		parent = pci_upstream_bridge(dev);
-		if (ACPI_COMPANION(&dev->dev) && PCI_SLOT(dev->devfn) == slot &&
-		    parent && PCI_SLOT(parent->devfn) == parent_slot &&
-		    !pci_upstream_bridge(parent)) {
-			pci_dev_put(dev);
-			return true;
-		}
-		from = dev;
-	}
-
-	return false;
-}
-
-/*
- * GPDwin uses PCI wifi which conflicts with SDIO's use of
- * acpi_device_fix_up_power() on child device nodes. Identifying GPDwin is
- * problematic, but since SDIO is only used for wifi, the presence of the PCI
- * wifi card in the expected slot with an ACPI companion node, is used to
- * indicate that acpi_device_fix_up_power() should be avoided.
- */
-static inline bool sdhci_acpi_no_fixup_child_power(struct acpi_device *adev)
-{
-	return sdhci_acpi_cht() &&
-	       acpi_dev_hid_uid_match(adev, "80860F14", "2") &&
-	       sdhci_acpi_cht_pci_wifi(0x14e4, 0x43ec, 0, 28);
-}
-
 #else

 static inline void sdhci_acpi_byt_setting(struct device *dev)
···
 }

 static inline bool sdhci_acpi_byt_defer(struct device *dev)
-{
-	return false;
-}
-
-static inline bool sdhci_acpi_no_fixup_child_power(struct acpi_device *adev)
 {
 	return false;
 }
···

 	/* Power on the SDHCI controller and its children */
 	acpi_device_fix_up_power(device);
-	if (!sdhci_acpi_no_fixup_child_power(device)) {
-		list_for_each_entry(child, &device->children, node)
-			if (child->status.present && child->status.enabled)
-				acpi_device_fix_up_power(child);
-	}
+	list_for_each_entry(child, &device->children, node)
+		if (child->status.present && child->status.enabled)
+			acpi_device_fix_up_power(child);

 	if (sdhci_acpi_byt_defer(dev))
 		return -EPROBE_DEFER;
drivers/tty/serdev/core.c (+14)
···
 static int acpi_serdev_register_devices(struct serdev_controller *ctrl)
 {
 	acpi_status status;
+	bool skip;
+	int ret;

 	if (!has_acpi_companion(ctrl->dev.parent))
 		return -ENODEV;
+
+	/*
+	 * Skip registration on boards where the ACPI tables are known to
+	 * contain buggy devices. Note serdev_controller_add() must still
+	 * succeed in this case, so that the proper serdev devices can be
+	 * added "manually" later.
+	 */
+	ret = acpi_quirk_skip_serdev_enumeration(ctrl->dev.parent, &skip);
+	if (ret)
+		return ret;
+	if (skip)
+		return 0;

 	status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
 				     SERDEV_ACPI_MAX_SCAN_DEPTH,
include/acpi/acpi_bus.h (+25 -2)
···
 */

 int acpi_bus_get_device(acpi_handle handle, struct acpi_device **device);
+struct acpi_device *acpi_fetch_acpi_dev(acpi_handle handle);
 acpi_status acpi_bus_get_status_handle(acpi_handle handle,
 				       unsigned long long *sta);
 int acpi_bus_get_status(struct acpi_device *device);
···
 int acpi_disable_wakeup_device_power(struct acpi_device *dev);

 #ifdef CONFIG_X86
-bool acpi_device_always_present(struct acpi_device *adev);
+bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *status);
+bool acpi_quirk_skip_acpi_ac_and_battery(void);
 #else
-static inline bool acpi_device_always_present(struct acpi_device *adev)
+static inline bool acpi_device_override_status(struct acpi_device *adev,
+					       unsigned long long *status)
 {
 	return false;
+}
+static inline bool acpi_quirk_skip_acpi_ac_and_battery(void)
+{
+	return false;
+}
+#endif
+
+#if IS_ENABLED(CONFIG_X86_ANDROID_TABLETS)
+bool acpi_quirk_skip_i2c_client_enumeration(struct acpi_device *adev);
+int acpi_quirk_skip_serdev_enumeration(struct device *controller_parent, bool *skip);
+#else
+static inline bool acpi_quirk_skip_i2c_client_enumeration(struct acpi_device *adev)
+{
+	return false;
+}
+static inline int
+acpi_quirk_skip_serdev_enumeration(struct device *controller_parent, bool *skip)
+{
+	*skip = false;
+	return 0;
 }
 #endif
include/acpi/acpi_numa.h (-1)
···
 #define __ACPI_NUMA_H

 #ifdef CONFIG_ACPI_NUMA
-#include <linux/kernel.h>
 #include <linux/numa.h>

 /* Proximity bitmap length */
include/acpi/acpixf.h (+5 -3)
···

 /* Current ACPICA subsystem version in YYYYMMDD format */

-#define ACPI_CA_VERSION			0x20210930
+#define ACPI_CA_VERSION			0x20211217

 #include <acpi/acconfig.h>
 #include <acpi/actypes.h>
···
 * ACPI table load/unload interfaces
 */
 ACPI_EXTERNAL_RETURN_STATUS(acpi_status ACPI_INIT_FUNCTION
-			    acpi_install_table(acpi_physical_address address,
-					       u8 physical))
+			    acpi_install_table(struct acpi_table_header *table))

+ACPI_EXTERNAL_RETURN_STATUS(acpi_status ACPI_INIT_FUNCTION
+			    acpi_install_physical_table(acpi_physical_address
+							address))
 ACPI_EXTERNAL_RETURN_STATUS(acpi_status
			    acpi_load_table(struct acpi_table_header *table,
					    u32 *table_idx))
include/acpi/actbl2.h (+84 -32)
···
 * file. Useful because they make it more difficult to inadvertently type in
 * the wrong signature.
 */
+#define ACPI_SIG_AGDI           "AGDI"	/* Arm Generic Diagnostic Dump and Reset Device Interface */
 #define ACPI_SIG_BDAT           "BDAT"	/* BIOS Data ACPI Table */
 #define ACPI_SIG_IORT           "IORT"	/* IO Remapping Table */
 #define ACPI_SIG_IVRS           "IVRS"	/* I/O Virtualization Reporting Structure */
···
 #define ACPI_SIG_SDEI           "SDEI"	/* Software Delegated Exception Interface Table */
 #define ACPI_SIG_SDEV           "SDEV"	/* Secure Devices table */
 #define ACPI_SIG_SVKL           "SVKL"	/* Storage Volume Key Location Table */
+#define ACPI_SIG_TDEL           "TDEL"	/* TD Event Log Table */

 /*
  * All tables must be byte-packed to match the ACPI specification, since
···
 /* 2R: Processor Generic Resource Substructure */

 typedef struct acpi_aest_processor_generic {
-	u8 *resource;
+	u32 resource;

 } acpi_aest_processor_generic;
···
 #define ACPI_AEST_NODE_FAULT_HANDLING       0
 #define ACPI_AEST_NODE_ERROR_RECOVERY       1
 #define ACPI_AEST_XRUPT_RESERVED            2	/* 2 and above are reserved */
+
+/*******************************************************************************
+ * AGDI - Arm Generic Diagnostic Dump and Reset Device Interface
+ *
+ * Conforms to "ACPI for Arm Components 1.1, Platform Design Document"
+ * ARM DEN0093 v1.1
+ *
+ ******************************************************************************/
+struct acpi_table_agdi {
+	struct acpi_table_header header;	/* Common ACPI table header */
+	u8 flags;
+	u8 reserved[3];
+	u32 sdei_event;
+	u32 gsiv;
+};
+
+/* Mask for Flags field above */
+
+#define ACPI_AGDI_SIGNALING_MODE (1)

 /*******************************************************************************
 *
···

 /* Values for Config Type above */

-#define ACPI_NHLT_TYPE_MIC_ARRAY            0x01
-#define ACPI_NHLT_TYPE_GENERIC              0x00
-
-/* Mask for Extension field of array_type */
-
-#define ACPI_NHLT_ARRAY_TYPE_MASK           0x10
+#define ACPI_NHLT_CONFIG_TYPE_GENERIC           0x00
+#define ACPI_NHLT_CONFIG_TYPE_MIC_ARRAY         0x01
+#define ACPI_NHLT_CONFIG_TYPE_RENDER_FEEDBACK   0x03
+#define ACPI_NHLT_CONFIG_TYPE_RESERVED          0x04	/* 4 and above are reserved */

 struct acpi_nhlt_device_specific_config_b {
 	u32 capabilities_size;
 };

 struct acpi_nhlt_device_specific_config_c {
+	u32 capabilities_size;
+	u8 virtual_slot;
+};
+
+struct acpi_nhlt_render_device_specific_config {
 	u32 capabilities_size;
 	u8 virtual_slot;
 };
···

 /* Values for array_type_ext above */

-#define SMALL_LINEAR_2ELEMENT               0x0A
-#define BIG_LINEAR_2ELEMENT                 0x0B
-#define FIRST_GEOMETRY_LINEAR_4ELEMENT      0x0C
-#define PLANAR_LSHAPED_4ELEMENT             0x0D
-#define SECOND_GEOMETRY_LINEAR_4ELEMENT     0x0E
-#define VENDOR_DEFINED                      0x0F
-#define ARRAY_TYPE_MASK                     0x0F
-#define ARRAY_TYPE_EXT_MASK                 0x10
+#define ACPI_NHLT_ARRAY_TYPE_RESERVED               0x09	// 9 and below are reserved
+#define ACPI_NHLT_SMALL_LINEAR_2ELEMENT             0x0A
+#define ACPI_NHLT_BIG_LINEAR_2ELEMENT               0x0B
+#define ACPI_NHLT_FIRST_GEOMETRY_LINEAR_4ELEMENT    0x0C
+#define ACPI_NHLT_PLANAR_LSHAPED_4ELEMENT           0x0D
+#define ACPI_NHLT_SECOND_GEOMETRY_LINEAR_4ELEMENT   0x0E
+#define ACPI_NHLT_VENDOR_DEFINED                    0x0F
+#define ACPI_NHLT_ARRAY_TYPE_MASK                   0x0F
+#define ACPI_NHLT_ARRAY_TYPE_EXT_MASK               0x10

-#define NO_EXTENSION                        0x0
-#define MIC_SNR_SENSITIVITY_EXT             0x1
+#define ACPI_NHLT_NO_EXTENSION                      0x0
+#define ACPI_NHLT_MIC_SNR_SENSITIVITY_EXT           (1<<4)
+
+struct acpi_nhlt_vendor_mic_count {
+	u8 microphone_count;
+};

 struct acpi_nhlt_vendor_mic_config {
 	u8 type;
···

 /* Values for Type field above */

-#define MIC_OMNIDIRECTIONAL                 0
-#define MIC_SUBCARDIOID                     1
-#define MIC_CARDIOID                        2
-#define MIC_SUPER_CARDIOID                  3
-#define MIC_HYPER_CARDIOID                  4
-#define MIC_8_SHAPED                        5
-#define MIC_VENDOR_DEFINED                  7
+#define ACPI_NHLT_MIC_OMNIDIRECTIONAL               0
+#define ACPI_NHLT_MIC_SUBCARDIOID                   1
+#define ACPI_NHLT_MIC_CARDIOID                      2
+#define ACPI_NHLT_MIC_SUPER_CARDIOID                3
+#define ACPI_NHLT_MIC_HYPER_CARDIOID                4
+#define ACPI_NHLT_MIC_8_SHAPED                      5
+#define ACPI_NHLT_MIC_RESERVED6                     6	// 6 is reserved
+#define ACPI_NHLT_MIC_VENDOR_DEFINED                7
+#define ACPI_NHLT_MIC_RESERVED                      8	// 8 and above are reserved

 /* Values for Panel field above */

-#define MIC_TOP                             0
-#define MIC_BOTTOM                          1
-#define MIC_LEFT                            2
-#define MIC_RIGHT                           3
-#define MIC_FRONT                           4
-#define MIC_REAR                            5
+#define ACPI_NHLT_MIC_POSITION_TOP                  0
+#define ACPI_NHLT_MIC_POSITION_BOTTOM               1
+#define ACPI_NHLT_MIC_POSITION_LEFT                 2
+#define ACPI_NHLT_MIC_POSITION_RIGHT                3
+#define ACPI_NHLT_MIC_POSITION_FRONT                4
+#define ACPI_NHLT_MIC_POSITION_BACK                 5
+#define ACPI_NHLT_MIC_POSITION_RESERVED             6	// 6 and above are reserved

 struct acpi_nhlt_vendor_mic_device_specific_config {
 	struct acpi_nhlt_mic_device_specific_config mic_array_device_config;
···
 	u32 sensitivity;
 };

+/* Render device with feedback */
+
 struct acpi_nhlt_render_feedback_device_specific_config {
-	struct acpi_nhlt_device_specific_config device_config;
 	u8 feedback_virtual_slot;		// render slot in case of capture
 	u16 feedback_channels;			// informative only
 	u16 feedback_valid_bits_per_sample;
···
 	u8 device_id[16];
 	u8 device_instance_id;
 	u8 device_port_id;
-	u8 filler[18];
+};
+
+struct acpi_nhlt_linux_specific_data_b {
+	u8 specific_data[18];
 };

 struct acpi_nhlt_table_terminator {
···
 enum acpi_svkl_format {
 	ACPI_SVKL_FORMAT_RAW_BINARY = 0,
 	ACPI_SVKL_FORMAT_RESERVED = 1	/* 1 and greater are reserved */
+};
+
+/*******************************************************************************
+ *
+ * TDEL - TD-Event Log
+ *        From: "Guest-Host-Communication Interface (GHCI) for Intel
+ *        Trust Domain Extensions (Intel TDX)".
+ *        September 2020
+ *
+ ******************************************************************************/
+
+struct acpi_table_tdel {
+	struct acpi_table_header header;	/* Common ACPI table header */
+	u32 reserved;
+	u64 log_area_minimum_length;
+	u64 log_area_start_address;
 };

 /* Reset to default packing */
include/acpi/actypes.h (+20 -3)
···
 #define ACPI_TO_POINTER(i)              ACPI_CAST_PTR (void, (acpi_size) (i))
 #define ACPI_TO_INTEGER(p)              ACPI_PTR_DIFF (p, (void *) 0)
 #define ACPI_OFFSET(d, f)               ACPI_PTR_DIFF (&(((d *) 0)->f), (void *) 0)
-#define ACPI_PHYSADDR_TO_PTR(i)         ACPI_TO_POINTER(i)
 #define ACPI_PTR_TO_PHYSADDR(i)         ACPI_TO_INTEGER(i)

 /* Optimizations for 4-character (32-bit) acpi_name manipulation */
···
 * Can be used with access_width of struct acpi_generic_address and access_size of
 * struct acpi_resource_generic_register.
 */
-#define ACPI_ACCESS_BIT_WIDTH(size)     (1 << ((size) + 2))
-#define ACPI_ACCESS_BYTE_WIDTH(size)    (1 << ((size) - 1))
+#define ACPI_ACCESS_BIT_SHIFT		2
+#define ACPI_ACCESS_BYTE_SHIFT		-1
+#define ACPI_ACCESS_BIT_MAX		(31 - ACPI_ACCESS_BIT_SHIFT)
+#define ACPI_ACCESS_BYTE_MAX		(31 - ACPI_ACCESS_BYTE_SHIFT)
+#define ACPI_ACCESS_BIT_DEFAULT		(8 - ACPI_ACCESS_BIT_SHIFT)
+#define ACPI_ACCESS_BYTE_DEFAULT	(8 - ACPI_ACCESS_BYTE_SHIFT)
+#define ACPI_ACCESS_BIT_WIDTH(size)	(1 << ((size) + ACPI_ACCESS_BIT_SHIFT))
+#define ACPI_ACCESS_BYTE_WIDTH(size)	(1 << ((size) + ACPI_ACCESS_BYTE_SHIFT))

 /*******************************************************************************
 *
···
 	u8 access_length;
 };

+/* Special Context data for PCC Opregion (ACPI 6.3) */
+
+struct acpi_pcc_info {
+	u8 subspace_id;
+	u16 length;
+	u8 *internal_buffer;
+};
+
 typedef
 acpi_status (*acpi_adr_space_setup) (acpi_handle region_handle,
				     u32 function,
···
 	acpi_physical_address address;
 	struct acpi_mem_mapping *cur_mm;
 	struct acpi_mem_mapping *first_mm;
+};
+
+struct acpi_data_table_space_context {
+	void *pointer;
 };

 /*
include/acpi/processor.h (+6 -1)
···
 #ifndef __ACPI_PROCESSOR_H
 #define __ACPI_PROCESSOR_H

-#include <linux/kernel.h>
 #include <linux/cpu.h>
 #include <linux/cpufreq.h>
 #include <linux/pm_qos.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/smp.h>
 #include <linux/thermal.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+
 #include <asm/acpi.h>

 #define ACPI_PROCESSOR_CLASS		"processor"
include/linux/acpi.h (+6)
···
 }
 #endif

+#ifdef CONFIG_ACPI_PCC
+void acpi_init_pcc(void);
+#else
+static inline void acpi_init_pcc(void) { }
+#endif
+
 #ifdef CONFIG_ACPI
 extern void acpi_device_notify(struct device *dev);
 extern void acpi_device_notify_remove(struct device *dev);