Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'acpi-4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI updates from Rafael Wysocki:
"These update the ACPICA code in the kernel to the 20180508 upstream
revision and make it support the RT patch, add CPPC v3 support to the
ACPI CPPC library, add a WDAT-based watchdog quirk to prevent clashes
with the RTC, add quirks to the ACPI AC and battery drivers, and
update the ACPI SoC drivers.

Specifics:

- Update the ACPICA code in the kernel to the 20180508 upstream
revision including:
* iASL -tc option enhancement (Bob Moore).
* Debugger improvements (Bob Moore).
* Support for tables larger than 1 MB in acpidump/acpixtract (Bob
Moore).
* Minor fixes and cleanups (Colin Ian King, Toomas Soome).

- Make the ACPICA code in the kernel support the RT patch (Sebastian
Andrzej Siewior, Steven Rostedt).

- Add a kmemleak annotation to the ACPICA code (Larry Finger).

- Add CPPC v3 support to the ACPI CPPC library and fix two issues
related to CPPC (Prashanth Prakash, Al Stone).

- Add an ACPI WDAT-based watchdog quirk to prefer iTCO_wdt on systems
where WDAT clashes with the RTC SRAM (Mika Westerberg).

- Add some quirks to the ACPI AC and battery drivers (Carlo Caione,
Hans de Goede).

- Update the ACPI SoC drivers for Intel (LPSS) and AMD (APD)
platforms (Akshu Agrawal, Hans de Goede).

- Fix up some assorted minor issues (Al Stone, Laszlo Toth, Mathieu
Malaterre)"

* tag 'acpi-4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (32 commits)
ACPICA: Mark acpi_ut_create_internal_object_dbg() memory allocations as non-leaks
ACPI / watchdog: Prefer iTCO_wdt always when WDAT table uses RTC SRAM
mailbox: PCC: erroneous error message when parsing ACPI PCCT
ACPICA: Update version to 20180508
ACPICA: acpidump/acpixtract: Support for tables larger than 1MB
ACPI: APD: Add AMD misc clock handler support
clk: x86: Add ST oscout platform clock
ACPICA: Update version to 20180427
ACPICA: Debugger: Removed direct support for EC address space in "Test Objects"
ACPICA: Debugger: Add Package support for "test objects" command
ACPICA: Improve error messages for the namespace root node
ACPICA: Fix potential infinite loop in acpi_rs_dump_byte_list
ACPICA: vsnprintf: this statement may fall through
ACPICA: Tables: Fix spelling mistake in comment
ACPICA: iASL: Enhance the -tc option (create AML hex file in C)
ACPI: Add missing prototype_for arch_post_acpi_subsys_init()
ACPI / tables: improve comments regarding acpi_parse_entries_array()
ACPICA: Convert acpi_gbl_hardware lock back to an acpi_raw_spinlock
ACPICA: provide abstraction for raw_spinlock_t
ACPI / CPPC: Fix invalid PCC channel status errors
...

+751 -194
+69
Documentation/acpi/cppc_sysfs.txt
···
+ 
+ Collaborative Processor Performance Control (CPPC)
+ 
+ CPPC defined in the ACPI spec describes a mechanism for the OS to manage the
+ performance of a logical processor on a contiguous and abstract performance
+ scale. CPPC exposes a set of registers to describe abstract performance scale,
+ to request performance levels and to measure per-cpu delivered performance.
+ 
+ For more details on CPPC please refer to the ACPI specification at:
+ 
+ http://uefi.org/specifications
+ 
+ Some of the CPPC registers are exposed via sysfs under:
+ 
+ /sys/devices/system/cpu/cpuX/acpi_cppc/
+ 
+ for each cpu X
+ 
+ --------------------------------------------------------------------------------
+ 
+ $ ls -lR /sys/devices/system/cpu/cpu0/acpi_cppc/
+ /sys/devices/system/cpu/cpu0/acpi_cppc/:
+ total 0
+ -r--r--r-- 1 root root 65536 Mar  5 19:38 feedback_ctrs
+ -r--r--r-- 1 root root 65536 Mar  5 19:38 highest_perf
+ -r--r--r-- 1 root root 65536 Mar  5 19:38 lowest_freq
+ -r--r--r-- 1 root root 65536 Mar  5 19:38 lowest_nonlinear_perf
+ -r--r--r-- 1 root root 65536 Mar  5 19:38 lowest_perf
+ -r--r--r-- 1 root root 65536 Mar  5 19:38 nominal_freq
+ -r--r--r-- 1 root root 65536 Mar  5 19:38 nominal_perf
+ -r--r--r-- 1 root root 65536 Mar  5 19:38 reference_perf
+ -r--r--r-- 1 root root 65536 Mar  5 19:38 wraparound_time
+ 
+ --------------------------------------------------------------------------------
+ 
+ * highest_perf : Highest performance of this processor (abstract scale).
+ * nominal_perf : Highest sustained performance of this processor (abstract scale).
+ * lowest_nonlinear_perf : Lowest performance of this processor with nonlinear
+   power savings (abstract scale).
+ * lowest_perf : Lowest performance of this processor (abstract scale).
+ 
+ * lowest_freq : CPU frequency corresponding to lowest_perf (in MHz).
+ * nominal_freq : CPU frequency corresponding to nominal_perf (in MHz).
+   The above frequencies should only be used to report processor performance in
+   frequency instead of abstract scale. These values should not be used for any
+   functional decisions.
+ 
+ * feedback_ctrs : Includes both Reference and delivered performance counter.
+   Reference counter ticks up proportional to processor's reference performance.
+   Delivered counter ticks up proportional to processor's delivered performance.
+ * wraparound_time: Minimum time for the feedback counters to wraparound (seconds).
+ * reference_perf : Performance level at which reference performance counter
+   accumulates (abstract scale).
+ 
+ --------------------------------------------------------------------------------
+ 
+ Computing Average Delivered Performance
+ 
+ Below describes the steps to compute the average performance delivered by taking
+ two different snapshots of feedback counters at time T1 and T2.
+ 
+ T1: Read feedback_ctrs as fbc_t1
+     Wait or run some workload
+ T2: Read feedback_ctrs as fbc_t2
+ 
+ delivered_counter_delta = fbc_t2[del] - fbc_t1[del]
+ reference_counter_delta = fbc_t2[ref] - fbc_t1[ref]
+ 
+ delivered_perf = (reference_perf x delivered_counter_delta) / reference_counter_delta
+37 -11
drivers/acpi/ac.c
···
  
  static int ac_sleep_before_get_state_ms;
+ static int ac_check_pmic = 1;
  
  static struct acpi_driver acpi_ac_driver = {
  	.name = "ac",
···
  	return NOTIFY_OK;
  }
  
- static int thinkpad_e530_quirk(const struct dmi_system_id *d)
+ static int __init thinkpad_e530_quirk(const struct dmi_system_id *d)
  {
  	ac_sleep_before_get_state_ms = 1000;
  	return 0;
  }
  
- static const struct dmi_system_id ac_dmi_table[] = {
+ static int __init ac_do_not_check_pmic_quirk(const struct dmi_system_id *d)
+ {
+ 	ac_check_pmic = 0;
+ 	return 0;
+ }
+ 
+ static const struct dmi_system_id ac_dmi_table[] __initconst = {
  	{
+ 		/* Thinkpad e530 */
  		.callback = thinkpad_e530_quirk,
- 		.ident = "thinkpad e530",
  		.matches = {
  			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
  			DMI_MATCH(DMI_PRODUCT_NAME, "32597CG"),
+ 		},
+ 	},
+ 	{
+ 		/* ECS EF20EA */
+ 		.callback = ac_do_not_check_pmic_quirk,
+ 		.matches = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "EF20EA"),
+ 		},
+ 	},
+ 	{
+ 		/* Lenovo Ideapad Miix 320 */
+ 		.callback = ac_do_not_check_pmic_quirk,
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "80XF"),
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo MIIX 320-10ICR"),
  		},
  	},
  	{},
···
  		kfree(ac);
  	}
  
- 	dmi_check_system(ac_dmi_table);
  	return result;
  }
···
  	if (acpi_disabled)
  		return -ENODEV;
  
- 	for (i = 0; i < ARRAY_SIZE(acpi_ac_blacklist); i++)
- 		if (acpi_dev_present(acpi_ac_blacklist[i].hid, "1",
- 				     acpi_ac_blacklist[i].hrv)) {
- 			pr_info(PREFIX "AC: found native %s PMIC, not loading\n",
- 				acpi_ac_blacklist[i].hid);
- 			return -ENODEV;
- 		}
+ 	dmi_check_system(ac_dmi_table);
+ 
+ 	if (ac_check_pmic) {
+ 		for (i = 0; i < ARRAY_SIZE(acpi_ac_blacklist); i++)
+ 			if (acpi_dev_present(acpi_ac_blacklist[i].hid, "1",
+ 					     acpi_ac_blacklist[i].hrv)) {
+ 				pr_info(PREFIX "AC: found native %s PMIC, not loading\n",
+ 					acpi_ac_blacklist[i].hid);
+ 				return -ENODEV;
+ 			}
+ 	}
  
  #ifdef CONFIG_ACPI_PROCFS_POWER
  	acpi_ac_dir = acpi_lock_ac_dir();
+47
drivers/acpi/acpi_apd.c
···
   */
  
  #include <linux/clk-provider.h>
+ #include <linux/platform_data/clk-st.h>
  #include <linux/platform_device.h>
  #include <linux/pm_domain.h>
  #include <linux/clkdev.h>
···
  }
  
  #ifdef CONFIG_X86_AMD_PLATFORM_DEVICE
+ 
+ static int misc_check_res(struct acpi_resource *ares, void *data)
+ {
+ 	struct resource res;
+ 
+ 	return !acpi_dev_resource_memory(ares, &res);
+ }
+ 
+ static int st_misc_setup(struct apd_private_data *pdata)
+ {
+ 	struct acpi_device *adev = pdata->adev;
+ 	struct platform_device *clkdev;
+ 	struct st_clk_data *clk_data;
+ 	struct resource_entry *rentry;
+ 	struct list_head resource_list;
+ 	int ret;
+ 
+ 	clk_data = devm_kzalloc(&adev->dev, sizeof(*clk_data), GFP_KERNEL);
+ 	if (!clk_data)
+ 		return -ENOMEM;
+ 
+ 	INIT_LIST_HEAD(&resource_list);
+ 	ret = acpi_dev_get_resources(adev, &resource_list, misc_check_res,
+ 				     NULL);
+ 	if (ret < 0)
+ 		return -ENOENT;
+ 
+ 	list_for_each_entry(rentry, &resource_list, node) {
+ 		clk_data->base = devm_ioremap(&adev->dev, rentry->res->start,
+ 					      resource_size(rentry->res));
+ 		break;
+ 	}
+ 
+ 	acpi_dev_free_resource_list(&resource_list);
+ 
+ 	clkdev = platform_device_register_data(&adev->dev, "clk-st",
+ 					       PLATFORM_DEVID_NONE, clk_data,
+ 					       sizeof(*clk_data));
+ 	return PTR_ERR_OR_ZERO(clkdev);
+ }
+ 
  static const struct apd_device_desc cz_i2c_desc = {
  	.setup = acpi_apd_setup,
  	.fixed_clk_rate = 133000000,
···
  	.setup = acpi_apd_setup,
  	.fixed_clk_rate = 48000000,
  	.properties = uart_properties,
+ };
+ 
+ static const struct apd_device_desc st_misc_desc = {
+ 	.setup = st_misc_setup,
  };
  #endif
···
  	{ "AMD0020", APD_ADDR(cz_uart_desc) },
  	{ "AMDI0020", APD_ADDR(cz_uart_desc) },
  	{ "AMD0030", },
+ 	{ "AMD0040", APD_ADDR(st_misc_desc)},
  #endif
  #ifdef CONFIG_ARM64
  	{ "APMC0D0F", APD_ADDR(xgene_i2c_desc) },
+5 -1
drivers/acpi/acpi_lpss.c
···
  #define LPSS_SAVE_CTX			BIT(4)
  #define LPSS_NO_D3_DELAY		BIT(5)
  
+ /* Crystal Cove PMIC shares same ACPI ID between different platforms */
+ #define BYT_CRC_HRV			2
+ #define CHT_CRC_HRV			3
+ 
  struct lpss_private_data;
  
  struct lpss_device_desc {
···
  	if (!adev->pnp.unique_id || strcmp(adev->pnp.unique_id, "1"))
  		return;
  
- 	if (!acpi_dev_present("INT33FD", NULL, -1))
+ 	if (!acpi_dev_present("INT33FD", NULL, BYT_CRC_HRV))
  		pwm_add_table(byt_pwm_lookup, ARRAY_SIZE(byt_pwm_lookup));
  }
+45 -27
drivers/acpi/acpi_watchdog.c
···
  #define pr_fmt(fmt) "ACPI: watchdog: " fmt
  
  #include <linux/acpi.h>
- #include <linux/dmi.h>
  #include <linux/ioport.h>
  #include <linux/platform_device.h>
  
  #include "internal.h"
  
- static const struct dmi_system_id acpi_watchdog_skip[] = {
- 	{
- 		/*
- 		 * On Lenovo Z50-70 there are two issues with the WDAT
- 		 * table. First some of the instructions use RTC SRAM
- 		 * to store persistent information. This does not work well
- 		 * with Linux RTC driver. Second, more important thing is
- 		 * that the instructions do not actually reset the system.
- 		 *
- 		 * On this particular system iTCO_wdt seems to work just
- 		 * fine so we prefer that over WDAT for now.
- 		 *
- 		 * See also https://bugzilla.kernel.org/show_bug.cgi?id=199033.
- 		 */
- 		.ident = "Lenovo Z50-70",
- 		.matches = {
- 			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
- 			DMI_MATCH(DMI_PRODUCT_NAME, "20354"),
- 			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Z50-70"),
- 		},
- 	},
- 	{}
- };
+ #ifdef CONFIG_RTC_MC146818_LIB
+ #include <linux/mc146818rtc.h>
+ 
+ /*
+  * There are several systems where the WDAT table is accessing RTC SRAM to
+  * store persistent information. This does not work well with the Linux RTC
+  * driver so on those systems we skip WDAT driver and prefer iTCO_wdt
+  * instead.
+  *
+  * See also https://bugzilla.kernel.org/show_bug.cgi?id=199033.
+  */
+ static bool acpi_watchdog_uses_rtc(const struct acpi_table_wdat *wdat)
+ {
+ 	const struct acpi_wdat_entry *entries;
+ 	int i;
+ 
+ 	entries = (struct acpi_wdat_entry *)(wdat + 1);
+ 	for (i = 0; i < wdat->entries; i++) {
+ 		const struct acpi_generic_address *gas;
+ 
+ 		gas = &entries[i].register_region;
+ 		if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_IO) {
+ 			switch (gas->address) {
+ 			case RTC_PORT(0):
+ 			case RTC_PORT(1):
+ 			case RTC_PORT(2):
+ 			case RTC_PORT(3):
+ 				return true;
+ 			}
+ 		}
+ 	}
+ 
+ 	return false;
+ }
+ #else
+ static bool acpi_watchdog_uses_rtc(const struct acpi_table_wdat *wdat)
+ {
+ 	return false;
+ }
+ #endif
  
  static const struct acpi_table_wdat *acpi_watchdog_get_wdat(void)
  {
···
  	if (acpi_disabled)
  		return NULL;
  
- 	if (dmi_check_system(acpi_watchdog_skip))
- 		return NULL;
- 
  	status = acpi_get_table(ACPI_SIG_WDAT, 0,
  				(struct acpi_table_header **)&wdat);
  	if (ACPI_FAILURE(status)) {
  		/* It is fine if there is no WDAT */
+ 		return NULL;
+ 	}
+ 
+ 	if (acpi_watchdog_uses_rtc(wdat)) {
+ 		pr_info("Skipping WDAT on this system because it uses RTC SRAM\n");
  		return NULL;
  	}
+2
drivers/acpi/acpica/acapps.h
···
  fl_split_input_pathname(char *input_path,
  			char **out_directory_path, char **out_filename);
  
+ char *fl_get_file_basename(char *file_pathname);
+ 
  char *ad_generate_filename(char *prefix, char *table_id);
  
  void
+1 -1
drivers/acpi/acpica/acglobal.h
···
   * interrupt level
   */
  ACPI_GLOBAL(acpi_spinlock, acpi_gbl_gpe_lock);	/* For GPE data structs and registers */
- ACPI_GLOBAL(acpi_spinlock, acpi_gbl_hardware_lock);	/* For ACPI H/W except GPE registers */
+ ACPI_GLOBAL(acpi_raw_spinlock, acpi_gbl_hardware_lock);	/* For ACPI H/W except GPE registers */
  ACPI_GLOBAL(acpi_spinlock, acpi_gbl_reference_count_lock);
  
  /* Mutex for _OSI support */
+9 -3
drivers/acpi/acpica/dbnames.c
···
  	}
  
  	acpi_db_set_output_destination(ACPI_DB_DUPLICATE_OUTPUT);
- 	acpi_os_printf("ACPI Namespace (from %4.4s (%p) subtree):\n",
- 		       ((struct acpi_namespace_node *)subtree_entry)->name.
- 		       ascii, subtree_entry);
+ 
+ 	if (((struct acpi_namespace_node *)subtree_entry)->parent) {
+ 		acpi_os_printf("ACPI Namespace (from %4.4s (%p) subtree):\n",
+ 			       ((struct acpi_namespace_node *)subtree_entry)->
+ 			       name.ascii, subtree_entry);
+ 	} else {
+ 		acpi_os_printf("ACPI Namespace (from %s):\n",
+ 			       ACPI_NAMESPACE_ROOT);
+ 	}
  
  	/* Display the subtree */
+55 -4
drivers/acpi/acpica/dbtest.c
···
  static acpi_status
  acpi_db_test_string_type(struct acpi_namespace_node *node, u32 byte_length);
  
+ static acpi_status acpi_db_test_package_type(struct acpi_namespace_node *node);
+ 
  static acpi_status
  acpi_db_read_from_object(struct acpi_namespace_node *node,
  			 acpi_object_type expected_type,
···
  		bit_length = byte_length * 8;
  		break;
  
+ 	case ACPI_TYPE_PACKAGE:
+ 
+ 		local_type = ACPI_TYPE_PACKAGE;
+ 		break;
+ 
  	case ACPI_TYPE_FIELD_UNIT:
  	case ACPI_TYPE_BUFFER_FIELD:
  	case ACPI_TYPE_LOCAL_REGION_FIELD:
···
  
  	acpi_os_printf("%14s: %4.4s",
  		       acpi_ut_get_type_name(node->type), node->name.ascii);
+ 
  	if (!obj_desc) {
  		acpi_os_printf(" Ignoring, no attached object\n");
  		return (AE_OK);
···
  		case ACPI_ADR_SPACE_SYSTEM_MEMORY:
  		case ACPI_ADR_SPACE_SYSTEM_IO:
  		case ACPI_ADR_SPACE_PCI_CONFIG:
- 		case ACPI_ADR_SPACE_EC:
  
  			break;
  
  		default:
  
  			acpi_os_printf
- 			    (" %s space is not supported [%4.4s]\n",
+ 			    (" %s space is not supported in this command [%4.4s]\n",
  			     acpi_ut_get_region_name(region_obj->region.
  						     space_id),
  			     region_obj->region.node->name.ascii);
···
  		status = acpi_db_test_buffer_type(node, bit_length);
  		break;
  
+ 	case ACPI_TYPE_PACKAGE:
+ 
+ 		status = acpi_db_test_package_type(node);
+ 		break;
+ 
  	default:
  
  		acpi_os_printf(" Ignoring, type not implemented (%2.2X)",
  			       local_type);
  		break;
+ 	}
+ 
+ 	/* Exit on error, but don't abort the namespace walk */
+ 
+ 	if (ACPI_FAILURE(status)) {
+ 		status = AE_OK;
+ 		goto exit;
  	}
  
  	switch (node->type) {
···
  		acpi_os_printf(" (%s)",
  			       acpi_ut_get_region_name(region_obj->region.
  						       space_id));
+ 
  		break;
  
  	default:
  		break;
  	}
  
+ exit:
  	acpi_os_printf("\n");
  	return (status);
  }
···
  	if (temp1->integer.value == value_to_write) {
  		value_to_write = 0;
  	}
- 
  	/* Write a new value */
  
  	write_value.type = ACPI_TYPE_INTEGER;
···
  
  /*******************************************************************************
   *
+  * FUNCTION:    acpi_db_test_package_type
+  *
+  * PARAMETERS:  node                - Parent NS node for the object
+  *
+  * RETURN:      Status
+  *
+  * DESCRIPTION: Test read for a Package object.
+  *
+  ******************************************************************************/
+ 
+ static acpi_status acpi_db_test_package_type(struct acpi_namespace_node *node)
+ {
+ 	union acpi_object *temp1 = NULL;
+ 	acpi_status status;
+ 
+ 	/* Read the original value */
+ 
+ 	status = acpi_db_read_from_object(node, ACPI_TYPE_PACKAGE, &temp1);
+ 	if (ACPI_FAILURE(status)) {
+ 		return (status);
+ 	}
+ 
+ 	acpi_os_printf(" %8.8X Elements", temp1->package.count);
+ 	acpi_os_free(temp1);
+ 	return (status);
+ }
+ 
+ /*******************************************************************************
+  *
   * FUNCTION:    acpi_db_read_from_object
   *
   * PARAMETERS:  node                - Parent NS node for the object
···
  	acpi_gbl_method_executing = TRUE;
  	status = acpi_evaluate_object(read_handle, NULL,
  				      &param_objects, &return_obj);
- 	acpi_gbl_method_executing = FALSE;
  
+ 	acpi_gbl_method_executing = FALSE;
  	if (ACPI_FAILURE(status)) {
  		acpi_os_printf("Could not read from object, %s",
  			       acpi_format_exception(status));
···
  	case ACPI_TYPE_INTEGER:
  	case ACPI_TYPE_BUFFER:
  	case ACPI_TYPE_STRING:
+ 	case ACPI_TYPE_PACKAGE:
  		/*
  		 * Did we receive the type we wanted? Most important for the
  		 * Integer/Buffer case (when a field is larger than an Integer,
···
  			       acpi_ut_get_type_name(expected_type),
  			       acpi_ut_get_type_name(ret_value->type));
  
+ 		acpi_os_free(return_obj.pointer);
  		return (AE_TYPE);
  	}
+4 -4
drivers/acpi/acpica/dswscope.c
···
  				  acpi_ut_get_type_name(old_scope_info->
  							common.value)));
  	} else {
- 		ACPI_DEBUG_PRINT_RAW((ACPI_DB_EXEC, "[\\___] (%s)", "ROOT"));
+ 		ACPI_DEBUG_PRINT_RAW((ACPI_DB_EXEC, ACPI_NAMESPACE_ROOT));
  	}
  
  	ACPI_DEBUG_PRINT_RAW((ACPI_DB_EXEC,
···
  
  	new_scope_info = walk_state->scope_info;
  	if (new_scope_info) {
- 		ACPI_DEBUG_PRINT_RAW((ACPI_DB_EXEC,
- 				      "[%4.4s] (%s)\n",
+ 		ACPI_DEBUG_PRINT_RAW((ACPI_DB_EXEC, "[%4.4s] (%s)\n",
  				      acpi_ut_get_node_name(new_scope_info->
  							    scope.node),
  				      acpi_ut_get_type_name(new_scope_info->
  							    common.value)));
  	} else {
- 		ACPI_DEBUG_PRINT_RAW((ACPI_DB_EXEC, "[\\___] (ROOT)\n"));
+ 		ACPI_DEBUG_PRINT_RAW((ACPI_DB_EXEC, "%s\n",
+ 				      ACPI_NAMESPACE_ROOT));
  	}
  
  	acpi_ut_delete_generic_state(scope_info);
+2 -2
drivers/acpi/acpica/hwregs.c
···
  			  ACPI_BITMASK_ALL_FIXED_STATUS,
  			  ACPI_FORMAT_UINT64(acpi_gbl_xpm1a_status.address)));
  
- 	lock_flags = acpi_os_acquire_lock(acpi_gbl_hardware_lock);
+ 	lock_flags = acpi_os_acquire_raw_lock(acpi_gbl_hardware_lock);
  
  	/* Clear the fixed events in PM1 A/B */
  
  	status = acpi_hw_register_write(ACPI_REGISTER_PM1_STATUS,
  					ACPI_BITMASK_ALL_FIXED_STATUS);
  
- 	acpi_os_release_lock(acpi_gbl_hardware_lock, lock_flags);
+ 	acpi_os_release_raw_lock(acpi_gbl_hardware_lock, lock_flags);
  
  	if (ACPI_FAILURE(status)) {
  		goto exit;
+2 -2
drivers/acpi/acpica/hwxface.c
···
  		return_ACPI_STATUS(AE_BAD_PARAMETER);
  	}
  
- 	lock_flags = acpi_os_acquire_lock(acpi_gbl_hardware_lock);
+ 	lock_flags = acpi_os_acquire_raw_lock(acpi_gbl_hardware_lock);
  
  	/*
  	 * At this point, we know that the parent register is one of the
···
  
  unlock_and_exit:
  
- 	acpi_os_release_lock(acpi_gbl_hardware_lock, lock_flags);
+ 	acpi_os_release_raw_lock(acpi_gbl_hardware_lock, lock_flags);
  	return_ACPI_STATUS(status);
  }
+1 -1
drivers/acpi/acpica/rsdump.c
···
  
  static void acpi_rs_dump_byte_list(u16 length, u8 * data)
  {
- 	u8 i;
+ 	u16 i;
  
  	for (i = 0; i < length; i++) {
  		acpi_os_printf("%25s%2.2X : %2.2X\n", "Byte", i, data[i]);
+1 -1
drivers/acpi/acpica/tbinstal.c
···
   * DESCRIPTION: This function is called to verify and install an ACPI table.
   *              When this function is called by "Load" or "LoadTable" opcodes,
   *              or by acpi_load_table() API, the "Reload" parameter is set.
- *              After sucessfully returning from this function, table is
+ *              After successfully returning from this function, table is
   *              "INSTALLED" but not "VALIDATED".
   *
   ******************************************************************************/
+2 -2
drivers/acpi/acpica/utbuffer.c
···
  
  		/* Print current offset */
  
- 		acpi_os_printf("%6.4X: ", (base_offset + i));
+ 		acpi_os_printf("%8.4X: ", (base_offset + i));
  
  		/* Print 16 hex chars */
  
···
  
  		/* Print current offset */
  
- 		fprintf(file, "%6.4X: ", (base_offset + i));
+ 		fprintf(file, "%8.4X: ", (base_offset + i));
  
  		/* Print 16 hex chars */
  
+2 -2
drivers/acpi/acpica/utmutex.c
···
  		return_ACPI_STATUS (status);
  	}
  
- 	status = acpi_os_create_lock (&acpi_gbl_hardware_lock);
+ 	status = acpi_os_create_raw_lock(&acpi_gbl_hardware_lock);
  	if (ACPI_FAILURE (status)) {
  		return_ACPI_STATUS (status);
  	}
···
  	/* Delete the spinlocks */
  
  	acpi_os_delete_lock(acpi_gbl_gpe_lock);
- 	acpi_os_delete_lock(acpi_gbl_hardware_lock);
+ 	acpi_os_delete_raw_lock(acpi_gbl_hardware_lock);
  	acpi_os_delete_lock(acpi_gbl_reference_count_lock);
  
  	/* Delete the reader/writer lock */
+2
drivers/acpi/acpica/utobject.c
···
   *****************************************************************************/
  
  #include <acpi/acpi.h>
+ #include <linux/kmemleak.h>
  #include "accommon.h"
  #include "acnamesp.h"
  
···
  	if (!object) {
  		return_PTR(NULL);
  	}
+ 	kmemleak_not_leak(object);
  
  	switch (type) {
  	case ACPI_TYPE_REGION:
+1
drivers/acpi/acpica/utprint.c
···
  		case 'X':
  
  			type |= ACPI_FORMAT_UPPER;
+ 			/* FALLTHROUGH */
  
  		case 'x':
  
+1 -1
drivers/acpi/acpica/utstring.c
···
  	 * Special case for the root node. This can happen if we get an
  	 * error during the execution of module-level code.
  	 */
- 	if (ACPI_COMPARE_NAME(name, "\\___")) {
+ 	if (ACPI_COMPARE_NAME(name, ACPI_ROOT_PATHNAME)) {
  		return;
  	}
+70 -11
drivers/acpi/battery.c
···
  static bool battery_driver_registered;
  static int battery_bix_broken_package;
  static int battery_notification_delay_ms;
+ static int battery_ac_is_broken;
+ static int battery_check_pmic = 1;
  static unsigned int cache_time = 1000;
  module_param(cache_time, uint, 0644);
  MODULE_PARM_DESC(cache_time, "cache time in milliseconds");
···
  		battery->full_charge_capacity < battery->design_capacity;
  }
  
+ static int acpi_battery_handle_discharging(struct acpi_battery *battery)
+ {
+ 	/*
+ 	 * Some devices wrongly report discharging if the battery's charge level
+ 	 * was above the device's start charging threshold atm the AC adapter
+ 	 * was plugged in and the device thus did not start a new charge cycle.
+ 	 */
+ 	if ((battery_ac_is_broken || power_supply_is_system_supplied()) &&
+ 	    battery->rate_now == 0)
+ 		return POWER_SUPPLY_STATUS_NOT_CHARGING;
+ 
+ 	return POWER_SUPPLY_STATUS_DISCHARGING;
+ }
+ 
  static int acpi_battery_get_property(struct power_supply *psy,
  				     enum power_supply_property psp,
  				     union power_supply_propval *val)
···
  	switch (psp) {
  	case POWER_SUPPLY_PROP_STATUS:
  		if (battery->state & ACPI_BATTERY_STATE_DISCHARGING)
- 			val->intval = POWER_SUPPLY_STATUS_DISCHARGING;
+ 			val->intval = acpi_battery_handle_discharging(battery);
  		else if (battery->state & ACPI_BATTERY_STATE_CHARGING)
  			val->intval = POWER_SUPPLY_STATUS_CHARGING;
  		else if (acpi_battery_is_charged(battery))
···
  	return 0;
  }
  
+ static int __init
+ battery_ac_is_broken_quirk(const struct dmi_system_id *d)
+ {
+ 	battery_ac_is_broken = 1;
+ 	return 0;
+ }
+ 
+ static int __init
+ battery_do_not_check_pmic_quirk(const struct dmi_system_id *d)
+ {
+ 	battery_check_pmic = 0;
+ 	return 0;
+ }
+ 
  static const struct dmi_system_id bat_dmi_table[] __initconst = {
  	{
+ 		/* NEC LZ750/LS */
  		.callback = battery_bix_broken_package_quirk,
- 		.ident = "NEC LZ750/LS",
  		.matches = {
  			DMI_MATCH(DMI_SYS_VENDOR, "NEC"),
  			DMI_MATCH(DMI_PRODUCT_NAME, "PC-LZ750LS"),
  		},
  	},
  	{
+ 		/* Acer Aspire V5-573G */
  		.callback = battery_notification_delay_quirk,
- 		.ident = "Acer Aspire V5-573G",
  		.matches = {
  			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
  			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire V5-573G"),
+ 		},
+ 	},
+ 	{
+ 		/* Point of View mobii wintab p800w */
+ 		.callback = battery_ac_is_broken_quirk,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+ 			DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
+ 			DMI_MATCH(DMI_BIOS_VERSION, "3BAIR1013"),
+ 			/* Above matches are too generic, add bios-date match */
+ 			DMI_MATCH(DMI_BIOS_DATE, "08/22/2014"),
+ 		},
+ 	},
+ 	{
+ 		/* ECS EF20EA */
+ 		.callback = battery_do_not_check_pmic_quirk,
+ 		.matches = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "EF20EA"),
+ 		},
+ 	},
+ 	{
+ 		/* Lenovo Ideapad Miix 320 */
+ 		.callback = battery_do_not_check_pmic_quirk,
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "80XF"),
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo MIIX 320-10ICR"),
  		},
  	},
  	{},
···
  	unsigned int i;
  	int result;
  
- 	for (i = 0; i < ARRAY_SIZE(acpi_battery_blacklist); i++)
- 		if (acpi_dev_present(acpi_battery_blacklist[i], "1", -1)) {
- 			pr_info(PREFIX ACPI_BATTERY_DEVICE_NAME
- 				": found native %s PMIC, not loading\n",
- 				acpi_battery_blacklist[i]);
- 			return;
- 		}
- 
  	dmi_check_system(bat_dmi_table);
+ 
+ 	if (battery_check_pmic) {
+ 		for (i = 0; i < ARRAY_SIZE(acpi_battery_blacklist); i++)
+ 			if (acpi_dev_present(acpi_battery_blacklist[i], "1", -1)) {
+ 				pr_info(PREFIX ACPI_BATTERY_DEVICE_NAME
+ 					": found native %s PMIC, not loading\n",
+ 					acpi_battery_blacklist[i]);
+ 				return;
+ 			}
+ 	}
  
  #ifdef CONFIG_ACPI_PROCFS_POWER
  	acpi_battery_dir = acpi_lock_battery_dir();
+105 -54
drivers/acpi/cppc_acpi.c
···
  
  #include <linux/cpufreq.h>
  #include <linux/delay.h>
+ #include <linux/iopoll.h>
  #include <linux/ktime.h>
  #include <linux/rwsem.h>
  #include <linux/wait.h>
···
  	struct mbox_chan *pcc_channel;
  	void __iomem *pcc_comm_addr;
  	bool pcc_channel_acquired;
- 	ktime_t deadline;
+ 	unsigned int deadline_us;
  	unsigned int pcc_mpar, pcc_mrtt, pcc_nominal;
  
  	bool pending_pcc_write_cmd;	/* Any pending/batched PCC write cmds? */
···
  show_cppc_data(cppc_get_perf_caps, cppc_perf_caps, lowest_perf);
  show_cppc_data(cppc_get_perf_caps, cppc_perf_caps, nominal_perf);
  show_cppc_data(cppc_get_perf_caps, cppc_perf_caps, lowest_nonlinear_perf);
+ show_cppc_data(cppc_get_perf_caps, cppc_perf_caps, lowest_freq);
+ show_cppc_data(cppc_get_perf_caps, cppc_perf_caps, nominal_freq);
+ 
  show_cppc_data(cppc_get_perf_ctrs, cppc_perf_fb_ctrs, reference_perf);
  show_cppc_data(cppc_get_perf_ctrs, cppc_perf_fb_ctrs, wraparound_time);
···
  	&lowest_perf.attr,
  	&lowest_nonlinear_perf.attr,
  	&nominal_perf.attr,
+ 	&nominal_freq.attr,
+ 	&lowest_freq.attr,
  	NULL
  };
···
  
  static int check_pcc_chan(int pcc_ss_id, bool chk_err_bit)
  {
- 	int ret = -EIO, status = 0;
+ 	int ret, status;
  	struct cppc_pcc_data *pcc_ss_data = pcc_data[pcc_ss_id];
  	struct acpi_pcct_shared_memory __iomem *generic_comm_base =
  		pcc_ss_data->pcc_comm_addr;
- 	ktime_t next_deadline = ktime_add(ktime_get(),
- 					  pcc_ss_data->deadline);
  
  	if (!pcc_ss_data->platform_owns_pcc)
  		return 0;
  
- 	/* Retry in case the remote processor was too slow to catch up. */
- 	while (!ktime_after(ktime_get(), next_deadline)) {
- 		/*
- 		 * Per spec, prior to boot the PCC space wil be initialized by
- 		 * platform and should have set the command completion bit when
- 		 * PCC can be used by OSPM
- 		 */
- 		status = readw_relaxed(&generic_comm_base->status);
- 		if (status & PCC_CMD_COMPLETE_MASK) {
- 			ret = 0;
- 			if (chk_err_bit && (status & PCC_ERROR_MASK))
- 				ret = -EIO;
- 			break;
- 		}
- 		/*
- 		 * Reducing the bus traffic in case this loop takes longer than
- 		 * a few retries.
- 		 */
- 		udelay(3);
+ 	/*
+ 	 * Poll PCC status register every 3us(delay_us) for maximum of
+ 	 * deadline_us(timeout_us) until PCC command complete bit is set(cond)
+ 	 */
+ 	ret = readw_relaxed_poll_timeout(&generic_comm_base->status, status,
+ 					 status & PCC_CMD_COMPLETE_MASK, 3,
+ 					 pcc_ss_data->deadline_us);
+ 
+ 	if (likely(!ret)) {
+ 		pcc_ss_data->platform_owns_pcc = false;
+ 		if (chk_err_bit && (status & PCC_ERROR_MASK))
+ 			ret = -EIO;
  	}
  
- 	if (likely(!ret))
- 		pcc_ss_data->platform_owns_pcc = false;
- 	else
- 		pr_err("PCC check channel failed for ss: %d. Status=%x\n",
- 		       pcc_ss_id, status);
+ 	if (unlikely(ret))
+ 		pr_err("PCC check channel failed for ss: %d. ret=%d\n",
+ 		       pcc_ss_id, ret);
  
  	return ret;
  }
···
  	 * So add an arbitrary amount of wait on top of Nominal.
  	 */
  	usecs_lat = NUM_RETRIES * cppc_ss->latency;
- 	pcc_data[pcc_ss_idx]->deadline = ns_to_ktime(usecs_lat * NSEC_PER_USEC);
+ 	pcc_data[pcc_ss_idx]->deadline_us = usecs_lat;
  	pcc_data[pcc_ss_idx]->pcc_mrtt = cppc_ss->min_turnaround_time;
  	pcc_data[pcc_ss_idx]->pcc_mpar = cppc_ss->max_access_rate;
  	pcc_data[pcc_ss_idx]->pcc_nominal = cppc_ss->latency;
···
  	return false;
  }
  
- 
  /**
   * pcc_data_alloc() - Allocate the pcc_data memory for pcc subspace
   *
···
  
  	return 0;
  }
+ 
+ /* Check if CPPC revision + num_ent combination is supported */
+ static bool is_cppc_supported(int revision, int num_ent)
+ {
+ 	int expected_num_ent;
+ 
+ 	switch (revision) {
+ 	case CPPC_V2_REV:
+ 		expected_num_ent = CPPC_V2_NUM_ENT;
+ 		break;
+ 	case CPPC_V3_REV:
+ 		expected_num_ent = CPPC_V3_NUM_ENT;
+ 		break;
+ 	default:
+ 		pr_debug("Firmware exports unsupported CPPC revision: %d\n",
+ 			 revision);
+ 		return false;
+ 	}
+ 
+ 	if (expected_num_ent != num_ent) {
+ 		pr_debug("Firmware exports %d entries. Expected: %d for CPPC rev:%d\n",
+ 			 num_ent, expected_num_ent, revision);
+ 		return false;
+ 	}
+ 
+ 	return true;
+ }
+ 
  /*
   * An example CPC table looks like the following.
   *
···
  			 cpc_obj->type);
  		goto out_free;
  	}
- 
- 	/* Only support CPPCv2. Bail otherwise. */
- 	if (num_ent != CPPC_NUM_ENT) {
- 		pr_debug("Firmware exports %d entries. Expected: %d\n",
- 			 num_ent, CPPC_NUM_ENT);
- 		goto out_free;
- 	}
- 
  	cpc_ptr->num_entries = num_ent;
  
  	/* Second entry should be revision. */
···
  			 cpc_obj->type);
  		goto out_free;
  	}
+ 	cpc_ptr->version = cpc_rev;
  
- 	if (cpc_rev != CPPC_REV) {
- 		pr_debug("Firmware exports revision:%d. Expected:%d\n",
- 			 cpc_rev, CPPC_REV);
+ 	if (!is_cppc_supported(cpc_rev, num_ent))
  		goto out_free;
- 	}
  
  	/* Iterate through remaining entries in _CPC */
  	for (i = 2; i < num_ent; i++) {
···
  		}
  	}
  	per_cpu(cpu_pcc_subspace_idx, pr->id) = pcc_subspace_id;
+ 
+ 	/*
+ 	 * Initialize the remaining cpc_regs as unsupported.
+ 	 * Example: In case FW exposes CPPC v2, the below loop will initialize
+ 	 * LOWEST_FREQ and NOMINAL_FREQ regs as unsupported
+ 	 */
+ 	for (i = num_ent - 2; i < MAX_CPC_REG_ENT; i++) {
+ 		cpc_ptr->cpc_regs[i].type = ACPI_TYPE_INTEGER;
+ 		cpc_ptr->cpc_regs[i].cpc_entry.int_value = 0;
+ 	}
+ 
  	/* Store CPU Logical ID */
  	cpc_ptr->cpu_id = pr->id;
···
  {
  	struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpunum);
  	struct cpc_register_resource *highest_reg, *lowest_reg,
- 		*lowest_non_linear_reg, *nominal_reg;
- 	u64 high, low, nom, min_nonlinear;
+ 		*lowest_non_linear_reg, *nominal_reg,
+ 		*low_freq_reg = NULL, *nom_freq_reg = NULL;
+ 	u64 high, low, nom, min_nonlinear, low_f = 0, nom_f = 0;
  	int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpunum);
- 	struct cppc_pcc_data *pcc_ss_data;
+ 	struct cppc_pcc_data *pcc_ss_data = NULL;
  	int ret = 0, regs_in_pcc = 0;
  
- 	if (!cpc_desc || pcc_ss_id < 0) {
+ 	if (!cpc_desc) {
  		pr_debug("No CPC descriptor for CPU:%d\n", cpunum);
  		return -ENODEV;
  	}
  
- 	pcc_ss_data = pcc_data[pcc_ss_id];
  	highest_reg = &cpc_desc->cpc_regs[HIGHEST_PERF];
  	lowest_reg = &cpc_desc->cpc_regs[LOWEST_PERF];
  	lowest_non_linear_reg = &cpc_desc->cpc_regs[LOW_NON_LINEAR_PERF];
  	nominal_reg = &cpc_desc->cpc_regs[NOMINAL_PERF];
+ 	low_freq_reg = &cpc_desc->cpc_regs[LOWEST_FREQ];
+ 	nom_freq_reg = &cpc_desc->cpc_regs[NOMINAL_FREQ];
  
  	/* Are
any of the regs PCC ?*/ 1082 1060 if (CPC_IN_PCC(highest_reg) || CPC_IN_PCC(lowest_reg) || 1083 - CPC_IN_PCC(lowest_non_linear_reg) || CPC_IN_PCC(nominal_reg)) { 1061 + CPC_IN_PCC(lowest_non_linear_reg) || CPC_IN_PCC(nominal_reg) || 1062 + CPC_IN_PCC(low_freq_reg) || CPC_IN_PCC(nom_freq_reg)) { 1063 + if (pcc_ss_id < 0) { 1064 + pr_debug("Invalid pcc_ss_id\n"); 1065 + return -ENODEV; 1066 + } 1067 + pcc_ss_data = pcc_data[pcc_ss_id]; 1084 1068 regs_in_pcc = 1; 1085 1069 down_write(&pcc_ss_data->pcc_lock); 1086 1070 /* Ring doorbell once to update PCC subspace */ ··· 1113 1081 if (!high || !low || !nom || !min_nonlinear) 1114 1082 ret = -EFAULT; 1115 1083 1084 + /* Read optional lowest and nominal frequencies if present */ 1085 + if (CPC_SUPPORTED(low_freq_reg)) 1086 + cpc_read(cpunum, low_freq_reg, &low_f); 1087 + 1088 + if (CPC_SUPPORTED(nom_freq_reg)) 1089 + cpc_read(cpunum, nom_freq_reg, &nom_f); 1090 + 1091 + perf_caps->lowest_freq = low_f; 1092 + perf_caps->nominal_freq = nom_f; 1093 + 1094 + 1116 1095 out_err: 1117 1096 if (regs_in_pcc) 1118 1097 up_write(&pcc_ss_data->pcc_lock); ··· 1144 1101 struct cpc_register_resource *delivered_reg, *reference_reg, 1145 1102 *ref_perf_reg, *ctr_wrap_reg; 1146 1103 int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpunum); 1147 - struct cppc_pcc_data *pcc_ss_data; 1104 + struct cppc_pcc_data *pcc_ss_data = NULL; 1148 1105 u64 delivered, reference, ref_perf, ctr_wrap_time; 1149 1106 int ret = 0, regs_in_pcc = 0; 1150 1107 1151 - if (!cpc_desc || pcc_ss_id < 0) { 1108 + if (!cpc_desc) { 1152 1109 pr_debug("No CPC descriptor for CPU:%d\n", cpunum); 1153 1110 return -ENODEV; 1154 1111 } 1155 1112 1156 - pcc_ss_data = pcc_data[pcc_ss_id]; 1157 1113 delivered_reg = &cpc_desc->cpc_regs[DELIVERED_CTR]; 1158 1114 reference_reg = &cpc_desc->cpc_regs[REFERENCE_CTR]; 1159 1115 ref_perf_reg = &cpc_desc->cpc_regs[REFERENCE_PERF]; ··· 1168 1126 /* Are any of the regs PCC ?*/ 1169 1127 if (CPC_IN_PCC(delivered_reg) || 
CPC_IN_PCC(reference_reg) || 1170 1128 CPC_IN_PCC(ctr_wrap_reg) || CPC_IN_PCC(ref_perf_reg)) { 1129 + if (pcc_ss_id < 0) { 1130 + pr_debug("Invalid pcc_ss_id\n"); 1131 + return -ENODEV; 1132 + } 1133 + pcc_ss_data = pcc_data[pcc_ss_id]; 1171 1134 down_write(&pcc_ss_data->pcc_lock); 1172 1135 regs_in_pcc = 1; 1173 1136 /* Ring doorbell once to update PCC subspace */ ··· 1223 1176 struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu); 1224 1177 struct cpc_register_resource *desired_reg; 1225 1178 int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu); 1226 - struct cppc_pcc_data *pcc_ss_data; 1179 + struct cppc_pcc_data *pcc_ss_data = NULL; 1227 1180 int ret = 0; 1228 1181 1229 - if (!cpc_desc || pcc_ss_id < 0) { 1182 + if (!cpc_desc) { 1230 1183 pr_debug("No CPC descriptor for CPU:%d\n", cpu); 1231 1184 return -ENODEV; 1232 1185 } 1233 1186 1234 - pcc_ss_data = pcc_data[pcc_ss_id]; 1235 1187 desired_reg = &cpc_desc->cpc_regs[DESIRED_PERF]; 1236 1188 1237 1189 /* ··· 1241 1195 * achieve that goal here 1242 1196 */ 1243 1197 if (CPC_IN_PCC(desired_reg)) { 1198 + if (pcc_ss_id < 0) { 1199 + pr_debug("Invalid pcc_ss_id\n"); 1200 + return -ENODEV; 1201 + } 1202 + pcc_ss_data = pcc_data[pcc_ss_id]; 1244 1203 down_read(&pcc_ss_data->pcc_lock); /* BEGIN Phase-I */ 1245 1204 if (pcc_ss_data->platform_owns_pcc) { 1246 1205 ret = check_pcc_chan(pcc_ss_id, false);
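The cppc_acpi.c hunk above swaps the open-coded ktime polling loop in check_pcc_chan() for readw_relaxed_poll_timeout(). A minimal userspace sketch of the same poll-until-bit-or-deadline pattern — names like `poll_status_timeout` and the counter-driven `fake_tick` are illustrative, not kernel API; the real helper reads hardware and delays with udelay():

```c
#include <assert.h>
#include <stdint.h>

#define PCC_CMD_COMPLETE_MASK (1 << 0)

/*
 * Poll *status every "step" ticks for at most "timeout" ticks until the
 * command-complete bit is set. Returns 0 on success, -1 on timeout
 * (standing in for -ETIMEDOUT). "tick" stands in for udelay(step) and,
 * in this simulation, is also what advances the fake hardware.
 */
static int poll_status_timeout(volatile uint16_t *status,
			       unsigned int step, unsigned int timeout,
			       void (*tick)(volatile uint16_t *))
{
	unsigned int elapsed = 0;

	for (;;) {
		if (*status & PCC_CMD_COMPLETE_MASK)
			return 0;
		if (elapsed >= timeout)
			return -1;
		tick(status);
		elapsed += step;
	}
}

/* Fake platform: sets the completion bit after ready_after polls. */
static unsigned int tick_count, ready_after;

static void fake_tick(volatile uint16_t *status)
{
	if (++tick_count >= ready_after)
		*status |= PCC_CMD_COMPLETE_MASK;
}
```

The kernel helper additionally returns the final register value through its `status` argument, which is why the error-bit check after the poll still works on timeout-free exits.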
+2 -2
drivers/acpi/reboot.c
··· 8 8 { 9 9 struct acpi_generic_address *rr; 10 10 struct pci_bus *bus0; 11 - u8 reset_value; 12 11 unsigned int devfn; 12 + u8 reset_value; 13 13 14 14 if (acpi_disabled) 15 15 return; ··· 40 40 /* Form PCI device/function pair. */ 41 41 devfn = PCI_DEVFN((rr->address >> 32) & 0xffff, 42 42 (rr->address >> 16) & 0xffff); 43 - printk(KERN_DEBUG "Resetting with ACPI PCI RESET_REG."); 43 + printk(KERN_DEBUG "Resetting with ACPI PCI RESET_REG.\n"); 44 44 /* Write the value that resets us. */ 45 45 pci_bus_write_config_byte(bus0, devfn, 46 46 (rr->address & 0xffff), reset_value);
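The reset path above packs the PCI device and function numbers into the upper words of the ACPI generic address and keeps the config-space offset in the low word. A small sketch of that decoding — `devfn_from_acpi_addr` is a made-up name for illustration; the `PCI_DEVFN` macro mirrors the kernel definition:

```c
#include <assert.h>
#include <stdint.h>

/* Same encoding as the kernel macro: 5-bit slot, 3-bit function. */
#define PCI_DEVFN(slot, func) ((((slot) & 0x1f) << 3) | ((func) & 0x07))

/*
 * RESET_REG in PCI config space encodes, per ACPI:
 *   bits 63:32 device, bits 31:16 function, bits 15:0 register offset.
 */
static unsigned int devfn_from_acpi_addr(uint64_t address)
{
	return PCI_DEVFN((address >> 32) & 0xffff, (address >> 16) & 0xffff);
}

static unsigned int reg_from_acpi_addr(uint64_t address)
{
	return address & 0xffff;
}
```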
+7 -2
drivers/acpi/tables.c
··· 222 222 * acpi_parse_entries_array - for each proc_num find a suitable subtable 223 223 * 224 224 * @id: table id (for debugging purposes) 225 - * @table_size: single entry size 225 + * @table_size: size of the root table 226 226 * @table_header: where does the table start? 227 227 * @proc: array of acpi_subtable_proc struct containing entry id 228 228 * and associated handler with it ··· 232 232 * For each proc_num find a subtable with proc->id and run proc->handler 233 233 * on it. Assumption is that there's only single handler for particular 234 234 * entry id. 235 + * 236 + * The table_size is not the size of the complete ACPI table (the length 237 + * field in the header struct), but only the size of the root table; i.e., 238 + * the offset from the very first byte of the complete ACPI table, to the 239 + * first byte of the very first subtable. 235 240 * 236 241 * On success returns sum of all matching entries for all proc handlers. 237 242 * Otherwise, -ENODEV or -EINVAL is returned. ··· 405 400 return -ENODEV; 406 401 } 407 402 408 - /* 403 + /* 409 404 * The BIOS is supposed to supply a single APIC/MADT, 410 405 * but some report two. Provide a knob to use either. 411 406 * (don't you wish instance 0 and 1 were not the same?)
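As the expanded kernel-doc spells out, table_size is the offset from the first byte of the ACPI table to its first subtable, and each subtable then carries its own (type, length) header. A simplified walk over such a layout — `struct sub_hdr` and `count_subtables` are illustrative stand-ins with 1-byte fields, not the real `struct acpi_subtable_header`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct sub_hdr {
	uint8_t type;
	uint8_t length;	/* length of this subtable, header included */
};

/* Count subtables of "type" between table_size and total_len. */
static int count_subtables(const uint8_t *table, size_t total_len,
			   size_t table_size, uint8_t type)
{
	size_t off = table_size;	/* offset of the first subtable */
	int matches = 0;

	while (off + sizeof(struct sub_hdr) <= total_len) {
		const struct sub_hdr *h =
			(const struct sub_hdr *)(table + off);

		if (h->length < sizeof(struct sub_hdr))
			break;		/* malformed entry, stop walking */
		if (h->type == type)
			matches++;
		off += h->length;
	}
	return matches;
}
```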
+2 -1
drivers/clk/x86/Makefile
··· 1 + obj-$(CONFIG_PMC_ATOM) += clk-pmc-atom.o 2 + obj-$(CONFIG_X86_AMD_PLATFORM_DEVICE) += clk-st.o 1 3 clk-x86-lpss-objs := clk-lpt.o 2 4 obj-$(CONFIG_X86_INTEL_LPSS) += clk-x86-lpss.o 3 - obj-$(CONFIG_PMC_ATOM) += clk-pmc-atom.o
+77
drivers/clk/x86/clk-st.c
··· 1 + // SPDX-License-Identifier: MIT 2 + /* 3 + * clock framework for AMD Stoney based clocks 4 + * 5 + * Copyright 2018 Advanced Micro Devices, Inc. 6 + */ 7 + 8 + #include <linux/clk.h> 9 + #include <linux/clkdev.h> 10 + #include <linux/clk-provider.h> 11 + #include <linux/platform_data/clk-st.h> 12 + #include <linux/platform_device.h> 13 + 14 + /* Clock Driving Strength 2 register */ 15 + #define CLKDRVSTR2 0x28 16 + /* Clock Control 1 register */ 17 + #define MISCCLKCNTL1 0x40 18 + /* Auxiliary clock1 enable bit */ 19 + #define OSCCLKENB 2 20 + /* 25Mhz auxiliary output clock freq bit */ 21 + #define OSCOUT1CLK25MHZ 16 22 + 23 + #define ST_CLK_48M 0 24 + #define ST_CLK_25M 1 25 + #define ST_CLK_MUX 2 26 + #define ST_CLK_GATE 3 27 + #define ST_MAX_CLKS 4 28 + 29 + static const char * const clk_oscout1_parents[] = { "clk48MHz", "clk25MHz" }; 30 + static struct clk_hw *hws[ST_MAX_CLKS]; 31 + 32 + static int st_clk_probe(struct platform_device *pdev) 33 + { 34 + struct st_clk_data *st_data; 35 + 36 + st_data = dev_get_platdata(&pdev->dev); 37 + if (!st_data || !st_data->base) 38 + return -EINVAL; 39 + 40 + hws[ST_CLK_48M] = clk_hw_register_fixed_rate(NULL, "clk48MHz", NULL, 0, 41 + 48000000); 42 + hws[ST_CLK_25M] = clk_hw_register_fixed_rate(NULL, "clk25MHz", NULL, 0, 43 + 25000000); 44 + 45 + hws[ST_CLK_MUX] = clk_hw_register_mux(NULL, "oscout1_mux", 46 + clk_oscout1_parents, ARRAY_SIZE(clk_oscout1_parents), 47 + 0, st_data->base + CLKDRVSTR2, OSCOUT1CLK25MHZ, 3, 0, NULL); 48 + 49 + clk_set_parent(hws[ST_CLK_MUX]->clk, hws[ST_CLK_25M]->clk); 50 + 51 + hws[ST_CLK_GATE] = clk_hw_register_gate(NULL, "oscout1", "oscout1_mux", 52 + 0, st_data->base + MISCCLKCNTL1, OSCCLKENB, 53 + CLK_GATE_SET_TO_DISABLE, NULL); 54 + 55 + clk_hw_register_clkdev(hws[ST_CLK_GATE], "oscout1", NULL); 56 + 57 + return 0; 58 + } 59 + 60 + static int st_clk_remove(struct platform_device *pdev) 61 + { 62 + int i; 63 + 64 + for (i = 0; i < ST_MAX_CLKS; i++) 65 + clk_hw_unregister(hws[i]); 66 
+ return 0; 67 + } 68 + 69 + static struct platform_driver st_clk_driver = { 70 + .driver = { 71 + .name = "clk-st", 72 + .suppress_bind_attrs = true, 73 + }, 74 + .probe = st_clk_probe, 75 + .remove = st_clk_remove, 76 + }; 77 + builtin_platform_driver(st_clk_driver);
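The new clk-st driver registers a 3-bit parent mux at bit 16 of CLKDRVSTR2 and a set-to-disable gate at bit 2 of MISCCLKCNTL1. The register-level effect can be sketched without the clk framework — `struct st_regs` and the helper names are invented for illustration; bit positions mirror the driver's #defines:

```c
#include <assert.h>
#include <stdint.h>

#define OSCOUT1CLK25MHZ	16	/* mux field shift in CLKDRVSTR2 */
#define OSCCLKENB	2	/* gate bit in MISCCLKCNTL1 */

struct st_regs {
	uint32_t clkdrvstr2;	/* offset 0x28 in the real block */
	uint32_t miscclkcntl1;	/* offset 0x40 */
};

/* Select parent: 0 = clk48MHz, 1 = clk25MHz (3-bit mux field). */
static void oscout1_set_parent(struct st_regs *r, unsigned int idx)
{
	r->clkdrvstr2 &= ~(0x7u << OSCOUT1CLK25MHZ);
	r->clkdrvstr2 |= (idx & 0x7u) << OSCOUT1CLK25MHZ;
}

/* CLK_GATE_SET_TO_DISABLE polarity: bit set means gated off. */
static void oscout1_enable(struct st_regs *r, int on)
{
	if (on)
		r->miscclkcntl1 &= ~(1u << OSCCLKENB);
	else
		r->miscclkcntl1 |= 1u << OSCCLKENB;
}

static unsigned long oscout1_rate(const struct st_regs *r)
{
	unsigned int idx = (r->clkdrvstr2 >> OSCOUT1CLK25MHZ) & 0x7u;

	if (r->miscclkcntl1 & (1u << OSCCLKENB))
		return 0;	/* gated */
	return idx == 1 ? 25000000UL : 48000000UL;
}
```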
+68 -12
drivers/cpufreq/cppc_cpufreq.c
··· 42 42 */ 43 43 static struct cppc_cpudata **all_cpu_data; 44 44 45 - /* Capture the max KHz from DMI */ 46 - static u64 cppc_dmi_max_khz; 47 - 48 45 /* Callback function used to retrieve the max frequency from DMI */ 49 46 static void cppc_find_dmi_mhz(const struct dmi_header *dm, void *private) 50 47 { ··· 72 75 return (1000 * mhz); 73 76 } 74 77 78 + /* 79 + * If CPPC lowest_freq and nominal_freq registers are exposed then we can 80 + * use them to convert perf to freq and vice versa 81 + * 82 + * If the perf/freq point lies between Nominal and Lowest, we can treat 83 + * (Low perf, Low freq) and (Nom Perf, Nom freq) as 2D co-ordinates of a line 84 + * and extrapolate the rest 85 + * For perf/freq > Nominal, we use the ratio perf:freq at Nominal for conversion 86 + */ 87 + static unsigned int cppc_cpufreq_perf_to_khz(struct cppc_cpudata *cpu, 88 + unsigned int perf) 89 + { 90 + static u64 max_khz; 91 + struct cppc_perf_caps *caps = &cpu->perf_caps; 92 + u64 mul, div; 93 + 94 + if (caps->lowest_freq && caps->nominal_freq) { 95 + if (perf >= caps->nominal_perf) { 96 + mul = caps->nominal_freq; 97 + div = caps->nominal_perf; 98 + } else { 99 + mul = caps->nominal_freq - caps->lowest_freq; 100 + div = caps->nominal_perf - caps->lowest_perf; 101 + } 102 + } else { 103 + if (!max_khz) 104 + max_khz = cppc_get_dmi_max_khz(); 105 + mul = max_khz; 106 + div = cpu->perf_caps.highest_perf; 107 + } 108 + return (u64)perf * mul / div; 109 + } 110 + 111 + static unsigned int cppc_cpufreq_khz_to_perf(struct cppc_cpudata *cpu, 112 + unsigned int freq) 113 + { 114 + static u64 max_khz; 115 + struct cppc_perf_caps *caps = &cpu->perf_caps; 116 + u64 mul, div; 117 + 118 + if (caps->lowest_freq && caps->nominal_freq) { 119 + if (freq >= caps->nominal_freq) { 120 + mul = caps->nominal_perf; 121 + div = caps->nominal_freq; 122 + } else { 123 + mul = caps->lowest_perf; 124 + div = caps->lowest_freq; 125 + } 126 + } else { 127 + if (!max_khz) 128 + max_khz = cppc_get_dmi_max_khz(); 
129 + mul = cpu->perf_caps.highest_perf; 130 + div = max_khz; 131 + } 132 + 133 + return (u64)freq * mul / div; 134 + } 135 + 75 136 static int cppc_cpufreq_set_target(struct cpufreq_policy *policy, 76 137 unsigned int target_freq, 77 138 unsigned int relation) ··· 141 86 142 87 cpu = all_cpu_data[policy->cpu]; 143 88 144 - desired_perf = (u64)target_freq * cpu->perf_caps.highest_perf / cppc_dmi_max_khz; 89 + desired_perf = cppc_cpufreq_khz_to_perf(cpu, target_freq); 145 90 /* Return if it is exactly the same perf */ 146 91 if (desired_perf == cpu->perf_ctrls.desired_perf) 147 92 return ret; ··· 241 186 return ret; 242 187 } 243 188 244 - cppc_dmi_max_khz = cppc_get_dmi_max_khz(); 189 + /* Convert the lowest and nominal freq from MHz to KHz */ 190 + cpu->perf_caps.lowest_freq *= 1000; 191 + cpu->perf_caps.nominal_freq *= 1000; 245 192 246 193 /* 247 194 * Set min to lowest nonlinear perf to avoid any efficiency penalty (see 248 195 * Section 8.4.7.1.1.5 of ACPI 6.1 spec) 249 196 */ 250 - policy->min = cpu->perf_caps.lowest_nonlinear_perf * cppc_dmi_max_khz / 251 - cpu->perf_caps.highest_perf; 252 - policy->max = cppc_dmi_max_khz; 197 + policy->min = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.lowest_nonlinear_perf); 198 + policy->max = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.highest_perf); 253 199 254 200 /* 255 201 * Set cpuinfo.min_freq to Lowest to make the full range of performance 256 202 * available if userspace wants to use any perf between lowest & lowest 257 203 * nonlinear perf 258 204 */ 259 - policy->cpuinfo.min_freq = cpu->perf_caps.lowest_perf * cppc_dmi_max_khz / 260 - cpu->perf_caps.highest_perf; 261 - policy->cpuinfo.max_freq = cppc_dmi_max_khz; 205 + policy->cpuinfo.min_freq = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.lowest_perf); 206 + policy->cpuinfo.max_freq = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.highest_perf); 262 207 263 208 policy->transition_delay_us = cppc_cpufreq_get_transition_delay_us(cpu_num); 264 209 
policy->shared_type = cpu->shared_type; ··· 284 229 cpu->cur_policy = policy; 285 230 286 231 /* Set policy->cur to max now. The governors will adjust later. */ 287 - policy->cur = cppc_dmi_max_khz; 232 + policy->cur = cppc_cpufreq_perf_to_khz(cpu, 233 + cpu->perf_caps.highest_perf); 288 234 cpu->perf_ctrls.desired_perf = cpu->perf_caps.highest_perf; 289 235 290 236 ret = cppc_set_perf(cpu_num, &cpu->perf_ctrls);
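The conversion helpers added above interpolate linearly between (lowest_perf, lowest_freq) and (nominal_perf, nominal_freq) when both optional frequency registers are exposed, and scale by the nominal freq:perf ratio at or above nominal. A standalone sketch of the same arithmetic — `struct caps` is a trimmed stand-in for cppc_perf_caps, and the DMI fallback path is omitted:

```c
#include <assert.h>
#include <stdint.h>

struct caps {
	uint32_t lowest_perf, nominal_perf;
	uint32_t lowest_freq, nominal_freq;	/* kHz */
};

static uint64_t perf_to_khz(const struct caps *c, uint32_t perf)
{
	uint64_t mul, div;

	if (perf >= c->nominal_perf) {
		/* At/above nominal: scale by the nominal ratio. */
		mul = c->nominal_freq;
		div = c->nominal_perf;
	} else {
		/* Below nominal: slope of the (low, nom) line. */
		mul = c->nominal_freq - c->lowest_freq;
		div = c->nominal_perf - c->lowest_perf;
	}
	return (uint64_t)perf * mul / div;
}

static uint64_t khz_to_perf(const struct caps *c, uint32_t khz)
{
	uint64_t mul, div;

	if (khz >= c->nominal_freq) {
		mul = c->nominal_perf;
		div = c->nominal_freq;
	} else {
		mul = c->lowest_perf;
		div = c->lowest_freq;
	}
	return (uint64_t)khz * mul / div;
}
```

The intermediate math is done in u64, as in the driver, so perf * mul cannot overflow for realistic register values.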
+38 -43
drivers/mailbox/pcc.c
··· 373 373 }; 374 374 375 375 /** 376 - * parse_pcc_subspace - Parse the PCC table and verify PCC subspace 377 - * entries. There should be one entry per PCC client. 376 + * parse_pcc_subspaces -- Count PCC subspaces defined 378 377 * @header: Pointer to the ACPI subtable header under the PCCT. 379 378 * @end: End of subtable entry. 380 379 * 381 - * Return: 0 for Success, else errno. 380 + * Return: If we find a PCC subspace entry of a valid type, return 0. 381 + * Otherwise, return -EINVAL. 382 382 * 383 383 * This gets called for each entry in the PCC table. 384 384 */ 385 385 static int parse_pcc_subspace(struct acpi_subtable_header *header, 386 386 const unsigned long end) 387 387 { 388 - struct acpi_pcct_hw_reduced *pcct_ss; 388 + struct acpi_pcct_subspace *ss = (struct acpi_pcct_subspace *) header; 389 389 390 - if (pcc_mbox_ctrl.num_chans <= MAX_PCC_SUBSPACES) { 391 - pcct_ss = (struct acpi_pcct_hw_reduced *) header; 390 + if (ss->header.type < ACPI_PCCT_TYPE_RESERVED) 391 + return 0; 392 392 393 - if ((pcct_ss->header.type != 394 - ACPI_PCCT_TYPE_HW_REDUCED_SUBSPACE) 395 - && (pcct_ss->header.type != 396 - ACPI_PCCT_TYPE_HW_REDUCED_SUBSPACE_TYPE2)) { 397 - pr_err("Incorrect PCC Subspace type detected\n"); 398 - return -EINVAL; 399 - } 400 - } 401 - 402 - return 0; 393 + return -EINVAL; 403 394 } 404 395 405 396 /** ··· 440 449 struct acpi_table_header *pcct_tbl; 441 450 struct acpi_subtable_header *pcct_entry; 442 451 struct acpi_table_pcct *acpi_pcct_tbl; 452 + struct acpi_subtable_proc proc[ACPI_PCCT_TYPE_RESERVED]; 443 453 int count, i, rc; 444 - int sum = 0; 445 454 acpi_status status = AE_OK; 446 455 447 456 /* Search for PCCT */ ··· 450 459 if (ACPI_FAILURE(status) || !pcct_tbl) 451 460 return -ENODEV; 452 461 453 - count = acpi_table_parse_entries(ACPI_SIG_PCCT, 454 - sizeof(struct acpi_table_pcct), 455 - ACPI_PCCT_TYPE_HW_REDUCED_SUBSPACE, 456 - parse_pcc_subspace, MAX_PCC_SUBSPACES); 457 - sum += (count > 0) ? 
count : 0; 462 + /* Set up the subtable handlers */ 463 + for (i = ACPI_PCCT_TYPE_GENERIC_SUBSPACE; 464 + i < ACPI_PCCT_TYPE_RESERVED; i++) { 465 + proc[i].id = i; 466 + proc[i].count = 0; 467 + proc[i].handler = parse_pcc_subspace; 468 + } 458 469 459 - count = acpi_table_parse_entries(ACPI_SIG_PCCT, 460 - sizeof(struct acpi_table_pcct), 461 - ACPI_PCCT_TYPE_HW_REDUCED_SUBSPACE_TYPE2, 462 - parse_pcc_subspace, MAX_PCC_SUBSPACES); 463 - sum += (count > 0) ? count : 0; 464 - 465 - if (sum == 0 || sum >= MAX_PCC_SUBSPACES) { 466 - pr_err("Error parsing PCC subspaces from PCCT\n"); 470 + count = acpi_table_parse_entries_array(ACPI_SIG_PCCT, 471 + sizeof(struct acpi_table_pcct), proc, 472 + ACPI_PCCT_TYPE_RESERVED, MAX_PCC_SUBSPACES); 473 + if (count == 0 || count > MAX_PCC_SUBSPACES) { 474 + pr_warn("Invalid PCCT: %d PCC subspaces\n", count); 467 475 return -EINVAL; 468 476 } 469 477 470 - pcc_mbox_channels = kzalloc(sizeof(struct mbox_chan) * 471 - sum, GFP_KERNEL); 478 + pcc_mbox_channels = kzalloc(sizeof(struct mbox_chan) * count, GFP_KERNEL); 472 479 if (!pcc_mbox_channels) { 473 480 pr_err("Could not allocate space for PCC mbox channels\n"); 474 481 return -ENOMEM; 475 482 } 476 483 477 - pcc_doorbell_vaddr = kcalloc(sum, sizeof(void *), GFP_KERNEL); 484 + pcc_doorbell_vaddr = kcalloc(count, sizeof(void *), GFP_KERNEL); 478 485 if (!pcc_doorbell_vaddr) { 479 486 rc = -ENOMEM; 480 487 goto err_free_mbox; 481 488 } 482 489 483 - pcc_doorbell_ack_vaddr = kcalloc(sum, sizeof(void *), GFP_KERNEL); 490 + pcc_doorbell_ack_vaddr = kcalloc(count, sizeof(void *), GFP_KERNEL); 484 491 if (!pcc_doorbell_ack_vaddr) { 485 492 rc = -ENOMEM; 486 493 goto err_free_db_vaddr; 487 494 } 488 495 489 - pcc_doorbell_irq = kcalloc(sum, sizeof(int), GFP_KERNEL); 496 + pcc_doorbell_irq = kcalloc(count, sizeof(int), GFP_KERNEL); 490 497 if (!pcc_doorbell_irq) { 491 498 rc = -ENOMEM; 492 499 goto err_free_db_ack_vaddr; ··· 498 509 if (acpi_pcct_tbl->flags & ACPI_PCCT_DOORBELL) 499 510 
pcc_mbox_ctrl.txdone_irq = true; 500 511 501 - for (i = 0; i < sum; i++) { 512 + for (i = 0; i < count; i++) { 502 513 struct acpi_generic_address *db_reg; 503 - struct acpi_pcct_hw_reduced *pcct_ss; 514 + struct acpi_pcct_subspace *pcct_ss; 504 515 pcc_mbox_channels[i].con_priv = pcct_entry; 505 516 506 - pcct_ss = (struct acpi_pcct_hw_reduced *) pcct_entry; 517 + if (pcct_entry->type == ACPI_PCCT_TYPE_HW_REDUCED_SUBSPACE || 518 + pcct_entry->type == ACPI_PCCT_TYPE_HW_REDUCED_SUBSPACE_TYPE2) { 519 + struct acpi_pcct_hw_reduced *pcct_hrss; 507 520 508 - if (pcc_mbox_ctrl.txdone_irq) { 509 - rc = pcc_parse_subspace_irq(i, pcct_ss); 510 - if (rc < 0) 511 - goto err; 521 + pcct_hrss = (struct acpi_pcct_hw_reduced *) pcct_entry; 522 + 523 + if (pcc_mbox_ctrl.txdone_irq) { 524 + rc = pcc_parse_subspace_irq(i, pcct_hrss); 525 + if (rc < 0) 526 + goto err; 527 + } 512 528 } 529 + pcct_ss = (struct acpi_pcct_subspace *) pcct_entry; 513 530 514 531 /* If doorbell is in system memory cache the virt address */ 515 532 db_reg = &pcct_ss->doorbell_register; ··· 526 531 ((unsigned long) pcct_entry + pcct_entry->length); 527 532 } 528 533 529 - pcc_mbox_ctrl.num_chans = sum; 534 + pcc_mbox_ctrl.num_chans = count; 530 535 531 536 pr_info("Detected %d PCC Subspaces\n", pcc_mbox_ctrl.num_chans); 532 537
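pcc.c now counts subspaces by handing acpi_table_parse_entries_array() one (id, handler, count) slot per PCCT subtable type, rather than running one parse pass per type. The dispatch idea in miniature — entry types reduced to plain ints; `parse_entries` and `accept_valid` are illustrative, not the ACPI core functions:

```c
#include <assert.h>

#define TYPE_RESERVED 5	/* stands in for ACPI_PCCT_TYPE_RESERVED */

struct subtable_proc {
	int id;
	int count;
	int (*handler)(int type);
};

static int accept_valid(int type)
{
	return type < TYPE_RESERVED ? 0 : -1;
}

/* Returns total matched entries, or -1 on an unrecognized type. */
static int parse_entries(const int *types, int n,
			 struct subtable_proc *proc, int nproc)
{
	int i, j, total = 0;

	for (i = 0; i < n; i++) {
		int matched = 0;

		for (j = 0; j < nproc; j++) {
			if (proc[j].id == types[i] &&
			    proc[j].handler(types[i]) == 0) {
				proc[j].count++;
				total++;
				matched = 1;
				break;
			}
		}
		if (!matched)
			return -1;
	}
	return total;
}
```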
+5 -2
include/acpi/acnames.h
··· 49 49 /* Definitions of the predefined namespace names */ 50 50 51 51 #define ACPI_UNKNOWN_NAME (u32) 0x3F3F3F3F /* Unknown name is "????" */ 52 - #define ACPI_ROOT_NAME (u32) 0x5F5F5F5C /* Root name is "\___" */ 53 - 54 52 #define ACPI_PREFIX_MIXED (u32) 0x69706341 /* "Acpi" */ 55 53 #define ACPI_PREFIX_LOWER (u32) 0x69706361 /* "acpi" */ 56 54 55 + /* Root name stuff */ 56 + 57 + #define ACPI_ROOT_NAME (u32) 0x5F5F5F5C /* Root name is "\___" */ 58 + #define ACPI_ROOT_PATHNAME "\\___" 59 + #define ACPI_NAMESPACE_ROOT "Namespace Root" 57 60 #define ACPI_NS_ROOT_PATH "\\" 58 61 59 62 #endif /* __ACNAMES_H__ */
+21
include/acpi/acpiosxf.h
··· 98 98 #endif 99 99 100 100 /* 101 + * RAW spinlock primitives. If the OS does not provide them, fallback to 102 + * spinlock primitives 103 + */ 104 + #ifndef ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_create_raw_lock 105 + # define acpi_os_create_raw_lock(out_handle) acpi_os_create_lock(out_handle) 106 + #endif 107 + 108 + #ifndef ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_delete_raw_lock 109 + # define acpi_os_delete_raw_lock(handle) acpi_os_delete_lock(handle) 110 + #endif 111 + 112 + #ifndef ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_acquire_raw_lock 113 + # define acpi_os_acquire_raw_lock(handle) acpi_os_acquire_lock(handle) 114 + #endif 115 + 116 + #ifndef ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_release_raw_lock 117 + # define acpi_os_release_raw_lock(handle, flags) \ 118 + acpi_os_release_lock(handle, flags) 119 + #endif 120 + 121 + /* 101 122 * Semaphore primitives 102 123 */ 103 124 #ifndef ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_create_semaphore
+1 -1
include/acpi/acpixf.h
··· 12 12 13 13 /* Current ACPICA subsystem version in YYYYMMDD format */ 14 14 15 - #define ACPI_CA_VERSION 0x20180313 15 + #define ACPI_CA_VERSION 0x20180508 16 16 17 17 #include <acpi/acconfig.h> 18 18 #include <acpi/actypes.h>
+4
include/acpi/actypes.h
··· 245 245 #define acpi_spinlock void * 246 246 #endif 247 247 248 + #ifndef acpi_raw_spinlock 249 + #define acpi_raw_spinlock acpi_spinlock 250 + #endif 251 + 248 252 #ifndef acpi_semaphore 249 253 #define acpi_semaphore void * 250 254 #endif
+10 -4
include/acpi/cppc_acpi.h
··· 20 20 #include <acpi/pcc.h> 21 21 #include <acpi/processor.h> 22 22 23 - /* Only support CPPCv2 for now. */ 24 - #define CPPC_NUM_ENT 21 25 - #define CPPC_REV 2 23 + /* Support CPPCv2 and CPPCv3 */ 24 + #define CPPC_V2_REV 2 25 + #define CPPC_V3_REV 3 26 + #define CPPC_V2_NUM_ENT 21 27 + #define CPPC_V3_NUM_ENT 23 26 28 27 29 #define PCC_CMD_COMPLETE_MASK (1 << 0) 28 30 #define PCC_ERROR_MASK (1 << 2) 29 31 30 - #define MAX_CPC_REG_ENT 19 32 + #define MAX_CPC_REG_ENT 21 31 33 32 34 /* CPPC specific PCC commands. */ 33 35 #define CMD_READ 0 ··· 93 91 AUTO_ACT_WINDOW, 94 92 ENERGY_PERF, 95 93 REFERENCE_PERF, 94 + LOWEST_FREQ, 95 + NOMINAL_FREQ, 96 96 }; 97 97 98 98 /* ··· 108 104 u32 nominal_perf; 109 105 u32 lowest_perf; 110 106 u32 lowest_nonlinear_perf; 107 + u32 lowest_freq; 108 + u32 nominal_freq; 111 109 }; 112 110 113 111 struct cppc_perf_ctrls {
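These defines pair each supported _CPC revision with an exact entry count (v2: 21 entries, v3: 23, the two extra being the lowest/nominal frequency registers), which is what the new is_cppc_supported() check in cppc_acpi.c enforces. A userspace rendering of that check, using local copies of the defines:

```c
#include <assert.h>
#include <stdbool.h>

#define CPPC_V2_REV	2
#define CPPC_V3_REV	3
#define CPPC_V2_NUM_ENT	21
#define CPPC_V3_NUM_ENT	23

/* A revision is usable only with its exact expected entry count. */
static bool is_cppc_supported(int revision, int num_ent)
{
	int expected;

	switch (revision) {
	case CPPC_V2_REV:
		expected = CPPC_V2_NUM_ENT;
		break;
	case CPPC_V3_REV:
		expected = CPPC_V3_NUM_ENT;
		break;
	default:
		return false;	/* unknown revision */
	}
	return expected == num_ent;
}
```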
+5
include/acpi/platform/aclinux.h
··· 102 102 103 103 #define acpi_cache_t struct kmem_cache 104 104 #define acpi_spinlock spinlock_t * 105 + #define acpi_raw_spinlock raw_spinlock_t * 105 106 #define acpi_cpu_flags unsigned long 106 107 107 108 /* Use native linux version of acpi_os_allocate_zeroed */ ··· 120 119 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_acquire_object 121 120 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_get_thread_id 122 121 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_create_lock 122 + #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_create_raw_lock 123 + #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_delete_raw_lock 124 + #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_acquire_raw_lock 125 + #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_release_raw_lock 123 126 124 127 /* 125 128 * OSL interfaces used by debugger/disassembler
+30
include/acpi/platform/aclinuxex.h
··· 90 90 lock ? AE_OK : AE_NO_MEMORY; \ 91 91 }) 92 92 93 + 94 + #define acpi_os_create_raw_lock(__handle) \ 95 + ({ \ 96 + raw_spinlock_t *lock = ACPI_ALLOCATE(sizeof(*lock)); \ 97 + if (lock) { \ 98 + *(__handle) = lock; \ 99 + raw_spin_lock_init(*(__handle)); \ 100 + } \ 101 + lock ? AE_OK : AE_NO_MEMORY; \ 102 + }) 103 + 104 + static inline acpi_cpu_flags acpi_os_acquire_raw_lock(acpi_raw_spinlock lockp) 105 + { 106 + acpi_cpu_flags flags; 107 + 108 + raw_spin_lock_irqsave(lockp, flags); 109 + return flags; 110 + } 111 + 112 + static inline void acpi_os_release_raw_lock(acpi_raw_spinlock lockp, 113 + acpi_cpu_flags flags) 114 + { 115 + raw_spin_unlock_irqrestore(lockp, flags); 116 + } 117 + 118 + static inline void acpi_os_delete_raw_lock(acpi_raw_spinlock handle) 119 + { 120 + ACPI_FREE(handle); 121 + } 122 + 93 123 static inline u8 acpi_os_readable(void *pointer, acpi_size length) 94 124 { 95 125 return TRUE;
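The acpiosxf.h side of this change maps every acpi_os_*_raw_lock() call back to the plain lock primitive unless the host defines ACPI_USE_ALTERNATE_PROTOTYPE_&lt;name&gt;, which aclinux.h now does so that aclinuxex.h can supply the raw_spinlock_t implementations the RT patch needs. The opt-in/fallback mechanism in miniature — counters stand in for locks, and the macro name is shortened for the sketch:

```c
#include <assert.h>

static int plain_acquires, raw_acquires;

/* The always-available plain primitive. */
static void os_acquire_lock(void)
{
	plain_acquires++;
}

/* Comment this out to exercise the fallback path instead. */
#define USE_ALTERNATE_PROTOTYPE_os_acquire_raw_lock

#ifdef USE_ALTERNATE_PROTOTYPE_os_acquire_raw_lock
/* Host opted in: a distinct raw-lock implementation. */
static void os_acquire_raw_lock(void)
{
	raw_acquires++;
}
#else
/* Default: the raw-lock entry point is just the plain lock. */
#define os_acquire_raw_lock() os_acquire_lock()
#endif
```

Because the fallback is a macro rather than a wrapper function, hosts that never opt in pay nothing for the extra entry points.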
+1
include/linux/acpi.h
··· 578 578 579 579 extern void acpi_early_init(void); 580 580 extern void acpi_subsystem_init(void); 581 + extern void arch_post_acpi_subsys_init(void); 581 582 582 583 extern int acpi_nvs_register(__u64 start, __u64 size); 583 584
+17
include/linux/platform_data/clk-st.h
··· 1 + /* SPDX-License-Identifier: MIT */ 2 + /* 3 + * clock framework for AMD Stoney based clock 4 + * 5 + * Copyright 2018 Advanced Micro Devices, Inc. 6 + */ 7 + 8 + #ifndef __CLK_ST_H 9 + #define __CLK_ST_H 10 + 11 + #include <linux/compiler.h> 12 + 13 + struct st_clk_data { 14 + void __iomem *base; 15 + }; 16 + 17 + #endif /* __CLK_ST_H */