Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'platform-drivers-x86-v6.9-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86

Pull x86 platform driver updates from Ilpo Järvinen:

- New acer-wmi HW support

- Support for new revision of amd/pmf heartbeat notify

- Correctly handle asus-wmi HW without LEDs

- fujitsu-laptop battery charge control support

- Support for new hp-wmi thermal profiles

- Support ideapad-laptop refresh rate key

- Put intel/pmc AI accelerator (GNA) into D3 if it has no driver to
  allow entry into low-power modes, and temporarily remove Lunar Lake
  SSRAM support because breaking FW changes cause probe failures
  (further breaking FW changes are still pending)

- Report pmc/punit_atom devices that prevent reaching low power levels

- Surface Fan speed function support

- Support for more special keys and complete the list of models with
  non-standard fan registers in thinkpad_acpi

- New DMI touchscreen HW support

- Continued modernization efforts of wmi

- Removal of the obsolete ledtrig-audio call and the related dependency

- Debug & metrics interface improvements

- Miscellaneous cleanups / fixes / improvements

* tag 'platform-drivers-x86-v6.9-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86: (87 commits)
platform/x86/intel/pmc: Improve PKGC residency counters debug
platform/x86: asus-wmi: Consider device is absent when the read is ~0
Documentation/x86/amd/hsmp: Updating urls
platform/mellanox: mlxreg-hotplug: Remove redundant NULL-check
platform/x86/amd/pmf: Update sps power thermals according to the platform-profiles
platform/x86/amd/pmf: Add support to get sps default APTS index values
platform/x86/amd/pmf: Add support to get APTS index numbers for static slider
platform/x86/amd/pmf: Add support to notify sbios heart beat event
platform/x86/amd/pmf: Add support to get sbios requests in PMF driver
platform/x86/amd/pmf: Disable debugfs support for querying power thermals
platform/x86/amd/pmf: Differentiate PMF ACPI versions
x86/platform/atom: Check state of Punit managed devices on s2idle
platform/x86: pmc_atom: Check state of PMC clocks on s2idle
platform/x86: pmc_atom: Check state of PMC managed devices on s2idle
platform/x86: pmc_atom: Annotate d3_sts register bit defines
clk: x86: Move clk-pmc-atom register defines to include/linux/platform_data/x86/pmc_atom.h
platform/x86: make fw_attr_class constant
platform/x86/intel/tpmi: Change vsec offset to u64
platform/x86: intel_scu_pcidrv: Remove unused intel-mid.h
platform/x86: intel_scu_wdt: Remove unused intel-mid.h
...

+1843 -708
+6 -1
Documentation/admin-guide/laptops/thinkpad-acpi.rst
···
444 444
445 445 0x1008 0x07 FN+F8 IBM: toggle screen expand
446 446             Lenovo: configure UltraNav,
447 -               or toggle screen expand
447 +               or toggle screen expand.
448 +               On newer platforms (2024+)
449 +               replaced by 0x131f (see below)
448 450
449 451 0x1009 0x08 FN+F9 -
450 452
···
505 503 0x1018 0x17 THINKPAD ThinkPad/Access IBM/Lenovo key
506 504
507 505 0x1019 0x18 unknown
506 +
507 + 0x131f ... FN+F8 Platform Mode change.
508 +            Implemented in driver.
508 509
509 510 ... ... ...
510 511
+4 -3
Documentation/arch/x86/amd_hsmp.rst
···
13 13
14 14 More details on the interface can be found in chapter
15 15 "7 Host System Management Port (HSMP)" of the family/model PPR
16 - Eg: https://www.amd.com/system/files/TechDocs/55898_B1_pub_0.50.zip
16 + Eg: https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/55898_B1_pub_0_50.zip
17 +
17 18
18 19 HSMP interface is supported on EPYC server CPU models only.
19 20
···
98 97
99 98 More details on the interface and message definitions can be found in chapter
100 99 "7 Host System Management Port (HSMP)" of the respective family/model PPR
101 - eg: https://www.amd.com/system/files/TechDocs/55898_B1_pub_0.50.zip
100 + eg: https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/55898_B1_pub_0_50.zip
102 101
103 102 User space C-APIs are made available by linking against the esmi library,
104 - which is provided by the E-SMS project https://developer.amd.com/e-sms/.
103 + which is provided by the E-SMS project https://www.amd.com/en/developer/e-sms.html.
105 104 See: https://github.com/amd/esmi_ib_library
+4 -1
Documentation/wmi/acpi-interface.rst
···
93 93 ----------------
94 94
95 95 Used to retrieve additional WMI event data, its single parameter is a integer
96 -  holding the notification ID of the event.
96 +  holding the notification ID of the event. This method should be evaluated every
97 +  time an ACPI notification is received, since some ACPI implementations use a
98 +  queue to store WMI event data items. This queue will overflow after a couple
99 +  of WMI events are received without retrieving the associated WMI event data.
+53 -1
arch/x86/platform/atom/punit_atom_debug.c
···
7 7 * Copyright (c) 2015, Intel Corporation.
8 8 */
9 9
10 + #define pr_fmt(fmt) "punit_atom: " fmt
11 +
12 + #include <linux/acpi.h>
10 13 #include <linux/module.h>
11 14 #include <linux/init.h>
12 15 #include <linux/device.h>
···
120 117     debugfs_remove_recursive(punit_dbg_file);
121 118 }
122 119
120 + #if defined(CONFIG_ACPI) && defined(CONFIG_SUSPEND)
121 + static const struct punit_device *punit_dev;
122 +
123 + static void punit_s2idle_check(void)
124 + {
125 +     const struct punit_device *punit_devp;
126 +     u32 punit_pwr_status, dstate;
127 +     int status;
128 +
129 +     for (punit_devp = punit_dev; punit_devp->name; punit_devp++) {
130 +         /* Skip MIO, it is on till the very last moment */
131 +         if (punit_devp->reg == MIO_SS_PM)
132 +             continue;
133 +
134 +         status = iosf_mbi_read(BT_MBI_UNIT_PMC, MBI_REG_READ,
135 +                                punit_devp->reg, &punit_pwr_status);
136 +         if (status) {
137 +             pr_err("%s read failed\n", punit_devp->name);
138 +         } else {
139 +             dstate = (punit_pwr_status >> punit_devp->sss_pos) & 3;
140 +             if (!dstate)
141 +                 pr_err("%s is in D0 prior to s2idle\n", punit_devp->name);
142 +         }
143 +     }
144 + }
145 +
146 + static struct acpi_s2idle_dev_ops punit_s2idle_ops = {
147 +     .check = punit_s2idle_check,
148 + };
149 +
150 + static void punit_s2idle_check_register(struct punit_device *punit_device)
151 + {
152 +     punit_dev = punit_device;
153 +     acpi_register_lps0_dev(&punit_s2idle_ops);
154 + }
155 +
156 + static void punit_s2idle_check_unregister(void)
157 + {
158 +     acpi_unregister_lps0_dev(&punit_s2idle_ops);
159 + }
160 + #else
161 + static void punit_s2idle_check_register(struct punit_device *punit_device) {}
162 + static void punit_s2idle_check_unregister(void) {}
163 + #endif
164 +
123 165 #define X86_MATCH(model, data) \
124 166     X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, INTEL_FAM6_##model, \
125 167                                        X86_FEATURE_MWAIT, data)
···
179 131
180 132 static int __init punit_atom_debug_init(void)
181 133 {
134 +     struct punit_device *punit_device;
182 135     const struct x86_cpu_id *id;
183 136
184 137     id = x86_match_cpu(intel_punit_cpu_ids);
185 138     if (!id)
186 139         return -ENODEV;
187 140
188 -     punit_dbgfs_register((struct punit_device *)id->driver_data);
141 +     punit_device = (struct punit_device *)id->driver_data;
142 +     punit_dbgfs_register(punit_device);
143 +     punit_s2idle_check_register(punit_device);
189 144
190 145     return 0;
191 146 }
192 147
193 148 static void __exit punit_atom_debug_exit(void)
194 149 {
150 +     punit_s2idle_check_unregister();
195 151     punit_dbgfs_unregister();
196 152 }
197 153
+1 -12
drivers/clk/x86/clk-pmc-atom.c
···
11 11 #include <linux/err.h>
12 12 #include <linux/io.h>
13 13 #include <linux/platform_data/x86/clk-pmc-atom.h>
14 + #include <linux/platform_data/x86/pmc_atom.h>
14 15 #include <linux/platform_device.h>
15 16 #include <linux/slab.h>
16 17
17 18 #define PLT_CLK_NAME_BASE "pmc_plt_clk"
18 -
19 - #define PMC_CLK_CTL_OFFSET      0x60
20 - #define PMC_CLK_CTL_SIZE        4
21 - #define PMC_CLK_NUM             6
22 - #define PMC_CLK_CTL_GATED_ON_D3 0x0
23 - #define PMC_CLK_CTL_FORCE_ON    0x1
24 - #define PMC_CLK_CTL_FORCE_OFF   0x2
25 - #define PMC_CLK_CTL_RESERVED    0x3
26 - #define PMC_MASK_CLK_CTL        GENMASK(1, 0)
27 - #define PMC_MASK_CLK_FREQ       BIT(2)
28 - #define PMC_CLK_FREQ_XTAL       (0 << 2)   /* 25 MHz */
29 - #define PMC_CLK_FREQ_PLL        (1 << 2)   /* 19.2 MHz */
30 19
31 20 struct clk_plt_fixed {
32 21     struct clk_hw *clk;
+1
drivers/hwmon/dell-smm-hwmon.c
···
1600 1600     },
1601 1601     .id_table = dell_smm_wmi_id_table,
1602 1602     .probe = dell_smm_wmi_probe,
1603 +     .no_singleton = true,
1603 1604 };
1604 1605
1605 1606 /*
+7 -7
drivers/platform/mellanox/mlxbf-bootctl.c
···
463 463     if (res.a0)
464 464         return -EPERM;
465 465
466 -     return snprintf(buf, PAGE_SIZE, "0x%lx", res.a1);
466 +     return sysfs_emit(buf, "0x%lx", res.a1);
467 467 }
468 468
469 469 static ssize_t large_icm_store(struct device *dev,
···
581 581     }
582 582     mutex_unlock(&mfg_ops_lock);
583 583
584 -     return snprintf(buf, PAGE_SIZE, "%s", (char *)opn_data);
584 +     return sysfs_emit(buf, "%s", (char *)opn_data);
585 585 }
586 586
587 587 static ssize_t opn_store(struct device *dev,
···
632 632     }
633 633     mutex_unlock(&mfg_ops_lock);
634 634
635 -     return snprintf(buf, PAGE_SIZE, "%s", (char *)sku_data);
635 +     return sysfs_emit(buf, "%s", (char *)sku_data);
636 636 }
637 637
638 638 static ssize_t sku_store(struct device *dev,
···
683 683     }
684 684     mutex_unlock(&mfg_ops_lock);
685 685
686 -     return snprintf(buf, PAGE_SIZE, "%s", (char *)modl_data);
686 +     return sysfs_emit(buf, "%s", (char *)modl_data);
687 687 }
688 688
689 689 static ssize_t modl_store(struct device *dev,
···
734 734     }
735 735     mutex_unlock(&mfg_ops_lock);
736 736
737 -     return snprintf(buf, PAGE_SIZE, "%s", (char *)sn_data);
737 +     return sysfs_emit(buf, "%s", (char *)sn_data);
738 738 }
739 739
740 740 static ssize_t sn_store(struct device *dev,
···
785 785     }
786 786     mutex_unlock(&mfg_ops_lock);
787 787
788 -     return snprintf(buf, PAGE_SIZE, "%s", (char *)uuid_data);
788 +     return sysfs_emit(buf, "%s", (char *)uuid_data);
789 789 }
790 790
791 791 static ssize_t uuid_store(struct device *dev,
···
836 836     }
837 837     mutex_unlock(&mfg_ops_lock);
838 838
839 -     return snprintf(buf, PAGE_SIZE, "%s", (char *)rev_data);
839 +     return sysfs_emit(buf, "%s", (char *)rev_data);
840 840 }
841 841
842 842 static ssize_t rev_store(struct device *dev,
+139 -128
drivers/platform/mellanox/mlxbf-pmc.c
···
99 99 */
100 100 struct mlxbf_pmc_attribute {
101 101     struct device_attribute dev_attr;
102 -     int index;
103 -     int nr;
102 +     unsigned int index;
103 +     unsigned int nr;
104 104 };
105 105
106 106 /**
···
121 121     void __iomem *mmio_base;
122 122     size_t blk_size;
123 123     size_t counters;
124 -     int type;
124 +     unsigned int type;
125 125     struct mlxbf_pmc_attribute *attr_counter;
126 126     struct mlxbf_pmc_attribute *attr_event;
127 127     struct mlxbf_pmc_attribute attr_event_list;
···
149 149 */
150 150 struct mlxbf_pmc_context {
151 151     struct platform_device *pdev;
152 -     uint32_t total_blocks;
153 -     uint32_t tile_count;
154 -     uint8_t llt_enable;
155 -     uint8_t mss_enable;
156 -     uint32_t group_num;
152 +     u32 total_blocks;
153 +     u32 tile_count;
154 +     u8 llt_enable;
155 +     u8 mss_enable;
156 +     u32 group_num;
157 157     struct device *hwmon_dev;
158 158     const char *block_name[MLXBF_PMC_MAX_BLOCKS];
159 159     struct mlxbf_pmc_block_info block[MLXBF_PMC_MAX_BLOCKS];
160 160     const struct attribute_group *groups[MLXBF_PMC_MAX_BLOCKS];
161 161     bool svc_sreg_support;
162 -     uint32_t sreg_tbl_perf;
162 +     u32 sreg_tbl_perf;
163 163     unsigned int event_set;
164 164 };
165 165
···
169 169 * @evt_name: Name of the event
170 170 */
171 171 struct mlxbf_pmc_events {
172 -     int evt_num;
172 +     u32 evt_num;
173 173     char *evt_name;
174 174 };
175 175
···
865 865 static const char *mlxbf_pmc_svc_uuid_str = "89c036b4-e7d7-11e6-8797-001aca00bfc4";
866 866
867 867 /* Calls an SMC to access a performance register */
868 - static int mlxbf_pmc_secure_read(void __iomem *addr, uint32_t command,
869 -                                  uint64_t *result)
868 + static int mlxbf_pmc_secure_read(void __iomem *addr, u32 command, u64 *result)
870 869 {
871 870     struct arm_smccc_res res;
872 871     int status, err = 0;
···
891 892 }
892 893
893 894 /* Read from a performance counter */
894 - static int mlxbf_pmc_read(void __iomem *addr, uint32_t command,
895 -                           uint64_t *result)
895 + static int mlxbf_pmc_read(void __iomem *addr, u32 command, u64 *result)
896 896 {
897 897     if (pmc->svc_sreg_support)
898 898         return mlxbf_pmc_secure_read(addr, command, result);
···
905 907 }
906 908
907 909 /* Convenience function for 32-bit reads */
908 - static int mlxbf_pmc_readl(void __iomem *addr, uint32_t *result)
910 + static int mlxbf_pmc_readl(void __iomem *addr, u32 *result)
909 911 {
910 -     uint64_t read_out;
912 +     u64 read_out;
911 913     int status;
912 914
913 915     status = mlxbf_pmc_read(addr, MLXBF_PMC_READ_REG_32, &read_out);
914 916     if (status)
915 917         return status;
916 -     *result = (uint32_t)read_out;
918 +     *result = (u32)read_out;
917 919
918 920     return 0;
919 921 }
920 922
921 923 /* Calls an SMC to access a performance register */
922 - static int mlxbf_pmc_secure_write(void __iomem *addr, uint32_t command,
923 -                                   uint64_t value)
924 + static int mlxbf_pmc_secure_write(void __iomem *addr, u32 command, u64 value)
924 925 {
925 926     struct arm_smccc_res res;
926 927     int status, err = 0;
···
942 945 }
943 946
944 947 /* Write to a performance counter */
945 - static int mlxbf_pmc_write(void __iomem *addr, int command, uint64_t value)
948 + static int mlxbf_pmc_write(void __iomem *addr, int command, u64 value)
946 949 {
947 950     if (pmc->svc_sreg_support)
948 951         return mlxbf_pmc_secure_write(addr, command, value);
···
956 959 }
957 960
958 961 /* Check if the register offset is within the mapped region for the block */
959 - static bool mlxbf_pmc_valid_range(int blk_num, uint32_t offset)
962 + static bool mlxbf_pmc_valid_range(unsigned int blk_num, u32 offset)
960 963 {
961 964     if ((offset >= 0) && !(offset % MLXBF_PMC_REG_SIZE) &&
962 965         (offset + MLXBF_PMC_REG_SIZE <= pmc->block[blk_num].blk_size))
···
966 969 }
967 970
968 971 /* Get the event list corresponding to a certain block */
969 - static const struct mlxbf_pmc_events *mlxbf_pmc_event_list(const char *blk,
970 -                                                            int *size)
972 + static const struct mlxbf_pmc_events *mlxbf_pmc_event_list(const char *blk, size_t *psize)
971 973 {
972 974     const struct mlxbf_pmc_events *events;
975 +     size_t size;
973 976
974 977     if (strstr(blk, "tilenet")) {
975 978         events = mlxbf_pmc_hnfnet_events;
976 -         *size = ARRAY_SIZE(mlxbf_pmc_hnfnet_events);
979 +         size = ARRAY_SIZE(mlxbf_pmc_hnfnet_events);
977 980     } else if (strstr(blk, "tile")) {
978 981         events = mlxbf_pmc_hnf_events;
979 -         *size = ARRAY_SIZE(mlxbf_pmc_hnf_events);
982 +         size = ARRAY_SIZE(mlxbf_pmc_hnf_events);
980 983     } else if (strstr(blk, "triogen")) {
981 984         events = mlxbf_pmc_smgen_events;
982 -         *size = ARRAY_SIZE(mlxbf_pmc_smgen_events);
985 +         size = ARRAY_SIZE(mlxbf_pmc_smgen_events);
983 986     } else if (strstr(blk, "trio")) {
984 987         switch (pmc->event_set) {
985 988         case MLXBF_PMC_EVENT_SET_BF1:
986 989             events = mlxbf_pmc_trio_events_1;
987 -             *size = ARRAY_SIZE(mlxbf_pmc_trio_events_1);
990 +             size = ARRAY_SIZE(mlxbf_pmc_trio_events_1);
988 991             break;
989 992         case MLXBF_PMC_EVENT_SET_BF2:
990 993             events = mlxbf_pmc_trio_events_2;
991 -             *size = ARRAY_SIZE(mlxbf_pmc_trio_events_2);
994 +             size = ARRAY_SIZE(mlxbf_pmc_trio_events_2);
992 995             break;
993 996         default:
994 997             events = NULL;
995 -             *size = 0;
998 +             size = 0;
996 999             break;
997 1000         }
998 1001     } else if (strstr(blk, "mss")) {
···
1000 1003         case MLXBF_PMC_EVENT_SET_BF1:
1001 1004         case MLXBF_PMC_EVENT_SET_BF2:
1002 1005             events = mlxbf_pmc_mss_events_1;
1003 -             *size = ARRAY_SIZE(mlxbf_pmc_mss_events_1);
1006 +             size = ARRAY_SIZE(mlxbf_pmc_mss_events_1);
1004 1007             break;
1005 1008         case MLXBF_PMC_EVENT_SET_BF3:
1006 1009             events = mlxbf_pmc_mss_events_3;
1007 -             *size = ARRAY_SIZE(mlxbf_pmc_mss_events_3);
1010 +             size = ARRAY_SIZE(mlxbf_pmc_mss_events_3);
1008 1011             break;
1009 1012         default:
1010 1013             events = NULL;
1011 -             *size = 0;
1014 +             size = 0;
1012 1015             break;
1013 1016         }
1014 1017     } else if (strstr(blk, "ecc")) {
1015 1018         events = mlxbf_pmc_ecc_events;
1016 -         *size = ARRAY_SIZE(mlxbf_pmc_ecc_events);
1019 +         size = ARRAY_SIZE(mlxbf_pmc_ecc_events);
1017 1020     } else if (strstr(blk, "pcie")) {
1018 1021         events = mlxbf_pmc_pcie_events;
1019 -         *size = ARRAY_SIZE(mlxbf_pmc_pcie_events);
1022 +         size = ARRAY_SIZE(mlxbf_pmc_pcie_events);
1020 1023     } else if (strstr(blk, "l3cache")) {
1021 1024         events = mlxbf_pmc_l3c_events;
1022 -         *size = ARRAY_SIZE(mlxbf_pmc_l3c_events);
1025 +         size = ARRAY_SIZE(mlxbf_pmc_l3c_events);
1023 1026     } else if (strstr(blk, "gic")) {
1024 1027         events = mlxbf_pmc_smgen_events;
1025 -         *size = ARRAY_SIZE(mlxbf_pmc_smgen_events);
1028 +         size = ARRAY_SIZE(mlxbf_pmc_smgen_events);
1026 1029     } else if (strstr(blk, "smmu")) {
1027 1030         events = mlxbf_pmc_smgen_events;
1028 -         *size = ARRAY_SIZE(mlxbf_pmc_smgen_events);
1031 +         size = ARRAY_SIZE(mlxbf_pmc_smgen_events);
1029 1032     } else if (strstr(blk, "llt_miss")) {
1030 1033         events = mlxbf_pmc_llt_miss_events;
1031 -         *size = ARRAY_SIZE(mlxbf_pmc_llt_miss_events);
1034 +         size = ARRAY_SIZE(mlxbf_pmc_llt_miss_events);
1032 1035     } else if (strstr(blk, "llt")) {
1033 1036         events = mlxbf_pmc_llt_events;
1034 -         *size = ARRAY_SIZE(mlxbf_pmc_llt_events);
1037 +         size = ARRAY_SIZE(mlxbf_pmc_llt_events);
1035 1038     } else {
1036 1039         events = NULL;
1037 -         *size = 0;
1040 +         size = 0;
1038 1041     }
1039 1042
1043 +     if (psize)
1044 +         *psize = size;
1045 +
1040 1046     return events;
1047 + }
1048 +
1049 + static bool mlxbf_pmc_event_supported(const char *blk)
1050 + {
1051 +     return !!mlxbf_pmc_event_list(blk, NULL);
1041 1052 }
1042 1053
1043 1054 /* Get the event number given the name */
1044 1055 static int mlxbf_pmc_get_event_num(const char *blk, const char *evt)
1045 1056 {
1046 1057     const struct mlxbf_pmc_events *events;
1047 -     int i, size;
1058 +     unsigned int i;
1059 +     size_t size;
1048 1060
1049 1061     events = mlxbf_pmc_event_list(blk, &size);
1050 1062     if (!events)
···
1068 1062 }
1069 1063
1070 1064 /* Get the event number given the name */
1071 - static char *mlxbf_pmc_get_event_name(const char *blk, int evt)
1065 + static char *mlxbf_pmc_get_event_name(const char *blk, u32 evt)
1072 1066 {
1073 1067     const struct mlxbf_pmc_events *events;
1074 -     int i, size;
1068 +     unsigned int i;
1069 +     size_t size;
1075 1070
1076 1071     events = mlxbf_pmc_event_list(blk, &size);
1077 1072     if (!events)
···
1087 1080 }
1088 1081
1089 1082 /* Method to enable/disable/reset l3cache counters */
1090 - static int mlxbf_pmc_config_l3_counters(int blk_num, bool enable, bool reset)
1083 + static int mlxbf_pmc_config_l3_counters(unsigned int blk_num, bool enable, bool reset)
1091 1084 {
1092 -     uint32_t perfcnt_cfg = 0;
1085 +     u32 perfcnt_cfg = 0;
1093 1086
1094 1087     if (enable)
1095 1088         perfcnt_cfg |= MLXBF_PMC_L3C_PERF_CNT_CFG_EN;
···
1102 1095 }
1103 1096
1104 1097 /* Method to handle l3cache counter programming */
1105 - static int mlxbf_pmc_program_l3_counter(int blk_num, uint32_t cnt_num,
1106 -                                         uint32_t evt)
1098 + static int mlxbf_pmc_program_l3_counter(unsigned int blk_num, u32 cnt_num, u32 evt)
1107 1099 {
1108 -     uint32_t perfcnt_sel_1 = 0;
1109 -     uint32_t perfcnt_sel = 0;
1110 -     uint32_t *wordaddr;
1100 +     u32 perfcnt_sel_1 = 0, perfcnt_sel = 0, *wordaddr;
1111 1101     void __iomem *pmcaddr;
1112 1102     int ret;
···
1166 1162 }
1167 1163
1168 1164 /* Method to handle crspace counter programming */
1169 - static int mlxbf_pmc_program_crspace_counter(int blk_num, uint32_t cnt_num,
1170 -                                              uint32_t evt)
1165 + static int mlxbf_pmc_program_crspace_counter(unsigned int blk_num, u32 cnt_num, u32 evt)
1171 1166 {
1172 -     uint32_t word;
1173 1167     void *addr;
1168 +     u32 word;
1174 1169     int ret;
1175 1170
1176 1171     addr = pmc->block[blk_num].mmio_base +
···
1190 1187 }
1191 1188
1192 1189 /* Method to clear crspace counter value */
1193 - static int mlxbf_pmc_clear_crspace_counter(int blk_num, uint32_t cnt_num)
1190 + static int mlxbf_pmc_clear_crspace_counter(unsigned int blk_num, u32 cnt_num)
1194 1191 {
1195 1192     void *addr;
1196 1193
···
1202 1199 }
1203 1200
1204 1201 /* Method to program a counter to monitor an event */
1205 - static int mlxbf_pmc_program_counter(int blk_num, uint32_t cnt_num,
1206 -                                      uint32_t evt, bool is_l3)
1202 + static int mlxbf_pmc_program_counter(unsigned int blk_num, u32 cnt_num, u32 evt, bool is_l3)
1207 1203 {
1208 -     uint64_t perfctl, perfevt, perfmon_cfg;
1204 +     u64 perfctl, perfevt, perfmon_cfg;
1209 1205
1210 1206     if (cnt_num >= pmc->block[blk_num].counters)
1211 1207         return -ENODEV;
···
1265 1263 }
1266 1264
1267 1265 /* Method to handle l3 counter reads */
1268 - static int mlxbf_pmc_read_l3_counter(int blk_num, uint32_t cnt_num,
1269 -                                      uint64_t *result)
1266 + static int mlxbf_pmc_read_l3_counter(unsigned int blk_num, u32 cnt_num, u64 *result)
1270 1267 {
1271 -     uint32_t perfcnt_low = 0, perfcnt_high = 0;
1272 -     uint64_t value;
1268 +     u32 perfcnt_low = 0, perfcnt_high = 0;
1273 1269     int status;
1270 +     u64 value;
1274 1271
1275 1272     status = mlxbf_pmc_readl(pmc->block[blk_num].mmio_base +
1276 1273                              MLXBF_PMC_L3C_PERF_CNT_LOW +
···
1296 1295 }
1297 1296
1298 1297 /* Method to handle crspace counter reads */
1299 - static int mlxbf_pmc_read_crspace_counter(int blk_num, uint32_t cnt_num,
1300 -                                           uint64_t *result)
1298 + static int mlxbf_pmc_read_crspace_counter(unsigned int blk_num, u32 cnt_num, u64 *result)
1301 1299 {
1302 -     uint32_t value;
1303 1300     int status = 0;
1301 +     u32 value;
1304 1302
1305 1303     status = mlxbf_pmc_readl(pmc->block[blk_num].mmio_base +
1306 1304                              MLXBF_PMC_CRSPACE_PERFMON_VAL0(pmc->block[blk_num].counters) +
···
1313 1313 }
1314 1314
1315 1315 /* Method to read the counter value */
1316 - static int mlxbf_pmc_read_counter(int blk_num, uint32_t cnt_num, bool is_l3,
1317 -                                   uint64_t *result)
1316 + static int mlxbf_pmc_read_counter(unsigned int blk_num, u32 cnt_num, bool is_l3, u64 *result)
1318 1317 {
1319 -     uint32_t perfcfg_offset, perfval_offset;
1320 -     uint64_t perfmon_cfg;
1318 +     u32 perfcfg_offset, perfval_offset;
1319 +     u64 perfmon_cfg;
1321 1320     int status;
1322 1321
1323 1322     if (cnt_num >= pmc->block[blk_num].counters)
···
1350 1351 }
1351 1352
1352 1353 /* Method to read L3 block event */
1353 - static int mlxbf_pmc_read_l3_event(int blk_num, uint32_t cnt_num,
1354 -                                    uint64_t *result)
1354 + static int mlxbf_pmc_read_l3_event(unsigned int blk_num, u32 cnt_num, u64 *result)
1355 1355 {
1356 -     uint32_t perfcnt_sel = 0, perfcnt_sel_1 = 0;
1357 -     uint32_t *wordaddr;
1356 +     u32 perfcnt_sel = 0, perfcnt_sel_1 = 0, *wordaddr;
1358 1357     void __iomem *pmcaddr;
1359 -     uint64_t evt;
1358 +     u64 evt;
1360 1359
1361 1360     /* Select appropriate register information */
1362 1361     switch (cnt_num) {
···
1402 1405 }
1403 1406
1404 1407 /* Method to read crspace block event */
1405 - static int mlxbf_pmc_read_crspace_event(int blk_num, uint32_t cnt_num,
1406 -                                         uint64_t *result)
1408 + static int mlxbf_pmc_read_crspace_event(unsigned int blk_num, u32 cnt_num, u64 *result)
1407 1409 {
1408 -     uint32_t word, evt;
1410 +     u32 word, evt;
1409 1411     void *addr;
1410 1412     int ret;
1411 1413
···
1425 1429 }
1426 1430
1427 1431 /* Method to find the event currently being monitored by a counter */
1428 - static int mlxbf_pmc_read_event(int blk_num, uint32_t cnt_num, bool is_l3,
1429 -                                 uint64_t *result)
1432 + static int mlxbf_pmc_read_event(unsigned int blk_num, u32 cnt_num, bool is_l3, u64 *result)
1430 1433 {
1431 -     uint32_t perfcfg_offset, perfval_offset;
1432 -     uint64_t perfmon_cfg, perfevt;
1434 +     u32 perfcfg_offset, perfval_offset;
1435 +     u64 perfmon_cfg, perfevt;
1433 1436
1434 1437     if (cnt_num >= pmc->block[blk_num].counters)
1435 1438         return -EINVAL;
···
1464 1469 }
1465 1470
1466 1471 /* Method to read a register */
1467 - static int mlxbf_pmc_read_reg(int blk_num, uint32_t offset, uint64_t *result)
1472 + static int mlxbf_pmc_read_reg(unsigned int blk_num, u32 offset, u64 *result)
1468 1473 {
1469 -     uint32_t ecc_out;
1474 +     u32 ecc_out;
1470 1475
1471 1476     if (strstr(pmc->block_name[blk_num], "ecc")) {
1472 1477         if (mlxbf_pmc_readl(pmc->block[blk_num].mmio_base + offset,
···
1485 1490 }
1486 1491
1487 1492 /* Method to write to a register */
1488 - static int mlxbf_pmc_write_reg(int blk_num, uint32_t offset, uint64_t data)
1493 + static int mlxbf_pmc_write_reg(unsigned int blk_num, u32 offset, u64 data)
1489 1494 {
1490 1495     if (strstr(pmc->block_name[blk_num], "ecc")) {
1491 1496         return mlxbf_pmc_write(pmc->block[blk_num].mmio_base + offset,
···
1505 1510 {
1506 1511     struct mlxbf_pmc_attribute *attr_counter = container_of(
1507 1512         attr, struct mlxbf_pmc_attribute, dev_attr);
1508 -     int blk_num, cnt_num, offset;
1513 +     unsigned int blk_num, cnt_num;
1509 1514     bool is_l3 = false;
1510 -     uint64_t value;
1515 +     int offset;
1516 +     u64 value;
1511 1517
1512 1518     blk_num = attr_counter->nr;
1513 1519     cnt_num = attr_counter->index;
···
1540 1544 {
1541 1545     struct mlxbf_pmc_attribute *attr_counter = container_of(
1542 1546         attr, struct mlxbf_pmc_attribute, dev_attr);
1543 -     int blk_num, cnt_num, offset, err, data;
1547 +     unsigned int blk_num, cnt_num, data;
1544 1548     bool is_l3 = false;
1545 -     uint64_t evt_num;
1549 +     u64 evt_num;
1550 +     int offset;
1551 +     int err;
1546 1552
1547 1553     blk_num = attr_counter->nr;
1548 1554     cnt_num = attr_counter->index;
1549 1555
1550 -     err = kstrtoint(buf, 0, &data);
1556 +     err = kstrtouint(buf, 0, &data);
1551 1557     if (err < 0)
1552 1558         return err;
···
1578 1580         if (err)
1579 1581             return err;
1580 1582     } else if (pmc->block[blk_num].type == MLXBF_PMC_TYPE_CRSPACE) {
1581 -         if (sscanf(attr->attr.name, "counter%d", &cnt_num) != 1)
1583 +         if (sscanf(attr->attr.name, "counter%u", &cnt_num) != 1)
1582 1584             return -EINVAL;
1583 1585         err = mlxbf_pmc_clear_crspace_counter(blk_num, cnt_num);
1584 1586     } else
···
1593 1595 {
1594 1596     struct mlxbf_pmc_attribute *attr_event = container_of(
1595 1597         attr, struct mlxbf_pmc_attribute, dev_attr);
1596 -     int blk_num, cnt_num, err;
1598 +     unsigned int blk_num, cnt_num;
1597 1599     bool is_l3 = false;
1598 -     uint64_t evt_num;
1599 1600     char *evt_name;
1601 +     u64 evt_num;
1602 +     int err;
1600 1603
1601 1604     blk_num = attr_event->nr;
1602 1605     cnt_num = attr_event->index;
···
1623 1624 {
1624 1625     struct mlxbf_pmc_attribute *attr_event = container_of(
1625 1626         attr, struct mlxbf_pmc_attribute, dev_attr);
1626 -     int blk_num, cnt_num, evt_num, err;
1627 +     unsigned int blk_num, cnt_num;
1627 1628     bool is_l3 = false;
1629 +     int evt_num;
1630 +     int err;
1628 1631
1629 1632     blk_num = attr_event->nr;
1630 1633     cnt_num = attr_event->index;
···
1637 1636         if (evt_num < 0)
1638 1637             return -EINVAL;
1639 1638     } else {
1640 -         err = kstrtoint(buf, 0, &evt_num);
1639 +         err = kstrtouint(buf, 0, &evt_num);
1641 1640         if (err < 0)
1642 1641             return err;
1643 1642     }
···
1659 1658 {
1660 1659     struct mlxbf_pmc_attribute *attr_event_list = container_of(
1661 1660         attr, struct mlxbf_pmc_attribute, dev_attr);
1662 -     int blk_num, i, size, len = 0, ret = 0;
1663 1661     const struct mlxbf_pmc_events *events;
1664 1662     char e_info[MLXBF_PMC_EVENT_INFO_LEN];
1663 +     unsigned int blk_num, i, len = 0;
1664 +     size_t size;
1665 +     int ret = 0;
1665 1666
1666 1667     blk_num = attr_event_list->nr;
1667 1668
···
1689 1686 {
1690 1687     struct mlxbf_pmc_attribute *attr_enable = container_of(
1691 1688         attr, struct mlxbf_pmc_attribute, dev_attr);
1692 -     uint32_t perfcnt_cfg, word;
1693 -     int blk_num, value;
1689 +     unsigned int blk_num, value;
1690 +     u32 perfcnt_cfg, word;
1694 1691
1695 1692     blk_num = attr_enable->nr;
1696 1693
···
1710 1707         value = FIELD_GET(MLXBF_PMC_L3C_PERF_CNT_CFG_EN, perfcnt_cfg);
1711 1708     }
1712 1709
1713 -     return sysfs_emit(buf, "%d\n", value);
1710 +     return sysfs_emit(buf, "%u\n", value);
1714 1711 }
1715 1712
1716 1713 /* Store function for "enable" sysfs files - only for l3cache & crspace */
···
1720 1717 {
1721 1718     struct mlxbf_pmc_attribute *attr_enable = container_of(
1722 1719         attr, struct mlxbf_pmc_attribute, dev_attr);
1723 -     int err, en, blk_num;
1724 -     uint32_t word;
1720 +     unsigned int en, blk_num;
1721 +     u32 word;
1722 +     int err;
1725 1723
1726 1724     blk_num = attr_enable->nr;
1727 1725
1728 -     err = kstrtoint(buf, 0, &en);
1726 +     err = kstrtouint(buf, 0, &en);
1729 1727     if (err < 0)
1730 1728         return err;
1731 1729
···
1764 1760 }
1765 1761
1766 1762 /* Populate attributes for blocks with counters to monitor performance */
1767 - static int mlxbf_pmc_init_perftype_counter(struct device *dev, int blk_num)
1763 + static int mlxbf_pmc_init_perftype_counter(struct device *dev, unsigned int blk_num)
1768 1764 {
1769 1765     struct mlxbf_pmc_attribute *attr;
1770 -     int i = 0, j = 0;
1766 +     unsigned int i = 0, j = 0;
1767 +
1768 +     if (!mlxbf_pmc_event_supported(pmc->block_name[blk_num]))
1769 +         return -ENOENT;
1771 1770
1772 1771     /* "event_list" sysfs to list events supported by the block */
1773 1772     attr = &pmc->block[blk_num].attr_event_list;
···
1819 1812         attr->dev_attr.store = mlxbf_pmc_counter_store;
1820 1813         attr->index = j;
1821 1814         attr->nr = blk_num;
1822 -         attr->dev_attr.attr.name = devm_kasprintf(dev, GFP_KERNEL,
1823 -                                                   "counter%d", j);
1815 +         attr->dev_attr.attr.name = devm_kasprintf(dev, GFP_KERNEL, "counter%u", j);
1824 1816         if (!attr->dev_attr.attr.name)
1825 1817             return -ENOMEM;
1826 1818         pmc->block[blk_num].block_attr[++i] = &attr->dev_attr.attr;
···
1831 1825         attr->dev_attr.store = mlxbf_pmc_event_store;
1832 1826         attr->index = j;
1833 1827         attr->nr = blk_num;
1834 -         attr->dev_attr.attr.name = devm_kasprintf(dev, GFP_KERNEL,
1835 -                                                   "event%d", j);
1828 +         attr->dev_attr.attr.name = devm_kasprintf(dev, GFP_KERNEL, "event%u", j);
1836 1829         if (!attr->dev_attr.attr.name)
1837 1830             return -ENOMEM;
1838 1831         pmc->block[blk_num].block_attr[++i] = &attr->dev_attr.attr;
···
1842 1837 }
1843 1838
1844 1839 /* Populate attributes for blocks with registers to monitor performance */
1845 - static int mlxbf_pmc_init_perftype_reg(struct device *dev, int blk_num)
1840 + static int mlxbf_pmc_init_perftype_reg(struct device *dev, unsigned int blk_num)
1846 1841 {
1847 -     struct mlxbf_pmc_attribute *attr;
1848 1842     const struct mlxbf_pmc_events *events;
1849 -     int i = 0, j = 0;
1843 +     struct mlxbf_pmc_attribute *attr;
1844 +     unsigned int i = 0;
1845 +     size_t count = 0;
1850 1846
1851 -     events = mlxbf_pmc_event_list(pmc->block_name[blk_num], &j);
1847 +     events = mlxbf_pmc_event_list(pmc->block_name[blk_num], &count);
1852 1848     if (!events)
1853 -         return -EINVAL;
1849 +         return -ENOENT;
1854 1850
1855 1851     pmc->block[blk_num].attr_event = devm_kcalloc(
1856 -         dev, j, sizeof(struct mlxbf_pmc_attribute), GFP_KERNEL);
1852 +         dev, count, sizeof(struct mlxbf_pmc_attribute), GFP_KERNEL);
1857 1853     if (!pmc->block[blk_num].attr_event)
1858 1854         return -ENOMEM;
1859 1855
1860 -     while (j > 0) {
1861 -         --j;
1862 -         attr = &pmc->block[blk_num].attr_event[j];
1856 +     while (count > 0) {
1857 +         --count;
1858 +         attr = &pmc->block[blk_num].attr_event[count];
1863 1859         attr->dev_attr.attr.mode = 0644;
1864 1860         attr->dev_attr.show = mlxbf_pmc_counter_show;
1865 1861         attr->dev_attr.store = mlxbf_pmc_counter_store;
1866 1862         attr->nr = blk_num;
1867 1863         attr->dev_attr.attr.name = devm_kasprintf(dev, GFP_KERNEL,
1868 -                                                   events[j].evt_name);
1864 +                                                   events[count].evt_name);
1869 1865         if (!attr->dev_attr.attr.name)
1870 1866             return -ENOMEM;
1871 1867         pmc->block[blk_num].block_attr[i] = &attr->dev_attr.attr;
···
1878 1872 }
1879 1873
1880 1874 /* Helper to create the bfperf sysfs sub-directories and files */
1881 - static int mlxbf_pmc_create_groups(struct device *dev, int blk_num)
1875 + static int mlxbf_pmc_create_groups(struct device *dev, unsigned int blk_num)
1882 1876 {
1883 1877     int err;
1884 1878
···
1889 1883     else if (pmc->block[blk_num].type == MLXBF_PMC_TYPE_REGISTER)
1890 1884         err = mlxbf_pmc_init_perftype_reg(dev, blk_num);
1891 1885     else
1892 -         err = -EINVAL;
1886 +         err = -ENOENT;
1893 1887
1894 1888     if (err)
1895 1889         return err;
···
1920 1914 /* Helper to map the Performance Counters from the varios blocks */
1921 1915 static int mlxbf_pmc_map_counters(struct device *dev)
1922 1916 {
1923 -     uint64_t info[MLXBF_PMC_INFO_SZ];
1924 -     int i, tile_num, ret;
1917 +     u64 info[MLXBF_PMC_INFO_SZ];
1918 +     unsigned int tile_num, i;
1919 +     int ret;
1925 1920
1926 1921     for (i = 0; i < pmc->total_blocks; ++i) {
1927 1922         /* Create sysfs for tiles only if block number < tile_count */
1928 1923         if (strstr(pmc->block_name[i], "tilenet")) {
1929 -             if (sscanf(pmc->block_name[i], "tilenet%d", &tile_num) != 1)
1924 +             if (sscanf(pmc->block_name[i], "tilenet%u", &tile_num) != 1)
1930 1925                 continue;
1931 1926
1932 1927             if (tile_num >= pmc->tile_count)
1933 1928                 continue;
1934 1929         } else if (strstr(pmc->block_name[i], "tile")) {
1935 -             if (sscanf(pmc->block_name[i], "tile%d", &tile_num) != 1)
1930 +             if (sscanf(pmc->block_name[i], "tile%u", &tile_num) != 1)
1936 1931                 continue;
1937 1932
1938 1933             if (tile_num >= pmc->tile_count)
···
1943 1936         /* Create sysfs only for enabled MSS blocks */
1944 1937         if (strstr(pmc->block_name[i], "mss") &&
1945 1938             pmc->event_set == MLXBF_PMC_EVENT_SET_BF3) {
1946 -             int mss_num;
1939 +             unsigned int mss_num;
1947 1940
1948 -             if (sscanf(pmc->block_name[i], "mss%d", &mss_num) != 1)
1941 +             if (sscanf(pmc->block_name[i], "mss%u", &mss_num) != 1)
1949 1942                 continue;
1950 1943
1951 1944             if (!((pmc->mss_enable >> mss_num) & 0x1))
···
1954 1947
1955 1948         /* Create sysfs only for enabled LLT blocks */
1956 1949         if (strstr(pmc->block_name[i], "llt_miss")) {
1957 -             int llt_num;
1950 +             unsigned int llt_num;
1958 1951
1959 -             if (sscanf(pmc->block_name[i], "llt_miss%d", &llt_num) != 1)
1952 +             if (sscanf(pmc->block_name[i], "llt_miss%u", &llt_num) != 1)
1960 1953                 continue;
1961 1954
1962 1955             if (!((pmc->llt_enable >> llt_num) & 0x1))
1963 1956                 continue;
1964 1957         } else if (strstr(pmc->block_name[i], "llt")) {
1965 -             int llt_num;
1958 +             unsigned int llt_num;
1966 1959
1967 -             if (sscanf(pmc->block_name[i], "llt%d", &llt_num) != 1)
1960 +             if (sscanf(pmc->block_name[i], "llt%u", &llt_num) != 1)
1968 1961                 continue;
1969 1962
1970 1963             if (!((pmc->llt_enable >> llt_num) & 0x1))
···
1994 1987             return -ENOMEM;
1995 1988
1996 1989         ret = mlxbf_pmc_create_groups(dev, i);
1990 +         if (ret == -ENOENT) {
1991 +             dev_warn(dev, "ignoring unsupported block: '%s'\n", pmc->block_name[i]);
1992 +             continue;
1993 +         }
1997 1994         if (ret)
1998 1995             return ret;
1999 1996     }
drivers/platform/mellanox/mlxreg-hotplug.c  -14
···
 	u32 regval, bit;
 	int ret;
 
-	/*
-	 * Validate if item related to received signal type is valid.
-	 * It should never happen, excepted the situation when some
-	 * piece of hardware is broken. In such situation just produce
-	 * error message and return. Caller must continue to handle the
-	 * signals from other devices if any.
-	 */
-	if (unlikely(!item)) {
-		dev_err(priv->dev, "False signal: at offset:mask 0x%02x:0x%02x.\n",
-			item->reg, item->mask);
-
-		return;
-	}
-
 	/* Mask event. */
 	ret = regmap_write(priv->regmap, item->reg + MLXREG_HOTPLUG_MASK_OFF,
 			   0);
drivers/platform/surface/surface_aggregator_registry.c  +7
···
 	.parent = &ssam_node_root,
 };
 
+/* Fan speed function. */
+static const struct software_node ssam_node_fan_speed = {
+	.name = "ssam:01:05:01:01:01",
+	.parent = &ssam_node_root,
+};
+
 /* Tablet-mode switch via KIP subsystem. */
 static const struct software_node ssam_node_kip_tablet_switch = {
 	.name = "ssam:01:0e:01:00:01",
···
 	&ssam_node_bat_ac,
 	&ssam_node_bat_main,
 	&ssam_node_tmp_pprof,
+	&ssam_node_fan_speed,
 	&ssam_node_pos_tablet_switch,
 	&ssam_node_hid_kip_keyboard,
 	&ssam_node_hid_kip_penstash,
drivers/platform/x86/Kconfig  +1 -6
···
 	depends on INPUT
 	select INPUT_SPARSEKMAP
 	select LEDS_CLASS
-	select LEDS_TRIGGERS
-	select LEDS_TRIGGER_AUDIO
 	select NEW_LEDS
 	help
 	  This driver provides support for Huawei WMI hotkeys, battery charge
···
 	select INPUT_SPARSEKMAP
 	select LEDS_CLASS
 	select NEW_LEDS
-	select LEDS_TRIGGERS
-	select LEDS_TRIGGER_AUDIO
 	select ACPI_PLATFORM_PROFILE
 	help
 	  Say Y here if you have a WMI aware Asus laptop (like Eee PCs or new
···
 	depends on ACPI
 	depends on INPUT
 	depends on BACKLIGHT_CLASS_DEVICE
+	depends on ACPI_BATTERY
 	depends on ACPI_VIDEO || ACPI_VIDEO = n
 	select INPUT_SPARSEKMAP
 	select NEW_LEDS
···
 	select NVRAM
 	select NEW_LEDS
 	select LEDS_CLASS
-	select LEDS_TRIGGERS
-	select LEDS_TRIGGER_AUDIO
 	help
 	  This is a driver for the IBM and Lenovo ThinkPad laptops. It adds
 	  support for Fn-Fx key combinations, Bluetooth control, video
drivers/platform/x86/acer-wmi.c  +16 -1
···
 static u16 commun_func_bitmap;
 static u8 commun_fn_key_number;
 static bool cycle_gaming_thermal_profile = true;
+static bool predator_v4;
 
 module_param(mailled, int, 0444);
 module_param(brightness, int, 0444);
···
 module_param(force_caps, int, 0444);
 module_param(ec_raw_mode, bool, 0444);
 module_param(cycle_gaming_thermal_profile, bool, 0644);
+module_param(predator_v4, bool, 0444);
 MODULE_PARM_DESC(mailled, "Set initial state of Mail LED");
 MODULE_PARM_DESC(brightness, "Set initial LCD backlight brightness");
 MODULE_PARM_DESC(threeg, "Set initial state of 3G hardware");
···
 MODULE_PARM_DESC(ec_raw_mode, "Enable EC raw mode");
 MODULE_PARM_DESC(cycle_gaming_thermal_profile,
 	"Set thermal mode key in cycle mode. Disabling it sets the mode key in turbo toggle mode");
+MODULE_PARM_DESC(predator_v4,
+	"Enable features for predator laptops that use predator sense v4");
 
 struct acer_data {
 	int mailled;
···
 		.driver_data = &quirk_acer_predator_v4,
 	},
 	{
+		.callback = dmi_matched,
+		.ident = "Acer Predator PH16-71",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "Predator PH16-71"),
+		},
+		.driver_data = &quirk_acer_predator_v4,
+	},
+	{
 		.callback = set_force_caps,
 		.ident = "Acer Aspire Switch 10E SW3-016",
 		.matches = {
···
 /* Find which quirks are needed for a particular vendor/model pair */
 static void __init find_quirks(void)
 {
-	if (!force_series) {
+	if (predator_v4) {
+		quirks = &quirk_acer_predator_v4;
+	} else if (!force_series) {
 		dmi_check_system(acer_quirks);
 		dmi_check_system(non_acer_quirks);
 	} else if (force_series == 2490) {
drivers/platform/x86/amd/Kconfig  +1 -1
···
 
 config AMD_HSMP
 	tristate "AMD HSMP Driver"
-	depends on AMD_NB && X86_64
+	depends on AMD_NB && X86_64 && ACPI
 	help
 	  The driver provides a way for user space tools to monitor and manage
 	  system management functionality on EPYC server CPUs from AMD.
drivers/platform/x86/amd/hsmp.c  +464 -120
··· 18 18 #include <linux/pci.h> 19 19 #include <linux/platform_device.h> 20 20 #include <linux/semaphore.h> 21 + #include <linux/acpi.h> 21 22 22 23 #define DRIVER_NAME "amd_hsmp" 23 - #define DRIVER_VERSION "2.0" 24 + #define DRIVER_VERSION "2.2" 25 + #define ACPI_HSMP_DEVICE_HID "AMDI0097" 24 26 25 27 /* HSMP Status / Error codes */ 26 28 #define HSMP_STATUS_NOT_READY 0x00 ··· 42 40 * register into the SMN_INDEX register, and reads/writes the SMN_DATA reg. 43 41 * Below are required SMN address for HSMP Mailbox register offsets in SMU address space 44 42 */ 45 - #define SMN_HSMP_MSG_ID 0x3B10534 46 - #define SMN_HSMP_MSG_RESP 0x3B10980 47 - #define SMN_HSMP_MSG_DATA 0x3B109E0 43 + #define SMN_HSMP_BASE 0x3B00000 44 + #define SMN_HSMP_MSG_ID 0x0010534 45 + #define SMN_HSMP_MSG_ID_F1A_M0H 0x0010934 46 + #define SMN_HSMP_MSG_RESP 0x0010980 47 + #define SMN_HSMP_MSG_DATA 0x00109E0 48 48 49 49 #define HSMP_INDEX_REG 0xc4 50 50 #define HSMP_DATA_REG 0xc8 ··· 57 53 58 54 #define HSMP_ATTR_GRP_NAME_SIZE 10 59 55 56 + /* These are the strings specified in ACPI table */ 57 + #define MSG_IDOFF_STR "MsgIdOffset" 58 + #define MSG_ARGOFF_STR "MsgArgOffset" 59 + #define MSG_RESPOFF_STR "MsgRspOffset" 60 + 61 + #define MAX_AMD_SOCKETS 8 62 + 63 + struct hsmp_mbaddr_info { 64 + u32 base_addr; 65 + u32 msg_id_off; 66 + u32 msg_resp_off; 67 + u32 msg_arg_off; 68 + u32 size; 69 + }; 70 + 60 71 struct hsmp_socket { 61 72 struct bin_attribute hsmp_attr; 73 + struct hsmp_mbaddr_info mbinfo; 62 74 void __iomem *metric_tbl_addr; 75 + void __iomem *virt_base_addr; 63 76 struct semaphore hsmp_sem; 64 77 char name[HSMP_ATTR_GRP_NAME_SIZE]; 78 + struct pci_dev *root; 79 + struct device *dev; 65 80 u16 sock_ind; 66 81 }; 67 82 68 83 struct hsmp_plat_device { 69 84 struct miscdevice hsmp_device; 70 85 struct hsmp_socket *sock; 71 - struct device *dev; 72 86 u32 proto_ver; 73 87 u16 num_sockets; 88 + bool is_acpi_device; 89 + bool is_probed; 74 90 }; 75 91 76 92 static struct hsmp_plat_device 
plat_dev; 77 93 78 - static int amd_hsmp_rdwr(struct pci_dev *root, u32 address, 79 - u32 *value, bool write) 94 + static int amd_hsmp_pci_rdwr(struct hsmp_socket *sock, u32 offset, 95 + u32 *value, bool write) 80 96 { 81 97 int ret; 82 98 83 - ret = pci_write_config_dword(root, HSMP_INDEX_REG, address); 99 + if (!sock->root) 100 + return -ENODEV; 101 + 102 + ret = pci_write_config_dword(sock->root, HSMP_INDEX_REG, 103 + sock->mbinfo.base_addr + offset); 84 104 if (ret) 85 105 return ret; 86 106 87 - ret = (write ? pci_write_config_dword(root, HSMP_DATA_REG, *value) 88 - : pci_read_config_dword(root, HSMP_DATA_REG, value)); 107 + ret = (write ? pci_write_config_dword(sock->root, HSMP_DATA_REG, *value) 108 + : pci_read_config_dword(sock->root, HSMP_DATA_REG, value)); 89 109 90 110 return ret; 91 111 } 92 112 113 + static void amd_hsmp_acpi_rdwr(struct hsmp_socket *sock, u32 offset, 114 + u32 *value, bool write) 115 + { 116 + if (write) 117 + iowrite32(*value, sock->virt_base_addr + offset); 118 + else 119 + *value = ioread32(sock->virt_base_addr + offset); 120 + } 121 + 122 + static int amd_hsmp_rdwr(struct hsmp_socket *sock, u32 offset, 123 + u32 *value, bool write) 124 + { 125 + if (plat_dev.is_acpi_device) 126 + amd_hsmp_acpi_rdwr(sock, offset, value, write); 127 + else 128 + return amd_hsmp_pci_rdwr(sock, offset, value, write); 129 + 130 + return 0; 131 + } 132 + 93 133 /* 94 - * Send a message to the HSMP port via PCI-e config space registers. 134 + * Send a message to the HSMP port via PCI-e config space registers 135 + * or by writing to MMIO space. 95 136 * 96 137 * The caller is expected to zero out any unused arguments. 97 138 * If a response is expected, the number of response words should be greater than 0. ··· 144 95 * Returns 0 for success and populates the requested number of arguments. 145 96 * Returns a negative error code for failure. 
146 97 */ 147 - static int __hsmp_send_message(struct pci_dev *root, struct hsmp_message *msg) 98 + static int __hsmp_send_message(struct hsmp_socket *sock, struct hsmp_message *msg) 148 99 { 100 + struct hsmp_mbaddr_info *mbinfo; 149 101 unsigned long timeout, short_sleep; 150 102 u32 mbox_status; 151 103 u32 index; 152 104 int ret; 153 105 106 + mbinfo = &sock->mbinfo; 107 + 154 108 /* Clear the status register */ 155 109 mbox_status = HSMP_STATUS_NOT_READY; 156 - ret = amd_hsmp_rdwr(root, SMN_HSMP_MSG_RESP, &mbox_status, HSMP_WR); 110 + ret = amd_hsmp_rdwr(sock, mbinfo->msg_resp_off, &mbox_status, HSMP_WR); 157 111 if (ret) { 158 112 pr_err("Error %d clearing mailbox status register\n", ret); 159 113 return ret; ··· 165 113 index = 0; 166 114 /* Write any message arguments */ 167 115 while (index < msg->num_args) { 168 - ret = amd_hsmp_rdwr(root, SMN_HSMP_MSG_DATA + (index << 2), 116 + ret = amd_hsmp_rdwr(sock, mbinfo->msg_arg_off + (index << 2), 169 117 &msg->args[index], HSMP_WR); 170 118 if (ret) { 171 119 pr_err("Error %d writing message argument %d\n", ret, index); ··· 175 123 } 176 124 177 125 /* Write the message ID which starts the operation */ 178 - ret = amd_hsmp_rdwr(root, SMN_HSMP_MSG_ID, &msg->msg_id, HSMP_WR); 126 + ret = amd_hsmp_rdwr(sock, mbinfo->msg_id_off, &msg->msg_id, HSMP_WR); 179 127 if (ret) { 180 128 pr_err("Error %d writing message ID %u\n", ret, msg->msg_id); 181 129 return ret; ··· 192 140 timeout = jiffies + msecs_to_jiffies(HSMP_MSG_TIMEOUT); 193 141 194 142 while (time_before(jiffies, timeout)) { 195 - ret = amd_hsmp_rdwr(root, SMN_HSMP_MSG_RESP, &mbox_status, HSMP_RD); 143 + ret = amd_hsmp_rdwr(sock, mbinfo->msg_resp_off, &mbox_status, HSMP_RD); 196 144 if (ret) { 197 145 pr_err("Error %d reading mailbox status\n", ret); 198 146 return ret; ··· 227 175 */ 228 176 index = 0; 229 177 while (index < msg->response_sz) { 230 - ret = amd_hsmp_rdwr(root, SMN_HSMP_MSG_DATA + (index << 2), 178 + ret = amd_hsmp_rdwr(sock, 
mbinfo->msg_arg_off + (index << 2), 231 179 &msg->args[index], HSMP_RD); 232 180 if (ret) { 233 181 pr_err("Error %d reading response %u for message ID:%u\n", ··· 260 208 261 209 int hsmp_send_message(struct hsmp_message *msg) 262 210 { 263 - struct hsmp_socket *sock = &plat_dev.sock[msg->sock_ind]; 264 - struct amd_northbridge *nb; 211 + struct hsmp_socket *sock; 265 212 int ret; 266 213 267 214 if (!msg) 268 215 return -EINVAL; 269 - 270 - nb = node_to_amd_nb(msg->sock_ind); 271 - if (!nb || !nb->root) 272 - return -ENODEV; 273 - 274 216 ret = validate_message(msg); 275 217 if (ret) 276 218 return ret; 219 + 220 + if (!plat_dev.sock || msg->sock_ind >= plat_dev.num_sockets) 221 + return -ENODEV; 222 + sock = &plat_dev.sock[msg->sock_ind]; 277 223 278 224 /* 279 225 * The time taken by smu operation to complete is between ··· 283 233 if (ret < 0) 284 234 return ret; 285 235 286 - ret = __hsmp_send_message(nb->root, msg); 236 + ret = __hsmp_send_message(sock, msg); 287 237 288 238 up(&sock->hsmp_sem); 289 239 ··· 294 244 static int hsmp_test(u16 sock_ind, u32 value) 295 245 { 296 246 struct hsmp_message msg = { 0 }; 297 - struct amd_northbridge *nb; 298 - int ret = -ENODEV; 299 - 300 - nb = node_to_amd_nb(sock_ind); 301 - if (!nb || !nb->root) 302 - return ret; 247 + int ret; 303 248 304 249 /* 305 250 * Test the hsmp port by performing TEST command. 
The test message ··· 306 261 msg.args[0] = value; 307 262 msg.sock_ind = sock_ind; 308 263 309 - ret = __hsmp_send_message(nb->root, &msg); 264 + ret = hsmp_send_message(&msg); 310 265 if (ret) 311 266 return ret; 312 267 313 268 /* Check the response value */ 314 269 if (msg.args[0] != (value + 1)) { 315 - pr_err("Socket %d test message failed, Expected 0x%08X, received 0x%08X\n", 316 - sock_ind, (value + 1), msg.args[0]); 270 + dev_err(plat_dev.sock[sock_ind].dev, 271 + "Socket %d test message failed, Expected 0x%08X, received 0x%08X\n", 272 + sock_ind, (value + 1), msg.args[0]); 317 273 return -EBADE; 318 274 } 319 275 ··· 383 337 .compat_ioctl = hsmp_ioctl, 384 338 }; 385 339 340 + /* This is the UUID used for HSMP */ 341 + static const guid_t acpi_hsmp_uuid = GUID_INIT(0xb74d619d, 0x5707, 0x48bd, 342 + 0xa6, 0x9f, 0x4e, 0xa2, 343 + 0x87, 0x1f, 0xc2, 0xf6); 344 + 345 + static inline bool is_acpi_hsmp_uuid(union acpi_object *obj) 346 + { 347 + if (obj->type == ACPI_TYPE_BUFFER && obj->buffer.length == UUID_SIZE) 348 + return guid_equal((guid_t *)obj->buffer.pointer, &acpi_hsmp_uuid); 349 + 350 + return false; 351 + } 352 + 353 + static inline int hsmp_get_uid(struct device *dev, u16 *sock_ind) 354 + { 355 + char *uid; 356 + 357 + /* 358 + * UID (ID00, ID01..IDXX) is used for differentiating sockets, 359 + * read it and strip the "ID" part of it and convert the remaining 360 + * bytes to integer. 
361 + */ 362 + uid = acpi_device_uid(ACPI_COMPANION(dev)); 363 + 364 + return kstrtou16(uid + 2, 10, sock_ind); 365 + } 366 + 367 + static acpi_status hsmp_resource(struct acpi_resource *res, void *data) 368 + { 369 + struct hsmp_socket *sock = data; 370 + struct resource r; 371 + 372 + switch (res->type) { 373 + case ACPI_RESOURCE_TYPE_FIXED_MEMORY32: 374 + if (!acpi_dev_resource_memory(res, &r)) 375 + return AE_ERROR; 376 + if (!r.start || r.end < r.start || !(r.flags & IORESOURCE_MEM_WRITEABLE)) 377 + return AE_ERROR; 378 + sock->mbinfo.base_addr = r.start; 379 + sock->mbinfo.size = resource_size(&r); 380 + break; 381 + case ACPI_RESOURCE_TYPE_END_TAG: 382 + break; 383 + default: 384 + return AE_ERROR; 385 + } 386 + 387 + return AE_OK; 388 + } 389 + 390 + static int hsmp_read_acpi_dsd(struct hsmp_socket *sock) 391 + { 392 + struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER, NULL }; 393 + union acpi_object *guid, *mailbox_package; 394 + union acpi_object *dsd; 395 + acpi_status status; 396 + int ret = 0; 397 + int j; 398 + 399 + status = acpi_evaluate_object_typed(ACPI_HANDLE(sock->dev), "_DSD", NULL, 400 + &buf, ACPI_TYPE_PACKAGE); 401 + if (ACPI_FAILURE(status)) { 402 + dev_err(sock->dev, "Failed to read mailbox reg offsets from DSD table, err: %s\n", 403 + acpi_format_exception(status)); 404 + return -ENODEV; 405 + } 406 + 407 + dsd = buf.pointer; 408 + 409 + /* HSMP _DSD property should contain 2 objects. 410 + * 1. guid which is an acpi object of type ACPI_TYPE_BUFFER 411 + * 2. 
mailbox which is an acpi object of type ACPI_TYPE_PACKAGE 412 + * This mailbox object contains 3 more acpi objects of type 413 + * ACPI_TYPE_PACKAGE for holding msgid, msgresp, msgarg offsets 414 + * these packages inturn contain 2 acpi objects of type 415 + * ACPI_TYPE_STRING and ACPI_TYPE_INTEGER 416 + */ 417 + if (!dsd || dsd->type != ACPI_TYPE_PACKAGE || dsd->package.count != 2) { 418 + ret = -EINVAL; 419 + goto free_buf; 420 + } 421 + 422 + guid = &dsd->package.elements[0]; 423 + mailbox_package = &dsd->package.elements[1]; 424 + if (!is_acpi_hsmp_uuid(guid) || mailbox_package->type != ACPI_TYPE_PACKAGE) { 425 + dev_err(sock->dev, "Invalid hsmp _DSD table data\n"); 426 + ret = -EINVAL; 427 + goto free_buf; 428 + } 429 + 430 + for (j = 0; j < mailbox_package->package.count; j++) { 431 + union acpi_object *msgobj, *msgstr, *msgint; 432 + 433 + msgobj = &mailbox_package->package.elements[j]; 434 + msgstr = &msgobj->package.elements[0]; 435 + msgint = &msgobj->package.elements[1]; 436 + 437 + /* package should have 1 string and 1 integer object */ 438 + if (msgobj->type != ACPI_TYPE_PACKAGE || 439 + msgstr->type != ACPI_TYPE_STRING || 440 + msgint->type != ACPI_TYPE_INTEGER) { 441 + ret = -EINVAL; 442 + goto free_buf; 443 + } 444 + 445 + if (!strncmp(msgstr->string.pointer, MSG_IDOFF_STR, 446 + msgstr->string.length)) { 447 + sock->mbinfo.msg_id_off = msgint->integer.value; 448 + } else if (!strncmp(msgstr->string.pointer, MSG_RESPOFF_STR, 449 + msgstr->string.length)) { 450 + sock->mbinfo.msg_resp_off = msgint->integer.value; 451 + } else if (!strncmp(msgstr->string.pointer, MSG_ARGOFF_STR, 452 + msgstr->string.length)) { 453 + sock->mbinfo.msg_arg_off = msgint->integer.value; 454 + } else { 455 + ret = -ENOENT; 456 + goto free_buf; 457 + } 458 + } 459 + 460 + if (!sock->mbinfo.msg_id_off || !sock->mbinfo.msg_resp_off || 461 + !sock->mbinfo.msg_arg_off) 462 + ret = -EINVAL; 463 + 464 + free_buf: 465 + ACPI_FREE(buf.pointer); 466 + return ret; 467 + } 468 + 469 + 
static int hsmp_read_acpi_crs(struct hsmp_socket *sock) 470 + { 471 + acpi_status status; 472 + 473 + status = acpi_walk_resources(ACPI_HANDLE(sock->dev), METHOD_NAME__CRS, 474 + hsmp_resource, sock); 475 + if (ACPI_FAILURE(status)) { 476 + dev_err(sock->dev, "Failed to look up MP1 base address from CRS method, err: %s\n", 477 + acpi_format_exception(status)); 478 + return -EINVAL; 479 + } 480 + if (!sock->mbinfo.base_addr || !sock->mbinfo.size) 481 + return -EINVAL; 482 + 483 + /* The mapped region should be un cached */ 484 + sock->virt_base_addr = devm_ioremap_uc(sock->dev, sock->mbinfo.base_addr, 485 + sock->mbinfo.size); 486 + if (!sock->virt_base_addr) { 487 + dev_err(sock->dev, "Failed to ioremap MP1 base address\n"); 488 + return -ENOMEM; 489 + } 490 + 491 + return 0; 492 + } 493 + 494 + /* Parse the ACPI table to read the data */ 495 + static int hsmp_parse_acpi_table(struct device *dev, u16 sock_ind) 496 + { 497 + struct hsmp_socket *sock = &plat_dev.sock[sock_ind]; 498 + int ret; 499 + 500 + sock->sock_ind = sock_ind; 501 + sock->dev = dev; 502 + plat_dev.is_acpi_device = true; 503 + 504 + sema_init(&sock->hsmp_sem, 1); 505 + 506 + /* Read MP1 base address from CRS method */ 507 + ret = hsmp_read_acpi_crs(sock); 508 + if (ret) 509 + return ret; 510 + 511 + /* Read mailbox offsets from DSD table */ 512 + return hsmp_read_acpi_dsd(sock); 513 + } 514 + 386 515 static ssize_t hsmp_metric_tbl_read(struct file *filp, struct kobject *kobj, 387 516 struct bin_attribute *bin_attr, char *buf, 388 517 loff_t off, size_t count) ··· 566 345 struct hsmp_message msg = { 0 }; 567 346 int ret; 568 347 348 + if (!sock) 349 + return -EINVAL; 350 + 569 351 /* Do not support lseek(), reads entire metric table */ 570 352 if (count < bin_attr->size) { 571 - dev_err(plat_dev.dev, "Wrong buffer size\n"); 572 - return -EINVAL; 573 - } 574 - 575 - if (!sock) { 576 - dev_err(plat_dev.dev, "Failed to read attribute private data\n"); 353 + dev_err(sock->dev, "Wrong buffer size\n"); 
577 354 return -EINVAL; 578 355 } 579 356 ··· 607 388 */ 608 389 dram_addr = msg.args[0] | ((u64)(msg.args[1]) << 32); 609 390 if (!dram_addr) { 610 - dev_err(plat_dev.dev, "Invalid DRAM address for metric table\n"); 391 + dev_err(sock->dev, "Invalid DRAM address for metric table\n"); 611 392 return -ENOMEM; 612 393 } 613 - sock->metric_tbl_addr = devm_ioremap(plat_dev.dev, dram_addr, 394 + sock->metric_tbl_addr = devm_ioremap(sock->dev, dram_addr, 614 395 sizeof(struct hsmp_metric_table)); 615 396 if (!sock->metric_tbl_addr) { 616 - dev_err(plat_dev.dev, "Failed to ioremap metric table addr\n"); 397 + dev_err(sock->dev, "Failed to ioremap metric table addr\n"); 617 398 return -ENOMEM; 618 399 } 619 400 return 0; ··· 641 422 hattrs[0] = hattr; 642 423 643 424 if (plat_dev.proto_ver == HSMP_PROTO_VER6) 644 - return (hsmp_get_tbl_dram_base(sock_ind)); 425 + return hsmp_get_tbl_dram_base(sock_ind); 645 426 else 646 427 return 0; 647 428 } 648 429 649 - /* One bin sysfs for metrics table*/ 430 + /* One bin sysfs for metrics table */ 650 431 #define NUM_HSMP_ATTRS 1 651 432 652 - static int hsmp_create_sysfs_interface(void) 433 + static int hsmp_create_attr_list(struct attribute_group *attr_grp, 434 + struct device *dev, u16 sock_ind) 435 + { 436 + struct bin_attribute **hsmp_bin_attrs; 437 + 438 + /* Null terminated list of attributes */ 439 + hsmp_bin_attrs = devm_kcalloc(dev, NUM_HSMP_ATTRS + 1, 440 + sizeof(*hsmp_bin_attrs), 441 + GFP_KERNEL); 442 + if (!hsmp_bin_attrs) 443 + return -ENOMEM; 444 + 445 + attr_grp->bin_attrs = hsmp_bin_attrs; 446 + 447 + return hsmp_init_metric_tbl_bin_attr(hsmp_bin_attrs, sock_ind); 448 + } 449 + 450 + static int hsmp_create_non_acpi_sysfs_if(struct device *dev) 653 451 { 654 452 const struct attribute_group **hsmp_attr_grps; 655 - struct bin_attribute **hsmp_bin_attrs; 656 453 struct attribute_group *attr_grp; 657 - int ret; 658 454 u16 i; 659 455 660 - /* String formatting is currently limited to u8 sockets */ 661 - if 
(WARN_ON(plat_dev.num_sockets > U8_MAX)) 662 - return -ERANGE; 663 - 664 - hsmp_attr_grps = devm_kzalloc(plat_dev.dev, sizeof(struct attribute_group *) * 665 - (plat_dev.num_sockets + 1), GFP_KERNEL); 456 + hsmp_attr_grps = devm_kcalloc(dev, plat_dev.num_sockets + 1, 457 + sizeof(*hsmp_attr_grps), 458 + GFP_KERNEL); 666 459 if (!hsmp_attr_grps) 667 460 return -ENOMEM; 668 461 669 462 /* Create a sysfs directory for each socket */ 670 463 for (i = 0; i < plat_dev.num_sockets; i++) { 671 - attr_grp = devm_kzalloc(plat_dev.dev, sizeof(struct attribute_group), GFP_KERNEL); 464 + attr_grp = devm_kzalloc(dev, sizeof(struct attribute_group), 465 + GFP_KERNEL); 672 466 if (!attr_grp) 673 467 return -ENOMEM; 674 468 675 469 snprintf(plat_dev.sock[i].name, HSMP_ATTR_GRP_NAME_SIZE, "socket%u", (u8)i); 676 - attr_grp->name = plat_dev.sock[i].name; 677 - 678 - /* Null terminated list of attributes */ 679 - hsmp_bin_attrs = devm_kzalloc(plat_dev.dev, sizeof(struct bin_attribute *) * 680 - (NUM_HSMP_ATTRS + 1), GFP_KERNEL); 681 - if (!hsmp_bin_attrs) 682 - return -ENOMEM; 683 - 684 - attr_grp->bin_attrs = hsmp_bin_attrs; 470 + attr_grp->name = plat_dev.sock[i].name; 685 471 attr_grp->is_bin_visible = hsmp_is_sock_attr_visible; 686 472 hsmp_attr_grps[i] = attr_grp; 687 473 688 - /* Now create the leaf nodes */ 689 - ret = hsmp_init_metric_tbl_bin_attr(hsmp_bin_attrs, i); 690 - if (ret) 691 - return ret; 474 + hsmp_create_attr_list(attr_grp, dev, i); 692 475 } 693 - return devm_device_add_groups(plat_dev.dev, hsmp_attr_grps); 476 + 477 + return devm_device_add_groups(dev, hsmp_attr_grps); 694 478 } 695 479 696 - static int hsmp_cache_proto_ver(void) 480 + static int hsmp_create_acpi_sysfs_if(struct device *dev) 481 + { 482 + struct attribute_group *attr_grp; 483 + u16 sock_ind; 484 + int ret; 485 + 486 + attr_grp = devm_kzalloc(dev, sizeof(struct attribute_group), GFP_KERNEL); 487 + if (!attr_grp) 488 + return -ENOMEM; 489 + 490 + attr_grp->is_bin_visible = 
hsmp_is_sock_attr_visible; 491 + 492 + ret = hsmp_get_uid(dev, &sock_ind); 493 + if (ret) 494 + return ret; 495 + 496 + ret = hsmp_create_attr_list(attr_grp, dev, sock_ind); 497 + if (ret) 498 + return ret; 499 + 500 + return devm_device_add_group(dev, attr_grp); 501 + } 502 + 503 + static int hsmp_cache_proto_ver(u16 sock_ind) 697 504 { 698 505 struct hsmp_message msg = { 0 }; 699 506 int ret; 700 507 701 508 msg.msg_id = HSMP_GET_PROTO_VER; 702 - msg.sock_ind = 0; 509 + msg.sock_ind = sock_ind; 703 510 msg.response_sz = hsmp_msg_desc_table[HSMP_GET_PROTO_VER].response_sz; 704 511 705 512 ret = hsmp_send_message(&msg); ··· 735 490 return ret; 736 491 } 737 492 738 - static int hsmp_pltdrv_probe(struct platform_device *pdev) 493 + static inline bool is_f1a_m0h(void) 739 494 { 495 + if (boot_cpu_data.x86 == 0x1A && boot_cpu_data.x86_model <= 0x0F) 496 + return true; 497 + 498 + return false; 499 + } 500 + 501 + static int init_platform_device(struct device *dev) 502 + { 503 + struct hsmp_socket *sock; 740 504 int ret, i; 741 505 742 - plat_dev.sock = devm_kzalloc(&pdev->dev, 743 - (plat_dev.num_sockets * sizeof(struct hsmp_socket)), 744 - GFP_KERNEL); 745 - if (!plat_dev.sock) 746 - return -ENOMEM; 747 - plat_dev.dev = &pdev->dev; 748 - 749 506 for (i = 0; i < plat_dev.num_sockets; i++) { 750 - sema_init(&plat_dev.sock[i].hsmp_sem, 1); 751 - plat_dev.sock[i].sock_ind = i; 507 + if (!node_to_amd_nb(i)) 508 + return -ENODEV; 509 + sock = &plat_dev.sock[i]; 510 + sock->root = node_to_amd_nb(i)->root; 511 + sock->sock_ind = i; 512 + sock->dev = dev; 513 + sock->mbinfo.base_addr = SMN_HSMP_BASE; 514 + 515 + /* 516 + * This is a transitional change from non-ACPI to ACPI, only 517 + * family 0x1A, model 0x00 platform is supported for both ACPI and non-ACPI. 
518 + */ 519 + if (is_f1a_m0h()) 520 + sock->mbinfo.msg_id_off = SMN_HSMP_MSG_ID_F1A_M0H; 521 + else 522 + sock->mbinfo.msg_id_off = SMN_HSMP_MSG_ID; 523 + 524 + sock->mbinfo.msg_resp_off = SMN_HSMP_MSG_RESP; 525 + sock->mbinfo.msg_arg_off = SMN_HSMP_MSG_DATA; 526 + sema_init(&sock->hsmp_sem, 1); 527 + 528 + /* Test the hsmp interface on each socket */ 529 + ret = hsmp_test(i, 0xDEADBEEF); 530 + if (ret) { 531 + dev_err(dev, "HSMP test message failed on Fam:%x model:%x\n", 532 + boot_cpu_data.x86, boot_cpu_data.x86_model); 533 + dev_err(dev, "Is HSMP disabled in BIOS ?\n"); 534 + return ret; 535 + } 752 536 } 753 537 754 - plat_dev.hsmp_device.name = HSMP_CDEV_NAME; 755 - plat_dev.hsmp_device.minor = MISC_DYNAMIC_MINOR; 756 - plat_dev.hsmp_device.fops = &hsmp_fops; 757 - plat_dev.hsmp_device.parent = &pdev->dev; 758 - plat_dev.hsmp_device.nodename = HSMP_DEVNODE_NAME; 759 - plat_dev.hsmp_device.mode = 0644; 538 + return 0; 539 + } 760 540 761 - ret = hsmp_cache_proto_ver(); 541 + static const struct acpi_device_id amd_hsmp_acpi_ids[] = { 542 + {ACPI_HSMP_DEVICE_HID, 0}, 543 + {} 544 + }; 545 + MODULE_DEVICE_TABLE(acpi, amd_hsmp_acpi_ids); 546 + 547 + static int hsmp_pltdrv_probe(struct platform_device *pdev) 548 + { 549 + struct acpi_device *adev; 550 + u16 sock_ind = 0; 551 + int ret; 552 + 553 + /* 554 + * On ACPI supported BIOS, there is an ACPI HSMP device added for 555 + * each socket, so the per socket probing, but the memory allocated for 556 + * sockets should be contiguous to access it as an array, 557 + * Hence allocate memory for all the sockets at once instead of allocating 558 + * on each probe. 
559 + */ 560 + if (!plat_dev.is_probed) { 561 + plat_dev.sock = devm_kcalloc(&pdev->dev, plat_dev.num_sockets, 562 + sizeof(*plat_dev.sock), 563 + GFP_KERNEL); 564 + if (!plat_dev.sock) 565 + return -ENOMEM; 566 + } 567 + adev = ACPI_COMPANION(&pdev->dev); 568 + if (adev && !acpi_match_device_ids(adev, amd_hsmp_acpi_ids)) { 569 + ret = hsmp_get_uid(&pdev->dev, &sock_ind); 570 + if (ret) 571 + return ret; 572 + if (sock_ind >= plat_dev.num_sockets) 573 + return -EINVAL; 574 + ret = hsmp_parse_acpi_table(&pdev->dev, sock_ind); 575 + if (ret) { 576 + dev_err(&pdev->dev, "Failed to parse ACPI table\n"); 577 + return ret; 578 + } 579 + /* Test the hsmp interface */ 580 + ret = hsmp_test(sock_ind, 0xDEADBEEF); 581 + if (ret) { 582 + dev_err(&pdev->dev, "HSMP test message failed on Fam:%x model:%x\n", 583 + boot_cpu_data.x86, boot_cpu_data.x86_model); 584 + dev_err(&pdev->dev, "Is HSMP disabled in BIOS ?\n"); 585 + return ret; 586 + } 587 + } else { 588 + ret = init_platform_device(&pdev->dev); 589 + if (ret) { 590 + dev_err(&pdev->dev, "Failed to init HSMP mailbox\n"); 591 + return ret; 592 + } 593 + } 594 + 595 + ret = hsmp_cache_proto_ver(sock_ind); 762 596 if (ret) { 763 - dev_err(plat_dev.dev, "Failed to read HSMP protocol version\n"); 597 + dev_err(&pdev->dev, "Failed to read HSMP protocol version\n"); 764 598 return ret; 765 599 } 766 600 767 - ret = hsmp_create_sysfs_interface(); 601 + if (plat_dev.is_acpi_device) 602 + ret = hsmp_create_acpi_sysfs_if(&pdev->dev); 603 + else 604 + ret = hsmp_create_non_acpi_sysfs_if(&pdev->dev); 768 605 if (ret) 769 - dev_err(plat_dev.dev, "Failed to create HSMP sysfs interface\n"); 606 + dev_err(&pdev->dev, "Failed to create HSMP sysfs interface\n"); 770 607 771 - return misc_register(&plat_dev.hsmp_device); 608 + if (!plat_dev.is_probed) { 609 + plat_dev.hsmp_device.name = HSMP_CDEV_NAME; 610 + plat_dev.hsmp_device.minor = MISC_DYNAMIC_MINOR; 611 + plat_dev.hsmp_device.fops = &hsmp_fops; 612 + plat_dev.hsmp_device.parent = 
&pdev->dev; 613 + plat_dev.hsmp_device.nodename = HSMP_DEVNODE_NAME; 614 + plat_dev.hsmp_device.mode = 0644; 615 + 616 + ret = misc_register(&plat_dev.hsmp_device); 617 + if (ret) 618 + return ret; 619 + 620 + plat_dev.is_probed = true; 621 + } 622 + 623 + return 0; 624 + 772 625 } 773 626 774 627 static void hsmp_pltdrv_remove(struct platform_device *pdev) 775 628 { 776 - misc_deregister(&plat_dev.hsmp_device); 629 + /* 630 + * We register only one misc_device even on multi socket system. 631 + * So, deregister should happen only once. 632 + */ 633 + if (plat_dev.is_probed) { 634 + misc_deregister(&plat_dev.hsmp_device); 635 + plat_dev.is_probed = false; 636 + } 777 637 } 778 638 779 639 static struct platform_driver amd_hsmp_driver = { ··· 886 536 .remove_new = hsmp_pltdrv_remove, 887 537 .driver = { 888 538 .name = DRIVER_NAME, 539 + .acpi_match_table = amd_hsmp_acpi_ids, 889 540 }, 890 541 }; 891 542 892 543 static struct platform_device *amd_hsmp_platdev; 893 544 545 + static int hsmp_plat_dev_register(void) 546 + { 547 + int ret; 548 + 549 + amd_hsmp_platdev = platform_device_alloc(DRIVER_NAME, PLATFORM_DEVID_NONE); 550 + if (!amd_hsmp_platdev) 551 + return -ENOMEM; 552 + 553 + ret = platform_device_add(amd_hsmp_platdev); 554 + if (ret) 555 + platform_device_put(amd_hsmp_platdev); 556 + 557 + return ret; 558 + } 559 + 894 560 static int __init hsmp_plt_init(void) 895 561 { 896 562 int ret = -ENODEV; 897 - int i; 898 563 899 564 if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD || boot_cpu_data.x86 < 0x19) { 900 565 pr_err("HSMP is not supported on Family:%x model:%x\n", ··· 922 557 * if we have N SMN/DF interfaces that ideally means N sockets 923 558 */ 924 559 plat_dev.num_sockets = amd_nb_num(); 925 - if (plat_dev.num_sockets == 0) 560 + if (plat_dev.num_sockets == 0 || plat_dev.num_sockets > MAX_AMD_SOCKETS) 926 561 return ret; 927 - 928 - /* Test the hsmp interface on each socket */ 929 - for (i = 0; i < plat_dev.num_sockets; i++) { 930 - ret = hsmp_test(i, 
0xDEADBEEF); 931 - if (ret) { 932 - pr_err("HSMP test message failed on Fam:%x model:%x\n", 933 - boot_cpu_data.x86, boot_cpu_data.x86_model); 934 - pr_err("Is HSMP disabled in BIOS ?\n"); 935 - return ret; 936 - } 937 - } 938 562 939 563 ret = platform_driver_register(&amd_hsmp_driver); 940 564 if (ret) 941 565 return ret; 942 566 943 - amd_hsmp_platdev = platform_device_alloc(DRIVER_NAME, PLATFORM_DEVID_NONE); 944 - if (!amd_hsmp_platdev) { 945 - ret = -ENOMEM; 946 - goto drv_unregister; 567 + if (!plat_dev.is_acpi_device) { 568 + ret = hsmp_plat_dev_register(); 569 + if (ret) 570 + platform_driver_unregister(&amd_hsmp_driver); 947 571 } 948 572 949 - ret = platform_device_add(amd_hsmp_platdev); 950 - if (ret) { 951 - platform_device_put(amd_hsmp_platdev); 952 - goto drv_unregister; 953 - } 954 - 955 - return 0; 956 - 957 - drv_unregister: 958 - platform_driver_unregister(&amd_hsmp_driver); 959 573 return ret; 960 574 } 961 575
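The amd_hsmp hunks above register a single misc device even on multi-socket systems and guard the remove path with an `is_probed` flag so deregistration happens exactly once. A minimal user-space sketch of that idempotent register/deregister pattern (all names hypothetical, no kernel APIs):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the driver's shared platform state. */
struct plat_state {
	bool is_probed;
	int register_calls;   /* counts what would be misc_register()   */
	int deregister_calls; /* counts what would be misc_deregister() */
};

/* Register the shared device only on the first successful probe. */
static int plat_probe(struct plat_state *s)
{
	if (!s->is_probed) {
		s->register_calls++;
		s->is_probed = true;
	}
	return 0;
}

/* Deregister exactly once, no matter how many removes run. */
static void plat_remove(struct plat_state *s)
{
	if (s->is_probed) {
		s->deregister_calls++;
		s->is_probed = false;
	}
}
```

The flag makes probe/remove safe to call once per socket while the underlying device node exists only once.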
+132 -4
drivers/platform/x86/amd/pmf/acpi.c
··· 90 90 return err; 91 91 } 92 92 93 + static union acpi_object *apts_if_call(struct amd_pmf_dev *pdev, u32 state_index) 94 + { 95 + struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; 96 + acpi_handle ahandle = ACPI_HANDLE(pdev->dev); 97 + struct acpi_object_list apts_if_arg_list; 98 + union acpi_object apts_if_args[3]; 99 + acpi_status status; 100 + 101 + apts_if_arg_list.count = 3; 102 + apts_if_arg_list.pointer = &apts_if_args[0]; 103 + 104 + apts_if_args[0].type = ACPI_TYPE_INTEGER; 105 + apts_if_args[0].integer.value = 1; 106 + apts_if_args[1].type = ACPI_TYPE_INTEGER; 107 + apts_if_args[1].integer.value = state_index; 108 + apts_if_args[2].type = ACPI_TYPE_INTEGER; 109 + apts_if_args[2].integer.value = 0; 110 + 111 + status = acpi_evaluate_object(ahandle, "APTS", &apts_if_arg_list, &buffer); 112 + if (ACPI_FAILURE(status)) { 113 + dev_err(pdev->dev, "APTS state_idx:%u call failed\n", state_index); 114 + kfree(buffer.pointer); 115 + return NULL; 116 + } 117 + 118 + return buffer.pointer; 119 + } 120 + 121 + static int apts_if_call_store_buffer(struct amd_pmf_dev *pdev, 122 + u32 index, void *data, size_t out_sz) 123 + { 124 + union acpi_object *info; 125 + size_t size; 126 + int err = 0; 127 + 128 + info = apts_if_call(pdev, index); 129 + if (!info) 130 + return -EIO; 131 + 132 + if (info->type != ACPI_TYPE_BUFFER) { 133 + dev_err(pdev->dev, "object is not a buffer\n"); 134 + err = -EINVAL; 135 + goto out; 136 + } 137 + 138 + size = *(u16 *)info->buffer.pointer; 139 + if (info->buffer.length < size) { 140 + dev_err(pdev->dev, "buffer smaller than header size %u < %zu\n", 141 + info->buffer.length, size); 142 + err = -EINVAL; 143 + goto out; 144 + } 145 + 146 + if (size < out_sz) { 147 + dev_err(pdev->dev, "buffer too small %zu\n", size); 148 + err = -EINVAL; 149 + goto out; 150 + } 151 + 152 + memcpy(data, info->buffer.pointer, out_sz); 153 + out: 154 + kfree(info); 155 + return err; 156 + } 157 + 93 158 int is_apmf_func_supported(struct amd_pmf_dev 
*pdev, unsigned long index) 94 159 { 95 160 /* If bit-n is set, that indicates function n+1 is supported */ 96 161 return !!(pdev->supported_func & BIT(index - 1)); 162 + } 163 + 164 + int apts_get_static_slider_granular_v2(struct amd_pmf_dev *pdev, 165 + struct amd_pmf_apts_granular_output *data, u32 apts_idx) 166 + { 167 + if (!is_apmf_func_supported(pdev, APMF_FUNC_STATIC_SLIDER_GRANULAR)) 168 + return -EINVAL; 169 + 170 + return apts_if_call_store_buffer(pdev, apts_idx, data, sizeof(*data)); 171 + } 172 + 173 + int apmf_get_static_slider_granular_v2(struct amd_pmf_dev *pdev, 174 + struct apmf_static_slider_granular_output_v2 *data) 175 + { 176 + if (!is_apmf_func_supported(pdev, APMF_FUNC_STATIC_SLIDER_GRANULAR)) 177 + return -EINVAL; 178 + 179 + return apmf_if_call_store_buffer(pdev, APMF_FUNC_STATIC_SLIDER_GRANULAR, 180 + data, sizeof(*data)); 97 181 } 98 182 99 183 int apmf_get_static_slider_granular(struct amd_pmf_dev *pdev, ··· 224 140 kfree(info); 225 141 } 226 142 143 + int amd_pmf_notify_sbios_heartbeat_event_v2(struct amd_pmf_dev *dev, u8 flag) 144 + { 145 + struct sbios_hb_event_v2 args = { }; 146 + struct acpi_buffer params; 147 + union acpi_object *info; 148 + 149 + args.size = sizeof(args); 150 + 151 + switch (flag) { 152 + case ON_LOAD: 153 + args.load = 1; 154 + break; 155 + case ON_UNLOAD: 156 + args.unload = 1; 157 + break; 158 + case ON_SUSPEND: 159 + args.suspend = 1; 160 + break; 161 + case ON_RESUME: 162 + args.resume = 1; 163 + break; 164 + default: 165 + dev_dbg(dev->dev, "Failed to send v2 heartbeat event, flag:0x%x\n", flag); 166 + return -EINVAL; 167 + } 168 + 169 + params.length = sizeof(args); 170 + params.pointer = &args; 171 + 172 + info = apmf_if_call(dev, APMF_FUNC_SBIOS_HEARTBEAT_V2, &params); 173 + if (!info) 174 + return -EIO; 175 + 176 + kfree(info); 177 + return 0; 178 + } 179 + 227 180 int apmf_update_fan_idx(struct amd_pmf_dev *pdev, bool manual, u32 idx) 228 181 { 229 182 union acpi_object *info; ··· 285 164 int 
apmf_get_auto_mode_def(struct amd_pmf_dev *pdev, struct apmf_auto_mode *data) 286 165 { 287 166 return apmf_if_call_store_buffer(pdev, APMF_FUNC_AUTO_MODE, data, sizeof(*data)); 167 + } 168 + 169 + int apmf_get_sbios_requests_v2(struct amd_pmf_dev *pdev, struct apmf_sbios_req_v2 *req) 170 + { 171 + return apmf_if_call_store_buffer(pdev, APMF_FUNC_SBIOS_REQUESTS, req, sizeof(*req)); 288 172 } 289 173 290 174 int apmf_get_sbios_requests(struct amd_pmf_dev *pdev, struct apmf_sbios_req *req) ··· 344 218 return err; 345 219 346 220 pdev->supported_func = output.supported_functions; 347 - dev_dbg(pdev->dev, "supported functions:0x%x notifications:0x%x\n", 348 - output.supported_functions, output.notification_mask); 221 + dev_dbg(pdev->dev, "supported functions:0x%x notifications:0x%x version:%u\n", 222 + output.supported_functions, output.notification_mask, output.version); 223 + 224 + pdev->pmf_if_version = output.version; 349 225 350 226 return 0; 351 227 } ··· 448 320 { 449 321 acpi_handle ahandle = ACPI_HANDLE(pmf_dev->dev); 450 322 451 - if (pmf_dev->hb_interval) 323 + if (pmf_dev->hb_interval && pmf_dev->pmf_if_version == PMF_IF_V1) 452 324 cancel_delayed_work_sync(&pmf_dev->heart_beat); 453 325 454 326 if (is_apmf_func_supported(pmf_dev, APMF_FUNC_AUTO_MODE) && ··· 472 344 goto out; 473 345 } 474 346 475 - if (pmf_dev->hb_interval) { 347 + if (pmf_dev->hb_interval && pmf_dev->pmf_if_version == PMF_IF_V1) { 476 348 /* send heartbeats only if the interval is not zero */ 477 349 INIT_DELAYED_WORK(&pmf_dev->heart_beat, apmf_sbios_heartbeat_notify); 478 350 schedule_delayed_work(&pmf_dev->heart_beat, 0);
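The `apts_if_call_store_buffer()` hunk above validates a firmware reply three ways before copying it out: the reply must be a buffer, its self-reported `u16` size must fit inside the buffer actually returned, and that size must cover the caller's output struct. A condensed, endian-portable sketch of those checks (names hypothetical):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Hypothetical model of the size validation in apts_if_call_store_buffer():
 * the first u16 of the reply is a self-reported payload size.
 */
static int copy_validated(const uint8_t *buf, size_t buf_len,
			  void *out, size_t out_sz)
{
	uint16_t self_sz;

	if (buf_len < sizeof(self_sz))
		return -EINVAL;

	memcpy(&self_sz, buf, sizeof(self_sz));

	if (buf_len < self_sz)  /* reply truncated vs. its own header   */
		return -EINVAL;
	if (self_sz < out_sz)   /* reply too small for caller's struct  */
		return -EINVAL;

	memcpy(out, buf, out_sz);
	return 0;
}
```

Rejecting a reply whose claimed size exceeds the real buffer length prevents the later `memcpy()` from trusting attacker- or firmware-controlled lengths.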
+13 -2
drivers/platform/x86/amd/pmf/core.c
··· 113 113 static void amd_pmf_dbgfs_register(struct amd_pmf_dev *dev) 114 114 { 115 115 dev->dbgfs_dir = debugfs_create_dir("amd_pmf", NULL); 116 - debugfs_create_file("current_power_limits", 0644, dev->dbgfs_dir, dev, 117 - &current_power_limits_fops); 116 + if (dev->pmf_if_version == PMF_IF_V1) 117 + debugfs_create_file("current_power_limits", 0644, dev->dbgfs_dir, dev, 118 + &current_power_limits_fops); 118 119 } 119 120 120 121 int amd_pmf_get_power_source(void) ··· 300 299 if (pdev->smart_pc_enabled) 301 300 cancel_delayed_work_sync(&pdev->pb_work); 302 301 302 + if (is_apmf_func_supported(pdev, APMF_FUNC_SBIOS_HEARTBEAT_V2)) 303 + amd_pmf_notify_sbios_heartbeat_event_v2(pdev, ON_SUSPEND); 304 + 303 305 return 0; 304 306 } 305 307 ··· 316 312 if (ret) 317 313 return ret; 318 314 } 315 + 316 + if (is_apmf_func_supported(pdev, APMF_FUNC_SBIOS_HEARTBEAT_V2)) 317 + amd_pmf_notify_sbios_heartbeat_event_v2(pdev, ON_RESUME); 319 318 320 319 if (pdev->smart_pc_enabled) 321 320 schedule_delayed_work(&pdev->pb_work, msecs_to_jiffies(2000)); ··· 450 443 amd_pmf_dbgfs_register(dev); 451 444 amd_pmf_init_features(dev); 452 445 apmf_install_handler(dev); 446 + if (is_apmf_func_supported(dev, APMF_FUNC_SBIOS_HEARTBEAT_V2)) 447 + amd_pmf_notify_sbios_heartbeat_event_v2(dev, ON_LOAD); 453 448 454 449 dev_info(dev->dev, "registered PMF device successfully\n"); 455 450 ··· 463 454 struct amd_pmf_dev *dev = platform_get_drvdata(pdev); 464 455 465 456 amd_pmf_deinit_features(dev); 457 + if (is_apmf_func_supported(dev, APMF_FUNC_SBIOS_HEARTBEAT_V2)) 458 + amd_pmf_notify_sbios_heartbeat_event_v2(dev, ON_UNLOAD); 466 459 apmf_acpi_deinit(dev); 467 460 amd_pmf_dbgfs_unregister(dev); 468 461 mutex_destroy(&dev->lock);
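The core.c hunks wire `amd_pmf_notify_sbios_heartbeat_event_v2()` into the load, unload, suspend, and resume paths, and acpi.c builds the event by setting exactly one flag byte per lifecycle event. A small sketch of that dispatch (struct shape mirrors `sbios_hb_event_v2` from the diff; names here are hypothetical):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <string.h>

/* Local mirror of the v2 heartbeat argument block. */
struct hb_event {
	uint16_t size;
	uint8_t load, unload, suspend, resume;
};

enum hb_flag { HB_ON_LOAD, HB_ON_UNLOAD, HB_ON_SUSPEND, HB_ON_RESUME };

/* Zero the block, record its size, then set exactly one flag. */
static int hb_event_fill(struct hb_event *ev, int flag)
{
	memset(ev, 0, sizeof(*ev));
	ev->size = sizeof(*ev);

	switch (flag) {
	case HB_ON_LOAD:    ev->load = 1;    break;
	case HB_ON_UNLOAD:  ev->unload = 1;  break;
	case HB_ON_SUSPEND: ev->suspend = 1; break;
	case HB_ON_RESUME:  ev->resume = 1;  break;
	default:
		return -EINVAL;
	}
	return 0;
}
```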
+91 -2
drivers/platform/x86/amd/pmf/pmf.h
··· 17 17 #define POLICY_BUF_MAX_SZ 0x4b000 18 18 #define POLICY_SIGN_COOKIE 0x31535024 19 19 #define POLICY_COOKIE_OFFSET 0x10 20 - #define POLICY_COOKIE_LEN 0x14 20 + 21 + struct cookie_header { 22 + u32 sign; 23 + u32 length; 24 + } __packed; 21 25 22 26 /* APMF Functions */ 23 27 #define APMF_FUNC_VERIFY_INTERFACE 0 ··· 34 30 #define APMF_FUNC_STATIC_SLIDER_GRANULAR 9 35 31 #define APMF_FUNC_DYN_SLIDER_AC 11 36 32 #define APMF_FUNC_DYN_SLIDER_DC 12 33 + #define APMF_FUNC_SBIOS_HEARTBEAT_V2 16 37 34 38 35 /* Message Definitions */ 39 36 #define SET_SPL 0x03 /* SPL: Sustained Power Limit */ ··· 55 50 #define GET_STT_LIMIT_APU 0x20 56 51 #define GET_STT_LIMIT_HS2 0x21 57 52 #define SET_P3T 0x23 /* P3T: Peak Package Power Limit */ 53 + #define SET_PMF_PPT 0x25 54 + #define SET_PMF_PPT_APU_ONLY 0x26 58 55 59 56 /* OS slider update notification */ 60 57 #define DC_BEST_PERF 0 ··· 90 83 #define TA_OUTPUT_RESERVED_MEM 906 91 84 #define MAX_OPERATION_PARAMS 4 92 85 86 + #define PMF_IF_V1 1 87 + #define PMF_IF_V2 2 88 + 89 + #define APTS_MAX_STATES 16 90 + 91 + /* APTS PMF BIOS Interface */ 92 + struct amd_pmf_apts_output { 93 + u16 table_version; 94 + u32 fan_table_idx; 95 + u32 pmf_ppt; 96 + u32 ppt_pmf_apu_only; 97 + u32 stt_min_limit; 98 + u8 stt_skin_temp_limit_apu; 99 + u8 stt_skin_temp_limit_hs2; 100 + } __packed; 101 + 102 + struct amd_pmf_apts_granular_output { 103 + u16 size; 104 + struct amd_pmf_apts_output val; 105 + } __packed; 106 + 107 + struct amd_pmf_apts_granular { 108 + u16 size; 109 + struct amd_pmf_apts_output val[APTS_MAX_STATES]; 110 + }; 111 + 112 + struct sbios_hb_event_v2 { 113 + u16 size; 114 + u8 load; 115 + u8 unload; 116 + u8 suspend; 117 + u8 resume; 118 + } __packed; 119 + 120 + enum sbios_hb_v2 { 121 + ON_LOAD, 122 + ON_UNLOAD, 123 + ON_SUSPEND, 124 + ON_RESUME, 125 + }; 126 + 93 127 /* AMD PMF BIOS interfaces */ 94 128 struct apmf_verify_interface { 95 129 u16 size; ··· 160 112 u32 stt_min_limit; 161 113 u8 skin_temp_apu; 162 114 u8 
skin_temp_hs2; 115 + } __packed; 116 + 117 + struct apmf_sbios_req_v2 { 118 + u16 size; 119 + u32 pending_req; 120 + u8 rsd; 121 + u32 ppt_pmf; 122 + u32 ppt_pmf_apu_only; 123 + u32 stt_min_limit; 124 + u8 skin_temp_apu; 125 + u8 skin_temp_hs2; 126 + u32 custom_policy[10]; 163 127 } __packed; 164 128 165 129 struct apmf_fan_idx { ··· 254 194 POWER_MODE_MAX, 255 195 }; 256 196 197 + enum power_modes_v2 { 198 + POWER_MODE_BEST_PERFORMANCE, 199 + POWER_MODE_BALANCED, 200 + POWER_MODE_BEST_POWER_EFFICIENCY, 201 + POWER_MODE_ENERGY_SAVE, 202 + POWER_MODE_V2_MAX, 203 + }; 204 + 257 205 struct amd_pmf_dev { 258 206 void __iomem *regbase; 259 207 void __iomem *smu_virt_addr; ··· 297 229 struct delayed_work pb_work; 298 230 struct pmf_action_table *prev_data; 299 231 u64 policy_addr; 300 - void *policy_base; 232 + void __iomem *policy_base; 301 233 bool smart_pc_enabled; 234 + u16 pmf_if_version; 302 235 }; 236 + 237 + struct apmf_sps_prop_granular_v2 { 238 + u8 power_states[POWER_SOURCE_MAX][POWER_MODE_V2_MAX]; 239 + } __packed; 303 240 304 241 struct apmf_sps_prop_granular { 305 242 u32 fppt; ··· 325 252 struct amd_pmf_static_slider_granular { 326 253 u16 size; 327 254 struct apmf_sps_prop_granular prop[POWER_SOURCE_MAX][POWER_MODE_MAX]; 255 + }; 256 + 257 + struct apmf_static_slider_granular_output_v2 { 258 + u16 size; 259 + struct apmf_sps_prop_granular_v2 sps_idx; 260 + } __packed; 261 + 262 + struct amd_pmf_static_slider_granular_v2 { 263 + u16 size; 264 + struct apmf_sps_prop_granular_v2 sps_idx; 328 265 }; 329 266 330 267 struct os_power_slider { ··· 668 585 int apmf_install_handler(struct amd_pmf_dev *pmf_dev); 669 586 int apmf_os_power_slider_update(struct amd_pmf_dev *dev, u8 flag); 670 587 int amd_pmf_set_dram_addr(struct amd_pmf_dev *dev, bool alloc_buffer); 588 + int amd_pmf_notify_sbios_heartbeat_event_v2(struct amd_pmf_dev *dev, u8 flag); 671 589 672 590 /* SPS Layer */ 673 591 int amd_pmf_get_pprof_modes(struct amd_pmf_dev *pmf); ··· 686 602 687 603 int 
apmf_update_fan_idx(struct amd_pmf_dev *pdev, bool manual, u32 idx); 688 604 int amd_pmf_set_sps_power_limits(struct amd_pmf_dev *pmf); 605 + int apmf_get_static_slider_granular_v2(struct amd_pmf_dev *dev, 606 + struct apmf_static_slider_granular_output_v2 *data); 607 + int apts_get_static_slider_granular_v2(struct amd_pmf_dev *pdev, 608 + struct amd_pmf_apts_granular_output *data, u32 apts_idx); 689 609 690 610 /* Auto Mode Layer */ 691 611 int apmf_get_auto_mode_def(struct amd_pmf_dev *pdev, struct apmf_auto_mode *data); ··· 697 609 void amd_pmf_deinit_auto_mode(struct amd_pmf_dev *dev); 698 610 void amd_pmf_trans_automode(struct amd_pmf_dev *dev, int socket_power, ktime_t time_elapsed_ms); 699 611 int apmf_get_sbios_requests(struct amd_pmf_dev *pdev, struct apmf_sbios_req *req); 612 + int apmf_get_sbios_requests_v2(struct amd_pmf_dev *pdev, struct apmf_sbios_req_v2 *req); 700 613 701 614 void amd_pmf_update_2_cql(struct amd_pmf_dev *dev, bool is_cql_event); 702 615 int amd_pmf_reset_amt(struct amd_pmf_dev *dev);
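Because the new BIOS interface structs in pmf.h are `__packed`, their wire sizes follow from the field list alone (no padding). A quick check of that arithmetic using local mirror types; the attribute syntax assumes GCC/Clang:

```c
#include <assert.h>
#include <stdint.h>

/* Local mirrors of two __packed interface structs from the hunk above. */
struct apts_output_mirror {
	uint16_t table_version;
	uint32_t fan_table_idx;
	uint32_t pmf_ppt;
	uint32_t ppt_pmf_apu_only;
	uint32_t stt_min_limit;
	uint8_t  stt_skin_temp_limit_apu;
	uint8_t  stt_skin_temp_limit_hs2;
} __attribute__((packed));

struct hb_event_mirror {
	uint16_t size;
	uint8_t  load, unload, suspend, resume;
} __attribute__((packed));

/* Packed layout: 2 + 4*4 + 2 = 20 bytes, and 2 + 4 = 6 bytes. */
_Static_assert(sizeof(struct apts_output_mirror) == 20, "apts wire size");
_Static_assert(sizeof(struct hb_event_mirror) == 6, "hb wire size");
```

Without `__packed` the compiler would insert 2 bytes of padding after `table_version`, breaking the `memcpy()` from the firmware buffer.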
+144 -1
drivers/platform/x86/amd/pmf/sps.c
··· 10 10 11 11 #include "pmf.h" 12 12 13 + static struct amd_pmf_static_slider_granular_v2 config_store_v2; 13 14 static struct amd_pmf_static_slider_granular config_store; 15 + static struct amd_pmf_apts_granular apts_config_store; 14 16 15 17 #ifdef CONFIG_AMD_PMF_DEBUG 18 + static const char *slider_v2_as_str(unsigned int state) 19 + { 20 + switch (state) { 21 + case POWER_MODE_BEST_PERFORMANCE: 22 + return "Best Performance"; 23 + case POWER_MODE_BALANCED: 24 + return "Balanced"; 25 + case POWER_MODE_BEST_POWER_EFFICIENCY: 26 + return "Best Power Efficiency"; 27 + case POWER_MODE_ENERGY_SAVE: 28 + return "Energy Save"; 29 + default: 30 + return "Unknown Power Mode"; 31 + } 32 + } 33 + 16 34 static const char *slider_as_str(unsigned int state) 17 35 { 18 36 switch (state) { ··· 81 63 82 64 pr_debug("Static Slider Data - END\n"); 83 65 } 66 + 67 + static void amd_pmf_dump_sps_defaults_v2(struct amd_pmf_static_slider_granular_v2 *data) 68 + { 69 + unsigned int i, j; 70 + 71 + pr_debug("Static Slider APTS state index data - BEGIN"); 72 + pr_debug("size: %u\n", data->size); 73 + 74 + for (i = 0; i < POWER_SOURCE_MAX; i++) 75 + for (j = 0; j < POWER_MODE_V2_MAX; j++) 76 + pr_debug("%s %s: %u\n", amd_pmf_source_as_str(i), slider_v2_as_str(j), 77 + data->sps_idx.power_states[i][j]); 78 + 79 + pr_debug("Static Slider APTS state index data - END\n"); 80 + } 81 + 82 + static void amd_pmf_dump_apts_sps_defaults(struct amd_pmf_apts_granular *info) 83 + { 84 + int i; 85 + 86 + pr_debug("Static Slider APTS index default values data - BEGIN"); 87 + 88 + for (i = 0; i < APTS_MAX_STATES; i++) { 89 + pr_debug("Table Version[%d] = %u\n", i, info->val[i].table_version); 90 + pr_debug("Fan Index[%d] = %u\n", i, info->val[i].fan_table_idx); 91 + pr_debug("PPT[%d] = %u\n", i, info->val[i].pmf_ppt); 92 + pr_debug("PPT APU[%d] = %u\n", i, info->val[i].ppt_pmf_apu_only); 93 + pr_debug("STT Min[%d] = %u\n", i, info->val[i].stt_min_limit); 94 + pr_debug("STT APU[%d] = %u\n", i, 
info->val[i].stt_skin_temp_limit_apu); 95 + pr_debug("STT HS2[%d] = %u\n", i, info->val[i].stt_skin_temp_limit_hs2); 96 + } 97 + 98 + pr_debug("Static Slider APTS index default values data - END"); 99 + } 84 100 #else 85 101 static void amd_pmf_dump_sps_defaults(struct amd_pmf_static_slider_granular *data) {} 102 + static void amd_pmf_dump_sps_defaults_v2(struct amd_pmf_static_slider_granular_v2 *data) {} 103 + static void amd_pmf_dump_apts_sps_defaults(struct amd_pmf_apts_granular *info) {} 86 104 #endif 105 + 106 + static void amd_pmf_load_apts_defaults_sps_v2(struct amd_pmf_dev *pdev) 107 + { 108 + struct amd_pmf_apts_granular_output output; 109 + struct amd_pmf_apts_output *ps; 110 + int i; 111 + 112 + memset(&apts_config_store, 0, sizeof(apts_config_store)); 113 + 114 + ps = apts_config_store.val; 115 + 116 + for (i = 0; i < APTS_MAX_STATES; i++) { 117 + apts_get_static_slider_granular_v2(pdev, &output, i); 118 + ps[i].table_version = output.val.table_version; 119 + ps[i].fan_table_idx = output.val.fan_table_idx; 120 + ps[i].pmf_ppt = output.val.pmf_ppt; 121 + ps[i].ppt_pmf_apu_only = output.val.ppt_pmf_apu_only; 122 + ps[i].stt_min_limit = output.val.stt_min_limit; 123 + ps[i].stt_skin_temp_limit_apu = output.val.stt_skin_temp_limit_apu; 124 + ps[i].stt_skin_temp_limit_hs2 = output.val.stt_skin_temp_limit_hs2; 125 + } 126 + 127 + amd_pmf_dump_apts_sps_defaults(&apts_config_store); 128 + } 129 + 130 + static void amd_pmf_load_defaults_sps_v2(struct amd_pmf_dev *dev) 131 + { 132 + struct apmf_static_slider_granular_output_v2 output; 133 + unsigned int i, j; 134 + 135 + memset(&config_store_v2, 0, sizeof(config_store_v2)); 136 + apmf_get_static_slider_granular_v2(dev, &output); 137 + 138 + config_store_v2.size = output.size; 139 + 140 + for (i = 0; i < POWER_SOURCE_MAX; i++) 141 + for (j = 0; j < POWER_MODE_V2_MAX; j++) 142 + config_store_v2.sps_idx.power_states[i][j] = 143 + output.sps_idx.power_states[i][j]; 144 + 145 + 
amd_pmf_dump_sps_defaults_v2(&config_store_v2); 146 + } 87 147 88 148 static void amd_pmf_load_defaults_sps(struct amd_pmf_dev *dev) 89 149 { ··· 188 92 } 189 93 } 190 94 amd_pmf_dump_sps_defaults(&config_store); 95 + } 96 + 97 + static void amd_pmf_update_slider_v2(struct amd_pmf_dev *dev, int idx) 98 + { 99 + amd_pmf_send_cmd(dev, SET_PMF_PPT, false, apts_config_store.val[idx].pmf_ppt, NULL); 100 + amd_pmf_send_cmd(dev, SET_PMF_PPT_APU_ONLY, false, 101 + apts_config_store.val[idx].ppt_pmf_apu_only, NULL); 102 + amd_pmf_send_cmd(dev, SET_STT_MIN_LIMIT, false, 103 + apts_config_store.val[idx].stt_min_limit, NULL); 104 + amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false, 105 + apts_config_store.val[idx].stt_skin_temp_limit_apu, NULL); 106 + amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false, 107 + apts_config_store.val[idx].stt_skin_temp_limit_hs2, NULL); 191 108 } 192 109 193 110 void amd_pmf_update_slider(struct amd_pmf_dev *dev, bool op, int idx, ··· 235 126 } 236 127 } 237 128 129 + static int amd_pmf_update_sps_power_limits_v2(struct amd_pmf_dev *pdev, int pwr_mode) 130 + { 131 + int src, index; 132 + 133 + src = amd_pmf_get_power_source(); 134 + 135 + switch (pwr_mode) { 136 + case POWER_MODE_PERFORMANCE: 137 + index = config_store_v2.sps_idx.power_states[src][POWER_MODE_BEST_PERFORMANCE]; 138 + amd_pmf_update_slider_v2(pdev, index); 139 + break; 140 + case POWER_MODE_BALANCED_POWER: 141 + index = config_store_v2.sps_idx.power_states[src][POWER_MODE_BALANCED]; 142 + amd_pmf_update_slider_v2(pdev, index); 143 + break; 144 + case POWER_MODE_POWER_SAVER: 145 + index = config_store_v2.sps_idx.power_states[src][POWER_MODE_BEST_POWER_EFFICIENCY]; 146 + amd_pmf_update_slider_v2(pdev, index); 147 + break; 148 + default: 149 + return -EINVAL; 150 + } 151 + 152 + return 0; 153 + } 154 + 238 155 int amd_pmf_set_sps_power_limits(struct amd_pmf_dev *pmf) 239 156 { 240 157 int mode; ··· 268 133 mode = amd_pmf_get_pprof_modes(pmf); 269 134 if (mode < 0) 270 135 return mode; 136 + 
137 + if (pmf->pmf_if_version == PMF_IF_V2) 138 + return amd_pmf_update_sps_power_limits_v2(pmf, mode); 271 139 272 140 amd_pmf_update_slider(pmf, SLIDER_OP_SET, mode, NULL); 273 141 ··· 394 256 dev->current_profile = PLATFORM_PROFILE_BALANCED; 395 257 396 258 if (is_apmf_func_supported(dev, APMF_FUNC_STATIC_SLIDER_GRANULAR)) { 397 - amd_pmf_load_defaults_sps(dev); 259 + if (dev->pmf_if_version == PMF_IF_V2) { 260 + amd_pmf_load_defaults_sps_v2(dev); 261 + amd_pmf_load_apts_defaults_sps_v2(dev); 262 + } else { 263 + amd_pmf_load_defaults_sps(dev); 264 + } 398 265 399 266 /* update SPS balanced power mode thermals */ 400 267 amd_pmf_set_sps_power_limits(dev);
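The v2 SPS path above no longer applies thermal values directly from a slider table; it maps the OS power profile to a BIOS-provided APTS state index per power source, then programs the limits stored at that index. A condensed model of the lookup in `amd_pmf_update_sps_power_limits_v2()` (table values made up here):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Hypothetical condensed model of the v2 index lookup. */
enum { SRC_AC, SRC_DC, SRC_MAX };
enum { V2_BEST_PERF, V2_BALANCED, V2_BEST_EFF, V2_ENERGY_SAVE, V2_MAX };
enum { PROF_PERFORMANCE, PROF_BALANCED, PROF_POWER_SAVER };

/* Index table that the driver fills from the BIOS at init time. */
static const uint8_t power_states[SRC_MAX][V2_MAX] = {
	[SRC_AC] = { 0, 2, 4, 6 },
	[SRC_DC] = { 1, 3, 5, 7 },
};

/* Map an OS power profile to the BIOS-provided APTS state index. */
static int profile_to_apts_index(int src, int profile)
{
	switch (profile) {
	case PROF_PERFORMANCE: return power_states[src][V2_BEST_PERF];
	case PROF_BALANCED:    return power_states[src][V2_BALANCED];
	case PROF_POWER_SAVER: return power_states[src][V2_BEST_EFF];
	default:               return -EINVAL;
	}
}
```

The returned index would then select one row of the APTS defaults (PPT, STT limits, fan table) to send to the SMU.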
+14 -9
drivers/platform/x86/amd/pmf/tee-if.c
··· 246 246 247 247 static int amd_pmf_start_policy_engine(struct amd_pmf_dev *dev) 248 248 { 249 - u32 cookie, length; 249 + struct cookie_header *header; 250 250 int res; 251 251 252 - cookie = readl(dev->policy_buf + POLICY_COOKIE_OFFSET); 253 - length = readl(dev->policy_buf + POLICY_COOKIE_LEN); 252 + if (dev->policy_sz < POLICY_COOKIE_OFFSET + sizeof(*header)) 253 + return -EINVAL; 254 254 255 - if (cookie != POLICY_SIGN_COOKIE || !length) { 255 + header = (struct cookie_header *)(dev->policy_buf + POLICY_COOKIE_OFFSET); 256 + 257 + if (header->sign != POLICY_SIGN_COOKIE || !header->length) { 256 258 dev_dbg(dev->dev, "cookie doesn't match\n"); 257 259 return -EINVAL; 258 260 } 259 261 262 + if (dev->policy_sz < header->length + 512) 263 + return -EINVAL; 264 + 260 265 /* Update the actual length */ 261 - dev->policy_sz = length + 512; 266 + dev->policy_sz = header->length + 512; 262 267 res = amd_pmf_invoke_cmd_init(dev); 263 268 if (res == TA_PMF_TYPE_SUCCESS) { 264 269 /* Now its safe to announce that smart pc is enabled */ ··· 276 271 } else { 277 272 dev_err(dev->dev, "ta invoke cmd init failed err: %x\n", res); 278 273 dev->smart_pc_enabled = false; 279 - return res; 274 + return -EIO; 280 275 } 281 276 282 277 return 0; ··· 316 311 317 312 amd_pmf_hex_dump_pb(dev); 318 313 ret = amd_pmf_start_policy_engine(dev); 319 - if (ret) 320 - return -EINVAL; 314 + if (ret < 0) 315 + return ret; 321 316 322 317 return length; 323 318 } ··· 458 453 goto error; 459 454 } 460 455 461 - memcpy(dev->policy_buf, dev->policy_base, dev->policy_sz); 456 + memcpy_fromio(dev->policy_buf, dev->policy_base, dev->policy_sz); 462 457 463 458 amd_pmf_hex_dump_pb(dev); 464 459
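The tee-if.c hunk above stops reading the cookie and length as two unrelated words and instead bounds-checks a `struct cookie_header` before trusting the length embedded in a user-supplied policy blob. A sketch of those checks (constants taken from the diff, function names hypothetical):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SIGN_COOKIE   0x31535024u  /* POLICY_SIGN_COOKIE from the hunk   */
#define COOKIE_OFFSET 0x10         /* POLICY_COOKIE_OFFSET               */
#define TRAILER_SZ    512          /* slack the driver adds to length    */

struct cookie_hdr {
	uint32_t sign;
	uint32_t length;
} __attribute__((packed));

/* Validate blob size, magic, and the self-reported body length. */
static int validate_policy(const uint8_t *buf, size_t buf_sz)
{
	struct cookie_hdr hdr;

	if (buf_sz < COOKIE_OFFSET + sizeof(hdr))
		return -EINVAL;            /* header not even present   */

	memcpy(&hdr, buf + COOKIE_OFFSET, sizeof(hdr));

	if (hdr.sign != SIGN_COOKIE || !hdr.length)
		return -EINVAL;            /* bad magic or empty body   */
	if (buf_sz < (size_t)hdr.length + TRAILER_SZ)
		return -EINVAL;            /* claimed length overruns   */

	return 0;
}
```

The cast to `size_t` before adding the trailer avoids a 32-bit overflow when the blob lies about its length.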
+16 -68
drivers/platform/x86/asus-wmi.c
··· 101 101 #define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI 0x9c31 102 102 103 103 #define ASUS_ACPI_UID_ASUSWMI "ASUSWMI" 104 - #define ASUS_ACPI_UID_ATK "ATK" 105 - 106 - #define WMI_EVENT_QUEUE_SIZE 0x10 107 - #define WMI_EVENT_QUEUE_END 0x1 108 - #define WMI_EVENT_MASK 0xFFFF 109 - /* The WMI hotkey event value is always the same. */ 110 - #define WMI_EVENT_VALUE_ATK 0xFF 111 104 112 105 #define WMI_EVENT_MASK 0xFFFF 113 106 ··· 212 219 int dsts_id; 213 220 int spec; 214 221 int sfun; 215 - bool wmi_event_queue; 216 222 217 223 struct input_dev *inputdev; 218 224 struct backlight_device *backlight_device; ··· 481 489 482 490 static int asus_wmi_get_devstate(struct asus_wmi *asus, u32 dev_id, u32 *retval) 483 491 { 484 - return asus_wmi_evaluate_method(asus->dsts_id, dev_id, 0, retval); 492 + int err; 493 + 494 + err = asus_wmi_evaluate_method(asus->dsts_id, dev_id, 0, retval); 495 + 496 + if (err) 497 + return err; 498 + 499 + if (*retval == ~0) 500 + return -ENODEV; 501 + 502 + return 0; 485 503 } 486 504 487 505 static int asus_wmi_set_devstate(u32 dev_id, u32 ctrl_param, ··· 1622 1620 if (asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_MICMUTE_LED)) { 1623 1621 asus->micmute_led.name = "platform::micmute"; 1624 1622 asus->micmute_led.max_brightness = 1; 1625 - asus->micmute_led.brightness = ledtrig_audio_get(LED_AUDIO_MICMUTE); 1626 1623 asus->micmute_led.brightness_set_blocking = micmute_led_set; 1627 1624 asus->micmute_led.default_trigger = "audio-micmute"; 1628 1625 ··· 4021 4020 static void asus_wmi_notify(u32 value, void *context) 4022 4021 { 4023 4022 struct asus_wmi *asus = context; 4024 - int code; 4025 - int i; 4023 + int code = asus_wmi_get_event_code(value); 4026 4024 4027 - for (i = 0; i < WMI_EVENT_QUEUE_SIZE + 1; i++) { 4028 - code = asus_wmi_get_event_code(value); 4029 - if (code < 0) { 4030 - pr_warn("Failed to get notify code: %d\n", code); 4031 - return; 4032 - } 4033 - 4034 - if (code == WMI_EVENT_QUEUE_END || code == WMI_EVENT_MASK) 4035 - 
return; 4036 - 4037 - asus_wmi_handle_event_code(code, asus); 4038 - 4039 - /* 4040 - * Double check that queue is present: 4041 - * ATK (with queue) uses 0xff, ASUSWMI (without) 0xd2. 4042 - */ 4043 - if (!asus->wmi_event_queue || value != WMI_EVENT_VALUE_ATK) 4044 - return; 4025 + if (code < 0) { 4026 + pr_warn("Failed to get notify code: %d\n", code); 4027 + return; 4045 4028 } 4046 4029 4047 - pr_warn("Failed to process event queue, last code: 0x%x\n", code); 4048 - } 4049 - 4050 - static int asus_wmi_notify_queue_flush(struct asus_wmi *asus) 4051 - { 4052 - int code; 4053 - int i; 4054 - 4055 - for (i = 0; i < WMI_EVENT_QUEUE_SIZE + 1; i++) { 4056 - code = asus_wmi_get_event_code(WMI_EVENT_VALUE_ATK); 4057 - if (code < 0) { 4058 - pr_warn("Failed to get event during flush: %d\n", code); 4059 - return code; 4060 - } 4061 - 4062 - if (code == WMI_EVENT_QUEUE_END || code == WMI_EVENT_MASK) 4063 - return 0; 4064 - } 4065 - 4066 - pr_warn("Failed to flush event queue\n"); 4067 - return -EIO; 4030 + asus_wmi_handle_event_code(code, asus); 4068 4031 } 4069 4032 4070 4033 /* Sysfs **********************************************************************/ ··· 4266 4301 } else { 4267 4302 dev_info(dev, "Detected %s, not ASUSWMI, use DSTS\n", wmi_uid); 4268 4303 asus->dsts_id = ASUS_WMI_METHODID_DSTS; 4269 - } 4270 - 4271 - /* 4272 - * Some devices can have multiple event codes stored in a queue before 4273 - * the module load if it was unloaded intermittently after calling 4274 - * the INIT method (enables event handling). The WMI notify handler is 4275 - * expected to retrieve all event codes until a retrieved code equals 4276 - * queue end marker (One or Ones). Old codes are flushed from the queue 4277 - * upon module load. Not enabling this when it should be has minimal 4278 - * visible impact so fall back if anything goes wrong. 
4279 - */ 4280 - wmi_uid = wmi_get_acpi_device_uid(asus->driver->event_guid); 4281 - if (wmi_uid && !strcmp(wmi_uid, ASUS_ACPI_UID_ATK)) { 4282 - dev_info(dev, "Detected ATK, enable event queue\n"); 4283 - 4284 - if (!asus_wmi_notify_queue_flush(asus)) 4285 - asus->wmi_event_queue = true; 4286 4304 } 4287 4305 4288 4306 /* CWAP allow to define the behavior of the Fn+F2 key,
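The asus-wmi change to `asus_wmi_get_devstate()` above treats an all-ones read as "device absent": firmware with no backing device for a DSTS query can answer successfully but with every bit set, so that sentinel is remapped to `-ENODEV` instead of being handed to callers as data. A small sketch of the remap (names hypothetical):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/*
 * Hypothetical model: call_err is the WMI evaluation result, raw is the
 * value the firmware returned when the call itself succeeded.
 */
static int get_devstate(uint32_t raw, int call_err, uint32_t *retval)
{
	if (call_err)
		return call_err;     /* WMI evaluation itself failed      */

	*retval = raw;
	if (*retval == ~0u)
		return -ENODEV;      /* all-ones sentinel: device absent  */

	return 0;
}
```

Callers can then distinguish "feature not present" from a real read value without each open-coding the sentinel check.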
-3
drivers/platform/x86/dell/Kconfig
··· 57 57 select POWER_SUPPLY 58 58 select LEDS_CLASS 59 59 select NEW_LEDS 60 - select LEDS_TRIGGERS 61 - select LEDS_TRIGGER_AUDIO 62 60 help 63 61 This driver adds support for rfkill and backlight control to Dell 64 62 laptops (except for some models covered by the Compal driver). ··· 163 165 164 166 config DELL_WMI_PRIVACY 165 167 bool "Dell WMI Hardware Privacy Support" 166 - depends on LEDS_TRIGGER_AUDIO = y || DELL_WMI = LEDS_TRIGGER_AUDIO 167 168 depends on DELL_WMI 168 169 help 169 170 This option adds integration with the "Dell Hardware Privacy"
-2
drivers/platform/x86/dell/dell-laptop.c
··· 2252 2252 if (dell_smbios_find_token(GLOBAL_MIC_MUTE_DISABLE) && 2253 2253 dell_smbios_find_token(GLOBAL_MIC_MUTE_ENABLE) && 2254 2254 !dell_privacy_has_mic_mute()) { 2255 - micmute_led_cdev.brightness = ledtrig_audio_get(LED_AUDIO_MICMUTE); 2256 2255 ret = led_classdev_register(&platform_device->dev, &micmute_led_cdev); 2257 2256 if (ret < 0) 2258 2257 goto fail_led; ··· 2260 2261 2261 2262 if (dell_smbios_find_token(GLOBAL_MUTE_DISABLE) && 2262 2263 dell_smbios_find_token(GLOBAL_MUTE_ENABLE)) { 2263 - mute_led_cdev.brightness = ledtrig_audio_get(LED_AUDIO_MUTE); 2264 2264 ret = led_classdev_register(&platform_device->dev, &mute_led_cdev); 2265 2265 if (ret < 0) 2266 2266 goto fail_backlight;
+1
drivers/platform/x86/dell/dell-wmi-ddv.c
··· 882 882 }, 883 883 .id_table = dell_wmi_ddv_id_table, 884 884 .probe = dell_wmi_ddv_probe, 885 + .no_singleton = true, 885 886 }; 886 887 module_wmi_driver(dell_wmi_ddv_driver); 887 888
-5
drivers/platform/x86/dell/dell-wmi-privacy.c
··· 288 288 priv->cdev.max_brightness = 1; 289 289 priv->cdev.brightness_set_blocking = dell_privacy_micmute_led_set; 290 290 priv->cdev.default_trigger = "audio-micmute"; 291 - priv->cdev.brightness = ledtrig_audio_get(LED_AUDIO_MICMUTE); 292 291 return devm_led_classdev_register(dev, &priv->cdev); 293 292 } 294 293 ··· 296 297 struct privacy_wmi_data *priv; 297 298 struct key_entry *keymap; 298 299 int ret, i, j; 299 - 300 - ret = wmi_has_guid(DELL_PRIVACY_GUID); 301 - if (!ret) 302 - pr_debug("Unable to detect available Dell privacy devices!\n"); 303 300 304 301 priv = devm_kzalloc(&wdev->dev, sizeof(*priv), GFP_KERNEL); 305 302 if (!priv)
+1 -1
drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
··· 25 25 /* reset bios to defaults */ 26 26 static const char * const reset_types[] = {"builtinsafe", "lastknowngood", "factory", "custom"}; 27 27 static int reset_option = -1; 28 - static struct class *fw_attr_class; 28 + static const struct class *fw_attr_class; 29 29 30 30 31 31 /**
+2 -2
drivers/platform/x86/firmware_attributes_class.c
··· 10 10 static DEFINE_MUTEX(fw_attr_lock); 11 11 static int fw_attr_inuse; 12 12 13 - static struct class firmware_attributes_class = { 13 + static const struct class firmware_attributes_class = { 14 14 .name = "firmware-attributes", 15 15 }; 16 16 17 - int fw_attributes_class_get(struct class **fw_attr_class) 17 + int fw_attributes_class_get(const struct class **fw_attr_class) 18 18 { 19 19 int err; 20 20
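The firmware_attributes_class hunks constify the shared `struct class`, but the get/put use-count pattern around it is unchanged: the class is set up on the first `fw_attributes_class_get()` and torn down on the last put. A single-threaded model of that lifecycle (the real code holds `fw_attr_lock` around both paths; names hypothetical):

```c
#include <assert.h>
#include <errno.h>

static int inuse;      /* models fw_attr_inuse              */
static int registered; /* 1 while the class would be live   */

/* First user triggers what would be class_register(). */
static int fw_class_get(void)
{
	if (!inuse)
		registered = 1;
	inuse++;
	return 0;
}

/* Last user triggers what would be class_unregister(). */
static int fw_class_put(void)
{
	if (inuse <= 0)
		return -EINVAL;  /* unbalanced put */
	if (--inuse == 0)
		registered = 0;
	return 0;
}
```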
+1 -1
drivers/platform/x86/firmware_attributes_class.h
··· 5 5 #ifndef FW_ATTR_CLASS_H 6 6 #define FW_ATTR_CLASS_H 7 7 8 - int fw_attributes_class_get(struct class **fw_attr_class); 8 + int fw_attributes_class_get(const struct class **fw_attr_class); 9 9 int fw_attributes_class_put(void); 10 10 11 11 #endif /* FW_ATTR_CLASS_H */
+117
drivers/platform/x86/fujitsu-laptop.c
··· 49 49 #include <linux/kfifo.h> 50 50 #include <linux/leds.h> 51 51 #include <linux/platform_device.h> 52 + #include <linux/power_supply.h> 53 + #include <acpi/battery.h> 52 54 #include <acpi/video.h> 53 55 54 56 #define FUJITSU_DRIVER_VERSION "0.6.0" ··· 99 97 #define BACKLIGHT_OFF (BIT(0) | BIT(1)) 100 98 #define BACKLIGHT_ON 0 101 99 100 + /* FUNC interface - battery control interface */ 101 + #define FUNC_S006_METHOD 0x1006 102 + #define CHARGE_CONTROL_RW 0x21 103 + 102 104 /* Scancodes read from the GIRB register */ 103 105 #define KEY1_CODE 0x410 104 106 #define KEY2_CODE 0x411 ··· 138 132 spinlock_t fifo_lock; 139 133 int flags_supported; 140 134 int flags_state; 135 + bool charge_control_supported; 141 136 }; 142 137 143 138 static struct acpi_device *fext; ··· 169 162 "FUNC 0x%x (args 0x%x, 0x%x, 0x%x) returned 0x%x\n", 170 163 func, op, feature, state, (int)value); 171 164 return value; 165 + } 166 + 167 + /* Battery charge control code */ 168 + static ssize_t charge_control_end_threshold_store(struct device *dev, 169 + struct device_attribute *attr, 170 + const char *buf, size_t count) 171 + { 172 + int cc_end_value, s006_cc_return; 173 + int value, ret; 174 + 175 + ret = kstrtouint(buf, 10, &value); 176 + if (ret) 177 + return ret; 178 + 179 + if (value < 50 || value > 100) 180 + return -EINVAL; 181 + 182 + cc_end_value = value * 0x100 + 0x20; 183 + s006_cc_return = call_fext_func(fext, FUNC_S006_METHOD, 184 + CHARGE_CONTROL_RW, cc_end_value, 0x0); 185 + if (s006_cc_return < 0) 186 + return s006_cc_return; 187 + /* 188 + * The S006 0x21 method returns 0x00 in case the provided value 189 + * is invalid. 
190 + */ 191 + if (s006_cc_return == 0x00) 192 + return -EINVAL; 193 + 194 + return count; 195 + } 196 + 197 + static ssize_t charge_control_end_threshold_show(struct device *dev, 198 + struct device_attribute *attr, 199 + char *buf) 200 + { 201 + int status; 202 + 203 + status = call_fext_func(fext, FUNC_S006_METHOD, 204 + CHARGE_CONTROL_RW, 0x21, 0x0); 205 + if (status < 0) 206 + return status; 207 + 208 + return sysfs_emit(buf, "%d\n", status); 209 + } 210 + 211 + static DEVICE_ATTR_RW(charge_control_end_threshold); 212 + 213 + /* ACPI battery hook */ 214 + static int fujitsu_battery_add_hook(struct power_supply *battery, 215 + struct acpi_battery_hook *hook) 216 + { 217 + return device_create_file(&battery->dev, 218 + &dev_attr_charge_control_end_threshold); 219 + } 220 + 221 + static int fujitsu_battery_remove_hook(struct power_supply *battery, 222 + struct acpi_battery_hook *hook) 223 + { 224 + device_remove_file(&battery->dev, 225 + &dev_attr_charge_control_end_threshold); 226 + 227 + return 0; 228 + } 229 + 230 + static struct acpi_battery_hook battery_hook = { 231 + .add_battery = fujitsu_battery_add_hook, 232 + .remove_battery = fujitsu_battery_remove_hook, 233 + .name = "Fujitsu Battery Extension", 234 + }; 235 + 236 + /* 237 + * These functions are intended to be called from acpi_fujitsu_laptop_add and 238 + * acpi_fujitsu_laptop_remove. 239 + */ 240 + static int fujitsu_battery_charge_control_add(struct acpi_device *device) 241 + { 242 + struct fujitsu_laptop *priv = acpi_driver_data(device); 243 + int s006_cc_return; 244 + 245 + priv->charge_control_supported = false; 246 + /* 247 + * Check if the S006 0x21 method exists by trying to get the current 248 + * battery charge limit. 
249 + */ 250 + s006_cc_return = call_fext_func(fext, FUNC_S006_METHOD, 251 + CHARGE_CONTROL_RW, 0x21, 0x0); 252 + if (s006_cc_return < 0) 253 + return s006_cc_return; 254 + if (s006_cc_return == UNSUPPORTED_CMD) 255 + return -ENODEV; 256 + 257 + priv->charge_control_supported = true; 258 + battery_hook_register(&battery_hook); 259 + 260 + return 0; 261 + } 262 + 263 + static void fujitsu_battery_charge_control_remove(struct acpi_device *device) 264 + { 265 + struct fujitsu_laptop *priv = acpi_driver_data(device); 266 + 267 + if (priv->charge_control_supported) 268 + battery_hook_unregister(&battery_hook); 172 269 } 173 270 174 271 /* Hardware access for LCD brightness control */ ··· 950 839 if (ret) 951 840 goto err_free_fifo; 952 841 842 + ret = fujitsu_battery_charge_control_add(device); 843 + if (ret < 0) 844 + pr_warn("Unable to register battery charge control: %d\n", ret); 845 + 953 846 return 0; 954 847 955 848 err_free_fifo: ··· 965 850 static void acpi_fujitsu_laptop_remove(struct acpi_device *device) 966 851 { 967 852 struct fujitsu_laptop *priv = acpi_driver_data(device); 853 + 854 + fujitsu_battery_charge_control_remove(device); 968 855 969 856 fujitsu_laptop_platform_remove(device); 970 857
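
The fujitsu-laptop hunk above packs the charge limit into the S006 0x21 argument as `value * 0x100 + 0x20`. A minimal userspace sketch of that encoding, with our own helper name (not part of the driver), including the same 50..100 range check that maps to `-EINVAL`:

```c
#include <assert.h>

/* Sketch of the value encoding in the new charge_control_end_threshold
 * store handler: the percentage (50..100) goes in the high byte of the
 * FUNC S006 0x21 argument, with 0x20 in the low byte. Returns -1 for
 * out-of-range input, mirroring the driver's -EINVAL path. The helper
 * name is ours, for illustration only.
 */
static int s006_encode_charge_limit(int percent)
{
	if (percent < 50 || percent > 100)
		return -1;
	return percent * 0x100 + 0x20;
}
```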
+1 -1
drivers/platform/x86/hp/hp-bioscfg/bioscfg.c
··· 24 24 .mutex = __MUTEX_INITIALIZER(bioscfg_drv.mutex), 25 25 }; 26 26 27 - static struct class *fw_attr_class; 27 + static const struct class *fw_attr_class; 28 28 29 29 ssize_t display_name_language_code_show(struct kobject *kobj, 30 30 struct kobj_attribute *attr,
+65 -6
drivers/platform/x86/hp/hp-wmi.c
··· 29 29 #include <linux/dmi.h> 30 30 31 31 MODULE_AUTHOR("Matthew Garrett <mjg59@srcf.ucam.org>"); 32 - MODULE_DESCRIPTION("HP laptop WMI hotkeys driver"); 32 + MODULE_DESCRIPTION("HP laptop WMI driver"); 33 33 MODULE_LICENSE("GPL"); 34 34 35 35 MODULE_ALIAS("wmi:95F24279-4D7B-4334-9387-ACCDC67EF61C"); 36 - MODULE_ALIAS("wmi:5FB7F034-2C63-45e9-BE91-3D44E2C707E4"); 36 + MODULE_ALIAS("wmi:5FB7F034-2C63-45E9-BE91-3D44E2C707E4"); 37 37 38 38 #define HPWMI_EVENT_GUID "95F24279-4D7B-4334-9387-ACCDC67EF61C" 39 - #define HPWMI_BIOS_GUID "5FB7F034-2C63-45e9-BE91-3D44E2C707E4" 39 + #define HPWMI_BIOS_GUID "5FB7F034-2C63-45E9-BE91-3D44E2C707E4" 40 + 41 + #define HP_OMEN_EC_THERMAL_PROFILE_FLAGS_OFFSET 0x62 42 + #define HP_OMEN_EC_THERMAL_PROFILE_TIMER_OFFSET 0x63 40 43 #define HP_OMEN_EC_THERMAL_PROFILE_OFFSET 0x95 44 + 41 45 #define zero_if_sup(tmp) (zero_insize_support?0:sizeof(tmp)) // use when zero insize is required 42 46 43 47 /* DMI board names of devices that should use the omen specific path for ··· 59 55 "874A", "8603", "8604", "8748", "886B", "886C", "878A", "878B", "878C", 60 56 "88C8", "88CB", "8786", "8787", "8788", "88D1", "88D2", "88F4", "88FD", 61 57 "88F5", "88F6", "88F7", "88FE", "88FF", "8900", "8901", "8902", "8912", 62 - "8917", "8918", "8949", "894A", "89EB" 58 + "8917", "8918", "8949", "894A", "89EB", "8BAD", "8A42" 63 59 }; 64 60 65 61 /* DMI Board names of Omen laptops that are specifically set to be thermal 66 62 * profile version 0 by the Omen Command Center app, regardless of what 67 63 * the get system design information WMI call returns 68 64 */ 69 - static const char *const omen_thermal_profile_force_v0_boards[] = { 65 + static const char * const omen_thermal_profile_force_v0_boards[] = { 70 66 "8607", "8746", "8747", "8749", "874A", "8748" 67 + }; 68 + 69 + /* DMI board names of Omen laptops that have a thermal profile timer which will 70 + * cause the embedded controller to set the thermal profile back to 71 + * "balanced" when reaching 
zero. 72 + */ 73 + static const char * const omen_timed_thermal_profile_boards[] = { 74 + "8BAD", "8A42" 71 75 }; 72 76 73 77 /* DMI Board names of Victus laptops */ ··· 192 180 HP_OMEN_V1_THERMAL_PROFILE_DEFAULT = 0x30, 193 181 HP_OMEN_V1_THERMAL_PROFILE_PERFORMANCE = 0x31, 194 182 HP_OMEN_V1_THERMAL_PROFILE_COOL = 0x50, 183 + }; 184 + 185 + enum hp_thermal_profile_omen_flags { 186 + HP_OMEN_EC_FLAGS_TURBO = 0x04, 187 + HP_OMEN_EC_FLAGS_NOTIMER = 0x02, 188 + HP_OMEN_EC_FLAGS_JUSTSET = 0x01, 195 189 }; 196 190 197 191 enum hp_thermal_profile_victus { ··· 467 449 468 450 static int omen_thermal_profile_set(int mode) 469 451 { 470 - char buffer[2] = {0, mode}; 452 + /* The Omen Control Center actively sets the first byte of the buffer to 453 + * 255, so let's mimic this behaviour to be as close as possible to 454 + * the original software. 455 + */ 456 + char buffer[2] = {-1, mode}; 471 457 int ret; 472 458 473 459 ret = hp_wmi_perform_query(HPWMI_SET_PERFORMANCE_MODE, HPWMI_GM, ··· 1223 1201 return 0; 1224 1202 } 1225 1203 1204 + static bool has_omen_thermal_profile_ec_timer(void) 1205 + { 1206 + const char *board_name = dmi_get_system_info(DMI_BOARD_NAME); 1207 + 1208 + if (!board_name) 1209 + return false; 1210 + 1211 + return match_string(omen_timed_thermal_profile_boards, 1212 + ARRAY_SIZE(omen_timed_thermal_profile_boards), 1213 + board_name) >= 0; 1214 + } 1215 + 1216 + inline int omen_thermal_profile_ec_flags_set(enum hp_thermal_profile_omen_flags flags) 1217 + { 1218 + return ec_write(HP_OMEN_EC_THERMAL_PROFILE_FLAGS_OFFSET, flags); 1219 + } 1220 + 1221 + inline int omen_thermal_profile_ec_timer_set(u8 value) 1222 + { 1223 + return ec_write(HP_OMEN_EC_THERMAL_PROFILE_TIMER_OFFSET, value); 1224 + } 1225 + 1226 1226 static int platform_profile_omen_set(struct platform_profile_handler *pprof, 1227 1227 enum platform_profile_option profile) 1228 1228 { 1229 1229 int err, tp, tp_version; 1230 + enum hp_thermal_profile_omen_flags flags = 0; 1230 1231 1231 1232 
tp_version = omen_get_thermal_policy_version(); 1232 1233 ··· 1282 1237 err = omen_thermal_profile_set(tp); 1283 1238 if (err < 0) 1284 1239 return err; 1240 + 1241 + if (has_omen_thermal_profile_ec_timer()) { 1242 + err = omen_thermal_profile_ec_timer_set(0); 1243 + if (err < 0) 1244 + return err; 1245 + 1246 + if (profile == PLATFORM_PROFILE_PERFORMANCE) 1247 + flags = HP_OMEN_EC_FLAGS_NOTIMER | 1248 + HP_OMEN_EC_FLAGS_TURBO; 1249 + 1250 + err = omen_thermal_profile_ec_flags_set(flags); 1251 + if (err < 0) 1252 + return err; 1253 + } 1285 1254 1286 1255 return 0; 1287 1256 }
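
On the timed-EC boards added above, `platform_profile_omen_set()` zeroes the timer and then writes a flags byte. The selection logic can be sketched as a pure function (the `performance` parameter stands in for the `profile == PLATFORM_PROFILE_PERFORMANCE` check; the helper name is ours):

```c
#include <assert.h>

/* Flag bits from enum hp_thermal_profile_omen_flags in the patch. */
#define HP_OMEN_EC_FLAGS_TURBO   0x04
#define HP_OMEN_EC_FLAGS_NOTIMER 0x02

/* Performance mode disables the EC countdown and enables turbo; any
 * other profile clears both bits so the timer behaviour applies.
 */
static unsigned int omen_ec_flags_for_profile(int performance)
{
	if (performance)
		return HP_OMEN_EC_FLAGS_NOTIMER | HP_OMEN_EC_FLAGS_TURBO;
	return 0;
}
```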
-1
drivers/platform/x86/huawei-wmi.c
··· 310 310 huawei->cdev.max_brightness = 1; 311 311 huawei->cdev.brightness_set_blocking = &huawei_wmi_micmute_led_set; 312 312 huawei->cdev.default_trigger = "audio-micmute"; 313 - huawei->cdev.brightness = ledtrig_audio_get(LED_AUDIO_MICMUTE); 314 313 huawei->cdev.dev = dev; 315 314 huawei->cdev.flags = LED_CORE_SUSPENDRESUME; 316 315
+1 -1
drivers/platform/x86/ibm_rtl.c
··· 179 179 return ret; 180 180 } 181 181 182 - static struct bus_type rtl_subsys = { 182 + static const struct bus_type rtl_subsys = { 183 183 .name = "ibm_rtl", 184 184 .dev_name = "ibm_rtl", 185 185 };
+3 -1
drivers/platform/x86/ideapad-laptop.c
··· 1091 1091 { KE_KEY, 0x07 | IDEAPAD_WMI_KEY, { KEY_HELP } }, 1092 1092 { KE_KEY, 0x0e | IDEAPAD_WMI_KEY, { KEY_PICKUP_PHONE } }, 1093 1093 { KE_KEY, 0x0f | IDEAPAD_WMI_KEY, { KEY_HANGUP_PHONE } }, 1094 + /* Refresh Rate Toggle (Fn+R) */ 1095 + { KE_KEY, 0x10 | IDEAPAD_WMI_KEY, { KEY_REFRESH_RATE_TOGGLE } }, 1094 1096 /* Dark mode toggle */ 1095 1097 { KE_KEY, 0x13 | IDEAPAD_WMI_KEY, { KEY_PROG1 } }, 1096 1098 /* Sound profile switch */ ··· 1102 1100 /* Lenovo Support */ 1103 1101 { KE_KEY, 0x27 | IDEAPAD_WMI_KEY, { KEY_HELP } }, 1104 1102 /* Refresh Rate Toggle */ 1105 - { KE_KEY, 0x0a | IDEAPAD_WMI_KEY, { KEY_DISPLAYTOGGLE } }, 1103 + { KE_KEY, 0x0a | IDEAPAD_WMI_KEY, { KEY_REFRESH_RATE_TOGGLE } }, 1106 1104 1107 1105 { KE_END }, 1108 1106 };
+1 -1
drivers/platform/x86/intel/ifs/load.c
··· 383 383 unsigned int expected_size; 384 384 const struct firmware *fw; 385 385 char scan_path[64]; 386 - int ret = -EINVAL; 386 + int ret; 387 387 388 388 snprintf(scan_path, sizeof(scan_path), "intel/ifs_%d/%02x-%02x-%02x-%02x.scan", 389 389 test->test_num, boot_cpu_data.x86, boot_cpu_data.x86_model,
+62 -39
drivers/platform/x86/intel/ifs/runtest.c
··· 23 23 /* Max retries on the same chunk */ 24 24 #define MAX_IFS_RETRIES 5 25 25 26 + struct run_params { 27 + struct ifs_data *ifsd; 28 + union ifs_scan *activate; 29 + union ifs_status status; 30 + }; 31 + 26 32 /* 27 33 * Number of TSC cycles that a logical CPU will wait for the other 28 34 * logical CPU on the core in the WRMSR(ACTIVATE_SCAN). ··· 140 134 return false; 141 135 } 142 136 137 + #define SPINUNIT 100 /* 100 nsec */ 138 + static atomic_t array_cpus_in; 139 + static atomic_t scan_cpus_in; 140 + 141 + /* 142 + * Simplified cpu sibling rendezvous loop based on microcode loader __wait_for_cpus() 143 + */ 144 + static void wait_for_sibling_cpu(atomic_t *t, long long timeout) 145 + { 146 + int cpu = smp_processor_id(); 147 + const struct cpumask *smt_mask = cpu_smt_mask(cpu); 148 + int all_cpus = cpumask_weight(smt_mask); 149 + 150 + atomic_inc(t); 151 + while (atomic_read(t) < all_cpus) { 152 + if (timeout < SPINUNIT) 153 + return; 154 + ndelay(SPINUNIT); 155 + timeout -= SPINUNIT; 156 + touch_nmi_watchdog(); 157 + } 158 + } 159 + 143 160 /* 144 161 * Execute the scan. Called "simultaneously" on all threads of a core 145 162 * at high priority using the stop_cpus mechanism. 
146 163 */ 147 164 static int doscan(void *data) 148 165 { 149 - int cpu = smp_processor_id(); 150 - u64 *msrs = data; 166 + int cpu = smp_processor_id(), start, stop; 167 + struct run_params *params = data; 168 + union ifs_status status; 169 + struct ifs_data *ifsd; 151 170 int first; 171 + 172 + ifsd = params->ifsd; 173 + 174 + if (ifsd->generation) { 175 + start = params->activate->gen2.start; 176 + stop = params->activate->gen2.stop; 177 + } else { 178 + start = params->activate->gen0.start; 179 + stop = params->activate->gen0.stop; 180 + } 152 181 153 182 /* Only the first logical CPU on a core reports result */ 154 183 first = cpumask_first(cpu_smt_mask(cpu)); 184 + 185 + wait_for_sibling_cpu(&scan_cpus_in, NSEC_PER_SEC); 155 186 156 187 /* 157 188 * This WRMSR will wait for other HT threads to also write ··· 198 155 * take up to 200 milliseconds (in the case where all chunks 199 156 * are processed in a single pass) before it retires. 200 157 */ 201 - wrmsrl(MSR_ACTIVATE_SCAN, msrs[0]); 158 + wrmsrl(MSR_ACTIVATE_SCAN, params->activate->data); 159 + rdmsrl(MSR_SCAN_STATUS, status.data); 202 160 203 - if (cpu == first) { 204 - /* Pass back the result of the scan */ 205 - rdmsrl(MSR_SCAN_STATUS, msrs[1]); 206 - } 161 + trace_ifs_status(ifsd->cur_batch, start, stop, status.data); 162 + 163 + /* Pass back the result of the scan */ 164 + if (cpu == first) 165 + params->status = status; 207 166 208 167 return 0; 209 168 } ··· 224 179 struct ifs_data *ifsd; 225 180 int to_start, to_stop; 226 181 int status_chunk; 227 - u64 msrvals[2]; 182 + struct run_params params; 228 183 int retries; 229 184 230 185 ifsd = ifs_get_data(dev); ··· 234 189 activate.sigmce = 0; 235 190 to_start = 0; 236 191 to_stop = ifsd->valid_chunks - 1; 192 + 193 + params.ifsd = ifs_get_data(dev); 237 194 238 195 if (ifsd->generation) { 239 196 activate.gen2.start = to_start; ··· 254 207 break; 255 208 } 256 209 257 - msrvals[0] = activate.data; 258 - stop_core_cpuslocked(cpu, doscan, msrvals); 
210 + params.activate = &activate; 211 + atomic_set(&scan_cpus_in, 0); 212 + stop_core_cpuslocked(cpu, doscan, &params); 259 213 260 - status.data = msrvals[1]; 261 - 262 - trace_ifs_status(cpu, to_start, to_stop, status.data); 214 + status = params.status; 263 215 264 216 /* Some cases can be retried, give up for others */ 265 217 if (!can_restart(status)) ··· 296 250 } 297 251 } 298 252 299 - #define SPINUNIT 100 /* 100 nsec */ 300 - static atomic_t array_cpus_out; 301 - 302 - /* 303 - * Simplified cpu sibling rendezvous loop based on microcode loader __wait_for_cpus() 304 - */ 305 - static void wait_for_sibling_cpu(atomic_t *t, long long timeout) 306 - { 307 - int cpu = smp_processor_id(); 308 - const struct cpumask *smt_mask = cpu_smt_mask(cpu); 309 - int all_cpus = cpumask_weight(smt_mask); 310 - 311 - atomic_inc(t); 312 - while (atomic_read(t) < all_cpus) { 313 - if (timeout < SPINUNIT) 314 - return; 315 - ndelay(SPINUNIT); 316 - timeout -= SPINUNIT; 317 - touch_nmi_watchdog(); 318 - } 319 - } 320 - 321 253 static int do_array_test(void *data) 322 254 { 323 255 union ifs_array *command = data; 324 256 int cpu = smp_processor_id(); 325 257 int first; 258 + 259 + wait_for_sibling_cpu(&array_cpus_in, NSEC_PER_SEC); 326 260 327 261 /* 328 262 * Only one logical CPU on a core needs to trigger the Array test via MSR write. ··· 314 288 /* Pass back the result of the test */ 315 289 rdmsrl(MSR_ARRAY_BIST, command->data); 316 290 } 317 - 318 - /* Tests complete faster if the sibling is spinning here */ 319 - wait_for_sibling_cpu(&array_cpus_out, NSEC_PER_SEC); 320 291 321 292 return 0; 322 293 } ··· 335 312 timed_out = true; 336 313 break; 337 314 } 338 - atomic_set(&array_cpus_out, 0); 315 + atomic_set(&array_cpus_in, 0); 339 316 stop_core_cpuslocked(cpu, do_array_test, &command); 340 317 341 318 if (command.ctrl_result)
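
The IFS change moves the sibling rendezvous before the `ACTIVATE_SCAN` MSR write. A userspace sketch of that `wait_for_sibling_cpu()` loop using C11 atomics (the kernel version additionally calls `ndelay()` and `touch_nmi_watchdog()` per iteration, omitted here; the test shim is ours):

```c
#include <assert.h>
#include <stdatomic.h>

#define SPINUNIT 100 /* timeout budget consumed per spin, as in the patch */

/* Each sibling increments the shared counter, then spins until all
 * 'all_cpus' participants have checked in or the timeout budget runs out.
 */
static void wait_for_siblings(atomic_int *t, int all_cpus, long long timeout)
{
	atomic_fetch_add(t, 1);
	while (atomic_load(t) < all_cpus) {
		if (timeout < SPINUNIT)
			return;
		timeout -= SPINUNIT;
	}
}

/* Test shim: preload the counter as if 'already_in' siblings arrived
 * first, run the wait, and report the final count.
 */
static int rendezvous_count_after(int already_in, int all_cpus, long long timeout)
{
	atomic_int t;

	atomic_init(&t, already_in);
	wait_for_siblings(&t, all_cpus, timeout);
	return atomic_load(&t);
}
```

With the last sibling arriving, the counter reaches `all_cpus` and the wait returns immediately; a lone arrival spins down its budget and gives up, as the kernel loop does.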
+2
drivers/platform/x86/intel/pmc/arl.c
··· 673 673 }; 674 674 675 675 #define ARL_NPU_PCI_DEV 0xad1d 676 + #define ARL_GNA_PCI_DEV 0xae4c 676 677 /* 677 678 * Set power state of select devices that do not have drivers to D3 678 679 * so that they do not block Package C entry. ··· 681 680 static void arl_d3_fixup(void) 682 681 { 683 682 pmc_core_set_device_d3(ARL_NPU_PCI_DEV); 683 + pmc_core_set_device_d3(ARL_GNA_PCI_DEV); 684 684 } 685 685 686 686 static int arl_resume(struct pmc_dev *pmcdev)
+36 -11
drivers/platform/x86/intel/pmc/core.c
··· 1389 1389 return -ENOMEM; 1390 1390 pmcdev->pmcs[PMC_IDX_MAIN] = primary_pmc; 1391 1391 1392 + /* The last element in msr_map is empty */ 1393 + pmcdev->num_of_pkgc = ARRAY_SIZE(msr_map) - 1; 1394 + pmcdev->pkgc_res_cnt = devm_kcalloc(&pdev->dev, 1395 + pmcdev->num_of_pkgc, 1396 + sizeof(*pmcdev->pkgc_res_cnt), 1397 + GFP_KERNEL); 1398 + if (!pmcdev->pkgc_res_cnt) 1399 + return -ENOMEM; 1400 + 1392 1401 /* 1393 1402 * Coffee Lake has CPU ID of Kaby Lake and Cannon Lake PCH. So here 1394 1403 * Sunrisepoint PCH regmap can't be used. Use Cannon Lake PCH regmap ··· 1441 1432 { 1442 1433 struct pmc_dev *pmcdev = dev_get_drvdata(dev); 1443 1434 struct pmc *pmc = pmcdev->pmcs[PMC_IDX_MAIN]; 1435 + unsigned int i; 1444 1436 1445 1437 if (pmcdev->suspend) 1446 1438 pmcdev->suspend(pmcdev); ··· 1450 1440 if (pm_suspend_via_firmware()) 1451 1441 return 0; 1452 1442 1453 - /* Save PC10 residency for checking later */ 1454 - if (rdmsrl_safe(MSR_PKG_C10_RESIDENCY, &pmcdev->pc10_counter)) 1455 - return -EIO; 1443 + /* Save PKGC residency for checking later */ 1444 + for (i = 0; i < pmcdev->num_of_pkgc; i++) { 1445 + if (rdmsrl_safe(msr_map[i].bit_mask, &pmcdev->pkgc_res_cnt[i])) 1446 + return -EIO; 1447 + } 1456 1448 1457 1449 /* Save S0ix residency for checking later */ 1458 1450 if (pmc_core_dev_state_get(pmc, &pmcdev->s0ix_counter)) ··· 1463 1451 return 0; 1464 1452 } 1465 1453 1466 - static inline bool pmc_core_is_pc10_failed(struct pmc_dev *pmcdev) 1454 + static inline bool pmc_core_is_deepest_pkgc_failed(struct pmc_dev *pmcdev) 1467 1455 { 1468 - u64 pc10_counter; 1456 + u32 deepest_pkgc_msr = msr_map[pmcdev->num_of_pkgc - 1].bit_mask; 1457 + u64 deepest_pkgc_residency; 1469 1458 1470 - if (rdmsrl_safe(MSR_PKG_C10_RESIDENCY, &pc10_counter)) 1459 + if (rdmsrl_safe(deepest_pkgc_msr, &deepest_pkgc_residency)) 1471 1460 return false; 1472 1461 1473 - if (pc10_counter == pmcdev->pc10_counter) 1462 + if (deepest_pkgc_residency == pmcdev->pkgc_res_cnt[pmcdev->num_of_pkgc - 
1]) 1474 1463 return true; 1475 1464 1476 1465 return false; ··· 1510 1497 if (!warn_on_s0ix_failures) 1511 1498 return 0; 1512 1499 1513 - if (pmc_core_is_pc10_failed(pmcdev)) { 1514 - /* S0ix failed because of PC10 entry failure */ 1515 - dev_info(dev, "CPU did not enter PC10!!! (PC10 cnt=0x%llx)\n", 1516 - pmcdev->pc10_counter); 1500 + if (pmc_core_is_deepest_pkgc_failed(pmcdev)) { 1501 + /* S0ix failed because of deepest PKGC entry failure */ 1502 + dev_info(dev, "CPU did not enter %s!!! (%s cnt=0x%llx)\n", 1503 + msr_map[pmcdev->num_of_pkgc - 1].name, 1504 + msr_map[pmcdev->num_of_pkgc - 1].name, 1505 + pmcdev->pkgc_res_cnt[pmcdev->num_of_pkgc - 1]); 1506 + 1507 + for (i = 0; i < pmcdev->num_of_pkgc; i++) { 1508 + u64 pc_cnt; 1509 + 1510 + if (!rdmsrl_safe(msr_map[i].bit_mask, &pc_cnt)) { 1511 + dev_info(dev, "Prev %s cnt = 0x%llx, Current %s cnt = 0x%llx\n", 1512 + msr_map[i].name, pmcdev->pkgc_res_cnt[i], 1513 + msr_map[i].name, pc_cnt); 1514 + } 1515 + } 1517 1516 return 0; 1518 1517 } 1519 1518
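
The core.c rework generalizes the old PC10-only check: after resume, package C-state entry is considered failed if the deepest PKGC residency counter (the last element of the snapshot array, index `num_of_pkgc - 1`) did not advance during suspend. A sketch of that comparison, with a helper name of our own:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* 'saved' is the counter snapshot taken in the suspend path, 'now' the
 * values read back on resume, 'n' the number of PKGC counters. Only the
 * deepest counter decides failure, as in pmc_core_is_deepest_pkgc_failed().
 */
static bool deepest_pkgc_failed(const uint64_t *saved, const uint64_t *now,
				int n)
{
	return now[n - 1] == saved[n - 1];
}
```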
+5 -2
drivers/platform/x86/intel/pmc/core.h
··· 385 385 * @pmc_xram_read_bit: flag to indicate whether PMC XRAM shadow registers 386 386 * used to read MPHY PG and PLL status are available 387 387 * @mutex_lock: mutex to complete one transcation 388 - * @pc10_counter: PC10 residency counter 388 + * @pkgc_res_cnt: Array of PKGC residency counters 389 + * @num_of_pkgc: Number of PKGC 389 390 * @s0ix_counter: S0ix residency (step adjusted) 390 391 * @num_lpm_modes: Count of enabled modes 391 392 * @lpm_en_modes: Array of enabled modes from lowest to highest priority ··· 404 403 int pmc_xram_read_bit; 405 404 struct mutex lock; /* generic mutex lock for PMC Core */ 406 405 407 - u64 pc10_counter; 408 406 u64 s0ix_counter; 409 407 int num_lpm_modes; 410 408 int lpm_en_modes[LPM_MAX_NUM_MODES]; 411 409 void (*suspend)(struct pmc_dev *pmcdev); 412 410 int (*resume)(struct pmc_dev *pmcdev); 411 + 412 + u64 *pkgc_res_cnt; 413 + u8 num_of_pkgc; 413 414 414 415 bool has_die_c6; 415 416 u32 die_c6_offset;
+4 -34
drivers/platform/x86/intel/pmc/lnl.c
··· 13 13 14 14 #include "core.h" 15 15 16 - #define SOCM_LPM_REQ_GUID 0x11594920 17 - 18 - #define PMC_DEVID_SOCM 0xa87f 19 - 20 - static const u8 LNL_LPM_REG_INDEX[] = {0, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 20}; 21 - 22 - static struct pmc_info lnl_pmc_info_list[] = { 23 - { 24 - .guid = SOCM_LPM_REQ_GUID, 25 - .devid = PMC_DEVID_SOCM, 26 - .map = &lnl_socm_reg_map, 27 - }, 28 - {} 29 - }; 30 - 31 16 const struct pmc_bit_map lnl_ltr_show_map[] = { 32 17 {"SOUTHPORT_A", CNP_PMC_LTR_SPA}, 33 18 {"SOUTHPORT_B", CNP_PMC_LTR_SPB}, ··· 475 490 .lpm_sts = lnl_lpm_maps, 476 491 .lpm_status_offset = MTL_LPM_STATUS_OFFSET, 477 492 .lpm_live_status_offset = MTL_LPM_LIVE_STATUS_OFFSET, 478 - .lpm_reg_index = LNL_LPM_REG_INDEX, 479 493 }; 480 494 481 495 #define LNL_NPU_PCI_DEV 0x643e ··· 501 517 int lnl_core_init(struct pmc_dev *pmcdev) 502 518 { 503 519 int ret; 504 - int func = 2; 505 - bool ssram_init = true; 506 520 struct pmc *pmc = pmcdev->pmcs[PMC_IDX_SOC]; 507 521 508 522 lnl_d3_fixup(); 509 523 510 524 pmcdev->suspend = cnl_suspend; 511 525 pmcdev->resume = lnl_resume; 512 - pmcdev->regmap_list = lnl_pmc_info_list; 513 - ret = pmc_core_ssram_init(pmcdev, func); 514 526 515 - /* If regbase not assigned, set map and discover using legacy method */ 516 - if (ret) { 517 - ssram_init = false; 518 - pmc->map = &lnl_socm_reg_map; 519 - ret = get_primary_reg_base(pmc); 520 - if (ret) 521 - return ret; 522 - } 527 + pmc->map = &lnl_socm_reg_map; 528 + ret = get_primary_reg_base(pmc); 529 + if (ret) 530 + return ret; 523 531 524 532 pmc_core_get_low_power_modes(pmcdev); 525 - 526 - if (ssram_init) { 527 - ret = pmc_core_ssram_get_lpm_reqs(pmcdev); 528 - if (ret) 529 - return ret; 530 - } 531 533 532 534 return 0; 533 535 }
+2 -2
drivers/platform/x86/intel/speed_select_if/isst_tpmi_core.c
··· 462 462 struct tpmi_per_power_domain_info *power_domain_info; 463 463 struct isst_core_power core_power; 464 464 465 - if (disable_dynamic_sst_features()) 465 + if (copy_from_user(&core_power, argp, sizeof(core_power))) 466 466 return -EFAULT; 467 467 468 - if (copy_from_user(&core_power, argp, sizeof(core_power))) 468 + if (core_power.get_set && disable_dynamic_sst_features()) 469 469 return -EFAULT; 470 470 471 471 power_domain_info = get_instance(core_power.socket_id, core_power.power_domain_id);
+5 -4
drivers/platform/x86/intel/tpmi.c
··· 96 96 */ 97 97 struct intel_tpmi_pm_feature { 98 98 struct intel_tpmi_pfs_entry pfs_header; 99 - unsigned int vsec_offset; 99 + u64 vsec_offset; 100 100 struct intel_vsec_device *vsec_dev; 101 101 }; 102 102 ··· 376 376 read_blocked = feature_state.read_blocked ? 'Y' : 'N'; 377 377 write_blocked = feature_state.write_blocked ? 'Y' : 'N'; 378 378 } 379 - seq_printf(s, "0x%02x\t\t0x%02x\t\t0x%04x\t\t0x%04x\t\t0x%02x\t\t0x%08x\t%c\t%c\t\t%c\t\t%c\n", 379 + seq_printf(s, "0x%02x\t\t0x%02x\t\t0x%04x\t\t0x%04x\t\t0x%02x\t\t0x%016llx\t%c\t%c\t\t%c\t\t%c\n", 380 380 pfs->pfs_header.tpmi_id, pfs->pfs_header.num_entries, 381 381 pfs->pfs_header.entry_size, pfs->pfs_header.cap_offset, 382 382 pfs->pfs_header.attribute, pfs->vsec_offset, locked, disabled, ··· 395 395 struct intel_tpmi_pm_feature *pfs = s->private; 396 396 int count, ret = 0; 397 397 void __iomem *mem; 398 - u32 off, size; 398 + u32 size; 399 + u64 off; 399 400 u8 *buffer; 400 401 401 402 size = TPMI_GET_SINGLE_ENTRY_SIZE(pfs); ··· 412 411 mutex_lock(&tpmi_dev_lock); 413 412 414 413 for (count = 0; count < pfs->pfs_header.num_entries; ++count) { 415 - seq_printf(s, "TPMI Instance:%d offset:0x%x\n", count, off); 414 + seq_printf(s, "TPMI Instance:%d offset:0x%llx\n", count, off); 416 415 417 416 mem = ioremap(off, size); 418 417 if (!mem) {
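
The tpmi.c fix widens `vsec_offset` from `unsigned int` to `u64`: an MMIO offset above 4 GiB silently loses its high bits when stored in 32 bits. A sketch of the difference (the offset value below is made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t offset_stored_as_u32(uint64_t off)
{
	return (uint32_t)off;	/* old field width: high bits lost */
}

static uint64_t offset_stored_as_u64(uint64_t off)
{
	return off;		/* new field width: offset preserved */
}
```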
+1 -4
drivers/platform/x86/intel/vsec.c
··· 236 236 237 237 for ( ; *header; header++) { 238 238 ret = intel_vsec_add_dev(pdev, *header, info); 239 - if (ret) 240 - dev_info(&pdev->dev, "Could not add device for VSEC id %d\n", 241 - (*header)->id); 242 - else 239 + if (!ret) 243 240 have_devices = true; 244 241 } 245 242
+1
drivers/platform/x86/intel/wmi/sbl-fw-update.c
··· 131 131 .probe = intel_wmi_sbl_fw_update_probe, 132 132 .remove = intel_wmi_sbl_fw_update_remove, 133 133 .id_table = intel_wmi_sbl_id_table, 134 + .no_singleton = true, 134 135 }; 135 136 module_wmi_driver(intel_wmi_sbl_fw_update_driver); 136 137
+1
drivers/platform/x86/intel/wmi/thunderbolt.c
··· 63 63 .dev_groups = tbt_groups, 64 64 }, 65 65 .id_table = intel_wmi_thunderbolt_id_table, 66 + .no_singleton = true, 66 67 }; 67 68 68 69 module_wmi_driver(intel_wmi_thunderbolt_driver);
+1 -1
drivers/platform/x86/intel_scu_ipcutil.c
··· 22 22 23 23 static int major; 24 24 25 - struct intel_scu_ipc_dev *scu; 25 + static struct intel_scu_ipc_dev *scu; 26 26 static DEFINE_MUTEX(scu_lock); 27 27 28 28 /* IOCTL commands */
-1
drivers/platform/x86/intel_scu_pcidrv.c
··· 11 11 #include <linux/init.h> 12 12 #include <linux/pci.h> 13 13 14 - #include <asm/intel-mid.h> 15 14 #include <asm/intel_scu_ipc.h> 16 15 17 16 static int intel_scu_pci_probe(struct pci_dev *pdev,
-1
drivers/platform/x86/intel_scu_wdt.c
··· 13 13 14 14 #include <asm/cpu_device_id.h> 15 15 #include <asm/intel-family.h> 16 - #include <asm/intel-mid.h> 17 16 #include <asm/io_apic.h> 18 17 #include <asm/hw_irq.h> 19 18
+79
drivers/platform/x86/pmc_atom.c
··· 6 6 7 7 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 8 8 9 + #include <linux/acpi.h> 9 10 #include <linux/debugfs.h> 10 11 #include <linux/device.h> 11 12 #include <linux/dmi.h> ··· 18 17 #include <linux/platform_device.h> 19 18 #include <linux/pci.h> 20 19 #include <linux/seq_file.h> 20 + #include <linux/suspend.h> 21 21 22 22 struct pmc_bit_map { 23 23 const char *name; ··· 450 448 return 0; 451 449 } 452 450 451 + #ifdef CONFIG_SUSPEND 452 + static void pmc_dev_state_check(u32 sts, const struct pmc_bit_map *sts_map, 453 + u32 fd, const struct pmc_bit_map *fd_map, 454 + u32 sts_possible_false_pos) 455 + { 456 + int index; 457 + 458 + for (index = 0; sts_map[index].name; index++) { 459 + if (!(fd_map[index].bit_mask & fd) && 460 + !(sts_map[index].bit_mask & sts)) { 461 + if (sts_map[index].bit_mask & sts_possible_false_pos) 462 + pm_pr_dbg("%s is in D0 prior to s2idle\n", 463 + sts_map[index].name); 464 + else 465 + pr_err("%s is in D0 prior to s2idle\n", 466 + sts_map[index].name); 467 + } 468 + } 469 + } 470 + 471 + static void pmc_s2idle_check(void) 472 + { 473 + struct pmc_dev *pmc = &pmc_device; 474 + const struct pmc_reg_map *m = pmc->map; 475 + u32 func_dis, func_dis_2; 476 + u32 d3_sts_0, d3_sts_1; 477 + u32 false_pos_sts_0, false_pos_sts_1; 478 + int i; 479 + 480 + func_dis = pmc_reg_read(pmc, PMC_FUNC_DIS); 481 + func_dis_2 = pmc_reg_read(pmc, PMC_FUNC_DIS_2); 482 + d3_sts_0 = pmc_reg_read(pmc, PMC_D3_STS_0); 483 + d3_sts_1 = pmc_reg_read(pmc, PMC_D3_STS_1); 484 + 485 + /* 486 + * Some blocks are not used on lower-featured versions of the SoC and 487 + * always report D0, add these to false_pos mask to log at debug level. 
488 + */ 489 + if (m->d3_sts_1 == byt_d3_sts_1_map) { 490 + /* Bay Trail */ 491 + false_pos_sts_0 = BIT_GBE | BIT_SATA | BIT_PCIE_PORT0 | 492 + BIT_PCIE_PORT1 | BIT_PCIE_PORT2 | BIT_PCIE_PORT3 | 493 + BIT_LPSS2_F5_I2C5; 494 + false_pos_sts_1 = BIT_SMB | BIT_USH_SS_PHY | BIT_DFX; 495 + } else { 496 + /* Cherry Trail */ 497 + false_pos_sts_0 = BIT_GBE | BIT_SATA | BIT_LPSS2_F7_I2C7; 498 + false_pos_sts_1 = BIT_SMB | BIT_STS_ISH; 499 + } 500 + 501 + pmc_dev_state_check(d3_sts_0, m->d3_sts_0, func_dis, m->func_dis, false_pos_sts_0); 502 + pmc_dev_state_check(d3_sts_1, m->d3_sts_1, func_dis_2, m->func_dis_2, false_pos_sts_1); 503 + 504 + /* Forced-on PMC clocks prevent S0i3 */ 505 + for (i = 0; i < PMC_CLK_NUM; i++) { 506 + u32 ctl = pmc_reg_read(pmc, PMC_CLK_CTL_OFFSET + 4 * i); 507 + 508 + if ((ctl & PMC_MASK_CLK_CTL) != PMC_CLK_CTL_FORCE_ON) 509 + continue; 510 + 511 + pr_err("clock %d is ON prior to freeze (ctl 0x%08x)\n", i, ctl); 512 + } 513 + } 514 + 515 + static struct acpi_s2idle_dev_ops pmc_s2idle_ops = { 516 + .check = pmc_s2idle_check, 517 + }; 518 + 519 + static void pmc_s2idle_check_register(void) 520 + { 521 + acpi_register_lps0_dev(&pmc_s2idle_ops); 522 + } 523 + #else 524 + static void pmc_s2idle_check_register(void) {} 525 + #endif 526 + 453 527 static int pmc_setup_dev(struct pci_dev *pdev, const struct pci_device_id *ent) 454 528 { 455 529 struct pmc_dev *pmc = &pmc_device; ··· 563 485 dev_warn(&pdev->dev, "platform clocks register failed: %d\n", 564 486 ret); 565 487 488 + pmc_s2idle_check_register(); 566 489 pmc->init = true; 567 490 return ret; 568 491 }
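
The classification inside the new `pmc_dev_state_check()` can be sketched as a pure predicate: a device blocks low-power entry when it is neither function-disabled nor in D3, and the false-positive mask only downgrades the report from error to debug. The helper name and return codes below are ours:

```c
#include <assert.h>
#include <stdint.h>

/* Returns 0 when the device is fine (function-disabled or in D3),
 * 1 when it should be logged at debug level (known false positive on
 * this SoC), 2 when it should be reported as an error - mirroring the
 * pm_pr_dbg() vs pr_err() split in the patch.
 */
static int d0_blocker_class(uint32_t bit, uint32_t d3_sts, uint32_t func_dis,
			    uint32_t false_pos_mask)
{
	if ((func_dis & bit) || (d3_sts & bit))
		return 0;
	return (false_pos_mask & bit) ? 1 : 2;
}
```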
+1 -6
drivers/platform/x86/silicom-platform.c
··· 256 256 if (direction == GPIO_LINE_DIRECTION_IN) 257 257 return; 258 258 259 - if (value) 260 - silicom_mec_port_set(channel, 0); 261 - else if (value == 0) 262 - silicom_mec_port_set(channel, 1); 263 - else 264 - pr_err("Wrong argument value: %d\n", value); 259 + silicom_mec_port_set(channel, !value); 265 260 } 266 261 267 262 static int silicom_gpio_direction_output(struct gpio_chip *gc,
+1 -1
drivers/platform/x86/think-lmi.c
··· 195 195 [TLMI_LEVEL_MASTER] = "master", 196 196 }; 197 197 static struct think_lmi tlmi_priv; 198 - static struct class *fw_attr_class; 198 + static const struct class *fw_attr_class; 199 199 static DEFINE_MUTEX(tlmi_mutex); 200 200 201 201 /* Convert BIOS WMI error string to suitable error code */
+170 -83
drivers/platform/x86/thinkpad_acpi.c
··· 69 69 #include <linux/sysfs.h> 70 70 #include <linux/types.h> 71 71 #include <linux/uaccess.h> 72 + #include <linux/units.h> 72 73 #include <linux/workqueue.h> 73 74 74 75 #include <acpi/battery.h> ··· 167 166 TP_HKEY_EV_VOL_MUTE = 0x1017, /* Mixer output mute */ 168 167 TP_HKEY_EV_PRIVACYGUARD_TOGGLE = 0x130f, /* Toggle priv.guard on/off */ 169 168 TP_HKEY_EV_AMT_TOGGLE = 0x131a, /* Toggle AMT on/off */ 169 + TP_HKEY_EV_PROFILE_TOGGLE = 0x131f, /* Toggle platform profile */ 170 170 171 171 /* Reasons for waking up from S3/S4 */ 172 172 TP_HKEY_EV_WKUP_S3_UNDOCK = 0x2304, /* undock requested, S3 */ ··· 3733 3731 switch (hkey) { 3734 3732 case TP_HKEY_EV_PRIVACYGUARD_TOGGLE: 3735 3733 case TP_HKEY_EV_AMT_TOGGLE: 3734 + case TP_HKEY_EV_PROFILE_TOGGLE: 3736 3735 tpacpi_driver_event(hkey); 3737 3736 return true; 3738 3737 } ··· 6129 6126 TPACPI_THERMAL_ACPI_TMP07, /* Use ACPI TMP0-7 */ 6130 6127 TPACPI_THERMAL_ACPI_UPDT, /* Use ACPI TMP0-7 with UPDT */ 6131 6128 TPACPI_THERMAL_TPEC_8, /* Use ACPI EC regs, 8 sensors */ 6129 + TPACPI_THERMAL_TPEC_12, /* Use ACPI EC regs, 12 sensors */ 6132 6130 TPACPI_THERMAL_TPEC_16, /* Use ACPI EC regs, 16 sensors */ 6133 6131 }; 6134 6132 6135 6133 enum { /* TPACPI_THERMAL_TPEC_* */ 6136 6134 TP_EC_THERMAL_TMP0 = 0x78, /* ACPI EC regs TMP 0..7 */ 6137 6135 TP_EC_THERMAL_TMP8 = 0xC0, /* ACPI EC regs TMP 8..15 */ 6136 + TP_EC_THERMAL_TMP0_NS = 0xA8, /* ACPI EC Non-Standard regs TMP 0..7 */ 6137 + TP_EC_THERMAL_TMP8_NS = 0xB8, /* ACPI EC Non-standard regs TMP 8..11 */ 6138 6138 TP_EC_FUNCREV = 0xEF, /* ACPI EC Functional revision */ 6139 6139 TP_EC_THERMAL_TMP_NA = -128, /* ACPI EC sensor not available */ 6140 6140 ··· 6150 6144 s32 temp[TPACPI_MAX_THERMAL_SENSORS]; 6151 6145 }; 6152 6146 6147 + static const struct tpacpi_quirk thermal_quirk_table[] __initconst = { 6148 + /* Non-standard address for thermal registers on some ThinkPads */ 6149 + TPACPI_Q_LNV3('R', '1', 'F', true), /* L13 Yoga Gen 2 */ 6150 + TPACPI_Q_LNV3('N', '2', 
'U', true), /* X13 Yoga Gen 2*/ 6151 + TPACPI_Q_LNV3('R', '0', 'R', true), /* L380 */ 6152 + TPACPI_Q_LNV3('R', '1', '5', true), /* L13 Yoga Gen 1*/ 6153 + TPACPI_Q_LNV3('R', '1', '0', true), /* L390 */ 6154 + TPACPI_Q_LNV3('N', '2', 'L', true), /* X13 Yoga Gen 1*/ 6155 + TPACPI_Q_LNV3('R', '0', 'T', true), /* 11e Gen5 GL*/ 6156 + TPACPI_Q_LNV3('R', '1', 'D', true), /* 11e Gen5 GL-R*/ 6157 + TPACPI_Q_LNV3('R', '0', 'V', true), /* 11e Gen5 KL-Y*/ 6158 + }; 6159 + 6153 6160 static enum thermal_access_mode thermal_read_mode; 6154 6161 static bool thermal_use_labels; 6162 + static bool thermal_with_ns_address; /* Non-standard thermal reg address */ 6163 + 6164 + /* Function to check thermal read mode */ 6165 + static enum thermal_access_mode __init thermal_read_mode_check(void) 6166 + { 6167 + u8 t, ta1, ta2, ver = 0; 6168 + int i; 6169 + int acpi_tmp7; 6170 + 6171 + acpi_tmp7 = acpi_evalf(ec_handle, NULL, "TMP7", "qv"); 6172 + 6173 + if (thinkpad_id.ec_model) { 6174 + /* 6175 + * Direct EC access mode: sensors at registers 0x78-0x7F, 6176 + * 0xC0-0xC7. Registers return 0x00 for non-implemented, 6177 + * thermal sensors return 0x80 when not available. 6178 + * 6179 + * In some special cases (when Power Supply ID is 0xC2) 6180 + * above rule causes thermal control issues. Offset 0xEF 6181 + * determines EC version. 0xC0-0xC7 are not thermal registers 6182 + * in Ver 3. 
6183 + */ 6184 + if (!acpi_ec_read(TP_EC_FUNCREV, &ver)) 6185 + pr_warn("Thinkpad ACPI EC unable to access EC version\n"); 6186 + 6187 + /* Quirks to check non-standard EC */ 6188 + thermal_with_ns_address = tpacpi_check_quirks(thermal_quirk_table, 6189 + ARRAY_SIZE(thermal_quirk_table)); 6190 + 6191 + /* Support for Thinkpads with non-standard address */ 6192 + if (thermal_with_ns_address) { 6193 + pr_info("ECFW with non-standard thermal registers found\n"); 6194 + return TPACPI_THERMAL_TPEC_12; 6195 + } 6196 + 6197 + ta1 = ta2 = 0; 6198 + for (i = 0; i < 8; i++) { 6199 + if (acpi_ec_read(TP_EC_THERMAL_TMP0 + i, &t)) { 6200 + ta1 |= t; 6201 + } else { 6202 + ta1 = 0; 6203 + break; 6204 + } 6205 + if (ver < 3) { 6206 + if (acpi_ec_read(TP_EC_THERMAL_TMP8 + i, &t)) { 6207 + ta2 |= t; 6208 + } else { 6209 + ta1 = 0; 6210 + break; 6211 + } 6212 + } 6213 + } 6214 + 6215 + if (ta1 == 0) { 6216 + /* This is sheer paranoia, but we handle it anyway */ 6217 + if (acpi_tmp7) { 6218 + pr_err("ThinkPad ACPI EC access misbehaving, falling back to ACPI TMPx access mode\n"); 6219 + return TPACPI_THERMAL_ACPI_TMP07; 6220 + } 6221 + pr_err("ThinkPad ACPI EC access misbehaving, disabling thermal sensors access\n"); 6222 + return TPACPI_THERMAL_NONE; 6223 + } 6224 + 6225 + if (ver >= 3) { 6226 + thermal_use_labels = true; 6227 + return TPACPI_THERMAL_TPEC_8; 6228 + } 6229 + 6230 + return (ta2 != 0) ? 
TPACPI_THERMAL_TPEC_16 : TPACPI_THERMAL_TPEC_8; 6231 + } 6232 + 6233 + if (acpi_tmp7) { 6234 + if (tpacpi_is_ibm() && acpi_evalf(ec_handle, NULL, "UPDT", "qv")) { 6235 + /* 600e/x, 770e, 770x */ 6236 + return TPACPI_THERMAL_ACPI_UPDT; 6237 + } 6238 + /* IBM/LENOVO DSDT EC.TMPx access, max 8 sensors */ 6239 + return TPACPI_THERMAL_ACPI_TMP07; 6240 + } 6241 + 6242 + /* temperatures not supported on 570, G4x, R30, R31, R32 */ 6243 + return TPACPI_THERMAL_NONE; 6244 + } 6155 6245 6156 6246 /* idx is zero-based */ 6157 6247 static int thermal_get_sensor(int idx, s32 *value) ··· 6275 6173 return 0; 6276 6174 } 6277 6175 break; 6176 + 6177 + /* The Non-standard EC uses 12 Thermal areas */ 6178 + case TPACPI_THERMAL_TPEC_12: 6179 + if (idx >= 12) 6180 + return -EINVAL; 6181 + 6182 + t = idx < 8 ? TP_EC_THERMAL_TMP0_NS + idx : 6183 + TP_EC_THERMAL_TMP8_NS + (idx - 8); 6184 + 6185 + if (!acpi_ec_read(t, &tmp)) 6186 + return -EIO; 6187 + 6188 + *value = tmp * MILLIDEGREE_PER_DEGREE; 6189 + return 0; 6278 6190 6279 6191 case TPACPI_THERMAL_ACPI_UPDT: 6280 6192 if (idx <= 7) { ··· 6324 6208 6325 6209 static int thermal_get_sensors(struct ibm_thermal_sensors_struct *s) 6326 6210 { 6327 - int res, i; 6328 - int n; 6329 - 6330 - n = 8; 6331 - i = 0; 6211 + int res, i, n; 6332 6212 6333 6213 if (!s) 6334 6214 return -EINVAL; 6335 6215 6336 6216 if (thermal_read_mode == TPACPI_THERMAL_TPEC_16) 6337 6217 n = 16; 6218 + else if (thermal_read_mode == TPACPI_THERMAL_TPEC_12) 6219 + n = 12; 6220 + else 6221 + n = 8; 6338 6222 6339 6223 for (i = 0 ; i < n; i++) { 6340 6224 res = thermal_get_sensor(i, &s->temp[i]); ··· 6433 6317 NULL 6434 6318 }; 6435 6319 6320 + #define to_dev_attr(_attr) container_of(_attr, struct device_attribute, attr) 6321 + 6436 6322 static umode_t thermal_attr_is_visible(struct kobject *kobj, 6437 6323 struct attribute *attr, int n) 6438 6324 { 6439 - if (thermal_read_mode == TPACPI_THERMAL_NONE) 6325 + struct device_attribute *dev_attr = to_dev_attr(attr); 6326 + 
struct sensor_device_attribute *sensor_attr = 6327 + to_sensor_dev_attr(dev_attr); 6328 + 6329 + int idx = sensor_attr->index; 6330 + 6331 + switch (thermal_read_mode) { 6332 + case TPACPI_THERMAL_NONE: 6440 6333 return 0; 6441 6334 6442 - if (attr == THERMAL_ATTRS(8) || attr == THERMAL_ATTRS(9) || 6443 - attr == THERMAL_ATTRS(10) || attr == THERMAL_ATTRS(11) || 6444 - attr == THERMAL_ATTRS(12) || attr == THERMAL_ATTRS(13) || 6445 - attr == THERMAL_ATTRS(14) || attr == THERMAL_ATTRS(15)) { 6446 - if (thermal_read_mode != TPACPI_THERMAL_TPEC_16) 6335 + case TPACPI_THERMAL_ACPI_TMP07: 6336 + case TPACPI_THERMAL_ACPI_UPDT: 6337 + case TPACPI_THERMAL_TPEC_8: 6338 + if (idx >= 8) 6447 6339 return 0; 6340 + break; 6341 + 6342 + case TPACPI_THERMAL_TPEC_12: 6343 + if (idx >= 12) 6344 + return 0; 6345 + break; 6346 + 6347 + default: 6348 + break; 6349 + 6448 6350 } 6449 6351 6450 6352 return attr->mode; ··· 6509 6375 6510 6376 static int __init thermal_init(struct ibm_init_struct *iibm) 6511 6377 { 6512 - u8 t, ta1, ta2, ver = 0; 6513 - int i; 6514 - int acpi_tmp7; 6515 - 6516 6378 vdbg_printk(TPACPI_DBG_INIT, "initializing thermal subdriver\n"); 6517 6379 6518 - acpi_tmp7 = acpi_evalf(ec_handle, NULL, "TMP7", "qv"); 6519 - 6520 - if (thinkpad_id.ec_model) { 6521 - /* 6522 - * Direct EC access mode: sensors at registers 6523 - * 0x78-0x7F, 0xC0-0xC7. Registers return 0x00 for 6524 - * non-implemented, thermal sensors return 0x80 when 6525 - * not available 6526 - * The above rule is unfortunately flawed. This has been seen with 6527 - * 0xC2 (power supply ID) causing thermal control problems. 6528 - * The EC version can be determined by offset 0xEF and at least for 6529 - * version 3 the Lenovo firmware team confirmed that registers 0xC0-0xC7 6530 - * are not thermal registers. 
6531 - */ 6532 - if (!acpi_ec_read(TP_EC_FUNCREV, &ver)) 6533 - pr_warn("Thinkpad ACPI EC unable to access EC version\n"); 6534 - 6535 - ta1 = ta2 = 0; 6536 - for (i = 0; i < 8; i++) { 6537 - if (acpi_ec_read(TP_EC_THERMAL_TMP0 + i, &t)) { 6538 - ta1 |= t; 6539 - } else { 6540 - ta1 = 0; 6541 - break; 6542 - } 6543 - if (ver < 3) { 6544 - if (acpi_ec_read(TP_EC_THERMAL_TMP8 + i, &t)) { 6545 - ta2 |= t; 6546 - } else { 6547 - ta1 = 0; 6548 - break; 6549 - } 6550 - } 6551 - } 6552 - if (ta1 == 0) { 6553 - /* This is sheer paranoia, but we handle it anyway */ 6554 - if (acpi_tmp7) { 6555 - pr_err("ThinkPad ACPI EC access misbehaving, falling back to ACPI TMPx access mode\n"); 6556 - thermal_read_mode = TPACPI_THERMAL_ACPI_TMP07; 6557 - } else { 6558 - pr_err("ThinkPad ACPI EC access misbehaving, disabling thermal sensors access\n"); 6559 - thermal_read_mode = TPACPI_THERMAL_NONE; 6560 - } 6561 - } else { 6562 - if (ver >= 3) { 6563 - thermal_read_mode = TPACPI_THERMAL_TPEC_8; 6564 - thermal_use_labels = true; 6565 - } else { 6566 - thermal_read_mode = 6567 - (ta2 != 0) ? 
6568 - TPACPI_THERMAL_TPEC_16 : TPACPI_THERMAL_TPEC_8; 6569 - } 6570 - } 6571 - } else if (acpi_tmp7) { 6572 - if (tpacpi_is_ibm() && 6573 - acpi_evalf(ec_handle, NULL, "UPDT", "qv")) { 6574 - /* 600e/x, 770e, 770x */ 6575 - thermal_read_mode = TPACPI_THERMAL_ACPI_UPDT; 6576 - } else { 6577 - /* IBM/LENOVO DSDT EC.TMPx access, max 8 sensors */ 6578 - thermal_read_mode = TPACPI_THERMAL_ACPI_TMP07; 6579 - } 6580 - } else { 6581 - /* temperatures not supported on 570, G4x, R30, R31, R32 */ 6582 - thermal_read_mode = TPACPI_THERMAL_NONE; 6583 - } 6380 + thermal_read_mode = thermal_read_mode_check(); 6584 6381 6585 6382 vdbg_printk(TPACPI_DBG_INIT, "thermal is %s, mode %d\n", 6586 6383 str_supported(thermal_read_mode != TPACPI_THERMAL_NONE), ··· 8832 8767 TPACPI_Q_LNV3('N', '3', '7', TPACPI_FAN_2CTL), /* T15g (2nd gen) */ 8833 8768 TPACPI_Q_LNV3('R', '1', 'F', TPACPI_FAN_NS), /* L13 Yoga Gen 2 */ 8834 8769 TPACPI_Q_LNV3('N', '2', 'U', TPACPI_FAN_NS), /* X13 Yoga Gen 2*/ 8770 + TPACPI_Q_LNV3('R', '0', 'R', TPACPI_FAN_NS), /* L380 */ 8771 + TPACPI_Q_LNV3('R', '1', '5', TPACPI_FAN_NS), /* L13 Yoga Gen 1 */ 8772 + TPACPI_Q_LNV3('R', '1', '0', TPACPI_FAN_NS), /* L390 */ 8773 + TPACPI_Q_LNV3('N', '2', 'L', TPACPI_FAN_NS), /* X13 Yoga Gen 1 */ 8774 + TPACPI_Q_LNV3('R', '0', 'T', TPACPI_FAN_NS), /* 11e Gen5 GL */ 8775 + TPACPI_Q_LNV3('R', '1', 'D', TPACPI_FAN_NS), /* 11e Gen5 GL-R */ 8776 + TPACPI_Q_LNV3('R', '0', 'V', TPACPI_FAN_NS), /* 11e Gen5 KL-Y */ 8835 8777 TPACPI_Q_LNV3('N', '1', 'O', TPACPI_FAN_NOFAN), /* X1 Tablet (2nd gen) */ 8836 8778 }; 8837 8779 ··· 9357 9285 continue; 9358 9286 } 9359 9287 9360 - mute_led_cdev[i].brightness = ledtrig_audio_get(i); 9361 9288 err = led_classdev_register(&tpacpi_pdev->dev, &mute_led_cdev[i]); 9362 9289 if (err < 0) { 9363 9290 while (i--) ··· 11190 11119 else 11191 11120 dytc_control_amt(!dytc_amt_active); 11192 11121 } 11193 - 11122 + if (hkey_event == TP_HKEY_EV_PROFILE_TOGGLE) { 11123 + switch (dytc_current_profile) { 11124 + 
case PLATFORM_PROFILE_LOW_POWER: 11125 + dytc_profile_set(NULL, PLATFORM_PROFILE_BALANCED); 11126 + break; 11127 + case PLATFORM_PROFILE_BALANCED: 11128 + dytc_profile_set(NULL, PLATFORM_PROFILE_PERFORMANCE); 11129 + break; 11130 + case PLATFORM_PROFILE_PERFORMANCE: 11131 + dytc_profile_set(NULL, PLATFORM_PROFILE_LOW_POWER); 11132 + break; 11133 + default: 11134 + pr_warn("Profile HKEY unexpected profile %d\n", dytc_current_profile); 11135 + } 11136 + /* Notify user space the profile changed */ 11137 + platform_profile_notify(); 11138 + } 11194 11139 } 11195 11140 11196 11141 static void hotkey_driver_event(const unsigned int scancode)
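The TP_HKEY_EV_PROFILE_TOGGLE handler above rotates through the three DYTC platform profiles. A minimal user-space sketch of that same rotation (the enum and function names here are illustrative stand-ins, not the kernel's):

```c
#include <assert.h>

/* Stand-ins for the kernel's PLATFORM_PROFILE_* values (illustrative only) */
enum profile { PROFILE_LOW_POWER, PROFILE_BALANCED, PROFILE_PERFORMANCE };

/* Same cycle as the hotkey handler:
 * low-power -> balanced -> performance -> low-power */
static enum profile profile_next(enum profile p)
{
	switch (p) {
	case PROFILE_LOW_POWER:
		return PROFILE_BALANCED;
	case PROFILE_BALANCED:
		return PROFILE_PERFORMANCE;
	case PROFILE_PERFORMANCE:
		return PROFILE_LOW_POWER;
	}
	return p; /* unexpected value: leave the profile unchanged */
}
```

After each step the real handler calls platform_profile_notify() so user space sees the change.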
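The new TPACPI_THERMAL_TPEC_12 mode in the thinkpad_acpi hunks above splits its 12 sensors across two EC register banks. A sketch of the index-to-register mapping, with placeholder base addresses (the real TP_EC_THERMAL_TMP*_NS values are defined in the driver; only the split at index 8 is taken from the diff):

```c
#include <assert.h>

/* Placeholder bases for the non-standard ("NS") EC thermal banks */
#define TP_EC_THERMAL_TMP0_NS 0xa8
#define TP_EC_THERMAL_TMP8_NS 0xb8

/* TPACPI_THERMAL_TPEC_12: sensors 0..7 map to TMP0_NS + idx,
 * sensors 8..11 to TMP8_NS + (idx - 8); out of range fails. */
static int tpec12_reg(int idx)
{
	if (idx < 0 || idx >= 12)
		return -1; /* stands in for -EINVAL */
	return idx < 8 ? TP_EC_THERMAL_TMP0_NS + idx
		       : TP_EC_THERMAL_TMP8_NS + (idx - 8);
}
```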
+9
drivers/platform/x86/touchscreen_dmi.c
··· 1218 1218 }, 1219 1219 }, 1220 1220 { 1221 + /* Chuwi Vi8 dual-boot (CWI506) */ 1222 + .driver_data = (void *)&chuwi_vi8_data, 1223 + .matches = { 1224 + DMI_MATCH(DMI_SYS_VENDOR, "Insyde"), 1225 + DMI_MATCH(DMI_PRODUCT_NAME, "i86"), 1226 + DMI_MATCH(DMI_BIOS_VERSION, "CHUWI2.D86JHBNR02"), 1227 + }, 1228 + }, 1229 + { 1221 1230 /* Chuwi Vi8 Plus (CWI519) */ 1222 1231 .driver_data = (void *)&chuwi_vi8_plus_data, 1223 1232 .matches = {
+1
drivers/platform/x86/wmi-bmof.c
··· 94 94 .probe = wmi_bmof_probe, 95 95 .remove = wmi_bmof_remove, 96 96 .id_table = wmi_bmof_id_table, 97 + .no_singleton = true, 97 98 }; 98 99 99 100 module_wmi_driver(wmi_bmof_driver);
+126 -100
drivers/platform/x86/wmi.c
··· 57 57 58 58 enum { /* wmi_block flags */ 59 59 WMI_READ_TAKES_NO_ARGS, 60 + WMI_GUID_DUPLICATED, 61 + WMI_NO_EVENT_DATA, 60 62 }; 61 63 62 64 struct wmi_block { ··· 89 87 { } 90 88 }; 91 89 MODULE_DEVICE_TABLE(acpi, wmi_device_ids); 92 - 93 - /* allow duplicate GUIDs as these device drivers use struct wmi_driver */ 94 - static const char * const allow_duplicates[] = { 95 - "05901221-D566-11D1-B2F0-00A0C9062910", /* wmi-bmof */ 96 - "8A42EA14-4F2A-FD45-6422-0087F7A7E608", /* dell-wmi-ddv */ 97 - "44FADEB1-B204-40F2-8581-394BBDC1B651", /* intel-wmi-sbl-fw-update */ 98 - "86CCFD48-205E-4A77-9C48-2021CBEDE341", /* intel-wmi-thunderbolt */ 99 - "F1DDEE52-063C-4784-A11E-8A06684B9B01", /* dell-smm-hwmon */ 100 - NULL 101 - }; 102 90 103 91 #define dev_to_wblock(__dev) container_of_const(__dev, struct wmi_block, dev.dev) 104 92 #define dev_to_wdev(__dev) container_of_const(__dev, struct wmi_device, dev) ··· 122 130 id++; 123 131 } 124 132 return NULL; 125 - } 126 - 127 - static int get_subobj_info(acpi_handle handle, const char *pathname, 128 - struct acpi_device_info **info) 129 - { 130 - acpi_handle subobj_handle; 131 - acpi_status status; 132 - 133 - status = acpi_get_handle(handle, pathname, &subobj_handle); 134 - if (status == AE_NOT_FOUND) 135 - return -ENOENT; 136 - 137 - if (ACPI_FAILURE(status)) 138 - return -EIO; 139 - 140 - status = acpi_get_object_info(subobj_handle, info); 141 - if (ACPI_FAILURE(status)) 142 - return -EIO; 143 - 144 - return 0; 145 133 } 146 134 147 135 static acpi_status wmi_method_enable(struct wmi_block *wblock, bool enable) ··· 187 215 struct wmi_block *wblock = dev_to_wblock(dev); 188 216 const guid_t *guid = data; 189 217 218 + /* Legacy GUID-based functions are restricted to only see 219 + * a single WMI device for each GUID. 
220 + */ 221 + if (test_bit(WMI_GUID_DUPLICATED, &wblock->flags)) 222 + return 0; 223 + 190 224 if (guid_equal(guid, &wblock->gblock.guid)) 191 225 return 1; 192 226 ··· 204 226 struct wmi_block *wblock = dev_to_wblock(dev); 205 227 const u32 *notify_id = data; 206 228 229 + /* Legacy GUID-based functions are restricted to only see 230 + * a single WMI device for each GUID. 231 + */ 232 + if (test_bit(WMI_GUID_DUPLICATED, &wblock->flags)) 233 + return 0; 234 + 207 235 if (wblock->gblock.flags & ACPI_WMI_EVENT && wblock->gblock.notify_id == *notify_id) 208 236 return 1; 209 237 210 238 return 0; 211 239 } 212 240 213 - static struct bus_type wmi_bus_type; 241 + static const struct bus_type wmi_bus_type; 214 242 215 243 static struct wmi_device *wmi_find_device_by_guid(const char *guid_string) 216 244 { ··· 300 316 * @guid_string: 36 char string of the form fa50ff2b-f2e8-45de-83fa-65417f2f49ba 301 317 * @instance: Instance index 302 318 * @method_id: Method ID to call 303 - * @in: Buffer containing input for the method call 319 + * @in: Mandatory buffer containing input for the method call 304 320 * @out: Empty buffer to return the method results 305 321 * 306 322 * Call an ACPI-WMI method, the caller must free @out. ··· 330 346 * @wdev: A wmi bus device from a driver 331 347 * @instance: Instance index 332 348 * @method_id: Method ID to call 333 - * @in: Buffer containing input for the method call 349 + * @in: Mandatory buffer containing input for the method call 334 350 * @out: Empty buffer to return the method results 335 351 * 336 352 * Call an ACPI-WMI method, the caller must free @out. 
··· 351 367 block = &wblock->gblock; 352 368 handle = wblock->acpi_device->handle; 353 369 370 + if (!in) 371 + return AE_BAD_DATA; 372 + 354 373 if (!(block->flags & ACPI_WMI_METHOD)) 355 374 return AE_BAD_DATA; 356 375 357 376 if (block->instance_count <= instance) 358 377 return AE_BAD_PARAMETER; 359 378 360 - input.count = 2; 379 + input.count = 3; 361 380 input.pointer = params; 381 + 362 382 params[0].type = ACPI_TYPE_INTEGER; 363 383 params[0].integer.value = instance; 364 384 params[1].type = ACPI_TYPE_INTEGER; 365 385 params[1].integer.value = method_id; 366 - 367 - if (in) { 368 - input.count = 3; 369 - 370 - params[2].type = get_param_acpi_type(wblock); 371 - params[2].buffer.length = in->length; 372 - params[2].buffer.pointer = in->pointer; 373 - } 386 + params[2].type = get_param_acpi_type(wblock); 387 + params[2].buffer.length = in->length; 388 + params[2].buffer.pointer = in->pointer; 374 389 375 390 get_acpi_method_name(wblock, 'M', method); 376 391 ··· 873 890 struct wmi_driver *wdriver = drv_to_wdrv(dev->driver); 874 891 int ret = 0; 875 892 893 + /* Some older WMI drivers will break if instantiated multiple times, 894 + * so they are blocked from probing WMI devices with a duplicated GUID. 895 + * 896 + * New WMI drivers should support being instantiated multiple times. 
897 + */ 898 + if (test_bit(WMI_GUID_DUPLICATED, &wblock->flags) && !wdriver->no_singleton) { 899 + dev_warn(dev, "Legacy driver %s cannot be instantiated multiple times\n", 900 + dev->driver->name); 901 + 902 + return -ENODEV; 903 + } 904 + 905 + if (wdriver->notify) { 906 + if (test_bit(WMI_NO_EVENT_DATA, &wblock->flags) && !wdriver->no_notify_data) 907 + return -ENODEV; 908 + } 909 + 876 910 if (ACPI_FAILURE(wmi_method_enable(wblock, true))) 877 911 dev_warn(dev, "failed to enable device -- probing anyway\n"); 878 912 ··· 931 931 .name = "wmi_bus", 932 932 }; 933 933 934 - static struct bus_type wmi_bus_type = { 934 + static const struct bus_type wmi_bus_type = { 935 935 .name = "wmi", 936 936 .dev_groups = wmi_groups, 937 937 .match = wmi_dev_match, ··· 979 979 struct wmi_block *wblock, 980 980 struct acpi_device *device) 981 981 { 982 - struct acpi_device_info *info; 983 982 char method[WMI_ACPI_METHOD_NAME_SIZE]; 984 - int result; 983 + struct acpi_device_info *info; 984 + acpi_handle method_handle; 985 + acpi_status status; 985 986 uint count; 986 987 987 988 if (wblock->gblock.flags & ACPI_WMI_EVENT) { ··· 991 990 } 992 991 993 992 if (wblock->gblock.flags & ACPI_WMI_METHOD) { 993 + get_acpi_method_name(wblock, 'M', method); 994 + if (!acpi_has_method(device->handle, method)) { 995 + dev_warn(wmi_bus_dev, 996 + FW_BUG "%s method block execution control method not found\n", 997 + method); 998 + 999 + return -ENXIO; 1000 + } 1001 + 994 1002 wblock->dev.dev.type = &wmi_type_method; 995 1003 goto out_init; 996 1004 } ··· 1010 1000 * we ignore this data block. 
1011 1001 */ 1012 1002 get_acpi_method_name(wblock, 'Q', method); 1013 - result = get_subobj_info(device->handle, method, &info); 1014 - 1015 - if (result) { 1003 + status = acpi_get_handle(device->handle, method, &method_handle); 1004 + if (ACPI_FAILURE(status)) { 1016 1005 dev_warn(wmi_bus_dev, 1017 - "%s data block query control method not found\n", 1006 + FW_BUG "%s data block query control method not found\n", 1018 1007 method); 1019 - return result; 1008 + 1009 + return -ENXIO; 1020 1010 } 1011 + 1012 + status = acpi_get_object_info(method_handle, &info); 1013 + if (ACPI_FAILURE(status)) 1014 + return -EIO; 1021 1015 1022 1016 wblock->dev.dev.type = &wmi_type_data; 1023 1017 ··· 1051 1037 wblock->dev.dev.parent = wmi_bus_dev; 1052 1038 1053 1039 count = guid_count(&wblock->gblock.guid); 1054 - if (count) 1040 + if (count) { 1055 1041 dev_set_name(&wblock->dev.dev, "%pUL-%d", &wblock->gblock.guid, count); 1056 - else 1042 + set_bit(WMI_GUID_DUPLICATED, &wblock->flags); 1043 + } else { 1057 1044 dev_set_name(&wblock->dev.dev, "%pUL", &wblock->gblock.guid); 1045 + } 1058 1046 1059 1047 device_initialize(&wblock->dev.dev); 1060 1048 ··· 1083 1067 return device_add(&wdev->dev); 1084 1068 } 1085 1069 1086 - static bool guid_already_parsed_for_legacy(struct acpi_device *device, const guid_t *guid) 1087 - { 1088 - struct wmi_block *wblock; 1089 - 1090 - list_for_each_entry(wblock, &wmi_block_list, list) { 1091 - /* skip warning and register if we know the driver will use struct wmi_driver */ 1092 - for (int i = 0; allow_duplicates[i] != NULL; i++) { 1093 - if (guid_parse_and_compare(allow_duplicates[i], guid)) 1094 - return false; 1095 - } 1096 - if (guid_equal(&wblock->gblock.guid, guid)) { 1097 - /* 1098 - * Because we historically didn't track the relationship 1099 - * between GUIDs and ACPI nodes, we don't know whether 1100 - * we need to suppress GUIDs that are unique on a 1101 - * given node but duplicated across nodes. 
1102 - */ 1103 - dev_warn(&device->dev, "duplicate WMI GUID %pUL (first instance was on %s)\n", 1104 - guid, dev_name(&wblock->acpi_device->dev)); 1105 - return true; 1106 - } 1107 - } 1108 - 1109 - return false; 1110 - } 1111 - 1112 1070 /* 1113 1071 * Parse the _WDG method for the GUID data blocks 1114 1072 */ ··· 1091 1101 struct acpi_device *device = ACPI_COMPANION(&pdev->dev); 1092 1102 struct acpi_buffer out = {ACPI_ALLOCATE_BUFFER, NULL}; 1093 1103 const struct guid_block *gblock; 1104 + bool event_data_available; 1094 1105 struct wmi_block *wblock; 1095 1106 union acpi_object *obj; 1096 1107 acpi_status status; ··· 1111 1120 return -ENXIO; 1112 1121 } 1113 1122 1123 + event_data_available = acpi_has_method(device->handle, "_WED"); 1114 1124 gblock = (const struct guid_block *)obj->buffer.pointer; 1115 1125 total = obj->buffer.length / sizeof(struct guid_block); 1116 1126 ··· 1121 1129 continue; 1122 1130 } 1123 1131 1124 - if (guid_already_parsed_for_legacy(device, &gblock[i].guid)) 1125 - continue; 1126 - 1127 1132 wblock = kzalloc(sizeof(*wblock), GFP_KERNEL); 1128 - if (!wblock) { 1129 - dev_err(wmi_bus_dev, "Failed to allocate %pUL\n", &gblock[i].guid); 1133 + if (!wblock) 1130 1134 continue; 1131 - } 1132 1135 1133 1136 wblock->acpi_device = device; 1134 1137 wblock->gblock = gblock[i]; 1138 + if (gblock[i].flags & ACPI_WMI_EVENT && !event_data_available) 1139 + set_bit(WMI_NO_EVENT_DATA, &wblock->flags); 1135 1140 1136 1141 retval = wmi_create_device(wmi_bus_dev, wblock, device); 1137 1142 if (retval) { ··· 1194 1205 } 1195 1206 } 1196 1207 1197 - static void wmi_notify_driver(struct wmi_block *wblock) 1208 + static int wmi_get_notify_data(struct wmi_block *wblock, union acpi_object **obj) 1198 1209 { 1199 - struct wmi_driver *driver = drv_to_wdrv(wblock->dev.dev.driver); 1200 1210 struct acpi_buffer data = { ACPI_ALLOCATE_BUFFER, NULL }; 1201 1211 acpi_status status; 1202 1212 1203 - if (!driver->no_notify_data) { 1204 - status = 
get_event_data(wblock, &data); 1205 - if (ACPI_FAILURE(status)) { 1206 - dev_warn(&wblock->dev.dev, "Failed to get event data\n"); 1207 - return; 1208 - } 1213 + if (test_bit(WMI_NO_EVENT_DATA, &wblock->flags)) { 1214 + *obj = NULL; 1215 + return 0; 1216 + } 1217 + 1218 + status = get_event_data(wblock, &data); 1219 + if (ACPI_FAILURE(status)) { 1220 + dev_warn(&wblock->dev.dev, "Failed to get event data\n"); 1221 + return -EIO; 1222 + } 1223 + 1224 + *obj = data.pointer; 1225 + 1226 + return 0; 1227 + } 1228 + 1229 + static void wmi_notify_driver(struct wmi_block *wblock, union acpi_object *obj) 1230 + { 1231 + struct wmi_driver *driver = drv_to_wdrv(wblock->dev.dev.driver); 1232 + 1233 + if (!obj && !driver->no_notify_data) { 1234 + dev_warn(&wblock->dev.dev, "Event contains no event data\n"); 1235 + return; 1209 1236 } 1210 1237 1211 1238 if (driver->notify) 1212 - driver->notify(&wblock->dev, data.pointer); 1213 - 1214 - kfree(data.pointer); 1239 + driver->notify(&wblock->dev, obj); 1215 1240 } 1216 1241 1217 1242 static int wmi_notify_device(struct device *dev, void *data) 1218 1243 { 1219 1244 struct wmi_block *wblock = dev_to_wblock(dev); 1245 + union acpi_object *obj; 1220 1246 u32 *event = data; 1247 + int ret; 1221 1248 1222 1249 if (!(wblock->gblock.flags & ACPI_WMI_EVENT && wblock->gblock.notify_id == *event)) 1223 1250 return 0; ··· 1243 1238 * Because of this the WMI driver notify handler takes precedence. 
1244 1239 */ 1245 1240 if (wblock->dev.dev.driver && wblock->driver_ready) { 1246 - wmi_notify_driver(wblock); 1241 + ret = wmi_get_notify_data(wblock, &obj); 1242 + if (ret >= 0) { 1243 + wmi_notify_driver(wblock, obj); 1244 + kfree(obj); 1245 + } 1247 1246 } else { 1248 - if (wblock->handler) 1247 + if (wblock->handler) { 1249 1248 wblock->handler(*event, wblock->handler_data); 1249 + } else { 1250 + /* The ACPI WMI specification says that _WED should be 1251 + * evaluated every time a notification is received, even 1252 + * if no consumers are present. 1253 + * 1254 + * Some firmware implementations actually depend on this 1255 + * by using a queue for events which will fill up if the 1256 + * WMI driver core stops evaluating _WED due to missing 1257 + * WMI event consumers. 1258 + * 1259 + * Because of this we need this seemingly useless call to 1260 + * wmi_get_notify_data() which in turn evaluates _WED. 1261 + */ 1262 + ret = wmi_get_notify_data(wblock, &obj); 1263 + if (ret >= 0) 1264 + kfree(obj); 1265 + } 1266 + 1250 1267 } 1251 1268 up_read(&wblock->notify_lock); 1252 1269 1253 - acpi_bus_generate_netlink_event(wblock->acpi_device->pnp.device_class, 1254 - dev_name(&wblock->dev.dev), *event, 0); 1270 + acpi_bus_generate_netlink_event("wmi", acpi_dev_name(wblock->acpi_device), *event, 0); 1255 1271 1256 1272 return -EBUSY; 1257 1273 } ··· 1373 1347 1374 1348 error = parse_wdg(wmi_bus_dev, device); 1375 1349 if (error) { 1376 - pr_err("Failed to parse WDG method\n"); 1350 + dev_err(&device->dev, "Failed to parse _WDG method\n"); 1377 1351 return error; 1378 1352 } 1379 1353
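After the wmi.c changes above, the notify path evaluates _WED even when no consumer is bound, so firmware-side event queues keep draining. A schematic of the dispatch decision in wmi_notify_device() (the enum, types, and names here are illustrative stand-ins, not the kernel structs):

```c
#include <assert.h>
#include <stdbool.h>

enum notify_action {
	NOTIFY_DRIVER,        /* _WED data handed to the driver's ->notify() */
	CALL_LEGACY_HANDLER,  /* legacy GUID-based handler path, no _WED data */
	DRAIN_WED_ONLY,       /* evaluate _WED anyway and discard the result */
};

/* Mirrors the branch order in wmi_notify_device(): a bound, ready driver
 * wins; otherwise the legacy handler; otherwise _WED is still evaluated
 * so firmware event queues do not fill up. */
static enum notify_action dispatch(bool driver_ready, bool legacy_handler)
{
	if (driver_ready)
		return NOTIFY_DRIVER;
	if (legacy_handler)
		return CALL_LEGACY_HANDLER;
	return DRAIN_WED_ONLY;
}
```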
+19 -6
include/linux/platform_data/x86/pmc_atom.h
··· 43 43 BIT_ORED_DEDICATED_IRQ_GPSC | \ 44 44 BIT_SHARED_IRQ_GPSS) 45 45 46 + /* External clk generator settings */ 47 + #define PMC_CLK_CTL_OFFSET 0x60 48 + #define PMC_CLK_CTL_SIZE 4 49 + #define PMC_CLK_NUM 6 50 + #define PMC_CLK_CTL_GATED_ON_D3 0x0 51 + #define PMC_CLK_CTL_FORCE_ON 0x1 52 + #define PMC_CLK_CTL_FORCE_OFF 0x2 53 + #define PMC_CLK_CTL_RESERVED 0x3 54 + #define PMC_MASK_CLK_CTL GENMASK(1, 0) 55 + #define PMC_MASK_CLK_FREQ BIT(2) 56 + #define PMC_CLK_FREQ_XTAL (0 << 2) /* 25 MHz */ 57 + #define PMC_CLK_FREQ_PLL (1 << 2) /* 19.2 MHz */ 58 + 46 59 /* The timers accumulate time spent in sleep state */ 47 60 #define PMC_S0IR_TMR 0x80 48 61 #define PMC_S0I1_TMR 0x84 ··· 117 104 #define BIT_SCC_SDIO BIT(9) 118 105 #define BIT_SCC_SDCARD BIT(10) 119 106 #define BIT_SCC_MIPI BIT(11) 120 - #define BIT_HDA BIT(12) 107 + #define BIT_HDA BIT(12) /* CHT datasheet: reserved */ 121 108 #define BIT_LPE BIT(13) 122 109 #define BIT_OTG BIT(14) 123 - #define BIT_USH BIT(15) 124 - #define BIT_GBE BIT(16) 125 - #define BIT_SATA BIT(17) 126 - #define BIT_USB_EHCI BIT(18) 127 - #define BIT_SEC BIT(19) 110 + #define BIT_USH BIT(15) /* CHT datasheet: reserved */ 111 + #define BIT_GBE BIT(16) /* CHT datasheet: reserved */ 112 + #define BIT_SATA BIT(17) /* CHT datasheet: reserved */ 113 + #define BIT_USB_EHCI BIT(18) /* CHT datasheet: XHCI! */ 114 + #define BIT_SEC BIT(19) /* BYT datasheet: reserved */ 128 115 #define BIT_PCIE_PORT0 BIT(20) 129 116 #define BIT_PCIE_PORT1 BIT(21) 130 117 #define BIT_PCIE_PORT2 BIT(22)
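The new pmc_atom.h defines describe one 4-byte control register per external clock, packing a 2-bit gating mode and a 1-bit frequency select. A sketch of composing such a value, reusing the constants from the diff (the two helper functions are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Constants as added to pmc_atom.h */
#define PMC_CLK_CTL_OFFSET	0x60
#define PMC_CLK_CTL_SIZE	4
#define PMC_CLK_CTL_FORCE_ON	0x1
#define PMC_MASK_CLK_CTL	0x3		/* GENMASK(1, 0) */
#define PMC_CLK_FREQ_PLL	(1 << 2)	/* 19.2 MHz */

/* Illustrative helper: MMIO offset of external clock n's control register */
static uint32_t pmc_clk_ctl_reg(unsigned int n)
{
	return PMC_CLK_CTL_OFFSET + n * PMC_CLK_CTL_SIZE;
}

/* Illustrative helper: force a clock on, sourced from the 19.2 MHz PLL */
static uint32_t pmc_clk_force_on_pll(void)
{
	return (PMC_CLK_CTL_FORCE_ON & PMC_MASK_CLK_CTL) | PMC_CLK_FREQ_PLL;
}
```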
+3 -1
include/linux/wmi.h
··· 48 48 * struct wmi_driver - WMI driver structure 49 49 * @driver: Driver model structure 50 50 * @id_table: List of WMI GUIDs supported by this driver 51 - * @no_notify_data: WMI events provide no event data 51 + * @no_notify_data: Driver supports WMI events which provide no event data 52 + * @no_singleton: Driver can be instantiated multiple times 52 53 * @probe: Callback for device binding 53 54 * @remove: Callback for device unbinding 54 55 * @notify: Callback for receiving WMI events ··· 60 59 struct device_driver driver; 61 60 const struct wmi_device_id *id_table; 62 61 bool no_notify_data; 62 + bool no_singleton; 63 63 64 64 int (*probe)(struct wmi_device *wdev, const void *context); 65 65 void (*remove)(struct wmi_device *wdev);
+6 -6
include/trace/events/intel_ifs.h
··· 10 10 11 11 TRACE_EVENT(ifs_status, 12 12 13 - TP_PROTO(int cpu, int start, int stop, u64 status), 13 + TP_PROTO(int batch, int start, int stop, u64 status), 14 14 15 - TP_ARGS(cpu, start, stop, status), 15 + TP_ARGS(batch, start, stop, status), 16 16 17 17 TP_STRUCT__entry( 18 + __field( int, batch ) 18 19 __field( u64, status ) 19 - __field( int, cpu ) 20 20 __field( u16, start ) 21 21 __field( u16, stop ) 22 22 ), 23 23 24 24 TP_fast_assign( 25 - __entry->cpu = cpu; 25 + __entry->batch = batch; 26 26 __entry->start = start; 27 27 __entry->stop = stop; 28 28 __entry->status = status; 29 29 ), 30 30 31 - TP_printk("cpu: %d, start: %.4x, stop: %.4x, status: %.16llx", 32 - __entry->cpu, 31 + TP_printk("batch: %.2d, start: %.4x, stop: %.4x, status: %.16llx", 32 + __entry->batch, 33 33 __entry->start, 34 34 __entry->stop, 35 35 __entry->status)
+1
include/uapi/linux/input-event-codes.h
··· 602 602 603 603 #define KEY_ALS_TOGGLE 0x230 /* Ambient light sensor */ 604 604 #define KEY_ROTATE_LOCK_TOGGLE 0x231 /* Display rotation lock */ 605 + #define KEY_REFRESH_RATE_TOGGLE 0x232 /* Display refresh rate toggle */ 605 606 606 607 #define KEY_BUTTONCONFIG 0x240 /* AL Button Configuration */ 607 608 #define KEY_TASKMANAGER 0x241 /* AL Task/Project Manager */