Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'platform-drivers-x86-v6.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86

Pull x86 platform driver updates from Ilpo Järvinen:

- asus-wmi: Support for screenpad and solve brightness key press duplication

- int3472: Eliminate the last use of deprecated GPIO functions

- mlxbf-pmc: New HW support

- msi-ec: Support new EC configurations

- thinkpad_acpi: Support reading aux MAC address during passthrough

- wmi: Fixes & improvements

- x86-android-tablets: Detection fix and avoid use of GPIO private APIs

- Debug & metrics interface improvements

- Miscellaneous cleanups / fixes / improvements

* tag 'platform-drivers-x86-v6.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86: (80 commits)
platform/x86: inspur-platform-profile: Add platform profile support
platform/x86: thinkpad_acpi: Add battery quirk for Thinkpad X120e
platform/x86: wmi: Decouple WMI device removal from wmi_block_list
platform/x86: wmi: Fix opening of char device
platform/x86: wmi: Fix probe failure when failing to register WMI devices
platform/x86: wmi: Fix refcounting of WMI devices in legacy functions
platform/x86: wmi: Decouple probe deferring from wmi_block_list
platform/x86/amd/hsmp: Fix iomem handling
platform/x86: asus-wmi: Do not report brightness up/down keys when also reported by acpi_video
platform/x86: thinkpad_acpi: replace deprecated strncpy with memcpy
tools/power/x86/intel-speed-select: v1.18 release
tools/power/x86/intel-speed-select: Use cgroup isolate for CPU 0
tools/power/x86/intel-speed-select: Increase max CPUs in one request
tools/power/x86/intel-speed-select: Display error for core-power support
tools/power/x86/intel-speed-select: No TRL for non compute domains
tools/power/x86/intel-speed-select: turbo-mode enable disable swapped
tools/power/x86/intel-speed-select: Update help for TRL
tools/power/x86/intel-speed-select: Sanitize integer arguments
platform/x86: acer-wmi: Remove void function return
platform/x86/amd/pmc: Add dump_custom_stb module parameter
...

+3044 -497
+30 Documentation/ABI/testing/sysfs-class-firmware-attributes
···
  Note that any changes to this attribute requires a reboot
  for changes to take effect.

+ What:		/sys/class/firmware-attributes/*/attributes/save_settings
+ Date:		August 2023
+ KernelVersion:	6.6
+ Contact:	Mark Pearson <mpearson-lenovo@squebb.ca>
+ Description:
+	On Lenovo platforms there is a limitation in the number of times an attribute can be
+	saved. This is an architectural limitation and it limits the number of attributes
+	that can be modified to 48.
+	A solution for this is instead of the attribute being saved after every modification,
+	to allow a user to bulk set the attributes, and then trigger a final save. This allows
+	unlimited attributes.
+
+	Read the attribute to check what save mode is enabled (single or bulk).
+	E.g:
+	# cat /sys/class/firmware-attributes/thinklmi/attributes/save_settings
+	single
+
+	Write the attribute with 'bulk' to enable bulk save mode.
+	Write the attribute with 'single' to enable saving after every attribute set.
+	The default setting is single mode.
+	E.g:
+	# echo bulk > /sys/class/firmware-attributes/thinklmi/attributes/save_settings
+
+	When in bulk mode write 'save' to trigger a save of all currently modified attributes.
+	Note, once a save has been triggered, in bulk mode, attributes can no longer be set and
+	will return a permissions error. This is to prevent users hitting the 48+ save limitation
+	(which requires entering the BIOS to clear the error condition)
+	E.g:
+	# echo save > /sys/class/firmware-attributes/thinklmi/attributes/save_settings
+
  What:		/sys/class/firmware-attributes/*/attributes/debug_cmd
  Date:		July 2021
  KernelVersion:	5.14
+20 Documentation/admin-guide/laptops/thinkpad-acpi.rst
···
  - Lap mode sensor
  - Setting keyboard language
  - WWAN Antenna type
+ - Auxmac

  A compatibility table by model and feature is maintained on the web
  site, http://ibm-acpi.sf.net/. I appreciate any success or failure
···
  The property is read-only. If the platform doesn't have support the sysfs
  class is not created.
+
+ Auxmac
+ ------
+
+ sysfs: auxmac
+
+ Some newer Thinkpads have a feature called MAC Address Pass-through. This
+ feature is implemented by the system firmware to provide a system-unique MAC
+ that can override a dock or USB ethernet dongle MAC when connected to a
+ network. This property enables user-space to easily determine the MAC address
+ if the feature is enabled.
+
+ The value of this auxiliary MAC can be read with:
+
+	cat /sys/devices/platform/thinkpad_acpi/auxmac
+
+ If the feature is disabled, the value will be 'disabled'.
+
+ This property is read-only.

  Adaptive keyboard
  -----------------
+18 Documentation/arch/x86/amd_hsmp.rst
···
  * Locking across callers is taken care by the driver.

+ HSMP sysfs interface
+ ====================
+
+ 1. Metrics table binary sysfs
+
+ AMD MI300A MCM provides GET_METRICS_TABLE message to retrieve
+ most of the system management information from SMU in one go.
+
+ The metrics table is made available as hexadecimal sysfs binary file
+ under per socket sysfs directory created at
+ /sys/devices/platform/amd_hsmp/socket%d/metrics_bin
+
+ Note: lseek() is not supported as entire metrics table is read.
+
+ Metrics table definitions will be documented as part of Public PPR.
+ The same is defined in the amd_hsmp.h header.
+
  An example
  ==========
+1 arch/x86/include/asm/msr-index.h
···
  #define MSR_INTEGRITY_CAPS_ARRAY_BIST		BIT(MSR_INTEGRITY_CAPS_ARRAY_BIST_BIT)
  #define MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT	4
  #define MSR_INTEGRITY_CAPS_PERIODIC_BIST	BIT(MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT)
+ #define MSR_INTEGRITY_CAPS_SAF_GEN_MASK	GENMASK_ULL(10, 9)

  #define MSR_LBR_NHM_FROM			0x00000680
  #define MSR_LBR_NHM_TO			0x000006c0
+109 arch/x86/include/uapi/asm/amd_hsmp.h
···
  HSMP_SET_PCI_RATE,		/* 20h Control link rate on PCIe devices */
  HSMP_SET_POWER_MODE,		/* 21h Select power efficiency profile policy */
  HSMP_SET_PSTATE_MAX_MIN,	/* 22h Set the max and min DF P-State */
+ HSMP_GET_METRIC_TABLE_VER,	/* 23h Get metrics table version */
+ HSMP_GET_METRIC_TABLE,	/* 24h Get metrics table */
+ HSMP_GET_METRIC_TABLE_DRAM_ADDR,/* 25h Get metrics table dram address */
  HSMP_MSG_ID_MAX,
  };
···
  HSMP_RSVD = -1,
  HSMP_SET = 0,
  HSMP_GET = 1,
+ };
+
+ enum hsmp_proto_versions {
+	HSMP_PROTO_VER2 = 2,
+	HSMP_PROTO_VER3,
+	HSMP_PROTO_VER4,
+	HSMP_PROTO_VER5,
+	HSMP_PROTO_VER6
  };

  struct hsmp_msg_desc {
···
  * input: args[0] = min df pstate[15:8] + max df pstate[7:0]
  */
  {1, 0, HSMP_SET},
+
+ /*
+  * HSMP_GET_METRIC_TABLE_VER, num_args = 0, response_sz = 1
+  * output: args[0] = metrics table version
+  */
+ {0, 1, HSMP_GET},
+
+ /*
+  * HSMP_GET_METRIC_TABLE, num_args = 0, response_sz = 0
+  */
+ {0, 0, HSMP_GET},
+
+ /*
+  * HSMP_GET_METRIC_TABLE_DRAM_ADDR, num_args = 0, response_sz = 2
+  * output: args[0] = lower 32 bits of the address
+  * output: args[1] = upper 32 bits of the address
+  */
+ {0, 2, HSMP_GET},
+ };
+
+ /* Metrics table (supported only with proto version 6) */
+ struct hsmp_metric_table {
+	__u32 accumulation_counter;
+
+	/* TEMPERATURE */
+	__u32 max_socket_temperature;
+	__u32 max_vr_temperature;
+	__u32 max_hbm_temperature;
+	__u64 max_socket_temperature_acc;
+	__u64 max_vr_temperature_acc;
+	__u64 max_hbm_temperature_acc;
+
+	/* POWER */
+	__u32 socket_power_limit;
+	__u32 max_socket_power_limit;
+	__u32 socket_power;
+
+	/* ENERGY */
+	__u64 timestamp;
+	__u64 socket_energy_acc;
+	__u64 ccd_energy_acc;
+	__u64 xcd_energy_acc;
+	__u64 aid_energy_acc;
+	__u64 hbm_energy_acc;
+
+	/* FREQUENCY */
+	__u32 cclk_frequency_limit;
+	__u32 gfxclk_frequency_limit;
+	__u32 fclk_frequency;
+	__u32 uclk_frequency;
+	__u32 socclk_frequency[4];
+	__u32 vclk_frequency[4];
+	__u32 dclk_frequency[4];
+	__u32 lclk_frequency[4];
+	__u64 gfxclk_frequency_acc[8];
+	__u64 cclk_frequency_acc[96];
+
+	/* FREQUENCY RANGE */
+	__u32 max_cclk_frequency;
+	__u32 min_cclk_frequency;
+	__u32 max_gfxclk_frequency;
+	__u32 min_gfxclk_frequency;
+	__u32 fclk_frequency_table[4];
+	__u32 uclk_frequency_table[4];
+	__u32 socclk_frequency_table[4];
+	__u32 vclk_frequency_table[4];
+	__u32 dclk_frequency_table[4];
+	__u32 lclk_frequency_table[4];
+	__u32 max_lclk_dpm_range;
+	__u32 min_lclk_dpm_range;
+
+	/* XGMI */
+	__u32 xgmi_width;
+	__u32 xgmi_bitrate;
+	__u64 xgmi_read_bandwidth_acc[8];
+	__u64 xgmi_write_bandwidth_acc[8];
+
+	/* ACTIVITY */
+	__u32 socket_c0_residency;
+	__u32 socket_gfx_busy;
+	__u32 dram_bandwidth_utilization;
+	__u64 socket_c0_residency_acc;
+	__u64 socket_gfx_busy_acc;
+	__u64 dram_bandwidth_acc;
+	__u32 max_dram_bandwidth;
+	__u64 dram_bandwidth_utilization_acc;
+	__u64 pcie_bandwidth_acc[4];
+
+	/* THROTTLERS */
+	__u32 prochot_residency_acc;
+	__u32 ppt_residency_acc;
+	__u32 socket_thm_residency_acc;
+	__u32 vr_thm_residency_acc;
+	__u32 hbm_thm_residency_acc;
+	__u32 spare;
+
+	/* New items at the end to maintain driver compatibility */
+	__u32 gfxclk_frequency[8];
  };

  /* Reset to default packing */
+2 -4 drivers/platform/mellanox/mlxbf-bootctl.c
···
  	return ret;
  }

- static int mlxbf_bootctl_remove(struct platform_device *pdev)
+ static void mlxbf_bootctl_remove(struct platform_device *pdev)
  {
  	sysfs_remove_bin_file(&pdev->dev.kobj,
  			      &mlxbf_bootctl_bootfifo_sysfs_attr);
-
- 	return 0;
  }

  static struct platform_driver mlxbf_bootctl_driver = {
  	.probe = mlxbf_bootctl_probe,
- 	.remove = mlxbf_bootctl_remove,
+ 	.remove_new = mlxbf_bootctl_remove,
  	.driver = {
  		.name = "mlxbf-bootctl",
  		.dev_groups = mlxbf_bootctl_groups,
+655 -33 drivers/platform/mellanox/mlxbf-pmc.c
···
  #define MLXBF_PMC_EVENT_SET_BF1 0
  #define MLXBF_PMC_EVENT_SET_BF2 1
+ #define MLXBF_PMC_EVENT_SET_BF3 2
  #define MLXBF_PMC_EVENT_INFO_LEN 100

  #define MLXBF_PMC_MAX_BLOCKS 30
- #define MLXBF_PMC_MAX_ATTRS 30
+ #define MLXBF_PMC_MAX_ATTRS 70
  #define MLXBF_PMC_INFO_SZ 4
  #define MLXBF_PMC_REG_SIZE 8
  #define MLXBF_PMC_L3C_REG_SIZE 4

+ #define MLXBF_PMC_TYPE_CRSPACE 2
  #define MLXBF_PMC_TYPE_COUNTER 1
  #define MLXBF_PMC_TYPE_REGISTER 0
···
  #define MLXBF_PMC_L3C_PERF_CNT_LOW_VAL GENMASK(31, 0)
  #define MLXBF_PMC_L3C_PERF_CNT_HIGH_VAL GENMASK(24, 0)

+ #define MLXBF_PMC_CRSPACE_PERFMON_REG0 0x0
+ #define MLXBF_PMC_CRSPACE_PERFSEL_SZ 4
+ #define MLXBF_PMC_CRSPACE_PERFSEL0 GENMASK(23, 16)
+ #define MLXBF_PMC_CRSPACE_PERFSEL1 GENMASK(7, 0)
+ #define MLXBF_PMC_CRSPACE_PERFMON_REG0_SZ 0x2
+ #define MLXBF_PMC_CRSPACE_PERFMON_CTL(n) (n * MLXBF_PMC_CRSPACE_PERFMON_REG0_SZ)
+ #define MLXBF_PMC_CRSPACE_PERFMON_EN BIT(30)
+ #define MLXBF_PMC_CRSPACE_PERFMON_CLR BIT(28)
+ #define MLXBF_PMC_CRSPACE_PERFMON_VAL0(n) (MLXBF_PMC_CRSPACE_PERFMON_CTL(n) + 0xc)

  /**
   * struct mlxbf_pmc_attribute - Structure to hold attribute and block info
···
   * @pdev: The kernel structure representing the device
   * @total_blocks: Total number of blocks
   * @tile_count: Number of tiles in the system
+  * @llt_enable: Info on enabled LLTs
+  * @mss_enable: Info on enabled MSSs
+  * @group_num: Group number assigned to each valid block
   * @hwmon_dev: Hwmon device for bfperf
   * @block_name: Block name
   * @block: Block info
···
  	struct platform_device *pdev;
  	uint32_t total_blocks;
  	uint32_t tile_count;
+ 	uint8_t llt_enable;
+ 	uint8_t mss_enable;
+ 	uint32_t group_num;
  	struct device *hwmon_dev;
  	const char *block_name[MLXBF_PMC_MAX_BLOCKS];
  	struct mlxbf_pmc_block_info block[MLXBF_PMC_MAX_BLOCKS];
···
  	{ 0x348, "DRAM_ECC_ERROR" },
  };

- static const struct mlxbf_pmc_events mlxbf_pmc_mss_events[] = {
+ static const struct mlxbf_pmc_events mlxbf_pmc_mss_events_1[] = {
  	{ 0x0, "DISABLE" },
  	{ 0xc0, "RXREQ_MSS" },
  	{ 0xc1, "RXDAT_MSS" },
  	{ 0xc2, "TXRSP_MSS" },
  	{ 0xc3, "TXDAT_MSS" },
+ };
+
+ static const struct mlxbf_pmc_events mlxbf_pmc_mss_events_3[] = {
+	{0, "SKYLIB_CDN_TX_FLITS"},
+	{1, "SKYLIB_DDN_TX_FLITS"},
+	{2, "SKYLIB_NDN_TX_FLITS"},
+	{3, "SKYLIB_SDN_TX_FLITS"},
+	{4, "SKYLIB_UDN_TX_FLITS"},
+	{5, "SKYLIB_CDN_RX_FLITS"},
+	{6, "SKYLIB_DDN_RX_FLITS"},
+	{7, "SKYLIB_NDN_RX_FLITS"},
+	{8, "SKYLIB_SDN_RX_FLITS"},
+	{9, "SKYLIB_UDN_RX_FLITS"},
+	{10, "SKYLIB_CDN_TX_STALL"},
+	{11, "SKYLIB_DDN_TX_STALL"},
+	{12, "SKYLIB_NDN_TX_STALL"},
+	{13, "SKYLIB_SDN_TX_STALL"},
+	{14, "SKYLIB_UDN_TX_STALL"},
+	{15, "SKYLIB_CDN_RX_STALL"},
+	{16, "SKYLIB_DDN_RX_STALL"},
+	{17, "SKYLIB_NDN_RX_STALL"},
+	{18, "SKYLIB_SDN_RX_STALL"},
+	{19, "SKYLIB_UDN_RX_STALL"},
+	{20, "SKYLIB_CHI_REQ0_TX_FLITS"},
+	{21, "SKYLIB_CHI_DATA0_TX_FLITS"},
+	{22, "SKYLIB_CHI_RESP0_TX_FLITS"},
+	{23, "SKYLIB_CHI_SNP0_TX_FLITS"},
+	{24, "SKYLIB_CHI_REQ1_TX_FLITS"},
+	{25, "SKYLIB_CHI_DATA1_TX_FLITS"},
+	{26, "SKYLIB_CHI_RESP1_TX_FLITS"},
+	{27, "SKYLIB_CHI_SNP1_TX_FLITS"},
+	{28, "SKYLIB_CHI_REQ2_TX_FLITS"},
+	{29, "SKYLIB_CHI_DATA2_TX_FLITS"},
+	{30, "SKYLIB_CHI_RESP2_TX_FLITS"},
+	{31, "SKYLIB_CHI_SNP2_TX_FLITS"},
+	{32, "SKYLIB_CHI_REQ3_TX_FLITS"},
+	{33, "SKYLIB_CHI_DATA3_TX_FLITS"},
+	{34, "SKYLIB_CHI_RESP3_TX_FLITS"},
+	{35, "SKYLIB_CHI_SNP3_TX_FLITS"},
+	{36, "SKYLIB_TLP_REQ_TX_FLITS"},
+	{37, "SKYLIB_TLP_RESP_TX_FLITS"},
+	{38, "SKYLIB_TLP_META_TX_FLITS"},
+	{39, "SKYLIB_AXIS_DATA_TX_FLITS"},
+	{40, "SKYLIB_AXIS_CRED_TX_FLITS"},
+	{41, "SKYLIB_APB_TX_FLITS"},
+	{42, "SKYLIB_VW_TX_FLITS"},
+	{43, "SKYLIB_GGA_MSN_W_TX_FLITS"},
+	{44, "SKYLIB_GGA_MSN_N_TX_FLITS"},
+	{45, "SKYLIB_CR_REQ_TX_FLITS"},
+	{46, "SKYLIB_CR_RESP_TX_FLITS"},
+	{47, "SKYLIB_MSN_PRNF_TX_FLITS"},
+	{48, "SKYLIB_DBG_DATA_TX_FLITS"},
+	{49, "SKYLIB_DBG_CRED_TX_FLITS"},
+	{50, "SKYLIB_CHI_REQ0_RX_FLITS"},
+	{51, "SKYLIB_CHI_DATA0_RX_FLITS"},
+	{52, "SKYLIB_CHI_RESP0_RX_FLITS"},
+	{53, "SKYLIB_CHI_SNP0_RX_FLITS"},
+	{54, "SKYLIB_CHI_REQ1_RX_FLITS"},
+	{55, "SKYLIB_CHI_DATA1_RX_FLITS"},
+	{56, "SKYLIB_CHI_RESP1_RX_FLITS"},
+	{57, "SKYLIB_CHI_SNP1_RX_FLITS"},
+	{58, "SKYLIB_CHI_REQ2_RX_FLITS"},
+	{59, "SKYLIB_CHI_DATA2_RX_FLITS"},
+	{60, "SKYLIB_CHI_RESP2_RX_FLITS"},
+	{61, "SKYLIB_CHI_SNP2_RX_FLITS"},
+	{62, "SKYLIB_CHI_REQ3_RX_FLITS"},
+	{63, "SKYLIB_CHI_DATA3_RX_FLITS"},
+	{64, "SKYLIB_CHI_RESP3_RX_FLITS"},
+	{65, "SKYLIB_CHI_SNP3_RX_FLITS"},
+	{66, "SKYLIB_TLP_REQ_RX_FLITS"},
+	{67, "SKYLIB_TLP_RESP_RX_FLITS"},
+	{68, "SKYLIB_TLP_META_RX_FLITS"},
+	{69, "SKYLIB_AXIS_DATA_RX_FLITS"},
+	{70, "SKYLIB_AXIS_CRED_RX_FLITS"},
+	{71, "SKYLIB_APB_RX_FLITS"},
+	{72, "SKYLIB_VW_RX_FLITS"},
+	{73, "SKYLIB_GGA_MSN_W_RX_FLITS"},
+	{74, "SKYLIB_GGA_MSN_N_RX_FLITS"},
+	{75, "SKYLIB_CR_REQ_RX_FLITS"},
+	{76, "SKYLIB_CR_RESP_RX_FLITS"},
+	{77, "SKYLIB_MSN_PRNF_RX_FLITS"},
+	{78, "SKYLIB_DBG_DATA_RX_FLITS"},
+	{79, "SKYLIB_DBG_CRED_RX_FLITS"},
+	{80, "SKYLIB_CHI_REQ0_TX_STALL"},
+	{81, "SKYLIB_CHI_DATA0_TX_STALL"},
+	{82, "SKYLIB_CHI_RESP0_TX_STALL"},
+	{83, "SKYLIB_CHI_SNP0_TX_STALL"},
+	{84, "SKYLIB_CHI_REQ1_TX_STALL"},
+	{85, "SKYLIB_CHI_DATA1_TX_STALL"},
+	{86, "SKYLIB_CHI_RESP1_TX_STALL"},
+	{87, "SKYLIB_CHI_SNP1_TX_STALL"},
+	{88, "SKYLIB_CHI_REQ2_TX_STALL"},
+	{89, "SKYLIB_CHI_DATA2_TX_STALL"},
+	{90, "SKYLIB_CHI_RESP2_TX_STALL"},
+	{91, "SKYLIB_CHI_SNP2_TX_STALL"},
+	{92, "SKYLIB_CHI_REQ3_TX_STALL"},
+	{93, "SKYLIB_CHI_DATA3_TX_STALL"},
+	{94, "SKYLIB_CHI_RESP3_TX_STALL"},
+	{95, "SKYLIB_CHI_SNP3_TX_STALL"},
+	{96, "SKYLIB_TLP_REQ_TX_STALL"},
+	{97, "SKYLIB_TLP_RESP_TX_STALL"},
+	{98, "SKYLIB_TLP_META_TX_STALL"},
+	{99, "SKYLIB_AXIS_DATA_TX_STALL"},
+	{100, "SKYLIB_AXIS_CRED_TX_STALL"},
+	{101, "SKYLIB_APB_TX_STALL"},
+	{102, "SKYLIB_VW_TX_STALL"},
+	{103, "SKYLIB_GGA_MSN_W_TX_STALL"},
+	{104, "SKYLIB_GGA_MSN_N_TX_STALL"},
+	{105, "SKYLIB_CR_REQ_TX_STALL"},
+	{106, "SKYLIB_CR_RESP_TX_STALL"},
+	{107, "SKYLIB_MSN_PRNF_TX_STALL"},
+	{108, "SKYLIB_DBG_DATA_TX_STALL"},
+	{109, "SKYLIB_DBG_CRED_TX_STALL"},
+	{110, "SKYLIB_CHI_REQ0_RX_STALL"},
+	{111, "SKYLIB_CHI_DATA0_RX_STALL"},
+	{112, "SKYLIB_CHI_RESP0_RX_STALL"},
+	{113, "SKYLIB_CHI_SNP0_RX_STALL"},
+	{114, "SKYLIB_CHI_REQ1_RX_STALL"},
+	{115, "SKYLIB_CHI_DATA1_RX_STALL"},
+	{116, "SKYLIB_CHI_RESP1_RX_STALL"},
+	{117, "SKYLIB_CHI_SNP1_RX_STALL"},
+	{118, "SKYLIB_CHI_REQ2_RX_STALL"},
+	{119, "SKYLIB_CHI_DATA2_RX_STALL"},
+	{120, "SKYLIB_CHI_RESP2_RX_STALL"},
+	{121, "SKYLIB_CHI_SNP2_RX_STALL"},
+	{122, "SKYLIB_CHI_REQ3_RX_STALL"},
+	{123, "SKYLIB_CHI_DATA3_RX_STALL"},
+	{124, "SKYLIB_CHI_RESP3_RX_STALL"},
+	{125, "SKYLIB_CHI_SNP3_RX_STALL"},
+	{126, "SKYLIB_TLP_REQ_RX_STALL"},
+	{127, "SKYLIB_TLP_RESP_RX_STALL"},
+	{128, "SKYLIB_TLP_META_RX_STALL"},
+	{129, "SKYLIB_AXIS_DATA_RX_STALL"},
+	{130, "SKYLIB_AXIS_CRED_RX_STALL"},
+	{131, "SKYLIB_APB_RX_STALL"},
+	{132, "SKYLIB_VW_RX_STALL"},
+	{133, "SKYLIB_GGA_MSN_W_RX_STALL"},
+	{134, "SKYLIB_GGA_MSN_N_RX_STALL"},
+	{135, "SKYLIB_CR_REQ_RX_STALL"},
+	{136, "SKYLIB_CR_RESP_RX_STALL"},
+	{137, "SKYLIB_MSN_PRNF_RX_STALL"},
+	{138, "SKYLIB_DBG_DATA_RX_STALL"},
+	{139, "SKYLIB_DBG_CRED_RX_STALL"},
+	{140, "SKYLIB_CDN_LOOPBACK_FLITS"},
+	{141, "SKYLIB_DDN_LOOPBACK_FLITS"},
+	{142, "SKYLIB_NDN_LOOPBACK_FLITS"},
+	{143, "SKYLIB_SDN_LOOPBACK_FLITS"},
+	{144, "SKYLIB_UDN_LOOPBACK_FLITS"},
+	{145, "HISTOGRAM_HISTOGRAM_BIN0"},
+	{146, "HISTOGRAM_HISTOGRAM_BIN1"},
+	{147, "HISTOGRAM_HISTOGRAM_BIN2"},
+	{148, "HISTOGRAM_HISTOGRAM_BIN3"},
+	{149, "HISTOGRAM_HISTOGRAM_BIN4"},
+	{150, "HISTOGRAM_HISTOGRAM_BIN5"},
+	{151, "HISTOGRAM_HISTOGRAM_BIN6"},
+	{152, "HISTOGRAM_HISTOGRAM_BIN7"},
+	{153, "HISTOGRAM_HISTOGRAM_BIN8"},
+	{154, "HISTOGRAM_HISTOGRAM_BIN9"},
  };

  static const struct mlxbf_pmc_events mlxbf_pmc_hnf_events[] = {
···
  	{ 0x2b, "ANY_REJECT_BANK1" },
  };

+ static const struct mlxbf_pmc_events mlxbf_pmc_llt_events[] = {
+	{0, "HNF0_CYCLES"},
+	{1, "HNF0_REQS_RECEIVED"},
+	{2, "HNF0_REQS_PROCESSED"},
+	{3, "HNF0_DIR_HIT"},
+	{4, "HNF0_DIR_MISS"},
+	{5, "HNF0_DIR_RD_ALLOC"},
+	{6, "HNF0_DIR_WR_ALLOC"},
+	{7, "HNF0_DIR_VICTIM"},
+	{8, "HNF0_CL_HAZARD"},
+	{9, "HNF0_ALL_HAZARD"},
+	{10, "HNF0_PIPE_STALLS"},
+	{11, "HNF0_MEM_READS"},
+	{12, "HNF0_MEM_WRITES"},
+	{13, "HNF0_MEM_ACCESS"},
+	{14, "HNF0_DCL_READ"},
+	{15, "HNF0_DCL_INVAL"},
+	{16, "HNF0_CHI_RXDAT"},
+	{17, "HNF0_CHI_RXRSP"},
+	{18, "HNF0_CHI_TXDAT"},
+	{19, "HNF0_CHI_TXRSP"},
+	{20, "HNF0_CHI_TXSNP"},
+	{21, "HNF0_DCT_SNP"},
+	{22, "HNF0_SNP_FWD_DATA"},
+	{23, "HNF0_SNP_FWD_RSP"},
+	{24, "HNF0_SNP_RSP"},
+	{25, "HNF0_EXCL_FULL"},
+	{26, "HNF0_EXCL_WRITE_F"},
+	{27, "HNF0_EXCL_WRITE_S"},
+	{28, "HNF0_EXCL_WRITE"},
+	{29, "HNF0_EXCL_READ"},
+	{30, "HNF0_REQ_BUF_EMPTY"},
+	{31, "HNF0_ALL_MAFS_BUSY"},
+	{32, "HNF0_TXDAT_NO_LCRD"},
+	{33, "HNF0_TXSNP_NO_LCRD"},
+	{34, "HNF0_TXRSP_NO_LCRD"},
+	{35, "HNF0_TXREQ_NO_LCRD"},
+	{36, "HNF0_WRITE"},
+	{37, "HNF0_READ"},
+	{38, "HNF0_ACCESS"},
+	{39, "HNF0_MAF_N_BUSY"},
+	{40, "HNF0_MAF_N_REQS"},
+	{41, "HNF0_SEL_OPCODE"},
+	{42, "HNF1_CYCLES"},
+	{43, "HNF1_REQS_RECEIVED"},
+	{44, "HNF1_REQS_PROCESSED"},
+	{45, "HNF1_DIR_HIT"},
+	{46, "HNF1_DIR_MISS"},
+	{47, "HNF1_DIR_RD_ALLOC"},
+	{48, "HNF1_DIR_WR_ALLOC"},
+	{49, "HNF1_DIR_VICTIM"},
+	{50, "HNF1_CL_HAZARD"},
+	{51, "HNF1_ALL_HAZARD"},
+	{52, "HNF1_PIPE_STALLS"},
+	{53, "HNF1_MEM_READS"},
+	{54, "HNF1_MEM_WRITES"},
+	{55, "HNF1_MEM_ACCESS"},
+	{56, "HNF1_DCL_READ"},
+	{57, "HNF1_DCL_INVAL"},
+	{58, "HNF1_CHI_RXDAT"},
+	{59, "HNF1_CHI_RXRSP"},
+	{60, "HNF1_CHI_TXDAT"},
+	{61, "HNF1_CHI_TXRSP"},
+	{62, "HNF1_CHI_TXSNP"},
+	{63, "HNF1_DCT_SNP"},
+	{64, "HNF1_SNP_FWD_DATA"},
+	{65, "HNF1_SNP_FWD_RSP"},
+	{66, "HNF1_SNP_RSP"},
+	{67, "HNF1_EXCL_FULL"},
+	{68, "HNF1_EXCL_WRITE_F"},
+	{69, "HNF1_EXCL_WRITE_S"},
+	{70, "HNF1_EXCL_WRITE"},
+	{71, "HNF1_EXCL_READ"},
+	{72, "HNF1_REQ_BUF_EMPTY"},
+	{73, "HNF1_ALL_MAFS_BUSY"},
+	{74, "HNF1_TXDAT_NO_LCRD"},
+	{75, "HNF1_TXSNP_NO_LCRD"},
+	{76, "HNF1_TXRSP_NO_LCRD"},
+	{77, "HNF1_TXREQ_NO_LCRD"},
+	{78, "HNF1_WRITE"},
+	{79, "HNF1_READ"},
+	{80, "HNF1_ACCESS"},
+	{81, "HNF1_MAF_N_BUSY"},
+	{82, "HNF1_MAF_N_REQS"},
+	{83, "HNF1_SEL_OPCODE"},
+	{84, "GDC_BANK0_RD_REQ"},
+	{85, "GDC_BANK0_WR_REQ"},
+	{86, "GDC_BANK0_ALLOCATE"},
+	{87, "GDC_BANK0_HIT"},
+	{88, "GDC_BANK0_MISS"},
+	{89, "GDC_BANK0_INVALIDATE"},
+	{90, "GDC_BANK0_EVICT"},
+	{91, "GDC_BANK0_RD_RESP"},
+	{92, "GDC_BANK0_WR_ACK"},
+	{93, "GDC_BANK0_SNOOP"},
+	{94, "GDC_BANK0_SNOOP_NORMAL"},
+	{95, "GDC_BANK0_SNOOP_FWD"},
+	{96, "GDC_BANK0_SNOOP_STASH"},
+	{97, "GDC_BANK0_SNOOP_STASH_INDPND_RD"},
+	{98, "GDC_BANK0_FOLLOWER"},
+	{99, "GDC_BANK0_FW"},
+	{100, "GDC_BANK0_HIT_DCL_BOTH"},
+	{101, "GDC_BANK0_HIT_DCL_PARTIAL"},
+	{102, "GDC_BANK0_EVICT_DCL"},
+	{103, "GDC_BANK0_G_RSE_PIPE_CACHE_DATA0"},
+	{104, "GDC_BANK0_G_RSE_PIPE_CACHE_DATA1"},
+	{105, "GDC_BANK0_ARB_STRB"},
+	{106, "GDC_BANK0_ARB_WAIT"},
+	{107, "GDC_BANK0_GGA_STRB"},
+	{108, "GDC_BANK0_GGA_WAIT"},
+	{109, "GDC_BANK0_FW_STRB"},
+	{110, "GDC_BANK0_FW_WAIT"},
+	{111, "GDC_BANK0_SNP_STRB"},
+	{112, "GDC_BANK0_SNP_WAIT"},
+	{113, "GDC_BANK0_MISS_INARB_STRB"},
+	{114, "GDC_BANK0_MISS_INARB_WAIT"},
+	{115, "GDC_BANK0_G_FIFO_FF_GGA_RSP_RD0"},
+	{116, "GDC_BANK0_G_FIFO_FF_GGA_RSP_RD1"},
+	{117, "GDC_BANK0_G_FIFO_FF_GGA_RSP_RD2"},
+	{118, "GDC_BANK0_G_FIFO_FF_GGA_RSP_RD3"},
+	{119, "GDC_BANK0_G_FIFO_FF_GGA_RSP_WR0"},
+	{120, "GDC_BANK0_G_FIFO_FF_GGA_RSP_WR1"},
+	{121, "GDC_BANK0_G_FIFO_FF_GGA_RSP_WR2"},
+	{122, "GDC_BANK0_G_FIFO_FF_GGA_RSP_WR3"},
+	{123, "GDC_BANK1_RD_REQ"},
+	{124, "GDC_BANK1_WR_REQ"},
+	{125, "GDC_BANK1_ALLOCATE"},
+	{126, "GDC_BANK1_HIT"},
+	{127, "GDC_BANK1_MISS"},
+	{128, "GDC_BANK1_INVALIDATE"},
+	{129, "GDC_BANK1_EVICT"},
+	{130, "GDC_BANK1_RD_RESP"},
+	{131, "GDC_BANK1_WR_ACK"},
+	{132, "GDC_BANK1_SNOOP"},
+	{133, "GDC_BANK1_SNOOP_NORMAL"},
+	{134, "GDC_BANK1_SNOOP_FWD"},
+	{135, "GDC_BANK1_SNOOP_STASH"},
+	{136, "GDC_BANK1_SNOOP_STASH_INDPND_RD"},
+	{137, "GDC_BANK1_FOLLOWER"},
+	{138, "GDC_BANK1_FW"},
+	{139, "GDC_BANK1_HIT_DCL_BOTH"},
+	{140, "GDC_BANK1_HIT_DCL_PARTIAL"},
+	{141, "GDC_BANK1_EVICT_DCL"},
+	{142, "GDC_BANK1_G_RSE_PIPE_CACHE_DATA0"},
+	{143, "GDC_BANK1_G_RSE_PIPE_CACHE_DATA1"},
+	{144, "GDC_BANK1_ARB_STRB"},
+	{145, "GDC_BANK1_ARB_WAIT"},
+	{146, "GDC_BANK1_GGA_STRB"},
+	{147, "GDC_BANK1_GGA_WAIT"},
+	{148, "GDC_BANK1_FW_STRB"},
+	{149, "GDC_BANK1_FW_WAIT"},
+	{150, "GDC_BANK1_SNP_STRB"},
+	{151, "GDC_BANK1_SNP_WAIT"},
+	{152, "GDC_BANK1_MISS_INARB_STRB"},
+	{153, "GDC_BANK1_MISS_INARB_WAIT"},
+	{154, "GDC_BANK1_G_FIFO_FF_GGA_RSP_RD0"},
+	{155, "GDC_BANK1_G_FIFO_FF_GGA_RSP_RD1"},
+	{156, "GDC_BANK1_G_FIFO_FF_GGA_RSP_RD2"},
+	{157, "GDC_BANK1_G_FIFO_FF_GGA_RSP_RD3"},
+	{158, "GDC_BANK1_G_FIFO_FF_GGA_RSP_WR0"},
+	{159, "GDC_BANK1_G_FIFO_FF_GGA_RSP_WR1"},
+	{160, "GDC_BANK1_G_FIFO_FF_GGA_RSP_WR2"},
+	{161, "GDC_BANK1_G_FIFO_FF_GGA_RSP_WR3"},
+	{162, "HISTOGRAM_HISTOGRAM_BIN0"},
+	{163, "HISTOGRAM_HISTOGRAM_BIN1"},
+	{164, "HISTOGRAM_HISTOGRAM_BIN2"},
+	{165, "HISTOGRAM_HISTOGRAM_BIN3"},
+	{166, "HISTOGRAM_HISTOGRAM_BIN4"},
+	{167, "HISTOGRAM_HISTOGRAM_BIN5"},
+	{168, "HISTOGRAM_HISTOGRAM_BIN6"},
+	{169, "HISTOGRAM_HISTOGRAM_BIN7"},
+	{170, "HISTOGRAM_HISTOGRAM_BIN8"},
+	{171, "HISTOGRAM_HISTOGRAM_BIN9"},
+ };
+
+ static const struct mlxbf_pmc_events mlxbf_pmc_llt_miss_events[] = {
+	{0, "GDC_MISS_MACHINE_RD_REQ"},
+	{1, "GDC_MISS_MACHINE_WR_REQ"},
+	{2, "GDC_MISS_MACHINE_SNP_REQ"},
+	{3, "GDC_MISS_MACHINE_EVICT_REQ"},
+	{4, "GDC_MISS_MACHINE_FW_REQ"},
+	{5, "GDC_MISS_MACHINE_RD_RESP"},
+	{6, "GDC_MISS_MACHINE_WR_RESP"},
+	{7, "GDC_MISS_MACHINE_SNP_STASH_DATAPULL_DROP"},
+	{8, "GDC_MISS_MACHINE_SNP_STASH_DATAPULL_DROP_TXDAT"},
+	{9, "GDC_MISS_MACHINE_CHI_TXREQ"},
+	{10, "GDC_MISS_MACHINE_CHI_RXRSP"},
+	{11, "GDC_MISS_MACHINE_CHI_TXDAT"},
+	{12, "GDC_MISS_MACHINE_CHI_RXDAT"},
+	{13, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC0_0"},
+	{14, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC0_1"},
+	{15, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC0_2"},
+	{16, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC0_3"},
+	{17, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC1_0"},
+	{18, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC1_1"},
+	{19, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC1_2"},
+	{20, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC1_3"},
+	{21, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE0_0"},
+	{22, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE0_1"},
+	{23, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE0_2"},
+	{24, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE0_3"},
+	{25, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE1_0"},
+	{26, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE1_1"},
+	{27, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE1_2"},
+	{28, "GDC_MISS_MACHINE_G_FIFO_FF_EXEC_DONE1_3"},
+	{29, "GDC_MISS_MACHINE_GDC_LINK_LIST_FF_0"},
+	{30, "GDC_MISS_MACHINE_GDC_LINK_LIST_FF_1"},
+	{31, "GDC_MISS_MACHINE_GDC_LINK_LIST_FF_2"},
+	{32, "GDC_MISS_MACHINE_GDC_LINK_LIST_FF_3"},
+	{33, "GDC_MISS_MACHINE_GDC_LINK_LIST_FF_4"},
+	{34, "GDC_MISS_MACHINE_GDC_LINK_LIST_FF_5"},
+	{35, "GDC_MISS_MACHINE_GDC_LINK_LIST_FF_6"},
+	{36, "GDC_MISS_MACHINE_G_RSE_PIPE_TXREQ_0"},
+	{37, "GDC_MISS_MACHINE_G_RSE_PIPE_TXREQ_1"},
+	{38, "GDC_MISS_MACHINE_G_CREDIT_TXREQ_0"},
+	{39, "GDC_MISS_MACHINE_G_CREDIT_TXREQ_1"},
+	{40, "GDC_MISS_MACHINE_G_RSE_PIPE_TXDAT_0"},
+	{41, "GDC_MISS_MACHINE_G_RSE_PIPE_TXDAT_1"},
+	{42, "GDC_MISS_MACHINE_G_CREDIT_TXDAT_0"},
+	{43, "GDC_MISS_MACHINE_G_CREDIT_TXDAT_1"},
+	{44, "GDC_MISS_MACHINE_G_FIFO_FF_COMPACK_0"},
+	{45, "GDC_MISS_MACHINE_G_FIFO_FF_COMPACK_1"},
+	{46, "GDC_MISS_MACHINE_G_FIFO_FF_COMPACK_2"},
+	{47, "GDC_MISS_MACHINE_G_FIFO_FF_COMPACK_3"},
+	{48, "GDC_MISS_MACHINE_G_RSE_PIPE_TXRSP_0"},
+	{49, "GDC_MISS_MACHINE_G_RSE_PIPE_TXRSP_1"},
+	{50, "GDC_MISS_MACHINE_G_CREDIT_TXRSP_0"},
+	{51, "GDC_MISS_MACHINE_G_CREDIT_TXRSP_1"},
+	{52, "GDC_MISS_MACHINE_G_RSE_PIPE_INARB_0"},
+	{53, "GDC_MISS_MACHINE_G_RSE_PIPE_INARB_1"},
+	{54, "GDC_MISS_MACHINE_G_FIFO_FF_SNOOP_IN_0"},
+	{55, "GDC_MISS_MACHINE_G_FIFO_FF_SNOOP_IN_1"},
+	{56, "GDC_MISS_MACHINE_G_FIFO_FF_SNOOP_IN_2"},
+	{57, "GDC_MISS_MACHINE_G_FIFO_FF_SNOOP_IN_3"},
+	{58, "GDC_MISS_MACHINE_G_FIFO_FF_TXRSP_SNOOP_DATAPULL_0"},
+	{59, "GDC_MISS_MACHINE_G_FIFO_FF_TXRSP_SNOOP_DATAPULL_1"},
+	{60, "GDC_MISS_MACHINE_G_FIFO_FF_TXRSP_SNOOP_DATAPULL_2"},
+	{61, "GDC_MISS_MACHINE_G_FIFO_FF_TXRSP_SNOOP_DATAPULL_3"},
+	{62, "GDC_MISS_MACHINE_G_FIFO_FF_TXDAT_SNOOP_DATAPULL_4"},
+	{63, "GDC_MISS_MACHINE_G_FIFO_FF_TXDAT_SNOOP_DATAPULL_5"},
+	{64, "GDC_MISS_MACHINE_G_FIFO_FF_TXDAT_SNOOP_DATAPULL_6"},
+	{65, "GDC_MISS_MACHINE_G_FIFO_FF_TXDAT_SNOOP_DATAPULL_7"},
+	{66, "HISTOGRAM_HISTOGRAM_BIN0"},
+	{67, "HISTOGRAM_HISTOGRAM_BIN1"},
+	{68, "HISTOGRAM_HISTOGRAM_BIN2"},
+	{69, "HISTOGRAM_HISTOGRAM_BIN3"},
+	{70, "HISTOGRAM_HISTOGRAM_BIN4"},
+	{71, "HISTOGRAM_HISTOGRAM_BIN5"},
+	{72, "HISTOGRAM_HISTOGRAM_BIN6"},
+	{73, "HISTOGRAM_HISTOGRAM_BIN7"},
+	{74, "HISTOGRAM_HISTOGRAM_BIN8"},
+	{75, "HISTOGRAM_HISTOGRAM_BIN9"},
+ };
+
  static struct mlxbf_pmc_context *pmc;

  /* UUID used to probe ATF service. */
···
  		break;
  	}
  } else if (strstr(blk, "mss")) {
- 	events = mlxbf_pmc_mss_events;
- 	*size = ARRAY_SIZE(mlxbf_pmc_mss_events);
+ 	switch (pmc->event_set) {
+ 	case MLXBF_PMC_EVENT_SET_BF1:
+ 	case MLXBF_PMC_EVENT_SET_BF2:
+ 		events = mlxbf_pmc_mss_events_1;
+ 		*size = ARRAY_SIZE(mlxbf_pmc_mss_events_1);
+ 		break;
+ 	case MLXBF_PMC_EVENT_SET_BF3:
+ 		events = mlxbf_pmc_mss_events_3;
+ 		*size = ARRAY_SIZE(mlxbf_pmc_mss_events_3);
+ 		break;
+ 	default:
+ 		events = NULL;
+ 		*size = 0;
+ 		break;
+ 	}
  } else if (strstr(blk, "ecc")) {
  	events = mlxbf_pmc_ecc_events;
  	*size = ARRAY_SIZE(mlxbf_pmc_ecc_events);
···
  } else if (strstr(blk, "smmu")) {
  	events = mlxbf_pmc_smgen_events;
  	*size = ARRAY_SIZE(mlxbf_pmc_smgen_events);
+ } else if (strstr(blk, "llt_miss")) {
+ 	events = mlxbf_pmc_llt_miss_events;
+ 	*size = ARRAY_SIZE(mlxbf_pmc_llt_miss_events);
+ } else if (strstr(blk, "llt")) {
+ 	events = mlxbf_pmc_llt_events;
+ 	*size = ARRAY_SIZE(mlxbf_pmc_llt_events);
  } else {
  	events = NULL;
  	*size = 0;
···
  	return mlxbf_pmc_write(pmcaddr, MLXBF_PMC_WRITE_REG_32, *wordaddr);
  }

+ /* Method to handle crspace counter programming */
+ static int mlxbf_pmc_program_crspace_counter(int blk_num, uint32_t cnt_num,
+					     uint32_t evt)
+ {
+	uint32_t word;
+	void *addr;
+	int ret;
+
+	addr = pmc->block[blk_num].mmio_base +
+		(rounddown(cnt_num, 2) * MLXBF_PMC_CRSPACE_PERFSEL_SZ);
+	ret = mlxbf_pmc_readl(addr, &word);
+	if (ret)
+		return ret;
+
+	if (cnt_num % 2) {
+		word &= ~MLXBF_PMC_CRSPACE_PERFSEL1;
+		word |= FIELD_PREP(MLXBF_PMC_CRSPACE_PERFSEL1, evt);
+	} else {
+		word &= ~MLXBF_PMC_CRSPACE_PERFSEL0;
+		word |= FIELD_PREP(MLXBF_PMC_CRSPACE_PERFSEL0, evt);
+	}
+
+	return mlxbf_pmc_write(addr, MLXBF_PMC_WRITE_REG_32, word);
+ }
+
+ /* Method to clear crspace counter value */
+ static int mlxbf_pmc_clear_crspace_counter(int blk_num, uint32_t cnt_num)
+ {
+	void *addr;
+
+	addr = pmc->block[blk_num].mmio_base +
+		MLXBF_PMC_CRSPACE_PERFMON_VAL0(pmc->block[blk_num].counters) +
+		(cnt_num * 4);
+
+	return mlxbf_pmc_write(addr, MLXBF_PMC_WRITE_REG_32, 0x0);
+ }
+
  /* Method to program a counter to monitor an event */
  static int mlxbf_pmc_program_counter(int blk_num, uint32_t cnt_num,
  				     uint32_t evt, bool is_l3)
···
  	if (is_l3)
  		return mlxbf_pmc_program_l3_counter(blk_num, cnt_num, evt);
+
+ 	if (pmc->block[blk_num].type == MLXBF_PMC_TYPE_CRSPACE)
+ 		return mlxbf_pmc_program_crspace_counter(blk_num, cnt_num,
+ 							 evt);

  	/* Configure the counter */
  	perfctl = FIELD_PREP(MLXBF_PMC_PERFCTL_EN0, 1);
···
  {
  	uint32_t perfcnt_low = 0, perfcnt_high = 0;
  	uint64_t value;
- 	int status = 0;
+ 	int status;

  	status = mlxbf_pmc_readl(pmc->block[blk_num].mmio_base +
  				 MLXBF_PMC_L3C_PERF_CNT_LOW +
···
  	return 0;
  }

+ /* Method to handle crspace counter reads */
+ static int mlxbf_pmc_read_crspace_counter(int blk_num, uint32_t cnt_num,
+					  uint64_t *result)
+ {
+	uint32_t value;
+	int status = 0;
+
+	status = mlxbf_pmc_readl(pmc->block[blk_num].mmio_base +
+		MLXBF_PMC_CRSPACE_PERFMON_VAL0(pmc->block[blk_num].counters) +
+		(cnt_num * 4), &value);
+	if (status)
+		return status;
+
+	*result = value;
+
+	return 0;
+ }
+
  /* Method to read the counter value */
  static int mlxbf_pmc_read_counter(int blk_num, uint32_t cnt_num, bool is_l3,
  				  uint64_t *result)
···
  	if (is_l3)
  		return mlxbf_pmc_read_l3_counter(blk_num, cnt_num, result);
+
+ 	if (pmc->block[blk_num].type == MLXBF_PMC_TYPE_CRSPACE)
+ 		return mlxbf_pmc_read_crspace_counter(blk_num, cnt_num, result);

  	perfcfg_offset = cnt_num * MLXBF_PMC_REG_SIZE;
  	perfval_offset = perfcfg_offset +
···
  	return 0;
  }

+ /* Method to read crspace block event */
+ static int mlxbf_pmc_read_crspace_event(int blk_num, uint32_t cnt_num,
+					uint64_t *result)
+ {
+	uint32_t word, evt;
+	void *addr;
+	int ret;
+
+	addr = pmc->block[blk_num].mmio_base +
+		(rounddown(cnt_num, 2) * MLXBF_PMC_CRSPACE_PERFSEL_SZ);
+	ret = mlxbf_pmc_readl(addr, &word);
+	if (ret)
+		return ret;
+
+	if (cnt_num % 2)
+		evt = FIELD_GET(MLXBF_PMC_CRSPACE_PERFSEL1, word);
+	else
+		evt = FIELD_GET(MLXBF_PMC_CRSPACE_PERFSEL0, word);
+
+	*result = evt;
+
+	return 0;
+ }
+
  /* Method to find the event currently being monitored by a counter */
  static int mlxbf_pmc_read_event(int blk_num, uint32_t cnt_num, bool
is_l3, 1409 922 uint64_t *result) ··· 1440 905 1441 906 if (is_l3) 1442 907 return mlxbf_pmc_read_l3_event(blk_num, cnt_num, result); 908 + 909 + if (pmc->block[blk_num].type == MLXBF_PMC_TYPE_CRSPACE) 910 + return mlxbf_pmc_read_crspace_event(blk_num, cnt_num, result); 1443 911 1444 912 perfcfg_offset = cnt_num * MLXBF_PMC_REG_SIZE; 1445 913 perfval_offset = perfcfg_offset + ··· 1520 982 if (strstr(pmc->block_name[blk_num], "l3cache")) 1521 983 is_l3 = true; 1522 984 1523 - if (pmc->block[blk_num].type == MLXBF_PMC_TYPE_COUNTER) { 985 + if ((pmc->block[blk_num].type == MLXBF_PMC_TYPE_COUNTER) || 986 + (pmc->block[blk_num].type == MLXBF_PMC_TYPE_CRSPACE)) { 1524 987 if (mlxbf_pmc_read_counter(blk_num, cnt_num, is_l3, &value)) 1525 988 return -EINVAL; 1526 989 } else if (pmc->block[blk_num].type == MLXBF_PMC_TYPE_REGISTER) { ··· 1579 1040 err = mlxbf_pmc_write_reg(blk_num, offset, data); 1580 1041 if (err) 1581 1042 return err; 1043 + } else if (pmc->block[blk_num].type == MLXBF_PMC_TYPE_CRSPACE) { 1044 + if (sscanf(attr->attr.name, "counter%d", &cnt_num) != 1) 1045 + return -EINVAL; 1046 + err = mlxbf_pmc_clear_crspace_counter(blk_num, cnt_num); 1582 1047 } else 1583 1048 return -EINVAL; 1584 1049 ··· 1680 1137 return ret; 1681 1138 } 1682 1139 1683 - /* Show function for "enable" sysfs files - only for l3cache */ 1140 + /* Show function for "enable" sysfs files - only for l3cache & crspace */ 1684 1141 static ssize_t mlxbf_pmc_enable_show(struct device *dev, 1685 1142 struct device_attribute *attr, char *buf) 1686 1143 { 1687 1144 struct mlxbf_pmc_attribute *attr_enable = container_of( 1688 1145 attr, struct mlxbf_pmc_attribute, dev_attr); 1689 - uint32_t perfcnt_cfg; 1146 + uint32_t perfcnt_cfg, word; 1690 1147 int blk_num, value; 1691 1148 1692 1149 blk_num = attr_enable->nr; 1693 1150 1694 - if (mlxbf_pmc_readl(pmc->block[blk_num].mmio_base + 1695 - MLXBF_PMC_L3C_PERF_CNT_CFG, 1696 - &perfcnt_cfg)) 1697 - return -EINVAL; 1151 + if (pmc->block[blk_num].type == 
MLXBF_PMC_TYPE_CRSPACE) { 1152 + if (mlxbf_pmc_readl(pmc->block[blk_num].mmio_base + 1153 + MLXBF_PMC_CRSPACE_PERFMON_CTL(pmc->block[blk_num].counters), 1154 + &word)) 1155 + return -EINVAL; 1698 1156 1699 - value = FIELD_GET(MLXBF_PMC_L3C_PERF_CNT_CFG_EN, perfcnt_cfg); 1157 + value = FIELD_GET(MLXBF_PMC_CRSPACE_PERFMON_EN, word); 1158 + } else { 1159 + if (mlxbf_pmc_readl(pmc->block[blk_num].mmio_base + 1160 + MLXBF_PMC_L3C_PERF_CNT_CFG, 1161 + &perfcnt_cfg)) 1162 + return -EINVAL; 1163 + 1164 + value = FIELD_GET(MLXBF_PMC_L3C_PERF_CNT_CFG_EN, perfcnt_cfg); 1165 + } 1700 1166 1701 1167 return sysfs_emit(buf, "%d\n", value); 1702 1168 } 1703 1169 1704 - /* Store function for "enable" sysfs files - only for l3cache */ 1170 + /* Store function for "enable" sysfs files - only for l3cache & crspace */ 1705 1171 static ssize_t mlxbf_pmc_enable_store(struct device *dev, 1706 1172 struct device_attribute *attr, 1707 1173 const char *buf, size_t count) ··· 1718 1166 struct mlxbf_pmc_attribute *attr_enable = container_of( 1719 1167 attr, struct mlxbf_pmc_attribute, dev_attr); 1720 1168 int err, en, blk_num; 1169 + uint32_t word; 1721 1170 1722 1171 blk_num = attr_enable->nr; 1723 1172 ··· 1726 1173 if (err < 0) 1727 1174 return err; 1728 1175 1729 - if (!en) { 1730 - err = mlxbf_pmc_config_l3_counters(blk_num, false, false); 1176 + if (pmc->block[blk_num].type == MLXBF_PMC_TYPE_CRSPACE) { 1177 + err = mlxbf_pmc_readl(pmc->block[blk_num].mmio_base + 1178 + MLXBF_PMC_CRSPACE_PERFMON_CTL(pmc->block[blk_num].counters), 1179 + &word); 1180 + if (err) 1181 + return -EINVAL; 1182 + 1183 + word &= ~MLXBF_PMC_CRSPACE_PERFMON_EN; 1184 + word |= FIELD_PREP(MLXBF_PMC_CRSPACE_PERFMON_EN, en); 1185 + if (en) 1186 + word |= FIELD_PREP(MLXBF_PMC_CRSPACE_PERFMON_CLR, 1); 1187 + 1188 + mlxbf_pmc_write(pmc->block[blk_num].mmio_base + 1189 + MLXBF_PMC_CRSPACE_PERFMON_CTL(pmc->block[blk_num].counters), 1190 + MLXBF_PMC_WRITE_REG_32, word); 1191 + } else { 1192 + if (en && en != 1) 1193 + return 
-EINVAL; 1194 + 1195 + err = mlxbf_pmc_config_l3_counters(blk_num, false, !!en); 1731 1196 if (err) 1732 1197 return err; 1733 - } else if (en == 1) { 1734 - err = mlxbf_pmc_config_l3_counters(blk_num, false, true); 1735 - if (err) 1736 - return err; 1737 - err = mlxbf_pmc_config_l3_counters(blk_num, true, false); 1738 - if (err) 1739 - return err; 1740 - } else 1741 - return -EINVAL; 1198 + 1199 + if (en == 1) { 1200 + err = mlxbf_pmc_config_l3_counters(blk_num, true, false); 1201 + if (err) 1202 + return err; 1203 + } 1204 + } 1742 1205 1743 1206 return count; 1744 1207 } ··· 1775 1206 attr = NULL; 1776 1207 1777 1208 /* "enable" sysfs to start/stop the counters. Only in L3C blocks */ 1778 - if (strstr(pmc->block_name[blk_num], "l3cache")) { 1209 + if (strstr(pmc->block_name[blk_num], "l3cache") || 1210 + ((pmc->block[blk_num].type == MLXBF_PMC_TYPE_CRSPACE))) { 1779 1211 attr = &pmc->block[blk_num].attr_enable; 1780 1212 attr->dev_attr.attr.mode = 0644; 1781 1213 attr->dev_attr.show = mlxbf_pmc_enable_show; ··· 1867 1297 int err; 1868 1298 1869 1299 /* Populate attributes based on counter type */ 1870 - if (pmc->block[blk_num].type == MLXBF_PMC_TYPE_COUNTER) 1300 + if ((pmc->block[blk_num].type == MLXBF_PMC_TYPE_COUNTER) || 1301 + (pmc->block[blk_num].type == MLXBF_PMC_TYPE_CRSPACE)) 1871 1302 err = mlxbf_pmc_init_perftype_counter(dev, blk_num); 1872 1303 else if (pmc->block[blk_num].type == MLXBF_PMC_TYPE_REGISTER) 1873 1304 err = mlxbf_pmc_init_perftype_reg(dev, blk_num); ··· 1882 1311 pmc->block[blk_num].block_attr_grp.attrs = pmc->block[blk_num].block_attr; 1883 1312 pmc->block[blk_num].block_attr_grp.name = devm_kasprintf( 1884 1313 dev, GFP_KERNEL, pmc->block_name[blk_num]); 1885 - pmc->groups[blk_num] = &pmc->block[blk_num].block_attr_grp; 1314 + pmc->groups[pmc->group_num] = &pmc->block[blk_num].block_attr_grp; 1315 + pmc->group_num++; 1886 1316 1887 1317 return 0; 1888 1318 } ··· 1906 1334 int i, tile_num, ret; 1907 1335 1908 1336 for (i = 0; i < 
pmc->total_blocks; ++i) { 1909 - if (strstr(pmc->block_name[i], "tile")) { 1337 + /* Create sysfs for tiles only if block number < tile_count */ 1338 + if (strstr(pmc->block_name[i], "tilenet")) { 1339 + if (sscanf(pmc->block_name[i], "tilenet%d", &tile_num) != 1) 1340 + continue; 1341 + 1342 + if (tile_num >= pmc->tile_count) 1343 + continue; 1344 + } else if (strstr(pmc->block_name[i], "tile")) { 1910 1345 if (sscanf(pmc->block_name[i], "tile%d", &tile_num) != 1) 1911 - return -EINVAL; 1346 + continue; 1912 1347 1913 1348 if (tile_num >= pmc->tile_count) 1914 1349 continue; 1915 1350 } 1351 + 1352 + /* Create sysfs only for enabled MSS blocks */ 1353 + if (strstr(pmc->block_name[i], "mss") && 1354 + pmc->event_set == MLXBF_PMC_EVENT_SET_BF3) { 1355 + int mss_num; 1356 + 1357 + if (sscanf(pmc->block_name[i], "mss%d", &mss_num) != 1) 1358 + continue; 1359 + 1360 + if (!((pmc->mss_enable >> mss_num) & 0x1)) 1361 + continue; 1362 + } 1363 + 1364 + /* Create sysfs only for enabled LLT blocks */ 1365 + if (strstr(pmc->block_name[i], "llt_miss")) { 1366 + int llt_num; 1367 + 1368 + if (sscanf(pmc->block_name[i], "llt_miss%d", &llt_num) != 1) 1369 + continue; 1370 + 1371 + if (!((pmc->llt_enable >> llt_num) & 0x1)) 1372 + continue; 1373 + } else if (strstr(pmc->block_name[i], "llt")) { 1374 + int llt_num; 1375 + 1376 + if (sscanf(pmc->block_name[i], "llt%d", &llt_num) != 1) 1377 + continue; 1378 + 1379 + if (!((pmc->llt_enable >> llt_num) & 0x1)) 1380 + continue; 1381 + } 1382 + 1916 1383 ret = device_property_read_u64_array(dev, pmc->block_name[i], 1917 1384 info, MLXBF_PMC_INFO_SZ); 1918 1385 if (ret) ··· 2028 1417 pmc->event_set = MLXBF_PMC_EVENT_SET_BF1; 2029 1418 else if (!strcmp(hid, "MLNXBFD1")) 2030 1419 pmc->event_set = MLXBF_PMC_EVENT_SET_BF2; 1420 + else if (!strcmp(hid, "MLNXBFD2")) 1421 + pmc->event_set = MLXBF_PMC_EVENT_SET_BF3; 2031 1422 else 2032 1423 return -ENODEV; 2033 1424 ··· 2043 1430 if (ret != pmc->total_blocks) 2044 1431 return -EFAULT; 2045 1432 
2046 - ret = device_property_read_u32(dev, "tile_num", &pmc->tile_count); 2047 - if (ret) 2048 - return ret; 1433 + if (device_property_read_u32(dev, "tile_num", &pmc->tile_count)) { 1434 + if (device_property_read_u8(dev, "llt_enable", &pmc->llt_enable)) { 1435 + dev_err(dev, "Number of tiles/LLTs undefined\n"); 1436 + return -EINVAL; 1437 + } 1438 + if (device_property_read_u8(dev, "mss_enable", &pmc->mss_enable)) { 1439 + dev_err(dev, "Number of tiles/MSSs undefined\n"); 1440 + return -EINVAL; 1441 + } 1442 + } 2049 1443 2050 1444 pmc->pdev = pdev; 1445 + pmc->group_num = 0; 2051 1446 2052 1447 ret = mlxbf_pmc_map_counters(dev); 2053 1448 if (ret) ··· 2070 1449 2071 1450 static const struct acpi_device_id mlxbf_pmc_acpi_ids[] = { { "MLNXBFD0", 0 }, 2072 1451 { "MLNXBFD1", 0 }, 1452 + { "MLNXBFD2", 0 }, 2073 1453 {}, }; 2074 1454 2075 1455 MODULE_DEVICE_TABLE(acpi, mlxbf_pmc_acpi_ids);
+2 -4
drivers/platform/mellanox/mlxbf-tmfifo.c
···
 }
 
 /* Device remove function. */
-static int mlxbf_tmfifo_remove(struct platform_device *pdev)
+static void mlxbf_tmfifo_remove(struct platform_device *pdev)
 {
 	struct mlxbf_tmfifo *fifo = platform_get_drvdata(pdev);
 
 	mlxbf_tmfifo_cleanup(fifo);
-
-	return 0;
 }
 
 static const struct acpi_device_id mlxbf_tmfifo_acpi_match[] = {
···
 
 static struct platform_driver mlxbf_tmfifo_driver = {
 	.probe = mlxbf_tmfifo_probe,
-	.remove = mlxbf_tmfifo_remove,
+	.remove_new = mlxbf_tmfifo_remove,
 	.driver = {
 		.name = "bf-tmfifo",
 		.acpi_match_table = mlxbf_tmfifo_acpi_match,
+2 -4
drivers/platform/mellanox/mlxreg-hotplug.c
···
 	return 0;
 }
 
-static int mlxreg_hotplug_remove(struct platform_device *pdev)
+static void mlxreg_hotplug_remove(struct platform_device *pdev)
 {
 	struct mlxreg_hotplug_priv_data *priv = dev_get_drvdata(&pdev->dev);
 
 	/* Clean interrupts setup. */
 	mlxreg_hotplug_unset_irq(priv);
 	devm_free_irq(&pdev->dev, priv->irq, priv);
-
-	return 0;
 }
 
 static struct platform_driver mlxreg_hotplug_driver = {
···
 		.name = "mlxreg-hotplug",
 	},
 	.probe = mlxreg_hotplug_probe,
-	.remove = mlxreg_hotplug_remove,
+	.remove_new = mlxreg_hotplug_remove,
 };
 
 module_platform_driver(mlxreg_hotplug_driver);
+2 -4
drivers/platform/mellanox/mlxreg-io.c
···
 	return 0;
 }
 
-static int mlxreg_io_remove(struct platform_device *pdev)
+static void mlxreg_io_remove(struct platform_device *pdev)
 {
 	struct mlxreg_io_priv_data *priv = dev_get_drvdata(&pdev->dev);
 
 	mutex_destroy(&priv->io_lock);
-
-	return 0;
 }
 
 static struct platform_driver mlxreg_io_driver = {
···
 		.name = "mlxreg-io",
 	},
 	.probe = mlxreg_io_probe,
-	.remove = mlxreg_io_remove,
+	.remove_new = mlxreg_io_remove,
 };
 
 module_platform_driver(mlxreg_io_driver);
+3 -5
drivers/platform/mellanox/mlxreg-lc.c
···
 	return err;
 }
 
-static int mlxreg_lc_remove(struct platform_device *pdev)
+static void mlxreg_lc_remove(struct platform_device *pdev)
 {
 	struct mlxreg_core_data *data = dev_get_platdata(&pdev->dev);
 	struct mlxreg_lc *mlxreg_lc = platform_get_drvdata(pdev);
···
 	 * is nothing to remove.
 	 */
 	if (!data->notifier || !data->notifier->handle)
-		return 0;
+		return;
 
 	/* Clear event notification callback and handle. */
 	data->notifier->user_handler = NULL;
···
 		i2c_put_adapter(data->hpdev.adapter);
 		data->hpdev.adapter = NULL;
 	}
-
-	return 0;
 }
 
 static struct platform_driver mlxreg_lc_driver = {
 	.probe = mlxreg_lc_probe,
-	.remove = mlxreg_lc_remove,
+	.remove_new = mlxreg_lc_remove,
 	.driver = {
 		.name = "mlxreg-lc",
 	},
+2 -4
drivers/platform/mellanox/nvsw-sn2201.c
···
 	return nvsw_sn2201_config_pre_init(nvsw_sn2201);
 }
 
-static int nvsw_sn2201_remove(struct platform_device *pdev)
+static void nvsw_sn2201_remove(struct platform_device *pdev)
 {
 	struct nvsw_sn2201 *nvsw_sn2201 = platform_get_drvdata(pdev);
 
···
 	/* Unregister I2C controller. */
 	if (nvsw_sn2201->pdev_i2c)
 		platform_device_unregister(nvsw_sn2201->pdev_i2c);
-
-	return 0;
 }
 
 static const struct acpi_device_id nvsw_sn2201_acpi_ids[] = {
···
 
 static struct platform_driver nvsw_sn2201_driver = {
 	.probe = nvsw_sn2201_probe,
-	.remove = nvsw_sn2201_remove,
+	.remove_new = nvsw_sn2201_remove,
 	.driver = {
 		.name = "nvsw-sn2201",
 		.acpi_match_table = nvsw_sn2201_acpi_ids,
+2 -3
drivers/platform/surface/surface3-wmi.c
···
 	return error;
 }
 
-static int s3_wmi_remove(struct platform_device *device)
+static void s3_wmi_remove(struct platform_device *device)
 {
 	/* remove the hotplug context from the acpi device */
 	s3_wmi.touchscreen_adev->hp = NULL;
 
 	/* reinstall the actual PNPC0C0D LID default handle */
 	acpi_bus_scan(s3_wmi.pnp0c0d_adev->handle);
-	return 0;
 }
 
 static int __maybe_unused s3_wmi_resume(struct device *dev)
···
 		.name = "surface3-wmi",
 		.pm = &s3_wmi_pm,
 	},
-	.remove = s3_wmi_remove,
+	.remove_new = s3_wmi_remove,
 };
 
 static int __init s3_wmi_init(void)
+2 -4
drivers/platform/surface/surface_acpi_notify.c
···
 	return status;
 }
 
-static int san_remove(struct platform_device *pdev)
+static void san_remove(struct platform_device *pdev)
 {
 	acpi_handle san = ACPI_HANDLE(&pdev->dev);
 
···
 	 * all delayed works they may have spawned are run to completion.
 	 */
 	flush_workqueue(san_wq);
-
-	return 0;
 }
 
 static const struct acpi_device_id san_match[] = {
···
 
 static struct platform_driver surface_acpi_notify = {
 	.probe = san_probe,
-	.remove = san_remove,
+	.remove_new = san_remove,
 	.driver = {
 		.name = "surface_acpi_notify",
 		.acpi_match_table = san_match,
+2 -3
drivers/platform/surface/surface_aggregator_cdev.c
···
 	return 0;
 }
 
-static int ssam_dbg_device_remove(struct platform_device *pdev)
+static void ssam_dbg_device_remove(struct platform_device *pdev)
 {
 	struct ssam_cdev *cdev = platform_get_drvdata(pdev);
 	struct ssam_cdev_client *client;
···
 	misc_deregister(&cdev->mdev);
 
 	ssam_cdev_put(cdev);
-	return 0;
 }
 
 static struct platform_device *ssam_cdev_device;
 
 static struct platform_driver ssam_cdev_driver = {
 	.probe = ssam_dbg_device_probe,
-	.remove = ssam_dbg_device_remove,
+	.remove_new = ssam_dbg_device_remove,
 	.driver = {
 		.name = SSAM_CDEV_DEVICE_NAME,
 		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
+2 -3
drivers/platform/surface/surface_aggregator_registry.c
···
 	return status;
 }
 
-static int ssam_platform_hub_remove(struct platform_device *pdev)
+static void ssam_platform_hub_remove(struct platform_device *pdev)
 {
 	const struct software_node **nodes = platform_get_drvdata(pdev);
 
 	ssam_remove_clients(&pdev->dev);
 	set_secondary_fwnode(&pdev->dev, NULL);
 	software_node_unregister_node_group(nodes);
-	return 0;
 }
 
 static struct platform_driver ssam_platform_hub_driver = {
 	.probe = ssam_platform_hub_probe,
-	.remove = ssam_platform_hub_remove,
+	.remove_new = ssam_platform_hub_remove,
 	.driver = {
 		.name = "surface_aggregator_platform_hub",
 		.acpi_match_table = ssam_platform_hub_match,
+2 -3
drivers/platform/surface/surface_dtx.c
···
 	return 0;
 }
 
-static int surface_dtx_platform_remove(struct platform_device *pdev)
+static void surface_dtx_platform_remove(struct platform_device *pdev)
 {
 	sdtx_device_destroy(platform_get_drvdata(pdev));
-	return 0;
 }
 
 static const struct acpi_device_id surface_dtx_acpi_match[] = {
···
 
 static struct platform_driver surface_dtx_platform_driver = {
 	.probe = surface_dtx_platform_probe,
-	.remove = surface_dtx_platform_remove,
+	.remove_new = surface_dtx_platform_remove,
 	.driver = {
 		.name = "surface_dtx_pltf",
 		.acpi_match_table = surface_dtx_acpi_match,
+2 -4
drivers/platform/surface/surface_gpe.c
···
 	return ret;
 }
 
-static int surface_gpe_remove(struct platform_device *pdev)
+static void surface_gpe_remove(struct platform_device *pdev)
 {
 	struct surface_lid_device *lid = dev_get_drvdata(&pdev->dev);
 
 	/* restore default behavior without this module */
 	surface_lid_enable_wakeup(&pdev->dev, false);
 	acpi_disable_gpe(NULL, lid->gpe_number);
-
-	return 0;
 }
 
 static struct platform_driver surface_gpe_driver = {
 	.probe = surface_gpe_probe,
-	.remove = surface_gpe_remove,
+	.remove_new = surface_gpe_remove,
 	.driver = {
 		.name = "surface_gpe",
 		.pm = &surface_gpe_pm,
+2 -4
drivers/platform/surface/surface_hotplug.c
···
 	return 0;
 }
 
-static int surface_hotplug_remove(struct platform_device *pdev)
+static void surface_hotplug_remove(struct platform_device *pdev)
 {
 	struct shps_device *sdev = platform_get_drvdata(pdev);
 	int i;
···
 
 		mutex_destroy(&sdev->lock[i]);
 	}
-
-	return 0;
 }
 
 static int surface_hotplug_probe(struct platform_device *pdev)
···
 
 static struct platform_driver surface_hotplug_driver = {
 	.probe = surface_hotplug_probe,
-	.remove = surface_hotplug_remove,
+	.remove_new = surface_hotplug_remove,
 	.driver = {
 		.name = "surface_hotplug",
 		.acpi_match_table = surface_hotplug_acpi_match,
+11
drivers/platform/x86/Kconfig
···
 	  the OS-image for the device. This option supplies the missing info.
 	  Enable this for x86 tablets with Silead or Chipone touchscreens.
 
+config INSPUR_PLATFORM_PROFILE
+	tristate "Inspur WMI platform profile driver"
+	depends on ACPI_WMI
+	select ACPI_PLATFORM_PROFILE
+	help
+	  This will allow users to determine and control the platform modes
+	  between low-power, balanced and performance modes.
+
+	  To compile this driver as a module, choose M here: the module
+	  will be called inspur-platform-profile.
+
 source "drivers/platform/x86/x86-android-tablets/Kconfig"
 
 config FW_ATTR_CLASS
+3
drivers/platform/x86/Makefile
···
 # before toshiba_acpi initializes
 obj-$(CONFIG_ACPI_TOSHIBA)	+= toshiba_acpi.o
 
+# Inspur
+obj-$(CONFIG_INSPUR_PLATFORM_PROFILE)	+= inspur_platform_profile.o
+
 # Laptop drivers
 obj-$(CONFIG_ACPI_CMPC)		+= classmate-laptop.o
 obj-$(CONFIG_COMPAL_LAPTOP)	+= compal-laptop.o
-2
drivers/platform/x86/acer-wmi.c
···
 		rfkill_unregister(threeg_rfkill);
 		rfkill_destroy(threeg_rfkill);
 	}
-	return;
 }
 
 static void acer_wmi_notify(u32 value, void *context)
···
 	platform_driver_unregister(&acer_platform_driver);
 
 	pr_info("Acer Laptop WMI Extras unloaded\n");
-	return;
 }
 
 module_init(acer_wmi_init);
+214 -29
drivers/platform/x86/amd/hsmp.c
···
 #include <linux/semaphore.h>
 
 #define DRIVER_NAME		"amd_hsmp"
-#define DRIVER_VERSION		"1.0"
+#define DRIVER_VERSION		"2.0"
 
 /* HSMP Status / Error codes */
 #define HSMP_STATUS_NOT_READY	0x00
···
 #define HSMP_INDEX_REG		0xc4
 #define HSMP_DATA_REG		0xc8
 
-static struct semaphore *hsmp_sem;
+#define HSMP_CDEV_NAME		"hsmp_cdev"
+#define HSMP_DEVNODE_NAME	"hsmp"
+#define HSMP_METRICS_TABLE_NAME	"metrics_bin"
 
-static struct miscdevice hsmp_device;
+#define HSMP_ATTR_GRP_NAME_SIZE	10
+
+struct hsmp_socket {
+	struct bin_attribute hsmp_attr;
+	void __iomem *metric_tbl_addr;
+	struct semaphore hsmp_sem;
+	char name[HSMP_ATTR_GRP_NAME_SIZE];
+	u16 sock_ind;
+};
+
+struct hsmp_plat_device {
+	struct miscdevice hsmp_device;
+	struct hsmp_socket *sock;
+	struct device *dev;
+	u32 proto_ver;
+	u16 num_sockets;
+};
+
+static struct hsmp_plat_device plat_dev;
 
 static int amd_hsmp_rdwr(struct pci_dev *root, u32 address,
 			 u32 *value, bool write)
···
 
 int hsmp_send_message(struct hsmp_message *msg)
 {
+	struct hsmp_socket *sock = &plat_dev.sock[msg->sock_ind];
 	struct amd_northbridge *nb;
 	int ret;
 
···
 	 * In SMP system timeout of 100 millisecs should
 	 * be enough for the previous thread to finish the operation
 	 */
-	ret = down_timeout(&hsmp_sem[msg->sock_ind],
-			   msecs_to_jiffies(HSMP_MSG_TIMEOUT));
+	ret = down_timeout(&sock->hsmp_sem, msecs_to_jiffies(HSMP_MSG_TIMEOUT));
 	if (ret < 0)
 		return ret;
 
 	ret = __hsmp_send_message(nb->root, msg);
 
-	up(&hsmp_sem[msg->sock_ind]);
+	up(&sock->hsmp_sem);
 
 	return ret;
 }
···
 	.compat_ioctl = hsmp_ioctl,
 };
 
+static ssize_t hsmp_metric_tbl_read(struct file *filp, struct kobject *kobj,
+				    struct bin_attribute *bin_attr, char *buf,
+				    loff_t off, size_t count)
+{
+	struct hsmp_socket *sock = bin_attr->private;
+	struct hsmp_message msg = { 0 };
+	int ret;
+
+	/* Do not support lseek(), reads entire metric table */
+	if (count < bin_attr->size) {
+		dev_err(plat_dev.dev, "Wrong buffer size\n");
+		return -EINVAL;
+	}
+
+	if (!sock) {
+		dev_err(plat_dev.dev, "Failed to read attribute private data\n");
+		return -EINVAL;
+	}
+
+	msg.msg_id = HSMP_GET_METRIC_TABLE;
+	msg.sock_ind = sock->sock_ind;
+
+	ret = hsmp_send_message(&msg);
+	if (ret)
+		return ret;
+	memcpy_fromio(buf, sock->metric_tbl_addr, bin_attr->size);
+
+	return bin_attr->size;
+}
+
+static int hsmp_get_tbl_dram_base(u16 sock_ind)
+{
+	struct hsmp_socket *sock = &plat_dev.sock[sock_ind];
+	struct hsmp_message msg = { 0 };
+	phys_addr_t dram_addr;
+	int ret;
+
+	msg.sock_ind = sock_ind;
+	msg.response_sz = hsmp_msg_desc_table[HSMP_GET_METRIC_TABLE_DRAM_ADDR].response_sz;
+	msg.msg_id = HSMP_GET_METRIC_TABLE_DRAM_ADDR;
+
+	ret = hsmp_send_message(&msg);
+	if (ret)
+		return ret;
+
+	/*
+	 * calculate the metric table DRAM address from lower and upper 32 bits
+	 * sent from SMU and ioremap it to virtual address.
+	 */
+	dram_addr = msg.args[0] | ((u64)(msg.args[1]) << 32);
+	if (!dram_addr) {
+		dev_err(plat_dev.dev, "Invalid DRAM address for metric table\n");
+		return -ENOMEM;
+	}
+	sock->metric_tbl_addr = devm_ioremap(plat_dev.dev, dram_addr,
+					     sizeof(struct hsmp_metric_table));
+	if (!sock->metric_tbl_addr) {
+		dev_err(plat_dev.dev, "Failed to ioremap metric table addr\n");
+		return -ENOMEM;
+	}
+	return 0;
+}
+
+static umode_t hsmp_is_sock_attr_visible(struct kobject *kobj,
+					 struct bin_attribute *battr, int id)
+{
+	if (plat_dev.proto_ver == HSMP_PROTO_VER6)
+		return battr->attr.mode;
+	else
+		return 0;
+}
+
+static int hsmp_init_metric_tbl_bin_attr(struct bin_attribute **hattrs, u16 sock_ind)
+{
+	struct bin_attribute *hattr = &plat_dev.sock[sock_ind].hsmp_attr;
+
+	sysfs_bin_attr_init(hattr);
+	hattr->attr.name	= HSMP_METRICS_TABLE_NAME;
+	hattr->attr.mode	= 0444;
+	hattr->read		= hsmp_metric_tbl_read;
+	hattr->size		= sizeof(struct hsmp_metric_table);
+	hattr->private		= &plat_dev.sock[sock_ind];
+	hattrs[0]		= hattr;
+
+	if (plat_dev.proto_ver == HSMP_PROTO_VER6)
+		return (hsmp_get_tbl_dram_base(sock_ind));
+	else
+		return 0;
+}
+
+/* One bin sysfs for metrics table */
+#define NUM_HSMP_ATTRS		1
+
+static int hsmp_create_sysfs_interface(void)
+{
+	const struct attribute_group **hsmp_attr_grps;
+	struct bin_attribute **hsmp_bin_attrs;
+	struct attribute_group *attr_grp;
+	int ret;
+	u16 i;
+
+	/* String formatting is currently limited to u8 sockets */
+	if (WARN_ON(plat_dev.num_sockets > U8_MAX))
+		return -ERANGE;
+
+	hsmp_attr_grps = devm_kzalloc(plat_dev.dev, sizeof(struct attribute_group *) *
+				      (plat_dev.num_sockets + 1), GFP_KERNEL);
+	if (!hsmp_attr_grps)
+		return -ENOMEM;
+
+	/* Create a sysfs directory for each socket */
+	for (i = 0; i < plat_dev.num_sockets; i++) {
+		attr_grp = devm_kzalloc(plat_dev.dev, sizeof(struct attribute_group), GFP_KERNEL);
+		if (!attr_grp)
+			return -ENOMEM;
+
+		snprintf(plat_dev.sock[i].name, HSMP_ATTR_GRP_NAME_SIZE, "socket%u", (u8)i);
+		attr_grp->name = plat_dev.sock[i].name;
+
+		/* Null terminated list of attributes */
+		hsmp_bin_attrs = devm_kzalloc(plat_dev.dev, sizeof(struct bin_attribute *) *
+					      (NUM_HSMP_ATTRS + 1), GFP_KERNEL);
+		if (!hsmp_bin_attrs)
+			return -ENOMEM;
+
+		attr_grp->bin_attrs = hsmp_bin_attrs;
+		attr_grp->is_bin_visible = hsmp_is_sock_attr_visible;
+		hsmp_attr_grps[i] = attr_grp;
+
+		/* Now create the leaf nodes */
+		ret = hsmp_init_metric_tbl_bin_attr(hsmp_bin_attrs, i);
+		if (ret)
+			return ret;
+	}
+	return devm_device_add_groups(plat_dev.dev, hsmp_attr_grps);
+}
+
+static int hsmp_cache_proto_ver(void)
+{
+	struct hsmp_message msg = { 0 };
+	int ret;
+
+	msg.msg_id = HSMP_GET_PROTO_VER;
+	msg.sock_ind = 0;
+	msg.response_sz = hsmp_msg_desc_table[HSMP_GET_PROTO_VER].response_sz;
+
+	ret = hsmp_send_message(&msg);
+	if (!ret)
+		plat_dev.proto_ver = msg.args[0];
+
+	return ret;
+}
+
 static int hsmp_pltdrv_probe(struct platform_device *pdev)
 {
-	int i;
+	int ret, i;
 
-	hsmp_sem = devm_kzalloc(&pdev->dev,
-				(amd_nb_num() * sizeof(struct semaphore)),
-				GFP_KERNEL);
-	if (!hsmp_sem)
+	plat_dev.sock = devm_kzalloc(&pdev->dev,
+				     (plat_dev.num_sockets * sizeof(struct hsmp_socket)),
+				     GFP_KERNEL);
+	if (!plat_dev.sock)
 		return -ENOMEM;
+	plat_dev.dev = &pdev->dev;
 
-	for (i = 0; i < amd_nb_num(); i++)
-		sema_init(&hsmp_sem[i], 1);
+	for (i = 0; i < plat_dev.num_sockets; i++) {
+		sema_init(&plat_dev.sock[i].hsmp_sem, 1);
+		plat_dev.sock[i].sock_ind = i;
+	}
 
-	hsmp_device.name	= "hsmp_cdev";
-	hsmp_device.minor	= MISC_DYNAMIC_MINOR;
-	hsmp_device.fops	= &hsmp_fops;
-	hsmp_device.parent	= &pdev->dev;
-	hsmp_device.nodename	= "hsmp";
-	hsmp_device.mode	= 0644;
+	plat_dev.hsmp_device.name	= HSMP_CDEV_NAME;
+	plat_dev.hsmp_device.minor	= MISC_DYNAMIC_MINOR;
+	plat_dev.hsmp_device.fops	= &hsmp_fops;
+	plat_dev.hsmp_device.parent	= &pdev->dev;
+	plat_dev.hsmp_device.nodename	= HSMP_DEVNODE_NAME;
+	plat_dev.hsmp_device.mode	= 0644;
 
-	return misc_register(&hsmp_device);
+	ret = hsmp_cache_proto_ver();
+	if (ret) {
+		dev_err(plat_dev.dev, "Failed to read HSMP protocol version\n");
+		return ret;
+	}
+
+	ret = hsmp_create_sysfs_interface();
+	if (ret)
+		dev_err(plat_dev.dev, "Failed to create HSMP sysfs interface\n");
+
+	return misc_register(&plat_dev.hsmp_device);
 }
 
 static void hsmp_pltdrv_remove(struct platform_device *pdev)
 {
-	misc_deregister(&hsmp_device);
+	misc_deregister(&plat_dev.hsmp_device);
 }
 
 static struct platform_driver amd_hsmp_driver = {
···
 static int __init hsmp_plt_init(void)
 {
 	int ret = -ENODEV;
-	u16 num_sockets;
 	int i;
 
 	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD || boot_cpu_data.x86 < 0x19) {
···
 	 * amd_nb_num() returns number of SMN/DF interfaces present in the system
 	 * if we have N SMN/DF interfaces that ideally means N sockets
 	 */
-	num_sockets = amd_nb_num();
-	if (num_sockets == 0)
+	plat_dev.num_sockets = amd_nb_num();
+	if (plat_dev.num_sockets == 0)
 		return ret;
 
 	/* Test the hsmp interface on each socket */
-	for (i = 0; i < num_sockets; i++) {
+	for (i = 0; i < plat_dev.num_sockets; i++) {
 		ret = hsmp_test(i, 0xDEADBEEF);
 		if (ret) {
-			pr_err("HSMP is not supported on Fam:%x model:%x\n",
+			pr_err("HSMP test message failed on Fam:%x model:%x\n",
 			       boot_cpu_data.x86, boot_cpu_data.x86_model);
-			pr_err("Or Is HSMP disabled in BIOS ?\n");
-			return -EOPNOTSUPP;
+			pr_err("Is HSMP disabled in BIOS ?\n");
+			return ret;
 		}
 	}
+70 -18
drivers/platform/x86/amd/pmc/pmc.c
··· 52 52 #define AMD_S2D_REGISTER_ARGUMENT 0xA88 53 53 54 54 /* STB Spill to DRAM Parameters */ 55 - #define S2D_TELEMETRY_BYTES_MAX 0x100000 55 + #define S2D_TELEMETRY_BYTES_MAX 0x100000U 56 + #define S2D_RSVD_RAM_SPACE 0x100000 56 57 #define S2D_TELEMETRY_DRAMBYTES_MAX 0x1000000 58 + 59 + /* STB Spill to DRAM Message Definition */ 60 + #define STB_FORCE_FLUSH_DATA 0xCF 57 61 58 62 /* Base address of SMU for mapping physical address to virtual address */ 59 63 #define AMD_PMC_MAPPING_SIZE 0x01000 ··· 123 119 S2D_DRAM_SIZE, 124 120 }; 125 121 122 + struct amd_pmc_stb_v2_data { 123 + size_t size; 124 + u8 data[] __counted_by(size); 125 + }; 126 + 126 127 struct amd_pmc_bit_map { 127 128 const char *name; 128 129 u32 bit_mask; ··· 165 156 static bool disable_workarounds; 166 157 module_param(disable_workarounds, bool, 0644); 167 158 MODULE_PARM_DESC(disable_workarounds, "Disable workarounds for platform bugs"); 159 + 160 + static bool dump_custom_stb; 161 + module_param(dump_custom_stb, bool, 0644); 162 + MODULE_PARM_DESC(dump_custom_stb, "Enable to dump full STB buffer"); 168 163 169 164 static struct amd_pmc_dev pmc; 170 165 static int amd_pmc_send_cmd(struct amd_pmc_dev *dev, u32 arg, u32 *data, u8 msg, bool ret); ··· 246 233 .release = amd_pmc_stb_debugfs_release, 247 234 }; 248 235 236 + /* Enhanced STB Firmware Reporting Mechanism */ 237 + static int amd_pmc_stb_handle_efr(struct file *filp) 238 + { 239 + struct amd_pmc_dev *dev = filp->f_inode->i_private; 240 + struct amd_pmc_stb_v2_data *stb_data_arr; 241 + u32 fsize; 242 + 243 + fsize = dev->dram_size - S2D_RSVD_RAM_SPACE; 244 + stb_data_arr = kmalloc(struct_size(stb_data_arr, data, fsize), GFP_KERNEL); 245 + if (!stb_data_arr) 246 + return -ENOMEM; 247 + 248 + stb_data_arr->size = fsize; 249 + memcpy_fromio(stb_data_arr->data, dev->stb_virt_addr, fsize); 250 + filp->private_data = stb_data_arr; 251 + 252 + return 0; 253 + } 254 + 249 255 static int amd_pmc_stb_debugfs_open_v2(struct inode *inode, struct 
file *filp) 250 256 { 251 257 struct amd_pmc_dev *dev = filp->f_inode->i_private; 252 - u32 *buf, fsize, num_samples, stb_rdptr_offset = 0; 258 + u32 fsize, num_samples, val, stb_rdptr_offset = 0; 259 + struct amd_pmc_stb_v2_data *stb_data_arr; 253 260 int ret; 254 261 255 262 /* Write dummy postcode while reading the STB buffer */ ··· 277 244 if (ret) 278 245 dev_err(dev->dev, "error writing to STB: %d\n", ret); 279 246 280 - buf = kzalloc(S2D_TELEMETRY_BYTES_MAX, GFP_KERNEL); 281 - if (!buf) 282 - return -ENOMEM; 283 - 284 247 /* Spill to DRAM num_samples uses separate SMU message port */ 285 248 dev->msg_port = 1; 249 + 250 + ret = amd_pmc_send_cmd(dev, 0, &val, STB_FORCE_FLUSH_DATA, 1); 251 + if (ret) 252 + dev_dbg_once(dev->dev, "S2D force flush not supported: %d\n", ret); 253 + 254 + /* 255 + * We have a custom stb size and the PMFW is supposed to give 256 + * the enhanced dram size. Note that we land here only for the 257 + * platforms that support enhanced dram size reporting. 258 + */ 259 + if (dump_custom_stb) 260 + return amd_pmc_stb_handle_efr(filp); 286 261 287 262 /* Get the num_samples to calculate the last push location */ 288 263 ret = amd_pmc_send_cmd(dev, S2D_NUM_SAMPLES, &num_samples, dev->s2d_msg_id, true); ··· 298 257 dev->msg_port = 0; 299 258 if (ret) { 300 259 dev_err(dev->dev, "error: S2D_NUM_SAMPLES not supported : %d\n", ret); 301 - kfree(buf); 302 260 return ret; 303 261 } 304 262 305 - /* Start capturing data from the last push location */ 263 + fsize = min(num_samples, S2D_TELEMETRY_BYTES_MAX); 264 + stb_data_arr = kmalloc(struct_size(stb_data_arr, data, fsize), GFP_KERNEL); 265 + if (!stb_data_arr) 266 + return -ENOMEM; 267 + 268 + stb_data_arr->size = fsize; 269 + 270 + /* 271 + * Start capturing data from the last push location. 272 + * This is for general cases, where the stb limits 273 + * are meant for standard usage. 
274 + */ 306 275 if (num_samples > S2D_TELEMETRY_BYTES_MAX) { 307 - fsize = S2D_TELEMETRY_BYTES_MAX; 308 - stb_rdptr_offset = num_samples - fsize; 276 + /* First read oldest data starting 1 behind last write till end of ringbuffer */ 277 + stb_rdptr_offset = num_samples % S2D_TELEMETRY_BYTES_MAX; 278 + fsize = S2D_TELEMETRY_BYTES_MAX - stb_rdptr_offset; 279 + 280 + memcpy_fromio(stb_data_arr->data, dev->stb_virt_addr + stb_rdptr_offset, fsize); 281 + /* Second copy the newer samples from offset 0 - last write */ 282 + memcpy_fromio(stb_data_arr->data + fsize, dev->stb_virt_addr, stb_rdptr_offset); 309 283 } else { 310 - fsize = num_samples; 311 - stb_rdptr_offset = 0; 284 + memcpy_fromio(stb_data_arr->data, dev->stb_virt_addr, fsize); 312 285 } 313 286 314 - memcpy_fromio(buf, dev->stb_virt_addr + stb_rdptr_offset, fsize); 315 - filp->private_data = buf; 287 + filp->private_data = stb_data_arr; 316 288 317 289 return 0; 318 290 } ··· 333 279 static ssize_t amd_pmc_stb_debugfs_read_v2(struct file *filp, char __user *buf, size_t size, 334 280 loff_t *pos) 335 281 { 336 - if (!filp->private_data) 337 - return -EINVAL; 282 + struct amd_pmc_stb_v2_data *data = filp->private_data; 338 283 339 - return simple_read_from_buffer(buf, size, pos, filp->private_data, 340 - S2D_TELEMETRY_BYTES_MAX); 284 + return simple_read_from_buffer(buf, size, pos, data->data, data->size); 341 285 } 342 286 343 287 static int amd_pmc_stb_debugfs_release_v2(struct inode *inode, struct file *filp)
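The pmc.c hunk replaces the single tail-copy with a proper two-part ring-buffer read: when the sample count exceeds the telemetry region size, the oldest data sits just past the write position, so the driver copies from the read pointer to the end of the region first, then wraps to copy the newest samples from offset 0. A userspace model of that reassembly, with `memcpy` standing in for `memcpy_fromio` and a tiny `STB_MAX` standing in for `S2D_TELEMETRY_BYTES_MAX`:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define STB_MAX 8  /* stand-in for S2D_TELEMETRY_BYTES_MAX */

/* Linearize the spill-to-DRAM ring into chronological order.
 * num_samples is the running count of bytes ever pushed; the ring
 * holds only the most recent STB_MAX of them. */
static size_t stb_linearize(const uint8_t *ring, size_t num_samples, uint8_t *out)
{
	if (num_samples <= STB_MAX) {
		memcpy(out, ring, num_samples);     /* ring never wrapped */
		return num_samples;
	}

	size_t rdptr = num_samples % STB_MAX;       /* oldest byte lives here */
	memcpy(out, ring + rdptr, STB_MAX - rdptr); /* oldest .. end of ring */
	memcpy(out + STB_MAX - rdptr, ring, rdptr); /* wrap: start .. newest */
	return STB_MAX;
}
```

With 10 writes of the values 0..9 into an 8-byte ring, positions 0 and 1 hold the newest bytes (8 and 9), and the linearized output comes back as 2..9.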
+16
drivers/platform/x86/asus-nb-wmi.c
··· 16 16 #include <linux/dmi.h> 17 17 #include <linux/i8042.h> 18 18 19 + #include <acpi/video.h> 20 + 19 21 #include "asus-wmi.h" 20 22 21 23 #define ASUS_NB_WMI_FILE "asus-nb-wmi" ··· 608 606 { KE_END, 0}, 609 607 }; 610 608 609 + static void asus_nb_wmi_key_filter(struct asus_wmi_driver *asus_wmi, int *code, 610 + unsigned int *value, bool *autorelease) 611 + { 612 + switch (*code) { 613 + case ASUS_WMI_BRN_DOWN: 614 + case ASUS_WMI_BRN_UP: 615 + if (acpi_video_handles_brightness_key_presses()) 616 + *code = ASUS_WMI_KEY_IGNORE; 617 + 618 + break; 619 + } 620 + } 621 + 611 622 static struct asus_wmi_driver asus_nb_wmi_driver = { 612 623 .name = ASUS_NB_WMI_FILE, 613 624 .owner = THIS_MODULE, ··· 629 614 .input_name = "Asus WMI hotkeys", 630 615 .input_phys = ASUS_NB_WMI_FILE "/input0", 631 616 .detect_quirks = asus_nb_wmi_quirks, 617 + .key_filter = asus_nb_wmi_key_filter, 632 618 }; 633 619 634 620
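The asus-nb-wmi hunk registers a `key_filter` callback so brightness key presses are dropped whenever `acpi_video` already reports them, preventing double events. A standalone sketch of that decision (the key codes and sentinel value here are illustrative, not necessarily the driver's constants):

```c
#include <assert.h>
#include <stdbool.h>

enum { BRN_DOWN = 0x20, BRN_UP = 0x2f, KEY_IGNORE = -1 }; /* illustrative codes */

/* Return the (possibly rewritten) key code: brightness keys are swallowed
 * when another subsystem (acpi_video in the driver) already emits them. */
static int filter_brightness_key(int code, bool acpi_video_handles_keys)
{
	switch (code) {
	case BRN_DOWN:
	case BRN_UP:
		if (acpi_video_handles_keys)
			return KEY_IGNORE;
		break;
	}
	return code;
}
```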
+4 -8
drivers/platform/x86/asus-wireless.c
··· 148 148 if (err) 149 149 return err; 150 150 151 - for (id = device_ids; id->id[0]; id++) { 152 - if (!strcmp((char *) id->id, acpi_device_hid(adev))) { 153 - data->hswc_params = 154 - (const struct hswc_params *)id->driver_data; 155 - break; 156 - } 157 - } 158 - if (!data->hswc_params) 151 + id = acpi_match_acpi_device(device_ids, adev); 152 + if (!id) 159 153 return 0; 154 + 155 + data->hswc_params = (const struct hswc_params *)id->driver_data; 160 156 161 157 data->wq = create_singlethread_workqueue("asus_wireless_workqueue"); 162 158 if (!data->wq)
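The asus-wireless change swaps a hand-rolled loop over the ACPI ID table for `acpi_match_acpi_device()`. The pattern being replaced — walk a NULL-terminated ID table, return the matching entry so the caller can pull `driver_data` from it — can be modeled like this (types are simplified stand-ins for `struct acpi_device_id`; the HID string is hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct dev_id {
	const char *id;            /* hardware ID string */
	const void *driver_data;   /* per-device parameter block */
};

/* Model of what acpi_match_acpi_device() does for the caller: find the
 * table entry whose ID matches, or NULL if none does. */
static const struct dev_id *match_device(const struct dev_id *ids, const char *hid)
{
	for (; ids->id; ids++)
		if (!strcmp(ids->id, hid))
			return ids;
	return NULL;
}
```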
+133
drivers/platform/x86/asus-wmi.c
··· 25 25 #include <linux/input/sparse-keymap.h> 26 26 #include <linux/kernel.h> 27 27 #include <linux/leds.h> 28 + #include <linux/minmax.h> 28 29 #include <linux/module.h> 29 30 #include <linux/pci.h> 30 31 #include <linux/pci_hotplug.h> ··· 128 127 #define NVIDIA_TEMP_MIN 75 129 128 #define NVIDIA_TEMP_MAX 87 130 129 130 + #define ASUS_SCREENPAD_BRIGHT_MIN 20 131 + #define ASUS_SCREENPAD_BRIGHT_MAX 255 132 + #define ASUS_SCREENPAD_BRIGHT_DEFAULT 60 133 + 131 134 static const char * const ashs_ids[] = { "ATK4001", "ATK4002", NULL }; 132 135 133 136 static int throttle_thermal_policy_write(struct asus_wmi *); ··· 217 212 218 213 struct input_dev *inputdev; 219 214 struct backlight_device *backlight_device; 215 + struct backlight_device *screenpad_backlight_device; 220 216 struct platform_device *platform_device; 221 217 222 218 struct led_classdev wlan_led; ··· 3782 3776 return 0; 3783 3777 } 3784 3778 3779 + /* Screenpad backlight *******************************************************/ 3780 + 3781 + static int read_screenpad_backlight_power(struct asus_wmi *asus) 3782 + { 3783 + int ret; 3784 + 3785 + ret = asus_wmi_get_devstate_simple(asus, ASUS_WMI_DEVID_SCREENPAD_POWER); 3786 + if (ret < 0) 3787 + return ret; 3788 + /* 1 == powered */ 3789 + return ret ? 
FB_BLANK_UNBLANK : FB_BLANK_POWERDOWN; 3790 + } 3791 + 3792 + static int read_screenpad_brightness(struct backlight_device *bd) 3793 + { 3794 + struct asus_wmi *asus = bl_get_data(bd); 3795 + u32 retval; 3796 + int err; 3797 + 3798 + err = read_screenpad_backlight_power(asus); 3799 + if (err < 0) 3800 + return err; 3801 + /* The device brightness can only be read if powered, so return stored */ 3802 + if (err == FB_BLANK_POWERDOWN) 3803 + return asus->driver->screenpad_brightness - ASUS_SCREENPAD_BRIGHT_MIN; 3804 + 3805 + err = asus_wmi_get_devstate(asus, ASUS_WMI_DEVID_SCREENPAD_LIGHT, &retval); 3806 + if (err < 0) 3807 + return err; 3808 + 3809 + return (retval & ASUS_WMI_DSTS_BRIGHTNESS_MASK) - ASUS_SCREENPAD_BRIGHT_MIN; 3810 + } 3811 + 3812 + static int update_screenpad_bl_status(struct backlight_device *bd) 3813 + { 3814 + struct asus_wmi *asus = bl_get_data(bd); 3815 + int power, err = 0; 3816 + u32 ctrl_param; 3817 + 3818 + power = read_screenpad_backlight_power(asus); 3819 + if (power < 0) 3820 + return power; 3821 + 3822 + if (bd->props.power != power) { 3823 + if (power != FB_BLANK_UNBLANK) { 3824 + /* Only brightness > 0 can power it back on */ 3825 + ctrl_param = asus->driver->screenpad_brightness - ASUS_SCREENPAD_BRIGHT_MIN; 3826 + err = asus_wmi_set_devstate(ASUS_WMI_DEVID_SCREENPAD_LIGHT, 3827 + ctrl_param, NULL); 3828 + } else { 3829 + err = asus_wmi_set_devstate(ASUS_WMI_DEVID_SCREENPAD_POWER, 0, NULL); 3830 + } 3831 + } else if (power == FB_BLANK_UNBLANK) { 3832 + /* Only set brightness if powered on or we get invalid/unsync state */ 3833 + ctrl_param = bd->props.brightness + ASUS_SCREENPAD_BRIGHT_MIN; 3834 + err = asus_wmi_set_devstate(ASUS_WMI_DEVID_SCREENPAD_LIGHT, ctrl_param, NULL); 3835 + } 3836 + 3837 + /* Ensure brightness is stored to turn back on with */ 3838 + if (err == 0) 3839 + asus->driver->screenpad_brightness = bd->props.brightness + ASUS_SCREENPAD_BRIGHT_MIN; 3840 + 3841 + return err; 3842 + } 3843 + 3844 + static const struct 
backlight_ops asus_screenpad_bl_ops = { 3845 + .get_brightness = read_screenpad_brightness, 3846 + .update_status = update_screenpad_bl_status, 3847 + .options = BL_CORE_SUSPENDRESUME, 3848 + }; 3849 + 3850 + static int asus_screenpad_init(struct asus_wmi *asus) 3851 + { 3852 + struct backlight_device *bd; 3853 + struct backlight_properties props; 3854 + int err, power; 3855 + int brightness = 0; 3856 + 3857 + power = read_screenpad_backlight_power(asus); 3858 + if (power < 0) 3859 + return power; 3860 + 3861 + if (power != FB_BLANK_POWERDOWN) { 3862 + err = asus_wmi_get_devstate(asus, ASUS_WMI_DEVID_SCREENPAD_LIGHT, &brightness); 3863 + if (err < 0) 3864 + return err; 3865 + } 3866 + /* default to an acceptable min brightness on boot if too low */ 3867 + if (brightness < ASUS_SCREENPAD_BRIGHT_MIN) 3868 + brightness = ASUS_SCREENPAD_BRIGHT_DEFAULT; 3869 + 3870 + memset(&props, 0, sizeof(struct backlight_properties)); 3871 + props.type = BACKLIGHT_RAW; /* ensure this bd is last to be picked */ 3872 + props.max_brightness = ASUS_SCREENPAD_BRIGHT_MAX - ASUS_SCREENPAD_BRIGHT_MIN; 3873 + bd = backlight_device_register("asus_screenpad", 3874 + &asus->platform_device->dev, asus, 3875 + &asus_screenpad_bl_ops, &props); 3876 + if (IS_ERR(bd)) { 3877 + pr_err("Could not register backlight device\n"); 3878 + return PTR_ERR(bd); 3879 + } 3880 + 3881 + asus->screenpad_backlight_device = bd; 3882 + asus->driver->screenpad_brightness = brightness; 3883 + bd->props.brightness = brightness - ASUS_SCREENPAD_BRIGHT_MIN; 3884 + bd->props.power = power; 3885 + backlight_update_status(bd); 3886 + 3887 + return 0; 3888 + } 3889 + 3890 + static void asus_screenpad_exit(struct asus_wmi *asus) 3891 + { 3892 + backlight_device_unregister(asus->screenpad_backlight_device); 3893 + 3894 + asus->screenpad_backlight_device = NULL; 3895 + } 3896 + 3785 3897 /* Fn-lock ********************************************************************/ 3786 3898 3787 3899 static bool 
asus_wmi_has_fnlock_key(struct asus_wmi *asus) ··· 4548 4424 } else if (asus->driver->quirks->wmi_backlight_set_devstate) 4549 4425 err = asus_wmi_set_devstate(ASUS_WMI_DEVID_BACKLIGHT, 2, NULL); 4550 4426 4427 + if (asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_SCREENPAD_LIGHT)) { 4428 + err = asus_screenpad_init(asus); 4429 + if (err && err != -ENODEV) 4430 + goto fail_screenpad; 4431 + } 4432 + 4551 4433 if (asus_wmi_has_fnlock_key(asus)) { 4552 4434 asus->fnlock_locked = fnlock_default; 4553 4435 asus_wmi_fnlock_update(asus); ··· 4577 4447 asus_wmi_backlight_exit(asus); 4578 4448 fail_backlight: 4579 4449 asus_wmi_rfkill_exit(asus); 4450 + fail_screenpad: 4451 + asus_screenpad_exit(asus); 4580 4452 fail_rfkill: 4581 4453 asus_wmi_led_exit(asus); 4582 4454 fail_leds: ··· 4605 4473 asus = platform_get_drvdata(device); 4606 4474 wmi_remove_notify_handler(asus->driver->event_guid); 4607 4475 asus_wmi_backlight_exit(asus); 4476 + asus_screenpad_exit(asus); 4608 4477 asus_wmi_input_exit(asus); 4609 4478 asus_wmi_led_exit(asus); 4610 4479 asus_wmi_rfkill_exit(asus);
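The screenpad code keeps hardware brightness in the range [20, 255] but exposes a 0-based range to the backlight class, and snaps to a visible default at probe if the stored value is below the hardware minimum. Small helpers modeling that offset mapping (constants copied from the hunk):

```c
#include <assert.h>

#define BRIGHT_MIN     20   /* ASUS_SCREENPAD_BRIGHT_MIN */
#define BRIGHT_MAX     255  /* ASUS_SCREENPAD_BRIGHT_MAX */
#define BRIGHT_DEFAULT 60   /* ASUS_SCREENPAD_BRIGHT_DEFAULT */

/* backlight sysfs exposes 0..(MAX-MIN); the WMI register takes MIN..MAX */
static int hw_to_bl(int hw) { return hw - BRIGHT_MIN; }
static int bl_to_hw(int bl) { return bl + BRIGHT_MIN; }

/* On init, an off or too-dim panel gets an acceptable default instead */
static int boot_brightness(int hw)
{
	return hw < BRIGHT_MIN ? BRIGHT_DEFAULT : hw;
}
```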
+1
drivers/platform/x86/asus-wmi.h
··· 57 57 struct asus_wmi_driver { 58 58 int brightness; 59 59 int panel_power; 60 + int screenpad_brightness; 60 61 int wlan_ctrl_by_user; 61 62 62 63 const char *name;
+1 -1
drivers/platform/x86/hp/hp-bioscfg/biosattr-interface.c
··· 19 19 u32 command; 20 20 u32 commandtype; 21 21 u32 datasize; 22 - u8 data[]; 22 + u8 data[] __counted_by(datasize); 23 23 }; 24 24 25 25 /**
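The one-line hp-bioscfg change annotates the flexible array with `__counted_by(datasize)`, which lets the compiler and FORTIFY_SOURCE bounds-check `data[]` against the `datasize` member. A userspace sketch of the idiom, with a no-op fallback since the attribute needs a recent GCC/Clang (plain `calloc` stands in for the kernel's `struct_size()` overflow-checked allocation):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#ifndef __counted_by
#define __counted_by(member)   /* no-op on compilers without the attribute */
#endif

struct bios_msg {
	uint32_t datasize;
	uint8_t data[] __counted_by(datasize);
};

static struct bios_msg *bios_msg_alloc(uint32_t n)
{
	struct bios_msg *m = calloc(1, sizeof(*m) + n);

	if (!m)
		return NULL;
	m->datasize = n;  /* set the counter before data[] is indexed */
	return m;
}
```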
+216
drivers/platform/x86/inspur_platform_profile.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Inspur WMI Platform Profile 4 + * 5 + * Copyright (C) 2018 Ai Chao <aichao@kylinos.cn> 6 + */ 7 + 8 + #include <linux/acpi.h> 9 + #include <linux/device.h> 10 + #include <linux/module.h> 11 + #include <linux/platform_profile.h> 12 + #include <linux/wmi.h> 13 + 14 + #define WMI_INSPUR_POWERMODE_BIOS_GUID "596C31E3-332D-43C9-AEE9-585493284F5D" 15 + 16 + enum inspur_wmi_method_ids { 17 + INSPUR_WMI_GET_POWERMODE = 0x02, 18 + INSPUR_WMI_SET_POWERMODE = 0x03, 19 + }; 20 + 21 + /* 22 + * Power Mode: 23 + * 0x0: Balance Mode 24 + * 0x1: Performance Mode 25 + * 0x2: Power Saver Mode 26 + */ 27 + enum inspur_tmp_profile { 28 + INSPUR_TMP_PROFILE_BALANCE = 0, 29 + INSPUR_TMP_PROFILE_PERFORMANCE = 1, 30 + INSPUR_TMP_PROFILE_POWERSAVE = 2, 31 + }; 32 + 33 + struct inspur_wmi_priv { 34 + struct wmi_device *wdev; 35 + struct platform_profile_handler handler; 36 + }; 37 + 38 + static int inspur_wmi_perform_query(struct wmi_device *wdev, 39 + enum inspur_wmi_method_ids query_id, 40 + void *buffer, size_t insize, 41 + size_t outsize) 42 + { 43 + struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL }; 44 + struct acpi_buffer input = { insize, buffer}; 45 + union acpi_object *obj; 46 + acpi_status status; 47 + int ret = 0; 48 + 49 + status = wmidev_evaluate_method(wdev, 0, query_id, &input, &output); 50 + if (ACPI_FAILURE(status)) { 51 + dev_err(&wdev->dev, "EC Powermode control failed: %s\n", 52 + acpi_format_exception(status)); 53 + return -EIO; 54 + } 55 + 56 + obj = output.pointer; 57 + if (!obj) 58 + return -EINVAL; 59 + 60 + if (obj->type != ACPI_TYPE_BUFFER || 61 + obj->buffer.length != outsize) { 62 + ret = -EINVAL; 63 + goto out_free; 64 + } 65 + 66 + memcpy(buffer, obj->buffer.pointer, obj->buffer.length); 67 + 68 + out_free: 69 + kfree(obj); 70 + return ret; 71 + } 72 + 73 + /* 74 + * Set Power Mode to EC RAM. 
If Power Mode value greater than 0x3, 75 + * return error 76 + * Method ID: 0x3 77 + * Arg: 4 Bytes 78 + * Byte [0]: Power Mode: 79 + * 0x0: Balance Mode 80 + * 0x1: Performance Mode 81 + * 0x2: Power Saver Mode 82 + * Return Value: 4 Bytes 83 + * Byte [0]: Return Code 84 + * 0x0: No Error 85 + * 0x1: Error 86 + */ 87 + static int inspur_platform_profile_set(struct platform_profile_handler *pprof, 88 + enum platform_profile_option profile) 89 + { 90 + struct inspur_wmi_priv *priv = container_of(pprof, struct inspur_wmi_priv, 91 + handler); 92 + u8 ret_code[4] = {0, 0, 0, 0}; 93 + int ret; 94 + 95 + switch (profile) { 96 + case PLATFORM_PROFILE_BALANCED: 97 + ret_code[0] = INSPUR_TMP_PROFILE_BALANCE; 98 + break; 99 + case PLATFORM_PROFILE_PERFORMANCE: 100 + ret_code[0] = INSPUR_TMP_PROFILE_PERFORMANCE; 101 + break; 102 + case PLATFORM_PROFILE_LOW_POWER: 103 + ret_code[0] = INSPUR_TMP_PROFILE_POWERSAVE; 104 + break; 105 + default: 106 + return -EOPNOTSUPP; 107 + } 108 + 109 + ret = inspur_wmi_perform_query(priv->wdev, INSPUR_WMI_SET_POWERMODE, 110 + ret_code, sizeof(ret_code), 111 + sizeof(ret_code)); 112 + 113 + if (ret < 0) 114 + return ret; 115 + 116 + if (ret_code[0]) 117 + return -EBADRQC; 118 + 119 + return 0; 120 + } 121 + 122 + /* 123 + * Get Power Mode from EC RAM, If Power Mode value greater than 0x3, 124 + * return error 125 + * Method ID: 0x2 126 + * Return Value: 4 Bytes 127 + * Byte [0]: Return Code 128 + * 0x0: No Error 129 + * 0x1: Error 130 + * Byte [1]: Power Mode 131 + * 0x0: Balance Mode 132 + * 0x1: Performance Mode 133 + * 0x2: Power Saver Mode 134 + */ 135 + static int inspur_platform_profile_get(struct platform_profile_handler *pprof, 136 + enum platform_profile_option *profile) 137 + { 138 + struct inspur_wmi_priv *priv = container_of(pprof, struct inspur_wmi_priv, 139 + handler); 140 + u8 ret_code[4] = {0, 0, 0, 0}; 141 + int ret; 142 + 143 + ret = inspur_wmi_perform_query(priv->wdev, INSPUR_WMI_GET_POWERMODE, 144 + &ret_code, 
sizeof(ret_code), 145 + sizeof(ret_code)); 146 + if (ret < 0) 147 + return ret; 148 + 149 + if (ret_code[0]) 150 + return -EBADRQC; 151 + 152 + switch (ret_code[1]) { 153 + case INSPUR_TMP_PROFILE_BALANCE: 154 + *profile = PLATFORM_PROFILE_BALANCED; 155 + break; 156 + case INSPUR_TMP_PROFILE_PERFORMANCE: 157 + *profile = PLATFORM_PROFILE_PERFORMANCE; 158 + break; 159 + case INSPUR_TMP_PROFILE_POWERSAVE: 160 + *profile = PLATFORM_PROFILE_LOW_POWER; 161 + break; 162 + default: 163 + return -EINVAL; 164 + } 165 + 166 + return 0; 167 + } 168 + 169 + static int inspur_wmi_probe(struct wmi_device *wdev, const void *context) 170 + { 171 + struct inspur_wmi_priv *priv; 172 + 173 + priv = devm_kzalloc(&wdev->dev, sizeof(*priv), GFP_KERNEL); 174 + if (!priv) 175 + return -ENOMEM; 176 + 177 + priv->wdev = wdev; 178 + dev_set_drvdata(&wdev->dev, priv); 179 + 180 + priv->handler.profile_get = inspur_platform_profile_get; 181 + priv->handler.profile_set = inspur_platform_profile_set; 182 + 183 + set_bit(PLATFORM_PROFILE_LOW_POWER, priv->handler.choices); 184 + set_bit(PLATFORM_PROFILE_BALANCED, priv->handler.choices); 185 + set_bit(PLATFORM_PROFILE_PERFORMANCE, priv->handler.choices); 186 + 187 + return platform_profile_register(&priv->handler); 188 + } 189 + 190 + static void inspur_wmi_remove(struct wmi_device *wdev) 191 + { 192 + platform_profile_remove(); 193 + } 194 + 195 + static const struct wmi_device_id inspur_wmi_id_table[] = { 196 + { .guid_string = WMI_INSPUR_POWERMODE_BIOS_GUID }, 197 + { } 198 + }; 199 + 200 + MODULE_DEVICE_TABLE(wmi, inspur_wmi_id_table); 201 + 202 + static struct wmi_driver inspur_wmi_driver = { 203 + .driver = { 204 + .name = "inspur-wmi-platform-profile", 205 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 206 + }, 207 + .id_table = inspur_wmi_id_table, 208 + .probe = inspur_wmi_probe, 209 + .remove = inspur_wmi_remove, 210 + }; 211 + 212 + module_wmi_driver(inspur_wmi_driver); 213 + 214 + MODULE_AUTHOR("Ai Chao <aichao@kylinos.cn>"); 215 + 
MODULE_DESCRIPTION("Platform Profile Support for Inspur"); 216 + MODULE_LICENSE("GPL");
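The core of the new inspur driver is a two-way mapping between the platform-profile options and byte [0] of the 4-byte WMI buffer, as documented in its comment blocks (0x0 balance, 0x1 performance, 0x2 power saver). A minimal model of both directions, with a local enum standing in for `enum platform_profile_option`:

```c
#include <assert.h>

enum profile { BALANCED, PERFORMANCE, LOW_POWER, PROFILE_INVALID };

static int profile_to_fw(enum profile p)
{
	switch (p) {
	case BALANCED:    return 0;
	case PERFORMANCE: return 1;
	case LOW_POWER:   return 2;
	default:          return -1;  /* -EOPNOTSUPP in the driver */
	}
}

static enum profile fw_to_profile(int byte0)
{
	switch (byte0) {
	case 0: return BALANCED;
	case 1: return PERFORMANCE;
	case 2: return LOW_POWER;
	default: return PROFILE_INVALID;  /* -EINVAL in the driver */
	}
}
```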
+2 -3
drivers/platform/x86/intel/bytcrc_pwrsrc.c
··· 158 158 return 0; 159 159 } 160 160 161 - static int crc_pwrsrc_remove(struct platform_device *pdev) 161 + static void crc_pwrsrc_remove(struct platform_device *pdev) 162 162 { 163 163 struct crc_pwrsrc_data *data = platform_get_drvdata(pdev); 164 164 165 165 debugfs_remove_recursive(data->debug_dentry); 166 - return 0; 167 166 } 168 167 169 168 static struct platform_driver crc_pwrsrc_driver = { 170 169 .probe = crc_pwrsrc_probe, 171 - .remove = crc_pwrsrc_remove, 170 + .remove_new = crc_pwrsrc_remove, 172 171 .driver = { 173 172 .name = "crystal_cove_pwrsrc", 174 173 },
+11 -4
drivers/platform/x86/intel/ifs/core.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* Copyright(c) 2022 Intel Corporation. */ 3 3 4 + #include <linux/bitfield.h> 4 5 #include <linux/module.h> 5 6 #include <linux/kdev_t.h> 6 7 #include <linux/semaphore.h> ··· 11 10 12 11 #include "ifs.h" 13 12 14 - #define X86_MATCH(model) \ 13 + #define X86_MATCH(model, array_gen) \ 15 14 X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, \ 16 - INTEL_FAM6_##model, X86_FEATURE_CORE_CAPABILITIES, NULL) 15 + INTEL_FAM6_##model, X86_FEATURE_CORE_CAPABILITIES, array_gen) 17 16 18 17 static const struct x86_cpu_id ifs_cpu_ids[] __initconst = { 19 - X86_MATCH(SAPPHIRERAPIDS_X), 20 - X86_MATCH(EMERALDRAPIDS_X), 18 + X86_MATCH(SAPPHIRERAPIDS_X, ARRAY_GEN0), 19 + X86_MATCH(EMERALDRAPIDS_X, ARRAY_GEN0), 20 + X86_MATCH(GRANITERAPIDS_X, ARRAY_GEN0), 21 + X86_MATCH(GRANITERAPIDS_D, ARRAY_GEN0), 22 + X86_MATCH(ATOM_CRESTMONT_X, ARRAY_GEN1), 21 23 {} 22 24 }; 23 25 MODULE_DEVICE_TABLE(x86cpu, ifs_cpu_ids); ··· 98 94 for (i = 0; i < IFS_NUMTESTS; i++) { 99 95 if (!(msrval & BIT(ifs_devices[i].test_caps->integrity_cap_bit))) 100 96 continue; 97 + ifs_devices[i].rw_data.generation = FIELD_GET(MSR_INTEGRITY_CAPS_SAF_GEN_MASK, 98 + msrval); 99 + ifs_devices[i].rw_data.array_gen = (u32)m->driver_data; 101 100 ret = misc_register(&ifs_devices[i].misc); 102 101 if (ret) 103 102 goto err_exit;
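The ifs core hunk uses `FIELD_GET(MSR_INTEGRITY_CAPS_SAF_GEN_MASK, msrval)` to pull the test generation out of the capabilities MSR. Minimal userspace versions of the kernel's `GENMASK_ULL()`/`FIELD_GET()` show the mechanism; the bit span chosen below is illustrative, not the real MSR layout:

```c
#include <assert.h>
#include <stdint.h>

#define GENMASK_ULL(h, l) \
	((~0ULL >> (63 - (h))) & (~0ULL << (l)))

/* (mask & -mask) isolates the lowest set bit, so the divide is a
 * right-shift of the field down to bit 0. */
#define FIELD_GET(mask, val) \
	(((val) & (mask)) / ((mask) & -(mask)))

#define SAF_GEN_MASK GENMASK_ULL(11, 9)  /* hypothetical field position */
```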
+58 -6
drivers/platform/x86/intel/ifs/ifs.h
··· 137 137 #define MSR_CHUNKS_AUTHENTICATION_STATUS 0x000002c5 138 138 #define MSR_ACTIVATE_SCAN 0x000002c6 139 139 #define MSR_SCAN_STATUS 0x000002c7 140 + #define MSR_ARRAY_TRIGGER 0x000002d6 141 + #define MSR_ARRAY_STATUS 0x000002d7 142 + #define MSR_SAF_CTRL 0x000004f0 143 + 140 144 #define SCAN_NOT_TESTED 0 141 145 #define SCAN_TEST_PASS 1 142 146 #define SCAN_TEST_FAIL 2 143 147 144 148 #define IFS_TYPE_SAF 0 145 149 #define IFS_TYPE_ARRAY_BIST 1 150 + 151 + #define ARRAY_GEN0 0 152 + #define ARRAY_GEN1 1 146 153 147 154 /* MSR_SCAN_HASHES_STATUS bit fields */ 148 155 union ifs_scan_hashes_status { ··· 165 158 }; 166 159 }; 167 160 161 + union ifs_scan_hashes_status_gen2 { 162 + u64 data; 163 + struct { 164 + u16 chunk_size; 165 + u16 num_chunks; 166 + u32 error_code :8; 167 + u32 chunks_in_stride :9; 168 + u32 rsvd :2; 169 + u32 max_core_limit :12; 170 + u32 valid :1; 171 + }; 172 + }; 173 + 168 174 /* MSR_CHUNKS_AUTH_STATUS bit fields */ 169 175 union ifs_chunks_auth_status { 170 176 u64 data; ··· 190 170 }; 191 171 }; 192 172 173 + union ifs_chunks_auth_status_gen2 { 174 + u64 data; 175 + struct { 176 + u16 valid_chunks; 177 + u16 total_chunks; 178 + u32 error_code :8; 179 + u32 rsvd2 :24; 180 + }; 181 + }; 182 + 193 183 /* MSR_ACTIVATE_SCAN bit fields */ 194 184 union ifs_scan { 195 185 u64 data; 196 186 struct { 197 - u32 start :8; 198 - u32 stop :8; 199 - u32 rsvd :16; 187 + union { 188 + struct { 189 + u8 start; 190 + u8 stop; 191 + u16 rsvd; 192 + } gen0; 193 + struct { 194 + u16 start; 195 + u16 stop; 196 + } gen2; 197 + }; 200 198 u32 delay :31; 201 199 u32 sigmce :1; 202 200 }; ··· 224 186 union ifs_status { 225 187 u64 data; 226 188 struct { 227 - u32 chunk_num :8; 228 - u32 chunk_stop_index :8; 229 - u32 rsvd1 :16; 189 + union { 190 + struct { 191 + u8 chunk_num; 192 + u8 chunk_stop_index; 193 + u16 rsvd1; 194 + } gen0; 195 + struct { 196 + u16 chunk_num; 197 + u16 chunk_stop_index; 198 + } gen2; 199 + }; 230 200 u32 error_code :8; 231 201 u32 
rsvd2 :22; 232 202 u32 control_error :1; ··· 275 229 * @status: it holds simple status pass/fail/untested 276 230 * @scan_details: opaque scan status code from h/w 277 231 * @cur_batch: number indicating the currently loaded test file 232 + * @generation: IFS test generation enumerated by hardware 233 + * @chunk_size: size of a test chunk 234 + * @array_gen: test generation of array test 278 235 */ 279 236 struct ifs_data { 280 237 int loaded_version; ··· 287 238 int status; 288 239 u64 scan_details; 289 240 u32 cur_batch; 241 + u32 generation; 242 + u32 chunk_size; 243 + u32 array_gen; 290 244 }; 291 245 292 246 struct ifs_work {
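The `ifs_scan`/`ifs_status` rework nests per-generation layouts in an anonymous union over the same 64-bit MSR payload: gen0 packs 8-bit start/stop chunk indices, gen2 widens them to 16 bits, while the `delay`/`sigmce` bits keep their positions. A compilable model of the overlay (assumes a little-endian target like x86):

```c
#include <assert.h>
#include <stdint.h>

union scan_model {
	uint64_t data;              /* raw value written to the MSR */
	struct {
		union {
			struct { uint8_t  start, stop; uint16_t rsvd; } gen0;
			struct { uint16_t start, stop; } gen2;
		};
		uint32_t delay  : 31;
		uint32_t sigmce : 1;
	};
};
```

Both views alias the low 32 bits, so generation-aware code picks one view and the raw `data` word stays correct either way.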
+144 -14
drivers/platform/x86/intel/ifs/load.c
··· 2 2 /* Copyright(c) 2022 Intel Corporation. */ 3 3 4 4 #include <linux/firmware.h> 5 + #include <linux/sizes.h> 5 6 #include <asm/cpu.h> 6 7 #include <asm/microcode.h> 7 8 ··· 27 26 28 27 #define IFS_HEADER_SIZE (sizeof(struct microcode_header_intel)) 29 28 #define META_TYPE_IFS 1 29 + #define INVALIDATE_STRIDE 0x1UL 30 + #define IFS_GEN_STRIDE_AWARE 2 31 + #define AUTH_INTERRUPTED_ERROR 5 32 + #define IFS_AUTH_RETRY_CT 10 33 + 30 34 static struct microcode_header_intel *ifs_header_ptr; /* pointer to the ifs image header */ 31 35 static u64 ifs_hash_ptr; /* Address of ifs metadata (hash) */ 32 36 static u64 ifs_test_image_ptr; /* 256B aligned address of test pattern */ ··· 50 44 static const char * const scan_authentication_status[] = { 51 45 [0] = "No error reported", 52 46 [1] = "Attempt to authenticate a chunk which is already marked as authentic", 53 - [2] = "Chunk authentication error. The hash of chunk did not match expected value" 47 + [2] = "Chunk authentication error. The hash of chunk did not match expected value", 48 + [3] = "Reserved", 49 + [4] = "Chunk outside the current stride", 50 + [5] = "Authentication flow interrupted", 54 51 }; 55 52 56 53 #define MC_HEADER_META_TYPE_END (0) ··· 89 80 return NULL; 90 81 } 91 82 83 + static void hashcopy_err_message(struct device *dev, u32 err_code) 84 + { 85 + if (err_code >= ARRAY_SIZE(scan_hash_status)) 86 + dev_err(dev, "invalid error code 0x%x for hash copy\n", err_code); 87 + else 88 + dev_err(dev, "Hash copy error : %s\n", scan_hash_status[err_code]); 89 + } 90 + 91 + static void auth_err_message(struct device *dev, u32 err_code) 92 + { 93 + if (err_code >= ARRAY_SIZE(scan_authentication_status)) 94 + dev_err(dev, "invalid error code 0x%x for authentication\n", err_code); 95 + else 96 + dev_err(dev, "Chunk authentication error : %s\n", 97 + scan_authentication_status[err_code]); 98 + } 99 + 92 100 /* 93 101 * To copy scan hashes and authenticate test chunks, the initiating cpu must point 94 102 * to 
the EDX:EAX to the test image in linear address. ··· 135 109 136 110 if (!hashes_status.valid) { 137 111 ifsd->loading_error = true; 138 - if (err_code >= ARRAY_SIZE(scan_hash_status)) { 139 - dev_err(dev, "invalid error code 0x%x for hash copy\n", err_code); 140 - goto done; 141 - } 142 - dev_err(dev, "Hash copy error : %s", scan_hash_status[err_code]); 112 + hashcopy_err_message(dev, err_code); 143 113 goto done; 144 114 } 145 115 ··· 155 133 156 134 if (err_code) { 157 135 ifsd->loading_error = true; 158 - if (err_code >= ARRAY_SIZE(scan_authentication_status)) { 159 - dev_err(dev, 160 - "invalid error code 0x%x for authentication\n", err_code); 161 - goto done; 162 - } 163 - dev_err(dev, "Chunk authentication error %s\n", 164 - scan_authentication_status[err_code]); 136 + auth_err_message(dev, err_code); 165 137 goto done; 166 138 } 167 139 } 168 140 done: 169 141 complete(&ifs_done); 142 + } 143 + 144 + static int get_num_chunks(int gen, union ifs_scan_hashes_status_gen2 status) 145 + { 146 + return gen >= IFS_GEN_STRIDE_AWARE ? 
status.chunks_in_stride : status.num_chunks; 147 + } 148 + 149 + static bool need_copy_scan_hashes(struct ifs_data *ifsd) 150 + { 151 + return !ifsd->loaded || 152 + ifsd->generation < IFS_GEN_STRIDE_AWARE || 153 + ifsd->loaded_version != ifs_header_ptr->rev; 154 + } 155 + 156 + static int copy_hashes_authenticate_chunks_gen2(struct device *dev) 157 + { 158 + union ifs_scan_hashes_status_gen2 hashes_status; 159 + union ifs_chunks_auth_status_gen2 chunk_status; 160 + u32 err_code, valid_chunks, total_chunks; 161 + int i, num_chunks, chunk_size; 162 + union meta_data *ifs_meta; 163 + int starting_chunk_nr; 164 + struct ifs_data *ifsd; 165 + u64 linear_addr, base; 166 + u64 chunk_table[2]; 167 + int retry_count; 168 + 169 + ifsd = ifs_get_data(dev); 170 + 171 + if (need_copy_scan_hashes(ifsd)) { 172 + wrmsrl(MSR_COPY_SCAN_HASHES, ifs_hash_ptr); 173 + rdmsrl(MSR_SCAN_HASHES_STATUS, hashes_status.data); 174 + 175 + /* enumerate the scan image information */ 176 + chunk_size = hashes_status.chunk_size * SZ_1K; 177 + err_code = hashes_status.error_code; 178 + 179 + num_chunks = get_num_chunks(ifsd->generation, hashes_status); 180 + 181 + if (!hashes_status.valid) { 182 + hashcopy_err_message(dev, err_code); 183 + return -EIO; 184 + } 185 + ifsd->loaded_version = ifs_header_ptr->rev; 186 + ifsd->chunk_size = chunk_size; 187 + } else { 188 + num_chunks = ifsd->valid_chunks; 189 + chunk_size = ifsd->chunk_size; 190 + } 191 + 192 + if (ifsd->generation >= IFS_GEN_STRIDE_AWARE) { 193 + wrmsrl(MSR_SAF_CTRL, INVALIDATE_STRIDE); 194 + rdmsrl(MSR_CHUNKS_AUTHENTICATION_STATUS, chunk_status.data); 195 + if (chunk_status.valid_chunks != 0) { 196 + dev_err(dev, "Couldn't invalidate installed stride - %d\n", 197 + chunk_status.valid_chunks); 198 + return -EIO; 199 + } 200 + } 201 + 202 + base = ifs_test_image_ptr; 203 + ifs_meta = (union meta_data *)find_meta_data(ifs_header_ptr, META_TYPE_IFS); 204 + starting_chunk_nr = ifs_meta->starting_chunk; 205 + 206 + /* scan data authentication 
and copy chunks to secured memory */ 207 + for (i = 0; i < num_chunks; i++) { 208 + retry_count = IFS_AUTH_RETRY_CT; 209 + linear_addr = base + i * chunk_size; 210 + 211 + chunk_table[0] = starting_chunk_nr + i; 212 + chunk_table[1] = linear_addr; 213 + do { 214 + wrmsrl(MSR_AUTHENTICATE_AND_COPY_CHUNK, (u64)chunk_table); 215 + rdmsrl(MSR_CHUNKS_AUTHENTICATION_STATUS, chunk_status.data); 216 + err_code = chunk_status.error_code; 217 + } while (err_code == AUTH_INTERRUPTED_ERROR && --retry_count); 218 + 219 + if (err_code) { 220 + ifsd->loading_error = true; 221 + auth_err_message(dev, err_code); 222 + return -EIO; 223 + } 224 + } 225 + 226 + valid_chunks = chunk_status.valid_chunks; 227 + total_chunks = chunk_status.total_chunks; 228 + 229 + if (valid_chunks != total_chunks) { 230 + ifsd->loading_error = true; 231 + dev_err(dev, "Couldn't authenticate all the chunks. Authenticated %d total %d.\n", 232 + valid_chunks, total_chunks); 233 + return -EIO; 234 + } 235 + ifsd->valid_chunks = valid_chunks; 236 + 237 + return 0; 170 238 } 171 239 172 240 static int validate_ifs_metadata(struct device *dev) ··· 291 179 return ret; 292 180 } 293 181 182 + if (ifs_meta->chunks_per_stride && 183 + (ifs_meta->starting_chunk % ifs_meta->chunks_per_stride != 0)) { 184 + dev_warn(dev, "Starting chunk num %u not a multiple of chunks_per_stride %u\n", 185 + ifs_meta->starting_chunk, ifs_meta->chunks_per_stride); 186 + return ret; 187 + } 188 + 294 189 return 0; 295 190 } 296 191 ··· 318 199 return ret; 319 200 320 201 ifsd->loading_error = false; 321 - ifsd->loaded_version = ifs_header_ptr->rev; 202 + 203 + if (ifsd->generation > 0) 204 + return copy_hashes_authenticate_chunks_gen2(dev); 322 205 323 206 /* copy the scan hash and authenticate per package */ 324 207 cpus_read_lock(); ··· 340 219 ifs_pkg_auth[curr_pkg] = 1; 341 220 } 342 221 ret = 0; 222 + ifsd->loaded_version = ifs_header_ptr->rev; 343 223 out: 344 224 cpus_read_unlock(); 345 225 ··· 382 260 { 383 261 const struct 
ifs_test_caps *test = ifs_get_test_caps(dev); 384 262 struct ifs_data *ifsd = ifs_get_data(dev); 263 + unsigned int expected_size; 385 264 const struct firmware *fw; 386 265 char scan_path[64]; 387 266 int ret = -EINVAL; ··· 395 272 if (ret) { 396 273 dev_err(dev, "ifs file %s load failed\n", scan_path); 397 274 goto done; 275 + } 276 + 277 + expected_size = ((struct microcode_header_intel *)fw->data)->totalsize; 278 + if (fw->size != expected_size) { 279 + dev_err(dev, "File size mismatch (expected %u, actual %zu). Corrupted IFS image.\n", 280 + expected_size, fw->size); 281 + return -EINVAL; 398 282 } 399 283 400 284 ret = image_sanity_check(dev, (struct microcode_header_intel *)fw->data);
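load.c retries chunk authentication while the firmware reports the flow was interrupted (`AUTH_INTERRUPTED_ERROR`), bounded by `IFS_AUTH_RETRY_CT`. The loop shape, with a function pointer standing in for the WRMSR/RDMSR pair and two fakes simulating a transient and a persistent failure:

```c
#include <assert.h>

#define AUTH_INTERRUPTED 5   /* AUTH_INTERRUPTED_ERROR */
#define RETRY_CT 10          /* IFS_AUTH_RETRY_CT */

static int auth_with_retry(int (*auth)(void))
{
	int retry = RETRY_CT, err;

	do {
		err = auth();
	} while (err == AUTH_INTERRUPTED && --retry);
	return err;
}

static int transient_calls;
static int transient_auth(void)  /* interrupted twice, then succeeds */
{
	return ++transient_calls <= 2 ? AUTH_INTERRUPTED : 0;
}

static int persistent_auth(void) /* never recovers */
{
	return AUTH_INTERRUPTED;
}
```

A transient interruption is absorbed after a couple of extra attempts; a persistent one surfaces as the final error code once the budget is spent.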
+64 -8
drivers/platform/x86/intel/ifs/runtest.c
··· 40 40 IFS_UNASSIGNED_ERROR_CODE = 7, 41 41 IFS_EXCEED_NUMBER_OF_THREADS_CONCURRENT = 8, 42 42 IFS_INTERRUPTED_DURING_EXECUTION = 9, 43 + IFS_UNASSIGNED_ERROR_CODE_0xA = 0xA, 44 + IFS_CORRUPTED_CHUNK = 0xB, 43 45 }; 44 46 45 47 static const char * const scan_test_status[] = { ··· 57 55 [IFS_EXCEED_NUMBER_OF_THREADS_CONCURRENT] = 58 56 "Exceeded number of Logical Processors (LP) allowed to run Scan-At-Field concurrently", 59 57 [IFS_INTERRUPTED_DURING_EXECUTION] = "Interrupt occurred prior to SCAN start", 58 + [IFS_UNASSIGNED_ERROR_CODE_0xA] = "Unassigned error code 0xA", 59 + [IFS_CORRUPTED_CHUNK] = "Scan operation aborted due to corrupted image. Try reloading", 60 60 }; 61 61 62 62 static void message_not_tested(struct device *dev, int cpu, union ifs_status status) ··· 127 123 case IFS_MISMATCH_ARGUMENTS_BETWEEN_THREADS: 128 124 case IFS_CORE_NOT_CAPABLE_CURRENTLY: 129 125 case IFS_UNASSIGNED_ERROR_CODE: 126 + case IFS_UNASSIGNED_ERROR_CODE_0xA: 127 + case IFS_CORRUPTED_CHUNK: 130 128 break; 131 129 } 132 130 return false; ··· 177 171 union ifs_status status; 178 172 unsigned long timeout; 179 173 struct ifs_data *ifsd; 174 + int to_start, to_stop; 175 + int status_chunk; 180 176 u64 msrvals[2]; 181 177 int retries; 182 178 183 179 ifsd = ifs_get_data(dev); 184 180 185 - activate.rsvd = 0; 181 + activate.gen0.rsvd = 0; 186 182 activate.delay = IFS_THREAD_WAIT; 187 183 activate.sigmce = 0; 188 - activate.start = 0; 189 - activate.stop = ifsd->valid_chunks - 1; 184 + to_start = 0; 185 + to_stop = ifsd->valid_chunks - 1; 186 + 187 + if (ifsd->generation) { 188 + activate.gen2.start = to_start; 189 + activate.gen2.stop = to_stop; 190 + } else { 191 + activate.gen0.start = to_start; 192 + activate.gen0.stop = to_stop; 193 + } 190 194 191 195 timeout = jiffies + HZ / 2; 192 196 retries = MAX_IFS_RETRIES; 193 197 194 - while (activate.start <= activate.stop) { 198 + while (to_start <= to_stop) { 195 199 if (time_after(jiffies, timeout)) { 196 200 status.error_code = 
IFS_SW_TIMEOUT; 197 201 break; ··· 212 196 213 197 status.data = msrvals[1]; 214 198 215 - trace_ifs_status(cpu, activate, status); 199 + trace_ifs_status(cpu, to_start, to_stop, status.data); 216 200 217 201 /* Some cases can be retried, give up for others */ 218 202 if (!can_restart(status)) 219 203 break; 220 204 221 - if (status.chunk_num == activate.start) { 205 + status_chunk = ifsd->generation ? status.gen2.chunk_num : status.gen0.chunk_num; 206 + if (status_chunk == to_start) { 222 207 /* Check for forward progress */ 223 208 if (--retries == 0) { 224 209 if (status.error_code == IFS_NO_ERROR) ··· 228 211 } 229 212 } else { 230 213 retries = MAX_IFS_RETRIES; 231 - activate.start = status.chunk_num; 214 + if (ifsd->generation) 215 + activate.gen2.start = status_chunk; 216 + else 217 + activate.gen0.start = status_chunk; 218 + to_start = status_chunk; 232 219 } 233 220 } 234 221 ··· 329 308 ifsd->status = SCAN_TEST_PASS; 330 309 } 331 310 311 + #define ARRAY_GEN1_TEST_ALL_ARRAYS 0x0ULL 312 + #define ARRAY_GEN1_STATUS_FAIL 0x1ULL 313 + 314 + static int do_array_test_gen1(void *status) 315 + { 316 + int cpu = smp_processor_id(); 317 + int first; 318 + 319 + first = cpumask_first(cpu_smt_mask(cpu)); 320 + 321 + if (cpu == first) { 322 + wrmsrl(MSR_ARRAY_TRIGGER, ARRAY_GEN1_TEST_ALL_ARRAYS); 323 + rdmsrl(MSR_ARRAY_STATUS, *((u64 *)status)); 324 + } 325 + 326 + return 0; 327 + } 328 + 329 + static void ifs_array_test_gen1(int cpu, struct device *dev) 330 + { 331 + struct ifs_data *ifsd = ifs_get_data(dev); 332 + u64 status = 0; 333 + 334 + stop_core_cpuslocked(cpu, do_array_test_gen1, &status); 335 + ifsd->scan_details = status; 336 + 337 + if (status & ARRAY_GEN1_STATUS_FAIL) 338 + ifsd->status = SCAN_TEST_FAIL; 339 + else 340 + ifsd->status = SCAN_TEST_PASS; 341 + } 342 + 332 343 /* 333 344 * Initiate per core test. It wakes up work queue threads on the target cpu and 334 345 * its sibling cpu. 
Once all sibling threads wake up, the scan test gets executed and ··· 389 336 ifs_test_core(cpu, dev); 390 337 break; 391 338 case IFS_TYPE_ARRAY_BIST: 392 - ifs_array_test_core(cpu, dev); 339 + if (ifsd->array_gen == ARRAY_GEN0) 340 + ifs_array_test_core(cpu, dev); 341 + else 342 + ifs_array_test_gen1(cpu, dev); 393 343 break; 394 344 default: 395 345 ret = -EINVAL;
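The runtest.c hunks above replace direct writes to `activate.start`/`activate.stop` with generation-dependent accesses (`gen0` vs `gen2`), since newer IFS hardware lays out the chunk range differently. A minimal userspace sketch of that pattern — a union whose field layout is selected at runtime — is below; the bit widths here are illustrative only, the real layouts live in the driver's `ifs.h`:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layouts: gen0 packs start/stop into 8-bit fields, gen2
 * widens them to 16 bits. Widths are illustrative, not the real ones. */
union ifs_scan {
    uint64_t data;
    struct { uint64_t start:8,  stop:8,  rsvd:48; } gen0;
    struct { uint64_t start:16, stop:16, rsvd:32; } gen2;
};

/* Mirrors the merged logic: pick the field layout by hardware generation
 * instead of assuming a single fixed layout. */
static void scan_set_range(union ifs_scan *activate, int generation,
                           int start, int stop)
{
    if (generation) {
        activate->gen2.start = start;
        activate->gen2.stop = stop;
    } else {
        activate->gen0.start = start;
        activate->gen0.stop = stop;
    }
}
```

The same generation check guards the status side (`status.gen0.chunk_num` vs `status.gen2.chunk_num`), which is why the diff caches the value in a plain `status_chunk` integer.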
+13 -8
drivers/platform/x86/intel/speed_select_if/isst_if_mmio.c
··· 18 18 struct isst_mmio_range { 19 19 int beg; 20 20 int end; 21 + int size; 21 22 }; 22 23 23 24 static struct isst_mmio_range mmio_range_devid_0[] = { 24 - {0x04, 0x14}, 25 - {0x20, 0xD0}, 25 + {0x04, 0x14, 0x18}, 26 + {0x20, 0xD0, 0xD4}, 26 27 }; 27 28 28 29 static struct isst_mmio_range mmio_range_devid_1[] = { 29 - {0x04, 0x14}, 30 - {0x20, 0x11C}, 30 + {0x04, 0x14, 0x18}, 31 + {0x20, 0x11C, 0x120}, 31 32 }; 32 33 33 34 struct isst_if_device { ··· 94 93 struct isst_if_device *punit_dev; 95 94 struct isst_if_cmd_cb cb; 96 95 u32 mmio_base, pcu_base; 96 + struct resource r; 97 97 u64 base_addr; 98 98 int ret; 99 99 ··· 116 114 117 115 pcu_base &= GENMASK(10, 0); 118 116 base_addr = (u64)mmio_base << 23 | (u64) pcu_base << 12; 119 - punit_dev->punit_mmio = devm_ioremap(&pdev->dev, base_addr, 256); 120 - if (!punit_dev->punit_mmio) 121 - return -ENOMEM; 117 + 118 + punit_dev->mmio_range = (struct isst_mmio_range *) ent->driver_data; 119 + 120 + r = DEFINE_RES_MEM(base_addr, punit_dev->mmio_range[1].size); 121 + punit_dev->punit_mmio = devm_ioremap_resource(&pdev->dev, &r); 122 + if (IS_ERR(punit_dev->punit_mmio)) 123 + return PTR_ERR(punit_dev->punit_mmio); 122 124 123 125 mutex_init(&punit_dev->mutex); 124 126 pci_set_drvdata(pdev, punit_dev); 125 - punit_dev->mmio_range = (struct isst_mmio_range *) ent->driver_data; 126 127 127 128 memset(&cb, 0, sizeof(cb)); 128 129 cb.cmd_size = sizeof(struct isst_if_io_reg);
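Besides switching to `devm_ioremap_resource()` (which also claims the region, so overlapping mappings now fail loudly), the probe keeps its two-dword address math. A userspace sketch of that computation, with a local stand-in for the kernel's `GENMASK()`:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's GENMASK(h, l) on 64-bit values. */
#define GENMASK_U64(h, l) \
    (((~0ULL) >> (63 - (h))) & (~0ULL << (l)))

/* Mirrors the probe's address math: the PUNIT MMIO base is assembled from
 * two PCI config dwords, with the PCU base truncated to its low 11 bits. */
static uint64_t punit_base_addr(uint32_t mmio_base, uint32_t pcu_base)
{
    pcu_base &= GENMASK_U64(10, 0);
    return (uint64_t)mmio_base << 23 | (uint64_t)pcu_base << 12;
}
```

The new `size` member lets the probe map exactly `mmio_range[1].size` bytes via `DEFINE_RES_MEM()` instead of a hard-coded 256.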
+13 -8
drivers/platform/x86/intel/speed_select_if/isst_tpmi_core.c
··· 30 30 #include "isst_if_common.h" 31 31 32 32 /* Supported SST hardware version by this driver */ 33 - #define ISST_HEADER_VERSION 1 33 + #define ISST_MAJOR_VERSION 0 34 + #define ISST_MINOR_VERSION 1 34 35 35 36 /* 36 37 * Used to indicate if value read from MMIO needs to get multiplied ··· 353 352 pd_info->sst_header.cp_offset *= 8; 354 353 pd_info->sst_header.pp_offset *= 8; 355 354 356 - if (pd_info->sst_header.interface_version != ISST_HEADER_VERSION) { 357 - dev_err(&auxdev->dev, "SST: Unsupported version:%x\n", 358 - pd_info->sst_header.interface_version); 355 + if (pd_info->sst_header.interface_version == TPMI_VERSION_INVALID) 356 + return -ENODEV; 357 + 358 + if (TPMI_MAJOR_VERSION(pd_info->sst_header.interface_version) != ISST_MAJOR_VERSION) { 359 + dev_err(&auxdev->dev, "SST: Unsupported major version:%lx\n", 360 + TPMI_MAJOR_VERSION(pd_info->sst_header.interface_version)); 359 361 return -ENODEV; 360 362 } 363 + 364 + if (TPMI_MINOR_VERSION(pd_info->sst_header.interface_version) != ISST_MINOR_VERSION) 365 + dev_info(&auxdev->dev, "SST: Ignore: Unsupported minor version:%lx\n", 366 + TPMI_MINOR_VERSION(pd_info->sst_header.interface_version)); 361 367 362 368 /* Read SST CP Header */ 363 369 *((u64 *)&pd_info->cp_header) = readq(pd_info->sst_base + pd_info->sst_header.cp_offset); 364 370 365 371 /* Read PP header */ 366 372 *((u64 *)&pd_info->pp_header) = readq(pd_info->sst_base + pd_info->sst_header.pp_offset); 367 - 368 - /* Force level_en_mask level 0 */ 369 - pd_info->pp_header.level_en_mask |= 0x01; 370 373 371 374 mask = 0x01; 372 375 levels = 0; ··· 709 704 return -EINVAL; 710 705 711 706 perf_level.max_level = power_domain_info->max_level; 712 - perf_level.level_mask = power_domain_info->pp_header.allowed_level_mask; 707 + perf_level.level_mask = power_domain_info->pp_header.level_en_mask; 713 708 perf_level.feature_rev = power_domain_info->pp_header.feature_rev; 714 709 _read_pp_info("current_level", perf_level.current_level, 
SST_PP_STATUS_OFFSET, 715 710 SST_PP_LEVEL_START, SST_PP_LEVEL_WIDTH, SST_MUL_FACTOR_NONE)
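The version handling above (and the matching uncore-frequency hunk further down) splits the single `ISST_HEADER_VERSION` check into major/minor: an invalid or major-mismatched header is fatal, a minor mismatch only logs. A sketch of that policy, assuming a major/minor split of the 8-bit version field — the authoritative `TPMI_MAJOR_VERSION`/`TPMI_MINOR_VERSION` macros live in `intel_tpmi.h` and may differ:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed 8-bit layout: major in the high bits, minor in the low bits.
 * The real bit split is defined in intel_tpmi.h; this is illustrative. */
#define TPMI_VERSION_INVALID  0xff
#define TPMI_MINOR_VERSION(v) ((v) & 0x1f)        /* bits 4:0 (assumed) */
#define TPMI_MAJOR_VERSION(v) (((v) >> 5) & 0x7)  /* bits 7:5 (assumed) */

/* Mirrors the merged policy: reject invalid or major-mismatched headers;
 * a minor mismatch is tolerated (the caller just logs it). */
static int version_supported(uint8_t ver, uint8_t expected_major)
{
    if (ver == TPMI_VERSION_INVALID)
        return 0;
    if (TPMI_MAJOR_VERSION(ver) != expected_major)
        return 0;
    return 1; /* minor mismatch is non-fatal */
}
```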
+56 -25
drivers/platform/x86/intel/tpmi.c
··· 143 143 u64 lock:1; 144 144 } __packed; 145 145 146 + /** 147 + * struct tpmi_feature_state - Structure to read hardware state of a feature 148 + * @enabled: Enable state of a feature, 1: enabled, 0: disabled 149 + * @reserved_1: Reserved for future use 150 + * @write_blocked: Writes are blocked means all write operations are ignored 151 + * @read_blocked: Reads are blocked means will read 0xFFs 152 + * @pcs_select: Interface used by out of band software, not used in OS 153 + * @reserved_2: Reserved for future use 154 + * @id: TPMI ID of the feature 155 + * @reserved_3: Reserved for future use 156 + * @locked: When set to 1, OS can't change this register. 157 + * 158 + * The structure is used to read hardware state of a TPMI feature. This 159 + * information is used for debug and restricting operations for this feature. 160 + */ 161 + struct tpmi_feature_state { 162 + u32 enabled:1; 163 + u32 reserved_1:3; 164 + u32 write_blocked:1; 165 + u32 read_blocked:1; 166 + u32 pcs_select:1; 167 + u32 reserved_2:1; 168 + u32 id:8; 169 + u32 reserved_3:15; 170 + u32 locked:1; 171 + } __packed; 172 + 146 173 /* 147 174 * List of supported TMPI IDs. 148 175 * Some TMPI IDs are not used by Linux, so the numbers are not consecutive. 
··· 229 202 230 203 #define TPMI_CONTROL_STATUS_OFFSET 0x00 231 204 #define TPMI_COMMAND_OFFSET 0x08 205 + #define TMPI_CONTROL_DATA_VAL_OFFSET 0x0c 232 206 233 207 /* 234 208 * Spec is calling for max 1 seconds to get ownership at the worst ··· 258 230 259 231 /* TPMI command data registers */ 260 232 #define TMPI_CONTROL_DATA_CMD GENMASK_ULL(7, 0) 261 - #define TMPI_CONTROL_DATA_VAL GENMASK_ULL(63, 32) 262 233 #define TPMI_CONTROL_DATA_VAL_FEATURE GENMASK_ULL(48, 40) 263 234 264 235 /* Command to send via control interface */ ··· 266 239 #define TPMI_CONTROL_CMD_MASK GENMASK_ULL(48, 40) 267 240 268 241 #define TPMI_CMD_LEN_MASK GENMASK_ULL(18, 16) 269 - 270 - #define TPMI_STATE_DISABLED BIT_ULL(0) 271 - #define TPMI_STATE_LOCKED BIT_ULL(31) 272 242 273 243 /* Mutex to complete get feature status without interruption */ 274 244 static DEFINE_MUTEX(tpmi_dev_lock); ··· 280 256 } 281 257 282 258 static int tpmi_read_feature_status(struct intel_tpmi_info *tpmi_info, int feature_id, 283 - int *locked, int *disabled) 259 + struct tpmi_feature_state *feature_state) 284 260 { 285 261 u64 control, data; 286 262 int ret; ··· 330 306 } 331 307 332 308 /* Response is ready */ 333 - data = readq(tpmi_info->tpmi_control_mem + TPMI_COMMAND_OFFSET); 334 - data = FIELD_GET(TMPI_CONTROL_DATA_VAL, data); 335 - 336 - *disabled = 0; 337 - *locked = 0; 338 - 339 - if (!(data & TPMI_STATE_DISABLED)) 340 - *disabled = 1; 341 - 342 - if (data & TPMI_STATE_LOCKED) 343 - *locked = 1; 309 + memcpy_fromio(feature_state, tpmi_info->tpmi_control_mem + TMPI_CONTROL_DATA_VAL_OFFSET, 310 + sizeof(*feature_state)); 344 311 345 312 ret = 0; 346 313 ··· 350 335 { 351 336 struct intel_vsec_device *intel_vsec_dev = dev_to_ivdev(auxdev->dev.parent); 352 337 struct intel_tpmi_info *tpmi_info = auxiliary_get_drvdata(&intel_vsec_dev->auxdev); 338 + struct tpmi_feature_state feature_state; 339 + int ret; 353 340 354 - return tpmi_read_feature_status(tpmi_info, feature_id, locked, disabled); 341 + ret = 
tpmi_read_feature_status(tpmi_info, feature_id, &feature_state); 342 + if (ret) 343 + return ret; 344 + 345 + *locked = feature_state.locked; 346 + *disabled = !feature_state.enabled; 347 + 348 + return 0; 355 349 } 356 350 EXPORT_SYMBOL_NS_GPL(tpmi_get_feature_status, INTEL_TPMI); 357 351 358 352 static int tpmi_pfs_dbg_show(struct seq_file *s, void *unused) 359 353 { 360 354 struct intel_tpmi_info *tpmi_info = s->private; 355 + int locked, disabled, read_blocked, write_blocked; 356 + struct tpmi_feature_state feature_state; 361 357 struct intel_tpmi_pm_feature *pfs; 362 - int locked, disabled, ret, i; 358 + int ret, i; 359 + 363 360 364 361 seq_printf(s, "tpmi PFS start offset 0x:%llx\n", tpmi_info->pfs_start); 365 - seq_puts(s, "tpmi_id\t\tentries\t\tsize\t\tcap_offset\tattribute\tvsec_offset\tlocked\tdisabled\n"); 362 + seq_puts(s, "tpmi_id\t\tentries\t\tsize\t\tcap_offset\tattribute\tvsec_offset\tlocked\tdisabled\tread_blocked\twrite_blocked\n"); 366 363 for (i = 0; i < tpmi_info->feature_count; ++i) { 367 364 pfs = &tpmi_info->tpmi_features[i]; 368 - ret = tpmi_read_feature_status(tpmi_info, pfs->pfs_header.tpmi_id, &locked, 369 - &disabled); 365 + ret = tpmi_read_feature_status(tpmi_info, pfs->pfs_header.tpmi_id, &feature_state); 370 366 if (ret) { 371 367 locked = 'U'; 372 368 disabled = 'U'; 369 + read_blocked = 'U'; 370 + write_blocked = 'U'; 373 371 } else { 374 - disabled = disabled ? 'Y' : 'N'; 375 - locked = locked ? 'Y' : 'N'; 372 + disabled = feature_state.enabled ? 'N' : 'Y'; 373 + locked = feature_state.locked ? 'Y' : 'N'; 374 + read_blocked = feature_state.read_blocked ? 'Y' : 'N'; 375 + write_blocked = feature_state.write_blocked ? 
'Y' : 'N'; 376 376 } 377 - seq_printf(s, "0x%02x\t\t0x%02x\t\t0x%04x\t\t0x%04x\t\t0x%02x\t\t0x%08x\t%c\t%c\n", 377 + seq_printf(s, "0x%02x\t\t0x%02x\t\t0x%04x\t\t0x%04x\t\t0x%02x\t\t0x%08x\t%c\t%c\t\t%c\t\t%c\n", 378 378 pfs->pfs_header.tpmi_id, pfs->pfs_header.num_entries, 379 379 pfs->pfs_header.entry_size, pfs->pfs_header.cap_offset, 380 - pfs->pfs_header.attribute, pfs->vsec_offset, locked, disabled); 380 + pfs->pfs_header.attribute, pfs->vsec_offset, locked, disabled, 381 + read_blocked, write_blocked); 381 382 } 382 383 383 384 return 0;
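The tpmi.c change replaces ad-hoc `TPMI_STATE_*` bit tests with a packed `struct tpmi_feature_state` copied straight out of the response register via `memcpy_fromio()`. A userspace sketch of the same decode, using the struct exactly as defined in the diff (bit-field order is compiler-defined; this relies on the LSB-first layout GCC/Clang use on x86, which is what the kernel relies on too):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Copy of the merged struct: one 32-bit register's worth of state. */
struct tpmi_feature_state {
    uint32_t enabled:1;
    uint32_t reserved_1:3;
    uint32_t write_blocked:1;
    uint32_t read_blocked:1;
    uint32_t pcs_select:1;
    uint32_t reserved_2:1;
    uint32_t id:8;
    uint32_t reserved_3:15;
    uint32_t locked:1;
};

/* Userspace stand-in for memcpy_fromio(): decode a raw register value
 * into named fields instead of open-coded shift/mask pairs. */
static struct tpmi_feature_state decode_feature_state(uint32_t raw)
{
    struct tpmi_feature_state s;
    memcpy(&s, &raw, sizeof(s));
    return s;
}
```

This is what lets the debugfs dump grow `read_blocked`/`write_blocked` columns without adding more mask macros.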
+14 -4
drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c
··· 28 28 29 29 #include "uncore-frequency-common.h" 30 30 31 - #define UNCORE_HEADER_VERSION 1 31 + #define UNCORE_MAJOR_VERSION 0 32 + #define UNCORE_MINOR_VERSION 1 32 33 #define UNCORE_HEADER_INDEX 0 33 34 #define UNCORE_FABRIC_CLUSTER_OFFSET 8 34 35 ··· 303 302 /* Check for version and skip this resource if there is mismatch */ 304 303 header = readq(pd_info->uncore_base); 305 304 pd_info->ufs_header_ver = header & UNCORE_VERSION_MASK; 306 - if (pd_info->ufs_header_ver != UNCORE_HEADER_VERSION) { 307 - dev_info(&auxdev->dev, "Uncore: Unsupported version:%d\n", 308 - pd_info->ufs_header_ver); 305 + 306 + if (pd_info->ufs_header_ver == TPMI_VERSION_INVALID) 309 307 continue; 308 + 309 + if (TPMI_MAJOR_VERSION(pd_info->ufs_header_ver) != UNCORE_MAJOR_VERSION) { 310 + dev_err(&auxdev->dev, "Uncore: Unsupported major version:%lx\n", 311 + TPMI_MAJOR_VERSION(pd_info->ufs_header_ver)); 312 + ret = -ENODEV; 313 + goto remove_clusters; 310 314 } 315 + 316 + if (TPMI_MINOR_VERSION(pd_info->ufs_header_ver) != UNCORE_MINOR_VERSION) 317 + dev_info(&auxdev->dev, "Uncore: Ignore: Unsupported minor version:%lx\n", 318 + TPMI_MINOR_VERSION(pd_info->ufs_header_ver)); 311 319 312 320 /* Get Cluster ID Mask */ 313 321 cluster_mask = FIELD_GET(UNCORE_LOCAL_FABRIC_CLUSTER_ID_MASK, header);
+15 -16
drivers/platform/x86/mlx-platform.c
··· 368 368 }; 369 369 370 370 static struct platform_device *mlxplat_dev; 371 - static int mlxplat_i2c_main_complition_notify(void *handle, int id); 371 + static int mlxplat_i2c_main_completion_notify(void *handle, int id); 372 372 static void __iomem *i2c_bridge_addr, *jtag_bridge_addr; 373 373 374 374 /* Regions for LPC I2C controller and LPC base register space */ ··· 384 384 385 385 /* Platform systems default i2c data */ 386 386 static struct mlxreg_core_hotplug_platform_data mlxplat_mlxcpld_i2c_default_data = { 387 - .completion_notify = mlxplat_i2c_main_complition_notify, 387 + .completion_notify = mlxplat_i2c_main_completion_notify, 388 388 }; 389 389 390 390 /* Platform i2c next generation systems data */ ··· 409 409 .mask = MLXPLAT_CPLD_AGGR_MASK_COMEX, 410 410 .cell_low = MLXPLAT_CPLD_LPC_REG_AGGRCO_OFFSET, 411 411 .mask_low = MLXPLAT_CPLD_LOW_AGGR_MASK_I2C, 412 - .completion_notify = mlxplat_i2c_main_complition_notify, 412 + .completion_notify = mlxplat_i2c_main_completion_notify, 413 413 }; 414 414 415 415 /* Platform default channels */ ··· 6291 6291 } 6292 6292 6293 6293 static int 6294 - mlxplat_pre_init(struct resource **hotplug_resources, unsigned int *hotplug_resources_size) 6294 + mlxplat_logicdev_init(struct resource **hotplug_resources, unsigned int *hotplug_resources_size) 6295 6295 { 6296 6296 int err; 6297 6297 ··· 6302 6302 return err; 6303 6303 } 6304 6304 6305 - static void mlxplat_post_exit(void) 6305 + static void mlxplat_logicdev_exit(void) 6306 6306 { 6307 6307 if (lpc_bridge) 6308 6308 mlxplat_pci_fpga_devices_exit(); ··· 6310 6310 mlxplat_lpc_cpld_device_exit(); 6311 6311 } 6312 6312 6313 - static int mlxplat_post_init(struct mlxplat_priv *priv) 6313 + static int mlxplat_platdevs_init(struct mlxplat_priv *priv) 6314 6314 { 6315 6315 int i = 0, err; 6316 6316 ··· 6407 6407 return err; 6408 6408 } 6409 6409 6410 - static void mlxplat_pre_exit(struct mlxplat_priv *priv) 6410 + static void mlxplat_platdevs_exit(struct mlxplat_priv 
*priv) 6411 6411 { 6412 6412 int i; 6413 6413 ··· 6429 6429 { 6430 6430 struct mlxplat_priv *priv = handle; 6431 6431 6432 - return mlxplat_post_init(priv); 6432 + return mlxplat_platdevs_init(priv); 6433 6433 } 6434 6434 6435 6435 static int mlxplat_i2c_mux_topology_init(struct mlxplat_priv *priv) ··· 6471 6471 } 6472 6472 } 6473 6473 6474 - static int mlxplat_i2c_main_complition_notify(void *handle, int id) 6474 + static int mlxplat_i2c_main_completion_notify(void *handle, int id) 6475 6475 { 6476 6476 struct mlxplat_priv *priv = handle; 6477 6477 ··· 6522 6522 6523 6523 static void mlxplat_i2c_main_exit(struct mlxplat_priv *priv) 6524 6524 { 6525 - mlxplat_pre_exit(priv); 6525 + mlxplat_platdevs_exit(priv); 6526 6526 mlxplat_i2c_mux_topology_exit(priv); 6527 6527 if (priv->pdev_i2c) 6528 6528 platform_device_unregister(priv->pdev_i2c); ··· 6544 6544 mlxplat_dev = pdev; 6545 6545 } 6546 6546 6547 - err = mlxplat_pre_init(&hotplug_resources, &hotplug_resources_size); 6547 + err = mlxplat_logicdev_init(&hotplug_resources, &hotplug_resources_size); 6548 6548 if (err) 6549 6549 return err; 6550 6550 ··· 6603 6603 fail_mlxplat_i2c_main_init: 6604 6604 fail_regmap_write: 6605 6605 fail_alloc: 6606 - mlxplat_post_exit(); 6606 + mlxplat_logicdev_exit(); 6607 6607 6608 6608 return err; 6609 6609 } 6610 6610 6611 - static int mlxplat_remove(struct platform_device *pdev) 6611 + static void mlxplat_remove(struct platform_device *pdev) 6612 6612 { 6613 6613 struct mlxplat_priv *priv = platform_get_drvdata(mlxplat_dev); 6614 6614 ··· 6617 6617 if (mlxplat_reboot_nb) 6618 6618 unregister_reboot_notifier(mlxplat_reboot_nb); 6619 6619 mlxplat_i2c_main_exit(priv); 6620 - mlxplat_post_exit(); 6621 - return 0; 6620 + mlxplat_logicdev_exit(); 6622 6621 } 6623 6622 6624 6623 static const struct acpi_device_id mlxplat_acpi_table[] = { ··· 6633 6634 .probe_type = PROBE_FORCE_SYNCHRONOUS, 6634 6635 }, 6635 6636 .probe = mlxplat_probe, 6636 - .remove = mlxplat_remove, 6637 + .remove_new = 
mlxplat_remove, 6637 6638 }; 6638 6639 6639 6640 static int __init mlxplat_init(void)
+475 -8
drivers/platform/x86/msi-ec.c
··· 58 58 .block_address = 0x2f, 59 59 .bit = 1, 60 60 }, 61 - .fn_super_swap = { 61 + .fn_win_swap = { 62 62 .address = 0xbf, 63 63 .bit = 4, 64 64 }, ··· 138 138 .block_address = 0x2f, 139 139 .bit = 1, 140 140 }, 141 - .fn_super_swap = { 141 + .fn_win_swap = { 142 142 .address = 0xbf, 143 143 .bit = 4, 144 144 }, ··· 215 215 .block_address = 0x2f, 216 216 .bit = 1, 217 217 }, 218 - .fn_super_swap = { 218 + .fn_win_swap = { 219 219 .address = 0xe8, 220 220 .bit = 4, 221 221 }, ··· 293 293 .block_address = 0x2f, 294 294 .bit = 1, 295 295 }, 296 - .fn_super_swap = { 296 + .fn_win_swap = { 297 297 .address = 0xe8, 298 298 .bit = 4, 299 299 }, ··· 371 371 .block_address = 0x2f, 372 372 .bit = 1, 373 373 }, 374 - .fn_super_swap = { 374 + .fn_win_swap = { 375 375 .address = MSI_EC_ADDR_UNKNOWN, // supported, but unknown 376 376 .bit = 4, 377 377 }, ··· 450 450 .block_address = 0x2f, 451 451 .bit = 1, 452 452 }, 453 - .fn_super_swap = { // todo: reverse 453 + .fn_win_swap = { // todo: reverse 454 454 .address = 0xbf, 455 455 .bit = 4, 456 456 }, ··· 528 528 .block_address = MSI_EC_ADDR_UNSUPP, 529 529 .bit = 1, 530 530 }, 531 - .fn_super_swap = { 531 + .fn_win_swap = { 532 532 .address = 0xbf, // todo: reverse 533 533 .bit = 4, 534 534 }, ··· 608 608 .block_address = MSI_EC_ADDR_UNSUPP, 609 609 .bit = 1, 610 610 }, 611 - .fn_super_swap = { 611 + .fn_win_swap = { 612 612 .address = 0xbf, // needs testing 613 613 .bit = 4, 614 614 }, ··· 667 667 }, 668 668 }; 669 669 670 + static const char * const ALLOWED_FW_8[] __initconst = { 671 + "14F1EMS1.115", 672 + NULL 673 + }; 674 + 675 + static struct msi_ec_conf CONF8 __initdata = { 676 + .allowed_fw = ALLOWED_FW_8, 677 + .charge_control = { 678 + .address = 0xd7, 679 + .offset_start = 0x8a, 680 + .offset_end = 0x80, 681 + .range_min = 0x8a, 682 + .range_max = 0xe4, 683 + }, 684 + .webcam = { 685 + .address = 0x2e, 686 + .block_address = MSI_EC_ADDR_UNSUPP, 687 + .bit = 1, 688 + }, 689 + .fn_win_swap = { 690 + .address = 0xe8, 
691 + .bit = 4, 692 + }, 693 + .cooler_boost = { 694 + .address = 0x98, 695 + .bit = 7, 696 + }, 697 + .shift_mode = { 698 + .address = 0xd2, 699 + .modes = { 700 + { SM_ECO_NAME, 0xc2 }, 701 + { SM_COMFORT_NAME, 0xc1 }, 702 + { SM_SPORT_NAME, 0xc0 }, 703 + MSI_EC_MODE_NULL 704 + }, 705 + }, 706 + .super_battery = { 707 + .address = 0xeb, 708 + .mask = 0x0f, 709 + }, 710 + .fan_mode = { 711 + .address = 0xd4, 712 + .modes = { 713 + { FM_AUTO_NAME, 0x0d }, 714 + { FM_SILENT_NAME, 0x1d }, 715 + { FM_BASIC_NAME, 0x4d }, 716 + MSI_EC_MODE_NULL 717 + }, 718 + }, 719 + .cpu = { 720 + .rt_temp_address = 0x68, 721 + .rt_fan_speed_address = 0x71, 722 + .rt_fan_speed_base_min = 0x19, 723 + .rt_fan_speed_base_max = 0x37, 724 + .bs_fan_speed_address = MSI_EC_ADDR_UNSUPP, 725 + .bs_fan_speed_base_min = 0x00, 726 + .bs_fan_speed_base_max = 0x0f, 727 + }, 728 + .gpu = { 729 + .rt_temp_address = MSI_EC_ADDR_UNKNOWN, 730 + .rt_fan_speed_address = MSI_EC_ADDR_UNKNOWN, 731 + }, 732 + .leds = { 733 + .micmute_led_address = MSI_EC_ADDR_UNSUPP, 734 + .mute_led_address = 0x2d, 735 + .bit = 1, 736 + }, 737 + .kbd_bl = { 738 + .bl_mode_address = MSI_EC_ADDR_UNKNOWN, // ? 739 + .bl_modes = { 0x00, 0x08 }, // ? 740 + .max_mode = 1, // ? 
741 + .bl_state_address = MSI_EC_ADDR_UNSUPP, // not functional 742 + .state_base_value = 0x80, 743 + .max_state = 3, 744 + }, 745 + }; 746 + 747 + static const char * const ALLOWED_FW_9[] __initconst = { 748 + "14JKEMS1.104", 749 + NULL 750 + }; 751 + 752 + static struct msi_ec_conf CONF9 __initdata = { 753 + .allowed_fw = ALLOWED_FW_9, 754 + .charge_control = { 755 + .address = 0xef, 756 + .offset_start = 0x8a, 757 + .offset_end = 0x80, 758 + .range_min = 0x8a, 759 + .range_max = 0xe4, 760 + }, 761 + .webcam = { 762 + .address = 0x2e, 763 + .block_address = 0x2f, 764 + .bit = 1, 765 + }, 766 + .fn_win_swap = { 767 + .address = 0xbf, 768 + .bit = 4, 769 + }, 770 + .cooler_boost = { 771 + .address = 0x98, 772 + .bit = 7, 773 + }, 774 + .shift_mode = { 775 + .address = 0xf2, 776 + .modes = { 777 + { SM_ECO_NAME, 0xc2 }, 778 + { SM_COMFORT_NAME, 0xc1 }, 779 + { SM_SPORT_NAME, 0xc0 }, 780 + MSI_EC_MODE_NULL 781 + }, 782 + }, 783 + .super_battery = { 784 + .address = MSI_EC_ADDR_UNSUPP, // unsupported or enabled by ECO shift 785 + .mask = 0x0f, 786 + }, 787 + .fan_mode = { 788 + .address = 0xf4, 789 + .modes = { 790 + { FM_AUTO_NAME, 0x0d }, 791 + { FM_SILENT_NAME, 0x1d }, 792 + { FM_ADVANCED_NAME, 0x8d }, 793 + MSI_EC_MODE_NULL 794 + }, 795 + }, 796 + .cpu = { 797 + .rt_temp_address = 0x68, 798 + .rt_fan_speed_address = 0x71, 799 + .rt_fan_speed_base_min = 0x00, 800 + .rt_fan_speed_base_max = 0x96, 801 + .bs_fan_speed_address = MSI_EC_ADDR_UNSUPP, 802 + .bs_fan_speed_base_min = 0x00, 803 + .bs_fan_speed_base_max = 0x0f, 804 + }, 805 + .gpu = { 806 + .rt_temp_address = MSI_EC_ADDR_UNSUPP, 807 + .rt_fan_speed_address = MSI_EC_ADDR_UNSUPP, 808 + }, 809 + .leds = { 810 + .micmute_led_address = 0x2b, 811 + .mute_led_address = 0x2c, 812 + .bit = 2, 813 + }, 814 + .kbd_bl = { 815 + .bl_mode_address = MSI_EC_ADDR_UNSUPP, // not presented in MSI app 816 + .bl_modes = { 0x00, 0x08 }, 817 + .max_mode = 1, 818 + .bl_state_address = 0xf3, 819 + .state_base_value = 0x80, 820 + 
.max_state = 3, 821 + }, 822 + }; 823 + 824 + static const char * const ALLOWED_FW_10[] __initconst = { 825 + "1582EMS1.107", // GF66 11UC 826 + NULL 827 + }; 828 + 829 + static struct msi_ec_conf CONF10 __initdata = { 830 + .allowed_fw = ALLOWED_FW_10, 831 + .charge_control = { 832 + .address = 0xd7, 833 + .offset_start = 0x8a, 834 + .offset_end = 0x80, 835 + .range_min = 0x8a, 836 + .range_max = 0xe4, 837 + }, 838 + .webcam = { 839 + .address = 0x2e, 840 + .block_address = 0x2f, 841 + .bit = 1, 842 + }, 843 + .fn_win_swap = { 844 + .address = MSI_EC_ADDR_UNSUPP, 845 + .bit = 4, 846 + }, 847 + .cooler_boost = { 848 + .address = 0x98, 849 + .bit = 7, 850 + }, 851 + .shift_mode = { 852 + .address = 0xd2, 853 + .modes = { 854 + { SM_ECO_NAME, 0xc2 }, 855 + { SM_COMFORT_NAME, 0xc1 }, 856 + { SM_SPORT_NAME, 0xc0 }, 857 + { SM_TURBO_NAME, 0xc4 }, 858 + MSI_EC_MODE_NULL 859 + }, 860 + }, 861 + .super_battery = { 862 + .address = 0xe5, 863 + .mask = 0x0f, 864 + }, 865 + .fan_mode = { 866 + .address = 0xd4, 867 + .modes = { 868 + { FM_AUTO_NAME, 0x0d }, 869 + { FM_SILENT_NAME, 0x1d }, 870 + { FM_ADVANCED_NAME, 0x8d }, 871 + MSI_EC_MODE_NULL 872 + }, 873 + }, 874 + .cpu = { 875 + .rt_temp_address = 0x68, 876 + .rt_fan_speed_address = 0x71, // ? 877 + .rt_fan_speed_base_min = 0x19, 878 + .rt_fan_speed_base_max = 0x37, 879 + .bs_fan_speed_address = MSI_EC_ADDR_UNKNOWN, // ? 
880 + .bs_fan_speed_base_min = 0x00, 881 + .bs_fan_speed_base_max = 0x0f, 882 + }, 883 + .gpu = { 884 + .rt_temp_address = 0x80, 885 + .rt_fan_speed_address = 0x89, 886 + }, 887 + .leds = { 888 + .micmute_led_address = 0x2c, 889 + .mute_led_address = 0x2d, 890 + .bit = 1, 891 + }, 892 + .kbd_bl = { 893 + .bl_mode_address = 0x2c, 894 + .bl_modes = { 0x00, 0x08 }, 895 + .max_mode = 1, 896 + .bl_state_address = 0xd3, 897 + .state_base_value = 0x80, 898 + .max_state = 3, 899 + }, 900 + }; 901 + 902 + static const char * const ALLOWED_FW_11[] __initconst = { 903 + "16S6EMS1.111", // Prestige 15 a11scx 904 + "1552EMS1.115", // Modern 15 a11m 905 + NULL 906 + }; 907 + 908 + static struct msi_ec_conf CONF11 __initdata = { 909 + .allowed_fw = ALLOWED_FW_11, 910 + .charge_control = { 911 + .address = 0xd7, 912 + .offset_start = 0x8a, 913 + .offset_end = 0x80, 914 + .range_min = 0x8a, 915 + .range_max = 0xe4, 916 + }, 917 + .webcam = { 918 + .address = 0x2e, 919 + .block_address = MSI_EC_ADDR_UNKNOWN, 920 + .bit = 1, 921 + }, 922 + .fn_win_swap = { 923 + .address = 0xe8, 924 + .bit = 4, 925 + }, 926 + .cooler_boost = { 927 + .address = 0x98, 928 + .bit = 7, 929 + }, 930 + .shift_mode = { 931 + .address = 0xd2, 932 + .modes = { 933 + { SM_ECO_NAME, 0xc2 }, 934 + { SM_COMFORT_NAME, 0xc1 }, 935 + { SM_SPORT_NAME, 0xc0 }, 936 + MSI_EC_MODE_NULL 937 + }, 938 + }, 939 + .super_battery = { 940 + .address = 0xeb, 941 + .mask = 0x0f, 942 + }, 943 + .fan_mode = { 944 + .address = 0xd4, 945 + .modes = { 946 + { FM_AUTO_NAME, 0x0d }, 947 + { FM_SILENT_NAME, 0x1d }, 948 + { FM_ADVANCED_NAME, 0x4d }, 949 + MSI_EC_MODE_NULL 950 + }, 951 + }, 952 + .cpu = { 953 + .rt_temp_address = 0x68, 954 + .rt_fan_speed_address = MSI_EC_ADDR_UNSUPP, 955 + .bs_fan_speed_address = MSI_EC_ADDR_UNSUPP, 956 + }, 957 + .gpu = { 958 + .rt_temp_address = MSI_EC_ADDR_UNSUPP, 959 + .rt_fan_speed_address = MSI_EC_ADDR_UNSUPP, 960 + }, 961 + .leds = { 962 + .micmute_led_address = 0x2c, 963 + .mute_led_address = 
0x2d, 964 + .bit = 1, 965 + }, 966 + .kbd_bl = { 967 + .bl_mode_address = MSI_EC_ADDR_UNKNOWN, 968 + .bl_modes = {}, // ? 969 + .max_mode = 1, // ? 970 + .bl_state_address = 0xd3, 971 + .state_base_value = 0x80, 972 + .max_state = 3, 973 + }, 974 + }; 975 + 976 + static const char * const ALLOWED_FW_12[] __initconst = { 977 + "16R6EMS1.104", // GF63 Thin 11UC 978 + NULL 979 + }; 980 + 981 + static struct msi_ec_conf CONF12 __initdata = { 982 + .allowed_fw = ALLOWED_FW_12, 983 + .charge_control = { 984 + .address = 0xd7, 985 + .offset_start = 0x8a, 986 + .offset_end = 0x80, 987 + .range_min = 0x8a, 988 + .range_max = 0xe4, 989 + }, 990 + .webcam = { 991 + .address = 0x2e, 992 + .block_address = 0x2f, 993 + .bit = 1, 994 + }, 995 + .fn_win_swap = { 996 + .address = 0xe8, 997 + .bit = 4, 998 + }, 999 + .cooler_boost = { 1000 + .address = 0x98, 1001 + .bit = 7, 1002 + }, 1003 + .shift_mode = { 1004 + .address = 0xd2, 1005 + .modes = { 1006 + { SM_ECO_NAME, 0xc2 }, 1007 + { SM_COMFORT_NAME, 0xc1 }, 1008 + { SM_SPORT_NAME, 0xc0 }, 1009 + { SM_TURBO_NAME, 0xc4 }, 1010 + MSI_EC_MODE_NULL 1011 + }, 1012 + }, 1013 + .super_battery = { 1014 + .address = MSI_EC_ADDR_UNSUPP, // 0xeb 1015 + .mask = 0x0f, // 00, 0f 1016 + }, 1017 + .fan_mode = { 1018 + .address = 0xd4, 1019 + .modes = { 1020 + { FM_AUTO_NAME, 0x0d }, 1021 + { FM_SILENT_NAME, 0x1d }, 1022 + { FM_ADVANCED_NAME, 0x8d }, 1023 + MSI_EC_MODE_NULL 1024 + }, 1025 + }, 1026 + .cpu = { 1027 + .rt_temp_address = 0x68, 1028 + .rt_fan_speed_address = 0x71, 1029 + .rt_fan_speed_base_min = 0x19, 1030 + .rt_fan_speed_base_max = 0x37, 1031 + .bs_fan_speed_address = MSI_EC_ADDR_UNSUPP, 1032 + .bs_fan_speed_base_min = 0x00, 1033 + .bs_fan_speed_base_max = 0x0f, 1034 + }, 1035 + .gpu = { 1036 + .rt_temp_address = MSI_EC_ADDR_UNSUPP, 1037 + .rt_fan_speed_address = 0x89, 1038 + }, 1039 + .leds = { 1040 + .micmute_led_address = MSI_EC_ADDR_UNSUPP, 1041 + .mute_led_address = 0x2d, 1042 + .bit = 1, 1043 + }, 1044 + .kbd_bl = { 1045 + 
.bl_mode_address = MSI_EC_ADDR_UNKNOWN, 1046 + .bl_modes = { 0x00, 0x08 }, 1047 + .max_mode = 1, 1048 + .bl_state_address = 0xd3, 1049 + .state_base_value = 0x80, 1050 + .max_state = 3, 1051 + }, 1052 + }; 1053 + 1054 + static const char * const ALLOWED_FW_13[] __initconst = { 1055 + "1594EMS1.109", // MSI Prestige 16 Studio A13VE 1056 + NULL 1057 + }; 1058 + 1059 + static struct msi_ec_conf CONF13 __initdata = { 1060 + .allowed_fw = ALLOWED_FW_13, 1061 + .charge_control = { 1062 + .address = 0xd7, 1063 + .offset_start = 0x8a, 1064 + .offset_end = 0x80, 1065 + .range_min = 0x8a, 1066 + .range_max = 0xe4, 1067 + }, 1068 + .webcam = { 1069 + .address = 0x2e, 1070 + .block_address = 0x2f, 1071 + .bit = 1, 1072 + }, 1073 + .fn_win_swap = { 1074 + .address = 0xe8, 1075 + .bit = 4, // 0x00-0x10 1076 + }, 1077 + .cooler_boost = { 1078 + .address = 0x98, 1079 + .bit = 7, 1080 + }, 1081 + .shift_mode = { 1082 + .address = 0xd2, 1083 + .modes = { 1084 + { SM_ECO_NAME, 0xc2 }, // super battery 1085 + { SM_COMFORT_NAME, 0xc1 }, // balanced 1086 + { SM_TURBO_NAME, 0xc4 }, // extreme 1087 + MSI_EC_MODE_NULL 1088 + }, 1089 + }, 1090 + .super_battery = { 1091 + .address = MSI_EC_ADDR_UNSUPP, 1092 + .mask = 0x0f, // 00, 0f 1093 + }, 1094 + .fan_mode = { 1095 + .address = 0xd4, 1096 + .modes = { 1097 + { FM_AUTO_NAME, 0x0d }, 1098 + { FM_SILENT_NAME, 0x1d }, 1099 + { FM_ADVANCED_NAME, 0x8d }, 1100 + MSI_EC_MODE_NULL 1101 + }, 1102 + }, 1103 + .cpu = { 1104 + .rt_temp_address = 0x68, 1105 + .rt_fan_speed_address = 0x71, // 0x0-0x96 1106 + .rt_fan_speed_base_min = 0x00, 1107 + .rt_fan_speed_base_max = 0x96, 1108 + .bs_fan_speed_address = MSI_EC_ADDR_UNSUPP, 1109 + .bs_fan_speed_base_min = 0x00, 1110 + .bs_fan_speed_base_max = 0x0f, 1111 + }, 1112 + .gpu = { 1113 + .rt_temp_address = 0x80, 1114 + .rt_fan_speed_address = 0x89, 1115 + }, 1116 + .leds = { 1117 + .micmute_led_address = 0x2c, 1118 + .mute_led_address = 0x2d, 1119 + .bit = 1, 1120 + }, 1121 + .kbd_bl = { 1122 + 
.bl_mode_address = 0x2c, // KB auto turn off 1123 + .bl_modes = { 0x00, 0x08 }, // always on; off after 10 sec 1124 + .max_mode = 1, 1125 + .bl_state_address = 0xd3, 1126 + .state_base_value = 0x80, 1127 + .max_state = 3, 1128 + }, 1129 + }; 1130 + 670 1131 static struct msi_ec_conf *CONFIGS[] __initdata = { 671 1132 &CONF0, 672 1133 &CONF1, ··· 1137 676 &CONF5, 1138 677 &CONF6, 1139 678 &CONF7, 679 + &CONF8, 680 + &CONF9, 681 + &CONF10, 682 + &CONF11, 683 + &CONF12, 684 + &CONF13, 1140 685 NULL 1141 686 }; 1142 687
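The six new msi-ec configurations (CONF8–CONF13) all plug into the same probe-time selection: each conf carries a NULL-terminated list of EC firmware strings, and the driver walks the NULL-terminated `CONFIGS` table until one matches. A minimal model of that lookup — the firmware strings come from the diff, the struct is pared down to just the matching field:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Pared-down model of msi_ec_conf: only the firmware match list. */
struct conf { const char *const *allowed_fw; };

static const char *const fw8[] = { "14F1EMS1.115", NULL };
static const char *const fw9[] = { "14JKEMS1.104", NULL };
static const struct conf conf8 = { fw8 }, conf9 = { fw9 };
static const struct conf *const configs[] = { &conf8, &conf9, NULL };

/* Walk the NULL-terminated conf table, then each conf's NULL-terminated
 * firmware list, returning the first conf that claims this EC firmware. */
static const struct conf *find_conf(const char *fw_version)
{
    for (int i = 0; configs[i]; i++)
        for (int j = 0; configs[i]->allowed_fw[j]; j++)
            if (!strcmp(configs[i]->allowed_fw[j], fw_version))
                return configs[i];
    return NULL;
}
```

A conf's lists can hold several firmware strings (CONF11 matches two machines), which is why the inner loop exists.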
+2 -2
drivers/platform/x86/msi-ec.h
··· 40 40 int bit; 41 41 }; 42 42 43 - struct msi_ec_fn_super_swap_conf { 43 + struct msi_ec_fn_win_swap_conf { 44 44 int address; 45 45 int bit; 46 46 }; ··· 108 108 109 109 struct msi_ec_charge_control_conf charge_control; 110 110 struct msi_ec_webcam_conf webcam; 111 - struct msi_ec_fn_super_swap_conf fn_super_swap; 111 + struct msi_ec_fn_win_swap_conf fn_win_swap; 112 112 struct msi_ec_cooler_boost_conf cooler_boost; 113 113 struct msi_ec_shift_mode_conf shift_mode; 114 114 struct msi_ec_super_battery_conf super_battery;
+2 -4
drivers/platform/x86/sel3350-platform.c
··· 218 218 return rs; 219 219 } 220 220 221 - static int sel3350_remove(struct platform_device *pdev) 221 + static void sel3350_remove(struct platform_device *pdev) 222 222 { 223 223 struct sel3350_data *sel3350 = platform_get_drvdata(pdev); 224 224 225 225 platform_device_unregister(sel3350->leds_pdev); 226 226 gpiod_remove_lookup_table(&sel3350_gpios_table); 227 227 gpiod_remove_lookup_table(&sel3350_leds_table); 228 - 229 - return 0; 230 228 } 231 229 232 230 static const struct acpi_device_id sel3350_device_ids[] = { ··· 235 237 236 238 static struct platform_driver sel3350_platform_driver = { 237 239 .probe = sel3350_probe, 238 - .remove = sel3350_remove, 240 + .remove_new = sel3350_remove, 239 241 .driver = { 240 242 .name = "sel3350-platform", 241 243 .acpi_match_table = sel3350_device_ids,
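This hunk, like the mlx-platform and simatic-ipc-batt ones, converts `.remove` to the void-returning `.remove_new` callback: the driver core cannot meaningfully act on a failed remove, so the `return 0;` boilerplate goes away. A trivial userspace sketch of the converted shape (names are hypothetical, not a real driver):

```c
#include <assert.h>

/* Hypothetical stand-ins for the kernel types involved. */
struct fake_platform_device { int dummy; };

static int cleanup_calls;

/* New style: void return. Teardown is unconditional; there is nothing
 * useful to report back, so the old `return 0;` is simply dropped. */
static void fake_remove_new(struct fake_platform_device *pdev)
{
    (void)pdev;
    cleanup_calls++;
}
```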
+3 -3
drivers/platform/x86/siemens/simatic-ipc-batt-apollolake.c
··· 25 25 }, 26 26 }; 27 27 28 - static int simatic_ipc_batt_apollolake_remove(struct platform_device *pdev) 28 + static void simatic_ipc_batt_apollolake_remove(struct platform_device *pdev) 29 29 { 30 - return simatic_ipc_batt_remove(pdev, &simatic_ipc_batt_gpio_table_127e); 30 + simatic_ipc_batt_remove(pdev, &simatic_ipc_batt_gpio_table_127e); 31 31 } 32 32 33 33 static int simatic_ipc_batt_apollolake_probe(struct platform_device *pdev) ··· 37 37 38 38 static struct platform_driver simatic_ipc_batt_driver = { 39 39 .probe = simatic_ipc_batt_apollolake_probe, 40 - .remove = simatic_ipc_batt_apollolake_remove, 40 + .remove_new = simatic_ipc_batt_apollolake_remove, 41 41 .driver = { 42 42 .name = KBUILD_MODNAME, 43 43 },
+3 -3
drivers/platform/x86/siemens/simatic-ipc-batt-elkhartlake.c
··· 25 25 }, 26 26 }; 27 27 28 - static int simatic_ipc_batt_elkhartlake_remove(struct platform_device *pdev) 28 + static void simatic_ipc_batt_elkhartlake_remove(struct platform_device *pdev) 29 29 { 30 - return simatic_ipc_batt_remove(pdev, &simatic_ipc_batt_gpio_table_bx_21a); 30 + simatic_ipc_batt_remove(pdev, &simatic_ipc_batt_gpio_table_bx_21a); 31 31 } 32 32 33 33 static int simatic_ipc_batt_elkhartlake_probe(struct platform_device *pdev) ··· 37 37 38 38 static struct platform_driver simatic_ipc_batt_driver = { 39 39 .probe = simatic_ipc_batt_elkhartlake_probe, 40 - .remove = simatic_ipc_batt_elkhartlake_remove, 40 + .remove_new = simatic_ipc_batt_elkhartlake_remove, 41 41 .driver = { 42 42 .name = KBUILD_MODNAME, 43 43 },
+3 -3
drivers/platform/x86/siemens/simatic-ipc-batt-f7188x.c
··· 45 45 } 46 46 }; 47 47 48 - static int simatic_ipc_batt_f7188x_remove(struct platform_device *pdev) 48 + static void simatic_ipc_batt_f7188x_remove(struct platform_device *pdev) 49 49 { 50 - return simatic_ipc_batt_remove(pdev, batt_lookup_table); 50 + simatic_ipc_batt_remove(pdev, batt_lookup_table); 51 51 } 52 52 53 53 static int simatic_ipc_batt_f7188x_probe(struct platform_device *pdev) ··· 73 73 74 74 static struct platform_driver simatic_ipc_batt_driver = { 75 75 .probe = simatic_ipc_batt_f7188x_probe, 76 - .remove = simatic_ipc_batt_f7188x_remove, 76 + .remove_new = simatic_ipc_batt_f7188x_remove, 77 77 .driver = { 78 78 .name = KBUILD_MODNAME, 79 79 },
+4 -5
drivers/platform/x86/siemens/simatic-ipc-batt.c
··· 146 146 .info = simatic_ipc_batt_info, 147 147 }; 148 148 149 - int simatic_ipc_batt_remove(struct platform_device *pdev, struct gpiod_lookup_table *table) 149 + void simatic_ipc_batt_remove(struct platform_device *pdev, struct gpiod_lookup_table *table) 150 150 { 151 151 gpiod_remove_lookup_table(table); 152 - return 0; 153 152 } 154 153 EXPORT_SYMBOL_GPL(simatic_ipc_batt_remove); 155 154 ··· 227 228 } 228 229 EXPORT_SYMBOL_GPL(simatic_ipc_batt_probe); 229 230 230 - static int simatic_ipc_batt_io_remove(struct platform_device *pdev) 231 + static void simatic_ipc_batt_io_remove(struct platform_device *pdev) 231 232 { 232 - return simatic_ipc_batt_remove(pdev, NULL); 233 + simatic_ipc_batt_remove(pdev, NULL); 233 234 } 234 235 235 236 static int simatic_ipc_batt_io_probe(struct platform_device *pdev) ··· 239 240 240 241 static struct platform_driver simatic_ipc_batt_driver = { 241 242 .probe = simatic_ipc_batt_io_probe, 242 - .remove = simatic_ipc_batt_io_remove, 243 + .remove_new = simatic_ipc_batt_io_remove, 243 244 .driver = { 244 245 .name = KBUILD_MODNAME, 245 246 },
+2 -2
drivers/platform/x86/siemens/simatic-ipc-batt.h
··· 14 14 int simatic_ipc_batt_probe(struct platform_device *pdev, 15 15 struct gpiod_lookup_table *table); 16 16 17 - int simatic_ipc_batt_remove(struct platform_device *pdev, 18 - struct gpiod_lookup_table *table); 17 + void simatic_ipc_batt_remove(struct platform_device *pdev, 18 + struct gpiod_lookup_table *table); 19 19 20 20 #endif /* _SIMATIC_IPC_BATT_H */
+156 -58
drivers/platform/x86/think-lmi.c
··· 15 15 #include <linux/errno.h> 16 16 #include <linux/fs.h> 17 17 #include <linux/mutex.h> 18 - #include <linux/string.h> 18 + #include <linux/string_helpers.h> 19 19 #include <linux/types.h> 20 20 #include <linux/dmi.h> 21 21 #include <linux/wmi.h> ··· 197 197 static struct think_lmi tlmi_priv; 198 198 static struct class *fw_attr_class; 199 199 static DEFINE_MUTEX(tlmi_mutex); 200 - 201 - /* ------ Utility functions ------------*/ 202 - /* Strip out CR if one is present */ 203 - static void strip_cr(char *str) 204 - { 205 - char *p = strchrnul(str, '\n'); 206 - *p = '\0'; 207 - } 208 200 209 201 /* Convert BIOS WMI error string to suitable error code */ 210 202 static int tlmi_errstr_to_err(const char *errstr) ··· 403 411 404 412 strscpy(setting->password, buf, setting->maxlen); 405 413 /* Strip out CR if one is present, setting password won't work if it is present */ 406 - strip_cr(setting->password); 414 + strreplace(setting->password, '\n', '\0'); 407 415 return count; 408 416 } 409 417 ··· 424 432 if (!tlmi_priv.can_set_bios_password) 425 433 return -EOPNOTSUPP; 426 434 427 - new_pwd = kstrdup(buf, GFP_KERNEL); 435 + /* Strip out CR if one is present, setting password won't work if it is present */ 436 + new_pwd = kstrdup_and_replace(buf, '\n', '\0', GFP_KERNEL); 428 437 if (!new_pwd) 429 438 return -ENOMEM; 430 - 431 - /* Strip out CR if one is present, setting password won't work if it is present */ 432 - strip_cr(new_pwd); 433 439 434 440 /* Use lock in case multiple WMI operations needed */ 435 441 mutex_lock(&tlmi_mutex); ··· 699 709 if (!setting->signature || !setting->signature[0]) 700 710 return -EACCES; 701 711 702 - passwd = kstrdup(buf, GFP_KERNEL); 712 + /* Strip out CR if one is present */ 713 + passwd = kstrdup_and_replace(buf, '\n', '\0', GFP_KERNEL); 703 714 if (!passwd) 704 715 return -ENOMEM; 705 - 706 - /* Strip out CR if one is present */ 707 - strip_cr(passwd); 708 716 709 717 /* Format: 'Password,Signature' */ 710 718 auth_str = 
kasprintf(GFP_KERNEL, "%s,%s", passwd, setting->signature); ··· 753 765 return ret ?: count; 754 766 } 755 767 756 - new_cert = kstrdup(buf, GFP_KERNEL); 768 + /* Strip out CR if one is present */ 769 + new_cert = kstrdup_and_replace(buf, '\n', '\0', GFP_KERNEL); 757 770 if (!new_cert) 758 771 return -ENOMEM; 759 - /* Strip out CR if one is present */ 760 - strip_cr(new_cert); 761 772 762 773 if (setting->cert_installed) { 763 774 /* Certificate is installed so this is an update */ ··· 804 817 if (!tlmi_priv.certificate_support) 805 818 return -EOPNOTSUPP; 806 819 807 - new_signature = kstrdup(buf, GFP_KERNEL); 820 + /* Strip out CR if one is present */ 821 + new_signature = kstrdup_and_replace(buf, '\n', '\0', GFP_KERNEL); 808 822 if (!new_signature) 809 823 return -ENOMEM; 810 - 811 - /* Strip out CR if one is present */ 812 - strip_cr(new_signature); 813 824 814 825 /* Free any previous signature */ 815 826 kfree(setting->signature); ··· 831 846 if (!tlmi_priv.certificate_support) 832 847 return -EOPNOTSUPP; 833 848 834 - new_signature = kstrdup(buf, GFP_KERNEL); 849 + /* Strip out CR if one is present */ 850 + new_signature = kstrdup_and_replace(buf, '\n', '\0', GFP_KERNEL); 835 851 if (!new_signature) 836 852 return -ENOMEM; 837 - 838 - /* Strip out CR if one is present */ 839 - strip_cr(new_signature); 840 853 841 854 /* Free any previous signature */ 842 855 kfree(setting->save_signature); ··· 913 930 static ssize_t current_value_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) 914 931 { 915 932 struct tlmi_attr_setting *setting = to_tlmi_attr_setting(kobj); 916 - char *item, *value, *p; 933 + char *item, *value; 917 934 int ret; 918 935 919 936 ret = tlmi_setting(setting->index, &item, LENOVO_BIOS_SETTING_GUID); ··· 926 943 ret = -EINVAL; 927 944 else { 928 945 /* On Workstations remove the Options part after the value */ 929 - p = strchrnul(value, ';'); 930 - *p = '\0'; 946 + strreplace(value, ';', '\0'); 931 947 ret = sysfs_emit(buf, 
"%s\n", value + 1); 932 948 } 933 949 kfree(item); ··· 967 985 if (!tlmi_priv.can_set_bios_settings) 968 986 return -EOPNOTSUPP; 969 987 970 - new_setting = kstrdup(buf, GFP_KERNEL); 971 - if (!new_setting) 972 - return -ENOMEM; 988 + /* 989 + * If we are using bulk saves a reboot should be done once save has 990 + * been called 991 + */ 992 + if (tlmi_priv.save_mode == TLMI_SAVE_BULK && tlmi_priv.reboot_required) 993 + return -EPERM; 973 994 974 995 /* Strip out CR if one is present */ 975 - strip_cr(new_setting); 996 + new_setting = kstrdup_and_replace(buf, '\n', '\0', GFP_KERNEL); 997 + if (!new_setting) 998 + return -ENOMEM; 976 999 977 1000 /* Use lock in case multiple WMI operations needed */ 978 1001 mutex_lock(&tlmi_mutex); ··· 998 1011 ret = tlmi_simple_call(LENOVO_SET_BIOS_SETTING_CERT_GUID, set_str); 999 1012 if (ret) 1000 1013 goto out; 1001 - ret = tlmi_simple_call(LENOVO_SAVE_BIOS_SETTING_CERT_GUID, 1002 - tlmi_priv.pwd_admin->save_signature); 1003 - if (ret) 1004 - goto out; 1014 + if (tlmi_priv.save_mode == TLMI_SAVE_BULK) 1015 + tlmi_priv.save_required = true; 1016 + else 1017 + ret = tlmi_simple_call(LENOVO_SAVE_BIOS_SETTING_CERT_GUID, 1018 + tlmi_priv.pwd_admin->save_signature); 1005 1019 } else if (tlmi_priv.opcode_support) { 1006 1020 /* 1007 1021 * If opcode support is present use that interface. 
··· 1021 1033 if (ret) 1022 1034 goto out; 1023 1035 1024 - if (tlmi_priv.pwd_admin->valid && tlmi_priv.pwd_admin->password[0]) { 1025 - ret = tlmi_opcode_setting("WmiOpcodePasswordAdmin", 1026 - tlmi_priv.pwd_admin->password); 1027 - if (ret) 1028 - goto out; 1036 + if (tlmi_priv.save_mode == TLMI_SAVE_BULK) { 1037 + tlmi_priv.save_required = true; 1038 + } else { 1039 + if (tlmi_priv.pwd_admin->valid && tlmi_priv.pwd_admin->password[0]) { 1040 + ret = tlmi_opcode_setting("WmiOpcodePasswordAdmin", 1041 + tlmi_priv.pwd_admin->password); 1042 + if (ret) 1043 + goto out; 1044 + } 1045 + ret = tlmi_save_bios_settings(""); 1029 1046 } 1030 - 1031 - ret = tlmi_save_bios_settings(""); 1032 1047 } else { /* old non-opcode based authentication method (deprecated) */ 1033 1048 if (tlmi_priv.pwd_admin->valid && tlmi_priv.pwd_admin->password[0]) { 1034 1049 auth_str = kasprintf(GFP_KERNEL, "%s,%s,%s;", ··· 1059 1068 if (ret) 1060 1069 goto out; 1061 1070 1062 - if (auth_str) 1063 - ret = tlmi_save_bios_settings(auth_str); 1064 - else 1065 - ret = tlmi_save_bios_settings(""); 1071 + if (tlmi_priv.save_mode == TLMI_SAVE_BULK) { 1072 + tlmi_priv.save_required = true; 1073 + } else { 1074 + if (auth_str) 1075 + ret = tlmi_save_bios_settings(auth_str); 1076 + else 1077 + ret = tlmi_save_bios_settings(""); 1078 + } 1066 1079 } 1067 1080 if (!ret && !tlmi_priv.pending_changes) { 1068 1081 tlmi_priv.pending_changes = true; ··· 1147 1152 1148 1153 static struct kobj_attribute pending_reboot = __ATTR_RO(pending_reboot); 1149 1154 1155 + static const char * const save_mode_strings[] = { 1156 + [TLMI_SAVE_SINGLE] = "single", 1157 + [TLMI_SAVE_BULK] = "bulk", 1158 + [TLMI_SAVE_SAVE] = "save" 1159 + }; 1160 + 1161 + static ssize_t save_settings_show(struct kobject *kobj, struct kobj_attribute *attr, 1162 + char *buf) 1163 + { 1164 + /* Check that setting is valid */ 1165 + if (WARN_ON(tlmi_priv.save_mode < TLMI_SAVE_SINGLE || 1166 + tlmi_priv.save_mode > TLMI_SAVE_BULK)) 1167 + return 
-EIO; 1168 + return sysfs_emit(buf, "%s\n", save_mode_strings[tlmi_priv.save_mode]); 1169 + } 1170 + 1171 + static ssize_t save_settings_store(struct kobject *kobj, struct kobj_attribute *attr, 1172 + const char *buf, size_t count) 1173 + { 1174 + char *auth_str = NULL; 1175 + int ret = 0; 1176 + int cmd; 1177 + 1178 + cmd = sysfs_match_string(save_mode_strings, buf); 1179 + if (cmd < 0) 1180 + return cmd; 1181 + 1182 + /* Use lock in case multiple WMI operations needed */ 1183 + mutex_lock(&tlmi_mutex); 1184 + 1185 + switch (cmd) { 1186 + case TLMI_SAVE_SINGLE: 1187 + case TLMI_SAVE_BULK: 1188 + tlmi_priv.save_mode = cmd; 1189 + goto out; 1190 + case TLMI_SAVE_SAVE: 1191 + /* Check if supported*/ 1192 + if (!tlmi_priv.can_set_bios_settings || 1193 + tlmi_priv.save_mode == TLMI_SAVE_SINGLE) { 1194 + ret = -EOPNOTSUPP; 1195 + goto out; 1196 + } 1197 + /* Check there is actually something to save */ 1198 + if (!tlmi_priv.save_required) { 1199 + ret = -ENOENT; 1200 + goto out; 1201 + } 1202 + /* Check if certificate authentication is enabled and active */ 1203 + if (tlmi_priv.certificate_support && tlmi_priv.pwd_admin->cert_installed) { 1204 + if (!tlmi_priv.pwd_admin->signature || 1205 + !tlmi_priv.pwd_admin->save_signature) { 1206 + ret = -EINVAL; 1207 + goto out; 1208 + } 1209 + ret = tlmi_simple_call(LENOVO_SAVE_BIOS_SETTING_CERT_GUID, 1210 + tlmi_priv.pwd_admin->save_signature); 1211 + if (ret) 1212 + goto out; 1213 + } else if (tlmi_priv.opcode_support) { 1214 + if (tlmi_priv.pwd_admin->valid && tlmi_priv.pwd_admin->password[0]) { 1215 + ret = tlmi_opcode_setting("WmiOpcodePasswordAdmin", 1216 + tlmi_priv.pwd_admin->password); 1217 + if (ret) 1218 + goto out; 1219 + } 1220 + ret = tlmi_save_bios_settings(""); 1221 + } else { /* old non-opcode based authentication method (deprecated) */ 1222 + if (tlmi_priv.pwd_admin->valid && tlmi_priv.pwd_admin->password[0]) { 1223 + auth_str = kasprintf(GFP_KERNEL, "%s,%s,%s;", 1224 + tlmi_priv.pwd_admin->password, 1225 + 
encoding_options[tlmi_priv.pwd_admin->encoding], 1226 + tlmi_priv.pwd_admin->kbdlang); 1227 + if (!auth_str) { 1228 + ret = -ENOMEM; 1229 + goto out; 1230 + } 1231 + } 1232 + 1233 + if (auth_str) 1234 + ret = tlmi_save_bios_settings(auth_str); 1235 + else 1236 + ret = tlmi_save_bios_settings(""); 1237 + } 1238 + tlmi_priv.save_required = false; 1239 + tlmi_priv.reboot_required = true; 1240 + 1241 + if (!ret && !tlmi_priv.pending_changes) { 1242 + tlmi_priv.pending_changes = true; 1243 + /* let userland know it may need to check reboot pending again */ 1244 + kobject_uevent(&tlmi_priv.class_dev->kobj, KOBJ_CHANGE); 1245 + } 1246 + break; 1247 + } 1248 + out: 1249 + mutex_unlock(&tlmi_mutex); 1250 + kfree(auth_str); 1251 + return ret ?: count; 1252 + } 1253 + 1254 + static struct kobj_attribute save_settings = __ATTR_RW(save_settings); 1255 + 1150 1256 /* ---- Debug interface--------------------------------------------------------- */ 1151 1257 static ssize_t debug_cmd_store(struct kobject *kobj, struct kobj_attribute *attr, 1152 1258 const char *buf, size_t count) ··· 1259 1163 if (!tlmi_priv.can_debug_cmd) 1260 1164 return -EOPNOTSUPP; 1261 1165 1262 - new_setting = kstrdup(buf, GFP_KERNEL); 1166 + /* Strip out CR if one is present */ 1167 + new_setting = kstrdup_and_replace(buf, '\n', '\0', GFP_KERNEL); 1263 1168 if (!new_setting) 1264 1169 return -ENOMEM; 1265 - 1266 - /* Strip out CR if one is present */ 1267 - strip_cr(new_setting); 1268 1170 1269 1171 if (tlmi_priv.pwd_admin->valid && tlmi_priv.pwd_admin->password[0]) { 1270 1172 auth_str = kasprintf(GFP_KERNEL, "%s,%s,%s;", ··· 1315 1221 } 1316 1222 } 1317 1223 sysfs_remove_file(&tlmi_priv.attribute_kset->kobj, &pending_reboot.attr); 1224 + sysfs_remove_file(&tlmi_priv.attribute_kset->kobj, &save_settings.attr); 1225 + 1318 1226 if (tlmi_priv.can_debug_cmd && debug_support) 1319 1227 sysfs_remove_file(&tlmi_priv.attribute_kset->kobj, &debug_cmd.attr); 1320 1228 ··· 1411 1315 } 1412 1316 1413 1317 ret = 
sysfs_create_file(&tlmi_priv.attribute_kset->kobj, &pending_reboot.attr); 1318 + if (ret) 1319 + goto fail_create_attr; 1320 + 1321 + ret = sysfs_create_file(&tlmi_priv.attribute_kset->kobj, &save_settings.attr); 1414 1322 if (ret) 1415 1323 goto fail_create_attr; 1416 1324 ··· 1547 1447 for (i = 0; i < TLMI_SETTINGS_COUNT; ++i) { 1548 1448 struct tlmi_attr_setting *setting; 1549 1449 char *item = NULL; 1550 - char *p; 1551 1450 1552 1451 tlmi_priv.setting[i] = NULL; 1553 1452 ret = tlmi_setting(i, &item, LENOVO_BIOS_SETTING_GUID); ··· 1563 1464 strreplace(item, '/', '\\'); 1564 1465 1565 1466 /* Remove the value part */ 1566 - p = strchrnul(item, ','); 1567 - *p = '\0'; 1467 + strreplace(item, ',', '\0'); 1568 1468 1569 1469 /* Create a setting entry */ 1570 1470 setting = kzalloc(sizeof(*setting), GFP_KERNEL);
+16
drivers/platform/x86/think-lmi.h
··· 27 27 TLMI_LEVEL_MASTER, 28 28 }; 29 29 30 + /* 31 + * There is a limit on the number of WMI operations you can do if you use 32 + * the default implementation of saving on every set. This is due to a 33 + * limitation in the EFI variable space used. 34 + * Provide a 'bulk save' mode where you can manually trigger the save, and can 35 + * therefore set unlimited variables - for users that need it. 36 + */ 37 + enum save_mode { 38 + TLMI_SAVE_SINGLE, 39 + TLMI_SAVE_BULK, 40 + TLMI_SAVE_SAVE, 41 + }; 42 + 30 43 /* password configuration details */ 31 44 struct tlmi_pwdcfg_core { 32 45 uint32_t password_mode; ··· 99 86 bool can_debug_cmd; 100 87 bool opcode_support; 101 88 bool certificate_support; 89 + enum save_mode save_mode; 90 + bool save_required; 91 + bool reboot_required; 102 92 103 93 struct tlmi_attr_setting *setting[TLMI_SETTINGS_COUNT]; 104 94 struct device *class_dev;
+94 -2
drivers/platform/x86/thinkpad_acpi.c
··· 9816 9816 * Individual addressing is broken on models that expose the 9817 9817 * primary battery as BAT1. 9818 9818 */ 9819 + TPACPI_Q_LNV('8', 'F', true), /* Thinkpad X120e */ 9819 9820 TPACPI_Q_LNV('J', '7', true), /* B5400 */ 9820 9821 TPACPI_Q_LNV('J', 'I', true), /* Thinkpad 11e */ 9821 9822 TPACPI_Q_LNV3('R', '0', 'B', true), /* Thinkpad 11e gen 3 */ ··· 10788 10787 .name = "dprc", 10789 10788 }; 10790 10789 10790 + /* 10791 + * Auxmac 10792 + * 10793 + * This auxiliary mac address is enabled in the bios through the 10794 + * MAC Address Pass-through feature. In most cases, there are three 10795 + * possibilities: Internal Mac, Second Mac, and disabled. 10796 + * 10797 + */ 10798 + 10799 + #define AUXMAC_LEN 12 10800 + #define AUXMAC_START 9 10801 + #define AUXMAC_STRLEN 22 10802 + #define AUXMAC_BEGIN_MARKER 8 10803 + #define AUXMAC_END_MARKER 21 10804 + 10805 + static char auxmac[AUXMAC_LEN + 1]; 10806 + 10807 + static int auxmac_init(struct ibm_init_struct *iibm) 10808 + { 10809 + acpi_status status; 10810 + struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; 10811 + union acpi_object *obj; 10812 + 10813 + status = acpi_evaluate_object(NULL, "\\MACA", NULL, &buffer); 10814 + 10815 + if (ACPI_FAILURE(status)) 10816 + return -ENODEV; 10817 + 10818 + obj = buffer.pointer; 10819 + 10820 + if (obj->type != ACPI_TYPE_STRING || obj->string.length != AUXMAC_STRLEN) { 10821 + pr_info("Invalid buffer for MAC address pass-through.\n"); 10822 + goto auxmacinvalid; 10823 + } 10824 + 10825 + if (obj->string.pointer[AUXMAC_BEGIN_MARKER] != '#' || 10826 + obj->string.pointer[AUXMAC_END_MARKER] != '#') { 10827 + pr_info("Invalid header for MAC address pass-through.\n"); 10828 + goto auxmacinvalid; 10829 + } 10830 + 10831 + if (strncmp(obj->string.pointer + AUXMAC_START, "XXXXXXXXXXXX", AUXMAC_LEN) != 0) 10832 + strscpy(auxmac, obj->string.pointer + AUXMAC_START, sizeof(auxmac)); 10833 + else 10834 + strscpy(auxmac, "disabled", sizeof(auxmac)); 10835 + 10836 + 
free: 10837 + kfree(obj); 10838 + return 0; 10839 + 10840 + auxmacinvalid: 10841 + strscpy(auxmac, "unavailable", sizeof(auxmac)); 10842 + goto free; 10843 + } 10844 + 10845 + static struct ibm_struct auxmac_data = { 10846 + .name = "auxmac", 10847 + }; 10848 + 10849 + static ssize_t auxmac_show(struct device *dev, 10850 + struct device_attribute *attr, 10851 + char *buf) 10852 + { 10853 + return sysfs_emit(buf, "%s\n", auxmac); 10854 + } 10855 + static DEVICE_ATTR_RO(auxmac); 10856 + 10857 + static umode_t auxmac_attr_is_visible(struct kobject *kobj, 10858 + struct attribute *attr, int n) 10859 + { 10860 + return auxmac[0] == 0 ? 0 : attr->mode; 10861 + } 10862 + 10863 + static struct attribute *auxmac_attributes[] = { 10864 + &dev_attr_auxmac.attr, 10865 + NULL 10866 + }; 10867 + 10868 + static const struct attribute_group auxmac_attr_group = { 10869 + .is_visible = auxmac_attr_is_visible, 10870 + .attrs = auxmac_attributes, 10871 + }; 10872 + 10791 10873 /* --------------------------------------------------------------------- */ 10792 10874 10793 10875 static struct attribute *tpacpi_driver_attributes[] = { ··· 10929 10845 &proxsensor_attr_group, 10930 10846 &kbdlang_attr_group, 10931 10847 &dprc_attr_group, 10848 + &auxmac_attr_group, 10932 10849 NULL, 10933 10850 }; 10934 10851 ··· 11229 11144 return '\0'; 11230 11145 } 11231 11146 11147 + #define EC_FW_STRING_LEN 18 11148 + 11232 11149 static void find_new_ec_fwstr(const struct dmi_header *dm, void *private) 11233 11150 { 11234 11151 char *ec_fw_string = (char *) private; ··· 11259 11172 return; 11260 11173 11261 11174 /* fwstr is the first 8byte string */ 11262 - strncpy(ec_fw_string, dmi_data + 0x0F, 8); 11175 + BUILD_BUG_ON(EC_FW_STRING_LEN <= 8); 11176 + memcpy(ec_fw_string, dmi_data + 0x0F, 8); 11263 11177 } 11264 11178 11265 11179 /* returns 0 - probe ok, or < 0 - probe error. 
··· 11270 11182 struct thinkpad_id_data *tp) 11271 11183 { 11272 11184 const struct dmi_device *dev = NULL; 11273 - char ec_fw_string[18] = {0}; 11185 + char ec_fw_string[EC_FW_STRING_LEN] = {0}; 11274 11186 char const *s; 11275 11187 char t; 11276 11188 ··· 11503 11415 { 11504 11416 .init = tpacpi_dprc_init, 11505 11417 .data = &dprc_driver_data, 11418 + }, 11419 + { 11420 + .init = auxmac_init, 11421 + .data = &auxmac_data, 11506 11422 }, 11507 11423 }; 11508 11424
+157 -106
drivers/platform/x86/wmi.c
··· 109 109 NULL 110 110 }; 111 111 112 + #define dev_to_wblock(__dev) container_of_const(__dev, struct wmi_block, dev.dev) 113 + #define dev_to_wdev(__dev) container_of_const(__dev, struct wmi_device, dev) 114 + 112 115 /* 113 116 * GUID parsing functions 114 117 */ 115 - 116 - static acpi_status find_guid(const char *guid_string, struct wmi_block **out) 117 - { 118 - guid_t guid_input; 119 - struct wmi_block *wblock; 120 - 121 - if (!guid_string) 122 - return AE_BAD_PARAMETER; 123 - 124 - if (guid_parse(guid_string, &guid_input)) 125 - return AE_BAD_PARAMETER; 126 - 127 - list_for_each_entry(wblock, &wmi_block_list, list) { 128 - if (guid_equal(&wblock->gblock.guid, &guid_input)) { 129 - if (out) 130 - *out = wblock; 131 - 132 - return AE_OK; 133 - } 134 - } 135 - 136 - return AE_NOT_FOUND; 137 - } 138 118 139 119 static bool guid_parse_and_compare(const char *string, const guid_t *guid) 140 120 { ··· 225 245 return acpi_evaluate_object(wblock->acpi_device->handle, "_WED", &input, out); 226 246 } 227 247 248 + static int wmidev_match_guid(struct device *dev, const void *data) 249 + { 250 + struct wmi_block *wblock = dev_to_wblock(dev); 251 + const guid_t *guid = data; 252 + 253 + if (guid_equal(guid, &wblock->gblock.guid)) 254 + return 1; 255 + 256 + return 0; 257 + } 258 + 259 + static struct bus_type wmi_bus_type; 260 + 261 + static struct wmi_device *wmi_find_device_by_guid(const char *guid_string) 262 + { 263 + struct device *dev; 264 + guid_t guid; 265 + int ret; 266 + 267 + ret = guid_parse(guid_string, &guid); 268 + if (ret < 0) 269 + return ERR_PTR(ret); 270 + 271 + dev = bus_find_device(&wmi_bus_type, NULL, &guid, wmidev_match_guid); 272 + if (!dev) 273 + return ERR_PTR(-ENODEV); 274 + 275 + return dev_to_wdev(dev); 276 + } 277 + 278 + static void wmi_device_put(struct wmi_device *wdev) 279 + { 280 + put_device(&wdev->dev); 281 + } 282 + 228 283 /* 229 284 * Exported WMI functions 230 285 */ ··· 294 279 */ 295 280 int wmi_instance_count(const char 
*guid_string) 296 281 { 297 - struct wmi_block *wblock; 298 - acpi_status status; 282 + struct wmi_device *wdev; 283 + int ret; 299 284 300 - status = find_guid(guid_string, &wblock); 301 - if (ACPI_FAILURE(status)) { 302 - if (status == AE_BAD_PARAMETER) 303 - return -EINVAL; 285 + wdev = wmi_find_device_by_guid(guid_string); 286 + if (IS_ERR(wdev)) 287 + return PTR_ERR(wdev); 304 288 305 - return -ENODEV; 306 - } 289 + ret = wmidev_instance_count(wdev); 290 + wmi_device_put(wdev); 307 291 308 - return wmidev_instance_count(&wblock->dev); 292 + return ret; 309 293 } 310 294 EXPORT_SYMBOL_GPL(wmi_instance_count); 311 295 ··· 339 325 acpi_status wmi_evaluate_method(const char *guid_string, u8 instance, u32 method_id, 340 326 const struct acpi_buffer *in, struct acpi_buffer *out) 341 327 { 342 - struct wmi_block *wblock = NULL; 328 + struct wmi_device *wdev; 343 329 acpi_status status; 344 330 345 - status = find_guid(guid_string, &wblock); 346 - if (ACPI_FAILURE(status)) 347 - return status; 331 + wdev = wmi_find_device_by_guid(guid_string); 332 + if (IS_ERR(wdev)) 333 + return AE_ERROR; 348 334 349 - return wmidev_evaluate_method(&wblock->dev, instance, method_id, 350 - in, out); 335 + status = wmidev_evaluate_method(wdev, instance, method_id, in, out); 336 + 337 + wmi_device_put(wdev); 338 + 339 + return status; 351 340 } 352 341 EXPORT_SYMBOL_GPL(wmi_evaluate_method); 353 342 ··· 489 472 struct acpi_buffer *out) 490 473 { 491 474 struct wmi_block *wblock; 475 + struct wmi_device *wdev; 492 476 acpi_status status; 493 477 494 - status = find_guid(guid_string, &wblock); 495 - if (ACPI_FAILURE(status)) 496 - return status; 478 + wdev = wmi_find_device_by_guid(guid_string); 479 + if (IS_ERR(wdev)) 480 + return AE_ERROR; 497 481 498 - return __query_block(wblock, instance, out); 482 + wblock = container_of(wdev, struct wmi_block, dev); 483 + status = __query_block(wblock, instance, out); 484 + 485 + wmi_device_put(wdev); 486 + 487 + return status; 499 488 } 500 489 
EXPORT_SYMBOL_GPL(wmi_query_block); 501 490 ··· 539 516 acpi_status wmi_set_block(const char *guid_string, u8 instance, 540 517 const struct acpi_buffer *in) 541 518 { 542 - struct wmi_block *wblock = NULL; 519 + struct wmi_block *wblock; 543 520 struct guid_block *block; 521 + struct wmi_device *wdev; 544 522 acpi_handle handle; 545 523 struct acpi_object_list input; 546 524 union acpi_object params[2]; ··· 551 527 if (!in) 552 528 return AE_BAD_DATA; 553 529 554 - status = find_guid(guid_string, &wblock); 555 - if (ACPI_FAILURE(status)) 556 - return status; 530 + wdev = wmi_find_device_by_guid(guid_string); 531 + if (IS_ERR(wdev)) 532 + return AE_ERROR; 557 533 534 + wblock = container_of(wdev, struct wmi_block, dev); 558 535 block = &wblock->gblock; 559 536 handle = wblock->acpi_device->handle; 560 537 561 - if (block->instance_count <= instance) 562 - return AE_BAD_PARAMETER; 538 + if (block->instance_count <= instance) { 539 + status = AE_BAD_PARAMETER; 540 + 541 + goto err_wdev_put; 542 + } 563 543 564 544 /* Check GUID is a data block */ 565 - if (block->flags & (ACPI_WMI_EVENT | ACPI_WMI_METHOD)) 566 - return AE_ERROR; 545 + if (block->flags & (ACPI_WMI_EVENT | ACPI_WMI_METHOD)) { 546 + status = AE_ERROR; 547 + 548 + goto err_wdev_put; 549 + } 567 550 568 551 input.count = 2; 569 552 input.pointer = params; ··· 582 551 583 552 get_acpi_method_name(wblock, 'S', method); 584 553 585 - return acpi_evaluate_object(handle, method, &input, NULL); 554 + status = acpi_evaluate_object(handle, method, &input, NULL); 555 + 556 + err_wdev_put: 557 + wmi_device_put(wdev); 558 + 559 + return status; 586 560 } 587 561 EXPORT_SYMBOL_GPL(wmi_set_block); 588 562 ··· 778 742 */ 779 743 bool wmi_has_guid(const char *guid_string) 780 744 { 781 - return ACPI_SUCCESS(find_guid(guid_string, NULL)); 745 + struct wmi_device *wdev; 746 + 747 + wdev = wmi_find_device_by_guid(guid_string); 748 + if (IS_ERR(wdev)) 749 + return false; 750 + 751 + wmi_device_put(wdev); 752 + 753 + return 
true; 782 754 } 783 755 EXPORT_SYMBOL_GPL(wmi_has_guid); 784 756 ··· 800 756 */ 801 757 char *wmi_get_acpi_device_uid(const char *guid_string) 802 758 { 803 - struct wmi_block *wblock = NULL; 804 - acpi_status status; 759 + struct wmi_block *wblock; 760 + struct wmi_device *wdev; 761 + char *uid; 805 762 806 - status = find_guid(guid_string, &wblock); 807 - if (ACPI_FAILURE(status)) 763 + wdev = wmi_find_device_by_guid(guid_string); 764 + if (IS_ERR(wdev)) 808 765 return NULL; 809 766 810 - return acpi_device_uid(wblock->acpi_device); 767 + wblock = container_of(wdev, struct wmi_block, dev); 768 + uid = acpi_device_uid(wblock->acpi_device); 769 + 770 + wmi_device_put(wdev); 771 + 772 + return uid; 811 773 } 812 774 EXPORT_SYMBOL_GPL(wmi_get_acpi_device_uid); 813 - 814 - #define dev_to_wblock(__dev) container_of_const(__dev, struct wmi_block, dev.dev) 815 - #define dev_to_wdev(__dev) container_of_const(__dev, struct wmi_device, dev) 816 775 817 776 static inline struct wmi_driver *drv_to_wdrv(struct device_driver *drv) 818 777 { ··· 958 911 } 959 912 static int wmi_char_open(struct inode *inode, struct file *filp) 960 913 { 961 - const char *driver_name = filp->f_path.dentry->d_iname; 962 - struct wmi_block *wblock; 963 - struct wmi_block *next; 914 + /* 915 + * The miscdevice already stores a pointer to itself 916 + * inside filp->private_data 917 + */ 918 + struct wmi_block *wblock = container_of(filp->private_data, struct wmi_block, char_dev); 964 919 965 - list_for_each_entry_safe(wblock, next, &wmi_block_list, list) { 966 - if (!wblock->dev.dev.driver) 967 - continue; 968 - if (strcmp(driver_name, wblock->dev.dev.driver->name) == 0) { 969 - filp->private_data = wblock; 970 - break; 971 - } 972 - } 973 - 974 - if (!filp->private_data) 975 - return -ENODEV; 920 + filp->private_data = wblock; 976 921 977 922 return nonseekable_open(inode, filp); 978 923 } ··· 1260 1221 return 0; 1261 1222 } 1262 1223 1263 - static void wmi_free_devices(struct acpi_device *device) 
1224 + static int wmi_add_device(struct platform_device *pdev, struct wmi_device *wdev) 1264 1225 { 1265 - struct wmi_block *wblock, *next; 1226 + struct device_link *link; 1266 1227 1267 - /* Delete devices for all the GUIDs */ 1268 - list_for_each_entry_safe(wblock, next, &wmi_block_list, list) { 1269 - if (wblock->acpi_device == device) { 1270 - list_del(&wblock->list); 1271 - device_unregister(&wblock->dev.dev); 1272 - } 1273 - } 1228 + /* 1229 + * Many aggregate WMI drivers do not use -EPROBE_DEFER when they 1230 + * are unable to find a WMI device during probe; instead they require 1231 + * all WMI devices associated with a platform device to become available 1232 + * at once. This device link thus prevents WMI drivers from probing until 1233 + * the associated platform device has finished probing (and has registered 1234 + * all discovered WMI devices). 1235 + */ 1236 + 1237 + link = device_link_add(&wdev->dev, &pdev->dev, DL_FLAG_AUTOREMOVE_SUPPLIER); 1238 + if (!link) 1239 + return -EINVAL; 1240 + 1241 + return device_add(&wdev->dev); 1274 1242 } 1275 1243 1276 1244 static bool guid_already_parsed_for_legacy(struct acpi_device *device, const guid_t *guid) ··· 1309 1263 /* 1310 1264 * Parse the _WDG method for the GUID data blocks 1311 1265 */ 1312 - static int parse_wdg(struct device *wmi_bus_dev, struct acpi_device *device) 1266 + static int parse_wdg(struct device *wmi_bus_dev, struct platform_device *pdev) 1313 1267 { 1268 + struct acpi_device *device = ACPI_COMPANION(&pdev->dev); 1314 1269 struct acpi_buffer out = {ACPI_ALLOCATE_BUFFER, NULL}; 1315 1270 const struct guid_block *gblock; 1316 - struct wmi_block *wblock, *next; 1271 + struct wmi_block *wblock; 1317 1272 union acpi_object *obj; 1318 1273 acpi_status status; 1319 - int retval = 0; 1320 1274 u32 i, total; 1275 + int retval; 1321 1276 1322 1277 status = acpi_evaluate_object(device->handle, "_WDG", NULL, &out); 1323 1278 if (ACPI_FAILURE(status)) ··· 1329 1282 return -ENXIO; 1330 1283 1331 
1284 if (obj->type != ACPI_TYPE_BUFFER) { 1332 - retval = -ENXIO; 1333 - goto out_free_pointer; 1285 + kfree(obj); 1286 + return -ENXIO; 1334 1287 } 1335 1288 1336 1289 gblock = (const struct guid_block *)obj->buffer.pointer; ··· 1345 1298 1346 1299 wblock = kzalloc(sizeof(*wblock), GFP_KERNEL); 1347 1300 if (!wblock) { 1348 - retval = -ENOMEM; 1349 - break; 1301 + dev_err(wmi_bus_dev, "Failed to allocate %pUL\n", &gblock[i].guid); 1302 + continue; 1350 1303 } 1351 1304 1352 1305 wblock->acpi_device = device; ··· 1364 1317 wblock->handler = wmi_notify_debug; 1365 1318 wmi_method_enable(wblock, true); 1366 1319 } 1367 - } 1368 1320 1369 - /* 1370 - * Now that all of the devices are created, add them to the 1371 - * device tree and probe subdrivers. 1372 - */ 1373 - list_for_each_entry_safe(wblock, next, &wmi_block_list, list) { 1374 - if (wblock->acpi_device != device) 1375 - continue; 1376 - 1377 - retval = device_add(&wblock->dev.dev); 1321 + retval = wmi_add_device(pdev, &wblock->dev); 1378 1322 if (retval) { 1379 1323 dev_err(wmi_bus_dev, "failed to register %pUL\n", 1380 1324 &wblock->gblock.guid); 1381 1325 if (debug_event) 1382 1326 wmi_method_enable(wblock, false); 1327 + 1383 1328 list_del(&wblock->list); 1384 1329 put_device(&wblock->dev.dev); 1385 1330 } 1386 1331 } 1387 1332 1388 - out_free_pointer: 1389 - kfree(out.pointer); 1390 - return retval; 1333 + kfree(obj); 1334 + 1335 + return 0; 1391 1336 } 1392 1337 1393 1338 /* ··· 1474 1435 event, 0); 1475 1436 } 1476 1437 1438 + static int wmi_remove_device(struct device *dev, void *data) 1439 + { 1440 + struct wmi_block *wblock = dev_to_wblock(dev); 1441 + 1442 + list_del(&wblock->list); 1443 + device_unregister(dev); 1444 + 1445 + return 0; 1446 + } 1447 + 1477 1448 static void acpi_wmi_remove(struct platform_device *device) 1478 1449 { 1479 1450 struct acpi_device *acpi_device = ACPI_COMPANION(&device->dev); 1451 + struct device *wmi_bus_device = dev_get_drvdata(&device->dev); 1480 1452 1481 1453 
acpi_remove_notify_handler(acpi_device->handle, ACPI_ALL_NOTIFY, 1482 1454 acpi_wmi_notify_handler); 1483 1455 acpi_remove_address_space_handler(acpi_device->handle, 1484 1456 ACPI_ADR_SPACE_EC, &acpi_wmi_ec_space_handler); 1485 - wmi_free_devices(acpi_device); 1486 - device_unregister(dev_get_drvdata(&device->dev)); 1457 + 1458 + device_for_each_child_reverse(wmi_bus_device, NULL, wmi_remove_device); 1459 + device_unregister(wmi_bus_device); 1487 1460 } 1488 1461 1489 1462 static int acpi_wmi_probe(struct platform_device *device) ··· 1538 1487 } 1539 1488 dev_set_drvdata(&device->dev, wmi_bus_dev); 1540 1489 1541 - error = parse_wdg(wmi_bus_dev, acpi_device); 1490 + error = parse_wdg(wmi_bus_dev, device); 1542 1491 if (error) { 1543 1492 pr_err("Failed to parse WDG method\n"); 1544 1493 goto err_remove_busdev;
+15
drivers/platform/x86/x86-android-tablets/core.c
···
 
 static struct platform_device *x86_android_tablet_device;
 
+/*
+ * This helper allows getting a gpio_desc *before* the actual device consuming
+ * the GPIO has been instantiated. This function _must_ only be used to handle
+ * special cases such as e.g.:
+ *
+ * 1. Getting an IRQ from a GPIO for i2c_board_info.irq which is passed to
+ *    i2c_client_new() to instantiate i2c_client-s; or
+ * 2. Calling desc_to_gpio() to get an old style GPIO number for gpio_keys
+ *    platform_data which still uses old style GPIO numbers.
+ *
+ * Since the consuming device has not been instantiated yet a dynamic lookup
+ * is generated using the special x86_android_tablet dev for dev_id.
+ *
+ * For normal GPIO lookups a standard static gpiod_lookup_table _must_ be used.
+ */
 int x86_android_tablet_get_gpiod(const char *chip, int pin, const char *con_id,
 				 bool active_low, enum gpiod_flags dflags,
 				 struct gpio_desc **desc)
+1 -1
drivers/platform/x86/x86-android-tablets/lenovo.c
···
 
 	/* Use PMIC GPIO 10 bootstrap pin to differentiate 830 vs 1050 */
 	ret = x86_android_tablet_get_gpiod("gpio_crystalcove", 10, "yoga_bootstrap",
-					   false, GPIOD_IN, &gpiod);
+					   false, GPIOD_ASIS, &gpiod);
 	if (ret)
 		return ret;
 
+5 -4
drivers/platform/x86/xo15-ebook.c
···
 
 static int ebook_switch_add(struct acpi_device *device)
 {
+	const struct acpi_device_id *id;
 	struct ebook_switch *button;
 	struct input_dev *input;
-	const char *hid = acpi_device_hid(device);
 	char *name, *class;
 	int error;
···
 	name = acpi_device_name(device);
 	class = acpi_device_class(device);
 
-	if (strcmp(hid, XO15_EBOOK_HID)) {
-		pr_err("Unsupported hid [%s]\n", hid);
+	id = acpi_match_acpi_device(ebook_device_ids, device);
+	if (!id) {
+		dev_err(&device->dev, "Unsupported hid\n");
 		error = -ENODEV;
 		goto err_free_input;
 	}
···
 	strcpy(name, XO15_EBOOK_DEVICE_NAME);
 	sprintf(class, "%s/%s", XO15_EBOOK_CLASS, XO15_EBOOK_SUBCLASS);
 
-	snprintf(button->phys, sizeof(button->phys), "%s/button/input0", hid);
+	snprintf(button->phys, sizeof(button->phys), "%s/button/input0", id->id);
 
 	input->name = name;
 	input->phys = button->phys;
+6
include/linux/intel_tpmi.h
···
 #ifndef _INTEL_TPMI_H_
 #define _INTEL_TPMI_H_
 
+#include <linux/bitfield.h>
+
+#define TPMI_VERSION_INVALID	0xff
+#define TPMI_MINOR_VERSION(val)	FIELD_GET(GENMASK(4, 0), val)
+#define TPMI_MAJOR_VERSION(val)	FIELD_GET(GENMASK(7, 5), val)
+
 /**
  * struct intel_tpmi_plat_info - Platform information for a TPMI device instance
  * @package_id: CPU Package id
+4
include/linux/platform_data/x86/asus-wmi.h
···
 #define ASUS_WMI_DEVID_KBD_BACKLIGHT	0x00050021
 #define ASUS_WMI_DEVID_LIGHT_SENSOR	0x00050022 /* ?? */
 #define ASUS_WMI_DEVID_LIGHTBAR		0x00050025
+/* This can only be used to disable the screen, not re-enable */
+#define ASUS_WMI_DEVID_SCREENPAD_POWER	0x00050031
+/* Writing a brightness re-enables the screen if disabled */
+#define ASUS_WMI_DEVID_SCREENPAD_LIGHT	0x00050032
 #define ASUS_WMI_DEVID_FAN_BOOST_MODE	0x00110018
 #define ASUS_WMI_DEVID_THROTTLE_THERMAL_POLICY	0x00120075
 
+1 -1
include/linux/surface_aggregator/controller.h
···
 	u8 command_id;
 	u8 instance_id;
 	u16 length;
-	u8 data[];
+	u8 data[] __counted_by(length);
 };
 
 /**
+8 -8
include/trace/events/intel_ifs.h
···
 
 TRACE_EVENT(ifs_status,
 
-	TP_PROTO(int cpu, union ifs_scan activate, union ifs_status status),
+	TP_PROTO(int cpu, int start, int stop, u64 status),
 
-	TP_ARGS(cpu, activate, status),
+	TP_ARGS(cpu, start, stop, status),
 
 	TP_STRUCT__entry(
 		__field( u64, status )
 		__field( int, cpu )
-		__field( u8, start )
-		__field( u8, stop )
+		__field( u16, start )
+		__field( u16, stop )
 	),
 
 	TP_fast_assign(
 		__entry->cpu = cpu;
-		__entry->start = activate.start;
-		__entry->stop = activate.stop;
-		__entry->status = status.data;
+		__entry->start = start;
+		__entry->stop = stop;
+		__entry->status = status;
 	),
 
-	TP_printk("cpu: %d, start: %.2x, stop: %.2x, status: %llx",
+	TP_printk("cpu: %d, start: %.4x, stop: %.4x, status: %.16llx",
 		  __entry->cpu,
 		  __entry->start,
 		  __entry->stop,
+164 -49
tools/power/x86/intel-speed-select/isst-config.c
···
  * Copyright (c) 2019 Intel Corporation.
  */
 
+#include <ctype.h>
 #include <linux/isst_if.h>
-#include <sys/utsname.h>
 
 #include "isst.h"
···
 	int arg;
 };
 
-static const char *version_str = "v1.17";
+static const char *version_str = "v1.18";
 
 static const int supported_api_ver = 2;
 static struct isst_if_platform_info isst_platform_info;
···
 static int cpu_model;
 static int cpu_stepping;
 
-#define MAX_CPUS_IN_ONE_REQ 256
+#define MAX_CPUS_IN_ONE_REQ 512
 static short max_target_cpus;
 static unsigned short target_cpus[MAX_CPUS_IN_ONE_REQ];
···
 static int clos_max = -1;
 static int clos_desired = -1;
 static int clos_priority_type;
+static int cpu_0_cgroupv2;
+static int cpu_0_workaround(int isolate);
 
 struct _cpu_map {
 	unsigned short core_id;
···
 	return online;
 }
 
-static int get_kernel_version(int *major, int *minor)
-{
-	struct utsname buf;
-	int ret;
-
-	ret = uname(&buf);
-	if (ret)
-		return ret;
-
-	ret = sscanf(buf.release, "%d.%d", major, minor);
-	if (ret != 2)
-		return ret;
-
-	return 0;
-}
-
-#define CPU0_HOTPLUG_DEPRECATE_MAJOR_VER	6
-#define CPU0_HOTPLUG_DEPRECATE_MINOR_VER	5
-
 void set_cpu_online_offline(int cpu, int state)
 {
 	char buffer[128];
 	int fd, ret;
 
-	if (!cpu) {
-		int major, minor;
-
-		ret = get_kernel_version(&major, &minor);
-		if (!ret) {
-			if (major > CPU0_HOTPLUG_DEPRECATE_MAJOR_VER || (major == CPU0_HOTPLUG_DEPRECATE_MAJOR_VER &&
-			    minor >= CPU0_HOTPLUG_DEPRECATE_MINOR_VER)) {
-				debug_printf("Ignore CPU 0 offline/online for kernel version >= %d.%d\n", major, minor);
-				debug_printf("Use cgroups to isolate CPU 0\n");
-				return;
-			}
-		}
+	if (cpu_0_cgroupv2 && !cpu) {
+		fprintf(stderr, "Will use cgroup v2 for CPU 0\n");
+		cpu_0_workaround(!state);
+		return;
 	}
 
 	snprintf(buffer, sizeof(buffer),
···
 	fd = open(buffer, O_WRONLY);
 	if (fd < 0) {
-		if (!cpu && state) {
+		if (!cpu) {
 			fprintf(stderr, "This system is not configured for CPU 0 online/offline\n");
-			fprintf(stderr, "Ignoring online request for CPU 0 as this is already online\n");
+			fprintf(stderr, "Will use cgroup v2\n");
+			cpu_0_workaround(!state);
 			return;
 		}
 		err(-1, "%s open failed", buffer);
···
 	return 0;
 }
 
-int isolate_cpus(struct isst_id *id, int mask_size, cpu_set_t *cpu_mask, int level)
+int isolate_cpus(struct isst_id *id, int mask_size, cpu_set_t *cpu_mask, int level, int cpu_0_only)
 {
 	int i, first, curr_index, index, ret, fd;
 	static char str[512], dir_name[64];
···
 	curr_index = 0;
 	first = 1;
 	str[0] = '\0';
+
+	if (cpu_0_only) {
+		snprintf(str, str_len, "0");
+		goto create_partition;
+	}
+
 	for (i = 0; i < get_topo_max_cpus(); ++i) {
 		if (!is_cpu_in_power_domain(i, id))
 			continue;
···
 		first = 0;
 	}
 
+create_partition:
 	debug_printf("isolated CPUs list: package:%d curr_index:%d [%s]\n", id->pkg, curr_index, str);
 
 	snprintf(cpuset_cpus, sizeof(cpuset_cpus), "%s/cpuset.cpus", dir_name);
···
 		return ret;
 
 	return 0;
+}
+
+static int cpu_0_workaround(int isolate)
+{
+	int fd, fd1, len, ret;
+	cpu_set_t cpu_mask;
+	struct isst_id id;
+	char str[2];
+
+	debug_printf("isolate CPU 0 state: %d\n", isolate);
+
+	if (isolate)
+		goto isolate;
+
+	/* First check if CPU 0 was isolated, to remove isolation. */
+
+	/* If cpuset.cpus doesn't exist, none of the CPUs are isolated */
+	fd = open("/sys/fs/cgroup/0-0-0/cpuset.cpus", O_RDONLY, 0);
+	if (fd < 0)
+		return 0;
+
+	len = read(fd, str, sizeof(str));
+	/* Error check, but unlikely to fail. If it fails, CPU 0 is not isolated */
+	if (len == -1)
+		return 0;
+
+	/* If CPU 0 is in the isolate list; the display is sorted so the first element will be CPU 0 */
+	if (str[0] != '0') {
+		close(fd);
+		return 0;
+	}
+
+	fd1 = open("/sys/fs/cgroup/0-0-0/cpuset.cpus.partition", O_RDONLY, 0);
+	/* Unlikely that this attribute is not present, but handle the error */
+	if (fd1 < 0) {
+		close(fd);
+		return 0;
+	}
+
+	/* Has CPU 0 already changed partition to "member"? */
+	len = read(fd1, str, sizeof(str));
+	if (len != -1 && str[0] == 'm') {
+		close(fd1);
+		close(fd);
+		return 0;
+	}
+
+	close(fd1);
+	close(fd);
+
+	debug_printf("CPU 0 was isolated before, so remove isolation\n");
+
+isolate:
+	ret = enable_cpuset_controller();
+	if (ret)
+		goto isolate_fail;
+
+	CPU_ZERO(&cpu_mask);
+	memset(&id, 0, sizeof(struct isst_id));
+	CPU_SET(0, &cpu_mask);
+
+	ret = isolate_cpus(&id, sizeof(cpu_mask), &cpu_mask, isolate, 1);
+isolate_fail:
+	if (ret)
+		fprintf(stderr, "Can't isolate CPU 0\n");
+
+	return ret;
 }
 
 static int isst_fill_platform_info(void)
···
 	if (ret)
 		goto use_offline;
 
-	ret = isolate_cpus(id, ctdp_level.core_cpumask_size, ctdp_level.core_cpumask, tdp_level);
+	ret = isolate_cpus(id, ctdp_level.core_cpumask_size,
+			   ctdp_level.core_cpumask, tdp_level, 0);
 	if (ret)
 		goto use_offline;
 
···
 	fprintf(stderr,
 		"Enable Intel Speed Select Technology Turbo frequency feature\n");
 	fprintf(stderr,
-		"Optional: -t|--trl : Specify turbo ratio limit\n");
+		"Optional: -t|--trl : Specify turbo ratio limit in hex starting with 0x\n");
 	fprintf(stderr,
 		"\tOptional Arguments: -a|--auto : Designate specified target CPUs with");
 	fprintf(stderr,
···
 	fprintf(stderr,
 		"Disable Intel Speed Select Technology turbo frequency feature\n");
 	fprintf(stderr,
-		"Optional: -t|--trl : Specify turbo ratio limit\n");
+		"Optional: -t|--trl : Specify turbo ratio limit in hex starting with 0x\n");
 	fprintf(stderr,
 		"\tOptional Arguments: -a|--auto : Also disable core-power associations\n");
 }
···
 {
 	int ret;
 	int status = *(int *)arg4;
+	int cp_state, cp_cap;
+
+	if (!isst_read_pm_config(id, &cp_state, &cp_cap)) {
+		if (!cp_cap) {
+			isst_display_error_info_message(1, "core-power not supported", 0, 0);
+			return;
+		}
+	}
 
 	if (is_skx_based_platform())
 		clos_priority_type = 1;
···
 	}
 
 	if (status) {
-		isst_display_result(id, outf, "turbo-mode", "enable", 0);
-	} else {
 		isst_display_result(id, outf, "turbo-mode", "disable", 0);
+	} else {
+		isst_display_result(id, outf, "turbo-mode", "enable", 0);
 	}
 }
 
 static void set_turbo_mode(int arg)
 {
-	int i, enable = arg;
+	int i, disable = arg;
 	struct isst_id id;
 
 	if (cmd_help) {
-		if (enable)
-			fprintf(stderr, "Set turbo mode enable\n");
-		else
+		if (disable)
 			fprintf(stderr, "Set turbo mode disable\n");
+		else
+			fprintf(stderr, "Set turbo mode enable\n");
 		exit(0);
 	}
···
 
 	if (online) {
 		set_isst_id(&id, i);
-		set_turbo_mode_for_cpu(&id, enable);
+		set_turbo_mode_for_cpu(&id, disable);
 	}
 
 }
···
 	unsigned long long trl;
 	int set = *(int *)arg4;
 	int ret;
+
+	if (id->cpu < 0)
+		return;
 
 	if (set && !fact_trl) {
 		isst_display_error_info_message(1, "Invalid TRL. Specify with [-t|--trl]", 0, 0);
···
 	if (cmd_help) {
 		if (arg) {
 			fprintf(stderr, "Set TRL (turbo ratio limits)\n");
-			fprintf(stderr, "\t t|--trl: Specify turbo ratio limit for setting TRL\n");
+			fprintf(stderr, "\t t|--trl: Specify turbo ratio limit for setting TRL in hex starting with 0x\n");
 		} else {
 			fprintf(stderr, "Get TRL (turbo ratio limits)\n");
 		}
···
 	exit(-1);
 }
 
+static void check_optarg(char *option, int hex)
+{
+	if (optarg) {
+		char *start = optarg;
+		int i;
+
+		if (hex && strlen(optarg) < 3) {
+			/* At least 0x plus one character must be present */
+			fprintf(stderr, "malformed arguments for:%s [%s]\n", option, optarg);
+			exit(0);
+		}
+
+		if (hex) {
+			if (optarg[0] != '0' || tolower(optarg[1]) != 'x') {
+				fprintf(stderr, "malformed arguments for:%s [%s]\n",
+					option, optarg);
+				exit(0);
+			}
+			start = &optarg[2];
+		}
+
+		for (i = 0; i < strlen(start); ++i) {
+			if (hex) {
+				if (!isxdigit(start[i])) {
+					fprintf(stderr, "malformed arguments for:%s [%s]\n",
+						option, optarg);
+					exit(0);
+				}
+			} else if (!isdigit(start[i])) {
+				fprintf(stderr, "malformed arguments for:%s [%s]\n",
+					option, optarg);
+				exit(0);
+			}
+		}
+	}
+}
+
 static void parse_cmd_args(int argc, int start, char **argv)
 {
 	int opt;
···
 		auto_mode = 1;
 		break;
 	case 'b':
+		check_optarg("bucket", 0);
 		fact_bucket = atoi(optarg);
 		break;
 	case 'h':
 		cmd_help = 1;
 		break;
 	case 'l':
+		check_optarg("level", 0);
 		tdp_level = atoi(optarg);
 		break;
 	case 'o':
 		force_online_offline = 1;
 		break;
 	case 't':
+		check_optarg("trl", 1);
 		sscanf(optarg, "0x%llx", &fact_trl);
 		break;
 	case 'r':
···
 		break;
 	/* CLOS related */
 	case 'c':
+		check_optarg("clos", 0);
 		current_clos = atoi(optarg);
 		break;
 	case 'd':
+		check_optarg("desired", 0);
 		clos_desired = atoi(optarg);
 		clos_desired /= isst_get_disp_freq_multiplier();
 		break;
 	case 'e':
+		check_optarg("epp", 0);
 		clos_epp = atoi(optarg);
 		if (is_skx_based_platform()) {
 			isst_display_error_info_message(1, "epp can't be specified on this platform", 0, 0);
···
 		}
 		break;
 	case 'n':
+		check_optarg("min", 0);
 		clos_min = atoi(optarg);
 		clos_min /= isst_get_disp_freq_multiplier();
 		break;
 	case 'm':
+		check_optarg("max", 0);
 		clos_max = atoi(optarg);
 		clos_max /= isst_get_disp_freq_multiplier();
 		break;
 	case 'p':
+		check_optarg("priority", 0);
 		clos_priority_type = atoi(optarg);
 		if (is_skx_based_platform() && !clos_priority_type) {
 			isst_display_error_info_message(1, "Invalid clos priority type: proportional for this platform", 0, 0);
···
 		}
 		break;
 	case 'w':
+		check_optarg("weight", 0);
 		clos_prop_prio = atoi(optarg);
 		if (is_skx_based_platform()) {
 			isst_display_error_info_message(1, "weight can't be specified on this platform", 0, 0);
···
 	printf("\t[-n|--no-daemon : Don't run as daemon. By default --oob will turn on daemon mode\n");
 	printf("\t[-w|--delay : Delay for reading config level state change in OOB poll mode.\n");
 	printf("\t[-g|--cgroupv2 : Try to use cgroup v2 CPU isolation instead of CPU online/offline.\n");
+	printf("\t[-u|--cpu0-workaround : Don't try to online/offline CPU0 instead use cgroup v2.\n");
 	printf("\nResult format\n");
 	printf("\tResult display uses a common format for each command:\n");
 	printf("\tResults are formatted in text/JSON with\n");
···
 	{ "no-daemon", no_argument, 0, 'n' },
 	{ "poll-interval", required_argument, 0, 'w' },
 	{ "cgroupv2", required_argument, 0, 'g' },
+	{ "cpu0-workaround", required_argument, 0, 'u' },
 	{ 0, 0, 0, 0 }
 };
···
 		goto out;
 
 	progname = argv[0];
-	while ((opt = getopt_long_only(argc, argv, "+c:df:hio:vabw:ng", long_options,
+	while ((opt = getopt_long_only(argc, argv, "+c:df:hio:vabw:ngu", long_options,
 				       &option_index)) != -1) {
 		switch (opt) {
 		case 'a':
···
 			break;
 		case 'g':
 			cgroupv2 = 1;
 			break;
+		case 'u':
+			cpu_0_cgroupv2 = 1;
+			break;
 		default:
 			usage();
+2 -1
tools/power/x86/intel-speed-select/isst-daemon.c
···
 	if (ret)
 		goto use_offline;
 
-	isolate_cpus(id, ctdp_level.core_cpumask_size, ctdp_level.core_cpumask, pkg_dev.current_level);
+	isolate_cpus(id, ctdp_level.core_cpumask_size, ctdp_level.core_cpumask,
+		     pkg_dev.current_level, 0);
 
 	goto free_mask;
 }
+2 -1
tools/power/x86/intel-speed-select/isst.h
···
 
 /* Cgroup related interface */
 extern int enable_cpuset_controller(void);
-extern int isolate_cpus(struct isst_id *id, int mask_size, cpu_set_t *cpu_mask, int level);
+extern int isolate_cpus(struct isst_id *id, int mask_size, cpu_set_t *cpu_mask,
+			int level, int cpu_0_only);
 extern int use_cgroupv2(void);
 
 #endif