Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'libnvdimm-for-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm

Pull libnvdimm updates from Vishal Verma:
"You'd normally receive this pull request from Dan Williams, but he's
busy watching a newborn (Congrats Dan!), so I'm watching libnvdimm
this cycle.

This adds a new feature in libnvdimm - 'Runtime Firmware Activation',
and a few small cleanups and fixes in libnvdimm and DAX. I'd
originally intended to make separate topic-based pull requests - one
for libnvdimm, and one for DAX, but some of the DAX material fell out
since it wasn't quite ready.

Summary:

- add 'Runtime Firmware Activation' support for NVDIMMs that
advertise the relevant capability

- misc libnvdimm and DAX cleanups"

* tag 'libnvdimm-for-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
libnvdimm/security: ensure sysfs poll thread woke up and fetch updated attr
libnvdimm/security: the 'security' attr never show 'overwrite' state
libnvdimm/security: fix a typo
ACPI: NFIT: Fix ARS zero-sized allocation
dax: Fix incorrect argument passed to xas_set_err()
ACPI: NFIT: Add runtime firmware activate support
PM, libnvdimm: Add runtime firmware activation support
libnvdimm: Convert to DEVICE_ATTR_ADMIN_RO()
drivers/dax: Expand lock scope to cover the use of addresses
fs/dax: Remove unused size parameter
dax: print error message by pr_info() in __generic_fsdax_supported()
driver-core: Introduce DEVICE_ATTR_ADMIN_{RO,RW}
tools/testing/nvdimm: Emulate firmware activation commands
tools/testing/nvdimm: Prepare nfit_ctl_test() for ND_CMD_CALL emulation
tools/testing/nvdimm: Add command debug messages
tools/testing/nvdimm: Cleanup dimm index passing
ACPI: NFIT: Define runtime firmware activation commands
ACPI: NFIT: Move bus_dsm_mask out of generic nvdimm_bus_descriptor
libnvdimm: Validate command family indices

+1475 -137
Documentation/ABI/testing/sysfs-bus-nfit | +19
···
 		functions. See the section named 'NVDIMM Root Device _DSMs' in
 		the ACPI specification.
 
+What:		/sys/bus/nd/devices/ndbusX/nfit/firmware_activate_noidle
+Date:		Apr, 2020
+KernelVersion:	v5.8
+Contact:	linux-nvdimm@lists.01.org
+Description:
+		(RW) The Intel platform implementation of firmware activate
+		support exposes an option to let the platform force idle
+		devices in the system over the activation event, or trust that
+		the OS will do it. The safe default is to let the platform
+		force idle devices since the kernel is already in a suspend
+		state, and on the chance that a driver does not properly
+		quiesce bus-mastering after a suspend callback the platform
+		will handle it. However, the activation might abort if, for
+		example, platform firmware determines that the activation time
+		exceeds the max PCI-E completion timeout. Since the platform
+		does not know whether the OS is running the activation from a
+		suspend context it aborts, but if the system owner trusts
+		driver suspend callbacks to be sufficient then
+		'firmware_activate_noidle' can be enabled to bypass the
+		activation abort.
 
 What:		/sys/bus/nd/devices/regionX/nfit/range_index
 Date:		Jun, 2015
Documentation/ABI/testing/sysfs-bus-nvdimm | +2
···
+The libnvdimm sub-system implements a common sysfs interface for
+platform nvdimm resources. See Documentation/driver-api/nvdimm/.
Documentation/driver-api/nvdimm/firmware-activate.rst | +86
···
+.. SPDX-License-Identifier: GPL-2.0
+
+==================================
+NVDIMM Runtime Firmware Activation
+==================================
+
+Some persistent memory devices run a firmware locally on the device /
+"DIMM" to perform tasks like media management, capacity provisioning,
+and health monitoring. The process of updating that firmware typically
+involves a reboot because it has implications for in-flight memory
+transactions. However, reboots are disruptive and at least the Intel
+persistent memory platform implementation, described by the Intel ACPI
+DSM specification [1], has added support for activating firmware at
+runtime.
+
+A native sysfs interface is implemented in libnvdimm to allow platforms
+to advertise and control their local runtime firmware activation
+capability.
+
+The libnvdimm bus object, ndbusX, implements an ndbusX/firmware/activate
+attribute that shows the state of the firmware activation as one of 'idle',
+'armed', 'overflow', and 'busy'.
+
+- idle:
+  No devices are set / armed to activate firmware
+
+- armed:
+  At least one device is armed
+
+- busy:
+  In the busy state armed devices are in the process of transitioning
+  back to idle and completing an activation cycle.
+
+- overflow:
+  If the platform has a concept of incremental work needed to perform
+  the activation it could be the case that too many DIMMs are armed for
+  activation. In that scenario the potential for firmware activation to
+  timeout is indicated by the 'overflow' state.
+
+The 'ndbusX/firmware/activate' property can be written with a value of
+either 'live', or 'quiesce'. A value of 'quiesce' triggers the kernel to
+run firmware activation from within the equivalent of the hibernation
+'freeze' state where drivers and applications are notified to stop their
+modifications of system memory. A value of 'live' attempts
+firmware activation without this hibernation cycle. The
+'ndbusX/firmware/activate' property will be elided completely if no
+firmware activation capability is detected.
+
+Another property 'ndbusX/firmware/capability' indicates a value of
+'live' or 'quiesce', where 'live' indicates that the firmware
+does not require or inflict any quiesce period on the system to update
+firmware. A capability value of 'quiesce' indicates that firmware does
+expect and injects a quiet period for the memory controller, but 'live'
+may still be written to 'ndbusX/firmware/activate' as an override to
+assume the risk of racing firmware update with in-flight device and
+application activity. The 'ndbusX/firmware/capability' property will be
+elided completely if no firmware activation capability is detected.
+
+The libnvdimm memory-device / DIMM object, nmemX, implements
+'nmemX/firmware/activate' and 'nmemX/firmware/result' attributes to
+communicate the per-device firmware activation state. Similar to the
+'ndbusX/firmware/activate' attribute, the 'nmemX/firmware/activate'
+attribute indicates 'idle', 'armed', or 'busy'. The state transitions
+from 'armed' to 'idle' when the system is prepared to activate firmware,
+firmware staged + state set to armed, and 'ndbusX/firmware/activate' is
+triggered. After that activation event the nmemX/firmware/result
+attribute reflects the state of the last activation as one of:
+
+- none:
+  No runtime activation triggered since the last time the device was reset
+
+- success:
+  The last runtime activation completed successfully.
+
+- fail:
+  The last runtime activation failed for device-specific reasons.
+
+- not_staged:
+  The last runtime activation failed due to a sequencing error of the
+  firmware image not being staged.
+
+- need_reset:
+  Runtime firmware activation failed, but the firmware can still be
+  activated via the legacy method of power-cycling the system.
+
+[1]: https://docs.pmem.io/persistent-memory/
drivers/acpi/nfit/core.c | +118 -39
···
 }
 EXPORT_SYMBOL(to_nfit_uuid);
 
+static const guid_t *to_nfit_bus_uuid(int family)
+{
+	if (WARN_ONCE(family == NVDIMM_BUS_FAMILY_NFIT,
+			"only secondary bus families can be translated\n"))
+		return NULL;
+	/*
+	 * The index of bus UUIDs starts immediately following the last
+	 * NVDIMM/leaf family.
+	 */
+	return to_nfit_uuid(family + NVDIMM_FAMILY_MAX);
+}
+
 static struct acpi_device *to_acpi_dev(struct acpi_nfit_desc *acpi_desc)
 {
 	struct nvdimm_bus_descriptor *nd_desc = &acpi_desc->nd_desc;
···
 {
 	static const u8 revid_table[NVDIMM_FAMILY_MAX+1][NVDIMM_CMD_MAX+1] = {
 		[NVDIMM_FAMILY_INTEL] = {
-			[NVDIMM_INTEL_GET_MODES] = 2,
-			[NVDIMM_INTEL_GET_FWINFO] = 2,
-			[NVDIMM_INTEL_START_FWUPDATE] = 2,
-			[NVDIMM_INTEL_SEND_FWUPDATE] = 2,
-			[NVDIMM_INTEL_FINISH_FWUPDATE] = 2,
-			[NVDIMM_INTEL_QUERY_FWUPDATE] = 2,
-			[NVDIMM_INTEL_SET_THRESHOLD] = 2,
-			[NVDIMM_INTEL_INJECT_ERROR] = 2,
-			[NVDIMM_INTEL_GET_SECURITY_STATE] = 2,
-			[NVDIMM_INTEL_SET_PASSPHRASE] = 2,
-			[NVDIMM_INTEL_DISABLE_PASSPHRASE] = 2,
-			[NVDIMM_INTEL_UNLOCK_UNIT] = 2,
-			[NVDIMM_INTEL_FREEZE_LOCK] = 2,
-			[NVDIMM_INTEL_SECURE_ERASE] = 2,
-			[NVDIMM_INTEL_OVERWRITE] = 2,
-			[NVDIMM_INTEL_QUERY_OVERWRITE] = 2,
-			[NVDIMM_INTEL_SET_MASTER_PASSPHRASE] = 2,
-			[NVDIMM_INTEL_MASTER_SECURE_ERASE] = 2,
+			[NVDIMM_INTEL_GET_MODES ...
+					NVDIMM_INTEL_FW_ACTIVATE_ARM] = 2,
 		},
 	};
 	u8 id;
···
 }
 
 static int cmd_to_func(struct nfit_mem *nfit_mem, unsigned int cmd,
-		struct nd_cmd_pkg *call_pkg)
+		struct nd_cmd_pkg *call_pkg, int *family)
 {
 	if (call_pkg) {
 		int i;
···
 		for (i = 0; i < ARRAY_SIZE(call_pkg->nd_reserved2); i++)
 			if (call_pkg->nd_reserved2[i])
 				return -EINVAL;
+		*family = call_pkg->nd_family;
 		return call_pkg->nd_command;
 	}
 
···
 	acpi_handle handle;
 	const guid_t *guid;
 	int func, rc, i;
+	int family = 0;
 
 	if (cmd_rc)
 		*cmd_rc = -EINVAL;
 
 	if (cmd == ND_CMD_CALL)
 		call_pkg = buf;
-	func = cmd_to_func(nfit_mem, cmd, call_pkg);
+	func = cmd_to_func(nfit_mem, cmd, call_pkg, &family);
 	if (func < 0)
 		return func;
 
···
 
 		cmd_name = nvdimm_bus_cmd_name(cmd);
 		cmd_mask = nd_desc->cmd_mask;
-		dsm_mask = nd_desc->bus_dsm_mask;
+		if (cmd == ND_CMD_CALL && call_pkg->nd_family) {
+			family = call_pkg->nd_family;
+			if (!test_bit(family, &nd_desc->bus_family_mask))
+				return -EINVAL;
+			dsm_mask = acpi_desc->family_dsm_mask[family];
+			guid = to_nfit_bus_uuid(family);
+		} else {
+			dsm_mask = acpi_desc->bus_dsm_mask;
+			guid = to_nfit_uuid(NFIT_DEV_BUS);
+		}
 		desc = nd_cmd_bus_desc(cmd);
-		guid = to_nfit_uuid(NFIT_DEV_BUS);
 		handle = adev->handle;
 		dimm_name = "bus";
 	}
···
 		in_buf.buffer.length = call_pkg->nd_size_in;
 	}
 
-	dev_dbg(dev, "%s cmd: %d: func: %d input length: %d\n",
-			dimm_name, cmd, func, in_buf.buffer.length);
+	dev_dbg(dev, "%s cmd: %d: family: %d func: %d input length: %d\n",
+			dimm_name, cmd, family, func, in_buf.buffer.length);
 	if (payload_dumpable(nvdimm, func))
 		print_hex_dump_debug("nvdimm in  ", DUMP_PREFIX_OFFSET, 4, 4,
 				in_buf.buffer.pointer,
···
 {
 	struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);
 	struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus);
+	struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
 
-	return sprintf(buf, "%#lx\n", nd_desc->bus_dsm_mask);
+	return sprintf(buf, "%#lx\n", acpi_desc->bus_dsm_mask);
 }
 static struct device_attribute dev_attr_bus_dsm_mask =
 		__ATTR(dsm_mask, 0444, bus_dsm_mask_show, NULL);
···
 	struct device *dev = container_of(kobj, struct device, kobj);
 	struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);
 
-	if (a == &dev_attr_scrub.attr && !ars_supported(nvdimm_bus))
-		return 0;
+	if (a == &dev_attr_scrub.attr)
+		return ars_supported(nvdimm_bus) ? a->mode : 0;
+
+	if (a == &dev_attr_firmware_activate_noidle.attr)
+		return intel_fwa_supported(nvdimm_bus) ? a->mode : 0;
+
 	return a->mode;
 }
···
 	&dev_attr_scrub.attr,
 	&dev_attr_hw_error_scrub.attr,
 	&dev_attr_bus_dsm_mask.attr,
+	&dev_attr_firmware_activate_noidle.attr,
 	NULL,
 };
···
 static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
 		struct nfit_mem *nfit_mem, u32 device_handle)
 {
+	struct nvdimm_bus_descriptor *nd_desc = &acpi_desc->nd_desc;
 	struct acpi_device *adev, *adev_dimm;
 	struct device *dev = acpi_desc->dev;
 	unsigned long dsm_mask, label_mask;
···
 		/* nfit test assumes 1:1 relationship between commands and dsms */
 		nfit_mem->dsm_mask = acpi_desc->dimm_cmd_force_en;
 		nfit_mem->family = NVDIMM_FAMILY_INTEL;
+		set_bit(NVDIMM_FAMILY_INTEL, &nd_desc->dimm_family_mask);
 
 		if (dcr->valid_fields & ACPI_NFIT_CONTROL_MFG_INFO_VALID)
 			sprintf(nfit_mem->id, "%04x-%02x-%04x-%08x",
···
 	 * Note, that checking for function0 (bit0) tells us if any commands
 	 * are reachable through this GUID.
 	 */
+	clear_bit(NVDIMM_FAMILY_INTEL, &nd_desc->dimm_family_mask);
 	for (i = 0; i <= NVDIMM_FAMILY_MAX; i++)
-		if (acpi_check_dsm(adev_dimm->handle, to_nfit_uuid(i), 1, 1))
+		if (acpi_check_dsm(adev_dimm->handle, to_nfit_uuid(i), 1, 1)) {
+			set_bit(i, &nd_desc->dimm_family_mask);
 			if (family < 0 || i == default_dsm_family)
 				family = i;
+		}
 
 	/* limit the supported commands to those that are publicly documented */
 	nfit_mem->family = family;
···
 	}
 }
 
+static const struct nvdimm_fw_ops *acpi_nfit_get_fw_ops(
+		struct nfit_mem *nfit_mem)
+{
+	unsigned long mask;
+	struct acpi_nfit_desc *acpi_desc = nfit_mem->acpi_desc;
+	struct nvdimm_bus_descriptor *nd_desc = &acpi_desc->nd_desc;
+
+	if (!nd_desc->fw_ops)
+		return NULL;
+
+	if (nfit_mem->family != NVDIMM_FAMILY_INTEL)
+		return NULL;
+
+	mask = nfit_mem->dsm_mask & NVDIMM_INTEL_FW_ACTIVATE_CMDMASK;
+	if (mask != NVDIMM_INTEL_FW_ACTIVATE_CMDMASK)
+		return NULL;
+
+	return intel_fw_ops;
+}
+
 static int acpi_nfit_register_dimms(struct acpi_nfit_desc *acpi_desc)
 {
 	struct nfit_mem *nfit_mem;
···
 				acpi_nfit_dimm_attribute_groups,
 				flags, cmd_mask, flush ? flush->hint_count : 0,
 				nfit_mem->flush_wpq, &nfit_mem->id[0],
-				acpi_nfit_get_security_ops(nfit_mem->family));
+				acpi_nfit_get_security_ops(nfit_mem->family),
+				acpi_nfit_get_fw_ops(nfit_mem));
 		if (!nvdimm)
 			return -ENOMEM;
 
···
 {
 	struct nvdimm_bus_descriptor *nd_desc = &acpi_desc->nd_desc;
 	const guid_t *guid = to_nfit_uuid(NFIT_DEV_BUS);
+	unsigned long dsm_mask, *mask;
 	struct acpi_device *adev;
-	unsigned long dsm_mask;
 	int i;
 
-	nd_desc->cmd_mask = acpi_desc->bus_cmd_force_en;
-	nd_desc->bus_dsm_mask = acpi_desc->bus_nfit_cmd_force_en;
+	set_bit(ND_CMD_CALL, &nd_desc->cmd_mask);
+	set_bit(NVDIMM_BUS_FAMILY_NFIT, &nd_desc->bus_family_mask);
+
+	/* enable nfit_test to inject bus command emulation */
+	if (acpi_desc->bus_cmd_force_en) {
+		nd_desc->cmd_mask = acpi_desc->bus_cmd_force_en;
+		mask = &nd_desc->bus_family_mask;
+		if (acpi_desc->family_dsm_mask[NVDIMM_BUS_FAMILY_INTEL]) {
+			set_bit(NVDIMM_BUS_FAMILY_INTEL, mask);
+			nd_desc->fw_ops = intel_bus_fw_ops;
+		}
+	}
+
 	adev = to_acpi_dev(acpi_desc);
 	if (!adev)
 		return;
···
 	for (i = ND_CMD_ARS_CAP; i <= ND_CMD_CLEAR_ERROR; i++)
 		if (acpi_check_dsm(adev->handle, guid, 1, 1ULL << i))
 			set_bit(i, &nd_desc->cmd_mask);
-	set_bit(ND_CMD_CALL, &nd_desc->cmd_mask);
 
 	dsm_mask =
 		(1 << ND_CMD_ARS_CAP) |
···
 		(1 << NFIT_CMD_ARS_INJECT_GET);
 	for_each_set_bit(i, &dsm_mask, BITS_PER_LONG)
 		if (acpi_check_dsm(adev->handle, guid, 1, 1ULL << i))
-			set_bit(i, &nd_desc->bus_dsm_mask);
+			set_bit(i, &acpi_desc->bus_dsm_mask);
+
+	/* Enumerate allowed NVDIMM_BUS_FAMILY_INTEL commands */
+	dsm_mask = NVDIMM_BUS_INTEL_FW_ACTIVATE_CMDMASK;
+	guid = to_nfit_bus_uuid(NVDIMM_BUS_FAMILY_INTEL);
+	mask = &acpi_desc->family_dsm_mask[NVDIMM_BUS_FAMILY_INTEL];
+	for_each_set_bit(i, &dsm_mask, BITS_PER_LONG)
+		if (acpi_check_dsm(adev->handle, guid, 1, 1ULL << i))
+			set_bit(i, mask);
+
+	if (*mask == dsm_mask) {
+		set_bit(NVDIMM_BUS_FAMILY_INTEL, &nd_desc->bus_family_mask);
+		nd_desc->fw_ops = intel_bus_fw_ops;
+	}
 }
 
 static ssize_t range_index_show(struct device *dev,
···
 static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
 {
 	struct nfit_spa *nfit_spa;
-	int rc;
+	int rc, do_sched_ars = 0;
 
 	set_bit(ARS_VALID, &acpi_desc->scrub_flags);
 	list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
···
 		}
 	}
 
-	list_for_each_entry(nfit_spa, &acpi_desc->spas, list)
+	list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
 		switch (nfit_spa_type(nfit_spa->spa)) {
 		case NFIT_SPA_VOLATILE:
 		case NFIT_SPA_PM:
···
 			rc = ars_register(acpi_desc, nfit_spa);
 			if (rc)
 				return rc;
+
+			/*
+			 * Kick off background ARS if at least one
+			 * region successfully registered ARS
+			 */
+			if (!test_bit(ARS_FAILED, &nfit_spa->ars_state))
+				do_sched_ars++;
 			break;
 		case NFIT_SPA_BDW:
 			/* nothing to register */
···
 			/* don't register unknown regions */
 			break;
 		}
+	}
 
-	sched_ars(acpi_desc);
+	if (do_sched_ars)
+		sched_ars(acpi_desc);
 	return 0;
 }
···
 	return 0;
 }
 
-/* prevent security commands from being issued via ioctl */
+/*
+ * Prevent security and firmware activate commands from being issued via
+ * ioctl.
+ */
 static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
 		struct nvdimm *nvdimm, unsigned int cmd, void *buf)
 {
···
 			call_pkg->nd_family == NVDIMM_FAMILY_INTEL) {
 		func = call_pkg->nd_command;
 		if (func > NVDIMM_CMD_MAX ||
-		    (1 << func) & NVDIMM_INTEL_SECURITY_CMDMASK)
+		    (1 << func) & NVDIMM_INTEL_DENY_CMDMASK)
 			return -EOPNOTSUPP;
 	}
 
+	/* block all non-nfit bus commands */
+	if (!nvdimm && cmd == ND_CMD_CALL &&
+			call_pkg->nd_family != NVDIMM_BUS_FAMILY_NFIT)
+		return -EOPNOTSUPP;
+
 	return __acpi_nfit_clear_to_send(nd_desc, nvdimm, cmd);
 }
···
 	guid_parse(UUID_NFIT_DIMM_N_HPE2, &nfit_uuid[NFIT_DEV_DIMM_N_HPE2]);
 	guid_parse(UUID_NFIT_DIMM_N_MSFT, &nfit_uuid[NFIT_DEV_DIMM_N_MSFT]);
 	guid_parse(UUID_NFIT_DIMM_N_HYPERV, &nfit_uuid[NFIT_DEV_DIMM_N_HYPERV]);
+	guid_parse(UUID_INTEL_BUS, &nfit_uuid[NFIT_BUS_INTEL]);
 
 	nfit_wq = create_singlethread_workqueue("nfit");
 	if (!nfit_wq)
drivers/acpi/nfit/intel.c | +386
···
 #include "intel.h"
 #include "nfit.h"
 
+static ssize_t firmware_activate_noidle_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);
+	struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus);
+	struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
+
+	return sprintf(buf, "%s\n", acpi_desc->fwa_noidle ? "Y" : "N");
+}
+
+static ssize_t firmware_activate_noidle_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t size)
+{
+	struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);
+	struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus);
+	struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
+	ssize_t rc;
+	bool val;
+
+	rc = kstrtobool(buf, &val);
+	if (rc)
+		return rc;
+	if (val != acpi_desc->fwa_noidle)
+		acpi_desc->fwa_cap = NVDIMM_FWA_CAP_INVALID;
+	acpi_desc->fwa_noidle = val;
+	return size;
+}
+DEVICE_ATTR_RW(firmware_activate_noidle);
+
+bool intel_fwa_supported(struct nvdimm_bus *nvdimm_bus)
+{
+	struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus);
+	struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
+	unsigned long *mask;
+
+	if (!test_bit(NVDIMM_BUS_FAMILY_INTEL, &nd_desc->bus_family_mask))
+		return false;
+
+	mask = &acpi_desc->family_dsm_mask[NVDIMM_BUS_FAMILY_INTEL];
+	return *mask == NVDIMM_BUS_INTEL_FW_ACTIVATE_CMDMASK;
+}
+
 static unsigned long intel_security_flags(struct nvdimm *nvdimm,
 		enum nvdimm_passphrase_type ptype)
 {
···
 };
 
 const struct nvdimm_security_ops *intel_security_ops = &__intel_security_ops;
+
+static int intel_bus_fwa_businfo(struct nvdimm_bus_descriptor *nd_desc,
+		struct nd_intel_bus_fw_activate_businfo *info)
+{
+	struct {
+		struct nd_cmd_pkg pkg;
+		struct nd_intel_bus_fw_activate_businfo cmd;
+	} nd_cmd = {
+		.pkg = {
+			.nd_command = NVDIMM_BUS_INTEL_FW_ACTIVATE_BUSINFO,
+			.nd_family = NVDIMM_BUS_FAMILY_INTEL,
+			.nd_size_out =
+				sizeof(struct nd_intel_bus_fw_activate_businfo),
+			.nd_fw_size =
+				sizeof(struct nd_intel_bus_fw_activate_businfo),
+		},
+	};
+	int rc;
+
+	rc = nd_desc->ndctl(nd_desc, NULL, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd),
+			NULL);
+	*info = nd_cmd.cmd;
+	return rc;
+}
+
+/* The fw_ops expect to be called with the nvdimm_bus_lock() held */
+static enum nvdimm_fwa_state intel_bus_fwa_state(
+		struct nvdimm_bus_descriptor *nd_desc)
+{
+	struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
+	struct nd_intel_bus_fw_activate_businfo info;
+	struct device *dev = acpi_desc->dev;
+	enum nvdimm_fwa_state state;
+	int rc;
+
+	/*
+	 * It should not be possible for platform firmware to return
+	 * busy because activate is a synchronous operation. Treat it
+	 * similar to invalid, i.e. always refresh / poll the status.
+	 */
+	switch (acpi_desc->fwa_state) {
+	case NVDIMM_FWA_INVALID:
+	case NVDIMM_FWA_BUSY:
+		break;
+	default:
+		/* check if capability needs to be refreshed */
+		if (acpi_desc->fwa_cap == NVDIMM_FWA_CAP_INVALID)
+			break;
+		return acpi_desc->fwa_state;
+	}
+
+	/* Refresh with platform firmware */
+	rc = intel_bus_fwa_businfo(nd_desc, &info);
+	if (rc)
+		return NVDIMM_FWA_INVALID;
+
+	switch (info.state) {
+	case ND_INTEL_FWA_IDLE:
+		state = NVDIMM_FWA_IDLE;
+		break;
+	case ND_INTEL_FWA_BUSY:
+		state = NVDIMM_FWA_BUSY;
+		break;
+	case ND_INTEL_FWA_ARMED:
+		if (info.activate_tmo > info.max_quiesce_tmo)
+			state = NVDIMM_FWA_ARM_OVERFLOW;
+		else
+			state = NVDIMM_FWA_ARMED;
+		break;
+	default:
+		dev_err_once(dev, "invalid firmware activate state %d\n",
+				info.state);
+		return NVDIMM_FWA_INVALID;
+	}
+
+	/*
+	 * Capability data is available in the same payload as state. It
+	 * is expected to be static.
+	 */
+	if (acpi_desc->fwa_cap == NVDIMM_FWA_CAP_INVALID) {
+		if (info.capability & ND_INTEL_BUS_FWA_CAP_FWQUIESCE)
+			acpi_desc->fwa_cap = NVDIMM_FWA_CAP_QUIESCE;
+		else if (info.capability & ND_INTEL_BUS_FWA_CAP_OSQUIESCE) {
+			/*
+			 * Skip hibernate cycle by default if platform
+			 * indicates that it does not need devices to be
+			 * quiesced.
+			 */
+			acpi_desc->fwa_cap = NVDIMM_FWA_CAP_LIVE;
+		} else
+			acpi_desc->fwa_cap = NVDIMM_FWA_CAP_NONE;
+	}
+
+	acpi_desc->fwa_state = state;
+
+	return state;
+}
+
+static enum nvdimm_fwa_capability intel_bus_fwa_capability(
+		struct nvdimm_bus_descriptor *nd_desc)
+{
+	struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
+
+	if (acpi_desc->fwa_cap > NVDIMM_FWA_CAP_INVALID)
+		return acpi_desc->fwa_cap;
+
+	if (intel_bus_fwa_state(nd_desc) > NVDIMM_FWA_INVALID)
+		return acpi_desc->fwa_cap;
+
+	return NVDIMM_FWA_CAP_INVALID;
+}
+
+static int intel_bus_fwa_activate(struct nvdimm_bus_descriptor *nd_desc)
+{
+	struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
+	struct {
+		struct nd_cmd_pkg pkg;
+		struct nd_intel_bus_fw_activate cmd;
+	} nd_cmd = {
+		.pkg = {
+			.nd_command = NVDIMM_BUS_INTEL_FW_ACTIVATE,
+			.nd_family = NVDIMM_BUS_FAMILY_INTEL,
+			.nd_size_in = sizeof(nd_cmd.cmd.iodev_state),
+			.nd_size_out =
+				sizeof(struct nd_intel_bus_fw_activate),
+			.nd_fw_size =
+				sizeof(struct nd_intel_bus_fw_activate),
+		},
+		/*
+		 * Even though activate is run from a suspended context,
+		 * for safety, still ask platform firmware to force
+		 * quiesce devices by default. Let a module
+		 * parameter override that policy.
+		 */
+		.cmd = {
+			.iodev_state = acpi_desc->fwa_noidle
+				? ND_INTEL_BUS_FWA_IODEV_OS_IDLE
+				: ND_INTEL_BUS_FWA_IODEV_FORCE_IDLE,
+		},
+	};
+	int rc;
+
+	switch (intel_bus_fwa_state(nd_desc)) {
+	case NVDIMM_FWA_ARMED:
+	case NVDIMM_FWA_ARM_OVERFLOW:
+		break;
+	default:
+		return -ENXIO;
+	}
+
+	rc = nd_desc->ndctl(nd_desc, NULL, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd),
+			NULL);
+
+	/*
+	 * Whether the command succeeded, or failed, the agent checking
+	 * for the result needs to query the DIMMs individually.
+	 * Increment the activation count to invalidate all the DIMM
+	 * states at once (it's otherwise not possible to take
+	 * acpi_desc->init_mutex in this context)
+	 */
+	acpi_desc->fwa_state = NVDIMM_FWA_INVALID;
+	acpi_desc->fwa_count++;
+
+	dev_dbg(acpi_desc->dev, "result: %d\n", rc);
+
+	return rc;
+}
+
+static const struct nvdimm_bus_fw_ops __intel_bus_fw_ops = {
+	.activate_state = intel_bus_fwa_state,
+	.capability = intel_bus_fwa_capability,
+	.activate = intel_bus_fwa_activate,
+};
+
+const struct nvdimm_bus_fw_ops *intel_bus_fw_ops = &__intel_bus_fw_ops;
+
+static int intel_fwa_dimminfo(struct nvdimm *nvdimm,
+		struct nd_intel_fw_activate_dimminfo *info)
+{
+	struct {
+		struct nd_cmd_pkg pkg;
+		struct nd_intel_fw_activate_dimminfo cmd;
+	} nd_cmd = {
+		.pkg = {
+			.nd_command = NVDIMM_INTEL_FW_ACTIVATE_DIMMINFO,
+			.nd_family = NVDIMM_FAMILY_INTEL,
+			.nd_size_out =
+				sizeof(struct nd_intel_fw_activate_dimminfo),
+			.nd_fw_size =
+				sizeof(struct nd_intel_fw_activate_dimminfo),
+		},
+	};
+	int rc;
+
+	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
+	*info = nd_cmd.cmd;
+	return rc;
+}
+
+static enum nvdimm_fwa_state intel_fwa_state(struct nvdimm *nvdimm)
+{
+	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
+	struct acpi_nfit_desc *acpi_desc = nfit_mem->acpi_desc;
+	struct nd_intel_fw_activate_dimminfo info;
+	int rc;
+
+	/*
+	 * Similar to the bus state, since activate is synchronous the
+	 * busy state should resolve within the context of 'activate'.
+	 */
+	switch (nfit_mem->fwa_state) {
+	case NVDIMM_FWA_INVALID:
+	case NVDIMM_FWA_BUSY:
+		break;
+	default:
+		/* If no activations occurred the old state is still valid */
+		if (nfit_mem->fwa_count == acpi_desc->fwa_count)
+			return nfit_mem->fwa_state;
+	}
+
+	rc = intel_fwa_dimminfo(nvdimm, &info);
+	if (rc)
+		return NVDIMM_FWA_INVALID;
+
+	switch (info.state) {
+	case ND_INTEL_FWA_IDLE:
+		nfit_mem->fwa_state = NVDIMM_FWA_IDLE;
+		break;
+	case ND_INTEL_FWA_BUSY:
+		nfit_mem->fwa_state = NVDIMM_FWA_BUSY;
+		break;
+	case ND_INTEL_FWA_ARMED:
+		nfit_mem->fwa_state = NVDIMM_FWA_ARMED;
+		break;
+	default:
+		nfit_mem->fwa_state = NVDIMM_FWA_INVALID;
+		break;
+	}
+
+	switch (info.result) {
+	case ND_INTEL_DIMM_FWA_NONE:
+		nfit_mem->fwa_result = NVDIMM_FWA_RESULT_NONE;
+		break;
+	case ND_INTEL_DIMM_FWA_SUCCESS:
+		nfit_mem->fwa_result = NVDIMM_FWA_RESULT_SUCCESS;
+		break;
+	case ND_INTEL_DIMM_FWA_NOTSTAGED:
+		nfit_mem->fwa_result = NVDIMM_FWA_RESULT_NOTSTAGED;
+		break;
+	case ND_INTEL_DIMM_FWA_NEEDRESET:
+		nfit_mem->fwa_result = NVDIMM_FWA_RESULT_NEEDRESET;
+		break;
+	case ND_INTEL_DIMM_FWA_MEDIAFAILED:
+	case ND_INTEL_DIMM_FWA_ABORT:
+	case ND_INTEL_DIMM_FWA_NOTSUPP:
+	case ND_INTEL_DIMM_FWA_ERROR:
+	default:
+		nfit_mem->fwa_result = NVDIMM_FWA_RESULT_FAIL;
+		break;
+	}
+
+	nfit_mem->fwa_count = acpi_desc->fwa_count;
+
+	return nfit_mem->fwa_state;
+}
+
+static enum nvdimm_fwa_result intel_fwa_result(struct nvdimm *nvdimm)
+{
+	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
+	struct acpi_nfit_desc *acpi_desc = nfit_mem->acpi_desc;
+
+	if (nfit_mem->fwa_count == acpi_desc->fwa_count
+			&& nfit_mem->fwa_result > NVDIMM_FWA_RESULT_INVALID)
+		return nfit_mem->fwa_result;
+
+	if (intel_fwa_state(nvdimm) > NVDIMM_FWA_INVALID)
+		return nfit_mem->fwa_result;
+
+	return NVDIMM_FWA_RESULT_INVALID;
+}
+
+static int intel_fwa_arm(struct nvdimm *nvdimm, enum nvdimm_fwa_trigger arm)
+{
+	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
+	struct acpi_nfit_desc *acpi_desc = nfit_mem->acpi_desc;
+	struct {
+		struct nd_cmd_pkg pkg;
+		struct nd_intel_fw_activate_arm cmd;
+	} nd_cmd = {
+		.pkg = {
+			.nd_command = NVDIMM_INTEL_FW_ACTIVATE_ARM,
+			.nd_family = NVDIMM_FAMILY_INTEL,
+			.nd_size_in = sizeof(nd_cmd.cmd.activate_arm),
+			.nd_size_out =
+				sizeof(struct nd_intel_fw_activate_arm),
+			.nd_fw_size =
+				sizeof(struct nd_intel_fw_activate_arm),
+		},
+		.cmd = {
+			.activate_arm = arm == NVDIMM_FWA_ARM
+				? ND_INTEL_DIMM_FWA_ARM
+				: ND_INTEL_DIMM_FWA_DISARM,
+		},
+	};
+	int rc;
+
+	switch (intel_fwa_state(nvdimm)) {
+	case NVDIMM_FWA_INVALID:
+		return -ENXIO;
+	case NVDIMM_FWA_BUSY:
+		return -EBUSY;
+	case NVDIMM_FWA_IDLE:
+		if (arm == NVDIMM_FWA_DISARM)
+			return 0;
+		break;
+	case NVDIMM_FWA_ARMED:
+		if (arm == NVDIMM_FWA_ARM)
+			return 0;
+		break;
+	default:
+		return -ENXIO;
+	}
+
+	/*
+	 * Invalidate the bus-level state, now that we're committed to
+	 * changing the 'arm' state.
+	 */
+	acpi_desc->fwa_state = NVDIMM_FWA_INVALID;
+	nfit_mem->fwa_state = NVDIMM_FWA_INVALID;
+
+	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
+
+	dev_dbg(acpi_desc->dev, "%s result: %d\n", arm == NVDIMM_FWA_ARM
+			? "arm" : "disarm", rc);
+	return rc;
+}
+
+static const struct nvdimm_fw_ops __intel_fw_ops = {
+	.activate_state = intel_fwa_state,
+	.activate_result = intel_fwa_result,
+	.arm = intel_fwa_arm,
+};
+
+const struct nvdimm_fw_ops *intel_fw_ops = &__intel_fw_ops;
drivers/acpi/nfit/intel.h | +61
··· 111 111 u8 passphrase[ND_INTEL_PASSPHRASE_SIZE]; 112 112 u32 status; 113 113 } __packed; 114 + 115 + #define ND_INTEL_FWA_IDLE 0 116 + #define ND_INTEL_FWA_ARMED 1 117 + #define ND_INTEL_FWA_BUSY 2 118 + 119 + #define ND_INTEL_DIMM_FWA_NONE 0 120 + #define ND_INTEL_DIMM_FWA_NOTSTAGED 1 121 + #define ND_INTEL_DIMM_FWA_SUCCESS 2 122 + #define ND_INTEL_DIMM_FWA_NEEDRESET 3 123 + #define ND_INTEL_DIMM_FWA_MEDIAFAILED 4 124 + #define ND_INTEL_DIMM_FWA_ABORT 5 125 + #define ND_INTEL_DIMM_FWA_NOTSUPP 6 126 + #define ND_INTEL_DIMM_FWA_ERROR 7 127 + 128 + struct nd_intel_fw_activate_dimminfo { 129 + u32 status; 130 + u16 result; 131 + u8 state; 132 + u8 reserved[7]; 133 + } __packed; 134 + 135 + #define ND_INTEL_DIMM_FWA_ARM 1 136 + #define ND_INTEL_DIMM_FWA_DISARM 0 137 + 138 + struct nd_intel_fw_activate_arm { 139 + u8 activate_arm; 140 + u32 status; 141 + } __packed; 142 + 143 + /* Root device command payloads */ 144 + #define ND_INTEL_BUS_FWA_CAP_FWQUIESCE (1 << 0) 145 + #define ND_INTEL_BUS_FWA_CAP_OSQUIESCE (1 << 1) 146 + #define ND_INTEL_BUS_FWA_CAP_RESET (1 << 2) 147 + 148 + struct nd_intel_bus_fw_activate_businfo { 149 + u32 status; 150 + u16 reserved; 151 + u8 state; 152 + u8 capability; 153 + u64 activate_tmo; 154 + u64 cpu_quiesce_tmo; 155 + u64 io_quiesce_tmo; 156 + u64 max_quiesce_tmo; 157 + } __packed; 158 + 159 + #define ND_INTEL_BUS_FWA_STATUS_NOARM (6 | 1 << 16) 160 + #define ND_INTEL_BUS_FWA_STATUS_BUSY (6 | 2 << 16) 161 + #define ND_INTEL_BUS_FWA_STATUS_NOFW (6 | 3 << 16) 162 + #define ND_INTEL_BUS_FWA_STATUS_TMO (6 | 4 << 16) 163 + #define ND_INTEL_BUS_FWA_STATUS_NOIDLE (6 | 5 << 16) 164 + #define ND_INTEL_BUS_FWA_STATUS_ABORT (6 | 6 << 16) 165 + 166 + #define ND_INTEL_BUS_FWA_IODEV_FORCE_IDLE (0) 167 + #define ND_INTEL_BUS_FWA_IODEV_OS_IDLE (1) 168 + struct nd_intel_bus_fw_activate { 169 + u8 iodev_state; 170 + u32 status; 171 + } __packed; 172 + 173 + extern const struct nvdimm_fw_ops *intel_fw_ops; 174 + extern const struct nvdimm_bus_fw_ops 
*intel_bus_fw_ops; 114 175 #endif
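The new `ND_INTEL_BUS_FWA_STATUS_*` codes in intel.h pack two fields into one value: a common base code of 6 in the low 16 bits and the specific failure reason in the high 16 bits. A small userspace sketch (the helper names here are my own, not part of the patch) shows the layout:

```c
#include <assert.h>
#include <stdint.h>

/* Same packing as the new intel.h defines above */
#define ND_INTEL_BUS_FWA_STATUS_NOARM (6 | 1 << 16)
#define ND_INTEL_BUS_FWA_STATUS_BUSY  (6 | 2 << 16)
#define ND_INTEL_BUS_FWA_STATUS_TMO   (6 | 4 << 16)

/* Hypothetical helpers that split the packed status into its halves */
static inline uint16_t fwa_status_base(uint32_t status)
{
	return status & 0xffff;	/* common base code, 6 for all of these */
}

static inline uint16_t fwa_status_reason(uint32_t status)
{
	return status >> 16;	/* which failure it actually was */
}
```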
+35 -3
drivers/acpi/nfit/nfit.h
··· 18 18 19 19 /* https://pmem.io/documents/NVDIMM_DSM_Interface-V1.6.pdf */ 20 20 #define UUID_NFIT_DIMM "4309ac30-0d11-11e4-9191-0800200c9a66" 21 + #define UUID_INTEL_BUS "c7d8acd4-2df8-4b82-9f65-a325335af149" 21 22 22 23 /* https://github.com/HewlettPackard/hpe-nvm/blob/master/Documentation/ */ 23 24 #define UUID_NFIT_DIMM_N_HPE1 "9002c334-acf3-4c0e-9642-a235f0d53bc6" ··· 34 33 | ACPI_NFIT_MEM_RESTORE_FAILED | ACPI_NFIT_MEM_FLUSH_FAILED \ 35 34 | ACPI_NFIT_MEM_NOT_ARMED | ACPI_NFIT_MEM_MAP_FAILED) 36 35 37 - #define NVDIMM_FAMILY_MAX NVDIMM_FAMILY_HYPERV 38 36 #define NVDIMM_CMD_MAX 31 39 37 40 38 #define NVDIMM_STANDARD_CMDMASK \ ··· 66 66 NVDIMM_INTEL_QUERY_OVERWRITE = 26, 67 67 NVDIMM_INTEL_SET_MASTER_PASSPHRASE = 27, 68 68 NVDIMM_INTEL_MASTER_SECURE_ERASE = 28, 69 + NVDIMM_INTEL_FW_ACTIVATE_DIMMINFO = 29, 70 + NVDIMM_INTEL_FW_ACTIVATE_ARM = 30, 71 + }; 72 + 73 + enum nvdimm_bus_family_cmds { 74 + NVDIMM_BUS_INTEL_FW_ACTIVATE_BUSINFO = 1, 75 + NVDIMM_BUS_INTEL_FW_ACTIVATE = 2, 69 76 }; 70 77 71 78 #define NVDIMM_INTEL_SECURITY_CMDMASK \ ··· 83 76 | 1 << NVDIMM_INTEL_SET_MASTER_PASSPHRASE \ 84 77 | 1 << NVDIMM_INTEL_MASTER_SECURE_ERASE) 85 78 79 + #define NVDIMM_INTEL_FW_ACTIVATE_CMDMASK \ 80 + (1 << NVDIMM_INTEL_FW_ACTIVATE_DIMMINFO | 1 << NVDIMM_INTEL_FW_ACTIVATE_ARM) 81 + 82 + #define NVDIMM_BUS_INTEL_FW_ACTIVATE_CMDMASK \ 83 + (1 << NVDIMM_BUS_INTEL_FW_ACTIVATE_BUSINFO | 1 << NVDIMM_BUS_INTEL_FW_ACTIVATE) 84 + 86 85 #define NVDIMM_INTEL_CMDMASK \ 87 86 (NVDIMM_STANDARD_CMDMASK | 1 << NVDIMM_INTEL_GET_MODES \ 88 87 | 1 << NVDIMM_INTEL_GET_FWINFO | 1 << NVDIMM_INTEL_START_FWUPDATE \ 89 88 | 1 << NVDIMM_INTEL_SEND_FWUPDATE | 1 << NVDIMM_INTEL_FINISH_FWUPDATE \ 90 89 | 1 << NVDIMM_INTEL_QUERY_FWUPDATE | 1 << NVDIMM_INTEL_SET_THRESHOLD \ 91 90 | 1 << NVDIMM_INTEL_INJECT_ERROR | 1 << NVDIMM_INTEL_LATCH_SHUTDOWN \ 92 - | NVDIMM_INTEL_SECURITY_CMDMASK) 91 + | NVDIMM_INTEL_SECURITY_CMDMASK | NVDIMM_INTEL_FW_ACTIVATE_CMDMASK) 92 + 93 + #define 
NVDIMM_INTEL_DENY_CMDMASK \ 94 + (NVDIMM_INTEL_SECURITY_CMDMASK | NVDIMM_INTEL_FW_ACTIVATE_CMDMASK) 93 95 94 96 enum nfit_uuids { 95 97 /* for simplicity alias the uuid index with the family id */ ··· 107 91 NFIT_DEV_DIMM_N_HPE2 = NVDIMM_FAMILY_HPE2, 108 92 NFIT_DEV_DIMM_N_MSFT = NVDIMM_FAMILY_MSFT, 109 93 NFIT_DEV_DIMM_N_HYPERV = NVDIMM_FAMILY_HYPERV, 94 + /* 95 + * to_nfit_bus_uuid() expects to translate bus uuid family ids 96 + * to a UUID index using NVDIMM_FAMILY_MAX as an offset 97 + */ 98 + NFIT_BUS_INTEL = NVDIMM_FAMILY_MAX + NVDIMM_BUS_FAMILY_INTEL, 110 99 NFIT_SPA_VOLATILE, 111 100 NFIT_SPA_PM, 112 101 NFIT_SPA_DCR, ··· 220 199 struct list_head list; 221 200 struct acpi_device *adev; 222 201 struct acpi_nfit_desc *acpi_desc; 202 + enum nvdimm_fwa_state fwa_state; 203 + enum nvdimm_fwa_result fwa_result; 204 + int fwa_count; 223 205 char id[NFIT_DIMM_ID_LEN+1]; 224 206 struct resource *flush_wpq; 225 207 unsigned long dsm_mask; ··· 262 238 unsigned long scrub_flags; 263 239 unsigned long dimm_cmd_force_en; 264 240 unsigned long bus_cmd_force_en; 265 - unsigned long bus_nfit_cmd_force_en; 241 + unsigned long bus_dsm_mask; 242 + unsigned long family_dsm_mask[NVDIMM_BUS_FAMILY_MAX + 1]; 266 243 unsigned int platform_cap; 267 244 unsigned int scrub_tmo; 268 245 int (*blk_do_io)(struct nd_blk_region *ndbr, resource_size_t dpa, 269 246 void *iobuf, u64 len, int rw); 247 + enum nvdimm_fwa_state fwa_state; 248 + enum nvdimm_fwa_capability fwa_cap; 249 + int fwa_count; 250 + bool fwa_noidle; 251 + bool fwa_nosuspend; 270 252 }; 271 253 272 254 enum scrub_mode { ··· 375 345 int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm, 376 346 unsigned int cmd, void *buf, unsigned int buf_len, int *cmd_rc); 377 347 void acpi_nfit_desc_init(struct acpi_nfit_desc *acpi_desc, struct device *dev); 348 + bool intel_fwa_supported(struct nvdimm_bus *nvdimm_bus); 349 + extern struct device_attribute dev_attr_firmware_activate_noidle; 378 350 #endif /* 
__NFIT_H__ */
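The nfit.h changes extend the DSM mask scheme: each command number is a bit position, and masks like `NVDIMM_INTEL_FW_ACTIVATE_CMDMASK` are just ORed shifts, which lets the new deny mask subtract whole command groups at once. A standalone sketch of the membership test (the `cmd_allowed()` helper is illustrative, not from the patch):

```c
#include <assert.h>

/* Mirrors the new command numbers and mask from nfit.h */
enum nvdimm_intel_cmds {
	NVDIMM_INTEL_FW_ACTIVATE_DIMMINFO = 29,
	NVDIMM_INTEL_FW_ACTIVATE_ARM = 30,
};

#define NVDIMM_INTEL_FW_ACTIVATE_CMDMASK \
	(1UL << NVDIMM_INTEL_FW_ACTIVATE_DIMMINFO \
	 | 1UL << NVDIMM_INTEL_FW_ACTIVATE_ARM)

/* A command is permitted when its bit is set in the dsm mask */
static int cmd_allowed(unsigned long dsm_mask, unsigned int cmd)
{
	return !!(dsm_mask & (1UL << cmd));
}
```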
+7 -6
drivers/dax/super.c
··· 80 80 int err, id; 81 81 82 82 if (blocksize != PAGE_SIZE) { 83 - pr_debug("%s: error: unsupported blocksize for dax\n", 83 + pr_info("%s: error: unsupported blocksize for dax\n", 84 84 bdevname(bdev, buf)); 85 85 return false; 86 86 } 87 87 88 88 err = bdev_dax_pgoff(bdev, start, PAGE_SIZE, &pgoff); 89 89 if (err) { 90 - pr_debug("%s: error: unaligned partition for dax\n", 90 + pr_info("%s: error: unaligned partition for dax\n", 91 91 bdevname(bdev, buf)); 92 92 return false; 93 93 } ··· 95 95 last_page = PFN_DOWN((start + sectors - 1) * 512) * PAGE_SIZE / 512; 96 96 err = bdev_dax_pgoff(bdev, last_page, PAGE_SIZE, &pgoff_end); 97 97 if (err) { 98 - pr_debug("%s: error: unaligned partition for dax\n", 98 + pr_info("%s: error: unaligned partition for dax\n", 99 99 bdevname(bdev, buf)); 100 100 return false; 101 101 } ··· 103 103 id = dax_read_lock(); 104 104 len = dax_direct_access(dax_dev, pgoff, 1, &kaddr, &pfn); 105 105 len2 = dax_direct_access(dax_dev, pgoff_end, 1, &end_kaddr, &end_pfn); 106 - dax_read_unlock(id); 107 106 108 107 if (len < 1 || len2 < 1) { 109 - pr_debug("%s: error: dax access failed (%ld)\n", 108 + pr_info("%s: error: dax access failed (%ld)\n", 110 109 bdevname(bdev, buf), len < 1 ? len : len2); 110 + dax_read_unlock(id); 111 111 return false; 112 112 } 113 113 ··· 137 137 put_dev_pagemap(end_pgmap); 138 138 139 139 } 140 + dax_read_unlock(id); 140 141 141 142 if (!dax_enabled) { 142 - pr_debug("%s: error: dax support not enabled\n", 143 + pr_info("%s: error: dax support not enabled\n", 143 144 bdevname(bdev, buf)); 144 145 return false; 145 146 }
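The super.c hunk ("Expand lock scope to cover the use of addresses") moves `dax_read_unlock()` past the last use of the addresses returned by `dax_direct_access()`, since those mappings are only guaranteed stable while the read lock is held. A toy model of the corrected ordering (the lock here is a plain counter, purely for illustration):

```c
#include <assert.h>

/* Toy stand-ins for dax_read_lock()/dax_read_unlock() */
static int lock_held;

static int dax_read_lock(void) { lock_held++; return 0; }
static void dax_read_unlock(int id) { (void)id; lock_held--; }

/* nonzero iff an address from dax_direct_access() is safe to use now */
static int address_valid(void) { return lock_held > 0; }

static int probe_fixed(void)
{
	int ok, id = dax_read_lock();

	ok = address_valid();	/* dereference under the lock */
	dax_read_unlock(id);	/* unlock only after the last use */
	return ok;
}
```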
+16
drivers/nvdimm/bus.c
··· 1037 1037 dimm_name = "bus"; 1038 1038 } 1039 1039 1040 + /* Validate command family support against bus declared support */ 1040 1041 if (cmd == ND_CMD_CALL) { 1042 + unsigned long *mask; 1043 + 1041 1044 if (copy_from_user(&pkg, p, sizeof(pkg))) 1042 1045 return -EFAULT; 1046 + 1047 + if (nvdimm) { 1048 + if (pkg.nd_family > NVDIMM_FAMILY_MAX) 1049 + return -EINVAL; 1050 + mask = &nd_desc->dimm_family_mask; 1051 + } else { 1052 + if (pkg.nd_family > NVDIMM_BUS_FAMILY_MAX) 1053 + return -EINVAL; 1054 + mask = &nd_desc->bus_family_mask; 1055 + } 1056 + 1057 + if (!test_bit(pkg.nd_family, mask)) 1058 + return -EINVAL; 1043 1059 } 1044 1060 1045 1061 if (!desc ||
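The bus.c validation above rejects an `ND_CMD_CALL` whose family index is out of range or whose bit is absent from the bus-declared mask. A userspace sketch of that check, using the new UAPI bus family indices:

```c
#include <assert.h>

/* Mirrors the new include/uapi/linux/ndctl.h indices */
#define NVDIMM_BUS_FAMILY_NFIT	0
#define NVDIMM_BUS_FAMILY_INTEL	1
#define NVDIMM_BUS_FAMILY_MAX	NVDIMM_BUS_FAMILY_INTEL

static int bus_family_valid(unsigned long bus_family_mask,
		unsigned long nd_family)
{
	if (nd_family > NVDIMM_BUS_FAMILY_MAX)
		return 0;	/* out of range: -EINVAL in the kernel */
	return !!(bus_family_mask & (1UL << nd_family));
}
```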
+149
drivers/nvdimm/core.c
··· 4 4 */ 5 5 #include <linux/libnvdimm.h> 6 6 #include <linux/badblocks.h> 7 + #include <linux/suspend.h> 7 8 #include <linux/export.h> 8 9 #include <linux/module.h> 9 10 #include <linux/blkdev.h> ··· 390 389 .attrs = nvdimm_bus_attributes, 391 390 }; 392 391 392 + static ssize_t capability_show(struct device *dev, 393 + struct device_attribute *attr, char *buf) 394 + { 395 + struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev); 396 + struct nvdimm_bus_descriptor *nd_desc = nvdimm_bus->nd_desc; 397 + enum nvdimm_fwa_capability cap; 398 + 399 + if (!nd_desc->fw_ops) 400 + return -EOPNOTSUPP; 401 + 402 + nvdimm_bus_lock(dev); 403 + cap = nd_desc->fw_ops->capability(nd_desc); 404 + nvdimm_bus_unlock(dev); 405 + 406 + switch (cap) { 407 + case NVDIMM_FWA_CAP_QUIESCE: 408 + return sprintf(buf, "quiesce\n"); 409 + case NVDIMM_FWA_CAP_LIVE: 410 + return sprintf(buf, "live\n"); 411 + default: 412 + return -EOPNOTSUPP; 413 + } 414 + } 415 + 416 + static DEVICE_ATTR_RO(capability); 417 + 418 + static ssize_t activate_show(struct device *dev, 419 + struct device_attribute *attr, char *buf) 420 + { 421 + struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev); 422 + struct nvdimm_bus_descriptor *nd_desc = nvdimm_bus->nd_desc; 423 + enum nvdimm_fwa_capability cap; 424 + enum nvdimm_fwa_state state; 425 + 426 + if (!nd_desc->fw_ops) 427 + return -EOPNOTSUPP; 428 + 429 + nvdimm_bus_lock(dev); 430 + cap = nd_desc->fw_ops->capability(nd_desc); 431 + state = nd_desc->fw_ops->activate_state(nd_desc); 432 + nvdimm_bus_unlock(dev); 433 + 434 + if (cap < NVDIMM_FWA_CAP_QUIESCE) 435 + return -EOPNOTSUPP; 436 + 437 + switch (state) { 438 + case NVDIMM_FWA_IDLE: 439 + return sprintf(buf, "idle\n"); 440 + case NVDIMM_FWA_BUSY: 441 + return sprintf(buf, "busy\n"); 442 + case NVDIMM_FWA_ARMED: 443 + return sprintf(buf, "armed\n"); 444 + case NVDIMM_FWA_ARM_OVERFLOW: 445 + return sprintf(buf, "overflow\n"); 446 + default: 447 + return -ENXIO; 448 + } 449 + } 450 + 451 + static int 
exec_firmware_activate(void *data) 452 + { 453 + struct nvdimm_bus_descriptor *nd_desc = data; 454 + 455 + return nd_desc->fw_ops->activate(nd_desc); 456 + } 457 + 458 + static ssize_t activate_store(struct device *dev, 459 + struct device_attribute *attr, const char *buf, size_t len) 460 + { 461 + struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev); 462 + struct nvdimm_bus_descriptor *nd_desc = nvdimm_bus->nd_desc; 463 + enum nvdimm_fwa_state state; 464 + bool quiesce; 465 + ssize_t rc; 466 + 467 + if (!nd_desc->fw_ops) 468 + return -EOPNOTSUPP; 469 + 470 + if (sysfs_streq(buf, "live")) 471 + quiesce = false; 472 + else if (sysfs_streq(buf, "quiesce")) 473 + quiesce = true; 474 + else 475 + return -EINVAL; 476 + 477 + nvdimm_bus_lock(dev); 478 + state = nd_desc->fw_ops->activate_state(nd_desc); 479 + 480 + switch (state) { 481 + case NVDIMM_FWA_BUSY: 482 + rc = -EBUSY; 483 + break; 484 + case NVDIMM_FWA_ARMED: 485 + case NVDIMM_FWA_ARM_OVERFLOW: 486 + if (quiesce) 487 + rc = hibernate_quiet_exec(exec_firmware_activate, nd_desc); 488 + else 489 + rc = nd_desc->fw_ops->activate(nd_desc); 490 + break; 491 + case NVDIMM_FWA_IDLE: 492 + default: 493 + rc = -ENXIO; 494 + } 495 + nvdimm_bus_unlock(dev); 496 + 497 + if (rc == 0) 498 + rc = len; 499 + return rc; 500 + } 501 + 502 + static DEVICE_ATTR_ADMIN_RW(activate); 503 + 504 + static umode_t nvdimm_bus_firmware_visible(struct kobject *kobj, struct attribute *a, int n) 505 + { 506 + struct device *dev = container_of(kobj, typeof(*dev), kobj); 507 + struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev); 508 + struct nvdimm_bus_descriptor *nd_desc = nvdimm_bus->nd_desc; 509 + enum nvdimm_fwa_capability cap; 510 + 511 + /* 512 + * Both 'activate' and 'capability' disappear when no ops 513 + * detected, or a negative capability is indicated. 
514 + */ 515 + if (!nd_desc->fw_ops) 516 + return 0; 517 + 518 + nvdimm_bus_lock(dev); 519 + cap = nd_desc->fw_ops->capability(nd_desc); 520 + nvdimm_bus_unlock(dev); 521 + 522 + if (cap < NVDIMM_FWA_CAP_QUIESCE) 523 + return 0; 524 + 525 + return a->mode; 526 + } 527 + static struct attribute *nvdimm_bus_firmware_attributes[] = { 528 + &dev_attr_activate.attr, 529 + &dev_attr_capability.attr, 530 + NULL, 531 + }; 532 + 533 + static const struct attribute_group nvdimm_bus_firmware_attribute_group = { 534 + .name = "firmware", 535 + .attrs = nvdimm_bus_firmware_attributes, 536 + .is_visible = nvdimm_bus_firmware_visible, 537 + }; 538 + 393 539 const struct attribute_group *nvdimm_bus_attribute_groups[] = { 394 540 &nvdimm_bus_attribute_group, 541 + &nvdimm_bus_firmware_attribute_group, 395 542 NULL, 396 543 }; 397 544
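The `activate_store()` switch above encodes a small decision table: only an armed (or arm-overflowed) bus may be activated, a busy bus reports `-EBUSY`, and idle or unknown states report `-ENXIO`. A userspace model of just that table (the quiesce/hibernate dispatch is omitted):

```c
#include <assert.h>
#include <errno.h>

enum nvdimm_fwa_state {
	NVDIMM_FWA_INVALID,
	NVDIMM_FWA_IDLE,
	NVDIMM_FWA_ARMED,
	NVDIMM_FWA_BUSY,
	NVDIMM_FWA_ARM_OVERFLOW,
};

/* 0 means "proceed with activation" in this sketch */
static int activate_decision(enum nvdimm_fwa_state state)
{
	switch (state) {
	case NVDIMM_FWA_BUSY:
		return -EBUSY;
	case NVDIMM_FWA_ARMED:
	case NVDIMM_FWA_ARM_OVERFLOW:
		return 0;
	case NVDIMM_FWA_IDLE:
	default:
		return -ENXIO;
	}
}
```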
+120 -3
drivers/nvdimm/dimm_devs.c
··· 363 363 { 364 364 struct nvdimm *nvdimm = to_nvdimm(dev); 365 365 366 + if (test_bit(NVDIMM_SECURITY_OVERWRITE, &nvdimm->sec.flags)) 367 + return sprintf(buf, "overwrite\n"); 366 368 if (test_bit(NVDIMM_SECURITY_DISABLED, &nvdimm->sec.flags)) 367 369 return sprintf(buf, "disabled\n"); 368 370 if (test_bit(NVDIMM_SECURITY_UNLOCKED, &nvdimm->sec.flags)) 369 371 return sprintf(buf, "unlocked\n"); 370 372 if (test_bit(NVDIMM_SECURITY_LOCKED, &nvdimm->sec.flags)) 371 373 return sprintf(buf, "locked\n"); 372 - if (test_bit(NVDIMM_SECURITY_OVERWRITE, &nvdimm->sec.flags)) 373 - return sprintf(buf, "overwrite\n"); 374 374 return -ENOTTY; 375 375 } 376 376 ··· 446 446 .is_visible = nvdimm_visible, 447 447 }; 448 448 449 + static ssize_t result_show(struct device *dev, struct device_attribute *attr, char *buf) 450 + { 451 + struct nvdimm *nvdimm = to_nvdimm(dev); 452 + enum nvdimm_fwa_result result; 453 + 454 + if (!nvdimm->fw_ops) 455 + return -EOPNOTSUPP; 456 + 457 + nvdimm_bus_lock(dev); 458 + result = nvdimm->fw_ops->activate_result(nvdimm); 459 + nvdimm_bus_unlock(dev); 460 + 461 + switch (result) { 462 + case NVDIMM_FWA_RESULT_NONE: 463 + return sprintf(buf, "none\n"); 464 + case NVDIMM_FWA_RESULT_SUCCESS: 465 + return sprintf(buf, "success\n"); 466 + case NVDIMM_FWA_RESULT_FAIL: 467 + return sprintf(buf, "fail\n"); 468 + case NVDIMM_FWA_RESULT_NOTSTAGED: 469 + return sprintf(buf, "not_staged\n"); 470 + case NVDIMM_FWA_RESULT_NEEDRESET: 471 + return sprintf(buf, "need_reset\n"); 472 + default: 473 + return -ENXIO; 474 + } 475 + } 476 + static DEVICE_ATTR_ADMIN_RO(result); 477 + 478 + static ssize_t activate_show(struct device *dev, struct device_attribute *attr, char *buf) 479 + { 480 + struct nvdimm *nvdimm = to_nvdimm(dev); 481 + enum nvdimm_fwa_state state; 482 + 483 + if (!nvdimm->fw_ops) 484 + return -EOPNOTSUPP; 485 + 486 + nvdimm_bus_lock(dev); 487 + state = nvdimm->fw_ops->activate_state(nvdimm); 488 + nvdimm_bus_unlock(dev); 489 + 490 + switch (state) { 491 
+ case NVDIMM_FWA_IDLE: 492 + return sprintf(buf, "idle\n"); 493 + case NVDIMM_FWA_BUSY: 494 + return sprintf(buf, "busy\n"); 495 + case NVDIMM_FWA_ARMED: 496 + return sprintf(buf, "armed\n"); 497 + default: 498 + return -ENXIO; 499 + } 500 + } 501 + 502 + static ssize_t activate_store(struct device *dev, struct device_attribute *attr, 503 + const char *buf, size_t len) 504 + { 505 + struct nvdimm *nvdimm = to_nvdimm(dev); 506 + enum nvdimm_fwa_trigger arg; 507 + int rc; 508 + 509 + if (!nvdimm->fw_ops) 510 + return -EOPNOTSUPP; 511 + 512 + if (sysfs_streq(buf, "arm")) 513 + arg = NVDIMM_FWA_ARM; 514 + else if (sysfs_streq(buf, "disarm")) 515 + arg = NVDIMM_FWA_DISARM; 516 + else 517 + return -EINVAL; 518 + 519 + nvdimm_bus_lock(dev); 520 + rc = nvdimm->fw_ops->arm(nvdimm, arg); 521 + nvdimm_bus_unlock(dev); 522 + 523 + if (rc < 0) 524 + return rc; 525 + return len; 526 + } 527 + static DEVICE_ATTR_ADMIN_RW(activate); 528 + 529 + static struct attribute *nvdimm_firmware_attributes[] = { 530 + &dev_attr_activate.attr, 531 + &dev_attr_result.attr, 532 + }; 533 + 534 + static umode_t nvdimm_firmware_visible(struct kobject *kobj, struct attribute *a, int n) 535 + { 536 + struct device *dev = container_of(kobj, typeof(*dev), kobj); 537 + struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev); 538 + struct nvdimm_bus_descriptor *nd_desc = nvdimm_bus->nd_desc; 539 + struct nvdimm *nvdimm = to_nvdimm(dev); 540 + enum nvdimm_fwa_capability cap; 541 + 542 + if (!nd_desc->fw_ops) 543 + return 0; 544 + if (!nvdimm->fw_ops) 545 + return 0; 546 + 547 + nvdimm_bus_lock(dev); 548 + cap = nd_desc->fw_ops->capability(nd_desc); 549 + nvdimm_bus_unlock(dev); 550 + 551 + if (cap < NVDIMM_FWA_CAP_QUIESCE) 552 + return 0; 553 + 554 + return a->mode; 555 + } 556 + 557 + static const struct attribute_group nvdimm_firmware_attribute_group = { 558 + .name = "firmware", 559 + .attrs = nvdimm_firmware_attributes, 560 + .is_visible = nvdimm_firmware_visible, 561 + }; 562 + 449 563 static const 
struct attribute_group *nvdimm_attribute_groups[] = { 450 564 &nd_device_attribute_group, 451 565 &nvdimm_attribute_group, 566 + &nvdimm_firmware_attribute_group, 452 567 NULL, 453 568 }; 454 569 ··· 582 467 void *provider_data, const struct attribute_group **groups, 583 468 unsigned long flags, unsigned long cmd_mask, int num_flush, 584 469 struct resource *flush_wpq, const char *dimm_id, 585 - const struct nvdimm_security_ops *sec_ops) 470 + const struct nvdimm_security_ops *sec_ops, 471 + const struct nvdimm_fw_ops *fw_ops) 586 472 { 587 473 struct nvdimm *nvdimm = kzalloc(sizeof(*nvdimm), GFP_KERNEL); 588 474 struct device *dev; ··· 613 497 dev->devt = MKDEV(nvdimm_major, nvdimm->id); 614 498 dev->groups = groups; 615 499 nvdimm->sec.ops = sec_ops; 500 + nvdimm->fw_ops = fw_ops; 616 501 nvdimm->sec.overwrite_tmo = 0; 617 502 INIT_DELAYED_WORK(&nvdimm->dwork, nvdimm_security_overwrite_query); 618 503 /*
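The reorder in `security_show()` above fixes a reporting bug: an overwrite can be in flight while the "unlocked" bit is still set, so the overwrite check must come first or the attribute never shows `overwrite`. A minimal model of the corrected priority (flag names abbreviated):

```c
#include <assert.h>
#include <string.h>

enum { SEC_DISABLED, SEC_UNLOCKED, SEC_LOCKED, SEC_OVERWRITE };

/* Overwrite is checked first, matching the fixed ordering above */
static const char *security_str(unsigned long flags)
{
	if (flags & (1UL << SEC_OVERWRITE))
		return "overwrite";
	if (flags & (1UL << SEC_DISABLED))
		return "disabled";
	if (flags & (1UL << SEC_UNLOCKED))
		return "unlocked";
	if (flags & (1UL << SEC_LOCKED))
		return "locked";
	return "unknown";
}
```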
+1 -1
drivers/nvdimm/namespace_devs.c
··· 1309 1309 return -ENXIO; 1310 1310 return sprintf(buf, "%#llx\n", (unsigned long long) res->start); 1311 1311 } 1312 - static DEVICE_ATTR(resource, 0400, resource_show, NULL); 1312 + static DEVICE_ATTR_ADMIN_RO(resource); 1313 1313 1314 1314 static const unsigned long blk_lbasize_supported[] = { 512, 520, 528, 1315 1315 4096, 4104, 4160, 4224, 0 };
+1
drivers/nvdimm/nd-core.h
··· 45 45 struct kernfs_node *overwrite_state; 46 46 } sec; 47 47 struct delayed_work dwork; 48 + const struct nvdimm_fw_ops *fw_ops; 48 49 }; 49 50 50 51 static inline unsigned long nvdimm_security_flags(
+1 -1
drivers/nvdimm/pfn_devs.c
··· 218 218 219 219 return rc; 220 220 } 221 - static DEVICE_ATTR(resource, 0400, resource_show, NULL); 221 + static DEVICE_ATTR_ADMIN_RO(resource); 222 222 223 223 static ssize_t size_show(struct device *dev, 224 224 struct device_attribute *attr, char *buf)
+1 -1
drivers/nvdimm/region_devs.c
··· 605 605 606 606 return sprintf(buf, "%#llx\n", nd_region->ndr_start); 607 607 } 608 - static DEVICE_ATTR(resource, 0400, resource_show, NULL); 608 + static DEVICE_ATTR_ADMIN_RO(resource); 609 609 610 610 static ssize_t persistence_domain_show(struct device *dev, 611 611 struct device_attribute *attr, char *buf)
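The three hunks above (namespace_devs.c, pfn_devs.c, region_devs.c) are the same mechanical conversion: `DEVICE_ATTR(resource, 0400, resource_show, NULL)` becomes `DEVICE_ATTR_ADMIN_RO(resource)`, so the root-only mode is spelled by name rather than by a magic literal. A simplified stand-in (not the real driver-core definition, which also wires up the show method):

```c
#include <assert.h>
#include <string.h>

struct attribute {
	const char *name;
	unsigned short mode;
};

/* Sketch of the new helper: admin (root) read-only, mode 0400 */
#define DEVICE_ATTR_ADMIN_RO(_name) \
	struct attribute dev_attr_##_name = { #_name, 0400 }

DEVICE_ATTR_ADMIN_RO(resource);
```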
+9 -4
drivers/nvdimm/security.c
··· 450 450 else 451 451 dev_dbg(&nvdimm->dev, "overwrite completed\n"); 452 452 453 - if (nvdimm->sec.overwrite_state) 454 - sysfs_notify_dirent(nvdimm->sec.overwrite_state); 453 + /* 454 + * Mark the overwrite work done and update dimm security flags, 455 + * then send a sysfs event notification to wake up userspace 456 + * poll threads to pick up the changed state. 457 + */ 455 458 nvdimm->sec.overwrite_tmo = 0; 456 459 clear_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags); 457 460 clear_bit(NDD_WORK_PENDING, &nvdimm->flags); 458 - put_device(&nvdimm->dev); 459 461 nvdimm->sec.flags = nvdimm_security_flags(nvdimm, NVDIMM_USER); 460 - nvdimm->sec.flags = nvdimm_security_flags(nvdimm, NVDIMM_MASTER); 462 + nvdimm->sec.ext_flags = nvdimm_security_flags(nvdimm, NVDIMM_MASTER); 463 + if (nvdimm->sec.overwrite_state) 464 + sysfs_notify_dirent(nvdimm->sec.overwrite_state); 465 + put_device(&nvdimm->dev); 461 466 } 462 467 463 468 void nvdimm_security_overwrite_query(struct work_struct *work)
+7 -8
fs/dax.c
··· 488 488 if (dax_is_conflict(entry)) 489 489 goto fallback; 490 490 if (!xa_is_value(entry)) { 491 - xas_set_err(xas, EIO); 491 + xas_set_err(xas, -EIO); 492 492 goto out_unlock; 493 493 } 494 494 ··· 680 680 return __dax_invalidate_entry(mapping, index, false); 681 681 } 682 682 683 - static int copy_user_dax(struct block_device *bdev, struct dax_device *dax_dev, 684 - sector_t sector, size_t size, struct page *to, 685 - unsigned long vaddr) 683 + static int copy_cow_page_dax(struct block_device *bdev, struct dax_device *dax_dev, 684 + sector_t sector, struct page *to, unsigned long vaddr) 686 685 { 687 686 void *vto, *kaddr; 688 687 pgoff_t pgoff; 689 688 long rc; 690 689 int id; 691 690 692 - rc = bdev_dax_pgoff(bdev, sector, size, &pgoff); 691 + rc = bdev_dax_pgoff(bdev, sector, PAGE_SIZE, &pgoff); 693 692 if (rc) 694 693 return rc; 695 694 696 695 id = dax_read_lock(); 697 - rc = dax_direct_access(dax_dev, pgoff, PHYS_PFN(size), &kaddr, NULL); 696 + rc = dax_direct_access(dax_dev, pgoff, PHYS_PFN(PAGE_SIZE), &kaddr, NULL); 698 697 if (rc < 0) { 699 698 dax_read_unlock(id); 700 699 return rc; ··· 1304 1305 clear_user_highpage(vmf->cow_page, vaddr); 1305 1306 break; 1306 1307 case IOMAP_MAPPED: 1307 - error = copy_user_dax(iomap.bdev, iomap.dax_dev, 1308 - sector, PAGE_SIZE, vmf->cow_page, vaddr); 1308 + error = copy_cow_page_dax(iomap.bdev, iomap.dax_dev, 1309 + sector, vmf->cow_page, vaddr); 1309 1310 break; 1310 1311 default: 1311 1312 WARN_ON_ONCE(1);
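The one-character fs/dax.c fix matters because the xarray error convention is a negative errno and callers test failure with `err < 0`; storing a positive `EIO` made the failure invisible. A minimal demonstration of that sign convention (`failure_visible()` is an illustrative helper, not a kernel API):

```c
#include <assert.h>
#include <errno.h>

/* Model of the check callers apply after xas_set_err(): the recorded
 * value only registers as a failure when it is negative. */
static int failure_visible(long err)
{
	return err < 0;
}
```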
+49 -3
include/linux/libnvdimm.h
··· 76 76 struct device_node; 77 77 struct nvdimm_bus_descriptor { 78 78 const struct attribute_group **attr_groups; 79 - unsigned long bus_dsm_mask; 80 79 unsigned long cmd_mask; 80 + unsigned long dimm_family_mask; 81 + unsigned long bus_family_mask; 81 82 struct module *module; 82 83 char *provider_name; 83 84 struct device_node *of_node; ··· 86 85 int (*flush_probe)(struct nvdimm_bus_descriptor *nd_desc); 87 86 int (*clear_to_send)(struct nvdimm_bus_descriptor *nd_desc, 88 87 struct nvdimm *nvdimm, unsigned int cmd, void *data); 88 + const struct nvdimm_bus_fw_ops *fw_ops; 89 89 }; 90 90 91 91 struct nd_cmd_desc { ··· 201 199 int (*query_overwrite)(struct nvdimm *nvdimm); 202 200 }; 203 201 202 + enum nvdimm_fwa_state { 203 + NVDIMM_FWA_INVALID, 204 + NVDIMM_FWA_IDLE, 205 + NVDIMM_FWA_ARMED, 206 + NVDIMM_FWA_BUSY, 207 + NVDIMM_FWA_ARM_OVERFLOW, 208 + }; 209 + 210 + enum nvdimm_fwa_trigger { 211 + NVDIMM_FWA_ARM, 212 + NVDIMM_FWA_DISARM, 213 + }; 214 + 215 + enum nvdimm_fwa_capability { 216 + NVDIMM_FWA_CAP_INVALID, 217 + NVDIMM_FWA_CAP_NONE, 218 + NVDIMM_FWA_CAP_QUIESCE, 219 + NVDIMM_FWA_CAP_LIVE, 220 + }; 221 + 222 + enum nvdimm_fwa_result { 223 + NVDIMM_FWA_RESULT_INVALID, 224 + NVDIMM_FWA_RESULT_NONE, 225 + NVDIMM_FWA_RESULT_SUCCESS, 226 + NVDIMM_FWA_RESULT_NOTSTAGED, 227 + NVDIMM_FWA_RESULT_NEEDRESET, 228 + NVDIMM_FWA_RESULT_FAIL, 229 + }; 230 + 231 + struct nvdimm_bus_fw_ops { 232 + enum nvdimm_fwa_state (*activate_state) 233 + (struct nvdimm_bus_descriptor *nd_desc); 234 + enum nvdimm_fwa_capability (*capability) 235 + (struct nvdimm_bus_descriptor *nd_desc); 236 + int (*activate)(struct nvdimm_bus_descriptor *nd_desc); 237 + }; 238 + 239 + struct nvdimm_fw_ops { 240 + enum nvdimm_fwa_state (*activate_state)(struct nvdimm *nvdimm); 241 + enum nvdimm_fwa_result (*activate_result)(struct nvdimm *nvdimm); 242 + int (*arm)(struct nvdimm *nvdimm, enum nvdimm_fwa_trigger arg); 243 + }; 244 + 204 245 void badrange_init(struct badrange *badrange); 205 246 int 
badrange_add(struct badrange *badrange, u64 addr, u64 length); 206 247 void badrange_forget(struct badrange *badrange, phys_addr_t start, ··· 269 224 void *provider_data, const struct attribute_group **groups, 270 225 unsigned long flags, unsigned long cmd_mask, int num_flush, 271 226 struct resource *flush_wpq, const char *dimm_id, 272 - const struct nvdimm_security_ops *sec_ops); 227 + const struct nvdimm_security_ops *sec_ops, 228 + const struct nvdimm_fw_ops *fw_ops); 273 229 static inline struct nvdimm *nvdimm_create(struct nvdimm_bus *nvdimm_bus, 274 230 void *provider_data, const struct attribute_group **groups, 275 231 unsigned long flags, unsigned long cmd_mask, int num_flush, 276 232 struct resource *flush_wpq) 277 233 { 278 234 return __nvdimm_create(nvdimm_bus, provider_data, groups, flags, 279 - cmd_mask, num_flush, flush_wpq, NULL, NULL); 235 + cmd_mask, num_flush, flush_wpq, NULL, NULL, NULL); 280 236 } 281 237 282 238 const struct nd_cmd_desc *nd_cmd_dimm_desc(int cmd);
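The new capability enum in libnvdimm.h appears deliberately ordered so that one relational test, `cap < NVDIMM_FWA_CAP_QUIESCE`, rejects both the invalid and the "no activation support" states at once, which is exactly how the sysfs visibility checks use it. A sketch of that ordering property:

```c
#include <assert.h>

/* Same ordering as the new enum in include/linux/libnvdimm.h */
enum nvdimm_fwa_capability {
	NVDIMM_FWA_CAP_INVALID,
	NVDIMM_FWA_CAP_NONE,
	NVDIMM_FWA_CAP_QUIESCE,
	NVDIMM_FWA_CAP_LIVE,
};

/* One comparison filters out every unsupported state */
static int fwa_supported(enum nvdimm_fwa_capability cap)
{
	return cap >= NVDIMM_FWA_CAP_QUIESCE;
}
```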
+6
include/linux/suspend.h
··· 453 453 asmlinkage int swsusp_save(void); 454 454 extern struct pbe *restore_pblist; 455 455 int pfn_is_nosave(unsigned long pfn); 456 + 457 + int hibernate_quiet_exec(int (*func)(void *data), void *data); 456 458 #else /* CONFIG_HIBERNATION */ 457 459 static inline void register_nosave_region(unsigned long b, unsigned long e) {} 458 460 static inline void register_nosave_region_late(unsigned long b, unsigned long e) {} ··· 466 464 static inline int hibernate(void) { return -ENOSYS; } 467 465 static inline bool system_entering_hibernation(void) { return false; } 468 466 static inline bool hibernation_available(void) { return false; } 467 + 468 + static inline int hibernate_quiet_exec(int (*func)(void *data), void *data) { 469 + return -ENOTSUPP; 470 + } 469 471 #endif /* CONFIG_HIBERNATION */ 470 472 471 473 #ifdef CONFIG_HIBERNATION_SNAPSHOT_DEV
+5
include/uapi/linux/ndctl.h
··· 245 245 #define NVDIMM_FAMILY_MSFT 3 246 246 #define NVDIMM_FAMILY_HYPERV 4 247 247 #define NVDIMM_FAMILY_PAPR 5 248 + #define NVDIMM_FAMILY_MAX NVDIMM_FAMILY_PAPR 249 + 250 + #define NVDIMM_BUS_FAMILY_NFIT 0 251 + #define NVDIMM_BUS_FAMILY_INTEL 1 252 + #define NVDIMM_BUS_FAMILY_MAX NVDIMM_BUS_FAMILY_INTEL 248 253 249 254 #define ND_IOCTL_CALL _IOWR(ND_IOCTL, ND_CMD_CALL,\ 250 255 struct nd_cmd_pkg)
+97
kernel/power/hibernate.c
··· 795 795 return error; 796 796 } 797 797 798 + /** 799 + * hibernate_quiet_exec - Execute a function with all devices frozen. 800 + * @func: Function to execute. 801 + * @data: Data pointer to pass to @func. 802 + * 803 + * Return the @func return value or an error code if it cannot be executed. 804 + */ 805 + int hibernate_quiet_exec(int (*func)(void *data), void *data) 806 + { 807 + int error, nr_calls = 0; 808 + 809 + lock_system_sleep(); 810 + 811 + if (!hibernate_acquire()) { 812 + error = -EBUSY; 813 + goto unlock; 814 + } 815 + 816 + pm_prepare_console(); 817 + 818 + error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1, &nr_calls); 819 + if (error) { 820 + nr_calls--; 821 + goto exit; 822 + } 823 + 824 + error = freeze_processes(); 825 + if (error) 826 + goto exit; 827 + 828 + lock_device_hotplug(); 829 + 830 + pm_suspend_clear_flags(); 831 + 832 + error = platform_begin(true); 833 + if (error) 834 + goto thaw; 835 + 836 + error = freeze_kernel_threads(); 837 + if (error) 838 + goto thaw; 839 + 840 + error = dpm_prepare(PMSG_FREEZE); 841 + if (error) 842 + goto dpm_complete; 843 + 844 + suspend_console(); 845 + 846 + error = dpm_suspend(PMSG_FREEZE); 847 + if (error) 848 + goto dpm_resume; 849 + 850 + error = dpm_suspend_end(PMSG_FREEZE); 851 + if (error) 852 + goto dpm_resume; 853 + 854 + error = platform_pre_snapshot(true); 855 + if (error) 856 + goto skip; 857 + 858 + error = func(data); 859 + 860 + skip: 861 + platform_finish(true); 862 + 863 + dpm_resume_start(PMSG_THAW); 864 + 865 + dpm_resume: 866 + dpm_resume(PMSG_THAW); 867 + 868 + resume_console(); 869 + 870 + dpm_complete: 871 + dpm_complete(PMSG_THAW); 872 + 873 + thaw_kernel_threads(); 874 + 875 + thaw: 876 + platform_end(true); 877 + 878 + unlock_device_hotplug(); 879 + 880 + thaw_processes(); 881 + 882 + exit: 883 + __pm_notifier_call_chain(PM_POST_HIBERNATION, nr_calls, NULL); 884 + 885 + pm_restore_console(); 886 + 887 + hibernate_release(); 888 + 889 + unlock: 890 + 
unlock_system_sleep(); 891 + 892 + return error; 893 + } 894 + EXPORT_SYMBOL_GPL(hibernate_quiet_exec); 798 895 799 896 /** 800 897 * software_resume - Resume from a saved hibernation image.
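`hibernate_quiet_exec()` is a long goto-unwind ladder: each setup step has a mirrored teardown label, and a failure jumps to the teardown of exactly what was already set up. A compact model of that shape (two steps instead of the function's many, with a counter to show setup and teardown always balance):

```c
#include <assert.h>

static int balance;	/* outstanding setup steps */

static int step(int fail) { if (fail) return -1; balance++; return 0; }
static void unstep(void) { balance--; }

static int noop(void *data) { (void)data; return 0; }

/* Sketch of the hibernate_quiet_exec() error-unwinding pattern */
static int quiet_exec(int fail_at, int (*func)(void *), void *data)
{
	int error;

	error = step(fail_at == 1);
	if (error)
		goto out;
	error = step(fail_at == 2);
	if (error)
		goto undo1;

	error = func(data);	/* runs with everything quiesced */

	unstep();		/* success path tears down too */
undo1:
	unstep();
out:
	return error;
}
```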
+299 -68
tools/testing/nvdimm/test/nfit.c
··· 173 173 u64 version; 174 174 u32 size_received; 175 175 u64 end_time; 176 + bool armed; 177 + bool missed_activate; 178 + unsigned long last_activate; 176 179 }; 177 180 178 181 struct nfit_test { ··· 348 345 __func__, t, nd_cmd, buf_len, idx); 349 346 350 347 if (fw->state == FW_STATE_UPDATED) { 351 - /* update already done, need cold boot */ 348 + /* update already done, need activation */ 352 349 nd_cmd->status = 0x20007; 353 350 return 0; 354 351 } ··· 433 430 } 434 431 dev_dbg(dev, "%s: transition out verify\n", __func__); 435 432 fw->state = FW_STATE_UPDATED; 433 + fw->missed_activate = false; 436 434 /* fall through */ 437 435 case FW_STATE_UPDATED: 438 436 nd_cmd->status = 0; ··· 1182 1178 return 0; 1183 1179 } 1184 1180 1181 + static unsigned long last_activate; 1182 + 1183 + static int nvdimm_bus_intel_fw_activate_businfo(struct nfit_test *t, 1184 + struct nd_intel_bus_fw_activate_businfo *nd_cmd, 1185 + unsigned int buf_len) 1186 + { 1187 + int i, armed = 0; 1188 + int state; 1189 + u64 tmo; 1190 + 1191 + for (i = 0; i < NUM_DCR; i++) { 1192 + struct nfit_test_fw *fw = &t->fw[i]; 1193 + 1194 + if (fw->armed) 1195 + armed++; 1196 + } 1197 + 1198 + /* 1199 + * Emulate 3 second activation max, and 1 second incremental 1200 + * quiesce time per dimm requiring multiple activates to get all 1201 + * DIMMs updated. 
1202 + */ 1203 + if (armed) 1204 + state = ND_INTEL_FWA_ARMED; 1205 + else if (!last_activate || time_after(jiffies, last_activate + 3 * HZ)) 1206 + state = ND_INTEL_FWA_IDLE; 1207 + else 1208 + state = ND_INTEL_FWA_BUSY; 1209 + 1210 + tmo = armed * USEC_PER_SEC; 1211 + *nd_cmd = (struct nd_intel_bus_fw_activate_businfo) { 1212 + .capability = ND_INTEL_BUS_FWA_CAP_FWQUIESCE 1213 + | ND_INTEL_BUS_FWA_CAP_OSQUIESCE 1214 + | ND_INTEL_BUS_FWA_CAP_RESET, 1215 + .state = state, 1216 + .activate_tmo = tmo, 1217 + .cpu_quiesce_tmo = tmo, 1218 + .io_quiesce_tmo = tmo, 1219 + .max_quiesce_tmo = 3 * USEC_PER_SEC, 1220 + }; 1221 + 1222 + return 0; 1223 + } 1224 + 1225 + static int nvdimm_bus_intel_fw_activate(struct nfit_test *t, 1226 + struct nd_intel_bus_fw_activate *nd_cmd, 1227 + unsigned int buf_len) 1228 + { 1229 + struct nd_intel_bus_fw_activate_businfo info; 1230 + u32 status = 0; 1231 + int i; 1232 + 1233 + nvdimm_bus_intel_fw_activate_businfo(t, &info, sizeof(info)); 1234 + if (info.state == ND_INTEL_FWA_BUSY) 1235 + status = ND_INTEL_BUS_FWA_STATUS_BUSY; 1236 + else if (info.activate_tmo > info.max_quiesce_tmo) 1237 + status = ND_INTEL_BUS_FWA_STATUS_TMO; 1238 + else if (info.state == ND_INTEL_FWA_IDLE) 1239 + status = ND_INTEL_BUS_FWA_STATUS_NOARM; 1240 + 1241 + dev_dbg(&t->pdev.dev, "status: %d\n", status); 1242 + nd_cmd->status = status; 1243 + if (status && status != ND_INTEL_BUS_FWA_STATUS_TMO) 1244 + return 0; 1245 + 1246 + last_activate = jiffies; 1247 + for (i = 0; i < NUM_DCR; i++) { 1248 + struct nfit_test_fw *fw = &t->fw[i]; 1249 + 1250 + if (!fw->armed) 1251 + continue; 1252 + if (fw->state != FW_STATE_UPDATED) 1253 + fw->missed_activate = true; 1254 + else 1255 + fw->state = FW_STATE_NEW; 1256 + fw->armed = false; 1257 + fw->last_activate = last_activate; 1258 + } 1259 + 1260 + return 0; 1261 + } 1262 + 1263 + static int nd_intel_test_cmd_fw_activate_dimminfo(struct nfit_test *t, 1264 + struct nd_intel_fw_activate_dimminfo *nd_cmd, 1265 + unsigned int 
+		buf_len, int dimm)
+{
+	struct nd_intel_bus_fw_activate_businfo info;
+	struct nfit_test_fw *fw = &t->fw[dimm];
+	u32 result, state;
+
+	nvdimm_bus_intel_fw_activate_businfo(t, &info, sizeof(info));
+
+	if (info.state == ND_INTEL_FWA_BUSY)
+		state = ND_INTEL_FWA_BUSY;
+	else if (info.state == ND_INTEL_FWA_IDLE)
+		state = ND_INTEL_FWA_IDLE;
+	else if (fw->armed)
+		state = ND_INTEL_FWA_ARMED;
+	else
+		state = ND_INTEL_FWA_IDLE;
+
+	result = ND_INTEL_DIMM_FWA_NONE;
+	if (last_activate && fw->last_activate == last_activate &&
+			state == ND_INTEL_FWA_IDLE) {
+		if (fw->missed_activate)
+			result = ND_INTEL_DIMM_FWA_NOTSTAGED;
+		else
+			result = ND_INTEL_DIMM_FWA_SUCCESS;
+	}
+
+	*nd_cmd = (struct nd_intel_fw_activate_dimminfo) {
+		.result = result,
+		.state = state,
+	};
+
+	return 0;
+}
+
+static int nd_intel_test_cmd_fw_activate_arm(struct nfit_test *t,
+		struct nd_intel_fw_activate_arm *nd_cmd,
+		unsigned int buf_len, int dimm)
+{
+	struct nfit_test_fw *fw = &t->fw[dimm];
+
+	fw->armed = nd_cmd->activate_arm == ND_INTEL_DIMM_FWA_ARM;
+	nd_cmd->status = 0;
+	return 0;
+}
 
 static int get_dimm(struct nfit_mem *nfit_mem, unsigned int func)
 {
···
 	return i;
 }
 
+static void nfit_ctl_dbg(struct acpi_nfit_desc *acpi_desc,
+		struct nvdimm *nvdimm, unsigned int cmd, void *buf,
+		unsigned int len)
+{
+	struct nfit_test *t = container_of(acpi_desc, typeof(*t), acpi_desc);
+	unsigned int func = cmd;
+	unsigned int family = 0;
+
+	if (cmd == ND_CMD_CALL) {
+		struct nd_cmd_pkg *pkg = buf;
+
+		len = pkg->nd_size_in;
+		family = pkg->nd_family;
+		buf = pkg->nd_payload;
+		func = pkg->nd_command;
+	}
+	dev_dbg(&t->pdev.dev, "%s family: %d cmd: %d: func: %d input length: %d\n",
+			nvdimm ? nvdimm_name(nvdimm) : "bus", family, cmd, func,
+			len);
+	print_hex_dump_debug("nvdimm in  ", DUMP_PREFIX_OFFSET, 16, 4,
+			buf, min(len, 256u), true);
+}
+
 static int nfit_test_ctl(struct nvdimm_bus_descriptor *nd_desc,
 		struct nvdimm *nvdimm, unsigned int cmd, void *buf,
 		unsigned int buf_len, int *cmd_rc)
···
 	if (!cmd_rc)
 		cmd_rc = &__cmd_rc;
 	*cmd_rc = 0;
+
+	nfit_ctl_dbg(acpi_desc, nvdimm, cmd, buf, buf_len);
 
 	if (nvdimm) {
 		struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
···
 		i = get_dimm(nfit_mem, func);
 		if (i < 0)
 			return i;
+		if (i >= NUM_DCR) {
+			dev_WARN_ONCE(&t->pdev.dev, 1,
+				"ND_CMD_CALL only valid for nfit_test0\n");
+			return -EINVAL;
+		}
 
 		switch (func) {
 		case NVDIMM_INTEL_GET_SECURITY_STATE:
···
 			break;
 		case NVDIMM_INTEL_OVERWRITE:
 			rc = nd_intel_test_cmd_overwrite(t,
-					buf, buf_len, i - t->dcr_idx);
+					buf, buf_len, i);
 			break;
 		case NVDIMM_INTEL_QUERY_OVERWRITE:
 			rc = nd_intel_test_cmd_query_overwrite(t,
-					buf, buf_len, i - t->dcr_idx);
+					buf, buf_len, i);
 			break;
 		case NVDIMM_INTEL_SET_MASTER_PASSPHRASE:
 			rc = nd_intel_test_cmd_master_set_pass(t,
···
 			rc = nd_intel_test_cmd_master_secure_erase(t,
 					buf, buf_len, i);
 			break;
+		case NVDIMM_INTEL_FW_ACTIVATE_DIMMINFO:
+			rc = nd_intel_test_cmd_fw_activate_dimminfo(
+					t, buf, buf_len, i);
+			break;
+		case NVDIMM_INTEL_FW_ACTIVATE_ARM:
+			rc = nd_intel_test_cmd_fw_activate_arm(
+					t, buf, buf_len, i);
+			break;
 		case ND_INTEL_ENABLE_LSS_STATUS:
 			rc = nd_intel_test_cmd_set_lss_status(t,
 					buf, buf_len);
 			break;
 		case ND_INTEL_FW_GET_INFO:
 			rc = nd_intel_test_get_fw_info(t, buf,
-					buf_len, i - t->dcr_idx);
+					buf_len, i);
 			break;
 		case ND_INTEL_FW_START_UPDATE:
 			rc = nd_intel_test_start_update(t, buf,
-					buf_len, i - t->dcr_idx);
+					buf_len, i);
 			break;
 		case ND_INTEL_FW_SEND_DATA:
 			rc = nd_intel_test_send_data(t, buf,
-					buf_len, i - t->dcr_idx);
+					buf_len, i);
 			break;
 		case ND_INTEL_FW_FINISH_UPDATE:
 			rc = nd_intel_test_finish_fw(t, buf,
-					buf_len, i - t->dcr_idx);
+					buf_len, i);
 			break;
 		case ND_INTEL_FW_FINISH_QUERY:
 			rc = nd_intel_test_finish_query(t, buf,
-					buf_len, i - t->dcr_idx);
+					buf_len, i);
 			break;
 		case ND_INTEL_SMART:
 			rc = nfit_test_cmd_smart(buf, buf_len,
-					&t->smart[i - t->dcr_idx]);
+					&t->smart[i]);
 			break;
 		case ND_INTEL_SMART_THRESHOLD:
 			rc = nfit_test_cmd_smart_threshold(buf,
 					buf_len,
-					&t->smart_threshold[i -
-						t->dcr_idx]);
+					&t->smart_threshold[i]);
 			break;
 		case ND_INTEL_SMART_SET_THRESHOLD:
 			rc = nfit_test_cmd_smart_set_threshold(buf,
 					buf_len,
-					&t->smart_threshold[i -
-						t->dcr_idx],
-					&t->smart[i - t->dcr_idx],
+					&t->smart_threshold[i],
+					&t->smart[i],
 					&t->pdev.dev, t->dimm_dev[i]);
 			break;
 		case ND_INTEL_SMART_INJECT:
 			rc = nfit_test_cmd_smart_inject(buf,
 					buf_len,
-					&t->smart_threshold[i -
-						t->dcr_idx],
-					&t->smart[i - t->dcr_idx],
+					&t->smart_threshold[i],
+					&t->smart[i],
 					&t->pdev.dev, t->dimm_dev[i]);
 			break;
 		default:
···
 	if (!nd_desc)
 		return -ENOTTY;
 
-	if (cmd == ND_CMD_CALL) {
+	if (cmd == ND_CMD_CALL && call_pkg->nd_family
+			== NVDIMM_BUS_FAMILY_NFIT) {
 		func = call_pkg->nd_command;
-
 		buf_len = call_pkg->nd_size_in + call_pkg->nd_size_out;
 		buf = (void *) call_pkg->nd_payload;
···
 		default:
 			return -ENOTTY;
 		}
-	}
+	} else if (cmd == ND_CMD_CALL && call_pkg->nd_family
+			== NVDIMM_BUS_FAMILY_INTEL) {
+		func = call_pkg->nd_command;
+		buf_len = call_pkg->nd_size_in + call_pkg->nd_size_out;
+		buf = (void *) call_pkg->nd_payload;
+
+		switch (func) {
+		case NVDIMM_BUS_INTEL_FW_ACTIVATE_BUSINFO:
+			rc = nvdimm_bus_intel_fw_activate_businfo(t,
+					buf, buf_len);
+			return rc;
+		case NVDIMM_BUS_INTEL_FW_ACTIVATE:
+			rc = nvdimm_bus_intel_fw_activate(t, buf,
+					buf_len);
+			return rc;
+		default:
+			return -ENOTTY;
+		}
+	} else if (cmd == ND_CMD_CALL)
+		return -ENOTTY;
 
 	if (!nd_desc || !test_bit(cmd, &nd_desc->cmd_mask))
 		return -ENOTTY;
···
 	struct acpi_nfit_flush_address *flush;
 	struct acpi_nfit_capabilities *pcap;
 	unsigned int offset = 0, i;
+	unsigned long *acpi_mask;
 
 	/*
 	 * spa0 (interleave first half of dimm0 and dimm1, note storage
···
 	set_bit(ND_CMD_ARS_STATUS, &acpi_desc->bus_cmd_force_en);
 	set_bit(ND_CMD_CLEAR_ERROR, &acpi_desc->bus_cmd_force_en);
 	set_bit(ND_CMD_CALL, &acpi_desc->bus_cmd_force_en);
-	set_bit(NFIT_CMD_TRANSLATE_SPA, &acpi_desc->bus_nfit_cmd_force_en);
-	set_bit(NFIT_CMD_ARS_INJECT_SET, &acpi_desc->bus_nfit_cmd_force_en);
-	set_bit(NFIT_CMD_ARS_INJECT_CLEAR, &acpi_desc->bus_nfit_cmd_force_en);
-	set_bit(NFIT_CMD_ARS_INJECT_GET, &acpi_desc->bus_nfit_cmd_force_en);
+	set_bit(NFIT_CMD_TRANSLATE_SPA, &acpi_desc->bus_dsm_mask);
+	set_bit(NFIT_CMD_ARS_INJECT_SET, &acpi_desc->bus_dsm_mask);
+	set_bit(NFIT_CMD_ARS_INJECT_CLEAR, &acpi_desc->bus_dsm_mask);
+	set_bit(NFIT_CMD_ARS_INJECT_GET, &acpi_desc->bus_dsm_mask);
 	set_bit(ND_INTEL_FW_GET_INFO, &acpi_desc->dimm_cmd_force_en);
 	set_bit(ND_INTEL_FW_START_UPDATE, &acpi_desc->dimm_cmd_force_en);
 	set_bit(ND_INTEL_FW_SEND_DATA, &acpi_desc->dimm_cmd_force_en);
···
 			&acpi_desc->dimm_cmd_force_en);
 	set_bit(NVDIMM_INTEL_MASTER_SECURE_ERASE,
 			&acpi_desc->dimm_cmd_force_en);
+	set_bit(NVDIMM_INTEL_FW_ACTIVATE_DIMMINFO, &acpi_desc->dimm_cmd_force_en);
+	set_bit(NVDIMM_INTEL_FW_ACTIVATE_ARM, &acpi_desc->dimm_cmd_force_en);
+
+	acpi_mask = &acpi_desc->family_dsm_mask[NVDIMM_BUS_FAMILY_INTEL];
+	set_bit(NVDIMM_BUS_INTEL_FW_ACTIVATE_BUSINFO, acpi_mask);
+	set_bit(NVDIMM_BUS_INTEL_FW_ACTIVATE, acpi_mask);
 }
 
 static void nfit_test1_setup(struct nfit_test *t)
···
 	struct acpi_nfit_desc *acpi_desc;
 	const u64 test_val = 0x0123456789abcdefULL;
 	unsigned long mask, cmd_size, offset;
-	union {
-		struct nd_cmd_get_config_size cfg_size;
-		struct nd_cmd_clear_error clear_err;
-		struct nd_cmd_ars_status ars_stat;
-		struct nd_cmd_ars_cap ars_cap;
-		char buf[sizeof(struct nd_cmd_ars_status)
-			+ sizeof(struct nd_ars_record)];
-	} cmds;
+	struct nfit_ctl_test_cmd {
+		struct nd_cmd_pkg pkg;
+		union {
+			struct nd_cmd_get_config_size cfg_size;
+			struct nd_cmd_clear_error clear_err;
+			struct nd_cmd_ars_status ars_stat;
+			struct nd_cmd_ars_cap ars_cap;
+			struct nd_intel_bus_fw_activate_businfo fwa_info;
+			char buf[sizeof(struct nd_cmd_ars_status)
+				+ sizeof(struct nd_ars_record)];
+		};
+	} cmd;
 
 	adev = devm_kzalloc(dev, sizeof(*adev), GFP_KERNEL);
 	if (!adev)
···
 		.module = THIS_MODULE,
 		.provider_name = "ACPI.NFIT",
 		.ndctl = acpi_nfit_ctl,
-		.bus_dsm_mask = 1UL << NFIT_CMD_TRANSLATE_SPA
-			| 1UL << NFIT_CMD_ARS_INJECT_SET
-			| 1UL << NFIT_CMD_ARS_INJECT_CLEAR
-			| 1UL << NFIT_CMD_ARS_INJECT_GET,
+		.bus_family_mask = 1UL << NVDIMM_BUS_FAMILY_NFIT
+			| 1UL << NVDIMM_BUS_FAMILY_INTEL,
 	},
+	.bus_dsm_mask = 1UL << NFIT_CMD_TRANSLATE_SPA
+		| 1UL << NFIT_CMD_ARS_INJECT_SET
+		| 1UL << NFIT_CMD_ARS_INJECT_CLEAR
+		| 1UL << NFIT_CMD_ARS_INJECT_GET,
+	.family_dsm_mask[NVDIMM_BUS_FAMILY_INTEL] =
+		NVDIMM_BUS_INTEL_FW_ACTIVATE_CMDMASK,
 	.dev = &adev->dev,
 };
···
 
 	/* basic checkout of a typical 'get config size' command */
-	cmd_size = sizeof(cmds.cfg_size);
-	cmds.cfg_size = (struct nd_cmd_get_config_size) {
+	cmd_size = sizeof(cmd.cfg_size);
+	cmd.cfg_size = (struct nd_cmd_get_config_size) {
 		.status = 0,
 		.config_size = SZ_128K,
 		.max_xfer = SZ_4K,
 	};
-	rc = setup_result(cmds.buf, cmd_size);
+	rc = setup_result(cmd.buf, cmd_size);
 	if (rc)
 		return rc;
 	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, nvdimm, ND_CMD_GET_CONFIG_SIZE,
-			cmds.buf, cmd_size, &cmd_rc);
+			cmd.buf, cmd_size, &cmd_rc);
 
-	if (rc < 0 || cmd_rc || cmds.cfg_size.status != 0
-			|| cmds.cfg_size.config_size != SZ_128K
-			|| cmds.cfg_size.max_xfer != SZ_4K) {
+	if (rc < 0 || cmd_rc || cmd.cfg_size.status != 0
+			|| cmd.cfg_size.config_size != SZ_128K
+			|| cmd.cfg_size.max_xfer != SZ_4K) {
 		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
 			__func__, __LINE__, rc, cmd_rc);
 		return -EIO;
···
 
 	/* test ars_status with zero output */
 	cmd_size = offsetof(struct nd_cmd_ars_status, address);
-	cmds.ars_stat = (struct nd_cmd_ars_status) {
+	cmd.ars_stat = (struct nd_cmd_ars_status) {
 		.out_length = 0,
 	};
-	rc = setup_result(cmds.buf, cmd_size);
+	rc = setup_result(cmd.buf, cmd_size);
 	if (rc)
 		return rc;
 	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_ARS_STATUS,
-			cmds.buf, cmd_size, &cmd_rc);
+			cmd.buf, cmd_size, &cmd_rc);
 
 	if (rc < 0 || cmd_rc) {
 		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
···
 
 	/* test ars_cap with benign extended status */
-	cmd_size = sizeof(cmds.ars_cap);
-	cmds.ars_cap = (struct nd_cmd_ars_cap) {
+	cmd_size = sizeof(cmd.ars_cap);
+	cmd.ars_cap = (struct nd_cmd_ars_cap) {
 		.status = ND_ARS_PERSISTENT << 16,
 	};
 	offset = offsetof(struct nd_cmd_ars_cap, status);
-	rc = setup_result(cmds.buf + offset, cmd_size - offset);
+	rc = setup_result(cmd.buf + offset, cmd_size - offset);
 	if (rc)
 		return rc;
 	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_ARS_CAP,
-			cmds.buf, cmd_size, &cmd_rc);
+			cmd.buf, cmd_size, &cmd_rc);
 
 	if (rc < 0 || cmd_rc) {
 		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
···
 
 	/* test ars_status with 'status' trimmed from 'out_length' */
-	cmd_size = sizeof(cmds.ars_stat) + sizeof(struct nd_ars_record);
-	cmds.ars_stat = (struct nd_cmd_ars_status) {
+	cmd_size = sizeof(cmd.ars_stat) + sizeof(struct nd_ars_record);
+	cmd.ars_stat = (struct nd_cmd_ars_status) {
 		.out_length = cmd_size - 4,
 	};
-	record = &cmds.ars_stat.records[0];
+	record = &cmd.ars_stat.records[0];
 	*record = (struct nd_ars_record) {
 		.length = test_val,
 	};
-	rc = setup_result(cmds.buf, cmd_size);
+	rc = setup_result(cmd.buf, cmd_size);
 	if (rc)
 		return rc;
 	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_ARS_STATUS,
-			cmds.buf, cmd_size, &cmd_rc);
+			cmd.buf, cmd_size, &cmd_rc);
 
 	if (rc < 0 || cmd_rc || record->length != test_val) {
 		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
···
 
 	/* test ars_status with 'Output (Size)' including 'status' */
-	cmd_size = sizeof(cmds.ars_stat) + sizeof(struct nd_ars_record);
-	cmds.ars_stat = (struct nd_cmd_ars_status) {
+	cmd_size = sizeof(cmd.ars_stat) + sizeof(struct nd_ars_record);
+	cmd.ars_stat = (struct nd_cmd_ars_status) {
 		.out_length = cmd_size,
 	};
-	record = &cmds.ars_stat.records[0];
+	record = &cmd.ars_stat.records[0];
 	*record = (struct nd_ars_record) {
 		.length = test_val,
 	};
-	rc = setup_result(cmds.buf, cmd_size);
+	rc = setup_result(cmd.buf, cmd_size);
 	if (rc)
 		return rc;
 	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_ARS_STATUS,
-			cmds.buf, cmd_size, &cmd_rc);
+			cmd.buf, cmd_size, &cmd_rc);
 
 	if (rc < 0 || cmd_rc || record->length != test_val) {
 		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
···
 
 	/* test extended status for get_config_size results in failure */
-	cmd_size = sizeof(cmds.cfg_size);
-	cmds.cfg_size = (struct nd_cmd_get_config_size) {
+	cmd_size = sizeof(cmd.cfg_size);
+	cmd.cfg_size = (struct nd_cmd_get_config_size) {
 		.status = 1 << 16,
 	};
-	rc = setup_result(cmds.buf, cmd_size);
+	rc = setup_result(cmd.buf, cmd_size);
 	if (rc)
 		return rc;
 	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, nvdimm, ND_CMD_GET_CONFIG_SIZE,
-			cmds.buf, cmd_size, &cmd_rc);
+			cmd.buf, cmd_size, &cmd_rc);
 
 	if (rc < 0 || cmd_rc >= 0) {
 		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
···
 	}
 
 	/* test clear error */
-	cmd_size = sizeof(cmds.clear_err);
-	cmds.clear_err = (struct nd_cmd_clear_error) {
+	cmd_size = sizeof(cmd.clear_err);
+	cmd.clear_err = (struct nd_cmd_clear_error) {
 		.length = 512,
 		.cleared = 512,
 	};
-	rc = setup_result(cmds.buf, cmd_size);
+	rc = setup_result(cmd.buf, cmd_size);
 	if (rc)
 		return rc;
 	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_CLEAR_ERROR,
-			cmds.buf, cmd_size, &cmd_rc);
+			cmd.buf, cmd_size, &cmd_rc);
+	if (rc < 0 || cmd_rc) {
+		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
+			__func__, __LINE__, rc, cmd_rc);
+		return -EIO;
+	}
+
+	/* test firmware activate bus info */
+	cmd_size = sizeof(cmd.fwa_info);
+	cmd = (struct nfit_ctl_test_cmd) {
+		.pkg = {
+			.nd_command = NVDIMM_BUS_INTEL_FW_ACTIVATE_BUSINFO,
+			.nd_family = NVDIMM_BUS_FAMILY_INTEL,
+			.nd_size_out = cmd_size,
+			.nd_fw_size = cmd_size,
+		},
+		.fwa_info = {
+			.state = ND_INTEL_FWA_IDLE,
+			.capability = ND_INTEL_BUS_FWA_CAP_FWQUIESCE
+				| ND_INTEL_BUS_FWA_CAP_OSQUIESCE,
+			.activate_tmo = 1,
+			.cpu_quiesce_tmo = 1,
+			.io_quiesce_tmo = 1,
+			.max_quiesce_tmo = 1,
+		},
+	};
+	rc = setup_result(cmd.buf, cmd_size);
+	if (rc)
+		return rc;
+	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_CALL,
+			&cmd, sizeof(cmd.pkg) + cmd_size, &cmd_rc);
 	if (rc < 0 || cmd_rc) {
 		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
 			__func__, __LINE__, rc, cmd_rc);