
Merge tag 'libnvdimm-for-4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm

Pull libnvdimm updates from Dan Williams:
"The vast bulk of this update is the new support for the security
capabilities of some nvdimms.

The userspace tooling for this capability is still a work in progress,
but the changes survive the existing libnvdimm unit tests. The changes
also pass manual checkout on hardware and the new nfit_test emulation
of the security capability.

The touches of the security/keys/ files have received the necessary
acks from Mimi and David. Those changes were necessary to allow for a
new generic encrypted-key type, and allow the nvdimm sub-system to
lookup key material referenced by the libnvdimm-sysfs interface.

Summary:

- Add support for the security features of nvdimm devices that
implement a security model similar to ATA hard drive security. The
security model supports locking access to the media at
device-power-loss, to be unlocked with a passphrase, and
secure-erase (crypto-scramble).

Unlike the ATA security case where the kernel expects device
security to be managed in a pre-OS environment, the libnvdimm
security implementation allows key provisioning and key-operations
at OS runtime. Keys are managed with the kernel's encrypted-keys
facility to provide data-at-rest security for the libnvdimm key
material. The usage model mirrors fscrypt key management, but is
driven via libnvdimm sysfs.

- Miscellaneous updates for api usage and comment fixes"

* tag 'libnvdimm-for-4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm: (21 commits)
libnvdimm/security: Quiet security operations
libnvdimm/security: Add documentation for nvdimm security support
tools/testing/nvdimm: add Intel DSM 1.8 support for nfit_test
tools/testing/nvdimm: Add overwrite support for nfit_test
tools/testing/nvdimm: Add test support for Intel nvdimm security DSMs
acpi/nfit, libnvdimm/security: add Intel DSM 1.8 master passphrase support
acpi/nfit, libnvdimm/security: Add security DSM overwrite support
acpi/nfit, libnvdimm: Add support for issue secure erase DSM to Intel nvdimm
acpi/nfit, libnvdimm: Add enable/update passphrase support for Intel nvdimms
acpi/nfit, libnvdimm: Add disable passphrase support to Intel nvdimm.
acpi/nfit, libnvdimm: Add unlock of nvdimm support for Intel DIMMs
acpi/nfit, libnvdimm: Add freeze security support to Intel nvdimm
acpi/nfit, libnvdimm: Introduce nvdimm_security_ops
keys-encrypted: add nvdimm key format type to encrypted keys
keys: Export lookup_user_key to external users
acpi/nfit, libnvdimm: Store dimm id as a member to struct nvdimm
libnvdimm, namespace: Replace kmemdup() with kstrndup()
libnvdimm, label: Switch to bitmap_zalloc()
ACPI/nfit: Adjust annotation for why return 0 if fail to find NFIT at start
libnvdimm, bus: Check id immediately following ida_simple_get
...

+1971 -54
+141
Documentation/nvdimm/security.txt
···
NVDIMM SECURITY
===============

1. Introduction
---------------

With the introduction of the Intel Device Specific Methods (DSM) v1.8
specification [1], security DSMs were added. The spec defines the following
security DSMs: "get security state", "set passphrase", "disable passphrase",
"unlock unit", "freeze lock", "secure erase", and "overwrite". A security_ops
data structure has been added to struct dimm in order to support the security
operations, and generic APIs are exposed to allow vendor-neutral operations.

2. Sysfs Interface
------------------
The "security" sysfs attribute is provided in the nvdimm sysfs directory. For
example:
/sys/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0012:00/ndbus0/nmem0/security

Reading ("show") that attribute displays the security state for the DIMM. The
following states are possible: disabled, unlocked, locked, frozen, and
overwrite. If security is not supported, the sysfs attribute is not visible.

Writing ("store") to the attribute accepts several commands that drive the
security functionality:
update <old_keyid> <new_keyid> - enable or update passphrase.
disable <keyid> - disable enabled security and remove key.
freeze - freeze changing of security states.
erase <keyid> - delete existing user encryption key.
overwrite <keyid> - wipe the entire nvdimm.
master_update <keyid> <new_keyid> - enable or update master passphrase.
master_erase <keyid> - delete existing user encryption key.

3. Key Management
-----------------

The key is associated to the payload by the DIMM id. For example:
# cat /sys/devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0012:00/ndbus0/nmem0/nfit/id
8089-a2-1740-00000133
The DIMM id would be provided along with the key payload (passphrase) to
the kernel.

The security keys are managed on the basis of a single key per DIMM. The
key "passphrase" is expected to be 32 bytes long. This is similar to the ATA
security specification [2]. A key is initially acquired via the request_key()
kernel API call during nvdimm unlock. It is up to the user to make sure that
all the keys are in the kernel user keyring for unlock.

An nvdimm encrypted-key of format enc32 has the description format of:
nvdimm:<bus-provider-specific-unique-id>

See file ``Documentation/security/keys/trusted-encrypted.rst`` for creating
encrypted-keys of enc32 format. TPM usage with a master trusted key is
preferred for sealing the encrypted-keys.

4. Unlocking
------------
When the DIMMs are being enumerated by the kernel, the kernel will attempt to
retrieve the key from the kernel user keyring. This is the only time
a locked DIMM can be unlocked. Once unlocked, the DIMM will remain unlocked
until reboot. Typically an entity (e.g. a shell script) will inject all the
relevant encrypted-keys into the kernel user keyring during the initramfs
phase. This gives the unlock function access to all the related keys that
contain the passphrases for the respective nvdimms. It is also recommended
that the keys are injected before libnvdimm is loaded by modprobe.

5. Update
---------
When doing an update, it is expected that the existing key is removed from
the kernel user keyring and reinjected as a different (old) key. The key
description for the old key is irrelevant, since only the keyid matters for
the update operation. It is also expected that the new key is injected with
the description format described earlier in this document. The update
command written to the sysfs attribute has the format:
update <old keyid> <new keyid>

If there is no old keyid because security is being enabled for the first
time, then a 0 should be passed in.

6. Freeze
---------
The freeze operation does not require any keys. The security config can be
frozen by a user with root privilege.

7. Disable
----------
The security disable command format is:
disable <keyid>

A key with the current passphrase payload that is tied to the nvdimm should
be in the kernel user keyring.

8. Secure Erase
---------------
The command format for doing a secure erase is:
erase <keyid>

A key with the current passphrase payload that is tied to the nvdimm should
be in the kernel user keyring.

9. Overwrite
------------
The command format for doing an overwrite is:
overwrite <keyid>

Overwrite can be done without a key if security is not enabled. A key serial
of 0 can be passed in to indicate no key.

The sysfs attribute "security" can be polled to wait on overwrite completion.
Overwrite can last tens of minutes or more depending on nvdimm size.

An encrypted-key with the current user passphrase that is tied to the nvdimm
should be injected and its keyid should be passed in via sysfs.

10. Master Update
-----------------
The command format for doing a master update is:
master_update <old keyid> <new keyid>

The operating mechanism for master update is identical to update except that
the master passphrase key is passed to the kernel. The master passphrase key
is just another encrypted-key.

This command is only available when security is disabled.

11. Master Erase
----------------
The command format for doing a master erase is:
master_erase <current keyid>

This command has the same operating mechanism as erase except that the
master passphrase key is passed to the kernel. The master passphrase key is
just another encrypted-key.

This command is only available when master security is enabled, as indicated
by the extended security status.

[1]: http://pmem.io/documents/NVDIMM_DSM_Interface-V1.8.pdf
[2]: http://www.t13.org/documents/UploadedDocuments/docs2006/e05179r4-ACS-SecurityClarifications.pdf
+5 -1
Documentation/security/keys/trusted-encrypted.rst
···
  Where::

- 	format:= 'default | ecryptfs'
+ 	format:= 'default | ecryptfs | enc32'
  	key-type:= 'trusted' | 'user'

···
  in order to use encrypted keys to mount an eCryptfs filesystem. More details
  about the usage can be found in the file
  ``Documentation/security/keys/ecryptfs.rst``.
+
+ Another new format 'enc32' has been defined in order to support encrypted keys
+ with payload size of 32 bytes. This will initially be used for nvdimm security
+ but may expand to other usages that require 32 bytes payload.
+11
drivers/acpi/nfit/Kconfig
···
  	  To compile this driver as a module, choose M here:
  	  the module will be called nfit.
+
+ config NFIT_SECURITY_DEBUG
+ 	bool "Enable debug for NVDIMM security commands"
+ 	depends on ACPI_NFIT
+ 	help
+ 	  Some NVDIMM devices and controllers support encryption and
+ 	  other security features. The payloads for the commands that
+ 	  enable those features may contain sensitive clear-text
+ 	  security material. Disable debug of those command payloads
+ 	  by default. If you are a kernel developer actively working
+ 	  on NVDIMM security enabling say Y, otherwise say N.
+1
drivers/acpi/nfit/Makefile
···
  obj-$(CONFIG_ACPI_NFIT) := nfit.o
  nfit-y := core.o
+ nfit-y += intel.o
  nfit-$(CONFIG_X86_MCE) += mce.o
+85 -18
drivers/acpi/nfit/core.c
···
  #include <linux/nd.h>
  #include <asm/cacheflush.h>
  #include <acpi/nfit.h>
+ #include "intel.h"
  #include "nfit.h"
  #include "intel.h"
···
  		[NVDIMM_INTEL_QUERY_FWUPDATE] = 2,
  		[NVDIMM_INTEL_SET_THRESHOLD] = 2,
  		[NVDIMM_INTEL_INJECT_ERROR] = 2,
+ 		[NVDIMM_INTEL_GET_SECURITY_STATE] = 2,
+ 		[NVDIMM_INTEL_SET_PASSPHRASE] = 2,
+ 		[NVDIMM_INTEL_DISABLE_PASSPHRASE] = 2,
+ 		[NVDIMM_INTEL_UNLOCK_UNIT] = 2,
+ 		[NVDIMM_INTEL_FREEZE_LOCK] = 2,
+ 		[NVDIMM_INTEL_SECURE_ERASE] = 2,
+ 		[NVDIMM_INTEL_OVERWRITE] = 2,
+ 		[NVDIMM_INTEL_QUERY_OVERWRITE] = 2,
+ 		[NVDIMM_INTEL_SET_MASTER_PASSPHRASE] = 2,
+ 		[NVDIMM_INTEL_MASTER_SECURE_ERASE] = 2,
  	},
  };
  u8 id;
···
  	if (id == 0)
  		return 1; /* default */
  	return id;
  }
+
+ static bool payload_dumpable(struct nvdimm *nvdimm, unsigned int func)
+ {
+ 	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
+
+ 	if (nfit_mem && nfit_mem->family == NVDIMM_FAMILY_INTEL
+ 			&& func >= NVDIMM_INTEL_GET_SECURITY_STATE
+ 			&& func <= NVDIMM_INTEL_MASTER_SECURE_ERASE)
+ 		return IS_ENABLED(CONFIG_NFIT_SECURITY_DEBUG);
+ 	return true;
+ }

  int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
···
  	dev_dbg(dev, "%s cmd: %d: func: %d input length: %d\n",
  			dimm_name, cmd, func, in_buf.buffer.length);
- 	print_hex_dump_debug("nvdimm in  ", DUMP_PREFIX_OFFSET, 4, 4,
- 			in_buf.buffer.pointer,
- 			min_t(u32, 256, in_buf.buffer.length), true);
+ 	if (payload_dumpable(nvdimm, func))
+ 		print_hex_dump_debug("nvdimm in  ", DUMP_PREFIX_OFFSET, 4, 4,
+ 				in_buf.buffer.pointer,
+ 				min_t(u32, 256, in_buf.buffer.length), true);

  	/* call the BIOS, prefer the named methods over _DSM if available */
  	if (nvdimm && cmd == ND_CMD_GET_CONFIG_SIZE
···
  static ssize_t id_show(struct device *dev,
  		struct device_attribute *attr, char *buf)
  {
- 	struct acpi_nfit_control_region *dcr = to_nfit_dcr(dev);
+ 	struct nvdimm *nvdimm = to_nvdimm(dev);
+ 	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);

- 	if (dcr->valid_fields & ACPI_NFIT_CONTROL_MFG_INFO_VALID)
- 		return sprintf(buf, "%04x-%02x-%04x-%08x\n",
- 				be16_to_cpu(dcr->vendor_id),
- 				dcr->manufacturing_location,
- 				be16_to_cpu(dcr->manufacturing_date),
- 				be32_to_cpu(dcr->serial_number));
- 	else
- 		return sprintf(buf, "%04x-%08x\n",
- 				be16_to_cpu(dcr->vendor_id),
- 				be32_to_cpu(dcr->serial_number));
+ 	return sprintf(buf, "%s\n", nfit_mem->id);
  }
  static DEVICE_ATTR_RO(id);
···
  	const guid_t *guid;
  	int i;
  	int family = -1;
+ 	struct acpi_nfit_control_region *dcr = nfit_mem->dcr;

  	/* nfit test assumes 1:1 relationship between commands and dsms */
  	nfit_mem->dsm_mask = acpi_desc->dimm_cmd_force_en;
  	nfit_mem->family = NVDIMM_FAMILY_INTEL;
+
+ 	if (dcr->valid_fields & ACPI_NFIT_CONTROL_MFG_INFO_VALID)
+ 		sprintf(nfit_mem->id, "%04x-%02x-%04x-%08x",
+ 				be16_to_cpu(dcr->vendor_id),
+ 				dcr->manufacturing_location,
+ 				be16_to_cpu(dcr->manufacturing_date),
+ 				be32_to_cpu(dcr->serial_number));
+ 	else
+ 		sprintf(nfit_mem->id, "%04x-%08x",
+ 				be16_to_cpu(dcr->vendor_id),
+ 				be32_to_cpu(dcr->serial_number));
+
  	adev = to_acpi_dev(acpi_desc);
  	if (!adev) {
  		/* unit test case */
···
  	mutex_unlock(&acpi_desc->init_mutex);
  }

+ static const struct nvdimm_security_ops *acpi_nfit_get_security_ops(int family)
+ {
+ 	switch (family) {
+ 	case NVDIMM_FAMILY_INTEL:
+ 		return intel_security_ops;
+ 	default:
+ 		return NULL;
+ 	}
+ }
+
  static int acpi_nfit_register_dimms(struct acpi_nfit_desc *acpi_desc)
  {
  	struct nfit_mem *nfit_mem;
···
  		flush = nfit_mem->nfit_flush ? nfit_mem->nfit_flush->flush
  			: NULL;
- 		nvdimm = nvdimm_create(acpi_desc->nvdimm_bus, nfit_mem,
+ 		nvdimm = __nvdimm_create(acpi_desc->nvdimm_bus, nfit_mem,
  				acpi_nfit_dimm_attribute_groups,
  				flags, cmd_mask, flush ? flush->hint_count : 0,
- 				nfit_mem->flush_wpq);
+ 				nfit_mem->flush_wpq, &nfit_mem->id[0],
+ 				acpi_nfit_get_security_ops(nfit_mem->family));
  		if (!nvdimm)
  			return -ENOMEM;
···
  		nvdimm = nfit_mem->nvdimm;
  		if (!nvdimm)
  			continue;
+
+ 		rc = nvdimm_security_setup_events(nvdimm);
+ 		if (rc < 0)
+ 			dev_warn(acpi_desc->dev,
+ 				"security event setup failed: %d\n", rc);

  		nfit_kernfs = sysfs_get_dirent(nvdimm_kobj(nvdimm)->sd, "nfit");
  		if (nfit_kernfs)
···
  	return 0;
  }

- static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
+ static int __acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
  		struct nvdimm *nvdimm, unsigned int cmd)
  {
  	struct acpi_nfit_desc *acpi_desc = to_acpi_nfit_desc(nd_desc);
···
  		return -EBUSY;

  	return 0;
+ }
+
+ /* prevent security commands from being issued via ioctl */
+ static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
+ 		struct nvdimm *nvdimm, unsigned int cmd, void *buf)
+ {
+ 	struct nd_cmd_pkg *call_pkg = buf;
+ 	unsigned int func;
+
+ 	if (nvdimm && cmd == ND_CMD_CALL &&
+ 			call_pkg->nd_family == NVDIMM_FAMILY_INTEL) {
+ 		func = call_pkg->nd_command;
+ 		if ((1 << func) & NVDIMM_INTEL_SECURITY_CMDMASK)
+ 			return -EOPNOTSUPP;
+ 	}
+
+ 	return __acpi_nfit_clear_to_send(nd_desc, nvdimm, cmd);
  }

  int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
···
  	status = acpi_get_table(ACPI_SIG_NFIT, 0, &tbl);
  	if (ACPI_FAILURE(status)) {
- 		/* This is ok, we could have an nvdimm hotplugged later */
+ 		/* The NVDIMM root device allows OS to trigger enumeration of
+ 		 * NVDIMMs through NFIT at boot time and re-enumeration at
+ 		 * root level via the _FIT method during runtime.
+ 		 * This is ok to return 0 here, we could have an nvdimm
+ 		 * hotplugged later and evaluate _FIT method which returns
+ 		 * data in the format of a series of NFIT Structures.
+ 		 */
  		dev_dbg(dev, "failed to find NFIT at startup\n");
  		return 0;
  	}
+388
drivers/acpi/nfit/intel.c
···
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2018 Intel Corporation. All rights reserved. */
#include <linux/libnvdimm.h>
#include <linux/ndctl.h>
#include <linux/acpi.h>
#include <asm/smp.h>
#include "intel.h"
#include "nfit.h"

static enum nvdimm_security_state intel_security_state(struct nvdimm *nvdimm,
		enum nvdimm_passphrase_type ptype)
{
	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
	struct {
		struct nd_cmd_pkg pkg;
		struct nd_intel_get_security_state cmd;
	} nd_cmd = {
		.pkg = {
			.nd_command = NVDIMM_INTEL_GET_SECURITY_STATE,
			.nd_family = NVDIMM_FAMILY_INTEL,
			.nd_size_out =
				sizeof(struct nd_intel_get_security_state),
			.nd_fw_size =
				sizeof(struct nd_intel_get_security_state),
		},
	};
	int rc;

	if (!test_bit(NVDIMM_INTEL_GET_SECURITY_STATE, &nfit_mem->dsm_mask))
		return -ENXIO;

	/*
	 * Short circuit the state retrieval while we are doing overwrite.
	 * The DSM spec states that the security state is indeterminate
	 * until the overwrite DSM completes.
	 */
	if (nvdimm_in_overwrite(nvdimm) && ptype == NVDIMM_USER)
		return NVDIMM_SECURITY_OVERWRITE;

	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
	if (rc < 0)
		return rc;
	if (nd_cmd.cmd.status)
		return -EIO;

	/* check and see if security is enabled and locked */
	if (ptype == NVDIMM_MASTER) {
		if (nd_cmd.cmd.extended_state & ND_INTEL_SEC_ESTATE_ENABLED)
			return NVDIMM_SECURITY_UNLOCKED;
		else if (nd_cmd.cmd.extended_state &
				ND_INTEL_SEC_ESTATE_PLIMIT)
			return NVDIMM_SECURITY_FROZEN;
	} else {
		if (nd_cmd.cmd.state & ND_INTEL_SEC_STATE_UNSUPPORTED)
			return -ENXIO;
		else if (nd_cmd.cmd.state & ND_INTEL_SEC_STATE_ENABLED) {
			if (nd_cmd.cmd.state & ND_INTEL_SEC_STATE_LOCKED)
				return NVDIMM_SECURITY_LOCKED;
			else if (nd_cmd.cmd.state & ND_INTEL_SEC_STATE_FROZEN
					|| nd_cmd.cmd.state &
					ND_INTEL_SEC_STATE_PLIMIT)
				return NVDIMM_SECURITY_FROZEN;
			else
				return NVDIMM_SECURITY_UNLOCKED;
		}
	}

	/* this should cover master security disabled as well */
	return NVDIMM_SECURITY_DISABLED;
}

static int intel_security_freeze(struct nvdimm *nvdimm)
{
	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
	struct {
		struct nd_cmd_pkg pkg;
		struct nd_intel_freeze_lock cmd;
	} nd_cmd = {
		.pkg = {
			.nd_command = NVDIMM_INTEL_FREEZE_LOCK,
			.nd_family = NVDIMM_FAMILY_INTEL,
			.nd_size_out = ND_INTEL_STATUS_SIZE,
			.nd_fw_size = ND_INTEL_STATUS_SIZE,
		},
	};
	int rc;

	if (!test_bit(NVDIMM_INTEL_FREEZE_LOCK, &nfit_mem->dsm_mask))
		return -ENOTTY;

	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
	if (rc < 0)
		return rc;
	if (nd_cmd.cmd.status)
		return -EIO;
	return 0;
}

static int intel_security_change_key(struct nvdimm *nvdimm,
		const struct nvdimm_key_data *old_data,
		const struct nvdimm_key_data *new_data,
		enum nvdimm_passphrase_type ptype)
{
	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
	unsigned int cmd = ptype == NVDIMM_MASTER ?
		NVDIMM_INTEL_SET_MASTER_PASSPHRASE :
		NVDIMM_INTEL_SET_PASSPHRASE;
	struct {
		struct nd_cmd_pkg pkg;
		struct nd_intel_set_passphrase cmd;
	} nd_cmd = {
		.pkg = {
			.nd_family = NVDIMM_FAMILY_INTEL,
			.nd_size_in = ND_INTEL_PASSPHRASE_SIZE * 2,
			.nd_size_out = ND_INTEL_STATUS_SIZE,
			.nd_fw_size = ND_INTEL_STATUS_SIZE,
			.nd_command = cmd,
		},
	};
	int rc;

	if (!test_bit(cmd, &nfit_mem->dsm_mask))
		return -ENOTTY;

	if (old_data)
		memcpy(nd_cmd.cmd.old_pass, old_data->data,
				sizeof(nd_cmd.cmd.old_pass));
	memcpy(nd_cmd.cmd.new_pass, new_data->data,
			sizeof(nd_cmd.cmd.new_pass));
	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
	if (rc < 0)
		return rc;

	switch (nd_cmd.cmd.status) {
	case 0:
		return 0;
	case ND_INTEL_STATUS_INVALID_PASS:
		return -EINVAL;
	case ND_INTEL_STATUS_NOT_SUPPORTED:
		return -EOPNOTSUPP;
	case ND_INTEL_STATUS_INVALID_STATE:
	default:
		return -EIO;
	}
}

static void nvdimm_invalidate_cache(void);

static int intel_security_unlock(struct nvdimm *nvdimm,
		const struct nvdimm_key_data *key_data)
{
	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
	struct {
		struct nd_cmd_pkg pkg;
		struct nd_intel_unlock_unit cmd;
	} nd_cmd = {
		.pkg = {
			.nd_command = NVDIMM_INTEL_UNLOCK_UNIT,
			.nd_family = NVDIMM_FAMILY_INTEL,
			.nd_size_in = ND_INTEL_PASSPHRASE_SIZE,
			.nd_size_out = ND_INTEL_STATUS_SIZE,
			.nd_fw_size = ND_INTEL_STATUS_SIZE,
		},
	};
	int rc;

	if (!test_bit(NVDIMM_INTEL_UNLOCK_UNIT, &nfit_mem->dsm_mask))
		return -ENOTTY;

	memcpy(nd_cmd.cmd.passphrase, key_data->data,
			sizeof(nd_cmd.cmd.passphrase));
	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
	if (rc < 0)
		return rc;
	switch (nd_cmd.cmd.status) {
	case 0:
		break;
	case ND_INTEL_STATUS_INVALID_PASS:
		return -EINVAL;
	default:
		return -EIO;
	}

	/* DIMM unlocked, invalidate all CPU caches before we read it */
	nvdimm_invalidate_cache();

	return 0;
}

static int intel_security_disable(struct nvdimm *nvdimm,
		const struct nvdimm_key_data *key_data)
{
	int rc;
	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
	struct {
		struct nd_cmd_pkg pkg;
		struct nd_intel_disable_passphrase cmd;
	} nd_cmd = {
		.pkg = {
			.nd_command = NVDIMM_INTEL_DISABLE_PASSPHRASE,
			.nd_family = NVDIMM_FAMILY_INTEL,
			.nd_size_in = ND_INTEL_PASSPHRASE_SIZE,
			.nd_size_out = ND_INTEL_STATUS_SIZE,
			.nd_fw_size = ND_INTEL_STATUS_SIZE,
		},
	};

	if (!test_bit(NVDIMM_INTEL_DISABLE_PASSPHRASE, &nfit_mem->dsm_mask))
		return -ENOTTY;

	memcpy(nd_cmd.cmd.passphrase, key_data->data,
			sizeof(nd_cmd.cmd.passphrase));
	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
	if (rc < 0)
		return rc;

	switch (nd_cmd.cmd.status) {
	case 0:
		break;
	case ND_INTEL_STATUS_INVALID_PASS:
		return -EINVAL;
	case ND_INTEL_STATUS_INVALID_STATE:
	default:
		return -ENXIO;
	}

	return 0;
}

static int intel_security_erase(struct nvdimm *nvdimm,
		const struct nvdimm_key_data *key,
		enum nvdimm_passphrase_type ptype)
{
	int rc;
	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
	unsigned int cmd = ptype == NVDIMM_MASTER ?
		NVDIMM_INTEL_MASTER_SECURE_ERASE : NVDIMM_INTEL_SECURE_ERASE;
	struct {
		struct nd_cmd_pkg pkg;
		struct nd_intel_secure_erase cmd;
	} nd_cmd = {
		.pkg = {
			.nd_family = NVDIMM_FAMILY_INTEL,
			.nd_size_in = ND_INTEL_PASSPHRASE_SIZE,
			.nd_size_out = ND_INTEL_STATUS_SIZE,
			.nd_fw_size = ND_INTEL_STATUS_SIZE,
			.nd_command = cmd,
		},
	};

	if (!test_bit(cmd, &nfit_mem->dsm_mask))
		return -ENOTTY;

	/* flush all cache before we erase DIMM */
	nvdimm_invalidate_cache();
	memcpy(nd_cmd.cmd.passphrase, key->data,
			sizeof(nd_cmd.cmd.passphrase));
	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
	if (rc < 0)
		return rc;

	switch (nd_cmd.cmd.status) {
	case 0:
		break;
	case ND_INTEL_STATUS_NOT_SUPPORTED:
		return -EOPNOTSUPP;
	case ND_INTEL_STATUS_INVALID_PASS:
		return -EINVAL;
	case ND_INTEL_STATUS_INVALID_STATE:
	default:
		return -ENXIO;
	}

	/* DIMM erased, invalidate all CPU caches before we read it */
	nvdimm_invalidate_cache();
	return 0;
}

static int intel_security_query_overwrite(struct nvdimm *nvdimm)
{
	int rc;
	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
	struct {
		struct nd_cmd_pkg pkg;
		struct nd_intel_query_overwrite cmd;
	} nd_cmd = {
		.pkg = {
			.nd_command = NVDIMM_INTEL_QUERY_OVERWRITE,
			.nd_family = NVDIMM_FAMILY_INTEL,
			.nd_size_out = ND_INTEL_STATUS_SIZE,
			.nd_fw_size = ND_INTEL_STATUS_SIZE,
		},
	};

	if (!test_bit(NVDIMM_INTEL_QUERY_OVERWRITE, &nfit_mem->dsm_mask))
		return -ENOTTY;

	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
	if (rc < 0)
		return rc;

	switch (nd_cmd.cmd.status) {
	case 0:
		break;
	case ND_INTEL_STATUS_OQUERY_INPROGRESS:
		return -EBUSY;
	default:
		return -ENXIO;
	}

	/* flush all cache before we make the nvdimms available */
	nvdimm_invalidate_cache();
	return 0;
}

static int intel_security_overwrite(struct nvdimm *nvdimm,
		const struct nvdimm_key_data *nkey)
{
	int rc;
	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
	struct {
		struct nd_cmd_pkg pkg;
		struct nd_intel_overwrite cmd;
	} nd_cmd = {
		.pkg = {
			.nd_command = NVDIMM_INTEL_OVERWRITE,
			.nd_family = NVDIMM_FAMILY_INTEL,
			.nd_size_in = ND_INTEL_PASSPHRASE_SIZE,
			.nd_size_out = ND_INTEL_STATUS_SIZE,
			.nd_fw_size = ND_INTEL_STATUS_SIZE,
		},
	};

	if (!test_bit(NVDIMM_INTEL_OVERWRITE, &nfit_mem->dsm_mask))
		return -ENOTTY;

	/* flush all cache before we erase DIMM */
	nvdimm_invalidate_cache();
	if (nkey)
		memcpy(nd_cmd.cmd.passphrase, nkey->data,
				sizeof(nd_cmd.cmd.passphrase));
	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
	if (rc < 0)
		return rc;

	switch (nd_cmd.cmd.status) {
	case 0:
		return 0;
	case ND_INTEL_STATUS_OVERWRITE_UNSUPPORTED:
		return -ENOTSUPP;
	case ND_INTEL_STATUS_INVALID_PASS:
		return -EINVAL;
	case ND_INTEL_STATUS_INVALID_STATE:
	default:
		return -ENXIO;
	}
}

/*
 * TODO: define a cross arch wbinvd equivalent when/if
 * NVDIMM_FAMILY_INTEL command support arrives on another arch.
 */
#ifdef CONFIG_X86
static void nvdimm_invalidate_cache(void)
{
	wbinvd_on_all_cpus();
}
#else
static void nvdimm_invalidate_cache(void)
{
	WARN_ON_ONCE("cache invalidation required after unlock\n");
}
#endif

static const struct nvdimm_security_ops __intel_security_ops = {
	.state = intel_security_state,
	.freeze = intel_security_freeze,
	.change_key = intel_security_change_key,
	.disable = intel_security_disable,
#ifdef CONFIG_X86
	.unlock = intel_security_unlock,
	.erase = intel_security_erase,
	.overwrite = intel_security_overwrite,
	.query_overwrite = intel_security_query_overwrite,
#endif
};

const struct nvdimm_security_ops *intel_security_ops = &__intel_security_ops;
+76
drivers/acpi/nfit/intel.h
···
  	};
  } __packed;

+ extern const struct nvdimm_security_ops *intel_security_ops;
+
+ #define ND_INTEL_STATUS_SIZE		4
+ #define ND_INTEL_PASSPHRASE_SIZE	32
+
+ #define ND_INTEL_STATUS_NOT_SUPPORTED	1
+ #define ND_INTEL_STATUS_RETRY		5
+ #define ND_INTEL_STATUS_NOT_READY	9
+ #define ND_INTEL_STATUS_INVALID_STATE	10
+ #define ND_INTEL_STATUS_INVALID_PASS	11
+ #define ND_INTEL_STATUS_OVERWRITE_UNSUPPORTED	0x10007
+ #define ND_INTEL_STATUS_OQUERY_INPROGRESS	0x10007
+ #define ND_INTEL_STATUS_OQUERY_SEQUENCE_ERR	0x20007
+
+ #define ND_INTEL_SEC_STATE_ENABLED	0x02
+ #define ND_INTEL_SEC_STATE_LOCKED	0x04
+ #define ND_INTEL_SEC_STATE_FROZEN	0x08
+ #define ND_INTEL_SEC_STATE_PLIMIT	0x10
+ #define ND_INTEL_SEC_STATE_UNSUPPORTED	0x20
+ #define ND_INTEL_SEC_STATE_OVERWRITE	0x40
+
+ #define ND_INTEL_SEC_ESTATE_ENABLED	0x01
+ #define ND_INTEL_SEC_ESTATE_PLIMIT	0x02
+
+ struct nd_intel_get_security_state {
+ 	u32 status;
+ 	u8 extended_state;
+ 	u8 reserved[3];
+ 	u8 state;
+ 	u8 reserved1[3];
+ } __packed;
+
+ struct nd_intel_set_passphrase {
+ 	u8 old_pass[ND_INTEL_PASSPHRASE_SIZE];
+ 	u8 new_pass[ND_INTEL_PASSPHRASE_SIZE];
+ 	u32 status;
+ } __packed;
+
+ struct nd_intel_unlock_unit {
+ 	u8 passphrase[ND_INTEL_PASSPHRASE_SIZE];
+ 	u32 status;
+ } __packed;
+
+ struct nd_intel_disable_passphrase {
+ 	u8 passphrase[ND_INTEL_PASSPHRASE_SIZE];
+ 	u32 status;
+ } __packed;
+
+ struct nd_intel_freeze_lock {
+ 	u32 status;
+ } __packed;
+
+ struct nd_intel_secure_erase {
+ 	u8 passphrase[ND_INTEL_PASSPHRASE_SIZE];
+ 	u32 status;
+ } __packed;
+
+ struct nd_intel_overwrite {
+ 	u8 passphrase[ND_INTEL_PASSPHRASE_SIZE];
+ 	u32 status;
+ } __packed;
+
+ struct nd_intel_query_overwrite {
+ 	u32 status;
+ } __packed;
+
+ struct nd_intel_set_master_passphrase {
+ 	u8 old_pass[ND_INTEL_PASSPHRASE_SIZE];
+ 	u8 new_pass[ND_INTEL_PASSPHRASE_SIZE];
+ 	u32 status;
+ } __packed;
+
+ struct nd_intel_master_secure_erase {
+ 	u8 passphrase[ND_INTEL_PASSPHRASE_SIZE];
+ 	u32 status;
+ } __packed;
  #endif
+23 -1
drivers/acpi/nfit/nfit.h
···
  	NVDIMM_INTEL_QUERY_FWUPDATE = 16,
  	NVDIMM_INTEL_SET_THRESHOLD = 17,
  	NVDIMM_INTEL_INJECT_ERROR = 18,
+ 	NVDIMM_INTEL_GET_SECURITY_STATE = 19,
+ 	NVDIMM_INTEL_SET_PASSPHRASE = 20,
+ 	NVDIMM_INTEL_DISABLE_PASSPHRASE = 21,
+ 	NVDIMM_INTEL_UNLOCK_UNIT = 22,
+ 	NVDIMM_INTEL_FREEZE_LOCK = 23,
+ 	NVDIMM_INTEL_SECURE_ERASE = 24,
+ 	NVDIMM_INTEL_OVERWRITE = 25,
+ 	NVDIMM_INTEL_QUERY_OVERWRITE = 26,
+ 	NVDIMM_INTEL_SET_MASTER_PASSPHRASE = 27,
+ 	NVDIMM_INTEL_MASTER_SECURE_ERASE = 28,
  };
+
+ #define NVDIMM_INTEL_SECURITY_CMDMASK \
+ (1 << NVDIMM_INTEL_GET_SECURITY_STATE | 1 << NVDIMM_INTEL_SET_PASSPHRASE \
+ | 1 << NVDIMM_INTEL_DISABLE_PASSPHRASE | 1 << NVDIMM_INTEL_UNLOCK_UNIT \
+ | 1 << NVDIMM_INTEL_FREEZE_LOCK | 1 << NVDIMM_INTEL_SECURE_ERASE \
+ | 1 << NVDIMM_INTEL_OVERWRITE | 1 << NVDIMM_INTEL_QUERY_OVERWRITE \
+ | 1 << NVDIMM_INTEL_SET_MASTER_PASSPHRASE \
+ | 1 << NVDIMM_INTEL_MASTER_SECURE_ERASE)

  #define NVDIMM_INTEL_CMDMASK \
  (NVDIMM_STANDARD_CMDMASK | 1 << NVDIMM_INTEL_GET_MODES \
  	| 1 << NVDIMM_INTEL_GET_FWINFO | 1 << NVDIMM_INTEL_START_FWUPDATE \
  	| 1 << NVDIMM_INTEL_SEND_FWUPDATE | 1 << NVDIMM_INTEL_FINISH_FWUPDATE \
  	| 1 << NVDIMM_INTEL_QUERY_FWUPDATE | 1 << NVDIMM_INTEL_SET_THRESHOLD \
- 	| 1 << NVDIMM_INTEL_INJECT_ERROR | 1 << NVDIMM_INTEL_LATCH_SHUTDOWN)
+ 	| 1 << NVDIMM_INTEL_INJECT_ERROR | 1 << NVDIMM_INTEL_LATCH_SHUTDOWN \
+ 	| NVDIMM_INTEL_SECURITY_CMDMASK)

  enum nfit_uuids {
  	/* for simplicity alias the uuid index with the family id */
···
  	NFIT_MEM_DIRTY_COUNT,
  };

+ #define NFIT_DIMM_ID_LEN	22
+
  /* assembled tables for a given dimm/memory-device */
  struct nfit_mem {
  	struct nvdimm *nvdimm;
···
  	struct list_head list;
  	struct acpi_device *adev;
  	struct acpi_nfit_desc *acpi_desc;
+ 	char id[NFIT_DIMM_ID_LEN+1];
  	struct resource *flush_wpq;
  	unsigned long dsm_mask;
  	unsigned long flags;
+5
drivers/nvdimm/Kconfig
···
 	  Select Y if unsure.
 
+config NVDIMM_KEYS
+	def_bool y
+	depends on ENCRYPTED_KEYS
+	depends on (LIBNVDIMM=ENCRYPTED_KEYS) || LIBNVDIMM=m
+
 endif
+1
drivers/nvdimm/Makefile
···
 libnvdimm-$(CONFIG_BTT) += btt_devs.o
 libnvdimm-$(CONFIG_NVDIMM_PFN) += pfn_devs.o
 libnvdimm-$(CONFIG_NVDIMM_DAX) += dax_devs.o
+libnvdimm-$(CONFIG_NVDIMM_KEYS) += security.o
+27 -6
drivers/nvdimm/bus.c
···
 }
 EXPORT_SYMBOL_GPL(to_nvdimm_bus);
 
+struct nvdimm_bus *nvdimm_to_bus(struct nvdimm *nvdimm)
+{
+	return to_nvdimm_bus(nvdimm->dev.parent);
+}
+EXPORT_SYMBOL_GPL(nvdimm_to_bus);
+
 struct nvdimm_bus *nvdimm_bus_register(struct device *parent,
 		struct nvdimm_bus_descriptor *nd_desc)
 {
···
 	INIT_LIST_HEAD(&nvdimm_bus->mapping_list);
 	init_waitqueue_head(&nvdimm_bus->probe_wait);
 	nvdimm_bus->id = ida_simple_get(&nd_ida, 0, 0, GFP_KERNEL);
-	mutex_init(&nvdimm_bus->reconfig_mutex);
-	badrange_init(&nvdimm_bus->badrange);
 	if (nvdimm_bus->id < 0) {
 		kfree(nvdimm_bus);
 		return NULL;
 	}
+	mutex_init(&nvdimm_bus->reconfig_mutex);
+	badrange_init(&nvdimm_bus->badrange);
 	nvdimm_bus->nd_desc = nd_desc;
 	nvdimm_bus->dev.parent = parent;
 	nvdimm_bus->dev.release = nvdimm_bus_release;
···
 	 * i.e. remove classless children
 	 */
 	if (dev->class)
-		/* pass */;
-	else
-		nd_device_unregister(dev, ND_SYNC);
+		return 0;
+
+	if (is_nvdimm(dev)) {
+		struct nvdimm *nvdimm = to_nvdimm(dev);
+		bool dev_put = false;
+
+		/* We are shutting down. Make state frozen artificially. */
+		nvdimm_bus_lock(dev);
+		nvdimm->sec.state = NVDIMM_SECURITY_FROZEN;
+		if (test_and_clear_bit(NDD_WORK_PENDING, &nvdimm->flags))
+			dev_put = true;
+		nvdimm_bus_unlock(dev);
+		cancel_delayed_work_sync(&nvdimm->dwork);
+		if (dev_put)
+			put_device(dev);
+	}
+	nd_device_unregister(dev, ND_SYNC);
+
 	return 0;
 }
···
 
 	/* ask the bus provider if it would like to block this request */
 	if (nd_desc->clear_to_send) {
-		int rc = nd_desc->clear_to_send(nd_desc, nvdimm, cmd);
+		int rc = nd_desc->clear_to_send(nd_desc, nvdimm, cmd, data);
 
 		if (rc)
 			return rc;
+15 -1
drivers/nvdimm/dimm.c
···
 		return rc;
 	}
 
-	/* reset locked, to be validated below... */
+	/*
+	 * The locked status bit reflects explicit status codes from the
+	 * label reading commands, revalidate it each time the driver is
+	 * activated and re-reads the label area.
+	 */
 	nvdimm_clear_locked(dev);
 
 	ndd = kzalloc(sizeof(*ndd), GFP_KERNEL);
···
 	ndd->dev = dev;
 	get_device(dev);
 	kref_init(&ndd->kref);
+
+	/*
+	 * Attempt to unlock, if the DIMM supports security commands,
+	 * otherwise the locked indication is determined by explicit
+	 * status codes from the label reading commands.
+	 */
+	rc = nvdimm_security_unlock(dev);
+	if (rc < 0)
+		dev_dbg(dev, "failed to unlock dimm: %d\n", rc);
+
 
 	/*
 	 * EACCES failures reading the namespace label-area-properties
+205 -5
drivers/nvdimm/dimm_devs.c
··· 370 370 } 371 371 static DEVICE_ATTR_RO(available_slots); 372 372 373 + __weak ssize_t security_show(struct device *dev, 374 + struct device_attribute *attr, char *buf) 375 + { 376 + struct nvdimm *nvdimm = to_nvdimm(dev); 377 + 378 + switch (nvdimm->sec.state) { 379 + case NVDIMM_SECURITY_DISABLED: 380 + return sprintf(buf, "disabled\n"); 381 + case NVDIMM_SECURITY_UNLOCKED: 382 + return sprintf(buf, "unlocked\n"); 383 + case NVDIMM_SECURITY_LOCKED: 384 + return sprintf(buf, "locked\n"); 385 + case NVDIMM_SECURITY_FROZEN: 386 + return sprintf(buf, "frozen\n"); 387 + case NVDIMM_SECURITY_OVERWRITE: 388 + return sprintf(buf, "overwrite\n"); 389 + default: 390 + return -ENOTTY; 391 + } 392 + 393 + return -ENOTTY; 394 + } 395 + 396 + #define OPS \ 397 + C( OP_FREEZE, "freeze", 1), \ 398 + C( OP_DISABLE, "disable", 2), \ 399 + C( OP_UPDATE, "update", 3), \ 400 + C( OP_ERASE, "erase", 2), \ 401 + C( OP_OVERWRITE, "overwrite", 2), \ 402 + C( OP_MASTER_UPDATE, "master_update", 3), \ 403 + C( OP_MASTER_ERASE, "master_erase", 2) 404 + #undef C 405 + #define C(a, b, c) a 406 + enum nvdimmsec_op_ids { OPS }; 407 + #undef C 408 + #define C(a, b, c) { b, c } 409 + static struct { 410 + const char *name; 411 + int args; 412 + } ops[] = { OPS }; 413 + #undef C 414 + 415 + #define SEC_CMD_SIZE 32 416 + #define KEY_ID_SIZE 10 417 + 418 + static ssize_t __security_store(struct device *dev, const char *buf, size_t len) 419 + { 420 + struct nvdimm *nvdimm = to_nvdimm(dev); 421 + ssize_t rc; 422 + char cmd[SEC_CMD_SIZE+1], keystr[KEY_ID_SIZE+1], 423 + nkeystr[KEY_ID_SIZE+1]; 424 + unsigned int key, newkey; 425 + int i; 426 + 427 + if (atomic_read(&nvdimm->busy)) 428 + return -EBUSY; 429 + 430 + rc = sscanf(buf, "%"__stringify(SEC_CMD_SIZE)"s" 431 + " %"__stringify(KEY_ID_SIZE)"s" 432 + " %"__stringify(KEY_ID_SIZE)"s", 433 + cmd, keystr, nkeystr); 434 + if (rc < 1) 435 + return -EINVAL; 436 + for (i = 0; i < ARRAY_SIZE(ops); i++) 437 + if (sysfs_streq(cmd, ops[i].name)) 438 + break; 
439 + if (i >= ARRAY_SIZE(ops)) 440 + return -EINVAL; 441 + if (ops[i].args > 1) 442 + rc = kstrtouint(keystr, 0, &key); 443 + if (rc >= 0 && ops[i].args > 2) 444 + rc = kstrtouint(nkeystr, 0, &newkey); 445 + if (rc < 0) 446 + return rc; 447 + 448 + if (i == OP_FREEZE) { 449 + dev_dbg(dev, "freeze\n"); 450 + rc = nvdimm_security_freeze(nvdimm); 451 + } else if (i == OP_DISABLE) { 452 + dev_dbg(dev, "disable %u\n", key); 453 + rc = nvdimm_security_disable(nvdimm, key); 454 + } else if (i == OP_UPDATE) { 455 + dev_dbg(dev, "update %u %u\n", key, newkey); 456 + rc = nvdimm_security_update(nvdimm, key, newkey, NVDIMM_USER); 457 + } else if (i == OP_ERASE) { 458 + dev_dbg(dev, "erase %u\n", key); 459 + rc = nvdimm_security_erase(nvdimm, key, NVDIMM_USER); 460 + } else if (i == OP_OVERWRITE) { 461 + dev_dbg(dev, "overwrite %u\n", key); 462 + rc = nvdimm_security_overwrite(nvdimm, key); 463 + } else if (i == OP_MASTER_UPDATE) { 464 + dev_dbg(dev, "master_update %u %u\n", key, newkey); 465 + rc = nvdimm_security_update(nvdimm, key, newkey, 466 + NVDIMM_MASTER); 467 + } else if (i == OP_MASTER_ERASE) { 468 + dev_dbg(dev, "master_erase %u\n", key); 469 + rc = nvdimm_security_erase(nvdimm, key, 470 + NVDIMM_MASTER); 471 + } else 472 + return -EINVAL; 473 + 474 + if (rc == 0) 475 + rc = len; 476 + return rc; 477 + } 478 + 479 + static ssize_t security_store(struct device *dev, 480 + struct device_attribute *attr, const char *buf, size_t len) 481 + 482 + { 483 + ssize_t rc; 484 + 485 + /* 486 + * Require all userspace triggered security management to be 487 + * done while probing is idle and the DIMM is not in active use 488 + * in any region. 
489 + */ 490 + device_lock(dev); 491 + nvdimm_bus_lock(dev); 492 + wait_nvdimm_bus_probe_idle(dev); 493 + rc = __security_store(dev, buf, len); 494 + nvdimm_bus_unlock(dev); 495 + device_unlock(dev); 496 + 497 + return rc; 498 + } 499 + static DEVICE_ATTR_RW(security); 500 + 373 501 static struct attribute *nvdimm_attributes[] = { 374 502 &dev_attr_state.attr, 375 503 &dev_attr_flags.attr, 376 504 &dev_attr_commands.attr, 377 505 &dev_attr_available_slots.attr, 506 + &dev_attr_security.attr, 378 507 NULL, 379 508 }; 380 509 510 + static umode_t nvdimm_visible(struct kobject *kobj, struct attribute *a, int n) 511 + { 512 + struct device *dev = container_of(kobj, typeof(*dev), kobj); 513 + struct nvdimm *nvdimm = to_nvdimm(dev); 514 + 515 + if (a != &dev_attr_security.attr) 516 + return a->mode; 517 + if (nvdimm->sec.state < 0) 518 + return 0; 519 + /* Are there any state mutation ops? */ 520 + if (nvdimm->sec.ops->freeze || nvdimm->sec.ops->disable 521 + || nvdimm->sec.ops->change_key 522 + || nvdimm->sec.ops->erase 523 + || nvdimm->sec.ops->overwrite) 524 + return a->mode; 525 + return 0444; 526 + } 527 + 381 528 struct attribute_group nvdimm_attribute_group = { 382 529 .attrs = nvdimm_attributes, 530 + .is_visible = nvdimm_visible, 383 531 }; 384 532 EXPORT_SYMBOL_GPL(nvdimm_attribute_group); 385 533 386 - struct nvdimm *nvdimm_create(struct nvdimm_bus *nvdimm_bus, void *provider_data, 387 - const struct attribute_group **groups, unsigned long flags, 388 - unsigned long cmd_mask, int num_flush, 389 - struct resource *flush_wpq) 534 + struct nvdimm *__nvdimm_create(struct nvdimm_bus *nvdimm_bus, 535 + void *provider_data, const struct attribute_group **groups, 536 + unsigned long flags, unsigned long cmd_mask, int num_flush, 537 + struct resource *flush_wpq, const char *dimm_id, 538 + const struct nvdimm_security_ops *sec_ops) 390 539 { 391 540 struct nvdimm *nvdimm = kzalloc(sizeof(*nvdimm), GFP_KERNEL); 392 541 struct device *dev; ··· 548 399 kfree(nvdimm); 549 
400 return NULL; 550 401 } 402 + 403 + nvdimm->dimm_id = dimm_id; 551 404 nvdimm->provider_data = provider_data; 552 405 nvdimm->flags = flags; 553 406 nvdimm->cmd_mask = cmd_mask; ··· 562 411 dev->type = &nvdimm_device_type; 563 412 dev->devt = MKDEV(nvdimm_major, nvdimm->id); 564 413 dev->groups = groups; 414 + nvdimm->sec.ops = sec_ops; 415 + nvdimm->sec.overwrite_tmo = 0; 416 + INIT_DELAYED_WORK(&nvdimm->dwork, nvdimm_security_overwrite_query); 417 + /* 418 + * Security state must be initialized before device_add() for 419 + * attribute visibility. 420 + */ 421 + /* get security state and extended (master) state */ 422 + nvdimm->sec.state = nvdimm_security_state(nvdimm, NVDIMM_USER); 423 + nvdimm->sec.ext_state = nvdimm_security_state(nvdimm, NVDIMM_MASTER); 565 424 nd_device_register(dev); 566 425 567 426 return nvdimm; 568 427 } 569 - EXPORT_SYMBOL_GPL(nvdimm_create); 428 + EXPORT_SYMBOL_GPL(__nvdimm_create); 429 + 430 + int nvdimm_security_setup_events(struct nvdimm *nvdimm) 431 + { 432 + nvdimm->sec.overwrite_state = sysfs_get_dirent(nvdimm->dev.kobj.sd, 433 + "security"); 434 + if (!nvdimm->sec.overwrite_state) 435 + return -ENODEV; 436 + return 0; 437 + } 438 + EXPORT_SYMBOL_GPL(nvdimm_security_setup_events); 439 + 440 + int nvdimm_in_overwrite(struct nvdimm *nvdimm) 441 + { 442 + return test_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags); 443 + } 444 + EXPORT_SYMBOL_GPL(nvdimm_in_overwrite); 445 + 446 + int nvdimm_security_freeze(struct nvdimm *nvdimm) 447 + { 448 + int rc; 449 + 450 + WARN_ON_ONCE(!is_nvdimm_bus_locked(&nvdimm->dev)); 451 + 452 + if (!nvdimm->sec.ops || !nvdimm->sec.ops->freeze) 453 + return -EOPNOTSUPP; 454 + 455 + if (nvdimm->sec.state < 0) 456 + return -EIO; 457 + 458 + if (test_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags)) { 459 + dev_warn(&nvdimm->dev, "Overwrite operation in progress.\n"); 460 + return -EBUSY; 461 + } 462 + 463 + rc = nvdimm->sec.ops->freeze(nvdimm); 464 + nvdimm->sec.state = nvdimm_security_state(nvdimm, 
NVDIMM_USER); 465 + 466 + return rc; 467 + } 570 468 571 469 int alias_dpa_busy(struct device *dev, void *data) 572 470 {
+3 -4
drivers/nvdimm/label.c
···
 	victims = 0;
 	if (old_num_resources) {
 		/* convert old local-label-map to dimm-slot victim-map */
-		victim_map = kcalloc(BITS_TO_LONGS(nslot), sizeof(long),
-				GFP_KERNEL);
+		victim_map = bitmap_zalloc(nslot, GFP_KERNEL);
 		if (!victim_map)
 			return -ENOMEM;
 
···
 	/* don't allow updates that consume the last label */
 	if (nfree - alloc < 0 || nfree - alloc + victims < 1) {
 		dev_info(&nsblk->common.dev, "insufficient label space\n");
-		kfree(victim_map);
+		bitmap_free(victim_map);
 		return -ENOSPC;
 	}
 	/* from here on we need to abort on error */
···
 
  out:
 	kfree(old_res_list);
-	kfree(victim_map);
+	bitmap_free(victim_map);
 	return rc;
 
  abort:
+1 -2
drivers/nvdimm/namespace_devs.c
···
 	if (dev->driver || to_ndns(dev)->claim)
 		return -EBUSY;
 
-	input = kmemdup(buf, len + 1, GFP_KERNEL);
+	input = kstrndup(buf, len, GFP_KERNEL);
 	if (!input)
 		return -ENOMEM;
 
-	input[len] = '\0';
 	pos = strim(input);
 	if (strlen(pos) + 1 > NSLABEL_NAME_LEN) {
 		rc = -EINVAL;
+57
drivers/nvdimm/nd-core.h
···
 extern struct list_head nvdimm_bus_list;
 extern struct mutex nvdimm_bus_list_mutex;
 extern int nvdimm_major;
+extern struct workqueue_struct *nvdimm_wq;
 
 struct nvdimm_bus {
 	struct nvdimm_bus_descriptor *nd_desc;
···
 	atomic_t busy;
 	int id, num_flush;
 	struct resource *flush_wpq;
+	const char *dimm_id;
+	struct {
+		const struct nvdimm_security_ops *ops;
+		enum nvdimm_security_state state;
+		enum nvdimm_security_state ext_state;
+		unsigned int overwrite_tmo;
+		struct kernfs_node *overwrite_state;
+	} sec;
+	struct delayed_work dwork;
 };
+
+static inline enum nvdimm_security_state nvdimm_security_state(
+		struct nvdimm *nvdimm, bool master)
+{
+	if (!nvdimm->sec.ops)
+		return -ENXIO;
+
+	return nvdimm->sec.ops->state(nvdimm, master);
+}
+int nvdimm_security_freeze(struct nvdimm *nvdimm);
+#if IS_ENABLED(CONFIG_NVDIMM_KEYS)
+int nvdimm_security_disable(struct nvdimm *nvdimm, unsigned int keyid);
+int nvdimm_security_update(struct nvdimm *nvdimm, unsigned int keyid,
+		unsigned int new_keyid,
+		enum nvdimm_passphrase_type pass_type);
+int nvdimm_security_erase(struct nvdimm *nvdimm, unsigned int keyid,
+		enum nvdimm_passphrase_type pass_type);
+int nvdimm_security_overwrite(struct nvdimm *nvdimm, unsigned int keyid);
+void nvdimm_security_overwrite_query(struct work_struct *work);
+#else
+static inline int nvdimm_security_disable(struct nvdimm *nvdimm,
+		unsigned int keyid)
+{
+	return -EOPNOTSUPP;
+}
+static inline int nvdimm_security_update(struct nvdimm *nvdimm,
+		unsigned int keyid, unsigned int new_keyid,
+		enum nvdimm_passphrase_type pass_type)
+{
+	return -EOPNOTSUPP;
+}
+static inline int nvdimm_security_erase(struct nvdimm *nvdimm,
+		unsigned int keyid, enum nvdimm_passphrase_type pass_type)
+{
+	return -EOPNOTSUPP;
+}
+static inline int nvdimm_security_overwrite(struct nvdimm *nvdimm,
+		unsigned int keyid)
+{
+	return -EOPNOTSUPP;
+}
+static inline void nvdimm_security_overwrite_query(struct work_struct *work)
+{
+}
+#endif
 
 /**
  * struct blk_alloc_info - tracking info for BLK dpa scanning
+8
drivers/nvdimm/nd.h
···
 void nvdimm_set_aliasing(struct device *dev);
 void nvdimm_set_locked(struct device *dev);
 void nvdimm_clear_locked(struct device *dev);
+#if IS_ENABLED(CONFIG_NVDIMM_KEYS)
+int nvdimm_security_unlock(struct device *dev);
+#else
+static inline int nvdimm_security_unlock(struct device *dev)
+{
+	return 0;
+}
+#endif
 struct nd_btt *to_nd_btt(struct device *dev);
 
 struct nd_gen_sb {
+5
drivers/nvdimm/region_devs.c
···
 		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
 		struct nvdimm *nvdimm = nd_mapping->nvdimm;
 
+		if (test_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags)) {
+			nvdimm_bus_unlock(&nd_region->dev);
+			return -EBUSY;
+		}
+
 		/* at least one null hint slot per-dimm for the "no-hint" case */
 		flush_data_size += sizeof(void *);
 		num_flush = min_not_zero(num_flush, nvdimm->num_flush);
+454
drivers/nvdimm/security.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright(c) 2018 Intel Corporation. All rights reserved. */ 3 + 4 + #include <linux/module.h> 5 + #include <linux/device.h> 6 + #include <linux/ndctl.h> 7 + #include <linux/slab.h> 8 + #include <linux/io.h> 9 + #include <linux/mm.h> 10 + #include <linux/cred.h> 11 + #include <linux/key.h> 12 + #include <linux/key-type.h> 13 + #include <keys/user-type.h> 14 + #include <keys/encrypted-type.h> 15 + #include "nd-core.h" 16 + #include "nd.h" 17 + 18 + #define NVDIMM_BASE_KEY 0 19 + #define NVDIMM_NEW_KEY 1 20 + 21 + static bool key_revalidate = true; 22 + module_param(key_revalidate, bool, 0444); 23 + MODULE_PARM_DESC(key_revalidate, "Require key validation at init."); 24 + 25 + static void *key_data(struct key *key) 26 + { 27 + struct encrypted_key_payload *epayload = dereference_key_locked(key); 28 + 29 + lockdep_assert_held_read(&key->sem); 30 + 31 + return epayload->decrypted_data; 32 + } 33 + 34 + static void nvdimm_put_key(struct key *key) 35 + { 36 + if (!key) 37 + return; 38 + 39 + up_read(&key->sem); 40 + key_put(key); 41 + } 42 + 43 + /* 44 + * Retrieve kernel key for DIMM and request from user space if 45 + * necessary. Returns a key held for read and must be put by 46 + * nvdimm_put_key() before the usage goes out of scope. 
47 + */ 48 + static struct key *nvdimm_request_key(struct nvdimm *nvdimm) 49 + { 50 + struct key *key = NULL; 51 + static const char NVDIMM_PREFIX[] = "nvdimm:"; 52 + char desc[NVDIMM_KEY_DESC_LEN + sizeof(NVDIMM_PREFIX)]; 53 + struct device *dev = &nvdimm->dev; 54 + 55 + sprintf(desc, "%s%s", NVDIMM_PREFIX, nvdimm->dimm_id); 56 + key = request_key(&key_type_encrypted, desc, ""); 57 + if (IS_ERR(key)) { 58 + if (PTR_ERR(key) == -ENOKEY) 59 + dev_dbg(dev, "request_key() found no key\n"); 60 + else 61 + dev_dbg(dev, "request_key() upcall failed\n"); 62 + key = NULL; 63 + } else { 64 + struct encrypted_key_payload *epayload; 65 + 66 + down_read(&key->sem); 67 + epayload = dereference_key_locked(key); 68 + if (epayload->decrypted_datalen != NVDIMM_PASSPHRASE_LEN) { 69 + up_read(&key->sem); 70 + key_put(key); 71 + key = NULL; 72 + } 73 + } 74 + 75 + return key; 76 + } 77 + 78 + static struct key *nvdimm_lookup_user_key(struct nvdimm *nvdimm, 79 + key_serial_t id, int subclass) 80 + { 81 + key_ref_t keyref; 82 + struct key *key; 83 + struct encrypted_key_payload *epayload; 84 + struct device *dev = &nvdimm->dev; 85 + 86 + keyref = lookup_user_key(id, 0, 0); 87 + if (IS_ERR(keyref)) 88 + return NULL; 89 + 90 + key = key_ref_to_ptr(keyref); 91 + if (key->type != &key_type_encrypted) { 92 + key_put(key); 93 + return NULL; 94 + } 95 + 96 + dev_dbg(dev, "%s: key found: %#x\n", __func__, key_serial(key)); 97 + 98 + down_read_nested(&key->sem, subclass); 99 + epayload = dereference_key_locked(key); 100 + if (epayload->decrypted_datalen != NVDIMM_PASSPHRASE_LEN) { 101 + up_read(&key->sem); 102 + key_put(key); 103 + key = NULL; 104 + } 105 + return key; 106 + } 107 + 108 + static struct key *nvdimm_key_revalidate(struct nvdimm *nvdimm) 109 + { 110 + struct key *key; 111 + int rc; 112 + 113 + if (!nvdimm->sec.ops->change_key) 114 + return NULL; 115 + 116 + key = nvdimm_request_key(nvdimm); 117 + if (!key) 118 + return NULL; 119 + 120 + /* 121 + * Send the same key to the hardware 
as new and old key to 122 + * verify that the key is good. 123 + */ 124 + rc = nvdimm->sec.ops->change_key(nvdimm, key_data(key), 125 + key_data(key), NVDIMM_USER); 126 + if (rc < 0) { 127 + nvdimm_put_key(key); 128 + key = NULL; 129 + } 130 + return key; 131 + } 132 + 133 + static int __nvdimm_security_unlock(struct nvdimm *nvdimm) 134 + { 135 + struct device *dev = &nvdimm->dev; 136 + struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev); 137 + struct key *key = NULL; 138 + int rc; 139 + 140 + /* The bus lock should be held at the top level of the call stack */ 141 + lockdep_assert_held(&nvdimm_bus->reconfig_mutex); 142 + 143 + if (!nvdimm->sec.ops || !nvdimm->sec.ops->unlock 144 + || nvdimm->sec.state < 0) 145 + return -EIO; 146 + 147 + if (test_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags)) { 148 + dev_dbg(dev, "Security operation in progress.\n"); 149 + return -EBUSY; 150 + } 151 + 152 + /* 153 + * If the pre-OS has unlocked the DIMM, attempt to send the key 154 + * from request_key() to the hardware for verification. Failure 155 + * to revalidate the key against the hardware results in a 156 + * freeze of the security configuration. I.e. if the OS does not 157 + * have the key, security is being managed pre-OS. 158 + */ 159 + if (nvdimm->sec.state == NVDIMM_SECURITY_UNLOCKED) { 160 + if (!key_revalidate) 161 + return 0; 162 + 163 + key = nvdimm_key_revalidate(nvdimm); 164 + if (!key) 165 + return nvdimm_security_freeze(nvdimm); 166 + } else 167 + key = nvdimm_request_key(nvdimm); 168 + 169 + if (!key) 170 + return -ENOKEY; 171 + 172 + rc = nvdimm->sec.ops->unlock(nvdimm, key_data(key)); 173 + dev_dbg(dev, "key: %d unlock: %s\n", key_serial(key), 174 + rc == 0 ? 
"success" : "fail"); 175 + 176 + nvdimm_put_key(key); 177 + nvdimm->sec.state = nvdimm_security_state(nvdimm, NVDIMM_USER); 178 + return rc; 179 + } 180 + 181 + int nvdimm_security_unlock(struct device *dev) 182 + { 183 + struct nvdimm *nvdimm = to_nvdimm(dev); 184 + int rc; 185 + 186 + nvdimm_bus_lock(dev); 187 + rc = __nvdimm_security_unlock(nvdimm); 188 + nvdimm_bus_unlock(dev); 189 + return rc; 190 + } 191 + 192 + int nvdimm_security_disable(struct nvdimm *nvdimm, unsigned int keyid) 193 + { 194 + struct device *dev = &nvdimm->dev; 195 + struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev); 196 + struct key *key; 197 + int rc; 198 + 199 + /* The bus lock should be held at the top level of the call stack */ 200 + lockdep_assert_held(&nvdimm_bus->reconfig_mutex); 201 + 202 + if (!nvdimm->sec.ops || !nvdimm->sec.ops->disable 203 + || nvdimm->sec.state < 0) 204 + return -EOPNOTSUPP; 205 + 206 + if (nvdimm->sec.state >= NVDIMM_SECURITY_FROZEN) { 207 + dev_dbg(dev, "Incorrect security state: %d\n", 208 + nvdimm->sec.state); 209 + return -EIO; 210 + } 211 + 212 + if (test_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags)) { 213 + dev_dbg(dev, "Security operation in progress.\n"); 214 + return -EBUSY; 215 + } 216 + 217 + key = nvdimm_lookup_user_key(nvdimm, keyid, NVDIMM_BASE_KEY); 218 + if (!key) 219 + return -ENOKEY; 220 + 221 + rc = nvdimm->sec.ops->disable(nvdimm, key_data(key)); 222 + dev_dbg(dev, "key: %d disable: %s\n", key_serial(key), 223 + rc == 0 ? 
"success" : "fail"); 224 + 225 + nvdimm_put_key(key); 226 + nvdimm->sec.state = nvdimm_security_state(nvdimm, NVDIMM_USER); 227 + return rc; 228 + } 229 + 230 + int nvdimm_security_update(struct nvdimm *nvdimm, unsigned int keyid, 231 + unsigned int new_keyid, 232 + enum nvdimm_passphrase_type pass_type) 233 + { 234 + struct device *dev = &nvdimm->dev; 235 + struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev); 236 + struct key *key, *newkey; 237 + int rc; 238 + 239 + /* The bus lock should be held at the top level of the call stack */ 240 + lockdep_assert_held(&nvdimm_bus->reconfig_mutex); 241 + 242 + if (!nvdimm->sec.ops || !nvdimm->sec.ops->change_key 243 + || nvdimm->sec.state < 0) 244 + return -EOPNOTSUPP; 245 + 246 + if (nvdimm->sec.state >= NVDIMM_SECURITY_FROZEN) { 247 + dev_dbg(dev, "Incorrect security state: %d\n", 248 + nvdimm->sec.state); 249 + return -EIO; 250 + } 251 + 252 + if (keyid == 0) 253 + key = NULL; 254 + else { 255 + key = nvdimm_lookup_user_key(nvdimm, keyid, NVDIMM_BASE_KEY); 256 + if (!key) 257 + return -ENOKEY; 258 + } 259 + 260 + newkey = nvdimm_lookup_user_key(nvdimm, new_keyid, NVDIMM_NEW_KEY); 261 + if (!newkey) { 262 + nvdimm_put_key(key); 263 + return -ENOKEY; 264 + } 265 + 266 + rc = nvdimm->sec.ops->change_key(nvdimm, key ? key_data(key) : NULL, 267 + key_data(newkey), pass_type); 268 + dev_dbg(dev, "key: %d %d update%s: %s\n", 269 + key_serial(key), key_serial(newkey), 270 + pass_type == NVDIMM_MASTER ? "(master)" : "(user)", 271 + rc == 0 ? 
"success" : "fail"); 272 + 273 + nvdimm_put_key(newkey); 274 + nvdimm_put_key(key); 275 + if (pass_type == NVDIMM_MASTER) 276 + nvdimm->sec.ext_state = nvdimm_security_state(nvdimm, 277 + NVDIMM_MASTER); 278 + else 279 + nvdimm->sec.state = nvdimm_security_state(nvdimm, 280 + NVDIMM_USER); 281 + return rc; 282 + } 283 + 284 + int nvdimm_security_erase(struct nvdimm *nvdimm, unsigned int keyid, 285 + enum nvdimm_passphrase_type pass_type) 286 + { 287 + struct device *dev = &nvdimm->dev; 288 + struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev); 289 + struct key *key; 290 + int rc; 291 + 292 + /* The bus lock should be held at the top level of the call stack */ 293 + lockdep_assert_held(&nvdimm_bus->reconfig_mutex); 294 + 295 + if (!nvdimm->sec.ops || !nvdimm->sec.ops->erase 296 + || nvdimm->sec.state < 0) 297 + return -EOPNOTSUPP; 298 + 299 + if (atomic_read(&nvdimm->busy)) { 300 + dev_dbg(dev, "Unable to secure erase while DIMM active.\n"); 301 + return -EBUSY; 302 + } 303 + 304 + if (nvdimm->sec.state >= NVDIMM_SECURITY_FROZEN) { 305 + dev_dbg(dev, "Incorrect security state: %d\n", 306 + nvdimm->sec.state); 307 + return -EIO; 308 + } 309 + 310 + if (test_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags)) { 311 + dev_dbg(dev, "Security operation in progress.\n"); 312 + return -EBUSY; 313 + } 314 + 315 + if (nvdimm->sec.ext_state != NVDIMM_SECURITY_UNLOCKED 316 + && pass_type == NVDIMM_MASTER) { 317 + dev_dbg(dev, 318 + "Attempt to secure erase in wrong master state.\n"); 319 + return -EOPNOTSUPP; 320 + } 321 + 322 + key = nvdimm_lookup_user_key(nvdimm, keyid, NVDIMM_BASE_KEY); 323 + if (!key) 324 + return -ENOKEY; 325 + 326 + rc = nvdimm->sec.ops->erase(nvdimm, key_data(key), pass_type); 327 + dev_dbg(dev, "key: %d erase%s: %s\n", key_serial(key), 328 + pass_type == NVDIMM_MASTER ? "(master)" : "(user)", 329 + rc == 0 ? 
"success" : "fail"); 330 + 331 + nvdimm_put_key(key); 332 + nvdimm->sec.state = nvdimm_security_state(nvdimm, NVDIMM_USER); 333 + return rc; 334 + } 335 + 336 + int nvdimm_security_overwrite(struct nvdimm *nvdimm, unsigned int keyid) 337 + { 338 + struct device *dev = &nvdimm->dev; 339 + struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev); 340 + struct key *key; 341 + int rc; 342 + 343 + /* The bus lock should be held at the top level of the call stack */ 344 + lockdep_assert_held(&nvdimm_bus->reconfig_mutex); 345 + 346 + if (!nvdimm->sec.ops || !nvdimm->sec.ops->overwrite 347 + || nvdimm->sec.state < 0) 348 + return -EOPNOTSUPP; 349 + 350 + if (atomic_read(&nvdimm->busy)) { 351 + dev_dbg(dev, "Unable to overwrite while DIMM active.\n"); 352 + return -EBUSY; 353 + } 354 + 355 + if (dev->driver == NULL) { 356 + dev_dbg(dev, "Unable to overwrite while DIMM active.\n"); 357 + return -EINVAL; 358 + } 359 + 360 + if (nvdimm->sec.state >= NVDIMM_SECURITY_FROZEN) { 361 + dev_dbg(dev, "Incorrect security state: %d\n", 362 + nvdimm->sec.state); 363 + return -EIO; 364 + } 365 + 366 + if (test_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags)) { 367 + dev_dbg(dev, "Security operation in progress.\n"); 368 + return -EBUSY; 369 + } 370 + 371 + if (keyid == 0) 372 + key = NULL; 373 + else { 374 + key = nvdimm_lookup_user_key(nvdimm, keyid, NVDIMM_BASE_KEY); 375 + if (!key) 376 + return -ENOKEY; 377 + } 378 + 379 + rc = nvdimm->sec.ops->overwrite(nvdimm, key ? key_data(key) : NULL); 380 + dev_dbg(dev, "key: %d overwrite submission: %s\n", key_serial(key), 381 + rc == 0 ? "success" : "fail"); 382 + 383 + nvdimm_put_key(key); 384 + if (rc == 0) { 385 + set_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags); 386 + set_bit(NDD_WORK_PENDING, &nvdimm->flags); 387 + nvdimm->sec.state = NVDIMM_SECURITY_OVERWRITE; 388 + /* 389 + * Make sure we don't lose device while doing overwrite 390 + * query. 
391 + */ 392 + get_device(dev); 393 + queue_delayed_work(system_wq, &nvdimm->dwork, 0); 394 + } 395 + 396 + return rc; 397 + } 398 + 399 + void __nvdimm_security_overwrite_query(struct nvdimm *nvdimm) 400 + { 401 + struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(&nvdimm->dev); 402 + int rc; 403 + unsigned int tmo; 404 + 405 + /* The bus lock should be held at the top level of the call stack */ 406 + lockdep_assert_held(&nvdimm_bus->reconfig_mutex); 407 + 408 + /* 409 + * Abort and release device if we no longer have the overwrite 410 + * flag set. It means the work has been canceled. 411 + */ 412 + if (!test_bit(NDD_WORK_PENDING, &nvdimm->flags)) 413 + return; 414 + 415 + tmo = nvdimm->sec.overwrite_tmo; 416 + 417 + if (!nvdimm->sec.ops || !nvdimm->sec.ops->query_overwrite 418 + || nvdimm->sec.state < 0) 419 + return; 420 + 421 + rc = nvdimm->sec.ops->query_overwrite(nvdimm); 422 + if (rc == -EBUSY) { 423 + 424 + /* setup delayed work again */ 425 + tmo += 10; 426 + queue_delayed_work(system_wq, &nvdimm->dwork, tmo * HZ); 427 + nvdimm->sec.overwrite_tmo = min(15U * 60U, tmo); 428 + return; 429 + } 430 + 431 + if (rc < 0) 432 + dev_dbg(&nvdimm->dev, "overwrite failed\n"); 433 + else 434 + dev_dbg(&nvdimm->dev, "overwrite completed\n"); 435 + 436 + if (nvdimm->sec.overwrite_state) 437 + sysfs_notify_dirent(nvdimm->sec.overwrite_state); 438 + nvdimm->sec.overwrite_tmo = 0; 439 + clear_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags); 440 + clear_bit(NDD_WORK_PENDING, &nvdimm->flags); 441 + put_device(&nvdimm->dev); 442 + nvdimm->sec.state = nvdimm_security_state(nvdimm, NVDIMM_USER); 443 + nvdimm->sec.ext_state = nvdimm_security_state(nvdimm, NVDIMM_MASTER); 444 + } 445 + 446 + void nvdimm_security_overwrite_query(struct work_struct *work) 447 + { 448 + struct nvdimm *nvdimm = 449 + container_of(work, typeof(*nvdimm), dwork.work); 450 + 451 + nvdimm_bus_lock(&nvdimm->dev); 452 + __nvdimm_security_overwrite_query(nvdimm); 453 + nvdimm_bus_unlock(&nvdimm->dev); 454 + }
+3
include/linux/key.h
···
 
 extern void key_set_timeout(struct key *, unsigned);
 
+extern key_ref_t lookup_user_key(key_serial_t id, unsigned long flags,
+				 key_perm_t perm);
+
 /*
  * The permissions required on a key that we're looking up.
  */
+71 -5
include/linux/libnvdimm.h
···
 	NDD_UNARMED = 1,
 	/* locked memory devices should not be accessed */
 	NDD_LOCKED = 2,
+	/* memory under security wipes should not be accessed */
+	NDD_SECURITY_OVERWRITE = 3,
+	/* tracking whether or not there is a pending device reference */
+	NDD_WORK_PENDING = 4,

 	/* need to set a limit somewhere, but yes, this is likely overkill */
 	ND_IOCTL_MAX_BUFLEN = SZ_4M,
···
 	ndctl_fn ndctl;
 	int (*flush_probe)(struct nvdimm_bus_descriptor *nd_desc);
 	int (*clear_to_send)(struct nvdimm_bus_descriptor *nd_desc,
-			struct nvdimm *nvdimm, unsigned int cmd);
+			struct nvdimm *nvdimm, unsigned int cmd, void *data);
 };

 struct nd_cmd_desc {
···
 }

+enum nvdimm_security_state {
+	NVDIMM_SECURITY_DISABLED,
+	NVDIMM_SECURITY_UNLOCKED,
+	NVDIMM_SECURITY_LOCKED,
+	NVDIMM_SECURITY_FROZEN,
+	NVDIMM_SECURITY_OVERWRITE,
+};
+
+#define NVDIMM_PASSPHRASE_LEN		32
+#define NVDIMM_KEY_DESC_LEN		22
+
+struct nvdimm_key_data {
+	u8 data[NVDIMM_PASSPHRASE_LEN];
+};
+
+enum nvdimm_passphrase_type {
+	NVDIMM_USER,
+	NVDIMM_MASTER,
+};
+
+struct nvdimm_security_ops {
+	enum nvdimm_security_state (*state)(struct nvdimm *nvdimm,
+			enum nvdimm_passphrase_type pass_type);
+	int (*freeze)(struct nvdimm *nvdimm);
+	int (*change_key)(struct nvdimm *nvdimm,
+			const struct nvdimm_key_data *old_data,
+			const struct nvdimm_key_data *new_data,
+			enum nvdimm_passphrase_type pass_type);
+	int (*unlock)(struct nvdimm *nvdimm,
+			const struct nvdimm_key_data *key_data);
+	int (*disable)(struct nvdimm *nvdimm,
+			const struct nvdimm_key_data *key_data);
+	int (*erase)(struct nvdimm *nvdimm,
+			const struct nvdimm_key_data *key_data,
+			enum nvdimm_passphrase_type pass_type);
+	int (*overwrite)(struct nvdimm *nvdimm,
+			const struct nvdimm_key_data *key_data);
+	int (*query_overwrite)(struct nvdimm *nvdimm);
+};
+
 void badrange_init(struct badrange *badrange);
 int badrange_add(struct badrange *badrange, u64 addr, u64 length);
 void badrange_forget(struct badrange *badrange, phys_addr_t start,
···
 		struct nvdimm_bus_descriptor *nfit_desc);
 void nvdimm_bus_unregister(struct nvdimm_bus *nvdimm_bus);
 struct nvdimm_bus *to_nvdimm_bus(struct device *dev);
+struct nvdimm_bus *nvdimm_to_bus(struct nvdimm *nvdimm);
 struct nvdimm *to_nvdimm(struct device *dev);
 struct nd_region *to_nd_region(struct device *dev);
 struct device *nd_region_dev(struct nd_region *nd_region);
···
 struct kobject *nvdimm_kobj(struct nvdimm *nvdimm);
 unsigned long nvdimm_cmd_mask(struct nvdimm *nvdimm);
 void *nvdimm_provider_data(struct nvdimm *nvdimm);
-struct nvdimm *nvdimm_create(struct nvdimm_bus *nvdimm_bus, void *provider_data,
-		const struct attribute_group **groups, unsigned long flags,
-		unsigned long cmd_mask, int num_flush,
-		struct resource *flush_wpq);
+struct nvdimm *__nvdimm_create(struct nvdimm_bus *nvdimm_bus,
+		void *provider_data, const struct attribute_group **groups,
+		unsigned long flags, unsigned long cmd_mask, int num_flush,
+		struct resource *flush_wpq, const char *dimm_id,
+		const struct nvdimm_security_ops *sec_ops);
+static inline struct nvdimm *nvdimm_create(struct nvdimm_bus *nvdimm_bus,
+		void *provider_data, const struct attribute_group **groups,
+		unsigned long flags, unsigned long cmd_mask, int num_flush,
+		struct resource *flush_wpq)
+{
+	return __nvdimm_create(nvdimm_bus, provider_data, groups, flags,
+			cmd_mask, num_flush, flush_wpq, NULL, NULL);
+}
+
+int nvdimm_security_setup_events(struct nvdimm *nvdimm);
 const struct nd_cmd_desc *nd_cmd_dimm_desc(int cmd);
 const struct nd_cmd_desc *nd_cmd_bus_desc(int cmd);
 u32 nd_cmd_in_size(struct nvdimm *nvdimm, int cmd,
···
 void nvdimm_flush(struct nd_region *nd_region);
 int nvdimm_has_flush(struct nd_region *nd_region);
 int nvdimm_has_cache(struct nd_region *nd_region);
+int nvdimm_in_overwrite(struct nvdimm *nvdimm);
+
+static inline int nvdimm_ctl(struct nvdimm *nvdimm, unsigned int cmd, void *buf,
+		unsigned int buf_len, int *cmd_rc)
+{
+	struct nvdimm_bus *nvdimm_bus = nvdimm_to_bus(nvdimm);
+	struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus);
+
+	return nd_desc->ndctl(nd_desc, nvdimm, cmd, buf, buf_len, cmd_rc);
+}

 #ifdef CONFIG_ARCH_HAS_PMEM_API
 #define ARCH_MEMREMAP_PMEM MEMREMAP_WB
+20 -9
security/keys/encrypted-keys/encrypted.c
···
 static const char blkcipher_alg[] = "cbc(aes)";
 static const char key_format_default[] = "default";
 static const char key_format_ecryptfs[] = "ecryptfs";
+static const char key_format_enc32[] = "enc32";
 static unsigned int ivsize;
 static int blksize;

···
 #define HASH_SIZE SHA256_DIGEST_SIZE
 #define MAX_DATA_SIZE 4096
 #define MIN_DATA_SIZE 20
+#define KEY_ENC32_PAYLOAD_LEN 32

 static struct crypto_shash *hash_tfm;

···
 };

 enum {
-	Opt_error = -1, Opt_default, Opt_ecryptfs
+	Opt_error = -1, Opt_default, Opt_ecryptfs, Opt_enc32
 };

 static const match_table_t key_format_tokens = {
 	{Opt_default, "default"},
 	{Opt_ecryptfs, "ecryptfs"},
+	{Opt_enc32, "enc32"},
 	{Opt_error, NULL}
 };

···
 	key_format = match_token(p, key_format_tokens, args);
 	switch (key_format) {
 	case Opt_ecryptfs:
+	case Opt_enc32:
 	case Opt_default:
 		*format = p;
 		*master_desc = strsep(&datablob, " \t");
···
 	format_len = (!format) ? strlen(key_format_default) : strlen(format);
 	decrypted_datalen = dlen;
 	payload_datalen = decrypted_datalen;
-	if (format && !strcmp(format, key_format_ecryptfs)) {
-		if (dlen != ECRYPTFS_MAX_KEY_BYTES) {
-			pr_err("encrypted_key: keylen for the ecryptfs format "
-			       "must be equal to %d bytes\n",
-			       ECRYPTFS_MAX_KEY_BYTES);
-			return ERR_PTR(-EINVAL);
+	if (format) {
+		if (!strcmp(format, key_format_ecryptfs)) {
+			if (dlen != ECRYPTFS_MAX_KEY_BYTES) {
+				pr_err("encrypted_key: keylen for the ecryptfs format must be equal to %d bytes\n",
+				       ECRYPTFS_MAX_KEY_BYTES);
+				return ERR_PTR(-EINVAL);
+			}
+			decrypted_datalen = ECRYPTFS_MAX_KEY_BYTES;
+			payload_datalen = sizeof(struct ecryptfs_auth_tok);
+		} else if (!strcmp(format, key_format_enc32)) {
+			if (decrypted_datalen != KEY_ENC32_PAYLOAD_LEN) {
+				pr_err("encrypted_key: enc32 key payload incorrect length: %d\n",
+				       decrypted_datalen);
+				return ERR_PTR(-EINVAL);
+			}
 		}
-		decrypted_datalen = ECRYPTFS_MAX_KEY_BYTES;
-		payload_datalen = sizeof(struct ecryptfs_auth_tok);
 	}

 	encrypted_datalen = roundup(decrypted_datalen, blksize);
-2
security/keys/internal.h
···

 extern bool lookup_user_key_possessed(const struct key *key,
 				      const struct key_match_data *match_data);
-extern key_ref_t lookup_user_key(key_serial_t id, unsigned long flags,
-				 key_perm_t perm);
 #define KEY_LOOKUP_CREATE	0x01
 #define KEY_LOOKUP_PARTIAL	0x02
 #define KEY_LOOKUP_FOR_UNLINK	0x04
+1
security/keys/process_keys.c
···
 		put_cred(ctx.cred);
 		goto try_again;
 	}
+EXPORT_SYMBOL(lookup_user_key);

 /*
  * Join the named keyring as the session keyring if possible else attempt to
+3
tools/testing/nvdimm/Kbuild
···
 obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem.o

 nfit-y := $(ACPI_SRC)/core.o
+nfit-y += $(ACPI_SRC)/intel.o
 nfit-$(CONFIG_X86_MCE) += $(ACPI_SRC)/mce.o
 nfit-y += acpi_nfit_test.o
 nfit-y += config_check.o
···
 libnvdimm-$(CONFIG_BTT) += $(NVDIMM_SRC)/btt_devs.o
 libnvdimm-$(CONFIG_NVDIMM_PFN) += $(NVDIMM_SRC)/pfn_devs.o
 libnvdimm-$(CONFIG_NVDIMM_DAX) += $(NVDIMM_SRC)/dax_devs.o
+libnvdimm-$(CONFIG_NVDIMM_KEYS) += $(NVDIMM_SRC)/security.o
+libnvdimm-y += dimm_devs.o
 libnvdimm-y += libnvdimm_test.o
 libnvdimm-y += config_check.o
+41
tools/testing/nvdimm/dimm_devs.c
···
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright Intel Corp. 2018 */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/nd.h>
+#include "pmem.h"
+#include "pfn.h"
+#include "nd.h"
+#include "nd-core.h"
+
+ssize_t security_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct nvdimm *nvdimm = to_nvdimm(dev);
+
+	/*
+	 * For the test version we need to poll the "hardware" in order
+	 * to get the updated status for unlock testing.
+	 */
+	nvdimm->sec.state = nvdimm_security_state(nvdimm, false);
+	nvdimm->sec.ext_state = nvdimm_security_state(nvdimm, true);
+
+	switch (nvdimm->sec.state) {
+	case NVDIMM_SECURITY_DISABLED:
+		return sprintf(buf, "disabled\n");
+	case NVDIMM_SECURITY_UNLOCKED:
+		return sprintf(buf, "unlocked\n");
+	case NVDIMM_SECURITY_LOCKED:
+		return sprintf(buf, "locked\n");
+	case NVDIMM_SECURITY_FROZEN:
+		return sprintf(buf, "frozen\n");
+	case NVDIMM_SECURITY_OVERWRITE:
+		return sprintf(buf, "overwrite\n");
+	default:
+		return -ENOTTY;
+	}
+
+	return -ENOTTY;
+}
+
+321
tools/testing/nvdimm/test/nfit.c
···

 static unsigned long dimm_fail_cmd_flags[ARRAY_SIZE(handle)];
 static int dimm_fail_cmd_code[ARRAY_SIZE(handle)];
+struct nfit_test_sec {
+	u8 state;
+	u8 ext_state;
+	u8 passphrase[32];
+	u8 master_passphrase[32];
+	u64 overwrite_end_time;
+} dimm_sec_info[NUM_DCR];

 static const struct nd_intel_smart smart_def = {
 	.flags = ND_INTEL_SMART_HEALTH_VALID
···
 	return rc;
 }

+static int nd_intel_test_cmd_security_status(struct nfit_test *t,
+		struct nd_intel_get_security_state *nd_cmd,
+		unsigned int buf_len, int dimm)
+{
+	struct device *dev = &t->pdev.dev;
+	struct nfit_test_sec *sec = &dimm_sec_info[dimm];
+
+	nd_cmd->status = 0;
+	nd_cmd->state = sec->state;
+	nd_cmd->extended_state = sec->ext_state;
+	dev_dbg(dev, "security state (%#x) returned\n", nd_cmd->state);
+
+	return 0;
+}
+
+static int nd_intel_test_cmd_unlock_unit(struct nfit_test *t,
+		struct nd_intel_unlock_unit *nd_cmd,
+		unsigned int buf_len, int dimm)
+{
+	struct device *dev = &t->pdev.dev;
+	struct nfit_test_sec *sec = &dimm_sec_info[dimm];
+
+	if (!(sec->state & ND_INTEL_SEC_STATE_LOCKED) ||
+			(sec->state & ND_INTEL_SEC_STATE_FROZEN)) {
+		nd_cmd->status = ND_INTEL_STATUS_INVALID_STATE;
+		dev_dbg(dev, "unlock unit: invalid state: %#x\n",
+				sec->state);
+	} else if (memcmp(nd_cmd->passphrase, sec->passphrase,
+				ND_INTEL_PASSPHRASE_SIZE) != 0) {
+		nd_cmd->status = ND_INTEL_STATUS_INVALID_PASS;
+		dev_dbg(dev, "unlock unit: invalid passphrase\n");
+	} else {
+		nd_cmd->status = 0;
+		sec->state = ND_INTEL_SEC_STATE_ENABLED;
+		dev_dbg(dev, "Unit unlocked\n");
+	}
+
+	dev_dbg(dev, "unlocking status returned: %#x\n", nd_cmd->status);
+	return 0;
+}
+
+static int nd_intel_test_cmd_set_pass(struct nfit_test *t,
+		struct nd_intel_set_passphrase *nd_cmd,
+		unsigned int buf_len, int dimm)
+{
+	struct device *dev = &t->pdev.dev;
+	struct nfit_test_sec *sec = &dimm_sec_info[dimm];
+
+	if (sec->state & ND_INTEL_SEC_STATE_FROZEN) {
+		nd_cmd->status = ND_INTEL_STATUS_INVALID_STATE;
+		dev_dbg(dev, "set passphrase: wrong security state\n");
+	} else if (memcmp(nd_cmd->old_pass, sec->passphrase,
+				ND_INTEL_PASSPHRASE_SIZE) != 0) {
+		nd_cmd->status = ND_INTEL_STATUS_INVALID_PASS;
+		dev_dbg(dev, "set passphrase: wrong passphrase\n");
+	} else {
+		memcpy(sec->passphrase, nd_cmd->new_pass,
+				ND_INTEL_PASSPHRASE_SIZE);
+		sec->state |= ND_INTEL_SEC_STATE_ENABLED;
+		nd_cmd->status = 0;
+		dev_dbg(dev, "passphrase updated\n");
+	}
+
+	return 0;
+}
+
+static int nd_intel_test_cmd_freeze_lock(struct nfit_test *t,
+		struct nd_intel_freeze_lock *nd_cmd,
+		unsigned int buf_len, int dimm)
+{
+	struct device *dev = &t->pdev.dev;
+	struct nfit_test_sec *sec = &dimm_sec_info[dimm];
+
+	if (!(sec->state & ND_INTEL_SEC_STATE_ENABLED)) {
+		nd_cmd->status = ND_INTEL_STATUS_INVALID_STATE;
+		dev_dbg(dev, "freeze lock: wrong security state\n");
+	} else {
+		sec->state |= ND_INTEL_SEC_STATE_FROZEN;
+		nd_cmd->status = 0;
+		dev_dbg(dev, "security frozen\n");
+	}
+
+	return 0;
+}
+
+static int nd_intel_test_cmd_disable_pass(struct nfit_test *t,
+		struct nd_intel_disable_passphrase *nd_cmd,
+		unsigned int buf_len, int dimm)
+{
+	struct device *dev = &t->pdev.dev;
+	struct nfit_test_sec *sec = &dimm_sec_info[dimm];
+
+	if (!(sec->state & ND_INTEL_SEC_STATE_ENABLED) ||
+			(sec->state & ND_INTEL_SEC_STATE_FROZEN)) {
+		nd_cmd->status = ND_INTEL_STATUS_INVALID_STATE;
+		dev_dbg(dev, "disable passphrase: wrong security state\n");
+	} else if (memcmp(nd_cmd->passphrase, sec->passphrase,
+				ND_INTEL_PASSPHRASE_SIZE) != 0) {
+		nd_cmd->status = ND_INTEL_STATUS_INVALID_PASS;
+		dev_dbg(dev, "disable passphrase: wrong passphrase\n");
+	} else {
+		memset(sec->passphrase, 0, ND_INTEL_PASSPHRASE_SIZE);
+		sec->state = 0;
+		dev_dbg(dev, "disable passphrase: done\n");
+	}
+
+	return 0;
+}
+
+static int nd_intel_test_cmd_secure_erase(struct nfit_test *t,
+		struct nd_intel_secure_erase *nd_cmd,
+		unsigned int buf_len, int dimm)
+{
+	struct device *dev = &t->pdev.dev;
+	struct nfit_test_sec *sec = &dimm_sec_info[dimm];
+
+	if (!(sec->state & ND_INTEL_SEC_STATE_ENABLED) ||
+			(sec->state & ND_INTEL_SEC_STATE_FROZEN)) {
+		nd_cmd->status = ND_INTEL_STATUS_INVALID_STATE;
+		dev_dbg(dev, "secure erase: wrong security state\n");
+	} else if (memcmp(nd_cmd->passphrase, sec->passphrase,
+				ND_INTEL_PASSPHRASE_SIZE) != 0) {
+		nd_cmd->status = ND_INTEL_STATUS_INVALID_PASS;
+		dev_dbg(dev, "secure erase: wrong passphrase\n");
+	} else {
+		memset(sec->passphrase, 0, ND_INTEL_PASSPHRASE_SIZE);
+		memset(sec->master_passphrase, 0, ND_INTEL_PASSPHRASE_SIZE);
+		sec->state = 0;
+		sec->ext_state = ND_INTEL_SEC_ESTATE_ENABLED;
+		dev_dbg(dev, "secure erase: done\n");
+	}
+
+	return 0;
+}
+
+static int nd_intel_test_cmd_overwrite(struct nfit_test *t,
+		struct nd_intel_overwrite *nd_cmd,
+		unsigned int buf_len, int dimm)
+{
+	struct device *dev = &t->pdev.dev;
+	struct nfit_test_sec *sec = &dimm_sec_info[dimm];
+
+	if ((sec->state & ND_INTEL_SEC_STATE_ENABLED) &&
+			memcmp(nd_cmd->passphrase, sec->passphrase,
+				ND_INTEL_PASSPHRASE_SIZE) != 0) {
+		nd_cmd->status = ND_INTEL_STATUS_INVALID_PASS;
+		dev_dbg(dev, "overwrite: wrong passphrase\n");
+		return 0;
+	}
+
+	memset(sec->passphrase, 0, ND_INTEL_PASSPHRASE_SIZE);
+	sec->state = ND_INTEL_SEC_STATE_OVERWRITE;
+	dev_dbg(dev, "overwrite progressing.\n");
+	sec->overwrite_end_time = get_jiffies_64() + 5 * HZ;
+
+	return 0;
+}
+
+static int nd_intel_test_cmd_query_overwrite(struct nfit_test *t,
+		struct nd_intel_query_overwrite *nd_cmd,
+		unsigned int buf_len, int dimm)
+{
+	struct device *dev = &t->pdev.dev;
+	struct nfit_test_sec *sec = &dimm_sec_info[dimm];
+
+	if (!(sec->state & ND_INTEL_SEC_STATE_OVERWRITE)) {
+		nd_cmd->status = ND_INTEL_STATUS_OQUERY_SEQUENCE_ERR;
+		return 0;
+	}
+
+	if (time_is_before_jiffies64(sec->overwrite_end_time)) {
+		sec->overwrite_end_time = 0;
+		sec->state = 0;
+		sec->ext_state = ND_INTEL_SEC_ESTATE_ENABLED;
+		dev_dbg(dev, "overwrite is complete\n");
+	} else
+		nd_cmd->status = ND_INTEL_STATUS_OQUERY_INPROGRESS;
+	return 0;
+}
+
+static int nd_intel_test_cmd_master_set_pass(struct nfit_test *t,
+		struct nd_intel_set_master_passphrase *nd_cmd,
+		unsigned int buf_len, int dimm)
+{
+	struct device *dev = &t->pdev.dev;
+	struct nfit_test_sec *sec = &dimm_sec_info[dimm];
+
+	if (!(sec->ext_state & ND_INTEL_SEC_ESTATE_ENABLED)) {
+		nd_cmd->status = ND_INTEL_STATUS_NOT_SUPPORTED;
+		dev_dbg(dev, "master set passphrase: in wrong state\n");
+	} else if (sec->ext_state & ND_INTEL_SEC_ESTATE_PLIMIT) {
+		nd_cmd->status = ND_INTEL_STATUS_INVALID_STATE;
+		dev_dbg(dev, "master set passphrase: in wrong security state\n");
+	} else if (memcmp(nd_cmd->old_pass, sec->master_passphrase,
+				ND_INTEL_PASSPHRASE_SIZE) != 0) {
+		nd_cmd->status = ND_INTEL_STATUS_INVALID_PASS;
+		dev_dbg(dev, "master set passphrase: wrong passphrase\n");
+	} else {
+		memcpy(sec->master_passphrase, nd_cmd->new_pass,
+				ND_INTEL_PASSPHRASE_SIZE);
+		sec->ext_state = ND_INTEL_SEC_ESTATE_ENABLED;
+		dev_dbg(dev, "master passphrase: updated\n");
+	}
+
+	return 0;
+}
+
+static int nd_intel_test_cmd_master_secure_erase(struct nfit_test *t,
+		struct nd_intel_master_secure_erase *nd_cmd,
+		unsigned int buf_len, int dimm)
+{
+	struct device *dev = &t->pdev.dev;
+	struct nfit_test_sec *sec = &dimm_sec_info[dimm];
+
+	if (!(sec->ext_state & ND_INTEL_SEC_ESTATE_ENABLED)) {
+		nd_cmd->status = ND_INTEL_STATUS_NOT_SUPPORTED;
+		dev_dbg(dev, "master secure erase: in wrong state\n");
+	} else if (sec->ext_state & ND_INTEL_SEC_ESTATE_PLIMIT) {
+		nd_cmd->status = ND_INTEL_STATUS_INVALID_STATE;
+		dev_dbg(dev, "master secure erase: in wrong security state\n");
+	} else if (memcmp(nd_cmd->passphrase, sec->master_passphrase,
+				ND_INTEL_PASSPHRASE_SIZE) != 0) {
+		nd_cmd->status = ND_INTEL_STATUS_INVALID_PASS;
+		dev_dbg(dev, "master secure erase: wrong passphrase\n");
+	} else {
+		/* we do not erase master state passphrase ever */
+		sec->ext_state = ND_INTEL_SEC_ESTATE_ENABLED;
+		memset(sec->passphrase, 0, ND_INTEL_PASSPHRASE_SIZE);
+		sec->state = 0;
+		dev_dbg(dev, "master secure erase: done\n");
+	}
+
+	return 0;
+}
+
+
 static int get_dimm(struct nfit_mem *nfit_mem, unsigned int func)
 {
 	int i;
···
 		return i;

 	switch (func) {
+	case NVDIMM_INTEL_GET_SECURITY_STATE:
+		rc = nd_intel_test_cmd_security_status(t,
+				buf, buf_len, i);
+		break;
+	case NVDIMM_INTEL_UNLOCK_UNIT:
+		rc = nd_intel_test_cmd_unlock_unit(t,
+				buf, buf_len, i);
+		break;
+	case NVDIMM_INTEL_SET_PASSPHRASE:
+		rc = nd_intel_test_cmd_set_pass(t,
+				buf, buf_len, i);
+		break;
+	case NVDIMM_INTEL_DISABLE_PASSPHRASE:
+		rc = nd_intel_test_cmd_disable_pass(t,
+				buf, buf_len, i);
+		break;
+	case NVDIMM_INTEL_FREEZE_LOCK:
+		rc = nd_intel_test_cmd_freeze_lock(t,
+				buf, buf_len, i);
+		break;
+	case NVDIMM_INTEL_SECURE_ERASE:
+		rc = nd_intel_test_cmd_secure_erase(t,
+				buf, buf_len, i);
+		break;
+	case NVDIMM_INTEL_OVERWRITE:
+		rc = nd_intel_test_cmd_overwrite(t,
+				buf, buf_len, i - t->dcr_idx);
+		break;
+	case NVDIMM_INTEL_QUERY_OVERWRITE:
+		rc = nd_intel_test_cmd_query_overwrite(t,
+				buf, buf_len, i - t->dcr_idx);
+		break;
+	case NVDIMM_INTEL_SET_MASTER_PASSPHRASE:
+		rc = nd_intel_test_cmd_master_set_pass(t,
+				buf, buf_len, i);
+		break;
+	case NVDIMM_INTEL_MASTER_SECURE_ERASE:
+		rc = nd_intel_test_cmd_master_secure_erase(t,
+				buf, buf_len, i);
+		break;
 	case ND_INTEL_ENABLE_LSS_STATUS:
 		rc = nd_intel_test_cmd_set_lss_status(t,
 				buf, buf_len);
···
 }
 static DEVICE_ATTR_RW(fail_cmd_code);

+static ssize_t lock_dimm_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t size)
+{
+	int dimm = dimm_name_to_id(dev);
+	struct nfit_test_sec *sec = &dimm_sec_info[dimm];
+
+	sec->state = ND_INTEL_SEC_STATE_ENABLED | ND_INTEL_SEC_STATE_LOCKED;
+	return size;
+}
+static DEVICE_ATTR_WO(lock_dimm);
+
 static struct attribute *nfit_test_dimm_attributes[] = {
 	&dev_attr_fail_cmd.attr,
 	&dev_attr_fail_cmd_code.attr,
 	&dev_attr_handle.attr,
+	&dev_attr_lock_dimm.attr,
 	NULL,
 };
···
 		return -ENOMEM;
 	}
 	return 0;
+}
+
+static void security_init(struct nfit_test *t)
+{
+	int i;
+
+	for (i = 0; i < t->num_dcr; i++) {
+		struct nfit_test_sec *sec = &dimm_sec_info[i];
+
+		sec->ext_state = ND_INTEL_SEC_ESTATE_ENABLED;
+	}
 }

 static void smart_init(struct nfit_test *t)
···
 	if (nfit_test_dimm_init(t))
 		return -ENOMEM;
 	smart_init(t);
+	security_init(t);
 	return ars_state_init(&t->pdev.dev, &t->ars_state);
 }
···
 	set_bit(ND_INTEL_FW_FINISH_UPDATE, &acpi_desc->dimm_cmd_force_en);
 	set_bit(ND_INTEL_FW_FINISH_QUERY, &acpi_desc->dimm_cmd_force_en);
 	set_bit(ND_INTEL_ENABLE_LSS_STATUS, &acpi_desc->dimm_cmd_force_en);
+	set_bit(NVDIMM_INTEL_GET_SECURITY_STATE,
+			&acpi_desc->dimm_cmd_force_en);
+	set_bit(NVDIMM_INTEL_SET_PASSPHRASE, &acpi_desc->dimm_cmd_force_en);
+	set_bit(NVDIMM_INTEL_DISABLE_PASSPHRASE,
+			&acpi_desc->dimm_cmd_force_en);
+	set_bit(NVDIMM_INTEL_UNLOCK_UNIT, &acpi_desc->dimm_cmd_force_en);
+	set_bit(NVDIMM_INTEL_FREEZE_LOCK, &acpi_desc->dimm_cmd_force_en);
+	set_bit(NVDIMM_INTEL_SECURE_ERASE, &acpi_desc->dimm_cmd_force_en);
+	set_bit(NVDIMM_INTEL_OVERWRITE, &acpi_desc->dimm_cmd_force_en);
+	set_bit(NVDIMM_INTEL_QUERY_OVERWRITE, &acpi_desc->dimm_cmd_force_en);
+	set_bit(NVDIMM_INTEL_SET_MASTER_PASSPHRASE,
+			&acpi_desc->dimm_cmd_force_en);
+	set_bit(NVDIMM_INTEL_MASTER_SECURE_ERASE,
+			&acpi_desc->dimm_cmd_force_en);
 }

 static void nfit_test1_setup(struct nfit_test *t)