
Merge tag 'libnvdimm-for-4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm

Pull libnvdimm and dax updates from Dan Williams:
"Save for a few late fixes, all of these commits have shipped in -next
releases since before the merge window opened, and 0day has given a
build success notification.

The ext4 touches came from Jan, and the xfs touches have Darrick's
reviewed-by. An xfstest for the MAP_SYNC feature has been through
a few rounds of review and is on track to be merged.

- Introduce MAP_SYNC and MAP_SHARED_VALIDATE, a mechanism to enable
'userspace flush' of persistent memory updates via filesystem-dax
mappings. It arranges for any filesystem metadata updates that may
be required to satisfy a write fault to also be flushed ("on disk")
before the kernel returns to userspace from the fault handler.
Effectively every write-fault that dirties metadata completes an
fsync() before returning from the fault handler. The new
MAP_SHARED_VALIDATE mapping type guarantees that the MAP_SYNC flag
is validated as supported by the filesystem's ->mmap() file
operation. [a usage sketch follows this message]

- Add support for the standard ACPI 6.2 label access methods that
replace the NVDIMM_FAMILY_INTEL (vendor specific) label methods.
This enables interoperability with environments that only implement
the standardized methods.

- Add support for the ACPI 6.2 NVDIMM media error injection methods.

- Add support for the NVDIMM_FAMILY_INTEL v1.6 DIMM commands for
latch last shutdown status, firmware update, SMART error injection,
and SMART alarm threshold control.

- Cleanup physical address information disclosures to be root-only.

- Fix revalidation of the DIMM "locked label area" status to support
dynamic unlock of the label area.

- Expand unit test infrastructure to mock the ACPI 6.2 Translate SPA
(system-physical-address) command and error injection commands.

Acknowledgements that came after the commits were pushed to -next:

- 957ac8c421ad ("dax: fix PMD faults on zero-length files"):
Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>

- a39e596baa07 ("xfs: support for synchronous DAX faults") and
7b565c9f965b ("xfs: Implement xfs_filemap_pfn_mkwrite() using __xfs_filemap_fault()")
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>"
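
[Editor's note: to make the first bullet concrete, here is a minimal
userspace sketch of the new mapping type. It is not part of this pull;
the MAP_SHARED_VALIDATE value matches the uapi headers touched below,
the MAP_SYNC fallback value is the asm-generic definition (verify
against your own <sys/mman.h>), and the file path is illustrative.]

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Fallbacks for pre-4.15 libc headers; check your own uapi headers */
#ifndef MAP_SHARED_VALIDATE
#define MAP_SHARED_VALIDATE 0x03
#endif
#ifndef MAP_SYNC
#define MAP_SYNC 0x80000
#endif

int main(int argc, char **argv)
{
	if (argc < 2)
		return 1;

	int fd = open(argv[1], O_RDWR); /* a file on a fsdax mount */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/*
	 * MAP_SHARED_VALIDATE makes mmap() fail (e.g. EOPNOTSUPP)
	 * instead of silently ignoring flags the filesystem's ->mmap()
	 * does not support, so MAP_SYNC is guaranteed honored on success.
	 */
	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap"); /* mapping cannot provide MAP_SYNC semantics */
		close(fd);
		return 1;
	}

	/*
	 * Any filesystem metadata needed to reach this block is durable
	 * before the write fault returns; no fsync() is needed for the
	 * mapping itself (CPU caches still need flushing for the data).
	 */
	strcpy(p, "hello, MAP_SYNC");

	munmap(p, 4096);
	close(fd);
	return 0;
}

[On a kernel or filesystem without MAP_SYNC support the mmap() call
fails outright rather than degrading to a plain shared mapping, which
is the whole point of the MAP_SHARED_VALIDATE mapping type.]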

* tag 'libnvdimm-for-4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm: (49 commits)
acpi, nfit: add 'Enable Latch System Shutdown Status' command support
dax: fix general protection fault in dax_alloc_inode
dax: fix PMD faults on zero-length files
dax: stop requiring a live device for dax_flush()
brd: remove dax support
dax: quiet bdev_dax_supported()
fs, dax: unify IOMAP_F_DIRTY read vs write handling policy in the dax core
tools/testing/nvdimm: unit test clear-error commands
acpi, nfit: validate commands against the device type
tools/testing/nvdimm: stricter bounds checking for error injection commands
xfs: support for synchronous DAX faults
xfs: Implement xfs_filemap_pfn_mkwrite() using __xfs_filemap_fault()
ext4: Support for synchronous DAX faults
ext4: Simplify error handling in ext4_dax_huge_fault()
dax: Implement dax_finish_sync_fault()
dax, iomap: Add support for synchronous faults
mm: Define MAP_SYNC and VM_SYNC flags
dax: Allow tuning whether dax_insert_mapping_entry() dirties entry
dax: Allow dax_iomap_fault() to return pfn
dax: Fix comment describing dax_iomap_fault()
...

+1407 -562
+7 -1
MAINTAINERS
···
 S:	Maintained
 F:	drivers/i2c/busses/i2c-diolan-u2c.c

-DIRECT ACCESS (DAX)
+FILESYSTEM DIRECT ACCESS (DAX)
 M:	Matthew Wilcox <mawilcox@microsoft.com>
 M:	Ross Zwisler <ross.zwisler@linux.intel.com>
 L:	linux-fsdevel@vger.kernel.org
···
 F:	fs/dax.c
 F:	include/linux/dax.h
 F:	include/trace/events/fs_dax.h
+
+DEVICE DIRECT ACCESS (DAX)
+M:	Dan Williams <dan.j.williams@intel.com>
+L:	linux-nvdimm@lists.01.org
+S:	Supported
+F:	drivers/dax/

 DIRECTORY NOTIFICATION (DNOTIFY)
 M:	Jan Kara <jack@suse.cz>
+1
arch/alpha/include/uapi/asm/mman.h
···

 #define MAP_SHARED	0x01		/* Share changes */
 #define MAP_PRIVATE	0x02		/* Changes are private */
+#define MAP_SHARED_VALIDATE 0x03	/* share + validate extension flags */
 #define MAP_TYPE	0x0f		/* Mask for type of mapping (OSF/1 is _wrong_) */
 #define MAP_FIXED	0x100		/* Interpret addr exactly */
 #define MAP_ANONYMOUS	0x10		/* don't use a file */
+1
arch/mips/include/uapi/asm/mman.h
···
 */
 #define MAP_SHARED	0x001		/* Share changes */
 #define MAP_PRIVATE	0x002		/* Changes are private */
+#define MAP_SHARED_VALIDATE 0x003	/* share + validate extension flags */
 #define MAP_TYPE	0x00f		/* Mask for type of mapping */
 #define MAP_FIXED	0x010		/* Interpret addr exactly */
+1
arch/parisc/include/uapi/asm/mman.h
···

 #define MAP_SHARED	0x01		/* Share changes */
 #define MAP_PRIVATE	0x02		/* Changes are private */
+#define MAP_SHARED_VALIDATE 0x03	/* share + validate extension flags */
 #define MAP_TYPE	0x03		/* Mask for type of mapping */
 #define MAP_FIXED	0x04		/* Interpret addr exactly */
 #define MAP_ANONYMOUS	0x10		/* don't use a file */
+1
arch/xtensa/include/uapi/asm/mman.h
···
 */
 #define MAP_SHARED	0x001		/* Share changes */
 #define MAP_PRIVATE	0x002		/* Changes are private */
+#define MAP_SHARED_VALIDATE 0x003	/* share + validate extension flags */
 #define MAP_TYPE	0x00f		/* Mask for type of mapping */
 #define MAP_FIXED	0x010		/* Interpret addr exactly */
+262 -12
drivers/acpi/nfit/core.c
···
 	return 0;
 }

-static int xlat_nvdimm_status(void *buf, unsigned int cmd, u32 status)
+#define ACPI_LABELS_LOCKED 3
+
+static int xlat_nvdimm_status(struct nvdimm *nvdimm, void *buf, unsigned int cmd,
+		u32 status)
 {
+	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
+
 	switch (cmd) {
 	case ND_CMD_GET_CONFIG_SIZE:
+		/*
+		 * In the _LSI, _LSR, _LSW case the locked status is
+		 * communicated via the read/write commands
+		 */
+		if (nfit_mem->has_lsi)
+			break;
+
 		if (status >> 16 & ND_CONFIG_LOCKED)
+			return -EACCES;
+		break;
+	case ND_CMD_GET_CONFIG_DATA:
+		if (nfit_mem->has_lsr && status == ACPI_LABELS_LOCKED)
+			return -EACCES;
+		break;
+	case ND_CMD_SET_CONFIG_DATA:
+		if (nfit_mem->has_lsw && status == ACPI_LABELS_LOCKED)
 			return -EACCES;
 		break;
 	default:
···
 {
 	if (!nvdimm)
 		return xlat_bus_status(buf, cmd, status);
-	return xlat_nvdimm_status(buf, cmd, status);
+	return xlat_nvdimm_status(nvdimm, buf, cmd, status);
+}
+
+/* convert _LS{I,R} packages to the buffer object acpi_nfit_ctl expects */
+static union acpi_object *pkg_to_buf(union acpi_object *pkg)
+{
+	int i;
+	void *dst;
+	size_t size = 0;
+	union acpi_object *buf = NULL;
+
+	if (pkg->type != ACPI_TYPE_PACKAGE) {
+		WARN_ONCE(1, "BIOS bug, unexpected element type: %d\n",
+				pkg->type);
+		goto err;
+	}
+
+	for (i = 0; i < pkg->package.count; i++) {
+		union acpi_object *obj = &pkg->package.elements[i];
+
+		if (obj->type == ACPI_TYPE_INTEGER)
+			size += 4;
+		else if (obj->type == ACPI_TYPE_BUFFER)
+			size += obj->buffer.length;
+		else {
+			WARN_ONCE(1, "BIOS bug, unexpected element type: %d\n",
+					obj->type);
+			goto err;
+		}
+	}
+
+	buf = ACPI_ALLOCATE(sizeof(*buf) + size);
+	if (!buf)
+		goto err;
+
+	dst = buf + 1;
+	buf->type = ACPI_TYPE_BUFFER;
+	buf->buffer.length = size;
+	buf->buffer.pointer = dst;
+	for (i = 0; i < pkg->package.count; i++) {
+		union acpi_object *obj = &pkg->package.elements[i];
+
+		if (obj->type == ACPI_TYPE_INTEGER) {
+			memcpy(dst, &obj->integer.value, 4);
+			dst += 4;
+		} else if (obj->type == ACPI_TYPE_BUFFER) {
+			memcpy(dst, obj->buffer.pointer, obj->buffer.length);
+			dst += obj->buffer.length;
+		}
+	}
+ err:
+	ACPI_FREE(pkg);
+	return buf;
+}
+
+static union acpi_object *int_to_buf(union acpi_object *integer)
+{
+	union acpi_object *buf = ACPI_ALLOCATE(sizeof(*buf) + 4);
+	void *dst = NULL;
+
+	if (!buf)
+		goto err;
+
+	if (integer->type != ACPI_TYPE_INTEGER) {
+		WARN_ONCE(1, "BIOS bug, unexpected element type: %d\n",
+				integer->type);
+		goto err;
+	}
+
+	dst = buf + 1;
+	buf->type = ACPI_TYPE_BUFFER;
+	buf->buffer.length = 4;
+	buf->buffer.pointer = dst;
+	memcpy(dst, &integer->integer.value, 4);
+ err:
+	ACPI_FREE(integer);
+	return buf;
+}
+
+static union acpi_object *acpi_label_write(acpi_handle handle, u32 offset,
+		u32 len, void *data)
+{
+	acpi_status rc;
+	struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER, NULL };
+	struct acpi_object_list input = {
+		.count = 3,
+		.pointer = (union acpi_object []) {
+			[0] = {
+				.integer.type = ACPI_TYPE_INTEGER,
+				.integer.value = offset,
+			},
+			[1] = {
+				.integer.type = ACPI_TYPE_INTEGER,
+				.integer.value = len,
+			},
+			[2] = {
+				.buffer.type = ACPI_TYPE_BUFFER,
+				.buffer.pointer = data,
+				.buffer.length = len,
+			},
+		},
+	};
+
+	rc = acpi_evaluate_object(handle, "_LSW", &input, &buf);
+	if (ACPI_FAILURE(rc))
+		return NULL;
+	return int_to_buf(buf.pointer);
+}
+
+static union acpi_object *acpi_label_read(acpi_handle handle, u32 offset,
+		u32 len)
+{
+	acpi_status rc;
+	struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER, NULL };
+	struct acpi_object_list input = {
+		.count = 2,
+		.pointer = (union acpi_object []) {
+			[0] = {
+				.integer.type = ACPI_TYPE_INTEGER,
+				.integer.value = offset,
+			},
+			[1] = {
+				.integer.type = ACPI_TYPE_INTEGER,
+				.integer.value = len,
+			},
+		},
+	};
+
+	rc = acpi_evaluate_object(handle, "_LSR", &input, &buf);
+	if (ACPI_FAILURE(rc))
+		return NULL;
+	return pkg_to_buf(buf.pointer);
+}
+
+static union acpi_object *acpi_label_info(acpi_handle handle)
+{
+	acpi_status rc;
+	struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER, NULL };
+
+	rc = acpi_evaluate_object(handle, "_LSI", NULL, &buf);
+	if (ACPI_FAILURE(rc))
+		return NULL;
+	return pkg_to_buf(buf.pointer);
+}
+
+static u8 nfit_dsm_revid(unsigned family, unsigned func)
+{
+	static const u8 revid_table[NVDIMM_FAMILY_MAX+1][32] = {
+		[NVDIMM_FAMILY_INTEL] = {
+			[NVDIMM_INTEL_GET_MODES] = 2,
+			[NVDIMM_INTEL_GET_FWINFO] = 2,
+			[NVDIMM_INTEL_START_FWUPDATE] = 2,
+			[NVDIMM_INTEL_SEND_FWUPDATE] = 2,
+			[NVDIMM_INTEL_FINISH_FWUPDATE] = 2,
+			[NVDIMM_INTEL_QUERY_FWUPDATE] = 2,
+			[NVDIMM_INTEL_SET_THRESHOLD] = 2,
+			[NVDIMM_INTEL_INJECT_ERROR] = 2,
+		},
+	};
+	u8 id;
+
+	if (family > NVDIMM_FAMILY_MAX)
+		return 0;
+	if (func > 31)
+		return 0;
+	id = revid_table[family][func];
+	if (id == 0)
+		return 1; /* default */
+	return id;
 }

 int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
 		unsigned int cmd, void *buf, unsigned int buf_len, int *cmd_rc)
 {
 	struct acpi_nfit_desc *acpi_desc = to_acpi_nfit_desc(nd_desc);
+	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
 	union acpi_object in_obj, in_buf, *out_obj;
 	const struct nd_cmd_desc *desc = NULL;
 	struct device *dev = acpi_desc->dev;
···
 	}

 	if (nvdimm) {
-		struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
 		struct acpi_device *adev = nfit_mem->adev;

 		if (!adev)
···
 			in_buf.buffer.pointer,
 			min_t(u32, 256, in_buf.buffer.length), true);

-	out_obj = acpi_evaluate_dsm(handle, guid, 1, func, &in_obj);
+	/* call the BIOS, prefer the named methods over _DSM if available */
+	if (nvdimm && cmd == ND_CMD_GET_CONFIG_SIZE && nfit_mem->has_lsi)
+		out_obj = acpi_label_info(handle);
+	else if (nvdimm && cmd == ND_CMD_GET_CONFIG_DATA && nfit_mem->has_lsr) {
+		struct nd_cmd_get_config_data_hdr *p = buf;
+
+		out_obj = acpi_label_read(handle, p->in_offset, p->in_length);
+	} else if (nvdimm && cmd == ND_CMD_SET_CONFIG_DATA
+			&& nfit_mem->has_lsw) {
+		struct nd_cmd_set_config_hdr *p = buf;
+
+		out_obj = acpi_label_write(handle, p->in_offset, p->in_length,
+				p->in_buf);
+	} else {
+		u8 revid;
+
+		if (nvdimm)
+			revid = nfit_dsm_revid(nfit_mem->family, func);
+		else
+			revid = 1;
+		out_obj = acpi_evaluate_dsm(handle, guid, revid, func, &in_obj);
+	}
+
 	if (!out_obj) {
 		dev_dbg(dev, "%s:%s _DSM failed cmd: %s\n", __func__, dimm_name,
 				cmd_name);
···
 	 * Set fw_status for all the commands with a known format to be
 	 * later interpreted by xlat_status().
 	 */
-	if (i >= 1 && ((cmd >= ND_CMD_ARS_CAP && cmd <= ND_CMD_CLEAR_ERROR)
-			|| (cmd >= ND_CMD_SMART && cmd <= ND_CMD_VENDOR)))
+	if (i >= 1 && ((!nvdimm && cmd >= ND_CMD_ARS_CAP
+					&& cmd <= ND_CMD_CLEAR_ERROR)
+				|| (nvdimm && cmd >= ND_CMD_SMART
+					&& cmd <= ND_CMD_VENDOR)))
 		fw_status = *(u32 *) out_obj->buffer.pointer;

 	if (offset + in_buf.buffer.length < buf_len) {
···
 {
 	struct acpi_device *adev, *adev_dimm;
 	struct device *dev = acpi_desc->dev;
+	union acpi_object *obj;
 	unsigned long dsm_mask;
 	const guid_t *guid;
 	int i;
···
 	 * different command sets. Note, that checking for function0 (bit0)
 	 * tells us if any commands are reachable through this GUID.
 	 */
-	for (i = NVDIMM_FAMILY_INTEL; i <= NVDIMM_FAMILY_MSFT; i++)
+	for (i = 0; i <= NVDIMM_FAMILY_MAX; i++)
 		if (acpi_check_dsm(adev_dimm->handle, to_nfit_uuid(i), 1, 1))
 			if (family < 0 || i == default_dsm_family)
 				family = i;
···
 	if (override_dsm_mask && !disable_vendor_specific)
 		dsm_mask = override_dsm_mask;
 	else if (nfit_mem->family == NVDIMM_FAMILY_INTEL) {
-		dsm_mask = 0x3fe;
+		dsm_mask = NVDIMM_INTEL_CMDMASK;
 		if (disable_vendor_specific)
 			dsm_mask &= ~(1 << ND_CMD_VENDOR);
 	} else if (nfit_mem->family == NVDIMM_FAMILY_HPE1) {
···

 	guid = to_nfit_uuid(nfit_mem->family);
 	for_each_set_bit(i, &dsm_mask, BITS_PER_LONG)
-		if (acpi_check_dsm(adev_dimm->handle, guid, 1, 1ULL << i))
+		if (acpi_check_dsm(adev_dimm->handle, guid,
+					nfit_dsm_revid(nfit_mem->family, i),
+					1ULL << i))
 			set_bit(i, &nfit_mem->dsm_mask);
+
+	obj = acpi_label_info(adev_dimm->handle);
+	if (obj) {
+		ACPI_FREE(obj);
+		nfit_mem->has_lsi = 1;
+		dev_dbg(dev, "%s: has _LSI\n", dev_name(&adev_dimm->dev));
+	}
+
+	obj = acpi_label_read(adev_dimm->handle, 0, 0);
+	if (obj) {
+		ACPI_FREE(obj);
+		nfit_mem->has_lsr = 1;
+		dev_dbg(dev, "%s: has _LSR\n", dev_name(&adev_dimm->dev));
+	}
+
+	obj = acpi_label_write(adev_dimm->handle, 0, 0, NULL);
+	if (obj) {
+		ACPI_FREE(obj);
+		nfit_mem->has_lsw = 1;
+		dev_dbg(dev, "%s: has _LSW\n", dev_name(&adev_dimm->dev));
+	}

 	return 0;
 }
···
 		 * userspace interface.
 		 */
 		cmd_mask = 1UL << ND_CMD_CALL;
-		if (nfit_mem->family == NVDIMM_FAMILY_INTEL)
-			cmd_mask |= nfit_mem->dsm_mask;
+		if (nfit_mem->family == NVDIMM_FAMILY_INTEL) {
+			/*
+			 * These commands have a 1:1 correspondence
+			 * between DSM payload and libnvdimm ioctl
+			 * payload format.
+			 */
+			cmd_mask |= nfit_mem->dsm_mask & NVDIMM_STANDARD_CMDMASK;
+		}
+
+		if (nfit_mem->has_lsi)
+			set_bit(ND_CMD_GET_CONFIG_SIZE, &cmd_mask);
+		if (nfit_mem->has_lsr)
+			set_bit(ND_CMD_GET_CONFIG_DATA, &cmd_mask);
+		if (nfit_mem->has_lsw)
+			set_bit(ND_CMD_SET_CONFIG_DATA, &cmd_mask);

 		flush = nfit_mem->nfit_flush ? nfit_mem->nfit_flush->flush
 			: NULL;
···
 	int i;

 	nd_desc->cmd_mask = acpi_desc->bus_cmd_force_en;
+	nd_desc->bus_dsm_mask = acpi_desc->bus_nfit_cmd_force_en;
 	adev = to_acpi_dev(acpi_desc);
 	if (!adev)
 		return;
···
 		if (ars_status->out_length
 				< 44 + sizeof(struct nd_ars_record) * (i + 1))
 			break;
-		rc = nvdimm_bus_add_poison(nvdimm_bus,
+		rc = nvdimm_bus_add_badrange(nvdimm_bus,
 				ars_status->records[i].err_address,
 				ars_status->records[i].length);
 		if (rc)
+1 -1
drivers/acpi/nfit/mce.c
···
 			continue;

 		/* If this fails due to an -ENOMEM, there is little we can do */
-		nvdimm_bus_add_poison(acpi_desc->nvdimm_bus,
+		nvdimm_bus_add_badrange(acpi_desc->nvdimm_bus,
 				ALIGN(mce->addr, L1_CACHE_BYTES),
 				L1_CACHE_BYTES);
 		nvdimm_region_notify(nfit_spa->nd_region,
+36 -1
drivers/acpi/nfit/nfit.h
···
 /* ACPI 6.1 */
 #define UUID_NFIT_BUS "2f10e7a4-9e91-11e4-89d3-123b93f75cba"

-/* http://pmem.io/documents/NVDIMM_DSM_Interface_Example.pdf */
+/* http://pmem.io/documents/NVDIMM_DSM_Interface-V1.6.pdf */
 #define UUID_NFIT_DIMM "4309ac30-0d11-11e4-9191-0800200c9a66"

 /* https://github.com/HewlettPackard/hpe-nvm/blob/master/Documentation/ */
···
 #define ACPI_NFIT_MEM_FAILED_MASK (ACPI_NFIT_MEM_SAVE_FAILED \
 		| ACPI_NFIT_MEM_RESTORE_FAILED | ACPI_NFIT_MEM_FLUSH_FAILED \
 		| ACPI_NFIT_MEM_NOT_ARMED | ACPI_NFIT_MEM_MAP_FAILED)
+
+#define NVDIMM_FAMILY_MAX NVDIMM_FAMILY_MSFT
+
+#define NVDIMM_STANDARD_CMDMASK \
+(1 << ND_CMD_SMART | 1 << ND_CMD_SMART_THRESHOLD | 1 << ND_CMD_DIMM_FLAGS \
+ | 1 << ND_CMD_GET_CONFIG_SIZE | 1 << ND_CMD_GET_CONFIG_DATA \
+ | 1 << ND_CMD_SET_CONFIG_DATA | 1 << ND_CMD_VENDOR_EFFECT_LOG_SIZE \
+ | 1 << ND_CMD_VENDOR_EFFECT_LOG | 1 << ND_CMD_VENDOR)
+
+/*
+ * Command numbers that the kernel needs to know about to handle
+ * non-default DSM revision ids
+ */
+enum nvdimm_family_cmds {
+	NVDIMM_INTEL_LATCH_SHUTDOWN = 10,
+	NVDIMM_INTEL_GET_MODES = 11,
+	NVDIMM_INTEL_GET_FWINFO = 12,
+	NVDIMM_INTEL_START_FWUPDATE = 13,
+	NVDIMM_INTEL_SEND_FWUPDATE = 14,
+	NVDIMM_INTEL_FINISH_FWUPDATE = 15,
+	NVDIMM_INTEL_QUERY_FWUPDATE = 16,
+	NVDIMM_INTEL_SET_THRESHOLD = 17,
+	NVDIMM_INTEL_INJECT_ERROR = 18,
+};
+
+#define NVDIMM_INTEL_CMDMASK \
+(NVDIMM_STANDARD_CMDMASK | 1 << NVDIMM_INTEL_GET_MODES \
+ | 1 << NVDIMM_INTEL_GET_FWINFO | 1 << NVDIMM_INTEL_START_FWUPDATE \
+ | 1 << NVDIMM_INTEL_SEND_FWUPDATE | 1 << NVDIMM_INTEL_FINISH_FWUPDATE \
+ | 1 << NVDIMM_INTEL_QUERY_FWUPDATE | 1 << NVDIMM_INTEL_SET_THRESHOLD \
+ | 1 << NVDIMM_INTEL_INJECT_ERROR | 1 << NVDIMM_INTEL_LATCH_SHUTDOWN)

 enum nfit_uuids {
 	/* for simplicity alias the uuid index with the family id */
···
 	struct resource *flush_wpq;
 	unsigned long dsm_mask;
 	int family;
+	u32 has_lsi:1;
+	u32 has_lsr:1;
+	u32 has_lsw:1;
 };

 struct acpi_nfit_desc {
···
 	unsigned int init_complete:1;
 	unsigned long dimm_cmd_force_en;
 	unsigned long bus_cmd_force_en;
+	unsigned long bus_nfit_cmd_force_en;
 	int (*blk_do_io)(struct nd_blk_region *ndbr, resource_size_t dpa,
 			void *iobuf, u64 len, int rw);
 };
-12
drivers/block/Kconfig
···

 config BLK_DEV_RAM
 	tristate "RAM block device support"
-	select DAX if BLK_DEV_RAM_DAX
 	---help---
 	  Saying Y here will allow you to use a portion of your RAM memory as
 	  a block device, so that you can make file systems on it, read and
···
 	help
 	  The default value is 4096 kilobytes. Only change this if you know
 	  what you are doing.
-
-config BLK_DEV_RAM_DAX
-	bool "Support Direct Access (DAX) to RAM block devices"
-	depends on BLK_DEV_RAM && FS_DAX
-	default n
-	help
-	  Support filesystems using DAX to access RAM block devices. This
-	  avoids double-buffering data in the page cache before copying it
-	  to the block device. Answering Y will slightly enlarge the kernel,
-	  and will prevent RAM block device backing store memory from being
-	  allocated from highmem (only a problem for highmem systems).

 config CDROM_PKTCDVD
 	tristate "Packet writing on CD/DVD media (DEPRECATED)"
-65
drivers/block/brd.c
···
 #include <linux/fs.h>
 #include <linux/slab.h>
 #include <linux/backing-dev.h>
-#ifdef CONFIG_BLK_DEV_RAM_DAX
-#include <linux/pfn_t.h>
-#include <linux/dax.h>
-#include <linux/uio.h>
-#endif

 #include <linux/uaccess.h>
···

 	struct request_queue	*brd_queue;
 	struct gendisk		*brd_disk;
-#ifdef CONFIG_BLK_DEV_RAM_DAX
-	struct dax_device	*dax_dev;
-#endif
 	struct list_head	brd_list;

 	/*
···
 	 * restriction might be able to be lifted.
 	 */
 	gfp_flags = GFP_NOIO | __GFP_ZERO;
-#ifndef CONFIG_BLK_DEV_RAM_DAX
-	gfp_flags |= __GFP_HIGHMEM;
-#endif
 	page = alloc_page(gfp_flags);
 	if (!page)
 		return NULL;
···
 	return err;
 }

-#ifdef CONFIG_BLK_DEV_RAM_DAX
-static long __brd_direct_access(struct brd_device *brd, pgoff_t pgoff,
-		long nr_pages, void **kaddr, pfn_t *pfn)
-{
-	struct page *page;
-
-	if (!brd)
-		return -ENODEV;
-	page = brd_insert_page(brd, (sector_t)pgoff << PAGE_SECTORS_SHIFT);
-	if (!page)
-		return -ENOSPC;
-	*kaddr = page_address(page);
-	*pfn = page_to_pfn_t(page);
-
-	return 1;
-}
-
-static long brd_dax_direct_access(struct dax_device *dax_dev,
-		pgoff_t pgoff, long nr_pages, void **kaddr, pfn_t *pfn)
-{
-	struct brd_device *brd = dax_get_private(dax_dev);
-
-	return __brd_direct_access(brd, pgoff, nr_pages, kaddr, pfn);
-}
-
-static size_t brd_dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
-		void *addr, size_t bytes, struct iov_iter *i)
-{
-	return copy_from_iter(addr, bytes, i);
-}
-
-static const struct dax_operations brd_dax_ops = {
-	.direct_access = brd_dax_direct_access,
-	.copy_from_iter = brd_dax_copy_from_iter,
-};
-#endif
-
 static const struct block_device_operations brd_fops = {
 	.owner =		THIS_MODULE,
 	.rw_page =		brd_rw_page,
···
 	set_capacity(disk, rd_size * 2);
 	disk->queue->backing_dev_info->capabilities |= BDI_CAP_SYNCHRONOUS_IO;

-#ifdef CONFIG_BLK_DEV_RAM_DAX
-	queue_flag_set_unlocked(QUEUE_FLAG_DAX, brd->brd_queue);
-	brd->dax_dev = alloc_dax(brd, disk->disk_name, &brd_dax_ops);
-	if (!brd->dax_dev)
-		goto out_free_inode;
-#endif
-
-
 	return brd;

-#ifdef CONFIG_BLK_DEV_RAM_DAX
-out_free_inode:
-	kill_dax(brd->dax_dev);
-	put_dax(brd->dax_dev);
-#endif
 out_free_queue:
 	blk_cleanup_queue(brd->brd_queue);
 out_free_dev:
···
 static void brd_del_one(struct brd_device *brd)
 {
 	list_del(&brd->brd_list);
-#ifdef CONFIG_BLK_DEV_RAM_DAX
-	kill_dax(brd->dax_dev);
-	put_dax(brd->dax_dev);
-#endif
 	del_gendisk(brd->brd_disk);
 	brd_free(brd);
 }
+2 -1
drivers/dax/device.c
···
 		unsigned long size)
 {
 	struct resource *res;
-	phys_addr_t phys;
+	/* gcc-4.6.3-nolibc for i386 complains that this is uninitialized */
+	phys_addr_t uninitialized_var(phys);
 	int i;

 	for (i = 0; i < dev_dax->num_resources; i++) {
+7 -7
drivers/dax/super.c
···
 	long len;

 	if (blocksize != PAGE_SIZE) {
-		pr_err("VFS (%s): error: unsupported blocksize for dax\n",
+		pr_debug("VFS (%s): error: unsupported blocksize for dax\n",
 				sb->s_id);
 		return -EINVAL;
 	}

 	err = bdev_dax_pgoff(bdev, 0, PAGE_SIZE, &pgoff);
 	if (err) {
-		pr_err("VFS (%s): error: unaligned partition for dax\n",
+		pr_debug("VFS (%s): error: unaligned partition for dax\n",
 				sb->s_id);
 		return err;
 	}

 	dax_dev = dax_get_by_host(bdev->bd_disk->disk_name);
 	if (!dax_dev) {
-		pr_err("VFS (%s): error: device does not support dax\n",
+		pr_debug("VFS (%s): error: device does not support dax\n",
 				sb->s_id);
 		return -EOPNOTSUPP;
 	}
···
 	put_dax(dax_dev);

 	if (len < 1) {
-		pr_err("VFS (%s): error: dax access failed (%ld)",
+		pr_debug("VFS (%s): error: dax access failed (%ld)\n",
 				sb->s_id, len);
 		return len < 0 ? len : -EIO;
 	}
···
 void arch_wb_cache_pmem(void *addr, size_t size);
 void dax_flush(struct dax_device *dax_dev, void *addr, size_t size)
 {
-	if (unlikely(!dax_alive(dax_dev)))
-		return;
-
 	if (unlikely(!test_bit(DAXDEV_WRITE_CACHE, &dax_dev->flags)))
 		return;
···
 	struct inode *inode;

 	dax_dev = kmem_cache_alloc(dax_cache, GFP_KERNEL);
+	if (!dax_dev)
+		return NULL;
+
 	inode = &dax_dev->inode;
 	inode->i_rdev = 0;
 	return inode;
+1
drivers/nvdimm/Makefile
···
 libnvdimm-y += region.o
 libnvdimm-y += namespace_devs.o
 libnvdimm-y += label.o
+libnvdimm-y += badrange.o
 libnvdimm-$(CONFIG_ND_CLAIM) += claim.o
 libnvdimm-$(CONFIG_BTT) += btt_devs.o
 libnvdimm-$(CONFIG_NVDIMM_PFN) += pfn_devs.o
+293
drivers/nvdimm/badrange.c
··· 1 + /* 2 + * Copyright(c) 2017 Intel Corporation. All rights reserved. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of version 2 of the GNU General Public License as 6 + * published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope that it will be useful, but 9 + * WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 11 + * General Public License for more details. 12 + */ 13 + #include <linux/libnvdimm.h> 14 + #include <linux/badblocks.h> 15 + #include <linux/export.h> 16 + #include <linux/module.h> 17 + #include <linux/blkdev.h> 18 + #include <linux/device.h> 19 + #include <linux/ctype.h> 20 + #include <linux/ndctl.h> 21 + #include <linux/mutex.h> 22 + #include <linux/slab.h> 23 + #include <linux/io.h> 24 + #include "nd-core.h" 25 + #include "nd.h" 26 + 27 + void badrange_init(struct badrange *badrange) 28 + { 29 + INIT_LIST_HEAD(&badrange->list); 30 + spin_lock_init(&badrange->lock); 31 + } 32 + EXPORT_SYMBOL_GPL(badrange_init); 33 + 34 + static void append_badrange_entry(struct badrange *badrange, 35 + struct badrange_entry *bre, u64 addr, u64 length) 36 + { 37 + lockdep_assert_held(&badrange->lock); 38 + bre->start = addr; 39 + bre->length = length; 40 + list_add_tail(&bre->list, &badrange->list); 41 + } 42 + 43 + static int alloc_and_append_badrange_entry(struct badrange *badrange, 44 + u64 addr, u64 length, gfp_t flags) 45 + { 46 + struct badrange_entry *bre; 47 + 48 + bre = kzalloc(sizeof(*bre), flags); 49 + if (!bre) 50 + return -ENOMEM; 51 + 52 + append_badrange_entry(badrange, bre, addr, length); 53 + return 0; 54 + } 55 + 56 + static int add_badrange(struct badrange *badrange, u64 addr, u64 length) 57 + { 58 + struct badrange_entry *bre, *bre_new; 59 + 60 + spin_unlock(&badrange->lock); 61 + bre_new = kzalloc(sizeof(*bre_new), GFP_KERNEL); 62 + spin_lock(&badrange->lock); 63 + 64 + if (list_empty(&badrange->list)) { 65 + if (!bre_new) 66 + return -ENOMEM; 67 + append_badrange_entry(badrange, bre_new, addr, length); 68 + return 0; 69 + } 70 + 71 + /* 72 + * There is a chance this is a duplicate, check for those first. 
73 + * This will be the common case as ARS_STATUS returns all known 74 + * errors in the SPA space, and we can't query it per region 75 + */ 76 + list_for_each_entry(bre, &badrange->list, list) 77 + if (bre->start == addr) { 78 + /* If length has changed, update this list entry */ 79 + if (bre->length != length) 80 + bre->length = length; 81 + kfree(bre_new); 82 + return 0; 83 + } 84 + 85 + /* 86 + * If not a duplicate or a simple length update, add the entry as is, 87 + * as any overlapping ranges will get resolved when the list is consumed 88 + * and converted to badblocks 89 + */ 90 + if (!bre_new) 91 + return -ENOMEM; 92 + append_badrange_entry(badrange, bre_new, addr, length); 93 + 94 + return 0; 95 + } 96 + 97 + int badrange_add(struct badrange *badrange, u64 addr, u64 length) 98 + { 99 + int rc; 100 + 101 + spin_lock(&badrange->lock); 102 + rc = add_badrange(badrange, addr, length); 103 + spin_unlock(&badrange->lock); 104 + 105 + return rc; 106 + } 107 + EXPORT_SYMBOL_GPL(badrange_add); 108 + 109 + void badrange_forget(struct badrange *badrange, phys_addr_t start, 110 + unsigned int len) 111 + { 112 + struct list_head *badrange_list = &badrange->list; 113 + u64 clr_end = start + len - 1; 114 + struct badrange_entry *bre, *next; 115 + 116 + spin_lock(&badrange->lock); 117 + 118 + /* 119 + * [start, clr_end] is the badrange interval being cleared. 120 + * [bre->start, bre_end] is the badrange_list entry we're comparing 121 + * the above interval against. The badrange list entry may need 122 + * to be modified (update either start or length), deleted, or 123 + * split into two based on the overlap characteristics 124 + */ 125 + 126 + list_for_each_entry_safe(bre, next, badrange_list, list) { 127 + u64 bre_end = bre->start + bre->length - 1; 128 + 129 + /* Skip intervals with no intersection */ 130 + if (bre_end < start) 131 + continue; 132 + if (bre->start > clr_end) 133 + continue; 134 + /* Delete completely overlapped badrange entries */ 135 + if ((bre->start >= start) && (bre_end <= clr_end)) { 136 + list_del(&bre->list); 137 + kfree(bre); 138 + continue; 139 + } 140 + /* Adjust start point of partially cleared entries */ 141 + if ((start <= bre->start) && (clr_end > bre->start)) { 142 + bre->length -= clr_end - bre->start + 1; 143 + bre->start = clr_end + 1; 144 + continue; 145 + } 146 + /* Adjust bre->length for partial clearing at the tail end */ 147 + if ((bre->start < start) && (bre_end <= clr_end)) { 148 + /* bre->start remains the same */ 149 + bre->length = start - bre->start; 150 + continue; 151 + } 152 + /* 153 + * If clearing in the middle of an entry, we split it into 154 + * two by modifying the current entry to represent one half of 155 + * the split, and adding a new entry for the second half. 
156 + */ 157 + if ((bre->start < start) && (bre_end > clr_end)) { 158 + u64 new_start = clr_end + 1; 159 + u64 new_len = bre_end - new_start + 1; 160 + 161 + /* Add new entry covering the right half */ 162 + alloc_and_append_badrange_entry(badrange, new_start, 163 + new_len, GFP_NOWAIT); 164 + /* Adjust this entry to cover the left half */ 165 + bre->length = start - bre->start; 166 + continue; 167 + } 168 + } 169 + spin_unlock(&badrange->lock); 170 + } 171 + EXPORT_SYMBOL_GPL(badrange_forget); 172 + 173 + static void set_badblock(struct badblocks *bb, sector_t s, int num) 174 + { 175 + dev_dbg(bb->dev, "Found a bad range (0x%llx, 0x%llx)\n", 176 + (u64) s * 512, (u64) num * 512); 177 + /* this isn't an error as the hardware will still throw an exception */ 178 + if (badblocks_set(bb, s, num, 1)) 179 + dev_info_once(bb->dev, "%s: failed for sector %llx\n", 180 + __func__, (u64) s); 181 + } 182 + 183 + /** 184 + * __add_badblock_range() - Convert a physical address range to bad sectors 185 + * @bb: badblocks instance to populate 186 + * @ns_offset: namespace offset where the error range begins (in bytes) 187 + * @len: number of bytes of badrange to be added 188 + * 189 + * This assumes that the range provided with (ns_offset, len) is within 190 + * the bounds of physical addresses for this namespace, i.e. lies in the 191 + * interval [ns_start, ns_start + ns_size) 192 + */ 193 + static void __add_badblock_range(struct badblocks *bb, u64 ns_offset, u64 len) 194 + { 195 + const unsigned int sector_size = 512; 196 + sector_t start_sector, end_sector; 197 + u64 num_sectors; 198 + u32 rem; 199 + 200 + start_sector = div_u64(ns_offset, sector_size); 201 + end_sector = div_u64_rem(ns_offset + len, sector_size, &rem); 202 + if (rem) 203 + end_sector++; 204 + num_sectors = end_sector - start_sector; 205 + 206 + if (unlikely(num_sectors > (u64)INT_MAX)) { 207 + u64 remaining = num_sectors; 208 + sector_t s = start_sector; 209 + 210 + while (remaining) { 211 + int done = min_t(u64, remaining, INT_MAX); 212 + 213 + set_badblock(bb, s, done); 214 + remaining -= done; 215 + s += done; 216 + } 217 + } else 218 + set_badblock(bb, start_sector, num_sectors); 219 + } 220 + 221 + static void badblocks_populate(struct badrange *badrange, 222 + struct badblocks *bb, const struct resource *res) 223 + { 224 + struct badrange_entry *bre; 225 + 226 + if (list_empty(&badrange->list)) 227 + return; 228 + 229 + list_for_each_entry(bre, &badrange->list, list) { 230 + u64 bre_end = bre->start + bre->length - 1; 231 + 232 + /* Discard intervals with no intersection */ 233 + if (bre_end < res->start) 234 + continue; 235 + if (bre->start > res->end) 236 + continue; 237 + /* Deal with any overlap after start of the namespace */ 238 + if (bre->start >= res->start) { 239 + u64 start = bre->start; 240 + u64 len; 241 + 242 + if (bre_end <= res->end) 243 + len = bre->length; 244 + else 245 + len = res->start + resource_size(res) 246 + - bre->start; 247 + __add_badblock_range(bb, start - res->start, len); 248 + continue; 249 + } 250 + /* 251 + * Deal with overlap for badrange starting before 252 + * the namespace. 
253 + */ 254 + if (bre->start < res->start) { 255 + u64 len; 256 + 257 + if (bre_end < res->end) 258 + len = bre->start + bre->length - res->start; 259 + else 260 + len = resource_size(res); 261 + __add_badblock_range(bb, 0, len); 262 + } 263 + } 264 + } 265 + 266 + /** 267 + * nvdimm_badblocks_populate() - Convert a list of badranges to badblocks 268 + * @region: parent region of the range to interrogate 269 + * @bb: badblocks instance to populate 270 + * @res: resource range to consider 271 + * 272 + * The badrange list generated during bus initialization may contain 273 + * multiple, possibly overlapping physical address ranges. Compare each 274 + * of these ranges to the resource range currently being initialized, 275 + * and add badblocks entries for all matching sub-ranges 276 + */ 277 + void nvdimm_badblocks_populate(struct nd_region *nd_region, 278 + struct badblocks *bb, const struct resource *res) 279 + { 280 + struct nvdimm_bus *nvdimm_bus; 281 + 282 + if (!is_memory(&nd_region->dev)) { 283 + dev_WARN_ONCE(&nd_region->dev, 1, 284 + "%s only valid for pmem regions\n", __func__); 285 + return; 286 + } 287 + nvdimm_bus = walk_to_nvdimm_bus(&nd_region->dev); 288 + 289 + nvdimm_bus_lock(&nvdimm_bus->dev); 290 + badblocks_populate(&nvdimm_bus->badrange, bb, res); 291 + nvdimm_bus_unlock(&nvdimm_bus->dev); 292 + } 293 + EXPORT_SYMBOL_GPL(nvdimm_badblocks_populate);
+12 -12
drivers/nvdimm/bus.c
···
  * General Public License for more details.
  */
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/libnvdimm.h>
 #include <linux/sched/mm.h>
 #include <linux/vmalloc.h>
 #include <linux/uaccess.h>
···
 		phys_addr_t phys, u64 cleared)
 {
 	if (cleared > 0)
-		nvdimm_forget_poison(nvdimm_bus, phys, cleared);
+		badrange_forget(&nvdimm_bus->badrange, phys, cleared);

 	if (cleared > 0 && cleared / 512)
 		nvdimm_clear_badblocks_regions(nvdimm_bus, phys, cleared);
···
 		return NULL;
 	INIT_LIST_HEAD(&nvdimm_bus->list);
 	INIT_LIST_HEAD(&nvdimm_bus->mapping_list);
-	INIT_LIST_HEAD(&nvdimm_bus->poison_list);
 	init_waitqueue_head(&nvdimm_bus->probe_wait);
 	nvdimm_bus->id = ida_simple_get(&nd_ida, 0, 0, GFP_KERNEL);
 	mutex_init(&nvdimm_bus->reconfig_mutex);
-	spin_lock_init(&nvdimm_bus->poison_lock);
+	badrange_init(&nvdimm_bus->badrange);
 	if (nvdimm_bus->id < 0) {
 		kfree(nvdimm_bus);
 		return NULL;
···
 	return 0;
 }

-static void free_poison_list(struct list_head *poison_list)
+static void free_badrange_list(struct list_head *badrange_list)
 {
-	struct nd_poison *pl, *next;
+	struct badrange_entry *bre, *next;

-	list_for_each_entry_safe(pl, next, poison_list, list) {
-		list_del(&pl->list);
-		kfree(pl);
+	list_for_each_entry_safe(bre, next, badrange_list, list) {
+		list_del(&bre->list);
+		kfree(bre);
 	}
-	list_del_init(poison_list);
+	list_del_init(badrange_list);
 }

 static int nd_bus_remove(struct device *dev)
···
 	nd_synchronize();
 	device_for_each_child(&nvdimm_bus->dev, NULL, child_unregister);

-	spin_lock(&nvdimm_bus->poison_lock);
-	free_poison_list(&nvdimm_bus->poison_list);
-	spin_unlock(&nvdimm_bus->poison_lock);
+	spin_lock(&nvdimm_bus->badrange.lock);
+	free_badrange_list(&nvdimm_bus->badrange.list);
+	spin_unlock(&nvdimm_bus->badrange.lock);

 	nvdimm_bus_destroy_ndctl(nvdimm_bus);
+3 -257
drivers/nvdimm/core.c
···
 };
 EXPORT_SYMBOL_GPL(nvdimm_bus_attribute_group);

-static void set_badblock(struct badblocks *bb, sector_t s, int num)
+int nvdimm_bus_add_badrange(struct nvdimm_bus *nvdimm_bus, u64 addr, u64 length)
 {
-	dev_dbg(bb->dev, "Found a poison range (0x%llx, 0x%llx)\n",
-			(u64) s * 512, (u64) num * 512);
-	/* this isn't an error as the hardware will still throw an exception */
-	if (badblocks_set(bb, s, num, 1))
-		dev_info_once(bb->dev, "%s: failed for sector %llx\n",
-				__func__, (u64) s);
+	return badrange_add(&nvdimm_bus->badrange, addr, length);
 }
-
-/**
- * __add_badblock_range() - Convert a physical address range to bad sectors
- * @bb: badblocks instance to populate
- * @ns_offset: namespace offset where the error range begins (in bytes)
- * @len: number of bytes of poison to be added
- *
- * This assumes that the range provided with (ns_offset, len) is within
- * the bounds of physical addresses for this namespace, i.e. lies in the
- * interval [ns_start, ns_start + ns_size)
- */
-static void __add_badblock_range(struct badblocks *bb, u64 ns_offset, u64 len)
-{
-	const unsigned int sector_size = 512;
-	sector_t start_sector, end_sector;
-	u64 num_sectors;
-	u32 rem;
-
-	start_sector = div_u64(ns_offset, sector_size);
-	end_sector = div_u64_rem(ns_offset + len, sector_size, &rem);
-	if (rem)
-		end_sector++;
-	num_sectors = end_sector - start_sector;
-
-	if (unlikely(num_sectors > (u64)INT_MAX)) {
-		u64 remaining = num_sectors;
-		sector_t s = start_sector;
-
-		while (remaining) {
-			int done = min_t(u64, remaining, INT_MAX);
-
-			set_badblock(bb, s, done);
-			remaining -= done;
-			s += done;
-		}
-	} else
-		set_badblock(bb, start_sector, num_sectors);
-}
-
-static void badblocks_populate(struct list_head *poison_list,
-		struct badblocks *bb, const struct resource *res)
-{
-	struct nd_poison *pl;
-
-	if (list_empty(poison_list))
-		return;
-
-	list_for_each_entry(pl, poison_list, list) {
-		u64 pl_end = pl->start + pl->length - 1;
-
-		/* Discard intervals with no intersection */
-		if (pl_end < res->start)
-			continue;
-		if (pl->start > res->end)
-			continue;
-		/* Deal with any overlap after start of the namespace */
-		if (pl->start >= res->start) {
-			u64 start = pl->start;
-			u64 len;
-
-			if (pl_end <= res->end)
-				len = pl->length;
-			else
-				len = res->start + resource_size(res)
-					- pl->start;
-			__add_badblock_range(bb, start - res->start, len);
-			continue;
-		}
-		/* Deal with overlap for poison starting before the namespace */
-		if (pl->start < res->start) {
-			u64 len;
-
-			if (pl_end < res->end)
-				len = pl->start + pl->length - res->start;
-			else
-				len = resource_size(res);
-			__add_badblock_range(bb, 0, len);
-		}
-	}
-}
-
-/**
- * nvdimm_badblocks_populate() - Convert a list of poison ranges to badblocks
- * @region: parent region of the range to interrogate
- * @bb: badblocks instance to populate
- * @res: resource range to consider
- *
- * The poison list generated during bus initialization may contain
- * multiple, possibly overlapping physical address ranges.  Compare each
- * of these ranges to the resource range currently being initialized,
- * and add badblocks entries for all matching sub-ranges
- */
-void nvdimm_badblocks_populate(struct nd_region *nd_region,
-		struct badblocks *bb, const struct resource *res)
-{
-	struct nvdimm_bus *nvdimm_bus;
-	struct list_head *poison_list;
-
-	if (!is_memory(&nd_region->dev)) {
-		dev_WARN_ONCE(&nd_region->dev, 1,
-				"%s only valid for pmem regions\n", __func__);
-		return;
-	}
-	nvdimm_bus = walk_to_nvdimm_bus(&nd_region->dev);
-	poison_list = &nvdimm_bus->poison_list;
-
-	nvdimm_bus_lock(&nvdimm_bus->dev);
-	badblocks_populate(poison_list, bb, res);
-	nvdimm_bus_unlock(&nvdimm_bus->dev);
-}
-EXPORT_SYMBOL_GPL(nvdimm_badblocks_populate);
-
-static void append_poison_entry(struct nvdimm_bus *nvdimm_bus,
-		struct nd_poison *pl, u64 addr, u64 length)
-{
-	lockdep_assert_held(&nvdimm_bus->poison_lock);
-	pl->start = addr;
-	pl->length = length;
-	list_add_tail(&pl->list, &nvdimm_bus->poison_list);
-}
-
-static int add_poison(struct nvdimm_bus *nvdimm_bus, u64 addr, u64 length,
-		gfp_t flags)
-{
-	struct nd_poison *pl;
-
-	pl = kzalloc(sizeof(*pl), flags);
-	if (!pl)
-		return -ENOMEM;
-
-	append_poison_entry(nvdimm_bus, pl, addr, length);
-	return 0;
-}
-
-static int bus_add_poison(struct nvdimm_bus *nvdimm_bus, u64 addr, u64 length)
-{
-	struct nd_poison *pl, *pl_new;
-
-	spin_unlock(&nvdimm_bus->poison_lock);
-	pl_new = kzalloc(sizeof(*pl_new), GFP_KERNEL);
-	spin_lock(&nvdimm_bus->poison_lock);
-
-	if (list_empty(&nvdimm_bus->poison_list)) {
-		if (!pl_new)
-			return -ENOMEM;
-		append_poison_entry(nvdimm_bus, pl_new, addr, length);
-		return 0;
-	}
-
-	/*
-	 * There is a chance this is a duplicate, check for those first.
-	 * This will be the common case as ARS_STATUS returns all known
-	 * errors in the SPA space, and we can't query it per region
-	 */
-	list_for_each_entry(pl, &nvdimm_bus->poison_list, list)
-		if (pl->start == addr) {
-			/* If length has changed, update this list entry */
-			if (pl->length != length)
-				pl->length = length;
-			kfree(pl_new);
-			return 0;
-		}
-
-	/*
-	 * If not a duplicate or a simple length update, add the entry as is,
-	 * as any overlapping ranges will get resolved when the list is consumed
-	 * and converted to badblocks
-	 */
-	if (!pl_new)
-		return -ENOMEM;
-	append_poison_entry(nvdimm_bus, pl_new, addr, length);
-
-	return 0;
-}
-
-int nvdimm_bus_add_poison(struct nvdimm_bus *nvdimm_bus, u64 addr, u64 length)
-{
-	int rc;
-
-	spin_lock(&nvdimm_bus->poison_lock);
-	rc = bus_add_poison(nvdimm_bus, addr, length);
-	spin_unlock(&nvdimm_bus->poison_lock);
-
-	return rc;
-}
-EXPORT_SYMBOL_GPL(nvdimm_bus_add_poison);
-
-void nvdimm_forget_poison(struct nvdimm_bus *nvdimm_bus, phys_addr_t start,
-		unsigned int len)
-{
-	struct list_head *poison_list = &nvdimm_bus->poison_list;
-	u64 clr_end = start + len - 1;
-	struct nd_poison *pl, *next;
-
-	spin_lock(&nvdimm_bus->poison_lock);
-	WARN_ON_ONCE(list_empty(poison_list));
-
-	/*
-	 * [start, clr_end] is the poison interval being cleared.
-	 * [pl->start, pl_end] is the poison_list entry we're comparing
-	 * the above interval against. The poison list entry may need
-	 * to be modified (update either start or length), deleted, or
-	 * split into two based on the overlap characteristics
-	 */
-
-	list_for_each_entry_safe(pl, next, poison_list, list) {
-		u64 pl_end = pl->start + pl->length - 1;
-
-		/* Skip intervals with no intersection */
-		if (pl_end < start)
-			continue;
-		if (pl->start > clr_end)
-			continue;
-		/* Delete completely overlapped poison entries */
-		if ((pl->start >= start) && (pl_end <= clr_end)) {
-			list_del(&pl->list);
-			kfree(pl);
-			continue;
-		}
-		/* Adjust start point of partially cleared entries */
-		if ((start <= pl->start) && (clr_end > pl->start)) {
-			pl->length -= clr_end - pl->start + 1;
-			pl->start = clr_end + 1;
-			continue;
-		}
-		/* Adjust pl->length for partial clearing at the tail end */
-		if ((pl->start < start) && (pl_end <= clr_end)) {
-			/* pl->start remains the same */
-			pl->length = start - pl->start;
-			continue;
-		}
-		/*
-		 * If clearing in the middle of an entry, we split it into
-		 * two by modifying the current entry to represent one half of
-		 * the split, and adding a new entry for the second half.
-		 */
-		if ((pl->start < start) && (pl_end > clr_end)) {
-			u64 new_start = clr_end + 1;
-			u64 new_len = pl_end - new_start + 1;
-
-			/* Add new entry covering the right half */
-			add_poison(nvdimm_bus, new_start, new_len, GFP_NOWAIT);
-			/* Adjust this entry to cover the left half */
-			pl->length = start - pl->start;
-			continue;
-		}
-	}
-	spin_unlock(&nvdimm_bus->poison_lock);
-}
-EXPORT_SYMBOL_GPL(nvdimm_forget_poison);
+EXPORT_SYMBOL_GPL(nvdimm_bus_add_badrange);

 #ifdef CONFIG_BLK_DEV_INTEGRITY
 int nd_integrity_init(struct gendisk *disk, unsigned long meta_size)
+3
drivers/nvdimm/dimm.c
···
 		goto err;

 	rc = nvdimm_init_config_data(ndd);
+	if (rc == -EACCES)
+		nvdimm_set_locked(dev);
 	if (rc)
 		goto err;

···
 	rc = nd_label_reserve_dpa(ndd);
 	if (ndd->ns_current >= 0)
 		nvdimm_set_aliasing(dev);
+	nvdimm_clear_locked(dev);
 	nvdimm_bus_unlock(dev);

 	if (rc)
+19
drivers/nvdimm/dimm_devs.c
···
 	set_bit(NDD_LOCKED, &nvdimm->flags);
 }

+void nvdimm_clear_locked(struct device *dev)
+{
+	struct nvdimm *nvdimm = to_nvdimm(dev);
+
+	clear_bit(NDD_LOCKED, &nvdimm->flags);
+}
+
 static void nvdimm_release(struct device *dev)
 {
 	struct nvdimm *nvdimm = to_nvdimm(dev);
···
 }
 static DEVICE_ATTR_RO(commands);

+static ssize_t flags_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct nvdimm *nvdimm = to_nvdimm(dev);
+
+	return sprintf(buf, "%s%s\n",
+			test_bit(NDD_ALIASING, &nvdimm->flags) ? "alias " : "",
+			test_bit(NDD_LOCKED, &nvdimm->flags) ? "lock " : "");
+}
+static DEVICE_ATTR_RO(flags);
+
 static ssize_t state_show(struct device *dev, struct device_attribute *attr,
 		char *buf)
 {
···
 static struct attribute *nvdimm_attributes[] = {
 	&dev_attr_state.attr,
+	&dev_attr_flags.attr,
 	&dev_attr_commands.attr,
 	&dev_attr_available_slots.attr,
 	NULL,
+1 -1
drivers/nvdimm/label.c
···
 	nsindex = to_namespace_index(ndd, 0);
 	memset(nsindex, 0, ndd->nsarea.config_size);
 	for (i = 0; i < 2; i++) {
-		int rc = nd_label_write_index(ndd, i, i*2, ND_NSINDEX_INIT);
+		int rc = nd_label_write_index(ndd, i, 3 - i, ND_NSINDEX_INIT);

 		if (rc)
 			return rc;
+3 -3
drivers/nvdimm/namespace_devs.c
···
 	if (a == &dev_attr_resource.attr) {
 		if (is_namespace_blk(dev))
 			return 0;
-		return a->mode;
+		return 0400;
 	}

 	if (is_namespace_pmem(dev) || is_namespace_blk(dev)) {
···
 * @nspm: target namespace to create
 * @nd_label: target pmem namespace label to evaluate
 */
-struct device *create_namespace_pmem(struct nd_region *nd_region,
+static struct device *create_namespace_pmem(struct nd_region *nd_region,
 		struct nd_namespace_index *nsindex,
 		struct nd_namespace_label *nd_label)
 {
···
 	return i;
 }

-struct device *create_namespace_blk(struct nd_region *nd_region,
+static struct device *create_namespace_blk(struct nd_region *nd_region,
 		struct nd_namespace_label *nd_label, int count)
 {
+1 -2
drivers/nvdimm/nd-core.h
···
 	struct list_head list;
 	struct device dev;
 	int id, probe_active;
-	struct list_head poison_list;
 	struct list_head mapping_list;
 	struct mutex reconfig_mutex;
-	spinlock_t poison_lock;
+	struct badrange badrange;
 };

 struct nvdimm {
+1 -6
drivers/nvdimm/nd.h
···
 	NVDIMM_IO_ATOMIC = 1,
 };

-struct nd_poison {
-	u64 start;
-	u64 length;
-	struct list_head list;
-};
-
 struct nvdimm_drvdata {
 	struct device *dev;
 	int nslabel_size;
···
 		unsigned int len);
 void nvdimm_set_aliasing(struct device *dev);
 void nvdimm_set_locked(struct device *dev);
+void nvdimm_clear_locked(struct device *dev);
 struct nd_btt *to_nd_btt(struct device *dev);

 struct nd_gen_sb {
+8
drivers/nvdimm/pfn_devs.c
···
 	NULL,
 };

+static umode_t pfn_visible(struct kobject *kobj, struct attribute *a, int n)
+{
+	if (a == &dev_attr_resource.attr)
+		return 0400;
+	return a->mode;
+}
+
 struct attribute_group nd_pfn_attribute_group = {
 	.attrs = nd_pfn_attributes,
+	.is_visible = pfn_visible,
 };

 static const struct attribute_group *nd_pfn_attribute_groups[] = {
+6 -2
drivers/nvdimm/region_devs.c
···
 	if (!is_nd_pmem(dev) && a == &dev_attr_badblocks.attr)
 		return 0;

-	if (!is_nd_pmem(dev) && a == &dev_attr_resource.attr)
-		return 0;
+	if (a == &dev_attr_resource.attr) {
+		if (is_nd_pmem(dev))
+			return 0400;
+		else
+			return 0;
+	}

 	if (a == &dev_attr_deep_flush.attr) {
 		int has_flush = nvdimm_has_flush(nd_region);
+221 -102
fs/dax.c
···
 static void *dax_insert_mapping_entry(struct address_space *mapping,
 				      struct vm_fault *vmf,
 				      void *entry, sector_t sector,
-				      unsigned long flags)
+				      unsigned long flags, bool dirty)
 {
 	struct radix_tree_root *page_tree = &mapping->page_tree;
 	void *new_entry;
 	pgoff_t index = vmf->pgoff;

-	if (vmf->flags & FAULT_FLAG_WRITE)
+	if (dirty)
 		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);

 	if (dax_is_zero_entry(entry) && !(flags & RADIX_DAX_ZERO_PAGE)) {
···
 		entry = new_entry;
 	}

-	if (vmf->flags & FAULT_FLAG_WRITE)
+	if (dirty)
 		radix_tree_tag_set(page_tree, index, PAGECACHE_TAG_DIRTY);

 	spin_unlock_irq(&mapping->tree_lock);
···
 }
 EXPORT_SYMBOL_GPL(dax_writeback_mapping_range);

-static int dax_insert_mapping(struct address_space *mapping,
-		struct block_device *bdev, struct dax_device *dax_dev,
-		sector_t sector, size_t size, void *entry,
-		struct vm_area_struct *vma, struct vm_fault *vmf)
+static sector_t dax_iomap_sector(struct iomap *iomap, loff_t pos)
 {
-	unsigned long vaddr = vmf->address;
-	void *ret, *kaddr;
-	pgoff_t pgoff;
-	int id, rc;
-	pfn_t pfn;
+	return (iomap->addr + (pos & PAGE_MASK) - iomap->offset) >> 9;
+}

-	rc = bdev_dax_pgoff(bdev, sector, size, &pgoff);
+static int dax_iomap_pfn(struct iomap *iomap, loff_t pos, size_t size,
+			 pfn_t *pfnp)
+{
+	const sector_t sector = dax_iomap_sector(iomap, pos);
+	pgoff_t pgoff;
+	void *kaddr;
+	int id, rc;
+	long length;
+
+	rc = bdev_dax_pgoff(iomap->bdev, sector, size, &pgoff);
 	if (rc)
 		return rc;
-
 	id = dax_read_lock();
-	rc = dax_direct_access(dax_dev, pgoff, PHYS_PFN(size), &kaddr, &pfn);
-	if (rc < 0) {
-		dax_read_unlock(id);
-		return rc;
+	length = dax_direct_access(iomap->dax_dev, pgoff, PHYS_PFN(size),
+				   &kaddr, pfnp);
+	if (length < 0) {
+		rc = length;
+		goto out;
 	}
+	rc = -EINVAL;
+	if (PFN_PHYS(length) < size)
+		goto out;
+	if (pfn_t_to_pfn(*pfnp) & (PHYS_PFN(size)-1))
+		goto out;
+	/* For larger pages we need devmap */
+	if (length > 1 && !pfn_t_devmap(*pfnp))
+		goto out;
+	rc = 0;
+out:
 	dax_read_unlock(id);
-
-	ret = dax_insert_mapping_entry(mapping, vmf, entry, sector, 0);
-	if (IS_ERR(ret))
-		return PTR_ERR(ret);
-
-	trace_dax_insert_mapping(mapping->host, vmf, ret);
-	if (vmf->flags & FAULT_FLAG_WRITE)
-		return vm_insert_mixed_mkwrite(vma, vaddr, pfn);
-	else
-		return vm_insert_mixed(vma, vaddr, pfn);
+	return rc;
 }

 /*
···
 	}

 	entry2 = dax_insert_mapping_entry(mapping, vmf, entry, 0,
-			RADIX_DAX_ZERO_PAGE);
+			RADIX_DAX_ZERO_PAGE, false);
 	if (IS_ERR(entry2)) {
 		ret = VM_FAULT_SIGBUS;
 		goto out;
···
 	return 0;
 }
 EXPORT_SYMBOL_GPL(__dax_zero_page_range);
-
-static sector_t dax_iomap_sector(struct iomap *iomap, loff_t pos)
-{
-	return (iomap->addr + (pos & PAGE_MASK) - iomap->offset) >> 9;
-}

 static loff_t
 dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
···
 	return VM_FAULT_SIGBUS;
 }

-static int dax_iomap_pte_fault(struct vm_fault *vmf,
+/*
+ * MAP_SYNC on a dax mapping guarantees dirty metadata is
+ * flushed on write-faults (non-cow), but not read-faults.
+ */
+static bool dax_fault_is_synchronous(unsigned long flags,
+		struct vm_area_struct *vma, struct iomap *iomap)
+{
+	return (flags & IOMAP_WRITE) && (vma->vm_flags & VM_SYNC)
+		&& (iomap->flags & IOMAP_F_DIRTY);
+}
+
+static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
 			       const struct iomap_ops *ops)
 {
-	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+	struct vm_area_struct *vma = vmf->vma;
+	struct address_space *mapping = vma->vm_file->f_mapping;
 	struct inode *inode = mapping->host;
 	unsigned long vaddr = vmf->address;
 	loff_t pos = (loff_t)vmf->pgoff << PAGE_SHIFT;
-	sector_t sector;
 	struct iomap iomap = { 0 };
 	unsigned flags = IOMAP_FAULT;
 	int error, major = 0;
+	bool write = vmf->flags & FAULT_FLAG_WRITE;
+	bool sync;
 	int vmf_ret = 0;
 	void *entry;
+	pfn_t pfn;

 	trace_dax_pte_fault(inode, vmf, vmf_ret);
 	/*
···
 		goto out;
 	}

-	if ((vmf->flags & FAULT_FLAG_WRITE) && !vmf->cow_page)
+	if (write && !vmf->cow_page)
 		flags |= IOMAP_WRITE;

 	entry = grab_mapping_entry(mapping, vmf->pgoff, 0);
···
 		goto error_finish_iomap;
 	}

-	sector = dax_iomap_sector(&iomap, pos);
-
 	if (vmf->cow_page) {
+		sector_t sector = dax_iomap_sector(&iomap, pos);
+
 		switch (iomap.type) {
 		case IOMAP_HOLE:
 		case IOMAP_UNWRITTEN:
···
 		goto finish_iomap;
 	}

+	sync = dax_fault_is_synchronous(flags, vma, &iomap);
+
 	switch (iomap.type) {
 	case IOMAP_MAPPED:
 		if (iomap.flags & IOMAP_F_NEW) {
 			count_vm_event(PGMAJFAULT);
-			count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
+			count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
 			major = VM_FAULT_MAJOR;
 		}
-		error = dax_insert_mapping(mapping, iomap.bdev, iomap.dax_dev,
-				sector, PAGE_SIZE, entry, vmf->vma, vmf);
+		error = dax_iomap_pfn(&iomap, pos, PAGE_SIZE, &pfn);
+		if (error < 0)
+			goto error_finish_iomap;
+
+		entry = dax_insert_mapping_entry(mapping, vmf, entry,
+						 dax_iomap_sector(&iomap, pos),
+						 0, write && !sync);
+		if (IS_ERR(entry)) {
+			error = PTR_ERR(entry);
+			goto error_finish_iomap;
+		}
+
+		/*
+		 * If we are doing synchronous page fault and inode needs fsync,
+		 * we can insert PTE into page tables only after that happens.
+		 * Skip insertion for now and return the pfn so that caller can
+		 * insert it after fsync is done.
+		 */
+		if (sync) {
+			if (WARN_ON_ONCE(!pfnp)) {
+				error = -EIO;
+				goto error_finish_iomap;
+			}
+			*pfnp = pfn;
+			vmf_ret = VM_FAULT_NEEDDSYNC | major;
+			goto finish_iomap;
+		}
+		trace_dax_insert_mapping(inode, vmf, entry);
+		if (write)
+			error = vm_insert_mixed_mkwrite(vma, vaddr, pfn);
+		else
+			error = vm_insert_mixed(vma, vaddr, pfn);
+
 		/* -EBUSY is fine, somebody else faulted on the same PTE */
 		if (error == -EBUSY)
 			error = 0;
 		break;
 	case IOMAP_UNWRITTEN:
 	case IOMAP_HOLE:
-		if (!(vmf->flags & FAULT_FLAG_WRITE)) {
+		if (!write) {
 			vmf_ret = dax_load_hole(mapping, entry, vmf);
 			goto finish_iomap;
 		}
···
 }

 #ifdef CONFIG_FS_DAX_PMD
-static int dax_pmd_insert_mapping(struct vm_fault *vmf, struct iomap *iomap,
-		loff_t pos, void *entry)
-{
-	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
-	const sector_t sector = dax_iomap_sector(iomap, pos);
-	struct dax_device *dax_dev = iomap->dax_dev;
-	struct block_device *bdev = iomap->bdev;
-	struct inode *inode = mapping->host;
-	const size_t size = PMD_SIZE;
-	void *ret = NULL, *kaddr;
-	long length = 0;
-	pgoff_t pgoff;
-	pfn_t pfn = {};
-	int id;
-
-	if (bdev_dax_pgoff(bdev, sector, size, &pgoff) != 0)
-		goto fallback;
-
-	id = dax_read_lock();
-	length = dax_direct_access(dax_dev, pgoff, PHYS_PFN(size), &kaddr, &pfn);
-	if (length < 0)
-		goto unlock_fallback;
-	length = PFN_PHYS(length);
-
-	if (length < size)
-		goto unlock_fallback;
-	if (pfn_t_to_pfn(pfn) & PG_PMD_COLOUR)
-		goto unlock_fallback;
-	if (!pfn_t_devmap(pfn))
-		goto unlock_fallback;
-	dax_read_unlock(id);
-
-	ret = dax_insert_mapping_entry(mapping, vmf, entry, sector,
-			RADIX_DAX_PMD);
-	if (IS_ERR(ret))
-		goto fallback;
-
-	trace_dax_pmd_insert_mapping(inode, vmf, length, pfn, ret);
-	return vmf_insert_pfn_pmd(vmf->vma, vmf->address, vmf->pmd,
-			pfn, vmf->flags & FAULT_FLAG_WRITE);
-
-unlock_fallback:
-	dax_read_unlock(id);
-fallback:
-	trace_dax_pmd_insert_mapping_fallback(inode, vmf, length, pfn, ret);
-	return VM_FAULT_FALLBACK;
-}
+/*
+ * The 'colour' (ie low bits) within a PMD of a page offset.  This comes up
+ * more often than one might expect in the below functions.
+ */
+#define PG_PMD_COLOUR	((PMD_SIZE >> PAGE_SHIFT) - 1)

 static int dax_pmd_load_hole(struct vm_fault *vmf, struct iomap *iomap,
 		void *entry)
···
 		goto fallback;

 	ret = dax_insert_mapping_entry(mapping, vmf, entry, 0,
-			RADIX_DAX_PMD | RADIX_DAX_ZERO_PAGE);
+			RADIX_DAX_PMD | RADIX_DAX_ZERO_PAGE, false);
 	if (IS_ERR(ret))
 		goto fallback;
···
 	return VM_FAULT_FALLBACK;
 }

-static int dax_iomap_pmd_fault(struct vm_fault *vmf,
+static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 			       const struct iomap_ops *ops)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct address_space *mapping = vma->vm_file->f_mapping;
 	unsigned long pmd_addr = vmf->address & PMD_MASK;
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
+	bool sync;
 	unsigned int iomap_flags = (write ? IOMAP_WRITE : 0) | IOMAP_FAULT;
 	struct inode *inode = mapping->host;
 	int result = VM_FAULT_FALLBACK;
···
 	void *entry;
 	loff_t pos;
 	int error;
+	pfn_t pfn;

 	/*
 	 * Check whether offset isn't beyond end of file now. Caller is
···
 	 * this is a reliable test.
 	 */
 	pgoff = linear_page_index(vma, pmd_addr);
-	max_pgoff = (i_size_read(inode) - 1) >> PAGE_SHIFT;
+	max_pgoff = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);

 	trace_dax_pmd_fault(inode, vmf, max_pgoff, 0);
···
 	if ((pmd_addr + PMD_SIZE) > vma->vm_end)
 		goto fallback;

-	if (pgoff > max_pgoff) {
+	if (pgoff >= max_pgoff) {
 		result = VM_FAULT_SIGBUS;
 		goto out;
 	}

 	/* If the PMD would extend beyond the file size */
-	if ((pgoff | PG_PMD_COLOUR) > max_pgoff)
+	if ((pgoff | PG_PMD_COLOUR) >= max_pgoff)
 		goto fallback;

 	/*
···
 	if (iomap.offset + iomap.length < pos + PMD_SIZE)
 		goto finish_iomap;

+	sync = dax_fault_is_synchronous(iomap_flags, vma, &iomap);
+
 	switch (iomap.type) {
 	case IOMAP_MAPPED:
-		result = dax_pmd_insert_mapping(vmf, &iomap, pos, entry);
+		error = dax_iomap_pfn(&iomap, pos, PMD_SIZE, &pfn);
+		if (error < 0)
+			goto finish_iomap;
+
+		entry = dax_insert_mapping_entry(mapping, vmf, entry,
+						 dax_iomap_sector(&iomap, pos),
+						 RADIX_DAX_PMD, write && !sync);
+		if (IS_ERR(entry))
+			goto finish_iomap;
+
+		/*
+		 * If we are doing synchronous page fault and inode needs fsync,
+		 * we can insert PMD into page tables only after that happens.
+		 * Skip insertion for now and return the pfn so that caller can
+		 * insert it after fsync is done.
1422 + */ 1423 + if (sync) { 1424 + if (WARN_ON_ONCE(!pfnp)) 1425 + goto finish_iomap; 1426 + *pfnp = pfn; 1427 + result = VM_FAULT_NEEDDSYNC; 1428 + goto finish_iomap; 1429 + } 1430 + 1431 + trace_dax_pmd_insert_mapping(inode, vmf, PMD_SIZE, pfn, entry); 1432 + result = vmf_insert_pfn_pmd(vma, vmf->address, vmf->pmd, pfn, 1433 + write); 1412 1434 break; 1413 1435 case IOMAP_UNWRITTEN: 1414 1436 case IOMAP_HOLE: ··· 1476 1442 return result; 1477 1443 } 1478 1444 #else 1479 - static int dax_iomap_pmd_fault(struct vm_fault *vmf, 1445 + static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp, 1480 1446 const struct iomap_ops *ops) 1481 1447 { 1482 1448 return VM_FAULT_FALLBACK; ··· 1486 1452 /** 1487 1453 * dax_iomap_fault - handle a page fault on a DAX file 1488 1454 * @vmf: The description of the fault 1489 - * @ops: iomap ops passed from the file system 1455 + * @pe_size: Size of the page to fault in 1456 + * @pfnp: PFN to insert for synchronous faults if fsync is required 1457 + * @ops: Iomap ops passed from the file system 1490 1458 * 1491 1459 * When a page fault occurs, filesystems may call this helper in 1492 1460 * their fault handler for DAX files. dax_iomap_fault() assumes the caller ··· 1496 1460 * successfully. 1497 1461 */ 1498 1462 int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size, 1499 - const struct iomap_ops *ops) 1463 + pfn_t *pfnp, const struct iomap_ops *ops) 1500 1464 { 1501 1465 switch (pe_size) { 1502 1466 case PE_SIZE_PTE: 1503 - return dax_iomap_pte_fault(vmf, ops); 1467 + return dax_iomap_pte_fault(vmf, pfnp, ops); 1504 1468 case PE_SIZE_PMD: 1505 - return dax_iomap_pmd_fault(vmf, ops); 1469 + return dax_iomap_pmd_fault(vmf, pfnp, ops); 1506 1470 default: 1507 1471 return VM_FAULT_FALLBACK; 1508 1472 } 1509 1473 } 1510 1474 EXPORT_SYMBOL_GPL(dax_iomap_fault); 1475 + 1476 + /** 1477 + * dax_insert_pfn_mkwrite - insert PTE or PMD entry into page tables 1478 + * @vmf: The description of the fault 1479 + * @pe_size: Size of entry to be inserted 1480 + * @pfn: PFN to insert 1481 + * 1482 + * This function inserts writeable PTE or PMD entry into page tables for mmaped 1483 + * DAX file. It takes care of marking corresponding radix tree entry as dirty 1484 + * as well. 1485 + */ 1486 + static int dax_insert_pfn_mkwrite(struct vm_fault *vmf, 1487 + enum page_entry_size pe_size, 1488 + pfn_t pfn) 1489 + { 1490 + struct address_space *mapping = vmf->vma->vm_file->f_mapping; 1491 + void *entry, **slot; 1492 + pgoff_t index = vmf->pgoff; 1493 + int vmf_ret, error; 1494 + 1495 + spin_lock_irq(&mapping->tree_lock); 1496 + entry = get_unlocked_mapping_entry(mapping, index, &slot); 1497 + /* Did we race with someone splitting entry or so? 
*/ 1498 + if (!entry || 1499 + (pe_size == PE_SIZE_PTE && !dax_is_pte_entry(entry)) || 1500 + (pe_size == PE_SIZE_PMD && !dax_is_pmd_entry(entry))) { 1501 + put_unlocked_mapping_entry(mapping, index, entry); 1502 + spin_unlock_irq(&mapping->tree_lock); 1503 + trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf, 1504 + VM_FAULT_NOPAGE); 1505 + return VM_FAULT_NOPAGE; 1506 + } 1507 + radix_tree_tag_set(&mapping->page_tree, index, PAGECACHE_TAG_DIRTY); 1508 + entry = lock_slot(mapping, slot); 1509 + spin_unlock_irq(&mapping->tree_lock); 1510 + switch (pe_size) { 1511 + case PE_SIZE_PTE: 1512 + error = vm_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn); 1513 + vmf_ret = dax_fault_return(error); 1514 + break; 1515 + #ifdef CONFIG_FS_DAX_PMD 1516 + case PE_SIZE_PMD: 1517 + vmf_ret = vmf_insert_pfn_pmd(vmf->vma, vmf->address, vmf->pmd, 1518 + pfn, true); 1519 + break; 1520 + #endif 1521 + default: 1522 + vmf_ret = VM_FAULT_FALLBACK; 1523 + } 1524 + put_locked_mapping_entry(mapping, index); 1525 + trace_dax_insert_pfn_mkwrite(mapping->host, vmf, vmf_ret); 1526 + return vmf_ret; 1527 + } 1528 + 1529 + /** 1530 + * dax_finish_sync_fault - finish synchronous page fault 1531 + * @vmf: The description of the fault 1532 + * @pe_size: Size of entry to be inserted 1533 + * @pfn: PFN to insert 1534 + * 1535 + * This function ensures that the file range touched by the page fault is 1536 + * stored persistently on the media and handles inserting of appropriate page 1537 + * table entry. 1538 + */ 1539 + int dax_finish_sync_fault(struct vm_fault *vmf, enum page_entry_size pe_size, 1540 + pfn_t pfn) 1541 + { 1542 + int err; 1543 + loff_t start = ((loff_t)vmf->pgoff) << PAGE_SHIFT; 1544 + size_t len = 0; 1545 + 1546 + if (pe_size == PE_SIZE_PTE) 1547 + len = PAGE_SIZE; 1548 + else if (pe_size == PE_SIZE_PMD) 1549 + len = PMD_SIZE; 1550 + else 1551 + WARN_ON_ONCE(1); 1552 + err = vfs_fsync_range(vmf->vma->vm_file, start, start + len - 1, 1); 1553 + if (err) 1554 + return VM_FAULT_SIGBUS; 1555 + return dax_insert_pfn_mkwrite(vmf, pe_size, pfn); 1556 + } 1557 + EXPORT_SYMBOL_GPL(dax_finish_sync_fault);
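The fs/dax.c rework above splits a synchronous write fault into two steps: dax_iomap_fault() resolves the mapping and, when MAP_SYNC applies and metadata is still dirty, returns VM_FAULT_NEEDDSYNC together with the pfn instead of touching the page tables; the filesystem then flushes and finishes the fault. A minimal sketch of a caller, assuming a hypothetical filesystem whose iomap ops are named my_iomap_ops (the ext4 and xfs hunks further down are the real users):

static const struct iomap_ops my_iomap_ops;	/* hypothetical fs ops */

static int my_dax_huge_fault(struct vm_fault *vmf, enum page_entry_size pe_size)
{
	pfn_t pfn;
	int result;

	/* step 1: the DAX core maps the block; for a synchronous write
	 * fault it skips the page-table update and hands back the pfn */
	result = dax_iomap_fault(vmf, pe_size, &pfn, &my_iomap_ops);

	/* step 2: fsync the metadata, then install the PTE/PMD */
	if (result & VM_FAULT_NEEDDSYNC)
		result = dax_finish_sync_fault(vmf, pe_size, pfn);
	return result;
}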
+1 -1
fs/ext2/file.c
··· 100 100 } 101 101 down_read(&ei->dax_sem); 102 102 103 - ret = dax_iomap_fault(vmf, PE_SIZE_PTE, &ext2_iomap_ops); 103 + ret = dax_iomap_fault(vmf, PE_SIZE_PTE, NULL, &ext2_iomap_ops); 104 104 105 105 up_read(&ei->dax_sem); 106 106 if (vmf->flags & FAULT_FLAG_WRITE)
+20 -6
fs/ext4/file.c
··· 28 28 #include <linux/quotaops.h>
29 29 #include <linux/pagevec.h>
30 30 #include <linux/uio.h>
31 + #include <linux/mman.h>
31 32 #include "ext4.h"
32 33 #include "ext4_jbd2.h"
33 34 #include "xattr.h"
··· 298 297 */
299 298 bool write = (vmf->flags & FAULT_FLAG_WRITE) &&
300 299 (vmf->vma->vm_flags & VM_SHARED);
300 + pfn_t pfn;
301 301
302 302 if (write) {
303 303 sb_start_pagefault(sb);
··· 306 304 down_read(&EXT4_I(inode)->i_mmap_sem);
307 305 handle = ext4_journal_start_sb(sb, EXT4_HT_WRITE_PAGE,
308 306 EXT4_DATA_TRANS_BLOCKS(sb));
307 + if (IS_ERR(handle)) {
308 + up_read(&EXT4_I(inode)->i_mmap_sem);
309 + sb_end_pagefault(sb);
310 + return VM_FAULT_SIGBUS;
311 + }
309 312 } else {
310 313 down_read(&EXT4_I(inode)->i_mmap_sem);
311 314 }
312 - if (!IS_ERR(handle))
313 - result = dax_iomap_fault(vmf, pe_size, &ext4_iomap_ops);
314 - else
315 - result = VM_FAULT_SIGBUS;
315 + result = dax_iomap_fault(vmf, pe_size, &pfn, &ext4_iomap_ops);
316 316 if (write) {
317 - if (!IS_ERR(handle))
318 - ext4_journal_stop(handle);
317 + ext4_journal_stop(handle);
318 + /* Handling synchronous page fault? */
319 + if (result & VM_FAULT_NEEDDSYNC)
320 + result = dax_finish_sync_fault(vmf, pe_size, pfn);
319 321 up_read(&EXT4_I(inode)->i_mmap_sem);
320 322 sb_end_pagefault(sb);
321 323 } else {
··· 356 350
357 351 if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb))))
358 352 return -EIO;
353 +
354 + /*
355 + * We don't support synchronous mappings for non-DAX files. At least
356 + * until someone comes up with a sensible use case.
357 + */
358 + if (!IS_DAX(file_inode(file)) && (vma->vm_flags & VM_SYNC))
359 + return -EOPNOTSUPP;
359 360
360 361 file_accessed(file);
361 362 if (IS_DAX(file_inode(file))) {
··· 482 469 .compat_ioctl = ext4_compat_ioctl,
483 470 #endif
484 471 .mmap = ext4_file_mmap,
472 + .mmap_supported_flags = MAP_SYNC,
485 473 .open = ext4_file_open,
486 474 .release = ext4_release_file,
487 475 .fsync = ext4_sync_file,
+15
fs/ext4/inode.c
··· 3384 3384 return try_to_free_buffers(page); 3385 3385 } 3386 3386 3387 + static bool ext4_inode_datasync_dirty(struct inode *inode) 3388 + { 3389 + journal_t *journal = EXT4_SB(inode->i_sb)->s_journal; 3390 + 3391 + if (journal) 3392 + return !jbd2_transaction_committed(journal, 3393 + EXT4_I(inode)->i_datasync_tid); 3394 + /* Any metadata buffers to write? */ 3395 + if (!list_empty(&inode->i_mapping->private_list)) 3396 + return true; 3397 + return inode->i_state & I_DIRTY_DATASYNC; 3398 + } 3399 + 3387 3400 static int ext4_iomap_begin(struct inode *inode, loff_t offset, loff_t length, 3388 3401 unsigned flags, struct iomap *iomap) 3389 3402 { ··· 3510 3497 } 3511 3498 3512 3499 iomap->flags = 0; 3500 + if (ext4_inode_datasync_dirty(inode)) 3501 + iomap->flags |= IOMAP_F_DIRTY; 3513 3502 iomap->bdev = inode->i_sb->s_bdev; 3514 3503 iomap->dax_dev = sbi->s_daxdev; 3515 3504 iomap->offset = first_block << blkbits;
+17
fs/jbd2/journal.c
··· 737 737 return err; 738 738 } 739 739 740 + /* Return 1 when transaction with given tid has already committed. */ 741 + int jbd2_transaction_committed(journal_t *journal, tid_t tid) 742 + { 743 + int ret = 1; 744 + 745 + read_lock(&journal->j_state_lock); 746 + if (journal->j_running_transaction && 747 + journal->j_running_transaction->t_tid == tid) 748 + ret = 0; 749 + if (journal->j_committing_transaction && 750 + journal->j_committing_transaction->t_tid == tid) 751 + ret = 0; 752 + read_unlock(&journal->j_state_lock); 753 + return ret; 754 + } 755 + EXPORT_SYMBOL(jbd2_transaction_committed); 756 + 740 757 /* 741 758 * When this function returns the transaction corresponding to tid 742 759 * will be completed. If the transaction has currently running, start
+1
fs/proc/task_mmu.c
··· 661 661 [ilog2(VM_ACCOUNT)] = "ac", 662 662 [ilog2(VM_NORESERVE)] = "nr", 663 663 [ilog2(VM_HUGETLB)] = "ht", 664 + [ilog2(VM_SYNC)] = "sf", 664 665 [ilog2(VM_ARCH_1)] = "ar", 665 666 [ilog2(VM_WIPEONFORK)] = "wf", 666 667 [ilog2(VM_DONTDUMP)] = "dd",
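The new "sf" bit makes VM_SYNC visible in the VmFlags line of /proc/<pid>/smaps. A small userspace sketch for verifying that a mapping really got synchronous-fault semantics (assumes procfs is mounted and that the process already created a MAP_SYNC mapping):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool have_sync_mapping(void)
{
	char line[512];
	FILE *f = fopen("/proc/self/smaps", "r");
	bool found = false;

	if (!f)
		return false;
	while (!found && fgets(line, sizeof(line), f))
		if (!strncmp(line, "VmFlags:", 8) && strstr(line, " sf"))
			found = true;	/* some VMA carries VM_SYNC */
	fclose(f);
	return found;
}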
+18 -26
fs/xfs/xfs_file.c
··· 44 44 #include <linux/falloc.h>
45 45 #include <linux/pagevec.h>
46 46 #include <linux/backing-dev.h>
47 + #include <linux/mman.h>
47 48
48 49 static const struct vm_operations_struct xfs_file_vm_ops;
49 50
··· 1046 1045
1047 1046 xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
1048 1047 if (IS_DAX(inode)) {
1049 - ret = dax_iomap_fault(vmf, pe_size, &xfs_iomap_ops);
1048 + pfn_t pfn;
1049 +
1050 + ret = dax_iomap_fault(vmf, pe_size, &pfn, &xfs_iomap_ops);
1051 + if (ret & VM_FAULT_NEEDDSYNC)
1052 + ret = dax_finish_sync_fault(vmf, pe_size, pfn);
1050 1053 } else {
1051 1054 if (write_fault)
1052 1055 ret = iomap_page_mkwrite(vmf, &xfs_iomap_ops);
··· 1095 1090 }
1096 1091
1097 1092 /*
1098 - * pfn_mkwrite was originally inteneded to ensure we capture time stamp
1099 - * updates on write faults. In reality, it's need to serialise against
1100 - * truncate similar to page_mkwrite. Hence we cycle the XFS_MMAPLOCK_SHARED
1101 - * to ensure we serialise the fault barrier in place.
1093 + * pfn_mkwrite was originally intended to ensure we capture time stamp updates
1094 + * on write faults. In reality, it needs to serialise against truncate and
1095 + * prepare memory for writing so handle it as a standard write fault.
1102 1096 */
1103 1097 static int
1104 1098 xfs_filemap_pfn_mkwrite(
1105 1099 struct vm_fault *vmf)
1106 1100 {
1107 1101
1108 - struct inode *inode = file_inode(vmf->vma->vm_file);
1109 - struct xfs_inode *ip = XFS_I(inode);
1110 - int ret = VM_FAULT_NOPAGE;
1111 - loff_t size;
1112 -
1113 - trace_xfs_filemap_pfn_mkwrite(ip);
1114 -
1115 - sb_start_pagefault(inode->i_sb);
1116 - file_update_time(vmf->vma->vm_file);
1117 -
1118 - /* check if the faulting page hasn't raced with truncate */
1119 - xfs_ilock(ip, XFS_MMAPLOCK_SHARED);
1120 - size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
1121 - if (vmf->pgoff >= size)
1122 - ret = VM_FAULT_SIGBUS;
1123 - else if (IS_DAX(inode))
1124 - ret = dax_iomap_fault(vmf, PE_SIZE_PTE, &xfs_iomap_ops);
1125 - xfs_iunlock(ip, XFS_MMAPLOCK_SHARED);
1126 - sb_end_pagefault(inode->i_sb);
1127 - return ret;
1128 -
1102 + return __xfs_filemap_fault(vmf, PE_SIZE_PTE, true);
1129 1103 }
1130 1104
1131 1105 static const struct vm_operations_struct xfs_file_vm_ops = {
··· 1120 1136 struct file *filp,
1121 1137 struct vm_area_struct *vma)
1122 1138 {
1139 + /*
1140 + * We don't support synchronous mappings for non-DAX files. At least
1141 + * until someone comes up with a sensible use case.
1142 + */
1143 + if (!IS_DAX(file_inode(filp)) && (vma->vm_flags & VM_SYNC))
1144 + return -EOPNOTSUPP;
1145 +
1123 1146 file_accessed(filp);
1124 1147 vma->vm_ops = &xfs_file_vm_ops;
1125 1148 if (IS_DAX(file_inode(filp)))
··· 1145 1154 .compat_ioctl = xfs_file_compat_ioctl,
1146 1155 #endif
1147 1156 .mmap = xfs_file_mmap,
1157 + .mmap_supported_flags = MAP_SYNC,
1148 1158 .open = xfs_file_open,
1149 1159 .release = xfs_file_release,
1150 1160 .fsync = xfs_file_fsync,
+5
fs/xfs/xfs_iomap.c
··· 34 34 #include "xfs_error.h" 35 35 #include "xfs_trans.h" 36 36 #include "xfs_trans_space.h" 37 + #include "xfs_inode_item.h" 37 38 #include "xfs_iomap.h" 38 39 #include "xfs_trace.h" 39 40 #include "xfs_icache.h" ··· 1089 1088 xfs_iunlock(ip, lockmode); 1090 1089 trace_xfs_iomap_found(ip, offset, length, 0, &imap); 1091 1090 } 1091 + 1092 + if (xfs_ipincount(ip) && (ip->i_itemp->ili_fsync_fields 1093 + & ~XFS_ILOG_TIMESTAMP)) 1094 + iomap->flags |= IOMAP_F_DIRTY; 1092 1095 1093 1096 xfs_bmbt_to_iomap(ip, iomap, &imap); 1094 1097
-2
fs/xfs/xfs_trace.h
··· 654 654 DEFINE_INODE_EVENT(xfs_inode_clear_cowblocks_tag); 655 655 DEFINE_INODE_EVENT(xfs_inode_free_cowblocks_invalid); 656 656 657 - DEFINE_INODE_EVENT(xfs_filemap_pfn_mkwrite); 658 - 659 657 TRACE_EVENT(xfs_filemap_fault, 660 658 TP_PROTO(struct xfs_inode *ip, enum page_entry_size pe_size, 661 659 bool write_fault),
+3 -1
include/linux/dax.h
··· 96 96 ssize_t dax_iomap_rw(struct kiocb *iocb, struct iov_iter *iter, 97 97 const struct iomap_ops *ops); 98 98 int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size, 99 - const struct iomap_ops *ops); 99 + pfn_t *pfnp, const struct iomap_ops *ops); 100 + int dax_finish_sync_fault(struct vm_fault *vmf, enum page_entry_size pe_size, 101 + pfn_t pfn); 100 102 int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index); 101 103 int dax_invalidate_mapping_entry_sync(struct address_space *mapping, 102 104 pgoff_t index);
+1
include/linux/fs.h
··· 1702 1702 long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long); 1703 1703 long (*compat_ioctl) (struct file *, unsigned int, unsigned long); 1704 1704 int (*mmap) (struct file *, struct vm_area_struct *); 1705 + unsigned long mmap_supported_flags; 1705 1706 int (*open) (struct inode *, struct file *); 1706 1707 int (*flush) (struct file *, fl_owner_t id); 1707 1708 int (*release) (struct inode *, struct file *);
+4
include/linux/iomap.h
··· 21 21 22 22 /* 23 23 * Flags for all iomap mappings: 24 + * 25 + * IOMAP_F_DIRTY indicates the inode has uncommitted metadata needed to access 26 + * written data and requires fdatasync to commit them to persistent storage. 24 27 */ 25 28 #define IOMAP_F_NEW 0x01 /* blocks have been newly allocated */ 26 29 #define IOMAP_F_BOUNDARY 0x02 /* mapping ends at metadata boundary */ 30 + #define IOMAP_F_DIRTY 0x04 /* uncommitted metadata */ 27 31 28 32 /* 29 33 * Flags that only need to be reported for IOMAP_REPORT requests:
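IOMAP_F_DIRTY is what dax_fault_is_synchronous() keys off: a MAP_SYNC write fault only needs the extra fsync step while metadata covering the faulted range is uncommitted. A hedged sketch of a hypothetical filesystem's ->iomap_begin reporting the flag (my_iomap_begin and the dirtiness predicate are illustrative; the ext4 and xfs hunks above are the real implementations):

static bool my_inode_datasync_dirty(struct inode *inode)
{
	/* illustrative stand-in for "metadata needed to reach this data
	 * is not yet on stable storage", cf. ext4_inode_datasync_dirty() */
	return inode->i_state & I_DIRTY_DATASYNC;
}

static int my_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
		unsigned flags, struct iomap *iomap)
{
	/* ... resolve offset/length into iomap->addr and friends ... */

	iomap->flags = 0;
	if (my_inode_datasync_dirty(inode))
		iomap->flags |= IOMAP_F_DIRTY;
	return 0;
}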
+1
include/linux/jbd2.h
··· 1367 1367 int __jbd2_log_start_commit(journal_t *journal, tid_t tid); 1368 1368 int jbd2_journal_start_commit(journal_t *journal, tid_t *tid); 1369 1369 int jbd2_log_wait_commit(journal_t *journal, tid_t tid); 1370 + int jbd2_transaction_committed(journal_t *journal, tid_t tid); 1370 1371 int jbd2_complete_transaction(journal_t *journal, tid_t tid); 1371 1372 int jbd2_log_do_checkpoint(journal_t *journal); 1372 1373 int jbd2_trans_will_send_data_barrier(journal_t *journal, tid_t tid);
+18 -3
include/linux/libnvdimm.h
··· 18 18 #include <linux/sizes.h> 19 19 #include <linux/types.h> 20 20 #include <linux/uuid.h> 21 + #include <linux/spinlock.h> 22 + 23 + struct badrange_entry { 24 + u64 start; 25 + u64 length; 26 + struct list_head list; 27 + }; 28 + 29 + struct badrange { 30 + struct list_head list; 31 + spinlock_t lock; 32 + }; 21 33 22 34 enum { 23 35 /* when a dimm supports both PMEM and BLK access a label is required */ ··· 141 129 142 130 } 143 131 144 - int nvdimm_bus_add_poison(struct nvdimm_bus *nvdimm_bus, u64 addr, u64 length); 145 - void nvdimm_forget_poison(struct nvdimm_bus *nvdimm_bus, 146 - phys_addr_t start, unsigned int len); 132 + void badrange_init(struct badrange *badrange); 133 + int badrange_add(struct badrange *badrange, u64 addr, u64 length); 134 + void badrange_forget(struct badrange *badrange, phys_addr_t start, 135 + unsigned int len); 136 + int nvdimm_bus_add_badrange(struct nvdimm_bus *nvdimm_bus, u64 addr, 137 + u64 length); 147 138 struct nvdimm_bus *nvdimm_bus_register(struct device *parent, 148 139 struct nvdimm_bus_descriptor *nfit_desc); 149 140 void nvdimm_bus_unregister(struct nvdimm_bus *nvdimm_bus);
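The badrange object factors the bus-level poison list out into something any caller can instantiate, which is what lets the nfit_test changes below track injected errors per test instance. A minimal provider-side sketch under assumed names (my_badrange, my_provider_*; the addresses are arbitrary):

static struct badrange my_badrange;

static int my_provider_probe(void)
{
	badrange_init(&my_badrange);
	/* record a discovered 512-byte media error */
	return badrange_add(&my_badrange, 0x1000, 512);
}

static void my_provider_clear_error(phys_addr_t start, unsigned int len)
{
	/* e.g. after a successful ND_CMD_CLEAR_ERROR */
	badrange_forget(&my_badrange, start, len);
}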
+6 -3
include/linux/mm.h
··· 199 199 #define VM_ACCOUNT 0x00100000 /* Is a VM accounted object */ 200 200 #define VM_NORESERVE 0x00200000 /* should the VM suppress accounting */ 201 201 #define VM_HUGETLB 0x00400000 /* Huge TLB Page VM */ 202 + #define VM_SYNC 0x00800000 /* Synchronous page faults */ 202 203 #define VM_ARCH_1 0x01000000 /* Architecture-specific flag */ 203 204 #define VM_WIPEONFORK 0x02000000 /* Wipe VMA contents in child. */ 204 205 #define VM_DONTDUMP 0x04000000 /* Do not include in the core dump */ ··· 1192 1191 #define VM_FAULT_RETRY 0x0400 /* ->fault blocked, must retry */ 1193 1192 #define VM_FAULT_FALLBACK 0x0800 /* huge page fault failed, fall back to small */ 1194 1193 #define VM_FAULT_DONE_COW 0x1000 /* ->fault has fully handled COW */ 1195 - 1196 - #define VM_FAULT_HWPOISON_LARGE_MASK 0xf000 /* encodes hpage index for large hwpoison */ 1194 + #define VM_FAULT_NEEDDSYNC 0x2000 /* ->fault did not modify page tables 1195 + * and needs fsync() to complete (for 1196 + * synchronous page faults in DAX) */ 1197 1197 1198 1198 #define VM_FAULT_ERROR (VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | \ 1199 1199 VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE | \ ··· 1212 1210 { VM_FAULT_LOCKED, "LOCKED" }, \ 1213 1211 { VM_FAULT_RETRY, "RETRY" }, \ 1214 1212 { VM_FAULT_FALLBACK, "FALLBACK" }, \ 1215 - { VM_FAULT_DONE_COW, "DONE_COW" } 1213 + { VM_FAULT_DONE_COW, "DONE_COW" }, \ 1214 + { VM_FAULT_NEEDDSYNC, "NEEDDSYNC" } 1216 1215 1217 1216 /* Encode hstate index for a hwpoisoned large page */ 1218 1217 #define VM_FAULT_SET_HINDEX(x) ((x) << 12)
+46 -2
include/linux/mman.h
··· 8 8 #include <linux/atomic.h> 9 9 #include <uapi/linux/mman.h> 10 10 11 + /* 12 + * Arrange for legacy / undefined architecture specific flags to be 13 + * ignored by mmap handling code. 14 + */ 15 + #ifndef MAP_32BIT 16 + #define MAP_32BIT 0 17 + #endif 18 + #ifndef MAP_HUGE_2MB 19 + #define MAP_HUGE_2MB 0 20 + #endif 21 + #ifndef MAP_HUGE_1GB 22 + #define MAP_HUGE_1GB 0 23 + #endif 24 + #ifndef MAP_UNINITIALIZED 25 + #define MAP_UNINITIALIZED 0 26 + #endif 27 + #ifndef MAP_SYNC 28 + #define MAP_SYNC 0 29 + #endif 30 + 31 + /* 32 + * The historical set of flags that all mmap implementations implicitly 33 + * support when a ->mmap_validate() op is not provided in file_operations. 34 + */ 35 + #define LEGACY_MAP_MASK (MAP_SHARED \ 36 + | MAP_PRIVATE \ 37 + | MAP_FIXED \ 38 + | MAP_ANONYMOUS \ 39 + | MAP_DENYWRITE \ 40 + | MAP_EXECUTABLE \ 41 + | MAP_UNINITIALIZED \ 42 + | MAP_GROWSDOWN \ 43 + | MAP_LOCKED \ 44 + | MAP_NORESERVE \ 45 + | MAP_POPULATE \ 46 + | MAP_NONBLOCK \ 47 + | MAP_STACK \ 48 + | MAP_HUGETLB \ 49 + | MAP_32BIT \ 50 + | MAP_HUGE_2MB \ 51 + | MAP_HUGE_1GB) 52 + 11 53 extern int sysctl_overcommit_memory; 12 54 extern int sysctl_overcommit_ratio; 13 55 extern unsigned long sysctl_overcommit_kbytes; ··· 106 64 * ("bit1" and "bit2" must be single bits) 107 65 */ 108 66 #define _calc_vm_trans(x, bit1, bit2) \ 67 + ((!(bit1) || !(bit2)) ? 0 : \ 109 68 ((bit1) <= (bit2) ? ((x) & (bit1)) * ((bit2) / (bit1)) \ 110 - : ((x) & (bit1)) / ((bit1) / (bit2))) 69 + : ((x) & (bit1)) / ((bit1) / (bit2)))) 111 70 112 71 /* 113 72 * Combine the mmap "prot" argument into "vm_flags" used internally. ··· 130 87 { 131 88 return _calc_vm_trans(flags, MAP_GROWSDOWN, VM_GROWSDOWN ) | 132 89 _calc_vm_trans(flags, MAP_DENYWRITE, VM_DENYWRITE ) | 133 - _calc_vm_trans(flags, MAP_LOCKED, VM_LOCKED ); 90 + _calc_vm_trans(flags, MAP_LOCKED, VM_LOCKED ) | 91 + _calc_vm_trans(flags, MAP_SYNC, VM_SYNC ); 134 92 } 135 93 136 94 unsigned long vm_commit_limit(void);
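Two details here feed the mmap() change below: LEGACY_MAP_MASK enumerates the flags every implementation has always honoured, and the new !(bit1) guard keeps _calc_vm_trans() from expanding to a constant division by zero on architectures where MAP_SYNC falls back to 0. With the asm-generic values, the MAP_SYNC to VM_SYNC translation is a multiply by 0x10; a worked userspace mirror with the two constants restated locally:

#include <stdio.h>

#define MY_MAP_SYNC	0x80000UL	/* uapi asm-generic value */
#define MY_VM_SYNC	0x800000UL	/* include/linux/mm.h value */

#define my_calc_vm_trans(x, bit1, bit2) \
	((!(bit1) || !(bit2)) ? 0 : \
	((bit1) <= (bit2) ? ((x) & (bit1)) * ((bit2) / (bit1)) \
	: ((x) & (bit1)) / ((bit1) / (bit2))))

int main(void)
{
	/* (0x80000 & 0x80000) * (0x800000 / 0x80000) == 0x800000 */
	printf("%#lx\n",
	       my_calc_vm_trans(MY_MAP_SYNC, MY_MAP_SYNC, MY_VM_SYNC));
	return 0;
}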
+2 -1
include/trace/events/fs_dax.h
··· 149 149 TP_ARGS(inode, vmf, length, pfn, radix_entry)) 150 150 151 151 DEFINE_PMD_INSERT_MAPPING_EVENT(dax_pmd_insert_mapping); 152 - DEFINE_PMD_INSERT_MAPPING_EVENT(dax_pmd_insert_mapping_fallback); 153 152 154 153 DECLARE_EVENT_CLASS(dax_pte_fault_class, 155 154 TP_PROTO(struct inode *inode, struct vm_fault *vmf, int result), ··· 191 192 DEFINE_PTE_FAULT_EVENT(dax_pte_fault); 192 193 DEFINE_PTE_FAULT_EVENT(dax_pte_fault_done); 193 194 DEFINE_PTE_FAULT_EVENT(dax_load_hole); 195 + DEFINE_PTE_FAULT_EVENT(dax_insert_pfn_mkwrite_no_entry); 196 + DEFINE_PTE_FAULT_EVENT(dax_insert_pfn_mkwrite); 194 197 195 198 TRACE_EVENT(dax_insert_mapping, 196 199 TP_PROTO(struct inode *inode, struct vm_fault *vmf, void *radix_entry),
+1
include/uapi/asm-generic/mman-common.h
··· 17 17 18 18 #define MAP_SHARED 0x01 /* Share changes */ 19 19 #define MAP_PRIVATE 0x02 /* Changes are private */ 20 + #define MAP_SHARED_VALIDATE 0x03 /* share + validate extension flags */ 20 21 #define MAP_TYPE 0x0f /* Mask for type of mapping */ 21 22 #define MAP_FIXED 0x10 /* Interpret addr exactly */ 22 23 #define MAP_ANONYMOUS 0x20 /* don't use a file */
+1
include/uapi/asm-generic/mman.h
··· 13 13 #define MAP_NONBLOCK 0x10000 /* do not block on IO */ 14 14 #define MAP_STACK 0x20000 /* give out an address that is best suited for process/thread stacks */ 15 15 #define MAP_HUGETLB 0x40000 /* create a huge page mapping */ 16 + #define MAP_SYNC 0x80000 /* perform synchronous page faults for the mapping */ 16 17 17 18 /* Bits [26:31] are reserved, see mman-common.h for MAP_HUGETLB usage */ 18 19
+15
mm/mmap.c
··· 1387 1387 1388 1388 if (file) { 1389 1389 struct inode *inode = file_inode(file); 1390 + unsigned long flags_mask; 1391 + 1392 + flags_mask = LEGACY_MAP_MASK | file->f_op->mmap_supported_flags; 1390 1393 1391 1394 switch (flags & MAP_TYPE) { 1392 1395 case MAP_SHARED: 1396 + /* 1397 + * Force use of MAP_SHARED_VALIDATE with non-legacy 1398 + * flags. E.g. MAP_SYNC is dangerous to use with 1399 + * MAP_SHARED as you don't know which consistency model 1400 + * you will get. We silently ignore unsupported flags 1401 + * with MAP_SHARED to preserve backward compatibility. 1402 + */ 1403 + flags &= LEGACY_MAP_MASK; 1404 + /* fall through */ 1405 + case MAP_SHARED_VALIDATE: 1406 + if (flags & ~flags_mask) 1407 + return -EOPNOTSUPP; 1393 1408 if ((prot&PROT_WRITE) && !(file->f_mode&FMODE_WRITE)) 1394 1409 return -EACCES; 1395 1410
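The user-visible contract of this hunk: plain MAP_SHARED silently masks off non-legacy flags such as MAP_SYNC (old binaries keep working, but get no guarantee), while MAP_SHARED_VALIDATE turns an unsupported flag into -EOPNOTSUPP so the caller can react. A hedged userspace sketch of the resulting probe-and-fallback pattern (fd is assumed to be an open file on a filesystem that may or may not support MAP_SYNC):

#include <errno.h>
#include <sys/mman.h>

#ifndef MAP_SHARED_VALIDATE
#define MAP_SHARED_VALIDATE 0x03
#endif
#ifndef MAP_SYNC
#define MAP_SYNC 0x80000
#endif

static void *map_sync_or_fallback(int fd, size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);

	if (p != MAP_FAILED)
		return p;	/* synchronous page faults guaranteed */
	if (errno != EOPNOTSUPP && errno != EINVAL)
		return MAP_FAILED;
	/*
	 * EOPNOTSUPP: this kernel validated the flags but the fs/device
	 * cannot honour MAP_SYNC; EINVAL: an older kernel that does not
	 * know MAP_SHARED_VALIDATE. Either way, fall back to a plain
	 * shared mapping plus explicit fsync()/msync().
	 */
	return mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}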
+1
tools/include/uapi/asm-generic/mman-common.h
··· 17 17 18 18 #define MAP_SHARED 0x01 /* Share changes */ 19 19 #define MAP_PRIVATE 0x02 /* Changes are private */ 20 + #define MAP_SHARED_VALIDATE 0x03 /* share + validate extension flags */ 20 21 #define MAP_TYPE 0x0f /* Mask for type of mapping */ 21 22 #define MAP_FIXED 0x10 /* Interpret addr exactly */ 22 23 #define MAP_ANONYMOUS 0x20 /* don't use a file */
+1
tools/testing/nvdimm/Kbuild
··· 70 70 libnvdimm-y += $(NVDIMM_SRC)/region.o 71 71 libnvdimm-y += $(NVDIMM_SRC)/namespace_devs.o 72 72 libnvdimm-y += $(NVDIMM_SRC)/label.o 73 + libnvdimm-y += $(NVDIMM_SRC)/badrange.o 73 74 libnvdimm-$(CONFIG_ND_CLAIM) += $(NVDIMM_SRC)/claim.o 74 75 libnvdimm-$(CONFIG_BTT) += $(NVDIMM_SRC)/btt_devs.o 75 76 libnvdimm-$(CONFIG_NVDIMM_PFN) += $(NVDIMM_SRC)/pfn_devs.o
+287 -32
tools/testing/nvdimm/test/nfit.c
··· 168 168 spinlock_t lock; 169 169 } ars_state; 170 170 struct device *dimm_dev[NUM_DCR]; 171 + struct badrange badrange; 172 + struct work_struct work; 171 173 }; 174 + 175 + static struct workqueue_struct *nfit_wq; 172 176 173 177 static struct nfit_test *to_nfit_test(struct device *dev) 174 178 { ··· 238 234 return rc; 239 235 } 240 236 241 - #define NFIT_TEST_ARS_RECORDS 4 242 237 #define NFIT_TEST_CLEAR_ERR_UNIT 256 243 238 244 239 static int nfit_test_cmd_ars_cap(struct nd_cmd_ars_cap *nd_cmd, 245 240 unsigned int buf_len) 246 241 { 242 + int ars_recs; 243 + 247 244 if (buf_len < sizeof(*nd_cmd)) 248 245 return -EINVAL; 249 246 247 + /* for testing, only store up to n records that fit within 4k */ 248 + ars_recs = SZ_4K / sizeof(struct nd_ars_record); 249 + 250 250 nd_cmd->max_ars_out = sizeof(struct nd_cmd_ars_status) 251 - + NFIT_TEST_ARS_RECORDS * sizeof(struct nd_ars_record); 251 + + ars_recs * sizeof(struct nd_ars_record); 252 252 nd_cmd->status = (ND_ARS_PERSISTENT | ND_ARS_VOLATILE) << 16; 253 253 nd_cmd->clear_err_unit = NFIT_TEST_CLEAR_ERR_UNIT; 254 254 255 255 return 0; 256 256 } 257 257 258 - /* 259 - * Initialize the ars_state to return an ars_result 1 second in the future with 260 - * a 4K error range in the middle of the requested address range. 261 - */ 262 - static void post_ars_status(struct ars_state *ars_state, u64 addr, u64 len) 258 + static void post_ars_status(struct ars_state *ars_state, 259 + struct badrange *badrange, u64 addr, u64 len) 263 260 { 264 261 struct nd_cmd_ars_status *ars_status; 265 262 struct nd_ars_record *ars_record; 263 + struct badrange_entry *be; 264 + u64 end = addr + len - 1; 265 + int i = 0; 266 266 267 267 ars_state->deadline = jiffies + 1*HZ; 268 268 ars_status = ars_state->ars_status; 269 269 ars_status->status = 0; 270 - ars_status->out_length = sizeof(struct nd_cmd_ars_status) 271 - + sizeof(struct nd_ars_record); 272 270 ars_status->address = addr; 273 271 ars_status->length = len; 274 272 ars_status->type = ND_ARS_PERSISTENT; 275 - ars_status->num_records = 1; 276 - ars_record = &ars_status->records[0]; 277 - ars_record->handle = 0; 278 - ars_record->err_address = addr + len / 2; 279 - ars_record->length = SZ_4K; 273 + 274 + spin_lock(&badrange->lock); 275 + list_for_each_entry(be, &badrange->list, list) { 276 + u64 be_end = be->start + be->length - 1; 277 + u64 rstart, rend; 278 + 279 + /* skip entries outside the range */ 280 + if (be_end < addr || be->start > end) 281 + continue; 282 + 283 + rstart = (be->start < addr) ? addr : be->start; 284 + rend = (be_end < end) ? 
be_end : end; 285 + ars_record = &ars_status->records[i]; 286 + ars_record->handle = 0; 287 + ars_record->err_address = rstart; 288 + ars_record->length = rend - rstart + 1; 289 + i++; 290 + } 291 + spin_unlock(&badrange->lock); 292 + ars_status->num_records = i; 293 + ars_status->out_length = sizeof(struct nd_cmd_ars_status) 294 + + i * sizeof(struct nd_ars_record); 280 295 } 281 296 282 - static int nfit_test_cmd_ars_start(struct ars_state *ars_state, 297 + static int nfit_test_cmd_ars_start(struct nfit_test *t, 298 + struct ars_state *ars_state, 283 299 struct nd_cmd_ars_start *ars_start, unsigned int buf_len, 284 300 int *cmd_rc) 285 301 { ··· 313 289 } else { 314 290 ars_start->status = 0; 315 291 ars_start->scrub_time = 1; 316 - post_ars_status(ars_state, ars_start->address, 292 + post_ars_status(ars_state, &t->badrange, ars_start->address, 317 293 ars_start->length); 318 294 *cmd_rc = 0; 319 295 } ··· 344 320 return 0; 345 321 } 346 322 347 - static int nfit_test_cmd_clear_error(struct nd_cmd_clear_error *clear_err, 323 + static int nfit_test_cmd_clear_error(struct nfit_test *t, 324 + struct nd_cmd_clear_error *clear_err, 348 325 unsigned int buf_len, int *cmd_rc) 349 326 { 350 327 const u64 mask = NFIT_TEST_CLEAR_ERR_UNIT - 1; ··· 355 330 if ((clear_err->address & mask) || (clear_err->length & mask)) 356 331 return -EINVAL; 357 332 358 - /* 359 - * Report 'all clear' success for all commands even though a new 360 - * scrub will find errors again. This is enough to have the 361 - * error removed from the 'badblocks' tracking in the pmem 362 - * driver. 363 - */ 333 + badrange_forget(&t->badrange, clear_err->address, clear_err->length); 364 334 clear_err->status = 0; 365 335 clear_err->cleared = clear_err->length; 366 336 *cmd_rc = 0; 337 + return 0; 338 + } 339 + 340 + struct region_search_spa { 341 + u64 addr; 342 + struct nd_region *region; 343 + }; 344 + 345 + static int is_region_device(struct device *dev) 346 + { 347 + return !strncmp(dev->kobj.name, "region", 6); 348 + } 349 + 350 + static int nfit_test_search_region_spa(struct device *dev, void *data) 351 + { 352 + struct region_search_spa *ctx = data; 353 + struct nd_region *nd_region; 354 + resource_size_t ndr_end; 355 + 356 + if (!is_region_device(dev)) 357 + return 0; 358 + 359 + nd_region = to_nd_region(dev); 360 + ndr_end = nd_region->ndr_start + nd_region->ndr_size; 361 + 362 + if (ctx->addr >= nd_region->ndr_start && ctx->addr < ndr_end) { 363 + ctx->region = nd_region; 364 + return 1; 365 + } 366 + 367 + return 0; 368 + } 369 + 370 + static int nfit_test_search_spa(struct nvdimm_bus *bus, 371 + struct nd_cmd_translate_spa *spa) 372 + { 373 + int ret; 374 + struct nd_region *nd_region = NULL; 375 + struct nvdimm *nvdimm = NULL; 376 + struct nd_mapping *nd_mapping = NULL; 377 + struct region_search_spa ctx = { 378 + .addr = spa->spa, 379 + .region = NULL, 380 + }; 381 + u64 dpa; 382 + 383 + ret = device_for_each_child(&bus->dev, &ctx, 384 + nfit_test_search_region_spa); 385 + 386 + if (!ret) 387 + return -ENODEV; 388 + 389 + nd_region = ctx.region; 390 + 391 + dpa = ctx.addr - nd_region->ndr_start; 392 + 393 + /* 394 + * last dimm is selected for test 395 + */ 396 + nd_mapping = &nd_region->mapping[nd_region->ndr_mappings - 1]; 397 + nvdimm = nd_mapping->nvdimm; 398 + 399 + spa->devices[0].nfit_device_handle = handle[nvdimm->id]; 400 + spa->num_nvdimms = 1; 401 + spa->devices[0].dpa = dpa; 402 + 403 + return 0; 404 + } 405 + 406 + static int nfit_test_cmd_translate_spa(struct nvdimm_bus *bus, 407 + struct 
nd_cmd_translate_spa *spa, unsigned int buf_len) 408 + { 409 + if (buf_len < spa->translate_length) 410 + return -EINVAL; 411 + 412 + if (nfit_test_search_spa(bus, spa) < 0 || !spa->num_nvdimms) 413 + spa->status = 2; 414 + 367 415 return 0; 368 416 } 369 417 ··· 473 375 if (buf_len < sizeof(*smart_t)) 474 376 return -EINVAL; 475 377 memcpy(smart_t->data, &smart_t_data, sizeof(smart_t_data)); 378 + return 0; 379 + } 380 + 381 + static void uc_error_notify(struct work_struct *work) 382 + { 383 + struct nfit_test *t = container_of(work, typeof(*t), work); 384 + 385 + __acpi_nfit_notify(&t->pdev.dev, t, NFIT_NOTIFY_UC_MEMORY_ERROR); 386 + } 387 + 388 + static int nfit_test_cmd_ars_error_inject(struct nfit_test *t, 389 + struct nd_cmd_ars_err_inj *err_inj, unsigned int buf_len) 390 + { 391 + int rc; 392 + 393 + if (buf_len != sizeof(*err_inj)) { 394 + rc = -EINVAL; 395 + goto err; 396 + } 397 + 398 + if (err_inj->err_inj_spa_range_length <= 0) { 399 + rc = -EINVAL; 400 + goto err; 401 + } 402 + 403 + rc = badrange_add(&t->badrange, err_inj->err_inj_spa_range_base, 404 + err_inj->err_inj_spa_range_length); 405 + if (rc < 0) 406 + goto err; 407 + 408 + if (err_inj->err_inj_options & (1 << ND_ARS_ERR_INJ_OPT_NOTIFY)) 409 + queue_work(nfit_wq, &t->work); 410 + 411 + err_inj->status = 0; 412 + return 0; 413 + 414 + err: 415 + err_inj->status = NFIT_ARS_INJECT_INVALID; 416 + return rc; 417 + } 418 + 419 + static int nfit_test_cmd_ars_inject_clear(struct nfit_test *t, 420 + struct nd_cmd_ars_err_inj_clr *err_clr, unsigned int buf_len) 421 + { 422 + int rc; 423 + 424 + if (buf_len != sizeof(*err_clr)) { 425 + rc = -EINVAL; 426 + goto err; 427 + } 428 + 429 + if (err_clr->err_inj_clr_spa_range_length <= 0) { 430 + rc = -EINVAL; 431 + goto err; 432 + } 433 + 434 + badrange_forget(&t->badrange, err_clr->err_inj_clr_spa_range_base, 435 + err_clr->err_inj_clr_spa_range_length); 436 + 437 + err_clr->status = 0; 438 + return 0; 439 + 440 + err: 441 + err_clr->status = NFIT_ARS_INJECT_INVALID; 442 + return rc; 443 + } 444 + 445 + static int nfit_test_cmd_ars_inject_status(struct nfit_test *t, 446 + struct nd_cmd_ars_err_inj_stat *err_stat, 447 + unsigned int buf_len) 448 + { 449 + struct badrange_entry *be; 450 + int max = SZ_4K / sizeof(struct nd_error_stat_query_record); 451 + int i = 0; 452 + 453 + err_stat->status = 0; 454 + spin_lock(&t->badrange.lock); 455 + list_for_each_entry(be, &t->badrange.list, list) { 456 + err_stat->record[i].err_inj_stat_spa_range_base = be->start; 457 + err_stat->record[i].err_inj_stat_spa_range_length = be->length; 458 + i++; 459 + if (i > max) 460 + break; 461 + } 462 + spin_unlock(&t->badrange.lock); 463 + err_stat->inj_err_rec_count = i; 464 + 476 465 return 0; 477 466 } 478 467 ··· 634 449 } 635 450 } else { 636 451 struct ars_state *ars_state = &t->ars_state; 452 + struct nd_cmd_pkg *call_pkg = buf; 453 + 454 + if (!nd_desc) 455 + return -ENOTTY; 456 + 457 + if (cmd == ND_CMD_CALL) { 458 + func = call_pkg->nd_command; 459 + 460 + buf_len = call_pkg->nd_size_in + call_pkg->nd_size_out; 461 + buf = (void *) call_pkg->nd_payload; 462 + 463 + switch (func) { 464 + case NFIT_CMD_TRANSLATE_SPA: 465 + rc = nfit_test_cmd_translate_spa( 466 + acpi_desc->nvdimm_bus, buf, buf_len); 467 + return rc; 468 + case NFIT_CMD_ARS_INJECT_SET: 469 + rc = nfit_test_cmd_ars_error_inject(t, buf, 470 + buf_len); 471 + return rc; 472 + case NFIT_CMD_ARS_INJECT_CLEAR: 473 + rc = nfit_test_cmd_ars_inject_clear(t, buf, 474 + buf_len); 475 + return rc; 476 + case NFIT_CMD_ARS_INJECT_GET: 477 + rc = 
nfit_test_cmd_ars_inject_status(t, buf, 478 + buf_len); 479 + return rc; 480 + default: 481 + return -ENOTTY; 482 + } 483 + } 637 484 638 485 if (!nd_desc || !test_bit(cmd, &nd_desc->cmd_mask)) 639 486 return -ENOTTY; ··· 675 458 rc = nfit_test_cmd_ars_cap(buf, buf_len); 676 459 break; 677 460 case ND_CMD_ARS_START: 678 - rc = nfit_test_cmd_ars_start(ars_state, buf, buf_len, 679 - cmd_rc); 461 + rc = nfit_test_cmd_ars_start(t, ars_state, buf, 462 + buf_len, cmd_rc); 680 463 break; 681 464 case ND_CMD_ARS_STATUS: 682 465 rc = nfit_test_cmd_ars_status(ars_state, buf, buf_len, 683 466 cmd_rc); 684 467 break; 685 468 case ND_CMD_CLEAR_ERROR: 686 - rc = nfit_test_cmd_clear_error(buf, buf_len, cmd_rc); 469 + rc = nfit_test_cmd_clear_error(t, buf, buf_len, cmd_rc); 687 470 break; 688 471 default: 689 472 return -ENOTTY; ··· 783 566 784 567 static int ars_state_init(struct device *dev, struct ars_state *ars_state) 785 568 { 569 + /* for testing, only store up to n records that fit within 4k */ 786 570 ars_state->ars_status = devm_kzalloc(dev, 787 - sizeof(struct nd_cmd_ars_status) 788 - + sizeof(struct nd_ars_record) * NFIT_TEST_ARS_RECORDS, 789 - GFP_KERNEL); 571 + sizeof(struct nd_cmd_ars_status) + SZ_4K, GFP_KERNEL); 790 572 if (!ars_state->ars_status) 791 573 return -ENOMEM; 792 574 spin_lock_init(&ars_state->lock); ··· 1635 1419 + i * sizeof(u64); 1636 1420 } 1637 1421 1638 - post_ars_status(&t->ars_state, t->spa_set_dma[0], SPA0_SIZE); 1422 + post_ars_status(&t->ars_state, &t->badrange, t->spa_set_dma[0], 1423 + SPA0_SIZE); 1639 1424 1640 1425 acpi_desc = &t->acpi_desc; 1641 1426 set_bit(ND_CMD_GET_CONFIG_SIZE, &acpi_desc->dimm_cmd_force_en); ··· 1647 1430 set_bit(ND_CMD_ARS_START, &acpi_desc->bus_cmd_force_en); 1648 1431 set_bit(ND_CMD_ARS_STATUS, &acpi_desc->bus_cmd_force_en); 1649 1432 set_bit(ND_CMD_CLEAR_ERROR, &acpi_desc->bus_cmd_force_en); 1433 + set_bit(ND_CMD_CALL, &acpi_desc->bus_cmd_force_en); 1650 1434 set_bit(ND_CMD_SMART_THRESHOLD, &acpi_desc->dimm_cmd_force_en); 1435 + set_bit(NFIT_CMD_TRANSLATE_SPA, &acpi_desc->bus_nfit_cmd_force_en); 1436 + set_bit(NFIT_CMD_ARS_INJECT_SET, &acpi_desc->bus_nfit_cmd_force_en); 1437 + set_bit(NFIT_CMD_ARS_INJECT_CLEAR, &acpi_desc->bus_nfit_cmd_force_en); 1438 + set_bit(NFIT_CMD_ARS_INJECT_GET, &acpi_desc->bus_nfit_cmd_force_en); 1651 1439 } 1652 1440 1653 1441 static void nfit_test1_setup(struct nfit_test *t) ··· 1742 1520 dcr->code = NFIT_FIC_BYTE; 1743 1521 dcr->windows = 0; 1744 1522 1745 - post_ars_status(&t->ars_state, t->spa_set_dma[0], SPA2_SIZE); 1523 + post_ars_status(&t->ars_state, &t->badrange, t->spa_set_dma[0], 1524 + SPA2_SIZE); 1746 1525 1747 1526 acpi_desc = &t->acpi_desc; 1748 1527 set_bit(ND_CMD_ARS_CAP, &acpi_desc->bus_cmd_force_en); ··· 1812 1589 unsigned long mask, cmd_size, offset; 1813 1590 union { 1814 1591 struct nd_cmd_get_config_size cfg_size; 1592 + struct nd_cmd_clear_error clear_err; 1815 1593 struct nd_cmd_ars_status ars_stat; 1816 1594 struct nd_cmd_ars_cap ars_cap; 1817 1595 char buf[sizeof(struct nd_cmd_ars_status) ··· 1837 1613 .cmd_mask = 1UL << ND_CMD_ARS_CAP 1838 1614 | 1UL << ND_CMD_ARS_START 1839 1615 | 1UL << ND_CMD_ARS_STATUS 1840 - | 1UL << ND_CMD_CLEAR_ERROR, 1616 + | 1UL << ND_CMD_CLEAR_ERROR 1617 + | 1UL << ND_CMD_CALL, 1841 1618 .module = THIS_MODULE, 1842 1619 .provider_name = "ACPI.NFIT", 1843 1620 .ndctl = acpi_nfit_ctl, 1621 + .bus_dsm_mask = 1UL << NFIT_CMD_TRANSLATE_SPA 1622 + | 1UL << NFIT_CMD_ARS_INJECT_SET 1623 + | 1UL << NFIT_CMD_ARS_INJECT_CLEAR 1624 + | 1UL << NFIT_CMD_ARS_INJECT_GET, 
1844 1625 }, 1845 1626 .dev = &adev->dev, 1846 1627 }; ··· 1991 1762 cmds.buf, cmd_size, &cmd_rc); 1992 1763 1993 1764 if (rc < 0 || cmd_rc >= 0) { 1765 + dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n", 1766 + __func__, __LINE__, rc, cmd_rc); 1767 + return -EIO; 1768 + } 1769 + 1770 + /* test clear error */ 1771 + cmd_size = sizeof(cmds.clear_err); 1772 + cmds.clear_err = (struct nd_cmd_clear_error) { 1773 + .length = 512, 1774 + .cleared = 512, 1775 + }; 1776 + rc = setup_result(cmds.buf, cmd_size); 1777 + if (rc) 1778 + return rc; 1779 + rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_CLEAR_ERROR, 1780 + cmds.buf, cmd_size, &cmd_rc); 1781 + if (rc < 0 || cmd_rc) { 1994 1782 dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n", 1995 1783 __func__, __LINE__, rc, cmd_rc); 1996 1784 return -EIO; ··· 2161 1915 2162 1916 nfit_test_setup(nfit_test_lookup, nfit_test_evaluate_dsm); 2163 1917 1918 + nfit_wq = create_singlethread_workqueue("nfit"); 1919 + if (!nfit_wq) 1920 + return -ENOMEM; 1921 + 2164 1922 nfit_test_dimm = class_create(THIS_MODULE, "nfit_test_dimm"); 2165 1923 if (IS_ERR(nfit_test_dimm)) { 2166 1924 rc = PTR_ERR(nfit_test_dimm); ··· 2181 1931 goto err_register; 2182 1932 } 2183 1933 INIT_LIST_HEAD(&nfit_test->resources); 1934 + badrange_init(&nfit_test->badrange); 2184 1935 switch (i) { 2185 1936 case 0: 2186 1937 nfit_test->num_pm = NUM_PM; ··· 2217 1966 goto err_register; 2218 1967 2219 1968 instances[i] = nfit_test; 1969 + INIT_WORK(&nfit_test->work, uc_error_notify); 2220 1970 } 2221 1971 2222 1972 rc = platform_driver_register(&nfit_test_driver); ··· 2226 1974 return 0; 2227 1975 2228 1976 err_register: 1977 + destroy_workqueue(nfit_wq); 2229 1978 for (i = 0; i < NUM_NFITS; i++) 2230 1979 if (instances[i]) 2231 1980 platform_device_unregister(&instances[i]->pdev); ··· 2242 1989 { 2243 1990 int i; 2244 1991 1992 + flush_workqueue(nfit_wq); 1993 + destroy_workqueue(nfit_wq); 2245 1994 for (i = 0; i < NUM_NFITS; i++) 2246 1995 platform_device_unregister(&instances[i]->pdev); 2247 1996 platform_driver_unregister(&nfit_test_driver);
+52
tools/testing/nvdimm/test/nfit_test.h
··· 32 32 void *buf; 33 33 }; 34 34 35 + #define ND_TRANSLATE_SPA_STATUS_INVALID_SPA 2 36 + #define NFIT_ARS_INJECT_INVALID 2 37 + 38 + enum err_inj_options { 39 + ND_ARS_ERR_INJ_OPT_NOTIFY = 0, 40 + }; 41 + 42 + /* nfit commands */ 43 + enum nfit_cmd_num { 44 + NFIT_CMD_TRANSLATE_SPA = 5, 45 + NFIT_CMD_ARS_INJECT_SET = 7, 46 + NFIT_CMD_ARS_INJECT_CLEAR = 8, 47 + NFIT_CMD_ARS_INJECT_GET = 9, 48 + }; 49 + 50 + struct nd_cmd_translate_spa { 51 + __u64 spa; 52 + __u32 status; 53 + __u8 flags; 54 + __u8 _reserved[3]; 55 + __u64 translate_length; 56 + __u32 num_nvdimms; 57 + struct nd_nvdimm_device { 58 + __u32 nfit_device_handle; 59 + __u32 _reserved; 60 + __u64 dpa; 61 + } __packed devices[0]; 62 + 63 + } __packed; 64 + 65 + struct nd_cmd_ars_err_inj { 66 + __u64 err_inj_spa_range_base; 67 + __u64 err_inj_spa_range_length; 68 + __u8 err_inj_options; 69 + __u32 status; 70 + } __packed; 71 + 72 + struct nd_cmd_ars_err_inj_clr { 73 + __u64 err_inj_clr_spa_range_base; 74 + __u64 err_inj_clr_spa_range_length; 75 + __u32 status; 76 + } __packed; 77 + 78 + struct nd_cmd_ars_err_inj_stat { 79 + __u32 status; 80 + __u32 inj_err_rec_count; 81 + struct nd_error_stat_query_record { 82 + __u64 err_inj_stat_spa_range_base; 83 + __u64 err_inj_stat_spa_range_length; 84 + } __packed record[0]; 85 + } __packed; 86 + 35 87 union acpi_object; 36 88 typedef void *acpi_handle; 37 89
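These structures are test-module definitions rather than uapi, so a userspace driver has to restate them; only struct nd_cmd_pkg and ND_IOCTL_CALL come from <linux/ndctl.h>. A heavily hedged sketch of injecting one bad range into the nfit_test bus (the command number and payload layout mirror the definitions above, /dev/ndctl0 is the bus control device in a typical nfit_test setup, and in practice ndctl's libndctl wraps this interface):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/ndctl.h>

#define MY_NFIT_CMD_ARS_INJECT_SET 7	/* mirrors enum nfit_cmd_num */

struct my_ars_err_inj {			/* mirrors nd_cmd_ars_err_inj */
	__u64 base;
	__u64 length;
	__u8 options;
	__u32 status;
} __attribute__((packed));

int main(void)
{
	struct {
		struct nd_cmd_pkg pkg;
		struct my_ars_err_inj err;
	} __attribute__((packed)) cmd;
	int fd = open("/dev/ndctl0", O_RDWR);

	if (fd < 0)
		return 1;
	memset(&cmd, 0, sizeof(cmd));
	cmd.pkg.nd_command = MY_NFIT_CMD_ARS_INJECT_SET;
	cmd.pkg.nd_size_in = 17;	/* base + length + options */
	cmd.pkg.nd_size_out = 4;	/* status */
	cmd.err.base = 0x100000000ULL;	/* hypothetical SPA */
	cmd.err.length = 0x200;
	cmd.err.options = 1 << 0;	/* ND_ARS_ERR_INJ_OPT_NOTIFY */
	if (ioctl(fd, ND_IOCTL_CALL, &cmd) < 0)
		perror("ND_IOCTL_CALL");
	else
		printf("inject status: %u\n", cmd.err.status);
	close(fd);
	return 0;
}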