Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'cxl-for-6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl

Pull CXL updates from Dan Williams:
"The highlights in terms of new functionality are support for the
standard CXL Performance Monitor definition that appeared in CXL 3.0,
support for device sanitization (wiping all data from a device),
secure-erase (re-keying encryption of user data), and support for
firmware update. The firmware update support is notable as it reuses
the simple sysfs_upload interface to just cat(1) a blob to a sysfs
file and pipe that to the device.

Additionally there are a substantial number of cleanups and
reorganizations to get ready for RCH error handling (RCH == Restricted
CXL Host == current shipping hardware generation / pre CXL-2.0
topologies) and type-2 (accelerator / vendor specific) devices.

Vendor-specific devices implement a subset of what the generic type-3
(memory expander) driver expects. As a result, the rework decouples
optional infrastructure from the core driver context.

For RCH topologies, where the specification working group did not want
to confuse pre-CXL-aware operating systems, many of the standard
registers are hidden, which makes supporting standard bus features like
AER (PCIe Advanced Error Reporting) difficult. The rework arranges for
the driver to help the PCI-AER core. Bjorn is on board with this
direction, but a late regression discovery means the completion of this
functionality needs to cook a bit longer, so it is code
reorganizations only for now.

Summary:

- Add infrastructure for supporting background commands, along with
support for device sanitization and firmware update (a condensed
sketch of the background-command status readback follows this summary)

- Introduce a CXL performance monitoring unit driver based on the
common definition in the specification.

- Land some preparatory cleanup and refactoring for the anticipated
arrival of CXL type-2 (accelerator devices) and CXL RCH (CXL-v1.1
topology) error handling.

- Rework CPU cache management with respect to region configuration
(device hotplug or other dynamic changes to memory interleaving)

- Fix region reconfiguration vs CXL decoder ordering rules"
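
The background-command infrastructure noted in the first summary item
surfaces progress through the mailbox background-command status register.
The sketch below condenses the readback from security_state_show() in the
drivers/cxl/core/memdev.c hunk further down; register and field names are
as used in this series, but this is not the complete driver path:

#include <linux/bitfield.h>
#include <linux/io-64-nonatomic-lo-hi.h>

/*
 * Condensed from security_state_show() in the memdev.c hunk below;
 * not the complete driver logic.
 */
static bool cxl_sanitize_in_flight(struct cxl_dev_state *cxlds)
{
	u64 reg = readq(cxlds->regs.mbox + CXLDEV_MBOX_BG_CMD_STATUS_OFFSET);
	u16 opcode = FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_OPCODE_MASK, reg);
	u32 pct = FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_PCT_MASK, reg);

	/* a sanitize opcode that has not reached 100% is still running */
	return opcode == CXL_MBOX_OP_SANITIZE && pct != 100;
}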

* tag 'cxl-for-6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl: (51 commits)
cxl: Fix one kernel-doc comment
cxl/pci: Use correct flag for sanitize polling
docs: perf: Minimal introduction to the CXL PMU device and driver
perf: CXL Performance Monitoring Unit driver
tools/testing/cxl: add firmware update emulation to CXL memdevs
tools/testing/cxl: Use named effects for the Command Effect Log
tools/testing/cxl: Fix command effects for inject/clear poison
cxl: add a firmware update mechanism using the sysfs firmware loader
cxl/test: Add Secure Erase opcode support
cxl/mem: Support Secure Erase
cxl/test: Add Sanitize opcode support
cxl/mem: Wire up Sanitization support
cxl/mbox: Add sanitization handling machinery
cxl/mem: Introduce security state sysfs file
cxl/mbox: Allow for IRQ_NONE case in the isr
Revert "cxl/port: Enable the HDM decoder capability for switch ports"
cxl/memdev: Formalize endpoint port linkage
cxl/pci: Unconditionally unmask 256B Flit errors
cxl/region: Manage decoder target_type at decoder-attach time
cxl/hdm: Default CXL_DEVTYPE_DEVMEM decoders to CXL_DECODER_DEVMEM
...

+3516 -884
+48
Documentation/ABI/testing/sysfs-bus-cxl
···
		affinity for this device.

+ What:		/sys/bus/cxl/devices/memX/security/state
+ Date:		June, 2023
+ KernelVersion:	v6.5
+ Contact:	linux-cxl@vger.kernel.org
+ Description:
+		(RO) Reading this file will display the CXL security state for
+		that device. Such states can be: 'disabled'; 'sanitize', when
+		a sanitization is currently underway; or those available only
+		for persistent memory: 'locked', 'unlocked' or 'frozen'. This
+		sysfs entry is select/poll capable from userspace to notify
+		upon completion of a sanitize operation.
+
+ What:		/sys/bus/cxl/devices/memX/security/sanitize
+ Date:		June, 2023
+ KernelVersion:	v6.5
+ Contact:	linux-cxl@vger.kernel.org
+ Description:
+		(WO) Write a boolean 'true' string value to this attribute to
+		sanitize the device to securely re-purpose or decommission it.
+		This is done by ensuring that all user data and meta-data,
+		whether it resides in persistent capacity, volatile capacity,
+		or the LSA, is made permanently unavailable by whatever means
+		is appropriate for the media type. This functionality requires
+		that the device not be actively decoding any HPA ranges.
+
+ What:		/sys/bus/cxl/devices/memX/security/erase
+ Date:		June, 2023
+ KernelVersion:	v6.5
+ Contact:	linux-cxl@vger.kernel.org
+ Description:
+		(WO) Write a boolean 'true' string value to this attribute to
+		secure erase user data by changing the media encryption keys
+		for all user data areas of the device.
+
+ What:		/sys/bus/cxl/devices/memX/firmware/
+ Date:		April, 2023
+ KernelVersion:	v6.5
+ Contact:	linux-cxl@vger.kernel.org
+ Description:
+		(RW) Firmware uploader mechanism. The different files under
+		this directory can be used to upload and activate new
+		firmware for CXL devices. The interfaces under this directory
+		are documented in sysfs-class-firmware.
+
···
What:		/sys/bus/cxl/devices/*/devtype
Date:		June, 2021
KernelVersion:	v5.14
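As a concrete illustration of the entries above, here is a minimal
userspace sketch (not part of this series) that triggers a sanitize and
then waits on the poll-capable state attribute; "mem0" is a hypothetical
device, and a real tool would also verify that no regions are mapped
before writing:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define SEC_DIR "/sys/bus/cxl/devices/mem0/security"	/* hypothetical */

int main(void)
{
	char state[32] = { 0 };
	struct pollfd pfd;
	int fd;

	/* kick off the sanitize; any boolean 'true' string works */
	fd = open(SEC_DIR "/sanitize", O_WRONLY);
	if (fd < 0 || write(fd, "1", 1) != 1) {
		perror("sanitize");
		return 1;
	}
	close(fd);

	pfd.fd = open(SEC_DIR "/state", O_RDONLY);
	pfd.events = POLLPRI;	/* sysfs_notify() wakes pollers with PRI/ERR */

	/* prime the poll: sysfs requires a read before waiting */
	read(pfd.fd, state, sizeof(state) - 1);
	while (!strncmp(state, "sanitize", 8)) {
		poll(&pfd, 1, -1);
		lseek(pfd.fd, 0, SEEK_SET);
		memset(state, 0, sizeof(state));
		read(pfd.fd, state, sizeof(state) - 1);
	}
	printf("final state: %s", state);
	return 0;
}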
+68
Documentation/admin-guide/perf/cxl.rst
.. SPDX-License-Identifier: GPL-2.0

======================================
CXL Performance Monitoring Unit (CPMU)
======================================

The CXL rev 3.0 specification provides a definition of the CXL Performance
Monitoring Unit in section 13.2: Performance Monitoring.

CXL components (e.g. Root Port, Switch Upstream Port, End Point) may have
any number of CPMU instances. CPMU capabilities are fully discoverable from
the devices. The specification provides event definitions for all CXL protocol
message types and a set of additional events for things commonly counted on
CXL devices (e.g. DRAM events).

CPMU driver
===========

The CPMU driver registers a perf PMU with the name pmu_mem<X>.<Y> on the CXL
bus, representing the Yth CPMU for memX::

    /sys/bus/cxl/device/pmu_mem<X>.<Y>

The associated PMU is registered as::

    /sys/bus/event_sources/devices/cxl_pmu_mem<X>.<Y>

In common with other CXL bus devices, the id has no specific meaning and the
relationship to a specific CXL device should be established via the device
parent of the device on the CXL bus.

The PMU driver provides a description of available events and filter options
in sysfs.

The "format" directory describes all formats of the config (event vendor id,
group id and mask), config1 (threshold, filter enables) and config2 (filter
parameters) fields of the perf_event_attr structure. The "events" directory
describes all documented events shown in perf list.

The events shown in perf list are the most fine-grained events with a single
bit of the event mask set. More general events may be enabled by setting
multiple mask bits in config. For example, all Device to Host Read Requests
may be captured on a single counter by setting the bits for all of

* d2h_req_rdcurr
* d2h_req_rdown
* d2h_req_rdshared
* d2h_req_rdany
* d2h_req_rdownnodata

Example of usage::

  $# perf list
  cxl_pmu_mem0.0/clock_ticks/                    [Kernel PMU event]
  cxl_pmu_mem0.0/d2h_req_rdshared/               [Kernel PMU event]
  cxl_pmu_mem0.0/h2d_req_snpcur/                 [Kernel PMU event]
  cxl_pmu_mem0.0/h2d_req_snpdata/                [Kernel PMU event]
  cxl_pmu_mem0.0/h2d_req_snpinv/                 [Kernel PMU event]

  $# perf stat -a -e cxl_pmu_mem0.0/clock_ticks/ -e cxl_pmu_mem0.0/d2h_req_rdshared/

Vendor specific events may also be available and if so can be used via::

  $# perf stat -a -e cxl_pmu_mem0.0/vid=VID,gid=GID,mask=MASK/

The driver does not support sampling, so "perf record" is unsupported.
It only supports system-wide counting, so attaching to a task is
unsupported.
+1
Documentation/admin-guide/perf/index.rst
···
	alibaba_pmu
	nvidia-pmu
	meson-ddr-pmu
+	cxl
+7
MAINTAINERS
···
F:	drivers/cxl/
F:	include/uapi/linux/cxl_mem.h

+ COMPUTE EXPRESS LINK PMU (CPMU)
+ M:	Jonathan Cameron <jonathan.cameron@huawei.com>
+ L:	linux-cxl@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/admin-guide/perf/cxl.rst
+ F:	drivers/perf/cxl_pmu.c
+
CONEXANT ACCESSRUNNER USB DRIVER
L:	accessrunner-general@lists.sourceforge.net
S:	Orphan
+14
drivers/cxl/Kconfig
···
config CXL_MEM
	tristate "CXL: Memory Expansion"
	depends on CXL_PCI
+	select FW_UPLOAD
	default CXL_BUS
	help
	  The CXL.mem protocol allows a device to act as a provider of "System
···
	  If unsure, or if this kernel is meant for production environments,
	  say N.

+ config CXL_PMU
+	tristate "CXL Performance Monitoring Unit"
+	default CXL_BUS
+	depends on PERF_EVENTS
+	help
+	  Support performance monitoring as defined in CXL rev 3.0
+	  section 13.2: Performance Monitoring. CXL components may have
+	  one or more CXL Performance Monitoring Units (CPMUs).
+
+	  Say 'y/m' to enable a driver that will attach to performance
+	  monitoring units and provide standard perf based interfaces.
+
+	  If unsure say 'm'.
endif
+133 -107
drivers/cxl/acpi.c
··· 258 258 259 259 cxld = &cxlrd->cxlsd.cxld; 260 260 cxld->flags = cfmws_to_decoder_flags(cfmws->restrictions); 261 - cxld->target_type = CXL_DECODER_EXPANDER; 261 + cxld->target_type = CXL_DECODER_HOSTONLYMEM; 262 262 cxld->hpa_range = (struct range) { 263 263 .start = res->start, 264 264 .end = res->end, ··· 327 327 return NULL; 328 328 } 329 329 330 + /* Note, @dev is used by mock_acpi_table_parse_cedt() */ 331 + struct cxl_chbs_context { 332 + struct device *dev; 333 + unsigned long long uid; 334 + resource_size_t base; 335 + u32 cxl_version; 336 + }; 337 + 338 + static int cxl_get_chbs_iter(union acpi_subtable_headers *header, void *arg, 339 + const unsigned long end) 340 + { 341 + struct cxl_chbs_context *ctx = arg; 342 + struct acpi_cedt_chbs *chbs; 343 + 344 + if (ctx->base != CXL_RESOURCE_NONE) 345 + return 0; 346 + 347 + chbs = (struct acpi_cedt_chbs *) header; 348 + 349 + if (ctx->uid != chbs->uid) 350 + return 0; 351 + 352 + ctx->cxl_version = chbs->cxl_version; 353 + if (!chbs->base) 354 + return 0; 355 + 356 + if (chbs->cxl_version == ACPI_CEDT_CHBS_VERSION_CXL11 && 357 + chbs->length != CXL_RCRB_SIZE) 358 + return 0; 359 + 360 + ctx->base = chbs->base; 361 + 362 + return 0; 363 + } 364 + 365 + static int cxl_get_chbs(struct device *dev, struct acpi_device *hb, 366 + struct cxl_chbs_context *ctx) 367 + { 368 + unsigned long long uid; 369 + int rc; 370 + 371 + rc = acpi_evaluate_integer(hb->handle, METHOD_NAME__UID, NULL, &uid); 372 + if (rc != AE_OK) { 373 + dev_err(dev, "unable to retrieve _UID\n"); 374 + return -ENOENT; 375 + } 376 + 377 + dev_dbg(dev, "UID found: %lld\n", uid); 378 + *ctx = (struct cxl_chbs_context) { 379 + .dev = dev, 380 + .uid = uid, 381 + .base = CXL_RESOURCE_NONE, 382 + .cxl_version = UINT_MAX, 383 + }; 384 + 385 + acpi_table_parse_cedt(ACPI_CEDT_TYPE_CHBS, cxl_get_chbs_iter, ctx); 386 + 387 + return 0; 388 + } 389 + 390 + static int add_host_bridge_dport(struct device *match, void *arg) 391 + { 392 + acpi_status rc; 393 + struct device *bridge; 394 + struct cxl_dport *dport; 395 + struct cxl_chbs_context ctx; 396 + struct acpi_pci_root *pci_root; 397 + struct cxl_port *root_port = arg; 398 + struct device *host = root_port->dev.parent; 399 + struct acpi_device *hb = to_cxl_host_bridge(host, match); 400 + 401 + if (!hb) 402 + return 0; 403 + 404 + rc = cxl_get_chbs(match, hb, &ctx); 405 + if (rc) 406 + return rc; 407 + 408 + if (ctx.cxl_version == UINT_MAX) { 409 + dev_warn(match, "No CHBS found for Host Bridge (UID %lld)\n", 410 + ctx.uid); 411 + return 0; 412 + } 413 + 414 + if (ctx.base == CXL_RESOURCE_NONE) { 415 + dev_warn(match, "CHBS invalid for Host Bridge (UID %lld)\n", 416 + ctx.uid); 417 + return 0; 418 + } 419 + 420 + pci_root = acpi_pci_find_root(hb->handle); 421 + bridge = pci_root->bus->bridge; 422 + 423 + /* 424 + * In RCH mode, bind the component regs base to the dport. In 425 + * VH mode it will be bound to the CXL host bridge's port 426 + * object later in add_host_bridge_uport(). 
427 + */ 428 + if (ctx.cxl_version == ACPI_CEDT_CHBS_VERSION_CXL11) { 429 + dev_dbg(match, "RCRB found for UID %lld: %pa\n", ctx.uid, 430 + &ctx.base); 431 + dport = devm_cxl_add_rch_dport(root_port, bridge, ctx.uid, 432 + ctx.base); 433 + } else { 434 + dport = devm_cxl_add_dport(root_port, bridge, ctx.uid, 435 + CXL_RESOURCE_NONE); 436 + } 437 + 438 + if (IS_ERR(dport)) 439 + return PTR_ERR(dport); 440 + 441 + return 0; 442 + } 443 + 330 444 /* 331 445 * A host bridge is a dport to a CFMWS decode and it is a uport to the 332 446 * dport (PCIe Root Ports) in the host bridge. ··· 454 340 struct cxl_dport *dport; 455 341 struct cxl_port *port; 456 342 struct device *bridge; 343 + struct cxl_chbs_context ctx; 344 + resource_size_t component_reg_phys; 457 345 int rc; 458 346 459 347 if (!hb) ··· 474 358 return 0; 475 359 } 476 360 361 + rc = cxl_get_chbs(match, hb, &ctx); 362 + if (rc) 363 + return rc; 364 + 365 + if (ctx.cxl_version == ACPI_CEDT_CHBS_VERSION_CXL11) { 366 + dev_warn(bridge, 367 + "CXL CHBS version mismatch, skip port registration\n"); 368 + return 0; 369 + } 370 + 371 + component_reg_phys = ctx.base; 372 + if (component_reg_phys != CXL_RESOURCE_NONE) 373 + dev_dbg(match, "CHBCR found for UID %lld: %pa\n", 374 + ctx.uid, &component_reg_phys); 375 + 477 376 rc = devm_cxl_register_pci_bus(host, bridge, pci_root->bus); 478 377 if (rc) 479 378 return rc; 480 379 481 - port = devm_cxl_add_port(host, bridge, dport->component_reg_phys, 482 - dport); 380 + port = devm_cxl_add_port(host, bridge, component_reg_phys, dport); 483 381 if (IS_ERR(port)) 484 382 return PTR_ERR(port); 485 383 486 384 dev_info(bridge, "host supports CXL\n"); 487 - 488 - return 0; 489 - } 490 - 491 - struct cxl_chbs_context { 492 - struct device *dev; 493 - unsigned long long uid; 494 - resource_size_t rcrb; 495 - resource_size_t chbcr; 496 - u32 cxl_version; 497 - }; 498 - 499 - static int cxl_get_chbcr(union acpi_subtable_headers *header, void *arg, 500 - const unsigned long end) 501 - { 502 - struct cxl_chbs_context *ctx = arg; 503 - struct acpi_cedt_chbs *chbs; 504 - 505 - if (ctx->chbcr) 506 - return 0; 507 - 508 - chbs = (struct acpi_cedt_chbs *) header; 509 - 510 - if (ctx->uid != chbs->uid) 511 - return 0; 512 - 513 - ctx->cxl_version = chbs->cxl_version; 514 - ctx->rcrb = CXL_RESOURCE_NONE; 515 - ctx->chbcr = CXL_RESOURCE_NONE; 516 - 517 - if (!chbs->base) 518 - return 0; 519 - 520 - if (chbs->cxl_version != ACPI_CEDT_CHBS_VERSION_CXL11) { 521 - ctx->chbcr = chbs->base; 522 - return 0; 523 - } 524 - 525 - if (chbs->length != CXL_RCRB_SIZE) 526 - return 0; 527 - 528 - ctx->rcrb = chbs->base; 529 - ctx->chbcr = cxl_rcrb_to_component(ctx->dev, chbs->base, 530 - CXL_RCRB_DOWNSTREAM); 531 - 532 - return 0; 533 - } 534 - 535 - static int add_host_bridge_dport(struct device *match, void *arg) 536 - { 537 - acpi_status rc; 538 - struct device *bridge; 539 - unsigned long long uid; 540 - struct cxl_dport *dport; 541 - struct cxl_chbs_context ctx; 542 - struct acpi_pci_root *pci_root; 543 - struct cxl_port *root_port = arg; 544 - struct device *host = root_port->dev.parent; 545 - struct acpi_device *hb = to_cxl_host_bridge(host, match); 546 - 547 - if (!hb) 548 - return 0; 549 - 550 - rc = acpi_evaluate_integer(hb->handle, METHOD_NAME__UID, NULL, &uid); 551 - if (rc != AE_OK) { 552 - dev_err(match, "unable to retrieve _UID\n"); 553 - return -ENODEV; 554 - } 555 - 556 - dev_dbg(match, "UID found: %lld\n", uid); 557 - 558 - ctx = (struct cxl_chbs_context) { 559 - .dev = match, 560 - .uid = uid, 561 - }; 562 - 
acpi_table_parse_cedt(ACPI_CEDT_TYPE_CHBS, cxl_get_chbcr, &ctx); 563 - 564 - if (!ctx.chbcr) { 565 - dev_warn(match, "No CHBS found for Host Bridge (UID %lld)\n", 566 - uid); 567 - return 0; 568 - } 569 - 570 - if (ctx.rcrb != CXL_RESOURCE_NONE) 571 - dev_dbg(match, "RCRB found for UID %lld: %pa\n", uid, &ctx.rcrb); 572 - 573 - if (ctx.chbcr == CXL_RESOURCE_NONE) { 574 - dev_warn(match, "CHBCR invalid for Host Bridge (UID %lld)\n", 575 - uid); 576 - return 0; 577 - } 578 - 579 - dev_dbg(match, "CHBCR found: %pa\n", &ctx.chbcr); 580 - 581 - pci_root = acpi_pci_find_root(hb->handle); 582 - bridge = pci_root->bus->bridge; 583 - if (ctx.cxl_version == ACPI_CEDT_CHBS_VERSION_CXL11) 584 - dport = devm_cxl_add_rch_dport(root_port, bridge, uid, 585 - ctx.chbcr, ctx.rcrb); 586 - else 587 - dport = devm_cxl_add_dport(root_port, bridge, uid, 588 - ctx.chbcr); 589 - if (IS_ERR(dport)) 590 - return PTR_ERR(dport); 591 385 592 386 return 0; 593 387 }
+1
drivers/cxl/core/Makefile
···
cxl_core-y += mbox.o
cxl_core-y += pci.o
cxl_core-y += hdm.o
+ cxl_core-y += pmu.o
cxl_core-$(CONFIG_TRACING) += trace.o
cxl_core-$(CONFIG_CXL_REGION) += region.o
+11
drivers/cxl/core/core.h
···
extern const struct device_type cxl_nvdimm_bridge_type;
extern const struct device_type cxl_nvdimm_type;
+ extern const struct device_type cxl_pmu_type;

extern struct attribute_group cxl_base_attribute_group;
···
int cxl_dpa_free(struct cxl_endpoint_decoder *cxled);
resource_size_t cxl_dpa_size(struct cxl_endpoint_decoder *cxled);
resource_size_t cxl_dpa_resource_start(struct cxl_endpoint_decoder *cxled);

+ enum cxl_rcrb {
+	CXL_RCRB_DOWNSTREAM,
+	CXL_RCRB_UPSTREAM,
+ };
+ struct cxl_rcrb_info;
+ resource_size_t __rcrb_to_component(struct device *dev,
+				      struct cxl_rcrb_info *ri,
+				      enum cxl_rcrb which);
+
extern struct rw_semaphore cxl_dpa_rwsem;

int cxl_memdev_init(void);
+33 -15
drivers/cxl/core/hdm.c
··· 85 85 struct cxl_component_regs *regs) 86 86 { 87 87 struct cxl_register_map map = { 88 + .dev = &port->dev, 88 89 .resource = port->component_reg_phys, 89 90 .base = crb, 90 91 .max_size = CXL_COMPONENT_REG_BLOCK_SIZE, ··· 98 97 return -ENODEV; 99 98 } 100 99 101 - return cxl_map_component_regs(&port->dev, regs, &map, 102 - BIT(CXL_CM_CAP_CAP_ID_HDM)); 100 + return cxl_map_component_regs(&map, regs, BIT(CXL_CM_CAP_CAP_ID_HDM)); 103 101 } 104 102 105 103 static bool should_emulate_decoders(struct cxl_endpoint_dvsec_info *info) ··· 570 570 571 571 static void cxld_set_type(struct cxl_decoder *cxld, u32 *ctrl) 572 572 { 573 - u32p_replace_bits(ctrl, !!(cxld->target_type == 3), 574 - CXL_HDM_DECODER0_CTRL_TYPE); 573 + u32p_replace_bits(ctrl, 574 + !!(cxld->target_type == CXL_DECODER_HOSTONLYMEM), 575 + CXL_HDM_DECODER0_CTRL_HOSTONLY); 575 576 } 576 577 577 578 static int cxlsd_set_targets(struct cxl_switch_decoder *cxlsd, u64 *tgt) ··· 765 764 if (!len) 766 765 return -ENOENT; 767 766 768 - cxld->target_type = CXL_DECODER_EXPANDER; 767 + cxld->target_type = CXL_DECODER_HOSTONLYMEM; 769 768 cxld->commit = NULL; 770 769 cxld->reset = NULL; 771 770 cxld->hpa_range = info->dvsec_range[which]; ··· 794 793 int *target_map, void __iomem *hdm, int which, 795 794 u64 *dpa_base, struct cxl_endpoint_dvsec_info *info) 796 795 { 796 + struct cxl_endpoint_decoder *cxled = NULL; 797 797 u64 size, base, skip, dpa_size, lo, hi; 798 - struct cxl_endpoint_decoder *cxled; 799 798 bool committed; 800 799 u32 remainder; 801 800 int i, rc; ··· 828 827 return -ENXIO; 829 828 } 830 829 830 + if (info) 831 + cxled = to_cxl_endpoint_decoder(&cxld->dev); 831 832 cxld->hpa_range = (struct range) { 832 833 .start = base, 833 834 .end = base + size - 1, ··· 840 837 cxld->flags |= CXL_DECODER_F_ENABLE; 841 838 if (ctrl & CXL_HDM_DECODER0_CTRL_LOCK) 842 839 cxld->flags |= CXL_DECODER_F_LOCK; 843 - if (FIELD_GET(CXL_HDM_DECODER0_CTRL_TYPE, ctrl)) 844 - cxld->target_type = CXL_DECODER_EXPANDER; 840 + if (FIELD_GET(CXL_HDM_DECODER0_CTRL_HOSTONLY, ctrl)) 841 + cxld->target_type = CXL_DECODER_HOSTONLYMEM; 845 842 else 846 - cxld->target_type = CXL_DECODER_ACCELERATOR; 843 + cxld->target_type = CXL_DECODER_DEVMEM; 847 844 if (cxld->id != port->commit_end + 1) { 848 845 dev_warn(&port->dev, 849 846 "decoder%d.%d: Committed out of order\n", ··· 859 856 } 860 857 port->commit_end = cxld->id; 861 858 } else { 862 - /* unless / until type-2 drivers arrive, assume type-3 */ 863 - if (FIELD_GET(CXL_HDM_DECODER0_CTRL_TYPE, ctrl) == 0) { 864 - ctrl |= CXL_HDM_DECODER0_CTRL_TYPE; 859 + if (cxled) { 860 + struct cxl_memdev *cxlmd = cxled_to_memdev(cxled); 861 + struct cxl_dev_state *cxlds = cxlmd->cxlds; 862 + 863 + /* 864 + * Default by devtype until a device arrives that needs 865 + * more precision. 
866 + */ 867 + if (cxlds->type == CXL_DEVTYPE_CLASSMEM) 868 + cxld->target_type = CXL_DECODER_HOSTONLYMEM; 869 + else 870 + cxld->target_type = CXL_DECODER_DEVMEM; 871 + } else { 872 + /* To be overridden by region type at commit time */ 873 + cxld->target_type = CXL_DECODER_HOSTONLYMEM; 874 + } 875 + 876 + if (!FIELD_GET(CXL_HDM_DECODER0_CTRL_HOSTONLY, ctrl) && 877 + cxld->target_type == CXL_DECODER_HOSTONLYMEM) { 878 + ctrl |= CXL_HDM_DECODER0_CTRL_HOSTONLY; 865 879 writel(ctrl, hdm + CXL_HDM_DECODER0_CTRL_OFFSET(which)); 866 880 } 867 - cxld->target_type = CXL_DECODER_EXPANDER; 868 881 } 869 882 rc = eiw_to_ways(FIELD_GET(CXL_HDM_DECODER0_CTRL_IW_MASK, ctrl), 870 883 &cxld->interleave_ways); ··· 899 880 port->id, cxld->id, cxld->hpa_range.start, cxld->hpa_range.end, 900 881 cxld->interleave_ways, cxld->interleave_granularity); 901 882 902 - if (!info) { 883 + if (!cxled) { 903 884 lo = readl(hdm + CXL_HDM_DECODER0_TL_LOW(which)); 904 885 hi = readl(hdm + CXL_HDM_DECODER0_TL_HIGH(which)); 905 886 target_list.value = (hi << 32) + lo; ··· 922 903 lo = readl(hdm + CXL_HDM_DECODER0_SKIP_LOW(which)); 923 904 hi = readl(hdm + CXL_HDM_DECODER0_SKIP_HIGH(which)); 924 905 skip = (hi << 32) + lo; 925 - cxled = to_cxl_endpoint_decoder(&cxld->dev); 926 906 rc = devm_cxl_dpa_reserve(cxled, *dpa_base + skip, dpa_size, skip); 927 907 if (rc) { 928 908 dev_err(&port->dev,
+204 -135
drivers/cxl/core/mbox.c
··· 182 182 183 183 /** 184 184 * cxl_internal_send_cmd() - Kernel internal interface to send a mailbox command 185 - * @cxlds: The device data for the operation 185 + * @mds: The driver data for the operation 186 186 * @mbox_cmd: initialized command to execute 187 187 * 188 188 * Context: Any context. ··· 198 198 * error. While this distinction can be useful for commands from userspace, the 199 199 * kernel will only be able to use results when both are successful. 200 200 */ 201 - int cxl_internal_send_cmd(struct cxl_dev_state *cxlds, 201 + int cxl_internal_send_cmd(struct cxl_memdev_state *mds, 202 202 struct cxl_mbox_cmd *mbox_cmd) 203 203 { 204 204 size_t out_size, min_out; 205 205 int rc; 206 206 207 - if (mbox_cmd->size_in > cxlds->payload_size || 208 - mbox_cmd->size_out > cxlds->payload_size) 207 + if (mbox_cmd->size_in > mds->payload_size || 208 + mbox_cmd->size_out > mds->payload_size) 209 209 return -E2BIG; 210 210 211 211 out_size = mbox_cmd->size_out; 212 212 min_out = mbox_cmd->min_out; 213 - rc = cxlds->mbox_send(cxlds, mbox_cmd); 213 + rc = mds->mbox_send(mds, mbox_cmd); 214 214 /* 215 215 * EIO is reserved for a payload size mismatch and mbox_send() 216 216 * may not return this error. ··· 220 220 if (rc) 221 221 return rc; 222 222 223 - if (mbox_cmd->return_code != CXL_MBOX_CMD_RC_SUCCESS) 223 + if (mbox_cmd->return_code != CXL_MBOX_CMD_RC_SUCCESS && 224 + mbox_cmd->return_code != CXL_MBOX_CMD_RC_BACKGROUND) 224 225 return cxl_mbox_cmd_rc2errno(mbox_cmd); 225 226 226 227 if (!out_size) ··· 298 297 } 299 298 300 299 static int cxl_mbox_cmd_ctor(struct cxl_mbox_cmd *mbox, 301 - struct cxl_dev_state *cxlds, u16 opcode, 300 + struct cxl_memdev_state *mds, u16 opcode, 302 301 size_t in_size, size_t out_size, u64 in_payload) 303 302 { 304 303 *mbox = (struct cxl_mbox_cmd) { ··· 313 312 return PTR_ERR(mbox->payload_in); 314 313 315 314 if (!cxl_payload_from_user_allowed(opcode, mbox->payload_in)) { 316 - dev_dbg(cxlds->dev, "%s: input payload not allowed\n", 315 + dev_dbg(mds->cxlds.dev, "%s: input payload not allowed\n", 317 316 cxl_mem_opcode_to_name(opcode)); 318 317 kvfree(mbox->payload_in); 319 318 return -EBUSY; ··· 322 321 323 322 /* Prepare to handle a full payload for variable sized output */ 324 323 if (out_size == CXL_VARIABLE_PAYLOAD) 325 - mbox->size_out = cxlds->payload_size; 324 + mbox->size_out = mds->payload_size; 326 325 else 327 326 mbox->size_out = out_size; 328 327 ··· 344 343 345 344 static int cxl_to_mem_cmd_raw(struct cxl_mem_command *mem_cmd, 346 345 const struct cxl_send_command *send_cmd, 347 - struct cxl_dev_state *cxlds) 346 + struct cxl_memdev_state *mds) 348 347 { 349 348 if (send_cmd->raw.rsvd) 350 349 return -EINVAL; ··· 354 353 * gets passed along without further checking, so it must be 355 354 * validated here. 
356 355 */ 357 - if (send_cmd->out.size > cxlds->payload_size) 356 + if (send_cmd->out.size > mds->payload_size) 358 357 return -EINVAL; 359 358 360 359 if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode)) 361 360 return -EPERM; 362 361 363 - dev_WARN_ONCE(cxlds->dev, true, "raw command path used\n"); 362 + dev_WARN_ONCE(mds->cxlds.dev, true, "raw command path used\n"); 364 363 365 364 *mem_cmd = (struct cxl_mem_command) { 366 365 .info = { ··· 376 375 377 376 static int cxl_to_mem_cmd(struct cxl_mem_command *mem_cmd, 378 377 const struct cxl_send_command *send_cmd, 379 - struct cxl_dev_state *cxlds) 378 + struct cxl_memdev_state *mds) 380 379 { 381 380 struct cxl_mem_command *c = &cxl_mem_commands[send_cmd->id]; 382 381 const struct cxl_command_info *info = &c->info; ··· 391 390 return -EINVAL; 392 391 393 392 /* Check that the command is enabled for hardware */ 394 - if (!test_bit(info->id, cxlds->enabled_cmds)) 393 + if (!test_bit(info->id, mds->enabled_cmds)) 395 394 return -ENOTTY; 396 395 397 396 /* Check that the command is not claimed for exclusive kernel use */ 398 - if (test_bit(info->id, cxlds->exclusive_cmds)) 397 + if (test_bit(info->id, mds->exclusive_cmds)) 399 398 return -EBUSY; 400 399 401 400 /* Check the input buffer is the expected size */ ··· 424 423 /** 425 424 * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND. 426 425 * @mbox_cmd: Sanitized and populated &struct cxl_mbox_cmd. 427 - * @cxlds: The device data for the operation 426 + * @mds: The driver data for the operation 428 427 * @send_cmd: &struct cxl_send_command copied in from userspace. 429 428 * 430 429 * Return: ··· 439 438 * safe to send to the hardware. 440 439 */ 441 440 static int cxl_validate_cmd_from_user(struct cxl_mbox_cmd *mbox_cmd, 442 - struct cxl_dev_state *cxlds, 441 + struct cxl_memdev_state *mds, 443 442 const struct cxl_send_command *send_cmd) 444 443 { 445 444 struct cxl_mem_command mem_cmd; ··· 453 452 * supports, but output can be arbitrarily large (simply write out as 454 453 * much data as the hardware provides). 
455 454 */ 456 - if (send_cmd->in.size > cxlds->payload_size) 455 + if (send_cmd->in.size > mds->payload_size) 457 456 return -EINVAL; 458 457 459 458 /* Sanitize and construct a cxl_mem_command */ 460 459 if (send_cmd->id == CXL_MEM_COMMAND_ID_RAW) 461 - rc = cxl_to_mem_cmd_raw(&mem_cmd, send_cmd, cxlds); 460 + rc = cxl_to_mem_cmd_raw(&mem_cmd, send_cmd, mds); 462 461 else 463 - rc = cxl_to_mem_cmd(&mem_cmd, send_cmd, cxlds); 462 + rc = cxl_to_mem_cmd(&mem_cmd, send_cmd, mds); 464 463 465 464 if (rc) 466 465 return rc; 467 466 468 467 /* Sanitize and construct a cxl_mbox_cmd */ 469 - return cxl_mbox_cmd_ctor(mbox_cmd, cxlds, mem_cmd.opcode, 468 + return cxl_mbox_cmd_ctor(mbox_cmd, mds, mem_cmd.opcode, 470 469 mem_cmd.info.size_in, mem_cmd.info.size_out, 471 470 send_cmd->in.payload); 472 471 } ··· 474 473 int cxl_query_cmd(struct cxl_memdev *cxlmd, 475 474 struct cxl_mem_query_commands __user *q) 476 475 { 476 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 477 477 struct device *dev = &cxlmd->dev; 478 478 struct cxl_mem_command *cmd; 479 479 u32 n_commands; ··· 496 494 cxl_for_each_cmd(cmd) { 497 495 struct cxl_command_info info = cmd->info; 498 496 499 - if (test_bit(info.id, cxlmd->cxlds->enabled_cmds)) 497 + if (test_bit(info.id, mds->enabled_cmds)) 500 498 info.flags |= CXL_MEM_COMMAND_FLAG_ENABLED; 501 - if (test_bit(info.id, cxlmd->cxlds->exclusive_cmds)) 499 + if (test_bit(info.id, mds->exclusive_cmds)) 502 500 info.flags |= CXL_MEM_COMMAND_FLAG_EXCLUSIVE; 503 501 504 502 if (copy_to_user(&q->commands[j++], &info, sizeof(info))) ··· 513 511 514 512 /** 515 513 * handle_mailbox_cmd_from_user() - Dispatch a mailbox command for userspace. 516 - * @cxlds: The device data for the operation 514 + * @mds: The driver data for the operation 517 515 * @mbox_cmd: The validated mailbox command. 518 516 * @out_payload: Pointer to userspace's output payload. 519 517 * @size_out: (Input) Max payload size to copy out. ··· 534 532 * 535 533 * See cxl_send_cmd(). 
536 534 */ 537 - static int handle_mailbox_cmd_from_user(struct cxl_dev_state *cxlds, 535 + static int handle_mailbox_cmd_from_user(struct cxl_memdev_state *mds, 538 536 struct cxl_mbox_cmd *mbox_cmd, 539 537 u64 out_payload, s32 *size_out, 540 538 u32 *retval) 541 539 { 542 - struct device *dev = cxlds->dev; 540 + struct device *dev = mds->cxlds.dev; 543 541 int rc; 544 542 545 543 dev_dbg(dev, ··· 549 547 cxl_mem_opcode_to_name(mbox_cmd->opcode), 550 548 mbox_cmd->opcode, mbox_cmd->size_in); 551 549 552 - rc = cxlds->mbox_send(cxlds, mbox_cmd); 550 + rc = mds->mbox_send(mds, mbox_cmd); 553 551 if (rc) 554 552 goto out; 555 553 ··· 578 576 579 577 int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s) 580 578 { 581 - struct cxl_dev_state *cxlds = cxlmd->cxlds; 579 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 582 580 struct device *dev = &cxlmd->dev; 583 581 struct cxl_send_command send; 584 582 struct cxl_mbox_cmd mbox_cmd; ··· 589 587 if (copy_from_user(&send, s, sizeof(send))) 590 588 return -EFAULT; 591 589 592 - rc = cxl_validate_cmd_from_user(&mbox_cmd, cxlmd->cxlds, &send); 590 + rc = cxl_validate_cmd_from_user(&mbox_cmd, mds, &send); 593 591 if (rc) 594 592 return rc; 595 593 596 - rc = handle_mailbox_cmd_from_user(cxlds, &mbox_cmd, send.out.payload, 594 + rc = handle_mailbox_cmd_from_user(mds, &mbox_cmd, send.out.payload, 597 595 &send.out.size, &send.retval); 598 596 if (rc) 599 597 return rc; ··· 604 602 return 0; 605 603 } 606 604 607 - static int cxl_xfer_log(struct cxl_dev_state *cxlds, uuid_t *uuid, u32 *size, u8 *out) 605 + static int cxl_xfer_log(struct cxl_memdev_state *mds, uuid_t *uuid, 606 + u32 *size, u8 *out) 608 607 { 609 608 u32 remaining = *size; 610 609 u32 offset = 0; 611 610 612 611 while (remaining) { 613 - u32 xfer_size = min_t(u32, remaining, cxlds->payload_size); 612 + u32 xfer_size = min_t(u32, remaining, mds->payload_size); 614 613 struct cxl_mbox_cmd mbox_cmd; 615 614 struct cxl_mbox_get_log log; 616 615 int rc; ··· 630 627 .payload_out = out, 631 628 }; 632 629 633 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 630 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 634 631 635 632 /* 636 633 * The output payload length that indicates the number ··· 657 654 658 655 /** 659 656 * cxl_walk_cel() - Walk through the Command Effects Log. 660 - * @cxlds: The device data for the operation 657 + * @mds: The driver data for the operation 661 658 * @size: Length of the Command Effects Log. 662 659 * @cel: CEL 663 660 * 664 661 * Iterate over each entry in the CEL and determine if the driver supports the 665 662 * command. If so, the command is enabled for the device and can be used later. 
666 663 */ 667 - static void cxl_walk_cel(struct cxl_dev_state *cxlds, size_t size, u8 *cel) 664 + static void cxl_walk_cel(struct cxl_memdev_state *mds, size_t size, u8 *cel) 668 665 { 669 666 struct cxl_cel_entry *cel_entry; 670 667 const int cel_entries = size / sizeof(*cel_entry); 668 + struct device *dev = mds->cxlds.dev; 671 669 int i; 672 670 673 671 cel_entry = (struct cxl_cel_entry *) cel; ··· 678 674 struct cxl_mem_command *cmd = cxl_mem_find_command(opcode); 679 675 680 676 if (!cmd && !cxl_is_poison_command(opcode)) { 681 - dev_dbg(cxlds->dev, 677 + dev_dbg(dev, 682 678 "Opcode 0x%04x unsupported by driver\n", opcode); 683 679 continue; 684 680 } 685 681 686 682 if (cmd) 687 - set_bit(cmd->info.id, cxlds->enabled_cmds); 683 + set_bit(cmd->info.id, mds->enabled_cmds); 688 684 689 685 if (cxl_is_poison_command(opcode)) 690 - cxl_set_poison_cmd_enabled(&cxlds->poison, opcode); 686 + cxl_set_poison_cmd_enabled(&mds->poison, opcode); 691 687 692 - dev_dbg(cxlds->dev, "Opcode 0x%04x enabled\n", opcode); 688 + dev_dbg(dev, "Opcode 0x%04x enabled\n", opcode); 693 689 } 694 690 } 695 691 696 - static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_dev_state *cxlds) 692 + static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_memdev_state *mds) 697 693 { 698 694 struct cxl_mbox_get_supported_logs *ret; 699 695 struct cxl_mbox_cmd mbox_cmd; 700 696 int rc; 701 697 702 - ret = kvmalloc(cxlds->payload_size, GFP_KERNEL); 698 + ret = kvmalloc(mds->payload_size, GFP_KERNEL); 703 699 if (!ret) 704 700 return ERR_PTR(-ENOMEM); 705 701 706 702 mbox_cmd = (struct cxl_mbox_cmd) { 707 703 .opcode = CXL_MBOX_OP_GET_SUPPORTED_LOGS, 708 - .size_out = cxlds->payload_size, 704 + .size_out = mds->payload_size, 709 705 .payload_out = ret, 710 706 /* At least the record number field must be valid */ 711 707 .min_out = 2, 712 708 }; 713 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 709 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 714 710 if (rc < 0) { 715 711 kvfree(ret); 716 712 return ERR_PTR(rc); ··· 733 729 734 730 /** 735 731 * cxl_enumerate_cmds() - Enumerate commands for a device. 736 - * @cxlds: The device data for the operation 732 + * @mds: The driver data for the operation 737 733 * 738 734 * Returns 0 if enumerate completed successfully. 739 735 * 740 736 * CXL devices have optional support for certain commands. This function will 741 737 * determine the set of supported commands for the hardware and update the 742 - * enabled_cmds bitmap in the @cxlds. 738 + * enabled_cmds bitmap in the @mds. 743 739 */ 744 - int cxl_enumerate_cmds(struct cxl_dev_state *cxlds) 740 + int cxl_enumerate_cmds(struct cxl_memdev_state *mds) 745 741 { 746 742 struct cxl_mbox_get_supported_logs *gsl; 747 - struct device *dev = cxlds->dev; 743 + struct device *dev = mds->cxlds.dev; 748 744 struct cxl_mem_command *cmd; 749 745 int i, rc; 750 746 751 - gsl = cxl_get_gsl(cxlds); 747 + gsl = cxl_get_gsl(mds); 752 748 if (IS_ERR(gsl)) 753 749 return PTR_ERR(gsl); 754 750 ··· 769 765 goto out; 770 766 } 771 767 772 - rc = cxl_xfer_log(cxlds, &uuid, &size, log); 768 + rc = cxl_xfer_log(mds, &uuid, &size, log); 773 769 if (rc) { 774 770 kvfree(log); 775 771 goto out; 776 772 } 777 773 778 - cxl_walk_cel(cxlds, size, log); 774 + cxl_walk_cel(mds, size, log); 779 775 kvfree(log); 780 776 781 777 /* In case CEL was bogus, enable some default commands. 
*/ 782 778 cxl_for_each_cmd(cmd) 783 779 if (cmd->flags & CXL_CMD_FLAG_FORCE_ENABLE) 784 - set_bit(cmd->info.id, cxlds->enabled_cmds); 780 + set_bit(cmd->info.id, mds->enabled_cmds); 785 781 786 782 /* Found the required CEL */ 787 783 rc = 0; ··· 842 838 } 843 839 } 844 840 845 - static int cxl_clear_event_record(struct cxl_dev_state *cxlds, 841 + static int cxl_clear_event_record(struct cxl_memdev_state *mds, 846 842 enum cxl_event_log_type log, 847 843 struct cxl_get_event_payload *get_pl) 848 844 { ··· 856 852 int i; 857 853 858 854 /* Payload size may limit the max handles */ 859 - if (pl_size > cxlds->payload_size) { 860 - max_handles = (cxlds->payload_size - sizeof(*payload)) / 861 - sizeof(__le16); 855 + if (pl_size > mds->payload_size) { 856 + max_handles = (mds->payload_size - sizeof(*payload)) / 857 + sizeof(__le16); 862 858 pl_size = struct_size(payload, handles, max_handles); 863 859 } 864 860 ··· 883 879 i = 0; 884 880 for (cnt = 0; cnt < total; cnt++) { 885 881 payload->handles[i++] = get_pl->records[cnt].hdr.handle; 886 - dev_dbg(cxlds->dev, "Event log '%d': Clearing %u\n", 887 - log, le16_to_cpu(payload->handles[i])); 882 + dev_dbg(mds->cxlds.dev, "Event log '%d': Clearing %u\n", log, 883 + le16_to_cpu(payload->handles[i])); 888 884 889 885 if (i == max_handles) { 890 886 payload->nr_recs = i; 891 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 887 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 892 888 if (rc) 893 889 goto free_pl; 894 890 i = 0; ··· 899 895 if (i) { 900 896 payload->nr_recs = i; 901 897 mbox_cmd.size_in = struct_size(payload, handles, i); 902 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 898 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 903 899 if (rc) 904 900 goto free_pl; 905 901 } ··· 909 905 return rc; 910 906 } 911 907 912 - static void cxl_mem_get_records_log(struct cxl_dev_state *cxlds, 908 + static void cxl_mem_get_records_log(struct cxl_memdev_state *mds, 913 909 enum cxl_event_log_type type) 914 910 { 911 + struct cxl_memdev *cxlmd = mds->cxlds.cxlmd; 912 + struct device *dev = mds->cxlds.dev; 915 913 struct cxl_get_event_payload *payload; 916 914 struct cxl_mbox_cmd mbox_cmd; 917 915 u8 log_type = type; 918 916 u16 nr_rec; 919 917 920 - mutex_lock(&cxlds->event.log_lock); 921 - payload = cxlds->event.buf; 918 + mutex_lock(&mds->event.log_lock); 919 + payload = mds->event.buf; 922 920 923 921 mbox_cmd = (struct cxl_mbox_cmd) { 924 922 .opcode = CXL_MBOX_OP_GET_EVENT_RECORD, 925 923 .payload_in = &log_type, 926 924 .size_in = sizeof(log_type), 927 925 .payload_out = payload, 928 - .size_out = cxlds->payload_size, 926 + .size_out = mds->payload_size, 929 927 .min_out = struct_size(payload, records, 0), 930 928 }; 931 929 932 930 do { 933 931 int rc, i; 934 932 935 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 933 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 936 934 if (rc) { 937 - dev_err_ratelimited(cxlds->dev, 935 + dev_err_ratelimited(dev, 938 936 "Event log '%d': Failed to query event records : %d", 939 937 type, rc); 940 938 break; ··· 947 941 break; 948 942 949 943 for (i = 0; i < nr_rec; i++) 950 - cxl_event_trace_record(cxlds->cxlmd, type, 944 + cxl_event_trace_record(cxlmd, type, 951 945 &payload->records[i]); 952 946 953 947 if (payload->flags & CXL_GET_EVENT_FLAG_OVERFLOW) 954 - trace_cxl_overflow(cxlds->cxlmd, type, payload); 948 + trace_cxl_overflow(cxlmd, type, payload); 955 949 956 - rc = cxl_clear_event_record(cxlds, type, payload); 950 + rc = cxl_clear_event_record(mds, type, payload); 957 951 if (rc) { 958 - 
dev_err_ratelimited(cxlds->dev, 952 + dev_err_ratelimited(dev, 959 953 "Event log '%d': Failed to clear events : %d", 960 954 type, rc); 961 955 break; 962 956 } 963 957 } while (nr_rec); 964 958 965 - mutex_unlock(&cxlds->event.log_lock); 959 + mutex_unlock(&mds->event.log_lock); 966 960 } 967 961 968 962 /** 969 963 * cxl_mem_get_event_records - Get Event Records from the device 970 - * @cxlds: The device data for the operation 964 + * @mds: The driver data for the operation 971 965 * @status: Event Status register value identifying which events are available. 972 966 * 973 967 * Retrieve all event records available on the device, report them as trace ··· 976 970 * See CXL rev 3.0 @8.2.9.2.2 Get Event Records 977 971 * See CXL rev 3.0 @8.2.9.2.3 Clear Event Records 978 972 */ 979 - void cxl_mem_get_event_records(struct cxl_dev_state *cxlds, u32 status) 973 + void cxl_mem_get_event_records(struct cxl_memdev_state *mds, u32 status) 980 974 { 981 - dev_dbg(cxlds->dev, "Reading event logs: %x\n", status); 975 + dev_dbg(mds->cxlds.dev, "Reading event logs: %x\n", status); 982 976 983 977 if (status & CXLDEV_EVENT_STATUS_FATAL) 984 - cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_FATAL); 978 + cxl_mem_get_records_log(mds, CXL_EVENT_TYPE_FATAL); 985 979 if (status & CXLDEV_EVENT_STATUS_FAIL) 986 - cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_FAIL); 980 + cxl_mem_get_records_log(mds, CXL_EVENT_TYPE_FAIL); 987 981 if (status & CXLDEV_EVENT_STATUS_WARN) 988 - cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_WARN); 982 + cxl_mem_get_records_log(mds, CXL_EVENT_TYPE_WARN); 989 983 if (status & CXLDEV_EVENT_STATUS_INFO) 990 - cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_INFO); 984 + cxl_mem_get_records_log(mds, CXL_EVENT_TYPE_INFO); 991 985 } 992 986 EXPORT_SYMBOL_NS_GPL(cxl_mem_get_event_records, CXL); 993 987 994 988 /** 995 989 * cxl_mem_get_partition_info - Get partition info 996 - * @cxlds: The device data for the operation 990 + * @mds: The driver data for the operation 997 991 * 998 992 * Retrieve the current partition info for the device specified. The active 999 993 * values are the current capacity in bytes. If not 0, the 'next' values are ··· 1003 997 * 1004 998 * See CXL @8.2.9.5.2.1 Get Partition Info 1005 999 */ 1006 - static int cxl_mem_get_partition_info(struct cxl_dev_state *cxlds) 1000 + static int cxl_mem_get_partition_info(struct cxl_memdev_state *mds) 1007 1001 { 1008 1002 struct cxl_mbox_get_partition_info pi; 1009 1003 struct cxl_mbox_cmd mbox_cmd; ··· 1014 1008 .size_out = sizeof(pi), 1015 1009 .payload_out = &pi, 1016 1010 }; 1017 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 1011 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 1018 1012 if (rc) 1019 1013 return rc; 1020 1014 1021 - cxlds->active_volatile_bytes = 1015 + mds->active_volatile_bytes = 1022 1016 le64_to_cpu(pi.active_volatile_cap) * CXL_CAPACITY_MULTIPLIER; 1023 - cxlds->active_persistent_bytes = 1017 + mds->active_persistent_bytes = 1024 1018 le64_to_cpu(pi.active_persistent_cap) * CXL_CAPACITY_MULTIPLIER; 1025 - cxlds->next_volatile_bytes = 1019 + mds->next_volatile_bytes = 1026 1020 le64_to_cpu(pi.next_volatile_cap) * CXL_CAPACITY_MULTIPLIER; 1027 - cxlds->next_persistent_bytes = 1021 + mds->next_persistent_bytes = 1028 1022 le64_to_cpu(pi.next_volatile_cap) * CXL_CAPACITY_MULTIPLIER; 1029 1023 1030 1024 return 0; ··· 1032 1026 1033 1027 /** 1034 1028 * cxl_dev_state_identify() - Send the IDENTIFY command to the device. 
1035 - * @cxlds: The device data for the operation 1029 + * @mds: The driver data for the operation 1036 1030 * 1037 1031 * Return: 0 if identify was executed successfully or media not ready. 1038 1032 * 1039 1033 * This will dispatch the identify command to the device and on success populate 1040 1034 * structures to be exported to sysfs. 1041 1035 */ 1042 - int cxl_dev_state_identify(struct cxl_dev_state *cxlds) 1036 + int cxl_dev_state_identify(struct cxl_memdev_state *mds) 1043 1037 { 1044 1038 /* See CXL 2.0 Table 175 Identify Memory Device Output Payload */ 1045 1039 struct cxl_mbox_identify id; ··· 1047 1041 u32 val; 1048 1042 int rc; 1049 1043 1050 - if (!cxlds->media_ready) 1044 + if (!mds->cxlds.media_ready) 1051 1045 return 0; 1052 1046 1053 1047 mbox_cmd = (struct cxl_mbox_cmd) { ··· 1055 1049 .size_out = sizeof(id), 1056 1050 .payload_out = &id, 1057 1051 }; 1058 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 1052 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 1059 1053 if (rc < 0) 1060 1054 return rc; 1061 1055 1062 - cxlds->total_bytes = 1056 + mds->total_bytes = 1063 1057 le64_to_cpu(id.total_capacity) * CXL_CAPACITY_MULTIPLIER; 1064 - cxlds->volatile_only_bytes = 1058 + mds->volatile_only_bytes = 1065 1059 le64_to_cpu(id.volatile_capacity) * CXL_CAPACITY_MULTIPLIER; 1066 - cxlds->persistent_only_bytes = 1060 + mds->persistent_only_bytes = 1067 1061 le64_to_cpu(id.persistent_capacity) * CXL_CAPACITY_MULTIPLIER; 1068 - cxlds->partition_align_bytes = 1062 + mds->partition_align_bytes = 1069 1063 le64_to_cpu(id.partition_align) * CXL_CAPACITY_MULTIPLIER; 1070 1064 1071 - cxlds->lsa_size = le32_to_cpu(id.lsa_size); 1072 - memcpy(cxlds->firmware_version, id.fw_revision, sizeof(id.fw_revision)); 1065 + mds->lsa_size = le32_to_cpu(id.lsa_size); 1066 + memcpy(mds->firmware_version, id.fw_revision, 1067 + sizeof(id.fw_revision)); 1073 1068 1074 - if (test_bit(CXL_POISON_ENABLED_LIST, cxlds->poison.enabled_cmds)) { 1069 + if (test_bit(CXL_POISON_ENABLED_LIST, mds->poison.enabled_cmds)) { 1075 1070 val = get_unaligned_le24(id.poison_list_max_mer); 1076 - cxlds->poison.max_errors = min_t(u32, val, CXL_POISON_LIST_MAX); 1071 + mds->poison.max_errors = min_t(u32, val, CXL_POISON_LIST_MAX); 1077 1072 } 1078 1073 1079 1074 return 0; 1080 1075 } 1081 1076 EXPORT_SYMBOL_NS_GPL(cxl_dev_state_identify, CXL); 1077 + 1078 + /** 1079 + * cxl_mem_sanitize() - Send a sanitization command to the device. 1080 + * @mds: The device data for the operation 1081 + * @cmd: The specific sanitization command opcode 1082 + * 1083 + * Return: 0 if the command was executed successfully, regardless of 1084 + * whether or not the actual security operation is done in the background, 1085 + * such as for the Sanitize case. 1086 + * Error return values can be the result of the mailbox command, -EINVAL 1087 + * when security requirements are not met or invalid contexts. 1088 + * 1089 + * See CXL 3.0 @8.2.9.8.5.1 Sanitize and @8.2.9.8.5.2 Secure Erase. 
1090 + */ 1091 + int cxl_mem_sanitize(struct cxl_memdev_state *mds, u16 cmd) 1092 + { 1093 + int rc; 1094 + u32 sec_out = 0; 1095 + struct cxl_get_security_output { 1096 + __le32 flags; 1097 + } out; 1098 + struct cxl_mbox_cmd sec_cmd = { 1099 + .opcode = CXL_MBOX_OP_GET_SECURITY_STATE, 1100 + .payload_out = &out, 1101 + .size_out = sizeof(out), 1102 + }; 1103 + struct cxl_mbox_cmd mbox_cmd = { .opcode = cmd }; 1104 + struct cxl_dev_state *cxlds = &mds->cxlds; 1105 + 1106 + if (cmd != CXL_MBOX_OP_SANITIZE && cmd != CXL_MBOX_OP_SECURE_ERASE) 1107 + return -EINVAL; 1108 + 1109 + rc = cxl_internal_send_cmd(mds, &sec_cmd); 1110 + if (rc < 0) { 1111 + dev_err(cxlds->dev, "Failed to get security state : %d", rc); 1112 + return rc; 1113 + } 1114 + 1115 + /* 1116 + * Prior to using these commands, any security applied to 1117 + * the user data areas of the device shall be DISABLED (or 1118 + * UNLOCKED for secure erase case). 1119 + */ 1120 + sec_out = le32_to_cpu(out.flags); 1121 + if (sec_out & CXL_PMEM_SEC_STATE_USER_PASS_SET) 1122 + return -EINVAL; 1123 + 1124 + if (cmd == CXL_MBOX_OP_SECURE_ERASE && 1125 + sec_out & CXL_PMEM_SEC_STATE_LOCKED) 1126 + return -EINVAL; 1127 + 1128 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 1129 + if (rc < 0) { 1130 + dev_err(cxlds->dev, "Failed to sanitize device : %d", rc); 1131 + return rc; 1132 + } 1133 + 1134 + return 0; 1135 + } 1136 + EXPORT_SYMBOL_NS_GPL(cxl_mem_sanitize, CXL); 1082 1137 1083 1138 static int add_dpa_res(struct device *dev, struct resource *parent, 1084 1139 struct resource *res, resource_size_t start, ··· 1167 1100 return 0; 1168 1101 } 1169 1102 1170 - int cxl_mem_create_range_info(struct cxl_dev_state *cxlds) 1103 + int cxl_mem_create_range_info(struct cxl_memdev_state *mds) 1171 1104 { 1105 + struct cxl_dev_state *cxlds = &mds->cxlds; 1172 1106 struct device *dev = cxlds->dev; 1173 1107 int rc; 1174 1108 ··· 1181 1113 } 1182 1114 1183 1115 cxlds->dpa_res = 1184 - (struct resource)DEFINE_RES_MEM(0, cxlds->total_bytes); 1116 + (struct resource)DEFINE_RES_MEM(0, mds->total_bytes); 1185 1117 1186 - if (cxlds->partition_align_bytes == 0) { 1118 + if (mds->partition_align_bytes == 0) { 1187 1119 rc = add_dpa_res(dev, &cxlds->dpa_res, &cxlds->ram_res, 0, 1188 - cxlds->volatile_only_bytes, "ram"); 1120 + mds->volatile_only_bytes, "ram"); 1189 1121 if (rc) 1190 1122 return rc; 1191 1123 return add_dpa_res(dev, &cxlds->dpa_res, &cxlds->pmem_res, 1192 - cxlds->volatile_only_bytes, 1193 - cxlds->persistent_only_bytes, "pmem"); 1124 + mds->volatile_only_bytes, 1125 + mds->persistent_only_bytes, "pmem"); 1194 1126 } 1195 1127 1196 - rc = cxl_mem_get_partition_info(cxlds); 1128 + rc = cxl_mem_get_partition_info(mds); 1197 1129 if (rc) { 1198 1130 dev_err(dev, "Failed to query partition information\n"); 1199 1131 return rc; 1200 1132 } 1201 1133 1202 1134 rc = add_dpa_res(dev, &cxlds->dpa_res, &cxlds->ram_res, 0, 1203 - cxlds->active_volatile_bytes, "ram"); 1135 + mds->active_volatile_bytes, "ram"); 1204 1136 if (rc) 1205 1137 return rc; 1206 1138 return add_dpa_res(dev, &cxlds->dpa_res, &cxlds->pmem_res, 1207 - cxlds->active_volatile_bytes, 1208 - cxlds->active_persistent_bytes, "pmem"); 1139 + mds->active_volatile_bytes, 1140 + mds->active_persistent_bytes, "pmem"); 1209 1141 } 1210 1142 EXPORT_SYMBOL_NS_GPL(cxl_mem_create_range_info, CXL); 1211 1143 1212 - int cxl_set_timestamp(struct cxl_dev_state *cxlds) 1144 + int cxl_set_timestamp(struct cxl_memdev_state *mds) 1213 1145 { 1214 1146 struct cxl_mbox_cmd mbox_cmd; 1215 1147 struct 
cxl_mbox_set_timestamp_in pi; ··· 1222 1154 .payload_in = &pi, 1223 1155 }; 1224 1156 1225 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 1157 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 1226 1158 /* 1227 1159 * Command is optional. Devices may have another way of providing 1228 1160 * a timestamp, or may return all 0s in timestamp fields. ··· 1238 1170 int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, 1239 1171 struct cxl_region *cxlr) 1240 1172 { 1241 - struct cxl_dev_state *cxlds = cxlmd->cxlds; 1173 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 1242 1174 struct cxl_mbox_poison_out *po; 1243 1175 struct cxl_mbox_poison_in pi; 1244 1176 struct cxl_mbox_cmd mbox_cmd; 1245 1177 int nr_records = 0; 1246 1178 int rc; 1247 1179 1248 - rc = mutex_lock_interruptible(&cxlds->poison.lock); 1180 + rc = mutex_lock_interruptible(&mds->poison.lock); 1249 1181 if (rc) 1250 1182 return rc; 1251 1183 1252 - po = cxlds->poison.list_out; 1184 + po = mds->poison.list_out; 1253 1185 pi.offset = cpu_to_le64(offset); 1254 1186 pi.length = cpu_to_le64(len / CXL_POISON_LEN_MULT); 1255 1187 ··· 1257 1189 .opcode = CXL_MBOX_OP_GET_POISON, 1258 1190 .size_in = sizeof(pi), 1259 1191 .payload_in = &pi, 1260 - .size_out = cxlds->payload_size, 1192 + .size_out = mds->payload_size, 1261 1193 .payload_out = po, 1262 1194 .min_out = struct_size(po, record, 0), 1263 1195 }; 1264 1196 1265 1197 do { 1266 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 1198 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 1267 1199 if (rc) 1268 1200 break; 1269 1201 ··· 1274 1206 1275 1207 /* Protect against an uncleared _FLAG_MORE */ 1276 1208 nr_records = nr_records + le16_to_cpu(po->count); 1277 - if (nr_records >= cxlds->poison.max_errors) { 1209 + if (nr_records >= mds->poison.max_errors) { 1278 1210 dev_dbg(&cxlmd->dev, "Max Error Records reached: %d\n", 1279 1211 nr_records); 1280 1212 break; 1281 1213 } 1282 1214 } while (po->flags & CXL_POISON_FLAG_MORE); 1283 1215 1284 - mutex_unlock(&cxlds->poison.lock); 1216 + mutex_unlock(&mds->poison.lock); 1285 1217 return rc; 1286 1218 } 1287 1219 EXPORT_SYMBOL_NS_GPL(cxl_mem_get_poison, CXL); ··· 1291 1223 kvfree(buf); 1292 1224 } 1293 1225 1294 - /* Get Poison List output buffer is protected by cxlds->poison.lock */ 1295 - static int cxl_poison_alloc_buf(struct cxl_dev_state *cxlds) 1226 + /* Get Poison List output buffer is protected by mds->poison.lock */ 1227 + static int cxl_poison_alloc_buf(struct cxl_memdev_state *mds) 1296 1228 { 1297 - cxlds->poison.list_out = kvmalloc(cxlds->payload_size, GFP_KERNEL); 1298 - if (!cxlds->poison.list_out) 1229 + mds->poison.list_out = kvmalloc(mds->payload_size, GFP_KERNEL); 1230 + if (!mds->poison.list_out) 1299 1231 return -ENOMEM; 1300 1232 1301 - return devm_add_action_or_reset(cxlds->dev, free_poison_buf, 1302 - cxlds->poison.list_out); 1233 + return devm_add_action_or_reset(mds->cxlds.dev, free_poison_buf, 1234 + mds->poison.list_out); 1303 1235 } 1304 1236 1305 - int cxl_poison_state_init(struct cxl_dev_state *cxlds) 1237 + int cxl_poison_state_init(struct cxl_memdev_state *mds) 1306 1238 { 1307 1239 int rc; 1308 1240 1309 - if (!test_bit(CXL_POISON_ENABLED_LIST, cxlds->poison.enabled_cmds)) 1241 + if (!test_bit(CXL_POISON_ENABLED_LIST, mds->poison.enabled_cmds)) 1310 1242 return 0; 1311 1243 1312 - rc = cxl_poison_alloc_buf(cxlds); 1244 + rc = cxl_poison_alloc_buf(mds); 1313 1245 if (rc) { 1314 - clear_bit(CXL_POISON_ENABLED_LIST, cxlds->poison.enabled_cmds); 1246 + clear_bit(CXL_POISON_ENABLED_LIST, 
mds->poison.enabled_cmds); 1315 1247 return rc; 1316 1248 } 1317 1249 1318 - mutex_init(&cxlds->poison.lock); 1250 + mutex_init(&mds->poison.lock); 1319 1251 return 0; 1320 1252 } 1321 1253 EXPORT_SYMBOL_NS_GPL(cxl_poison_state_init, CXL); 1322 1254 1323 - struct cxl_dev_state *cxl_dev_state_create(struct device *dev) 1255 + struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev) 1324 1256 { 1325 - struct cxl_dev_state *cxlds; 1257 + struct cxl_memdev_state *mds; 1326 1258 1327 - cxlds = devm_kzalloc(dev, sizeof(*cxlds), GFP_KERNEL); 1328 - if (!cxlds) { 1259 + mds = devm_kzalloc(dev, sizeof(*mds), GFP_KERNEL); 1260 + if (!mds) { 1329 1261 dev_err(dev, "No memory available\n"); 1330 1262 return ERR_PTR(-ENOMEM); 1331 1263 } 1332 1264 1333 - mutex_init(&cxlds->mbox_mutex); 1334 - mutex_init(&cxlds->event.log_lock); 1335 - cxlds->dev = dev; 1265 + mutex_init(&mds->mbox_mutex); 1266 + mutex_init(&mds->event.log_lock); 1267 + mds->cxlds.dev = dev; 1268 + mds->cxlds.type = CXL_DEVTYPE_CLASSMEM; 1336 1269 1337 - return cxlds; 1270 + return mds; 1338 1271 } 1339 - EXPORT_SYMBOL_NS_GPL(cxl_dev_state_create, CXL); 1272 + EXPORT_SYMBOL_NS_GPL(cxl_memdev_state_create, CXL); 1340 1273 1341 1274 void __init cxl_mbox_init(void) 1342 1275 {
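
With the cxlds-to-mds conversion above, kernel-internal mailbox
submission follows one pattern: populate a struct cxl_mbox_cmd and pass
it with the cxl_memdev_state to cxl_internal_send_cmd(). A condensed,
non-authoritative sketch modeled on cxl_mem_get_partition_info() from
the hunk above:

/* Condensed from cxl_mem_get_partition_info() in the hunk above. */
static int example_get_partition_info(struct cxl_memdev_state *mds)
{
	struct cxl_mbox_get_partition_info pi;
	struct cxl_mbox_cmd mbox_cmd = {
		.opcode = CXL_MBOX_OP_GET_PARTITION_INFO,
		.size_out = sizeof(pi),
		.payload_out = &pi,
	};
	int rc;

	/* size checks, submission, and RC-to-errno translation happen here */
	rc = cxl_internal_send_cmd(mds, &mbox_cmd);
	if (rc)
		return rc;

	mds->active_volatile_bytes =
		le64_to_cpu(pi.active_volatile_cap) * CXL_CAPACITY_MULTIPLIER;
	return 0;
}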
+484 -19
drivers/cxl/core/memdev.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* Copyright(c) 2020 Intel Corporation. */ 3 3 4 + #include <linux/io-64-nonatomic-lo-hi.h> 5 + #include <linux/firmware.h> 4 6 #include <linux/device.h> 5 7 #include <linux/slab.h> 6 8 #include <linux/idr.h> ··· 41 39 { 42 40 struct cxl_memdev *cxlmd = to_cxl_memdev(dev); 43 41 struct cxl_dev_state *cxlds = cxlmd->cxlds; 42 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); 44 43 45 - return sysfs_emit(buf, "%.16s\n", cxlds->firmware_version); 44 + if (!mds) 45 + return sysfs_emit(buf, "\n"); 46 + return sysfs_emit(buf, "%.16s\n", mds->firmware_version); 46 47 } 47 48 static DEVICE_ATTR_RO(firmware_version); 48 49 ··· 54 49 { 55 50 struct cxl_memdev *cxlmd = to_cxl_memdev(dev); 56 51 struct cxl_dev_state *cxlds = cxlmd->cxlds; 52 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); 57 53 58 - return sysfs_emit(buf, "%zu\n", cxlds->payload_size); 54 + if (!mds) 55 + return sysfs_emit(buf, "\n"); 56 + return sysfs_emit(buf, "%zu\n", mds->payload_size); 59 57 } 60 58 static DEVICE_ATTR_RO(payload_max); 61 59 ··· 67 59 { 68 60 struct cxl_memdev *cxlmd = to_cxl_memdev(dev); 69 61 struct cxl_dev_state *cxlds = cxlmd->cxlds; 62 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); 70 63 71 - return sysfs_emit(buf, "%zu\n", cxlds->lsa_size); 64 + if (!mds) 65 + return sysfs_emit(buf, "\n"); 66 + return sysfs_emit(buf, "%zu\n", mds->lsa_size); 72 67 } 73 68 static DEVICE_ATTR_RO(label_storage_size); 74 69 ··· 118 107 } 119 108 static DEVICE_ATTR_RO(numa_node); 120 109 110 + static ssize_t security_state_show(struct device *dev, 111 + struct device_attribute *attr, 112 + char *buf) 113 + { 114 + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); 115 + struct cxl_dev_state *cxlds = cxlmd->cxlds; 116 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); 117 + u64 reg = readq(cxlds->regs.mbox + CXLDEV_MBOX_BG_CMD_STATUS_OFFSET); 118 + u32 pct = FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_PCT_MASK, reg); 119 + u16 cmd = FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_OPCODE_MASK, reg); 120 + unsigned long state = mds->security.state; 121 + 122 + if (cmd == CXL_MBOX_OP_SANITIZE && pct != 100) 123 + return sysfs_emit(buf, "sanitize\n"); 124 + 125 + if (!(state & CXL_PMEM_SEC_STATE_USER_PASS_SET)) 126 + return sysfs_emit(buf, "disabled\n"); 127 + if (state & CXL_PMEM_SEC_STATE_FROZEN || 128 + state & CXL_PMEM_SEC_STATE_MASTER_PLIMIT || 129 + state & CXL_PMEM_SEC_STATE_USER_PLIMIT) 130 + return sysfs_emit(buf, "frozen\n"); 131 + if (state & CXL_PMEM_SEC_STATE_LOCKED) 132 + return sysfs_emit(buf, "locked\n"); 133 + else 134 + return sysfs_emit(buf, "unlocked\n"); 135 + } 136 + static struct device_attribute dev_attr_security_state = 137 + __ATTR(state, 0444, security_state_show, NULL); 138 + 139 + static ssize_t security_sanitize_store(struct device *dev, 140 + struct device_attribute *attr, 141 + const char *buf, size_t len) 142 + { 143 + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); 144 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 145 + struct cxl_port *port = cxlmd->endpoint; 146 + bool sanitize; 147 + ssize_t rc; 148 + 149 + if (kstrtobool(buf, &sanitize) || !sanitize) 150 + return -EINVAL; 151 + 152 + if (!port || !is_cxl_endpoint(port)) 153 + return -EINVAL; 154 + 155 + /* ensure no regions are mapped to this memdev */ 156 + if (port->commit_end != -1) 157 + return -EBUSY; 158 + 159 + rc = cxl_mem_sanitize(mds, CXL_MBOX_OP_SANITIZE); 160 + 161 + return rc ? 
rc : len; 162 + } 163 + static struct device_attribute dev_attr_security_sanitize = 164 + __ATTR(sanitize, 0200, NULL, security_sanitize_store); 165 + 166 + static ssize_t security_erase_store(struct device *dev, 167 + struct device_attribute *attr, 168 + const char *buf, size_t len) 169 + { 170 + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); 171 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 172 + struct cxl_port *port = cxlmd->endpoint; 173 + ssize_t rc; 174 + bool erase; 175 + 176 + if (kstrtobool(buf, &erase) || !erase) 177 + return -EINVAL; 178 + 179 + if (!port || !is_cxl_endpoint(port)) 180 + return -EINVAL; 181 + 182 + /* ensure no regions are mapped to this memdev */ 183 + if (port->commit_end != -1) 184 + return -EBUSY; 185 + 186 + rc = cxl_mem_sanitize(mds, CXL_MBOX_OP_SECURE_ERASE); 187 + 188 + return rc ? rc : len; 189 + } 190 + static struct device_attribute dev_attr_security_erase = 191 + __ATTR(erase, 0200, NULL, security_erase_store); 192 + 121 193 static int cxl_get_poison_by_memdev(struct cxl_memdev *cxlmd) 122 194 { 123 195 struct cxl_dev_state *cxlds = cxlmd->cxlds; ··· 234 140 struct cxl_port *port; 235 141 int rc; 236 142 237 - port = dev_get_drvdata(&cxlmd->dev); 143 + port = cxlmd->endpoint; 238 144 if (!port || !is_cxl_endpoint(port)) 239 145 return -EINVAL; 240 146 ··· 292 198 ctx = (struct cxl_dpa_to_region_context) { 293 199 .dpa = dpa, 294 200 }; 295 - port = dev_get_drvdata(&cxlmd->dev); 201 + port = cxlmd->endpoint; 296 202 if (port && is_cxl_endpoint(port) && port->commit_end != -1) 297 203 device_for_each_child(&port->dev, &ctx, __cxl_dpa_to_region); 298 204 ··· 325 231 326 232 int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa) 327 233 { 328 - struct cxl_dev_state *cxlds = cxlmd->cxlds; 234 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 329 235 struct cxl_mbox_inject_poison inject; 330 236 struct cxl_poison_record record; 331 237 struct cxl_mbox_cmd mbox_cmd; ··· 349 255 .size_in = sizeof(inject), 350 256 .payload_in = &inject, 351 257 }; 352 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 258 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 353 259 if (rc) 354 260 goto out; 355 261 356 262 cxlr = cxl_dpa_to_region(cxlmd, dpa); 357 263 if (cxlr) 358 - dev_warn_once(cxlds->dev, 264 + dev_warn_once(mds->cxlds.dev, 359 265 "poison inject dpa:%#llx region: %s\n", dpa, 360 266 dev_name(&cxlr->dev)); 361 267 ··· 373 279 374 280 int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa) 375 281 { 376 - struct cxl_dev_state *cxlds = cxlmd->cxlds; 282 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 377 283 struct cxl_mbox_clear_poison clear; 378 284 struct cxl_poison_record record; 379 285 struct cxl_mbox_cmd mbox_cmd; ··· 406 312 .payload_in = &clear, 407 313 }; 408 314 409 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 315 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 410 316 if (rc) 411 317 goto out; 412 318 413 319 cxlr = cxl_dpa_to_region(cxlmd, dpa); 414 320 if (cxlr) 415 - dev_warn_once(cxlds->dev, "poison clear dpa:%#llx region: %s\n", 416 - dpa, dev_name(&cxlr->dev)); 321 + dev_warn_once(mds->cxlds.dev, 322 + "poison clear dpa:%#llx region: %s\n", dpa, 323 + dev_name(&cxlr->dev)); 417 324 418 325 record = (struct cxl_poison_record) { 419 326 .address = cpu_to_le64(dpa), ··· 447 352 NULL, 448 353 }; 449 354 355 + static struct attribute *cxl_memdev_security_attributes[] = { 356 + &dev_attr_security_state.attr, 357 + &dev_attr_security_sanitize.attr, 358 + &dev_attr_security_erase.attr, 359 + 
NULL, 360 + }; 361 + 450 362 static umode_t cxl_memdev_visible(struct kobject *kobj, struct attribute *a, 451 363 int n) 452 364 { ··· 477 375 .attrs = cxl_memdev_pmem_attributes, 478 376 }; 479 377 378 + static struct attribute_group cxl_memdev_security_attribute_group = { 379 + .name = "security", 380 + .attrs = cxl_memdev_security_attributes, 381 + }; 382 + 480 383 static const struct attribute_group *cxl_memdev_attribute_groups[] = { 481 384 &cxl_memdev_attribute_group, 482 385 &cxl_memdev_ram_attribute_group, 483 386 &cxl_memdev_pmem_attribute_group, 387 + &cxl_memdev_security_attribute_group, 484 388 NULL, 485 389 }; 486 390 ··· 505 397 506 398 /** 507 399 * set_exclusive_cxl_commands() - atomically disable user cxl commands 508 - * @cxlds: The device state to operate on 400 + * @mds: The device state to operate on 509 401 * @cmds: bitmap of commands to mark exclusive 510 402 * 511 403 * Grab the cxl_memdev_rwsem in write mode to flush in-flight 512 404 * invocations of the ioctl path and then disable future execution of 513 405 * commands with the command ids set in @cmds. 514 406 */ 515 - void set_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds) 407 + void set_exclusive_cxl_commands(struct cxl_memdev_state *mds, 408 + unsigned long *cmds) 516 409 { 517 410 down_write(&cxl_memdev_rwsem); 518 - bitmap_or(cxlds->exclusive_cmds, cxlds->exclusive_cmds, cmds, 411 + bitmap_or(mds->exclusive_cmds, mds->exclusive_cmds, cmds, 519 412 CXL_MEM_COMMAND_ID_MAX); 520 413 up_write(&cxl_memdev_rwsem); 521 414 } ··· 524 415 525 416 /** 526 417 * clear_exclusive_cxl_commands() - atomically enable user cxl commands 527 - * @cxlds: The device state to modify 418 + * @mds: The device state to modify 528 419 * @cmds: bitmap of commands to mark available for userspace 529 420 */ 530 - void clear_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds) 421 + void clear_exclusive_cxl_commands(struct cxl_memdev_state *mds, 422 + unsigned long *cmds) 531 423 { 532 424 down_write(&cxl_memdev_rwsem); 533 - bitmap_andnot(cxlds->exclusive_cmds, cxlds->exclusive_cmds, cmds, 425 + bitmap_andnot(mds->exclusive_cmds, mds->exclusive_cmds, cmds, 534 426 CXL_MEM_COMMAND_ID_MAX); 535 427 up_write(&cxl_memdev_rwsem); 536 428 } 537 429 EXPORT_SYMBOL_NS_GPL(clear_exclusive_cxl_commands, CXL); 430 + 431 + static void cxl_memdev_security_shutdown(struct device *dev) 432 + { 433 + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); 434 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 435 + 436 + if (mds->security.poll) 437 + cancel_delayed_work_sync(&mds->security.poll_dwork); 438 + } 538 439 539 440 static void cxl_memdev_shutdown(struct device *dev) 540 441 { 541 442 struct cxl_memdev *cxlmd = to_cxl_memdev(dev); 542 443 543 444 down_write(&cxl_memdev_rwsem); 445 + cxl_memdev_security_shutdown(dev); 544 446 cxlmd->cxlds = NULL; 545 447 up_write(&cxl_memdev_rwsem); 546 448 } ··· 631 511 unsigned long arg) 632 512 { 633 513 struct cxl_memdev *cxlmd = file->private_data; 514 + struct cxl_dev_state *cxlds; 634 515 int rc = -ENXIO; 635 516 636 517 down_read(&cxl_memdev_rwsem); 637 - if (cxlmd->cxlds) 518 + cxlds = cxlmd->cxlds; 519 + if (cxlds && cxlds->type == CXL_DEVTYPE_CLASSMEM) 638 520 rc = __cxl_memdev_ioctl(cxlmd, cmd, arg); 639 521 up_read(&cxl_memdev_rwsem); 640 522 ··· 664 542 return 0; 665 543 } 666 544 545 + /** 546 + * cxl_mem_get_fw_info - Get Firmware info 547 + * @mds: The device data for the operation 548 + * 549 + * Retrieve firmware info for the device 
specified. 550 + * 551 + * Return: 0 if no error, or the result of the mailbox command. 552 + * 553 + * See CXL-3.0 8.2.9.3.1 Get FW Info 554 + */ 555 + static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds) 556 + { 557 + struct cxl_mbox_get_fw_info info; 558 + struct cxl_mbox_cmd mbox_cmd; 559 + int rc; 560 + 561 + mbox_cmd = (struct cxl_mbox_cmd) { 562 + .opcode = CXL_MBOX_OP_GET_FW_INFO, 563 + .size_out = sizeof(info), 564 + .payload_out = &info, 565 + }; 566 + 567 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 568 + if (rc < 0) 569 + return rc; 570 + 571 + mds->fw.num_slots = info.num_slots; 572 + mds->fw.cur_slot = FIELD_GET(CXL_FW_INFO_SLOT_INFO_CUR_MASK, 573 + info.slot_info); 574 + 575 + return 0; 576 + } 577 + 578 + /** 579 + * cxl_mem_activate_fw - Activate Firmware 580 + * @mds: The device data for the operation 581 + * @slot: slot number to activate 582 + * 583 + * Activate firmware in a given slot for the device specified. 584 + * 585 + * Return: 0 if no error, or the result of the mailbox command. 586 + * 587 + * See CXL-3.0 8.2.9.3.3 Activate FW 588 + */ 589 + static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot) 590 + { 591 + struct cxl_mbox_activate_fw activate; 592 + struct cxl_mbox_cmd mbox_cmd; 593 + 594 + if (slot == 0 || slot > mds->fw.num_slots) 595 + return -EINVAL; 596 + 597 + mbox_cmd = (struct cxl_mbox_cmd) { 598 + .opcode = CXL_MBOX_OP_ACTIVATE_FW, 599 + .size_in = sizeof(activate), 600 + .payload_in = &activate, 601 + }; 602 + 603 + /* Only offline activation supported for now */ 604 + activate.action = CXL_FW_ACTIVATE_OFFLINE; 605 + activate.slot = slot; 606 + 607 + return cxl_internal_send_cmd(mds, &mbox_cmd); 608 + } 609 + 610 + /** 611 + * cxl_mem_abort_fw_xfer - Abort an in-progress FW transfer 612 + * @mds: The device data for the operation 613 + * 614 + * Abort an in-progress firmware transfer for the device specified. 615 + * 616 + * Return: 0 if no error, or the result of the mailbox command. 
617 + * 618 + * See CXL-3.0 8.2.9.3.2 Transfer FW 619 + */ 620 + static int cxl_mem_abort_fw_xfer(struct cxl_memdev_state *mds) 621 + { 622 + struct cxl_mbox_transfer_fw *transfer; 623 + struct cxl_mbox_cmd mbox_cmd; 624 + int rc; 625 + 626 + transfer = kzalloc(struct_size(transfer, data, 0), GFP_KERNEL); 627 + if (!transfer) 628 + return -ENOMEM; 629 + 630 + /* Set a 1s poll interval and a total wait time of 30s */ 631 + mbox_cmd = (struct cxl_mbox_cmd) { 632 + .opcode = CXL_MBOX_OP_TRANSFER_FW, 633 + .size_in = sizeof(*transfer), 634 + .payload_in = transfer, 635 + .poll_interval_ms = 1000, 636 + .poll_count = 30, 637 + }; 638 + 639 + transfer->action = CXL_FW_TRANSFER_ACTION_ABORT; 640 + 641 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 642 + kfree(transfer); 643 + return rc; 644 + } 645 + 646 + static void cxl_fw_cleanup(struct fw_upload *fwl) 647 + { 648 + struct cxl_memdev_state *mds = fwl->dd_handle; 649 + 650 + mds->fw.next_slot = 0; 651 + } 652 + 653 + static int cxl_fw_do_cancel(struct fw_upload *fwl) 654 + { 655 + struct cxl_memdev_state *mds = fwl->dd_handle; 656 + struct cxl_dev_state *cxlds = &mds->cxlds; 657 + struct cxl_memdev *cxlmd = cxlds->cxlmd; 658 + int rc; 659 + 660 + rc = cxl_mem_abort_fw_xfer(mds); 661 + if (rc < 0) 662 + dev_err(&cxlmd->dev, "Error aborting FW transfer: %d\n", rc); 663 + 664 + return FW_UPLOAD_ERR_CANCELED; 665 + } 666 + 667 + static enum fw_upload_err cxl_fw_prepare(struct fw_upload *fwl, const u8 *data, 668 + u32 size) 669 + { 670 + struct cxl_memdev_state *mds = fwl->dd_handle; 671 + struct cxl_mbox_transfer_fw *transfer; 672 + 673 + if (!size) 674 + return FW_UPLOAD_ERR_INVALID_SIZE; 675 + 676 + mds->fw.oneshot = struct_size(transfer, data, size) < 677 + mds->payload_size; 678 + 679 + if (cxl_mem_get_fw_info(mds)) 680 + return FW_UPLOAD_ERR_HW_ERROR; 681 + 682 + /* 683 + * So far no state has been changed, hence no other cleanup is 684 + * necessary. Simply return the cancelled status. 685 + */ 686 + if (test_and_clear_bit(CXL_FW_CANCEL, mds->fw.state)) 687 + return FW_UPLOAD_ERR_CANCELED; 688 + 689 + return FW_UPLOAD_ERR_NONE; 690 + } 691 + 692 + static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data, 693 + u32 offset, u32 size, u32 *written) 694 + { 695 + struct cxl_memdev_state *mds = fwl->dd_handle; 696 + struct cxl_dev_state *cxlds = &mds->cxlds; 697 + struct cxl_memdev *cxlmd = cxlds->cxlmd; 698 + struct cxl_mbox_transfer_fw *transfer; 699 + struct cxl_mbox_cmd mbox_cmd; 700 + u32 cur_size, remaining; 701 + size_t size_in; 702 + int rc; 703 + 704 + *written = 0; 705 + 706 + /* Offset has to be aligned to 128B (CXL-3.0 8.2.9.3.2 Table 8-57) */ 707 + if (!IS_ALIGNED(offset, CXL_FW_TRANSFER_ALIGNMENT)) { 708 + dev_err(&cxlmd->dev, 709 + "misaligned offset for FW transfer slice (%u)\n", 710 + offset); 711 + return FW_UPLOAD_ERR_RW_ERROR; 712 + } 713 + 714 + /* 715 + * Pick transfer size based on mds->payload_size. @size must be 128-byte 716 + * aligned, ->payload_size is a power of 2 starting at 256 bytes, and 717 + * sizeof(*transfer) is 128. These constraints imply that @cur_size 718 + * will always be 128b aligned. 719 + */ 720 + cur_size = min_t(size_t, size, mds->payload_size - sizeof(*transfer)); 721 + 722 + remaining = size - cur_size; 723 + size_in = struct_size(transfer, data, cur_size); 724 + 725 + if (test_and_clear_bit(CXL_FW_CANCEL, mds->fw.state)) 726 + return cxl_fw_do_cancel(fwl); 727 + 728 + /* 729 + * Slot numbers are 1-indexed 730 + * cur_slot is the 0-indexed next_slot (i.e. 
'cur_slot - 1 + 1') 731 + * Check for rollover using modulo, and 1-index it by adding 1 732 + */ 733 + mds->fw.next_slot = (mds->fw.cur_slot % mds->fw.num_slots) + 1; 734 + 735 + /* Do the transfer via mailbox cmd */ 736 + transfer = kzalloc(size_in, GFP_KERNEL); 737 + if (!transfer) 738 + return FW_UPLOAD_ERR_RW_ERROR; 739 + 740 + transfer->offset = cpu_to_le32(offset / CXL_FW_TRANSFER_ALIGNMENT); 741 + memcpy(transfer->data, data + offset, cur_size); 742 + if (mds->fw.oneshot) { 743 + transfer->action = CXL_FW_TRANSFER_ACTION_FULL; 744 + transfer->slot = mds->fw.next_slot; 745 + } else { 746 + if (offset == 0) { 747 + transfer->action = CXL_FW_TRANSFER_ACTION_INITIATE; 748 + } else if (remaining == 0) { 749 + transfer->action = CXL_FW_TRANSFER_ACTION_END; 750 + transfer->slot = mds->fw.next_slot; 751 + } else { 752 + transfer->action = CXL_FW_TRANSFER_ACTION_CONTINUE; 753 + } 754 + } 755 + 756 + mbox_cmd = (struct cxl_mbox_cmd) { 757 + .opcode = CXL_MBOX_OP_TRANSFER_FW, 758 + .size_in = size_in, 759 + .payload_in = transfer, 760 + .poll_interval_ms = 1000, 761 + .poll_count = 30, 762 + }; 763 + 764 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 765 + if (rc < 0) { 766 + rc = FW_UPLOAD_ERR_RW_ERROR; 767 + goto out_free; 768 + } 769 + 770 + *written = cur_size; 771 + 772 + /* Activate FW if oneshot or if the last slice was written */ 773 + if (mds->fw.oneshot || remaining == 0) { 774 + dev_dbg(&cxlmd->dev, "Activating firmware slot: %d\n", 775 + mds->fw.next_slot); 776 + rc = cxl_mem_activate_fw(mds, mds->fw.next_slot); 777 + if (rc < 0) { 778 + dev_err(&cxlmd->dev, "Error activating firmware: %d\n", 779 + rc); 780 + rc = FW_UPLOAD_ERR_HW_ERROR; 781 + goto out_free; 782 + } 783 + } 784 + 785 + rc = FW_UPLOAD_ERR_NONE; 786 + 787 + out_free: 788 + kfree(transfer); 789 + return rc; 790 + } 791 + 792 + static enum fw_upload_err cxl_fw_poll_complete(struct fw_upload *fwl) 793 + { 794 + struct cxl_memdev_state *mds = fwl->dd_handle; 795 + 796 + /* 797 + * cxl_internal_send_cmd() handles background operations synchronously. 798 + * No need to wait for completions here - any errors would've been 799 + * reported and handled during the ->write() call(s). 800 + * Just check if a cancel request was received, and return success. 
801 + */ 802 + if (test_and_clear_bit(CXL_FW_CANCEL, mds->fw.state)) 803 + return cxl_fw_do_cancel(fwl); 804 + 805 + return FW_UPLOAD_ERR_NONE; 806 + } 807 + 808 + static void cxl_fw_cancel(struct fw_upload *fwl) 809 + { 810 + struct cxl_memdev_state *mds = fwl->dd_handle; 811 + 812 + set_bit(CXL_FW_CANCEL, mds->fw.state); 813 + } 814 + 815 + static const struct fw_upload_ops cxl_memdev_fw_ops = { 816 + .prepare = cxl_fw_prepare, 817 + .write = cxl_fw_write, 818 + .poll_complete = cxl_fw_poll_complete, 819 + .cancel = cxl_fw_cancel, 820 + .cleanup = cxl_fw_cleanup, 821 + }; 822 + 823 + static void devm_cxl_remove_fw_upload(void *fwl) 824 + { 825 + firmware_upload_unregister(fwl); 826 + } 827 + 828 + int cxl_memdev_setup_fw_upload(struct cxl_memdev_state *mds) 829 + { 830 + struct cxl_dev_state *cxlds = &mds->cxlds; 831 + struct device *dev = &cxlds->cxlmd->dev; 832 + struct fw_upload *fwl; 833 + int rc; 834 + 835 + if (!test_bit(CXL_MEM_COMMAND_ID_GET_FW_INFO, mds->enabled_cmds)) 836 + return 0; 837 + 838 + fwl = firmware_upload_register(THIS_MODULE, dev, dev_name(dev), 839 + &cxl_memdev_fw_ops, mds); 840 + if (IS_ERR(fwl)) 841 + return dev_err_probe(dev, PTR_ERR(fwl), 842 + "Failed to register firmware loader\n"); 843 + 844 + rc = devm_add_action_or_reset(cxlds->dev, devm_cxl_remove_fw_upload, 845 + fwl); 846 + if (rc) 847 + dev_err(dev, 848 + "Failed to add firmware loader remove action: %d\n", 849 + rc); 850 + 851 + return rc; 852 + } 853 + EXPORT_SYMBOL_NS_GPL(cxl_memdev_setup_fw_upload, CXL); 854 + 667 855 static const struct file_operations cxl_memdev_fops = { 668 856 .owner = THIS_MODULE, 669 857 .unlocked_ioctl = cxl_memdev_ioctl, ··· 982 550 .compat_ioctl = compat_ptr_ioctl, 983 551 .llseek = noop_llseek, 984 552 }; 553 + 554 + static void put_sanitize(void *data) 555 + { 556 + struct cxl_memdev_state *mds = data; 557 + 558 + sysfs_put(mds->security.sanitize_node); 559 + } 560 + 561 + static int cxl_memdev_security_init(struct cxl_memdev *cxlmd) 562 + { 563 + struct cxl_dev_state *cxlds = cxlmd->cxlds; 564 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); 565 + struct device *dev = &cxlmd->dev; 566 + struct kernfs_node *sec; 567 + 568 + sec = sysfs_get_dirent(dev->kobj.sd, "security"); 569 + if (!sec) { 570 + dev_err(dev, "sysfs_get_dirent 'security' failed\n"); 571 + return -ENODEV; 572 + } 573 + mds->security.sanitize_node = sysfs_get_dirent(sec, "state"); 574 + sysfs_put(sec); 575 + if (!mds->security.sanitize_node) { 576 + dev_err(dev, "sysfs_get_dirent 'state' failed\n"); 577 + return -ENODEV; 578 + } 579 + 580 + return devm_add_action_or_reset(cxlds->dev, put_sanitize, mds); 581 + } 985 582 986 583 struct cxl_memdev *devm_cxl_add_memdev(struct cxl_dev_state *cxlds) 987 584 { ··· 1037 576 1038 577 cdev = &cxlmd->cdev; 1039 578 rc = cdev_device_add(cdev, dev); 579 + if (rc) 580 + goto err; 581 + 582 + rc = cxl_memdev_security_init(cxlmd); 1040 583 if (rc) 1041 584 goto err; 1042 585
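The sanitize_node kernfs reference taken in cxl_memdev_security_init() above exists so the driver can notify pollers of security/state when a background sanitize finishes. A hedged userspace sketch of consuming that notification (the path and helper name are illustrative; sysfs attribute notification surfaces as POLLPRI/POLLERR, and the attribute must be read both before and after poll()):

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

/* e.g. path = "/sys/bus/cxl/devices/mem0/security/state" */
static int wait_for_sanitize(const char *path)
{
	char buf[32];
	struct pollfd pfd = { .events = POLLPRI | POLLERR };
	ssize_t n = -1;

	pfd.fd = open(path, O_RDONLY);
	if (pfd.fd < 0)
		return -1;

	pread(pfd.fd, buf, sizeof(buf), 0);	/* arm the notification */
	if (poll(&pfd, 1, -1) > 0) {
		n = pread(pfd.fd, buf, sizeof(buf) - 1, 0);
		if (n > 0) {
			buf[n] = '\0';
			printf("security state: %s", buf); /* "disabled", etc. */
		}
	}
	close(pfd.fd);
	return n > 0 ? 0 : -1;
}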
+6 -25
drivers/cxl/core/pci.c
··· 67 67 68 68 /** 69 69 * devm_cxl_port_enumerate_dports - enumerate downstream ports of the upstream port 70 - * @port: cxl_port whose ->uport is the upstream of dports to be enumerated 70 + * @port: cxl_port whose ->uport_dev is the upstream of dports to be enumerated 71 71 * 72 72 * Returns a positive number of dports enumerated or a negative error 73 73 * code. ··· 308 308 hdm + CXL_HDM_DECODER_CTRL_OFFSET); 309 309 } 310 310 311 - int devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm) 311 + static int devm_cxl_enable_hdm(struct device *host, struct cxl_hdm *cxlhdm) 312 312 { 313 - void __iomem *hdm; 313 + void __iomem *hdm = cxlhdm->regs.hdm_decoder; 314 314 u32 global_ctrl; 315 315 316 - /* 317 - * If the hdm capability was not mapped there is nothing to enable and 318 - * the caller is responsible for what happens next. For example, 319 - * emulate a passthrough decoder. 320 - */ 321 - if (IS_ERR(cxlhdm)) 322 - return 0; 323 - 324 - hdm = cxlhdm->regs.hdm_decoder; 325 316 global_ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET); 326 - 327 - /* 328 - * If the HDM decoder capability was enabled on entry, skip 329 - * registering disable_hdm() since this decode capability may be 330 - * owned by platform firmware. 331 - */ 332 - if (global_ctrl & CXL_HDM_DECODER_ENABLE) 333 - return 0; 334 - 335 317 writel(global_ctrl | CXL_HDM_DECODER_ENABLE, 336 318 hdm + CXL_HDM_DECODER_CTRL_OFFSET); 337 319 338 - return devm_add_action_or_reset(&port->dev, disable_hdm, cxlhdm); 320 + return devm_add_action_or_reset(host, disable_hdm, cxlhdm); 339 321 } 340 - EXPORT_SYMBOL_NS_GPL(devm_cxl_enable_hdm, CXL); 341 322 342 323 int cxl_dvsec_rr_decode(struct device *dev, int d, 343 324 struct cxl_endpoint_dvsec_info *info) ··· 492 511 if (info->mem_enabled) 493 512 return 0; 494 513 495 - rc = devm_cxl_enable_hdm(port, cxlhdm); 514 + rc = devm_cxl_enable_hdm(&port->dev, cxlhdm); 496 515 if (rc) 497 516 return rc; 498 517 ··· 603 622 */ 604 623 void read_cdat_data(struct cxl_port *port) 605 624 { 606 - struct cxl_memdev *cxlmd = to_cxl_memdev(port->uport); 625 + struct cxl_memdev *cxlmd = to_cxl_memdev(port->uport_dev); 607 626 struct device *host = cxlmd->dev.parent; 608 627 struct device *dev = &port->dev; 609 628 struct pci_doe_mb *cdat_doe;
+1 -1
drivers/cxl/core/pmem.c
··· 64 64 65 65 struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(struct cxl_memdev *cxlmd) 66 66 { 67 - struct cxl_port *port = find_cxl_root(dev_get_drvdata(&cxlmd->dev)); 67 + struct cxl_port *port = find_cxl_root(cxlmd->endpoint); 68 68 struct device *dev; 69 69 70 70 if (!port)
+68
drivers/cxl/core/pmu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* Copyright(c) 2023 Huawei. All rights reserved. */ 3 + 4 + #include <linux/device.h> 5 + #include <linux/slab.h> 6 + #include <linux/idr.h> 7 + #include <cxlmem.h> 8 + #include <pmu.h> 9 + #include <cxl.h> 10 + #include "core.h" 11 + 12 + static void cxl_pmu_release(struct device *dev) 13 + { 14 + struct cxl_pmu *pmu = to_cxl_pmu(dev); 15 + 16 + kfree(pmu); 17 + } 18 + 19 + const struct device_type cxl_pmu_type = { 20 + .name = "cxl_pmu", 21 + .release = cxl_pmu_release, 22 + }; 23 + 24 + static void remove_dev(void *dev) 25 + { 26 + device_del(dev); 27 + } 28 + 29 + int devm_cxl_pmu_add(struct device *parent, struct cxl_pmu_regs *regs, 30 + int assoc_id, int index, enum cxl_pmu_type type) 31 + { 32 + struct cxl_pmu *pmu; 33 + struct device *dev; 34 + int rc; 35 + 36 + pmu = kzalloc(sizeof(*pmu), GFP_KERNEL); 37 + if (!pmu) 38 + return -ENOMEM; 39 + 40 + pmu->assoc_id = assoc_id; 41 + pmu->index = index; 42 + pmu->type = type; 43 + pmu->base = regs->pmu; 44 + dev = &pmu->dev; 45 + device_initialize(dev); 46 + device_set_pm_not_required(dev); 47 + dev->parent = parent; 48 + dev->bus = &cxl_bus_type; 49 + dev->type = &cxl_pmu_type; 50 + switch (pmu->type) { 51 + case CXL_PMU_MEMDEV: 52 + rc = dev_set_name(dev, "pmu_mem%d.%d", assoc_id, index); 53 + break; 54 + } 55 + if (rc) 56 + goto err; 57 + 58 + rc = device_add(dev); 59 + if (rc) 60 + goto err; 61 + 62 + return devm_add_action_or_reset(parent, remove_dev, dev); 63 + 64 + err: 65 + put_device(&pmu->dev); 66 + return rc; 67 + } 68 + EXPORT_SYMBOL_NS_GPL(devm_cxl_pmu_add, CXL);
+114 -51
drivers/cxl/core/port.c
··· 56 56 return CXL_DEVICE_MEMORY_EXPANDER; 57 57 if (dev->type == CXL_REGION_TYPE()) 58 58 return CXL_DEVICE_REGION; 59 + if (dev->type == &cxl_pmu_type) 60 + return CXL_DEVICE_PMU; 59 61 return 0; 60 62 } 61 63 ··· 119 117 struct cxl_decoder *cxld = to_cxl_decoder(dev); 120 118 121 119 switch (cxld->target_type) { 122 - case CXL_DECODER_ACCELERATOR: 120 + case CXL_DECODER_DEVMEM: 123 121 return sysfs_emit(buf, "accelerator\n"); 124 - case CXL_DECODER_EXPANDER: 122 + case CXL_DECODER_HOSTONLYMEM: 125 123 return sysfs_emit(buf, "expander\n"); 126 124 } 127 125 return -ENXIO; ··· 563 561 * unregistered while holding their parent port lock. 564 562 */ 565 563 if (!parent) 566 - lock_dev = port->uport; 564 + lock_dev = port->uport_dev; 567 565 else if (is_cxl_root(parent)) 568 - lock_dev = parent->uport; 566 + lock_dev = parent->uport_dev; 569 567 else 570 568 lock_dev = &parent->dev; 571 569 ··· 585 583 { 586 584 int rc; 587 585 588 - rc = sysfs_create_link(&port->dev.kobj, &port->uport->kobj, "uport"); 586 + rc = sysfs_create_link(&port->dev.kobj, &port->uport_dev->kobj, 587 + "uport"); 589 588 if (rc) 590 589 return rc; 591 590 return devm_add_action_or_reset(host, cxl_unlink_uport, port); ··· 608 605 if (!parent_dport) 609 606 return 0; 610 607 611 - rc = sysfs_create_link(&port->dev.kobj, &parent_dport->dport->kobj, 608 + rc = sysfs_create_link(&port->dev.kobj, &parent_dport->dport_dev->kobj, 612 609 "parent_dport"); 613 610 if (rc) 614 611 return rc; ··· 617 614 618 615 static struct lock_class_key cxl_port_key; 619 616 620 - static struct cxl_port *cxl_port_alloc(struct device *uport, 617 + static struct cxl_port *cxl_port_alloc(struct device *uport_dev, 621 618 resource_size_t component_reg_phys, 622 619 struct cxl_dport *parent_dport) 623 620 { ··· 633 630 if (rc < 0) 634 631 goto err; 635 632 port->id = rc; 636 - port->uport = uport; 633 + port->uport_dev = uport_dev; 637 634 638 635 /* 639 636 * The top-level cxl_port "cxl_root" does not have a cxl_port as ··· 661 658 if (iter->host_bridge) 662 659 port->host_bridge = iter->host_bridge; 663 660 else if (parent_dport->rch) 664 - port->host_bridge = parent_dport->dport; 661 + port->host_bridge = parent_dport->dport_dev; 665 662 else 666 - port->host_bridge = iter->uport; 667 - dev_dbg(uport, "host-bridge: %s\n", dev_name(port->host_bridge)); 663 + port->host_bridge = iter->uport_dev; 664 + dev_dbg(uport_dev, "host-bridge: %s\n", 665 + dev_name(port->host_bridge)); 668 666 } else 669 - dev->parent = uport; 667 + dev->parent = uport_dev; 670 668 671 669 port->component_reg_phys = component_reg_phys; 672 670 ida_init(&port->decoder_ida); ··· 690 686 return ERR_PTR(rc); 691 687 } 692 688 689 + static int cxl_setup_comp_regs(struct device *dev, struct cxl_register_map *map, 690 + resource_size_t component_reg_phys) 691 + { 692 + if (component_reg_phys == CXL_RESOURCE_NONE) 693 + return 0; 694 + 695 + *map = (struct cxl_register_map) { 696 + .dev = dev, 697 + .reg_type = CXL_REGLOC_RBI_COMPONENT, 698 + .resource = component_reg_phys, 699 + .max_size = CXL_COMPONENT_REG_BLOCK_SIZE, 700 + }; 701 + 702 + return cxl_setup_regs(map); 703 + } 704 + 705 + static inline int cxl_port_setup_regs(struct cxl_port *port, 706 + resource_size_t component_reg_phys) 707 + { 708 + return cxl_setup_comp_regs(&port->dev, &port->comp_map, 709 + component_reg_phys); 710 + } 711 + 712 + static inline int cxl_dport_setup_regs(struct cxl_dport *dport, 713 + resource_size_t component_reg_phys) 714 + { 715 + return cxl_setup_comp_regs(dport->dport_dev, 
&dport->comp_map, 716 + component_reg_phys); 717 + } 718 + 693 719 static struct cxl_port *__devm_cxl_add_port(struct device *host, 694 - struct device *uport, 720 + struct device *uport_dev, 695 721 resource_size_t component_reg_phys, 696 722 struct cxl_dport *parent_dport) 697 723 { ··· 729 695 struct device *dev; 730 696 int rc; 731 697 732 - port = cxl_port_alloc(uport, component_reg_phys, parent_dport); 698 + port = cxl_port_alloc(uport_dev, component_reg_phys, parent_dport); 733 699 if (IS_ERR(port)) 734 700 return port; 735 701 736 702 dev = &port->dev; 737 - if (is_cxl_memdev(uport)) 703 + if (is_cxl_memdev(uport_dev)) 738 704 rc = dev_set_name(dev, "endpoint%d", port->id); 739 705 else if (parent_dport) 740 706 rc = dev_set_name(dev, "port%d", port->id); 741 707 else 742 708 rc = dev_set_name(dev, "root%d", port->id); 709 + if (rc) 710 + goto err; 711 + 712 + rc = cxl_port_setup_regs(port, component_reg_phys); 743 713 if (rc) 744 714 goto err; 745 715 ··· 773 735 /** 774 736 * devm_cxl_add_port - register a cxl_port in CXL memory decode hierarchy 775 737 * @host: host device for devm operations 776 - * @uport: "physical" device implementing this upstream port 738 + * @uport_dev: "physical" device implementing this upstream port 777 739 * @component_reg_phys: (optional) for configurable cxl_port instances 778 740 * @parent_dport: next hop up in the CXL memory decode hierarchy 779 741 */ 780 - struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport, 742 + struct cxl_port *devm_cxl_add_port(struct device *host, 743 + struct device *uport_dev, 781 744 resource_size_t component_reg_phys, 782 745 struct cxl_dport *parent_dport) 783 746 { 784 747 struct cxl_port *port, *parent_port; 785 748 786 - port = __devm_cxl_add_port(host, uport, component_reg_phys, 749 + port = __devm_cxl_add_port(host, uport_dev, component_reg_phys, 787 750 parent_dport); 788 751 789 752 parent_port = parent_dport ? parent_dport->port : NULL; 790 753 if (IS_ERR(port)) { 791 - dev_dbg(uport, "Failed to add%s%s%s: %ld\n", 754 + dev_dbg(uport_dev, "Failed to add%s%s%s: %ld\n", 792 755 parent_port ? " port to " : "", 793 756 parent_port ? dev_name(&parent_port->dev) : "", 794 757 parent_port ? "" : " root port", 795 758 PTR_ERR(port)); 796 759 } else { 797 - dev_dbg(uport, "%s added%s%s%s\n", 760 + dev_dbg(uport_dev, "%s added%s%s%s\n", 798 761 dev_name(&port->dev), 799 762 parent_port ? " to " : "", 800 763 parent_port ? 
dev_name(&parent_port->dev) : "", ··· 812 773 if (is_cxl_root(port)) 813 774 return NULL; 814 775 815 - if (dev_is_pci(port->uport)) { 816 - struct pci_dev *pdev = to_pci_dev(port->uport); 776 + if (dev_is_pci(port->uport_dev)) { 777 + struct pci_dev *pdev = to_pci_dev(port->uport_dev); 817 778 818 779 return pdev->subordinate; 819 780 } 820 781 821 - return xa_load(&cxl_root_buses, (unsigned long)port->uport); 782 + return xa_load(&cxl_root_buses, (unsigned long)port->uport_dev); 822 783 } 823 784 EXPORT_SYMBOL_NS_GPL(cxl_port_to_pci_bus, CXL); 824 785 825 - static void unregister_pci_bus(void *uport) 786 + static void unregister_pci_bus(void *uport_dev) 826 787 { 827 - xa_erase(&cxl_root_buses, (unsigned long)uport); 788 + xa_erase(&cxl_root_buses, (unsigned long)uport_dev); 828 789 } 829 790 830 - int devm_cxl_register_pci_bus(struct device *host, struct device *uport, 791 + int devm_cxl_register_pci_bus(struct device *host, struct device *uport_dev, 831 792 struct pci_bus *bus) 832 793 { 833 794 int rc; 834 795 835 - if (dev_is_pci(uport)) 796 + if (dev_is_pci(uport_dev)) 836 797 return -EINVAL; 837 798 838 - rc = xa_insert(&cxl_root_buses, (unsigned long)uport, bus, GFP_KERNEL); 799 + rc = xa_insert(&cxl_root_buses, (unsigned long)uport_dev, bus, 800 + GFP_KERNEL); 839 801 if (rc) 840 802 return rc; 841 - return devm_add_action_or_reset(host, unregister_pci_bus, uport); 803 + return devm_add_action_or_reset(host, unregister_pci_bus, uport_dev); 842 804 } 843 805 EXPORT_SYMBOL_NS_GPL(devm_cxl_register_pci_bus, CXL); 844 806 ··· 887 847 return NULL; 888 848 } 889 849 890 - static int add_dport(struct cxl_port *port, struct cxl_dport *new) 850 + static int add_dport(struct cxl_port *port, struct cxl_dport *dport) 891 851 { 892 852 struct cxl_dport *dup; 893 853 int rc; 894 854 895 855 device_lock_assert(&port->dev); 896 - dup = find_dport(port, new->port_id); 856 + dup = find_dport(port, dport->port_id); 897 857 if (dup) { 898 858 dev_err(&port->dev, 899 859 "unable to add dport%d-%s non-unique port id (%s)\n", 900 - new->port_id, dev_name(new->dport), 901 - dev_name(dup->dport)); 860 + dport->port_id, dev_name(dport->dport_dev), 861 + dev_name(dup->dport_dev)); 902 862 return -EBUSY; 903 863 } 904 864 905 - rc = xa_insert(&port->dports, (unsigned long)new->dport, new, 865 + rc = xa_insert(&port->dports, (unsigned long)dport->dport_dev, dport, 906 866 GFP_KERNEL); 907 867 if (rc) 908 868 return rc; ··· 935 895 struct cxl_dport *dport = data; 936 896 struct cxl_port *port = dport->port; 937 897 938 - xa_erase(&port->dports, (unsigned long) dport->dport); 939 - put_device(dport->dport); 898 + xa_erase(&port->dports, (unsigned long) dport->dport_dev); 899 + put_device(dport->dport_dev); 940 900 } 941 901 942 902 static void cxl_dport_unlink(void *data) ··· 960 920 int rc; 961 921 962 922 if (is_cxl_root(port)) 963 - host = port->uport; 923 + host = port->uport_dev; 964 924 else 965 925 host = &port->dev; 966 926 ··· 978 938 if (!dport) 979 939 return ERR_PTR(-ENOMEM); 980 940 981 - dport->dport = dport_dev; 982 - dport->port_id = port_id; 983 - dport->component_reg_phys = component_reg_phys; 984 - dport->port = port; 985 - if (rcrb != CXL_RESOURCE_NONE) 941 + if (rcrb != CXL_RESOURCE_NONE) { 942 + dport->rcrb.base = rcrb; 943 + component_reg_phys = __rcrb_to_component(dport_dev, &dport->rcrb, 944 + CXL_RCRB_DOWNSTREAM); 945 + if (component_reg_phys == CXL_RESOURCE_NONE) { 946 + dev_warn(dport_dev, "Invalid Component Registers in RCRB"); 947 + return ERR_PTR(-ENXIO); 948 + } 949 + 986 950 
dport->rch = true; 987 - dport->rcrb = rcrb; 951 + } 952 + 953 + if (component_reg_phys != CXL_RESOURCE_NONE) 954 + dev_dbg(dport_dev, "Component Registers found for dport: %pa\n", 955 + &component_reg_phys); 956 + 957 + dport->dport_dev = dport_dev; 958 + dport->port_id = port_id; 959 + dport->port = port; 960 + 961 + rc = cxl_dport_setup_regs(dport, component_reg_phys); 962 + if (rc) 963 + return ERR_PTR(rc); 988 964 989 965 cond_cxl_root_lock(port); 990 966 rc = add_dport(port, dport); ··· 1060 1004 * @port: the cxl_port that references this dport 1061 1005 * @dport_dev: firmware or PCI device representing the dport 1062 1006 * @port_id: identifier for this dport in a decoder's target list 1063 - * @component_reg_phys: optional location of CXL component registers 1064 1007 * @rcrb: mandatory location of a Root Complex Register Block 1065 1008 * 1066 1009 * See CXL 3.0 9.11.8 CXL Devices Attached to an RCH 1067 1010 */ 1068 1011 struct cxl_dport *devm_cxl_add_rch_dport(struct cxl_port *port, 1069 1012 struct device *dport_dev, int port_id, 1070 - resource_size_t component_reg_phys, 1071 1013 resource_size_t rcrb) 1072 1014 { 1073 1015 struct cxl_dport *dport; ··· 1076 1022 } 1077 1023 1078 1024 dport = __devm_cxl_add_dport(port, dport_dev, port_id, 1079 - component_reg_phys, rcrb); 1025 + CXL_RESOURCE_NONE, rcrb); 1080 1026 if (IS_ERR(dport)) { 1081 1027 dev_dbg(dport_dev, "failed to add RCH dport to %s: %ld\n", 1082 1028 dev_name(&port->dev), PTR_ERR(dport)); ··· 1215 1161 static void delete_endpoint(void *data) 1216 1162 { 1217 1163 struct cxl_memdev *cxlmd = data; 1218 - struct cxl_port *endpoint = dev_get_drvdata(&cxlmd->dev); 1164 + struct cxl_port *endpoint = cxlmd->endpoint; 1219 1165 struct cxl_port *parent_port; 1220 1166 struct device *parent; 1221 1167 ··· 1230 1176 devm_release_action(parent, cxl_unlink_uport, endpoint); 1231 1177 devm_release_action(parent, unregister_port, endpoint); 1232 1178 } 1179 + cxlmd->endpoint = NULL; 1233 1180 device_unlock(parent); 1234 1181 put_device(parent); 1235 1182 out: ··· 1242 1187 struct device *dev = &cxlmd->dev; 1243 1188 1244 1189 get_device(&endpoint->dev); 1245 - dev_set_drvdata(dev, endpoint); 1190 + cxlmd->endpoint = endpoint; 1246 1191 cxlmd->depth = endpoint->depth; 1247 1192 return devm_add_action_or_reset(dev, delete_endpoint, cxlmd); 1248 1193 } ··· 1418 1363 rc = PTR_ERR(port); 1419 1364 else { 1420 1365 dev_dbg(&cxlmd->dev, "add to new port %s:%s\n", 1421 - dev_name(&port->dev), dev_name(port->uport)); 1366 + dev_name(&port->dev), dev_name(port->uport_dev)); 1422 1367 rc = cxl_add_ep(dport, &cxlmd->dev); 1423 1368 if (rc == -EBUSY) { 1424 1369 /* ··· 1480 1425 if (port) { 1481 1426 dev_dbg(&cxlmd->dev, 1482 1427 "found already registered port %s:%s\n", 1483 - dev_name(&port->dev), dev_name(port->uport)); 1428 + dev_name(&port->dev), 1429 + dev_name(port->uport_dev)); 1484 1430 rc = cxl_add_ep(dport, &cxlmd->dev); 1485 1431 1486 1432 /* ··· 1520 1464 return 0; 1521 1465 } 1522 1466 EXPORT_SYMBOL_NS_GPL(devm_cxl_enumerate_ports, CXL); 1467 + 1468 + struct cxl_port *cxl_pci_find_port(struct pci_dev *pdev, 1469 + struct cxl_dport **dport) 1470 + { 1471 + return find_cxl_port(pdev->dev.parent, dport); 1472 + } 1473 + EXPORT_SYMBOL_NS_GPL(cxl_pci_find_port, CXL); 1523 1474 1524 1475 struct cxl_port *cxl_mem_find_port(struct cxl_memdev *cxlmd, 1525 1476 struct cxl_dport **dport) ··· 1613 1550 /* Pre initialize an "empty" decoder */ 1614 1551 cxld->interleave_ways = 1; 1615 1552 cxld->interleave_granularity = PAGE_SIZE; 1616 - 
cxld->target_type = CXL_DECODER_EXPANDER; 1553 + cxld->target_type = CXL_DECODER_HOSTONLYMEM; 1617 1554 cxld->hpa_range = (struct range) { 1618 1555 .start = 0, 1619 1556 .end = -1,
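One visible effect of the register rework above is the simpler RCH dport entry point: devm_cxl_add_rch_dport() now derives the downstream component registers from the RCRB itself rather than taking a separate component_reg_phys argument. A caller sketch under the new signature (variable names illustrative; the RCRB base typically comes from platform firmware, e.g. an ACPI CEDT CHBS entry):

struct cxl_dport *dport;

dport = devm_cxl_add_rch_dport(root_port, host_bridge_dev, port_id,
			       rcrb_phys);
if (IS_ERR(dport))
	return PTR_ERR(dport);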
+103 -65
drivers/cxl/core/region.c
··· 125 125 return xa_load(&port->regions, (unsigned long)cxlr); 126 126 } 127 127 128 + static int cxl_region_invalidate_memregion(struct cxl_region *cxlr) 129 + { 130 + if (!cpu_cache_has_invalidate_memregion()) { 131 + if (IS_ENABLED(CONFIG_CXL_REGION_INVALIDATION_TEST)) { 132 + dev_warn_once( 133 + &cxlr->dev, 134 + "Bypassing cpu_cache_invalidate_memregion() for testing!\n"); 135 + return 0; 136 + } else { 137 + dev_err(&cxlr->dev, 138 + "Failed to synchronize CPU cache state\n"); 139 + return -ENXIO; 140 + } 141 + } 142 + 143 + cpu_cache_invalidate_memregion(IORES_DESC_CXL); 144 + return 0; 145 + } 146 + 128 147 static int cxl_region_decode_reset(struct cxl_region *cxlr, int count) 129 148 { 130 149 struct cxl_region_params *p = &cxlr->params; 131 - int i; 150 + int i, rc = 0; 151 + 152 + /* 153 + * Before region teardown attempt to flush, and if the flush 154 + * fails cancel the region teardown for data consistency 155 + * concerns 156 + */ 157 + rc = cxl_region_invalidate_memregion(cxlr); 158 + if (rc) 159 + return rc; 132 160 133 161 for (i = count - 1; i >= 0; i--) { 134 162 struct cxl_endpoint_decoder *cxled = p->targets[i]; ··· 164 136 struct cxl_port *iter = cxled_to_port(cxled); 165 137 struct cxl_dev_state *cxlds = cxlmd->cxlds; 166 138 struct cxl_ep *ep; 167 - int rc = 0; 168 139 169 140 if (cxlds->rcd) 170 141 goto endpoint_reset; ··· 182 155 rc = cxld->reset(cxld); 183 156 if (rc) 184 157 return rc; 158 + set_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags); 185 159 } 186 160 187 161 endpoint_reset: 188 162 rc = cxled->cxld.reset(&cxled->cxld); 189 163 if (rc) 190 164 return rc; 165 + set_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags); 191 166 } 167 + 168 + /* all decoders associated with this region have been torn down */ 169 + clear_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags); 192 170 193 171 return 0; 194 172 } ··· 288 256 goto out; 289 257 } 290 258 291 - if (commit) 259 + /* 260 + * Invalidate caches before region setup to drop any speculative 261 + * consumption of this address space 262 + */ 263 + rc = cxl_region_invalidate_memregion(cxlr); 264 + if (rc) 265 + return rc; 266 + 267 + if (commit) { 292 268 rc = cxl_region_decode_commit(cxlr); 293 - else { 269 + if (rc == 0) 270 + p->state = CXL_CONFIG_COMMIT; 271 + } else { 294 272 p->state = CXL_CONFIG_RESET_PENDING; 295 273 up_write(&cxl_region_rwsem); 296 274 device_release_driver(&cxlr->dev); ··· 310 268 * The lock was dropped, so need to revalidate that the reset is 311 269 * still pending. 312 270 */ 313 - if (p->state == CXL_CONFIG_RESET_PENDING) 271 + if (p->state == CXL_CONFIG_RESET_PENDING) { 314 272 rc = cxl_region_decode_reset(cxlr, p->interleave_ways); 273 + /* 274 + * Revert to committed since there may still be active 275 + * decoders associated with this region, or move forward 276 + * to active to mark the reset successful 277 + */ 278 + if (rc) 279 + p->state = CXL_CONFIG_COMMIT; 280 + else 281 + p->state = CXL_CONFIG_ACTIVE; 282 + } 315 283 } 316 - 317 - if (rc) 318 - goto out; 319 - 320 - if (commit) 321 - p->state = CXL_CONFIG_COMMIT; 322 - else if (p->state == CXL_CONFIG_RESET_PENDING) 323 - p->state = CXL_CONFIG_ACTIVE; 324 284 325 285 out: 326 286 up_write(&cxl_region_rwsem); ··· 853 809 return -EBUSY; 854 810 } 855 811 812 + /* 813 + * Endpoints should already match the region type, but backstop that 814 + * assumption with an assertion. Switch-decoders change mapping-type 815 + * based on what is mapped when they are assigned to a region. 
816 + */ 817 + dev_WARN_ONCE(&cxlr->dev, 818 + port == cxled_to_port(cxled) && 819 + cxld->target_type != cxlr->type, 820 + "%s:%s mismatch decoder type %d -> %d\n", 821 + dev_name(&cxled_to_memdev(cxled)->dev), 822 + dev_name(&cxld->dev), cxld->target_type, cxlr->type); 823 + cxld->target_type = cxlr->type; 856 824 cxl_rr->decoder = cxld; 857 825 return 0; 858 826 } ··· 962 906 963 907 dev_dbg(&cxlr->dev, 964 908 "%s:%s %s add: %s:%s @ %d next: %s nr_eps: %d nr_targets: %d\n", 965 - dev_name(port->uport), dev_name(&port->dev), 909 + dev_name(port->uport_dev), dev_name(&port->dev), 966 910 dev_name(&cxld->dev), dev_name(&cxlmd->dev), 967 911 dev_name(&cxled->cxld.dev), pos, 968 - ep ? ep->next ? dev_name(ep->next->uport) : 912 + ep ? ep->next ? dev_name(ep->next->uport_dev) : 969 913 dev_name(&cxlmd->dev) : 970 914 "none", 971 915 cxl_rr->nr_eps, cxl_rr->nr_targets); ··· 1040 984 */ 1041 985 if (pos < distance) { 1042 986 dev_dbg(&cxlr->dev, "%s:%s: cannot host %s:%s at %d\n", 1043 - dev_name(port->uport), dev_name(&port->dev), 987 + dev_name(port->uport_dev), dev_name(&port->dev), 1044 988 dev_name(&cxlmd->dev), dev_name(&cxled->cxld.dev), pos); 1045 989 return -ENXIO; 1046 990 } ··· 1050 994 if (ep->dport != ep_peer->dport) { 1051 995 dev_dbg(&cxlr->dev, 1052 996 "%s:%s: %s:%s pos %d mismatched peer %s:%s\n", 1053 - dev_name(port->uport), dev_name(&port->dev), 997 + dev_name(port->uport_dev), dev_name(&port->dev), 1054 998 dev_name(&cxlmd->dev), dev_name(&cxled->cxld.dev), pos, 1055 999 dev_name(&cxlmd_peer->dev), 1056 1000 dev_name(&cxled_peer->cxld.dev)); ··· 1082 1026 */ 1083 1027 if (!is_power_of_2(cxl_rr->nr_targets)) { 1084 1028 dev_dbg(&cxlr->dev, "%s:%s: invalid target count %d\n", 1085 - dev_name(port->uport), dev_name(&port->dev), 1029 + dev_name(port->uport_dev), dev_name(&port->dev), 1086 1030 cxl_rr->nr_targets); 1087 1031 return -EINVAL; 1088 1032 } ··· 1132 1076 rc = granularity_to_eig(parent_ig, &peig); 1133 1077 if (rc) { 1134 1078 dev_dbg(&cxlr->dev, "%s:%s: invalid parent granularity: %d\n", 1135 - dev_name(parent_port->uport), 1079 + dev_name(parent_port->uport_dev), 1136 1080 dev_name(&parent_port->dev), parent_ig); 1137 1081 return rc; 1138 1082 } ··· 1140 1084 rc = ways_to_eiw(parent_iw, &peiw); 1141 1085 if (rc) { 1142 1086 dev_dbg(&cxlr->dev, "%s:%s: invalid parent interleave: %d\n", 1143 - dev_name(parent_port->uport), 1087 + dev_name(parent_port->uport_dev), 1144 1088 dev_name(&parent_port->dev), parent_iw); 1145 1089 return rc; 1146 1090 } ··· 1149 1093 rc = ways_to_eiw(iw, &eiw); 1150 1094 if (rc) { 1151 1095 dev_dbg(&cxlr->dev, "%s:%s: invalid port interleave: %d\n", 1152 - dev_name(port->uport), dev_name(&port->dev), iw); 1096 + dev_name(port->uport_dev), dev_name(&port->dev), iw); 1153 1097 return rc; 1154 1098 } 1155 1099 ··· 1169 1113 rc = eig_to_granularity(eig, &ig); 1170 1114 if (rc) { 1171 1115 dev_dbg(&cxlr->dev, "%s:%s: invalid interleave: %d\n", 1172 - dev_name(port->uport), dev_name(&port->dev), 1116 + dev_name(port->uport_dev), dev_name(&port->dev), 1173 1117 256 << eig); 1174 1118 return rc; 1175 1119 } ··· 1182 1126 ((cxld->flags & CXL_DECODER_F_ENABLE) == 0)) { 1183 1127 dev_err(&cxlr->dev, 1184 1128 "%s:%s %s expected iw: %d ig: %d %pr\n", 1185 - dev_name(port->uport), dev_name(&port->dev), 1129 + dev_name(port->uport_dev), dev_name(&port->dev), 1186 1130 __func__, iw, ig, p->res); 1187 1131 dev_err(&cxlr->dev, 1188 1132 "%s:%s %s got iw: %d ig: %d state: %s %#llx:%#llx\n", 1189 - dev_name(port->uport), dev_name(&port->dev), 1133 + 
dev_name(port->uport_dev), dev_name(&port->dev), 1190 1134 __func__, cxld->interleave_ways, 1191 1135 cxld->interleave_granularity, 1192 1136 (cxld->flags & CXL_DECODER_F_ENABLE) ? ··· 1203 1147 .end = p->res->end, 1204 1148 }; 1205 1149 } 1206 - dev_dbg(&cxlr->dev, "%s:%s iw: %d ig: %d\n", dev_name(port->uport), 1150 + dev_dbg(&cxlr->dev, "%s:%s iw: %d ig: %d\n", dev_name(port->uport_dev), 1207 1151 dev_name(&port->dev), iw, ig); 1208 1152 add_target: 1209 1153 if (cxl_rr->nr_targets_set == cxl_rr->nr_targets) { 1210 1154 dev_dbg(&cxlr->dev, 1211 1155 "%s:%s: targets full trying to add %s:%s at %d\n", 1212 - dev_name(port->uport), dev_name(&port->dev), 1156 + dev_name(port->uport_dev), dev_name(&port->dev), 1213 1157 dev_name(&cxlmd->dev), dev_name(&cxled->cxld.dev), pos); 1214 1158 return -ENXIO; 1215 1159 } 1216 1160 if (test_bit(CXL_REGION_F_AUTO, &cxlr->flags)) { 1217 1161 if (cxlsd->target[cxl_rr->nr_targets_set] != ep->dport) { 1218 1162 dev_dbg(&cxlr->dev, "%s:%s: %s expected %s at %d\n", 1219 - dev_name(port->uport), dev_name(&port->dev), 1163 + dev_name(port->uport_dev), dev_name(&port->dev), 1220 1164 dev_name(&cxlsd->cxld.dev), 1221 - dev_name(ep->dport->dport), 1165 + dev_name(ep->dport->dport_dev), 1222 1166 cxl_rr->nr_targets_set); 1223 1167 return -ENXIO; 1224 1168 } ··· 1228 1172 out_target_set: 1229 1173 cxl_rr->nr_targets_set += inc; 1230 1174 dev_dbg(&cxlr->dev, "%s:%s target[%d] = %s for %s:%s @ %d\n", 1231 - dev_name(port->uport), dev_name(&port->dev), 1232 - cxl_rr->nr_targets_set - 1, dev_name(ep->dport->dport), 1175 + dev_name(port->uport_dev), dev_name(&port->dev), 1176 + cxl_rr->nr_targets_set - 1, dev_name(ep->dport->dport_dev), 1233 1177 dev_name(&cxlmd->dev), dev_name(&cxled->cxld.dev), pos); 1234 1178 1235 1179 return 0; ··· 1548 1492 if (!dev) { 1549 1493 struct range *range = &cxled_a->cxld.hpa_range; 1550 1494 1551 - dev_err(port->uport, 1495 + dev_err(port->uport_dev, 1552 1496 "failed to find decoder that maps %#llx-%#llx\n", 1553 1497 range->start, range->end); 1554 1498 goto err; ··· 1563 1507 put_device(dev); 1564 1508 1565 1509 if (a_pos < 0 || b_pos < 0) { 1566 - dev_err(port->uport, 1510 + dev_err(port->uport_dev, 1567 1511 "failed to find shared decoder for %s and %s\n", 1568 1512 dev_name(cxlmd_a->dev.parent), 1569 1513 dev_name(cxlmd_b->dev.parent)); 1570 1514 goto err; 1571 1515 } 1572 1516 1573 - dev_dbg(port->uport, "%s comes %s %s\n", dev_name(cxlmd_a->dev.parent), 1517 + dev_dbg(port->uport_dev, "%s comes %s %s\n", 1518 + dev_name(cxlmd_a->dev.parent), 1574 1519 a_pos - b_pos < 0 ? 
"before" : "after", 1575 1520 dev_name(cxlmd_b->dev.parent)); 1576 1521 ··· 1731 1674 if (rc) 1732 1675 goto err_decrement; 1733 1676 p->state = CXL_CONFIG_ACTIVE; 1734 - set_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags); 1735 1677 } 1736 1678 1737 1679 cxled->cxld.interleave_ways = p->interleave_ways; ··· 2115 2059 if (rc) 2116 2060 goto err; 2117 2061 2118 - rc = devm_add_action_or_reset(port->uport, unregister_region, cxlr); 2062 + rc = devm_add_action_or_reset(port->uport_dev, unregister_region, cxlr); 2119 2063 if (rc) 2120 2064 return ERR_PTR(rc); 2121 2065 2122 - dev_dbg(port->uport, "%s: created %s\n", 2066 + dev_dbg(port->uport_dev, "%s: created %s\n", 2123 2067 dev_name(&cxlrd->cxlsd.cxld.dev), dev_name(dev)); 2124 2068 return cxlr; 2125 2069 ··· 2159 2103 return ERR_PTR(-EBUSY); 2160 2104 } 2161 2105 2162 - return devm_cxl_add_region(cxlrd, id, mode, CXL_DECODER_EXPANDER); 2106 + return devm_cxl_add_region(cxlrd, id, mode, CXL_DECODER_HOSTONLYMEM); 2163 2107 } 2164 2108 2165 2109 static ssize_t create_pmem_region_store(struct device *dev, ··· 2247 2191 if (IS_ERR(cxlr)) 2248 2192 return PTR_ERR(cxlr); 2249 2193 2250 - devm_release_action(port->uport, unregister_region, cxlr); 2194 + devm_release_action(port->uport_dev, unregister_region, cxlr); 2251 2195 put_device(&cxlr->dev); 2252 2196 2253 2197 return len; ··· 2412 2356 2413 2357 rc = device_for_each_child(&port->dev, &ctx, poison_by_decoder); 2414 2358 if (rc == 1) 2415 - rc = cxl_get_poison_unmapped(to_cxl_memdev(port->uport), &ctx); 2359 + rc = cxl_get_poison_unmapped(to_cxl_memdev(port->uport_dev), 2360 + &ctx); 2416 2361 2417 2362 up_read(&cxl_region_rwsem); 2418 2363 return rc; ··· 2789 2732 2790 2733 err: 2791 2734 up_write(&cxl_region_rwsem); 2792 - devm_release_action(port->uport, unregister_region, cxlr); 2735 + devm_release_action(port->uport_dev, unregister_region, cxlr); 2793 2736 return ERR_PTR(rc); 2794 2737 } 2795 2738 ··· 2860 2803 } 2861 2804 EXPORT_SYMBOL_NS_GPL(cxl_add_to_region, CXL); 2862 2805 2863 - static int cxl_region_invalidate_memregion(struct cxl_region *cxlr) 2864 - { 2865 - if (!test_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags)) 2866 - return 0; 2867 - 2868 - if (!cpu_cache_has_invalidate_memregion()) { 2869 - if (IS_ENABLED(CONFIG_CXL_REGION_INVALIDATION_TEST)) { 2870 - dev_warn_once( 2871 - &cxlr->dev, 2872 - "Bypassing cpu_cache_invalidate_memregion() for testing!\n"); 2873 - clear_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags); 2874 - return 0; 2875 - } else { 2876 - dev_err(&cxlr->dev, 2877 - "Failed to synchronize CPU cache state\n"); 2878 - return -ENXIO; 2879 - } 2880 - } 2881 - 2882 - cpu_cache_invalidate_memregion(IORES_DESC_CXL); 2883 - clear_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags); 2884 - return 0; 2885 - } 2886 - 2887 2806 static int is_system_ram(struct resource *res, void *arg) 2888 2807 { 2889 2808 struct cxl_region *cxlr = arg; ··· 2887 2854 goto out; 2888 2855 } 2889 2856 2890 - rc = cxl_region_invalidate_memregion(cxlr); 2857 + if (test_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags)) { 2858 + dev_err(&cxlr->dev, 2859 + "failed to activate, re-commit region and retry\n"); 2860 + rc = -ENXIO; 2861 + goto out; 2862 + } 2891 2863 2892 2864 /* 2893 2865 * From this point on any path that changes the region's state away from
+164 -18
drivers/cxl/core/regs.c
··· 6 6 #include <linux/pci.h> 7 7 #include <cxlmem.h> 8 8 #include <cxlpci.h> 9 + #include <pmu.h> 9 10 10 11 #include "core.h" 11 12 ··· 200 199 return ret_val; 201 200 } 202 201 203 - int cxl_map_component_regs(struct device *dev, struct cxl_component_regs *regs, 204 - struct cxl_register_map *map, unsigned long map_mask) 202 + int cxl_map_component_regs(const struct cxl_register_map *map, 203 + struct cxl_component_regs *regs, 204 + unsigned long map_mask) 205 205 { 206 + struct device *dev = map->dev; 206 207 struct mapinfo { 207 - struct cxl_reg_map *rmap; 208 + const struct cxl_reg_map *rmap; 208 209 void __iomem **addr; 209 210 } mapinfo[] = { 210 211 { &map->component_map.hdm_decoder, &regs->hdm_decoder }, ··· 234 231 } 235 232 EXPORT_SYMBOL_NS_GPL(cxl_map_component_regs, CXL); 236 233 237 - int cxl_map_device_regs(struct device *dev, 238 - struct cxl_device_regs *regs, 239 - struct cxl_register_map *map) 234 + int cxl_map_device_regs(const struct cxl_register_map *map, 235 + struct cxl_device_regs *regs) 240 236 { 237 + struct device *dev = map->dev; 241 238 resource_size_t phys_addr = map->resource; 242 239 struct mapinfo { 243 - struct cxl_reg_map *rmap; 240 + const struct cxl_reg_map *rmap; 244 241 void __iomem **addr; 245 242 } mapinfo[] = { 246 243 { &map->device_map.status, &regs->status, }, ··· 289 286 } 290 287 291 288 /** 292 - * cxl_find_regblock() - Locate register blocks by type 289 + * cxl_find_regblock_instance() - Locate a register block by type / index 293 290 * @pdev: The CXL PCI device to enumerate. 294 291 * @type: Register Block Indicator id 295 292 * @map: Enumeration output, clobbered on error 293 + * @index: Index of the particular regblock instance wanted, in the 294 + * order found in the register locator DVSEC. 296 295 * 297 296 * Return: 0 if register block enumerated, negative error code otherwise 298 297 * 299 298 * A CXL DVSEC may point to one or more register blocks, search for them 300 - * by @type. 299 + * by @type and @index. 301 300 */ 302 - int cxl_find_regblock(struct pci_dev *pdev, enum cxl_regloc_type type, 303 - struct cxl_register_map *map) 301 + int cxl_find_regblock_instance(struct pci_dev *pdev, enum cxl_regloc_type type, 302 + struct cxl_register_map *map, int index) 304 303 { 305 304 u32 regloc_size, regblocks; 305 + int instance = 0; 306 306 int regloc, i; 307 307 308 - map->resource = CXL_RESOURCE_NONE; 308 + *map = (struct cxl_register_map) { 309 + .dev = &pdev->dev, 310 + .resource = CXL_RESOURCE_NONE, 311 + }; 312 + 309 313 regloc = pci_find_dvsec_capability(pdev, PCI_DVSEC_VENDOR_ID_CXL, 310 314 CXL_DVSEC_REG_LOCATOR); 311 315 if (!regloc) ··· 333 323 if (!cxl_decode_regblock(pdev, reg_lo, reg_hi, map)) 334 324 continue; 335 325 336 - if (map->reg_type == type) 337 - return 0; 326 + if (map->reg_type == type) { 327 + if (index == instance) 328 + return 0; 329 + instance++; 330 + } 338 331 } 339 332 340 333 map->resource = CXL_RESOURCE_NONE; 341 334 return -ENODEV; 342 335 } 336 + EXPORT_SYMBOL_NS_GPL(cxl_find_regblock_instance, CXL); 337 + 338 + /** 339 + * cxl_find_regblock() - Locate register blocks by type 340 + * @pdev: The CXL PCI device to enumerate. 341 + * @type: Register Block Indicator id 342 + * @map: Enumeration output, clobbered on error 343 + * 344 + * Return: 0 if register block enumerated, negative error code otherwise 345 + * 346 + * A CXL DVSEC may point to one or more register blocks, search for them 347 + * by @type. 
348 + */ 349 + int cxl_find_regblock(struct pci_dev *pdev, enum cxl_regloc_type type, 350 + struct cxl_register_map *map) 351 + { 352 + return cxl_find_regblock_instance(pdev, type, map, 0); 353 + } 343 354 EXPORT_SYMBOL_NS_GPL(cxl_find_regblock, CXL); 344 355 345 - resource_size_t cxl_rcrb_to_component(struct device *dev, 346 - resource_size_t rcrb, 347 - enum cxl_rcrb which) 356 + /** 357 + * cxl_count_regblock() - Count instances of a given regblock type. 358 + * @pdev: The CXL PCI device to enumerate. 359 + * @type: Register Block Indicator id 360 + * 361 + * Some regblocks may be repeated. Count how many instances. 362 + * 363 + * Return: count of matching regblocks. 364 + */ 365 + int cxl_count_regblock(struct pci_dev *pdev, enum cxl_regloc_type type) 366 + { 367 + struct cxl_register_map map; 368 + int rc, count = 0; 369 + 370 + while (1) { 371 + rc = cxl_find_regblock_instance(pdev, type, &map, count); 372 + if (rc) 373 + return count; 374 + count++; 375 + } 376 + } 377 + EXPORT_SYMBOL_NS_GPL(cxl_count_regblock, CXL); 378 + 379 + int cxl_map_pmu_regs(struct pci_dev *pdev, struct cxl_pmu_regs *regs, 380 + struct cxl_register_map *map) 381 + { 382 + struct device *dev = &pdev->dev; 383 + resource_size_t phys_addr; 384 + 385 + phys_addr = map->resource; 386 + regs->pmu = devm_cxl_iomap_block(dev, phys_addr, CXL_PMU_REGMAP_SIZE); 387 + if (!regs->pmu) 388 + return -ENOMEM; 389 + 390 + return 0; 391 + } 392 + EXPORT_SYMBOL_NS_GPL(cxl_map_pmu_regs, CXL); 393 + 394 + static int cxl_map_regblock(struct cxl_register_map *map) 395 + { 396 + struct device *dev = map->dev; 397 + 398 + map->base = ioremap(map->resource, map->max_size); 399 + if (!map->base) { 400 + dev_err(dev, "failed to map registers\n"); 401 + return -ENOMEM; 402 + } 403 + 404 + dev_dbg(dev, "Mapped CXL Memory Device resource %pa\n", &map->resource); 405 + return 0; 406 + } 407 + 408 + static void cxl_unmap_regblock(struct cxl_register_map *map) 409 + { 410 + iounmap(map->base); 411 + map->base = NULL; 412 + } 413 + 414 + static int cxl_probe_regs(struct cxl_register_map *map) 415 + { 416 + struct cxl_component_reg_map *comp_map; 417 + struct cxl_device_reg_map *dev_map; 418 + struct device *dev = map->dev; 419 + void __iomem *base = map->base; 420 + 421 + switch (map->reg_type) { 422 + case CXL_REGLOC_RBI_COMPONENT: 423 + comp_map = &map->component_map; 424 + cxl_probe_component_regs(dev, base, comp_map); 425 + dev_dbg(dev, "Set up component registers\n"); 426 + break; 427 + case CXL_REGLOC_RBI_MEMDEV: 428 + dev_map = &map->device_map; 429 + cxl_probe_device_regs(dev, base, dev_map); 430 + if (!dev_map->status.valid || !dev_map->mbox.valid || 431 + !dev_map->memdev.valid) { 432 + dev_err(dev, "registers not found: %s%s%s\n", 433 + !dev_map->status.valid ? "status " : "", 434 + !dev_map->mbox.valid ? "mbox " : "", 435 + !dev_map->memdev.valid ? 
"memdev " : ""); 436 + return -ENXIO; 437 + } 438 + 439 + dev_dbg(dev, "Probing device registers...\n"); 440 + break; 441 + default: 442 + break; 443 + } 444 + 445 + return 0; 446 + } 447 + 448 + int cxl_setup_regs(struct cxl_register_map *map) 449 + { 450 + int rc; 451 + 452 + rc = cxl_map_regblock(map); 453 + if (rc) 454 + return rc; 455 + 456 + rc = cxl_probe_regs(map); 457 + cxl_unmap_regblock(map); 458 + 459 + return rc; 460 + } 461 + EXPORT_SYMBOL_NS_GPL(cxl_setup_regs, CXL); 462 + 463 + resource_size_t __rcrb_to_component(struct device *dev, struct cxl_rcrb_info *ri, 464 + enum cxl_rcrb which) 348 465 { 349 466 resource_size_t component_reg_phys; 467 + resource_size_t rcrb = ri->base; 350 468 void __iomem *addr; 351 469 u32 bar0, bar1; 352 470 u16 cmd; ··· 533 395 534 396 return component_reg_phys; 535 397 } 536 - EXPORT_SYMBOL_NS_GPL(cxl_rcrb_to_component, CXL); 398 + 399 + resource_size_t cxl_rcd_component_reg_phys(struct device *dev, 400 + struct cxl_dport *dport) 401 + { 402 + if (!dport->rch) 403 + return CXL_RESOURCE_NONE; 404 + return __rcrb_to_component(dev, &dport->rcrb, CXL_RCRB_UPSTREAM); 405 + } 406 + EXPORT_SYMBOL_NS_GPL(cxl_rcd_component_reg_phys, CXL);
+68 -36
drivers/cxl/cxl.h
··· 56 56 #define CXL_HDM_DECODER0_CTRL_COMMIT BIT(9) 57 57 #define CXL_HDM_DECODER0_CTRL_COMMITTED BIT(10) 58 58 #define CXL_HDM_DECODER0_CTRL_COMMIT_ERROR BIT(11) 59 - #define CXL_HDM_DECODER0_CTRL_TYPE BIT(12) 59 + #define CXL_HDM_DECODER0_CTRL_HOSTONLY BIT(12) 60 60 #define CXL_HDM_DECODER0_TL_LOW(i) (0x20 * (i) + 0x24) 61 61 #define CXL_HDM_DECODER0_TL_HIGH(i) (0x20 * (i) + 0x28) 62 62 #define CXL_HDM_DECODER0_SKIP_LOW(i) CXL_HDM_DECODER0_TL_LOW(i) ··· 176 176 /* CXL 2.0 8.2.8.4 Mailbox Registers */ 177 177 #define CXLDEV_MBOX_CAPS_OFFSET 0x00 178 178 #define CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK GENMASK(4, 0) 179 + #define CXLDEV_MBOX_CAP_BG_CMD_IRQ BIT(6) 180 + #define CXLDEV_MBOX_CAP_IRQ_MSGNUM_MASK GENMASK(10, 7) 179 181 #define CXLDEV_MBOX_CTRL_OFFSET 0x04 180 182 #define CXLDEV_MBOX_CTRL_DOORBELL BIT(0) 183 + #define CXLDEV_MBOX_CTRL_BG_CMD_IRQ BIT(2) 181 184 #define CXLDEV_MBOX_CMD_OFFSET 0x08 182 185 #define CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK GENMASK_ULL(15, 0) 183 186 #define CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK GENMASK_ULL(36, 16) 184 187 #define CXLDEV_MBOX_STATUS_OFFSET 0x10 188 + #define CXLDEV_MBOX_STATUS_BG_CMD BIT(0) 185 189 #define CXLDEV_MBOX_STATUS_RET_CODE_MASK GENMASK_ULL(47, 32) 186 190 #define CXLDEV_MBOX_BG_CMD_STATUS_OFFSET 0x18 191 + #define CXLDEV_MBOX_BG_CMD_COMMAND_OPCODE_MASK GENMASK_ULL(15, 0) 192 + #define CXLDEV_MBOX_BG_CMD_COMMAND_PCT_MASK GENMASK_ULL(22, 16) 193 + #define CXLDEV_MBOX_BG_CMD_COMMAND_RC_MASK GENMASK_ULL(47, 32) 194 + #define CXLDEV_MBOX_BG_CMD_COMMAND_VENDOR_MASK GENMASK_ULL(63, 48) 187 195 #define CXLDEV_MBOX_PAYLOAD_OFFSET 0x20 188 196 189 197 /* ··· 217 209 struct_group_tagged(cxl_device_regs, device_regs, 218 210 void __iomem *status, *mbox, *memdev; 219 211 ); 212 + 213 + struct_group_tagged(cxl_pmu_regs, pmu_regs, 214 + void __iomem *pmu; 215 + ); 220 216 }; 221 217 222 218 struct cxl_reg_map { ··· 241 229 struct cxl_reg_map memdev; 242 230 }; 243 231 232 + struct cxl_pmu_reg_map { 233 + struct cxl_reg_map pmu; 234 + }; 235 + 244 236 /** 245 237 * struct cxl_register_map - DVSEC harvested register block mapping parameters 238 + * @dev: device for devm operations and logging 246 239 * @base: virtual base of the register-block-BAR + @block_offset 247 240 * @resource: physical resource base of the register block 248 241 * @max_size: maximum mapping size to perform register search 249 242 * @reg_type: see enum cxl_regloc_type 250 243 * @component_map: cxl_reg_map for component registers 251 244 * @device_map: cxl_reg_maps for device registers 245 + * @pmu_map: cxl_reg_maps for CXL Performance Monitoring Units 252 246 */ 253 247 struct cxl_register_map { 248 + struct device *dev; 254 249 void __iomem *base; 255 250 resource_size_t resource; 256 251 resource_size_t max_size; ··· 265 246 union { 266 247 struct cxl_component_reg_map component_map; 267 248 struct cxl_device_reg_map device_map; 249 + struct cxl_pmu_reg_map pmu_map; 268 250 }; 269 251 }; 270 252 ··· 273 253 struct cxl_component_reg_map *map); 274 254 void cxl_probe_device_regs(struct device *dev, void __iomem *base, 275 255 struct cxl_device_reg_map *map); 276 - int cxl_map_component_regs(struct device *dev, struct cxl_component_regs *regs, 277 - struct cxl_register_map *map, 256 + int cxl_map_component_regs(const struct cxl_register_map *map, 257 + struct cxl_component_regs *regs, 278 258 unsigned long map_mask); 279 - int cxl_map_device_regs(struct device *dev, struct cxl_device_regs *regs, 280 - struct cxl_register_map *map); 259 + int cxl_map_device_regs(const struct 
cxl_register_map *map, 260 + struct cxl_device_regs *regs); 261 + int cxl_map_pmu_regs(struct pci_dev *pdev, struct cxl_pmu_regs *regs, 262 + struct cxl_register_map *map); 281 263 282 264 enum cxl_regloc_type; 265 + int cxl_count_regblock(struct pci_dev *pdev, enum cxl_regloc_type type); 266 + int cxl_find_regblock_instance(struct pci_dev *pdev, enum cxl_regloc_type type, 267 + struct cxl_register_map *map, int index); 283 268 int cxl_find_regblock(struct pci_dev *pdev, enum cxl_regloc_type type, 284 269 struct cxl_register_map *map); 285 - 286 - enum cxl_rcrb { 287 - CXL_RCRB_DOWNSTREAM, 288 - CXL_RCRB_UPSTREAM, 289 - }; 290 - resource_size_t cxl_rcrb_to_component(struct device *dev, 291 - resource_size_t rcrb, 292 - enum cxl_rcrb which); 270 + int cxl_setup_regs(struct cxl_register_map *map); 271 + struct cxl_dport; 272 + resource_size_t cxl_rcd_component_reg_phys(struct device *dev, 273 + struct cxl_dport *dport); 293 274 294 275 #define CXL_RESOURCE_NONE ((resource_size_t) -1) 295 276 #define CXL_TARGET_STRLEN 20 ··· 311 290 #define CXL_DECODER_F_MASK GENMASK(5, 0) 312 291 313 292 enum cxl_decoder_type { 314 - CXL_DECODER_ACCELERATOR = 2, 315 - CXL_DECODER_EXPANDER = 3, 293 + CXL_DECODER_DEVMEM = 2, 294 + CXL_DECODER_HOSTONLYMEM = 3, 316 295 }; 317 296 318 297 /* ··· 484 463 }; 485 464 486 465 /* 487 - * Flag whether this region needs to have its HPA span synchronized with 488 - * CPU cache state at region activation time. 489 - */ 490 - #define CXL_REGION_F_INCOHERENT 0 491 - 492 - /* 493 466 * Indicate whether this region has been assembled by autodetection or 494 467 * userspace assembly. Prevent endpoint decoders outside of automatic 495 468 * detection from being added to the region. 496 469 */ 497 - #define CXL_REGION_F_AUTO 1 470 + #define CXL_REGION_F_AUTO 0 471 + 472 + /* 473 + * Require that a committed region successfully complete a teardown once 474 + * any of its associated decoders have been torn down. This maintains 475 + * the commit state for the region since there are committed decoders, 476 + * but blocks cxl_region_probe(). 477 + */ 478 + #define CXL_REGION_F_NEEDS_RESET 1 498 479 499 480 /** 500 481 * struct cxl_region - CXL region ··· 564 541 * downstream port devices to construct a CXL memory 565 542 * decode hierarchy. 
566 543 * @dev: this port's device 567 - * @uport: PCI or platform device implementing the upstream port capability 544 + * @uport_dev: PCI or platform device implementing the upstream port capability 568 545 * @host_bridge: Shortcut to the platform attach point for this port 569 546 * @id: id for port device-name 570 547 * @dports: cxl_dport instances referenced by decoders ··· 572 549 * @regions: cxl_region_ref instances, regions mapped by this port 573 550 * @parent_dport: dport that points to this port in the parent 574 551 * @decoder_ida: allocator for decoder ids 552 + * @comp_map: component register capability mappings 575 553 * @nr_dports: number of entries in @dports 576 554 * @hdm_end: track last allocated HDM decoder instance for allocation ordering 577 555 * @commit_end: cursor to track highest committed decoder for commit ordering ··· 584 560 */ 585 561 struct cxl_port { 586 562 struct device dev; 587 - struct device *uport; 563 + struct device *uport_dev; 588 564 struct device *host_bridge; 589 565 int id; 590 566 struct xarray dports; ··· 592 568 struct xarray regions; 593 569 struct cxl_dport *parent_dport; 594 570 struct ida decoder_ida; 571 + struct cxl_register_map comp_map; 595 572 int nr_dports; 596 573 int hdm_end; 597 574 int commit_end; ··· 612 587 return xa_load(&port->dports, (unsigned long)dport_dev); 613 588 } 614 589 590 + struct cxl_rcrb_info { 591 + resource_size_t base; 592 + u16 aer_cap; 593 + }; 594 + 615 595 /** 616 596 * struct cxl_dport - CXL downstream port 617 - * @dport: PCI bridge or firmware device representing the downstream link 597 + * @dport_dev: PCI bridge or firmware device representing the downstream link 598 + * @comp_map: component register capability mappings 618 599 * @port_id: unique hardware identifier for dport in decoder target list 619 - * @component_reg_phys: downstream port component registers 620 - * @rcrb: base address for the Root Complex Register Block 600 + * @rcrb: Data about the Root Complex Register Block layout 621 601 * @rch: Indicate whether this dport was enumerated in RCH or VH mode 622 602 * @port: reference to cxl_port that contains this downstream port 623 603 */ 624 604 struct cxl_dport { 625 - struct device *dport; 605 + struct device *dport_dev; 606 + struct cxl_register_map comp_map; 626 607 int port_id; 627 - resource_size_t component_reg_phys; 628 - resource_size_t rcrb; 608 + struct cxl_rcrb_info rcrb; 629 609 bool rch; 630 610 struct cxl_port *port; 631 611 }; ··· 671 641 /* 672 642 * The platform firmware device hosting the root is also the top of the 673 643 * CXL port topology. All other CXL ports have another CXL port as their 674 - * parent and their ->uport / host device is out-of-line of the port 644 + * parent and their ->uport_dev / host device is out-of-line of the port 675 645 * ancestry. 
676 646 */ 677 647 static inline bool is_cxl_root(struct cxl_port *port) 678 648 { 679 - return port->uport == port->dev.parent; 649 + return port->uport_dev == port->dev.parent; 680 650 } 681 651 682 652 bool is_cxl_port(const struct device *dev); 683 653 struct cxl_port *to_cxl_port(const struct device *dev); 684 654 struct pci_bus; 685 - int devm_cxl_register_pci_bus(struct device *host, struct device *uport, 655 + int devm_cxl_register_pci_bus(struct device *host, struct device *uport_dev, 686 656 struct pci_bus *bus); 687 657 struct pci_bus *cxl_port_to_pci_bus(struct cxl_port *port); 688 - struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport, 658 + struct cxl_port *devm_cxl_add_port(struct device *host, 659 + struct device *uport_dev, 689 660 resource_size_t component_reg_phys, 690 661 struct cxl_dport *parent_dport); 691 662 struct cxl_port *find_cxl_root(struct cxl_port *port); 692 663 int devm_cxl_enumerate_ports(struct cxl_memdev *cxlmd); 693 664 void cxl_bus_rescan(void); 694 665 void cxl_bus_drain(void); 666 + struct cxl_port *cxl_pci_find_port(struct pci_dev *pdev, 667 + struct cxl_dport **dport); 695 668 struct cxl_port *cxl_mem_find_port(struct cxl_memdev *cxlmd, 696 669 struct cxl_dport **dport); 697 670 bool schedule_cxl_memdev_detach(struct cxl_memdev *cxlmd); ··· 704 671 resource_size_t component_reg_phys); 705 672 struct cxl_dport *devm_cxl_add_rch_dport(struct cxl_port *port, 706 673 struct device *dport_dev, int port_id, 707 - resource_size_t component_reg_phys, 708 674 resource_size_t rcrb); 709 675 710 676 struct cxl_decoder *to_cxl_decoder(struct device *dev); ··· 742 710 struct cxl_hdm; 743 711 struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port, 744 712 struct cxl_endpoint_dvsec_info *info); 745 - int devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm); 746 713 int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm, 747 714 struct cxl_endpoint_dvsec_info *info); 748 715 int devm_cxl_add_passthrough_decoder(struct cxl_port *port); ··· 781 750 #define CXL_DEVICE_REGION 6 782 751 #define CXL_DEVICE_PMEM_REGION 7 783 752 #define CXL_DEVICE_DAX_REGION 8 753 + #define CXL_DEVICE_PMU 9 784 754 785 755 #define MODULE_ALIAS_CXL(type) MODULE_ALIAS("cxl:t" __stringify(type) "*") 786 756 #define CXL_MODALIAS_FMT "cxl:t%d"
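The new CXLDEV_MBOX_BG_CMD_COMMAND_* masks above describe how the 64-bit Background Command Status register packs the opcode of the in-flight command, its percent-complete, its return code, and a vendor-specific status. A minimal standalone sketch of that layout, with plain shifts standing in for the kernel's GENMASK_ULL()/FIELD_GET() helpers and a made-up register value:

#include <stdint.h>
#include <stdio.h>

/* extract bits [hi:lo] of a 64-bit register read */
static uint64_t field(uint64_t reg, unsigned int hi, unsigned int lo)
{
    return (reg << (63 - hi)) >> (63 - hi + lo);
}

int main(void)
{
    /* hypothetical raw read of CXLDEV_MBOX_BG_CMD_STATUS_OFFSET */
    uint64_t bg = 0x644400; /* decodes as opcode 0x4400 (Sanitize) at 100% */

    printf("opcode  : 0x%04llx\n", (unsigned long long)field(bg, 15, 0));
    printf("percent : %llu%%\n",   (unsigned long long)field(bg, 22, 16));
    printf("retcode : 0x%04llx\n", (unsigned long long)field(bg, 47, 32));
    printf("vendor  : 0x%04llx\n", (unsigned long long)field(bg, 63, 48));
    return 0;
}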
+188 -41
drivers/cxl/cxlmem.h
··· 5 5 #include <uapi/linux/cxl_mem.h> 6 6 #include <linux/cdev.h> 7 7 #include <linux/uuid.h> 8 + #include <linux/rcuwait.h> 8 9 #include "cxl.h" 9 10 10 11 /* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */ ··· 39 38 * @detach_work: active memdev lost a port in its ancestry 40 39 * @cxl_nvb: coordinate removal of @cxl_nvd if present 41 40 * @cxl_nvd: optional bridge to an nvdimm if the device supports pmem 41 + * @endpoint: connection to the CXL port topology for this memory device 42 42 * @id: id number of this memdev instance. 43 43 * @depth: endpoint port depth 44 44 */ ··· 50 48 struct work_struct detach_work; 51 49 struct cxl_nvdimm_bridge *cxl_nvb; 52 50 struct cxl_nvdimm *cxl_nvd; 51 + struct cxl_port *endpoint; 53 52 int id; 54 53 int depth; 55 54 }; ··· 75 72 { 76 73 struct cxl_port *port = to_cxl_port(cxled->cxld.dev.parent); 77 74 78 - return to_cxl_memdev(port->uport); 75 + return to_cxl_memdev(port->uport_dev); 79 76 } 80 77 81 78 bool is_cxl_memdev(const struct device *dev); 82 79 static inline bool is_cxl_endpoint(struct cxl_port *port) 83 80 { 84 - return is_cxl_memdev(port->uport); 81 + return is_cxl_memdev(port->uport_dev); 85 82 } 86 83 87 84 struct cxl_memdev *devm_cxl_add_memdev(struct cxl_dev_state *cxlds); 85 + struct cxl_memdev_state; 86 + int cxl_memdev_setup_fw_upload(struct cxl_memdev_state *mds); 88 87 int devm_cxl_dpa_reserve(struct cxl_endpoint_decoder *cxled, 89 88 resource_size_t base, resource_size_t len, 90 89 resource_size_t skipped); ··· 113 108 * variable sized output commands, it tells the exact number of bytes 114 109 * written. 115 110 * @min_out: (input) internal command output payload size validation 111 + * @poll_count: (input) Number of timeouts to attempt. 112 + * @poll_interval_ms: (input) Time between mailbox background command polling 113 + * interval timeouts. 116 114 * @return_code: (output) Error code returned from hardware. 117 115 * 118 116 * This is the primary mechanism used to send commands to the hardware. 
··· 131 123 size_t size_in; 132 124 size_t size_out; 133 125 size_t min_out; 126 + int poll_count; 127 + int poll_interval_ms; 134 128 u16 return_code; 135 129 }; 136 130 ··· 205 195 */ 206 196 #define CXL_CAPACITY_MULTIPLIER SZ_256M 207 197 208 - /** 198 + /* 209 199 * Event Interrupt Policy 210 200 * 211 201 * CXL rev 3.0 section 8.2.9.2.4; Table 8-52 ··· 225 215 /** 226 216 * struct cxl_event_state - Event log driver state 227 217 * 228 - * @event_buf: Buffer to receive event data 229 - * @event_log_lock: Serialize event_buf and log use 218 + * @buf: Buffer to receive event data 219 + * @log_lock: Serialize event_buf and log use 230 220 */ 231 221 struct cxl_event_state { 232 222 struct cxl_get_event_payload *buf; ··· 264 254 struct mutex lock; /* Protect reads of poison list */ 265 255 }; 266 256 257 + /* 258 + * Get FW Info 259 + * CXL rev 3.0 section 8.2.9.3.1; Table 8-56 260 + */ 261 + struct cxl_mbox_get_fw_info { 262 + u8 num_slots; 263 + u8 slot_info; 264 + u8 activation_cap; 265 + u8 reserved[13]; 266 + char slot_1_revision[16]; 267 + char slot_2_revision[16]; 268 + char slot_3_revision[16]; 269 + char slot_4_revision[16]; 270 + } __packed; 271 + 272 + #define CXL_FW_INFO_SLOT_INFO_CUR_MASK GENMASK(2, 0) 273 + #define CXL_FW_INFO_SLOT_INFO_NEXT_MASK GENMASK(5, 3) 274 + #define CXL_FW_INFO_SLOT_INFO_NEXT_SHIFT 3 275 + #define CXL_FW_INFO_ACTIVATION_CAP_HAS_LIVE_ACTIVATE BIT(0) 276 + 277 + /* 278 + * Transfer FW Input Payload 279 + * CXL rev 3.0 section 8.2.9.3.2; Table 8-57 280 + */ 281 + struct cxl_mbox_transfer_fw { 282 + u8 action; 283 + u8 slot; 284 + u8 reserved[2]; 285 + __le32 offset; 286 + u8 reserved2[0x78]; 287 + u8 data[]; 288 + } __packed; 289 + 290 + #define CXL_FW_TRANSFER_ACTION_FULL 0x0 291 + #define CXL_FW_TRANSFER_ACTION_INITIATE 0x1 292 + #define CXL_FW_TRANSFER_ACTION_CONTINUE 0x2 293 + #define CXL_FW_TRANSFER_ACTION_END 0x3 294 + #define CXL_FW_TRANSFER_ACTION_ABORT 0x4 295 + 296 + /* 297 + * CXL rev 3.0 section 8.2.9.3.2 mandates 128-byte alignment for FW packages 298 + * and for each part transferred in a Transfer FW command. 
299 + */ 300 + #define CXL_FW_TRANSFER_ALIGNMENT 128 301 + 302 + /* 303 + * Activate FW Input Payload 304 + * CXL rev 3.0 section 8.2.9.3.3; Table 8-58 305 + */ 306 + struct cxl_mbox_activate_fw { 307 + u8 action; 308 + u8 slot; 309 + } __packed; 310 + 311 + #define CXL_FW_ACTIVATE_ONLINE 0x0 312 + #define CXL_FW_ACTIVATE_OFFLINE 0x1 313 + 314 + /* FW state bits */ 315 + #define CXL_FW_STATE_BITS 32 316 + #define CXL_FW_CANCEL BIT(0) 317 + 318 + /** 319 + * struct cxl_fw_state - Firmware upload / activation state 320 + * 321 + * @state: fw_uploader state bitmask 322 + * @oneshot: whether the fw upload fits in a single transfer 323 + * @num_slots: Number of FW slots available 324 + * @cur_slot: Slot number currently active 325 + * @next_slot: Slot number for the new firmware 326 + */ 327 + struct cxl_fw_state { 328 + DECLARE_BITMAP(state, CXL_FW_STATE_BITS); 329 + bool oneshot; 330 + int num_slots; 331 + int cur_slot; 332 + int next_slot; 333 + }; 334 + 335 + /** 336 + * struct cxl_security_state - Device security state 337 + * 338 + * @state: state of last security operation 339 + * @poll: polling for sanitization is enabled, device has no mbox irq support 340 + * @poll_tmo_secs: polling timeout 341 + * @poll_dwork: polling work item 342 + * @sanitize_node: sanitation sysfs file to notify 343 + */ 344 + struct cxl_security_state { 345 + unsigned long state; 346 + bool poll; 347 + int poll_tmo_secs; 348 + struct delayed_work poll_dwork; 349 + struct kernfs_node *sanitize_node; 350 + }; 351 + 352 + /* 353 + * enum cxl_devtype - delineate type-2 from a generic type-3 device 354 + * @CXL_DEVTYPE_DEVMEM - Vendor specific CXL Type-2 device implementing HDM-D or 355 + * HDM-DB, no requirement that this device implements a 356 + * mailbox, or other memory-device-standard manageability 357 + * flows. 
358 + * @CXL_DEVTYPE_CLASSMEM - Common class definition of a CXL Type-3 device with 359 + * HDM-H and class-mandatory memory device registers 360 + */ 361 + enum cxl_devtype { 362 + CXL_DEVTYPE_DEVMEM, 363 + CXL_DEVTYPE_CLASSMEM, 364 + }; 365 + 267 366 /** 268 367 * struct cxl_dev_state - The driver device state 269 368 * ··· 386 267 * @cxl_dvsec: Offset to the PCIe device DVSEC 387 268 * @rcd: operating in RCD mode (CXL 3.0 9.11.8 CXL Devices Attached to an RCH) 388 269 * @media_ready: Indicate whether the device media is usable 270 + * @dpa_res: Overall DPA resource tree for the device 271 + * @pmem_res: Active Persistent memory capacity configuration 272 + * @ram_res: Active Volatile memory capacity configuration 273 + * @component_reg_phys: register base of component registers 274 + * @serial: PCIe Device Serial Number 275 + * @type: Generic Memory Class device or Vendor Specific Memory device 276 + */ 277 + struct cxl_dev_state { 278 + struct device *dev; 279 + struct cxl_memdev *cxlmd; 280 + struct cxl_regs regs; 281 + int cxl_dvsec; 282 + bool rcd; 283 + bool media_ready; 284 + struct resource dpa_res; 285 + struct resource pmem_res; 286 + struct resource ram_res; 287 + resource_size_t component_reg_phys; 288 + u64 serial; 289 + enum cxl_devtype type; 290 + }; 291 + 292 + /** 293 + * struct cxl_memdev_state - Generic Type-3 Memory Device Class driver data 294 + * 295 + * CXL 8.1.12.1 PCI Header - Class Code Register Memory Device defines 296 + * common memory device functionality like the presence of a mailbox and 297 + * the functionality related to that like Identify Memory Device and Get 298 + * Partition Info 299 + * @cxlds: Core driver state common across Type-2 and Type-3 devices 389 300 * @payload_size: Size of space for payload 390 301 * (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register) 391 302 * @lsa_size: Size of Label Storage Area ··· 424 275 * @firmware_version: Firmware version for the memory device. 425 276 * @enabled_cmds: Hardware commands found enabled in CEL. 426 277 * @exclusive_cmds: Commands that are kernel-internal only 427 - * @dpa_res: Overall DPA resource tree for the device 428 - * @pmem_res: Active Persistent memory capacity configuration 429 - * @ram_res: Active Volatile memory capacity configuration 430 278 * @total_bytes: sum of all possible capacities 431 279 * @volatile_only_bytes: hard volatile capacity 432 280 * @persistent_only_bytes: hard persistent capacity ··· 432 286 * @active_persistent_bytes: sum of hard + soft persistent 433 287 * @next_volatile_bytes: volatile capacity change pending device reset 434 288 * @next_persistent_bytes: persistent capacity change pending device reset 435 - * @component_reg_phys: register base of component registers 436 - * @info: Cached DVSEC information about the device. 437 - * @serial: PCIe Device Serial Number 438 289 * @event: event log driver state 439 290 * @poison: poison driver state info 291 + * @fw: firmware upload / activation state 440 292 * @mbox_send: @dev specific transport for transmitting mailbox commands 441 293 * 442 - * See section 8.2.9.5.2 Capacity Configuration and Label Storage for 294 + * See CXL 3.0 8.2.9.8.2 Capacity Configuration and Label Storage for 443 295 * details on capacity parameters. 
444 296 */ 445 - struct cxl_dev_state { 446 - struct device *dev; 447 - struct cxl_memdev *cxlmd; 448 - 449 - struct cxl_regs regs; 450 - int cxl_dvsec; 451 - 452 - bool rcd; 453 - bool media_ready; 297 + struct cxl_memdev_state { 298 + struct cxl_dev_state cxlds; 454 299 size_t payload_size; 455 300 size_t lsa_size; 456 301 struct mutex mbox_mutex; /* Protects device mailbox and firmware */ 457 302 char firmware_version[0x10]; 458 303 DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX); 459 304 DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX); 460 - 461 - struct resource dpa_res; 462 - struct resource pmem_res; 463 - struct resource ram_res; 464 305 u64 total_bytes; 465 306 u64 volatile_only_bytes; 466 307 u64 persistent_only_bytes; 467 308 u64 partition_align_bytes; 468 - 469 309 u64 active_volatile_bytes; 470 310 u64 active_persistent_bytes; 471 311 u64 next_volatile_bytes; 472 312 u64 next_persistent_bytes; 473 - 474 - resource_size_t component_reg_phys; 475 - u64 serial; 476 - 477 313 struct cxl_event_state event; 478 314 struct cxl_poison_state poison; 315 + struct cxl_security_state security; 316 + struct cxl_fw_state fw; 479 317 480 - int (*mbox_send)(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd); 318 + struct rcuwait mbox_wait; 319 + int (*mbox_send)(struct cxl_memdev_state *mds, 320 + struct cxl_mbox_cmd *cmd); 481 321 }; 322 + 323 + static inline struct cxl_memdev_state * 324 + to_cxl_memdev_state(struct cxl_dev_state *cxlds) 325 + { 326 + if (cxlds->type != CXL_DEVTYPE_CLASSMEM) 327 + return NULL; 328 + return container_of(cxlds, struct cxl_memdev_state, cxlds); 329 + } 482 330 483 331 enum cxl_opcode { 484 332 CXL_MBOX_OP_INVALID = 0x0000, ··· 482 342 CXL_MBOX_OP_GET_EVT_INT_POLICY = 0x0102, 483 343 CXL_MBOX_OP_SET_EVT_INT_POLICY = 0x0103, 484 344 CXL_MBOX_OP_GET_FW_INFO = 0x0200, 345 + CXL_MBOX_OP_TRANSFER_FW = 0x0201, 485 346 CXL_MBOX_OP_ACTIVATE_FW = 0x0202, 486 347 CXL_MBOX_OP_SET_TIMESTAMP = 0x0301, 487 348 CXL_MBOX_OP_GET_SUPPORTED_LOGS = 0x0400, ··· 503 362 CXL_MBOX_OP_GET_SCAN_MEDIA_CAPS = 0x4303, 504 363 CXL_MBOX_OP_SCAN_MEDIA = 0x4304, 505 364 CXL_MBOX_OP_GET_SCAN_MEDIA = 0x4305, 365 + CXL_MBOX_OP_SANITIZE = 0x4400, 366 + CXL_MBOX_OP_SECURE_ERASE = 0x4401, 506 367 CXL_MBOX_OP_GET_SECURITY_STATE = 0x4500, 507 368 CXL_MBOX_OP_SET_PASSPHRASE = 0x4501, 508 369 CXL_MBOX_OP_DISABLE_PASSPHRASE = 0x4502, ··· 835 692 CXL_PMEM_SEC_PASS_USER, 836 693 }; 837 694 838 - int cxl_internal_send_cmd(struct cxl_dev_state *cxlds, 695 + int cxl_internal_send_cmd(struct cxl_memdev_state *mds, 839 696 struct cxl_mbox_cmd *cmd); 840 - int cxl_dev_state_identify(struct cxl_dev_state *cxlds); 697 + int cxl_dev_state_identify(struct cxl_memdev_state *mds); 841 698 int cxl_await_media_ready(struct cxl_dev_state *cxlds); 842 - int cxl_enumerate_cmds(struct cxl_dev_state *cxlds); 843 - int cxl_mem_create_range_info(struct cxl_dev_state *cxlds); 844 - struct cxl_dev_state *cxl_dev_state_create(struct device *dev); 845 - void set_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds); 846 - void clear_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds); 847 - void cxl_mem_get_event_records(struct cxl_dev_state *cxlds, u32 status); 848 - int cxl_set_timestamp(struct cxl_dev_state *cxlds); 849 - int cxl_poison_state_init(struct cxl_dev_state *cxlds); 699 + int cxl_enumerate_cmds(struct cxl_memdev_state *mds); 700 + int cxl_mem_create_range_info(struct cxl_memdev_state *mds); 701 + struct cxl_memdev_state *cxl_memdev_state_create(struct device 
*dev); 702 + void set_exclusive_cxl_commands(struct cxl_memdev_state *mds, 703 + unsigned long *cmds); 704 + void clear_exclusive_cxl_commands(struct cxl_memdev_state *mds, 705 + unsigned long *cmds); 706 + void cxl_mem_get_event_records(struct cxl_memdev_state *mds, u32 status); 707 + int cxl_set_timestamp(struct cxl_memdev_state *mds); 708 + int cxl_poison_state_init(struct cxl_memdev_state *mds); 850 709 int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, 851 710 struct cxl_region *cxlr); 852 711 int cxl_trigger_poison_list(struct cxl_memdev *cxlmd); ··· 866 721 { 867 722 } 868 723 #endif 724 + 725 + int cxl_mem_sanitize(struct cxl_memdev_state *mds, u16 cmd); 869 726 870 727 struct cxl_hdm { 871 728 struct cxl_component_regs regs;
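The fixed header of struct cxl_mbox_transfer_fw above (action, slot, two reserved bytes, a 32-bit offset, and 0x78 more reserved bytes) adds up to exactly 128 bytes, one CXL_FW_TRANSFER_ALIGNMENT unit, so each mailbox payload carries that header plus an aligned run of data. A standalone sketch of the chunking arithmetic behind the FULL/INITIATE/CONTINUE/END actions and the @oneshot case; the sizes are hypothetical, and the wire offset is assumed to be counted in 128-byte units per the alignment rule:

#include <stdio.h>
#include <stddef.h>

#define ALIGNMENT 128 /* CXL_FW_TRANSFER_ALIGNMENT */

enum action { FULL, INITIATE, CONTINUE, END };
static const char *const names[] = { "FULL", "INITIATE", "CONTINUE", "END" };

int main(void)
{
    size_t payload = 1 << 20;  /* hypothetical mailbox payload size */
    size_t hdr = 128;          /* fixed transfer header, computed above */
    /* usable, alignment-rounded data bytes per Transfer FW command */
    size_t chunk = (payload - hdr) & ~(size_t)(ALIGNMENT - 1);
    size_t image = 3 * chunk + 4096; /* hypothetical firmware size */

    for (size_t off = 0; off < image; off += chunk) {
        size_t len = image - off < chunk ? image - off : chunk;
        enum action act;

        if (off == 0 && len == image)
            act = FULL;        /* the @oneshot case */
        else if (off == 0)
            act = INITIATE;
        else if (off + len == image)
            act = END;
        else
            act = CONTINUE;

        printf("%-8s offset=%zu (x128B) len=%zu\n",
               names[act], off / ALIGNMENT, len);
    }
    return 0;
}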
+1
drivers/cxl/cxlpci.h
··· 67 67 CXL_REGLOC_RBI_COMPONENT, 68 68 CXL_REGLOC_RBI_VIRT, 69 69 CXL_REGLOC_RBI_MEMDEV, 70 + CXL_REGLOC_RBI_PMU, 70 71 CXL_REGLOC_RBI_TYPES 71 72 }; 72 73
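CXL_REGLOC_RBI_PMU slots in ahead of CXL_REGLOC_RBI_TYPES, which keeps its role as a count sentinel for bounds-checking block identifiers parsed out of the Register Locator DVSEC. A standalone sketch of that sentinel pattern; the elided first enumerator is assumed to be an "empty" entry at 0:

#include <stdio.h>

enum regloc_type {
    RBI_EMPTY = 0, /* assumed; elided from the hunk above */
    RBI_COMPONENT,
    RBI_VIRT,
    RBI_MEMDEV,
    RBI_PMU,       /* new in this merge */
    RBI_TYPES      /* sentinel: number of defined identifiers */
};

static const char *const names[RBI_TYPES] = {
    [RBI_EMPTY] = "empty",  [RBI_COMPONENT] = "component",
    [RBI_VIRT]  = "virt",   [RBI_MEMDEV]    = "memdev",
    [RBI_PMU]   = "pmu",
};

int main(void)
{
    int id = RBI_PMU; /* pretend this was parsed from the DVSEC */

    /* the sentinel rejects identifiers newer than this build knows */
    if (id < RBI_TYPES)
        printf("register block %d: %s\n", id, names[id]);
    else
        printf("register block %d: unknown, skipping\n", id);
    return 0;
}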
+10 -16
drivers/cxl/mem.c
··· 51 51 struct cxl_port *parent_port = parent_dport->port; 52 52 struct cxl_dev_state *cxlds = cxlmd->cxlds; 53 53 struct cxl_port *endpoint, *iter, *down; 54 - resource_size_t component_reg_phys; 55 54 int rc; 56 55 57 56 /* ··· 65 66 ep->next = down; 66 67 } 67 68 68 - /* 69 - * The component registers for an RCD might come from the 70 - * host-bridge RCRB if they are not already mapped via the 71 - * typical register locator mechanism. 72 - */ 73 - if (parent_dport->rch && cxlds->component_reg_phys == CXL_RESOURCE_NONE) 74 - component_reg_phys = cxl_rcrb_to_component( 75 - &cxlmd->dev, parent_dport->rcrb, CXL_RCRB_UPSTREAM); 76 - else 77 - component_reg_phys = cxlds->component_reg_phys; 78 - endpoint = devm_cxl_add_port(host, &cxlmd->dev, component_reg_phys, 69 + endpoint = devm_cxl_add_port(host, &cxlmd->dev, 70 + cxlds->component_reg_phys, 79 71 parent_dport); 80 72 if (IS_ERR(endpoint)) 81 73 return PTR_ERR(endpoint); ··· 107 117 static int cxl_mem_probe(struct device *dev) 108 118 { 109 119 struct cxl_memdev *cxlmd = to_cxl_memdev(dev); 120 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 110 121 struct cxl_dev_state *cxlds = cxlmd->cxlds; 111 122 struct device *endpoint_parent; 112 123 struct cxl_port *parent_port; ··· 132 141 dentry = cxl_debugfs_create_dir(dev_name(dev)); 133 142 debugfs_create_devm_seqfile(dev, "dpamem", dentry, cxl_mem_dpa_show); 134 143 135 - if (test_bit(CXL_POISON_ENABLED_INJECT, cxlds->poison.enabled_cmds)) 144 + if (test_bit(CXL_POISON_ENABLED_INJECT, mds->poison.enabled_cmds)) 136 145 debugfs_create_file("inject_poison", 0200, dentry, cxlmd, 137 146 &cxl_poison_inject_fops); 138 - if (test_bit(CXL_POISON_ENABLED_CLEAR, cxlds->poison.enabled_cmds)) 147 + if (test_bit(CXL_POISON_ENABLED_CLEAR, mds->poison.enabled_cmds)) 139 148 debugfs_create_file("clear_poison", 0200, dentry, cxlmd, 140 149 &cxl_poison_clear_fops); 141 150 ··· 154 163 } 155 164 156 165 if (dport->rch) 157 - endpoint_parent = parent_port->uport; 166 + endpoint_parent = parent_port->uport_dev; 158 167 else 159 168 endpoint_parent = &parent_port->dev; 160 169 ··· 218 227 { 219 228 if (a == &dev_attr_trigger_poison_list.attr) { 220 229 struct device *dev = kobj_to_dev(kobj); 230 + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); 231 + struct cxl_memdev_state *mds = 232 + to_cxl_memdev_state(cxlmd->cxlds); 221 233 222 234 if (!test_bit(CXL_POISON_ENABLED_LIST, 223 - to_cxl_memdev(dev)->cxlds->poison.enabled_cmds)) 235 + mds->poison.enabled_cmds)) 224 236 return 0; 225 237 } 226 238 return a->mode;
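mem.c now reaches mailbox-specific data through to_cxl_memdev_state(), the downcast defined in cxlmem.h above that returns NULL for anything but CXL_DEVTYPE_CLASSMEM. A standalone sketch of that embed-and-downcast pattern, with abbreviated type names and container_of() spelled out via offsetof():

#include <stdio.h>
#include <stddef.h>

enum devtype { DEVTYPE_DEVMEM, DEVTYPE_CLASSMEM };

struct dev_state {            /* core state shared with type-2 devices */
    enum devtype type;
};

struct memdev_state {         /* type-3 class superset */
    struct dev_state cxlds;   /* embedded base object */
    size_t payload_size;
};

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

static struct memdev_state *to_memdev_state(struct dev_state *cxlds)
{
    if (cxlds->type != DEVTYPE_CLASSMEM)
        return NULL; /* accelerators carry no mailbox state */
    return container_of(cxlds, struct memdev_state, cxlds);
}

int main(void)
{
    struct memdev_state mds = {
        .cxlds.type = DEVTYPE_CLASSMEM, .payload_size = 1 << 20,
    };
    struct dev_state accel = { .type = DEVTYPE_DEVMEM };

    printf("memdev payload: %zu\n",
           to_memdev_state(&mds.cxlds)->payload_size);
    printf("accelerator downcast: %p\n", (void *)to_memdev_state(&accel));
    return 0;
}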
+351 -183
drivers/cxl/pci.c
··· 13 13 #include "cxlmem.h" 14 14 #include "cxlpci.h" 15 15 #include "cxl.h" 16 + #include "pmu.h" 16 17 17 18 /** 18 19 * DOC: cxl pci ··· 85 84 status & CXLMDEV_DEV_FATAL ? " fatal" : "", \ 86 85 status & CXLMDEV_FW_HALT ? " firmware-halt" : "") 87 86 87 + struct cxl_dev_id { 88 + struct cxl_dev_state *cxlds; 89 + }; 90 + 91 + static int cxl_request_irq(struct cxl_dev_state *cxlds, int irq, 92 + irq_handler_t handler, irq_handler_t thread_fn) 93 + { 94 + struct device *dev = cxlds->dev; 95 + struct cxl_dev_id *dev_id; 96 + 97 + /* dev_id must be globally unique and must contain the cxlds */ 98 + dev_id = devm_kzalloc(dev, sizeof(*dev_id), GFP_KERNEL); 99 + if (!dev_id) 100 + return -ENOMEM; 101 + dev_id->cxlds = cxlds; 102 + 103 + return devm_request_threaded_irq(dev, irq, handler, thread_fn, 104 + IRQF_SHARED | IRQF_ONESHOT, 105 + NULL, dev_id); 106 + } 107 + 108 + static bool cxl_mbox_background_complete(struct cxl_dev_state *cxlds) 109 + { 110 + u64 reg; 111 + 112 + reg = readq(cxlds->regs.mbox + CXLDEV_MBOX_BG_CMD_STATUS_OFFSET); 113 + return FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_PCT_MASK, reg) == 100; 114 + } 115 + 116 + static irqreturn_t cxl_pci_mbox_irq(int irq, void *id) 117 + { 118 + u64 reg; 119 + u16 opcode; 120 + struct cxl_dev_id *dev_id = id; 121 + struct cxl_dev_state *cxlds = dev_id->cxlds; 122 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); 123 + 124 + if (!cxl_mbox_background_complete(cxlds)) 125 + return IRQ_NONE; 126 + 127 + reg = readq(cxlds->regs.mbox + CXLDEV_MBOX_BG_CMD_STATUS_OFFSET); 128 + opcode = FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_OPCODE_MASK, reg); 129 + if (opcode == CXL_MBOX_OP_SANITIZE) { 130 + if (mds->security.sanitize_node) 131 + sysfs_notify_dirent(mds->security.sanitize_node); 132 + 133 + dev_dbg(cxlds->dev, "Sanitization operation ended\n"); 134 + } else { 135 + /* short-circuit the wait in __cxl_pci_mbox_send_cmd() */ 136 + rcuwait_wake_up(&mds->mbox_wait); 137 + } 138 + 139 + return IRQ_HANDLED; 140 + } 141 + 142 + /* 143 + * Sanitization operation polling mode. 144 + */ 145 + static void cxl_mbox_sanitize_work(struct work_struct *work) 146 + { 147 + struct cxl_memdev_state *mds = 148 + container_of(work, typeof(*mds), security.poll_dwork.work); 149 + struct cxl_dev_state *cxlds = &mds->cxlds; 150 + 151 + mutex_lock(&mds->mbox_mutex); 152 + if (cxl_mbox_background_complete(cxlds)) { 153 + mds->security.poll_tmo_secs = 0; 154 + put_device(cxlds->dev); 155 + 156 + if (mds->security.sanitize_node) 157 + sysfs_notify_dirent(mds->security.sanitize_node); 158 + 159 + dev_dbg(cxlds->dev, "Sanitization operation ended\n"); 160 + } else { 161 + int timeout = mds->security.poll_tmo_secs + 10; 162 + 163 + mds->security.poll_tmo_secs = min(15 * 60, timeout); 164 + queue_delayed_work(system_wq, &mds->security.poll_dwork, 165 + timeout * HZ); 166 + } 167 + mutex_unlock(&mds->mbox_mutex); 168 + } 169 + 88 170 /** 89 171 * __cxl_pci_mbox_send_cmd() - Execute a mailbox command 90 - * @cxlds: The device state to communicate with. 172 + * @mds: The memory device driver data 91 173 * @mbox_cmd: Command to send to the memory device. 92 174 * 93 175 * Context: Any context. Expects mbox_mutex to be held. ··· 190 106 * not need to coordinate with each other. The driver only uses the primary 191 107 * mailbox. 
192 108 */ 193 - static int __cxl_pci_mbox_send_cmd(struct cxl_dev_state *cxlds, 109 + static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, 194 110 struct cxl_mbox_cmd *mbox_cmd) 195 111 { 112 + struct cxl_dev_state *cxlds = &mds->cxlds; 196 113 void __iomem *payload = cxlds->regs.mbox + CXLDEV_MBOX_PAYLOAD_OFFSET; 197 114 struct device *dev = cxlds->dev; 198 115 u64 cmd_reg, status_reg; 199 116 size_t out_len; 200 117 int rc; 201 118 202 - lockdep_assert_held(&cxlds->mbox_mutex); 119 + lockdep_assert_held(&mds->mbox_mutex); 203 120 204 121 /* 205 122 * Here are the steps from 8.2.8.4 of the CXL 2.0 spec. ··· 227 142 cxl_cmd_err(cxlds->dev, mbox_cmd, md_status, 228 143 "mailbox queue busy"); 229 144 return -EBUSY; 145 + } 146 + 147 + /* 148 + * With sanitize polling, hardware might be done and the poller still 149 + * not be in sync. Ensure no new command comes in until so. Keep the 150 + * hardware semantics and only allow device health status. 151 + */ 152 + if (mds->security.poll_tmo_secs > 0) { 153 + if (mbox_cmd->opcode != CXL_MBOX_OP_GET_HEALTH_INFO) 154 + return -EBUSY; 230 155 } 231 156 232 157 cmd_reg = FIELD_PREP(CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK, ··· 272 177 mbox_cmd->return_code = 273 178 FIELD_GET(CXLDEV_MBOX_STATUS_RET_CODE_MASK, status_reg); 274 179 180 + /* 181 + * Handle the background command in a synchronous manner. 182 + * 183 + * All other mailbox commands will serialize/queue on the mbox_mutex, 184 + * which we currently hold. Furthermore this also guarantees that 185 + * cxl_mbox_background_complete() checks are safe amongst each other, 186 + * in that no new bg operation can occur in between. 187 + * 188 + * Background operations are timesliced in accordance with the nature 189 + * of the command. In the event of timeout, the mailbox state is 190 + * indeterminate until the next successful command submission and the 191 + * driver can get back in sync with the hardware state. 192 + */ 193 + if (mbox_cmd->return_code == CXL_MBOX_CMD_RC_BACKGROUND) { 194 + u64 bg_status_reg; 195 + int i, timeout; 196 + 197 + /* 198 + * Sanitization is a special case which monopolizes the device 199 + * and cannot be timesliced. Handle asynchronously instead, 200 + * and allow userspace to poll(2) for completion. 
201 + */ 202 + if (mbox_cmd->opcode == CXL_MBOX_OP_SANITIZE) { 203 + if (mds->security.poll) { 204 + /* hold the device throughout */ 205 + get_device(cxlds->dev); 206 + 207 + /* give first timeout a second */ 208 + timeout = 1; 209 + mds->security.poll_tmo_secs = timeout; 210 + queue_delayed_work(system_wq, 211 + &mds->security.poll_dwork, 212 + timeout * HZ); 213 + } 214 + 215 + dev_dbg(dev, "Sanitization operation started\n"); 216 + goto success; 217 + } 218 + 219 + dev_dbg(dev, "Mailbox background operation (0x%04x) started\n", 220 + mbox_cmd->opcode); 221 + 222 + timeout = mbox_cmd->poll_interval_ms; 223 + for (i = 0; i < mbox_cmd->poll_count; i++) { 224 + if (rcuwait_wait_event_timeout(&mds->mbox_wait, 225 + cxl_mbox_background_complete(cxlds), 226 + TASK_UNINTERRUPTIBLE, 227 + msecs_to_jiffies(timeout)) > 0) 228 + break; 229 + } 230 + 231 + if (!cxl_mbox_background_complete(cxlds)) { 232 + dev_err(dev, "timeout waiting for background (%d ms)\n", 233 + timeout * mbox_cmd->poll_count); 234 + return -ETIMEDOUT; 235 + } 236 + 237 + bg_status_reg = readq(cxlds->regs.mbox + 238 + CXLDEV_MBOX_BG_CMD_STATUS_OFFSET); 239 + mbox_cmd->return_code = 240 + FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_RC_MASK, 241 + bg_status_reg); 242 + dev_dbg(dev, 243 + "Mailbox background operation (0x%04x) completed\n", 244 + mbox_cmd->opcode); 245 + } 246 + 275 247 if (mbox_cmd->return_code != CXL_MBOX_CMD_RC_SUCCESS) { 276 248 dev_dbg(dev, "Mailbox operation had an error: %s\n", 277 249 cxl_mbox_cmd_rc2str(mbox_cmd)); 278 250 return 0; /* completed but caller must check return_code */ 279 251 } 280 252 253 + success: 281 254 /* #7 */ 282 255 cmd_reg = readq(cxlds->regs.mbox + CXLDEV_MBOX_CMD_OFFSET); 283 256 out_len = FIELD_GET(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK, cmd_reg); ··· 359 196 * have requested less data than the hardware supplied even 360 197 * within spec. 361 198 */ 362 - size_t n = min3(mbox_cmd->size_out, cxlds->payload_size, out_len); 199 + size_t n; 363 200 201 + n = min3(mbox_cmd->size_out, mds->payload_size, out_len); 364 202 memcpy_fromio(mbox_cmd->payload_out, payload, n); 365 203 mbox_cmd->size_out = n; 366 204 } else { ··· 371 207 return 0; 372 208 } 373 209 374 - static int cxl_pci_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd) 210 + static int cxl_pci_mbox_send(struct cxl_memdev_state *mds, 211 + struct cxl_mbox_cmd *cmd) 375 212 { 376 213 int rc; 377 214 378 - mutex_lock_io(&cxlds->mbox_mutex); 379 - rc = __cxl_pci_mbox_send_cmd(cxlds, cmd); 380 - mutex_unlock(&cxlds->mbox_mutex); 215 + mutex_lock_io(&mds->mbox_mutex); 216 + rc = __cxl_pci_mbox_send_cmd(mds, cmd); 217 + mutex_unlock(&mds->mbox_mutex); 381 218 382 219 return rc; 383 220 } 384 221 385 - static int cxl_pci_setup_mailbox(struct cxl_dev_state *cxlds) 222 + static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds) 386 223 { 224 + struct cxl_dev_state *cxlds = &mds->cxlds; 387 225 const int cap = readl(cxlds->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET); 226 + struct device *dev = cxlds->dev; 388 227 unsigned long timeout; 389 228 u64 md_status; 390 229 ··· 401 234 } while (!time_after(jiffies, timeout)); 402 235 403 236 if (!(md_status & CXLMDEV_MBOX_IF_READY)) { 404 - cxl_err(cxlds->dev, md_status, 405 - "timeout awaiting mailbox ready"); 237 + cxl_err(dev, md_status, "timeout awaiting mailbox ready"); 406 238 return -ETIMEDOUT; 407 239 } 408 240 ··· 412 246 * source for future doorbell busy events. 
413 247 */ 414 248 if (cxl_pci_mbox_wait_for_doorbell(cxlds) != 0) { 415 - cxl_err(cxlds->dev, md_status, "timeout awaiting mailbox idle"); 249 + cxl_err(dev, md_status, "timeout awaiting mailbox idle"); 416 250 return -ETIMEDOUT; 417 251 } 418 252 419 - cxlds->mbox_send = cxl_pci_mbox_send; 420 - cxlds->payload_size = 253 + mds->mbox_send = cxl_pci_mbox_send; 254 + mds->payload_size = 421 255 1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap); 422 256 423 257 /* ··· 427 261 * there's no point in going forward. If the size is too large, there's 428 262 * no harm is soft limiting it. 429 263 */ 430 - cxlds->payload_size = min_t(size_t, cxlds->payload_size, SZ_1M); 431 - if (cxlds->payload_size < 256) { 432 - dev_err(cxlds->dev, "Mailbox is too small (%zub)", 433 - cxlds->payload_size); 264 + mds->payload_size = min_t(size_t, mds->payload_size, SZ_1M); 265 + if (mds->payload_size < 256) { 266 + dev_err(dev, "Mailbox is too small (%zub)", 267 + mds->payload_size); 434 268 return -ENXIO; 435 269 } 436 270 437 - dev_dbg(cxlds->dev, "Mailbox payload sized %zu", 438 - cxlds->payload_size); 271 + dev_dbg(dev, "Mailbox payload sized %zu", mds->payload_size); 439 272 440 - return 0; 441 - } 273 + rcuwait_init(&mds->mbox_wait); 442 274 443 - static int cxl_map_regblock(struct pci_dev *pdev, struct cxl_register_map *map) 444 - { 445 - struct device *dev = &pdev->dev; 275 + if (cap & CXLDEV_MBOX_CAP_BG_CMD_IRQ) { 276 + u32 ctrl; 277 + int irq, msgnum; 278 + struct pci_dev *pdev = to_pci_dev(cxlds->dev); 446 279 447 - map->base = ioremap(map->resource, map->max_size); 448 - if (!map->base) { 449 - dev_err(dev, "failed to map registers\n"); 450 - return -ENOMEM; 280 + msgnum = FIELD_GET(CXLDEV_MBOX_CAP_IRQ_MSGNUM_MASK, cap); 281 + irq = pci_irq_vector(pdev, msgnum); 282 + if (irq < 0) 283 + goto mbox_poll; 284 + 285 + if (cxl_request_irq(cxlds, irq, cxl_pci_mbox_irq, NULL)) 286 + goto mbox_poll; 287 + 288 + /* enable background command mbox irq support */ 289 + ctrl = readl(cxlds->regs.mbox + CXLDEV_MBOX_CTRL_OFFSET); 290 + ctrl |= CXLDEV_MBOX_CTRL_BG_CMD_IRQ; 291 + writel(ctrl, cxlds->regs.mbox + CXLDEV_MBOX_CTRL_OFFSET); 292 + 293 + return 0; 451 294 } 452 295 453 - dev_dbg(dev, "Mapped CXL Memory Device resource %pa\n", &map->resource); 296 + mbox_poll: 297 + mds->security.poll = true; 298 + INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mbox_sanitize_work); 299 + 300 + dev_dbg(cxlds->dev, "Mailbox interrupts are unsupported"); 454 301 return 0; 455 - } 456 - 457 - static void cxl_unmap_regblock(struct pci_dev *pdev, 458 - struct cxl_register_map *map) 459 - { 460 - iounmap(map->base); 461 - map->base = NULL; 462 - } 463 - 464 - static int cxl_probe_regs(struct pci_dev *pdev, struct cxl_register_map *map) 465 - { 466 - struct cxl_component_reg_map *comp_map; 467 - struct cxl_device_reg_map *dev_map; 468 - struct device *dev = &pdev->dev; 469 - void __iomem *base = map->base; 470 - 471 - switch (map->reg_type) { 472 - case CXL_REGLOC_RBI_COMPONENT: 473 - comp_map = &map->component_map; 474 - cxl_probe_component_regs(dev, base, comp_map); 475 - if (!comp_map->hdm_decoder.valid) { 476 - dev_err(dev, "HDM decoder registers not found\n"); 477 - return -ENXIO; 478 - } 479 - 480 - if (!comp_map->ras.valid) 481 - dev_dbg(dev, "RAS registers not found\n"); 482 - 483 - dev_dbg(dev, "Set up component registers\n"); 484 - break; 485 - case CXL_REGLOC_RBI_MEMDEV: 486 - dev_map = &map->device_map; 487 - cxl_probe_device_regs(dev, base, dev_map); 488 - if (!dev_map->status.valid || !dev_map->mbox.valid || 489 
- !dev_map->memdev.valid) { 490 - dev_err(dev, "registers not found: %s%s%s\n", 491 - !dev_map->status.valid ? "status " : "", 492 - !dev_map->mbox.valid ? "mbox " : "", 493 - !dev_map->memdev.valid ? "memdev " : ""); 494 - return -ENXIO; 495 - } 496 - 497 - dev_dbg(dev, "Probing device registers...\n"); 498 - break; 499 - default: 500 - break; 501 - } 502 - 503 - return 0; 504 - } 505 - 506 - static int cxl_setup_regs(struct pci_dev *pdev, enum cxl_regloc_type type, 507 - struct cxl_register_map *map) 508 - { 509 - int rc; 510 - 511 - rc = cxl_find_regblock(pdev, type, map); 512 - if (rc) 513 - return rc; 514 - 515 - rc = cxl_map_regblock(pdev, map); 516 - if (rc) 517 - return rc; 518 - 519 - rc = cxl_probe_regs(pdev, map); 520 - cxl_unmap_regblock(pdev, map); 521 - 522 - return rc; 523 302 } 524 303 525 304 /* ··· 476 365 return pci_pcie_type(pdev) == PCI_EXP_TYPE_RC_END; 477 366 } 478 367 479 - /* 480 - * CXL v3.0 6.2.3 Table 6-4 481 - * The table indicates that if PCIe Flit Mode is set, then CXL is in 256B flits 482 - * mode, otherwise it's 68B flits mode. 483 - */ 484 - static bool cxl_pci_flit_256(struct pci_dev *pdev) 368 + static int cxl_rcrb_get_comp_regs(struct pci_dev *pdev, 369 + struct cxl_register_map *map) 485 370 { 486 - u16 lnksta2; 371 + struct cxl_port *port; 372 + struct cxl_dport *dport; 373 + resource_size_t component_reg_phys; 487 374 488 - pcie_capability_read_word(pdev, PCI_EXP_LNKSTA2, &lnksta2); 489 - return lnksta2 & PCI_EXP_LNKSTA2_FLIT; 375 + *map = (struct cxl_register_map) { 376 + .dev = &pdev->dev, 377 + .resource = CXL_RESOURCE_NONE, 378 + }; 379 + 380 + port = cxl_pci_find_port(pdev, &dport); 381 + if (!port) 382 + return -EPROBE_DEFER; 383 + 384 + component_reg_phys = cxl_rcd_component_reg_phys(&pdev->dev, dport); 385 + 386 + put_device(&port->dev); 387 + 388 + if (component_reg_phys == CXL_RESOURCE_NONE) 389 + return -ENXIO; 390 + 391 + map->resource = component_reg_phys; 392 + map->reg_type = CXL_REGLOC_RBI_COMPONENT; 393 + map->max_size = CXL_COMPONENT_REG_BLOCK_SIZE; 394 + 395 + return 0; 396 + } 397 + 398 + static int cxl_pci_setup_regs(struct pci_dev *pdev, enum cxl_regloc_type type, 399 + struct cxl_register_map *map) 400 + { 401 + int rc; 402 + 403 + rc = cxl_find_regblock(pdev, type, map); 404 + 405 + /* 406 + * If the Register Locator DVSEC does not exist, check if it 407 + * is an RCH and try to extract the Component Registers from 408 + * an RCRB. 409 + */ 410 + if (rc && type == CXL_REGLOC_RBI_COMPONENT && is_cxl_restricted(pdev)) 411 + rc = cxl_rcrb_get_comp_regs(pdev, map); 412 + 413 + if (rc) 414 + return rc; 415 + 416 + return cxl_setup_regs(map); 490 417 } 491 418 492 419 static int cxl_pci_ras_unmask(struct pci_dev *pdev) ··· 553 404 addr = cxlds->regs.ras + CXL_RAS_UNCORRECTABLE_MASK_OFFSET; 554 405 orig_val = readl(addr); 555 406 556 - mask = CXL_RAS_UNCORRECTABLE_MASK_MASK; 557 - if (!cxl_pci_flit_256(pdev)) 558 - mask &= ~CXL_RAS_UNCORRECTABLE_MASK_F256B_MASK; 407 + mask = CXL_RAS_UNCORRECTABLE_MASK_MASK | 408 + CXL_RAS_UNCORRECTABLE_MASK_F256B_MASK; 559 409 val = orig_val & ~mask; 560 410 writel(val, addr); 561 411 dev_dbg(&pdev->dev, ··· 581 433 582 434 /* 583 435 * There is a single buffer for reading event logs from the mailbox. All logs 584 - * share this buffer protected by the cxlds->event_log_lock. 436 + * share this buffer protected by the mds->event_log_lock. 
585 437 */ 586 - static int cxl_mem_alloc_event_buf(struct cxl_dev_state *cxlds) 438 + static int cxl_mem_alloc_event_buf(struct cxl_memdev_state *mds) 587 439 { 588 440 struct cxl_get_event_payload *buf; 589 441 590 - buf = kvmalloc(cxlds->payload_size, GFP_KERNEL); 442 + buf = kvmalloc(mds->payload_size, GFP_KERNEL); 591 443 if (!buf) 592 444 return -ENOMEM; 593 - cxlds->event.buf = buf; 445 + mds->event.buf = buf; 594 446 595 - return devm_add_action_or_reset(cxlds->dev, free_event_buf, buf); 447 + return devm_add_action_or_reset(mds->cxlds.dev, free_event_buf, buf); 596 448 } 597 449 598 450 static int cxl_alloc_irq_vectors(struct pci_dev *pdev) ··· 617 469 return 0; 618 470 } 619 471 620 - struct cxl_dev_id { 621 - struct cxl_dev_state *cxlds; 622 - }; 623 - 624 472 static irqreturn_t cxl_event_thread(int irq, void *id) 625 473 { 626 474 struct cxl_dev_id *dev_id = id; 627 475 struct cxl_dev_state *cxlds = dev_id->cxlds; 476 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); 628 477 u32 status; 629 478 630 479 do { ··· 634 489 status &= CXLDEV_EVENT_STATUS_ALL; 635 490 if (!status) 636 491 break; 637 - cxl_mem_get_event_records(cxlds, status); 492 + cxl_mem_get_event_records(mds, status); 638 493 cond_resched(); 639 494 } while (status); 640 495 ··· 643 498 644 499 static int cxl_event_req_irq(struct cxl_dev_state *cxlds, u8 setting) 645 500 { 646 - struct device *dev = cxlds->dev; 647 - struct pci_dev *pdev = to_pci_dev(dev); 648 - struct cxl_dev_id *dev_id; 501 + struct pci_dev *pdev = to_pci_dev(cxlds->dev); 649 502 int irq; 650 503 651 504 if (FIELD_GET(CXLDEV_EVENT_INT_MODE_MASK, setting) != CXL_INT_MSI_MSIX) 652 505 return -ENXIO; 653 - 654 - /* dev_id must be globally unique and must contain the cxlds */ 655 - dev_id = devm_kzalloc(dev, sizeof(*dev_id), GFP_KERNEL); 656 - if (!dev_id) 657 - return -ENOMEM; 658 - dev_id->cxlds = cxlds; 659 506 660 507 irq = pci_irq_vector(pdev, 661 508 FIELD_GET(CXLDEV_EVENT_INT_MSGNUM_MASK, setting)); 662 509 if (irq < 0) 663 510 return irq; 664 511 665 - return devm_request_threaded_irq(dev, irq, NULL, cxl_event_thread, 666 - IRQF_SHARED | IRQF_ONESHOT, NULL, 667 - dev_id); 512 + return cxl_request_irq(cxlds, irq, NULL, cxl_event_thread); 668 513 } 669 514 670 - static int cxl_event_get_int_policy(struct cxl_dev_state *cxlds, 515 + static int cxl_event_get_int_policy(struct cxl_memdev_state *mds, 671 516 struct cxl_event_interrupt_policy *policy) 672 517 { 673 518 struct cxl_mbox_cmd mbox_cmd = { ··· 667 532 }; 668 533 int rc; 669 534 670 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 535 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 671 536 if (rc < 0) 672 - dev_err(cxlds->dev, "Failed to get event interrupt policy : %d", 673 - rc); 537 + dev_err(mds->cxlds.dev, 538 + "Failed to get event interrupt policy : %d", rc); 674 539 675 540 return rc; 676 541 } 677 542 678 - static int cxl_event_config_msgnums(struct cxl_dev_state *cxlds, 543 + static int cxl_event_config_msgnums(struct cxl_memdev_state *mds, 679 544 struct cxl_event_interrupt_policy *policy) 680 545 { 681 546 struct cxl_mbox_cmd mbox_cmd; ··· 694 559 .size_in = sizeof(*policy), 695 560 }; 696 561 697 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 562 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 698 563 if (rc < 0) { 699 - dev_err(cxlds->dev, "Failed to set event interrupt policy : %d", 564 + dev_err(mds->cxlds.dev, "Failed to set event interrupt policy : %d", 700 565 rc); 701 566 return rc; 702 567 } 703 568 704 569 /* Retrieve final interrupt settings */ 705 - return 
cxl_event_get_int_policy(cxlds, policy); 570 + return cxl_event_get_int_policy(mds, policy); 706 571 } 707 572 708 - static int cxl_event_irqsetup(struct cxl_dev_state *cxlds) 573 + static int cxl_event_irqsetup(struct cxl_memdev_state *mds) 709 574 { 575 + struct cxl_dev_state *cxlds = &mds->cxlds; 710 576 struct cxl_event_interrupt_policy policy; 711 577 int rc; 712 578 713 - rc = cxl_event_config_msgnums(cxlds, &policy); 579 + rc = cxl_event_config_msgnums(mds, &policy); 714 580 if (rc) 715 581 return rc; 716 582 ··· 750 614 } 751 615 752 616 static int cxl_event_config(struct pci_host_bridge *host_bridge, 753 - struct cxl_dev_state *cxlds) 617 + struct cxl_memdev_state *mds) 754 618 { 755 619 struct cxl_event_interrupt_policy policy; 756 620 int rc; ··· 762 626 if (!host_bridge->native_cxl_error) 763 627 return 0; 764 628 765 - rc = cxl_mem_alloc_event_buf(cxlds); 629 + rc = cxl_mem_alloc_event_buf(mds); 766 630 if (rc) 767 631 return rc; 768 632 769 - rc = cxl_event_get_int_policy(cxlds, &policy); 633 + rc = cxl_event_get_int_policy(mds, &policy); 770 634 if (rc) 771 635 return rc; 772 636 ··· 774 638 cxl_event_int_is_fw(policy.warn_settings) || 775 639 cxl_event_int_is_fw(policy.failure_settings) || 776 640 cxl_event_int_is_fw(policy.fatal_settings)) { 777 - dev_err(cxlds->dev, "FW still in control of Event Logs despite _OSC settings\n"); 641 + dev_err(mds->cxlds.dev, 642 + "FW still in control of Event Logs despite _OSC settings\n"); 778 643 return -EBUSY; 779 644 } 780 645 781 - rc = cxl_event_irqsetup(cxlds); 646 + rc = cxl_event_irqsetup(mds); 782 647 if (rc) 783 648 return rc; 784 649 785 - cxl_mem_get_event_records(cxlds, CXLDEV_EVENT_STATUS_ALL); 650 + cxl_mem_get_event_records(mds, CXLDEV_EVENT_STATUS_ALL); 786 651 787 652 return 0; 788 653 } ··· 791 654 static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) 792 655 { 793 656 struct pci_host_bridge *host_bridge = pci_find_host_bridge(pdev->bus); 657 + struct cxl_memdev_state *mds; 658 + struct cxl_dev_state *cxlds; 794 659 struct cxl_register_map map; 795 660 struct cxl_memdev *cxlmd; 796 - struct cxl_dev_state *cxlds; 797 - int rc; 661 + int i, rc, pmu_count; 798 662 799 663 /* 800 664 * Double check the anonymous union trickery in struct cxl_regs ··· 809 671 return rc; 810 672 pci_set_master(pdev); 811 673 812 - cxlds = cxl_dev_state_create(&pdev->dev); 813 - if (IS_ERR(cxlds)) 814 - return PTR_ERR(cxlds); 674 + mds = cxl_memdev_state_create(&pdev->dev); 675 + if (IS_ERR(mds)) 676 + return PTR_ERR(mds); 677 + cxlds = &mds->cxlds; 815 678 pci_set_drvdata(pdev, cxlds); 816 679 817 680 cxlds->rcd = is_cxl_restricted(pdev); ··· 823 684 dev_warn(&pdev->dev, 824 685 "Device DVSEC not present, skip CXL.mem init\n"); 825 686 826 - rc = cxl_setup_regs(pdev, CXL_REGLOC_RBI_MEMDEV, &map); 687 + rc = cxl_pci_setup_regs(pdev, CXL_REGLOC_RBI_MEMDEV, &map); 827 688 if (rc) 828 689 return rc; 829 690 830 - rc = cxl_map_device_regs(&pdev->dev, &cxlds->regs.device_regs, &map); 691 + rc = cxl_map_device_regs(&map, &cxlds->regs.device_regs); 831 692 if (rc) 832 693 return rc; 833 694 ··· 836 697 * still be useful for management functions so don't return an error. 
837 698 */ 838 699 cxlds->component_reg_phys = CXL_RESOURCE_NONE; 839 - rc = cxl_setup_regs(pdev, CXL_REGLOC_RBI_COMPONENT, &map); 700 + rc = cxl_pci_setup_regs(pdev, CXL_REGLOC_RBI_COMPONENT, &map); 840 701 if (rc) 841 702 dev_warn(&pdev->dev, "No component registers (%d)\n", rc); 703 + else if (!map.component_map.ras.valid) 704 + dev_dbg(&pdev->dev, "RAS registers not found\n"); 842 705 843 706 cxlds->component_reg_phys = map.resource; 844 707 845 - rc = cxl_map_component_regs(&pdev->dev, &cxlds->regs.component, 846 - &map, BIT(CXL_CM_CAP_CAP_ID_RAS)); 708 + rc = cxl_map_component_regs(&map, &cxlds->regs.component, 709 + BIT(CXL_CM_CAP_CAP_ID_RAS)); 847 710 if (rc) 848 711 dev_dbg(&pdev->dev, "Failed to map RAS capability.\n"); 849 712 ··· 855 714 else 856 715 dev_warn(&pdev->dev, "Media not active (%d)\n", rc); 857 716 858 - rc = cxl_pci_setup_mailbox(cxlds); 859 - if (rc) 860 - return rc; 861 - 862 - rc = cxl_enumerate_cmds(cxlds); 863 - if (rc) 864 - return rc; 865 - 866 - rc = cxl_set_timestamp(cxlds); 867 - if (rc) 868 - return rc; 869 - 870 - rc = cxl_poison_state_init(cxlds); 871 - if (rc) 872 - return rc; 873 - 874 - rc = cxl_dev_state_identify(cxlds); 875 - if (rc) 876 - return rc; 877 - 878 - rc = cxl_mem_create_range_info(cxlds); 879 - if (rc) 880 - return rc; 881 - 882 717 rc = cxl_alloc_irq_vectors(pdev); 718 + if (rc) 719 + return rc; 720 + 721 + rc = cxl_pci_setup_mailbox(mds); 722 + if (rc) 723 + return rc; 724 + 725 + rc = cxl_enumerate_cmds(mds); 726 + if (rc) 727 + return rc; 728 + 729 + rc = cxl_set_timestamp(mds); 730 + if (rc) 731 + return rc; 732 + 733 + rc = cxl_poison_state_init(mds); 734 + if (rc) 735 + return rc; 736 + 737 + rc = cxl_dev_state_identify(mds); 738 + if (rc) 739 + return rc; 740 + 741 + rc = cxl_mem_create_range_info(mds); 883 742 if (rc) 884 743 return rc; 885 744 ··· 887 746 if (IS_ERR(cxlmd)) 888 747 return PTR_ERR(cxlmd); 889 748 890 - rc = cxl_event_config(host_bridge, cxlds); 749 + rc = cxl_memdev_setup_fw_upload(mds); 750 + if (rc) 751 + return rc; 752 + 753 + pmu_count = cxl_count_regblock(pdev, CXL_REGLOC_RBI_PMU); 754 + for (i = 0; i < pmu_count; i++) { 755 + struct cxl_pmu_regs pmu_regs; 756 + 757 + rc = cxl_find_regblock_instance(pdev, CXL_REGLOC_RBI_PMU, &map, i); 758 + if (rc) { 759 + dev_dbg(&pdev->dev, "Could not find PMU regblock\n"); 760 + break; 761 + } 762 + 763 + rc = cxl_map_pmu_regs(pdev, &pmu_regs, &map); 764 + if (rc) { 765 + dev_dbg(&pdev->dev, "Could not map PMU regs\n"); 766 + break; 767 + } 768 + 769 + rc = devm_cxl_pmu_add(cxlds->dev, &pmu_regs, cxlmd->id, i, CXL_PMU_MEMDEV); 770 + if (rc) { 771 + dev_dbg(&pdev->dev, "Could not add PMU instance\n"); 772 + break; 773 + } 774 + } 775 + 776 + rc = cxl_event_config(host_bridge, mds); 891 777 if (rc) 892 778 return rc; 893 779
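cxl_mbox_sanitize_work() above rearms itself on a growing interval: the first check fires after one second, each miss adds ten seconds, and the stored timeout is capped at fifteen minutes. A standalone sketch of that cadence, with the hardware completion check stubbed out as never succeeding:

#include <stdio.h>

#define CAP_SECS (15 * 60) /* cap from cxl_mbox_sanitize_work() */

int main(void)
{
    int stored = 1;  /* poll_tmo_secs: "give first timeout a second" */
    int delay = 1;   /* delay of the currently queued work */
    int elapsed = 0;

    for (int poll = 1; poll <= 10; poll++) {
        elapsed += delay;
        printf("poll %2d fires at t=%4ds (delay %3ds)\n",
               poll, elapsed, delay);

        /* miss: rearm the way the work function does */
        delay = stored + 10;
        stored = delay < CAP_SECS ? delay : CAP_SECS;
    }
    return 0;
}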
+18 -17
drivers/cxl/pmem.c
··· 15 15 16 16 static __read_mostly DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX); 17 17 18 - static void clear_exclusive(void *cxlds) 18 + static void clear_exclusive(void *mds) 19 19 { 20 - clear_exclusive_cxl_commands(cxlds, exclusive_cmds); 20 + clear_exclusive_cxl_commands(mds, exclusive_cmds); 21 21 } 22 22 23 23 static void unregister_nvdimm(void *nvdimm) ··· 65 65 struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev); 66 66 struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; 67 67 struct cxl_nvdimm_bridge *cxl_nvb = cxlmd->cxl_nvb; 68 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 68 69 unsigned long flags = 0, cmd_mask = 0; 69 - struct cxl_dev_state *cxlds = cxlmd->cxlds; 70 70 struct nvdimm *nvdimm; 71 71 int rc; 72 72 73 - set_exclusive_cxl_commands(cxlds, exclusive_cmds); 74 - rc = devm_add_action_or_reset(dev, clear_exclusive, cxlds); 73 + set_exclusive_cxl_commands(mds, exclusive_cmds); 74 + rc = devm_add_action_or_reset(dev, clear_exclusive, mds); 75 75 if (rc) 76 76 return rc; 77 77 ··· 100 100 }, 101 101 }; 102 102 103 - static int cxl_pmem_get_config_size(struct cxl_dev_state *cxlds, 103 + static int cxl_pmem_get_config_size(struct cxl_memdev_state *mds, 104 104 struct nd_cmd_get_config_size *cmd, 105 105 unsigned int buf_len) 106 106 { 107 107 if (sizeof(*cmd) > buf_len) 108 108 return -EINVAL; 109 109 110 - *cmd = (struct nd_cmd_get_config_size) { 111 - .config_size = cxlds->lsa_size, 112 - .max_xfer = cxlds->payload_size - sizeof(struct cxl_mbox_set_lsa), 110 + *cmd = (struct nd_cmd_get_config_size){ 111 + .config_size = mds->lsa_size, 112 + .max_xfer = 113 + mds->payload_size - sizeof(struct cxl_mbox_set_lsa), 113 114 }; 114 115 115 116 return 0; 116 117 } 117 118 118 - static int cxl_pmem_get_config_data(struct cxl_dev_state *cxlds, 119 + static int cxl_pmem_get_config_data(struct cxl_memdev_state *mds, 119 120 struct nd_cmd_get_config_data_hdr *cmd, 120 121 unsigned int buf_len) 121 122 { ··· 141 140 .payload_out = cmd->out_buf, 142 141 }; 143 142 144 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 143 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 145 144 cmd->status = 0; 146 145 147 146 return rc; 148 147 } 149 148 150 - static int cxl_pmem_set_config_data(struct cxl_dev_state *cxlds, 149 + static int cxl_pmem_set_config_data(struct cxl_memdev_state *mds, 151 150 struct nd_cmd_set_config_hdr *cmd, 152 151 unsigned int buf_len) 153 152 { ··· 177 176 .size_in = struct_size(set_lsa, data, cmd->in_length), 178 177 }; 179 178 180 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 179 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 181 180 182 181 /* 183 182 * Set "firmware" status (4-packed bytes at the end of the input ··· 195 194 struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); 196 195 unsigned long cmd_mask = nvdimm_cmd_mask(nvdimm); 197 196 struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; 198 - struct cxl_dev_state *cxlds = cxlmd->cxlds; 197 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 199 198 200 199 if (!test_bit(cmd, &cmd_mask)) 201 200 return -ENOTTY; 202 201 203 202 switch (cmd) { 204 203 case ND_CMD_GET_CONFIG_SIZE: 205 - return cxl_pmem_get_config_size(cxlds, buf, buf_len); 204 + return cxl_pmem_get_config_size(mds, buf, buf_len); 206 205 case ND_CMD_GET_CONFIG_DATA: 207 - return cxl_pmem_get_config_data(cxlds, buf, buf_len); 206 + return cxl_pmem_get_config_data(mds, buf, buf_len); 208 207 case ND_CMD_SET_CONFIG_DATA: 209 - return cxl_pmem_set_config_data(cxlds, buf, buf_len); 208 + return cxl_pmem_set_config_data(mds, 
buf, buf_len); 210 209 default: 211 210 return -ENOTTY; 212 211 }
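cxl_pmem_get_config_size() now derives max_xfer from the memdev state: the mailbox payload size minus the Set LSA input header, i.e. the largest label-area chunk one Set LSA command can move. struct cxl_mbox_set_lsa itself is not part of this hunk, so the sketch below assumes an 8-byte header (a 32-bit offset plus 32 reserved bits) purely for illustration, as are the sizes:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct set_lsa_hdr {   /* assumed stand-in for struct cxl_mbox_set_lsa */
    uint32_t offset;
    uint32_t reserved;
};

int main(void)
{
    size_t payload_size = 1 << 20;   /* hypothetical mailbox payload */
    size_t lsa_size = 128 << 10;     /* hypothetical label storage area */
    size_t max_xfer = payload_size - sizeof(struct set_lsa_hdr);

    /* writes larger than max_xfer split into multiple Set LSA calls */
    size_t calls = (lsa_size + max_xfer - 1) / max_xfer;

    printf("config_size=%zu max_xfer=%zu -> %zu Set LSA call(s)\n",
           lsa_size, max_xfer, calls);
    return 0;
}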
+28
drivers/cxl/pmu.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Copyright(c) 2023 Huawei 4 + * CXL Specification rev 3.0 Section 8.2.7 (CPMU Register Interface) 5 + */ 6 + #ifndef CXL_PMU_H 7 + #define CXL_PMU_H 8 + #include <linux/device.h> 9 + 10 + enum cxl_pmu_type { 11 + CXL_PMU_MEMDEV, 12 + }; 13 + 14 + #define CXL_PMU_REGMAP_SIZE 0xe00 /* Table 8-32 CXL 3.0 specification */ 15 + struct cxl_pmu { 16 + struct device dev; 17 + void __iomem *base; 18 + int assoc_id; 19 + int index; 20 + enum cxl_pmu_type type; 21 + }; 22 + 23 + #define to_cxl_pmu(dev) container_of(dev, struct cxl_pmu, dev) 24 + struct cxl_pmu_regs; 25 + int devm_cxl_pmu_add(struct device *parent, struct cxl_pmu_regs *regs, 26 + int assoc_id, int idx, enum cxl_pmu_type type); 27 + 28 + #endif
+10 -11
drivers/cxl/port.c
··· 60 60 static int cxl_switch_port_probe(struct cxl_port *port) 61 61 { 62 62 struct cxl_hdm *cxlhdm; 63 - int rc, nr_dports; 63 + int rc; 64 64 65 - nr_dports = devm_cxl_port_enumerate_dports(port); 66 - if (nr_dports < 0) 67 - return nr_dports; 68 - 69 - cxlhdm = devm_cxl_setup_hdm(port, NULL); 70 - rc = devm_cxl_enable_hdm(port, cxlhdm); 71 - if (rc) 65 + rc = devm_cxl_port_enumerate_dports(port); 66 + if (rc < 0) 72 67 return rc; 73 68 69 + cxlhdm = devm_cxl_setup_hdm(port, NULL); 74 70 if (!IS_ERR(cxlhdm)) 75 71 return devm_cxl_enumerate_decoders(cxlhdm, NULL); 76 72 ··· 75 79 return PTR_ERR(cxlhdm); 76 80 } 77 81 78 - if (nr_dports == 1) { 82 + if (rc == 1) { 79 83 dev_dbg(&port->dev, "Fallback to passthrough decoder\n"); 80 84 return devm_cxl_add_passthrough_decoder(port); 81 85 } ··· 87 91 static int cxl_endpoint_port_probe(struct cxl_port *port) 88 92 { 89 93 struct cxl_endpoint_dvsec_info info = { .port = port }; 90 - struct cxl_memdev *cxlmd = to_cxl_memdev(port->uport); 94 + struct cxl_memdev *cxlmd = to_cxl_memdev(port->uport_dev); 91 95 struct cxl_dev_state *cxlds = cxlmd->cxlds; 92 96 struct cxl_hdm *cxlhdm; 93 97 struct cxl_port *root; ··· 98 102 return rc; 99 103 100 104 cxlhdm = devm_cxl_setup_hdm(port, &info); 101 - if (IS_ERR(cxlhdm)) 105 + if (IS_ERR(cxlhdm)) { 106 + if (PTR_ERR(cxlhdm) == -ENODEV) 107 + dev_err(&port->dev, "HDM decoder registers not found\n"); 102 108 return PTR_ERR(cxlhdm); 109 + } 103 110 104 111 /* Cache the data early to ensure is_visible() works */ 105 112 read_cdat_data(port);
+15 -12
drivers/cxl/security.c
··· 14 14 { 15 15 struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); 16 16 struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; 17 - struct cxl_dev_state *cxlds = cxlmd->cxlds; 17 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 18 18 unsigned long security_flags = 0; 19 19 struct cxl_get_security_output { 20 20 __le32 flags; ··· 29 29 .payload_out = &out, 30 30 }; 31 31 32 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 32 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 33 33 if (rc < 0) 34 34 return 0; 35 35 36 36 sec_out = le32_to_cpu(out.flags); 37 + /* cache security state */ 38 + mds->security.state = sec_out; 39 + 37 40 if (ptype == NVDIMM_MASTER) { 38 41 if (sec_out & CXL_PMEM_SEC_STATE_MASTER_PASS_SET) 39 42 set_bit(NVDIMM_SECURITY_UNLOCKED, &security_flags); ··· 70 67 { 71 68 struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); 72 69 struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; 73 - struct cxl_dev_state *cxlds = cxlmd->cxlds; 70 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 74 71 struct cxl_mbox_cmd mbox_cmd; 75 72 struct cxl_set_pass set_pass; 76 73 ··· 87 84 .payload_in = &set_pass, 88 85 }; 89 86 90 - return cxl_internal_send_cmd(cxlds, &mbox_cmd); 87 + return cxl_internal_send_cmd(mds, &mbox_cmd); 91 88 } 92 89 93 90 static int __cxl_pmem_security_disable(struct nvdimm *nvdimm, ··· 96 93 { 97 94 struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); 98 95 struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; 99 - struct cxl_dev_state *cxlds = cxlmd->cxlds; 96 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 100 97 struct cxl_disable_pass dis_pass; 101 98 struct cxl_mbox_cmd mbox_cmd; 102 99 ··· 112 109 .payload_in = &dis_pass, 113 110 }; 114 111 115 - return cxl_internal_send_cmd(cxlds, &mbox_cmd); 112 + return cxl_internal_send_cmd(mds, &mbox_cmd); 116 113 } 117 114 118 115 static int cxl_pmem_security_disable(struct nvdimm *nvdimm, ··· 131 128 { 132 129 struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); 133 130 struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; 134 - struct cxl_dev_state *cxlds = cxlmd->cxlds; 131 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 135 132 struct cxl_mbox_cmd mbox_cmd = { 136 133 .opcode = CXL_MBOX_OP_FREEZE_SECURITY, 137 134 }; 138 135 139 - return cxl_internal_send_cmd(cxlds, &mbox_cmd); 136 + return cxl_internal_send_cmd(mds, &mbox_cmd); 140 137 } 141 138 142 139 static int cxl_pmem_security_unlock(struct nvdimm *nvdimm, ··· 144 141 { 145 142 struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); 146 143 struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; 147 - struct cxl_dev_state *cxlds = cxlmd->cxlds; 144 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 148 145 u8 pass[NVDIMM_PASSPHRASE_LEN]; 149 146 struct cxl_mbox_cmd mbox_cmd; 150 147 int rc; ··· 156 153 .payload_in = pass, 157 154 }; 158 155 159 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 156 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 160 157 if (rc < 0) 161 158 return rc; 162 159 ··· 169 166 { 170 167 struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); 171 168 struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; 172 - struct cxl_dev_state *cxlds = cxlmd->cxlds; 169 + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 173 170 struct cxl_mbox_cmd mbox_cmd; 174 171 struct cxl_pass_erase erase; 175 172 int rc; ··· 185 182 .payload_in = &erase, 186 183 }; 187 184 188 - rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); 185 + rc = cxl_internal_send_cmd(mds, &mbox_cmd); 189 186 if (rc < 0) 
190 187 return rc; 191 188
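The pattern above repeats across every mailbox user in this pull: call sites trade the generic struct cxl_dev_state for the struct cxl_memdev_state wrapper that owns the mailbox machinery. A minimal sketch of the accessor this implies, assuming the wrapper embeds the generic state as a member named cxlds (the mem.c test changes further down construct it exactly that way via cxlds = &mds->cxlds):

	/* Sketch only: recover the memdev wrapper from its embedded cxl_dev_state. */
	static inline struct cxl_memdev_state *
	to_cxl_memdev_state(struct cxl_dev_state *cxlds)
	{
		/*
		 * Only valid when cxlds is known to be embedded in a
		 * cxl_memdev_state; a type-2 accelerator may carry a bare
		 * cxl_dev_state with no surrounding wrapper.
		 */
		return container_of(cxlds, struct cxl_memdev_state, cxlds);
	}

This split is what keeps memdev-only state (security, sanitize, firmware update) out of the device context that type-2 accelerator drivers are expected to share.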
+13
drivers/perf/Kconfig
··· 221 221 222 222 source "drivers/perf/amlogic/Kconfig" 223 223 224 + config CXL_PMU 225 + tristate "CXL Performance Monitoring Unit" 226 + depends on CXL_BUS 227 + help 228 + Support performance monitoring as defined in CXL rev 3.0 229 + section 13.2: Performance Monitoring. CXL components may have 230 + one or more CXL Performance Monitoring Units (CPMUs). 231 + 232 + Say 'y/m' to enable a driver that will attach to performance 233 + monitoring units and provide standard perf based interfaces. 234 + 235 + If unsure say 'm'. 236 + 224 237 endmenu
+1
drivers/perf/Makefile
··· 25 25 obj-$(CONFIG_ALIBABA_UNCORE_DRW_PMU) += alibaba_uncore_drw_pmu.o 26 26 obj-$(CONFIG_ARM_CORESIGHT_PMU_ARCH_SYSTEM_PMU) += arm_cspmu/ 27 27 obj-$(CONFIG_MESON_DDR_PMU) += amlogic/ 28 + obj-$(CONFIG_CXL_PMU) += cxl_pmu.o
+990
drivers/perf/cxl_pmu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + 3 + /* 4 + * Copyright(c) 2023 Huawei 5 + * 6 + * The CXL 3.0 specification includes a standard Performance Monitoring Unit, 7 + * called the CXL PMU, or CPMU. In order to allow a high degree of 8 + * implementation flexibility the specification provides a wide range of 9 + * options all of which are self describing. 10 + * 11 + * Details in CXL rev 3.0 section 8.2.7 CPMU Register Interface 12 + */ 13 + 14 + #include <linux/io-64-nonatomic-lo-hi.h> 15 + #include <linux/perf_event.h> 16 + #include <linux/bitops.h> 17 + #include <linux/device.h> 18 + #include <linux/bits.h> 19 + #include <linux/list.h> 20 + #include <linux/bug.h> 21 + #include <linux/pci.h> 22 + 23 + #include "../cxl/cxlpci.h" 24 + #include "../cxl/cxl.h" 25 + #include "../cxl/pmu.h" 26 + 27 + #define CXL_PMU_CAP_REG 0x0 28 + #define CXL_PMU_CAP_NUM_COUNTERS_MSK GENMASK_ULL(4, 0) 29 + #define CXL_PMU_CAP_COUNTER_WIDTH_MSK GENMASK_ULL(15, 8) 30 + #define CXL_PMU_CAP_NUM_EVN_CAP_REG_SUP_MSK GENMASK_ULL(24, 20) 31 + #define CXL_PMU_CAP_FILTERS_SUP_MSK GENMASK_ULL(39, 32) 32 + #define CXL_PMU_FILTER_HDM BIT(0) 33 + #define CXL_PMU_FILTER_CHAN_RANK_BANK BIT(1) 34 + #define CXL_PMU_CAP_MSI_N_MSK GENMASK_ULL(47, 44) 35 + #define CXL_PMU_CAP_WRITEABLE_WHEN_FROZEN BIT_ULL(48) 36 + #define CXL_PMU_CAP_FREEZE BIT_ULL(49) 37 + #define CXL_PMU_CAP_INT BIT_ULL(50) 38 + #define CXL_PMU_CAP_VERSION_MSK GENMASK_ULL(63, 60) 39 + 40 + #define CXL_PMU_OVERFLOW_REG 0x10 41 + #define CXL_PMU_FREEZE_REG 0x18 42 + #define CXL_PMU_EVENT_CAP_REG(n) (0x100 + 8 * (n)) 43 + #define CXL_PMU_EVENT_CAP_SUPPORTED_EVENTS_MSK GENMASK_ULL(31, 0) 44 + #define CXL_PMU_EVENT_CAP_GROUP_ID_MSK GENMASK_ULL(47, 32) 45 + #define CXL_PMU_EVENT_CAP_VENDOR_ID_MSK GENMASK_ULL(63, 48) 46 + 47 + #define CXL_PMU_COUNTER_CFG_REG(n) (0x200 + 8 * (n)) 48 + #define CXL_PMU_COUNTER_CFG_TYPE_MSK GENMASK_ULL(1, 0) 49 + #define CXL_PMU_COUNTER_CFG_TYPE_FREE_RUN 0 50 + #define CXL_PMU_COUNTER_CFG_TYPE_FIXED_FUN 1 51 + #define CXL_PMU_COUNTER_CFG_TYPE_CONFIGURABLE 2 52 + #define CXL_PMU_COUNTER_CFG_ENABLE BIT_ULL(8) 53 + #define CXL_PMU_COUNTER_CFG_INT_ON_OVRFLW BIT_ULL(9) 54 + #define CXL_PMU_COUNTER_CFG_FREEZE_ON_OVRFLW BIT_ULL(10) 55 + #define CXL_PMU_COUNTER_CFG_EDGE BIT_ULL(11) 56 + #define CXL_PMU_COUNTER_CFG_INVERT BIT_ULL(12) 57 + #define CXL_PMU_COUNTER_CFG_THRESHOLD_MSK GENMASK_ULL(23, 16) 58 + #define CXL_PMU_COUNTER_CFG_EVENTS_MSK GENMASK_ULL(55, 24) 59 + #define CXL_PMU_COUNTER_CFG_EVENT_GRP_ID_IDX_MSK GENMASK_ULL(63, 59) 60 + 61 + #define CXL_PMU_FILTER_CFG_REG(n, f) (0x400 + 4 * ((f) + (n) * 8)) 62 + #define CXL_PMU_FILTER_CFG_VALUE_MSK GENMASK(15, 0) 63 + 64 + #define CXL_PMU_COUNTER_REG(n) (0xc00 + 8 * (n)) 65 + 66 + /* CXL rev 3.0 Table 13-5 Events under CXL Vendor ID */ 67 + #define CXL_PMU_GID_CLOCK_TICKS 0x00 68 + #define CXL_PMU_GID_D2H_REQ 0x0010 69 + #define CXL_PMU_GID_D2H_RSP 0x0011 70 + #define CXL_PMU_GID_H2D_REQ 0x0012 71 + #define CXL_PMU_GID_H2D_RSP 0x0013 72 + #define CXL_PMU_GID_CACHE_DATA 0x0014 73 + #define CXL_PMU_GID_M2S_REQ 0x0020 74 + #define CXL_PMU_GID_M2S_RWD 0x0021 75 + #define CXL_PMU_GID_M2S_BIRSP 0x0022 76 + #define CXL_PMU_GID_S2M_BISNP 0x0023 77 + #define CXL_PMU_GID_S2M_NDR 0x0024 78 + #define CXL_PMU_GID_S2M_DRS 0x0025 79 + #define CXL_PMU_GID_DDR 0x8000 80 + 81 + static int cxl_pmu_cpuhp_state_num; 82 + 83 + struct cxl_pmu_ev_cap { 84 + u16 vid; 85 + u16 gid; 86 + u32 msk; 87 + union { 88 + int counter_idx; /* fixed counters */ 89 + int event_idx; /* configurable counters */ 90 + }; 
91 + struct list_head node; 92 + }; 93 + 94 + #define CXL_PMU_MAX_COUNTERS 64 95 + struct cxl_pmu_info { 96 + struct pmu pmu; 97 + void __iomem *base; 98 + struct perf_event **hw_events; 99 + struct list_head event_caps_configurable; 100 + struct list_head event_caps_fixed; 101 + DECLARE_BITMAP(used_counter_bm, CXL_PMU_MAX_COUNTERS); 102 + DECLARE_BITMAP(conf_counter_bm, CXL_PMU_MAX_COUNTERS); 103 + u16 counter_width; 104 + u8 num_counters; 105 + u8 num_event_capabilities; 106 + int on_cpu; 107 + struct hlist_node node; 108 + bool filter_hdm; 109 + int irq; 110 + }; 111 + 112 + #define pmu_to_cxl_pmu_info(_pmu) container_of(_pmu, struct cxl_pmu_info, pmu) 113 + 114 + /* 115 + * All CPMU counters are discoverable via the Event Capabilities Registers. 116 + * Each Event Capability register contains a VID / GroupID. 117 + * A counter may then count any combination (by summing) of events in 118 + * that group which are in the Supported Events Bitmask. 119 + * However, there are some complexities to the scheme. 120 + * - Fixed function counters refer to an Event Capabilities register. 121 + * That event capability register is not then used for Configurable 122 + * counters. 123 + */ 124 + static int cxl_pmu_parse_caps(struct device *dev, struct cxl_pmu_info *info) 125 + { 126 + unsigned long fixed_counter_event_cap_bm = 0; 127 + void __iomem *base = info->base; 128 + bool freeze_for_enable; 129 + u64 val, eval; 130 + int i; 131 + 132 + val = readq(base + CXL_PMU_CAP_REG); 133 + freeze_for_enable = FIELD_GET(CXL_PMU_CAP_WRITEABLE_WHEN_FROZEN, val) && 134 + FIELD_GET(CXL_PMU_CAP_FREEZE, val); 135 + if (!freeze_for_enable) { 136 + dev_err(dev, "Counters not writable while frozen\n"); 137 + return -ENODEV; 138 + } 139 + 140 + info->num_counters = FIELD_GET(CXL_PMU_CAP_NUM_COUNTERS_MSK, val) + 1; 141 + info->counter_width = FIELD_GET(CXL_PMU_CAP_COUNTER_WIDTH_MSK, val); 142 + info->num_event_capabilities = FIELD_GET(CXL_PMU_CAP_NUM_EVN_CAP_REG_SUP_MSK, val) + 1; 143 + 144 + info->filter_hdm = FIELD_GET(CXL_PMU_CAP_FILTERS_SUP_MSK, val) & CXL_PMU_FILTER_HDM; 145 + if (FIELD_GET(CXL_PMU_CAP_INT, val)) 146 + info->irq = FIELD_GET(CXL_PMU_CAP_MSI_N_MSK, val); 147 + else 148 + info->irq = -1; 149 + 150 + /* First handle fixed function counters; note if configurable counters found */ 151 + for (i = 0; i < info->num_counters; i++) { 152 + struct cxl_pmu_ev_cap *pmu_ev; 153 + u32 events_msk; 154 + u8 group_idx; 155 + 156 + val = readq(base + CXL_PMU_COUNTER_CFG_REG(i)); 157 + 158 + if (FIELD_GET(CXL_PMU_COUNTER_CFG_TYPE_MSK, val) == 159 + CXL_PMU_COUNTER_CFG_TYPE_CONFIGURABLE) { 160 + set_bit(i, info->conf_counter_bm); 161 + } 162 + 163 + if (FIELD_GET(CXL_PMU_COUNTER_CFG_TYPE_MSK, val) != 164 + CXL_PMU_COUNTER_CFG_TYPE_FIXED_FUN) 165 + continue; 166 + 167 + /* In this case we know which fields are const */ 168 + group_idx = FIELD_GET(CXL_PMU_COUNTER_CFG_EVENT_GRP_ID_IDX_MSK, val); 169 + events_msk = FIELD_GET(CXL_PMU_COUNTER_CFG_EVENTS_MSK, val); 170 + eval = readq(base + CXL_PMU_EVENT_CAP_REG(group_idx)); 171 + pmu_ev = devm_kzalloc(dev, sizeof(*pmu_ev), GFP_KERNEL); 172 + if (!pmu_ev) 173 + return -ENOMEM; 174 + 175 + pmu_ev->vid = FIELD_GET(CXL_PMU_EVENT_CAP_VENDOR_ID_MSK, eval); 176 + pmu_ev->gid = FIELD_GET(CXL_PMU_EVENT_CAP_GROUP_ID_MSK, eval); 177 + /* For a fixed purpose counter use the events mask from the counter CFG */ 178 + pmu_ev->msk = events_msk; 179 + pmu_ev->counter_idx = i; 180 + /* This list add is never unwound as all entries deleted on remove */ 181 + list_add(&pmu_ev->node,
&info->event_caps_fixed); 182 + /* 183 + * Configurable counters must not use an Event Capability register that 184 + * is in use for a Fixed counter 185 + */ 186 + set_bit(group_idx, &fixed_counter_event_cap_bm); 187 + } 188 + 189 + if (!bitmap_empty(info->conf_counter_bm, CXL_PMU_MAX_COUNTERS)) { 190 + struct cxl_pmu_ev_cap *pmu_ev; 191 + int j; 192 + /* Walk event capabilities unused by fixed counters */ 193 + for_each_clear_bit(j, &fixed_counter_event_cap_bm, 194 + info->num_event_capabilities) { 195 + pmu_ev = devm_kzalloc(dev, sizeof(*pmu_ev), GFP_KERNEL); 196 + if (!pmu_ev) 197 + return -ENOMEM; 198 + 199 + eval = readq(base + CXL_PMU_EVENT_CAP_REG(j)); 200 + pmu_ev->vid = FIELD_GET(CXL_PMU_EVENT_CAP_VENDOR_ID_MSK, eval); 201 + pmu_ev->gid = FIELD_GET(CXL_PMU_EVENT_CAP_GROUP_ID_MSK, eval); 202 + pmu_ev->msk = FIELD_GET(CXL_PMU_EVENT_CAP_SUPPORTED_EVENTS_MSK, eval); 203 + pmu_ev->event_idx = j; 204 + list_add(&pmu_ev->node, &info->event_caps_configurable); 205 + } 206 + } 207 + 208 + return 0; 209 + } 210 + 211 + static ssize_t cxl_pmu_format_sysfs_show(struct device *dev, 212 + struct device_attribute *attr, char *buf) 213 + { 214 + struct dev_ext_attribute *eattr; 215 + 216 + eattr = container_of(attr, struct dev_ext_attribute, attr); 217 + 218 + return sysfs_emit(buf, "%s\n", (char *)eattr->var); 219 + } 220 + 221 + #define CXL_PMU_FORMAT_ATTR(_name, _format)\ 222 + (&((struct dev_ext_attribute[]) { \ 223 + { \ 224 + .attr = __ATTR(_name, 0444, \ 225 + cxl_pmu_format_sysfs_show, NULL), \ 226 + .var = (void *)_format \ 227 + } \ 228 + })[0].attr.attr) 229 + 230 + enum { 231 + cxl_pmu_mask_attr, 232 + cxl_pmu_gid_attr, 233 + cxl_pmu_vid_attr, 234 + cxl_pmu_threshold_attr, 235 + cxl_pmu_invert_attr, 236 + cxl_pmu_edge_attr, 237 + cxl_pmu_hdm_filter_en_attr, 238 + cxl_pmu_hdm_attr, 239 + }; 240 + 241 + static struct attribute *cxl_pmu_format_attr[] = { 242 + [cxl_pmu_mask_attr] = CXL_PMU_FORMAT_ATTR(mask, "config:0-31"), 243 + [cxl_pmu_gid_attr] = CXL_PMU_FORMAT_ATTR(gid, "config:32-47"), 244 + [cxl_pmu_vid_attr] = CXL_PMU_FORMAT_ATTR(vid, "config:48-63"), 245 + [cxl_pmu_threshold_attr] = CXL_PMU_FORMAT_ATTR(threshold, "config1:0-15"), 246 + [cxl_pmu_invert_attr] = CXL_PMU_FORMAT_ATTR(invert, "config1:16"), 247 + [cxl_pmu_edge_attr] = CXL_PMU_FORMAT_ATTR(edge, "config1:17"), 248 + [cxl_pmu_hdm_filter_en_attr] = CXL_PMU_FORMAT_ATTR(hdm_filter_en, "config1:18"), 249 + [cxl_pmu_hdm_attr] = CXL_PMU_FORMAT_ATTR(hdm, "config2:0-15"), 250 + NULL 251 + }; 252 + 253 + #define CXL_PMU_ATTR_CONFIG_MASK_MSK GENMASK_ULL(31, 0) 254 + #define CXL_PMU_ATTR_CONFIG_GID_MSK GENMASK_ULL(47, 32) 255 + #define CXL_PMU_ATTR_CONFIG_VID_MSK GENMASK_ULL(63, 48) 256 + #define CXL_PMU_ATTR_CONFIG1_THRESHOLD_MSK GENMASK_ULL(15, 0) 257 + #define CXL_PMU_ATTR_CONFIG1_INVERT_MSK BIT(16) 258 + #define CXL_PMU_ATTR_CONFIG1_EDGE_MSK BIT(17) 259 + #define CXL_PMU_ATTR_CONFIG1_FILTER_EN_MSK BIT(18) 260 + #define CXL_PMU_ATTR_CONFIG2_HDM_MSK GENMASK(15, 0) 261 + 262 + static umode_t cxl_pmu_format_is_visible(struct kobject *kobj, 263 + struct attribute *attr, int a) 264 + { 265 + struct device *dev = kobj_to_dev(kobj); 266 + struct cxl_pmu_info *info = dev_get_drvdata(dev); 267 + 268 + /* 269 + * Filter capability at the CPMU level, so hide the attributes if the particular 270 + * filter is not supported.
271 + */ 272 + if (!info->filter_hdm && 273 + (attr == cxl_pmu_format_attr[cxl_pmu_hdm_filter_en_attr] || 274 + attr == cxl_pmu_format_attr[cxl_pmu_hdm_attr])) 275 + return 0; 276 + 277 + return attr->mode; 278 + } 279 + 280 + static const struct attribute_group cxl_pmu_format_group = { 281 + .name = "format", 282 + .attrs = cxl_pmu_format_attr, 283 + .is_visible = cxl_pmu_format_is_visible, 284 + }; 285 + 286 + static u32 cxl_pmu_config_get_mask(struct perf_event *event) 287 + { 288 + return FIELD_GET(CXL_PMU_ATTR_CONFIG_MASK_MSK, event->attr.config); 289 + } 290 + 291 + static u16 cxl_pmu_config_get_gid(struct perf_event *event) 292 + { 293 + return FIELD_GET(CXL_PMU_ATTR_CONFIG_GID_MSK, event->attr.config); 294 + } 295 + 296 + static u16 cxl_pmu_config_get_vid(struct perf_event *event) 297 + { 298 + return FIELD_GET(CXL_PMU_ATTR_CONFIG_VID_MSK, event->attr.config); 299 + } 300 + 301 + static u8 cxl_pmu_config1_get_threshold(struct perf_event *event) 302 + { 303 + return FIELD_GET(CXL_PMU_ATTR_CONFIG1_THRESHOLD_MSK, event->attr.config1); 304 + } 305 + 306 + static bool cxl_pmu_config1_get_invert(struct perf_event *event) 307 + { 308 + return FIELD_GET(CXL_PMU_ATTR_CONFIG1_INVERT_MSK, event->attr.config1); 309 + } 310 + 311 + static bool cxl_pmu_config1_get_edge(struct perf_event *event) 312 + { 313 + return FIELD_GET(CXL_PMU_ATTR_CONFIG1_EDGE_MSK, event->attr.config1); 314 + } 315 + 316 + /* 317 + * CPMU specification allows for 8 filters, each with a 16 bit value... 318 + * So we need to find 8x16bits to store it in. 319 + * As the value used for disable is 0xffff, a separate enable switch 320 + * is needed. 321 + */ 322 + 323 + static bool cxl_pmu_config1_hdm_filter_en(struct perf_event *event) 324 + { 325 + return FIELD_GET(CXL_PMU_ATTR_CONFIG1_FILTER_EN_MSK, event->attr.config1); 326 + } 327 + 328 + static u16 cxl_pmu_config2_get_hdm_decoder(struct perf_event *event) 329 + { 330 + return FIELD_GET(CXL_PMU_ATTR_CONFIG2_HDM_MSK, event->attr.config2); 331 + } 332 + 333 + static ssize_t cxl_pmu_event_sysfs_show(struct device *dev, 334 + struct device_attribute *attr, char *buf) 335 + { 336 + struct perf_pmu_events_attr *pmu_attr = 337 + container_of(attr, struct perf_pmu_events_attr, attr); 338 + 339 + return sysfs_emit(buf, "config=%#llx\n", pmu_attr->id); 340 + } 341 + 342 + #define CXL_PMU_EVENT_ATTR(_name, _vid, _gid, _msk) \ 343 + PMU_EVENT_ATTR_ID(_name, cxl_pmu_event_sysfs_show, \ 344 + ((u64)(_vid) << 48) | ((u64)(_gid) << 32) | (u64)(_msk)) 345 + 346 + /* For CXL spec defined events */ 347 + #define CXL_PMU_EVENT_CXL_ATTR(_name, _gid, _msk) \ 348 + CXL_PMU_EVENT_ATTR(_name, PCI_DVSEC_VENDOR_ID_CXL, _gid, _msk) 349 + 350 + static struct attribute *cxl_pmu_event_attrs[] = { 351 + CXL_PMU_EVENT_CXL_ATTR(clock_ticks, CXL_PMU_GID_CLOCK_TICKS, BIT(0)), 352 + /* CXL rev 3.0 Table 3-17 - Device to Host Requests */ 353 + CXL_PMU_EVENT_CXL_ATTR(d2h_req_rdcurr, CXL_PMU_GID_D2H_REQ, BIT(1)), 354 + CXL_PMU_EVENT_CXL_ATTR(d2h_req_rdown, CXL_PMU_GID_D2H_REQ, BIT(2)), 355 + CXL_PMU_EVENT_CXL_ATTR(d2h_req_rdshared, CXL_PMU_GID_D2H_REQ, BIT(3)), 356 + CXL_PMU_EVENT_CXL_ATTR(d2h_req_rdany, CXL_PMU_GID_D2H_REQ, BIT(4)), 357 + CXL_PMU_EVENT_CXL_ATTR(d2h_req_rdownnodata, CXL_PMU_GID_D2H_REQ, BIT(5)), 358 + CXL_PMU_EVENT_CXL_ATTR(d2h_req_itomwr, CXL_PMU_GID_D2H_REQ, BIT(6)), 359 + CXL_PMU_EVENT_CXL_ATTR(d2h_req_wrcurr, CXL_PMU_GID_D2H_REQ, BIT(7)), 360 + CXL_PMU_EVENT_CXL_ATTR(d2h_req_clflush, CXL_PMU_GID_D2H_REQ, BIT(8)), 361 + CXL_PMU_EVENT_CXL_ATTR(d2h_req_cleanevict, CXL_PMU_GID_D2H_REQ, BIT(9)), 
362 + CXL_PMU_EVENT_CXL_ATTR(d2h_req_dirtyevict, CXL_PMU_GID_D2H_REQ, BIT(10)), 363 + CXL_PMU_EVENT_CXL_ATTR(d2h_req_cleanevictnodata, CXL_PMU_GID_D2H_REQ, BIT(11)), 364 + CXL_PMU_EVENT_CXL_ATTR(d2h_req_wowrinv, CXL_PMU_GID_D2H_REQ, BIT(12)), 365 + CXL_PMU_EVENT_CXL_ATTR(d2h_req_wowrinvf, CXL_PMU_GID_D2H_REQ, BIT(13)), 366 + CXL_PMU_EVENT_CXL_ATTR(d2h_req_wrinv, CXL_PMU_GID_D2H_REQ, BIT(14)), 367 + CXL_PMU_EVENT_CXL_ATTR(d2h_req_cacheflushed, CXL_PMU_GID_D2H_REQ, BIT(16)), 368 + /* CXL rev 3.0 Table 3-20 - D2H Response Encodings */ 369 + CXL_PMU_EVENT_CXL_ATTR(d2h_rsp_rspihiti, CXL_PMU_GID_D2H_RSP, BIT(4)), 370 + CXL_PMU_EVENT_CXL_ATTR(d2h_rsp_rspvhitv, CXL_PMU_GID_D2H_RSP, BIT(6)), 371 + CXL_PMU_EVENT_CXL_ATTR(d2h_rsp_rspihitse, CXL_PMU_GID_D2H_RSP, BIT(5)), 372 + CXL_PMU_EVENT_CXL_ATTR(d2h_rsp_rspshitse, CXL_PMU_GID_D2H_RSP, BIT(1)), 373 + CXL_PMU_EVENT_CXL_ATTR(d2h_rsp_rspsfwdm, CXL_PMU_GID_D2H_RSP, BIT(7)), 374 + CXL_PMU_EVENT_CXL_ATTR(d2h_rsp_rspifwdm, CXL_PMU_GID_D2H_RSP, BIT(15)), 375 + CXL_PMU_EVENT_CXL_ATTR(d2h_rsp_rspvfwdv, CXL_PMU_GID_D2H_RSP, BIT(22)), 376 + /* CXL rev 3.0 Table 3-21 - CXL.cache - Mapping of H2D Requests to D2H Responses */ 377 + CXL_PMU_EVENT_CXL_ATTR(h2d_req_snpdata, CXL_PMU_GID_H2D_REQ, BIT(1)), 378 + CXL_PMU_EVENT_CXL_ATTR(h2d_req_snpinv, CXL_PMU_GID_H2D_REQ, BIT(2)), 379 + CXL_PMU_EVENT_CXL_ATTR(h2d_req_snpcur, CXL_PMU_GID_H2D_REQ, BIT(3)), 380 + /* CXL rev 3.0 Table 3-22 - H2D Response Opcode Encodings */ 381 + CXL_PMU_EVENT_CXL_ATTR(h2d_rsp_writepull, CXL_PMU_GID_H2D_RSP, BIT(1)), 382 + CXL_PMU_EVENT_CXL_ATTR(h2d_rsp_go, CXL_PMU_GID_H2D_RSP, BIT(4)), 383 + CXL_PMU_EVENT_CXL_ATTR(h2d_rsp_gowritepull, CXL_PMU_GID_H2D_RSP, BIT(5)), 384 + CXL_PMU_EVENT_CXL_ATTR(h2d_rsp_extcmp, CXL_PMU_GID_H2D_RSP, BIT(6)), 385 + CXL_PMU_EVENT_CXL_ATTR(h2d_rsp_gowritepulldrop, CXL_PMU_GID_H2D_RSP, BIT(8)), 386 + CXL_PMU_EVENT_CXL_ATTR(h2d_rsp_fastgowritepull, CXL_PMU_GID_H2D_RSP, BIT(13)), 387 + CXL_PMU_EVENT_CXL_ATTR(h2d_rsp_goerrwritepull, CXL_PMU_GID_H2D_RSP, BIT(15)), 388 + /* CXL rev 3.0 Table 13-5 directly lists these */ 389 + CXL_PMU_EVENT_CXL_ATTR(cachedata_d2h_data, CXL_PMU_GID_CACHE_DATA, BIT(0)), 390 + CXL_PMU_EVENT_CXL_ATTR(cachedata_h2d_data, CXL_PMU_GID_CACHE_DATA, BIT(1)), 391 + /* CXL rev 3.0 Table 3-29 M2S Req Memory Opcodes */ 392 + CXL_PMU_EVENT_CXL_ATTR(m2s_req_meminv, CXL_PMU_GID_M2S_REQ, BIT(0)), 393 + CXL_PMU_EVENT_CXL_ATTR(m2s_req_memrd, CXL_PMU_GID_M2S_REQ, BIT(1)), 394 + CXL_PMU_EVENT_CXL_ATTR(m2s_req_memrddata, CXL_PMU_GID_M2S_REQ, BIT(2)), 395 + CXL_PMU_EVENT_CXL_ATTR(m2s_req_memrdfwd, CXL_PMU_GID_M2S_REQ, BIT(3)), 396 + CXL_PMU_EVENT_CXL_ATTR(m2s_req_memwrfwd, CXL_PMU_GID_M2S_REQ, BIT(4)), 397 + CXL_PMU_EVENT_CXL_ATTR(m2s_req_memspecrd, CXL_PMU_GID_M2S_REQ, BIT(8)), 398 + CXL_PMU_EVENT_CXL_ATTR(m2s_req_meminvnt, CXL_PMU_GID_M2S_REQ, BIT(9)), 399 + CXL_PMU_EVENT_CXL_ATTR(m2s_req_memcleanevict, CXL_PMU_GID_M2S_REQ, BIT(10)), 400 + /* CXL rev 3.0 Table 3-35 M2S RwD Memory Opcodes */ 401 + CXL_PMU_EVENT_CXL_ATTR(m2s_rwd_memwr, CXL_PMU_GID_M2S_RWD, BIT(1)), 402 + CXL_PMU_EVENT_CXL_ATTR(m2s_rwd_memwrptl, CXL_PMU_GID_M2S_RWD, BIT(2)), 403 + CXL_PMU_EVENT_CXL_ATTR(m2s_rwd_biconflict, CXL_PMU_GID_M2S_RWD, BIT(4)), 404 + /* CXL rev 3.0 Table 3-38 M2S BIRsp Memory Opcodes */ 405 + CXL_PMU_EVENT_CXL_ATTR(m2s_birsp_i, CXL_PMU_GID_M2S_BIRSP, BIT(0)), 406 + CXL_PMU_EVENT_CXL_ATTR(m2s_birsp_s, CXL_PMU_GID_M2S_BIRSP, BIT(1)), 407 + CXL_PMU_EVENT_CXL_ATTR(m2s_birsp_e, CXL_PMU_GID_M2S_BIRSP, BIT(2)), 408 + CXL_PMU_EVENT_CXL_ATTR(m2s_birsp_iblk,
CXL_PMU_GID_M2S_BIRSP, BIT(4)), 409 + CXL_PMU_EVENT_CXL_ATTR(m2s_birsp_sblk, CXL_PMU_GID_M2S_BIRSP, BIT(5)), 410 + CXL_PMU_EVENT_CXL_ATTR(m2s_birsp_eblk, CXL_PMU_GID_M2S_BIRSP, BIT(6)), 411 + /* CXL rev 3.0 Table 3-40 S2M BISnp Opcodes */ 412 + CXL_PMU_EVENT_CXL_ATTR(s2m_bisnp_cur, CXL_PMU_GID_S2M_BISNP, BIT(0)), 413 + CXL_PMU_EVENT_CXL_ATTR(s2m_bisnp_data, CXL_PMU_GID_S2M_BISNP, BIT(1)), 414 + CXL_PMU_EVENT_CXL_ATTR(s2m_bisnp_inv, CXL_PMU_GID_S2M_BISNP, BIT(2)), 415 + CXL_PMU_EVENT_CXL_ATTR(s2m_bisnp_curblk, CXL_PMU_GID_S2M_BISNP, BIT(4)), 416 + CXL_PMU_EVENT_CXL_ATTR(s2m_bisnp_datblk, CXL_PMU_GID_S2M_BISNP, BIT(5)), 417 + CXL_PMU_EVENT_CXL_ATTR(s2m_bisnp_invblk, CXL_PMU_GID_S2M_BISNP, BIT(6)), 418 + /* CXL rev 3.0 Table 3-43 S2M NDR Opcodes */ 419 + CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_cmp, CXL_PMU_GID_S2M_NDR, BIT(0)), 420 + CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_cmps, CXL_PMU_GID_S2M_NDR, BIT(1)), 421 + CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_cmpe, CXL_PMU_GID_S2M_NDR, BIT(2)), 422 + CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_biconflictack, CXL_PMU_GID_S2M_NDR, BIT(3)), 423 + /* CXL rev 3.0 Table 3-46 S2M DRS opcodes */ 424 + CXL_PMU_EVENT_CXL_ATTR(s2m_drs_memdata, CXL_PMU_GID_S2M_DRS, BIT(0)), 425 + CXL_PMU_EVENT_CXL_ATTR(s2m_drs_memdatanxm, CXL_PMU_GID_S2M_DRS, BIT(1)), 426 + /* CXL rev 3.0 Table 13-5 directly lists these */ 427 + CXL_PMU_EVENT_CXL_ATTR(ddr_act, CXL_PMU_GID_DDR, BIT(0)), 428 + CXL_PMU_EVENT_CXL_ATTR(ddr_pre, CXL_PMU_GID_DDR, BIT(1)), 429 + CXL_PMU_EVENT_CXL_ATTR(ddr_casrd, CXL_PMU_GID_DDR, BIT(2)), 430 + CXL_PMU_EVENT_CXL_ATTR(ddr_caswr, CXL_PMU_GID_DDR, BIT(3)), 431 + CXL_PMU_EVENT_CXL_ATTR(ddr_refresh, CXL_PMU_GID_DDR, BIT(4)), 432 + CXL_PMU_EVENT_CXL_ATTR(ddr_selfrefreshent, CXL_PMU_GID_DDR, BIT(5)), 433 + CXL_PMU_EVENT_CXL_ATTR(ddr_rfm, CXL_PMU_GID_DDR, BIT(6)), 434 + NULL 435 + }; 436 + 437 + static struct cxl_pmu_ev_cap *cxl_pmu_find_fixed_counter_ev_cap(struct cxl_pmu_info *info, 438 + int vid, int gid, int msk) 439 + { 440 + struct cxl_pmu_ev_cap *pmu_ev; 441 + 442 + list_for_each_entry(pmu_ev, &info->event_caps_fixed, node) { 443 + if (vid != pmu_ev->vid || gid != pmu_ev->gid) 444 + continue; 445 + 446 + /* Precise match for fixed counter */ 447 + if (msk == pmu_ev->msk) 448 + return pmu_ev; 449 + } 450 + 451 + return ERR_PTR(-EINVAL); 452 + } 453 + 454 + static struct cxl_pmu_ev_cap *cxl_pmu_find_config_counter_ev_cap(struct cxl_pmu_info *info, 455 + int vid, int gid, int msk) 456 + { 457 + struct cxl_pmu_ev_cap *pmu_ev; 458 + 459 + list_for_each_entry(pmu_ev, &info->event_caps_configurable, node) { 460 + if (vid != pmu_ev->vid || gid != pmu_ev->gid) 461 + continue; 462 + 463 + /* Request mask must be subset of supported */ 464 + if (msk & ~pmu_ev->msk) 465 + continue; 466 + 467 + return pmu_ev; 468 + } 469 + 470 + return ERR_PTR(-EINVAL); 471 + } 472 + 473 + static umode_t cxl_pmu_event_is_visible(struct kobject *kobj, struct attribute *attr, int a) 474 + { 475 + struct device_attribute *dev_attr = container_of(attr, struct device_attribute, attr); 476 + struct perf_pmu_events_attr *pmu_attr = 477 + container_of(dev_attr, struct perf_pmu_events_attr, attr); 478 + struct device *dev = kobj_to_dev(kobj); 479 + struct cxl_pmu_info *info = dev_get_drvdata(dev); 480 + int vid = FIELD_GET(CXL_PMU_ATTR_CONFIG_VID_MSK, pmu_attr->id); 481 + int gid = FIELD_GET(CXL_PMU_ATTR_CONFIG_GID_MSK, pmu_attr->id); 482 + int msk = FIELD_GET(CXL_PMU_ATTR_CONFIG_MASK_MSK, pmu_attr->id); 483 + 484 + if (!IS_ERR(cxl_pmu_find_fixed_counter_ev_cap(info, vid, gid, msk))) 485 + return attr->mode; 486 + 487 + if
(!IS_ERR(cxl_pmu_find_config_counter_ev_cap(info, vid, gid, msk))) 488 + return attr->mode; 489 + 490 + return 0; 491 + } 492 + 493 + static const struct attribute_group cxl_pmu_events = { 494 + .name = "events", 495 + .attrs = cxl_pmu_event_attrs, 496 + .is_visible = cxl_pmu_event_is_visible, 497 + }; 498 + 499 + static ssize_t cpumask_show(struct device *dev, struct device_attribute *attr, 500 + char *buf) 501 + { 502 + struct cxl_pmu_info *info = dev_get_drvdata(dev); 503 + 504 + return cpumap_print_to_pagebuf(true, buf, cpumask_of(info->on_cpu)); 505 + } 506 + static DEVICE_ATTR_RO(cpumask); 507 + 508 + static struct attribute *cxl_pmu_cpumask_attrs[] = { 509 + &dev_attr_cpumask.attr, 510 + NULL 511 + }; 512 + 513 + static const struct attribute_group cxl_pmu_cpumask_group = { 514 + .attrs = cxl_pmu_cpumask_attrs, 515 + }; 516 + 517 + static const struct attribute_group *cxl_pmu_attr_groups[] = { 518 + &cxl_pmu_events, 519 + &cxl_pmu_format_group, 520 + &cxl_pmu_cpumask_group, 521 + NULL 522 + }; 523 + 524 + /* If counter_idx == NULL, don't try to allocate a counter. */ 525 + static int cxl_pmu_get_event_idx(struct perf_event *event, int *counter_idx, 526 + int *event_idx) 527 + { 528 + struct cxl_pmu_info *info = pmu_to_cxl_pmu_info(event->pmu); 529 + DECLARE_BITMAP(configurable_and_free, CXL_PMU_MAX_COUNTERS); 530 + struct cxl_pmu_ev_cap *pmu_ev; 531 + u32 mask; 532 + u16 gid, vid; 533 + int i; 534 + 535 + vid = cxl_pmu_config_get_vid(event); 536 + gid = cxl_pmu_config_get_gid(event); 537 + mask = cxl_pmu_config_get_mask(event); 538 + 539 + pmu_ev = cxl_pmu_find_fixed_counter_ev_cap(info, vid, gid, mask); 540 + if (!IS_ERR(pmu_ev)) { 541 + if (!counter_idx) 542 + return 0; 543 + if (!test_bit(pmu_ev->counter_idx, info->used_counter_bm)) { 544 + *counter_idx = pmu_ev->counter_idx; 545 + return 0; 546 + } 547 + /* Fixed counter is in use, but maybe a configurable one? */ 548 + } 549 + 550 + pmu_ev = cxl_pmu_find_config_counter_ev_cap(info, vid, gid, mask); 551 + if (!IS_ERR(pmu_ev)) { 552 + if (!counter_idx) 553 + return 0; 554 + 555 + bitmap_andnot(configurable_and_free, info->conf_counter_bm, 556 + info->used_counter_bm, CXL_PMU_MAX_COUNTERS); 557 + 558 + i = find_first_bit(configurable_and_free, CXL_PMU_MAX_COUNTERS); 559 + if (i == CXL_PMU_MAX_COUNTERS) 560 + return -EINVAL; 561 + 562 + *counter_idx = i; 563 + return 0; 564 + } 565 + 566 + return -EINVAL; 567 + } 568 + 569 + static int cxl_pmu_event_init(struct perf_event *event) 570 + { 571 + struct cxl_pmu_info *info = pmu_to_cxl_pmu_info(event->pmu); 572 + int rc; 573 + 574 + /* Top level type sanity check - is this a Hardware Event being requested */ 575 + if (event->attr.type != event->pmu->type) 576 + return -ENOENT; 577 + 578 + if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK) 579 + return -EOPNOTSUPP; 580 + /* TODO: Validation of any filter */ 581 + 582 + /* 583 + * Verify that it is possible to count what was requested. Either must 584 + * be a fixed counter that is a precise match or a configurable counter 585 + * where this is a subset. 
586 + */ 587 + rc = cxl_pmu_get_event_idx(event, NULL, NULL); 588 + if (rc < 0) 589 + return rc; 590 + 591 + event->cpu = info->on_cpu; 592 + 593 + return 0; 594 + } 595 + 596 + static void cxl_pmu_enable(struct pmu *pmu) 597 + { 598 + struct cxl_pmu_info *info = pmu_to_cxl_pmu_info(pmu); 599 + void __iomem *base = info->base; 600 + 601 + /* Can assume frozen at this stage */ 602 + writeq(0, base + CXL_PMU_FREEZE_REG); 603 + } 604 + 605 + static void cxl_pmu_disable(struct pmu *pmu) 606 + { 607 + struct cxl_pmu_info *info = pmu_to_cxl_pmu_info(pmu); 608 + void __iomem *base = info->base; 609 + 610 + /* 611 + * Whilst bits above number of counters are RsvdZ 612 + * they are unlikely to be repurposed given 613 + * number of counters is allowed to be 64 leaving 614 + * no reserved bits. Hence this is only slightly 615 + * naughty. 616 + */ 617 + writeq(GENMASK_ULL(63, 0), base + CXL_PMU_FREEZE_REG); 618 + } 619 + 620 + static void cxl_pmu_event_start(struct perf_event *event, int flags) 621 + { 622 + struct cxl_pmu_info *info = pmu_to_cxl_pmu_info(event->pmu); 623 + struct hw_perf_event *hwc = &event->hw; 624 + void __iomem *base = info->base; 625 + u64 cfg; 626 + 627 + /* 628 + * All paths to here should either set these flags directly or 629 + * call cxl_pmu_event_stop() which will ensure the correct state. 630 + */ 631 + if (WARN_ON_ONCE(!(hwc->state & PERF_HES_STOPPED))) 632 + return; 633 + 634 + WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE)); 635 + hwc->state = 0; 636 + 637 + /* 638 + * Currently only hdm filter control is implemented; this code will 639 + * want generalizing when more filters are added. 640 + */ 641 + if (info->filter_hdm) { 642 + if (cxl_pmu_config1_hdm_filter_en(event)) 643 + cfg = cxl_pmu_config2_get_hdm_decoder(event); 644 + else 645 + cfg = GENMASK(15, 0); /* No filtering if 0xFFFF_FFFF */ 646 + writeq(cfg, base + CXL_PMU_FILTER_CFG_REG(hwc->idx, 0)); 647 + } 648 + 649 + cfg = readq(base + CXL_PMU_COUNTER_CFG_REG(hwc->idx)); 650 + cfg |= FIELD_PREP(CXL_PMU_COUNTER_CFG_INT_ON_OVRFLW, 1); 651 + cfg |= FIELD_PREP(CXL_PMU_COUNTER_CFG_FREEZE_ON_OVRFLW, 1); 652 + cfg |= FIELD_PREP(CXL_PMU_COUNTER_CFG_ENABLE, 1); 653 + cfg |= FIELD_PREP(CXL_PMU_COUNTER_CFG_EDGE, 654 + cxl_pmu_config1_get_edge(event) ? 1 : 0); 655 + cfg |= FIELD_PREP(CXL_PMU_COUNTER_CFG_INVERT, 656 + cxl_pmu_config1_get_invert(event) ? 1 : 0); 657 + 658 + /* Fixed purpose counters have next two fields RO */ 659 + if (test_bit(hwc->idx, info->conf_counter_bm)) { 660 + cfg |= FIELD_PREP(CXL_PMU_COUNTER_CFG_EVENT_GRP_ID_IDX_MSK, 661 + hwc->event_base); 662 + cfg |= FIELD_PREP(CXL_PMU_COUNTER_CFG_EVENTS_MSK, 663 + cxl_pmu_config_get_mask(event)); 664 + } 665 + cfg &= ~CXL_PMU_COUNTER_CFG_THRESHOLD_MSK; 666 + /* 667 + * For events that generate only 1 count per clock the CXL 3.0 spec 668 + * states the threshold shall be set to 1 but if set to 0 it will 669 + * count the raw value anyway? 670 + * There is no definition of what events will count multiple per cycle 671 + * and hence to which non-1 values of threshold can apply.
672 + * (CXL 3.0 8.2.7.2.1 Counter Configuration - threshold field definition) 673 + */ 674 + cfg |= FIELD_PREP(CXL_PMU_COUNTER_CFG_THRESHOLD_MSK, 675 + cxl_pmu_config1_get_threshold(event)); 676 + writeq(cfg, base + CXL_PMU_COUNTER_CFG_REG(hwc->idx)); 677 + 678 + local64_set(&hwc->prev_count, 0); 679 + writeq(0, base + CXL_PMU_COUNTER_REG(hwc->idx)); 680 + 681 + perf_event_update_userpage(event); 682 + } 683 + 684 + static u64 cxl_pmu_read_counter(struct perf_event *event) 685 + { 686 + struct cxl_pmu_info *info = pmu_to_cxl_pmu_info(event->pmu); 687 + void __iomem *base = info->base; 688 + 689 + return readq(base + CXL_PMU_COUNTER_REG(event->hw.idx)); 690 + } 691 + 692 + static void __cxl_pmu_read(struct perf_event *event, bool overflow) 693 + { 694 + struct cxl_pmu_info *info = pmu_to_cxl_pmu_info(event->pmu); 695 + struct hw_perf_event *hwc = &event->hw; 696 + u64 new_cnt, prev_cnt, delta; 697 + 698 + do { 699 + prev_cnt = local64_read(&hwc->prev_count); 700 + new_cnt = cxl_pmu_read_counter(event); 701 + } while (local64_cmpxchg(&hwc->prev_count, prev_cnt, new_cnt) != prev_cnt); 702 + 703 + /* 704 + * If we know an overflow occurred then take that into account. 705 + * Note counter is not reset as that would lose events 706 + */ 707 + delta = (new_cnt - prev_cnt) & GENMASK_ULL(info->counter_width - 1, 0); 708 + if (overflow && delta < GENMASK_ULL(info->counter_width - 1, 0)) 709 + delta += (1UL << info->counter_width); 710 + 711 + local64_add(delta, &event->count); 712 + } 713 + 714 + static void cxl_pmu_read(struct perf_event *event) 715 + { 716 + __cxl_pmu_read(event, false); 717 + } 718 + 719 + static void cxl_pmu_event_stop(struct perf_event *event, int flags) 720 + { 721 + struct cxl_pmu_info *info = pmu_to_cxl_pmu_info(event->pmu); 722 + void __iomem *base = info->base; 723 + struct hw_perf_event *hwc = &event->hw; 724 + u64 cfg; 725 + 726 + cxl_pmu_read(event); 727 + WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED); 728 + hwc->state |= PERF_HES_STOPPED; 729 + 730 + cfg = readq(base + CXL_PMU_COUNTER_CFG_REG(hwc->idx)); 731 + cfg &= ~(FIELD_PREP(CXL_PMU_COUNTER_CFG_INT_ON_OVRFLW, 1) | 732 + FIELD_PREP(CXL_PMU_COUNTER_CFG_ENABLE, 1)); 733 + writeq(cfg, base + CXL_PMU_COUNTER_CFG_REG(hwc->idx)); 734 + 735 + hwc->state |= PERF_HES_UPTODATE; 736 + } 737 + 738 + static int cxl_pmu_event_add(struct perf_event *event, int flags) 739 + { 740 + struct cxl_pmu_info *info = pmu_to_cxl_pmu_info(event->pmu); 741 + struct hw_perf_event *hwc = &event->hw; 742 + int idx, rc; 743 + int event_idx = 0; 744 + 745 + hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE; 746 + 747 + rc = cxl_pmu_get_event_idx(event, &idx, &event_idx); 748 + if (rc < 0) 749 + return rc; 750 + 751 + hwc->idx = idx; 752 + 753 + /* Only set for configurable counters */ 754 + hwc->event_base = event_idx; 755 + info->hw_events[idx] = event; 756 + set_bit(idx, info->used_counter_bm); 757 + 758 + if (flags & PERF_EF_START) 759 + cxl_pmu_event_start(event, PERF_EF_RELOAD); 760 + 761 + return 0; 762 + } 763 + 764 + static void cxl_pmu_event_del(struct perf_event *event, int flags) 765 + { 766 + struct cxl_pmu_info *info = pmu_to_cxl_pmu_info(event->pmu); 767 + struct hw_perf_event *hwc = &event->hw; 768 + 769 + cxl_pmu_event_stop(event, PERF_EF_UPDATE); 770 + clear_bit(hwc->idx, info->used_counter_bm); 771 + info->hw_events[hwc->idx] = NULL; 772 + perf_event_update_userpage(event); 773 + } 774 + 775 + static irqreturn_t cxl_pmu_irq(int irq, void *data) 776 + { 777 + struct cxl_pmu_info *info = data; 778 + void __iomem *base = info->base;
779 + u64 overflowed; 780 + DECLARE_BITMAP(overflowedbm, 64); 781 + int i; 782 + 783 + overflowed = readq(base + CXL_PMU_OVERFLOW_REG); 784 + 785 + /* Interrupt may be shared, so maybe it isn't ours */ 786 + if (!overflowed) 787 + return IRQ_NONE; 788 + 789 + bitmap_from_arr64(overflowedbm, &overflowed, 64); 790 + for_each_set_bit(i, overflowedbm, info->num_counters) { 791 + struct perf_event *event = info->hw_events[i]; 792 + 793 + if (!event) { 794 + dev_dbg(info->pmu.dev, 795 + "overflow but on non enabled counter %d\n", i); 796 + continue; 797 + } 798 + 799 + __cxl_pmu_read(event, true); 800 + } 801 + 802 + writeq(overflowed, base + CXL_PMU_OVERFLOW_REG); 803 + 804 + return IRQ_HANDLED; 805 + } 806 + 807 + static void cxl_pmu_perf_unregister(void *_info) 808 + { 809 + struct cxl_pmu_info *info = _info; 810 + 811 + perf_pmu_unregister(&info->pmu); 812 + } 813 + 814 + static void cxl_pmu_cpuhp_remove(void *_info) 815 + { 816 + struct cxl_pmu_info *info = _info; 817 + 818 + cpuhp_state_remove_instance_nocalls(cxl_pmu_cpuhp_state_num, &info->node); 819 + } 820 + 821 + static int cxl_pmu_probe(struct device *dev) 822 + { 823 + struct cxl_pmu *pmu = to_cxl_pmu(dev); 824 + struct pci_dev *pdev = to_pci_dev(dev->parent); 825 + struct cxl_pmu_info *info; 826 + char *irq_name; 827 + char *dev_name; 828 + int rc, irq; 829 + 830 + info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL); 831 + if (!info) 832 + return -ENOMEM; 833 + 834 + dev_set_drvdata(dev, info); 835 + INIT_LIST_HEAD(&info->event_caps_fixed); 836 + INIT_LIST_HEAD(&info->event_caps_configurable); 837 + 838 + info->base = pmu->base; 839 + 840 + info->on_cpu = -1; 841 + rc = cxl_pmu_parse_caps(dev, info); 842 + if (rc) 843 + return rc; 844 + 845 + info->hw_events = devm_kcalloc(dev, sizeof(*info->hw_events), 846 + info->num_counters, GFP_KERNEL); 847 + if (!info->hw_events) 848 + return -ENOMEM; 849 + 850 + switch (pmu->type) { 851 + case CXL_PMU_MEMDEV: 852 + dev_name = devm_kasprintf(dev, GFP_KERNEL, "cxl_pmu_mem%d.%d", 853 + pmu->assoc_id, pmu->index); 854 + break; 855 + } 856 + if (!dev_name) 857 + return -ENOMEM; 858 + 859 + info->pmu = (struct pmu) { 860 + .name = dev_name, 861 + .parent = dev, 862 + .module = THIS_MODULE, 863 + .event_init = cxl_pmu_event_init, 864 + .pmu_enable = cxl_pmu_enable, 865 + .pmu_disable = cxl_pmu_disable, 866 + .add = cxl_pmu_event_add, 867 + .del = cxl_pmu_event_del, 868 + .start = cxl_pmu_event_start, 869 + .stop = cxl_pmu_event_stop, 870 + .read = cxl_pmu_read, 871 + .task_ctx_nr = perf_invalid_context, 872 + .attr_groups = cxl_pmu_attr_groups, 873 + .capabilities = PERF_PMU_CAP_NO_EXCLUDE, 874 + }; 875 + 876 + if (info->irq <= 0) 877 + return -EINVAL; 878 + 879 + rc = pci_irq_vector(pdev, info->irq); 880 + if (rc < 0) 881 + return rc; 882 + irq = rc; 883 + 884 + irq_name = devm_kasprintf(dev, GFP_KERNEL, "%s_overflow", dev_name); 885 + if (!irq_name) 886 + return -ENOMEM; 887 + 888 + rc = devm_request_irq(dev, irq, cxl_pmu_irq, IRQF_SHARED | IRQF_ONESHOT, 889 + irq_name, info); 890 + if (rc) 891 + return rc; 892 + info->irq = irq; 893 + 894 + rc = cpuhp_state_add_instance(cxl_pmu_cpuhp_state_num, &info->node); 895 + if (rc) 896 + return rc; 897 + 898 + rc = devm_add_action_or_reset(dev, cxl_pmu_cpuhp_remove, info); 899 + if (rc) 900 + return rc; 901 + 902 + rc = perf_pmu_register(&info->pmu, info->pmu.name, -1); 903 + if (rc) 904 + return rc; 905 + 906 + rc = devm_add_action_or_reset(dev, cxl_pmu_perf_unregister, info); 907 + if (rc) 908 + return rc; 909 + 910 + return 0; 911 + } 912 + 913 +
static struct cxl_driver cxl_pmu_driver = { 914 + .name = "cxl_pmu", 915 + .probe = cxl_pmu_probe, 916 + .id = CXL_DEVICE_PMU, 917 + }; 918 + 919 + static int cxl_pmu_online_cpu(unsigned int cpu, struct hlist_node *node) 920 + { 921 + struct cxl_pmu_info *info = hlist_entry_safe(node, struct cxl_pmu_info, node); 922 + 923 + if (info->on_cpu != -1) 924 + return 0; 925 + 926 + info->on_cpu = cpu; 927 + /* 928 + * CPU HP lock is held so we should be guaranteed that the CPU hasn't yet 929 + * gone away again. 930 + */ 931 + WARN_ON(irq_set_affinity(info->irq, cpumask_of(cpu))); 932 + 933 + return 0; 934 + } 935 + 936 + static int cxl_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node) 937 + { 938 + struct cxl_pmu_info *info = hlist_entry_safe(node, struct cxl_pmu_info, node); 939 + unsigned int target; 940 + 941 + if (info->on_cpu != cpu) 942 + return 0; 943 + 944 + info->on_cpu = -1; 945 + target = cpumask_any_but(cpu_online_mask, cpu); 946 + if (target >= nr_cpu_ids) { 947 + dev_err(info->pmu.dev, "Unable to find a suitable CPU\n"); 948 + return 0; 949 + } 950 + 951 + perf_pmu_migrate_context(&info->pmu, cpu, target); 952 + info->on_cpu = target; 953 + /* 954 + * CPU HP lock is held so we should be guaranteed that this CPU hasn't yet 955 + * gone away. 956 + */ 957 + WARN_ON(irq_set_affinity(info->irq, cpumask_of(target))); 958 + 959 + return 0; 960 + } 961 + 962 + static __init int cxl_pmu_init(void) 963 + { 964 + int rc; 965 + 966 + rc = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, 967 + "AP_PERF_CXL_PMU_ONLINE", 968 + cxl_pmu_online_cpu, cxl_pmu_offline_cpu); 969 + if (rc < 0) 970 + return rc; 971 + cxl_pmu_cpuhp_state_num = rc; 972 + 973 + rc = cxl_driver_register(&cxl_pmu_driver); 974 + if (rc) 975 + cpuhp_remove_multi_state(cxl_pmu_cpuhp_state_num); 976 + 977 + return rc; 978 + } 979 + 980 + static __exit void cxl_pmu_exit(void) 981 + { 982 + cxl_driver_unregister(&cxl_pmu_driver); 983 + cpuhp_remove_multi_state(cxl_pmu_cpuhp_state_num); 984 + } 985 + 986 + MODULE_LICENSE("GPL"); 987 + MODULE_IMPORT_NS(CXL); 988 + module_init(cxl_pmu_init); 989 + module_exit(cxl_pmu_exit); 990 + MODULE_ALIAS_CXL(CXL_DEVICE_PMU);
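The "format" attributes above carry the core of the userspace contract: an event is selected by vendor ID in config[63:48], group ID in config[47:32] and a sub-event mask in config[31:0], with config1/config2 layering threshold, edge/invert and HDM-decoder filtering on top. As an illustrative sketch (not code from the patch), composing the spec-defined clock_ticks event with the driver's own field definitions:

	#include <linux/bitfield.h>	/* FIELD_PREP() */

	/* clock_ticks: VID = PCI_DVSEC_VENDOR_ID_CXL (0x1e98), GID 0x00, mask bit 0 */
	u64 config = FIELD_PREP(CXL_PMU_ATTR_CONFIG_VID_MSK, PCI_DVSEC_VENDOR_ID_CXL) |
		     FIELD_PREP(CXL_PMU_ATTR_CONFIG_GID_MSK, CXL_PMU_GID_CLOCK_TICKS) |
		     FIELD_PREP(CXL_PMU_ATTR_CONFIG_MASK_MSK, BIT(0));

The perf tool derives the same encoding from the sysfs event attributes, so given the cxl_pmu_mem%d.%d naming in cxl_pmu_probe() something like 'perf stat -e cxl_pmu_mem0.0/clock_ticks/' should resolve to this config value.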
+1
include/linux/perf_event.h
··· 305 305 306 306 struct module *module; 307 307 struct device *dev; 308 + struct device *parent; 308 309 const struct attribute_group **attr_groups; 309 310 const struct attribute_group **attr_update; 310 311 const char *name;
+20 -3
include/linux/rcuwait.h
··· 49 49 50 50 extern void finish_rcuwait(struct rcuwait *w); 51 51 52 - #define rcuwait_wait_event(w, condition, state) \ 52 + #define ___rcuwait_wait_event(w, condition, state, ret, cmd) \ 53 53 ({ \ 54 - int __ret = 0; \ 54 + long __ret = ret; \ 55 55 prepare_to_rcuwait(w); \ 56 56 for (;;) { \ 57 57 /* \ ··· 67 67 break; \ 68 68 } \ 69 69 \ 70 - schedule(); \ 70 + cmd; \ 71 71 } \ 72 72 finish_rcuwait(w); \ 73 + __ret; \ 74 + }) 75 + 76 + #define rcuwait_wait_event(w, condition, state) \ 77 + ___rcuwait_wait_event(w, condition, state, 0, schedule()) 78 + 79 + #define __rcuwait_wait_event_timeout(w, condition, state, timeout) \ 80 + ___rcuwait_wait_event(w, ___wait_cond_timeout(condition), \ 81 + state, timeout, \ 82 + __ret = schedule_timeout(__ret)) 83 + 84 + #define rcuwait_wait_event_timeout(w, condition, state, timeout) \ 85 + ({ \ 86 + long __ret = timeout; \ 87 + if (!___wait_cond_timeout(condition)) \ 88 + __ret = __rcuwait_wait_event_timeout(w, condition, \ 89 + state, timeout); \ 73 90 __ret; \ 74 91 }) 75 92
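The new rcuwait_wait_event_timeout() follows the wait_event_timeout() convention: 0 on timeout, otherwise the remaining jiffies (at least 1) once the condition has become true. A minimal usage sketch, where the rcuwait, the completion flag and the waker are illustrative names rather than identifiers from this series (the CXL mailbox background-command handling is presumably the intended consumer):

	#include <linux/rcuwait.h>
	#include <linux/jiffies.h>

	/* Elsewhere, an IRQ handler sets 'done' and calls rcuwait_wake_up(&w). */
	long left = rcuwait_wait_event_timeout(&w, READ_ONCE(done),
					       TASK_UNINTERRUPTIBLE,
					       msecs_to_jiffies(2000));
	if (!left)
		pr_warn("background operation timed out\n");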
+1
kernel/events/core.c
··· 11391 11391 11392 11392 dev_set_drvdata(pmu->dev, pmu); 11393 11393 pmu->dev->bus = &pmu_bus; 11394 + pmu->dev->parent = pmu->parent; 11394 11395 pmu->dev->release = pmu_dev_release; 11395 11396 11396 11397 ret = dev_set_name(pmu->dev, "%s", pmu->name);
+3 -2
tools/testing/cxl/Kbuild
··· 6 6 ldflags-y += --wrap=nvdimm_bus_register 7 7 ldflags-y += --wrap=devm_cxl_port_enumerate_dports 8 8 ldflags-y += --wrap=devm_cxl_setup_hdm 9 - ldflags-y += --wrap=devm_cxl_enable_hdm 10 9 ldflags-y += --wrap=devm_cxl_add_passthrough_decoder 11 10 ldflags-y += --wrap=devm_cxl_enumerate_decoders 12 11 ldflags-y += --wrap=cxl_await_media_ready 13 12 ldflags-y += --wrap=cxl_hdm_decode_init 14 13 ldflags-y += --wrap=cxl_dvsec_rr_decode 15 - ldflags-y += --wrap=cxl_rcrb_to_component 14 + ldflags-y += --wrap=devm_cxl_add_rch_dport 15 + ldflags-y += --wrap=cxl_rcd_component_reg_phys 16 16 17 17 DRIVERS := ../../../drivers 18 18 CXL_SRC := $(DRIVERS)/cxl ··· 57 57 cxl_core-y += $(CXL_CORE_SRC)/mbox.o 58 58 cxl_core-y += $(CXL_CORE_SRC)/pci.o 59 59 cxl_core-y += $(CXL_CORE_SRC)/hdm.o 60 + cxl_core-y += $(CXL_CORE_SRC)/pmu.o 60 61 cxl_core-$(CONFIG_TRACING) += $(CXL_CORE_SRC)/trace.o 61 62 cxl_core-$(CONFIG_CXL_REGION) += $(CXL_CORE_SRC)/region.o 62 63 cxl_core-y += config_check.o
+13 -23
tools/testing/cxl/test/cxl.c
··· 713 713 714 714 cxld->interleave_ways = 1; 715 715 cxld->interleave_granularity = 256; 716 - cxld->target_type = CXL_DECODER_EXPANDER; 716 + cxld->target_type = CXL_DECODER_HOSTONLYMEM; 717 717 cxld->commit = mock_decoder_commit; 718 718 cxld->reset = mock_decoder_reset; 719 719 } ··· 754 754 /* check if the endpoint is attached to host-bridge0 */ 755 755 port = cxled_to_port(cxled); 756 756 do { 757 - if (port->uport == &cxl_host_bridge[0]->dev) { 757 + if (port->uport_dev == &cxl_host_bridge[0]->dev) { 758 758 hb0 = true; 759 759 break; 760 760 } ··· 787 787 788 788 cxld->interleave_ways = 2; 789 789 eig_to_granularity(window->granularity, &cxld->interleave_granularity); 790 - cxld->target_type = CXL_DECODER_EXPANDER; 790 + cxld->target_type = CXL_DECODER_HOSTONLYMEM; 791 791 cxld->flags = CXL_DECODER_F_ENABLE; 792 792 cxled->state = CXL_DECODER_STATE_AUTO; 793 793 port->commit_end = cxld->id; ··· 820 820 } else 821 821 cxlsd->target[0] = dport; 822 822 cxld = &cxlsd->cxld; 823 - cxld->target_type = CXL_DECODER_EXPANDER; 823 + cxld->target_type = CXL_DECODER_HOSTONLYMEM; 824 824 cxld->flags = CXL_DECODER_F_ENABLE; 825 825 iter->commit_end = 0; 826 826 /* ··· 889 889 mock_init_hdm_decoder(cxld); 890 890 891 891 if (target_count) { 892 - rc = device_for_each_child(port->uport, &ctx, 892 + rc = device_for_each_child(port->uport_dev, &ctx, 893 893 map_targets); 894 894 if (rc) { 895 895 put_device(&cxld->dev); ··· 919 919 int i, array_size; 920 920 921 921 if (port->depth == 1) { 922 - if (is_multi_bridge(port->uport)) { 922 + if (is_multi_bridge(port->uport_dev)) { 923 923 array_size = ARRAY_SIZE(cxl_root_port); 924 924 array = cxl_root_port; 925 - } else if (is_single_bridge(port->uport)) { 925 + } else if (is_single_bridge(port->uport_dev)) { 926 926 array_size = ARRAY_SIZE(cxl_root_single); 927 927 array = cxl_root_single; 928 928 } else { 929 929 dev_dbg(&port->dev, "%s: unknown bridge type\n", 930 - dev_name(port->uport)); 930 + dev_name(port->uport_dev)); 931 931 return -ENXIO; 932 932 } 933 933 } else if (port->depth == 2) { 934 934 struct cxl_port *parent = to_cxl_port(port->dev.parent); 935 935 936 - if (is_multi_bridge(parent->uport)) { 936 + if (is_multi_bridge(parent->uport_dev)) { 937 937 array_size = ARRAY_SIZE(cxl_switch_dport); 938 938 array = cxl_switch_dport; 939 - } else if (is_single_bridge(parent->uport)) { 939 + } else if (is_single_bridge(parent->uport_dev)) { 940 940 array_size = ARRAY_SIZE(cxl_swd_single); 941 941 array = cxl_swd_single; 942 942 } else { 943 943 dev_dbg(&port->dev, "%s: unknown bridge type\n", 944 - dev_name(parent->uport)); 944 + dev_name(parent->uport_dev)); 945 945 return -ENXIO; 946 946 } 947 947 } else { ··· 954 954 struct platform_device *pdev = array[i]; 955 955 struct cxl_dport *dport; 956 956 957 - if (pdev->dev.parent != port->uport) { 957 + if (pdev->dev.parent != port->uport_dev) { 958 958 dev_dbg(&port->dev, "%s: mismatch parent %s\n", 959 - dev_name(port->uport), 959 + dev_name(port->uport_dev), 960 960 dev_name(pdev->dev.parent)); 961 961 continue; 962 962 } ··· 971 971 return 0; 972 972 } 973 973 974 - resource_size_t mock_cxl_rcrb_to_component(struct device *dev, 975 - resource_size_t rcrb, 976 - enum cxl_rcrb which) 977 - { 978 - dev_dbg(dev, "rcrb: %pa which: %d\n", &rcrb, which); 979 - 980 - return (resource_size_t) which + 1; 981 - } 982 - 983 974 static struct cxl_mock_ops cxl_mock_ops = { 984 975 .is_mock_adev = is_mock_adev, 985 976 .is_mock_bridge = is_mock_bridge, ··· 979 988 .is_mock_dev = is_mock_dev,
.acpi_table_parse_cedt = mock_acpi_table_parse_cedt, 981 990 .acpi_evaluate_integer = mock_acpi_evaluate_integer, 982 - .cxl_rcrb_to_component = mock_cxl_rcrb_to_component, 983 991 .acpi_pci_find_root = mock_acpi_pci_find_root, 984 992 .devm_cxl_port_enumerate_dports = mock_cxl_port_enumerate_dports, 985 993 .devm_cxl_setup_hdm = mock_cxl_setup_hdm,
+292 -75
tools/testing/cxl/test/mem.c
··· 8 8 #include <linux/sizes.h> 9 9 #include <linux/bits.h> 10 10 #include <asm/unaligned.h> 11 + #include <crypto/sha2.h> 11 12 #include <cxlmem.h> 12 13 13 14 #include "trace.h" 14 15 15 16 #define LSA_SIZE SZ_128K 17 + #define FW_SIZE SZ_64M 18 + #define FW_SLOTS 3 16 19 #define DEV_SIZE SZ_2G 17 20 #define EFFECT(x) (1U << x) 18 21 ··· 24 21 25 22 static unsigned int poison_inject_dev_max = MOCK_INJECT_DEV_MAX; 26 23 24 + enum cxl_command_effects { 25 + CONF_CHANGE_COLD_RESET = 0, 26 + CONF_CHANGE_IMMEDIATE, 27 + DATA_CHANGE_IMMEDIATE, 28 + POLICY_CHANGE_IMMEDIATE, 29 + LOG_CHANGE_IMMEDIATE, 30 + SECURITY_CHANGE_IMMEDIATE, 31 + BACKGROUND_OP, 32 + SECONDARY_MBOX_SUPPORTED, 33 + }; 34 + 35 + #define CXL_CMD_EFFECT_NONE cpu_to_le16(0) 36 + 27 37 static struct cxl_cel_entry mock_cel[] = { 28 38 { 29 39 .opcode = cpu_to_le16(CXL_MBOX_OP_GET_SUPPORTED_LOGS), 30 - .effect = cpu_to_le16(0), 40 + .effect = CXL_CMD_EFFECT_NONE, 31 41 }, 32 42 { 33 43 .opcode = cpu_to_le16(CXL_MBOX_OP_IDENTIFY), 34 - .effect = cpu_to_le16(0), 44 + .effect = CXL_CMD_EFFECT_NONE, 35 45 }, 36 46 { 37 47 .opcode = cpu_to_le16(CXL_MBOX_OP_GET_LSA), 38 - .effect = cpu_to_le16(0), 48 + .effect = CXL_CMD_EFFECT_NONE, 39 49 }, 40 50 { 41 51 .opcode = cpu_to_le16(CXL_MBOX_OP_GET_PARTITION_INFO), 42 - .effect = cpu_to_le16(0), 52 + .effect = CXL_CMD_EFFECT_NONE, 43 53 }, 44 54 { 45 55 .opcode = cpu_to_le16(CXL_MBOX_OP_SET_LSA), 46 - .effect = cpu_to_le16(EFFECT(1) | EFFECT(2)), 56 + .effect = cpu_to_le16(EFFECT(CONF_CHANGE_IMMEDIATE) | 57 + EFFECT(DATA_CHANGE_IMMEDIATE)), 47 58 }, 48 59 { 49 60 .opcode = cpu_to_le16(CXL_MBOX_OP_GET_HEALTH_INFO), 50 - .effect = cpu_to_le16(0), 61 + .effect = CXL_CMD_EFFECT_NONE, 51 62 }, 52 63 { 53 64 .opcode = cpu_to_le16(CXL_MBOX_OP_GET_POISON), 54 - .effect = cpu_to_le16(0), 65 + .effect = CXL_CMD_EFFECT_NONE, 55 66 }, 56 67 { 57 68 .opcode = cpu_to_le16(CXL_MBOX_OP_INJECT_POISON), 58 - .effect = cpu_to_le16(0), 69 + .effect = cpu_to_le16(EFFECT(DATA_CHANGE_IMMEDIATE)), 59 70 }, 60 71 { 61 72 .opcode = cpu_to_le16(CXL_MBOX_OP_CLEAR_POISON), 62 - .effect = cpu_to_le16(0), 73 + .effect = cpu_to_le16(EFFECT(DATA_CHANGE_IMMEDIATE)), 74 + }, 75 + { 76 + .opcode = cpu_to_le16(CXL_MBOX_OP_GET_FW_INFO), 77 + .effect = CXL_CMD_EFFECT_NONE, 78 + }, 79 + { 80 + .opcode = cpu_to_le16(CXL_MBOX_OP_TRANSFER_FW), 81 + .effect = cpu_to_le16(EFFECT(CONF_CHANGE_COLD_RESET) | 82 + EFFECT(BACKGROUND_OP)), 83 + }, 84 + { 85 + .opcode = cpu_to_le16(CXL_MBOX_OP_ACTIVATE_FW), 86 + .effect = cpu_to_le16(EFFECT(CONF_CHANGE_COLD_RESET) | 87 + EFFECT(CONF_CHANGE_IMMEDIATE)), 63 88 }, 64 89 }; 65 90 ··· 133 102 }; 134 103 135 104 struct mock_event_store { 136 - struct cxl_dev_state *cxlds; 105 + struct cxl_memdev_state *mds; 137 106 struct mock_event_log mock_logs[CXL_EVENT_TYPE_MAX]; 138 107 u32 ev_status; 139 108 }; 140 109 141 110 struct cxl_mockmem_data { 142 111 void *lsa; 112 + void *fw; 113 + int fw_slot; 114 + int fw_staged; 115 + size_t fw_size; 143 116 u32 security_state; 144 117 u8 user_pass[NVDIMM_PASSPHRASE_LEN]; 145 118 u8 master_pass[NVDIMM_PASSPHRASE_LEN]; ··· 215 180 log->nr_events++; 216 181 } 217 182 218 - static int mock_get_event(struct cxl_dev_state *cxlds, 219 - struct cxl_mbox_cmd *cmd) 183 + static int mock_get_event(struct device *dev, struct cxl_mbox_cmd *cmd) 220 184 { 221 185 struct cxl_get_event_payload *pl; 222 186 struct mock_event_log *log; ··· 235 201 236 202 memset(cmd->payload_out, 0, cmd->size_out); 237 203 238 - log = event_find_log(cxlds->dev, log_type); 204 + log = 
event_find_log(dev, log_type); 239 205 if (!log || event_log_empty(log)) 240 206 return 0; 241 207 ··· 268 234 return 0; 269 235 } 270 236 271 - static int mock_clear_event(struct cxl_dev_state *cxlds, 272 - struct cxl_mbox_cmd *cmd) 237 + static int mock_clear_event(struct device *dev, struct cxl_mbox_cmd *cmd) 273 238 { 274 239 struct cxl_mbox_clear_event_payload *pl = cmd->payload_in; 275 240 struct mock_event_log *log; ··· 279 246 if (log_type >= CXL_EVENT_TYPE_MAX) 280 247 return -EINVAL; 281 248 282 - log = event_find_log(cxlds->dev, log_type); 249 + log = event_find_log(dev, log_type); 283 250 if (!log) 284 251 return 0; /* No mock data in this log */ 285 252 ··· 289 256 * However, this is not good behavior for the host so test it. 290 257 */ 291 258 if (log->clear_idx + pl->nr_recs > log->cur_idx) { 292 - dev_err(cxlds->dev, 259 + dev_err(dev, 293 260 "Attempting to clear more events than returned!\n"); 294 261 return -EINVAL; 295 262 } ··· 299 266 nr < pl->nr_recs; 300 267 nr++, handle++) { 301 268 if (handle != le16_to_cpu(pl->handles[nr])) { 302 - dev_err(cxlds->dev, "Clearing events out of order\n"); 269 + dev_err(dev, "Clearing events out of order\n"); 303 270 return -EINVAL; 304 271 } 305 272 } ··· 326 293 event_reset_log(log); 327 294 } 328 295 329 - cxl_mem_get_event_records(mes->cxlds, mes->ev_status); 296 + cxl_mem_get_event_records(mes->mds, mes->ev_status); 330 297 } 331 298 332 299 struct cxl_event_record_raw maint_needed = { ··· 486 453 return 0; 487 454 } 488 455 489 - static int mock_get_log(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd) 456 + static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd) 490 457 { 491 458 struct cxl_mbox_get_log *gl = cmd->payload_in; 492 459 u32 offset = le32_to_cpu(gl->offset); ··· 496 463 497 464 if (cmd->size_in < sizeof(*gl)) 498 465 return -EINVAL; 499 - if (length > cxlds->payload_size) 466 + if (length > mds->payload_size) 500 467 return -EINVAL; 501 468 if (offset + length > sizeof(mock_cel)) 502 469 return -EINVAL; ··· 510 477 return 0; 511 478 } 512 479 513 - static int mock_rcd_id(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd) 480 + static int mock_rcd_id(struct cxl_mbox_cmd *cmd) 514 481 { 515 482 struct cxl_mbox_identify id = { 516 483 .fw_revision = { "mock fw v1 " }, ··· 528 495 return 0; 529 496 } 530 497 531 - static int mock_id(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd) 498 + static int mock_id(struct cxl_mbox_cmd *cmd) 532 499 { 533 500 struct cxl_mbox_identify id = { 534 501 .fw_revision = { "mock fw v1 " }, ··· 550 517 return 0; 551 518 } 552 519 553 - static int mock_partition_info(struct cxl_dev_state *cxlds, 554 - struct cxl_mbox_cmd *cmd) 520 + static int mock_partition_info(struct cxl_mbox_cmd *cmd) 555 521 { 556 522 struct cxl_mbox_get_partition_info pi = { 557 523 .active_volatile_cap = ··· 567 535 return 0; 568 536 } 569 537 570 - static int mock_get_security_state(struct cxl_dev_state *cxlds, 538 + static int mock_sanitize(struct cxl_mockmem_data *mdata, 539 + struct cxl_mbox_cmd *cmd) 540 + { 541 + if (cmd->size_in != 0) 542 + return -EINVAL; 543 + 544 + if (cmd->size_out != 0) 545 + return -EINVAL; 546 + 547 + if (mdata->security_state & CXL_PMEM_SEC_STATE_USER_PASS_SET) { 548 + cmd->return_code = CXL_MBOX_CMD_RC_SECURITY; 549 + return -ENXIO; 550 + } 551 + if (mdata->security_state & CXL_PMEM_SEC_STATE_LOCKED) { 552 + cmd->return_code = CXL_MBOX_CMD_RC_SECURITY; 553 + return -ENXIO; 554 + } 555 + 556 + return 0; /* assume less than 2 secs, no bg */ 557 + 
}
558 +
559 + static int mock_secure_erase(struct cxl_mockmem_data *mdata,
560 +                              struct cxl_mbox_cmd *cmd)
561 + {
562 +         if (cmd->size_in != 0)
563 +                 return -EINVAL;
564 +
565 +         if (cmd->size_out != 0)
566 +                 return -EINVAL;
567 +
568 +         if (mdata->security_state & CXL_PMEM_SEC_STATE_USER_PASS_SET) {
569 +                 cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
570 +                 return -ENXIO;
571 +         }
572 +
573 +         if (mdata->security_state & CXL_PMEM_SEC_STATE_LOCKED) {
574 +                 cmd->return_code = CXL_MBOX_CMD_RC_SECURITY;
575 +                 return -ENXIO;
576 +         }
577 +
578 +         return 0;
579 + }
580 +
581 + static int mock_get_security_state(struct cxl_mockmem_data *mdata,
571 582                                    struct cxl_mbox_cmd *cmd)
572 583 {
573 -         struct cxl_mockmem_data *mdata = dev_get_drvdata(cxlds->dev);
574 -
575 584         if (cmd->size_in)
576 585                 return -EINVAL;
577 586
···
642 569         mdata->security_state |= CXL_PMEM_SEC_STATE_USER_PLIMIT;
643 570 }
644 571
645 - static int mock_set_passphrase(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
572 + static int mock_set_passphrase(struct cxl_mockmem_data *mdata,
573 +                                struct cxl_mbox_cmd *cmd)
646 574 {
647 -         struct cxl_mockmem_data *mdata = dev_get_drvdata(cxlds->dev);
648 575         struct cxl_set_pass *set_pass;
649 576
650 577         if (cmd->size_in != sizeof(*set_pass))
···
702 629         return -EINVAL;
703 630 }
704 631
705 - static int mock_disable_passphrase(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
632 + static int mock_disable_passphrase(struct cxl_mockmem_data *mdata,
633 +                                    struct cxl_mbox_cmd *cmd)
706 634 {
707 -         struct cxl_mockmem_data *mdata = dev_get_drvdata(cxlds->dev);
708 635         struct cxl_disable_pass *dis_pass;
709 636
710 637         if (cmd->size_in != sizeof(*dis_pass))
···
773 700         return 0;
774 701 }
775 702
776 - static int mock_freeze_security(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
703 + static int mock_freeze_security(struct cxl_mockmem_data *mdata,
704 +                                 struct cxl_mbox_cmd *cmd)
777 705 {
778 -         struct cxl_mockmem_data *mdata = dev_get_drvdata(cxlds->dev);
779 -
780 706         if (cmd->size_in != 0)
781 707                 return -EINVAL;
782 708
···
789 717         return 0;
790 718 }
791 719
792 - static int mock_unlock_security(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
720 + static int mock_unlock_security(struct cxl_mockmem_data *mdata,
721 +                                 struct cxl_mbox_cmd *cmd)
793 722 {
794 -         struct cxl_mockmem_data *mdata = dev_get_drvdata(cxlds->dev);
795 -
796 723         if (cmd->size_in != NVDIMM_PASSPHRASE_LEN)
797 724                 return -EINVAL;
798 725
···
830 759         return 0;
831 760 }
832 761
833 - static int mock_passphrase_secure_erase(struct cxl_dev_state *cxlds,
762 + static int mock_passphrase_secure_erase(struct cxl_mockmem_data *mdata,
834 763                                         struct cxl_mbox_cmd *cmd)
835 764 {
836 -         struct cxl_mockmem_data *mdata = dev_get_drvdata(cxlds->dev);
837 765         struct cxl_pass_erase *erase;
838 766
839 767         if (cmd->size_in != sizeof(*erase))
···
928 858         return 0;
929 859 }
930 860
931 - static int mock_get_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
861 + static int mock_get_lsa(struct cxl_mockmem_data *mdata,
862 +                         struct cxl_mbox_cmd *cmd)
932 863 {
933 864         struct cxl_mbox_get_lsa *get_lsa = cmd->payload_in;
934 -         struct cxl_mockmem_data *mdata = dev_get_drvdata(cxlds->dev);
935 865         void *lsa = mdata->lsa;
936 866         u32 offset, length;
937 867
···
948 878         return 0;
949 879 }
950 880
951 - static int mock_set_lsa(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
881 + static int mock_set_lsa(struct cxl_mockmem_data *mdata,
882 +                         struct cxl_mbox_cmd *cmd)
952 883 {
953 884         struct cxl_mbox_set_lsa *set_lsa = cmd->payload_in;
954 -         struct cxl_mockmem_data *mdata = dev_get_drvdata(cxlds->dev);
955 885         void *lsa = mdata->lsa;
956 886         u32 offset, length;
957 887
···
966 896         return 0;
967 897 }
968 898
969 - static int mock_health_info(struct cxl_dev_state *cxlds,
970 -                             struct cxl_mbox_cmd *cmd)
899 + static int mock_health_info(struct cxl_mbox_cmd *cmd)
971 900 {
972 901         struct cxl_mbox_health_info health_info = {
973 902                 /* set flags for maint needed, perf degraded, hw replacement */
···
1183 1114 };
1184 1115 ATTRIBUTE_GROUPS(cxl_mock_mem_core);
1185 1116
1186 - static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd)
1117 + static int mock_fw_info(struct cxl_mockmem_data *mdata,
1118 +                         struct cxl_mbox_cmd *cmd)
1187 1119 {
1120 +         struct cxl_mbox_get_fw_info fw_info = {
1121 +                 .num_slots = FW_SLOTS,
1122 +                 .slot_info = (mdata->fw_slot & 0x7) |
1123 +                              ((mdata->fw_staged & 0x7) << 3),
1124 +                 .activation_cap = 0,
1125 +         };
1126 +
1127 +         strcpy(fw_info.slot_1_revision, "cxl_test_fw_001");
1128 +         strcpy(fw_info.slot_2_revision, "cxl_test_fw_002");
1129 +         strcpy(fw_info.slot_3_revision, "cxl_test_fw_003");
1130 +         strcpy(fw_info.slot_4_revision, "");
1131 +
1132 +         if (cmd->size_out < sizeof(fw_info))
1133 +                 return -EINVAL;
1134 +
1135 +         memcpy(cmd->payload_out, &fw_info, sizeof(fw_info));
1136 +         return 0;
1137 + }
1138 +
1139 + static int mock_transfer_fw(struct cxl_mockmem_data *mdata,
1140 +                             struct cxl_mbox_cmd *cmd)
1141 + {
1142 +         struct cxl_mbox_transfer_fw *transfer = cmd->payload_in;
1143 +         void *fw = mdata->fw;
1144 +         size_t offset, length;
1145 +
1146 +         offset = le32_to_cpu(transfer->offset) * CXL_FW_TRANSFER_ALIGNMENT;
1147 +         length = cmd->size_in - sizeof(*transfer);
1148 +         if (offset + length > FW_SIZE)
1149 +                 return -EINVAL;
1150 +
1151 +         switch (transfer->action) {
1152 +         case CXL_FW_TRANSFER_ACTION_FULL:
1153 +                 if (offset != 0)
1154 +                         return -EINVAL;
1155 +                 fallthrough;
1156 +         case CXL_FW_TRANSFER_ACTION_END:
1157 +                 if (transfer->slot == 0 || transfer->slot > FW_SLOTS)
1158 +                         return -EINVAL;
1159 +                 mdata->fw_size = offset + length;
1160 +                 break;
1161 +         case CXL_FW_TRANSFER_ACTION_INITIATE:
1162 +         case CXL_FW_TRANSFER_ACTION_CONTINUE:
1163 +                 break;
1164 +         case CXL_FW_TRANSFER_ACTION_ABORT:
1165 +                 return 0;
1166 +         default:
1167 +                 return -EINVAL;
1168 +         }
1169 +
1170 +         memcpy(fw + offset, transfer->data, length);
1171 +         return 0;
1172 + }
1173 +
1174 + static int mock_activate_fw(struct cxl_mockmem_data *mdata,
1175 +                             struct cxl_mbox_cmd *cmd)
1176 + {
1177 +         struct cxl_mbox_activate_fw *activate = cmd->payload_in;
1178 +
1179 +         if (activate->slot == 0 || activate->slot > FW_SLOTS)
1180 +                 return -EINVAL;
1181 +
1182 +         switch (activate->action) {
1183 +         case CXL_FW_ACTIVATE_ONLINE:
1184 +                 mdata->fw_slot = activate->slot;
1185 +                 mdata->fw_staged = 0;
1186 +                 return 0;
1187 +         case CXL_FW_ACTIVATE_OFFLINE:
1188 +                 mdata->fw_staged = activate->slot;
1189 +                 return 0;
1190 +         }
1191 +
1192 +         return -EINVAL;
1193 + }
1194 +
1195 + static int cxl_mock_mbox_send(struct cxl_memdev_state *mds,
1196 +                               struct cxl_mbox_cmd *cmd)
1197 + {
1198 +         struct cxl_dev_state *cxlds = &mds->cxlds;
1188 1199         struct device *dev = cxlds->dev;
1200 +         struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
1189 1201         int rc = -EIO;
1190 1202
1191 1203         switch (cmd->opcode) {
···
1277 1127                 rc = mock_gsl(cmd);
1278 1128                 break;
1279 1129         case CXL_MBOX_OP_GET_LOG:
1280 -                 rc = mock_get_log(cxlds, cmd);
1130 +                 rc = mock_get_log(mds, cmd);
1281 1131                 break;
1282 1132         case CXL_MBOX_OP_IDENTIFY:
1283 1133                 if (cxlds->rcd)
1284 -                         rc = mock_rcd_id(cxlds, cmd);
1134 +                         rc = mock_rcd_id(cmd);
1285 1135                 else
1286 -                         rc = mock_id(cxlds, cmd);
1136 +                         rc = mock_id(cmd);
1287 1137                 break;
1288 1138         case CXL_MBOX_OP_GET_LSA:
1289 -                 rc = mock_get_lsa(cxlds, cmd);
1139 +                 rc = mock_get_lsa(mdata, cmd);
1290 1140                 break;
1291 1141         case CXL_MBOX_OP_GET_PARTITION_INFO:
1292 -                 rc = mock_partition_info(cxlds, cmd);
1142 +                 rc = mock_partition_info(cmd);
1293 1143                 break;
1294 1144         case CXL_MBOX_OP_GET_EVENT_RECORD:
1295 -                 rc = mock_get_event(cxlds, cmd);
1145 +                 rc = mock_get_event(dev, cmd);
1296 1146                 break;
1297 1147         case CXL_MBOX_OP_CLEAR_EVENT_RECORD:
1298 -                 rc = mock_clear_event(cxlds, cmd);
1148 +                 rc = mock_clear_event(dev, cmd);
1299 1149                 break;
1300 1150         case CXL_MBOX_OP_SET_LSA:
1301 -                 rc = mock_set_lsa(cxlds, cmd);
1151 +                 rc = mock_set_lsa(mdata, cmd);
1302 1152                 break;
1303 1153         case CXL_MBOX_OP_GET_HEALTH_INFO:
1304 -                 rc = mock_health_info(cxlds, cmd);
1154 +                 rc = mock_health_info(cmd);
1155 +                 break;
1156 +         case CXL_MBOX_OP_SANITIZE:
1157 +                 rc = mock_sanitize(mdata, cmd);
1158 +                 break;
1159 +         case CXL_MBOX_OP_SECURE_ERASE:
1160 +                 rc = mock_secure_erase(mdata, cmd);
1305 1161                 break;
1306 1162         case CXL_MBOX_OP_GET_SECURITY_STATE:
1307 -                 rc = mock_get_security_state(cxlds, cmd);
1163 +                 rc = mock_get_security_state(mdata, cmd);
1308 1164                 break;
1309 1165         case CXL_MBOX_OP_SET_PASSPHRASE:
1310 -                 rc = mock_set_passphrase(cxlds, cmd);
1166 +                 rc = mock_set_passphrase(mdata, cmd);
1311 1167                 break;
1312 1168         case CXL_MBOX_OP_DISABLE_PASSPHRASE:
1313 -                 rc = mock_disable_passphrase(cxlds, cmd);
1169 +                 rc = mock_disable_passphrase(mdata, cmd);
1314 1170                 break;
1315 1171         case CXL_MBOX_OP_FREEZE_SECURITY:
1316 -                 rc = mock_freeze_security(cxlds, cmd);
1172 +                 rc = mock_freeze_security(mdata, cmd);
1317 1173                 break;
1318 1174         case CXL_MBOX_OP_UNLOCK:
1319 -                 rc = mock_unlock_security(cxlds, cmd);
1175 +                 rc = mock_unlock_security(mdata, cmd);
1320 1176                 break;
1321 1177         case CXL_MBOX_OP_PASSPHRASE_SECURE_ERASE:
1322 -                 rc = mock_passphrase_secure_erase(cxlds, cmd);
1178 +                 rc = mock_passphrase_secure_erase(mdata, cmd);
1323 1179                 break;
1324 1180         case CXL_MBOX_OP_GET_POISON:
1325 1181                 rc = mock_get_poison(cxlds, cmd);
···
1335 1179                 break;
1336 1180         case CXL_MBOX_OP_CLEAR_POISON:
1337 1181                 rc = mock_clear_poison(cxlds, cmd);
1182 +                 break;
1183 +         case CXL_MBOX_OP_GET_FW_INFO:
1184 +                 rc = mock_fw_info(mdata, cmd);
1185 +                 break;
1186 +         case CXL_MBOX_OP_TRANSFER_FW:
1187 +                 rc = mock_transfer_fw(mdata, cmd);
1188 +                 break;
1189 +         case CXL_MBOX_OP_ACTIVATE_FW:
1190 +                 rc = mock_activate_fw(mdata, cmd);
1338 1191                 break;
1339 1192         default:
1340 1193                 break;
···
1358 1193 static void label_area_release(void *lsa)
1359 1194 {
1360 1195         vfree(lsa);
1196 + }
1197 +
1198 + static void fw_buf_release(void *buf)
1199 + {
1200 +         vfree(buf);
1361 1201 }
1362 1202
1363 1203 static bool is_rcd(struct platform_device *pdev)
···
1385 1215 {
1386 1216         struct device *dev = &pdev->dev;
1387 1217         struct cxl_memdev *cxlmd;
1218 +         struct cxl_memdev_state *mds;
1388 1219         struct cxl_dev_state *cxlds;
1389 1220         struct cxl_mockmem_data *mdata;
1390 1221         int rc;
···
1398 1227         mdata->lsa = vmalloc(LSA_SIZE);
1399 1228         if (!mdata->lsa)
1400 1229                 return -ENOMEM;
1230 +         mdata->fw = vmalloc(FW_SIZE);
1231 +         if (!mdata->fw)
1232 +                 return -ENOMEM;
1233 +         mdata->fw_slot = 2;
1234 +
1401 1235         rc = devm_add_action_or_reset(dev, label_area_release, mdata->lsa);
1402 1236         if (rc)
1403 1237                 return rc;
1404 1238
1405 -         cxlds = cxl_dev_state_create(dev);
1406 -         if (IS_ERR(cxlds))
1407 -                 return PTR_ERR(cxlds);
1239 +         rc = devm_add_action_or_reset(dev, fw_buf_release, mdata->fw);
1240 +         if (rc)
1241 +                 return rc;
1408 1242
1243 +         mds = cxl_memdev_state_create(dev);
1244 +         if (IS_ERR(mds))
1245 +                 return PTR_ERR(mds);
1246 +
1247 +         mds->mbox_send = cxl_mock_mbox_send;
1248 +         mds->payload_size = SZ_4K;
1249 +         mds->event.buf = (struct cxl_get_event_payload *) mdata->event_buf;
1250 +
1251 +         cxlds = &mds->cxlds;
1409 1252         cxlds->serial = pdev->id;
1410 -         cxlds->mbox_send = cxl_mock_mbox_send;
1411 -         cxlds->payload_size = SZ_4K;
1412 -         cxlds->event.buf = (struct cxl_get_event_payload *) mdata->event_buf;
1413 1253         if (is_rcd(pdev)) {
1414 1254                 cxlds->rcd = true;
1415 1255                 cxlds->component_reg_phys = CXL_RESOURCE_NONE;
1416 1256         }
1417 1257
1418 -         rc = cxl_enumerate_cmds(cxlds);
1258 +         rc = cxl_enumerate_cmds(mds);
1419 1259         if (rc)
1420 1260                 return rc;
1421 1261
1422 -         rc = cxl_poison_state_init(cxlds);
1262 +         rc = cxl_poison_state_init(mds);
1423 1263         if (rc)
1424 1264                 return rc;
1425 1265
1426 -         rc = cxl_set_timestamp(cxlds);
1266 +         rc = cxl_set_timestamp(mds);
1427 1267         if (rc)
1428 1268                 return rc;
1429 1269
1430 1270         cxlds->media_ready = true;
1431 -         rc = cxl_dev_state_identify(cxlds);
1271 +         rc = cxl_dev_state_identify(mds);
1432 1272         if (rc)
1433 1273                 return rc;
1434 1274
1435 -         rc = cxl_mem_create_range_info(cxlds);
1275 +         rc = cxl_mem_create_range_info(mds);
1436 1276         if (rc)
1437 1277                 return rc;
1438 1278
1439 -         mdata->mes.cxlds = cxlds;
1279 +         mdata->mes.mds = mds;
1440 1280         cxl_mock_add_event_logs(&mdata->mes);
1441 1281
1442 1282         cxlmd = devm_cxl_add_memdev(cxlds);
1443 1283         if (IS_ERR(cxlmd))
1444 1284                 return PTR_ERR(cxlmd);
1445 1285
1446 -         cxl_mem_get_event_records(cxlds, CXLDEV_EVENT_STATUS_ALL);
1286 +         rc = cxl_memdev_setup_fw_upload(mds);
1287 +         if (rc)
1288 +                 return rc;
1289 +
1290 +         cxl_mem_get_event_records(mds, CXLDEV_EVENT_STATUS_ALL);
1447 1291
1448 1292         return 0;
1449 1293 }
···
1496 1310
1497 1311 static DEVICE_ATTR_RW(security_lock);
1498 1312
1313 + static ssize_t fw_buf_checksum_show(struct device *dev,
1314 +                                     struct device_attribute *attr, char *buf)
1315 + {
1316 +         struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
1317 +         u8 hash[SHA256_DIGEST_SIZE];
1318 +         unsigned char *hstr, *hptr;
1319 +         struct sha256_state sctx;
1320 +         ssize_t written = 0;
1321 +         int i;
1322 +
1323 +         sha256_init(&sctx);
1324 +         sha256_update(&sctx, mdata->fw, mdata->fw_size);
1325 +         sha256_final(&sctx, hash);
1326 +
1327 +         hstr = kzalloc((SHA256_DIGEST_SIZE * 2) + 1, GFP_KERNEL);
1328 +         if (!hstr)
1329 +                 return -ENOMEM;
1330 +
1331 +         hptr = hstr;
1332 +         for (i = 0; i < SHA256_DIGEST_SIZE; i++)
1333 +                 hptr += sprintf(hptr, "%02x", hash[i]);
1334 +
1335 +         written = sysfs_emit(buf, "%s\n", hstr);
1336 +
1337 +         kfree(hstr);
1338 +         return written;
1339 + }
1340 +
1341 + static DEVICE_ATTR_RO(fw_buf_checksum);
1342 +
1499 1343 static struct attribute *cxl_mock_mem_attrs[] = {
1500 1344         &dev_attr_security_lock.attr,
1501 1345         &dev_attr_event_trigger.attr,
1346 +         &dev_attr_fw_buf_checksum.attr,
1502 1347         NULL
1503 1348 };
1504 1349 ATTRIBUTE_GROUPS(cxl_mock_mem);
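The Get FW Info mock above packs both slot values into a single byte: the active
slot occupies bits [2:0] of slot_info and the staged slot bits [5:3]. A minimal
standalone sketch of that encode/decode round-trip (user-space C with
illustrative values, not kernel code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* mirrors the mock's packing: fw_slot is active, fw_staged is pending */
            uint8_t fw_slot = 2, fw_staged = 3;
            uint8_t slot_info = (fw_slot & 0x7) | ((fw_staged & 0x7) << 3);

            printf("active slot: %u\n", slot_info & 0x7);        /* prints 2 */
            printf("staged slot: %u\n", (slot_info >> 3) & 0x7); /* prints 3 */
            return 0;
    }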
+33 -26
tools/testing/cxl/test/mock.c
···
139 139         struct cxl_hdm *cxlhdm;
140 140         struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
141 141
142 -         if (ops && ops->is_mock_port(port->uport))
142 +         if (ops && ops->is_mock_port(port->uport_dev))
143 143                 cxlhdm = ops->devm_cxl_setup_hdm(port, info);
144 144         else
145 145                 cxlhdm = devm_cxl_setup_hdm(port, info);
···
149 149 }
150 150 EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_setup_hdm, CXL);
151 151
152 - int __wrap_devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm)
153 - {
154 -         int index, rc;
155 -         struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
156 -
157 -         if (ops && ops->is_mock_port(port->uport))
158 -                 rc = 0;
159 -         else
160 -                 rc = devm_cxl_enable_hdm(port, cxlhdm);
161 -         put_cxl_mock_ops(index);
162 -
163 -         return rc;
164 - }
165 - EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_enable_hdm, CXL);
166 -
167 152 int __wrap_devm_cxl_add_passthrough_decoder(struct cxl_port *port)
168 153 {
169 154         int rc, index;
170 155         struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
171 156
172 -         if (ops && ops->is_mock_port(port->uport))
157 +         if (ops && ops->is_mock_port(port->uport_dev))
173 158                 rc = ops->devm_cxl_add_passthrough_decoder(port);
174 159         else
175 160                 rc = devm_cxl_add_passthrough_decoder(port);
···
171 186         struct cxl_port *port = cxlhdm->port;
172 187         struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
173 188
174 -         if (ops && ops->is_mock_port(port->uport))
189 +         if (ops && ops->is_mock_port(port->uport_dev))
175 190                 rc = ops->devm_cxl_enumerate_decoders(cxlhdm, info);
176 191         else
177 192                 rc = devm_cxl_enumerate_decoders(cxlhdm, info);
···
186 201         int rc, index;
187 202         struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
188 203
189 -         if (ops && ops->is_mock_port(port->uport))
204 +         if (ops && ops->is_mock_port(port->uport_dev))
190 205                 rc = ops->devm_cxl_port_enumerate_dports(port);
191 206         else
192 207                 rc = devm_cxl_port_enumerate_dports(port);
···
244 259 }
245 260 EXPORT_SYMBOL_NS_GPL(__wrap_cxl_dvsec_rr_decode, CXL);
246 261
247 - resource_size_t __wrap_cxl_rcrb_to_component(struct device *dev,
248 -                                              resource_size_t rcrb,
249 -                                              enum cxl_rcrb which)
262 + struct cxl_dport *__wrap_devm_cxl_add_rch_dport(struct cxl_port *port,
263 +                                                 struct device *dport_dev,
264 +                                                 int port_id,
265 +                                                 resource_size_t rcrb)
266 + {
267 +         int index;
268 +         struct cxl_dport *dport;
269 +         struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
270 +
271 +         if (ops && ops->is_mock_port(dport_dev)) {
272 +                 dport = devm_cxl_add_dport(port, dport_dev, port_id,
273 +                                            CXL_RESOURCE_NONE);
274 +                 if (!IS_ERR(dport)) {
275 +                         dport->rcrb.base = rcrb;
276 +                         dport->rch = true;
277 +                 }
278 +         } else
279 +                 dport = devm_cxl_add_rch_dport(port, dport_dev, port_id, rcrb);
280 +         put_cxl_mock_ops(index);
281 +
282 +         return dport;
283 + }
284 + EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_add_rch_dport, CXL);
285 +
286 + resource_size_t __wrap_cxl_rcd_component_reg_phys(struct device *dev,
287 +                                                   struct cxl_dport *dport)
250 288 {
251 289         int index;
252 290         resource_size_t component_reg_phys;
253 291         struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
254 292
255 293         if (ops && ops->is_mock_port(dev))
256 -                 component_reg_phys =
257 -                         ops->cxl_rcrb_to_component(dev, rcrb, which);
294 +                 component_reg_phys = CXL_RESOURCE_NONE;
258 295         else
259 -                 component_reg_phys = cxl_rcrb_to_component(dev, rcrb, which);
296 +                 component_reg_phys = cxl_rcd_component_reg_phys(dev, dport);
260 297         put_cxl_mock_ops(index);
261 298
262 299         return component_reg_phys;
263 300 }
264 - EXPORT_SYMBOL_NS_GPL(__wrap_cxl_rcrb_to_component, CXL);
301 + EXPORT_SYMBOL_NS_GPL(__wrap_cxl_rcd_component_reg_phys, CXL);
265 302
266 303 MODULE_LICENSE("GPL v2");
267 304 MODULE_IMPORT_NS(ACPI);
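Every __wrap_*() above follows the same shape: look up the mock ops, emulate
when the device belongs to the test topology, otherwise call the real
implementation, and drop the ops reference either way. A standalone sketch of
that routing pattern (plain C with hypothetical names; the real harness wires
this up with ld --wrap and the cxl_mock_ops registry):

    #include <stdio.h>
    #include <string.h>

    struct mock_ops {
            int (*is_mock)(const char *dev);
            int (*op)(const char *dev);
    };

    static int is_mock(const char *dev)
    {
            return strncmp(dev, "mock", 4) == 0;
    }

    static int mock_op(const char *dev)
    {
            printf("%s: emulated path\n", dev);
            return 0;
    }

    static struct mock_ops registered_ops = { .is_mock = is_mock, .op = mock_op };

    /* stand-ins for the get_cxl_mock_ops()/put_cxl_mock_ops() reference pair */
    static struct mock_ops *get_mock_ops(int *index)
    {
            *index = 0;
            return &registered_ops;
    }

    static void put_mock_ops(int index)
    {
            (void)index;
    }

    static int real_op(const char *dev)
    {
            printf("%s: real implementation\n", dev);
            return 0;
    }

    /* with "ld --wrap=op", callers of op() are redirected here */
    int wrap_op(const char *dev)
    {
            int index, rc;
            struct mock_ops *ops = get_mock_ops(&index);

            if (ops && ops->is_mock(dev))
                    rc = ops->op(dev);      /* emulated device */
            else
                    rc = real_op(dev);      /* pass through to the real code */
            put_mock_ops(index);

            return rc;
    }

    int main(void)
    {
            wrap_op("mock_port0");
            wrap_op("0000:34:00.0");
            return 0;
    }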
-3
tools/testing/cxl/test/mock.h
···
15 15                                   acpi_string pathname,
16 16                                   struct acpi_object_list *arguments,
17 17                                   unsigned long long *data);
18 -         resource_size_t (*cxl_rcrb_to_component)(struct device *dev,
19 -                                                  resource_size_t rcrb,
20 -                                                  enum cxl_rcrb which);
21 18         struct acpi_pci_root *(*acpi_pci_find_root)(acpi_handle handle);
22 19         bool (*is_mock_bus)(struct pci_bus *bus);
23 20         bool (*is_mock_port)(struct device *dev);