Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'cxl-for-6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl

Pull compute express link (cxl) updates from Dave Jiang:
"Major changes address HDM decoder initialization from DVSEC ranges,
refactor the code related to cxl mailboxes to be independent of the
memory devices, add support for shared upstream link
access_coordinate calculation, and remove locking from the memory
notifier callback.

In addition, a number of misc cleanups and refactoring of the code are
also included.

Address HDM decoder initialization from DVSEC ranges:
- Only register non-zero DVSEC ranges
- Remove duplicate implementation of waiting for memory_info_valid
- Simplify the checking of mem_enabled in cxl_hdm_decode_init()

Refactor the code related to cxl mailboxes to be independent of the memory devices:
- Move cxl headers in include/linux/ to include/cxl
- Move all mailbox related data to 'struct cxl_mailbox'
- Refactor mailbox APIs with 'struct cxl_mailbox' as input instead of
memory device state

Add support for shared upstream link access_coordinate calculation for
configurations that have multiple targets under a switch or a root
port, where the aggregated bandwidth can be greater than the bandwidth
of the switch/RP upstream link:
- Preserve the CDAT access_coordinate from an endpoint
- Add the support for shared upstream link access_coordinate calculation
- Add documentation to explain how the calculations are done

Remove locking from memory notifier callback.

Misc cleanups:
- Convert devm_cxl_add_root() to return using ERR_CAST()
- cxl_test use dev_is_platform() instead of open coding
- Remove duplicate include of header core.h in core/cdat.c
- Use scoped resource management to drop put_device() for cxl_port
- Use scoped_guard to drop device_lock() for cxl_port
- Refactor __devm_cxl_add_port() to drop gotos
- Rename cxl_setup_parent_dport() to cxl_dport_init_aer() and
cxl_dport_map_regs() to cxl_dport_map_ras()
- Refactor cxl_dport_init_aer() to be more concise
- Remove duplicate host_bridge->native_aer checking in
cxl_dport_init_ras_reporting()
- Fix comment for cxl_query_cmd()"

* tag 'cxl-for-6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl: (21 commits)
cxl: Add documentation to explain the shared link bandwidth calculation
cxl: Calculate region bandwidth of targets with shared upstream link
cxl: Preserve the CDAT access_coordinate for an endpoint
cxl: Fix comment regarding cxl_query_cmd() return data
cxl: Convert cxl_internal_send_cmd() to use 'struct cxl_mailbox' as input
cxl: Move mailbox related bits to the same context
cxl: move cxl headers to new include/cxl/ directory
cxl/region: Remove lock from memory notifier callback
cxl/pci: simplify the check of mem_enabled in cxl_hdm_decode_init()
cxl/pci: Check Mem_info_valid bit for each applicable DVSEC
cxl/pci: Remove duplicated implementation of waiting for memory_info_valid
cxl/pci: Fix to record only non-zero ranges
cxl/pci: Remove duplicate host_bridge->native_aer checking
cxl/pci: cxl_dport_map_rch_aer() cleanup
cxl/pci: Rename cxl_setup_parent_dport() and cxl_dport_map_regs()
cxl/port: Refactor __devm_cxl_add_port() to drop goto pattern
cxl/port: Use scoped_guard()/guard() to drop device_lock() for cxl_port
cxl/port: Use __free() to drop put_device() for cxl_port
cxl: Remove duplicate included header file core.h
tools/testing/cxl: Use dev_is_platform()
...

+1094 -408
+91
Documentation/driver-api/cxl/access-coordinates.rst
.. SPDX-License-Identifier: GPL-2.0
.. include:: <isonum.txt>

==================================
CXL Access Coordinates Computation
==================================

Shared Upstream Link Calculation
================================
For certain CXL region constructions with endpoints behind CXL switches (SW)
or Root Ports (RP), it is possible for the total bandwidth of all the
endpoints behind a switch to exceed that of the switch upstream link. A
similar situation can occur within the host, upstream of the root ports. The
CXL driver performs an additional pass after all the targets for a region
have arrived in order to recalculate the bandwidths with a possibly limiting
upstream link in mind.

The algorithm assumes the configuration is a symmetric topology, as that
maximizes performance. When an asymmetric topology is detected, the
calculation is aborted. Asymmetry is detected during the topology walk when
the number of RPs detected as grandparents is not equal to the number of
devices iterated in the same loop. The assumption is made that subtle
asymmetry in properties does not happen and all paths to EPs are equal.

There can be multiple switches under an RP, multiple RPs under a CXL Host
Bridge (HB), and multiple HBs under a CXL Fixed Memory Window Structure
(CFMWS).

An example hierarchy:

                    CFMWS 0
                       |
              _________|_________
             |                   |
        ACPI0017-0          ACPI0017-1
   GP0/HB0/ACPI0016-0  GP1/HB1/ACPI0016-1
     |           |        |          |
    RP0         RP1      RP2        RP3
     |           |        |          |
   SW 0        SW 1     SW 2       SW 3
   |   |       |   |    |   |      |   |
  EP0 EP1    EP2 EP3  EP4 EP5    EP6 EP7

Computation for the example hierarchy:

Min (GP0 to CPU BW,
     Min(SW 0 Upstream Link to RP0 BW,
         Min(SW0SSLBIS for SW0DSP0 (EP0), EP0 DSLBIS, EP0 Upstream Link) +
         Min(SW0SSLBIS for SW0DSP1 (EP1), EP1 DSLBIS, EP1 Upstream Link)) +
     Min(SW 1 Upstream Link to RP1 BW,
         Min(SW1SSLBIS for SW1DSP0 (EP2), EP2 DSLBIS, EP2 Upstream Link) +
         Min(SW1SSLBIS for SW1DSP1 (EP3), EP3 DSLBIS, EP3 Upstream Link))) +
Min (GP1 to CPU BW,
     Min(SW 2 Upstream Link to RP2 BW,
         Min(SW2SSLBIS for SW2DSP0 (EP4), EP4 DSLBIS, EP4 Upstream Link) +
         Min(SW2SSLBIS for SW2DSP1 (EP5), EP5 DSLBIS, EP5 Upstream Link)) +
     Min(SW 3 Upstream Link to RP3 BW,
         Min(SW3SSLBIS for SW3DSP0 (EP6), EP6 DSLBIS, EP6 Upstream Link) +
         Min(SW3SSLBIS for SW3DSP1 (EP7), EP7 DSLBIS, EP7 Upstream Link))))

The calculation starts at cxl_region_shared_upstream_bandwidth_update(). An
xarray is created to collect all the endpoint bandwidths via the
cxl_endpoint_gather_bandwidth() function. The min() of the bandwidth from the
endpoint CDAT and the upstream link bandwidth is calculated. If the endpoint
has a CXL switch as a parent, then the min() of the calculated bandwidth and
the bandwidth from the SSLBIS for the switch downstream port that is
associated with the endpoint is calculated. The final bandwidth is stored in
a 'struct cxl_perf_ctx' in the xarray, indexed by a device pointer. If the
endpoint is directly attached to a root port (RP), the device pointer is the
RP device. If the endpoint is behind a switch, the device pointer is the
upstream device of the parent switch.

At the next stage, the code walks through one or more switches if they exist
in the topology. For endpoints directly attached to RPs, this step is
skipped. The code takes the min() of the currently gathered bandwidth and the
upstream link bandwidth, and, if there is another switch upstream, also the
min() with the SSLBIS of that upstream switch.

Once the topology walk reaches the RP, whether from direct attached endpoints
or from walking through the switch(es), cxl_rp_gather_bandwidth() is called.
At this point all the bandwidths are aggregated per host bridge, which is
also the index for the resulting xarray.

The next step is to take the min() of the per host bridge bandwidth and the
bandwidth from the Generic Port (GP). The bandwidths for the GP are retrieved
via the ACPI SRAT/HMAT tables. The minimum bandwidths are aggregated under
the same ACPI0017 device to form a new xarray.

Finally, cxl_region_update_bandwidth() is called and the aggregated bandwidth
from all the members of the last xarray is written to the access coordinates
residing in the cxl region (cxlr) context.
+1
Documentation/driver-api/cxl/index.rst
···
    :maxdepth: 1

    memory-devices
+   access-coordinates

    maturity-map
+1 -2
MAINTAINERS
···
 S:	Maintained
 F:	Documentation/driver-api/cxl
 F:	drivers/cxl/
-F:	include/linux/einj-cxl.h
-F:	include/linux/cxl-event.h
+F:	include/cxl/
 F:	include/uapi/linux/cxl_mem.h
 F:	tools/testing/cxl/
+1 -1
drivers/acpi/apei/einj-cxl.c
···
  *
  * Author: Ben Cheatham <benjamin.cheatham@amd.com>
  */
-#include <linux/einj-cxl.h>
 #include <linux/seq_file.h>
 #include <linux/pci.h>
+#include <cxl/einj.h>

 #include "apei-internal.h"
+1 -1
drivers/acpi/apei/ghes.c
···
 #include <linux/timer.h>
 #include <linux/cper.h>
 #include <linux/cleanup.h>
-#include <linux/cxl-event.h>
 #include <linux/platform_device.h>
 #include <linux/mutex.h>
 #include <linux/ratelimit.h>
···
 #include <acpi/apei.h>
 #include <asm/fixmap.h>
 #include <asm/tlbflush.h>
+#include <cxl/event.h>
 #include <ras/ras_event.h>

 #include "apei-internal.h"
+496 -16
drivers/cxl/core/cdat.c
···
 #include "cxlmem.h"
 #include "core.h"
 #include "cxl.h"
-#include "core.h"

 struct dsmas_entry {
 	struct range dpa_range;
 	u8 handle;
 	struct access_coordinate coord[ACCESS_COORDINATE_MAX];
-
+	struct access_coordinate cdat_coord[ACCESS_COORDINATE_MAX];
 	int entries;
 	int qos_class;
 };
···
 	val = cdat_normalize(le16_to_cpu(le_val), le64_to_cpu(le_base),
 			     dslbis->data_type);

-	cxl_access_coordinate_set(dent->coord, dslbis->data_type, val);
+	cxl_access_coordinate_set(dent->cdat_coord, dslbis->data_type, val);

 	return 0;
 }
···
 	xa_for_each(dsmas_xa, index, dent) {
 		int qos_class;

-		cxl_coordinates_combine(dent->coord, dent->coord, ep_c);
+		cxl_coordinates_combine(dent->coord, dent->cdat_coord, ep_c);
 		dent->entries = 1;
 		rc = cxl_root->ops->qos_class(cxl_root,
 					      &dent->coord[ACCESS_COORDINATE_CPU],
···
 static void update_perf_entry(struct device *dev, struct dsmas_entry *dent,
 			      struct cxl_dpa_perf *dpa_perf)
 {
-	for (int i = 0; i < ACCESS_COORDINATE_MAX; i++)
+	for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) {
 		dpa_perf->coord[i] = dent->coord[i];
+		dpa_perf->cdat_coord[i] = dent->cdat_coord[i];
+	}
 	dpa_perf->dpa_range = dent->dpa_range;
 	dpa_perf->qos_class = dent->qos_class;
 	dev_dbg(dev,
···

 MODULE_IMPORT_NS(CXL);

-void cxl_region_perf_data_calculate(struct cxl_region *cxlr,
-				    struct cxl_endpoint_decoder *cxled)
+static void cxl_bandwidth_add(struct access_coordinate *coord,
+			      struct access_coordinate *c1,
+			      struct access_coordinate *c2)
+{
+	for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) {
+		coord[i].read_bandwidth = c1[i].read_bandwidth +
+					  c2[i].read_bandwidth;
+		coord[i].write_bandwidth = c1[i].write_bandwidth +
+					   c2[i].write_bandwidth;
+	}
+}
+
+static bool dpa_perf_contains(struct cxl_dpa_perf *perf,
+			      struct resource *dpa_res)
+{
+	struct range dpa = {
+		.start = dpa_res->start,
+		.end = dpa_res->end,
+	};
+
+	return range_contains(&perf->dpa_range, &dpa);
+}
+
+static struct cxl_dpa_perf *cxled_get_dpa_perf(struct cxl_endpoint_decoder *cxled,
+					       enum cxl_decoder_mode mode)
 {
 	struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
-	struct cxl_dev_state *cxlds = cxlmd->cxlds;
-	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
-	struct range dpa = {
-		.start = cxled->dpa_res->start,
-		.end = cxled->dpa_res->end,
-	};
+	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
 	struct cxl_dpa_perf *perf;

-	switch (cxlr->mode) {
+	switch (mode) {
 	case CXL_DECODER_RAM:
 		perf = &mds->ram_perf;
 		break;
···
 		perf = &mds->pmem_perf;
 		break;
 	default:
-		return;
+		return ERR_PTR(-EINVAL);
 	}
+
+	if (!dpa_perf_contains(perf, cxled->dpa_res))
+		return ERR_PTR(-EINVAL);
+
+	return perf;
+}
+
+/*
+ * Transient context for containing the current calculation of bandwidth when
+ * doing walking the port hierarchy to deal with shared upstream link.
+ */
+struct cxl_perf_ctx {
+	struct access_coordinate coord[ACCESS_COORDINATE_MAX];
+	struct cxl_port *port;
+};
+
+/**
+ * cxl_endpoint_gather_bandwidth - collect all the endpoint bandwidth in an xarray
+ * @cxlr: CXL region for the bandwidth calculation
+ * @cxled: endpoint decoder to start on
+ * @usp_xa: (output) the xarray that collects all the bandwidth coordinates
+ *          indexed by the upstream device with data of 'struct cxl_perf_ctx'.
+ * @gp_is_root: (output) bool of whether the grandparent is cxl root.
+ *
+ * Return: 0 for success or -errno
+ *
+ * Collects aggregated endpoint bandwidth and store the bandwidth in
+ * an xarray indexed by the upstream device of the switch or the RP
+ * device. Each endpoint consists the minimum of the bandwidth from DSLBIS
+ * from the endpoint CDAT, the endpoint upstream link bandwidth, and the
+ * bandwidth from the SSLBIS of the switch CDAT for the switch upstream port to
+ * the downstream port that's associated with the endpoint. If the
+ * device is directly connected to a RP, then no SSLBIS is involved.
+ */
+static int cxl_endpoint_gather_bandwidth(struct cxl_region *cxlr,
+					 struct cxl_endpoint_decoder *cxled,
+					 struct xarray *usp_xa,
+					 bool *gp_is_root)
+{
+	struct cxl_port *endpoint = to_cxl_port(cxled->cxld.dev.parent);
+	struct cxl_port *parent_port = to_cxl_port(endpoint->dev.parent);
+	struct cxl_port *gp_port = to_cxl_port(parent_port->dev.parent);
+	struct access_coordinate pci_coord[ACCESS_COORDINATE_MAX];
+	struct access_coordinate sw_coord[ACCESS_COORDINATE_MAX];
+	struct access_coordinate ep_coord[ACCESS_COORDINATE_MAX];
+	struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
+	struct cxl_dev_state *cxlds = cxlmd->cxlds;
+	struct pci_dev *pdev = to_pci_dev(cxlds->dev);
+	struct cxl_perf_ctx *perf_ctx;
+	struct cxl_dpa_perf *perf;
+	unsigned long index;
+	void *ptr;
+	int rc;
+
+	if (cxlds->rcd)
+		return -ENODEV;
+
+	perf = cxled_get_dpa_perf(cxled, cxlr->mode);
+	if (IS_ERR(perf))
+		return PTR_ERR(perf);
+
+	gp_port = to_cxl_port(parent_port->dev.parent);
+	*gp_is_root = is_cxl_root(gp_port);
+
+	/*
+	 * If the grandparent is cxl root, then index is the root port,
+	 * otherwise it's the parent switch upstream device.
+	 */
+	if (*gp_is_root)
+		index = (unsigned long)endpoint->parent_dport->dport_dev;
+	else
+		index = (unsigned long)parent_port->uport_dev;
+
+	perf_ctx = xa_load(usp_xa, index);
+	if (!perf_ctx) {
+		struct cxl_perf_ctx *c __free(kfree) =
+			kzalloc(sizeof(*perf_ctx), GFP_KERNEL);
+
+		if (!c)
+			return -ENOMEM;
+		ptr = xa_store(usp_xa, index, c, GFP_KERNEL);
+		if (xa_is_err(ptr))
+			return xa_err(ptr);
+		perf_ctx = no_free_ptr(c);
+		perf_ctx->port = parent_port;
+	}
+
+	/* Direct upstream link from EP bandwidth */
+	rc = cxl_pci_get_bandwidth(pdev, pci_coord);
+	if (rc < 0)
+		return rc;
+
+	/*
+	 * Min of upstream link bandwidth and Endpoint CDAT bandwidth from
+	 * DSLBIS.
+	 */
+	cxl_coordinates_combine(ep_coord, pci_coord, perf->cdat_coord);
+
+	/*
+	 * If grandparent port is root, then there's no switch involved and
+	 * the endpoint is connected to a root port.
+	 */
+	if (!*gp_is_root) {
+		/*
+		 * Retrieve the switch SSLBIS for switch downstream port
+		 * associated with the endpoint bandwidth.
+		 */
+		rc = cxl_port_get_switch_dport_bandwidth(endpoint, sw_coord);
+		if (rc)
+			return rc;
+
+		/*
+		 * Min of the earlier coordinates with the switch SSLBIS
+		 * bandwidth
+		 */
+		cxl_coordinates_combine(ep_coord, ep_coord, sw_coord);
+	}
+
+	/*
+	 * Aggregate the computed bandwidth with the current aggregated bandwidth
+	 * of the endpoints with the same switch upstream device or RP.
+	 */
+	cxl_bandwidth_add(perf_ctx->coord, perf_ctx->coord, ep_coord);
+
+	return 0;
+}
+
+static void free_perf_xa(struct xarray *xa)
+{
+	struct cxl_perf_ctx *ctx;
+	unsigned long index;
+
+	if (!xa)
+		return;
+
+	xa_for_each(xa, index, ctx)
+		kfree(ctx);
+	xa_destroy(xa);
+	kfree(xa);
+}
+DEFINE_FREE(free_perf_xa, struct xarray *, if (_T) free_perf_xa(_T))
+
+/**
+ * cxl_switch_gather_bandwidth - collect all the bandwidth at switch level in an xarray
+ * @cxlr: The region being operated on
+ * @input_xa: xarray indexed by upstream device of a switch with data of 'struct
+ *            cxl_perf_ctx'
+ * @gp_is_root: (output) bool of whether the grandparent is cxl root.
+ *
+ * Return: a xarray of resulting cxl_perf_ctx per parent switch or root port
+ *         or ERR_PTR(-errno)
+ *
+ * Iterate through the xarray. Take the minimum of the downstream calculated
+ * bandwidth, the upstream link bandwidth, and the SSLBIS of the upstream
+ * switch if exists. Sum the resulting bandwidth under the switch upstream
+ * device or a RP device. The function can be iterated over multiple switches
+ * if the switches are present.
+ */
+static struct xarray *cxl_switch_gather_bandwidth(struct cxl_region *cxlr,
+						  struct xarray *input_xa,
+						  bool *gp_is_root)
+{
+	struct xarray *res_xa __free(free_perf_xa) =
+		kzalloc(sizeof(*res_xa), GFP_KERNEL);
+	struct access_coordinate coords[ACCESS_COORDINATE_MAX];
+	struct cxl_perf_ctx *ctx, *us_ctx;
+	unsigned long index, us_index;
+	int dev_count = 0;
+	int gp_count = 0;
+	void *ptr;
+	int rc;
+
+	if (!res_xa)
+		return ERR_PTR(-ENOMEM);
+	xa_init(res_xa);
+
+	xa_for_each(input_xa, index, ctx) {
+		struct device *dev = (struct device *)index;
+		struct cxl_port *port = ctx->port;
+		struct cxl_port *parent_port = to_cxl_port(port->dev.parent);
+		struct cxl_port *gp_port = to_cxl_port(parent_port->dev.parent);
+		struct cxl_dport *dport = port->parent_dport;
+		bool is_root = false;
+
+		dev_count++;
+		if (is_cxl_root(gp_port)) {
+			is_root = true;
+			gp_count++;
+		}
+
+		/*
+		 * If the grandparent is cxl root, then index is the root port,
+		 * otherwise it's the parent switch upstream device.
+		 */
+		if (is_root)
+			us_index = (unsigned long)port->parent_dport->dport_dev;
+		else
+			us_index = (unsigned long)parent_port->uport_dev;
+
+		us_ctx = xa_load(res_xa, us_index);
+		if (!us_ctx) {
+			struct cxl_perf_ctx *n __free(kfree) =
+				kzalloc(sizeof(*n), GFP_KERNEL);
+
+			if (!n)
+				return ERR_PTR(-ENOMEM);
+
+			ptr = xa_store(res_xa, us_index, n, GFP_KERNEL);
+			if (xa_is_err(ptr))
+				return ERR_PTR(xa_err(ptr));
+			us_ctx = no_free_ptr(n);
+			us_ctx->port = parent_port;
+		}
+
+		/*
+		 * If the device isn't an upstream PCIe port, there's something
+		 * wrong with the topology.
+		 */
+		if (!dev_is_pci(dev))
+			return ERR_PTR(-EINVAL);
+
+		/* Retrieve the upstream link bandwidth */
+		rc = cxl_pci_get_bandwidth(to_pci_dev(dev), coords);
+		if (rc)
+			return ERR_PTR(-ENXIO);
+
+		/*
+		 * Take the min of downstream bandwidth and the upstream link
+		 * bandwidth.
+		 */
+		cxl_coordinates_combine(coords, coords, ctx->coord);
+
+		/*
+		 * Take the min of the calculated bandwdith and the upstream
+		 * switch SSLBIS bandwidth if there's a parent switch
+		 */
+		if (!is_root)
+			cxl_coordinates_combine(coords, coords, dport->coord);
+
+		/*
+		 * Aggregate the calculated bandwidth common to an upstream
+		 * switch.
+		 */
+		cxl_bandwidth_add(us_ctx->coord, us_ctx->coord, coords);
+	}
+
+	/* Asymmetric topology detected. */
+	if (gp_count) {
+		if (gp_count != dev_count) {
+			dev_dbg(&cxlr->dev,
+				"Asymmetric hierarchy detected, bandwidth not updated\n");
+			return ERR_PTR(-EOPNOTSUPP);
+		}
+		*gp_is_root = true;
+	}
+
+	return no_free_ptr(res_xa);
+}
+
+/**
+ * cxl_rp_gather_bandwidth - handle the root port level bandwidth collection
+ * @xa: the xarray that holds the cxl_perf_ctx that has the bandwidth calculated
+ *      below each root port device.
+ *
+ * Return: xarray that holds cxl_perf_ctx per host bridge or ERR_PTR(-errno)
+ */
+static struct xarray *cxl_rp_gather_bandwidth(struct xarray *xa)
+{
+	struct xarray *hb_xa __free(free_perf_xa) =
+		kzalloc(sizeof(*hb_xa), GFP_KERNEL);
+	struct cxl_perf_ctx *ctx;
+	unsigned long index;
+
+	if (!hb_xa)
+		return ERR_PTR(-ENOMEM);
+	xa_init(hb_xa);
+
+	xa_for_each(xa, index, ctx) {
+		struct cxl_port *port = ctx->port;
+		unsigned long hb_index = (unsigned long)port->uport_dev;
+		struct cxl_perf_ctx *hb_ctx;
+		void *ptr;
+
+		hb_ctx = xa_load(hb_xa, hb_index);
+		if (!hb_ctx) {
+			struct cxl_perf_ctx *n __free(kfree) =
+				kzalloc(sizeof(*n), GFP_KERNEL);
+
+			if (!n)
+				return ERR_PTR(-ENOMEM);
+			ptr = xa_store(hb_xa, hb_index, n, GFP_KERNEL);
+			if (xa_is_err(ptr))
+				return ERR_PTR(xa_err(ptr));
+			hb_ctx = no_free_ptr(n);
+			hb_ctx->port = port;
+		}
+
+		cxl_bandwidth_add(hb_ctx->coord, hb_ctx->coord, ctx->coord);
+	}
+
+	return no_free_ptr(hb_xa);
+}
+
+/**
+ * cxl_hb_gather_bandwidth - handle the host bridge level bandwidth collection
+ * @xa: the xarray that holds the cxl_perf_ctx that has the bandwidth calculated
+ *      below each host bridge.
+ *
+ * Return: xarray that holds cxl_perf_ctx per ACPI0017 device or ERR_PTR(-errno)
+ */
+static struct xarray *cxl_hb_gather_bandwidth(struct xarray *xa)
+{
+	struct xarray *mw_xa __free(free_perf_xa) =
+		kzalloc(sizeof(*mw_xa), GFP_KERNEL);
+	struct cxl_perf_ctx *ctx;
+	unsigned long index;
+
+	if (!mw_xa)
+		return ERR_PTR(-ENOMEM);
+	xa_init(mw_xa);
+
+	xa_for_each(xa, index, ctx) {
+		struct cxl_port *port = ctx->port;
+		struct cxl_port *parent_port;
+		struct cxl_perf_ctx *mw_ctx;
+		struct cxl_dport *dport;
+		unsigned long mw_index;
+		void *ptr;
+
+		parent_port = to_cxl_port(port->dev.parent);
+		mw_index = (unsigned long)parent_port->uport_dev;
+
+		mw_ctx = xa_load(mw_xa, mw_index);
+		if (!mw_ctx) {
+			struct cxl_perf_ctx *n __free(kfree) =
+				kzalloc(sizeof(*n), GFP_KERNEL);
+
+			if (!n)
+				return ERR_PTR(-ENOMEM);
+			ptr = xa_store(mw_xa, mw_index, n, GFP_KERNEL);
+			if (xa_is_err(ptr))
+				return ERR_PTR(xa_err(ptr));
+			mw_ctx = no_free_ptr(n);
+		}
+
+		dport = port->parent_dport;
+		cxl_coordinates_combine(ctx->coord, ctx->coord, dport->coord);
+		cxl_bandwidth_add(mw_ctx->coord, mw_ctx->coord, ctx->coord);
+	}
+
+	return no_free_ptr(mw_xa);
+}
+
+/**
+ * cxl_region_update_bandwidth - Update the bandwidth access coordinates of a region
+ * @cxlr: The region being operated on
+ * @input_xa: xarray holds cxl_perf_ctx wht calculated bandwidth per ACPI0017 instance
+ */
+static void cxl_region_update_bandwidth(struct cxl_region *cxlr,
+					struct xarray *input_xa)
+{
+	struct access_coordinate coord[ACCESS_COORDINATE_MAX];
+	struct cxl_perf_ctx *ctx;
+	unsigned long index;
+
+	memset(coord, 0, sizeof(coord));
+	xa_for_each(input_xa, index, ctx)
+		cxl_bandwidth_add(coord, coord, ctx->coord);
+
+	for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) {
+		cxlr->coord[i].read_bandwidth = coord[i].read_bandwidth;
+		cxlr->coord[i].write_bandwidth = coord[i].write_bandwidth;
+	}
+}
+
+/**
+ * cxl_region_shared_upstream_bandwidth_update - Recalculate the bandwidth for
+ *						 the region
+ * @cxlr: the cxl region to recalculate
+ *
+ * The function walks the topology from bottom up and calculates the bandwidth. It
+ * starts at the endpoints, processes at the switches if any, processes at the rootport
+ * level, at the host bridge level, and finally aggregates at the region.
+ */
+void cxl_region_shared_upstream_bandwidth_update(struct cxl_region *cxlr)
+{
+	struct xarray *working_xa;
+	int root_count = 0;
+	bool is_root;
+	int rc;

 	lockdep_assert_held(&cxl_dpa_rwsem);

-	if (!range_contains(&perf->dpa_range, &dpa))
+	struct xarray *usp_xa __free(free_perf_xa) =
+		kzalloc(sizeof(*usp_xa), GFP_KERNEL);
+
+	if (!usp_xa)
+		return;
+
+	xa_init(usp_xa);
+
+	/* Collect bandwidth data from all the endpoints. */
+	for (int i = 0; i < cxlr->params.nr_targets; i++) {
+		struct cxl_endpoint_decoder *cxled = cxlr->params.targets[i];
+
+		is_root = false;
+		rc = cxl_endpoint_gather_bandwidth(cxlr, cxled, usp_xa, &is_root);
+		if (rc)
+			return;
+		root_count += is_root;
+	}
+
+	/* Detect asymmetric hierarchy with some direct attached endpoints. */
+	if (root_count && root_count != cxlr->params.nr_targets) {
+		dev_dbg(&cxlr->dev,
+			"Asymmetric hierarchy detected, bandwidth not updated\n");
+		return;
+	}
+
+	/*
+	 * Walk up one or more switches to deal with the bandwidth of the
+	 * switches if they exist. Endpoints directly attached to RPs skip
+	 * over this part.
+	 */
+	if (!root_count) {
+		do {
+			working_xa = cxl_switch_gather_bandwidth(cxlr, usp_xa,
+								 &is_root);
+			if (IS_ERR(working_xa))
+				return;
+			free_perf_xa(usp_xa);
+			usp_xa = working_xa;
+		} while (!is_root);
+	}
+
+	/* Handle the bandwidth at the root port of the hierarchy */
+	working_xa = cxl_rp_gather_bandwidth(usp_xa);
+	if (IS_ERR(working_xa))
+		return;
+	free_perf_xa(usp_xa);
+	usp_xa = working_xa;
+
+	/* Handle the bandwidth at the host bridge of the hierarchy */
+	working_xa = cxl_hb_gather_bandwidth(usp_xa);
+	if (IS_ERR(working_xa))
+		return;
+	free_perf_xa(usp_xa);
+	usp_xa = working_xa;
+
+	/*
+	 * Aggregate all the bandwidth collected per CFMWS (ACPI0017) and
+	 * update the region bandwidth with the final calculated values.
+	 */
+	cxl_region_update_bandwidth(cxlr, usp_xa);
+}
+
+void cxl_region_perf_data_calculate(struct cxl_region *cxlr,
+				    struct cxl_endpoint_decoder *cxled)
+{
+	struct cxl_dpa_perf *perf;
+
+	lockdep_assert_held(&cxl_dpa_rwsem);
+
+	perf = cxled_get_dpa_perf(cxled, cxlr->mode);
+	if (IS_ERR(perf))
 		return;

 	for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) {
+3 -1
drivers/cxl/core/core.h
···
 };

 long cxl_pci_get_latency(struct pci_dev *pdev);
-
+int cxl_pci_get_bandwidth(struct pci_dev *pdev, struct access_coordinate *c);
 int cxl_update_hmat_access_coordinates(int nid, struct cxl_region *cxlr,
 				       enum access_coordinate_class access);
 bool cxl_need_node_perf_attrs_update(int nid);
+int cxl_port_get_switch_dport_bandwidth(struct cxl_port *port,
+					struct access_coordinate *c);

 #endif /* __CXL_CORE_H__ */
+61 -35
drivers/cxl/core/mbox.c
···

 /**
  * cxl_internal_send_cmd() - Kernel internal interface to send a mailbox command
- * @mds: The driver data for the operation
+ * @cxl_mbox: CXL mailbox context
  * @mbox_cmd: initialized command to execute
  *
  * Context: Any context.
···
  * error. While this distinction can be useful for commands from userspace, the
  * kernel will only be able to use results when both are successful.
  */
-int cxl_internal_send_cmd(struct cxl_memdev_state *mds,
+int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox,
 			  struct cxl_mbox_cmd *mbox_cmd)
 {
 	size_t out_size, min_out;
 	int rc;

-	if (mbox_cmd->size_in > mds->payload_size ||
-	    mbox_cmd->size_out > mds->payload_size)
+	if (mbox_cmd->size_in > cxl_mbox->payload_size ||
+	    mbox_cmd->size_out > cxl_mbox->payload_size)
 		return -E2BIG;

 	out_size = mbox_cmd->size_out;
 	min_out = mbox_cmd->min_out;
-	rc = mds->mbox_send(mds, mbox_cmd);
+	rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd);
 	/*
 	 * EIO is reserved for a payload size mismatch and mbox_send()
 	 * may not return this error.
···
 			      struct cxl_memdev_state *mds, u16 opcode,
 			      size_t in_size, size_t out_size, u64 in_payload)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	*mbox = (struct cxl_mbox_cmd) {
 		.opcode = opcode,
 		.size_in = in_size,
···
 	/* Prepare to handle a full payload for variable sized output */
 	if (out_size == CXL_VARIABLE_PAYLOAD)
-		mbox->size_out = mds->payload_size;
+		mbox->size_out = cxl_mbox->payload_size;
 	else
 		mbox->size_out = out_size;

···
 			      const struct cxl_send_command *send_cmd,
 			      struct cxl_memdev_state *mds)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
+
 	if (send_cmd->raw.rsvd)
 		return -EINVAL;

···
 	 * gets passed along without further checking, so it must be
 	 * validated here.
 	 */
-	if (send_cmd->out.size > mds->payload_size)
+	if (send_cmd->out.size > cxl_mbox->payload_size)
 		return -EINVAL;

 	if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
···
 				      struct cxl_memdev_state *mds,
 				      const struct cxl_send_command *send_cmd)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	struct cxl_mem_command mem_cmd;
 	int rc;

···
 	 * supports, but output can be arbitrarily large (simply write out as
 	 * much data as the hardware provides).
 	 */
-	if (send_cmd->in.size > mds->payload_size)
+	if (send_cmd->in.size > cxl_mbox->payload_size)
 		return -EINVAL;

 	/* Sanitize and construct a cxl_mem_command */
···
 		return put_user(ARRAY_SIZE(cxl_mem_commands), &q->n_commands);

 	/*
-	 * otherwise, return max(n_commands, total commands) cxl_command_info
+	 * otherwise, return min(n_commands, total commands) cxl_command_info
 	 * structures.
 	 */
 	cxl_for_each_cmd(cmd) {
···
 			u64 out_payload, s32 *size_out,
 			u32 *retval)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	struct device *dev = mds->cxlds.dev;
 	int rc;

···
 		cxl_mem_opcode_to_name(mbox_cmd->opcode),
 		mbox_cmd->opcode, mbox_cmd->size_in);

-	rc = mds->mbox_send(mds, mbox_cmd);
+	rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd);
 	if (rc)
 		goto out;

···
 static int cxl_xfer_log(struct cxl_memdev_state *mds, uuid_t *uuid,
 			u32 *size, u8 *out)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	u32 remaining = *size;
 	u32 offset = 0;

 	while (remaining) {
-		u32 xfer_size = min_t(u32, remaining, mds->payload_size);
+		u32 xfer_size = min_t(u32, remaining, cxl_mbox->payload_size);
 		struct cxl_mbox_cmd mbox_cmd;
 		struct cxl_mbox_get_log log;
 		int rc;
···
 			.payload_out = out,
 		};

-		rc = cxl_internal_send_cmd(mds, &mbox_cmd);
+		rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);

 		/*
 		 * The output payload length that indicates the number
···
 static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_memdev_state *mds)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	struct cxl_mbox_get_supported_logs *ret;
 	struct cxl_mbox_cmd mbox_cmd;
 	int rc;

-	ret = kvmalloc(mds->payload_size, GFP_KERNEL);
+	ret = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL);
 	if (!ret)
 		return ERR_PTR(-ENOMEM);

 	mbox_cmd = (struct cxl_mbox_cmd) {
 		.opcode = CXL_MBOX_OP_GET_SUPPORTED_LOGS,
-		.size_out = mds->payload_size,
+		.size_out = cxl_mbox->payload_size,
 		.payload_out = ret,
 		/* At least the record number field must be valid */
 		.min_out = 2,
 	};
-	rc = cxl_internal_send_cmd(mds, &mbox_cmd);
+	rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
 	if (rc < 0) {
 		kvfree(ret);
 		return ERR_PTR(rc);
···
 			  enum cxl_event_log_type log,
 			  struct cxl_get_event_payload *get_pl)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	struct cxl_mbox_clear_event_payload *payload;
 	u16 total = le16_to_cpu(get_pl->record_count);
 	u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES;
···
 	int i;

 	/* Payload size may limit the max handles */
-	if (pl_size > mds->payload_size) {
-		max_handles = (mds->payload_size - sizeof(*payload)) /
+	if (pl_size > cxl_mbox->payload_size) {
+		max_handles = (cxl_mbox->payload_size - sizeof(*payload)) /
 			      sizeof(__le16);
 		pl_size = struct_size(payload, handles, max_handles);
 	}
···
 		if (i == max_handles) {
 			payload->nr_recs = i;
-			rc = cxl_internal_send_cmd(mds, &mbox_cmd);
+			rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
 			if (rc)
 				goto free_pl;
 			i = 0;
···
 	if (i) {
 		payload->nr_recs = i;
 		mbox_cmd.size_in = struct_size(payload, handles, i);
-		rc = cxl_internal_send_cmd(mds, &mbox_cmd);
+		rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
 		if (rc)
 			goto free_pl;
 	}
···
 static void cxl_mem_get_records_log(struct cxl_memdev_state *mds,
 				    enum cxl_event_log_type type)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	struct cxl_memdev *cxlmd = mds->cxlds.cxlmd;
 	struct device *dev = mds->cxlds.dev;
 	struct cxl_get_event_payload *payload;
···
 		.payload_in = &log_type,
 		.size_in = sizeof(log_type),
 		.payload_out = payload,
-		.size_out = mds->payload_size,
+		.size_out = cxl_mbox->payload_size,
 		.min_out = struct_size(payload, records, 0),
 	};

-	rc = cxl_internal_send_cmd(mds, &mbox_cmd);
+	rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
 	if (rc) {
 		dev_err_ratelimited(dev,
 			"Event log '%d': Failed to query event records : %d",
···
  */
 static int cxl_mem_get_partition_info(struct cxl_memdev_state *mds)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	struct cxl_mbox_get_partition_info pi;
 	struct cxl_mbox_cmd mbox_cmd;
 	int rc;
···
 		.size_out = sizeof(pi),
 		.payload_out = &pi,
 	};
-	rc = cxl_internal_send_cmd(mds, &mbox_cmd);
+	rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
 	if (rc)
 		return rc;

···
  */
 int cxl_dev_state_identify(struct cxl_memdev_state *mds)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	/* See CXL 2.0 Table 175 Identify Memory Device Output Payload */
 	struct cxl_mbox_identify id;
 	struct cxl_mbox_cmd mbox_cmd;
···
 		.size_out = sizeof(id),
 		.payload_out = &id,
 	};
-	rc = cxl_internal_send_cmd(mds, &mbox_cmd);
+	rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
 	if (rc < 0)
 		return rc;

···
 static int __cxl_mem_sanitize(struct cxl_memdev_state *mds, u16 cmd)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
 	int rc;
 	u32 sec_out = 0;
 	struct cxl_get_security_output {
···
 		.size_out = sizeof(out),
 	};
 	struct cxl_mbox_cmd mbox_cmd = { .opcode = cmd };
-	struct cxl_dev_state *cxlds = &mds->cxlds;

 	if (cmd != CXL_MBOX_OP_SANITIZE && cmd != CXL_MBOX_OP_SECURE_ERASE)
 		return -EINVAL;

-	rc = cxl_internal_send_cmd(mds, &sec_cmd);
+	rc = cxl_internal_send_cmd(cxl_mbox, &sec_cmd);
 	if (rc < 0) {
-		dev_err(cxlds->dev, "Failed to get security
state : %d", rc); 1168 + dev_err(cxl_mbox->host, "Failed to get security state : %d", rc); 1182 1169 return rc; 1183 1170 } 1184 1171 ··· 1194 1183 sec_out & CXL_PMEM_SEC_STATE_LOCKED) 1195 1184 return -EINVAL; 1196 1185 1197 - rc = cxl_internal_send_cmd(mds, &mbox_cmd); 1186 + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); 1198 1187 if (rc < 0) { 1199 - dev_err(cxlds->dev, "Failed to sanitize device : %d", rc); 1188 + dev_err(cxl_mbox->host, "Failed to sanitize device : %d", rc); 1200 1189 return rc; 1201 1190 } 1202 1191 ··· 1225 1214 int rc; 1226 1215 1227 1216 /* synchronize with cxl_mem_probe() and decoder write operations */ 1228 - device_lock(&cxlmd->dev); 1217 + guard(device)(&cxlmd->dev); 1229 1218 endpoint = cxlmd->endpoint; 1230 1219 down_read(&cxl_region_rwsem); 1231 1220 /* ··· 1237 1226 else 1238 1227 rc = -EBUSY; 1239 1228 up_read(&cxl_region_rwsem); 1240 - device_unlock(&cxlmd->dev); 1241 1229 1242 1230 return rc; 1243 1231 } ··· 1310 1300 1311 1301 int cxl_set_timestamp(struct cxl_memdev_state *mds) 1312 1302 { 1303 + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; 1313 1304 struct cxl_mbox_cmd mbox_cmd; 1314 1305 struct cxl_mbox_set_timestamp_in pi; 1315 1306 int rc; ··· 1322 1311 .payload_in = &pi, 1323 1312 }; 1324 1313 1325 - rc = cxl_internal_send_cmd(mds, &mbox_cmd); 1314 + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); 1326 1315 /* 1327 1316 * Command is optional. Devices may have another way of providing 1328 1317 * a timestamp, or may return all 0s in timestamp fields. 
··· 1339 1328 struct cxl_region *cxlr) 1340 1329 { 1341 1330 struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 1331 + struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox; 1342 1332 struct cxl_mbox_poison_out *po; 1343 1333 struct cxl_mbox_poison_in pi; 1344 1334 int nr_records = 0; ··· 1358 1346 .opcode = CXL_MBOX_OP_GET_POISON, 1359 1347 .size_in = sizeof(pi), 1360 1348 .payload_in = &pi, 1361 - .size_out = mds->payload_size, 1349 + .size_out = cxl_mbox->payload_size, 1362 1350 .payload_out = po, 1363 1351 .min_out = struct_size(po, record, 0), 1364 1352 }; 1365 1353 1366 - rc = cxl_internal_send_cmd(mds, &mbox_cmd); 1354 + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); 1367 1355 if (rc) 1368 1356 break; 1369 1357 ··· 1394 1382 /* Get Poison List output buffer is protected by mds->poison.lock */ 1395 1383 static int cxl_poison_alloc_buf(struct cxl_memdev_state *mds) 1396 1384 { 1397 - mds->poison.list_out = kvmalloc(mds->payload_size, GFP_KERNEL); 1385 + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; 1386 + 1387 + mds->poison.list_out = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL); 1398 1388 if (!mds->poison.list_out) 1399 1389 return -ENOMEM; 1400 1390 ··· 1422 1408 } 1423 1409 EXPORT_SYMBOL_NS_GPL(cxl_poison_state_init, CXL); 1424 1410 1411 + int cxl_mailbox_init(struct cxl_mailbox *cxl_mbox, struct device *host) 1412 + { 1413 + if (!cxl_mbox || !host) 1414 + return -EINVAL; 1415 + 1416 + cxl_mbox->host = host; 1417 + mutex_init(&cxl_mbox->mbox_mutex); 1418 + rcuwait_init(&cxl_mbox->mbox_wait); 1419 + 1420 + return 0; 1421 + } 1422 + EXPORT_SYMBOL_NS_GPL(cxl_mailbox_init, CXL); 1423 + 1425 1424 struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev) 1426 1425 { 1427 1426 struct cxl_memdev_state *mds; ··· 1445 1418 return ERR_PTR(-ENOMEM); 1446 1419 } 1447 1420 1448 - mutex_init(&mds->mbox_mutex); 1449 1421 mutex_init(&mds->event.log_lock); 1450 1422 mds->cxlds.dev = dev; 1451 1423 mds->cxlds.reg_map.host = dev;
drivers/cxl/core/memdev.c | +24 -17
··· 58 58 59 59 if (!mds) 60 60 return sysfs_emit(buf, "\n"); 61 - return sysfs_emit(buf, "%zu\n", mds->payload_size); 61 + return sysfs_emit(buf, "%zu\n", cxlds->cxl_mbox.payload_size); 62 62 } 63 63 static DEVICE_ATTR_RO(payload_max); 64 64 ··· 124 124 { 125 125 struct cxl_memdev *cxlmd = to_cxl_memdev(dev); 126 126 struct cxl_dev_state *cxlds = cxlmd->cxlds; 127 + struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox; 127 128 struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); 128 129 unsigned long state = mds->security.state; 129 130 int rc = 0; 130 131 131 132 /* sync with latest submission state */ 132 - mutex_lock(&mds->mbox_mutex); 133 + mutex_lock(&cxl_mbox->mbox_mutex); 133 134 if (mds->security.sanitize_active) 134 135 rc = sysfs_emit(buf, "sanitize\n"); 135 - mutex_unlock(&mds->mbox_mutex); 136 + mutex_unlock(&cxl_mbox->mbox_mutex); 136 137 if (rc) 137 138 return rc; 138 139 ··· 278 277 279 278 int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa) 280 279 { 281 - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 280 + struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox; 282 281 struct cxl_mbox_inject_poison inject; 283 282 struct cxl_poison_record record; 284 283 struct cxl_mbox_cmd mbox_cmd; ··· 308 307 .size_in = sizeof(inject), 309 308 .payload_in = &inject, 310 309 }; 311 - rc = cxl_internal_send_cmd(mds, &mbox_cmd); 310 + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); 312 311 if (rc) 313 312 goto out; 314 313 315 314 cxlr = cxl_dpa_to_region(cxlmd, dpa); 316 315 if (cxlr) 317 - dev_warn_once(mds->cxlds.dev, 316 + dev_warn_once(cxl_mbox->host, 318 317 "poison inject dpa:%#llx region: %s\n", dpa, 319 318 dev_name(&cxlr->dev)); 320 319 ··· 333 332 334 333 int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa) 335 334 { 336 - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 335 + struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox; 337 336 struct cxl_mbox_clear_poison clear; 338 337 struct cxl_poison_record 
record; 339 338 struct cxl_mbox_cmd mbox_cmd; ··· 372 371 .payload_in = &clear, 373 372 }; 374 373 375 - rc = cxl_internal_send_cmd(mds, &mbox_cmd); 374 + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); 376 375 if (rc) 377 376 goto out; 378 377 379 378 cxlr = cxl_dpa_to_region(cxlmd, dpa); 380 379 if (cxlr) 381 - dev_warn_once(mds->cxlds.dev, 380 + dev_warn_once(cxl_mbox->host, 382 381 "poison clear dpa:%#llx region: %s\n", dpa, 383 382 dev_name(&cxlr->dev)); 384 383 ··· 715 714 */ 716 715 static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds) 717 716 { 717 + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; 718 718 struct cxl_mbox_get_fw_info info; 719 719 struct cxl_mbox_cmd mbox_cmd; 720 720 int rc; ··· 726 724 .payload_out = &info, 727 725 }; 728 726 729 - rc = cxl_internal_send_cmd(mds, &mbox_cmd); 727 + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); 730 728 if (rc < 0) 731 729 return rc; 732 730 ··· 750 748 */ 751 749 static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot) 752 750 { 751 + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; 753 752 struct cxl_mbox_activate_fw activate; 754 753 struct cxl_mbox_cmd mbox_cmd; 755 754 ··· 767 764 activate.action = CXL_FW_ACTIVATE_OFFLINE; 768 765 activate.slot = slot; 769 766 770 - return cxl_internal_send_cmd(mds, &mbox_cmd); 767 + return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); 771 768 } 772 769 773 770 /** ··· 782 779 */ 783 780 static int cxl_mem_abort_fw_xfer(struct cxl_memdev_state *mds) 784 781 { 782 + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; 785 783 struct cxl_mbox_transfer_fw *transfer; 786 784 struct cxl_mbox_cmd mbox_cmd; 787 785 int rc; ··· 802 798 803 799 transfer->action = CXL_FW_TRANSFER_ACTION_ABORT; 804 800 805 - rc = cxl_internal_send_cmd(mds, &mbox_cmd); 801 + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); 806 802 kfree(transfer); 807 803 return rc; 808 804 } ··· 833 829 { 834 830 struct cxl_memdev_state *mds = fwl->dd_handle; 835 831 struct 
cxl_mbox_transfer_fw *transfer; 832 + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; 836 833 837 834 if (!size) 838 835 return FW_UPLOAD_ERR_INVALID_SIZE; 839 836 840 837 mds->fw.oneshot = struct_size(transfer, data, size) < 841 - mds->payload_size; 838 + cxl_mbox->payload_size; 842 839 843 840 if (cxl_mem_get_fw_info(mds)) 844 841 return FW_UPLOAD_ERR_HW_ERROR; ··· 859 854 { 860 855 struct cxl_memdev_state *mds = fwl->dd_handle; 861 856 struct cxl_dev_state *cxlds = &mds->cxlds; 857 + struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox; 862 858 struct cxl_memdev *cxlmd = cxlds->cxlmd; 863 859 struct cxl_mbox_transfer_fw *transfer; 864 860 struct cxl_mbox_cmd mbox_cmd; ··· 883 877 * sizeof(*transfer) is 128. These constraints imply that @cur_size 884 878 * will always be 128b aligned. 885 879 */ 886 - cur_size = min_t(size_t, size, mds->payload_size - sizeof(*transfer)); 880 + cur_size = min_t(size_t, size, cxl_mbox->payload_size - sizeof(*transfer)); 887 881 888 882 remaining = size - cur_size; 889 883 size_in = struct_size(transfer, data, cur_size); ··· 927 921 .poll_count = 30, 928 922 }; 929 923 930 - rc = cxl_internal_send_cmd(mds, &mbox_cmd); 924 + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); 931 925 if (rc < 0) { 932 926 rc = FW_UPLOAD_ERR_RW_ERROR; 933 927 goto out_free; ··· 1065 1059 static void sanitize_teardown_notifier(void *data) 1066 1060 { 1067 1061 struct cxl_memdev_state *mds = data; 1062 + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox; 1068 1063 struct kernfs_node *state; 1069 1064 1070 1065 /* 1071 1066 * Prevent new irq triggered invocations of the workqueue and 1072 1067 * flush inflight invocations. 
1073 1068 */ 1074 - mutex_lock(&mds->mbox_mutex); 1069 + mutex_lock(&cxl_mbox->mbox_mutex); 1075 1070 state = mds->security.sanitize_node; 1076 1071 mds->security.sanitize_node = NULL; 1077 - mutex_unlock(&mds->mbox_mutex); 1072 + mutex_unlock(&cxl_mbox->mbox_mutex); 1078 1073 1079 1074 cancel_delayed_work_sync(&mds->security.poll_dwork); 1080 1075 sysfs_put(state);
drivers/cxl/core/pci.c | +71 -93
··· 211 211 } 212 212 EXPORT_SYMBOL_NS_GPL(cxl_await_media_ready, CXL); 213 213 214 - static int wait_for_valid(struct pci_dev *pdev, int d) 215 - { 216 - u32 val; 217 - int rc; 218 - 219 - /* 220 - * Memory_Info_Valid: When set, indicates that the CXL Range 1 Size high 221 - * and Size Low registers are valid. Must be set within 1 second of 222 - * deassertion of reset to CXL device. Likely it is already set by the 223 - * time this runs, but otherwise give a 1.5 second timeout in case of 224 - * clock skew. 225 - */ 226 - rc = pci_read_config_dword(pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(0), &val); 227 - if (rc) 228 - return rc; 229 - 230 - if (val & CXL_DVSEC_MEM_INFO_VALID) 231 - return 0; 232 - 233 - msleep(1500); 234 - 235 - rc = pci_read_config_dword(pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(0), &val); 236 - if (rc) 237 - return rc; 238 - 239 - if (val & CXL_DVSEC_MEM_INFO_VALID) 240 - return 0; 241 - 242 - return -ETIMEDOUT; 243 - } 244 - 245 214 static int cxl_set_mem_enable(struct cxl_dev_state *cxlds, u16 val) 246 215 { 247 216 struct pci_dev *pdev = to_pci_dev(cxlds->dev); ··· 291 322 return devm_add_action_or_reset(host, disable_hdm, cxlhdm); 292 323 } 293 324 294 - int cxl_dvsec_rr_decode(struct device *dev, int d, 325 + int cxl_dvsec_rr_decode(struct device *dev, struct cxl_port *port, 295 326 struct cxl_endpoint_dvsec_info *info) 296 327 { 297 328 struct pci_dev *pdev = to_pci_dev(dev); 329 + struct cxl_dev_state *cxlds = pci_get_drvdata(pdev); 298 330 int hdm_count, rc, i, ranges = 0; 331 + int d = cxlds->cxl_dvsec; 299 332 u16 cap, ctrl; 300 333 301 334 if (!d) { ··· 324 353 if (!hdm_count || hdm_count > 2) 325 354 return -EINVAL; 326 355 327 - rc = wait_for_valid(pdev, d); 328 - if (rc) { 329 - dev_dbg(dev, "Failure awaiting MEM_INFO_VALID (%d)\n", rc); 330 - return rc; 331 - } 332 - 333 356 /* 334 357 * The current DVSEC values are moot if the memory capability is 335 358 * disabled, and they will remain moot after the HDM Decoder ··· 341 376 u64 base, 
size; 342 377 u32 temp; 343 378 379 + rc = cxl_dvsec_mem_range_valid(cxlds, i); 380 + if (rc) 381 + return rc; 382 + 344 383 rc = pci_read_config_dword( 345 384 pdev, d + CXL_DVSEC_RANGE_SIZE_HIGH(i), &temp); 346 385 if (rc) ··· 359 390 360 391 size |= temp & CXL_DVSEC_MEM_SIZE_LOW_MASK; 361 392 if (!size) { 362 - info->dvsec_range[i] = (struct range) { 363 - .start = 0, 364 - .end = CXL_RESOURCE_NONE, 365 - }; 366 393 continue; 367 394 } 368 395 ··· 376 411 377 412 base |= temp & CXL_DVSEC_MEM_BASE_LOW_MASK; 378 413 379 - info->dvsec_range[i] = (struct range) { 414 + info->dvsec_range[ranges++] = (struct range) { 380 415 .start = base, 381 416 .end = base + size - 1 382 417 }; 383 - 384 - ranges++; 385 418 } 386 419 387 420 info->ranges = ranges; ··· 426 463 return -ENODEV; 427 464 } 428 465 429 - for (i = 0, allowed = 0; info->mem_enabled && i < info->ranges; i++) { 466 + if (!info->mem_enabled) { 467 + rc = devm_cxl_enable_hdm(&port->dev, cxlhdm); 468 + if (rc) 469 + return rc; 470 + 471 + return devm_cxl_enable_mem(&port->dev, cxlds); 472 + } 473 + 474 + for (i = 0, allowed = 0; i < info->ranges; i++) { 430 475 struct device *cxld_dev; 431 476 432 477 cxld_dev = device_find_child(&root->dev, &info->dvsec_range[i], ··· 448 477 allowed++; 449 478 } 450 479 451 - if (!allowed && info->mem_enabled) { 480 + if (!allowed) { 452 481 dev_err(dev, "Range register decodes outside platform defined CXL ranges.\n"); 453 482 return -ENXIO; 454 483 } ··· 462 491 * match. If at least one DVSEC range is enabled and allowed, skip HDM 463 492 * Decoder Capability Enable. 
464 493 */ 465 - if (info->mem_enabled) 466 - return 0; 467 - 468 - rc = devm_cxl_enable_hdm(&port->dev, cxlhdm); 469 - if (rc) 470 - return rc; 471 - 472 - return devm_cxl_enable_mem(&port->dev, cxlds); 494 + return 0; 473 495 } 474 496 EXPORT_SYMBOL_NS_GPL(cxl_hdm_decode_init, CXL); 475 497 ··· 736 772 737 773 static void cxl_dport_map_rch_aer(struct cxl_dport *dport) 738 774 { 739 - struct cxl_rcrb_info *ri = &dport->rcrb; 740 - void __iomem *dport_aer = NULL; 741 775 resource_size_t aer_phys; 742 776 struct device *host; 777 + u16 aer_cap; 743 778 744 - if (dport->rch && ri->aer_cap) { 779 + aer_cap = cxl_rcrb_to_aer(dport->dport_dev, dport->rcrb.base); 780 + if (aer_cap) { 745 781 host = dport->reg_map.host; 746 - aer_phys = ri->aer_cap + ri->base; 747 - dport_aer = devm_cxl_iomap_block(host, aer_phys, 748 - sizeof(struct aer_capability_regs)); 782 + aer_phys = aer_cap + dport->rcrb.base; 783 + dport->regs.dport_aer = devm_cxl_iomap_block(host, aer_phys, 784 + sizeof(struct aer_capability_regs)); 749 785 } 750 - 751 - dport->regs.dport_aer = dport_aer; 752 786 } 753 787 754 - static void cxl_dport_map_regs(struct cxl_dport *dport) 788 + static void cxl_dport_map_ras(struct cxl_dport *dport) 755 789 { 756 790 struct cxl_register_map *map = &dport->reg_map; 757 791 struct device *dev = dport->dport_dev; ··· 759 797 else if (cxl_map_component_regs(map, &dport->regs.component, 760 798 BIT(CXL_CM_CAP_CAP_ID_RAS))) 761 799 dev_dbg(dev, "Failed to map RAS capability.\n"); 762 - 763 - if (dport->rch) 764 - cxl_dport_map_rch_aer(dport); 765 800 } 766 801 767 802 static void cxl_disable_rch_root_ints(struct cxl_dport *dport) 768 803 { 769 804 void __iomem *aer_base = dport->regs.dport_aer; 770 - struct pci_host_bridge *bridge; 771 805 u32 aer_cmd_mask, aer_cmd; 772 806 773 807 if (!aer_base) 774 808 return; 775 - 776 - bridge = to_pci_host_bridge(dport->dport_dev); 777 809 778 810 /* 779 811 * Disable RCH root port command interrupts. 
··· 777 821 * the root cmd register's interrupts is required. But, PCI spec 778 822 * shows these are disabled by default on reset. 779 823 */ 780 - if (bridge->native_aer) { 781 - aer_cmd_mask = (PCI_ERR_ROOT_CMD_COR_EN | 782 - PCI_ERR_ROOT_CMD_NONFATAL_EN | 783 - PCI_ERR_ROOT_CMD_FATAL_EN); 784 - aer_cmd = readl(aer_base + PCI_ERR_ROOT_COMMAND); 785 - aer_cmd &= ~aer_cmd_mask; 786 - writel(aer_cmd, aer_base + PCI_ERR_ROOT_COMMAND); 787 - } 824 + aer_cmd_mask = (PCI_ERR_ROOT_CMD_COR_EN | 825 + PCI_ERR_ROOT_CMD_NONFATAL_EN | 826 + PCI_ERR_ROOT_CMD_FATAL_EN); 827 + aer_cmd = readl(aer_base + PCI_ERR_ROOT_COMMAND); 828 + aer_cmd &= ~aer_cmd_mask; 829 + writel(aer_cmd, aer_base + PCI_ERR_ROOT_COMMAND); 788 830 } 789 831 790 - void cxl_setup_parent_dport(struct device *host, struct cxl_dport *dport) 832 + /** 833 + * cxl_dport_init_ras_reporting - Setup CXL RAS report on this dport 834 + * @dport: the cxl_dport that needs to be initialized 835 + * @host: host device for devm operations 836 + */ 837 + void cxl_dport_init_ras_reporting(struct cxl_dport *dport, struct device *host) 791 838 { 792 - struct device *dport_dev = dport->dport_dev; 839 + dport->reg_map.host = host; 840 + cxl_dport_map_ras(dport); 793 841 794 842 if (dport->rch) { 795 - struct pci_host_bridge *host_bridge = to_pci_host_bridge(dport_dev); 843 + struct pci_host_bridge *host_bridge = to_pci_host_bridge(dport->dport_dev); 796 844 797 - if (host_bridge->native_aer) 798 - dport->rcrb.aer_cap = cxl_rcrb_to_aer(dport_dev, dport->rcrb.base); 799 - } 845 + if (!host_bridge->native_aer) 846 + return; 800 847 801 - dport->reg_map.host = host; 802 - cxl_dport_map_regs(dport); 803 - 804 - if (dport->rch) 848 + cxl_dport_map_rch_aer(dport); 805 849 cxl_disable_rch_root_ints(dport); 850 + } 806 851 } 807 - EXPORT_SYMBOL_NS_GPL(cxl_setup_parent_dport, CXL); 852 + EXPORT_SYMBOL_NS_GPL(cxl_dport_init_ras_reporting, CXL); 808 853 809 854 static void cxl_handle_rdport_cor_ras(struct cxl_dev_state *cxlds, 810 855 
struct cxl_dport *dport) ··· 872 915 struct pci_dev *pdev = to_pci_dev(cxlds->dev); 873 916 struct aer_capability_regs aer_regs; 874 917 struct cxl_dport *dport; 875 - struct cxl_port *port; 876 918 int severity; 877 919 878 - port = cxl_pci_find_port(pdev, &dport); 920 + struct cxl_port *port __free(put_cxl_port) = 921 + cxl_pci_find_port(pdev, &dport); 879 922 if (!port) 880 923 return; 881 - 882 - put_device(&port->dev); 883 924 884 925 if (!cxl_rch_get_aer_info(dport->regs.dport_aer, &aer_regs)) 885 926 return; ··· 1031 1076 __cxl_endpoint_decoder_reset_detected); 1032 1077 } 1033 1078 EXPORT_SYMBOL_NS_GPL(cxl_endpoint_decoder_reset_detected, CXL); 1079 + 1080 + int cxl_pci_get_bandwidth(struct pci_dev *pdev, struct access_coordinate *c) 1081 + { 1082 + int speed, bw; 1083 + u16 lnksta; 1084 + u32 width; 1085 + 1086 + speed = pcie_link_speed_mbps(pdev); 1087 + if (speed < 0) 1088 + return speed; 1089 + speed /= BITS_PER_BYTE; 1090 + 1091 + pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta); 1092 + width = FIELD_GET(PCI_EXP_LNKSTA_NLW, lnksta); 1093 + bw = speed * width; 1094 + 1095 + for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) { 1096 + c[i].read_bandwidth = bw; 1097 + c[i].write_bandwidth = bw; 1098 + } 1099 + 1100 + return 0; 1101 + }
drivers/cxl/core/port.c | +114 -98
··· 3 3 #include <linux/platform_device.h> 4 4 #include <linux/memregion.h> 5 5 #include <linux/workqueue.h> 6 - #include <linux/einj-cxl.h> 7 6 #include <linux/debugfs.h> 8 7 #include <linux/device.h> 9 8 #include <linux/module.h> ··· 10 11 #include <linux/slab.h> 11 12 #include <linux/idr.h> 12 13 #include <linux/node.h> 14 + #include <cxl/einj.h> 13 15 #include <cxlmem.h> 14 16 #include <cxlpci.h> 15 17 #include <cxl.h> ··· 828 828 &cxl_einj_inject_fops); 829 829 } 830 830 831 - static struct cxl_port *__devm_cxl_add_port(struct device *host, 832 - struct device *uport_dev, 833 - resource_size_t component_reg_phys, 834 - struct cxl_dport *parent_dport) 831 + static int cxl_port_add(struct cxl_port *port, 832 + resource_size_t component_reg_phys, 833 + struct cxl_dport *parent_dport) 835 834 { 836 - struct cxl_port *port; 837 - struct device *dev; 835 + struct device *dev __free(put_device) = &port->dev; 838 836 int rc; 839 837 840 - port = cxl_port_alloc(uport_dev, parent_dport); 841 - if (IS_ERR(port)) 842 - return port; 843 - 844 - dev = &port->dev; 845 - if (is_cxl_memdev(uport_dev)) { 846 - struct cxl_memdev *cxlmd = to_cxl_memdev(uport_dev); 838 + if (is_cxl_memdev(port->uport_dev)) { 839 + struct cxl_memdev *cxlmd = to_cxl_memdev(port->uport_dev); 847 840 struct cxl_dev_state *cxlds = cxlmd->cxlds; 848 841 849 842 rc = dev_set_name(dev, "endpoint%d", port->id); 850 843 if (rc) 851 - goto err; 844 + return rc; 852 845 853 846 /* 854 847 * The endpoint driver already enumerated the component and RAS ··· 854 861 } else if (parent_dport) { 855 862 rc = dev_set_name(dev, "port%d", port->id); 856 863 if (rc) 857 - goto err; 864 + return rc; 858 865 859 866 rc = cxl_port_setup_regs(port, component_reg_phys); 860 867 if (rc) 861 - goto err; 862 - } else 868 + return rc; 869 + } else { 863 870 rc = dev_set_name(dev, "root%d", port->id); 864 - if (rc) 865 - goto err; 871 + if (rc) 872 + return rc; 873 + } 866 874 867 875 rc = device_add(dev); 868 876 if (rc) 869 - 
goto err; 877 + return rc; 878 + 879 + /* Inhibit the cleanup function invoked */ 880 + dev = NULL; 881 + return 0; 882 + } 883 + 884 + static struct cxl_port *__devm_cxl_add_port(struct device *host, 885 + struct device *uport_dev, 886 + resource_size_t component_reg_phys, 887 + struct cxl_dport *parent_dport) 888 + { 889 + struct cxl_port *port; 890 + int rc; 891 + 892 + port = cxl_port_alloc(uport_dev, parent_dport); 893 + if (IS_ERR(port)) 894 + return port; 895 + 896 + rc = cxl_port_add(port, component_reg_phys, parent_dport); 897 + if (rc) 898 + return ERR_PTR(rc); 870 899 871 900 rc = devm_add_action_or_reset(host, unregister_port, port); 872 901 if (rc) ··· 906 891 port->pci_latency = cxl_pci_get_latency(to_pci_dev(uport_dev)); 907 892 908 893 return port; 909 - 910 - err: 911 - put_device(dev); 912 - return ERR_PTR(rc); 913 894 } 914 895 915 896 /** ··· 952 941 953 942 port = devm_cxl_add_port(host, host, CXL_RESOURCE_NONE, NULL); 954 943 if (IS_ERR(port)) 955 - return (struct cxl_root *)port; 944 + return ERR_CAST(port); 956 945 957 946 cxl_root = to_cxl_root(port); 958 947 cxl_root->ops = ops; ··· 1269 1258 static int add_ep(struct cxl_ep *new) 1270 1259 { 1271 1260 struct cxl_port *port = new->dport->port; 1272 - int rc; 1273 1261 1274 - device_lock(&port->dev); 1275 - if (port->dead) { 1276 - device_unlock(&port->dev); 1262 + guard(device)(&port->dev); 1263 + if (port->dead) 1277 1264 return -ENXIO; 1278 - } 1279 - rc = xa_insert(&port->endpoints, (unsigned long)new->ep, new, 1280 - GFP_KERNEL); 1281 - device_unlock(&port->dev); 1282 1265 1283 - return rc; 1266 + return xa_insert(&port->endpoints, (unsigned long)new->ep, 1267 + new, GFP_KERNEL); 1284 1268 } 1285 1269 1286 1270 /** ··· 1399 1393 struct cxl_port *endpoint = cxlmd->endpoint; 1400 1394 struct device *host = endpoint_host(endpoint); 1401 1395 1402 - device_lock(host); 1403 - if (host->driver && !endpoint->dead) { 1404 - devm_release_action(host, cxl_unlink_parent_dport, endpoint); 1405 - 
devm_release_action(host, cxl_unlink_uport, endpoint); 1406 - devm_release_action(host, unregister_port, endpoint); 1396 + scoped_guard(device, host) { 1397 + if (host->driver && !endpoint->dead) { 1398 + devm_release_action(host, cxl_unlink_parent_dport, endpoint); 1399 + devm_release_action(host, cxl_unlink_uport, endpoint); 1400 + devm_release_action(host, unregister_port, endpoint); 1401 + } 1402 + cxlmd->endpoint = NULL; 1407 1403 } 1408 - cxlmd->endpoint = NULL; 1409 - device_unlock(host); 1410 1404 put_device(&endpoint->dev); 1411 1405 put_device(host); 1412 1406 } ··· 1483 1477 .cxlmd = cxlmd, 1484 1478 .depth = i, 1485 1479 }; 1486 - struct device *dev; 1487 1480 struct cxl_ep *ep; 1488 1481 bool died = false; 1489 1482 1490 - dev = bus_find_device(&cxl_bus_type, NULL, &ctx, 1491 - port_has_memdev); 1483 + struct device *dev __free(put_device) = 1484 + bus_find_device(&cxl_bus_type, NULL, &ctx, port_has_memdev); 1492 1485 if (!dev) 1493 1486 continue; 1494 1487 port = to_cxl_port(dev); ··· 1517 1512 dev_name(&port->dev)); 1518 1513 delete_switch_port(port); 1519 1514 } 1520 - put_device(&port->dev); 1521 1515 device_unlock(&parent_port->dev); 1522 1516 } 1523 1517 } ··· 1544 1540 struct device *dport_dev) 1545 1541 { 1546 1542 struct device *dparent = grandparent(dport_dev); 1547 - struct cxl_port *port, *parent_port = NULL; 1548 1543 struct cxl_dport *dport, *parent_dport; 1549 1544 resource_size_t component_reg_phys; 1550 1545 int rc; ··· 1559 1556 return -ENXIO; 1560 1557 } 1561 1558 1562 - parent_port = find_cxl_port(dparent, &parent_dport); 1559 + struct cxl_port *parent_port __free(put_cxl_port) = 1560 + find_cxl_port(dparent, &parent_dport); 1563 1561 if (!parent_port) { 1564 1562 /* iterate to create this parent_port */ 1565 1563 return -EAGAIN; 1566 1564 } 1567 1565 1568 - device_lock(&parent_port->dev); 1569 - if (!parent_port->dev.driver) { 1570 - dev_warn(&cxlmd->dev, 1571 - "port %s:%s disabled, failed to enumerate CXL.mem\n", 1572 - 
dev_name(&parent_port->dev), dev_name(uport_dev)); 1573 - port = ERR_PTR(-ENXIO); 1574 - goto out; 1575 - } 1576 - 1577 - port = find_cxl_port_at(parent_port, dport_dev, &dport); 1578 - if (!port) { 1579 - component_reg_phys = find_component_registers(uport_dev); 1580 - port = devm_cxl_add_port(&parent_port->dev, uport_dev, 1581 - component_reg_phys, parent_dport); 1582 - /* retry find to pick up the new dport information */ 1583 - if (!IS_ERR(port)) 1584 - port = find_cxl_port_at(parent_port, dport_dev, &dport); 1585 - } 1586 - out: 1587 - device_unlock(&parent_port->dev); 1588 - 1589 - if (IS_ERR(port)) 1590 - rc = PTR_ERR(port); 1591 - else { 1592 - dev_dbg(&cxlmd->dev, "add to new port %s:%s\n", 1593 - dev_name(&port->dev), dev_name(port->uport_dev)); 1594 - rc = cxl_add_ep(dport, &cxlmd->dev); 1595 - if (rc == -EBUSY) { 1596 - /* 1597 - * "can't" happen, but this error code means 1598 - * something to the caller, so translate it. 1599 - */ 1600 - rc = -ENXIO; 1566 + /* 1567 + * Definition with __free() here to keep the sequence of 1568 + * dereferencing the device of the port before the parent_port releasing. 
1569 + */ 1570 + struct cxl_port *port __free(put_cxl_port) = NULL; 1571 + scoped_guard(device, &parent_port->dev) { 1572 + if (!parent_port->dev.driver) { 1573 + dev_warn(&cxlmd->dev, 1574 + "port %s:%s disabled, failed to enumerate CXL.mem\n", 1575 + dev_name(&parent_port->dev), dev_name(uport_dev)); 1576 + return -ENXIO; 1601 1577 } 1602 - put_device(&port->dev); 1578 + 1579 + port = find_cxl_port_at(parent_port, dport_dev, &dport); 1580 + if (!port) { 1581 + component_reg_phys = find_component_registers(uport_dev); 1582 + port = devm_cxl_add_port(&parent_port->dev, uport_dev, 1583 + component_reg_phys, parent_dport); 1584 + if (IS_ERR(port)) 1585 + return PTR_ERR(port); 1586 + 1587 + /* retry find to pick up the new dport information */ 1588 + port = find_cxl_port_at(parent_port, dport_dev, &dport); 1589 + if (!port) 1590 + return -ENXIO; 1591 + } 1603 1592 } 1604 1593 1605 - put_device(&parent_port->dev); 1594 + dev_dbg(&cxlmd->dev, "add to new port %s:%s\n", 1595 + dev_name(&port->dev), dev_name(port->uport_dev)); 1596 + rc = cxl_add_ep(dport, &cxlmd->dev); 1597 + if (rc == -EBUSY) { 1598 + /* 1599 + * "can't" happen, but this error code means 1600 + * something to the caller, so translate it. 
1601 + */ 1602 + rc = -ENXIO; 1603 + } 1604 + 1606 1605 return rc; 1607 1606 } 1608 1607 ··· 1635 1630 struct device *dport_dev = grandparent(iter); 1636 1631 struct device *uport_dev; 1637 1632 struct cxl_dport *dport; 1638 - struct cxl_port *port; 1639 1633 1640 1634 /* 1641 1635 * The terminal "grandparent" in PCI is NULL and @platform_bus ··· 1653 1649 dev_dbg(dev, "scan: iter: %s dport_dev: %s parent: %s\n", 1654 1650 dev_name(iter), dev_name(dport_dev), 1655 1651 dev_name(uport_dev)); 1656 - port = find_cxl_port(dport_dev, &dport); 1652 + struct cxl_port *port __free(put_cxl_port) = 1653 + find_cxl_port(dport_dev, &dport); 1657 1654 if (port) { 1658 1655 dev_dbg(&cxlmd->dev, 1659 1656 "found already registered port %s:%s\n", ··· 1669 1664 * the parent_port lock as the current port may be being 1670 1665 * reaped. 1671 1666 */ 1672 - if (rc && rc != -EBUSY) { 1673 - put_device(&port->dev); 1667 + if (rc && rc != -EBUSY) 1674 1668 return rc; 1675 - } 1676 1669 1677 1670 /* Any more ports to add between this one and the root? 
*/ 1678 - if (!dev_is_cxl_root_child(&port->dev)) { 1679 - put_device(&port->dev); 1671 + if (!dev_is_cxl_root_child(&port->dev)) 1680 1672 continue; 1681 - } 1682 1673 1683 - put_device(&port->dev); 1684 1674 return 0; 1685 1675 } 1686 1676 ··· 1983 1983 int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map) 1984 1984 { 1985 1985 struct cxl_port *port; 1986 - int rc; 1987 1986 1988 1987 if (WARN_ON_ONCE(!cxld)) 1989 1988 return -EINVAL; ··· 1992 1993 1993 1994 port = to_cxl_port(cxld->dev.parent); 1994 1995 1995 - device_lock(&port->dev); 1996 - rc = cxl_decoder_add_locked(cxld, target_map); 1997 - device_unlock(&port->dev); 1998 - 1999 - return rc; 1996 + guard(device)(&port->dev); 1997 + return cxl_decoder_add_locked(cxld, target_map); 2000 1998 } 2001 1999 EXPORT_SYMBOL_NS_GPL(cxl_decoder_add, CXL); 2002 2000 ··· 2236 2240 return 0; 2237 2241 } 2238 2242 EXPORT_SYMBOL_NS_GPL(cxl_endpoint_get_perf_coordinates, CXL); 2243 + 2244 + int cxl_port_get_switch_dport_bandwidth(struct cxl_port *port, 2245 + struct access_coordinate *c) 2246 + { 2247 + struct cxl_dport *dport = port->parent_dport; 2248 + 2249 + /* Check this port is connected to a switch DSP and not an RP */ 2250 + if (parent_port_is_cxl_root(to_cxl_port(port->dev.parent))) 2251 + return -ENODEV; 2252 + 2253 + if (!coordinates_valid(dport->coord)) 2254 + return -EINVAL; 2255 + 2256 + for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) { 2257 + c[i].read_bandwidth = dport->coord[i].read_bandwidth; 2258 + c[i].write_bandwidth = dport->coord[i].write_bandwidth; 2259 + } 2260 + 2261 + return 0; 2262 + } 2239 2263 2240 2264 /* for user tooling to ensure port disable work has completed */ 2241 2265 static ssize_t flush_store(const struct bus_type *bus, const char *buf, size_t count)
+45 -36
drivers/cxl/core/region.c
···
	 * then the region is already committed.
	 */
	p->state = CXL_CONFIG_COMMIT;
+	cxl_region_shared_upstream_bandwidth_update(cxlr);

	return 0;
 }
···
		if (rc)
			return rc;
		p->state = CXL_CONFIG_ACTIVE;
+		cxl_region_shared_upstream_bandwidth_update(cxlr);
	}

	cxled->cxld.interleave_ways = p->interleave_ways;
···
	struct cxl_region_params *p = &cxlr->params;
	int i;

-	unregister_memory_notifier(&cxlr->memory_notifier);
-	unregister_mt_adistance_algorithm(&cxlr->adist_notifier);
	device_del(&cxlr->dev);

	/*
···
	return true;
 }

-static int cxl_region_nid(struct cxl_region *cxlr)
-{
-	struct cxl_region_params *p = &cxlr->params;
-	struct resource *res;
-
-	guard(rwsem_read)(&cxl_region_rwsem);
-	res = p->res;
-	if (!res)
-		return NUMA_NO_NODE;
-	return phys_to_target_node(res->start);
-}
-
 static int cxl_region_perf_attrs_callback(struct notifier_block *nb,
					  unsigned long action, void *arg)
 {
···
	if (nid == NUMA_NO_NODE || action != MEM_ONLINE)
		return NOTIFY_DONE;

-	region_nid = cxl_region_nid(cxlr);
+	/*
+	 * No need to hold cxl_region_rwsem; region parameters are stable
+	 * within the cxl_region driver.
+	 */
+	region_nid = phys_to_target_node(cxlr->params.res->start);
	if (nid != region_nid)
		return NOTIFY_DONE;
···
	int *adist = data;
	int region_nid;

-	region_nid = cxl_region_nid(cxlr);
+	/*
+	 * No need to hold cxl_region_rwsem; region parameters are stable
+	 * within the cxl_region driver.
+	 */
+	region_nid = phys_to_target_node(cxlr->params.res->start);
	if (nid != region_nid)
		return NOTIFY_OK;
···
	rc = device_add(dev);
	if (rc)
		goto err;
-
-	cxlr->memory_notifier.notifier_call = cxl_region_perf_attrs_callback;
-	cxlr->memory_notifier.priority = CXL_CALLBACK_PRI;
-	register_memory_notifier(&cxlr->memory_notifier);
-
-	cxlr->adist_notifier.notifier_call = cxl_region_calculate_adistance;
-	cxlr->adist_notifier.priority = 100;
-	register_mt_adistance_algorithm(&cxlr->adist_notifier);

	rc = devm_add_action_or_reset(port->uport_dev, unregister_region, cxlr);
	if (rc)
···
	struct cxl_region *cxlr = _cxlr;
	struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb;

-	device_lock(&cxl_nvb->dev);
-	if (cxlr->cxlr_pmem)
-		devm_release_action(&cxl_nvb->dev, cxlr_pmem_unregister,
-				    cxlr->cxlr_pmem);
-	device_unlock(&cxl_nvb->dev);
+	scoped_guard(device, &cxl_nvb->dev) {
+		if (cxlr->cxlr_pmem)
+			devm_release_action(&cxl_nvb->dev, cxlr_pmem_unregister,
+					    cxlr->cxlr_pmem);
+	}
	cxlr->cxl_nvb = NULL;
	put_device(&cxl_nvb->dev);
 }
···
	dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent),
		dev_name(dev));

-	device_lock(&cxl_nvb->dev);
-	if (cxl_nvb->dev.driver)
-		rc = devm_add_action_or_reset(&cxl_nvb->dev,
-					      cxlr_pmem_unregister, cxlr_pmem);
-	else
-		rc = -ENXIO;
-	device_unlock(&cxl_nvb->dev);
+	scoped_guard(device, &cxl_nvb->dev) {
+		if (cxl_nvb->dev.driver)
+			rc = devm_add_action_or_reset(&cxl_nvb->dev,
+						      cxlr_pmem_unregister,
+						      cxlr_pmem);
+		else
+			rc = -ENXIO;
+	}

	if (rc)
		goto err_bridge;
···
	return 1;
 }

+static void shutdown_notifiers(void *_cxlr)
+{
+	struct cxl_region *cxlr = _cxlr;
+
+	unregister_memory_notifier(&cxlr->memory_notifier);
+	unregister_mt_adistance_algorithm(&cxlr->adist_notifier);
+}
+
 static int cxl_region_probe(struct device *dev)
 {
	struct cxl_region *cxlr = to_cxl_region(dev);
···
 out:
	up_read(&cxl_region_rwsem);

+	if (rc)
+		return rc;
+
+	cxlr->memory_notifier.notifier_call = cxl_region_perf_attrs_callback;
+	cxlr->memory_notifier.priority = CXL_CALLBACK_PRI;
+	register_memory_notifier(&cxlr->memory_notifier);
+
+	cxlr->adist_notifier.notifier_call = cxl_region_calculate_adistance;
+	cxlr->adist_notifier.priority = 100;
+	register_mt_adistance_algorithm(&cxlr->adist_notifier);
+
+	rc = devm_add_action_or_reset(&cxlr->dev, shutdown_notifiers, cxlr);
	if (rc)
		return rc;
+6 -3
drivers/cxl/cxl.h
···
 void put_cxl_root(struct cxl_root *cxl_root);
 DEFINE_FREE(put_cxl_root, struct cxl_root *, if (_T) put_cxl_root(_T))

+DEFINE_FREE(put_cxl_port, struct cxl_port *, if (!IS_ERR_OR_NULL(_T)) put_device(&_T->dev))
 int devm_cxl_enumerate_ports(struct cxl_memdev *cxlmd);
 void cxl_bus_rescan(void);
 void cxl_bus_drain(void);
···

 #ifdef CONFIG_PCIEAER_CXL
 void cxl_setup_parent_dport(struct device *host, struct cxl_dport *dport);
+void cxl_dport_init_ras_reporting(struct cxl_dport *dport, struct device *host);
 #else
-static inline void cxl_setup_parent_dport(struct device *host,
-					  struct cxl_dport *dport) { }
+static inline void cxl_dport_init_ras_reporting(struct cxl_dport *dport,
+						struct device *host) { }
 #endif

 struct cxl_decoder *to_cxl_decoder(struct device *dev);
···
 int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
				struct cxl_endpoint_dvsec_info *info);
 int devm_cxl_add_passthrough_decoder(struct cxl_port *port);
-int cxl_dvsec_rr_decode(struct device *dev, int dvsec,
+int cxl_dvsec_rr_decode(struct device *dev, struct cxl_port *port,
			struct cxl_endpoint_dvsec_info *info);

 bool is_cxl_region(struct device *dev);
···
			struct access_coordinate *coord);
 void cxl_region_perf_data_calculate(struct cxl_region *cxlr,
				    struct cxl_endpoint_decoder *cxled);
+void cxl_region_shared_upstream_bandwidth_update(struct cxl_region *cxlr);

 void cxl_memdev_update_perf(struct cxl_memdev *cxlmd);
+13 -14
drivers/cxl/cxlmem.h
···
 #ifndef __CXL_MEM_H__
 #define __CXL_MEM_H__
 #include <uapi/linux/cxl_mem.h>
+#include <linux/pci.h>
 #include <linux/cdev.h>
 #include <linux/uuid.h>
-#include <linux/rcuwait.h>
-#include <linux/cxl-event.h>
 #include <linux/node.h>
+#include <cxl/event.h>
+#include <cxl/mailbox.h>
 #include "cxl.h"

 /* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */
···
  * struct cxl_dpa_perf - DPA performance property entry
  * @dpa_range: range for DPA address
  * @coord: QoS performance data (i.e. latency, bandwidth)
+ * @cdat_coord: raw QoS performance data from CDAT
  * @qos_class: QoS Class cookies
  */
 struct cxl_dpa_perf {
	struct range dpa_range;
	struct access_coordinate coord[ACCESS_COORDINATE_MAX];
+	struct access_coordinate cdat_coord[ACCESS_COORDINATE_MAX];
	int qos_class;
 };
···
  * @ram_res: Active Volatile memory capacity configuration
  * @serial: PCIe Device Serial Number
  * @type: Generic Memory Class device or Vendor Specific Memory device
+ * @cxl_mbox: CXL mailbox context
  */
 struct cxl_dev_state {
	struct device *dev;
···
	struct resource ram_res;
	u64 serial;
	enum cxl_devtype type;
+	struct cxl_mailbox cxl_mbox;
 };
+
+static inline struct cxl_dev_state *mbox_to_cxlds(struct cxl_mailbox *cxl_mbox)
+{
+	return dev_get_drvdata(cxl_mbox->host);
+}

 /**
  * struct cxl_memdev_state - Generic Type-3 Memory Device Class driver data
···
  * the functionality related to that like Identify Memory Device and Get
  * Partition Info
  * @cxlds: Core driver state common across Type-2 and Type-3 devices
- * @payload_size: Size of space for payload
- *                (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
  * @lsa_size: Size of Label Storage Area
  *            (CXL 2.0 8.2.9.5.1.1 Identify Memory Device)
- * @mbox_mutex: Mutex to synchronize mailbox access.
  * @firmware_version: Firmware version for the memory device.
  * @enabled_cmds: Hardware commands found enabled in CEL.
  * @exclusive_cmds: Commands that are kernel-internal only
···
  * @poison: poison driver state info
  * @security: security driver state info
  * @fw: firmware upload / activation state
- * @mbox_wait: RCU wait for mbox send completely
- * @mbox_send: @dev specific transport for transmitting mailbox commands
  *
  * See CXL 3.0 8.2.9.8.2 Capacity Configuration and Label Storage for
  * details on capacity parameters.
  */
 struct cxl_memdev_state {
	struct cxl_dev_state cxlds;
-	size_t payload_size;
	size_t lsa_size;
-	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
	char firmware_version[0x10];
	DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX);
	DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX);
···
	struct cxl_poison_state poison;
	struct cxl_security_state security;
	struct cxl_fw_state fw;
-
-	struct rcuwait mbox_wait;
-	int (*mbox_send)(struct cxl_memdev_state *mds,
-			 struct cxl_mbox_cmd *cmd);
 };

 static inline struct cxl_memdev_state *
···
	CXL_PMEM_SEC_PASS_USER,
 };

-int cxl_internal_send_cmd(struct cxl_memdev_state *mds,
+int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox,
			  struct cxl_mbox_cmd *cmd);
 int cxl_dev_state_identify(struct cxl_memdev_state *mds);
 int cxl_await_media_ready(struct cxl_dev_state *cxlds);
+13 -16
drivers/cxl/mem.c
···
	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
	struct cxl_dev_state *cxlds = cxlmd->cxlds;
	struct device *endpoint_parent;
-	struct cxl_port *parent_port;
	struct cxl_dport *dport;
	struct dentry *dentry;
	int rc;
···
	if (rc)
		return rc;

-	parent_port = cxl_mem_find_port(cxlmd, &dport);
+	struct cxl_port *parent_port __free(put_cxl_port) =
+		cxl_mem_find_port(cxlmd, &dport);
	if (!parent_port) {
		dev_err(dev, "CXL port topology not found\n");
		return -ENXIO;
···
	else
		endpoint_parent = &parent_port->dev;

-	cxl_setup_parent_dport(dev, dport);
+	cxl_dport_init_ras_reporting(dport, dev);

-	device_lock(endpoint_parent);
-	if (!endpoint_parent->driver) {
-		dev_err(dev, "CXL port topology %s not enabled\n",
-			dev_name(endpoint_parent));
-		rc = -ENXIO;
-		goto unlock;
+	scoped_guard(device, endpoint_parent) {
+		if (!endpoint_parent->driver) {
+			dev_err(dev, "CXL port topology %s not enabled\n",
+				dev_name(endpoint_parent));
+			return -ENXIO;
+		}
+
+		rc = devm_cxl_add_endpoint(endpoint_parent, cxlmd, dport);
+		if (rc)
+			return rc;
	}
-
-	rc = devm_cxl_add_endpoint(endpoint_parent, cxlmd, dport);
-unlock:
-	device_unlock(endpoint_parent);
-	put_device(&parent_port->dev);
-	if (rc)
-		return rc;

	/*
	 * The kernel may be operating out of CXL memory on this device,
+58 -33
drivers/cxl/pci.c
···
 #include <linux/pci.h>
 #include <linux/aer.h>
 #include <linux/io.h>
+#include <cxl/mailbox.h>
 #include "cxlmem.h"
 #include "cxlpci.h"
 #include "cxl.h"
···
	u16 opcode;
	struct cxl_dev_id *dev_id = id;
	struct cxl_dev_state *cxlds = dev_id->cxlds;
+	struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox;
	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);

	if (!cxl_mbox_background_complete(cxlds))
···
	reg = readq(cxlds->regs.mbox + CXLDEV_MBOX_BG_CMD_STATUS_OFFSET);
	opcode = FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_OPCODE_MASK, reg);
	if (opcode == CXL_MBOX_OP_SANITIZE) {
-		mutex_lock(&mds->mbox_mutex);
+		mutex_lock(&cxl_mbox->mbox_mutex);
		if (mds->security.sanitize_node)
			mod_delayed_work(system_wq, &mds->security.poll_dwork, 0);
-		mutex_unlock(&mds->mbox_mutex);
+		mutex_unlock(&cxl_mbox->mbox_mutex);
	} else {
		/* short-circuit the wait in __cxl_pci_mbox_send_cmd() */
-		rcuwait_wake_up(&mds->mbox_wait);
+		rcuwait_wake_up(&cxl_mbox->mbox_wait);
	}

	return IRQ_HANDLED;
···
	struct cxl_memdev_state *mds =
		container_of(work, typeof(*mds), security.poll_dwork.work);
	struct cxl_dev_state *cxlds = &mds->cxlds;
+	struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox;

-	mutex_lock(&mds->mbox_mutex);
+	mutex_lock(&cxl_mbox->mbox_mutex);
	if (cxl_mbox_background_complete(cxlds)) {
		mds->security.poll_tmo_secs = 0;
		if (mds->security.sanitize_node)
···
		mds->security.poll_tmo_secs = min(15 * 60, timeout);
		schedule_delayed_work(&mds->security.poll_dwork, timeout * HZ);
	}
-	mutex_unlock(&mds->mbox_mutex);
+	mutex_unlock(&cxl_mbox->mbox_mutex);
 }

 /**
  * __cxl_pci_mbox_send_cmd() - Execute a mailbox command
- * @mds: The memory device driver data
+ * @cxl_mbox: CXL mailbox context
  * @mbox_cmd: Command to send to the memory device.
  *
  * Context: Any context. Expects mbox_mutex to be held.
···
  * not need to coordinate with each other. The driver only uses the primary
  * mailbox.
  */
-static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds,
+static int __cxl_pci_mbox_send_cmd(struct cxl_mailbox *cxl_mbox,
				   struct cxl_mbox_cmd *mbox_cmd)
 {
-	struct cxl_dev_state *cxlds = &mds->cxlds;
+	struct cxl_dev_state *cxlds = mbox_to_cxlds(cxl_mbox);
+	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
	void __iomem *payload = cxlds->regs.mbox + CXLDEV_MBOX_PAYLOAD_OFFSET;
	struct device *dev = cxlds->dev;
	u64 cmd_reg, status_reg;
	size_t out_len;
	int rc;

-	lockdep_assert_held(&mds->mbox_mutex);
+	lockdep_assert_held(&cxl_mbox->mbox_mutex);

	/*
	 * Here are the steps from 8.2.8.4 of the CXL 2.0 spec.
···
	timeout = mbox_cmd->poll_interval_ms;
	for (i = 0; i < mbox_cmd->poll_count; i++) {
-		if (rcuwait_wait_event_timeout(&mds->mbox_wait,
-					       cxl_mbox_background_complete(cxlds),
-					       TASK_UNINTERRUPTIBLE,
-					       msecs_to_jiffies(timeout)) > 0)
+		if (rcuwait_wait_event_timeout(&cxl_mbox->mbox_wait,
+					       cxl_mbox_background_complete(cxlds),
+					       TASK_UNINTERRUPTIBLE,
+					       msecs_to_jiffies(timeout)) > 0)
			break;
	}
···
		 */
		size_t n;

-		n = min3(mbox_cmd->size_out, mds->payload_size, out_len);
+		n = min3(mbox_cmd->size_out, cxl_mbox->payload_size, out_len);
		memcpy_fromio(mbox_cmd->payload_out, payload, n);
		mbox_cmd->size_out = n;
	} else {
···
	return 0;
 }

-static int cxl_pci_mbox_send(struct cxl_memdev_state *mds,
+static int cxl_pci_mbox_send(struct cxl_mailbox *cxl_mbox,
			     struct cxl_mbox_cmd *cmd)
 {
	int rc;

-	mutex_lock_io(&mds->mbox_mutex);
-	rc = __cxl_pci_mbox_send_cmd(mds, cmd);
-	mutex_unlock(&mds->mbox_mutex);
+	mutex_lock_io(&cxl_mbox->mbox_mutex);
+	rc = __cxl_pci_mbox_send_cmd(cxl_mbox, cmd);
+	mutex_unlock(&cxl_mbox->mbox_mutex);

	return rc;
 }
···
 static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail)
 {
	struct cxl_dev_state *cxlds = &mds->cxlds;
+	struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox;
	const int cap = readl(cxlds->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET);
	struct device *dev = cxlds->dev;
	unsigned long timeout;
···
		return -ETIMEDOUT;
	}

-	mds->mbox_send = cxl_pci_mbox_send;
-	mds->payload_size =
+	cxl_mbox->mbox_send = cxl_pci_mbox_send;
+	cxl_mbox->payload_size =
		1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap);

	/*
···
	 * there's no point in going forward. If the size is too large, there's
	 * no harm is soft limiting it.
	 */
-	mds->payload_size = min_t(size_t, mds->payload_size, SZ_1M);
-	if (mds->payload_size < 256) {
+	cxl_mbox->payload_size = min_t(size_t, cxl_mbox->payload_size, SZ_1M);
+	if (cxl_mbox->payload_size < 256) {
		dev_err(dev, "Mailbox is too small (%zub)",
-			mds->payload_size);
+			cxl_mbox->payload_size);
		return -ENXIO;
	}

-	dev_dbg(dev, "Mailbox payload sized %zu", mds->payload_size);
+	dev_dbg(dev, "Mailbox payload sized %zu", cxl_mbox->payload_size);

-	rcuwait_init(&mds->mbox_wait);
	INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mbox_sanitize_work);

	/* background command interrupts are optional */
···
 static int cxl_rcrb_get_comp_regs(struct pci_dev *pdev,
				  struct cxl_register_map *map)
 {
-	struct cxl_port *port;
	struct cxl_dport *dport;
	resource_size_t component_reg_phys;
···
		.resource = CXL_RESOURCE_NONE,
	};

-	port = cxl_pci_find_port(pdev, &dport);
+	struct cxl_port *port __free(put_cxl_port) =
+		cxl_pci_find_port(pdev, &dport);
	if (!port)
		return -EPROBE_DEFER;

	component_reg_phys = cxl_rcd_component_reg_phys(&pdev->dev, dport);
-
-	put_device(&port->dev);
-
	if (component_reg_phys == CXL_RESOURCE_NONE)
		return -ENXIO;
···
  */
 static int cxl_mem_alloc_event_buf(struct cxl_memdev_state *mds)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
	struct cxl_get_event_payload *buf;

-	buf = kvmalloc(mds->payload_size, GFP_KERNEL);
+	buf = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	mds->event.buf = buf;
···
 static int cxl_event_get_int_policy(struct cxl_memdev_state *mds,
				    struct cxl_event_interrupt_policy *policy)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
	struct cxl_mbox_cmd mbox_cmd = {
		.opcode = CXL_MBOX_OP_GET_EVT_INT_POLICY,
		.payload_out = policy,
···
	};
	int rc;

-	rc = cxl_internal_send_cmd(mds, &mbox_cmd);
+	rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
	if (rc < 0)
		dev_err(mds->cxlds.dev,
			"Failed to get event interrupt policy : %d", rc);
···
 static int cxl_event_config_msgnums(struct cxl_memdev_state *mds,
				    struct cxl_event_interrupt_policy *policy)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
	struct cxl_mbox_cmd mbox_cmd;
	int rc;
···
		.size_in = sizeof(*policy),
	};

-	rc = cxl_internal_send_cmd(mds, &mbox_cmd);
+	rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
	if (rc < 0) {
		dev_err(mds->cxlds.dev, "Failed to set event interrupt policy : %d",
			rc);
···
	return 0;
 }

+static int cxl_pci_type3_init_mailbox(struct cxl_dev_state *cxlds)
+{
+	int rc;
+
+	/*
+	 * Fail the init if there's no mailbox. For a type3 this is out of spec.
+	 */
+	if (!cxlds->reg_map.device_map.mbox.valid)
+		return -ENODEV;
+
+	rc = cxl_mailbox_init(&cxlds->cxl_mbox, cxlds->dev);
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
 static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
	struct pci_host_bridge *host_bridge = pci_find_host_bridge(pdev->bus);
···
			  BIT(CXL_CM_CAP_CAP_ID_RAS));
	if (rc)
		dev_dbg(&pdev->dev, "Failed to map RAS capability.\n");

+	rc = cxl_pci_type3_init_mailbox(cxlds);
+	if (rc)
+		return rc;
+
	rc = cxl_await_media_ready(cxlds);
	if (rc == 0)
+14 -12
drivers/cxl/pmem.c
···
				       struct nd_cmd_get_config_size *cmd,
				       unsigned int buf_len)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
+
	if (sizeof(*cmd) > buf_len)
		return -EINVAL;

	*cmd = (struct nd_cmd_get_config_size){
		.config_size = mds->lsa_size,
		.max_xfer =
-			mds->payload_size - sizeof(struct cxl_mbox_set_lsa),
+			cxl_mbox->payload_size - sizeof(struct cxl_mbox_set_lsa),
	};

	return 0;
···
				       struct nd_cmd_get_config_data_hdr *cmd,
				       unsigned int buf_len)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
	struct cxl_mbox_get_lsa get_lsa;
	struct cxl_mbox_cmd mbox_cmd;
	int rc;
···
		.payload_out = cmd->out_buf,
	};

-	rc = cxl_internal_send_cmd(mds, &mbox_cmd);
+	rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
	cmd->status = 0;

	return rc;
···
				       struct nd_cmd_set_config_hdr *cmd,
				       unsigned int buf_len)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
	struct cxl_mbox_set_lsa *set_lsa;
	struct cxl_mbox_cmd mbox_cmd;
	int rc;
···
		.size_in = struct_size(set_lsa, data, cmd->in_length),
	};

-	rc = cxl_internal_send_cmd(mds, &mbox_cmd);
+	rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);

	/*
	 * Set "firmware" status (4-packed bytes at the end of the input
···
	if (!is_cxl_nvdimm(dev))
		return 0;

-	device_lock(dev);
-	if (!dev->driver)
-		goto out;
-
-	cxl_nvd = to_cxl_nvdimm(dev);
-	if (cxl_nvd->cxlmd && cxl_nvd->cxlmd->cxl_nvb == data)
-		release = true;
-out:
-	device_unlock(dev);
+	scoped_guard(device, dev) {
+		if (dev->driver) {
+			cxl_nvd = to_cxl_nvdimm(dev);
+			if (cxl_nvd->cxlmd && cxl_nvd->cxlmd->cxl_nvb == data)
+				release = true;
+		}
+	}
	if (release)
		device_release_driver(dev);
	return 0;
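Several hunks above (mem.c, region.c, pmem.c) replace `device_lock()`/`device_unlock()` pairs with `scoped_guard(device, ...)`, which releases the lock on any exit from the braced body, including early `return`. A simplified userspace sketch of how such a guard can be built from the `cleanup` attribute plus a one-iteration `for` loop, in the spirit of the kernel's `linux/cleanup.h` (a plain counter stands in for the real lock; all names are invented):

```c
#include <assert.h>

static int lock_depth;	/* stands in for the device_lock() state */

struct dev_guard { int armed; };

static struct dev_guard guard_enter(void)
{
	lock_depth++;			/* device_lock() */
	return (struct dev_guard){ .armed = 1 };
}

static void guard_exit(struct dev_guard *g)
{
	if (g->armed)
		lock_depth--;		/* device_unlock() */
}

/* One-iteration for loop: the guard lives exactly as long as the braced
 * body, and guard_exit() runs on ANY exit from it, including return. */
#define scoped_dev_guard()						\
	for (struct dev_guard _g __attribute__((cleanup(guard_exit)))	\
		     = guard_enter(), *_once = 0;			\
	     !_once; _once = &_g)

static int do_locked(int fail)
{
	scoped_dev_guard() {
		assert(lock_depth == 1);	/* held inside the body */
		if (fail)
			return -6;		/* unlock still happens */
	}
	return lock_depth;			/* guard already released */
}
```

This is why the `out:`/`unlock:` labels and the trailing `device_unlock()` calls disappear from the converted functions: every return path is covered by construction.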
+1 -1
drivers/cxl/port.c
···
	struct cxl_port *root;
	int rc;

-	rc = cxl_dvsec_rr_decode(cxlds->dev, cxlds->cxl_dvsec, &info);
+	rc = cxl_dvsec_rr_decode(cxlds->dev, port, &info);
	if (rc < 0)
		return rc;
+12 -11
drivers/cxl/security.c
···
 {
	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
+	struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
	unsigned long security_flags = 0;
	struct cxl_get_security_output {
···
		.payload_out = &out,
	};

-	rc = cxl_internal_send_cmd(mds, &mbox_cmd);
+	rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
	if (rc < 0)
		return 0;
···
 {
	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
-	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
+	struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
	struct cxl_mbox_cmd mbox_cmd;
	struct cxl_set_pass set_pass;
···
		.payload_in = &set_pass,
	};

-	return cxl_internal_send_cmd(mds, &mbox_cmd);
+	return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
 }

 static int __cxl_pmem_security_disable(struct nvdimm *nvdimm,
···
 {
	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
-	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
+	struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
	struct cxl_disable_pass dis_pass;
	struct cxl_mbox_cmd mbox_cmd;
···
		.payload_in = &dis_pass,
	};

-	return cxl_internal_send_cmd(mds, &mbox_cmd);
+	return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
 }

 static int cxl_pmem_security_disable(struct nvdimm *nvdimm,
···
 {
	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
-	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
+	struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
	struct cxl_mbox_cmd mbox_cmd = {
		.opcode = CXL_MBOX_OP_FREEZE_SECURITY,
	};

-	return cxl_internal_send_cmd(mds, &mbox_cmd);
+	return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
 }

 static int cxl_pmem_security_unlock(struct nvdimm *nvdimm,
···
 {
	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
-	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
+	struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
	u8 pass[NVDIMM_PASSPHRASE_LEN];
	struct cxl_mbox_cmd mbox_cmd;
	int rc;
···
		.payload_in = pass,
	};

-	rc = cxl_internal_send_cmd(mds, &mbox_cmd);
+	rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
	if (rc < 0)
		return rc;
···
 {
	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
-	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
+	struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
	struct cxl_mbox_cmd mbox_cmd;
	struct cxl_pass_erase erase;
	int rc;
···
		.payload_in = &erase,
	};

-	rc = cxl_internal_send_cmd(mds, &mbox_cmd);
+	rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
	if (rc < 0)
		return rc;
+28
include/cxl/mailbox.h
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2024 Intel Corporation. */
+#ifndef __CXL_MBOX_H__
+#define __CXL_MBOX_H__
+#include <linux/rcuwait.h>
+
+struct cxl_mbox_cmd;
+
+/**
+ * struct cxl_mailbox - context for CXL mailbox operations
+ * @host: device that hosts the mailbox
+ * @payload_size: Size of space for payload
+ *                (CXL 3.1 8.2.8.4.3 Mailbox Capabilities Register)
+ * @mbox_mutex: mutex protects device mailbox and firmware
+ * @mbox_wait: rcuwait for mailbox
+ * @mbox_send: @dev specific transport for transmitting mailbox commands
+ */
+struct cxl_mailbox {
+	struct device *host;
+	size_t payload_size;
+	struct mutex mbox_mutex; /* lock to protect mailbox context */
+	struct rcuwait mbox_wait;
+	int (*mbox_send)(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd);
+};
+
+int cxl_mailbox_init(struct cxl_mailbox *cxl_mbox, struct device *host);
+
+#endif
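The new include/cxl/mailbox.h carries only the transport context: commands are validated against `payload_size` and dispatched through the `mbox_send` callback, so callers no longer need the memory-device state. A userspace sketch of that shape — `internal_send_cmd`, `mock_send`, the opcode value, and the error codes here are all invented stand-ins:

```c
#include <assert.h>
#include <stddef.h>

struct mbox_cmd {
	int opcode;
	size_t size_in;
};

/* Mirrors the shape of struct cxl_mailbox: the transport callback
 * takes the mailbox context itself, not the memory-device state. */
struct mailbox {
	size_t payload_size;
	int (*mbox_send)(struct mailbox *mbox, struct mbox_cmd *cmd);
};

/* Validate against the context, then dispatch through the transport. */
static int internal_send_cmd(struct mailbox *mbox, struct mbox_cmd *cmd)
{
	if (cmd->size_in > mbox->payload_size)
		return -7;	/* -E2BIG stand-in */
	return mbox->mbox_send(mbox, cmd);
}

/* A trivial transport: accepts one opcode, rejects everything else. */
static int mock_send(struct mailbox *mbox, struct mbox_cmd *cmd)
{
	(void)mbox;
	return cmd->opcode == 0x4300 ? 0 : -5;	/* -EIO stand-in */
}
```

The same decoupling is what lets tools/testing/cxl/test/mem.c below swap in `cxl_mock_mbox_send` while the PCI driver installs `cxl_pci_mbox_send`, without either knowing about the other's device state.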
include/linux/cxl-event.h → include/cxl/event.h (rename)
include/linux/einj-cxl.h → include/cxl/einj.h (rename)
+1 -1
tools/testing/cxl/Kbuild
···
 ldflags-y += --wrap=devm_cxl_add_rch_dport
 ldflags-y += --wrap=cxl_rcd_component_reg_phys
 ldflags-y += --wrap=cxl_endpoint_parse_cdat
-ldflags-y += --wrap=cxl_setup_parent_dport
+ldflags-y += --wrap=cxl_dport_init_ras_reporting

 DRIVERS := ../../../drivers
 CXL_SRC := $(DRIVERS)/cxl
+1 -1
tools/testing/cxl/mock_acpi.c
···
		goto out;
	}

-	if (dev->bus == &platform_bus_type)
+	if (dev_is_platform(dev))
		goto out;

	adev = to_acpi_device(dev);
+33 -11
tools/testing/cxl/test/mem.c
···
 #include <linux/delay.h>
 #include <linux/sizes.h>
 #include <linux/bits.h>
+#include <cxl/mailbox.h>
 #include <asm/unaligned.h>
 #include <crypto/sha2.h>
 #include <cxlmem.h>
···
 static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd)
 {
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
	struct cxl_mbox_get_log *gl = cmd->payload_in;
	u32 offset = le32_to_cpu(gl->offset);
	u32 length = le32_to_cpu(gl->length);
···
	if (cmd->size_in < sizeof(*gl))
		return -EINVAL;
-	if (length > mds->payload_size)
+	if (length > cxl_mbox->payload_size)
		return -EINVAL;
	if (offset + length > sizeof(mock_cel))
		return -EINVAL;
···
 {
	struct cxl_memdev_state *mds =
		container_of(work, typeof(*mds), security.poll_dwork.work);
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;

-	mutex_lock(&mds->mbox_mutex);
+	mutex_lock(&cxl_mbox->mbox_mutex);
	if (mds->security.sanitize_node)
		sysfs_notify_dirent(mds->security.sanitize_node);
	mds->security.sanitize_active = false;
-	mutex_unlock(&mds->mbox_mutex);
+	mutex_unlock(&cxl_mbox->mbox_mutex);

	dev_dbg(mds->cxlds.dev, "sanitize complete\n");
 }
···
			 struct cxl_mbox_cmd *cmd)
 {
	struct cxl_memdev_state *mds = mdata->mds;
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
	int rc = 0;

	if (cmd->size_in != 0)
···
		return -ENXIO;
	}

-	mutex_lock(&mds->mbox_mutex);
+	mutex_lock(&cxl_mbox->mbox_mutex);
	if (schedule_delayed_work(&mds->security.poll_dwork,
				  msecs_to_jiffies(mdata->sanitize_timeout))) {
		mds->security.sanitize_active = true;
		dev_dbg(mds->cxlds.dev, "sanitize issued\n");
	} else
		rc = -EBUSY;
-	mutex_unlock(&mds->mbox_mutex);
+	mutex_unlock(&cxl_mbox->mbox_mutex);

	return rc;
 }
···
	return -EINVAL;
 }

-static int cxl_mock_mbox_send(struct cxl_memdev_state *mds,
+static int cxl_mock_mbox_send(struct cxl_mailbox *cxl_mbox,
			      struct cxl_mbox_cmd *cmd)
 {
-	struct cxl_dev_state *cxlds = &mds->cxlds;
-	struct device *dev = cxlds->dev;
+	struct device *dev = cxl_mbox->host;
	struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
+	struct cxl_memdev_state *mds = mdata->mds;
+	struct cxl_dev_state *cxlds = &mds->cxlds;
	int rc = -EIO;

	switch (cmd->opcode) {
···
 }
 static DEVICE_ATTR_WO(event_trigger);

+static int cxl_mock_mailbox_create(struct cxl_dev_state *cxlds)
+{
+	int rc;
+
+	rc = cxl_mailbox_init(&cxlds->cxl_mbox, cxlds->dev);
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
 static int cxl_mock_mem_probe(struct platform_device *pdev)
 {
	struct device *dev = &pdev->dev;
···
	struct cxl_memdev_state *mds;
	struct cxl_dev_state *cxlds;
	struct cxl_mockmem_data *mdata;
+	struct cxl_mailbox *cxl_mbox;
	int rc;

	mdata = devm_kzalloc(dev, sizeof(*mdata), GFP_KERNEL);
···
	if (IS_ERR(mds))
		return PTR_ERR(mds);

+	cxlds = &mds->cxlds;
+	rc = cxl_mock_mailbox_create(cxlds);
+	if (rc)
+		return rc;
+
+	cxl_mbox = &mds->cxlds.cxl_mbox;
	mdata->mds = mds;
-	mds->mbox_send = cxl_mock_mbox_send;
-	mds->payload_size = SZ_4K;
+	cxl_mbox->mbox_send = cxl_mock_mbox_send;
+	cxl_mbox->payload_size = SZ_4K;
	mds->event.buf = (struct cxl_get_event_payload *) mdata->event_buf;
	INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mockmem_sanitize_work);

-	cxlds = &mds->cxlds;
	cxlds->serial = pdev->id;
	if (is_rcd(pdev))
		cxlds->rcd = true;
+5 -5
tools/testing/cxl/test/mock.c
···
 }
 EXPORT_SYMBOL_NS_GPL(__wrap_cxl_hdm_decode_init, CXL);

-int __wrap_cxl_dvsec_rr_decode(struct device *dev, int dvsec,
+int __wrap_cxl_dvsec_rr_decode(struct device *dev, struct cxl_port *port,
			       struct cxl_endpoint_dvsec_info *info)
 {
	int rc = 0, index;
···
	if (ops && ops->is_mock_dev(dev))
		rc = 0;
	else
-		rc = cxl_dvsec_rr_decode(dev, dvsec, info);
+		rc = cxl_dvsec_rr_decode(dev, port, info);
	put_cxl_mock_ops(index);

	return rc;
···
 }
 EXPORT_SYMBOL_NS_GPL(__wrap_cxl_endpoint_parse_cdat, CXL);

-void __wrap_cxl_setup_parent_dport(struct device *host, struct cxl_dport *dport)
+void __wrap_cxl_dport_init_ras_reporting(struct cxl_dport *dport, struct device *host)
 {
	int index;
	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);

	if (!ops || !ops->is_mock_port(dport->dport_dev))
-		cxl_setup_parent_dport(host, dport);
+		cxl_dport_init_ras_reporting(dport, host);

	put_cxl_mock_ops(index);
 }
-EXPORT_SYMBOL_NS_GPL(__wrap_cxl_setup_parent_dport, CXL);
+EXPORT_SYMBOL_NS_GPL(__wrap_cxl_dport_init_ras_reporting, CXL);

 MODULE_LICENSE("GPL v2");
 MODULE_IMPORT_NS(ACPI);