
Merge tag 'cxl-for-6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl

Pull compute express link (CXL) updates from Dave Jiang:
"The notable additions are support for removing regions that map locked
CXL decoders, unit testing support for XOR address translation, and
unit testing support for extended linear cache.

Misc:
- Remove incorrect page-allocator quirk section in documentation
- Remove unused devm_cxl_port_enumerate_dports() function
- Fix typo in cdat.c code comment
- Replace use of system_wq with system_percpu_wq
- Add locked CXL decoder support for region removal
- Return when generic target updated
- Rename region_res_match_cxl_range() to spa_maps_hpa()
- Clarify comment in spa_maps_hpa()

Enable unit testing for XOR address translation of SPA to DPA and vice versa:
- Refactor address translation funcs for testing in cxl_region
- Make the XOR calculations available for testing
- Add cxl_translate module for address translation testing in
cxl_test

Extended Linear Cache changes:
- Add extended linear cache size sysfs attribute
- Adjust failure emission of extended linear cache detection in
cxl_acpi
- Added extended linear cache unit testing support in cxl_test

Preparation refactor patches for PRM translation support:
- Simplify cxl_rd_ops allocation and handling
 - Group xor arithmetic setup code in a single block
- Remove local variable @inc in cxl_port_setup_targets()"

* tag 'cxl-for-6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl: (22 commits)
cxl/test: Assign overflow_err_count from log->nr_overflow
cxl/test: Remove ret_limit race condition in mock_get_event()
cxl/test: remove unused mock function for cxl_rcd_component_reg_phys()
cxl/test: Add support for acpi extended linear cache
cxl/test: Add cxl_test CFMWS support for extended linear cache
cxl/test: Standardize CXL auto region size
cxl/region: Remove local variable @inc in cxl_port_setup_targets()
      cxl/acpi: Group xor arithmetic setup code in a single block
cxl: Simplify cxl_rd_ops allocation and handling
cxl: Clarify comment in spa_maps_hpa()
cxl: Rename region_res_match_cxl_range() to spa_maps_hpa()
acpi/hmat: Return when generic target is updated
cxl: Add handling of locked CXL decoder
cxl/region: Add support to indicate region has extended linear cache
cxl: Adjust extended linear cache failure emission in cxl_acpi
cxl/test: Add cxl_translate module for address translation testing
cxl/acpi: Make the XOR calculations available for testing
cxl/region: Refactor address translation funcs for testing
cxl/pci: replace use of system_wq with system_percpu_wq
cxl: fix typos in cdat.c comments
...

+844 -332
+10 -1
Documentation/ABI/testing/sysfs-bus-cxl
···
 		changed, only freed by writing 0. The kernel makes no guarantees
 		that data is maintained over an address space freeing event, and
 		there is no guarantee that a free followed by an allocate
-		results in the same address being allocated.
+		results in the same address being allocated. If extended linear
+		cache is present, the size indicates extended linear cache size
+		plus the CXL region size.
 
+What:		/sys/bus/cxl/devices/regionZ/extended_linear_cache_size
+Date:		October, 2025
+KernelVersion:	v6.19
+Contact:	linux-cxl@vger.kernel.org
+Description:
+		(RO) The size of extended linear cache, if there is an extended
+		linear cache. Otherwise the attribute will not be visible.
 
 What:		/sys/bus/cxl/devices/regionZ/mode
 Date:		January, 2023
-31
Documentation/driver-api/cxl/allocation/page-allocator.rst
···
 will fallback to allocate from :code:`ZONE_NORMAL`.
 
 
-Zone and Node Quirks
-====================
-Let's consider a configuration where the local DRAM capacity is largely onlined
-into :code:`ZONE_NORMAL`, with no :code:`ZONE_MOVABLE` capacity present. The
-CXL capacity has the opposite configuration - all onlined in
-:code:`ZONE_MOVABLE`.
-
-Under the default allocation policy, the page allocator will completely skip
-:code:`ZONE_MOVABLE` as a valid allocation target. This is because, as of
-Linux v6.15, the page allocator does (approximately) the following: ::
-
-    for (each zone in local_node):
-
-        for (each node in fallback_order):
-
-            attempt_allocation(gfp_flags);
-
-Because the local node does not have :code:`ZONE_MOVABLE`, the CXL node is
-functionally unreachable for direct allocation. As a result, the only way
-for CXL capacity to be used is via `demotion` in the reclaim path.
-
-This configuration also means that if the DRAM ndoe has :code:`ZONE_MOVABLE`
-capacity - when that capacity is depleted, the page allocator will actually
-prefer CXL :code:`ZONE_MOVABLE` pages over DRAM :code:`ZONE_NORMAL` pages.
-
-We may wish to invert this priority in future Linux versions.
-
-If `demotion` and `swap` are disabled, Linux will begin to cause OOM crashes
-when the DRAM nodes are depleted. See the reclaim section for more details.
-
-
 CGroups and CPUSets
 ===================
 Finally, assuming CXL memory is reachable via the page allocation (i.e. onlined
+6 -5
drivers/acpi/numa/hmat.c
···
 	 * Register generic port perf numbers. The nid may not be
 	 * initialized and is still NUMA_NO_NODE.
 	 */
-	mutex_lock(&target_lock);
-	if (*(u16 *)target->gen_port_device_handle) {
-		hmat_update_generic_target(target);
-		target->registered = true;
+	scoped_guard(mutex, &target_lock) {
+		if (*(u16 *)target->gen_port_device_handle) {
+			hmat_update_generic_target(target);
+			target->registered = true;
+			return;
+		}
 	}
-	mutex_unlock(&target_lock);
 
 	hmat_hotplug_target(target);
 }
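The hmat.c hunk above leans on the kernel's scoped_guard() (from cleanup.h) so that the new early return can happen while the lock is held and the mutex is still released. A rough userspace sketch of the same pattern, using the GCC/Clang cleanup attribute and pthreads; the names (scoped_lock, hmat_register_target, hotplugged) are illustrative stand-ins, not the kernel API:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t target_lock = PTHREAD_MUTEX_INITIALIZER;

/* cleanup handler: runs when the guard variable goes out of scope */
static void unlock_cleanup(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
}

/* emulate scoped_guard(mutex, m): lock now, unlock at end of scope */
#define scoped_lock(m) \
	__attribute__((cleanup(unlock_cleanup))) pthread_mutex_t *scope_guard = \
		(pthread_mutex_lock(m), (m))

struct target {
	bool handle_valid;
	bool registered;
};

static bool hotplugged;	/* stand-in for hmat_hotplug_target() side effect */

static void hmat_register_target(struct target *t)
{
	{
		scoped_lock(&target_lock);
		if (t->handle_valid) {
			t->registered = true;
			return;	/* lock is still released by the cleanup */
		}
	}	/* lock released here on the fall-through path */
	hotplugged = true;
}
```

The point of the refactor mirrored here: with an explicit mutex_lock()/mutex_unlock() pair, an early return inside the critical section would leak the lock; a scope-based guard makes the early return safe.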
+41 -32
drivers/cxl/acpi.c
···
 #include "cxlpci.h"
 #include "cxl.h"
 
-struct cxl_cxims_data {
-	int nr_maps;
-	u64 xormaps[] __counted_by(nr_maps);
-};
-
 static const guid_t acpi_cxl_qtg_id_guid =
 	GUID_INIT(0xF365F9A6, 0xA7DE, 0x4071,
 		  0xA6, 0x6A, 0xB4, 0x0C, 0x0B, 0x4F, 0x8E, 0x52);
 
-static u64 cxl_apply_xor_maps(struct cxl_root_decoder *cxlrd, u64 addr)
+#define HBIW_TO_NR_MAPS_SIZE (CXL_DECODER_MAX_INTERLEAVE + 1)
+static const int hbiw_to_nr_maps[HBIW_TO_NR_MAPS_SIZE] = {
+	[1] = 0, [2] = 1, [3] = 0, [4] = 2, [6] = 1, [8] = 3, [12] = 2, [16] = 4
+};
+
+static const int valid_hbiw[] = { 1, 2, 3, 4, 6, 8, 12, 16 };
+
+u64 cxl_do_xormap_calc(struct cxl_cxims_data *cximsd, u64 addr, int hbiw)
 {
-	struct cxl_cxims_data *cximsd = cxlrd->platform_data;
-	int hbiw = cxlrd->cxlsd.nr_targets;
+	int nr_maps_to_apply = -1;
 	u64 val;
 	int pos;
 
-	/* No xormaps for host bridge interleave ways of 1 or 3 */
-	if (hbiw == 1 || hbiw == 3)
-		return addr;
+	/*
+	 * Strictly validate hbiw since this function is used for testing and
+	 * that nullifies any expectation of trusted parameters from the CXL
+	 * Region Driver.
+	 */
+	for (int i = 0; i < ARRAY_SIZE(valid_hbiw); i++) {
+		if (valid_hbiw[i] == hbiw) {
+			nr_maps_to_apply = hbiw_to_nr_maps[hbiw];
+			break;
+		}
+	}
+	if (nr_maps_to_apply == -1 || nr_maps_to_apply > cximsd->nr_maps)
+		return ULLONG_MAX;
 
 	/*
 	 * In regions using XOR interleave arithmetic the CXL HPA may not
···
 	}
 
 	return addr;
+}
+EXPORT_SYMBOL_FOR_MODULES(cxl_do_xormap_calc, "cxl_translate");
+
+static u64 cxl_apply_xor_maps(struct cxl_root_decoder *cxlrd, u64 addr)
+{
+	struct cxl_cxims_data *cximsd = cxlrd->platform_data;
+
+	return cxl_do_xormap_calc(cximsd, addr, cxlrd->cxlsd.nr_targets);
 }
 
 struct cxl_cxims_context {
···
 
 	rc = hmat_get_extended_linear_cache_size(&res, nid, &cache_size);
 	if (rc)
-		return rc;
+		return 0;
 
 	/*
 	 * The cache range is expected to be within the CFMWS.
···
 	int rc;
 
 	rc = cxl_acpi_set_cache_size(cxlrd);
-	if (!rc)
-		return;
-
-	if (rc != -EOPNOTSUPP) {
+	if (rc) {
 		/*
-		 * Failing to support extended linear cache region resize does not
+		 * Failing to retrieve extended linear cache region resize does not
 		 * prevent the region from functioning. Only causes cxl list showing
 		 * incorrect region size.
 		 */
 		dev_warn(cxlrd->cxlsd.cxld.dev.parent,
-			 "Extended linear cache calculation failed rc:%d\n", rc);
-	}
+			 "Extended linear cache retrieval failed rc:%d\n", rc);
 
-	/* Ignoring return code */
-	cxlrd->cache_size = 0;
+		/* Ignoring return code */
+		cxlrd->cache_size = 0;
+	}
 }
 
 DEFINE_FREE(put_cxlrd, struct cxl_root_decoder *,
···
 		ig = CXL_DECODER_MIN_GRANULARITY;
 	cxld->interleave_granularity = ig;
 
-	cxl_setup_extended_linear_cache(cxlrd);
-
 	if (cfmws->interleave_arithmetic == ACPI_CEDT_CFMWS_ARITHMETIC_XOR) {
 		if (ways != 1 && ways != 3) {
 			cxims_ctx = (struct cxl_cxims_context) {
···
 				return -EINVAL;
 			}
 		}
+		cxlrd->ops.hpa_to_spa = cxl_apply_xor_maps;
+		cxlrd->ops.spa_to_hpa = cxl_apply_xor_maps;
 	}
+
+	cxl_setup_extended_linear_cache(cxlrd);
 
 	cxlrd->qos_class = cfmws->qtg_id;
-
-	if (cfmws->interleave_arithmetic == ACPI_CEDT_CFMWS_ARITHMETIC_XOR) {
-		cxlrd->ops = kzalloc(sizeof(*cxlrd->ops), GFP_KERNEL);
-		if (!cxlrd->ops)
-			return -ENOMEM;
-
-		cxlrd->ops->hpa_to_spa = cxl_apply_xor_maps;
-		cxlrd->ops->spa_to_hpa = cxl_apply_xor_maps;
-	}
 
 	rc = cxl_decoder_add(cxld);
 	if (rc)
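The XOR interleave arithmetic that cxl_do_xormap_calc() exposes for testing is, per the CEDT CXIMS description, built from parity folds: each XORMAP entry selects a set of HPA bits, and the parity of the selected bits contributes one bit of the host-bridge target selector. A hedged userspace sketch of just that fold (the map values and the xor_target_select() helper are invented for illustration; this is not the kernel's exact code path):

```c
#include <assert.h>
#include <stdint.h>

/* parity of the set bits in a 64-bit value (GCC/Clang builtin) */
static int parity64(uint64_t v)
{
	return __builtin_parityll(v);
}

/*
 * Fold each XORMAP over the address: bit i of the target selector is the
 * parity of the HPA bits selected by xormaps[i]. With plain single-bit
 * maps this degenerates to ordinary modulo interleave; with multi-bit
 * maps the upper address bits are XORed in to spread hot strides.
 */
static unsigned int xor_target_select(const uint64_t *xormaps, int nr_maps,
				      uint64_t hpa)
{
	unsigned int sel = 0;

	for (int i = 0; i < nr_maps; i++)
		sel |= (unsigned int)parity64(hpa & xormaps[i]) << i;
	return sel;
}
```

With single-bit maps at bits 8 and 9 the selector is just HPA[9:8]; a map that also folds in bit 16 flips the selector whenever that upper bit is set, which is exactly the behavior the new cxl_translate unit tests need to exercise in both directions.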
+2 -2
drivers/cxl/core/cdat.c
···
 	cxl_coordinates_combine(coords, coords, ctx->coord);
 
 	/*
-	 * Take the min of the calculated bandwdith and the upstream
+	 * Take the min of the calculated bandwidth and the upstream
 	 * switch SSLBIS bandwidth if there's a parent switch
 	 */
 	if (!is_root)
···
 /**
  * cxl_region_update_bandwidth - Update the bandwidth access coordinates of a region
  * @cxlr: The region being operated on
- * @input_xa: xarray holds cxl_perf_ctx wht calculated bandwidth per ACPI0017 instance
+ * @input_xa: xarray holds cxl_perf_ctx with calculated bandwidth per ACPI0017 instance
  */
 static void cxl_region_update_bandwidth(struct cxl_region *cxlr,
 					struct xarray *input_xa)
+3
drivers/cxl/core/hdm.c
···
 	if ((cxld->flags & CXL_DECODER_F_ENABLE) == 0)
 		return;
 
+	if (test_bit(CXL_DECODER_F_LOCK, &cxld->flags))
+		return;
+
 	if (port->commit_end == id)
 		cxl_port_commit_reap(cxld);
 	else
+8 -79
drivers/cxl/core/pci.c
···
 }
 EXPORT_SYMBOL_NS_GPL(__devm_cxl_add_dport_by_dev, "CXL");
 
-struct cxl_walk_context {
-	struct pci_bus *bus;
-	struct cxl_port *port;
-	int type;
-	int error;
-	int count;
-};
-
-static int match_add_dports(struct pci_dev *pdev, void *data)
-{
-	struct cxl_walk_context *ctx = data;
-	struct cxl_port *port = ctx->port;
-	int type = pci_pcie_type(pdev);
-	struct cxl_register_map map;
-	struct cxl_dport *dport;
-	u32 lnkcap, port_num;
-	int rc;
-
-	if (pdev->bus != ctx->bus)
-		return 0;
-	if (!pci_is_pcie(pdev))
-		return 0;
-	if (type != ctx->type)
-		return 0;
-	if (pci_read_config_dword(pdev, pci_pcie_cap(pdev) + PCI_EXP_LNKCAP,
-				  &lnkcap))
-		return 0;
-
-	rc = cxl_find_regblock(pdev, CXL_REGLOC_RBI_COMPONENT, &map);
-	if (rc)
-		dev_dbg(&port->dev, "failed to find component registers\n");
-
-	port_num = FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap);
-	dport = devm_cxl_add_dport(port, &pdev->dev, port_num, map.resource);
-	if (IS_ERR(dport)) {
-		ctx->error = PTR_ERR(dport);
-		return PTR_ERR(dport);
-	}
-	ctx->count++;
-
-	return 0;
-}
-
-/**
- * devm_cxl_port_enumerate_dports - enumerate downstream ports of the upstream port
- * @port: cxl_port whose ->uport_dev is the upstream of dports to be enumerated
- *
- * Returns a positive number of dports enumerated or a negative error
- * code.
- */
-int devm_cxl_port_enumerate_dports(struct cxl_port *port)
-{
-	struct pci_bus *bus = cxl_port_to_pci_bus(port);
-	struct cxl_walk_context ctx;
-	int type;
-
-	if (!bus)
-		return -ENXIO;
-
-	if (pci_is_root_bus(bus))
-		type = PCI_EXP_TYPE_ROOT_PORT;
-	else
-		type = PCI_EXP_TYPE_DOWNSTREAM;
-
-	ctx = (struct cxl_walk_context) {
-		.port = port,
-		.bus = bus,
-		.type = type,
-	};
-	pci_walk_bus(bus, match_add_dports, &ctx);
-
-	if (ctx.count == 0)
-		return -ENODEV;
-	if (ctx.error)
-		return ctx.error;
-	return ctx.count;
-}
-EXPORT_SYMBOL_NS_GPL(devm_cxl_port_enumerate_dports, "CXL");
-
 static int cxl_dvsec_mem_range_valid(struct cxl_dev_state *cxlds, int id)
 {
 	struct pci_dev *pdev = to_pci_dev(cxlds->dev);
···
 
 	return 0;
 }
+
+struct cxl_walk_context {
+	struct pci_bus *bus;
+	struct cxl_port *port;
+	int type;
+	int error;
+	int count;
+};
 
 static int count_dports(struct pci_dev *pdev, void *data)
 {
-1
drivers/cxl/core/port.c
···
 	if (atomic_read(&cxlrd->region_id) >= 0)
 		memregion_free(atomic_read(&cxlrd->region_id));
 	__cxl_decoder_release(&cxlrd->cxlsd.cxld);
-	kfree(cxlrd->ops);
 	kfree(cxlrd);
 }
+220 -101
drivers/cxl/core/region.c
···
 	struct cxl_region_params *p = &cxlr->params;
 	int i;
 
+	if (test_bit(CXL_REGION_F_LOCK, &cxlr->flags))
+		return;
+
 	/*
 	 * Before region teardown attempt to flush, evict any data cached for
 	 * this region, or scream loudly about missing arch / platform support
···
 		return len;
 	}
 
+	if (test_bit(CXL_REGION_F_LOCK, &cxlr->flags))
+		return -EPERM;
+
 	rc = queue_reset(cxlr);
 	if (rc)
 		return rc;
···
 	return sysfs_emit(buf, "%d\n", p->state >= CXL_CONFIG_COMMIT);
 }
 static DEVICE_ATTR_RW(commit);
-
-static umode_t cxl_region_visible(struct kobject *kobj, struct attribute *a,
-				  int n)
-{
-	struct device *dev = kobj_to_dev(kobj);
-	struct cxl_region *cxlr = to_cxl_region(dev);
-
-	/*
-	 * Support tooling that expects to find a 'uuid' attribute for all
-	 * regions regardless of mode.
-	 */
-	if (a == &dev_attr_uuid.attr && cxlr->mode != CXL_PARTMODE_PMEM)
-		return 0444;
-	return a->mode;
-}
 
 static ssize_t interleave_ways_show(struct device *dev,
 				    struct device_attribute *attr, char *buf)
···
 }
 static DEVICE_ATTR_RW(size);
 
+static ssize_t extended_linear_cache_size_show(struct device *dev,
+					       struct device_attribute *attr,
+					       char *buf)
+{
+	struct cxl_region *cxlr = to_cxl_region(dev);
+	struct cxl_region_params *p = &cxlr->params;
+	ssize_t rc;
+
+	ACQUIRE(rwsem_read_intr, rwsem)(&cxl_rwsem.region);
+	if ((rc = ACQUIRE_ERR(rwsem_read_intr, &rwsem)))
+		return rc;
+	return sysfs_emit(buf, "%#llx\n", p->cache_size);
+}
+static DEVICE_ATTR_RO(extended_linear_cache_size);
+
 static struct attribute *cxl_region_attrs[] = {
 	&dev_attr_uuid.attr,
 	&dev_attr_commit.attr,
···
 	&dev_attr_resource.attr,
 	&dev_attr_size.attr,
 	&dev_attr_mode.attr,
+	&dev_attr_extended_linear_cache_size.attr,
 	NULL,
 };
+
+static umode_t cxl_region_visible(struct kobject *kobj, struct attribute *a,
+				  int n)
+{
+	struct device *dev = kobj_to_dev(kobj);
+	struct cxl_region *cxlr = to_cxl_region(dev);
+
+	/*
+	 * Support tooling that expects to find a 'uuid' attribute for all
+	 * regions regardless of mode.
+	 */
+	if (a == &dev_attr_uuid.attr && cxlr->mode != CXL_PARTMODE_PMEM)
+		return 0444;
+
+	/*
+	 * Don't display extended linear cache attribute if there is no
+	 * extended linear cache.
+	 */
+	if (a == &dev_attr_extended_linear_cache_size.attr &&
+	    cxlr->params.cache_size == 0)
+		return 0;
+
+	return a->mode;
+}
 
 static const struct attribute_group cxl_region_group = {
 	.attrs = cxl_region_attrs,
···
 	return 1;
 }
 
-static bool region_res_match_cxl_range(const struct cxl_region_params *p,
-				       const struct range *range)
+static bool spa_maps_hpa(const struct cxl_region_params *p,
+			 const struct range *range)
 {
 	if (!p->res)
 		return false;
 
 	/*
-	 * If an extended linear cache region then the CXL range is assumed
-	 * to be fronted by the DRAM range in current known implementation.
-	 * This assumption will be made until a variant implementation exists.
+	 * The extended linear cache region is constructed by a 1:1 ratio
+	 * where the SPA maps equal amounts of DRAM and CXL HPA capacity with
+	 * CXL decoders at the high end of the SPA range.
 	 */
 	return p->res->start + p->cache_size == range->start &&
 		p->res->end == range->end;
···
 	cxld = to_cxl_decoder(dev);
 	r = &cxld->hpa_range;
 
-	if (region_res_match_cxl_range(p, r))
+	if (spa_maps_hpa(p, r))
 		return 1;
 
 	return 0;
···
 	return 0;
 }
 
+static void cxl_region_set_lock(struct cxl_region *cxlr,
+				struct cxl_decoder *cxld)
+{
+	if (!test_bit(CXL_DECODER_F_LOCK, &cxld->flags))
+		return;
+
+	set_bit(CXL_REGION_F_LOCK, &cxlr->flags);
+	clear_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags);
+}
+
 /**
  * cxl_port_attach_region() - track a region's interest in a port by endpoint
  * @port: port to add a new region reference 'struct cxl_region_ref'
···
 			goto out_erase;
 		}
 	}
+
+	cxl_region_set_lock(cxlr, cxld);
 
 	rc = cxl_rr_ep_add(cxl_rr, cxled);
 	if (rc) {
···
 				struct cxl_endpoint_decoder *cxled)
 {
 	struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(cxlr->dev.parent);
-	int parent_iw, parent_ig, ig, iw, rc, inc = 0, pos = cxled->pos;
+	int parent_iw, parent_ig, ig, iw, rc, pos = cxled->pos;
 	struct cxl_port *parent_port = to_cxl_port(port->dev.parent);
 	struct cxl_region_ref *cxl_rr = cxl_rr_load(port, cxlr);
 	struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
···
 	if (test_bit(CXL_REGION_F_AUTO, &cxlr->flags)) {
 		if (cxld->interleave_ways != iw ||
 		    (iw > 1 && cxld->interleave_granularity != ig) ||
-		    !region_res_match_cxl_range(p, &cxld->hpa_range) ||
+		    !spa_maps_hpa(p, &cxld->hpa_range) ||
 		    ((cxld->flags & CXL_DECODER_F_ENABLE) == 0)) {
 			dev_err(&cxlr->dev,
 				"%s:%s %s expected iw: %d ig: %d %pr\n",
···
 		cxlsd->target[cxl_rr->nr_targets_set] = ep->dport;
 		cxlsd->cxld.target_map[cxl_rr->nr_targets_set] = ep->dport->port_id;
 	}
-	inc = 1;
+	cxl_rr->nr_targets_set++;
 out_target_set:
-	cxl_rr->nr_targets_set += inc;
 	dev_dbg(&cxlr->dev, "%s:%s target[%d] = %s for %s:%s @ %d\n",
 		dev_name(port->uport_dev), dev_name(&port->dev),
 		cxl_rr->nr_targets_set - 1, dev_name(ep->dport->dport_dev),
···
 	dev->bus = &cxl_bus_type;
 	dev->type = &cxl_region_type;
 	cxlr->id = id;
+	cxl_region_set_lock(cxlr, &cxlrd->cxlsd.cxld);
 
 	return cxlr;
 }
···
 		return false;
 }
 
-static bool has_hpa_to_spa(struct cxl_root_decoder *cxlrd)
+#define CXL_POS_ZERO 0
+/**
+ * cxl_validate_translation_params
+ * @eiw: encoded interleave ways
+ * @eig: encoded interleave granularity
+ * @pos: position in interleave
+ *
+ * Callers pass CXL_POS_ZERO when no position parameter needs validating.
+ *
+ * Returns: 0 on success, -EINVAL on first invalid parameter
+ */
+int cxl_validate_translation_params(u8 eiw, u16 eig, int pos)
 {
-	return cxlrd->ops && cxlrd->ops->hpa_to_spa;
-}
+	int ways, gran;
 
-static bool has_spa_to_hpa(struct cxl_root_decoder *cxlrd)
-{
-	return cxlrd->ops && cxlrd->ops->spa_to_hpa;
-}
-
-u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
-		   u64 dpa)
-{
-	struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(cxlr->dev.parent);
-	u64 dpa_offset, hpa_offset, bits_upper, mask_upper, hpa;
-	struct cxl_region_params *p = &cxlr->params;
-	struct cxl_endpoint_decoder *cxled = NULL;
-	u16 eig = 0;
-	u8 eiw = 0;
-	int pos;
-
-	for (int i = 0; i < p->nr_targets; i++) {
-		cxled = p->targets[i];
-		if (cxlmd == cxled_to_memdev(cxled))
-			break;
+	if (eiw_to_ways(eiw, &ways)) {
+		pr_debug("%s: invalid eiw=%u\n", __func__, eiw);
+		return -EINVAL;
 	}
-	if (!cxled || cxlmd != cxled_to_memdev(cxled))
+	if (eig_to_granularity(eig, &gran)) {
+		pr_debug("%s: invalid eig=%u\n", __func__, eig);
+		return -EINVAL;
+	}
+	if (pos < 0 || pos >= ways) {
+		pr_debug("%s: invalid pos=%d for ways=%u\n", __func__, pos,
+			 ways);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_FOR_MODULES(cxl_validate_translation_params, "cxl_translate");
+
+u64 cxl_calculate_dpa_offset(u64 hpa_offset, u8 eiw, u16 eig)
+{
+	u64 dpa_offset, bits_lower, bits_upper, temp;
+	int ret;
+
+	ret = cxl_validate_translation_params(eiw, eig, CXL_POS_ZERO);
+	if (ret)
 		return ULLONG_MAX;
 
-	pos = cxled->pos;
-	ways_to_eiw(p->interleave_ways, &eiw);
-	granularity_to_eig(p->interleave_granularity, &eig);
+	/*
+	 * DPA offset: CXL Spec 3.2 Section 8.2.4.20.13
+	 * Lower bits [IG+7:0] pass through unchanged
+	 * (eiw < 8)
+	 *	Per spec: DPAOffset[51:IG+8] = (HPAOffset[51:IG+IW+8] >> IW)
+	 *	Clear the position bits to isolate upper section, then
+	 *	reverse the left shift by eiw that occurred during DPA->HPA
+	 * (eiw >= 8)
+	 *	Per spec: DPAOffset[51:IG+8] = HPAOffset[51:IG+IW] / 3
+	 *	Extract upper bits from the correct bit range and divide by 3
+	 *	to recover the original DPA upper bits
+	 */
+	bits_lower = hpa_offset & GENMASK_ULL(eig + 7, 0);
+	if (eiw < 8) {
+		temp = hpa_offset &= ~GENMASK_ULL(eig + eiw + 8 - 1, 0);
+		dpa_offset = temp >> eiw;
+	} else {
+		bits_upper = div64_u64(hpa_offset >> (eig + eiw), 3);
+		dpa_offset = bits_upper << (eig + 8);
+	}
+	dpa_offset |= bits_lower;
+
+	return dpa_offset;
+}
+EXPORT_SYMBOL_FOR_MODULES(cxl_calculate_dpa_offset, "cxl_translate");
+
+int cxl_calculate_position(u64 hpa_offset, u8 eiw, u16 eig)
+{
+	unsigned int ways = 0;
+	u64 shifted, rem;
+	int pos, ret;
+
+	ret = cxl_validate_translation_params(eiw, eig, CXL_POS_ZERO);
+	if (ret)
+		return ret;
+
+	if (!eiw)
+		/* position is 0 if no interleaving */
+		return 0;
+
+	/*
+	 * Interleave position: CXL Spec 3.2 Section 8.2.4.20.13
+	 * eiw < 8
+	 *	Position is in the IW bits at HPA_OFFSET[IG+8+IW-1:IG+8].
+	 *	Per spec "remove IW bits starting with bit position IG+8"
+	 * eiw >= 8
+	 *	Position is not explicitly stored in HPA_OFFSET bits. It is
+	 *	derived from the modulo operation of the upper bits using
+	 *	the total number of interleave ways.
+	 */
+	if (eiw < 8) {
+		pos = (hpa_offset >> (eig + 8)) & GENMASK(eiw - 1, 0);
+	} else {
+		shifted = hpa_offset >> (eig + 8);
+		eiw_to_ways(eiw, &ways);
+		div64_u64_rem(shifted, ways, &rem);
+		pos = rem;
+	}
+
+	return pos;
+}
+EXPORT_SYMBOL_FOR_MODULES(cxl_calculate_position, "cxl_translate");
+
+u64 cxl_calculate_hpa_offset(u64 dpa_offset, int pos, u8 eiw, u16 eig)
+{
+	u64 mask_upper, hpa_offset, bits_upper;
+	int ret;
+
+	ret = cxl_validate_translation_params(eiw, eig, pos);
+	if (ret)
+		return ULLONG_MAX;
 
 	/*
 	 * The device position in the region interleave set was removed
···
 	 * ways and granularity and is defined in the CXL Spec 3.0 Section
 	 * 8.2.4.19.13 Implementation Note: Device Decode Logic
 	 */
-
-	/* Remove the dpa base */
-	dpa_offset = dpa - cxl_dpa_resource_start(cxled);
 
 	mask_upper = GENMASK_ULL(51, eig + 8);
···
 	/* The lower bits remain unchanged */
 	hpa_offset |= dpa_offset & GENMASK_ULL(eig + 7, 0);
 
+	return hpa_offset;
+}
+EXPORT_SYMBOL_FOR_MODULES(cxl_calculate_hpa_offset, "cxl_translate");
+
+u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
+		   u64 dpa)
+{
+	struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(cxlr->dev.parent);
+	struct cxl_region_params *p = &cxlr->params;
+	struct cxl_endpoint_decoder *cxled = NULL;
+	u64 dpa_offset, hpa_offset, hpa;
+	u16 eig = 0;
+	u8 eiw = 0;
+	int pos;
+
+	for (int i = 0; i < p->nr_targets; i++) {
+		if (cxlmd == cxled_to_memdev(p->targets[i])) {
+			cxled = p->targets[i];
+			break;
+		}
+	}
+	if (!cxled)
+		return ULLONG_MAX;
+
+	pos = cxled->pos;
+	ways_to_eiw(p->interleave_ways, &eiw);
+	granularity_to_eig(p->interleave_granularity, &eig);
+
+	dpa_offset = dpa - cxl_dpa_resource_start(cxled);
+	hpa_offset = cxl_calculate_hpa_offset(dpa_offset, pos, eiw, eig);
+
 	/* Apply the hpa_offset to the region base address */
 	hpa = hpa_offset + p->res->start + p->cache_size;
 
 	/* Root decoder translation overrides typical modulo decode */
-	if (has_hpa_to_spa(cxlrd))
-		hpa = cxlrd->ops->hpa_to_spa(cxlrd, hpa);
+	if (cxlrd->ops.hpa_to_spa)
+		hpa = cxlrd->ops.hpa_to_spa(cxlrd, hpa);
 
 	if (!cxl_resource_contains_addr(p->res, hpa)) {
 		dev_dbg(&cxlr->dev,
···
 	}
 
 	/* Simple chunk check, by pos & gran, only applies to modulo decodes */
-	if (!has_hpa_to_spa(cxlrd) && (!cxl_is_hpa_in_chunk(hpa, cxlr, pos)))
+	if (!cxlrd->ops.hpa_to_spa && !cxl_is_hpa_in_chunk(hpa, cxlr, pos))
 		return ULLONG_MAX;
 
 	return hpa;
···
 	struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(cxlr->dev.parent);
 	struct cxl_endpoint_decoder *cxled;
 	u64 hpa, hpa_offset, dpa_offset;
-	u64 bits_upper, bits_lower;
-	u64 shifted, rem, temp;
 	u16 eig = 0;
 	u8 eiw = 0;
 	int pos;
···
 	 * If the root decoder has SPA to CXL HPA callback, use it. Otherwise
 	 * CXL HPA is assumed to equal SPA.
 	 */
-	if (has_spa_to_hpa(cxlrd)) {
-		hpa = cxlrd->ops->spa_to_hpa(cxlrd, p->res->start + offset);
+	if (cxlrd->ops.spa_to_hpa) {
+		hpa = cxlrd->ops.spa_to_hpa(cxlrd, p->res->start + offset);
 		hpa_offset = hpa - p->res->start;
 	} else {
 		hpa_offset = offset;
 	}
-	/*
-	 * Interleave position: CXL Spec 3.2 Section 8.2.4.20.13
-	 * eiw < 8
-	 *	Position is in the IW bits at HPA_OFFSET[IG+8+IW-1:IG+8].
-	 *	Per spec "remove IW bits starting with bit position IG+8"
-	 * eiw >= 8
-	 *	Position is not explicitly stored in HPA_OFFSET bits. It is
-	 *	derived from the modulo operation of the upper bits using
-	 *	the total number of interleave ways.
-	 */
-	if (eiw < 8) {
-		pos = (hpa_offset >> (eig + 8)) & GENMASK(eiw - 1, 0);
-	} else {
-		shifted = hpa_offset >> (eig + 8);
-		div64_u64_rem(shifted, p->interleave_ways, &rem);
-		pos = rem;
-	}
+
+	pos = cxl_calculate_position(hpa_offset, eiw, eig);
 	if (pos < 0 || pos >= p->nr_targets) {
 		dev_dbg(&cxlr->dev, "Invalid position %d for %d targets\n",
 			pos, p->nr_targets);
 		return -ENXIO;
 	}
 
-	/*
-	 * DPA offset: CXL Spec 3.2 Section 8.2.4.20.13
-	 * Lower bits [IG+7:0] pass through unchanged
-	 * (eiw < 8)
-	 *	Per spec: DPAOffset[51:IG+8] = (HPAOffset[51:IG+IW+8] >> IW)
-	 *	Clear the position bits to isolate upper section, then
-	 *	reverse the left shift by eiw that occurred during DPA->HPA
-	 * (eiw >= 8)
-	 *	Per spec: DPAOffset[51:IG+8] = HPAOffset[51:IG+IW] / 3
-	 *	Extract upper bits from the correct bit range and divide by 3
-	 *	to recover the original DPA upper bits
-	 */
-	bits_lower = hpa_offset & GENMASK_ULL(eig + 7, 0);
-	if (eiw < 8) {
-		temp = hpa_offset &= ~((u64)GENMASK(eig + eiw + 8 - 1, 0));
-		dpa_offset = temp >> eiw;
-	} else {
-		bits_upper = div64_u64(hpa_offset >> (eig + eiw), 3);
-		dpa_offset = bits_upper << (eig + 8);
-	}
-	dpa_offset |= bits_lower;
+	dpa_offset = cxl_calculate_dpa_offset(hpa_offset, eiw, eig);
 
 	/* Look-up and return the result: a memdev and a DPA */
 	for (int i = 0; i < p->nr_targets; i++) {
···
 	p = &cxlr->params;
 
 	guard(rwsem_read)(&cxl_rwsem.region);
-	return region_res_match_cxl_range(p, r);
+	return spa_maps_hpa(p, r);
 }
 
 static int cxl_extended_linear_cache_resize(struct cxl_region *cxlr,
···
 		dev_warn(cxlmd->dev.parent,
 			 "Extended linear cache calculation failed rc:%d\n", rc);
 	}
+
+	rc = sysfs_update_group(&cxlr->dev.kobj, &cxl_region_group);
+	if (rc)
+		return rc;
 
 	rc = insert_resource(cxlrd->res, res);
 	if (rc) {
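The split into cxl_calculate_hpa_offset(), cxl_calculate_position(), and cxl_calculate_dpa_offset() is what makes the spec math reachable from the new cxl_translate module. A minimal userspace model of the round trip for the eiw < 8 (power-of-two ways) case of CXL 3.2 §8.2.4.20.13: the position bits are inserted at HPA_OFFSET[IG+8+IW-1:IG+8] on the way up and stripped back out on the way down. The helper names here are illustrative, not the kernel API, and the 3-way/eiw >= 8 divide-by-3 case is omitted:

```c
#include <assert.h>
#include <stdint.h>

/* DPA offset -> HPA offset: shift the upper bits left by IW and insert
 * the interleave position at [IG+8+IW-1:IG+8]; low [IG+7:0] pass through */
static uint64_t hpa_offset_from_dpa(uint64_t dpa_offset, int pos, int eiw,
				    int eig)
{
	uint64_t lower = dpa_offset & ((1ULL << (eig + 8)) - 1);
	uint64_t upper = dpa_offset & ~((1ULL << (eig + 8)) - 1);

	return (upper << eiw) | ((uint64_t)pos << (eig + 8)) | lower;
}

/* Recover the interleave position from the IW bits at [IG+8+IW-1:IG+8] */
static int position_from_hpa(uint64_t hpa_offset, int eiw, int eig)
{
	return (hpa_offset >> (eig + 8)) & ((1 << eiw) - 1);
}

/* HPA offset -> DPA offset: drop the position bits and undo the IW shift */
static uint64_t dpa_offset_from_hpa(uint64_t hpa_offset, int eiw, int eig)
{
	uint64_t lower = hpa_offset & ((1ULL << (eig + 8)) - 1);
	uint64_t upper = hpa_offset & ~((1ULL << (eig + eiw + 8)) - 1);

	return (upper >> eiw) | lower;
}
```

This is exactly the property a cxl_translate-style unit test wants to assert: for any (dpa_offset, pos), translating to an HPA offset and back is the identity, and the extracted position matches.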
+28 -1
drivers/cxl/cxl.h
···
 	void *platform_data;
 	struct mutex range_lock;
 	int qos_class;
-	struct cxl_rd_ops *ops;
+	struct cxl_rd_ops ops;
 	struct cxl_switch_decoder cxlsd;
 };
···
  * but blocks cxl_region_probe().
  */
 #define CXL_REGION_F_NEEDS_RESET 1
+
+/*
+ * Indicate whether this region is locked due to 1 or more decoders that have
+ * been locked. The approach of all or nothing is taken with regard to the
+ * locked attribute. CXL_REGION_F_NEEDS_RESET should not be set if this flag is
+ * set.
+ */
+#define CXL_REGION_F_LOCK 2
 
 /**
  * struct cxl_region - CXL region
···
 {
 	return port->uport_dev == port->dev.parent;
 }
+
+/* Address translation functions exported to cxl_translate test module only */
+int cxl_validate_translation_params(u8 eiw, u16 eig, int pos);
+u64 cxl_calculate_hpa_offset(u64 dpa_offset, int pos, u8 eiw, u16 eig);
+u64 cxl_calculate_dpa_offset(u64 hpa_offset, u8 eiw, u16 eig);
+int cxl_calculate_position(u64 hpa_offset, u8 eiw, u16 eig);
+struct cxl_cxims_data {
+	int nr_maps;
+	u64 xormaps[] __counted_by(nr_maps);
+};
+
+#if IS_ENABLED(CONFIG_CXL_ACPI)
+u64 cxl_do_xormap_calc(struct cxl_cxims_data *cximsd, u64 addr, int hbiw);
+#else
+static inline u64 cxl_do_xormap_calc(struct cxl_cxims_data *cximsd, u64 addr, int hbiw)
+{
+	return ULLONG_MAX;
+}
+#endif
 
 int cxl_num_decoders_committed(struct cxl_port *port);
 bool is_cxl_port(const struct device *dev);
-1
drivers/cxl/cxlpci.h
···
 	return lnksta2 & PCI_EXP_LNKSTA2_FLIT;
 }
 
-int devm_cxl_port_enumerate_dports(struct cxl_port *port);
 struct cxl_dev_state;
 void read_cdat_data(struct cxl_port *port);
 void cxl_cor_error_detected(struct pci_dev *pdev);
+1 -1
drivers/cxl/pci.c
···
 	if (opcode == CXL_MBOX_OP_SANITIZE) {
 		mutex_lock(&cxl_mbox->mbox_mutex);
 		if (mds->security.sanitize_node)
-			mod_delayed_work(system_wq, &mds->security.poll_dwork, 0);
+			mod_delayed_work(system_percpu_wq, &mds->security.poll_dwork, 0);
 		mutex_unlock(&cxl_mbox->mbox_mutex);
 	} else {
 		/* short-circuit the wait in __cxl_pci_mbox_send_cmd() */
+1 -2
tools/testing/cxl/Kbuild
···
 ldflags-y += --wrap=acpi_evaluate_integer
 ldflags-y += --wrap=acpi_pci_find_root
 ldflags-y += --wrap=nvdimm_bus_register
-ldflags-y += --wrap=devm_cxl_port_enumerate_dports
 ldflags-y += --wrap=cxl_await_media_ready
 ldflags-y += --wrap=devm_cxl_add_rch_dport
-ldflags-y += --wrap=cxl_rcd_component_reg_phys
 ldflags-y += --wrap=cxl_endpoint_parse_cdat
 ldflags-y += --wrap=cxl_dport_init_ras_reporting
 ldflags-y += --wrap=devm_cxl_endpoint_decoders_setup
+ldflags-y += --wrap=hmat_get_extended_linear_cache_size
 
 DRIVERS := ../../../drivers
 CXL_SRC := $(DRIVERS)/cxl
+1
tools/testing/cxl/test/Kbuild
···
 obj-m += cxl_test.o
 obj-m += cxl_mock.o
 obj-m += cxl_mock_mem.o
+obj-m += cxl_translate.o
 
 cxl_test-y := cxl.o
 cxl_mock-y := mock.o
+50 -36
tools/testing/cxl/test/cxl.c
···
 #include "mock.h"
 
 static int interleave_arithmetic;
+static bool extended_linear_cache;
 
 #define FAKE_QTG_ID 42
···
 #define NR_CXL_SWITCH_PORTS 2
 #define NR_CXL_PORT_DECODERS 8
 #define NR_BRIDGES (NR_CXL_HOST_BRIDGES + NR_CXL_SINGLE_HOST + NR_CXL_RCH)
+
+#define MOCK_AUTO_REGION_SIZE_DEFAULT SZ_512M
+static int mock_auto_region_size = MOCK_AUTO_REGION_SIZE_DEFAULT;
 
 static struct platform_device *cxl_acpi;
 static struct platform_device *cxl_host_bridge[NR_CXL_HOST_BRIDGES];
···
 	return res;
 }
 
+/* Only update CFMWS0 as this is used by the auto region. */
+static void cfmws_elc_update(struct acpi_cedt_cfmws *window, int index)
+{
+	if (!extended_linear_cache)
+		return;
+
+	if (index != 0)
+		return;
+
+	/*
+	 * The window size should be 2x of the CXL region size where half is
+	 * DRAM and half is CXL
+	 */
+	window->window_size = mock_auto_region_size * 2;
+}
+
 static int populate_cedt(void)
 {
 	struct cxl_mock_res *res;
···
 	for (i = cfmws_start; i <= cfmws_end; i++) {
 		struct acpi_cedt_cfmws *window = mock_cfmws[i];
 
+		cfmws_elc_update(window, i);
 		res = alloc_mock_res(window->window_size, SZ_256M);
 		if (!res)
 			return -ENOMEM;
···
 
 	*data = host_bridge_index(adev);
 	return AE_OK;
 }
+
+static int
+mock_hmat_get_extended_linear_cache_size(struct resource *backing_res,
+					 int nid, resource_size_t *cache_size)
+{
+	struct acpi_cedt_cfmws *window = mock_cfmws[0];
+	struct resource cfmws0_res =
+		DEFINE_RES_MEM(window->base_hpa, window->window_size);
+
+	if (!extended_linear_cache ||
+	    !resource_contains(&cfmws0_res, backing_res)) {
+		return hmat_get_extended_linear_cache_size(backing_res,
+							   nid, cache_size);
+	}
+
+	*cache_size = mock_auto_region_size;
+
+	return 0;
+}
 
 static struct pci_bus mock_pci_bus[NR_BRIDGES];
···
 	struct cxl_endpoint_decoder *cxled;
 	struct cxl_switch_decoder *cxlsd;
 	struct cxl_port *port, *iter;
-	const int size = SZ_512M;
 	struct cxl_memdev *cxlmd;
 	struct cxl_dport *dport;
 	struct device *dev;
···
 	}
 
 	base = window->base_hpa;
+	if (extended_linear_cache)
+		base += mock_auto_region_size;
 	cxld->hpa_range = (struct range) {
 		.start = base,
-		.end = base + size - 1,
+		.end = base + mock_auto_region_size - 1,
 	};
 
 	cxld->interleave_ways = 2;
···
 	cxld->flags = CXL_DECODER_F_ENABLE;
 	cxled->state = CXL_DECODER_STATE_AUTO;
 	port->commit_end = cxld->id;
-	devm_cxl_dpa_reserve(cxled, 0, size / cxld->interleave_ways, 0);
+	devm_cxl_dpa_reserve(cxled, 0,
+			     mock_auto_region_size / cxld->interleave_ways, 0);
 	cxld->commit = mock_decoder_commit;
 	cxld->reset = mock_decoder_reset;
···
 	cxld->interleave_granularity = 4096;
 	cxld->hpa_range = (struct range) {
 		.start = base,
-		.end = base + size - 1,
+		.end = base + mock_auto_region_size - 1,
 	};
 	put_device(dev);
 }
···
 	return 0;
 }
 
-static int mock_cxl_port_enumerate_dports(struct cxl_port *port)
-{
-	struct platform_device **array;
-	int i, array_size;
-	int rc;
-
-	rc = get_port_array(port, &array, &array_size);
-	if (rc)
-		return rc;
-
-	for (i = 0; i < array_size; i++) {
-		struct platform_device *pdev = array[i];
-		struct cxl_dport *dport;
-
-		if (pdev->dev.parent != port->uport_dev) {
-			dev_dbg(&port->dev, "%s: mismatch parent %s\n",
-				dev_name(port->uport_dev),
-				dev_name(pdev->dev.parent));
-			continue;
-		}
-
-		dport = devm_cxl_add_dport(port, &pdev->dev, pdev->id,
-					   CXL_RESOURCE_NONE);
-
-		if (IS_ERR(dport))
-			return PTR_ERR(dport);
-	}
-
-	return 0;
-}
-
 static struct cxl_dport *mock_cxl_add_dport_by_dev(struct cxl_port *port,
 						   struct device *dport_dev)
 {
···
 	.acpi_pci_find_root = mock_acpi_pci_find_root,
 	.devm_cxl_switch_port_decoders_setup = mock_cxl_switch_port_decoders_setup,
 	.devm_cxl_endpoint_decoders_setup = mock_cxl_endpoint_decoders_setup,
-	.devm_cxl_port_enumerate_dports = mock_cxl_port_enumerate_dports,
 	.cxl_endpoint_parse_cdat = mock_cxl_endpoint_parse_cdat,
 	.devm_cxl_add_dport_by_dev = mock_cxl_add_dport_by_dev,
+	.hmat_get_extended_linear_cache_size =
+		mock_hmat_get_extended_linear_cache_size,
 	.list = LIST_HEAD_INIT(cxl_mock_ops.list),
 };
···
 
 module_param(interleave_arithmetic, int, 0444);
 MODULE_PARM_DESC(interleave_arithmetic, "Modulo:0, XOR:1");
+module_param(extended_linear_cache, bool, 0444);
+MODULE_PARM_DESC(extended_linear_cache, "Enable extended linear cache support");
 module_init(cxl_test_init);
 module_exit(cxl_test_exit);
 MODULE_LICENSE("GPL v2");
+445
tools/testing/cxl/test/cxl_translate.c
// SPDX-License-Identifier: GPL-2.0-only
// Copyright(c) 2025 Intel Corporation. All rights reserved.

/* Preface all log entries with "cxl_translate" */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/moduleparam.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/acpi.h>
#include <cxlmem.h>
#include <cxl.h>

/* Maximum number of test vectors and entry length */
#define MAX_TABLE_ENTRIES 128
#define MAX_ENTRY_LEN 128

/* Expected number of parameters in each test vector */
#define EXPECTED_PARAMS 7

/* Module parameters for test vectors */
static char *table[MAX_TABLE_ENTRIES];
static int table_num;

/* Interleave Arithmetic */
#define MODULO_MATH 0
#define XOR_MATH 1

/*
 * XOR mapping configuration
 * The test data sets all use the same set of xormaps. When additional
 * data sets arrive for validation, this static setup will need to
 * be changed to accept xormaps as additional parameters.
 */
struct cxl_cxims_data *cximsd;
static u64 xormaps[] = {
	0x2020900,
	0x4041200,
	0x1010400,
	0x800,
};

static int nr_maps = ARRAY_SIZE(xormaps);

#define HBIW_TO_NR_MAPS_SIZE (CXL_DECODER_MAX_INTERLEAVE + 1)
static const int hbiw_to_nr_maps[HBIW_TO_NR_MAPS_SIZE] = {
	[1] = 0, [2] = 1, [3] = 0, [4] = 2, [6] = 1, [8] = 3, [12] = 2, [16] = 4
};

/**
 * to_hpa - calculate an HPA offset from a DPA offset and position
 *
 * dpa_offset: device physical address offset
 * pos: devices position in interleave
 * r_eiw: region encoded interleave ways
 * r_eig: region encoded interleave granularity
 * hb_ways: host bridge interleave ways
 * math: interleave arithmetic (MODULO_MATH or XOR_MATH)
 *
 * Returns: host physical address offset
 */
static u64 to_hpa(u64 dpa_offset, int pos, u8 r_eiw, u16 r_eig, u8 hb_ways,
		  u8 math)
{
	u64 hpa_offset;

	/* Calculate base HPA offset from DPA and position */
	hpa_offset = cxl_calculate_hpa_offset(dpa_offset, pos, r_eiw, r_eig);

	if (math == XOR_MATH) {
		cximsd->nr_maps = hbiw_to_nr_maps[hb_ways];
		if (cximsd->nr_maps)
			return cxl_do_xormap_calc(cximsd, hpa_offset, hb_ways);
	}
	return hpa_offset;
}

/**
 * to_dpa - translate an HPA offset to DPA offset
 *
 * hpa_offset: host physical address offset
 * r_eiw: region encoded interleave ways
 * r_eig: region encoded interleave granularity
 * hb_ways: host bridge interleave ways
 * math: interleave arithmetic (MODULO_MATH or XOR_MATH)
 *
 * Returns: device physical address offset
 */
static u64 to_dpa(u64 hpa_offset, u8 r_eiw, u16 r_eig, u8 hb_ways, u8 math)
{
	u64 offset = hpa_offset;

	if (math == XOR_MATH) {
		cximsd->nr_maps = hbiw_to_nr_maps[hb_ways];
		if (cximsd->nr_maps)
			offset = cxl_do_xormap_calc(cximsd, hpa_offset, hb_ways);
	}
	return cxl_calculate_dpa_offset(offset, r_eiw, r_eig);
}

/**
 * to_pos - extract an interleave position from an HPA offset
 *
 * hpa_offset: host physical address offset
 * r_eiw: region encoded interleave ways
 * r_eig: region encoded interleave granularity
 * hb_ways: host bridge interleave ways
 * math: interleave arithmetic (MODULO_MATH or XOR_MATH)
 *
 * Returns: devices position in region interleave
 */
static u64 to_pos(u64 hpa_offset, u8 r_eiw, u16 r_eig, u8 hb_ways, u8 math)
{
	u64 offset = hpa_offset;

	/* Reverse XOR mapping if specified */
	if (math == XOR_MATH)
		offset = cxl_do_xormap_calc(cximsd, hpa_offset, hb_ways);

	return cxl_calculate_position(offset, r_eiw, r_eig);
}

/**
 * run_translation_test - execute forward and reverse translations
 *
 * @dpa: device physical address
 * @pos: expected position in region interleave
 * @r_eiw: region encoded interleave ways
 * @r_eig: region encoded interleave granularity
 * @hb_ways: host bridge interleave ways
 * @math: interleave arithmetic (MODULO_MATH or XOR_MATH)
 * @expect_spa: expected system physical address
 *
 * Returns: 0 on success, -1 on failure
 */
static int run_translation_test(u64 dpa, int pos, u8 r_eiw, u16 r_eig,
				u8 hb_ways, int math, u64 expect_hpa)
{
	u64 translated_spa, reverse_dpa;
	int reverse_pos;

	/* Test Device to Host translation: DPA + POS -> SPA */
	translated_spa = to_hpa(dpa, pos, r_eiw, r_eig, hb_ways, math);
	if (translated_spa != expect_hpa) {
		pr_err("Device to host failed: expected HPA %llu, got %llu\n",
		       expect_hpa, translated_spa);
		return -1;
	}

	/* Test Host to Device DPA translation: SPA -> DPA */
	reverse_dpa = to_dpa(translated_spa, r_eiw, r_eig, hb_ways, math);
	if (reverse_dpa != dpa) {
		pr_err("Host to Device DPA failed: expected %llu, got %llu\n",
		       dpa, reverse_dpa);
		return -1;
	}

	/* Test Host to Device Position translation: SPA -> POS */
	reverse_pos = to_pos(translated_spa, r_eiw, r_eig, hb_ways, math);
	if (reverse_pos != pos) {
		pr_err("Position lookup failed: expected %d, got %d\n", pos,
		       reverse_pos);
		return -1;
	}

	return 0;
}

/**
 * parse_test_vector - parse a single test vector string
 *
 * entry: test vector string to parse
 * dpa: device physical address
 * pos: expected position in region interleave
 * r_eiw: region encoded interleave ways
 * r_eig: region encoded interleave granularity
 * hb_ways: host bridge interleave ways
 * math: interleave arithmetic (MODULO_MATH or XOR_MATH)
 * expect_spa: expected system physical address
 *
 * Returns: 0 on success, negative error code on failure
 */
static int parse_test_vector(const char *entry, u64 *dpa, int *pos, u8 *r_eiw,
			     u16 *r_eig, u8 *hb_ways, int *math,
			     u64 *expect_hpa)
{
	unsigned int tmp_r_eiw, tmp_r_eig, tmp_hb_ways;
	int parsed;

	parsed = sscanf(entry, "%llu %d %u %u %u %d %llu", dpa, pos, &tmp_r_eiw,
			&tmp_r_eig, &tmp_hb_ways, math, expect_hpa);

	if (parsed != EXPECTED_PARAMS) {
		pr_err("Parse error: expected %d parameters, got %d in '%s'\n",
		       EXPECTED_PARAMS, parsed, entry);
		return -EINVAL;
	}
	if (tmp_r_eiw > U8_MAX || tmp_r_eig > U16_MAX || tmp_hb_ways > U8_MAX) {
		pr_err("Parameter overflow in entry: '%s'\n", entry);
		return -ERANGE;
	}
	if (*math != MODULO_MATH && *math != XOR_MATH) {
		pr_err("Invalid math type %d in entry: '%s'\n", *math, entry);
		return -EINVAL;
	}
	*r_eiw = tmp_r_eiw;
	*r_eig = tmp_r_eig;
	*hb_ways = tmp_hb_ways;

	return 0;
}

/*
 * setup_xor_mapping - Initialize XOR mapping data structure
 *
 * The test data sets all use the same HBIG so we can use one set
 * of xormaps, and set the number to apply based on HBIW before
 * calling cxl_do_xormap_calc().
 *
 * When additional data sets arrive for validation with different
 * HBIG's this static setup will need to be updated.
 *
 * Returns: 0 on success, negative error code on failure
 */
static int setup_xor_mapping(void)
{
	if (nr_maps <= 0)
		return -EINVAL;

	cximsd = kzalloc(struct_size(cximsd, xormaps, nr_maps), GFP_KERNEL);
	if (!cximsd)
		return -ENOMEM;

	memcpy(cximsd->xormaps, xormaps, nr_maps * sizeof(*cximsd->xormaps));
	cximsd->nr_maps = nr_maps;

	return 0;
}

static int test_random_params(void)
{
	u8 valid_eiws[] = { 0, 1, 2, 3, 4, 8, 9, 10 };
	u16 valid_eigs[] = { 0, 1, 2, 3, 4, 5, 6 };
	int i, ways, pos, reverse_pos;
	u64 dpa, hpa, reverse_dpa;
	int iterations = 10000;
	int failures = 0;

	for (i = 0; i < iterations; i++) {
		/* Generate valid random parameters for eiw, eig, pos, dpa */
		u8 eiw = valid_eiws[get_random_u32() % ARRAY_SIZE(valid_eiws)];
		u16 eig = valid_eigs[get_random_u32() % ARRAY_SIZE(valid_eigs)];

		eiw_to_ways(eiw, &ways);
		pos = get_random_u32() % ways;
		dpa = get_random_u64() >> 12;

		hpa = cxl_calculate_hpa_offset(dpa, pos, eiw, eig);
		reverse_dpa = cxl_calculate_dpa_offset(hpa, eiw, eig);
		reverse_pos = cxl_calculate_position(hpa, eiw, eig);

		if (reverse_dpa != dpa || reverse_pos != pos) {
			pr_err("test random iter %d FAIL hpa=%llu, dpa=%llu reverse_dpa=%llu, pos=%d reverse_pos=%d eiw=%u eig=%u\n",
			       i, hpa, dpa, reverse_dpa, pos, reverse_pos, eiw,
			       eig);

			if (failures++ > 10) {
				pr_err("test random too many failures, stop\n");
				break;
			}
		}
	}
	pr_info("..... test random: PASS %d FAIL %d\n", i - failures, failures);

	if (failures)
		return -EINVAL;

	return 0;
}

struct param_test {
	u8 eiw;
	u16 eig;
	int pos;
	bool expect; /* true: expect pass, false: expect fail */
	const char *desc;
};

static struct param_test param_tests[] = {
	{ 0x0, 0, 0, true, "1-way, min eig=0, pos=0" },
	{ 0x0, 3, 0, true, "1-way, mid eig=3, pos=0" },
	{ 0x0, 6, 0, true, "1-way, max eig=6, pos=0" },
	{ 0x1, 0, 0, true, "2-way, eig=0, pos=0" },
	{ 0x1, 3, 1, true, "2-way, eig=3, max pos=1" },
	{ 0x1, 6, 1, true, "2-way, eig=6, max pos=1" },
	{ 0x2, 0, 0, true, "4-way, eig=0, pos=0" },
	{ 0x2, 3, 3, true, "4-way, eig=3, max pos=3" },
	{ 0x2, 6, 3, true, "4-way, eig=6, max pos=3" },
	{ 0x3, 0, 0, true, "8-way, eig=0, pos=0" },
	{ 0x3, 3, 7, true, "8-way, eig=3, max pos=7" },
	{ 0x3, 6, 7, true, "8-way, eig=6, max pos=7" },
	{ 0x4, 0, 0, true, "16-way, eig=0, pos=0" },
	{ 0x4, 3, 15, true, "16-way, eig=3, max pos=15" },
	{ 0x4, 6, 15, true, "16-way, eig=6, max pos=15" },
	{ 0x8, 0, 0, true, "3-way, eig=0, pos=0" },
	{ 0x8, 3, 2, true, "3-way, eig=3, max pos=2" },
	{ 0x8, 6, 2, true, "3-way, eig=6, max pos=2" },
	{ 0x9, 0, 0, true, "6-way, eig=0, pos=0" },
	{ 0x9, 3, 5, true, "6-way, eig=3, max pos=5" },
	{ 0x9, 6, 5, true, "6-way, eig=6, max pos=5" },
	{ 0xA, 0, 0, true, "12-way, eig=0, pos=0" },
	{ 0xA, 3, 11, true, "12-way, eig=3, max pos=11" },
	{ 0xA, 6, 11, true, "12-way, eig=6, max pos=11" },
	{ 0x5, 0, 0, false, "invalid eiw=5" },
	{ 0x7, 0, 0, false, "invalid eiw=7" },
	{ 0xB, 0, 0, false, "invalid eiw=0xB" },
	{ 0xFF, 0, 0, false, "invalid eiw=0xFF" },
	{ 0x1, 7, 0, false, "invalid eig=7 (out of range)" },
	{ 0x2, 0x10, 0, false, "invalid eig=0x10" },
	{ 0x3, 0xFFFF, 0, false, "invalid eig=0xFFFF" },
	{ 0x1, 0, -1, false, "pos < 0" },
	{ 0x1, 0, 2, false, "2-way, pos=2 (>= ways)" },
	{ 0x2, 0, 4, false, "4-way, pos=4 (>= ways)" },
	{ 0x3, 0, 8, false, "8-way, pos=8 (>= ways)" },
	{ 0x4, 0, 16, false, "16-way, pos=16 (>= ways)" },
	{ 0x8, 0, 3, false, "3-way, pos=3 (>= ways)" },
	{ 0x9, 0, 6, false, "6-way, pos=6 (>= ways)" },
	{ 0xA, 0, 12, false, "12-way, pos=12 (>= ways)" },
};

static int test_cxl_validate_translation_params(void)
{
	int i, rc, failures = 0;
	bool valid;

	for (i = 0; i < ARRAY_SIZE(param_tests); i++) {
		struct param_test *t = &param_tests[i];

		rc = cxl_validate_translation_params(t->eiw, t->eig, t->pos);
		valid = (rc == 0);

		if (valid != t->expect) {
			pr_err("test params failed: %s\n", t->desc);
			failures++;
		}
	}
	pr_info("..... test params: PASS %d FAIL %d\n", i - failures, failures);

	if (failures)
		return -EINVAL;

	return 0;
}

/*
 * cxl_translate_init
 *
 * Run the internal validation tests when no params are passed.
 * Otherwise, parse the parameters (test vectors), and kick off
 * the translation test.
 *
 * Returns: 0 on success, negative error code on failure
 */
static int __init cxl_translate_init(void)
{
	int rc, i;

	/* If no tables are passed, validate module params only */
	if (table_num == 0) {
		pr_info("Internal validation test start...\n");
		rc = test_cxl_validate_translation_params();
		if (rc)
			return rc;

		rc = test_random_params();
		if (rc)
			return rc;

		pr_info("Internal validation test completed successfully\n");

		return 0;
	}

	pr_info("CXL translate test module loaded with %d test vectors\n",
		table_num);

	rc = setup_xor_mapping();
	if (rc)
		return rc;

	/* Process each test vector */
	for (i = 0; i < table_num; i++) {
		u64 dpa, expect_spa;
		int pos, math;
		u8 r_eiw, hb_ways;
		u16 r_eig;

		pr_debug("Processing test vector %d: '%s'\n", i, table[i]);

		/* Parse the test vector */
		rc = parse_test_vector(table[i], &dpa, &pos, &r_eiw, &r_eig,
				       &hb_ways, &math, &expect_spa);
		if (rc) {
			pr_err("CXL Translate Test %d: FAIL\n"
			       "  Failed to parse test vector '%s'\n",
			       i, table[i]);
			continue;
		}
		/* Run the translation test */
		rc = run_translation_test(dpa, pos, r_eiw, r_eig, hb_ways, math,
					  expect_spa);
		if (rc) {
			pr_err("CXL Translate Test %d: FAIL\n"
			       "  dpa=%llu pos=%d r_eiw=%u r_eig=%u hb_ways=%u math=%s expect_spa=%llu\n",
			       i, dpa, pos, r_eiw, r_eig, hb_ways,
			       (math == XOR_MATH) ? "XOR" : "MODULO",
			       expect_spa);
		} else {
			pr_info("CXL Translate Test %d: PASS\n", i);
		}
	}

	kfree(cximsd);
	pr_info("CXL translate test completed\n");

	return 0;
}

static void __exit cxl_translate_exit(void)
{
	pr_info("CXL translate test module unloaded\n");
}

module_param_array(table, charp, &table_num, 0444);
MODULE_PARM_DESC(table, "Test vectors as space-separated decimal strings");

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("cxl_test: cxl address translation test module");
MODULE_IMPORT_NS("CXL");

module_init(cxl_translate_init);
module_exit(cxl_translate_exit);
+5 -6
tools/testing/cxl/test/mem.c
···
  * Vary the number of events returned to simulate events occuring while the
  * logs are being read.
  */
-static int ret_limit = 0;
+static atomic_t event_counter = ATOMIC_INIT(0);
 
 static int mock_get_event(struct device *dev, struct cxl_mbox_cmd *cmd)
 {
 	struct cxl_get_event_payload *pl;
 	struct mock_event_log *log;
-	u16 nr_overflow;
+	int ret_limit;
 	u8 log_type;
 	int i;
 
 	if (cmd->size_in != sizeof(log_type))
 		return -EINVAL;
 
-	ret_limit = (ret_limit + 1) % CXL_TEST_EVENT_RET_MAX;
-	if (!ret_limit)
-		ret_limit = 1;
+	/* Vary return limit from 1 to CXL_TEST_EVENT_RET_MAX */
+	ret_limit = (atomic_inc_return(&event_counter) % CXL_TEST_EVENT_RET_MAX) + 1;
 
 	if (cmd->size_out < struct_size(pl, records, ret_limit))
 		return -EINVAL;
···
 		u64 ns;
 
 		pl->flags |= CXL_GET_EVENT_FLAG_OVERFLOW;
-		pl->overflow_err_count = cpu_to_le16(nr_overflow);
+		pl->overflow_err_count = cpu_to_le16(log->nr_overflow);
 		ns = ktime_get_real_ns();
 		ns -= 5000000000; /* 5s ago */
 		pl->first_overflow_timestamp = cpu_to_le64(ns);
+20 -32
tools/testing/cxl/test/mock.c
···
 }
 EXPORT_SYMBOL(__wrap_acpi_evaluate_integer);
 
+int __wrap_hmat_get_extended_linear_cache_size(struct resource *backing_res,
+					       int nid,
+					       resource_size_t *cache_size)
+{
+	int index, rc;
+	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
+
+	if (ops)
+		rc = ops->hmat_get_extended_linear_cache_size(backing_res, nid,
+							      cache_size);
+	else
+		rc = hmat_get_extended_linear_cache_size(backing_res, nid,
+							 cache_size);
+
+	put_cxl_mock_ops(index);
+
+	return rc;
+}
+EXPORT_SYMBOL_GPL(__wrap_hmat_get_extended_linear_cache_size);
+
 struct acpi_pci_root *__wrap_acpi_pci_find_root(acpi_handle handle)
 {
 	int index;
···
 }
 EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_endpoint_decoders_setup, "CXL");
 
-int __wrap_devm_cxl_port_enumerate_dports(struct cxl_port *port)
-{
-	int rc, index;
-	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
-
-	if (ops && ops->is_mock_port(port->uport_dev))
-		rc = ops->devm_cxl_port_enumerate_dports(port);
-	else
-		rc = devm_cxl_port_enumerate_dports(port);
-	put_cxl_mock_ops(index);
-
-	return rc;
-}
-EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_port_enumerate_dports, "CXL");
-
 int __wrap_cxl_await_media_ready(struct cxl_dev_state *cxlds)
 {
 	int rc, index;
···
 	return dport;
 }
 EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_add_rch_dport, "CXL");
-
-resource_size_t __wrap_cxl_rcd_component_reg_phys(struct device *dev,
-						  struct cxl_dport *dport)
-{
-	int index;
-	resource_size_t component_reg_phys;
-	struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
-
-	if (ops && ops->is_mock_port(dev))
-		component_reg_phys = CXL_RESOURCE_NONE;
-	else
-		component_reg_phys = cxl_rcd_component_reg_phys(dev, dport);
-	put_cxl_mock_ops(index);
-
-	return component_reg_phys;
-}
-EXPORT_SYMBOL_NS_GPL(__wrap_cxl_rcd_component_reg_phys, "CXL");
 
 void __wrap_cxl_endpoint_parse_cdat(struct cxl_port *port)
 {
+3 -1
tools/testing/cxl/test/mock.h
···
 	bool (*is_mock_bus)(struct pci_bus *bus);
 	bool (*is_mock_port)(struct device *dev);
 	bool (*is_mock_dev)(struct device *dev);
-	int (*devm_cxl_port_enumerate_dports)(struct cxl_port *port);
 	int (*devm_cxl_switch_port_decoders_setup)(struct cxl_port *port);
 	int (*devm_cxl_endpoint_decoders_setup)(struct cxl_port *port);
 	void (*cxl_endpoint_parse_cdat)(struct cxl_port *port);
 	struct cxl_dport *(*devm_cxl_add_dport_by_dev)(struct cxl_port *port,
 						       struct device *dport_dev);
+	int (*hmat_get_extended_linear_cache_size)(struct resource *backing_res,
+						   int nid,
+						   resource_size_t *cache_size);
 };
 
 void register_cxl_mock_ops(struct cxl_mock_ops *ops);