Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mlx5-updates-2023-06-09' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

mlx5-updates-2023-06-09

1) Embedded CPU Virtual Functions
2) Lightweight local SFs

Daniel Jurgens says:
====================
Embedded CPU Virtual Functions

This series enables the creation of virtual functions on BlueField (the
embedded CPU platform), referred to as embedded CPU virtual functions
(EC VFs). EC VF creation, deletion, and management interfaces are the
same as those for virtual functions on a server with a ConnectX NIC.

When using EC VFs on the ARM, the creation of virtual functions on the
host system is still supported. Host VF eswitch vports occupy the range
1..max_vfs; the EC VF vport range is max_vfs+1..max_vfs+max_ec_vfs.

Every function (PF, ECPF, VF, EC VF, and subfunction) has a function ID
associated with it. Prior to this series the function ID and the eswitch
vport were the same. That is no longer the case: the EC VF function ID
range is 1..max_ec_vfs. When querying or setting the capabilities of an
EC VF function, a new bit must be set in the query/set HCA cap
structure.

This is a high level overview of the changes made:
- Allocate vports for EC VFs if they are enabled.
- Create representors and devlink ports for the EC VF vports.
- When querying/setting HCA caps by vport, break the assumption
that function ID is the same as vport number and adjust
accordingly.
- Create a new type of page, so that when SRIOV on the ARM is
disabled, but remains enabled on the host, the driver can
wait for the correct pages.
- Update SRIOV code to support EC VF creation/deletion.

====================

Lightweight local SFs:

The last 3 patches are from Shay Drory:

SFs are heavyweight, and by default they come with the full package of
ConnectX features. Users usually want specialized SFs for one specific
purpose, and using devlink they will almost always override the set of
advertised features of an SF and reload it.

Shay Drory says:
================
In order to avoid the wasted time and resources on the reload, local SFs
will probe without any auxiliary sub-device, so that an SF can be
configured prior to its full probe.

The defaults of the enable_* devlink params of these SFs are set to
false.

Usage example:
Create SF:
$ devlink port add pci/0000:08:00.0 flavour pcisf pfnum 0 sfnum 11
$ devlink port function set pci/0000:08:00.0/32768 \
hw_addr 00:00:00:00:00:11 state active

Enable ETH auxiliary device:
$ devlink dev param set auxiliary/mlx5_core.sf.1 \
name enable_eth value true cmode driverinit

Now, in order to fully probe the SF, use devlink reload:
$ devlink dev reload auxiliary/mlx5_core.sf.1

At this point the user has an SF devlink instance with an auxiliary
device for Ethernet functionality only.

================

+624 -191
+20
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/switchdev.rst
··· 45 45 Subfunction 46 46 =========== 47 47 48 + Subfunction which are spawned over the E-switch are created only with devlink 49 + device, and by default all the SF auxiliary devices are disabled. 50 + This will allow user to configure the SF before the SF have been fully probed, 51 + which will save time. 52 + 53 + Usage example: 54 + Create SF: 55 + $ devlink port add pci/0000:08:00.0 flavour pcisf pfnum 0 sfnum 11 56 + $ devlink port function set pci/0000:08:00.0/32768 \ 57 + hw_addr 00:00:00:00:00:11 state active 58 + 59 + Enable ETH auxiliary device: 60 + $ devlink dev param set auxiliary/mlx5_core.sf.1 \ 61 + name enable_eth value true cmode driverinit 62 + 63 + Now, in order to fully probe the SF, use devlink reload: 64 + $ devlink dev reload auxiliary/mlx5_core.sf.1 65 + 66 + mlx5 supports ETH,rdma and vdpa (vnet) auxiliary devices devlink params (see :ref:`Documentation/networking/devlink/devlink-params.rst`) 67 + 48 68 mlx5 supports subfunction management using devlink port (see :ref:`Documentation/networking/devlink/devlink-port.rst <devlink_port>`) interface. 49 69 50 70 A subfunction has its own function capabilities and its own resources. This
+1
drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
··· 246 246 247 247 debugfs_create_u32("fw_pages_total", 0400, pages, &dev->priv.fw_pages); 248 248 debugfs_create_u32("fw_pages_vfs", 0400, pages, &dev->priv.page_counters[MLX5_VF]); 249 + debugfs_create_u32("fw_pages_ec_vfs", 0400, pages, &dev->priv.page_counters[MLX5_EC_VF]); 249 250 debugfs_create_u32("fw_pages_sfs", 0400, pages, &dev->priv.page_counters[MLX5_SF]); 250 251 debugfs_create_u32("fw_pages_host_pf", 0400, pages, &dev->priv.page_counters[MLX5_HOST_PF]); 251 252 debugfs_create_u32("fw_pages_alloc_failed", 0400, pages, &dev->priv.fw_pages_alloc_failed);
+16
drivers/net/ethernet/mellanox/mlx5/core/dev.c
··· 323 323 auxiliary_device_uninit(adev); 324 324 } 325 325 326 + void mlx5_dev_set_lightweight(struct mlx5_core_dev *dev) 327 + { 328 + mutex_lock(&mlx5_intf_mutex); 329 + dev->priv.flags |= MLX5_PRIV_FLAGS_DISABLE_ALL_ADEV; 330 + mutex_unlock(&mlx5_intf_mutex); 331 + } 332 + 333 + bool mlx5_dev_is_lightweight(struct mlx5_core_dev *dev) 334 + { 335 + return dev->priv.flags & MLX5_PRIV_FLAGS_DISABLE_ALL_ADEV; 336 + } 337 + 326 338 int mlx5_attach_device(struct mlx5_core_dev *dev) 327 339 { 328 340 struct mlx5_priv *priv = &dev->priv; ··· 467 455 bool is_supported = false; 468 456 469 457 if (priv->adev[i]) 458 + continue; 459 + 460 + if (mlx5_adev_devices[i].is_enabled && 461 + !(mlx5_adev_devices[i].is_enabled(dev))) 470 462 continue; 471 463 472 464 if (mlx5_adev_devices[i].is_supported)
+16 -38
drivers/net/ethernet/mellanox/mlx5/core/devlink.c
··· 7 7 #include "fw_reset.h" 8 8 #include "fs_core.h" 9 9 #include "eswitch.h" 10 - #include "lag/lag.h" 11 10 #include "esw/qos.h" 12 11 #include "sf/dev/dev.h" 13 12 #include "sf/sf.h" ··· 141 142 bool sf_dev_allocated; 142 143 int ret = 0; 143 144 145 + if (mlx5_dev_is_lightweight(dev)) { 146 + if (action != DEVLINK_RELOAD_ACTION_DRIVER_REINIT) 147 + return -EOPNOTSUPP; 148 + mlx5_unload_one_light(dev); 149 + return 0; 150 + } 151 + 144 152 sf_dev_allocated = mlx5_sf_dev_allocated(dev); 145 153 if (sf_dev_allocated) { 146 154 /* Reload results in deleting SF device which further results in ··· 200 194 *actions_performed = BIT(action); 201 195 switch (action) { 202 196 case DEVLINK_RELOAD_ACTION_DRIVER_REINIT: 197 + if (mlx5_dev_is_lightweight(dev)) { 198 + mlx5_fw_reporters_create(dev); 199 + return mlx5_init_one_devl_locked(dev); 200 + } 203 201 ret = mlx5_load_one_devl_locked(dev, false); 204 202 break; 205 203 case DEVLINK_RELOAD_ACTION_FW_ACTIVATE: ··· 437 427 438 428 return 0; 439 429 } 440 - 441 - static int mlx5_devlink_esw_multiport_set(struct devlink *devlink, u32 id, 442 - struct devlink_param_gset_ctx *ctx) 443 - { 444 - struct mlx5_core_dev *dev = devlink_priv(devlink); 445 - 446 - if (!MLX5_ESWITCH_MANAGER(dev)) 447 - return -EOPNOTSUPP; 448 - 449 - if (ctx->val.vbool) 450 - return mlx5_lag_mpesw_enable(dev); 451 - 452 - mlx5_lag_mpesw_disable(dev); 453 - return 0; 454 - } 455 - 456 - static int mlx5_devlink_esw_multiport_get(struct devlink *devlink, u32 id, 457 - struct devlink_param_gset_ctx *ctx) 458 - { 459 - struct mlx5_core_dev *dev = devlink_priv(devlink); 460 - 461 - if (!MLX5_ESWITCH_MANAGER(dev)) 462 - return -EOPNOTSUPP; 463 - 464 - ctx->val.vbool = mlx5_lag_is_mpesw(dev); 465 - return 0; 466 - } 467 430 #endif 468 431 469 432 static int mlx5_devlink_eq_depth_validate(struct devlink *devlink, u32 id, ··· 510 527 BIT(DEVLINK_PARAM_CMODE_DRIVERINIT), 511 528 NULL, NULL, 512 529 mlx5_devlink_large_group_num_validate), 513 - 
DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_ESW_MULTIPORT, 514 - "esw_multiport", DEVLINK_PARAM_TYPE_BOOL, 515 - BIT(DEVLINK_PARAM_CMODE_RUNTIME), 516 - mlx5_devlink_esw_multiport_get, 517 - mlx5_devlink_esw_multiport_set, 518 - NULL), 519 530 #endif 520 531 DEVLINK_PARAM_GENERIC(IO_EQ_SIZE, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT), 521 532 NULL, NULL, mlx5_devlink_eq_depth_validate), ··· 522 545 struct mlx5_core_dev *dev = devlink_priv(devlink); 523 546 union devlink_param_value value; 524 547 525 - value.vbool = MLX5_CAP_GEN(dev, roce); 548 + value.vbool = MLX5_CAP_GEN(dev, roce) && !mlx5_dev_is_lightweight(dev); 526 549 devl_param_driverinit_value_set(devlink, 527 550 DEVLINK_PARAM_GENERIC_ID_ENABLE_ROCE, 528 551 value); ··· 572 595 if (err) 573 596 return err; 574 597 575 - value.vbool = true; 598 + value.vbool = !mlx5_dev_is_lightweight(dev); 576 599 devl_param_driverinit_value_set(devlink, 577 600 DEVLINK_PARAM_GENERIC_ID_ENABLE_ETH, 578 601 value); ··· 612 635 613 636 static int mlx5_devlink_rdma_params_register(struct devlink *devlink) 614 637 { 638 + struct mlx5_core_dev *dev = devlink_priv(devlink); 615 639 union devlink_param_value value; 616 640 int err; 617 641 ··· 624 646 if (err) 625 647 return err; 626 648 627 - value.vbool = true; 649 + value.vbool = !mlx5_dev_is_lightweight(dev); 628 650 devl_param_driverinit_value_set(devlink, 629 651 DEVLINK_PARAM_GENERIC_ID_ENABLE_RDMA, 630 652 value); ··· 659 681 if (err) 660 682 return err; 661 683 662 - value.vbool = true; 684 + value.vbool = !mlx5_dev_is_lightweight(dev); 663 685 devl_param_driverinit_value_set(devlink, 664 686 DEVLINK_PARAM_GENERIC_ID_ENABLE_VNET, 665 687 value);
+1 -3
drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.c
··· 112 112 int err; 113 113 114 114 handle = kzalloc(sizeof(*handle), GFP_KERNEL); 115 - if (!handle) { 116 - kfree(handle); 115 + if (!handle) 117 116 return ERR_PTR(-ENOMEM); 118 - } 119 117 120 118 post_attr->chain = 0; 121 119 post_attr->prio = 0;
+154 -18
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
··· 41 41 #include "esw/qos.h" 42 42 #include "mlx5_core.h" 43 43 #include "lib/eq.h" 44 + #include "lag/lag.h" 44 45 #include "eswitch.h" 45 46 #include "fs_core.h" 46 47 #include "devlink.h" ··· 1052 1051 } 1053 1052 } 1054 1053 1054 + static void mlx5_eswitch_clear_ec_vf_vports_info(struct mlx5_eswitch *esw) 1055 + { 1056 + struct mlx5_vport *vport; 1057 + unsigned long i; 1058 + 1059 + mlx5_esw_for_each_ec_vf_vport(esw, i, vport, esw->esw_funcs.num_ec_vfs) { 1060 + memset(&vport->qos, 0, sizeof(vport->qos)); 1061 + memset(&vport->info, 0, sizeof(vport->info)); 1062 + vport->info.link_state = MLX5_VPORT_ADMIN_STATE_AUTO; 1063 + } 1064 + } 1065 + 1055 1066 /* Public E-Switch API */ 1056 1067 int mlx5_eswitch_load_vport(struct mlx5_eswitch *esw, u16 vport_num, 1057 1068 enum mlx5_eswitch_vport_event enabled_events) ··· 1103 1090 } 1104 1091 } 1105 1092 1093 + static void mlx5_eswitch_unload_ec_vf_vports(struct mlx5_eswitch *esw, 1094 + u16 num_ec_vfs) 1095 + { 1096 + struct mlx5_vport *vport; 1097 + unsigned long i; 1098 + 1099 + mlx5_esw_for_each_ec_vf_vport(esw, i, vport, num_ec_vfs) { 1100 + if (!vport->enabled) 1101 + continue; 1102 + mlx5_eswitch_unload_vport(esw, vport->vport); 1103 + } 1104 + } 1105 + 1106 1106 int mlx5_eswitch_load_vf_vports(struct mlx5_eswitch *esw, u16 num_vfs, 1107 1107 enum mlx5_eswitch_vport_event enabled_events) 1108 1108 { ··· 1133 1107 1134 1108 vf_err: 1135 1109 mlx5_eswitch_unload_vf_vports(esw, num_vfs); 1110 + return err; 1111 + } 1112 + 1113 + static int mlx5_eswitch_load_ec_vf_vports(struct mlx5_eswitch *esw, u16 num_ec_vfs, 1114 + enum mlx5_eswitch_vport_event enabled_events) 1115 + { 1116 + struct mlx5_vport *vport; 1117 + unsigned long i; 1118 + int err; 1119 + 1120 + mlx5_esw_for_each_ec_vf_vport(esw, i, vport, num_ec_vfs) { 1121 + err = mlx5_eswitch_load_vport(esw, vport->vport, enabled_events); 1122 + if (err) 1123 + goto vf_err; 1124 + } 1125 + 1126 + return 0; 1127 + 1128 + vf_err: 1129 + 
mlx5_eswitch_unload_ec_vf_vports(esw, num_ec_vfs); 1136 1130 return err; 1137 1131 } 1138 1132 ··· 1200 1154 ret = mlx5_eswitch_load_vport(esw, MLX5_VPORT_ECPF, enabled_events); 1201 1155 if (ret) 1202 1156 goto ecpf_err; 1157 + if (mlx5_core_ec_sriov_enabled(esw->dev)) { 1158 + ret = mlx5_eswitch_load_ec_vf_vports(esw, esw->esw_funcs.num_ec_vfs, 1159 + enabled_events); 1160 + if (ret) 1161 + goto ec_vf_err; 1162 + } 1203 1163 } 1204 1164 1205 1165 /* Enable VF vports */ ··· 1216 1164 return 0; 1217 1165 1218 1166 vf_err: 1167 + if (mlx5_core_ec_sriov_enabled(esw->dev)) 1168 + mlx5_eswitch_unload_ec_vf_vports(esw, esw->esw_funcs.num_ec_vfs); 1169 + ec_vf_err: 1219 1170 if (mlx5_ecpf_vport_exists(esw->dev)) 1220 1171 mlx5_eswitch_unload_vport(esw, MLX5_VPORT_ECPF); 1221 1172 ecpf_err: ··· 1235 1180 { 1236 1181 mlx5_eswitch_unload_vf_vports(esw, esw->esw_funcs.num_vfs); 1237 1182 1238 - if (mlx5_ecpf_vport_exists(esw->dev)) 1183 + if (mlx5_ecpf_vport_exists(esw->dev)) { 1184 + if (mlx5_core_ec_sriov_enabled(esw->dev)) 1185 + mlx5_eswitch_unload_ec_vf_vports(esw, esw->esw_funcs.num_vfs); 1239 1186 mlx5_eswitch_unload_vport(esw, MLX5_VPORT_ECPF); 1187 + } 1240 1188 1241 1189 host_pf_disable_hca(esw->dev); 1242 1190 mlx5_eswitch_unload_vport(esw, MLX5_VPORT_PF); ··· 1283 1225 1284 1226 esw->esw_funcs.num_vfs = MLX5_GET(query_esw_functions_out, out, 1285 1227 host_params_context.host_num_of_vfs); 1228 + if (mlx5_core_ec_sriov_enabled(esw->dev)) 1229 + esw->esw_funcs.num_ec_vfs = num_vfs; 1230 + 1286 1231 kvfree(out); 1287 1232 } 1288 1233 ··· 1393 1332 1394 1333 mlx5_eswitch_event_handlers_register(esw); 1395 1334 1396 - esw_info(esw->dev, "Enable: mode(%s), nvfs(%d), active vports(%d)\n", 1335 + esw_info(esw->dev, "Enable: mode(%s), nvfs(%d), necvfs(%d), active vports(%d)\n", 1397 1336 esw->mode == MLX5_ESWITCH_LEGACY ? 
"LEGACY" : "OFFLOADS", 1398 - esw->esw_funcs.num_vfs, esw->enabled_vports); 1337 + esw->esw_funcs.num_vfs, esw->esw_funcs.num_ec_vfs, esw->enabled_vports); 1399 1338 1400 1339 mlx5_esw_mode_change_notify(esw, esw->mode); 1401 1340 ··· 1417 1356 int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs) 1418 1357 { 1419 1358 bool toggle_lag; 1420 - int ret; 1359 + int ret = 0; 1421 1360 1422 1361 if (!mlx5_esw_allowed(esw)) 1423 1362 return 0; ··· 1437 1376 1438 1377 vport_events = (esw->mode == MLX5_ESWITCH_LEGACY) ? 1439 1378 MLX5_LEGACY_SRIOV_VPORT_EVENTS : MLX5_VPORT_UC_ADDR_CHANGE; 1440 - ret = mlx5_eswitch_load_vf_vports(esw, num_vfs, vport_events); 1441 - if (!ret) 1442 - esw->esw_funcs.num_vfs = num_vfs; 1379 + /* If this is the ECPF the number of host VFs is managed via the 1380 + * eswitch function change event handler, and any num_vfs provided 1381 + * here are intended to be EC VFs. 1382 + */ 1383 + if (!mlx5_core_is_ecpf(esw->dev)) { 1384 + ret = mlx5_eswitch_load_vf_vports(esw, num_vfs, vport_events); 1385 + if (!ret) 1386 + esw->esw_funcs.num_vfs = num_vfs; 1387 + } else if (mlx5_core_ec_sriov_enabled(esw->dev)) { 1388 + ret = mlx5_eswitch_load_ec_vf_vports(esw, num_vfs, vport_events); 1389 + if (!ret) 1390 + esw->esw_funcs.num_ec_vfs = num_vfs; 1391 + } 1443 1392 } 1393 + 1444 1394 up_write(&esw->mode_lock); 1445 1395 1446 1396 if (toggle_lag) ··· 1471 1399 /* If driver is unloaded, this function is called twice by remove_one() 1472 1400 * and mlx5_unload(). Prevent the second call. 1473 1401 */ 1474 - if (!esw->esw_funcs.num_vfs && !clear_vf) 1402 + if (!esw->esw_funcs.num_vfs && !esw->esw_funcs.num_ec_vfs && !clear_vf) 1475 1403 goto unlock; 1476 1404 1477 - esw_info(esw->dev, "Unload vfs: mode(%s), nvfs(%d), active vports(%d)\n", 1405 + esw_info(esw->dev, "Unload vfs: mode(%s), nvfs(%d), necvfs(%d), active vports(%d)\n", 1478 1406 esw->mode == MLX5_ESWITCH_LEGACY ? 
"LEGACY" : "OFFLOADS", 1479 - esw->esw_funcs.num_vfs, esw->enabled_vports); 1407 + esw->esw_funcs.num_vfs, esw->esw_funcs.num_ec_vfs, esw->enabled_vports); 1480 1408 1481 - mlx5_eswitch_unload_vf_vports(esw, esw->esw_funcs.num_vfs); 1482 - if (clear_vf) 1483 - mlx5_eswitch_clear_vf_vports_info(esw); 1409 + if (!mlx5_core_is_ecpf(esw->dev)) { 1410 + mlx5_eswitch_unload_vf_vports(esw, esw->esw_funcs.num_vfs); 1411 + if (clear_vf) 1412 + mlx5_eswitch_clear_vf_vports_info(esw); 1413 + } else if (mlx5_core_ec_sriov_enabled(esw->dev)) { 1414 + mlx5_eswitch_unload_ec_vf_vports(esw, esw->esw_funcs.num_ec_vfs); 1415 + if (clear_vf) 1416 + mlx5_eswitch_clear_ec_vf_vports_info(esw); 1417 + } 1484 1418 1485 1419 if (esw->mode == MLX5_ESWITCH_OFFLOADS) { 1486 1420 struct devlink *devlink = priv_to_devlink(esw->dev); ··· 1497 1419 if (esw->mode == MLX5_ESWITCH_LEGACY) 1498 1420 mlx5_eswitch_disable_locked(esw); 1499 1421 1500 - esw->esw_funcs.num_vfs = 0; 1422 + if (!mlx5_core_is_ecpf(esw->dev)) 1423 + esw->esw_funcs.num_vfs = 0; 1424 + else 1425 + esw->esw_funcs.num_ec_vfs = 0; 1501 1426 1502 1427 unlock: 1503 1428 up_write(&esw->mode_lock); ··· 1520 1439 1521 1440 mlx5_eswitch_event_handlers_unregister(esw); 1522 1441 1523 - esw_info(esw->dev, "Disable: mode(%s), nvfs(%d), active vports(%d)\n", 1442 + esw_info(esw->dev, "Disable: mode(%s), nvfs(%d), necvfs(%d), active vports(%d)\n", 1524 1443 esw->mode == MLX5_ESWITCH_LEGACY ? 
"LEGACY" : "OFFLOADS", 1525 - esw->esw_funcs.num_vfs, esw->enabled_vports); 1444 + esw->esw_funcs.num_vfs, esw->esw_funcs.num_ec_vfs, esw->enabled_vports); 1526 1445 1527 1446 if (esw->fdb_table.flags & MLX5_ESW_FDB_CREATED) { 1528 1447 esw->fdb_table.flags &= ~MLX5_ESW_FDB_CREATED; ··· 1682 1601 idx++; 1683 1602 } 1684 1603 1604 + if (mlx5_core_ec_sriov_enabled(esw->dev)) { 1605 + int ec_vf_base_num = mlx5_core_ec_vf_vport_base(dev); 1606 + 1607 + for (i = 0; i < mlx5_core_max_ec_vfs(esw->dev); i++) { 1608 + err = mlx5_esw_vport_alloc(esw, idx, ec_vf_base_num + i); 1609 + if (err) 1610 + goto err; 1611 + idx++; 1612 + } 1613 + } 1614 + 1685 1615 if (mlx5_ecpf_vport_exists(dev) || 1686 1616 mlx5_core_is_ecpf_esw_manager(dev)) { 1687 1617 err = mlx5_esw_vport_alloc(esw, idx, MLX5_VPORT_ECPF); ··· 1710 1618 return err; 1711 1619 } 1712 1620 1621 + static int mlx5_devlink_esw_multiport_set(struct devlink *devlink, u32 id, 1622 + struct devlink_param_gset_ctx *ctx) 1623 + { 1624 + struct mlx5_core_dev *dev = devlink_priv(devlink); 1625 + 1626 + if (!MLX5_ESWITCH_MANAGER(dev)) 1627 + return -EOPNOTSUPP; 1628 + 1629 + if (ctx->val.vbool) 1630 + return mlx5_lag_mpesw_enable(dev); 1631 + 1632 + mlx5_lag_mpesw_disable(dev); 1633 + return 0; 1634 + } 1635 + 1636 + static int mlx5_devlink_esw_multiport_get(struct devlink *devlink, u32 id, 1637 + struct devlink_param_gset_ctx *ctx) 1638 + { 1639 + struct mlx5_core_dev *dev = devlink_priv(devlink); 1640 + 1641 + ctx->val.vbool = mlx5_lag_is_mpesw(dev); 1642 + return 0; 1643 + } 1644 + 1645 + static const struct devlink_param mlx5_eswitch_params[] = { 1646 + DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_ESW_MULTIPORT, 1647 + "esw_multiport", DEVLINK_PARAM_TYPE_BOOL, 1648 + BIT(DEVLINK_PARAM_CMODE_RUNTIME), 1649 + mlx5_devlink_esw_multiport_get, 1650 + mlx5_devlink_esw_multiport_set, NULL), 1651 + }; 1652 + 1713 1653 int mlx5_eswitch_init(struct mlx5_core_dev *dev) 1714 1654 { 1715 1655 struct mlx5_eswitch *esw; ··· 1750 1626 if 
(!MLX5_VPORT_MANAGER(dev) && !MLX5_ESWITCH_MANAGER(dev)) 1751 1627 return 0; 1752 1628 1629 + err = devl_params_register(priv_to_devlink(dev), mlx5_eswitch_params, 1630 + ARRAY_SIZE(mlx5_eswitch_params)); 1631 + if (err) 1632 + return err; 1633 + 1753 1634 esw = kzalloc(sizeof(*esw), GFP_KERNEL); 1754 - if (!esw) 1755 - return -ENOMEM; 1635 + if (!esw) { 1636 + err = -ENOMEM; 1637 + goto unregister_param; 1638 + } 1756 1639 1757 1640 esw->dev = dev; 1758 1641 esw->manager_vport = mlx5_eswitch_manager_vport(dev); ··· 1819 1688 if (esw->work_queue) 1820 1689 destroy_workqueue(esw->work_queue); 1821 1690 kfree(esw); 1691 + unregister_param: 1692 + devl_params_unregister(priv_to_devlink(dev), mlx5_eswitch_params, 1693 + ARRAY_SIZE(mlx5_eswitch_params)); 1822 1694 return err; 1823 1695 } 1824 1696 ··· 1845 1711 esw_offloads_cleanup(esw); 1846 1712 mlx5_esw_vports_cleanup(esw); 1847 1713 kfree(esw); 1714 + devl_params_unregister(priv_to_devlink(esw->dev), mlx5_eswitch_params, 1715 + ARRAY_SIZE(mlx5_eswitch_params)); 1848 1716 } 1849 1717 1850 1718 /* Vport Administration */
+13
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
··· 289 289 struct mlx5_esw_functions { 290 290 struct mlx5_nb nb; 291 291 u16 num_vfs; 292 + u16 num_ec_vfs; 292 293 }; 293 294 294 295 enum { ··· 654 653 655 654 #define mlx5_esw_for_each_host_func_vport(esw, index, vport, last) \ 656 655 mlx5_esw_for_each_vport_marked(esw, index, vport, last, MLX5_ESW_VPT_HOST_FN) 656 + 657 + /* This macro should only be used if EC SRIOV is enabled. 658 + * 659 + * Because there were no more marks available on the xarray this uses a 660 + * for_each_range approach. The range is only valid when EC SRIOV is enabled 661 + */ 662 + #define mlx5_esw_for_each_ec_vf_vport(esw, index, vport, last) \ 663 + xa_for_each_range(&((esw)->vports), \ 664 + index, \ 665 + vport, \ 666 + MLX5_CAP_GEN_2((esw->dev), ec_vf_vport_base), \ 667 + (last) - 1) 657 668 658 669 struct mlx5_eswitch *mlx5_devlink_eswitch_get(struct devlink *devlink); 659 670 struct mlx5_vport *__must_check
+55 -47
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 55 55 #define mlx5_esw_for_each_rep(esw, i, rep) \ 56 56 xa_for_each(&((esw)->offloads.vport_reps), i, rep) 57 57 58 - #define mlx5_esw_for_each_sf_rep(esw, i, rep) \ 59 - xa_for_each_marked(&((esw)->offloads.vport_reps), i, rep, MLX5_ESW_VPT_SF) 60 - 61 - #define mlx5_esw_for_each_vf_rep(esw, index, rep) \ 62 - mlx5_esw_for_each_entry_marked(&((esw)->offloads.vport_reps), index, \ 63 - rep, (esw)->esw_funcs.num_vfs, MLX5_ESW_VPT_VF) 64 - 65 58 /* There are two match-all miss flows, one for unicast dst mac and 66 59 * one for multicast. 67 60 */ ··· 1125 1132 flows[vport->index] = flow; 1126 1133 } 1127 1134 1135 + if (mlx5_core_ec_sriov_enabled(esw->dev)) { 1136 + mlx5_esw_for_each_ec_vf_vport(esw, i, vport, mlx5_core_max_ec_vfs(esw->dev)) { 1137 + if (i >= mlx5_core_max_ec_vfs(peer_dev)) 1138 + break; 1139 + esw_set_peer_miss_rule_source_port(esw, peer_dev->priv.eswitch, 1140 + spec, vport->vport); 1141 + flow = mlx5_add_flow_rules(esw->fdb_table.offloads.slow_fdb, 1142 + spec, &flow_act, &dest, 1); 1143 + if (IS_ERR(flow)) { 1144 + err = PTR_ERR(flow); 1145 + goto add_ec_vf_flow_err; 1146 + } 1147 + flows[vport->index] = flow; 1148 + } 1149 + } 1128 1150 esw->fdb_table.offloads.peer_miss_rules[mlx5_get_dev_index(peer_dev)] = flows; 1129 1151 1130 1152 kvfree(spec); 1131 1153 return 0; 1132 1154 1155 + add_ec_vf_flow_err: 1156 + mlx5_esw_for_each_ec_vf_vport(esw, i, vport, mlx5_core_max_ec_vfs(esw->dev)) { 1157 + if (!flows[vport->index]) 1158 + continue; 1159 + mlx5_del_flow_rules(flows[vport->index]); 1160 + } 1133 1161 add_vf_flow_err: 1134 1162 mlx5_esw_for_each_vf_vport(esw, i, vport, mlx5_core_max_vfs(esw->dev)) { 1135 1163 if (!flows[vport->index]) ··· 1182 1168 unsigned long i; 1183 1169 1184 1170 flows = esw->fdb_table.offloads.peer_miss_rules[mlx5_get_dev_index(peer_dev)]; 1171 + 1172 + if (mlx5_core_ec_sriov_enabled(esw->dev)) { 1173 + mlx5_esw_for_each_ec_vf_vport(esw, i, vport, mlx5_core_max_ec_vfs(esw->dev)) { 1174 + /* The flow for a particular 
vport could be NULL if the other ECPF 1175 + * has fewer or no VFs enabled 1176 + */ 1177 + if (!flows[vport->index]) 1178 + continue; 1179 + mlx5_del_flow_rules(flows[vport->index]); 1180 + } 1181 + } 1185 1182 1186 1183 mlx5_esw_for_each_vf_vport(esw, i, vport, mlx5_core_max_vfs(esw->dev)) 1187 1184 mlx5_del_flow_rules(flows[vport->index]); ··· 2216 2191 return 0; 2217 2192 } 2218 2193 2219 - static void mlx5_esw_offloads_rep_mark_set(struct mlx5_eswitch *esw, 2220 - struct mlx5_eswitch_rep *rep, 2221 - xa_mark_t mark) 2222 - { 2223 - bool mark_set; 2224 - 2225 - /* Copy the mark from vport to its rep */ 2226 - mark_set = xa_get_mark(&esw->vports, rep->vport, mark); 2227 - if (mark_set) 2228 - xa_set_mark(&esw->offloads.vport_reps, rep->vport, mark); 2229 - } 2230 - 2231 2194 static int mlx5_esw_offloads_rep_init(struct mlx5_eswitch *esw, const struct mlx5_vport *vport) 2232 2195 { 2233 2196 struct mlx5_eswitch_rep *rep; ··· 2235 2222 if (err) 2236 2223 goto insert_err; 2237 2224 2238 - mlx5_esw_offloads_rep_mark_set(esw, rep, MLX5_ESW_VPT_HOST_FN); 2239 - mlx5_esw_offloads_rep_mark_set(esw, rep, MLX5_ESW_VPT_VF); 2240 - mlx5_esw_offloads_rep_mark_set(esw, rep, MLX5_ESW_VPT_SF); 2241 2225 return 0; 2242 2226 2243 2227 insert_err: ··· 2375 2365 esw->offloads.rep_ops[rep_type]->unload(rep); 2376 2366 } 2377 2367 2378 - static void __unload_reps_sf_vport(struct mlx5_eswitch *esw, u8 rep_type) 2379 - { 2380 - struct mlx5_eswitch_rep *rep; 2381 - unsigned long i; 2382 - 2383 - mlx5_esw_for_each_sf_rep(esw, i, rep) 2384 - __esw_offloads_unload_rep(esw, rep, rep_type); 2385 - } 2386 - 2387 2368 static void __unload_reps_all_vport(struct mlx5_eswitch *esw, u8 rep_type) 2388 2369 { 2389 2370 struct mlx5_eswitch_rep *rep; 2390 2371 unsigned long i; 2391 2372 2392 - __unload_reps_sf_vport(esw, rep_type); 2393 - 2394 - mlx5_esw_for_each_vf_rep(esw, i, rep) 2373 + mlx5_esw_for_each_rep(esw, i, rep) 2395 2374 __esw_offloads_unload_rep(esw, rep, rep_type); 2396 - 2397 - if 
(mlx5_ecpf_vport_exists(esw->dev)) { 2398 - rep = mlx5_eswitch_get_rep(esw, MLX5_VPORT_ECPF); 2399 - __esw_offloads_unload_rep(esw, rep, rep_type); 2400 - } 2401 - 2402 - if (mlx5_core_is_ecpf_esw_manager(esw->dev)) { 2403 - rep = mlx5_eswitch_get_rep(esw, MLX5_VPORT_PF); 2404 - __esw_offloads_unload_rep(esw, rep, rep_type); 2405 - } 2406 - 2407 - rep = mlx5_eswitch_get_rep(esw, MLX5_VPORT_UPLINK); 2408 - __esw_offloads_unload_rep(esw, rep, rep_type); 2409 2375 } 2410 2376 2411 2377 int mlx5_esw_offloads_rep_load(struct mlx5_eswitch *esw, u16 vport_num) ··· 3319 3333 /* Representor will control the vport link state */ 3320 3334 mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs) 3321 3335 vport->info.link_state = MLX5_VPORT_ADMIN_STATE_DOWN; 3336 + if (mlx5_core_ec_sriov_enabled(esw->dev)) 3337 + mlx5_esw_for_each_ec_vf_vport(esw, i, vport, esw->esw_funcs.num_ec_vfs) 3338 + vport->info.link_state = MLX5_VPORT_ADMIN_STATE_DOWN; 3322 3339 3323 3340 /* Uplink vport rep must load first. 
*/ 3324 3341 err = esw_offloads_load_rep(esw, MLX5_VPORT_UPLINK); ··· 3559 3570 goto revert_inline_mode; 3560 3571 } 3561 3572 } 3573 + if (mlx5_core_ec_sriov_enabled(esw->dev)) { 3574 + mlx5_esw_for_each_ec_vf_vport(esw, i, vport, esw->esw_funcs.num_ec_vfs) { 3575 + err = mlx5_modify_nic_vport_min_inline(dev, vport->vport, mlx5_mode); 3576 + if (err) { 3577 + err_vport_num = vport->vport; 3578 + NL_SET_ERR_MSG_MOD(extack, 3579 + "Failed to set min inline on vport"); 3580 + goto revert_ec_vf_inline_mode; 3581 + } 3582 + } 3583 + } 3562 3584 return 0; 3563 3585 3586 + revert_ec_vf_inline_mode: 3587 + mlx5_esw_for_each_ec_vf_vport(esw, i, vport, esw->esw_funcs.num_ec_vfs) { 3588 + if (vport->vport == err_vport_num) 3589 + break; 3590 + mlx5_modify_nic_vport_min_inline(dev, 3591 + vport->vport, 3592 + esw->offloads.inline_mode); 3593 + } 3564 3594 revert_inline_mode: 3565 3595 mlx5_esw_for_each_host_func_vport(esw, i, vport, esw->esw_funcs.num_vfs) { 3566 3596 if (vport->vport == err_vport_num)
+15 -9
drivers/net/ethernet/mellanox/mlx5/core/health.c
··· 719 719 #define MLX5_FW_REPORTER_VF_GRACEFUL_PERIOD 30000 720 720 #define MLX5_FW_REPORTER_DEFAULT_GRACEFUL_PERIOD MLX5_FW_REPORTER_VF_GRACEFUL_PERIOD 721 721 722 - static void mlx5_fw_reporters_create(struct mlx5_core_dev *dev) 722 + void mlx5_fw_reporters_create(struct mlx5_core_dev *dev) 723 723 { 724 724 struct mlx5_core_health *health = &dev->priv.health; 725 725 struct devlink *devlink = priv_to_devlink(dev); ··· 735 735 } 736 736 737 737 health->fw_reporter = 738 - devlink_health_reporter_create(devlink, &mlx5_fw_reporter_ops, 739 - 0, dev); 738 + devl_health_reporter_create(devlink, &mlx5_fw_reporter_ops, 739 + 0, dev); 740 740 if (IS_ERR(health->fw_reporter)) 741 741 mlx5_core_warn(dev, "Failed to create fw reporter, err = %ld\n", 742 742 PTR_ERR(health->fw_reporter)); 743 743 744 744 health->fw_fatal_reporter = 745 - devlink_health_reporter_create(devlink, 746 - &mlx5_fw_fatal_reporter_ops, 747 - grace_period, 748 - dev); 745 + devl_health_reporter_create(devlink, 746 + &mlx5_fw_fatal_reporter_ops, 747 + grace_period, 748 + dev); 749 749 if (IS_ERR(health->fw_fatal_reporter)) 750 750 mlx5_core_warn(dev, "Failed to create fw fatal reporter, err = %ld\n", 751 751 PTR_ERR(health->fw_fatal_reporter)); ··· 777 777 { 778 778 struct mlx5_core_health *health = &dev->priv.health; 779 779 780 - queue_work(health->wq, &health->fatal_report_work); 780 + if (!mlx5_dev_is_lightweight(dev)) 781 + queue_work(health->wq, &health->fatal_report_work); 781 782 } 782 783 783 784 #define MLX5_MSEC_PER_HOUR (MSEC_PER_SEC * 60 * 60) ··· 906 905 907 906 int mlx5_health_init(struct mlx5_core_dev *dev) 908 907 { 908 + struct devlink *devlink = priv_to_devlink(dev); 909 909 struct mlx5_core_health *health; 910 910 char *name; 911 911 912 - mlx5_fw_reporters_create(dev); 912 + if (!mlx5_dev_is_lightweight(dev)) { 913 + devl_lock(devlink); 914 + mlx5_fw_reporters_create(dev); 915 + devl_unlock(devlink); 916 + } 913 917 mlx5_reporter_vnic_create(dev); 914 918 915 919 health = 
&dev->priv.health;
+190 -49
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 1118 1118 mlx5_devcom_unregister_device(dev->priv.devcom); 1119 1119 } 1120 1120 1121 - static int mlx5_function_setup(struct mlx5_core_dev *dev, bool boot, u64 timeout) 1121 + static int mlx5_function_enable(struct mlx5_core_dev *dev, bool boot, u64 timeout) 1122 1122 { 1123 1123 int err; 1124 1124 ··· 1183 1183 goto reclaim_boot_pages; 1184 1184 } 1185 1185 1186 - err = set_hca_ctrl(dev); 1187 - if (err) { 1188 - mlx5_core_err(dev, "set_hca_ctrl failed\n"); 1189 - goto reclaim_boot_pages; 1190 - } 1191 - 1192 - err = set_hca_cap(dev); 1193 - if (err) { 1194 - mlx5_core_err(dev, "set_hca_cap failed\n"); 1195 - goto reclaim_boot_pages; 1196 - } 1197 - 1198 - err = mlx5_satisfy_startup_pages(dev, 0); 1199 - if (err) { 1200 - mlx5_core_err(dev, "failed to allocate init pages\n"); 1201 - goto reclaim_boot_pages; 1202 - } 1203 - 1204 - err = mlx5_cmd_init_hca(dev, sw_owner_id); 1205 - if (err) { 1206 - mlx5_core_err(dev, "init hca failed\n"); 1207 - goto reclaim_boot_pages; 1208 - } 1209 - 1210 - mlx5_set_driver_version(dev); 1211 - 1212 - err = mlx5_query_hca_caps(dev); 1213 - if (err) { 1214 - mlx5_core_err(dev, "query hca failed\n"); 1215 - goto reclaim_boot_pages; 1216 - } 1217 - mlx5_start_health_fw_log_up(dev); 1218 - 1219 1186 return 0; 1220 1187 1221 1188 reclaim_boot_pages: ··· 1198 1231 return err; 1199 1232 } 1200 1233 1201 - static int mlx5_function_teardown(struct mlx5_core_dev *dev, bool boot) 1234 + static void mlx5_function_disable(struct mlx5_core_dev *dev, bool boot) 1235 + { 1236 + mlx5_reclaim_startup_pages(dev); 1237 + mlx5_core_disable_hca(dev, 0); 1238 + mlx5_stop_health_poll(dev, boot); 1239 + mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_DOWN); 1240 + mlx5_cmd_cleanup(dev); 1241 + } 1242 + 1243 + static int mlx5_function_open(struct mlx5_core_dev *dev) 1244 + { 1245 + int err; 1246 + 1247 + err = set_hca_ctrl(dev); 1248 + if (err) { 1249 + mlx5_core_err(dev, "set_hca_ctrl failed\n"); 1250 + return err; 1251 + } 1252 + 1253 + err = 
set_hca_cap(dev);
+	if (err) {
+		mlx5_core_err(dev, "set_hca_cap failed\n");
+		return err;
+	}
+
+	err = mlx5_satisfy_startup_pages(dev, 0);
+	if (err) {
+		mlx5_core_err(dev, "failed to allocate init pages\n");
+		return err;
+	}
+
+	err = mlx5_cmd_init_hca(dev, sw_owner_id);
+	if (err) {
+		mlx5_core_err(dev, "init hca failed\n");
+		return err;
+	}
+
+	mlx5_set_driver_version(dev);
+
+	err = mlx5_query_hca_caps(dev);
+	if (err) {
+		mlx5_core_err(dev, "query hca failed\n");
+		return err;
+	}
+	mlx5_start_health_fw_log_up(dev);
+	return 0;
+}
+
+static int mlx5_function_close(struct mlx5_core_dev *dev)
 {
 	int err;
···
 		mlx5_core_err(dev, "tear_down_hca failed, skip cleanup\n");
 		return err;
 	}
-	mlx5_reclaim_startup_pages(dev);
-	mlx5_core_disable_hca(dev, 0);
-	mlx5_stop_health_poll(dev, boot);
-	mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_DOWN);
-	mlx5_cmd_cleanup(dev);

 	return 0;
+}
+
+static int mlx5_function_setup(struct mlx5_core_dev *dev, bool boot, u64 timeout)
+{
+	int err;
+
+	err = mlx5_function_enable(dev, boot, timeout);
+	if (err)
+		return err;
+
+	err = mlx5_function_open(dev);
+	if (err)
+		mlx5_function_disable(dev, boot);
+	return err;
+}
+
+static int mlx5_function_teardown(struct mlx5_core_dev *dev, bool boot)
+{
+	int err = mlx5_function_close(dev);
+
+	if (!err)
+		mlx5_function_disable(dev, boot);
+	return err;
 }

 static int mlx5_load(struct mlx5_core_dev *dev)
···
 	mlx5_put_uars_page(dev, dev->priv.uar);
 }

-int mlx5_init_one(struct mlx5_core_dev *dev)
+int mlx5_init_one_devl_locked(struct mlx5_core_dev *dev)
 {
-	struct devlink *devlink = priv_to_devlink(dev);
+	bool light_probe = mlx5_dev_is_lightweight(dev);
 	int err = 0;

-	devl_lock(devlink);
 	mutex_lock(&dev->intf_state_mutex);
 	dev->state = MLX5_DEVICE_STATE_UP;
···
 		goto function_teardown;
 	}

-	err = mlx5_devlink_params_register(priv_to_devlink(dev));
-	if (err)
-		goto err_devlink_params_reg;
+	/* In case of light_probe, mlx5_devlink is already registered.
+	 * Hence, don't register devlink again.
+	 */
+	if (!light_probe) {
+		err = mlx5_devlink_params_register(priv_to_devlink(dev));
+		if (err)
+			goto err_devlink_params_reg;
+	}

 	err = mlx5_load(dev);
 	if (err)
···
 		goto err_register;

 	mutex_unlock(&dev->intf_state_mutex);
-	devl_unlock(devlink);
 	return 0;

err_register:
 	clear_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state);
 	mlx5_unload(dev);
err_load:
-	mlx5_devlink_params_unregister(priv_to_devlink(dev));
+	if (!light_probe)
+		mlx5_devlink_params_unregister(priv_to_devlink(dev));
err_devlink_params_reg:
 	mlx5_cleanup_once(dev);
function_teardown:
···
err_function:
 	dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
 	mutex_unlock(&dev->intf_state_mutex);
+	return err;
+}
+
+int mlx5_init_one(struct mlx5_core_dev *dev)
+{
+	struct devlink *devlink = priv_to_devlink(dev);
+	int err;
+
+	devl_lock(devlink);
+	err = mlx5_init_one_devl_locked(dev);
 	devl_unlock(devlink);
 	return err;
 }
···
 	devl_lock(devlink);
 	mlx5_unload_one_devl_locked(dev, suspend);
 	devl_unlock(devlink);
+}
+
+/* In case of light probe, we don't need a full query of hca_caps, but only the below caps.
+ * A full query of hca_caps is done when the device is reloaded.
+ */
+static int mlx5_query_hca_caps_light(struct mlx5_core_dev *dev)
+{
+	int err;
+
+	err = mlx5_core_get_caps(dev, MLX5_CAP_GENERAL);
+	if (err)
+		return err;
+
+	if (MLX5_CAP_GEN(dev, eth_net_offloads)) {
+		err = mlx5_core_get_caps(dev, MLX5_CAP_ETHERNET_OFFLOADS);
+		if (err)
+			return err;
+	}
+
+	if (MLX5_CAP_GEN(dev, nic_flow_table) ||
+	    MLX5_CAP_GEN(dev, ipoib_enhanced_offloads)) {
+		err = mlx5_core_get_caps(dev, MLX5_CAP_FLOW_TABLE);
+		if (err)
+			return err;
+	}
+
+	if (MLX5_CAP_GEN_64(dev, general_obj_types) &
+	    MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q) {
+		err = mlx5_core_get_caps(dev, MLX5_CAP_VDPA_EMULATION);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+int mlx5_init_one_light(struct mlx5_core_dev *dev)
+{
+	struct devlink *devlink = priv_to_devlink(dev);
+	int err;
+
+	dev->state = MLX5_DEVICE_STATE_UP;
+	err = mlx5_function_enable(dev, true, mlx5_tout_ms(dev, FW_PRE_INIT_TIMEOUT));
+	if (err) {
+		mlx5_core_warn(dev, "mlx5_function_enable err=%d\n", err);
+		goto out;
+	}
+
+	err = mlx5_query_hca_caps_light(dev);
+	if (err) {
+		mlx5_core_warn(dev, "mlx5_query_hca_caps_light err=%d\n", err);
+		goto query_hca_caps_err;
+	}
+
+	devl_lock(devlink);
+	err = mlx5_devlink_params_register(priv_to_devlink(dev));
+	devl_unlock(devlink);
+	if (err) {
+		mlx5_core_warn(dev, "mlx5_devlink_param_reg err = %d\n", err);
+		goto query_hca_caps_err;
+	}
+
+	return 0;
+
+query_hca_caps_err:
+	mlx5_function_disable(dev, true);
+out:
+	dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
+	return err;
+}
+
+void mlx5_uninit_one_light(struct mlx5_core_dev *dev)
+{
+	struct devlink *devlink = priv_to_devlink(dev);
+
+	devl_lock(devlink);
+	mlx5_devlink_params_unregister(priv_to_devlink(dev));
+	devl_unlock(devlink);
+	if (dev->state != MLX5_DEVICE_STATE_UP)
+		return;
+	mlx5_function_disable(dev, true);
+}
+
+/* The xxx_light() functions are used to configure the device without a full
+ * init (light init). e.g.: there is no point in reloading a device to the
+ * light state. Hence, mlx5_load_one_light() isn't needed.
+ */
+
+void mlx5_unload_one_light(struct mlx5_core_dev *dev)
+{
+	if (dev->state != MLX5_DEVICE_STATE_UP)
+		return;
+	mlx5_function_disable(dev, false);
+	dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
 }

 static const int types[] = {
···
 	mlx5_drain_fw_reset(dev);
 	mlx5_drain_health_wq(dev);
 	devlink_unregister(devlink);
-	mlx5_sriov_disable(pdev);
+	mlx5_sriov_disable(pdev, false);
 	mlx5_thermal_uninit(dev);
 	mlx5_crdump_disable(dev);
 	mlx5_uninit_one(dev);
drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h (+31 -4)
···
 int mlx5_sriov_attach(struct mlx5_core_dev *dev);
 void mlx5_sriov_detach(struct mlx5_core_dev *dev);
 int mlx5_core_sriov_configure(struct pci_dev *dev, int num_vfs);
-void mlx5_sriov_disable(struct pci_dev *pdev);
+void mlx5_sriov_disable(struct pci_dev *pdev, bool num_vf_change);
 int mlx5_core_sriov_set_msix_vec_count(struct pci_dev *vf, int msix_vec_count);
 int mlx5_core_enable_hca(struct mlx5_core_dev *dev, u16 func_id);
 int mlx5_core_disable_hca(struct mlx5_core_dev *dev, u16 func_id);
···
 void mlx5_detach_device(struct mlx5_core_dev *dev, bool suspend);
 int mlx5_register_device(struct mlx5_core_dev *dev);
 void mlx5_unregister_device(struct mlx5_core_dev *dev);
+void mlx5_dev_set_lightweight(struct mlx5_core_dev *dev);
+bool mlx5_dev_is_lightweight(struct mlx5_core_dev *dev);
 struct mlx5_core_dev *mlx5_get_next_phys_dev_lag(struct mlx5_core_dev *dev);
 void mlx5_dev_list_lock(void);
 void mlx5_dev_list_unlock(void);
 int mlx5_dev_list_trylock(void);

+void mlx5_fw_reporters_create(struct mlx5_core_dev *dev);
 int mlx5_query_mtpps(struct mlx5_core_dev *dev, u32 *mtpps, u32 mtpps_size);
 int mlx5_set_mtpps(struct mlx5_core_dev *mdev, u32 *mtpps, u32 mtpps_size);
 int mlx5_query_mtppse(struct mlx5_core_dev *mdev, u8 pin, u8 *arm, u8 *mode);
···
 int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx);
 void mlx5_mdev_uninit(struct mlx5_core_dev *dev);
 int mlx5_init_one(struct mlx5_core_dev *dev);
+int mlx5_init_one_devl_locked(struct mlx5_core_dev *dev);
 void mlx5_uninit_one(struct mlx5_core_dev *dev);
 void mlx5_unload_one(struct mlx5_core_dev *dev, bool suspend);
 void mlx5_unload_one_devl_locked(struct mlx5_core_dev *dev, bool suspend);
 int mlx5_load_one(struct mlx5_core_dev *dev, bool recovery);
 int mlx5_load_one_devl_locked(struct mlx5_core_dev *dev, bool recovery);
+int mlx5_init_one_light(struct mlx5_core_dev *dev);
+void mlx5_uninit_one_light(struct mlx5_core_dev *dev);
+void mlx5_unload_one_light(struct mlx5_core_dev *dev);

-int mlx5_vport_set_other_func_cap(struct mlx5_core_dev *dev, const void *hca_cap, u16 function_id,
+int mlx5_vport_set_other_func_cap(struct mlx5_core_dev *dev, const void *hca_cap, u16 vport,
 				  u16 opmod);
-#define mlx5_vport_get_other_func_general_cap(dev, fid, out)		\
-	mlx5_vport_get_other_func_cap(dev, fid, out, MLX5_CAP_GENERAL)
+#define mlx5_vport_get_other_func_general_cap(dev, vport, out)		\
+	mlx5_vport_get_other_func_cap(dev, vport, out, MLX5_CAP_GENERAL)

 void mlx5_events_work_enqueue(struct mlx5_core_dev *dev, struct work_struct *work);
 static inline u32 mlx5_sriov_get_vf_total_msix(struct pci_dev *pdev)
···
 bool mlx5_vnet_supported(struct mlx5_core_dev *dev);
 bool mlx5_same_hw_devs(struct mlx5_core_dev *dev, struct mlx5_core_dev *peer_dev);

+static inline u16 mlx5_core_ec_vf_vport_base(const struct mlx5_core_dev *dev)
+{
+	return MLX5_CAP_GEN_2(dev, ec_vf_vport_base);
+}
+
+static inline u16 mlx5_core_ec_sriov_enabled(const struct mlx5_core_dev *dev)
+{
+	return mlx5_core_is_ecpf(dev) && mlx5_core_ec_vf_vport_base(dev);
+}
+
+static inline bool mlx5_core_is_ec_vf_vport(const struct mlx5_core_dev *dev, u16 vport_num)
+{
+	int base_vport = mlx5_core_ec_vf_vport_base(dev);
+	int max_vport = base_vport + mlx5_core_max_ec_vfs(dev);
+
+	if (!mlx5_core_ec_sriov_enabled(dev))
+		return false;
+
+	return (vport_num >= base_vport && vport_num < max_vport);
+}
 #endif /* __MLX5_CORE_H__ */
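The mlx5_core_is_ec_vf_vport() helper above classifies a vport by a half-open range [base, base + max_ec_vfs). A standalone sketch of that check, with the driver's capability accessors replaced by plain struct fields (hypothetical stand-ins, not the real API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for the values the driver reads from HCA caps and SRIOV state. */
struct dev_caps {
	bool is_ecpf;
	uint16_t ec_vf_vport_base;	/* 0 when EC SRIOV is unsupported */
	uint16_t max_ec_vfs;
};

static bool ec_sriov_enabled(const struct dev_caps *d)
{
	return d->is_ecpf && d->ec_vf_vport_base;
}

/* Mirrors mlx5_core_is_ec_vf_vport(): EC VF vports occupy the half-open
 * range [base, base + max_ec_vfs). */
static bool is_ec_vf_vport(const struct dev_caps *d, uint16_t vport)
{
	int base = d->ec_vf_vport_base;

	if (!ec_sriov_enabled(d))
		return false;
	return vport >= base && vport < base + d->max_ec_vfs;
}
```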
drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c (+10 -1)
···
 	if (!func_id)
 		return mlx5_core_is_ecpf(dev) && !ec_function ? MLX5_HOST_PF : MLX5_PF;

-	return func_id <= mlx5_core_max_vfs(dev) ? MLX5_VF : MLX5_SF;
+	if (func_id <= max(mlx5_core_max_vfs(dev), mlx5_core_max_ec_vfs(dev))) {
+		if (ec_function)
+			return MLX5_EC_VF;
+		else
+			return MLX5_VF;
+	}
+	return MLX5_SF;
 }

 static u32 mlx5_get_ec_function(u32 function)
···
 	WARN(dev->priv.page_counters[MLX5_HOST_PF],
 	     "External host PF FW pages counter is %d after reclaiming all pages\n",
 	     dev->priv.page_counters[MLX5_HOST_PF]);
+	WARN(dev->priv.page_counters[MLX5_EC_VF],
+	     "EC VFs FW pages counter is %d after reclaiming all pages\n",
+	     dev->priv.page_counters[MLX5_EC_VF]);

 	return 0;
 }
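The pagealloc.c change above adds the new MLX5_EC_VF bucket to the page-owner classification. A standalone sketch of that decision, with the cap limits passed as plain parameters instead of being read from the device (names are illustrative, not the driver's):

```c
#include <stdbool.h>
#include <stdint.h>

enum func_type { FT_PF, FT_HOST_PF, FT_VF, FT_EC_VF, FT_SF };

/* Mirrors the classification in the hunk: function 0 is a PF (or the
 * external host PF when seen from the ECPF), IDs up to the larger of the
 * two VF limits are VFs or EC VFs depending on ec_function, the rest
 * are subfunctions. */
static enum func_type get_func_type(bool is_ecpf, bool ec_function,
				    uint32_t func_id,
				    uint16_t max_vfs, uint16_t max_ec_vfs)
{
	uint16_t max_vf_limit = max_vfs > max_ec_vfs ? max_vfs : max_ec_vfs;

	if (!func_id)
		return is_ecpf && !ec_function ? FT_HOST_PF : FT_PF;

	if (func_id <= max_vf_limit)
		return ec_function ? FT_EC_VF : FT_VF;
	return FT_SF;
}
```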
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c (+15 -1)
···
 	struct mlx5_irq_pool *sf_comp_pool;
 };

+static int mlx5_core_func_to_vport(const struct mlx5_core_dev *dev,
+				   int func,
+				   bool ec_vf_func)
+{
+	if (!ec_vf_func)
+		return func;
+	return mlx5_core_ec_vf_vport_base(dev) + func - 1;
+}
+
 /**
  * mlx5_get_default_msix_vec_count - Get the default number of MSI-X vectors
  * to be assigned to each VF.
···
 	int set_sz = MLX5_ST_SZ_BYTES(set_hca_cap_in);
 	void *hca_cap = NULL, *query_cap = NULL, *cap;
 	int num_vf_msix, min_msix, max_msix;
+	bool ec_vf_function;
+	int vport;
 	int ret;

 	num_vf_msix = MLX5_CAP_GEN_MAX(dev, num_total_dynamic_vf_msix);
···
 		goto out;
 	}

-	ret = mlx5_vport_get_other_func_general_cap(dev, function_id, query_cap);
+	ec_vf_function = mlx5_core_ec_sriov_enabled(dev);
+	vport = mlx5_core_func_to_vport(dev, function_id, ec_vf_function);
+	ret = mlx5_vport_get_other_func_general_cap(dev, vport, query_cap);
 	if (ret)
 		goto out;
···
 	MLX5_SET(set_hca_cap_in, hca_cap, opcode, MLX5_CMD_OP_SET_HCA_CAP);
 	MLX5_SET(set_hca_cap_in, hca_cap, other_function, 1);
+	MLX5_SET(set_hca_cap_in, hca_cap, ec_vf_function, ec_vf_function);
 	MLX5_SET(set_hca_cap_in, hca_cap, function_id, function_id);

 	MLX5_SET(set_hca_cap_in, hca_cap, op_mod,
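mlx5_core_func_to_vport() in the hunk above encodes the range split described in the commit message: EC VF function IDs are 1-based (1..max_ec_vfs), so function 1 maps to the base EC VF vport. A standalone sketch with ec_vf_vport_base as a plain parameter (the driver reads it from HCA caps):

```c
#include <stdbool.h>
#include <stdint.h>

/* Mirrors mlx5_core_func_to_vport(): for non-EC functions the function ID
 * and vport number coincide; for EC VFs, function ID n (1-based) maps to
 * vport base + n - 1. */
static int func_to_vport(uint16_t ec_vf_vport_base, int func, bool ec_vf_func)
{
	if (!ec_vf_func)
		return func;
	return ec_vf_vport_base + func - 1;
}
```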
drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c (+13 -2)
···
 #include <linux/mlx5/driver.h>
 #include <linux/mlx5/device.h>
+#include <linux/mlx5/eswitch.h>
 #include "mlx5_core.h"
 #include "dev.h"
 #include "devlink.h"
···
 	mdev->priv.adev_idx = adev->id;
 	sf_dev->mdev = mdev;

+	/* Only local SFs do light probe */
+	if (MLX5_ESWITCH_MANAGER(sf_dev->parent_mdev))
+		mlx5_dev_set_lightweight(mdev);
+
 	err = mlx5_mdev_init(mdev, MLX5_SF_PROF);
 	if (err) {
 		mlx5_core_warn(mdev, "mlx5_mdev_init on err=%d\n", err);
···
 		goto remap_err;
 	}

-	err = mlx5_init_one(mdev);
+	if (MLX5_ESWITCH_MANAGER(sf_dev->parent_mdev))
+		err = mlx5_init_one_light(mdev);
+	else
+		err = mlx5_init_one(mdev);
 	if (err) {
 		mlx5_core_warn(mdev, "mlx5_init_one err=%d\n", err);
 		goto init_one_err;
···
 	mlx5_drain_health_wq(sf_dev->mdev);
 	devlink_unregister(devlink);
-	mlx5_uninit_one(sf_dev->mdev);
+	if (mlx5_dev_is_lightweight(sf_dev->mdev))
+		mlx5_uninit_one_light(sf_dev->mdev);
+	else
+		mlx5_uninit_one(sf_dev->mdev);
 	iounmap(sf_dev->mdev->iseg);
 	mlx5_mdev_uninit(sf_dev->mdev);
 	mlx5_devlink_free(devlink);
drivers/net/ethernet/mellanox/mlx5/core/sriov.c (+36 -10)
···
 #include "mlx5_irq.h"
 #include "eswitch.h"

-static int sriov_restore_guids(struct mlx5_core_dev *dev, int vf)
+static int sriov_restore_guids(struct mlx5_core_dev *dev, int vf, u16 func_id)
 {
 	struct mlx5_core_sriov *sriov = &dev->priv.sriov;
 	struct mlx5_hca_vport_context *in;
···
 		!!(in->node_guid) * MLX5_HCA_VPORT_SEL_NODE_GUID |
 		!!(in->policy) * MLX5_HCA_VPORT_SEL_STATE_POLICY;

-	err = mlx5_core_modify_hca_vport_context(dev, 1, 1, vf + 1, in);
+	err = mlx5_core_modify_hca_vport_context(dev, 1, 1, func_id, in);
 	if (err)
 		mlx5_core_warn(dev, "modify vport context failed, unable to restore VF %d settings\n", vf);
···
 {
 	struct mlx5_core_sriov *sriov = &dev->priv.sriov;
 	int err, vf, num_msix_count;
+	int vport_num;

 	err = mlx5_eswitch_enable(dev->priv.eswitch, num_vfs);
 	if (err) {
···
 		sriov->vfs_ctx[vf].enabled = 1;
 		if (MLX5_CAP_GEN(dev, port_type) == MLX5_CAP_PORT_TYPE_IB) {
-			err = sriov_restore_guids(dev, vf);
+			vport_num = mlx5_core_ec_sriov_enabled(dev) ?
+					mlx5_core_ec_vf_vport_base(dev) + vf
+					: vf + 1;
+			err = sriov_restore_guids(dev, vf, vport_num);
 			if (err) {
 				mlx5_core_warn(dev,
 					       "failed to restore VF %d settings, err %d\n",
···
 }

 static void
-mlx5_device_disable_sriov(struct mlx5_core_dev *dev, int num_vfs, bool clear_vf)
+mlx5_device_disable_sriov(struct mlx5_core_dev *dev, int num_vfs, bool clear_vf, bool num_vf_change)
 {
 	struct mlx5_core_sriov *sriov = &dev->priv.sriov;
+	bool wait_for_ec_vf_pages = true;
+	bool wait_for_vf_pages = true;
 	int err;
 	int vf;
···
 	mlx5_eswitch_disable_sriov(dev->priv.eswitch, clear_vf);

+	/* There are a number of scenarios when SRIOV is being disabled:
+	 * 1. VFs or EC VFs had been created, and are now set back to 0
+	 *    (num_vf_change == true).
+	 *    - If EC SRIOV is enabled then this flow is happening on the
+	 *      embedded platform; wait only for EC VF pages.
+	 *    - If EC SRIOV is not enabled this flow is happening on a
+	 *      non-embedded platform; wait for the VF pages.
+	 *
+	 * 2. The driver is being unloaded. In this case wait for all pages.
+	 */
+	if (num_vf_change) {
+		if (mlx5_core_ec_sriov_enabled(dev))
+			wait_for_vf_pages = false;
+		else
+			wait_for_ec_vf_pages = false;
+	}
+
+	if (wait_for_ec_vf_pages && mlx5_wait_for_pages(dev, &dev->priv.page_counters[MLX5_EC_VF]))
+		mlx5_core_warn(dev, "timeout reclaiming EC VFs pages\n");
+
 	/* For ECPFs, skip waiting for host VF pages until ECPF is destroyed */
 	if (mlx5_core_is_ecpf(dev))
 		return;

-	if (mlx5_wait_for_pages(dev, &dev->priv.page_counters[MLX5_VF]))
+	if (wait_for_vf_pages && mlx5_wait_for_pages(dev, &dev->priv.page_counters[MLX5_VF]))
 		mlx5_core_warn(dev, "timeout reclaiming VFs pages\n");
 }
···
 	err = pci_enable_sriov(pdev, num_vfs);
 	if (err) {
 		mlx5_core_warn(dev, "pci_enable_sriov failed : %d\n", err);
-		mlx5_device_disable_sriov(dev, num_vfs, true);
+		mlx5_device_disable_sriov(dev, num_vfs, true, true);
 	}
 	return err;
 }

-void mlx5_sriov_disable(struct pci_dev *pdev)
+void mlx5_sriov_disable(struct pci_dev *pdev, bool num_vf_change)
 {
 	struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
 	struct devlink *devlink = priv_to_devlink(dev);
···
 	pci_disable_sriov(pdev);
 	devl_lock(devlink);
-	mlx5_device_disable_sriov(dev, num_vfs, true);
+	mlx5_device_disable_sriov(dev, num_vfs, true, num_vf_change);
 	devl_unlock(devlink);
 }
···
 	if (num_vfs)
 		err = mlx5_sriov_enable(pdev, num_vfs);
 	else
-		mlx5_sriov_disable(pdev);
+		mlx5_sriov_disable(pdev, true);

 	if (!err)
 		sriov->num_vfs = num_vfs;
···
 	if (!mlx5_core_is_pf(dev))
 		return;

-	mlx5_device_disable_sriov(dev, pci_num_vf(dev->pdev), false);
+	mlx5_device_disable_sriov(dev, pci_num_vf(dev->pdev), false, false);
 }

 static u16 mlx5_get_max_vfs(struct mlx5_core_dev *dev)
···
 	total_vfs = pci_sriov_get_totalvfs(pdev);
 	sriov->max_vfs = mlx5_get_max_vfs(dev);
 	sriov->num_vfs = pci_num_vf(pdev);
+	sriov->max_ec_vfs = mlx5_core_ec_sriov_enabled(dev) ? pci_sriov_get_totalvfs(dev->pdev) : 0;
 	sriov->vfs_ctx = kcalloc(total_vfs, sizeof(*sriov->vfs_ctx), GFP_KERNEL);
 	if (!sriov->vfs_ctx)
 		return -ENOMEM;
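The sriov.c comment above describes a small decision table: which page counters to wait on when SRIOV is disabled. It can be modeled in isolation (toy sketch, not driver code):

```c
#include <stdbool.h>

/* Which FW page counters mlx5_device_disable_sriov() should wait on. */
struct wait_plan {
	bool vf_pages;
	bool ec_vf_pages;
};

static struct wait_plan pages_to_wait_for(bool num_vf_change, bool ec_sriov_enabled)
{
	/* Driver unload: wait for everything. */
	struct wait_plan plan = { true, true };

	if (num_vf_change) {
		/* Only the side whose VF count changed can hold pages:
		 * EC SRIOV enabled means we run on the embedded platform,
		 * so only EC VF pages matter, and vice versa. */
		if (ec_sriov_enabled)
			plan.vf_pages = false;
		else
			plan.ec_vf_pages = false;
	}
	return plan;
}
```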
drivers/net/ethernet/mellanox/mlx5/core/vport.c (+15 -4)
···
 }
 EXPORT_SYMBOL_GPL(mlx5_query_nic_system_image_guid);

-int mlx5_vport_get_other_func_cap(struct mlx5_core_dev *dev, u16 function_id, void *out,
+static int mlx5_vport_to_func_id(const struct mlx5_core_dev *dev, u16 vport, bool ec_vf_func)
+{
+	return ec_vf_func ? vport - mlx5_core_ec_vf_vport_base(dev)
+			  : vport;
+}
+
+int mlx5_vport_get_other_func_cap(struct mlx5_core_dev *dev, u16 vport, void *out,
 				  u16 opmod)
 {
+	bool ec_vf_func = mlx5_core_is_ec_vf_vport(dev, vport);
 	u8 in[MLX5_ST_SZ_BYTES(query_hca_cap_in)] = {};

 	opmod = (opmod << 1) | (HCA_CAP_OPMOD_GET_MAX & 0x01);
 	MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP);
 	MLX5_SET(query_hca_cap_in, in, op_mod, opmod);
-	MLX5_SET(query_hca_cap_in, in, function_id, function_id);
+	MLX5_SET(query_hca_cap_in, in, function_id, mlx5_vport_to_func_id(dev, vport, ec_vf_func));
 	MLX5_SET(query_hca_cap_in, in, other_function, true);
+	MLX5_SET(query_hca_cap_in, in, ec_vf_function, ec_vf_func);
 	return mlx5_cmd_exec_inout(dev, query_hca_cap, in, out);
 }
 EXPORT_SYMBOL_GPL(mlx5_vport_get_other_func_cap);

 int mlx5_vport_set_other_func_cap(struct mlx5_core_dev *dev, const void *hca_cap,
-				  u16 function_id, u16 opmod)
+				  u16 vport, u16 opmod)
 {
+	bool ec_vf_func = mlx5_core_is_ec_vf_vport(dev, vport);
 	int set_sz = MLX5_ST_SZ_BYTES(set_hca_cap_in);
 	void *set_hca_cap;
 	void *set_ctx;
···
 	MLX5_SET(set_hca_cap_in, set_ctx, op_mod, opmod << 1);
 	set_hca_cap = MLX5_ADDR_OF(set_hca_cap_in, set_ctx, capability);
 	memcpy(set_hca_cap, hca_cap, MLX5_ST_SZ_BYTES(cmd_hca_cap));
-	MLX5_SET(set_hca_cap_in, set_ctx, function_id, function_id);
+	MLX5_SET(set_hca_cap_in, set_ctx, function_id,
+		 mlx5_vport_to_func_id(dev, vport, ec_vf_func));
 	MLX5_SET(set_hca_cap_in, set_ctx, other_function, true);
+	MLX5_SET(set_hca_cap_in, set_ctx, ec_vf_function, ec_vf_func);
 	ret = mlx5_cmd_exec_in(dev, set_hca_cap, set_ctx);

 	kfree(set_ctx);
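The vport.c hunk above completes the "function ID is no longer the vport number" change from the commit message: the query/set HCA cap paths now take a vport and derive the function_id to place in the command (alongside the new ec_vf_function bit). A standalone mirror of mlx5_vport_to_func_id() as added above, with the base as a plain parameter:

```c
#include <stdbool.h>
#include <stdint.h>

/* Mirrors mlx5_vport_to_func_id(): when the vport lies in the EC VF range,
 * the function_id carried in the HCA cap command (with ec_vf_function set)
 * is the offset of the vport from ec_vf_vport_base; otherwise the vport
 * number is used directly. */
static int vport_to_func_id(uint16_t ec_vf_vport_base, uint16_t vport, bool ec_vf_func)
{
	return ec_vf_func ? vport - ec_vf_vport_base : vport;
}
```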
include/linux/mlx5/driver.h (+7)
···
 	struct mlx5_vf_context *vfs_ctx;
 	int num_vfs;
 	u16 max_vfs;
+	u16 max_ec_vfs;
 };

 struct mlx5_fc_pool {
···
 	MLX5_VF,
 	MLX5_SF,
 	MLX5_HOST_PF,
+	MLX5_EC_VF,
 	MLX5_FUNC_TYPE_NUM,
 };
···
 static inline u16 mlx5_core_max_vfs(const struct mlx5_core_dev *dev)
 {
 	return dev->priv.sriov.max_vfs;
+}
+
+static inline u16 mlx5_core_max_ec_vfs(const struct mlx5_core_dev *dev)
+{
+	return dev->priv.sriov.max_ec_vfs;
 }

 static inline int mlx5_get_gid_table_len(u16 param)
include/linux/mlx5/mlx5_ifc.h (+8 -3)
···
 	u8         ts_cqe_metadata_size2wqe_counter[0x5];
 	u8         reserved_at_250[0x10];

-	u8         reserved_at_260[0x5a0];
+	u8         reserved_at_260[0x120];
+	u8         reserved_at_380[0x10];
+	u8         ec_vf_vport_base[0x10];
+	u8         reserved_at_3a0[0x460];
 };

 enum mlx5_ifc_flow_destination_type {
···
 	u8         op_mod[0x10];

 	u8         other_function[0x1];
-	u8         reserved_at_41[0xf];
+	u8         ec_vf_function[0x1];
+	u8         reserved_at_42[0xe];
 	u8         function_id[0x10];

 	u8         reserved_at_60[0x20];
···
 	u8         op_mod[0x10];

 	u8         other_function[0x1];
-	u8         reserved_at_41[0xf];
+	u8         ec_vf_function[0x1];
+	u8         reserved_at_42[0xe];
 	u8         function_id[0x10];

 	u8         reserved_at_60[0x20];
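The mlx5_ifc.h change above carves the 16-bit ec_vf_vport_base field out of a reserved region, so the split pieces must still add up to the original 0x5a0 reserved bits, and the `reserved_at_*` names must match the resulting bit offsets. A quick arithmetic check:

```c
/* Bit widths from the struct above: reserved_at_260[0x5a0] is split into
 * reserved(0x120) + reserved(0x10) + ec_vf_vport_base(0x10) + reserved(0x460).
 * That places ec_vf_vport_base at bit offset 0x390, ending at 0x3a0, which
 * is exactly where reserved_at_3a0 resumes. */
enum {
	OLD_RESERVED_BITS = 0x5a0,
	SPLIT_SUM = 0x120 + 0x10 + 0x10 + 0x460,
	EC_VF_VPORT_BASE_OFFSET = 0x260 + 0x120 + 0x10,
};
```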
include/linux/mlx5/vport.h (+1 -1)
···
 int mlx5_nic_vport_unaffiliate_multiport(struct mlx5_core_dev *port_mdev);

 u64 mlx5_query_nic_system_image_guid(struct mlx5_core_dev *mdev);
-int mlx5_vport_get_other_func_cap(struct mlx5_core_dev *dev, u16 function_id, void *out,
+int mlx5_vport_get_other_func_cap(struct mlx5_core_dev *dev, u16 vport, void *out,
 				  u16 opmod);
 #endif /* __MLX5_VPORT_H__ */