Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mlx5-updates-2023-04-20' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2023-04-20

1) Dragos improves the RX page pool and provides some fixes to his previous
series:
1.1) Fix releasing page_pool for striding RQ and legacy RQ nonlinear case
1.2) Hook NAPIs to page pools to gain more performance.

2) From Roi, some cleanups to the TC and eswitch modules.

3) Maher migrates vnic diagnostic counter reporting from debugfs to a
dedicated devlink health reporter.

Maher says:
===========
net/mlx5: Expose vnic diagnostic counters using devlink

Currently, vnic diagnostic counters are exposed through the following
debugfs:

$ ls /sys/kernel/debug/mlx5/0000:08:00.0/esw/vf_0/vnic_diag/
cq_overrun
quota_exceeded_command
total_q_under_processor_handle
invalid_command
send_queue_priority_update_flow
nic_receive_steering_discard

The current design does not allow the hypervisor to view the diagnostic
counters of its VFs once the VFs are bound to a VM. In other words,
the counters are not exposed for representor interfaces.
Furthermore, the debugfs design is inconvenient going forward, should the
driver need to report more counters.

As these counters pertain to vNIC health, it is more appropriate to
utilize the devlink health reporter to expose them.

Thus, this patchset includes the following changes:

* Drop the current vnic diagnostic counters debugfs interface.
* Add a vnic devlink health reporter for PF/VF core devices, which, when
diagnosed, dumps the vnic diagnostic counter values queried from FW.
* Add a vnic devlink health reporter for the representor interface, which
serves the same purpose as the previous point, in addition to
allowing the hypervisor to view its VFs' diagnostic counters, even when
the VFs are bound to external VMs.

An example of devlink health reporter usage:
$ devlink health diagnose pci/0000:08:00.0 reporter vnic
vNIC env counters:
total_error_queues: 0 send_queue_priority_update_flow: 0
comp_eq_overrun: 0 async_eq_overrun: 0 cq_overrun: 0
invalid_command: 0 quota_exceeded_command: 0
nic_receive_steering_discard: 0

===========

4) SW steering fixes and improvements

Yevgeny Kliteynik says:
=======================
This short patch series contains small fixes / improvements for
SW steering:

- Patch 1: Fix dumping of legacy modify_hdr in debug dump to
align to what is expected by parser
- Patch 2: Have separate threshold for ICM sync per ICM type
- Patch 3: Add more info to the steering debug dump - Linux
version and device name
- Patch 4: Keep track of number of buddies that are currently
in use per domain per buddy type

=======================

* tag 'mlx5-updates-2023-04-20' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
net/mlx5: Update op_mode to op_mod for port selection
net/mlx5: E-Switch, Remove unused mlx5_esw_offloads_vport_metadata_set()
net/mlx5: E-Switch, Remove redundant dev arg from mlx5_esw_vport_alloc()
net/mlx5: Include linux/pci.h for pci_msix_can_alloc_dyn()
net/mlx5e: RX, Hook NAPIs to page pools
net/mlx5e: RX, Fix XDP_TX page release for legacy rq nonlinear case
net/mlx5e: RX, Fix releasing page_pool pages twice for striding RQ
net/mlx5e: Add vnic devlink health reporter to representors
net/mlx5: Add vnic devlink health reporter to PFs/VFs
Revert "net/mlx5: Expose vnic diagnostic counters for eswitch managed vports"
Revert "net/mlx5: Expose steering dropped packets counter"
net/mlx5: DR, Add memory statistics for domain object
net/mlx5: DR, Add more info in domain dbg dump
net/mlx5: DR, Calculate sync threshold of each pool according to its type
net/mlx5: DR, Fix dumping of legacy modify_hdr in debug dump
====================

Link: https://lore.kernel.org/r/20230421013850.349646-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+297 -273
+33
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/devlink.rst
···
     $ devlink health dump show pci/0000:82:00.1 reporter fw_fatal

 NOTE: This command can run only on PF.
+
+vnic reporter
+-------------
+The vnic reporter implements only the `diagnose` callback.
+It is responsible for querying the vnic diagnostic counters from fw and displaying
+them in realtime.
+
+Description of the vnic counters:
+total_q_under_processor_handle: number of queues in an error state due to
+an async error or errored command.
+send_queue_priority_update_flow: number of QP/SQ priority/SL update
+events.
+cq_overrun: number of times CQ entered an error state due to an
+overflow.
+async_eq_overrun: number of times an EQ mapped to async events was
+overrun.
+comp_eq_overrun: number of times an EQ mapped to completion events was
+overrun.
+quota_exceeded_command: number of commands issued and failed due to quota
+exceeded.
+invalid_command: number of commands issued and failed due to any reason
+other than quota exceeded.
+nic_receive_steering_discard: number of packets that completed RX flow
+steering but were discarded due to a mismatch in flow table.
+
+User commands examples:
+- Diagnose PF/VF vnic counters
+        $ devlink health diagnose pci/0000:82:00.1 reporter vnic
+- Diagnose representor vnic counters (performed by supplying devlink port of the
+representor, which can be obtained via devlink port command)
+        $ devlink health diagnose pci/0000:82:00.1/65537 reporter vnic
+
+NOTE: This command can run over all interfaces such as PF/VF and representor ports.
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/Makefile
···
 		transobj.o vport.o sriov.o fs_cmd.o fs_core.o pci_irq.o \
 		fs_counters.o fs_ft_pool.o rl.o lag/debugfs.o lag/lag.o dev.o events.o wq.o lib/gid.o \
 		lib/devcom.o lib/pci_vsc.o lib/dm.o lib/fs_ttc.o diag/fs_tracepoint.o \
-		diag/fw_tracer.o diag/crdump.o devlink.o diag/rsc_dump.o \
+		diag/fw_tracer.o diag/crdump.o devlink.o diag/rsc_dump.o diag/reporter_vnic.o \
 		fw_reset.o qos.o lib/tout.o lib/aso.o

 #
···
 #
 mlx5_core-$(CONFIG_MLX5_ESWITCH) += eswitch.o eswitch_offloads.o eswitch_offloads_termtbl.o \
 		ecpf.o rdma.o esw/legacy.o \
-		esw/debugfs.o esw/devlink_port.o esw/vporttbl.o esw/qos.o
+		esw/devlink_port.o esw/vporttbl.o esw/qos.o

 mlx5_core-$(CONFIG_MLX5_ESWITCH) += esw/acl/helper.o \
 		esw/acl/egress_lgcy.o esw/acl/egress_ofld.o \
+125
drivers/net/ethernet/mellanox/mlx5/core/diag/reporter_vnic.c
···
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. */
+
+#include "reporter_vnic.h"
+#include "devlink.h"
+
+#define VNIC_ENV_GET64(vnic_env_stats, c) \
+	MLX5_GET64(query_vnic_env_out, (vnic_env_stats)->query_vnic_env_out, \
+		   vport_env.c)
+
+struct mlx5_vnic_diag_stats {
+	__be64 query_vnic_env_out[MLX5_ST_SZ_QW(query_vnic_env_out)];
+};
+
+int mlx5_reporter_vnic_diagnose_counters(struct mlx5_core_dev *dev,
+					 struct devlink_fmsg *fmsg,
+					 u16 vport_num, bool other_vport)
+{
+	u32 in[MLX5_ST_SZ_DW(query_vnic_env_in)] = {};
+	struct mlx5_vnic_diag_stats vnic;
+	int err;
+
+	MLX5_SET(query_vnic_env_in, in, opcode, MLX5_CMD_OP_QUERY_VNIC_ENV);
+	MLX5_SET(query_vnic_env_in, in, vport_number, vport_num);
+	MLX5_SET(query_vnic_env_in, in, other_vport, !!other_vport);
+
+	err = mlx5_cmd_exec_inout(dev, query_vnic_env, in, &vnic.query_vnic_env_out);
+	if (err)
+		return err;
+
+	err = devlink_fmsg_pair_nest_start(fmsg, "vNIC env counters");
+	if (err)
+		return err;
+
+	err = devlink_fmsg_obj_nest_start(fmsg);
+	if (err)
+		return err;
+
+	err = devlink_fmsg_u64_pair_put(fmsg, "total_error_queues",
+					VNIC_ENV_GET64(&vnic, total_error_queues));
+	if (err)
+		return err;
+
+	err = devlink_fmsg_u64_pair_put(fmsg, "send_queue_priority_update_flow",
+					VNIC_ENV_GET64(&vnic, send_queue_priority_update_flow));
+	if (err)
+		return err;
+
+	err = devlink_fmsg_u64_pair_put(fmsg, "comp_eq_overrun",
+					VNIC_ENV_GET64(&vnic, comp_eq_overrun));
+	if (err)
+		return err;
+
+	err = devlink_fmsg_u64_pair_put(fmsg, "async_eq_overrun",
+					VNIC_ENV_GET64(&vnic, async_eq_overrun));
+	if (err)
+		return err;
+
+	err = devlink_fmsg_u64_pair_put(fmsg, "cq_overrun",
+					VNIC_ENV_GET64(&vnic, cq_overrun));
+	if (err)
+		return err;
+
+	err = devlink_fmsg_u64_pair_put(fmsg, "invalid_command",
+					VNIC_ENV_GET64(&vnic, invalid_command));
+	if (err)
+		return err;
+
+	err = devlink_fmsg_u64_pair_put(fmsg, "quota_exceeded_command",
+					VNIC_ENV_GET64(&vnic, quota_exceeded_command));
+	if (err)
+		return err;
+
+	err = devlink_fmsg_u64_pair_put(fmsg, "nic_receive_steering_discard",
+					VNIC_ENV_GET64(&vnic, nic_receive_steering_discard));
+	if (err)
+		return err;
+
+	err = devlink_fmsg_obj_nest_end(fmsg);
+	if (err)
+		return err;
+
+	err = devlink_fmsg_pair_nest_end(fmsg);
+	if (err)
+		return err;
+
+	return 0;
+}
+
+static int mlx5_reporter_vnic_diagnose(struct devlink_health_reporter *reporter,
+				       struct devlink_fmsg *fmsg,
+				       struct netlink_ext_ack *extack)
+{
+	struct mlx5_core_dev *dev = devlink_health_reporter_priv(reporter);
+
+	return mlx5_reporter_vnic_diagnose_counters(dev, fmsg, 0, false);
+}
+
+static const struct devlink_health_reporter_ops mlx5_reporter_vnic_ops = {
+	.name = "vnic",
+	.diagnose = mlx5_reporter_vnic_diagnose,
+};
+
+void mlx5_reporter_vnic_create(struct mlx5_core_dev *dev)
+{
+	struct mlx5_core_health *health = &dev->priv.health;
+	struct devlink *devlink = priv_to_devlink(dev);
+
+	health->vnic_reporter =
+		devlink_health_reporter_create(devlink,
+					       &mlx5_reporter_vnic_ops,
+					       0, dev);
+	if (IS_ERR(health->vnic_reporter))
+		mlx5_core_warn(dev,
+			       "Failed to create vnic reporter, err = %ld\n",
+			       PTR_ERR(health->vnic_reporter));
+}
+
+void mlx5_reporter_vnic_destroy(struct mlx5_core_dev *dev)
+{
+	struct mlx5_core_health *health = &dev->priv.health;
+
+	if (!IS_ERR_OR_NULL(health->vnic_reporter))
+		devlink_health_reporter_destroy(health->vnic_reporter);
+}
+16
drivers/net/ethernet/mellanox/mlx5/core/diag/reporter_vnic.h
···
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+ * Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES.
+ */
+#ifndef __MLX5_REPORTER_VNIC_H
+#define __MLX5_REPORTER_VNIC_H
+
+#include "mlx5_core.h"
+
+void mlx5_reporter_vnic_create(struct mlx5_core_dev *dev);
+void mlx5_reporter_vnic_destroy(struct mlx5_core_dev *dev);
+
+int mlx5_reporter_vnic_diagnose_counters(struct mlx5_core_dev *dev,
+					 struct devlink_fmsg *fmsg,
+					 u16 vport_num, bool other_vport);
+
+#endif /* __MLX5_REPORTER_VNIC_H */
+1
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
···
 	pp_params.pool_size = pool_size;
 	pp_params.nid = node;
 	pp_params.dev = rq->pdev;
+	pp_params.napi = rq->cq.napi;
 	pp_params.dma_dir = rq->buff.map_dir;
 	pp_params.max_len = PAGE_SIZE;

+50 -2
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
···
 #include "lib/vxlan.h"
 #define CREATE_TRACE_POINTS
 #include "diag/en_rep_tracepoint.h"
+#include "diag/reporter_vnic.h"
 #include "en_accel/ipsec.h"
 #include "en/tc/int_port.h"
 #include "en/ptp.h"
···
 	return ARRAY_SIZE(mlx5e_ul_rep_stats_grps);
 }

+static int
+mlx5e_rep_vnic_reporter_diagnose(struct devlink_health_reporter *reporter,
+				 struct devlink_fmsg *fmsg,
+				 struct netlink_ext_ack *extack)
+{
+	struct mlx5e_rep_priv *rpriv = devlink_health_reporter_priv(reporter);
+	struct mlx5_eswitch_rep *rep = rpriv->rep;
+
+	return mlx5_reporter_vnic_diagnose_counters(rep->esw->dev, fmsg,
+						    rep->vport, true);
+}
+
+static const struct devlink_health_reporter_ops mlx5_rep_vnic_reporter_ops = {
+	.name = "vnic",
+	.diagnose = mlx5e_rep_vnic_reporter_diagnose,
+};
+
+static void mlx5e_rep_vnic_reporter_create(struct mlx5e_priv *priv,
+					   struct devlink_port *dl_port)
+{
+	struct mlx5e_rep_priv *rpriv = priv->ppriv;
+	struct devlink_health_reporter *reporter;
+
+	reporter = devl_port_health_reporter_create(dl_port,
+						    &mlx5_rep_vnic_reporter_ops,
+						    0, rpriv);
+	if (IS_ERR(reporter)) {
+		mlx5_core_err(priv->mdev,
+			      "Failed to create representor vnic reporter, err = %ld\n",
+			      PTR_ERR(reporter));
+		return;
+	}
+
+	rpriv->rep_vnic_reporter = reporter;
+}
+
+static void mlx5e_rep_vnic_reporter_destroy(struct mlx5e_priv *priv)
+{
+	struct mlx5e_rep_priv *rpriv = priv->ppriv;
+
+	if (!IS_ERR_OR_NULL(rpriv->rep_vnic_reporter))
+		devl_health_reporter_destroy(rpriv->rep_vnic_reporter);
+}
+
 static const struct mlx5e_profile mlx5e_rep_profile = {
 	.init			= mlx5e_init_rep,
 	.cleanup		= mlx5e_cleanup_rep,
···

 	dl_port = mlx5_esw_offloads_devlink_port(dev->priv.eswitch,
 						 rpriv->rep->vport);
-	if (dl_port)
+	if (dl_port) {
 		SET_NETDEV_DEVLINK_PORT(netdev, dl_port);
+		mlx5e_rep_vnic_reporter_create(priv, dl_port);
+	}

 	err = register_netdev(netdev);
 	if (err) {
···
 	return 0;

 err_detach_netdev:
+	mlx5e_rep_vnic_reporter_destroy(priv);
 	mlx5e_detach_netdev(netdev_priv(netdev));
-
 err_cleanup_profile:
 	priv->profile->cleanup(priv);

···
 	}

 	unregister_netdev(netdev);
+	mlx5e_rep_vnic_reporter_destroy(priv);
 	mlx5e_detach_netdev(priv);
 	priv->profile->cleanup(priv);
 	mlx5e_destroy_netdev(priv);
+1
drivers/net/ethernet/mellanox/mlx5/core/en_rep.h
···
 	struct rtnl_link_stats64 prev_vf_vport_stats;
 	struct mlx5_flow_handle *send_to_vport_meta_rule;
 	struct rhashtable		tc_ht;
+	struct devlink_health_reporter *rep_vnic_reporter;
 };

 static inline
+8 -3
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
···
 	struct mlx5e_mpw_info *wi = mlx5e_get_mpw_info(rq, ix);
 	/* This function is called on rq/netdev close. */
 	mlx5e_free_rx_mpwqe(rq, wi);
+
+	/* Avoid a second release of the wqe pages: dealloc is called also
+	 * for missing wqes on an already flushed RQ.
+	 */
+	bitmap_fill(wi->skip_release_bitmap, rq->mpwqe.pages_per_wqe);
 }

 INDIRECT_CALLABLE_SCOPE bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq)
···
 	prog = rcu_dereference(rq->xdp_prog);
 	if (prog && mlx5e_xdp_handle(rq, prog, &mxbuf)) {
 		if (test_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
-			int i;
+			struct mlx5e_wqe_frag_info *pwi;

-			for (i = wi - head_wi; i < rq->wqe.info.num_frags; i++)
-				mlx5e_put_rx_frag(rq, &head_wi[i]);
+			for (pwi = head_wi; pwi < wi; pwi++)
+				pwi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
 		}
 		return NULL; /* page/packet was consumed by XDP */
 	}
-198
drivers/net/ethernet/mellanox/mlx5/core/esw/debugfs.c
···
-// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
-/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
-
-#include <linux/debugfs.h>
-#include "eswitch.h"
-
-enum vnic_diag_counter {
-	MLX5_VNIC_DIAG_TOTAL_Q_UNDER_PROCESSOR_HANDLE,
-	MLX5_VNIC_DIAG_SEND_QUEUE_PRIORITY_UPDATE_FLOW,
-	MLX5_VNIC_DIAG_COMP_EQ_OVERRUN,
-	MLX5_VNIC_DIAG_ASYNC_EQ_OVERRUN,
-	MLX5_VNIC_DIAG_CQ_OVERRUN,
-	MLX5_VNIC_DIAG_INVALID_COMMAND,
-	MLX5_VNIC_DIAG_QOUTA_EXCEEDED_COMMAND,
-	MLX5_VNIC_DIAG_RX_STEERING_DISCARD,
-};
-
-static int mlx5_esw_query_vnic_diag(struct mlx5_vport *vport, enum vnic_diag_counter counter,
-				    u64 *val)
-{
-	u32 out[MLX5_ST_SZ_DW(query_vnic_env_out)] = {};
-	u32 in[MLX5_ST_SZ_DW(query_vnic_env_in)] = {};
-	struct mlx5_core_dev *dev = vport->dev;
-	u16 vport_num = vport->vport;
-	void *vnic_diag_out;
-	int err;
-
-	MLX5_SET(query_vnic_env_in, in, opcode, MLX5_CMD_OP_QUERY_VNIC_ENV);
-	MLX5_SET(query_vnic_env_in, in, vport_number, vport_num);
-	if (!mlx5_esw_is_manager_vport(dev->priv.eswitch, vport_num))
-		MLX5_SET(query_vnic_env_in, in, other_vport, 1);
-
-	err = mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
-	if (err)
-		return err;
-
-	vnic_diag_out = MLX5_ADDR_OF(query_vnic_env_out, out, vport_env);
-	switch (counter) {
-	case MLX5_VNIC_DIAG_TOTAL_Q_UNDER_PROCESSOR_HANDLE:
-		*val = MLX5_GET(vnic_diagnostic_statistics, vnic_diag_out, total_error_queues);
-		break;
-	case MLX5_VNIC_DIAG_SEND_QUEUE_PRIORITY_UPDATE_FLOW:
-		*val = MLX5_GET(vnic_diagnostic_statistics, vnic_diag_out,
-				send_queue_priority_update_flow);
-		break;
-	case MLX5_VNIC_DIAG_COMP_EQ_OVERRUN:
-		*val = MLX5_GET(vnic_diagnostic_statistics, vnic_diag_out, comp_eq_overrun);
-		break;
-	case MLX5_VNIC_DIAG_ASYNC_EQ_OVERRUN:
-		*val = MLX5_GET(vnic_diagnostic_statistics, vnic_diag_out, async_eq_overrun);
-		break;
-	case MLX5_VNIC_DIAG_CQ_OVERRUN:
-		*val = MLX5_GET(vnic_diagnostic_statistics, vnic_diag_out, cq_overrun);
-		break;
-	case MLX5_VNIC_DIAG_INVALID_COMMAND:
-		*val = MLX5_GET(vnic_diagnostic_statistics, vnic_diag_out, invalid_command);
-		break;
-	case MLX5_VNIC_DIAG_QOUTA_EXCEEDED_COMMAND:
-		*val = MLX5_GET(vnic_diagnostic_statistics, vnic_diag_out, quota_exceeded_command);
-		break;
-	case MLX5_VNIC_DIAG_RX_STEERING_DISCARD:
-		*val = MLX5_GET64(vnic_diagnostic_statistics, vnic_diag_out,
-				  nic_receive_steering_discard);
-		break;
-	}
-
-	return 0;
-}
-
-static int __show_vnic_diag(struct seq_file *file, struct mlx5_vport *vport,
-			    enum vnic_diag_counter type)
-{
-	u64 val = 0;
-	int ret;
-
-	ret = mlx5_esw_query_vnic_diag(vport, type, &val);
-	if (ret)
-		return ret;
-
-	seq_printf(file, "%llu\n", val);
-	return 0;
-}
-
-static int total_q_under_processor_handle_show(struct seq_file *file, void *priv)
-{
-	return __show_vnic_diag(file, file->private, MLX5_VNIC_DIAG_TOTAL_Q_UNDER_PROCESSOR_HANDLE);
-}
-
-static int send_queue_priority_update_flow_show(struct seq_file *file, void *priv)
-{
-	return __show_vnic_diag(file, file->private,
-				MLX5_VNIC_DIAG_SEND_QUEUE_PRIORITY_UPDATE_FLOW);
-}
-
-static int comp_eq_overrun_show(struct seq_file *file, void *priv)
-{
-	return __show_vnic_diag(file, file->private, MLX5_VNIC_DIAG_COMP_EQ_OVERRUN);
-}
-
-static int async_eq_overrun_show(struct seq_file *file, void *priv)
-{
-	return __show_vnic_diag(file, file->private, MLX5_VNIC_DIAG_ASYNC_EQ_OVERRUN);
-}
-
-static int cq_overrun_show(struct seq_file *file, void *priv)
-{
-	return __show_vnic_diag(file, file->private, MLX5_VNIC_DIAG_CQ_OVERRUN);
-}
-
-static int invalid_command_show(struct seq_file *file, void *priv)
-{
-	return __show_vnic_diag(file, file->private, MLX5_VNIC_DIAG_INVALID_COMMAND);
-}
-
-static int quota_exceeded_command_show(struct seq_file *file, void *priv)
-{
-	return __show_vnic_diag(file, file->private, MLX5_VNIC_DIAG_QOUTA_EXCEEDED_COMMAND);
-}
-
-static int rx_steering_discard_show(struct seq_file *file, void *priv)
-{
-	return __show_vnic_diag(file, file->private, MLX5_VNIC_DIAG_RX_STEERING_DISCARD);
-}
-
-DEFINE_SHOW_ATTRIBUTE(total_q_under_processor_handle);
-DEFINE_SHOW_ATTRIBUTE(send_queue_priority_update_flow);
-DEFINE_SHOW_ATTRIBUTE(comp_eq_overrun);
-DEFINE_SHOW_ATTRIBUTE(async_eq_overrun);
-DEFINE_SHOW_ATTRIBUTE(cq_overrun);
-DEFINE_SHOW_ATTRIBUTE(invalid_command);
-DEFINE_SHOW_ATTRIBUTE(quota_exceeded_command);
-DEFINE_SHOW_ATTRIBUTE(rx_steering_discard);
-
-void mlx5_esw_vport_debugfs_destroy(struct mlx5_eswitch *esw, u16 vport_num)
-{
-	struct mlx5_vport *vport = mlx5_eswitch_get_vport(esw, vport_num);
-
-	debugfs_remove_recursive(vport->dbgfs);
-	vport->dbgfs = NULL;
-}
-
-/* vnic diag dir name is "pf", "ecpf" or "{vf/sf}_xxxx" */
-#define VNIC_DIAG_DIR_NAME_MAX_LEN 8
-
-void mlx5_esw_vport_debugfs_create(struct mlx5_eswitch *esw, u16 vport_num, bool is_sf, u16 sf_num)
-{
-	struct mlx5_vport *vport = mlx5_eswitch_get_vport(esw, vport_num);
-	struct dentry *vnic_diag;
-	char dir_name[VNIC_DIAG_DIR_NAME_MAX_LEN];
-	int err;
-
-	if (!MLX5_CAP_GEN(esw->dev, vport_group_manager))
-		return;
-
-	if (vport_num == MLX5_VPORT_PF) {
-		strcpy(dir_name, "pf");
-	} else if (vport_num == MLX5_VPORT_ECPF) {
-		strcpy(dir_name, "ecpf");
-	} else {
-		err = snprintf(dir_name, VNIC_DIAG_DIR_NAME_MAX_LEN, "%s_%d", is_sf ? "sf" : "vf",
-			       is_sf ? sf_num : vport_num - MLX5_VPORT_FIRST_VF);
-		if (WARN_ON(err < 0))
-			return;
-	}
-
-	vport->dbgfs = debugfs_create_dir(dir_name, esw->dbgfs);
-	vnic_diag = debugfs_create_dir("vnic_diag", vport->dbgfs);
-
-	if (MLX5_CAP_GEN(esw->dev, vnic_env_queue_counters)) {
-		debugfs_create_file("total_q_under_processor_handle", 0444, vnic_diag, vport,
-				    &total_q_under_processor_handle_fops);
-		debugfs_create_file("send_queue_priority_update_flow", 0444, vnic_diag, vport,
-				    &send_queue_priority_update_flow_fops);
-	}
-
-	if (MLX5_CAP_GEN(esw->dev, eq_overrun_count)) {
-		debugfs_create_file("comp_eq_overrun", 0444, vnic_diag, vport,
-				    &comp_eq_overrun_fops);
-		debugfs_create_file("async_eq_overrun", 0444, vnic_diag, vport,
-				    &async_eq_overrun_fops);
-	}
-
-	if (MLX5_CAP_GEN(esw->dev, vnic_env_cq_overrun))
-		debugfs_create_file("cq_overrun", 0444, vnic_diag, vport, &cq_overrun_fops);
-
-	if (MLX5_CAP_GEN(esw->dev, invalid_command_count))
-		debugfs_create_file("invalid_command", 0444, vnic_diag, vport,
-				    &invalid_command_fops);
-
-	if (MLX5_CAP_GEN(esw->dev, quota_exceeded_count))
-		debugfs_create_file("quota_exceeded_command", 0444, vnic_diag, vport,
-				    &quota_exceeded_command_fops);
-
-	if (MLX5_CAP_GEN(esw->dev, nic_receive_steering_discard))
-		debugfs_create_file("rx_steering_discard", 0444, vnic_diag, vport,
-				    &rx_steering_discard_fops);
-
-}
+7 -13
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
···
 #include <linux/mlx5/vport.h>
 #include <linux/mlx5/fs.h>
 #include <linux/mlx5/mpfs.h>
-#include <linux/debugfs.h>
 #include "esw/acl/lgcy.h"
 #include "esw/legacy.h"
 #include "esw/qos.h"
···
 	if (err)
 		return err;

-	mlx5_esw_vport_debugfs_create(esw, vport_num, false, 0);
 	err = esw_offloads_load_rep(esw, vport_num);
 	if (err)
 		goto err_rep;
···
 	return err;

 err_rep:
-	mlx5_esw_vport_debugfs_destroy(esw, vport_num);
 	mlx5_esw_vport_disable(esw, vport_num);
 	return err;
 }
···
 void mlx5_eswitch_unload_vport(struct mlx5_eswitch *esw, u16 vport_num)
 {
 	esw_offloads_unload_rep(esw, vport_num);
-	mlx5_esw_vport_debugfs_destroy(esw, vport_num);
 	mlx5_esw_vport_disable(esw, vport_num);
 }
···
 	return err;
 }

-static int mlx5_esw_vport_alloc(struct mlx5_eswitch *esw, struct mlx5_core_dev *dev,
+static int mlx5_esw_vport_alloc(struct mlx5_eswitch *esw,
 				int index, u16 vport_num)
 {
 	struct mlx5_vport *vport;
···
 	xa_init(&esw->vports);

-	err = mlx5_esw_vport_alloc(esw, dev, idx, MLX5_VPORT_PF);
+	err = mlx5_esw_vport_alloc(esw, idx, MLX5_VPORT_PF);
 	if (err)
 		goto err;
 	if (esw->first_host_vport == MLX5_VPORT_PF)
···
 	idx++;

 	for (i = 0; i < mlx5_core_max_vfs(dev); i++) {
-		err = mlx5_esw_vport_alloc(esw, dev, idx, idx);
+		err = mlx5_esw_vport_alloc(esw, idx, idx);
 		if (err)
 			goto err;
 		xa_set_mark(&esw->vports, idx, MLX5_ESW_VPT_VF);
···
 	}
 	base_sf_num = mlx5_sf_start_function_id(dev);
 	for (i = 0; i < mlx5_sf_max_functions(dev); i++) {
-		err = mlx5_esw_vport_alloc(esw, dev, idx, base_sf_num + i);
+		err = mlx5_esw_vport_alloc(esw, idx, base_sf_num + i);
 		if (err)
 			goto err;
 		xa_set_mark(&esw->vports, base_sf_num + i, MLX5_ESW_VPT_SF);
···
 	if (err)
 		goto err;
 	for (i = 0; i < max_host_pf_sfs; i++) {
-		err = mlx5_esw_vport_alloc(esw, dev, idx, base_sf_num + i);
+		err = mlx5_esw_vport_alloc(esw, idx, base_sf_num + i);
 		if (err)
 			goto err;
 		xa_set_mark(&esw->vports, base_sf_num + i, MLX5_ESW_VPT_SF);
···
 	}

 	if (mlx5_ecpf_vport_exists(dev)) {
-		err = mlx5_esw_vport_alloc(esw, dev, idx, MLX5_VPORT_ECPF);
+		err = mlx5_esw_vport_alloc(esw, idx, MLX5_VPORT_ECPF);
 		if (err)
 			goto err;
 		idx++;
 	}
-	err = mlx5_esw_vport_alloc(esw, dev, idx, MLX5_VPORT_UPLINK);
+	err = mlx5_esw_vport_alloc(esw, idx, MLX5_VPORT_UPLINK);
 	if (err)
 		goto err;
 	return 0;
···
 	dev->priv.eswitch = esw;
 	BLOCKING_INIT_NOTIFIER_HEAD(&esw->n_head);

-	esw->dbgfs = debugfs_create_dir("esw", mlx5_debugfs_get_dev_root(esw->dev));
 	esw_info(dev,
 		 "Total vports %d, per vport: max uc(%d) max mc(%d)\n",
 		 esw->total_vports,
···

 	esw_info(esw->dev, "cleanup\n");

-	debugfs_remove_recursive(esw->dbgfs);
 	esw->dev->priv.eswitch = NULL;
 	destroy_workqueue(esw->work_queue);
 	WARN_ON(refcount_read(&esw->qos.refcnt));
-6
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
···
 	enum mlx5_eswitch_vport_event enabled_events;
 	int index;
 	struct devlink_port *dl_port;
-	struct dentry *dbgfs;
 };

 struct mlx5_esw_indir_table;
···
 		u32             large_group_num;
 	}  params;
 	struct blocking_notifier_head n_head;
-	struct dentry *dbgfs;
 };

 void esw_offloads_disable(struct mlx5_eswitch *esw);
···
 void mlx5_eswitch_del_send_to_vport_meta_rule(struct mlx5_flow_handle *rule);

 bool mlx5_esw_vport_match_metadata_supported(const struct mlx5_eswitch *esw);
-int mlx5_esw_offloads_vport_metadata_set(struct mlx5_eswitch *esw, bool enable);
 u32 mlx5_esw_match_metadata_alloc(struct mlx5_eswitch *esw);
 void mlx5_esw_match_metadata_free(struct mlx5_eswitch *esw, u32 metadata);
···
 int mlx5_esw_offloads_devlink_port_register(struct mlx5_eswitch *esw, u16 vport_num);
 void mlx5_esw_offloads_devlink_port_unregister(struct mlx5_eswitch *esw, u16 vport_num);
 struct devlink_port *mlx5_esw_offloads_devlink_port(struct mlx5_eswitch *esw, u16 vport_num);
-
-void mlx5_esw_vport_debugfs_create(struct mlx5_eswitch *esw, u16 vport_num, bool is_sf, u16 sf_num);
-void mlx5_esw_vport_debugfs_destroy(struct mlx5_eswitch *esw, u16 vport_num);

 int mlx5_esw_devlink_sf_port_register(struct mlx5_eswitch *esw, struct devlink_port *dl_port,
 				      u16 vport_num, u32 controller, u32 sfnum);
-25
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
···
 	return err;
 }

-int mlx5_esw_offloads_vport_metadata_set(struct mlx5_eswitch *esw, bool enable)
-{
-	int err = 0;
-
-	down_write(&esw->mode_lock);
-	if (mlx5_esw_is_fdb_created(esw)) {
-		err = -EBUSY;
-		goto done;
-	}
-	if (!mlx5_esw_vport_match_metadata_supported(esw)) {
-		err = -EOPNOTSUPP;
-		goto done;
-	}
-	if (enable)
-		esw->flags |= MLX5_ESWITCH_VPORT_MATCH_METADATA;
-	else
-		esw->flags &= ~MLX5_ESWITCH_VPORT_MATCH_METADATA;
-done:
-	up_write(&esw->mode_lock);
-	return err;
-}
-
 int
 esw_vport_create_offloads_acl_tables(struct mlx5_eswitch *esw,
 				     struct mlx5_vport *vport)
···
 	if (err)
 		goto devlink_err;

-	mlx5_esw_vport_debugfs_create(esw, vport_num, true, sfnum);
 	err = mlx5_esw_offloads_rep_load(esw, vport_num);
 	if (err)
 		goto rep_err;
 	return 0;

 rep_err:
-	mlx5_esw_vport_debugfs_destroy(esw, vport_num);
 	mlx5_esw_devlink_sf_port_unregister(esw, vport_num);
 devlink_err:
 	mlx5_esw_vport_disable(esw, vport_num);
···
 void mlx5_esw_offloads_sf_vport_disable(struct mlx5_eswitch *esw, u16 vport_num)
 {
 	mlx5_esw_offloads_rep_unload(esw, vport_num);
-	mlx5_esw_vport_debugfs_destroy(esw, vport_num);
 	mlx5_esw_devlink_sf_port_unregister(esw, vport_num);
 	mlx5_esw_vport_disable(esw, vport_num);
 }
+4
drivers/net/ethernet/mellanox/mlx5/core/health.c
···
 #include "lib/pci_vsc.h"
 #include "lib/tout.h"
 #include "diag/fw_tracer.h"
+#include "diag/reporter_vnic.h"

 enum {
 	MAX_MISSES = 3,
···

 	cancel_delayed_work_sync(&health->update_fw_log_ts_work);
 	destroy_workqueue(health->wq);
+	mlx5_reporter_vnic_destroy(dev);
 	mlx5_fw_reporters_destroy(dev);
 }
···
 	char *name;

 	mlx5_fw_reporters_create(dev);
+	mlx5_reporter_vnic_create(dev);

 	health = &dev->priv.health;
 	name = kmalloc(64, GFP_KERNEL);
···
 	return 0;

 out_err:
+	mlx5_reporter_vnic_destroy(dev);
 	mlx5_fw_reporters_destroy(dev);
 	return -ENOMEM;
 }
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/main.c
···
 		       MLX5_ST_SZ_BYTES(port_selection_cap));
 	MLX5_SET(port_selection_cap, set_hca_cap, port_select_flow_table_bypass, 1);

-	err = set_caps(dev, set_ctx, MLX5_SET_HCA_CAP_OP_MODE_PORT_SELECTION);
+	err = set_caps(dev, set_ctx, MLX5_SET_HCA_CAP_OP_MOD_PORT_SELECTION);

 	return err;
 }
+1
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
···
 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /* Copyright (c) 2019 Mellanox Technologies. */

+#include <linux/pci.h>
 #include <linux/interrupt.h>
 #include <linux/notifier.h>
 #include <linux/mlx5/driver.h>
+18 -6
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_dbg.c
···
 #include <linux/debugfs.h>
 #include <linux/kernel.h>
 #include <linux/seq_file.h>
+#include <linux/version.h>
 #include "dr_types.h"

 #define DR_DBG_PTR_TO_ID(p) ((u64)(uintptr_t)(p) & 0xFFFFFFFFULL)
···
 			   DR_DUMP_REC_TYPE_ACTION_MODIFY_HDR, action_id,
 			   rule_id, action->rewrite->index,
 			   action->rewrite->single_action_opt,
-			   action->rewrite->num_of_actions,
+			   ptrn_arg ? action->rewrite->num_of_actions : 0,
 			   ptrn_arg ? ptrn->index : 0,
 			   ptrn_arg ? mlx5dr_arg_get_obj_id(arg) : 0);

-		for (i = 0; i < action->rewrite->num_of_actions; i++) {
-			seq_printf(file, ",0x%016llx",
-				   be64_to_cpu(((__be64 *)rewrite_data)[i]));
+		if (ptrn_arg) {
+			for (i = 0; i < action->rewrite->num_of_actions; i++) {
+				seq_printf(file, ",0x%016llx",
+					   be64_to_cpu(((__be64 *)rewrite_data)[i]));
+			}
 		}

 		seq_puts(file, "\n");
···
 	u64 domain_id = DR_DBG_PTR_TO_ID(dmn);
 	int ret;

-	seq_printf(file, "%d,0x%llx,%d,0%x,%d,%s\n", DR_DUMP_REC_TYPE_DOMAIN,
+	seq_printf(file, "%d,0x%llx,%d,0%x,%d,%u.%u.%u,%s,%d,%u,%u,%u\n",
+		   DR_DUMP_REC_TYPE_DOMAIN,
 		   domain_id, dmn->type, dmn->info.caps.gvmi,
-		   dmn->info.supp_sw_steering, pci_name(dmn->mdev->pdev));
+		   dmn->info.supp_sw_steering,
+		   /* package version */
+		   LINUX_VERSION_MAJOR, LINUX_VERSION_PATCHLEVEL,
+		   LINUX_VERSION_SUBLEVEL,
+		   pci_name(dmn->mdev->pdev),
+		   0, /* domain flags */
+		   dmn->num_buddies[DR_ICM_TYPE_STE],
+		   dmn->num_buddies[DR_ICM_TYPE_MODIFY_ACTION],
+		   dmn->num_buddies[DR_ICM_TYPE_MODIFY_HDR_PTRN]);

 	ret = dr_dump_domain_info(file, &dmn->info, domain_id);
 	if (ret < 0)
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c (+25 -16)

···
 #include "dr_types.h"

 #define DR_ICM_MODIFY_HDR_ALIGN_BASE 64
-#define DR_ICM_POOL_HOT_MEMORY_FRACTION 4
+#define DR_ICM_POOL_STE_HOT_MEM_PERCENT 25
+#define DR_ICM_POOL_MODIFY_HDR_PTRN_HOT_MEM_PERCENT 50
+#define DR_ICM_POOL_MODIFY_ACTION_HOT_MEM_PERCENT 90

 struct mlx5dr_icm_hot_chunk {
	struct mlx5dr_icm_buddy_mem *buddy_mem;
···
	struct mlx5dr_icm_hot_chunk *hot_chunks_arr;
	u32 hot_chunks_num;
	u64 hot_memory_size;
+	/* hot memory size threshold for triggering sync */
+	u64 th;
 };

 struct mlx5dr_icm_dm {
···
	/* add it to the -start- of the list in order to search in it first */
	list_add(&buddy->list_node, &pool->buddy_mem_list);

+	pool->dmn->num_buddies[pool->icm_type]++;
+
	return 0;

 err_cleanup_buddy:
···
 static void dr_icm_buddy_destroy(struct mlx5dr_icm_buddy_mem *buddy)
 {
+	enum mlx5dr_icm_type icm_type = buddy->pool->icm_type;
+
	dr_icm_pool_mr_destroy(buddy->icm_mr);

	mlx5dr_buddy_cleanup(buddy);

-	if (buddy->pool->icm_type == DR_ICM_TYPE_STE)
+	if (icm_type == DR_ICM_TYPE_STE)
		dr_icm_buddy_cleanup_ste_cache(buddy);
+
+	buddy->pool->dmn->num_buddies[icm_type]--;

	kvfree(buddy);
 }
···
 static bool dr_icm_pool_is_sync_required(struct mlx5dr_icm_pool *pool)
 {
-	int allow_hot_size;
-
-	/* sync when hot memory reaches a certain fraction of the pool size */
-	allow_hot_size =
-		mlx5dr_icm_pool_chunk_size_to_byte(pool->max_log_chunk_sz,
-						   pool->icm_type) /
-		DR_ICM_POOL_HOT_MEMORY_FRACTION;
-
-	return pool->hot_memory_size > allow_hot_size;
+	return pool->hot_memory_size > pool->th;
 }

 static void dr_icm_pool_clear_hot_chunks_arr(struct mlx5dr_icm_pool *pool)
···
 struct mlx5dr_icm_pool *mlx5dr_icm_pool_create(struct mlx5dr_domain *dmn,
					       enum mlx5dr_icm_type icm_type)
 {
-	u32 num_of_chunks, entry_size, max_hot_size;
+	u32 num_of_chunks, entry_size;
	struct mlx5dr_icm_pool *pool;
+	u32 max_hot_size = 0;

	pool = kvzalloc(sizeof(*pool), GFP_KERNEL);
	if (!pool)
···
	switch (icm_type) {
	case DR_ICM_TYPE_STE:
		pool->max_log_chunk_sz = dmn->info.max_log_sw_icm_sz;
+		max_hot_size = mlx5dr_icm_pool_chunk_size_to_byte(pool->max_log_chunk_sz,
+								  pool->icm_type) *
+			       DR_ICM_POOL_STE_HOT_MEM_PERCENT / 100;
		break;
	case DR_ICM_TYPE_MODIFY_ACTION:
		pool->max_log_chunk_sz = dmn->info.max_log_action_icm_sz;
+		max_hot_size = mlx5dr_icm_pool_chunk_size_to_byte(pool->max_log_chunk_sz,
+								  pool->icm_type) *
+			       DR_ICM_POOL_MODIFY_ACTION_HOT_MEM_PERCENT / 100;
		break;
	case DR_ICM_TYPE_MODIFY_HDR_PTRN:
		pool->max_log_chunk_sz = dmn->info.max_log_modify_hdr_pattern_icm_sz;
+		max_hot_size = mlx5dr_icm_pool_chunk_size_to_byte(pool->max_log_chunk_sz,
+								  pool->icm_type) *
+			       DR_ICM_POOL_MODIFY_HDR_PTRN_HOT_MEM_PERCENT / 100;
		break;
	default:
		WARN_ON(icm_type);
···
	entry_size = mlx5dr_icm_pool_dm_type_to_entry_size(pool->icm_type);

-	max_hot_size = mlx5dr_icm_pool_chunk_size_to_byte(pool->max_log_chunk_sz,
-							  pool->icm_type) /
-		       DR_ICM_POOL_HOT_MEMORY_FRACTION;
-
	num_of_chunks = DIV_ROUND_UP(max_hot_size, entry_size) + 1;
+	pool->th = max_hot_size;

	pool->hot_chunks_arr = kvcalloc(num_of_chunks,
					sizeof(struct mlx5dr_icm_hot_chunk),
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h (+3)

···
	DR_ICM_TYPE_STE,
	DR_ICM_TYPE_MODIFY_ACTION,
	DR_ICM_TYPE_MODIFY_HDR_PTRN,
+	DR_ICM_TYPE_MAX,
 };

 static inline enum mlx5dr_icm_chunk_size
···
	struct list_head dbg_tbl_list;
	struct mlx5dr_dbg_dump_info dump_info;
	struct xarray definers_xa;
+	/* memory management statistics */
+	u32 num_buddies[DR_ICM_TYPE_MAX];
 };

 struct mlx5dr_table_rx_tx {
include/linux/mlx5/driver.h (+1)

···
	struct work_struct report_work;
	struct devlink_health_reporter *fw_reporter;
	struct devlink_health_reporter *fw_fatal_reporter;
+	struct devlink_health_reporter *vnic_reporter;
	struct delayed_work update_fw_log_ts_work;
 };
include/linux/mlx5/mlx5_ifc.h (+1 -1)

···
	MLX5_SET_HCA_CAP_OP_MOD_ATOMIC = 0x3,
	MLX5_SET_HCA_CAP_OP_MOD_ROCE = 0x4,
	MLX5_SET_HCA_CAP_OP_MOD_GENERAL_DEVICE2 = 0x20,
-	MLX5_SET_HCA_CAP_OP_MODE_PORT_SELECTION = 0x25,
+	MLX5_SET_HCA_CAP_OP_MOD_PORT_SELECTION = 0x25,
 };

 enum {