Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mlx5-updates-2023-08-14' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2023-08-14

1) Handle PTP out of order CQEs issue
2) Check FW status before determining reset successful
3) Expose maximum supported SFs via devlink resource
4) MISC cleanups

* tag 'mlx5-updates-2023-08-14' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
net/mlx5: Don't query MAX caps twice
net/mlx5: Remove unused MAX HCA capabilities
net/mlx5: Remove unused CAPs
net/mlx5: Fix error message in mlx5_sf_dev_state_change_handler()
net/mlx5: Remove redundant check of mlx5_vhca_event_supported()
net/mlx5: Use mlx5_sf_start_function_id() helper instead of directly calling MLX5_CAP_GEN()
net/mlx5: Remove redundant SF supported check from mlx5_sf_hw_table_init()
net/mlx5: Use auxiliary_device_uninit() instead of device_put()
net/mlx5: E-switch, Add checking for flow rule destinations
net/mlx5: Check with FW that sync reset completed successfully
net/mlx5: Expose max possible SFs via devlink resource
net/mlx5e: Add recovery flow for tx devlink health reporter for unhealthy PTP SQ
net/mlx5e: Make tx_port_ts logic resilient to out-of-order CQEs
net/mlx5: Consolidate devlink documentation in devlink/mlx5.rst
====================

Link: https://lore.kernel.org/r/20230814214144.159464-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
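The core of item 1 (out-of-order PTP CQE handling) is a switch from a strict FIFO of in-flight skbs to a pool of metadata IDs: the xmit path reserves a free ID and files the packet under it, and a completion carries its ID back, so completions can be matched even when they arrive out of order. Below is a simplified userspace model of that idea, not the kernel code; all names (`track_packet`, `complete_packet`, `POOL_SIZE`) are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model: a ring of free metadata IDs plus a map from
 * metadata ID to the packet filed under it (-1 = empty slot). */
#define POOL_SIZE 8

static unsigned char freelist[POOL_SIZE];     /* ring buffer of free IDs */
static unsigned int freelist_pc, freelist_cc; /* producer/consumer counters */
static int map[POOL_SIZE];                    /* metadata ID -> packet tag */

static void pool_init(void)
{
    for (int i = 0; i < POOL_SIZE; i++) {
        freelist[i] = (unsigned char)i;
        map[i] = -1;
    }
    freelist_pc = POOL_SIZE; /* all IDs start free */
    freelist_cc = 0;
}

/* xmit side: reserve a free ID and file the packet under it */
static int track_packet(int pkt_tag)
{
    if (freelist_pc == freelist_cc)
        return -1; /* no free IDs: the real driver would stop the queue */
    int id = freelist[freelist_cc++ % POOL_SIZE];
    map[id] = pkt_tag;
    return id;
}

/* completion side: IDs may come back in any order */
static int complete_packet(int id)
{
    int pkt_tag = map[id];
    map[id] = -1;
    freelist[freelist_pc++ % POOL_SIZE] = (unsigned char)id; /* recycle ID */
    return pkt_tag;
}
```

With a FIFO, a completion for the third packet before the first would desynchronize the queue; with the ID map it simply resolves to the right packet.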

+677 -574
+6
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst
···
683 683          time protocol.
684 684        - Error
685 685
    686 +   * - `ptp_cq[i]_late_cqe`
    687 +     - Number of times a CQE has been delivered on the PTP timestamping CQ when
    688 +       the CQE was not expected since a certain amount of time had elapsed where
    689 +       the device typically ensures not posting the CQE.
    690 +     - Error
    691 +
686 692  .. [#ring_global] The corresponding ring and global counters do not share the
687 693     same name (i.e. do not follow the common naming scheme).
688 694
-313
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/devlink.rst
···
  1 - .. SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
  2 - .. include:: <isonum.txt>
  3 -
  4 - =======
  5 - Devlink
  6 - =======
  7 -
  8 - :Copyright: |copy| 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  9 -
 10 - Contents
 11 - ========
 12 -
 13 - - `Info`_
 14 - - `Parameters`_
 15 - - `Health reporters`_
 16 -
 17 - Info
 18 - ====
 19 -
 20 - The devlink info reports the running and stored firmware versions on device.
 21 - It also prints the device PSID which represents the HCA board type ID.
 22 -
 23 - User command example::
 24 -
 25 -     $ devlink dev info pci/0000:00:06.0
 26 -     pci/0000:00:06.0:
 27 -       driver mlx5_core
 28 -       versions:
 29 -           fixed:
 30 -             fw.psid MT_0000000009
 31 -           running:
 32 -             fw.version 16.26.0100
 33 -           stored:
 34 -             fw.version 16.26.0100
 35 -
 36 - Parameters
 37 - ==========
 38 -
 39 - flow_steering_mode: Device flow steering mode
 40 - ---------------------------------------------
 41 - The flow steering mode parameter controls the flow steering mode of the driver.
 42 - Two modes are supported:
 43 -
 44 - 1. 'dmfs' - Device managed flow steering.
 45 - 2. 'smfs' - Software/Driver managed flow steering.
 46 -
 47 - In DMFS mode, the HW steering entities are created and managed through the
 48 - Firmware.
 49 - In SMFS mode, the HW steering entities are created and managed though by
 50 - the driver directly into hardware without firmware intervention.
 51 -
 52 - SMFS mode is faster and provides better rule insertion rate compared to default DMFS mode.
 53 -
 54 - User command examples:
 55 -
 56 - - Set SMFS flow steering mode::
 57 -
 58 -     $ devlink dev param set pci/0000:06:00.0 name flow_steering_mode value "smfs" cmode runtime
 59 -
 60 - - Read device flow steering mode::
 61 -
 62 -     $ devlink dev param show pci/0000:06:00.0 name flow_steering_mode
 63 -     pci/0000:06:00.0:
 64 -       name flow_steering_mode type driver-specific
 65 -         values:
 66 -           cmode runtime value smfs
 67 -
 68 - enable_roce: RoCE enablement state
 69 - ----------------------------------
 70 - If the device supports RoCE disablement, RoCE enablement state controls device
 71 - support for RoCE capability. Otherwise, the control occurs in the driver stack.
 72 - When RoCE is disabled at the driver level, only raw ethernet QPs are supported.
 73 -
 74 - To change RoCE enablement state, a user must change the driverinit cmode value
 75 - and run devlink reload.
 76 -
 77 - User command examples:
 78 -
 79 - - Disable RoCE::
 80 -
 81 -     $ devlink dev param set pci/0000:06:00.0 name enable_roce value false cmode driverinit
 82 -     $ devlink dev reload pci/0000:06:00.0
 83 -
 84 - - Read RoCE enablement state::
 85 -
 86 -     $ devlink dev param show pci/0000:06:00.0 name enable_roce
 87 -     pci/0000:06:00.0:
 88 -       name enable_roce type generic
 89 -         values:
 90 -           cmode driverinit value true
 91 -
 92 - esw_port_metadata: Eswitch port metadata state
 93 - ----------------------------------------------
 94 - When applicable, disabling eswitch metadata can increase packet rate
 95 - up to 20% depending on the use case and packet sizes.
 96 -
 97 - Eswitch port metadata state controls whether to internally tag packets with
 98 - metadata. Metadata tagging must be enabled for multi-port RoCE, failover
 99 - between representors and stacked devices.
100 - By default metadata is enabled on the supported devices in E-switch.
101 - Metadata is applicable only for E-switch in switchdev mode and
102 - users may disable it when NONE of the below use cases will be in use:
103 -
104 - 1. HCA is in Dual/multi-port RoCE mode.
105 - 2. VF/SF representor bonding (Usually used for Live migration)
106 - 3. Stacked devices
107 -
108 - When metadata is disabled, the above use cases will fail to initialize if
109 - users try to enable them.
110 -
111 - - Show eswitch port metadata::
112 -
113 -     $ devlink dev param show pci/0000:06:00.0 name esw_port_metadata
114 -     pci/0000:06:00.0:
115 -       name esw_port_metadata type driver-specific
116 -         values:
117 -           cmode runtime value true
118 -
119 - - Disable eswitch port metadata::
120 -
121 -     $ devlink dev param set pci/0000:06:00.0 name esw_port_metadata value false cmode runtime
122 -
123 - - Change eswitch mode to switchdev mode where after choosing the metadata value::
124 -
125 -     $ devlink dev eswitch set pci/0000:06:00.0 mode switchdev
126 -
127 - hairpin_num_queues: Number of hairpin queues
128 - --------------------------------------------
129 - We refer to a TC NIC rule that involves forwarding as "hairpin".
130 -
131 - Hairpin queues are mlx5 hardware specific implementation for hardware
132 - forwarding of such packets.
133 -
134 - - Show the number of hairpin queues::
135 -
136 -     $ devlink dev param show pci/0000:06:00.0 name hairpin_num_queues
137 -     pci/0000:06:00.0:
138 -       name hairpin_num_queues type driver-specific
139 -         values:
140 -           cmode driverinit value 2
141 -
142 - - Change the number of hairpin queues::
143 -
144 -     $ devlink dev param set pci/0000:06:00.0 name hairpin_num_queues value 4 cmode driverinit
145 -
146 - hairpin_queue_size: Size of the hairpin queues
147 - ----------------------------------------------
148 - Control the size of the hairpin queues.
149 -
150 - - Show the size of the hairpin queues::
151 -
152 -     $ devlink dev param show pci/0000:06:00.0 name hairpin_queue_size
153 -     pci/0000:06:00.0:
154 -       name hairpin_queue_size type driver-specific
155 -         values:
156 -           cmode driverinit value 1024
157 -
158 - - Change the size (in packets) of the hairpin queues::
159 -
160 -     $ devlink dev param set pci/0000:06:00.0 name hairpin_queue_size value 512 cmode driverinit
161 -
162 - Health reporters
163 - ================
164 -
165 - tx reporter
166 - -----------
167 - The tx reporter is responsible for reporting and recovering of the following two error scenarios:
168 -
169 - - tx timeout
170 -     Report on kernel tx timeout detection.
171 -     Recover by searching lost interrupts.
172 - - tx error completion
173 -     Report on error tx completion.
174 -     Recover by flushing the tx queue and reset it.
175 -
176 - tx reporter also support on demand diagnose callback, on which it provides
177 - real time information of its send queues status.
178 -
179 - User commands examples:
180 -
181 - - Diagnose send queues status::
182 -
183 -     $ devlink health diagnose pci/0000:82:00.0 reporter tx
184 -
185 - .. note::
186 -    This command has valid output only when interface is up, otherwise the command has empty output.
187 -
188 - - Show number of tx errors indicated, number of recover flows ended successfully,
189 -   is autorecover enabled and graceful period from last recover::
190 -
191 -     $ devlink health show pci/0000:82:00.0 reporter tx
192 -
193 - rx reporter
194 - -----------
195 - The rx reporter is responsible for reporting and recovering of the following two error scenarios:
196 -
197 - - rx queues' initialization (population) timeout
198 -     Population of rx queues' descriptors on ring initialization is done
199 -     in napi context via triggering an irq. In case of a failure to get
200 -     the minimum amount of descriptors, a timeout would occur, and
201 -     descriptors could be recovered by polling the EQ (Event Queue).
202 - - rx completions with errors (reported by HW on interrupt context)
203 -     Report on rx completion error.
204 -     Recover (if needed) by flushing the related queue and reset it.
205 -
206 - rx reporter also supports on demand diagnose callback, on which it
207 - provides real time information of its receive queues' status.
208 -
209 - - Diagnose rx queues' status and corresponding completion queue::
210 -
211 -     $ devlink health diagnose pci/0000:82:00.0 reporter rx
212 -
213 - NOTE: This command has valid output only when interface is up. Otherwise, the command has empty output.
214 -
215 - - Show number of rx errors indicated, number of recover flows ended successfully,
216 -   is autorecover enabled, and graceful period from last recover::
217 -
218 -     $ devlink health show pci/0000:82:00.0 reporter rx
219 -
220 - fw reporter
221 - -----------
222 - The fw reporter implements `diagnose` and `dump` callbacks.
223 - It follows symptoms of fw error such as fw syndrome by triggering
224 - fw core dump and storing it into the dump buffer.
225 - The fw reporter diagnose command can be triggered any time by the user to check
226 - current fw status.
227 -
228 - User commands examples:
229 -
230 - - Check fw heath status::
231 -
232 -     $ devlink health diagnose pci/0000:82:00.0 reporter fw
233 -
234 - - Read FW core dump if already stored or trigger new one::
235 -
236 -     $ devlink health dump show pci/0000:82:00.0 reporter fw
237 -
238 - .. note::
239 -    This command can run only on the PF which has fw tracer ownership,
240 -    running it on other PF or any VF will return "Operation not permitted".
241 -
242 - fw fatal reporter
243 - -----------------
244 - The fw fatal reporter implements `dump` and `recover` callbacks.
245 - It follows fatal errors indications by CR-space dump and recover flow.
246 - The CR-space dump uses vsc interface which is valid even if the FW command
247 - interface is not functional, which is the case in most FW fatal errors.
248 - The recover function runs recover flow which reloads the driver and triggers fw
249 - reset if needed.
250 - On firmware error, the health buffer is dumped into the dmesg. The log
251 - level is derived from the error's severity (given in health buffer).
252 -
253 - User commands examples:
254 -
255 - - Run fw recover flow manually::
256 -
257 -     $ devlink health recover pci/0000:82:00.0 reporter fw_fatal
258 -
259 - - Read FW CR-space dump if already stored or trigger new one::
260 -
261 -     $ devlink health dump show pci/0000:82:00.1 reporter fw_fatal
262 -
263 - .. note::
264 -    This command can run only on PF.
265 -
266 - vnic reporter
267 - -------------
268 - The vnic reporter implements only the `diagnose` callback.
269 - It is responsible for querying the vnic diagnostic counters from fw and displaying
270 - them in realtime.
271 -
272 - Description of the vnic counters:
273 -
274 - - total_q_under_processor_handle
275 -   number of queues in an error state due to
276 -   an async error or errored command.
277 - - send_queue_priority_update_flow
278 -   number of QP/SQ priority/SL update events.
279 - - cq_overrun
280 -   number of times CQ entered an error state due to an overflow.
281 - - async_eq_overrun
282 -   number of times an EQ mapped to async events was overrun.
283 -   comp_eq_overrun number of times an EQ mapped to completion events was
284 -   overrun.
285 - - quota_exceeded_command
286 -   number of commands issued and failed due to quota exceeded.
287 - - invalid_command
288 -   number of commands issued and failed dues to any reason other than quota
289 -   exceeded.
290 - - nic_receive_steering_discard
291 -   number of packets that completed RX flow
292 -   steering but were discarded due to a mismatch in flow table.
293 - - generated_pkt_steering_fail
294 -   number of packets generated by the VNIC experiencing unexpected steering
295 -   failure (at any point in steering flow).
296 - - handled_pkt_steering_fail
297 -   number of packets handled by the VNIC experiencing unexpected steering
298 -   failure (at any point in steering flow owned by the VNIC, including the FDB
299 -   for the eswitch owner).
300 -
301 - User commands examples:
302 -
303 - - Diagnose PF/VF vnic counters::
304 -
305 -     $ devlink health diagnose pci/0000:82:00.1 reporter vnic
306 -
307 - - Diagnose representor vnic counters (performed by supplying devlink port of the
308 -   representor, which can be obtained via devlink port command)::
309 -
310 -     $ devlink health diagnose pci/0000:82:00.1/65537 reporter vnic
311 -
312 - .. note::
313 -    This command can run over all interfaces such as PF/VF and representor ports.
-1
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/index.rst
···
13 13    :maxdepth: 2
14 14
15 15    kconfig
16    -   devlink
17 16    switchdev
18 17    tracepoints
19 18    counters
+182
Documentation/networking/devlink/mlx5.rst
···
 18  18  * - ``enable_roce``
 19  19    - driverinit
 20  20    - Type: Boolean
     21 +
     22 +      If the device supports RoCE disablement, RoCE enablement state controls
     23 +      device support for RoCE capability. Otherwise, the control occurs in the
     24 +      driver stack. When RoCE is disabled at the driver level, only raw
     25 +      ethernet QPs are supported.
 21  26  * - ``io_eq_size``
 22  27    - driverinit
 23  28    - The range is between 64 and 4096.
···
 53  48      * ``smfs`` Software managed flow steering. In SMFS mode, the HW
 54  49        steering entities are created and manage through the driver without
 55  50        firmware intervention.
     51 +
     52 +      SMFS mode is faster and provides better rule insertion rate compared to
     53 +      default DMFS mode.
 56  54  * - ``fdb_large_groups``
 57  55    - u32
 58  56    - driverinit
···
 79  71        deprecated.
 80  72
 81  73        Default: disabled
     74 +  * - ``esw_port_metadata``
     75 +    - Boolean
     76 +    - runtime
     77 +    - When applicable, disabling eswitch metadata can increase packet rate up
     78 +      to 20% depending on the use case and packet sizes.
 82  79
     80 +      Eswitch port metadata state controls whether to internally tag packets
     81 +      with metadata. Metadata tagging must be enabled for multi-port RoCE,
     82 +      failover between representors and stacked devices. By default metadata is
     83 +      enabled on the supported devices in E-switch. Metadata is applicable only
     84 +      for E-switch in switchdev mode and users may disable it when NONE of the
     85 +      below use cases will be in use:
     86 +      1. HCA is in Dual/multi-port RoCE mode.
     87 +      2. VF/SF representor bonding (Usually used for Live migration)
     88 +      3. Stacked devices
     89 +
     90 +      When metadata is disabled, the above use cases will fail to initialize if
     91 +      users try to enable them.
 83  92  * - ``hairpin_num_queues``
 84  93    - u32
 85  94    - driverinit
···
129 104  * - ``fw.version``
130 105    - stored, running
131 106    - Three digit major.minor.subminor firmware version number.
    107 +
    108 + Health reporters
    109 + ================
    110 +
    111 + tx reporter
    112 + -----------
    113 + The tx reporter is responsible for reporting and recovering of the following three error scenarios:
    114 +
    115 + - tx timeout
    116 +     Report on kernel tx timeout detection.
    117 +     Recover by searching lost interrupts.
    118 + - tx error completion
    119 +     Report on error tx completion.
    120 +     Recover by flushing the tx queue and reset it.
    121 + - tx PTP port timestamping CQ unhealthy
    122 +     Report too many CQEs never delivered on port ts CQ.
    123 +     Recover by flushing and re-creating all PTP channels.
    124 +
    125 + tx reporter also support on demand diagnose callback, on which it provides
    126 + real time information of its send queues status.
    127 +
    128 + User commands examples:
    129 +
    130 + - Diagnose send queues status::
    131 +
    132 +     $ devlink health diagnose pci/0000:82:00.0 reporter tx
    133 +
    134 + .. note::
    135 +    This command has valid output only when interface is up, otherwise the command has empty output.
    136 +
    137 + - Show number of tx errors indicated, number of recover flows ended successfully,
    138 +   is autorecover enabled and graceful period from last recover::
    139 +
    140 +     $ devlink health show pci/0000:82:00.0 reporter tx
    141 +
    142 + rx reporter
    143 + -----------
    144 + The rx reporter is responsible for reporting and recovering of the following two error scenarios:
    145 +
    146 + - rx queues' initialization (population) timeout
    147 +     Population of rx queues' descriptors on ring initialization is done
    148 +     in napi context via triggering an irq. In case of a failure to get
    149 +     the minimum amount of descriptors, a timeout would occur, and
    150 +     descriptors could be recovered by polling the EQ (Event Queue).
    151 + - rx completions with errors (reported by HW on interrupt context)
    152 +     Report on rx completion error.
    153 +     Recover (if needed) by flushing the related queue and reset it.
    154 +
    155 + rx reporter also supports on demand diagnose callback, on which it
    156 + provides real time information of its receive queues' status.
    157 +
    158 + - Diagnose rx queues' status and corresponding completion queue::
    159 +
    160 +     $ devlink health diagnose pci/0000:82:00.0 reporter rx
    161 +
    162 + .. note::
    163 +    This command has valid output only when interface is up. Otherwise, the command has empty output.
    164 +
    165 + - Show number of rx errors indicated, number of recover flows ended successfully,
    166 +   is autorecover enabled, and graceful period from last recover::
    167 +
    168 +     $ devlink health show pci/0000:82:00.0 reporter rx
    169 +
    170 + fw reporter
    171 + -----------
    172 + The fw reporter implements `diagnose` and `dump` callbacks.
    173 + It follows symptoms of fw error such as fw syndrome by triggering
    174 + fw core dump and storing it into the dump buffer.
    175 + The fw reporter diagnose command can be triggered any time by the user to check
    176 + current fw status.
    177 +
    178 + User commands examples:
    179 +
    180 + - Check fw heath status::
    181 +
    182 +     $ devlink health diagnose pci/0000:82:00.0 reporter fw
    183 +
    184 + - Read FW core dump if already stored or trigger new one::
    185 +
    186 +     $ devlink health dump show pci/0000:82:00.0 reporter fw
    187 +
    188 + .. note::
    189 +    This command can run only on the PF which has fw tracer ownership,
    190 +    running it on other PF or any VF will return "Operation not permitted".
    191 +
    192 + fw fatal reporter
    193 + -----------------
    194 + The fw fatal reporter implements `dump` and `recover` callbacks.
    195 + It follows fatal errors indications by CR-space dump and recover flow.
    196 + The CR-space dump uses vsc interface which is valid even if the FW command
    197 + interface is not functional, which is the case in most FW fatal errors.
    198 + The recover function runs recover flow which reloads the driver and triggers fw
    199 + reset if needed.
    200 + On firmware error, the health buffer is dumped into the dmesg. The log
    201 + level is derived from the error's severity (given in health buffer).
    202 +
    203 + User commands examples:
    204 +
    205 + - Run fw recover flow manually::
    206 +
    207 +     $ devlink health recover pci/0000:82:00.0 reporter fw_fatal
    208 +
    209 + - Read FW CR-space dump if already stored or trigger new one::
    210 +
    211 +     $ devlink health dump show pci/0000:82:00.1 reporter fw_fatal
    212 +
    213 + .. note::
    214 +    This command can run only on PF.
    215 +
    216 + vnic reporter
    217 + -------------
    218 + The vnic reporter implements only the `diagnose` callback.
    219 + It is responsible for querying the vnic diagnostic counters from fw and displaying
    220 + them in realtime.
    221 +
    222 + Description of the vnic counters:
    223 +
    224 + - total_q_under_processor_handle
    225 +   number of queues in an error state due to
    226 +   an async error or errored command.
    227 + - send_queue_priority_update_flow
    228 +   number of QP/SQ priority/SL update events.
    229 + - cq_overrun
    230 +   number of times CQ entered an error state due to an overflow.
    231 + - async_eq_overrun
    232 +   number of times an EQ mapped to async events was overrun.
    233 +   comp_eq_overrun number of times an EQ mapped to completion events was
    234 +   overrun.
    235 + - quota_exceeded_command
    236 +   number of commands issued and failed due to quota exceeded.
    237 + - invalid_command
    238 +   number of commands issued and failed dues to any reason other than quota
    239 +   exceeded.
    240 + - nic_receive_steering_discard
    241 +   number of packets that completed RX flow
    242 +   steering but were discarded due to a mismatch in flow table.
    243 + - generated_pkt_steering_fail
    244 +   number of packets generated by the VNIC experiencing unexpected steering
    245 +   failure (at any point in steering flow).
    246 + - handled_pkt_steering_fail
    247 +   number of packets handled by the VNIC experiencing unexpected steering
    248 +   failure (at any point in steering flow owned by the VNIC, including the FDB
    249 +   for the eswitch owner).
    250 +
    251 + User commands examples:
    252 +
    253 + - Diagnose PF/VF vnic counters::
    254 +
    255 +     $ devlink health diagnose pci/0000:82:00.1 reporter vnic
    256 +
    257 + - Diagnose representor vnic counters (performed by supplying devlink port of the
    258 +   representor, which can be obtained via devlink port command)::
    259 +
    260 +     $ devlink health diagnose pci/0000:82:00.1/65537 reporter vnic
    261 +
    262 + .. note::
    263 +    This command can run over all interfaces such as PF/VF and representor ports.
+3
drivers/net/ethernet/mellanox/mlx5/core/devlink.c
···
212 212         /* On fw_activate action, also driver is reloaded and reinit performed */
213 213         *actions_performed |= BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT);
214 214         ret = mlx5_load_one_devl_locked(dev, true);
    215 +       if (ret)
    216 +               return ret;
    217 +       ret = mlx5_fw_reset_verify_fw_complete(dev, extack);
215 218         break;
216 219 default:
217 220         /* Unsupported action should not get to this function */
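The devlink.c change above makes reload report failure unless the firmware itself confirms the sync reset completed, rather than treating a successful driver reload as proof. A minimal userspace sketch of that control flow, assuming invented stand-ins (`struct dev_state`, `load_device`, `fw_sync_reset_done`) for the real mlx5 helpers:

```c
#include <errno.h>
#include <stdbool.h>

/* Toy device state standing in for struct mlx5_core_dev */
struct dev_state {
    bool load_ok;       /* driver reload would succeed */
    bool fw_reset_done; /* firmware reports sync reset completed */
};

/* Stand-in for mlx5_load_one_devl_locked() */
static int load_device(struct dev_state *dev)
{
    return dev->load_ok ? 0 : -EIO;
}

/* Stand-in for mlx5_fw_reset_verify_fw_complete() */
static bool fw_sync_reset_done(struct dev_state *dev)
{
    return dev->fw_reset_done;
}

static int reload_fw_activate(struct dev_state *dev)
{
    int ret = load_device(dev);
    if (ret)
        return ret;
    /* The new step: only report success once FW confirms the reset */
    if (!fw_sync_reset_done(dev))
        return -EIO;
    return 0;
}
```

Before this series, the equivalent of `reload_fw_activate()` returned as soon as the reload itself succeeded, so a firmware that silently failed its sync reset went unreported.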
+8
drivers/net/ethernet/mellanox/mlx5/core/devlink.h
···
 6  6
 7  7 #include <net/devlink.h>
 8  8
    9 + enum mlx5_devlink_resource_id {
   10 +         MLX5_DL_RES_MAX_LOCAL_SFS = 1,
   11 +         MLX5_DL_RES_MAX_EXTERNAL_SFS,
   12 +
   13 +         __MLX5_ID_RES_MAX,
   14 +         MLX5_ID_RES_MAX = __MLX5_ID_RES_MAX - 1,
   15 + };
   16 +
 9 17 enum mlx5_devlink_param_id {
10 18         MLX5_DEVLINK_PARAM_ID_BASE = DEVLINK_PARAM_GENERIC_ID_MAX,
11 19         MLX5_DEVLINK_PARAM_ID_FLOW_STEERING_MODE,
+1
drivers/net/ethernet/mellanox/mlx5/core/en/health.h
···
18 18 void mlx5e_reporter_tx_destroy(struct mlx5e_priv *priv);
19 19 void mlx5e_reporter_tx_err_cqe(struct mlx5e_txqsq *sq);
20 20 int mlx5e_reporter_tx_timeout(struct mlx5e_txqsq *sq);
   21 + void mlx5e_reporter_tx_ptpsq_unhealthy(struct mlx5e_ptpsq *ptpsq);
21 22
22 23 int mlx5e_health_cq_diag_fmsg(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg);
23 24 int mlx5e_health_cq_common_diag_fmsg(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg);
+185 -62
drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
··· 2 2 // Copyright (c) 2020 Mellanox Technologies 3 3 4 4 #include "en/ptp.h" 5 + #include "en/health.h" 5 6 #include "en/txrx.h" 6 7 #include "en/params.h" 7 8 #include "en/fs_tt_redirect.h" 9 + #include <linux/list.h> 10 + #include <linux/spinlock.h> 8 11 9 12 struct mlx5e_ptp_fs { 10 13 struct mlx5_flow_handle *l2_rule; ··· 21 18 struct mlx5e_sq_param txq_sq_param; 22 19 struct mlx5e_rq_param rq_param; 23 20 }; 21 + 22 + struct mlx5e_ptp_port_ts_cqe_tracker { 23 + u8 metadata_id; 24 + bool inuse : 1; 25 + struct list_head entry; 26 + }; 27 + 28 + struct mlx5e_ptp_port_ts_cqe_list { 29 + struct mlx5e_ptp_port_ts_cqe_tracker *nodes; 30 + struct list_head tracker_list_head; 31 + /* Sync list operations in xmit and napi_poll contexts */ 32 + spinlock_t tracker_list_lock; 33 + }; 34 + 35 + static inline void 36 + mlx5e_ptp_port_ts_cqe_list_add(struct mlx5e_ptp_port_ts_cqe_list *list, u8 metadata) 37 + { 38 + struct mlx5e_ptp_port_ts_cqe_tracker *tracker = &list->nodes[metadata]; 39 + 40 + WARN_ON_ONCE(tracker->inuse); 41 + tracker->inuse = true; 42 + spin_lock(&list->tracker_list_lock); 43 + list_add_tail(&tracker->entry, &list->tracker_list_head); 44 + spin_unlock(&list->tracker_list_lock); 45 + } 46 + 47 + static void 48 + mlx5e_ptp_port_ts_cqe_list_remove(struct mlx5e_ptp_port_ts_cqe_list *list, u8 metadata) 49 + { 50 + struct mlx5e_ptp_port_ts_cqe_tracker *tracker = &list->nodes[metadata]; 51 + 52 + WARN_ON_ONCE(!tracker->inuse); 53 + tracker->inuse = false; 54 + spin_lock(&list->tracker_list_lock); 55 + list_del(&tracker->entry); 56 + spin_unlock(&list->tracker_list_lock); 57 + } 58 + 59 + void mlx5e_ptpsq_track_metadata(struct mlx5e_ptpsq *ptpsq, u8 metadata) 60 + { 61 + mlx5e_ptp_port_ts_cqe_list_add(ptpsq->ts_cqe_pending_list, metadata); 62 + } 24 63 25 64 struct mlx5e_skb_cb_hwtstamp { 26 65 ktime_t cqe_hwtstamp; ··· 124 79 memset(skb->cb, 0, sizeof(struct mlx5e_skb_cb_hwtstamp)); 125 80 } 126 81 127 - #define PTP_WQE_CTR2IDX(val) ((val) & 
ptpsq->ts_cqe_ctr_mask) 128 - 129 - static bool mlx5e_ptp_ts_cqe_drop(struct mlx5e_ptpsq *ptpsq, u16 skb_ci, u16 skb_id) 82 + static struct sk_buff * 83 + mlx5e_ptp_metadata_map_lookup(struct mlx5e_ptp_metadata_map *map, u16 metadata) 130 84 { 131 - return (ptpsq->ts_cqe_ctr_mask && (skb_ci != skb_id)); 85 + return map->data[metadata]; 132 86 } 133 87 134 - static bool mlx5e_ptp_ts_cqe_ooo(struct mlx5e_ptpsq *ptpsq, u16 skb_id) 88 + static struct sk_buff * 89 + mlx5e_ptp_metadata_map_remove(struct mlx5e_ptp_metadata_map *map, u16 metadata) 135 90 { 136 - u16 skb_ci = PTP_WQE_CTR2IDX(ptpsq->skb_fifo_cc); 137 - u16 skb_pi = PTP_WQE_CTR2IDX(ptpsq->skb_fifo_pc); 138 - 139 - if (PTP_WQE_CTR2IDX(skb_id - skb_ci) >= PTP_WQE_CTR2IDX(skb_pi - skb_ci)) 140 - return true; 141 - 142 - return false; 143 - } 144 - 145 - static void mlx5e_ptp_skb_fifo_ts_cqe_resync(struct mlx5e_ptpsq *ptpsq, u16 skb_ci, 146 - u16 skb_id, int budget) 147 - { 148 - struct skb_shared_hwtstamps hwts = {}; 149 91 struct sk_buff *skb; 150 92 151 - ptpsq->cq_stats->resync_event++; 93 + skb = map->data[metadata]; 94 + map->data[metadata] = NULL; 152 95 153 - while (skb_ci != skb_id) { 154 - skb = mlx5e_skb_fifo_pop(&ptpsq->skb_fifo); 155 - hwts.hwtstamp = mlx5e_skb_cb_get_hwts(skb)->cqe_hwtstamp; 156 - skb_tstamp_tx(skb, &hwts); 157 - ptpsq->cq_stats->resync_cqe++; 158 - napi_consume_skb(skb, budget); 159 - skb_ci = PTP_WQE_CTR2IDX(ptpsq->skb_fifo_cc); 160 - } 96 + return skb; 161 97 } 98 + 99 + static bool mlx5e_ptp_metadata_map_unhealthy(struct mlx5e_ptp_metadata_map *map) 100 + { 101 + /* Considered beginning unhealthy state if size * 15 / 2^4 cannot be reclaimed. 
*/ 102 + return map->undelivered_counter > (map->capacity >> 4) * 15; 103 + } 104 + 105 + static void mlx5e_ptpsq_mark_ts_cqes_undelivered(struct mlx5e_ptpsq *ptpsq, 106 + ktime_t port_tstamp) 107 + { 108 + struct mlx5e_ptp_port_ts_cqe_list *cqe_list = ptpsq->ts_cqe_pending_list; 109 + ktime_t timeout = ns_to_ktime(MLX5E_PTP_TS_CQE_UNDELIVERED_TIMEOUT); 110 + struct mlx5e_ptp_metadata_map *metadata_map = &ptpsq->metadata_map; 111 + struct mlx5e_ptp_port_ts_cqe_tracker *pos, *n; 112 + 113 + spin_lock(&cqe_list->tracker_list_lock); 114 + list_for_each_entry_safe(pos, n, &cqe_list->tracker_list_head, entry) { 115 + struct sk_buff *skb = 116 + mlx5e_ptp_metadata_map_lookup(metadata_map, pos->metadata_id); 117 + ktime_t dma_tstamp = mlx5e_skb_cb_get_hwts(skb)->cqe_hwtstamp; 118 + 119 + if (!dma_tstamp || 120 + ktime_after(ktime_add(dma_tstamp, timeout), port_tstamp)) 121 + break; 122 + 123 + metadata_map->undelivered_counter++; 124 + WARN_ON_ONCE(!pos->inuse); 125 + pos->inuse = false; 126 + list_del(&pos->entry); 127 + } 128 + spin_unlock(&cqe_list->tracker_list_lock); 129 + } 130 + 131 + #define PTP_WQE_CTR2IDX(val) ((val) & ptpsq->ts_cqe_ctr_mask) 162 132 163 133 static void mlx5e_ptp_handle_ts_cqe(struct mlx5e_ptpsq *ptpsq, 164 134 struct mlx5_cqe64 *cqe, 165 135 int budget) 166 136 { 167 - u16 skb_id = PTP_WQE_CTR2IDX(be16_to_cpu(cqe->wqe_counter)); 168 - u16 skb_ci = PTP_WQE_CTR2IDX(ptpsq->skb_fifo_cc); 137 + struct mlx5e_ptp_port_ts_cqe_list *pending_cqe_list = ptpsq->ts_cqe_pending_list; 138 + u8 metadata_id = PTP_WQE_CTR2IDX(be16_to_cpu(cqe->wqe_counter)); 139 + bool is_err_cqe = !!MLX5E_RX_ERR_CQE(cqe); 169 140 struct mlx5e_txqsq *sq = &ptpsq->txqsq; 170 141 struct sk_buff *skb; 171 142 ktime_t hwtstamp; 172 143 173 - if (unlikely(MLX5E_RX_ERR_CQE(cqe))) { 174 - skb = mlx5e_skb_fifo_pop(&ptpsq->skb_fifo); 144 + if (likely(pending_cqe_list->nodes[metadata_id].inuse)) { 145 + mlx5e_ptp_port_ts_cqe_list_remove(pending_cqe_list, metadata_id); 146 + } else { 147 + 
/* Reclaim space in the unlikely event CQE was delivered after 148 + * marking it late. 149 + */ 150 + ptpsq->metadata_map.undelivered_counter--; 151 + ptpsq->cq_stats->late_cqe++; 152 + } 153 + 154 + skb = mlx5e_ptp_metadata_map_remove(&ptpsq->metadata_map, metadata_id); 155 + 156 + if (unlikely(is_err_cqe)) { 175 157 ptpsq->cq_stats->err_cqe++; 176 158 goto out; 177 159 } 178 160 179 - if (mlx5e_ptp_ts_cqe_drop(ptpsq, skb_ci, skb_id)) { 180 - if (mlx5e_ptp_ts_cqe_ooo(ptpsq, skb_id)) { 181 - /* already handled by a previous resync */ 182 - ptpsq->cq_stats->ooo_cqe_drop++; 183 - return; 184 - } 185 - mlx5e_ptp_skb_fifo_ts_cqe_resync(ptpsq, skb_ci, skb_id, budget); 186 - } 187 - 188 - skb = mlx5e_skb_fifo_pop(&ptpsq->skb_fifo); 189 161 hwtstamp = mlx5e_cqe_ts_to_ns(sq->ptp_cyc2time, sq->clock, get_cqe_ts(cqe)); 190 162 mlx5e_skb_cb_hwtstamp_handler(skb, MLX5E_SKB_CB_PORT_HWTSTAMP, 191 163 hwtstamp, ptpsq->cq_stats); 192 164 ptpsq->cq_stats->cqe++; 193 165 166 + mlx5e_ptpsq_mark_ts_cqes_undelivered(ptpsq, hwtstamp); 194 167 out: 195 168 napi_consume_skb(skb, budget); 169 + mlx5e_ptp_metadata_fifo_push(&ptpsq->metadata_freelist, metadata_id); 170 + if (unlikely(mlx5e_ptp_metadata_map_unhealthy(&ptpsq->metadata_map)) && 171 + !test_and_set_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state)) 172 + queue_work(ptpsq->txqsq.priv->wq, &ptpsq->report_unhealthy_work); 196 173 } 197 174 198 175 static bool mlx5e_ptp_poll_ts_cq(struct mlx5e_cq *cq, int budget) ··· 358 291 359 292 static int mlx5e_ptp_alloc_traffic_db(struct mlx5e_ptpsq *ptpsq, int numa) 360 293 { 361 - int wq_sz = mlx5_wq_cyc_get_size(&ptpsq->txqsq.wq); 362 - struct mlx5_core_dev *mdev = ptpsq->txqsq.mdev; 294 + struct mlx5e_ptp_metadata_fifo *metadata_freelist = &ptpsq->metadata_freelist; 295 + struct mlx5e_ptp_metadata_map *metadata_map = &ptpsq->metadata_map; 296 + struct mlx5e_ptp_port_ts_cqe_list *cqe_list; 297 + int db_sz; 298 + int md; 363 299 364 - ptpsq->skb_fifo.fifo = kvzalloc_node(array_size(wq_sz, 
sizeof(*ptpsq->skb_fifo.fifo)), 365 - GFP_KERNEL, numa); 366 - if (!ptpsq->skb_fifo.fifo) 300 + cqe_list = kvzalloc_node(sizeof(*ptpsq->ts_cqe_pending_list), GFP_KERNEL, numa); 301 + if (!cqe_list) 367 302 return -ENOMEM; 303 + ptpsq->ts_cqe_pending_list = cqe_list; 368 304 369 - ptpsq->skb_fifo.pc = &ptpsq->skb_fifo_pc; 370 - ptpsq->skb_fifo.cc = &ptpsq->skb_fifo_cc; 371 - ptpsq->skb_fifo.mask = wq_sz - 1; 372 - if (MLX5_CAP_GEN_2(mdev, ts_cqe_metadata_size2wqe_counter)) 373 - ptpsq->ts_cqe_ctr_mask = 374 - (1 << MLX5_CAP_GEN_2(mdev, ts_cqe_metadata_size2wqe_counter)) - 1; 305 + db_sz = min_t(u32, mlx5_wq_cyc_get_size(&ptpsq->txqsq.wq), 306 + 1 << MLX5_CAP_GEN_2(ptpsq->txqsq.mdev, 307 + ts_cqe_metadata_size2wqe_counter)); 308 + ptpsq->ts_cqe_ctr_mask = db_sz - 1; 309 + 310 + cqe_list->nodes = kvzalloc_node(array_size(db_sz, sizeof(*cqe_list->nodes)), 311 + GFP_KERNEL, numa); 312 + if (!cqe_list->nodes) 313 + goto free_cqe_list; 314 + INIT_LIST_HEAD(&cqe_list->tracker_list_head); 315 + spin_lock_init(&cqe_list->tracker_list_lock); 316 + 317 + metadata_freelist->data = 318 + kvzalloc_node(array_size(db_sz, sizeof(*metadata_freelist->data)), 319 + GFP_KERNEL, numa); 320 + if (!metadata_freelist->data) 321 + goto free_cqe_list_nodes; 322 + metadata_freelist->mask = ptpsq->ts_cqe_ctr_mask; 323 + 324 + for (md = 0; md < db_sz; ++md) { 325 + cqe_list->nodes[md].metadata_id = md; 326 + metadata_freelist->data[md] = md; 327 + } 328 + metadata_freelist->pc = db_sz; 329 + 330 + metadata_map->data = 331 + kvzalloc_node(array_size(db_sz, sizeof(*metadata_map->data)), 332 + GFP_KERNEL, numa); 333 + if (!metadata_map->data) 334 + goto free_metadata_freelist; 335 + metadata_map->capacity = db_sz; 336 + 375 337 return 0; 338 + 339 + free_metadata_freelist: 340 + kvfree(metadata_freelist->data); 341 + free_cqe_list_nodes: 342 + kvfree(cqe_list->nodes); 343 + free_cqe_list: 344 + kvfree(cqe_list); 345 + return -ENOMEM; 376 346 } 377 347 378 - static void 
mlx5e_ptp_drain_skb_fifo(struct mlx5e_skb_fifo *skb_fifo) 348 + static void mlx5e_ptp_drain_metadata_map(struct mlx5e_ptp_metadata_map *map) 379 349 { 380 - while (*skb_fifo->pc != *skb_fifo->cc) { 381 - struct sk_buff *skb = mlx5e_skb_fifo_pop(skb_fifo); 350 + int idx; 351 + 352 + for (idx = 0; idx < map->capacity; ++idx) { 353 + struct sk_buff *skb = map->data[idx]; 382 354 383 355 dev_kfree_skb_any(skb); 384 356 } 385 357 } 386 358 387 - static void mlx5e_ptp_free_traffic_db(struct mlx5e_skb_fifo *skb_fifo) 359 + static void mlx5e_ptp_free_traffic_db(struct mlx5e_ptpsq *ptpsq) 388 360 { 389 - mlx5e_ptp_drain_skb_fifo(skb_fifo); 390 - kvfree(skb_fifo->fifo); 361 + mlx5e_ptp_drain_metadata_map(&ptpsq->metadata_map); 362 + kvfree(ptpsq->metadata_map.data); 363 + kvfree(ptpsq->metadata_freelist.data); 364 + kvfree(ptpsq->ts_cqe_pending_list->nodes); 365 + kvfree(ptpsq->ts_cqe_pending_list); 366 + } 367 + 368 + static void mlx5e_ptpsq_unhealthy_work(struct work_struct *work) 369 + { 370 + struct mlx5e_ptpsq *ptpsq = 371 + container_of(work, struct mlx5e_ptpsq, report_unhealthy_work); 372 + 373 + mlx5e_reporter_tx_ptpsq_unhealthy(ptpsq); 391 374 } 392 375 393 376 static int mlx5e_ptp_open_txqsq(struct mlx5e_ptp *c, u32 tisn, ··· 465 348 if (err) 466 349 goto err_free_txqsq; 467 350 468 - err = mlx5e_ptp_alloc_traffic_db(ptpsq, 469 - dev_to_node(mlx5_core_dma_dev(c->mdev))); 351 + err = mlx5e_ptp_alloc_traffic_db(ptpsq, dev_to_node(mlx5_core_dma_dev(c->mdev))); 470 352 if (err) 471 353 goto err_free_txqsq; 354 + 355 + INIT_WORK(&ptpsq->report_unhealthy_work, mlx5e_ptpsq_unhealthy_work); 472 356 473 357 return 0; 474 358 ··· 484 366 struct mlx5e_txqsq *sq = &ptpsq->txqsq; 485 367 struct mlx5_core_dev *mdev = sq->mdev; 486 368 487 - mlx5e_ptp_free_traffic_db(&ptpsq->skb_fifo); 369 + if (current_work() != &ptpsq->report_unhealthy_work) 370 + cancel_work_sync(&ptpsq->report_unhealthy_work); 371 + mlx5e_ptp_free_traffic_db(ptpsq); 488 372 
cancel_work_sync(&sq->recover_work); 489 373 mlx5e_ptp_destroy_sq(mdev, sq->sqn); 490 374 mlx5e_free_txqsq_descs(sq); ··· 654 534 655 535 /* SQ */ 656 536 if (test_bit(MLX5E_PTP_STATE_TX, c->state)) { 657 - params->log_sq_size = orig->log_sq_size; 537 + params->log_sq_size = 538 + min(MLX5_CAP_GEN_2(c->mdev, ts_cqe_metadata_size2wqe_counter), 539 + MLX5E_PTP_MAX_LOG_SQ_SIZE); 540 + params->log_sq_size = min(params->log_sq_size, orig->log_sq_size); 658 541 mlx5e_ptp_build_sq_param(c->mdev, params, &cparams->txq_sq_param); 659 542 } 660 543 /* RQ */
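The ptp.c hunks above introduce the out-of-order CQE handling: each PTP SQ now tracks pending timestamp CQEs, and when a CQE is delivered, older pending entries that have exceeded `MLX5E_PTP_TS_CQE_UNDELIVERED_TIMEOUT` (1 s) are counted as undelivered (feeding the `ptp_cq[i]_late_cqe` counter and the unhealthy-map check). A minimal userspace sketch of that marking discipline — all names here are hypothetical stand-ins, not the driver API, and the real code compares hardware timestamps from the CQE rather than enqueue times:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Mirrors MLX5E_PTP_TS_CQE_UNDELIVERED_TIMEOUT (1 * NSEC_PER_SEC). */
#define UNDELIVERED_TIMEOUT_NS 1000000000ULL

struct pending_cqe {
    uint64_t enqueue_ts_ns; /* when the WQE was posted (illustrative) */
    bool inuse;
    bool undelivered;
};

/* Walk the pending entries; any in-use entry older than the
 * just-delivered timestamp by more than the timeout is assumed lost
 * and marked, in the spirit of mlx5e_ptpsq_mark_ts_cqes_undelivered().
 * Returns how many entries were newly marked undelivered. */
static unsigned int mark_undelivered(struct pending_cqe *list, size_t n,
                                     uint64_t delivered_ts_ns)
{
    unsigned int marked = 0;

    for (size_t i = 0; i < n; i++) {
        if (!list[i].inuse || list[i].undelivered)
            continue;
        if (list[i].enqueue_ts_ns + UNDELIVERED_TIMEOUT_NS < delivered_ts_ns) {
            list[i].undelivered = true;
            marked++;
        }
    }
    return marked;
}
```

In the driver, the undelivered count accumulates in `metadata_map.undelivered_counter`; once it crosses the map's health threshold, the new `report_unhealthy_work` is queued to trigger the devlink tx reporter recovery flow.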
+52 -7
drivers/net/ethernet/mellanox/mlx5/core/en/ptp.h
··· 7 7 #include "en.h" 8 8 #include "en_stats.h" 9 9 #include "en/txrx.h" 10 + #include <linux/ktime.h> 10 11 #include <linux/ptp_classify.h> 12 + #include <linux/time64.h> 13 + #include <linux/workqueue.h> 11 14 12 15 #define MLX5E_PTP_CHANNEL_IX 0 16 + #define MLX5E_PTP_MAX_LOG_SQ_SIZE (8U) 17 + #define MLX5E_PTP_TS_CQE_UNDELIVERED_TIMEOUT (1 * NSEC_PER_SEC) 18 + 19 + struct mlx5e_ptp_metadata_fifo { 20 + u8 cc; 21 + u8 pc; 22 + u8 mask; 23 + u8 *data; 24 + }; 25 + 26 + struct mlx5e_ptp_metadata_map { 27 + u16 undelivered_counter; 28 + u16 capacity; 29 + struct sk_buff **data; 30 + }; 13 31 14 32 struct mlx5e_ptpsq { 15 33 struct mlx5e_txqsq txqsq; 16 34 struct mlx5e_cq ts_cq; 17 - u16 skb_fifo_cc; 18 - u16 skb_fifo_pc; 19 - struct mlx5e_skb_fifo skb_fifo; 20 35 struct mlx5e_ptp_cq_stats *cq_stats; 21 36 u16 ts_cqe_ctr_mask; 37 + 38 + struct work_struct report_unhealthy_work; 39 + struct mlx5e_ptp_port_ts_cqe_list *ts_cqe_pending_list; 40 + struct mlx5e_ptp_metadata_fifo metadata_freelist; 41 + struct mlx5e_ptp_metadata_map metadata_map; 22 42 }; 23 43 24 44 enum { ··· 89 69 fk.ports.dst == htons(PTP_EV_PORT)); 90 70 } 91 71 92 - static inline bool mlx5e_ptpsq_fifo_has_room(struct mlx5e_txqsq *sq) 72 + static inline void mlx5e_ptp_metadata_fifo_push(struct mlx5e_ptp_metadata_fifo *fifo, u8 metadata) 93 73 { 94 - if (!sq->ptpsq) 95 - return true; 74 + fifo->data[fifo->mask & fifo->pc++] = metadata; 75 + } 96 76 97 - return mlx5e_skb_fifo_has_room(&sq->ptpsq->skb_fifo); 77 + static inline u8 78 + mlx5e_ptp_metadata_fifo_pop(struct mlx5e_ptp_metadata_fifo *fifo) 79 + { 80 + return fifo->data[fifo->mask & fifo->cc++]; 81 + } 82 + 83 + static inline void 84 + mlx5e_ptp_metadata_map_put(struct mlx5e_ptp_metadata_map *map, 85 + struct sk_buff *skb, u8 metadata) 86 + { 87 + WARN_ON_ONCE(map->data[metadata]); 88 + map->data[metadata] = skb; 89 + } 90 + 91 + static inline bool mlx5e_ptpsq_metadata_freelist_empty(struct mlx5e_ptpsq *ptpsq) 92 + { 93 + struct 
mlx5e_ptp_metadata_fifo *freelist; 94 + 95 + if (likely(!ptpsq)) 96 + return false; 97 + 98 + freelist = &ptpsq->metadata_freelist; 99 + 100 + return freelist->pc == freelist->cc; 98 101 } 99 102 100 103 int mlx5e_ptp_open(struct mlx5e_priv *priv, struct mlx5e_params *params, ··· 131 88 void mlx5e_ptp_free_rx_fs(struct mlx5e_flow_steering *fs, 132 89 const struct mlx5e_profile *profile); 133 90 int mlx5e_ptp_rx_manage_fs(struct mlx5e_priv *priv, bool set); 91 + 92 + void mlx5e_ptpsq_track_metadata(struct mlx5e_ptpsq *ptpsq, u8 metadata); 134 93 135 94 enum { 136 95 MLX5E_SKB_CB_CQE_HWTSTAMP = BIT(0),
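The ptp.h changes above replace the skb FIFO with a tiny u8 ring, `mlx5e_ptp_metadata_fifo`, whose push and pop are just `data[mask & pc++]` and `data[mask & cc++]`. This works because the depth is a power of two small enough that the u8 producer/consumer counters wrap harmlessly through the mask. A self-contained sketch of the same discipline, with an assumed depth of 8 (the structure names are stand-ins for the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Must be a power of two, small enough that pc = FIFO_SZ fits in u8. */
#define FIFO_SZ 8

struct metadata_fifo {
    uint8_t cc;            /* consumer counter */
    uint8_t pc;            /* producer counter */
    uint8_t mask;          /* FIFO_SZ - 1 */
    uint8_t data[FIFO_SZ];
};

/* Like mlx5e_ptp_alloc_traffic_db(): start with every metadata id free,
 * so the freelist begins full (pc == FIFO_SZ, cc == 0). */
static void fifo_init_full(struct metadata_fifo *f)
{
    f->cc = 0;
    f->mask = FIFO_SZ - 1;
    for (uint8_t md = 0; md < FIFO_SZ; md++)
        f->data[md] = md;
    f->pc = FIFO_SZ;
}

static void fifo_push(struct metadata_fifo *f, uint8_t md)
{
    f->data[f->mask & f->pc++] = md;
}

static uint8_t fifo_pop(struct metadata_fifo *f)
{
    return f->data[f->mask & f->cc++];
}

/* Mirrors mlx5e_ptpsq_metadata_freelist_empty(). */
static int fifo_empty(const struct metadata_fifo *f)
{
    return f->pc == f->cc;
}
```

On transmit the driver pops an id from this freelist into `eseg->flow_table_metadata`; the id returns via a push either on CQE completion or on the drop path, and an empty freelist stops the tx queue.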
+65
drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
··· 164 164 return err; 165 165 } 166 166 167 + static int mlx5e_tx_reporter_ptpsq_unhealthy_recover(void *ctx) 168 + { 169 + struct mlx5e_ptpsq *ptpsq = ctx; 170 + struct mlx5e_channels *chs; 171 + struct net_device *netdev; 172 + struct mlx5e_priv *priv; 173 + int carrier_ok; 174 + int err; 175 + 176 + if (!test_bit(MLX5E_SQ_STATE_RECOVERING, &ptpsq->txqsq.state)) 177 + return 0; 178 + 179 + priv = ptpsq->txqsq.priv; 180 + 181 + mutex_lock(&priv->state_lock); 182 + chs = &priv->channels; 183 + netdev = priv->netdev; 184 + 185 + carrier_ok = netif_carrier_ok(netdev); 186 + netif_carrier_off(netdev); 187 + 188 + mlx5e_deactivate_priv_channels(priv); 189 + 190 + mlx5e_ptp_close(chs->ptp); 191 + err = mlx5e_ptp_open(priv, &chs->params, chs->c[0]->lag_port, &chs->ptp); 192 + 193 + mlx5e_activate_priv_channels(priv); 194 + 195 + /* return carrier back if needed */ 196 + if (carrier_ok) 197 + netif_carrier_on(netdev); 198 + 199 + mutex_unlock(&priv->state_lock); 200 + 201 + return err; 202 + } 203 + 167 204 /* state lock cannot be grabbed within this function. 168 205 * It can cause a dead lock or a read-after-free. 
169 206 */ ··· 553 516 return mlx5e_tx_reporter_dump_sq(priv, fmsg, to_ctx->sq); 554 517 } 555 518 519 + static int mlx5e_tx_reporter_ptpsq_unhealthy_dump(struct mlx5e_priv *priv, 520 + struct devlink_fmsg *fmsg, 521 + void *ctx) 522 + { 523 + struct mlx5e_ptpsq *ptpsq = ctx; 524 + 525 + return mlx5e_tx_reporter_dump_sq(priv, fmsg, &ptpsq->txqsq); 526 + } 527 + 556 528 static int mlx5e_tx_reporter_dump_all_sqs(struct mlx5e_priv *priv, 557 529 struct devlink_fmsg *fmsg) 558 530 { ··· 665 619 666 620 mlx5e_health_report(priv, priv->tx_reporter, err_str, &err_ctx); 667 621 return to_ctx.status; 622 + } 623 + 624 + void mlx5e_reporter_tx_ptpsq_unhealthy(struct mlx5e_ptpsq *ptpsq) 625 + { 626 + struct mlx5e_ptp_metadata_map *map = &ptpsq->metadata_map; 627 + char err_str[MLX5E_REPORTER_PER_Q_MAX_LEN]; 628 + struct mlx5e_txqsq *txqsq = &ptpsq->txqsq; 629 + struct mlx5e_cq *ts_cq = &ptpsq->ts_cq; 630 + struct mlx5e_priv *priv = txqsq->priv; 631 + struct mlx5e_err_ctx err_ctx = {}; 632 + 633 + err_ctx.ctx = ptpsq; 634 + err_ctx.recover = mlx5e_tx_reporter_ptpsq_unhealthy_recover; 635 + err_ctx.dump = mlx5e_tx_reporter_ptpsq_unhealthy_dump; 636 + snprintf(err_str, sizeof(err_str), 637 + "Unhealthy TX port TS queue: %d, SQ: 0x%x, CQ: 0x%x, Undelivered CQEs: %u Map Capacity: %u", 638 + txqsq->ch_ix, txqsq->sqn, ts_cq->mcq.cqn, map->undelivered_counter, map->capacity); 639 + 640 + mlx5e_health_report(priv, priv->tx_reporter, err_str, &err_ctx); 668 641 } 669 642 670 643 static const struct devlink_health_reporter_ops mlx5_tx_reporter_ops = {
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
··· 2061 2061 struct mlx5e_params new_params; 2062 2062 int err; 2063 2063 2064 - if (!MLX5_CAP_GEN(mdev, ts_cqe_to_dest_cqn)) 2064 + if (!MLX5_CAP_GEN(mdev, ts_cqe_to_dest_cqn) || 2065 + !MLX5_CAP_GEN_2(mdev, ts_cqe_metadata_size2wqe_counter)) 2065 2066 return -EOPNOTSUPP; 2066 2067 2067 2068 /* Don't allow changing the PTP state if HTB offload is active, because
+1 -3
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
··· 2142 2142 { MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, err_cqe) }, 2143 2143 { MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, abort) }, 2144 2144 { MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, abort_abs_diff_ns) }, 2145 - { MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, resync_cqe) }, 2146 - { MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, resync_event) }, 2147 - { MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, ooo_cqe_drop) }, 2145 + { MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, late_cqe) }, 2148 2146 }; 2149 2147 2150 2148 static const struct counter_desc ptp_rq_stats_desc[] = {
+1 -3
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
··· 449 449 u64 err_cqe; 450 450 u64 abort; 451 451 u64 abort_abs_diff_ns; 452 - u64 resync_cqe; 453 - u64 resync_event; 454 - u64 ooo_cqe_drop; 452 + u64 late_cqe; 455 453 }; 456 454 457 455 struct mlx5e_rep_stats {
+18 -10
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
··· 372 372 const struct mlx5e_tx_attr *attr, 373 373 const struct mlx5e_tx_wqe_attr *wqe_attr, u8 num_dma, 374 374 struct mlx5e_tx_wqe_info *wi, struct mlx5_wqe_ctrl_seg *cseg, 375 - bool xmit_more) 375 + struct mlx5_wqe_eth_seg *eseg, bool xmit_more) 376 376 { 377 377 struct mlx5_wq_cyc *wq = &sq->wq; 378 378 bool send_doorbell; ··· 394 394 395 395 mlx5e_tx_check_stop(sq); 396 396 397 - if (unlikely(sq->ptpsq)) { 397 + if (unlikely(sq->ptpsq && 398 + (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))) { 399 + u8 metadata_index = be32_to_cpu(eseg->flow_table_metadata); 400 + 398 401 mlx5e_skb_cb_hwtstamp_init(skb); 399 - mlx5e_skb_fifo_push(&sq->ptpsq->skb_fifo, skb); 402 + mlx5e_ptpsq_track_metadata(sq->ptpsq, metadata_index); 403 + mlx5e_ptp_metadata_map_put(&sq->ptpsq->metadata_map, skb, 404 + metadata_index); 400 405 if (!netif_tx_queue_stopped(sq->txq) && 401 - !mlx5e_skb_fifo_has_room(&sq->ptpsq->skb_fifo)) { 406 + mlx5e_ptpsq_metadata_freelist_empty(sq->ptpsq)) { 402 407 netif_tx_stop_queue(sq->txq); 403 408 sq->stats->stopped++; 404 409 } ··· 488 483 if (unlikely(num_dma < 0)) 489 484 goto err_drop; 490 485 491 - mlx5e_txwqe_complete(sq, skb, attr, wqe_attr, num_dma, wi, cseg, xmit_more); 486 + mlx5e_txwqe_complete(sq, skb, attr, wqe_attr, num_dma, wi, cseg, eseg, xmit_more); 492 487 493 488 return; 494 489 495 490 err_drop: 496 491 stats->dropped++; 497 492 dev_kfree_skb_any(skb); 493 + if (unlikely(sq->ptpsq && (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))) 494 + mlx5e_ptp_metadata_fifo_push(&sq->ptpsq->metadata_freelist, 495 + be32_to_cpu(eseg->flow_table_metadata)); 498 496 mlx5e_tx_flush(sq); 499 497 } 500 498 ··· 653 645 static void mlx5e_cqe_ts_id_eseg(struct mlx5e_ptpsq *ptpsq, struct sk_buff *skb, 654 646 struct mlx5_wqe_eth_seg *eseg) 655 647 { 656 - if (ptpsq->ts_cqe_ctr_mask && unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) 657 - eseg->flow_table_metadata = cpu_to_be32(ptpsq->skb_fifo_pc & 658 - ptpsq->ts_cqe_ctr_mask); 648 + if 
(unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) 649 + eseg->flow_table_metadata = 650 + cpu_to_be32(mlx5e_ptp_metadata_fifo_pop(&ptpsq->metadata_freelist)); 659 651 } 660 652 661 653 static void mlx5e_txwqe_build_eseg(struct mlx5e_priv *priv, struct mlx5e_txqsq *sq, ··· 774 766 { 775 767 if (netif_tx_queue_stopped(sq->txq) && 776 768 mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, sq->stop_room) && 777 - mlx5e_ptpsq_fifo_has_room(sq) && 769 + !mlx5e_ptpsq_metadata_freelist_empty(sq->ptpsq) && 778 770 !test_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state)) { 779 771 netif_tx_wake_queue(sq->txq); 780 772 sq->stats->wake++; ··· 1039 1031 if (unlikely(num_dma < 0)) 1040 1032 goto err_drop; 1041 1033 1042 - mlx5e_txwqe_complete(sq, skb, &attr, &wqe_attr, num_dma, wi, cseg, xmit_more); 1034 + mlx5e_txwqe_complete(sq, skb, &attr, &wqe_attr, num_dma, wi, cseg, eseg, xmit_more); 1043 1035 1044 1036 return; 1045 1037
+31
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 535 535 MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level); 536 536 } 537 537 538 + static bool 539 + esw_dests_to_vf_pf_vports(struct mlx5_flow_destination *dests, int max_dest) 540 + { 541 + bool vf_dest = false, pf_dest = false; 542 + int i; 543 + 544 + for (i = 0; i < max_dest; i++) { 545 + if (dests[i].type != MLX5_FLOW_DESTINATION_TYPE_VPORT) 546 + continue; 547 + 548 + if (dests[i].vport.num == MLX5_VPORT_UPLINK) 549 + pf_dest = true; 550 + else 551 + vf_dest = true; 552 + 553 + if (vf_dest && pf_dest) 554 + return true; 555 + } 556 + 557 + return false; 558 + } 559 + 538 560 static int 539 561 esw_setup_dests(struct mlx5_flow_destination *dest, 540 562 struct mlx5_flow_act *flow_act, ··· 692 670 if (err) { 693 671 rule = ERR_PTR(err); 694 672 goto err_create_goto_table; 673 + } 674 + 675 + /* Header rewrite with combined wire+loopback in FDB is not allowed */ 676 + if ((flow_act.action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) && 677 + esw_dests_to_vf_pf_vports(dest, i)) { 678 + esw_warn(esw->dev, 679 + "FDB: Header rewrite with forwarding to both PF and VF is not allowed\n"); 680 + rule = ERR_PTR(-EINVAL); 681 + goto err_esw_get; 695 682 } 696 683 } 697 684
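The new `esw_dests_to_vf_pf_vports()` above lets the E-switch reject FDB rules that combine a header rewrite with forwarding to both the uplink and a VF. The scan reduces to a two-flag pass over the destination array; a simplified standalone version, where the types and the uplink constant are stand-ins for the mlx5 ones:

```c
#include <assert.h>
#include <stdbool.h>

enum dest_type { DEST_VPORT, DEST_FLOW_TABLE };

#define VPORT_UPLINK 0xffff /* stand-in for MLX5_VPORT_UPLINK */

struct flow_dest {
    enum dest_type type;
    unsigned int vport;
};

/* True when the destination list forwards to both the uplink (wire)
 * and at least one other vport (VF); non-vport destinations such as
 * flow tables are ignored, as in esw_dests_to_vf_pf_vports(). */
static bool dests_mix_vf_and_uplink(const struct flow_dest *dests, int n)
{
    bool vf_dest = false, pf_dest = false;

    for (int i = 0; i < n; i++) {
        if (dests[i].type != DEST_VPORT)
            continue;

        if (dests[i].vport == VPORT_UPLINK)
            pf_dest = true;
        else
            vf_dest = true;

        if (vf_dest && pf_dest)
            return true;
    }
    return false;
}
```

The caller in the diff only applies this when `MLX5_FLOW_CONTEXT_ACTION_MOD_HDR` is set, returning `-EINVAL` for the disallowed combination.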
+25 -34
drivers/net/ethernet/mellanox/mlx5/core/fw.c
··· 143 143 { 144 144 int err; 145 145 146 - err = mlx5_core_get_caps(dev, MLX5_CAP_GENERAL); 146 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_GENERAL, HCA_CAP_OPMOD_GET_CUR); 147 147 if (err) 148 148 return err; 149 149 150 150 if (MLX5_CAP_GEN(dev, port_selection_cap)) { 151 - err = mlx5_core_get_caps(dev, MLX5_CAP_PORT_SELECTION); 151 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_PORT_SELECTION, HCA_CAP_OPMOD_GET_CUR); 152 152 if (err) 153 153 return err; 154 154 } 155 155 156 156 if (MLX5_CAP_GEN(dev, hca_cap_2)) { 157 - err = mlx5_core_get_caps(dev, MLX5_CAP_GENERAL_2); 157 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_GENERAL_2, HCA_CAP_OPMOD_GET_CUR); 158 158 if (err) 159 159 return err; 160 160 } 161 161 162 162 if (MLX5_CAP_GEN(dev, eth_net_offloads)) { 163 - err = mlx5_core_get_caps(dev, MLX5_CAP_ETHERNET_OFFLOADS); 163 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ETHERNET_OFFLOADS, 164 + HCA_CAP_OPMOD_GET_CUR); 164 165 if (err) 165 166 return err; 166 167 } 167 168 168 169 if (MLX5_CAP_GEN(dev, ipoib_enhanced_offloads)) { 169 - err = mlx5_core_get_caps(dev, MLX5_CAP_IPOIB_ENHANCED_OFFLOADS); 170 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_IPOIB_ENHANCED_OFFLOADS, 171 + HCA_CAP_OPMOD_GET_CUR); 170 172 if (err) 171 173 return err; 172 174 } 173 175 174 176 if (MLX5_CAP_GEN(dev, pg)) { 175 - err = mlx5_core_get_caps(dev, MLX5_CAP_ODP); 177 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ODP, HCA_CAP_OPMOD_GET_CUR); 176 178 if (err) 177 179 return err; 178 180 } 179 181 180 182 if (MLX5_CAP_GEN(dev, atomic)) { 181 - err = mlx5_core_get_caps(dev, MLX5_CAP_ATOMIC); 183 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ATOMIC, HCA_CAP_OPMOD_GET_CUR); 182 184 if (err) 183 185 return err; 184 186 } 185 187 186 188 if (MLX5_CAP_GEN(dev, roce)) { 187 - err = mlx5_core_get_caps(dev, MLX5_CAP_ROCE); 189 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ROCE, HCA_CAP_OPMOD_GET_CUR); 188 190 if (err) 189 191 return err; 190 192 } 191 193 192 194 if (MLX5_CAP_GEN(dev, 
nic_flow_table) || 193 195 MLX5_CAP_GEN(dev, ipoib_enhanced_offloads)) { 194 - err = mlx5_core_get_caps(dev, MLX5_CAP_FLOW_TABLE); 196 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_FLOW_TABLE, HCA_CAP_OPMOD_GET_CUR); 195 197 if (err) 196 198 return err; 197 199 } 198 200 199 201 if (MLX5_ESWITCH_MANAGER(dev)) { 200 - err = mlx5_core_get_caps(dev, MLX5_CAP_ESWITCH_FLOW_TABLE); 202 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ESWITCH_FLOW_TABLE, 203 + HCA_CAP_OPMOD_GET_CUR); 201 204 if (err) 202 205 return err; 203 206 204 - err = mlx5_core_get_caps(dev, MLX5_CAP_ESWITCH); 205 - if (err) 206 - return err; 207 - } 208 - 209 - if (MLX5_CAP_GEN(dev, vector_calc)) { 210 - err = mlx5_core_get_caps(dev, MLX5_CAP_VECTOR_CALC); 207 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ESWITCH, HCA_CAP_OPMOD_GET_CUR); 211 208 if (err) 212 209 return err; 213 210 } 214 211 215 212 if (MLX5_CAP_GEN(dev, qos)) { 216 - err = mlx5_core_get_caps(dev, MLX5_CAP_QOS); 213 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_QOS, HCA_CAP_OPMOD_GET_CUR); 217 214 if (err) 218 215 return err; 219 216 } 220 217 221 218 if (MLX5_CAP_GEN(dev, debug)) 222 - mlx5_core_get_caps(dev, MLX5_CAP_DEBUG); 219 + mlx5_core_get_caps_mode(dev, MLX5_CAP_DEBUG, HCA_CAP_OPMOD_GET_CUR); 223 220 224 221 if (MLX5_CAP_GEN(dev, pcam_reg)) 225 222 mlx5_get_pcam_reg(dev); 226 223 227 224 if (MLX5_CAP_GEN(dev, mcam_reg)) { 228 225 mlx5_get_mcam_access_reg_group(dev, MLX5_MCAM_REGS_FIRST_128); 229 - mlx5_get_mcam_access_reg_group(dev, MLX5_MCAM_REGS_0x9080_0x90FF); 230 226 mlx5_get_mcam_access_reg_group(dev, MLX5_MCAM_REGS_0x9100_0x917F); 231 227 } 232 228 ··· 230 234 mlx5_get_qcam_reg(dev); 231 235 232 236 if (MLX5_CAP_GEN(dev, device_memory)) { 233 - err = mlx5_core_get_caps(dev, MLX5_CAP_DEV_MEM); 237 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_DEV_MEM, HCA_CAP_OPMOD_GET_CUR); 234 238 if (err) 235 239 return err; 236 240 } 237 241 238 242 if (MLX5_CAP_GEN(dev, event_cap)) { 239 - err = mlx5_core_get_caps(dev, 
MLX5_CAP_DEV_EVENT); 243 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_DEV_EVENT, HCA_CAP_OPMOD_GET_CUR); 240 244 if (err) 241 245 return err; 242 246 } 243 247 244 248 if (MLX5_CAP_GEN(dev, tls_tx) || MLX5_CAP_GEN(dev, tls_rx)) { 245 - err = mlx5_core_get_caps(dev, MLX5_CAP_TLS); 249 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_TLS, HCA_CAP_OPMOD_GET_CUR); 246 250 if (err) 247 251 return err; 248 252 } 249 253 250 254 if (MLX5_CAP_GEN_64(dev, general_obj_types) & 251 255 MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q) { 252 - err = mlx5_core_get_caps(dev, MLX5_CAP_VDPA_EMULATION); 256 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_VDPA_EMULATION, HCA_CAP_OPMOD_GET_CUR); 253 257 if (err) 254 258 return err; 255 259 } 256 260 257 261 if (MLX5_CAP_GEN(dev, ipsec_offload)) { 258 - err = mlx5_core_get_caps(dev, MLX5_CAP_IPSEC); 262 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_IPSEC, HCA_CAP_OPMOD_GET_CUR); 259 263 if (err) 260 264 return err; 261 265 } 262 266 263 267 if (MLX5_CAP_GEN(dev, crypto)) { 264 - err = mlx5_core_get_caps(dev, MLX5_CAP_CRYPTO); 265 - if (err) 266 - return err; 267 - } 268 - 269 - if (MLX5_CAP_GEN(dev, shampo)) { 270 - err = mlx5_core_get_caps(dev, MLX5_CAP_DEV_SHAMPO); 268 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_CRYPTO, HCA_CAP_OPMOD_GET_CUR); 271 269 if (err) 272 270 return err; 273 271 } 274 272 275 273 if (MLX5_CAP_GEN_64(dev, general_obj_types) & 276 274 MLX5_GENERAL_OBJ_TYPES_CAP_MACSEC_OFFLOAD) { 277 - err = mlx5_core_get_caps(dev, MLX5_CAP_MACSEC); 275 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_MACSEC, HCA_CAP_OPMOD_GET_CUR); 278 276 if (err) 279 277 return err; 280 278 } 281 279 282 280 if (MLX5_CAP_GEN(dev, adv_virtualization)) { 283 - err = mlx5_core_get_caps(dev, MLX5_CAP_ADV_VIRTUALIZATION); 281 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ADV_VIRTUALIZATION, 282 + HCA_CAP_OPMOD_GET_CUR); 284 283 if (err) 285 284 return err; 286 285 }
+33 -6
drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
··· 127 127 if (mlx5_reg_mfrl_query(dev, NULL, NULL, &reset_state)) 128 128 goto out; 129 129 130 + if (!reset_state) 131 + return 0; 132 + 130 133 switch (reset_state) { 131 134 case MLX5_MFRL_REG_RESET_STATE_IN_NEGOTIATION: 132 135 case MLX5_MFRL_REG_RESET_STATE_RESET_IN_PROGRESS: 133 - NL_SET_ERR_MSG_MOD(extack, "Sync reset was already triggered"); 136 + NL_SET_ERR_MSG_MOD(extack, "Sync reset still in progress"); 134 137 return -EBUSY; 135 - case MLX5_MFRL_REG_RESET_STATE_TIMEOUT: 136 - NL_SET_ERR_MSG_MOD(extack, "Sync reset got timeout"); 138 + case MLX5_MFRL_REG_RESET_STATE_NEG_TIMEOUT: 139 + NL_SET_ERR_MSG_MOD(extack, "Sync reset negotiation timeout"); 137 140 return -ETIMEDOUT; 138 141 case MLX5_MFRL_REG_RESET_STATE_NACK: 139 142 NL_SET_ERR_MSG_MOD(extack, "One of the hosts disabled reset"); 140 143 return -EPERM; 144 + case MLX5_MFRL_REG_RESET_STATE_UNLOAD_TIMEOUT: 145 + NL_SET_ERR_MSG_MOD(extack, "Sync reset unload timeout"); 146 + return -ETIMEDOUT; 141 147 } 142 148 143 149 out: ··· 157 151 struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset; 158 152 u32 out[MLX5_ST_SZ_DW(mfrl_reg)] = {}; 159 153 u32 in[MLX5_ST_SZ_DW(mfrl_reg)] = {}; 160 - int err; 154 + int err, rst_res; 161 155 162 156 set_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags); 163 157 ··· 170 164 return 0; 171 165 172 166 clear_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags); 173 - if (err == -EREMOTEIO && MLX5_CAP_MCAM_FEATURE(dev, reset_state)) 174 - return mlx5_fw_reset_get_reset_state_err(dev, extack); 167 + if (err == -EREMOTEIO && MLX5_CAP_MCAM_FEATURE(dev, reset_state)) { 168 + rst_res = mlx5_fw_reset_get_reset_state_err(dev, extack); 169 + return rst_res ? 
rst_res : err; 170 + } 175 171 176 172 NL_SET_ERR_MSG_MOD(extack, "Sync reset command failed"); 177 173 return mlx5_cmd_check(dev, err, in, out); 174 + } 175 + 176 + int mlx5_fw_reset_verify_fw_complete(struct mlx5_core_dev *dev, 177 + struct netlink_ext_ack *extack) 178 + { 179 + u8 rst_state; 180 + int err; 181 + 182 + err = mlx5_fw_reset_get_reset_state_err(dev, extack); 183 + if (err) 184 + return err; 185 + 186 + rst_state = mlx5_get_fw_rst_state(dev); 187 + if (!rst_state) 188 + return 0; 189 + 190 + mlx5_core_err(dev, "Sync reset did not complete, state=%d\n", rst_state); 191 + NL_SET_ERR_MSG_MOD(extack, "Sync reset did not complete successfully"); 192 + return rst_state; 178 193 } 179 194 180 195 int mlx5_fw_reset_set_live_patch(struct mlx5_core_dev *dev)
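The fw_reset.c changes above split the old `RESET_STATE_TIMEOUT` into negotiation and unload timeouts and add `mlx5_fw_reset_verify_fw_complete()`, which treats any non-zero MFRL reset state after a sync reset as failure. The state-to-errno mapping can be condensed as follows — the enum values here are illustrative stand-ins for the MFRL register layout, not copied from mlx5_ifc:

```c
#include <assert.h>
#include <errno.h>

/* Illustrative MFRL reset_state values (0 means idle/success). */
enum reset_state {
    RESET_STATE_IDLE = 0,
    RESET_STATE_IN_NEGOTIATION = 1,
    RESET_STATE_RESET_IN_PROGRESS = 2,
    RESET_STATE_NEG_TIMEOUT = 3,
    RESET_STATE_NACK = 4,
    RESET_STATE_UNLOAD_TIMEOUT = 5,
};

/* Returns 0 on success, negative errno otherwise -- the same shape as
 * mlx5_fw_reset_get_reset_state_err() after this series. */
static int reset_state_to_err(enum reset_state st)
{
    switch (st) {
    case RESET_STATE_IDLE:
        return 0;
    case RESET_STATE_IN_NEGOTIATION:
    case RESET_STATE_RESET_IN_PROGRESS:
        return -EBUSY;     /* "Sync reset still in progress" */
    case RESET_STATE_NEG_TIMEOUT:
    case RESET_STATE_UNLOAD_TIMEOUT:
        return -ETIMEDOUT; /* negotiation / unload timeout */
    case RESET_STATE_NACK:
        return -EPERM;     /* one of the hosts disabled reset */
    }
    return 0;
}
```

Note the early `if (!reset_state) return 0;` in the hunk serves the same purpose as the idle case here: only a non-zero state is an error, which is what lets the verify helper distinguish "reset completed" from "reset stuck".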
+2
drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h
··· 12 12 int mlx5_fw_reset_set_live_patch(struct mlx5_core_dev *dev); 13 13 14 14 int mlx5_fw_reset_wait_reset_done(struct mlx5_core_dev *dev); 15 + int mlx5_fw_reset_verify_fw_complete(struct mlx5_core_dev *dev, 16 + struct netlink_ext_ack *extack); 15 17 void mlx5_fw_reset_events_start(struct mlx5_core_dev *dev); 16 18 void mlx5_fw_reset_events_stop(struct mlx5_core_dev *dev); 17 19 void mlx5_drain_fw_reset(struct mlx5_core_dev *dev);
+8 -8
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 361 361 } 362 362 EXPORT_SYMBOL(mlx5_core_uplink_netdev_event_replay); 363 363 364 - static int mlx5_core_get_caps_mode(struct mlx5_core_dev *dev, 365 - enum mlx5_cap_type cap_type, 366 - enum mlx5_cap_mode cap_mode) 364 + int mlx5_core_get_caps_mode(struct mlx5_core_dev *dev, enum mlx5_cap_type cap_type, 365 + enum mlx5_cap_mode cap_mode) 367 366 { 368 367 u8 in[MLX5_ST_SZ_BYTES(query_hca_cap_in)]; 369 368 int out_sz = MLX5_ST_SZ_BYTES(query_hca_cap_out); ··· 1619 1620 return err; 1620 1621 1621 1622 if (MLX5_CAP_GEN(dev, eth_net_offloads)) { 1622 - err = mlx5_core_get_caps(dev, MLX5_CAP_ETHERNET_OFFLOADS); 1623 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_ETHERNET_OFFLOADS, 1624 + HCA_CAP_OPMOD_GET_CUR); 1623 1625 if (err) 1624 1626 return err; 1625 1627 } 1626 1628 1627 1629 if (MLX5_CAP_GEN(dev, nic_flow_table) || 1628 1630 MLX5_CAP_GEN(dev, ipoib_enhanced_offloads)) { 1629 - err = mlx5_core_get_caps(dev, MLX5_CAP_FLOW_TABLE); 1631 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_FLOW_TABLE, 1632 + HCA_CAP_OPMOD_GET_CUR); 1630 1633 if (err) 1631 1634 return err; 1632 1635 } 1633 1636 1634 1637 if (MLX5_CAP_GEN_64(dev, general_obj_types) & 1635 1638 MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q) { 1636 - err = mlx5_core_get_caps(dev, MLX5_CAP_VDPA_EMULATION); 1639 + err = mlx5_core_get_caps_mode(dev, MLX5_CAP_VDPA_EMULATION, 1640 + HCA_CAP_OPMOD_GET_CUR); 1637 1641 if (err) 1638 1642 return err; 1639 1643 } ··· 1716 1714 MLX5_CAP_FLOW_TABLE, 1717 1715 MLX5_CAP_ESWITCH_FLOW_TABLE, 1718 1716 MLX5_CAP_ESWITCH, 1719 - MLX5_CAP_VECTOR_CALC, 1720 1717 MLX5_CAP_QOS, 1721 1718 MLX5_CAP_DEBUG, 1722 1719 MLX5_CAP_DEV_MEM, ··· 1724 1723 MLX5_CAP_VDPA_EMULATION, 1725 1724 MLX5_CAP_IPSEC, 1726 1725 MLX5_CAP_PORT_SELECTION, 1727 - MLX5_CAP_DEV_SHAMPO, 1728 1726 MLX5_CAP_MACSEC, 1729 1727 MLX5_CAP_ADV_VIRTUALIZATION, 1730 1728 MLX5_CAP_CRYPTO,
+3
drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
··· 174 174 #define MLX5_FLEXIBLE_INLEN(dev, fixed, item_size, num_items) \ 175 175 mlx5_flexible_inlen(dev, fixed, item_size, num_items, __func__, __LINE__) 176 176 177 + int mlx5_core_get_caps(struct mlx5_core_dev *dev, enum mlx5_cap_type cap_type); 178 + int mlx5_core_get_caps_mode(struct mlx5_core_dev *dev, enum mlx5_cap_type cap_type, 179 + enum mlx5_cap_mode cap_mode); 177 180 int mlx5_query_hca_caps(struct mlx5_core_dev *dev); 178 181 int mlx5_query_board_id(struct mlx5_core_dev *dev); 179 182 int mlx5_query_module_num(struct mlx5_core_dev *dev, int *module_num);
+6 -6
drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.c
··· 129 129 130 130 err = auxiliary_device_add(&sf_dev->adev); 131 131 if (err) { 132 - put_device(&sf_dev->adev.dev); 132 + auxiliary_device_uninit(&sf_dev->adev); 133 133 goto add_err; 134 134 } 135 135 ··· 167 167 if (!max_functions) 168 168 return 0; 169 169 170 - base_id = MLX5_CAP_GEN(table->dev, sf_base_id); 170 + base_id = mlx5_sf_start_function_id(table->dev); 171 171 if (event->function_id < base_id || event->function_id >= (base_id + max_functions)) 172 172 return 0; 173 173 ··· 185 185 mlx5_sf_dev_del(table->dev, sf_dev, sf_index); 186 186 else 187 187 mlx5_core_err(table->dev, 188 - "SF DEV: teardown state for invalid dev index=%d fn_id=0x%x\n", 188 + "SF DEV: teardown state for invalid dev index=%d sfnum=0x%x\n", 189 189 sf_index, event->sw_function_id); 190 190 break; 191 191 case MLX5_VHCA_STATE_ACTIVE: ··· 209 209 int i; 210 210 211 211 max_functions = mlx5_sf_max_functions(dev); 212 - function_id = MLX5_CAP_GEN(dev, sf_base_id); 212 + function_id = mlx5_sf_start_function_id(dev); 213 213 /* Arm the vhca context as the vhca event notifier */ 214 214 for (i = 0; i < max_functions; i++) { 215 215 err = mlx5_vhca_event_arm(dev, function_id); ··· 234 234 int i; 235 235 236 236 max_functions = mlx5_sf_max_functions(dev); 237 - function_id = MLX5_CAP_GEN(dev, sf_base_id); 237 + function_id = mlx5_sf_start_function_id(dev); 238 238 for (i = 0; i < max_functions; i++, function_id++) { 239 239 if (table->stop_active_wq) 240 240 return; ··· 299 299 unsigned int max_sfs; 300 300 int err; 301 301 302 - if (!mlx5_sf_dev_supported(dev) || !mlx5_vhca_event_supported(dev)) 302 + if (!mlx5_sf_dev_supported(dev)) 303 303 return; 304 304 305 305 table = kzalloc(sizeof(*table), GFP_KERNEL);
+42 -7
drivers/net/ethernet/mellanox/mlx5/core/sf/hw_table.c
··· 9 9 #include "mlx5_core.h" 10 10 #include "eswitch.h" 11 11 #include "diag/sf_tracepoint.h" 12 + #include "devlink.h" 12 13 13 14 struct mlx5_sf_hw { 14 15 u32 usr_sfnum; ··· 244 243 kfree(hwc->sfs); 245 244 } 246 245 246 + static void mlx5_sf_hw_table_res_unregister(struct mlx5_core_dev *dev) 247 + { 248 + devl_resources_unregister(priv_to_devlink(dev)); 249 + } 250 + 251 + static int mlx5_sf_hw_table_res_register(struct mlx5_core_dev *dev, u16 max_fn, 252 + u16 max_ext_fn) 253 + { 254 + struct devlink_resource_size_params size_params; 255 + struct devlink *devlink = priv_to_devlink(dev); 256 + int err; 257 + 258 + devlink_resource_size_params_init(&size_params, max_fn, max_fn, 1, 259 + DEVLINK_RESOURCE_UNIT_ENTRY); 260 + err = devl_resource_register(devlink, "max_local_SFs", max_fn, MLX5_DL_RES_MAX_LOCAL_SFS, 261 + DEVLINK_RESOURCE_ID_PARENT_TOP, &size_params); 262 + if (err) 263 + return err; 264 + 265 + devlink_resource_size_params_init(&size_params, max_ext_fn, max_ext_fn, 1, 266 + DEVLINK_RESOURCE_UNIT_ENTRY); 267 + return devl_resource_register(devlink, "max_external_SFs", max_ext_fn, 268 + MLX5_DL_RES_MAX_EXTERNAL_SFS, DEVLINK_RESOURCE_ID_PARENT_TOP, 269 + &size_params); 270 + } 271 + 247 272 int mlx5_sf_hw_table_init(struct mlx5_core_dev *dev) 248 273 { 249 274 struct mlx5_sf_hw_table *table; 250 275 u16 max_ext_fn = 0; 251 276 u16 ext_base_id = 0; 252 - u16 max_fn = 0; 253 277 u16 base_id; 278 + u16 max_fn; 254 279 int err; 255 280 256 281 if (!mlx5_vhca_event_supported(dev)) 257 282 return 0; 258 283 259 - if (mlx5_sf_supported(dev)) 260 - max_fn = mlx5_sf_max_functions(dev); 284 + max_fn = mlx5_sf_max_functions(dev); 261 285 262 286 err = mlx5_esw_sf_max_hpf_functions(dev, &max_ext_fn, &ext_base_id); 263 287 if (err) 264 288 return err; 265 289 290 + if (mlx5_sf_hw_table_res_register(dev, max_fn, max_ext_fn)) 291 + mlx5_core_dbg(dev, "failed to register max SFs resources"); 292 + 266 293 if (!max_fn && !max_ext_fn) 267 294 return 0; 268 295 269 296 
table = kzalloc(sizeof(*table), GFP_KERNEL); 270 - if (!table) 271 - return -ENOMEM; 297 + if (!table) { 298 + err = -ENOMEM; 299 + goto alloc_err; 300 + } 272 301 273 302 mutex_init(&table->table_lock); 274 303 table->dev = dev; ··· 322 291 table_err: 323 292 mutex_destroy(&table->table_lock); 324 293 kfree(table); 294 + alloc_err: 295 + mlx5_sf_hw_table_res_unregister(dev); 325 296 return err; 326 297 } 327 298 ··· 332 299 struct mlx5_sf_hw_table *table = dev->priv.sf_hw_table; 333 300 334 301 if (!table) 335 - return; 302 + goto res_unregister; 336 303 337 - mutex_destroy(&table->table_lock); 338 304 mlx5_sf_hw_table_hwc_cleanup(&table->hwc[MLX5_SF_HWC_EXTERNAL]); 339 305 mlx5_sf_hw_table_hwc_cleanup(&table->hwc[MLX5_SF_HWC_LOCAL]); 306 + mutex_destroy(&table->table_lock); 340 307 kfree(table); 308 + res_unregister: 309 + mlx5_sf_hw_table_res_unregister(dev); 341 310 } 342 311 343 312 static int mlx5_sf_hw_vhca_event(struct notifier_block *nb, unsigned long opcode, void *data)
+1 -68
include/linux/mlx5/device.h
··· 1208 1208 	MLX5_CAP_FLOW_TABLE,
1209 1209 	MLX5_CAP_ESWITCH_FLOW_TABLE,
1210 1210 	MLX5_CAP_ESWITCH,
1211 - 	MLX5_CAP_RESERVED,
1212 - 	MLX5_CAP_VECTOR_CALC,
1213 - 	MLX5_CAP_QOS,
1211 + 	MLX5_CAP_QOS = 0xc,
1214 1212 	MLX5_CAP_DEBUG,
1215 1213 	MLX5_CAP_RESERVED_14,
1216 1214 	MLX5_CAP_DEV_MEM,
··· 1218 1220 	MLX5_CAP_DEV_EVENT = 0x14,
1219 1221 	MLX5_CAP_IPSEC,
1220 1222 	MLX5_CAP_CRYPTO = 0x1a,
1221 - 	MLX5_CAP_DEV_SHAMPO = 0x1d,
1222 1223 	MLX5_CAP_MACSEC = 0x1f,
1223 1224 	MLX5_CAP_GENERAL_2 = 0x20,
1224 1225 	MLX5_CAP_PORT_SELECTION = 0x25,
··· 1236 1239 
1237 1240 enum mlx5_mcam_reg_groups {
1238 1241 	MLX5_MCAM_REGS_FIRST_128 = 0x0,
1239 - 	MLX5_MCAM_REGS_0x9080_0x90FF = 0x1,
1240 1242 	MLX5_MCAM_REGS_0x9100_0x917F = 0x2,
1241 1243 	MLX5_MCAM_REGS_NUM = 0x3,
1242 1244 };
··· 1275 1279 	MLX5_GET(per_protocol_networking_offload_caps,\
1276 1280 		 mdev->caps.hca[MLX5_CAP_ETHERNET_OFFLOADS]->cur, cap)
1277 1281 
1278 - #define MLX5_CAP_ETH_MAX(mdev, cap) \
1279 - 	MLX5_GET(per_protocol_networking_offload_caps,\
1280 - 		 mdev->caps.hca[MLX5_CAP_ETHERNET_OFFLOADS]->max, cap)
1281 - 
1282 1282 #define MLX5_CAP_IPOIB_ENHANCED(mdev, cap) \
1283 1283 	MLX5_GET(per_protocol_networking_offload_caps,\
1284 1284 		 mdev->caps.hca[MLX5_CAP_IPOIB_ENHANCED_OFFLOADS]->cur, cap)
··· 1297 1305 #define MLX5_CAP64_FLOWTABLE(mdev, cap) \
1298 1306 	MLX5_GET64(flow_table_nic_cap, (mdev)->caps.hca[MLX5_CAP_FLOW_TABLE]->cur, cap)
1299 1307 
1300 - #define MLX5_CAP_FLOWTABLE_MAX(mdev, cap) \
1301 - 	MLX5_GET(flow_table_nic_cap, mdev->caps.hca[MLX5_CAP_FLOW_TABLE]->max, cap)
1302 - 
1303 1308 #define MLX5_CAP_FLOWTABLE_NIC_RX(mdev, cap) \
1304 1309 	MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_receive.cap)
1305 - 
1306 - #define MLX5_CAP_FLOWTABLE_NIC_RX_MAX(mdev, cap) \
1307 - 	MLX5_CAP_FLOWTABLE_MAX(mdev, flow_table_properties_nic_receive.cap)
1308 1310 
1309 1311 #define MLX5_CAP_FLOWTABLE_NIC_TX(mdev, cap) \
1310 1312 	MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_transmit.cap)
1311 1313 
1312 - #define MLX5_CAP_FLOWTABLE_NIC_TX_MAX(mdev, cap) \
1313 - 	MLX5_CAP_FLOWTABLE_MAX(mdev, flow_table_properties_nic_transmit.cap)
1314 - 
1315 1314 #define MLX5_CAP_FLOWTABLE_SNIFFER_RX(mdev, cap) \
1316 1315 	MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_receive_sniffer.cap)
1317 - 
1318 - #define MLX5_CAP_FLOWTABLE_SNIFFER_RX_MAX(mdev, cap) \
1319 - 	MLX5_CAP_FLOWTABLE_MAX(mdev, flow_table_properties_nic_receive_sniffer.cap)
1320 1316 
1321 1317 #define MLX5_CAP_FLOWTABLE_SNIFFER_TX(mdev, cap) \
1322 1318 	MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_transmit_sniffer.cap)
1323 1319 
1324 - #define MLX5_CAP_FLOWTABLE_SNIFFER_TX_MAX(mdev, cap) \
1325 - 	MLX5_CAP_FLOWTABLE_MAX(mdev, flow_table_properties_nic_transmit_sniffer.cap)
1326 - 
1327 1320 #define MLX5_CAP_FLOWTABLE_RDMA_RX(mdev, cap) \
1328 1321 	MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_receive_rdma.cap)
1329 1322 
1330 - #define MLX5_CAP_FLOWTABLE_RDMA_RX_MAX(mdev, cap) \
1331 - 	MLX5_CAP_FLOWTABLE_MAX(mdev, flow_table_properties_nic_receive_rdma.cap)
1332 - 
1333 1323 #define MLX5_CAP_FLOWTABLE_RDMA_TX(mdev, cap) \
1334 1324 	MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_transmit_rdma.cap)
1335 - 
1336 - #define MLX5_CAP_FLOWTABLE_RDMA_TX_MAX(mdev, cap) \
1337 - 	MLX5_CAP_FLOWTABLE_MAX(mdev, flow_table_properties_nic_transmit_rdma.cap)
1338 1325 
1339 1326 #define MLX5_CAP_ESW_FLOWTABLE(mdev, cap) \
1340 1327 	MLX5_GET(flow_table_eswitch_cap, \
1341 1328 		 mdev->caps.hca[MLX5_CAP_ESWITCH_FLOW_TABLE]->cur, cap)
1342 1329 
1343 - #define MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, cap) \
1344 - 	MLX5_GET(flow_table_eswitch_cap, \
1345 - 		 mdev->caps.hca[MLX5_CAP_ESWITCH_FLOW_TABLE]->max, cap)
1346 - 
1347 1330 #define MLX5_CAP_ESW_FLOWTABLE_FDB(mdev, cap) \
1348 1331 	MLX5_CAP_ESW_FLOWTABLE(mdev, flow_table_properties_nic_esw_fdb.cap)
1349 - 
1350 - #define MLX5_CAP_ESW_FLOWTABLE_FDB_MAX(mdev, cap) \
1351 - 	MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, flow_table_properties_nic_esw_fdb.cap)
1352 1332 
1353 1333 #define MLX5_CAP_ESW_EGRESS_ACL(mdev, cap) \
1354 1334 	MLX5_CAP_ESW_FLOWTABLE(mdev, flow_table_properties_esw_acl_egress.cap)
1355 1335 
1356 - #define MLX5_CAP_ESW_EGRESS_ACL_MAX(mdev, cap) \
1357 - 	MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, flow_table_properties_esw_acl_egress.cap)
1358 - 
1359 1336 #define MLX5_CAP_ESW_INGRESS_ACL(mdev, cap) \
1360 1337 	MLX5_CAP_ESW_FLOWTABLE(mdev, flow_table_properties_esw_acl_ingress.cap)
1361 1338 
1362 - #define MLX5_CAP_ESW_INGRESS_ACL_MAX(mdev, cap) \
1363 - 	MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, flow_table_properties_esw_acl_ingress.cap)
1364 - 
1365 1339 #define MLX5_CAP_ESW_FT_FIELD_SUPPORT_2(mdev, cap) \
1366 1340 	MLX5_CAP_ESW_FLOWTABLE(mdev, ft_field_support_2_esw_fdb.cap)
1367 - 
1368 - #define MLX5_CAP_ESW_FT_FIELD_SUPPORT_2_MAX(mdev, cap) \
1369 - 	MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, ft_field_support_2_esw_fdb.cap)
1370 1341 
1371 1342 #define MLX5_CAP_ESW(mdev, cap) \
1372 1343 	MLX5_GET(e_switch_cap, \
··· 1338 1383 #define MLX5_CAP64_ESW_FLOWTABLE(mdev, cap) \
1339 1384 	MLX5_GET64(flow_table_eswitch_cap, \
1340 1385 		(mdev)->caps.hca[MLX5_CAP_ESWITCH_FLOW_TABLE]->cur, cap)
1341 - 
1342 - #define MLX5_CAP_ESW_MAX(mdev, cap) \
1343 - 	MLX5_GET(e_switch_cap, \
1344 - 		 mdev->caps.hca[MLX5_CAP_ESWITCH]->max, cap)
1345 1386 
1346 1387 #define MLX5_CAP_PORT_SELECTION(mdev, cap) \
1347 1388 	MLX5_GET(port_selection_cap, \
··· 1351 1400 	MLX5_GET(adv_virtualization_cap, \
1352 1401 		 mdev->caps.hca[MLX5_CAP_ADV_VIRTUALIZATION]->cur, cap)
1353 1402 
1354 - #define MLX5_CAP_ADV_VIRTUALIZATION_MAX(mdev, cap) \
1355 - 	MLX5_GET(adv_virtualization_cap, \
1356 - 		 mdev->caps.hca[MLX5_CAP_ADV_VIRTUALIZATION]->max, cap)
1357 - 
1358 1403 #define MLX5_CAP_FLOWTABLE_PORT_SELECTION(mdev, cap) \
1359 1404 	MLX5_CAP_PORT_SELECTION(mdev, flow_table_properties_port_selection.cap)
1360 - 
1361 - #define MLX5_CAP_FLOWTABLE_PORT_SELECTION_MAX(mdev, cap) \
1362 - 	MLX5_CAP_PORT_SELECTION_MAX(mdev, flow_table_properties_port_selection.cap)
1363 1405 
1364 1406 #define MLX5_CAP_ODP(mdev, cap)\
1365 1407 	MLX5_GET(odp_cap, mdev->caps.hca[MLX5_CAP_ODP]->cur, cap)
1366 1408 
1367 1409 #define MLX5_CAP_ODP_MAX(mdev, cap)\
1368 1410 	MLX5_GET(odp_cap, mdev->caps.hca[MLX5_CAP_ODP]->max, cap)
1369 - 
1370 - #define MLX5_CAP_VECTOR_CALC(mdev, cap) \
1371 - 	MLX5_GET(vector_calc_cap, \
1372 - 		 mdev->caps.hca[MLX5_CAP_VECTOR_CALC]->cur, cap)
1373 1411 
1374 1412 #define MLX5_CAP_QOS(mdev, cap)\
1375 1413 	MLX5_GET(qos_cap, mdev->caps.hca[MLX5_CAP_QOS]->cur, cap)
··· 1375 1435 #define MLX5_CAP_MCAM_REG(mdev, reg) \
1376 1436 	MLX5_GET(mcam_reg, (mdev)->caps.mcam[MLX5_MCAM_REGS_FIRST_128], \
1377 1437 		 mng_access_reg_cap_mask.access_regs.reg)
1378 - 
1379 - #define MLX5_CAP_MCAM_REG1(mdev, reg) \
1380 - 	MLX5_GET(mcam_reg, (mdev)->caps.mcam[MLX5_MCAM_REGS_0x9080_0x90FF], \
1381 - 		 mng_access_reg_cap_mask.access_regs1.reg)
1382 1438 
1383 1439 #define MLX5_CAP_MCAM_REG2(mdev, reg) \
1384 1440 	MLX5_GET(mcam_reg, (mdev)->caps.mcam[MLX5_MCAM_REGS_0x9100_0x917F], \
··· 1420 1484 
1421 1485 #define MLX5_CAP_CRYPTO(mdev, cap)\
1422 1486 	MLX5_GET(crypto_cap, (mdev)->caps.hca[MLX5_CAP_CRYPTO]->cur, cap)
1423 - 
1424 - #define MLX5_CAP_DEV_SHAMPO(mdev, cap)\
1425 - 	MLX5_GET(shampo_cap, mdev->caps.hca_cur[MLX5_CAP_DEV_SHAMPO], cap)
1426 1487 
1427 1488 #define MLX5_CAP_MACSEC(mdev, cap)\
1428 1489 	MLX5_GET(macsec_cap, (mdev)->caps.hca[MLX5_CAP_MACSEC]->cur, cap)
-1
include/linux/mlx5/driver.h
··· 1022 1022 void mlx5_core_uplink_netdev_set(struct mlx5_core_dev *mdev, struct net_device *netdev);
1023 1023 void mlx5_core_uplink_netdev_event_replay(struct mlx5_core_dev *mdev);
1024 1024 
1025 - int mlx5_core_get_caps(struct mlx5_core_dev *dev, enum mlx5_cap_type cap_type);
1026 1025 void mlx5_health_cleanup(struct mlx5_core_dev *dev);
1027 1026 int mlx5_health_init(struct mlx5_core_dev *dev);
1028 1027 void mlx5_start_health_poll(struct mlx5_core_dev *dev);
+2 -44
include/linux/mlx5/mlx5_ifc.h
··· 1314 1314 	u8 reserved_at_120[0x6E0];
1315 1315 };
1316 1316 
1317 - struct mlx5_ifc_calc_op {
1318 - 	u8 reserved_at_0[0x10];
1319 - 	u8 reserved_at_10[0x9];
1320 - 	u8 op_swap_endianness[0x1];
1321 - 	u8 op_min[0x1];
1322 - 	u8 op_xor[0x1];
1323 - 	u8 op_or[0x1];
1324 - 	u8 op_and[0x1];
1325 - 	u8 op_max[0x1];
1326 - 	u8 op_add[0x1];
1327 - };
1328 - 
1329 - struct mlx5_ifc_vector_calc_cap_bits {
1330 - 	u8 calc_matrix[0x1];
1331 - 	u8 reserved_at_1[0x1f];
1332 - 	u8 reserved_at_20[0x8];
1333 - 	u8 max_vec_count[0x8];
1334 - 	u8 reserved_at_30[0xd];
1335 - 	u8 max_chunk_size[0x3];
1336 - 	struct mlx5_ifc_calc_op calc0;
1337 - 	struct mlx5_ifc_calc_op calc1;
1338 - 	struct mlx5_ifc_calc_op calc2;
1339 - 	struct mlx5_ifc_calc_op calc3;
1340 - 
1341 - 	u8 reserved_at_c0[0x720];
1342 - };
1343 - 
1344 1317 struct mlx5_ifc_tls_cap_bits {
1345 1318 	u8 tls_1_2_aes_gcm_128[0x1];
1346 1319 	u8 tls_1_3_aes_gcm_128[0x1];
··· 3408 3435 	u8 reserved_at_e0[0x20];
3409 3436 };
3410 3437 
3411 - struct mlx5_ifc_shampo_cap_bits {
3412 - 	u8 reserved_at_0[0x3];
3413 - 	u8 shampo_log_max_reservation_size[0x5];
3414 - 	u8 reserved_at_8[0x3];
3415 - 	u8 shampo_log_min_reservation_size[0x5];
3416 - 	u8 shampo_min_mss_size[0x10];
3417 - 
3418 - 	u8 reserved_at_20[0x3];
3419 - 	u8 shampo_max_log_headers_entry_size[0x5];
3420 - 	u8 reserved_at_28[0x18];
3421 - 
3422 - 	u8 reserved_at_40[0x7c0];
3423 - };
3424 - 
3425 3438 struct mlx5_ifc_crypto_cap_bits {
3426 3439 	u8 reserved_at_0[0x3];
3427 3440 	u8 synchronize_dek[0x1];
··· 3443 3484 	struct mlx5_ifc_flow_table_eswitch_cap_bits flow_table_eswitch_cap;
3444 3485 	struct mlx5_ifc_e_switch_cap_bits e_switch_cap;
3445 3486 	struct mlx5_ifc_port_selection_cap_bits port_selection_cap;
3446 - 	struct mlx5_ifc_vector_calc_cap_bits vector_calc_cap;
3447 3487 	struct mlx5_ifc_qos_cap_bits qos_cap;
3448 3488 	struct mlx5_ifc_debug_cap_bits debug_cap;
3449 3489 	struct mlx5_ifc_fpga_cap_bits fpga_cap;
3450 3490 	struct mlx5_ifc_tls_cap_bits tls_cap;
3451 3491 	struct mlx5_ifc_device_mem_cap_bits device_mem_cap;
3452 3492 	struct mlx5_ifc_virtio_emulation_cap_bits virtio_emulation_cap;
3453 - 	struct mlx5_ifc_shampo_cap_bits shampo_cap;
3454 3493 	struct mlx5_ifc_macsec_cap_bits macsec_cap;
3455 3494 	struct mlx5_ifc_crypto_cap_bits crypto_cap;
3456 3495 	u8 reserved_at_0[0x8000];
··· 10815 10858 	MLX5_MFRL_REG_RESET_STATE_IDLE = 0,
10816 10859 	MLX5_MFRL_REG_RESET_STATE_IN_NEGOTIATION = 1,
10817 10860 	MLX5_MFRL_REG_RESET_STATE_RESET_IN_PROGRESS = 2,
10818 - 	MLX5_MFRL_REG_RESET_STATE_TIMEOUT = 3,
10861 + 	MLX5_MFRL_REG_RESET_STATE_NEG_TIMEOUT = 3,
10819 10862 	MLX5_MFRL_REG_RESET_STATE_NACK = 4,
10863 + 	MLX5_MFRL_REG_RESET_STATE_UNLOAD_TIMEOUT = 5,
10820 10864 };
10821 10865 
10822 10866 enum {