Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mlx5-updates-2020-11-03' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2020-11-03

This series includes updates to the mlx5 software steering component.

1) A few improvements in the DR area, such as removing unneeded checks,
renaming to more general names, refactoring in some places, etc.

2) Software steering (DR) memory management improvements

This patch series contains SW steering memory management improvements:
using a buddy allocator instead of the existing bucket allocator, and
several other optimizations.

The buddy system is a memory allocation and management algorithm
that manages memory in power-of-two increments.

The algorithm is well-known and well-described, such as here:
https://en.wikipedia.org/wiki/Buddy_memory_allocation

Linux uses this algorithm for managing and allocating physical pages,
as described here:
https://www.kernel.org/doc/gorman/html/understand/understand009.html
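The split/merge mechanics referenced above can be sketched in a few lines of C. This is a toy model written for this summary, not driver code: one free-bitmap per order, allocation splits a larger block downward (marking each upper-half "buddy" free), and freeing coalesces with the buddy while it is also free.

```c
#include <assert.h>
#include <string.h>

#define MAX_ORDER 4 /* toy pool: manages 2^4 = 16 unit segments */

/* free_bit[o][s] != 0 means block s of order o (covering 2^o units) is free */
static unsigned char free_bit[MAX_ORDER + 1][1 << MAX_ORDER];

static void buddy_init(void)
{
	memset(free_bit, 0, sizeof(free_bit));
	free_bit[MAX_ORDER][0] = 1; /* one free block at the top order */
}

/* Returns the start unit of a free 2^order block, or -1 if none. */
static int buddy_alloc(unsigned int order)
{
	unsigned int o, seg;

	/* search upward for the smallest free block of sufficient order */
	for (o = order; o <= MAX_ORDER; o++)
		for (seg = 0; seg < (1u << (MAX_ORDER - o)); seg++)
			if (free_bit[o][seg])
				goto found;
	return -1;
found:
	free_bit[o][seg] = 0;
	/* split downward: each split marks the upper half (the buddy) free */
	while (o > order) {
		o--;
		seg <<= 1;
		free_bit[o][seg ^ 1] = 1;
	}
	return (int)(seg << order);
}

static void buddy_free(int start, unsigned int order)
{
	unsigned int seg = (unsigned int)start >> order;

	/* coalesce upward while the buddy block is also free */
	while (order < MAX_ORDER && free_bit[order][seg ^ 1]) {
		free_bit[order][seg ^ 1] = 0;
		seg >>= 1;
		order++;
	}
	free_bit[order][seg] = 1;
}
```

The dr_buddy.c added by this series uses the same segment/order bit tricks (`seg <<= 1`, `seg ^ 1`), just over kernel bitmaps with per-order free counts.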

In our case, although the algorithm in principle is similar to the
Linux physical page allocator, the "building blocks" and the circumstances
are different: in SW steering, the buddy allocator doesn't really allocate
memory, but rather manages ICM (Interconnect Context Memory) that was
previously allocated and registered.

The ICM memory that is used in SW steering is always a power
of 2 (an order), so the buddy system is a good fit for this.
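Since the buddy here only manages a pre-registered region, an allocation result is just a segment index, and turning it into a device address is offset arithmetic (this is how dr_icm_pool.c in this series derives chunk addresses). A minimal sketch with illustrative names:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for the ICM region the driver registers up front. */
struct icm_region {
	uint64_t icm_start_addr; /* device address of the registered ICM */
	size_t entry_size;       /* bytes per buddy segment unit */
};

/* A buddy allocation yields a segment index; the device address is
 * derived by offset arithmetic - no memory is allocated here. */
static uint64_t icm_seg_to_addr(const struct icm_region *r, unsigned int seg)
{
	return r->icm_start_addr + (uint64_t)seg * r->entry_size;
}

static const struct icm_region demo_region = { 0x1000, 64 };
```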

Patches in this series:

[PATCH 4] net/mlx5: DR, Add buddy allocator utilities
This patch adds a modified implementation of a well-known buddy allocator,
adjusted for SW steering needs: the algorithm in principle is similar to
the Linux physical page allocator, but in our case the buddy allocator doesn't
really allocate memory, but rather manages ICM memory that was previously
allocated and registered.

[PATCH 5] net/mlx5: DR, Handle ICM memory via buddy allocation instead of bucket management
This patch changes the ICM management of SW steering to use a buddy-system mechanism
instead of the previous bucket management.

[PATCH 6] net/mlx5: DR, Sync chunks only during free
This patch makes syncing happen only when freeing memory chunks.

[PATCH 7] net/mlx5: DR, ICM memory pools sync optimization
This patch adds tracking of the pool's "hot" memory and makes the
check of whether steering sync is required much shorter and faster.
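The optimization can be sketched as a single running counter: every freed chunk adds its size to a pool-level "hot" byte count, and the sync decision becomes one comparison against the 64 MB threshold (DR_ICM_SYNC_THRESHOLD_POOL in this series) instead of walking per-bucket hot lists. The globals and function names below are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* 64 MB threshold, matching DR_ICM_SYNC_THRESHOLD_POOL in this series */
#define SYNC_THRESHOLD_POOL (64ULL * 1024 * 1024)

static unsigned long long hot_memory_size; /* bytes freed but not yet synced */

/* a freed chunk moves to the hot list; only the counter grows here */
static void pool_track_hot(size_t chunk_bytes)
{
	hot_memory_size += chunk_bytes;
}

/* O(1) decision, replacing the old walk over per-bucket hot lists */
static bool pool_sync_required(void)
{
	return hot_memory_size > SYNC_THRESHOLD_POOL;
}
```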

[PATCH 8] net/mlx5: DR, Free buddy ICM memory if it is unused
This patch adds tracking of each buddy's used ICM memory,
and frees the buddy if all its memory becomes unused.
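The buddy-reclaim logic reduces to per-buddy byte accounting: bump a used-memory counter when a chunk is carved out, decrement it on destroy, and tear the whole buddy (and its ICM registration) down once the counter reaches zero. An illustrative sketch, not the driver's API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative per-buddy accounting, not the driver's struct. */
struct buddy_mem {
	size_t used_memory; /* bytes of ICM currently handed out as chunks */
};

static void buddy_chunk_created(struct buddy_mem *b, size_t bytes)
{
	b->used_memory += bytes;
}

/* Returns true when the buddy holds no used chunks and may be destroyed,
 * releasing its ICM allocation back to the system. */
static bool buddy_chunk_destroyed(struct buddy_mem *b, size_t bytes)
{
	b->used_memory -= bytes;
	return b->used_memory == 0;
}

static struct buddy_mem demo_buddy; /* for the usage check below */
```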

3) Misc code cleanups

* tag 'mlx5-updates-2020-11-03' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
net: mlx5: Replace in_irq() usage
net/mlx5: Cleanup kernel-doc warnings
net/mlx4: Cleanup kernel-doc warnings
net/mlx5e: Validate stop_room size upon user input
net/mlx5: DR, Free unused buddy ICM memory
net/mlx5: DR, ICM memory pools sync optimization
net/mlx5: DR, Sync chunks only during free
net/mlx5: DR, Handle ICM memory via buddy allocation instead of buckets
net/mlx5: DR, Add buddy allocator utilities
net/mlx5: DR, Rename matcher functions to be more HW agnostic
net/mlx5: DR, Rename builders HW specific names
net/mlx5: DR, Remove unused member of action struct
====================

Link: https://lore.kernel.org/r/20201105201242.21716-1-saeedm@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+593 -469
+1 -1
drivers/net/ethernet/mellanox/mlx4/fw_qos.h
··· 135 135 * @dev: mlx4_dev. 136 136 * @port: Physical port number. 137 137 * @vport: Vport id. 138 - * @out_param: Array of mlx4_vport_qos_param which holds the requested values. 138 + * @in_param: Array of mlx4_vport_qos_param which holds the requested values. 139 139 * 140 140 * Returns 0 on success or a negative mlx4_core errno code. 141 141 **/
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/Makefile
··· 81 81 82 82 mlx5_core-$(CONFIG_MLX5_SW_STEERING) += steering/dr_domain.o steering/dr_table.o \ 83 83 steering/dr_matcher.o steering/dr_rule.o \ 84 - steering/dr_icm_pool.o \ 84 + steering/dr_icm_pool.o steering/dr_buddy.o \ 85 85 steering/dr_ste.o steering/dr_send.o \ 86 86 steering/dr_cmd.o steering/dr_fw.o \ 87 87 steering/dr_action.o steering/fs_dr.o
+34
drivers/net/ethernet/mellanox/mlx5/core/en/params.c
··· 2 2 /* Copyright (c) 2019 Mellanox Technologies. */ 3 3 4 4 #include "en/params.h" 5 + #include "en/txrx.h" 6 + #include "en_accel/tls_rxtx.h" 5 7 6 8 static inline bool mlx5e_rx_is_xdp(struct mlx5e_params *params, 7 9 struct mlx5e_xsk_param *xsk) ··· 153 151 mlx5e_rx_mpwqe_is_linear_skb(mdev, params, xsk); 154 152 155 153 return is_linear_skb ? mlx5e_get_linear_rq_headroom(params, xsk) : 0; 154 + } 155 + 156 + u16 mlx5e_calc_sq_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *params) 157 + { 158 + bool is_mpwqe = MLX5E_GET_PFLAG(params, MLX5E_PFLAG_SKB_TX_MPWQE); 159 + u16 stop_room; 160 + 161 + stop_room = mlx5e_tls_get_stop_room(mdev, params); 162 + stop_room += mlx5e_stop_room_for_wqe(MLX5_SEND_WQE_MAX_WQEBBS); 163 + if (is_mpwqe) 164 + /* A MPWQE can take up to the maximum-sized WQE + all the normal 165 + * stop room can be taken if a new packet breaks the active 166 + * MPWQE session and allocates its WQEs right away. 167 + */ 168 + stop_room += mlx5e_stop_room_for_wqe(MLX5_SEND_WQE_MAX_WQEBBS); 169 + 170 + return stop_room; 171 + } 172 + 173 + int mlx5e_validate_params(struct mlx5e_priv *priv, struct mlx5e_params *params) 174 + { 175 + size_t sq_size = 1 << params->log_sq_size; 176 + u16 stop_room; 177 + 178 + stop_room = mlx5e_calc_sq_stop_room(priv->mdev, params); 179 + if (stop_room >= sq_size) { 180 + netdev_err(priv->netdev, "Stop room %hu is bigger than the SQ size %zu\n", 181 + stop_room, sq_size); 182 + return -EINVAL; 183 + } 184 + 185 + return 0; 156 186 }
+4
drivers/net/ethernet/mellanox/mlx5/core/en/params.h
··· 30 30 u32 sqc[MLX5_ST_SZ_DW(sqc)]; 31 31 struct mlx5_wq_param wq; 32 32 bool is_mpw; 33 + u16 stop_room; 33 34 }; 34 35 35 36 struct mlx5e_channel_param { ··· 124 123 void mlx5e_build_xdpsq_param(struct mlx5e_priv *priv, 125 124 struct mlx5e_params *params, 126 125 struct mlx5e_sq_param *param); 126 + 127 + u16 mlx5e_calc_sq_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *params); 128 + int mlx5e_validate_params(struct mlx5e_priv *priv, struct mlx5e_params *params); 127 129 128 130 #endif /* __MLX5_EN_PARAMS_H__ */
+4 -4
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
··· 13 13 (DIV_ROUND_UP(sizeof(struct mlx5e_dump_wqe), MLX5_SEND_WQE_BB)) 14 14 15 15 static u8 16 - mlx5e_ktls_dumps_num_wqes(struct mlx5e_txqsq *sq, unsigned int nfrags, 16 + mlx5e_ktls_dumps_num_wqes(struct mlx5e_params *params, unsigned int nfrags, 17 17 unsigned int sync_len) 18 18 { 19 19 /* Given the MTU and sync_len, calculates an upper bound for the 20 20 * number of DUMP WQEs needed for the TX resync of a record. 21 21 */ 22 - return nfrags + DIV_ROUND_UP(sync_len, sq->hw_mtu); 22 + return nfrags + DIV_ROUND_UP(sync_len, MLX5E_SW2HW_MTU(params, params->sw_mtu)); 23 23 } 24 24 25 - u16 mlx5e_ktls_get_stop_room(struct mlx5e_txqsq *sq) 25 + u16 mlx5e_ktls_get_stop_room(struct mlx5e_params *params) 26 26 { 27 27 u16 num_dumps, stop_room = 0; 28 28 29 - num_dumps = mlx5e_ktls_dumps_num_wqes(sq, MAX_SKB_FRAGS, TLS_MAX_PAYLOAD_SIZE); 29 + num_dumps = mlx5e_ktls_dumps_num_wqes(params, MAX_SKB_FRAGS, TLS_MAX_PAYLOAD_SIZE); 30 30 31 31 stop_room += mlx5e_stop_room_for_wqe(MLX5E_TLS_SET_STATIC_PARAMS_WQEBBS); 32 32 stop_room += mlx5e_stop_room_for_wqe(MLX5E_TLS_SET_PROGRESS_PARAMS_WQEBBS);
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_txrx.h
··· 14 14 u32 tls_tisn; 15 15 }; 16 16 17 - u16 mlx5e_ktls_get_stop_room(struct mlx5e_txqsq *sq); 17 + u16 mlx5e_ktls_get_stop_room(struct mlx5e_params *params); 18 18 19 19 bool mlx5e_ktls_handle_tx_skb(struct tls_context *tls_ctx, struct mlx5e_txqsq *sq, 20 20 struct sk_buff *skb, int datalen,
+2 -4
drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.c
··· 385 385 *cqe_bcnt -= MLX5E_METADATA_ETHER_LEN; 386 386 } 387 387 388 - u16 mlx5e_tls_get_stop_room(struct mlx5e_txqsq *sq) 388 + u16 mlx5e_tls_get_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *params) 389 389 { 390 - struct mlx5_core_dev *mdev = sq->channel->mdev; 391 - 392 390 if (!mlx5_accel_is_tls_device(mdev)) 393 391 return 0; 394 392 395 393 if (mlx5_accel_is_ktls_device(mdev)) 396 - return mlx5e_ktls_get_stop_room(sq); 394 + return mlx5e_ktls_get_stop_room(params); 397 395 398 396 /* FPGA */ 399 397 /* Resync SKB. */
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.h
··· 43 43 #include "en.h" 44 44 #include "en/txrx.h" 45 45 46 - u16 mlx5e_tls_get_stop_room(struct mlx5e_txqsq *sq); 46 + u16 mlx5e_tls_get_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *params); 47 47 48 48 bool mlx5e_tls_handle_tx_skb(struct net_device *netdev, struct mlx5e_txqsq *sq, 49 49 struct sk_buff *skb, struct mlx5e_accel_tx_tls_state *state); ··· 71 71 static inline void 72 72 mlx5e_tls_handle_rx_skb(struct mlx5e_rq *rq, struct sk_buff *skb, 73 73 struct mlx5_cqe64 *cqe, u32 *cqe_bcnt) {} 74 - static inline u16 mlx5e_tls_get_stop_room(struct mlx5e_txqsq *sq) 74 + static inline u16 mlx5e_tls_get_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *params) 75 75 { 76 76 return 0; 77 77 }
+5
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
··· 32 32 33 33 #include "en.h" 34 34 #include "en/port.h" 35 + #include "en/params.h" 35 36 #include "en/xsk/pool.h" 36 37 #include "lib/clock.h" 37 38 ··· 369 368 new_channels.params = priv->channels.params; 370 369 new_channels.params.log_rq_mtu_frames = log_rq_size; 371 370 new_channels.params.log_sq_size = log_sq_size; 371 + 372 + err = mlx5e_validate_params(priv, &new_channels.params); 373 + if (err) 374 + goto unlock; 372 375 373 376 if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { 374 377 priv->channels.params = new_channels.params;
+5 -25
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 1121 1121 return 0; 1122 1122 } 1123 1123 1124 - static int mlx5e_calc_sq_stop_room(struct mlx5e_txqsq *sq, u8 log_sq_size) 1125 - { 1126 - int sq_size = 1 << log_sq_size; 1127 - 1128 - sq->stop_room = mlx5e_tls_get_stop_room(sq); 1129 - sq->stop_room += mlx5e_stop_room_for_wqe(MLX5_SEND_WQE_MAX_WQEBBS); 1130 - if (test_bit(MLX5E_SQ_STATE_MPWQE, &sq->state)) 1131 - /* A MPWQE can take up to the maximum-sized WQE + all the normal 1132 - * stop room can be taken if a new packet breaks the active 1133 - * MPWQE session and allocates its WQEs right away. 1134 - */ 1135 - sq->stop_room += mlx5e_stop_room_for_wqe(MLX5_SEND_WQE_MAX_WQEBBS); 1136 - 1137 - if (WARN_ON(sq->stop_room >= sq_size)) { 1138 - netdev_err(sq->channel->netdev, "Stop room %hu is bigger than the SQ size %d\n", 1139 - sq->stop_room, sq_size); 1140 - return -ENOSPC; 1141 - } 1142 - 1143 - return 0; 1144 - } 1145 - 1146 1124 static void mlx5e_tx_err_cqe_work(struct work_struct *recover_work); 1147 1125 static int mlx5e_alloc_txqsq(struct mlx5e_channel *c, 1148 1126 int txq_ix, ··· 1154 1176 set_bit(MLX5E_SQ_STATE_TLS, &sq->state); 1155 1177 if (param->is_mpw) 1156 1178 set_bit(MLX5E_SQ_STATE_MPWQE, &sq->state); 1157 - err = mlx5e_calc_sq_stop_room(sq, params->log_sq_size); 1158 - if (err) 1159 - return err; 1179 + sq->stop_room = param->stop_room; 1160 1180 1161 1181 param->wq.db_numa_node = cpu_to_node(c->cpu); 1162 1182 err = mlx5_wq_cyc_create(mdev, &param->wq, sqc_wq, wq, &sq->wq_ctrl); ··· 2201 2225 MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size); 2202 2226 MLX5_SET(sqc, sqc, allow_swp, allow_swp); 2203 2227 param->is_mpw = MLX5E_GET_PFLAG(params, MLX5E_PFLAG_SKB_TX_MPWQE); 2228 + param->stop_room = mlx5e_calc_sq_stop_room(priv->mdev, params); 2204 2229 mlx5e_build_tx_cq_param(priv, params, &param->cqp); 2205 2230 } 2206 2231 ··· 3976 3999 3977 4000 new_channels.params = *params; 3978 4001 new_channels.params.sw_mtu = new_mtu; 4002 + err = mlx5e_validate_params(priv, &new_channels.params); 4003 
+ if (err) 4004 + goto out; 3979 4005 3980 4006 if (params->xdp_prog && 3981 4007 !mlx5e_rx_is_linear_skb(&new_channels.params, NULL)) {
+11 -7
drivers/net/ethernet/mellanox/mlx5/core/eq.c
··· 189 189 return count_eqe; 190 190 } 191 191 192 - static void mlx5_eq_async_int_lock(struct mlx5_eq_async *eq, unsigned long *flags) 192 + static void mlx5_eq_async_int_lock(struct mlx5_eq_async *eq, bool recovery, 193 + unsigned long *flags) 193 194 __acquires(&eq->lock) 194 195 { 195 - if (in_irq()) 196 + if (!recovery) 196 197 spin_lock(&eq->lock); 197 198 else 198 199 spin_lock_irqsave(&eq->lock, *flags); 199 200 } 200 201 201 - static void mlx5_eq_async_int_unlock(struct mlx5_eq_async *eq, unsigned long *flags) 202 + static void mlx5_eq_async_int_unlock(struct mlx5_eq_async *eq, bool recovery, 203 + unsigned long *flags) 202 204 __releases(&eq->lock) 203 205 { 204 - if (in_irq()) 206 + if (!recovery) 205 207 spin_unlock(&eq->lock); 206 208 else 207 209 spin_unlock_irqrestore(&eq->lock, *flags); ··· 225 223 struct mlx5_eqe *eqe; 226 224 unsigned long flags; 227 225 int num_eqes = 0; 226 + bool recovery; 228 227 229 228 dev = eq->dev; 230 229 eqt = dev->priv.eq_table; 231 230 232 - mlx5_eq_async_int_lock(eq_async, &flags); 231 + recovery = action == ASYNC_EQ_RECOVER; 232 + mlx5_eq_async_int_lock(eq_async, recovery, &flags); 233 233 234 234 eqe = next_eqe_sw(eq); 235 235 if (!eqe) ··· 253 249 254 250 out: 255 251 eq_update_ci(eq, 1); 256 - mlx5_eq_async_int_unlock(eq_async, &flags); 252 + mlx5_eq_async_int_unlock(eq_async, recovery, &flags); 257 253 258 - return unlikely(action == ASYNC_EQ_RECOVER) ? num_eqes : 0; 254 + return unlikely(recovery) ? num_eqes : 0; 259 255 } 260 256 261 257 void mlx5_cmd_eq_recover(struct mlx5_core_dev *dev)
+5 -3
drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h
··· 47 47 /** 48 48 * enum mlx5_fpga_access_type - Enumerated the different methods possible for 49 49 * accessing the device memory address space 50 + * 51 + * @MLX5_FPGA_ACCESS_TYPE_I2C: Use the slow CX-FPGA I2C bus 52 + * @MLX5_FPGA_ACCESS_TYPE_DONTCARE: Use the fastest available method 50 53 */ 51 54 enum mlx5_fpga_access_type { 52 - /** Use the slow CX-FPGA I2C bus */ 53 55 MLX5_FPGA_ACCESS_TYPE_I2C = 0x0, 54 - /** Use the fastest available method */ 55 56 MLX5_FPGA_ACCESS_TYPE_DONTCARE = 0x0, 56 57 }; 57 58 ··· 114 113 * subsequent receives. 115 114 */ 116 115 void (*recv_cb)(void *cb_arg, struct mlx5_fpga_dma_buf *buf); 116 + /** @cb_arg: A context to be passed to recv_cb callback */ 117 117 void *cb_arg; 118 118 }; 119 119 ··· 147 145 148 146 /** 149 147 * mlx5_fpga_sbu_conn_sendmsg() - Queue the transmission of a packet 150 - * @fdev: An FPGA SBU connection 148 + * @conn: An FPGA SBU connection 151 149 * @buf: The packet buffer 152 150 * 153 151 * Queues a packet for transmission over an FPGA SBU connection.
+170
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_buddy.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB 2 + /* Copyright (c) 2004 Topspin Communications. All rights reserved. 3 + * Copyright (c) 2005 - 2008 Mellanox Technologies. All rights reserved. 4 + * Copyright (c) 2006 - 2007 Cisco Systems, Inc. All rights reserved. 5 + * Copyright (c) 2020 NVIDIA CORPORATION. All rights reserved. 6 + */ 7 + 8 + #include "dr_types.h" 9 + 10 + int mlx5dr_buddy_init(struct mlx5dr_icm_buddy_mem *buddy, 11 + unsigned int max_order) 12 + { 13 + int i; 14 + 15 + buddy->max_order = max_order; 16 + 17 + INIT_LIST_HEAD(&buddy->list_node); 18 + INIT_LIST_HEAD(&buddy->used_list); 19 + INIT_LIST_HEAD(&buddy->hot_list); 20 + 21 + buddy->bitmap = kcalloc(buddy->max_order + 1, 22 + sizeof(*buddy->bitmap), 23 + GFP_KERNEL); 24 + buddy->num_free = kcalloc(buddy->max_order + 1, 25 + sizeof(*buddy->num_free), 26 + GFP_KERNEL); 27 + 28 + if (!buddy->bitmap || !buddy->num_free) 29 + goto err_free_all; 30 + 31 + /* Allocating max_order bitmaps, one for each order */ 32 + 33 + for (i = 0; i <= buddy->max_order; ++i) { 34 + unsigned int size = 1 << (buddy->max_order - i); 35 + 36 + buddy->bitmap[i] = bitmap_zalloc(size, GFP_KERNEL); 37 + if (!buddy->bitmap[i]) 38 + goto err_out_free_each_bit_per_order; 39 + } 40 + 41 + /* In the beginning, we have only one order that is available for 42 + * use (the biggest one), so mark the first bit in both bitmaps. 
43 + */ 44 + 45 + bitmap_set(buddy->bitmap[buddy->max_order], 0, 1); 46 + 47 + buddy->num_free[buddy->max_order] = 1; 48 + 49 + return 0; 50 + 51 + err_out_free_each_bit_per_order: 52 + for (i = 0; i <= buddy->max_order; ++i) 53 + bitmap_free(buddy->bitmap[i]); 54 + 55 + err_free_all: 56 + kfree(buddy->num_free); 57 + kfree(buddy->bitmap); 58 + return -ENOMEM; 59 + } 60 + 61 + void mlx5dr_buddy_cleanup(struct mlx5dr_icm_buddy_mem *buddy) 62 + { 63 + int i; 64 + 65 + list_del(&buddy->list_node); 66 + 67 + for (i = 0; i <= buddy->max_order; ++i) 68 + bitmap_free(buddy->bitmap[i]); 69 + 70 + kfree(buddy->num_free); 71 + kfree(buddy->bitmap); 72 + } 73 + 74 + static int dr_buddy_find_free_seg(struct mlx5dr_icm_buddy_mem *buddy, 75 + unsigned int start_order, 76 + unsigned int *segment, 77 + unsigned int *order) 78 + { 79 + unsigned int seg, order_iter, m; 80 + 81 + for (order_iter = start_order; 82 + order_iter <= buddy->max_order; ++order_iter) { 83 + if (!buddy->num_free[order_iter]) 84 + continue; 85 + 86 + m = 1 << (buddy->max_order - order_iter); 87 + seg = find_first_bit(buddy->bitmap[order_iter], m); 88 + 89 + if (WARN(seg >= m, 90 + "ICM Buddy: failed finding free mem for order %d\n", 91 + order_iter)) 92 + return -ENOMEM; 93 + 94 + break; 95 + } 96 + 97 + if (order_iter > buddy->max_order) 98 + return -ENOMEM; 99 + 100 + *segment = seg; 101 + *order = order_iter; 102 + return 0; 103 + } 104 + 105 + /** 106 + * mlx5dr_buddy_alloc_mem() - Update second level bitmap. 107 + * @buddy: Buddy to update. 108 + * @order: Order of the buddy to update. 109 + * @segment: Segment number. 110 + * 111 + * This function finds the first area of the ICM memory managed by this buddy. 112 + * It uses the data structures of the buddy system in order to find the first 113 + * area of free place, starting from the current order till the maximum order 114 + * in the system. 115 + * 116 + * Return: 0 when segment is set, non-zero error status otherwise. 
117 + * 118 + * The function returns the location (segment) in the whole buddy ICM memory 119 + * area - the index of the memory segment that is available for use. 120 + */ 121 + int mlx5dr_buddy_alloc_mem(struct mlx5dr_icm_buddy_mem *buddy, 122 + unsigned int order, 123 + unsigned int *segment) 124 + { 125 + unsigned int seg, order_iter; 126 + int err; 127 + 128 + err = dr_buddy_find_free_seg(buddy, order, &seg, &order_iter); 129 + if (err) 130 + return err; 131 + 132 + bitmap_clear(buddy->bitmap[order_iter], seg, 1); 133 + --buddy->num_free[order_iter]; 134 + 135 + /* If we found free memory in some order that is bigger than the 136 + * required order, we need to split every order between the required 137 + * order and the order that we found into two parts, and mark accordingly. 138 + */ 139 + while (order_iter > order) { 140 + --order_iter; 141 + seg <<= 1; 142 + bitmap_set(buddy->bitmap[order_iter], seg ^ 1, 1); 143 + ++buddy->num_free[order_iter]; 144 + } 145 + 146 + seg <<= order; 147 + *segment = seg; 148 + 149 + return 0; 150 + } 151 + 152 + void mlx5dr_buddy_free_mem(struct mlx5dr_icm_buddy_mem *buddy, 153 + unsigned int seg, unsigned int order) 154 + { 155 + seg >>= order; 156 + 157 + /* Whenever a segment is free, 158 + * the mem is added to the buddy that gave it. 159 + */ 160 + while (test_bit(seg ^ 1, buddy->bitmap[order])) { 161 + bitmap_clear(buddy->bitmap[order], seg ^ 1, 1); 162 + --buddy->num_free[order]; 163 + seg >>= 1; 164 + ++order; 165 + } 166 + bitmap_set(buddy->bitmap[order], seg, 1); 167 + 168 + ++buddy->num_free[order]; 169 + } 170 +
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c
··· 93 93 caps->gvmi = MLX5_CAP_GEN(mdev, vhca_id); 94 94 caps->flex_protocols = MLX5_CAP_GEN(mdev, flex_parser_protocols); 95 95 96 - if (mlx5dr_matcher_supp_flex_parser_icmp_v4(caps)) { 96 + if (caps->flex_protocols & MLX5_FLEX_PARSER_ICMP_V4_ENABLED) { 97 97 caps->flex_parser_id_icmp_dw0 = MLX5_CAP_GEN(mdev, flex_parser_id_icmp_dw0); 98 98 caps->flex_parser_id_icmp_dw1 = MLX5_CAP_GEN(mdev, flex_parser_id_icmp_dw1); 99 99 } 100 100 101 - if (mlx5dr_matcher_supp_flex_parser_icmp_v6(caps)) { 101 + if (caps->flex_protocols & MLX5_FLEX_PARSER_ICMP_V6_ENABLED) { 102 102 caps->flex_parser_id_icmpv6_dw0 = 103 103 MLX5_CAP_GEN(mdev, flex_parser_id_icmpv6_dw0); 104 104 caps->flex_parser_id_icmpv6_dw1 =
+195 -310
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
··· 4 4 #include "dr_types.h" 5 5 6 6 #define DR_ICM_MODIFY_HDR_ALIGN_BASE 64 7 - #define DR_ICM_SYNC_THRESHOLD (64 * 1024 * 1024) 8 - 9 - struct mlx5dr_icm_pool; 10 - 11 - struct mlx5dr_icm_bucket { 12 - struct mlx5dr_icm_pool *pool; 13 - 14 - /* Chunks that aren't visible to HW not directly and not in cache */ 15 - struct list_head free_list; 16 - unsigned int free_list_count; 17 - 18 - /* Used chunks, HW may be accessing this memory */ 19 - struct list_head used_list; 20 - unsigned int used_list_count; 21 - 22 - /* HW may be accessing this memory but at some future, 23 - * undetermined time, it might cease to do so. Before deciding to call 24 - * sync_ste, this list is moved to sync_list 25 - */ 26 - struct list_head hot_list; 27 - unsigned int hot_list_count; 28 - 29 - /* Pending sync list, entries from the hot list are moved to this list. 30 - * sync_ste is executed and then sync_list is concatenated to the free list 31 - */ 32 - struct list_head sync_list; 33 - unsigned int sync_list_count; 34 - 35 - u32 total_chunks; 36 - u32 num_of_entries; 37 - u32 entry_size; 38 - /* protect the ICM bucket */ 39 - struct mutex mutex; 40 - }; 7 + #define DR_ICM_SYNC_THRESHOLD_POOL (64 * 1024 * 1024) 41 8 42 9 struct mlx5dr_icm_pool { 43 - struct mlx5dr_icm_bucket *buckets; 44 10 enum mlx5dr_icm_type icm_type; 45 11 enum mlx5dr_icm_chunk_size max_log_chunk_sz; 46 - enum mlx5dr_icm_chunk_size num_of_buckets; 47 - struct list_head icm_mr_list; 48 - /* protect the ICM MR list */ 49 - struct mutex mr_mutex; 50 12 struct mlx5dr_domain *dmn; 13 + /* memory management */ 14 + struct mutex mutex; /* protect the ICM pool and ICM buddy */ 15 + struct list_head buddy_mem_list; 16 + u64 hot_memory_size; 51 17 }; 52 18 53 19 struct mlx5dr_icm_dm { ··· 24 58 }; 25 59 26 60 struct mlx5dr_icm_mr { 27 - struct mlx5dr_icm_pool *pool; 28 61 struct mlx5_core_mkey mkey; 29 62 struct mlx5dr_icm_dm dm; 30 - size_t used_length; 63 + struct mlx5dr_domain *dmn; 31 64 size_t length; 32 65 u64 
icm_start_addr; 33 - struct list_head mr_list; 34 66 }; 35 67 36 68 static int dr_icm_create_dm_mkey(struct mlx5_core_dev *mdev, ··· 71 107 if (!icm_mr) 72 108 return NULL; 73 109 74 - icm_mr->pool = pool; 75 - INIT_LIST_HEAD(&icm_mr->mr_list); 110 + icm_mr->dmn = pool->dmn; 76 111 77 112 icm_mr->dm.length = mlx5dr_icm_pool_chunk_size_to_byte(pool->max_log_chunk_sz, 78 113 pool->icm_type); ··· 113 150 goto free_mkey; 114 151 } 115 152 116 - list_add_tail(&icm_mr->mr_list, &pool->icm_mr_list); 117 - 118 153 return icm_mr; 119 154 120 155 free_mkey: ··· 127 166 128 167 static void dr_icm_pool_mr_destroy(struct mlx5dr_icm_mr *icm_mr) 129 168 { 130 - struct mlx5_core_dev *mdev = icm_mr->pool->dmn->mdev; 169 + struct mlx5_core_dev *mdev = icm_mr->dmn->mdev; 131 170 struct mlx5dr_icm_dm *dm = &icm_mr->dm; 132 171 133 - list_del(&icm_mr->mr_list); 134 172 mlx5_core_destroy_mkey(mdev, &icm_mr->mkey); 135 173 mlx5_dm_sw_icm_dealloc(mdev, dm->type, dm->length, 0, 136 174 dm->addr, dm->obj_id); ··· 138 178 139 179 static int dr_icm_chunk_ste_init(struct mlx5dr_icm_chunk *chunk) 140 180 { 141 - struct mlx5dr_icm_bucket *bucket = chunk->bucket; 142 - 143 - chunk->ste_arr = kvzalloc(bucket->num_of_entries * 181 + chunk->ste_arr = kvzalloc(chunk->num_of_entries * 144 182 sizeof(chunk->ste_arr[0]), GFP_KERNEL); 145 183 if (!chunk->ste_arr) 146 184 return -ENOMEM; 147 185 148 - chunk->hw_ste_arr = kvzalloc(bucket->num_of_entries * 186 + chunk->hw_ste_arr = kvzalloc(chunk->num_of_entries * 149 187 DR_STE_SIZE_REDUCED, GFP_KERNEL); 150 188 if (!chunk->hw_ste_arr) 151 189 goto out_free_ste_arr; 152 190 153 - chunk->miss_list = kvmalloc(bucket->num_of_entries * 191 + chunk->miss_list = kvmalloc(chunk->num_of_entries * 154 192 sizeof(chunk->miss_list[0]), GFP_KERNEL); 155 193 if (!chunk->miss_list) 156 194 goto out_free_hw_ste_arr; ··· 162 204 return -ENOMEM; 163 205 } 164 206 165 - static int dr_icm_chunks_create(struct mlx5dr_icm_bucket *bucket) 166 - { 167 - size_t mr_free_size, 
mr_req_size, mr_row_size; 168 - struct mlx5dr_icm_pool *pool = bucket->pool; 169 - struct mlx5dr_icm_mr *icm_mr = NULL; 170 - struct mlx5dr_icm_chunk *chunk; 171 - int i, err = 0; 172 - 173 - mr_req_size = bucket->num_of_entries * bucket->entry_size; 174 - mr_row_size = mlx5dr_icm_pool_chunk_size_to_byte(pool->max_log_chunk_sz, 175 - pool->icm_type); 176 - mutex_lock(&pool->mr_mutex); 177 - if (!list_empty(&pool->icm_mr_list)) { 178 - icm_mr = list_last_entry(&pool->icm_mr_list, 179 - struct mlx5dr_icm_mr, mr_list); 180 - 181 - if (icm_mr) 182 - mr_free_size = icm_mr->dm.length - icm_mr->used_length; 183 - } 184 - 185 - if (!icm_mr || mr_free_size < mr_row_size) { 186 - icm_mr = dr_icm_pool_mr_create(pool); 187 - if (!icm_mr) { 188 - err = -ENOMEM; 189 - goto out_err; 190 - } 191 - } 192 - 193 - /* Create memory aligned chunks */ 194 - for (i = 0; i < mr_row_size / mr_req_size; i++) { 195 - chunk = kvzalloc(sizeof(*chunk), GFP_KERNEL); 196 - if (!chunk) { 197 - err = -ENOMEM; 198 - goto out_err; 199 - } 200 - 201 - chunk->bucket = bucket; 202 - chunk->rkey = icm_mr->mkey.key; 203 - /* mr start addr is zero based */ 204 - chunk->mr_addr = icm_mr->used_length; 205 - chunk->icm_addr = (uintptr_t)icm_mr->icm_start_addr + icm_mr->used_length; 206 - icm_mr->used_length += mr_req_size; 207 - chunk->num_of_entries = bucket->num_of_entries; 208 - chunk->byte_size = chunk->num_of_entries * bucket->entry_size; 209 - 210 - if (pool->icm_type == DR_ICM_TYPE_STE) { 211 - err = dr_icm_chunk_ste_init(chunk); 212 - if (err) 213 - goto out_free_chunk; 214 - } 215 - 216 - INIT_LIST_HEAD(&chunk->chunk_list); 217 - list_add(&chunk->chunk_list, &bucket->free_list); 218 - bucket->free_list_count++; 219 - bucket->total_chunks++; 220 - } 221 - mutex_unlock(&pool->mr_mutex); 222 - return 0; 223 - 224 - out_free_chunk: 225 - kvfree(chunk); 226 - out_err: 227 - mutex_unlock(&pool->mr_mutex); 228 - return err; 229 - } 230 - 231 207 static void dr_icm_chunk_ste_cleanup(struct mlx5dr_icm_chunk 
*chunk) 232 208 { 233 209 kvfree(chunk->miss_list); ··· 169 277 kvfree(chunk->ste_arr); 170 278 } 171 279 172 - static void dr_icm_chunk_destroy(struct mlx5dr_icm_chunk *chunk) 280 + static enum mlx5dr_icm_type 281 + get_chunk_icm_type(struct mlx5dr_icm_chunk *chunk) 173 282 { 174 - struct mlx5dr_icm_bucket *bucket = chunk->bucket; 283 + return chunk->buddy_mem->pool->icm_type; 284 + } 175 285 286 + static void dr_icm_chunk_destroy(struct mlx5dr_icm_chunk *chunk, 287 + struct mlx5dr_icm_buddy_mem *buddy) 288 + { 289 + enum mlx5dr_icm_type icm_type = get_chunk_icm_type(chunk); 290 + 291 + buddy->used_memory -= chunk->byte_size; 176 292 list_del(&chunk->chunk_list); 177 - bucket->total_chunks--; 178 293 179 - if (bucket->pool->icm_type == DR_ICM_TYPE_STE) 294 + if (icm_type == DR_ICM_TYPE_STE) 180 295 dr_icm_chunk_ste_cleanup(chunk); 181 296 182 297 kvfree(chunk); 183 298 } 184 299 185 - static void dr_icm_bucket_init(struct mlx5dr_icm_pool *pool, 186 - struct mlx5dr_icm_bucket *bucket, 187 - enum mlx5dr_icm_chunk_size chunk_size) 300 + static int dr_icm_buddy_create(struct mlx5dr_icm_pool *pool) 188 301 { 189 - if (pool->icm_type == DR_ICM_TYPE_STE) 190 - bucket->entry_size = DR_STE_SIZE; 191 - else 192 - bucket->entry_size = DR_MODIFY_ACTION_SIZE; 302 + struct mlx5dr_icm_buddy_mem *buddy; 303 + struct mlx5dr_icm_mr *icm_mr; 193 304 194 - bucket->num_of_entries = mlx5dr_icm_pool_chunk_size_to_entries(chunk_size); 195 - bucket->pool = pool; 196 - mutex_init(&bucket->mutex); 197 - INIT_LIST_HEAD(&bucket->free_list); 198 - INIT_LIST_HEAD(&bucket->used_list); 199 - INIT_LIST_HEAD(&bucket->hot_list); 200 - INIT_LIST_HEAD(&bucket->sync_list); 305 + icm_mr = dr_icm_pool_mr_create(pool); 306 + if (!icm_mr) 307 + return -ENOMEM; 308 + 309 + buddy = kvzalloc(sizeof(*buddy), GFP_KERNEL); 310 + if (!buddy) 311 + goto free_mr; 312 + 313 + if (mlx5dr_buddy_init(buddy, pool->max_log_chunk_sz)) 314 + goto err_free_buddy; 315 + 316 + buddy->icm_mr = icm_mr; 317 + buddy->pool = pool; 
+
+	/* add it to the -start- of the list in order to search in it first */
+	list_add(&buddy->list_node, &pool->buddy_mem_list);
+
+	return 0;
+
+err_free_buddy:
+	kvfree(buddy);
+free_mr:
+	dr_icm_pool_mr_destroy(icm_mr);
+	return -ENOMEM;
 }
 
-static void dr_icm_bucket_cleanup(struct mlx5dr_icm_bucket *bucket)
+static void dr_icm_buddy_destroy(struct mlx5dr_icm_buddy_mem *buddy)
 {
 	struct mlx5dr_icm_chunk *chunk, *next;
 
-	mutex_destroy(&bucket->mutex);
-	list_splice_tail_init(&bucket->sync_list, &bucket->free_list);
-	list_splice_tail_init(&bucket->hot_list, &bucket->free_list);
+	list_for_each_entry_safe(chunk, next, &buddy->hot_list, chunk_list)
+		dr_icm_chunk_destroy(chunk, buddy);
 
-	list_for_each_entry_safe(chunk, next, &bucket->free_list, chunk_list)
-		dr_icm_chunk_destroy(chunk);
+	list_for_each_entry_safe(chunk, next, &buddy->used_list, chunk_list)
+		dr_icm_chunk_destroy(chunk, buddy);
 
-	WARN_ON(bucket->total_chunks != 0);
+	dr_icm_pool_mr_destroy(buddy->icm_mr);
 
-	/* Cleanup of unreturned chunks */
-	list_for_each_entry_safe(chunk, next, &bucket->used_list, chunk_list)
-		dr_icm_chunk_destroy(chunk);
+	mlx5dr_buddy_cleanup(buddy);
+
+	kvfree(buddy);
 }
 
-static u64 dr_icm_hot_mem_size(struct mlx5dr_icm_pool *pool)
+static struct mlx5dr_icm_chunk *
+dr_icm_chunk_create(struct mlx5dr_icm_pool *pool,
+		    enum mlx5dr_icm_chunk_size chunk_size,
+		    struct mlx5dr_icm_buddy_mem *buddy_mem_pool,
+		    unsigned int seg)
 {
-	u64 hot_size = 0;
-	int chunk_order;
+	struct mlx5dr_icm_chunk *chunk;
+	int offset;
 
-	for (chunk_order = 0; chunk_order < pool->num_of_buckets; chunk_order++)
-		hot_size += pool->buckets[chunk_order].hot_list_count *
-			    mlx5dr_icm_pool_chunk_size_to_byte(chunk_order, pool->icm_type);
+	chunk = kvzalloc(sizeof(*chunk), GFP_KERNEL);
+	if (!chunk)
+		return NULL;
 
-	return hot_size;
+	offset = mlx5dr_icm_pool_dm_type_to_entry_size(pool->icm_type) * seg;
+
+	chunk->rkey = buddy_mem_pool->icm_mr->mkey.key;
+	chunk->mr_addr = offset;
+	chunk->icm_addr =
+		(uintptr_t)buddy_mem_pool->icm_mr->icm_start_addr + offset;
+	chunk->num_of_entries =
+		mlx5dr_icm_pool_chunk_size_to_entries(chunk_size);
+	chunk->byte_size =
+		mlx5dr_icm_pool_chunk_size_to_byte(chunk_size, pool->icm_type);
+	chunk->seg = seg;
+
+	if (pool->icm_type == DR_ICM_TYPE_STE && dr_icm_chunk_ste_init(chunk)) {
+		mlx5dr_err(pool->dmn,
+			   "Failed to init ste arrays (order: %d)\n",
+			   chunk_size);
+		goto out_free_chunk;
+	}
+
+	buddy_mem_pool->used_memory += chunk->byte_size;
+	chunk->buddy_mem = buddy_mem_pool;
+	INIT_LIST_HEAD(&chunk->chunk_list);
+
+	/* chunk now is part of the used_list */
+	list_add_tail(&chunk->chunk_list, &buddy_mem_pool->used_list);
+
+	return chunk;
+
+out_free_chunk:
+	kvfree(chunk);
+	return NULL;
 }
 
-static bool dr_icm_reuse_hot_entries(struct mlx5dr_icm_pool *pool,
-				     struct mlx5dr_icm_bucket *bucket)
+static bool dr_icm_pool_is_sync_required(struct mlx5dr_icm_pool *pool)
 {
-	u64 bytes_for_sync;
+	if (pool->hot_memory_size > DR_ICM_SYNC_THRESHOLD_POOL)
+		return true;
 
-	bytes_for_sync = dr_icm_hot_mem_size(pool);
-	if (bytes_for_sync < DR_ICM_SYNC_THRESHOLD || !bucket->hot_list_count)
-		return false;
-
-	return true;
+	return false;
 }
 
-static void dr_icm_chill_bucket_start(struct mlx5dr_icm_bucket *bucket)
+static int dr_icm_pool_sync_all_buddy_pools(struct mlx5dr_icm_pool *pool)
 {
-	list_splice_tail_init(&bucket->hot_list, &bucket->sync_list);
-	bucket->sync_list_count += bucket->hot_list_count;
-	bucket->hot_list_count = 0;
-}
+	struct mlx5dr_icm_buddy_mem *buddy, *tmp_buddy;
+	int err;
 
-static void dr_icm_chill_bucket_end(struct mlx5dr_icm_bucket *bucket)
-{
-	list_splice_tail_init(&bucket->sync_list, &bucket->free_list);
-	bucket->free_list_count += bucket->sync_list_count;
-	bucket->sync_list_count = 0;
-}
+	err = mlx5dr_cmd_sync_steering(pool->dmn->mdev);
+	if (err) {
+		mlx5dr_err(pool->dmn, "Failed to sync to HW (err: %d)\n", err);
+		return err;
+	}
 
-static void dr_icm_chill_bucket_abort(struct mlx5dr_icm_bucket *bucket)
-{
-	list_splice_tail_init(&bucket->sync_list, &bucket->hot_list);
-	bucket->hot_list_count += bucket->sync_list_count;
-	bucket->sync_list_count = 0;
-}
+	list_for_each_entry_safe(buddy, tmp_buddy, &pool->buddy_mem_list, list_node) {
+		struct mlx5dr_icm_chunk *chunk, *tmp_chunk;
 
-static void dr_icm_chill_buckets_start(struct mlx5dr_icm_pool *pool,
-				       struct mlx5dr_icm_bucket *cb,
-				       bool buckets[DR_CHUNK_SIZE_MAX])
-{
-	struct mlx5dr_icm_bucket *bucket;
-	int i;
-
-	for (i = 0; i < pool->num_of_buckets; i++) {
-		bucket = &pool->buckets[i];
-		if (bucket == cb) {
-			dr_icm_chill_bucket_start(bucket);
-			continue;
+		list_for_each_entry_safe(chunk, tmp_chunk, &buddy->hot_list, chunk_list) {
+			mlx5dr_buddy_free_mem(buddy, chunk->seg,
+					      ilog2(chunk->num_of_entries));
+			pool->hot_memory_size -= chunk->byte_size;
+			dr_icm_chunk_destroy(chunk, buddy);
 		}
 
-		/* Freeing the mutex is done at the end of that process, after
-		 * sync_ste was executed at dr_icm_chill_buckets_end func.
-		 */
-		if (mutex_trylock(&bucket->mutex)) {
-			dr_icm_chill_bucket_start(bucket);
-			buckets[i] = true;
+		if (!buddy->used_memory && pool->icm_type == DR_ICM_TYPE_STE)
+			dr_icm_buddy_destroy(buddy);
+	}
+
+	return 0;
+}
+
+static int dr_icm_handle_buddies_get_mem(struct mlx5dr_icm_pool *pool,
+					 enum mlx5dr_icm_chunk_size chunk_size,
+					 struct mlx5dr_icm_buddy_mem **buddy,
+					 unsigned int *seg)
+{
+	struct mlx5dr_icm_buddy_mem *buddy_mem_pool;
+	bool new_mem = false;
+	int err;
+
+alloc_buddy_mem:
+	/* find the next free place from the buddy list */
+	list_for_each_entry(buddy_mem_pool, &pool->buddy_mem_list, list_node) {
+		err = mlx5dr_buddy_alloc_mem(buddy_mem_pool,
+					     chunk_size, seg);
+		if (!err)
+			goto found;
+
+		if (WARN_ON(new_mem)) {
+			/* We have new memory pool, first in the list */
+			mlx5dr_err(pool->dmn,
+				   "No memory for order: %d\n",
+				   chunk_size);
+			goto out;
 		}
 	}
-}
 
-static void dr_icm_chill_buckets_end(struct mlx5dr_icm_pool *pool,
-				     struct mlx5dr_icm_bucket *cb,
-				     bool buckets[DR_CHUNK_SIZE_MAX])
-{
-	struct mlx5dr_icm_bucket *bucket;
-	int i;
-
-	for (i = 0; i < pool->num_of_buckets; i++) {
-		bucket = &pool->buckets[i];
-		if (bucket == cb) {
-			dr_icm_chill_bucket_end(bucket);
-			continue;
-		}
-
-		if (!buckets[i])
-			continue;
-
-		dr_icm_chill_bucket_end(bucket);
-		mutex_unlock(&bucket->mutex);
+	/* no more available allocators in that pool, create new */
+	err = dr_icm_buddy_create(pool);
+	if (err) {
+		mlx5dr_err(pool->dmn,
+			   "Failed creating buddy for order %d\n",
+			   chunk_size);
+		goto out;
 	}
-}
 
-static void dr_icm_chill_buckets_abort(struct mlx5dr_icm_pool *pool,
-				       struct mlx5dr_icm_bucket *cb,
-				       bool buckets[DR_CHUNK_SIZE_MAX])
-{
-	struct mlx5dr_icm_bucket *bucket;
-	int i;
+	/* mark we have new memory, first in list */
+	new_mem = true;
+	goto alloc_buddy_mem;
 
-	for (i = 0; i < pool->num_of_buckets; i++) {
-		bucket = &pool->buckets[i];
-		if (bucket == cb) {
-			dr_icm_chill_bucket_abort(bucket);
-			continue;
-		}
-
-		if (!buckets[i])
-			continue;
-
-		dr_icm_chill_bucket_abort(bucket);
-		mutex_unlock(&bucket->mutex);
-	}
+found:
+	*buddy = buddy_mem_pool;
+out:
+	return err;
 }
 
 /* Allocate an ICM chunk, each chunk holds a piece of ICM memory and
···
 mlx5dr_icm_alloc_chunk(struct mlx5dr_icm_pool *pool,
 		       enum mlx5dr_icm_chunk_size chunk_size)
 {
-	struct mlx5dr_icm_chunk *chunk = NULL; /* Fix compilation warning */
-	bool buckets[DR_CHUNK_SIZE_MAX] = {};
-	struct mlx5dr_icm_bucket *bucket;
-	int err;
+	struct mlx5dr_icm_chunk *chunk = NULL;
+	struct mlx5dr_icm_buddy_mem *buddy;
+	unsigned int seg;
+	int ret;
 
 	if (chunk_size > pool->max_log_chunk_sz)
 		return NULL;
 
-	bucket = &pool->buckets[chunk_size];
+	mutex_lock(&pool->mutex);
+	/* find mem, get back the relevant buddy pool and seg in that mem */
+	ret = dr_icm_handle_buddies_get_mem(pool, chunk_size, &buddy, &seg);
+	if (ret)
+		goto out;
 
-	mutex_lock(&bucket->mutex);
+	chunk = dr_icm_chunk_create(pool, chunk_size, buddy, seg);
+	if (!chunk)
+		goto out_err;
 
-	/* Take chunk from pool if available, otherwise allocate new chunks */
-	if (list_empty(&bucket->free_list)) {
-		if (dr_icm_reuse_hot_entries(pool, bucket)) {
-			dr_icm_chill_buckets_start(pool, bucket, buckets);
-			err = mlx5dr_cmd_sync_steering(pool->dmn->mdev);
-			if (err) {
-				dr_icm_chill_buckets_abort(pool, bucket, buckets);
-				mlx5dr_err(pool->dmn, "Sync_steering failed\n");
-				chunk = NULL;
-				goto out;
-			}
-			dr_icm_chill_buckets_end(pool, bucket, buckets);
-		} else {
-			dr_icm_chunks_create(bucket);
-		}
-	}
+	goto out;
 
-	if (!list_empty(&bucket->free_list)) {
-		chunk = list_last_entry(&bucket->free_list,
-					struct mlx5dr_icm_chunk,
-					chunk_list);
-		if (chunk) {
-			list_del_init(&chunk->chunk_list);
-			list_add_tail(&chunk->chunk_list, &bucket->used_list);
-			bucket->free_list_count--;
-			bucket->used_list_count++;
-		}
-	}
+out_err:
+	mlx5dr_buddy_free_mem(buddy, seg, chunk_size);
 out:
-	mutex_unlock(&bucket->mutex);
+	mutex_unlock(&pool->mutex);
 	return chunk;
 }
 
 void mlx5dr_icm_free_chunk(struct mlx5dr_icm_chunk *chunk)
 {
-	struct mlx5dr_icm_bucket *bucket = chunk->bucket;
+	struct mlx5dr_icm_buddy_mem *buddy = chunk->buddy_mem;
+	struct mlx5dr_icm_pool *pool = buddy->pool;
 
-	if (bucket->pool->icm_type == DR_ICM_TYPE_STE) {
-		memset(chunk->ste_arr, 0,
-		       bucket->num_of_entries * sizeof(chunk->ste_arr[0]));
-		memset(chunk->hw_ste_arr, 0,
-		       bucket->num_of_entries * DR_STE_SIZE_REDUCED);
-	}
+	/* move the memory to the waiting list AKA "hot" */
+	mutex_lock(&pool->mutex);
+	list_move_tail(&chunk->chunk_list, &buddy->hot_list);
+	pool->hot_memory_size += chunk->byte_size;
 
-	mutex_lock(&bucket->mutex);
-	list_del_init(&chunk->chunk_list);
-	list_add_tail(&chunk->chunk_list, &bucket->hot_list);
-	bucket->hot_list_count++;
-	bucket->used_list_count--;
-	mutex_unlock(&bucket->mutex);
+	/* Check if we have chunks that are waiting for sync-ste */
+	if (dr_icm_pool_is_sync_required(pool))
+		dr_icm_pool_sync_all_buddy_pools(pool);
+
+	mutex_unlock(&pool->mutex);
 }
 
 struct mlx5dr_icm_pool *mlx5dr_icm_pool_create(struct mlx5dr_domain *dmn,
···
 {
 	enum mlx5dr_icm_chunk_size max_log_chunk_sz;
 	struct mlx5dr_icm_pool *pool;
-	int i;
 
 	if (icm_type == DR_ICM_TYPE_STE)
 		max_log_chunk_sz = dmn->info.max_log_sw_icm_sz;
···
 	if (!pool)
 		return NULL;
 
-	pool->buckets = kcalloc(max_log_chunk_sz + 1,
-				sizeof(pool->buckets[0]),
-				GFP_KERNEL);
-	if (!pool->buckets)
-		goto free_pool;
-
 	pool->dmn = dmn;
 	pool->icm_type = icm_type;
 	pool->max_log_chunk_sz = max_log_chunk_sz;
-	pool->num_of_buckets = max_log_chunk_sz + 1;
-	INIT_LIST_HEAD(&pool->icm_mr_list);
 
-	for (i = 0; i < pool->num_of_buckets; i++)
-		dr_icm_bucket_init(pool, &pool->buckets[i], i);
+	INIT_LIST_HEAD(&pool->buddy_mem_list);
 
-	mutex_init(&pool->mr_mutex);
+	mutex_init(&pool->mutex);
 
 	return pool;
-
-free_pool:
-	kvfree(pool);
-	return NULL;
 }
 
 void mlx5dr_icm_pool_destroy(struct mlx5dr_icm_pool *pool)
 {
-	struct mlx5dr_icm_mr *icm_mr, *next;
-	int i;
+	struct mlx5dr_icm_buddy_mem *buddy, *tmp_buddy;
 
-	mutex_destroy(&pool->mr_mutex);
+	list_for_each_entry_safe(buddy, tmp_buddy, &pool->buddy_mem_list, list_node)
+		dr_icm_buddy_destroy(buddy);
 
-	list_for_each_entry_safe(icm_mr, next, &pool->icm_mr_list, mr_list)
-		dr_icm_pool_mr_destroy(icm_mr);
-
-	for (i = 0; i < pool->num_of_buckets; i++)
-		dr_icm_bucket_cleanup(&pool->buckets[i]);
-
-	kfree(pool->buckets);
+	mutex_destroy(&pool->mutex);
 	kvfree(pool);
 }
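The hunks above replace the per-bucket chunk lists with buddy-based management of ICM segments. The core buddy discipline — allocate from the smallest free block that fits, splitting down to the requested order; on free, repeatedly coalesce with the XOR-buddy while it is also free — can be sketched stand-alone. Everything below (the `toy_` names, `MAX_ORDER`, the array-based free lists) is illustrative only; the driver's actual `mlx5dr_buddy_*` implementation tracks free segments in per-order bitmaps instead.

```c
#include <assert.h>
#include <string.h>

#define MAX_ORDER 8	/* toy pool spans 1 << MAX_ORDER segments */

struct toy_buddy {
	/* free[o] holds start segments of free blocks of 1 << o segments */
	unsigned int free[MAX_ORDER + 1][1 << MAX_ORDER];
	unsigned int nfree[MAX_ORDER + 1];
};

static void toy_buddy_init(struct toy_buddy *b)
{
	memset(b, 0, sizeof(*b));
	b->free[MAX_ORDER][0] = 0;	/* one free block spanning everything */
	b->nfree[MAX_ORDER] = 1;
}

/* Allocate 1 << order segments; returns the start segment, or -1 if full.
 * Note it hands out *indices* into pre-registered ICM, never memory itself.
 */
static int toy_buddy_alloc(struct toy_buddy *b, unsigned int order)
{
	unsigned int o = order;

	while (o <= MAX_ORDER && !b->nfree[o])
		o++;			/* smallest free block that fits */
	if (o > MAX_ORDER)
		return -1;

	unsigned int seg = b->free[o][--b->nfree[o]];

	while (o > order) {		/* split down, keep right halves free */
		o--;
		b->free[o][b->nfree[o]++] = seg + (1u << o);
	}
	return (int)seg;
}

/* Free a block, merging with its buddy (seg XOR block size) while possible */
static void toy_buddy_free(struct toy_buddy *b, unsigned int seg, unsigned int order)
{
	while (order < MAX_ORDER) {
		unsigned int buddy = seg ^ (1u << order);
		unsigned int i;

		for (i = 0; i < b->nfree[order]; i++)
			if (b->free[order][i] == buddy)
				break;
		if (i == b->nfree[order])
			break;		/* buddy still busy: stop merging */
		b->free[order][i] = b->free[order][--b->nfree[order]];
		seg &= ~(1u << order);	/* merged block starts at lower half */
		order++;
	}
	b->free[order][b->nfree[order]++] = seg;
}
```

Because ICM chunk sizes are always powers of two, every allocation maps onto exactly one order and the pool never suffers the internal fragmentation a bucket-per-size scheme has to pre-provision for.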
+60 -47
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
···
 	(_misc2)._inner_outer##_first_mpls_s_bos || \
 	(_misc2)._inner_outer##_first_mpls_ttl)
 
-static bool dr_mask_is_gre_set(struct mlx5dr_match_misc *misc)
+static bool dr_mask_is_tnl_gre_set(struct mlx5dr_match_misc *misc)
 {
 	return (misc->gre_key_h || misc->gre_key_l ||
 		misc->gre_protocol || misc->gre_c_present ||
···
 	(_misc2).outer_first_mpls_over_##gre_udp##_s_bos || \
 	(_misc2).outer_first_mpls_over_##gre_udp##_ttl)
 
-#define DR_MASK_IS_FLEX_PARSER_0_SET(_misc2) ( \
+#define DR_MASK_IS_TNL_MPLS_SET(_misc2) ( \
 	DR_MASK_IS_OUTER_MPLS_OVER_GRE_UDP_SET((_misc2), gre) || \
 	DR_MASK_IS_OUTER_MPLS_OVER_GRE_UDP_SET((_misc2), udp))
 
 static bool
-dr_mask_is_misc3_vxlan_gpe_set(struct mlx5dr_match_misc3 *misc3)
+dr_mask_is_vxlan_gpe_set(struct mlx5dr_match_misc3 *misc3)
 {
 	return (misc3->outer_vxlan_gpe_vni ||
 		misc3->outer_vxlan_gpe_next_protocol ||
···
 }
 
 static bool
-dr_matcher_supp_flex_parser_vxlan_gpe(struct mlx5dr_cmd_caps *caps)
+dr_matcher_supp_vxlan_gpe(struct mlx5dr_cmd_caps *caps)
 {
-	return caps->flex_protocols &
-	       MLX5_FLEX_PARSER_VXLAN_GPE_ENABLED;
+	return caps->flex_protocols & MLX5_FLEX_PARSER_VXLAN_GPE_ENABLED;
 }
 
 static bool
-dr_mask_is_flex_parser_tnl_vxlan_gpe_set(struct mlx5dr_match_param *mask,
-					 struct mlx5dr_domain *dmn)
+dr_mask_is_tnl_vxlan_gpe(struct mlx5dr_match_param *mask,
+			 struct mlx5dr_domain *dmn)
 {
-	return dr_mask_is_misc3_vxlan_gpe_set(&mask->misc3) &&
-	       dr_matcher_supp_flex_parser_vxlan_gpe(&dmn->info.caps);
+	return dr_mask_is_vxlan_gpe_set(&mask->misc3) &&
+	       dr_matcher_supp_vxlan_gpe(&dmn->info.caps);
 }
 
-static bool dr_mask_is_misc_geneve_set(struct mlx5dr_match_misc *misc)
+static bool dr_mask_is_tnl_geneve_set(struct mlx5dr_match_misc *misc)
 {
 	return misc->geneve_vni ||
 	       misc->geneve_oam ||
···
 }
 
 static bool
-dr_matcher_supp_flex_parser_geneve(struct mlx5dr_cmd_caps *caps)
+dr_matcher_supp_tnl_geneve(struct mlx5dr_cmd_caps *caps)
 {
-	return caps->flex_protocols &
-	       MLX5_FLEX_PARSER_GENEVE_ENABLED;
+	return caps->flex_protocols & MLX5_FLEX_PARSER_GENEVE_ENABLED;
 }
 
 static bool
-dr_mask_is_flex_parser_tnl_geneve_set(struct mlx5dr_match_param *mask,
-				      struct mlx5dr_domain *dmn)
+dr_mask_is_tnl_geneve(struct mlx5dr_match_param *mask,
+		      struct mlx5dr_domain *dmn)
 {
-	return dr_mask_is_misc_geneve_set(&mask->misc) &&
-	       dr_matcher_supp_flex_parser_geneve(&dmn->info.caps);
+	return dr_mask_is_tnl_geneve_set(&mask->misc) &&
+	       dr_matcher_supp_tnl_geneve(&dmn->info.caps);
 }
 
-static bool dr_mask_is_flex_parser_icmpv6_set(struct mlx5dr_match_misc3 *misc3)
+static int dr_matcher_supp_icmp_v4(struct mlx5dr_cmd_caps *caps)
+{
+	return caps->flex_protocols & MLX5_FLEX_PARSER_ICMP_V4_ENABLED;
+}
+
+static int dr_matcher_supp_icmp_v6(struct mlx5dr_cmd_caps *caps)
+{
+	return caps->flex_protocols & MLX5_FLEX_PARSER_ICMP_V6_ENABLED;
+}
+
+static bool dr_mask_is_icmpv6_set(struct mlx5dr_match_misc3 *misc3)
 {
 	return (misc3->icmpv6_type || misc3->icmpv6_code ||
 		misc3->icmpv6_header_data);
+}
+
+static bool dr_mask_is_icmp(struct mlx5dr_match_param *mask,
+			    struct mlx5dr_domain *dmn)
+{
+	if (DR_MASK_IS_ICMPV4_SET(&mask->misc3))
+		return dr_matcher_supp_icmp_v4(&dmn->info.caps);
+	else if (dr_mask_is_icmpv6_set(&mask->misc3))
+		return dr_matcher_supp_icmp_v6(&dmn->info.caps);
+
+	return false;
 }
 
 static bool dr_mask_is_wqe_metadata_set(struct mlx5dr_match_misc2 *misc2)
···
 
 	if (dr_mask_is_smac_set(&mask.outer) &&
 	    dr_mask_is_dmac_set(&mask.outer)) {
-		mlx5dr_ste_build_eth_l2_src_des(&sb[idx++], &mask,
+		mlx5dr_ste_build_eth_l2_src_dst(&sb[idx++], &mask,
 						inner, rx);
 	}
 
···
 					    inner, rx);
 
 		if (DR_MASK_IS_ETH_L4_SET(mask.outer, mask.misc, outer))
-			mlx5dr_ste_build_ipv6_l3_l4(&sb[idx++], &mask,
-						    inner, rx);
+			mlx5dr_ste_build_eth_ipv6_l3_l4(&sb[idx++], &mask,
+							inner, rx);
 	} else {
 		if (dr_mask_is_ipv4_5_tuple_set(&mask.outer))
 			mlx5dr_ste_build_eth_l3_ipv4_5_tuple(&sb[idx++], &mask,
···
 						  inner, rx);
 	}
 
-	if (dr_mask_is_flex_parser_tnl_vxlan_gpe_set(&mask, dmn))
-		mlx5dr_ste_build_flex_parser_tnl_vxlan_gpe(&sb[idx++],
-							   &mask,
-							   inner, rx);
-	else if (dr_mask_is_flex_parser_tnl_geneve_set(&mask, dmn))
-		mlx5dr_ste_build_flex_parser_tnl_geneve(&sb[idx++],
-							&mask,
-							inner, rx);
+	if (dr_mask_is_tnl_vxlan_gpe(&mask, dmn))
+		mlx5dr_ste_build_tnl_vxlan_gpe(&sb[idx++], &mask,
+					       inner, rx);
+	else if (dr_mask_is_tnl_geneve(&mask, dmn))
+		mlx5dr_ste_build_tnl_geneve(&sb[idx++], &mask,
+					    inner, rx);
 
 	if (DR_MASK_IS_ETH_L4_MISC_SET(mask.misc3, outer))
 		mlx5dr_ste_build_eth_l4_misc(&sb[idx++], &mask, inner, rx);
···
 	if (DR_MASK_IS_FIRST_MPLS_SET(mask.misc2, outer))
 		mlx5dr_ste_build_mpls(&sb[idx++], &mask, inner, rx);
 
-	if (DR_MASK_IS_FLEX_PARSER_0_SET(mask.misc2))
-		mlx5dr_ste_build_flex_parser_0(&sb[idx++], &mask,
-					       inner, rx);
+	if (DR_MASK_IS_TNL_MPLS_SET(mask.misc2))
+		mlx5dr_ste_build_tnl_mpls(&sb[idx++], &mask, inner, rx);
 
-	if ((DR_MASK_IS_FLEX_PARSER_ICMPV4_SET(&mask.misc3) &&
-	     mlx5dr_matcher_supp_flex_parser_icmp_v4(&dmn->info.caps)) ||
-	    (dr_mask_is_flex_parser_icmpv6_set(&mask.misc3) &&
-	     mlx5dr_matcher_supp_flex_parser_icmp_v6(&dmn->info.caps))) {
-		ret = mlx5dr_ste_build_flex_parser_1(&sb[idx++],
-						     &mask, &dmn->info.caps,
-						     inner, rx);
+	if (dr_mask_is_icmp(&mask, dmn)) {
+		ret = mlx5dr_ste_build_icmp(&sb[idx++],
+					    &mask, &dmn->info.caps,
+					    inner, rx);
 		if (ret)
 			return ret;
 	}
-	if (dr_mask_is_gre_set(&mask.misc))
-		mlx5dr_ste_build_gre(&sb[idx++], &mask, inner, rx);
+	if (dr_mask_is_tnl_gre_set(&mask.misc))
+		mlx5dr_ste_build_tnl_gre(&sb[idx++], &mask, inner, rx);
 	}
 
 	/* Inner */
···
 
 	if (dr_mask_is_smac_set(&mask.inner) &&
 	    dr_mask_is_dmac_set(&mask.inner)) {
-		mlx5dr_ste_build_eth_l2_src_des(&sb[idx++],
+		mlx5dr_ste_build_eth_l2_src_dst(&sb[idx++],
 						&mask, inner, rx);
 	}
 
···
 					    inner, rx);
 
 		if (DR_MASK_IS_ETH_L4_SET(mask.inner, mask.misc, inner))
-			mlx5dr_ste_build_ipv6_l3_l4(&sb[idx++], &mask,
-						    inner, rx);
+			mlx5dr_ste_build_eth_ipv6_l3_l4(&sb[idx++], &mask,
+							inner, rx);
 	} else {
 		if (dr_mask_is_ipv4_5_tuple_set(&mask.inner))
 			mlx5dr_ste_build_eth_l3_ipv4_5_tuple(&sb[idx++], &mask,
···
 	if (DR_MASK_IS_FIRST_MPLS_SET(mask.misc2, inner))
 		mlx5dr_ste_build_mpls(&sb[idx++], &mask, inner, rx);
 
-	if (DR_MASK_IS_FLEX_PARSER_0_SET(mask.misc2))
-		mlx5dr_ste_build_flex_parser_0(&sb[idx++], &mask, inner, rx);
+	if (DR_MASK_IS_TNL_MPLS_SET(mask.misc2))
+		mlx5dr_ste_build_tnl_mpls(&sb[idx++], &mask, inner, rx);
 	}
 	/* Empty matcher, takes all */
 	if (matcher->match_criteria == DR_MATCHER_CRITERIA_EMPTY)
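The new `dr_mask_is_icmp()` above folds a two-clause condition into one predicate: "does the mask touch ICMPv4 (or v6) fields" paired with "does the device capability advertise that flex parser". The pattern is easy to model in isolation; the flag values and `toy_` names below are invented stand-ins, not the driver's.

```c
#include <assert.h>
#include <stdbool.h>

#define FP_ICMP_V4 (1u << 0)	/* illustrative capability bits */
#define FP_ICMP_V6 (1u << 1)

struct toy_mask {
	unsigned v4_type;	/* nonzero: mask matches on ICMPv4 fields */
	unsigned v6_type;	/* nonzero: mask matches on ICMPv6 fields */
};

/* Mirrors the shape of dr_mask_is_icmp(): the mask picks the protocol,
 * the capability bit decides whether the builder may be used at all.
 */
static bool mask_is_icmp(const struct toy_mask *m, unsigned caps)
{
	if (m->v4_type)
		return caps & FP_ICMP_V4;
	if (m->v6_type)
		return caps & FP_ICMP_V6;
	return false;		/* no ICMP match requested */
}
```

Centralizing the capability check lets the matcher-builder loop call one helper per STE builder instead of repeating the cap test at every call site.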
+21 -21
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c
···
 	return 0;
 }
 
-void mlx5dr_ste_build_eth_l2_src_des(struct mlx5dr_ste_build *sb,
+void mlx5dr_ste_build_eth_l2_src_dst(struct mlx5dr_ste_build *sb,
 				     struct mlx5dr_match_param *mask,
 				     bool inner, bool rx)
 {
···
 	return 0;
 }
 
-void mlx5dr_ste_build_ipv6_l3_l4(struct mlx5dr_ste_build *sb,
-				 struct mlx5dr_match_param *mask,
-				 bool inner, bool rx)
+void mlx5dr_ste_build_eth_ipv6_l3_l4(struct mlx5dr_ste_build *sb,
+				     struct mlx5dr_match_param *mask,
+				     bool inner, bool rx)
 {
 	dr_ste_build_ipv6_l3_l4_bit_mask(mask, inner, sb->bit_mask);
 
···
 	return 0;
 }
 
-void mlx5dr_ste_build_gre(struct mlx5dr_ste_build *sb,
-			  struct mlx5dr_match_param *mask, bool inner, bool rx)
+void mlx5dr_ste_build_tnl_gre(struct mlx5dr_ste_build *sb,
+			      struct mlx5dr_match_param *mask, bool inner, bool rx)
 {
 	dr_ste_build_gre_bit_mask(mask, inner, sb->bit_mask);
 
···
 	return 0;
 }
 
-void mlx5dr_ste_build_flex_parser_0(struct mlx5dr_ste_build *sb,
-				    struct mlx5dr_match_param *mask,
-				    bool inner, bool rx)
+void mlx5dr_ste_build_tnl_mpls(struct mlx5dr_ste_build *sb,
+			       struct mlx5dr_match_param *mask,
+			       bool inner, bool rx)
 {
 	dr_ste_build_flex_parser_0_bit_mask(mask, inner, sb->bit_mask);
 
···
 				     struct mlx5dr_cmd_caps *caps,
 				     u8 *bit_mask)
 {
+	bool is_ipv4_mask = DR_MASK_IS_ICMPV4_SET(&mask->misc3);
 	struct mlx5dr_match_misc3 *misc_3_mask = &mask->misc3;
-	bool is_ipv4_mask = DR_MASK_IS_FLEX_PARSER_ICMPV4_SET(misc_3_mask);
 	u32 icmp_header_data_mask;
 	u32 icmp_type_mask;
 	u32 icmp_code_mask;
···
 	u32 icmp_code;
 	bool is_ipv4;
 
-	is_ipv4 = DR_MASK_IS_FLEX_PARSER_ICMPV4_SET(misc_3);
+	is_ipv4 = DR_MASK_IS_ICMPV4_SET(misc_3);
 	if (is_ipv4) {
 		icmp_header_data = misc_3->icmpv4_header_data;
 		icmp_type = misc_3->icmpv4_type;
···
 	return 0;
 }
 
-int mlx5dr_ste_build_flex_parser_1(struct mlx5dr_ste_build *sb,
-				   struct mlx5dr_match_param *mask,
-				   struct mlx5dr_cmd_caps *caps,
-				   bool inner, bool rx)
+int mlx5dr_ste_build_icmp(struct mlx5dr_ste_build *sb,
+			  struct mlx5dr_match_param *mask,
+			  struct mlx5dr_cmd_caps *caps,
+			  bool inner, bool rx)
 {
 	int ret;
 
···
 	return 0;
 }
 
-void mlx5dr_ste_build_flex_parser_tnl_vxlan_gpe(struct mlx5dr_ste_build *sb,
-						struct mlx5dr_match_param *mask,
-						bool inner, bool rx)
+void mlx5dr_ste_build_tnl_vxlan_gpe(struct mlx5dr_ste_build *sb,
+				    struct mlx5dr_match_param *mask,
+				    bool inner, bool rx)
 {
 	dr_ste_build_flex_parser_tnl_vxlan_gpe_bit_mask(mask, inner,
 							sb->bit_mask);
···
 	return 0;
 }
 
-void mlx5dr_ste_build_flex_parser_tnl_geneve(struct mlx5dr_ste_build *sb,
-					     struct mlx5dr_match_param *mask,
-					     bool inner, bool rx)
+void mlx5dr_ste_build_tnl_geneve(struct mlx5dr_ste_build *sb,
+				 struct mlx5dr_match_param *mask,
+				 bool inner, bool rx)
 {
 	dr_ste_build_flex_parser_tnl_geneve_bit_mask(mask, sb->bit_mask);
 	sb->rx = rx;
+38 -41
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
···
 
 struct mlx5dr_icm_pool;
 struct mlx5dr_icm_chunk;
-struct mlx5dr_icm_bucket;
+struct mlx5dr_icm_buddy_mem;
 struct mlx5dr_ste_htbl;
 struct mlx5dr_match_param;
 struct mlx5dr_cmd_caps;
···
 			     struct mlx5dr_matcher_rx_tx *nic_matcher,
 			     struct mlx5dr_match_param *value,
 			     u8 *ste_arr);
-void mlx5dr_ste_build_eth_l2_src_des(struct mlx5dr_ste_build *builder,
+void mlx5dr_ste_build_eth_l2_src_dst(struct mlx5dr_ste_build *builder,
 				     struct mlx5dr_match_param *mask,
 				     bool inner, bool rx);
 void mlx5dr_ste_build_eth_l3_ipv4_5_tuple(struct mlx5dr_ste_build *sb,
···
 void mlx5dr_ste_build_eth_l2_tnl(struct mlx5dr_ste_build *sb,
 				 struct mlx5dr_match_param *mask,
 				 bool inner, bool rx);
-void mlx5dr_ste_build_ipv6_l3_l4(struct mlx5dr_ste_build *sb,
-				 struct mlx5dr_match_param *mask,
-				 bool inner, bool rx);
+void mlx5dr_ste_build_eth_ipv6_l3_l4(struct mlx5dr_ste_build *sb,
+				     struct mlx5dr_match_param *mask,
+				     bool inner, bool rx);
 void mlx5dr_ste_build_eth_l4_misc(struct mlx5dr_ste_build *sb,
 				  struct mlx5dr_match_param *mask,
 				  bool inner, bool rx);
-void mlx5dr_ste_build_gre(struct mlx5dr_ste_build *sb,
-			  struct mlx5dr_match_param *mask,
-			  bool inner, bool rx);
+void mlx5dr_ste_build_tnl_gre(struct mlx5dr_ste_build *sb,
+			      struct mlx5dr_match_param *mask,
+			      bool inner, bool rx);
 void mlx5dr_ste_build_mpls(struct mlx5dr_ste_build *sb,
 			   struct mlx5dr_match_param *mask,
 			   bool inner, bool rx);
-void mlx5dr_ste_build_flex_parser_0(struct mlx5dr_ste_build *sb,
+void mlx5dr_ste_build_tnl_mpls(struct mlx5dr_ste_build *sb,
+			       struct mlx5dr_match_param *mask,
+			       bool inner, bool rx);
+int mlx5dr_ste_build_icmp(struct mlx5dr_ste_build *sb,
+			  struct mlx5dr_match_param *mask,
+			  struct mlx5dr_cmd_caps *caps,
+			  bool inner, bool rx);
+void mlx5dr_ste_build_tnl_vxlan_gpe(struct mlx5dr_ste_build *sb,
 				    struct mlx5dr_match_param *mask,
 				    bool inner, bool rx);
-int mlx5dr_ste_build_flex_parser_1(struct mlx5dr_ste_build *sb,
-				   struct mlx5dr_match_param *mask,
-				   struct mlx5dr_cmd_caps *caps,
-				   bool inner, bool rx);
-void mlx5dr_ste_build_flex_parser_tnl_vxlan_gpe(struct mlx5dr_ste_build *sb,
-						struct mlx5dr_match_param *mask,
-						bool inner, bool rx);
-void mlx5dr_ste_build_flex_parser_tnl_geneve(struct mlx5dr_ste_build *sb,
-					     struct mlx5dr_match_param *mask,
-					     bool inner, bool rx);
+void mlx5dr_ste_build_tnl_geneve(struct mlx5dr_ste_build *sb,
+				 struct mlx5dr_match_param *mask,
+				 bool inner, bool rx);
 void mlx5dr_ste_build_general_purpose(struct mlx5dr_ste_build *sb,
 				      struct mlx5dr_match_param *mask,
 				      bool inner, bool rx);
···
 	struct mlx5dr_match_misc3 misc3;
 };
 
-#define DR_MASK_IS_FLEX_PARSER_ICMPV4_SET(_misc3) ((_misc3)->icmpv4_type || \
-						   (_misc3)->icmpv4_code || \
-						   (_misc3)->icmpv4_header_data)
+#define DR_MASK_IS_ICMPV4_SET(_misc3) ((_misc3)->icmpv4_type || \
+				       (_misc3)->icmpv4_code || \
+				       (_misc3)->icmpv4_header_data)
 
 struct mlx5dr_esw_caps {
 	u64 drop_icm_address_rx;
···
 	struct mlx5dr_domain *dmn;
 	struct mlx5dr_icm_chunk *chunk;
 	u8 *data;
-	u32 data_size;
 	u16 num_of_actions;
 	u32 index;
 	u8 allow_rx:1;
···
 			   struct mlx5dr_ste *ste);
 
 struct mlx5dr_icm_chunk {
-	struct mlx5dr_icm_bucket *bucket;
+	struct mlx5dr_icm_buddy_mem *buddy_mem;
 	struct list_head chunk_list;
 	u32 rkey;
 	u32 num_of_entries;
 	u32 byte_size;
 	u64 icm_addr;
 	u64 mr_addr;
+
+	/* indicates the index of this chunk in the whole memory,
+	 * used for deleting the chunk from the buddy
+	 */
+	unsigned int seg;
 
 	/* Memory optimisation */
 	struct mlx5dr_ste *ste_arr;
···
 	mlx5dr_domain_nic_unlock(&dmn->info.rx);
 }
 
-static inline int
-mlx5dr_matcher_supp_flex_parser_icmp_v4(struct mlx5dr_cmd_caps *caps)
-{
-	return caps->flex_protocols & MLX5_FLEX_PARSER_ICMP_V4_ENABLED;
-}
-
-static inline int
-mlx5dr_matcher_supp_flex_parser_icmp_v6(struct mlx5dr_cmd_caps *caps)
-{
-	return caps->flex_protocols & MLX5_FLEX_PARSER_ICMP_V6_ENABLED;
-}
-
 int mlx5dr_matcher_select_builders(struct mlx5dr_matcher *matcher,
 				   struct mlx5dr_matcher_rx_tx *nic_matcher,
 				   enum mlx5dr_ipv outer_ipv,
 				   enum mlx5dr_ipv inner_ipv);
+
+static inline int
+mlx5dr_icm_pool_dm_type_to_entry_size(enum mlx5dr_icm_type icm_type)
+{
+	if (icm_type == DR_ICM_TYPE_STE)
+		return DR_STE_SIZE;
+
+	return DR_MODIFY_ACTION_SIZE;
+}
 
 static inline u32
 mlx5dr_icm_pool_chunk_size_to_entries(enum mlx5dr_icm_chunk_size chunk_size)
···
 	int num_of_entries;
 	int entry_size;
 
-	if (icm_type == DR_ICM_TYPE_STE)
-		entry_size = DR_STE_SIZE;
-	else
-		entry_size = DR_MODIFY_ACTION_SIZE;
-
+	entry_size = mlx5dr_icm_pool_dm_type_to_entry_size(icm_type);
 	num_of_entries = mlx5dr_icm_pool_chunk_size_to_entries(chunk_size);
 
 	return entry_size * num_of_entries;
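The helper chain factored out above (entry size by ICM type, entries per chunk order, bytes per chunk) is simple arithmetic, and it is also what makes `seg`-to-offset conversion in `dr_icm_chunk_create()` trivial: the byte offset of a segment is just `entry_size * seg`. A stand-alone sketch, with illustrative stand-in constants (the real `DR_STE_SIZE` / `DR_MODIFY_ACTION_SIZE` values live elsewhere in the driver):

```c
#include <assert.h>

enum icm_type { ICM_TYPE_STE, ICM_TYPE_MODIFY_ACTION };

#define STE_SIZE 64		/* illustrative per-entry sizes */
#define MODIFY_ACTION_SIZE 8

static int entry_size(enum icm_type t)
{
	return t == ICM_TYPE_STE ? STE_SIZE : MODIFY_ACTION_SIZE;
}

/* chunk_size is an order (a power-of-two exponent), not an entry count */
static unsigned int chunk_size_to_entries(unsigned int chunk_size)
{
	return 1u << chunk_size;
}

static unsigned int chunk_size_to_bytes(unsigned int chunk_size, enum icm_type t)
{
	return entry_size(t) * chunk_size_to_entries(chunk_size);
}

/* mirrors the offset computation in dr_icm_chunk_create() */
static unsigned int seg_to_offset(unsigned int seg, enum icm_type t)
{
	return entry_size(t) * seg;
}
```

Because every chunk is a power-of-two number of fixed-size entries, chunk sizes map one-to-one onto buddy orders.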
+32
drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
···
 	return MLX5_CAP_ESW_FLOWTABLE_FDB(dev, sw_owner);
 }
 
+/* buddy functions & structure */
+
+struct mlx5dr_icm_mr;
+
+struct mlx5dr_icm_buddy_mem {
+	unsigned long **bitmap;
+	unsigned int *num_free;
+	u32 max_order;
+	struct list_head list_node;
+	struct mlx5dr_icm_mr *icm_mr;
+	struct mlx5dr_icm_pool *pool;
+
+	/* This is the list of used chunks. HW may be accessing this memory */
+	struct list_head used_list;
+	u64 used_memory;
+
+	/* Hardware may be accessing this memory but at some future,
+	 * undetermined time, it might cease to do so.
+	 * sync_ste command sets them free.
+	 */
+	struct list_head hot_list;
+};
+
+int mlx5dr_buddy_init(struct mlx5dr_icm_buddy_mem *buddy,
+		      unsigned int max_order);
+void mlx5dr_buddy_cleanup(struct mlx5dr_icm_buddy_mem *buddy);
+int mlx5dr_buddy_alloc_mem(struct mlx5dr_icm_buddy_mem *buddy,
+			   unsigned int order,
+			   unsigned int *segment);
+void mlx5dr_buddy_free_mem(struct mlx5dr_icm_buddy_mem *buddy,
+			   unsigned int seg, unsigned int order);
+
 #endif /* _MLX5DR_H_ */
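The `hot_list` comment in the struct above captures the series' other key idea: freeing a chunk only *parks* it, because hardware may still be reading that ICM; only once enough hot bytes accumulate does one steering sync flush hardware and return the memory to the buddy. A toy model of that lifecycle (the `toy_` names, sizes, and threshold are made-up stand-ins for `mlx5dr_icm_free_chunk()` / `dr_icm_pool_sync_all_buddy_pools()`):

```c
#include <assert.h>
#include <stddef.h>

#define SYNC_THRESHOLD 1024	/* illustrative; real pool uses a byte threshold too */

struct toy_pool {
	size_t hot_bytes;	/* parked: HW may still touch this memory */
	size_t free_bytes;	/* returned to the buddy allocator */
	int syncs;		/* how many (expensive) HW syncs ran */
};

static void toy_sync(struct toy_pool *p)
{
	/* stands in for mlx5dr_cmd_sync_steering() followed by
	 * mlx5dr_buddy_free_mem() on every hot chunk
	 */
	p->syncs++;
	p->free_bytes += p->hot_bytes;
	p->hot_bytes = 0;
}

/* mirrors mlx5dr_icm_free_chunk(): park first, sync only past threshold */
static void toy_free_chunk(struct toy_pool *p, size_t byte_size)
{
	p->hot_bytes += byte_size;
	if (p->hot_bytes > SYNC_THRESHOLD)
		toy_sync(p);
}
```

Batching frees behind one threshold-triggered sync amortizes the cost of the firmware sync command over many chunk releases instead of paying it per free.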