Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mlx5-updates-2023-04-11' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2023-04-11

1) Vlad adds support for Linux bridge multicast offload,
Patches #1 through #9
Synopsis

Vlad Says:
==============
Implement support for bridge multicast offload in mlx5. Handle the port object
attribute SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED notification to toggle multicast
offload and bridge snooping support on the bridge. Handle the port object
SWITCHDEV_OBJ_ID_PORT_MDB notification to attach a bridge port to an MDB entry.

Steering architecture

Existing offload infrastructure relies on two levels of flow tables - bridge
ingress and egress. For multicast offload the architecture is extended with an
additional layer of per-port multicast replication tables. Such tables filter
loopback traffic (so packets are not replicated to their source port) and pop
VLAN headers for "untagged" VLANs. The tables are referenced by the MDB rules in
the egress table. An MDB egress rule can point to multiple per-port multicast
tables, which causes matching multicast traffic to be replicated to all of them
and, consequently, to several bridge ports:

+--------+--+
+---------------------------------------> Port 1 | |
| +-^------+--+
| |
| |
+-----------------------------------------+ | +---------------------------+ |
| EGRESS table | | +--> PORT 1 multicast table | |
+----------------------------------+ +-----------------------------------------+ | | +---------------------------+ |
| INGRESS table | | | | | | | |
+----------------------------------+ | dst_mac=P1,vlan=X -> pop vlan, goto P1 +--+ | | FG0: | |
| | | dst_mac=P1,vlan=Y -> pop vlan, goto P1 | | | src_port=dst_port -> drop | |
| src_mac=M1,vlan=X -> goto egress +---> dst_mac=P2,vlan=X -> pop vlan, goto P2 +--+ | | FG1: | |
| ... | | dst_mac=P2,vlan=Y -> goto P2 | | | | VLAN X -> pop, goto port | |
| | | dst_mac=MDB1,vlan=Y -> goto mcast P1,P2 +-----+ | ... | |
+----------------------------------+ | | | | | VLAN Y -> pop, goto port +-------+
+-----------------------------------------+ | | | FG3: |
| | | matchall -> goto port |
| | | |
| | +---------------------------+
| |
| |
| | +--------+--+
+---------------------------------------> Port 2 | |
| +-^------+--+
| |
| |
| +---------------------------+ |
+--> PORT 2 multicast table | |
+---------------------------+ |
| | |
| FG0: | |
| src_port=dst_port -> drop | |
| FG1: | |
| VLAN X -> pop, goto port | |
| ... | |
| | |
| FG3: | |
| matchall -> goto port +-------+
| |
+---------------------------+

Patches overview:

- Patch 1 adds hardware definition bits for capabilities required to replicate
  multicast packets to multiple per-port tables. These bits are used by the
  following patches to only attempt multicast offload if firmware and hardware
  provide the necessary support.

- Patches 2-4 are preparations and refactoring.

- Patch 5 implements the necessary infrastructure to toggle multicast offload
  via the SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED port object attribute notification.
  This also enables IGMP and MLD snooping.

- Patch 6 implements per-port multicast replication tables. It only supports
filtering of loopback packets.

- Patch 7 extends per-port multicast tables with VLAN pop support for 'untagged'
VLANs.

- Patch 8 handles SWITCHDEV_OBJ_ID_PORT_MDB port object notifications. It
creates MDB replication rules in egress table that can replicate packets to
multiple per-port multicast tables.

- Patch 9 adds tracepoints for MDB events.

==============

2) Parav creates a new allocation profile for SFs, to save on memory

3) Yevgeny provides initial patches for the upcoming software steering
support for the new pattern/arguments type of modify_header actions.

Starting with ConnectX-6 DX, we use a new design of the modify_header FW object.
The current modify_header object allows only a limited number of these FW
objects, which limits the number of offloaded flows that require a
modify_header action.

As preparation, Yevgeny provides the following 4 patches:
 - Patch 1: Add required mlx5_ifc HW bits
 - Patches 2, 3: Add the new WQE type and opcode required for pattern/arg
   support and add appropriate support in dr_send.c
 - Patch 4: Add an ICM pool for modify-header-pattern objects and implement
   a patterns cache, allowing pattern reuse across different flows

* tag 'mlx5-updates-2023-04-11' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
net/mlx5: DR, Add modify-header-pattern ICM pool
net/mlx5: DR, Prepare sending new WQE type
net/mlx5: Add new WQE for updating flow table
net/mlx5: Add mlx5_ifc bits for modify header argument
net/mlx5: DR, Set counter ID on the last STE for STEv1 TX
net/mlx5: Create a new profile for SFs
net/mlx5: Bridge, add tracepoints for multicast
net/mlx5: Bridge, implement mdb offload
net/mlx5: Bridge, support multicast VLAN pop
net/mlx5: Bridge, add per-port multicast replication tables
net/mlx5: Bridge, snoop igmp/mld packets
net/mlx5: Bridge, extract code to lookup parent bridge of port
net/mlx5: Bridge, move additional data structures to priv header
net/mlx5: Bridge, increase bridge tables sizes
net/mlx5: Add mlx5_ifc definitions for bridge multicast support
====================

Link: https://lore.kernel.org/r/20230412040752.14220-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+1783 -164
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/Makefile
···
 	esw/acl/egress_lgcy.o esw/acl/egress_ofld.o \
 	esw/acl/ingress_lgcy.o esw/acl/ingress_ofld.o
 
-mlx5_core-$(CONFIG_MLX5_BRIDGE) += esw/bridge.o en/rep/bridge.o
+mlx5_core-$(CONFIG_MLX5_BRIDGE) += esw/bridge.o esw/bridge_mcast.o en/rep/bridge.o
 
 mlx5_core-$(CONFIG_THERMAL) += thermal.o
 mlx5_core-$(CONFIG_MLX5_MPFS) += lib/mpfs.o
···
 		steering/dr_ste_v2.o \
 		steering/dr_cmd.o steering/dr_fw.o \
 		steering/dr_action.o steering/fs_dr.o \
-		steering/dr_definer.o \
+		steering/dr_definer.o steering/dr_ptrn.o \
 		steering/dr_dbg.o lib/smfs.o
 #
 # SF device
+3 -3
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
···
 	if (in_size <= 16)
 		goto cache_miss;
 
-	for (i = 0; i < MLX5_NUM_COMMAND_CACHES; i++) {
+	for (i = 0; i < dev->profile.num_cmd_caches; i++) {
 		ch = &cmd->cache[i];
 		if (in_size > ch->max_inbox_size)
 			continue;
···
 	struct mlx5_cmd_msg *n;
 	int i;
 
-	for (i = 0; i < MLX5_NUM_COMMAND_CACHES; i++) {
+	for (i = 0; i < dev->profile.num_cmd_caches; i++) {
 		ch = &dev->cmd.cache[i];
 		list_for_each_entry_safe(msg, n, &ch->head, list) {
 			list_del(&msg->list);
···
 	int k;
 
 	/* Initialize and fill the caches with initial entries */
-	for (k = 0; k < MLX5_NUM_COMMAND_CACHES; k++) {
+	for (k = 0; k < dev->profile.num_cmd_caches; k++) {
 		ch = &cmd->cache[k];
 		spin_lock_init(&ch->lock);
 		INIT_LIST_HEAD(&ch->head);
+16
drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
···
 	struct netlink_ext_ack *extack = switchdev_notifier_info_to_extack(&port_obj_info->info);
 	const struct switchdev_obj *obj = port_obj_info->obj;
 	const struct switchdev_obj_port_vlan *vlan;
+	const struct switchdev_obj_port_mdb *mdb;
 	u16 vport_num, esw_owner_vhca_id;
 	int err;
 
···
 		err = mlx5_esw_bridge_port_vlan_add(vport_num, esw_owner_vhca_id, vlan->vid,
 						    vlan->flags, br_offloads, extack);
 		break;
+	case SWITCHDEV_OBJ_ID_PORT_MDB:
+		mdb = SWITCHDEV_OBJ_PORT_MDB(obj);
+		err = mlx5_esw_bridge_port_mdb_add(dev, vport_num, esw_owner_vhca_id, mdb->addr,
+						   mdb->vid, br_offloads, extack);
+		break;
 	default:
 		return -EOPNOTSUPP;
 	}
···
 {
 	const struct switchdev_obj *obj = port_obj_info->obj;
 	const struct switchdev_obj_port_vlan *vlan;
+	const struct switchdev_obj_port_mdb *mdb;
 	u16 vport_num, esw_owner_vhca_id;
 
 	if (!mlx5_esw_bridge_rep_vport_num_vhca_id_get(dev, br_offloads->esw, &vport_num,
···
 	case SWITCHDEV_OBJ_ID_PORT_VLAN:
 		vlan = SWITCHDEV_OBJ_PORT_VLAN(obj);
 		mlx5_esw_bridge_port_vlan_del(vport_num, esw_owner_vhca_id, vlan->vid, br_offloads);
+		break;
+	case SWITCHDEV_OBJ_ID_PORT_MDB:
+		mdb = SWITCHDEV_OBJ_PORT_MDB(obj);
+		mlx5_esw_bridge_port_mdb_del(dev, vport_num, esw_owner_vhca_id, mdb->addr, mdb->vid,
+					     br_offloads);
 		break;
 	default:
 		return -EOPNOTSUPP;
···
 						       esw_owner_vhca_id,
 						       attr->u.vlan_protocol,
 						       br_offloads);
+		break;
+	case SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED:
+		err = mlx5_esw_bridge_mcast_set(vport_num, esw_owner_vhca_id,
+						!attr->u.mc_disabled, br_offloads);
 		break;
 	default:
 		err = -EOPNOTSUPP;
+171 -116
drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
···
 #define CREATE_TRACE_POINTS
 #include "diag/bridge_tracepoint.h"
 
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE 12000
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_UNTAGGED_GRP_SIZE 16000
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_FROM 0
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_TO \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_FROM \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_TO + 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_TO \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_FROM + \
-	 MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_FROM \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_TO + 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_TO \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_FROM + \
-	 MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_FROM \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_TO + 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_TO \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_FROM + \
-	 MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_FROM \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_TO + 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_TO \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_FROM + \
-	 MLX5_ESW_BRIDGE_INGRESS_TABLE_UNTAGGED_GRP_SIZE - 1)
-#define MLX5_ESW_BRIDGE_INGRESS_TABLE_SIZE \
-	(MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_TO + 1)
-static_assert(MLX5_ESW_BRIDGE_INGRESS_TABLE_SIZE == 64000);
-
-#define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_SIZE 16000
-#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_SIZE (32000 - 1)
-#define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_IDX_FROM 0
-#define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_IDX_TO \
-	(MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_SIZE - 1)
-#define MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_FROM \
-	(MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_IDX_TO + 1)
-#define MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_TO \
-	(MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_FROM + \
-	 MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_SIZE - 1)
-#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_FROM \
-	(MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_TO + 1)
-#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_TO \
-	(MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_FROM + \
-	 MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_SIZE - 1)
-#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_FROM \
-	(MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_TO + 1)
-#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_TO \
-	MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_FROM
-#define MLX5_ESW_BRIDGE_EGRESS_TABLE_SIZE \
-	(MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_TO + 1)
-static_assert(MLX5_ESW_BRIDGE_EGRESS_TABLE_SIZE == 64000);
-
-#define MLX5_ESW_BRIDGE_SKIP_TABLE_SIZE 0
-
-enum {
-	MLX5_ESW_BRIDGE_LEVEL_INGRESS_TABLE,
-	MLX5_ESW_BRIDGE_LEVEL_EGRESS_TABLE,
-	MLX5_ESW_BRIDGE_LEVEL_SKIP_TABLE,
-};
-
 static const struct rhashtable_params fdb_ht_params = {
 	.key_offset = offsetof(struct mlx5_esw_bridge_fdb_entry, key),
 	.key_len = sizeof(struct mlx5_esw_bridge_fdb_key),
 	.head_offset = offsetof(struct mlx5_esw_bridge_fdb_entry, ht_node),
 	.automatic_shrinking = true,
-};
-
-enum {
-	MLX5_ESW_BRIDGE_VLAN_FILTERING_FLAG = BIT(0),
-};
-
-struct mlx5_esw_bridge {
-	int ifindex;
-	int refcnt;
-	struct list_head list;
-	struct mlx5_esw_bridge_offloads *br_offloads;
-
-	struct list_head fdb_list;
-	struct rhashtable fdb_ht;
-
-	struct mlx5_flow_table *egress_ft;
-	struct mlx5_flow_group *egress_vlan_fg;
-	struct mlx5_flow_group *egress_qinq_fg;
-	struct mlx5_flow_group *egress_mac_fg;
-	struct mlx5_flow_group *egress_miss_fg;
-	struct mlx5_pkt_reformat *egress_miss_pkt_reformat;
-	struct mlx5_flow_handle *egress_miss_handle;
-	unsigned long ageing_time;
-	u32 flags;
-	u16 vlan_proto;
 };
 
 static void
···
 	return mlx5_packet_reformat_alloc(esw->dev, &reformat_params, MLX5_FLOW_NAMESPACE_FDB);
 }
 
-static struct mlx5_flow_table *
+struct mlx5_flow_table *
 mlx5_esw_bridge_table_create(int max_fte, u32 level, struct mlx5_eswitch *esw)
 {
 	struct mlx5_flow_table_attr ft_attr = {};
···
 	if (err)
 		goto err_fdb_ht;
 
+	err = mlx5_esw_bridge_mdb_init(bridge);
+	if (err)
+		goto err_mdb_ht;
+
 	INIT_LIST_HEAD(&bridge->fdb_list);
 	bridge->ifindex = ifindex;
 	bridge->refcnt = 1;
···
 	return bridge;
 
+err_mdb_ht:
+	rhashtable_destroy(&bridge->fdb_ht);
 err_fdb_ht:
 	mlx5_esw_bridge_egress_table_cleanup(bridge);
 err_egress_tbl:
···
 		return;
 
 	mlx5_esw_bridge_egress_table_cleanup(bridge);
+	mlx5_esw_bridge_mcast_disable(bridge);
 	list_del(&bridge->list);
+	mlx5_esw_bridge_mdb_cleanup(bridge);
 	rhashtable_destroy(&bridge->fdb_ht);
 	kvfree(bridge);
···
 	return vport_num | (unsigned long)esw_owner_vhca_id << sizeof(vport_num) * BITS_PER_BYTE;
 }
 
-static unsigned long mlx5_esw_bridge_port_key(struct mlx5_esw_bridge_port *port)
+unsigned long mlx5_esw_bridge_port_key(struct mlx5_esw_bridge_port *port)
 {
 	return mlx5_esw_bridge_port_key_from_data(port->vport_num, port->esw_owner_vhca_id);
 }
···
 			  struct mlx5_esw_bridge_offloads *br_offloads)
 {
 	xa_erase(&br_offloads->ports, mlx5_esw_bridge_port_key(port));
+}
+
+static struct mlx5_esw_bridge *
+mlx5_esw_bridge_from_port_lookup(u16 vport_num, u16 esw_owner_vhca_id,
+				 struct mlx5_esw_bridge_offloads *br_offloads)
+{
+	struct mlx5_esw_bridge_port *port;
+
+	port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
+	if (!port)
+		return NULL;
+
+	return port->bridge;
 }
 
 static void mlx5_esw_bridge_fdb_entry_refresh(struct mlx5_esw_bridge_fdb_entry *entry)
···
 }
 
 static int
-mlx5_esw_bridge_vlan_push_pop_create(u16 vlan_proto, u16 flags, struct mlx5_esw_bridge_vlan *vlan,
-				     struct mlx5_eswitch *esw)
+mlx5_esw_bridge_vlan_push_pop_fhs_create(u16 vlan_proto, struct mlx5_esw_bridge_port *port,
+					 struct mlx5_esw_bridge_vlan *vlan)
+{
+	return mlx5_esw_bridge_vlan_mcast_init(vlan_proto, port, vlan);
+}
+
+static void
+mlx5_esw_bridge_vlan_push_pop_fhs_cleanup(struct mlx5_esw_bridge_vlan *vlan)
+{
+	mlx5_esw_bridge_vlan_mcast_cleanup(vlan);
+}
+
+static int
+mlx5_esw_bridge_vlan_push_pop_create(u16 vlan_proto, u16 flags, struct mlx5_esw_bridge_port *port,
+				     struct mlx5_esw_bridge_vlan *vlan, struct mlx5_eswitch *esw)
 {
 	int err;
···
 		err = mlx5_esw_bridge_vlan_pop_create(vlan, esw);
 		if (err)
 			goto err_vlan_pop;
+
+		err = mlx5_esw_bridge_vlan_push_pop_fhs_create(vlan_proto, port, vlan);
+		if (err)
+			goto err_vlan_pop_fhs;
 	}
 
 	return 0;
 
+err_vlan_pop_fhs:
+	mlx5_esw_bridge_vlan_pop_cleanup(vlan, esw);
 err_vlan_pop:
 	if (vlan->pkt_mod_hdr_push_mark)
 		mlx5_esw_bridge_vlan_push_mark_cleanup(vlan, esw);
···
 	vlan->flags = flags;
 	INIT_LIST_HEAD(&vlan->fdb_list);
 
-	err = mlx5_esw_bridge_vlan_push_pop_create(vlan_proto, flags, vlan, esw);
+	err = mlx5_esw_bridge_vlan_push_pop_create(vlan_proto, flags, port, vlan, esw);
 	if (err)
 		goto err_vlan_push_pop;
···
 	return vlan;
 
 err_xa_insert:
+	if (vlan->mcast_handle)
+		mlx5_esw_bridge_vlan_push_pop_fhs_cleanup(vlan);
 	if (vlan->pkt_reformat_pop)
 		mlx5_esw_bridge_vlan_pop_cleanup(vlan, esw);
 	if (vlan->pkt_mod_hdr_push_mark)
···
 	xa_erase(&port->vlans, vlan->vid);
 }
 
-static void mlx5_esw_bridge_vlan_flush(struct mlx5_esw_bridge_vlan *vlan,
+static void mlx5_esw_bridge_vlan_flush(struct mlx5_esw_bridge_port *port,
+				       struct mlx5_esw_bridge_vlan *vlan,
 				       struct mlx5_esw_bridge *bridge)
 {
 	struct mlx5_eswitch *esw = bridge->br_offloads->esw;
···
 	list_for_each_entry_safe(entry, tmp, &vlan->fdb_list, vlan_list)
 		mlx5_esw_bridge_fdb_entry_notify_and_cleanup(entry, bridge);
+	mlx5_esw_bridge_port_mdb_vlan_flush(port, vlan);
 
+	if (vlan->mcast_handle)
+		mlx5_esw_bridge_vlan_push_pop_fhs_cleanup(vlan);
 	if (vlan->pkt_reformat_pop)
 		mlx5_esw_bridge_vlan_pop_cleanup(vlan, esw);
 	if (vlan->pkt_mod_hdr_push_mark)
···
 			   struct mlx5_esw_bridge *bridge)
 {
 	trace_mlx5_esw_bridge_vlan_cleanup(vlan);
-	mlx5_esw_bridge_vlan_flush(vlan, bridge);
+	mlx5_esw_bridge_vlan_flush(port, vlan, bridge);
 	mlx5_esw_bridge_vlan_erase(port, vlan);
 	kvfree(vlan);
 }
···
 	int err;
 
 	xa_for_each(&port->vlans, i, vlan) {
-		mlx5_esw_bridge_vlan_flush(vlan, bridge);
-		err = mlx5_esw_bridge_vlan_push_pop_create(bridge->vlan_proto, vlan->flags, vlan,
-							   br_offloads->esw);
+		mlx5_esw_bridge_vlan_flush(port, vlan, bridge);
+		err = mlx5_esw_bridge_vlan_push_pop_create(bridge->vlan_proto, vlan->flags, port,
+							   vlan, br_offloads->esw);
 		if (err) {
 			esw_warn(br_offloads->esw->dev,
 				 "Failed to create VLAN=%u(proto=%x) push/pop actions (vport=%u,err=%d)\n",
···
 int mlx5_esw_bridge_ageing_time_set(u16 vport_num, u16 esw_owner_vhca_id, unsigned long ageing_time,
 				    struct mlx5_esw_bridge_offloads *br_offloads)
 {
-	struct mlx5_esw_bridge_port *port;
+	struct mlx5_esw_bridge *bridge;
 
-	port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
-	if (!port)
+	bridge = mlx5_esw_bridge_from_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
+	if (!bridge)
 		return -EINVAL;
 
-	port->bridge->ageing_time = clock_t_to_jiffies(ageing_time);
+	bridge->ageing_time = clock_t_to_jiffies(ageing_time);
 	return 0;
 }
 
 int mlx5_esw_bridge_vlan_filtering_set(u16 vport_num, u16 esw_owner_vhca_id, bool enable,
 				       struct mlx5_esw_bridge_offloads *br_offloads)
 {
-	struct mlx5_esw_bridge_port *port;
 	struct mlx5_esw_bridge *bridge;
 	bool filtering;
 
-	port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
-	if (!port)
+	bridge = mlx5_esw_bridge_from_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
+	if (!bridge)
 		return -EINVAL;
 
-	bridge = port->bridge;
 	filtering = bridge->flags & MLX5_ESW_BRIDGE_VLAN_FILTERING_FLAG;
 	if (filtering == enable)
 		return 0;
 
 	mlx5_esw_bridge_fdb_flush(bridge);
+	mlx5_esw_bridge_mdb_flush(bridge);
 	if (enable)
 		bridge->flags |= MLX5_ESW_BRIDGE_VLAN_FILTERING_FLAG;
 	else
···
 int mlx5_esw_bridge_vlan_proto_set(u16 vport_num, u16 esw_owner_vhca_id, u16 proto,
 				   struct mlx5_esw_bridge_offloads *br_offloads)
 {
-	struct mlx5_esw_bridge_port *port;
 	struct mlx5_esw_bridge *bridge;
 
-	port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id,
-					   br_offloads);
-	if (!port)
+	bridge = mlx5_esw_bridge_from_port_lookup(vport_num, esw_owner_vhca_id,
+						  br_offloads);
+	if (!bridge)
 		return -EINVAL;
 
-	bridge = port->bridge;
 	if (bridge->vlan_proto == proto)
 		return 0;
 	if (proto != ETH_P_8021Q && proto != ETH_P_8021AD) {
···
 	}
 
 	mlx5_esw_bridge_fdb_flush(bridge);
+	mlx5_esw_bridge_mdb_flush(bridge);
 	bridge->vlan_proto = proto;
 	mlx5_esw_bridge_vlans_recreate(bridge);
 
 	return 0;
+}
+
+int mlx5_esw_bridge_mcast_set(u16 vport_num, u16 esw_owner_vhca_id, bool enable,
+			      struct mlx5_esw_bridge_offloads *br_offloads)
+{
+	struct mlx5_eswitch *esw = br_offloads->esw;
+	struct mlx5_esw_bridge *bridge;
+	int err = 0;
+	bool mcast;
+
+	if (!(MLX5_CAP_ESW_FLOWTABLE((esw)->dev, fdb_multi_path_any_table) ||
+	      MLX5_CAP_ESW_FLOWTABLE((esw)->dev, fdb_multi_path_any_table_limit_regc)) ||
+	    !MLX5_CAP_ESW_FLOWTABLE((esw)->dev, fdb_uplink_hairpin) ||
+	    !MLX5_CAP_ESW_FLOWTABLE_FDB((esw)->dev, ignore_flow_level))
+		return -EOPNOTSUPP;
+
+	bridge = mlx5_esw_bridge_from_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
+	if (!bridge)
+		return -EINVAL;
+
+	mcast = bridge->flags & MLX5_ESW_BRIDGE_MCAST_FLAG;
+	if (mcast == enable)
+		return 0;
+
+	if (enable)
+		err = mlx5_esw_bridge_mcast_enable(bridge);
+	else
+		mlx5_esw_bridge_mcast_disable(bridge);
+
+	return err;
 }
 
 static int mlx5_esw_bridge_vport_init(u16 vport_num, u16 esw_owner_vhca_id, u16 flags,
···
 	port->bridge = bridge;
 	port->flags |= flags;
 	xa_init(&port->vlans);
+
+	err = mlx5_esw_bridge_port_mcast_init(port);
+	if (err) {
+		esw_warn(esw->dev,
+			 "Failed to initialize port multicast (vport=%u,esw_owner_vhca_id=%u,err=%d)\n",
+			 port->vport_num, port->esw_owner_vhca_id, err);
+		goto err_port_mcast;
+	}
+
 	err = mlx5_esw_bridge_port_insert(port, br_offloads);
 	if (err) {
 		esw_warn(esw->dev,
···
 	return 0;
 
 err_port_insert:
+	mlx5_esw_bridge_port_mcast_cleanup(port);
+err_port_mcast:
 	kvfree(port);
 	return err;
 }
···
 	trace_mlx5_esw_bridge_vport_cleanup(port);
 	mlx5_esw_bridge_port_vlans_flush(port, bridge);
+	mlx5_esw_bridge_port_mcast_cleanup(port);
 	mlx5_esw_bridge_port_erase(port, br_offloads);
 	kvfree(port);
 	mlx5_esw_bridge_put(br_offloads, bridge);
···
 			      struct switchdev_notifier_fdb_info *fdb_info)
 {
 	struct mlx5_esw_bridge_fdb_entry *entry;
-	struct mlx5_esw_bridge_port *port;
 	struct mlx5_esw_bridge *bridge;
 
-	port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
-	if (!port)
+	bridge = mlx5_esw_bridge_from_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
+	if (!bridge)
 		return;
 
-	bridge = port->bridge;
 	entry = mlx5_esw_bridge_fdb_lookup(bridge, fdb_info->addr, fdb_info->vid);
 	if (!entry) {
 		esw_debug(br_offloads->esw->dev,
···
 {
 	struct mlx5_eswitch *esw = br_offloads->esw;
 	struct mlx5_esw_bridge_fdb_entry *entry;
-	struct mlx5_esw_bridge_port *port;
 	struct mlx5_esw_bridge *bridge;
 
-	port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
-	if (!port)
+	bridge = mlx5_esw_bridge_from_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
+	if (!bridge)
 		return;
 
-	bridge = port->bridge;
 	entry = mlx5_esw_bridge_fdb_lookup(bridge, fdb_info->addr, fdb_info->vid);
 	if (!entry) {
 		esw_debug(esw->dev,
···
 			mlx5_esw_bridge_fdb_entry_notify_and_cleanup(entry, bridge);
 		}
 	}
+}
+
+int mlx5_esw_bridge_port_mdb_add(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id,
+				 const unsigned char *addr, u16 vid,
+				 struct mlx5_esw_bridge_offloads *br_offloads,
+				 struct netlink_ext_ack *extack)
+{
+	struct mlx5_esw_bridge_vlan *vlan;
+	struct mlx5_esw_bridge_port *port;
+	struct mlx5_esw_bridge *bridge;
+	int err;
+
+	port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
+	if (!port) {
+		esw_warn(br_offloads->esw->dev,
+			 "Failed to lookup bridge port to add MDB (MAC=%pM,vport=%u)\n",
+			 addr, vport_num);
+		NL_SET_ERR_MSG_FMT_MOD(extack,
+				       "Failed to lookup bridge port to add MDB (MAC=%pM,vport=%u)\n",
+				       addr, vport_num);
+		return -EINVAL;
+	}
+
+	bridge = port->bridge;
+	if (bridge->flags & MLX5_ESW_BRIDGE_VLAN_FILTERING_FLAG && vid) {
+		vlan = mlx5_esw_bridge_vlan_lookup(vid, port);
+		if (!vlan) {
+			esw_warn(br_offloads->esw->dev,
+				 "Failed to lookup bridge port vlan metadata to create MDB (MAC=%pM,vid=%u,vport=%u)\n",
+				 addr, vid, vport_num);
+			NL_SET_ERR_MSG_FMT_MOD(extack,
+					       "Failed to lookup bridge port vlan metadata to create MDB (MAC=%pM,vid=%u,vport=%u)\n",
+					       addr, vid, vport_num);
+			return -EINVAL;
+		}
+	}
+
+	err = mlx5_esw_bridge_port_mdb_attach(dev, port, addr, vid);
+	if (err) {
+		NL_SET_ERR_MSG_FMT_MOD(extack, "Failed to add MDB (MAC=%pM,vid=%u,vport=%u)\n",
+				       addr, vid, vport_num);
+		return err;
+	}
+
+	return 0;
+}
+
+void mlx5_esw_bridge_port_mdb_del(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id,
+				  const unsigned char *addr, u16 vid,
+				  struct mlx5_esw_bridge_offloads *br_offloads)
+{
+	struct mlx5_esw_bridge_port *port;
+
+	port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
+	if (!port)
+		return;
+
+	mlx5_esw_bridge_port_mdb_detach(dev, port, addr, vid);
 }
 
 static void mlx5_esw_bridge_flush(struct mlx5_esw_bridge_offloads *br_offloads)
+17
drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.h
···
 	struct delayed_work update_work;
 
 	struct mlx5_flow_table *ingress_ft;
+	struct mlx5_flow_group *ingress_igmp_fg;
+	struct mlx5_flow_group *ingress_mld_fg;
 	struct mlx5_flow_group *ingress_vlan_fg;
 	struct mlx5_flow_group *ingress_vlan_filter_fg;
 	struct mlx5_flow_group *ingress_qinq_fg;
 	struct mlx5_flow_group *ingress_qinq_filter_fg;
 	struct mlx5_flow_group *ingress_mac_fg;
+
+	struct mlx5_flow_handle *igmp_handle;
+	struct mlx5_flow_handle *mld_query_handle;
+	struct mlx5_flow_handle *mld_report_handle;
+	struct mlx5_flow_handle *mld_done_handle;
 
 	struct mlx5_flow_table *skip_ft;
 };
···
 			      struct mlx5_esw_bridge_offloads *br_offloads);
 int mlx5_esw_bridge_vlan_proto_set(u16 vport_num, u16 esw_owner_vhca_id, u16 proto,
 				   struct mlx5_esw_bridge_offloads *br_offloads);
+int mlx5_esw_bridge_mcast_set(u16 vport_num, u16 esw_owner_vhca_id, bool enable,
+			      struct mlx5_esw_bridge_offloads *br_offloads);
 int mlx5_esw_bridge_port_vlan_add(u16 vport_num, u16 esw_owner_vhca_id, u16 vid, u16 flags,
 				  struct mlx5_esw_bridge_offloads *br_offloads,
 				  struct netlink_ext_ack *extack);
 void mlx5_esw_bridge_port_vlan_del(u16 vport_num, u16 esw_owner_vhca_id, u16 vid,
 				   struct mlx5_esw_bridge_offloads *br_offloads);
+
+int mlx5_esw_bridge_port_mdb_add(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id,
+				 const unsigned char *addr, u16 vid,
+				 struct mlx5_esw_bridge_offloads *br_offloads,
+				 struct netlink_ext_ack *extack);
+void mlx5_esw_bridge_port_mdb_del(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id,
+				  const unsigned char *addr, u16 vid,
+				  struct mlx5_esw_bridge_offloads *br_offloads);
 
 #endif /* __MLX5_ESW_BRIDGE_H__ */
+1126
drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c
···
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
+
+#include "lib/devcom.h"
+#include "bridge.h"
+#include "eswitch.h"
+#include "bridge_priv.h"
+#include "diag/bridge_tracepoint.h"
+
+static const struct rhashtable_params mdb_ht_params = {
+	.key_offset = offsetof(struct mlx5_esw_bridge_mdb_entry, key),
+	.key_len = sizeof(struct mlx5_esw_bridge_mdb_key),
+	.head_offset = offsetof(struct mlx5_esw_bridge_mdb_entry, ht_node),
+	.automatic_shrinking = true,
+};
+
+int mlx5_esw_bridge_mdb_init(struct mlx5_esw_bridge *bridge)
+{
+	INIT_LIST_HEAD(&bridge->mdb_list);
+	return rhashtable_init(&bridge->mdb_ht, &mdb_ht_params);
+}
+
+void mlx5_esw_bridge_mdb_cleanup(struct mlx5_esw_bridge *bridge)
+{
+	rhashtable_destroy(&bridge->mdb_ht);
+}
+
+static struct mlx5_esw_bridge_port *
+mlx5_esw_bridge_mdb_port_lookup(struct mlx5_esw_bridge_port *port,
+				struct mlx5_esw_bridge_mdb_entry *entry)
+{
+	return xa_load(&entry->ports, mlx5_esw_bridge_port_key(port));
+}
+
+static int mlx5_esw_bridge_mdb_port_insert(struct mlx5_esw_bridge_port *port,
+					   struct mlx5_esw_bridge_mdb_entry *entry)
+{
+	int err = xa_insert(&entry->ports, mlx5_esw_bridge_port_key(port), port, GFP_KERNEL);
+
+	if (!err)
+		entry->num_ports++;
+	return err;
+}
+
+static void mlx5_esw_bridge_mdb_port_remove(struct mlx5_esw_bridge_port *port,
+					    struct mlx5_esw_bridge_mdb_entry *entry)
+{
+	xa_erase(&entry->ports, mlx5_esw_bridge_port_key(port));
+	entry->num_ports--;
+}
+
+static struct mlx5_flow_handle *
+mlx5_esw_bridge_mdb_flow_create(u16 esw_owner_vhca_id, struct mlx5_esw_bridge_mdb_entry *entry,
+				struct mlx5_esw_bridge *bridge)
+{
+	struct mlx5_flow_act flow_act = {
+		.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
+		.flags = FLOW_ACT_NO_APPEND | FLOW_ACT_IGNORE_FLOW_LEVEL,
+	};
+	int num_dests = entry->num_ports, i = 0;
+	struct mlx5_flow_destination *dests;
+	struct mlx5_esw_bridge_port *port;
+	struct mlx5_flow_spec *rule_spec;
+	struct mlx5_flow_handle *handle;
+	u8 *dmac_v, *dmac_c;
+	unsigned long idx;
+
+	rule_spec = kvzalloc(sizeof(*rule_spec), GFP_KERNEL);
+	if (!rule_spec)
+		return ERR_PTR(-ENOMEM);
+
+	dests = kvcalloc(num_dests, sizeof(*dests), GFP_KERNEL);
+	if (!dests) {
+		kvfree(rule_spec);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	xa_for_each(&entry->ports, idx, port) {
+		dests[i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+		dests[i].ft = port->mcast.ft;
+		i++;
+	}
+
+	rule_spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+	dmac_v = MLX5_ADDR_OF(fte_match_param, rule_spec->match_value, outer_headers.dmac_47_16);
+	ether_addr_copy(dmac_v, entry->key.addr);
+	dmac_c = MLX5_ADDR_OF(fte_match_param, rule_spec->match_criteria, outer_headers.dmac_47_16);
+	eth_broadcast_addr(dmac_c);
+
+	if (entry->key.vid) {
+		if (bridge->vlan_proto == ETH_P_8021Q) {
+			MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria,
+					 outer_headers.cvlan_tag);
+			MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_value,
+					 outer_headers.cvlan_tag);
+		} else if (bridge->vlan_proto == ETH_P_8021AD) {
+			MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria,
+					 outer_headers.svlan_tag);
+			MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_value,
+					 outer_headers.svlan_tag);
+		}
+		MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria,
+				 outer_headers.first_vid);
+		MLX5_SET(fte_match_param, rule_spec->match_value, outer_headers.first_vid,
+			 entry->key.vid);
+	}
+
+	handle = mlx5_add_flow_rules(bridge->egress_ft, rule_spec, &flow_act, dests, num_dests);
+
+	kvfree(dests);
+	kvfree(rule_spec);
+	return handle;
+}
+
+static int
+mlx5_esw_bridge_port_mdb_offload(struct mlx5_esw_bridge_port *port,
+				 struct mlx5_esw_bridge_mdb_entry *entry)
+{
+	struct mlx5_flow_handle *handle;
+
+	handle = mlx5_esw_bridge_mdb_flow_create(port->esw_owner_vhca_id, entry, port->bridge);
+	if (entry->egress_handle) {
+		mlx5_del_flow_rules(entry->egress_handle);
+		entry->egress_handle = NULL;
+	}
+	if (IS_ERR(handle))
+		return PTR_ERR(handle);
+
+	entry->egress_handle = handle;
+	return 0;
+}
+
+static struct mlx5_esw_bridge_mdb_entry *
+mlx5_esw_bridge_mdb_lookup(struct mlx5_esw_bridge *bridge,
+			   const unsigned char *addr, u16 vid)
+{
+	struct mlx5_esw_bridge_mdb_key key = {};
+
+	ether_addr_copy(key.addr, addr);
+	key.vid = vid;
+	return rhashtable_lookup_fast(&bridge->mdb_ht, &key, mdb_ht_params);
+}
+
+static struct mlx5_esw_bridge_mdb_entry *
+mlx5_esw_bridge_port_mdb_entry_init(struct mlx5_esw_bridge_port *port,
+				    const unsigned char *addr, u16 vid)
+{
+	struct mlx5_esw_bridge *bridge = port->bridge;
+	struct mlx5_esw_bridge_mdb_entry *entry;
+	int err;
+
+	entry = kvzalloc(sizeof(*entry), GFP_KERNEL);
+	if (!entry)
+		return ERR_PTR(-ENOMEM);
+
+	ether_addr_copy(entry->key.addr, addr);
+	entry->key.vid = vid;
+	xa_init(&entry->ports);
+	err = rhashtable_insert_fast(&bridge->mdb_ht, &entry->ht_node, mdb_ht_params);
+	if (err)
+		goto err_ht_insert;
+
+	list_add(&entry->list, &bridge->mdb_list);
+
+	return entry;
+
+err_ht_insert:
+	xa_destroy(&entry->ports);
+	kvfree(entry);
+	return ERR_PTR(err);
+}
+
+static void mlx5_esw_bridge_port_mdb_entry_cleanup(struct mlx5_esw_bridge *bridge,
+						   struct mlx5_esw_bridge_mdb_entry *entry)
+{
+	if (entry->egress_handle)
+		mlx5_del_flow_rules(entry->egress_handle);
+	list_del(&entry->list);
+	rhashtable_remove_fast(&bridge->mdb_ht, &entry->ht_node, mdb_ht_params);
+	xa_destroy(&entry->ports);
+	kvfree(entry);
+}
+
+int mlx5_esw_bridge_port_mdb_attach(struct net_device *dev, struct mlx5_esw_bridge_port *port,
+				    const unsigned char *addr, u16 vid)
+{
+	struct mlx5_esw_bridge *bridge = port->bridge;
+	struct mlx5_esw_bridge_mdb_entry *entry;
+	int err;
+
+	if (!(bridge->flags & MLX5_ESW_BRIDGE_MCAST_FLAG))
+		return -EOPNOTSUPP;
+
+	entry = mlx5_esw_bridge_mdb_lookup(bridge, addr, vid);
+	if (entry) {
+		if (mlx5_esw_bridge_mdb_port_lookup(port, entry)) {
+			esw_warn(bridge->br_offloads->esw->dev, "MDB attach entry is already attached to port (MAC=%pM,vid=%u,vport=%u)\n",
+				 addr, vid, port->vport_num);
+			return 0;
+		}
+	} else {
+		entry = mlx5_esw_bridge_port_mdb_entry_init(port, addr, vid);
+		if (IS_ERR(entry)) {
+			err = PTR_ERR(entry);
+			esw_warn(bridge->br_offloads->esw->dev, "MDB attach failed to init entry (MAC=%pM,vid=%u,vport=%u,err=%d)\n",
+				 addr, vid, port->vport_num, err);
+			return err;
+		}
+	}
+
+	err = mlx5_esw_bridge_mdb_port_insert(port, entry);
+	if (err) {
+		if (!entry->num_ports)
+			mlx5_esw_bridge_port_mdb_entry_cleanup(bridge, entry); /* new mdb entry */
+		esw_warn(bridge->br_offloads->esw->dev,
+			 "MDB attach failed to insert port (MAC=%pM,vid=%u,vport=%u,err=%d)\n",
+			 addr, vid, port->vport_num, err);
+		return err;
+	}
+
+	err = mlx5_esw_bridge_port_mdb_offload(port, entry);
+	if (err)
+		/* Single mdb can be used by multiple ports, so just log the
+		 * error and continue.
225 + */ 226 + esw_warn(bridge->br_offloads->esw->dev, "MDB attach failed to offload (MAC=%pM,vid=%u,vport=%u,err=%d)\n", 227 + addr, vid, port->vport_num, err); 228 + 229 + trace_mlx5_esw_bridge_port_mdb_attach(dev, entry); 230 + return 0; 231 + } 232 + 233 + static void mlx5_esw_bridge_port_mdb_entry_detach(struct mlx5_esw_bridge_port *port, 234 + struct mlx5_esw_bridge_mdb_entry *entry) 235 + { 236 + struct mlx5_esw_bridge *bridge = port->bridge; 237 + int err; 238 + 239 + mlx5_esw_bridge_mdb_port_remove(port, entry); 240 + if (!entry->num_ports) { 241 + mlx5_esw_bridge_port_mdb_entry_cleanup(bridge, entry); 242 + return; 243 + } 244 + 245 + err = mlx5_esw_bridge_port_mdb_offload(port, entry); 246 + if (err) 247 + /* Single mdb can be used by multiple ports, so just log the 248 + * error and continue. 249 + */ 250 + esw_warn(bridge->br_offloads->esw->dev, "MDB detach failed to offload (MAC=%pM,vid=%u,vport=%u)\n", 251 + entry->key.addr, entry->key.vid, port->vport_num); 252 + } 253 + 254 + void mlx5_esw_bridge_port_mdb_detach(struct net_device *dev, struct mlx5_esw_bridge_port *port, 255 + const unsigned char *addr, u16 vid) 256 + { 257 + struct mlx5_esw_bridge *bridge = port->bridge; 258 + struct mlx5_esw_bridge_mdb_entry *entry; 259 + 260 + entry = mlx5_esw_bridge_mdb_lookup(bridge, addr, vid); 261 + if (!entry) { 262 + esw_debug(bridge->br_offloads->esw->dev, 263 + "MDB detach entry not found (MAC=%pM,vid=%u,vport=%u)\n", 264 + addr, vid, port->vport_num); 265 + return; 266 + } 267 + 268 + if (!mlx5_esw_bridge_mdb_port_lookup(port, entry)) { 269 + esw_debug(bridge->br_offloads->esw->dev, 270 + "MDB detach entry not attached to the port (MAC=%pM,vid=%u,vport=%u)\n", 271 + addr, vid, port->vport_num); 272 + return; 273 + } 274 + 275 + trace_mlx5_esw_bridge_port_mdb_detach(dev, entry); 276 + mlx5_esw_bridge_port_mdb_entry_detach(port, entry); 277 + } 278 + 279 + void mlx5_esw_bridge_port_mdb_vlan_flush(struct mlx5_esw_bridge_port *port, 280 + struct 
mlx5_esw_bridge_vlan *vlan) 281 + { 282 + struct mlx5_esw_bridge *bridge = port->bridge; 283 + struct mlx5_esw_bridge_mdb_entry *entry, *tmp; 284 + 285 + list_for_each_entry_safe(entry, tmp, &bridge->mdb_list, list) 286 + if (entry->key.vid == vlan->vid && mlx5_esw_bridge_mdb_port_lookup(port, entry)) 287 + mlx5_esw_bridge_port_mdb_entry_detach(port, entry); 288 + } 289 + 290 + static void mlx5_esw_bridge_port_mdb_flush(struct mlx5_esw_bridge_port *port) 291 + { 292 + struct mlx5_esw_bridge *bridge = port->bridge; 293 + struct mlx5_esw_bridge_mdb_entry *entry, *tmp; 294 + 295 + list_for_each_entry_safe(entry, tmp, &bridge->mdb_list, list) 296 + if (mlx5_esw_bridge_mdb_port_lookup(port, entry)) 297 + mlx5_esw_bridge_port_mdb_entry_detach(port, entry); 298 + } 299 + 300 + void mlx5_esw_bridge_mdb_flush(struct mlx5_esw_bridge *bridge) 301 + { 302 + struct mlx5_esw_bridge_mdb_entry *entry, *tmp; 303 + 304 + list_for_each_entry_safe(entry, tmp, &bridge->mdb_list, list) 305 + mlx5_esw_bridge_port_mdb_entry_cleanup(bridge, entry); 306 + } 307 + static int mlx5_esw_bridge_port_mcast_fts_init(struct mlx5_esw_bridge_port *port, 308 + struct mlx5_esw_bridge *bridge) 309 + { 310 + struct mlx5_eswitch *esw = bridge->br_offloads->esw; 311 + struct mlx5_flow_table *mcast_ft; 312 + 313 + mcast_ft = mlx5_esw_bridge_table_create(MLX5_ESW_BRIDGE_MCAST_TABLE_SIZE, 314 + MLX5_ESW_BRIDGE_LEVEL_MCAST_TABLE, 315 + esw); 316 + if (IS_ERR(mcast_ft)) 317 + return PTR_ERR(mcast_ft); 318 + 319 + port->mcast.ft = mcast_ft; 320 + return 0; 321 + } 322 + 323 + static void mlx5_esw_bridge_port_mcast_fts_cleanup(struct mlx5_esw_bridge_port *port) 324 + { 325 + if (port->mcast.ft) 326 + mlx5_destroy_flow_table(port->mcast.ft); 327 + port->mcast.ft = NULL; 328 + } 329 + 330 + static struct mlx5_flow_group * 331 + mlx5_esw_bridge_mcast_filter_fg_create(struct mlx5_eswitch *esw, 332 + struct mlx5_flow_table *mcast_ft) 333 + { 334 + int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); 335 + struct 
mlx5_flow_group *fg; 336 + u32 *in, *match; 337 + 338 + in = kvzalloc(inlen, GFP_KERNEL); 339 + if (!in) 340 + return ERR_PTR(-ENOMEM); 341 + 342 + MLX5_SET(create_flow_group_in, in, match_criteria_enable, MLX5_MATCH_MISC_PARAMETERS_2); 343 + match = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria); 344 + 345 + MLX5_SET(fte_match_param, match, misc_parameters_2.metadata_reg_c_0, 346 + mlx5_eswitch_get_vport_metadata_mask()); 347 + 348 + MLX5_SET(create_flow_group_in, in, start_flow_index, 349 + MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_IDX_FROM); 350 + MLX5_SET(create_flow_group_in, in, end_flow_index, 351 + MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_IDX_TO); 352 + 353 + fg = mlx5_create_flow_group(mcast_ft, in); 354 + kvfree(in); 355 + if (IS_ERR(fg)) 356 + esw_warn(esw->dev, 357 + "Failed to create filter flow group for bridge mcast table (err=%pe)\n", 358 + fg); 359 + 360 + return fg; 361 + } 362 + 363 + static struct mlx5_flow_group * 364 + mlx5_esw_bridge_mcast_vlan_proto_fg_create(unsigned int from, unsigned int to, u16 vlan_proto, 365 + struct mlx5_eswitch *esw, 366 + struct mlx5_flow_table *mcast_ft) 367 + { 368 + int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); 369 + struct mlx5_flow_group *fg; 370 + u32 *in, *match; 371 + 372 + in = kvzalloc(inlen, GFP_KERNEL); 373 + if (!in) 374 + return ERR_PTR(-ENOMEM); 375 + 376 + MLX5_SET(create_flow_group_in, in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS); 377 + match = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria); 378 + 379 + if (vlan_proto == ETH_P_8021Q) 380 + MLX5_SET_TO_ONES(fte_match_param, match, outer_headers.cvlan_tag); 381 + else if (vlan_proto == ETH_P_8021AD) 382 + MLX5_SET_TO_ONES(fte_match_param, match, outer_headers.svlan_tag); 383 + MLX5_SET_TO_ONES(fte_match_param, match, outer_headers.first_vid); 384 + 385 + MLX5_SET(create_flow_group_in, in, start_flow_index, from); 386 + MLX5_SET(create_flow_group_in, in, end_flow_index, to); 387 + 388 + fg = mlx5_create_flow_group(mcast_ft, 
in); 389 + kvfree(in); 390 + if (IS_ERR(fg)) 391 + esw_warn(esw->dev, 392 + "Failed to create VLAN(proto=%x) flow group for bridge mcast table (err=%pe)\n", 393 + vlan_proto, fg); 394 + 395 + return fg; 396 + } 397 + 398 + static struct mlx5_flow_group * 399 + mlx5_esw_bridge_mcast_vlan_fg_create(struct mlx5_eswitch *esw, struct mlx5_flow_table *mcast_ft) 400 + { 401 + unsigned int from = MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_IDX_FROM; 402 + unsigned int to = MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_IDX_TO; 403 + 404 + return mlx5_esw_bridge_mcast_vlan_proto_fg_create(from, to, ETH_P_8021Q, esw, mcast_ft); 405 + } 406 + 407 + static struct mlx5_flow_group * 408 + mlx5_esw_bridge_mcast_qinq_fg_create(struct mlx5_eswitch *esw, 409 + struct mlx5_flow_table *mcast_ft) 410 + { 411 + unsigned int from = MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_IDX_FROM; 412 + unsigned int to = MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_IDX_TO; 413 + 414 + return mlx5_esw_bridge_mcast_vlan_proto_fg_create(from, to, ETH_P_8021AD, esw, mcast_ft); 415 + } 416 + 417 + static struct mlx5_flow_group * 418 + mlx5_esw_bridge_mcast_fwd_fg_create(struct mlx5_eswitch *esw, 419 + struct mlx5_flow_table *mcast_ft) 420 + { 421 + int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); 422 + struct mlx5_flow_group *fg; 423 + u32 *in; 424 + 425 + in = kvzalloc(inlen, GFP_KERNEL); 426 + if (!in) 427 + return ERR_PTR(-ENOMEM); 428 + 429 + MLX5_SET(create_flow_group_in, in, start_flow_index, 430 + MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_FROM); 431 + MLX5_SET(create_flow_group_in, in, end_flow_index, 432 + MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_TO); 433 + 434 + fg = mlx5_create_flow_group(mcast_ft, in); 435 + kvfree(in); 436 + if (IS_ERR(fg)) 437 + esw_warn(esw->dev, 438 + "Failed to create forward flow group for bridge mcast table (err=%pe)\n", 439 + fg); 440 + 441 + return fg; 442 + } 443 + 444 + static int mlx5_esw_bridge_port_mcast_fgs_init(struct mlx5_esw_bridge_port *port) 445 + { 446 + struct mlx5_flow_group *fwd_fg, 
*qinq_fg, *vlan_fg, *filter_fg; 447 + struct mlx5_eswitch *esw = port->bridge->br_offloads->esw; 448 + struct mlx5_flow_table *mcast_ft = port->mcast.ft; 449 + int err; 450 + 451 + filter_fg = mlx5_esw_bridge_mcast_filter_fg_create(esw, mcast_ft); 452 + if (IS_ERR(filter_fg)) 453 + return PTR_ERR(filter_fg); 454 + 455 + vlan_fg = mlx5_esw_bridge_mcast_vlan_fg_create(esw, mcast_ft); 456 + if (IS_ERR(vlan_fg)) { 457 + err = PTR_ERR(vlan_fg); 458 + goto err_vlan_fg; 459 + } 460 + 461 + qinq_fg = mlx5_esw_bridge_mcast_qinq_fg_create(esw, mcast_ft); 462 + if (IS_ERR(qinq_fg)) { 463 + err = PTR_ERR(qinq_fg); 464 + goto err_qinq_fg; 465 + } 466 + 467 + fwd_fg = mlx5_esw_bridge_mcast_fwd_fg_create(esw, mcast_ft); 468 + if (IS_ERR(fwd_fg)) { 469 + err = PTR_ERR(fwd_fg); 470 + goto err_fwd_fg; 471 + } 472 + 473 + port->mcast.filter_fg = filter_fg; 474 + port->mcast.vlan_fg = vlan_fg; 475 + port->mcast.qinq_fg = qinq_fg; 476 + port->mcast.fwd_fg = fwd_fg; 477 + 478 + return 0; 479 + 480 + err_fwd_fg: 481 + mlx5_destroy_flow_group(qinq_fg); 482 + err_qinq_fg: 483 + mlx5_destroy_flow_group(vlan_fg); 484 + err_vlan_fg: 485 + mlx5_destroy_flow_group(filter_fg); 486 + return err; 487 + } 488 + 489 + static void mlx5_esw_bridge_port_mcast_fgs_cleanup(struct mlx5_esw_bridge_port *port) 490 + { 491 + if (port->mcast.fwd_fg) 492 + mlx5_destroy_flow_group(port->mcast.fwd_fg); 493 + port->mcast.fwd_fg = NULL; 494 + if (port->mcast.qinq_fg) 495 + mlx5_destroy_flow_group(port->mcast.qinq_fg); 496 + port->mcast.qinq_fg = NULL; 497 + if (port->mcast.vlan_fg) 498 + mlx5_destroy_flow_group(port->mcast.vlan_fg); 499 + port->mcast.vlan_fg = NULL; 500 + if (port->mcast.filter_fg) 501 + mlx5_destroy_flow_group(port->mcast.filter_fg); 502 + port->mcast.filter_fg = NULL; 503 + } 504 + 505 + static struct mlx5_flow_handle * 506 + mlx5_esw_bridge_mcast_flow_with_esw_create(struct mlx5_esw_bridge_port *port, 507 + struct mlx5_eswitch *esw) 508 + { 509 + struct mlx5_flow_act flow_act = { 510 + .action 
= MLX5_FLOW_CONTEXT_ACTION_DROP, 511 + .flags = FLOW_ACT_NO_APPEND, 512 + }; 513 + struct mlx5_flow_spec *rule_spec; 514 + struct mlx5_flow_handle *handle; 515 + 516 + rule_spec = kvzalloc(sizeof(*rule_spec), GFP_KERNEL); 517 + if (!rule_spec) 518 + return ERR_PTR(-ENOMEM); 519 + 520 + rule_spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS_2; 521 + 522 + MLX5_SET(fte_match_param, rule_spec->match_criteria, 523 + misc_parameters_2.metadata_reg_c_0, mlx5_eswitch_get_vport_metadata_mask()); 524 + MLX5_SET(fte_match_param, rule_spec->match_value, misc_parameters_2.metadata_reg_c_0, 525 + mlx5_eswitch_get_vport_metadata_for_match(esw, port->vport_num)); 526 + 527 + handle = mlx5_add_flow_rules(port->mcast.ft, rule_spec, &flow_act, NULL, 0); 528 + 529 + kvfree(rule_spec); 530 + return handle; 531 + } 532 + 533 + static struct mlx5_flow_handle * 534 + mlx5_esw_bridge_mcast_filter_flow_create(struct mlx5_esw_bridge_port *port) 535 + { 536 + return mlx5_esw_bridge_mcast_flow_with_esw_create(port, port->bridge->br_offloads->esw); 537 + } 538 + 539 + static struct mlx5_flow_handle * 540 + mlx5_esw_bridge_mcast_filter_flow_peer_create(struct mlx5_esw_bridge_port *port) 541 + { 542 + struct mlx5_devcom *devcom = port->bridge->br_offloads->esw->dev->priv.devcom; 543 + static struct mlx5_flow_handle *handle; 544 + struct mlx5_eswitch *peer_esw; 545 + 546 + peer_esw = mlx5_devcom_get_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS); 547 + if (!peer_esw) 548 + return ERR_PTR(-ENODEV); 549 + 550 + handle = mlx5_esw_bridge_mcast_flow_with_esw_create(port, peer_esw); 551 + 552 + mlx5_devcom_release_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS); 553 + return handle; 554 + } 555 + 556 + static struct mlx5_flow_handle * 557 + mlx5_esw_bridge_mcast_vlan_flow_create(u16 vlan_proto, struct mlx5_esw_bridge_port *port, 558 + struct mlx5_esw_bridge_vlan *vlan) 559 + { 560 + struct mlx5_flow_act flow_act = { 561 + .action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST, 562 + .flags = FLOW_ACT_NO_APPEND, 
563 + }; 564 + struct mlx5_flow_destination dest = { 565 + .type = MLX5_FLOW_DESTINATION_TYPE_VPORT, 566 + .vport.num = port->vport_num, 567 + }; 568 + struct mlx5_esw_bridge *bridge = port->bridge; 569 + struct mlx5_flow_spec *rule_spec; 570 + struct mlx5_flow_handle *handle; 571 + 572 + rule_spec = kvzalloc(sizeof(*rule_spec), GFP_KERNEL); 573 + if (!rule_spec) 574 + return ERR_PTR(-ENOMEM); 575 + 576 + if (MLX5_CAP_ESW_FLOWTABLE(bridge->br_offloads->esw->dev, flow_source) && 577 + port->vport_num == MLX5_VPORT_UPLINK) 578 + rule_spec->flow_context.flow_source = 579 + MLX5_FLOW_CONTEXT_FLOW_SOURCE_LOCAL_VPORT; 580 + rule_spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS; 581 + 582 + flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT; 583 + flow_act.pkt_reformat = vlan->pkt_reformat_pop; 584 + 585 + if (vlan_proto == ETH_P_8021Q) { 586 + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, 587 + outer_headers.cvlan_tag); 588 + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_value, 589 + outer_headers.cvlan_tag); 590 + } else if (vlan_proto == ETH_P_8021AD) { 591 + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, 592 + outer_headers.svlan_tag); 593 + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_value, 594 + outer_headers.svlan_tag); 595 + } 596 + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, outer_headers.first_vid); 597 + MLX5_SET(fte_match_param, rule_spec->match_value, outer_headers.first_vid, vlan->vid); 598 + 599 + if (MLX5_CAP_ESW(bridge->br_offloads->esw->dev, merged_eswitch)) { 600 + dest.vport.flags = MLX5_FLOW_DEST_VPORT_VHCA_ID; 601 + dest.vport.vhca_id = port->esw_owner_vhca_id; 602 + } 603 + handle = mlx5_add_flow_rules(port->mcast.ft, rule_spec, &flow_act, &dest, 1); 604 + 605 + kvfree(rule_spec); 606 + return handle; 607 + } 608 + 609 + int mlx5_esw_bridge_vlan_mcast_init(u16 vlan_proto, struct mlx5_esw_bridge_port *port, 610 + struct mlx5_esw_bridge_vlan *vlan) 611 + { 612 + struct 
mlx5_flow_handle *handle; 613 + 614 + if (!(port->bridge->flags & MLX5_ESW_BRIDGE_MCAST_FLAG)) 615 + return 0; 616 + 617 + handle = mlx5_esw_bridge_mcast_vlan_flow_create(vlan_proto, port, vlan); 618 + if (IS_ERR(handle)) 619 + return PTR_ERR(handle); 620 + 621 + vlan->mcast_handle = handle; 622 + return 0; 623 + } 624 + 625 + void mlx5_esw_bridge_vlan_mcast_cleanup(struct mlx5_esw_bridge_vlan *vlan) 626 + { 627 + if (vlan->mcast_handle) 628 + mlx5_del_flow_rules(vlan->mcast_handle); 629 + vlan->mcast_handle = NULL; 630 + } 631 + 632 + static struct mlx5_flow_handle * 633 + mlx5_esw_bridge_mcast_fwd_flow_create(struct mlx5_esw_bridge_port *port) 634 + { 635 + struct mlx5_flow_act flow_act = { 636 + .action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST, 637 + .flags = FLOW_ACT_NO_APPEND, 638 + }; 639 + struct mlx5_flow_destination dest = { 640 + .type = MLX5_FLOW_DESTINATION_TYPE_VPORT, 641 + .vport.num = port->vport_num, 642 + }; 643 + struct mlx5_esw_bridge *bridge = port->bridge; 644 + struct mlx5_flow_spec *rule_spec; 645 + struct mlx5_flow_handle *handle; 646 + 647 + rule_spec = kvzalloc(sizeof(*rule_spec), GFP_KERNEL); 648 + if (!rule_spec) 649 + return ERR_PTR(-ENOMEM); 650 + 651 + if (MLX5_CAP_ESW_FLOWTABLE(bridge->br_offloads->esw->dev, flow_source) && 652 + port->vport_num == MLX5_VPORT_UPLINK) 653 + rule_spec->flow_context.flow_source = 654 + MLX5_FLOW_CONTEXT_FLOW_SOURCE_LOCAL_VPORT; 655 + 656 + if (MLX5_CAP_ESW(bridge->br_offloads->esw->dev, merged_eswitch)) { 657 + dest.vport.flags = MLX5_FLOW_DEST_VPORT_VHCA_ID; 658 + dest.vport.vhca_id = port->esw_owner_vhca_id; 659 + } 660 + handle = mlx5_add_flow_rules(port->mcast.ft, rule_spec, &flow_act, &dest, 1); 661 + 662 + kvfree(rule_spec); 663 + return handle; 664 + } 665 + 666 + static int mlx5_esw_bridge_port_mcast_fhs_init(struct mlx5_esw_bridge_port *port) 667 + { 668 + struct mlx5_flow_handle *filter_handle, *fwd_handle; 669 + struct mlx5_esw_bridge_vlan *vlan, *failed; 670 + unsigned long index; 671 + int err; 
672 + 673 + 674 + filter_handle = (port->flags & MLX5_ESW_BRIDGE_PORT_FLAG_PEER) ? 675 + mlx5_esw_bridge_mcast_filter_flow_peer_create(port) : 676 + mlx5_esw_bridge_mcast_filter_flow_create(port); 677 + if (IS_ERR(filter_handle)) 678 + return PTR_ERR(filter_handle); 679 + 680 + fwd_handle = mlx5_esw_bridge_mcast_fwd_flow_create(port); 681 + if (IS_ERR(fwd_handle)) { 682 + err = PTR_ERR(fwd_handle); 683 + goto err_fwd; 684 + } 685 + 686 + xa_for_each(&port->vlans, index, vlan) { 687 + err = mlx5_esw_bridge_vlan_mcast_init(port->bridge->vlan_proto, port, vlan); 688 + if (err) { 689 + failed = vlan; 690 + goto err_vlan; 691 + } 692 + } 693 + 694 + port->mcast.filter_handle = filter_handle; 695 + port->mcast.fwd_handle = fwd_handle; 696 + 697 + return 0; 698 + 699 + err_vlan: 700 + xa_for_each(&port->vlans, index, vlan) { 701 + if (vlan == failed) 702 + break; 703 + 704 + mlx5_esw_bridge_vlan_mcast_cleanup(vlan); 705 + } 706 + mlx5_del_flow_rules(fwd_handle); 707 + err_fwd: 708 + mlx5_del_flow_rules(filter_handle); 709 + return err; 710 + } 711 + 712 + static void mlx5_esw_bridge_port_mcast_fhs_cleanup(struct mlx5_esw_bridge_port *port) 713 + { 714 + struct mlx5_esw_bridge_vlan *vlan; 715 + unsigned long index; 716 + 717 + xa_for_each(&port->vlans, index, vlan) 718 + mlx5_esw_bridge_vlan_mcast_cleanup(vlan); 719 + 720 + if (port->mcast.fwd_handle) 721 + mlx5_del_flow_rules(port->mcast.fwd_handle); 722 + port->mcast.fwd_handle = NULL; 723 + if (port->mcast.filter_handle) 724 + mlx5_del_flow_rules(port->mcast.filter_handle); 725 + port->mcast.filter_handle = NULL; 726 + } 727 + 728 + int mlx5_esw_bridge_port_mcast_init(struct mlx5_esw_bridge_port *port) 729 + { 730 + struct mlx5_esw_bridge *bridge = port->bridge; 731 + int err; 732 + 733 + if (!(bridge->flags & MLX5_ESW_BRIDGE_MCAST_FLAG)) 734 + return 0; 735 + 736 + err = mlx5_esw_bridge_port_mcast_fts_init(port, bridge); 737 + if (err) 738 + return err; 739 + 740 + err = mlx5_esw_bridge_port_mcast_fgs_init(port); 741 + 
if (err) 742 + goto err_fgs; 743 + 744 + err = mlx5_esw_bridge_port_mcast_fhs_init(port); 745 + if (err) 746 + goto err_fhs; 747 + return err; 748 + 749 + err_fhs: 750 + mlx5_esw_bridge_port_mcast_fgs_cleanup(port); 751 + err_fgs: 752 + mlx5_esw_bridge_port_mcast_fts_cleanup(port); 753 + return err; 754 + } 755 + 756 + void mlx5_esw_bridge_port_mcast_cleanup(struct mlx5_esw_bridge_port *port) 757 + { 758 + mlx5_esw_bridge_port_mdb_flush(port); 759 + mlx5_esw_bridge_port_mcast_fhs_cleanup(port); 760 + mlx5_esw_bridge_port_mcast_fgs_cleanup(port); 761 + mlx5_esw_bridge_port_mcast_fts_cleanup(port); 762 + } 763 + 764 + static struct mlx5_flow_group * 765 + mlx5_esw_bridge_ingress_igmp_fg_create(struct mlx5_eswitch *esw, 766 + struct mlx5_flow_table *ingress_ft) 767 + { 768 + int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); 769 + struct mlx5_flow_group *fg; 770 + u32 *in, *match; 771 + 772 + in = kvzalloc(inlen, GFP_KERNEL); 773 + if (!in) 774 + return ERR_PTR(-ENOMEM); 775 + 776 + MLX5_SET(create_flow_group_in, in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS); 777 + match = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria); 778 + 779 + MLX5_SET_TO_ONES(fte_match_param, match, outer_headers.ip_version); 780 + MLX5_SET_TO_ONES(fte_match_param, match, outer_headers.ip_protocol); 781 + 782 + MLX5_SET(create_flow_group_in, in, start_flow_index, 783 + MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_IDX_FROM); 784 + MLX5_SET(create_flow_group_in, in, end_flow_index, 785 + MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_IDX_TO); 786 + 787 + fg = mlx5_create_flow_group(ingress_ft, in); 788 + kvfree(in); 789 + if (IS_ERR(fg)) 790 + esw_warn(esw->dev, 791 + "Failed to create IGMP flow group for bridge ingress table (err=%pe)\n", 792 + fg); 793 + 794 + return fg; 795 + } 796 + 797 + static struct mlx5_flow_group * 798 + mlx5_esw_bridge_ingress_mld_fg_create(struct mlx5_eswitch *esw, 799 + struct mlx5_flow_table *ingress_ft) 800 + { 801 + int inlen = 
MLX5_ST_SZ_BYTES(create_flow_group_in); 802 + struct mlx5_flow_group *fg; 803 + u32 *in, *match; 804 + 805 + if (!(MLX5_CAP_GEN(esw->dev, flex_parser_protocols) & MLX5_FLEX_PROTO_ICMPV6)) { 806 + esw_warn(esw->dev, 807 + "Can't create MLD flow group due to missing hardware ICMPv6 parsing support\n"); 808 + return NULL; 809 + } 810 + 811 + in = kvzalloc(inlen, GFP_KERNEL); 812 + if (!in) 813 + return ERR_PTR(-ENOMEM); 814 + 815 + MLX5_SET(create_flow_group_in, in, match_criteria_enable, 816 + MLX5_MATCH_OUTER_HEADERS | MLX5_MATCH_MISC_PARAMETERS_3); 817 + match = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria); 818 + 819 + MLX5_SET_TO_ONES(fte_match_param, match, outer_headers.ip_version); 820 + MLX5_SET_TO_ONES(fte_match_param, match, misc_parameters_3.icmpv6_type); 821 + 822 + MLX5_SET(create_flow_group_in, in, start_flow_index, 823 + MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_IDX_FROM); 824 + MLX5_SET(create_flow_group_in, in, end_flow_index, 825 + MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_IDX_TO); 826 + 827 + fg = mlx5_create_flow_group(ingress_ft, in); 828 + kvfree(in); 829 + if (IS_ERR(fg)) 830 + esw_warn(esw->dev, 831 + "Failed to create MLD flow group for bridge ingress table (err=%pe)\n", 832 + fg); 833 + 834 + return fg; 835 + } 836 + 837 + static int 838 + mlx5_esw_bridge_ingress_mcast_fgs_init(struct mlx5_esw_bridge_offloads *br_offloads) 839 + { 840 + struct mlx5_flow_table *ingress_ft = br_offloads->ingress_ft; 841 + struct mlx5_eswitch *esw = br_offloads->esw; 842 + struct mlx5_flow_group *igmp_fg, *mld_fg; 843 + 844 + igmp_fg = mlx5_esw_bridge_ingress_igmp_fg_create(esw, ingress_ft); 845 + if (IS_ERR(igmp_fg)) 846 + return PTR_ERR(igmp_fg); 847 + 848 + mld_fg = mlx5_esw_bridge_ingress_mld_fg_create(esw, ingress_ft); 849 + if (IS_ERR(mld_fg)) { 850 + mlx5_destroy_flow_group(igmp_fg); 851 + return PTR_ERR(mld_fg); 852 + } 853 + 854 + br_offloads->ingress_igmp_fg = igmp_fg; 855 + br_offloads->ingress_mld_fg = mld_fg; 856 + return 0; 857 + } 858 + 859 + 
static void 860 + mlx5_esw_bridge_ingress_mcast_fgs_cleanup(struct mlx5_esw_bridge_offloads *br_offloads) 861 + { 862 + if (br_offloads->ingress_mld_fg) 863 + mlx5_destroy_flow_group(br_offloads->ingress_mld_fg); 864 + br_offloads->ingress_mld_fg = NULL; 865 + if (br_offloads->ingress_igmp_fg) 866 + mlx5_destroy_flow_group(br_offloads->ingress_igmp_fg); 867 + br_offloads->ingress_igmp_fg = NULL; 868 + } 869 + 870 + static struct mlx5_flow_handle * 871 + mlx5_esw_bridge_ingress_igmp_fh_create(struct mlx5_flow_table *ingress_ft, 872 + struct mlx5_flow_table *skip_ft) 873 + { 874 + struct mlx5_flow_destination dest = { 875 + .type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE, 876 + .ft = skip_ft, 877 + }; 878 + struct mlx5_flow_act flow_act = { 879 + .action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST, 880 + .flags = FLOW_ACT_NO_APPEND, 881 + }; 882 + struct mlx5_flow_spec *rule_spec; 883 + struct mlx5_flow_handle *handle; 884 + 885 + rule_spec = kvzalloc(sizeof(*rule_spec), GFP_KERNEL); 886 + if (!rule_spec) 887 + return ERR_PTR(-ENOMEM); 888 + 889 + rule_spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS; 890 + 891 + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, outer_headers.ip_version); 892 + MLX5_SET(fte_match_param, rule_spec->match_value, outer_headers.ip_version, 4); 893 + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, outer_headers.ip_protocol); 894 + MLX5_SET(fte_match_param, rule_spec->match_value, outer_headers.ip_protocol, IPPROTO_IGMP); 895 + 896 + handle = mlx5_add_flow_rules(ingress_ft, rule_spec, &flow_act, &dest, 1); 897 + 898 + kvfree(rule_spec); 899 + return handle; 900 + } 901 + 902 + static struct mlx5_flow_handle * 903 + mlx5_esw_bridge_ingress_mld_fh_create(u8 type, struct mlx5_flow_table *ingress_ft, 904 + struct mlx5_flow_table *skip_ft) 905 + { 906 + struct mlx5_flow_destination dest = { 907 + .type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE, 908 + .ft = skip_ft, 909 + }; 910 + struct mlx5_flow_act flow_act = { 911 + .action 
= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST, 912 + .flags = FLOW_ACT_NO_APPEND, 913 + }; 914 + struct mlx5_flow_spec *rule_spec; 915 + struct mlx5_flow_handle *handle; 916 + 917 + rule_spec = kvzalloc(sizeof(*rule_spec), GFP_KERNEL); 918 + if (!rule_spec) 919 + return ERR_PTR(-ENOMEM); 920 + 921 + rule_spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS | MLX5_MATCH_MISC_PARAMETERS_3; 922 + 923 + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, outer_headers.ip_version); 924 + MLX5_SET(fte_match_param, rule_spec->match_value, outer_headers.ip_version, 6); 925 + MLX5_SET_TO_ONES(fte_match_param, rule_spec->match_criteria, misc_parameters_3.icmpv6_type); 926 + MLX5_SET(fte_match_param, rule_spec->match_value, misc_parameters_3.icmpv6_type, type); 927 + 928 + handle = mlx5_add_flow_rules(ingress_ft, rule_spec, &flow_act, &dest, 1); 929 + 930 + kvfree(rule_spec); 931 + return handle; 932 + } 933 + 934 + static int 935 + mlx5_esw_bridge_ingress_mcast_fhs_create(struct mlx5_esw_bridge_offloads *br_offloads) 936 + { 937 + struct mlx5_flow_handle *igmp_handle, *mld_query_handle, *mld_report_handle, 938 + *mld_done_handle; 939 + struct mlx5_flow_table *ingress_ft = br_offloads->ingress_ft, 940 + *skip_ft = br_offloads->skip_ft; 941 + int err; 942 + 943 + igmp_handle = mlx5_esw_bridge_ingress_igmp_fh_create(ingress_ft, skip_ft); 944 + if (IS_ERR(igmp_handle)) 945 + return PTR_ERR(igmp_handle); 946 + 947 + if (br_offloads->ingress_mld_fg) { 948 + mld_query_handle = mlx5_esw_bridge_ingress_mld_fh_create(ICMPV6_MGM_QUERY, 949 + ingress_ft, 950 + skip_ft); 951 + if (IS_ERR(mld_query_handle)) { 952 + err = PTR_ERR(mld_query_handle); 953 + goto err_mld_query; 954 + } 955 + 956 + mld_report_handle = mlx5_esw_bridge_ingress_mld_fh_create(ICMPV6_MGM_REPORT, 957 + ingress_ft, 958 + skip_ft); 959 + if (IS_ERR(mld_report_handle)) { 960 + err = PTR_ERR(mld_report_handle); 961 + goto err_mld_report; 962 + } 963 + 964 + mld_done_handle = 
+			mlx5_esw_bridge_ingress_mld_fh_create(ICMPV6_MGM_REDUCTION,
+							      ingress_ft, skip_ft);
+		if (IS_ERR(mld_done_handle)) {
+			err = PTR_ERR(mld_done_handle);
+			goto err_mld_done;
+		}
+	} else {
+		mld_query_handle = NULL;
+		mld_report_handle = NULL;
+		mld_done_handle = NULL;
+	}
+
+	br_offloads->igmp_handle = igmp_handle;
+	br_offloads->mld_query_handle = mld_query_handle;
+	br_offloads->mld_report_handle = mld_report_handle;
+	br_offloads->mld_done_handle = mld_done_handle;
+
+	return 0;
+
+err_mld_done:
+	mlx5_del_flow_rules(mld_report_handle);
+err_mld_report:
+	mlx5_del_flow_rules(mld_query_handle);
+err_mld_query:
+	mlx5_del_flow_rules(igmp_handle);
+	return err;
+}
+
+static void
+mlx5_esw_bridge_ingress_mcast_fhs_cleanup(struct mlx5_esw_bridge_offloads *br_offloads)
+{
+	if (br_offloads->mld_done_handle)
+		mlx5_del_flow_rules(br_offloads->mld_done_handle);
+	br_offloads->mld_done_handle = NULL;
+	if (br_offloads->mld_report_handle)
+		mlx5_del_flow_rules(br_offloads->mld_report_handle);
+	br_offloads->mld_report_handle = NULL;
+	if (br_offloads->mld_query_handle)
+		mlx5_del_flow_rules(br_offloads->mld_query_handle);
+	br_offloads->mld_query_handle = NULL;
+	if (br_offloads->igmp_handle)
+		mlx5_del_flow_rules(br_offloads->igmp_handle);
+	br_offloads->igmp_handle = NULL;
+}
+
+static int mlx5_esw_brige_mcast_init(struct mlx5_esw_bridge *bridge)
+{
+	struct mlx5_esw_bridge_offloads *br_offloads = bridge->br_offloads;
+	struct mlx5_esw_bridge_port *port, *failed;
+	unsigned long i;
+	int err;
+
+	xa_for_each(&br_offloads->ports, i, port) {
+		if (port->bridge != bridge)
+			continue;
+
+		err = mlx5_esw_bridge_port_mcast_init(port);
+		if (err) {
+			failed = port;
+			goto err_port;
+		}
+	}
+
+	return 0;
+
+err_port:
+	xa_for_each(&br_offloads->ports, i, port) {
+		if (port == failed)
+			break;
+		if (port->bridge != bridge)
+			continue;
+
+		mlx5_esw_bridge_port_mcast_cleanup(port);
+	}
+	return err;
+}
+
+static void mlx5_esw_brige_mcast_cleanup(struct mlx5_esw_bridge *bridge)
+{
+	struct mlx5_esw_bridge_offloads *br_offloads = bridge->br_offloads;
+	struct mlx5_esw_bridge_port *port;
+	unsigned long i;
+
+	xa_for_each(&br_offloads->ports, i, port) {
+		if (port->bridge != bridge)
+			continue;
+
+		mlx5_esw_bridge_port_mcast_cleanup(port);
+	}
+}
+
+static int mlx5_esw_brige_mcast_global_enable(struct mlx5_esw_bridge_offloads *br_offloads)
+{
+	int err;
+
+	if (br_offloads->ingress_igmp_fg)
+		return 0; /* already enabled by another bridge */
+
+	err = mlx5_esw_bridge_ingress_mcast_fgs_init(br_offloads);
+	if (err) {
+		esw_warn(br_offloads->esw->dev,
+			 "Failed to create global multicast flow groups (err=%d)\n",
+			 err);
+		return err;
+	}
+
+	err = mlx5_esw_bridge_ingress_mcast_fhs_create(br_offloads);
+	if (err) {
+		esw_warn(br_offloads->esw->dev,
+			 "Failed to create global multicast flows (err=%d)\n",
+			 err);
+		goto err_fhs;
+	}
+
+	return 0;
+
+err_fhs:
+	mlx5_esw_bridge_ingress_mcast_fgs_cleanup(br_offloads);
+	return err;
+}
+
+static void mlx5_esw_brige_mcast_global_disable(struct mlx5_esw_bridge_offloads *br_offloads)
+{
+	struct mlx5_esw_bridge *br;
+
+	list_for_each_entry(br, &br_offloads->bridges, list) {
+		/* Ingress table is global, so only disable snooping when all
+		 * bridges on esw have multicast disabled.
+		 */
+		if (br->flags & MLX5_ESW_BRIDGE_MCAST_FLAG)
+			return;
+	}
+
+	mlx5_esw_bridge_ingress_mcast_fhs_cleanup(br_offloads);
+	mlx5_esw_bridge_ingress_mcast_fgs_cleanup(br_offloads);
+}
+
+int mlx5_esw_bridge_mcast_enable(struct mlx5_esw_bridge *bridge)
+{
+	int err;
+
+	err = mlx5_esw_brige_mcast_global_enable(bridge->br_offloads);
+	if (err)
+		return err;
+
+	bridge->flags |= MLX5_ESW_BRIDGE_MCAST_FLAG;
+
+	err = mlx5_esw_brige_mcast_init(bridge);
+	if (err) {
+		esw_warn(bridge->br_offloads->esw->dev, "Failed to enable multicast (err=%d)\n",
+			 err);
+		bridge->flags &= ~MLX5_ESW_BRIDGE_MCAST_FLAG;
+		mlx5_esw_brige_mcast_global_disable(bridge->br_offloads);
+	}
+	return err;
+}
+
+void mlx5_esw_bridge_mcast_disable(struct mlx5_esw_bridge *bridge)
+{
+	mlx5_esw_brige_mcast_cleanup(bridge);
+	bridge->flags &= ~MLX5_ESW_BRIDGE_MCAST_FLAG;
+	mlx5_esw_brige_mcast_global_disable(bridge->br_offloads);
+}
+181
drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h
···
 #include <linux/xarray.h>
 #include "fs_core.h"

+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_SIZE 1
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_SIZE 3
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE 131072
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_UNTAGGED_GRP_SIZE \
+	(524288 - MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_SIZE - \
+	 MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_SIZE)
+
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_IDX_FROM 0
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_SIZE - 1)
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_IDX_FROM \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_IGMP_GRP_IDX_TO + 1)
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_IDX_FROM + \
+	 MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_SIZE - 1)
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_FROM \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_MLD_GRP_IDX_TO + 1)
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_FROM + \
+	 MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1)
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_FROM \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_IDX_TO + 1)
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_FROM + \
+	 MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1)
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_FROM \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_FILTER_GRP_IDX_TO + 1)
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_FROM + \
+	 MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1)
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_FROM \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_GRP_IDX_TO + 1)
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_FROM + \
+	 MLX5_ESW_BRIDGE_INGRESS_TABLE_VLAN_GRP_SIZE - 1)
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_FROM \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_QINQ_FILTER_GRP_IDX_TO + 1)
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_FROM + \
+	 MLX5_ESW_BRIDGE_INGRESS_TABLE_UNTAGGED_GRP_SIZE - 1)
+#define MLX5_ESW_BRIDGE_INGRESS_TABLE_SIZE \
+	(MLX5_ESW_BRIDGE_INGRESS_TABLE_MAC_GRP_IDX_TO + 1)
+static_assert(MLX5_ESW_BRIDGE_INGRESS_TABLE_SIZE == 1048576);
+
+#define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_SIZE 131072
+#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_SIZE (262144 - 1)
+#define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_IDX_FROM 0
+#define MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_SIZE - 1)
+#define MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_FROM \
+	(MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_IDX_TO + 1)
+#define MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_FROM + \
+	 MLX5_ESW_BRIDGE_EGRESS_TABLE_VLAN_GRP_SIZE - 1)
+#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_FROM \
+	(MLX5_ESW_BRIDGE_EGRESS_TABLE_QINQ_GRP_IDX_TO + 1)
+#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_FROM + \
+	 MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_SIZE - 1)
+#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_FROM \
+	(MLX5_ESW_BRIDGE_EGRESS_TABLE_MAC_GRP_IDX_TO + 1)
+#define MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_TO \
+	MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_FROM
+#define MLX5_ESW_BRIDGE_EGRESS_TABLE_SIZE \
+	(MLX5_ESW_BRIDGE_EGRESS_TABLE_MISS_GRP_IDX_TO + 1)
+static_assert(MLX5_ESW_BRIDGE_EGRESS_TABLE_SIZE == 524288);
+
+#define MLX5_ESW_BRIDGE_SKIP_TABLE_SIZE 0
+
+#define MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_SIZE 1
+#define MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_SIZE 1
+#define MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_SIZE 4095
+#define MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_SIZE MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_SIZE
+#define MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_IDX_FROM 0
+#define MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_SIZE - 1)
+#define MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_IDX_FROM \
+	(MLX5_ESW_BRIDGE_MCAST_TABLE_FILTER_GRP_IDX_TO + 1)
+#define MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_IDX_FROM + \
+	 MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_SIZE - 1)
+#define MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_IDX_FROM \
+	(MLX5_ESW_BRIDGE_MCAST_TABLE_VLAN_GRP_IDX_TO + 1)
+#define MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_IDX_FROM + \
+	 MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_SIZE - 1)
+#define MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_FROM \
+	(MLX5_ESW_BRIDGE_MCAST_TABLE_QINQ_GRP_IDX_TO + 1)
+#define MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_TO \
+	(MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_FROM + \
+	 MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_SIZE - 1)
+
+#define MLX5_ESW_BRIDGE_MCAST_TABLE_SIZE \
+	(MLX5_ESW_BRIDGE_MCAST_TABLE_FWD_GRP_IDX_TO + 1)
+static_assert(MLX5_ESW_BRIDGE_MCAST_TABLE_SIZE == 8192);
+
+enum {
+	MLX5_ESW_BRIDGE_LEVEL_INGRESS_TABLE,
+	MLX5_ESW_BRIDGE_LEVEL_EGRESS_TABLE,
+	MLX5_ESW_BRIDGE_LEVEL_MCAST_TABLE,
+	MLX5_ESW_BRIDGE_LEVEL_SKIP_TABLE,
+};
+
+enum {
+	MLX5_ESW_BRIDGE_VLAN_FILTERING_FLAG = BIT(0),
+	MLX5_ESW_BRIDGE_MCAST_FLAG = BIT(1),
+};
+
 struct mlx5_esw_bridge_fdb_key {
+	unsigned char addr[ETH_ALEN];
+	u16 vid;
+};
+
+struct mlx5_esw_bridge_mdb_key {
 	unsigned char addr[ETH_ALEN];
 	u16 vid;
 };
···
 	struct mlx5_flow_handle *filter_handle;
 };

+struct mlx5_esw_bridge_mdb_entry {
+	struct mlx5_esw_bridge_mdb_key key;
+	struct rhash_head ht_node;
+	struct list_head list;
+	struct xarray ports;
+	int num_ports;
+
+	struct mlx5_flow_handle *egress_handle;
+};
+
 struct mlx5_esw_bridge_vlan {
 	u16 vid;
 	u16 flags;
···
 	struct mlx5_pkt_reformat *pkt_reformat_push;
 	struct mlx5_pkt_reformat *pkt_reformat_pop;
 	struct mlx5_modify_hdr *pkt_mod_hdr_push_mark;
+	struct mlx5_flow_handle *mcast_handle;
 };

 struct mlx5_esw_bridge_port {
···
 	u16 flags;
 	struct mlx5_esw_bridge *bridge;
 	struct xarray vlans;
+	struct {
+		struct mlx5_flow_table *ft;
+		struct mlx5_flow_group *filter_fg;
+		struct mlx5_flow_group *vlan_fg;
+		struct mlx5_flow_group *qinq_fg;
+		struct mlx5_flow_group *fwd_fg;
+
+		struct mlx5_flow_handle *filter_handle;
+		struct mlx5_flow_handle *fwd_handle;
+	} mcast;
 };
+
+struct mlx5_esw_bridge {
+	int ifindex;
+	int refcnt;
+	struct list_head list;
+	struct mlx5_esw_bridge_offloads *br_offloads;
+
+	struct list_head fdb_list;
+	struct rhashtable fdb_ht;
+
+	struct list_head mdb_list;
+	struct rhashtable mdb_ht;
+
+	struct mlx5_flow_table *egress_ft;
+	struct mlx5_flow_group *egress_vlan_fg;
+	struct mlx5_flow_group *egress_qinq_fg;
+	struct mlx5_flow_group *egress_mac_fg;
+	struct mlx5_flow_group *egress_miss_fg;
+	struct mlx5_pkt_reformat *egress_miss_pkt_reformat;
+	struct mlx5_flow_handle *egress_miss_handle;
+	unsigned long ageing_time;
+	u32 flags;
+	u16 vlan_proto;
+};
+
+struct mlx5_flow_table *mlx5_esw_bridge_table_create(int max_fte, u32 level,
+						     struct mlx5_eswitch *esw);
+unsigned long mlx5_esw_bridge_port_key(struct mlx5_esw_bridge_port *port);
+
+int mlx5_esw_bridge_port_mcast_init(struct mlx5_esw_bridge_port *port);
+void mlx5_esw_bridge_port_mcast_cleanup(struct mlx5_esw_bridge_port *port);
+int mlx5_esw_bridge_vlan_mcast_init(u16 vlan_proto, struct mlx5_esw_bridge_port *port,
+				    struct mlx5_esw_bridge_vlan *vlan);
+void mlx5_esw_bridge_vlan_mcast_cleanup(struct mlx5_esw_bridge_vlan *vlan);
+
+int mlx5_esw_bridge_mcast_enable(struct mlx5_esw_bridge *bridge);
+void mlx5_esw_bridge_mcast_disable(struct mlx5_esw_bridge *bridge);
+
+int mlx5_esw_bridge_mdb_init(struct mlx5_esw_bridge *bridge);
+void mlx5_esw_bridge_mdb_cleanup(struct mlx5_esw_bridge *bridge);
+int mlx5_esw_bridge_port_mdb_attach(struct net_device *dev, struct mlx5_esw_bridge_port *port,
+				    const unsigned char *addr, u16 vid);
+void mlx5_esw_bridge_port_mdb_detach(struct net_device *dev, struct mlx5_esw_bridge_port *port,
+				     const unsigned char *addr, u16 vid);
+void mlx5_esw_bridge_port_mdb_vlan_flush(struct mlx5_esw_bridge_port *port,
+					 struct mlx5_esw_bridge_vlan *vlan);
+void mlx5_esw_bridge_mdb_flush(struct mlx5_esw_bridge *bridge);

 #endif /* _MLX5_ESW_BRIDGE_PRIVATE_ */
+35
drivers/net/ethernet/mellanox/mlx5/core/esw/diag/bridge_tracepoint.h
···
 	TP_ARGS(port)
 );

+DECLARE_EVENT_CLASS(mlx5_esw_bridge_mdb_port_change_template,
+		    TP_PROTO(const struct net_device *dev,
+			     const struct mlx5_esw_bridge_mdb_entry *mdb),
+		    TP_ARGS(dev, mdb),
+		    TP_STRUCT__entry(
+			    __array(char, dev_name, IFNAMSIZ)
+			    __array(unsigned char, addr, ETH_ALEN)
+			    __field(u16, vid)
+			    __field(int, num_ports)
+			    __field(bool, offloaded)),
+		    TP_fast_assign(
+			    strscpy(__entry->dev_name, netdev_name(dev), IFNAMSIZ);
+			    memcpy(__entry->addr, mdb->key.addr, ETH_ALEN);
+			    __entry->vid = mdb->key.vid;
+			    __entry->num_ports = mdb->num_ports;
+			    __entry->offloaded = mdb->egress_handle;),
+		    TP_printk("net_device=%s addr=%pM vid=%u num_ports=%d offloaded=%d",
+			      __entry->dev_name,
+			      __entry->addr,
+			      __entry->vid,
+			      __entry->num_ports,
+			      __entry->offloaded));
+
+DEFINE_EVENT(mlx5_esw_bridge_mdb_port_change_template,
+	     mlx5_esw_bridge_port_mdb_attach,
+	     TP_PROTO(const struct net_device *dev,
+		      const struct mlx5_esw_bridge_mdb_entry *mdb),
+	     TP_ARGS(dev, mdb));
+
+DEFINE_EVENT(mlx5_esw_bridge_mdb_port_change_template,
+	     mlx5_esw_bridge_port_mdb_detach,
+	     TP_PROTO(const struct net_device *dev,
+		      const struct mlx5_esw_bridge_mdb_entry *mdb),
+	     TP_ARGS(dev, mdb));
+
 #endif

 /* This part must be outside protection */
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
···
 		goto out_err;
 	}

-	maj_prio = fs_create_prio(&steering->fdb_root_ns->ns, FDB_BR_OFFLOAD, 3);
+	maj_prio = fs_create_prio(&steering->fdb_root_ns->ns, FDB_BR_OFFLOAD, 4);
 	if (IS_ERR(maj_prio)) {
 		err = PTR_ERR(maj_prio);
 		goto out_err;
+9
drivers/net/ethernet/mellanox/mlx5/core/main.c
···
 static struct mlx5_profile profile[] = {
 	[0] = {
 		.mask		= 0,
+		.num_cmd_caches	= MLX5_NUM_COMMAND_CACHES,
 	},
 	[1] = {
 		.mask		= MLX5_PROF_MASK_QP_SIZE,
 		.log_max_qp	= 12,
+		.num_cmd_caches	= MLX5_NUM_COMMAND_CACHES,
 	},
 	[2] = {
 		.mask		= MLX5_PROF_MASK_QP_SIZE |
 				  MLX5_PROF_MASK_MR_CACHE,
 		.log_max_qp	= LOG_MAX_SUPPORTED_QPS,
+		.num_cmd_caches	= MLX5_NUM_COMMAND_CACHES,
 		.mr_cache[0]	= {
 			.size	= 500,
 			.limit	= 250
···
 			.size	= 8,
 			.limit	= 4
 		},
+	},
+	[3] = {
+		.mask		= MLX5_PROF_MASK_QP_SIZE,
+		.log_max_qp	= LOG_MAX_SUPPORTED_QPS,
+		.num_cmd_caches	= 0,
 	},
 };
+1
drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
···
 };

 #define MLX5_DEFAULT_PROF	2
+#define MLX5_SF_PROF		3

 static inline int mlx5_flexible_inlen(struct mlx5_core_dev *dev, size_t fixed,
 				      size_t item_size, size_t num_items,
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
···
 	mdev->priv.adev_idx = adev->id;
 	sf_dev->mdev = mdev;

-	err = mlx5_mdev_init(mdev, MLX5_DEFAULT_PROF);
+	err = mlx5_mdev_init(mdev, MLX5_SF_PROF);
 	if (err) {
 		mlx5_core_warn(mdev, "mlx5_mdev_init on err=%d\n", err);
 		goto mdev_err;
+6
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c
···
 	caps->hdr_modify_icm_addr =
 		MLX5_CAP64_DEV_MEM(mdev, header_modify_sw_icm_start_address);

+	caps->log_modify_pattern_icm_size =
+		MLX5_CAP_DEV_MEM(mdev, log_header_modify_pattern_sw_icm_size);
+
+	caps->hdr_modify_pattern_icm_addr =
+		MLX5_CAP64_DEV_MEM(mdev, header_modify_pattern_sw_icm_start_address);
+
 	caps->roce_min_src_udp = MLX5_CAP_ROCE(mdev, r_roce_min_src_udp_port);

 	caps->is_ecpf = mlx5_core_is_ecpf_esw_manager(mdev);
+42 -3
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
···
 	((dmn)->info.caps.dmn_type##_sw_owner_v2 && \
 	 (dmn)->info.caps.sw_format_ver <= MLX5_STEERING_FORMAT_CONNECTX_7))

+bool mlx5dr_domain_is_support_ptrn_arg(struct mlx5dr_domain *dmn)
+{
+	return false;
+}
+
+static int dr_domain_init_modify_header_resources(struct mlx5dr_domain *dmn)
+{
+	if (!mlx5dr_domain_is_support_ptrn_arg(dmn))
+		return 0;
+
+	dmn->ptrn_mgr = mlx5dr_ptrn_mgr_create(dmn);
+	if (!dmn->ptrn_mgr) {
+		mlx5dr_err(dmn, "Couldn't create ptrn_mgr\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void dr_domain_destroy_modify_header_resources(struct mlx5dr_domain *dmn)
+{
+	if (!mlx5dr_domain_is_support_ptrn_arg(dmn))
+		return;
+
+	mlx5dr_ptrn_mgr_destroy(dmn->ptrn_mgr);
+}
+
 static void dr_domain_init_csum_recalc_fts(struct mlx5dr_domain *dmn)
 {
 	/* Per vport cached FW FT for checksum recalculation, this
···
 		goto clean_uar;
 	}

+	ret = dr_domain_init_modify_header_resources(dmn);
+	if (ret) {
+		mlx5dr_err(dmn, "Couldn't create modify-header-resources\n");
+		goto clean_mem_resources;
+	}
+
 	ret = mlx5dr_send_ring_alloc(dmn);
 	if (ret) {
 		mlx5dr_err(dmn, "Couldn't create send-ring\n");
-		goto clean_mem_resources;
+		goto clean_modify_hdr;
 	}

 	return 0;

+clean_modify_hdr:
+	dr_domain_destroy_modify_header_resources(dmn);
 clean_mem_resources:
 	dr_domain_uninit_mem_resources(dmn);
 clean_uar:
···
 static void dr_domain_uninit_resources(struct mlx5dr_domain *dmn)
 {
 	mlx5dr_send_ring_free(dmn, dmn->send_ring);
+	dr_domain_destroy_modify_header_resources(dmn);
 	dr_domain_uninit_mem_resources(dmn);
 	mlx5_put_uars_page(dmn->mdev, dmn->uar);
 	mlx5_core_dealloc_pd(dmn->mdev, dmn->pdn);
···
 	return 0;
 }

-static int dr_domain_query_esw_mngr(struct mlx5dr_domain *dmn)
+static int dr_domain_query_esw_mgr(struct mlx5dr_domain *dmn)
 {
 	return dr_domain_query_vport(dmn, 0, false,
 				     &dmn->info.caps.vports.esw_manager_caps);
···
 	 * vports (vport 0, VFs and SFs) will be queried dynamically.
 	 */

-	ret = dr_domain_query_esw_mngr(dmn);
+	ret = dr_domain_query_esw_mgr(dmn);
 	if (ret) {
 		mlx5dr_err(dmn, "Failed to query eswitch manager vport caps (err: %d)", ret);
 		goto free_vports_caps_xa;
···
 	dmn->info.max_log_action_icm_sz = DR_CHUNK_SIZE_4K;
 	dmn->info.max_log_sw_icm_sz = min_t(u32, DR_CHUNK_SIZE_1024K,
 					    dmn->info.caps.log_icm_size);
+	dmn->info.max_log_modify_hdr_pattern_icm_sz =
+		min_t(u32, DR_CHUNK_SIZE_4K,
+		      dmn->info.caps.log_modify_pattern_icm_size);

 	if (!dmn->info.supp_sw_steering) {
 		mlx5dr_err(dmn, "SW steering is not supported\n");
+29 -12
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
··· 107 107 dr_icm_pool_mr_create(struct mlx5dr_icm_pool *pool) 108 108 { 109 109 struct mlx5_core_dev *mdev = pool->dmn->mdev; 110 - enum mlx5_sw_icm_type dm_type; 110 + enum mlx5_sw_icm_type dm_type = 0; 111 111 struct mlx5dr_icm_mr *icm_mr; 112 - size_t log_align_base; 112 + size_t log_align_base = 0; 113 113 int err; 114 114 115 115 icm_mr = kvzalloc(sizeof(*icm_mr), GFP_KERNEL); ··· 121 121 icm_mr->dm.length = mlx5dr_icm_pool_chunk_size_to_byte(pool->max_log_chunk_sz, 122 122 pool->icm_type); 123 123 124 - if (pool->icm_type == DR_ICM_TYPE_STE) { 124 + switch (pool->icm_type) { 125 + case DR_ICM_TYPE_STE: 125 126 dm_type = MLX5_SW_ICM_TYPE_STEERING; 126 127 log_align_base = ilog2(icm_mr->dm.length); 127 - } else { 128 + break; 129 + case DR_ICM_TYPE_MODIFY_ACTION: 128 130 dm_type = MLX5_SW_ICM_TYPE_HEADER_MODIFY; 129 131 /* Align base is 64B */ 130 132 log_align_base = ilog2(DR_ICM_MODIFY_HDR_ALIGN_BASE); 133 + break; 134 + case DR_ICM_TYPE_MODIFY_HDR_PTRN: 135 + dm_type = MLX5_SW_ICM_TYPE_HEADER_MODIFY_PATTERN; 136 + /* Align base is 64B */ 137 + log_align_base = ilog2(DR_ICM_MODIFY_HDR_ALIGN_BASE); 138 + break; 139 + default: 140 + WARN_ON(pool->icm_type); 131 141 } 142 + 132 143 icm_mr->dm.type = dm_type; 133 144 134 145 err = mlx5_dm_sw_icm_alloc(mdev, icm_mr->dm.type, icm_mr->dm.length, ··· 504 493 enum mlx5dr_icm_type icm_type) 505 494 { 506 495 u32 num_of_chunks, entry_size, max_hot_size; 507 - enum mlx5dr_icm_chunk_size max_log_chunk_sz; 508 496 struct mlx5dr_icm_pool *pool; 509 - 510 - if (icm_type == DR_ICM_TYPE_STE) 511 - max_log_chunk_sz = dmn->info.max_log_sw_icm_sz; 512 - else 513 - max_log_chunk_sz = dmn->info.max_log_action_icm_sz; 514 497 515 498 pool = kvzalloc(sizeof(*pool), GFP_KERNEL); 516 499 if (!pool) ··· 512 507 513 508 pool->dmn = dmn; 514 509 pool->icm_type = icm_type; 515 - pool->max_log_chunk_sz = max_log_chunk_sz; 516 510 pool->chunks_kmem_cache = dmn->chunks_kmem_cache; 517 511 518 512 INIT_LIST_HEAD(&pool->buddy_mem_list); 519 - 
520 513 mutex_init(&pool->mutex); 514 + 515 + switch (icm_type) { 516 + case DR_ICM_TYPE_STE: 517 + pool->max_log_chunk_sz = dmn->info.max_log_sw_icm_sz; 518 + break; 519 + case DR_ICM_TYPE_MODIFY_ACTION: 520 + pool->max_log_chunk_sz = dmn->info.max_log_action_icm_sz; 521 + break; 522 + case DR_ICM_TYPE_MODIFY_HDR_PTRN: 523 + pool->max_log_chunk_sz = dmn->info.max_log_modify_hdr_pattern_icm_sz; 524 + break; 525 + default: 526 + WARN_ON(icm_type); 527 + } 521 528 522 529 entry_size = mlx5dr_icm_pool_dm_type_to_entry_size(pool->icm_type); 523 530
+43
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ptrn.c
···
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+// Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+
+#include "dr_types.h"
+
+struct mlx5dr_ptrn_mgr {
+	struct mlx5dr_domain *dmn;
+	struct mlx5dr_icm_pool *ptrn_icm_pool;
+};
+
+struct mlx5dr_ptrn_mgr *mlx5dr_ptrn_mgr_create(struct mlx5dr_domain *dmn)
+{
+	struct mlx5dr_ptrn_mgr *mgr;
+
+	if (!mlx5dr_domain_is_support_ptrn_arg(dmn))
+		return NULL;
+
+	mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
+	if (!mgr)
+		return NULL;
+
+	mgr->dmn = dmn;
+	mgr->ptrn_icm_pool = mlx5dr_icm_pool_create(dmn, DR_ICM_TYPE_MODIFY_HDR_PTRN);
+	if (!mgr->ptrn_icm_pool) {
+		mlx5dr_err(dmn, "Couldn't get modify-header-pattern memory\n");
+		goto free_mgr;
+	}
+
+	return mgr;
+
+free_mgr:
+	kfree(mgr);
+	return NULL;
+}
+
+void mlx5dr_ptrn_mgr_destroy(struct mlx5dr_ptrn_mgr *mgr)
+{
+	if (!mgr)
+		return;
+
+	mlx5dr_icm_pool_destroy(mgr->ptrn_icm_pool);
+	kfree(mgr);
+}
+39 -21
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
···
 	unsigned int send_flags;
 };

+enum send_info_type {
+	WRITE_ICM = 0,
+};
+
 struct postsend_info {
+	enum send_info_type type;
 	struct dr_data_seg write;
 	struct dr_data_seg read;
 	u64 remote_addr;
···
 static void dr_post_send(struct mlx5dr_qp *dr_qp, struct postsend_info *send_info)
 {
-	dr_rdma_segments(dr_qp, send_info->remote_addr, send_info->rkey,
-			 &send_info->write, MLX5_OPCODE_RDMA_WRITE, false);
-	dr_rdma_segments(dr_qp, send_info->remote_addr, send_info->rkey,
-			 &send_info->read, MLX5_OPCODE_RDMA_READ, true);
+	if (send_info->type == WRITE_ICM) {
+		dr_rdma_segments(dr_qp, send_info->remote_addr, send_info->rkey,
+				 &send_info->write, MLX5_OPCODE_RDMA_WRITE, false);
+		dr_rdma_segments(dr_qp, send_info->remote_addr, send_info->rkey,
+				 &send_info->read, MLX5_OPCODE_RDMA_READ, true);
+	}
 }

 /**
···
 	return 0;
 }

-static void dr_fill_data_segs(struct mlx5dr_send_ring *send_ring,
-			      struct postsend_info *send_info)
+static void dr_fill_write_icm_segs(struct mlx5dr_domain *dmn,
+				   struct mlx5dr_send_ring *send_ring,
+				   struct postsend_info *send_info)
 {
+	u32 buff_offset;
+
+	if (send_info->write.length > dmn->info.max_inline_size) {
+		buff_offset = (send_ring->tx_head &
+			       (dmn->send_ring->signal_th - 1)) *
+			      send_ring->max_post_send_size;
+		/* Copy to ring mr */
+		memcpy(send_ring->buf + buff_offset,
+		       (void *)(uintptr_t)send_info->write.addr,
+		       send_info->write.length);
+		send_info->write.addr = (uintptr_t)send_ring->mr->dma_addr + buff_offset;
+		send_info->write.lkey = send_ring->mr->mkey;
+
+		send_ring->tx_head++;
+	}
+
 	send_ring->pending_wqe++;

 	if (send_ring->pending_wqe % send_ring->signal_th == 0)
···
 	send_info->read.send_flags = 0;
 }

+static void dr_fill_data_segs(struct mlx5dr_domain *dmn,
+			      struct mlx5dr_send_ring *send_ring,
+			      struct postsend_info *send_info)
+{
+	if (send_info->type == WRITE_ICM)
+		dr_fill_write_icm_segs(dmn, send_ring, send_info);
+}
+
 static int dr_postsend_icm_data(struct mlx5dr_domain *dmn,
 				struct postsend_info *send_info)
 {
 	struct mlx5dr_send_ring *send_ring = dmn->send_ring;
-	u32 buff_offset;
 	int ret;

 	if (unlikely(dmn->mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR ||
···
 	if (ret)
 		goto out_unlock;

-	if (send_info->write.length > dmn->info.max_inline_size) {
-		buff_offset = (send_ring->tx_head &
-			       (dmn->send_ring->signal_th - 1)) *
-			      send_ring->max_post_send_size;
-		/* Copy to ring mr */
-		memcpy(send_ring->buf + buff_offset,
-		       (void *)(uintptr_t)send_info->write.addr,
-		       send_info->write.length);
-		send_info->write.addr = (uintptr_t)send_ring->mr->dma_addr + buff_offset;
-		send_info->write.lkey = send_ring->mr->mkey;
-	}
-
-	send_ring->tx_head++;
-	dr_fill_data_segs(send_ring, send_info);
+	dr_fill_data_segs(dmn, send_ring, send_info);
 	dr_post_send(send_ring->qp, send_info);

 out_unlock:
+4 -3
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c
···
 		allow_modify_hdr = false;
 	}

-	if (action_type_set[DR_ACTION_TYP_CTR])
-		dr_ste_v1_set_counter_id(last_ste, attr->ctr_id);
-
 	if (action_type_set[DR_ACTION_TYP_MODIFY_HDR]) {
 		if (!allow_modify_hdr || action_sz < DR_STE_ACTION_DOUBLE_SZ) {
 			dr_ste_v1_arr_init_next_match(&last_ste, added_stes,
···
 					       attr->range.min,
 					       attr->range.max);
 	}
+
+	/* set counter ID on the last STE to adhere to DMFS behavior */
+	if (action_type_set[DR_ACTION_TYP_CTR])
+		dr_ste_v1_set_counter_id(last_ste, attr->ctr_id);

 	dr_ste_v1_set_hit_gvmi(last_ste, attr->hit_gvmi);
 	dr_ste_v1_set_hit_addr(last_ste, attr->final_icm_addr, 1);
+11
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
···
 #define mlx5dr_info(dmn, arg...) mlx5_core_info((dmn)->mdev, ##arg)
 #define mlx5dr_dbg(dmn, arg...) mlx5_core_dbg((dmn)->mdev, ##arg)

+struct mlx5dr_ptrn_mgr;
+
 static inline bool dr_is_flex_parser_0_id(u8 parser_id)
 {
 	return parser_id <= DR_STE_MAX_FLEX_0_ID;
···
 enum mlx5dr_icm_type {
 	DR_ICM_TYPE_STE,
 	DR_ICM_TYPE_MODIFY_ACTION,
+	DR_ICM_TYPE_MODIFY_HDR_PTRN,
 };

 static inline enum mlx5dr_icm_chunk_size
···
 	u64 esw_tx_drop_address;
 	u32 log_icm_size;
 	u64 hdr_modify_icm_addr;
+	u32 log_modify_pattern_icm_size;
+	u64 hdr_modify_pattern_icm_addr;
 	u32 flex_protocols;
 	u8 flex_parser_id_icmp_dw0;
 	u8 flex_parser_id_icmp_dw1;
···
 	u32 max_send_wr;
 	u32 max_log_sw_icm_sz;
 	u32 max_log_action_icm_sz;
+	u32 max_log_modify_hdr_pattern_icm_sz;
 	struct mlx5dr_domain_rx_tx rx;
 	struct mlx5dr_domain_rx_tx tx;
 	struct mlx5dr_cmd_caps caps;
···
 	struct mlx5dr_send_info_pool *send_info_pool_tx;
 	struct kmem_cache *chunks_kmem_cache;
 	struct kmem_cache *htbls_kmem_cache;
+	struct mlx5dr_ptrn_mgr *ptrn_mgr;
 	struct mlx5dr_send_ring *send_ring;
 	struct mlx5dr_domain_info info;
 	struct xarray csum_fts_xa;
···
 	(MLX5_CAP_GEN_64(dev, match_definer_format_supported) &
 	 (1ULL << MLX5_IFC_DEFINER_FORMAT_ID_SELECT));
 }
+
+bool mlx5dr_domain_is_support_ptrn_arg(struct mlx5dr_domain *dmn);
+struct mlx5dr_ptrn_mgr *mlx5dr_ptrn_mgr_create(struct mlx5dr_domain *dmn);
+void mlx5dr_ptrn_mgr_destroy(struct mlx5dr_ptrn_mgr *mgr);

 #endif /* _DR_TYPES_H_ */
+2
include/linux/mlx5/device.h
···

 	MLX5_OPCODE_UMR			= 0x25,

+	MLX5_OPCODE_FLOW_TBL_ACCESS	= 0x2c,
+
 	MLX5_OPCODE_ACCESS_ASO		= 0x2d,
 };
+1
include/linux/mlx5/driver.h
···
 struct mlx5_profile {
 	u64 mask;
 	u8 log_max_qp;
+	u8 num_cmd_caches;
 	struct {
 		int size;
 		int limit;
+33 -2
include/linux/mlx5/mlx5_ifc.h
···
 enum {
 	MLX5_OBJ_TYPE_SW_ICM = 0x0008,
+	MLX5_OBJ_TYPE_HEADER_MODIFY_ARGUMENT = 0x23,
 };

 enum {
 	MLX5_GENERAL_OBJ_TYPES_CAP_SW_ICM = (1ULL << MLX5_OBJ_TYPE_SW_ICM),
 	MLX5_GENERAL_OBJ_TYPES_CAP_GENEVE_TLV_OPT = (1ULL << 11),
 	MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q = (1ULL << 13),
+	MLX5_GENERAL_OBJ_TYPES_CAP_HEADER_MODIFY_ARGUMENT =
+		(1ULL << MLX5_OBJ_TYPE_HEADER_MODIFY_ARGUMENT),
 	MLX5_GENERAL_OBJ_TYPES_CAP_MACSEC_OFFLOAD = (1ULL << 39),
 };
···
 enum {
 	MLX5_FT_NIC_RX_2_NIC_RX_RDMA = BIT(0),
 	MLX5_FT_NIC_TX_RDMA_2_NIC_TX = BIT(1),
 };

+enum {
+	MLX5_CMD_OP_MOD_UPDATE_HEADER_MODIFY_ARGUMENT = 0x1,
+};
+
 struct mlx5_ifc_flow_table_fields_supported_bits {
···
 struct mlx5_ifc_flow_table_eswitch_cap_bits {
 	u8 fdb_to_vport_reg_c_id[0x8];
-	u8 reserved_at_8[0xd];
+	u8 reserved_at_8[0x5];
+	u8 fdb_uplink_hairpin[0x1];
+	u8 fdb_multi_path_any_table_limit_regc[0x1];
+	u8 reserved_at_f[0x3];
+	u8 fdb_multi_path_any_table[0x1];
+	u8 reserved_at_13[0x2];
 	u8 fdb_modify_header_fwd_to_table[0x1];
 	u8 fdb_ipv4_ttl_modify[0x1];
 	u8 flow_source[0x1];
···
 	u8 reserved_at_750[0x4];
 	u8 max_dynamic_vf_msix_table_size[0xc];

-	u8 reserved_at_760[0x20];
+	u8 reserved_at_760[0x3];
+	u8 log_max_num_header_modify_argument[0x5];
+	u8 reserved_at_768[0x4];
+	u8 log_header_modify_argument_granularity[0x4];
+	u8 reserved_at_770[0x3];
+	u8 log_header_modify_argument_max_alloc[0x5];
+	u8 reserved_at_778[0x8];
+
 	u8 vhca_tunnel_commands[0x40];
 	u8 match_definer_format_supported[0x40];
 };
···
 	u8 obj_id[0x20];

 	u8 reserved_at_60[0x20];
 };

+struct mlx5_ifc_modify_header_arg_bits {
+	u8 reserved_at_0[0x80];
+
+	u8 reserved_at_80[0x8];
+	u8 access_pd[0x18];
+};
+
+struct mlx5_ifc_create_modify_header_arg_in_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_modify_header_arg_bits arg;
+};
+
 struct mlx5_ifc_create_match_definer_in_bits {
+10
include/linux/mlx5/qp.h
···
 	__be16 num_entries;
 };

+struct mlx5_wqe_flow_update_ctrl_seg {
+	__be32 flow_idx_update;
+	__be32 dest_handle;
+	u8 reserved0[40];
+};
+
+struct mlx5_wqe_header_modify_argument_update_seg {
+	u8 argument_list[64];
+};
+
 struct mlx5_core_qp {
 	struct mlx5_core_rsc_common common; /* must be first */
 	void (*event) (struct mlx5_core_qp *, int);