Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

tipc: improve throughput between nodes in netns

Currently, TIPC transports intra-node user data messages directly
socket to socket, hence shortcutting all the lower layers of the
communication stack. This gives TIPC very good intra-node performance,
both regarding throughput and latency.

We now introduce a similar mechanism for TIPC data traffic across
network namespaces located in the same kernel. On the send path, the
call chain is, as always, accompanied by the sending node's network
namespace pointer. However, once we have reliably established that the
receiving node is represented by a namespace on the same host, we just
replace that pointer with the receiving node/namespace's own, and
follow the regular socket receive path through the receiving node. This
technique gives us a throughput similar to the node-internal
throughput, several times larger than if we let the traffic go through
the full network stacks. As a comparison, max throughput for 64k
messages is four times larger than TCP throughput for the same type of
traffic.

To meet any security concerns, the following should be noted.

- All nodes joining a cluster are supposed to have been certified and
authenticated by mechanisms outside TIPC. This is no different for
nodes/namespaces on the same host; they have to auto-discover each
other using the attached interfaces, and establish links which are
supervised via the regular link monitoring mechanism. Hence, a
kernel-local node has no other way to join a cluster than any other
node, and has to obey the policies set in the IP or device layers of
the stack.

- Only when a sender has established with 100% certainty that the peer
node is located in a kernel local namespace does it choose to let user
data messages, and only those, take the crossover path to the receiving
node/namespace.

- If the receiving node/namespace is removed, its namespace pointer
is invalidated at all peer nodes, and their neighbor link monitoring
will eventually note that this node is gone.

- To ensure the "100% certainty" criterion, and to prevent any possible
spoofing, received discovery messages must contain proof that the
sender knows a common secret. We use the hash mix of the sending
node/namespace for this purpose, since it can be accessed directly by
all other namespaces in the kernel. Upon reception of a discovery
message, the receiver checks this proof against all the local
namespaces' hash_mix values. If it finds a match, that, along with a
matching node id and cluster id, is deemed sufficient proof that the
peer node in question is in a local namespace, and a wormhole can be
opened.

- We should also consider that TIPC is intended to be a cluster-local
IPC mechanism (just like e.g. UNIX sockets) rather than a network
protocol, and hence we think it is justified to allow it to shortcut
the lower protocol layers.

Regarding traceability, we should note that since commit 6c9081a3915d
("tipc: add loopback device tracking") it is possible to follow the
node-internal packet flow just by activating tcpdump on the loopback
interface. This is true even for this mechanism; by activating tcpdump
on the involved nodes' loopback interfaces, their inter-namespace
messaging can easily be tracked.

v2:
- update 'net' pointer when node left/rejoined
v3:
- grab read/write lock when using node ref obj
v4:
- clone traffic between netns to loopback

Suggested-by: Jon Maloy <jon.maloy@ericsson.com>
Acked-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Hoang Le <hoang.h.le@dektech.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>

Authored by Hoang Le, committed by David S. Miller
f73b1281 51210ad5

+197 -11
+16
net/tipc/core.c
···
 	tipc_sk_rht_destroy(net);
 }

+static void __net_exit tipc_pernet_pre_exit(struct net *net)
+{
+	tipc_node_pre_cleanup_net(net);
+}
+
+static struct pernet_operations tipc_pernet_pre_exit_ops = {
+	.pre_exit = tipc_pernet_pre_exit,
+};
+
 static struct pernet_operations tipc_net_ops = {
 	.init = tipc_init_net,
 	.exit = tipc_exit_net,
···
 	if (err)
 		goto out_pernet_topsrv;

+	err = register_pernet_subsys(&tipc_pernet_pre_exit_ops);
+	if (err)
+		goto out_register_pernet_subsys;
+
 	err = tipc_bearer_setup();
 	if (err)
 		goto out_bearer;
···
 	pr_info("Started in single node mode\n");
 	return 0;
 out_bearer:
+	unregister_pernet_subsys(&tipc_pernet_pre_exit_ops);
+out_register_pernet_subsys:
 	unregister_pernet_device(&tipc_topsrv_net_ops);
 out_pernet_topsrv:
 	tipc_socket_stop();
···
 static void __exit tipc_exit(void)
 {
 	tipc_bearer_cleanup();
+	unregister_pernet_subsys(&tipc_pernet_pre_exit_ops);
 	unregister_pernet_device(&tipc_topsrv_net_ops);
 	tipc_socket_stop();
 	unregister_pernet_device(&tipc_net_ops);
+6
net/tipc/core.h
···
 #include <net/netns/generic.h>
 #include <linux/rhashtable.h>
 #include <net/genetlink.h>
+#include <net/netns/hash.h>

 struct tipc_node;
 struct tipc_bearer;
···
 static inline int in_range(u16 val, u16 min, u16 max)
 {
 	return !less(val, min) && !more(val, max);
+}
+
+static inline u32 tipc_net_hash_mixes(struct net *net, int tn_rand)
+{
+	return net_hash_mix(&init_net) ^ net_hash_mix(net) ^ tn_rand;
 }

 #ifdef CONFIG_SYSCTL
+3 -1
net/tipc/discover.c
···
 	msg_set_dest_domain(hdr, dest_domain);
 	msg_set_bc_netid(hdr, tn->net_id);
 	b->media->addr2msg(msg_media_addr(hdr), &b->addr);
+	msg_set_peer_net_hash(hdr, tipc_net_hash_mixes(net, tn->random));
 	msg_set_node_id(hdr, tipc_own_id(net));
 }
···
 	if (!tipc_in_scope(legacy, b->domain, src))
 		return;
 	tipc_node_check_dest(net, src, peer_id, b, caps, signature,
-			     &maddr, &respond, &dupl_addr);
+			     msg_peer_net_hash(hdr), &maddr, &respond,
+			     &dupl_addr);
 	if (dupl_addr)
 		disc_dupl_alert(b, src, &maddr);
 	if (!respond)
+14
net/tipc/msg.h
···
 	return (msg_user(hdr) == LINK_PROTOCOL) && (msg_type(hdr) == RESET_MSG);
 }

+/* Word 13
+ */
+static inline void msg_set_peer_net_hash(struct tipc_msg *m, u32 n)
+{
+	msg_set_word(m, 13, n);
+}
+
+static inline u32 msg_peer_net_hash(struct tipc_msg *m)
+{
+	return msg_word(m, 13);
+}
+
+/* Word 14
+ */
 static inline u32 msg_sugg_node_addr(struct tipc_msg *m)
 {
 	return msg_word(m, 14);
+1 -1
net/tipc/name_distr.c
···
 	struct publication *publ;
 	struct sk_buff *skb = NULL;
 	struct distr_item *item = NULL;
-	u32 msg_dsz = ((tipc_node_get_mtu(net, dnode, 0) - INT_H_SIZE) /
+	u32 msg_dsz = ((tipc_node_get_mtu(net, dnode, 0, false) - INT_H_SIZE) /
 		      ITEM_SIZE) * ITEM_SIZE;
 	u32 msg_rem = msg_dsz;
+151 -4
net/tipc/node.c
···
 	struct timer_list timer;
 	struct rcu_head rcu;
 	unsigned long delete_at;
+	struct net *peer_net;
+	u32 peer_hash_mix;
 };

 /* Node FSM states and events:
···
 	return n->links[bearer_id].link;
 }

-int tipc_node_get_mtu(struct net *net, u32 addr, u32 sel)
+int tipc_node_get_mtu(struct net *net, u32 addr, u32 sel, bool connected)
 {
 	struct tipc_node *n;
 	int bearer_id;
···
 	n = tipc_node_find(net, addr);
 	if (unlikely(!n))
 		return mtu;
+
+	/* Allow MAX_MSG_SIZE when building connection oriented message
+	 * if they are in the same core network
+	 */
+	if (n->peer_net && connected) {
+		tipc_node_put(n);
+		return mtu;
+	}

 	bearer_id = n->active_links[sel & 1];
 	if (likely(bearer_id != INVALID_BEARER_ID))
···
 	}
 }

+static void tipc_node_assign_peer_net(struct tipc_node *n, u32 hash_mixes)
+{
+	int net_id = tipc_netid(n->net);
+	struct tipc_net *tn_peer;
+	struct net *tmp;
+	u32 hash_chk;
+
+	if (n->peer_net)
+		return;
+
+	for_each_net_rcu(tmp) {
+		tn_peer = tipc_net(tmp);
+		if (!tn_peer)
+			continue;
+		/* Integrity checking whether node exists in namespace or not */
+		if (tn_peer->net_id != net_id)
+			continue;
+		if (memcmp(n->peer_id, tn_peer->node_id, NODE_ID_LEN))
+			continue;
+		hash_chk = tipc_net_hash_mixes(tmp, tn_peer->random);
+		if (hash_mixes ^ hash_chk)
+			continue;
+		n->peer_net = tmp;
+		n->peer_hash_mix = hash_mixes;
+		break;
+	}
+}
+
 static struct tipc_node *tipc_node_create(struct net *net, u32 addr,
-					  u8 *peer_id, u16 capabilities)
+					  u8 *peer_id, u16 capabilities,
+					  u32 signature, u32 hash_mixes)
 {
 	struct tipc_net *tn = net_generic(net, tipc_net_id);
 	struct tipc_node *n, *temp_node;
···
 	spin_lock_bh(&tn->node_list_lock);
 	n = tipc_node_find(net, addr);
 	if (n) {
+		if (n->peer_hash_mix ^ hash_mixes)
+			tipc_node_assign_peer_net(n, hash_mixes);
 		if (n->capabilities == capabilities)
 			goto exit;
 		/* Same node may come back with new capabilities */
···
 		list_for_each_entry_rcu(temp_node, &tn->node_list, list) {
 			tn->capabilities &= temp_node->capabilities;
 		}
+
 		goto exit;
 	}
 	n = kzalloc(sizeof(*n), GFP_ATOMIC);
···
 	n->addr = addr;
 	memcpy(&n->peer_id, peer_id, 16);
 	n->net = net;
+	n->peer_net = NULL;
+	n->peer_hash_mix = 0;
+	/* Assign kernel local namespace if exists */
+	tipc_node_assign_peer_net(n, hash_mixes);
 	n->capabilities = capabilities;
 	kref_init(&n->kref);
 	rwlock_init(&n->lock);
···
 				 tipc_bc_sndlink(net),
 				 &n->bc_entry.link)) {
 		pr_warn("Broadcast rcv link creation failed, no memory\n");
+		if (n->peer_net) {
+			n->peer_net = NULL;
+			n->peer_hash_mix = 0;
+		}
 		kfree(n);
 		n = NULL;
 		goto exit;
···
 void tipc_node_check_dest(struct net *net, u32 addr,
 			  u8 *peer_id, struct tipc_bearer *b,
-			  u16 capabilities, u32 signature,
+			  u16 capabilities, u32 signature, u32 hash_mixes,
 			  struct tipc_media_addr *maddr,
 			  bool *respond, bool *dupl_addr)
 {
···
 	*dupl_addr = false;
 	*respond = false;

-	n = tipc_node_create(net, addr, peer_id, capabilities);
+	n = tipc_node_create(net, addr, peer_id, capabilities, signature,
+			     hash_mixes);
 	if (!n)
 		return;
···
 	/* Notify publications from this node */
 	n->action_flags |= TIPC_NOTIFY_NODE_DOWN;

+	if (n->peer_net) {
+		n->peer_net = NULL;
+		n->peer_hash_mix = 0;
+	}
 	/* Notify sockets connected to node */
 	list_for_each_entry_safe(conn, safe, conns, list) {
 		skb = tipc_msg_create(TIPC_CRITICAL_IMPORTANCE, TIPC_CONN_MSG,
···
 	return -EMSGSIZE;
 }

+static void tipc_lxc_xmit(struct net *peer_net, struct sk_buff_head *list)
+{
+	struct tipc_msg *hdr = buf_msg(skb_peek(list));
+	struct sk_buff_head inputq;
+
+	switch (msg_user(hdr)) {
+	case TIPC_LOW_IMPORTANCE:
+	case TIPC_MEDIUM_IMPORTANCE:
+	case TIPC_HIGH_IMPORTANCE:
+	case TIPC_CRITICAL_IMPORTANCE:
+		if (msg_connected(hdr) || msg_named(hdr)) {
+			tipc_loopback_trace(peer_net, list);
+			spin_lock_init(&list->lock);
+			tipc_sk_rcv(peer_net, list);
+			return;
+		}
+		if (msg_mcast(hdr)) {
+			tipc_loopback_trace(peer_net, list);
+			skb_queue_head_init(&inputq);
+			tipc_sk_mcast_rcv(peer_net, list, &inputq);
+			__skb_queue_purge(list);
+			skb_queue_purge(&inputq);
+			return;
+		}
+		return;
+	case MSG_FRAGMENTER:
+		if (tipc_msg_assemble(list)) {
+			tipc_loopback_trace(peer_net, list);
+			skb_queue_head_init(&inputq);
+			tipc_sk_mcast_rcv(peer_net, list, &inputq);
+			__skb_queue_purge(list);
+			skb_queue_purge(&inputq);
+		}
+		return;
+	case GROUP_PROTOCOL:
+	case CONN_MANAGER:
+		tipc_loopback_trace(peer_net, list);
+		spin_lock_init(&list->lock);
+		tipc_sk_rcv(peer_net, list);
+		return;
+	case LINK_PROTOCOL:
+	case NAME_DISTRIBUTOR:
+	case TUNNEL_PROTOCOL:
+	case BCAST_PROTOCOL:
+		return;
+	default:
+		return;
+	};
+}
+
 /**
  * tipc_node_xmit() is the general link level function for message sending
  * @net: the applicable net namespace
···
 	struct tipc_link_entry *le = NULL;
 	struct tipc_node *n;
 	struct sk_buff_head xmitq;
+	bool node_up = false;
 	int bearer_id;
 	int rc;
···
 	}

 	tipc_node_read_lock(n);
+	node_up = node_is_up(n);
+	if (node_up && n->peer_net && check_net(n->peer_net)) {
+		/* xmit inner linux container */
+		tipc_lxc_xmit(n->peer_net, list);
+		if (likely(skb_queue_empty(list))) {
+			tipc_node_read_unlock(n);
+			tipc_node_put(n);
+			return 0;
+		}
+	}
+
 	bearer_id = n->active_links[selector & 1];
 	if (unlikely(bearer_id == INVALID_BEARER_ID)) {
 		tipc_node_read_unlock(n);
···
 	i += tipc_link_dump(n->bc_entry.link, TIPC_DUMP_NONE, buf + i);

 	return i;
+}
+
+void tipc_node_pre_cleanup_net(struct net *exit_net)
+{
+	struct tipc_node *n;
+	struct tipc_net *tn;
+	struct net *tmp;
+
+	rcu_read_lock();
+	for_each_net_rcu(tmp) {
+		if (tmp == exit_net)
+			continue;
+		tn = tipc_net(tmp);
+		if (!tn)
+			continue;
+		spin_lock_bh(&tn->node_list_lock);
+		list_for_each_entry_rcu(n, &tn->node_list, list) {
+			if (!n->peer_net)
+				continue;
+			if (n->peer_net != exit_net)
+				continue;
+			tipc_node_write_lock(n);
+			n->peer_net = NULL;
+			n->peer_hash_mix = 0;
+			tipc_node_write_unlock_fast(n);
+			break;
+		}
+		spin_unlock_bh(&tn->node_list_lock);
+	}
+	rcu_read_unlock();
 }
+3 -2
net/tipc/node.h
···
 u32 tipc_node_try_addr(struct net *net, u8 *id, u32 addr);
 void tipc_node_check_dest(struct net *net, u32 onode, u8 *peer_id128,
 			  struct tipc_bearer *bearer,
-			  u16 capabilities, u32 signature,
+			  u16 capabilities, u32 signature, u32 hash_mixes,
 			  struct tipc_media_addr *maddr,
 			  bool *respond, bool *dupl_addr);
 void tipc_node_delete_links(struct net *net, int bearer_id);
···
 void tipc_node_broadcast(struct net *net, struct sk_buff *skb);
 int tipc_node_add_conn(struct net *net, u32 dnode, u32 port, u32 peer_port);
 void tipc_node_remove_conn(struct net *net, u32 dnode, u32 port);
-int tipc_node_get_mtu(struct net *net, u32 addr, u32 sel);
+int tipc_node_get_mtu(struct net *net, u32 addr, u32 sel, bool connected);
 bool tipc_node_is_up(struct net *net, u32 addr);
 u16 tipc_node_get_capabilities(struct net *net, u32 addr);
 int tipc_nl_node_dump(struct sk_buff *skb, struct netlink_callback *cb);
···
 int tipc_nl_node_dump_monitor(struct sk_buff *skb, struct netlink_callback *cb);
 int tipc_nl_node_dump_monitor_peer(struct sk_buff *skb,
 				   struct netlink_callback *cb);
+void tipc_node_pre_cleanup_net(struct net *exit_net);
 #endif
+3 -3
net/tipc/socket.c
···
 	/* Build message as chain of buffers */
 	__skb_queue_head_init(&pkts);
-	mtu = tipc_node_get_mtu(net, dnode, tsk->portid);
+	mtu = tipc_node_get_mtu(net, dnode, tsk->portid, false);
 	rc = tipc_msg_build(hdr, m, 0, dlen, mtu, &pkts);
 	if (unlikely(rc != dlen))
 		return rc;
···
 		return rc;

 	__skb_queue_head_init(&pkts);
-	mtu = tipc_node_get_mtu(net, dnode, tsk->portid);
+	mtu = tipc_node_get_mtu(net, dnode, tsk->portid, false);
 	rc = tipc_msg_build(hdr, m, 0, dlen, mtu, &pkts);
 	if (unlikely(rc != dlen))
 		return rc;
···
 	sk_reset_timer(sk, &sk->sk_timer, jiffies + CONN_PROBING_INTV);
 	tipc_set_sk_state(sk, TIPC_ESTABLISHED);
 	tipc_node_add_conn(net, peer_node, tsk->portid, peer_port);
-	tsk->max_pkt = tipc_node_get_mtu(net, peer_node, tsk->portid);
+	tsk->max_pkt = tipc_node_get_mtu(net, peer_node, tsk->portid, true);
 	tsk->peer_caps = tipc_node_get_capabilities(net, peer_node);
 	__skb_queue_purge(&sk->sk_write_queue);
 	if (tsk->peer_caps & TIPC_BLOCK_FLOWCTL)