Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

selftests: forwarding: Add test for custom multipath hash

Test that when the hash policy is set to custom, traffic is distributed
only according to the outer fields set in the fib_multipath_hash_fields
sysctl.

Each test case sets a different field and verifies that traffic is only
distributed when that field varies across the packet stream.

The test only verifies the behavior with non-encapsulated IPv4 and IPv6
packets. Subsequent patches will add tests for IPv4/IPv6 overlays on top
of IPv4/IPv6 underlay networks.
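
For context, the knobs exercised by the test are plain sysctls; a minimal
sketch of selecting the custom policy and hashing only on the source
address (values taken from the test itself; requires root):

```shell
# Policy 3 = custom: hash only on the fields enabled in
# fib_multipath_hash_fields.  Bits used by this test:
# 0x0001 src IP, 0x0002 dst IP, 0x0008 flowlabel (IPv6 only),
# 0x0010 src port, 0x0020 dst port.
sysctl -w net.ipv4.fib_multipath_hash_policy=3
sysctl -w net.ipv4.fib_multipath_hash_fields=0x0001
```

The same pair of sysctls exists under net.ipv6 for the IPv6 data path.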

Example output:

# ./custom_multipath_hash.sh
TEST: ping [ OK ]
TEST: ping6 [ OK ]
INFO: Running IPv4 custom multipath hash tests
TEST: Multipath hash field: Source IP (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 6353 / 6254
TEST: Multipath hash field: Source IP (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 0 / 12600
TEST: Multipath hash field: Destination IP (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 6102 / 6502
TEST: Multipath hash field: Destination IP (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 1 / 12601
TEST: Multipath hash field: Source port (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 16428 / 16345
TEST: Multipath hash field: Source port (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 32770 / 2
TEST: Multipath hash field: Destination port (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 16428 / 16345
TEST: Multipath hash field: Destination port (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 32770 / 2
INFO: Running IPv6 custom multipath hash tests
TEST: Multipath hash field: Source IP (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 6704 / 5903
TEST: Multipath hash field: Source IP (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 12600 / 0
TEST: Multipath hash field: Destination IP (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 5551 / 7052
TEST: Multipath hash field: Destination IP (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 12603 / 0
TEST: Multipath hash field: Flowlabel (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 8378 / 8080
TEST: Multipath hash field: Flowlabel (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 2 / 12603
TEST: Multipath hash field: Source port (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 16385 / 16388
TEST: Multipath hash field: Source port (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 0 / 32774
TEST: Multipath hash field: Destination port (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 16386 / 16390
TEST: Multipath hash field: Destination port (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 32771 / 2

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

Authored by Ido Schimmel, committed by David S. Miller
511e8db5 73c2c5cb

+364
tools/testing/selftests/net/forwarding/custom_multipath_hash.sh
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
#
# Test traffic distribution between two paths when using custom hash policy.
#
# +--------------------------------+
# | H1                             |
# |                     $h1 +      |
# |   198.51.100.{2-253}/24 |      |
# |   2001:db8:1::{2-fd}/64 |      |
# +-------------------------|------+
#                           |
# +-------------------------|-------------------------+
# | SW1                     |                         |
# |                    $rp1 +                         |
# |          198.51.100.1/24                          |
# |         2001:db8:1::1/64                          |
# |                                                   |
# |                                                   |
# |            $rp11 +             + $rp12            |
# |     192.0.2.1/28 |             | 192.0.2.17/28    |
# | 2001:db8:2::1/64 |             | 2001:db8:3::1/64 |
# +------------------|-------------|------------------+
#                    |             |
# +------------------|-------------|------------------+
# | SW2              |             |                  |
# |                  |             |                  |
# |            $rp21 +             + $rp22            |
# |      192.0.2.2/28             192.0.2.18/28       |
# |   2001:db8:2::2/64          2001:db8:3::2/64      |
# |                                                   |
# |                                                   |
# |                    $rp2 +                         |
# |          203.0.113.1/24 |                         |
# |        2001:db8:4::1/64 |                         |
# +-------------------------|-------------------------+
#                           |
# +-------------------------|------+
# | H2                      |      |
# |                     $h2 +      |
# |     203.0.113.{2-253}/24       |
# |    2001:db8:4::{2-fd}/64       |
# +--------------------------------+

ALL_TESTS="
	ping_ipv4
	ping_ipv6
	custom_hash
"

NUM_NETIFS=8
source lib.sh

h1_create()
{
	simple_if_init $h1 198.51.100.2/24 2001:db8:1::2/64
	ip route add vrf v$h1 default via 198.51.100.1 dev $h1
	ip -6 route add vrf v$h1 default via 2001:db8:1::1 dev $h1
}

h1_destroy()
{
	ip -6 route del vrf v$h1 default
	ip route del vrf v$h1 default
	simple_if_fini $h1 198.51.100.2/24 2001:db8:1::2/64
}

sw1_create()
{
	simple_if_init $rp1 198.51.100.1/24 2001:db8:1::1/64
	__simple_if_init $rp11 v$rp1 192.0.2.1/28 2001:db8:2::1/64
	__simple_if_init $rp12 v$rp1 192.0.2.17/28 2001:db8:3::1/64

	ip route add vrf v$rp1 203.0.113.0/24 \
		nexthop via 192.0.2.2 dev $rp11 \
		nexthop via 192.0.2.18 dev $rp12

	ip -6 route add vrf v$rp1 2001:db8:4::/64 \
		nexthop via 2001:db8:2::2 dev $rp11 \
		nexthop via 2001:db8:3::2 dev $rp12
}

sw1_destroy()
{
	ip -6 route del vrf v$rp1 2001:db8:4::/64

	ip route del vrf v$rp1 203.0.113.0/24

	__simple_if_fini $rp12 192.0.2.17/28 2001:db8:3::1/64
	__simple_if_fini $rp11 192.0.2.1/28 2001:db8:2::1/64
	simple_if_fini $rp1 198.51.100.1/24 2001:db8:1::1/64
}

sw2_create()
{
	simple_if_init $rp2 203.0.113.1/24 2001:db8:4::1/64
	__simple_if_init $rp21 v$rp2 192.0.2.2/28 2001:db8:2::2/64
	__simple_if_init $rp22 v$rp2 192.0.2.18/28 2001:db8:3::2/64

	ip route add vrf v$rp2 198.51.100.0/24 \
		nexthop via 192.0.2.1 dev $rp21 \
		nexthop via 192.0.2.17 dev $rp22

	ip -6 route add vrf v$rp2 2001:db8:1::/64 \
		nexthop via 2001:db8:2::1 dev $rp21 \
		nexthop via 2001:db8:3::1 dev $rp22
}

sw2_destroy()
{
	ip -6 route del vrf v$rp2 2001:db8:1::/64

	ip route del vrf v$rp2 198.51.100.0/24

	__simple_if_fini $rp22 192.0.2.18/28 2001:db8:3::2/64
	__simple_if_fini $rp21 192.0.2.2/28 2001:db8:2::2/64
	simple_if_fini $rp2 203.0.113.1/24 2001:db8:4::1/64
}

h2_create()
{
	simple_if_init $h2 203.0.113.2/24 2001:db8:4::2/64
	ip route add vrf v$h2 default via 203.0.113.1 dev $h2
	ip -6 route add vrf v$h2 default via 2001:db8:4::1 dev $h2
}

h2_destroy()
{
	ip -6 route del vrf v$h2 default
	ip route del vrf v$h2 default
	simple_if_fini $h2 203.0.113.2/24 2001:db8:4::2/64
}

setup_prepare()
{
	h1=${NETIFS[p1]}

	rp1=${NETIFS[p2]}

	rp11=${NETIFS[p3]}
	rp21=${NETIFS[p4]}

	rp12=${NETIFS[p5]}
	rp22=${NETIFS[p6]}

	rp2=${NETIFS[p7]}

	h2=${NETIFS[p8]}

	vrf_prepare
	h1_create
	sw1_create
	sw2_create
	h2_create

	forwarding_enable
}

cleanup()
{
	pre_cleanup

	forwarding_restore

	h2_destroy
	sw2_destroy
	sw1_destroy
	h1_destroy
	vrf_cleanup
}

ping_ipv4()
{
	ping_test $h1 203.0.113.2
}

ping_ipv6()
{
	ping6_test $h1 2001:db8:4::2
}

send_src_ipv4()
{
	$MZ $h1 -q -p 64 -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \
		-d 1msec -c 50 -t udp "sp=20000,dp=30000"
}

send_dst_ipv4()
{
	$MZ $h1 -q -p 64 -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \
		-d 1msec -c 50 -t udp "sp=20000,dp=30000"
}

send_src_udp4()
{
	$MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \
		-d 1msec -t udp "sp=0-32768,dp=30000"
}

send_dst_udp4()
{
	$MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \
		-d 1msec -t udp "sp=20000,dp=0-32768"
}

send_src_ipv6()
{
	$MZ -6 $h1 -q -p 64 -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:4::2 \
		-d 1msec -c 50 -t udp "sp=20000,dp=30000"
}

send_dst_ipv6()
{
	$MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B "2001:db8:4::2-2001:db8:4::fd" \
		-d 1msec -c 50 -t udp "sp=20000,dp=30000"
}

send_flowlabel()
{
	# Generate 16384 echo requests, each with a random flow label.
	for _ in $(seq 1 16384); do
		ip vrf exec v$h1 \
			$PING6 2001:db8:4::2 -F 0 -c 1 -q >/dev/null 2>&1
	done
}

send_src_udp6()
{
	$MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:4::2 \
		-d 1msec -t udp "sp=0-32768,dp=30000"
}

send_dst_udp6()
{
	$MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:4::2 \
		-d 1msec -t udp "sp=20000,dp=0-32768"
}

custom_hash_test()
{
	local field="$1"; shift
	local balanced="$1"; shift
	local send_flows="$@"

	RET=0

	local t0_rp11=$(link_stats_tx_packets_get $rp11)
	local t0_rp12=$(link_stats_tx_packets_get $rp12)

	$send_flows

	local t1_rp11=$(link_stats_tx_packets_get $rp11)
	local t1_rp12=$(link_stats_tx_packets_get $rp12)

	local d_rp11=$((t1_rp11 - t0_rp11))
	local d_rp12=$((t1_rp12 - t0_rp12))

	local diff=$((d_rp12 - d_rp11))
	local sum=$((d_rp11 + d_rp12))

	local pct=$(echo "$diff / $sum * 100" | bc -l)
	local is_balanced=$(echo "-20 <= $pct && $pct <= 20" | bc)

	[[ ( $is_balanced -eq 1 && $balanced == "balanced" ) ||
	   ( $is_balanced -eq 0 && $balanced == "unbalanced" ) ]]
	check_err $? "Expected traffic to be $balanced, but it is not"

	log_test "Multipath hash field: $field ($balanced)"
	log_info "Packets sent on path1 / path2: $d_rp11 / $d_rp12"
}

custom_hash_v4()
{
	log_info "Running IPv4 custom multipath hash tests"

	sysctl_set net.ipv4.fib_multipath_hash_policy 3

	# Prevent the neighbour table from overflowing, as different neighbour
	# entries will be created on $rp2 when using different destination IPs.
	sysctl_set net.ipv4.neigh.default.gc_thresh1 1024
	sysctl_set net.ipv4.neigh.default.gc_thresh2 1024
	sysctl_set net.ipv4.neigh.default.gc_thresh3 1024

	sysctl_set net.ipv4.fib_multipath_hash_fields 0x0001
	custom_hash_test "Source IP" "balanced" send_src_ipv4
	custom_hash_test "Source IP" "unbalanced" send_dst_ipv4

	sysctl_set net.ipv4.fib_multipath_hash_fields 0x0002
	custom_hash_test "Destination IP" "balanced" send_dst_ipv4
	custom_hash_test "Destination IP" "unbalanced" send_src_ipv4

	sysctl_set net.ipv4.fib_multipath_hash_fields 0x0010
	custom_hash_test "Source port" "balanced" send_src_udp4
	custom_hash_test "Source port" "unbalanced" send_dst_udp4

	sysctl_set net.ipv4.fib_multipath_hash_fields 0x0020
	custom_hash_test "Destination port" "balanced" send_dst_udp4
	custom_hash_test "Destination port" "unbalanced" send_src_udp4

	sysctl_restore net.ipv4.neigh.default.gc_thresh3
	sysctl_restore net.ipv4.neigh.default.gc_thresh2
	sysctl_restore net.ipv4.neigh.default.gc_thresh1

	sysctl_restore net.ipv4.fib_multipath_hash_policy
}

custom_hash_v6()
{
	log_info "Running IPv6 custom multipath hash tests"

	sysctl_set net.ipv6.fib_multipath_hash_policy 3

	# Prevent the neighbour table from overflowing, as different neighbour
	# entries will be created on $rp2 when using different destination IPs.
	sysctl_set net.ipv6.neigh.default.gc_thresh1 1024
	sysctl_set net.ipv6.neigh.default.gc_thresh2 1024
	sysctl_set net.ipv6.neigh.default.gc_thresh3 1024

	sysctl_set net.ipv6.fib_multipath_hash_fields 0x0001
	custom_hash_test "Source IP" "balanced" send_src_ipv6
	custom_hash_test "Source IP" "unbalanced" send_dst_ipv6

	sysctl_set net.ipv6.fib_multipath_hash_fields 0x0002
	custom_hash_test "Destination IP" "balanced" send_dst_ipv6
	custom_hash_test "Destination IP" "unbalanced" send_src_ipv6

	sysctl_set net.ipv6.fib_multipath_hash_fields 0x0008
	custom_hash_test "Flowlabel" "balanced" send_flowlabel
	custom_hash_test "Flowlabel" "unbalanced" send_src_ipv6

	sysctl_set net.ipv6.fib_multipath_hash_fields 0x0010
	custom_hash_test "Source port" "balanced" send_src_udp6
	custom_hash_test "Source port" "unbalanced" send_dst_udp6

	sysctl_set net.ipv6.fib_multipath_hash_fields 0x0020
	custom_hash_test "Destination port" "balanced" send_dst_udp6
	custom_hash_test "Destination port" "unbalanced" send_src_udp6

	sysctl_restore net.ipv6.neigh.default.gc_thresh3
	sysctl_restore net.ipv6.neigh.default.gc_thresh2
	sysctl_restore net.ipv6.neigh.default.gc_thresh1

	sysctl_restore net.ipv6.fib_multipath_hash_policy
}

custom_hash()
{
	# Test that when the hash policy is set to custom, traffic is
	# distributed only according to the fields set in the
	# fib_multipath_hash_fields sysctl.
	#
	# Each time set a different field and make sure traffic is only
	# distributed when the field is changed in the packet stream.
	custom_hash_v4
	custom_hash_v6
}

trap cleanup EXIT

setup_prepare
setup_wait
tests_run

exit $EXIT_STATUS