Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

siphash: implement HalfSipHash1-3 for hash tables

HalfSipHash, or hsiphash, is a shortened version of SipHash, which
generates 32-bit outputs using a weaker 64-bit key. It has *much* lower
security margins, and shouldn't be used for anything too sensitive, but
it could be used as a hashtable key function replacement, if the output
is never exposed, and if the security requirement is not too high.

The goal is to make this something that performance-critical jhash users
would be willing to use.

On 64-bit machines, HalfSipHash1-3 is actually slower than SipHash1-3, so on
those systems we alias hsiphash() to SipHash1-3.

64-bit x86_64:
[ 0.509409] test_siphash: SipHash2-4 cycles: 4049181
[ 0.510650] test_siphash: SipHash1-3 cycles: 2512884
[ 0.512205] test_siphash: HalfSipHash1-3 cycles: 3429920
[ 0.512904] test_siphash: JenkinsHash cycles: 978267
So, we map hsiphash() -> SipHash1-3

32-bit x86:
[ 0.509868] test_siphash: SipHash2-4 cycles: 14812892
[ 0.513601] test_siphash: SipHash1-3 cycles: 9510710
[ 0.515263] test_siphash: HalfSipHash1-3 cycles: 3856157
[ 0.515952] test_siphash: JenkinsHash cycles: 1148567
So, we map hsiphash() -> HalfSipHash1-3

hsiphash() is roughly 3 times slower than jhash(), but comes with a
considerable security improvement.
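To make the round structure concrete, here is a userspace sketch of the
32-bit HalfSipHash1-3 path this patch adds: a portable reimplementation for
illustration, not the kernel code itself (the kernel version uses rol32,
le32_to_cpup, and kernel types; the hsiphash_key_t here is a local stand-in
for the kernel typedef). One HSIPROUND is run per 32-bit word of input
(the "1") and three at finalization (the "3").

```c
#include <stddef.h>
#include <stdint.h>

typedef struct { uint32_t key[2]; } hsiphash_key_t; /* stand-in for kernel type */

static uint32_t rol32(uint32_t v, int s) { return (v << s) | (v >> (32 - s)); }

/* One HalfSipHash round; the "1" and "3" in HalfSipHash1-3 count these. */
#define HSIPROUND do { \
	v0 += v1; v1 = rol32(v1, 5); v1 ^= v0; v0 = rol32(v0, 16); \
	v2 += v3; v3 = rol32(v3, 8); v3 ^= v2; \
	v0 += v3; v3 = rol32(v3, 7); v3 ^= v0; \
	v2 += v1; v1 = rol32(v1, 13); v1 ^= v2; v2 = rol32(v2, 16); \
} while (0)

/* Generic buffer variant: 1 round per 4-byte word, 3 finalization rounds. */
static uint32_t hsiphash(const void *data, size_t len, const hsiphash_key_t *key)
{
	const uint8_t *p = data;
	size_t left = len & 3, i;
	uint32_t v0 = 0, v1 = 0, v2 = 0x6c796765U, v3 = 0x74656462U;
	uint32_t b = (uint32_t)len << 24, m;

	v3 ^= key->key[1]; v2 ^= key->key[0];
	v1 ^= key->key[1]; v0 ^= key->key[0];
	for (i = 0; i + 4 <= len; i += 4) {
		/* Little-endian load, independent of host byte order. */
		m = (uint32_t)p[i] | (uint32_t)p[i + 1] << 8 |
		    (uint32_t)p[i + 2] << 16 | (uint32_t)p[i + 3] << 24;
		v3 ^= m; HSIPROUND; v0 ^= m;
	}
	switch (left) { /* fold trailing bytes into b, as in the patch */
	case 3: b |= (uint32_t)p[i + 2] << 16; /* fall through */
	case 2: b |= (uint32_t)p[i + 1] << 8;  /* fall through */
	case 1: b |= p[i];
	}
	v3 ^= b; HSIPROUND; v0 ^= b;
	v2 ^= 0xff; HSIPROUND; HSIPROUND; HSIPROUND;
	return v1 ^ v3;
}

/* Fast path for a single u32, mirroring the patch's hsiphash_1u32(). */
static uint32_t hsiphash_1u32(uint32_t first, const hsiphash_key_t *key)
{
	uint32_t v0 = 0, v1 = 0, v2 = 0x6c796765U, v3 = 0x74656462U;
	uint32_t b = 4U << 24;

	v3 ^= key->key[1]; v2 ^= key->key[0];
	v1 ^= key->key[1]; v0 ^= key->key[0];
	v3 ^= first; HSIPROUND; v0 ^= first;
	v3 ^= b; HSIPROUND; v0 ^= b;
	v2 ^= 0xff; HSIPROUND; HSIPROUND; HSIPROUND;
	return v1 ^ v3;
}
```

As in the patch's self-test, the generic buffer path and the fixed-width fast
path must agree on a 4-byte input; that consistency is the easiest sanity
check for any reimplementation.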

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Authored by Jason A. Donenfeld, committed by David S. Miller
1ae2324f 2c956a60

+546 -5
+75
Documentation/siphash.txt
···
 
 Read the SipHash paper if you're interested in learning more:
 https://131002.net/siphash/siphash.pdf
+
+
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~
+
+HalfSipHash - SipHash's insecure younger cousin
+-----------------------------------------------
+Written by Jason A. Donenfeld <jason@zx2c4.com>
+
+On the off-chance that SipHash is not fast enough for your needs, you might be
+able to justify using HalfSipHash, a terrifying but potentially useful
+possibility. HalfSipHash cuts SipHash's rounds down from "2-4" to "1-3" and,
+even scarier, uses an easily brute-forcable 64-bit key (with a 32-bit output)
+instead of SipHash's 128-bit key. However, this may appeal to some
+high-performance `jhash` users.
+
+Danger!
+
+Do not ever use HalfSipHash except for as a hashtable key function, and only
+then when you can be absolutely certain that the outputs will never be
+transmitted out of the kernel. This is only remotely useful over `jhash` as a
+means of mitigating hashtable flooding denial of service attacks.
+
+1. Generating a key
+
+Keys should always be generated from a cryptographically secure source of
+random numbers, either using get_random_bytes or get_random_once:
+
+hsiphash_key_t key;
+get_random_bytes(&key, sizeof(key));
+
+If you're not deriving your key from here, you're doing it wrong.
+
+2. Using the functions
+
+There are two variants of the function, one that takes a list of integers, and
+one that takes a buffer:
+
+u32 hsiphash(const void *data, size_t len, const hsiphash_key_t *key);
+
+And:
+
+u32 hsiphash_1u32(u32, const hsiphash_key_t *key);
+u32 hsiphash_2u32(u32, u32, const hsiphash_key_t *key);
+u32 hsiphash_3u32(u32, u32, u32, const hsiphash_key_t *key);
+u32 hsiphash_4u32(u32, u32, u32, u32, const hsiphash_key_t *key);
+
+If you pass the generic hsiphash function something of a constant length, it
+will constant fold at compile-time and automatically choose one of the
+optimized functions.
+
+3. Hashtable key function usage:
+
+struct some_hashtable {
+	DECLARE_HASHTABLE(hashtable, 8);
+	hsiphash_key_t key;
+};
+
+void init_hashtable(struct some_hashtable *table)
+{
+	get_random_bytes(&table->key, sizeof(table->key));
+}
+
+static inline hlist_head *some_hashtable_bucket(struct some_hashtable *table, struct interesting_input *input)
+{
+	return &table->hashtable[hsiphash(input, sizeof(*input), &table->key) & (HASH_SIZE(table->hashtable) - 1)];
+}
+
+You may then iterate like usual over the returned hash bucket.
+
+4. Performance
+
+HalfSipHash is roughly 3 times slower than JenkinsHash. For many replacements,
+this will not be a problem, as the hashtable lookup isn't the bottleneck. And
+in general, this is probably a good sacrifice to make for the security and DoS
+resistance of HalfSipHash.
+56 -1
include/linux/siphash.h
···
  * SipHash: a fast short-input PRF
  * https://131002.net/siphash/
  *
- * This implementation is specifically for SipHash2-4.
+ * This implementation is specifically for SipHash2-4 for a secure PRF
+ * and HalfSipHash1-3/SipHash1-3 for an insecure PRF only suitable for
+ * hashtables.
  */
 
 #ifndef _LINUX_SIPHASH_H
···
 	return __siphash_unaligned(data, len, key);
 #endif
 	return ___siphash_aligned(data, len, key);
+}
+
+#define HSIPHASH_ALIGNMENT __alignof__(unsigned long)
+typedef struct {
+	unsigned long key[2];
+} hsiphash_key_t;
+
+u32 __hsiphash_aligned(const void *data, size_t len,
+		       const hsiphash_key_t *key);
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+u32 __hsiphash_unaligned(const void *data, size_t len,
+			 const hsiphash_key_t *key);
+#endif
+
+u32 hsiphash_1u32(const u32 a, const hsiphash_key_t *key);
+u32 hsiphash_2u32(const u32 a, const u32 b, const hsiphash_key_t *key);
+u32 hsiphash_3u32(const u32 a, const u32 b, const u32 c,
+		  const hsiphash_key_t *key);
+u32 hsiphash_4u32(const u32 a, const u32 b, const u32 c, const u32 d,
+		  const hsiphash_key_t *key);
+
+static inline u32 ___hsiphash_aligned(const __le32 *data, size_t len,
+				      const hsiphash_key_t *key)
+{
+	if (__builtin_constant_p(len) && len == 4)
+		return hsiphash_1u32(le32_to_cpu(data[0]), key);
+	if (__builtin_constant_p(len) && len == 8)
+		return hsiphash_2u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]),
+				     key);
+	if (__builtin_constant_p(len) && len == 12)
+		return hsiphash_3u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]),
+				     le32_to_cpu(data[2]), key);
+	if (__builtin_constant_p(len) && len == 16)
+		return hsiphash_4u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]),
+				     le32_to_cpu(data[2]), le32_to_cpu(data[3]),
+				     key);
+	return __hsiphash_aligned(data, len, key);
+}
+
+/**
+ * hsiphash - compute 32-bit hsiphash PRF value
+ * @data: buffer to hash
+ * @size: size of @data
+ * @key: the hsiphash key
+ */
+static inline u32 hsiphash(const void *data, size_t len,
+			   const hsiphash_key_t *key)
+{
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+	if (!IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT))
+		return __hsiphash_unaligned(data, len, key);
+#endif
+	return ___hsiphash_aligned(data, len, key);
 }
 
 #endif /* _LINUX_SIPHASH_H */
+320 -1
lib/siphash.c
···
  * SipHash: a fast short-input PRF
  * https://131002.net/siphash/
  *
- * This implementation is specifically for SipHash2-4.
+ * This implementation is specifically for SipHash2-4 for a secure PRF
+ * and HalfSipHash1-3/SipHash1-3 for an insecure PRF only suitable for
+ * hashtables.
  */
 
 #include <linux/siphash.h>
···
 	POSTAMBLE
 }
 EXPORT_SYMBOL(siphash_3u32);
+
+#if BITS_PER_LONG == 64
+/* Note that on 64-bit, we make HalfSipHash1-3 actually be SipHash1-3, for
+ * performance reasons. On 32-bit, below, we actually implement HalfSipHash1-3.
+ */
+
+#define HSIPROUND SIPROUND
+#define HPREAMBLE(len) PREAMBLE(len)
+#define HPOSTAMBLE \
+	v3 ^= b; \
+	HSIPROUND; \
+	v0 ^= b; \
+	v2 ^= 0xff; \
+	HSIPROUND; \
+	HSIPROUND; \
+	HSIPROUND; \
+	return (v0 ^ v1) ^ (v2 ^ v3);
+
+u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)
+{
+	const u8 *end = data + len - (len % sizeof(u64));
+	const u8 left = len & (sizeof(u64) - 1);
+	u64 m;
+	HPREAMBLE(len)
+	for (; data != end; data += sizeof(u64)) {
+		m = le64_to_cpup(data);
+		v3 ^= m;
+		HSIPROUND;
+		v0 ^= m;
+	}
+#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
+	if (left)
+		b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
+						  bytemask_from_count(left)));
+#else
+	switch (left) {
+	case 7: b |= ((u64)end[6]) << 48;
+	case 6: b |= ((u64)end[5]) << 40;
+	case 5: b |= ((u64)end[4]) << 32;
+	case 4: b |= le32_to_cpup(data); break;
+	case 3: b |= ((u64)end[2]) << 16;
+	case 2: b |= le16_to_cpup(data); break;
+	case 1: b |= end[0];
+	}
+#endif
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(__hsiphash_aligned);
+
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+u32 __hsiphash_unaligned(const void *data, size_t len,
+			 const hsiphash_key_t *key)
+{
+	const u8 *end = data + len - (len % sizeof(u64));
+	const u8 left = len & (sizeof(u64) - 1);
+	u64 m;
+	HPREAMBLE(len)
+	for (; data != end; data += sizeof(u64)) {
+		m = get_unaligned_le64(data);
+		v3 ^= m;
+		HSIPROUND;
+		v0 ^= m;
+	}
+#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
+	if (left)
+		b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
+						  bytemask_from_count(left)));
+#else
+	switch (left) {
+	case 7: b |= ((u64)end[6]) << 48;
+	case 6: b |= ((u64)end[5]) << 40;
+	case 5: b |= ((u64)end[4]) << 32;
+	case 4: b |= get_unaligned_le32(end); break;
+	case 3: b |= ((u64)end[2]) << 16;
+	case 2: b |= get_unaligned_le16(end); break;
+	case 1: b |= end[0];
+	}
+#endif
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(__hsiphash_unaligned);
+#endif
+
+/**
+ * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32
+ * @first: first u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_1u32(const u32 first, const hsiphash_key_t *key)
+{
+	HPREAMBLE(4)
+	b |= first;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_1u32);
+
+/**
+ * hsiphash_2u32 - compute 32-bit hsiphash PRF value of 2 u32
+ * @first: first u32
+ * @second: second u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_2u32(const u32 first, const u32 second, const hsiphash_key_t *key)
+{
+	u64 combined = (u64)second << 32 | first;
+	HPREAMBLE(8)
+	v3 ^= combined;
+	HSIPROUND;
+	v0 ^= combined;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_2u32);
+
+/**
+ * hsiphash_3u32 - compute 32-bit hsiphash PRF value of 3 u32
+ * @first: first u32
+ * @second: second u32
+ * @third: third u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_3u32(const u32 first, const u32 second, const u32 third,
+		  const hsiphash_key_t *key)
+{
+	u64 combined = (u64)second << 32 | first;
+	HPREAMBLE(12)
+	v3 ^= combined;
+	HSIPROUND;
+	v0 ^= combined;
+	b |= third;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_3u32);
+
+/**
+ * hsiphash_4u32 - compute 32-bit hsiphash PRF value of 4 u32
+ * @first: first u32
+ * @second: second u32
+ * @third: third u32
+ * @forth: fourth u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third,
+		  const u32 forth, const hsiphash_key_t *key)
+{
+	u64 combined = (u64)second << 32 | first;
+	HPREAMBLE(16)
+	v3 ^= combined;
+	HSIPROUND;
+	v0 ^= combined;
+	combined = (u64)forth << 32 | third;
+	v3 ^= combined;
+	HSIPROUND;
+	v0 ^= combined;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_4u32);
+#else
+#define HSIPROUND \
+	do { \
+	v0 += v1; v1 = rol32(v1, 5); v1 ^= v0; v0 = rol32(v0, 16); \
+	v2 += v3; v3 = rol32(v3, 8); v3 ^= v2; \
+	v0 += v3; v3 = rol32(v3, 7); v3 ^= v0; \
+	v2 += v1; v1 = rol32(v1, 13); v1 ^= v2; v2 = rol32(v2, 16); \
+	} while (0)
+
+#define HPREAMBLE(len) \
+	u32 v0 = 0; \
+	u32 v1 = 0; \
+	u32 v2 = 0x6c796765U; \
+	u32 v3 = 0x74656462U; \
+	u32 b = ((u32)(len)) << 24; \
+	v3 ^= key->key[1]; \
+	v2 ^= key->key[0]; \
+	v1 ^= key->key[1]; \
+	v0 ^= key->key[0];
+
+#define HPOSTAMBLE \
+	v3 ^= b; \
+	HSIPROUND; \
+	v0 ^= b; \
+	v2 ^= 0xff; \
+	HSIPROUND; \
+	HSIPROUND; \
+	HSIPROUND; \
+	return v1 ^ v3;
+
+u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)
+{
+	const u8 *end = data + len - (len % sizeof(u32));
+	const u8 left = len & (sizeof(u32) - 1);
+	u32 m;
+	HPREAMBLE(len)
+	for (; data != end; data += sizeof(u32)) {
+		m = le32_to_cpup(data);
+		v3 ^= m;
+		HSIPROUND;
+		v0 ^= m;
+	}
+	switch (left) {
+	case 3: b |= ((u32)end[2]) << 16;
+	case 2: b |= le16_to_cpup(data); break;
+	case 1: b |= end[0];
+	}
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(__hsiphash_aligned);
+
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+u32 __hsiphash_unaligned(const void *data, size_t len,
+			 const hsiphash_key_t *key)
+{
+	const u8 *end = data + len - (len % sizeof(u32));
+	const u8 left = len & (sizeof(u32) - 1);
+	u32 m;
+	HPREAMBLE(len)
+	for (; data != end; data += sizeof(u32)) {
+		m = get_unaligned_le32(data);
+		v3 ^= m;
+		HSIPROUND;
+		v0 ^= m;
+	}
+	switch (left) {
+	case 3: b |= ((u32)end[2]) << 16;
+	case 2: b |= get_unaligned_le16(end); break;
+	case 1: b |= end[0];
+	}
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(__hsiphash_unaligned);
+#endif
+
+/**
+ * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32
+ * @first: first u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_1u32(const u32 first, const hsiphash_key_t *key)
+{
+	HPREAMBLE(4)
+	v3 ^= first;
+	HSIPROUND;
+	v0 ^= first;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_1u32);
+
+/**
+ * hsiphash_2u32 - compute 32-bit hsiphash PRF value of 2 u32
+ * @first: first u32
+ * @second: second u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_2u32(const u32 first, const u32 second, const hsiphash_key_t *key)
+{
+	HPREAMBLE(8)
+	v3 ^= first;
+	HSIPROUND;
+	v0 ^= first;
+	v3 ^= second;
+	HSIPROUND;
+	v0 ^= second;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_2u32);
+
+/**
+ * hsiphash_3u32 - compute 32-bit hsiphash PRF value of 3 u32
+ * @first: first u32
+ * @second: second u32
+ * @third: third u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_3u32(const u32 first, const u32 second, const u32 third,
+		  const hsiphash_key_t *key)
+{
+	HPREAMBLE(12)
+	v3 ^= first;
+	HSIPROUND;
+	v0 ^= first;
+	v3 ^= second;
+	HSIPROUND;
+	v0 ^= second;
+	v3 ^= third;
+	HSIPROUND;
+	v0 ^= third;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_3u32);
+
+/**
+ * hsiphash_4u32 - compute 32-bit hsiphash PRF value of 4 u32
+ * @first: first u32
+ * @second: second u32
+ * @third: third u32
+ * @forth: fourth u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third,
+		  const u32 forth, const hsiphash_key_t *key)
+{
+	HPREAMBLE(16)
+	v3 ^= first;
+	HSIPROUND;
+	v0 ^= first;
+	v3 ^= second;
+	HSIPROUND;
+	v0 ^= second;
+	v3 ^= third;
+	HSIPROUND;
+	v0 ^= third;
+	v3 ^= forth;
+	HSIPROUND;
+	v0 ^= forth;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_4u32);
+#endif
+95 -3
lib/test_siphash.c
···
  * SipHash: a fast short-input PRF
  * https://131002.net/siphash/
  *
- * This implementation is specifically for SipHash2-4.
+ * This implementation is specifically for SipHash2-4 for a secure PRF
+ * and HalfSipHash1-3/SipHash1-3 for an insecure PRF only suitable for
+ * hashtables.
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
···
 #include <linux/errno.h>
 #include <linux/module.h>
 
-/* Test vectors taken from official reference source available at:
- *     https://131002.net/siphash/siphash24.c
+/* Test vectors taken from reference source available at:
+ *     https://github.com/veorq/SipHash
  */
 
 static const siphash_key_t test_key_siphash =
···
 	0x958a324ceb064572ULL
 };
 
+#if BITS_PER_LONG == 64
+static const hsiphash_key_t test_key_hsiphash =
+	{{ 0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL }};
+
+static const u32 test_vectors_hsiphash[64] = {
+	0x050fc4dcU, 0x7d57ca93U, 0x4dc7d44dU,
+	0xe7ddf7fbU, 0x88d38328U, 0x49533b67U,
+	0xc59f22a7U, 0x9bb11140U, 0x8d299a8eU,
+	0x6c063de4U, 0x92ff097fU, 0xf94dc352U,
+	0x57b4d9a2U, 0x1229ffa7U, 0xc0f95d34U,
+	0x2a519956U, 0x7d908b66U, 0x63dbd80cU,
+	0xb473e63eU, 0x8d297d1cU, 0xa6cce040U,
+	0x2b45f844U, 0xa320872eU, 0xdae6c123U,
+	0x67349c8cU, 0x705b0979U, 0xca9913a5U,
+	0x4ade3b35U, 0xef6cd00dU, 0x4ab1e1f4U,
+	0x43c5e663U, 0x8c21d1bcU, 0x16a7b60dU,
+	0x7a8ff9bfU, 0x1f2a753eU, 0xbf186b91U,
+	0xada26206U, 0xa3c33057U, 0xae3a36a1U,
+	0x7b108392U, 0x99e41531U, 0x3f1ad944U,
+	0xc8138825U, 0xc28949a6U, 0xfaf8876bU,
+	0x9f042196U, 0x68b1d623U, 0x8b5114fdU,
+	0xdf074c46U, 0x12cc86b3U, 0x0a52098fU,
+	0x9d292f9aU, 0xa2f41f12U, 0x43a71ed0U,
+	0x73f0bce6U, 0x70a7e980U, 0x243c6d75U,
+	0xfdb71513U, 0xa67d8a08U, 0xb7e8f148U,
+	0xf7a644eeU, 0x0f1837f2U, 0x4b6694e0U,
+	0xb7bbb3a8U
+};
+#else
+static const hsiphash_key_t test_key_hsiphash =
+	{{ 0x03020100U, 0x07060504U }};
+
+static const u32 test_vectors_hsiphash[64] = {
+	0x5814c896U, 0xe7e864caU, 0xbc4b0e30U,
+	0x01539939U, 0x7e059ea6U, 0x88e3d89bU,
+	0xa0080b65U, 0x9d38d9d6U, 0x577999b1U,
+	0xc839caedU, 0xe4fa32cfU, 0x959246eeU,
+	0x6b28096cU, 0x66dd9cd6U, 0x16658a7cU,
+	0xd0257b04U, 0x8b31d501U, 0x2b1cd04bU,
+	0x06712339U, 0x522aca67U, 0x911bb605U,
+	0x90a65f0eU, 0xf826ef7bU, 0x62512debU,
+	0x57150ad7U, 0x5d473507U, 0x1ec47442U,
+	0xab64afd3U, 0x0a4100d0U, 0x6d2ce652U,
+	0x2331b6a3U, 0x08d8791aU, 0xbc6dda8dU,
+	0xe0f6c934U, 0xb0652033U, 0x9b9851ccU,
+	0x7c46fb7fU, 0x732ba8cbU, 0xf142997aU,
+	0xfcc9aa1bU, 0x05327eb2U, 0xe110131cU,
+	0xf9e5e7c0U, 0xa7d708a6U, 0x11795ab1U,
+	0x65671619U, 0x9f5fff91U, 0xd89c5267U,
+	0x007783ebU, 0x95766243U, 0xab639262U,
+	0x9c7e1390U, 0xc368dda6U, 0x38ddc455U,
+	0xfa13d379U, 0x979ea4e8U, 0x53ecd77eU,
+	0x2ee80657U, 0x33dbb66aU, 0xae3f0577U,
+	0x88b4c4ccU, 0x3e7f480bU, 0x74c1ebf8U,
+	0x87178304U
+};
+#endif
+
 static int __init siphash_test_init(void)
 {
 	u8 in[64] __aligned(SIPHASH_ALIGNMENT);
···
 		if (siphash(in_unaligned + 1, i, &test_key_siphash) !=
 		    test_vectors_siphash[i]) {
 			pr_info("siphash self-test unaligned %u: FAIL\n", i + 1);
 			ret = -EINVAL;
 		}
+		if (hsiphash(in, i, &test_key_hsiphash) !=
+		    test_vectors_hsiphash[i]) {
+			pr_info("hsiphash self-test aligned %u: FAIL\n", i + 1);
+			ret = -EINVAL;
+		}
+		if (hsiphash(in_unaligned + 1, i, &test_key_hsiphash) !=
+		    test_vectors_hsiphash[i]) {
+			pr_info("hsiphash self-test unaligned %u: FAIL\n", i + 1);
+			ret = -EINVAL;
+		}
 	}
···
 	if (siphash_4u32(0x03020100U, 0x07060504U,
 			 0x0b0a0908U, 0x0f0e0d0cU, &test_key_siphash) !=
 	    test_vectors_siphash[16]) {
 		pr_info("siphash self-test 4u32: FAIL\n");
 		ret = -EINVAL;
 	}
+	if (hsiphash_1u32(0x03020100U, &test_key_hsiphash) !=
+	    test_vectors_hsiphash[4]) {
+		pr_info("hsiphash self-test 1u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (hsiphash_2u32(0x03020100U, 0x07060504U, &test_key_hsiphash) !=
+	    test_vectors_hsiphash[8]) {
+		pr_info("hsiphash self-test 2u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (hsiphash_3u32(0x03020100U, 0x07060504U,
+			  0x0b0a0908U, &test_key_hsiphash) !=
+	    test_vectors_hsiphash[12]) {
+		pr_info("hsiphash self-test 3u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (hsiphash_4u32(0x03020100U, 0x07060504U,
+			  0x0b0a0908U, 0x0f0e0d0cU, &test_key_hsiphash) !=
+	    test_vectors_hsiphash[16]) {
+		pr_info("hsiphash self-test 4u32: FAIL\n");
+		ret = -EINVAL;
+	}
 	if (!ret)