Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

lib/crypto: blake2b: Roll up BLAKE2b round loop on 32-bit

BLAKE2b has a state of 16 64-bit words. Add the message data in and
there are 32 64-bit words. With the current code where all the rounds
are unrolled to enable constant-folding of the blake2b_sigma values,
this results in a very large code size on 32-bit kernels, including a
recurring issue where gcc uses a large amount of stack.

There's just not much benefit to this unrolling when the code is already
so large. Let's roll up the rounds when !CONFIG_64BIT.

To avoid having to duplicate the code, just write the code once using a
loop, and conditionally use 'unrolled_full' from <linux/unroll.h>.

Then, fold the now-unneeded ROUND() macro into the loop. Finally, also
remove the now-unneeded override of the stack frame size warning.

Code size improvements for blake2b_compress_generic():

                 Size before (bytes)    Size after (bytes)
                 -------------------    ------------------
   i386, gcc           27584                  3632
   i386, clang         18208                  3248
   arm32, gcc          19912                  2860
   arm32, clang        21336                  3344

Running the BLAKE2b benchmark on a !CONFIG_64BIT kernel on an x86_64
processor shows a 16384-byte throughput change of 351 MB/s => 340 MB/s
(gcc) or 442 MB/s => 375 MB/s (clang). So clearly not much of a
slowdown either. Note also that this microbenchmark effectively
disregards cache usage, which is important in practice and is far
better with the smaller code.

Note: If we rolled up the loop on x86_64 too, the change would be
7024 bytes => 1584 bytes and 1960 MB/s => 1396 MB/s (gcc), or
6848 bytes => 1696 bytes and 1920 MB/s => 1263 MB/s (clang).
Maybe still worth it, though not quite as clearly beneficial.

Fixes: 91d689337fe8 ("crypto: blake2b - add blake2b generic implementation")
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20251205050330.89704-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

 lib/crypto/Makefile  |  1 -
 lib/crypto/blake2b.c | 44 ++++++++++++++++++++------------------------
 2 files changed, 20 insertions(+), 25 deletions(-)

diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile
--- a/lib/crypto/Makefile
+++ b/lib/crypto/Makefile
@@ -33,7 +33,6 @@
 
 obj-$(CONFIG_CRYPTO_LIB_BLAKE2B) += libblake2b.o
 libblake2b-y := blake2b.o
-CFLAGS_blake2b.o := -Wframe-larger-than=4096 # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105930
 ifeq ($(CONFIG_CRYPTO_LIB_BLAKE2B_ARCH),y)
 CFLAGS_blake2b.o += -I$(src)/$(SRCARCH)
 libblake2b-$(CONFIG_ARM) += arm/blake2b-neon-core.o

diff --git a/lib/crypto/blake2b.c b/lib/crypto/blake2b.c
--- a/lib/crypto/blake2b.c
+++ b/lib/crypto/blake2b.c
@@ -14,6 +14,7 @@
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/string.h>
+#include <linux/unroll.h>
 #include <linux/types.h>
 
 static const u8 blake2b_sigma[12][16] = {
@@ -74,31 +73,26 @@
 		b = ror64(b ^ c, 63); \
 	} while (0)
 
-#define ROUND(r) do { \
-	G(r, 0, v[0], v[ 4], v[ 8], v[12]); \
-	G(r, 1, v[1], v[ 5], v[ 9], v[13]); \
-	G(r, 2, v[2], v[ 6], v[10], v[14]); \
-	G(r, 3, v[3], v[ 7], v[11], v[15]); \
-	G(r, 4, v[0], v[ 5], v[10], v[15]); \
-	G(r, 5, v[1], v[ 6], v[11], v[12]); \
-	G(r, 6, v[2], v[ 7], v[ 8], v[13]); \
-	G(r, 7, v[3], v[ 4], v[ 9], v[14]); \
-} while (0)
-	ROUND(0);
-	ROUND(1);
-	ROUND(2);
-	ROUND(3);
-	ROUND(4);
-	ROUND(5);
-	ROUND(6);
-	ROUND(7);
-	ROUND(8);
-	ROUND(9);
-	ROUND(10);
-	ROUND(11);
-
+#ifdef CONFIG_64BIT
+	/*
+	 * Unroll the rounds loop to enable constant-folding of the
+	 * blake2b_sigma values.  Seems worthwhile on 64-bit kernels.
+	 * Not worthwhile on 32-bit kernels because the code size is
+	 * already so large there due to BLAKE2b using 64-bit words.
+	 */
+	unrolled_full
+#endif
+	for (int r = 0; r < 12; r++) {
+		G(r, 0, v[0], v[4], v[8], v[12]);
+		G(r, 1, v[1], v[5], v[9], v[13]);
+		G(r, 2, v[2], v[6], v[10], v[14]);
+		G(r, 3, v[3], v[7], v[11], v[15]);
+		G(r, 4, v[0], v[5], v[10], v[15]);
+		G(r, 5, v[1], v[6], v[11], v[12]);
+		G(r, 6, v[2], v[7], v[8], v[13]);
+		G(r, 7, v[3], v[4], v[9], v[14]);
+	}
 #undef G
-#undef ROUND
 
 	for (i = 0; i < 8; ++i)
 		ctx->h[i] ^= v[i] ^ v[i + 8];