Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

crypto/arm64: sm4-ce-gcm - Avoid pointless yield of the NEON unit

Kernel mode NEON sections are now preemptible on arm64, and so there is
no need to yield the NEON unit when calling APIs that may sleep.

Also, move the calls to kernel_neon_end() to the same scope as
kernel_neon_begin(). This is needed for a subsequent change where a
stack buffer is allocated transparently and passed to
kernel_neon_begin().

While at it, simplify the logic.

Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

+6 -19
arch/arm64/crypto/sm4-ce-gcm-glue.c
@@ -154,36 +154,23 @@
 	if (req->assoclen)
 		gcm_calculate_auth_mac(req, ghash);
 
-	while (walk->nbytes) {
+	do {
 		unsigned int tail = walk->nbytes % SM4_BLOCK_SIZE;
 		const u8 *src = walk->src.virt.addr;
 		u8 *dst = walk->dst.virt.addr;
+		const u8 *l = NULL;
 
 		if (walk->nbytes == walk->total) {
-			sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
-					       walk->nbytes, ghash,
-					       ctx->ghash_table,
-					       (const u8 *)&lengths);
-
-			kernel_neon_end();
-
-			return skcipher_walk_done(walk, 0);
+			l = (const u8 *)&lengths;
+			tail = 0;
 		}
 
 		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
 				       walk->nbytes - tail, ghash,
-				       ctx->ghash_table, NULL);
-
-		kernel_neon_end();
+				       ctx->ghash_table, l);
 
 		err = skcipher_walk_done(walk, tail);
-
-		kernel_neon_begin();
-	}
-
-	sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, NULL, NULL, iv,
-			       walk->nbytes, ghash, ctx->ghash_table,
-			       (const u8 *)&lengths);
+	} while (walk->nbytes);
 
 	kernel_neon_end();