Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

crypto: ahash - remove support for nonzero alignmask

Currently, the ahash API checks the alignment of all key and result
buffers against the algorithm's declared alignmask, and for any
unaligned buffers it falls back to manually aligned temporary buffers.

This is virtually useless, however. First, since it does not apply to
the message, its effect is much more limited than that of the
"skcipher" alignmask. Second, the key and result buffers are
given as virtual addresses and cannot (in general) be DMA'ed into, so
drivers end up having to copy to/from them in software anyway. As a
result it's easy to use memcpy() or the unaligned access helpers.

The crypto_hash_walk_*() helper functions do use the alignmask to align
the message. But with one exception, those are only used for shash
algorithms being exposed via the ahash API, not for native ahashes, and
aligning the message is not required in this case, especially now that
alignmask support has been removed from shash. The exception is the
n2_core driver, which doesn't set an alignmask.

In any case, no ahash algorithms actually set a nonzero alignmask
anymore. Therefore, remove support for it from ahash. The benefit is
that all the code to handle "misaligned" buffers in the ahash API goes
away, reducing the overhead of the ahash API.

This follows the same change that was made to shash.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Authored by Eric Biggers
Committed by Herbert Xu
c626910f 54eea8e2

+28 -132
+1 -3
Documentation/crypto/devel-algos.rst
···
 Some of the drivers will want to use the Generic ScatterWalk in case the
 implementation needs to be fed separate chunks of the scatterlist which
-contains the input data. The buffer containing the resulting hash will
-always be properly aligned to .cra_alignmask so there is no need to
-worry about this.
+contains the input data.
+8 -109
crypto/ahash.c
···
 static int hash_walk_next(struct crypto_hash_walk *walk)
 {
-	unsigned int alignmask = walk->alignmask;
 	unsigned int offset = walk->offset;
 	unsigned int nbytes = min(walk->entrylen,
 				  ((unsigned int)(PAGE_SIZE)) - offset);
 
 	walk->data = kmap_local_page(walk->pg);
 	walk->data += offset;
-
-	if (offset & alignmask) {
-		unsigned int unaligned = alignmask + 1 - (offset & alignmask);
-
-		if (nbytes > unaligned)
-			nbytes = unaligned;
-	}
-
 	walk->entrylen -= nbytes;
 	return nbytes;
 }
···
 int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
 {
-	unsigned int alignmask = walk->alignmask;
-
 	walk->data -= walk->offset;
-
-	if (walk->entrylen && (walk->offset & alignmask) && !err) {
-		unsigned int nbytes;
-
-		walk->offset = ALIGN(walk->offset, alignmask + 1);
-		nbytes = min(walk->entrylen,
-			     (unsigned int)(PAGE_SIZE - walk->offset));
-		if (nbytes) {
-			walk->entrylen -= nbytes;
-			walk->data += walk->offset;
-			return nbytes;
-		}
-	}
 
 	kunmap_local(walk->data);
 	crypto_yield(walk->flags);
···
 		return 0;
 	}
 
-	walk->alignmask = crypto_ahash_alignmask(crypto_ahash_reqtfm(req));
 	walk->sg = req->src;
 	walk->flags = req->base.flags;
 
 	return hash_walk_new_entry(walk);
 }
 EXPORT_SYMBOL_GPL(crypto_hash_walk_first);
-
-static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key,
-				  unsigned int keylen)
-{
-	unsigned long alignmask = crypto_ahash_alignmask(tfm);
-	int ret;
-	u8 *buffer, *alignbuffer;
-	unsigned long absize;
-
-	absize = keylen + alignmask;
-	buffer = kmalloc(absize, GFP_KERNEL);
-	if (!buffer)
-		return -ENOMEM;
-
-	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
-	memcpy(alignbuffer, key, keylen);
-	ret = tfm->setkey(tfm, alignbuffer, keylen);
-	kfree_sensitive(buffer);
-	return ret;
-}
 
 static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
 			  unsigned int keylen)
···
 int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
 			unsigned int keylen)
 {
-	unsigned long alignmask = crypto_ahash_alignmask(tfm);
-	int err;
-
-	if ((unsigned long)key & alignmask)
-		err = ahash_setkey_unaligned(tfm, key, keylen);
-	else
-		err = tfm->setkey(tfm, key, keylen);
+	int err = tfm->setkey(tfm, key, keylen);
 
 	if (unlikely(err)) {
 		ahash_set_needkey(tfm);
···
 			  bool has_state)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	unsigned long alignmask = crypto_ahash_alignmask(tfm);
 	unsigned int ds = crypto_ahash_digestsize(tfm);
 	struct ahash_request *subreq;
 	unsigned int subreq_size;
···
 	reqsize = ALIGN(reqsize, crypto_tfm_ctx_alignment());
 	subreq_size += reqsize;
 	subreq_size += ds;
-	subreq_size += alignmask & ~(crypto_tfm_ctx_alignment() - 1);
 
 	flags = ahash_request_flags(req);
 	gfp = (flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? GFP_KERNEL : GFP_ATOMIC;
···
 	ahash_request_set_callback(subreq, flags, cplt, req);
 
 	result = (u8 *)(subreq + 1) + reqsize;
-	result = PTR_ALIGN(result, alignmask + 1);
 
 	ahash_request_set_crypt(subreq, req->src, result, req->nbytes);
···
 	kfree_sensitive(subreq);
 }
 
-static void ahash_op_unaligned_done(void *data, int err)
-{
-	struct ahash_request *areq = data;
-
-	if (err == -EINPROGRESS)
-		goto out;
-
-	/* First copy req->result into req->priv.result */
-	ahash_restore_req(areq, err);
-
-out:
-	/* Complete the ORIGINAL request. */
-	ahash_request_complete(areq, err);
-}
-
-static int ahash_op_unaligned(struct ahash_request *req,
-			      int (*op)(struct ahash_request *),
-			      bool has_state)
-{
-	int err;
-
-	err = ahash_save_req(req, ahash_op_unaligned_done, has_state);
-	if (err)
-		return err;
-
-	err = op(req->priv);
-	if (err == -EINPROGRESS || err == -EBUSY)
-		return err;
-
-	ahash_restore_req(req, err);
-
-	return err;
-}
-
-static int crypto_ahash_op(struct ahash_request *req,
-			   int (*op)(struct ahash_request *),
-			   bool has_state)
-{
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	unsigned long alignmask = crypto_ahash_alignmask(tfm);
-	int err;
-
-	if ((unsigned long)req->result & alignmask)
-		err = ahash_op_unaligned(req, op, has_state);
-	else
-		err = op(req);
-
-	return crypto_hash_errstat(crypto_hash_alg_common(tfm), err);
-}
-
 int crypto_ahash_final(struct ahash_request *req)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
···
 	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
 		atomic64_inc(&hash_get_stat(alg)->hash_cnt);
 
-	return crypto_ahash_op(req, tfm->final, true);
+	return crypto_hash_errstat(alg, tfm->final(req));
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_final);
···
 		atomic64_add(req->nbytes, &istat->hash_tlen);
 	}
 
-	return crypto_ahash_op(req, tfm->finup, true);
+	return crypto_hash_errstat(alg, tfm->finup(req));
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_finup);
···
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
+	int err;
 
 	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
 		struct crypto_istat_hash *istat = hash_get_stat(alg);
···
 	}
 
 	if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
-		return crypto_hash_errstat(alg, -ENOKEY);
+		err = -ENOKEY;
+	else
+		err = tfm->digest(req);
 
-	return crypto_ahash_op(req, tfm->digest, false);
+	return crypto_hash_errstat(alg, err);
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_digest);
+4 -4
crypto/shash.c
···
 	if (alg->digestsize > HASH_MAX_DIGESTSIZE)
 		return -EINVAL;
 
+	/* alignmask is not useful for hashes, so it is not supported. */
+	if (base->cra_alignmask)
+		return -EINVAL;
+
 	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
 
 	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
···
 	int err;
 
 	if (alg->descsize > HASH_MAX_DESCSIZE)
-		return -EINVAL;
-
-	/* alignmask is not useful for shash, so it is not supported. */
-	if (base->cra_alignmask)
 		return -EINVAL;
 
 	if ((alg->export && !alg->import) || (alg->import && !alg->export))
+1 -3
include/crypto/internal/hash.h
···
 	char *data;
 
 	unsigned int offset;
-	unsigned int alignmask;
+	unsigned int flags;
 
 	struct page *pg;
 	unsigned int entrylen;
 
 	unsigned int total;
 	struct scatterlist *sg;
-
-	unsigned int flags;
 };
 
 struct ahash_instance {
+14 -13
include/linux/crypto.h
···
 *	  crypto_aead_walksize() (with the remainder going at the end), no chunk
 *	  can cross a page boundary or a scatterlist element boundary.
 *    ahash:
-*	- The result buffer must be aligned to the algorithm's alignmask.
 *	- crypto_ahash_finup() must not be used unless the algorithm implements
 *	  ->finup() natively.
 */
···
 * @cra_ctxsize: Size of the operational context of the transformation. This
 *		 value informs the kernel crypto API about the memory size
 *		 needed to be allocated for the transformation context.
- * @cra_alignmask: Alignment mask for the input and output data buffer. The data
- *		   buffer containing the input data for the algorithm must be
- *		   aligned to this alignment mask. The data buffer for the
- *		   output data must be aligned to this alignment mask. Note that
- *		   the Crypto API will do the re-alignment in software, but
- *		   only under special conditions and there is a performance hit.
- *		   The re-alignment happens at these occasions for different
- *		   @cra_u types: cipher -- For both input data and output data
- *		   buffer; ahash -- For output hash destination buf; shash --
- *		   For output hash destination buf.
- *		   This is needed on hardware which is flawed by design and
- *		   cannot pick data from arbitrary addresses.
+ * @cra_alignmask: For cipher, skcipher, lskcipher, and aead algorithms this is
+ *		   1 less than the alignment, in bytes, that the algorithm
+ *		   implementation requires for input and output buffers. When
+ *		   the crypto API is invoked with buffers that are not aligned
+ *		   to this alignment, the crypto API automatically utilizes
+ *		   appropriately aligned temporary buffers to comply with what
+ *		   the algorithm needs. (For scatterlists this happens only if
+ *		   the algorithm uses the skcipher_walk helper functions.) This
+ *		   misalignment handling carries a performance penalty, so it is
+ *		   preferred that algorithms do not set a nonzero alignmask.
+ *		   Also, crypto API users may wish to allocate buffers aligned
+ *		   to the alignmask of the algorithm being used, in order to
+ *		   avoid the API having to realign them. Note: the alignmask is
+ *		   not supported for hash algorithms and is always 0 for them.
 * @cra_priority: Priority of this transformation implementation. In case
 *		  multiple transformations with same @cra_name are available to
 *		  the Crypto API, the kernel will use the one with highest