Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'v6.19-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
"API:
- Rewrite memcpy_sglist from scratch
- Add on-stack AEAD request allocation
- Fix partial block processing in ahash

Algorithms:
- Remove ansi_cprng
- Remove tcrypt tests for poly1305
- Fix EINPROGRESS processing in authenc
- Fix double-free in zstd

Drivers:
- Use drbg ctr helper when reseeding xilinx-trng
- Add support for PCI device 0x115A to ccp
- Add support of paes in caam
- Add support for aes-xts in dthev2

Others:
- Use likely in rhashtable lookup
- Fix lockdep false-positive in padata by removing a helper"

* tag 'v6.19-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (71 commits)
crypto: zstd - fix double-free in per-CPU stream cleanup
crypto: ahash - Zero positive err value in ahash_update_finish
crypto: ahash - Fix crypto_ahash_import with partial block data
crypto: lib/mpi - use min() instead of min_t()
crypto: ccp - use min() instead of min_t()
hwrng: core - use min3() instead of nested min_t()
crypto: aesni - ctr_crypt() use min() instead of min_t()
crypto: drbg - Delete unused ctx from struct sdesc
crypto: testmgr - Add missing DES weak and semi-weak key tests
Revert "crypto: scatterwalk - Move skcipher walk and use it for memcpy_sglist"
crypto: scatterwalk - Fix memcpy_sglist() to always succeed
crypto: iaa - Request to add Kanchana P Sridhar to Maintainers.
crypto: tcrypt - Remove unused poly1305 support
crypto: ansi_cprng - Remove unused ansi_cprng algorithm
crypto: asymmetric_keys - fix uninitialized pointers with free attribute
KEYS: Avoid -Wflex-array-member-not-at-end warning
crypto: ccree - Correctly handle return of sg_nents_for_len
crypto: starfive - Correctly handle return of sg_nents_for_len
crypto: iaa - Fix incorrect return value in save_iaa_wq()
crypto: zstd - Remove unnecessary size_t cast
...

+2027 -1681
+3 -4
Documentation/crypto/userspace-if.rst
··· 302 302 303 303 304 304 Depending on the RNG type, the RNG must be seeded. The seed is provided 305 - using the setsockopt interface to set the key. For example, the 306 - ansi_cprng requires a seed. The DRBGs do not require a seed, but may be 307 - seeded. The seed is also known as a *Personalization String* in NIST SP 800-90A 308 - standard. 305 + using the setsockopt interface to set the key. The SP800-90A DRBGs do 306 + not require a seed, but may be seeded. The seed is also known as a 307 + *Personalization String* in NIST SP 800-90A standard. 309 308 310 309 Using the read()/recvmsg() system calls, random numbers can be obtained. 311 310 The kernel generates at most 128 bytes in one call. If user space
+3
Documentation/devicetree/bindings/crypto/amd,ccp-seattle-v1a.yaml
··· 21 21 22 22 dma-coherent: true 23 23 24 + iommus: 25 + maxItems: 4 26 + 24 27 required: 25 28 - compatible 26 29 - reg
+1
Documentation/devicetree/bindings/crypto/qcom,inline-crypto-engine.yaml
··· 13 13 compatible: 14 14 items: 15 15 - enum: 16 + - qcom,kaanapali-inline-crypto-engine 16 17 - qcom,qcs8300-inline-crypto-engine 17 18 - qcom,sa8775p-inline-crypto-engine 18 19 - qcom,sc7180-inline-crypto-engine
+1
Documentation/devicetree/bindings/crypto/qcom,prng.yaml
··· 20 20 - qcom,ipq5332-trng 21 21 - qcom,ipq5424-trng 22 22 - qcom,ipq9574-trng 23 + - qcom,kaanapali-trng 23 24 - qcom,qcs615-trng 24 25 - qcom,qcs8300-trng 25 26 - qcom,sa8255p-trng
+1
Documentation/devicetree/bindings/crypto/qcom-qce.yaml
··· 45 45 46 46 - items: 47 47 - enum: 48 + - qcom,kaanapali-qce 48 49 - qcom,qcs615-qce 49 50 - qcom,qcs8300-qce 50 51 - qcom,sa8775p-qce
-17
Documentation/devicetree/bindings/rng/microchip,pic32-rng.txt
··· 1 - * Microchip PIC32 Random Number Generator 2 - 3 - The PIC32 RNG provides a pseudo random number generator which can be seeded by 4 - another true random number generator. 5 - 6 - Required properties: 7 - - compatible : should be "microchip,pic32mzda-rng" 8 - - reg : Specifies base physical address and size of the registers. 9 - - clocks: clock phandle. 10 - 11 - Example: 12 - 13 - rng: rng@1f8e6000 { 14 - compatible = "microchip,pic32mzda-rng"; 15 - reg = <0x1f8e6000 0x1000>; 16 - clocks = <&PBCLK5>; 17 - };
+40
Documentation/devicetree/bindings/rng/microchip,pic32-rng.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/rng/microchip,pic32-rng.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Microchip PIC32 Random Number Generator 8 + 9 + description: | 10 + The PIC32 RNG provides a pseudo random number generator which can be seeded 11 + by another true random number generator. 12 + 13 + maintainers: 14 + - Joshua Henderson <joshua.henderson@microchip.com> 15 + 16 + properties: 17 + compatible: 18 + enum: 19 + - microchip,pic32mzda-rng 20 + 21 + reg: 22 + maxItems: 1 23 + 24 + clocks: 25 + maxItems: 1 26 + 27 + required: 28 + - compatible 29 + - reg 30 + - clocks 31 + 32 + additionalProperties: false 33 + 34 + examples: 35 + - | 36 + rng: rng@1f8e6000 { 37 + compatible = "microchip,pic32mzda-rng"; 38 + reg = <0x1f8e6000 0x1000>; 39 + clocks = <&PBCLK5>; 40 + };
+87 -1
Documentation/security/keys/trusted-encrypted.rst
··· 10 10 system. All user level blobs, are displayed and loaded in hex ASCII for 11 11 convenience, and are integrity verified. 12 12 13 + Trusted Keys as Protected Keys 14 + ============================== 15 + This is a secure way of keeping keys in the kernel keyring as trusted keys, 16 + such that: 17 + 18 + - Key-blob, the encrypted key data, is created to be stored, loaded and seen 19 + by userspace. 20 + - Key-data, the plain-text key in system memory, is to be used by 21 + kernel space only. 22 + 23 + Although the key-data is not accessible to user space in plain text, it is in 24 + plain text in system memory when used in kernel space. The kernel presents a 25 + smaller attack surface, but a compromised kernel or a side-channel 26 + attack on system memory can still lead to the key being 27 + compromised or leaked. 28 + 29 + To protect the key in kernel space, the concept of "protected keys" is 30 + introduced as an added layer of protection. The key-data of a 31 + protected key is encrypted with a Key-Encryption-Key (KEK) and decrypted 32 + only inside the trust source boundary. The plain-text key is never available 33 + in system memory. Thus, any crypto operation that is to be executed using the 34 + protected key can only be done by the trust source that generated the 35 + key blob. 36 + 37 + Hence, if the protected key is leaked or compromised, it is of no use to an 38 + attacker. 39 + 40 + Trusted keys act as protected keys when the trust source is capable of 41 + generating: 42 + 43 + - Key-blob, to be loaded, stored and seen by user space. 
13 44 14 45 Trust Source 15 46 ============ ··· 283 252 Trusted Keys usage: CAAM 284 253 ------------------------ 285 254 286 - Usage:: 255 + Trusted Keys Usage:: 287 256 288 257 keyctl add trusted name "new keylen" ring 289 258 keyctl add trusted name "load hex_blob" ring 290 259 keyctl print keyid 260 + 261 + "keyctl print" returns an ASCII hex copy of the sealed key, which is in a 262 + CAAM-specific format. The key length for new keys is always in bytes. 263 + Trusted Keys can be 32 - 128 bytes (256 - 1024 bits). 264 + 265 + Trusted Keys as Protected Keys Usage:: 266 + 267 + keyctl add trusted name "new keylen pk [options]" ring 268 + keyctl add trusted name "load hex_blob [options]" ring 269 + keyctl print keyid 270 + 271 + where 'pk' directs the trust source to generate a protected key. 272 + 273 + options: 274 + key_enc_algo = For CAAM, the supported encryption algorithms are ECB (2) and CCM (1). 291 275 292 276 "keyctl print" returns an ASCII hex copy of the sealed key, which is in a 293 277 CAAM-specific format. The key length for new keys is always in bytes. ··· 377 331 Load a trusted key from the saved blob:: 378 332 379 333 $ keyctl add trusted kmk "load `cat kmk.blob`" @u 334 + 268728824 335 + 336 + $ keyctl print 268728824 337 + 0101000000000000000001005d01b7e3f4a6be5709930f3b70a743cbb42e0cc95e18e915 338 + 3f60da455bbf1144ad12e4f92b452f966929f6105fd29ca28e4d4d5a031d068478bacb0b 339 + 27351119f822911b0a11ba3d3498ba6a32e50dac7f32894dd890eb9ad578e4e292c83722 340 + a52e56a097e6a68b3f56f7a52ece0cdccba1eb62cad7d817f6dc58898b3ac15f36026fec 341 + d568bd4a706cb60bb37be6d8f1240661199d640b66fb0fe3b079f97f450b9ef9c22c6d5d 342 + dd379f0facd1cd020281dfa3c70ba21a3fa6fc2471dc6d13ecf8298b946f65345faa5ef0 343 + f1f8fff03ad0acb083725535636addb08d73dedb9832da198081e5deae84bfaf0409c22b 344 + e4a8aea2b607ec96931e6f4d4fe563ba 345 + 346 + Create and save a trusted key as a protected key named "kmk", of length 32 bytes. 
347 + 348 + :: 349 + 350 + $ keyctl add trusted kmk "new 32 pk key_enc_algo=1" @u 351 + 440502848 352 + 353 + $ keyctl show 354 + Session Keyring 355 + -3 --alswrv 500 500 keyring: _ses 356 + 97833714 --alswrv 500 -1 \_ keyring: _uid.500 357 + 440502848 --alswrv 500 500 \_ trusted: kmk 358 + 359 + $ keyctl print 440502848 360 + 0101000000000000000001005d01b7e3f4a6be5709930f3b70a743cbb42e0cc95e18e915 361 + 3f60da455bbf1144ad12e4f92b452f966929f6105fd29ca28e4d4d5a031d068478bacb0b 362 + 27351119f822911b0a11ba3d3498ba6a32e50dac7f32894dd890eb9ad578e4e292c83722 363 + a52e56a097e6a68b3f56f7a52ece0cdccba1eb62cad7d817f6dc58898b3ac15f36026fec 364 + d568bd4a706cb60bb37be6d8f1240661199d640b66fb0fe3b079f97f450b9ef9c22c6d5d 365 + dd379f0facd1cd020281dfa3c70ba21a3fa6fc2471dc6d13ecf8298b946f65345faa5ef0 366 + f1f8fff03ad0acb083725535636addb08d73dedb9832da198081e5deae84bfaf0409c22b 367 + e4a8aea2b607ec96931e6f4d4fe563ba 368 + 369 + $ keyctl pipe 440502848 > kmk.blob 370 + 371 + Load a trusted key from the saved blob:: 372 + 373 + $ keyctl add trusted kmk "load `cat kmk.blob` key_enc_algo=1" @u 380 374 268728824 381 375 382 376 $ keyctl print 268728824
+1 -1
MAINTAINERS
··· 6613 6613 M: Neil Horman <nhorman@tuxdriver.com> 6614 6614 L: linux-crypto@vger.kernel.org 6615 6615 S: Maintained 6616 - F: crypto/ansi_cprng.c 6617 6616 F: crypto/rng.c 6618 6617 6619 6618 CS3308 MEDIA DRIVER ··· 12572 12573 INTEL IAA CRYPTO DRIVER 12573 12574 M: Kristen Accardi <kristen.c.accardi@intel.com> 12574 12575 M: Vinicius Costa Gomes <vinicius.gomes@intel.com> 12576 + M: Kanchana P Sridhar <kanchana.p.sridhar@intel.com> 12575 12577 L: linux-crypto@vger.kernel.org 12576 12578 S: Supported 12577 12579 F: Documentation/driver-api/crypto/iaa/iaa-crypto.rst
-1
arch/arm/configs/axm55xx_defconfig
··· 232 232 CONFIG_DEBUG_USER=y 233 233 CONFIG_CRYPTO_GCM=y 234 234 CONFIG_CRYPTO_SHA256=y 235 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/arm/configs/clps711x_defconfig
··· 75 75 CONFIG_DEBUG_USER=y 76 76 CONFIG_DEBUG_LL=y 77 77 CONFIG_EARLY_PRINTK=y 78 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 79 78 # CONFIG_CRYPTO_HW is not set
-1
arch/arm/configs/dove_defconfig
··· 126 126 CONFIG_CRYPTO_SHA512=y 127 127 CONFIG_CRYPTO_DEFLATE=y 128 128 CONFIG_CRYPTO_LZO=y 129 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 130 129 CONFIG_CRYPTO_DEV_MARVELL_CESA=y 131 130 CONFIG_PRINTK_TIME=y 132 131 # CONFIG_DEBUG_BUGVERBOSE is not set
-1
arch/arm/configs/ep93xx_defconfig
··· 119 119 CONFIG_DEBUG_MUTEXES=y 120 120 CONFIG_DEBUG_USER=y 121 121 CONFIG_DEBUG_LL=y 122 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/arm/configs/jornada720_defconfig
··· 92 92 CONFIG_DEBUG_KERNEL=y 93 93 # CONFIG_FTRACE is not set 94 94 CONFIG_DEBUG_LL=y 95 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/arm/configs/keystone_defconfig
··· 228 228 CONFIG_CRYPTO_CBC=y 229 229 CONFIG_CRYPTO_CTR=y 230 230 CONFIG_CRYPTO_XCBC=y 231 - CONFIG_CRYPTO_ANSI_CPRNG=y 232 231 CONFIG_CRYPTO_USER_API_HASH=y 233 232 CONFIG_CRYPTO_USER_API_SKCIPHER=y 234 233 CONFIG_DMA_CMA=y
-1
arch/arm/configs/lpc32xx_defconfig
··· 177 177 CONFIG_NLS_ASCII=y 178 178 CONFIG_NLS_ISO8859_1=y 179 179 CONFIG_NLS_UTF8=y 180 - CONFIG_CRYPTO_ANSI_CPRNG=y 181 180 # CONFIG_CRYPTO_HW is not set 182 181 CONFIG_PRINTK_TIME=y 183 182 CONFIG_DYNAMIC_DEBUG=y
-1
arch/arm/configs/mmp2_defconfig
··· 78 78 CONFIG_DEBUG_LL=y 79 79 CONFIG_DEBUG_MMP_UART3=y 80 80 CONFIG_EARLY_PRINTK=y 81 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/arm/configs/mv78xx0_defconfig
··· 121 121 CONFIG_SCHEDSTATS=y 122 122 CONFIG_DEBUG_USER=y 123 123 CONFIG_DEBUG_LL=y 124 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/arm/configs/omap1_defconfig
··· 220 220 CONFIG_CRYPTO_PCBC=y 221 221 CONFIG_CRYPTO_DEFLATE=y 222 222 CONFIG_CRYPTO_LZO=y 223 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 224 223 CONFIG_FONTS=y 225 224 CONFIG_FONT_8x8=y 226 225 CONFIG_FONT_8x16=y
-1
arch/arm/configs/orion5x_defconfig
··· 145 145 # CONFIG_FTRACE is not set 146 146 CONFIG_DEBUG_USER=y 147 147 CONFIG_DEBUG_LL=y 148 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/arm/configs/pxa168_defconfig
··· 48 48 # CONFIG_DEBUG_PREEMPT is not set 49 49 CONFIG_DEBUG_USER=y 50 50 CONFIG_DEBUG_LL=y 51 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/arm/configs/pxa3xx_defconfig
··· 106 106 CONFIG_DEBUG_SPINLOCK_SLEEP=y 107 107 # CONFIG_FTRACE is not set 108 108 CONFIG_DEBUG_USER=y 109 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 110 109 # CONFIG_CRYPTO_HW is not set
-1
arch/arm/configs/pxa910_defconfig
··· 59 59 CONFIG_DEBUG_LL=y 60 60 CONFIG_DEBUG_MMP_UART2=y 61 61 CONFIG_EARLY_PRINTK=y 62 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/arm/configs/spitz_defconfig
··· 228 228 CONFIG_CRYPTO_SERPENT=m 229 229 CONFIG_CRYPTO_TEA=m 230 230 CONFIG_CRYPTO_TWOFISH=m 231 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 232 231 CONFIG_CRYPTO_HMAC=y 233 232 CONFIG_CRYPTO_MD4=m 234 233 CONFIG_CRYPTO_MICHAEL_MIC=m
-1
arch/arm64/configs/defconfig
··· 1784 1784 CONFIG_CRYPTO_ECHAINIV=y 1785 1785 CONFIG_CRYPTO_MICHAEL_MIC=m 1786 1786 CONFIG_CRYPTO_SHA3=m 1787 - CONFIG_CRYPTO_ANSI_CPRNG=y 1788 1787 CONFIG_CRYPTO_USER_API_RNG=m 1789 1788 CONFIG_CRYPTO_GHASH_ARM64_CE=y 1790 1789 CONFIG_CRYPTO_SM3_ARM64_CE=m
-1
arch/hexagon/configs/comet_defconfig
··· 69 69 # CONFIG_INET_DIAG is not set 70 70 # CONFIG_IPV6 is not set 71 71 CONFIG_CRYPTO_MD5=y 72 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 73 72 # CONFIG_CRYPTO_HW is not set 74 73 CONFIG_FRAME_WARN=0 75 74 CONFIG_MAGIC_SYSRQ=y
-1
arch/m68k/configs/amcore_defconfig
··· 86 86 # CONFIG_SCHED_DEBUG is not set 87 87 # CONFIG_DEBUG_BUGVERBOSE is not set 88 88 # CONFIG_CRYPTO_ECHAINIV is not set 89 - CONFIG_CRYPTO_ANSI_CPRNG=y 90 89 # CONFIG_CRYPTO_HW is not set
-1
arch/m68k/configs/amiga_defconfig
··· 590 590 CONFIG_CRYPTO_LZ4=m 591 591 CONFIG_CRYPTO_LZ4HC=m 592 592 CONFIG_CRYPTO_ZSTD=m 593 - CONFIG_CRYPTO_ANSI_CPRNG=m 594 593 CONFIG_CRYPTO_DRBG_HASH=y 595 594 CONFIG_CRYPTO_DRBG_CTR=y 596 595 CONFIG_CRYPTO_USER_API_HASH=m
-1
arch/m68k/configs/apollo_defconfig
··· 547 547 CONFIG_CRYPTO_LZ4=m 548 548 CONFIG_CRYPTO_LZ4HC=m 549 549 CONFIG_CRYPTO_ZSTD=m 550 - CONFIG_CRYPTO_ANSI_CPRNG=m 551 550 CONFIG_CRYPTO_DRBG_HASH=y 552 551 CONFIG_CRYPTO_DRBG_CTR=y 553 552 CONFIG_CRYPTO_USER_API_HASH=m
-1
arch/m68k/configs/atari_defconfig
··· 567 567 CONFIG_CRYPTO_LZ4=m 568 568 CONFIG_CRYPTO_LZ4HC=m 569 569 CONFIG_CRYPTO_ZSTD=m 570 - CONFIG_CRYPTO_ANSI_CPRNG=m 571 570 CONFIG_CRYPTO_DRBG_HASH=y 572 571 CONFIG_CRYPTO_DRBG_CTR=y 573 572 CONFIG_CRYPTO_USER_API_HASH=m
-1
arch/m68k/configs/bvme6000_defconfig
··· 539 539 CONFIG_CRYPTO_LZ4=m 540 540 CONFIG_CRYPTO_LZ4HC=m 541 541 CONFIG_CRYPTO_ZSTD=m 542 - CONFIG_CRYPTO_ANSI_CPRNG=m 543 542 CONFIG_CRYPTO_DRBG_HASH=y 544 543 CONFIG_CRYPTO_DRBG_CTR=y 545 544 CONFIG_CRYPTO_USER_API_HASH=m
-1
arch/m68k/configs/hp300_defconfig
··· 549 549 CONFIG_CRYPTO_LZ4=m 550 550 CONFIG_CRYPTO_LZ4HC=m 551 551 CONFIG_CRYPTO_ZSTD=m 552 - CONFIG_CRYPTO_ANSI_CPRNG=m 553 552 CONFIG_CRYPTO_DRBG_HASH=y 554 553 CONFIG_CRYPTO_DRBG_CTR=y 555 554 CONFIG_CRYPTO_USER_API_HASH=m
-1
arch/m68k/configs/mac_defconfig
··· 566 566 CONFIG_CRYPTO_LZ4=m 567 567 CONFIG_CRYPTO_LZ4HC=m 568 568 CONFIG_CRYPTO_ZSTD=m 569 - CONFIG_CRYPTO_ANSI_CPRNG=m 570 569 CONFIG_CRYPTO_DRBG_HASH=y 571 570 CONFIG_CRYPTO_DRBG_CTR=y 572 571 CONFIG_CRYPTO_USER_API_HASH=m
-1
arch/m68k/configs/multi_defconfig
··· 653 653 CONFIG_CRYPTO_LZ4=m 654 654 CONFIG_CRYPTO_LZ4HC=m 655 655 CONFIG_CRYPTO_ZSTD=m 656 - CONFIG_CRYPTO_ANSI_CPRNG=m 657 656 CONFIG_CRYPTO_DRBG_HASH=y 658 657 CONFIG_CRYPTO_DRBG_CTR=y 659 658 CONFIG_CRYPTO_USER_API_HASH=m
-1
arch/m68k/configs/mvme147_defconfig
··· 539 539 CONFIG_CRYPTO_LZ4=m 540 540 CONFIG_CRYPTO_LZ4HC=m 541 541 CONFIG_CRYPTO_ZSTD=m 542 - CONFIG_CRYPTO_ANSI_CPRNG=m 543 542 CONFIG_CRYPTO_DRBG_HASH=y 544 543 CONFIG_CRYPTO_DRBG_CTR=y 545 544 CONFIG_CRYPTO_USER_API_HASH=m
-1
arch/m68k/configs/mvme16x_defconfig
··· 540 540 CONFIG_CRYPTO_LZ4=m 541 541 CONFIG_CRYPTO_LZ4HC=m 542 542 CONFIG_CRYPTO_ZSTD=m 543 - CONFIG_CRYPTO_ANSI_CPRNG=m 544 543 CONFIG_CRYPTO_DRBG_HASH=y 545 544 CONFIG_CRYPTO_DRBG_CTR=y 546 545 CONFIG_CRYPTO_USER_API_HASH=m
-1
arch/m68k/configs/q40_defconfig
··· 556 556 CONFIG_CRYPTO_LZ4=m 557 557 CONFIG_CRYPTO_LZ4HC=m 558 558 CONFIG_CRYPTO_ZSTD=m 559 - CONFIG_CRYPTO_ANSI_CPRNG=m 560 559 CONFIG_CRYPTO_DRBG_HASH=y 561 560 CONFIG_CRYPTO_DRBG_CTR=y 562 561 CONFIG_CRYPTO_USER_API_HASH=m
-1
arch/m68k/configs/stmark2_defconfig
··· 84 84 CONFIG_CRAMFS=y 85 85 CONFIG_SQUASHFS=y 86 86 CONFIG_ROMFS_FS=y 87 - CONFIG_CRYPTO_ANSI_CPRNG=y 88 87 # CONFIG_CRYPTO_HW is not set 89 88 CONFIG_PRINTK_TIME=y 90 89 # CONFIG_DEBUG_BUGVERBOSE is not set
-1
arch/m68k/configs/sun3_defconfig
··· 537 537 CONFIG_CRYPTO_LZ4=m 538 538 CONFIG_CRYPTO_LZ4HC=m 539 539 CONFIG_CRYPTO_ZSTD=m 540 - CONFIG_CRYPTO_ANSI_CPRNG=m 541 540 CONFIG_CRYPTO_DRBG_HASH=y 542 541 CONFIG_CRYPTO_DRBG_CTR=y 543 542 CONFIG_CRYPTO_USER_API_HASH=m
-1
arch/m68k/configs/sun3x_defconfig
··· 537 537 CONFIG_CRYPTO_LZ4=m 538 538 CONFIG_CRYPTO_LZ4HC=m 539 539 CONFIG_CRYPTO_ZSTD=m 540 - CONFIG_CRYPTO_ANSI_CPRNG=m 541 540 CONFIG_CRYPTO_DRBG_HASH=y 542 541 CONFIG_CRYPTO_DRBG_CTR=y 543 542 CONFIG_CRYPTO_USER_API_HASH=m
-1
arch/mips/configs/decstation_64_defconfig
··· 200 200 CONFIG_CRYPTO_842=m 201 201 CONFIG_CRYPTO_LZ4=m 202 202 CONFIG_CRYPTO_LZ4HC=m 203 - CONFIG_CRYPTO_ANSI_CPRNG=m 204 203 CONFIG_CRYPTO_DRBG_HASH=y 205 204 CONFIG_CRYPTO_DRBG_CTR=y 206 205 # CONFIG_CRYPTO_HW is not set
-1
arch/mips/configs/decstation_defconfig
··· 195 195 CONFIG_CRYPTO_842=m 196 196 CONFIG_CRYPTO_LZ4=m 197 197 CONFIG_CRYPTO_LZ4HC=m 198 - CONFIG_CRYPTO_ANSI_CPRNG=m 199 198 CONFIG_CRYPTO_DRBG_HASH=y 200 199 CONFIG_CRYPTO_DRBG_CTR=y 201 200 # CONFIG_CRYPTO_HW is not set
-1
arch/mips/configs/decstation_r4k_defconfig
··· 195 195 CONFIG_CRYPTO_842=m 196 196 CONFIG_CRYPTO_LZ4=m 197 197 CONFIG_CRYPTO_LZ4HC=m 198 - CONFIG_CRYPTO_ANSI_CPRNG=m 199 198 CONFIG_CRYPTO_DRBG_HASH=y 200 199 CONFIG_CRYPTO_DRBG_CTR=y 201 200 # CONFIG_CRYPTO_HW is not set
-1
arch/s390/configs/debug_defconfig
··· 805 805 CONFIG_CRYPTO_LZ4=m 806 806 CONFIG_CRYPTO_LZ4HC=m 807 807 CONFIG_CRYPTO_ZSTD=m 808 - CONFIG_CRYPTO_ANSI_CPRNG=m 809 808 CONFIG_CRYPTO_USER_API_HASH=m 810 809 CONFIG_CRYPTO_USER_API_SKCIPHER=m 811 810 CONFIG_CRYPTO_USER_API_RNG=m
-1
arch/s390/configs/defconfig
··· 789 789 CONFIG_CRYPTO_LZ4=m 790 790 CONFIG_CRYPTO_LZ4HC=m 791 791 CONFIG_CRYPTO_ZSTD=m 792 - CONFIG_CRYPTO_ANSI_CPRNG=m 793 792 CONFIG_CRYPTO_JITTERENTROPY_OSR=1 794 793 CONFIG_CRYPTO_USER_API_HASH=m 795 794 CONFIG_CRYPTO_USER_API_SKCIPHER=m
-1
arch/sh/configs/ap325rxa_defconfig
··· 97 97 # CONFIG_ENABLE_MUST_CHECK is not set 98 98 CONFIG_CRYPTO=y 99 99 CONFIG_CRYPTO_CBC=y 100 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/apsh4a3a_defconfig
··· 86 86 # CONFIG_DEBUG_BUGVERBOSE is not set 87 87 CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y 88 88 # CONFIG_FTRACE is not set 89 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 90 89 # CONFIG_CRYPTO_HW is not set
-1
arch/sh/configs/apsh4ad0a_defconfig
··· 116 116 CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y 117 117 CONFIG_DEBUG_VM=y 118 118 CONFIG_DWARF_UNWINDER=y 119 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/dreamcast_defconfig
··· 66 66 CONFIG_PROC_KCORE=y 67 67 CONFIG_TMPFS=y 68 68 CONFIG_HUGETLBFS=y 69 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 70 69 CONFIG_RTC_CLASS=y 71 70 CONFIG_RTC_DRV_GENERIC=y
-1
arch/sh/configs/ecovec24_defconfig
··· 126 126 CONFIG_DEBUG_FS=y 127 127 CONFIG_CRYPTO=y 128 128 CONFIG_CRYPTO_CBC=y 129 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/edosk7760_defconfig
··· 110 110 CONFIG_CRYPTO=y 111 111 CONFIG_CRYPTO_MD5=y 112 112 CONFIG_CRYPTO_DES=y 113 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/espt_defconfig
··· 108 108 CONFIG_NLS_UTF8=y 109 109 # CONFIG_ENABLE_MUST_CHECK is not set 110 110 CONFIG_DEBUG_FS=y 111 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/hp6xx_defconfig
··· 54 54 CONFIG_CRYPTO_ECB=y 55 55 CONFIG_CRYPTO_PCBC=y 56 56 CONFIG_CRYPTO_MD5=y 57 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 58 57 # CONFIG_CRYPTO_HW is not set
-1
arch/sh/configs/landisk_defconfig
··· 109 109 CONFIG_NLS_CODEPAGE_437=y 110 110 CONFIG_NLS_CODEPAGE_932=y 111 111 CONFIG_SH_STANDARD_BIOS=y 112 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/lboxre2_defconfig
··· 56 56 CONFIG_ROMFS_FS=y 57 57 CONFIG_NLS_CODEPAGE_437=y 58 58 CONFIG_SH_STANDARD_BIOS=y 59 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/migor_defconfig
··· 87 87 CONFIG_NFS_FS=y 88 88 CONFIG_ROOT_NFS=y 89 89 CONFIG_DEBUG_FS=y 90 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 91 90 # CONFIG_CRYPTO_HW is not set
-1
arch/sh/configs/r7780mp_defconfig
··· 103 103 CONFIG_CRYPTO_ECB=m 104 104 CONFIG_CRYPTO_PCBC=m 105 105 CONFIG_CRYPTO_HMAC=y 106 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/r7785rp_defconfig
··· 101 101 CONFIG_CRYPTO_ECB=m 102 102 CONFIG_CRYPTO_PCBC=m 103 103 CONFIG_CRYPTO_HMAC=y 104 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/rts7751r2d1_defconfig
··· 86 86 CONFIG_MINIX_FS=y 87 87 CONFIG_NLS_CODEPAGE_932=y 88 88 CONFIG_DEBUG_FS=y 89 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/rts7751r2dplus_defconfig
··· 91 91 CONFIG_MINIX_FS=y 92 92 CONFIG_NLS_CODEPAGE_932=y 93 93 CONFIG_DEBUG_FS=y 94 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/sdk7780_defconfig
··· 134 134 CONFIG_SH_STANDARD_BIOS=y 135 135 CONFIG_CRYPTO_MD5=y 136 136 CONFIG_CRYPTO_DES=y 137 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/sdk7786_defconfig
··· 211 211 CONFIG_DMA_API_DEBUG=y 212 212 CONFIG_DEBUG_STACK_USAGE=y 213 213 CONFIG_DWARF_UNWINDER=y 214 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/se7206_defconfig
··· 99 99 CONFIG_DEBUG_STACK_USAGE=y 100 100 CONFIG_CRYPTO_DEFLATE=y 101 101 CONFIG_CRYPTO_LZO=y 102 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 103 102 # CONFIG_CRYPTO_HW is not set
-1
arch/sh/configs/se7343_defconfig
··· 91 91 CONFIG_NFS_FS=y 92 92 CONFIG_NFS_V3=y 93 93 CONFIG_NFSD=y 94 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/se7705_defconfig
··· 51 51 CONFIG_JFFS2_FS=y 52 52 CONFIG_NFS_FS=y 53 53 CONFIG_ROOT_NFS=y 54 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/se7712_defconfig
··· 94 94 CONFIG_FRAME_POINTER=y 95 95 CONFIG_CRYPTO_ECB=m 96 96 CONFIG_CRYPTO_PCBC=m 97 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/se7721_defconfig
··· 120 120 CONFIG_DEBUG_KERNEL=y 121 121 CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y 122 122 CONFIG_FRAME_POINTER=y 123 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/se7722_defconfig
··· 53 53 CONFIG_MAGIC_SYSRQ=y 54 54 CONFIG_DEBUG_FS=y 55 55 CONFIG_SH_STANDARD_BIOS=y 56 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/se7724_defconfig
··· 126 126 # CONFIG_ENABLE_MUST_CHECK is not set 127 127 CONFIG_CRYPTO=y 128 128 CONFIG_CRYPTO_CBC=y 129 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/se7750_defconfig
··· 52 52 CONFIG_PARTITION_ADVANCED=y 53 53 # CONFIG_MSDOS_PARTITION is not set 54 54 # CONFIG_ENABLE_MUST_CHECK is not set 55 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/se7751_defconfig
··· 42 42 CONFIG_PROC_KCORE=y 43 43 CONFIG_TMPFS=y 44 44 CONFIG_JFFS2_FS=y 45 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/se7780_defconfig
··· 102 102 CONFIG_NFS_V3=y 103 103 CONFIG_ROOT_NFS=y 104 104 CONFIG_DEBUG_FS=y 105 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/sh03_defconfig
··· 118 118 CONFIG_CRYPTO_HMAC=y 119 119 CONFIG_CRYPTO_SHA1=y 120 120 CONFIG_CRYPTO_DEFLATE=y 121 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 122 121 CONFIG_RTC_CLASS=y 123 122 CONFIG_RTC_DRV_GENERIC=y
-1
arch/sh/configs/sh2007_defconfig
··· 191 191 CONFIG_CRYPTO_TWOFISH=y 192 192 CONFIG_CRYPTO_DEFLATE=y 193 193 CONFIG_CRYPTO_LZO=y 194 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 195 194 # CONFIG_CRYPTO_HW is not set
-1
arch/sh/configs/sh7710voipgw_defconfig
··· 51 51 # CONFIG_DNOTIFY is not set 52 52 CONFIG_JFFS2_FS=y 53 53 CONFIG_DEBUG_FS=y 54 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/sh7757lcr_defconfig
··· 81 81 # CONFIG_DEBUG_BUGVERBOSE is not set 82 82 CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y 83 83 # CONFIG_FTRACE is not set 84 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/sh7763rdp_defconfig
··· 110 110 CONFIG_NLS_UTF8=y 111 111 # CONFIG_ENABLE_MUST_CHECK is not set 112 112 CONFIG_DEBUG_FS=y 113 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/sh7785lcr_32bit_defconfig
··· 144 144 CONFIG_LATENCYTOP=y 145 145 # CONFIG_FTRACE is not set 146 146 CONFIG_CRYPTO_HMAC=y 147 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 148 147 # CONFIG_CRYPTO_HW is not set
-1
arch/sh/configs/sh7785lcr_defconfig
··· 112 112 CONFIG_DETECT_HUNG_TASK=y 113 113 # CONFIG_DEBUG_BUGVERBOSE is not set 114 114 CONFIG_CRYPTO_HMAC=y 115 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 116 115 # CONFIG_CRYPTO_HW is not set
-1
arch/sh/configs/shmin_defconfig
··· 49 49 CONFIG_NFS_V3=y 50 50 CONFIG_ROOT_NFS=y 51 51 CONFIG_SH_STANDARD_BIOS=y 52 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/shx3_defconfig
··· 97 97 CONFIG_FRAME_POINTER=y 98 98 CONFIG_SH_STANDARD_BIOS=y 99 99 CONFIG_DEBUG_STACK_USAGE=y 100 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/titan_defconfig
··· 261 261 CONFIG_CRYPTO_SERPENT=m 262 262 CONFIG_CRYPTO_TEA=m 263 263 CONFIG_CRYPTO_TWOFISH=m 264 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/ul2_defconfig
··· 80 80 CONFIG_NLS_ISO8859_1=y 81 81 # CONFIG_ENABLE_MUST_CHECK is not set 82 82 CONFIG_CRYPTO_MICHAEL_MIC=y 83 - # CONFIG_CRYPTO_ANSI_CPRNG is not set
-1
arch/sh/configs/urquell_defconfig
··· 142 142 # CONFIG_FTRACE is not set 143 143 # CONFIG_DUMP_CODE is not set 144 144 CONFIG_CRYPTO_HMAC=y 145 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 146 145 # CONFIG_CRYPTO_HW is not set
-1
arch/sparc/configs/sparc32_defconfig
··· 92 92 CONFIG_CRYPTO_CAST6=m 93 93 CONFIG_CRYPTO_SERPENT=m 94 94 CONFIG_CRYPTO_TWOFISH=m 95 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 96 95 # CONFIG_CRYPTO_HW is not set
-1
arch/sparc/configs/sparc64_defconfig
··· 227 227 CONFIG_CRYPTO_SERPENT=m 228 228 CONFIG_CRYPTO_TEA=m 229 229 CONFIG_CRYPTO_TWOFISH=m 230 - # CONFIG_CRYPTO_ANSI_CPRNG is not set 231 230 CONFIG_VCC=m 232 231 CONFIG_PATA_CMD64X=y 233 232 CONFIG_IP_PNP=y
+1 -2
arch/x86/crypto/aesni-intel_glue.c
··· 693 693 * operation into two at the point where the overflow 694 694 * will occur. After the first part, add the carry bit. 695 695 */ 696 - p1_nbytes = min_t(unsigned int, nbytes, 697 - (nblocks - ctr64) * AES_BLOCK_SIZE); 696 + p1_nbytes = min(nbytes, (nblocks - ctr64) * AES_BLOCK_SIZE); 698 697 (*ctr64_func)(key, walk.src.virt.addr, 699 698 walk.dst.virt.addr, p1_nbytes, le_ctr); 700 699 le_ctr[0] = 0;
-1
arch/xtensa/configs/audio_kc705_defconfig
··· 133 133 CONFIG_RCU_TRACE=y 134 134 # CONFIG_FTRACE is not set 135 135 # CONFIG_S32C1I_SELFTEST is not set 136 - CONFIG_CRYPTO_ANSI_CPRNG=y
-1
arch/xtensa/configs/generic_kc705_defconfig
··· 121 121 # CONFIG_FTRACE is not set 122 122 CONFIG_LD_NO_RELAX=y 123 123 # CONFIG_S32C1I_SELFTEST is not set 124 - CONFIG_CRYPTO_ANSI_CPRNG=y
-1
arch/xtensa/configs/iss_defconfig
··· 28 28 CONFIG_TMPFS=y 29 29 # CONFIG_FRAME_POINTER is not set 30 30 CONFIG_DETECT_HUNG_TASK=y 31 - CONFIG_CRYPTO_ANSI_CPRNG=y
-1
arch/xtensa/configs/nommu_kc705_defconfig
··· 122 122 # CONFIG_FTRACE is not set 123 123 # CONFIG_LD_NO_RELAX is not set 124 124 # CONFIG_CRYPTO_ECHAINIV is not set 125 - CONFIG_CRYPTO_ANSI_CPRNG=y
-1
arch/xtensa/configs/smp_lx200_defconfig
··· 125 125 # CONFIG_FTRACE is not set 126 126 CONFIG_LD_NO_RELAX=y 127 127 # CONFIG_S32C1I_SELFTEST is not set 128 - CONFIG_CRYPTO_ANSI_CPRNG=y
-1
arch/xtensa/configs/virt_defconfig
··· 92 92 CONFIG_CRYPTO_ECHAINIV=y 93 93 CONFIG_CRYPTO_DEFLATE=y 94 94 CONFIG_CRYPTO_LZO=y 95 - CONFIG_CRYPTO_ANSI_CPRNG=y 96 95 CONFIG_CRYPTO_DEV_VIRTIO=y 97 96 CONFIG_FONTS=y 98 97 CONFIG_PRINTK_TIME=y
-1
arch/xtensa/configs/xip_kc705_defconfig
··· 98 98 CONFIG_CRYPTO_ECHAINIV=y 99 99 CONFIG_CRYPTO_DEFLATE=y 100 100 CONFIG_CRYPTO_LZO=y 101 - CONFIG_CRYPTO_ANSI_CPRNG=y 102 101 CONFIG_PRINTK_TIME=y 103 102 CONFIG_DYNAMIC_DEBUG=y 104 103 CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
+7 -14
crypto/Kconfig
··· 25 25 26 26 config CRYPTO_FIPS 27 27 bool "FIPS 200 compliance" 28 - depends on (CRYPTO_ANSI_CPRNG || CRYPTO_DRBG) && CRYPTO_SELFTESTS 28 + depends on CRYPTO_DRBG && CRYPTO_SELFTESTS 29 29 depends on (MODULE_SIG || !MODULES) 30 30 help 31 31 This option enables the fips boot option which is ··· 1161 1161 1162 1162 menu "Random number generation" 1163 1163 1164 - config CRYPTO_ANSI_CPRNG 1165 - tristate "ANSI PRNG (Pseudo Random Number Generator)" 1166 - select CRYPTO_AES 1167 - select CRYPTO_RNG 1168 - help 1169 - Pseudo RNG (random number generator) (ANSI X9.31 Appendix A.2.4) 1170 - 1171 - This uses the AES cipher algorithm. 1172 - 1173 - Note that this option must be enabled if CRYPTO_FIPS is selected 1174 - 1175 1164 menuconfig CRYPTO_DRBG_MENU 1176 1165 tristate "NIST SP800-90A DRBG (Deterministic Random Bit Generator)" 1177 1166 help ··· 1186 1197 1187 1198 config CRYPTO_DRBG_CTR 1188 1199 bool "CTR_DRBG" 1189 - select CRYPTO_AES 1190 - select CRYPTO_CTR 1200 + select CRYPTO_DF80090A 1191 1201 help 1192 1202 CTR_DRBG variant as defined in NIST SP800-90A. 1193 1203 ··· 1321 1333 tristate 1322 1334 select CRYPTO_HMAC 1323 1335 select CRYPTO_SHA256 1336 + 1337 + config CRYPTO_DF80090A 1338 + tristate 1339 + select CRYPTO_AES 1340 + select CRYPTO_CTR 1324 1341 1325 1342 endmenu 1326 1343 menu "Userspace interface"
+2 -1
crypto/Makefile
··· 162 162 obj-$(CONFIG_CRYPTO_XXHASH) += xxhash_generic.o 163 163 obj-$(CONFIG_CRYPTO_842) += 842.o 164 164 obj-$(CONFIG_CRYPTO_RNG2) += rng.o 165 - obj-$(CONFIG_CRYPTO_ANSI_CPRNG) += ansi_cprng.o 166 165 obj-$(CONFIG_CRYPTO_DRBG) += drbg.o 167 166 obj-$(CONFIG_CRYPTO_JITTERENTROPY) += jitterentropy_rng.o 168 167 CFLAGS_jitterentropy.o = -O0 ··· 205 206 # Key derivation function 206 207 # 207 208 obj-$(CONFIG_CRYPTO_KDF800108_CTR) += kdf_sp800108.o 209 + 210 + obj-$(CONFIG_CRYPTO_DF80090A) += df_sp80090a.o 208 211 209 212 obj-$(CONFIG_CRYPTO_KRB5) += krb5/
+20
crypto/aead.c
··· 120 120 struct aead_alg *alg = crypto_aead_alg(aead); 121 121 122 122 crypto_aead_set_flags(aead, CRYPTO_TFM_NEED_KEY); 123 + crypto_aead_set_reqsize(aead, crypto_tfm_alg_reqsize(tfm)); 123 124 124 125 aead->authsize = alg->maxauthsize; 125 126 ··· 204 203 return crypto_alloc_tfm(alg_name, &crypto_aead_type, type, mask); 205 204 } 206 205 EXPORT_SYMBOL_GPL(crypto_alloc_aead); 206 + 207 + struct crypto_sync_aead *crypto_alloc_sync_aead(const char *alg_name, u32 type, u32 mask) 208 + { 209 + struct crypto_aead *tfm; 210 + 211 + /* Only sync algorithms are allowed. */ 212 + mask |= CRYPTO_ALG_ASYNC; 213 + type &= ~(CRYPTO_ALG_ASYNC); 214 + 215 + tfm = crypto_alloc_tfm(alg_name, &crypto_aead_type, type, mask); 216 + 217 + if (!IS_ERR(tfm) && WARN_ON(crypto_aead_reqsize(tfm) > MAX_SYNC_AEAD_REQSIZE)) { 218 + crypto_free_aead(tfm); 219 + return ERR_PTR(-EINVAL); 220 + } 221 + 222 + return (struct crypto_sync_aead *)tfm; 223 + } 224 + EXPORT_SYMBOL_GPL(crypto_alloc_sync_aead); 207 225 208 226 int crypto_has_aead(const char *alg_name, u32 type, u32 mask) 209 227 {
+2 -3
crypto/af_alg.c
··· 1212 1212 if (unlikely(!areq)) 1213 1213 return ERR_PTR(-ENOMEM); 1214 1214 1215 + memset(areq, 0, areqlen); 1216 + 1215 1217 ctx->inflight = true; 1216 1218 1217 1219 areq->areqlen = areqlen; 1218 1220 areq->sk = sk; 1219 1221 areq->first_rsgl.sgl.sgt.sgl = areq->first_rsgl.sgl.sgl; 1220 - areq->last_rsgl = NULL; 1221 1222 INIT_LIST_HEAD(&areq->rsgl_list); 1222 - areq->tsgl = NULL; 1223 - areq->tsgl_entries = 0; 1224 1223 1225 1224 return areq; 1226 1225 }
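The change above (and the matching ones in algif_hash and algif_rng below) swaps piecemeal zeroing of individual fields for one `memset()` of the freshly allocated context, so any field added later starts out zeroed by default. A minimal user-space sketch of the pattern (the struct is illustrative, not the real `af_alg_async_req`):

```c
#include <stdlib.h>
#include <string.h>

struct areq_like {
	unsigned int areqlen;
	void *last_rsgl;	/* previously zeroed one by one */
	void *tsgl;
	unsigned int tsgl_entries;
};

/* Allocate and zero the whole context in one step, then set only the
 * fields that need non-zero initial values. */
static struct areq_like *areq_alloc(unsigned int areqlen)
{
	struct areq_like *areq = malloc(areqlen);

	if (!areq)
		return NULL;
	memset(areq, 0, areqlen);
	areq->areqlen = areqlen;
	return areq;
}
```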
+16 -2
crypto/ahash.c
··· 423 423 424 424 req->nbytes += nonzero - blen; 425 425 426 - blen = err < 0 ? 0 : err + nonzero; 426 + blen = 0; 427 + if (err >= 0) { 428 + blen = err + nonzero; 429 + err = 0; 430 + } 427 431 if (ahash_request_isvirt(req)) 428 432 memcpy(buf, req->svirt + req->nbytes - blen, blen); 429 433 else ··· 665 661 in); 666 662 if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) 667 663 return -ENOKEY; 664 + if (crypto_ahash_block_only(tfm)) { 665 + unsigned int reqsize = crypto_ahash_reqsize(tfm); 666 + u8 *buf = ahash_request_ctx(req); 667 + 668 + buf[reqsize - 1] = 0; 669 + } 668 670 return crypto_ahash_alg(tfm)->import_core(req, in); 669 671 } 670 672 EXPORT_SYMBOL_GPL(crypto_ahash_import_core); ··· 684 674 if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) 685 675 return -ENOKEY; 686 676 if (crypto_ahash_block_only(tfm)) { 677 + unsigned int plen = crypto_ahash_blocksize(tfm) + 1; 687 678 unsigned int reqsize = crypto_ahash_reqsize(tfm); 679 + unsigned int ss = crypto_ahash_statesize(tfm); 688 680 u8 *buf = ahash_request_ctx(req); 689 681 690 - buf[reqsize - 1] = 0; 682 + memcpy(buf + reqsize - plen, in + ss - plen, plen); 683 + if (buf[reqsize - 1] >= plen) 684 + return -EOVERFLOW; 691 685 } 692 686 return crypto_ahash_alg(tfm)->import(req, in); 693 687 }
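In the first hunk, a non-negative `err` from the core update means "this many trailing bytes were left unprocessed"; the old code folded it into `blen` but then let the positive value escape to the caller as if it were an error code. A hedged sketch of the corrected bookkeeping (the function name and signature are illustrative):

```c
/* err < 0: real error; err >= 0: number of unprocessed trailing bytes
 * that must be re-buffered together with the `nonzero` carry byte. */
static int ahash_finish_bookkeeping(int err, unsigned int nonzero,
				    unsigned int *blen)
{
	*blen = 0;
	if (err >= 0) {
		*blen = err + nonzero;	/* bytes to copy back into the buffer */
		err = 0;		/* a positive count is not an error */
	}
	return err;
}
```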
+1 -2
crypto/algif_hash.c
··· 416 416 if (!ctx) 417 417 return -ENOMEM; 418 418 419 - ctx->result = NULL; 419 + memset(ctx, 0, len); 420 420 ctx->len = len; 421 - ctx->more = false; 422 421 crypto_init_wait(&ctx->wait); 423 422 424 423 ask->private = ctx;
+1 -2
crypto/algif_rng.c
··· 248 248 if (!ctx) 249 249 return -ENOMEM; 250 250 251 + memset(ctx, 0, len); 251 252 ctx->len = len; 252 - ctx->addtl = NULL; 253 - ctx->addtl_len = 0; 254 253 255 254 /* 256 255 * No seeding done at that point -- if multiple accepts are
-474
crypto/ansi_cprng.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * PRNG: Pseudo Random Number Generator 4 - * Based on NIST Recommended PRNG From ANSI X9.31 Appendix A.2.4 using 5 - * AES 128 cipher 6 - * 7 - * (C) Neil Horman <nhorman@tuxdriver.com> 8 - */ 9 - 10 - #include <crypto/internal/cipher.h> 11 - #include <crypto/internal/rng.h> 12 - #include <linux/err.h> 13 - #include <linux/init.h> 14 - #include <linux/module.h> 15 - #include <linux/moduleparam.h> 16 - #include <linux/string.h> 17 - 18 - #define DEFAULT_PRNG_KEY "0123456789abcdef" 19 - #define DEFAULT_PRNG_KSZ 16 20 - #define DEFAULT_BLK_SZ 16 21 - #define DEFAULT_V_SEED "zaybxcwdveuftgsh" 22 - 23 - /* 24 - * Flags for the prng_context flags field 25 - */ 26 - 27 - #define PRNG_FIXED_SIZE 0x1 28 - #define PRNG_NEED_RESET 0x2 29 - 30 - /* 31 - * Note: DT is our counter value 32 - * I is our intermediate value 33 - * V is our seed vector 34 - * See http://csrc.nist.gov/groups/STM/cavp/documents/rng/931rngext.pdf 35 - * for implementation details 36 - */ 37 - 38 - 39 - struct prng_context { 40 - spinlock_t prng_lock; 41 - unsigned char rand_data[DEFAULT_BLK_SZ]; 42 - unsigned char last_rand_data[DEFAULT_BLK_SZ]; 43 - unsigned char DT[DEFAULT_BLK_SZ]; 44 - unsigned char I[DEFAULT_BLK_SZ]; 45 - unsigned char V[DEFAULT_BLK_SZ]; 46 - u32 rand_data_valid; 47 - struct crypto_cipher *tfm; 48 - u32 flags; 49 - }; 50 - 51 - static int dbg; 52 - 53 - static void hexdump(char *note, unsigned char *buf, unsigned int len) 54 - { 55 - if (dbg) { 56 - printk(KERN_CRIT "%s", note); 57 - print_hex_dump(KERN_CONT, "", DUMP_PREFIX_OFFSET, 58 - 16, 1, 59 - buf, len, false); 60 - } 61 - } 62 - 63 - #define dbgprint(format, args...) 
do {\ 64 - if (dbg)\ 65 - printk(format, ##args);\ 66 - } while (0) 67 - 68 - static void xor_vectors(unsigned char *in1, unsigned char *in2, 69 - unsigned char *out, unsigned int size) 70 - { 71 - int i; 72 - 73 - for (i = 0; i < size; i++) 74 - out[i] = in1[i] ^ in2[i]; 75 - 76 - } 77 - /* 78 - * Returns DEFAULT_BLK_SZ bytes of random data per call 79 - * returns 0 if generation succeeded, <0 if something went wrong 80 - */ 81 - static int _get_more_prng_bytes(struct prng_context *ctx, int cont_test) 82 - { 83 - int i; 84 - unsigned char tmp[DEFAULT_BLK_SZ]; 85 - unsigned char *output = NULL; 86 - 87 - 88 - dbgprint(KERN_CRIT "Calling _get_more_prng_bytes for context %p\n", 89 - ctx); 90 - 91 - hexdump("Input DT: ", ctx->DT, DEFAULT_BLK_SZ); 92 - hexdump("Input I: ", ctx->I, DEFAULT_BLK_SZ); 93 - hexdump("Input V: ", ctx->V, DEFAULT_BLK_SZ); 94 - 95 - /* 96 - * This algorithm is a 3 stage state machine 97 - */ 98 - for (i = 0; i < 3; i++) { 99 - 100 - switch (i) { 101 - case 0: 102 - /* 103 - * Start by encrypting the counter value 104 - * This gives us an intermediate value I 105 - */ 106 - memcpy(tmp, ctx->DT, DEFAULT_BLK_SZ); 107 - output = ctx->I; 108 - hexdump("tmp stage 0: ", tmp, DEFAULT_BLK_SZ); 109 - break; 110 - case 1: 111 - 112 - /* 113 - * Next xor I with our secret vector V 114 - * encrypt that result to obtain our 115 - * pseudo random data which we output 116 - */ 117 - xor_vectors(ctx->I, ctx->V, tmp, DEFAULT_BLK_SZ); 118 - hexdump("tmp stage 1: ", tmp, DEFAULT_BLK_SZ); 119 - output = ctx->rand_data; 120 - break; 121 - case 2: 122 - /* 123 - * First check that we didn't produce the same 124 - * random data that we did last time around through this 125 - */ 126 - if (!memcmp(ctx->rand_data, ctx->last_rand_data, 127 - DEFAULT_BLK_SZ)) { 128 - if (cont_test) { 129 - panic("cprng %p Failed repetition check!\n", 130 - ctx); 131 - } 132 - 133 - printk(KERN_ERR 134 - "ctx %p Failed repetition check!\n", 135 - ctx); 136 - 137 - ctx->flags |= 
PRNG_NEED_RESET; 138 - return -EINVAL; 139 - } 140 - memcpy(ctx->last_rand_data, ctx->rand_data, 141 - DEFAULT_BLK_SZ); 142 - 143 - /* 144 - * Lastly xor the random data with I 145 - * and encrypt that to obtain a new secret vector V 146 - */ 147 - xor_vectors(ctx->rand_data, ctx->I, tmp, 148 - DEFAULT_BLK_SZ); 149 - output = ctx->V; 150 - hexdump("tmp stage 2: ", tmp, DEFAULT_BLK_SZ); 151 - break; 152 - } 153 - 154 - 155 - /* do the encryption */ 156 - crypto_cipher_encrypt_one(ctx->tfm, output, tmp); 157 - 158 - } 159 - 160 - /* 161 - * Now update our DT value 162 - */ 163 - for (i = DEFAULT_BLK_SZ - 1; i >= 0; i--) { 164 - ctx->DT[i] += 1; 165 - if (ctx->DT[i] != 0) 166 - break; 167 - } 168 - 169 - dbgprint("Returning new block for context %p\n", ctx); 170 - ctx->rand_data_valid = 0; 171 - 172 - hexdump("Output DT: ", ctx->DT, DEFAULT_BLK_SZ); 173 - hexdump("Output I: ", ctx->I, DEFAULT_BLK_SZ); 174 - hexdump("Output V: ", ctx->V, DEFAULT_BLK_SZ); 175 - hexdump("New Random Data: ", ctx->rand_data, DEFAULT_BLK_SZ); 176 - 177 - return 0; 178 - } 179 - 180 - /* Our exported functions */ 181 - static int get_prng_bytes(char *buf, size_t nbytes, struct prng_context *ctx, 182 - int do_cont_test) 183 - { 184 - unsigned char *ptr = buf; 185 - unsigned int byte_count = (unsigned int)nbytes; 186 - int err; 187 - 188 - 189 - spin_lock_bh(&ctx->prng_lock); 190 - 191 - err = -EINVAL; 192 - if (ctx->flags & PRNG_NEED_RESET) 193 - goto done; 194 - 195 - /* 196 - * If the FIXED_SIZE flag is on, only return whole blocks of 197 - * pseudo random data 198 - */ 199 - err = -EINVAL; 200 - if (ctx->flags & PRNG_FIXED_SIZE) { 201 - if (nbytes < DEFAULT_BLK_SZ) 202 - goto done; 203 - byte_count = DEFAULT_BLK_SZ; 204 - } 205 - 206 - /* 207 - * Return 0 in case of success as mandated by the kernel 208 - * crypto API interface definition. 
209 - */ 210 - err = 0; 211 - 212 - dbgprint(KERN_CRIT "getting %d random bytes for context %p\n", 213 - byte_count, ctx); 214 - 215 - 216 - remainder: 217 - if (ctx->rand_data_valid == DEFAULT_BLK_SZ) { 218 - if (_get_more_prng_bytes(ctx, do_cont_test) < 0) { 219 - memset(buf, 0, nbytes); 220 - err = -EINVAL; 221 - goto done; 222 - } 223 - } 224 - 225 - /* 226 - * Copy any data less than an entire block 227 - */ 228 - if (byte_count < DEFAULT_BLK_SZ) { 229 - empty_rbuf: 230 - while (ctx->rand_data_valid < DEFAULT_BLK_SZ) { 231 - *ptr = ctx->rand_data[ctx->rand_data_valid]; 232 - ptr++; 233 - byte_count--; 234 - ctx->rand_data_valid++; 235 - if (byte_count == 0) 236 - goto done; 237 - } 238 - } 239 - 240 - /* 241 - * Now copy whole blocks 242 - */ 243 - for (; byte_count >= DEFAULT_BLK_SZ; byte_count -= DEFAULT_BLK_SZ) { 244 - if (ctx->rand_data_valid == DEFAULT_BLK_SZ) { 245 - if (_get_more_prng_bytes(ctx, do_cont_test) < 0) { 246 - memset(buf, 0, nbytes); 247 - err = -EINVAL; 248 - goto done; 249 - } 250 - } 251 - if (ctx->rand_data_valid > 0) 252 - goto empty_rbuf; 253 - memcpy(ptr, ctx->rand_data, DEFAULT_BLK_SZ); 254 - ctx->rand_data_valid += DEFAULT_BLK_SZ; 255 - ptr += DEFAULT_BLK_SZ; 256 - } 257 - 258 - /* 259 - * Now go back and get any remaining partial block 260 - */ 261 - if (byte_count) 262 - goto remainder; 263 - 264 - done: 265 - spin_unlock_bh(&ctx->prng_lock); 266 - dbgprint(KERN_CRIT "returning %d from get_prng_bytes in context %p\n", 267 - err, ctx); 268 - return err; 269 - } 270 - 271 - static void free_prng_context(struct prng_context *ctx) 272 - { 273 - crypto_free_cipher(ctx->tfm); 274 - } 275 - 276 - static int reset_prng_context(struct prng_context *ctx, 277 - const unsigned char *key, size_t klen, 278 - const unsigned char *V, const unsigned char *DT) 279 - { 280 - int ret; 281 - const unsigned char *prng_key; 282 - 283 - spin_lock_bh(&ctx->prng_lock); 284 - ctx->flags |= PRNG_NEED_RESET; 285 - 286 - prng_key = (key != NULL) ? 
key : (unsigned char *)DEFAULT_PRNG_KEY; 287 - 288 - if (!key) 289 - klen = DEFAULT_PRNG_KSZ; 290 - 291 - if (V) 292 - memcpy(ctx->V, V, DEFAULT_BLK_SZ); 293 - else 294 - memcpy(ctx->V, DEFAULT_V_SEED, DEFAULT_BLK_SZ); 295 - 296 - if (DT) 297 - memcpy(ctx->DT, DT, DEFAULT_BLK_SZ); 298 - else 299 - memset(ctx->DT, 0, DEFAULT_BLK_SZ); 300 - 301 - memset(ctx->rand_data, 0, DEFAULT_BLK_SZ); 302 - memset(ctx->last_rand_data, 0, DEFAULT_BLK_SZ); 303 - 304 - ctx->rand_data_valid = DEFAULT_BLK_SZ; 305 - 306 - ret = crypto_cipher_setkey(ctx->tfm, prng_key, klen); 307 - if (ret) { 308 - dbgprint(KERN_CRIT "PRNG: setkey() failed flags=%x\n", 309 - crypto_cipher_get_flags(ctx->tfm)); 310 - goto out; 311 - } 312 - 313 - ret = 0; 314 - ctx->flags &= ~PRNG_NEED_RESET; 315 - out: 316 - spin_unlock_bh(&ctx->prng_lock); 317 - return ret; 318 - } 319 - 320 - static int cprng_init(struct crypto_tfm *tfm) 321 - { 322 - struct prng_context *ctx = crypto_tfm_ctx(tfm); 323 - 324 - spin_lock_init(&ctx->prng_lock); 325 - ctx->tfm = crypto_alloc_cipher("aes", 0, 0); 326 - if (IS_ERR(ctx->tfm)) { 327 - dbgprint(KERN_CRIT "Failed to alloc tfm for context %p\n", 328 - ctx); 329 - return PTR_ERR(ctx->tfm); 330 - } 331 - 332 - if (reset_prng_context(ctx, NULL, DEFAULT_PRNG_KSZ, NULL, NULL) < 0) 333 - return -EINVAL; 334 - 335 - /* 336 - * after allocation, we should always force the user to reset 337 - * so they don't inadvertently use the insecure default values 338 - * without specifying them intentially 339 - */ 340 - ctx->flags |= PRNG_NEED_RESET; 341 - return 0; 342 - } 343 - 344 - static void cprng_exit(struct crypto_tfm *tfm) 345 - { 346 - free_prng_context(crypto_tfm_ctx(tfm)); 347 - } 348 - 349 - static int cprng_get_random(struct crypto_rng *tfm, 350 - const u8 *src, unsigned int slen, 351 - u8 *rdata, unsigned int dlen) 352 - { 353 - struct prng_context *prng = crypto_rng_ctx(tfm); 354 - 355 - return get_prng_bytes(rdata, dlen, prng, 0); 356 - } 357 - 358 - /* 359 - * This is the 
cprng_registered reset method the seed value is 360 - * interpreted as the tuple { V KEY DT} 361 - * V and KEY are required during reset, and DT is optional, detected 362 - * as being present by testing the length of the seed 363 - */ 364 - static int cprng_reset(struct crypto_rng *tfm, 365 - const u8 *seed, unsigned int slen) 366 - { 367 - struct prng_context *prng = crypto_rng_ctx(tfm); 368 - const u8 *key = seed + DEFAULT_BLK_SZ; 369 - const u8 *dt = NULL; 370 - 371 - if (slen < DEFAULT_PRNG_KSZ + DEFAULT_BLK_SZ) 372 - return -EINVAL; 373 - 374 - if (slen >= (2 * DEFAULT_BLK_SZ + DEFAULT_PRNG_KSZ)) 375 - dt = key + DEFAULT_PRNG_KSZ; 376 - 377 - reset_prng_context(prng, key, DEFAULT_PRNG_KSZ, seed, dt); 378 - 379 - if (prng->flags & PRNG_NEED_RESET) 380 - return -EINVAL; 381 - return 0; 382 - } 383 - 384 - #ifdef CONFIG_CRYPTO_FIPS 385 - static int fips_cprng_get_random(struct crypto_rng *tfm, 386 - const u8 *src, unsigned int slen, 387 - u8 *rdata, unsigned int dlen) 388 - { 389 - struct prng_context *prng = crypto_rng_ctx(tfm); 390 - 391 - return get_prng_bytes(rdata, dlen, prng, 1); 392 - } 393 - 394 - static int fips_cprng_reset(struct crypto_rng *tfm, 395 - const u8 *seed, unsigned int slen) 396 - { 397 - u8 rdata[DEFAULT_BLK_SZ]; 398 - const u8 *key = seed + DEFAULT_BLK_SZ; 399 - int rc; 400 - 401 - struct prng_context *prng = crypto_rng_ctx(tfm); 402 - 403 - if (slen < DEFAULT_PRNG_KSZ + DEFAULT_BLK_SZ) 404 - return -EINVAL; 405 - 406 - /* fips strictly requires seed != key */ 407 - if (!memcmp(seed, key, DEFAULT_PRNG_KSZ)) 408 - return -EINVAL; 409 - 410 - rc = cprng_reset(tfm, seed, slen); 411 - 412 - if (!rc) 413 - goto out; 414 - 415 - /* this primes our continuity test */ 416 - rc = get_prng_bytes(rdata, DEFAULT_BLK_SZ, prng, 0); 417 - prng->rand_data_valid = DEFAULT_BLK_SZ; 418 - 419 - out: 420 - return rc; 421 - } 422 - #endif 423 - 424 - static struct rng_alg rng_algs[] = { { 425 - .generate = cprng_get_random, 426 - .seed = cprng_reset, 427 - 
.seedsize = DEFAULT_PRNG_KSZ + 2 * DEFAULT_BLK_SZ, 428 - .base = { 429 - .cra_name = "stdrng", 430 - .cra_driver_name = "ansi_cprng", 431 - .cra_priority = 100, 432 - .cra_ctxsize = sizeof(struct prng_context), 433 - .cra_module = THIS_MODULE, 434 - .cra_init = cprng_init, 435 - .cra_exit = cprng_exit, 436 - } 437 - #ifdef CONFIG_CRYPTO_FIPS 438 - }, { 439 - .generate = fips_cprng_get_random, 440 - .seed = fips_cprng_reset, 441 - .seedsize = DEFAULT_PRNG_KSZ + 2 * DEFAULT_BLK_SZ, 442 - .base = { 443 - .cra_name = "fips(ansi_cprng)", 444 - .cra_driver_name = "fips_ansi_cprng", 445 - .cra_priority = 300, 446 - .cra_ctxsize = sizeof(struct prng_context), 447 - .cra_module = THIS_MODULE, 448 - .cra_init = cprng_init, 449 - .cra_exit = cprng_exit, 450 - } 451 - #endif 452 - } }; 453 - 454 - /* Module initalization */ 455 - static int __init prng_mod_init(void) 456 - { 457 - return crypto_register_rngs(rng_algs, ARRAY_SIZE(rng_algs)); 458 - } 459 - 460 - static void __exit prng_mod_fini(void) 461 - { 462 - crypto_unregister_rngs(rng_algs, ARRAY_SIZE(rng_algs)); 463 - } 464 - 465 - MODULE_LICENSE("GPL"); 466 - MODULE_DESCRIPTION("Software Pseudo Random Number Generator"); 467 - MODULE_AUTHOR("Neil Horman <nhorman@tuxdriver.com>"); 468 - module_param(dbg, int, 0); 469 - MODULE_PARM_DESC(dbg, "Boolean to enable debugging (0/1 == off/on)"); 470 - module_init(prng_mod_init); 471 - module_exit(prng_mod_fini); 472 - MODULE_ALIAS_CRYPTO("stdrng"); 473 - MODULE_ALIAS_CRYPTO("ansi_cprng"); 474 - MODULE_IMPORT_NS("CRYPTO_INTERNAL");
+9 -3
crypto/asymmetric_keys/asymmetric_type.c
··· 11 11 #include <crypto/public_key.h> 12 12 #include <linux/seq_file.h> 13 13 #include <linux/module.h> 14 + #include <linux/overflow.h> 14 15 #include <linux/slab.h> 15 16 #include <linux/ctype.h> 16 17 #include <keys/system_keyring.h> ··· 142 141 size_t len_2) 143 142 { 144 143 struct asymmetric_key_id *kid; 144 + size_t kid_sz; 145 + size_t len; 145 146 146 - kid = kmalloc(sizeof(struct asymmetric_key_id) + len_1 + len_2, 147 - GFP_KERNEL); 147 + if (check_add_overflow(len_1, len_2, &len)) 148 + return ERR_PTR(-EOVERFLOW); 149 + if (check_add_overflow(sizeof(struct asymmetric_key_id), len, &kid_sz)) 150 + return ERR_PTR(-EOVERFLOW); 151 + kid = kmalloc(kid_sz, GFP_KERNEL); 148 152 if (!kid) 149 153 return ERR_PTR(-ENOMEM); 150 - kid->len = len_1 + len_2; 154 + kid->len = len; 151 155 memcpy(kid->data, val_1, len_1); 152 156 memcpy(kid->data + len_1, val_2, len_2); 153 157 return kid;
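`check_add_overflow()` wraps the compiler's `__builtin_add_overflow()`; the two chained checks above ensure that neither `len_1 + len_2` nor the header-plus-payload total wraps around before the size reaches `kmalloc()`. A user-space sketch of the same computation (the helper name is illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* Compute hdr + len_1 + len_2 with overflow checking, mirroring the
 * chained check_add_overflow() calls in asymmetric_key_generate_id(). */
static int key_id_alloc_size(size_t hdr, size_t len_1, size_t len_2,
			     size_t *out)
{
	size_t len;

	if (__builtin_add_overflow(len_1, len_2, &len) ||
	    __builtin_add_overflow(hdr, len, out))
		return -1;	/* the kernel code returns ERR_PTR(-EOVERFLOW) */
	return 0;
}
```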
+5 -2
crypto/asymmetric_keys/restrict.c
··· 17 17 18 18 #ifndef MODULE 19 19 static struct { 20 - struct asymmetric_key_id id; 21 - unsigned char data[10]; 20 + /* Must be last as it ends in a flexible-array member. */ 21 + TRAILING_OVERLAP(struct asymmetric_key_id, id, data, 22 + unsigned char data[10]; 23 + ); 22 24 } cakey; 25 + static_assert(offsetof(typeof(cakey), id.data) == offsetof(typeof(cakey), data)); 23 26 24 27 static int __init ca_keys_setup(char *str) 25 28 {
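`TRAILING_OVERLAP()` (from `linux/overflow.h`) overlays fixed backing storage on top of a structure that ends in a flexible-array member, which is what silences `-Wflex-array-member-not-at-end` here. A hedged user-space sketch of the layout it produces, written with a plain anonymous union instead of the kernel macro (relies on the common GCC/Clang extension permitting a flexible-array struct as a union member):

```c
#include <stddef.h>

struct key_id_like {
	unsigned short len;
	unsigned char data[];		/* flexible array member */
};

/* Backing storage overlaid on the flexible array: accesses through
 * cakey.id.data land in cakey.data, and vice versa. */
struct cakey_overlay {
	union {
		struct key_id_like id;
		struct {
			unsigned short pad_len;	/* same bytes as id.len */
			unsigned char data[10];	/* fixed storage for id.data */
		};
	};
};

static struct cakey_overlay cakey;

/* The invariant the new static_assert in restrict.c pins down. */
_Static_assert(offsetof(struct cakey_overlay, id.data) ==
	       offsetof(struct cakey_overlay, data),
	       "overlay must line up with the flexible array");
```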
+1 -1
crypto/asymmetric_keys/x509_cert_parser.c
··· 60 60 */ 61 61 struct x509_certificate *x509_cert_parse(const void *data, size_t datalen) 62 62 { 63 - struct x509_certificate *cert __free(x509_free_certificate); 63 + struct x509_certificate *cert __free(x509_free_certificate) = NULL; 64 64 struct x509_parse_context *ctx __free(kfree) = NULL; 65 65 struct asymmetric_key_id *kid; 66 66 long ret;
+1 -1
crypto/asymmetric_keys/x509_public_key.c
··· 148 148 */ 149 149 static int x509_key_preparse(struct key_preparsed_payload *prep) 150 150 { 151 - struct x509_certificate *cert __free(x509_free_certificate); 151 + struct x509_certificate *cert __free(x509_free_certificate) = NULL; 152 152 struct asymmetric_key_ids *kids __free(kfree) = NULL; 153 153 char *p, *desc __free(kfree) = NULL; 154 154 const char *q;
+50 -25
crypto/authenc.c
··· 37 37 38 38 static void authenc_request_complete(struct aead_request *req, int err) 39 39 { 40 - if (err != -EINPROGRESS) 40 + if (err != -EINPROGRESS && err != -EBUSY) 41 41 aead_request_complete(req, err); 42 42 } 43 43 ··· 107 107 return err; 108 108 } 109 109 110 - static void authenc_geniv_ahash_done(void *data, int err) 110 + static void authenc_geniv_ahash_finish(struct aead_request *req) 111 111 { 112 - struct aead_request *req = data; 113 112 struct crypto_aead *authenc = crypto_aead_reqtfm(req); 114 113 struct aead_instance *inst = aead_alg_instance(authenc); 115 114 struct authenc_instance_ctx *ictx = aead_instance_ctx(inst); 116 115 struct authenc_request_ctx *areq_ctx = aead_request_ctx(req); 117 116 struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff); 118 117 119 - if (err) 120 - goto out; 121 - 122 118 scatterwalk_map_and_copy(ahreq->result, req->dst, 123 119 req->assoclen + req->cryptlen, 124 120 crypto_aead_authsize(authenc), 1); 121 + } 125 122 126 - out: 123 + static void authenc_geniv_ahash_done(void *data, int err) 124 + { 125 + struct aead_request *req = data; 126 + 127 + if (!err) 128 + authenc_geniv_ahash_finish(req); 127 129 aead_request_complete(req, err); 128 130 } 129 131 130 - static int crypto_authenc_genicv(struct aead_request *req, unsigned int flags) 132 + /* 133 + * Used when the ahash request was invoked in the async callback context 134 + * of the previous skcipher request. Eat any EINPROGRESS notifications. 
135 + */ 136 + static void authenc_geniv_ahash_done2(void *data, int err) 137 + { 138 + struct aead_request *req = data; 139 + 140 + if (!err) 141 + authenc_geniv_ahash_finish(req); 142 + authenc_request_complete(req, err); 143 + } 144 + 145 + static int crypto_authenc_genicv(struct aead_request *req, unsigned int mask) 131 146 { 132 147 struct crypto_aead *authenc = crypto_aead_reqtfm(req); 133 148 struct aead_instance *inst = aead_alg_instance(authenc); ··· 151 136 struct crypto_ahash *auth = ctx->auth; 152 137 struct authenc_request_ctx *areq_ctx = aead_request_ctx(req); 153 138 struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff); 139 + unsigned int flags = aead_request_flags(req) & ~mask; 154 140 u8 *hash = areq_ctx->tail; 155 141 int err; 156 142 ··· 159 143 ahash_request_set_crypt(ahreq, req->dst, hash, 160 144 req->assoclen + req->cryptlen); 161 145 ahash_request_set_callback(ahreq, flags, 162 - authenc_geniv_ahash_done, req); 146 + mask ? authenc_geniv_ahash_done2 : 147 + authenc_geniv_ahash_done, req); 163 148 164 149 err = crypto_ahash_digest(ahreq); 165 150 if (err) ··· 176 159 { 177 160 struct aead_request *areq = data; 178 161 179 - if (err) 180 - goto out; 181 - 182 - err = crypto_authenc_genicv(areq, 0); 183 - 184 - out: 162 + if (err) { 163 + aead_request_complete(areq, err); 164 + return; 165 + } 166 + err = crypto_authenc_genicv(areq, CRYPTO_TFM_REQ_MAY_SLEEP); 185 167 authenc_request_complete(areq, err); 186 168 } 187 169 ··· 215 199 if (err) 216 200 return err; 217 201 218 - return crypto_authenc_genicv(req, aead_request_flags(req)); 202 + return crypto_authenc_genicv(req, 0); 203 + } 204 + 205 + static void authenc_decrypt_tail_done(void *data, int err) 206 + { 207 + struct aead_request *req = data; 208 + 209 + authenc_request_complete(req, err); 219 210 } 220 211 221 212 static int crypto_authenc_decrypt_tail(struct aead_request *req, 222 - unsigned int flags) 213 + unsigned int mask) 223 214 { 224 215 struct crypto_aead 
*authenc = crypto_aead_reqtfm(req); 225 216 struct aead_instance *inst = aead_alg_instance(authenc); ··· 237 214 struct skcipher_request *skreq = (void *)(areq_ctx->tail + 238 215 ictx->reqoff); 239 216 unsigned int authsize = crypto_aead_authsize(authenc); 217 + unsigned int flags = aead_request_flags(req) & ~mask; 240 218 u8 *ihash = ahreq->result + authsize; 241 219 struct scatterlist *src, *dst; 242 220 ··· 254 230 255 231 skcipher_request_set_tfm(skreq, ctx->enc); 256 232 skcipher_request_set_callback(skreq, flags, 257 - req->base.complete, req->base.data); 233 + mask ? authenc_decrypt_tail_done : 234 + req->base.complete, 235 + mask ? req : req->base.data); 258 236 skcipher_request_set_crypt(skreq, src, dst, 259 237 req->cryptlen - authsize, req->iv); 260 238 ··· 267 241 { 268 242 struct aead_request *req = data; 269 243 270 - if (err) 271 - goto out; 272 - 273 - err = crypto_authenc_decrypt_tail(req, 0); 274 - 275 - out: 244 + if (err) { 245 + aead_request_complete(req, err); 246 + return; 247 + } 248 + err = crypto_authenc_decrypt_tail(req, CRYPTO_TFM_REQ_MAY_SLEEP); 276 249 authenc_request_complete(req, err); 277 250 } 278 251 ··· 298 273 if (err) 299 274 return err; 300 275 301 - return crypto_authenc_decrypt_tail(req, aead_request_flags(req)); 276 + return crypto_authenc_decrypt_tail(req, 0); 302 277 } 303 278 304 279 static int crypto_authenc_init_tfm(struct crypto_aead *tfm)
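The recurring pattern in this file: a sub-request kicked off from another request's callback may report `-EINPROGRESS` (it went asynchronous) or `-EBUSY` (it was queued on the backlog), and neither is a final status, so `authenc_request_complete()` must swallow both rather than complete the parent AEAD request prematurely. A minimal sketch of that filter (errno constants as in the patch, helper name illustrative):

```c
#include <errno.h>

/* Returns nonzero only for final completion codes; -EINPROGRESS and
 * -EBUSY mean "still running", so the parent request must not be
 * completed with them. */
static int is_final_status(int err)
{
	return err != -EINPROGRESS && err != -EBUSY;
}
```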
+2 -1
crypto/deflate.c
··· 15 15 #include <linux/kernel.h> 16 16 #include <linux/module.h> 17 17 #include <linux/mutex.h> 18 + #include <linux/overflow.h> 18 19 #include <linux/percpu.h> 19 20 #include <linux/scatterlist.h> 20 21 #include <linux/slab.h> ··· 40 39 DEFLATE_DEF_MEMLEVEL)); 41 40 struct deflate_stream *ctx; 42 41 43 - ctx = kvmalloc(sizeof(*ctx) + size, GFP_KERNEL); 42 + ctx = kvmalloc(struct_size(ctx, workspace, size), GFP_KERNEL); 44 43 if (!ctx) 45 44 return ERR_PTR(-ENOMEM); 46 45
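`struct_size(ctx, workspace, size)` replaces the open-coded `sizeof(*ctx) + size`, computing the header plus trailing-array bytes with overflow saturation. A user-space sketch of what it evaluates to (struct and helper are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

struct stream_like {
	long state;			/* stand-in for the zlib stream header */
	unsigned char workspace[];	/* flexible array member */
};

/* struct_size(p, workspace, n) == sizeof(*p) + n * sizeof(workspace[0]),
 * saturating to SIZE_MAX on overflow so the allocation fails cleanly
 * instead of being undersized. */
static size_t stream_alloc_size(size_t n)
{
	size_t bytes;

	if (__builtin_mul_overflow(n, sizeof(unsigned char), &bytes) ||
	    __builtin_add_overflow(bytes, sizeof(struct stream_like), &bytes))
		return SIZE_MAX;
	return bytes;
}
```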
+232
crypto/df_sp80090a.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + /* 4 + * NIST SP800-90A DRBG derivation function 5 + * 6 + * Copyright (C) 2014, Stephan Mueller <smueller@chronox.de> 7 + */ 8 + 9 + #include <linux/errno.h> 10 + #include <linux/kernel.h> 11 + #include <linux/module.h> 12 + #include <linux/string.h> 13 + #include <crypto/aes.h> 14 + #include <crypto/df_sp80090a.h> 15 + #include <crypto/internal/drbg.h> 16 + 17 + static void drbg_kcapi_symsetkey(struct crypto_aes_ctx *aesctx, 18 + const unsigned char *key, 19 + u8 keylen); 20 + static void drbg_kcapi_symsetkey(struct crypto_aes_ctx *aesctx, 21 + const unsigned char *key, u8 keylen) 22 + { 23 + aes_expandkey(aesctx, key, keylen); 24 + } 25 + 26 + static void drbg_kcapi_sym(struct crypto_aes_ctx *aesctx, 27 + unsigned char *outval, 28 + const struct drbg_string *in, u8 blocklen_bytes) 29 + { 30 + /* there is only component in *in */ 31 + BUG_ON(in->len < blocklen_bytes); 32 + aes_encrypt(aesctx, outval, in->buf); 33 + } 34 + 35 + /* BCC function for CTR DRBG as defined in 10.4.3 */ 36 + 37 + static void drbg_ctr_bcc(struct crypto_aes_ctx *aesctx, 38 + unsigned char *out, const unsigned char *key, 39 + struct list_head *in, 40 + u8 blocklen_bytes, 41 + u8 keylen) 42 + { 43 + struct drbg_string *curr = NULL; 44 + struct drbg_string data; 45 + short cnt = 0; 46 + 47 + drbg_string_fill(&data, out, blocklen_bytes); 48 + 49 + /* 10.4.3 step 2 / 4 */ 50 + drbg_kcapi_symsetkey(aesctx, key, keylen); 51 + list_for_each_entry(curr, in, list) { 52 + const unsigned char *pos = curr->buf; 53 + size_t len = curr->len; 54 + /* 10.4.3 step 4.1 */ 55 + while (len) { 56 + /* 10.4.3 step 4.2 */ 57 + if (blocklen_bytes == cnt) { 58 + cnt = 0; 59 + drbg_kcapi_sym(aesctx, out, &data, blocklen_bytes); 60 + } 61 + out[cnt] ^= *pos; 62 + pos++; 63 + cnt++; 64 + len--; 65 + } 66 + } 67 + /* 10.4.3 step 4.2 for last block */ 68 + if (cnt) 69 + drbg_kcapi_sym(aesctx, out, &data, blocklen_bytes); 70 + } 71 + 72 + /* 73 + * scratchpad usage: 
drbg_ctr_update is interlinked with crypto_drbg_ctr_df 74 + * (and drbg_ctr_bcc, but this function does not need any temporary buffers), 75 + * the scratchpad is used as follows: 76 + * drbg_ctr_update: 77 + * temp 78 + * start: drbg->scratchpad 79 + * length: drbg_statelen(drbg) + drbg_blocklen(drbg) 80 + * note: the cipher writing into this variable works 81 + * blocklen-wise. Now, when the statelen is not a multiple 82 + * of blocklen, the generation loop below "spills over" 83 + * by at most blocklen. Thus, we need to give sufficient 84 + * memory. 85 + * df_data 86 + * start: drbg->scratchpad + 87 + * drbg_statelen(drbg) + drbg_blocklen(drbg) 88 + * length: drbg_statelen(drbg) 89 + * 90 + * crypto_drbg_ctr_df: 91 + * pad 92 + * start: df_data + drbg_statelen(drbg) 93 + * length: drbg_blocklen(drbg) 94 + * iv 95 + * start: pad + drbg_blocklen(drbg) 96 + * length: drbg_blocklen(drbg) 97 + * temp 98 + * start: iv + drbg_blocklen(drbg) 99 + * length: drbg_statelen(drbg) + drbg_blocklen(drbg) 100 + * note: temp is the buffer that the BCC function operates 101 + * on. BCC operates blockwise. drbg_statelen(drbg) 102 + * is sufficient when the DRBG state length is a multiple 103 + * of the block size. For AES192 (and maybe other ciphers) 104 + * this is not correct and the length for temp is 105 + * insufficient (yes, that also means for such ciphers, 106 + * the final output of all BCC rounds is truncated). 107 + * Therefore, add drbg_blocklen(drbg) to cover all 108 + * possibilities. 
109 + * refer to crypto_drbg_ctr_df_datalen() to get required length 110 + */ 111 + 112 + /* Derivation Function for CTR DRBG as defined in 10.4.2 */ 113 + int crypto_drbg_ctr_df(struct crypto_aes_ctx *aesctx, 114 + unsigned char *df_data, size_t bytes_to_return, 115 + struct list_head *seedlist, 116 + u8 blocklen_bytes, 117 + u8 statelen) 118 + { 119 + unsigned char L_N[8]; 120 + /* S3 is input */ 121 + struct drbg_string S1, S2, S4, cipherin; 122 + LIST_HEAD(bcc_list); 123 + unsigned char *pad = df_data + statelen; 124 + unsigned char *iv = pad + blocklen_bytes; 125 + unsigned char *temp = iv + blocklen_bytes; 126 + size_t padlen = 0; 127 + unsigned int templen = 0; 128 + /* 10.4.2 step 7 */ 129 + unsigned int i = 0; 130 + /* 10.4.2 step 8 */ 131 + const unsigned char *K = (unsigned char *) 132 + "\x00\x01\x02\x03\x04\x05\x06\x07" 133 + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" 134 + "\x10\x11\x12\x13\x14\x15\x16\x17" 135 + "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"; 136 + unsigned char *X; 137 + size_t generated_len = 0; 138 + size_t inputlen = 0; 139 + struct drbg_string *seed = NULL; 140 + u8 keylen; 141 + 142 + memset(pad, 0, blocklen_bytes); 143 + memset(iv, 0, blocklen_bytes); 144 + keylen = statelen - blocklen_bytes; 145 + /* 10.4.2 step 1 is implicit as we work byte-wise */ 146 + 147 + /* 10.4.2 step 2 */ 148 + if ((512 / 8) < bytes_to_return) 149 + return -EINVAL; 150 + 151 + /* 10.4.2 step 2 -- calculate the entire length of all input data */ 152 + list_for_each_entry(seed, seedlist, list) 153 + inputlen += seed->len; 154 + drbg_cpu_to_be32(inputlen, &L_N[0]); 155 + 156 + /* 10.4.2 step 3 */ 157 + drbg_cpu_to_be32(bytes_to_return, &L_N[4]); 158 + 159 + /* 10.4.2 step 5: length is L_N, input_string, one byte, padding */ 160 + padlen = (inputlen + sizeof(L_N) + 1) % (blocklen_bytes); 161 + /* wrap the padlen appropriately */ 162 + if (padlen) 163 + padlen = blocklen_bytes - padlen; 164 + /* 165 + * pad / padlen contains the 0x80 byte and the following zero bytes. 
166 + * As the calculated padlen value only covers the number of zero 167 + * bytes, this value has to be incremented by one for the 0x80 byte. 168 + */ 169 + padlen++; 170 + pad[0] = 0x80; 171 + 172 + /* 10.4.2 step 4 -- first fill the linked list and then order it */ 173 + drbg_string_fill(&S1, iv, blocklen_bytes); 174 + list_add_tail(&S1.list, &bcc_list); 175 + drbg_string_fill(&S2, L_N, sizeof(L_N)); 176 + list_add_tail(&S2.list, &bcc_list); 177 + list_splice_tail(seedlist, &bcc_list); 178 + drbg_string_fill(&S4, pad, padlen); 179 + list_add_tail(&S4.list, &bcc_list); 180 + 181 + /* 10.4.2 step 9 */ 182 + while (templen < (keylen + (blocklen_bytes))) { 183 + /* 184 + * 10.4.2 step 9.1 - the padding is implicit as the buffer 185 + * holds zeros after allocation -- even the increment of i 186 + * is irrelevant as the increment remains within length of i 187 + */ 188 + drbg_cpu_to_be32(i, iv); 189 + /* 10.4.2 step 9.2 -- BCC and concatenation with temp */ 190 + drbg_ctr_bcc(aesctx, temp + templen, K, &bcc_list, 191 + blocklen_bytes, keylen); 192 + /* 10.4.2 step 9.3 */ 193 + i++; 194 + templen += blocklen_bytes; 195 + } 196 + 197 + /* 10.4.2 step 11 */ 198 + X = temp + (keylen); 199 + drbg_string_fill(&cipherin, X, blocklen_bytes); 200 + 201 + /* 10.4.2 step 12: overwriting of outval is implemented in next step */ 202 + 203 + /* 10.4.2 step 13 */ 204 + drbg_kcapi_symsetkey(aesctx, temp, keylen); 205 + while (generated_len < bytes_to_return) { 206 + short blocklen = 0; 207 + /* 208 + * 10.4.2 step 13.1: the truncation of the key length is 209 + * implicit as the key is only drbg_blocklen in size based on 210 + * the implementation of the cipher function callback 211 + */ 212 + drbg_kcapi_sym(aesctx, X, &cipherin, blocklen_bytes); 213 + blocklen = (blocklen_bytes < 214 + (bytes_to_return - generated_len)) ? 
215 + blocklen_bytes : 216 + (bytes_to_return - generated_len); 217 + /* 10.4.2 step 13.2 and 14 */ 218 + memcpy(df_data + generated_len, X, blocklen); 219 + generated_len += blocklen; 220 + } 221 + 222 + memset(iv, 0, blocklen_bytes); 223 + memset(temp, 0, statelen + blocklen_bytes); 224 + memset(pad, 0, blocklen_bytes); 225 + return 0; 226 + } 227 + EXPORT_SYMBOL_GPL(crypto_drbg_ctr_df); 228 + 229 + MODULE_IMPORT_NS("CRYPTO_INTERNAL"); 230 + MODULE_LICENSE("GPL v2"); 231 + MODULE_AUTHOR("Stephan Mueller <smueller@chronox.de>"); 232 + MODULE_DESCRIPTION("Derivation Function conformant to SP800-90A");
+13 -253
crypto/drbg.c
··· 98 98 */ 99 99 100 100 #include <crypto/drbg.h> 101 + #include <crypto/df_sp80090a.h> 101 102 #include <crypto/internal/cipher.h> 102 103 #include <linux/kernel.h> 103 104 #include <linux/jiffies.h> ··· 262 261 return 0; 263 262 } 264 263 265 - /* 266 - * Convert an integer into a byte representation of this integer. 267 - * The byte representation is big-endian 268 - * 269 - * @val value to be converted 270 - * @buf buffer holding the converted integer -- caller must ensure that 271 - * buffer size is at least 32 bit 272 - */ 273 - #if (defined(CONFIG_CRYPTO_DRBG_HASH) || defined(CONFIG_CRYPTO_DRBG_CTR)) 274 - static inline void drbg_cpu_to_be32(__u32 val, unsigned char *buf) 275 - { 276 - struct s { 277 - __be32 conv; 278 - }; 279 - struct s *conversion = (struct s *) buf; 280 - 281 - conversion->conv = cpu_to_be32(val); 282 - } 283 - #endif /* defined(CONFIG_CRYPTO_DRBG_HASH) || defined(CONFIG_CRYPTO_DRBG_CTR) */ 284 - 285 264 /****************************************************************** 286 265 * CTR DRBG callback functions 287 266 ******************************************************************/ ··· 275 294 MODULE_ALIAS_CRYPTO("drbg_pr_ctr_aes128"); 276 295 MODULE_ALIAS_CRYPTO("drbg_nopr_ctr_aes128"); 277 296 278 - static void drbg_kcapi_symsetkey(struct drbg_state *drbg, 279 - const unsigned char *key); 280 - static int drbg_kcapi_sym(struct drbg_state *drbg, unsigned char *outval, 281 - const struct drbg_string *in); 282 297 static int drbg_init_sym_kernel(struct drbg_state *drbg); 283 298 static int drbg_fini_sym_kernel(struct drbg_state *drbg); 284 299 static int drbg_kcapi_sym_ctr(struct drbg_state *drbg, ··· 282 305 u8 *outbuf, u32 outlen); 283 306 #define DRBG_OUTSCRATCHLEN 256 284 307 285 - /* BCC function for CTR DRBG as defined in 10.4.3 */ 286 - static int drbg_ctr_bcc(struct drbg_state *drbg, 287 - unsigned char *out, const unsigned char *key, 288 - struct list_head *in) 289 - { 290 - int ret = 0; 291 - struct drbg_string *curr = NULL; 
292 - struct drbg_string data; 293 - short cnt = 0; 294 - 295 - drbg_string_fill(&data, out, drbg_blocklen(drbg)); 296 - 297 - /* 10.4.3 step 2 / 4 */ 298 - drbg_kcapi_symsetkey(drbg, key); 299 - list_for_each_entry(curr, in, list) { 300 - const unsigned char *pos = curr->buf; 301 - size_t len = curr->len; 302 - /* 10.4.3 step 4.1 */ 303 - while (len) { 304 - /* 10.4.3 step 4.2 */ 305 - if (drbg_blocklen(drbg) == cnt) { 306 - cnt = 0; 307 - ret = drbg_kcapi_sym(drbg, out, &data); 308 - if (ret) 309 - return ret; 310 - } 311 - out[cnt] ^= *pos; 312 - pos++; 313 - cnt++; 314 - len--; 315 - } 316 - } 317 - /* 10.4.3 step 4.2 for last block */ 318 - if (cnt) 319 - ret = drbg_kcapi_sym(drbg, out, &data); 320 - 321 - return ret; 322 - } 323 - 324 - /* 325 - * scratchpad usage: drbg_ctr_update is interlinked with drbg_ctr_df 326 - * (and drbg_ctr_bcc, but this function does not need any temporary buffers), 327 - * the scratchpad is used as follows: 328 - * drbg_ctr_update: 329 - * temp 330 - * start: drbg->scratchpad 331 - * length: drbg_statelen(drbg) + drbg_blocklen(drbg) 332 - * note: the cipher writing into this variable works 333 - * blocklen-wise. Now, when the statelen is not a multiple 334 - * of blocklen, the generateion loop below "spills over" 335 - * by at most blocklen. Thus, we need to give sufficient 336 - * memory. 337 - * df_data 338 - * start: drbg->scratchpad + 339 - * drbg_statelen(drbg) + drbg_blocklen(drbg) 340 - * length: drbg_statelen(drbg) 341 - * 342 - * drbg_ctr_df: 343 - * pad 344 - * start: df_data + drbg_statelen(drbg) 345 - * length: drbg_blocklen(drbg) 346 - * iv 347 - * start: pad + drbg_blocklen(drbg) 348 - * length: drbg_blocklen(drbg) 349 - * temp 350 - * start: iv + drbg_blocklen(drbg) 351 - * length: drbg_satelen(drbg) + drbg_blocklen(drbg) 352 - * note: temp is the buffer that the BCC function operates 353 - * on. BCC operates blockwise. 
drbg_statelen(drbg) 354 - * is sufficient when the DRBG state length is a multiple 355 - * of the block size. For AES192 (and maybe other ciphers) 356 - * this is not correct and the length for temp is 357 - * insufficient (yes, that also means for such ciphers, 358 - * the final output of all BCC rounds are truncated). 359 - * Therefore, add drbg_blocklen(drbg) to cover all 360 - * possibilities. 361 - */ 362 - 363 - /* Derivation Function for CTR DRBG as defined in 10.4.2 */ 364 308 static int drbg_ctr_df(struct drbg_state *drbg, 365 309 unsigned char *df_data, size_t bytes_to_return, 366 310 struct list_head *seedlist) 367 311 { 368 - int ret = -EFAULT; 369 - unsigned char L_N[8]; 370 - /* S3 is input */ 371 - struct drbg_string S1, S2, S4, cipherin; 372 - LIST_HEAD(bcc_list); 373 - unsigned char *pad = df_data + drbg_statelen(drbg); 374 - unsigned char *iv = pad + drbg_blocklen(drbg); 375 - unsigned char *temp = iv + drbg_blocklen(drbg); 376 - size_t padlen = 0; 377 - unsigned int templen = 0; 378 - /* 10.4.2 step 7 */ 379 - unsigned int i = 0; 380 - /* 10.4.2 step 8 */ 381 - const unsigned char *K = (unsigned char *) 382 - "\x00\x01\x02\x03\x04\x05\x06\x07" 383 - "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" 384 - "\x10\x11\x12\x13\x14\x15\x16\x17" 385 - "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"; 386 - unsigned char *X; 387 - size_t generated_len = 0; 388 - size_t inputlen = 0; 389 - struct drbg_string *seed = NULL; 390 - 391 - memset(pad, 0, drbg_blocklen(drbg)); 392 - memset(iv, 0, drbg_blocklen(drbg)); 393 - 394 - /* 10.4.2 step 1 is implicit as we work byte-wise */ 395 - 396 - /* 10.4.2 step 2 */ 397 - if ((512/8) < bytes_to_return) 398 - return -EINVAL; 399 - 400 - /* 10.4.2 step 2 -- calculate the entire length of all input data */ 401 - list_for_each_entry(seed, seedlist, list) 402 - inputlen += seed->len; 403 - drbg_cpu_to_be32(inputlen, &L_N[0]); 404 - 405 - /* 10.4.2 step 3 */ 406 - drbg_cpu_to_be32(bytes_to_return, &L_N[4]); 407 - 408 - /* 10.4.2 step 5: length is 
L_N, input_string, one byte, padding */ 409 - padlen = (inputlen + sizeof(L_N) + 1) % (drbg_blocklen(drbg)); 410 - /* wrap the padlen appropriately */ 411 - if (padlen) 412 - padlen = drbg_blocklen(drbg) - padlen; 413 - /* 414 - * pad / padlen contains the 0x80 byte and the following zero bytes. 415 - * As the calculated padlen value only covers the number of zero 416 - * bytes, this value has to be incremented by one for the 0x80 byte. 417 - */ 418 - padlen++; 419 - pad[0] = 0x80; 420 - 421 - /* 10.4.2 step 4 -- first fill the linked list and then order it */ 422 - drbg_string_fill(&S1, iv, drbg_blocklen(drbg)); 423 - list_add_tail(&S1.list, &bcc_list); 424 - drbg_string_fill(&S2, L_N, sizeof(L_N)); 425 - list_add_tail(&S2.list, &bcc_list); 426 - list_splice_tail(seedlist, &bcc_list); 427 - drbg_string_fill(&S4, pad, padlen); 428 - list_add_tail(&S4.list, &bcc_list); 429 - 430 - /* 10.4.2 step 9 */ 431 - while (templen < (drbg_keylen(drbg) + (drbg_blocklen(drbg)))) { 432 - /* 433 - * 10.4.2 step 9.1 - the padding is implicit as the buffer 434 - * holds zeros after allocation -- even the increment of i 435 - * is irrelevant as the increment remains within length of i 436 - */ 437 - drbg_cpu_to_be32(i, iv); 438 - /* 10.4.2 step 9.2 -- BCC and concatenation with temp */ 439 - ret = drbg_ctr_bcc(drbg, temp + templen, K, &bcc_list); 440 - if (ret) 441 - goto out; 442 - /* 10.4.2 step 9.3 */ 443 - i++; 444 - templen += drbg_blocklen(drbg); 445 - } 446 - 447 - /* 10.4.2 step 11 */ 448 - X = temp + (drbg_keylen(drbg)); 449 - drbg_string_fill(&cipherin, X, drbg_blocklen(drbg)); 450 - 451 - /* 10.4.2 step 12: overwriting of outval is implemented in next step */ 452 - 453 - /* 10.4.2 step 13 */ 454 - drbg_kcapi_symsetkey(drbg, temp); 455 - while (generated_len < bytes_to_return) { 456 - short blocklen = 0; 457 - /* 458 - * 10.4.2 step 13.1: the truncation of the key length is 459 - * implicit as the key is only drbg_blocklen in size based on 460 - * the implementation of the 
cipher function callback 461 - */ 462 - ret = drbg_kcapi_sym(drbg, X, &cipherin); 463 - if (ret) 464 - goto out; 465 - blocklen = (drbg_blocklen(drbg) < 466 - (bytes_to_return - generated_len)) ? 467 - drbg_blocklen(drbg) : 468 - (bytes_to_return - generated_len); 469 - /* 10.4.2 step 13.2 and 14 */ 470 - memcpy(df_data + generated_len, X, blocklen); 471 - generated_len += blocklen; 472 - } 473 - 474 - ret = 0; 475 - 476 - out: 477 - memset(iv, 0, drbg_blocklen(drbg)); 478 - memset(temp, 0, drbg_statelen(drbg) + drbg_blocklen(drbg)); 479 - memset(pad, 0, drbg_blocklen(drbg)); 480 - return ret; 312 + return crypto_drbg_ctr_df(drbg->priv_data, df_data, drbg_statelen(drbg), 313 + seedlist, drbg_blocklen(drbg), drbg_statelen(drbg)); 481 314 } 482 315 483 316 /* ··· 1097 1310 sb_size = 0; 1098 1311 else if (drbg->core->flags & DRBG_CTR) 1099 1312 sb_size = drbg_statelen(drbg) + drbg_blocklen(drbg) + /* temp */ 1100 - drbg_statelen(drbg) + /* df_data */ 1101 - drbg_blocklen(drbg) + /* pad */ 1102 - drbg_blocklen(drbg) + /* iv */ 1103 - drbg_statelen(drbg) + drbg_blocklen(drbg); /* temp */ 1313 + crypto_drbg_ctr_df_datalen(drbg_statelen(drbg), 1314 + drbg_blocklen(drbg)); 1104 1315 else 1105 1316 sb_size = drbg_statelen(drbg) + drbg_blocklen(drbg); 1106 1317 ··· 1443 1658 #if defined(CONFIG_CRYPTO_DRBG_HASH) || defined(CONFIG_CRYPTO_DRBG_HMAC) 1444 1659 struct sdesc { 1445 1660 struct shash_desc shash; 1446 - char ctx[]; 1447 1661 }; 1448 1662 1449 1663 static int drbg_init_hash_kernel(struct drbg_state *drbg) ··· 1505 1721 #ifdef CONFIG_CRYPTO_DRBG_CTR 1506 1722 static int drbg_fini_sym_kernel(struct drbg_state *drbg) 1507 1723 { 1508 - struct crypto_cipher *tfm = 1509 - (struct crypto_cipher *)drbg->priv_data; 1510 - if (tfm) 1511 - crypto_free_cipher(tfm); 1724 + struct crypto_aes_ctx *aesctx = (struct crypto_aes_ctx *)drbg->priv_data; 1725 + 1726 + kfree(aesctx); 1512 1727 drbg->priv_data = NULL; 1513 1728 1514 1729 if (drbg->ctr_handle) ··· 1526 1743 1527 1744 static 
int drbg_init_sym_kernel(struct drbg_state *drbg) 1528 1745 { 1529 - struct crypto_cipher *tfm; 1746 + struct crypto_aes_ctx *aesctx; 1530 1747 struct crypto_skcipher *sk_tfm; 1531 1748 struct skcipher_request *req; 1532 1749 unsigned int alignmask; 1533 1750 char ctr_name[CRYPTO_MAX_ALG_NAME]; 1534 1751 1535 - tfm = crypto_alloc_cipher(drbg->core->backend_cra_name, 0, 0); 1536 - if (IS_ERR(tfm)) { 1537 - pr_info("DRBG: could not allocate cipher TFM handle: %s\n", 1538 - drbg->core->backend_cra_name); 1539 - return PTR_ERR(tfm); 1540 - } 1541 - BUG_ON(drbg_blocklen(drbg) != crypto_cipher_blocksize(tfm)); 1542 - drbg->priv_data = tfm; 1752 + aesctx = kzalloc(sizeof(*aesctx), GFP_KERNEL); 1753 + if (!aesctx) 1754 + return -ENOMEM; 1755 + drbg->priv_data = aesctx; 1543 1756 1544 1757 if (snprintf(ctr_name, CRYPTO_MAX_ALG_NAME, "ctr(%s)", 1545 1758 drbg->core->backend_cra_name) >= CRYPTO_MAX_ALG_NAME) { ··· 1577 1798 sg_init_one(&drbg->sg_out, drbg->outscratchpad, DRBG_OUTSCRATCHLEN); 1578 1799 1579 1800 return alignmask; 1580 - } 1581 - 1582 - static void drbg_kcapi_symsetkey(struct drbg_state *drbg, 1583 - const unsigned char *key) 1584 - { 1585 - struct crypto_cipher *tfm = drbg->priv_data; 1586 - 1587 - crypto_cipher_setkey(tfm, key, (drbg_keylen(drbg))); 1588 - } 1589 - 1590 - static int drbg_kcapi_sym(struct drbg_state *drbg, unsigned char *outval, 1591 - const struct drbg_string *in) 1592 - { 1593 - struct crypto_cipher *tfm = drbg->priv_data; 1594 - 1595 - /* there is only component in *in */ 1596 - BUG_ON(in->len < drbg_blocklen(drbg)); 1597 - crypto_cipher_encrypt_one(tfm, outval, in->buf); 1598 - return 0; 1599 1801 } 1600 1802 1601 1803 static int drbg_kcapi_sym_ctr(struct drbg_state *drbg,
+4 -1
crypto/fips.c
··· 24 24 /* Process kernel command-line parameter at boot time. fips=0 or fips=1 */ 25 25 static int fips_enable(char *str) 26 26 { 27 - fips_enabled = !!simple_strtol(str, NULL, 0); 27 + if (kstrtoint(str, 0, &fips_enabled)) 28 + return 0; 29 + 30 + fips_enabled = !!fips_enabled; 28 31 pr_info("fips mode: %s\n", str_enabled_disabled(fips_enabled)); 29 32 return 1; 30 33 }
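The fips.c change above swaps simple_strtol() for kstrtoint(), which fails on trailing garbage instead of silently truncating, then collapses the result to 0/1 with !!. A userspace sketch of that parse-and-normalize pattern; parse_bool_param() is a hypothetical stand-in built on strtol(), not the kernel helper itself:

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Userspace sketch of strict boolean-parameter parsing: the whole
 * string must be a valid integer (base auto-detected, as with base 0
 * in kstrtoint()), and any nonzero value normalizes to 1. */
static int parse_bool_param(const char *str, int *out)
{
	char *end;
	long v;

	errno = 0;
	v = strtol(str, &end, 0);
	if (errno || end == str || *end != '\0' ||
	    v < INT_MIN || v > INT_MAX)
		return -EINVAL;	/* reject trailing junk and overflow */
	*out = !!v;		/* collapse to 0 or 1 */
	return 0;
}
```

With this, "1abc" is rejected outright, whereas simple_strtol() would have quietly parsed it as 1.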
+83 -260
crypto/scatterwalk.c
··· 10 10 */ 11 11 12 12 #include <crypto/scatterwalk.h> 13 - #include <linux/crypto.h> 14 - #include <linux/errno.h> 15 13 #include <linux/kernel.h> 16 14 #include <linux/mm.h> 17 15 #include <linux/module.h> 18 16 #include <linux/scatterlist.h> 19 - #include <linux/slab.h> 20 - 21 - enum { 22 - SKCIPHER_WALK_SLOW = 1 << 0, 23 - SKCIPHER_WALK_COPY = 1 << 1, 24 - SKCIPHER_WALK_DIFF = 1 << 2, 25 - SKCIPHER_WALK_SLEEP = 1 << 3, 26 - }; 27 - 28 - static inline gfp_t skcipher_walk_gfp(struct skcipher_walk *walk) 29 - { 30 - return walk->flags & SKCIPHER_WALK_SLEEP ? GFP_KERNEL : GFP_ATOMIC; 31 - } 32 17 33 18 void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes) 34 19 { ··· 86 101 } 87 102 EXPORT_SYMBOL_GPL(memcpy_to_sglist); 88 103 104 + /** 105 + * memcpy_sglist() - Copy data from one scatterlist to another 106 + * @dst: The destination scatterlist. Can be NULL if @nbytes == 0. 107 + * @src: The source scatterlist. Can be NULL if @nbytes == 0. 108 + * @nbytes: Number of bytes to copy 109 + * 110 + * The scatterlists can describe exactly the same memory, in which case this 111 + * function is a no-op. No other overlaps are supported. 112 + * 113 + * Context: Any context 114 + */ 89 115 void memcpy_sglist(struct scatterlist *dst, struct scatterlist *src, 90 116 unsigned int nbytes) 91 117 { 92 - struct skcipher_walk walk = {}; 118 + unsigned int src_offset, dst_offset; 93 119 94 - if (unlikely(nbytes == 0)) /* in case sg == NULL */ 120 + if (unlikely(nbytes == 0)) /* in case src and/or dst is NULL */ 95 121 return; 96 122 97 - walk.total = nbytes; 123 + src_offset = src->offset; 124 + dst_offset = dst->offset; 125 + for (;;) { 126 + /* Compute the length to copy this step. 
*/ 127 + unsigned int len = min3(src->offset + src->length - src_offset, 128 + dst->offset + dst->length - dst_offset, 129 + nbytes); 130 + struct page *src_page = sg_page(src); 131 + struct page *dst_page = sg_page(dst); 132 + const void *src_virt; 133 + void *dst_virt; 98 134 99 - scatterwalk_start(&walk.in, src); 100 - scatterwalk_start(&walk.out, dst); 135 + if (IS_ENABLED(CONFIG_HIGHMEM)) { 136 + /* HIGHMEM: we may have to actually map the pages. */ 137 + const unsigned int src_oip = offset_in_page(src_offset); 138 + const unsigned int dst_oip = offset_in_page(dst_offset); 139 + const unsigned int limit = PAGE_SIZE; 101 140 102 - skcipher_walk_first(&walk, true); 103 - do { 104 - if (walk.src.virt.addr != walk.dst.virt.addr) 105 - memcpy(walk.dst.virt.addr, walk.src.virt.addr, 106 - walk.nbytes); 107 - skcipher_walk_done(&walk, 0); 108 - } while (walk.nbytes); 141 + /* Further limit len to not cross a page boundary. */ 142 + len = min3(len, limit - src_oip, limit - dst_oip); 143 + 144 + /* Compute the source and destination pages. */ 145 + src_page += src_offset / PAGE_SIZE; 146 + dst_page += dst_offset / PAGE_SIZE; 147 + 148 + if (src_page != dst_page) { 149 + /* Copy between different pages. */ 150 + memcpy_page(dst_page, dst_oip, 151 + src_page, src_oip, len); 152 + flush_dcache_page(dst_page); 153 + } else if (src_oip != dst_oip) { 154 + /* Copy between different parts of same page. */ 155 + dst_virt = kmap_local_page(dst_page); 156 + memcpy(dst_virt + dst_oip, dst_virt + src_oip, 157 + len); 158 + kunmap_local(dst_virt); 159 + flush_dcache_page(dst_page); 160 + } /* Else, it's the same memory. No action needed. */ 161 + } else { 162 + /* 163 + * !HIGHMEM: no mapping needed. Just work in the linear 164 + * buffer of each sg entry. Note that we can cross page 165 + * boundaries, as they are not significant in this case. 
166 + */ 167 + src_virt = page_address(src_page) + src_offset; 168 + dst_virt = page_address(dst_page) + dst_offset; 169 + if (src_virt != dst_virt) { 170 + memcpy(dst_virt, src_virt, len); 171 + if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE) 172 + __scatterwalk_flush_dcache_pages( 173 + dst_page, dst_offset, len); 174 + } /* Else, it's the same memory. No action needed. */ 175 + } 176 + nbytes -= len; 177 + if (nbytes == 0) /* No more to copy? */ 178 + break; 179 + 180 + /* 181 + * There's more to copy. Advance the offsets by the length 182 + * copied this step, and advance the sg entries as needed. 183 + */ 184 + src_offset += len; 185 + if (src_offset >= src->offset + src->length) { 186 + src = sg_next(src); 187 + src_offset = src->offset; 188 + } 189 + dst_offset += len; 190 + if (dst_offset >= dst->offset + dst->length) { 191 + dst = sg_next(dst); 192 + dst_offset = dst->offset; 193 + } 194 + } 109 195 } 110 196 EXPORT_SYMBOL_GPL(memcpy_sglist); 111 197 ··· 202 146 return dst; 203 147 } 204 148 EXPORT_SYMBOL_GPL(scatterwalk_ffwd); 205 - 206 - static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize) 207 - { 208 - unsigned alignmask = walk->alignmask; 209 - unsigned n; 210 - void *buffer; 211 - 212 - if (!walk->buffer) 213 - walk->buffer = walk->page; 214 - buffer = walk->buffer; 215 - if (!buffer) { 216 - /* Min size for a buffer of bsize bytes aligned to alignmask */ 217 - n = bsize + (alignmask & ~(crypto_tfm_ctx_alignment() - 1)); 218 - 219 - buffer = kzalloc(n, skcipher_walk_gfp(walk)); 220 - if (!buffer) 221 - return skcipher_walk_done(walk, -ENOMEM); 222 - walk->buffer = buffer; 223 - } 224 - 225 - buffer = PTR_ALIGN(buffer, alignmask + 1); 226 - memcpy_from_scatterwalk(buffer, &walk->in, bsize); 227 - walk->out.__addr = buffer; 228 - walk->in.__addr = walk->out.addr; 229 - 230 - walk->nbytes = bsize; 231 - walk->flags |= SKCIPHER_WALK_SLOW; 232 - 233 - return 0; 234 - } 235 - 236 - static int skcipher_next_copy(struct skcipher_walk *walk) 
237 - { 238 - void *tmp = walk->page; 239 - 240 - scatterwalk_map(&walk->in); 241 - memcpy(tmp, walk->in.addr, walk->nbytes); 242 - scatterwalk_unmap(&walk->in); 243 - /* 244 - * walk->in is advanced later when the number of bytes actually 245 - * processed (which might be less than walk->nbytes) is known. 246 - */ 247 - 248 - walk->in.__addr = tmp; 249 - walk->out.__addr = tmp; 250 - return 0; 251 - } 252 - 253 - static int skcipher_next_fast(struct skcipher_walk *walk) 254 - { 255 - unsigned long diff; 256 - 257 - diff = offset_in_page(walk->in.offset) - 258 - offset_in_page(walk->out.offset); 259 - diff |= (u8 *)(sg_page(walk->in.sg) + (walk->in.offset >> PAGE_SHIFT)) - 260 - (u8 *)(sg_page(walk->out.sg) + (walk->out.offset >> PAGE_SHIFT)); 261 - 262 - scatterwalk_map(&walk->out); 263 - walk->in.__addr = walk->out.__addr; 264 - 265 - if (diff) { 266 - walk->flags |= SKCIPHER_WALK_DIFF; 267 - scatterwalk_map(&walk->in); 268 - } 269 - 270 - return 0; 271 - } 272 - 273 - static int skcipher_walk_next(struct skcipher_walk *walk) 274 - { 275 - unsigned int bsize; 276 - unsigned int n; 277 - 278 - n = walk->total; 279 - bsize = min(walk->stride, max(n, walk->blocksize)); 280 - n = scatterwalk_clamp(&walk->in, n); 281 - n = scatterwalk_clamp(&walk->out, n); 282 - 283 - if (unlikely(n < bsize)) { 284 - if (unlikely(walk->total < walk->blocksize)) 285 - return skcipher_walk_done(walk, -EINVAL); 286 - 287 - slow_path: 288 - return skcipher_next_slow(walk, bsize); 289 - } 290 - walk->nbytes = n; 291 - 292 - if (unlikely((walk->in.offset | walk->out.offset) & walk->alignmask)) { 293 - if (!walk->page) { 294 - gfp_t gfp = skcipher_walk_gfp(walk); 295 - 296 - walk->page = (void *)__get_free_page(gfp); 297 - if (!walk->page) 298 - goto slow_path; 299 - } 300 - walk->flags |= SKCIPHER_WALK_COPY; 301 - return skcipher_next_copy(walk); 302 - } 303 - 304 - return skcipher_next_fast(walk); 305 - } 306 - 307 - static int skcipher_copy_iv(struct skcipher_walk *walk) 308 - { 309 - 
unsigned alignmask = walk->alignmask; 310 - unsigned ivsize = walk->ivsize; 311 - unsigned aligned_stride = ALIGN(walk->stride, alignmask + 1); 312 - unsigned size; 313 - u8 *iv; 314 - 315 - /* Min size for a buffer of stride + ivsize, aligned to alignmask */ 316 - size = aligned_stride + ivsize + 317 - (alignmask & ~(crypto_tfm_ctx_alignment() - 1)); 318 - 319 - walk->buffer = kmalloc(size, skcipher_walk_gfp(walk)); 320 - if (!walk->buffer) 321 - return -ENOMEM; 322 - 323 - iv = PTR_ALIGN(walk->buffer, alignmask + 1) + aligned_stride; 324 - 325 - walk->iv = memcpy(iv, walk->iv, walk->ivsize); 326 - return 0; 327 - } 328 - 329 - int skcipher_walk_first(struct skcipher_walk *walk, bool atomic) 330 - { 331 - if (WARN_ON_ONCE(in_hardirq())) 332 - return -EDEADLK; 333 - 334 - walk->flags = atomic ? 0 : SKCIPHER_WALK_SLEEP; 335 - 336 - walk->buffer = NULL; 337 - if (unlikely(((unsigned long)walk->iv & walk->alignmask))) { 338 - int err = skcipher_copy_iv(walk); 339 - if (err) 340 - return err; 341 - } 342 - 343 - walk->page = NULL; 344 - 345 - return skcipher_walk_next(walk); 346 - } 347 - EXPORT_SYMBOL_GPL(skcipher_walk_first); 348 - 349 - /** 350 - * skcipher_walk_done() - finish one step of a skcipher_walk 351 - * @walk: the skcipher_walk 352 - * @res: number of bytes *not* processed (>= 0) from walk->nbytes, 353 - * or a -errno value to terminate the walk due to an error 354 - * 355 - * This function cleans up after one step of walking through the source and 356 - * destination scatterlists, and advances to the next step if applicable. 357 - * walk->nbytes is set to the number of bytes available in the next step, 358 - * walk->total is set to the new total number of bytes remaining, and 359 - * walk->{src,dst}.virt.addr is set to the next pair of data pointers. If there 360 - * is no more data, or if an error occurred (i.e. -errno return), then 361 - * walk->nbytes and walk->total are set to 0 and all resources owned by the 362 - * skcipher_walk are freed. 
363 - * 364 - * Return: 0 or a -errno value. If @res was a -errno value then it will be 365 - * returned, but other errors may occur too. 366 - */ 367 - int skcipher_walk_done(struct skcipher_walk *walk, int res) 368 - { 369 - unsigned int n = walk->nbytes; /* num bytes processed this step */ 370 - unsigned int total = 0; /* new total remaining */ 371 - 372 - if (!n) 373 - goto finish; 374 - 375 - if (likely(res >= 0)) { 376 - n -= res; /* subtract num bytes *not* processed */ 377 - total = walk->total - n; 378 - } 379 - 380 - if (likely(!(walk->flags & (SKCIPHER_WALK_SLOW | 381 - SKCIPHER_WALK_COPY | 382 - SKCIPHER_WALK_DIFF)))) { 383 - scatterwalk_advance(&walk->in, n); 384 - } else if (walk->flags & SKCIPHER_WALK_DIFF) { 385 - scatterwalk_done_src(&walk->in, n); 386 - } else if (walk->flags & SKCIPHER_WALK_COPY) { 387 - scatterwalk_advance(&walk->in, n); 388 - scatterwalk_map(&walk->out); 389 - memcpy(walk->out.addr, walk->page, n); 390 - } else { /* SKCIPHER_WALK_SLOW */ 391 - if (res > 0) { 392 - /* 393 - * Didn't process all bytes. Either the algorithm is 394 - * broken, or this was the last step and it turned out 395 - * the message wasn't evenly divisible into blocks but 396 - * the algorithm requires it. 397 - */ 398 - res = -EINVAL; 399 - total = 0; 400 - } else 401 - memcpy_to_scatterwalk(&walk->out, walk->out.addr, n); 402 - goto dst_done; 403 - } 404 - 405 - scatterwalk_done_dst(&walk->out, n); 406 - dst_done: 407 - 408 - if (res > 0) 409 - res = 0; 410 - 411 - walk->total = total; 412 - walk->nbytes = 0; 413 - 414 - if (total) { 415 - if (walk->flags & SKCIPHER_WALK_SLEEP) 416 - cond_resched(); 417 - walk->flags &= ~(SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY | 418 - SKCIPHER_WALK_DIFF); 419 - return skcipher_walk_next(walk); 420 - } 421 - 422 - finish: 423 - /* Short-circuit for the common/fast path. 
*/ 424 - if (!((unsigned long)walk->buffer | (unsigned long)walk->page)) 425 - goto out; 426 - 427 - if (walk->iv != walk->oiv) 428 - memcpy(walk->oiv, walk->iv, walk->ivsize); 429 - if (walk->buffer != walk->page) 430 - kfree(walk->buffer); 431 - if (walk->page) 432 - free_page((unsigned long)walk->page); 433 - 434 - out: 435 - return res; 436 - } 437 - EXPORT_SYMBOL_GPL(skcipher_walk_done);
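The rewritten memcpy_sglist() above steps through both scatterlists at once: each iteration copies min3(bytes left in the source entry, bytes left in the destination entry, total bytes remaining), then advances whichever entry ran dry. A userspace sketch of that stepping logic over plain buffers; struct seg is a hypothetical stand-in for a scatterlist entry, with no page mapping or highmem handling modeled:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical analog of a scatterlist entry: a buffer and length. */
struct seg {
	unsigned char *buf;
	size_t len;
};

static size_t min3sz(size_t a, size_t b, size_t c)
{
	size_t m = a < b ? a : b;
	return m < c ? m : c;
}

/* Sketch of the memcpy_sglist() walk: clamp each copy to the smaller
 * of the two current segment remainders and the bytes still owed,
 * advancing each side's segment independently as it is exhausted. */
static void copy_seglist(struct seg *dst, struct seg *src, size_t nbytes)
{
	size_t soff = 0, doff = 0;

	while (nbytes) {
		size_t len = min3sz(src->len - soff, dst->len - doff, nbytes);

		memcpy(dst->buf + doff, src->buf + soff, len);
		nbytes -= len;
		soff += len;
		if (soff == src->len) {	/* source segment exhausted */
			src++;
			soff = 0;
		}
		doff += len;
		if (doff == dst->len) {	/* destination segment exhausted */
			dst++;
			doff = 0;
		}
	}
}
```

Note the segment boundaries on the two sides need not line up; the min3() clamp is what lets mismatched layouts copy correctly, which is also why the kernel version always succeeds with no allocation.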
+255 -6
crypto/skcipher.c
··· 17 17 #include <linux/cryptouser.h> 18 18 #include <linux/err.h> 19 19 #include <linux/kernel.h> 20 + #include <linux/mm.h> 20 21 #include <linux/module.h> 21 22 #include <linux/seq_file.h> 22 23 #include <linux/slab.h> ··· 28 27 29 28 #define CRYPTO_ALG_TYPE_SKCIPHER_MASK 0x0000000e 30 29 30 + enum { 31 + SKCIPHER_WALK_SLOW = 1 << 0, 32 + SKCIPHER_WALK_COPY = 1 << 1, 33 + SKCIPHER_WALK_DIFF = 1 << 2, 34 + SKCIPHER_WALK_SLEEP = 1 << 3, 35 + }; 36 + 31 37 static const struct crypto_type crypto_skcipher_type; 38 + 39 + static int skcipher_walk_next(struct skcipher_walk *walk); 40 + 41 + static inline gfp_t skcipher_walk_gfp(struct skcipher_walk *walk) 42 + { 43 + return walk->flags & SKCIPHER_WALK_SLEEP ? GFP_KERNEL : GFP_ATOMIC; 44 + } 32 45 33 46 static inline struct skcipher_alg *__crypto_skcipher_alg( 34 47 struct crypto_alg *alg) 35 48 { 36 49 return container_of(alg, struct skcipher_alg, base); 50 + } 51 + 52 + /** 53 + * skcipher_walk_done() - finish one step of a skcipher_walk 54 + * @walk: the skcipher_walk 55 + * @res: number of bytes *not* processed (>= 0) from walk->nbytes, 56 + * or a -errno value to terminate the walk due to an error 57 + * 58 + * This function cleans up after one step of walking through the source and 59 + * destination scatterlists, and advances to the next step if applicable. 60 + * walk->nbytes is set to the number of bytes available in the next step, 61 + * walk->total is set to the new total number of bytes remaining, and 62 + * walk->{src,dst}.virt.addr is set to the next pair of data pointers. If there 63 + * is no more data, or if an error occurred (i.e. -errno return), then 64 + * walk->nbytes and walk->total are set to 0 and all resources owned by the 65 + * skcipher_walk are freed. 66 + * 67 + * Return: 0 or a -errno value. If @res was a -errno value then it will be 68 + * returned, but other errors may occur too. 
69 + */ 70 + int skcipher_walk_done(struct skcipher_walk *walk, int res) 71 + { 72 + unsigned int n = walk->nbytes; /* num bytes processed this step */ 73 + unsigned int total = 0; /* new total remaining */ 74 + 75 + if (!n) 76 + goto finish; 77 + 78 + if (likely(res >= 0)) { 79 + n -= res; /* subtract num bytes *not* processed */ 80 + total = walk->total - n; 81 + } 82 + 83 + if (likely(!(walk->flags & (SKCIPHER_WALK_SLOW | 84 + SKCIPHER_WALK_COPY | 85 + SKCIPHER_WALK_DIFF)))) { 86 + scatterwalk_advance(&walk->in, n); 87 + } else if (walk->flags & SKCIPHER_WALK_DIFF) { 88 + scatterwalk_done_src(&walk->in, n); 89 + } else if (walk->flags & SKCIPHER_WALK_COPY) { 90 + scatterwalk_advance(&walk->in, n); 91 + scatterwalk_map(&walk->out); 92 + memcpy(walk->out.addr, walk->page, n); 93 + } else { /* SKCIPHER_WALK_SLOW */ 94 + if (res > 0) { 95 + /* 96 + * Didn't process all bytes. Either the algorithm is 97 + * broken, or this was the last step and it turned out 98 + * the message wasn't evenly divisible into blocks but 99 + * the algorithm requires it. 100 + */ 101 + res = -EINVAL; 102 + total = 0; 103 + } else 104 + memcpy_to_scatterwalk(&walk->out, walk->out.addr, n); 105 + goto dst_done; 106 + } 107 + 108 + scatterwalk_done_dst(&walk->out, n); 109 + dst_done: 110 + 111 + if (res > 0) 112 + res = 0; 113 + 114 + walk->total = total; 115 + walk->nbytes = 0; 116 + 117 + if (total) { 118 + if (walk->flags & SKCIPHER_WALK_SLEEP) 119 + cond_resched(); 120 + walk->flags &= ~(SKCIPHER_WALK_SLOW | SKCIPHER_WALK_COPY | 121 + SKCIPHER_WALK_DIFF); 122 + return skcipher_walk_next(walk); 123 + } 124 + 125 + finish: 126 + /* Short-circuit for the common/fast path. 
*/ 127 + if (!((unsigned long)walk->buffer | (unsigned long)walk->page)) 128 + goto out; 129 + 130 + if (walk->iv != walk->oiv) 131 + memcpy(walk->oiv, walk->iv, walk->ivsize); 132 + if (walk->buffer != walk->page) 133 + kfree(walk->buffer); 134 + if (walk->page) 135 + free_page((unsigned long)walk->page); 136 + 137 + out: 138 + return res; 139 + } 140 + EXPORT_SYMBOL_GPL(skcipher_walk_done); 141 + 142 + static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize) 143 + { 144 + unsigned alignmask = walk->alignmask; 145 + unsigned n; 146 + void *buffer; 147 + 148 + if (!walk->buffer) 149 + walk->buffer = walk->page; 150 + buffer = walk->buffer; 151 + if (!buffer) { 152 + /* Min size for a buffer of bsize bytes aligned to alignmask */ 153 + n = bsize + (alignmask & ~(crypto_tfm_ctx_alignment() - 1)); 154 + 155 + buffer = kzalloc(n, skcipher_walk_gfp(walk)); 156 + if (!buffer) 157 + return skcipher_walk_done(walk, -ENOMEM); 158 + walk->buffer = buffer; 159 + } 160 + 161 + buffer = PTR_ALIGN(buffer, alignmask + 1); 162 + memcpy_from_scatterwalk(buffer, &walk->in, bsize); 163 + walk->out.__addr = buffer; 164 + walk->in.__addr = walk->out.addr; 165 + 166 + walk->nbytes = bsize; 167 + walk->flags |= SKCIPHER_WALK_SLOW; 168 + 169 + return 0; 170 + } 171 + 172 + static int skcipher_next_copy(struct skcipher_walk *walk) 173 + { 174 + void *tmp = walk->page; 175 + 176 + scatterwalk_map(&walk->in); 177 + memcpy(tmp, walk->in.addr, walk->nbytes); 178 + scatterwalk_unmap(&walk->in); 179 + /* 180 + * walk->in is advanced later when the number of bytes actually 181 + * processed (which might be less than walk->nbytes) is known. 
182 + */ 183 + 184 + walk->in.__addr = tmp; 185 + walk->out.__addr = tmp; 186 + return 0; 187 + } 188 + 189 + static int skcipher_next_fast(struct skcipher_walk *walk) 190 + { 191 + unsigned long diff; 192 + 193 + diff = offset_in_page(walk->in.offset) - 194 + offset_in_page(walk->out.offset); 195 + diff |= (u8 *)(sg_page(walk->in.sg) + (walk->in.offset >> PAGE_SHIFT)) - 196 + (u8 *)(sg_page(walk->out.sg) + (walk->out.offset >> PAGE_SHIFT)); 197 + 198 + scatterwalk_map(&walk->out); 199 + walk->in.__addr = walk->out.__addr; 200 + 201 + if (diff) { 202 + walk->flags |= SKCIPHER_WALK_DIFF; 203 + scatterwalk_map(&walk->in); 204 + } 205 + 206 + return 0; 207 + } 208 + 209 + static int skcipher_walk_next(struct skcipher_walk *walk) 210 + { 211 + unsigned int bsize; 212 + unsigned int n; 213 + 214 + n = walk->total; 215 + bsize = min(walk->stride, max(n, walk->blocksize)); 216 + n = scatterwalk_clamp(&walk->in, n); 217 + n = scatterwalk_clamp(&walk->out, n); 218 + 219 + if (unlikely(n < bsize)) { 220 + if (unlikely(walk->total < walk->blocksize)) 221 + return skcipher_walk_done(walk, -EINVAL); 222 + 223 + slow_path: 224 + return skcipher_next_slow(walk, bsize); 225 + } 226 + walk->nbytes = n; 227 + 228 + if (unlikely((walk->in.offset | walk->out.offset) & walk->alignmask)) { 229 + if (!walk->page) { 230 + gfp_t gfp = skcipher_walk_gfp(walk); 231 + 232 + walk->page = (void *)__get_free_page(gfp); 233 + if (!walk->page) 234 + goto slow_path; 235 + } 236 + walk->flags |= SKCIPHER_WALK_COPY; 237 + return skcipher_next_copy(walk); 238 + } 239 + 240 + return skcipher_next_fast(walk); 241 + } 242 + 243 + static int skcipher_copy_iv(struct skcipher_walk *walk) 244 + { 245 + unsigned alignmask = walk->alignmask; 246 + unsigned ivsize = walk->ivsize; 247 + unsigned aligned_stride = ALIGN(walk->stride, alignmask + 1); 248 + unsigned size; 249 + u8 *iv; 250 + 251 + /* Min size for a buffer of stride + ivsize, aligned to alignmask */ 252 + size = aligned_stride + ivsize + 253 + 
(alignmask & ~(crypto_tfm_ctx_alignment() - 1)); 254 + 255 + walk->buffer = kmalloc(size, skcipher_walk_gfp(walk)); 256 + if (!walk->buffer) 257 + return -ENOMEM; 258 + 259 + iv = PTR_ALIGN(walk->buffer, alignmask + 1) + aligned_stride; 260 + 261 + walk->iv = memcpy(iv, walk->iv, walk->ivsize); 262 + return 0; 263 + } 264 + 265 + static int skcipher_walk_first(struct skcipher_walk *walk) 266 + { 267 + if (WARN_ON_ONCE(in_hardirq())) 268 + return -EDEADLK; 269 + 270 + walk->buffer = NULL; 271 + if (unlikely(((unsigned long)walk->iv & walk->alignmask))) { 272 + int err = skcipher_copy_iv(walk); 273 + if (err) 274 + return err; 275 + } 276 + 277 + walk->page = NULL; 278 + 279 + return skcipher_walk_next(walk); 37 280 } 38 281 39 282 int skcipher_walk_virt(struct skcipher_walk *__restrict walk, ··· 294 49 walk->nbytes = 0; 295 50 walk->iv = req->iv; 296 51 walk->oiv = req->iv; 297 - if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP)) 298 - atomic = true; 52 + if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic) 53 + walk->flags = SKCIPHER_WALK_SLEEP; 54 + else 55 + walk->flags = 0; 299 56 300 57 if (unlikely(!walk->total)) 301 58 return 0; ··· 314 67 else 315 68 walk->stride = alg->walksize; 316 69 317 - return skcipher_walk_first(walk, atomic); 70 + return skcipher_walk_first(walk); 318 71 } 319 72 EXPORT_SYMBOL_GPL(skcipher_walk_virt); 320 73 ··· 327 80 walk->nbytes = 0; 328 81 walk->iv = req->iv; 329 82 walk->oiv = req->iv; 330 - if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP)) 331 - atomic = true; 83 + if ((req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) && !atomic) 84 + walk->flags = SKCIPHER_WALK_SLEEP; 85 + else 86 + walk->flags = 0; 332 87 333 88 if (unlikely(!walk->total)) 334 89 return 0; ··· 343 94 walk->ivsize = crypto_aead_ivsize(tfm); 344 95 walk->alignmask = crypto_aead_alignmask(tfm); 345 96 346 - return skcipher_walk_first(walk, atomic); 97 + return skcipher_walk_first(walk); 347 98 } 348 99 349 100 int skcipher_walk_aead_encrypt(struct 
skcipher_walk *__restrict walk,
-8
crypto/tcrypt.c
··· 1754 1754 ret = min(ret, tcrypt_test("hmac(streebog512)")); 1755 1755 break; 1756 1756 1757 - case 150: 1758 - ret = min(ret, tcrypt_test("ansi_cprng")); 1759 - break; 1760 - 1761 1757 case 151: 1762 1758 ret = min(ret, tcrypt_test("rfc4106(gcm(aes))")); 1763 1759 break; ··· 2258 2262 fallthrough; 2259 2263 case 319: 2260 2264 test_hash_speed("crc32c", sec, generic_hash_speed_template); 2261 - if (mode > 300 && mode < 400) break; 2262 - fallthrough; 2263 - case 321: 2264 - test_hash_speed("poly1305", sec, poly1305_speed_template); 2265 2265 if (mode > 300 && mode < 400) break; 2266 2266 fallthrough; 2267 2267 case 322:
-18
crypto/tcrypt.h
··· 96 96 { .blen = 0, .plen = 0, } 97 97 }; 98 98 99 - static struct hash_speed poly1305_speed_template[] = { 100 - { .blen = 96, .plen = 16, }, 101 - { .blen = 96, .plen = 32, }, 102 - { .blen = 96, .plen = 96, }, 103 - { .blen = 288, .plen = 16, }, 104 - { .blen = 288, .plen = 32, }, 105 - { .blen = 288, .plen = 288, }, 106 - { .blen = 1056, .plen = 32, }, 107 - { .blen = 1056, .plen = 1056, }, 108 - { .blen = 2080, .plen = 32, }, 109 - { .blen = 2080, .plen = 2080, }, 110 - { .blen = 4128, .plen = 4128, }, 111 - { .blen = 8224, .plen = 8224, }, 112 - 113 - /* End marker */ 114 - { .blen = 0, .plen = 0, } 115 - }; 116 - 117 99 #endif /* _CRYPTO_TCRYPT_H */
crypto/testmgr.c (-97)
··· 117 117 unsigned int count; 118 118 }; 119 119 120 - struct cprng_test_suite { 121 - const struct cprng_testvec *vecs; 122 - unsigned int count; 123 - }; 124 - 125 120 struct drbg_test_suite { 126 121 const struct drbg_testvec *vecs; 127 122 unsigned int count; ··· 149 154 struct cipher_test_suite cipher; 150 155 struct comp_test_suite comp; 151 156 struct hash_test_suite hash; 152 - struct cprng_test_suite cprng; 153 157 struct drbg_test_suite drbg; 154 158 struct akcipher_test_suite akcipher; 155 159 struct sig_test_suite sig; ··· 3436 3442 return ret; 3437 3443 } 3438 3444 3439 - static int test_cprng(struct crypto_rng *tfm, 3440 - const struct cprng_testvec *template, 3441 - unsigned int tcount) 3442 - { 3443 - const char *algo = crypto_tfm_alg_driver_name(crypto_rng_tfm(tfm)); 3444 - int err = 0, i, j, seedsize; 3445 - u8 *seed; 3446 - char result[32]; 3447 - 3448 - seedsize = crypto_rng_seedsize(tfm); 3449 - 3450 - seed = kmalloc(seedsize, GFP_KERNEL); 3451 - if (!seed) { 3452 - printk(KERN_ERR "alg: cprng: Failed to allocate seed space " 3453 - "for %s\n", algo); 3454 - return -ENOMEM; 3455 - } 3456 - 3457 - for (i = 0; i < tcount; i++) { 3458 - memset(result, 0, 32); 3459 - 3460 - memcpy(seed, template[i].v, template[i].vlen); 3461 - memcpy(seed + template[i].vlen, template[i].key, 3462 - template[i].klen); 3463 - memcpy(seed + template[i].vlen + template[i].klen, 3464 - template[i].dt, template[i].dtlen); 3465 - 3466 - err = crypto_rng_reset(tfm, seed, seedsize); 3467 - if (err) { 3468 - printk(KERN_ERR "alg: cprng: Failed to reset rng " 3469 - "for %s\n", algo); 3470 - goto out; 3471 - } 3472 - 3473 - for (j = 0; j < template[i].loops; j++) { 3474 - err = crypto_rng_get_bytes(tfm, result, 3475 - template[i].rlen); 3476 - if (err < 0) { 3477 - printk(KERN_ERR "alg: cprng: Failed to obtain " 3478 - "the correct amount of random data for " 3479 - "%s (requested %d)\n", algo, 3480 - template[i].rlen); 3481 - goto out; 3482 - } 3483 - } 3484 - 3485 - err = 
memcmp(result, template[i].result, 3486 - template[i].rlen); 3487 - if (err) { 3488 - printk(KERN_ERR "alg: cprng: Test %d failed for %s\n", 3489 - i, algo); 3490 - hexdump(result, template[i].rlen); 3491 - err = -EINVAL; 3492 - goto out; 3493 - } 3494 - } 3495 - 3496 - out: 3497 - kfree(seed); 3498 - return err; 3499 - } 3500 - 3501 3445 static int alg_test_cipher(const struct alg_test_desc *desc, 3502 3446 const char *driver, u32 type, u32 mask) 3503 3447 { ··· 3481 3549 crypto_free_acomp(acomp); 3482 3550 return err; 3483 3551 } 3484 - 3485 - static int alg_test_cprng(const struct alg_test_desc *desc, const char *driver, 3486 - u32 type, u32 mask) 3487 - { 3488 - struct crypto_rng *rng; 3489 - int err; 3490 - 3491 - rng = crypto_alloc_rng(driver, type, mask); 3492 - if (IS_ERR(rng)) { 3493 - if (PTR_ERR(rng) == -ENOENT) 3494 - return 0; 3495 - printk(KERN_ERR "alg: cprng: Failed to load transform for %s: " 3496 - "%ld\n", driver, PTR_ERR(rng)); 3497 - return PTR_ERR(rng); 3498 - } 3499 - 3500 - err = test_cprng(rng, desc->suite.cprng.vecs, desc->suite.cprng.count); 3501 - 3502 - crypto_free_rng(rng); 3503 - 3504 - return err; 3505 - } 3506 - 3507 3552 3508 3553 static int drbg_cavs_test(const struct drbg_testvec *test, int pr, 3509 3554 const char *driver, u32 type, u32 mask) ··· 4078 4169 .test = alg_test_aead, 4079 4170 .suite = { 4080 4171 .aead = __VECS(aegis128_tv_template) 4081 - } 4082 - }, { 4083 - .alg = "ansi_cprng", 4084 - .test = alg_test_cprng, 4085 - .suite = { 4086 - .cprng = __VECS(ansi_cprng_aes_tv_template) 4087 4172 } 4088 4173 }, { 4089 4174 .alg = "authenc(hmac(md5),ecb(cipher_null))",
crypto/testmgr.h (+120 -106)
··· 119 119 int crypt_error; 120 120 }; 121 121 122 - struct cprng_testvec { 123 - const char *key; 124 - const char *dt; 125 - const char *v; 126 - const char *result; 127 - unsigned char klen; 128 - unsigned short dtlen; 129 - unsigned short vlen; 130 - unsigned short rlen; 131 - unsigned short loops; 132 - }; 133 - 134 122 struct drbg_testvec { 135 123 const unsigned char *entropy; 136 124 size_t entropylen; ··· 9007 9019 .setkey_error = -EINVAL, 9008 9020 .wk = 1, 9009 9021 .key = "\x01\x01\x01\x01\x01\x01\x01\x01", 9022 + .klen = 8, 9023 + .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9024 + .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", 9025 + .len = 8, 9026 + }, { /* Weak key */ 9027 + .setkey_error = -EINVAL, 9028 + .wk = 1, 9029 + .key = "\xe0\xe0\xe0\xe0\xf1\xf1\xf1\xf1", 9030 + .klen = 8, 9031 + .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9032 + .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", 9033 + .len = 8, 9034 + }, { /* Weak key */ 9035 + .setkey_error = -EINVAL, 9036 + .wk = 1, 9037 + .key = "\x1f\x1f\x1f\x1f\x0e\x0e\x0e\x0e", 9038 + .klen = 8, 9039 + .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9040 + .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", 9041 + .len = 8, 9042 + }, { /* Weak key */ 9043 + .setkey_error = -EINVAL, 9044 + .wk = 1, 9045 + .key = "\xfe\xfe\xfe\xfe\xfe\xfe\xfe\xfe", 9046 + .klen = 8, 9047 + .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9048 + .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", 9049 + .len = 8, 9050 + }, { /* Semi-weak key pair 1a */ 9051 + .setkey_error = -EINVAL, 9052 + .wk = 1, 9053 + .key = "\x01\xfe\x01\xfe\x01\xfe\x01\xfe", 9054 + .klen = 8, 9055 + .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9056 + .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", 9057 + .len = 8, 9058 + }, { /* Semi-weak key pair 1b */ 9059 + .setkey_error = -EINVAL, 9060 + .wk = 1, 9061 + .key = "\xfe\x01\xfe\x01\xfe\x01\xfe\x01", 9062 + .klen = 8, 9063 + .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9064 + .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", 9065 
+ .len = 8, 9066 + }, { /* Semi-weak key pair 2a */ 9067 + .setkey_error = -EINVAL, 9068 + .wk = 1, 9069 + .key = "\x1f\xe0\x1f\xe0\x0e\xf1\x0e\xf1", 9070 + .klen = 8, 9071 + .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9072 + .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", 9073 + .len = 8, 9074 + }, { /* Semi-weak key pair 2b */ 9075 + .setkey_error = -EINVAL, 9076 + .wk = 1, 9077 + .key = "\xe0\x1f\xe0\x1f\xf1\x0e\xf1\x0e", 9078 + .klen = 8, 9079 + .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9080 + .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", 9081 + .len = 8, 9082 + }, { /* Semi-weak key pair 3a */ 9083 + .setkey_error = -EINVAL, 9084 + .wk = 1, 9085 + .key = "\x01\xe0\x01\xe0\x01\xf1\x01\xf1", 9086 + .klen = 8, 9087 + .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9088 + .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", 9089 + .len = 8, 9090 + }, { /* Semi-weak key pair 3b */ 9091 + .setkey_error = -EINVAL, 9092 + .wk = 1, 9093 + .key = "\xe0\x01\xe0\x01\xf1\x01\xf1\x01", 9094 + .klen = 8, 9095 + .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9096 + .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", 9097 + .len = 8, 9098 + }, { /* Semi-weak key pair 4a */ 9099 + .setkey_error = -EINVAL, 9100 + .wk = 1, 9101 + .key = "\x1f\xfe\x1f\xfe\x0e\xfe\x0e\xfe", 9102 + .klen = 8, 9103 + .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9104 + .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", 9105 + .len = 8, 9106 + }, { /* Semi-weak key pair 4b */ 9107 + .setkey_error = -EINVAL, 9108 + .wk = 1, 9109 + .key = "\xfe\x1f\xfe\x1f\xfe\x0e\xfe\x0e", 9110 + .klen = 8, 9111 + .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9112 + .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", 9113 + .len = 8, 9114 + }, { /* Semi-weak key pair 5a */ 9115 + .setkey_error = -EINVAL, 9116 + .wk = 1, 9117 + .key = "\x01\x1f\x01\x1f\x01\x0e\x01\x0e", 9118 + .klen = 8, 9119 + .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9120 + .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", 9121 + .len = 8, 9122 + }, { /* Semi-weak key pair 5b */ 9123 + 
.setkey_error = -EINVAL, 9124 + .wk = 1, 9125 + .key = "\x1f\x01\x1f\x01\x0e\x01\x0e\x01", 9126 + .klen = 8, 9127 + .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9128 + .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", 9129 + .len = 8, 9130 + }, { /* Semi-weak key pair 6a */ 9131 + .setkey_error = -EINVAL, 9132 + .wk = 1, 9133 + .key = "\xe0\xfe\xe0\xfe\xf1\xfe\xf1\xfe", 9134 + .klen = 8, 9135 + .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9136 + .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", 9137 + .len = 8, 9138 + }, { /* Semi-weak key pair 6b */ 9139 + .setkey_error = -EINVAL, 9140 + .wk = 1, 9141 + .key = "\xfe\xe0\xfe\xe0\xfe\xf1\xfe\xf1", 9010 9142 .klen = 8, 9011 9143 .ptext = "\x01\x23\x45\x67\x89\xab\xcd\xe7", 9012 9144 .ctext = "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d", ··· 22481 22373 "\xf5\x57\x0f\x2f\x49\x0e\x11\x3b" 22482 22374 "\x78\x93\xec\xfc\xf4\xff\xe1\x2d", 22483 22375 .clen = 24, 22484 - }, 22485 - }; 22486 - 22487 - /* 22488 - * ANSI X9.31 Continuous Pseudo-Random Number Generator (AES mode) 22489 - * test vectors, taken from Appendix B.2.9 and B.2.10: 22490 - * http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf 22491 - * Only AES-128 is supported at this time. 
22492 - */ 22493 - static const struct cprng_testvec ansi_cprng_aes_tv_template[] = { 22494 - { 22495 - .key = "\xf3\xb1\x66\x6d\x13\x60\x72\x42" 22496 - "\xed\x06\x1c\xab\xb8\xd4\x62\x02", 22497 - .klen = 16, 22498 - .dt = "\xe6\xb3\xbe\x78\x2a\x23\xfa\x62" 22499 - "\xd7\x1d\x4a\xfb\xb0\xe9\x22\xf9", 22500 - .dtlen = 16, 22501 - .v = "\x80\x00\x00\x00\x00\x00\x00\x00" 22502 - "\x00\x00\x00\x00\x00\x00\x00\x00", 22503 - .vlen = 16, 22504 - .result = "\x59\x53\x1e\xd1\x3b\xb0\xc0\x55" 22505 - "\x84\x79\x66\x85\xc1\x2f\x76\x41", 22506 - .rlen = 16, 22507 - .loops = 1, 22508 - }, { 22509 - .key = "\xf3\xb1\x66\x6d\x13\x60\x72\x42" 22510 - "\xed\x06\x1c\xab\xb8\xd4\x62\x02", 22511 - .klen = 16, 22512 - .dt = "\xe6\xb3\xbe\x78\x2a\x23\xfa\x62" 22513 - "\xd7\x1d\x4a\xfb\xb0\xe9\x22\xfa", 22514 - .dtlen = 16, 22515 - .v = "\xc0\x00\x00\x00\x00\x00\x00\x00" 22516 - "\x00\x00\x00\x00\x00\x00\x00\x00", 22517 - .vlen = 16, 22518 - .result = "\x7c\x22\x2c\xf4\xca\x8f\xa2\x4c" 22519 - "\x1c\x9c\xb6\x41\xa9\xf3\x22\x0d", 22520 - .rlen = 16, 22521 - .loops = 1, 22522 - }, { 22523 - .key = "\xf3\xb1\x66\x6d\x13\x60\x72\x42" 22524 - "\xed\x06\x1c\xab\xb8\xd4\x62\x02", 22525 - .klen = 16, 22526 - .dt = "\xe6\xb3\xbe\x78\x2a\x23\xfa\x62" 22527 - "\xd7\x1d\x4a\xfb\xb0\xe9\x22\xfb", 22528 - .dtlen = 16, 22529 - .v = "\xe0\x00\x00\x00\x00\x00\x00\x00" 22530 - "\x00\x00\x00\x00\x00\x00\x00\x00", 22531 - .vlen = 16, 22532 - .result = "\x8a\xaa\x00\x39\x66\x67\x5b\xe5" 22533 - "\x29\x14\x28\x81\xa9\x4d\x4e\xc7", 22534 - .rlen = 16, 22535 - .loops = 1, 22536 - }, { 22537 - .key = "\xf3\xb1\x66\x6d\x13\x60\x72\x42" 22538 - "\xed\x06\x1c\xab\xb8\xd4\x62\x02", 22539 - .klen = 16, 22540 - .dt = "\xe6\xb3\xbe\x78\x2a\x23\xfa\x62" 22541 - "\xd7\x1d\x4a\xfb\xb0\xe9\x22\xfc", 22542 - .dtlen = 16, 22543 - .v = "\xf0\x00\x00\x00\x00\x00\x00\x00" 22544 - "\x00\x00\x00\x00\x00\x00\x00\x00", 22545 - .vlen = 16, 22546 - .result = "\x88\xdd\xa4\x56\x30\x24\x23\xe5" 22547 - 
"\xf6\x9d\xa5\x7e\x7b\x95\xc7\x3a", 22548 - .rlen = 16, 22549 - .loops = 1, 22550 - }, { 22551 - .key = "\xf3\xb1\x66\x6d\x13\x60\x72\x42" 22552 - "\xed\x06\x1c\xab\xb8\xd4\x62\x02", 22553 - .klen = 16, 22554 - .dt = "\xe6\xb3\xbe\x78\x2a\x23\xfa\x62" 22555 - "\xd7\x1d\x4a\xfb\xb0\xe9\x22\xfd", 22556 - .dtlen = 16, 22557 - .v = "\xf8\x00\x00\x00\x00\x00\x00\x00" 22558 - "\x00\x00\x00\x00\x00\x00\x00\x00", 22559 - .vlen = 16, 22560 - .result = "\x05\x25\x92\x46\x61\x79\xd2\xcb" 22561 - "\x78\xc4\x0b\x14\x0a\x5a\x9a\xc8", 22562 - .rlen = 16, 22563 - .loops = 1, 22564 - }, { /* Monte Carlo Test */ 22565 - .key = "\x9f\x5b\x51\x20\x0b\xf3\x34\xb5" 22566 - "\xd8\x2b\xe8\xc3\x72\x55\xc8\x48", 22567 - .klen = 16, 22568 - .dt = "\x63\x76\xbb\xe5\x29\x02\xba\x3b" 22569 - "\x67\xc9\x25\xfa\x70\x1f\x11\xac", 22570 - .dtlen = 16, 22571 - .v = "\x57\x2c\x8e\x76\x87\x26\x47\x97" 22572 - "\x7e\x74\xfb\xdd\xc4\x95\x01\xd1", 22573 - .vlen = 16, 22574 - .result = "\x48\xe9\xbd\x0d\x06\xee\x18\xfb" 22575 - "\xe4\x57\x90\xd5\xc3\xfc\x9b\x73", 22576 - .rlen = 16, 22577 - .loops = 10000, 22578 22376 }, 22579 22377 }; 22580 22378
crypto/zstd.c (+6 -11)
··· 10 10 #include <linux/mm.h> 11 11 #include <linux/module.h> 12 12 #include <linux/net.h> 13 + #include <linux/overflow.h> 13 14 #include <linux/vmalloc.h> 14 15 #include <linux/zstd.h> 15 16 #include <crypto/internal/acompress.h> ··· 26 25 zstd_dctx *dctx; 27 26 size_t wksp_size; 28 27 zstd_parameters params; 29 - u8 wksp[] __aligned(8); 28 + u8 wksp[] __aligned(8) __counted_by(wksp_size); 30 29 }; 31 30 32 31 static DEFINE_MUTEX(zstd_stream_lock); ··· 39 38 40 39 params = zstd_get_params(ZSTD_DEF_LEVEL, ZSTD_MAX_SIZE); 41 40 42 - wksp_size = max_t(size_t, 43 - zstd_cstream_workspace_bound(&params.cParams), 44 - zstd_dstream_workspace_bound(ZSTD_MAX_SIZE)); 41 + wksp_size = max(zstd_cstream_workspace_bound(&params.cParams), 42 + zstd_dstream_workspace_bound(ZSTD_MAX_SIZE)); 45 43 if (!wksp_size) 46 44 return ERR_PTR(-EINVAL); 47 45 48 - ctx = kvmalloc(sizeof(*ctx) + wksp_size, GFP_KERNEL); 46 + ctx = kvmalloc(struct_size(ctx, wksp, wksp_size), GFP_KERNEL); 49 47 if (!ctx) 50 48 return ERR_PTR(-ENOMEM); 51 49 ··· 73 73 mutex_unlock(&zstd_stream_lock); 74 74 75 75 return ret; 76 - } 77 - 78 - static void zstd_exit(struct crypto_acomp *acomp_tfm) 79 - { 80 - crypto_acomp_free_streams(&zstd_streams); 81 76 } 82 77 83 78 static int zstd_compress_one(struct acomp_req *req, struct zstd_ctx *ctx, ··· 292 297 .cra_module = THIS_MODULE, 293 298 }, 294 299 .init = zstd_init, 295 - .exit = zstd_exit, 296 300 .compress = zstd_compress, 297 301 .decompress = zstd_decompress, 298 302 }; ··· 304 310 static void __exit zstd_mod_fini(void) 305 311 { 306 312 crypto_unregister_acomp(&zstd_acomp); 313 + crypto_acomp_free_streams(&zstd_streams); 307 314 } 308 315 309 316 module_init(zstd_mod_init);
drivers/char/hw_random/bcm2835-rng.c (+3 -8)
··· 138 138 { .compatible = "brcm,bcm6368-rng"}, 139 139 {}, 140 140 }; 141 + MODULE_DEVICE_TABLE(of, bcm2835_rng_of_match); 141 142 142 143 static int bcm2835_rng_probe(struct platform_device *pdev) 143 144 { 144 - const struct bcm2835_rng_of_data *of_data; 145 145 struct device *dev = &pdev->dev; 146 - const struct of_device_id *rng_id; 147 146 struct bcm2835_rng_priv *priv; 148 147 int err; 149 148 ··· 170 171 priv->rng.cleanup = bcm2835_rng_cleanup; 171 172 172 173 if (dev_of_node(dev)) { 173 - rng_id = of_match_node(bcm2835_rng_of_match, dev->of_node); 174 - if (!rng_id) 175 - return -EINVAL; 174 + const struct bcm2835_rng_of_data *of_data; 176 175 177 176 /* Check for rng init function, execute it */ 178 - of_data = rng_id->data; 177 + of_data = of_device_get_match_data(dev); 179 178 if (of_data) 180 179 priv->mask_interrupts = of_data->mask_interrupts; 181 180 } ··· 187 190 188 191 return err; 189 192 } 190 - 191 - MODULE_DEVICE_TABLE(of, bcm2835_rng_of_match); 192 193 193 194 static const struct platform_device_id bcm2835_rng_devtype[] = { 194 195 { .name = "bcm2835-rng" },
drivers/char/hw_random/core.c (+7 -4)
··· 341 341 342 342 if (sysfs_streq(buf, "")) { 343 343 err = enable_best_rng(); 344 + } else if (sysfs_streq(buf, "none")) { 345 + cur_rng_set_by_user = 1; 346 + drop_current_rng(); 344 347 } else { 345 348 list_for_each_entry(rng, &rng_list, list) { 346 349 if (sysfs_streq(rng->name, buf)) { ··· 395 392 strlcat(buf, rng->name, PAGE_SIZE); 396 393 strlcat(buf, " ", PAGE_SIZE); 397 394 } 398 - strlcat(buf, "\n", PAGE_SIZE); 395 + strlcat(buf, "none\n", PAGE_SIZE); 399 396 mutex_unlock(&rng_mutex); 400 397 401 398 return strlen(buf); ··· 545 542 init_completion(&rng->dying); 546 543 547 544 /* Adjust quality field to always have a proper value */ 548 - rng->quality = min_t(u16, min_t(u16, default_quality, 1024), rng->quality ?: 1024); 545 + rng->quality = min3(default_quality, 1024, rng->quality ?: 1024); 549 546 550 - if (!current_rng || 551 - (!cur_rng_set_by_user && rng->quality > current_rng->quality)) { 547 + if (!cur_rng_set_by_user && 548 + (!current_rng || rng->quality > current_rng->quality)) { 552 549 /* 553 550 * Set new rng as current as the new rng source 554 551 * provides better entropy quality and was not
drivers/crypto/Kconfig (+1)
··· 728 728 config CRYPTO_DEV_XILINX_TRNG 729 729 tristate "Support for Xilinx True Random Generator" 730 730 depends on ZYNQMP_FIRMWARE || COMPILE_TEST 731 + select CRYPTO_DF80090A 731 732 select CRYPTO_RNG 732 733 select HW_RANDOM 733 734 help
drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c (+1 -1)
··· 502 502 503 503 algt = container_of(alg, struct sun8i_ss_alg_template, alg.hash.base); 504 504 ss = algt->ss; 505 + j = 0; 505 506 506 507 digestsize = crypto_ahash_digestsize(tfm); 507 508 if (digestsize == SHA224_DIGEST_SIZE) ··· 537 536 goto err_dma_result; 538 537 } 539 538 540 - j = 0; 541 539 len = areq->nbytes; 542 540 sg = areq->src; 543 541 i = 0;
drivers/crypto/atmel-i2c.c (+1 -1)
··· 402 402 403 403 static int __init atmel_i2c_init(void) 404 404 { 405 - atmel_wq = alloc_workqueue("atmel_wq", 0, 0); 405 + atmel_wq = alloc_workqueue("atmel_wq", WQ_PERCPU, 0); 406 406 return atmel_wq ? 0 : -ENOMEM; 407 407 } 408 408
drivers/crypto/axis/artpec6_crypto.c (+3 -6)
··· 252 252 }; 253 253 254 254 enum artpec6_crypto_variant { 255 - ARTPEC6_CRYPTO, 255 + ARTPEC6_CRYPTO = 1, 256 256 ARTPEC7_CRYPTO, 257 257 }; 258 258 ··· 2842 2842 2843 2843 static int artpec6_crypto_probe(struct platform_device *pdev) 2844 2844 { 2845 - const struct of_device_id *match; 2846 2845 enum artpec6_crypto_variant variant; 2847 2846 struct artpec6_crypto *ac; 2848 2847 struct device *dev = &pdev->dev; ··· 2852 2853 if (artpec6_crypto_dev) 2853 2854 return -ENODEV; 2854 2855 2855 - match = of_match_node(artpec6_crypto_of_match, dev->of_node); 2856 - if (!match) 2856 + variant = (enum artpec6_crypto_variant)of_device_get_match_data(dev); 2857 + if (!variant) 2857 2858 return -EINVAL; 2858 - 2859 - variant = (enum artpec6_crypto_variant)match->data; 2860 2859 2861 2860 base = devm_platform_ioremap_resource(pdev, 0); 2862 2861 if (IS_ERR(base))
drivers/crypto/caam/blob_gen.c (+70 -16)
··· 2 2 /* 3 3 * Copyright (C) 2015 Pengutronix, Steffen Trumtrar <kernel@pengutronix.de> 4 4 * Copyright (C) 2021 Pengutronix, Ahmad Fatoum <kernel@pengutronix.de> 5 - * Copyright 2024 NXP 5 + * Copyright 2024-2025 NXP 6 6 */ 7 7 8 8 #define pr_fmt(fmt) "caam blob_gen: " fmt 9 9 10 10 #include <linux/bitfield.h> 11 11 #include <linux/device.h> 12 + #include <keys/trusted-type.h> 12 13 #include <soc/fsl/caam-blob.h> 13 14 14 15 #include "compat.h" ··· 61 60 complete(&res->completion); 62 61 } 63 62 63 + static u32 check_caam_state(struct device *jrdev) 64 + { 65 + const struct caam_drv_private *ctrlpriv; 66 + 67 + ctrlpriv = dev_get_drvdata(jrdev->parent); 68 + return FIELD_GET(CSTA_MOO, rd_reg32(&ctrlpriv->jr[0]->perfmon.status)); 69 + } 70 + 64 71 int caam_process_blob(struct caam_blob_priv *priv, 65 72 struct caam_blob_info *info, bool encap) 66 73 { 67 - const struct caam_drv_private *ctrlpriv; 68 74 struct caam_blob_job_result testres; 69 75 struct device *jrdev = &priv->jrdev; 70 76 dma_addr_t dma_in, dma_out; 71 77 int op = OP_PCLID_BLOB; 78 + int hwbk_caam_ovhd = 0; 72 79 size_t output_len; 73 80 u32 *desc; 74 81 u32 moo; 75 82 int ret; 83 + int len; 76 84 77 85 if (info->key_mod_len > CAAM_BLOB_KEYMOD_LENGTH) 78 86 return -EINVAL; ··· 92 82 } else { 93 83 op |= OP_TYPE_DECAP_PROTOCOL; 94 84 output_len = info->input_len - CAAM_BLOB_OVERHEAD; 85 + info->output_len = output_len; 86 + } 87 + 88 + if (encap && info->pkey_info.is_pkey) { 89 + op |= OP_PCL_BLOB_BLACK; 90 + if (info->pkey_info.key_enc_algo == CAAM_ENC_ALGO_CCM) { 91 + op |= OP_PCL_BLOB_EKT; 92 + hwbk_caam_ovhd = CAAM_CCM_OVERHEAD; 93 + } 94 + if ((info->input_len + hwbk_caam_ovhd) > MAX_KEY_SIZE) 95 + return -EINVAL; 96 + 97 + len = info->input_len + hwbk_caam_ovhd; 98 + } else { 99 + len = info->input_len; 95 100 } 96 101 97 102 desc = kzalloc(CAAM_BLOB_DESC_BYTES_MAX, GFP_KERNEL); 98 103 if (!desc) 99 104 return -ENOMEM; 100 105 101 - dma_in = dma_map_single(jrdev, info->input, info->input_len, 
102 - DMA_TO_DEVICE); 106 + dma_in = dma_map_single(jrdev, info->input, len, 107 + encap ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE); 103 108 if (dma_mapping_error(jrdev, dma_in)) { 104 109 dev_err(jrdev, "unable to map input DMA buffer\n"); 105 110 ret = -ENOMEM; ··· 129 104 goto out_unmap_in; 130 105 } 131 106 132 - ctrlpriv = dev_get_drvdata(jrdev->parent); 133 - moo = FIELD_GET(CSTA_MOO, rd_reg32(&ctrlpriv->jr[0]->perfmon.status)); 107 + moo = check_caam_state(jrdev); 134 108 if (moo != CSTA_MOO_SECURE && moo != CSTA_MOO_TRUSTED) 135 109 dev_warn(jrdev, 136 110 "using insecure test key, enable HAB to use unique device key!\n"); ··· 141 117 * Class 1 Context DWords 0+1+2+3. The random BK is stored in the 142 118 * Class 1 Key Register. Operation Mode is set to AES-CCM. 143 119 */ 144 - 145 120 init_job_desc(desc, 0); 121 + 122 + if (encap && info->pkey_info.is_pkey) { 123 + /*!1. key command used to load class 1 key register 124 + * from input plain key. 125 + */ 126 + append_key(desc, dma_in, info->input_len, 127 + CLASS_1 | KEY_DEST_CLASS_REG); 128 + /*!2. Fifostore to store protected key from class 1 key register. */ 129 + if (info->pkey_info.key_enc_algo == CAAM_ENC_ALGO_CCM) { 130 + append_fifo_store(desc, dma_in, info->input_len, 131 + LDST_CLASS_1_CCB | 132 + FIFOST_TYPE_KEY_CCM_JKEK); 133 + } else { 134 + append_fifo_store(desc, dma_in, info->input_len, 135 + LDST_CLASS_1_CCB | 136 + FIFOST_TYPE_KEY_KEK); 137 + } 138 + /* 139 + * JUMP_OFFSET specifies the offset of the JUMP target from 140 + * the JUMP command's address in the descriptor buffer. 141 + */ 142 + append_jump(desc, JUMP_COND_NOP | BIT(0) << JUMP_OFFSET_SHIFT); 143 + } 144 + 145 + /*!3. Load class 2 key with key modifier. 
*/ 146 146 append_key_as_imm(desc, info->key_mod, info->key_mod_len, 147 - info->key_mod_len, CLASS_2 | KEY_DEST_CLASS_REG); 148 - append_seq_in_ptr_intlen(desc, dma_in, info->input_len, 0); 149 - append_seq_out_ptr_intlen(desc, dma_out, output_len, 0); 147 + info->key_mod_len, CLASS_2 | KEY_DEST_CLASS_REG); 148 + 149 + /*!4. SEQ IN PTR Command. */ 150 + append_seq_in_ptr(desc, dma_in, info->input_len, 0); 151 + 152 + /*!5. SEQ OUT PTR Command. */ 153 + append_seq_out_ptr(desc, dma_out, output_len, 0); 154 + 155 + /*!6. Blob encapsulation/decapsulation PROTOCOL Command. */ 150 156 append_operation(desc, op); 151 157 152 - print_hex_dump_debug("data@"__stringify(__LINE__)": ", 158 + print_hex_dump_debug("data@" __stringify(__LINE__)": ", 153 159 DUMP_PREFIX_ADDRESS, 16, 1, info->input, 154 - info->input_len, false); 155 - print_hex_dump_debug("jobdesc@"__stringify(__LINE__)": ", 160 + len, false); 161 + print_hex_dump_debug("jobdesc@" __stringify(__LINE__)": ", 156 162 DUMP_PREFIX_ADDRESS, 16, 1, desc, 157 163 desc_bytes(desc), false); 158 164 ··· 193 139 if (ret == -EINPROGRESS) { 194 140 wait_for_completion(&testres.completion); 195 141 ret = testres.err; 196 - print_hex_dump_debug("output@"__stringify(__LINE__)": ", 142 + print_hex_dump_debug("output@" __stringify(__LINE__)": ", 197 143 DUMP_PREFIX_ADDRESS, 16, 1, info->output, 198 144 output_len, false); 199 145 } ··· 203 149 204 150 dma_unmap_single(jrdev, dma_out, output_len, DMA_FROM_DEVICE); 205 151 out_unmap_in: 206 - dma_unmap_single(jrdev, dma_in, info->input_len, DMA_TO_DEVICE); 152 + dma_unmap_single(jrdev, dma_in, len, 153 + encap ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE); 207 154 out_free: 208 155 kfree(desc); 209 - 210 156 return ret; 211 157 } 212 158 EXPORT_SYMBOL(caam_process_blob);
drivers/crypto/caam/caamalg.c (+117 -11)
··· 3 3 * caam - Freescale FSL CAAM support for crypto API 4 4 * 5 5 * Copyright 2008-2011 Freescale Semiconductor, Inc. 6 - * Copyright 2016-2019, 2023 NXP 6 + * Copyright 2016-2019, 2023, 2025 NXP 7 7 * 8 8 * Based on talitos crypto API driver. 9 9 * ··· 61 61 #include <crypto/internal/engine.h> 62 62 #include <crypto/internal/skcipher.h> 63 63 #include <crypto/xts.h> 64 + #include <keys/trusted-type.h> 64 65 #include <linux/dma-mapping.h> 65 66 #include <linux/device.h> 66 67 #include <linux/err.h> 67 68 #include <linux/module.h> 68 69 #include <linux/kernel.h> 70 + #include <linux/key-type.h> 69 71 #include <linux/slab.h> 70 72 #include <linux/string.h> 73 + #include <soc/fsl/caam-blob.h> 71 74 72 75 /* 73 76 * crypto alg ··· 122 119 dma_addr_t sh_desc_enc_dma; 123 120 dma_addr_t sh_desc_dec_dma; 124 121 dma_addr_t key_dma; 122 + u8 protected_key[CAAM_MAX_KEY_SIZE]; 123 + dma_addr_t protected_key_dma; 125 124 enum dma_data_direction dir; 126 125 struct device *jrdev; 127 126 struct alginfo adata; 128 127 struct alginfo cdata; 129 128 unsigned int authsize; 130 129 bool xts_key_fallback; 130 + bool is_blob; 131 131 struct crypto_skcipher *fallback; 132 132 }; 133 133 ··· 757 751 print_hex_dump_debug("key in @"__stringify(__LINE__)": ", 758 752 DUMP_PREFIX_ADDRESS, 16, 4, key, keylen, 1); 759 753 754 + /* Here keylen is actual key length */ 760 755 ctx->cdata.keylen = keylen; 761 756 ctx->cdata.key_virt = key; 762 757 ctx->cdata.key_inline = true; 758 + /* Here protected key len is plain key length */ 759 + ctx->cdata.plain_keylen = keylen; 760 + ctx->cdata.key_cmd_opt = 0; 761 + 763 762 764 763 /* skcipher_encrypt shared descriptor */ 765 764 desc = ctx->sh_desc_enc; ··· 779 768 ctx1_iv_off); 780 769 dma_sync_single_for_device(jrdev, ctx->sh_desc_dec_dma, 781 770 desc_bytes(desc), ctx->dir); 771 + 772 + return 0; 773 + } 774 + 775 + static int paes_skcipher_setkey(struct crypto_skcipher *skcipher, 776 + const u8 *key, 777 + unsigned int keylen) 778 + { 779 + 
struct caam_pkey_info *pkey_info = (struct caam_pkey_info *)key; 780 + struct caam_ctx *ctx = crypto_skcipher_ctx_dma(skcipher); 781 + struct device *jrdev = ctx->jrdev; 782 + int err; 783 + 784 + ctx->cdata.key_inline = false; 785 + 786 + keylen = keylen - CAAM_PKEY_HEADER; 787 + 788 + /* Retrieve the length of key */ 789 + ctx->cdata.plain_keylen = pkey_info->plain_key_sz; 790 + 791 + /* Retrieve the length of blob*/ 792 + ctx->cdata.keylen = keylen; 793 + 794 + /* Retrieve the address of the blob */ 795 + ctx->cdata.key_virt = pkey_info->key_buf; 796 + 797 + /* Validate key length for AES algorithms */ 798 + err = aes_check_keylen(ctx->cdata.plain_keylen); 799 + if (err) { 800 + dev_err(jrdev, "bad key length\n"); 801 + return err; 802 + } 803 + 804 + /* set command option */ 805 + ctx->cdata.key_cmd_opt |= KEY_ENC; 806 + 807 + /* check if the Protected-Key is CCM key */ 808 + if (pkey_info->key_enc_algo == CAAM_ENC_ALGO_CCM) 809 + ctx->cdata.key_cmd_opt |= KEY_EKT; 810 + 811 + memcpy(ctx->key, ctx->cdata.key_virt, keylen); 812 + dma_sync_single_for_device(jrdev, ctx->key_dma, keylen, DMA_TO_DEVICE); 813 + ctx->cdata.key_dma = ctx->key_dma; 814 + 815 + if (pkey_info->key_enc_algo == CAAM_ENC_ALGO_CCM) 816 + ctx->protected_key_dma = dma_map_single(jrdev, ctx->protected_key, 817 + ctx->cdata.plain_keylen + 818 + CAAM_CCM_OVERHEAD, 819 + DMA_FROM_DEVICE); 820 + else 821 + ctx->protected_key_dma = dma_map_single(jrdev, ctx->protected_key, 822 + ctx->cdata.plain_keylen, 823 + DMA_FROM_DEVICE); 824 + 825 + ctx->cdata.protected_key_dma = ctx->protected_key_dma; 826 + ctx->is_blob = true; 782 827 783 828 return 0; 784 829 } ··· 1321 1254 struct caam_ctx *ctx = crypto_skcipher_ctx_dma(skcipher); 1322 1255 struct device *jrdev = ctx->jrdev; 1323 1256 int ivsize = crypto_skcipher_ivsize(skcipher); 1324 - u32 *desc = edesc->hw_desc; 1257 + u32 *desc = !ctx->is_blob ? 
edesc->hw_desc : 1258 + (u32 *)((u8 *)edesc->hw_desc + CAAM_DESC_BYTES_MAX); 1259 + dma_addr_t desc_dma; 1325 1260 u32 *sh_desc; 1326 1261 u32 in_options = 0, out_options = 0; 1327 1262 dma_addr_t src_dma, dst_dma, ptr; ··· 1338 1269 DUMP_PREFIX_ADDRESS, 16, 4, req->src, 1339 1270 edesc->src_nents > 1 ? 100 : req->cryptlen, 1); 1340 1271 1341 - sh_desc = encrypt ? ctx->sh_desc_enc : ctx->sh_desc_dec; 1342 - ptr = encrypt ? ctx->sh_desc_enc_dma : ctx->sh_desc_dec_dma; 1343 - 1344 - len = desc_len(sh_desc); 1345 - init_job_desc_shared(desc, ptr, len, HDR_SHARE_DEFER | HDR_REVERSE); 1346 1272 1347 1273 if (ivsize || edesc->mapped_src_nents > 1) { 1348 1274 src_dma = edesc->sec4_sg_dma; ··· 1346 1282 } else { 1347 1283 src_dma = sg_dma_address(req->src); 1348 1284 } 1349 - 1350 - append_seq_in_ptr(desc, src_dma, req->cryptlen + ivsize, in_options); 1351 1285 1352 1286 if (likely(req->src == req->dst)) { 1353 1287 dst_dma = src_dma + !!ivsize * sizeof(struct sec4_sg_entry); ··· 1358 1296 out_options = LDST_SGF; 1359 1297 } 1360 1298 1361 - append_seq_out_ptr(desc, dst_dma, req->cryptlen + ivsize, out_options); 1299 + if (ctx->is_blob) { 1300 + cnstr_desc_skcipher_enc_dec(desc, &ctx->cdata, 1301 + src_dma, dst_dma, req->cryptlen + ivsize, 1302 + in_options, out_options, 1303 + ivsize, encrypt); 1304 + 1305 + desc_dma = dma_map_single(jrdev, desc, desc_bytes(desc), DMA_TO_DEVICE); 1306 + 1307 + cnstr_desc_protected_blob_decap(edesc->hw_desc, &ctx->cdata, desc_dma); 1308 + } else { 1309 + sh_desc = encrypt ? ctx->sh_desc_enc : ctx->sh_desc_dec; 1310 + ptr = encrypt ? 
ctx->sh_desc_enc_dma : ctx->sh_desc_dec_dma; 1311 + 1312 + len = desc_len(sh_desc); 1313 + init_job_desc_shared(desc, ptr, len, HDR_SHARE_DEFER | HDR_REVERSE); 1314 + append_seq_in_ptr(desc, src_dma, req->cryptlen + ivsize, in_options); 1315 + 1316 + append_seq_out_ptr(desc, dst_dma, req->cryptlen + ivsize, out_options); 1317 + } 1362 1318 } 1363 1319 1364 1320 /* ··· 1897 1817 struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent); 1898 1818 u32 *desc; 1899 1819 int ret = 0; 1820 + int len; 1900 1821 1901 1822 /* 1902 1823 * XTS is expected to return an error even for input length = 0 ··· 1923 1842 crypto_skcipher_decrypt(&rctx->fallback_req); 1924 1843 } 1925 1844 1845 + len = DESC_JOB_IO_LEN * CAAM_CMD_SZ; 1846 + if (ctx->is_blob) 1847 + len += CAAM_DESC_BYTES_MAX; 1848 + 1926 1849 /* allocate extended descriptor */ 1927 - edesc = skcipher_edesc_alloc(req, DESC_JOB_IO_LEN * CAAM_CMD_SZ); 1850 + edesc = skcipher_edesc_alloc(req, len); 1928 1851 if (IS_ERR(edesc)) 1929 1852 return PTR_ERR(edesc); 1930 1853 ··· 1970 1885 } 1971 1886 1972 1887 static struct caam_skcipher_alg driver_algs[] = { 1888 + { 1889 + .skcipher.base = { 1890 + .base = { 1891 + .cra_name = "cbc(paes)", 1892 + .cra_driver_name = "cbc-paes-caam", 1893 + .cra_blocksize = AES_BLOCK_SIZE, 1894 + }, 1895 + .setkey = paes_skcipher_setkey, 1896 + .encrypt = skcipher_encrypt, 1897 + .decrypt = skcipher_decrypt, 1898 + .min_keysize = AES_MIN_KEY_SIZE + CAAM_BLOB_OVERHEAD + 1899 + CAAM_PKEY_HEADER, 1900 + .max_keysize = AES_MAX_KEY_SIZE + CAAM_BLOB_OVERHEAD + 1901 + CAAM_PKEY_HEADER, 1902 + .ivsize = AES_BLOCK_SIZE, 1903 + }, 1904 + .skcipher.op = { 1905 + .do_one_request = skcipher_do_one_req, 1906 + }, 1907 + .caam.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC, 1908 + }, 1973 1909 { 1974 1910 .skcipher.base = { 1975 1911 .base = {
drivers/crypto/caam/caamalg_desc.c (+84 -3)
··· 2 2 /* 3 3 * Shared descriptors for aead, skcipher algorithms 4 4 * 5 - * Copyright 2016-2019 NXP 5 + * Copyright 2016-2019, 2025 NXP 6 6 */ 7 7 8 8 #include "compat.h" 9 9 #include "desc_constr.h" 10 10 #include "caamalg_desc.h" 11 + #include <soc/fsl/caam-blob.h> 11 12 12 13 /* 13 14 * For aead functions, read payload and write payload, ··· 1365 1364 append_seq_fifo_store(desc, 0, FIFOST_TYPE_MESSAGE_DATA | KEY_VLF); 1366 1365 } 1367 1366 1367 + void cnstr_desc_skcipher_enc_dec(u32 * const desc, struct alginfo *cdata, 1368 + dma_addr_t src, dma_addr_t dst, unsigned int data_sz, 1369 + unsigned int in_options, unsigned int out_options, 1370 + unsigned int ivsize, const bool encrypt) 1371 + { 1372 + u32 options = cdata->algtype | OP_ALG_AS_INIT; 1373 + 1374 + if (encrypt) 1375 + options |= OP_ALG_ENCRYPT; 1376 + else 1377 + options |= OP_ALG_DECRYPT; 1378 + 1379 + init_job_desc(desc, 0); 1380 + 1381 + append_jump(desc, JUMP_JSL | JUMP_TYPE_LOCAL | 1382 + JUMP_COND_NOP | JUMP_TEST_ALL | 1); 1383 + 1384 + append_key(desc, cdata->protected_key_dma, cdata->plain_keylen, 1385 + CLASS_1 | KEY_DEST_CLASS_REG | cdata->key_cmd_opt); 1386 + 1387 + append_seq_in_ptr(desc, src, data_sz, in_options); 1388 + 1389 + append_seq_out_ptr(desc, dst, data_sz, out_options); 1390 + 1391 + /* Load IV, if there is one */ 1392 + if (ivsize) 1393 + append_seq_load(desc, ivsize, LDST_SRCDST_BYTE_CONTEXT | 1394 + LDST_CLASS_1_CCB); 1395 + 1396 + append_operation(desc, options); 1397 + 1398 + skcipher_append_src_dst(desc); 1399 + 1400 + /* Store IV */ 1401 + if (ivsize) 1402 + append_seq_store(desc, ivsize, LDST_SRCDST_BYTE_CONTEXT | 1403 + LDST_CLASS_1_CCB); 1404 + 1405 + print_hex_dump_debug("skcipher_enc_dec job desc@" __stringify(__LINE__)": ", 1406 + DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1407 + 1); 1408 + } 1409 + EXPORT_SYMBOL(cnstr_desc_skcipher_enc_dec); 1410 + 1411 + void cnstr_desc_protected_blob_decap(u32 * const desc, struct alginfo *cdata, 1412 + dma_addr_t 
next_desc_addr) 1413 + { 1414 + u32 protected_store; 1415 + 1416 + init_job_desc(desc, 0); 1417 + 1418 + /* Load key modifier */ 1419 + append_load_as_imm(desc, KEYMOD, sizeof(KEYMOD) - 1, 1420 + LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_KEY); 1421 + 1422 + append_seq_in_ptr_intlen(desc, cdata->key_dma, 1423 + cdata->plain_keylen + CAAM_BLOB_OVERHEAD, 0); 1424 + 1425 + append_seq_out_ptr_intlen(desc, cdata->protected_key_dma, 1426 + cdata->plain_keylen, 0); 1427 + 1428 + protected_store = OP_PCLID_BLOB | OP_PCL_BLOB_BLACK; 1429 + if ((cdata->key_cmd_opt >> KEY_EKT_OFFSET) & 1) 1430 + protected_store |= OP_PCL_BLOB_EKT; 1431 + 1432 + append_operation(desc, OP_TYPE_DECAP_PROTOCOL | protected_store); 1433 + 1434 + if (next_desc_addr) { 1435 + append_jump(desc, JUMP_TYPE_NONLOCAL | JUMP_TEST_ALL); 1436 + append_ptr(desc, next_desc_addr); 1437 + } 1438 + 1439 + print_hex_dump_debug("protected blob decap job desc@" __stringify(__LINE__) ":", 1440 + DUMP_PREFIX_ADDRESS, 16, 4, desc, 1441 + desc_bytes(desc), 1); 1442 + } 1443 + EXPORT_SYMBOL(cnstr_desc_protected_blob_decap); 1444 + 1368 1445 /** 1369 1446 * cnstr_shdsc_skcipher_encap - skcipher encapsulation shared descriptor 1370 1447 * @desc: pointer to buffer used for descriptor construction ··· 1470 1391 1471 1392 /* Load class1 key only */ 1472 1393 append_key_as_imm(desc, cdata->key_virt, cdata->keylen, 1473 - cdata->keylen, CLASS_1 | KEY_DEST_CLASS_REG); 1394 + cdata->plain_keylen, CLASS_1 | KEY_DEST_CLASS_REG 1395 + | cdata->key_cmd_opt); 1474 1396 1475 1397 /* Load nonce into CONTEXT1 reg */ 1476 1398 if (is_rfc3686) { ··· 1546 1466 1547 1467 /* Load class1 key only */ 1548 1468 append_key_as_imm(desc, cdata->key_virt, cdata->keylen, 1549 - cdata->keylen, CLASS_1 | KEY_DEST_CLASS_REG); 1469 + cdata->plain_keylen, CLASS_1 | KEY_DEST_CLASS_REG 1470 + | cdata->key_cmd_opt); 1550 1471 1551 1472 /* Load nonce into CONTEXT1 reg */ 1552 1473 if (is_rfc3686) {
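The new cnstr_desc_protected_blob_decap() derives the blob protocol's protinfo word from the KEY command options. The constants below are the ones this pull adds to desc.h; the standalone helper is a sketch of that bit translation, not the driver function:

```c
#include <assert.h>
#include <stdint.h>

/* constant values as added to drivers/crypto/caam/desc.h in this pull */
#define KEY_EKT           0x00100000u
#define KEY_EKT_OFFSET    20
#define OP_PCL_BLOB_BLACK 0x0004u
#define OP_PCL_BLOB_EKT   0x0100u

/* mirror the descriptor logic: always request a black blob, and carry the
 * KEY command's EKT (enhanced encryption of key) bit over into the blob
 * protocol's protinfo field */
static uint32_t blob_protinfo(uint32_t key_cmd_opt)
{
    uint32_t protected_store = OP_PCL_BLOB_BLACK;

    if ((key_cmd_opt >> KEY_EKT_OFFSET) & 1)
        protected_store |= OP_PCL_BLOB_EKT;
    return protected_store;
}
```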
drivers/crypto/caam/caamalg_desc.h (+12 -1)
··· 2 2 /* 3 3 * Shared descriptors for aead, skcipher algorithms 4 4 * 5 - * Copyright 2016 NXP 5 + * Copyright 2016, 2025 NXP 6 6 */ 7 7 8 8 #ifndef _CAAMALG_DESC_H_ ··· 47 47 21 * CAAM_CMD_SZ) 48 48 #define DESC_SKCIPHER_DEC_LEN (DESC_SKCIPHER_BASE + \ 49 49 16 * CAAM_CMD_SZ) 50 + 51 + /* Key modifier for CAAM Protected blobs */ 52 + #define KEYMOD "SECURE_KEY" 50 53 51 54 void cnstr_shdsc_aead_null_encap(u32 * const desc, struct alginfo *adata, 52 55 unsigned int icvsize, int era); ··· 115 112 void cnstr_shdsc_xts_skcipher_encap(u32 * const desc, struct alginfo *cdata); 116 113 117 114 void cnstr_shdsc_xts_skcipher_decap(u32 * const desc, struct alginfo *cdata); 115 + 116 + void cnstr_desc_protected_blob_decap(u32 * const desc, struct alginfo *cdata, 117 + dma_addr_t next_desc); 118 + 119 + void cnstr_desc_skcipher_enc_dec(u32 * const desc, struct alginfo *cdata, 120 + dma_addr_t src, dma_addr_t dst, unsigned int data_sz, 121 + unsigned int in_options, unsigned int out_options, 122 + unsigned int ivsize, const bool encrypt); 118 123 119 124 #endif /* _CAAMALG_DESC_H_ */
drivers/crypto/caam/caamrng.c (+3 -1)
··· 181 181 struct device *dev = ctx->ctrldev; 182 182 183 183 buf = kcalloc(CAAM_RNG_MAX_FIFO_STORE_SIZE, sizeof(u8), GFP_KERNEL); 184 - 184 + if (!buf) { 185 + return; 186 + } 185 187 while (len > 0) { 186 188 read_len = rng->read(rng, buf, len, wait); 187 189
drivers/crypto/caam/desc.h (+8 -1)
··· 4 4 * Definitions to support CAAM descriptor instruction generation 5 5 * 6 6 * Copyright 2008-2011 Freescale Semiconductor, Inc. 7 - * Copyright 2018 NXP 7 + * Copyright 2018, 2025 NXP 8 8 */ 9 9 10 10 #ifndef DESC_H ··· 162 162 * Enhanced Encryption of Key 163 163 */ 164 164 #define KEY_EKT 0x00100000 165 + #define KEY_EKT_OFFSET 20 165 166 166 167 /* 167 168 * Encrypted with Trusted Key ··· 404 403 #define FIFOST_TYPE_PKHA_N (0x08 << FIFOST_TYPE_SHIFT) 405 404 #define FIFOST_TYPE_PKHA_A (0x0c << FIFOST_TYPE_SHIFT) 406 405 #define FIFOST_TYPE_PKHA_B (0x0d << FIFOST_TYPE_SHIFT) 406 + #define FIFOST_TYPE_KEY_CCM_JKEK (0x14 << FIFOST_TYPE_SHIFT) 407 407 #define FIFOST_TYPE_AF_SBOX_JKEK (0x20 << FIFOST_TYPE_SHIFT) 408 408 #define FIFOST_TYPE_AF_SBOX_TKEK (0x21 << FIFOST_TYPE_SHIFT) 409 409 #define FIFOST_TYPE_PKHA_E_JKEK (0x22 << FIFOST_TYPE_SHIFT) ··· 1002 1000 #define OP_PCL_TLS12_AES_256_CBC_SHA256 0xff66 1003 1001 #define OP_PCL_TLS12_AES_256_CBC_SHA384 0xff63 1004 1002 #define OP_PCL_TLS12_AES_256_CBC_SHA512 0xff65 1003 + 1004 + /* Blob protocol protinfo bits */ 1005 + 1006 + #define OP_PCL_BLOB_BLACK 0x0004 1007 + #define OP_PCL_BLOB_EKT 0x0100 1005 1008 1006 1009 /* For DTLS - OP_PCLID_DTLS */ 1007 1010
drivers/crypto/caam/desc_constr.h (+7 -1)
··· 3 3 * caam descriptor construction helper functions 4 4 * 5 5 * Copyright 2008-2012 Freescale Semiconductor, Inc. 6 - * Copyright 2019 NXP 6 + * Copyright 2019, 2025 NXP 7 7 */ 8 8 9 9 #ifndef DESC_CONSTR_H ··· 498 498 * @keylen: length of the provided algorithm key, in bytes 499 499 * @keylen_pad: padded length of the provided algorithm key, in bytes 500 500 * @key_dma: dma (bus) address where algorithm key resides 501 + * @protected_key_dma: dma (bus) address where protected key resides 501 502 * @key_virt: virtual address where algorithm key resides 502 503 * @key_inline: true - key can be inlined in the descriptor; false - key is 503 504 * referenced by the descriptor 505 + * @plain_keylen: size of the key to be loaded by the CAAM 506 + * @key_cmd_opt: optional parameters for KEY command 504 507 */ 505 508 struct alginfo { 506 509 u32 algtype; 507 510 unsigned int keylen; 508 511 unsigned int keylen_pad; 509 512 dma_addr_t key_dma; 513 + dma_addr_t protected_key_dma; 510 514 const void *key_virt; 511 515 bool key_inline; 516 + u32 plain_keylen; 517 + u32 key_cmd_opt; 512 518 }; 513 519 514 520 /**
drivers/crypto/cavium/nitrox/nitrox_mbx.c (+1 -1)
··· 192 192 } 193 193 194 194 /* allocate pf2vf response workqueue */ 195 - ndev->iov.pf2vf_wq = alloc_workqueue("nitrox_pf2vf", 0, 0); 195 + ndev->iov.pf2vf_wq = alloc_workqueue("nitrox_pf2vf", WQ_PERCPU, 0); 196 196 if (!ndev->iov.pf2vf_wq) { 197 197 kfree(ndev->iov.vfdev); 198 198 ndev->iov.vfdev = NULL;
drivers/crypto/ccp/ccp-dev.c (+1 -1)
··· 507 507 { 508 508 struct ccp_device *ccp = container_of(rng, struct ccp_device, hwrng); 509 509 u32 trng_value; 510 - int len = min_t(int, sizeof(trng_value), max); 510 + int len = min(sizeof(trng_value), max); 511 511 512 512 /* Locking is provided by the caller so we can update device 513 513 * hwrng-related fields safely
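The min_t() → min() cleanups in this series are cosmetic here, but the hazard they avoid is real: min_t() casts both operands to the named type before comparing, so a too-narrow type truncates silently. A userspace sketch with simplified stand-ins for the two macros (the kernel's real min() additionally warns on mismatched operand types):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* simplified stand-ins for the kernel's minmax.h macros */
#define min(a, b)      ((a) < (b) ? (a) : (b))
#define min_t(t, a, b) ((t)(a) < (t)(b) ? (t)(a) : (t)(b))

/* when both operands are already the same type, plain min() needs no cast
 * and cannot truncate */
static size_t clamp_len(size_t want, size_t max)
{
    return min(want, max);
}
```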
drivers/crypto/ccp/sp-dev.h (+1 -1)
··· 95 95 96 96 struct device *dev; 97 97 98 - struct sp_dev_vdata *dev_vdata; 98 + const struct sp_dev_vdata *dev_vdata; 99 99 unsigned int ord; 100 100 char name[SP_MAX_NAME_LEN]; 101 101
drivers/crypto/ccp/sp-pci.c (+19)
··· 459 459 .intsts_reg = 0x10514, /* P2CMSG_INTSTS */ 460 460 }; 461 461 462 + static const struct psp_vdata pspv7 = { 463 + .tee = &teev2, 464 + .cmdresp_reg = 0x10944, /* C2PMSG_17 */ 465 + .cmdbuff_addr_lo_reg = 0x10948, /* C2PMSG_18 */ 466 + .cmdbuff_addr_hi_reg = 0x1094c, /* C2PMSG_19 */ 467 + .bootloader_info_reg = 0x109ec, /* C2PMSG_59 */ 468 + .feature_reg = 0x109fc, /* C2PMSG_63 */ 469 + .inten_reg = 0x10510, /* P2CMSG_INTEN */ 470 + .intsts_reg = 0x10514, /* P2CMSG_INTSTS */ 471 + }; 472 + 462 473 #endif 463 474 464 475 static const struct sp_dev_vdata dev_vdata[] = { ··· 536 525 .psp_vdata = &pspv6, 537 526 #endif 538 527 }, 528 + { /* 9 */ 529 + .bar = 2, 530 + #ifdef CONFIG_CRYPTO_DEV_SP_PSP 531 + .psp_vdata = &pspv7, 532 + #endif 533 + }, 534 + 539 535 }; 540 536 static const struct pci_device_id sp_pci_table[] = { 541 537 { PCI_VDEVICE(AMD, 0x1537), (kernel_ulong_t)&dev_vdata[0] }, ··· 557 539 { PCI_VDEVICE(AMD, 0x17E0), (kernel_ulong_t)&dev_vdata[7] }, 558 540 { PCI_VDEVICE(AMD, 0x156E), (kernel_ulong_t)&dev_vdata[8] }, 559 541 { PCI_VDEVICE(AMD, 0x17D8), (kernel_ulong_t)&dev_vdata[8] }, 542 + { PCI_VDEVICE(AMD, 0x115A), (kernel_ulong_t)&dev_vdata[9] }, 560 543 /* Last entry must be zero */ 561 544 { 0, } 562 545 };
drivers/crypto/ccp/sp-platform.c (+3 -14)
··· 52 52 }; 53 53 MODULE_DEVICE_TABLE(of, sp_of_match); 54 54 55 - static struct sp_dev_vdata *sp_get_of_version(struct platform_device *pdev) 56 - { 57 - const struct of_device_id *match; 58 - 59 - match = of_match_node(sp_of_match, pdev->dev.of_node); 60 - if (match && match->data) 61 - return (struct sp_dev_vdata *)match->data; 62 - 63 - return NULL; 64 - } 65 - 66 - static struct sp_dev_vdata *sp_get_acpi_version(struct platform_device *pdev) 55 + static const struct sp_dev_vdata *sp_get_acpi_version(struct platform_device *pdev) 67 56 { 68 57 const struct acpi_device_id *match; 69 58 70 59 match = acpi_match_device(sp_acpi_match, &pdev->dev); 71 60 if (match && match->driver_data) 72 - return (struct sp_dev_vdata *)match->driver_data; 61 + return (const struct sp_dev_vdata *)match->driver_data; 73 62 74 63 return NULL; 75 64 } ··· 112 123 goto e_err; 113 124 114 125 sp->dev_specific = sp_platform; 115 - sp->dev_vdata = pdev->dev.of_node ? sp_get_of_version(pdev) 126 + sp->dev_vdata = pdev->dev.of_node ? of_device_get_match_data(&pdev->dev) 116 127 : sp_get_acpi_version(pdev); 117 128 if (!sp->dev_vdata) { 118 129 ret = -ENODEV;
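of_device_get_match_data() replaces the hand-rolled lookup that sp_get_of_version() used to perform. The core idea, sketched in plain C with illustrative names rather than the kernel API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* illustrative stand-in for an of_device_id table entry */
struct match_entry {
    const char *compatible;  /* device identifier string */
    const void *data;        /* const per-device driver data */
};

/* walk a NULL-terminated match table and return the driver data of the
 * first matching entry, or NULL when nothing matches */
static const void *get_match_data(const struct match_entry *tbl,
                                  const char *compat)
{
    for (; tbl->compatible; tbl++)
        if (strcmp(tbl->compatible, compat) == 0)
            return tbl->data;
    return NULL;
}

/* hypothetical table for the test below */
static const int sp_v1_vdata = 1;
static const struct match_entry sp_matches[] = {
    { "vendor,sp-v1", &sp_v1_vdata },
    { NULL, NULL },
};
```

Returning a `const void *` is also what motivates the sp-dev.h hunk above, which constifies the `dev_vdata` pointer.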
drivers/crypto/ccree/cc_buffer_mgr.c (+5 -1)
··· 1235 1235 int rc = 0; 1236 1236 u32 dummy = 0; 1237 1237 u32 mapped_nents = 0; 1238 + int sg_nents; 1238 1239 1239 1240 dev_dbg(dev, " update params : curr_buff=%p curr_buff_cnt=0x%X nbytes=0x%X src=%p curr_index=%u\n", 1240 1241 curr_buff, *curr_buff_cnt, nbytes, src, areq_ctx->buff_index); ··· 1249 1248 if (total_in_len < block_size) { 1250 1249 dev_dbg(dev, " less than one block: curr_buff=%p *curr_buff_cnt=0x%X copy_to=%p\n", 1251 1250 curr_buff, *curr_buff_cnt, &curr_buff[*curr_buff_cnt]); 1252 - areq_ctx->in_nents = sg_nents_for_len(src, nbytes); 1251 + sg_nents = sg_nents_for_len(src, nbytes); 1252 + if (sg_nents < 0) 1253 + return sg_nents; 1254 + areq_ctx->in_nents = sg_nents; 1253 1255 sg_copy_to_buffer(src, areq_ctx->in_nents, 1254 1256 &curr_buff[*curr_buff_cnt], nbytes); 1255 1257 *curr_buff_cnt += nbytes;
drivers/crypto/hifn_795x.c (+3 -4)
··· 913 913 else 914 914 pllcfg |= HIFN_PLL_REF_CLK_HBI; 915 915 916 - if (hifn_pll_ref[3] != '\0') 917 - freq = simple_strtoul(hifn_pll_ref + 3, NULL, 10); 918 - else { 916 + if (hifn_pll_ref[3] == '\0' || 917 + kstrtouint(hifn_pll_ref + 3, 10, &freq)) { 919 918 freq = 66; 920 - dev_info(&dev->pdev->dev, "assuming %uMHz clock speed, override with hifn_pll_ref=%.3s<frequency>\n", 919 + dev_info(&dev->pdev->dev, "assuming %u MHz clock speed, override with hifn_pll_ref=%.3s<frequency>\n", 921 920 freq, hifn_pll_ref); 922 921 } 923 922
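The hifn change swaps simple_strtoul() for kstrtouint(), which fails on trailing junk instead of silently stopping at it, letting the driver fall back to the 66 MHz default on any malformed module parameter. A userspace approximation of that stricter contract (assumed semantics, not the kernel implementation):

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* strict string-to-uint parse: reject empty input, trailing junk, and
 * out-of-range values instead of returning a partial or truncated result */
static int parse_uint(const char *s, unsigned int *out)
{
    unsigned long long v;
    char *end;

    errno = 0;
    v = strtoull(s, &end, 10);
    if (errno || end == s || *end != '\0' || v > UINT_MAX)
        return -EINVAL;
    *out = (unsigned int)v;
    return 0;
}
```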
drivers/crypto/hisilicon/qm.c (+40 -15)
··· 64 64 #define QM_EQE_AEQE_SIZE (2UL << 12) 65 65 #define QM_EQC_PHASE_SHIFT 16 66 66 67 - #define QM_EQE_PHASE(eqe) ((le32_to_cpu((eqe)->dw0) >> 16) & 0x1) 67 + #define QM_EQE_PHASE(dw0) (((dw0) >> 16) & 0x1) 68 68 #define QM_EQE_CQN_MASK GENMASK(15, 0) 69 69 70 - #define QM_AEQE_PHASE(aeqe) ((le32_to_cpu((aeqe)->dw0) >> 16) & 0x1) 70 + #define QM_AEQE_PHASE(dw0) (((dw0) >> 16) & 0x1) 71 71 #define QM_AEQE_TYPE_SHIFT 17 72 72 #define QM_AEQE_TYPE_MASK 0xf 73 73 #define QM_AEQE_CQN_MASK GENMASK(15, 0) ··· 976 976 { 977 977 struct qm_eqe *eqe = qm->eqe + qm->status.eq_head; 978 978 struct hisi_qm_poll_data *poll_data = NULL; 979 + u32 dw0 = le32_to_cpu(eqe->dw0); 979 980 u16 eq_depth = qm->eq_depth; 980 981 u16 cqn, eqe_num = 0; 981 982 982 - if (QM_EQE_PHASE(eqe) != qm->status.eqc_phase) { 983 + if (QM_EQE_PHASE(dw0) != qm->status.eqc_phase) { 983 984 atomic64_inc(&qm->debug.dfx.err_irq_cnt); 984 985 qm_db(qm, 0, QM_DOORBELL_CMD_EQ, qm->status.eq_head, 0); 985 986 return; 986 987 } 987 988 988 - cqn = le32_to_cpu(eqe->dw0) & QM_EQE_CQN_MASK; 989 + cqn = dw0 & QM_EQE_CQN_MASK; 989 990 if (unlikely(cqn >= qm->qp_num)) 990 991 return; 991 992 poll_data = &qm->poll_data[cqn]; 992 993 993 - while (QM_EQE_PHASE(eqe) == qm->status.eqc_phase) { 994 - cqn = le32_to_cpu(eqe->dw0) & QM_EQE_CQN_MASK; 995 - poll_data->qp_finish_id[eqe_num] = cqn; 994 + while (QM_EQE_PHASE(dw0) != qm->status.eqc_phase) { 995 + poll_data->qp_finish_id[eqe_num] = dw0 & QM_EQE_CQN_MASK; 996 996 eqe_num++; 997 997 998 998 if (qm->status.eq_head == eq_depth - 1) { ··· 1006 1006 1007 1007 if (eqe_num == (eq_depth >> 1) - 1) 1008 1008 break; 1009 + 1010 + dw0 = le32_to_cpu(eqe->dw0); 1009 1011 } 1010 1012 1011 1013 poll_data->eqe_num = eqe_num; ··· 1100 1098 { 1101 1099 struct hisi_qm *qm = data; 1102 1100 struct qm_aeqe *aeqe = qm->aeqe + qm->status.aeq_head; 1101 + u32 dw0 = le32_to_cpu(aeqe->dw0); 1103 1102 u16 aeq_depth = qm->aeq_depth; 1104 1103 u32 type, qp_id; 1105 1104 1106 1105 
atomic64_inc(&qm->debug.dfx.aeq_irq_cnt); 1107 1106 1108 - while (QM_AEQE_PHASE(aeqe) == qm->status.aeqc_phase) { 1109 - type = (le32_to_cpu(aeqe->dw0) >> QM_AEQE_TYPE_SHIFT) & 1110 - QM_AEQE_TYPE_MASK; 1111 - qp_id = le32_to_cpu(aeqe->dw0) & QM_AEQE_CQN_MASK; 1107 + while (QM_AEQE_PHASE(dw0) == qm->status.aeqc_phase) { 1108 + type = (dw0 >> QM_AEQE_TYPE_SHIFT) & QM_AEQE_TYPE_MASK; 1109 + qp_id = dw0 & QM_AEQE_CQN_MASK; 1112 1110 1113 1111 switch (type) { 1114 1112 case QM_EQ_OVERFLOW: ··· 1136 1134 aeqe++; 1137 1135 qm->status.aeq_head++; 1138 1136 } 1137 + dw0 = le32_to_cpu(aeqe->dw0); 1139 1138 } 1140 1139 1141 1140 qm_db(qm, 0, QM_DOORBELL_CMD_AEQ, qm->status.aeq_head, 0); ··· 1285 1282 (QM_SHAPER_CBS_B << QM_SHAPER_FACTOR_CBS_B_SHIFT) | 1286 1283 (factor->cbs_s << QM_SHAPER_FACTOR_CBS_S_SHIFT); 1287 1284 } 1285 + break; 1286 + /* 1287 + * Note: The current logic only needs to handle the above three types 1288 + * If new types are added, they need to be supplemented here, 1289 + * otherwise undefined behavior may occur. 
1290 + */ 1291 + default: 1288 1292 break; 1289 1293 } 1290 1294 } ··· 2662 2652 } 2663 2653 } 2664 2654 list_add(&hw_err->list, &isolate->qm_hw_errs); 2665 - mutex_unlock(&isolate->isolate_lock); 2666 2655 2667 2656 if (count >= isolate->err_threshold) 2668 2657 isolate->is_isolate = true; 2658 + mutex_unlock(&isolate->isolate_lock); 2669 2659 2670 2660 return 0; 2671 2661 } ··· 2674 2664 { 2675 2665 struct qm_hw_err *err, *tmp; 2676 2666 2677 - mutex_lock(&qm->isolate_data.isolate_lock); 2678 2667 list_for_each_entry_safe(err, tmp, &qm->isolate_data.qm_hw_errs, list) { 2679 2668 list_del(&err->list); 2680 2669 kfree(err); 2681 2670 } 2682 - mutex_unlock(&qm->isolate_data.isolate_lock); 2683 2671 } 2684 2672 2685 2673 static enum uacce_dev_state hisi_qm_get_isolate_state(struct uacce_device *uacce) ··· 2705 2697 if (qm->isolate_data.is_isolate) 2706 2698 return -EPERM; 2707 2699 2700 + mutex_lock(&qm->isolate_data.isolate_lock); 2708 2701 qm->isolate_data.err_threshold = num; 2709 2702 2710 2703 /* After the policy is updated, need to reset the hardware err list */ 2711 2704 qm_hw_err_destroy(qm); 2705 + mutex_unlock(&qm->isolate_data.isolate_lock); 2712 2706 2713 2707 return 0; 2714 2708 } ··· 2747 2737 struct uacce_device *uacce = qm->uacce; 2748 2738 2749 2739 if (qm->use_sva) { 2740 + mutex_lock(&qm->isolate_data.isolate_lock); 2750 2741 qm_hw_err_destroy(qm); 2742 + mutex_unlock(&qm->isolate_data.isolate_lock); 2743 + 2751 2744 uacce_remove(uacce); 2752 2745 qm->uacce = NULL; 2753 2746 } ··· 3691 3678 static int qm_func_shaper_enable(struct hisi_qm *qm, u32 fun_index, u32 qos) 3692 3679 { 3693 3680 struct device *dev = &qm->pdev->dev; 3681 + struct qm_shaper_factor t_factor; 3694 3682 u32 ir = qos * QM_QOS_RATE; 3695 3683 int ret, total_vfs, i; 3696 3684 ··· 3699 3685 if (fun_index > total_vfs) 3700 3686 return -EINVAL; 3701 3687 3688 + memcpy(&t_factor, &qm->factor[fun_index], sizeof(t_factor)); 3702 3689 qm->factor[fun_index].func_qos = qos; 3703 3690 3704 
3691 ret = qm_get_shaper_para(ir, &qm->factor[fun_index]); ··· 3713 3698 ret = qm_set_vft_common(qm, SHAPER_VFT, fun_index, i, 1); 3714 3699 if (ret) { 3715 3700 dev_err(dev, "type: %d, failed to set shaper vft!\n", i); 3716 - return -EINVAL; 3701 + goto back_func_qos; 3717 3702 } 3718 3703 } 3719 3704 3720 3705 return 0; 3706 + 3707 + back_func_qos: 3708 + memcpy(&qm->factor[fun_index], &t_factor, sizeof(t_factor)); 3709 + for (i--; i >= ALG_TYPE_0; i--) { 3710 + ret = qm_set_vft_common(qm, SHAPER_VFT, fun_index, i, 1); 3711 + if (ret) 3712 + dev_err(dev, "failed to restore shaper vft during rollback!\n"); 3713 + } 3714 + 3715 + return -EINVAL; 3721 3716 } 3722 3717 3723 3718 static u32 qm_get_shaper_vft_qos(struct hisi_qm *qm, u32 fun_index)
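The qm.c hunks read each event queue entry's dw0 word once and feed it to the phase/CQN macros. The underlying protocol is a phase-bit ring: the producer flips the phase bit each time it wraps, so the consumer processes entries while the phase matches its expectation and toggles that expectation on wrap. A simplified simulation (endianness conversion, doorbells, and the half-depth batching are omitted; names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define EQ_DEPTH 4
#define EQE_PHASE(dw0) (((dw0) >> 16) & 0x1)
#define EQE_CQN(dw0)   ((dw0) & 0xffff)

/* consume entries whose phase bit matches *phase, collecting their CQNs;
 * on wrap-around the expected phase is toggled, mirroring the producer */
static int drain(const uint32_t *eq, unsigned int *head, unsigned int *phase,
                 uint16_t *out, int max)
{
    int n = 0;

    while (n < max) {
        uint32_t dw0 = eq[*head];  /* read the descriptor word once */

        if (EQE_PHASE(dw0) != *phase)
            break;                 /* producer has not written this slot yet */
        out[n++] = EQE_CQN(dw0);
        if (*head == EQ_DEPTH - 1) {
            *head = 0;
            *phase = !*phase;
        } else {
            (*head)++;
        }
    }
    return n;
}
```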
drivers/crypto/hisilicon/sgl.c (-5)
··· 245 245 } 246 246 247 247 curr_hw_sgl = acc_get_sgl(pool, index, &curr_sgl_dma); 248 - if (IS_ERR(curr_hw_sgl)) { 249 - dev_err(dev, "Get SGL error!\n"); 250 - ret = -ENOMEM; 251 - goto err_unmap; 252 - } 253 248 curr_hw_sgl->entry_length_in_sgl = cpu_to_le16(pool->sge_nr); 254 249 curr_hw_sge = curr_hw_sgl->sge_entries; 255 250
drivers/crypto/intel/iaa/iaa_crypto_main.c (+1 -1)
··· 805 805 if (!cpus_per_iaa) 806 806 cpus_per_iaa = 1; 807 807 out: 808 - return 0; 808 + return ret; 809 809 } 810 810 811 811 static void remove_iaa_wq(struct idxd_wq *wq)
drivers/crypto/intel/qat/qat_common/adf_aer.c (+2 -2)
··· 276 276 int adf_init_aer(void) 277 277 { 278 278 device_reset_wq = alloc_workqueue("qat_device_reset_wq", 279 - WQ_MEM_RECLAIM, 0); 279 + WQ_MEM_RECLAIM | WQ_PERCPU, 0); 280 280 if (!device_reset_wq) 281 281 return -EFAULT; 282 282 283 - device_sriov_wq = alloc_workqueue("qat_device_sriov_wq", 0, 0); 283 + device_sriov_wq = alloc_workqueue("qat_device_sriov_wq", WQ_PERCPU, 0); 284 284 if (!device_sriov_wq) { 285 285 destroy_workqueue(device_reset_wq); 286 286 device_reset_wq = NULL;
drivers/crypto/intel/qat/qat_common/adf_isr.c (+2 -1)
··· 384 384 */ 385 385 int __init adf_init_misc_wq(void) 386 386 { 387 - adf_misc_wq = alloc_workqueue("qat_misc_wq", WQ_MEM_RECLAIM, 0); 387 + adf_misc_wq = alloc_workqueue("qat_misc_wq", 388 + WQ_MEM_RECLAIM | WQ_PERCPU, 0); 388 389 389 390 return !adf_misc_wq ? -ENOMEM : 0; 390 391 }
drivers/crypto/intel/qat/qat_common/adf_sriov.c (+2 -1)
··· 299 299 int __init adf_init_pf_wq(void) 300 300 { 301 301 /* Workqueue for PF2VF responses */ 302 - pf2vf_resp_wq = alloc_workqueue("qat_pf2vf_resp_wq", WQ_MEM_RECLAIM, 0); 302 + pf2vf_resp_wq = alloc_workqueue("qat_pf2vf_resp_wq", 303 + WQ_MEM_RECLAIM | WQ_PERCPU, 0); 303 304 304 305 return !pf2vf_resp_wq ? -ENOMEM : 0; 305 306 }
drivers/crypto/intel/qat/qat_common/adf_vf_isr.c (+2 -1)
··· 299 299 */ 300 300 int __init adf_init_vf_wq(void) 301 301 { 302 - adf_vf_stop_wq = alloc_workqueue("adf_vf_stop_wq", WQ_MEM_RECLAIM, 0); 302 + adf_vf_stop_wq = alloc_workqueue("adf_vf_stop_wq", 303 + WQ_MEM_RECLAIM | WQ_PERCPU, 0); 303 304 304 305 return !adf_vf_stop_wq ? -EFAULT : 0; 305 306 }
drivers/crypto/intel/qat/qat_common/qat_uclo.c (+5 -13)
··· 200 200 201 201 static int qat_uclo_parse_num(char *str, unsigned int *num) 202 202 { 203 - char buf[16] = {0}; 204 - unsigned long ae = 0; 205 - int i; 203 + unsigned long long ae; 204 + char *end; 206 205 207 - strscpy(buf, str, sizeof(buf)); 208 - for (i = 0; i < 16; i++) { 209 - if (!isdigit(buf[i])) { 210 - buf[i] = '\0'; 211 - break; 212 - } 213 - } 214 - if ((kstrtoul(buf, 10, &ae))) 215 - return -EFAULT; 216 - 206 + ae = simple_strtoull(str, &end, 10); 207 + if (ae > UINT_MAX || str == end || (end - str) > 19) 208 + return -EINVAL; 217 209 *num = (unsigned int)ae; 218 210 return 0; 219 211 }
drivers/crypto/marvell/cesa/cesa.c (+2 -5)
··· 420 420 { 421 421 const struct mv_cesa_caps *caps = &orion_caps; 422 422 const struct mbus_dram_target_info *dram; 423 - const struct of_device_id *match; 424 423 struct device *dev = &pdev->dev; 425 424 struct mv_cesa_dev *cesa; 426 425 struct mv_cesa_engine *engines; ··· 432 433 } 433 434 434 435 if (dev->of_node) { 435 - match = of_match_node(mv_cesa_of_match_table, dev->of_node); 436 - if (!match || !match->data) 436 + caps = of_device_get_match_data(dev); 437 + if (!caps) 437 438 return -ENOTSUPP; 438 - 439 - caps = match->data; 440 439 } 441 440 442 441 cesa = devm_kzalloc(dev, sizeof(*cesa), GFP_KERNEL);
drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c (+3 -2)
··· 3 3 4 4 #include <linux/ctype.h> 5 5 #include <linux/firmware.h> 6 + #include <linux/string.h> 6 7 #include <linux/string_choices.h> 7 8 #include "otx2_cptpf_ucode.h" 8 9 #include "otx2_cpt_common.h" ··· 459 458 u16 rid) 460 459 { 461 460 char filename[OTX2_CPT_NAME_LENGTH]; 462 - char eng_type[8] = {0}; 461 + char eng_type[8]; 463 462 int ret, e, i; 464 463 465 464 INIT_LIST_HEAD(&fw_info->ucodes); 466 465 467 466 for (e = 1; e < OTX2_CPT_MAX_ENG_TYPES; e++) { 468 - strcpy(eng_type, get_eng_type_str(e)); 467 + strscpy(eng_type, get_eng_type_str(e)); 469 468 for (i = 0; i < strlen(eng_type); i++) 470 469 eng_type[i] = tolower(eng_type[i]); 471 470
drivers/crypto/qce/core.c (+1 -2)
··· 21 21 #include "sha.h" 22 22 #include "aead.h" 23 23 24 - #define QCE_MAJOR_VERSION5 0x05 25 24 #define QCE_QUEUE_LENGTH 1 26 25 27 26 #define QCE_DEFAULT_MEM_BANDWIDTH 393600 ··· 160 161 * the driver does not support v5 with minor 0 because it has special 161 162 * alignment requirements. 162 163 */ 163 - if (major != QCE_MAJOR_VERSION5 || minor == 0) 164 + if (major == 5 && minor == 0) 164 165 return -ENODEV; 165 166 166 167 qce->burst_size = QCE_BAM_BURST_SIZE;
drivers/crypto/qce/dma.c (+4 -2)
··· 24 24 25 25 dma->txchan = dma_request_chan(dev, "tx"); 26 26 if (IS_ERR(dma->txchan)) 27 - return PTR_ERR(dma->txchan); 27 + return dev_err_probe(dev, PTR_ERR(dma->txchan), 28 + "Failed to get TX DMA channel\n"); 28 29 29 30 dma->rxchan = dma_request_chan(dev, "rx"); 30 31 if (IS_ERR(dma->rxchan)) { 31 - ret = PTR_ERR(dma->rxchan); 32 + ret = dev_err_probe(dev, PTR_ERR(dma->rxchan), 33 + "Failed to get RX DMA channel\n"); 32 34 goto error_rx; 33 35 } 34 36
drivers/crypto/rockchip/rk3288_crypto_skcipher.c (+1 -2)
··· 321 321 algt->stat_req++; 322 322 rkc->nreq++; 323 323 324 - ivsize = crypto_skcipher_ivsize(tfm); 325 - if (areq->iv && crypto_skcipher_ivsize(tfm) > 0) { 324 + if (areq->iv && ivsize > 0) { 326 325 if (rctx->mode & RK_CRYPTO_DEC) { 327 326 offset = areq->cryptlen - ivsize; 328 327 scatterwalk_map_and_copy(rctx->backup_iv, areq->src,
drivers/crypto/starfive/jh7110-hash.c (+5 -1)
··· 325 325 struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(tfm); 326 326 struct starfive_cryp_request_ctx *rctx = ahash_request_ctx(req); 327 327 struct starfive_cryp_dev *cryp = ctx->cryp; 328 + int sg_len; 328 329 329 330 memset(rctx, 0, sizeof(struct starfive_cryp_request_ctx)); 330 331 ··· 334 333 rctx->in_sg = req->src; 335 334 rctx->blksize = crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm)); 336 335 rctx->digsize = crypto_ahash_digestsize(tfm); 337 - rctx->in_sg_len = sg_nents_for_len(rctx->in_sg, rctx->total); 336 + sg_len = sg_nents_for_len(rctx->in_sg, rctx->total); 337 + if (sg_len < 0) 338 + return sg_len; 339 + rctx->in_sg_len = sg_len; 338 340 ctx->rctx = rctx; 339 341 340 342 return crypto_transfer_hash_request_to_engine(cryp->engine, req);
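The ccree and starfive fixes in this pull share one bug pattern: sg_nents_for_len() returns a negative errno, which vanishes if assigned straight to an unsigned field. Both fixes capture the result in a signed local and check it before storing. The pattern in isolation (struct and helper names are illustrative):

```c
#include <assert.h>

struct req_ctx {
    unsigned int in_sg_len;  /* unsigned, like rctx->in_sg_len */
};

/* capture a may-be-negative return in a signed variable and check it
 * before it is laundered through an unsigned field */
static int store_nents(struct req_ctx *rctx, int sg_len)
{
    if (sg_len < 0)
        return sg_len;          /* propagate the errno */
    rctx->in_sg_len = sg_len;   /* now known non-negative */
    return 0;
}
```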
drivers/crypto/ti/Kconfig (+1)
··· 6 6 select CRYPTO_SKCIPHER 7 7 select CRYPTO_ECB 8 8 select CRYPTO_CBC 9 + select CRYPTO_XTS 9 10 help 10 11 This enables support for the TI DTHE V2 hw cryptography engine 11 12 which can be found on TI K3 SOCs. Selecting this enables use
drivers/crypto/ti/dthev2-aes.c (+132 -7)
··· 25 25 26 26 // AES Engine 27 27 #define DTHE_P_AES_BASE 0x7000 28 + 28 29 #define DTHE_P_AES_KEY1_0 0x0038 29 30 #define DTHE_P_AES_KEY1_1 0x003C 30 31 #define DTHE_P_AES_KEY1_2 0x0030 ··· 34 33 #define DTHE_P_AES_KEY1_5 0x002C 35 34 #define DTHE_P_AES_KEY1_6 0x0020 36 35 #define DTHE_P_AES_KEY1_7 0x0024 36 + 37 + #define DTHE_P_AES_KEY2_0 0x0018 38 + #define DTHE_P_AES_KEY2_1 0x001C 39 + #define DTHE_P_AES_KEY2_2 0x0010 40 + #define DTHE_P_AES_KEY2_3 0x0014 41 + #define DTHE_P_AES_KEY2_4 0x0008 42 + #define DTHE_P_AES_KEY2_5 0x000C 43 + #define DTHE_P_AES_KEY2_6 0x0000 44 + #define DTHE_P_AES_KEY2_7 0x0004 45 + 37 46 #define DTHE_P_AES_IV_IN_0 0x0040 38 47 #define DTHE_P_AES_IV_IN_1 0x0044 39 48 #define DTHE_P_AES_IV_IN_2 0x0048 ··· 63 52 enum aes_ctrl_mode_masks { 64 53 AES_CTRL_ECB_MASK = 0x00, 65 54 AES_CTRL_CBC_MASK = BIT(5), 55 + AES_CTRL_XTS_MASK = BIT(12) | BIT(11), 66 56 }; 67 57 68 58 #define DTHE_AES_CTRL_MODE_CLEAR_MASK ~GENMASK(28, 5) ··· 100 88 return 0; 101 89 } 102 90 91 + static int dthe_cipher_xts_init_tfm(struct crypto_skcipher *tfm) 92 + { 93 + struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(tfm); 94 + struct dthe_data *dev_data = dthe_get_dev(ctx); 95 + 96 + ctx->dev_data = dev_data; 97 + ctx->keylen = 0; 98 + 99 + ctx->skcipher_fb = crypto_alloc_sync_skcipher("xts(aes)", 0, 100 + CRYPTO_ALG_NEED_FALLBACK); 101 + if (IS_ERR(ctx->skcipher_fb)) { 102 + dev_err(dev_data->dev, "fallback driver xts(aes) couldn't be loaded\n"); 103 + return PTR_ERR(ctx->skcipher_fb); 104 + } 105 + 106 + return 0; 107 + } 108 + 109 + static void dthe_cipher_xts_exit_tfm(struct crypto_skcipher *tfm) 110 + { 111 + struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(tfm); 112 + 113 + crypto_free_sync_skcipher(ctx->skcipher_fb); 114 + } 115 + 103 116 static int dthe_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) 104 117 { 105 118 struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(tfm); ··· 156 119 return dthe_aes_setkey(tfm, key, keylen); 157 120 } 
158 121 122 + static int dthe_aes_xts_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) 123 + { 124 + struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(tfm); 125 + 126 + if (keylen != 2 * AES_KEYSIZE_128 && 127 + keylen != 2 * AES_KEYSIZE_192 && 128 + keylen != 2 * AES_KEYSIZE_256) 129 + return -EINVAL; 130 + 131 + ctx->aes_mode = DTHE_AES_XTS; 132 + ctx->keylen = keylen / 2; 133 + memcpy(ctx->key, key, keylen); 134 + 135 + crypto_sync_skcipher_clear_flags(ctx->skcipher_fb, CRYPTO_TFM_REQ_MASK); 136 + crypto_sync_skcipher_set_flags(ctx->skcipher_fb, 137 + crypto_skcipher_get_flags(tfm) & 138 + CRYPTO_TFM_REQ_MASK); 139 + 140 + return crypto_sync_skcipher_setkey(ctx->skcipher_fb, key, keylen); 141 + } 142 + 159 143 static void dthe_aes_set_ctrl_key(struct dthe_tfm_ctx *ctx, 160 144 struct dthe_aes_req_ctx *rctx, 161 145 u32 *iv_in) ··· 199 141 writel_relaxed(ctx->key[7], aes_base_reg + DTHE_P_AES_KEY1_7); 200 142 } 201 143 144 + if (ctx->aes_mode == DTHE_AES_XTS) { 145 + size_t key2_offset = ctx->keylen / sizeof(u32); 146 + 147 + writel_relaxed(ctx->key[key2_offset + 0], aes_base_reg + DTHE_P_AES_KEY2_0); 148 + writel_relaxed(ctx->key[key2_offset + 1], aes_base_reg + DTHE_P_AES_KEY2_1); 149 + writel_relaxed(ctx->key[key2_offset + 2], aes_base_reg + DTHE_P_AES_KEY2_2); 150 + writel_relaxed(ctx->key[key2_offset + 3], aes_base_reg + DTHE_P_AES_KEY2_3); 151 + 152 + if (ctx->keylen > AES_KEYSIZE_128) { 153 + writel_relaxed(ctx->key[key2_offset + 4], aes_base_reg + DTHE_P_AES_KEY2_4); 154 + writel_relaxed(ctx->key[key2_offset + 5], aes_base_reg + DTHE_P_AES_KEY2_5); 155 + } 156 + if (ctx->keylen == AES_KEYSIZE_256) { 157 + writel_relaxed(ctx->key[key2_offset + 6], aes_base_reg + DTHE_P_AES_KEY2_6); 158 + writel_relaxed(ctx->key[key2_offset + 7], aes_base_reg + DTHE_P_AES_KEY2_7); 159 + } 160 + } 161 + 202 162 if (rctx->enc) 203 163 ctrl_val |= DTHE_AES_CTRL_DIR_ENC; 204 164 ··· 235 159 break; 236 160 case DTHE_AES_CBC: 237 161 ctrl_val |= 
AES_CTRL_CBC_MASK; 162 + break; 163 + case DTHE_AES_XTS: 164 + ctrl_val |= AES_CTRL_XTS_MASK; 238 165 break; 239 166 } 240 167 ··· 394 315 local_bh_disable(); 395 316 crypto_finalize_skcipher_request(dev_data->engine, req, ret); 396 317 local_bh_enable(); 397 - return ret; 318 + return 0; 398 319 } 399 320 400 321 static int dthe_aes_crypt(struct skcipher_request *req) 401 322 { 402 323 struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); 324 + struct dthe_aes_req_ctx *rctx = skcipher_request_ctx(req); 403 325 struct dthe_data *dev_data = dthe_get_dev(ctx); 404 326 struct crypto_engine *engine; 405 327 406 328 /* 407 - * If data is not a multiple of AES_BLOCK_SIZE, need to return -EINVAL 408 - * If data length input is zero, no need to do any operation. 329 + * If data is not a multiple of AES_BLOCK_SIZE: 330 + * - need to return -EINVAL for ECB, CBC as they are block ciphers 331 + * - need to fallback to software as H/W doesn't support Ciphertext Stealing for XTS 409 332 */ 410 - if (req->cryptlen % AES_BLOCK_SIZE) 411 - return -EINVAL; 333 + if (req->cryptlen % AES_BLOCK_SIZE) { 334 + if (ctx->aes_mode == DTHE_AES_XTS) { 335 + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->skcipher_fb); 412 336 413 - if (req->cryptlen == 0) 337 + skcipher_request_set_callback(subreq, skcipher_request_flags(req), 338 + req->base.complete, req->base.data); 339 + skcipher_request_set_crypt(subreq, req->src, req->dst, 340 + req->cryptlen, req->iv); 341 + 342 + return rctx->enc ? crypto_skcipher_encrypt(subreq) : 343 + crypto_skcipher_decrypt(subreq); 344 + } 345 + return -EINVAL; 346 + } 347 + 348 + /* 349 + * If data length input is zero, no need to do any operation. 350 + * Except for XTS mode, where data length should be non-zero. 
351 + */ 352 + if (req->cryptlen == 0) { 353 + if (ctx->aes_mode == DTHE_AES_XTS) 354 + return -EINVAL; 414 355 return 0; 356 + } 415 357 416 358 engine = dev_data->engine; 417 359 return crypto_transfer_skcipher_request_to_engine(engine, req); ··· 499 399 .cra_module = THIS_MODULE, 500 400 }, 501 401 .op.do_one_request = dthe_aes_run, 502 - } /* CBC AES */ 402 + }, /* CBC AES */ 403 + { 404 + .base.init = dthe_cipher_xts_init_tfm, 405 + .base.exit = dthe_cipher_xts_exit_tfm, 406 + .base.setkey = dthe_aes_xts_setkey, 407 + .base.encrypt = dthe_aes_encrypt, 408 + .base.decrypt = dthe_aes_decrypt, 409 + .base.min_keysize = AES_MIN_KEY_SIZE * 2, 410 + .base.max_keysize = AES_MAX_KEY_SIZE * 2, 411 + .base.ivsize = AES_IV_SIZE, 412 + .base.base = { 413 + .cra_name = "xts(aes)", 414 + .cra_driver_name = "xts-aes-dthev2", 415 + .cra_priority = 299, 416 + .cra_flags = CRYPTO_ALG_TYPE_SKCIPHER | 417 + CRYPTO_ALG_ASYNC | 418 + CRYPTO_ALG_KERN_DRIVER_ONLY | 419 + CRYPTO_ALG_NEED_FALLBACK, 420 + .cra_alignmask = AES_BLOCK_SIZE - 1, 421 + .cra_blocksize = AES_BLOCK_SIZE, 422 + .cra_ctxsize = sizeof(struct dthe_tfm_ctx), 423 + .cra_reqsize = sizeof(struct dthe_aes_req_ctx), 424 + .cra_module = THIS_MODULE, 425 + }, 426 + .op.do_one_request = dthe_aes_run, 427 + }, /* XTS AES */ 503 428 }; 504 429 505 430 int dthe_register_aes_algs(void)
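The request routing that dthe_aes_crypt() now implements can be distilled into a small decision function: partial blocks are an error for the block modes but fall back to software for XTS (the engine lacks ciphertext stealing), and zero-length requests are a no-op except under XTS, where they are invalid. A standalone sketch of just that logic:

```c
#include <assert.h>
#include <stdbool.h>

#define AES_BLOCK_SIZE 16

enum route { ROUTE_HW, ROUTE_SW_FALLBACK, ROUTE_EINVAL, ROUTE_NOOP };

/* decide where a request goes, mirroring the checks in dthe_aes_crypt() */
static enum route route_req(bool is_xts, unsigned int cryptlen)
{
    if (cryptlen % AES_BLOCK_SIZE)
        return is_xts ? ROUTE_SW_FALLBACK : ROUTE_EINVAL;
    if (cryptlen == 0)
        return is_xts ? ROUTE_EINVAL : ROUTE_NOOP;
    return ROUTE_HW;
}
```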
drivers/crypto/ti/dthev2-common.h (+9 -1)
··· 27 27 28 28 #define DTHE_REG_SIZE 4 29 29 #define DTHE_DMA_TIMEOUT_MS 2000 30 + /* 31 + * Size of largest possible key (of all algorithms) to be stored in dthe_tfm_ctx 32 + * This is currently the keysize of XTS-AES-256 which is 512 bits (64 bytes) 33 + */ 34 + #define DTHE_MAX_KEYSIZE (AES_MAX_KEY_SIZE * 2) 30 35 31 36 enum dthe_aes_mode { 32 37 DTHE_AES_ECB = 0, 33 38 DTHE_AES_CBC, 39 + DTHE_AES_XTS, 34 40 }; 35 41 36 42 /* Driver specific struct definitions */ ··· 79 73 * @keylen: AES key length 80 74 * @key: AES key 81 75 * @aes_mode: AES mode 76 + * @skcipher_fb: Fallback crypto skcipher handle for AES-XTS mode 82 77 */ 83 78 struct dthe_tfm_ctx { 84 79 struct dthe_data *dev_data; 85 80 unsigned int keylen; 86 - u32 key[AES_KEYSIZE_256 / sizeof(u32)]; 81 + u32 key[DTHE_MAX_KEYSIZE / sizeof(u32)]; 87 82 enum dthe_aes_mode aes_mode; 83 + struct crypto_sync_skcipher *skcipher_fb; 88 84 }; 89 85 90 86 /**
drivers/crypto/xilinx/xilinx-trng.c (+32 -7)
··· 8 8 #include <linux/clk.h> 9 9 #include <linux/crypto.h> 10 10 #include <linux/delay.h> 11 - #include <linux/errno.h> 12 11 #include <linux/firmware/xlnx-zynqmp.h> 13 12 #include <linux/hw_random.h> 14 13 #include <linux/io.h> ··· 17 18 #include <linux/mutex.h> 18 19 #include <linux/mod_devicetable.h> 19 20 #include <linux/platform_device.h> 20 - #include <linux/string.h> 21 + #include <crypto/aes.h> 22 + #include <crypto/df_sp80090a.h> 23 + #include <crypto/internal/drbg.h> 21 24 #include <crypto/internal/cipher.h> 22 25 #include <crypto/internal/rng.h> 23 - #include <crypto/aes.h> 24 26 25 27 /* TRNG Registers Offsets */ 26 28 #define TRNG_STATUS_OFFSET 0x4U ··· 59 59 struct xilinx_rng { 60 60 void __iomem *rng_base; 61 61 struct device *dev; 62 + unsigned char *scratchpadbuf; 63 + struct crypto_aes_ctx *aesctx; 62 64 struct mutex lock; /* Protect access to TRNG device */ 63 65 struct hwrng trng; 64 66 }; ··· 184 182 static int xtrng_reseed_internal(struct xilinx_rng *rng) 185 183 { 186 184 u8 entropy[TRNG_ENTROPY_SEED_LEN_BYTES]; 185 + struct drbg_string data; 186 + LIST_HEAD(seedlist); 187 187 u32 val; 188 188 int ret; 189 189 190 + drbg_string_fill(&data, entropy, TRNG_SEED_LEN_BYTES); 191 + list_add_tail(&data.list, &seedlist); 190 192 memset(entropy, 0, sizeof(entropy)); 191 193 xtrng_enable_entropy(rng); 192 194 ··· 198 192 ret = xtrng_collect_random_data(rng, entropy, TRNG_SEED_LEN_BYTES, true); 199 193 if (ret != TRNG_SEED_LEN_BYTES) 200 194 return -EINVAL; 195 + ret = crypto_drbg_ctr_df(rng->aesctx, rng->scratchpadbuf, 196 + TRNG_SEED_LEN_BYTES, &seedlist, AES_BLOCK_SIZE, 197 + TRNG_SEED_LEN_BYTES); 198 + if (ret) 199 + return ret; 201 200 202 201 xtrng_write_multiple_registers(rng->rng_base + TRNG_EXT_SEED_OFFSET, 203 - (u32 *)entropy, TRNG_NUM_INIT_REGS); 202 + (u32 *)rng->scratchpadbuf, TRNG_NUM_INIT_REGS); 204 203 /* select reseed operation */ 205 204 iowrite32(TRNG_CTRL_PRNGXS_MASK, rng->rng_base + TRNG_CTRL_OFFSET); 206 205 ··· 335 324 static 
int xtrng_probe(struct platform_device *pdev) 336 325 { 337 326 struct xilinx_rng *rng; 327 + size_t sb_size; 338 328 int ret; 339 329 340 330 rng = devm_kzalloc(&pdev->dev, sizeof(*rng), GFP_KERNEL); ··· 345 333 rng->dev = &pdev->dev; 346 334 rng->rng_base = devm_platform_ioremap_resource(pdev, 0); 347 335 if (IS_ERR(rng->rng_base)) { 348 - dev_err(&pdev->dev, "Failed to map resource %ld\n", PTR_ERR(rng->rng_base)); 336 + dev_err(&pdev->dev, "Failed to map resource %pe\n", rng->rng_base); 349 337 return PTR_ERR(rng->rng_base); 338 + } 339 + 340 + rng->aesctx = devm_kzalloc(&pdev->dev, sizeof(*rng->aesctx), GFP_KERNEL); 341 + if (!rng->aesctx) 342 + return -ENOMEM; 343 + 344 + sb_size = crypto_drbg_ctr_df_datalen(TRNG_SEED_LEN_BYTES, AES_BLOCK_SIZE); 345 + rng->scratchpadbuf = devm_kzalloc(&pdev->dev, sb_size, GFP_KERNEL); 346 + if (!rng->scratchpadbuf) { 347 + ret = -ENOMEM; 348 + goto end; 350 349 } 351 350 352 351 xtrng_trng_reset(rng->rng_base); 353 352 ret = xtrng_reseed_internal(rng); 354 353 if (ret) { 355 354 dev_err(&pdev->dev, "TRNG Seed fail\n"); 356 - return ret; 355 + goto end; 357 356 } 358 357 359 358 xilinx_rng_dev = rng; ··· 372 349 ret = crypto_register_rng(&xtrng_trng_alg); 373 350 if (ret) { 374 351 dev_err(&pdev->dev, "Crypto Random device registration failed: %d\n", ret); 375 - return ret; 352 + goto end; 376 353 } 354 + 377 355 ret = xtrng_hwrng_register(&rng->trng); 378 356 if (ret) { 379 357 dev_err(&pdev->dev, "HWRNG device registration failed: %d\n", ret); ··· 387 363 crypto_rng_free: 388 364 crypto_unregister_rng(&xtrng_trng_alg); 389 365 366 + end: 390 367 return ret; 391 368 } 392 369
+87
include/crypto/aead.h
··· 159 159 struct crypto_tfm base; 160 160 }; 161 161 162 + struct crypto_sync_aead { 163 + struct crypto_aead base; 164 + }; 165 + 166 + #define MAX_SYNC_AEAD_REQSIZE 384 167 + 168 + #define SYNC_AEAD_REQUEST_ON_STACK(name, _tfm) \ 169 + char __##name##_desc[sizeof(struct aead_request) + \ 170 + MAX_SYNC_AEAD_REQSIZE \ 171 + ] CRYPTO_MINALIGN_ATTR; \ 172 + struct aead_request *name = \ 173 + (((struct aead_request *)__##name##_desc)->base.tfm = \ 174 + crypto_sync_aead_tfm((_tfm)), \ 175 + (void *)__##name##_desc) 176 + 162 177 static inline struct crypto_aead *__crypto_aead_cast(struct crypto_tfm *tfm) 163 178 { 164 179 return container_of(tfm, struct crypto_aead, base); ··· 195 180 */ 196 181 struct crypto_aead *crypto_alloc_aead(const char *alg_name, u32 type, u32 mask); 197 182 183 + struct crypto_sync_aead *crypto_alloc_sync_aead(const char *alg_name, u32 type, u32 mask); 184 + 198 185 static inline struct crypto_tfm *crypto_aead_tfm(struct crypto_aead *tfm) 199 186 { 200 187 return &tfm->base; 188 + } 189 + 190 + static inline struct crypto_tfm *crypto_sync_aead_tfm(struct crypto_sync_aead *tfm) 191 + { 192 + return crypto_aead_tfm(&tfm->base); 201 193 } 202 194 203 195 /** ··· 216 194 static inline void crypto_free_aead(struct crypto_aead *tfm) 217 195 { 218 196 crypto_destroy_tfm(tfm, crypto_aead_tfm(tfm)); 197 + } 198 + 199 + static inline void crypto_free_sync_aead(struct crypto_sync_aead *tfm) 200 + { 201 + crypto_free_aead(&tfm->base); 219 202 } 220 203 221 204 /** ··· 265 238 return crypto_aead_alg_ivsize(crypto_aead_alg(tfm)); 266 239 } 267 240 241 + static inline unsigned int crypto_sync_aead_ivsize(struct crypto_sync_aead *tfm) 242 + { 243 + return crypto_aead_ivsize(&tfm->base); 244 + } 245 + 268 246 /** 269 247 * crypto_aead_authsize() - obtain maximum authentication data size 270 248 * @tfm: cipher handle ··· 287 255 return tfm->authsize; 288 256 } 289 257 258 + static inline unsigned int crypto_sync_aead_authsize(struct crypto_sync_aead *tfm) 
259 + { 260 + return crypto_aead_authsize(&tfm->base); 261 + } 262 + 290 263 static inline unsigned int crypto_aead_alg_maxauthsize(struct aead_alg *alg) 291 264 { 292 265 return alg->maxauthsize; ··· 300 263 static inline unsigned int crypto_aead_maxauthsize(struct crypto_aead *aead) 301 264 { 302 265 return crypto_aead_alg_maxauthsize(crypto_aead_alg(aead)); 266 + } 267 + 268 + static inline unsigned int crypto_sync_aead_maxauthsize(struct crypto_sync_aead *tfm) 269 + { 270 + return crypto_aead_maxauthsize(&tfm->base); 303 271 } 304 272 305 273 /** ··· 320 278 static inline unsigned int crypto_aead_blocksize(struct crypto_aead *tfm) 321 279 { 322 280 return crypto_tfm_alg_blocksize(crypto_aead_tfm(tfm)); 281 + } 282 + 283 + static inline unsigned int crypto_sync_aead_blocksize(struct crypto_sync_aead *tfm) 284 + { 285 + return crypto_aead_blocksize(&tfm->base); 323 286 } 324 287 325 288 static inline unsigned int crypto_aead_alignmask(struct crypto_aead *tfm) ··· 347 300 crypto_tfm_clear_flags(crypto_aead_tfm(tfm), flags); 348 301 } 349 302 303 + static inline u32 crypto_sync_aead_get_flags(struct crypto_sync_aead *tfm) 304 + { 305 + return crypto_aead_get_flags(&tfm->base); 306 + } 307 + 308 + static inline void crypto_sync_aead_set_flags(struct crypto_sync_aead *tfm, u32 flags) 309 + { 310 + crypto_aead_set_flags(&tfm->base, flags); 311 + } 312 + 313 + static inline void crypto_sync_aead_clear_flags(struct crypto_sync_aead *tfm, u32 flags) 314 + { 315 + crypto_aead_clear_flags(&tfm->base, flags); 316 + } 317 + 350 318 /** 351 319 * crypto_aead_setkey() - set key for cipher 352 320 * @tfm: cipher handle ··· 381 319 int crypto_aead_setkey(struct crypto_aead *tfm, 382 320 const u8 *key, unsigned int keylen); 383 321 322 + static inline int crypto_sync_aead_setkey(struct crypto_sync_aead *tfm, 323 + const u8 *key, unsigned int keylen) 324 + { 325 + return crypto_aead_setkey(&tfm->base, key, keylen); 326 + } 327 + 384 328 /** 385 329 * crypto_aead_setauthsize() - 
set authentication data size 386 330 * @tfm: cipher handle ··· 399 331 */ 400 332 int crypto_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize); 401 333 334 + static inline int crypto_sync_aead_setauthsize(struct crypto_sync_aead *tfm, 335 + unsigned int authsize) 336 + { 337 + return crypto_aead_setauthsize(&tfm->base, authsize); 338 + } 339 + 402 340 static inline struct crypto_aead *crypto_aead_reqtfm(struct aead_request *req) 403 341 { 404 342 return __crypto_aead_cast(req->base.tfm); 343 + } 344 + 345 + static inline struct crypto_sync_aead *crypto_sync_aead_reqtfm(struct aead_request *req) 346 + { 347 + struct crypto_aead *tfm = crypto_aead_reqtfm(req); 348 + 349 + return container_of(tfm, struct crypto_sync_aead, base); 405 350 } 406 351 407 352 /** ··· 496 415 struct crypto_aead *tfm) 497 416 { 498 417 req->base.tfm = crypto_aead_tfm(tfm); 418 + } 419 + 420 + static inline void aead_request_set_sync_tfm(struct aead_request *req, 421 + struct crypto_sync_aead *tfm) 422 + { 423 + aead_request_set_tfm(req, &tfm->base); 499 424 } 500 425 501 426 /**
+12
include/crypto/algapi.h
··· 107 107 unsigned int max_qlen; 108 108 }; 109 109 110 + struct scatter_walk { 111 + /* Must be the first member, see struct skcipher_walk. */ 112 + union { 113 + void *const addr; 114 + 115 + /* Private API field, do not touch. */ 116 + union crypto_no_such_thing *__addr; 117 + }; 118 + struct scatterlist *sg; 119 + unsigned int offset; 120 + }; 121 + 110 122 struct crypto_attr_alg { 111 123 char name[CRYPTO_MAX_ALG_NAME]; 112 124 };
+28
include/crypto/df_sp80090a.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + /* 4 + * Copyright Stephan Mueller <smueller@chronox.de>, 2014 5 + */ 6 + 7 + #ifndef _CRYPTO_DF80090A_H 8 + #define _CRYPTO_DF80090A_H 9 + 10 + #include <crypto/internal/cipher.h> 11 + #include <crypto/aes.h> 12 + 13 + static inline int crypto_drbg_ctr_df_datalen(u8 statelen, u8 blocklen) 14 + { 15 + return statelen + /* df_data */ 16 + blocklen + /* pad */ 17 + blocklen + /* iv */ 18 + statelen + blocklen; /* temp */ 19 + } 20 + 21 + int crypto_drbg_ctr_df(struct crypto_aes_ctx *aes, 22 + unsigned char *df_data, 23 + size_t bytes_to_return, 24 + struct list_head *seedlist, 25 + u8 blocklen_bytes, 26 + u8 statelen); 27 + 28 + #endif /* _CRYPTO_DF80090A_H */
+1 -24
include/crypto/drbg.h
··· 47 47 #include <linux/module.h> 48 48 #include <linux/crypto.h> 49 49 #include <linux/slab.h> 50 + #include <crypto/internal/drbg.h> 50 51 #include <crypto/internal/rng.h> 51 52 #include <crypto/rng.h> 52 53 #include <linux/fips.h> 53 54 #include <linux/mutex.h> 54 55 #include <linux/list.h> 55 56 #include <linux/workqueue.h> 56 - 57 - /* 58 - * Concatenation Helper and string operation helper 59 - * 60 - * SP800-90A requires the concatenation of different data. To avoid copying 61 - * buffers around or allocate additional memory, the following data structure 62 - * is used to point to the original memory with its size. In addition, it 63 - * is used to build a linked list. The linked list defines the concatenation 64 - * of individual buffers. The order of memory block referenced in that 65 - * linked list determines the order of concatenation. 66 - */ 67 - struct drbg_string { 68 - const unsigned char *buf; 69 - size_t len; 70 - struct list_head list; 71 - }; 72 - 73 - static inline void drbg_string_fill(struct drbg_string *string, 74 - const unsigned char *buf, size_t len) 75 - { 76 - string->buf = buf; 77 - string->len = len; 78 - INIT_LIST_HEAD(&string->list); 79 - } 80 57 81 58 struct drbg_state; 82 59 typedef uint32_t drbg_flag_t;
+54
include/crypto/internal/drbg.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + /* 4 + * NIST SP800-90A DRBG derivation function 5 + * 6 + * Copyright (C) 2014, Stephan Mueller <smueller@chronox.de> 7 + */ 8 + 9 + #ifndef _INTERNAL_DRBG_H 10 + #define _INTERNAL_DRBG_H 11 + 12 + /* 13 + * Convert an integer into a byte representation of this integer. 14 + * The byte representation is big-endian 15 + * 16 + * @val value to be converted 17 + * @buf buffer holding the converted integer -- caller must ensure that 18 + * buffer size is at least 32 bit 19 + */ 20 + static inline void drbg_cpu_to_be32(__u32 val, unsigned char *buf) 21 + { 22 + struct s { 23 + __be32 conv; 24 + }; 25 + struct s *conversion = (struct s *)buf; 26 + 27 + conversion->conv = cpu_to_be32(val); 28 + } 29 + 30 + /* 31 + * Concatenation Helper and string operation helper 32 + * 33 + * SP800-90A requires the concatenation of different data. To avoid copying 34 + * buffers around or allocate additional memory, the following data structure 35 + * is used to point to the original memory with its size. In addition, it 36 + * is used to build a linked list. The linked list defines the concatenation 37 + * of individual buffers. The order of memory block referenced in that 38 + * linked list determines the order of concatenation. 39 + */ 40 + struct drbg_string { 41 + const unsigned char *buf; 42 + size_t len; 43 + struct list_head list; 44 + }; 45 + 46 + static inline void drbg_string_fill(struct drbg_string *string, 47 + const unsigned char *buf, size_t len) 48 + { 49 + string->buf = buf; 50 + string->len = len; 51 + INIT_LIST_HEAD(&string->list); 52 + } 53 + 54 + #endif //_INTERNAL_DRBG_H
+47 -1
include/crypto/internal/skcipher.h
··· 10 10 11 11 #include <crypto/algapi.h> 12 12 #include <crypto/internal/cipher.h> 13 - #include <crypto/scatterwalk.h> 14 13 #include <crypto/skcipher.h> 15 14 #include <linux/types.h> 16 15 ··· 52 53 53 54 struct crypto_lskcipher_spawn { 54 55 struct crypto_spawn base; 56 + }; 57 + 58 + struct skcipher_walk { 59 + union { 60 + /* Virtual address of the source. */ 61 + struct { 62 + struct { 63 + const void *const addr; 64 + } virt; 65 + } src; 66 + 67 + /* Private field for the API, do not use. */ 68 + struct scatter_walk in; 69 + }; 70 + 71 + union { 72 + /* Virtual address of the destination. */ 73 + struct { 74 + struct { 75 + void *const addr; 76 + } virt; 77 + } dst; 78 + 79 + /* Private field for the API, do not use. */ 80 + struct scatter_walk out; 81 + }; 82 + 83 + unsigned int nbytes; 84 + unsigned int total; 85 + 86 + u8 *page; 87 + u8 *buffer; 88 + u8 *oiv; 89 + void *iv; 90 + 91 + unsigned int ivsize; 92 + 93 + int flags; 94 + unsigned int blocksize; 95 + unsigned int stride; 96 + unsigned int alignmask; 55 97 }; 56 98 57 99 static inline struct crypto_instance *skcipher_crypto_instance( ··· 211 171 int lskcipher_register_instance(struct crypto_template *tmpl, 212 172 struct lskcipher_instance *inst); 213 173 174 + int skcipher_walk_done(struct skcipher_walk *walk, int res); 214 175 int skcipher_walk_virt(struct skcipher_walk *__restrict walk, 215 176 struct skcipher_request *__restrict req, 216 177 bool atomic); ··· 221 180 int skcipher_walk_aead_decrypt(struct skcipher_walk *__restrict walk, 222 181 struct aead_request *__restrict req, 223 182 bool atomic); 183 + 184 + static inline void skcipher_walk_abort(struct skcipher_walk *walk) 185 + { 186 + skcipher_walk_done(walk, -ECANCELED); 187 + } 224 188 225 189 static inline void *crypto_skcipher_ctx(struct crypto_skcipher *tfm) 226 190 {
+5 -6
include/crypto/rng.h
··· 169 169 * 170 170 * The reset function completely re-initializes the random number generator 171 171 * referenced by the cipher handle by clearing the current state. The new state 172 - * is initialized with the caller provided seed or automatically, depending 173 - * on the random number generator type (the ANSI X9.31 RNG requires 174 - * caller-provided seed, the SP800-90A DRBGs perform an automatic seeding). 175 - * The seed is provided as a parameter to this function call. The provided seed 176 - * should have the length of the seed size defined for the random number 177 - * generator as defined by crypto_rng_seedsize. 172 + * is initialized with the caller provided seed or automatically, depending on 173 + * the random number generator type. (The SP800-90A DRBGs perform an automatic 174 + * seeding.) The seed is provided as a parameter to this function call. The 175 + * provided seed should have the length of the seed size defined for the random 176 + * number generator as defined by crypto_rng_seedsize. 178 177 * 179 178 * Return: 0 if the setting of the key was successful; < 0 if an error occurred 180 179 */
+33 -84
include/crypto/scatterwalk.h
··· 11 11 #ifndef _CRYPTO_SCATTERWALK_H 12 12 #define _CRYPTO_SCATTERWALK_H 13 13 14 - #include <linux/errno.h> 14 + #include <crypto/algapi.h> 15 + 15 16 #include <linux/highmem.h> 16 17 #include <linux/mm.h> 17 18 #include <linux/scatterlist.h> 18 - #include <linux/types.h> 19 - 20 - struct scatter_walk { 21 - /* Must be the first member, see struct skcipher_walk. */ 22 - union { 23 - void *const addr; 24 - 25 - /* Private API field, do not touch. */ 26 - union crypto_no_such_thing *__addr; 27 - }; 28 - struct scatterlist *sg; 29 - unsigned int offset; 30 - }; 31 - 32 - struct skcipher_walk { 33 - union { 34 - /* Virtual address of the source. */ 35 - struct { 36 - struct { 37 - const void *const addr; 38 - } virt; 39 - } src; 40 - 41 - /* Private field for the API, do not use. */ 42 - struct scatter_walk in; 43 - }; 44 - 45 - union { 46 - /* Virtual address of the destination. */ 47 - struct { 48 - struct { 49 - void *const addr; 50 - } virt; 51 - } dst; 52 - 53 - /* Private field for the API, do not use. */ 54 - struct scatter_walk out; 55 - }; 56 - 57 - unsigned int nbytes; 58 - unsigned int total; 59 - 60 - u8 *page; 61 - u8 *buffer; 62 - u8 *oiv; 63 - void *iv; 64 - 65 - unsigned int ivsize; 66 - 67 - int flags; 68 - unsigned int blocksize; 69 - unsigned int stride; 70 - unsigned int alignmask; 71 - }; 72 19 73 20 static inline void scatterwalk_crypto_chain(struct scatterlist *head, 74 21 struct scatterlist *sg, int num) ··· 174 227 scatterwalk_advance(walk, nbytes); 175 228 } 176 229 230 + /* 231 + * Flush the dcache of any pages that overlap the region 232 + * [offset, offset + nbytes) relative to base_page. 233 + * 234 + * This should be called only when ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, to ensure 235 + * that all relevant code (including the call to sg_page() in the caller, if 236 + * applicable) gets fully optimized out when !ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE. 
237 + */ 238 + static inline void __scatterwalk_flush_dcache_pages(struct page *base_page, 239 + unsigned int offset, 240 + unsigned int nbytes) 241 + { 242 + unsigned int num_pages; 243 + 244 + base_page += offset / PAGE_SIZE; 245 + offset %= PAGE_SIZE; 246 + 247 + /* 248 + * This is an overflow-safe version of 249 + * num_pages = DIV_ROUND_UP(offset + nbytes, PAGE_SIZE). 250 + */ 251 + num_pages = nbytes / PAGE_SIZE; 252 + num_pages += DIV_ROUND_UP(offset + (nbytes % PAGE_SIZE), PAGE_SIZE); 253 + 254 + for (unsigned int i = 0; i < num_pages; i++) 255 + flush_dcache_page(base_page + i); 256 + } 257 + 177 258 /** 178 259 * scatterwalk_done_dst() - Finish one step of a walk of destination scatterlist 179 260 * @walk: the scatter_walk ··· 215 240 unsigned int nbytes) 216 241 { 217 242 scatterwalk_unmap(walk); 218 - /* 219 - * Explicitly check ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE instead of just 220 - * relying on flush_dcache_page() being a no-op when not implemented, 221 - * since otherwise the BUG_ON in sg_page() does not get optimized out. 222 - * This also avoids having to consider whether the loop would get 223 - * reliably optimized out or not. 
224 - */ 225 - if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE) { 226 - struct page *base_page; 227 - unsigned int offset; 228 - int start, end, i; 229 - 230 - base_page = sg_page(walk->sg); 231 - offset = walk->offset; 232 - start = offset >> PAGE_SHIFT; 233 - end = start + (nbytes >> PAGE_SHIFT); 234 - end += (offset_in_page(offset) + offset_in_page(nbytes) + 235 - PAGE_SIZE - 1) >> PAGE_SHIFT; 236 - for (i = start; i < end; i++) 237 - flush_dcache_page(base_page + i); 238 - } 243 + if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE) 244 + __scatterwalk_flush_dcache_pages(sg_page(walk->sg), 245 + walk->offset, nbytes); 239 246 scatterwalk_advance(walk, nbytes); 240 247 } 241 248 ··· 252 295 struct scatterlist *scatterwalk_ffwd(struct scatterlist dst[2], 253 296 struct scatterlist *src, 254 297 unsigned int len); 255 - 256 - int skcipher_walk_first(struct skcipher_walk *walk, bool atomic); 257 - int skcipher_walk_done(struct skcipher_walk *walk, int res); 258 - 259 - static inline void skcipher_walk_abort(struct skcipher_walk *walk) 260 - { 261 - skcipher_walk_done(walk, -ECANCELED); 262 - } 263 298 264 299 #endif /* _CRYPTO_SCATTERWALK_H */
+1 -1
include/keys/asymmetric-type.h
··· 49 49 */ 50 50 struct asymmetric_key_id { 51 51 unsigned short len; 52 - unsigned char data[]; 52 + unsigned char data[] __counted_by(len); 53 53 }; 54 54 55 55 struct asymmetric_key_ids {
+58 -12
include/linux/rhashtable.h
··· 355 355 local_irq_restore(flags); 356 356 } 357 357 358 - static inline struct rhash_head *__rht_ptr( 359 - struct rhash_lock_head *p, struct rhash_lock_head __rcu *const *bkt) 358 + enum rht_lookup_freq { 359 + RHT_LOOKUP_NORMAL, 360 + RHT_LOOKUP_LIKELY, 361 + }; 362 + 363 + static __always_inline struct rhash_head *__rht_ptr( 364 + struct rhash_lock_head *p, struct rhash_lock_head __rcu *const *bkt, 365 + const enum rht_lookup_freq freq) 360 366 { 361 - return (struct rhash_head *) 362 - ((unsigned long)p & ~BIT(0) ?: 363 - (unsigned long)RHT_NULLS_MARKER(bkt)); 367 + unsigned long p_val = (unsigned long)p & ~BIT(0); 368 + 369 + BUILD_BUG_ON(!__builtin_constant_p(freq)); 370 + 371 + if (freq == RHT_LOOKUP_LIKELY) 372 + return (struct rhash_head *) 373 + (likely(p_val) ? p_val : (unsigned long)RHT_NULLS_MARKER(bkt)); 374 + else 375 + return (struct rhash_head *) 376 + (p_val ?: (unsigned long)RHT_NULLS_MARKER(bkt)); 364 377 } 365 378 366 379 /* ··· 383 370 * rht_ptr_exclusive() dereferences in a context where exclusive 384 371 * access is guaranteed, such as when destroying the table. 
385 372 */ 373 + static __always_inline struct rhash_head *__rht_ptr_rcu( 374 + struct rhash_lock_head __rcu *const *bkt, 375 + const enum rht_lookup_freq freq) 376 + { 377 + return __rht_ptr(rcu_dereference_all(*bkt), bkt, freq); 378 + } 379 + 386 380 static inline struct rhash_head *rht_ptr_rcu( 387 381 struct rhash_lock_head __rcu *const *bkt) 388 382 { 389 - return __rht_ptr(rcu_dereference_all(*bkt), bkt); 383 + return __rht_ptr_rcu(bkt, RHT_LOOKUP_NORMAL); 390 384 } 391 385 392 386 static inline struct rhash_head *rht_ptr( ··· 401 381 struct bucket_table *tbl, 402 382 unsigned int hash) 403 383 { 404 - return __rht_ptr(rht_dereference_bucket(*bkt, tbl, hash), bkt); 384 + return __rht_ptr(rht_dereference_bucket(*bkt, tbl, hash), bkt, 385 + RHT_LOOKUP_NORMAL); 405 386 } 406 387 407 388 static inline struct rhash_head *rht_ptr_exclusive( 408 389 struct rhash_lock_head __rcu *const *bkt) 409 390 { 410 - return __rht_ptr(rcu_dereference_protected(*bkt, 1), bkt); 391 + return __rht_ptr(rcu_dereference_protected(*bkt, 1), bkt, 392 + RHT_LOOKUP_NORMAL); 411 393 } 412 394 413 395 static inline void rht_assign_locked(struct rhash_lock_head __rcu **bkt, ··· 610 588 /* Internal function, do not use. */ 611 589 static __always_inline struct rhash_head *__rhashtable_lookup( 612 590 struct rhashtable *ht, const void *key, 613 - const struct rhashtable_params params) 591 + const struct rhashtable_params params, 592 + const enum rht_lookup_freq freq) 614 593 { 615 594 struct rhashtable_compare_arg arg = { 616 595 .ht = ht, ··· 622 599 struct rhash_head *he; 623 600 unsigned int hash; 624 601 602 + BUILD_BUG_ON(!__builtin_constant_p(freq)); 625 603 tbl = rht_dereference_rcu(ht->tbl, ht); 626 604 restart: 627 605 hash = rht_key_hashfn(ht, tbl, key, params); 628 606 bkt = rht_bucket(tbl, hash); 629 607 do { 630 - rht_for_each_rcu_from(he, rht_ptr_rcu(bkt), tbl, hash) { 608 + rht_for_each_rcu_from(he, __rht_ptr_rcu(bkt, freq), tbl, hash) { 631 609 if (params.obj_cmpfn ? 
632 610 params.obj_cmpfn(&arg, rht_obj(ht, he)) : 633 611 rhashtable_compare(&arg, rht_obj(ht, he))) ··· 667 643 struct rhashtable *ht, const void *key, 668 644 const struct rhashtable_params params) 669 645 { 670 - struct rhash_head *he = __rhashtable_lookup(ht, key, params); 646 + struct rhash_head *he = __rhashtable_lookup(ht, key, params, 647 + RHT_LOOKUP_NORMAL); 671 648 672 649 return he ? rht_obj(ht, he) : NULL; 650 + } 651 + 652 + static __always_inline void *rhashtable_lookup_likely( 653 + struct rhashtable *ht, const void *key, 654 + const struct rhashtable_params params) 655 + { 656 + struct rhash_head *he = __rhashtable_lookup(ht, key, params, 657 + RHT_LOOKUP_LIKELY); 658 + 659 + return likely(he) ? rht_obj(ht, he) : NULL; 673 660 } 674 661 675 662 /** ··· 728 693 struct rhltable *hlt, const void *key, 729 694 const struct rhashtable_params params) 730 695 { 731 - struct rhash_head *he = __rhashtable_lookup(&hlt->ht, key, params); 696 + struct rhash_head *he = __rhashtable_lookup(&hlt->ht, key, params, 697 + RHT_LOOKUP_NORMAL); 732 698 733 699 return he ? container_of(he, struct rhlist_head, rhead) : NULL; 700 + } 701 + 702 + static __always_inline struct rhlist_head *rhltable_lookup_likely( 703 + struct rhltable *hlt, const void *key, 704 + const struct rhashtable_params params) 705 + { 706 + struct rhash_head *he = __rhashtable_lookup(&hlt->ht, key, params, 707 + RHT_LOOKUP_LIKELY); 708 + 709 + return likely(he) ? container_of(he, struct rhlist_head, rhead) : NULL; 734 710 } 735 711 736 712 /* Internal function, please use rhashtable_insert_fast() instead. This
+26
include/soc/fsl/caam-blob.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 2 /* 3 3 * Copyright (C) 2020 Pengutronix, Ahmad Fatoum <kernel@pengutronix.de> 4 + * Copyright 2024-2025 NXP 4 5 */ 5 6 6 7 #ifndef __CAAM_BLOB_GEN ··· 13 12 #define CAAM_BLOB_KEYMOD_LENGTH 16 14 13 #define CAAM_BLOB_OVERHEAD (32 + 16) 15 14 #define CAAM_BLOB_MAX_LEN 4096 15 + #define CAAM_ENC_ALGO_CCM 0x1 16 + #define CAAM_ENC_ALGO_ECB 0x2 17 + #define CAAM_NONCE_SIZE 6 18 + #define CAAM_ICV_SIZE 6 19 + #define CAAM_CCM_OVERHEAD (CAAM_NONCE_SIZE + CAAM_ICV_SIZE) 16 20 17 21 struct caam_blob_priv; 18 22 19 23 /** 24 + * struct caam_pkey_info - information for CAAM protected key 25 + * @is_pkey: flag to identify, if the key is protected. 26 + * @key_enc_algo: identifies the algorithm, ccm or ecb 27 + * @plain_key_sz: size of plain key. 28 + * @key_buf: contains key data 29 + */ 30 + struct caam_pkey_info { 31 + u8 is_pkey; 32 + u8 key_enc_algo; 33 + u16 plain_key_sz; 34 + u8 key_buf[]; 35 + } __packed; 36 + 37 + /* sizeof struct caam_pkey_info */ 38 + #define CAAM_PKEY_HEADER 4 39 + 40 + /** 20 41 * struct caam_blob_info - information for CAAM blobbing 42 + * @pkey_info: pointer to keep protected key information 21 43 * @input: pointer to input buffer (must be DMAable) 22 44 * @input_len: length of @input buffer in bytes. 23 45 * @output: pointer to output buffer (must be DMAable) ··· 50 26 * May not exceed %CAAM_BLOB_KEYMOD_LENGTH 51 27 */ 52 28 struct caam_blob_info { 29 + struct caam_pkey_info pkey_info; 30 + 53 31 void *input; 54 32 size_t input_len; 55 33
+4 -8
kernel/padata.c
··· 506 506 padata_works_free(&works); 507 507 } 508 508 509 - static void __padata_list_init(struct padata_list *pd_list) 510 - { 511 - INIT_LIST_HEAD(&pd_list->list); 512 - spin_lock_init(&pd_list->lock); 513 - } 514 - 515 509 /* Initialize all percpu queues used by serial workers */ 516 510 static void padata_init_squeues(struct parallel_data *pd) 517 511 { ··· 515 521 for_each_cpu(cpu, pd->cpumask.cbcpu) { 516 522 squeue = per_cpu_ptr(pd->squeue, cpu); 517 523 squeue->pd = pd; 518 - __padata_list_init(&squeue->serial); 524 + INIT_LIST_HEAD(&squeue->serial.list); 525 + spin_lock_init(&squeue->serial.lock); 519 526 INIT_WORK(&squeue->work, padata_serial_worker); 520 527 } 521 528 } ··· 529 534 530 535 for_each_cpu(cpu, pd->cpumask.pcpu) { 531 536 list = per_cpu_ptr(pd->reorder_list, cpu); 532 - __padata_list_init(list); 537 + INIT_LIST_HEAD(&list->list); 538 + spin_lock_init(&list->lock); 533 539 } 534 540 } 535 541
+1 -1
lib/crypto/mpi/mpicoder.c
··· 398 398 399 399 while (sg_miter_next(&miter)) { 400 400 buff = miter.addr; 401 - len = min_t(unsigned, miter.length, nbytes); 401 + len = min(miter.length, nbytes); 402 402 nbytes -= len; 403 403 404 404 for (x = 0; x < len; x++) {
+108
security/keys/trusted-keys/trusted_caam.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 3 * Copyright (C) 2021 Pengutronix, Ahmad Fatoum <kernel@pengutronix.de> 4 + * Copyright 2025 NXP 4 5 */ 5 6 6 7 #include <keys/trusted_caam.h> 7 8 #include <keys/trusted-type.h> 8 9 #include <linux/build_bug.h> 9 10 #include <linux/key-type.h> 11 + #include <linux/parser.h> 10 12 #include <soc/fsl/caam-blob.h> 11 13 12 14 static struct caam_blob_priv *blobifier; ··· 17 15 18 16 static_assert(MAX_KEY_SIZE + CAAM_BLOB_OVERHEAD <= CAAM_BLOB_MAX_LEN); 19 17 static_assert(MAX_BLOB_SIZE <= CAAM_BLOB_MAX_LEN); 18 + 19 + enum { 20 + opt_err, 21 + opt_key_enc_algo, 22 + }; 23 + 24 + static const match_table_t key_tokens = { 25 + {opt_key_enc_algo, "key_enc_algo=%s"}, 26 + {opt_err, NULL} 27 + }; 28 + 29 + #ifdef CAAM_DEBUG 30 + static inline void dump_options(const struct caam_pkey_info *pkey_info) 31 + { 32 + pr_info("key encryption algo %d\n", pkey_info->key_enc_algo); 33 + } 34 + #else 35 + static inline void dump_options(const struct caam_pkey_info *pkey_info) 36 + { 37 + } 38 + #endif 39 + 40 + static int get_pkey_options(char *c, 41 + struct caam_pkey_info *pkey_info) 42 + { 43 + substring_t args[MAX_OPT_ARGS]; 44 + unsigned long token_mask = 0; 45 + u16 key_enc_algo; 46 + char *p = c; 47 + int token; 48 + int res; 49 + 50 + if (!c) 51 + return 0; 52 + 53 + while ((p = strsep(&c, " \t"))) { 54 + if (*p == '\0' || *p == ' ' || *p == '\t') 55 + continue; 56 + token = match_token(p, key_tokens, args); 57 + if (test_and_set_bit(token, &token_mask)) 58 + return -EINVAL; 59 + 60 + switch (token) { 61 + case opt_key_enc_algo: 62 + res = kstrtou16(args[0].from, 16, &key_enc_algo); 63 + if (res < 0) 64 + return -EINVAL; 65 + pkey_info->key_enc_algo = key_enc_algo; 66 + break; 67 + default: 68 + return -EINVAL; 69 + } 70 + } 71 + return 0; 72 + } 73 + 74 + static bool is_key_pkey(char **datablob) 75 + { 76 + char *c = NULL; 77 + 78 + do { 79 + /* Second argument onwards, 80 + * determine if tied to HW 81 + */ 82 + c = 
strsep(datablob, " \t"); 83 + if (c && (strcmp(c, "pk") == 0)) 84 + return true; 85 + } while (c); 86 + 87 + return false; 88 + } 20 89 21 90 static int trusted_caam_seal(struct trusted_key_payload *p, char *datablob) 22 91 { ··· 98 25 .key_mod = KEYMOD, .key_mod_len = sizeof(KEYMOD) - 1, 99 26 }; 100 27 28 + /* 29 + * If it is to be treated as protected key, 30 + * read next arguments too. 31 + */ 32 + if (is_key_pkey(&datablob)) { 33 + info.pkey_info.plain_key_sz = p->key_len; 34 + info.pkey_info.is_pkey = 1; 35 + ret = get_pkey_options(datablob, &info.pkey_info); 36 + if (ret < 0) 37 + return 0; 38 + dump_options(&info.pkey_info); 39 + } 40 + 101 41 ret = caam_encap_blob(blobifier, &info); 102 42 if (ret) 103 43 return ret; 104 44 105 45 p->blob_len = info.output_len; 46 + if (info.pkey_info.is_pkey) { 47 + p->key_len = p->blob_len + sizeof(struct caam_pkey_info); 48 + memcpy(p->key, &info.pkey_info, sizeof(struct caam_pkey_info)); 49 + memcpy(p->key + sizeof(struct caam_pkey_info), p->blob, p->blob_len); 50 + } 51 + 106 52 return 0; 107 53 } 108 54 ··· 134 42 .key_mod = KEYMOD, .key_mod_len = sizeof(KEYMOD) - 1, 135 43 }; 136 44 45 + if (is_key_pkey(&datablob)) { 46 + info.pkey_info.plain_key_sz = p->blob_len - CAAM_BLOB_OVERHEAD; 47 + info.pkey_info.is_pkey = 1; 48 + ret = get_pkey_options(datablob, &info.pkey_info); 49 + if (ret < 0) 50 + return 0; 51 + dump_options(&info.pkey_info); 52 + 53 + p->key_len = p->blob_len + sizeof(struct caam_pkey_info); 54 + memcpy(p->key, &info.pkey_info, sizeof(struct caam_pkey_info)); 55 + memcpy(p->key + sizeof(struct caam_pkey_info), p->blob, p->blob_len); 56 + 57 + return 0; 58 + } 59 + 137 60 ret = caam_decap_blob(blobifier, &info); 138 61 if (ret) 139 62 return ret; 140 63 141 64 p->key_len = info.output_len; 65 + 142 66 return 0; 143 67 } 144 68