Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'v6.18-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
"Drivers:
- Add ciphertext hiding support to ccp
- Add hashjoin, gather and UDMA data move features to hisilicon
- Add lz4 and lz77_only to hisilicon
- Add xilinx hwrng driver
- Add ti driver with ecb/cbc aes support
- Add ring buffer idle and command queue telemetry for GEN6 in qat

Others:
- Use rcu_dereference_all to stop false alarms in rhashtable
- Fix CPU number wraparound in padata"

* tag 'v6.18-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (78 commits)
dt-bindings: rng: hisi-rng: convert to DT schema
crypto: doc - Add explicit title heading to API docs
hwrng: ks-sa - fix division by zero in ks_sa_rng_init
KEYS: X.509: Fix Basic Constraints CA flag parsing
crypto: anubis - simplify return statement in anubis_mod_init
crypto: hisilicon/qm - set NULL to qm->debug.qm_diff_regs
crypto: hisilicon/qm - clear all VF configurations in the hardware
crypto: hisilicon - enable error reporting again
crypto: hisilicon/qm - mask axi error before memory init
crypto: hisilicon/qm - invalidate queues in use
crypto: qat - Return pointer directly in adf_ctl_alloc_resources
crypto: aspeed - Fix dma_unmap_sg() direction
rhashtable: Use rcu_dereference_all and rcu_dereference_all_check
crypto: comp - Use same definition of context alloc and free ops
crypto: omap - convert from tasklet to BH workqueue
crypto: qat - Replace kzalloc() + copy_from_user() with memdup_user()
crypto: caam - double the entropy delay interval for retry
padata: WQ_PERCPU added to alloc_workqueue users
padata: replace use of system_unbound_wq with system_dfl_wq
crypto: cryptd - WQ_PERCPU added to alloc_workqueue users
...

+2865 -959
+27
Documentation/ABI/testing/debugfs-driver-qat_telemetry
···
 gp_lat_acc_avg          average get to put latency [ns]
 bw_in                   PCIe, write bandwidth [Mbps]
 bw_out                  PCIe, read bandwidth [Mbps]
+re_acc_avg              average ring empty time [ns]
 at_page_req_lat_avg     Address Translator(AT), average page
                         request latency [ns]
 at_trans_lat_avg        AT, average page translation latency [ns]
···
 exec_cph<N>             execution count of Cipher slice N
 util_ath<N>             utilization of Authentication slice N [%]
 exec_ath<N>             execution count of Authentication slice N
+cmdq_wait_cnv<N>        wait time for cmdq N to get Compression and verify
+                        slice ownership
+cmdq_exec_cnv<N>        Compression and verify slice execution time while
+                        owned by cmdq N
+cmdq_drain_cnv<N>       time taken for cmdq N to release Compression and
+                        verify slice ownership
+cmdq_wait_dcprz<N>      wait time for cmdq N to get Decompression
+                        slice N ownership
+cmdq_exec_dcprz<N>      Decompression slice execution time while
+                        owned by cmdq N
+cmdq_drain_dcprz<N>     time taken for cmdq N to release Decompression
+                        slice ownership
+cmdq_wait_pke<N>        wait time for cmdq N to get PKE slice ownership
+cmdq_exec_pke<N>        PKE slice execution time while owned by cmdq N
+cmdq_drain_pke<N>       time taken for cmdq N to release PKE slice
+                        ownership
+cmdq_wait_ucs<N>        wait time for cmdq N to get UCS slice ownership
+cmdq_exec_ucs<N>        UCS slice execution time while owned by cmdq N
+cmdq_drain_ucs<N>       time taken for cmdq N to release UCS slice
+                        ownership
+cmdq_wait_ath<N>        wait time for cmdq N to get Authentication slice
+                        ownership
+cmdq_exec_ath<N>        Authentication slice execution time while owned
+                        by cmdq N
+cmdq_drain_ath<N>       time taken for cmdq N to release Authentication
+                        slice ownership
 =======================  ========================================

 The telemetry report file can be read with the following command::
+3
Documentation/crypto/api-aead.rst
+Authenticated Encryption With Associated Data (AEAD)
+====================================================
+
 Authenticated Encryption With Associated Data (AEAD) Algorithm Definitions
 --------------------------------------------------------------------------
+3
Documentation/crypto/api-akcipher.rst
+Asymmetric Cipher
+=================
+
 Asymmetric Cipher Algorithm Definitions
 ---------------------------------------
+3
Documentation/crypto/api-digest.rst
+Message Digest
+==============
+
 Message Digest Algorithm Definitions
 ------------------------------------
+3
Documentation/crypto/api-kpp.rst
+Key-agreement Protocol Primitives (KPP)
+=======================================
+
 Key-agreement Protocol Primitives (KPP) Cipher Algorithm Definitions
 --------------------------------------------------------------------
+3
Documentation/crypto/api-rng.rst
+Random Number Generator (RNG)
+=============================
+
 Random Number Algorithm Definitions
 -----------------------------------
+3
Documentation/crypto/api-sig.rst
+Asymmetric Signature
+====================
+
 Asymmetric Signature Algorithm Definitions
 ------------------------------------------
+3
Documentation/crypto/api-skcipher.rst
+Symmetric Key Cipher
+====================
+
 Block Cipher Algorithm Definitions
 ----------------------------------
+50
Documentation/devicetree/bindings/crypto/ti,am62l-dthev2.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/crypto/ti,am62l-dthev2.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: K3 SoC DTHE V2 crypto module
+
+maintainers:
+  - T Pratham <t-pratham@ti.com>
+
+properties:
+  compatible:
+    enum:
+      - ti,am62l-dthev2
+
+  reg:
+    maxItems: 1
+
+  dmas:
+    items:
+      - description: AES Engine RX DMA Channel
+      - description: AES Engine TX DMA Channel
+      - description: SHA Engine TX DMA Channel
+
+  dma-names:
+    items:
+      - const: rx
+      - const: tx1
+      - const: tx2
+
+required:
+  - compatible
+  - reg
+  - dmas
+  - dma-names
+
+additionalProperties: false
+
+examples:
+  - |
+    crypto@40800000 {
+        compatible = "ti,am62l-dthev2";
+        reg = <0x40800000 0x10000>;
+
+        dmas = <&main_bcdma 0 0 0x4700 0>,
+               <&main_bcdma 0 0 0xc701 0>,
+               <&main_bcdma 0 0 0xc700 0>;
+        dma-names = "rx", "tx1", "tx2";
+    };
+35
Documentation/devicetree/bindings/crypto/xlnx,versal-trng.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/crypto/xlnx,versal-trng.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Xilinx Versal True Random Number Generator Hardware Accelerator
+
+maintainers:
+  - Harsh Jain <h.jain@amd.com>
+  - Mounika Botcha <mounika.botcha@amd.com>
+
+description:
+  The Versal True Random Number Generator consists of Ring Oscillators as
+  entropy source and a deterministic CTR_DRBG random bit generator (DRBG).
+
+properties:
+  compatible:
+    const: xlnx,versal-trng
+
+  reg:
+    maxItems: 1
+
+required:
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    rng@f1230000 {
+        compatible = "xlnx,versal-trng";
+        reg = <0xf1230000 0x1000>;
+    };
+...
-12
Documentation/devicetree/bindings/rng/hisi-rng.txt
-Hisilicon Random Number Generator
-
-Required properties:
-- compatible : Should be "hisilicon,hip04-rng" or "hisilicon,hip05-rng"
-- reg : Offset and length of the register set of this block
-
-Example:
-
-rng@d1010000 {
-	compatible = "hisilicon,hip05-rng";
-	reg = <0xd1010000 0x100>;
-};
+32
Documentation/devicetree/bindings/rng/hisi-rng.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/rng/hisi-rng.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Hisilicon Random Number Generator
+
+maintainers:
+  - Kefeng Wang <wangkefeng.wang@huawei>
+
+properties:
+  compatible:
+    enum:
+      - hisilicon,hip04-rng
+      - hisilicon,hip05-rng
+
+  reg:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    rng@d1010000 {
+        compatible = "hisilicon,hip05-rng";
+        reg = <0xd1010000 0x100>;
+    };
+13
MAINTAINERS
···
 F:	drivers/clk/ti/
 F:	include/linux/clk/ti.h

+TI DATA TRANSFORM AND HASHING ENGINE (DTHE) V2 CRYPTO DRIVER
+M:	T Pratham <t-pratham@ti.com>
+L:	linux-crypto@vger.kernel.org
+S:	Supported
+F:	Documentation/devicetree/bindings/crypto/ti,am62l-dthev2.yaml
+F:	drivers/crypto/ti/
+
 TI DAVINCI MACHINE SUPPORT
 M:	Bartosz Golaszewski <brgl@bgdev.pl>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
 F:	Documentation/misc-devices/xilinx_sdfec.rst
 F:	drivers/misc/xilinx_sdfec.c
 F:	include/uapi/misc/xilinx_sdfec.h
+
+XILINX TRNG DRIVER
+M:	Mounika Botcha <mounika.botcha@amd.com>
+M:	Harsh Jain <h.jain@amd.com>
+S:	Maintained
+F:	drivers/crypto/xilinx/xilinx-trng.c

 XILINX UARTLITE SERIAL DRIVER
 M:	Peter Korsgaard <jacmet@sunsite.dk>
+1
arch/arm64/crypto/Kconfig
···
 config CRYPTO_AES_ARM64
 	tristate "Ciphers: AES, modes: ECB, CBC, CTR, CTS, XCTR, XTS"
 	select CRYPTO_AES
+	select CRYPTO_LIB_SHA256
 	help
 	  Block ciphers: AES cipher algorithms (FIPS-197)
 	  Length-preserving ciphers: AES with ECB, CBC, CTR, CTS,
+1 -20
arch/arm64/crypto/aes-glue.c
···
 struct crypto_aes_essiv_cbc_ctx {
 	struct crypto_aes_ctx key1;
 	struct crypto_aes_ctx __aligned(8) key2;
-	struct crypto_shash *hash;
 };

 struct mac_tfm_ctx {
···
 	if (ret)
 		return ret;

-	crypto_shash_tfm_digest(ctx->hash, in_key, key_len, digest);
+	sha256(in_key, key_len, digest);

 	return aes_expandkey(&ctx->key2, digest, sizeof(digest));
 }
···
 	kernel_neon_end();

 	return skcipher_walk_done(&walk, 0);
-}
-
-static int __maybe_unused essiv_cbc_init_tfm(struct crypto_skcipher *tfm)
-{
-	struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	ctx->hash = crypto_alloc_shash("sha256", 0, 0);
-
-	return PTR_ERR_OR_ZERO(ctx->hash);
-}
-
-static void __maybe_unused essiv_cbc_exit_tfm(struct crypto_skcipher *tfm)
-{
-	struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	crypto_free_shash(ctx->hash);
 }

 static int __maybe_unused essiv_cbc_encrypt(struct skcipher_request *req)
···
 	.setkey		= essiv_cbc_set_key,
 	.encrypt	= essiv_cbc_encrypt,
 	.decrypt	= essiv_cbc_decrypt,
-	.init		= essiv_cbc_init_tfm,
-	.exit		= essiv_cbc_exit_tfm,
 } };

 static int cbcmac_setkey(struct crypto_shash *tfm, const u8 *in_key,
+7 -1
arch/s390/crypto/sha.h
···
 #ifndef _CRYPTO_ARCH_S390_SHA_H
 #define _CRYPTO_ARCH_S390_SHA_H

+#include <crypto/hash.h>
 #include <crypto/sha2.h>
 #include <crypto/sha3.h>
+#include <linux/build_bug.h>
 #include <linux/types.h>

 /* must be big enough for the largest SHA variant */
 #define CPACF_MAX_PARMBLOCK_SIZE	SHA3_STATE_SIZE
 #define SHA_MAX_BLOCK_SIZE		SHA3_224_BLOCK_SIZE
-#define S390_SHA_CTX_SIZE		sizeof(struct s390_sha_ctx)

 struct s390_sha_ctx {
 	u64 count;	/* message length in bytes */
···
 		    unsigned int len);
 int s390_sha_finup(struct shash_desc *desc, const u8 *src, unsigned int len,
 		   u8 *out);
+
+static inline void __check_s390_sha_ctx_size(void)
+{
+	BUILD_BUG_ON(S390_SHA_CTX_SIZE != sizeof(struct s390_sha_ctx));
+}

 #endif
+4 -2
crypto/842.c
···
 }

 static struct scomp_alg scomp = {
-	.alloc_ctx = crypto842_alloc_ctx,
-	.free_ctx = crypto842_free_ctx,
+	.streams = {
+		.alloc_ctx = crypto842_alloc_ctx,
+		.free_ctx = crypto842_free_ctx,
+	},
 	.compress = crypto842_scompress,
 	.decompress = crypto842_sdecompress,
 	.base = {
+1 -4
crypto/anubis.c
···
 static int __init anubis_mod_init(void)
 {
-	int ret = 0;
-
-	ret = crypto_register_alg(&anubis_alg);
-	return ret;
+	return crypto_register_alg(&anubis_alg);
 }

 static void __exit anubis_mod_fini(void)
+12 -4
crypto/asymmetric_keys/x509_cert_parser.c
···
 	/*
 	 * Get hold of the basicConstraints
 	 * v[1] is the encoding size
-	 *	(Expect 0x2 or greater, making it 1 or more bytes)
+	 *	(Expect 0x00 for empty SEQUENCE with CA:FALSE, or
+	 *	 0x03 or greater for non-empty SEQUENCE)
 	 * v[2] is the encoding type
 	 *	(Expect an ASN1_BOOL for the CA)
-	 * v[3] is the contents of the ASN1_BOOL
-	 *	(Expect 1 if the CA is TRUE)
+	 * v[3] is the length of the ASN1_BOOL
+	 *	(Expect 1 for a single byte boolean)
+	 * v[4] is the contents of the ASN1_BOOL
+	 *	(Expect 0xFF if the CA is TRUE)
 	 * vlen should match the entire extension size
 	 */
 	if (v[0] != (ASN1_CONS_BIT | ASN1_SEQ))
···
 		return -EBADMSG;
 	if (v[1] != vlen - 2)
 		return -EBADMSG;
-	if (vlen >= 4 && v[1] != 0 && v[2] == ASN1_BOOL && v[3] == 1)
+	/* Empty SEQUENCE means CA:FALSE (default value omitted per DER) */
+	if (v[1] == 0)
+		return 0;
+	if (vlen >= 5 && v[2] == ASN1_BOOL && v[3] == 1 && v[4] == 0xFF)
 		ctx->cert->pub->key_eflags |= 1 << KEY_EFLAG_CA;
+	else
+		return -EBADMSG;
 	return 0;
 }
+2 -1
crypto/cryptd.c
···
 {
 	int err;

-	cryptd_wq = alloc_workqueue("cryptd", WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE,
+	cryptd_wq = alloc_workqueue("cryptd",
+				    WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE | WQ_PERCPU,
 				    1);
 	if (!cryptd_wq)
 		return -ENOMEM;
+1
crypto/jitterentropy-kcapi.c
···
 		pr_warn_ratelimited("Unexpected digest size\n");
 		return -EINVAL;
 	}
+	kmsan_unpoison_memory(intermediary, sizeof(intermediary));

 	/*
 	 * This loop fills a buffer which is injected into the entropy pool.
+4 -2
crypto/lz4.c
···
 }

 static struct scomp_alg scomp = {
-	.alloc_ctx = lz4_alloc_ctx,
-	.free_ctx = lz4_free_ctx,
+	.streams = {
+		.alloc_ctx = lz4_alloc_ctx,
+		.free_ctx = lz4_free_ctx,
+	},
 	.compress = lz4_scompress,
 	.decompress = lz4_sdecompress,
 	.base = {
+4 -2
crypto/lz4hc.c
···
 }

 static struct scomp_alg scomp = {
-	.alloc_ctx = lz4hc_alloc_ctx,
-	.free_ctx = lz4hc_free_ctx,
+	.streams = {
+		.alloc_ctx = lz4hc_alloc_ctx,
+		.free_ctx = lz4hc_free_ctx,
+	},
 	.compress = lz4hc_scompress,
 	.decompress = lz4hc_sdecompress,
 	.base = {
+4 -2
crypto/lzo-rle.c
···
 }

 static struct scomp_alg scomp = {
-	.alloc_ctx = lzorle_alloc_ctx,
-	.free_ctx = lzorle_free_ctx,
+	.streams = {
+		.alloc_ctx = lzorle_alloc_ctx,
+		.free_ctx = lzorle_free_ctx,
+	},
 	.compress = lzorle_scompress,
 	.decompress = lzorle_sdecompress,
 	.base = {
+4 -2
crypto/lzo.c
···
 }

 static struct scomp_alg scomp = {
-	.alloc_ctx = lzo_alloc_ctx,
-	.free_ctx = lzo_free_ctx,
+	.streams = {
+		.alloc_ctx = lzo_alloc_ctx,
+		.free_ctx = lzo_free_ctx,
+	},
 	.compress = lzo_scompress,
 	.decompress = lzo_sdecompress,
 	.base = {
+1
drivers/char/hw_random/Kconfig
···
 config HW_RANDOM_NOMADIK
 	tristate "ST-Ericsson Nomadik Random Number Generator support"
 	depends on ARCH_NOMADIK || COMPILE_TEST
+	depends on ARM_AMBA
 	default HW_RANDOM
 	help
 	  This driver provides kernel-side support for the Random Number
+1 -1
drivers/char/hw_random/cn10k-rng.c
···
 	rng->reg_base = pcim_iomap(pdev, 0, 0);
 	if (!rng->reg_base)
-		return dev_err_probe(&pdev->dev, -ENOMEM, "Error while mapping CSRs, exiting\n");
+		return -ENOMEM;

 	rng->ops.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
 				       "cn10k-rng-%s", dev_name(&pdev->dev));
+4
drivers/char/hw_random/ks-sa-rng.c
···
 	if (IS_ERR(ks_sa_rng->regmap_cfg))
 		return dev_err_probe(dev, -EINVAL, "syscon_node_to_regmap failed\n");

+	ks_sa_rng->clk = devm_clk_get_enabled(dev, NULL);
+	if (IS_ERR(ks_sa_rng->clk))
+		return dev_err_probe(dev, PTR_ERR(ks_sa_rng->clk), "Failed to get clock\n");
+
 	pm_runtime_enable(dev);
 	ret = pm_runtime_resume_and_get(dev);
 	if (ret < 0) {
+1 -1
drivers/char/hw_random/timeriomem-rng.c
···
 		priv->rng_ops.quality = pdata->quality;
 	}

-	priv->period = ns_to_ktime(period * NSEC_PER_USEC);
+	priv->period = us_to_ktime(period);
 	init_completion(&priv->completion);
 	hrtimer_setup(&priv->timer, timeriomem_rng_trigger, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+13
drivers/crypto/Kconfig
···
 	  Select this to enable Tegra Security Engine which accelerates various
 	  AES encryption/decryption and HASH algorithms.

+config CRYPTO_DEV_XILINX_TRNG
+	tristate "Support for Xilinx True Random Generator"
+	depends on ZYNQMP_FIRMWARE || COMPILE_TEST
+	select CRYPTO_RNG
+	select HW_RANDOM
+	help
+	  Xilinx Versal SoC driver provides kernel-side support for True Random Number
+	  Generator and Pseudo random Number in CTR_DRBG mode as defined in NIST SP800-90A.
+
+	  To compile this driver as a module, choose M here: the module
+	  will be called xilinx-trng.
+
 config CRYPTO_DEV_ZYNQMP_AES
 	tristate "Support for Xilinx ZynqMP AES hw accelerator"
 	depends on ZYNQMP_FIRMWARE || COMPILE_TEST
···
 source "drivers/crypto/aspeed/Kconfig"
 source "drivers/crypto/starfive/Kconfig"
 source "drivers/crypto/inside-secure/eip93/Kconfig"
+source "drivers/crypto/ti/Kconfig"

 endif # CRYPTO_HW
+1
drivers/crypto/Makefile
···
 obj-y += intel/
 obj-y += starfive/
 obj-y += cavium/
+obj-y += ti/
+35 -50
drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
···
 	if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG)) {
 		struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
-		struct sun8i_ce_alg_template *algt __maybe_unused;
+		struct sun8i_ce_alg_template *algt;

 		algt = container_of(alg, struct sun8i_ce_alg_template,
 				    alg.skcipher.base);
···
 	return err;
 }

-static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req)
+static int sun8i_ce_cipher_prepare(struct skcipher_request *areq,
+				   struct ce_task *cet)
 {
-	struct skcipher_request *areq = container_of(async_req, struct skcipher_request, base);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
 	struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm);
 	struct sun8i_ce_dev *ce = op->ce;
 	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq);
 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
 	struct sun8i_ce_alg_template *algt;
-	struct sun8i_ce_flow *chan;
-	struct ce_task *cet;
 	struct scatterlist *sg;
 	unsigned int todo, len, offset, ivsize;
 	u32 common, sym;
-	int flow, i;
+	int i;
 	int nr_sgs = 0;
 	int nr_sgd = 0;
 	int err = 0;
···
 	if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG))
 		algt->stat_req++;

-	flow = rctx->flow;
-
-	chan = &ce->chanlist[flow];
-
-	cet = chan->tl;
 	memset(cet, 0, sizeof(struct ce_task));

-	cet->t_id = cpu_to_le32(flow);
+	cet->t_id = cpu_to_le32(rctx->flow);
 	common = ce->variant->alg_cipher[algt->ce_algo_id];
 	common |= rctx->op_dir | CE_COMM_INT;
 	cet->t_common_ctl = cpu_to_le32(common);
···
 	if (areq->iv && ivsize > 0) {
 		if (rctx->op_dir & CE_DECRYPTION) {
 			offset = areq->cryptlen - ivsize;
-			scatterwalk_map_and_copy(chan->backup_iv, areq->src,
+			scatterwalk_map_and_copy(rctx->backup_iv, areq->src,
 						 offset, ivsize, 0);
 		}
-		memcpy(chan->bounce_iv, areq->iv, ivsize);
-		rctx->addr_iv = dma_map_single(ce->dev, chan->bounce_iv, ivsize,
+		memcpy(rctx->bounce_iv, areq->iv, ivsize);
+		rctx->addr_iv = dma_map_single(ce->dev, rctx->bounce_iv, ivsize,
 					       DMA_TO_DEVICE);
 		if (dma_mapping_error(ce->dev, rctx->addr_iv)) {
 			dev_err(ce->dev, "Cannot DMA MAP IV\n");
···
 		goto theend_sgs;
 	}

-	chan->timeout = areq->cryptlen;
 	rctx->nr_sgs = ns;
 	rctx->nr_sgd = nd;
 	return 0;
···

 		offset = areq->cryptlen - ivsize;
 		if (rctx->op_dir & CE_DECRYPTION) {
-			memcpy(areq->iv, chan->backup_iv, ivsize);
-			memzero_explicit(chan->backup_iv, ivsize);
+			memcpy(areq->iv, rctx->backup_iv, ivsize);
+			memzero_explicit(rctx->backup_iv, ivsize);
 		} else {
 			scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
 						 ivsize, 0);
 		}
-		memzero_explicit(chan->bounce_iv, ivsize);
+		memzero_explicit(rctx->bounce_iv, ivsize);
 	}

 	dma_unmap_single(ce->dev, rctx->addr_key, op->keylen, DMA_TO_DEVICE);
···
 	return err;
 }

-static void sun8i_ce_cipher_unprepare(struct crypto_engine *engine,
-				      void *async_req)
+static void sun8i_ce_cipher_unprepare(struct skcipher_request *areq,
+				      struct ce_task *cet)
 {
-	struct skcipher_request *areq = container_of(async_req, struct skcipher_request, base);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
 	struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm);
 	struct sun8i_ce_dev *ce = op->ce;
 	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq);
-	struct sun8i_ce_flow *chan;
-	struct ce_task *cet;
 	unsigned int ivsize, offset;
 	int nr_sgs = rctx->nr_sgs;
 	int nr_sgd = rctx->nr_sgd;
-	int flow;

-	flow = rctx->flow;
-	chan = &ce->chanlist[flow];
-	cet = chan->tl;
 	ivsize = crypto_skcipher_ivsize(tfm);

 	if (areq->src == areq->dst) {
···
 				 DMA_TO_DEVICE);
 		offset = areq->cryptlen - ivsize;
 		if (rctx->op_dir & CE_DECRYPTION) {
-			memcpy(areq->iv, chan->backup_iv, ivsize);
-			memzero_explicit(chan->backup_iv, ivsize);
+			memcpy(areq->iv, rctx->backup_iv, ivsize);
+			memzero_explicit(rctx->backup_iv, ivsize);
 		} else {
 			scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
 						 ivsize, 0);
 		}
-		memzero_explicit(chan->bounce_iv, ivsize);
+		memzero_explicit(rctx->bounce_iv, ivsize);
 	}

 	dma_unmap_single(ce->dev, rctx->addr_key, op->keylen, DMA_TO_DEVICE);
 }

-static void sun8i_ce_cipher_run(struct crypto_engine *engine, void *areq)
-{
-	struct skcipher_request *breq = container_of(areq, struct skcipher_request, base);
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(breq);
-	struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm);
-	struct sun8i_ce_dev *ce = op->ce;
-	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(breq);
-	int flow, err;
-
-	flow = rctx->flow;
-	err = sun8i_ce_run_task(ce, flow, crypto_tfm_alg_name(breq->base.tfm));
-	sun8i_ce_cipher_unprepare(engine, areq);
-	local_bh_disable();
-	crypto_finalize_skcipher_request(engine, breq, err);
-	local_bh_enable();
-}
-
 int sun8i_ce_cipher_do_one(struct crypto_engine *engine, void *areq)
 {
-	int err = sun8i_ce_cipher_prepare(engine, areq);
+	struct skcipher_request *req = skcipher_request_cast(areq);
+	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(req);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct sun8i_cipher_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct sun8i_ce_dev *ce = ctx->ce;
+	struct sun8i_ce_flow *chan;
+	int err;

+	chan = &ce->chanlist[rctx->flow];
+
+	err = sun8i_ce_cipher_prepare(req, chan->tl);
 	if (err)
 		return err;

-	sun8i_ce_cipher_run(engine, areq);
+	err = sun8i_ce_run_task(ce, rctx->flow,
+				crypto_tfm_alg_name(req->base.tfm));
+
+	sun8i_ce_cipher_unprepare(req, chan->tl);
+
+	local_bh_disable();
+	crypto_finalize_skcipher_request(engine, req, err);
+	local_bh_enable();
+
	return 0;
 }
+12 -23
drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
···
 	.trng = CE_ID_NOTSUPP,
 };

+static void sun8i_ce_dump_task_descriptors(struct sun8i_ce_flow *chan)
+{
+	print_hex_dump(KERN_INFO, "TASK: ", DUMP_PREFIX_NONE, 16, 4,
+		       chan->tl, sizeof(struct ce_task), false);
+}
+
 /*
  * sun8i_ce_get_engine_number() get the next channel slot
  * This is a simple round-robin way of getting the next channel
···
 {
 	u32 v;
 	int err = 0;
-	struct ce_task *cet = ce->chanlist[flow].tl;

 #ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
 	ce->chanlist[flow].stat_req++;
···
 	mutex_unlock(&ce->mlock);

 	wait_for_completion_interruptible_timeout(&ce->chanlist[flow].complete,
-			msecs_to_jiffies(ce->chanlist[flow].timeout));
+			msecs_to_jiffies(CE_DMA_TIMEOUT_MS));

 	if (ce->chanlist[flow].status == 0) {
-		dev_err(ce->dev, "DMA timeout for %s (tm=%d) on flow %d\n", name,
-			ce->chanlist[flow].timeout, flow);
+		dev_err(ce->dev, "DMA timeout for %s on flow %d\n", name, flow);
 		err = -EFAULT;
 	}
 	/* No need to lock for this read, the channel is locked so
···
 	/* Sadly, the error bit is not per flow */
 	if (v) {
 		dev_err(ce->dev, "CE ERROR: %x for flow %x\n", v, flow);
+		sun8i_ce_dump_task_descriptors(&ce->chanlist[flow]);
 		err = -EFAULT;
-		print_hex_dump(KERN_INFO, "TASK: ", DUMP_PREFIX_NONE, 16, 4,
-			       cet, sizeof(struct ce_task), false);
 	}
 	if (v & CE_ERR_ALGO_NOTSUP)
 		dev_err(ce->dev, "CE ERROR: algorithm not supported\n");
···
 	v &= 0xF;
 	if (v) {
 		dev_err(ce->dev, "CE ERROR: %x for flow %x\n", v, flow);
+		sun8i_ce_dump_task_descriptors(&ce->chanlist[flow]);
 		err = -EFAULT;
-		print_hex_dump(KERN_INFO, "TASK: ", DUMP_PREFIX_NONE, 16, 4,
-			       cet, sizeof(struct ce_task), false);
 	}
 	if (v & CE_ERR_ALGO_NOTSUP)
 		dev_err(ce->dev, "CE ERROR: algorithm not supported\n");
···
 	v &= 0xFF;
 	if (v) {
 		dev_err(ce->dev, "CE ERROR: %x for flow %x\n", v, flow);
+		sun8i_ce_dump_task_descriptors(&ce->chanlist[flow]);
 		err = -EFAULT;
-		print_hex_dump(KERN_INFO, "TASK: ", DUMP_PREFIX_NONE, 16, 4,
-			       cet, sizeof(struct ce_task), false);
 	}
 	if (v & CE_ERR_ALGO_NOTSUP)
 		dev_err(ce->dev, "CE ERROR: algorithm not supported\n");
···
 		err = -ENOMEM;
 		goto error_engine;
 	}
-	ce->chanlist[i].bounce_iv = devm_kmalloc(ce->dev, AES_BLOCK_SIZE,
-						 GFP_KERNEL | GFP_DMA);
-	if (!ce->chanlist[i].bounce_iv) {
-		err = -ENOMEM;
-		goto error_engine;
-	}
-	ce->chanlist[i].backup_iv = devm_kmalloc(ce->dev, AES_BLOCK_SIZE,
-						 GFP_KERNEL);
-	if (!ce->chanlist[i].backup_iv) {
-		err = -ENOMEM;
-		goto error_engine;
-	}
 	}
 	return 0;
 error_engine:
···
 	pm_runtime_put_sync(ce->dev);

 	if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG)) {
-		struct dentry *dbgfs_dir __maybe_unused;
+		struct dentry *dbgfs_dir;
 		struct dentry *dbgfs_stats __maybe_unused;

 		/* Ignore error of debugfs */
+79 -68
drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
···
 static void sun8i_ce_hash_stat_fb_inc(struct crypto_ahash *tfm)
 {
 	if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG)) {
-		struct sun8i_ce_alg_template *algt __maybe_unused;
+		struct sun8i_ce_alg_template *algt;
 		struct ahash_alg *alg = crypto_ahash_alg(tfm);

 		algt = container_of(alg, struct sun8i_ce_alg_template,
···
 	crypto_ahash_set_reqsize(tfm,
 				 sizeof(struct sun8i_ce_hash_reqctx) +
-				 crypto_ahash_reqsize(op->fallback_tfm));
+				 crypto_ahash_reqsize(op->fallback_tfm) +
+				 CRYPTO_DMA_PADDING);

 	if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG))
 		memcpy(algt->fbname,
···
 int sun8i_ce_hash_init(struct ahash_request *areq)
 {
-	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
+	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
 	struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
···
 int sun8i_ce_hash_export(struct ahash_request *areq, void *out)
 {
-	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
+	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
 	struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
···
 int sun8i_ce_hash_import(struct ahash_request *areq, const void *in)
 {
-	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
+	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
 	struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
···
 int sun8i_ce_hash_final(struct ahash_request *areq)
 {
-	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
+	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
 	struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
···
 int sun8i_ce_hash_update(struct ahash_request *areq)
 {
-	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
+	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
 	struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
···
 int sun8i_ce_hash_finup(struct ahash_request *areq)
 {
-	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
+	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
 	struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
···
 static int sun8i_ce_hash_digest_fb(struct ahash_request *areq)
 {
-	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
+	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
 	struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
···
 int sun8i_ce_hash_digest(struct ahash_request *areq)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
-	struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
-	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
-	struct sun8i_ce_alg_template *algt;
-	struct sun8i_ce_dev *ce;
+	struct sun8i_ce_hash_tfm_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
+	struct sun8i_ce_dev *ce = ctx->ce;
 	struct crypto_engine *engine;
 	int e;

 	if (sun8i_ce_hash_need_fallback(areq))
 		return sun8i_ce_hash_digest_fb(areq);
-
-	algt = container_of(alg, struct sun8i_ce_alg_template, alg.hash.base);
-	ce = algt->ce;

 	e = sun8i_ce_get_engine_number(ce);
 	rctx->flow = e;
···
 	return j;
 }

-int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+static int sun8i_ce_hash_prepare(struct ahash_request *areq, struct ce_task *cet)
 {
-	struct ahash_request *areq = container_of(breq, struct ahash_request, base);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
 	struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
-	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
+	struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq);
 	struct sun8i_ce_alg_template *algt;
 	struct sun8i_ce_dev *ce;
-	struct sun8i_ce_flow *chan;
-	struct ce_task *cet;
 	struct scatterlist *sg;
-	int nr_sgs, flow, err;
+	int nr_sgs, err;
 	unsigned int len;
 	u32 common;
 	u64 byte_count;
 	__le32 *bf;
-	void *buf, *result;
 	int j, i, todo;
 	u64 bs;
 	int digestsize;
-	dma_addr_t addr_res, addr_pad;
-	int ns = sg_nents_for_len(areq->src, areq->nbytes);

 	algt = container_of(alg, struct sun8i_ce_alg_template, alg.hash.base);
 	ce = algt->ce;
···
 	if (digestsize == SHA384_DIGEST_SIZE)
 		digestsize = SHA512_DIGEST_SIZE;

-	/* the padding could be up to two block. */
-	buf = kcalloc(2, bs, GFP_KERNEL | GFP_DMA);
-	if (!buf) {
-		err = -ENOMEM;
-		goto err_out;
-	}
-	bf = (__le32 *)buf;
-
-	result = kzalloc(digestsize, GFP_KERNEL | GFP_DMA);
-	if (!result) {
-		err = -ENOMEM;
-		goto err_free_buf;
-	}
-
-	flow = rctx->flow;
-	chan = &ce->chanlist[flow];
+	bf = (__le32 *)rctx->pad;

 	if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG))
 		algt->stat_req++;

 	dev_dbg(ce->dev, "%s %s len=%d\n", __func__, crypto_tfm_alg_name(areq->base.tfm), areq->nbytes);

-	cet = chan->tl;
 	memset(cet, 0, sizeof(struct ce_task));

-	cet->t_id = cpu_to_le32(flow);
+	cet->t_id = cpu_to_le32(rctx->flow);
 	common = ce->variant->alg_hash[algt->ce_algo_id];
 	common |= CE_COMM_INT;
 	cet->t_common_ctl = cpu_to_le32(common);
···
 	cet->t_sym_ctl = 0;
 	cet->t_asym_ctl = 0;

-	nr_sgs = dma_map_sg(ce->dev, areq->src, ns, DMA_TO_DEVICE);
+	rctx->nr_sgs = sg_nents_for_len(areq->src, areq->nbytes);
+	nr_sgs = dma_map_sg(ce->dev, areq->src, rctx->nr_sgs, DMA_TO_DEVICE);
 	if (nr_sgs <= 0 || nr_sgs > MAX_SG) {
 		dev_err(ce->dev, "Invalid sg number %d\n", nr_sgs);
 		err = -EINVAL;
-		goto err_free_result;
+		goto err_out;
 	}

 	len = areq->nbytes;
···
 		err = -EINVAL;
 		goto err_unmap_src;
 	}
-	addr_res = dma_map_single(ce->dev, result, digestsize, DMA_FROM_DEVICE);
-	cet->t_dst[0].addr = desc_addr_val_le32(ce, addr_res);
-	cet->t_dst[0].len = cpu_to_le32(digestsize / 4);
-	if (dma_mapping_error(ce->dev, addr_res)) {
+
+	rctx->result_len = digestsize;
+	rctx->addr_res = dma_map_single(ce->dev, rctx->result, rctx->result_len,
+					DMA_FROM_DEVICE);
+	cet->t_dst[0].addr = desc_addr_val_le32(ce, rctx->addr_res);
+	cet->t_dst[0].len = cpu_to_le32(rctx->result_len / 4);
+	if
(dma_mapping_error(ce->dev, rctx->addr_res)) { 384 411 dev_err(ce->dev, "DMA map dest\n"); 385 412 err = -EINVAL; 386 413 goto err_unmap_src; ··· 411 432 goto err_unmap_result; 412 433 } 413 434 414 - addr_pad = dma_map_single(ce->dev, buf, j * 4, DMA_TO_DEVICE); 415 - cet->t_src[i].addr = desc_addr_val_le32(ce, addr_pad); 435 + rctx->pad_len = j * 4; 436 + rctx->addr_pad = dma_map_single(ce->dev, rctx->pad, rctx->pad_len, 437 + DMA_TO_DEVICE); 438 + cet->t_src[i].addr = desc_addr_val_le32(ce, rctx->addr_pad); 416 439 cet->t_src[i].len = cpu_to_le32(j); 417 - if (dma_mapping_error(ce->dev, addr_pad)) { 440 + if (dma_mapping_error(ce->dev, rctx->addr_pad)) { 418 441 dev_err(ce->dev, "DMA error on padding SG\n"); 419 442 err = -EINVAL; 420 443 goto err_unmap_result; ··· 427 446 else 428 447 cet->t_dlen = cpu_to_le32(areq->nbytes / 4 + j); 429 448 430 - chan->timeout = areq->nbytes; 431 - 432 - err = sun8i_ce_run_task(ce, flow, crypto_ahash_alg_name(tfm)); 433 - 434 - dma_unmap_single(ce->dev, addr_pad, j * 4, DMA_TO_DEVICE); 449 + return 0; 435 450 436 451 err_unmap_result: 437 - dma_unmap_single(ce->dev, addr_res, digestsize, DMA_FROM_DEVICE); 438 - if (!err) 439 - memcpy(areq->result, result, crypto_ahash_digestsize(tfm)); 452 + dma_unmap_single(ce->dev, rctx->addr_res, rctx->result_len, 453 + DMA_FROM_DEVICE); 440 454 441 455 err_unmap_src: 442 - dma_unmap_sg(ce->dev, areq->src, ns, DMA_TO_DEVICE); 443 - 444 - err_free_result: 445 - kfree(result); 446 - 447 - err_free_buf: 448 - kfree(buf); 456 + dma_unmap_sg(ce->dev, areq->src, rctx->nr_sgs, DMA_TO_DEVICE); 449 457 450 458 err_out: 459 + return err; 460 + } 461 + 462 + static void sun8i_ce_hash_unprepare(struct ahash_request *areq, 463 + struct ce_task *cet) 464 + { 465 + struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq); 466 + struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); 467 + struct sun8i_ce_hash_tfm_ctx *ctx = crypto_ahash_ctx(tfm); 468 + struct sun8i_ce_dev *ce = ctx->ce; 469 + 470 + 
dma_unmap_single(ce->dev, rctx->addr_pad, rctx->pad_len, DMA_TO_DEVICE); 471 + dma_unmap_single(ce->dev, rctx->addr_res, rctx->result_len, 472 + DMA_FROM_DEVICE); 473 + dma_unmap_sg(ce->dev, areq->src, rctx->nr_sgs, DMA_TO_DEVICE); 474 + } 475 + 476 + int sun8i_ce_hash_run(struct crypto_engine *engine, void *async_req) 477 + { 478 + struct ahash_request *areq = ahash_request_cast(async_req); 479 + struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); 480 + struct sun8i_ce_hash_tfm_ctx *ctx = crypto_ahash_ctx(tfm); 481 + struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx_dma(areq); 482 + struct sun8i_ce_dev *ce = ctx->ce; 483 + struct sun8i_ce_flow *chan; 484 + int err; 485 + 486 + chan = &ce->chanlist[rctx->flow]; 487 + 488 + err = sun8i_ce_hash_prepare(areq, chan->tl); 489 + if (err) 490 + return err; 491 + 492 + err = sun8i_ce_run_task(ce, rctx->flow, crypto_ahash_alg_name(tfm)); 493 + 494 + sun8i_ce_hash_unprepare(areq, chan->tl); 495 + 496 + if (!err) 497 + memcpy(areq->result, rctx->result, 498 + crypto_ahash_digestsize(tfm)); 499 + 451 500 local_bh_disable(); 452 - crypto_finalize_hash_request(engine, breq, err); 501 + crypto_finalize_hash_request(engine, async_req, err); 453 502 local_bh_enable(); 454 503 455 504 return 0;
-1
drivers/crypto/allwinner/sun8i-ce/sun8i-ce-prng.c
··· 137 137 138 138 cet->t_dst[0].addr = desc_addr_val_le32(ce, dma_dst); 139 139 cet->t_dst[0].len = cpu_to_le32(todo / 4); 140 - ce->chanlist[flow].timeout = 2000; 141 140 142 141 err = sun8i_ce_run_task(ce, 3, "PRNG"); 143 142 mutex_unlock(&ce->rnglock);
-1
drivers/crypto/allwinner/sun8i-ce/sun8i-ce-trng.c
··· 79 79 80 80 cet->t_dst[0].addr = desc_addr_val_le32(ce, dma_dst); 81 81 cet->t_dst[0].len = cpu_to_le32(todo / 4); 82 - ce->chanlist[flow].timeout = todo; 83 82 84 83 err = sun8i_ce_run_task(ce, 3, "TRNG"); 85 84 mutex_unlock(&ce->rnglock);
+22 -5
drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h
··· 106 106 #define MAX_SG 8 107 107 108 108 #define CE_MAX_CLOCKS 4 109 + #define CE_DMA_TIMEOUT_MS 3000 109 110 110 111 #define MAXFLOW 4 112 + 113 + #define CE_MAX_HASH_DIGEST_SIZE SHA512_DIGEST_SIZE 114 + #define CE_MAX_HASH_BLOCK_SIZE SHA512_BLOCK_SIZE 111 115 112 116 /* 113 117 * struct ce_clock - Describe clocks used by sun8i-ce ··· 191 187 * @status: set to 1 by interrupt if task is done 192 188 * @t_phy: Physical address of task 193 189 * @tl: pointer to the current ce_task for this flow 194 - * @backup_iv: buffer which contain the next IV to store 195 - * @bounce_iv: buffer which contain the IV 196 190 * @stat_req: number of request done by this flow 197 191 */ 198 192 struct sun8i_ce_flow { ··· 198 196 struct completion complete; 199 197 int status; 200 198 dma_addr_t t_phy; 201 - int timeout; 202 199 struct ce_task *tl; 203 - void *backup_iv; 204 - void *bounce_iv; 205 200 #ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG 206 201 unsigned long stat_req; 207 202 #endif ··· 263 264 * @nr_sgd: The number of destination SG (as given by dma_map_sg()) 264 265 * @addr_iv: The IV addr returned by dma_map_single, need to unmap later 265 266 * @addr_key: The key addr returned by dma_map_single, need to unmap later 267 + * @bounce_iv: Current IV buffer 268 + * @backup_iv: Next IV buffer 266 269 * @fallback_req: request struct for invoking the fallback skcipher TFM 267 270 */ 268 271 struct sun8i_cipher_req_ctx { ··· 274 273 int nr_sgd; 275 274 dma_addr_t addr_iv; 276 275 dma_addr_t addr_key; 276 + u8 bounce_iv[AES_BLOCK_SIZE] __aligned(sizeof(u32)); 277 + u8 backup_iv[AES_BLOCK_SIZE]; 277 278 struct skcipher_request fallback_req; // keep at the end 278 279 }; 279 280 ··· 307 304 * struct sun8i_ce_hash_reqctx - context for an ahash request 308 305 * @fallback_req: pre-allocated fallback request 309 306 * @flow: the flow to use for this request 307 + * @nr_sgs: number of entries in the source scatterlist 308 + * @result_len: result length in bytes 309 + * @pad_len: padding 
length in bytes 310 + * @addr_res: DMA address of the result buffer, returned by dma_map_single() 311 + * @addr_pad: DMA address of the padding buffer, returned by dma_map_single() 312 + * @result: per-request result buffer 313 + * @pad: per-request padding buffer (up to 2 blocks) 310 314 */ 311 315 struct sun8i_ce_hash_reqctx { 312 316 int flow; 317 + int nr_sgs; 318 + size_t result_len; 319 + size_t pad_len; 320 + dma_addr_t addr_res; 321 + dma_addr_t addr_pad; 322 + u8 result[CE_MAX_HASH_DIGEST_SIZE] __aligned(CRYPTO_DMA_ALIGN); 323 + u8 pad[2 * CE_MAX_HASH_BLOCK_SIZE]; 313 324 struct ahash_request fallback_req; // keep at the end 314 325 }; 315 326
+1 -1
drivers/crypto/aspeed/aspeed-hace-crypto.c
··· 346 346 347 347 } else { 348 348 dma_unmap_sg(hace_dev->dev, req->dst, rctx->dst_nents, 349 - DMA_TO_DEVICE); 349 + DMA_FROM_DEVICE); 350 350 dma_unmap_sg(hace_dev->dev, req->src, rctx->src_nents, 351 351 DMA_TO_DEVICE); 352 352 }
+1 -1
drivers/crypto/atmel-tdes.c
··· 512 512 513 513 if (err && (dd->flags & TDES_FLAGS_FAST)) { 514 514 dma_unmap_sg(dd->dev, dd->in_sg, 1, DMA_TO_DEVICE); 515 - dma_unmap_sg(dd->dev, dd->out_sg, 1, DMA_TO_DEVICE); 515 + dma_unmap_sg(dd->dev, dd->out_sg, 1, DMA_FROM_DEVICE); 516 516 } 517 517 518 518 return err;
+5 -5
drivers/crypto/caam/ctrl.c
··· 592 592 int ret; 593 593 594 594 ctrlpriv->num_clks = data->num_clks; 595 - ctrlpriv->clks = devm_kmemdup(dev, data->clks, 596 - data->num_clks * sizeof(data->clks[0]), 597 - GFP_KERNEL); 595 + ctrlpriv->clks = devm_kmemdup_array(dev, data->clks, 596 + data->num_clks, sizeof(*data->clks), 597 + GFP_KERNEL); 598 598 if (!ctrlpriv->clks) 599 599 return -ENOMEM; 600 600 ··· 703 703 */ 704 704 if (needs_entropy_delay_adjustment()) 705 705 ent_delay = 12000; 706 - if (!(ctrlpriv->rng4_sh_init || inst_handles)) { 706 + if (!inst_handles) { 707 707 dev_info(dev, 708 708 "Entropy delay = %u\n", 709 709 ent_delay); 710 710 kick_trng(dev, ent_delay); 711 - ent_delay += 400; 711 + ent_delay = ent_delay * 2; 712 712 } 713 713 /* 714 714 * if instantiate_rng(...) fails, the loop will rerun
+4 -4
drivers/crypto/ccp/hsti.c
··· 74 74 .is_visible = psp_security_is_visible, 75 75 }; 76 76 77 - static int psp_poulate_hsti(struct psp_device *psp) 77 + static int psp_populate_hsti(struct psp_device *psp) 78 78 { 79 79 struct hsti_request *req; 80 80 int ret; ··· 84 84 return 0; 85 85 86 86 /* Allocate command-response buffer */ 87 - req = kzalloc(sizeof(*req), GFP_KERNEL | __GFP_ZERO); 87 + req = kzalloc(sizeof(*req), GFP_KERNEL); 88 88 if (!req) 89 89 return -ENOMEM; 90 90 91 - req->header.payload_size = sizeof(req); 91 + req->header.payload_size = sizeof(*req); 92 92 93 93 ret = psp_send_platform_access_msg(PSP_CMD_HSTI_QUERY, (struct psp_request *)req); 94 94 if (ret) ··· 114 114 int ret; 115 115 116 116 if (PSP_FEATURE(psp, HSTI)) { 117 - ret = psp_poulate_hsti(psp); 117 + ret = psp_populate_hsti(psp); 118 118 if (ret) 119 119 return ret; 120 120 }
+117 -14
drivers/crypto/ccp/sev-dev.c
··· 249 249 case SEV_CMD_SNP_GUEST_REQUEST: return sizeof(struct sev_data_snp_guest_request); 250 250 case SEV_CMD_SNP_CONFIG: return sizeof(struct sev_user_data_snp_config); 251 251 case SEV_CMD_SNP_COMMIT: return sizeof(struct sev_data_snp_commit); 252 + case SEV_CMD_SNP_FEATURE_INFO: return sizeof(struct sev_data_snp_feature_info); 253 + case SEV_CMD_SNP_VLEK_LOAD: return sizeof(struct sev_user_data_snp_vlek_load); 252 254 default: return 0; 253 255 } 254 256 ··· 864 862 struct sev_device *sev; 865 863 unsigned int cmdbuff_hi, cmdbuff_lo; 866 864 unsigned int phys_lsb, phys_msb; 867 - unsigned int reg, ret = 0; 865 + unsigned int reg; 868 866 void *cmd_buf; 869 867 int buf_len; 868 + int ret = 0; 870 869 871 870 if (!psp || !psp->sev_data) 872 871 return -ENODEV; ··· 1251 1248 1 << entry->order, false); 1252 1249 } 1253 1250 1251 + bool sev_is_snp_ciphertext_hiding_supported(void) 1252 + { 1253 + struct psp_device *psp = psp_master; 1254 + struct sev_device *sev; 1255 + 1256 + if (!psp || !psp->sev_data) 1257 + return false; 1258 + 1259 + sev = psp->sev_data; 1260 + 1261 + /* 1262 + * Feature information indicates if CipherTextHiding feature is 1263 + * supported by the SEV firmware and additionally platform status 1264 + * indicates if CipherTextHiding feature is enabled in the 1265 + * Platform BIOS. 1266 + */ 1267 + return ((sev->snp_feat_info_0.ecx & SNP_CIPHER_TEXT_HIDING_SUPPORTED) && 1268 + sev->snp_plat_status.ciphertext_hiding_cap); 1269 + } 1270 + EXPORT_SYMBOL_GPL(sev_is_snp_ciphertext_hiding_supported); 1271 + 1272 + static int snp_get_platform_data(struct sev_device *sev, int *error) 1273 + { 1274 + struct sev_data_snp_feature_info snp_feat_info; 1275 + struct snp_feature_info *feat_info; 1276 + struct sev_data_snp_addr buf; 1277 + struct page *page; 1278 + int rc; 1279 + 1280 + /* 1281 + * This function is expected to be called before SNP is 1282 + * initialized. 
1283 + */ 1284 + if (sev->snp_initialized) 1285 + return -EINVAL; 1286 + 1287 + buf.address = __psp_pa(&sev->snp_plat_status); 1288 + rc = sev_do_cmd(SEV_CMD_SNP_PLATFORM_STATUS, &buf, error); 1289 + if (rc) { 1290 + dev_err(sev->dev, "SNP PLATFORM_STATUS command failed, ret = %d, error = %#x\n", 1291 + rc, *error); 1292 + return rc; 1293 + } 1294 + 1295 + sev->api_major = sev->snp_plat_status.api_major; 1296 + sev->api_minor = sev->snp_plat_status.api_minor; 1297 + sev->build = sev->snp_plat_status.build_id; 1298 + 1299 + /* 1300 + * Do feature discovery of the currently loaded firmware, 1301 + * and cache feature information from CPUID 0x8000_0024, 1302 + * sub-function 0. 1303 + */ 1304 + if (!sev->snp_plat_status.feature_info) 1305 + return 0; 1306 + 1307 + /* 1308 + * Use dynamically allocated structure for the SNP_FEATURE_INFO 1309 + * command to ensure structure is 8-byte aligned, and does not 1310 + * cross a page boundary. 1311 + */ 1312 + page = alloc_page(GFP_KERNEL); 1313 + if (!page) 1314 + return -ENOMEM; 1315 + 1316 + feat_info = page_address(page); 1317 + snp_feat_info.length = sizeof(snp_feat_info); 1318 + snp_feat_info.ecx_in = 0; 1319 + snp_feat_info.feature_info_paddr = __psp_pa(feat_info); 1320 + 1321 + rc = sev_do_cmd(SEV_CMD_SNP_FEATURE_INFO, &snp_feat_info, error); 1322 + if (!rc) 1323 + sev->snp_feat_info_0 = *feat_info; 1324 + else 1325 + dev_err(sev->dev, "SNP FEATURE_INFO command failed, ret = %d, error = %#x\n", 1326 + rc, *error); 1327 + 1328 + __free_page(page); 1329 + 1330 + return rc; 1331 + } 1332 + 1254 1333 static int snp_filter_reserved_mem_regions(struct resource *rs, void *arg) 1255 1334 { 1256 1335 struct sev_data_range_list *range_list = arg; ··· 1363 1278 return 0; 1364 1279 } 1365 1280 1366 - static int __sev_snp_init_locked(int *error) 1281 + static int __sev_snp_init_locked(int *error, unsigned int max_snp_asid) 1367 1282 { 1368 1283 struct psp_device *psp = psp_master; 1369 1284 struct sev_data_snp_init_ex data; ··· 
1430 1345 snp_add_hv_fixed_pages(sev, snp_range_list); 1431 1346 1432 1347 memset(&data, 0, sizeof(data)); 1348 + 1349 + if (max_snp_asid) { 1350 + data.ciphertext_hiding_en = 1; 1351 + data.max_snp_asid = max_snp_asid; 1352 + } 1353 + 1433 1354 data.init_rmp = 1; 1434 1355 data.list_paddr_en = 1; 1435 1356 data.list_paddr = __psp_pa(snp_range_list); ··· 1559 1468 1560 1469 sev = psp_master->sev_data; 1561 1470 1562 - if (sev->state == SEV_STATE_INIT) 1471 + if (sev->sev_plat_status.state == SEV_STATE_INIT) 1563 1472 return 0; 1564 1473 1565 1474 __sev_platform_init_handle_tmr(sev); ··· 1591 1500 return rc; 1592 1501 } 1593 1502 1594 - sev->state = SEV_STATE_INIT; 1503 + sev->sev_plat_status.state = SEV_STATE_INIT; 1595 1504 1596 1505 /* Prepare for first SEV guest launch after INIT */ 1597 1506 wbinvd_on_all_cpus(); ··· 1629 1538 1630 1539 sev = psp_master->sev_data; 1631 1540 1632 - if (sev->state == SEV_STATE_INIT) 1541 + if (sev->sev_plat_status.state == SEV_STATE_INIT) 1633 1542 return 0; 1634 1543 1635 - rc = __sev_snp_init_locked(&args->error); 1544 + rc = __sev_snp_init_locked(&args->error, args->max_snp_asid); 1636 1545 if (rc && rc != -ENODEV) 1637 1546 return rc; 1638 1547 ··· 1666 1575 1667 1576 sev = psp->sev_data; 1668 1577 1669 - if (sev->state == SEV_STATE_UNINIT) 1578 + if (sev->sev_plat_status.state == SEV_STATE_UNINIT) 1670 1579 return 0; 1671 1580 1672 1581 ret = __sev_do_cmd_locked(SEV_CMD_SHUTDOWN, NULL, error); ··· 1676 1585 return ret; 1677 1586 } 1678 1587 1679 - sev->state = SEV_STATE_UNINIT; 1588 + sev->sev_plat_status.state = SEV_STATE_UNINIT; 1680 1589 dev_dbg(sev->dev, "SEV firmware shutdown\n"); 1681 1590 1682 1591 return ret; ··· 1715 1624 { 1716 1625 int error, rc; 1717 1626 1718 - rc = __sev_snp_init_locked(&error); 1627 + rc = __sev_snp_init_locked(&error, 0); 1719 1628 if (rc) { 1720 1629 argp->error = SEV_RET_INVALID_PLATFORM_STATE; 1721 1630 return rc; ··· 1784 1693 if (!writable) 1785 1694 return -EPERM; 1786 1695 1787 - if 
(sev->state == SEV_STATE_UNINIT) { 1696 + if (sev->sev_plat_status.state == SEV_STATE_UNINIT) { 1788 1697 rc = sev_move_to_init_state(argp, &shutdown_required); 1789 1698 if (rc) 1790 1699 return rc; ··· 1833 1742 data.len = input.length; 1834 1743 1835 1744 cmd: 1836 - if (sev->state == SEV_STATE_UNINIT) { 1745 + if (sev->sev_plat_status.state == SEV_STATE_UNINIT) { 1837 1746 ret = sev_move_to_init_state(argp, &shutdown_required); 1838 1747 if (ret) 1839 1748 goto e_free_blob; ··· 1881 1790 struct sev_user_data_status status; 1882 1791 int error = 0, ret; 1883 1792 1793 + /* 1794 + * Cache SNP platform status and SNP feature information 1795 + * if SNP is available. 1796 + */ 1797 + if (cc_platform_has(CC_ATTR_HOST_SEV_SNP)) { 1798 + ret = snp_get_platform_data(sev, &error); 1799 + if (ret) 1800 + return 1; 1801 + } 1802 + 1884 1803 ret = sev_platform_status(&status, &error); 1885 1804 if (ret) { 1886 1805 dev_err(sev->dev, ··· 1898 1797 return 1; 1899 1798 } 1900 1799 1800 + /* Cache SEV platform status */ 1801 + sev->sev_plat_status = status; 1802 + 1901 1803 sev->api_major = status.api_major; 1902 1804 sev->api_minor = status.api_minor; 1903 1805 sev->build = status.build; 1904 - sev->state = status.state; 1905 1806 1906 1807 return 0; 1907 1808 } ··· 2132 2029 data.oca_cert_len = input.oca_cert_len; 2133 2030 2134 2031 /* If platform is not in INIT state then transition it to INIT */ 2135 - if (sev->state != SEV_STATE_INIT) { 2032 + if (sev->sev_plat_status.state != SEV_STATE_INIT) { 2136 2033 ret = sev_move_to_init_state(argp, &shutdown_required); 2137 2034 if (ret) 2138 2035 goto e_free_oca; ··· 2303 2200 2304 2201 cmd: 2305 2202 /* If platform is not in INIT state then transition it to INIT. */ 2306 - if (sev->state != SEV_STATE_INIT) { 2203 + if (sev->sev_plat_status.state != SEV_STATE_INIT) { 2307 2204 if (!writable) { 2308 2205 ret = -EPERM; 2309 2206 goto e_free_cert;
+5 -1
drivers/crypto/ccp/sev-dev.h
··· 42 42 43 43 struct sev_vdata *vdata; 44 44 45 - int state; 46 45 unsigned int int_rcvd; 47 46 wait_queue_head_t int_queue; 48 47 struct sev_misc_dev *misc; ··· 56 57 bool cmd_buf_backup_active; 57 58 58 59 bool snp_initialized; 60 + 61 + struct sev_user_data_status sev_plat_status; 62 + 63 + struct sev_user_data_snp_status snp_plat_status; 64 + struct snp_feature_info snp_feat_info_0; 59 65 }; 60 66 61 67 int sev_dev_init(struct psp_device *psp);
+3 -3
drivers/crypto/chelsio/Kconfig
··· 4 4 depends on CHELSIO_T4 5 5 select CRYPTO_LIB_AES 6 6 select CRYPTO_LIB_GF128MUL 7 - select CRYPTO_SHA1 8 - select CRYPTO_SHA256 9 - select CRYPTO_SHA512 7 + select CRYPTO_LIB_SHA1 8 + select CRYPTO_LIB_SHA256 9 + select CRYPTO_LIB_SHA512 10 10 select CRYPTO_AUTHENC 11 11 help 12 12 The Chelsio Crypto Co-processor driver for T6 adapters.
+58 -197
drivers/crypto/chelsio/chcr_algo.c
··· 51 51 52 52 #include <crypto/aes.h> 53 53 #include <crypto/algapi.h> 54 - #include <crypto/hash.h> 55 54 #include <crypto/gcm.h> 56 55 #include <crypto/sha1.h> 57 56 #include <crypto/sha2.h> ··· 276 277 } 277 278 } 278 279 279 - static struct crypto_shash *chcr_alloc_shash(unsigned int ds) 280 + static int chcr_prepare_hmac_key(const u8 *raw_key, unsigned int raw_key_len, 281 + int digestsize, void *istate, void *ostate) 280 282 { 281 - struct crypto_shash *base_hash = ERR_PTR(-EINVAL); 283 + __be32 *istate32 = istate, *ostate32 = ostate; 284 + __be64 *istate64 = istate, *ostate64 = ostate; 285 + union { 286 + struct hmac_sha1_key sha1; 287 + struct hmac_sha224_key sha224; 288 + struct hmac_sha256_key sha256; 289 + struct hmac_sha384_key sha384; 290 + struct hmac_sha512_key sha512; 291 + } k; 282 292 283 - switch (ds) { 293 + switch (digestsize) { 284 294 case SHA1_DIGEST_SIZE: 285 - base_hash = crypto_alloc_shash("sha1", 0, 0); 295 + hmac_sha1_preparekey(&k.sha1, raw_key, raw_key_len); 296 + for (int i = 0; i < ARRAY_SIZE(k.sha1.istate.h); i++) { 297 + istate32[i] = cpu_to_be32(k.sha1.istate.h[i]); 298 + ostate32[i] = cpu_to_be32(k.sha1.ostate.h[i]); 299 + } 286 300 break; 287 301 case SHA224_DIGEST_SIZE: 288 - base_hash = crypto_alloc_shash("sha224", 0, 0); 302 + hmac_sha224_preparekey(&k.sha224, raw_key, raw_key_len); 303 + for (int i = 0; i < ARRAY_SIZE(k.sha224.key.istate.h); i++) { 304 + istate32[i] = cpu_to_be32(k.sha224.key.istate.h[i]); 305 + ostate32[i] = cpu_to_be32(k.sha224.key.ostate.h[i]); 306 + } 289 307 break; 290 308 case SHA256_DIGEST_SIZE: 291 - base_hash = crypto_alloc_shash("sha256", 0, 0); 309 + hmac_sha256_preparekey(&k.sha256, raw_key, raw_key_len); 310 + for (int i = 0; i < ARRAY_SIZE(k.sha256.key.istate.h); i++) { 311 + istate32[i] = cpu_to_be32(k.sha256.key.istate.h[i]); 312 + ostate32[i] = cpu_to_be32(k.sha256.key.ostate.h[i]); 313 + } 292 314 break; 293 315 case SHA384_DIGEST_SIZE: 294 - base_hash = crypto_alloc_shash("sha384", 0, 
0); 316 + hmac_sha384_preparekey(&k.sha384, raw_key, raw_key_len); 317 + for (int i = 0; i < ARRAY_SIZE(k.sha384.key.istate.h); i++) { 318 + istate64[i] = cpu_to_be64(k.sha384.key.istate.h[i]); 319 + ostate64[i] = cpu_to_be64(k.sha384.key.ostate.h[i]); 320 + } 295 321 break; 296 322 case SHA512_DIGEST_SIZE: 297 - base_hash = crypto_alloc_shash("sha512", 0, 0); 323 + hmac_sha512_preparekey(&k.sha512, raw_key, raw_key_len); 324 + for (int i = 0; i < ARRAY_SIZE(k.sha512.key.istate.h); i++) { 325 + istate64[i] = cpu_to_be64(k.sha512.key.istate.h[i]); 326 + ostate64[i] = cpu_to_be64(k.sha512.key.ostate.h[i]); 327 + } 298 328 break; 329 + default: 330 + return -EINVAL; 299 331 } 300 - 301 - return base_hash; 302 - } 303 - 304 - static int chcr_compute_partial_hash(struct shash_desc *desc, 305 - char *iopad, char *result_hash, 306 - int digest_size) 307 - { 308 - struct sha1_state sha1_st; 309 - struct sha256_state sha256_st; 310 - struct sha512_state sha512_st; 311 - int error; 312 - 313 - if (digest_size == SHA1_DIGEST_SIZE) { 314 - error = crypto_shash_init(desc) ?: 315 - crypto_shash_update(desc, iopad, SHA1_BLOCK_SIZE) ?: 316 - crypto_shash_export_core(desc, &sha1_st); 317 - memcpy(result_hash, sha1_st.state, SHA1_DIGEST_SIZE); 318 - } else if (digest_size == SHA224_DIGEST_SIZE) { 319 - error = crypto_shash_init(desc) ?: 320 - crypto_shash_update(desc, iopad, SHA256_BLOCK_SIZE) ?: 321 - crypto_shash_export_core(desc, &sha256_st); 322 - memcpy(result_hash, sha256_st.state, SHA256_DIGEST_SIZE); 323 - 324 - } else if (digest_size == SHA256_DIGEST_SIZE) { 325 - error = crypto_shash_init(desc) ?: 326 - crypto_shash_update(desc, iopad, SHA256_BLOCK_SIZE) ?: 327 - crypto_shash_export_core(desc, &sha256_st); 328 - memcpy(result_hash, sha256_st.state, SHA256_DIGEST_SIZE); 329 - 330 - } else if (digest_size == SHA384_DIGEST_SIZE) { 331 - error = crypto_shash_init(desc) ?: 332 - crypto_shash_update(desc, iopad, SHA512_BLOCK_SIZE) ?: 333 - crypto_shash_export_core(desc, 
&sha512_st); 334 - memcpy(result_hash, sha512_st.state, SHA512_DIGEST_SIZE); 335 - 336 - } else if (digest_size == SHA512_DIGEST_SIZE) { 337 - error = crypto_shash_init(desc) ?: 338 - crypto_shash_update(desc, iopad, SHA512_BLOCK_SIZE) ?: 339 - crypto_shash_export_core(desc, &sha512_st); 340 - memcpy(result_hash, sha512_st.state, SHA512_DIGEST_SIZE); 341 - } else { 342 - error = -EINVAL; 343 - pr_err("Unknown digest size %d\n", digest_size); 344 - } 345 - return error; 346 - } 347 - 348 - static void chcr_change_order(char *buf, int ds) 349 - { 350 - int i; 351 - 352 - if (ds == SHA512_DIGEST_SIZE) { 353 - for (i = 0; i < (ds / sizeof(u64)); i++) 354 - *((__be64 *)buf + i) = 355 - cpu_to_be64(*((u64 *)buf + i)); 356 - } else { 357 - for (i = 0; i < (ds / sizeof(u32)); i++) 358 - *((__be32 *)buf + i) = 359 - cpu_to_be32(*((u32 *)buf + i)); 360 - } 332 + memzero_explicit(&k, sizeof(k)); 333 + return 0; 361 334 } 362 335 363 336 static inline int is_hmac(struct crypto_tfm *tfm) ··· 1518 1547 return 0; 1519 1548 } 1520 1549 1521 - static inline void chcr_free_shash(struct crypto_shash *base_hash) 1522 - { 1523 - crypto_free_shash(base_hash); 1524 - } 1525 - 1526 1550 /** 1527 1551 * create_hash_wr - Create hash work request 1528 1552 * @req: Cipher req base ··· 2168 2202 unsigned int keylen) 2169 2203 { 2170 2204 struct hmac_ctx *hmacctx = HMAC_CTX(h_ctx(tfm)); 2171 - unsigned int digestsize = crypto_ahash_digestsize(tfm); 2172 - unsigned int bs = crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm)); 2173 - unsigned int i, err = 0, updated_digestsize; 2174 - 2175 - SHASH_DESC_ON_STACK(shash, hmacctx->base_hash); 2176 2205 2177 2206 /* use the key to calculate the ipad and opad. ipad will sent with the 2178 2207 * first request's data. 
opad will be sent with the final hash result 2179 2208 * ipad in hmacctx->ipad and opad in hmacctx->opad location 2180 2209 */ 2181 - shash->tfm = hmacctx->base_hash; 2182 - if (keylen > bs) { 2183 - err = crypto_shash_digest(shash, key, keylen, 2184 - hmacctx->ipad); 2185 - if (err) 2186 - goto out; 2187 - keylen = digestsize; 2188 - } else { 2189 - memcpy(hmacctx->ipad, key, keylen); 2190 - } 2191 - memset(hmacctx->ipad + keylen, 0, bs - keylen); 2192 - unsafe_memcpy(hmacctx->opad, hmacctx->ipad, bs, 2193 - "fortified memcpy causes -Wrestrict warning"); 2194 - 2195 - for (i = 0; i < bs / sizeof(int); i++) { 2196 - *((unsigned int *)(&hmacctx->ipad) + i) ^= IPAD_DATA; 2197 - *((unsigned int *)(&hmacctx->opad) + i) ^= OPAD_DATA; 2198 - } 2199 - 2200 - updated_digestsize = digestsize; 2201 - if (digestsize == SHA224_DIGEST_SIZE) 2202 - updated_digestsize = SHA256_DIGEST_SIZE; 2203 - else if (digestsize == SHA384_DIGEST_SIZE) 2204 - updated_digestsize = SHA512_DIGEST_SIZE; 2205 - err = chcr_compute_partial_hash(shash, hmacctx->ipad, 2206 - hmacctx->ipad, digestsize); 2207 - if (err) 2208 - goto out; 2209 - chcr_change_order(hmacctx->ipad, updated_digestsize); 2210 - 2211 - err = chcr_compute_partial_hash(shash, hmacctx->opad, 2212 - hmacctx->opad, digestsize); 2213 - if (err) 2214 - goto out; 2215 - chcr_change_order(hmacctx->opad, updated_digestsize); 2216 - out: 2217 - return err; 2210 + return chcr_prepare_hmac_key(key, keylen, crypto_ahash_digestsize(tfm), 2211 + hmacctx->ipad, hmacctx->opad); 2218 2212 } 2219 2213 2220 2214 static int chcr_aes_xts_setkey(struct crypto_skcipher *cipher, const u8 *key, ··· 2270 2344 2271 2345 static int chcr_hmac_cra_init(struct crypto_tfm *tfm) 2272 2346 { 2273 - struct chcr_context *ctx = crypto_tfm_ctx(tfm); 2274 - struct hmac_ctx *hmacctx = HMAC_CTX(ctx); 2275 - unsigned int digestsize = 2276 - crypto_ahash_digestsize(__crypto_ahash_cast(tfm)); 2277 - 2278 2347 crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm), 2279 2348 
sizeof(struct chcr_ahash_req_ctx)); 2280 - hmacctx->base_hash = chcr_alloc_shash(digestsize); 2281 - if (IS_ERR(hmacctx->base_hash)) 2282 - return PTR_ERR(hmacctx->base_hash); 2283 2349 return chcr_device_init(crypto_tfm_ctx(tfm)); 2284 - } 2285 - 2286 - static void chcr_hmac_cra_exit(struct crypto_tfm *tfm) 2287 - { 2288 - struct chcr_context *ctx = crypto_tfm_ctx(tfm); 2289 - struct hmac_ctx *hmacctx = HMAC_CTX(ctx); 2290 - 2291 - if (hmacctx->base_hash) { 2292 - chcr_free_shash(hmacctx->base_hash); 2293 - hmacctx->base_hash = NULL; 2294 - } 2295 2350 } 2296 2351 2297 2352 inline void chcr_aead_common_exit(struct aead_request *req) ··· 3464 3557 struct chcr_authenc_ctx *actx = AUTHENC_CTX(aeadctx); 3465 3558 /* it contains auth and cipher key both*/ 3466 3559 struct crypto_authenc_keys keys; 3467 - unsigned int bs, subtype; 3560 + unsigned int subtype; 3468 3561 unsigned int max_authsize = crypto_aead_alg(authenc)->maxauthsize; 3469 - int err = 0, i, key_ctx_len = 0; 3562 + int err = 0, key_ctx_len = 0; 3470 3563 unsigned char ck_size = 0; 3471 - unsigned char pad[CHCR_HASH_MAX_BLOCK_SIZE_128] = { 0 }; 3472 - struct crypto_shash *base_hash = ERR_PTR(-EINVAL); 3473 3564 struct algo_param param; 3474 3565 int align; 3475 - u8 *o_ptr = NULL; 3476 3566 3477 3567 crypto_aead_clear_flags(aeadctx->sw_cipher, CRYPTO_TFM_REQ_MASK); 3478 3568 crypto_aead_set_flags(aeadctx->sw_cipher, crypto_aead_get_flags(authenc) ··· 3517 3613 get_aes_decrypt_key(actx->dec_rrkey, aeadctx->key, 3518 3614 aeadctx->enckey_len << 3); 3519 3615 } 3520 - base_hash = chcr_alloc_shash(max_authsize); 3521 - if (IS_ERR(base_hash)) { 3522 - pr_err("Base driver cannot be loaded\n"); 3616 + 3617 + align = KEYCTX_ALIGN_PAD(max_authsize); 3618 + err = chcr_prepare_hmac_key(keys.authkey, keys.authkeylen, max_authsize, 3619 + actx->h_iopad, 3620 + actx->h_iopad + param.result_size + align); 3621 + if (err) 3523 3622 goto out; 3524 - } 3525 - { 3526 - SHASH_DESC_ON_STACK(shash, base_hash); 3527 3623 3528 - 
shash->tfm = base_hash; 3529 - bs = crypto_shash_blocksize(base_hash); 3530 - align = KEYCTX_ALIGN_PAD(max_authsize); 3531 - o_ptr = actx->h_iopad + param.result_size + align; 3624 + key_ctx_len = sizeof(struct _key_ctx) + roundup(keys.enckeylen, 16) + 3625 + (param.result_size + align) * 2; 3626 + aeadctx->key_ctx_hdr = FILL_KEY_CTX_HDR(ck_size, param.mk_size, 0, 1, 3627 + key_ctx_len >> 4); 3628 + actx->auth_mode = param.auth_mode; 3532 3629 3533 - if (keys.authkeylen > bs) { 3534 - err = crypto_shash_digest(shash, keys.authkey, 3535 - keys.authkeylen, 3536 - o_ptr); 3537 - if (err) { 3538 - pr_err("Base driver cannot be loaded\n"); 3539 - goto out; 3540 - } 3541 - keys.authkeylen = max_authsize; 3542 - } else 3543 - memcpy(o_ptr, keys.authkey, keys.authkeylen); 3630 + memzero_explicit(&keys, sizeof(keys)); 3631 + return 0; 3544 3632 3545 - /* Compute the ipad-digest*/ 3546 - memset(pad + keys.authkeylen, 0, bs - keys.authkeylen); 3547 - memcpy(pad, o_ptr, keys.authkeylen); 3548 - for (i = 0; i < bs >> 2; i++) 3549 - *((unsigned int *)pad + i) ^= IPAD_DATA; 3550 - 3551 - if (chcr_compute_partial_hash(shash, pad, actx->h_iopad, 3552 - max_authsize)) 3553 - goto out; 3554 - /* Compute the opad-digest */ 3555 - memset(pad + keys.authkeylen, 0, bs - keys.authkeylen); 3556 - memcpy(pad, o_ptr, keys.authkeylen); 3557 - for (i = 0; i < bs >> 2; i++) 3558 - *((unsigned int *)pad + i) ^= OPAD_DATA; 3559 - 3560 - if (chcr_compute_partial_hash(shash, pad, o_ptr, max_authsize)) 3561 - goto out; 3562 - 3563 - /* convert the ipad and opad digest to network order */ 3564 - chcr_change_order(actx->h_iopad, param.result_size); 3565 - chcr_change_order(o_ptr, param.result_size); 3566 - key_ctx_len = sizeof(struct _key_ctx) + 3567 - roundup(keys.enckeylen, 16) + 3568 - (param.result_size + align) * 2; 3569 - aeadctx->key_ctx_hdr = FILL_KEY_CTX_HDR(ck_size, param.mk_size, 3570 - 0, 1, key_ctx_len >> 4); 3571 - actx->auth_mode = param.auth_mode; 3572 - chcr_free_shash(base_hash); 
3573 - 3574 - memzero_explicit(&keys, sizeof(keys)); 3575 - return 0; 3576 - } 3577 3633 out: 3578 3634 aeadctx->enckey_len = 0; 3579 3635 memzero_explicit(&keys, sizeof(keys)); 3580 - if (!IS_ERR(base_hash)) 3581 - chcr_free_shash(base_hash); 3582 3636 return -EINVAL; 3583 3637 } 3584 3638 ··· 4352 4490 4353 4491 if (driver_algs[i].type == CRYPTO_ALG_TYPE_HMAC) { 4354 4492 a_hash->halg.base.cra_init = chcr_hmac_cra_init; 4355 - a_hash->halg.base.cra_exit = chcr_hmac_cra_exit; 4356 4493 a_hash->init = chcr_hmac_init; 4357 4494 a_hash->setkey = chcr_ahash_setkey; 4358 4495 a_hash->halg.base.cra_ctxsize = SZ_AHASH_H_CTX;
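The deleted block above was an open-coded HMAC key schedule: a key longer than the hash block size is first digested down, then the key is zero-padded to one block, XORed with the inner and outer pad constants, and each padded block is partially hashed to precompute the ipad/opad state. The new code delegates all of that to the `chcr_prepare_hmac_key()` helper, which is why `base_hash`, `pad` and the shash teardown disappear. A minimal sketch of just the padding step — `hmac_pad_key` is a hypothetical name, and it XORs byte-wise where the removed loop used the 32-bit IPAD_DATA/OPAD_DATA words:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* One byte of the driver's 32-bit pad words
 * IPAD_DATA (0x36363636) and OPAD_DATA (0x5c5c5c5c). */
#define HMAC_IPAD_BYTE 0x36
#define HMAC_OPAD_BYTE 0x5c

/* Zero-pad the (already block-size-limited) key to one hash block and
 * XOR in the pad constant -- the step the removed loop performed one
 * 32-bit word at a time before hashing the result into h_iopad. */
static void hmac_pad_key(const uint8_t *key, size_t keylen,
                         uint8_t *out, size_t block_size, uint8_t pad)
{
    size_t i;

    memset(out, 0, block_size);
    memcpy(out, key, keylen);   /* caller guarantees keylen <= block_size */
    for (i = 0; i < block_size; i++)
        out[i] ^= pad;
}
```

Hashing the ipad block (and the opad block) through one compression round then yields the precomputed HMAC state the hardware key context stores.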
-1
drivers/crypto/chelsio/chcr_crypto.h
··· 241 241 }; 242 242 243 243 struct hmac_ctx { 244 - struct crypto_shash *base_hash; 245 244 u8 ipad[CHCR_HASH_MAX_BLOCK_SIZE_128]; 246 245 u8 opad[CHCR_HASH_MAX_BLOCK_SIZE_128]; 247 246 };
+1
drivers/crypto/hisilicon/debugfs.c
··· 888 888 dfx_regs_uninit(qm, qm->debug.qm_diff_regs, ARRAY_SIZE(qm_diff_regs)); 889 889 ret = PTR_ERR(qm->debug.acc_diff_regs); 890 890 qm->debug.acc_diff_regs = NULL; 891 + qm->debug.qm_diff_regs = NULL; 891 892 return ret; 892 893 } 893 894
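The one-line fix above pairs the existing `acc_diff_regs = NULL` with a matching reset of `qm_diff_regs` after `dfx_regs_uninit()`, so the error path leaves no dangling pointer for a later cleanup pass to free or dereference again. The general shape, sketched with plain `free()` standing in for the driver's uninit helper (struct and function names here are illustrative):

```c
#include <assert.h>
#include <stdlib.h>
#include <stddef.h>

/* Illustrative stand-in for the qm->debug pointers in the hunk above. */
struct diff_regs_dbg {
    void *acc_diff_regs;
    void *qm_diff_regs;
};

/* Free both register snapshots and clear the pointers, so calling the
 * teardown twice (or probing the pointers later) stays safe. */
static void dbg_regs_teardown(struct diff_regs_dbg *dbg)
{
    free(dbg->acc_diff_regs);
    free(dbg->qm_diff_regs);
    dbg->acc_diff_regs = NULL;
    dbg->qm_diff_regs = NULL;
}
```

Because `free(NULL)` is a no-op, the nulled pointers make repeated teardown idempotent — the property the missing `qm_diff_regs = NULL` broke.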
+141 -58
drivers/crypto/hisilicon/hpre/hpre_main.c
··· 39 39 #define HPRE_HAC_RAS_NFE_ENB 0x301414 40 40 #define HPRE_HAC_RAS_FE_ENB 0x301418 41 41 #define HPRE_HAC_INT_SET 0x301500 42 + #define HPRE_AXI_ERROR_MASK GENMASK(21, 10) 42 43 #define HPRE_RNG_TIMEOUT_NUM 0x301A34 43 44 #define HPRE_CORE_INT_ENABLE 0 44 45 #define HPRE_RDCHN_INI_ST 0x301a00 ··· 79 78 #define HPRE_PREFETCH_ENABLE (~(BIT(0) | BIT(30))) 80 79 #define HPRE_PREFETCH_DISABLE BIT(30) 81 80 #define HPRE_SVA_DISABLE_READY (BIT(4) | BIT(8)) 81 + #define HPRE_SVA_PREFTCH_DFX4 0x301144 82 + #define HPRE_WAIT_SVA_READY 500000 83 + #define HPRE_READ_SVA_STATUS_TIMES 3 84 + #define HPRE_WAIT_US_MIN 10 85 + #define HPRE_WAIT_US_MAX 20 82 86 83 87 /* clock gate */ 84 88 #define HPRE_CLKGATE_CTL 0x301a10 ··· 472 466 return NULL; 473 467 } 474 468 469 + static int hpre_wait_sva_ready(struct hisi_qm *qm) 470 + { 471 + u32 val, try_times = 0; 472 + u8 count = 0; 473 + 474 + /* 475 + * Read the register value every 10-20us. If the value is 0 for three 476 + * consecutive times, the SVA module is ready. 
477 + */ 478 + do { 479 + val = readl(qm->io_base + HPRE_SVA_PREFTCH_DFX4); 480 + if (val) 481 + count = 0; 482 + else if (++count == HPRE_READ_SVA_STATUS_TIMES) 483 + break; 484 + 485 + usleep_range(HPRE_WAIT_US_MIN, HPRE_WAIT_US_MAX); 486 + } while (++try_times < HPRE_WAIT_SVA_READY); 487 + 488 + if (try_times == HPRE_WAIT_SVA_READY) { 489 + pci_err(qm->pdev, "failed to wait sva prefetch ready\n"); 490 + return -ETIMEDOUT; 491 + } 492 + 493 + return 0; 494 + } 495 + 475 496 static void hpre_config_pasid(struct hisi_qm *qm) 476 497 { 477 498 u32 val1, val2; ··· 596 563 writel(PEH_AXUSER_CFG_ENABLE, qm->io_base + QM_PEH_AXUSER_CFG_ENABLE); 597 564 } 598 565 599 - static void hpre_open_sva_prefetch(struct hisi_qm *qm) 600 - { 601 - u32 val; 602 - int ret; 603 - 604 - if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps)) 605 - return; 606 - 607 - /* Enable prefetch */ 608 - val = readl_relaxed(qm->io_base + HPRE_PREFETCH_CFG); 609 - val &= HPRE_PREFETCH_ENABLE; 610 - writel(val, qm->io_base + HPRE_PREFETCH_CFG); 611 - 612 - ret = readl_relaxed_poll_timeout(qm->io_base + HPRE_PREFETCH_CFG, 613 - val, !(val & HPRE_PREFETCH_DISABLE), 614 - HPRE_REG_RD_INTVRL_US, 615 - HPRE_REG_RD_TMOUT_US); 616 - if (ret) 617 - pci_err(qm->pdev, "failed to open sva prefetch\n"); 618 - } 619 - 620 566 static void hpre_close_sva_prefetch(struct hisi_qm *qm) 621 567 { 622 568 u32 val; ··· 614 602 HPRE_REG_RD_TMOUT_US); 615 603 if (ret) 616 604 pci_err(qm->pdev, "failed to close sva prefetch\n"); 605 + 606 + (void)hpre_wait_sva_ready(qm); 607 + } 608 + 609 + static void hpre_open_sva_prefetch(struct hisi_qm *qm) 610 + { 611 + u32 val; 612 + int ret; 613 + 614 + if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps)) 615 + return; 616 + 617 + /* Enable prefetch */ 618 + val = readl_relaxed(qm->io_base + HPRE_PREFETCH_CFG); 619 + val &= HPRE_PREFETCH_ENABLE; 620 + writel(val, qm->io_base + HPRE_PREFETCH_CFG); 621 + 622 + ret = readl_relaxed_poll_timeout(qm->io_base + HPRE_PREFETCH_CFG, 623 + val, 
!(val & HPRE_PREFETCH_DISABLE), 624 + HPRE_REG_RD_INTVRL_US, 625 + HPRE_REG_RD_TMOUT_US); 626 + if (ret) { 627 + pci_err(qm->pdev, "failed to open sva prefetch\n"); 628 + hpre_close_sva_prefetch(qm); 629 + return; 630 + } 631 + 632 + ret = hpre_wait_sva_ready(qm); 633 + if (ret) 634 + hpre_close_sva_prefetch(qm); 617 635 } 618 636 619 637 static void hpre_enable_clock_gate(struct hisi_qm *qm) ··· 763 721 764 722 /* Config data buffer pasid needed by Kunpeng 920 */ 765 723 hpre_config_pasid(qm); 724 + hpre_open_sva_prefetch(qm); 766 725 767 726 hpre_enable_clock_gate(qm); 768 727 ··· 799 756 val1 = readl(qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB); 800 757 if (enable) { 801 758 val1 |= HPRE_AM_OOO_SHUTDOWN_ENABLE; 802 - val2 = hisi_qm_get_hw_info(qm, hpre_basic_info, 803 - HPRE_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 759 + val2 = qm->err_info.dev_err.shutdown_mask; 804 760 } else { 805 761 val1 &= ~HPRE_AM_OOO_SHUTDOWN_ENABLE; 806 762 val2 = 0x0; ··· 813 771 814 772 static void hpre_hw_error_disable(struct hisi_qm *qm) 815 773 { 816 - u32 ce, nfe; 817 - 818 - ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CE_MASK_CAP, qm->cap_ver); 819 - nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver); 774 + struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err; 775 + u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe; 820 776 821 777 /* disable hpre hw error interrupts */ 822 - writel(ce | nfe | HPRE_HAC_RAS_FE_ENABLE, qm->io_base + HPRE_INT_MASK); 778 + writel(err_mask, qm->io_base + HPRE_INT_MASK); 823 779 /* disable HPRE block master OOO when nfe occurs on Kunpeng930 */ 824 780 hpre_master_ooo_ctrl(qm, false); 825 781 } 826 782 827 783 static void hpre_hw_error_enable(struct hisi_qm *qm) 828 784 { 829 - u32 ce, nfe, err_en; 830 - 831 - ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CE_MASK_CAP, qm->cap_ver); 832 - nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver); 785 + struct hisi_qm_err_mask *dev_err = 
&qm->err_info.dev_err; 786 + u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe; 833 787 834 788 /* clear HPRE hw error source if having */ 835 - writel(ce | nfe | HPRE_HAC_RAS_FE_ENABLE, qm->io_base + HPRE_HAC_SOURCE_INT); 789 + writel(err_mask, qm->io_base + HPRE_HAC_SOURCE_INT); 836 790 837 791 /* configure error type */ 838 - writel(ce, qm->io_base + HPRE_RAS_CE_ENB); 839 - writel(nfe, qm->io_base + HPRE_RAS_NFE_ENB); 840 - writel(HPRE_HAC_RAS_FE_ENABLE, qm->io_base + HPRE_RAS_FE_ENB); 792 + writel(dev_err->ce, qm->io_base + HPRE_RAS_CE_ENB); 793 + writel(dev_err->nfe, qm->io_base + HPRE_RAS_NFE_ENB); 794 + writel(dev_err->fe, qm->io_base + HPRE_RAS_FE_ENB); 841 795 842 796 /* enable HPRE block master OOO when nfe occurs on Kunpeng930 */ 843 797 hpre_master_ooo_ctrl(qm, true); 844 798 845 799 /* enable hpre hw error interrupts */ 846 - err_en = ce | nfe | HPRE_HAC_RAS_FE_ENABLE; 847 - writel(~err_en, qm->io_base + HPRE_INT_MASK); 800 + writel(~err_mask, qm->io_base + HPRE_INT_MASK); 848 801 } 849 802 850 803 static inline struct hisi_qm *hpre_file_to_qm(struct hpre_debugfs_file *file) ··· 1208 1171 size_t i, size; 1209 1172 1210 1173 size = ARRAY_SIZE(hpre_cap_query_info); 1211 - hpre_cap = devm_kzalloc(dev, sizeof(*hpre_cap) * size, GFP_KERNEL); 1174 + hpre_cap = devm_kcalloc(dev, size, sizeof(*hpre_cap), GFP_KERNEL); 1212 1175 if (!hpre_cap) 1213 1176 return -ENOMEM; 1214 1177 ··· 1394 1357 1395 1358 static void hpre_disable_error_report(struct hisi_qm *qm, u32 err_type) 1396 1359 { 1397 - u32 nfe_mask; 1360 + u32 nfe_mask = qm->err_info.dev_err.nfe; 1398 1361 1399 - nfe_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver); 1400 1362 writel(nfe_mask & (~err_type), qm->io_base + HPRE_RAS_NFE_ENB); 1363 + } 1364 + 1365 + static void hpre_enable_error_report(struct hisi_qm *qm) 1366 + { 1367 + u32 nfe_mask = qm->err_info.dev_err.nfe; 1368 + u32 ce_mask = qm->err_info.dev_err.ce; 1369 + 1370 + writel(nfe_mask, qm->io_base + 
HPRE_RAS_NFE_ENB); 1371 + writel(ce_mask, qm->io_base + HPRE_RAS_CE_ENB); 1401 1372 } 1402 1373 1403 1374 static void hpre_open_axi_master_ooo(struct hisi_qm *qm) ··· 1425 1380 1426 1381 err_status = hpre_get_hw_err_status(qm); 1427 1382 if (err_status) { 1428 - if (err_status & qm->err_info.ecc_2bits_mask) 1383 + if (err_status & qm->err_info.dev_err.ecc_2bits_mask) 1429 1384 qm->err_status.is_dev_ecc_mbit = true; 1430 1385 hpre_log_hw_error(qm, err_status); 1431 1386 1432 - if (err_status & qm->err_info.dev_reset_mask) { 1387 + if (err_status & qm->err_info.dev_err.reset_mask) { 1433 1388 /* Disable the same error reporting until device is recovered. */ 1434 1389 hpre_disable_error_report(qm, err_status); 1435 1390 return ACC_ERR_NEED_RESET; 1436 1391 } 1437 1392 hpre_clear_hw_err_status(qm, err_status); 1393 + /* Avoid firmware disable error report, re-enable. */ 1394 + hpre_enable_error_report(qm); 1438 1395 } 1439 1396 1440 1397 return ACC_ERR_RECOVERED; ··· 1447 1400 u32 err_status; 1448 1401 1449 1402 err_status = hpre_get_hw_err_status(qm); 1450 - if (err_status & qm->err_info.dev_shutdown_mask) 1403 + if (err_status & qm->err_info.dev_err.shutdown_mask) 1451 1404 return true; 1452 1405 1453 1406 return false; 1454 1407 } 1455 1408 1409 + static void hpre_disable_axi_error(struct hisi_qm *qm) 1410 + { 1411 + struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err; 1412 + u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe; 1413 + u32 val; 1414 + 1415 + val = ~(err_mask & (~HPRE_AXI_ERROR_MASK)); 1416 + writel(val, qm->io_base + HPRE_INT_MASK); 1417 + 1418 + if (qm->ver > QM_HW_V2) 1419 + writel(dev_err->shutdown_mask & (~HPRE_AXI_ERROR_MASK), 1420 + qm->io_base + HPRE_OOO_SHUTDOWN_SEL); 1421 + } 1422 + 1423 + static void hpre_enable_axi_error(struct hisi_qm *qm) 1424 + { 1425 + struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err; 1426 + u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe; 1427 + 1428 + /* clear axi error source */ 1429 + 
writel(HPRE_AXI_ERROR_MASK, qm->io_base + HPRE_HAC_SOURCE_INT); 1430 + 1431 + writel(~err_mask, qm->io_base + HPRE_INT_MASK); 1432 + 1433 + if (qm->ver > QM_HW_V2) 1434 + writel(dev_err->shutdown_mask, qm->io_base + HPRE_OOO_SHUTDOWN_SEL); 1435 + } 1436 + 1456 1437 static void hpre_err_info_init(struct hisi_qm *qm) 1457 1438 { 1458 1439 struct hisi_qm_err_info *err_info = &qm->err_info; 1440 + struct hisi_qm_err_mask *qm_err = &err_info->qm_err; 1441 + struct hisi_qm_err_mask *dev_err = &err_info->dev_err; 1459 1442 1460 - err_info->fe = HPRE_HAC_RAS_FE_ENABLE; 1461 - err_info->ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_QM_CE_MASK_CAP, qm->cap_ver); 1462 - err_info->nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_QM_NFE_MASK_CAP, qm->cap_ver); 1463 - err_info->ecc_2bits_mask = HPRE_CORE_ECC_2BIT_ERR | HPRE_OOO_ECC_2BIT_ERR; 1464 - err_info->dev_shutdown_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, 1465 - HPRE_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 1466 - err_info->qm_shutdown_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, 1467 - HPRE_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 1468 - err_info->qm_reset_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, 1469 - HPRE_QM_RESET_MASK_CAP, qm->cap_ver); 1470 - err_info->dev_reset_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, 1471 - HPRE_RESET_MASK_CAP, qm->cap_ver); 1443 + qm_err->fe = HPRE_HAC_RAS_FE_ENABLE; 1444 + qm_err->ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_QM_CE_MASK_CAP, qm->cap_ver); 1445 + qm_err->nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_QM_NFE_MASK_CAP, qm->cap_ver); 1446 + qm_err->shutdown_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, 1447 + HPRE_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 1448 + qm_err->reset_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, 1449 + HPRE_QM_RESET_MASK_CAP, qm->cap_ver); 1450 + qm_err->ecc_2bits_mask = QM_ECC_MBIT; 1451 + 1452 + dev_err->fe = HPRE_HAC_RAS_FE_ENABLE; 1453 + dev_err->ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CE_MASK_CAP, 
qm->cap_ver); 1454 + dev_err->nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver); 1455 + dev_err->shutdown_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, 1456 + HPRE_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 1457 + dev_err->reset_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, 1458 + HPRE_RESET_MASK_CAP, qm->cap_ver); 1459 + dev_err->ecc_2bits_mask = HPRE_CORE_ECC_2BIT_ERR | HPRE_OOO_ECC_2BIT_ERR; 1460 + 1472 1461 err_info->msi_wr_port = HPRE_WR_MSI_PORT; 1473 1462 err_info->acpi_rst = "HRST"; 1474 1463 } ··· 1522 1439 .err_info_init = hpre_err_info_init, 1523 1440 .get_err_result = hpre_get_err_result, 1524 1441 .dev_is_abnormal = hpre_dev_is_abnormal, 1442 + .disable_axi_error = hpre_disable_axi_error, 1443 + .enable_axi_error = hpre_enable_axi_error, 1525 1444 }; 1526 1445 1527 1446 static int hpre_pf_probe_init(struct hpre *hpre) ··· 1534 1449 ret = hpre_set_user_domain_and_cache(qm); 1535 1450 if (ret) 1536 1451 return ret; 1537 - 1538 - hpre_open_sva_prefetch(qm); 1539 1452 1540 1453 hisi_qm_dev_err_init(qm); 1541 1454 ret = hpre_show_last_regs_init(qm);
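`hpre_wait_sva_ready()` above is a debounced poll: a single zero sample is not enough, the DFX register has to read zero for three consecutive samples before the SVA prefetch logic counts as quiescent, with an overall retry bound. The control flow, abstracted over a hypothetical `read_status` callback in place of the MMIO read:

```c
#include <assert.h>
#include <stdint.h>

/* Debounced ready-poll: status must read zero for `need` consecutive
 * samples (a single zero could be a transient gap between requests);
 * give up after `max_tries` samples in total. */
static int wait_quiesced(uint32_t (*read_status)(void *), void *ctx,
                         int need, int max_tries)
{
    int count = 0, tries = 0;

    do {
        if (read_status(ctx))
            count = 0;          /* still busy: restart the streak */
        else if (++count == need)
            return 0;           /* stable zero: module is idle */
    } while (++tries < max_tries);

    return -1;                  /* timed out, like -ETIMEDOUT above */
}

/* Toy status source for demonstration: busy for the first two reads. */
static uint32_t fake_status(void *ctx)
{
    int *reads = ctx;
    return (*reads)++ < 2 ? 1u : 0u;
}
```

In the driver the same loop additionally sleeps 10–20µs between samples via `usleep_range()`, omitted here.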
+157 -61
drivers/crypto/hisilicon/qm.c
··· 45 45 46 46 #define QM_SQ_TYPE_MASK GENMASK(3, 0) 47 47 #define QM_SQ_TAIL_IDX(sqc) ((le16_to_cpu((sqc).w11) >> 6) & 0x1) 48 + #define QM_SQC_DISABLE_QP (1U << 6) 49 + #define QM_XQC_RANDOM_DATA 0xaaaa 48 50 49 51 /* cqc shift */ 50 52 #define QM_CQ_HOP_NUM_SHIFT 0 ··· 147 145 #define QM_RAS_CE_TIMES_PER_IRQ 1 148 146 #define QM_OOO_SHUTDOWN_SEL 0x1040f8 149 147 #define QM_AXI_RRESP_ERR BIT(0) 150 - #define QM_ECC_MBIT BIT(2) 151 148 #define QM_DB_TIMEOUT BIT(10) 152 149 #define QM_OF_FIFO_OF BIT(11) 150 + #define QM_RAS_AXI_ERROR (BIT(0) | BIT(1) | BIT(12)) 153 151 154 152 #define QM_RESET_WAIT_TIMEOUT 400 155 153 #define QM_PEH_VENDOR_ID 0x1000d8 ··· 165 163 #define ACC_MASTER_TRANS_RETURN 0x300150 166 164 #define ACC_MASTER_GLOBAL_CTRL 0x300000 167 165 #define ACC_AM_CFG_PORT_WR_EN 0x30001c 168 - #define QM_RAS_NFE_MBIT_DISABLE ~QM_ECC_MBIT 169 166 #define ACC_AM_ROB_ECC_INT_STS 0x300104 170 167 #define ACC_ROB_ECC_ERR_MULTPL BIT(1) 171 168 #define QM_MSI_CAP_ENABLE BIT(16) ··· 521 520 return false; 522 521 523 522 err_status = qm_get_hw_error_status(pf_qm); 524 - if (err_status & pf_qm->err_info.qm_shutdown_mask) 523 + if (err_status & pf_qm->err_info.qm_err.shutdown_mask) 525 524 return true; 526 525 527 526 if (pf_qm->err_ini->dev_is_abnormal) ··· 1396 1395 1397 1396 static void qm_hw_error_cfg(struct hisi_qm *qm) 1398 1397 { 1399 - struct hisi_qm_err_info *err_info = &qm->err_info; 1398 + struct hisi_qm_err_mask *qm_err = &qm->err_info.qm_err; 1400 1399 1401 - qm->error_mask = err_info->nfe | err_info->ce | err_info->fe; 1400 + qm->error_mask = qm_err->nfe | qm_err->ce | qm_err->fe; 1402 1401 /* clear QM hw residual error source */ 1403 1402 writel(qm->error_mask, qm->io_base + QM_ABNORMAL_INT_SOURCE); 1404 1403 1405 1404 /* configure error type */ 1406 - writel(err_info->ce, qm->io_base + QM_RAS_CE_ENABLE); 1405 + writel(qm_err->ce, qm->io_base + QM_RAS_CE_ENABLE); 1407 1406 writel(QM_RAS_CE_TIMES_PER_IRQ, qm->io_base + QM_RAS_CE_THRESHOLD); 1408 - 
writel(err_info->nfe, qm->io_base + QM_RAS_NFE_ENABLE); 1409 - writel(err_info->fe, qm->io_base + QM_RAS_FE_ENABLE); 1407 + writel(qm_err->nfe, qm->io_base + QM_RAS_NFE_ENABLE); 1408 + writel(qm_err->fe, qm->io_base + QM_RAS_FE_ENABLE); 1410 1409 } 1411 1410 1412 1411 static void qm_hw_error_init_v2(struct hisi_qm *qm) ··· 1435 1434 qm_hw_error_cfg(qm); 1436 1435 1437 1436 /* enable close master ooo when hardware error happened */ 1438 - writel(qm->err_info.qm_shutdown_mask, qm->io_base + QM_OOO_SHUTDOWN_SEL); 1437 + writel(qm->err_info.qm_err.shutdown_mask, qm->io_base + QM_OOO_SHUTDOWN_SEL); 1439 1438 1440 1439 irq_unmask = ~qm->error_mask; 1441 1440 irq_unmask &= readl(qm->io_base + QM_ABNORMAL_INT_MASK); ··· 1497 1496 1498 1497 static enum acc_err_result qm_hw_error_handle_v2(struct hisi_qm *qm) 1499 1498 { 1499 + struct hisi_qm_err_mask *qm_err = &qm->err_info.qm_err; 1500 1500 u32 error_status; 1501 1501 1502 1502 error_status = qm_get_hw_error_status(qm); ··· 1506 1504 qm->err_status.is_qm_ecc_mbit = true; 1507 1505 1508 1506 qm_log_hw_error(qm, error_status); 1509 - if (error_status & qm->err_info.qm_reset_mask) { 1507 + if (error_status & qm_err->reset_mask) { 1510 1508 /* Disable the same error reporting until device is recovered. */ 1511 - writel(qm->err_info.nfe & (~error_status), 1512 - qm->io_base + QM_RAS_NFE_ENABLE); 1509 + writel(qm_err->nfe & (~error_status), qm->io_base + QM_RAS_NFE_ENABLE); 1513 1510 return ACC_ERR_NEED_RESET; 1514 1511 } 1515 1512 1516 1513 /* Clear error source if not need reset. 
*/ 1517 1514 writel(error_status, qm->io_base + QM_ABNORMAL_INT_SOURCE); 1518 - writel(qm->err_info.nfe, qm->io_base + QM_RAS_NFE_ENABLE); 1519 - writel(qm->err_info.ce, qm->io_base + QM_RAS_CE_ENABLE); 1515 + writel(qm_err->nfe, qm->io_base + QM_RAS_NFE_ENABLE); 1516 + writel(qm_err->ce, qm->io_base + QM_RAS_CE_ENABLE); 1520 1517 } 1521 1518 1522 1519 return ACC_ERR_RECOVERED; ··· 2743 2742 } 2744 2743 } 2745 2744 2745 + static void qm_uacce_api_ver_init(struct hisi_qm *qm) 2746 + { 2747 + struct uacce_device *uacce = qm->uacce; 2748 + 2749 + switch (qm->ver) { 2750 + case QM_HW_V1: 2751 + uacce->api_ver = HISI_QM_API_VER_BASE; 2752 + break; 2753 + case QM_HW_V2: 2754 + uacce->api_ver = HISI_QM_API_VER2_BASE; 2755 + break; 2756 + case QM_HW_V3: 2757 + case QM_HW_V4: 2758 + uacce->api_ver = HISI_QM_API_VER3_BASE; 2759 + break; 2760 + default: 2761 + uacce->api_ver = HISI_QM_API_VER5_BASE; 2762 + break; 2763 + } 2764 + } 2765 + 2746 2766 static int qm_alloc_uacce(struct hisi_qm *qm) 2747 2767 { 2748 2768 struct pci_dev *pdev = qm->pdev; ··· 2798 2776 uacce->priv = qm; 2799 2777 2800 2778 if (qm->ver == QM_HW_V1) 2801 - uacce->api_ver = HISI_QM_API_VER_BASE; 2802 - else if (qm->ver == QM_HW_V2) 2803 - uacce->api_ver = HISI_QM_API_VER2_BASE; 2804 - else 2805 - uacce->api_ver = HISI_QM_API_VER3_BASE; 2806 - 2807 - if (qm->ver == QM_HW_V1) 2808 2779 mmio_page_nr = QM_DOORBELL_PAGE_NR; 2809 2780 else if (!test_bit(QM_SUPPORT_DB_ISOLATION, &qm->caps)) 2810 2781 mmio_page_nr = QM_DOORBELL_PAGE_NR + ··· 2816 2801 uacce->qf_pg_num[UACCE_QFRT_DUS] = dus_page_nr; 2817 2802 2818 2803 qm->uacce = uacce; 2804 + qm_uacce_api_ver_init(qm); 2819 2805 INIT_LIST_HEAD(&qm->isolate_data.qm_hw_errs); 2820 2806 mutex_init(&qm->isolate_data.isolate_lock); 2821 2807 ··· 3195 3179 3196 3180 qm_init_eq_aeq_status(qm); 3197 3181 3182 + /* Before starting the dev, clear the memory and then configure to device using. 
*/ 3183 + memset(qm->qdma.va, 0, qm->qdma.size); 3184 + 3198 3185 ret = qm_eq_ctx_cfg(qm); 3199 3186 if (ret) { 3200 3187 dev_err(dev, "Set eqc failed!\n"); ··· 3209 3190 3210 3191 static int __hisi_qm_start(struct hisi_qm *qm) 3211 3192 { 3193 + struct device *dev = &qm->pdev->dev; 3212 3194 int ret; 3213 3195 3214 - WARN_ON(!qm->qdma.va); 3196 + if (!qm->qdma.va) { 3197 + dev_err(dev, "qm qdma is NULL!\n"); 3198 + return -EINVAL; 3199 + } 3215 3200 3216 3201 if (qm->fun_type == QM_HW_PF) { 3217 3202 ret = hisi_qm_set_vft(qm, 0, qm->qp_base, qm->qp_num); ··· 3289 3266 for (i = 0; i < qm->qp_num; i++) { 3290 3267 qp = &qm->qp_array[i]; 3291 3268 if (atomic_read(&qp->qp_status.flags) == QP_STOP && 3292 - qp->is_resetting == true) { 3269 + qp->is_resetting == true && qp->is_in_kernel == true) { 3293 3270 ret = qm_start_qp_nolock(qp, 0); 3294 3271 if (ret < 0) { 3295 3272 dev_err(dev, "Failed to start qp%d!\n", i); ··· 3321 3298 } 3322 3299 3323 3300 /** 3324 - * qm_clear_queues() - Clear all queues memory in a qm. 3325 - * @qm: The qm in which the queues will be cleared. 3301 + * qm_invalid_queues() - invalid all queues in use. 3302 + * @qm: The qm in which the queues will be invalidated. 3326 3303 * 3327 - * This function clears all queues memory in a qm. Reset of accelerator can 3328 - * use this to clear queues. 3304 + * This function invalid all queues in use. If the doorbell command is sent 3305 + * to device in user space after the device is reset, the device discards 3306 + * the doorbell command. 3329 3307 */ 3330 - static void qm_clear_queues(struct hisi_qm *qm) 3308 + static void qm_invalid_queues(struct hisi_qm *qm) 3331 3309 { 3332 3310 struct hisi_qp *qp; 3311 + struct qm_sqc *sqc; 3312 + struct qm_cqc *cqc; 3333 3313 int i; 3314 + 3315 + /* 3316 + * Normal stop queues is no longer used and does not need to be 3317 + * invalid queues. 
3318 + */ 3319 + if (qm->status.stop_reason == QM_NORMAL) 3320 + return; 3321 + 3322 + if (qm->status.stop_reason == QM_DOWN) 3323 + hisi_qm_cache_wb(qm); 3334 3324 3335 3325 for (i = 0; i < qm->qp_num; i++) { 3336 3326 qp = &qm->qp_array[i]; 3337 - if (qp->is_in_kernel && qp->is_resetting) 3327 + if (!qp->is_resetting) 3328 + continue; 3329 + 3330 + /* Modify random data and set sqc close bit to invalid queue. */ 3331 + sqc = qm->sqc + i; 3332 + cqc = qm->cqc + i; 3333 + sqc->w8 = cpu_to_le16(QM_XQC_RANDOM_DATA); 3334 + sqc->w13 = cpu_to_le16(QM_SQC_DISABLE_QP); 3335 + cqc->w8 = cpu_to_le16(QM_XQC_RANDOM_DATA); 3336 + if (qp->is_in_kernel) 3338 3337 memset(qp->qdma.va, 0, qp->qdma.size); 3339 3338 } 3340 - 3341 - memset(qm->qdma.va, 0, qm->qdma.size); 3342 3339 } 3343 3340 3344 3341 /** ··· 3415 3372 } 3416 3373 } 3417 3374 3418 - qm_clear_queues(qm); 3375 + qm_invalid_queues(qm); 3419 3376 qm->status.stop_reason = QM_NORMAL; 3420 3377 3421 3378 err_unlock: ··· 3660 3617 return 0; 3661 3618 } 3662 3619 3663 - static int qm_clear_vft_config(struct hisi_qm *qm) 3620 + static void qm_clear_vft_config(struct hisi_qm *qm) 3664 3621 { 3665 - int ret; 3666 3622 u32 i; 3667 3623 3668 - for (i = 1; i <= qm->vfs_num; i++) { 3669 - ret = hisi_qm_set_vft(qm, i, 0, 0); 3670 - if (ret) 3671 - return ret; 3672 - } 3673 - qm->vfs_num = 0; 3624 + /* 3625 + * When disabling SR-IOV, clear the configuration of each VF in the hardware 3626 + * sequentially. Failure to clear a single VF should not affect the clearing 3627 + * operation of other VFs. 
3628 + */ 3629 + for (i = 1; i <= qm->vfs_num; i++) 3630 + (void)hisi_qm_set_vft(qm, i, 0, 0); 3674 3631 3675 - return 0; 3632 + qm->vfs_num = 0; 3676 3633 } 3677 3634 3678 3635 static int qm_func_shaper_enable(struct hisi_qm *qm, u32 fun_index, u32 qos) ··· 3869 3826 } 3870 3827 3871 3828 pdev = container_of(dev, struct pci_dev, dev); 3829 + if (pci_physfn(pdev) != qm->pdev) { 3830 + pci_err(qm->pdev, "the pdev input does not match the pf!\n"); 3831 + return -EINVAL; 3832 + } 3872 3833 3873 3834 *fun_index = pdev->devfn; 3874 3835 ··· 4007 3960 goto err_put_sync; 4008 3961 } 4009 3962 3963 + qm->vfs_num = num_vfs; 4010 3964 ret = pci_enable_sriov(pdev, num_vfs); 4011 3965 if (ret) { 4012 3966 pci_err(pdev, "Can't enable VF!\n"); 4013 3967 qm_clear_vft_config(qm); 4014 3968 goto err_put_sync; 4015 3969 } 4016 - qm->vfs_num = num_vfs; 4017 3970 4018 3971 pci_info(pdev, "VF enabled, vfs_num(=%d)!\n", num_vfs); 4019 3972 ··· 4048 4001 } 4049 4002 4050 4003 pci_disable_sriov(pdev); 4051 - 4052 - qm->vfs_num = 0; 4004 + qm_clear_vft_config(qm); 4053 4005 qm_pm_put_sync(qm); 4054 4006 4055 - return qm_clear_vft_config(qm); 4007 + return 0; 4056 4008 } 4057 4009 EXPORT_SYMBOL_GPL(hisi_qm_sriov_disable); 4058 4010 ··· 4225 4179 !qm->err_status.is_qm_ecc_mbit && 4226 4180 !qm->err_ini->close_axi_master_ooo) { 4227 4181 nfe_enb = readl(qm->io_base + QM_RAS_NFE_ENABLE); 4228 - writel(nfe_enb & QM_RAS_NFE_MBIT_DISABLE, 4182 + writel(nfe_enb & ~qm->err_info.qm_err.ecc_2bits_mask, 4229 4183 qm->io_base + QM_RAS_NFE_ENABLE); 4230 - writel(QM_ECC_MBIT, qm->io_base + QM_ABNORMAL_INT_SET); 4184 + writel(qm->err_info.qm_err.ecc_2bits_mask, qm->io_base + QM_ABNORMAL_INT_SET); 4231 4185 } 4232 4186 } 4233 4187 ··· 4493 4447 { 4494 4448 u32 value; 4495 4449 4496 - if (qm->err_ini->open_sva_prefetch) 4497 - qm->err_ini->open_sva_prefetch(qm); 4498 - 4499 4450 if (qm->ver >= QM_HW_V3) 4500 4451 return; 4501 4452 ··· 4506 4463 qm->io_base + ACC_AM_CFG_PORT_WR_EN); 4507 4464 4508 4465 /* 
clear dev ecc 2bit error source if having */ 4509 - value = qm_get_dev_err_status(qm) & qm->err_info.ecc_2bits_mask; 4466 + value = qm_get_dev_err_status(qm) & qm->err_info.dev_err.ecc_2bits_mask; 4510 4467 if (value && qm->err_ini->clear_dev_hw_err_status) 4511 4468 qm->err_ini->clear_dev_hw_err_status(qm, value); 4512 4469 4513 4470 /* clear QM ecc mbit error source */ 4514 - writel(QM_ECC_MBIT, qm->io_base + QM_ABNORMAL_INT_SOURCE); 4471 + writel(qm->err_info.qm_err.ecc_2bits_mask, qm->io_base + QM_ABNORMAL_INT_SOURCE); 4515 4472 4516 4473 /* clear AM Reorder Buffer ecc mbit source */ 4517 4474 writel(ACC_ROB_ECC_ERR_MULTPL, qm->io_base + ACC_AM_ROB_ECC_INT_STS); ··· 4536 4493 clear_flags: 4537 4494 qm->err_status.is_qm_ecc_mbit = false; 4538 4495 qm->err_status.is_dev_ecc_mbit = false; 4496 + } 4497 + 4498 + static void qm_disable_axi_error(struct hisi_qm *qm) 4499 + { 4500 + struct hisi_qm_err_mask *qm_err = &qm->err_info.qm_err; 4501 + u32 val; 4502 + 4503 + val = ~(qm->error_mask & (~QM_RAS_AXI_ERROR)); 4504 + writel(val, qm->io_base + QM_ABNORMAL_INT_MASK); 4505 + if (qm->ver > QM_HW_V2) 4506 + writel(qm_err->shutdown_mask & (~QM_RAS_AXI_ERROR), 4507 + qm->io_base + QM_OOO_SHUTDOWN_SEL); 4508 + 4509 + if (qm->err_ini->disable_axi_error) 4510 + qm->err_ini->disable_axi_error(qm); 4511 + } 4512 + 4513 + static void qm_enable_axi_error(struct hisi_qm *qm) 4514 + { 4515 + /* clear axi error source */ 4516 + writel(QM_RAS_AXI_ERROR, qm->io_base + QM_ABNORMAL_INT_SOURCE); 4517 + 4518 + writel(~qm->error_mask, qm->io_base + QM_ABNORMAL_INT_MASK); 4519 + if (qm->ver > QM_HW_V2) 4520 + writel(qm->err_info.qm_err.shutdown_mask, qm->io_base + QM_OOO_SHUTDOWN_SEL); 4521 + 4522 + if (qm->err_ini->enable_axi_error) 4523 + qm->err_ini->enable_axi_error(qm); 4539 4524 } 4540 4525 4541 4526 static int qm_controller_reset_done(struct hisi_qm *qm) ··· 4599 4528 4600 4529 qm_restart_prepare(qm); 4601 4530 hisi_qm_dev_err_init(qm); 4531 + qm_disable_axi_error(qm); 4602 4532 if 
(qm->err_ini->open_axi_master_ooo) 4603 4533 qm->err_ini->open_axi_master_ooo(qm); 4604 4534 ··· 4622 4550 ret = qm_wait_vf_prepare_finish(qm); 4623 4551 if (ret) 4624 4552 pci_err(pdev, "failed to start by vfs in soft reset!\n"); 4625 - 4553 + qm_enable_axi_error(qm); 4626 4554 qm_cmd_init(qm); 4627 4555 qm_restart_done(qm); 4628 4556 ··· 4803 4731 } 4804 4732 EXPORT_SYMBOL_GPL(hisi_qm_reset_done); 4805 4733 4734 + static irqreturn_t qm_rsvd_irq(int irq, void *data) 4735 + { 4736 + struct hisi_qm *qm = data; 4737 + 4738 + dev_info(&qm->pdev->dev, "Reserved interrupt, ignore!\n"); 4739 + 4740 + return IRQ_HANDLED; 4741 + } 4742 + 4806 4743 static irqreturn_t qm_abnormal_irq(int irq, void *data) 4807 4744 { 4808 4745 struct hisi_qm *qm = data; ··· 4841 4760 ret = hisi_qm_stop(qm, QM_DOWN); 4842 4761 if (ret) 4843 4762 dev_err(&pdev->dev, "Fail to stop qm in shutdown!\n"); 4844 - 4845 - hisi_qm_cache_wb(qm); 4846 4763 } 4847 4764 EXPORT_SYMBOL_GPL(hisi_qm_dev_shutdown); 4848 4765 ··· 5093 5014 struct pci_dev *pdev = qm->pdev; 5094 5015 u32 irq_vector, val; 5095 5016 5096 - if (qm->fun_type == QM_HW_VF) 5017 + if (qm->fun_type == QM_HW_VF && qm->ver < QM_HW_V3) 5097 5018 return; 5098 5019 5099 5020 val = qm->cap_tables.qm_cap_table[QM_ABNORMAL_IRQ].cap_val; ··· 5110 5031 u32 irq_vector, val; 5111 5032 int ret; 5112 5033 5113 - if (qm->fun_type == QM_HW_VF) 5114 - return 0; 5115 - 5116 5034 val = qm->cap_tables.qm_cap_table[QM_ABNORMAL_IRQ].cap_val; 5117 5035 if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_ABN_IRQ_TYPE_MASK)) 5118 5036 return 0; 5119 - 5120 5037 irq_vector = val & QM_IRQ_VECTOR_MASK; 5038 + 5039 + /* For VF, this is a reserved interrupt in V3 version. 
*/ 5040 + if (qm->fun_type == QM_HW_VF) { 5041 + if (qm->ver < QM_HW_V3) 5042 + return 0; 5043 + 5044 + ret = request_irq(pci_irq_vector(pdev, irq_vector), qm_rsvd_irq, 5045 + IRQF_NO_AUTOEN, qm->dev_name, qm); 5046 + if (ret) { 5047 + dev_err(&pdev->dev, "failed to request reserved irq, ret = %d!\n", ret); 5048 + return ret; 5049 + } 5050 + return 0; 5051 + } 5052 + 5121 5053 ret = request_irq(pci_irq_vector(pdev, irq_vector), qm_abnormal_irq, 0, qm->dev_name, qm); 5122 5054 if (ret) 5123 - dev_err(&qm->pdev->dev, "failed to request abnormal irq, ret = %d", ret); 5055 + dev_err(&qm->pdev->dev, "failed to request abnormal irq, ret = %d!\n", ret); 5124 5056 5125 5057 return ret; 5126 5058 } ··· 5497 5407 pci_set_master(pdev); 5498 5408 5499 5409 num_vec = qm_get_irq_num(qm); 5410 + if (!num_vec) { 5411 + dev_err(dev, "Device irq num is zero!\n"); 5412 + ret = -EINVAL; 5413 + goto err_get_pci_res; 5414 + } 5415 + num_vec = roundup_pow_of_two(num_vec); 5500 5416 ret = pci_alloc_irq_vectors(pdev, num_vec, num_vec, PCI_IRQ_MSI); 5501 5417 if (ret < 0) { 5502 5418 dev_err(dev, "Failed to enable MSI vectors!\n");
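At the bottom of the qm.c hunk the requested IRQ count is rounded up with `roundup_pow_of_two()` before `pci_alloc_irq_vectors(..., PCI_IRQ_MSI)`: multi-message MSI (unlike MSI-X) only grants vector counts that are powers of two. A freestanding 32-bit equivalent of that rounding (the kernel's version is built on `fls` and leaves 0 undefined; mapping 0 to 1 is a choice made here):

```c
#include <assert.h>
#include <stdint.h>

/* Round n up to the next power of two by smearing the highest set bit
 * of n-1 into every lower position, then adding one. */
static uint32_t round_up_pow2(uint32_t n)
{
    if (n <= 1)
        return 1;   /* choice for 0 and 1; the kernel leaves 0 undefined */
    n--;            /* so an exact power of two maps to itself */
    n |= n >> 1;
    n |= n >> 2;
    n |= n >> 4;
    n |= n >> 8;
    n |= n >> 16;
    return n + 1;
}
```

Asking for the rounded-up count (with min == max, as the driver does) means the allocation either gets a legal MSI vector count covering all sources or fails cleanly.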
+2 -1
drivers/crypto/hisilicon/sec/sec_drv.c
··· 922 922 struct iommu_domain *domain; 923 923 u32 sec_ipv4_mask = 0; 924 924 u32 sec_ipv6_mask[10] = {}; 925 - u32 i, ret; 925 + int ret; 926 + u32 i; 926 927 927 928 domain = iommu_get_domain_for_dev(info->dev); 928 929
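The sec_drv.c change looks cosmetic but fixes a real class of bug: kernel error codes are negative, and if `ret` is `u32` the assignment wraps to a huge positive value, so a later `if (ret < 0)` check can never fire. A demonstration of the difference (helper names are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* With a signed ret, a negative error code is detected normally. */
static int error_seen_signed(int err)
{
    int ret = err;
    return ret < 0;
}

/* With an unsigned ret, -22 (-EINVAL) wraps to 4294967274 and the
 * comparison is always false -- the error is silently lost. Compilers
 * flag this under -Wextra/-Wtype-limits. */
static int error_seen_unsigned(int err)
{
    uint32_t ret = (uint32_t)err;   /* well-defined wrap mod 2^32 */
    return ret < 0;                 /* 0 converts to unsigned: never true */
}
```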
+3 -5
drivers/crypto/hisilicon/sec2/sec_crypto.c
··· 1944 1944 static int sec_request_init(struct sec_ctx *ctx, struct sec_req *req) 1945 1945 { 1946 1946 struct sec_qp_ctx *qp_ctx; 1947 - int i; 1947 + int i = 0; 1948 1948 1949 - for (i = 0; i < ctx->sec->ctx_q_num; i++) { 1949 + do { 1950 1950 qp_ctx = &ctx->qp_ctx[i]; 1951 1951 req->req_id = sec_alloc_req_id(req, qp_ctx); 1952 - if (req->req_id >= 0) 1953 - break; 1954 - } 1952 + } while (req->req_id < 0 && ++i < ctx->sec->ctx_q_num); 1955 1953 1956 1954 req->qp_ctx = qp_ctx; 1957 1955 req->backlog = &qp_ctx->backlog;
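`sec_request_init()` above turns a `for` scan into a `do`/`while`, which guarantees `qp_ctx` is assigned on every path — including when no queue pair has a free id, in which case the request falls back onto the last queue's backlog. A first-fit sketch of that allocation scan, with a hypothetical `alloc_id` callback standing in for `sec_alloc_req_id()`:

```c
#include <assert.h>
#include <stddef.h>

/* Try each queue-pair context in order until one hands out a request
 * id; *out always ends up pointing at the last context tried, so the
 * caller has a queue to backlog on even when every allocation fails.
 * n must be >= 1, as ctx_q_num is in the driver. */
static int first_fit_id(int (*alloc_id)(void *qp), void **qps, int n,
                        void **out)
{
    int i = 0, id;

    do {
        *out = qps[i];
        id = alloc_id(qps[i]);
    } while (id < 0 && ++i < n);

    return id;      /* negative means: all full, queue on *out's backlog */
}

/* Toy allocator for demonstration: each "queue" reports a canned id. */
static int fake_alloc(void *qp)
{
    return *(int *)qp;
}
```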
+160 -69
drivers/crypto/hisilicon/sec2/sec_main.c
··· 47 47 #define SEC_RAS_FE_ENB_MSK 0x0 48 48 #define SEC_OOO_SHUTDOWN_SEL 0x301014 49 49 #define SEC_RAS_DISABLE 0x0 50 + #define SEC_AXI_ERROR_MASK (BIT(0) | BIT(1)) 51 + 50 52 #define SEC_MEM_START_INIT_REG 0x301100 51 53 #define SEC_MEM_INIT_DONE_REG 0x301104 52 54 ··· 95 93 #define SEC_PREFETCH_ENABLE (~(BIT(0) | BIT(1) | BIT(11))) 96 94 #define SEC_PREFETCH_DISABLE BIT(1) 97 95 #define SEC_SVA_DISABLE_READY (BIT(7) | BIT(11)) 96 + #define SEC_SVA_PREFETCH_INFO 0x301ED4 97 + #define SEC_SVA_STALL_NUM GENMASK(23, 8) 98 + #define SEC_SVA_PREFETCH_NUM GENMASK(2, 0) 99 + #define SEC_WAIT_SVA_READY 500000 100 + #define SEC_READ_SVA_STATUS_TIMES 3 101 + #define SEC_WAIT_US_MIN 10 102 + #define SEC_WAIT_US_MAX 20 103 + #define SEC_WAIT_QP_US_MIN 1000 104 + #define SEC_WAIT_QP_US_MAX 2000 105 + #define SEC_MAX_WAIT_TIMES 2000 98 106 99 107 #define SEC_DELAY_10_US 10 100 108 #define SEC_POLL_TIMEOUT_US 1000 ··· 476 464 writel_relaxed(reg, qm->io_base + SEC_CONTROL_REG); 477 465 } 478 466 467 + static int sec_wait_sva_ready(struct hisi_qm *qm, __u32 offset, __u32 mask) 468 + { 469 + u32 val, try_times = 0; 470 + u8 count = 0; 471 + 472 + /* 473 + * Read the register value every 10-20us. If the value is 0 for three 474 + * consecutive times, the SVA module is ready. 
475 + */ 476 + do { 477 + val = readl(qm->io_base + offset); 478 + if (val & mask) 479 + count = 0; 480 + else if (++count == SEC_READ_SVA_STATUS_TIMES) 481 + break; 482 + 483 + usleep_range(SEC_WAIT_US_MIN, SEC_WAIT_US_MAX); 484 + } while (++try_times < SEC_WAIT_SVA_READY); 485 + 486 + if (try_times == SEC_WAIT_SVA_READY) { 487 + pci_err(qm->pdev, "failed to wait sva prefetch ready\n"); 488 + return -ETIMEDOUT; 489 + } 490 + 491 + return 0; 492 + } 493 + 494 + static void sec_close_sva_prefetch(struct hisi_qm *qm) 495 + { 496 + u32 val; 497 + int ret; 498 + 499 + if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps)) 500 + return; 501 + 502 + val = readl_relaxed(qm->io_base + SEC_PREFETCH_CFG); 503 + val |= SEC_PREFETCH_DISABLE; 504 + writel(val, qm->io_base + SEC_PREFETCH_CFG); 505 + 506 + ret = readl_relaxed_poll_timeout(qm->io_base + SEC_SVA_TRANS, 507 + val, !(val & SEC_SVA_DISABLE_READY), 508 + SEC_DELAY_10_US, SEC_POLL_TIMEOUT_US); 509 + if (ret) 510 + pci_err(qm->pdev, "failed to close sva prefetch\n"); 511 + 512 + (void)sec_wait_sva_ready(qm, SEC_SVA_PREFETCH_INFO, SEC_SVA_STALL_NUM); 513 + } 514 + 515 + static void sec_open_sva_prefetch(struct hisi_qm *qm) 516 + { 517 + u32 val; 518 + int ret; 519 + 520 + if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps)) 521 + return; 522 + 523 + /* Enable prefetch */ 524 + val = readl_relaxed(qm->io_base + SEC_PREFETCH_CFG); 525 + val &= SEC_PREFETCH_ENABLE; 526 + writel(val, qm->io_base + SEC_PREFETCH_CFG); 527 + 528 + ret = readl_relaxed_poll_timeout(qm->io_base + SEC_PREFETCH_CFG, 529 + val, !(val & SEC_PREFETCH_DISABLE), 530 + SEC_DELAY_10_US, SEC_POLL_TIMEOUT_US); 531 + if (ret) { 532 + pci_err(qm->pdev, "failed to open sva prefetch\n"); 533 + sec_close_sva_prefetch(qm); 534 + return; 535 + } 536 + 537 + ret = sec_wait_sva_ready(qm, SEC_SVA_TRANS, SEC_SVA_PREFETCH_NUM); 538 + if (ret) 539 + sec_close_sva_prefetch(qm); 540 + } 541 + 479 542 static void sec_engine_sva_config(struct hisi_qm *qm) 480 543 { 481 544 u32 reg; 
··· 584 497 writel_relaxed(reg, qm->io_base + 585 498 SEC_INTERFACE_USER_CTRL1_REG); 586 499 } 587 - } 588 - 589 - static void sec_open_sva_prefetch(struct hisi_qm *qm) 590 - { 591 - u32 val; 592 - int ret; 593 - 594 - if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps)) 595 - return; 596 - 597 - /* Enable prefetch */ 598 - val = readl_relaxed(qm->io_base + SEC_PREFETCH_CFG); 599 - val &= SEC_PREFETCH_ENABLE; 600 - writel(val, qm->io_base + SEC_PREFETCH_CFG); 601 - 602 - ret = readl_relaxed_poll_timeout(qm->io_base + SEC_PREFETCH_CFG, 603 - val, !(val & SEC_PREFETCH_DISABLE), 604 - SEC_DELAY_10_US, SEC_POLL_TIMEOUT_US); 605 - if (ret) 606 - pci_err(qm->pdev, "failed to open sva prefetch\n"); 607 - } 608 - 609 - static void sec_close_sva_prefetch(struct hisi_qm *qm) 610 - { 611 - u32 val; 612 - int ret; 613 - 614 - if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps)) 615 - return; 616 - 617 - val = readl_relaxed(qm->io_base + SEC_PREFETCH_CFG); 618 - val |= SEC_PREFETCH_DISABLE; 619 - writel(val, qm->io_base + SEC_PREFETCH_CFG); 620 - 621 - ret = readl_relaxed_poll_timeout(qm->io_base + SEC_SVA_TRANS, 622 - val, !(val & SEC_SVA_DISABLE_READY), 623 - SEC_DELAY_10_US, SEC_POLL_TIMEOUT_US); 624 - if (ret) 625 - pci_err(qm->pdev, "failed to close sva prefetch\n"); 500 + sec_open_sva_prefetch(qm); 626 501 } 627 502 628 503 static void sec_enable_clock_gate(struct hisi_qm *qm) ··· 715 666 val1 = readl(qm->io_base + SEC_CONTROL_REG); 716 667 if (enable) { 717 668 val1 |= SEC_AXI_SHUTDOWN_ENABLE; 718 - val2 = hisi_qm_get_hw_info(qm, sec_basic_info, 719 - SEC_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 669 + val2 = qm->err_info.dev_err.shutdown_mask; 720 670 } else { 721 671 val1 &= SEC_AXI_SHUTDOWN_DISABLE; 722 672 val2 = 0x0; ··· 729 681 730 682 static void sec_hw_error_enable(struct hisi_qm *qm) 731 683 { 732 - u32 ce, nfe; 684 + struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err; 685 + u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe; 733 686 734 687 if (qm->ver == 
QM_HW_V1) { 735 688 writel(SEC_CORE_INT_DISABLE, qm->io_base + SEC_CORE_INT_MASK); ··· 738 689 return; 739 690 } 740 691 741 - ce = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_CE_MASK_CAP, qm->cap_ver); 742 - nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver); 743 - 744 692 /* clear SEC hw error source if having */ 745 - writel(ce | nfe | SEC_RAS_FE_ENB_MSK, qm->io_base + SEC_CORE_INT_SOURCE); 693 + writel(err_mask, qm->io_base + SEC_CORE_INT_SOURCE); 746 694 747 695 /* enable RAS int */ 748 - writel(ce, qm->io_base + SEC_RAS_CE_REG); 749 - writel(SEC_RAS_FE_ENB_MSK, qm->io_base + SEC_RAS_FE_REG); 750 - writel(nfe, qm->io_base + SEC_RAS_NFE_REG); 696 + writel(dev_err->ce, qm->io_base + SEC_RAS_CE_REG); 697 + writel(dev_err->fe, qm->io_base + SEC_RAS_FE_REG); 698 + writel(dev_err->nfe, qm->io_base + SEC_RAS_NFE_REG); 751 699 752 700 /* enable SEC block master OOO when nfe occurs on Kunpeng930 */ 753 701 sec_master_ooo_ctrl(qm, true); 754 702 755 703 /* enable SEC hw error interrupts */ 756 - writel(ce | nfe | SEC_RAS_FE_ENB_MSK, qm->io_base + SEC_CORE_INT_MASK); 704 + writel(err_mask, qm->io_base + SEC_CORE_INT_MASK); 757 705 } 758 706 759 707 static void sec_hw_error_disable(struct hisi_qm *qm) ··· 1107 1061 1108 1062 static void sec_disable_error_report(struct hisi_qm *qm, u32 err_type) 1109 1063 { 1110 - u32 nfe_mask; 1064 + u32 nfe_mask = qm->err_info.dev_err.nfe; 1111 1065 1112 - nfe_mask = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver); 1113 1066 writel(nfe_mask & (~err_type), qm->io_base + SEC_RAS_NFE_REG); 1067 + } 1068 + 1069 + static void sec_enable_error_report(struct hisi_qm *qm) 1070 + { 1071 + u32 nfe_mask = qm->err_info.dev_err.nfe; 1072 + u32 ce_mask = qm->err_info.dev_err.ce; 1073 + 1074 + writel(nfe_mask, qm->io_base + SEC_RAS_NFE_REG); 1075 + writel(ce_mask, qm->io_base + SEC_RAS_CE_REG); 1114 1076 } 1115 1077 1116 1078 static void sec_open_axi_master_ooo(struct hisi_qm *qm) ··· 1136 1082 1137 1083 
err_status = sec_get_hw_err_status(qm); 1138 1084 if (err_status) { 1139 - if (err_status & qm->err_info.ecc_2bits_mask) 1085 + if (err_status & qm->err_info.dev_err.ecc_2bits_mask) 1140 1086 qm->err_status.is_dev_ecc_mbit = true; 1141 1087 sec_log_hw_error(qm, err_status); 1142 1088 1143 - if (err_status & qm->err_info.dev_reset_mask) { 1089 + if (err_status & qm->err_info.dev_err.reset_mask) { 1144 1090 /* Disable the same error reporting until device is recovered. */ 1145 1091 sec_disable_error_report(qm, err_status); 1146 1092 return ACC_ERR_NEED_RESET; 1147 1093 } 1148 1094 sec_clear_hw_err_status(qm, err_status); 1095 + /* Avoid firmware disable error report, re-enable. */ 1096 + sec_enable_error_report(qm); 1149 1097 } 1150 1098 1151 1099 return ACC_ERR_RECOVERED; ··· 1158 1102 u32 err_status; 1159 1103 1160 1104 err_status = sec_get_hw_err_status(qm); 1161 - if (err_status & qm->err_info.dev_shutdown_mask) 1105 + if (err_status & qm->err_info.dev_err.shutdown_mask) 1162 1106 return true; 1163 1107 1164 1108 return false; 1165 1109 } 1166 1110 1111 + static void sec_disable_axi_error(struct hisi_qm *qm) 1112 + { 1113 + struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err; 1114 + u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe; 1115 + 1116 + writel(err_mask & ~SEC_AXI_ERROR_MASK, qm->io_base + SEC_CORE_INT_MASK); 1117 + 1118 + if (qm->ver > QM_HW_V2) 1119 + writel(dev_err->shutdown_mask & (~SEC_AXI_ERROR_MASK), 1120 + qm->io_base + SEC_OOO_SHUTDOWN_SEL); 1121 + } 1122 + 1123 + static void sec_enable_axi_error(struct hisi_qm *qm) 1124 + { 1125 + struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err; 1126 + u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe; 1127 + 1128 + /* clear axi error source */ 1129 + writel(SEC_AXI_ERROR_MASK, qm->io_base + SEC_CORE_INT_SOURCE); 1130 + 1131 + writel(err_mask, qm->io_base + SEC_CORE_INT_MASK); 1132 + 1133 + if (qm->ver > QM_HW_V2) 1134 + writel(dev_err->shutdown_mask, qm->io_base + 
SEC_OOO_SHUTDOWN_SEL); 1135 + } 1136 + 1167 1137 static void sec_err_info_init(struct hisi_qm *qm) 1168 1138 { 1169 1139 struct hisi_qm_err_info *err_info = &qm->err_info; 1140 + struct hisi_qm_err_mask *qm_err = &err_info->qm_err; 1141 + struct hisi_qm_err_mask *dev_err = &err_info->dev_err; 1170 1142 1171 - err_info->fe = SEC_RAS_FE_ENB_MSK; 1172 - err_info->ce = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_QM_CE_MASK_CAP, qm->cap_ver); 1173 - err_info->nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_QM_NFE_MASK_CAP, qm->cap_ver); 1174 - err_info->ecc_2bits_mask = SEC_CORE_INT_STATUS_M_ECC; 1175 - err_info->qm_shutdown_mask = hisi_qm_get_hw_info(qm, sec_basic_info, 1176 - SEC_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 1177 - err_info->dev_shutdown_mask = hisi_qm_get_hw_info(qm, sec_basic_info, 1178 - SEC_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 1179 - err_info->qm_reset_mask = hisi_qm_get_hw_info(qm, sec_basic_info, 1180 - SEC_QM_RESET_MASK_CAP, qm->cap_ver); 1181 - err_info->dev_reset_mask = hisi_qm_get_hw_info(qm, sec_basic_info, 1182 - SEC_RESET_MASK_CAP, qm->cap_ver); 1143 + qm_err->fe = SEC_RAS_FE_ENB_MSK; 1144 + qm_err->ce = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_QM_CE_MASK_CAP, qm->cap_ver); 1145 + qm_err->nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_QM_NFE_MASK_CAP, qm->cap_ver); 1146 + qm_err->shutdown_mask = hisi_qm_get_hw_info(qm, sec_basic_info, 1147 + SEC_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 1148 + qm_err->reset_mask = hisi_qm_get_hw_info(qm, sec_basic_info, 1149 + SEC_QM_RESET_MASK_CAP, qm->cap_ver); 1150 + qm_err->ecc_2bits_mask = QM_ECC_MBIT; 1151 + 1152 + dev_err->fe = SEC_RAS_FE_ENB_MSK; 1153 + dev_err->ce = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_CE_MASK_CAP, qm->cap_ver); 1154 + dev_err->nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver); 1155 + dev_err->shutdown_mask = hisi_qm_get_hw_info(qm, sec_basic_info, 1156 + SEC_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 1157 + dev_err->reset_mask = 
hisi_qm_get_hw_info(qm, sec_basic_info, 1158 + SEC_RESET_MASK_CAP, qm->cap_ver); 1159 + dev_err->ecc_2bits_mask = SEC_CORE_INT_STATUS_M_ECC; 1160 + 1183 1161 err_info->msi_wr_port = BIT(0); 1184 1162 err_info->acpi_rst = "SRST"; 1185 1163 } ··· 1231 1141 .err_info_init = sec_err_info_init, 1232 1142 .get_err_result = sec_get_err_result, 1233 1143 .dev_is_abnormal = sec_dev_is_abnormal, 1144 + .disable_axi_error = sec_disable_axi_error, 1145 + .enable_axi_error = sec_enable_axi_error, 1234 1146 }; 1235 1147 1236 1148 static int sec_pf_probe_init(struct sec_dev *sec) ··· 1244 1152 if (ret) 1245 1153 return ret; 1246 1154 1247 - sec_open_sva_prefetch(qm); 1248 1155 hisi_qm_dev_err_init(qm); 1249 1156 sec_debug_regs_clear(qm); 1250 1157 ret = sec_show_last_regs_init(qm); ··· 1260 1169 size_t i, size; 1261 1170 1262 1171 size = ARRAY_SIZE(sec_cap_query_info); 1263 - sec_cap = devm_kzalloc(&pdev->dev, sizeof(*sec_cap) * size, GFP_KERNEL); 1172 + sec_cap = devm_kcalloc(&pdev->dev, size, sizeof(*sec_cap), GFP_KERNEL); 1264 1173 if (!sec_cap) 1265 1174 return -ENOMEM; 1266 1175
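The `sec_wait_sva_ready()` loop introduced above encodes a debouncing idiom: the SVA status register must read as idle for several consecutive samples before the hardware is trusted to be quiescent, and any busy read resets the streak. A minimal userspace sketch of the same control flow, with the MMIO read replaced by a hypothetical callback and the delays dropped:

```c
#include <errno.h>
#include <stdint.h>

#define READ_STABLE_TIMES 3	/* consecutive idle reads required */
#define MAX_TRIES         10	/* small stand-in for SEC_WAIT_SVA_READY */

/*
 * Poll read_reg() until (val & mask) is 0 for READ_STABLE_TIMES
 * consecutive reads; a busy read restarts the streak. Returns 0 on
 * success or -ETIMEDOUT, mirroring sec_wait_sva_ready().
 */
static int wait_stable_idle(uint32_t (*read_reg)(void *ctx), void *ctx,
			    uint32_t mask)
{
	unsigned int try_times = 0;
	unsigned int count = 0;

	do {
		if (read_reg(ctx) & mask)
			count = 0;		/* still busy: restart streak */
		else if (++count == READ_STABLE_TIMES)
			return 0;		/* idle long enough */
	} while (++try_times < MAX_TRIES);

	return -ETIMEDOUT;
}

/* Fake "register" that reports busy for the first N reads. */
struct fake_reg { int busy_reads; };

static uint32_t fake_read(void *ctx)
{
	struct fake_reg *r = ctx;

	return r->busy_reads-- > 0 ? 0x1 : 0x0;
}
```

The streak reset is the point of the pattern: a single idle sample between bursts of activity is not enough to declare the module ready.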
+17 -2
drivers/crypto/hisilicon/zip/dae_main.c
··· 15 15 #define DAE_REG_RD_TMOUT_US USEC_PER_SEC 16 16 17 17 #define DAE_ALG_NAME "hashagg" 18 + #define DAE_V5_ALG_NAME "hashagg\nudma\nhashjoin\ngather" 18 19 19 20 /* error */ 20 21 #define DAE_AXI_CFG_OFFSET 0x331000 ··· 83 82 84 83 int hisi_dae_set_alg(struct hisi_qm *qm) 85 84 { 85 + const char *alg_name; 86 86 size_t len; 87 87 88 88 if (!dae_is_support(qm)) ··· 92 90 if (!qm->uacce) 93 91 return 0; 94 92 93 + if (qm->ver >= QM_HW_V5) 94 + alg_name = DAE_V5_ALG_NAME; 95 + else 96 + alg_name = DAE_ALG_NAME; 97 + 95 98 len = strlen(qm->uacce->algs); 96 99 /* A line break may be required */ 97 - if (len + strlen(DAE_ALG_NAME) + 1 >= QM_DEV_ALG_MAX_LEN) { 100 + if (len + strlen(alg_name) + 1 >= QM_DEV_ALG_MAX_LEN) { 98 101 pci_err(qm->pdev, "algorithm name is too long!\n"); 99 102 return -EINVAL; 100 103 } ··· 107 100 if (len) 108 101 strcat((char *)qm->uacce->algs, "\n"); 109 102 110 - strcat((char *)qm->uacce->algs, DAE_ALG_NAME); 103 + strcat((char *)qm->uacce->algs, alg_name); 111 104 112 105 return 0; 113 106 } ··· 175 168 writel(DAE_ERR_NFE_MASK & (~err_type), qm->io_base + DAE_ERR_NFE_OFFSET); 176 169 } 177 170 171 + static void hisi_dae_enable_error_report(struct hisi_qm *qm) 172 + { 173 + writel(DAE_ERR_CE_MASK, qm->io_base + DAE_ERR_CE_OFFSET); 174 + writel(DAE_ERR_NFE_MASK, qm->io_base + DAE_ERR_NFE_OFFSET); 175 + } 176 + 178 177 static void hisi_dae_log_hw_error(struct hisi_qm *qm, u32 err_type) 179 178 { 180 179 const struct hisi_dae_hw_error *err = dae_hw_error; ··· 222 209 return ACC_ERR_NEED_RESET; 223 210 } 224 211 hisi_dae_clear_hw_err_status(qm, err_status); 212 + /* Avoid firmware disable error report, re-enable. */ 213 + hisi_dae_enable_error_report(qm); 225 214 226 215 return ACC_ERR_RECOVERED; 227 216 }
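`hisi_dae_set_alg()` above now selects between `DAE_ALG_NAME` and the longer `DAE_V5_ALG_NAME`, then appends the chosen name to the newline-separated `qm->uacce->algs` string after a bounds check that reserves one byte for the separator. A small sketch of that append logic, using a hypothetical buffer size in place of `QM_DEV_ALG_MAX_LEN`:

```c
#include <errno.h>
#include <string.h>

#define ALG_BUF_LEN 32	/* stand-in for QM_DEV_ALG_MAX_LEN */

/*
 * Append alg_name to a newline-separated algorithm list, as
 * hisi_dae_set_alg() does for qm->uacce->algs. One extra byte is
 * reserved for the separating '\n'; names that would overflow the
 * buffer are rejected up front.
 */
static int append_alg(char *algs, const char *alg_name)
{
	size_t len = strlen(algs);

	/* A line break may be required. */
	if (len + strlen(alg_name) + 1 >= ALG_BUF_LEN)
		return -EINVAL;

	if (len)
		strcat(algs, "\n");
	strcat(algs, alg_name);

	return 0;
}
```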
+176 -66
drivers/crypto/hisilicon/zip/zip_main.c
··· 65 65 #define HZIP_SRAM_ECC_ERR_NUM_SHIFT 16 66 66 #define HZIP_SRAM_ECC_ERR_ADDR_SHIFT 24 67 67 #define HZIP_CORE_INT_MASK_ALL GENMASK(12, 0) 68 + #define HZIP_AXI_ERROR_MASK (BIT(2) | BIT(3)) 68 69 #define HZIP_SQE_SIZE 128 69 70 #define HZIP_PF_DEF_Q_NUM 64 70 71 #define HZIP_PF_DEF_Q_BASE 0 ··· 81 80 #define HZIP_ALG_GZIP_BIT GENMASK(3, 2) 82 81 #define HZIP_ALG_DEFLATE_BIT GENMASK(5, 4) 83 82 #define HZIP_ALG_LZ77_BIT GENMASK(7, 6) 83 + #define HZIP_ALG_LZ4_BIT GENMASK(9, 8) 84 84 85 85 #define HZIP_BUF_SIZE 22 86 86 #define HZIP_SQE_MASK_OFFSET 64 ··· 97 95 #define HZIP_PREFETCH_ENABLE (~(BIT(26) | BIT(17) | BIT(0))) 98 96 #define HZIP_SVA_PREFETCH_DISABLE BIT(26) 99 97 #define HZIP_SVA_DISABLE_READY (BIT(26) | BIT(30)) 98 + #define HZIP_SVA_PREFETCH_NUM GENMASK(18, 16) 99 + #define HZIP_SVA_STALL_NUM GENMASK(15, 0) 100 100 #define HZIP_SHAPER_RATE_COMPRESS 750 101 101 #define HZIP_SHAPER_RATE_DECOMPRESS 140 102 - #define HZIP_DELAY_1_US 1 103 - #define HZIP_POLL_TIMEOUT_US 1000 102 + #define HZIP_DELAY_1_US 1 103 + #define HZIP_POLL_TIMEOUT_US 1000 104 + #define HZIP_WAIT_SVA_READY 500000 105 + #define HZIP_READ_SVA_STATUS_TIMES 3 106 + #define HZIP_WAIT_US_MIN 10 107 + #define HZIP_WAIT_US_MAX 20 104 108 105 109 /* clock gating */ 106 110 #define HZIP_PEH_CFG_AUTO_GATE 0x3011A8 ··· 118 110 119 111 /* zip comp high performance */ 120 112 #define HZIP_HIGH_PERF_OFFSET 0x301208 113 + 114 + #define HZIP_LIT_LEN_EN_OFFSET 0x301204 115 + #define HZIP_LIT_LEN_EN_EN BIT(4) 121 116 122 117 enum { 123 118 HZIP_HIGH_COMP_RATE, ··· 152 141 }, { 153 142 .alg_msk = HZIP_ALG_LZ77_BIT, 154 143 .alg = "lz77_zstd\n", 144 + }, { 145 + .alg_msk = HZIP_ALG_LZ77_BIT, 146 + .alg = "lz77_only\n", 147 + }, { 148 + .alg_msk = HZIP_ALG_LZ4_BIT, 149 + .alg = "lz4\n", 155 150 }, 156 151 }; 157 152 ··· 465 448 return false; 466 449 } 467 450 468 - static int hisi_zip_set_high_perf(struct hisi_qm *qm) 451 + static void hisi_zip_literal_set(struct hisi_qm *qm) 469 452 { 470 453 u32 
val; 471 - int ret; 454 + 455 + if (qm->ver < QM_HW_V3) 456 + return; 457 + 458 + val = readl_relaxed(qm->io_base + HZIP_LIT_LEN_EN_OFFSET); 459 + val &= ~HZIP_LIT_LEN_EN_EN; 460 + 461 + /* enable literal length in stream mode compression */ 462 + writel(val, qm->io_base + HZIP_LIT_LEN_EN_OFFSET); 463 + } 464 + 465 + static void hisi_zip_set_high_perf(struct hisi_qm *qm) 466 + { 467 + u32 val; 472 468 473 469 val = readl_relaxed(qm->io_base + HZIP_HIGH_PERF_OFFSET); 474 470 if (perf_mode == HZIP_HIGH_COMP_PERF) ··· 491 461 492 462 /* Set perf mode */ 493 463 writel(val, qm->io_base + HZIP_HIGH_PERF_OFFSET); 494 - ret = readl_relaxed_poll_timeout(qm->io_base + HZIP_HIGH_PERF_OFFSET, 495 - val, val == perf_mode, HZIP_DELAY_1_US, 496 - HZIP_POLL_TIMEOUT_US); 497 - if (ret) 498 - pci_err(qm->pdev, "failed to set perf mode\n"); 499 - 500 - return ret; 501 464 } 502 465 503 - static void hisi_zip_open_sva_prefetch(struct hisi_qm *qm) 466 + static int hisi_zip_wait_sva_ready(struct hisi_qm *qm, __u32 offset, __u32 mask) 504 467 { 505 - u32 val; 506 - int ret; 468 + u32 val, try_times = 0; 469 + u8 count = 0; 507 470 508 - if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps)) 509 - return; 471 + /* 472 + * Read the register value every 10-20us. If the value is 0 for three 473 + * consecutive times, the SVA module is ready. 
474 + */ 475 + do { 476 + val = readl(qm->io_base + offset); 477 + if (val & mask) 478 + count = 0; 479 + else if (++count == HZIP_READ_SVA_STATUS_TIMES) 480 + break; 510 481 511 - /* Enable prefetch */ 512 - val = readl_relaxed(qm->io_base + HZIP_PREFETCH_CFG); 513 - val &= HZIP_PREFETCH_ENABLE; 514 - writel(val, qm->io_base + HZIP_PREFETCH_CFG); 482 + usleep_range(HZIP_WAIT_US_MIN, HZIP_WAIT_US_MAX); 483 + } while (++try_times < HZIP_WAIT_SVA_READY); 515 484 516 - ret = readl_relaxed_poll_timeout(qm->io_base + HZIP_PREFETCH_CFG, 517 - val, !(val & HZIP_SVA_PREFETCH_DISABLE), 518 - HZIP_DELAY_1_US, HZIP_POLL_TIMEOUT_US); 519 - if (ret) 520 - pci_err(qm->pdev, "failed to open sva prefetch\n"); 485 + if (try_times == HZIP_WAIT_SVA_READY) { 486 + pci_err(qm->pdev, "failed to wait sva prefetch ready\n"); 487 + return -ETIMEDOUT; 488 + } 489 + 490 + return 0; 521 491 } 522 492 523 493 static void hisi_zip_close_sva_prefetch(struct hisi_qm *qm) ··· 537 507 HZIP_DELAY_1_US, HZIP_POLL_TIMEOUT_US); 538 508 if (ret) 539 509 pci_err(qm->pdev, "failed to close sva prefetch\n"); 510 + 511 + (void)hisi_zip_wait_sva_ready(qm, HZIP_SVA_TRANS, HZIP_SVA_STALL_NUM); 512 + } 513 + 514 + static void hisi_zip_open_sva_prefetch(struct hisi_qm *qm) 515 + { 516 + u32 val; 517 + int ret; 518 + 519 + if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps)) 520 + return; 521 + 522 + /* Enable prefetch */ 523 + val = readl_relaxed(qm->io_base + HZIP_PREFETCH_CFG); 524 + val &= HZIP_PREFETCH_ENABLE; 525 + writel(val, qm->io_base + HZIP_PREFETCH_CFG); 526 + 527 + ret = readl_relaxed_poll_timeout(qm->io_base + HZIP_PREFETCH_CFG, 528 + val, !(val & HZIP_SVA_PREFETCH_DISABLE), 529 + HZIP_DELAY_1_US, HZIP_POLL_TIMEOUT_US); 530 + if (ret) { 531 + pci_err(qm->pdev, "failed to open sva prefetch\n"); 532 + hisi_zip_close_sva_prefetch(qm); 533 + return; 534 + } 535 + 536 + ret = hisi_zip_wait_sva_ready(qm, HZIP_SVA_TRANS, HZIP_SVA_PREFETCH_NUM); 537 + if (ret) 538 + hisi_zip_close_sva_prefetch(qm); 540 539 } 
541 540 542 541 static void hisi_zip_enable_clock_gate(struct hisi_qm *qm) ··· 589 530 void __iomem *base = qm->io_base; 590 531 u32 dcomp_bm, comp_bm; 591 532 u32 zip_core_en; 533 + int ret; 592 534 593 535 /* qm user domain */ 594 536 writel(AXUSER_BASE, base + QM_ARUSER_M_CFG_1); ··· 625 565 writel(AXUSER_BASE, base + HZIP_DATA_WUSER_32_63); 626 566 writel(AXUSER_BASE, base + HZIP_SGL_RUSER_32_63); 627 567 } 568 + hisi_zip_open_sva_prefetch(qm); 628 569 629 570 /* let's open all compression/decompression cores */ 630 571 ··· 641 580 CQC_CACHE_WB_ENABLE | FIELD_PREP(SQC_CACHE_WB_THRD, 1) | 642 581 FIELD_PREP(CQC_CACHE_WB_THRD, 1), base + QM_CACHE_CTL); 643 582 583 + hisi_zip_set_high_perf(qm); 584 + hisi_zip_literal_set(qm); 644 585 hisi_zip_enable_clock_gate(qm); 645 586 646 - return hisi_dae_set_user_domain(qm); 587 + ret = hisi_dae_set_user_domain(qm); 588 + if (ret) 589 + goto close_sva_prefetch; 590 + 591 + return 0; 592 + 593 + close_sva_prefetch: 594 + hisi_zip_close_sva_prefetch(qm); 595 + return ret; 647 596 } 648 597 649 598 static void hisi_zip_master_ooo_ctrl(struct hisi_qm *qm, bool enable) ··· 663 592 val1 = readl(qm->io_base + HZIP_SOFT_CTRL_ZIP_CONTROL); 664 593 if (enable) { 665 594 val1 |= HZIP_AXI_SHUTDOWN_ENABLE; 666 - val2 = hisi_qm_get_hw_info(qm, zip_basic_cap_info, 667 - ZIP_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 595 + val2 = qm->err_info.dev_err.shutdown_mask; 668 596 } else { 669 597 val1 &= ~HZIP_AXI_SHUTDOWN_ENABLE; 670 598 val2 = 0x0; ··· 677 607 678 608 static void hisi_zip_hw_error_enable(struct hisi_qm *qm) 679 609 { 680 - u32 nfe, ce; 610 + struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err; 611 + u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe; 681 612 682 613 if (qm->ver == QM_HW_V1) { 683 614 writel(HZIP_CORE_INT_MASK_ALL, ··· 687 616 return; 688 617 } 689 618 690 - nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver); 691 - ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, 
ZIP_CE_MASK_CAP, qm->cap_ver); 692 - 693 619 /* clear ZIP hw error source if having */ 694 - writel(ce | nfe | HZIP_CORE_INT_RAS_FE_ENB_MASK, qm->io_base + HZIP_CORE_INT_SOURCE); 620 + writel(err_mask, qm->io_base + HZIP_CORE_INT_SOURCE); 695 621 696 622 /* configure error type */ 697 - writel(ce, qm->io_base + HZIP_CORE_INT_RAS_CE_ENB); 698 - writel(HZIP_CORE_INT_RAS_FE_ENB_MASK, qm->io_base + HZIP_CORE_INT_RAS_FE_ENB); 699 - writel(nfe, qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB); 623 + writel(dev_err->ce, qm->io_base + HZIP_CORE_INT_RAS_CE_ENB); 624 + writel(dev_err->fe, qm->io_base + HZIP_CORE_INT_RAS_FE_ENB); 625 + writel(dev_err->nfe, qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB); 700 626 701 627 hisi_zip_master_ooo_ctrl(qm, true); 702 628 703 629 /* enable ZIP hw error interrupts */ 704 - writel(0, qm->io_base + HZIP_CORE_INT_MASK_REG); 630 + writel(~err_mask, qm->io_base + HZIP_CORE_INT_MASK_REG); 705 631 706 632 hisi_dae_hw_error_enable(qm); 707 633 } 708 634 709 635 static void hisi_zip_hw_error_disable(struct hisi_qm *qm) 710 636 { 711 - u32 nfe, ce; 637 + struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err; 638 + u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe; 712 639 713 640 /* disable ZIP hw error interrupts */ 714 - nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver); 715 - ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CE_MASK_CAP, qm->cap_ver); 716 - writel(ce | nfe | HZIP_CORE_INT_RAS_FE_ENB_MASK, qm->io_base + HZIP_CORE_INT_MASK_REG); 641 + writel(err_mask, qm->io_base + HZIP_CORE_INT_MASK_REG); 717 642 718 643 hisi_zip_master_ooo_ctrl(qm, false); 719 644 ··· 1183 1116 1184 1117 static void hisi_zip_disable_error_report(struct hisi_qm *qm, u32 err_type) 1185 1118 { 1186 - u32 nfe_mask; 1119 + u32 nfe_mask = qm->err_info.dev_err.nfe; 1187 1120 1188 - nfe_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver); 1189 1121 writel(nfe_mask & (~err_type), qm->io_base + 
HZIP_CORE_INT_RAS_NFE_ENB); 1122 + } 1123 + 1124 + static void hisi_zip_enable_error_report(struct hisi_qm *qm) 1125 + { 1126 + u32 nfe_mask = qm->err_info.dev_err.nfe; 1127 + u32 ce_mask = qm->err_info.dev_err.ce; 1128 + 1129 + writel(nfe_mask, qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB); 1130 + writel(ce_mask, qm->io_base + HZIP_CORE_INT_RAS_CE_ENB); 1190 1131 } 1191 1132 1192 1133 static void hisi_zip_open_axi_master_ooo(struct hisi_qm *qm) ··· 1235 1160 /* Get device hardware new error status */ 1236 1161 err_status = hisi_zip_get_hw_err_status(qm); 1237 1162 if (err_status) { 1238 - if (err_status & qm->err_info.ecc_2bits_mask) 1163 + if (err_status & qm->err_info.dev_err.ecc_2bits_mask) 1239 1164 qm->err_status.is_dev_ecc_mbit = true; 1240 1165 hisi_zip_log_hw_error(qm, err_status); 1241 1166 1242 - if (err_status & qm->err_info.dev_reset_mask) { 1167 + if (err_status & qm->err_info.dev_err.reset_mask) { 1243 1168 /* Disable the same error reporting until device is recovered. */ 1244 1169 hisi_zip_disable_error_report(qm, err_status); 1245 - return ACC_ERR_NEED_RESET; 1170 + zip_result = ACC_ERR_NEED_RESET; 1246 1171 } else { 1247 1172 hisi_zip_clear_hw_err_status(qm, err_status); 1173 + /* Avoid firmware disable error report, re-enable. 
*/ 1174 + hisi_zip_enable_error_report(qm); 1248 1175 } 1249 1176 } 1250 1177 ··· 1262 1185 u32 err_status; 1263 1186 1264 1187 err_status = hisi_zip_get_hw_err_status(qm); 1265 - if (err_status & qm->err_info.dev_shutdown_mask) 1188 + if (err_status & qm->err_info.dev_err.shutdown_mask) 1266 1189 return true; 1267 1190 1268 1191 return hisi_dae_dev_is_abnormal(qm); ··· 1273 1196 return hisi_dae_close_axi_master_ooo(qm); 1274 1197 } 1275 1198 1199 + static void hisi_zip_disable_axi_error(struct hisi_qm *qm) 1200 + { 1201 + struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err; 1202 + u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe; 1203 + u32 val; 1204 + 1205 + val = ~(err_mask & (~HZIP_AXI_ERROR_MASK)); 1206 + writel(val, qm->io_base + HZIP_CORE_INT_MASK_REG); 1207 + 1208 + if (qm->ver > QM_HW_V2) 1209 + writel(dev_err->shutdown_mask & (~HZIP_AXI_ERROR_MASK), 1210 + qm->io_base + HZIP_OOO_SHUTDOWN_SEL); 1211 + } 1212 + 1213 + static void hisi_zip_enable_axi_error(struct hisi_qm *qm) 1214 + { 1215 + struct hisi_qm_err_mask *dev_err = &qm->err_info.dev_err; 1216 + u32 err_mask = dev_err->ce | dev_err->nfe | dev_err->fe; 1217 + 1218 + /* clear axi error source */ 1219 + writel(HZIP_AXI_ERROR_MASK, qm->io_base + HZIP_CORE_INT_SOURCE); 1220 + 1221 + writel(~err_mask, qm->io_base + HZIP_CORE_INT_MASK_REG); 1222 + 1223 + if (qm->ver > QM_HW_V2) 1224 + writel(dev_err->shutdown_mask, qm->io_base + HZIP_OOO_SHUTDOWN_SEL); 1225 + } 1226 + 1276 1227 static void hisi_zip_err_info_init(struct hisi_qm *qm) 1277 1228 { 1278 1229 struct hisi_qm_err_info *err_info = &qm->err_info; 1230 + struct hisi_qm_err_mask *qm_err = &err_info->qm_err; 1231 + struct hisi_qm_err_mask *dev_err = &err_info->dev_err; 1279 1232 1280 - err_info->fe = HZIP_CORE_INT_RAS_FE_ENB_MASK; 1281 - err_info->ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_QM_CE_MASK_CAP, qm->cap_ver); 1282 - err_info->nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, 1283 - ZIP_QM_NFE_MASK_CAP, qm->cap_ver); 
1284 - err_info->ecc_2bits_mask = HZIP_CORE_INT_STATUS_M_ECC; 1285 - err_info->qm_shutdown_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, 1286 - ZIP_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 1287 - err_info->dev_shutdown_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, 1288 - ZIP_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 1289 - err_info->qm_reset_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, 1290 - ZIP_QM_RESET_MASK_CAP, qm->cap_ver); 1291 - err_info->dev_reset_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, 1292 - ZIP_RESET_MASK_CAP, qm->cap_ver); 1233 + qm_err->fe = HZIP_CORE_INT_RAS_FE_ENB_MASK; 1234 + qm_err->ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_QM_CE_MASK_CAP, qm->cap_ver); 1235 + qm_err->nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, 1236 + ZIP_QM_NFE_MASK_CAP, qm->cap_ver); 1237 + qm_err->ecc_2bits_mask = QM_ECC_MBIT; 1238 + qm_err->reset_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, 1239 + ZIP_QM_RESET_MASK_CAP, qm->cap_ver); 1240 + qm_err->shutdown_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, 1241 + ZIP_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 1242 + 1243 + dev_err->fe = HZIP_CORE_INT_RAS_FE_ENB_MASK; 1244 + dev_err->ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CE_MASK_CAP, qm->cap_ver); 1245 + dev_err->nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver); 1246 + dev_err->ecc_2bits_mask = HZIP_CORE_INT_STATUS_M_ECC; 1247 + dev_err->shutdown_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, 1248 + ZIP_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 1249 + dev_err->reset_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, 1250 + ZIP_RESET_MASK_CAP, qm->cap_ver); 1251 + 1293 1252 err_info->msi_wr_port = HZIP_WR_PORT; 1294 1253 err_info->acpi_rst = "ZRST"; 1295 1254 } ··· 1345 1232 .get_err_result = hisi_zip_get_err_result, 1346 1233 .set_priv_status = hisi_zip_set_priv_status, 1347 1234 .dev_is_abnormal = hisi_zip_dev_is_abnormal, 1235 + .disable_axi_error = hisi_zip_disable_axi_error, 1236 + 
.enable_axi_error = hisi_zip_enable_axi_error, 1348 1237 }; 1349 1238 1350 1239 static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip) ··· 1366 1251 if (ret) 1367 1252 return ret; 1368 1253 1369 - ret = hisi_zip_set_high_perf(qm); 1370 - if (ret) 1371 - return ret; 1372 - 1373 - hisi_zip_open_sva_prefetch(qm); 1374 1254 hisi_qm_dev_err_init(qm); 1375 1255 hisi_zip_debug_regs_clear(qm); 1376 1256 ··· 1383 1273 size_t i, size; 1384 1274 1385 1275 size = ARRAY_SIZE(zip_cap_query_info); 1386 - zip_cap = devm_kzalloc(&pdev->dev, sizeof(*zip_cap) * size, GFP_KERNEL); 1276 + zip_cap = devm_kcalloc(&pdev->dev, size, sizeof(*zip_cap), GFP_KERNEL); 1387 1277 if (!zip_cap) 1388 1278 return -ENOMEM; 1389 1279
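Both the SEC and ZIP probe paths above swap `devm_kzalloc(dev, sizeof(*cap) * size, ...)` for `devm_kcalloc(dev, size, sizeof(*cap), ...)`. The difference is overflow safety: `kcalloc`-style allocators check the `n * size` product before allocating instead of letting it silently wrap. A userspace analogue with a hypothetical helper (not a kernel API):

```c
#include <stdint.h>
#include <stdlib.h>

/*
 * Overflow-checked, zeroing array allocation, mirroring what the
 * devm_kzalloc() -> devm_kcalloc() conversions above buy: if
 * n * size would overflow size_t, fail instead of allocating a
 * too-small wrapped-around buffer.
 */
static void *zalloc_array(size_t n, size_t size)
{
	if (size && n > SIZE_MAX / size)
		return NULL;		/* n * size would overflow */

	return calloc(n, size);		/* calloc zeroes, like kzalloc */
}
```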
+4 -1
drivers/crypto/intel/keembay/keembay-ocs-hcu-core.c
··· 232 232 struct device *dev = rctx->hcu_dev->dev; 233 233 unsigned int remainder = 0; 234 234 unsigned int total; 235 - size_t nents; 235 + int nents; 236 236 size_t count; 237 237 int rc; 238 238 int i; ··· 252 252 253 253 /* Determine the number of scatter gather list entries to process. */ 254 254 nents = sg_nents_for_len(req->src, rctx->sg_data_total - remainder); 255 + 256 + if (nents < 0) 257 + return nents; 255 258 256 259 /* If there are entries to process, map them. */ 257 260 if (nents) {
+3 -4
drivers/crypto/intel/qat/Kconfig
··· 6 6 select CRYPTO_SKCIPHER 7 7 select CRYPTO_AKCIPHER 8 8 select CRYPTO_DH 9 - select CRYPTO_HMAC 10 9 select CRYPTO_RSA 11 - select CRYPTO_SHA1 12 - select CRYPTO_SHA256 13 - select CRYPTO_SHA512 14 10 select CRYPTO_LIB_AES 11 + select CRYPTO_LIB_SHA1 12 + select CRYPTO_LIB_SHA256 13 + select CRYPTO_LIB_SHA512 15 14 select FW_LOADER 16 15 select CRC8 17 16
+14 -26
drivers/crypto/intel/qat/qat_common/adf_ctl_drv.c
··· 89 89 return -EFAULT; 90 90 } 91 91 92 - static int adf_ctl_alloc_resources(struct adf_user_cfg_ctl_data **ctl_data, 93 - unsigned long arg) 92 + static struct adf_user_cfg_ctl_data *adf_ctl_alloc_resources(unsigned long arg) 94 93 { 95 94 struct adf_user_cfg_ctl_data *cfg_data; 96 95 97 - cfg_data = kzalloc(sizeof(*cfg_data), GFP_KERNEL); 98 - if (!cfg_data) 99 - return -ENOMEM; 100 - 101 - /* Initialize device id to NO DEVICE as 0 is a valid device id */ 102 - cfg_data->device_id = ADF_CFG_NO_DEVICE; 103 - 104 - if (copy_from_user(cfg_data, (void __user *)arg, sizeof(*cfg_data))) { 96 + cfg_data = memdup_user((void __user *)arg, sizeof(*cfg_data)); 97 + if (IS_ERR(cfg_data)) 105 98 pr_err("QAT: failed to copy from user cfg_data.\n"); 106 - kfree(cfg_data); 107 - return -EIO; 108 - } 109 - 110 - *ctl_data = cfg_data; 111 - return 0; 99 + return cfg_data; 112 100 } 113 101 114 102 static int adf_add_key_value_data(struct adf_accel_dev *accel_dev, ··· 176 188 static int adf_ctl_ioctl_dev_config(struct file *fp, unsigned int cmd, 177 189 unsigned long arg) 178 190 { 179 - int ret; 180 191 struct adf_user_cfg_ctl_data *ctl_data; 181 192 struct adf_accel_dev *accel_dev; 193 + int ret = 0; 182 194 183 - ret = adf_ctl_alloc_resources(&ctl_data, arg); 184 - if (ret) 185 - return ret; 195 + ctl_data = adf_ctl_alloc_resources(arg); 196 + if (IS_ERR(ctl_data)) 197 + return PTR_ERR(ctl_data); 186 198 187 199 accel_dev = adf_devmgr_get_dev_by_id(ctl_data->device_id); 188 200 if (!accel_dev) { ··· 255 267 int ret; 256 268 struct adf_user_cfg_ctl_data *ctl_data; 257 269 258 - ret = adf_ctl_alloc_resources(&ctl_data, arg); 259 - if (ret) 260 - return ret; 270 + ctl_data = adf_ctl_alloc_resources(arg); 271 + if (IS_ERR(ctl_data)) 272 + return PTR_ERR(ctl_data); 261 273 262 274 if (adf_devmgr_verify_id(ctl_data->device_id)) { 263 275 pr_err("QAT: Device %d not found\n", ctl_data->device_id); ··· 289 301 struct adf_user_cfg_ctl_data *ctl_data; 290 302 struct adf_accel_dev 
*accel_dev; 291 303 292 - ret = adf_ctl_alloc_resources(&ctl_data, arg); 293 - if (ret) 294 - return ret; 304 + ctl_data = adf_ctl_alloc_resources(arg); 305 + if (IS_ERR(ctl_data)) 306 + return PTR_ERR(ctl_data); 295 307 296 308 ret = -ENODEV; 297 309 accel_dev = adf_devmgr_get_dev_by_id(ctl_data->device_id);
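The `adf_ctl_alloc_resources()` rework above returns the buffer directly and reports failure through an error-encoded pointer, the convention `memdup_user()` itself follows. A userspace sketch of that convention with simplified versions of the `ERR_PTR` helpers (the kernel's live in `<linux/err.h>`) and a hypothetical `memdup_buf()` in place of the user-copy:

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Simplified userspace takes on the kernel's ERR_PTR helpers. */
#define ERR_PTR(err)	((void *)(long)(err))
#define PTR_ERR(ptr)	((long)(ptr))
#define IS_ERR(ptr)	((unsigned long)(ptr) >= (unsigned long)-4095)

/*
 * Allocate and copy in one step, returning either a valid buffer or
 * an error-encoded pointer, so callers need exactly one check — the
 * shape the reworked ioctl paths above rely on.
 */
static void *memdup_buf(const void *src, size_t len)
{
	void *p = malloc(len);

	if (!p)
		return ERR_PTR(-ENOMEM);

	return memcpy(p, src, len);
}
```

Folding allocation and copy into one call also removes the separate `kfree()` error path that the old `kzalloc()` + `copy_from_user()` sequence needed.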
+112
drivers/crypto/intel/qat/qat_common/adf_gen6_tl.c
··· 21 21 22 22 #define SLICE_IDX(sl) offsetof(struct icp_qat_fw_init_admin_slice_cnt, sl##_cnt) 23 23 24 + #define ADF_GEN6_TL_CMDQ_WAIT_COUNTER(_name) \ 25 + ADF_TL_COUNTER("cmdq_wait_" #_name, ADF_TL_SIMPLE_COUNT, \ 26 + ADF_TL_CMDQ_REG_OFF(_name, reg_tm_cmdq_wait_cnt, gen6)) 27 + #define ADF_GEN6_TL_CMDQ_EXEC_COUNTER(_name) \ 28 + ADF_TL_COUNTER("cmdq_exec_" #_name, ADF_TL_SIMPLE_COUNT, \ 29 + ADF_TL_CMDQ_REG_OFF(_name, reg_tm_cmdq_exec_cnt, gen6)) 30 + #define ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(_name) \ 31 + ADF_TL_COUNTER("cmdq_drain_" #_name, ADF_TL_SIMPLE_COUNT, \ 32 + ADF_TL_CMDQ_REG_OFF(_name, reg_tm_cmdq_drain_cnt, \ 33 + gen6)) 34 + 35 + #define CPR_QUEUE_COUNT 5 36 + #define DCPR_QUEUE_COUNT 3 37 + #define PKE_QUEUE_COUNT 1 38 + #define WAT_QUEUE_COUNT 7 39 + #define WCP_QUEUE_COUNT 7 40 + #define USC_QUEUE_COUNT 3 41 + #define ATH_QUEUE_COUNT 2 42 + 24 43 /* Device level counters. */ 25 44 static const struct adf_tl_dbg_counter dev_counters[] = { 26 45 /* PCIe partial transactions. */ ··· 76 57 /* Maximum uTLB used. 
*/ 77 58 ADF_TL_COUNTER(AT_MAX_UTLB_USED_NAME, ADF_TL_SIMPLE_COUNT, 78 59 ADF_GEN6_TL_DEV_REG_OFF(reg_tl_at_max_utlb_used)), 60 + /* Ring Empty average[ns] across all rings */ 61 + ADF_TL_COUNTER_LATENCY(RE_ACC_NAME, ADF_TL_COUNTER_NS_AVG, 62 + ADF_GEN6_TL_DEV_REG_OFF(reg_tl_re_acc), 63 + ADF_GEN6_TL_DEV_REG_OFF(reg_tl_re_cnt)), 79 64 }; 80 65 81 66 /* Accelerator utilization counters */ ··· 118 95 [SLICE_IDX(ath)] = ADF_GEN6_TL_SL_EXEC_COUNTER(ath), 119 96 }; 120 97 98 + static const struct adf_tl_dbg_counter cnv_cmdq_counters[] = { 99 + ADF_GEN6_TL_CMDQ_WAIT_COUNTER(cnv), 100 + ADF_GEN6_TL_CMDQ_EXEC_COUNTER(cnv), 101 + ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(cnv) 102 + }; 103 + 104 + #define NUM_CMDQ_COUNTERS ARRAY_SIZE(cnv_cmdq_counters) 105 + 106 + static const struct adf_tl_dbg_counter dcprz_cmdq_counters[] = { 107 + ADF_GEN6_TL_CMDQ_WAIT_COUNTER(dcprz), 108 + ADF_GEN6_TL_CMDQ_EXEC_COUNTER(dcprz), 109 + ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(dcprz) 110 + }; 111 + 112 + static_assert(ARRAY_SIZE(dcprz_cmdq_counters) == NUM_CMDQ_COUNTERS); 113 + 114 + static const struct adf_tl_dbg_counter pke_cmdq_counters[] = { 115 + ADF_GEN6_TL_CMDQ_WAIT_COUNTER(pke), 116 + ADF_GEN6_TL_CMDQ_EXEC_COUNTER(pke), 117 + ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(pke) 118 + }; 119 + 120 + static_assert(ARRAY_SIZE(pke_cmdq_counters) == NUM_CMDQ_COUNTERS); 121 + 122 + static const struct adf_tl_dbg_counter wat_cmdq_counters[] = { 123 + ADF_GEN6_TL_CMDQ_WAIT_COUNTER(wat), 124 + ADF_GEN6_TL_CMDQ_EXEC_COUNTER(wat), 125 + ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(wat) 126 + }; 127 + 128 + static_assert(ARRAY_SIZE(wat_cmdq_counters) == NUM_CMDQ_COUNTERS); 129 + 130 + static const struct adf_tl_dbg_counter wcp_cmdq_counters[] = { 131 + ADF_GEN6_TL_CMDQ_WAIT_COUNTER(wcp), 132 + ADF_GEN6_TL_CMDQ_EXEC_COUNTER(wcp), 133 + ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(wcp) 134 + }; 135 + 136 + static_assert(ARRAY_SIZE(wcp_cmdq_counters) == NUM_CMDQ_COUNTERS); 137 + 138 + static const struct adf_tl_dbg_counter ucs_cmdq_counters[] = { 139 + 
ADF_GEN6_TL_CMDQ_WAIT_COUNTER(ucs), 140 + ADF_GEN6_TL_CMDQ_EXEC_COUNTER(ucs), 141 + ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(ucs) 142 + }; 143 + 144 + static_assert(ARRAY_SIZE(ucs_cmdq_counters) == NUM_CMDQ_COUNTERS); 145 + 146 + static const struct adf_tl_dbg_counter ath_cmdq_counters[] = { 147 + ADF_GEN6_TL_CMDQ_WAIT_COUNTER(ath), 148 + ADF_GEN6_TL_CMDQ_EXEC_COUNTER(ath), 149 + ADF_GEN6_TL_CMDQ_DRAIN_COUNTER(ath) 150 + }; 151 + 152 + static_assert(ARRAY_SIZE(ath_cmdq_counters) == NUM_CMDQ_COUNTERS); 153 + 154 + /* CMDQ drain counters. */ 155 + static const struct adf_tl_dbg_counter *cmdq_counters[ADF_TL_SL_CNT_COUNT] = { 156 + /* Compression accelerator execution count. */ 157 + [SLICE_IDX(cpr)] = cnv_cmdq_counters, 158 + /* Decompression accelerator execution count. */ 159 + [SLICE_IDX(dcpr)] = dcprz_cmdq_counters, 160 + /* PKE execution count. */ 161 + [SLICE_IDX(pke)] = pke_cmdq_counters, 162 + /* Wireless Authentication accelerator execution count. */ 163 + [SLICE_IDX(wat)] = wat_cmdq_counters, 164 + /* Wireless Cipher accelerator execution count. */ 165 + [SLICE_IDX(wcp)] = wcp_cmdq_counters, 166 + /* UCS accelerator execution count. */ 167 + [SLICE_IDX(ucs)] = ucs_cmdq_counters, 168 + /* Authentication accelerator execution count. */ 169 + [SLICE_IDX(ath)] = ath_cmdq_counters, 170 + }; 171 + 121 172 /* Ring pair counters. */ 122 173 static const struct adf_tl_dbg_counter rp_counters[] = { 123 174 /* PCIe partial transactions. */ ··· 219 122 /* Payload DevTLB miss rate. */ 220 123 ADF_TL_COUNTER(AT_PAYLD_DTLB_MISS_NAME, ADF_TL_SIMPLE_COUNT, 221 124 ADF_GEN6_TL_RP_REG_OFF(reg_tl_at_payld_devtlb_miss)), 125 + /* Ring Empty average[ns]. 
*/ 126 + ADF_TL_COUNTER_LATENCY(RE_ACC_NAME, ADF_TL_COUNTER_NS_AVG, 127 + ADF_GEN6_TL_RP_REG_OFF(reg_tl_re_acc), 128 + ADF_GEN6_TL_RP_REG_OFF(reg_tl_re_cnt)), 222 129 }; 223 130 224 131 void adf_gen6_init_tl_data(struct adf_tl_hw_data *tl_data) 225 132 { 226 133 tl_data->layout_sz = ADF_GEN6_TL_LAYOUT_SZ; 227 134 tl_data->slice_reg_sz = ADF_GEN6_TL_SLICE_REG_SZ; 135 + tl_data->cmdq_reg_sz = ADF_GEN6_TL_CMDQ_REG_SZ; 228 136 tl_data->rp_reg_sz = ADF_GEN6_TL_RP_REG_SZ; 229 137 tl_data->num_hbuff = ADF_GEN6_TL_NUM_HIST_BUFFS; 230 138 tl_data->max_rp = ADF_GEN6_TL_MAX_RP_NUM; ··· 241 139 tl_data->num_dev_counters = ARRAY_SIZE(dev_counters); 242 140 tl_data->sl_util_counters = sl_util_counters; 243 141 tl_data->sl_exec_counters = sl_exec_counters; 142 + tl_data->cmdq_counters = cmdq_counters; 143 + tl_data->num_cmdq_counters = NUM_CMDQ_COUNTERS; 244 144 tl_data->rp_counters = rp_counters; 245 145 tl_data->num_rp_counters = ARRAY_SIZE(rp_counters); 246 146 tl_data->max_sl_cnt = ADF_GEN6_TL_MAX_SLICES_PER_TYPE; 147 + 148 + tl_data->multiplier.cpr_cnt = CPR_QUEUE_COUNT; 149 + tl_data->multiplier.dcpr_cnt = DCPR_QUEUE_COUNT; 150 + tl_data->multiplier.pke_cnt = PKE_QUEUE_COUNT; 151 + tl_data->multiplier.wat_cnt = WAT_QUEUE_COUNT; 152 + tl_data->multiplier.wcp_cnt = WCP_QUEUE_COUNT; 153 + tl_data->multiplier.ucs_cnt = USC_QUEUE_COUNT; 154 + tl_data->multiplier.ath_cnt = ATH_QUEUE_COUNT; 247 155 } 248 156 EXPORT_SYMBOL_GPL(adf_gen6_init_tl_data);
+19
drivers/crypto/intel/qat/qat_common/adf_telemetry.c
··· 212 212 return ret; 213 213 } 214 214 215 + static void adf_set_cmdq_cnt(struct adf_accel_dev *accel_dev, 216 + struct adf_tl_hw_data *tl_data) 217 + { 218 + struct icp_qat_fw_init_admin_slice_cnt *slice_cnt, *cmdq_cnt; 219 + 220 + slice_cnt = &accel_dev->telemetry->slice_cnt; 221 + cmdq_cnt = &accel_dev->telemetry->cmdq_cnt; 222 + 223 + cmdq_cnt->cpr_cnt = slice_cnt->cpr_cnt * tl_data->multiplier.cpr_cnt; 224 + cmdq_cnt->dcpr_cnt = slice_cnt->dcpr_cnt * tl_data->multiplier.dcpr_cnt; 225 + cmdq_cnt->pke_cnt = slice_cnt->pke_cnt * tl_data->multiplier.pke_cnt; 226 + cmdq_cnt->wat_cnt = slice_cnt->wat_cnt * tl_data->multiplier.wat_cnt; 227 + cmdq_cnt->wcp_cnt = slice_cnt->wcp_cnt * tl_data->multiplier.wcp_cnt; 228 + cmdq_cnt->ucs_cnt = slice_cnt->ucs_cnt * tl_data->multiplier.ucs_cnt; 229 + cmdq_cnt->ath_cnt = slice_cnt->ath_cnt * tl_data->multiplier.ath_cnt; 230 + } 231 + 215 232 int adf_tl_run(struct adf_accel_dev *accel_dev, int state) 216 233 { 217 234 struct adf_tl_hw_data *tl_data = &GET_TL_DATA(accel_dev); ··· 251 234 adf_send_admin_tl_stop(accel_dev); 252 235 return ret; 253 236 } 237 + 238 + adf_set_cmdq_cnt(accel_dev, tl_data); 254 239 255 240 telemetry->hbuffs = state; 256 241 atomic_set(&telemetry->state, state);
+5
drivers/crypto/intel/qat/qat_common/adf_telemetry.h
··· 28 28 struct adf_tl_hw_data { 29 29 size_t layout_sz; 30 30 size_t slice_reg_sz; 31 + size_t cmdq_reg_sz; 31 32 size_t rp_reg_sz; 32 33 size_t msg_cnt_off; 33 34 const struct adf_tl_dbg_counter *dev_counters; 34 35 const struct adf_tl_dbg_counter *sl_util_counters; 35 36 const struct adf_tl_dbg_counter *sl_exec_counters; 37 + const struct adf_tl_dbg_counter **cmdq_counters; 36 38 const struct adf_tl_dbg_counter *rp_counters; 37 39 u8 num_hbuff; 38 40 u8 cpp_ns_per_cycle; 39 41 u8 bw_units_to_bytes; 40 42 u8 num_dev_counters; 41 43 u8 num_rp_counters; 44 + u8 num_cmdq_counters; 42 45 u8 max_rp; 43 46 u8 max_sl_cnt; 47 + struct icp_qat_fw_init_admin_slice_cnt multiplier; 44 48 }; 45 49 46 50 struct adf_telemetry { ··· 73 69 struct mutex wr_lock; 74 70 struct delayed_work work_ctx; 75 71 struct icp_qat_fw_init_admin_slice_cnt slice_cnt; 72 + struct icp_qat_fw_init_admin_slice_cnt cmdq_cnt; 76 73 }; 77 74 78 75 #ifdef CONFIG_DEBUG_FS
+52
drivers/crypto/intel/qat/qat_common/adf_tl_debugfs.c
··· 339 339 return 0; 340 340 } 341 341 342 + static int tl_print_cmdq_counter(struct adf_telemetry *telemetry, 343 + const struct adf_tl_dbg_counter *ctr, 344 + struct seq_file *s, u8 cnt_id, u8 counter) 345 + { 346 + size_t cmdq_regs_sz = GET_TL_DATA(telemetry->accel_dev).cmdq_reg_sz; 347 + size_t offset_inc = cnt_id * cmdq_regs_sz; 348 + struct adf_tl_dbg_counter slice_ctr; 349 + char cnt_name[MAX_COUNT_NAME_SIZE]; 350 + 351 + slice_ctr = *(ctr + counter); 352 + slice_ctr.offset1 += offset_inc; 353 + snprintf(cnt_name, MAX_COUNT_NAME_SIZE, "%s%d", slice_ctr.name, cnt_id); 354 + 355 + return tl_calc_and_print_counter(telemetry, s, &slice_ctr, cnt_name); 356 + } 357 + 358 + static int tl_calc_and_print_cmdq_counters(struct adf_accel_dev *accel_dev, 359 + struct seq_file *s, u8 cnt_type, 360 + u8 cnt_id) 361 + { 362 + struct adf_tl_hw_data *tl_data = &GET_TL_DATA(accel_dev); 363 + struct adf_telemetry *telemetry = accel_dev->telemetry; 364 + const struct adf_tl_dbg_counter **cmdq_tl_counters; 365 + const struct adf_tl_dbg_counter *ctr; 366 + u8 counter; 367 + int ret; 368 + 369 + cmdq_tl_counters = tl_data->cmdq_counters; 370 + ctr = cmdq_tl_counters[cnt_type]; 371 + 372 + for (counter = 0; counter < tl_data->num_cmdq_counters; counter++) { 373 + ret = tl_print_cmdq_counter(telemetry, ctr, s, cnt_id, counter); 374 + if (ret) { 375 + dev_notice(&GET_DEV(accel_dev), 376 + "invalid slice utilization counter type\n"); 377 + return ret; 378 + } 379 + } 380 + 381 + return 0; 382 + } 383 + 342 384 static void tl_print_msg_cnt(struct seq_file *s, u32 msg_cnt) 343 385 { 344 386 seq_printf(s, "%-*s", TL_KEY_MIN_PADDING, SNAPSHOT_CNT_MSG); ··· 394 352 struct adf_telemetry *telemetry = accel_dev->telemetry; 395 353 const struct adf_tl_dbg_counter *dev_tl_counters; 396 354 u8 num_dev_counters = tl_data->num_dev_counters; 355 + u8 *cmdq_cnt = (u8 *)&telemetry->cmdq_cnt; 397 356 u8 *sl_cnt = (u8 *)&telemetry->slice_cnt; 398 357 const struct adf_tl_dbg_counter *ctr; 399 358 
unsigned int i; ··· 425 382 for (i = 0; i < ADF_TL_SL_CNT_COUNT; i++) { 426 383 for (j = 0; j < sl_cnt[i]; j++) { 427 384 ret = tl_calc_and_print_sl_counters(accel_dev, s, i, j); 385 + if (ret) 386 + return ret; 387 + } 388 + } 389 + 390 + /* Print per command queue telemetry. */ 391 + for (i = 0; i < ADF_TL_SL_CNT_COUNT; i++) { 392 + for (j = 0; j < cmdq_cnt[i]; j++) { 393 + ret = tl_calc_and_print_cmdq_counters(accel_dev, s, i, j); 428 394 if (ret) 429 395 return ret; 430 396 }
+5
drivers/crypto/intel/qat/qat_common/adf_tl_debugfs.h
··· 17 17 #define LAT_ACC_NAME "gp_lat_acc_avg" 18 18 #define BW_IN_NAME "bw_in" 19 19 #define BW_OUT_NAME "bw_out" 20 + #define RE_ACC_NAME "re_acc_avg" 20 21 #define PAGE_REQ_LAT_NAME "at_page_req_lat_avg" 21 22 #define AT_TRANS_LAT_NAME "at_trans_lat_avg" 22 23 #define AT_MAX_UTLB_USED_NAME "at_max_tlb_used" ··· 43 42 #define ADF_TL_SLICE_REG_OFF(slice, reg, qat_gen) \ 44 43 (ADF_TL_DEV_REG_OFF(slice##_slices[0], qat_gen) + \ 45 44 offsetof(struct adf_##qat_gen##_tl_slice_data_regs, reg)) 45 + 46 + #define ADF_TL_CMDQ_REG_OFF(slice, reg, qat_gen) \ 47 + (ADF_TL_DEV_REG_OFF(slice##_cmdq[0], qat_gen) + \ 48 + offsetof(struct adf_##qat_gen##_tl_cmdq_data_regs, reg)) 46 49 47 50 #define ADF_TL_RP_REG_OFF(reg, qat_gen) \ 48 51 (ADF_TL_DATA_REG_OFF(tl_ring_pairs_data_regs[0], qat_gen) + \
+60 -135
drivers/crypto/intel/qat/qat_common/qat_algs.c
··· 5 5 #include <linux/crypto.h> 6 6 #include <crypto/internal/aead.h> 7 7 #include <crypto/internal/cipher.h> 8 - #include <crypto/internal/hash.h> 9 8 #include <crypto/internal/skcipher.h> 10 9 #include <crypto/aes.h> 11 10 #include <crypto/sha1.h> 12 11 #include <crypto/sha2.h> 13 - #include <crypto/hmac.h> 14 12 #include <crypto/algapi.h> 15 13 #include <crypto/authenc.h> 16 14 #include <crypto/scatterwalk.h> ··· 66 68 dma_addr_t dec_cd_paddr; 67 69 struct icp_qat_fw_la_bulk_req enc_fw_req; 68 70 struct icp_qat_fw_la_bulk_req dec_fw_req; 69 - struct crypto_shash *hash_tfm; 70 71 enum icp_qat_hw_auth_algo qat_hash_alg; 72 + unsigned int hash_digestsize; 73 + unsigned int hash_blocksize; 71 74 struct qat_crypto_instance *inst; 72 - union { 73 - struct sha1_state sha1; 74 - struct sha256_state sha256; 75 - struct sha512_state sha512; 76 - }; 77 - char ipad[SHA512_BLOCK_SIZE]; /* sufficient for SHA-1/SHA-256 as well */ 78 - char opad[SHA512_BLOCK_SIZE]; 79 75 }; 80 76 81 77 struct qat_alg_skcipher_ctx { ··· 86 94 int mode; 87 95 }; 88 96 89 - static int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg) 90 - { 91 - switch (qat_hash_alg) { 92 - case ICP_QAT_HW_AUTH_ALGO_SHA1: 93 - return ICP_QAT_HW_SHA1_STATE1_SZ; 94 - case ICP_QAT_HW_AUTH_ALGO_SHA256: 95 - return ICP_QAT_HW_SHA256_STATE1_SZ; 96 - case ICP_QAT_HW_AUTH_ALGO_SHA512: 97 - return ICP_QAT_HW_SHA512_STATE1_SZ; 98 - default: 99 - return -EFAULT; 100 - } 101 - } 102 - 103 97 static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash, 104 98 struct qat_alg_aead_ctx *ctx, 105 99 const u8 *auth_key, 106 100 unsigned int auth_keylen) 107 101 { 108 - SHASH_DESC_ON_STACK(shash, ctx->hash_tfm); 109 - int block_size = crypto_shash_blocksize(ctx->hash_tfm); 110 - int digest_size = crypto_shash_digestsize(ctx->hash_tfm); 111 - __be32 *hash_state_out; 112 - __be64 *hash512_state_out; 113 - int i, offset; 114 - 115 - memset(ctx->ipad, 0, block_size); 116 - memset(ctx->opad, 0, block_size); 
117 - shash->tfm = ctx->hash_tfm; 118 - 119 - if (auth_keylen > block_size) { 120 - int ret = crypto_shash_digest(shash, auth_key, 121 - auth_keylen, ctx->ipad); 122 - if (ret) 123 - return ret; 124 - 125 - memcpy(ctx->opad, ctx->ipad, digest_size); 126 - } else { 127 - memcpy(ctx->ipad, auth_key, auth_keylen); 128 - memcpy(ctx->opad, auth_key, auth_keylen); 129 - } 130 - 131 - for (i = 0; i < block_size; i++) { 132 - char *ipad_ptr = ctx->ipad + i; 133 - char *opad_ptr = ctx->opad + i; 134 - *ipad_ptr ^= HMAC_IPAD_VALUE; 135 - *opad_ptr ^= HMAC_OPAD_VALUE; 136 - } 137 - 138 - if (crypto_shash_init(shash)) 139 - return -EFAULT; 140 - 141 - if (crypto_shash_update(shash, ctx->ipad, block_size)) 142 - return -EFAULT; 143 - 144 - hash_state_out = (__be32 *)hash->sha.state1; 145 - hash512_state_out = (__be64 *)hash_state_out; 146 - 147 102 switch (ctx->qat_hash_alg) { 148 - case ICP_QAT_HW_AUTH_ALGO_SHA1: 149 - if (crypto_shash_export_core(shash, &ctx->sha1)) 150 - return -EFAULT; 151 - for (i = 0; i < digest_size >> 2; i++, hash_state_out++) 152 - *hash_state_out = cpu_to_be32(ctx->sha1.state[i]); 153 - break; 154 - case ICP_QAT_HW_AUTH_ALGO_SHA256: 155 - if (crypto_shash_export_core(shash, &ctx->sha256)) 156 - return -EFAULT; 157 - for (i = 0; i < digest_size >> 2; i++, hash_state_out++) 158 - *hash_state_out = cpu_to_be32(ctx->sha256.state[i]); 159 - break; 160 - case ICP_QAT_HW_AUTH_ALGO_SHA512: 161 - if (crypto_shash_export_core(shash, &ctx->sha512)) 162 - return -EFAULT; 163 - for (i = 0; i < digest_size >> 3; i++, hash512_state_out++) 164 - *hash512_state_out = cpu_to_be64(ctx->sha512.state[i]); 165 - break; 103 + case ICP_QAT_HW_AUTH_ALGO_SHA1: { 104 + struct hmac_sha1_key key; 105 + __be32 *istate = (__be32 *)hash->sha.state1; 106 + __be32 *ostate = (__be32 *)(hash->sha.state1 + 107 + round_up(sizeof(key.istate.h), 8)); 108 + 109 + hmac_sha1_preparekey(&key, auth_key, auth_keylen); 110 + for (int i = 0; i < ARRAY_SIZE(key.istate.h); i++) { 111 + istate[i] = 
cpu_to_be32(key.istate.h[i]); 112 + ostate[i] = cpu_to_be32(key.ostate.h[i]); 113 + } 114 + memzero_explicit(&key, sizeof(key)); 115 + return 0; 116 + } 117 + case ICP_QAT_HW_AUTH_ALGO_SHA256: { 118 + struct hmac_sha256_key key; 119 + __be32 *istate = (__be32 *)hash->sha.state1; 120 + __be32 *ostate = (__be32 *)(hash->sha.state1 + 121 + sizeof(key.key.istate.h)); 122 + 123 + hmac_sha256_preparekey(&key, auth_key, auth_keylen); 124 + for (int i = 0; i < ARRAY_SIZE(key.key.istate.h); i++) { 125 + istate[i] = cpu_to_be32(key.key.istate.h[i]); 126 + ostate[i] = cpu_to_be32(key.key.ostate.h[i]); 127 + } 128 + memzero_explicit(&key, sizeof(key)); 129 + return 0; 130 + } 131 + case ICP_QAT_HW_AUTH_ALGO_SHA512: { 132 + struct hmac_sha512_key key; 133 + __be64 *istate = (__be64 *)hash->sha.state1; 134 + __be64 *ostate = (__be64 *)(hash->sha.state1 + 135 + sizeof(key.key.istate.h)); 136 + 137 + hmac_sha512_preparekey(&key, auth_key, auth_keylen); 138 + for (int i = 0; i < ARRAY_SIZE(key.key.istate.h); i++) { 139 + istate[i] = cpu_to_be64(key.key.istate.h[i]); 140 + ostate[i] = cpu_to_be64(key.key.ostate.h[i]); 141 + } 142 + memzero_explicit(&key, sizeof(key)); 143 + return 0; 144 + } 166 145 default: 167 146 return -EFAULT; 168 147 } 169 - 170 - if (crypto_shash_init(shash)) 171 - return -EFAULT; 172 - 173 - if (crypto_shash_update(shash, ctx->opad, block_size)) 174 - return -EFAULT; 175 - 176 - offset = round_up(qat_get_inter_state_size(ctx->qat_hash_alg), 8); 177 - if (offset < 0) 178 - return -EFAULT; 179 - 180 - hash_state_out = (__be32 *)(hash->sha.state1 + offset); 181 - hash512_state_out = (__be64 *)hash_state_out; 182 - 183 - switch (ctx->qat_hash_alg) { 184 - case ICP_QAT_HW_AUTH_ALGO_SHA1: 185 - if (crypto_shash_export_core(shash, &ctx->sha1)) 186 - return -EFAULT; 187 - for (i = 0; i < digest_size >> 2; i++, hash_state_out++) 188 - *hash_state_out = cpu_to_be32(ctx->sha1.state[i]); 189 - break; 190 - case ICP_QAT_HW_AUTH_ALGO_SHA256: 191 - if 
(crypto_shash_export_core(shash, &ctx->sha256)) 192 - return -EFAULT; 193 - for (i = 0; i < digest_size >> 2; i++, hash_state_out++) 194 - *hash_state_out = cpu_to_be32(ctx->sha256.state[i]); 195 - break; 196 - case ICP_QAT_HW_AUTH_ALGO_SHA512: 197 - if (crypto_shash_export_core(shash, &ctx->sha512)) 198 - return -EFAULT; 199 - for (i = 0; i < digest_size >> 3; i++, hash512_state_out++) 200 - *hash512_state_out = cpu_to_be64(ctx->sha512.state[i]); 201 - break; 202 - default: 203 - return -EFAULT; 204 - } 205 - memzero_explicit(ctx->ipad, block_size); 206 - memzero_explicit(ctx->opad, block_size); 207 - return 0; 208 148 } 209 149 210 150 static void qat_alg_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header) ··· 183 259 ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE1, 184 260 ctx->qat_hash_alg, digestsize); 185 261 hash->sha.inner_setup.auth_counter.counter = 186 - cpu_to_be32(crypto_shash_blocksize(ctx->hash_tfm)); 262 + cpu_to_be32(ctx->hash_blocksize); 187 263 188 264 if (qat_alg_do_precomputes(hash, ctx, keys->authkey, keys->authkeylen)) 189 265 return -EFAULT; ··· 250 326 struct icp_qat_hw_cipher_algo_blk *cipher = 251 327 (struct icp_qat_hw_cipher_algo_blk *)((char *)dec_ctx + 252 328 sizeof(struct icp_qat_hw_auth_setup) + 253 - roundup(crypto_shash_digestsize(ctx->hash_tfm), 8) * 2); 329 + roundup(ctx->hash_digestsize, 8) * 2); 254 330 struct icp_qat_fw_la_bulk_req *req_tmpl = &ctx->dec_fw_req; 255 331 struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars; 256 332 struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr; ··· 270 346 ctx->qat_hash_alg, 271 347 digestsize); 272 348 hash->sha.inner_setup.auth_counter.counter = 273 - cpu_to_be32(crypto_shash_blocksize(ctx->hash_tfm)); 349 + cpu_to_be32(ctx->hash_blocksize); 274 350 275 351 if (qat_alg_do_precomputes(hash, ctx, keys->authkey, keys->authkeylen)) 276 352 return -EFAULT; ··· 292 368 cipher_cd_ctrl->cipher_state_sz = AES_BLOCK_SIZE >> 3; 293 369 
cipher_cd_ctrl->cipher_cfg_offset = 294 370 (sizeof(struct icp_qat_hw_auth_setup) + 295 - roundup(crypto_shash_digestsize(ctx->hash_tfm), 8) * 2) >> 3; 371 + roundup(ctx->hash_digestsize, 8) * 2) >> 3; 296 372 ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_CIPHER); 297 373 ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR); 298 374 ··· 1074 1150 } 1075 1151 1076 1152 static int qat_alg_aead_init(struct crypto_aead *tfm, 1077 - enum icp_qat_hw_auth_algo hash, 1078 - const char *hash_name) 1153 + enum icp_qat_hw_auth_algo hash_alg, 1154 + unsigned int hash_digestsize, 1155 + unsigned int hash_blocksize) 1079 1156 { 1080 1157 struct qat_alg_aead_ctx *ctx = crypto_aead_ctx(tfm); 1081 1158 1082 - ctx->hash_tfm = crypto_alloc_shash(hash_name, 0, 0); 1083 - if (IS_ERR(ctx->hash_tfm)) 1084 - return PTR_ERR(ctx->hash_tfm); 1085 - ctx->qat_hash_alg = hash; 1159 + ctx->qat_hash_alg = hash_alg; 1160 + ctx->hash_digestsize = hash_digestsize; 1161 + ctx->hash_blocksize = hash_blocksize; 1086 1162 crypto_aead_set_reqsize(tfm, sizeof(struct qat_crypto_request)); 1087 1163 return 0; 1088 1164 } 1089 1165 1090 1166 static int qat_alg_aead_sha1_init(struct crypto_aead *tfm) 1091 1167 { 1092 - return qat_alg_aead_init(tfm, ICP_QAT_HW_AUTH_ALGO_SHA1, "sha1"); 1168 + return qat_alg_aead_init(tfm, ICP_QAT_HW_AUTH_ALGO_SHA1, 1169 + SHA1_DIGEST_SIZE, SHA1_BLOCK_SIZE); 1093 1170 } 1094 1171 1095 1172 static int qat_alg_aead_sha256_init(struct crypto_aead *tfm) 1096 1173 { 1097 - return qat_alg_aead_init(tfm, ICP_QAT_HW_AUTH_ALGO_SHA256, "sha256"); 1174 + return qat_alg_aead_init(tfm, ICP_QAT_HW_AUTH_ALGO_SHA256, 1175 + SHA256_DIGEST_SIZE, SHA256_BLOCK_SIZE); 1098 1176 } 1099 1177 1100 1178 static int qat_alg_aead_sha512_init(struct crypto_aead *tfm) 1101 1179 { 1102 - return qat_alg_aead_init(tfm, ICP_QAT_HW_AUTH_ALGO_SHA512, "sha512"); 1180 + return qat_alg_aead_init(tfm, ICP_QAT_HW_AUTH_ALGO_SHA512, 1181 + SHA512_DIGEST_SIZE, SHA512_BLOCK_SIZE); 1103 1182 
} 1104 1183 1105 1184 static void qat_alg_aead_exit(struct crypto_aead *tfm) ··· 1110 1183 struct qat_alg_aead_ctx *ctx = crypto_aead_ctx(tfm); 1111 1184 struct qat_crypto_instance *inst = ctx->inst; 1112 1185 struct device *dev; 1113 - 1114 - crypto_free_shash(ctx->hash_tfm); 1115 1186 1116 1187 if (!inst) 1117 1188 return;
+1 -1
drivers/crypto/intel/qat/qat_common/qat_uclo.c
··· 1900 1900 if (sobj_hdr) 1901 1901 sobj_chunk_num = sobj_hdr->num_chunks; 1902 1902 1903 - mobj_hdr = kzalloc((uobj_chunk_num + sobj_chunk_num) * 1903 + mobj_hdr = kcalloc(size_add(uobj_chunk_num, sobj_chunk_num), 1904 1904 sizeof(*mobj_hdr), GFP_KERNEL); 1905 1905 if (!mobj_hdr) 1906 1906 return -ENOMEM;
+1 -1
drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
··· 1615 1615 return -EINVAL; 1616 1616 } 1617 1617 err_msg = "Invalid engine group format"; 1618 - strscpy(tmp_buf, ctx->val.vstr, strlen(ctx->val.vstr) + 1); 1618 + strscpy(tmp_buf, ctx->val.vstr); 1619 1619 start = tmp_buf; 1620 1620 1621 1621 has_se = has_ie = has_ae = false;
+4 -2
drivers/crypto/nx/nx-common-powernv.c
··· 1043 1043 .base.cra_priority = 300, 1044 1044 .base.cra_module = THIS_MODULE, 1045 1045 1046 - .alloc_ctx = nx842_powernv_crypto_alloc_ctx, 1047 - .free_ctx = nx842_crypto_free_ctx, 1046 + .streams = { 1047 + .alloc_ctx = nx842_powernv_crypto_alloc_ctx, 1048 + .free_ctx = nx842_crypto_free_ctx, 1049 + }, 1048 1050 .compress = nx842_crypto_compress, 1049 1051 .decompress = nx842_crypto_decompress, 1050 1052 };
+4 -2
drivers/crypto/nx/nx-common-pseries.c
··· 1020 1020 .base.cra_priority = 300, 1021 1021 .base.cra_module = THIS_MODULE, 1022 1022 1023 - .alloc_ctx = nx842_pseries_crypto_alloc_ctx, 1024 - .free_ctx = nx842_crypto_free_ctx, 1023 + .streams = { 1024 + .alloc_ctx = nx842_pseries_crypto_alloc_ctx, 1025 + .free_ctx = nx842_crypto_free_ctx, 1026 + }, 1025 1027 .compress = nx842_crypto_compress, 1026 1028 .decompress = nx842_crypto_decompress, 1027 1029 };
+8 -7
drivers/crypto/omap-aes.c
··· 32 32 #include <linux/pm_runtime.h> 33 33 #include <linux/scatterlist.h> 34 34 #include <linux/string.h> 35 + #include <linux/workqueue.h> 35 36 36 37 #include "omap-crypto.h" 37 38 #include "omap-aes.h" ··· 222 221 struct omap_aes_dev *dd = data; 223 222 224 223 /* dma_lch_out - completed */ 225 - tasklet_schedule(&dd->done_task); 224 + queue_work(system_bh_wq, &dd->done_task); 226 225 } 227 226 228 227 static int omap_aes_dma_init(struct omap_aes_dev *dd) ··· 495 494 ((u32 *)ivbuf)[i] = omap_aes_read(dd, AES_REG_IV(dd, i)); 496 495 } 497 496 498 - static void omap_aes_done_task(unsigned long data) 497 + static void omap_aes_done_task(struct work_struct *t) 499 498 { 500 - struct omap_aes_dev *dd = (struct omap_aes_dev *)data; 499 + struct omap_aes_dev *dd = from_work(dd, t, done_task); 501 500 502 501 pr_debug("enter done_task\n"); 503 502 ··· 926 925 927 926 if (!dd->total) 928 927 /* All bytes read! */ 929 - tasklet_schedule(&dd->done_task); 928 + queue_work(system_bh_wq, &dd->done_task); 930 929 else 931 930 /* Enable DATA_IN interrupt for next block */ 932 931 omap_aes_write(dd, AES_REG_IRQ_ENABLE(dd), 0x2); ··· 1141 1140 (reg & dd->pdata->major_mask) >> dd->pdata->major_shift, 1142 1141 (reg & dd->pdata->minor_mask) >> dd->pdata->minor_shift); 1143 1142 1144 - tasklet_init(&dd->done_task, omap_aes_done_task, (unsigned long)dd); 1143 + INIT_WORK(&dd->done_task, omap_aes_done_task); 1145 1144 1146 1145 err = omap_aes_dma_init(dd); 1147 1146 if (err == -EPROBE_DEFER) { ··· 1230 1229 1231 1230 omap_aes_dma_cleanup(dd); 1232 1231 err_irq: 1233 - tasklet_kill(&dd->done_task); 1232 + cancel_work_sync(&dd->done_task); 1234 1233 err_pm_disable: 1235 1234 pm_runtime_disable(dev); 1236 1235 err_res: ··· 1265 1264 1266 1265 crypto_engine_exit(dd->engine); 1267 1266 1268 - tasklet_kill(&dd->done_task); 1267 + cancel_work_sync(&dd->done_task); 1269 1268 omap_aes_dma_cleanup(dd); 1270 1269 pm_runtime_disable(dd->dev); 1271 1270 }
+1 -1
drivers/crypto/omap-aes.h
··· 159 159 unsigned long flags; 160 160 int err; 161 161 162 - struct tasklet_struct done_task; 162 + struct work_struct done_task; 163 163 struct aead_queue aead_queue; 164 164 spinlock_t lock; 165 165
+9 -8
drivers/crypto/omap-des.c
··· 32 32 #include <linux/pm_runtime.h> 33 33 #include <linux/scatterlist.h> 34 34 #include <linux/string.h> 35 + #include <linux/workqueue.h> 35 36 36 37 #include "omap-crypto.h" 37 38 ··· 131 130 unsigned long flags; 132 131 int err; 133 132 134 - struct tasklet_struct done_task; 133 + struct work_struct done_task; 135 134 136 135 struct skcipher_request *req; 137 136 struct crypto_engine *engine; ··· 326 325 struct omap_des_dev *dd = data; 327 326 328 327 /* dma_lch_out - completed */ 329 - tasklet_schedule(&dd->done_task); 328 + queue_work(system_bh_wq, &dd->done_task); 330 329 } 331 330 332 331 static int omap_des_dma_init(struct omap_des_dev *dd) ··· 581 580 omap_des_crypt_dma_start(dd); 582 581 } 583 582 584 - static void omap_des_done_task(unsigned long data) 583 + static void omap_des_done_task(struct work_struct *t) 585 584 { 586 - struct omap_des_dev *dd = (struct omap_des_dev *)data; 585 + struct omap_des_dev *dd = from_work(dd, t, done_task); 587 586 int i; 588 587 589 588 pr_debug("enter done_task\n"); ··· 891 890 892 891 if (!dd->total) 893 892 /* All bytes read! 
*/ 894 - tasklet_schedule(&dd->done_task); 893 + queue_work(system_bh_wq, &dd->done_task); 895 894 else 896 895 /* Enable DATA_IN interrupt for next block */ 897 896 omap_des_write(dd, DES_REG_IRQ_ENABLE(dd), 0x2); ··· 987 986 (reg & dd->pdata->major_mask) >> dd->pdata->major_shift, 988 987 (reg & dd->pdata->minor_mask) >> dd->pdata->minor_shift); 989 988 990 - tasklet_init(&dd->done_task, omap_des_done_task, (unsigned long)dd); 989 + INIT_WORK(&dd->done_task, omap_des_done_task); 991 990 992 991 err = omap_des_dma_init(dd); 993 992 if (err == -EPROBE_DEFER) { ··· 1054 1053 1055 1054 omap_des_dma_cleanup(dd); 1056 1055 err_irq: 1057 - tasklet_kill(&dd->done_task); 1056 + cancel_work_sync(&dd->done_task); 1058 1057 err_get: 1059 1058 pm_runtime_disable(dev); 1060 1059 err_res: ··· 1078 1077 crypto_engine_unregister_skcipher( 1079 1078 &dd->pdata->algs_info[i].algs_list[j]); 1080 1079 1081 - tasklet_kill(&dd->done_task); 1080 + cancel_work_sync(&dd->done_task); 1082 1081 omap_des_dma_cleanup(dd); 1083 1082 pm_runtime_disable(dd->dev); 1084 1083 }
+8 -7
drivers/crypto/omap-sham.c
··· 37 37 #include <linux/scatterlist.h> 38 38 #include <linux/slab.h> 39 39 #include <linux/string.h> 40 + #include <linux/workqueue.h> 40 41 41 42 #define MD5_DIGEST_SIZE 16 42 43 ··· 218 217 int irq; 219 218 int err; 220 219 struct dma_chan *dma_lch; 221 - struct tasklet_struct done_task; 220 + struct work_struct done_task; 222 221 u8 polling_mode; 223 222 u8 xmit_buf[BUFLEN] OMAP_ALIGNED; 224 223 ··· 562 561 struct omap_sham_dev *dd = param; 563 562 564 563 set_bit(FLAGS_DMA_READY, &dd->flags); 565 - tasklet_schedule(&dd->done_task); 564 + queue_work(system_bh_wq, &dd->done_task); 566 565 } 567 566 568 567 static int omap_sham_xmit_dma(struct omap_sham_dev *dd, size_t length, ··· 1704 1703 }, 1705 1704 }; 1706 1705 1707 - static void omap_sham_done_task(unsigned long data) 1706 + static void omap_sham_done_task(struct work_struct *t) 1708 1707 { 1709 - struct omap_sham_dev *dd = (struct omap_sham_dev *)data; 1708 + struct omap_sham_dev *dd = from_work(dd, t, done_task); 1710 1709 int err = 0; 1711 1710 1712 1711 dev_dbg(dd->dev, "%s: flags=%lx\n", __func__, dd->flags); ··· 1740 1739 static irqreturn_t omap_sham_irq_common(struct omap_sham_dev *dd) 1741 1740 { 1742 1741 set_bit(FLAGS_OUTPUT_READY, &dd->flags); 1743 - tasklet_schedule(&dd->done_task); 1742 + queue_work(system_bh_wq, &dd->done_task); 1744 1743 1745 1744 return IRQ_HANDLED; 1746 1745 } ··· 2060 2059 platform_set_drvdata(pdev, dd); 2061 2060 2062 2061 INIT_LIST_HEAD(&dd->list); 2063 - tasklet_init(&dd->done_task, omap_sham_done_task, (unsigned long)dd); 2062 + INIT_WORK(&dd->done_task, omap_sham_done_task); 2064 2063 crypto_init_queue(&dd->queue, OMAP_SHAM_QUEUE_LENGTH); 2065 2064 2066 2065 err = (dev->of_node) ? 
omap_sham_get_res_of(dd, dev, &res) : ··· 2195 2194 &dd->pdata->algs_info[i].algs_list[j]); 2196 2195 dd->pdata->algs_info[i].registered--; 2197 2196 } 2198 - tasklet_kill(&dd->done_task); 2197 + cancel_work_sync(&dd->done_task); 2199 2198 pm_runtime_dont_use_autosuspend(&pdev->dev); 2200 2199 pm_runtime_disable(&pdev->dev); 2201 2200
+1 -1
drivers/crypto/rockchip/rk3288_crypto_ahash.c
··· 254 254 struct rk_ahash_rctx *rctx = ahash_request_ctx(areq); 255 255 struct rk_crypto_info *rkc = rctx->dev; 256 256 257 - dma_unmap_sg(rkc->dev, areq->src, rctx->nrsg, DMA_TO_DEVICE); 257 + dma_unmap_sg(rkc->dev, areq->src, sg_nents(areq->src), DMA_TO_DEVICE); 258 258 } 259 259 260 260 static int rk_hash_run(struct crypto_engine *engine, void *breq)
+4 -8
drivers/crypto/starfive/jh7110-aes.c
··· 511 511 stsg = sg_next(stsg), dtsg = sg_next(dtsg)) { 512 512 src_nents = dma_map_sg(cryp->dev, stsg, 1, DMA_BIDIRECTIONAL); 513 513 if (src_nents == 0) 514 - return dev_err_probe(cryp->dev, -ENOMEM, 515 - "dma_map_sg error\n"); 514 + return -ENOMEM; 516 515 517 516 dst_nents = src_nents; 518 517 len = min(sg_dma_len(stsg), remain); ··· 527 528 for (stsg = src, dtsg = dst;;) { 528 529 src_nents = dma_map_sg(cryp->dev, stsg, 1, DMA_TO_DEVICE); 529 530 if (src_nents == 0) 530 - return dev_err_probe(cryp->dev, -ENOMEM, 531 - "dma_map_sg src error\n"); 531 + return -ENOMEM; 532 532 533 533 dst_nents = dma_map_sg(cryp->dev, dtsg, 1, DMA_FROM_DEVICE); 534 534 if (dst_nents == 0) 535 - return dev_err_probe(cryp->dev, -ENOMEM, 536 - "dma_map_sg dst error\n"); 535 + return -ENOMEM; 537 536 538 537 len = min(sg_dma_len(stsg), sg_dma_len(dtsg)); 539 538 len = min(len, remain); ··· 666 669 if (cryp->assoclen) { 667 670 rctx->adata = kzalloc(cryp->assoclen + AES_BLOCK_SIZE, GFP_KERNEL); 668 671 if (!rctx->adata) 669 - return dev_err_probe(cryp->dev, -ENOMEM, 670 - "Failed to alloc memory for adata"); 672 + return -ENOMEM; 671 673 672 674 if (sg_copy_to_buffer(req->src, sg_nents_for_len(req->src, cryp->assoclen), 673 675 rctx->adata, cryp->assoclen) != cryp->assoclen)
+1 -2
drivers/crypto/starfive/jh7110-hash.c
··· 229 229 for_each_sg(rctx->in_sg, tsg, rctx->in_sg_len, i) { 230 230 src_nents = dma_map_sg(cryp->dev, tsg, 1, DMA_TO_DEVICE); 231 231 if (src_nents == 0) 232 - return dev_err_probe(cryp->dev, -ENOMEM, 233 - "dma_map_sg error\n"); 232 + return -ENOMEM; 234 233 235 234 ret = starfive_hash_dma_xfer(cryp, tsg); 236 235 dma_unmap_sg(cryp->dev, tsg, 1, DMA_TO_DEVICE);
+1 -1
drivers/crypto/stm32/stm32-cryp.c
··· 2781 2781 module_platform_driver(stm32_cryp_driver); 2782 2782 2783 2783 MODULE_AUTHOR("Fabien Dessenne <fabien.dessenne@st.com>"); 2784 - MODULE_DESCRIPTION("STMicrolectronics STM32 CRYP hardware driver"); 2784 + MODULE_DESCRIPTION("STMicroelectronics STM32 CRYP hardware driver"); 2785 2785 MODULE_LICENSE("GPL");
+2 -1
drivers/crypto/tegra/tegra-se-hash.c
··· 400 400 struct tegra_sha_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req)); 401 401 struct tegra_sha_reqctx *rctx = ahash_request_ctx(req); 402 402 struct tegra_se *se = ctx->se; 403 - unsigned int nblks, nresidue, size, ret; 403 + unsigned int nblks, nresidue, size; 404 404 u32 *cpuvaddr = se->cmdbuf->addr; 405 + int ret; 405 406 406 407 nresidue = (req->nbytes + rctx->residue.size) % rctx->blk_size; 407 408 nblks = (req->nbytes + rctx->residue.size) / rctx->blk_size;
+1 -1
drivers/crypto/tegra/tegra-se-main.c
··· 310 310 311 311 se->engine = crypto_engine_alloc_init(dev, 0); 312 312 if (!se->engine) 313 - return dev_err_probe(dev, -ENOMEM, "failed to init crypto engine\n"); 313 + return -ENOMEM; 314 314 315 315 ret = crypto_engine_start(se->engine); 316 316 if (ret) {
+14
drivers/crypto/ti/Kconfig
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + config CRYPTO_DEV_TI_DTHEV2 3 + tristate "Support for TI DTHE V2 cryptography engine" 4 + depends on ARCH_K3 || COMPILE_TEST 5 + select CRYPTO_ENGINE 6 + select CRYPTO_SKCIPHER 7 + select CRYPTO_ECB 8 + select CRYPTO_CBC 9 + help 10 + This enables support for the TI DTHE V2 hw cryptography engine 11 + which can be found on TI K3 SOCs. Selecting this enables use 12 + of hardware offloading for cryptographic algorithms on 13 + these devices, providing enhanced resistance against side-channel 14 + attacks.
+3
drivers/crypto/ti/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + obj-$(CONFIG_CRYPTO_DEV_TI_DTHEV2) += dthev2.o 3 + dthev2-objs := dthev2-common.o dthev2-aes.o
+411
drivers/crypto/ti/dthev2-aes.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * K3 DTHE V2 crypto accelerator driver 4 + * 5 + * Copyright (C) Texas Instruments 2025 - https://www.ti.com 6 + * Author: T Pratham <t-pratham@ti.com> 7 + */ 8 + 9 + #include <crypto/aead.h> 10 + #include <crypto/aes.h> 11 + #include <crypto/algapi.h> 12 + #include <crypto/engine.h> 13 + #include <crypto/internal/aead.h> 14 + #include <crypto/internal/skcipher.h> 15 + 16 + #include "dthev2-common.h" 17 + 18 + #include <linux/delay.h> 19 + #include <linux/dmaengine.h> 20 + #include <linux/dma-mapping.h> 21 + #include <linux/io.h> 22 + #include <linux/scatterlist.h> 23 + 24 + /* Registers */ 25 + 26 + // AES Engine 27 + #define DTHE_P_AES_BASE 0x7000 28 + #define DTHE_P_AES_KEY1_0 0x0038 29 + #define DTHE_P_AES_KEY1_1 0x003C 30 + #define DTHE_P_AES_KEY1_2 0x0030 31 + #define DTHE_P_AES_KEY1_3 0x0034 32 + #define DTHE_P_AES_KEY1_4 0x0028 33 + #define DTHE_P_AES_KEY1_5 0x002C 34 + #define DTHE_P_AES_KEY1_6 0x0020 35 + #define DTHE_P_AES_KEY1_7 0x0024 36 + #define DTHE_P_AES_IV_IN_0 0x0040 37 + #define DTHE_P_AES_IV_IN_1 0x0044 38 + #define DTHE_P_AES_IV_IN_2 0x0048 39 + #define DTHE_P_AES_IV_IN_3 0x004C 40 + #define DTHE_P_AES_CTRL 0x0050 41 + #define DTHE_P_AES_C_LENGTH_0 0x0054 42 + #define DTHE_P_AES_C_LENGTH_1 0x0058 43 + #define DTHE_P_AES_AUTH_LENGTH 0x005C 44 + #define DTHE_P_AES_DATA_IN_OUT 0x0060 45 + 46 + #define DTHE_P_AES_SYSCONFIG 0x0084 47 + #define DTHE_P_AES_IRQSTATUS 0x008C 48 + #define DTHE_P_AES_IRQENABLE 0x0090 49 + 50 + /* Register write values and macros */ 51 + 52 + enum aes_ctrl_mode_masks { 53 + AES_CTRL_ECB_MASK = 0x00, 54 + AES_CTRL_CBC_MASK = BIT(5), 55 + }; 56 + 57 + #define DTHE_AES_CTRL_MODE_CLEAR_MASK ~GENMASK(28, 5) 58 + 59 + #define DTHE_AES_CTRL_DIR_ENC BIT(2) 60 + 61 + #define DTHE_AES_CTRL_KEYSIZE_16B BIT(3) 62 + #define DTHE_AES_CTRL_KEYSIZE_24B BIT(4) 63 + #define DTHE_AES_CTRL_KEYSIZE_32B (BIT(3) | BIT(4)) 64 + 65 + #define DTHE_AES_CTRL_SAVE_CTX_SET BIT(29) 66 + 67 + 
#define DTHE_AES_CTRL_OUTPUT_READY BIT_MASK(0) 68 + #define DTHE_AES_CTRL_INPUT_READY BIT_MASK(1) 69 + #define DTHE_AES_CTRL_SAVED_CTX_READY BIT_MASK(30) 70 + #define DTHE_AES_CTRL_CTX_READY BIT_MASK(31) 71 + 72 + #define DTHE_AES_SYSCONFIG_DMA_DATA_IN_OUT_EN GENMASK(6, 5) 73 + #define DTHE_AES_IRQENABLE_EN_ALL GENMASK(3, 0) 74 + 75 + /* Misc */ 76 + #define AES_IV_SIZE AES_BLOCK_SIZE 77 + #define AES_BLOCK_WORDS (AES_BLOCK_SIZE / sizeof(u32)) 78 + #define AES_IV_WORDS AES_BLOCK_WORDS 79 + 80 + static int dthe_cipher_init_tfm(struct crypto_skcipher *tfm) 81 + { 82 + struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(tfm); 83 + struct dthe_data *dev_data = dthe_get_dev(ctx); 84 + 85 + ctx->dev_data = dev_data; 86 + ctx->keylen = 0; 87 + 88 + return 0; 89 + } 90 + 91 + static int dthe_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) 92 + { 93 + struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(tfm); 94 + 95 + if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 && keylen != AES_KEYSIZE_256) 96 + return -EINVAL; 97 + 98 + ctx->keylen = keylen; 99 + memcpy(ctx->key, key, keylen); 100 + 101 + return 0; 102 + } 103 + 104 + static int dthe_aes_ecb_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) 105 + { 106 + struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(tfm); 107 + 108 + ctx->aes_mode = DTHE_AES_ECB; 109 + 110 + return dthe_aes_setkey(tfm, key, keylen); 111 + } 112 + 113 + static int dthe_aes_cbc_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) 114 + { 115 + struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(tfm); 116 + 117 + ctx->aes_mode = DTHE_AES_CBC; 118 + 119 + return dthe_aes_setkey(tfm, key, keylen); 120 + } 121 + 122 + static void dthe_aes_set_ctrl_key(struct dthe_tfm_ctx *ctx, 123 + struct dthe_aes_req_ctx *rctx, 124 + u32 *iv_in) 125 + { 126 + struct dthe_data *dev_data = dthe_get_dev(ctx); 127 + void __iomem *aes_base_reg = dev_data->regs + DTHE_P_AES_BASE; 128 + u32 ctrl_val = 0; 129 + 130 
+ writel_relaxed(ctx->key[0], aes_base_reg + DTHE_P_AES_KEY1_0); 131 + writel_relaxed(ctx->key[1], aes_base_reg + DTHE_P_AES_KEY1_1); 132 + writel_relaxed(ctx->key[2], aes_base_reg + DTHE_P_AES_KEY1_2); 133 + writel_relaxed(ctx->key[3], aes_base_reg + DTHE_P_AES_KEY1_3); 134 + 135 + if (ctx->keylen > AES_KEYSIZE_128) { 136 + writel_relaxed(ctx->key[4], aes_base_reg + DTHE_P_AES_KEY1_4); 137 + writel_relaxed(ctx->key[5], aes_base_reg + DTHE_P_AES_KEY1_5); 138 + } 139 + if (ctx->keylen == AES_KEYSIZE_256) { 140 + writel_relaxed(ctx->key[6], aes_base_reg + DTHE_P_AES_KEY1_6); 141 + writel_relaxed(ctx->key[7], aes_base_reg + DTHE_P_AES_KEY1_7); 142 + } 143 + 144 + if (rctx->enc) 145 + ctrl_val |= DTHE_AES_CTRL_DIR_ENC; 146 + 147 + if (ctx->keylen == AES_KEYSIZE_128) 148 + ctrl_val |= DTHE_AES_CTRL_KEYSIZE_16B; 149 + else if (ctx->keylen == AES_KEYSIZE_192) 150 + ctrl_val |= DTHE_AES_CTRL_KEYSIZE_24B; 151 + else 152 + ctrl_val |= DTHE_AES_CTRL_KEYSIZE_32B; 153 + 154 + // Write AES mode 155 + ctrl_val &= DTHE_AES_CTRL_MODE_CLEAR_MASK; 156 + switch (ctx->aes_mode) { 157 + case DTHE_AES_ECB: 158 + ctrl_val |= AES_CTRL_ECB_MASK; 159 + break; 160 + case DTHE_AES_CBC: 161 + ctrl_val |= AES_CTRL_CBC_MASK; 162 + break; 163 + } 164 + 165 + if (iv_in) { 166 + ctrl_val |= DTHE_AES_CTRL_SAVE_CTX_SET; 167 + for (int i = 0; i < AES_IV_WORDS; ++i) 168 + writel_relaxed(iv_in[i], 169 + aes_base_reg + DTHE_P_AES_IV_IN_0 + (DTHE_REG_SIZE * i)); 170 + } 171 + 172 + writel_relaxed(ctrl_val, aes_base_reg + DTHE_P_AES_CTRL); 173 + } 174 + 175 + static void dthe_aes_dma_in_callback(void *data) 176 + { 177 + struct skcipher_request *req = (struct skcipher_request *)data; 178 + struct dthe_aes_req_ctx *rctx = skcipher_request_ctx(req); 179 + 180 + complete(&rctx->aes_compl); 181 + } 182 + 183 + static int dthe_aes_run(struct crypto_engine *engine, void *areq) 184 + { 185 + struct skcipher_request *req = container_of(areq, struct skcipher_request, base); 186 + struct dthe_tfm_ctx *ctx = 
crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); 187 + struct dthe_data *dev_data = dthe_get_dev(ctx); 188 + struct dthe_aes_req_ctx *rctx = skcipher_request_ctx(req); 189 + 190 + unsigned int len = req->cryptlen; 191 + struct scatterlist *src = req->src; 192 + struct scatterlist *dst = req->dst; 193 + 194 + int src_nents = sg_nents_for_len(src, len); 195 + int dst_nents; 196 + 197 + int src_mapped_nents; 198 + int dst_mapped_nents; 199 + 200 + bool diff_dst; 201 + enum dma_data_direction src_dir, dst_dir; 202 + 203 + struct device *tx_dev, *rx_dev; 204 + struct dma_async_tx_descriptor *desc_in, *desc_out; 205 + 206 + int ret; 207 + 208 + void __iomem *aes_base_reg = dev_data->regs + DTHE_P_AES_BASE; 209 + 210 + u32 aes_irqenable_val = readl_relaxed(aes_base_reg + DTHE_P_AES_IRQENABLE); 211 + u32 aes_sysconfig_val = readl_relaxed(aes_base_reg + DTHE_P_AES_SYSCONFIG); 212 + 213 + aes_sysconfig_val |= DTHE_AES_SYSCONFIG_DMA_DATA_IN_OUT_EN; 214 + writel_relaxed(aes_sysconfig_val, aes_base_reg + DTHE_P_AES_SYSCONFIG); 215 + 216 + aes_irqenable_val |= DTHE_AES_IRQENABLE_EN_ALL; 217 + writel_relaxed(aes_irqenable_val, aes_base_reg + DTHE_P_AES_IRQENABLE); 218 + 219 + if (src == dst) { 220 + diff_dst = false; 221 + src_dir = DMA_BIDIRECTIONAL; 222 + dst_dir = DMA_BIDIRECTIONAL; 223 + } else { 224 + diff_dst = true; 225 + src_dir = DMA_TO_DEVICE; 226 + dst_dir = DMA_FROM_DEVICE; 227 + } 228 + 229 + tx_dev = dmaengine_get_dma_device(dev_data->dma_aes_tx); 230 + rx_dev = dmaengine_get_dma_device(dev_data->dma_aes_rx); 231 + 232 + src_mapped_nents = dma_map_sg(tx_dev, src, src_nents, src_dir); 233 + if (src_mapped_nents == 0) { 234 + ret = -EINVAL; 235 + goto aes_err; 236 + } 237 + 238 + if (!diff_dst) { 239 + dst_nents = src_nents; 240 + dst_mapped_nents = src_mapped_nents; 241 + } else { 242 + dst_nents = sg_nents_for_len(dst, len); 243 + dst_mapped_nents = dma_map_sg(rx_dev, dst, dst_nents, dst_dir); 244 + if (dst_mapped_nents == 0) { 245 + dma_unmap_sg(tx_dev, src, 
src_nents, src_dir); 246 + ret = -EINVAL; 247 + goto aes_err; 248 + } 249 + } 250 + 251 + desc_in = dmaengine_prep_slave_sg(dev_data->dma_aes_rx, dst, dst_mapped_nents, 252 + DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 253 + if (!desc_in) { 254 + dev_err(dev_data->dev, "IN prep_slave_sg() failed\n"); 255 + ret = -EINVAL; 256 + goto aes_prep_err; 257 + } 258 + 259 + desc_out = dmaengine_prep_slave_sg(dev_data->dma_aes_tx, src, src_mapped_nents, 260 + DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 261 + if (!desc_out) { 262 + dev_err(dev_data->dev, "OUT prep_slave_sg() failed\n"); 263 + ret = -EINVAL; 264 + goto aes_prep_err; 265 + } 266 + 267 + desc_in->callback = dthe_aes_dma_in_callback; 268 + desc_in->callback_param = req; 269 + 270 + init_completion(&rctx->aes_compl); 271 + 272 + if (ctx->aes_mode == DTHE_AES_ECB) 273 + dthe_aes_set_ctrl_key(ctx, rctx, NULL); 274 + else 275 + dthe_aes_set_ctrl_key(ctx, rctx, (u32 *)req->iv); 276 + 277 + writel_relaxed(lower_32_bits(req->cryptlen), aes_base_reg + DTHE_P_AES_C_LENGTH_0); 278 + writel_relaxed(upper_32_bits(req->cryptlen), aes_base_reg + DTHE_P_AES_C_LENGTH_1); 279 + 280 + dmaengine_submit(desc_in); 281 + dmaengine_submit(desc_out); 282 + 283 + dma_async_issue_pending(dev_data->dma_aes_rx); 284 + dma_async_issue_pending(dev_data->dma_aes_tx); 285 + 286 + // Need to do a timeout to ensure finalise gets called if DMA callback fails for any reason 287 + ret = wait_for_completion_timeout(&rctx->aes_compl, msecs_to_jiffies(DTHE_DMA_TIMEOUT_MS)); 288 + if (!ret) { 289 + ret = -ETIMEDOUT; 290 + dmaengine_terminate_sync(dev_data->dma_aes_rx); 291 + dmaengine_terminate_sync(dev_data->dma_aes_tx); 292 + 293 + for (int i = 0; i < AES_BLOCK_WORDS; ++i) 294 + readl_relaxed(aes_base_reg + DTHE_P_AES_DATA_IN_OUT + (DTHE_REG_SIZE * i)); 295 + } else { 296 + ret = 0; 297 + } 298 + 299 + // For modes other than ECB, read IV_OUT 300 + if (ctx->aes_mode != DTHE_AES_ECB) { 301 + u32 *iv_out = (u32 *)req->iv; 302 + 303 + for 
(int i = 0; i < AES_IV_WORDS; ++i) 304 + iv_out[i] = readl_relaxed(aes_base_reg + 305 + DTHE_P_AES_IV_IN_0 + 306 + (DTHE_REG_SIZE * i)); 307 + } 308 + 309 + aes_prep_err: 310 + dma_unmap_sg(tx_dev, src, src_nents, src_dir); 311 + if (dst_dir != DMA_BIDIRECTIONAL) 312 + dma_unmap_sg(rx_dev, dst, dst_nents, dst_dir); 313 + 314 + aes_err: 315 + local_bh_disable(); 316 + crypto_finalize_skcipher_request(dev_data->engine, req, ret); 317 + local_bh_enable(); 318 + return ret; 319 + } 320 + 321 + static int dthe_aes_crypt(struct skcipher_request *req) 322 + { 323 + struct dthe_tfm_ctx *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); 324 + struct dthe_data *dev_data = dthe_get_dev(ctx); 325 + struct crypto_engine *engine; 326 + 327 + /* 328 + * If data is not a multiple of AES_BLOCK_SIZE, need to return -EINVAL 329 + * If data length input is zero, no need to do any operation. 330 + */ 331 + if (req->cryptlen % AES_BLOCK_SIZE) 332 + return -EINVAL; 333 + 334 + if (req->cryptlen == 0) 335 + return 0; 336 + 337 + engine = dev_data->engine; 338 + return crypto_transfer_skcipher_request_to_engine(engine, req); 339 + } 340 + 341 + static int dthe_aes_encrypt(struct skcipher_request *req) 342 + { 343 + struct dthe_aes_req_ctx *rctx = skcipher_request_ctx(req); 344 + 345 + rctx->enc = 1; 346 + return dthe_aes_crypt(req); 347 + } 348 + 349 + static int dthe_aes_decrypt(struct skcipher_request *req) 350 + { 351 + struct dthe_aes_req_ctx *rctx = skcipher_request_ctx(req); 352 + 353 + rctx->enc = 0; 354 + return dthe_aes_crypt(req); 355 + } 356 + 357 + static struct skcipher_engine_alg cipher_algs[] = { 358 + { 359 + .base.init = dthe_cipher_init_tfm, 360 + .base.setkey = dthe_aes_ecb_setkey, 361 + .base.encrypt = dthe_aes_encrypt, 362 + .base.decrypt = dthe_aes_decrypt, 363 + .base.min_keysize = AES_MIN_KEY_SIZE, 364 + .base.max_keysize = AES_MAX_KEY_SIZE, 365 + .base.base = { 366 + .cra_name = "ecb(aes)", 367 + .cra_driver_name = "ecb-aes-dthev2", 368 + .cra_priority = 299, 
369 + .cra_flags = CRYPTO_ALG_TYPE_SKCIPHER | 370 + CRYPTO_ALG_KERN_DRIVER_ONLY, 371 + .cra_alignmask = AES_BLOCK_SIZE - 1, 372 + .cra_blocksize = AES_BLOCK_SIZE, 373 + .cra_ctxsize = sizeof(struct dthe_tfm_ctx), 374 + .cra_reqsize = sizeof(struct dthe_aes_req_ctx), 375 + .cra_module = THIS_MODULE, 376 + }, 377 + .op.do_one_request = dthe_aes_run, 378 + }, /* ECB AES */ 379 + { 380 + .base.init = dthe_cipher_init_tfm, 381 + .base.setkey = dthe_aes_cbc_setkey, 382 + .base.encrypt = dthe_aes_encrypt, 383 + .base.decrypt = dthe_aes_decrypt, 384 + .base.min_keysize = AES_MIN_KEY_SIZE, 385 + .base.max_keysize = AES_MAX_KEY_SIZE, 386 + .base.ivsize = AES_IV_SIZE, 387 + .base.base = { 388 + .cra_name = "cbc(aes)", 389 + .cra_driver_name = "cbc-aes-dthev2", 390 + .cra_priority = 299, 391 + .cra_flags = CRYPTO_ALG_TYPE_SKCIPHER | 392 + CRYPTO_ALG_KERN_DRIVER_ONLY, 393 + .cra_alignmask = AES_BLOCK_SIZE - 1, 394 + .cra_blocksize = AES_BLOCK_SIZE, 395 + .cra_ctxsize = sizeof(struct dthe_tfm_ctx), 396 + .cra_reqsize = sizeof(struct dthe_aes_req_ctx), 397 + .cra_module = THIS_MODULE, 398 + }, 399 + .op.do_one_request = dthe_aes_run, 400 + } /* CBC AES */ 401 + }; 402 + 403 + int dthe_register_aes_algs(void) 404 + { 405 + return crypto_engine_register_skciphers(cipher_algs, ARRAY_SIZE(cipher_algs)); 406 + } 407 + 408 + void dthe_unregister_aes_algs(void) 409 + { 410 + crypto_engine_unregister_skciphers(cipher_algs, ARRAY_SIZE(cipher_algs)); 411 + }
+217
drivers/crypto/ti/dthev2-common.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * K3 DTHE V2 crypto accelerator driver 4 + * 5 + * Copyright (C) Texas Instruments 2025 - https://www.ti.com 6 + * Author: T Pratham <t-pratham@ti.com> 7 + */ 8 + 9 + #include <crypto/aes.h> 10 + #include <crypto/algapi.h> 11 + #include <crypto/engine.h> 12 + #include <crypto/internal/aead.h> 13 + #include <crypto/internal/skcipher.h> 14 + 15 + #include "dthev2-common.h" 16 + 17 + #include <linux/delay.h> 18 + #include <linux/dmaengine.h> 19 + #include <linux/dmapool.h> 20 + #include <linux/dma-mapping.h> 21 + #include <linux/io.h> 22 + #include <linux/kernel.h> 23 + #include <linux/module.h> 24 + #include <linux/mod_devicetable.h> 25 + #include <linux/platform_device.h> 26 + #include <linux/scatterlist.h> 27 + 28 + #define DRIVER_NAME "dthev2" 29 + 30 + static struct dthe_list dthe_dev_list = { 31 + .dev_list = LIST_HEAD_INIT(dthe_dev_list.dev_list), 32 + .lock = __SPIN_LOCK_UNLOCKED(dthe_dev_list.lock), 33 + }; 34 + 35 + struct dthe_data *dthe_get_dev(struct dthe_tfm_ctx *ctx) 36 + { 37 + struct dthe_data *dev_data; 38 + 39 + if (ctx->dev_data) 40 + return ctx->dev_data; 41 + 42 + spin_lock_bh(&dthe_dev_list.lock); 43 + dev_data = list_first_entry(&dthe_dev_list.dev_list, struct dthe_data, list); 44 + if (dev_data) 45 + list_move_tail(&dev_data->list, &dthe_dev_list.dev_list); 46 + spin_unlock_bh(&dthe_dev_list.lock); 47 + 48 + return dev_data; 49 + } 50 + 51 + static int dthe_dma_init(struct dthe_data *dev_data) 52 + { 53 + int ret; 54 + struct dma_slave_config cfg; 55 + 56 + dev_data->dma_aes_rx = NULL; 57 + dev_data->dma_aes_tx = NULL; 58 + dev_data->dma_sha_tx = NULL; 59 + 60 + dev_data->dma_aes_rx = dma_request_chan(dev_data->dev, "rx"); 61 + if (IS_ERR(dev_data->dma_aes_rx)) { 62 + return dev_err_probe(dev_data->dev, PTR_ERR(dev_data->dma_aes_rx), 63 + "Unable to request rx DMA channel\n"); 64 + } 65 + 66 + dev_data->dma_aes_tx = dma_request_chan(dev_data->dev, "tx1"); 67 + if 
(IS_ERR(dev_data->dma_aes_tx)) { 68 + ret = dev_err_probe(dev_data->dev, PTR_ERR(dev_data->dma_aes_tx), 69 + "Unable to request tx1 DMA channel\n"); 70 + goto err_dma_aes_tx; 71 + } 72 + 73 + dev_data->dma_sha_tx = dma_request_chan(dev_data->dev, "tx2"); 74 + if (IS_ERR(dev_data->dma_sha_tx)) { 75 + ret = dev_err_probe(dev_data->dev, PTR_ERR(dev_data->dma_sha_tx), 76 + "Unable to request tx2 DMA channel\n"); 77 + goto err_dma_sha_tx; 78 + } 79 + 80 + memzero_explicit(&cfg, sizeof(cfg)); 81 + 82 + cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; 83 + cfg.src_maxburst = 4; 84 + 85 + ret = dmaengine_slave_config(dev_data->dma_aes_rx, &cfg); 86 + if (ret) { 87 + dev_err(dev_data->dev, "Can't configure IN dmaengine slave: %d\n", ret); 88 + goto err_dma_config; 89 + } 90 + 91 + cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; 92 + cfg.dst_maxburst = 4; 93 + 94 + ret = dmaengine_slave_config(dev_data->dma_aes_tx, &cfg); 95 + if (ret) { 96 + dev_err(dev_data->dev, "Can't configure OUT dmaengine slave: %d\n", ret); 97 + goto err_dma_config; 98 + } 99 + 100 + return 0; 101 + 102 + err_dma_config: 103 + dma_release_channel(dev_data->dma_sha_tx); 104 + err_dma_sha_tx: 105 + dma_release_channel(dev_data->dma_aes_tx); 106 + err_dma_aes_tx: 107 + dma_release_channel(dev_data->dma_aes_rx); 108 + 109 + return ret; 110 + } 111 + 112 + static int dthe_register_algs(void) 113 + { 114 + return dthe_register_aes_algs(); 115 + } 116 + 117 + static void dthe_unregister_algs(void) 118 + { 119 + dthe_unregister_aes_algs(); 120 + } 121 + 122 + static int dthe_probe(struct platform_device *pdev) 123 + { 124 + struct device *dev = &pdev->dev; 125 + struct dthe_data *dev_data; 126 + int ret; 127 + 128 + dev_data = devm_kzalloc(dev, sizeof(*dev_data), GFP_KERNEL); 129 + if (!dev_data) 130 + return -ENOMEM; 131 + 132 + dev_data->dev = dev; 133 + dev_data->regs = devm_platform_ioremap_resource(pdev, 0); 134 + if (IS_ERR(dev_data->regs)) 135 + return PTR_ERR(dev_data->regs); 136 + 137 + 
platform_set_drvdata(pdev, dev_data); 138 + 139 + spin_lock(&dthe_dev_list.lock); 140 + list_add(&dev_data->list, &dthe_dev_list.dev_list); 141 + spin_unlock(&dthe_dev_list.lock); 142 + 143 + ret = dthe_dma_init(dev_data); 144 + if (ret) 145 + goto probe_dma_err; 146 + 147 + dev_data->engine = crypto_engine_alloc_init(dev, 1); 148 + if (!dev_data->engine) { 149 + ret = -ENOMEM; 150 + goto probe_engine_err; 151 + } 152 + 153 + ret = crypto_engine_start(dev_data->engine); 154 + if (ret) { 155 + dev_err(dev, "Failed to start crypto engine\n"); 156 + goto probe_engine_start_err; 157 + } 158 + 159 + ret = dthe_register_algs(); 160 + if (ret) { 161 + dev_err(dev, "Failed to register algs\n"); 162 + goto probe_engine_start_err; 163 + } 164 + 165 + return 0; 166 + 167 + probe_engine_start_err: 168 + crypto_engine_exit(dev_data->engine); 169 + probe_engine_err: 170 + dma_release_channel(dev_data->dma_aes_rx); 171 + dma_release_channel(dev_data->dma_aes_tx); 172 + dma_release_channel(dev_data->dma_sha_tx); 173 + probe_dma_err: 174 + spin_lock(&dthe_dev_list.lock); 175 + list_del(&dev_data->list); 176 + spin_unlock(&dthe_dev_list.lock); 177 + 178 + return ret; 179 + } 180 + 181 + static void dthe_remove(struct platform_device *pdev) 182 + { 183 + struct dthe_data *dev_data = platform_get_drvdata(pdev); 184 + 185 + spin_lock(&dthe_dev_list.lock); 186 + list_del(&dev_data->list); 187 + spin_unlock(&dthe_dev_list.lock); 188 + 189 + dthe_unregister_algs(); 190 + 191 + crypto_engine_exit(dev_data->engine); 192 + 193 + dma_release_channel(dev_data->dma_aes_rx); 194 + dma_release_channel(dev_data->dma_aes_tx); 195 + dma_release_channel(dev_data->dma_sha_tx); 196 + } 197 + 198 + static const struct of_device_id dthe_of_match[] = { 199 + { .compatible = "ti,am62l-dthev2", }, 200 + {}, 201 + }; 202 + MODULE_DEVICE_TABLE(of, dthe_of_match); 203 + 204 + static struct platform_driver dthe_driver = { 205 + .probe = dthe_probe, 206 + .remove = dthe_remove, 207 + .driver = { 208 + .name = 
DRIVER_NAME, 209 + .of_match_table = dthe_of_match, 210 + }, 211 + }; 212 + 213 + module_platform_driver(dthe_driver); 214 + 215 + MODULE_AUTHOR("T Pratham <t-pratham@ti.com>"); 216 + MODULE_DESCRIPTION("Texas Instruments DTHE V2 driver"); 217 + MODULE_LICENSE("GPL");
+101
drivers/crypto/ti/dthev2-common.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * K3 DTHE V2 crypto accelerator driver 4 + * 5 + * Copyright (C) Texas Instruments 2025 - https://www.ti.com 6 + * Author: T Pratham <t-pratham@ti.com> 7 + */ 8 + 9 + #ifndef __TI_DTHEV2_H__ 10 + #define __TI_DTHEV2_H__ 11 + 12 + #include <crypto/aead.h> 13 + #include <crypto/aes.h> 14 + #include <crypto/algapi.h> 15 + #include <crypto/engine.h> 16 + #include <crypto/hash.h> 17 + #include <crypto/internal/aead.h> 18 + #include <crypto/internal/hash.h> 19 + #include <crypto/internal/skcipher.h> 20 + 21 + #include <linux/delay.h> 22 + #include <linux/dmaengine.h> 23 + #include <linux/dmapool.h> 24 + #include <linux/dma-mapping.h> 25 + #include <linux/io.h> 26 + #include <linux/scatterlist.h> 27 + 28 + #define DTHE_REG_SIZE 4 29 + #define DTHE_DMA_TIMEOUT_MS 2000 30 + 31 + enum dthe_aes_mode { 32 + DTHE_AES_ECB = 0, 33 + DTHE_AES_CBC, 34 + }; 35 + 36 + /* Driver specific struct definitions */ 37 + 38 + /** 39 + * struct dthe_data - DTHE_V2 driver instance data 40 + * @dev: Device pointer 41 + * @regs: Base address of the register space 42 + * @list: list node for dev 43 + * @engine: Crypto engine instance 44 + * @dma_aes_rx: AES Rx DMA Channel 45 + * @dma_aes_tx: AES Tx DMA Channel 46 + * @dma_sha_tx: SHA Tx DMA Channel 47 + */ 48 + struct dthe_data { 49 + struct device *dev; 50 + void __iomem *regs; 51 + struct list_head list; 52 + struct crypto_engine *engine; 53 + 54 + struct dma_chan *dma_aes_rx; 55 + struct dma_chan *dma_aes_tx; 56 + 57 + struct dma_chan *dma_sha_tx; 58 + }; 59 + 60 + /** 61 + * struct dthe_list - device data list head 62 + * @dev_list: linked list head 63 + * @lock: Spinlock protecting accesses to the list 64 + */ 65 + struct dthe_list { 66 + struct list_head dev_list; 67 + spinlock_t lock; 68 + }; 69 + 70 + /** 71 + * struct dthe_tfm_ctx - Transform ctx struct containing ctx for all sub-components of DTHE V2 72 + * @dev_data: Device data struct pointer 73 + * @keylen: AES key length 
74 + * @key: AES key 75 + * @aes_mode: AES mode 76 + */ 77 + struct dthe_tfm_ctx { 78 + struct dthe_data *dev_data; 79 + unsigned int keylen; 80 + u32 key[AES_KEYSIZE_256 / sizeof(u32)]; 81 + enum dthe_aes_mode aes_mode; 82 + }; 83 + 84 + /** 85 + * struct dthe_aes_req_ctx - AES engine req ctx struct 86 + * @enc: flag indicating encryption or decryption operation 87 + * @aes_compl: Completion variable for use in manual completion in case of DMA callback failure 88 + */ 89 + struct dthe_aes_req_ctx { 90 + int enc; 91 + struct completion aes_compl; 92 + }; 93 + 94 + /* Struct definitions end */ 95 + 96 + struct dthe_data *dthe_get_dev(struct dthe_tfm_ctx *ctx); 97 + 98 + int dthe_register_aes_algs(void); 99 + void dthe_unregister_aes_algs(void); 100 + 101 + #endif
+1
drivers/crypto/xilinx/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 + obj-$(CONFIG_CRYPTO_DEV_XILINX_TRNG) += xilinx-trng.o 2 3 obj-$(CONFIG_CRYPTO_DEV_ZYNQMP_AES) += zynqmp-aes-gcm.o 3 4 obj-$(CONFIG_CRYPTO_DEV_ZYNQMP_SHA3) += zynqmp-sha.o
+405
drivers/crypto/xilinx/xilinx-trng.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * AMD Versal True Random Number Generator driver 4 + * Copyright (c) 2024 - 2025 Advanced Micro Devices, Inc. 5 + */ 6 + 7 + #include <linux/bitfield.h> 8 + #include <linux/clk.h> 9 + #include <linux/crypto.h> 10 + #include <linux/delay.h> 11 + #include <linux/errno.h> 12 + #include <linux/firmware/xlnx-zynqmp.h> 13 + #include <linux/hw_random.h> 14 + #include <linux/io.h> 15 + #include <linux/iopoll.h> 16 + #include <linux/kernel.h> 17 + #include <linux/module.h> 18 + #include <linux/mutex.h> 19 + #include <linux/mod_devicetable.h> 20 + #include <linux/platform_device.h> 21 + #include <linux/string.h> 22 + #include <crypto/internal/cipher.h> 23 + #include <crypto/internal/rng.h> 24 + #include <crypto/aes.h> 25 + 26 + /* TRNG Registers Offsets */ 27 + #define TRNG_STATUS_OFFSET 0x4U 28 + #define TRNG_CTRL_OFFSET 0x8U 29 + #define TRNG_EXT_SEED_OFFSET 0x40U 30 + #define TRNG_PER_STRNG_OFFSET 0x80U 31 + #define TRNG_CORE_OUTPUT_OFFSET 0xC0U 32 + #define TRNG_RESET_OFFSET 0xD0U 33 + #define TRNG_OSC_EN_OFFSET 0xD4U 34 + 35 + /* Mask values */ 36 + #define TRNG_RESET_VAL_MASK BIT(0) 37 + #define TRNG_OSC_EN_VAL_MASK BIT(0) 38 + #define TRNG_CTRL_PRNGSRST_MASK BIT(0) 39 + #define TRNG_CTRL_EUMODE_MASK BIT(8) 40 + #define TRNG_CTRL_TRSSEN_MASK BIT(2) 41 + #define TRNG_CTRL_PRNGSTART_MASK BIT(5) 42 + #define TRNG_CTRL_PRNGXS_MASK BIT(3) 43 + #define TRNG_CTRL_PRNGMODE_MASK BIT(7) 44 + #define TRNG_STATUS_DONE_MASK BIT(0) 45 + #define TRNG_STATUS_QCNT_MASK GENMASK(11, 9) 46 + #define TRNG_STATUS_QCNT_16_BYTES 0x800 47 + 48 + /* Sizes in bytes */ 49 + #define TRNG_SEED_LEN_BYTES 48U 50 + #define TRNG_ENTROPY_SEED_LEN_BYTES 64U 51 + #define TRNG_SEC_STRENGTH_SHIFT 5U 52 + #define TRNG_SEC_STRENGTH_BYTES BIT(TRNG_SEC_STRENGTH_SHIFT) 53 + #define TRNG_BYTES_PER_REG 4U 54 + #define TRNG_RESET_DELAY 10 55 + #define TRNG_NUM_INIT_REGS 12U 56 + #define TRNG_READ_4_WORD 4 57 + #define TRNG_DATA_READ_DELAY 8000 58 + 59 + struct 
xilinx_rng { 60 + void __iomem *rng_base; 61 + struct device *dev; 62 + struct mutex lock; /* Protect access to TRNG device */ 63 + struct hwrng trng; 64 + }; 65 + 66 + struct xilinx_rng_ctx { 67 + struct xilinx_rng *rng; 68 + }; 69 + 70 + static struct xilinx_rng *xilinx_rng_dev; 71 + 72 + static void xtrng_readwrite32(void __iomem *addr, u32 mask, u8 value) 73 + { 74 + u32 val; 75 + 76 + val = ioread32(addr); 77 + val = (val & (~mask)) | (mask & value); 78 + iowrite32(val, addr); 79 + } 80 + 81 + static void xtrng_trng_reset(void __iomem *addr) 82 + { 83 + xtrng_readwrite32(addr + TRNG_RESET_OFFSET, TRNG_RESET_VAL_MASK, TRNG_RESET_VAL_MASK); 84 + udelay(TRNG_RESET_DELAY); 85 + xtrng_readwrite32(addr + TRNG_RESET_OFFSET, TRNG_RESET_VAL_MASK, 0); 86 + } 87 + 88 + static void xtrng_hold_reset(void __iomem *addr) 89 + { 90 + xtrng_readwrite32(addr + TRNG_CTRL_OFFSET, TRNG_CTRL_PRNGSRST_MASK, 91 + TRNG_CTRL_PRNGSRST_MASK); 92 + iowrite32(TRNG_RESET_VAL_MASK, addr + TRNG_RESET_OFFSET); 93 + udelay(TRNG_RESET_DELAY); 94 + } 95 + 96 + static void xtrng_softreset(struct xilinx_rng *rng) 97 + { 98 + xtrng_readwrite32(rng->rng_base + TRNG_CTRL_OFFSET, TRNG_CTRL_PRNGSRST_MASK, 99 + TRNG_CTRL_PRNGSRST_MASK); 100 + udelay(TRNG_RESET_DELAY); 101 + xtrng_readwrite32(rng->rng_base + TRNG_CTRL_OFFSET, TRNG_CTRL_PRNGSRST_MASK, 0); 102 + } 103 + 104 + /* Return no. of bytes read */ 105 + static size_t xtrng_readblock32(void __iomem *rng_base, __be32 *buf, int blocks32, bool wait) 106 + { 107 + int read = 0, ret; 108 + int timeout = 1; 109 + int i, idx; 110 + u32 val; 111 + 112 + if (wait) 113 + timeout = TRNG_DATA_READ_DELAY; 114 + 115 + for (i = 0; i < (blocks32 * 2); i++) { 116 + /* TRNG core generate data in 16 bytes. 
Read twice to complete 32 bytes read */ 117 + ret = readl_poll_timeout(rng_base + TRNG_STATUS_OFFSET, val, 118 + (val & TRNG_STATUS_QCNT_MASK) == 119 + TRNG_STATUS_QCNT_16_BYTES, !!wait, timeout); 120 + if (ret) 121 + break; 122 + 123 + for (idx = 0; idx < TRNG_READ_4_WORD; idx++) { 124 + *(buf + read) = cpu_to_be32(ioread32(rng_base + TRNG_CORE_OUTPUT_OFFSET)); 125 + read += 1; 126 + } 127 + } 128 + return read * 4; 129 + } 130 + 131 + static int xtrng_collect_random_data(struct xilinx_rng *rng, u8 *rand_gen_buf, 132 + int no_of_random_bytes, bool wait) 133 + { 134 + u8 randbuf[TRNG_SEC_STRENGTH_BYTES]; 135 + int byteleft, blocks, count = 0; 136 + int ret; 137 + 138 + byteleft = no_of_random_bytes & (TRNG_SEC_STRENGTH_BYTES - 1); 139 + blocks = no_of_random_bytes >> TRNG_SEC_STRENGTH_SHIFT; 140 + xtrng_readwrite32(rng->rng_base + TRNG_CTRL_OFFSET, TRNG_CTRL_PRNGSTART_MASK, 141 + TRNG_CTRL_PRNGSTART_MASK); 142 + if (blocks) { 143 + ret = xtrng_readblock32(rng->rng_base, (__be32 *)rand_gen_buf, blocks, wait); 144 + if (!ret) 145 + return 0; 146 + count += ret; 147 + } 148 + 149 + if (byteleft) { 150 + ret = xtrng_readblock32(rng->rng_base, (__be32 *)randbuf, 1, wait); 151 + if (!ret) 152 + return count; 153 + memcpy(rand_gen_buf + (blocks * TRNG_SEC_STRENGTH_BYTES), randbuf, byteleft); 154 + count += byteleft; 155 + } 156 + 157 + xtrng_readwrite32(rng->rng_base + TRNG_CTRL_OFFSET, 158 + TRNG_CTRL_PRNGMODE_MASK | TRNG_CTRL_PRNGSTART_MASK, 0U); 159 + 160 + return count; 161 + } 162 + 163 + static void xtrng_write_multiple_registers(void __iomem *base_addr, u32 *values, size_t n) 164 + { 165 + void __iomem *reg_addr; 166 + size_t i; 167 + 168 + /* Write seed value into EXTERNAL_SEED Registers in big endian format */ 169 + for (i = 0; i < n; i++) { 170 + reg_addr = (base_addr + ((n - 1 - i) * TRNG_BYTES_PER_REG)); 171 + iowrite32((u32 __force)(cpu_to_be32(values[i])), reg_addr); 172 + } 173 + } 174 + 175 + static void xtrng_enable_entropy(struct xilinx_rng *rng) 176 + { 
177 + iowrite32(TRNG_OSC_EN_VAL_MASK, rng->rng_base + TRNG_OSC_EN_OFFSET); 178 + xtrng_softreset(rng); 179 + iowrite32(TRNG_CTRL_EUMODE_MASK | TRNG_CTRL_TRSSEN_MASK, rng->rng_base + TRNG_CTRL_OFFSET); 180 + } 181 + 182 + static int xtrng_reseed_internal(struct xilinx_rng *rng) 183 + { 184 + u8 entropy[TRNG_ENTROPY_SEED_LEN_BYTES]; 185 + u32 val; 186 + int ret; 187 + 188 + memset(entropy, 0, sizeof(entropy)); 189 + xtrng_enable_entropy(rng); 190 + 191 + /* collect random data to use it as entropy (input for DF) */ 192 + ret = xtrng_collect_random_data(rng, entropy, TRNG_SEED_LEN_BYTES, true); 193 + if (ret != TRNG_SEED_LEN_BYTES) 194 + return -EINVAL; 195 + 196 + xtrng_write_multiple_registers(rng->rng_base + TRNG_EXT_SEED_OFFSET, 197 + (u32 *)entropy, TRNG_NUM_INIT_REGS); 198 + /* select reseed operation */ 199 + iowrite32(TRNG_CTRL_PRNGXS_MASK, rng->rng_base + TRNG_CTRL_OFFSET); 200 + 201 + /* Start the reseed operation with above configuration and wait for STATUS.Done bit to be 202 + * set. Monitor STATUS.CERTF bit, if set indicates SP800-90B entropy health test has failed. 
203 + */ 204 + xtrng_readwrite32(rng->rng_base + TRNG_CTRL_OFFSET, TRNG_CTRL_PRNGSTART_MASK, 205 + TRNG_CTRL_PRNGSTART_MASK); 206 + 207 + ret = readl_poll_timeout(rng->rng_base + TRNG_STATUS_OFFSET, val, 208 + (val & TRNG_STATUS_DONE_MASK) == TRNG_STATUS_DONE_MASK, 209 + 1U, 15000U); 210 + if (ret) 211 + return ret; 212 + 213 + xtrng_readwrite32(rng->rng_base + TRNG_CTRL_OFFSET, TRNG_CTRL_PRNGSTART_MASK, 0U); 214 + 215 + return 0; 216 + } 217 + 218 + static int xtrng_random_bytes_generate(struct xilinx_rng *rng, u8 *rand_buf_ptr, 219 + u32 rand_buf_size, int wait) 220 + { 221 + int nbytes; 222 + int ret; 223 + 224 + xtrng_readwrite32(rng->rng_base + TRNG_CTRL_OFFSET, 225 + TRNG_CTRL_PRNGMODE_MASK | TRNG_CTRL_PRNGXS_MASK, 226 + TRNG_CTRL_PRNGMODE_MASK | TRNG_CTRL_PRNGXS_MASK); 227 + nbytes = xtrng_collect_random_data(rng, rand_buf_ptr, rand_buf_size, wait); 228 + 229 + ret = xtrng_reseed_internal(rng); 230 + if (ret) { 231 + dev_err(rng->dev, "Re-seed fail\n"); 232 + return ret; 233 + } 234 + 235 + return nbytes; 236 + } 237 + 238 + static int xtrng_trng_generate(struct crypto_rng *tfm, const u8 *src, u32 slen, 239 + u8 *dst, u32 dlen) 240 + { 241 + struct xilinx_rng_ctx *ctx = crypto_rng_ctx(tfm); 242 + int ret; 243 + 244 + mutex_lock(&ctx->rng->lock); 245 + ret = xtrng_random_bytes_generate(ctx->rng, dst, dlen, true); 246 + mutex_unlock(&ctx->rng->lock); 247 + 248 + return ret < 0 ? 
ret : 0; 249 + } 250 + 251 + static int xtrng_trng_seed(struct crypto_rng *tfm, const u8 *seed, unsigned int slen) 252 + { 253 + return 0; 254 + } 255 + 256 + static int xtrng_trng_init(struct crypto_tfm *rtfm) 257 + { 258 + struct xilinx_rng_ctx *ctx = crypto_tfm_ctx(rtfm); 259 + 260 + ctx->rng = xilinx_rng_dev; 261 + 262 + return 0; 263 + } 264 + 265 + static struct rng_alg xtrng_trng_alg = { 266 + .generate = xtrng_trng_generate, 267 + .seed = xtrng_trng_seed, 268 + .seedsize = 0, 269 + .base = { 270 + .cra_name = "stdrng", 271 + .cra_driver_name = "xilinx-trng", 272 + .cra_priority = 300, 273 + .cra_ctxsize = sizeof(struct xilinx_rng_ctx), 274 + .cra_module = THIS_MODULE, 275 + .cra_init = xtrng_trng_init, 276 + }, 277 + }; 278 + 279 + static int xtrng_hwrng_trng_read(struct hwrng *hwrng, void *data, size_t max, bool wait) 280 + { 281 + u8 buf[TRNG_SEC_STRENGTH_BYTES]; 282 + struct xilinx_rng *rng; 283 + int ret = -EINVAL, i = 0; 284 + 285 + rng = container_of(hwrng, struct xilinx_rng, trng); 286 + /* Return in case wait not set and lock not available. 
*/ 287 + if (!mutex_trylock(&rng->lock) && !wait) 288 + return 0; 289 + else if (!mutex_is_locked(&rng->lock) && wait) 290 + mutex_lock(&rng->lock); 291 + 292 + while (i < max) { 293 + ret = xtrng_random_bytes_generate(rng, buf, TRNG_SEC_STRENGTH_BYTES, wait); 294 + if (ret < 0) 295 + break; 296 + 297 + memcpy(data + i, buf, min_t(int, ret, (max - i))); 298 + i += min_t(int, ret, (max - i)); 299 + } 300 + mutex_unlock(&rng->lock); 301 + 302 + return ret; 303 + } 304 + 305 + static int xtrng_hwrng_register(struct hwrng *trng) 306 + { 307 + int ret; 308 + 309 + trng->name = "Xilinx Versal Crypto Engine TRNG"; 310 + trng->read = xtrng_hwrng_trng_read; 311 + 312 + ret = hwrng_register(trng); 313 + if (ret) 314 + pr_err("Fail to register the TRNG\n"); 315 + 316 + return ret; 317 + } 318 + 319 + static void xtrng_hwrng_unregister(struct hwrng *trng) 320 + { 321 + hwrng_unregister(trng); 322 + } 323 + 324 + static int xtrng_probe(struct platform_device *pdev) 325 + { 326 + struct xilinx_rng *rng; 327 + int ret; 328 + 329 + rng = devm_kzalloc(&pdev->dev, sizeof(*rng), GFP_KERNEL); 330 + if (!rng) 331 + return -ENOMEM; 332 + 333 + rng->dev = &pdev->dev; 334 + rng->rng_base = devm_platform_ioremap_resource(pdev, 0); 335 + if (IS_ERR(rng->rng_base)) { 336 + dev_err(&pdev->dev, "Failed to map resource %ld\n", PTR_ERR(rng->rng_base)); 337 + return PTR_ERR(rng->rng_base); 338 + } 339 + 340 + xtrng_trng_reset(rng->rng_base); 341 + ret = xtrng_reseed_internal(rng); 342 + if (ret) { 343 + dev_err(&pdev->dev, "TRNG Seed fail\n"); 344 + return ret; 345 + } 346 + 347 + xilinx_rng_dev = rng; 348 + mutex_init(&rng->lock); 349 + ret = crypto_register_rng(&xtrng_trng_alg); 350 + if (ret) { 351 + dev_err(&pdev->dev, "Crypto Random device registration failed: %d\n", ret); 352 + return ret; 353 + } 354 + ret = xtrng_hwrng_register(&rng->trng); 355 + if (ret) { 356 + dev_err(&pdev->dev, "HWRNG device registration failed: %d\n", ret); 357 + goto crypto_rng_free; 358 + } 359 + 
platform_set_drvdata(pdev, rng); 360 + 361 + return 0; 362 + 363 + crypto_rng_free: 364 + crypto_unregister_rng(&xtrng_trng_alg); 365 + 366 + return ret; 367 + } 368 + 369 + static void xtrng_remove(struct platform_device *pdev) 370 + { 371 + struct xilinx_rng *rng; 372 + u32 zero[TRNG_NUM_INIT_REGS] = { }; 373 + 374 + rng = platform_get_drvdata(pdev); 375 + xtrng_hwrng_unregister(&rng->trng); 376 + crypto_unregister_rng(&xtrng_trng_alg); 377 + xtrng_write_multiple_registers(rng->rng_base + TRNG_EXT_SEED_OFFSET, zero, 378 + TRNG_NUM_INIT_REGS); 379 + xtrng_write_multiple_registers(rng->rng_base + TRNG_PER_STRNG_OFFSET, zero, 380 + TRNG_NUM_INIT_REGS); 381 + xtrng_hold_reset(rng->rng_base); 382 + xilinx_rng_dev = NULL; 383 + } 384 + 385 + static const struct of_device_id xtrng_of_match[] = { 386 + { .compatible = "xlnx,versal-trng", }, 387 + {}, 388 + }; 389 + 390 + MODULE_DEVICE_TABLE(of, xtrng_of_match); 391 + 392 + static struct platform_driver xtrng_driver = { 393 + .driver = { 394 + .name = "xlnx,versal-trng", 395 + .of_match_table = xtrng_of_match, 396 + }, 397 + .probe = xtrng_probe, 398 + .remove = xtrng_remove, 399 + }; 400 + 401 + module_platform_driver(xtrng_driver); 402 + MODULE_LICENSE("GPL"); 403 + MODULE_AUTHOR("Harsh Jain <h.jain@amd.com>"); 404 + MODULE_AUTHOR("Mounika Botcha <mounika.botcha@amd.com>"); 405 + MODULE_DESCRIPTION("True Random Number Generator Driver");
+14 -2
include/crypto/hash.h
··· 177 177 178 178 #define HASH_MAX_DIGESTSIZE 64 179 179 180 + /* 181 + * The size of a core hash state and a partial block. The final byte 182 + * is the length of the partial block. 183 + */ 184 + #define HASH_STATE_AND_BLOCK(state, block) ((state) + (block) + 1) 185 + 186 + 180 187 /* Worst case is sha3-224. */ 181 - #define HASH_MAX_STATESIZE 200 + 144 + 1 188 + #define HASH_MAX_STATESIZE HASH_STATE_AND_BLOCK(200, 144) 189 + 190 + /* This needs to match arch/s390/crypto/sha.h. */ 191 + #define S390_SHA_CTX_SIZE 216 182 192 183 193 /* 184 194 * Worst case is hmac(sha3-224-s390). Its context is a nested 'shash_desc' 185 195 * containing a 'struct s390_sha_ctx'. 186 196 */ 187 - #define HASH_MAX_DESCSIZE (sizeof(struct shash_desc) + 361) 197 + #define SHA3_224_S390_DESCSIZE HASH_STATE_AND_BLOCK(S390_SHA_CTX_SIZE, 144) 198 + #define HASH_MAX_DESCSIZE (sizeof(struct shash_desc) + \ 199 + SHA3_224_S390_DESCSIZE) 188 200 #define MAX_SYNC_HASH_REQSIZE (sizeof(struct ahash_request) + \ 189 201 HASH_MAX_DESCSIZE) 190 202
+1 -10
include/crypto/internal/scompress.h
··· 18 18 /** 19 19 * struct scomp_alg - synchronous compression algorithm 20 20 * 21 - * @alloc_ctx: Function allocates algorithm specific context 22 - * @free_ctx: Function frees context allocated with alloc_ctx 23 21 * @compress: Function performs a compress operation 24 22 * @decompress: Function performs a de-compress operation 25 - * @base: Common crypto API algorithm data structure 26 23 * @streams: Per-cpu memory for algorithm 27 24 * @calg: Cmonn algorithm data structure shared with acomp 28 25 */ ··· 31 34 unsigned int slen, u8 *dst, unsigned int *dlen, 32 35 void *ctx); 33 36 34 - union { 35 - struct { 36 - void *(*alloc_ctx)(void); 37 - void (*free_ctx)(void *ctx); 38 - }; 39 - struct crypto_acomp_streams streams; 40 - }; 37 + struct crypto_acomp_streams streams; 41 38 42 39 union { 43 40 struct COMP_ALG_COMMON;
+15 -7
include/linux/hisi_acc_qm.h
··· 104 104 #define UACCE_MODE_SVA 1 /* use uacce sva mode */ 105 105 #define UACCE_MODE_DESC "0(default) means only register to crypto, 1 means both register to crypto and uacce" 106 106 107 + #define QM_ECC_MBIT BIT(2) 108 + 107 109 enum qm_stop_reason { 108 110 QM_NORMAL, 109 111 QM_SOFT_RESET, ··· 127 125 QM_HW_V2 = 0x21, 128 126 QM_HW_V3 = 0x30, 129 127 QM_HW_V4 = 0x50, 128 + QM_HW_V5 = 0x51, 130 129 }; 131 130 132 131 enum qm_fun_type { ··· 242 239 ACC_ERR_RECOVERED, 243 240 }; 244 241 245 - struct hisi_qm_err_info { 246 - char *acpi_rst; 247 - u32 msi_wr_port; 242 + struct hisi_qm_err_mask { 248 243 u32 ecc_2bits_mask; 249 - u32 qm_shutdown_mask; 250 - u32 dev_shutdown_mask; 251 - u32 qm_reset_mask; 252 - u32 dev_reset_mask; 244 + u32 shutdown_mask; 245 + u32 reset_mask; 253 246 u32 ce; 254 247 u32 nfe; 255 248 u32 fe; 249 + }; 250 + 251 + struct hisi_qm_err_info { 252 + char *acpi_rst; 253 + u32 msi_wr_port; 254 + struct hisi_qm_err_mask qm_err; 255 + struct hisi_qm_err_mask dev_err; 256 256 }; 257 257 258 258 struct hisi_qm_err_status { ··· 278 272 enum acc_err_result (*get_err_result)(struct hisi_qm *qm); 279 273 bool (*dev_is_abnormal)(struct hisi_qm *qm); 280 274 int (*set_priv_status)(struct hisi_qm *qm); 275 + void (*disable_axi_error)(struct hisi_qm *qm); 276 + void (*enable_axi_error)(struct hisi_qm *qm); 281 277 }; 282 278 283 279 struct hisi_qm_cap_info {
+42 -2
include/linux/psp-sev.h
··· 107 107 SEV_CMD_SNP_DOWNLOAD_FIRMWARE_EX = 0x0CA, 108 108 SEV_CMD_SNP_COMMIT = 0x0CB, 109 109 SEV_CMD_SNP_VLEK_LOAD = 0x0CD, 110 + SEV_CMD_SNP_FEATURE_INFO = 0x0CE, 110 111 111 112 SEV_CMD_MAX, 112 113 }; ··· 748 747 struct sev_data_snp_init_ex { 749 748 u32 init_rmp:1; 750 749 u32 list_paddr_en:1; 751 - u32 rsvd:30; 750 + u32 rapl_dis:1; 751 + u32 ciphertext_hiding_en:1; 752 + u32 rsvd:28; 752 753 u32 rsvd1; 753 754 u64 list_paddr; 754 - u8 rsvd2[48]; 755 + u16 max_snp_asid; 756 + u8 rsvd2[46]; 755 757 } __packed; 756 758 757 759 /** ··· 803 799 * @probe: True if this is being called as part of CCP module probe, which 804 800 * will defer SEV_INIT/SEV_INIT_EX firmware initialization until needed 805 801 * unless psp_init_on_probe module param is set 802 + * @max_snp_asid: When non-zero, enable ciphertext hiding and specify the 803 + * maximum ASID that can be used for an SEV-SNP guest. 806 804 */ 807 805 struct sev_platform_init_args { 808 806 int error; 809 807 bool probe; 808 + unsigned int max_snp_asid; 810 809 }; 811 810 812 811 /** ··· 820 813 struct sev_data_snp_commit { 821 814 u32 len; 822 815 } __packed; 816 + 817 + /** 818 + * struct sev_data_snp_feature_info - SEV_SNP_FEATURE_INFO structure 819 + * 820 + * @length: length of the command buffer read by the PSP 821 + * @ecx_in: subfunction index 822 + * @feature_info_paddr: System Physical Address of the FEATURE_INFO structure 823 + */ 824 + struct sev_data_snp_feature_info { 825 + u32 length; 826 + u32 ecx_in; 827 + u64 feature_info_paddr; 828 + } __packed; 829 + 830 + /** 831 + * struct snp_feature_info - FEATURE_INFO structure 832 + * 833 + * @eax: output of SNP_FEATURE_INFO command 834 + * @ebx: output of SNP_FEATURE_INFO command 835 + * @ecx: output of SNP_FEATURE_INFO command 836 + * @edx: output of SNP_FEATURE_INFO command 837 + */ 838 + struct snp_feature_info { 839 + u32 eax; 840 + u32 ebx; 841 + u32 ecx; 842 + u32 edx; 843 + } __packed; 844 + 845 + #define SNP_CIPHER_TEXT_HIDING_SUPPORTED BIT(3) 
823 846 824 847 #ifdef CONFIG_CRYPTO_DEV_SP_PSP 825 848 ··· 994 957 void *snp_alloc_firmware_page(gfp_t mask); 995 958 void snp_free_firmware_page(void *addr); 996 959 void sev_platform_shutdown(void); 960 + bool sev_is_snp_ciphertext_hiding_supported(void); 997 961 998 962 #else /* !CONFIG_CRYPTO_DEV_SP_PSP */ 999 963 ··· 1030 992 static inline void snp_free_firmware_page(void *addr) { } 1031 993 1032 994 static inline void sev_platform_shutdown(void) { } 995 + 996 + static inline bool sev_is_snp_ciphertext_hiding_supported(void) { return false; } 1033 997 1034 998 #endif /* CONFIG_CRYPTO_DEV_SP_PSP */ 1035 999
+26
include/linux/rcupdate.h
··· 713 713 (c) || rcu_read_lock_sched_held(), \ 714 714 __rcu) 715 715 716 + /** 717 + * rcu_dereference_all_check() - rcu_dereference_all with debug checking 718 + * @p: The pointer to read, prior to dereferencing 719 + * @c: The conditions under which the dereference will take place 720 + * 721 + * This is similar to rcu_dereference_check(), but allows protection 722 + * by all forms of vanilla RCU readers, including preemption disabled, 723 + * bh-disabled, and interrupt-disabled regions of code. Note that "vanilla 724 + * RCU" excludes SRCU and the various Tasks RCU flavors. Please note 725 + * that this macro should not be backported to any Linux-kernel version 726 + * preceding v5.0 due to changes in synchronize_rcu() semantics prior 727 + * to that version. 728 + */ 729 + #define rcu_dereference_all_check(p, c) \ 730 + __rcu_dereference_check((p), __UNIQUE_ID(rcu), \ 731 + (c) || rcu_read_lock_any_held(), \ 732 + __rcu) 733 + 716 734 /* 717 735 * The tracing infrastructure traces RCU (we want that), but unfortunately 718 736 * some of the RCU checks causes tracing to lock up the system. ··· 784 766 * Makes rcu_dereference_check() do the dirty work. 785 767 */ 786 768 #define rcu_dereference_sched(p) rcu_dereference_sched_check(p, 0) 769 + 770 + /** 771 + * rcu_dereference_all() - fetch RCU-all-protected pointer for dereferencing 772 + * @p: The pointer to read, prior to dereferencing 773 + * 774 + * Makes rcu_dereference_check() do the dirty work. 775 + */ 776 + #define rcu_dereference_all(p) rcu_dereference_all_check(p, 0) 787 777 788 778 /** 789 779 * rcu_pointer_handoff() - Hand off a pointer from RCU to other mechanism
+28 -28
include/linux/rhashtable.h
··· 122 122 return hash & (tbl->size - 1); 123 123 } 124 124 125 - static inline unsigned int rht_key_get_hash(struct rhashtable *ht, 125 + static __always_inline unsigned int rht_key_get_hash(struct rhashtable *ht, 126 126 const void *key, const struct rhashtable_params params, 127 127 unsigned int hash_rnd) 128 128 { ··· 152 152 return hash; 153 153 } 154 154 155 - static inline unsigned int rht_key_hashfn( 155 + static __always_inline unsigned int rht_key_hashfn( 156 156 struct rhashtable *ht, const struct bucket_table *tbl, 157 157 const void *key, const struct rhashtable_params params) 158 158 { ··· 161 161 return rht_bucket_index(tbl, hash); 162 162 } 163 163 164 - static inline unsigned int rht_head_hashfn( 164 + static __always_inline unsigned int rht_head_hashfn( 165 165 struct rhashtable *ht, const struct bucket_table *tbl, 166 166 const struct rhash_head *he, const struct rhashtable_params params) 167 167 { ··· 272 272 rcu_dereference_protected(p, lockdep_rht_mutex_is_held(ht)) 273 273 274 274 #define rht_dereference_rcu(p, ht) \ 275 - rcu_dereference_check(p, lockdep_rht_mutex_is_held(ht)) 275 + rcu_dereference_all_check(p, lockdep_rht_mutex_is_held(ht)) 276 276 277 277 #define rht_dereference_bucket(p, tbl, hash) \ 278 278 rcu_dereference_protected(p, lockdep_rht_bucket_is_held(tbl, hash)) 279 279 280 280 #define rht_dereference_bucket_rcu(p, tbl, hash) \ 281 - rcu_dereference_check(p, lockdep_rht_bucket_is_held(tbl, hash)) 281 + rcu_dereference_all_check(p, lockdep_rht_bucket_is_held(tbl, hash)) 282 282 283 283 #define rht_entry(tpos, pos, member) \ 284 284 ({ tpos = container_of(pos, typeof(*tpos), member); 1; }) ··· 373 373 static inline struct rhash_head *rht_ptr_rcu( 374 374 struct rhash_lock_head __rcu *const *bkt) 375 375 { 376 - return __rht_ptr(rcu_dereference(*bkt), bkt); 376 + return __rht_ptr(rcu_dereference_all(*bkt), bkt); 377 377 } 378 378 379 379 static inline struct rhash_head *rht_ptr( ··· 497 497 for (({barrier(); }), \ 498 498 pos = 
head; \ 499 499 !rht_is_a_nulls(pos); \ 500 - pos = rcu_dereference_raw(pos->next)) 500 + pos = rcu_dereference_all(pos->next)) 501 501 502 502 /** 503 503 * rht_for_each_rcu - iterate over rcu hash chain ··· 513 513 for (({barrier(); }), \ 514 514 pos = rht_ptr_rcu(rht_bucket(tbl, hash)); \ 515 515 !rht_is_a_nulls(pos); \ 516 - pos = rcu_dereference_raw(pos->next)) 516 + pos = rcu_dereference_all(pos->next)) 517 517 518 518 /** 519 519 * rht_for_each_entry_rcu_from - iterated over rcu hash chain from given head ··· 560 560 * list returned by rhltable_lookup. 561 561 */ 562 562 #define rhl_for_each_rcu(pos, list) \ 563 - for (pos = list; pos; pos = rcu_dereference_raw(pos->next)) 563 + for (pos = list; pos; pos = rcu_dereference_all(pos->next)) 564 564 565 565 /** 566 566 * rhl_for_each_entry_rcu - iterate over rcu hash table list of given type ··· 574 574 */ 575 575 #define rhl_for_each_entry_rcu(tpos, pos, list, member) \ 576 576 for (pos = list; pos && rht_entry(tpos, pos, member); \ 577 - pos = rcu_dereference_raw(pos->next)) 577 + pos = rcu_dereference_all(pos->next)) 578 578 579 579 static inline int rhashtable_compare(struct rhashtable_compare_arg *arg, 580 580 const void *obj) ··· 586 586 } 587 587 588 588 /* Internal function, do not use. */ 589 - static inline struct rhash_head *__rhashtable_lookup( 589 + static __always_inline struct rhash_head *__rhashtable_lookup( 590 590 struct rhashtable *ht, const void *key, 591 591 const struct rhashtable_params params) 592 592 { ··· 639 639 * 640 640 * Returns the first entry on which the compare function returned true. 641 641 */ 642 - static inline void *rhashtable_lookup( 642 + static __always_inline void *rhashtable_lookup( 643 643 struct rhashtable *ht, const void *key, 644 644 const struct rhashtable_params params) 645 645 { ··· 662 662 * 663 663 * Returns the first entry on which the compare function returned true. 
664 664 */ 665 - static inline void *rhashtable_lookup_fast( 665 + static __always_inline void *rhashtable_lookup_fast( 666 666 struct rhashtable *ht, const void *key, 667 667 const struct rhashtable_params params) 668 668 { ··· 689 689 * 690 690 * Returns the list of entries that match the given key. 691 691 */ 692 - static inline struct rhlist_head *rhltable_lookup( 692 + static __always_inline struct rhlist_head *rhltable_lookup( 693 693 struct rhltable *hlt, const void *key, 694 694 const struct rhashtable_params params) 695 695 { ··· 702 702 * function returns the existing element already in hashes if there is a clash, 703 703 * otherwise it returns an error via ERR_PTR(). 704 704 */ 705 - static inline void *__rhashtable_insert_fast( 705 + static __always_inline void *__rhashtable_insert_fast( 706 706 struct rhashtable *ht, const void *key, struct rhash_head *obj, 707 707 const struct rhashtable_params params, bool rhlist) 708 708 { ··· 825 825 * Will trigger an automatic deferred table resizing if residency in the 826 826 * table grows beyond 70%. 827 827 */ 828 - static inline int rhashtable_insert_fast( 828 + static __always_inline int rhashtable_insert_fast( 829 829 struct rhashtable *ht, struct rhash_head *obj, 830 830 const struct rhashtable_params params) 831 831 { ··· 854 854 * Will trigger an automatic deferred table resizing if residency in the 855 855 * table grows beyond 70%. 856 856 */ 857 - static inline int rhltable_insert_key( 857 + static __always_inline int rhltable_insert_key( 858 858 struct rhltable *hlt, const void *key, struct rhlist_head *list, 859 859 const struct rhashtable_params params) 860 860 { ··· 877 877 * Will trigger an automatic deferred table resizing if residency in the 878 878 * table grows beyond 70%. 
879 879 */ 880 - static inline int rhltable_insert( 880 + static __always_inline int rhltable_insert( 881 881 struct rhltable *hlt, struct rhlist_head *list, 882 882 const struct rhashtable_params params) 883 883 { ··· 902 902 * Will trigger an automatic deferred table resizing if residency in the 903 903 * table grows beyond 70%. 904 904 */ 905 - static inline int rhashtable_lookup_insert_fast( 905 + static __always_inline int rhashtable_lookup_insert_fast( 906 906 struct rhashtable *ht, struct rhash_head *obj, 907 907 const struct rhashtable_params params) 908 908 { ··· 929 929 * object if it exists, NULL if it did not and the insertion was successful, 930 930 * and an ERR_PTR otherwise. 931 931 */ 932 - static inline void *rhashtable_lookup_get_insert_fast( 932 + static __always_inline void *rhashtable_lookup_get_insert_fast( 933 933 struct rhashtable *ht, struct rhash_head *obj, 934 934 const struct rhashtable_params params) 935 935 { ··· 956 956 * 957 957 * Returns zero on success. 958 958 */ 959 - static inline int rhashtable_lookup_insert_key( 959 + static __always_inline int rhashtable_lookup_insert_key( 960 960 struct rhashtable *ht, const void *key, struct rhash_head *obj, 961 961 const struct rhashtable_params params) 962 962 { ··· 982 982 * object if it exists, NULL if it does not and the insertion was successful, 983 983 * and an ERR_PTR otherwise. 
984 984 */ 985 - static inline void *rhashtable_lookup_get_insert_key( 985 + static __always_inline void *rhashtable_lookup_get_insert_key( 986 986 struct rhashtable *ht, const void *key, struct rhash_head *obj, 987 987 const struct rhashtable_params params) 988 988 { ··· 992 992 } 993 993 994 994 /* Internal function, please use rhashtable_remove_fast() instead */ 995 - static inline int __rhashtable_remove_fast_one( 995 + static __always_inline int __rhashtable_remove_fast_one( 996 996 struct rhashtable *ht, struct bucket_table *tbl, 997 997 struct rhash_head *obj, const struct rhashtable_params params, 998 998 bool rhlist) ··· 1074 1074 } 1075 1075 1076 1076 /* Internal function, please use rhashtable_remove_fast() instead */ 1077 - static inline int __rhashtable_remove_fast( 1077 + static __always_inline int __rhashtable_remove_fast( 1078 1078 struct rhashtable *ht, struct rhash_head *obj, 1079 1079 const struct rhashtable_params params, bool rhlist) 1080 1080 { ··· 1115 1115 * 1116 1116 * Returns zero on success, -ENOENT if the entry could not be found. 1117 1117 */ 1118 - static inline int rhashtable_remove_fast( 1118 + static __always_inline int rhashtable_remove_fast( 1119 1119 struct rhashtable *ht, struct rhash_head *obj, 1120 1120 const struct rhashtable_params params) 1121 1121 { ··· 1137 1137 * 1138 1138 * Returns zero on success, -ENOENT if the entry could not be found. 
1139 1139 */ 1140 - static inline int rhltable_remove( 1140 + static __always_inline int rhltable_remove( 1141 1141 struct rhltable *hlt, struct rhlist_head *list, 1142 1142 const struct rhashtable_params params) 1143 1143 { ··· 1145 1145 } 1146 1146 1147 1147 /* Internal function, please use rhashtable_replace_fast() instead */ 1148 - static inline int __rhashtable_replace_fast( 1148 + static __always_inline int __rhashtable_replace_fast( 1149 1149 struct rhashtable *ht, struct bucket_table *tbl, 1150 1150 struct rhash_head *obj_old, struct rhash_head *obj_new, 1151 1151 const struct rhashtable_params params) ··· 1208 1208 * Returns zero on success, -ENOENT if the entry could not be found, 1209 1209 * -EINVAL if hash is not the same for the old and new objects. 1210 1210 */ 1211 - static inline int rhashtable_replace_fast( 1211 + static __always_inline int rhashtable_replace_fast( 1212 1212 struct rhashtable *ht, struct rhash_head *obj_old, 1213 1213 struct rhash_head *obj_new, 1214 1214 const struct rhashtable_params params)
+9 -1
include/uapi/linux/psp-sev.h
··· 185 185 * @mask_chip_id: whether chip id is present in attestation reports or not 186 186 * @mask_chip_key: whether attestation reports are signed or not 187 187 * @vlek_en: VLEK (Version Loaded Endorsement Key) hashstick is loaded 188 + * @feature_info: whether SNP_FEATURE_INFO command is available 189 + * @rapl_dis: whether RAPL is disabled 190 + * @ciphertext_hiding_cap: whether platform has ciphertext hiding capability 191 + * @ciphertext_hiding_en: whether ciphertext hiding is enabled 188 192 * @rsvd1: reserved 189 193 * @guest_count: the number of guest currently managed by the firmware 190 194 * @current_tcb_version: current TCB version ··· 204 200 __u32 mask_chip_id:1; /* Out */ 205 201 __u32 mask_chip_key:1; /* Out */ 206 202 __u32 vlek_en:1; /* Out */ 207 - __u32 rsvd1:29; 203 + __u32 feature_info:1; /* Out */ 204 + __u32 rapl_dis:1; /* Out */ 205 + __u32 ciphertext_hiding_cap:1; /* Out */ 206 + __u32 ciphertext_hiding_en:1; /* Out */ 207 + __u32 rsvd1:25; 208 208 __u32 guest_count; /* Out */ 209 209 __u64 current_tcb_version; /* Out */ 210 210 __u64 reported_tcb_version; /* Out */
+1
include/uapi/misc/uacce/hisi_qm.h
··· 31 31 #define HISI_QM_API_VER_BASE "hisi_qm_v1" 32 32 #define HISI_QM_API_VER2_BASE "hisi_qm_v2" 33 33 #define HISI_QM_API_VER3_BASE "hisi_qm_v3" 34 + #define HISI_QM_API_VER5_BASE "hisi_qm_v5" 34 35 35 36 /* UACCE_CMD_QM_SET_QP_CTX: Set qp algorithm type */ 36 37 #define UACCE_CMD_QM_SET_QP_CTX _IOWR('H', 10, struct hisi_qp_ctx)
+10 -5
kernel/padata.c
··· 291 291 struct padata_serial_queue *squeue; 292 292 int cb_cpu; 293 293 294 - cpu = cpumask_next_wrap(cpu, pd->cpumask.pcpu); 295 294 processed++; 295 + /* When sequence wraps around, reset to the first CPU. */ 296 + if (unlikely(processed == 0)) 297 + cpu = cpumask_first(pd->cpumask.pcpu); 298 + else 299 + cpu = cpumask_next_wrap(cpu, pd->cpumask.pcpu); 296 300 297 301 cb_cpu = padata->cb_cpu; 298 302 squeue = per_cpu_ptr(pd->squeue, cb_cpu); ··· 490 486 do { 491 487 nid = next_node_in(old_node, node_states[N_CPU]); 492 488 } while (!atomic_try_cmpxchg(&last_used_nid, &old_node, nid)); 493 - queue_work_node(nid, system_unbound_wq, &pw->pw_work); 489 + queue_work_node(nid, system_dfl_wq, &pw->pw_work); 494 490 } else { 495 - queue_work(system_unbound_wq, &pw->pw_work); 491 + queue_work(system_dfl_wq, &pw->pw_work); 496 492 } 497 493 498 494 /* Use the current thread, which saves starting a workqueue worker. */ ··· 967 963 968 964 cpus_read_lock(); 969 965 970 - pinst->serial_wq = alloc_workqueue("%s_serial", WQ_MEM_RECLAIM | 971 - WQ_CPU_INTENSIVE, 1, name); 966 + pinst->serial_wq = alloc_workqueue("%s_serial", 967 + WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE | WQ_PERCPU, 968 + 1, name); 972 969 if (!pinst->serial_wq) 973 970 goto err_put_cpus; 974 971
+1 -1
lib/lzo/lzo1x_compress.c
··· 26 26 #define HAVE_OP(x) 1 27 27 #endif 28 28 29 - #define NEED_OP(x) if (!HAVE_OP(x)) goto output_overrun 29 + #define NEED_OP(x) if (unlikely(!HAVE_OP(x))) goto output_overrun 30 30 31 31 static noinline int 32 32 LZO_SAFE(lzo1x_1_do_compress)(const unsigned char *in, size_t in_len,
+3 -3
lib/lzo/lzo1x_decompress_safe.c
··· 22 22 23 23 #define HAVE_IP(x) ((size_t)(ip_end - ip) >= (size_t)(x)) 24 24 #define HAVE_OP(x) ((size_t)(op_end - op) >= (size_t)(x)) 25 - #define NEED_IP(x) if (!HAVE_IP(x)) goto input_overrun 26 - #define NEED_OP(x) if (!HAVE_OP(x)) goto output_overrun 27 - #define TEST_LB(m_pos) if ((m_pos) < out) goto lookbehind_overrun 25 + #define NEED_IP(x) if (unlikely(!HAVE_IP(x))) goto input_overrun 26 + #define NEED_OP(x) if (unlikely(!HAVE_OP(x))) goto output_overrun 27 + #define TEST_LB(m_pos) if (unlikely((m_pos) < out)) goto lookbehind_overrun 28 28 29 29 /* This MAX_255_COUNT is the maximum number of times we can add 255 to a base 30 30 * count without overflowing an integer. The multiply will overflow when