Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'v6.9-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
"API:

- Avoid unnecessary copying in scomp for trivial SG lists

Algorithms:

- Optimise NEON CCM implementation on ARM64

Drivers:

- Add queue stop/query debugfs support in hisilicon/qm

- Intel qat updates and cleanups"

* tag 'v6.9-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (79 commits)
Revert "crypto: remove CONFIG_CRYPTO_STATS"
crypto: scomp - remove memcpy if sg_nents is 1 and pages are lowmem
crypto: tcrypt - add ffdhe2048(dh) test
crypto: iaa - fix the missing CRYPTO_ALG_ASYNC in cra_flags
crypto: hisilicon/zip - fix the missing CRYPTO_ALG_ASYNC in cra_flags
hwrng: hisi - use dev_err_probe
MAINTAINERS: Remove T Ambarus from few mchp entries
crypto: iaa - Fix comp/decomp delay statistics
crypto: iaa - Fix async_disable descriptor leak
dt-bindings: rng: atmel,at91-trng: add sam9x7 TRNG
dt-bindings: crypto: add sam9x7 in Atmel TDES
dt-bindings: crypto: add sam9x7 in Atmel SHA
dt-bindings: crypto: add sam9x7 in Atmel AES
crypto: remove CONFIG_CRYPTO_STATS
crypto: dh - Make public key test FIPS-only
crypto: rockchip - fix to check return value
crypto: jitter - fix CRYPTO_JITTERENTROPY help text
crypto: qat - make ring to service map common for QAT GEN4
crypto: qat - fix ring to service map for dcc in 420xx
crypto: qat - fix ring to service map for dcc in 4xxx
...

+1450 -1016
+26
Documentation/ABI/testing/debugfs-driver-qat
··· 81 81 <N>: Number of Compress and Verify (CnV) errors and type 82 82 of the last CnV error detected by Acceleration 83 83 Engine N. 84 + 85 + What: /sys/kernel/debug/qat_<device>_<BDF>/heartbeat/inject_error 86 + Date: March 2024 87 + KernelVersion: 6.8 88 + Contact: qat-linux@intel.com 89 + Description: (WO) Write to inject an error that simulates an heartbeat 90 + failure. This is to be used for testing purposes. 91 + 92 + After writing this file, the driver stops arbitration on a 93 + random engine and disables the fetching of heartbeat counters. 94 + If a workload is running on the device, a job submitted to the 95 + accelerator might not get a response and a read of the 96 + `heartbeat/status` attribute might report -1, i.e. device 97 + unresponsive. 98 + The error is unrecoverable thus the device must be restarted to 99 + restore its functionality. 100 + 101 + This attribute is available only when the kernel is built with 102 + CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION=y. 103 + 104 + A write of 1 enables error injection. 105 + 106 + The following example shows how to enable error injection:: 107 + 108 + # cd /sys/kernel/debug/qat_<device>_<BDF> 109 + # echo 1 > heartbeat/inject_error
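The injection sequence documented above can be driven from user space. The sketch below assumes only the debugfs layout given in the ABI entry (`<root>/heartbeat/inject_error`); the helper name `qat_inject_hb_error()` and its `root` parameter are hypothetical, purely for illustration, not a driver API:

```c
#include <assert.h>
#include <stdio.h>

/*
 * Write 1 to heartbeat/inject_error under the given debugfs root
 * (on a real system: /sys/kernel/debug/qat_<device>_<BDF>).
 * Returns 0 on success, -1 if the attribute cannot be opened, e.g.
 * when the kernel was built without CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION.
 */
static int qat_inject_hb_error(const char *root)
{
	char path[512];
	FILE *f;

	snprintf(path, sizeof(path), "%s/heartbeat/inject_error", root);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs("1\n", f);	/* a write of 1 enables error injection */
	return fclose(f) ? -1 : 0;
}
```

After this write, per the entry above, a read of `heartbeat/status` may report -1 and the device must be restarted to recover.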
+22
Documentation/ABI/testing/debugfs-hisi-hpre
··· 111 111 node is used to show the change of the qm register values. This 112 112 node can be help users to check the change of register values. 113 113 114 + What: /sys/kernel/debug/hisi_hpre/<bdf>/qm/qm_state 115 + Date: Jan 2024 116 + Contact: linux-crypto@vger.kernel.org 117 + Description: Dump the state of the device. 118 + 0: busy, 1: idle. 119 + Only available for PF, and take no other effect on HPRE. 120 + 121 + What: /sys/kernel/debug/hisi_hpre/<bdf>/qm/dev_timeout 122 + Date: Feb 2024 123 + Contact: linux-crypto@vger.kernel.org 124 + Description: Set the wait time when stop queue fails. Available for both PF 125 + and VF, and take no other effect on HPRE. 126 + 0: not wait(default), others value: wait dev_timeout * 20 microsecond. 127 + 128 + What: /sys/kernel/debug/hisi_hpre/<bdf>/qm/dev_state 129 + Date: Feb 2024 130 + Contact: linux-crypto@vger.kernel.org 131 + Description: Dump the stop queue status of the QM. The default value is 0, 132 + if dev_timeout is set, when stop queue fails, the dev_state 133 + will return non-zero value. Available for both PF and VF, 134 + and take no other effect on HPRE. 135 + 114 136 What: /sys/kernel/debug/hisi_hpre/<bdf>/hpre_dfx/diff_regs 115 137 Date: Mar 2022 116 138 Contact: linux-crypto@vger.kernel.org
+22
Documentation/ABI/testing/debugfs-hisi-sec
··· 91 91 node is used to show the change of the qm register values. This 92 92 node can be help users to check the change of register values. 93 93 94 + What: /sys/kernel/debug/hisi_sec2/<bdf>/qm/qm_state 95 + Date: Jan 2024 96 + Contact: linux-crypto@vger.kernel.org 97 + Description: Dump the state of the device. 98 + 0: busy, 1: idle. 99 + Only available for PF, and take no other effect on SEC. 100 + 101 + What: /sys/kernel/debug/hisi_sec2/<bdf>/qm/dev_timeout 102 + Date: Feb 2024 103 + Contact: linux-crypto@vger.kernel.org 104 + Description: Set the wait time when stop queue fails. Available for both PF 105 + and VF, and take no other effect on SEC. 106 + 0: not wait(default), others value: wait dev_timeout * 20 microsecond. 107 + 108 + What: /sys/kernel/debug/hisi_sec2/<bdf>/qm/dev_state 109 + Date: Feb 2024 110 + Contact: linux-crypto@vger.kernel.org 111 + Description: Dump the stop queue status of the QM. The default value is 0, 112 + if dev_timeout is set, when stop queue fails, the dev_state 113 + will return non-zero value. Available for both PF and VF, 114 + and take no other effect on SEC. 115 + 94 116 What: /sys/kernel/debug/hisi_sec2/<bdf>/sec_dfx/diff_regs 95 117 Date: Mar 2022 96 118 Contact: linux-crypto@vger.kernel.org
+22
Documentation/ABI/testing/debugfs-hisi-zip
··· 104 104 node is used to show the change of the qm registers value. This 105 105 node can be help users to check the change of register values. 106 106 107 + What: /sys/kernel/debug/hisi_zip/<bdf>/qm/qm_state 108 + Date: Jan 2024 109 + Contact: linux-crypto@vger.kernel.org 110 + Description: Dump the state of the device. 111 + 0: busy, 1: idle. 112 + Only available for PF, and take no other effect on ZIP. 113 + 114 + What: /sys/kernel/debug/hisi_zip/<bdf>/qm/dev_timeout 115 + Date: Feb 2024 116 + Contact: linux-crypto@vger.kernel.org 117 + Description: Set the wait time when stop queue fails. Available for both PF 118 + and VF, and take no other effect on ZIP. 119 + 0: not wait(default), others value: wait dev_timeout * 20 microsecond. 120 + 121 + What: /sys/kernel/debug/hisi_zip/<bdf>/qm/dev_state 122 + Date: Feb 2024 123 + Contact: linux-crypto@vger.kernel.org 124 + Description: Dump the stop queue status of the QM. The default value is 0, 125 + if dev_timeout is set, when stop queue fails, the dev_state 126 + will return non-zero value. Available for both PF and VF, 127 + and take no other effect on ZIP. 128 + 107 129 What: /sys/kernel/debug/hisi_zip/<bdf>/zip_dfx/diff_regs 108 130 Date: Mar 2022 109 131 Contact: linux-crypto@vger.kernel.org
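The `dev_timeout` semantics are identical across the three hisi ABI entries above: 0 means do not wait, and any other value waits `dev_timeout * 20` microseconds when stopping a queue fails. As a sketch of that arithmetic (the helper name `stop_queue_wait_us` is illustrative, not from the driver):

```c
#include <assert.h>

/*
 * Total wait budget, in microseconds, derived from the qm/dev_timeout
 * debugfs attribute documented above.
 */
static unsigned long stop_queue_wait_us(unsigned long dev_timeout)
{
	if (dev_timeout == 0)
		return 0;		/* 0: not wait (the default) */
	return dev_timeout * 20;	/* each unit is 20 microseconds */
}
```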
+20
Documentation/ABI/testing/sysfs-driver-qat
··· 141 141 64 142 142 143 143 This attribute is only available for qat_4xxx devices. 144 + 145 + What: /sys/bus/pci/devices/<BDF>/qat/auto_reset 146 + Date: March 2024 147 + KernelVersion: 6.8 148 + Contact: qat-linux@intel.com 149 + Description: (RW) Reports the current state of the autoreset feature 150 + for a QAT device 151 + 152 + Write to the attribute to enable or disable device auto reset. 153 + 154 + Device auto reset is disabled by default. 155 + 156 + The values are: 157 + 158 + * 1/Yy/on: auto reset enabled. If the device encounters an 159 + unrecoverable error, it will be reset automatically. 160 + * 0/Nn/off: auto reset disabled. If the device encounters an 161 + unrecoverable error, it will not be reset. 162 + 163 + This attribute is only available for qat_4xxx devices.
+5 -1
Documentation/devicetree/bindings/crypto/atmel,at91sam9g46-aes.yaml
···
12 12
13 13  properties:
14 14    compatible:
15     -     const: atmel,at91sam9g46-aes
   15  +     oneOf:
   16  +       - const: atmel,at91sam9g46-aes
   17  +       - items:
   18  +           - const: microchip,sam9x7-aes
   19  +           - const: atmel,at91sam9g46-aes
16 20
17 21    reg:
18 22      maxItems: 1
+5 -1
Documentation/devicetree/bindings/crypto/atmel,at91sam9g46-sha.yaml
···
12 12
13 13  properties:
14 14    compatible:
15     -     const: atmel,at91sam9g46-sha
   15  +     oneOf:
   16  +       - const: atmel,at91sam9g46-sha
   17  +       - items:
   18  +           - const: microchip,sam9x7-sha
   19  +           - const: atmel,at91sam9g46-sha
16 20
17 21    reg:
18 22      maxItems: 1
+5 -1
Documentation/devicetree/bindings/crypto/atmel,at91sam9g46-tdes.yaml
···
12 12
13 13  properties:
14 14    compatible:
15     -     const: atmel,at91sam9g46-tdes
   15  +     oneOf:
   16  +       - const: atmel,at91sam9g46-tdes
   17  +       - items:
   18  +           - const: microchip,sam9x7-tdes
   19  +           - const: atmel,at91sam9g46-tdes
16 20
17 21    reg:
18 22      maxItems: 1
+1
Documentation/devicetree/bindings/crypto/qcom,inline-crypto-engine.yaml
···
14 14      items:
15 15        - enum:
16 16            - qcom,sa8775p-inline-crypto-engine
   17  +         - qcom,sc7180-inline-crypto-engine
17 18            - qcom,sm8450-inline-crypto-engine
18 19            - qcom,sm8550-inline-crypto-engine
19 20            - qcom,sm8650-inline-crypto-engine
+1
Documentation/devicetree/bindings/crypto/qcom-qce.yaml
···
45 45      - items:
46 46          - enum:
47 47              - qcom,sc7280-qce
   48  +            - qcom,sm6350-qce
48 49              - qcom,sm8250-qce
49 50              - qcom,sm8350-qce
50 51              - qcom,sm8450-qce
+4
Documentation/devicetree/bindings/rng/atmel,at91-trng.yaml
···
21 21        - enum:
22 22            - microchip,sama7g5-trng
23 23        - const: atmel,at91sam9g45-trng
   24  +     - items:
   25  +         - enum:
   26  +             - microchip,sam9x7-trng
   27  +         - const: microchip,sam9x60-trng
24 28
25 29    clocks:
26 30      maxItems: 1
+13 -12
MAINTAINERS
··· 10377 10377 M: Paulo Flabiano Smorigo <pfsmorigo@gmail.com> 10378 10378 L: linux-crypto@vger.kernel.org 10379 10379 S: Supported 10380 - F: drivers/crypto/vmx/Kconfig 10381 - F: drivers/crypto/vmx/Makefile 10382 - F: drivers/crypto/vmx/aes* 10383 - F: drivers/crypto/vmx/ghash* 10384 - F: drivers/crypto/vmx/ppc-xlate.pl 10385 - F: drivers/crypto/vmx/vmx.c 10380 + F: arch/powerpc/crypto/Kconfig 10381 + F: arch/powerpc/crypto/Makefile 10382 + F: arch/powerpc/crypto/aes.c 10383 + F: arch/powerpc/crypto/aes_cbc.c 10384 + F: arch/powerpc/crypto/aes_ctr.c 10385 + F: arch/powerpc/crypto/aes_xts.c 10386 + F: arch/powerpc/crypto/aesp8-ppc.* 10387 + F: arch/powerpc/crypto/ghash.c 10388 + F: arch/powerpc/crypto/ghashp8-ppc.pl 10389 + F: arch/powerpc/crypto/ppc-xlate.pl 10390 + F: arch/powerpc/crypto/vmx.c 10386 10391 10387 10392 IBM ServeRAID RAID DRIVER 10388 10393 S: Orphan ··· 12461 12456 F: drivers/*/*pasemi* 12462 12457 F: drivers/char/tpm/tpm_ibmvtpm* 12463 12458 F: drivers/crypto/nx/ 12464 - F: drivers/crypto/vmx/ 12465 12459 F: drivers/i2c/busses/i2c-opal.c 12466 12460 F: drivers/net/ethernet/ibm/ibmveth.* 12467 12461 F: drivers/net/ethernet/ibm/ibmvnic.* ··· 14308 14304 14309 14305 MICROCHIP AT91 DMA DRIVERS 14310 14306 M: Ludovic Desroches <ludovic.desroches@microchip.com> 14311 - M: Tudor Ambarus <tudor.ambarus@linaro.org> 14312 14307 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 14313 14308 L: dmaengine@vger.kernel.org 14314 14309 S: Supported ··· 14356 14353 F: drivers/media/platform/microchip/microchip-csi2dc.c 14357 14354 14358 14355 MICROCHIP ECC DRIVER 14359 - M: Tudor Ambarus <tudor.ambarus@linaro.org> 14360 14356 L: linux-crypto@vger.kernel.org 14361 - S: Maintained 14357 + S: Orphan 14362 14358 F: drivers/crypto/atmel-ecc.* 14363 14359 14364 14360 MICROCHIP EIC DRIVER ··· 14462 14460 F: drivers/mmc/host/atmel-mci.c 14463 14461 14464 14462 MICROCHIP NAND DRIVER 14465 - M: Tudor Ambarus <tudor.ambarus@linaro.org> 14466 14463 L: 
linux-mtd@lists.infradead.org 14467 - S: Supported 14464 + S: Orphan 14468 14465 F: Documentation/devicetree/bindings/mtd/atmel-nand.txt 14469 14466 F: drivers/mtd/nand/raw/atmel/* 14470 14467
+5 -8
arch/arm/crypto/sha256_glue.c
··· 24 24 25 25 #include "sha256_glue.h" 26 26 27 - asmlinkage void sha256_block_data_order(u32 *digest, const void *data, 28 - unsigned int num_blks); 27 + asmlinkage void sha256_block_data_order(struct sha256_state *state, 28 + const u8 *data, int num_blks); 29 29 30 30 int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *data, 31 31 unsigned int len) ··· 33 33 /* make sure casting to sha256_block_fn() is safe */ 34 34 BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0); 35 35 36 - return sha256_base_do_update(desc, data, len, 37 - (sha256_block_fn *)sha256_block_data_order); 36 + return sha256_base_do_update(desc, data, len, sha256_block_data_order); 38 37 } 39 38 EXPORT_SYMBOL(crypto_sha256_arm_update); 40 39 41 40 static int crypto_sha256_arm_final(struct shash_desc *desc, u8 *out) 42 41 { 43 - sha256_base_do_finalize(desc, 44 - (sha256_block_fn *)sha256_block_data_order); 42 + sha256_base_do_finalize(desc, sha256_block_data_order); 45 43 return sha256_base_finish(desc, out); 46 44 } 47 45 48 46 int crypto_sha256_arm_finup(struct shash_desc *desc, const u8 *data, 49 47 unsigned int len, u8 *out) 50 48 { 51 - sha256_base_do_update(desc, data, len, 52 - (sha256_block_fn *)sha256_block_data_order); 49 + sha256_base_do_update(desc, data, len, sha256_block_data_order); 53 50 return crypto_sha256_arm_final(desc, out); 54 51 } 55 52 EXPORT_SYMBOL(crypto_sha256_arm_finup);
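The sha256_glue.c change above works because the assembly routine's C prototype now matches `sha256_block_fn` exactly, so the `(sha256_block_fn *)` casts can go: calling a function through a pointer of an incompatible type is undefined behaviour and is rejected by control-flow-integrity schemes such as kCFI. A minimal sketch of the pattern with toy types (not the kernel's real implementations):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-ins; the names mirror the diff, the bodies are illustrative. */
struct sha256_state {
	uint32_t state[8];
	uint64_t count;
};

typedef void (sha256_block_fn)(struct sha256_state *st, const uint8_t *data,
			       int blocks);

/* Placeholder "block function" with the exact sha256_block_fn signature. */
static void toy_block(struct sha256_state *st, const uint8_t *data, int blocks)
{
	for (int i = 0; i < blocks; i++)
		st->state[0] ^= data[i * 64];	/* fake mixing, one byte/block */
}

/* Because the signatures match, the call site needs no cast. */
static void do_update(struct sha256_state *st, const uint8_t *data, int blocks,
		      sha256_block_fn *block)
{
	block(st, data, blocks);
}
```

The `BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0)` in the original code survives because the state array must stay the first member for the assembly to index it directly.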
+5 -7
arch/arm/crypto/sha512-glue.c
··· 25 25 MODULE_ALIAS_CRYPTO("sha384-arm"); 26 26 MODULE_ALIAS_CRYPTO("sha512-arm"); 27 27 28 - asmlinkage void sha512_block_data_order(u64 *state, u8 const *src, int blocks); 28 + asmlinkage void sha512_block_data_order(struct sha512_state *state, 29 + u8 const *src, int blocks); 29 30 30 31 int sha512_arm_update(struct shash_desc *desc, const u8 *data, 31 32 unsigned int len) 32 33 { 33 - return sha512_base_do_update(desc, data, len, 34 - (sha512_block_fn *)sha512_block_data_order); 34 + return sha512_base_do_update(desc, data, len, sha512_block_data_order); 35 35 } 36 36 37 37 static int sha512_arm_final(struct shash_desc *desc, u8 *out) 38 38 { 39 - sha512_base_do_finalize(desc, 40 - (sha512_block_fn *)sha512_block_data_order); 39 + sha512_base_do_finalize(desc, sha512_block_data_order); 41 40 return sha512_base_finish(desc, out); 42 41 } 43 42 44 43 int sha512_arm_finup(struct shash_desc *desc, const u8 *data, 45 44 unsigned int len, u8 *out) 46 45 { 47 - sha512_base_do_update(desc, data, len, 48 - (sha512_block_fn *)sha512_block_data_order); 46 + sha512_base_do_update(desc, data, len, sha512_block_data_order); 49 47 return sha512_arm_final(desc, out); 50 48 } 51 49
+1
arch/arm64/crypto/Kconfig
···
268 268 	depends on ARM64 && KERNEL_MODE_NEON
269 269 	select CRYPTO_ALGAPI
270 270 	select CRYPTO_AES_ARM64_CE
    271 + 	select CRYPTO_AES_ARM64_CE_BLK
271 272 	select CRYPTO_AEAD
272 273 	select CRYPTO_LIB_AES
273 274 	help
+93 -172
arch/arm64/crypto/aes-ce-ccm-core.S
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 2 /* 3 - * aesce-ccm-core.S - AES-CCM transform for ARMv8 with Crypto Extensions 3 + * aes-ce-ccm-core.S - AES-CCM transform for ARMv8 with Crypto Extensions 4 4 * 5 - * Copyright (C) 2013 - 2017 Linaro Ltd <ard.biesheuvel@linaro.org> 5 + * Copyright (C) 2013 - 2017 Linaro Ltd. 6 + * Copyright (C) 2024 Google LLC 7 + * 8 + * Author: Ard Biesheuvel <ardb@kernel.org> 6 9 */ 7 10 8 11 #include <linux/linkage.h> ··· 14 11 .text 15 12 .arch armv8-a+crypto 16 13 17 - /* 18 - * u32 ce_aes_ccm_auth_data(u8 mac[], u8 const in[], u32 abytes, 19 - * u32 macp, u8 const rk[], u32 rounds); 20 - */ 21 - SYM_FUNC_START(ce_aes_ccm_auth_data) 22 - ld1 {v0.16b}, [x0] /* load mac */ 23 - cbz w3, 1f 24 - sub w3, w3, #16 25 - eor v1.16b, v1.16b, v1.16b 26 - 0: ldrb w7, [x1], #1 /* get 1 byte of input */ 27 - subs w2, w2, #1 28 - add w3, w3, #1 29 - ins v1.b[0], w7 30 - ext v1.16b, v1.16b, v1.16b, #1 /* rotate in the input bytes */ 31 - beq 8f /* out of input? */ 32 - cbnz w3, 0b 33 - eor v0.16b, v0.16b, v1.16b 34 - 1: ld1 {v3.4s}, [x4] /* load first round key */ 35 - prfm pldl1strm, [x1] 36 - cmp w5, #12 /* which key size? */ 37 - add x6, x4, #16 38 - sub w7, w5, #2 /* modified # of rounds */ 39 - bmi 2f 40 - bne 5f 41 - mov v5.16b, v3.16b 42 - b 4f 43 - 2: mov v4.16b, v3.16b 44 - ld1 {v5.4s}, [x6], #16 /* load 2nd round key */ 45 - 3: aese v0.16b, v4.16b 46 - aesmc v0.16b, v0.16b 47 - 4: ld1 {v3.4s}, [x6], #16 /* load next round key */ 48 - aese v0.16b, v5.16b 49 - aesmc v0.16b, v0.16b 50 - 5: ld1 {v4.4s}, [x6], #16 /* load next round key */ 51 - subs w7, w7, #3 52 - aese v0.16b, v3.16b 53 - aesmc v0.16b, v0.16b 54 - ld1 {v5.4s}, [x6], #16 /* load next round key */ 55 - bpl 3b 56 - aese v0.16b, v4.16b 57 - subs w2, w2, #16 /* last data? 
*/ 58 - eor v0.16b, v0.16b, v5.16b /* final round */ 59 - bmi 6f 60 - ld1 {v1.16b}, [x1], #16 /* load next input block */ 61 - eor v0.16b, v0.16b, v1.16b /* xor with mac */ 62 - bne 1b 63 - 6: st1 {v0.16b}, [x0] /* store mac */ 64 - beq 10f 65 - adds w2, w2, #16 66 - beq 10f 67 - mov w3, w2 68 - 7: ldrb w7, [x1], #1 69 - umov w6, v0.b[0] 70 - eor w6, w6, w7 71 - strb w6, [x0], #1 72 - subs w2, w2, #1 73 - beq 10f 74 - ext v0.16b, v0.16b, v0.16b, #1 /* rotate out the mac bytes */ 75 - b 7b 76 - 8: cbz w3, 91f 77 - mov w7, w3 78 - add w3, w3, #16 79 - 9: ext v1.16b, v1.16b, v1.16b, #1 80 - adds w7, w7, #1 81 - bne 9b 82 - 91: eor v0.16b, v0.16b, v1.16b 83 - st1 {v0.16b}, [x0] 84 - 10: mov w0, w3 85 - ret 86 - SYM_FUNC_END(ce_aes_ccm_auth_data) 14 + .macro load_round_keys, rk, nr, tmp 15 + sub w\tmp, \nr, #10 16 + add \tmp, \rk, w\tmp, sxtw #4 17 + ld1 {v10.4s-v13.4s}, [\rk] 18 + ld1 {v14.4s-v17.4s}, [\tmp], #64 19 + ld1 {v18.4s-v21.4s}, [\tmp], #64 20 + ld1 {v3.4s-v5.4s}, [\tmp] 21 + .endm 87 22 88 - /* 89 - * void ce_aes_ccm_final(u8 mac[], u8 const ctr[], u8 const rk[], 90 - * u32 rounds); 91 - */ 92 - SYM_FUNC_START(ce_aes_ccm_final) 93 - ld1 {v3.4s}, [x2], #16 /* load first round key */ 94 - ld1 {v0.16b}, [x0] /* load mac */ 95 - cmp w3, #12 /* which key size? 
*/ 96 - sub w3, w3, #2 /* modified # of rounds */ 97 - ld1 {v1.16b}, [x1] /* load 1st ctriv */ 98 - bmi 0f 99 - bne 3f 100 - mov v5.16b, v3.16b 101 - b 2f 102 - 0: mov v4.16b, v3.16b 103 - 1: ld1 {v5.4s}, [x2], #16 /* load next round key */ 104 - aese v0.16b, v4.16b 105 - aesmc v0.16b, v0.16b 106 - aese v1.16b, v4.16b 107 - aesmc v1.16b, v1.16b 108 - 2: ld1 {v3.4s}, [x2], #16 /* load next round key */ 109 - aese v0.16b, v5.16b 110 - aesmc v0.16b, v0.16b 111 - aese v1.16b, v5.16b 112 - aesmc v1.16b, v1.16b 113 - 3: ld1 {v4.4s}, [x2], #16 /* load next round key */ 114 - subs w3, w3, #3 115 - aese v0.16b, v3.16b 116 - aesmc v0.16b, v0.16b 117 - aese v1.16b, v3.16b 118 - aesmc v1.16b, v1.16b 119 - bpl 1b 120 - aese v0.16b, v4.16b 121 - aese v1.16b, v4.16b 122 - /* final round key cancels out */ 123 - eor v0.16b, v0.16b, v1.16b /* en-/decrypt the mac */ 124 - st1 {v0.16b}, [x0] /* store result */ 125 - ret 126 - SYM_FUNC_END(ce_aes_ccm_final) 23 + .macro dround, va, vb, vk 24 + aese \va\().16b, \vk\().16b 25 + aesmc \va\().16b, \va\().16b 26 + aese \vb\().16b, \vk\().16b 27 + aesmc \vb\().16b, \vb\().16b 28 + .endm 29 + 30 + .macro aes_encrypt, va, vb, nr 31 + tbz \nr, #2, .L\@ 32 + dround \va, \vb, v10 33 + dround \va, \vb, v11 34 + tbz \nr, #1, .L\@ 35 + dround \va, \vb, v12 36 + dround \va, \vb, v13 37 + .L\@: .irp v, v14, v15, v16, v17, v18, v19, v20, v21, v3 38 + dround \va, \vb, \v 39 + .endr 40 + aese \va\().16b, v4.16b 41 + aese \vb\().16b, v4.16b 42 + .endm 127 43 128 44 .macro aes_ccm_do_crypt,enc 129 - cbz x2, 5f 130 - ldr x8, [x6, #8] /* load lower ctr */ 45 + load_round_keys x3, w4, x10 46 + 131 47 ld1 {v0.16b}, [x5] /* load mac */ 48 + cbz x2, ce_aes_ccm_final 49 + ldr x8, [x6, #8] /* load lower ctr */ 132 50 CPU_LE( rev x8, x8 ) /* keep swabbed ctr in reg */ 133 51 0: /* outer loop */ 134 52 ld1 {v1.8b}, [x6] /* load upper ctr */ 135 53 prfm pldl1strm, [x1] 136 54 add x8, x8, #1 137 55 rev x9, x8 138 - cmp w4, #12 /* which key size? 
*/ 139 - sub w7, w4, #2 /* get modified # of rounds */ 140 56 ins v1.d[1], x9 /* no carry in lower ctr */ 141 - ld1 {v3.4s}, [x3] /* load first round key */ 142 - add x10, x3, #16 143 - bmi 1f 144 - bne 4f 145 - mov v5.16b, v3.16b 146 - b 3f 147 - 1: mov v4.16b, v3.16b 148 - ld1 {v5.4s}, [x10], #16 /* load 2nd round key */ 149 - 2: /* inner loop: 3 rounds, 2x interleaved */ 150 - aese v0.16b, v4.16b 151 - aesmc v0.16b, v0.16b 152 - aese v1.16b, v4.16b 153 - aesmc v1.16b, v1.16b 154 - 3: ld1 {v3.4s}, [x10], #16 /* load next round key */ 155 - aese v0.16b, v5.16b 156 - aesmc v0.16b, v0.16b 157 - aese v1.16b, v5.16b 158 - aesmc v1.16b, v1.16b 159 - 4: ld1 {v4.4s}, [x10], #16 /* load next round key */ 160 - subs w7, w7, #3 161 - aese v0.16b, v3.16b 162 - aesmc v0.16b, v0.16b 163 - aese v1.16b, v3.16b 164 - aesmc v1.16b, v1.16b 165 - ld1 {v5.4s}, [x10], #16 /* load next round key */ 166 - bpl 2b 167 - aese v0.16b, v4.16b 168 - aese v1.16b, v4.16b 57 + 58 + aes_encrypt v0, v1, w4 59 + 169 60 subs w2, w2, #16 170 - bmi 6f /* partial block? 
*/ 61 + bmi ce_aes_ccm_crypt_tail 171 62 ld1 {v2.16b}, [x1], #16 /* load next input block */ 172 63 .if \enc == 1 173 64 eor v2.16b, v2.16b, v5.16b /* final round enc+mac */ 174 - eor v1.16b, v1.16b, v2.16b /* xor with crypted ctr */ 65 + eor v6.16b, v1.16b, v2.16b /* xor with crypted ctr */ 175 66 .else 176 67 eor v2.16b, v2.16b, v1.16b /* xor with crypted ctr */ 177 - eor v1.16b, v2.16b, v5.16b /* final round enc */ 68 + eor v6.16b, v2.16b, v5.16b /* final round enc */ 178 69 .endif 179 70 eor v0.16b, v0.16b, v2.16b /* xor mac with pt ^ rk[last] */ 180 - st1 {v1.16b}, [x0], #16 /* write output block */ 71 + st1 {v6.16b}, [x0], #16 /* write output block */ 181 72 bne 0b 182 73 CPU_LE( rev x8, x8 ) 183 - st1 {v0.16b}, [x5] /* store mac */ 184 74 str x8, [x6, #8] /* store lsb end of ctr (BE) */ 185 - 5: ret 186 - 187 - 6: eor v0.16b, v0.16b, v5.16b /* final round mac */ 188 - eor v1.16b, v1.16b, v5.16b /* final round enc */ 75 + cbnz x7, ce_aes_ccm_final 189 76 st1 {v0.16b}, [x5] /* store mac */ 190 - add w2, w2, #16 /* process partial tail block */ 191 - 7: ldrb w9, [x1], #1 /* get 1 byte of input */ 192 - umov w6, v1.b[0] /* get top crypted ctr byte */ 193 - umov w7, v0.b[0] /* get top mac byte */ 194 - .if \enc == 1 195 - eor w7, w7, w9 196 - eor w9, w9, w6 197 - .else 198 - eor w9, w9, w6 199 - eor w7, w7, w9 200 - .endif 201 - strb w9, [x0], #1 /* store out byte */ 202 - strb w7, [x5], #1 /* store mac byte */ 203 - subs w2, w2, #1 204 - beq 5b 205 - ext v0.16b, v0.16b, v0.16b, #1 /* shift out mac byte */ 206 - ext v1.16b, v1.16b, v1.16b, #1 /* shift out ctr byte */ 207 - b 7b 77 + ret 208 78 .endm 79 + 80 + SYM_FUNC_START_LOCAL(ce_aes_ccm_crypt_tail) 81 + eor v0.16b, v0.16b, v5.16b /* final round mac */ 82 + eor v1.16b, v1.16b, v5.16b /* final round enc */ 83 + 84 + add x1, x1, w2, sxtw /* rewind the input pointer (w2 < 0) */ 85 + add x0, x0, w2, sxtw /* rewind the output pointer */ 86 + 87 + adr_l x8, .Lpermute /* load permute vectors */ 88 + add x9, x8, w2, 
sxtw 89 + sub x8, x8, w2, sxtw 90 + ld1 {v7.16b-v8.16b}, [x9] 91 + ld1 {v9.16b}, [x8] 92 + 93 + ld1 {v2.16b}, [x1] /* load a full block of input */ 94 + tbl v1.16b, {v1.16b}, v7.16b /* move keystream to end of register */ 95 + eor v7.16b, v2.16b, v1.16b /* encrypt partial input block */ 96 + bif v2.16b, v7.16b, v22.16b /* select plaintext */ 97 + tbx v7.16b, {v6.16b}, v8.16b /* insert output from previous iteration */ 98 + tbl v2.16b, {v2.16b}, v9.16b /* copy plaintext to start of v2 */ 99 + eor v0.16b, v0.16b, v2.16b /* fold plaintext into mac */ 100 + 101 + st1 {v7.16b}, [x0] /* store output block */ 102 + cbz x7, 0f 103 + 104 + SYM_INNER_LABEL(ce_aes_ccm_final, SYM_L_LOCAL) 105 + ld1 {v1.16b}, [x7] /* load 1st ctriv */ 106 + 107 + aes_encrypt v0, v1, w4 108 + 109 + /* final round key cancels out */ 110 + eor v0.16b, v0.16b, v1.16b /* en-/decrypt the mac */ 111 + 0: st1 {v0.16b}, [x5] /* store result */ 112 + ret 113 + SYM_FUNC_END(ce_aes_ccm_crypt_tail) 209 114 210 115 /* 211 116 * void ce_aes_ccm_encrypt(u8 out[], u8 const in[], u32 cbytes, 212 117 * u8 const rk[], u32 rounds, u8 mac[], 213 - * u8 ctr[]); 118 + * u8 ctr[], u8 const final_iv[]); 214 119 * void ce_aes_ccm_decrypt(u8 out[], u8 const in[], u32 cbytes, 215 120 * u8 const rk[], u32 rounds, u8 mac[], 216 - * u8 ctr[]); 121 + * u8 ctr[], u8 const final_iv[]); 217 122 */ 218 123 SYM_FUNC_START(ce_aes_ccm_encrypt) 124 + movi v22.16b, #255 219 125 aes_ccm_do_crypt 1 220 126 SYM_FUNC_END(ce_aes_ccm_encrypt) 221 127 222 128 SYM_FUNC_START(ce_aes_ccm_decrypt) 129 + movi v22.16b, #0 223 130 aes_ccm_do_crypt 0 224 131 SYM_FUNC_END(ce_aes_ccm_decrypt) 132 + 133 + .section ".rodata", "a" 134 + .align 6 135 + .fill 15, 1, 0xff 136 + .Lpermute: 137 + .byte 0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7 138 + .byte 0x8, 0x9, 0xa, 0xb, 0xc, 0xd, 0xe, 0xf 139 + .fill 15, 1, 0xff
+105 -49
arch/arm64/crypto/aes-ce-ccm-glue.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * aes-ccm-glue.c - AES-CCM transform for ARMv8 with Crypto Extensions 3 + * aes-ce-ccm-glue.c - AES-CCM transform for ARMv8 with Crypto Extensions 4 4 * 5 - * Copyright (C) 2013 - 2017 Linaro Ltd <ard.biesheuvel@linaro.org> 5 + * Copyright (C) 2013 - 2017 Linaro Ltd. 6 + * Copyright (C) 2024 Google LLC 7 + * 8 + * Author: Ard Biesheuvel <ardb@kernel.org> 6 9 */ 7 10 8 11 #include <asm/neon.h> ··· 17 14 #include <linux/module.h> 18 15 19 16 #include "aes-ce-setkey.h" 17 + 18 + MODULE_IMPORT_NS(CRYPTO_INTERNAL); 20 19 21 20 static int num_rounds(struct crypto_aes_ctx *ctx) 22 21 { ··· 32 27 return 6 + ctx->key_length / 4; 33 28 } 34 29 35 - asmlinkage u32 ce_aes_ccm_auth_data(u8 mac[], u8 const in[], u32 abytes, 36 - u32 macp, u32 const rk[], u32 rounds); 30 + asmlinkage u32 ce_aes_mac_update(u8 const in[], u32 const rk[], int rounds, 31 + int blocks, u8 dg[], int enc_before, 32 + int enc_after); 37 33 38 34 asmlinkage void ce_aes_ccm_encrypt(u8 out[], u8 const in[], u32 cbytes, 39 35 u32 const rk[], u32 rounds, u8 mac[], 40 - u8 ctr[]); 36 + u8 ctr[], u8 const final_iv[]); 41 37 42 38 asmlinkage void ce_aes_ccm_decrypt(u8 out[], u8 const in[], u32 cbytes, 43 39 u32 const rk[], u32 rounds, u8 mac[], 44 - u8 ctr[]); 45 - 46 - asmlinkage void ce_aes_ccm_final(u8 mac[], u8 const ctr[], u32 const rk[], 47 - u32 rounds); 40 + u8 ctr[], u8 const final_iv[]); 48 41 49 42 static int ccm_setkey(struct crypto_aead *tfm, const u8 *in_key, 50 43 unsigned int key_len) ··· 97 94 return 0; 98 95 } 99 96 97 + static u32 ce_aes_ccm_auth_data(u8 mac[], u8 const in[], u32 abytes, 98 + u32 macp, u32 const rk[], u32 rounds) 99 + { 100 + int enc_after = (macp + abytes) % AES_BLOCK_SIZE; 101 + 102 + do { 103 + u32 blocks = abytes / AES_BLOCK_SIZE; 104 + 105 + if (macp == AES_BLOCK_SIZE || (!macp && blocks > 0)) { 106 + u32 rem = ce_aes_mac_update(in, rk, rounds, blocks, mac, 107 + macp, enc_after); 108 + u32 adv = (blocks - rem) * 
AES_BLOCK_SIZE; 109 + 110 + macp = enc_after ? 0 : AES_BLOCK_SIZE; 111 + in += adv; 112 + abytes -= adv; 113 + 114 + if (unlikely(rem)) { 115 + kernel_neon_end(); 116 + kernel_neon_begin(); 117 + macp = 0; 118 + } 119 + } else { 120 + u32 l = min(AES_BLOCK_SIZE - macp, abytes); 121 + 122 + crypto_xor(&mac[macp], in, l); 123 + in += l; 124 + macp += l; 125 + abytes -= l; 126 + } 127 + } while (abytes > 0); 128 + 129 + return macp; 130 + } 131 + 100 132 static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[]) 101 133 { 102 134 struct crypto_aead *aead = crypto_aead_reqtfm(req); ··· 139 101 struct __packed { __be16 l; __be32 h; u16 len; } ltag; 140 102 struct scatter_walk walk; 141 103 u32 len = req->assoclen; 142 - u32 macp = 0; 104 + u32 macp = AES_BLOCK_SIZE; 143 105 144 106 /* prepend the AAD with a length tag */ 145 107 if (len < 0xff00) { ··· 163 125 scatterwalk_start(&walk, sg_next(walk.sg)); 164 126 n = scatterwalk_clamp(&walk, len); 165 127 } 166 - n = min_t(u32, n, SZ_4K); /* yield NEON at least every 4k */ 167 128 p = scatterwalk_map(&walk); 168 129 169 130 macp = ce_aes_ccm_auth_data(mac, p, n, macp, ctx->key_enc, 170 131 num_rounds(ctx)); 171 132 172 - if (len / SZ_4K > (len - n) / SZ_4K) { 173 - kernel_neon_end(); 174 - kernel_neon_begin(); 175 - } 176 133 len -= n; 177 134 178 135 scatterwalk_unmap(p); ··· 182 149 struct crypto_aes_ctx *ctx = crypto_aead_ctx(aead); 183 150 struct skcipher_walk walk; 184 151 u8 __aligned(8) mac[AES_BLOCK_SIZE]; 185 - u8 buf[AES_BLOCK_SIZE]; 152 + u8 orig_iv[AES_BLOCK_SIZE]; 186 153 u32 len = req->cryptlen; 187 154 int err; 188 155 ··· 191 158 return err; 192 159 193 160 /* preserve the original iv for the final round */ 194 - memcpy(buf, req->iv, AES_BLOCK_SIZE); 161 + memcpy(orig_iv, req->iv, AES_BLOCK_SIZE); 195 162 196 163 err = skcipher_walk_aead_encrypt(&walk, req, false); 164 + if (unlikely(err)) 165 + return err; 197 166 198 167 kernel_neon_begin(); 199 168 200 169 if (req->assoclen) 201 170 
ccm_calculate_auth_mac(req, mac); 202 171 203 - while (walk.nbytes) { 172 + do { 204 173 u32 tail = walk.nbytes % AES_BLOCK_SIZE; 205 - bool final = walk.nbytes == walk.total; 174 + const u8 *src = walk.src.virt.addr; 175 + u8 *dst = walk.dst.virt.addr; 176 + u8 buf[AES_BLOCK_SIZE]; 177 + u8 *final_iv = NULL; 206 178 207 - if (final) 179 + if (walk.nbytes == walk.total) { 208 180 tail = 0; 181 + final_iv = orig_iv; 182 + } 209 183 210 - ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr, 211 - walk.nbytes - tail, ctx->key_enc, 212 - num_rounds(ctx), mac, walk.iv); 184 + if (unlikely(walk.nbytes < AES_BLOCK_SIZE)) 185 + src = dst = memcpy(&buf[sizeof(buf) - walk.nbytes], 186 + src, walk.nbytes); 213 187 214 - if (!final) 215 - kernel_neon_end(); 216 - err = skcipher_walk_done(&walk, tail); 217 - if (!final) 218 - kernel_neon_begin(); 219 - } 188 + ce_aes_ccm_encrypt(dst, src, walk.nbytes - tail, 189 + ctx->key_enc, num_rounds(ctx), 190 + mac, walk.iv, final_iv); 220 191 221 - ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx)); 192 + if (unlikely(walk.nbytes < AES_BLOCK_SIZE)) 193 + memcpy(walk.dst.virt.addr, dst, walk.nbytes); 194 + 195 + if (walk.nbytes) { 196 + err = skcipher_walk_done(&walk, tail); 197 + } 198 + } while (walk.nbytes); 222 199 223 200 kernel_neon_end(); 201 + 202 + if (unlikely(err)) 203 + return err; 224 204 225 205 /* copy authtag to end of dst */ 226 206 scatterwalk_map_and_copy(mac, req->dst, req->assoclen + req->cryptlen, 227 207 crypto_aead_authsize(aead), 1); 228 208 229 - return err; 209 + return 0; 230 210 } 231 211 232 212 static int ccm_decrypt(struct aead_request *req) ··· 249 203 unsigned int authsize = crypto_aead_authsize(aead); 250 204 struct skcipher_walk walk; 251 205 u8 __aligned(8) mac[AES_BLOCK_SIZE]; 252 - u8 buf[AES_BLOCK_SIZE]; 206 + u8 orig_iv[AES_BLOCK_SIZE]; 253 207 u32 len = req->cryptlen - authsize; 254 208 int err; 255 209 ··· 258 212 return err; 259 213 260 214 /* preserve the original iv for the final 
round */ 261 - memcpy(buf, req->iv, AES_BLOCK_SIZE); 215 + memcpy(orig_iv, req->iv, AES_BLOCK_SIZE); 262 216 263 217 err = skcipher_walk_aead_decrypt(&walk, req, false); 218 + if (unlikely(err)) 219 + return err; 264 220 265 221 kernel_neon_begin(); 266 222 267 223 if (req->assoclen) 268 224 ccm_calculate_auth_mac(req, mac); 269 225 270 - while (walk.nbytes) { 226 + do { 271 227 u32 tail = walk.nbytes % AES_BLOCK_SIZE; 272 - bool final = walk.nbytes == walk.total; 228 + const u8 *src = walk.src.virt.addr; 229 + u8 *dst = walk.dst.virt.addr; 230 + u8 buf[AES_BLOCK_SIZE]; 231 + u8 *final_iv = NULL; 273 232 274 - if (final) 233 + if (walk.nbytes == walk.total) { 275 234 tail = 0; 235 + final_iv = orig_iv; 236 + } 276 237 277 - ce_aes_ccm_decrypt(walk.dst.virt.addr, walk.src.virt.addr, 278 - walk.nbytes - tail, ctx->key_enc, 279 - num_rounds(ctx), mac, walk.iv); 238 + if (unlikely(walk.nbytes < AES_BLOCK_SIZE)) 239 + src = dst = memcpy(&buf[sizeof(buf) - walk.nbytes], 240 + src, walk.nbytes); 280 241 281 - if (!final) 282 - kernel_neon_end(); 283 - err = skcipher_walk_done(&walk, tail); 284 - if (!final) 285 - kernel_neon_begin(); 286 - } 242 + ce_aes_ccm_decrypt(dst, src, walk.nbytes - tail, 243 + ctx->key_enc, num_rounds(ctx), 244 + mac, walk.iv, final_iv); 287 245 288 - ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx)); 246 + if (unlikely(walk.nbytes < AES_BLOCK_SIZE)) 247 + memcpy(walk.dst.virt.addr, dst, walk.nbytes); 248 + 249 + if (walk.nbytes) { 250 + err = skcipher_walk_done(&walk, tail); 251 + } 252 + } while (walk.nbytes); 289 253 290 254 kernel_neon_end(); 291 255 ··· 303 247 return err; 304 248 305 249 /* compare calculated auth tag with the stored one */ 306 - scatterwalk_map_and_copy(buf, req->src, 250 + scatterwalk_map_and_copy(orig_iv, req->src, 307 251 req->assoclen + req->cryptlen - authsize, 308 252 authsize, 0); 309 253 310 - if (crypto_memneq(mac, buf, authsize)) 254 + if (crypto_memneq(mac, orig_iv, authsize)) 311 255 return -EBADMSG; 312 
256 return 0; 313 257 } ··· 346 290 module_exit(aes_mod_exit); 347 291 348 292 MODULE_DESCRIPTION("Synchronous AES in CCM mode using ARMv8 Crypto Extensions"); 349 - MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>"); 293 + MODULE_AUTHOR("Ard Biesheuvel <ardb@kernel.org>"); 350 294 MODULE_LICENSE("GPL v2"); 351 295 MODULE_ALIAS_CRYPTO("ccm(aes)");
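Editor's note: the reworked `ccm_decrypt()` loop above stages any tail shorter than one block at the *end* of a block-sized stack buffer before calling the asm helper, so the helper always sees a full block's worth of addressable space. A minimal userspace sketch of just that staging step (the `toy_` names are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define TOY_BLOCK_SIZE 16	/* stands in for AES_BLOCK_SIZE */

/*
 * Copy a short (< one block) tail to the end of a block-sized bounce
 * buffer, mirroring the
 *   memcpy(&buf[sizeof(buf) - walk.nbytes], src, walk.nbytes)
 * step in the decrypt loop.  Returns the staged position, which the
 * caller then uses as both src and dst.
 */
static uint8_t *stage_tail(uint8_t buf[TOY_BLOCK_SIZE],
			   const uint8_t *src, size_t nbytes)
{
	assert(nbytes < TOY_BLOCK_SIZE);
	return memcpy(&buf[TOY_BLOCK_SIZE - nbytes], src, nbytes);
}
```

After processing, the kernel code copies the result back out of the bounce buffer with a second `memcpy()` of only `walk.nbytes` bytes.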
+1
arch/arm64/crypto/aes-glue.c
··· 1048 1048 1049 1049 #ifdef USE_V8_CRYPTO_EXTENSIONS 1050 1050 module_cpu_feature_match(AES, aes_init); 1051 + EXPORT_SYMBOL_NS(ce_aes_mac_update, CRYPTO_INTERNAL); 1051 1052 #else 1052 1053 module_init(aes_init); 1053 1054 EXPORT_SYMBOL(neon_aes_ecb_encrypt);
+20
arch/powerpc/crypto/Kconfig
··· 137 137 - Power10 or later 138 138 - Little-endian 139 139 140 + config CRYPTO_DEV_VMX 141 + bool "Support for VMX cryptographic acceleration instructions" 142 + depends on PPC64 && VSX 143 + help 144 + Support for VMX cryptographic acceleration instructions. 145 + 146 + config CRYPTO_DEV_VMX_ENCRYPT 147 + tristate "Encryption acceleration support on P8 CPU" 148 + depends on CRYPTO_DEV_VMX 149 + select CRYPTO_AES 150 + select CRYPTO_CBC 151 + select CRYPTO_CTR 152 + select CRYPTO_GHASH 153 + select CRYPTO_XTS 154 + default m 155 + help 156 + Support for VMX cryptographic acceleration instructions on Power8 CPU. 157 + This module supports acceleration for AES and GHASH in hardware. If you 158 + choose 'M' here, this module will be called vmx-crypto. 159 + 140 160 endmenu
+18 -2
arch/powerpc/crypto/Makefile
··· 16 16 obj-$(CONFIG_CRYPTO_AES_GCM_P10) += aes-gcm-p10-crypto.o 17 17 obj-$(CONFIG_CRYPTO_CHACHA20_P10) += chacha-p10-crypto.o 18 18 obj-$(CONFIG_CRYPTO_POLY1305_P10) += poly1305-p10-crypto.o 19 + obj-$(CONFIG_CRYPTO_DEV_VMX_ENCRYPT) += vmx-crypto.o 19 20 20 21 aes-ppc-spe-y := aes-spe-core.o aes-spe-keys.o aes-tab-4k.o aes-spe-modes.o aes-spe-glue.o 21 22 md5-ppc-y := md5-asm.o md5-glue.o ··· 28 27 aes-gcm-p10-crypto-y := aes-gcm-p10-glue.o aes-gcm-p10.o ghashp10-ppc.o aesp10-ppc.o 29 28 chacha-p10-crypto-y := chacha-p10-glue.o chacha-p10le-8x.o 30 29 poly1305-p10-crypto-y := poly1305-p10-glue.o poly1305-p10le_64.o 30 + vmx-crypto-objs := vmx.o aesp8-ppc.o ghashp8-ppc.o aes.o aes_cbc.o aes_ctr.o aes_xts.o ghash.o 31 + 32 + ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),y) 33 + override flavour := linux-ppc64le 34 + else 35 + ifdef CONFIG_PPC64_ELF_ABI_V2 36 + override flavour := linux-ppc64-elfv2 37 + else 38 + override flavour := linux-ppc64 39 + endif 40 + endif 31 41 32 42 quiet_cmd_perl = PERL $@ 33 - cmd_perl = $(PERL) $< $(if $(CONFIG_CPU_LITTLE_ENDIAN), linux-ppc64le, linux-ppc64) > $@ 43 + cmd_perl = $(PERL) $< $(flavour) > $@ 34 44 35 - targets += aesp10-ppc.S ghashp10-ppc.S 45 + targets += aesp10-ppc.S ghashp10-ppc.S aesp8-ppc.S ghashp8-ppc.S 36 46 37 47 $(obj)/aesp10-ppc.S $(obj)/ghashp10-ppc.S: $(obj)/%.S: $(src)/%.pl FORCE 38 48 $(call if_changed,perl) 39 49 50 + $(obj)/aesp8-ppc.S $(obj)/ghashp8-ppc.S: $(obj)/%.S: $(src)/%.pl FORCE 51 + $(call if_changed,perl) 52 + 40 53 OBJECT_FILES_NON_STANDARD_aesp10-ppc.o := y 41 54 OBJECT_FILES_NON_STANDARD_ghashp10-ppc.o := y 55 + OBJECT_FILES_NON_STANDARD_aesp8-ppc.o := y
+3 -2
crypto/Kconfig
··· 1269 1269 1270 1270 A non-physical non-deterministic ("true") RNG (e.g., an entropy source 1271 1271 compliant with NIST SP800-90B) intended to provide a seed to a 1272 - deterministic RNG (e.g. per NIST SP800-90C). 1272 + deterministic RNG (e.g., per NIST SP800-90C). 1273 1273 This RNG does not perform any cryptographic whitening of the generated 1274 + random numbers. 1274 1275 1275 - See https://www.chronox.de/jent.html 1276 + See https://www.chronox.de/jent/ 1276 1277 1277 1278 if CRYPTO_JITTERENTROPY 1278 1279 if CRYPTO_FIPS && EXPERT
+10 -11
crypto/ahash.c
··· 618 618 } 619 619 EXPORT_SYMBOL_GPL(crypto_has_ahash); 620 620 621 + static bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg) 622 + { 623 + struct crypto_alg *alg = &halg->base; 624 + 625 + if (alg->cra_type == &crypto_shash_type) 626 + return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg)); 627 + 628 + return __crypto_ahash_alg(alg)->setkey != ahash_nosetkey; 629 + } 630 + 621 631 struct crypto_ahash *crypto_clone_ahash(struct crypto_ahash *hash) 622 632 { 623 633 struct hash_alg_common *halg = crypto_hash_alg_common(hash); ··· 769 759 return crypto_register_instance(tmpl, ahash_crypto_instance(inst)); 770 760 } 771 761 EXPORT_SYMBOL_GPL(ahash_register_instance); 772 - 773 - bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg) 774 - { 775 - struct crypto_alg *alg = &halg->base; 776 - 777 - if (alg->cra_type == &crypto_shash_type) 778 - return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg)); 779 - 780 - return __crypto_ahash_alg(alg)->setkey != ahash_nosetkey; 781 - } 782 - EXPORT_SYMBOL_GPL(crypto_hash_alg_has_setkey); 783 762 784 763 MODULE_LICENSE("GPL"); 785 764 MODULE_DESCRIPTION("Asynchronous cryptographic hash type");
+2 -2
crypto/asymmetric_keys/verify_pefile.c
··· 28 28 const struct pe32plus_opt_hdr *pe64; 29 29 const struct data_directory *ddir; 30 30 const struct data_dirent *dde; 31 - const struct section_header *secs, *sec; 31 + const struct section_header *sec; 32 32 size_t cursor, datalen = pelen; 33 33 34 34 kenter(""); ··· 110 110 ctx->n_sections = pe->sections; 111 111 if (ctx->n_sections > (ctx->header_size - cursor) / sizeof(*sec)) 112 112 return -ELIBBAD; 113 - ctx->secs = secs = pebuf + cursor; 113 + ctx->secs = pebuf + cursor; 114 114 115 115 return 0; 116 116 }
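Editor's note: the bounds check retained in this hunk, `ctx->n_sections > (ctx->header_size - cursor) / sizeof(*sec)`, divides the remaining space by the element size instead of multiplying the count, which sidesteps integer overflow. A hedged userspace sketch of that pattern (toy sizes, not the real PE parser):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* PE COFF section headers are 40 bytes each; a toy stand-in here. */
struct toy_section_header { uint8_t bytes[40]; };

/*
 * Overflow-safe bounds check in the style of pefile_parse_binary():
 * dividing (header_size - cursor) by the element size avoids computing
 * n_sections * sizeof(elem), which could wrap for hostile inputs.
 */
static int toy_sections_fit(size_t n_sections, size_t header_size, size_t cursor)
{
	if (cursor > header_size)
		return 0;
	return n_sections <=
	       (header_size - cursor) / sizeof(struct toy_section_header);
}
```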
+32 -31
crypto/dh.c
··· 106 106 */ 107 107 static int dh_is_pubkey_valid(struct dh_ctx *ctx, MPI y) 108 108 { 109 + MPI val, q; 110 + int ret; 111 + 112 + if (!fips_enabled) 113 + return 0; 114 + 109 115 if (unlikely(!ctx->p)) 110 116 return -EINVAL; 111 117 ··· 131 125 * 132 126 * For the safe-prime groups q = (p - 1)/2. 133 127 */ 134 - if (fips_enabled) { 135 - MPI val, q; 136 - int ret; 128 + val = mpi_alloc(0); 129 + if (!val) 130 + return -ENOMEM; 137 131 138 - val = mpi_alloc(0); 139 - if (!val) 140 - return -ENOMEM; 141 - 142 - q = mpi_alloc(mpi_get_nlimbs(ctx->p)); 143 - if (!q) { 144 - mpi_free(val); 145 - return -ENOMEM; 146 - } 147 - 148 - /* 149 - * ->p is odd, so no need to explicitly subtract one 150 - * from it before shifting to the right. 151 - */ 152 - mpi_rshift(q, ctx->p, 1); 153 - 154 - ret = mpi_powm(val, y, q, ctx->p); 155 - mpi_free(q); 156 - if (ret) { 157 - mpi_free(val); 158 - return ret; 159 - } 160 - 161 - ret = mpi_cmp_ui(val, 1); 162 - 132 + q = mpi_alloc(mpi_get_nlimbs(ctx->p)); 133 + if (!q) { 163 134 mpi_free(val); 164 - 165 - if (ret != 0) 166 - return -EINVAL; 135 + return -ENOMEM; 167 136 } 137 + 138 + /* 139 + * ->p is odd, so no need to explicitly subtract one 140 + * from it before shifting to the right. 141 + */ 142 + mpi_rshift(q, ctx->p, 1); 143 + 144 + ret = mpi_powm(val, y, q, ctx->p); 145 + mpi_free(q); 146 + if (ret) { 147 + mpi_free(val); 148 + return ret; 149 + } 150 + 151 + ret = mpi_cmp_ui(val, 1); 152 + 153 + mpi_free(val); 154 + 155 + if (ret != 0) 156 + return -EINVAL; 168 157 169 158 return 0; 170 159 }
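Editor's note: the hunk above makes the public-key subgroup test FIPS-only; the test itself accepts `y` only when `y^q mod p == 1` with `q = (p - 1) / 2` for a safe-prime group. A minimal userspace sketch of that check with machine integers instead of MPI arithmetic (`toy_` names and the tiny safe prime are illustrative only):

```c
#include <assert.h>
#include <stdint.h>

/* Square-and-multiply modular exponentiation (toy scale, not constant time). */
static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t mod)
{
	uint64_t result = 1;

	base %= mod;
	while (exp) {
		if (exp & 1)
			result = result * base % mod;
		base = base * base % mod;
		exp >>= 1;
	}
	return result;
}

/*
 * Mirror of the subgroup membership test: for a safe prime p with
 * q = (p - 1) / 2, y lies in the order-q subgroup iff y^q == 1 (mod p).
 */
static int toy_dh_is_pubkey_valid(uint64_t y, uint64_t p)
{
	return powmod(y, (p - 1) / 2, p) == 1;
}
```

With p = 23 (a safe prime, q = 11), y = 2 passes since 2^11 = 2048 ≡ 1 (mod 23), while y = 5 fails since 5^11 ≡ 22 ≡ −1 (mod 23).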
+2 -2
crypto/pcbc.c
··· 71 71 72 72 err = skcipher_walk_virt(&walk, req, false); 73 73 74 - while ((nbytes = walk.nbytes)) { 74 + while (walk.nbytes) { 75 75 if (walk.src.virt.addr == walk.dst.virt.addr) 76 76 nbytes = crypto_pcbc_encrypt_inplace(req, &walk, 77 77 cipher); ··· 138 138 139 139 err = skcipher_walk_virt(&walk, req, false); 140 140 141 - while ((nbytes = walk.nbytes)) { 141 + while (walk.nbytes) { 142 142 if (walk.src.virt.addr == walk.dst.virt.addr) 143 143 nbytes = crypto_pcbc_decrypt_inplace(req, &walk, 144 144 cipher);
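Editor's note: the loops cleaned up in this hunk walk PCBC chaining, where the IV fed into each block is the XOR of the previous block's plaintext and ciphertext. A toy single-byte-block sketch of that chaining, using a XOR "cipher" in place of a real block cipher (illustrative only):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* PCBC encrypt: C_i = E_k(P_i ^ IV_i), then IV_{i+1} = P_i ^ C_i. */
static void toy_pcbc_encrypt(uint8_t *out, const uint8_t *in, size_t len,
			     uint8_t key, uint8_t iv)
{
	for (size_t i = 0; i < len; i++) {
		out[i] = (uint8_t)((in[i] ^ iv) ^ key);	/* E_k is XOR with key */
		iv = in[i] ^ out[i];			/* propagate P ^ C */
	}
}

/* PCBC decrypt: P_i = D_k(C_i) ^ IV_i, same IV propagation. */
static void toy_pcbc_decrypt(uint8_t *out, const uint8_t *in, size_t len,
			     uint8_t key, uint8_t iv)
{
	for (size_t i = 0; i < len; i++) {
		out[i] = (uint8_t)((in[i] ^ key) ^ iv);
		iv = out[i] ^ in[i];
	}
}
```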
+32 -4
crypto/rsa.c
··· 24 24 MPI qinv; 25 25 }; 26 26 27 + static int rsa_check_payload(MPI x, MPI n) 28 + { 29 + MPI n1; 30 + 31 + if (mpi_cmp_ui(x, 1) <= 0) 32 + return -EINVAL; 33 + 34 + n1 = mpi_alloc(0); 35 + if (!n1) 36 + return -ENOMEM; 37 + 38 + if (mpi_sub_ui(n1, n, 1) || mpi_cmp(x, n1) >= 0) { 39 + mpi_free(n1); 40 + return -EINVAL; 41 + } 42 + 43 + mpi_free(n1); 44 + return 0; 45 + } 46 + 27 47 /* 28 48 * RSAEP function [RFC3447 sec 5.1.1] 29 49 * c = m^e mod n; 30 50 */ 31 51 static int _rsa_enc(const struct rsa_mpi_key *key, MPI c, MPI m) 32 52 { 33 - /* (1) Validate 0 <= m < n */ 34 - if (mpi_cmp_ui(m, 0) < 0 || mpi_cmp(m, key->n) >= 0) 53 + /* 54 + * Even though (1) in RFC3447 only requires 0 <= m <= n - 1, we are 55 + * slightly more conservative and require 1 < m < n - 1. This is in line 56 + * with SP 800-56Br2, Section 7.1.1. 57 + */ 58 + if (rsa_check_payload(m, key->n)) 35 59 return -EINVAL; 36 60 37 61 /* (2) c = m^e mod n */ ··· 74 50 MPI m2, m12_or_qh; 75 51 int ret = -ENOMEM; 76 52 77 - /* (1) Validate 0 <= c < n */ 78 - if (mpi_cmp_ui(c, 0) < 0 || mpi_cmp(c, key->n) >= 0) 53 + /* 54 + * Even though (1) in RFC3447 only requires 0 <= c <= n - 1, we are 55 + * slightly more conservative and require 1 < c < n - 1. This is in line 56 + * with SP 800-56Br2, Section 7.1.2. 57 + */ 58 + if (rsa_check_payload(c, key->n)) 79 59 return -EINVAL; 80 60 81 61 m2 = mpi_alloc(0);
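Editor's note: the new `rsa_check_payload()` above tightens the RFC 3447 bound 0 <= x <= n − 1 to the SP 800-56Br2 range 1 < x < n − 1, rejecting the degenerate values 0, 1, and n − 1. A hedged sketch of the same bound with plain integers instead of MPI (the `toy_` name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy analogue of rsa_check_payload(): accept x only when 1 < x < n - 1.
 * 0, 1, and n - 1 are fixed points of RSA (0^e = 0, 1^e = 1, and
 * (n-1)^e = +/-1 mod n), so rejecting them costs nothing.
 */
static int toy_rsa_check_payload(uint64_t x, uint64_t n)
{
	if (x <= 1)
		return -1;		/* rejects 0 and 1 */
	if (x >= n - 1)
		return -1;		/* rejects n - 1 and above */
	return 0;
}
```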
+30 -8
crypto/scompress.c
··· 117 117 struct crypto_scomp *scomp = *tfm_ctx; 118 118 void **ctx = acomp_request_ctx(req); 119 119 struct scomp_scratch *scratch; 120 + void *src, *dst; 120 121 unsigned int dlen; 121 122 int ret; 122 123 ··· 135 134 scratch = raw_cpu_ptr(&scomp_scratch); 136 135 spin_lock(&scratch->lock); 137 136 138 - scatterwalk_map_and_copy(scratch->src, req->src, 0, req->slen, 0); 139 - if (dir) 140 - ret = crypto_scomp_compress(scomp, scratch->src, req->slen, 141 - scratch->dst, &req->dlen, *ctx); 137 + if (sg_nents(req->src) == 1 && !PageHighMem(sg_page(req->src))) { 138 + src = page_to_virt(sg_page(req->src)) + req->src->offset; 139 + } else { 140 + scatterwalk_map_and_copy(scratch->src, req->src, 0, 141 + req->slen, 0); 142 + src = scratch->src; 143 + } 144 + 145 + if (req->dst && sg_nents(req->dst) == 1 && !PageHighMem(sg_page(req->dst))) 146 + dst = page_to_virt(sg_page(req->dst)) + req->dst->offset; 142 147 else 143 - ret = crypto_scomp_decompress(scomp, scratch->src, req->slen, 144 - scratch->dst, &req->dlen, *ctx); 148 + dst = scratch->dst; 149 + 150 + if (dir) 151 + ret = crypto_scomp_compress(scomp, src, req->slen, 152 + dst, &req->dlen, *ctx); 153 + else 154 + ret = crypto_scomp_decompress(scomp, src, req->slen, 155 + dst, &req->dlen, *ctx); 145 156 if (!ret) { 146 157 if (!req->dst) { 147 158 req->dst = sgl_alloc(req->dlen, GFP_ATOMIC, NULL); ··· 165 152 ret = -ENOSPC; 166 153 goto out; 167 154 } 168 - scatterwalk_map_and_copy(scratch->dst, req->dst, 0, req->dlen, 169 - 1); 155 + if (dst == scratch->dst) { 156 + scatterwalk_map_and_copy(scratch->dst, req->dst, 0, 157 + req->dlen, 1); 158 + } else { 159 + int nr_pages = DIV_ROUND_UP(req->dst->offset + req->dlen, PAGE_SIZE); 160 + int i; 161 + struct page *dst_page = sg_page(req->dst); 162 + 163 + for (i = 0; i < nr_pages; i++) 164 + flush_dcache_page(dst_page + i); 165 + } 170 166 } 171 167 out: 172 168 spin_unlock(&scratch->lock);
+3
crypto/tcrypt.c
··· 1851 1851 ret = min(ret, tcrypt_test("cbc(aria)")); 1852 1852 ret = min(ret, tcrypt_test("ctr(aria)")); 1853 1853 break; 1854 + case 193: 1855 + ret = min(ret, tcrypt_test("ffdhe2048(dh)")); 1856 + break; 1854 1857 case 200: 1855 1858 test_cipher_speed("ecb(aes)", ENCRYPT, sec, NULL, 0, 1856 1859 speed_template_16_24_32);
-8
crypto/testmgr.c
··· 5720 5720 } 5721 5721 }, { 5722 5722 #endif 5723 - .alg = "xts4096(paes)", 5724 - .test = alg_test_null, 5725 - .fips_allowed = 1, 5726 - }, { 5727 - .alg = "xts512(paes)", 5728 - .test = alg_test_null, 5729 - .fips_allowed = 1, 5730 - }, { 5731 5723 .alg = "xxhash64", 5732 5724 .test = alg_test_hash, 5733 5725 .fips_allowed = 1,
+2 -4
drivers/char/hw_random/hisi-rng.c
··· 89 89 rng->rng.read = hisi_rng_read; 90 90 91 91 ret = devm_hwrng_register(&pdev->dev, &rng->rng); 92 - if (ret) { 93 - dev_err(&pdev->dev, "failed to register hwrng\n"); 94 - return ret; 95 - } 92 + if (ret) 93 + return dev_err_probe(&pdev->dev, ret, "failed to register hwrng\n"); 96 94 97 95 return 0; 98 96 }
-7
drivers/crypto/Kconfig
··· 611 611 To compile this driver as a module, choose M here. The 612 612 module will be called qcom-rng. If unsure, say N. 613 613 614 - config CRYPTO_DEV_VMX 615 - bool "Support for VMX cryptographic acceleration instructions" 616 - depends on PPC64 && VSX 617 - help 618 - Support for VMX cryptographic acceleration instructions. 619 - 620 - source "drivers/crypto/vmx/Kconfig" 621 614 622 615 config CRYPTO_DEV_IMGTEC_HASH 623 616 tristate "Imagination Technologies hardware hash accelerator"
-1
drivers/crypto/Makefile
··· 42 42 obj-y += stm32/ 43 43 obj-$(CONFIG_CRYPTO_DEV_TALITOS) += talitos.o 44 44 obj-$(CONFIG_CRYPTO_DEV_VIRTIO) += virtio/ 45 - obj-$(CONFIG_CRYPTO_DEV_VMX) += vmx/ 46 45 obj-$(CONFIG_CRYPTO_DEV_BCM_SPU) += bcm/ 47 46 obj-$(CONFIG_CRYPTO_DEV_SAFEXCEL) += inside-secure/ 48 47 obj-$(CONFIG_CRYPTO_DEV_ARTPEC6) += axis/
+1 -1
drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
··· 362 362 digestsize = SHA512_DIGEST_SIZE; 363 363 364 364 /* the padding could be up to two block. */ 365 - buf = kzalloc(bs * 2, GFP_KERNEL | GFP_DMA); 365 + buf = kcalloc(2, bs, GFP_KERNEL | GFP_DMA); 366 366 if (!buf) { 367 367 err = -ENOMEM; 368 368 goto theend;
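Editor's note: the one-line change above swaps an open-coded `kzalloc(bs * 2, ...)` for `kcalloc(2, bs, ...)`, whose count-times-size product is checked for overflow before allocating. A userspace sketch of the same idea (assumes the GCC/Clang `__builtin_mul_overflow` built-in; the `checked_zalloc` name is illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/*
 * Sketch of why kcalloc(n, size, ...) is preferred over an open-coded
 * kzalloc(n * size, ...): the multiplication is checked for wraparound
 * instead of silently producing a too-small allocation.
 */
static void *checked_zalloc(size_t n, size_t size)
{
	size_t bytes;

	if (__builtin_mul_overflow(n, size, &bytes))
		return NULL;		/* n * size wraps: refuse */

	return calloc(1, bytes);	/* calloc() zeroes, like kcalloc() */
}
```

(Userspace `calloc(n, size)` already performs this check itself; the explicit form above just makes the overflow path visible.)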
+9 -2
drivers/crypto/ccp/platform-access.c
··· 118 118 goto unlock; 119 119 } 120 120 121 - /* Store the status in request header for caller to investigate */ 121 + /* 122 + * Read status from PSP. If status is non-zero, it indicates an error 123 + * occurred during "processing" of the command. 124 + * If status is zero, it indicates the command was "processed" 125 + * successfully, but the result of the command is in the payload. 126 + * Return both cases to the caller as -EIO to investigate. 127 + */ 122 128 cmd_reg = ioread32(cmd); 123 - req->header.status = FIELD_GET(PSP_CMDRESP_STS, cmd_reg); 129 + if (FIELD_GET(PSP_CMDRESP_STS, cmd_reg)) 130 + req->header.status = FIELD_GET(PSP_CMDRESP_STS, cmd_reg); 124 131 if (req->header.status) { 125 132 ret = -EIO; 126 133 goto unlock;
+7 -4
drivers/crypto/ccp/psp-dev.c
··· 156 156 } 157 157 psp->capability = val; 158 158 159 - /* Detect if TSME and SME are both enabled */ 159 + /* Detect TSME and/or SME status */ 160 160 if (PSP_CAPABILITY(psp, PSP_SECURITY_REPORTING) && 161 - psp->capability & (PSP_SECURITY_TSME_STATUS << PSP_CAPABILITY_PSP_SECURITY_OFFSET) && 162 - cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) 163 - dev_notice(psp->dev, "psp: Both TSME and SME are active, SME is unnecessary when TSME is active.\n"); 161 + psp->capability & (PSP_SECURITY_TSME_STATUS << PSP_CAPABILITY_PSP_SECURITY_OFFSET)) { 162 + if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) 163 + dev_notice(psp->dev, "psp: Both TSME and SME are active, SME is unnecessary when TSME is active.\n"); 164 + else 165 + dev_notice(psp->dev, "psp: TSME enabled\n"); 166 + } 164 167 165 168 return 0; 166 169 }
+58
drivers/crypto/hisilicon/debugfs.c
··· 24 24 #define QM_DFX_QN_SHIFT 16 25 25 #define QM_DFX_CNT_CLR_CE 0x100118 26 26 #define QM_DBG_WRITE_LEN 1024 27 + #define QM_IN_IDLE_ST_REG 0x1040e4 28 + #define QM_IN_IDLE_STATE 0x1 27 29 28 30 static const char * const qm_debug_file_name[] = { 29 31 [CURRENT_QM] = "current_qm", ··· 83 81 {"QM_DFX_FF_ST5 ", 0x1040dc}, 84 82 {"QM_DFX_FF_ST6 ", 0x1040e0}, 85 83 {"QM_IN_IDLE_ST ", 0x1040e4}, 84 + {"QM_CACHE_CTL ", 0x100050}, 85 + {"QM_TIMEOUT_CFG ", 0x100070}, 86 + {"QM_DB_TIMEOUT_CFG ", 0x100074}, 87 + {"QM_FLR_PENDING_TIME_CFG ", 0x100078}, 88 + {"QM_ARUSR_MCFG1 ", 0x100088}, 89 + {"QM_AWUSR_MCFG1 ", 0x100098}, 90 + {"QM_AXI_M_CFG_ENABLE ", 0x1000B0}, 91 + {"QM_RAS_CE_THRESHOLD ", 0x1000F8}, 92 + {"QM_AXI_TIMEOUT_CTRL ", 0x100120}, 93 + {"QM_AXI_TIMEOUT_STATUS ", 0x100124}, 94 + {"QM_CQE_AGGR_TIMEOUT_CTRL ", 0x100144}, 95 + {"ACC_RAS_MSI_INT_SEL ", 0x1040fc}, 96 + {"QM_CQE_OUT ", 0x104100}, 97 + {"QM_EQE_OUT ", 0x104104}, 98 + {"QM_AEQE_OUT ", 0x104108}, 99 + {"QM_DB_INFO0 ", 0x104180}, 100 + {"QM_DB_INFO1 ", 0x104184}, 101 + {"QM_AM_CTRL_GLOBAL ", 0x300000}, 102 + {"QM_AM_CURR_PORT_STS ", 0x300100}, 103 + {"QM_AM_CURR_TRANS_RETURN ", 0x300150}, 104 + {"QM_AM_CURR_RD_MAX_TXID ", 0x300154}, 105 + {"QM_AM_CURR_WR_MAX_TXID ", 0x300158}, 106 + {"QM_AM_ALARM_RRESP ", 0x300180}, 107 + {"QM_AM_ALARM_BRESP ", 0x300184}, 86 108 }; 87 109 88 110 static const struct debugfs_reg32 qm_vf_dfx_regs[] = { ··· 1027 1001 } 1028 1002 DEFINE_SHOW_ATTRIBUTE(qm_diff_regs); 1029 1003 1004 + static int qm_state_show(struct seq_file *s, void *unused) 1005 + { 1006 + struct hisi_qm *qm = s->private; 1007 + u32 val; 1008 + int ret; 1009 + 1010 + /* If device is in suspended, directly return the idle state. 
*/ 1011 + ret = hisi_qm_get_dfx_access(qm); 1012 + if (!ret) { 1013 + val = readl(qm->io_base + QM_IN_IDLE_ST_REG); 1014 + hisi_qm_put_dfx_access(qm); 1015 + } else if (ret == -EAGAIN) { 1016 + val = QM_IN_IDLE_STATE; 1017 + } else { 1018 + return ret; 1019 + } 1020 + 1021 + seq_printf(s, "%u\n", val); 1022 + 1023 + return 0; 1024 + } 1025 + 1026 + DEFINE_SHOW_ATTRIBUTE(qm_state); 1027 + 1030 1028 static ssize_t qm_status_read(struct file *filp, char __user *buffer, 1031 1029 size_t count, loff_t *pos) 1032 1030 { ··· 1112 1062 void hisi_qm_debug_init(struct hisi_qm *qm) 1113 1063 { 1114 1064 struct dfx_diff_registers *qm_regs = qm->debug.qm_diff_regs; 1065 + struct qm_dev_dfx *dev_dfx = &qm->debug.dev_dfx; 1115 1066 struct qm_dfx *dfx = &qm->debug.dfx; 1116 1067 struct dentry *qm_d; 1117 1068 void *data; ··· 1123 1072 1124 1073 /* only show this in PF */ 1125 1074 if (qm->fun_type == QM_HW_PF) { 1075 + debugfs_create_file("qm_state", 0444, qm->debug.qm_d, 1076 + qm, &qm_state_fops); 1077 + 1126 1078 qm_create_debugfs_file(qm, qm->debug.debug_root, CURRENT_QM); 1127 1079 for (i = CURRENT_Q; i < DEBUG_FILE_NUM; i++) 1128 1080 qm_create_debugfs_file(qm, qm->debug.qm_d, i); ··· 1141 1087 1142 1088 debugfs_create_file("status", 0444, qm->debug.qm_d, qm, 1143 1089 &qm_status_fops); 1090 + 1091 + debugfs_create_u32("dev_state", 0444, qm->debug.qm_d, &dev_dfx->dev_state); 1092 + debugfs_create_u32("dev_timeout", 0644, qm->debug.qm_d, &dev_dfx->dev_timeout); 1093 + 1144 1094 for (i = 0; i < ARRAY_SIZE(qm_dfx_files); i++) { 1145 1095 data = (atomic64_t *)((uintptr_t)dfx + qm_dfx_files[i].offset); 1146 1096 debugfs_create_file(qm_dfx_files[i].name,
+1 -1
drivers/crypto/hisilicon/hpre/hpre_main.c
··· 440 440 441 441 struct hisi_qp *hpre_create_qp(u8 type) 442 442 { 443 - int node = cpu_to_node(smp_processor_id()); 443 + int node = cpu_to_node(raw_smp_processor_id()); 444 444 struct hisi_qp *qp = NULL; 445 445 int ret; 446 446
+124 -60
drivers/crypto/hisilicon/qm.c
··· 236 236 237 237 #define QM_DEV_ALG_MAX_LEN 256 238 238 239 + /* abnormal status value for stopping queue */ 240 + #define QM_STOP_QUEUE_FAIL 1 241 + #define QM_DUMP_SQC_FAIL 3 242 + #define QM_DUMP_CQC_FAIL 4 243 + #define QM_FINISH_WAIT 5 244 + 239 245 #define QM_MK_CQC_DW3_V1(hop_num, pg_sz, buf_sz, cqe_sz) \ 240 246 (((hop_num) << QM_CQ_HOP_NUM_SHIFT) | \ 241 247 ((pg_sz) << QM_CQ_PAGE_SIZE_SHIFT) | \ ··· 318 312 {QM_SUPPORT_DB_ISOLATION, 0x30, 0, BIT(0), 0x0, 0x0, 0x0}, 319 313 {QM_SUPPORT_FUNC_QOS, 0x3100, 0, BIT(8), 0x0, 0x0, 0x1}, 320 314 {QM_SUPPORT_STOP_QP, 0x3100, 0, BIT(9), 0x0, 0x0, 0x1}, 315 + {QM_SUPPORT_STOP_FUNC, 0x3100, 0, BIT(10), 0x0, 0x0, 0x1}, 321 316 {QM_SUPPORT_MB_COMMAND, 0x3100, 0, BIT(11), 0x0, 0x0, 0x1}, 322 317 {QM_SUPPORT_SVA_PREFETCH, 0x3100, 0, BIT(14), 0x0, 0x0, 0x1}, 323 318 }; ··· 1681 1674 return ret; 1682 1675 } 1683 1676 1677 + static int qm_drain_qm(struct hisi_qm *qm) 1678 + { 1679 + return hisi_qm_mb(qm, QM_MB_CMD_FLUSH_QM, 0, 0, 0); 1680 + } 1681 + 1684 1682 static int qm_stop_qp(struct hisi_qp *qp) 1685 1683 { 1686 1684 return hisi_qm_mb(qp->qm, QM_MB_CMD_STOP_QP, 0, qp->qp_id, 0); ··· 2043 2031 } 2044 2032 } 2045 2033 2046 - /** 2047 - * qm_drain_qp() - Drain a qp. 2048 - * @qp: The qp we want to drain. 2049 - * 2050 - * Determine whether the queue is cleared by judging the tail pointers of 2051 - * sq and cq. 2052 - */ 2053 - static int qm_drain_qp(struct hisi_qp *qp) 2034 + static int qm_wait_qp_empty(struct hisi_qm *qm, u32 *state, u32 qp_id) 2054 2035 { 2055 - struct hisi_qm *qm = qp->qm; 2056 2036 struct device *dev = &qm->pdev->dev; 2057 2037 struct qm_sqc sqc; 2058 2038 struct qm_cqc cqc; 2059 2039 int ret, i = 0; 2060 2040 2061 - /* No need to judge if master OOO is blocked. 
*/ 2062 - if (qm_check_dev_error(qm)) 2063 - return 0; 2064 - 2065 - /* Kunpeng930 supports drain qp by device */ 2066 - if (test_bit(QM_SUPPORT_STOP_QP, &qm->caps)) { 2067 - ret = qm_stop_qp(qp); 2068 - if (ret) 2069 - dev_err(dev, "Failed to stop qp(%u)!\n", qp->qp_id); 2070 - return ret; 2071 - } 2072 - 2073 2041 while (++i) { 2074 - ret = qm_set_and_get_xqc(qm, QM_MB_CMD_SQC, &sqc, qp->qp_id, 1); 2042 + ret = qm_set_and_get_xqc(qm, QM_MB_CMD_SQC, &sqc, qp_id, 1); 2075 2043 if (ret) { 2076 2044 dev_err_ratelimited(dev, "Failed to dump sqc!\n"); 2045 + *state = QM_DUMP_SQC_FAIL; 2077 2046 return ret; 2078 2047 } 2079 2048 2080 - ret = qm_set_and_get_xqc(qm, QM_MB_CMD_CQC, &cqc, qp->qp_id, 1); 2049 + ret = qm_set_and_get_xqc(qm, QM_MB_CMD_CQC, &cqc, qp_id, 1); 2081 2050 if (ret) { 2082 2051 dev_err_ratelimited(dev, "Failed to dump cqc!\n"); 2052 + *state = QM_DUMP_CQC_FAIL; 2083 2053 return ret; 2084 2054 } 2085 2055 ··· 2070 2076 break; 2071 2077 2072 2078 if (i == MAX_WAIT_COUNTS) { 2073 - dev_err(dev, "Fail to empty queue %u!\n", qp->qp_id); 2074 - return -EBUSY; 2079 + dev_err(dev, "Fail to empty queue %u!\n", qp_id); 2080 + *state = QM_STOP_QUEUE_FAIL; 2081 + return -ETIMEDOUT; 2075 2082 } 2076 2083 2077 2084 usleep_range(WAIT_PERIOD_US_MIN, WAIT_PERIOD_US_MAX); ··· 2081 2086 return 0; 2082 2087 } 2083 2088 2084 - static int qm_stop_qp_nolock(struct hisi_qp *qp) 2089 + /** 2090 + * qm_drain_qp() - Drain a qp. 2091 + * @qp: The qp we want to drain. 2092 + * 2093 + * If the device does not support stopping queue by sending mailbox, 2094 + * determine whether the queue is cleared by judging the tail pointers of 2095 + * sq and cq. 2096 + */ 2097 + static int qm_drain_qp(struct hisi_qp *qp) 2085 2098 { 2086 - struct device *dev = &qp->qm->pdev->dev; 2099 + struct hisi_qm *qm = qp->qm; 2100 + struct hisi_qm *pf_qm = pci_get_drvdata(pci_physfn(qm->pdev)); 2101 + u32 state = 0; 2102 + int ret; 2103 + 2104 + /* No need to judge if master OOO is blocked. 
*/ 2105 + if (qm_check_dev_error(pf_qm)) 2106 + return 0; 2107 + 2108 + /* HW V3 supports drain qp by device */ 2109 + if (test_bit(QM_SUPPORT_STOP_QP, &qm->caps)) { 2110 + ret = qm_stop_qp(qp); 2111 + if (ret) { 2112 + dev_err(&qm->pdev->dev, "Failed to stop qp!\n"); 2113 + state = QM_STOP_QUEUE_FAIL; 2114 + goto set_dev_state; 2115 + } 2116 + return ret; 2117 + } 2118 + 2119 + ret = qm_wait_qp_empty(qm, &state, qp->qp_id); 2120 + if (ret) 2121 + goto set_dev_state; 2122 + 2123 + return 0; 2124 + 2125 + set_dev_state: 2126 + if (qm->debug.dev_dfx.dev_timeout) 2127 + qm->debug.dev_dfx.dev_state = state; 2128 + 2129 + return ret; 2130 + } 2131 + 2132 + static void qm_stop_qp_nolock(struct hisi_qp *qp) 2133 + { 2134 + struct hisi_qm *qm = qp->qm; 2135 + struct device *dev = &qm->pdev->dev; 2087 2136 int ret; 2088 2137 2089 2138 /* ··· 2138 2099 */ 2139 2100 if (atomic_read(&qp->qp_status.flags) != QP_START) { 2140 2101 qp->is_resetting = false; 2141 - return 0; 2102 + return; 2142 2103 } 2143 2104 2144 2105 atomic_set(&qp->qp_status.flags, QP_STOP); 2145 2106 2146 - ret = qm_drain_qp(qp); 2147 - if (ret) 2148 - dev_err(dev, "Failed to drain out data for stopping!\n"); 2107 + /* V3 supports direct stop function when FLR prepare */ 2108 + if (qm->ver < QM_HW_V3 || qm->status.stop_reason == QM_NORMAL) { 2109 + ret = qm_drain_qp(qp); 2110 + if (ret) 2111 + dev_err(dev, "Failed to drain out data for stopping qp(%u)!\n", qp->qp_id); 2112 + } 2149 2113 2150 - flush_workqueue(qp->qm->wq); 2114 + flush_workqueue(qm->wq); 2151 2115 if (unlikely(qp->is_resetting && atomic_read(&qp->qp_status.used))) 2152 2116 qp_stop_fail_cb(qp); 2153 2117 2154 2118 dev_dbg(dev, "stop queue %u!", qp->qp_id); 2155 - 2156 - return 0; 2157 2119 } 2158 2120 2159 2121 /** 2160 2122 * hisi_qm_stop_qp() - Stop a qp in qm. 2161 2123 * @qp: The qp we want to stop. 2162 2124 * 2163 - * This function is reverse of hisi_qm_start_qp. Return 0 if successful. 
2125 + * This function is reverse of hisi_qm_start_qp. 2164 2126 */ 2165 - int hisi_qm_stop_qp(struct hisi_qp *qp) 2127 + void hisi_qm_stop_qp(struct hisi_qp *qp) 2166 2128 { 2167 - int ret; 2168 - 2169 2129 down_write(&qp->qm->qps_lock); 2170 - ret = qm_stop_qp_nolock(qp); 2130 + qm_stop_qp_nolock(qp); 2171 2131 up_write(&qp->qm->qps_lock); 2172 - 2173 - return ret; 2174 2132 } 2175 2133 EXPORT_SYMBOL_GPL(hisi_qm_stop_qp); 2176 2134 ··· 2345 2309 2346 2310 static void hisi_qm_uacce_stop_queue(struct uacce_queue *q) 2347 2311 { 2348 - hisi_qm_stop_qp(q->priv); 2312 + struct hisi_qp *qp = q->priv; 2313 + struct hisi_qm *qm = qp->qm; 2314 + struct qm_dev_dfx *dev_dfx = &qm->debug.dev_dfx; 2315 + u32 i = 0; 2316 + 2317 + hisi_qm_stop_qp(qp); 2318 + 2319 + if (!dev_dfx->dev_timeout || !dev_dfx->dev_state) 2320 + return; 2321 + 2322 + /* 2323 + * After the queue fails to be stopped, 2324 + * wait for a period of time before releasing the queue. 2325 + */ 2326 + while (++i) { 2327 + msleep(WAIT_PERIOD); 2328 + 2329 + /* Since dev_timeout maybe modified, check i >= dev_timeout */ 2330 + if (i >= dev_dfx->dev_timeout) { 2331 + dev_err(&qm->pdev->dev, "Stop q %u timeout, state %u\n", 2332 + qp->qp_id, dev_dfx->dev_state); 2333 + dev_dfx->dev_state = QM_FINISH_WAIT; 2334 + break; 2335 + } 2336 + } 2349 2337 } 2350 2338 2351 2339 static int hisi_qm_is_q_updated(struct uacce_queue *q) ··· 3114 3054 } 3115 3055 3116 3056 /* Stop started qps in reset flow */ 3117 - static int qm_stop_started_qp(struct hisi_qm *qm) 3057 + static void qm_stop_started_qp(struct hisi_qm *qm) 3118 3058 { 3119 - struct device *dev = &qm->pdev->dev; 3120 3059 struct hisi_qp *qp; 3121 - int i, ret; 3060 + int i; 3122 3061 3123 3062 for (i = 0; i < qm->qp_num; i++) { 3124 3063 qp = &qm->qp_array[i]; 3125 - if (qp && atomic_read(&qp->qp_status.flags) == QP_START) { 3064 + if (atomic_read(&qp->qp_status.flags) == QP_START) { 3126 3065 qp->is_resetting = true; 3127 - ret = qm_stop_qp_nolock(qp); 3128 - if 
(ret < 0) { 3129 - dev_err(dev, "Failed to stop qp%d!\n", i); 3130 - return ret; 3131 - } 3066 + qm_stop_qp_nolock(qp); 3132 3067 } 3133 3068 } 3134 - 3135 - return 0; 3136 3069 } 3137 3070 3138 3071 /** ··· 3165 3112 3166 3113 down_write(&qm->qps_lock); 3167 3114 3168 - qm->status.stop_reason = r; 3169 3115 if (atomic_read(&qm->status.flags) == QM_STOP) 3170 3116 goto err_unlock; 3171 3117 3172 3118 /* Stop all the request sending at first. */ 3173 3119 atomic_set(&qm->status.flags, QM_STOP); 3120 + qm->status.stop_reason = r; 3174 3121 3175 - if (qm->status.stop_reason == QM_SOFT_RESET || 3176 - qm->status.stop_reason == QM_DOWN) { 3122 + if (qm->status.stop_reason != QM_NORMAL) { 3177 3123 hisi_qm_set_hw_reset(qm, QM_RESET_STOP_TX_OFFSET); 3178 - ret = qm_stop_started_qp(qm); 3179 - if (ret < 0) { 3180 - dev_err(dev, "Failed to stop started qp!\n"); 3181 - goto err_unlock; 3124 + /* 3125 + * When performing soft reset, the hardware will no longer 3126 + * do tasks, and the tasks in the device will be flushed 3127 + * out directly since the master ooo is closed. 3128 + */ 3129 + if (test_bit(QM_SUPPORT_STOP_FUNC, &qm->caps) && 3130 + r != QM_SOFT_RESET) { 3131 + ret = qm_drain_qm(qm); 3132 + if (ret) { 3133 + dev_err(dev, "failed to drain qm!\n"); 3134 + goto err_unlock; 3135 + } 3182 3136 } 3137 + 3138 + qm_stop_started_qp(qm); 3139 + 3183 3140 hisi_qm_set_hw_reset(qm, QM_RESET_STOP_RX_OFFSET); 3184 3141 } 3185 3142 ··· 3204 3141 } 3205 3142 3206 3143 qm_clear_queues(qm); 3144 + qm->status.stop_reason = QM_NORMAL; 3207 3145 3208 3146 err_unlock: 3209 3147 up_write(&qm->qps_lock);
+12 -21
drivers/crypto/hisilicon/sec2/sec_crypto.c
··· 118 118 }; 119 119 120 120 /* Get an en/de-cipher queue cyclically to balance load over queues of TFM */ 121 - static inline int sec_alloc_queue_id(struct sec_ctx *ctx, struct sec_req *req) 121 + static inline u32 sec_alloc_queue_id(struct sec_ctx *ctx, struct sec_req *req) 122 122 { 123 123 if (req->c_req.encrypt) 124 124 return (u32)atomic_inc_return(&ctx->enc_qcyclic) % ··· 485 485 sec_free_mac_resource(dev, qp_ctx->res); 486 486 } 487 487 488 - static int sec_alloc_qp_ctx_resource(struct hisi_qm *qm, struct sec_ctx *ctx, 489 - struct sec_qp_ctx *qp_ctx) 488 + static int sec_alloc_qp_ctx_resource(struct sec_ctx *ctx, struct sec_qp_ctx *qp_ctx) 490 489 { 491 490 u16 q_depth = qp_ctx->qp->sq_depth; 492 491 struct device *dev = ctx->dev; ··· 540 541 kfree(qp_ctx->req_list); 541 542 } 542 543 543 - static int sec_create_qp_ctx(struct hisi_qm *qm, struct sec_ctx *ctx, 544 - int qp_ctx_id, int alg_type) 544 + static int sec_create_qp_ctx(struct sec_ctx *ctx, int qp_ctx_id) 545 545 { 546 546 struct sec_qp_ctx *qp_ctx; 547 547 struct hisi_qp *qp; ··· 559 561 idr_init(&qp_ctx->req_idr); 560 562 INIT_LIST_HEAD(&qp_ctx->backlog); 561 563 562 - ret = sec_alloc_qp_ctx_resource(qm, ctx, qp_ctx); 564 + ret = sec_alloc_qp_ctx_resource(ctx, qp_ctx); 563 565 if (ret) 564 566 goto err_destroy_idr; 565 567 ··· 612 614 } 613 615 614 616 for (i = 0; i < sec->ctx_q_num; i++) { 615 - ret = sec_create_qp_ctx(&sec->qm, ctx, i, 0); 617 + ret = sec_create_qp_ctx(ctx, i); 616 618 if (ret) 617 619 goto err_sec_release_qp_ctx; 618 620 } ··· 748 750 sec_ctx_base_uninit(ctx); 749 751 } 750 752 751 - static int sec_skcipher_3des_setkey(struct crypto_skcipher *tfm, const u8 *key, 752 - const u32 keylen, 753 - const enum sec_cmode c_mode) 753 + static int sec_skcipher_3des_setkey(struct crypto_skcipher *tfm, const u8 *key, const u32 keylen) 754 754 { 755 755 struct sec_ctx *ctx = crypto_skcipher_ctx(tfm); 756 756 struct sec_cipher_ctx *c_ctx = &ctx->c_ctx; ··· 839 843 840 844 switch (c_alg) { 
841 845 case SEC_CALG_3DES: 842 - ret = sec_skcipher_3des_setkey(tfm, key, keylen, c_mode); 846 + ret = sec_skcipher_3des_setkey(tfm, key, keylen); 843 847 break; 844 848 case SEC_CALG_AES: 845 849 case SEC_CALG_SM4: ··· 1367 1371 sec_sqe3->bd_param = cpu_to_le32(bd_param); 1368 1372 1369 1373 sec_sqe3->c_len_ivin |= cpu_to_le32(c_req->c_len); 1370 - sec_sqe3->tag = cpu_to_le64(req); 1374 + sec_sqe3->tag = cpu_to_le64((unsigned long)req); 1371 1375 1372 1376 return 0; 1373 1377 } ··· 2141 2145 return sec_skcipher_crypto(sk_req, false); 2142 2146 } 2143 2147 2144 - #define SEC_SKCIPHER_GEN_ALG(sec_cra_name, sec_set_key, sec_min_key_size, \ 2145 - sec_max_key_size, ctx_init, ctx_exit, blk_size, iv_size)\ 2148 + #define SEC_SKCIPHER_ALG(sec_cra_name, sec_set_key, \ 2149 + sec_min_key_size, sec_max_key_size, blk_size, iv_size)\ 2146 2150 {\ 2147 2151 .base = {\ 2148 2152 .cra_name = sec_cra_name,\ ··· 2154 2158 .cra_ctxsize = sizeof(struct sec_ctx),\ 2155 2159 .cra_module = THIS_MODULE,\ 2156 2160 },\ 2157 - .init = ctx_init,\ 2158 - .exit = ctx_exit,\ 2161 + .init = sec_skcipher_ctx_init,\ 2162 + .exit = sec_skcipher_ctx_exit,\ 2159 2163 .setkey = sec_set_key,\ 2160 2164 .decrypt = sec_skcipher_decrypt,\ 2161 2165 .encrypt = sec_skcipher_encrypt,\ ··· 2163 2167 .max_keysize = sec_max_key_size,\ 2164 2168 .ivsize = iv_size,\ 2165 2169 } 2166 - 2167 - #define SEC_SKCIPHER_ALG(name, key_func, min_key_size, \ 2168 - max_key_size, blk_size, iv_size) \ 2169 - SEC_SKCIPHER_GEN_ALG(name, key_func, min_key_size, max_key_size, \ 2170 - sec_skcipher_ctx_init, sec_skcipher_ctx_exit, blk_size, iv_size) 2171 2170 2172 2171 static struct sec_skcipher sec_skciphers[] = { 2173 2172 {
+6 -1
drivers/crypto/hisilicon/sec2/sec_main.c
··· 282 282 {"SEC_BD_SAA6 ", 0x301C38}, 283 283 {"SEC_BD_SAA7 ", 0x301C3C}, 284 284 {"SEC_BD_SAA8 ", 0x301C40}, 285 + {"SEC_RAS_CE_ENABLE ", 0x301050}, 286 + {"SEC_RAS_FE_ENABLE ", 0x301054}, 287 + {"SEC_RAS_NFE_ENABLE ", 0x301058}, 288 + {"SEC_REQ_TRNG_TIME_TH ", 0x30112C}, 289 + {"SEC_CHANNEL_RNG_REQ_THLD ", 0x302110}, 285 290 }; 286 291 287 292 /* define the SEC's dfx regs region and region length */ ··· 379 374 380 375 struct hisi_qp **sec_create_qps(void) 381 376 { 382 - int node = cpu_to_node(smp_processor_id()); 377 + int node = cpu_to_node(raw_smp_processor_id()); 383 378 u32 ctx_num = ctx_q_num; 384 379 struct hisi_qp **qps; 385 380 int ret;
+1
drivers/crypto/hisilicon/zip/zip_crypto.c
··· 591 591 .base = { 592 592 .cra_name = "deflate", 593 593 .cra_driver_name = "hisi-deflate-acomp", 594 + .cra_flags = CRYPTO_ALG_ASYNC, 594 595 .cra_module = THIS_MODULE, 595 596 .cra_priority = HZIP_ALG_PRIORITY, 596 597 .cra_ctxsize = sizeof(struct hisi_zip_ctx),
+1 -1
drivers/crypto/hisilicon/zip/zip_main.c
··· 454 454 int zip_create_qps(struct hisi_qp **qps, int qp_num, int node) 455 455 { 456 456 if (node == NUMA_NO_NODE) 457 - node = cpu_to_node(smp_processor_id()); 457 + node = cpu_to_node(raw_smp_processor_id()); 458 458 459 459 return hisi_qm_alloc_qps_node(&zip_devices, qp_num, 0, node, qps); 460 460 }
-25
drivers/crypto/intel/iaa/iaa_crypto.h
··· 59 59 const char *name; 60 60 61 61 struct aecs_comp_table_record *aecs_comp_table; 62 - struct aecs_decomp_table_record *aecs_decomp_table; 63 62 64 63 dma_addr_t aecs_comp_table_dma_addr; 65 - dma_addr_t aecs_decomp_table_dma_addr; 66 64 }; 67 65 68 66 /* Representation of IAA device with wqs, populated by probe */ ··· 105 107 u32 reserved_padding[2]; 106 108 } __packed; 107 109 108 - /* AECS for decompress */ 109 - struct aecs_decomp_table_record { 110 - u32 crc; 111 - u32 xor_checksum; 112 - u32 low_filter_param; 113 - u32 high_filter_param; 114 - u32 output_mod_idx; 115 - u32 drop_init_decomp_out_bytes; 116 - u32 reserved[36]; 117 - u32 output_accum_data[2]; 118 - u32 out_bits_valid; 119 - u32 bit_off_indexing; 120 - u32 input_accum_data[64]; 121 - u8 size_qw[32]; 122 - u32 decomp_state[1220]; 123 - } __packed; 124 - 125 110 int iaa_aecs_init_fixed(void); 126 111 void iaa_aecs_cleanup_fixed(void); 127 112 ··· 117 136 int ll_table_size; 118 137 u32 *d_table; 119 138 int d_table_size; 120 - u32 *header_table; 121 - int header_table_size; 122 - u16 gen_decomp_table_flags; 123 139 iaa_dev_comp_init_fn_t init; 124 140 iaa_dev_comp_free_fn_t free; 125 141 }; ··· 126 148 int ll_table_size, 127 149 const u32 *d_table, 128 150 int d_table_size, 129 - const u8 *header_table, 130 - int header_table_size, 131 - u16 gen_decomp_table_flags, 132 151 iaa_dev_comp_init_fn_t init, 133 152 iaa_dev_comp_free_fn_t free); 134 153
-1
drivers/crypto/intel/iaa/iaa_crypto_comp_fixed.c
··· 78 78 sizeof(fixed_ll_sym), 79 79 fixed_d_sym, 80 80 sizeof(fixed_d_sym), 81 - NULL, 0, 0, 82 81 init_fixed_mode, NULL); 83 82 if (!ret) 84 83 pr_debug("IAA fixed compression mode initialized\n");
+15 -107
drivers/crypto/intel/iaa/iaa_crypto_main.c
··· 258 258 kfree(mode->name); 259 259 kfree(mode->ll_table); 260 260 kfree(mode->d_table); 261 - kfree(mode->header_table); 262 261 263 262 kfree(mode); 264 263 } 265 264 266 265 /* 267 - * IAA Compression modes are defined by an ll_table, a d_table, and an 268 - * optional header_table. These tables are typically generated and 269 - * captured using statistics collected from running actual 270 - * compress/decompress workloads. 266 + * IAA Compression modes are defined by an ll_table and a d_table. 267 + * These tables are typically generated and captured using statistics 268 + * collected from running actual compress/decompress workloads. 271 269 * 272 270 * A module or other kernel code can add and remove compression modes 273 271 * with a given name using the exported @add_iaa_compression_mode() ··· 313 315 * @ll_table_size: The ll table size in bytes 314 316 * @d_table: The d table 315 317 * @d_table_size: The d table size in bytes 316 - * @header_table: Optional header table 317 - * @header_table_size: Optional header table size in bytes 318 - * @gen_decomp_table_flags: Otional flags used to generate the decomp table 319 318 * @init: Optional callback function to init the compression mode data 320 319 * @free: Optional callback function to free the compression mode data 321 320 * ··· 325 330 int ll_table_size, 326 331 const u32 *d_table, 327 332 int d_table_size, 328 - const u8 *header_table, 329 - int header_table_size, 330 - u16 gen_decomp_table_flags, 331 333 iaa_dev_comp_init_fn_t init, 332 334 iaa_dev_comp_free_fn_t free) 333 335 { ··· 361 369 memcpy(mode->d_table, d_table, d_table_size); 362 370 mode->d_table_size = d_table_size; 363 371 } 364 - 365 - if (header_table) { 366 - mode->header_table = kzalloc(header_table_size, GFP_KERNEL); 367 - if (!mode->header_table) 368 - goto free; 369 - memcpy(mode->header_table, header_table, header_table_size); 370 - mode->header_table_size = header_table_size; 371 - } 372 - 373 - mode->gen_decomp_table_flags = 
gen_decomp_table_flags; 374 372 375 373 mode->init = init; 376 374 mode->free = free; ··· 402 420 if (device_mode->aecs_comp_table) 403 421 dma_free_coherent(dev, size, device_mode->aecs_comp_table, 404 422 device_mode->aecs_comp_table_dma_addr); 405 - if (device_mode->aecs_decomp_table) 406 - dma_free_coherent(dev, size, device_mode->aecs_decomp_table, 407 - device_mode->aecs_decomp_table_dma_addr); 408 - 409 423 kfree(device_mode); 410 424 } 411 425 ··· 417 439 struct iax_completion_record *comp, 418 440 bool compress, 419 441 bool only_once); 420 - 421 - static int decompress_header(struct iaa_device_compression_mode *device_mode, 422 - struct iaa_compression_mode *mode, 423 - struct idxd_wq *wq) 424 - { 425 - dma_addr_t src_addr, src2_addr; 426 - struct idxd_desc *idxd_desc; 427 - struct iax_hw_desc *desc; 428 - struct device *dev; 429 - int ret = 0; 430 - 431 - idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK); 432 - if (IS_ERR(idxd_desc)) 433 - return PTR_ERR(idxd_desc); 434 - 435 - desc = idxd_desc->iax_hw; 436 - 437 - dev = &wq->idxd->pdev->dev; 438 - 439 - src_addr = dma_map_single(dev, (void *)mode->header_table, 440 - mode->header_table_size, DMA_TO_DEVICE); 441 - dev_dbg(dev, "%s: mode->name %s, src_addr %llx, dev %p, src %p, slen %d\n", 442 - __func__, mode->name, src_addr, dev, 443 - mode->header_table, mode->header_table_size); 444 - if (unlikely(dma_mapping_error(dev, src_addr))) { 445 - dev_dbg(dev, "dma_map_single err, exiting\n"); 446 - ret = -ENOMEM; 447 - return ret; 448 - } 449 - 450 - desc->flags = IAX_AECS_GEN_FLAG; 451 - desc->opcode = IAX_OPCODE_DECOMPRESS; 452 - 453 - desc->src1_addr = (u64)src_addr; 454 - desc->src1_size = mode->header_table_size; 455 - 456 - src2_addr = device_mode->aecs_decomp_table_dma_addr; 457 - desc->src2_addr = (u64)src2_addr; 458 - desc->src2_size = 1088; 459 - dev_dbg(dev, "%s: mode->name %s, src2_addr %llx, dev %p, src2_size %d\n", 460 - __func__, mode->name, desc->src2_addr, dev, desc->src2_size); 461 - 
desc->max_dst_size = 0; // suppressed output 462 - 463 - desc->decompr_flags = mode->gen_decomp_table_flags; 464 - 465 - desc->priv = 0; 466 - 467 - desc->completion_addr = idxd_desc->compl_dma; 468 - 469 - ret = idxd_submit_desc(wq, idxd_desc); 470 - if (ret) { 471 - pr_err("%s: submit_desc failed ret=0x%x\n", __func__, ret); 472 - goto out; 473 - } 474 - 475 - ret = check_completion(dev, idxd_desc->iax_completion, false, false); 476 - if (ret) 477 - dev_dbg(dev, "%s: mode->name %s check_completion failed ret=%d\n", 478 - __func__, mode->name, ret); 479 - else 480 - dev_dbg(dev, "%s: mode->name %s succeeded\n", __func__, 481 - mode->name); 482 - out: 483 - dma_unmap_single(dev, src_addr, 1088, DMA_TO_DEVICE); 484 - 485 - return ret; 486 - } 487 442 488 443 static int init_device_compression_mode(struct iaa_device *iaa_device, 489 444 struct iaa_compression_mode *mode, ··· 440 529 if (!device_mode->aecs_comp_table) 441 530 goto free; 442 531 443 - device_mode->aecs_decomp_table = dma_alloc_coherent(dev, size, 444 - &device_mode->aecs_decomp_table_dma_addr, GFP_KERNEL); 445 - if (!device_mode->aecs_decomp_table) 446 - goto free; 447 - 448 532 /* Add Huffman table to aecs */ 449 533 memset(device_mode->aecs_comp_table, 0, sizeof(*device_mode->aecs_comp_table)); 450 534 memcpy(device_mode->aecs_comp_table->ll_sym, mode->ll_table, mode->ll_table_size); 451 535 memcpy(device_mode->aecs_comp_table->d_sym, mode->d_table, mode->d_table_size); 452 - 453 - if (mode->header_table) { 454 - ret = decompress_header(device_mode, mode, wq); 455 - if (ret) { 456 - pr_debug("iaa header decompression failed: ret=%d\n", ret); 457 - goto free; 458 - } 459 - } 460 536 461 537 if (mode->init) { 462 538 ret = mode->init(device_mode); ··· 1222 1324 1223 1325 *compression_crc = idxd_desc->iax_completion->crc; 1224 1326 1225 - if (!ctx->async_mode) 1327 + if (!ctx->async_mode || disable_async) 1226 1328 idxd_free_desc(wq, idxd_desc); 1227 1329 out: 1228 1330 return ret; ··· 1468 1570 1469 
1571 *dlen = req->dlen; 1470 1572 1471 - if (!ctx->async_mode) 1573 + if (!ctx->async_mode || disable_async) 1472 1574 idxd_free_desc(wq, idxd_desc); 1473 1575 1474 1576 /* Update stats */ ··· 1494 1596 u32 compression_crc; 1495 1597 struct idxd_wq *wq; 1496 1598 struct device *dev; 1599 + u64 start_time_ns; 1497 1600 int order = -1; 1498 1601 1499 1602 compression_ctx = crypto_tfm_ctx(tfm); ··· 1568 1669 " req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs, 1569 1670 req->dst, req->dlen, sg_dma_len(req->dst)); 1570 1671 1672 + start_time_ns = iaa_get_ts(); 1571 1673 ret = iaa_compress(tfm, req, wq, src_addr, req->slen, dst_addr, 1572 1674 &req->dlen, &compression_crc, disable_async); 1675 + update_max_comp_delay_ns(start_time_ns); 1573 1676 if (ret == -EINPROGRESS) 1574 1677 return ret; 1575 1678 ··· 1618 1717 struct iaa_wq *iaa_wq; 1619 1718 struct device *dev; 1620 1719 struct idxd_wq *wq; 1720 + u64 start_time_ns; 1621 1721 int order = -1; 1622 1722 1623 1723 cpu = get_cpu(); ··· 1675 1773 dev_dbg(dev, "dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p," 1676 1774 " req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs, 1677 1775 req->dst, req->dlen, sg_dma_len(req->dst)); 1776 + start_time_ns = iaa_get_ts(); 1678 1777 ret = iaa_decompress(tfm, req, wq, src_addr, req->slen, 1679 1778 dst_addr, &req->dlen, true); 1779 + update_max_decomp_delay_ns(start_time_ns); 1680 1780 if (ret == -EOVERFLOW) { 1681 1781 dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE); 1682 1782 req->dlen *= 2; ··· 1709 1805 int nr_sgs, cpu, ret = 0; 1710 1806 struct iaa_wq *iaa_wq; 1711 1807 struct device *dev; 1808 + u64 start_time_ns; 1712 1809 struct idxd_wq *wq; 1713 1810 1714 1811 if (!iaa_crypto_enabled) { ··· 1769 1864 " req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs, 1770 1865 req->dst, req->dlen, sg_dma_len(req->dst)); 1771 1866 1867 + start_time_ns = iaa_get_ts(); 1772 1868 ret = iaa_decompress(tfm, req, wq, src_addr, req->slen, 1773 1869 dst_addr, 
&req->dlen, false); 1870 + update_max_decomp_delay_ns(start_time_ns); 1774 1871 if (ret == -EINPROGRESS) 1775 1872 return ret; 1776 1873 ··· 1823 1916 .base = { 1824 1917 .cra_name = "deflate", 1825 1918 .cra_driver_name = "deflate-iaa", 1919 + .cra_flags = CRYPTO_ALG_ASYNC, 1826 1920 .cra_ctxsize = sizeof(struct iaa_compression_ctx), 1827 1921 .cra_module = THIS_MODULE, 1828 1922 .cra_priority = IAA_ALG_PRIORITY,
-30
drivers/crypto/intel/iaa/iaa_crypto_stats.c
··· 22 22 static u64 total_sw_decomp_calls; 23 23 static u64 max_comp_delay_ns; 24 24 static u64 max_decomp_delay_ns; 25 - static u64 max_acomp_delay_ns; 26 - static u64 max_adecomp_delay_ns; 27 25 static u64 total_comp_bytes_out; 28 26 static u64 total_decomp_bytes_in; 29 27 static u64 total_completion_einval_errors; ··· 90 92 max_decomp_delay_ns = time_diff; 91 93 } 92 94 93 - void update_max_acomp_delay_ns(u64 start_time_ns) 94 - { 95 - u64 time_diff; 96 - 97 - time_diff = ktime_get_ns() - start_time_ns; 98 - 99 - if (time_diff > max_acomp_delay_ns) 100 - max_acomp_delay_ns = time_diff; 101 - } 102 - 103 - void update_max_adecomp_delay_ns(u64 start_time_ns) 104 - { 105 - u64 time_diff; 106 - 107 - time_diff = ktime_get_ns() - start_time_ns; 108 - 109 - if (time_diff > max_adecomp_delay_ns) 110 - max_adecomp_delay_ns = time_diff; 111 - } 112 - 113 95 void update_wq_comp_calls(struct idxd_wq *idxd_wq) 114 96 { 115 97 struct iaa_wq *wq = idxd_wq_get_private(idxd_wq); ··· 129 151 total_sw_decomp_calls = 0; 130 152 max_comp_delay_ns = 0; 131 153 max_decomp_delay_ns = 0; 132 - max_acomp_delay_ns = 0; 133 - max_adecomp_delay_ns = 0; 134 154 total_comp_bytes_out = 0; 135 155 total_decomp_bytes_in = 0; 136 156 total_completion_einval_errors = 0; ··· 251 275 return -ENODEV; 252 276 253 277 iaa_crypto_debugfs_root = debugfs_create_dir("iaa_crypto", NULL); 254 - if (!iaa_crypto_debugfs_root) 255 - return -ENOMEM; 256 278 257 279 debugfs_create_u64("max_comp_delay_ns", 0644, 258 280 iaa_crypto_debugfs_root, &max_comp_delay_ns); 259 281 debugfs_create_u64("max_decomp_delay_ns", 0644, 260 - iaa_crypto_debugfs_root, &max_decomp_delay_ns); 261 - debugfs_create_u64("max_acomp_delay_ns", 0644, 262 - iaa_crypto_debugfs_root, &max_comp_delay_ns); 263 - debugfs_create_u64("max_adecomp_delay_ns", 0644, 264 282 iaa_crypto_debugfs_root, &max_decomp_delay_ns); 265 283 debugfs_create_u64("total_comp_calls", 0644, 266 284 iaa_crypto_debugfs_root, &total_comp_calls);
+4 -4
drivers/crypto/intel/iaa/iaa_crypto_stats.h
··· 15 15 void update_total_decomp_bytes_in(int n); 16 16 void update_max_comp_delay_ns(u64 start_time_ns); 17 17 void update_max_decomp_delay_ns(u64 start_time_ns); 18 - void update_max_acomp_delay_ns(u64 start_time_ns); 19 - void update_max_adecomp_delay_ns(u64 start_time_ns); 20 18 void update_completion_einval_errs(void); 21 19 void update_completion_timeout_errs(void); 22 20 void update_completion_comp_buf_overflow_errs(void); ··· 23 25 void update_wq_comp_bytes(struct idxd_wq *idxd_wq, int n); 24 26 void update_wq_decomp_calls(struct idxd_wq *idxd_wq); 25 27 void update_wq_decomp_bytes(struct idxd_wq *idxd_wq, int n); 28 + 29 + static inline u64 iaa_get_ts(void) { return ktime_get_ns(); } 26 30 27 31 #else 28 32 static inline int iaa_crypto_debugfs_init(void) { return 0; } ··· 37 37 static inline void update_total_decomp_bytes_in(int n) {} 38 38 static inline void update_max_comp_delay_ns(u64 start_time_ns) {} 39 39 static inline void update_max_decomp_delay_ns(u64 start_time_ns) {} 40 - static inline void update_max_acomp_delay_ns(u64 start_time_ns) {} 41 - static inline void update_max_adecomp_delay_ns(u64 start_time_ns) {} 42 40 static inline void update_completion_einval_errs(void) {} 43 41 static inline void update_completion_timeout_errs(void) {} 44 42 static inline void update_completion_comp_buf_overflow_errs(void) {} ··· 45 47 static inline void update_wq_comp_bytes(struct idxd_wq *idxd_wq, int n) {} 46 48 static inline void update_wq_decomp_calls(struct idxd_wq *idxd_wq) {} 47 49 static inline void update_wq_decomp_bytes(struct idxd_wq *idxd_wq, int n) {} 50 + 51 + static inline u64 iaa_get_ts(void) { return 0; } 48 52 49 53 #endif // CONFIG_CRYPTO_DEV_IAA_CRYPTO_STATS 50 54
+14
drivers/crypto/intel/qat/Kconfig
··· 106 106 107 107 To compile this as a module, choose M here: the module 108 108 will be called qat_c62xvf. 109 + 110 + config CRYPTO_DEV_QAT_ERROR_INJECTION 111 + bool "Support for Intel(R) QAT Devices Heartbeat Error Injection" 112 + depends on CRYPTO_DEV_QAT 113 + depends on DEBUG_FS 114 + help 115 + Enables a mechanism that allows to inject a heartbeat error on 116 + Intel(R) QuickAssist devices for testing purposes. 117 + 118 + This is intended for developer use only. 119 + If unsure, say N. 120 + 121 + This functionality is available via debugfs entry of the Intel(R) 122 + QuickAssist device
+16 -48
drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
··· 361 361 }
 362 362 }
 363 363
 364 - static u16 get_ring_to_svc_map(struct adf_accel_dev *accel_dev)
 365 - {
 366 - enum adf_cfg_service_type rps[RP_GROUP_COUNT] = { };
 367 - const struct adf_fw_config *fw_config;
 368 - u16 ring_to_svc_map;
 369 - int i, j;
 370 -
 371 - fw_config = get_fw_config(accel_dev);
 372 - if (!fw_config)
 373 - return 0;
 374 -
 375 - for (i = 0; i < RP_GROUP_COUNT; i++) {
 376 - switch (fw_config[i].ae_mask) {
 377 - case ADF_AE_GROUP_0:
 378 - j = RP_GROUP_0;
 379 - break;
 380 - case ADF_AE_GROUP_1:
 381 - j = RP_GROUP_1;
 382 - break;
 383 - default:
 384 - return 0;
 385 - }
 386 -
 387 - switch (fw_config[i].obj) {
 388 - case ADF_FW_SYM_OBJ:
 389 - rps[j] = SYM;
 390 - break;
 391 - case ADF_FW_ASYM_OBJ:
 392 - rps[j] = ASYM;
 393 - break;
 394 - case ADF_FW_DC_OBJ:
 395 - rps[j] = COMP;
 396 - break;
 397 - default:
 398 - rps[j] = 0;
 399 - break;
 400 - }
 401 - }
 402 -
 403 - ring_to_svc_map = rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_0_SHIFT |
 404 - rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_1_SHIFT |
 405 - rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_2_SHIFT |
 406 - rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_3_SHIFT;
 407 -
 408 - return ring_to_svc_map;
 409 - }
 410 -
 411 364 static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num,
 412 365 const char * const fw_objs[], int num_objs)
 413 366 {
 ··· 384 431 int num_fw_objs = ARRAY_SIZE(adf_420xx_fw_objs);
 385 432
 386 433 return uof_get_name(accel_dev, obj_num, adf_420xx_fw_objs, num_fw_objs);
 434 + }
 435 +
 436 + static int uof_get_obj_type(struct adf_accel_dev *accel_dev, u32 obj_num)
 437 + {
 438 + const struct adf_fw_config *fw_config;
 439 +
 440 + if (obj_num >= uof_get_num_objs(accel_dev))
 441 + return -EINVAL;
 442 +
 443 + fw_config = get_fw_config(accel_dev);
 444 + if (!fw_config)
 445 + return -EINVAL;
 446 +
 447 + return fw_config[obj_num].obj;
 387 448 }
 388 449
 389 450 static u32 uof_get_ae_mask(struct adf_accel_dev *accel_dev, u32 obj_num)
 ··· 463 496 hw_data->fw_mmp_name = ADF_420XX_MMP;
 464 497 hw_data->uof_get_name = uof_get_name_420xx;
 465 498 hw_data->uof_get_num_objs = uof_get_num_objs;
 499 + hw_data->uof_get_obj_type = uof_get_obj_type;
 466 500 hw_data->uof_get_ae_mask = uof_get_ae_mask;
 467 501 hw_data->get_rp_group = get_rp_group;
 468 502 hw_data->get_ena_thd_mask = get_ena_thd_mask;
 469 503 hw_data->set_msix_rttable = adf_gen4_set_msix_default_rttable;
 470 504 hw_data->set_ssm_wdtimer = adf_gen4_set_ssm_wdtimer;
 471 - hw_data->get_ring_to_svc_map = get_ring_to_svc_map;
 505 + hw_data->get_ring_to_svc_map = adf_gen4_get_ring_to_svc_map;
 472 506 hw_data->disable_iov = adf_disable_sriov;
 473 507 hw_data->ring_pair_reset = adf_gen4_ring_pair_reset;
 474 508 hw_data->enable_pm = adf_gen4_enable_pm;
+16 -48
drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
··· 320 320 }
 321 321 }
 322 322
 323 - static u16 get_ring_to_svc_map(struct adf_accel_dev *accel_dev)
 324 - {
 325 - enum adf_cfg_service_type rps[RP_GROUP_COUNT];
 326 - const struct adf_fw_config *fw_config;
 327 - u16 ring_to_svc_map;
 328 - int i, j;
 329 -
 330 - fw_config = get_fw_config(accel_dev);
 331 - if (!fw_config)
 332 - return 0;
 333 -
 334 - for (i = 0; i < RP_GROUP_COUNT; i++) {
 335 - switch (fw_config[i].ae_mask) {
 336 - case ADF_AE_GROUP_0:
 337 - j = RP_GROUP_0;
 338 - break;
 339 - case ADF_AE_GROUP_1:
 340 - j = RP_GROUP_1;
 341 - break;
 342 - default:
 343 - return 0;
 344 - }
 345 -
 346 - switch (fw_config[i].obj) {
 347 - case ADF_FW_SYM_OBJ:
 348 - rps[j] = SYM;
 349 - break;
 350 - case ADF_FW_ASYM_OBJ:
 351 - rps[j] = ASYM;
 352 - break;
 353 - case ADF_FW_DC_OBJ:
 354 - rps[j] = COMP;
 355 - break;
 356 - default:
 357 - rps[j] = 0;
 358 - break;
 359 - }
 360 - }
 361 -
 362 - ring_to_svc_map = rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_0_SHIFT |
 363 - rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_1_SHIFT |
 364 - rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_2_SHIFT |
 365 - rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_3_SHIFT;
 366 -
 367 - return ring_to_svc_map;
 368 - }
 369 -
 370 323 static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num,
 371 324 const char * const fw_objs[], int num_objs)
 372 325 {
 ··· 350 397 int num_fw_objs = ARRAY_SIZE(adf_402xx_fw_objs);
 351 398
 352 399 return uof_get_name(accel_dev, obj_num, adf_402xx_fw_objs, num_fw_objs);
 400 + }
 401 +
 402 + static int uof_get_obj_type(struct adf_accel_dev *accel_dev, u32 obj_num)
 403 + {
 404 + const struct adf_fw_config *fw_config;
 405 +
 406 + if (obj_num >= uof_get_num_objs(accel_dev))
 407 + return -EINVAL;
 408 +
 409 + fw_config = get_fw_config(accel_dev);
 410 + if (!fw_config)
 411 + return -EINVAL;
 412 +
 413 + return fw_config[obj_num].obj;
 353 414 }
 354 415
 355 416 static u32 uof_get_ae_mask(struct adf_accel_dev *accel_dev, u32 obj_num)
 ··· 446 479 break;
 447 480 }
 448 481 hw_data->uof_get_num_objs = uof_get_num_objs;
 482 + hw_data->uof_get_obj_type = uof_get_obj_type;
 449 483 hw_data->uof_get_ae_mask = uof_get_ae_mask;
 450 484 hw_data->get_rp_group = get_rp_group;
 451 485 hw_data->set_msix_rttable = adf_gen4_set_msix_default_rttable;
 452 486 hw_data->set_ssm_wdtimer = adf_gen4_set_ssm_wdtimer;
 453 - hw_data->get_ring_to_svc_map = get_ring_to_svc_map;
 487 + hw_data->get_ring_to_svc_map = adf_gen4_get_ring_to_svc_map;
 454 488 hw_data->disable_iov = adf_disable_sriov;
 455 489 hw_data->ring_pair_reset = adf_gen4_ring_pair_reset;
 456 490 hw_data->enable_pm = adf_gen4_enable_pm;
+2
drivers/crypto/intel/qat/qat_common/Makefile
··· 53 53 adf_pfvf_pf_msg.o adf_pfvf_pf_proto.o \ 54 54 adf_pfvf_vf_msg.o adf_pfvf_vf_proto.o \ 55 55 adf_gen2_pfvf.o adf_gen4_pfvf.o 56 + 57 + intel_qat-$(CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION) += adf_heartbeat_inject.o
+3
drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
··· 248 248 void (*set_msix_rttable)(struct adf_accel_dev *accel_dev); 249 249 const char *(*uof_get_name)(struct adf_accel_dev *accel_dev, u32 obj_num); 250 250 u32 (*uof_get_num_objs)(struct adf_accel_dev *accel_dev); 251 + int (*uof_get_obj_type)(struct adf_accel_dev *accel_dev, u32 obj_num); 251 252 u32 (*uof_get_ae_mask)(struct adf_accel_dev *accel_dev, u32 obj_num); 252 253 int (*get_rp_group)(struct adf_accel_dev *accel_dev, u32 ae_mask); 253 254 u32 (*get_ena_thd_mask)(struct adf_accel_dev *accel_dev, u32 obj_num); ··· 333 332 struct ratelimit_state vf2pf_ratelimit; 334 333 u32 vf_nr; 335 334 bool init; 335 + bool restarting; 336 336 u8 vf_compat_ver; 337 337 }; 338 338 ··· 403 401 struct adf_error_counters ras_errors; 404 402 struct mutex state_lock; /* protect state of the device */ 405 403 bool is_vf; 404 + bool autoreset_on_error; 406 405 u32 accel_id; 407 406 }; 408 407 #endif
+130 -8
drivers/crypto/intel/qat/qat_common/adf_aer.c
··· 7 7 #include <linux/delay.h> 8 8 #include "adf_accel_devices.h" 9 9 #include "adf_common_drv.h" 10 + #include "adf_pfvf_pf_msg.h" 11 + 12 + struct adf_fatal_error_data { 13 + struct adf_accel_dev *accel_dev; 14 + struct work_struct work; 15 + }; 10 16 11 17 static struct workqueue_struct *device_reset_wq; 18 + static struct workqueue_struct *device_sriov_wq; 12 19 13 20 static pci_ers_result_t adf_error_detected(struct pci_dev *pdev, 14 21 pci_channel_state_t state) ··· 33 26 return PCI_ERS_RESULT_DISCONNECT; 34 27 } 35 28 29 + set_bit(ADF_STATUS_RESTARTING, &accel_dev->status); 30 + if (accel_dev->hw_device->exit_arb) { 31 + dev_dbg(&pdev->dev, "Disabling arbitration\n"); 32 + accel_dev->hw_device->exit_arb(accel_dev); 33 + } 34 + adf_error_notifier(accel_dev); 35 + adf_pf2vf_notify_fatal_error(accel_dev); 36 + adf_dev_restarting_notify(accel_dev); 37 + adf_pf2vf_notify_restarting(accel_dev); 38 + adf_pf2vf_wait_for_restarting_complete(accel_dev); 39 + pci_clear_master(pdev); 40 + adf_dev_down(accel_dev, false); 41 + 36 42 return PCI_ERS_RESULT_NEED_RESET; 37 43 } 38 44 ··· 55 35 struct adf_accel_dev *accel_dev; 56 36 struct completion compl; 57 37 struct work_struct reset_work; 38 + }; 39 + 40 + /* sriov dev data */ 41 + struct adf_sriov_dev_data { 42 + struct adf_accel_dev *accel_dev; 43 + struct completion compl; 44 + struct work_struct sriov_work; 58 45 }; 59 46 60 47 void adf_reset_sbr(struct adf_accel_dev *accel_dev) ··· 109 82 } 110 83 } 111 84 85 + static void adf_device_sriov_worker(struct work_struct *work) 86 + { 87 + struct adf_sriov_dev_data *sriov_data = 88 + container_of(work, struct adf_sriov_dev_data, sriov_work); 89 + 90 + adf_reenable_sriov(sriov_data->accel_dev); 91 + complete(&sriov_data->compl); 92 + } 93 + 112 94 static void adf_device_reset_worker(struct work_struct *work) 113 95 { 114 96 struct adf_reset_dev_data *reset_data = 115 97 container_of(work, struct adf_reset_dev_data, reset_work); 116 98 struct adf_accel_dev *accel_dev = 
reset_data->accel_dev; 99 + unsigned long wait_jiffies = msecs_to_jiffies(10000); 100 + struct adf_sriov_dev_data sriov_data; 117 101 118 102 adf_dev_restarting_notify(accel_dev); 119 103 if (adf_dev_restart(accel_dev)) { 120 104 /* The device hanged and we can't restart it so stop here */ 121 105 dev_err(&GET_DEV(accel_dev), "Restart device failed\n"); 122 - if (reset_data->mode == ADF_DEV_RESET_ASYNC) 106 + if (reset_data->mode == ADF_DEV_RESET_ASYNC || 107 + completion_done(&reset_data->compl)) 123 108 kfree(reset_data); 124 109 WARN(1, "QAT: device restart failed. Device is unusable\n"); 125 110 return; 126 111 } 112 + 113 + sriov_data.accel_dev = accel_dev; 114 + init_completion(&sriov_data.compl); 115 + INIT_WORK(&sriov_data.sriov_work, adf_device_sriov_worker); 116 + queue_work(device_sriov_wq, &sriov_data.sriov_work); 117 + if (wait_for_completion_timeout(&sriov_data.compl, wait_jiffies)) 118 + adf_pf2vf_notify_restarted(accel_dev); 119 + 127 120 adf_dev_restarted_notify(accel_dev); 128 121 clear_bit(ADF_STATUS_RESTARTING, &accel_dev->status); 129 122 130 - /* The dev is back alive. Notify the caller if in sync mode */ 131 - if (reset_data->mode == ADF_DEV_RESET_SYNC) 132 - complete(&reset_data->compl); 133 - else 123 + /* 124 + * The dev is back alive. Notify the caller if in sync mode 125 + * 126 + * If device restart will take a more time than expected, 127 + * the schedule_reset() function can timeout and exit. This can be 128 + * detected by calling the completion_done() function. In this case 129 + * the reset_data structure needs to be freed here. 
130 + */ 131 + if (reset_data->mode == ADF_DEV_RESET_ASYNC || 132 + completion_done(&reset_data->compl)) 134 133 kfree(reset_data); 134 + else 135 + complete(&reset_data->compl); 135 136 } 136 137 137 138 static int adf_dev_aer_schedule_reset(struct adf_accel_dev *accel_dev, ··· 192 137 dev_err(&GET_DEV(accel_dev), 193 138 "Reset device timeout expired\n"); 194 139 ret = -EFAULT; 140 + } else { 141 + kfree(reset_data); 195 142 } 196 - kfree(reset_data); 197 143 return ret; 198 144 } 199 145 return 0; ··· 203 147 static pci_ers_result_t adf_slot_reset(struct pci_dev *pdev) 204 148 { 205 149 struct adf_accel_dev *accel_dev = adf_devmgr_pci_to_accel_dev(pdev); 150 + int res = 0; 206 151 207 152 if (!accel_dev) { 208 153 pr_err("QAT: Can't find acceleration device\n"); 209 154 return PCI_ERS_RESULT_DISCONNECT; 210 155 } 211 - if (adf_dev_aer_schedule_reset(accel_dev, ADF_DEV_RESET_SYNC)) 156 + 157 + if (!pdev->is_busmaster) 158 + pci_set_master(pdev); 159 + pci_restore_state(pdev); 160 + pci_save_state(pdev); 161 + res = adf_dev_up(accel_dev, false); 162 + if (res && res != -EALREADY) 212 163 return PCI_ERS_RESULT_DISCONNECT; 213 164 165 + adf_reenable_sriov(accel_dev); 166 + adf_pf2vf_notify_restarted(accel_dev); 167 + adf_dev_restarted_notify(accel_dev); 168 + clear_bit(ADF_STATUS_RESTARTING, &accel_dev->status); 214 169 return PCI_ERS_RESULT_RECOVERED; 215 170 } 216 171 ··· 238 171 }; 239 172 EXPORT_SYMBOL_GPL(adf_err_handler); 240 173 174 + int adf_dev_autoreset(struct adf_accel_dev *accel_dev) 175 + { 176 + if (accel_dev->autoreset_on_error) 177 + return adf_dev_aer_schedule_reset(accel_dev, ADF_DEV_RESET_ASYNC); 178 + 179 + return 0; 180 + } 181 + 182 + static void adf_notify_fatal_error_worker(struct work_struct *work) 183 + { 184 + struct adf_fatal_error_data *wq_data = 185 + container_of(work, struct adf_fatal_error_data, work); 186 + struct adf_accel_dev *accel_dev = wq_data->accel_dev; 187 + struct adf_hw_device_data *hw_device = accel_dev->hw_device; 188 + 
189 + adf_error_notifier(accel_dev); 190 + 191 + if (!accel_dev->is_vf) { 192 + /* Disable arbitration to stop processing of new requests */ 193 + if (accel_dev->autoreset_on_error && hw_device->exit_arb) 194 + hw_device->exit_arb(accel_dev); 195 + if (accel_dev->pf.vf_info) 196 + adf_pf2vf_notify_fatal_error(accel_dev); 197 + adf_dev_autoreset(accel_dev); 198 + } 199 + 200 + kfree(wq_data); 201 + } 202 + 203 + int adf_notify_fatal_error(struct adf_accel_dev *accel_dev) 204 + { 205 + struct adf_fatal_error_data *wq_data; 206 + 207 + wq_data = kzalloc(sizeof(*wq_data), GFP_ATOMIC); 208 + if (!wq_data) 209 + return -ENOMEM; 210 + 211 + wq_data->accel_dev = accel_dev; 212 + INIT_WORK(&wq_data->work, adf_notify_fatal_error_worker); 213 + adf_misc_wq_queue_work(&wq_data->work); 214 + 215 + return 0; 216 + } 217 + 241 218 int adf_init_aer(void) 242 219 { 243 220 device_reset_wq = alloc_workqueue("qat_device_reset_wq", 244 221 WQ_MEM_RECLAIM, 0); 245 - return !device_reset_wq ? -EFAULT : 0; 222 + if (!device_reset_wq) 223 + return -EFAULT; 224 + 225 + device_sriov_wq = alloc_workqueue("qat_device_sriov_wq", 0, 0); 226 + if (!device_sriov_wq) 227 + return -EFAULT; 228 + 229 + return 0; 246 230 } 247 231 248 232 void adf_exit_aer(void) ··· 301 183 if (device_reset_wq) 302 184 destroy_workqueue(device_reset_wq); 303 185 device_reset_wq = NULL; 186 + 187 + if (device_sriov_wq) 188 + destroy_workqueue(device_sriov_wq); 189 + device_sriov_wq = NULL; 304 190 }
+1
drivers/crypto/intel/qat/qat_common/adf_cfg_strings.h
··· 49 49 ADF_ETRMGR_BANK "%d" ADF_ETRMGR_CORE_AFFINITY 50 50 #define ADF_ACCEL_STR "Accelerator%d" 51 51 #define ADF_HEARTBEAT_TIMER "HeartbeatTimer" 52 + #define ADF_SRIOV_ENABLED "SriovEnabled" 52 53 53 54 #endif
+3
drivers/crypto/intel/qat/qat_common/adf_clock.c
··· 83 83 } 84 84 85 85 delta_us = timespec_to_us(&ts3) - timespec_to_us(&ts1); 86 + if (!delta_us) 87 + return -EINVAL; 88 + 86 89 temp = (timestamp2 - timestamp1) * ME_CLK_DIVIDER * 10; 87 90 temp = DIV_ROUND_CLOSEST_ULL(temp, delta_us); 88 91 /*
-1
drivers/crypto/intel/qat/qat_common/adf_cnv_dbgfs.c
··· 16 16 17 17 #define CNV_ERR_INFO_MASK GENMASK(11, 0) 18 18 #define CNV_ERR_TYPE_MASK GENMASK(15, 12) 19 - #define CNV_SLICE_ERR_MASK GENMASK(7, 0) 20 19 #define CNV_SLICE_ERR_SIGN_BIT_INDEX 7 21 20 #define CNV_DELTA_ERR_SIGN_BIT_INDEX 11 22 21
+10
drivers/crypto/intel/qat/qat_common/adf_common_drv.h
··· 40 40 ADF_EVENT_SHUTDOWN,
 41 41 ADF_EVENT_RESTARTING,
 42 42 ADF_EVENT_RESTARTED,
 43 + ADF_EVENT_FATAL_ERROR,
 43 44 };
 44 45
 45 46 struct service_hndl {
 ··· 61 60
 62 61 void adf_devmgr_update_class_index(struct adf_hw_device_data *hw_data);
 63 62 void adf_clean_vf_map(bool);
 63 + int adf_notify_fatal_error(struct adf_accel_dev *accel_dev);
 64 + void adf_error_notifier(struct adf_accel_dev *accel_dev);
 64 65 int adf_devmgr_add_dev(struct adf_accel_dev *accel_dev,
 65 66 struct adf_accel_dev *pf);
 66 67 void adf_devmgr_rm_dev(struct adf_accel_dev *accel_dev,
 ··· 87 84 extern const struct pci_error_handlers adf_err_handler;
 88 85 void adf_reset_sbr(struct adf_accel_dev *accel_dev);
 89 86 void adf_reset_flr(struct adf_accel_dev *accel_dev);
 87 + int adf_dev_autoreset(struct adf_accel_dev *accel_dev);
 90 88 void adf_dev_restore(struct adf_accel_dev *accel_dev);
 91 89 int adf_init_aer(void);
 92 90 void adf_exit_aer(void);
 93 91 int adf_init_arb(struct adf_accel_dev *accel_dev);
 94 92 void adf_exit_arb(struct adf_accel_dev *accel_dev);
 95 93 void adf_update_ring_arb(struct adf_etr_ring_data *ring);
 94 + int adf_disable_arb_thd(struct adf_accel_dev *accel_dev, u32 ae, u32 thr);
 96 95
 97 96 int adf_dev_get(struct adf_accel_dev *accel_dev);
 98 97 void adf_dev_put(struct adf_accel_dev *accel_dev);
 ··· 193 188 #if defined(CONFIG_PCI_IOV)
 194 189 int adf_sriov_configure(struct pci_dev *pdev, int numvfs);
 195 190 void adf_disable_sriov(struct adf_accel_dev *accel_dev);
 191 + void adf_reenable_sriov(struct adf_accel_dev *accel_dev);
 196 192 void adf_enable_vf2pf_interrupts(struct adf_accel_dev *accel_dev, u32 vf_mask);
 197 193 void adf_disable_all_vf2pf_interrupts(struct adf_accel_dev *accel_dev);
 198 194 bool adf_recv_and_handle_pf2vf_msg(struct adf_accel_dev *accel_dev);
 ··· 211 205 #define adf_sriov_configure NULL
 212 206
 213 207 static inline void adf_disable_sriov(struct adf_accel_dev *accel_dev)
 208 + {
 209 + }
 210 +
 211 + static inline void adf_reenable_sriov(struct adf_accel_dev *accel_dev)
 214 212 {
 215 213 }
 216 214
+2 -2
drivers/crypto/intel/qat/qat_common/adf_dev_mgr.c
··· 60 60 61 61 /** 62 62 * adf_clean_vf_map() - Cleans VF id mapings 63 - * 64 - * Function cleans internal ids for virtual functions. 65 63 * @vf: flag indicating whether mappings is cleaned 66 64 * for vfs only or for vfs and pfs 65 + * 66 + * Function cleans internal ids for virtual functions. 67 67 */ 68 68 void adf_clean_vf_map(bool vf) 69 69 {
+59
drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
··· 4 4 #include "adf_accel_devices.h" 5 5 #include "adf_cfg_services.h" 6 6 #include "adf_common_drv.h" 7 + #include "adf_fw_config.h" 7 8 #include "adf_gen4_hw_data.h" 8 9 #include "adf_gen4_pm.h" 9 10 ··· 399 398 ADF_GEN4_ADMIN_ACCELENGINES; 400 399 401 400 if (srv_id == SVC_DCC) { 401 + if (ae_cnt > ICP_QAT_HW_AE_DELIMITER) 402 + return -EINVAL; 403 + 402 404 memcpy(thd2arb_map, thrd_to_arb_map_dcc, 403 405 array_size(sizeof(*thd2arb_map), ae_cnt)); 404 406 return 0; ··· 434 430 return 0; 435 431 } 436 432 EXPORT_SYMBOL_GPL(adf_gen4_init_thd2arb_map); 433 + 434 + u16 adf_gen4_get_ring_to_svc_map(struct adf_accel_dev *accel_dev) 435 + { 436 + struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev); 437 + enum adf_cfg_service_type rps[RP_GROUP_COUNT] = { }; 438 + unsigned int ae_mask, start_id, worker_obj_cnt, i; 439 + u16 ring_to_svc_map; 440 + int rp_group; 441 + 442 + if (!hw_data->get_rp_group || !hw_data->uof_get_ae_mask || 443 + !hw_data->uof_get_obj_type || !hw_data->uof_get_num_objs) 444 + return 0; 445 + 446 + /* If dcc, all rings handle compression requests */ 447 + if (adf_get_service_enabled(accel_dev) == SVC_DCC) { 448 + for (i = 0; i < RP_GROUP_COUNT; i++) 449 + rps[i] = COMP; 450 + goto set_mask; 451 + } 452 + 453 + worker_obj_cnt = hw_data->uof_get_num_objs(accel_dev) - 454 + ADF_GEN4_ADMIN_ACCELENGINES; 455 + start_id = worker_obj_cnt - RP_GROUP_COUNT; 456 + 457 + for (i = start_id; i < worker_obj_cnt; i++) { 458 + ae_mask = hw_data->uof_get_ae_mask(accel_dev, i); 459 + rp_group = hw_data->get_rp_group(accel_dev, ae_mask); 460 + if (rp_group >= RP_GROUP_COUNT || rp_group < RP_GROUP_0) 461 + return 0; 462 + 463 + switch (hw_data->uof_get_obj_type(accel_dev, i)) { 464 + case ADF_FW_SYM_OBJ: 465 + rps[rp_group] = SYM; 466 + break; 467 + case ADF_FW_ASYM_OBJ: 468 + rps[rp_group] = ASYM; 469 + break; 470 + case ADF_FW_DC_OBJ: 471 + rps[rp_group] = COMP; 472 + break; 473 + default: 474 + rps[rp_group] = 0; 475 + break; 476 + } 477 + } 478 + 479 + set_mask: 480 + ring_to_svc_map = rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_0_SHIFT | 481 + rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_1_SHIFT | 482 + rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_2_SHIFT | 483 + rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_3_SHIFT; 484 + 485 + return ring_to_svc_map; 486 + } 487 + EXPORT_SYMBOL_GPL(adf_gen4_get_ring_to_svc_map);
+1
drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
··· 235 235 void adf_gen4_set_msix_default_rttable(struct adf_accel_dev *accel_dev); 236 236 void adf_gen4_set_ssm_wdtimer(struct adf_accel_dev *accel_dev); 237 237 int adf_gen4_init_thd2arb_map(struct adf_accel_dev *accel_dev); 238 + u16 adf_gen4_get_ring_to_svc_map(struct adf_accel_dev *accel_dev); 238 239 239 240 #endif
+2 -4
drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c
··· 1007 1007 static bool adf_handle_ssmcpppar_err(struct adf_accel_dev *accel_dev, 1008 1008 void __iomem *csr, u32 iastatssm) 1009 1009 { 1010 - u32 reg = ADF_CSR_RD(csr, ADF_GEN4_SSMCPPERR); 1011 - u32 bits_num = BITS_PER_REG(reg); 1010 + u32 reg, bits_num = BITS_PER_REG(reg); 1012 1011 bool reset_required = false; 1013 1012 unsigned long errs_bits; 1014 1013 u32 bit_iterator; ··· 1105 1106 static bool adf_handle_ser_err_ssmsh(struct adf_accel_dev *accel_dev, 1106 1107 void __iomem *csr, u32 iastatssm) 1107 1108 { 1108 - u32 reg = ADF_CSR_RD(csr, ADF_GEN4_SER_ERR_SSMSH); 1109 - u32 bits_num = BITS_PER_REG(reg); 1109 + u32 reg, bits_num = BITS_PER_REG(reg); 1110 1110 bool reset_required = false; 1111 1111 unsigned long errs_bits; 1112 1112 u32 bit_iterator;
+14 -6
drivers/crypto/intel/qat/qat_common/adf_heartbeat.c
··· 23 23 24 24 #define ADF_HB_EMPTY_SIG 0xA5A5A5A5 25 25 26 - /* Heartbeat counter pair */ 27 - struct hb_cnt_pair { 28 - __u16 resp_heartbeat_cnt; 29 - __u16 req_heartbeat_cnt; 30 - }; 31 - 32 26 static int adf_hb_check_polling_freq(struct adf_accel_dev *accel_dev) 33 27 { 34 28 u64 curr_time = adf_clock_get_current_time(); ··· 205 211 return ret; 206 212 } 207 213 214 + static void adf_heartbeat_reset(struct adf_accel_dev *accel_dev) 215 + { 216 + u64 curr_time = adf_clock_get_current_time(); 217 + u64 time_since_reset = curr_time - accel_dev->heartbeat->last_hb_reset_time; 218 + 219 + if (time_since_reset < ADF_CFG_HB_RESET_MS) 220 + return; 221 + 222 + accel_dev->heartbeat->last_hb_reset_time = curr_time; 223 + if (adf_notify_fatal_error(accel_dev)) 224 + dev_err(&GET_DEV(accel_dev), "Failed to notify fatal error\n"); 225 + } 226 + 208 227 void adf_heartbeat_status(struct adf_accel_dev *accel_dev, 209 228 enum adf_device_heartbeat_status *hb_status) 210 229 { ··· 242 235 "Heartbeat ERROR: QAT is not responding.\n"); 243 236 *hb_status = HB_DEV_UNRESPONSIVE; 244 237 hb->hb_failed_counter++; 238 + adf_heartbeat_reset(accel_dev); 245 239 return; 246 240 } 247 241
+21
drivers/crypto/intel/qat/qat_common/adf_heartbeat.h
··· 13 13 #define ADF_CFG_HB_TIMER_DEFAULT_MS 500 14 14 #define ADF_CFG_HB_COUNT_THRESHOLD 3 15 15 16 + #define ADF_CFG_HB_RESET_MS 5000 17 + 16 18 enum adf_device_heartbeat_status { 17 19 HB_DEV_UNRESPONSIVE = 0, 18 20 HB_DEV_ALIVE, 19 21 HB_DEV_UNSUPPORTED, 22 + }; 23 + 24 + /* Heartbeat counter pair */ 25 + struct hb_cnt_pair { 26 + __u16 resp_heartbeat_cnt; 27 + __u16 req_heartbeat_cnt; 20 28 }; 21 29 22 30 struct adf_heartbeat { ··· 32 24 unsigned int hb_failed_counter; 33 25 unsigned int hb_timer; 34 26 u64 last_hb_check_time; 27 + u64 last_hb_reset_time; 35 28 bool ctrs_cnt_checked; 36 29 struct hb_dma_addr { 37 30 dma_addr_t phy_addr; ··· 44 35 struct dentry *cfg; 45 36 struct dentry *sent; 46 37 struct dentry *failed; 38 + #ifdef CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION 39 + struct dentry *inject_error; 40 + #endif 47 41 } dbgfs; 48 42 }; 49 43 ··· 62 50 void adf_heartbeat_status(struct adf_accel_dev *accel_dev, 63 51 enum adf_device_heartbeat_status *hb_status); 64 52 void adf_heartbeat_check_ctrs(struct adf_accel_dev *accel_dev); 53 + 54 + #ifdef CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION 55 + int adf_heartbeat_inject_error(struct adf_accel_dev *accel_dev); 56 + #else 57 + static inline int adf_heartbeat_inject_error(struct adf_accel_dev *accel_dev) 58 + { 59 + return -EPERM; 60 + } 61 + #endif 65 62 66 63 #else 67 64 static inline int adf_heartbeat_init(struct adf_accel_dev *accel_dev)
+53
drivers/crypto/intel/qat/qat_common/adf_heartbeat_dbgfs.c
··· 155 155 .write = adf_hb_cfg_write, 156 156 }; 157 157 158 + static ssize_t adf_hb_error_inject_write(struct file *file, 159 + const char __user *user_buf, 160 + size_t count, loff_t *ppos) 161 + { 162 + struct adf_accel_dev *accel_dev = file->private_data; 163 + char buf[3]; 164 + int ret; 165 + 166 + /* last byte left as string termination */ 167 + if (*ppos != 0 || count != 2) 168 + return -EINVAL; 169 + 170 + if (copy_from_user(buf, user_buf, count)) 171 + return -EFAULT; 172 + buf[count] = '\0'; 173 + 174 + if (buf[0] != '1') 175 + return -EINVAL; 176 + 177 + ret = adf_heartbeat_inject_error(accel_dev); 178 + if (ret) { 179 + dev_err(&GET_DEV(accel_dev), 180 + "Heartbeat error injection failed with status %d\n", 181 + ret); 182 + return ret; 183 + } 184 + 185 + dev_info(&GET_DEV(accel_dev), "Heartbeat error injection enabled\n"); 186 + 187 + return count; 188 + } 189 + 190 + static const struct file_operations adf_hb_error_inject_fops = { 191 + .owner = THIS_MODULE, 192 + .open = simple_open, 193 + .write = adf_hb_error_inject_write, 194 + }; 195 + 158 196 void adf_heartbeat_dbgfs_add(struct adf_accel_dev *accel_dev) 159 197 { 160 198 struct adf_heartbeat *hb = accel_dev->heartbeat; ··· 209 171 &hb->hb_failed_counter, &adf_hb_stats_fops); 210 172 hb->dbgfs.cfg = debugfs_create_file("config", 0600, hb->dbgfs.base_dir, 211 173 accel_dev, &adf_hb_cfg_fops); 174 + 175 + if (IS_ENABLED(CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION)) { 176 + struct dentry *inject_error __maybe_unused; 177 + 178 + inject_error = debugfs_create_file("inject_error", 0200, 179 + hb->dbgfs.base_dir, accel_dev, 180 + &adf_hb_error_inject_fops); 181 + #ifdef CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION 182 + hb->dbgfs.inject_error = inject_error; 183 + #endif 184 + } 212 185 } 213 186 EXPORT_SYMBOL_GPL(adf_heartbeat_dbgfs_add); 214 187 ··· 238 189 hb->dbgfs.failed = NULL; 239 190 debugfs_remove(hb->dbgfs.cfg); 240 191 hb->dbgfs.cfg = NULL; 192 + #ifdef CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION 193 + debugfs_remove(hb->dbgfs.inject_error); 194 + hb->dbgfs.inject_error = NULL; 195 + #endif 241 196 debugfs_remove(hb->dbgfs.base_dir); 242 197 hb->dbgfs.base_dir = NULL; 243 198 }
+76
drivers/crypto/intel/qat/qat_common/adf_heartbeat_inject.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* Copyright(c) 2023 Intel Corporation */ 3 + #include <linux/random.h> 4 + 5 + #include "adf_admin.h" 6 + #include "adf_common_drv.h" 7 + #include "adf_heartbeat.h" 8 + 9 + #define MAX_HB_TICKS 0xFFFFFFFF 10 + 11 + static int adf_hb_set_timer_to_max(struct adf_accel_dev *accel_dev) 12 + { 13 + struct adf_hw_device_data *hw_data = accel_dev->hw_device; 14 + 15 + accel_dev->heartbeat->hb_timer = 0; 16 + 17 + if (hw_data->stop_timer) 18 + hw_data->stop_timer(accel_dev); 19 + 20 + return adf_send_admin_hb_timer(accel_dev, MAX_HB_TICKS); 21 + } 22 + 23 + static void adf_set_hb_counters_fail(struct adf_accel_dev *accel_dev, u32 ae, 24 + u32 thr) 25 + { 26 + struct hb_cnt_pair *stats = accel_dev->heartbeat->dma.virt_addr; 27 + struct adf_hw_device_data *hw_device = accel_dev->hw_device; 28 + const size_t max_aes = hw_device->get_num_aes(hw_device); 29 + const size_t hb_ctrs = hw_device->num_hb_ctrs; 30 + size_t thr_id = ae * hb_ctrs + thr; 31 + u16 num_rsp = stats[thr_id].resp_heartbeat_cnt; 32 + 33 + /* 34 + * Inject live.req != live.rsp and live.rsp == last.rsp 35 + * to trigger the heartbeat error detection 36 + */ 37 + stats[thr_id].req_heartbeat_cnt++; 38 + stats += (max_aes * hb_ctrs); 39 + stats[thr_id].resp_heartbeat_cnt = num_rsp; 40 + } 41 + 42 + int adf_heartbeat_inject_error(struct adf_accel_dev *accel_dev) 43 + { 44 + struct adf_hw_device_data *hw_device = accel_dev->hw_device; 45 + const size_t max_aes = hw_device->get_num_aes(hw_device); 46 + const size_t hb_ctrs = hw_device->num_hb_ctrs; 47 + u32 rand, rand_ae, rand_thr; 48 + unsigned long ae_mask; 49 + int ret; 50 + 51 + ae_mask = hw_device->ae_mask; 52 + 53 + do { 54 + /* Ensure we have a valid ae */ 55 + get_random_bytes(&rand, sizeof(rand)); 56 + rand_ae = rand % max_aes; 57 + } while (!test_bit(rand_ae, &ae_mask)); 58 + 59 + get_random_bytes(&rand, sizeof(rand)); 60 + rand_thr = rand % hb_ctrs; 61 + 62 + /* Increase the heartbeat timer to prevent FW updating HB counters */ 63 + ret = adf_hb_set_timer_to_max(accel_dev); 64 + if (ret) 65 + return ret; 66 + 67 + /* Configure worker threads to stop processing any packet */ 68 + ret = adf_disable_arb_thd(accel_dev, rand_ae, rand_thr); 69 + if (ret) 70 + return ret; 71 + 72 + /* Change HB counters memory to simulate a hang */ 73 + adf_set_hb_counters_fail(accel_dev, rand_ae, rand_thr); 74 + 75 + return 0; 76 + }
+25
drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c
··· 103 103 csr_ops->write_csr_ring_srv_arb_en(csr, i, 0); 104 104 } 105 105 EXPORT_SYMBOL_GPL(adf_exit_arb); 106 + 107 + int adf_disable_arb_thd(struct adf_accel_dev *accel_dev, u32 ae, u32 thr) 108 + { 109 + void __iomem *csr = accel_dev->transport->banks[0].csr_addr; 110 + struct adf_hw_device_data *hw_data = accel_dev->hw_device; 111 + const u32 *thd_2_arb_cfg; 112 + struct arb_info info; 113 + u32 ae_thr_map; 114 + 115 + if (ADF_AE_STRAND0_THREAD == thr || ADF_AE_STRAND1_THREAD == thr) 116 + thr = ADF_AE_ADMIN_THREAD; 117 + 118 + hw_data->get_arb_info(&info); 119 + thd_2_arb_cfg = hw_data->get_arb_mapping(accel_dev); 120 + if (!thd_2_arb_cfg) 121 + return -EFAULT; 122 + 123 + /* Disable scheduling for this particular AE and thread */ 124 + ae_thr_map = *(thd_2_arb_cfg + ae); 125 + ae_thr_map &= ~(GENMASK(3, 0) << (thr * BIT(2))); 126 + 127 + WRITE_CSR_ARB_WT2SAM(csr, info.arb_offset, info.wt2sam_offset, ae, 128 + ae_thr_map); 129 + return 0; 130 + }
+12
drivers/crypto/intel/qat/qat_common/adf_init.c
··· 433 433 return 0; 434 434 } 435 435 436 + void adf_error_notifier(struct adf_accel_dev *accel_dev) 437 + { 438 + struct service_hndl *service; 439 + 440 + list_for_each_entry(service, &service_table, list) { 441 + if (service->event_hld(accel_dev, ADF_EVENT_FATAL_ERROR)) 442 + dev_err(&GET_DEV(accel_dev), 443 + "Failed to send error event to %s.\n", 444 + service->name); 445 + } 446 + } 447 + 436 448 static int adf_dev_shutdown_cache_cfg(struct adf_accel_dev *accel_dev) 437 449 { 438 450 char services[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {0};
+7 -4
drivers/crypto/intel/qat/qat_common/adf_isr.c
··· 139 139 140 140 if (ras_ops->handle_interrupt && 141 141 ras_ops->handle_interrupt(accel_dev, &reset_required)) { 142 - if (reset_required) 142 + if (reset_required) { 143 143 dev_err(&GET_DEV(accel_dev), "Fatal error, reset required\n"); 144 + if (adf_notify_fatal_error(accel_dev)) 145 + dev_err(&GET_DEV(accel_dev), 146 + "Failed to notify fatal error\n"); 147 + } 148 + 144 149 return true; 145 150 } 146 151 ··· 277 272 if (!accel_dev->pf.vf_info) 278 273 msix_num_entries += hw_data->num_banks; 279 274 280 - irqs = kzalloc_node(msix_num_entries * sizeof(*irqs), 275 + irqs = kcalloc_node(msix_num_entries, sizeof(*irqs), 281 276 GFP_KERNEL, dev_to_node(&GET_DEV(accel_dev))); 282 277 if (!irqs) 283 278 return -ENOMEM; ··· 379 374 380 375 /** 381 376 * adf_init_misc_wq() - Init misc workqueue 382 - * 383 - * Function init workqueue 'qat_misc_wq' for general purpose. 384 377 * 385 378 * Return: 0 on success, error code otherwise. 386 379 */
+6 -1
drivers/crypto/intel/qat/qat_common/adf_pfvf_msg.h
··· 99 99 ADF_PF2VF_MSGTYPE_RESTARTING = 0x01, 100 100 ADF_PF2VF_MSGTYPE_VERSION_RESP = 0x02, 101 101 ADF_PF2VF_MSGTYPE_BLKMSG_RESP = 0x03, 102 + ADF_PF2VF_MSGTYPE_FATAL_ERROR = 0x04, 103 + ADF_PF2VF_MSGTYPE_RESTARTED = 0x05, 102 104 /* Values from 0x10 are Gen4 specific, message type is only 4 bits in Gen2 devices. */ 103 105 ADF_PF2VF_MSGTYPE_RP_RESET_RESP = 0x10, 104 106 }; ··· 114 112 ADF_VF2PF_MSGTYPE_LARGE_BLOCK_REQ = 0x07, 115 113 ADF_VF2PF_MSGTYPE_MEDIUM_BLOCK_REQ = 0x08, 116 114 ADF_VF2PF_MSGTYPE_SMALL_BLOCK_REQ = 0x09, 115 + ADF_VF2PF_MSGTYPE_RESTARTING_COMPLETE = 0x0a, 117 116 /* Values from 0x10 are Gen4 specific, message type is only 4 bits in Gen2 devices. */ 118 117 ADF_VF2PF_MSGTYPE_RP_RESET = 0x10, 119 118 }; ··· 127 124 ADF_PFVF_COMPAT_FAST_ACK = 0x03, 128 125 /* Ring to service mapping support for non-standard mappings */ 129 126 ADF_PFVF_COMPAT_RING_TO_SVC_MAP = 0x04, 127 + /* Fallback compat */ 128 + ADF_PFVF_COMPAT_FALLBACK = 0x05, 130 129 /* Reference to the latest version */ 131 - ADF_PFVF_COMPAT_THIS_VERSION = 0x04, 130 + ADF_PFVF_COMPAT_THIS_VERSION = 0x05, 132 131 }; 133 132 134 133 /* PF->VF Version Response */
+63 -1
drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_msg.c
··· 1 1 // SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only) 2 2 /* Copyright(c) 2015 - 2021 Intel Corporation */ 3 + #include <linux/delay.h> 3 4 #include <linux/pci.h> 4 5 #include "adf_accel_devices.h" 5 6 #include "adf_pfvf_msg.h" 6 7 #include "adf_pfvf_pf_msg.h" 7 8 #include "adf_pfvf_pf_proto.h" 9 + 10 + #define ADF_PF_WAIT_RESTARTING_COMPLETE_DELAY 100 11 + #define ADF_VF_SHUTDOWN_RETRY 100 8 12 9 13 void adf_pf2vf_notify_restarting(struct adf_accel_dev *accel_dev) 10 14 { ··· 16 12 struct pfvf_message msg = { .type = ADF_PF2VF_MSGTYPE_RESTARTING }; 17 13 int i, num_vfs = pci_num_vf(accel_to_pci_dev(accel_dev)); 18 14 15 + dev_dbg(&GET_DEV(accel_dev), "pf2vf notify restarting\n"); 19 16 for (i = 0, vf = accel_dev->pf.vf_info; i < num_vfs; i++, vf++) { 20 - if (vf->init && adf_send_pf2vf_msg(accel_dev, i, msg)) 17 + vf->restarting = false; 18 + if (!vf->init) 19 + continue; 20 + if (adf_send_pf2vf_msg(accel_dev, i, msg)) 21 21 dev_err(&GET_DEV(accel_dev), 22 22 "Failed to send restarting msg to VF%d\n", i); 23 + else if (vf->vf_compat_ver >= ADF_PFVF_COMPAT_FALLBACK) 24 + vf->restarting = true; 25 + } 26 + } 27 + 28 + void adf_pf2vf_wait_for_restarting_complete(struct adf_accel_dev *accel_dev) 29 + { 30 + int num_vfs = pci_num_vf(accel_to_pci_dev(accel_dev)); 31 + int i, retries = ADF_VF_SHUTDOWN_RETRY; 32 + struct adf_accel_vf_info *vf; 33 + bool vf_running; 34 + 35 + dev_dbg(&GET_DEV(accel_dev), "pf2vf wait for restarting complete\n"); 36 + do { 37 + vf_running = false; 38 + for (i = 0, vf = accel_dev->pf.vf_info; i < num_vfs; i++, vf++) 39 + if (vf->restarting) 40 + vf_running = true; 41 + if (!vf_running) 42 + break; 43 + msleep(ADF_PF_WAIT_RESTARTING_COMPLETE_DELAY); 44 + } while (--retries); 45 + 46 + if (vf_running) 47 + dev_warn(&GET_DEV(accel_dev), "Some VFs are still running\n"); 48 + } 49 + 50 + void adf_pf2vf_notify_restarted(struct adf_accel_dev *accel_dev) 51 + { 52 + struct pfvf_message msg = { .type = ADF_PF2VF_MSGTYPE_RESTARTED }; 53 + int i, num_vfs = pci_num_vf(accel_to_pci_dev(accel_dev)); 54 + struct adf_accel_vf_info *vf; 55 + 56 + dev_dbg(&GET_DEV(accel_dev), "pf2vf notify restarted\n"); 57 + for (i = 0, vf = accel_dev->pf.vf_info; i < num_vfs; i++, vf++) { 58 + if (vf->init && vf->vf_compat_ver >= ADF_PFVF_COMPAT_FALLBACK && 59 + adf_send_pf2vf_msg(accel_dev, i, msg)) 60 + dev_err(&GET_DEV(accel_dev), 61 + "Failed to send restarted msg to VF%d\n", i); 62 + } 63 + } 64 + 65 + void adf_pf2vf_notify_fatal_error(struct adf_accel_dev *accel_dev) 66 + { 67 + struct pfvf_message msg = { .type = ADF_PF2VF_MSGTYPE_FATAL_ERROR }; 68 + int i, num_vfs = pci_num_vf(accel_to_pci_dev(accel_dev)); 69 + struct adf_accel_vf_info *vf; 70 + 71 + dev_dbg(&GET_DEV(accel_dev), "pf2vf notify fatal error\n"); 72 + for (i = 0, vf = accel_dev->pf.vf_info; i < num_vfs; i++, vf++) { 73 + if (vf->init && vf->vf_compat_ver >= ADF_PFVF_COMPAT_FALLBACK && 74 + adf_send_pf2vf_msg(accel_dev, i, msg)) 75 + dev_err(&GET_DEV(accel_dev), 76 + "Failed to send fatal error msg to VF%d\n", i); 23 77 } 24 78 } 25 79
+21
drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_msg.h
··· 5 5 6 6 #include "adf_accel_devices.h" 7 7 8 + #if defined(CONFIG_PCI_IOV) 8 9 void adf_pf2vf_notify_restarting(struct adf_accel_dev *accel_dev); 10 + void adf_pf2vf_wait_for_restarting_complete(struct adf_accel_dev *accel_dev); 11 + void adf_pf2vf_notify_restarted(struct adf_accel_dev *accel_dev); 12 + void adf_pf2vf_notify_fatal_error(struct adf_accel_dev *accel_dev); 13 + #else 14 + static inline void adf_pf2vf_notify_restarting(struct adf_accel_dev *accel_dev) 15 + { 16 + } 17 + 18 + static inline void adf_pf2vf_wait_for_restarting_complete(struct adf_accel_dev *accel_dev) 19 + { 20 + } 21 + 22 + static inline void adf_pf2vf_notify_restarted(struct adf_accel_dev *accel_dev) 23 + { 24 + } 25 + 26 + static inline void adf_pf2vf_notify_fatal_error(struct adf_accel_dev *accel_dev) 27 + { 28 + } 29 + #endif 9 30 10 31 typedef int (*adf_pf2vf_blkmsg_provider)(struct adf_accel_dev *accel_dev, 11 32 u8 *buffer, u8 compat);
+8
drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_proto.c
··· 291 291 vf_info->init = false; 292 292 } 293 293 break; 294 + case ADF_VF2PF_MSGTYPE_RESTARTING_COMPLETE: 295 + { 296 + dev_dbg(&GET_DEV(accel_dev), 297 + "Restarting Complete received from VF%d\n", vf_nr); 298 + vf_info->restarting = false; 299 + vf_info->init = false; 300 + } 301 + break; 294 302 case ADF_VF2PF_MSGTYPE_LARGE_BLOCK_REQ: 295 303 case ADF_VF2PF_MSGTYPE_MEDIUM_BLOCK_REQ: 296 304 case ADF_VF2PF_MSGTYPE_SMALL_BLOCK_REQ:
+6
drivers/crypto/intel/qat/qat_common/adf_pfvf_vf_proto.c
··· 308 308 309 309 adf_pf2vf_handle_pf_restarting(accel_dev); 310 310 return false; 311 + case ADF_PF2VF_MSGTYPE_RESTARTED: 312 + dev_dbg(&GET_DEV(accel_dev), "Restarted message received from PF\n"); 313 + return true; 314 + case ADF_PF2VF_MSGTYPE_FATAL_ERROR: 315 + dev_err(&GET_DEV(accel_dev), "Fatal error received from PF\n"); 316 + return true; 311 317 case ADF_PF2VF_MSGTYPE_VERSION_RESP: 312 318 case ADF_PF2VF_MSGTYPE_BLKMSG_RESP: 313 319 case ADF_PF2VF_MSGTYPE_RP_RESET_RESP:
+19 -1
drivers/crypto/intel/qat/qat_common/adf_rl.c
··· 788 788 sla_type_arr[node_id] = NULL; 789 789 } 790 790 791 + static void free_all_sla(struct adf_accel_dev *accel_dev) 792 + { 793 + struct adf_rl *rl_data = accel_dev->rate_limiting; 794 + int sla_id; 795 + 796 + mutex_lock(&rl_data->rl_lock); 797 + 798 + for (sla_id = 0; sla_id < RL_NODES_CNT_MAX; sla_id++) { 799 + if (!rl_data->sla[sla_id]) 800 + continue; 801 + 802 + kfree(rl_data->sla[sla_id]); 803 + rl_data->sla[sla_id] = NULL; 804 + } 805 + 806 + mutex_unlock(&rl_data->rl_lock); 807 + } 808 + 791 809 /** 792 810 * add_update_sla() - handles the creation and the update of an SLA 793 811 * @accel_dev: pointer to acceleration device structure ··· 1173 1155 return; 1174 1156 1175 1157 adf_sysfs_rl_rm(accel_dev); 1176 - adf_rl_remove_sla_all(accel_dev, true); 1158 + free_all_sla(accel_dev); 1177 1159 } 1178 1160 1179 1161 void adf_rl_exit(struct adf_accel_dev *accel_dev)
+35 -3
drivers/crypto/intel/qat/qat_common/adf_sriov.c
··· 60 60 /* This ptr will be populated when VFs will be created */ 61 61 vf_info->accel_dev = accel_dev; 62 62 vf_info->vf_nr = i; 63 - vf_info->vf_compat_ver = 0; 64 63 65 64 mutex_init(&vf_info->pf2vf_lock); 66 65 ratelimit_state_init(&vf_info->vf2pf_ratelimit, ··· 83 84 return pci_enable_sriov(pdev, totalvfs); 84 85 } 85 86 87 + void adf_reenable_sriov(struct adf_accel_dev *accel_dev) 88 + { 89 + struct pci_dev *pdev = accel_to_pci_dev(accel_dev); 90 + char cfg[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {0}; 91 + unsigned long val = 0; 92 + 93 + if (adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC, 94 + ADF_SRIOV_ENABLED, cfg)) 95 + return; 96 + 97 + if (!accel_dev->pf.vf_info) 98 + return; 99 + 100 + if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, ADF_NUM_CY, 101 + &val, ADF_DEC)) 102 + return; 103 + 104 + if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, ADF_NUM_DC, 105 + &val, ADF_DEC)) 106 + return; 107 + 108 + set_bit(ADF_STATUS_CONFIGURED, &accel_dev->status); 109 + dev_dbg(&pdev->dev, "Re-enabling SRIOV\n"); 110 + adf_enable_sriov(accel_dev); 111 + } 112 + 86 113 /** 87 114 * adf_disable_sriov() - Disable SRIOV for the device 88 115 * @accel_dev: Pointer to accel device. 
··· 128 103 return; 129 104 130 105 adf_pf2vf_notify_restarting(accel_dev); 106 + adf_pf2vf_wait_for_restarting_complete(accel_dev); 131 107 pci_disable_sriov(accel_to_pci_dev(accel_dev)); 132 108 133 109 /* Disable VF to PF interrupts */ ··· 141 115 for (i = 0, vf = accel_dev->pf.vf_info; i < totalvfs; i++, vf++) 142 116 mutex_destroy(&vf->pf2vf_lock); 143 117 144 - kfree(accel_dev->pf.vf_info); 145 - accel_dev->pf.vf_info = NULL; 118 + if (!test_bit(ADF_STATUS_RESTARTING, &accel_dev->status)) { 119 + kfree(accel_dev->pf.vf_info); 120 + accel_dev->pf.vf_info = NULL; 121 + } 146 122 } 147 123 EXPORT_SYMBOL_GPL(adf_disable_sriov); 148 124 ··· 221 193 ret = adf_enable_sriov(accel_dev); 222 194 if (ret) 223 195 return ret; 196 + 197 + val = 1; 198 + adf_cfg_add_key_value_param(accel_dev, ADF_GENERAL_SEC, ADF_SRIOV_ENABLED, 199 + &val, ADF_DEC); 224 200 225 201 return numvfs; 226 202 }
+37
drivers/crypto/intel/qat/qat_common/adf_sysfs.c
··· 204 204 } 205 205 static DEVICE_ATTR_RW(pm_idle_enabled); 206 206 207 + static ssize_t auto_reset_show(struct device *dev, struct device_attribute *attr, 208 + char *buf) 209 + { 210 + char *auto_reset; 211 + struct adf_accel_dev *accel_dev; 212 + 213 + accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev)); 214 + if (!accel_dev) 215 + return -EINVAL; 216 + 217 + auto_reset = accel_dev->autoreset_on_error ? "on" : "off"; 218 + 219 + return sysfs_emit(buf, "%s\n", auto_reset); 220 + } 221 + 222 + static ssize_t auto_reset_store(struct device *dev, struct device_attribute *attr, 223 + const char *buf, size_t count) 224 + { 225 + struct adf_accel_dev *accel_dev; 226 + bool enabled = false; 227 + int ret; 228 + 229 + ret = kstrtobool(buf, &enabled); 230 + if (ret) 231 + return ret; 232 + 233 + accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev)); 234 + if (!accel_dev) 235 + return -EINVAL; 236 + 237 + accel_dev->autoreset_on_error = enabled; 238 + 239 + return count; 240 + } 241 + static DEVICE_ATTR_RW(auto_reset); 242 + 207 243 static DEVICE_ATTR_RW(state); 208 244 static DEVICE_ATTR_RW(cfg_services); 209 245 ··· 327 291 &dev_attr_pm_idle_enabled.attr, 328 292 &dev_attr_rp2srv.attr, 329 293 &dev_attr_num_rps.attr, 294 + &dev_attr_auto_reset.attr, 330 295 NULL, 331 296 }; 332 297
-2
drivers/crypto/intel/qat/qat_common/adf_vf_isr.c
··· 293 293 /** 294 294 * adf_init_vf_wq() - Init workqueue for VF 295 295 * 296 - * Function init workqueue 'adf_vf_stop_wq' for VF. 297 - * 298 296 * Return: 0 on success, error code otherwise. 299 297 */ 300 298 int __init adf_init_vf_wq(void)
-9
drivers/crypto/intel/qat/qat_common/qat_comp_algs.c
··· 13 13 #include "qat_compression.h" 14 14 #include "qat_algs_send.h" 15 15 16 - #define QAT_RFC_1950_HDR_SIZE 2 17 - #define QAT_RFC_1950_FOOTER_SIZE 4 18 - #define QAT_RFC_1950_CM_DEFLATE 8 19 - #define QAT_RFC_1950_CM_DEFLATE_CINFO_32K 7 20 - #define QAT_RFC_1950_CM_MASK 0x0f 21 - #define QAT_RFC_1950_CM_OFFSET 4 22 - #define QAT_RFC_1950_DICT_MASK 0x20 23 - #define QAT_RFC_1950_COMP_HDR 0x785e 24 - 25 16 static DEFINE_MUTEX(algs_lock); 26 17 static unsigned int active_devs; 27 18
+2 -2
drivers/crypto/intel/qat/qat_common/qat_crypto.c
··· 105 105 } 106 106 107 107 /** 108 - * qat_crypto_vf_dev_config() 109 - * create dev config required to create crypto inst. 108 + * qat_crypto_vf_dev_config() - create dev config required to create 109 + * crypto inst. 110 110 * 111 111 * @accel_dev: Pointer to acceleration device. 112 112 *
+5
drivers/crypto/rockchip/rk3288_crypto.c
··· 371 371 } 372 372 373 373 crypto_info->engine = crypto_engine_alloc_init(&pdev->dev, true); 374 + if (!crypto_info->engine) { 375 + err = -ENOMEM; 376 + goto err_crypto; 377 + } 378 + 374 379 crypto_engine_start(crypto_info->engine); 375 380 init_completion(&crypto_info->complete); 376 381
+6 -6
drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
··· 225 225 struct virtio_crypto *vcrypto = ctx->vcrypto; 226 226 struct virtio_crypto_op_data_req *req_data = vc_req->req_data; 227 227 struct scatterlist *sgs[4], outhdr_sg, inhdr_sg, srcdata_sg, dstdata_sg; 228 - void *src_buf = NULL, *dst_buf = NULL; 228 + void *src_buf, *dst_buf = NULL; 229 229 unsigned int num_out = 0, num_in = 0; 230 230 int node = dev_to_node(&vcrypto->vdev->dev); 231 231 unsigned long flags; 232 - int ret = -ENOMEM; 232 + int ret; 233 233 bool verify = vc_akcipher_req->opcode == VIRTIO_CRYPTO_AKCIPHER_VERIFY; 234 234 unsigned int src_len = verify ? req->src_len + req->dst_len : req->src_len; 235 235 ··· 240 240 /* src data */ 241 241 src_buf = kcalloc_node(src_len, 1, GFP_KERNEL, node); 242 242 if (!src_buf) 243 - goto err; 243 + return -ENOMEM; 244 244 245 245 if (verify) { 246 246 /* for verify operation, both src and dst data work as OUT direction */ ··· 255 255 /* dst data */ 256 256 dst_buf = kcalloc_node(req->dst_len, 1, GFP_KERNEL, node); 257 257 if (!dst_buf) 258 - goto err; 258 + goto free_src; 259 259 260 260 sg_init_one(&dstdata_sg, dst_buf, req->dst_len); 261 261 sgs[num_out + num_in++] = &dstdata_sg; ··· 278 278 return 0; 279 279 280 280 err: 281 - kfree(src_buf); 282 281 kfree(dst_buf); 283 - 282 + free_src: 283 + kfree(src_buf); 284 284 return -ENOMEM; 285 285 } 286 286
-2
drivers/crypto/virtio/virtio_crypto_core.c
··· 42 42 virtio_crypto_ctrlq_callback(vc_ctrl_req); 43 43 spin_lock_irqsave(&vcrypto->ctrl_lock, flags); 44 44 } 45 - if (unlikely(virtqueue_is_broken(vq))) 46 - break; 47 45 } while (!virtqueue_enable_cb(vq)); 48 46 spin_unlock_irqrestore(&vcrypto->ctrl_lock, flags); 49 47 }
-3
drivers/crypto/vmx/.gitignore
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - aesp8-ppc.S 3 - ghashp8-ppc.S
-14
drivers/crypto/vmx/Kconfig
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - config CRYPTO_DEV_VMX_ENCRYPT 3 - tristate "Encryption acceleration support on P8 CPU" 4 - depends on CRYPTO_DEV_VMX 5 - select CRYPTO_AES 6 - select CRYPTO_CBC 7 - select CRYPTO_CTR 8 - select CRYPTO_GHASH 9 - select CRYPTO_XTS 10 - default m 11 - help 12 - Support for VMX cryptographic acceleration instructions on Power8 CPU. 13 - This module supports acceleration for AES and GHASH in hardware. If you 14 - choose 'M' here, this module will be called vmx-crypto.
-23
drivers/crypto/vmx/Makefile
··· 1 - # SPDX-License-Identifier: GPL-2.0 2 - obj-$(CONFIG_CRYPTO_DEV_VMX_ENCRYPT) += vmx-crypto.o 3 - vmx-crypto-objs := vmx.o aesp8-ppc.o ghashp8-ppc.o aes.o aes_cbc.o aes_ctr.o aes_xts.o ghash.o 4 - 5 - ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),y) 6 - override flavour := linux-ppc64le 7 - else 8 - ifdef CONFIG_PPC64_ELF_ABI_V2 9 - override flavour := linux-ppc64-elfv2 10 - else 11 - override flavour := linux-ppc64 12 - endif 13 - endif 14 - 15 - quiet_cmd_perl = PERL $@ 16 - cmd_perl = $(PERL) $< $(flavour) > $@ 17 - 18 - targets += aesp8-ppc.S ghashp8-ppc.S 19 - 20 - $(obj)/aesp8-ppc.S $(obj)/ghashp8-ppc.S: $(obj)/%.S: $(src)/%.pl FORCE 21 - $(call if_changed,perl) 22 - 23 - OBJECT_FILES_NON_STANDARD_aesp8-ppc.o := y
drivers/crypto/vmx/aes.c arch/powerpc/crypto/aes.c
drivers/crypto/vmx/aes_cbc.c arch/powerpc/crypto/aes_cbc.c
drivers/crypto/vmx/aes_ctr.c arch/powerpc/crypto/aes_ctr.c
drivers/crypto/vmx/aes_xts.c arch/powerpc/crypto/aes_xts.c
drivers/crypto/vmx/aesp8-ppc.h arch/powerpc/crypto/aesp8-ppc.h
drivers/crypto/vmx/aesp8-ppc.pl arch/powerpc/crypto/aesp8-ppc.pl
drivers/crypto/vmx/ghash.c arch/powerpc/crypto/ghash.c
drivers/crypto/vmx/ghashp8-ppc.pl arch/powerpc/crypto/ghashp8-ppc.pl
-231
drivers/crypto/vmx/ppc-xlate.pl
···
1 - #!/usr/bin/env perl
2 - # SPDX-License-Identifier: GPL-2.0
3 -
4 - # PowerPC assembler distiller by <appro>.
5 -
6 - my $flavour = shift;
7 - my $output = shift;
8 - open STDOUT,">$output" || die "can't open $output: $!";
9 -
10 - my %GLOBALS;
11 - my $dotinlocallabels=($flavour=~/linux/)?1:0;
12 - my $elfv2abi=(($flavour =~ /linux-ppc64le/) or ($flavour =~ /linux-ppc64-elfv2/))?1:0;
13 - my $dotfunctions=($elfv2abi=~1)?0:1;
14 -
15 - ################################################################
16 - # directives which need special treatment on different platforms
17 - ################################################################
18 - my $globl = sub {
19 - my $junk = shift;
20 - my $name = shift;
21 - my $global = \$GLOBALS{$name};
22 - my $ret;
23 -
24 - $name =~ s|^[\.\_]||;
25 -
26 - SWITCH: for ($flavour) {
27 - /aix/ && do { $name = ".$name";
28 - last;
29 - };
30 - /osx/ && do { $name = "_$name";
31 - last;
32 - };
33 - /linux/
34 - && do { $ret = "_GLOBAL($name)";
35 - last;
36 - };
37 - }
38 -
39 - $ret = ".globl $name\nalign 5\n$name:" if (!$ret);
40 - $$global = $name;
41 - $ret;
42 - };
43 - my $text = sub {
44 - my $ret = ($flavour =~ /aix/) ? ".csect\t.text[PR],7" : ".text";
45 - $ret = ".abiversion 2\n".$ret if ($elfv2abi);
46 - $ret;
47 - };
48 - my $machine = sub {
49 - my $junk = shift;
50 - my $arch = shift;
51 - if ($flavour =~ /osx/)
52 - { $arch =~ s/\"//g;
53 - $arch = ($flavour=~/64/) ? "ppc970-64" : "ppc970" if ($arch eq "any");
54 - }
55 - ".machine $arch";
56 - };
57 - my $size = sub {
58 - if ($flavour =~ /linux/)
59 - { shift;
60 - my $name = shift; $name =~ s|^[\.\_]||;
61 - my $ret = ".size $name,.-".($dotfunctions?".":"").$name;
62 - $ret .= "\n.size .$name,.-.$name" if ($dotfunctions);
63 - $ret;
64 - }
65 - else
66 - { ""; }
67 - };
68 - my $asciz = sub {
69 - shift;
70 - my $line = join(",",@_);
71 - if ($line =~ /^"(.*)"$/)
72 - { ".byte " . join(",",unpack("C*",$1),0) . "\n.align 2"; }
73 - else
74 - { ""; }
75 - };
76 - my $quad = sub {
77 - shift;
78 - my @ret;
79 - my ($hi,$lo);
80 - for (@_) {
81 - if (/^0x([0-9a-f]*?)([0-9a-f]{1,8})$/io)
82 - { $hi=$1?"0x$1":"0"; $lo="0x$2"; }
83 - elsif (/^([0-9]+)$/o)
84 - { $hi=$1>>32; $lo=$1&0xffffffff; } # error-prone with 32-bit perl
85 - else
86 - { $hi=undef; $lo=$_; }
87 -
88 - if (defined($hi))
89 - { push(@ret,$flavour=~/le$/o?".long\t$lo,$hi":".long\t$hi,$lo"); }
90 - else
91 - { push(@ret,".quad $lo"); }
92 - }
93 - join("\n",@ret);
94 - };
95 -
96 - ################################################################
97 - # simplified mnemonics not handled by at least one assembler
98 - ################################################################
99 - my $cmplw = sub {
100 - my $f = shift;
101 - my $cr = 0; $cr = shift if ($#_>1);
102 - # Some out-of-date 32-bit GNU assembler just can't handle cmplw...
103 - ($flavour =~ /linux.*32/) ?
104 - " .long ".sprintf "0x%x",31<<26|$cr<<23|$_[0]<<16|$_[1]<<11|64 :
105 - " cmplw ".join(',',$cr,@_);
106 - };
107 - my $bdnz = sub {
108 - my $f = shift;
109 - my $bo = $f=~/[\+\-]/ ? 16+9 : 16; # optional "to be taken" hint
110 - " bc $bo,0,".shift;
111 - } if ($flavour!~/linux/);
112 - my $bltlr = sub {
113 - my $f = shift;
114 - my $bo = $f=~/\-/ ? 12+2 : 12; # optional "not to be taken" hint
115 - ($flavour =~ /linux/) ? # GNU as doesn't allow most recent hints
116 - " .long ".sprintf "0x%x",19<<26|$bo<<21|16<<1 :
117 - " bclr $bo,0";
118 - };
119 - my $bnelr = sub {
120 - my $f = shift;
121 - my $bo = $f=~/\-/ ? 4+2 : 4; # optional "not to be taken" hint
122 - ($flavour =~ /linux/) ? # GNU as doesn't allow most recent hints
123 - " .long ".sprintf "0x%x",19<<26|$bo<<21|2<<16|16<<1 :
124 - " bclr $bo,2";
125 - };
126 - my $beqlr = sub {
127 - my $f = shift;
128 - my $bo = $f=~/-/ ? 12+2 : 12; # optional "not to be taken" hint
129 - ($flavour =~ /linux/) ? # GNU as doesn't allow most recent hints
130 - " .long ".sprintf "0x%X",19<<26|$bo<<21|2<<16|16<<1 :
131 - " bclr $bo,2";
132 - };
133 - # GNU assembler can't handle extrdi rA,rS,16,48, or when sum of last two
134 - # arguments is 64, with "operand out of range" error.
135 - my $extrdi = sub {
136 - my ($f,$ra,$rs,$n,$b) = @_;
137 - $b = ($b+$n)&63; $n = 64-$n;
138 - " rldicl $ra,$rs,$b,$n";
139 - };
140 - my $vmr = sub {
141 - my ($f,$vx,$vy) = @_;
142 - " vor $vx,$vy,$vy";
143 - };
144 -
145 - # Some ABIs specify vrsave, special-purpose register #256, as reserved
146 - # for system use.
147 - my $no_vrsave = ($elfv2abi);
148 - my $mtspr = sub {
149 - my ($f,$idx,$ra) = @_;
150 - if ($idx == 256 && $no_vrsave) {
151 - " or $ra,$ra,$ra";
152 - } else {
153 - " mtspr $idx,$ra";
154 - }
155 - };
156 - my $mfspr = sub {
157 - my ($f,$rd,$idx) = @_;
158 - if ($idx == 256 && $no_vrsave) {
159 - " li $rd,-1";
160 - } else {
161 - " mfspr $rd,$idx";
162 - }
163 - };
164 -
165 - # PowerISA 2.06 stuff
166 - sub vsxmem_op {
167 - my ($f, $vrt, $ra, $rb, $op) = @_;
168 - " .long ".sprintf "0x%X",(31<<26)|($vrt<<21)|($ra<<16)|($rb<<11)|($op*2+1);
169 - }
170 - # made-up unaligned memory reference AltiVec/VMX instructions
171 - my $lvx_u = sub { vsxmem_op(@_, 844); }; # lxvd2x
172 - my $stvx_u = sub { vsxmem_op(@_, 972); }; # stxvd2x
173 - my $lvdx_u = sub { vsxmem_op(@_, 588); }; # lxsdx
174 - my $stvdx_u = sub { vsxmem_op(@_, 716); }; # stxsdx
175 - my $lvx_4w = sub { vsxmem_op(@_, 780); }; # lxvw4x
176 - my $stvx_4w = sub { vsxmem_op(@_, 908); }; # stxvw4x
177 -
178 - # PowerISA 2.07 stuff
179 - sub vcrypto_op {
180 - my ($f, $vrt, $vra, $vrb, $op) = @_;
181 - " .long ".sprintf "0x%X",(4<<26)|($vrt<<21)|($vra<<16)|($vrb<<11)|$op;
182 - }
183 - my $vcipher = sub { vcrypto_op(@_, 1288); };
184 - my $vcipherlast = sub { vcrypto_op(@_, 1289); };
185 - my $vncipher = sub { vcrypto_op(@_, 1352); };
186 - my $vncipherlast= sub { vcrypto_op(@_, 1353); };
187 - my $vsbox = sub { vcrypto_op(@_, 0, 1480); };
188 - my $vshasigmad = sub { my ($st,$six)=splice(@_,-2); vcrypto_op(@_, $st<<4|$six, 1730); };
189 - my $vshasigmaw = sub { my ($st,$six)=splice(@_,-2); vcrypto_op(@_, $st<<4|$six, 1666); };
190 - my $vpmsumb = sub { vcrypto_op(@_, 1032); };
191 - my $vpmsumd = sub { vcrypto_op(@_, 1224); };
192 - my $vpmsubh = sub { vcrypto_op(@_, 1096); };
193 - my $vpmsumw = sub { vcrypto_op(@_, 1160); };
194 - my $vaddudm = sub { vcrypto_op(@_, 192); };
195 - my $vadduqm = sub { vcrypto_op(@_, 256); };
196 -
197 - my $mtsle = sub {
198 - my ($f, $arg) = @_;
199 - " .long ".sprintf "0x%X",(31<<26)|($arg<<21)|(147*2);
200 - };
201 -
202 - print "#include <asm/ppc_asm.h>\n" if $flavour =~ /linux/;
203 -
204 - while($line=<>) {
205 -
206 - $line =~ s|[#!;].*$||; # get rid of asm-style comments...
207 - $line =~ s|/\*.*\*/||; # ... and C-style comments...
208 - $line =~ s|^\s+||; # ... and skip white spaces in beginning...
209 - $line =~ s|\s+$||; # ... and at the end
210 -
211 - {
212 - $line =~ s|\b\.L(\w+)|L$1|g; # common denominator for Locallabel
213 - $line =~ s|\bL(\w+)|\.L$1|g if ($dotinlocallabels);
214 - }
215 -
216 - {
217 - $line =~ s|^\s*(\.?)(\w+)([\.\+\-]?)\s*||;
218 - my $c = $1; $c = "\t" if ($c eq "");
219 - my $mnemonic = $2;
220 - my $f = $3;
221 - my $opcode = eval("\$$mnemonic");
222 - $line =~ s/\b(c?[rf]|v|vs)([0-9]+)\b/$2/g if ($c ne "." and $flavour !~ /osx/);
223 - if (ref($opcode) eq 'CODE') { $line = &$opcode($f,split(',',$line)); }
224 - elsif ($mnemonic) { $line = $c.$mnemonic.$f."\t".$line; }
225 - }
226 -
227 - print $line if ($line);
228 - print "\n";
229 - }
230 -
231 - close STDOUT;
drivers/crypto/vmx/vmx.c arch/powerpc/crypto/vmx.c
+3
drivers/crypto/xilinx/zynqmp-aes-gcm.c
···
231 231 		err = zynqmp_aes_aead_cipher(areq);
232 232 	}
233 233
234 + 	local_bh_disable();
234 235 	crypto_finalize_aead_request(engine, areq, err);
236 + 	local_bh_enable();
237 +
235 238 	return 0;
236 239 }
237 240
-2
include/crypto/internal/hash.h
···
87 87 		!(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY);
88 88 }
89 89
90 - bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg);
91 -
92 90 int crypto_grab_ahash(struct crypto_ahash_spawn *spawn,
93 91 		      struct crypto_instance *inst,
94 92 		      const char *name, u32 type, u32 mask);
+1
include/crypto/public_key.h
···
10 10 #ifndef _LINUX_PUBLIC_KEY_H
11 11 #define _LINUX_PUBLIC_KEY_H
12 12
13 + #include <linux/errno.h>
13 14 #include <linux/keyctl.h>
14 15 #include <linux/oid_registry.h>
15 16
+9 -1
include/linux/hisi_acc_qm.h
···
43 43 #define QM_MB_CMD_CQC_BT		0x5
44 44 #define QM_MB_CMD_SQC_VFT_V2		0x6
45 45 #define QM_MB_CMD_STOP_QP		0x8
46 + #define QM_MB_CMD_FLUSH_QM		0x9
46 47 #define QM_MB_CMD_SRC			0xc
47 48 #define QM_MB_CMD_DST			0xd
···
152 151 	QM_SUPPORT_DB_ISOLATION = 0x0,
153 152 	QM_SUPPORT_FUNC_QOS,
154 153 	QM_SUPPORT_STOP_QP,
154 + 	QM_SUPPORT_STOP_FUNC,
155 155 	QM_SUPPORT_MB_COMMAND,
156 156 	QM_SUPPORT_SVA_PREFETCH,
157 157 	QM_SUPPORT_RPM,
···
161 159 struct qm_dev_alg {
162 160 	u64 alg_msk;
163 161 	const char *alg;
162 + };
163 +
164 + struct qm_dev_dfx {
165 + 	u32 dev_state;
166 + 	u32 dev_timeout;
164 167 };
165 168
166 169 struct dfx_diff_registers {
···
196 189 	struct dentry *debug_root;
197 190 	struct dentry *qm_d;
198 191 	struct debugfs_file files[DEBUG_FILE_NUM];
192 + 	struct qm_dev_dfx dev_dfx;
199 193 	unsigned int *qm_last_words;
200 194 	/* ACC engines recoreding last regs */
201 195 	unsigned int *last_words;
···
531 523 int hisi_qm_start(struct hisi_qm *qm);
532 524 int hisi_qm_stop(struct hisi_qm *qm, enum qm_stop_reason r);
533 525 int hisi_qm_start_qp(struct hisi_qp *qp, unsigned long arg);
534 - int hisi_qm_stop_qp(struct hisi_qp *qp);
526 + void hisi_qm_stop_qp(struct hisi_qp *qp);
535 527 int hisi_qp_send(struct hisi_qp *qp, const void *msg);
536 528 void hisi_qm_debug_init(struct hisi_qm *qm);
537 529 void hisi_qm_debug_regs_clear(struct hisi_qm *qm);
+5 -3
tools/crypto/ccp/test_dbc.py
···
138 138
139 139     def test_authenticated_nonce(self) -> None:
140 140         """fetch authenticated nonce"""
141 +         get_nonce(self.d, None)
141 142         with self.assertRaises(OSError) as error:
142 143             get_nonce(self.d, self.signature)
143 -         self.assertEqual(error.exception.errno, 1)
144 +         self.assertEqual(error.exception.errno, 22)
144 145
145 146     def test_set_uid(self) -> None:
146 147         """set uid"""
148 +         get_nonce(self.d, None)
147 149         with self.assertRaises(OSError) as error:
148 150             set_uid(self.d, self.uid, self.signature)
149 151         self.assertEqual(error.exception.errno, 1)
···
154 152         """fetch a parameter"""
155 153         with self.assertRaises(OSError) as error:
156 154             process_param(self.d, PARAM_GET_SOC_PWR_CUR, self.signature)
157 -         self.assertEqual(error.exception.errno, 1)
155 +         self.assertEqual(error.exception.errno, 11)
158 156
159 157     def test_set_param(self) -> None:
160 158         """set a parameter"""
161 159         with self.assertRaises(OSError) as error:
162 160             process_param(self.d, PARAM_SET_PWR_CAP, self.signature, 1000)
163 -         self.assertEqual(error.exception.errno, 1)
161 +         self.assertEqual(error.exception.errno, 11)
164 162
165 163
166 164 class TestUnFusedSystem(DynamicBoostControlTest):