Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
"API:
- hwrng core now credits for low-quality RNG devices.

Algorithms:
- Optimisations for neon aes on arm/arm64.
- Add accelerated crc32_be on arm64.
- Add ffdheXYZ(dh) templates.
- Disallow hmac keys < 112 bits in FIPS mode.
- Add AVX assembly implementation for sm3 on x86.

Drivers:
- Add missing local_bh_disable calls for crypto_engine callback.
- Ensure BH is disabled in crypto_engine callback path.
- Fix zero length DMA mappings in ccree.
- Add synchronization between mailbox accesses in octeontx2.
- Add Xilinx SHA3 driver.
- Add support for the TDES IP available on sama7g5 SoC in atmel"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (137 commits)
crypto: xilinx - Turn SHA into a tristate and allow COMPILE_TEST
MAINTAINERS: update HPRE/SEC2/TRNG driver maintainers list
crypto: dh - Remove the unused function dh_safe_prime_dh_alg()
hwrng: nomadik - Change clk_disable to clk_disable_unprepare
crypto: arm64 - cleanup comments
crypto: qat - fix initialization of pfvf rts_map_msg structures
crypto: qat - fix initialization of pfvf cap_msg structures
crypto: qat - remove unneeded assignment
crypto: qat - disable registration of algorithms
crypto: hisilicon/qm - fix memset during queues clearing
crypto: xilinx: prevent probing on non-xilinx hardware
crypto: marvell/octeontx - Use swap() instead of open coding it
crypto: ccree - Fix use after free in cc_cipher_exit()
crypto: ccp - ccp_dmaengine_unregister release dma channels
crypto: octeontx2 - fix missing unlock
hwrng: cavium - fix NULL but dereferenced coccicheck error
crypto: cavium/nitrox - don't cast parameter in bit operations
crypto: vmx - add missing dependencies
MAINTAINERS: Add maintainer for Xilinx ZynqMP SHA3 driver
crypto: xilinx - Add Xilinx SHA3 driver
...

+5679 -1675
+94 -84
Documentation/ABI/testing/debugfs-hisi-hpre
The hunk realigns every existing What:/Date:/Contact:/Description: field in
this file (whitespace-only changes to the cluster[0-3]/regs, cluster_ctrl,
rdclr_en, current_qm, regs, qm/* and hpre_dfx/* entries) and inserts one new
entry ahead of the regs entry:

What:		/sys/kernel/debug/hisi_hpre/<bdf>/alg_qos
Date:		Jun 2021
Contact:	linux-crypto@vger.kernel.org
Description:	The <bdf> identifies the function (PF or VF). The HPRE
		driver supports configuring each function's QoS: in the
		host, write "<bdf> value" to alg_qos, e.g.
		"echo <bdf> value > alg_qos". The value ranges from 1 to
		1000, meaning 1/1000 to 1000/1000 of the total QoS.
		Reading alg_qos, e.g. "cat alg_qos", returns the
		function's QoS in the host and in a VM.
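As a hedged illustration of the alg_qos write format described above: the real
node lives under /sys/kernel/debug/hisi_hpre/<bdf>/alg_qos and needs root plus
the HPRE hardware, so the path below is a stand-in temp file (an assumption)
that lets the snippet run anywhere.

```shell
# Stand-in for /sys/kernel/debug/hisi_hpre/<bdf>/alg_qos (assumption: no
# HPRE hardware present, so we exercise the write format against a file).
ALG_QOS="$(mktemp)"

set_alg_qos() {
    bdf="$1" qos="$2"
    # The ABI accepts QoS weights 1..1000 (1/1000 .. 1000/1000 of total QoS).
    if [ "$qos" -lt 1 ] || [ "$qos" -gt 1000 ]; then
        echo "qos out of range: $qos" >&2
        return 1
    fi
    echo "$bdf $qos" > "$ALG_QOS"
}

set_alg_qos "0000:75:00.0" 500      # grant this function 500/1000 of the QoS
cat "$ALG_QOS"                      # analogous to: cat alg_qos
```

On real hardware the same two commands would be issued against the debugfs
node directly, as the ABI text shows.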
+78 -68
Documentation/ABI/testing/debugfs-hisi-sec
The hunk realigns every existing What:/Date:/Contact:/Description: field in
this file (whitespace-only changes to the clear_enable, current_qm, qm/* and
sec_dfx/* entries) and inserts one new entry between current_qm and
qm/qm_regs:

What:		/sys/kernel/debug/hisi_sec2/<bdf>/alg_qos
Date:		Jun 2021
Contact:	linux-crypto@vger.kernel.org
Description:	The <bdf> identifies the function (PF or VF). The SEC
		driver supports configuring each function's QoS: in the
		host, write "<bdf> value" to alg_qos, e.g.
		"echo <bdf> value > alg_qos". The value ranges from 1 to
		1000, meaning 1/1000 to 1000/1000 of the total QoS.
		Reading alg_qos, e.g. "cat alg_qos", returns the
		function's QoS in the host and in a VM.
+78 -68
Documentation/ABI/testing/debugfs-hisi-zip
The hunk realigns every existing What:/Date:/Contact:/Description: field in
this file (whitespace-only changes to the comp_core[01]/regs,
decomp_core[0-5]/regs, clear_enable, current_qm, qm/* and zip_dfx/* entries)
and inserts one new entry ahead of the qm/regs entry:

What:		/sys/kernel/debug/hisi_zip/<bdf>/alg_qos
Date:		Jun 2021
Contact:	linux-crypto@vger.kernel.org
Description:	The <bdf> identifies the function (PF or VF). The ZIP
		driver supports configuring each function's QoS: in the
		host, write "<bdf> value" to alg_qos, e.g.
		"echo <bdf> value > alg_qos". The value ranges from 1 to
		1000, meaning 1/1000 to 1000/1000 of the total QoS.
		Reading alg_qos, e.g. "cat alg_qos", returns the
		function's QoS in the host and in a VM.
+8 -3
MAINTAINERS
@@ HISILICON HIGH PERFORMANCE RSA ENGINE DRIVER (HPRE)
-M:	Zaibo Xu <xuzaibo@huawei.com>
+M:	Longfang Liu <liulongfang@huawei.com>
 L:	linux-crypto@vger.kernel.org
 S:	Maintained
 F:	Documentation/ABI/testing/debugfs-hisi-hpre

@@ HISILICON SECURITY ENGINE V2 DRIVER (SEC2)
-M:	Zaibo Xu <xuzaibo@huawei.com>
 M:	Kai Ye <yekai13@huawei.com>
+M:	Longfang Liu <liulongfang@huawei.com>
 L:	linux-crypto@vger.kernel.org
 S:	Maintained
 F:	Documentation/ABI/testing/debugfs-hisi-sec

@@ HISILICON TRUE RANDOM NUMBER GENERATOR V2 SUPPORT
-M:	Zaibo Xu <xuzaibo@huawei.com>
+M:	Weili Qian <qianweili@huawei.com>
 S:	Maintained
 F:	drivers/crypto/hisilicon/trng/trng.c

@@ (new entry, before XILINX EVENT MANAGEMENT DRIVER)
+XILINX ZYNQMP SHA3 DRIVER
+M:	Harsha <harsha.harsha@xilinx.com>
+S:	Maintained
+F:	drivers/crypto/xilinx/zynqmp-sha.c
+36 -17
arch/alpha/include/asm/xor.h
The RAID-5 XOR helper declarations ("Optimized RAID-5 checksumming functions
for alpha EV5 and EV6") gain parameter names and const/__restrict qualifiers;
the old prototypes took bare "unsigned long *" arguments throughout. New form:

extern void
xor_alpha_2(unsigned long bytes, unsigned long * __restrict p1,
	    const unsigned long * __restrict p2);
extern void
xor_alpha_3(unsigned long bytes, unsigned long * __restrict p1,
	    const unsigned long * __restrict p2,
	    const unsigned long * __restrict p3);
extern void
xor_alpha_4(unsigned long bytes, unsigned long * __restrict p1,
	    const unsigned long * __restrict p2,
	    const unsigned long * __restrict p3,
	    const unsigned long * __restrict p4);
extern void
xor_alpha_5(unsigned long bytes, unsigned long * __restrict p1,
	    const unsigned long * __restrict p2,
	    const unsigned long * __restrict p3,
	    const unsigned long * __restrict p4,
	    const unsigned long * __restrict p5);

extern void
xor_alpha_prefetch_2(unsigned long bytes, unsigned long * __restrict p1,
	    const unsigned long * __restrict p2);
extern void
xor_alpha_prefetch_3(unsigned long bytes, unsigned long * __restrict p1,
	    const unsigned long * __restrict p2,
	    const unsigned long * __restrict p3);
extern void
xor_alpha_prefetch_4(unsigned long bytes, unsigned long * __restrict p1,
	    const unsigned long * __restrict p2,
	    const unsigned long * __restrict p3,
	    const unsigned long * __restrict p4);
extern void
xor_alpha_prefetch_5(unsigned long bytes, unsigned long * __restrict p1,
	    const unsigned long * __restrict p2,
	    const unsigned long * __restrict p3,
	    const unsigned long * __restrict p4,
	    const unsigned long * __restrict p5);

(The inline asm bodies that follow are unchanged.)
+68 -47
arch/arm/crypto/aes-neonbs-core.S
···
758 758  ENDPROC(aesbs_cbc_decrypt)
759 759
760 760  	.macro		next_ctr, q
761   -  	vmov.32		\q\()h[1], r10
761 +  	vmov		\q\()h, r9, r10
762 762  	adds		r10, r10, #1
763   -  	vmov.32		\q\()h[0], r9
764 763  	adcs		r9, r9, #0
765   -  	vmov.32		\q\()l[1], r8
764 +  	vmov		\q\()l, r7, r8
766 765  	adcs		r8, r8, #0
767   -  	vmov.32		\q\()l[0], r7
768 766  	adc		r7, r7, #0
769 767  	vrev32.8	\q, \q
770 768  	.endm
771 769
772 770  /*
773 771   * aesbs_ctr_encrypt(u8 out[], u8 const in[], u8 const rk[],
774   -   *		     int rounds, int blocks, u8 ctr[], u8 final[])
772 +   *		     int rounds, int bytes, u8 ctr[])
775 773   */
776 774  ENTRY(aesbs_ctr_encrypt)
777 775  	mov	ip, sp
778 776  	push	{r4-r10, lr}
779 777
780   -  	ldm	ip, {r5-r7}		// load args 4-6
781   -  	teq	r7, #0
782   -  	addne	r5, r5, #1		// one extra block if final != 0
783   -
778 +  	ldm	ip, {r5, r6}		// load args 4-5
784 779  	vld1.8	{q0}, [r6]		// load counter
785 780  	vrev32.8	q1, q0
786 781  	vmov	r9, r10, d3
···
787 792  	adc	r7, r7, #0
788 793
789 794  99:	vmov	q1, q0
790   -  	vmov	q2, q0
791   -  	vmov	q3, q0
792   -  	vmov	q4, q0
793   -  	vmov	q5, q0
794   -  	vmov	q6, q0
795   -  	vmov	q7, q0
796   -
797   -  	adr	ip, 0f
798 795  	sub	lr, r5, #1
799   -  	and	lr, lr, #7
800   -  	cmp	r5, #8
801   -  	sub	ip, ip, lr, lsl #5
802   -  	sub	ip, ip, lr, lsl #2
803   -  	movlt	pc, ip			// computed goto if blocks < 8
796 +  	vmov	q2, q0
797 +  	adr	ip, 0f
798 +  	vmov	q3, q0
799 +  	and	lr, lr, #112
800 +  	vmov	q4, q0
801 +  	cmp	r5, #112
802 +  	vmov	q5, q0
803 +  	sub	ip, ip, lr, lsl #1
804 +  	vmov	q6, q0
805 +  	add	ip, ip, lr, lsr #2
806 +  	vmov	q7, q0
807 +  	movle	pc, ip			// computed goto if bytes < 112
804 808
805 809  	next_ctr	q1
806 810  	next_ctr	q2
···
814 820  	bl	aesbs_encrypt8
815 821
816 822  	adr	ip, 1f
817   -  	and	lr, r5, #7
818   -  	cmp	r5, #8
819   -  	movgt	r4, #0
820   -  	ldrle	r4, [sp, #40]		// load final in the last round
821   -  	sub	ip, ip, lr, lsl #2
822   -  	movlt	pc, ip			// computed goto if blocks < 8
823 +  	sub	lr, r5, #1
824 +  	cmp	r5, #128
825 +  	bic	lr, lr, #15
826 +  	ands	r4, r5, #15		// preserves C flag
827 +  	teqcs	r5, r5			// set Z flag if not last iteration
828 +  	sub	ip, ip, lr, lsr #2
829 +  	rsb	r4, r4, #16
830 +  	movcc	pc, ip			// computed goto if bytes < 128
823 831
824 832  	vld1.8	{q8}, [r1]!
825 833  	vld1.8	{q9}, [r1]!
···
830 834  	vld1.8	{q12}, [r1]!
831 835  	vld1.8	{q13}, [r1]!
832 836  	vld1.8	{q14}, [r1]!
833   -  	teq	r4, #0			// skip last block if 'final'
834   -  1:	bne	2f
837 +  1:	subne	r1, r1, r4
835 838  	vld1.8	{q15}, [r1]!
836 839
837   -  2:	adr	ip, 3f
838   -  	cmp	r5, #8
839   -  	sub	ip, ip, lr, lsl #3
840   -  	movlt	pc, ip			// computed goto if blocks < 8
840 +  	add	ip, ip, #2f - 1b
841 841
842 842  	veor	q0, q0, q8
843   -  	vst1.8	{q0}, [r0]!
844 843  	veor	q1, q1, q9
845   -  	vst1.8	{q1}, [r0]!
846 844  	veor	q4, q4, q10
847   -  	vst1.8	{q4}, [r0]!
848 845  	veor	q6, q6, q11
849   -  	vst1.8	{q6}, [r0]!
850 846  	veor	q3, q3, q12
851   -  	vst1.8	{q3}, [r0]!
852 847  	veor	q7, q7, q13
853   -  	vst1.8	{q7}, [r0]!
854 848  	veor	q2, q2, q14
849 +  	bne	3f
850 +  	veor	q5, q5, q15
851 +
852 +  	movcc	pc, ip			// computed goto if bytes < 128
853 +
854 +  	vst1.8	{q0}, [r0]!
855 +  	vst1.8	{q1}, [r0]!
856 +  	vst1.8	{q4}, [r0]!
857 +  	vst1.8	{q6}, [r0]!
858 +  	vst1.8	{q3}, [r0]!
859 +  	vst1.8	{q7}, [r0]!
855 860  	vst1.8	{q2}, [r0]!
856   -  	teq	r4, #0			// skip last block if 'final'
857   -  	W(bne)	5f
858   -  3:	veor	q5, q5, q15
861 +  2:	subne	r0, r0, r4
859 862  	vst1.8	{q5}, [r0]!
860 863
861   -  4:	next_ctr	q0
864 +  	next_ctr	q0
862 865
863   -  	subs	r5, r5, #8
866 +  	subs	r5, r5, #128
864 867  	bgt	99b
865 868
866 869  	vst1.8	{q0}, [r6]
867 870  	pop	{r4-r10, pc}
868 871
869   -  5:	vst1.8	{q5}, [r4]
870   -  	b	4b
872 +  3:	adr	lr, .Lpermute_table + 16
873 +  	cmp	r5, #16			// Z flag remains cleared
874 +  	sub	lr, lr, r4
875 +  	vld1.8	{q8-q9}, [lr]
876 +  	vtbl.8	d16, {q5}, d16
877 +  	vtbl.8	d17, {q5}, d17
878 +  	veor	q5, q8, q15
879 +  	bcc	4f			// have to reload prev if R5 < 16
880 +  	vtbx.8	d10, {q2}, d18
881 +  	vtbx.8	d11, {q2}, d19
882 +  	mov	pc, ip			// branch back to VST sequence
883 +
884 +  4:	sub	r0, r0, r4
885 +  	vshr.s8	q9, q9, #7		// create mask for VBIF
886 +  	vld1.8	{q8}, [r0]		// reload
887 +  	vbif	q5, q8, q9
888 +  	vst1.8	{q5}, [r0]
889 +  	pop	{r4-r10, pc}
871 890  ENDPROC(aesbs_ctr_encrypt)
891 +
892 +  	.align	6
893 +  .Lpermute_table:
894 +  	.byte	0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
895 +  	.byte	0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
896 +  	.byte	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07
897 +  	.byte	0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f
898 +  	.byte	0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
899 +  	.byte	0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
872 900
873 901  	.macro		next_tweak, out, in, const, tmp
874 902  	vshr.s64	\tmp, \in, #63
···
908 888   * aesbs_xts_decrypt(u8 out[], u8 const in[], u8 const rk[], int rounds,
909 889   *		     int blocks, u8 iv[], int reorder_last_tweak)
910 890   */
891 +  	.align	6
911 892  __xts_prepare8:
912 893  	vld1.8	{q14}, [r7]		// load iv
913 894  	vmov.i32	d30, #0x87	// compose tweak mask vector
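The `.Lpermute_table` added at the end of the CTR routine drives the `vtbl`/`vtbx` fix-up for a final partial block: NEON `vtbl` yields a zero byte for any index >= 16, so reading 16 index bytes starting at offset `rem` into the table moves the `rem` live bytes to the end of a register and zero-fills the front. A scalar userspace model of that lookup (the table layout is copied from the diff above; `vtbl16()` and `shift_tail()` are illustrative stand-ins, not kernel functions):

```c
#include <assert.h>
#include <stdint.h>

/* 48-byte table: 16 x 0xff, indices 0..15, 16 x 0xff (as in .Lpermute_table) */
static const uint8_t permute_table[48] = {
	0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
	0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
	0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
	0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
	0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
};

/* Scalar model of NEON vtbl: out-of-range index bytes produce zero. */
static void vtbl16(uint8_t out[16], const uint8_t src[16], const uint8_t idx[16])
{
	for (int i = 0; i < 16; i++)
		out[i] = idx[i] < 16 ? src[idx[i]] : 0;
}

/* Move the first rem bytes of a block to the end of the register,
 * zero-filling the front, by indexing the table at offset rem. */
static void shift_tail(uint8_t out[16], const uint8_t src[16], int rem)
{
	vtbl16(out, src, &permute_table[rem]);
}
```

With `rem = 5`, the first eleven output bytes are zero and the five input bytes land at the end of the block, which is exactly the alignment the overlapping final store relies on.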
+14 -21
arch/arm/crypto/aes-neonbs-glue.c
···
37 37   					  int rounds, int blocks, u8 iv[]);
38 38
39 39   asmlinkage void aesbs_ctr_encrypt(u8 out[], u8 const in[], u8 const rk[],
40   -  				  int rounds, int blocks, u8 ctr[], u8 final[]);
40 +  				  int rounds, int blocks, u8 ctr[]);
41 41
42 42   asmlinkage void aesbs_xts_encrypt(u8 out[], u8 const in[], u8 const rk[],
43 43   				  int rounds, int blocks, u8 iv[], int);
···
243 243  	err = skcipher_walk_virt(&walk, req, false);
244 244
245 245  	while (walk.nbytes > 0) {
246   -  		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
247   -  		u8 *final = (walk.total % AES_BLOCK_SIZE) ? buf : NULL;
246 +  		const u8 *src = walk.src.virt.addr;
247 +  		u8 *dst = walk.dst.virt.addr;
248 +  		int bytes = walk.nbytes;
248 249
249   -  		if (walk.nbytes < walk.total) {
250   -  			blocks = round_down(blocks,
251   -  					    walk.stride / AES_BLOCK_SIZE);
252   -  			final = NULL;
253   -  		}
250 +  		if (unlikely(bytes < AES_BLOCK_SIZE))
251 +  			src = dst = memcpy(buf + sizeof(buf) - bytes,
252 +  					   src, bytes);
253 +  		else if (walk.nbytes < walk.total)
254 +  			bytes &= ~(8 * AES_BLOCK_SIZE - 1);
254 255
255 256  		kernel_neon_begin();
256   -  		aesbs_ctr_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
257   -  				  ctx->rk, ctx->rounds, blocks, walk.iv, final);
257 +  		aesbs_ctr_encrypt(dst, src, ctx->rk, ctx->rounds, bytes, walk.iv);
258 258  		kernel_neon_end();
259 259
260   -  		if (final) {
261   -  			u8 *dst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
262   -  			u8 *src = walk.src.virt.addr + blocks * AES_BLOCK_SIZE;
260 +  		if (unlikely(bytes < AES_BLOCK_SIZE))
261 +  			memcpy(walk.dst.virt.addr,
262 +  			       buf + sizeof(buf) - bytes, bytes);
263 263
264   -  			crypto_xor_cpy(dst, src, final,
265   -  				       walk.total % AES_BLOCK_SIZE);
266   -
267   -  			err = skcipher_walk_done(&walk, 0);
268   -  			break;
269   -  		}
270   -  		err = skcipher_walk_done(&walk,
271   -  					 walk.nbytes - blocks * AES_BLOCK_SIZE);
264 +  		err = skcipher_walk_done(&walk, walk.nbytes - bytes);
272 265  	}
273 266
274 267  	return err;
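The rewritten walk loop above rounds intermediate walks down to whole 8-block strides and stages a sub-block tail at the *end* of a 16-byte stack buffer, so the assembly can finish with a single overlapping 16-byte load/store that never runs past the buffer. A small userspace model of that bookkeeping (function names are illustrative, not kernel APIs):

```c
#include <assert.h>
#include <string.h>

#define AES_BLOCK_SIZE 16

/* How many bytes the NEON routine is asked to process in one call. */
static int ctr_call_bytes(int nbytes, int total)
{
	if (nbytes < AES_BLOCK_SIZE)
		return nbytes;		/* partial tail, staged separately */
	if (nbytes < total)		/* more data follows this walk */
		return nbytes & ~(8 * AES_BLOCK_SIZE - 1);
	return nbytes;			/* last walk: asm handles the tail */
}

/* Stage a partial tail at the END of buf so a 16-byte access ending at
 * buf + sizeof(buf) stays in bounds (mirrors buf + sizeof(buf) - bytes). */
static unsigned char *stage_tail(unsigned char buf[AES_BLOCK_SIZE],
				 const unsigned char *src, int bytes)
{
	return memcpy(buf + AES_BLOCK_SIZE - bytes, src, bytes);
}
```

For example, a 200-byte walk in the middle of a request is rounded down to 128 bytes (one full 8-block stride), while the final walk is passed through whole.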
+28 -14
arch/arm/include/asm/xor.h
···
44 44   	: "0" (dst), "r" (a1), "r" (a2), "r" (a3), "r" (a4))
45 45
46 46   static void
47   -  xor_arm4regs_2(unsigned long bytes, unsigned long *p1, unsigned long *p2)
47 +  xor_arm4regs_2(unsigned long bytes, unsigned long * __restrict p1,
48 +  	       const unsigned long * __restrict p2)
48 49   {
49 50   	unsigned int lines = bytes / sizeof(unsigned long) / 4;
50 51   	register unsigned int a1 __asm__("r4");
···
65 64   }
66 65
67 66   static void
68   -  xor_arm4regs_3(unsigned long bytes, unsigned long *p1, unsigned long *p2,
69   -  	       unsigned long *p3)
67 +  xor_arm4regs_3(unsigned long bytes, unsigned long * __restrict p1,
68 +  	       const unsigned long * __restrict p2,
69 +  	       const unsigned long * __restrict p3)
70 70   {
71 71   	unsigned int lines = bytes / sizeof(unsigned long) / 4;
72 72   	register unsigned int a1 __asm__("r4");
···
88 86   }
89 87
90 88   static void
91   -  xor_arm4regs_4(unsigned long bytes, unsigned long *p1, unsigned long *p2,
92   -  	       unsigned long *p3, unsigned long *p4)
89 +  xor_arm4regs_4(unsigned long bytes, unsigned long * __restrict p1,
90 +  	       const unsigned long * __restrict p2,
91 +  	       const unsigned long * __restrict p3,
92 +  	       const unsigned long * __restrict p4)
93 93   {
94 94   	unsigned int lines = bytes / sizeof(unsigned long) / 2;
95 95   	register unsigned int a1 __asm__("r8");
···
109 105  }
110 106
111 107  static void
112   -  xor_arm4regs_5(unsigned long bytes, unsigned long *p1, unsigned long *p2,
113   -  	       unsigned long *p3, unsigned long *p4, unsigned long *p5)
108 +  xor_arm4regs_5(unsigned long bytes, unsigned long * __restrict p1,
109 +  	       const unsigned long * __restrict p2,
110 +  	       const unsigned long * __restrict p3,
111 +  	       const unsigned long * __restrict p4,
112 +  	       const unsigned long * __restrict p5)
114 113  {
115 114  	unsigned int lines = bytes / sizeof(unsigned long) / 2;
116 115  	register unsigned int a1 __asm__("r8");
···
153 146  extern struct xor_block_template const xor_block_neon_inner;
154 147
155 148  static void
156   -  xor_neon_2(unsigned long bytes, unsigned long *p1, unsigned long *p2)
149 +  xor_neon_2(unsigned long bytes, unsigned long * __restrict p1,
150 +  	   const unsigned long * __restrict p2)
157 151  {
158 152  	if (in_interrupt()) {
159 153  		xor_arm4regs_2(bytes, p1, p2);
···
166 158  }
167 159
168 160  static void
169   -  xor_neon_3(unsigned long bytes, unsigned long *p1, unsigned long *p2,
170   -  	   unsigned long *p3)
161 +  xor_neon_3(unsigned long bytes, unsigned long * __restrict p1,
162 +  	   const unsigned long * __restrict p2,
163 +  	   const unsigned long * __restrict p3)
171 164  {
172 165  	if (in_interrupt()) {
173 166  		xor_arm4regs_3(bytes, p1, p2, p3);
···
180 171  }
181 172
182 173  static void
183   -  xor_neon_4(unsigned long bytes, unsigned long *p1, unsigned long *p2,
184   -  	   unsigned long *p3, unsigned long *p4)
174 +  xor_neon_4(unsigned long bytes, unsigned long * __restrict p1,
175 +  	   const unsigned long * __restrict p2,
176 +  	   const unsigned long * __restrict p3,
177 +  	   const unsigned long * __restrict p4)
185 178  {
186 179  	if (in_interrupt()) {
187 180  		xor_arm4regs_4(bytes, p1, p2, p3, p4);
···
195 184  }
196 185
197 186  static void
198   -  xor_neon_5(unsigned long bytes, unsigned long *p1, unsigned long *p2,
199   -  	   unsigned long *p3, unsigned long *p4, unsigned long *p5)
187 +  xor_neon_5(unsigned long bytes, unsigned long * __restrict p1,
188 +  	   const unsigned long * __restrict p2,
189 +  	   const unsigned long * __restrict p3,
190 +  	   const unsigned long * __restrict p4,
191 +  	   const unsigned long * __restrict p5)
200 192  {
201 193  	if (in_interrupt()) {
202 194  		xor_arm4regs_5(bytes, p1, p2, p3, p4, p5);
+3 -9
arch/arm/lib/xor-neon.c
···
17 17   /*
18 18    * Pull in the reference implementations while instructing GCC (through
19 19    * -ftree-vectorize) to attempt to exploit implicit parallelism and emit
20   -   * NEON instructions.
20 +   * NEON instructions. Clang does this by default at O2 so no pragma is
21 +   * needed.
21 22    */
22   -  #if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)
23 +  #ifdef CONFIG_CC_IS_GCC
23 24   #pragma GCC optimize "tree-vectorize"
24   -  #else
25   -  /*
26   -   * While older versions of GCC do not generate incorrect code, they fail to
27   -   * recognize the parallel nature of these functions, and emit plain ARM code,
28   -   * which is known to be slower than the optimized ARM code in asm-arm/xor.h.
29   -   */
30   -  #warning This code requires at least version 4.6 of GCC
31 25   #endif
32 26
33 27   #pragma GCC diagnostic ignored "-Wunused-variable"
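The same pattern can be reproduced in a standalone file: guard the pragma so it only applies to GCC (Clang auto-vectorizes at -O2 and ignores this pragma), and give the loop `restrict`-qualified pointers so the vectorizer may assume the buffers do not alias. A minimal userspace sketch:

```c
#include <stddef.h>

/* Ask GCC's tree vectorizer to SIMD-ify the loop below; Clang needs no
 * pragma, so mirror the kernel's compiler check with a preprocessor guard. */
#if defined(__GNUC__) && !defined(__clang__)
#pragma GCC optimize "tree-vectorize"
#endif

/* Reference XOR loop in the style of the xor_8regs templates: with
 * restrict, the compiler may vectorize without runtime alias checks. */
void xor_blocks(size_t n, unsigned long *restrict p1,
		const unsigned long *restrict p2)
{
	for (size_t i = 0; i < n; i++)
		p1[i] ^= p2[i];
}
```

Compiling with `-O2` and `-fopt-info-vec` (GCC) or `-Rpass=loop-vectorize` (Clang) shows whether the loop was actually vectorized.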
+1 -1
arch/arm64/crypto/Kconfig
···
45 45   	tristate "SM3 digest algorithm (ARMv8.2 Crypto Extensions)"
46 46   	depends on KERNEL_MODE_NEON
47 47   	select CRYPTO_HASH
48   -  	select CRYPTO_SM3
48 +  	select CRYPTO_LIB_SM3
49 49
50 50   config CRYPTO_SM4_ARM64_CE
51 51   	tristate "SM4 symmetric cipher (ARMv8.2 Crypto Extensions)"
+8 -14
arch/arm64/crypto/aes-glue.c
···
24 24   #ifdef USE_V8_CRYPTO_EXTENSIONS
25 25   #define MODE			"ce"
26 26   #define PRIO			300
27   -  #define STRIDE		5
28 27   #define aes_expandkey		ce_aes_expandkey
29 28   #define aes_ecb_encrypt	ce_aes_ecb_encrypt
30 29   #define aes_ecb_decrypt	ce_aes_ecb_decrypt
···
41 42   #else
42 43   #define MODE			"neon"
43 44   #define PRIO			200
44   -  #define STRIDE		4
45 45   #define aes_ecb_encrypt	neon_aes_ecb_encrypt
46 46   #define aes_ecb_decrypt	neon_aes_ecb_decrypt
47 47   #define aes_cbc_encrypt	neon_aes_cbc_encrypt
···
87 89   				int rounds, int bytes, u8 const iv[]);
88 90
89 91   asmlinkage void aes_ctr_encrypt(u8 out[], u8 const in[], u32 const rk[],
90   -  				int rounds, int bytes, u8 ctr[], u8 finalbuf[]);
92 +  				int rounds, int bytes, u8 ctr[]);
91 93
92 94   asmlinkage void aes_xts_encrypt(u8 out[], u8 const in[], u32 const rk1[],
93 95   				int rounds, int bytes, u32 const rk2[], u8 iv[],
···
456 458  		unsigned int nbytes = walk.nbytes;
457 459  		u8 *dst = walk.dst.virt.addr;
458 460  		u8 buf[AES_BLOCK_SIZE];
459   -  		unsigned int tail;
460 461
461 462  		if (unlikely(nbytes < AES_BLOCK_SIZE))
462   -  			src = memcpy(buf, src, nbytes);
463 +  			src = dst = memcpy(buf + sizeof(buf) - nbytes,
464 +  					   src, nbytes);
463 465  		else if (nbytes < walk.total)
464 466  			nbytes &= ~(AES_BLOCK_SIZE - 1);
465 467
466 468  		kernel_neon_begin();
467 469  		aes_ctr_encrypt(dst, src, ctx->key_enc, rounds, nbytes,
468   -  				walk.iv, buf);
470 +  				walk.iv);
469 471  		kernel_neon_end();
470 472
471   -  		tail = nbytes % (STRIDE * AES_BLOCK_SIZE);
472   -  		if (tail > 0 && tail < AES_BLOCK_SIZE)
473   -  			/*
474   -  			 * The final partial block could not be returned using
475   -  			 * an overlapping store, so it was passed via buf[]
476   -  			 * instead.
477   -  			 */
478   -  			memcpy(dst + nbytes - tail, buf, tail);
473 +  		if (unlikely(nbytes < AES_BLOCK_SIZE))
474 +  			memcpy(walk.dst.virt.addr,
475 +  			       buf + sizeof(buf) - nbytes, nbytes);
479 476
480 477  		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
481 478  	}
···
976 983  module_init(aes_init);
977 984  EXPORT_SYMBOL(neon_aes_ecb_encrypt);
978 985  EXPORT_SYMBOL(neon_aes_cbc_encrypt);
986 +  EXPORT_SYMBOL(neon_aes_ctr_encrypt);
979 987  EXPORT_SYMBOL(neon_aes_xts_encrypt);
980 988  EXPORT_SYMBOL(neon_aes_xts_decrypt);
981 989  #endif
+13 -5
arch/arm64/crypto/aes-modes.S
···
321 321
322 322  /*
323 323   * aes_ctr_encrypt(u8 out[], u8 const in[], u8 const rk[], int rounds,
324   -   *		   int bytes, u8 ctr[], u8 finalbuf[])
324 +   *		   int bytes, u8 ctr[])
325 325   */
326 326
327 327  AES_FUNC_START(aes_ctr_encrypt)
···
414 414  .Lctrtail:
415 415  	/* XOR up to MAX_STRIDE * 16 - 1 bytes of in/output with v0 ... v3/v4 */
416 416  	mov		x16, #16
417   -  	ands		x13, x4, #0xf
418   -  	csel		x13, x13, x16, ne
417 +  	ands		x6, x4, #0xf
418 +  	csel		x13, x6, x16, ne
419 419
420 420  ST5(	cmp		w4, #64 - (MAX_STRIDE << 4)	)
421 421  ST5(	csel		x14, x16, xzr, gt		)
···
424 424  	cmp		w4, #32 - (MAX_STRIDE << 4)
425 425  	csel		x16, x16, xzr, gt
426 426  	cmp		w4, #16 - (MAX_STRIDE << 4)
427   -  	ble		.Lctrtail1x
428 427
429 428  	adr_l		x12, .Lcts_permute_table
430 429  	add		x12, x12, x13
430 +  	ble		.Lctrtail1x
431 431
432 432  ST5(	ld1		{v5.16b}, [x1], x14		)
433 433  	ld1		{v6.16b}, [x1], x15
···
462 462  	b		.Lctrout
463 463
464 464  .Lctrtail1x:
465   -  	csel		x0, x0, x6, eq	// use finalbuf if less than a full block
465 +  	sub		x7, x6, #16
466 +  	csel		x6, x6, x7, eq
467 +  	add		x1, x1, x6
468 +  	add		x0, x0, x6
466 469  	ld1		{v5.16b}, [x1]
470 +  	ld1		{v6.16b}, [x0]
467 471  ST5(	mov		v3.16b, v4.16b	)
468 472  	encrypt_block	v3, w3, x2, x8, w7
473 +  	ld1		{v10.16b-v11.16b}, [x12]
474 +  	tbl		v3.16b, {v3.16b}, v10.16b
475 +  	sshr		v11.16b, v11.16b, #7
469 476  	eor		v5.16b, v5.16b, v3.16b
477 +  	bif		v5.16b, v6.16b, v11.16b
470 478  	st1		{v5.16b}, [x0]
471 479  	b		.Lctrout
472 480  AES_FUNC_END(aes_ctr_encrypt)
+65 -199
arch/arm64/crypto/aes-neonbs-core.S
···
735 735   *		     int blocks, u8 iv[])
736 736   */
737 737  SYM_FUNC_START_LOCAL(__xts_crypt8)
738   -  	mov		x6, #1
739   -  	lsl		x6, x6, x23
740   -  	subs		w23, w23, #8
741   -  	csel		x23, x23, xzr, pl
742   -  	csel		x6, x6, xzr, mi
738 +  	movi		v18.2s, #0x1
739 +  	movi		v19.2s, #0x87
740 +  	uzp1		v18.4s, v18.4s, v19.4s
743 741
744   -  	ld1		{v0.16b}, [x20], #16
745   -  	next_tweak	v26, v25, v30, v31
742 +  	ld1		{v0.16b-v3.16b}, [x1], #64
743 +  	ld1		{v4.16b-v7.16b}, [x1], #64
744 +
745 +  	next_tweak	v26, v25, v18, v19
746 +  	next_tweak	v27, v26, v18, v19
747 +  	next_tweak	v28, v27, v18, v19
748 +  	next_tweak	v29, v28, v18, v19
749 +  	next_tweak	v30, v29, v18, v19
750 +  	next_tweak	v31, v30, v18, v19
751 +  	next_tweak	v16, v31, v18, v19
752 +  	next_tweak	v17, v16, v18, v19
753 +
746 754  	eor		v0.16b, v0.16b, v25.16b
747   -  	tbnz		x6, #1, 0f
748   -
749   -  	ld1		{v1.16b}, [x20], #16
750   -  	next_tweak	v27, v26, v30, v31
751 755  	eor		v1.16b, v1.16b, v26.16b
752   -  	tbnz		x6, #2, 0f
753   -
754   -  	ld1		{v2.16b}, [x20], #16
755   -  	next_tweak	v28, v27, v30, v31
756 756  	eor		v2.16b, v2.16b, v27.16b
757   -  	tbnz		x6, #3, 0f
758   -
759   -  	ld1		{v3.16b}, [x20], #16
760   -  	next_tweak	v29, v28, v30, v31
761 757  	eor		v3.16b, v3.16b, v28.16b
762   -  	tbnz		x6, #4, 0f
763   -
764   -  	ld1		{v4.16b}, [x20], #16
765   -  	str		q29, [sp, #.Lframe_local_offset]
766 758  	eor		v4.16b, v4.16b, v29.16b
767   -  	next_tweak	v29, v29, v30, v31
768   -  	tbnz		x6, #5, 0f
759 +  	eor		v5.16b, v5.16b, v30.16b
760 +  	eor		v6.16b, v6.16b, v31.16b
761 +  	eor		v7.16b, v7.16b, v16.16b
769 762
770   -  	ld1		{v5.16b}, [x20], #16
771   -  	str		q29, [sp, #.Lframe_local_offset + 16]
772   -  	eor		v5.16b, v5.16b, v29.16b
773   -  	next_tweak	v29, v29, v30, v31
774   -  	tbnz		x6, #6, 0f
763 +  	stp		q16, q17, [sp, #16]
775 764
776   -  	ld1		{v6.16b}, [x20], #16
777   -  	str		q29, [sp, #.Lframe_local_offset + 32]
778   -  	eor		v6.16b, v6.16b, v29.16b
779   -  	next_tweak	v29, v29, v30, v31
780   -  	tbnz		x6, #7, 0f
781   -
782   -  	ld1		{v7.16b}, [x20], #16
783   -  	str		q29, [sp, #.Lframe_local_offset + 48]
784   -  	eor		v7.16b, v7.16b, v29.16b
785   -  	next_tweak	v29, v29, v30, v31
786   -
787   -  0:	mov		bskey, x21
788   -  	mov		rounds, x22
765 +  	mov		bskey, x2
766 +  	mov		rounds, x3
789 767  	br		x16
790 768  SYM_FUNC_END(__xts_crypt8)
791 769
792 770  	.macro		__xts_crypt, do8, o0, o1, o2, o3, o4, o5, o6, o7
793   -  	frame_push	6, 64
771 +  	stp		x29, x30, [sp, #-48]!
772 +  	mov		x29, sp
794 773
795   -  	mov		x19, x0
796   -  	mov		x20, x1
797   -  	mov		x21, x2
798   -  	mov		x22, x3
799   -  	mov		x23, x4
800   -  	mov		x24, x5
774 +  	ld1		{v25.16b}, [x5]
801 775
802   -  	movi		v30.2s, #0x1
803   -  	movi		v25.2s, #0x87
804   -  	uzp1		v30.4s, v30.4s, v25.4s
805   -  	ld1		{v25.16b}, [x24]
806   -
807   -  99:	adr		x16, \do8
776 +  0:	adr		x16, \do8
808 777  	bl		__xts_crypt8
809 778
810   -  	ldp		q16, q17, [sp, #.Lframe_local_offset]
811   -  	ldp		q18, q19, [sp, #.Lframe_local_offset + 32]
779 +  	eor		v16.16b, \o0\().16b, v25.16b
780 +  	eor		v17.16b, \o1\().16b, v26.16b
781 +  	eor		v18.16b, \o2\().16b, v27.16b
782 +  	eor		v19.16b, \o3\().16b, v28.16b
812 783
813   -  	eor		\o0\().16b, \o0\().16b, v25.16b
814   -  	eor		\o1\().16b, \o1\().16b, v26.16b
815   -  	eor		\o2\().16b, \o2\().16b, v27.16b
816   -  	eor		\o3\().16b, \o3\().16b, v28.16b
784 +  	ldp		q24, q25, [sp, #16]
817 785
818   -  	st1		{\o0\().16b}, [x19], #16
819   -  	mov		v25.16b, v26.16b
820   -  	tbnz		x6, #1, 1f
821   -  	st1		{\o1\().16b}, [x19], #16
822   -  	mov		v25.16b, v27.16b
823   -  	tbnz		x6, #2, 1f
824   -  	st1		{\o2\().16b}, [x19], #16
825   -  	mov		v25.16b, v28.16b
826   -  	tbnz		x6, #3, 1f
827   -  	st1		{\o3\().16b}, [x19], #16
828   -  	mov		v25.16b, v29.16b
829   -  	tbnz		x6, #4, 1f
786 +  	eor		v20.16b, \o4\().16b, v29.16b
787 +  	eor		v21.16b, \o5\().16b, v30.16b
788 +  	eor		v22.16b, \o6\().16b, v31.16b
789 +  	eor		v23.16b, \o7\().16b, v24.16b
830 790
831   -  	eor		\o4\().16b, \o4\().16b, v16.16b
832   -  	eor		\o5\().16b, \o5\().16b, v17.16b
833   -  	eor		\o6\().16b, \o6\().16b, v18.16b
834   -  	eor		\o7\().16b, \o7\().16b, v19.16b
791 +  	st1		{v16.16b-v19.16b}, [x0], #64
792 +  	st1		{v20.16b-v23.16b}, [x0], #64
835 793
836   -  	st1		{\o4\().16b}, [x19], #16
837   -  	tbnz		x6, #5, 1f
838   -  	st1		{\o5\().16b}, [x19], #16
839   -  	tbnz		x6, #6, 1f
840   -  	st1		{\o6\().16b}, [x19], #16
841   -  	tbnz		x6, #7, 1f
842   -  	st1		{\o7\().16b}, [x19], #16
794 +  	subs		x4, x4, #8
795 +  	b.gt		0b
843 796
844   -  	cbz		x23, 1f
845   -  	st1		{v25.16b}, [x24]
846   -
847   -  	b		99b
848   -
849   -  1:	st1		{v25.16b}, [x24]
850   -  	frame_pop
797 +  	st1		{v25.16b}, [x5]
798 +  	ldp		x29, x30, [sp], #48
851 799  	ret
852 800  	.endm
853 801
···
817 869
818 870  /*
819 871   * aesbs_ctr_encrypt(u8 out[], u8 const in[], u8 const rk[],
820   -   *		     int rounds, int blocks, u8 iv[], u8 final[])
872 +   *		     int rounds, int blocks, u8 iv[])
821 873   */
822 874  SYM_FUNC_START(aesbs_ctr_encrypt)
823   -  	frame_push	8
875 +  	stp		x29, x30, [sp, #-16]!
876 +  	mov		x29, sp
824 877
825   -  	mov		x19, x0
826   -  	mov		x20, x1
827   -  	mov		x21, x2
828   -  	mov		x22, x3
829   -  	mov		x23, x4
830   -  	mov		x24, x5
831   -  	mov		x25, x6
832   -
833   -  	cmp		x25, #0
834   -  	cset		x26, ne
835   -  	add		x23, x23, x26	// do one extra block if final
836   -
837   -  	ldp		x7, x8, [x24]
838   -  	ld1		{v0.16b}, [x24]
878 +  	ldp		x7, x8, [x5]
879 +  	ld1		{v0.16b}, [x5]
839 880  CPU_LE(	rev		x7, x7		)
840 881  CPU_LE(	rev		x8, x8		)
841 882  	adds		x8, x8, #1
842 883  	adc		x7, x7, xzr
843 884
844   -  99:	mov		x9, #1
845   -  	lsl		x9, x9, x23
846   -  	subs		w23, w23, #8
847   -  	csel		x23, x23, xzr, pl
848   -  	csel		x9, x9, xzr, le
849   -
850   -  	tbnz		x9, #1, 0f
851   -  	next_ctr	v1
852   -  	tbnz		x9, #2, 0f
885 +  0:	next_ctr	v1
853 886  	next_ctr	v2
854   -  	tbnz		x9, #3, 0f
855 887  	next_ctr	v3
856   -  	tbnz		x9, #4, 0f
857 888  	next_ctr	v4
858   -  	tbnz		x9, #5, 0f
859 889  	next_ctr	v5
860   -  	tbnz		x9, #6, 0f
861 890  	next_ctr	v6
862   -  	tbnz		x9, #7, 0f
863 891  	next_ctr	v7
864 892
865   -  0:	mov		bskey, x21
866   -  	mov		rounds, x22
893 +  	mov		bskey, x2
894 +  	mov		rounds, x3
867 895  	bl		aesbs_encrypt8
868 896
869   -  	lsr		x9, x9, x26	// disregard the extra block
870   -  	tbnz		x9, #0, 0f
897 +  	ld1		{ v8.16b-v11.16b}, [x1], #64
898 +  	ld1		{v12.16b-v15.16b}, [x1], #64
871 899
872   -  	ld1		{v8.16b}, [x20], #16
873   -  	eor		v0.16b, v0.16b, v8.16b
874   -  	st1		{v0.16b}, [x19], #16
875   -  	tbnz		x9, #1, 1f
900 +  	eor		v8.16b, v0.16b, v8.16b
901 +  	eor		v9.16b, v1.16b, v9.16b
902 +  	eor		v10.16b, v4.16b, v10.16b
903 +  	eor		v11.16b, v6.16b, v11.16b
904 +  	eor		v12.16b, v3.16b, v12.16b
905 +  	eor		v13.16b, v7.16b, v13.16b
906 +  	eor		v14.16b, v2.16b, v14.16b
907 +  	eor		v15.16b, v5.16b, v15.16b
876 908
877   -  	ld1		{v9.16b}, [x20], #16
878   -  	eor		v1.16b, v1.16b, v9.16b
879   -  	st1		{v1.16b}, [x19], #16
880   -  	tbnz		x9, #2, 2f
909 +  	st1		{ v8.16b-v11.16b}, [x0], #64
910 +  	st1		{v12.16b-v15.16b}, [x0], #64
881 911
882   -  	ld1		{v10.16b}, [x20], #16
883   -  	eor		v4.16b, v4.16b, v10.16b
884   -  	st1		{v4.16b}, [x19], #16
885   -  	tbnz		x9, #3, 3f
912 +  	next_ctr	v0
913 +  	subs		x4, x4, #8
914 +  	b.gt		0b
886 915
887   -  	ld1		{v11.16b}, [x20], #16
888   -  	eor		v6.16b, v6.16b, v11.16b
889   -  	st1		{v6.16b}, [x19], #16
890   -  	tbnz		x9, #4, 4f
891   -
892   -  	ld1		{v12.16b}, [x20], #16
893   -  	eor		v3.16b, v3.16b, v12.16b
894   -  	st1		{v3.16b}, [x19], #16
895   -  	tbnz		x9, #5, 5f
896   -
897   -  	ld1		{v13.16b}, [x20], #16
898   -  	eor		v7.16b, v7.16b, v13.16b
899   -  	st1		{v7.16b}, [x19], #16
900   -  	tbnz		x9, #6, 6f
901   -
902   -  	ld1		{v14.16b}, [x20], #16
903   -  	eor		v2.16b, v2.16b, v14.16b
904   -  	st1		{v2.16b}, [x19], #16
905   -  	tbnz		x9, #7, 7f
906   -
907   -  	ld1		{v15.16b}, [x20], #16
908   -  	eor		v5.16b, v5.16b, v15.16b
909   -  	st1		{v5.16b}, [x19], #16
910   -
911   -  8:	next_ctr	v0
912   -  	st1		{v0.16b}, [x24]
913   -  	cbz		x23, .Lctr_done
914   -
915   -  	b		99b
916   -
917   -  .Lctr_done:
918   -  	frame_pop
916 +  	st1		{v0.16b}, [x5]
917 +  	ldp		x29, x30, [sp], #16
919 918  	ret
920   -
921   -  	/*
922   -  	 * If we are handling the tail of the input (x6 != NULL), return the
923   -  	 * final keystream block back to the caller.
924   -  	 */
925   -  0:	cbz		x25, 8b
926   -  	st1		{v0.16b}, [x25]
927   -  	b		8b
928   -  1:	cbz		x25, 8b
929   -  	st1		{v1.16b}, [x25]
930   -  	b		8b
931   -  2:	cbz		x25, 8b
932   -  	st1		{v4.16b}, [x25]
933   -  	b		8b
934   -  3:	cbz		x25, 8b
935   -  	st1		{v6.16b}, [x25]
936   -  	b		8b
937   -  4:	cbz		x25, 8b
938   -  	st1		{v3.16b}, [x25]
939   -  	b		8b
940   -  5:	cbz		x25, 8b
941   -  	st1		{v7.16b}, [x25]
942   -  	b		8b
943   -  6:	cbz		x25, 8b
944   -  	st1		{v2.16b}, [x25]
945   -  	b		8b
946   -  7:	cbz		x25, 8b
947   -  	st1		{v5.16b}, [x25]
948   -  	b		8b
949 919  SYM_FUNC_END(aesbs_ctr_encrypt)
+46 -51
arch/arm64/crypto/aes-neonbs-glue.c
···
34 34   				  int rounds, int blocks, u8 iv[]);
35 35
36 36   asmlinkage void aesbs_ctr_encrypt(u8 out[], u8 const in[], u8 const rk[],
37   -  				  int rounds, int blocks, u8 iv[], u8 final[]);
37 +  				  int rounds, int blocks, u8 iv[]);
38 38
39 39   asmlinkage void aesbs_xts_encrypt(u8 out[], u8 const in[], u8 const rk[],
40 40   				  int rounds, int blocks, u8 iv[]);
···
46 46   				 int rounds, int blocks);
47 47   asmlinkage void neon_aes_cbc_encrypt(u8 out[], u8 const in[], u32 const rk[],
48 48   				     int rounds, int blocks, u8 iv[]);
49 +  asmlinkage void neon_aes_ctr_encrypt(u8 out[], u8 const in[], u32 const rk[],
50 +  				     int rounds, int bytes, u8 ctr[]);
49 51   asmlinkage void neon_aes_xts_encrypt(u8 out[], u8 const in[],
50 52   				     u32 const rk1[], int rounds, int bytes,
51 53   				     u32 const rk2[], u8 iv[], int first);
···
60 58   	int	rounds;
61 59   } __aligned(AES_BLOCK_SIZE);
62 60
63   -  struct aesbs_cbc_ctx {
61 +  struct aesbs_cbc_ctr_ctx {
64 62   	struct aesbs_ctx	key;
65 63   	u32			enc[AES_MAX_KEYLENGTH_U32];
66 64   };
···
130 128  	return __ecb_crypt(req, aesbs_ecb_decrypt);
131 129  }
132 130
133   -  static int aesbs_cbc_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
131 +  static int aesbs_cbc_ctr_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
134 132  			    unsigned int key_len)
135 133  {
136   -  	struct aesbs_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
134 +  	struct aesbs_cbc_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);
137 135  	struct crypto_aes_ctx rk;
138 136  	int err;
139 137
···
156 154  static int cbc_encrypt(struct skcipher_request *req)
157 155  {
158 156  	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
159   -  	struct aesbs_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
157 +  	struct aesbs_cbc_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);
160 158  	struct skcipher_walk walk;
161 159  	int err;
162 160
···
179 177  static int cbc_decrypt(struct skcipher_request *req)
180 178  {
181 179  	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
182   -  	struct aesbs_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
180 +  	struct aesbs_cbc_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);
183 181  	struct skcipher_walk walk;
184 182  	int err;
185 183
···
207 205  static int ctr_encrypt(struct skcipher_request *req)
208 206  {
209 207  	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
210   -  	struct aesbs_ctx *ctx = crypto_skcipher_ctx(tfm);
208 +  	struct aesbs_cbc_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);
211 209  	struct skcipher_walk walk;
212   -  	u8 buf[AES_BLOCK_SIZE];
213 210  	int err;
214 211
215 212  	err = skcipher_walk_virt(&walk, req, false);
216 213
217 214  	while (walk.nbytes > 0) {
218   -  		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
219   -  		u8 *final = (walk.total % AES_BLOCK_SIZE) ? buf : NULL;
220   -
221   -  		if (walk.nbytes < walk.total) {
222   -  			blocks = round_down(blocks,
223   -  					    walk.stride / AES_BLOCK_SIZE);
224   -  			final = NULL;
225   -  		}
215 +  		int blocks = (walk.nbytes / AES_BLOCK_SIZE) & ~7;
216 +  		int nbytes = walk.nbytes % (8 * AES_BLOCK_SIZE);
217 +  		const u8 *src = walk.src.virt.addr;
218 +  		u8 *dst = walk.dst.virt.addr;
226 219
227 220  		kernel_neon_begin();
228   -  		aesbs_ctr_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
229   -  				  ctx->rk, ctx->rounds, blocks, walk.iv, final);
230   -  		kernel_neon_end();
231   -
232   -  		if (final) {
233   -  			u8 *dst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
234   -  			u8 *src = walk.src.virt.addr + blocks * AES_BLOCK_SIZE;
235   -
236   -  			crypto_xor_cpy(dst, src, final,
237   -  				       walk.total % AES_BLOCK_SIZE);
238   -
239   -  			err = skcipher_walk_done(&walk, 0);
240   -  			break;
221 +  		if (blocks >= 8) {
222 +  			aesbs_ctr_encrypt(dst, src, ctx->key.rk, ctx->key.rounds,
223 +  					  blocks, walk.iv);
224 +  			dst += blocks * AES_BLOCK_SIZE;
225 +  			src += blocks * AES_BLOCK_SIZE;
241 226  		}
242   -  		err = skcipher_walk_done(&walk,
243   -  					 walk.nbytes - blocks * AES_BLOCK_SIZE);
227 +  		if (nbytes && walk.nbytes == walk.total) {
228 +  			neon_aes_ctr_encrypt(dst, src, ctx->enc, ctx->key.rounds,
229 +  					     nbytes, walk.iv);
230 +  			nbytes = 0;
231 +  		}
232 +  		kernel_neon_end();
233 +  		err = skcipher_walk_done(&walk, nbytes);
244 234  	}
245 235  	return err;
246 236  }
···
302 308  		return err;
303 309
304 310  	while (walk.nbytes >= AES_BLOCK_SIZE) {
305   -  		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
306   -
307   -  		if (walk.nbytes < walk.total || walk.nbytes % AES_BLOCK_SIZE)
308   -  			blocks = round_down(blocks,
309   -  					    walk.stride / AES_BLOCK_SIZE);
310   -
311 +  		int blocks = (walk.nbytes / AES_BLOCK_SIZE) & ~7;
311 312  		out = walk.dst.virt.addr;
312 313  		in = walk.src.virt.addr;
313 314  		nbytes = walk.nbytes;
314 315
315 316  		kernel_neon_begin();
316   -  		if (likely(blocks > 6)) { /* plain NEON is faster otherwise */
317   -  			if (first)
317 +  		if (blocks >= 8) {
318 +  			if (first == 1)
318 319  				neon_aes_ecb_encrypt(walk.iv, walk.iv,
319 320  						     ctx->twkey,
320 321  						     ctx->key.rounds, 1);
321   -  			first = 0;
322 +  			first = 2;
322 323
323 324  			fn(out, in, ctx->key.rk, ctx->key.rounds, blocks,
324 325  			   walk.iv);
···
322 333  			in += blocks * AES_BLOCK_SIZE;
323 334  			nbytes -= blocks * AES_BLOCK_SIZE;
324 335  		}
325   -
326   -  		if (walk.nbytes == walk.total && nbytes > 0)
327   -  			goto xts_tail;
328   -
336 +  		if (walk.nbytes == walk.total && nbytes > 0) {
337 +  			if (encrypt)
338 +  				neon_aes_xts_encrypt(out, in, ctx->cts.key_enc,
339 +  						     ctx->key.rounds, nbytes,
340 +  						     ctx->twkey, walk.iv, first);
341 +  			else
342 +  				neon_aes_xts_decrypt(out, in, ctx->cts.key_dec,
343 +  						     ctx->key.rounds, nbytes,
344 +  						     ctx->twkey, walk.iv, first);
345 +  			nbytes = first = 0;
346 +  		}
329 347  		kernel_neon_end();
330 348  		err = skcipher_walk_done(&walk, nbytes);
331 349  	}
···
357 361  	nbytes = walk.nbytes;
358 362
359 363  	kernel_neon_begin();
360   -  xts_tail:
361 364  	if (encrypt)
362 365  		neon_aes_xts_encrypt(out, in, ctx->cts.key_enc, ctx->key.rounds,
363   -  				     nbytes, ctx->twkey, walk.iv, first ?: 2);
366 +  				     nbytes, ctx->twkey, walk.iv, first);
364 367  	else
365 368  		neon_aes_xts_decrypt(out, in, ctx->cts.key_dec, ctx->key.rounds,
366   -  				     nbytes, ctx->twkey, walk.iv, first ?: 2);
369 +  				     nbytes, ctx->twkey, walk.iv, first);
367 370  	kernel_neon_end();
368 371
369 372  	return skcipher_walk_done(&walk, 0);
···
397 402  	.base.cra_driver_name	= "cbc-aes-neonbs",
398 403  	.base.cra_priority	= 250,
399 404  	.base.cra_blocksize	= AES_BLOCK_SIZE,
400   -  	.base.cra_ctxsize	= sizeof(struct aesbs_cbc_ctx),
405 +  	.base.cra_ctxsize	= sizeof(struct aesbs_cbc_ctr_ctx),
401 406  	.base.cra_module	= THIS_MODULE,
402 407
403 408  	.min_keysize		= AES_MIN_KEY_SIZE,
404 409  	.max_keysize		= AES_MAX_KEY_SIZE,
405 410  	.walksize		= 8 * AES_BLOCK_SIZE,
406 411  	.ivsize			= AES_BLOCK_SIZE,
407   -  	.setkey			= aesbs_cbc_setkey,
412 +  	.setkey			= aesbs_cbc_ctr_setkey,
408 413  	.encrypt		= cbc_encrypt,
409 414  	.decrypt		= cbc_decrypt,
410 415  }, {
···
412 417  	.base.cra_driver_name	= "ctr-aes-neonbs",
413 418  	.base.cra_priority	= 250,
414 419  	.base.cra_blocksize	= 1,
415   -  	.base.cra_ctxsize	= sizeof(struct aesbs_ctx),
420 +  	.base.cra_ctxsize	= sizeof(struct aesbs_cbc_ctr_ctx),
416 421  	.base.cra_module	= THIS_MODULE,
417 422
418 423  	.min_keysize		= AES_MIN_KEY_SIZE,
···
420 425  	.chunksize		= AES_BLOCK_SIZE,
421 426  	.walksize		= 8 * AES_BLOCK_SIZE,
422 427  	.ivsize			= AES_BLOCK_SIZE,
423   -  	.setkey			= aesbs_setkey,
428 +  	.setkey			= aesbs_cbc_ctr_setkey,
424 429  	.encrypt		= ctr_encrypt,
425 430  	.decrypt		= ctr_encrypt,
426 431  }, {
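The reworked `ctr_encrypt()` above splits each walk in two: the largest multiple of 8 blocks goes to the bit-sliced `aesbs_ctr_encrypt()`, and whatever remains (only once the walk covers the whole request) goes to the newly exported `neon_aes_ctr_encrypt()`, which can process arbitrary byte counts. The arithmetic of that split can be modelled in isolation (struct and function names here are illustrative, not kernel types):

```c
#include <assert.h>

#define AES_BLOCK_SIZE 16

struct ctr_split {
	int bulk_blocks;	/* handled by the 8-way bit-sliced path */
	int tail_bytes;		/* handled by the plain NEON fallback   */
};

/* Mirror of: blocks = (nbytes / AES_BLOCK_SIZE) & ~7;
 *            nbytes %= 8 * AES_BLOCK_SIZE; */
static struct ctr_split split_walk(int walk_nbytes)
{
	struct ctr_split s;

	s.bulk_blocks = (walk_nbytes / AES_BLOCK_SIZE) & ~7;
	s.tail_bytes = walk_nbytes % (8 * AES_BLOCK_SIZE);
	return s;
}
```

A 300-byte walk thus becomes 16 bulk blocks (256 bytes) plus a 44-byte tail; anything under 128 bytes skips the bit-sliced path entirely.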
+1 -1
arch/arm64/crypto/sha3-ce-glue.c
···
1   -  /* SPDX-License-Identifier: GPL-2.0 */
1 +  // SPDX-License-Identifier: GPL-2.0
2 2   /*
3 3    * sha3-ce-glue.c - core SHA-3 transform using v8.2 Crypto Extensions
4 4    *
+1 -1
arch/arm64/crypto/sha512-armv8.pl
···
43 43   # on Cortex-A53 (or by 4 cycles per round).
44 44   # (***)	Super-impressive coefficients over gcc-generated code are
45 45   #	indication of some compiler "pathology", most notably code
46   -  #	generated with -mgeneral-regs-only is significanty faster
46 +  #	generated with -mgeneral-regs-only is significantly faster
47 47   #	and the gap is only 40-90%.
48 48   #
49 49   # October 2016.
+1 -1
arch/arm64/crypto/sha512-ce-glue.c
···
1   -  /* SPDX-License-Identifier: GPL-2.0 */
1 +  // SPDX-License-Identifier: GPL-2.0
2 2   /*
3 3    * sha512-ce-glue.c - SHA-384/SHA-512 using ARMv8 Crypto Extensions
4 4    *
+20 -8
arch/arm64/crypto/sm3-ce-glue.c
··· 26 26 static int sm3_ce_update(struct shash_desc *desc, const u8 *data, 27 27 unsigned int len) 28 28 { 29 - if (!crypto_simd_usable()) 30 - return crypto_sm3_update(desc, data, len); 29 + if (!crypto_simd_usable()) { 30 + sm3_update(shash_desc_ctx(desc), data, len); 31 + return 0; 32 + } 31 33 32 34 kernel_neon_begin(); 33 35 sm3_base_do_update(desc, data, len, sm3_ce_transform); ··· 40 38 41 39 static int sm3_ce_final(struct shash_desc *desc, u8 *out) 42 40 { 43 - if (!crypto_simd_usable()) 44 - return crypto_sm3_finup(desc, NULL, 0, out); 41 + if (!crypto_simd_usable()) { 42 + sm3_final(shash_desc_ctx(desc), out); 43 + return 0; 44 + } 45 45 46 46 kernel_neon_begin(); 47 47 sm3_base_do_finalize(desc, sm3_ce_transform); ··· 55 51 static int sm3_ce_finup(struct shash_desc *desc, const u8 *data, 56 52 unsigned int len, u8 *out) 57 53 { 58 - if (!crypto_simd_usable()) 59 - return crypto_sm3_finup(desc, data, len, out); 54 + if (!crypto_simd_usable()) { 55 + struct sm3_state *sctx = shash_desc_ctx(desc); 56 + 57 + if (len) 58 + sm3_update(sctx, data, len); 59 + sm3_final(sctx, out); 60 + return 0; 61 + } 60 62 61 63 kernel_neon_begin(); 62 - sm3_base_do_update(desc, data, len, sm3_ce_transform); 64 + if (len) 65 + sm3_base_do_update(desc, data, len, sm3_ce_transform); 66 + sm3_base_do_finalize(desc, sm3_ce_transform); 63 67 kernel_neon_end(); 64 68 65 - return sm3_ce_final(desc, out); 69 + return sm3_base_finish(desc, out); 66 70 } 67 71 68 72 static struct shash_alg sm3_alg = {
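The sm3-ce-glue change above reworks the non-SIMD fallback and makes the NEON finup path finalize inside a single kernel_neon_begin/end section. The contract both paths must preserve is that finup(data, len) equals update(data, len) followed by final(), with the update skipped entirely when len is zero. A minimal userspace sketch of that contract, using a toy FNV-1a accumulator as a stand-in for sm3_state (all `toy_*` names are illustrative, not kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy incremental "hash" (FNV-1a) standing in for the sm3 library API. */
struct toy_state { uint32_t h; };

static void toy_init(struct toy_state *s) { s->h = 2166136261u; }

static void toy_update(struct toy_state *s, const uint8_t *data, size_t len)
{
	while (len--)
		s->h = (s->h ^ *data++) * 16777619u;
}

static uint32_t toy_final(struct toy_state *s) { return s->h; }

/* Mirrors the patched sm3_ce_finup() fallback: fold an optional trailing
 * chunk and finalize in one call, skipping update when len == 0. */
static uint32_t toy_finup(struct toy_state *s, const uint8_t *data, size_t len)
{
	if (len)
		toy_update(s, data, len);
	return toy_final(s);
}
```

Because finup is exactly update-then-final, the glue code is free to pick either the scalar library calls or the NEON base_do_update/base_do_finalize pair depending on crypto_simd_usable(), and callers see the same digest.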
+14 -7
arch/arm64/include/asm/xor.h
··· 16 16 extern struct xor_block_template const xor_block_inner_neon; 17 17 18 18 static void 19 - xor_neon_2(unsigned long bytes, unsigned long *p1, unsigned long *p2) 19 + xor_neon_2(unsigned long bytes, unsigned long * __restrict p1, 20 + const unsigned long * __restrict p2) 20 21 { 21 22 kernel_neon_begin(); 22 23 xor_block_inner_neon.do_2(bytes, p1, p2); ··· 25 24 } 26 25 27 26 static void 28 - xor_neon_3(unsigned long bytes, unsigned long *p1, unsigned long *p2, 29 - unsigned long *p3) 27 + xor_neon_3(unsigned long bytes, unsigned long * __restrict p1, 28 + const unsigned long * __restrict p2, 29 + const unsigned long * __restrict p3) 30 30 { 31 31 kernel_neon_begin(); 32 32 xor_block_inner_neon.do_3(bytes, p1, p2, p3); ··· 35 33 } 36 34 37 35 static void 38 - xor_neon_4(unsigned long bytes, unsigned long *p1, unsigned long *p2, 39 - unsigned long *p3, unsigned long *p4) 36 + xor_neon_4(unsigned long bytes, unsigned long * __restrict p1, 37 + const unsigned long * __restrict p2, 38 + const unsigned long * __restrict p3, 39 + const unsigned long * __restrict p4) 40 40 { 41 41 kernel_neon_begin(); 42 42 xor_block_inner_neon.do_4(bytes, p1, p2, p3, p4); ··· 46 42 } 47 43 48 44 static void 49 - xor_neon_5(unsigned long bytes, unsigned long *p1, unsigned long *p2, 50 - unsigned long *p3, unsigned long *p4, unsigned long *p5) 45 + xor_neon_5(unsigned long bytes, unsigned long * __restrict p1, 46 + const unsigned long * __restrict p2, 47 + const unsigned long * __restrict p3, 48 + const unsigned long * __restrict p4, 49 + const unsigned long * __restrict p5) 51 50 { 52 51 kernel_neon_begin(); 53 52 xor_block_inner_neon.do_5(bytes, p1, p2, p3, p4, p5);
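The xor.h hunk above (and the matching ia64, powerpc, s390 and sparc hunks below) changes only the prototypes: the source buffers become `const` and every pointer gains `__restrict`. The point is an aliasing promise: the compiler may keep wide vector loads and stores in flight without re-checking whether a store through p1 could have changed p2. A scalar sketch of the same signature shape in standard C99 `restrict` (the kernel spells it `__restrict`); this is illustrative, not the kernel implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal scalar model of the kernel's xor_*_2 helpers. Marking p1/p2
 * restrict (and p2 const) promises the caller passes non-overlapping
 * buffers, so the compiler may vectorize the loop without alias checks. */
static void xor_2(unsigned long bytes,
		  unsigned long * restrict p1,
		  const unsigned long * restrict p2)
{
	size_t lines = bytes / sizeof(unsigned long);
	size_t i;

	for (i = 0; i < lines; i++)
		p1[i] ^= p2[i];
}
```

Callers that pass overlapping regions would now invoke undefined behaviour, which is why the qualifier change has to be propagated through every architecture's xor template at once rather than file by file.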
+73 -14
arch/arm64/lib/crc32.S
··· 11 11 12 12 .arch armv8-a+crc 13 13 14 - .macro __crc32, c 14 + .macro byteorder, reg, be 15 + .if \be 16 + CPU_LE( rev \reg, \reg ) 17 + .else 18 + CPU_BE( rev \reg, \reg ) 19 + .endif 20 + .endm 21 + 22 + .macro byteorder16, reg, be 23 + .if \be 24 + CPU_LE( rev16 \reg, \reg ) 25 + .else 26 + CPU_BE( rev16 \reg, \reg ) 27 + .endif 28 + .endm 29 + 30 + .macro bitorder, reg, be 31 + .if \be 32 + rbit \reg, \reg 33 + .endif 34 + .endm 35 + 36 + .macro bitorder16, reg, be 37 + .if \be 38 + rbit \reg, \reg 39 + lsr \reg, \reg, #16 40 + .endif 41 + .endm 42 + 43 + .macro bitorder8, reg, be 44 + .if \be 45 + rbit \reg, \reg 46 + lsr \reg, \reg, #24 47 + .endif 48 + .endm 49 + 50 + .macro __crc32, c, be=0 51 + bitorder w0, \be 15 52 cmp x2, #16 16 53 b.lt 8f // less than 16 bytes 17 54 ··· 61 24 add x8, x8, x1 62 25 add x1, x1, x7 63 26 ldp x5, x6, [x8] 64 - CPU_BE( rev x3, x3 ) 65 - CPU_BE( rev x4, x4 ) 66 - CPU_BE( rev x5, x5 ) 67 - CPU_BE( rev x6, x6 ) 27 + byteorder x3, \be 28 + byteorder x4, \be 29 + byteorder x5, \be 30 + byteorder x6, \be 31 + bitorder x3, \be 32 + bitorder x4, \be 33 + bitorder x5, \be 34 + bitorder x6, \be 68 35 69 36 tst x7, #8 70 37 crc32\c\()x w8, w0, x3 ··· 96 55 32: ldp x3, x4, [x1], #32 97 56 sub x2, x2, #32 98 57 ldp x5, x6, [x1, #-16] 99 - CPU_BE( rev x3, x3 ) 100 - CPU_BE( rev x4, x4 ) 101 - CPU_BE( rev x5, x5 ) 102 - CPU_BE( rev x6, x6 ) 58 + byteorder x3, \be 59 + byteorder x4, \be 60 + byteorder x5, \be 61 + byteorder x6, \be 62 + bitorder x3, \be 63 + bitorder x4, \be 64 + bitorder x5, \be 65 + bitorder x6, \be 103 66 crc32\c\()x w0, w0, x3 104 67 crc32\c\()x w0, w0, x4 105 68 crc32\c\()x w0, w0, x5 106 69 crc32\c\()x w0, w0, x6 107 70 cbnz x2, 32b 108 - 0: ret 71 + 0: bitorder w0, \be 72 + ret 109 73 110 74 8: tbz x2, #3, 4f 111 75 ldr x3, [x1], #8 112 - CPU_BE( rev x3, x3 ) 76 + byteorder x3, \be 77 + bitorder x3, \be 113 78 crc32\c\()x w0, w0, x3 114 79 4: tbz x2, #2, 2f 115 80 ldr w3, [x1], #4 116 - CPU_BE( rev w3, w3 ) 81 + byteorder w3, \be 82 + bitorder w3, \be 117 83 crc32\c\()w w0, w0, w3 118 84 2: tbz x2, #1, 1f 119 85 ldrh w3, [x1], #2 120 - CPU_BE( rev16 w3, w3 ) 86 + byteorder16 w3, \be 87 + bitorder16 w3, \be 121 88 crc32\c\()h w0, w0, w3 122 89 1: tbz x2, #0, 0f 123 90 ldrb w3, [x1] 91 + bitorder8 w3, \be 124 92 crc32\c\()b w0, w0, w3 125 - 0: ret 93 + 0: bitorder w0, \be 94 + ret 126 95 .endm 127 96 128 97 .align 5 ··· 150 99 alternative_else_nop_endif 151 100 __crc32 c 152 101 SYM_FUNC_END(__crc32c_le) 102 + 103 + .align 5 104 + SYM_FUNC_START(crc32_be) 105 + alternative_if_not ARM64_HAS_CRC32 106 + b crc32_be_base 107 + alternative_else_nop_endif 108 + __crc32 be=1 109 + SYM_FUNC_END(crc32_be)
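The crc32.S hunk implements crc32_be with the same little-endian crc32 instructions by wrapping them in rbit: the big-endian polynomial 0x04C11DB7 is the bit-reflection of 0xEDB88320, so bit-reversing the accumulator and each input (the bitorder macros) before and after the LE step yields the BE CRC. A bit-at-a-time C check of that identity (helper names here are mine, not kernel symbols):

```c
#include <assert.h>
#include <stdint.h>

/* Bit-reverse a 32-bit word (what the AArch64 rbit instruction does). */
static uint32_t rbit32(uint32_t x)
{
	uint32_t r = 0;
	int i;

	for (i = 0; i < 32; i++)
		r |= ((x >> i) & 1u) << (31 - i);
	return r;
}

static uint8_t rbit8(uint8_t x)
{
	uint8_t r = 0;
	int i;

	for (i = 0; i < 8; i++)
		r |= (uint8_t)(((x >> i) & 1u) << (7 - i));
	return r;
}

/* Reference MSB-first (big-endian) CRC-32 byte step, poly 0x04C11DB7. */
static uint32_t crc32_be_step(uint32_t crc, uint8_t byte)
{
	int i;

	crc ^= (uint32_t)byte << 24;
	for (i = 0; i < 8; i++)
		crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u
					  : (crc << 1);
	return crc;
}

/* LSB-first byte step, poly 0xEDB88320 -- the crc32b instruction's CRC. */
static uint32_t crc32_le_step(uint32_t crc, uint8_t byte)
{
	int i;

	crc ^= byte;
	for (i = 0; i < 8; i++)
		crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
	return crc;
}

/* The bitorder trick: BE CRC == rbit(LE CRC over rbit-ed inputs). */
static uint32_t crc32_be_via_le(uint32_t crc, uint8_t byte)
{
	return rbit32(crc32_le_step(rbit32(crc), rbit8(byte)));
}
```

This is why the macro only needs `bitorder` wrappers around the existing `__crc32` body: the byte-order handling (rev/rev16) stays as before, and the extra rbit on entry, on each loaded word, and on exit converts the whole routine between the two polynomial orientations.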
+29 -17
arch/arm64/lib/xor-neon.c
··· 10 10 #include <linux/module.h> 11 11 #include <asm/neon-intrinsics.h> 12 12 13 - void xor_arm64_neon_2(unsigned long bytes, unsigned long *p1, 14 - unsigned long *p2) 13 + void xor_arm64_neon_2(unsigned long bytes, unsigned long * __restrict p1, 14 + const unsigned long * __restrict p2) 15 15 { 16 16 uint64_t *dp1 = (uint64_t *)p1; 17 17 uint64_t *dp2 = (uint64_t *)p2; ··· 37 37 } while (--lines > 0); 38 38 } 39 39 40 - void xor_arm64_neon_3(unsigned long bytes, unsigned long *p1, 41 - unsigned long *p2, unsigned long *p3) 40 + void xor_arm64_neon_3(unsigned long bytes, unsigned long * __restrict p1, 41 + const unsigned long * __restrict p2, 42 + const unsigned long * __restrict p3) 42 43 { 43 44 uint64_t *dp1 = (uint64_t *)p1; 44 45 uint64_t *dp2 = (uint64_t *)p2; ··· 73 72 } while (--lines > 0); 74 73 } 75 74 76 - void xor_arm64_neon_4(unsigned long bytes, unsigned long *p1, 77 - unsigned long *p2, unsigned long *p3, unsigned long *p4) 75 + void xor_arm64_neon_4(unsigned long bytes, unsigned long * __restrict p1, 76 + const unsigned long * __restrict p2, 77 + const unsigned long * __restrict p3, 78 + const unsigned long * __restrict p4) 78 79 { 79 80 uint64_t *dp1 = (uint64_t *)p1; 80 81 uint64_t *dp2 = (uint64_t *)p2; ··· 118 115 } while (--lines > 0); 119 116 } 120 117 121 - void xor_arm64_neon_5(unsigned long bytes, unsigned long *p1, 122 - unsigned long *p2, unsigned long *p3, 123 - unsigned long *p4, unsigned long *p5) 118 + void xor_arm64_neon_5(unsigned long bytes, unsigned long * __restrict p1, 119 + const unsigned long * __restrict p2, 120 + const unsigned long * __restrict p3, 121 + const unsigned long * __restrict p4, 122 + const unsigned long * __restrict p5) 124 123 { 125 124 uint64_t *dp1 = (uint64_t *)p1; 126 125 uint64_t *dp2 = (uint64_t *)p2; ··· 191 186 return res; 192 187 } 193 188 194 - static void xor_arm64_eor3_3(unsigned long bytes, unsigned long *p1, 195 - unsigned long *p2, unsigned long *p3) 189 + static void xor_arm64_eor3_3(unsigned long bytes, 190 + unsigned long * __restrict p1, 191 + const unsigned long * __restrict p2, 192 + const unsigned long * __restrict p3) 196 193 { 197 194 uint64_t *dp1 = (uint64_t *)p1; 198 195 uint64_t *dp2 = (uint64_t *)p2; ··· 226 219 } while (--lines > 0); 227 220 } 228 221 229 - static void xor_arm64_eor3_4(unsigned long bytes, unsigned long *p1, 230 - unsigned long *p2, unsigned long *p3, 231 - unsigned long *p4) 222 + static void xor_arm64_eor3_4(unsigned long bytes, 223 + unsigned long * __restrict p1, 224 + const unsigned long * __restrict p2, 225 + const unsigned long * __restrict p3, 226 + const unsigned long * __restrict p4) 232 227 { 233 228 uint64_t *dp1 = (uint64_t *)p1; 234 229 uint64_t *dp2 = (uint64_t *)p2; ··· 270 261 } while (--lines > 0); 271 262 } 272 263 273 - static void xor_arm64_eor3_5(unsigned long bytes, unsigned long *p1, 274 - unsigned long *p2, unsigned long *p3, 275 - unsigned long *p4, unsigned long *p5) 264 + static void xor_arm64_eor3_5(unsigned long bytes, 265 + unsigned long * __restrict p1, 266 + const unsigned long * __restrict p2, 267 + const unsigned long * __restrict p3, 268 + const unsigned long * __restrict p4, 269 + const unsigned long * __restrict p5) 276 270 { 277 271 uint64_t *dp1 = (uint64_t *)p1; 278 272 uint64_t *dp2 = (uint64_t *)p2;
+14 -7
arch/ia64/include/asm/xor.h
··· 4 4 */ 5 5 6 6 7 - extern void xor_ia64_2(unsigned long, unsigned long *, unsigned long *); 8 - extern void xor_ia64_3(unsigned long, unsigned long *, unsigned long *, 9 - unsigned long *); 10 - extern void xor_ia64_4(unsigned long, unsigned long *, unsigned long *, 11 - unsigned long *, unsigned long *); 12 - extern void xor_ia64_5(unsigned long, unsigned long *, unsigned long *, 13 - unsigned long *, unsigned long *, unsigned long *); 7 + extern void xor_ia64_2(unsigned long bytes, unsigned long * __restrict p1, 8 + const unsigned long * __restrict p2); 9 + extern void xor_ia64_3(unsigned long bytes, unsigned long * __restrict p1, 10 + const unsigned long * __restrict p2, 11 + const unsigned long * __restrict p3); 12 + extern void xor_ia64_4(unsigned long bytes, unsigned long * __restrict p1, 13 + const unsigned long * __restrict p2, 14 + const unsigned long * __restrict p3, 15 + const unsigned long * __restrict p4); 16 + extern void xor_ia64_5(unsigned long bytes, unsigned long * __restrict p1, 17 + const unsigned long * __restrict p2, 18 + const unsigned long * __restrict p3, 19 + const unsigned long * __restrict p4, 20 + const unsigned long * __restrict p5); 14 21 15 22 static struct xor_block_template xor_block_ia64 = { 16 23 .name = "ia64",
+14 -11
arch/powerpc/include/asm/xor_altivec.h
··· 3 3 #define _ASM_POWERPC_XOR_ALTIVEC_H 4 4 5 5 #ifdef CONFIG_ALTIVEC 6 - 7 - void xor_altivec_2(unsigned long bytes, unsigned long *v1_in, 8 - unsigned long *v2_in); 9 - void xor_altivec_3(unsigned long bytes, unsigned long *v1_in, 10 - unsigned long *v2_in, unsigned long *v3_in); 11 - void xor_altivec_4(unsigned long bytes, unsigned long *v1_in, 12 - unsigned long *v2_in, unsigned long *v3_in, 13 - unsigned long *v4_in); 14 - void xor_altivec_5(unsigned long bytes, unsigned long *v1_in, 15 - unsigned long *v2_in, unsigned long *v3_in, 16 - unsigned long *v4_in, unsigned long *v5_in); 6 + void xor_altivec_2(unsigned long bytes, unsigned long * __restrict p1, 7 + const unsigned long * __restrict p2); 8 + void xor_altivec_3(unsigned long bytes, unsigned long * __restrict p1, 9 + const unsigned long * __restrict p2, 10 + const unsigned long * __restrict p3); 11 + void xor_altivec_4(unsigned long bytes, unsigned long * __restrict p1, 12 + const unsigned long * __restrict p2, 13 + const unsigned long * __restrict p3, 14 + const unsigned long * __restrict p4); 15 + void xor_altivec_5(unsigned long bytes, unsigned long * __restrict p1, 16 + const unsigned long * __restrict p2, 17 + const unsigned long * __restrict p3, 18 + const unsigned long * __restrict p4, 19 + const unsigned long * __restrict p5); 17 20 18 21 #endif 19 22 #endif /* _ASM_POWERPC_XOR_ALTIVEC_H */
+18 -10
arch/powerpc/lib/xor_vmx.c
··· 49 49 V1##_3 = vec_xor(V1##_3, V2##_3); \ 50 50 } while (0) 51 51 52 - void __xor_altivec_2(unsigned long bytes, unsigned long *v1_in, 53 - unsigned long *v2_in) 52 + void __xor_altivec_2(unsigned long bytes, 53 + unsigned long * __restrict v1_in, 54 + const unsigned long * __restrict v2_in) 54 55 { 55 56 DEFINE(v1); 56 57 DEFINE(v2); ··· 68 67 } while (--lines > 0); 69 68 } 70 69 71 - void __xor_altivec_3(unsigned long bytes, unsigned long *v1_in, 72 - unsigned long *v2_in, unsigned long *v3_in) 70 + void __xor_altivec_3(unsigned long bytes, 71 + unsigned long * __restrict v1_in, 72 + const unsigned long * __restrict v2_in, 73 + const unsigned long * __restrict v3_in) 73 74 { 74 75 DEFINE(v1); 75 76 DEFINE(v2); ··· 92 89 } while (--lines > 0); 93 90 } 94 91 95 - void __xor_altivec_4(unsigned long bytes, unsigned long *v1_in, 96 - unsigned long *v2_in, unsigned long *v3_in, 97 - unsigned long *v4_in) 92 + void __xor_altivec_4(unsigned long bytes, 93 + unsigned long * __restrict v1_in, 94 + const unsigned long * __restrict v2_in, 95 + const unsigned long * __restrict v3_in, 96 + const unsigned long * __restrict v4_in) 98 97 { 99 98 DEFINE(v1); 100 99 DEFINE(v2); ··· 121 116 } while (--lines > 0); 122 117 } 123 118 124 - void __xor_altivec_5(unsigned long bytes, unsigned long *v1_in, 125 - unsigned long *v2_in, unsigned long *v3_in, 126 - unsigned long *v4_in, unsigned long *v5_in) 119 + void __xor_altivec_5(unsigned long bytes, 120 + unsigned long * __restrict v1_in, 121 + const unsigned long * __restrict v2_in, 122 + const unsigned long * __restrict v3_in, 123 + const unsigned long * __restrict v4_in, 124 + const unsigned long * __restrict v5_in) 127 125 { 128 126 DEFINE(v1); 129 127 DEFINE(v2);
+14 -13
arch/powerpc/lib/xor_vmx.h
··· 6 6 * outside of the enable/disable altivec block. 7 7 */ 8 8 9 - void __xor_altivec_2(unsigned long bytes, unsigned long *v1_in, 10 - unsigned long *v2_in); 11 - 12 - void __xor_altivec_3(unsigned long bytes, unsigned long *v1_in, 13 - unsigned long *v2_in, unsigned long *v3_in); 14 - 15 - void __xor_altivec_4(unsigned long bytes, unsigned long *v1_in, 16 - unsigned long *v2_in, unsigned long *v3_in, 17 - unsigned long *v4_in); 18 - 19 - void __xor_altivec_5(unsigned long bytes, unsigned long *v1_in, 20 - unsigned long *v2_in, unsigned long *v3_in, 21 - unsigned long *v4_in, unsigned long *v5_in); 9 + void __xor_altivec_2(unsigned long bytes, unsigned long * __restrict p1, 10 + const unsigned long * __restrict p2); 11 + void __xor_altivec_3(unsigned long bytes, unsigned long * __restrict p1, 12 + const unsigned long * __restrict p2, 13 + const unsigned long * __restrict p3); 14 + void __xor_altivec_4(unsigned long bytes, unsigned long * __restrict p1, 15 + const unsigned long * __restrict p2, 16 + const unsigned long * __restrict p3, 17 + const unsigned long * __restrict p4); 18 + void __xor_altivec_5(unsigned long bytes, unsigned long * __restrict p1, 19 + const unsigned long * __restrict p2, 20 + const unsigned long * __restrict p3, 21 + const unsigned long * __restrict p4, 22 + const unsigned long * __restrict p5);
+18 -14
arch/powerpc/lib/xor_vmx_glue.c
··· 12 12 #include <asm/xor_altivec.h> 13 13 #include "xor_vmx.h" 14 14 15 - void xor_altivec_2(unsigned long bytes, unsigned long *v1_in, 16 - unsigned long *v2_in) 15 + void xor_altivec_2(unsigned long bytes, unsigned long * __restrict p1, 16 + const unsigned long * __restrict p2) 17 17 { 18 18 preempt_disable(); 19 19 enable_kernel_altivec(); 20 - __xor_altivec_2(bytes, v1_in, v2_in); 20 + __xor_altivec_2(bytes, p1, p2); 21 21 disable_kernel_altivec(); 22 22 preempt_enable(); 23 23 } 24 24 EXPORT_SYMBOL(xor_altivec_2); 25 25 26 - void xor_altivec_3(unsigned long bytes, unsigned long *v1_in, 27 - unsigned long *v2_in, unsigned long *v3_in) 26 + void xor_altivec_3(unsigned long bytes, unsigned long * __restrict p1, 27 + const unsigned long * __restrict p2, 28 + const unsigned long * __restrict p3) 28 29 { 29 30 preempt_disable(); 30 31 enable_kernel_altivec(); 31 - __xor_altivec_3(bytes, v1_in, v2_in, v3_in); 32 + __xor_altivec_3(bytes, p1, p2, p3); 32 33 disable_kernel_altivec(); 33 34 preempt_enable(); 34 35 } 35 36 EXPORT_SYMBOL(xor_altivec_3); 36 37 37 - void xor_altivec_4(unsigned long bytes, unsigned long *v1_in, 38 - unsigned long *v2_in, unsigned long *v3_in, 39 - unsigned long *v4_in) 38 + void xor_altivec_4(unsigned long bytes, unsigned long * __restrict p1, 39 + const unsigned long * __restrict p2, 40 + const unsigned long * __restrict p3, 41 + const unsigned long * __restrict p4) 40 42 { 41 43 preempt_disable(); 42 44 enable_kernel_altivec(); 43 - __xor_altivec_4(bytes, v1_in, v2_in, v3_in, v4_in); 45 + __xor_altivec_4(bytes, p1, p2, p3, p4); 44 46 disable_kernel_altivec(); 45 47 preempt_enable(); 46 48 } 47 49 EXPORT_SYMBOL(xor_altivec_4); 48 50 49 - void xor_altivec_5(unsigned long bytes, unsigned long *v1_in, 50 - unsigned long *v2_in, unsigned long *v3_in, 51 - unsigned long *v4_in, unsigned long *v5_in) 51 + void xor_altivec_5(unsigned long bytes, unsigned long * __restrict p1, 52 + const unsigned long * __restrict p2, 53 + const unsigned long * __restrict p3, 54 + const unsigned long * __restrict p4, 55 + const unsigned long * __restrict p5) 52 56 { 53 57 preempt_disable(); 54 58 enable_kernel_altivec(); 55 - __xor_altivec_5(bytes, v1_in, v2_in, v3_in, v4_in, v5_in); 59 + __xor_altivec_5(bytes, p1, p2, p3, p4, p5); 56 60 disable_kernel_altivec(); 57 61 preempt_enable(); 58 62 }
+14 -7
arch/s390/lib/xor.c
··· 11 11 #include <linux/raid/xor.h> 12 12 #include <asm/xor.h> 13 13 14 - static void xor_xc_2(unsigned long bytes, unsigned long *p1, unsigned long *p2) 14 + static void xor_xc_2(unsigned long bytes, unsigned long * __restrict p1, 15 + const unsigned long * __restrict p2) 15 16 { 16 17 asm volatile( 17 18 " larl 1,2f\n" ··· 33 32 : "0", "1", "cc", "memory"); 34 33 } 35 34 36 - static void xor_xc_3(unsigned long bytes, unsigned long *p1, unsigned long *p2, 37 - unsigned long *p3) 35 + static void xor_xc_3(unsigned long bytes, unsigned long * __restrict p1, 36 + const unsigned long * __restrict p2, 37 + const unsigned long * __restrict p3) 38 38 { 39 39 asm volatile( 40 40 " larl 1,2f\n" ··· 60 58 : : "0", "1", "cc", "memory"); 61 59 } 62 60 63 - static void xor_xc_4(unsigned long bytes, unsigned long *p1, unsigned long *p2, 64 - unsigned long *p3, unsigned long *p4) 61 + static void xor_xc_4(unsigned long bytes, unsigned long * __restrict p1, 62 + const unsigned long * __restrict p2, 63 + const unsigned long * __restrict p3, 64 + const unsigned long * __restrict p4) 65 65 { 66 66 asm volatile( 67 67 " larl 1,2f\n" ··· 92 88 : : "0", "1", "cc", "memory"); 93 89 } 94 90 95 - static void xor_xc_5(unsigned long bytes, unsigned long *p1, unsigned long *p2, 96 - unsigned long *p3, unsigned long *p4, unsigned long *p5) 91 + static void xor_xc_5(unsigned long bytes, unsigned long * __restrict p1, 92 + const unsigned long * __restrict p2, 93 + const unsigned long * __restrict p3, 94 + const unsigned long * __restrict p4, 95 + const unsigned long * __restrict p5) 97 96 { 98 97 asm volatile( 99 98 " larl 1,2f\n"
+14 -7
arch/sparc/include/asm/xor_32.h
··· 13 13 */ 14 14 15 15 static void 16 - sparc_2(unsigned long bytes, unsigned long *p1, unsigned long *p2) 16 + sparc_2(unsigned long bytes, unsigned long * __restrict p1, 17 + const unsigned long * __restrict p2) 17 18 { 18 19 int lines = bytes / (sizeof (long)) / 8; 19 20 ··· 51 50 } 52 51 53 52 static void 54 - sparc_3(unsigned long bytes, unsigned long *p1, unsigned long *p2, 55 - unsigned long *p3) 53 + sparc_3(unsigned long bytes, unsigned long * __restrict p1, 54 + const unsigned long * __restrict p2, 55 + const unsigned long * __restrict p3) 56 56 { 57 57 int lines = bytes / (sizeof (long)) / 8; 58 58 ··· 103 101 } 104 102 105 103 static void 106 - sparc_4(unsigned long bytes, unsigned long *p1, unsigned long *p2, 107 - unsigned long *p3, unsigned long *p4) 104 + sparc_4(unsigned long bytes, unsigned long * __restrict p1, 105 + const unsigned long * __restrict p2, 106 + const unsigned long * __restrict p3, 107 + const unsigned long * __restrict p4) 108 108 { 109 109 int lines = bytes / (sizeof (long)) / 8; 110 110 ··· 169 165 } 170 166 171 167 static void 172 - sparc_5(unsigned long bytes, unsigned long *p1, unsigned long *p2, 173 - unsigned long *p3, unsigned long *p4, unsigned long *p5) 168 + sparc_5(unsigned long bytes, unsigned long * __restrict p1, 169 + const unsigned long * __restrict p2, 170 + const unsigned long * __restrict p3, 171 + const unsigned long * __restrict p4, 172 + const unsigned long * __restrict p5) 174 173 { 175 174 int lines = bytes / (sizeof (long)) / 8; 176 175
+28 -14
arch/sparc/include/asm/xor_64.h
··· 12 12 13 13 #include <asm/spitfire.h> 14 14 15 - void xor_vis_2(unsigned long, unsigned long *, unsigned long *); 16 - void xor_vis_3(unsigned long, unsigned long *, unsigned long *, 17 - unsigned long *); 18 - void xor_vis_4(unsigned long, unsigned long *, unsigned long *, 19 - unsigned long *, unsigned long *); 20 - void xor_vis_5(unsigned long, unsigned long *, unsigned long *, 21 - unsigned long *, unsigned long *, unsigned long *); 15 + void xor_vis_2(unsigned long bytes, unsigned long * __restrict p1, 16 + const unsigned long * __restrict p2); 17 + void xor_vis_3(unsigned long bytes, unsigned long * __restrict p1, 18 + const unsigned long * __restrict p2, 19 + const unsigned long * __restrict p3); 20 + void xor_vis_4(unsigned long bytes, unsigned long * __restrict p1, 21 + const unsigned long * __restrict p2, 22 + const unsigned long * __restrict p3, 23 + const unsigned long * __restrict p4); 24 + void xor_vis_5(unsigned long bytes, unsigned long * __restrict p1, 25 + const unsigned long * __restrict p2, 26 + const unsigned long * __restrict p3, 27 + const unsigned long * __restrict p4, 28 + const unsigned long * __restrict p5); 22 29 23 30 /* XXX Ugh, write cheetah versions... -DaveM */ 24 31 ··· 37 30 .do_5 = xor_vis_5, 38 31 }; 39 32 40 - void xor_niagara_2(unsigned long, unsigned long *, unsigned long *); 41 - void xor_niagara_3(unsigned long, unsigned long *, unsigned long *, 42 - unsigned long *); 43 - void xor_niagara_4(unsigned long, unsigned long *, unsigned long *, 44 - unsigned long *, unsigned long *); 45 - void xor_niagara_5(unsigned long, unsigned long *, unsigned long *, 46 - unsigned long *, unsigned long *, unsigned long *); 33 + void xor_niagara_2(unsigned long bytes, unsigned long * __restrict p1, 34 + const unsigned long * __restrict p2); 35 + void xor_niagara_3(unsigned long bytes, unsigned long * __restrict p1, 36 + const unsigned long * __restrict p2, 37 + const unsigned long * __restrict p3); 38 + void xor_niagara_4(unsigned long bytes, unsigned long * __restrict p1, 39 + const unsigned long * __restrict p2, 40 + const unsigned long * __restrict p3, 41 + const unsigned long * __restrict p4); 42 + void xor_niagara_5(unsigned long bytes, unsigned long * __restrict p1, 43 + const unsigned long * __restrict p2, 44 + const unsigned long * __restrict p3, 45 + const unsigned long * __restrict p4, 46 + const unsigned long * __restrict p5); 47 47 48 48 static struct xor_block_template xor_block_niagara = { 49 49 .name = "Niagara",
+3
arch/x86/crypto/Makefile
··· 90 90 91 91 obj-$(CONFIG_CRYPTO_CURVE25519_X86) += curve25519-x86_64.o 92 92 93 + obj-$(CONFIG_CRYPTO_SM3_AVX_X86_64) += sm3-avx-x86_64.o 94 + sm3-avx-x86_64-y := sm3-avx-asm_64.o sm3_avx_glue.o 95 + 93 96 obj-$(CONFIG_CRYPTO_SM4_AESNI_AVX_X86_64) += sm4-aesni-avx-x86_64.o 94 97 sm4-aesni-avx-x86_64-y := sm4-aesni-avx-asm_64.o sm4_aesni_avx_glue.o 95 98
+10 -53
arch/x86/crypto/aes_ctrby8_avx-x86_64.S
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only OR BSD-3-Clause */ 1 2 /* 2 - * Implement AES CTR mode by8 optimization with AVX instructions. (x86_64) 3 - * 4 - * This is AES128/192/256 CTR mode optimization implementation. It requires 5 - * the support of Intel(R) AESNI and AVX instructions. 6 - * 7 - * This work was inspired by the AES CTR mode optimization published 8 - * in Intel Optimized IPSEC Cryptograhpic library. 9 - * Additional information on it can be found at: 10 - * http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=22972 11 - * 12 - * This file is provided under a dual BSD/GPLv2 license. When using or 13 - * redistributing this file, you may do so under either license. 14 - * 15 - * GPL LICENSE SUMMARY 3 + * AES CTR mode by8 optimization with AVX instructions. (x86_64) 16 4 * 17 5 * Copyright(c) 2014 Intel Corporation. 18 6 * 19 - * This program is free software; you can redistribute it and/or modify 20 - * it under the terms of version 2 of the GNU General Public License as 21 - * published by the Free Software Foundation. 22 - * 23 - * This program is distributed in the hope that it will be useful, but 24 - * WITHOUT ANY WARRANTY; without even the implied warranty of 25 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 26 - * General Public License for more details. 27 - * 28 7 * Contact Information: 29 8 * James Guilford <james.guilford@intel.com> 30 9 * Sean Gulley <sean.m.gulley@intel.com> 31 10 * Chandramouli Narayanan <mouli@linux.intel.com> 11 + */ 12 + /* 13 + * This is AES128/192/256 CTR mode optimization implementation. It requires 14 + * the support of Intel(R) AESNI and AVX instructions. 32 15 * 33 - * BSD LICENSE 34 - * 35 - * Copyright(c) 2014 Intel Corporation. 36 - * 37 - * Redistribution and use in source and binary forms, with or without 38 - * modification, are permitted provided that the following conditions 39 - * are met: 40 - * 41 - * Redistributions of source code must retain the above copyright 42 - * notice, this list of conditions and the following disclaimer. 43 - * Redistributions in binary form must reproduce the above copyright 44 - * notice, this list of conditions and the following disclaimer in 45 - * the documentation and/or other materials provided with the 46 - * distribution. 47 - * Neither the name of Intel Corporation nor the names of its 48 - * contributors may be used to endorse or promote products derived 49 - * from this software without specific prior written permission. 50 - * 51 - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 52 - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 53 - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 54 - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 55 - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 56 - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 57 - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 58 - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 59 - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 60 - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 61 - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 62 - * 16 + * This work was inspired by the AES CTR mode optimization published 17 + * in Intel Optimized IPSEC Cryptographic library. 18 + * Additional information on it can be found at: 19 + * https://github.com/intel/intel-ipsec-mb 63 20 */ 64 21 65 22 #include <linux/linkage.h>
-12
arch/x86/crypto/blowfish_glue.c
··· 32 32 __blowfish_enc_blk(ctx, dst, src, false); 33 33 } 34 34 35 - static inline void blowfish_enc_blk_xor(struct bf_ctx *ctx, u8 *dst, 36 - const u8 *src) 37 - { 38 - __blowfish_enc_blk(ctx, dst, src, true); 39 - } 40 - 41 35 static inline void blowfish_enc_blk_4way(struct bf_ctx *ctx, u8 *dst, 42 36 const u8 *src) 43 37 { 44 38 __blowfish_enc_blk_4way(ctx, dst, src, false); 45 - } 46 - 47 - static inline void blowfish_enc_blk_xor_4way(struct bf_ctx *ctx, u8 *dst, 48 - const u8 *src) 49 - { 50 - __blowfish_enc_blk_4way(ctx, dst, src, true); 51 39 } 52 40 53 41 static void blowfish_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
-8
arch/x86/crypto/des3_ede_glue.c
··· 45 45 des3_ede_x86_64_crypt_blk(dec_ctx, dst, src); 46 46 } 47 47 48 - static inline void des3_ede_enc_blk_3way(struct des3_ede_x86_ctx *ctx, u8 *dst, 49 - const u8 *src) 50 - { 51 - u32 *enc_ctx = ctx->enc.expkey; 52 - 53 - des3_ede_x86_64_crypt_blk_3way(enc_ctx, dst, src); 54 - } 55 - 56 48 static inline void des3_ede_dec_blk_3way(struct des3_ede_x86_ctx *ctx, u8 *dst, 57 49 const u8 *src) 58 50 {
+517
arch/x86/crypto/sm3-avx-asm_64.S
··· 1 + /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 + /* 3 + * SM3 AVX accelerated transform. 4 + * specified in: https://datatracker.ietf.org/doc/html/draft-sca-cfrg-sm3-02 5 + * 6 + * Copyright (C) 2021 Jussi Kivilinna <jussi.kivilinna@iki.fi> 7 + * Copyright (C) 2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com> 8 + */ 9 + 10 + /* Based on SM3 AES/BMI2 accelerated work by libgcrypt at: 11 + * https://gnupg.org/software/libgcrypt/index.html 12 + */ 13 + 14 + #include <linux/linkage.h> 15 + #include <asm/frame.h> 16 + 17 + /* Context structure */ 18 + 19 + #define state_h0 0 20 + #define state_h1 4 21 + #define state_h2 8 22 + #define state_h3 12 23 + #define state_h4 16 24 + #define state_h5 20 25 + #define state_h6 24 26 + #define state_h7 28 27 + 28 + /* Constants */ 29 + 30 + /* Round constant macros */ 31 + 32 + #define K0 2043430169 /* 0x79cc4519 */ 33 + #define K1 -208106958 /* 0xf3988a32 */ 34 + #define K2 -416213915 /* 0xe7311465 */ 35 + #define K3 -832427829 /* 0xce6228cb */ 36 + #define K4 -1664855657 /* 0x9cc45197 */ 37 + #define K5 965255983 /* 0x3988a32f */ 38 + #define K6 1930511966 /* 0x7311465e */ 39 + #define K7 -433943364 /* 0xe6228cbc */ 40 + #define K8 -867886727 /* 0xcc451979 */ 41 + #define K9 -1735773453 /* 0x988a32f3 */ 42 + #define K10 823420391 /* 0x311465e7 */ 43 + #define K11 1646840782 /* 0x6228cbce */ 44 + #define K12 -1001285732 /* 0xc451979c */ 45 + #define K13 -2002571463 /* 0x88a32f39 */ 46 + #define K14 289824371 /* 0x11465e73 */ 47 + #define K15 579648742 /* 0x228cbce6 */ 48 + #define K16 -1651869049 /* 0x9d8a7a87 */ 49 + #define K17 991229199 /* 0x3b14f50f */ 50 + #define K18 1982458398 /* 0x7629ea1e */ 51 + #define K19 -330050500 /* 0xec53d43c */ 52 + #define K20 -660100999 /* 0xd8a7a879 */ 53 + #define K21 -1320201997 /* 0xb14f50f3 */ 54 + #define K22 1654563303 /* 0x629ea1e7 */ 55 + #define K23 -985840690 /* 0xc53d43ce */ 56 + #define K24 -1971681379 /* 0x8a7a879d */ 57 + #define K25 351604539 /* 0x14f50f3b */ 58 + #define K26 703209078 /* 0x29ea1e76 */ 59 + #define K27 1406418156 /* 0x53d43cec */ 60 + #define K28 -1482130984 /* 0xa7a879d8 */ 61 + #define K29 1330705329 /* 0x4f50f3b1 */ 62 + #define K30 -1633556638 /* 0x9ea1e762 */ 63 + #define K31 1027854021 /* 0x3d43cec5 */ 64 + #define K32 2055708042 /* 0x7a879d8a */ 65 + #define K33 -183551212 /* 0xf50f3b14 */ 66 + #define K34 -367102423 /* 0xea1e7629 */ 67 + #define K35 -734204845 /* 0xd43cec53 */ 68 + #define K36 -1468409689 /* 0xa879d8a7 */ 69 + #define K37 1358147919 /* 0x50f3b14f */ 70 + #define K38 -1578671458 /* 0xa1e7629e */ 71 + #define K39 1137624381 /* 0x43cec53d */ 72 + #define K40 -2019718534 /* 0x879d8a7a */ 73 + #define K41 255530229 /* 0x0f3b14f5 */ 74 + #define K42 511060458 /* 0x1e7629ea */ 75 + #define K43 1022120916 /* 0x3cec53d4 */ 76 + #define K44 2044241832 /* 0x79d8a7a8 */ 77 + #define K45 -206483632 /* 0xf3b14f50 */ 78 + #define K46 -412967263 /* 0xe7629ea1 */ 79 + #define K47 -825934525 /* 0xcec53d43 */ 80 + #define K48 -1651869049 /* 0x9d8a7a87 */ 81 + #define K49 991229199 /* 0x3b14f50f */ 82 + #define K50 1982458398 /* 0x7629ea1e */ 83 + #define K51 -330050500 /* 0xec53d43c */ 84 + #define K52 -660100999 /* 0xd8a7a879 */ 85 + #define K53 -1320201997 /* 0xb14f50f3 */ 86 + #define K54 1654563303 /* 0x629ea1e7 */ 87 + #define K55 -985840690 /* 0xc53d43ce */ 88 + #define K56 -1971681379 /* 0x8a7a879d */ 89 + #define K57 351604539 /* 0x14f50f3b */ 90 + #define K58 703209078 /* 0x29ea1e76 */ 91 + #define K59 1406418156 /* 0x53d43cec */ 92 + #define K60 -1482130984 /* 0xa7a879d8 */ 93 + #define K61 1330705329 /* 0x4f50f3b1 */ 94 + #define K62 -1633556638 /* 0x9ea1e762 */ 95 + #define K63 1027854021 /* 0x3d43cec5 */ 96 + 97 + /* Register macros */ 98 + 99 + #define RSTATE %rdi 100 + #define RDATA %rsi 101 + #define RNBLKS %rdx 102 + 103 + #define t0 %eax 104 + #define t1 %ebx 105 + #define t2 %ecx 106 + 107 + #define a %r8d 108 + #define b %r9d 109 + #define c %r10d 110 + #define d %r11d 111 + #define e %r12d 112 + #define f %r13d 113 + #define g %r14d 114 + #define h %r15d 115 + 116 + #define W0 %xmm0 117 + #define W1 %xmm1 118 + #define W2 %xmm2 119 + #define W3 %xmm3 120 + #define W4 %xmm4 121 + #define W5 %xmm5 122 + 123 + #define XTMP0 %xmm6 124 + #define XTMP1 %xmm7 125 + #define XTMP2 %xmm8 126 + #define XTMP3 %xmm9 127 + #define XTMP4 %xmm10 128 + #define XTMP5 %xmm11 129 + #define XTMP6 %xmm12 130 + 131 + #define BSWAP_REG %xmm15 132 + 133 + /* Stack structure */ 134 + 135 + #define STACK_W_SIZE (32 * 2 * 3) 136 + #define STACK_REG_SAVE_SIZE (64) 137 + 138 + #define STACK_W (0) 139 + #define STACK_REG_SAVE (STACK_W + STACK_W_SIZE) 140 + #define STACK_SIZE (STACK_REG_SAVE + STACK_REG_SAVE_SIZE) 141 + 142 + /* Instruction helpers. */ 143 + 144 + #define roll2(v, reg) \ 145 + roll $(v), reg; 146 + 147 + #define roll3mov(v, src, dst) \ 148 + movl src, dst; \ 149 + roll $(v), dst; 150 + 151 + #define roll3(v, src, dst) \ 152 + rorxl $(32-(v)), src, dst; 153 + 154 + #define addl2(a, out) \ 155 + leal (a, out), out; 156 + 157 + /* Round function macros. */ 158 + 159 + #define GG1(x, y, z, o, t) \ 160 + movl x, o; \ 161 + xorl y, o; \ 162 + xorl z, o; 163 + 164 + #define FF1(x, y, z, o, t) GG1(x, y, z, o, t) 165 + 166 + #define GG2(x, y, z, o, t) \ 167 + andnl z, x, o; \ 168 + movl y, t; \ 169 + andl x, t; \ 170 + addl2(t, o); 171 + 172 + #define FF2(x, y, z, o, t) \ 173 + movl y, o; \ 174 + xorl x, o; \ 175 + movl y, t; \ 176 + andl x, t; \ 177 + andl z, o; \ 178 + xorl t, o; 179 + 180 + #define R(i, a, b, c, d, e, f, g, h, round, widx, wtype) \ 181 + /* rol(a, 12) => t0 */ \ 182 + roll3mov(12, a, t0); /* rorxl here would reduce perf by 6% on zen3 */ \ 183 + /* rol (t0 + e + t), 7) => t1 */ \ 184 + leal K##round(t0, e, 1), t1; \ 185 + roll2(7, t1); \ 186 + /* h + w1 => h */ \ 187 + addl wtype##_W1_ADDR(round, widx), h; \ 188 + /* h + t1 => h */ \ 189 + addl2(t1, h); \ 190 + /* t1 ^ t0 => t0 */ \ 191 + xorl t1, t0; \ 192 + /* w1w2 + d => d */ \ 193 + addl wtype##_W1W2_ADDR(round, widx), d; \ 194 + /* FF##i(a,b,c) => t1 */ \ 195 + FF##i(a, b, c, t1, t2); \ 196 + /* d + t1 => d */ \ 197 + addl2(t1, d); \ 198 + /* GG#i(e,f,g) => t2 */ \ 199 + GG##i(e, f, g, t2, t1); \ 200 + /* h + t2 => h */ \ 201 + addl2(t2, h); \ 202 + /* rol (f, 19) => f */ \ 203 + roll2(19, f); \ 204 + /* d + t0 => d */ \ 205 + addl2(t0, d); \ 206 + /* rol (b, 9) => b */ \ 207 + roll2(9, b); \ 208 + /* P0(h) => h */ \ 209 + roll3(9, h, t2); \ 210 + roll3(17, h, t1); \ 211 + xorl t2, h; \ 212 + xorl t1, h; 213 + 214 + #define R1(a, b, c, d, e, f, g, h, round, widx, wtype) \ 215 + R(1, a, b, c, d, e, f, g, h, round, widx, wtype) 216 + 217 + #define R2(a, b, c, d, e, f, g, h, round, widx, wtype) \ 218 + R(2, a, b, c, d, e, f, g, h, round, widx, wtype) 219 + 220 + /* Input expansion macros. */ 221 + 222 + /* Byte-swapped input address. */ 223 + #define IW_W_ADDR(round, widx, offs) \ 224 + (STACK_W + ((round) / 4) * 64 + (offs) + ((widx) * 4))(%rsp) 225 + 226 + /* Expanded input address.
*/ 227 + #define XW_W_ADDR(round, widx, offs) \ 228 + (STACK_W + ((((round) / 3) - 4) % 2) * 64 + (offs) + ((widx) * 4))(%rsp) 229 + 230 + /* Rounds 1-12, byte-swapped input block addresses. */ 231 + #define IW_W1_ADDR(round, widx) IW_W_ADDR(round, widx, 0) 232 + #define IW_W1W2_ADDR(round, widx) IW_W_ADDR(round, widx, 32) 233 + 234 + /* Rounds 1-12, expanded input block addresses. */ 235 + #define XW_W1_ADDR(round, widx) XW_W_ADDR(round, widx, 0) 236 + #define XW_W1W2_ADDR(round, widx) XW_W_ADDR(round, widx, 32) 237 + 238 + /* Input block loading. */ 239 + #define LOAD_W_XMM_1() \ 240 + vmovdqu 0*16(RDATA), XTMP0; /* XTMP0: w3, w2, w1, w0 */ \ 241 + vmovdqu 1*16(RDATA), XTMP1; /* XTMP1: w7, w6, w5, w4 */ \ 242 + vmovdqu 2*16(RDATA), XTMP2; /* XTMP2: w11, w10, w9, w8 */ \ 243 + vmovdqu 3*16(RDATA), XTMP3; /* XTMP3: w15, w14, w13, w12 */ \ 244 + vpshufb BSWAP_REG, XTMP0, XTMP0; \ 245 + vpshufb BSWAP_REG, XTMP1, XTMP1; \ 246 + vpshufb BSWAP_REG, XTMP2, XTMP2; \ 247 + vpshufb BSWAP_REG, XTMP3, XTMP3; \ 248 + vpxor XTMP0, XTMP1, XTMP4; \ 249 + vpxor XTMP1, XTMP2, XTMP5; \ 250 + vpxor XTMP2, XTMP3, XTMP6; \ 251 + leaq 64(RDATA), RDATA; \ 252 + vmovdqa XTMP0, IW_W1_ADDR(0, 0); \ 253 + vmovdqa XTMP4, IW_W1W2_ADDR(0, 0); \ 254 + vmovdqa XTMP1, IW_W1_ADDR(4, 0); \ 255 + vmovdqa XTMP5, IW_W1W2_ADDR(4, 0); 256 + 257 + #define LOAD_W_XMM_2() \ 258 + vmovdqa XTMP2, IW_W1_ADDR(8, 0); \ 259 + vmovdqa XTMP6, IW_W1W2_ADDR(8, 0); 260 + 261 + #define LOAD_W_XMM_3() \ 262 + vpshufd $0b00000000, XTMP0, W0; /* W0: xx, w0, xx, xx */ \ 263 + vpshufd $0b11111001, XTMP0, W1; /* W1: xx, w3, w2, w1 */ \ 264 + vmovdqa XTMP1, W2; /* W2: xx, w6, w5, w4 */ \ 265 + vpalignr $12, XTMP1, XTMP2, W3; /* W3: xx, w9, w8, w7 */ \ 266 + vpalignr $8, XTMP2, XTMP3, W4; /* W4: xx, w12, w11, w10 */ \ 267 + vpshufd $0b11111001, XTMP3, W5; /* W5: xx, w15, w14, w13 */ 268 + 269 + /* Message scheduling. Note: 3 words per XMM register. 
*/ 270 + #define SCHED_W_0(round, w0, w1, w2, w3, w4, w5) \ 271 + /* Load (w[i - 16]) => XTMP0 */ \ 272 + vpshufd $0b10111111, w0, XTMP0; \ 273 + vpalignr $12, XTMP0, w1, XTMP0; /* XTMP0: xx, w2, w1, w0 */ \ 274 + /* Load (w[i - 13]) => XTMP1 */ \ 275 + vpshufd $0b10111111, w1, XTMP1; \ 276 + vpalignr $12, XTMP1, w2, XTMP1; \ 277 + /* w[i - 9] == w3 */ \ 278 + /* XMM3 ^ XTMP0 => XTMP0 */ \ 279 + vpxor w3, XTMP0, XTMP0; 280 + 281 + #define SCHED_W_1(round, w0, w1, w2, w3, w4, w5) \ 282 + /* w[i - 3] == w5 */ \ 283 + /* rol(XMM5, 15) ^ XTMP0 => XTMP0 */ \ 284 + vpslld $15, w5, XTMP2; \ 285 + vpsrld $(32-15), w5, XTMP3; \ 286 + vpxor XTMP2, XTMP3, XTMP3; \ 287 + vpxor XTMP3, XTMP0, XTMP0; \ 288 + /* rol(XTMP1, 7) => XTMP1 */ \ 289 + vpslld $7, XTMP1, XTMP5; \ 290 + vpsrld $(32-7), XTMP1, XTMP1; \ 291 + vpxor XTMP5, XTMP1, XTMP1; \ 292 + /* XMM4 ^ XTMP1 => XTMP1 */ \ 293 + vpxor w4, XTMP1, XTMP1; \ 294 + /* w[i - 6] == XMM4 */ \ 295 + /* P1(XTMP0) ^ XTMP1 => XMM0 */ \ 296 + vpslld $15, XTMP0, XTMP5; \ 297 + vpsrld $(32-15), XTMP0, XTMP6; \ 298 + vpslld $23, XTMP0, XTMP2; \ 299 + vpsrld $(32-23), XTMP0, XTMP3; \ 300 + vpxor XTMP0, XTMP1, XTMP1; \ 301 + vpxor XTMP6, XTMP5, XTMP5; \ 302 + vpxor XTMP3, XTMP2, XTMP2; \ 303 + vpxor XTMP2, XTMP5, XTMP5; \ 304 + vpxor XTMP5, XTMP1, w0; 305 + 306 + #define SCHED_W_2(round, w0, w1, w2, w3, w4, w5) \ 307 + /* W1 in XMM12 */ \ 308 + vpshufd $0b10111111, w4, XTMP4; \ 309 + vpalignr $12, XTMP4, w5, XTMP4; \ 310 + vmovdqa XTMP4, XW_W1_ADDR((round), 0); \ 311 + /* W1 ^ W2 => XTMP1 */ \ 312 + vpxor w0, XTMP4, XTMP1; \ 313 + vmovdqa XTMP1, XW_W1W2_ADDR((round), 0); 314 + 315 + 316 + .section .rodata.cst16, "aM", @progbits, 16 317 + .align 16 318 + 319 + .Lbe32mask: 320 + .long 0x00010203, 0x04050607, 0x08090a0b, 0x0c0d0e0f 321 + 322 + .text 323 + 324 + /* 325 + * Transform nblocks*64 bytes (nblocks*16 32-bit words) at DATA. 
326 + * 327 + * void sm3_transform_avx(struct sm3_state *state, 328 + * const u8 *data, int nblocks); 329 + */ 330 + .align 16 331 + SYM_FUNC_START(sm3_transform_avx) 332 + /* input: 333 + * %rdi: ctx, CTX 334 + * %rsi: data (64*nblks bytes) 335 + * %rdx: nblocks 336 + */ 337 + vzeroupper; 338 + 339 + pushq %rbp; 340 + movq %rsp, %rbp; 341 + 342 + movq %rdx, RNBLKS; 343 + 344 + subq $STACK_SIZE, %rsp; 345 + andq $(~63), %rsp; 346 + 347 + movq %rbx, (STACK_REG_SAVE + 0 * 8)(%rsp); 348 + movq %r15, (STACK_REG_SAVE + 1 * 8)(%rsp); 349 + movq %r14, (STACK_REG_SAVE + 2 * 8)(%rsp); 350 + movq %r13, (STACK_REG_SAVE + 3 * 8)(%rsp); 351 + movq %r12, (STACK_REG_SAVE + 4 * 8)(%rsp); 352 + 353 + vmovdqa .Lbe32mask (%rip), BSWAP_REG; 354 + 355 + /* Get the values of the chaining variables. */ 356 + movl state_h0(RSTATE), a; 357 + movl state_h1(RSTATE), b; 358 + movl state_h2(RSTATE), c; 359 + movl state_h3(RSTATE), d; 360 + movl state_h4(RSTATE), e; 361 + movl state_h5(RSTATE), f; 362 + movl state_h6(RSTATE), g; 363 + movl state_h7(RSTATE), h; 364 + 365 + .align 16 366 + .Loop: 367 + /* Load data part1. */ 368 + LOAD_W_XMM_1(); 369 + 370 + leaq -1(RNBLKS), RNBLKS; 371 + 372 + /* Transform 0-3 + Load data part2. */ 373 + R1(a, b, c, d, e, f, g, h, 0, 0, IW); LOAD_W_XMM_2(); 374 + R1(d, a, b, c, h, e, f, g, 1, 1, IW); 375 + R1(c, d, a, b, g, h, e, f, 2, 2, IW); 376 + R1(b, c, d, a, f, g, h, e, 3, 3, IW); LOAD_W_XMM_3(); 377 + 378 + /* Transform 4-7 + Precalc 12-14. */ 379 + R1(a, b, c, d, e, f, g, h, 4, 0, IW); 380 + R1(d, a, b, c, h, e, f, g, 5, 1, IW); 381 + R1(c, d, a, b, g, h, e, f, 6, 2, IW); SCHED_W_0(12, W0, W1, W2, W3, W4, W5); 382 + R1(b, c, d, a, f, g, h, e, 7, 3, IW); SCHED_W_1(12, W0, W1, W2, W3, W4, W5); 383 + 384 + /* Transform 8-11 + Precalc 12-17. 
*/ 385 + R1(a, b, c, d, e, f, g, h, 8, 0, IW); SCHED_W_2(12, W0, W1, W2, W3, W4, W5); 386 + R1(d, a, b, c, h, e, f, g, 9, 1, IW); SCHED_W_0(15, W1, W2, W3, W4, W5, W0); 387 + R1(c, d, a, b, g, h, e, f, 10, 2, IW); SCHED_W_1(15, W1, W2, W3, W4, W5, W0); 388 + R1(b, c, d, a, f, g, h, e, 11, 3, IW); SCHED_W_2(15, W1, W2, W3, W4, W5, W0); 389 + 390 + /* Transform 12-14 + Precalc 18-20 */ 391 + R1(a, b, c, d, e, f, g, h, 12, 0, XW); SCHED_W_0(18, W2, W3, W4, W5, W0, W1); 392 + R1(d, a, b, c, h, e, f, g, 13, 1, XW); SCHED_W_1(18, W2, W3, W4, W5, W0, W1); 393 + R1(c, d, a, b, g, h, e, f, 14, 2, XW); SCHED_W_2(18, W2, W3, W4, W5, W0, W1); 394 + 395 + /* Transform 15-17 + Precalc 21-23 */ 396 + R1(b, c, d, a, f, g, h, e, 15, 0, XW); SCHED_W_0(21, W3, W4, W5, W0, W1, W2); 397 + R2(a, b, c, d, e, f, g, h, 16, 1, XW); SCHED_W_1(21, W3, W4, W5, W0, W1, W2); 398 + R2(d, a, b, c, h, e, f, g, 17, 2, XW); SCHED_W_2(21, W3, W4, W5, W0, W1, W2); 399 + 400 + /* Transform 18-20 + Precalc 24-26 */ 401 + R2(c, d, a, b, g, h, e, f, 18, 0, XW); SCHED_W_0(24, W4, W5, W0, W1, W2, W3); 402 + R2(b, c, d, a, f, g, h, e, 19, 1, XW); SCHED_W_1(24, W4, W5, W0, W1, W2, W3); 403 + R2(a, b, c, d, e, f, g, h, 20, 2, XW); SCHED_W_2(24, W4, W5, W0, W1, W2, W3); 404 + 405 + /* Transform 21-23 + Precalc 27-29 */ 406 + R2(d, a, b, c, h, e, f, g, 21, 0, XW); SCHED_W_0(27, W5, W0, W1, W2, W3, W4); 407 + R2(c, d, a, b, g, h, e, f, 22, 1, XW); SCHED_W_1(27, W5, W0, W1, W2, W3, W4); 408 + R2(b, c, d, a, f, g, h, e, 23, 2, XW); SCHED_W_2(27, W5, W0, W1, W2, W3, W4); 409 + 410 + /* Transform 24-26 + Precalc 30-32 */ 411 + R2(a, b, c, d, e, f, g, h, 24, 0, XW); SCHED_W_0(30, W0, W1, W2, W3, W4, W5); 412 + R2(d, a, b, c, h, e, f, g, 25, 1, XW); SCHED_W_1(30, W0, W1, W2, W3, W4, W5); 413 + R2(c, d, a, b, g, h, e, f, 26, 2, XW); SCHED_W_2(30, W0, W1, W2, W3, W4, W5); 414 + 415 + /* Transform 27-29 + Precalc 33-35 */ 416 + R2(b, c, d, a, f, g, h, e, 27, 0, XW); SCHED_W_0(33, W1, W2, W3, W4, W5, W0); 417 + R2(a, b, c, 
d, e, f, g, h, 28, 1, XW); SCHED_W_1(33, W1, W2, W3, W4, W5, W0); 418 + R2(d, a, b, c, h, e, f, g, 29, 2, XW); SCHED_W_2(33, W1, W2, W3, W4, W5, W0); 419 + 420 + /* Transform 30-32 + Precalc 36-38 */ 421 + R2(c, d, a, b, g, h, e, f, 30, 0, XW); SCHED_W_0(36, W2, W3, W4, W5, W0, W1); 422 + R2(b, c, d, a, f, g, h, e, 31, 1, XW); SCHED_W_1(36, W2, W3, W4, W5, W0, W1); 423 + R2(a, b, c, d, e, f, g, h, 32, 2, XW); SCHED_W_2(36, W2, W3, W4, W5, W0, W1); 424 + 425 + /* Transform 33-35 + Precalc 39-41 */ 426 + R2(d, a, b, c, h, e, f, g, 33, 0, XW); SCHED_W_0(39, W3, W4, W5, W0, W1, W2); 427 + R2(c, d, a, b, g, h, e, f, 34, 1, XW); SCHED_W_1(39, W3, W4, W5, W0, W1, W2); 428 + R2(b, c, d, a, f, g, h, e, 35, 2, XW); SCHED_W_2(39, W3, W4, W5, W0, W1, W2); 429 + 430 + /* Transform 36-38 + Precalc 42-44 */ 431 + R2(a, b, c, d, e, f, g, h, 36, 0, XW); SCHED_W_0(42, W4, W5, W0, W1, W2, W3); 432 + R2(d, a, b, c, h, e, f, g, 37, 1, XW); SCHED_W_1(42, W4, W5, W0, W1, W2, W3); 433 + R2(c, d, a, b, g, h, e, f, 38, 2, XW); SCHED_W_2(42, W4, W5, W0, W1, W2, W3); 434 + 435 + /* Transform 39-41 + Precalc 45-47 */ 436 + R2(b, c, d, a, f, g, h, e, 39, 0, XW); SCHED_W_0(45, W5, W0, W1, W2, W3, W4); 437 + R2(a, b, c, d, e, f, g, h, 40, 1, XW); SCHED_W_1(45, W5, W0, W1, W2, W3, W4); 438 + R2(d, a, b, c, h, e, f, g, 41, 2, XW); SCHED_W_2(45, W5, W0, W1, W2, W3, W4); 439 + 440 + /* Transform 42-44 + Precalc 48-50 */ 441 + R2(c, d, a, b, g, h, e, f, 42, 0, XW); SCHED_W_0(48, W0, W1, W2, W3, W4, W5); 442 + R2(b, c, d, a, f, g, h, e, 43, 1, XW); SCHED_W_1(48, W0, W1, W2, W3, W4, W5); 443 + R2(a, b, c, d, e, f, g, h, 44, 2, XW); SCHED_W_2(48, W0, W1, W2, W3, W4, W5); 444 + 445 + /* Transform 45-47 + Precalc 51-53 */ 446 + R2(d, a, b, c, h, e, f, g, 45, 0, XW); SCHED_W_0(51, W1, W2, W3, W4, W5, W0); 447 + R2(c, d, a, b, g, h, e, f, 46, 1, XW); SCHED_W_1(51, W1, W2, W3, W4, W5, W0); 448 + R2(b, c, d, a, f, g, h, e, 47, 2, XW); SCHED_W_2(51, W1, W2, W3, W4, W5, W0); 449 + 450 + /* Transform 48-50 + 
Precalc 54-56 */ 451 + R2(a, b, c, d, e, f, g, h, 48, 0, XW); SCHED_W_0(54, W2, W3, W4, W5, W0, W1); 452 + R2(d, a, b, c, h, e, f, g, 49, 1, XW); SCHED_W_1(54, W2, W3, W4, W5, W0, W1); 453 + R2(c, d, a, b, g, h, e, f, 50, 2, XW); SCHED_W_2(54, W2, W3, W4, W5, W0, W1); 454 + 455 + /* Transform 51-53 + Precalc 57-59 */ 456 + R2(b, c, d, a, f, g, h, e, 51, 0, XW); SCHED_W_0(57, W3, W4, W5, W0, W1, W2); 457 + R2(a, b, c, d, e, f, g, h, 52, 1, XW); SCHED_W_1(57, W3, W4, W5, W0, W1, W2); 458 + R2(d, a, b, c, h, e, f, g, 53, 2, XW); SCHED_W_2(57, W3, W4, W5, W0, W1, W2); 459 + 460 + /* Transform 54-56 + Precalc 60-62 */ 461 + R2(c, d, a, b, g, h, e, f, 54, 0, XW); SCHED_W_0(60, W4, W5, W0, W1, W2, W3); 462 + R2(b, c, d, a, f, g, h, e, 55, 1, XW); SCHED_W_1(60, W4, W5, W0, W1, W2, W3); 463 + R2(a, b, c, d, e, f, g, h, 56, 2, XW); SCHED_W_2(60, W4, W5, W0, W1, W2, W3); 464 + 465 + /* Transform 57-59 + Precalc 63 */ 466 + R2(d, a, b, c, h, e, f, g, 57, 0, XW); SCHED_W_0(63, W5, W0, W1, W2, W3, W4); 467 + R2(c, d, a, b, g, h, e, f, 58, 1, XW); 468 + R2(b, c, d, a, f, g, h, e, 59, 2, XW); SCHED_W_1(63, W5, W0, W1, W2, W3, W4); 469 + 470 + /* Transform 60-62 + Precalc 63 */ 471 + R2(a, b, c, d, e, f, g, h, 60, 0, XW); 472 + R2(d, a, b, c, h, e, f, g, 61, 1, XW); SCHED_W_2(63, W5, W0, W1, W2, W3, W4); 473 + R2(c, d, a, b, g, h, e, f, 62, 2, XW); 474 + 475 + /* Transform 63 */ 476 + R2(b, c, d, a, f, g, h, e, 63, 0, XW); 477 + 478 + /* Update the chaining variables. 
*/ 479 + xorl state_h0(RSTATE), a; 480 + xorl state_h1(RSTATE), b; 481 + xorl state_h2(RSTATE), c; 482 + xorl state_h3(RSTATE), d; 483 + movl a, state_h0(RSTATE); 484 + movl b, state_h1(RSTATE); 485 + movl c, state_h2(RSTATE); 486 + movl d, state_h3(RSTATE); 487 + xorl state_h4(RSTATE), e; 488 + xorl state_h5(RSTATE), f; 489 + xorl state_h6(RSTATE), g; 490 + xorl state_h7(RSTATE), h; 491 + movl e, state_h4(RSTATE); 492 + movl f, state_h5(RSTATE); 493 + movl g, state_h6(RSTATE); 494 + movl h, state_h7(RSTATE); 495 + 496 + cmpq $0, RNBLKS; 497 + jne .Loop; 498 + 499 + vzeroall; 500 + 501 + movq (STACK_REG_SAVE + 0 * 8)(%rsp), %rbx; 502 + movq (STACK_REG_SAVE + 1 * 8)(%rsp), %r15; 503 + movq (STACK_REG_SAVE + 2 * 8)(%rsp), %r14; 504 + movq (STACK_REG_SAVE + 3 * 8)(%rsp), %r13; 505 + movq (STACK_REG_SAVE + 4 * 8)(%rsp), %r12; 506 + 507 + vmovdqa %xmm0, IW_W1_ADDR(0, 0); 508 + vmovdqa %xmm0, IW_W1W2_ADDR(0, 0); 509 + vmovdqa %xmm0, IW_W1_ADDR(4, 0); 510 + vmovdqa %xmm0, IW_W1W2_ADDR(4, 0); 511 + vmovdqa %xmm0, IW_W1_ADDR(8, 0); 512 + vmovdqa %xmm0, IW_W1W2_ADDR(8, 0); 513 + 514 + movq %rbp, %rsp; 515 + popq %rbp; 516 + ret; 517 + SYM_FUNC_END(sm3_transform_avx)
+134
arch/x86/crypto/sm3_avx_glue.c
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 * SM3 Secure Hash Algorithm, AVX assembler accelerated.
 * specified in: https://datatracker.ietf.org/doc/html/draft-sca-cfrg-sm3-02
 *
 * Copyright (C) 2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <crypto/internal/hash.h>
#include <crypto/internal/simd.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/types.h>
#include <crypto/sm3.h>
#include <crypto/sm3_base.h>
#include <asm/simd.h>

asmlinkage void sm3_transform_avx(struct sm3_state *state,
                                  const u8 *data, int nblocks);

static int sm3_avx_update(struct shash_desc *desc, const u8 *data,
                          unsigned int len)
{
        struct sm3_state *sctx = shash_desc_ctx(desc);

        if (!crypto_simd_usable() ||
            (sctx->count % SM3_BLOCK_SIZE) + len < SM3_BLOCK_SIZE) {
                sm3_update(sctx, data, len);
                return 0;
        }

        /*
         * Make sure struct sm3_state begins directly with the SM3
         * 256-bit internal state, as this is what the asm functions expect.
         */
        BUILD_BUG_ON(offsetof(struct sm3_state, state) != 0);

        kernel_fpu_begin();
        sm3_base_do_update(desc, data, len, sm3_transform_avx);
        kernel_fpu_end();

        return 0;
}

static int sm3_avx_finup(struct shash_desc *desc, const u8 *data,
                         unsigned int len, u8 *out)
{
        if (!crypto_simd_usable()) {
                struct sm3_state *sctx = shash_desc_ctx(desc);

                if (len)
                        sm3_update(sctx, data, len);

                sm3_final(sctx, out);
                return 0;
        }

        kernel_fpu_begin();
        if (len)
                sm3_base_do_update(desc, data, len, sm3_transform_avx);
        sm3_base_do_finalize(desc, sm3_transform_avx);
        kernel_fpu_end();

        return sm3_base_finish(desc, out);
}

static int sm3_avx_final(struct shash_desc *desc, u8 *out)
{
        if (!crypto_simd_usable()) {
                sm3_final(shash_desc_ctx(desc), out);
                return 0;
        }

        kernel_fpu_begin();
        sm3_base_do_finalize(desc, sm3_transform_avx);
        kernel_fpu_end();

        return sm3_base_finish(desc, out);
}

static struct shash_alg sm3_avx_alg = {
        .digestsize     = SM3_DIGEST_SIZE,
        .init           = sm3_base_init,
        .update         = sm3_avx_update,
        .final          = sm3_avx_final,
        .finup          = sm3_avx_finup,
        .descsize       = sizeof(struct sm3_state),
        .base = {
                .cra_name        = "sm3",
                .cra_driver_name = "sm3-avx",
                .cra_priority    = 300,
                .cra_blocksize   = SM3_BLOCK_SIZE,
                .cra_module      = THIS_MODULE,
        }
};

static int __init sm3_avx_mod_init(void)
{
        const char *feature_name;

        if (!boot_cpu_has(X86_FEATURE_AVX)) {
                pr_info("AVX instruction are not detected.\n");
                return -ENODEV;
        }

        if (!boot_cpu_has(X86_FEATURE_BMI2)) {
                pr_info("BMI2 instruction are not detected.\n");
                return -ENODEV;
        }

        if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM,
                               &feature_name)) {
                pr_info("CPU feature '%s' is not supported.\n", feature_name);
                return -ENODEV;
        }

        return crypto_register_shash(&sm3_avx_alg);
}

static void __exit sm3_avx_mod_exit(void)
{
        crypto_unregister_shash(&sm3_avx_alg);
}

module_init(sm3_avx_mod_init);
module_exit(sm3_avx_mod_exit);

MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Tianjia Zhang <tianjia.zhang@linux.alibaba.com>");
MODULE_DESCRIPTION("SM3 Secure Hash Algorithm, AVX assembler accelerated");
MODULE_ALIAS_CRYPTO("sm3");
MODULE_ALIAS_CRYPTO("sm3-avx");
+28 -14
arch/x86/include/asm/xor.h
···
 	op(i + 3, 3)

 static void
-xor_sse_2(unsigned long bytes, unsigned long *p1, unsigned long *p2)
+xor_sse_2(unsigned long bytes, unsigned long * __restrict p1,
+	  const unsigned long * __restrict p2)
 {
 	unsigned long lines = bytes >> 8;

···
 }

 static void
-xor_sse_2_pf64(unsigned long bytes, unsigned long *p1, unsigned long *p2)
+xor_sse_2_pf64(unsigned long bytes, unsigned long * __restrict p1,
+	       const unsigned long * __restrict p2)
 {
 	unsigned long lines = bytes >> 8;

···
 }

 static void
-xor_sse_3(unsigned long bytes, unsigned long *p1, unsigned long *p2,
-	  unsigned long *p3)
+xor_sse_3(unsigned long bytes, unsigned long * __restrict p1,
+	  const unsigned long * __restrict p2,
+	  const unsigned long * __restrict p3)
 {
 	unsigned long lines = bytes >> 8;

···
 }

 static void
-xor_sse_3_pf64(unsigned long bytes, unsigned long *p1, unsigned long *p2,
-	       unsigned long *p3)
+xor_sse_3_pf64(unsigned long bytes, unsigned long * __restrict p1,
+	       const unsigned long * __restrict p2,
+	       const unsigned long * __restrict p3)
 {
 	unsigned long lines = bytes >> 8;

···
 }

 static void
-xor_sse_4(unsigned long bytes, unsigned long *p1, unsigned long *p2,
-	  unsigned long *p3, unsigned long *p4)
+xor_sse_4(unsigned long bytes, unsigned long * __restrict p1,
+	  const unsigned long * __restrict p2,
+	  const unsigned long * __restrict p3,
+	  const unsigned long * __restrict p4)
 {
 	unsigned long lines = bytes >> 8;

···
 }

 static void
-xor_sse_4_pf64(unsigned long bytes, unsigned long *p1, unsigned long *p2,
-	       unsigned long *p3, unsigned long *p4)
+xor_sse_4_pf64(unsigned long bytes, unsigned long * __restrict p1,
+	       const unsigned long * __restrict p2,
+	       const unsigned long * __restrict p3,
+	       const unsigned long * __restrict p4)
 {
 	unsigned long lines = bytes >> 8;

···
 }

 static void
-xor_sse_5(unsigned long bytes, unsigned long *p1, unsigned long *p2,
-	  unsigned long *p3, unsigned long *p4, unsigned long *p5)
+xor_sse_5(unsigned long bytes, unsigned long * __restrict p1,
+	  const unsigned long * __restrict p2,
+	  const unsigned long * __restrict p3,
+	  const unsigned long * __restrict p4,
+	  const unsigned long * __restrict p5)
 {
 	unsigned long lines = bytes >> 8;

···
 }

 static void
-xor_sse_5_pf64(unsigned long bytes, unsigned long *p1, unsigned long *p2,
-	       unsigned long *p3, unsigned long *p4, unsigned long *p5)
+xor_sse_5_pf64(unsigned long bytes, unsigned long * __restrict p1,
+	       const unsigned long * __restrict p2,
+	       const unsigned long * __restrict p3,
+	       const unsigned long * __restrict p4,
+	       const unsigned long * __restrict p5)
 {
 	unsigned long lines = bytes >> 8;
+28 -14
arch/x86/include/asm/xor_32.h
···
 #include <asm/fpu/api.h>

 static void
-xor_pII_mmx_2(unsigned long bytes, unsigned long *p1, unsigned long *p2)
+xor_pII_mmx_2(unsigned long bytes, unsigned long * __restrict p1,
+	      const unsigned long * __restrict p2)
 {
 	unsigned long lines = bytes >> 7;

···
 }

 static void
-xor_pII_mmx_3(unsigned long bytes, unsigned long *p1, unsigned long *p2,
-	      unsigned long *p3)
+xor_pII_mmx_3(unsigned long bytes, unsigned long * __restrict p1,
+	      const unsigned long * __restrict p2,
+	      const unsigned long * __restrict p3)
 {
 	unsigned long lines = bytes >> 7;

···
 }

 static void
-xor_pII_mmx_4(unsigned long bytes, unsigned long *p1, unsigned long *p2,
-	      unsigned long *p3, unsigned long *p4)
+xor_pII_mmx_4(unsigned long bytes, unsigned long * __restrict p1,
+	      const unsigned long * __restrict p2,
+	      const unsigned long * __restrict p3,
+	      const unsigned long * __restrict p4)
 {
 	unsigned long lines = bytes >> 7;

···

 static void
-xor_pII_mmx_5(unsigned long bytes, unsigned long *p1, unsigned long *p2,
-	      unsigned long *p3, unsigned long *p4, unsigned long *p5)
+xor_pII_mmx_5(unsigned long bytes, unsigned long * __restrict p1,
+	      const unsigned long * __restrict p2,
+	      const unsigned long * __restrict p3,
+	      const unsigned long * __restrict p4,
+	      const unsigned long * __restrict p5)
 {
 	unsigned long lines = bytes >> 7;

···
 #undef BLOCK

 static void
-xor_p5_mmx_2(unsigned long bytes, unsigned long *p1, unsigned long *p2)
+xor_p5_mmx_2(unsigned long bytes, unsigned long * __restrict p1,
+	     const unsigned long * __restrict p2)
 {
 	unsigned long lines = bytes >> 6;

···
 }

 static void
-xor_p5_mmx_3(unsigned long bytes, unsigned long *p1, unsigned long *p2,
-	     unsigned long *p3)
+xor_p5_mmx_3(unsigned long bytes, unsigned long * __restrict p1,
+	     const unsigned long * __restrict p2,
+	     const unsigned long * __restrict p3)
 {
 	unsigned long lines = bytes >> 6;

···
 }

 static void
-xor_p5_mmx_4(unsigned long bytes, unsigned long *p1, unsigned long *p2,
-	     unsigned long *p3, unsigned long *p4)
+xor_p5_mmx_4(unsigned long bytes, unsigned long * __restrict p1,
+	     const unsigned long * __restrict p2,
+	     const unsigned long * __restrict p3,
+	     const unsigned long * __restrict p4)
 {
 	unsigned long lines = bytes >> 6;

···
 }

 static void
-xor_p5_mmx_5(unsigned long bytes, unsigned long *p1, unsigned long *p2,
-	     unsigned long *p3, unsigned long *p4, unsigned long *p5)
+xor_p5_mmx_5(unsigned long bytes, unsigned long * __restrict p1,
+	     const unsigned long * __restrict p2,
+	     const unsigned long * __restrict p3,
+	     const unsigned long * __restrict p4,
+	     const unsigned long * __restrict p5)
 {
 	unsigned long lines = bytes >> 6;
+14 -7
arch/x86/include/asm/xor_avx.h
···
 	BLOCK4(8) \
 	BLOCK4(12)

-static void xor_avx_2(unsigned long bytes, unsigned long *p0, unsigned long *p1)
+static void xor_avx_2(unsigned long bytes, unsigned long * __restrict p0,
+		      const unsigned long * __restrict p1)
 {
 	unsigned long lines = bytes >> 9;

···
 	kernel_fpu_end();
 }

-static void xor_avx_3(unsigned long bytes, unsigned long *p0, unsigned long *p1,
-	unsigned long *p2)
+static void xor_avx_3(unsigned long bytes, unsigned long * __restrict p0,
+		      const unsigned long * __restrict p1,
+		      const unsigned long * __restrict p2)
 {
 	unsigned long lines = bytes >> 9;

···
 	kernel_fpu_end();
 }

-static void xor_avx_4(unsigned long bytes, unsigned long *p0, unsigned long *p1,
-	unsigned long *p2, unsigned long *p3)
+static void xor_avx_4(unsigned long bytes, unsigned long * __restrict p0,
+		      const unsigned long * __restrict p1,
+		      const unsigned long * __restrict p2,
+		      const unsigned long * __restrict p3)
 {
 	unsigned long lines = bytes >> 9;

···
 	kernel_fpu_end();
 }

-static void xor_avx_5(unsigned long bytes, unsigned long *p0, unsigned long *p1,
-	unsigned long *p2, unsigned long *p3, unsigned long *p4)
+static void xor_avx_5(unsigned long bytes, unsigned long * __restrict p0,
+		      const unsigned long * __restrict p1,
+		      const unsigned long * __restrict p2,
+		      const unsigned long * __restrict p3,
+		      const unsigned long * __restrict p4)
 {
 	unsigned long lines = bytes >> 9;
+24 -1
crypto/Kconfig
···
 	help
 	  Generic implementation of the Diffie-Hellman algorithm.

+config CRYPTO_DH_RFC7919_GROUPS
+	bool "Support for RFC 7919 FFDHE group parameters"
+	depends on CRYPTO_DH
+	select CRYPTO_RNG_DEFAULT
+	help
+	  Provide support for RFC 7919 FFDHE group parameters. If unsure, say N.
+
 config CRYPTO_ECC
 	tristate
 	select CRYPTO_RNG_DEFAULT
···

 config CRYPTO_SM2
 	tristate "SM2 algorithm"
-	select CRYPTO_SM3
+	select CRYPTO_LIB_SM3
 	select CRYPTO_AKCIPHER
 	select CRYPTO_MANAGER
 	select MPILIB
···
 	select CRYPTO_SKCIPHER
 	select CRYPTO_MANAGER
 	select CRYPTO_GF128MUL
+	select CRYPTO_ECB
 	help
 	  LRW: Liskov Rivest Wagner, a tweakable, non malleable, non movable
 	  narrow block cipher mode for dm-crypt. Use it with cipher
···
 config CRYPTO_SM3
 	tristate "SM3 digest algorithm"
 	select CRYPTO_HASH
+	select CRYPTO_LIB_SM3
 	help
 	  SM3 secure hash function as defined by OSCCA GM/T 0004-2012 SM3).
 	  It is part of the Chinese Commercial Cryptography suite.
···
 	  References:
 	  http://www.oscca.gov.cn/UpFile/20101222141857786.pdf
 	  https://datatracker.ietf.org/doc/html/draft-shen-sm3-hash
+
+config CRYPTO_SM3_AVX_X86_64
+	tristate "SM3 digest algorithm (x86_64/AVX)"
+	depends on X86 && 64BIT
+	select CRYPTO_HASH
+	select CRYPTO_LIB_SM3
+	help
+	  SM3 secure hash function as defined by OSCCA GM/T 0004-2012 SM3).
+	  It is part of the Chinese Commercial Cryptography suite. This is
+	  SM3 optimized implementation using Advanced Vector Extensions (AVX)
+	  when available.
+
+	  If unsure, say N.

 config CRYPTO_STREEBOG
 	tristate "Streebog Hash Function"
···

 config CRYPTO_KDF800108_CTR
 	tristate
+	select CRYPTO_HMAC
 	select CRYPTO_SHA256

 config CRYPTO_USER_API
+43 -5
crypto/algapi.c
···
  */

 #include <crypto/algapi.h>
+#include <crypto/internal/simd.h>
 #include <linux/err.h>
 #include <linux/errno.h>
 #include <linux/fips.h>
···
 #include "internal.h"

 static LIST_HEAD(crypto_template_list);
+
+#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
+DEFINE_PER_CPU(bool, crypto_simd_disabled_for_test);
+EXPORT_PER_CPU_SYMBOL_GPL(crypto_simd_disabled_for_test);
+#endif

 static inline void crypto_check_module_sig(struct module *mod)
 {
···
 found:
 	q->cra_flags |= CRYPTO_ALG_DEAD;
 	alg = test->adult;
-	if (err || list_empty(&alg->cra_list))
+
+	if (list_empty(&alg->cra_list))
 		goto complete;
+
+	if (err == -ECANCELED)
+		alg->cra_flags |= CRYPTO_ALG_FIPS_INTERNAL;
+	else if (err)
+		goto complete;
+	else
+		alg->cra_flags &= ~CRYPTO_ALG_FIPS_INTERNAL;

 	alg->cra_flags |= CRYPTO_ALG_TESTED;

···
 {
 	struct crypto_larval *larval;
 	struct crypto_spawn *spawn;
+	u32 fips_internal = 0;
 	int err;

 	err = crypto_check_alg(&inst->alg);
···
 		spawn->inst = inst;
 		spawn->registered = true;

+		fips_internal |= spawn->alg->cra_flags;
+
 		crypto_mod_put(spawn->alg);

 		spawn = next;
 	}
+
+	inst->alg.cra_flags |= (fips_internal & CRYPTO_ALG_FIPS_INTERNAL);

 	larval = __crypto_register_alg(&inst->alg);
 	if (IS_ERR(larval))
···
 	if (IS_ERR(name))
 		return PTR_ERR(name);

-	alg = crypto_find_alg(name, spawn->frontend, type, mask);
+	alg = crypto_find_alg(name, spawn->frontend,
+			      type | CRYPTO_ALG_FIPS_INTERNAL, mask);
 	if (IS_ERR(alg))
 		return PTR_ERR(alg);

···
 	}

 	while (IS_ENABLED(CONFIG_64BIT) && len >= 8 && !(relalign & 7)) {
-		*(u64 *)dst = *(u64 *)src1 ^ *(u64 *)src2;
+		if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
+			u64 l = get_unaligned((u64 *)src1) ^
+				get_unaligned((u64 *)src2);
+			put_unaligned(l, (u64 *)dst);
+		} else {
+			*(u64 *)dst = *(u64 *)src1 ^ *(u64 *)src2;
+		}
 		dst += 8;
 		src1 += 8;
 		src2 += 8;
···
 	}

 	while (len >= 4 && !(relalign & 3)) {
-		*(u32 *)dst = *(u32 *)src1 ^ *(u32 *)src2;
+		if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
+			u32 l = get_unaligned((u32 *)src1) ^
+				get_unaligned((u32 *)src2);
+			put_unaligned(l, (u32 *)dst);
+		} else {
+			*(u32 *)dst = *(u32 *)src1 ^ *(u32 *)src2;
+		}
 		dst += 4;
 		src1 += 4;
 		src2 += 4;
···
 	}

 	while (len >= 2 && !(relalign & 1)) {
-		*(u16 *)dst = *(u16 *)src1 ^ *(u16 *)src2;
+		if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
+			u16 l = get_unaligned((u16 *)src1) ^
+				get_unaligned((u16 *)src2);
+			put_unaligned(l, (u16 *)dst);
+		} else {
+			*(u16 *)dst = *(u16 *)src1 ^ *(u16 *)src2;
+		}
 		dst += 2;
 		src1 += 2;
 		src2 += 2;
+17 -2
crypto/api.c
··· 223 223 else if (crypto_is_test_larval(larval) && 224 224 !(alg->cra_flags & CRYPTO_ALG_TESTED)) 225 225 alg = ERR_PTR(-EAGAIN); 226 + else if (alg->cra_flags & CRYPTO_ALG_FIPS_INTERNAL) 227 + alg = ERR_PTR(-EAGAIN); 226 228 else if (!crypto_mod_get(alg)) 227 229 alg = ERR_PTR(-EAGAIN); 228 230 crypto_mod_put(&larval->alg); ··· 235 233 static struct crypto_alg *crypto_alg_lookup(const char *name, u32 type, 236 234 u32 mask) 237 235 { 236 + const u32 fips = CRYPTO_ALG_FIPS_INTERNAL; 238 237 struct crypto_alg *alg; 239 238 u32 test = 0; 240 239 ··· 243 240 test |= CRYPTO_ALG_TESTED; 244 241 245 242 down_read(&crypto_alg_sem); 246 - alg = __crypto_alg_lookup(name, type | test, mask | test); 247 - if (!alg && test) { 243 + alg = __crypto_alg_lookup(name, (type | test) & ~fips, 244 + (mask | test) & ~fips); 245 + if (alg) { 246 + if (((type | mask) ^ fips) & fips) 247 + mask |= fips; 248 + mask &= fips; 249 + 250 + if (!crypto_is_larval(alg) && 251 + ((type ^ alg->cra_flags) & mask)) { 252 + /* Algorithm is disallowed in FIPS mode. */ 253 + crypto_mod_put(alg); 254 + alg = ERR_PTR(-ENOENT); 255 + } 256 + } else if (test) { 248 257 alg = __crypto_alg_lookup(name, type, mask); 249 258 if (alg && !crypto_is_larval(alg)) { 250 259 /* Test failed */
+1 -1
crypto/asymmetric_keys/signature.c
··· 35 35 EXPORT_SYMBOL_GPL(public_key_signature_free); 36 36 37 37 /** 38 - * query_asymmetric_key - Get information about an aymmetric key. 38 + * query_asymmetric_key - Get information about an asymmetric key. 39 39 * @params: Various parameters. 40 40 * @info: Where to put the information. 41 41 */
+1 -1
crypto/asymmetric_keys/x509_parser.h
··· 22 22 time64_t valid_to; 23 23 const void *tbs; /* Signed data */ 24 24 unsigned tbs_size; /* Size of signed data */ 25 - unsigned raw_sig_size; /* Size of sigature */ 25 + unsigned raw_sig_size; /* Size of signature */ 26 26 const void *raw_sig; /* Signature data */ 27 27 const void *raw_serial; /* Raw serial number in ASN.1 */ 28 28 unsigned raw_serial_size;
+4 -4
crypto/async_tx/async_xor.c
··· 170 170 * 171 171 * xor_blocks always uses the dest as a source so the 172 172 * ASYNC_TX_XOR_ZERO_DST flag must be set to not include dest data in 173 - * the calculation. The assumption with dma eninges is that they only 174 - * use the destination buffer as a source when it is explicity specified 173 + * the calculation. The assumption with dma engines is that they only 174 + * use the destination buffer as a source when it is explicitly specified 175 175 * in the source list. 176 176 * 177 177 * src_list note: if the dest is also a source it must be at index zero. ··· 261 261 * 262 262 * xor_blocks always uses the dest as a source so the 263 263 * ASYNC_TX_XOR_ZERO_DST flag must be set to not include dest data in 264 - * the calculation. The assumption with dma eninges is that they only 265 - * use the destination buffer as a source when it is explicity specified 264 + * the calculation. The assumption with dma engines is that they only 265 + * use the destination buffer as a source when it is explicitly specified 266 266 * in the source list. 267 267 * 268 268 * src_list note: if the dest is also a source it must be at index zero.
+2 -2
crypto/async_tx/raid6test.c
··· 217 217 err += test(12, &tests); 218 218 } 219 219 220 - /* the 24 disk case is special for ioatdma as it is the boudary point 220 + /* the 24 disk case is special for ioatdma as it is the boundary point 221 221 * at which it needs to switch from 8-source ops to 16-source 222 222 * ops for continuation (assumes DMA_HAS_PQ_CONTINUE is not set) 223 223 */ ··· 241 241 } 242 242 243 243 /* when compiled-in wait for drivers to load first (assumes dma drivers 244 - * are also compliled-in) 244 + * are also compiled-in) 245 245 */ 246 246 late_initcall(raid6_test); 247 247 module_exit(raid6_test_exit);
+1 -1
crypto/authenc.c
··· 253 253 dst = scatterwalk_ffwd(areq_ctx->dst, req->dst, req->assoclen); 254 254 255 255 skcipher_request_set_tfm(skreq, ctx->enc); 256 - skcipher_request_set_callback(skreq, aead_request_flags(req), 256 + skcipher_request_set_callback(skreq, flags, 257 257 req->base.complete, req->base.data); 258 258 skcipher_request_set_crypt(skreq, src, dst, 259 259 req->cryptlen - authsize, req->iv);
+1 -1
crypto/cfb.c
··· 1 - //SPDX-License-Identifier: GPL-2.0 1 + // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 3 * CFB: Cipher FeedBack mode 4 4 *
+1
crypto/crypto_engine.c
··· 53 53 dev_err(engine->dev, "failed to unprepare request\n"); 54 54 } 55 55 } 56 + lockdep_assert_in_softirq(); 56 57 req->complete(req, err); 57 58 58 59 kthread_queue_work(engine->kworker, &engine->pump_requests);
+663 -18
crypto/dh.c
··· 10 10 #include <crypto/internal/kpp.h> 11 11 #include <crypto/kpp.h> 12 12 #include <crypto/dh.h> 13 + #include <crypto/rng.h> 13 14 #include <linux/mpi.h> 14 15 15 16 struct dh_ctx { 16 17 MPI p; /* Value is guaranteed to be set. */ 17 - MPI q; /* Value is optional. */ 18 18 MPI g; /* Value is guaranteed to be set. */ 19 19 MPI xa; /* Value is guaranteed to be set. */ 20 20 }; ··· 22 22 static void dh_clear_ctx(struct dh_ctx *ctx) 23 23 { 24 24 mpi_free(ctx->p); 25 - mpi_free(ctx->q); 26 25 mpi_free(ctx->g); 27 26 mpi_free(ctx->xa); 28 27 memset(ctx, 0, sizeof(*ctx)); ··· 61 62 if (!ctx->p) 62 63 return -EINVAL; 63 64 64 - if (params->q && params->q_size) { 65 - ctx->q = mpi_read_raw_data(params->q, params->q_size); 66 - if (!ctx->q) 67 - return -EINVAL; 68 - } 69 - 70 65 ctx->g = mpi_read_raw_data(params->g, params->g_size); 71 66 if (!ctx->g) 72 67 return -EINVAL; ··· 97 104 /* 98 105 * SP800-56A public key verification: 99 106 * 100 - * * If Q is provided as part of the domain paramenters, a full validation 101 - * according to SP800-56A section 5.6.2.3.1 is performed. 107 + * * For the safe-prime groups in FIPS mode, Q can be computed 108 + * trivially from P and a full validation according to SP800-56A 109 + * section 5.6.2.3.1 is performed. 102 110 * 103 - * * If Q is not provided, a partial validation according to SP800-56A section 104 - * 5.6.2.3.2 is performed. 111 + * * For all other sets of group parameters, only a partial validation 112 + * according to SP800-56A section 5.6.2.3.2 is performed. 105 113 */ 106 114 static int dh_is_pubkey_valid(struct dh_ctx *ctx, MPI y) 107 115 { ··· 113 119 * Step 1: Verify that 2 <= y <= p - 2. 114 120 * 115 121 * The upper limit check is actually y < p instead of y < p - 1 116 - * as the mpi_sub_ui function is yet missing. 122 + * in order to save one mpi_sub_ui() invocation here. 
Note that 123 + * p - 1 is the non-trivial element of the subgroup of order 2 and 124 + * thus, the check on y^q below would fail if y == p - 1. 117 125 */ 118 126 if (mpi_cmp_ui(y, 1) < 1 || mpi_cmp(y, ctx->p) >= 0) 119 127 return -EINVAL; 120 128 121 - /* Step 2: Verify that 1 = y^q mod p */ 122 - if (ctx->q) { 123 - MPI val = mpi_alloc(0); 129 + /* 130 + * Step 2: Verify that 1 = y^q mod p 131 + * 132 + * For the safe-prime groups q = (p - 1)/2. 133 + */ 134 + if (fips_enabled) { 135 + MPI val, q; 124 136 int ret; 125 137 138 + val = mpi_alloc(0); 126 139 if (!val) 127 140 return -ENOMEM; 128 141 129 - ret = mpi_powm(val, y, ctx->q, ctx->p); 142 + q = mpi_alloc(mpi_get_nlimbs(ctx->p)); 143 + if (!q) { 144 + mpi_free(val); 145 + return -ENOMEM; 146 + } 130 147 148 + /* 149 + * ->p is odd, so no need to explicitly subtract one 150 + * from it before shifting to the right. 151 + */ 152 + mpi_rshift(q, ctx->p, 1); 153 + 154 + ret = mpi_powm(val, y, q, ctx->p); 155 + mpi_free(q); 131 156 if (ret) { 132 157 mpi_free(val); 133 158 return ret; ··· 276 263 }, 277 264 }; 278 265 266 + 267 + struct dh_safe_prime { 268 + unsigned int max_strength; 269 + unsigned int p_size; 270 + const char *p; 271 + }; 272 + 273 + static const char safe_prime_g[] = { 2 }; 274 + 275 + struct dh_safe_prime_instance_ctx { 276 + struct crypto_kpp_spawn dh_spawn; 277 + const struct dh_safe_prime *safe_prime; 278 + }; 279 + 280 + struct dh_safe_prime_tfm_ctx { 281 + struct crypto_kpp *dh_tfm; 282 + }; 283 + 284 + static void dh_safe_prime_free_instance(struct kpp_instance *inst) 285 + { 286 + struct dh_safe_prime_instance_ctx *ctx = kpp_instance_ctx(inst); 287 + 288 + crypto_drop_kpp(&ctx->dh_spawn); 289 + kfree(inst); 290 + } 291 + 292 + static inline struct dh_safe_prime_instance_ctx *dh_safe_prime_instance_ctx( 293 + struct crypto_kpp *tfm) 294 + { 295 + return kpp_instance_ctx(kpp_alg_instance(tfm)); 296 + } 297 + 298 + static int dh_safe_prime_init_tfm(struct crypto_kpp *tfm) 299 + { 300 + 
struct dh_safe_prime_instance_ctx *inst_ctx = 301 + dh_safe_prime_instance_ctx(tfm); 302 + struct dh_safe_prime_tfm_ctx *tfm_ctx = kpp_tfm_ctx(tfm); 303 + 304 + tfm_ctx->dh_tfm = crypto_spawn_kpp(&inst_ctx->dh_spawn); 305 + if (IS_ERR(tfm_ctx->dh_tfm)) 306 + return PTR_ERR(tfm_ctx->dh_tfm); 307 + 308 + return 0; 309 + } 310 + 311 + static void dh_safe_prime_exit_tfm(struct crypto_kpp *tfm) 312 + { 313 + struct dh_safe_prime_tfm_ctx *tfm_ctx = kpp_tfm_ctx(tfm); 314 + 315 + crypto_free_kpp(tfm_ctx->dh_tfm); 316 + } 317 + 318 + static u64 __add_u64_to_be(__be64 *dst, unsigned int n, u64 val) 319 + { 320 + unsigned int i; 321 + 322 + for (i = n; val && i > 0; --i) { 323 + u64 tmp = be64_to_cpu(dst[i - 1]); 324 + 325 + tmp += val; 326 + val = tmp >= val ? 0 : 1; 327 + dst[i - 1] = cpu_to_be64(tmp); 328 + } 329 + 330 + return val; 331 + } 332 + 333 + static void *dh_safe_prime_gen_privkey(const struct dh_safe_prime *safe_prime, 334 + unsigned int *key_size) 335 + { 336 + unsigned int n, oversampling_size; 337 + __be64 *key; 338 + int err; 339 + u64 h, o; 340 + 341 + /* 342 + * Generate a private key following NIST SP800-56Ar3, 343 + * sec. 5.6.1.1.1 and 5.6.1.1.3 resp.. 344 + * 345 + * 5.6.1.1.1: choose key length N such that 346 + * 2 * ->max_strength <= N <= log2(q) + 1 = ->p_size * 8 - 1 347 + * with q = (p - 1) / 2 for the safe-prime groups. 348 + * Choose the lower bound's next power of two for N in order to 349 + * avoid excessively large private keys while still 350 + * maintaining some extra reserve beyond the bare minimum in 351 + * most cases. Note that for each entry in safe_prime_groups[], 352 + * the following holds for such N: 353 + * - N >= 256, in particular it is a multiple of 2^6 = 64 354 + * bits and 355 + * - N < log2(q) + 1, i.e. N respects the upper bound. 356 + */ 357 + n = roundup_pow_of_two(2 * safe_prime->max_strength); 358 + WARN_ON_ONCE(n & ((1u << 6) - 1)); 359 + n >>= 6; /* Convert N into units of u64. 
*/ 360 + 361 + /* 362 + * Reserve one extra u64 to hold the extra random bits 363 + * required as per 5.6.1.1.3. 364 + */ 365 + oversampling_size = (n + 1) * sizeof(__be64); 366 + key = kmalloc(oversampling_size, GFP_KERNEL); 367 + if (!key) 368 + return ERR_PTR(-ENOMEM); 369 + 370 + /* 371 + * 5.6.1.1.3, step 3 (and implicitly step 4): obtain N + 64 372 + * random bits and interpret them as a big endian integer. 373 + */ 374 + err = -EFAULT; 375 + if (crypto_get_default_rng()) 376 + goto out_err; 377 + 378 + err = crypto_rng_get_bytes(crypto_default_rng, (u8 *)key, 379 + oversampling_size); 380 + crypto_put_default_rng(); 381 + if (err) 382 + goto out_err; 383 + 384 + /* 385 + * 5.6.1.1.3, step 5 is implicit: 2^N < q and thus, 386 + * M = min(2^N, q) = 2^N. 387 + * 388 + * For step 6, calculate 389 + * key = (key[] mod (M - 1)) + 1 = (key[] mod (2^N - 1)) + 1. 390 + * 391 + * In order to avoid expensive divisions, note that 392 + * 2^N mod (2^N - 1) = 1 and thus, for any integer h, 393 + * 2^N * h mod (2^N - 1) = h mod (2^N - 1) always holds. 394 + * The big endian integer key[] composed of n + 1 64bit words 395 + * may be written as key[] = h * 2^N + l, with h = key[0] 396 + * representing the 64 most significant bits and l 397 + * corresponding to the remaining 2^N bits. With the remark 398 + * from above, 399 + * h * 2^N + l mod (2^N - 1) = l + h mod (2^N - 1). 400 + * As both, l and h are less than 2^N, their sum after 401 + * this first reduction is guaranteed to be <= 2^(N + 1) - 2. 402 + * Or equivalently, that their sum can again be written as 403 + * h' * 2^N + l' with h' now either zero or one and if one, 404 + * then l' <= 2^N - 2. Thus, all bits at positions >= N will 405 + * be zero after a second reduction: 406 + * h' * 2^N + l' mod (2^N - 1) = l' + h' mod (2^N - 1). 407 + * At this point, it is still possible that 408 + * l' + h' = 2^N - 1, i.e. that l' + h' mod (2^N - 1) 409 + * is zero. 
This condition will be detected below by means of 410 + * the final increment overflowing in this case. 411 + */ 412 + h = be64_to_cpu(key[0]); 413 + h = __add_u64_to_be(key + 1, n, h); 414 + h = __add_u64_to_be(key + 1, n, h); 415 + WARN_ON_ONCE(h); 416 + 417 + /* Increment to obtain the final result. */ 418 + o = __add_u64_to_be(key + 1, n, 1); 419 + /* 420 + * The overflow bit o from the increment is either zero or 421 + * one. If zero, key[1:n] holds the final result in big-endian 422 + * order. If one, key[1:n] is zero now, but needs to be set to 423 + * one, c.f. above. 424 + */ 425 + if (o) 426 + key[n] = cpu_to_be64(1); 427 + 428 + /* n is in units of u64, convert to bytes. */ 429 + *key_size = n << 3; 430 + /* Strip the leading extra __be64, which is (virtually) zero by now. */ 431 + memmove(key, &key[1], *key_size); 432 + 433 + return key; 434 + 435 + out_err: 436 + kfree_sensitive(key); 437 + return ERR_PTR(err); 438 + } 439 + 440 + static int dh_safe_prime_set_secret(struct crypto_kpp *tfm, const void *buffer, 441 + unsigned int len) 442 + { 443 + struct dh_safe_prime_instance_ctx *inst_ctx = 444 + dh_safe_prime_instance_ctx(tfm); 445 + struct dh_safe_prime_tfm_ctx *tfm_ctx = kpp_tfm_ctx(tfm); 446 + struct dh params = {}; 447 + void *buf = NULL, *key = NULL; 448 + unsigned int buf_size; 449 + int err; 450 + 451 + if (buffer) { 452 + err = __crypto_dh_decode_key(buffer, len, &params); 453 + if (err) 454 + return err; 455 + if (params.p_size || params.g_size) 456 + return -EINVAL; 457 + } 458 + 459 + params.p = inst_ctx->safe_prime->p; 460 + params.p_size = inst_ctx->safe_prime->p_size; 461 + params.g = safe_prime_g; 462 + params.g_size = sizeof(safe_prime_g); 463 + 464 + if (!params.key_size) { 465 + key = dh_safe_prime_gen_privkey(inst_ctx->safe_prime, 466 + &params.key_size); 467 + if (IS_ERR(key)) 468 + return PTR_ERR(key); 469 + params.key = key; 470 + } 471 + 472 + buf_size = crypto_dh_key_len(&params); 473 + buf = kmalloc(buf_size, GFP_KERNEL); 474 
+ if (!buf) { 475 + err = -ENOMEM; 476 + goto out; 477 + } 478 + 479 + err = crypto_dh_encode_key(buf, buf_size, &params); 480 + if (err) 481 + goto out; 482 + 483 + err = crypto_kpp_set_secret(tfm_ctx->dh_tfm, buf, buf_size); 484 + out: 485 + kfree_sensitive(buf); 486 + kfree_sensitive(key); 487 + return err; 488 + } 489 + 490 + static void dh_safe_prime_complete_req(struct crypto_async_request *dh_req, 491 + int err) 492 + { 493 + struct kpp_request *req = dh_req->data; 494 + 495 + kpp_request_complete(req, err); 496 + } 497 + 498 + static struct kpp_request *dh_safe_prime_prepare_dh_req(struct kpp_request *req) 499 + { 500 + struct dh_safe_prime_tfm_ctx *tfm_ctx = 501 + kpp_tfm_ctx(crypto_kpp_reqtfm(req)); 502 + struct kpp_request *dh_req = kpp_request_ctx(req); 503 + 504 + kpp_request_set_tfm(dh_req, tfm_ctx->dh_tfm); 505 + kpp_request_set_callback(dh_req, req->base.flags, 506 + dh_safe_prime_complete_req, req); 507 + 508 + kpp_request_set_input(dh_req, req->src, req->src_len); 509 + kpp_request_set_output(dh_req, req->dst, req->dst_len); 510 + 511 + return dh_req; 512 + } 513 + 514 + static int dh_safe_prime_generate_public_key(struct kpp_request *req) 515 + { 516 + struct kpp_request *dh_req = dh_safe_prime_prepare_dh_req(req); 517 + 518 + return crypto_kpp_generate_public_key(dh_req); 519 + } 520 + 521 + static int dh_safe_prime_compute_shared_secret(struct kpp_request *req) 522 + { 523 + struct kpp_request *dh_req = dh_safe_prime_prepare_dh_req(req); 524 + 525 + return crypto_kpp_compute_shared_secret(dh_req); 526 + } 527 + 528 + static unsigned int dh_safe_prime_max_size(struct crypto_kpp *tfm) 529 + { 530 + struct dh_safe_prime_tfm_ctx *tfm_ctx = kpp_tfm_ctx(tfm); 531 + 532 + return crypto_kpp_maxsize(tfm_ctx->dh_tfm); 533 + } 534 + 535 + static int __maybe_unused __dh_safe_prime_create( 536 + struct crypto_template *tmpl, struct rtattr **tb, 537 + const struct dh_safe_prime *safe_prime) 538 + { 539 + struct kpp_instance *inst; 540 + struct 
dh_safe_prime_instance_ctx *ctx; 541 + const char *dh_name; 542 + struct kpp_alg *dh_alg; 543 + u32 mask; 544 + int err; 545 + 546 + err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_KPP, &mask); 547 + if (err) 548 + return err; 549 + 550 + dh_name = crypto_attr_alg_name(tb[1]); 551 + if (IS_ERR(dh_name)) 552 + return PTR_ERR(dh_name); 553 + 554 + inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL); 555 + if (!inst) 556 + return -ENOMEM; 557 + 558 + ctx = kpp_instance_ctx(inst); 559 + 560 + err = crypto_grab_kpp(&ctx->dh_spawn, kpp_crypto_instance(inst), 561 + dh_name, 0, mask); 562 + if (err) 563 + goto err_free_inst; 564 + 565 + err = -EINVAL; 566 + dh_alg = crypto_spawn_kpp_alg(&ctx->dh_spawn); 567 + if (strcmp(dh_alg->base.cra_name, "dh")) 568 + goto err_free_inst; 569 + 570 + ctx->safe_prime = safe_prime; 571 + 572 + err = crypto_inst_setname(kpp_crypto_instance(inst), 573 + tmpl->name, &dh_alg->base); 574 + if (err) 575 + goto err_free_inst; 576 + 577 + inst->alg.set_secret = dh_safe_prime_set_secret; 578 + inst->alg.generate_public_key = dh_safe_prime_generate_public_key; 579 + inst->alg.compute_shared_secret = dh_safe_prime_compute_shared_secret; 580 + inst->alg.max_size = dh_safe_prime_max_size; 581 + inst->alg.init = dh_safe_prime_init_tfm; 582 + inst->alg.exit = dh_safe_prime_exit_tfm; 583 + inst->alg.reqsize = sizeof(struct kpp_request) + dh_alg->reqsize; 584 + inst->alg.base.cra_priority = dh_alg->base.cra_priority; 585 + inst->alg.base.cra_module = THIS_MODULE; 586 + inst->alg.base.cra_ctxsize = sizeof(struct dh_safe_prime_tfm_ctx); 587 + 588 + inst->free = dh_safe_prime_free_instance; 589 + 590 + err = kpp_register_instance(tmpl, inst); 591 + if (err) 592 + goto err_free_inst; 593 + 594 + return 0; 595 + 596 + err_free_inst: 597 + dh_safe_prime_free_instance(inst); 598 + 599 + return err; 600 + } 601 + 602 + #ifdef CONFIG_CRYPTO_DH_RFC7919_GROUPS 603 + 604 + static const struct dh_safe_prime ffdhe2048_prime = { 605 + .max_strength = 112, 606 + 
.p_size = 256, 607 + .p = 608 + "\xff\xff\xff\xff\xff\xff\xff\xff\xad\xf8\x54\x58\xa2\xbb\x4a\x9a" 609 + "\xaf\xdc\x56\x20\x27\x3d\x3c\xf1\xd8\xb9\xc5\x83\xce\x2d\x36\x95" 610 + "\xa9\xe1\x36\x41\x14\x64\x33\xfb\xcc\x93\x9d\xce\x24\x9b\x3e\xf9" 611 + "\x7d\x2f\xe3\x63\x63\x0c\x75\xd8\xf6\x81\xb2\x02\xae\xc4\x61\x7a" 612 + "\xd3\xdf\x1e\xd5\xd5\xfd\x65\x61\x24\x33\xf5\x1f\x5f\x06\x6e\xd0" 613 + "\x85\x63\x65\x55\x3d\xed\x1a\xf3\xb5\x57\x13\x5e\x7f\x57\xc9\x35" 614 + "\x98\x4f\x0c\x70\xe0\xe6\x8b\x77\xe2\xa6\x89\xda\xf3\xef\xe8\x72" 615 + "\x1d\xf1\x58\xa1\x36\xad\xe7\x35\x30\xac\xca\x4f\x48\x3a\x79\x7a" 616 + "\xbc\x0a\xb1\x82\xb3\x24\xfb\x61\xd1\x08\xa9\x4b\xb2\xc8\xe3\xfb" 617 + "\xb9\x6a\xda\xb7\x60\xd7\xf4\x68\x1d\x4f\x42\xa3\xde\x39\x4d\xf4" 618 + "\xae\x56\xed\xe7\x63\x72\xbb\x19\x0b\x07\xa7\xc8\xee\x0a\x6d\x70" 619 + "\x9e\x02\xfc\xe1\xcd\xf7\xe2\xec\xc0\x34\x04\xcd\x28\x34\x2f\x61" 620 + "\x91\x72\xfe\x9c\xe9\x85\x83\xff\x8e\x4f\x12\x32\xee\xf2\x81\x83" 621 + "\xc3\xfe\x3b\x1b\x4c\x6f\xad\x73\x3b\xb5\xfc\xbc\x2e\xc2\x20\x05" 622 + "\xc5\x8e\xf1\x83\x7d\x16\x83\xb2\xc6\xf3\x4a\x26\xc1\xb2\xef\xfa" 623 + "\x88\x6b\x42\x38\x61\x28\x5c\x97\xff\xff\xff\xff\xff\xff\xff\xff", 624 + }; 625 + 626 + static const struct dh_safe_prime ffdhe3072_prime = { 627 + .max_strength = 128, 628 + .p_size = 384, 629 + .p = 630 + "\xff\xff\xff\xff\xff\xff\xff\xff\xad\xf8\x54\x58\xa2\xbb\x4a\x9a" 631 + "\xaf\xdc\x56\x20\x27\x3d\x3c\xf1\xd8\xb9\xc5\x83\xce\x2d\x36\x95" 632 + "\xa9\xe1\x36\x41\x14\x64\x33\xfb\xcc\x93\x9d\xce\x24\x9b\x3e\xf9" 633 + "\x7d\x2f\xe3\x63\x63\x0c\x75\xd8\xf6\x81\xb2\x02\xae\xc4\x61\x7a" 634 + "\xd3\xdf\x1e\xd5\xd5\xfd\x65\x61\x24\x33\xf5\x1f\x5f\x06\x6e\xd0" 635 + "\x85\x63\x65\x55\x3d\xed\x1a\xf3\xb5\x57\x13\x5e\x7f\x57\xc9\x35" 636 + "\x98\x4f\x0c\x70\xe0\xe6\x8b\x77\xe2\xa6\x89\xda\xf3\xef\xe8\x72" 637 + "\x1d\xf1\x58\xa1\x36\xad\xe7\x35\x30\xac\xca\x4f\x48\x3a\x79\x7a" 638 + "\xbc\x0a\xb1\x82\xb3\x24\xfb\x61\xd1\x08\xa9\x4b\xb2\xc8\xe3\xfb" 639 + 
"\xb9\x6a\xda\xb7\x60\xd7\xf4\x68\x1d\x4f\x42\xa3\xde\x39\x4d\xf4" 640 + "\xae\x56\xed\xe7\x63\x72\xbb\x19\x0b\x07\xa7\xc8\xee\x0a\x6d\x70" 641 + "\x9e\x02\xfc\xe1\xcd\xf7\xe2\xec\xc0\x34\x04\xcd\x28\x34\x2f\x61" 642 + "\x91\x72\xfe\x9c\xe9\x85\x83\xff\x8e\x4f\x12\x32\xee\xf2\x81\x83" 643 + "\xc3\xfe\x3b\x1b\x4c\x6f\xad\x73\x3b\xb5\xfc\xbc\x2e\xc2\x20\x05" 644 + "\xc5\x8e\xf1\x83\x7d\x16\x83\xb2\xc6\xf3\x4a\x26\xc1\xb2\xef\xfa" 645 + "\x88\x6b\x42\x38\x61\x1f\xcf\xdc\xde\x35\x5b\x3b\x65\x19\x03\x5b" 646 + "\xbc\x34\xf4\xde\xf9\x9c\x02\x38\x61\xb4\x6f\xc9\xd6\xe6\xc9\x07" 647 + "\x7a\xd9\x1d\x26\x91\xf7\xf7\xee\x59\x8c\xb0\xfa\xc1\x86\xd9\x1c" 648 + "\xae\xfe\x13\x09\x85\x13\x92\x70\xb4\x13\x0c\x93\xbc\x43\x79\x44" 649 + "\xf4\xfd\x44\x52\xe2\xd7\x4d\xd3\x64\xf2\xe2\x1e\x71\xf5\x4b\xff" 650 + "\x5c\xae\x82\xab\x9c\x9d\xf6\x9e\xe8\x6d\x2b\xc5\x22\x36\x3a\x0d" 651 + "\xab\xc5\x21\x97\x9b\x0d\xea\xda\x1d\xbf\x9a\x42\xd5\xc4\x48\x4e" 652 + "\x0a\xbc\xd0\x6b\xfa\x53\xdd\xef\x3c\x1b\x20\xee\x3f\xd5\x9d\x7c" 653 + "\x25\xe4\x1d\x2b\x66\xc6\x2e\x37\xff\xff\xff\xff\xff\xff\xff\xff", 654 + }; 655 + 656 + static const struct dh_safe_prime ffdhe4096_prime = { 657 + .max_strength = 152, 658 + .p_size = 512, 659 + .p = 660 + "\xff\xff\xff\xff\xff\xff\xff\xff\xad\xf8\x54\x58\xa2\xbb\x4a\x9a" 661 + "\xaf\xdc\x56\x20\x27\x3d\x3c\xf1\xd8\xb9\xc5\x83\xce\x2d\x36\x95" 662 + "\xa9\xe1\x36\x41\x14\x64\x33\xfb\xcc\x93\x9d\xce\x24\x9b\x3e\xf9" 663 + "\x7d\x2f\xe3\x63\x63\x0c\x75\xd8\xf6\x81\xb2\x02\xae\xc4\x61\x7a" 664 + "\xd3\xdf\x1e\xd5\xd5\xfd\x65\x61\x24\x33\xf5\x1f\x5f\x06\x6e\xd0" 665 + "\x85\x63\x65\x55\x3d\xed\x1a\xf3\xb5\x57\x13\x5e\x7f\x57\xc9\x35" 666 + "\x98\x4f\x0c\x70\xe0\xe6\x8b\x77\xe2\xa6\x89\xda\xf3\xef\xe8\x72" 667 + "\x1d\xf1\x58\xa1\x36\xad\xe7\x35\x30\xac\xca\x4f\x48\x3a\x79\x7a" 668 + "\xbc\x0a\xb1\x82\xb3\x24\xfb\x61\xd1\x08\xa9\x4b\xb2\xc8\xe3\xfb" 669 + "\xb9\x6a\xda\xb7\x60\xd7\xf4\x68\x1d\x4f\x42\xa3\xde\x39\x4d\xf4" 670 + 
"\xae\x56\xed\xe7\x63\x72\xbb\x19\x0b\x07\xa7\xc8\xee\x0a\x6d\x70" 671 + "\x9e\x02\xfc\xe1\xcd\xf7\xe2\xec\xc0\x34\x04\xcd\x28\x34\x2f\x61" 672 + "\x91\x72\xfe\x9c\xe9\x85\x83\xff\x8e\x4f\x12\x32\xee\xf2\x81\x83" 673 + "\xc3\xfe\x3b\x1b\x4c\x6f\xad\x73\x3b\xb5\xfc\xbc\x2e\xc2\x20\x05" 674 + "\xc5\x8e\xf1\x83\x7d\x16\x83\xb2\xc6\xf3\x4a\x26\xc1\xb2\xef\xfa" 675 + "\x88\x6b\x42\x38\x61\x1f\xcf\xdc\xde\x35\x5b\x3b\x65\x19\x03\x5b" 676 + "\xbc\x34\xf4\xde\xf9\x9c\x02\x38\x61\xb4\x6f\xc9\xd6\xe6\xc9\x07" 677 + "\x7a\xd9\x1d\x26\x91\xf7\xf7\xee\x59\x8c\xb0\xfa\xc1\x86\xd9\x1c" 678 + "\xae\xfe\x13\x09\x85\x13\x92\x70\xb4\x13\x0c\x93\xbc\x43\x79\x44" 679 + "\xf4\xfd\x44\x52\xe2\xd7\x4d\xd3\x64\xf2\xe2\x1e\x71\xf5\x4b\xff" 680 + "\x5c\xae\x82\xab\x9c\x9d\xf6\x9e\xe8\x6d\x2b\xc5\x22\x36\x3a\x0d" 681 + "\xab\xc5\x21\x97\x9b\x0d\xea\xda\x1d\xbf\x9a\x42\xd5\xc4\x48\x4e" 682 + "\x0a\xbc\xd0\x6b\xfa\x53\xdd\xef\x3c\x1b\x20\xee\x3f\xd5\x9d\x7c" 683 + "\x25\xe4\x1d\x2b\x66\x9e\x1e\xf1\x6e\x6f\x52\xc3\x16\x4d\xf4\xfb" 684 + "\x79\x30\xe9\xe4\xe5\x88\x57\xb6\xac\x7d\x5f\x42\xd6\x9f\x6d\x18" 685 + "\x77\x63\xcf\x1d\x55\x03\x40\x04\x87\xf5\x5b\xa5\x7e\x31\xcc\x7a" 686 + "\x71\x35\xc8\x86\xef\xb4\x31\x8a\xed\x6a\x1e\x01\x2d\x9e\x68\x32" 687 + "\xa9\x07\x60\x0a\x91\x81\x30\xc4\x6d\xc7\x78\xf9\x71\xad\x00\x38" 688 + "\x09\x29\x99\xa3\x33\xcb\x8b\x7a\x1a\x1d\xb9\x3d\x71\x40\x00\x3c" 689 + "\x2a\x4e\xce\xa9\xf9\x8d\x0a\xcc\x0a\x82\x91\xcd\xce\xc9\x7d\xcf" 690 + "\x8e\xc9\xb5\x5a\x7f\x88\xa4\x6b\x4d\xb5\xa8\x51\xf4\x41\x82\xe1" 691 + "\xc6\x8a\x00\x7e\x5e\x65\x5f\x6a\xff\xff\xff\xff\xff\xff\xff\xff", 692 + }; 693 + 694 + static const struct dh_safe_prime ffdhe6144_prime = { 695 + .max_strength = 176, 696 + .p_size = 768, 697 + .p = 698 + "\xff\xff\xff\xff\xff\xff\xff\xff\xad\xf8\x54\x58\xa2\xbb\x4a\x9a" 699 + "\xaf\xdc\x56\x20\x27\x3d\x3c\xf1\xd8\xb9\xc5\x83\xce\x2d\x36\x95" 700 + "\xa9\xe1\x36\x41\x14\x64\x33\xfb\xcc\x93\x9d\xce\x24\x9b\x3e\xf9" 701 + 
"\x7d\x2f\xe3\x63\x63\x0c\x75\xd8\xf6\x81\xb2\x02\xae\xc4\x61\x7a" 702 + "\xd3\xdf\x1e\xd5\xd5\xfd\x65\x61\x24\x33\xf5\x1f\x5f\x06\x6e\xd0" 703 + "\x85\x63\x65\x55\x3d\xed\x1a\xf3\xb5\x57\x13\x5e\x7f\x57\xc9\x35" 704 + "\x98\x4f\x0c\x70\xe0\xe6\x8b\x77\xe2\xa6\x89\xda\xf3\xef\xe8\x72" 705 + "\x1d\xf1\x58\xa1\x36\xad\xe7\x35\x30\xac\xca\x4f\x48\x3a\x79\x7a" 706 + "\xbc\x0a\xb1\x82\xb3\x24\xfb\x61\xd1\x08\xa9\x4b\xb2\xc8\xe3\xfb" 707 + "\xb9\x6a\xda\xb7\x60\xd7\xf4\x68\x1d\x4f\x42\xa3\xde\x39\x4d\xf4" 708 + "\xae\x56\xed\xe7\x63\x72\xbb\x19\x0b\x07\xa7\xc8\xee\x0a\x6d\x70" 709 + "\x9e\x02\xfc\xe1\xcd\xf7\xe2\xec\xc0\x34\x04\xcd\x28\x34\x2f\x61" 710 + "\x91\x72\xfe\x9c\xe9\x85\x83\xff\x8e\x4f\x12\x32\xee\xf2\x81\x83" 711 + "\xc3\xfe\x3b\x1b\x4c\x6f\xad\x73\x3b\xb5\xfc\xbc\x2e\xc2\x20\x05" 712 + "\xc5\x8e\xf1\x83\x7d\x16\x83\xb2\xc6\xf3\x4a\x26\xc1\xb2\xef\xfa" 713 + "\x88\x6b\x42\x38\x61\x1f\xcf\xdc\xde\x35\x5b\x3b\x65\x19\x03\x5b" 714 + "\xbc\x34\xf4\xde\xf9\x9c\x02\x38\x61\xb4\x6f\xc9\xd6\xe6\xc9\x07" 715 + "\x7a\xd9\x1d\x26\x91\xf7\xf7\xee\x59\x8c\xb0\xfa\xc1\x86\xd9\x1c" 716 + "\xae\xfe\x13\x09\x85\x13\x92\x70\xb4\x13\x0c\x93\xbc\x43\x79\x44" 717 + "\xf4\xfd\x44\x52\xe2\xd7\x4d\xd3\x64\xf2\xe2\x1e\x71\xf5\x4b\xff" 718 + "\x5c\xae\x82\xab\x9c\x9d\xf6\x9e\xe8\x6d\x2b\xc5\x22\x36\x3a\x0d" 719 + "\xab\xc5\x21\x97\x9b\x0d\xea\xda\x1d\xbf\x9a\x42\xd5\xc4\x48\x4e" 720 + "\x0a\xbc\xd0\x6b\xfa\x53\xdd\xef\x3c\x1b\x20\xee\x3f\xd5\x9d\x7c" 721 + "\x25\xe4\x1d\x2b\x66\x9e\x1e\xf1\x6e\x6f\x52\xc3\x16\x4d\xf4\xfb" 722 + "\x79\x30\xe9\xe4\xe5\x88\x57\xb6\xac\x7d\x5f\x42\xd6\x9f\x6d\x18" 723 + "\x77\x63\xcf\x1d\x55\x03\x40\x04\x87\xf5\x5b\xa5\x7e\x31\xcc\x7a" 724 + "\x71\x35\xc8\x86\xef\xb4\x31\x8a\xed\x6a\x1e\x01\x2d\x9e\x68\x32" 725 + "\xa9\x07\x60\x0a\x91\x81\x30\xc4\x6d\xc7\x78\xf9\x71\xad\x00\x38" 726 + "\x09\x29\x99\xa3\x33\xcb\x8b\x7a\x1a\x1d\xb9\x3d\x71\x40\x00\x3c" 727 + "\x2a\x4e\xce\xa9\xf9\x8d\x0a\xcc\x0a\x82\x91\xcd\xce\xc9\x7d\xcf" 728 + 
"\x8e\xc9\xb5\x5a\x7f\x88\xa4\x6b\x4d\xb5\xa8\x51\xf4\x41\x82\xe1" 729 + "\xc6\x8a\x00\x7e\x5e\x0d\xd9\x02\x0b\xfd\x64\xb6\x45\x03\x6c\x7a" 730 + "\x4e\x67\x7d\x2c\x38\x53\x2a\x3a\x23\xba\x44\x42\xca\xf5\x3e\xa6" 731 + "\x3b\xb4\x54\x32\x9b\x76\x24\xc8\x91\x7b\xdd\x64\xb1\xc0\xfd\x4c" 732 + "\xb3\x8e\x8c\x33\x4c\x70\x1c\x3a\xcd\xad\x06\x57\xfc\xcf\xec\x71" 733 + "\x9b\x1f\x5c\x3e\x4e\x46\x04\x1f\x38\x81\x47\xfb\x4c\xfd\xb4\x77" 734 + "\xa5\x24\x71\xf7\xa9\xa9\x69\x10\xb8\x55\x32\x2e\xdb\x63\x40\xd8" 735 + "\xa0\x0e\xf0\x92\x35\x05\x11\xe3\x0a\xbe\xc1\xff\xf9\xe3\xa2\x6e" 736 + "\x7f\xb2\x9f\x8c\x18\x30\x23\xc3\x58\x7e\x38\xda\x00\x77\xd9\xb4" 737 + "\x76\x3e\x4e\x4b\x94\xb2\xbb\xc1\x94\xc6\x65\x1e\x77\xca\xf9\x92" 738 + "\xee\xaa\xc0\x23\x2a\x28\x1b\xf6\xb3\xa7\x39\xc1\x22\x61\x16\x82" 739 + "\x0a\xe8\xdb\x58\x47\xa6\x7c\xbe\xf9\xc9\x09\x1b\x46\x2d\x53\x8c" 740 + "\xd7\x2b\x03\x74\x6a\xe7\x7f\x5e\x62\x29\x2c\x31\x15\x62\xa8\x46" 741 + "\x50\x5d\xc8\x2d\xb8\x54\x33\x8a\xe4\x9f\x52\x35\xc9\x5b\x91\x17" 742 + "\x8c\xcf\x2d\xd5\xca\xce\xf4\x03\xec\x9d\x18\x10\xc6\x27\x2b\x04" 743 + "\x5b\x3b\x71\xf9\xdc\x6b\x80\xd6\x3f\xdd\x4a\x8e\x9a\xdb\x1e\x69" 744 + "\x62\xa6\x95\x26\xd4\x31\x61\xc1\xa4\x1d\x57\x0d\x79\x38\xda\xd4" 745 + "\xa4\x0e\x32\x9c\xd0\xe4\x0e\x65\xff\xff\xff\xff\xff\xff\xff\xff", 746 + }; 747 + 748 + static const struct dh_safe_prime ffdhe8192_prime = { 749 + .max_strength = 200, 750 + .p_size = 1024, 751 + .p = 752 + "\xff\xff\xff\xff\xff\xff\xff\xff\xad\xf8\x54\x58\xa2\xbb\x4a\x9a" 753 + "\xaf\xdc\x56\x20\x27\x3d\x3c\xf1\xd8\xb9\xc5\x83\xce\x2d\x36\x95" 754 + "\xa9\xe1\x36\x41\x14\x64\x33\xfb\xcc\x93\x9d\xce\x24\x9b\x3e\xf9" 755 + "\x7d\x2f\xe3\x63\x63\x0c\x75\xd8\xf6\x81\xb2\x02\xae\xc4\x61\x7a" 756 + "\xd3\xdf\x1e\xd5\xd5\xfd\x65\x61\x24\x33\xf5\x1f\x5f\x06\x6e\xd0" 757 + "\x85\x63\x65\x55\x3d\xed\x1a\xf3\xb5\x57\x13\x5e\x7f\x57\xc9\x35" 758 + "\x98\x4f\x0c\x70\xe0\xe6\x8b\x77\xe2\xa6\x89\xda\xf3\xef\xe8\x72" 759 + 
+	"\x1d\xf1\x58\xa1\x36\xad\xe7\x35\x30\xac\xca\x4f\x48\x3a\x79\x7a"
+	"\xbc\x0a\xb1\x82\xb3\x24\xfb\x61\xd1\x08\xa9\x4b\xb2\xc8\xe3\xfb"
+	"\xb9\x6a\xda\xb7\x60\xd7\xf4\x68\x1d\x4f\x42\xa3\xde\x39\x4d\xf4"
+	"\xae\x56\xed\xe7\x63\x72\xbb\x19\x0b\x07\xa7\xc8\xee\x0a\x6d\x70"
+	"\x9e\x02\xfc\xe1\xcd\xf7\xe2\xec\xc0\x34\x04\xcd\x28\x34\x2f\x61"
+	"\x91\x72\xfe\x9c\xe9\x85\x83\xff\x8e\x4f\x12\x32\xee\xf2\x81\x83"
+	"\xc3\xfe\x3b\x1b\x4c\x6f\xad\x73\x3b\xb5\xfc\xbc\x2e\xc2\x20\x05"
+	"\xc5\x8e\xf1\x83\x7d\x16\x83\xb2\xc6\xf3\x4a\x26\xc1\xb2\xef\xfa"
+	"\x88\x6b\x42\x38\x61\x1f\xcf\xdc\xde\x35\x5b\x3b\x65\x19\x03\x5b"
+	"\xbc\x34\xf4\xde\xf9\x9c\x02\x38\x61\xb4\x6f\xc9\xd6\xe6\xc9\x07"
+	"\x7a\xd9\x1d\x26\x91\xf7\xf7\xee\x59\x8c\xb0\xfa\xc1\x86\xd9\x1c"
+	"\xae\xfe\x13\x09\x85\x13\x92\x70\xb4\x13\x0c\x93\xbc\x43\x79\x44"
+	"\xf4\xfd\x44\x52\xe2\xd7\x4d\xd3\x64\xf2\xe2\x1e\x71\xf5\x4b\xff"
+	"\x5c\xae\x82\xab\x9c\x9d\xf6\x9e\xe8\x6d\x2b\xc5\x22\x36\x3a\x0d"
+	"\xab\xc5\x21\x97\x9b\x0d\xea\xda\x1d\xbf\x9a\x42\xd5\xc4\x48\x4e"
+	"\x0a\xbc\xd0\x6b\xfa\x53\xdd\xef\x3c\x1b\x20\xee\x3f\xd5\x9d\x7c"
+	"\x25\xe4\x1d\x2b\x66\x9e\x1e\xf1\x6e\x6f\x52\xc3\x16\x4d\xf4\xfb"
+	"\x79\x30\xe9\xe4\xe5\x88\x57\xb6\xac\x7d\x5f\x42\xd6\x9f\x6d\x18"
+	"\x77\x63\xcf\x1d\x55\x03\x40\x04\x87\xf5\x5b\xa5\x7e\x31\xcc\x7a"
+	"\x71\x35\xc8\x86\xef\xb4\x31\x8a\xed\x6a\x1e\x01\x2d\x9e\x68\x32"
+	"\xa9\x07\x60\x0a\x91\x81\x30\xc4\x6d\xc7\x78\xf9\x71\xad\x00\x38"
+	"\x09\x29\x99\xa3\x33\xcb\x8b\x7a\x1a\x1d\xb9\x3d\x71\x40\x00\x3c"
+	"\x2a\x4e\xce\xa9\xf9\x8d\x0a\xcc\x0a\x82\x91\xcd\xce\xc9\x7d\xcf"
+	"\x8e\xc9\xb5\x5a\x7f\x88\xa4\x6b\x4d\xb5\xa8\x51\xf4\x41\x82\xe1"
+	"\xc6\x8a\x00\x7e\x5e\x0d\xd9\x02\x0b\xfd\x64\xb6\x45\x03\x6c\x7a"
+	"\x4e\x67\x7d\x2c\x38\x53\x2a\x3a\x23\xba\x44\x42\xca\xf5\x3e\xa6"
+	"\x3b\xb4\x54\x32\x9b\x76\x24\xc8\x91\x7b\xdd\x64\xb1\xc0\xfd\x4c"
+	"\xb3\x8e\x8c\x33\x4c\x70\x1c\x3a\xcd\xad\x06\x57\xfc\xcf\xec\x71"
+	"\x9b\x1f\x5c\x3e\x4e\x46\x04\x1f\x38\x81\x47\xfb\x4c\xfd\xb4\x77"
+	"\xa5\x24\x71\xf7\xa9\xa9\x69\x10\xb8\x55\x32\x2e\xdb\x63\x40\xd8"
+	"\xa0\x0e\xf0\x92\x35\x05\x11\xe3\x0a\xbe\xc1\xff\xf9\xe3\xa2\x6e"
+	"\x7f\xb2\x9f\x8c\x18\x30\x23\xc3\x58\x7e\x38\xda\x00\x77\xd9\xb4"
+	"\x76\x3e\x4e\x4b\x94\xb2\xbb\xc1\x94\xc6\x65\x1e\x77\xca\xf9\x92"
+	"\xee\xaa\xc0\x23\x2a\x28\x1b\xf6\xb3\xa7\x39\xc1\x22\x61\x16\x82"
+	"\x0a\xe8\xdb\x58\x47\xa6\x7c\xbe\xf9\xc9\x09\x1b\x46\x2d\x53\x8c"
+	"\xd7\x2b\x03\x74\x6a\xe7\x7f\x5e\x62\x29\x2c\x31\x15\x62\xa8\x46"
+	"\x50\x5d\xc8\x2d\xb8\x54\x33\x8a\xe4\x9f\x52\x35\xc9\x5b\x91\x17"
+	"\x8c\xcf\x2d\xd5\xca\xce\xf4\x03\xec\x9d\x18\x10\xc6\x27\x2b\x04"
+	"\x5b\x3b\x71\xf9\xdc\x6b\x80\xd6\x3f\xdd\x4a\x8e\x9a\xdb\x1e\x69"
+	"\x62\xa6\x95\x26\xd4\x31\x61\xc1\xa4\x1d\x57\x0d\x79\x38\xda\xd4"
+	"\xa4\x0e\x32\x9c\xcf\xf4\x6a\xaa\x36\xad\x00\x4c\xf6\x00\xc8\x38"
+	"\x1e\x42\x5a\x31\xd9\x51\xae\x64\xfd\xb2\x3f\xce\xc9\x50\x9d\x43"
+	"\x68\x7f\xeb\x69\xed\xd1\xcc\x5e\x0b\x8c\xc3\xbd\xf6\x4b\x10\xef"
+	"\x86\xb6\x31\x42\xa3\xab\x88\x29\x55\x5b\x2f\x74\x7c\x93\x26\x65"
+	"\xcb\x2c\x0f\x1c\xc0\x1b\xd7\x02\x29\x38\x88\x39\xd2\xaf\x05\xe4"
+	"\x54\x50\x4a\xc7\x8b\x75\x82\x82\x28\x46\xc0\xba\x35\xc3\x5f\x5c"
+	"\x59\x16\x0c\xc0\x46\xfd\x82\x51\x54\x1f\xc6\x8c\x9c\x86\xb0\x22"
+	"\xbb\x70\x99\x87\x6a\x46\x0e\x74\x51\xa8\xa9\x31\x09\x70\x3f\xee"
+	"\x1c\x21\x7e\x6c\x38\x26\xe5\x2c\x51\xaa\x69\x1e\x0e\x42\x3c\xfc"
+	"\x99\xe9\xe3\x16\x50\xc1\x21\x7b\x62\x48\x16\xcd\xad\x9a\x95\xf9"
+	"\xd5\xb8\x01\x94\x88\xd9\xc0\xa0\xa1\xfe\x30\x75\xa5\x77\xe2\x31"
+	"\x83\xf8\x1d\x4a\x3f\x2f\xa4\x57\x1e\xfc\x8c\xe0\xba\x8a\x4f\xe8"
+	"\xb6\x85\x5d\xfe\x72\xb0\xa6\x6e\xde\xd2\xfb\xab\xfb\xe5\x8a\x30"
+	"\xfa\xfa\xbe\x1c\x5d\x71\xa8\x7e\x2f\x74\x1e\xf8\xc1\xfe\x86\xfe"
+	"\xa6\xbb\xfd\xe5\x30\x67\x7f\x0d\x97\xd1\x1d\x49\xf7\xa8\x44\x3d"
+	"\x08\x22\xe5\x06\xa9\xf4\x61\x4e\x01\x1e\x2a\x94\x83\x8f\xf8\x8c"
+	"\xd6\x8c\x8b\xb7\xc5\xc6\x42\x4c\xff\xff\xff\xff\xff\xff\xff\xff",
+};
+
+static int dh_ffdhe2048_create(struct crypto_template *tmpl,
+			       struct rtattr **tb)
+{
+	return __dh_safe_prime_create(tmpl, tb, &ffdhe2048_prime);
+}
+
+static int dh_ffdhe3072_create(struct crypto_template *tmpl,
+			       struct rtattr **tb)
+{
+	return __dh_safe_prime_create(tmpl, tb, &ffdhe3072_prime);
+}
+
+static int dh_ffdhe4096_create(struct crypto_template *tmpl,
+			       struct rtattr **tb)
+{
+	return __dh_safe_prime_create(tmpl, tb, &ffdhe4096_prime);
+}
+
+static int dh_ffdhe6144_create(struct crypto_template *tmpl,
+			       struct rtattr **tb)
+{
+	return __dh_safe_prime_create(tmpl, tb, &ffdhe6144_prime);
+}
+
+static int dh_ffdhe8192_create(struct crypto_template *tmpl,
+			       struct rtattr **tb)
+{
+	return __dh_safe_prime_create(tmpl, tb, &ffdhe8192_prime);
+}
+
+static struct crypto_template crypto_ffdhe_templates[] = {
+	{
+		.name = "ffdhe2048",
+		.create = dh_ffdhe2048_create,
+		.module = THIS_MODULE,
+	},
+	{
+		.name = "ffdhe3072",
+		.create = dh_ffdhe3072_create,
+		.module = THIS_MODULE,
+	},
+	{
+		.name = "ffdhe4096",
+		.create = dh_ffdhe4096_create,
+		.module = THIS_MODULE,
+	},
+	{
+		.name = "ffdhe6144",
+		.create = dh_ffdhe6144_create,
+		.module = THIS_MODULE,
+	},
+	{
+		.name = "ffdhe8192",
+		.create = dh_ffdhe8192_create,
+		.module = THIS_MODULE,
+	},
+};
+
+#else /* ! CONFIG_CRYPTO_DH_RFC7919_GROUPS */
+
+static struct crypto_template crypto_ffdhe_templates[] = {};
+
+#endif /* CONFIG_CRYPTO_DH_RFC7919_GROUPS */
+
+
 static int dh_init(void)
 {
-	return crypto_register_kpp(&dh);
+	int err;
+
+	err = crypto_register_kpp(&dh);
+	if (err)
+		return err;
+
+	err = crypto_register_templates(crypto_ffdhe_templates,
+					ARRAY_SIZE(crypto_ffdhe_templates));
+	if (err) {
+		crypto_unregister_kpp(&dh);
+		return err;
+	}
+
+	return 0;
 }
 
 static void dh_exit(void)
 {
+	crypto_unregister_templates(crypto_ffdhe_templates,
+				    ARRAY_SIZE(crypto_ffdhe_templates));
 	crypto_unregister_kpp(&dh);
 }
+23 -21
crypto/dh_helper.c
···
 #include <crypto/dh.h>
 #include <crypto/kpp.h>
 
-#define DH_KPP_SECRET_MIN_SIZE (sizeof(struct kpp_secret) + 4 * sizeof(int))
+#define DH_KPP_SECRET_MIN_SIZE (sizeof(struct kpp_secret) + 3 * sizeof(int))
 
 static inline u8 *dh_pack_data(u8 *dst, u8 *end, const void *src, size_t size)
 {
···
 
 static inline unsigned int dh_data_size(const struct dh *p)
 {
-	return p->key_size + p->p_size + p->q_size + p->g_size;
+	return p->key_size + p->p_size + p->g_size;
 }
 
 unsigned int crypto_dh_key_len(const struct dh *p)
···
 	ptr = dh_pack_data(ptr, end, &params->key_size,
 			   sizeof(params->key_size));
 	ptr = dh_pack_data(ptr, end, &params->p_size, sizeof(params->p_size));
-	ptr = dh_pack_data(ptr, end, &params->q_size, sizeof(params->q_size));
 	ptr = dh_pack_data(ptr, end, &params->g_size, sizeof(params->g_size));
 	ptr = dh_pack_data(ptr, end, params->key, params->key_size);
 	ptr = dh_pack_data(ptr, end, params->p, params->p_size);
-	ptr = dh_pack_data(ptr, end, params->q, params->q_size);
 	ptr = dh_pack_data(ptr, end, params->g, params->g_size);
 	if (ptr != end)
 		return -EINVAL;
···
 }
 EXPORT_SYMBOL_GPL(crypto_dh_encode_key);
 
-int crypto_dh_decode_key(const char *buf, unsigned int len, struct dh *params)
+int __crypto_dh_decode_key(const char *buf, unsigned int len, struct dh *params)
 {
 	const u8 *ptr = buf;
 	struct kpp_secret secret;
···
 
 	ptr = dh_unpack_data(&params->key_size, ptr, sizeof(params->key_size));
 	ptr = dh_unpack_data(&params->p_size, ptr, sizeof(params->p_size));
-	ptr = dh_unpack_data(&params->q_size, ptr, sizeof(params->q_size));
 	ptr = dh_unpack_data(&params->g_size, ptr, sizeof(params->g_size));
 	if (secret.len != crypto_dh_key_len(params))
-		return -EINVAL;
-
-	/*
-	 * Don't permit the buffer for 'key' or 'g' to be larger than 'p', since
-	 * some drivers assume otherwise.
-	 */
-	if (params->key_size > params->p_size ||
-	    params->g_size > params->p_size || params->q_size > params->p_size)
 		return -EINVAL;
 
 	/* Don't allocate memory. Set pointers to data within
···
 	 */
 	params->key = (void *)ptr;
 	params->p = (void *)(ptr + params->key_size);
-	params->q = (void *)(ptr + params->key_size + params->p_size);
-	params->g = (void *)(ptr + params->key_size + params->p_size +
-			     params->q_size);
+	params->g = (void *)(ptr + params->key_size + params->p_size);
+
+	return 0;
+}
+
+int crypto_dh_decode_key(const char *buf, unsigned int len, struct dh *params)
+{
+	int err;
+
+	err = __crypto_dh_decode_key(buf, len, params);
+	if (err)
+		return err;
+
+	/*
+	 * Don't permit the buffer for 'key' or 'g' to be larger than 'p', since
+	 * some drivers assume otherwise.
+	 */
+	if (params->key_size > params->p_size ||
+	    params->g_size > params->p_size)
+		return -EINVAL;
 
 	/*
 	 * Don't permit 'p' to be 0. It's not a prime number, and it's subject
···
 	 */
 	if (memchr_inv(params->p, 0, params->p_size) == NULL)
 		return -EINVAL;
-
-	/* It is permissible to not provide Q. */
-	if (params->q_size == 0)
-		params->q = NULL;
 
 	return 0;
 }
+4
crypto/hmac.c
···
 #include <crypto/internal/hash.h>
 #include <crypto/scatterwalk.h>
 #include <linux/err.h>
+#include <linux/fips.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
···
 	struct crypto_shash *hash = ctx->hash;
 	SHASH_DESC_ON_STACK(shash, hash);
 	unsigned int i;
+
+	if (fips_enabled && (keylen < 112 / 8))
+		return -EINVAL;
 
 	shash->tfm = hash;
 
+29
crypto/kpp.c
···
 	return 0;
 }
 
+static void crypto_kpp_free_instance(struct crypto_instance *inst)
+{
+	struct kpp_instance *kpp = kpp_instance(inst);
+
+	kpp->free(kpp);
+}
+
 static const struct crypto_type crypto_kpp_type = {
 	.extsize = crypto_alg_extsize,
 	.init_tfm = crypto_kpp_init_tfm,
+	.free = crypto_kpp_free_instance,
 #ifdef CONFIG_PROC_FS
 	.show = crypto_kpp_show,
 #endif
···
 	return crypto_alloc_tfm(alg_name, &crypto_kpp_type, type, mask);
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_kpp);
+
+int crypto_grab_kpp(struct crypto_kpp_spawn *spawn,
+		    struct crypto_instance *inst,
+		    const char *name, u32 type, u32 mask)
+{
+	spawn->base.frontend = &crypto_kpp_type;
+	return crypto_grab_spawn(&spawn->base, inst, name, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_grab_kpp);
 
 static void kpp_prepare_alg(struct kpp_alg *alg)
 {
···
 	crypto_unregister_alg(&alg->base);
 }
 EXPORT_SYMBOL_GPL(crypto_unregister_kpp);
+
+int kpp_register_instance(struct crypto_template *tmpl,
+			  struct kpp_instance *inst)
+{
+	if (WARN_ON(!inst->free))
+		return -EINVAL;
+
+	kpp_prepare_alg(&inst->alg);
+
+	return crypto_register_instance(tmpl, kpp_crypto_instance(inst));
+}
+EXPORT_SYMBOL_GPL(kpp_register_instance);
 
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Key-agreement Protocol Primitives");
+1
crypto/lrw.c
···
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("LRW block cipher mode");
 MODULE_ALIAS_CRYPTO("lrw");
+MODULE_SOFTDEP("pre: ecb");
+15 -7
crypto/memneq.c
···
  */
 
 #include <crypto/algapi.h>
+#include <asm/unaligned.h>
 
 #ifndef __HAVE_ARCH_CRYPTO_MEMNEQ
 
···
 
 #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
 	while (size >= sizeof(unsigned long)) {
-		neq |= *(unsigned long *)a ^ *(unsigned long *)b;
+		neq |= get_unaligned((unsigned long *)a) ^
+		       get_unaligned((unsigned long *)b);
 		OPTIMIZER_HIDE_VAR(neq);
 		a += sizeof(unsigned long);
 		b += sizeof(unsigned long);
···
 
 #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 	if (sizeof(unsigned long) == 8) {
-		neq |= *(unsigned long *)(a) ^ *(unsigned long *)(b);
+		neq |= get_unaligned((unsigned long *)a) ^
+		       get_unaligned((unsigned long *)b);
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned long *)(a+8) ^ *(unsigned long *)(b+8);
+		neq |= get_unaligned((unsigned long *)(a + 8)) ^
+		       get_unaligned((unsigned long *)(b + 8));
 		OPTIMIZER_HIDE_VAR(neq);
 	} else if (sizeof(unsigned int) == 4) {
-		neq |= *(unsigned int *)(a) ^ *(unsigned int *)(b);
+		neq |= get_unaligned((unsigned int *)a) ^
+		       get_unaligned((unsigned int *)b);
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned int *)(a+4) ^ *(unsigned int *)(b+4);
+		neq |= get_unaligned((unsigned int *)(a + 4)) ^
+		       get_unaligned((unsigned int *)(b + 4));
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned int *)(a+8) ^ *(unsigned int *)(b+8);
+		neq |= get_unaligned((unsigned int *)(a + 8)) ^
+		       get_unaligned((unsigned int *)(b + 8));
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned int *)(a+12) ^ *(unsigned int *)(b+12);
+		neq |= get_unaligned((unsigned int *)(a + 12)) ^
+		       get_unaligned((unsigned int *)(b + 12));
 		OPTIMIZER_HIDE_VAR(neq);
 	} else
 #endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
+23 -15
crypto/rsa-pkcs1pad.c
···
 	struct pkcs1pad_inst_ctx *ictx = akcipher_instance_ctx(inst);
 	const struct rsa_asn1_template *digest_info = ictx->digest_info;
 	int err;
-	unsigned int ps_end, digest_size = 0;
+	unsigned int ps_end, digest_info_size = 0;
 
 	if (!ctx->key_size)
 		return -EINVAL;
 
 	if (digest_info)
-		digest_size = digest_info->size;
+		digest_info_size = digest_info->size;
 
-	if (req->src_len + digest_size > ctx->key_size - 11)
+	if (req->src_len + digest_info_size > ctx->key_size - 11)
 		return -EOVERFLOW;
 
 	if (req->dst_len < ctx->key_size) {
···
 	if (!req_ctx->in_buf)
 		return -ENOMEM;
 
-	ps_end = ctx->key_size - digest_size - req->src_len - 2;
+	ps_end = ctx->key_size - digest_info_size - req->src_len - 2;
 	req_ctx->in_buf[0] = 0x01;
 	memset(req_ctx->in_buf + 1, 0xff, ps_end - 1);
 	req_ctx->in_buf[ps_end] = 0x00;
···
 	struct akcipher_instance *inst = akcipher_alg_instance(tfm);
 	struct pkcs1pad_inst_ctx *ictx = akcipher_instance_ctx(inst);
 	const struct rsa_asn1_template *digest_info = ictx->digest_info;
+	const unsigned int sig_size = req->src_len;
+	const unsigned int digest_size = req->dst_len;
 	unsigned int dst_len;
 	unsigned int pos;
 	u8 *out_buf;
···
 	pos++;
 
 	if (digest_info) {
+		if (digest_info->size > dst_len - pos)
+			goto done;
 		if (crypto_memneq(out_buf + pos, digest_info->data,
 				  digest_info->size))
 			goto done;
···
 
 	err = 0;
 
-	if (req->dst_len != dst_len - pos) {
+	if (digest_size != dst_len - pos) {
 		err = -EKEYREJECTED;
 		req->dst_len = dst_len - pos;
 		goto done;
 	}
 	/* Extract appended digest. */
 	sg_pcopy_to_buffer(req->src,
-			   sg_nents_for_len(req->src,
-					    req->src_len + req->dst_len),
+			   sg_nents_for_len(req->src, sig_size + digest_size),
 			   req_ctx->out_buf + ctx->key_size,
-			   req->dst_len, ctx->key_size);
+			   digest_size, sig_size);
 	/* Do the actual verification step. */
 	if (memcmp(req_ctx->out_buf + ctx->key_size, out_buf + pos,
-		   req->dst_len) != 0)
+		   digest_size) != 0)
 		err = -EKEYREJECTED;
 done:
 	kfree_sensitive(req_ctx->out_buf);
···
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
 	struct pkcs1pad_ctx *ctx = akcipher_tfm_ctx(tfm);
 	struct pkcs1pad_request *req_ctx = akcipher_request_ctx(req);
+	const unsigned int sig_size = req->src_len;
+	const unsigned int digest_size = req->dst_len;
 	int err;
 
-	if (WARN_ON(req->dst) ||
-	    WARN_ON(!req->dst_len) ||
-	    !ctx->key_size || req->src_len < ctx->key_size)
+	if (WARN_ON(req->dst) || WARN_ON(!digest_size) ||
+	    !ctx->key_size || sig_size != ctx->key_size)
 		return -EINVAL;
 
-	req_ctx->out_buf = kmalloc(ctx->key_size + req->dst_len, GFP_KERNEL);
+	req_ctx->out_buf = kmalloc(ctx->key_size + digest_size, GFP_KERNEL);
 	if (!req_ctx->out_buf)
 		return -ENOMEM;
 
···
 
 	/* Reuse input buffer, output to a new buffer */
 	akcipher_request_set_crypt(&req_ctx->child_req, req->src,
-				   req_ctx->out_sg, req->src_len,
-				   ctx->key_size);
+				   req_ctx->out_sg, sig_size, ctx->key_size);
 
 	err = crypto_akcipher_encrypt(&req_ctx->child_req);
 	if (err != -EINPROGRESS && err != -EBUSY)
···
 		goto err_free_inst;
 
 	rsa_alg = crypto_spawn_akcipher_alg(&ctx->spawn);
+
+	if (strcmp(rsa_alg->base.cra_name, "rsa") != 0) {
+		err = -EINVAL;
+		goto err_free_inst;
+	}
 
 	err = -ENAMETOOLONG;
 	hash_name = crypto_attr_alg_name(tb[2]);
+19 -19
crypto/sm2.c
···
-/* SPDX-License-Identifier: GPL-2.0-or-later */
+// SPDX-License-Identifier: GPL-2.0-or-later
 /*
  * SM2 asymmetric public-key algorithm
  * as specified by OSCCA GM/T 0003.1-2012 -- 0003.5-2012 SM2 and
···
 #include <crypto/internal/akcipher.h>
 #include <crypto/akcipher.h>
 #include <crypto/hash.h>
-#include <crypto/sm3_base.h>
+#include <crypto/sm3.h>
 #include <crypto/rng.h>
 #include <crypto/sm2.h>
 #include "sm2signature.asn1.h"
···
 	return 0;
 }
 
-static int sm2_z_digest_update(struct shash_desc *desc,
+static int sm2_z_digest_update(struct sm3_state *sctx,
 			       MPI m, unsigned int pbytes)
 {
 	static const unsigned char zero[32];
···
 
 	if (inlen < pbytes) {
 		/* padding with zero */
-		crypto_sm3_update(desc, zero, pbytes - inlen);
-		crypto_sm3_update(desc, in, inlen);
+		sm3_update(sctx, zero, pbytes - inlen);
+		sm3_update(sctx, in, inlen);
 	} else if (inlen > pbytes) {
 		/* skip the starting zero */
-		crypto_sm3_update(desc, in + inlen - pbytes, pbytes);
+		sm3_update(sctx, in + inlen - pbytes, pbytes);
 	} else {
-		crypto_sm3_update(desc, in, inlen);
+		sm3_update(sctx, in, inlen);
 	}
 
 	kfree(in);
 	return 0;
 }
 
-static int sm2_z_digest_update_point(struct shash_desc *desc,
+static int sm2_z_digest_update_point(struct sm3_state *sctx,
 		MPI_POINT point, struct mpi_ec_ctx *ec, unsigned int pbytes)
 {
 	MPI x, y;
···
 	y = mpi_new(0);
 
 	if (!mpi_ec_get_affine(x, y, point, ec) &&
-	    !sm2_z_digest_update(desc, x, pbytes) &&
-	    !sm2_z_digest_update(desc, y, pbytes))
+	    !sm2_z_digest_update(sctx, x, pbytes) &&
+	    !sm2_z_digest_update(sctx, y, pbytes))
 		ret = 0;
 
 	mpi_free(x);
···
 	struct mpi_ec_ctx *ec = akcipher_tfm_ctx(tfm);
 	uint16_t bits_len;
 	unsigned char entl[2];
-	SHASH_DESC_ON_STACK(desc, NULL);
+	struct sm3_state sctx;
 	unsigned int pbytes;
 
 	if (id_len > (USHRT_MAX / 8) || !ec->Q)
···
 	pbytes = MPI_NBYTES(ec->p);
 
 	/* ZA = H256(ENTLA | IDA | a | b | xG | yG | xA | yA) */
-	sm3_base_init(desc);
-	crypto_sm3_update(desc, entl, 2);
-	crypto_sm3_update(desc, id, id_len);
+	sm3_init(&sctx);
+	sm3_update(&sctx, entl, 2);
+	sm3_update(&sctx, id, id_len);
 
-	if (sm2_z_digest_update(desc, ec->a, pbytes) ||
-	    sm2_z_digest_update(desc, ec->b, pbytes) ||
-	    sm2_z_digest_update_point(desc, ec->G, ec, pbytes) ||
-	    sm2_z_digest_update_point(desc, ec->Q, ec, pbytes))
+	if (sm2_z_digest_update(&sctx, ec->a, pbytes) ||
+	    sm2_z_digest_update(&sctx, ec->b, pbytes) ||
+	    sm2_z_digest_update_point(&sctx, ec->G, ec, pbytes) ||
+	    sm2_z_digest_update_point(&sctx, ec->Q, ec, pbytes))
 		return -EINVAL;
 
-	crypto_sm3_final(desc, dgst);
+	sm3_final(&sctx, dgst);
 	return 0;
 }
 EXPORT_SYMBOL(sm2_compute_z_digest);
+15 -127
crypto/sm3_generic.c
···
  *
  * Copyright (C) 2017 ARM Limited or its affiliates.
  * Written by Gilad Ben-Yossef <gilad@benyossef.com>
+ * Copyright (C) 2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  */
 
 #include <crypto/internal/hash.h>
···
 };
 EXPORT_SYMBOL_GPL(sm3_zero_message_hash);
 
-static inline u32 p0(u32 x)
-{
-	return x ^ rol32(x, 9) ^ rol32(x, 17);
-}
-
-static inline u32 p1(u32 x)
-{
-	return x ^ rol32(x, 15) ^ rol32(x, 23);
-}
-
-static inline u32 ff(unsigned int n, u32 a, u32 b, u32 c)
-{
-	return (n < 16) ? (a ^ b ^ c) : ((a & b) | (a & c) | (b & c));
-}
-
-static inline u32 gg(unsigned int n, u32 e, u32 f, u32 g)
-{
-	return (n < 16) ? (e ^ f ^ g) : ((e & f) | ((~e) & g));
-}
-
-static inline u32 t(unsigned int n)
-{
-	return (n < 16) ? SM3_T1 : SM3_T2;
-}
-
-static void sm3_expand(u32 *t, u32 *w, u32 *wt)
-{
-	int i;
-	unsigned int tmp;
-
-	/* load the input */
-	for (i = 0; i <= 15; i++)
-		w[i] = get_unaligned_be32((__u32 *)t + i);
-
-	for (i = 16; i <= 67; i++) {
-		tmp = w[i - 16] ^ w[i - 9] ^ rol32(w[i - 3], 15);
-		w[i] = p1(tmp) ^ (rol32(w[i - 13], 7)) ^ w[i - 6];
-	}
-
-	for (i = 0; i <= 63; i++)
-		wt[i] = w[i] ^ w[i + 4];
-}
-
-static void sm3_compress(u32 *w, u32 *wt, u32 *m)
-{
-	u32 ss1;
-	u32 ss2;
-	u32 tt1;
-	u32 tt2;
-	u32 a, b, c, d, e, f, g, h;
-	int i;
-
-	a = m[0];
-	b = m[1];
-	c = m[2];
-	d = m[3];
-	e = m[4];
-	f = m[5];
-	g = m[6];
-	h = m[7];
-
-	for (i = 0; i <= 63; i++) {
-
-		ss1 = rol32((rol32(a, 12) + e + rol32(t(i), i & 31)), 7);
-
-		ss2 = ss1 ^ rol32(a, 12);
-
-		tt1 = ff(i, a, b, c) + d + ss2 + *wt;
-		wt++;
-
-		tt2 = gg(i, e, f, g) + h + ss1 + *w;
-		w++;
-
-		d = c;
-		c = rol32(b, 9);
-		b = a;
-		a = tt1;
-		h = g;
-		g = rol32(f, 19);
-		f = e;
-		e = p0(tt2);
-	}
-
-	m[0] = a ^ m[0];
-	m[1] = b ^ m[1];
-	m[2] = c ^ m[2];
-	m[3] = d ^ m[3];
-	m[4] = e ^ m[4];
-	m[5] = f ^ m[5];
-	m[6] = g ^ m[6];
-	m[7] = h ^ m[7];
-
-	a = b = c = d = e = f = g = h = ss1 = ss2 = tt1 = tt2 = 0;
-}
-
-static void sm3_transform(struct sm3_state *sst, u8 const *src)
-{
-	unsigned int w[68];
-	unsigned int wt[64];
-
-	sm3_expand((u32 *)src, w, wt);
-	sm3_compress(w, wt, sst->state);
-
-	memzero_explicit(w, sizeof(w));
-	memzero_explicit(wt, sizeof(wt));
-}
-
-static void sm3_generic_block_fn(struct sm3_state *sst, u8 const *src,
-				 int blocks)
-{
-	while (blocks--) {
-		sm3_transform(sst, src);
-		src += SM3_BLOCK_SIZE;
-	}
-}
-
-int crypto_sm3_update(struct shash_desc *desc, const u8 *data,
+static int crypto_sm3_update(struct shash_desc *desc, const u8 *data,
 			  unsigned int len)
 {
-	return sm3_base_do_update(desc, data, len, sm3_generic_block_fn);
+	sm3_update(shash_desc_ctx(desc), data, len);
+	return 0;
 }
-EXPORT_SYMBOL(crypto_sm3_update);
 
-int crypto_sm3_final(struct shash_desc *desc, u8 *out)
+static int crypto_sm3_final(struct shash_desc *desc, u8 *out)
 {
-	sm3_base_do_finalize(desc, sm3_generic_block_fn);
-	return sm3_base_finish(desc, out);
+	sm3_final(shash_desc_ctx(desc), out);
+	return 0;
 }
-EXPORT_SYMBOL(crypto_sm3_final);
 
-int crypto_sm3_finup(struct shash_desc *desc, const u8 *data,
+static int crypto_sm3_finup(struct shash_desc *desc, const u8 *data,
 			unsigned int len, u8 *hash)
 {
-	sm3_base_do_update(desc, data, len, sm3_generic_block_fn);
-	return crypto_sm3_final(desc, hash);
+	struct sm3_state *sctx = shash_desc_ctx(desc);
+
+	if (len)
+		sm3_update(sctx, data, len);
+	sm3_final(sctx, hash);
+	return 0;
 }
-EXPORT_SYMBOL(crypto_sm3_finup);
 
 static struct shash_alg sm3_alg = {
 	.digestsize	= SM3_DIGEST_SIZE,
···
 	.base		= {
 		.cra_name	 = "sm3",
 		.cra_driver_name = "sm3-generic",
+		.cra_priority	 = 100,
 		.cra_blocksize	 = SM3_BLOCK_SIZE,
 		.cra_module	 = THIS_MODULE,
 	}
+3 -223
crypto/tcrypt.c
···
 	return crypto_wait_req(ret, wait);
 }
 
-struct test_mb_ahash_data {
-	struct scatterlist sg[XBUFSIZE];
-	char result[64];
-	struct ahash_request *req;
-	struct crypto_wait wait;
-	char *xbuf[XBUFSIZE];
-};
-
-static inline int do_mult_ahash_op(struct test_mb_ahash_data *data, u32 num_mb,
-				   int *rc)
-{
-	int i, err = 0;
-
-	/* Fire up a bunch of concurrent requests */
-	for (i = 0; i < num_mb; i++)
-		rc[i] = crypto_ahash_digest(data[i].req);
-
-	/* Wait for all requests to finish */
-	for (i = 0; i < num_mb; i++) {
-		rc[i] = crypto_wait_req(rc[i], &data[i].wait);
-
-		if (rc[i]) {
-			pr_info("concurrent request %d error %d\n", i, rc[i]);
-			err = rc[i];
-		}
-	}
-
-	return err;
-}
-
-static int test_mb_ahash_jiffies(struct test_mb_ahash_data *data, int blen,
-				 int secs, u32 num_mb)
-{
-	unsigned long start, end;
-	int bcount;
-	int ret = 0;
-	int *rc;
-
-	rc = kcalloc(num_mb, sizeof(*rc), GFP_KERNEL);
-	if (!rc)
-		return -ENOMEM;
-
-	for (start = jiffies, end = start + secs * HZ, bcount = 0;
-	     time_before(jiffies, end); bcount++) {
-		ret = do_mult_ahash_op(data, num_mb, rc);
-		if (ret)
-			goto out;
-	}
-
-	pr_cont("%d operations in %d seconds (%llu bytes)\n",
-		bcount * num_mb, secs, (u64)bcount * blen * num_mb);
-
-out:
-	kfree(rc);
-	return ret;
-}
-
-static int test_mb_ahash_cycles(struct test_mb_ahash_data *data, int blen,
-				u32 num_mb)
-{
-	unsigned long cycles = 0;
-	int ret = 0;
-	int i;
-	int *rc;
-
-	rc = kcalloc(num_mb, sizeof(*rc), GFP_KERNEL);
-	if (!rc)
-		return -ENOMEM;
-
-	/* Warm-up run. */
-	for (i = 0; i < 4; i++) {
-		ret = do_mult_ahash_op(data, num_mb, rc);
-		if (ret)
-			goto out;
-	}
-
-	/* The real thing. */
-	for (i = 0; i < 8; i++) {
-		cycles_t start, end;
-
-		start = get_cycles();
-		ret = do_mult_ahash_op(data, num_mb, rc);
-		end = get_cycles();
-
-		if (ret)
-			goto out;
-
-		cycles += end - start;
-	}
-
-	pr_cont("1 operation in %lu cycles (%d bytes)\n",
-		(cycles + 4) / (8 * num_mb), blen);
-
-out:
-	kfree(rc);
-	return ret;
-}
-
-static void test_mb_ahash_speed(const char *algo, unsigned int secs,
-				struct hash_speed *speed, u32 num_mb)
-{
-	struct test_mb_ahash_data *data;
-	struct crypto_ahash *tfm;
-	unsigned int i, j, k;
-	int ret;
-
-	data = kcalloc(num_mb, sizeof(*data), GFP_KERNEL);
-	if (!data)
-		return;
-
-	tfm = crypto_alloc_ahash(algo, 0, 0);
-	if (IS_ERR(tfm)) {
-		pr_err("failed to load transform for %s: %ld\n",
-		       algo, PTR_ERR(tfm));
-		goto free_data;
-	}
-
-	for (i = 0; i < num_mb; ++i) {
-		if (testmgr_alloc_buf(data[i].xbuf))
-			goto out;
-
-		crypto_init_wait(&data[i].wait);
-
-		data[i].req = ahash_request_alloc(tfm, GFP_KERNEL);
-		if (!data[i].req) {
-			pr_err("alg: hash: Failed to allocate request for %s\n",
-			       algo);
-			goto out;
-		}
-
-		ahash_request_set_callback(data[i].req, 0, crypto_req_done,
-					   &data[i].wait);
-
-		sg_init_table(data[i].sg, XBUFSIZE);
-		for (j = 0; j < XBUFSIZE; j++) {
-			sg_set_buf(data[i].sg + j, data[i].xbuf[j], PAGE_SIZE);
-			memset(data[i].xbuf[j], 0xff, PAGE_SIZE);
-		}
-	}
-
-	pr_info("\ntesting speed of multibuffer %s (%s)\n", algo,
-		get_driver_name(crypto_ahash, tfm));
-
-	for (i = 0; speed[i].blen != 0; i++) {
-		/* For some reason this only tests digests. */
-		if (speed[i].blen != speed[i].plen)
-			continue;
-
-		if (speed[i].blen > XBUFSIZE * PAGE_SIZE) {
-			pr_err("template (%u) too big for tvmem (%lu)\n",
-			       speed[i].blen, XBUFSIZE * PAGE_SIZE);
-			goto out;
-		}
-
-		if (klen)
-			crypto_ahash_setkey(tfm, tvmem[0], klen);
-
-		for (k = 0; k < num_mb; k++)
-			ahash_request_set_crypt(data[k].req, data[k].sg,
-						data[k].result, speed[i].blen);
-
-		pr_info("test%3u "
-			"(%5u byte blocks,%5u bytes per update,%4u updates): ",
-			i, speed[i].blen, speed[i].plen,
-			speed[i].blen / speed[i].plen);
-
-		if (secs) {
-			ret = test_mb_ahash_jiffies(data, speed[i].blen, secs,
-						    num_mb);
-			cond_resched();
-		} else {
-			ret = test_mb_ahash_cycles(data, speed[i].blen, num_mb);
-		}
-
-
-		if (ret) {
-			pr_err("At least one hashing failed ret=%d\n", ret);
-			break;
-		}
-	}
-
-out:
-	for (k = 0; k < num_mb; ++k)
-		ahash_request_free(data[k].req);
-
-	for (k = 0; k < num_mb; ++k)
-		testmgr_free_buf(data[k].xbuf);
-
-	crypto_free_ahash(tfm);
-
-free_data:
-	kfree(data);
-}
-
 static int test_ahash_jiffies_digest(struct ahash_request *req, int blen,
 				     char *out, int secs)
 {
···
 	pr_debug("testing %s\n", alg);
 
 	ret = alg_test(alg, alg, 0, 0);
-	/* non-fips algs return -EINVAL in fips mode */
-	if (fips_enabled && ret == -EINVAL)
+	/* non-fips algs return -EINVAL or -ECANCELED in fips mode */
+	if (fips_enabled && (ret == -EINVAL || ret == -ECANCELED))
 		ret = 0;
 	return ret;
 }
···
 		if (mode > 400 && mode < 500) break;
 		fallthrough;
 	case 422:
-		test_mb_ahash_speed("sha1", sec, generic_hash_speed_template,
-				    num_mb);
-		if (mode > 400 && mode < 500) break;
-		fallthrough;
-	case 423:
-		test_mb_ahash_speed("sha256", sec, generic_hash_speed_template,
-				    num_mb);
-		if (mode > 400 && mode < 500) break;
-		fallthrough;
-	case 424:
-		test_mb_ahash_speed("sha512", sec, generic_hash_speed_template,
-				    num_mb);
-		if (mode > 400 && mode < 500) break;
-		fallthrough;
-	case 425:
-		test_mb_ahash_speed("sm3", sec, generic_hash_speed_template,
-				    num_mb);
-		if (mode > 400 && mode < 500) break;
-		fallthrough;
-	case 426:
-		test_mb_ahash_speed("streebog256", sec,
-				    generic_hash_speed_template, num_mb);
-		if (mode > 400 && mode < 500) break;
-		fallthrough;
-	case 427:
-		test_mb_ahash_speed("streebog512", sec,
-				    generic_hash_speed_template, num_mb);
+		test_ahash_speed("sm3", sec, generic_hash_speed_template);
 		if (mode > 400 && mode < 500) break;
 		fallthrough;
 	case 499:
+59 -8
crypto/testmgr.c
···
 static unsigned int fuzz_iterations = 100;
 module_param(fuzz_iterations, uint, 0644);
 MODULE_PARM_DESC(fuzz_iterations, "number of fuzz test iterations");
-
-DEFINE_PER_CPU(bool, crypto_simd_disabled_for_test);
-EXPORT_PER_CPU_SYMBOL_GPL(crypto_simd_disabled_for_test);
 #endif
 
 #ifdef CONFIG_CRYPTO_MANAGER_DISABLE_TESTS
···
 	}
 
 	for (i = 0; i < num_vecs; i++) {
+		if (fips_enabled && vecs[i].fips_skip)
+			continue;
+
 		err = test_hash_vec(&vecs[i], i, req, desc, tsgl, hashstate);
 		if (err)
 			goto out;
···
 	}, {
 		.alg = "dh",
 		.test = alg_test_kpp,
-		.fips_allowed = 1,
 		.suite = {
 			.kpp = __VECS(dh_tv_template)
 		}
···
 			.cipher = __VECS(essiv_aes_cbc_tv_template)
 		}
 	}, {
+#if IS_ENABLED(CONFIG_CRYPTO_DH_RFC7919_GROUPS)
+		.alg = "ffdhe2048(dh)",
+		.test = alg_test_kpp,
+		.fips_allowed = 1,
+		.suite = {
+			.kpp = __VECS(ffdhe2048_dh_tv_template)
+		}
+	}, {
+		.alg = "ffdhe3072(dh)",
+		.test = alg_test_kpp,
+		.fips_allowed = 1,
+		.suite = {
+			.kpp = __VECS(ffdhe3072_dh_tv_template)
+		}
+	}, {
+		.alg = "ffdhe4096(dh)",
+		.test = alg_test_kpp,
+		.fips_allowed = 1,
+		.suite = {
+			.kpp = __VECS(ffdhe4096_dh_tv_template)
+		}
+	}, {
+		.alg = "ffdhe6144(dh)",
+		.test = alg_test_kpp,
+		.fips_allowed = 1,
+		.suite = {
+			.kpp = __VECS(ffdhe6144_dh_tv_template)
+		}
+	}, {
+		.alg = "ffdhe8192(dh)",
+		.test = alg_test_kpp,
+		.fips_allowed = 1,
+		.suite = {
+			.kpp = __VECS(ffdhe8192_dh_tv_template)
+		}
+	}, {
+#endif /* CONFIG_CRYPTO_DH_RFC7919_GROUPS */
 		.alg = "gcm(aes)",
 		.generic_driver = "gcm_base(ctr(aes-generic),ghash-generic)",
 		.test = alg_test_aead,
···
 	return -1;
 }
 
+static int alg_fips_disabled(const char *driver, const char *alg)
+{
+	pr_info("alg: %s (%s) is disabled due to FIPS\n", alg, driver);
+
+	return -ECANCELED;
+}
+
 int alg_test(const char *driver, const char *alg, u32 type, u32 mask)
 {
 	int i;
···
 	if (i < 0 && j < 0)
 		goto notest;
 
-	if (fips_enabled && ((i >= 0 && !alg_test_descs[i].fips_allowed) ||
-			     (j >= 0 && !alg_test_descs[j].fips_allowed)))
-		goto non_fips_alg;
+	if (fips_enabled) {
+		if (j >= 0 && !alg_test_descs[j].fips_allowed)
+			return -EINVAL;
+
+		if (i >= 0 && !alg_test_descs[i].fips_allowed)
+			goto non_fips_alg;
+	}
 
 	rc = 0;
 	if (i >= 0)
···
 
 notest:
 	printk(KERN_INFO "alg: No test for %s (%s)\n", alg, driver);
+
+	if (type & CRYPTO_ALG_FIPS_INTERNAL)
+		return alg_fips_disabled(driver, alg);
+
 	return 0;
 non_fips_alg:
-	return -EINVAL;
+	return alg_fips_disabled(driver, alg);
 }
 
 #endif /* CONFIG_CRYPTO_MANAGER_DISABLE_TESTS */
+1446 -10
crypto/testmgr.h
··· 33 33 * @ksize: Length of @key in bytes (0 if no key) 34 34 * @setkey_error: Expected error from setkey() 35 35 * @digest_error: Expected error from digest() 36 + * @fips_skip: Skip the test vector in FIPS mode 36 37 */ 37 38 struct hash_testvec { 38 39 const char *key; ··· 43 42 unsigned short ksize; 44 43 int setkey_error; 45 44 int digest_error; 45 + bool fips_skip; 46 46 }; 47 47 48 48 /* ··· 1246 1244 .secret = 1247 1245 #ifdef __LITTLE_ENDIAN 1248 1246 "\x01\x00" /* type */ 1249 - "\x15\x02" /* len */ 1247 + "\x11\x02" /* len */ 1250 1248 "\x00\x01\x00\x00" /* key_size */ 1251 1249 "\x00\x01\x00\x00" /* p_size */ 1252 - "\x00\x00\x00\x00" /* q_size */ 1253 1250 "\x01\x00\x00\x00" /* g_size */ 1254 1251 #else 1255 1252 "\x00\x01" /* type */ 1256 - "\x02\x15" /* len */ 1253 + "\x02\x11" /* len */ 1257 1254 "\x00\x00\x01\x00" /* key_size */ 1258 1255 "\x00\x00\x01\x00" /* p_size */ 1259 - "\x00\x00\x00\x00" /* q_size */ 1260 1256 "\x00\x00\x00\x01" /* g_size */ 1261 1257 #endif 1262 1258 /* xa */ ··· 1344 1344 "\xd3\x34\x49\xad\x64\xa6\xb1\xc0\x59\x28\x75\x60\xa7\x8a\xb0\x11" 1345 1345 "\x56\x89\x42\x74\x11\xf5\xf6\x5e\x6f\x16\x54\x6a\xb1\x76\x4d\x50" 1346 1346 "\x8a\x68\xc1\x5b\x82\xb9\x0d\x00\x32\x50\xed\x88\x87\x48\x92\x17", 1347 - .secret_size = 533, 1347 + .secret_size = 529, 1348 1348 .b_public_size = 256, 1349 1349 .expected_a_public_size = 256, 1350 1350 .expected_ss_size = 256, ··· 1353 1353 .secret = 1354 1354 #ifdef __LITTLE_ENDIAN 1355 1355 "\x01\x00" /* type */ 1356 - "\x15\x02" /* len */ 1356 + "\x11\x02" /* len */ 1357 1357 "\x00\x01\x00\x00" /* key_size */ 1358 1358 "\x00\x01\x00\x00" /* p_size */ 1359 - "\x00\x00\x00\x00" /* q_size */ 1360 1359 "\x01\x00\x00\x00" /* g_size */ 1361 1360 #else 1362 1361 "\x00\x01" /* type */ 1363 - "\x02\x15" /* len */ 1362 + "\x02\x11" /* len */ 1364 1363 "\x00\x00\x01\x00" /* key_size */ 1365 1364 "\x00\x00\x01\x00" /* p_size */ 1366 - "\x00\x00\x00\x00" /* q_size */ 1367 1365 "\x00\x00\x00\x01" /* g_size */ 
1368 1366 #endif
1369 1367 	/* xa */
···
1451 1453 	"\x5e\x5a\x64\xbd\xf6\x85\x04\xe8\x28\x6a\xac\xef\xce\x19\x8e\x9a"
1452 1454 	"\xfe\x75\xc0\x27\x69\xe3\xb3\x7b\x21\xa7\xb1\x16\xa4\x85\x23\xee"
1453 1455 	"\xb0\x1b\x04\x6e\xbd\xab\x16\xde\xfd\x86\x6b\xa9\x95\xd7\x0b\xfd",
1454      - 	.secret_size = 533,
     1456 + 	.secret_size = 529,
1455 1457 	.b_public_size = 256,
1456 1458 	.expected_a_public_size = 256,
1457 1459 	.expected_ss_size = 256,
1458 1460 	}
     1461 + };
     1462 +
     1463 + static const struct kpp_testvec ffdhe2048_dh_tv_template[] __maybe_unused = {
     1464 + 	{
     1465 + 	.secret =
     1466 + #ifdef __LITTLE_ENDIAN
     1467 + 	"\x01\x00" /* type */
     1468 + 	"\x10\x01" /* len */
     1469 + 	"\x00\x01\x00\x00" /* key_size */
     1470 + 	"\x00\x00\x00\x00" /* p_size */
     1471 + 	"\x00\x00\x00\x00" /* g_size */
     1472 + #else
     1473 + 	"\x00\x01" /* type */
     1474 + 	"\x01\x10" /* len */
     1475 + 	"\x00\x00\x01\x00" /* key_size */
     1476 + 	"\x00\x00\x00\x00" /* p_size */
     1477 + 	"\x00\x00\x00\x00" /* g_size */
     1478 + #endif
     1479 + 	/* xa */
     1480 + 	"\x23\x7d\xd0\x06\xfd\x7a\xe5\x7a\x08\xda\x98\x31\xc0\xb3\xd5\x85"
     1481 + 	"\xe2\x0d\x2a\x91\x5f\x78\x4b\xa6\x62\xd0\xa6\x35\xd4\xef\x86\x39"
     1482 + 	"\xf1\xdb\x71\x5e\xb0\x11\x2e\xee\x91\x3a\xaa\xf9\xe3\xdf\x8d\x8b"
     1483 + 	"\x48\x41\xde\xe8\x78\x53\xc5\x5f\x93\xd2\x79\x0d\xbe\x8d\x83\xe8"
     1484 + 	"\x8f\x00\xd2\xde\x13\x18\x04\x05\x20\x6d\xda\xfa\x1d\x0b\x24\x52"
     1485 + 	"\x3a\x18\x2b\xe1\x1e\xae\x15\x3b\x0f\xaa\x09\x09\xf6\x01\x98\xe9"
     1486 + 	"\x81\x5d\x6b\x83\x6e\x55\xf1\x5d\x6f\x6f\x0d\x9d\xa8\x72\x32\x63"
     1487 + 	"\x60\xe6\x0b\xc5\x22\xe2\xf9\x46\x58\xa2\x1c\x2a\xb0\xd5\xaf\xe3"
     1488 + 	"\x5b\x03\xb7\x36\xb7\xba\x55\x20\x08\x7c\x51\xd4\x89\x42\x9c\x14"
     1489 + 	"\x23\xe2\x71\x3e\x15\x2a\x0d\x34\x8a\xde\xad\x84\x11\x15\x72\x18"
     1490 + 	"\x42\x43\x0a\xe2\x58\x29\xb3\x90\x0f\x56\xd8\x8a\x0f\x0e\xbc\x0e"
     1491 + 	"\x9c\xe7\xd5\xe6\x5b\xbf\x06\x64\x38\x12\xa5\x8d\x5b\x68\x34\xdd"
     1492 + 	"\x75\x48\xc9\xa7\xa3\x58\x5a\x1c\xe1\xb2\xc5\xe3\x39\x03\xcf\xab"
     1493 +
"\xc2\x14\x07\xaf\x55\x80\xc7\x63\xe4\x03\xeb\xe9\x0a\x25\x61\x85" 1494 + "\x1d\x0e\x81\x52\x7b\xbc\x4a\x0c\xc8\x59\x6a\xac\x18\xfb\x8c\x0c" 1495 + "\xb4\x79\xbd\xa1\x4c\xbb\x02\xc9\xd5\x13\x88\x3d\x25\xaa\x77\x49", 1496 + .b_public = 1497 + "\x5c\x00\x6f\xda\xfe\x4c\x0c\xc2\x18\xff\xa9\xec\x7a\xbe\x8a\x51" 1498 + "\x64\x6b\x57\xf8\xed\xe2\x36\x77\xc1\x23\xbf\x56\xa6\x48\x76\x34" 1499 + "\x0e\xf3\x68\x05\x45\x6a\x98\x5b\x9e\x8b\xc0\x11\x29\xcb\x5b\x66" 1500 + "\x2d\xc2\xeb\x4c\xf1\x7d\x85\x30\xaa\xd5\xf5\xb8\xd3\x62\x1e\x97" 1501 + "\x1e\x34\x18\xf8\x76\x8c\x10\xca\x1f\xe4\x5d\x62\xe1\xbe\x61\xef" 1502 + "\xaf\x2c\x8d\x97\x15\xa5\x86\xd5\xd3\x12\x6f\xec\xe2\xa4\xb2\x5a" 1503 + "\x35\x1d\xd4\x91\xa6\xef\x13\x09\x65\x9c\x45\xc0\x12\xad\x7f\xee" 1504 + "\x93\x5d\xfa\x89\x26\x7d\xae\xee\xea\x8c\xa3\xcf\x04\x2d\xa0\xc7" 1505 + "\xd9\x14\x62\xaf\xdf\xa0\x33\xd7\x5e\x83\xa2\xe6\x0e\x0e\x5d\x77" 1506 + "\xce\xe6\x72\xe4\xec\x9d\xff\x72\x9f\x38\x95\x19\x96\xba\x4c\xe3" 1507 + "\x5f\xb8\x46\x4a\x1d\xe9\x62\x7b\xa8\xdc\xe7\x61\x90\x6b\xb9\xd4" 1508 + "\xad\x0b\xa3\x06\xb3\x70\xfa\xea\x2b\xc4\x2c\xde\x43\x37\xf6\x8d" 1509 + "\x72\xf0\x86\x9a\xbb\x3b\x8e\x7a\x71\x03\x30\x30\x2a\x5d\xcd\x1e" 1510 + "\xe4\xd3\x08\x07\x75\x17\x17\x72\x1e\x77\x6c\x98\x0d\x29\x7f\xac" 1511 + "\xe7\xb2\xee\xa9\x1c\x33\x9d\x08\x39\xe1\xd8\x5b\xe5\xbc\x48\xb2" 1512 + "\xb6\xdf\xcd\xa0\x42\x06\xcc\xfb\xed\x60\x6f\xbc\x57\xac\x09\x45", 1513 + .expected_a_public = 1514 + "\x8b\xdb\xc1\xf7\xc6\xba\xa1\x38\x95\x6a\xa1\xb6\x04\x5e\xae\x52" 1515 + "\x72\xfc\xef\x2d\x9d\x71\x05\x9c\xd3\x02\xa9\xfb\x55\x0f\xfa\xc9" 1516 + "\xb4\x34\x51\xa3\x28\x89\x8d\x93\x92\xcb\xd9\xb5\xb9\x66\xfc\x67" 1517 + "\x15\x92\x6f\x73\x85\x15\xe2\xfc\x11\x6b\x97\x8c\x4b\x0f\x12\xfa" 1518 + "\x8d\x72\x76\x9b\x8f\x3b\xfe\x31\xbe\x42\x88\x4c\xd2\xb2\x70\xa6" 1519 + "\xa5\xe3\x7e\x73\x07\x12\x36\xaa\xc9\x5c\x83\xe1\xf1\x46\x41\x4f" 1520 + "\x7c\x52\xaf\xdc\xa4\xe6\x82\xa3\x86\x83\x47\x5a\x12\x3a\x0c\xe3" 1521 + 
"\xdd\xdb\x94\x03\x2a\x59\x91\xa0\x19\xe5\xf8\x07\xdd\x54\x6a\x22" 1522 + "\x43\xb7\xf3\x74\xd7\xb9\x30\xfe\x9c\xe8\xd1\xcf\x06\x43\x68\xb9" 1523 + "\x54\x8f\x54\xa2\xe5\x3c\xf2\xc3\x4c\xee\xd4\x7c\x5d\x0e\xb1\x7b" 1524 + "\x16\x68\xb5\xb3\x7d\xd4\x11\x83\x5c\x77\x17\xc4\xf0\x59\x76\x7a" 1525 + "\x83\x40\xe5\xd9\x4c\x76\x23\x5b\x17\x6d\xee\x4a\x92\x68\x4b\x89" 1526 + "\xa0\x6d\x23\x8c\x80\x31\x33\x3a\x12\xf4\x50\xa6\xcb\x13\x97\x01" 1527 + "\xb8\x2c\xe6\xd2\x38\xdf\xd0\x7f\xc6\x27\x19\x0e\xb2\x07\xfd\x1f" 1528 + "\x1b\x9c\x1b\x87\xf9\x73\x6a\x3f\x7f\xb0\xf9\x2f\x3c\x19\x9f\xc9" 1529 + "\x8f\x97\x21\x0e\x8e\xbb\x1a\x17\x20\x15\xdd\xc6\x42\x60\xae\x4d", 1530 + .expected_ss = 1531 + "\xf3\x0e\x64\x7b\x66\xd7\x82\x7e\xab\x7e\x4a\xbe\x13\x6f\x43\x3d" 1532 + "\xea\x4f\x1f\x8b\x9d\x41\x56\x71\xe1\x06\x96\x02\x68\xfa\x44\x6e" 1533 + "\xe7\xf2\x26\xd4\x01\x4a\xf0\x28\x25\x76\xad\xd7\xe0\x17\x74\xfe" 1534 + "\xf9\xe1\x6d\xd3\xf7\xc7\xdf\xc0\x62\xa5\xf3\x4e\x1b\x5c\x77\x2a" 1535 + "\xfb\x0b\x87\xc3\xde\x1e\xc1\xe0\xd3\x7a\xb8\x02\x02\xec\x9c\x97" 1536 + "\xfb\x34\xa0\x20\x10\x23\x87\xb2\x9a\x72\xe3\x3d\xb2\x18\x50\xf3" 1537 + "\x6a\xd3\xd3\x19\xc4\x36\xd5\x59\xd6\xd6\xa7\x5c\xc3\xf9\x09\x33" 1538 + "\xa1\xf5\xb9\x4b\xf3\x0b\xe1\x4f\x79\x6b\x45\xf2\xec\x8b\xe5\x69" 1539 + "\x9f\xc6\x05\x01\xfe\x3a\x13\xfd\x6d\xea\x03\x83\x29\x7c\x7f\xf5" 1540 + "\x41\x55\x95\xde\x7e\x62\xae\xaf\x28\xdb\x7c\xa9\x90\x1e\xb2\xb1" 1541 + "\x1b\xef\xf1\x2e\xde\x47\xaa\xa8\x92\x9a\x49\x3d\xc0\xe0\x8d\xbb" 1542 + "\x0c\x42\x86\xaf\x00\xce\xb0\xab\x22\x7c\xe9\xbe\xb9\x72\x2f\xcf" 1543 + "\x5e\x5d\x62\x52\x2a\xd1\xfe\xcc\xa2\xf3\x40\xfd\x01\xa7\x54\x0a" 1544 + "\xa1\xfb\x1c\xf2\x44\xa6\x47\x30\x5a\xba\x2a\x05\xff\xd0\x6c\xab" 1545 + "\xeb\xe6\x8f\xf6\xd7\x73\xa3\x0e\x6c\x0e\xcf\xfd\x8e\x16\x5d\xe0" 1546 + "\x2c\x11\x05\x82\x3c\x22\x16\x6c\x52\x61\xcf\xbb\xff\xf8\x06\xd0", 1547 + .secret_size = 272, 1548 + .b_public_size = 256, 1549 + .expected_a_public_size = 256, 1550 + .expected_ss_size = 256, 1551 + 
}, 1552 + { 1553 + .secret = 1554 + #ifdef __LITTLE_ENDIAN 1555 + "\x01\x00" /* type */ 1556 + "\x10\x00" /* len */ 1557 + "\x00\x00\x00\x00" /* key_size */ 1558 + "\x00\x00\x00\x00" /* p_size */ 1559 + "\x00\x00\x00\x00", /* g_size */ 1560 + #else 1561 + "\x00\x01" /* type */ 1562 + "\x00\x10" /* len */ 1563 + "\x00\x00\x00\x00" /* key_size */ 1564 + "\x00\x00\x00\x00" /* p_size */ 1565 + "\x00\x00\x00\x00", /* g_size */ 1566 + #endif 1567 + .b_secret = 1568 + #ifdef __LITTLE_ENDIAN 1569 + "\x01\x00" /* type */ 1570 + "\x10\x01" /* len */ 1571 + "\x00\x01\x00\x00" /* key_size */ 1572 + "\x00\x00\x00\x00" /* p_size */ 1573 + "\x00\x00\x00\x00" /* g_size */ 1574 + #else 1575 + "\x00\x01" /* type */ 1576 + "\x01\x10" /* len */ 1577 + "\x00\x00\x01\x00" /* key_size */ 1578 + "\x00\x00\x00\x00" /* p_size */ 1579 + "\x00\x00\x00\x00" /* g_size */ 1580 + #endif 1581 + /* xa */ 1582 + "\x23\x7d\xd0\x06\xfd\x7a\xe5\x7a\x08\xda\x98\x31\xc0\xb3\xd5\x85" 1583 + "\xe2\x0d\x2a\x91\x5f\x78\x4b\xa6\x62\xd0\xa6\x35\xd4\xef\x86\x39" 1584 + "\xf1\xdb\x71\x5e\xb0\x11\x2e\xee\x91\x3a\xaa\xf9\xe3\xdf\x8d\x8b" 1585 + "\x48\x41\xde\xe8\x78\x53\xc5\x5f\x93\xd2\x79\x0d\xbe\x8d\x83\xe8" 1586 + "\x8f\x00\xd2\xde\x13\x18\x04\x05\x20\x6d\xda\xfa\x1d\x0b\x24\x52" 1587 + "\x3a\x18\x2b\xe1\x1e\xae\x15\x3b\x0f\xaa\x09\x09\xf6\x01\x98\xe9" 1588 + "\x81\x5d\x6b\x83\x6e\x55\xf1\x5d\x6f\x6f\x0d\x9d\xa8\x72\x32\x63" 1589 + "\x60\xe6\x0b\xc5\x22\xe2\xf9\x46\x58\xa2\x1c\x2a\xb0\xd5\xaf\xe3" 1590 + "\x5b\x03\xb7\x36\xb7\xba\x55\x20\x08\x7c\x51\xd4\x89\x42\x9c\x14" 1591 + "\x23\xe2\x71\x3e\x15\x2a\x0d\x34\x8a\xde\xad\x84\x11\x15\x72\x18" 1592 + "\x42\x43\x0a\xe2\x58\x29\xb3\x90\x0f\x56\xd8\x8a\x0f\x0e\xbc\x0e" 1593 + "\x9c\xe7\xd5\xe6\x5b\xbf\x06\x64\x38\x12\xa5\x8d\x5b\x68\x34\xdd" 1594 + "\x75\x48\xc9\xa7\xa3\x58\x5a\x1c\xe1\xb2\xc5\xe3\x39\x03\xcf\xab" 1595 + "\xc2\x14\x07\xaf\x55\x80\xc7\x63\xe4\x03\xeb\xe9\x0a\x25\x61\x85" 1596 + "\x1d\x0e\x81\x52\x7b\xbc\x4a\x0c\xc8\x59\x6a\xac\x18\xfb\x8c\x0c" 1597 
+ "\xb4\x79\xbd\xa1\x4c\xbb\x02\xc9\xd5\x13\x88\x3d\x25\xaa\x77\x49", 1598 + .b_public = 1599 + "\x8b\xdb\xc1\xf7\xc6\xba\xa1\x38\x95\x6a\xa1\xb6\x04\x5e\xae\x52" 1600 + "\x72\xfc\xef\x2d\x9d\x71\x05\x9c\xd3\x02\xa9\xfb\x55\x0f\xfa\xc9" 1601 + "\xb4\x34\x51\xa3\x28\x89\x8d\x93\x92\xcb\xd9\xb5\xb9\x66\xfc\x67" 1602 + "\x15\x92\x6f\x73\x85\x15\xe2\xfc\x11\x6b\x97\x8c\x4b\x0f\x12\xfa" 1603 + "\x8d\x72\x76\x9b\x8f\x3b\xfe\x31\xbe\x42\x88\x4c\xd2\xb2\x70\xa6" 1604 + "\xa5\xe3\x7e\x73\x07\x12\x36\xaa\xc9\x5c\x83\xe1\xf1\x46\x41\x4f" 1605 + "\x7c\x52\xaf\xdc\xa4\xe6\x82\xa3\x86\x83\x47\x5a\x12\x3a\x0c\xe3" 1606 + "\xdd\xdb\x94\x03\x2a\x59\x91\xa0\x19\xe5\xf8\x07\xdd\x54\x6a\x22" 1607 + "\x43\xb7\xf3\x74\xd7\xb9\x30\xfe\x9c\xe8\xd1\xcf\x06\x43\x68\xb9" 1608 + "\x54\x8f\x54\xa2\xe5\x3c\xf2\xc3\x4c\xee\xd4\x7c\x5d\x0e\xb1\x7b" 1609 + "\x16\x68\xb5\xb3\x7d\xd4\x11\x83\x5c\x77\x17\xc4\xf0\x59\x76\x7a" 1610 + "\x83\x40\xe5\xd9\x4c\x76\x23\x5b\x17\x6d\xee\x4a\x92\x68\x4b\x89" 1611 + "\xa0\x6d\x23\x8c\x80\x31\x33\x3a\x12\xf4\x50\xa6\xcb\x13\x97\x01" 1612 + "\xb8\x2c\xe6\xd2\x38\xdf\xd0\x7f\xc6\x27\x19\x0e\xb2\x07\xfd\x1f" 1613 + "\x1b\x9c\x1b\x87\xf9\x73\x6a\x3f\x7f\xb0\xf9\x2f\x3c\x19\x9f\xc9" 1614 + "\x8f\x97\x21\x0e\x8e\xbb\x1a\x17\x20\x15\xdd\xc6\x42\x60\xae\x4d", 1615 + .secret_size = 16, 1616 + .b_secret_size = 272, 1617 + .b_public_size = 256, 1618 + .expected_a_public_size = 256, 1619 + .expected_ss_size = 256, 1620 + .genkey = true, 1621 + }, 1622 + }; 1623 + 1624 + static const struct kpp_testvec ffdhe3072_dh_tv_template[] __maybe_unused = { 1625 + { 1626 + .secret = 1627 + #ifdef __LITTLE_ENDIAN 1628 + "\x01\x00" /* type */ 1629 + "\x90\x01" /* len */ 1630 + "\x80\x01\x00\x00" /* key_size */ 1631 + "\x00\x00\x00\x00" /* p_size */ 1632 + "\x00\x00\x00\x00" /* g_size */ 1633 + #else 1634 + "\x00\x01" /* type */ 1635 + "\x01\x90" /* len */ 1636 + "\x00\x00\x01\x80" /* key_size */ 1637 + "\x00\x00\x00\x00" /* p_size */ 1638 + "\x00\x00\x00\x00" /* g_size */ 1639 + #endif 
1640 + /* xa */ 1641 + "\x6b\xb4\x97\x23\xfa\xc8\x5e\xa9\x7b\x63\xe7\x3e\x0e\x99\xc3\xb9" 1642 + "\xda\xb7\x48\x0d\xc3\xb1\xbf\x4f\x17\xc7\xa9\x51\xf6\x64\xff\xc4" 1643 + "\x31\x58\x87\x25\x83\x2c\x00\xf0\x41\x29\xf7\xee\xf9\xe6\x36\x76" 1644 + "\xd6\x3a\x24\xbe\xa7\x07\x0b\x93\xc7\x9f\x6c\x75\x0a\x26\x75\x76" 1645 + "\xe3\x0c\x42\xe0\x00\x04\x69\xd9\xec\x0b\x59\x54\x28\x8f\xd7\x9a" 1646 + "\x63\xf4\x5b\xdf\x85\x65\xc4\xe1\x95\x27\x4a\x42\xad\x36\x47\xa9" 1647 + "\x0a\xf8\x14\x1c\xf3\x94\x3b\x7e\x47\x99\x35\xa8\x18\xec\x70\x10" 1648 + "\xdf\xcb\xd2\x78\x88\xc1\x2d\x59\x93\xc1\xa4\x6d\xd7\x1d\xb9\xd5" 1649 + "\xf8\x30\x06\x7f\x98\x90\x0c\x74\x5e\x89\x2f\x64\x5a\xad\x5f\x53" 1650 + "\xb2\xa3\xa8\x83\xbf\xfc\x37\xef\xb8\x36\x0a\x5c\x62\x81\x64\x74" 1651 + "\x16\x2f\x45\x39\x2a\x91\x26\x87\xc0\x12\xcc\x75\x11\xa3\xa1\xc5" 1652 + "\xae\x20\xcf\xcb\x20\x25\x6b\x7a\x31\x93\x9d\x38\xb9\x57\x72\x46" 1653 + "\xd4\x84\x65\x87\xf1\xb5\xd3\xab\xfc\xc3\x4d\x40\x92\x94\x1e\xcd" 1654 + "\x1c\x87\xec\x3f\xcd\xbe\xd0\x95\x6b\x40\x02\xdd\x62\xeb\x0a\xda" 1655 + "\x4f\xbe\x8e\x32\x48\x8b\x6d\x83\xa0\x96\x62\x23\xec\x83\x91\x44" 1656 + "\xf9\x72\x01\xac\xa0\xe4\x72\x1d\x5a\x75\x05\x57\x90\xae\x7e\xb4" 1657 + "\x71\x39\x01\x05\xdc\xe9\xee\xcb\xf0\x61\x28\x91\x69\x8c\x31\x03" 1658 + "\x7a\x92\x15\xa1\x58\x67\x3d\x70\x82\xa6\x2c\xfe\x10\x56\x58\xd3" 1659 + "\x94\x67\xe1\xbe\xee\xc1\x64\x5c\x4b\xc8\x28\x3d\xc5\x66\x3a\xab" 1660 + "\x22\xc1\x7e\xa1\xbb\xf3\x19\x3b\xda\x46\x82\x45\xd4\x3c\x7c\xc6" 1661 + "\xce\x1f\x7f\x95\xa2\x17\xff\x88\xba\xd6\x4d\xdb\xd2\xea\xde\x39" 1662 + "\xd6\xa5\x18\x73\xbb\x64\x6e\x79\xe9\xdc\x3f\x92\x7f\xda\x1f\x49" 1663 + "\x33\x70\x65\x73\xa2\xd9\x06\xb8\x1b\x29\x29\x1a\xe0\xa3\xe6\x05" 1664 + "\x9a\xa8\xc2\x4e\x7a\x78\x1d\x22\x57\x21\xc8\xa3\x8d\x66\x3e\x23", 1665 + .b_public = 1666 + "\x73\x40\x8b\xce\xe8\x6a\x1c\x03\x50\x54\x42\x36\x22\xc6\x1d\xe8" 1667 + "\xe1\xef\x5c\x89\xa5\x55\xc1\xc4\x1c\xd7\x4f\xee\x5d\xba\x62\x60" 1668 + 
"\xfe\x93\x2f\xfd\x93\x2c\x8f\x70\xc6\x47\x17\x25\xb2\x95\xd7\x7d" 1669 + "\x41\x81\x4d\x52\x1c\xbe\x4d\x57\x3e\x26\x51\x28\x03\x8f\x67\xf5" 1670 + "\x22\x16\x1c\x67\xf7\x62\xcb\xfd\xa3\xee\x8d\xe0\xfa\x15\x9a\x53" 1671 + "\xbe\x7b\x9f\xc0\x12\x7a\xfc\x5e\x77\x2d\x60\x06\xba\x71\xc5\xca" 1672 + "\xd7\x26\xaf\x3b\xba\x6f\xd3\xc4\x82\x57\x19\x26\xb0\x16\x7b\xbd" 1673 + "\x83\xf2\x21\x03\x79\xff\x0a\x6f\xc5\x7b\x00\x15\xad\x5b\xf4\x42" 1674 + "\x1f\xcb\x7f\x3d\x34\x77\x3c\xc3\xe0\x38\xa5\x40\x51\xbe\x6f\xd9" 1675 + "\xc9\x77\x9c\xfc\x0d\xc1\x8e\xef\x0f\xaa\x5e\xa8\xbb\x16\x4a\x3e" 1676 + "\x26\x55\xae\xc1\xb6\x3e\xfd\x73\xf7\x59\xd2\xe5\x4b\x91\x8e\x28" 1677 + "\x77\x1e\x5a\xe2\xcd\xce\x92\x35\xbb\x1e\xbb\xcf\x79\x94\xdf\x31" 1678 + "\xde\x31\xa8\x75\xf6\xe0\xaa\x2e\xe9\x4f\x44\xc8\xba\xb9\xab\x80" 1679 + "\x29\xa1\xea\x58\x2e\x40\x96\xa0\x1a\xf5\x2c\x38\x47\x43\x5d\x26" 1680 + "\x2c\xd8\xad\xea\xd3\xad\xe8\x51\x49\xad\x45\x2b\x25\x7c\xde\xe4" 1681 + "\xaf\x03\x2a\x39\x26\x86\x66\x10\xbc\xa8\x71\xda\xe0\xe8\xf1\xdd" 1682 + "\x50\xff\x44\xb2\xd3\xc7\xff\x66\x63\xf6\x42\xe3\x97\x9d\x9e\xf4" 1683 + "\xa6\x89\xb9\xab\x12\x17\xf2\x85\x56\x9c\x6b\x24\x71\x83\x57\x7d" 1684 + "\x3c\x7b\x2b\x88\x92\x19\xd7\x1a\x00\xd5\x38\x94\x43\x60\x4d\xa7" 1685 + "\x12\x9e\x0d\xf6\x5c\x9a\xd3\xe2\x9e\xb1\x21\xe8\xe2\x9e\xe9\x1e" 1686 + "\x9d\xa5\x94\x95\xa6\x3d\x12\x15\xd8\x8b\xac\xe0\x8c\xde\xe6\x40" 1687 + "\x98\xaa\x5e\x55\x4f\x3d\x86\x87\x0d\xe3\xc6\x68\x15\xe6\xde\x17" 1688 + "\x78\x21\xc8\x6c\x06\xc7\x94\x56\xb4\xaf\xa2\x35\x0b\x0c\x97\xd7" 1689 + "\xa4\x12\xee\xf4\xd2\xef\x80\x28\xb3\xee\xe9\x15\x8b\x01\x32\x79", 1690 + .expected_a_public = 1691 + "\x1b\x6a\xba\xea\xa3\xcc\x50\x69\xa9\x41\x89\xaf\x04\xe1\x44\x22" 1692 + "\x97\x20\xd1\xf6\x1e\xcb\x64\x36\x6f\xee\x0b\x16\xc1\xd9\x91\xbe" 1693 + "\x57\xc8\xd9\xf2\xa1\x96\x91\xec\x41\xc7\x79\x00\x1a\x48\x25\x55" 1694 + "\xbe\xf3\x20\x8c\x38\xc6\x7b\xf2\x8b\x5a\xc3\xb5\x87\x0a\x86\x3d" 1695 + 
"\xb7\xd6\xce\xb0\x96\x2e\x5d\xc4\x00\x5e\x42\xe4\xe5\x50\x4f\xb8" 1696 + "\x6f\x18\xa4\xe1\xd3\x20\xfc\x3c\xf5\x0a\xff\x23\xa6\x5b\xb4\x17" 1697 + "\x3e\x7b\xdf\xb9\xb5\x3c\x1b\x76\x29\xcd\xb4\x46\x4f\x27\x8f\xd2" 1698 + "\xe8\x27\x66\xdb\xe8\xb3\xf5\xe1\xd0\x04\xcd\x89\xff\xba\x76\x67" 1699 + "\xe8\x4d\xcf\x86\x1c\x8a\xd1\xcf\x99\x27\xfb\xa9\x78\xcc\x94\xaf" 1700 + "\x3d\x04\xfd\x25\xc0\x47\xfa\x29\x80\x05\xf4\xde\xad\xdb\xab\x12" 1701 + "\xb0\x2b\x8e\xca\x02\x06\x6d\xad\x3e\x09\xb1\x22\xa3\xf5\x4c\x6d" 1702 + "\x69\x99\x58\x8b\xd8\x45\x2e\xe0\xc9\x3c\xf7\x92\xce\x21\x90\x6b" 1703 + "\x3b\x65\x9f\x64\x79\x8d\x67\x22\x1a\x37\xd3\xee\x51\xe2\xe7\x5a" 1704 + "\x93\x51\xaa\x3c\x4b\x04\x16\x32\xef\xe3\x66\xbe\x18\x94\x88\x64" 1705 + "\x79\xce\x06\x3f\xb8\xd6\xee\xdc\x13\x79\x6f\x20\x14\xc2\x6b\xce" 1706 + "\xc8\xda\x42\xa5\x93\x5b\xe4\x7f\x1a\xe6\xda\x0f\xb3\xc1\x5f\x30" 1707 + "\x50\x76\xe8\x37\x3d\xca\x77\x2c\xa8\xe4\x3b\xf9\x6f\xe0\x17\xed" 1708 + "\x0e\xef\xb7\x31\x14\xb5\xea\xd9\x39\x22\x89\xb6\x40\x57\xcc\x84" 1709 + "\xef\x73\xa7\xe9\x27\x21\x85\x89\xfa\xaf\x03\xda\x9c\x8b\xfd\x52" 1710 + "\x7d\xb0\xa4\xe4\xf9\xd8\x90\x55\xc4\x39\xd6\x9d\xaf\x3b\xce\xac" 1711 + "\xaa\x36\x14\x7a\x9b\x8b\x12\x43\xe1\xca\x61\xae\x46\x5b\xe7\xe5" 1712 + "\x88\x32\x80\xa0\x2d\x51\xbb\x2f\xea\xeb\x3c\x71\xb2\xae\xce\xca" 1713 + "\x61\xd2\x76\xe0\x45\x46\x78\x4e\x09\x2d\xc2\x54\xc2\xa9\xc7\xa8" 1714 + "\x55\x8e\x72\xa4\x8b\x8a\xc9\x01\xdb\xe9\x58\x11\xa1\xc4\xe7\x12", 1715 + .expected_ss = 1716 + "\x47\x8e\xb2\x19\x09\xf0\x46\x99\x6b\x41\x86\xf7\x34\xad\xbf\x2a" 1717 + "\x18\x1b\x7d\xec\xa9\xb2\x47\x2f\x40\xfb\x9a\x64\x30\x44\xf3\x4c" 1718 + "\x01\x67\xad\x57\x5a\xbc\xd4\xc8\xef\x7e\x8a\x14\x74\x1d\x6d\x8c" 1719 + "\x7b\xce\xc5\x57\x5f\x95\xe8\x72\xba\xdf\xa3\xcd\x00\xbe\x09\x4c" 1720 + "\x06\x72\xe7\x17\xb0\xe5\xe5\xb7\x20\xa5\xcb\xd9\x68\x99\xad\x3f" 1721 + "\xde\xf3\xde\x1d\x1c\x00\x74\xd2\xd1\x57\x55\x5d\xce\x76\x0c\xc4" 1722 + 
"\x7a\xc4\x65\x7c\x19\x17\x0a\x09\x66\x7d\x3a\xab\xf7\x61\x3a\xe3" 1723 + "\x5b\xac\xcf\x69\xb0\x8b\xee\x5d\x28\x36\xbb\x3f\x74\xce\x6e\x38" 1724 + "\x1e\x39\xab\x26\xca\x89\xdc\x58\x59\xcb\x95\xe4\xbc\xd6\x19\x48" 1725 + "\xd0\x55\x68\x7b\xb4\x27\x95\x3c\xd9\x58\x10\x4f\x8f\x55\x1c\x3f" 1726 + "\x04\xce\x89\x1f\x82\x28\xe9\x48\x17\x47\x8f\xee\xb7\x8f\xeb\xb1" 1727 + "\x29\xa8\x23\x18\x73\x33\x9f\x83\x08\xca\xcd\x54\x6e\xca\xec\x78" 1728 + "\x7b\x16\x83\x3f\xdb\x0a\xef\xfd\x87\x94\x19\x08\x6e\x6e\x22\x57" 1729 + "\xd7\xd2\x79\xf9\xf6\xeb\xe0\x6c\x93\x9d\x95\xfa\x41\x7a\xa9\xd6" 1730 + "\x2a\xa3\x26\x9b\x24\x1b\x8b\xa0\xed\x04\xb2\xe4\x6c\x4e\xc4\x3f" 1731 + "\x61\xe5\xe0\x4d\x09\x28\xaf\x58\x35\x25\x0b\xd5\x38\x18\x69\x51" 1732 + "\x18\x51\x73\x7b\x28\x19\x9f\xe4\x69\xfc\x2c\x25\x08\x99\x8f\x62" 1733 + "\x65\x62\xa5\x28\xf1\xf4\xfb\x02\x29\x27\xb0\x5e\xbb\x4f\xf9\x1a" 1734 + "\xa7\xc4\x38\x63\x5b\x01\xfe\x00\x66\xe3\x47\x77\x21\x85\x17\xd5" 1735 + "\x34\x19\xd3\x87\xab\x44\x62\x08\x59\xb2\x6b\x1f\x21\x0c\x23\x84" 1736 + "\xf7\xba\x92\x67\xf9\x16\x85\x6a\xe0\xeb\xe7\x4f\x06\x80\x81\x81" 1737 + "\x28\x9c\xe8\x2e\x71\x97\x48\xe0\xd1\xbc\xce\xe9\x42\x2c\x89\xdf" 1738 + "\x0b\xa9\xa1\x07\x84\x33\x78\x7f\x49\x2f\x1c\x55\xc3\x7f\xc3\x37" 1739 + "\x40\xdf\x13\xf4\xa0\x21\x79\x6e\x3a\xe3\xb8\x23\x9e\x8a\x6e\x9c", 1740 + .secret_size = 400, 1741 + .b_public_size = 384, 1742 + .expected_a_public_size = 384, 1743 + .expected_ss_size = 384, 1744 + }, 1745 + { 1746 + .secret = 1747 + #ifdef __LITTLE_ENDIAN 1748 + "\x01\x00" /* type */ 1749 + "\x10\x00" /* len */ 1750 + "\x00\x00\x00\x00" /* key_size */ 1751 + "\x00\x00\x00\x00" /* p_size */ 1752 + "\x00\x00\x00\x00", /* g_size */ 1753 + #else 1754 + "\x00\x01" /* type */ 1755 + "\x00\x10" /* len */ 1756 + "\x00\x00\x00\x00" /* key_size */ 1757 + "\x00\x00\x00\x00" /* p_size */ 1758 + "\x00\x00\x00\x00", /* g_size */ 1759 + #endif 1760 + .b_secret = 1761 + #ifdef __LITTLE_ENDIAN 1762 + "\x01\x00" /* type */ 1763 + "\x90\x01" /* 
len */ 1764 + "\x80\x01\x00\x00" /* key_size */ 1765 + "\x00\x00\x00\x00" /* p_size */ 1766 + "\x00\x00\x00\x00" /* g_size */ 1767 + #else 1768 + "\x00\x01" /* type */ 1769 + "\x01\x90" /* len */ 1770 + "\x00\x00\x01\x80" /* key_size */ 1771 + "\x00\x00\x00\x00" /* p_size */ 1772 + "\x00\x00\x00\x00" /* g_size */ 1773 + #endif 1774 + /* xa */ 1775 + "\x6b\xb4\x97\x23\xfa\xc8\x5e\xa9\x7b\x63\xe7\x3e\x0e\x99\xc3\xb9" 1776 + "\xda\xb7\x48\x0d\xc3\xb1\xbf\x4f\x17\xc7\xa9\x51\xf6\x64\xff\xc4" 1777 + "\x31\x58\x87\x25\x83\x2c\x00\xf0\x41\x29\xf7\xee\xf9\xe6\x36\x76" 1778 + "\xd6\x3a\x24\xbe\xa7\x07\x0b\x93\xc7\x9f\x6c\x75\x0a\x26\x75\x76" 1779 + "\xe3\x0c\x42\xe0\x00\x04\x69\xd9\xec\x0b\x59\x54\x28\x8f\xd7\x9a" 1780 + "\x63\xf4\x5b\xdf\x85\x65\xc4\xe1\x95\x27\x4a\x42\xad\x36\x47\xa9" 1781 + "\x0a\xf8\x14\x1c\xf3\x94\x3b\x7e\x47\x99\x35\xa8\x18\xec\x70\x10" 1782 + "\xdf\xcb\xd2\x78\x88\xc1\x2d\x59\x93\xc1\xa4\x6d\xd7\x1d\xb9\xd5" 1783 + "\xf8\x30\x06\x7f\x98\x90\x0c\x74\x5e\x89\x2f\x64\x5a\xad\x5f\x53" 1784 + "\xb2\xa3\xa8\x83\xbf\xfc\x37\xef\xb8\x36\x0a\x5c\x62\x81\x64\x74" 1785 + "\x16\x2f\x45\x39\x2a\x91\x26\x87\xc0\x12\xcc\x75\x11\xa3\xa1\xc5" 1786 + "\xae\x20\xcf\xcb\x20\x25\x6b\x7a\x31\x93\x9d\x38\xb9\x57\x72\x46" 1787 + "\xd4\x84\x65\x87\xf1\xb5\xd3\xab\xfc\xc3\x4d\x40\x92\x94\x1e\xcd" 1788 + "\x1c\x87\xec\x3f\xcd\xbe\xd0\x95\x6b\x40\x02\xdd\x62\xeb\x0a\xda" 1789 + "\x4f\xbe\x8e\x32\x48\x8b\x6d\x83\xa0\x96\x62\x23\xec\x83\x91\x44" 1790 + "\xf9\x72\x01\xac\xa0\xe4\x72\x1d\x5a\x75\x05\x57\x90\xae\x7e\xb4" 1791 + "\x71\x39\x01\x05\xdc\xe9\xee\xcb\xf0\x61\x28\x91\x69\x8c\x31\x03" 1792 + "\x7a\x92\x15\xa1\x58\x67\x3d\x70\x82\xa6\x2c\xfe\x10\x56\x58\xd3" 1793 + "\x94\x67\xe1\xbe\xee\xc1\x64\x5c\x4b\xc8\x28\x3d\xc5\x66\x3a\xab" 1794 + "\x22\xc1\x7e\xa1\xbb\xf3\x19\x3b\xda\x46\x82\x45\xd4\x3c\x7c\xc6" 1795 + "\xce\x1f\x7f\x95\xa2\x17\xff\x88\xba\xd6\x4d\xdb\xd2\xea\xde\x39" 1796 + "\xd6\xa5\x18\x73\xbb\x64\x6e\x79\xe9\xdc\x3f\x92\x7f\xda\x1f\x49" 1797 + 
"\x33\x70\x65\x73\xa2\xd9\x06\xb8\x1b\x29\x29\x1a\xe0\xa3\xe6\x05" 1798 + "\x9a\xa8\xc2\x4e\x7a\x78\x1d\x22\x57\x21\xc8\xa3\x8d\x66\x3e\x23", 1799 + .b_public = 1800 + "\x1b\x6a\xba\xea\xa3\xcc\x50\x69\xa9\x41\x89\xaf\x04\xe1\x44\x22" 1801 + "\x97\x20\xd1\xf6\x1e\xcb\x64\x36\x6f\xee\x0b\x16\xc1\xd9\x91\xbe" 1802 + "\x57\xc8\xd9\xf2\xa1\x96\x91\xec\x41\xc7\x79\x00\x1a\x48\x25\x55" 1803 + "\xbe\xf3\x20\x8c\x38\xc6\x7b\xf2\x8b\x5a\xc3\xb5\x87\x0a\x86\x3d" 1804 + "\xb7\xd6\xce\xb0\x96\x2e\x5d\xc4\x00\x5e\x42\xe4\xe5\x50\x4f\xb8" 1805 + "\x6f\x18\xa4\xe1\xd3\x20\xfc\x3c\xf5\x0a\xff\x23\xa6\x5b\xb4\x17" 1806 + "\x3e\x7b\xdf\xb9\xb5\x3c\x1b\x76\x29\xcd\xb4\x46\x4f\x27\x8f\xd2" 1807 + "\xe8\x27\x66\xdb\xe8\xb3\xf5\xe1\xd0\x04\xcd\x89\xff\xba\x76\x67" 1808 + "\xe8\x4d\xcf\x86\x1c\x8a\xd1\xcf\x99\x27\xfb\xa9\x78\xcc\x94\xaf" 1809 + "\x3d\x04\xfd\x25\xc0\x47\xfa\x29\x80\x05\xf4\xde\xad\xdb\xab\x12" 1810 + "\xb0\x2b\x8e\xca\x02\x06\x6d\xad\x3e\x09\xb1\x22\xa3\xf5\x4c\x6d" 1811 + "\x69\x99\x58\x8b\xd8\x45\x2e\xe0\xc9\x3c\xf7\x92\xce\x21\x90\x6b" 1812 + "\x3b\x65\x9f\x64\x79\x8d\x67\x22\x1a\x37\xd3\xee\x51\xe2\xe7\x5a" 1813 + "\x93\x51\xaa\x3c\x4b\x04\x16\x32\xef\xe3\x66\xbe\x18\x94\x88\x64" 1814 + "\x79\xce\x06\x3f\xb8\xd6\xee\xdc\x13\x79\x6f\x20\x14\xc2\x6b\xce" 1815 + "\xc8\xda\x42\xa5\x93\x5b\xe4\x7f\x1a\xe6\xda\x0f\xb3\xc1\x5f\x30" 1816 + "\x50\x76\xe8\x37\x3d\xca\x77\x2c\xa8\xe4\x3b\xf9\x6f\xe0\x17\xed" 1817 + "\x0e\xef\xb7\x31\x14\xb5\xea\xd9\x39\x22\x89\xb6\x40\x57\xcc\x84" 1818 + "\xef\x73\xa7\xe9\x27\x21\x85\x89\xfa\xaf\x03\xda\x9c\x8b\xfd\x52" 1819 + "\x7d\xb0\xa4\xe4\xf9\xd8\x90\x55\xc4\x39\xd6\x9d\xaf\x3b\xce\xac" 1820 + "\xaa\x36\x14\x7a\x9b\x8b\x12\x43\xe1\xca\x61\xae\x46\x5b\xe7\xe5" 1821 + "\x88\x32\x80\xa0\x2d\x51\xbb\x2f\xea\xeb\x3c\x71\xb2\xae\xce\xca" 1822 + "\x61\xd2\x76\xe0\x45\x46\x78\x4e\x09\x2d\xc2\x54\xc2\xa9\xc7\xa8" 1823 + "\x55\x8e\x72\xa4\x8b\x8a\xc9\x01\xdb\xe9\x58\x11\xa1\xc4\xe7\x12", 1824 + .secret_size = 16, 1825 + .b_secret_size = 400, 1826 + 
.b_public_size = 384, 1827 + .expected_a_public_size = 384, 1828 + .expected_ss_size = 384, 1829 + .genkey = true, 1830 + }, 1831 + }; 1832 + 1833 + static const struct kpp_testvec ffdhe4096_dh_tv_template[] __maybe_unused = { 1834 + { 1835 + .secret = 1836 + #ifdef __LITTLE_ENDIAN 1837 + "\x01\x00" /* type */ 1838 + "\x10\x02" /* len */ 1839 + "\x00\x02\x00\x00" /* key_size */ 1840 + "\x00\x00\x00\x00" /* p_size */ 1841 + "\x00\x00\x00\x00" /* g_size */ 1842 + #else 1843 + "\x00\x01" /* type */ 1844 + "\x02\x10" /* len */ 1845 + "\x00\x00\x02\x00" /* key_size */ 1846 + "\x00\x00\x00\x00" /* p_size */ 1847 + "\x00\x00\x00\x00" /* g_size */ 1848 + #endif 1849 + /* xa */ 1850 + "\x1a\x48\xf3\x6c\x61\x03\x42\x43\xd7\x42\x3b\xfa\xdb\x55\x6f\xa2" 1851 + "\xe1\x79\x52\x0b\x47\xc5\x03\x60\x2f\x26\xb9\x1a\x14\x15\x1a\xd9" 1852 + "\xe0\xbb\xa7\x82\x63\x41\xec\x26\x55\x00\xab\xe5\x21\x9d\x31\x14" 1853 + "\x0e\xe2\xc2\xb2\xb8\x37\xe6\xc3\x5a\xab\xae\x25\xdb\x71\x1e\xed" 1854 + "\xe8\x75\x9a\x04\xa7\x92\x2a\x99\x7e\xc0\x5b\x64\x75\x7f\xe5\xb5" 1855 + "\xdb\x6c\x95\x4f\xe9\xdc\x39\x76\x79\xb0\xf7\x00\x30\x8e\x86\xe7" 1856 + "\x36\xd1\xd2\x0c\x68\x7b\x94\xe9\x91\x85\x08\x86\xbc\x64\x87\xd2" 1857 + "\xf5\x5b\xaf\x03\xf6\x5f\x28\x25\xf1\xa3\x20\x5c\x1b\xb5\x26\x45" 1858 + "\x9a\x47\xab\xd6\xad\x49\xab\x92\x8e\x62\x6f\x48\x31\xea\xf6\x76" 1859 + "\xff\xa2\xb6\x28\x78\xef\x59\xc3\x71\x5d\xa8\xd9\x70\x89\xcc\xe2" 1860 + "\x63\x58\x5e\x3a\xa2\xa2\x88\xbf\x77\x20\x84\x33\x65\x64\x4e\x73" 1861 + "\xe5\x08\xd5\x89\x23\xd6\x07\xac\x29\x65\x2e\x02\xa8\x35\x96\x48" 1862 + "\xe7\x5d\x43\x6a\x42\xcc\xda\x98\xc4\x75\x90\x2e\xf6\xc4\xbf\xd4" 1863 + "\xbc\x31\x14\x0d\x54\x30\x11\xb2\xc9\xcf\xbb\xba\xbc\xc6\xf2\xcf" 1864 + "\xfe\x4a\x9d\xf3\xec\x78\x5d\x5d\xb4\x99\xd0\x67\x0f\x5a\x21\x1c" 1865 + "\x7b\x95\x2b\xcf\x49\x44\x94\x05\x1a\x21\x81\x25\x7f\xe3\x8a\x2a" 1866 + "\xdd\x88\xac\x44\x94\x23\x20\x3b\x75\xf6\x2a\x8a\x45\xf8\xb5\x1f" 1867 + 
"\xb9\x8b\xeb\xab\x9b\x38\x23\x26\xf1\x0f\x34\x47\x4f\x7f\xe1\x9e" 1868 + "\x84\x84\x78\xe5\xe3\x49\xeb\xcc\x2f\x02\x85\xa4\x18\x91\xde\x1a" 1869 + "\x60\x54\x33\x81\xd5\xae\xdb\x23\x9c\x4d\xa4\xdb\x22\x5b\xdf\xf4" 1870 + "\x8e\x05\x2b\x60\xba\xe8\x75\xfc\x34\x99\xcf\x35\xe1\x06\xba\xdc" 1871 + "\x79\x2a\x5e\xec\x1c\xbe\x79\x33\x63\x1c\xe7\x5f\x1e\x30\xd6\x1b" 1872 + "\xdb\x11\xb8\xea\x63\xff\xfe\x1a\x3c\x24\xf4\x78\x9c\xcc\x5d\x9a" 1873 + "\xc9\x2d\xc4\x9a\xd4\xa7\x65\x84\x98\xdb\x66\x76\xf0\x34\x31\x9f" 1874 + "\xce\xb5\xfb\x28\x07\xde\x1e\x0d\x9b\x01\x64\xeb\x2a\x37\x2f\x20" 1875 + "\xa5\x95\x72\x2b\x54\x51\x59\x91\xea\x50\x54\x0f\x2e\xb0\x1d\xf6" 1876 + "\xb9\x46\x43\xf9\xd0\x13\x21\x20\x47\x61\x1a\x1c\x30\xc6\x9e\x75" 1877 + "\x22\xe4\xf2\xb1\xab\x01\xdc\x5b\x3c\x1e\xa2\x6d\xc0\xb9\x9a\x2a" 1878 + "\x84\x61\xea\x85\x63\xa0\x77\xd0\xeb\x20\x68\xd5\x95\x6a\x1b\x8f" 1879 + "\x1f\x9a\xba\x44\x49\x8c\x77\xa6\xd9\xa0\x14\xf8\x7d\x9b\x4e\xfa" 1880 + "\xdc\x4f\x1c\x4d\x60\x50\x26\x7f\xd6\xc1\x91\x2b\xa6\x37\x5d\x94" 1881 + "\x69\xb2\x47\x59\xd6\xc3\x59\xbb\xd6\x9b\x71\x52\x85\x7a\xcb\x2d", 1882 + .b_public = 1883 + "\x24\x38\x02\x02\x2f\xeb\x54\xdd\x73\x21\x91\x4a\xd8\xa4\x0a\xbf" 1884 + "\xf4\xf5\x9a\x45\xb5\xcd\x42\xa3\x57\xcc\x65\x4a\x23\x2e\xee\x59" 1885 + "\xba\x6f\x14\x89\xae\x2e\x14\x0a\x72\x77\x23\x7f\x6c\x2e\xba\x52" 1886 + "\x3f\x71\xbf\xe4\x60\x03\x16\xaa\x61\xf5\x80\x1d\x8a\x45\x9e\x53" 1887 + "\x7b\x07\xd9\x7e\xfe\xaf\xcb\xda\xff\x20\x71\xba\x89\x39\x75\xc3" 1888 + "\xb3\x65\x0c\xb1\xa7\xfa\x4a\xe7\xe0\x85\xc5\x4e\x91\x47\x41\xf4" 1889 + "\xdd\xcd\xc5\x3d\x17\x12\xed\xee\xc0\x31\xb1\xaf\xc1\xd5\x3c\x07" 1890 + "\xa1\x5a\xc4\x05\x45\xe3\x10\x0c\xc3\x14\xae\x65\xca\x40\xae\x31" 1891 + "\x5c\x13\x0d\x32\x85\xa7\x6e\xf4\x5e\x29\x3d\x4e\xd3\xd7\x49\x58" 1892 + "\xe1\x73\xbb\x0a\x7b\xd6\x13\xea\x49\xd7\x20\x3d\x31\xaa\x77\xab" 1893 + "\x21\x74\xe9\x2f\xe9\x5e\xbe\x2f\xb4\xa2\x79\xf2\xbc\xcc\x51\x94" 1894 + 
"\xd2\x1d\xb2\xe6\xc5\x39\x66\xd7\xe5\x46\x75\x53\x76\xed\x49\xea" 1895 + "\x3b\xdd\x01\x27\xdb\x83\xa5\x9f\xd2\xee\xc8\xde\x9e\xde\xd2\xe7" 1896 + "\x99\xad\x9c\xe0\x71\x66\x29\xd8\x0d\xfe\xdc\xd1\xbc\xc7\x9a\xbe" 1897 + "\x8b\x26\x46\x57\xb6\x79\xfa\xad\x8b\x45\x2e\xb5\xe5\x89\x34\x01" 1898 + "\x93\x00\x9d\xe9\x58\x74\x8b\xda\x07\x92\xb5\x01\x4a\xe1\x44\x36" 1899 + "\xc7\x6c\xde\xc8\x7a\x17\xd0\xde\xee\x68\x92\xb5\xde\x21\x2b\x1c" 1900 + "\xbc\x65\x30\x1e\xae\x15\x3d\x9a\xaf\x20\xa3\xc4\x21\x70\xfb\x2f" 1901 + "\x36\x72\x31\xc0\xe8\x85\xdf\xc5\x50\x4c\x90\x10\x32\xa4\xc7\xee" 1902 + "\x59\x5a\x21\xf4\xf1\x33\xcf\xbe\xac\x67\xb1\x40\x7c\x0b\x3f\x64" 1903 + "\xe5\xd2\x2d\xb7\x7d\x0f\xce\xf7\x9b\x05\xee\x37\x61\xd2\x61\x9e" 1904 + "\x1a\x80\x2e\x79\xe6\x1b\x25\xb3\x61\x3d\x53\xe7\xe5\x97\x9a\xc2" 1905 + "\x39\xb1\xe3\x91\xc6\xee\x96\x2e\xa9\xb4\xb8\xad\xd8\x04\x3e\x11" 1906 + "\x31\x67\xb8\x6a\xcb\x6e\x1a\x4c\x7f\x74\xc7\x1f\x09\xd1\xd0\x6b" 1907 + "\x17\xde\xea\xe8\x0b\xe6\x6a\xee\x2f\xe3\x5b\x9c\x59\x5d\x00\x57" 1908 + "\xbf\x24\x25\xba\x22\x34\xb9\xc5\x3c\xc4\x57\x26\xd0\x6d\x89\xee" 1909 + "\x67\x79\x3c\x70\xf9\xc3\xb4\x30\xf0\x2e\xca\xfa\x74\x00\xd1\x00" 1910 + "\x6d\x03\x97\xd5\x08\x3f\x0b\x8e\xb8\x1d\xa3\x91\x7f\xa9\x3a\xf0" 1911 + "\x37\x57\x46\x87\x82\xa3\xb5\x8f\x51\xaa\xc7\x7b\xfe\x86\x26\xb9" 1912 + "\xfa\xe6\x1e\xee\x92\x9d\x3a\xed\x5b\x5e\x3f\xe5\xca\x5e\x13\x01" 1913 + "\xdd\x4c\x8d\x85\xf0\x60\x61\xb7\x60\x24\x83\x9f\xbe\x72\x21\x81" 1914 + "\x55\x7e\x7e\x6d\xf3\x28\xc8\x77\x5a\xae\x5a\x32\x86\xd5\x61\xad", 1915 + .expected_a_public = 1916 + "\x1f\xff\xd6\xc4\x59\xf3\x4a\x9e\x81\x74\x4d\x27\xa7\xc6\x6b\x35" 1917 + "\xd8\xf5\xb3\x24\x97\x82\xe7\x2e\xf3\x21\x91\x23\x2f\x3d\x57\x7f" 1918 + "\x15\x8c\x84\x71\xe7\x25\x35\xe8\x07\x14\x06\x4c\x83\xdc\x55\x4a" 1919 + "\xf8\x45\xc5\xe9\xfa\x6e\xae\x6e\xcf\x4d\x11\x91\x26\x16\x6f\x86" 1920 + "\x89\x78\xaa\xb4\x25\x54\xb2\x74\x07\xe5\x26\x26\x0c\xad\xa4\x57" 1921 + 
"\x59\x61\x66\x71\x43\x22\xff\x49\x51\xa4\x76\x0e\x55\x7b\x60\x45" 1922 + "\x4f\xaf\xbd\x9c\xec\x64\x3f\x80\x0b\x0c\x31\x41\xf0\xfe\x2c\xb7" 1923 + "\x0a\xbe\xa5\x71\x08\x0d\x8d\x1e\x8a\x77\x9a\xd2\x90\x31\x96\xd0" 1924 + "\x3b\x31\xdc\xc6\x18\x59\x43\xa1\x19\x5a\x84\x68\x29\xad\x5e\x58" 1925 + "\xa2\x50\x3e\x83\xf5\x7a\xbd\x88\x17\x60\x89\x98\x9c\x19\x89\x27" 1926 + "\x89\xfc\x33\x87\x42\xd5\xde\x19\x14\xf2\x95\x82\x10\x87\xad\x82" 1927 + "\xdd\x6b\x51\x2d\x8d\x0e\x81\x4b\xde\xb3\x35\x6c\x0f\x4b\x56\x45" 1928 + "\x48\x87\xe9\x5a\xf9\x70\x10\x30\x8e\xa1\xbb\xa4\x70\xbf\xa0\xab" 1929 + "\x10\x31\x3c\x2c\xdc\xc4\xed\xe3\x51\xdc\xee\xd2\xa5\x5c\x4e\x6e" 1930 + "\xf6\xed\x60\x5a\xeb\xf3\x02\x19\x2a\x95\xe9\x46\xff\x37\x1b\xf0" 1931 + "\x1d\x10\x4a\x8f\x4f\x3a\x6e\xf5\xfc\x02\x6d\x09\x7d\xea\x69\x7b" 1932 + "\x13\xb0\xb6\x80\x5c\x15\x20\xa8\x4d\x15\x56\x11\x72\x49\xdb\x48" 1933 + "\x54\x40\x66\xd5\xcd\x17\x3a\x26\x95\xf6\xd7\xf2\x59\xa3\xda\xbb" 1934 + "\x26\xd0\xe5\x46\xbf\xee\x0e\x7d\xf1\xe0\x11\x02\x4d\xd3\xdc\xe2" 1935 + "\x3f\xc2\x51\x7e\xc7\x90\x33\x3c\x1c\xa0\x4c\x69\xcc\x1e\xc7\xac" 1936 + "\x17\xe0\xe5\xf4\x8c\x05\x64\x34\xfe\x84\x70\xd7\x6b\xed\xab\xf5" 1937 + "\x88\x9d\x3e\x4c\x5a\x9e\xd4\x74\xfd\xdd\x91\xd5\xd4\xcb\xbf\xf8" 1938 + "\xb7\x56\xb5\xe9\x22\xa6\x6d\x7a\x44\x05\x41\xbf\xdb\x61\x28\xc6" 1939 + "\x99\x49\x87\x3d\x28\x77\xf8\x83\x23\x7e\xa9\xa7\xee\x20\xdb\x6d" 1940 + "\x21\x50\xb7\xc9\x52\x57\x53\xa3\xcf\xdf\xd0\xf9\xb9\x62\x96\x89" 1941 + "\xf5\x5c\xa9\x8a\x11\x95\x01\x25\xc9\x81\x15\x76\xae\xf0\xc7\xc5" 1942 + "\x50\xae\x6f\xb5\xd2\x8a\x8e\x9a\xd4\x30\x55\xc6\xe9\x2c\x81\x6e" 1943 + "\x95\xf6\x45\x89\x55\x28\x34\x7b\xe5\x72\x9a\x2a\xe2\x98\x09\x35" 1944 + "\xe0\xe9\x75\x94\xe9\x34\x95\xb9\x13\x6e\xd5\xa1\x62\x5a\x1c\x94" 1945 + "\x28\xed\x84\x46\x76\x6d\x10\x37\x71\xa3\x31\x46\x64\xe4\x59\x44" 1946 + "\x17\x70\x1c\x23\xc9\x7e\xf6\xab\x8a\x24\xae\x25\xe2\xb2\x5f\x33" 1947 + "\xe4\xd7\xd3\x34\x2a\x49\x22\x16\x15\x9b\x90\x40\xda\x99\xd5\xaf", 1948 + 
.expected_ss = 1949 + "\xe2\xce\x0e\x4b\x64\xf3\x84\x62\x38\xfd\xe3\x6f\x69\x40\x22\xb0" 1950 + "\x73\x27\x03\x12\x82\xa4\x6e\x03\x57\xec\x3d\xa0\xc1\x4f\x4b\x09" 1951 + "\xa1\xd4\xe0\x1a\x5d\x91\x2e\x08\xad\x57\xfa\xcc\x55\x90\x5f\xa0" 1952 + "\x52\x27\x62\x8d\xe5\x2d\xa1\x5f\xf0\x30\x43\x77\x4e\x3f\x02\x58" 1953 + "\xcb\xa0\x51\xae\x1d\x24\xf9\x0a\xd1\x36\x0b\x95\x0f\x07\xd9\xf7" 1954 + "\xe2\x36\x14\x2f\xf0\x11\xc2\xc9\xaf\x66\x4e\x0d\xb4\x60\x01\x4e" 1955 + "\xa8\x49\xc6\xec\x5f\xb2\xbc\x05\x48\x91\x4e\xe1\xc3\x99\x9f\xeb" 1956 + "\x4a\xc1\xde\x05\x9a\x65\x39\x7d\x2f\x89\x85\xb2\xcf\xec\x25\x27" 1957 + "\x5f\x1c\x11\x63\xcf\x7b\x86\x98\x39\xae\xc2\x16\x8f\x79\xd1\x20" 1958 + "\xd0\xb4\xa0\xba\x44\xd8\xf5\x3a\x0a\x08\x4c\xd1\xb9\xdd\x0a\x5b" 1959 + "\x9e\x62\xf3\x52\x0c\x84\x12\x43\x9b\xd7\xdf\x86\x71\x03\xdd\x04" 1960 + "\x98\x55\x0c\x7b\xe2\xe8\x03\x17\x25\x84\xd9\xbd\xe1\xce\x64\xbe" 1961 + "\xca\x55\xd4\x5b\xef\x61\x5b\x68\x4b\x80\x37\x40\xae\x28\x87\x81" 1962 + "\x55\x34\x96\x50\x21\x47\x49\xc0\xda\x26\x46\xb8\xe8\xcc\x5a\x27" 1963 + "\x9c\x9d\x0a\x3d\xcc\x4c\x63\x27\x81\x82\x2e\xf4\xa8\x91\x37\x3e" 1964 + "\xa7\x34\x6a\x0f\x60\x44\xdd\x2e\xdc\xf9\x19\xf2\x2e\x81\x05\x51" 1965 + "\x16\xbc\xc0\x85\xa5\xd5\x08\x09\x1f\xcd\xed\xa4\xc5\xdb\x16\x43" 1966 + "\xb5\x7a\x71\x66\x19\x2e\xef\x13\xbc\x40\x39\x0a\x00\x45\x7e\x61" 1967 + "\xe9\x68\x60\x83\x00\x70\xd1\x71\xd3\xa2\x61\x3e\x00\x46\x93\x0d" 1968 + "\xbf\xe6\xa2\x07\xe6\x40\x1a\xf4\x57\xc6\x67\x39\xd8\xd7\x6b\xc5" 1969 + "\xa5\xd8\x38\x78\x12\xb4\x97\x12\xbe\x97\x13\xef\xe4\x74\x0c\xe0" 1970 + "\x75\x89\x64\xf4\xe8\x85\xda\x84\x7b\x1d\xfe\xdd\x21\xba\xda\x01" 1971 + "\x52\xdc\x59\xe5\x47\x50\x7e\x15\x20\xd0\x43\x37\x6e\x48\x39\x00" 1972 + "\xee\xd9\x54\x6d\x00\x65\xc9\x4b\x85\xa2\x8a\x40\x55\xd0\x63\x0c" 1973 + "\xb5\x7a\x0d\x37\x67\x27\x73\x18\x7f\x5a\xf5\x0e\x22\xb9\xb0\x3f" 1974 + "\xda\xf1\xec\x7c\x24\x01\x49\xa9\x09\x0e\x0f\xc4\xa9\xef\xc8\x2b" 1975 + 
"\x13\xd1\x0a\x6f\xf8\x92\x4b\x1d\xdd\x6c\x9c\x35\xde\x75\x46\x32" 1976 + "\xe6\xfb\xda\x58\xba\x81\x08\xca\xa9\xb6\x69\x71\x96\x2a\x1f\x2e" 1977 + "\x25\xe0\x37\xfe\xee\x4d\x27\xaa\x04\xda\x95\xbb\x93\xcf\x8f\xa2" 1978 + "\x1d\x67\x35\xe3\x51\x8f\x87\x3b\xa9\x62\x05\xee\x44\xb7\x2e\xd0" 1979 + "\x07\x63\x32\xf5\xcd\x64\x18\x20\xcf\x22\x42\x28\x22\x1a\xa8\xbb" 1980 + "\x74\x8a\x6f\x2a\xea\x8a\x48\x0a\xad\xd7\xed\xba\xa3\x89\x37\x01", 1981 + .secret_size = 528, 1982 + .b_public_size = 512, 1983 + .expected_a_public_size = 512, 1984 + .expected_ss_size = 512, 1985 + }, 1986 + { 1987 + .secret = 1988 + #ifdef __LITTLE_ENDIAN 1989 + "\x01\x00" /* type */ 1990 + "\x10\x00" /* len */ 1991 + "\x00\x00\x00\x00" /* key_size */ 1992 + "\x00\x00\x00\x00" /* p_size */ 1993 + "\x00\x00\x00\x00", /* g_size */ 1994 + #else 1995 + "\x00\x01" /* type */ 1996 + "\x00\x10" /* len */ 1997 + "\x00\x00\x00\x00" /* key_size */ 1998 + "\x00\x00\x00\x00" /* p_size */ 1999 + "\x00\x00\x00\x00", /* g_size */ 2000 + #endif 2001 + .b_secret = 2002 + #ifdef __LITTLE_ENDIAN 2003 + "\x01\x00" /* type */ 2004 + "\x10\x02" /* len */ 2005 + "\x00\x02\x00\x00" /* key_size */ 2006 + "\x00\x00\x00\x00" /* p_size */ 2007 + "\x00\x00\x00\x00" /* g_size */ 2008 + #else 2009 + "\x00\x01" /* type */ 2010 + "\x02\x10" /* len */ 2011 + "\x00\x00\x02\x00" /* key_size */ 2012 + "\x00\x00\x00\x00" /* p_size */ 2013 + "\x00\x00\x00\x00" /* g_size */ 2014 + #endif 2015 + /* xa */ 2016 + "\x1a\x48\xf3\x6c\x61\x03\x42\x43\xd7\x42\x3b\xfa\xdb\x55\x6f\xa2" 2017 + "\xe1\x79\x52\x0b\x47\xc5\x03\x60\x2f\x26\xb9\x1a\x14\x15\x1a\xd9" 2018 + "\xe0\xbb\xa7\x82\x63\x41\xec\x26\x55\x00\xab\xe5\x21\x9d\x31\x14" 2019 + "\x0e\xe2\xc2\xb2\xb8\x37\xe6\xc3\x5a\xab\xae\x25\xdb\x71\x1e\xed" 2020 + "\xe8\x75\x9a\x04\xa7\x92\x2a\x99\x7e\xc0\x5b\x64\x75\x7f\xe5\xb5" 2021 + "\xdb\x6c\x95\x4f\xe9\xdc\x39\x76\x79\xb0\xf7\x00\x30\x8e\x86\xe7" 2022 + "\x36\xd1\xd2\x0c\x68\x7b\x94\xe9\x91\x85\x08\x86\xbc\x64\x87\xd2" 2023 + 
+	"\xf5\x5b\xaf\x03\xf6\x5f\x28\x25\xf1\xa3\x20\x5c\x1b\xb5\x26\x45"
+	"\x9a\x47\xab\xd6\xad\x49\xab\x92\x8e\x62\x6f\x48\x31\xea\xf6\x76"
+	"\xff\xa2\xb6\x28\x78\xef\x59\xc3\x71\x5d\xa8\xd9\x70\x89\xcc\xe2"
+	"\x63\x58\x5e\x3a\xa2\xa2\x88\xbf\x77\x20\x84\x33\x65\x64\x4e\x73"
+	"\xe5\x08\xd5\x89\x23\xd6\x07\xac\x29\x65\x2e\x02\xa8\x35\x96\x48"
+	"\xe7\x5d\x43\x6a\x42\xcc\xda\x98\xc4\x75\x90\x2e\xf6\xc4\xbf\xd4"
+	"\xbc\x31\x14\x0d\x54\x30\x11\xb2\xc9\xcf\xbb\xba\xbc\xc6\xf2\xcf"
+	"\xfe\x4a\x9d\xf3\xec\x78\x5d\x5d\xb4\x99\xd0\x67\x0f\x5a\x21\x1c"
+	"\x7b\x95\x2b\xcf\x49\x44\x94\x05\x1a\x21\x81\x25\x7f\xe3\x8a\x2a"
+	"\xdd\x88\xac\x44\x94\x23\x20\x3b\x75\xf6\x2a\x8a\x45\xf8\xb5\x1f"
+	"\xb9\x8b\xeb\xab\x9b\x38\x23\x26\xf1\x0f\x34\x47\x4f\x7f\xe1\x9e"
+	"\x84\x84\x78\xe5\xe3\x49\xeb\xcc\x2f\x02\x85\xa4\x18\x91\xde\x1a"
+	"\x60\x54\x33\x81\xd5\xae\xdb\x23\x9c\x4d\xa4\xdb\x22\x5b\xdf\xf4"
+	"\x8e\x05\x2b\x60\xba\xe8\x75\xfc\x34\x99\xcf\x35\xe1\x06\xba\xdc"
+	"\x79\x2a\x5e\xec\x1c\xbe\x79\x33\x63\x1c\xe7\x5f\x1e\x30\xd6\x1b"
+	"\xdb\x11\xb8\xea\x63\xff\xfe\x1a\x3c\x24\xf4\x78\x9c\xcc\x5d\x9a"
+	"\xc9\x2d\xc4\x9a\xd4\xa7\x65\x84\x98\xdb\x66\x76\xf0\x34\x31\x9f"
+	"\xce\xb5\xfb\x28\x07\xde\x1e\x0d\x9b\x01\x64\xeb\x2a\x37\x2f\x20"
+	"\xa5\x95\x72\x2b\x54\x51\x59\x91\xea\x50\x54\x0f\x2e\xb0\x1d\xf6"
+	"\xb9\x46\x43\xf9\xd0\x13\x21\x20\x47\x61\x1a\x1c\x30\xc6\x9e\x75"
+	"\x22\xe4\xf2\xb1\xab\x01\xdc\x5b\x3c\x1e\xa2\x6d\xc0\xb9\x9a\x2a"
+	"\x84\x61\xea\x85\x63\xa0\x77\xd0\xeb\x20\x68\xd5\x95\x6a\x1b\x8f"
+	"\x1f\x9a\xba\x44\x49\x8c\x77\xa6\xd9\xa0\x14\xf8\x7d\x9b\x4e\xfa"
+	"\xdc\x4f\x1c\x4d\x60\x50\x26\x7f\xd6\xc1\x91\x2b\xa6\x37\x5d\x94"
+	"\x69\xb2\x47\x59\xd6\xc3\x59\xbb\xd6\x9b\x71\x52\x85\x7a\xcb\x2d",
+	.b_public =
+	"\x1f\xff\xd6\xc4\x59\xf3\x4a\x9e\x81\x74\x4d\x27\xa7\xc6\x6b\x35"
+	"\xd8\xf5\xb3\x24\x97\x82\xe7\x2e\xf3\x21\x91\x23\x2f\x3d\x57\x7f"
+	"\x15\x8c\x84\x71\xe7\x25\x35\xe8\x07\x14\x06\x4c\x83\xdc\x55\x4a"
+	"\xf8\x45\xc5\xe9\xfa\x6e\xae\x6e\xcf\x4d\x11\x91\x26\x16\x6f\x86"
+	"\x89\x78\xaa\xb4\x25\x54\xb2\x74\x07\xe5\x26\x26\x0c\xad\xa4\x57"
+	"\x59\x61\x66\x71\x43\x22\xff\x49\x51\xa4\x76\x0e\x55\x7b\x60\x45"
+	"\x4f\xaf\xbd\x9c\xec\x64\x3f\x80\x0b\x0c\x31\x41\xf0\xfe\x2c\xb7"
+	"\x0a\xbe\xa5\x71\x08\x0d\x8d\x1e\x8a\x77\x9a\xd2\x90\x31\x96\xd0"
+	"\x3b\x31\xdc\xc6\x18\x59\x43\xa1\x19\x5a\x84\x68\x29\xad\x5e\x58"
+	"\xa2\x50\x3e\x83\xf5\x7a\xbd\x88\x17\x60\x89\x98\x9c\x19\x89\x27"
+	"\x89\xfc\x33\x87\x42\xd5\xde\x19\x14\xf2\x95\x82\x10\x87\xad\x82"
+	"\xdd\x6b\x51\x2d\x8d\x0e\x81\x4b\xde\xb3\x35\x6c\x0f\x4b\x56\x45"
+	"\x48\x87\xe9\x5a\xf9\x70\x10\x30\x8e\xa1\xbb\xa4\x70\xbf\xa0\xab"
+	"\x10\x31\x3c\x2c\xdc\xc4\xed\xe3\x51\xdc\xee\xd2\xa5\x5c\x4e\x6e"
+	"\xf6\xed\x60\x5a\xeb\xf3\x02\x19\x2a\x95\xe9\x46\xff\x37\x1b\xf0"
+	"\x1d\x10\x4a\x8f\x4f\x3a\x6e\xf5\xfc\x02\x6d\x09\x7d\xea\x69\x7b"
+	"\x13\xb0\xb6\x80\x5c\x15\x20\xa8\x4d\x15\x56\x11\x72\x49\xdb\x48"
+	"\x54\x40\x66\xd5\xcd\x17\x3a\x26\x95\xf6\xd7\xf2\x59\xa3\xda\xbb"
+	"\x26\xd0\xe5\x46\xbf\xee\x0e\x7d\xf1\xe0\x11\x02\x4d\xd3\xdc\xe2"
+	"\x3f\xc2\x51\x7e\xc7\x90\x33\x3c\x1c\xa0\x4c\x69\xcc\x1e\xc7\xac"
+	"\x17\xe0\xe5\xf4\x8c\x05\x64\x34\xfe\x84\x70\xd7\x6b\xed\xab\xf5"
+	"\x88\x9d\x3e\x4c\x5a\x9e\xd4\x74\xfd\xdd\x91\xd5\xd4\xcb\xbf\xf8"
+	"\xb7\x56\xb5\xe9\x22\xa6\x6d\x7a\x44\x05\x41\xbf\xdb\x61\x28\xc6"
+	"\x99\x49\x87\x3d\x28\x77\xf8\x83\x23\x7e\xa9\xa7\xee\x20\xdb\x6d"
+	"\x21\x50\xb7\xc9\x52\x57\x53\xa3\xcf\xdf\xd0\xf9\xb9\x62\x96\x89"
+	"\xf5\x5c\xa9\x8a\x11\x95\x01\x25\xc9\x81\x15\x76\xae\xf0\xc7\xc5"
+	"\x50\xae\x6f\xb5\xd2\x8a\x8e\x9a\xd4\x30\x55\xc6\xe9\x2c\x81\x6e"
+	"\x95\xf6\x45\x89\x55\x28\x34\x7b\xe5\x72\x9a\x2a\xe2\x98\x09\x35"
+	"\xe0\xe9\x75\x94\xe9\x34\x95\xb9\x13\x6e\xd5\xa1\x62\x5a\x1c\x94"
+	"\x28\xed\x84\x46\x76\x6d\x10\x37\x71\xa3\x31\x46\x64\xe4\x59\x44"
+	"\x17\x70\x1c\x23\xc9\x7e\xf6\xab\x8a\x24\xae\x25\xe2\xb2\x5f\x33"
+	"\xe4\xd7\xd3\x34\x2a\x49\x22\x16\x15\x9b\x90\x40\xda\x99\xd5\xaf",
+	.secret_size = 16,
+	.b_secret_size = 528,
+	.b_public_size = 512,
+	.expected_a_public_size = 512,
+	.expected_ss_size = 512,
+	.genkey = true,
+	},
+	};
+
+static const struct kpp_testvec ffdhe6144_dh_tv_template[] __maybe_unused = {
+	{
+	.secret =
+#ifdef __LITTLE_ENDIAN
+	"\x01\x00" /* type */
+	"\x10\x03" /* len */
+	"\x00\x03\x00\x00" /* key_size */
+	"\x00\x00\x00\x00" /* p_size */
+	"\x00\x00\x00\x00" /* g_size */
+#else
+	"\x00\x01" /* type */
+	"\x03\x10" /* len */
+	"\x00\x00\x03\x00" /* key_size */
+	"\x00\x00\x00\x00" /* p_size */
+	"\x00\x00\x00\x00" /* g_size */
+#endif
+	/* xa */
+	"\x63\x3e\x6f\xe0\xfe\x9f\x4a\x01\x62\x77\xce\xf1\xc7\xcc\x49\x4d"
+	"\x92\x53\x56\xe3\x39\x15\x81\xb2\xcd\xdc\xaf\x5e\xbf\x31\x1f\x69"
+	"\xce\x41\x35\x24\xaa\x46\x53\xb5\xb7\x3f\x2b\xad\x95\x14\xfb\xe4"
+	"\x9a\x61\xcd\x0f\x1f\x02\xee\xa4\x79\x2c\x9d\x1a\x7c\x62\x82\x39"
+	"\xdd\x43\xcc\x58\x9f\x62\x47\x56\x1d\x0f\xc2\x67\xbc\x24\xd0\xf9"
+	"\x0a\x50\x1b\x10\xe7\xbb\xd1\xc2\x01\xbb\xc4\x4c\xda\x12\x60\x0e"
+	"\x95\x2b\xde\x09\xd6\x67\xe1\xbc\x4c\xb9\x67\xdf\xd0\x1f\x97\xb4"
+	"\xde\xcb\x6b\x78\x83\x51\x74\x33\x01\x7f\xf6\x0a\x95\x69\x93\x00"
+	"\x2a\xc3\x75\x8e\xef\xbe\x53\x11\x6d\xc4\xd0\x9f\x6d\x63\x48\xc1"
+	"\x91\x1f\x7d\x88\xa7\x90\x78\xd1\x7e\x52\x42\x10\x01\xb4\x27\x95"
+	"\x91\x43\xcc\x82\x91\x86\x62\xa0\x9d\xef\x65\x6e\x67\xcf\x19\x11"
+	"\x35\x37\x5e\x94\x97\x83\xa6\x83\x1c\x7e\x8a\x3e\x32\xb0\xce\xff"
+	"\x20\xdc\x7b\x6e\x18\xd9\x6b\x27\x31\xfc\xc3\xef\x47\x8d\xbe\x34"
+	"\x2b\xc7\x60\x74\x3c\x93\xb3\x8e\x54\x77\x4e\x73\xe6\x40\x72\x35"
+	"\xb0\xf0\x06\x53\x43\xbe\xd0\xc3\x87\xcc\x38\x96\xa9\x10\xa0\xd6"
+	"\x17\xed\xa5\x6a\xf4\xf6\xaa\x77\x40\xed\x7d\x2e\x58\x0f\x5b\x04"
+	"\x5a\x41\x12\x95\x22\xcb\xa3\xce\x8b\x6d\x6d\x89\xec\x7c\x1d\x25"
+	"\x27\x52\x50\xa0\x5b\x93\x8c\x5d\x3f\x56\xb9\xa6\x5e\xe5\xf7\x9b"
+	"\xc7\x9a\x4a\x2e\x79\xb5\xca\x29\x58\x52\xa0\x63\xe4\x9d\xeb\x4c"
+	"\x4c\xa8\x37\x0b\xe9\xa0\x18\xf1\x86\xf6\x4d\x32\xfb\x9e\x4f\xb3"
+	"\x7b\x5d\x58\x78\x70\xbd\x56\xac\x99\x75\x25\x71\x66\x76\x4e\x5e"
+	"\x67\x4f\xb1\x17\xa7\x8b\x55\x12\x87\x01\x4e\xd1\x66\xef\xd0\x70"
+	"\xaf\x14\x34\xee\x2a\x76\x49\x25\xa6\x2e\x43\x37\x75\x7d\x1a\xad"
+	"\x08\xd5\x01\x85\x9c\xe1\x20\xd8\x38\x5c\x57\xa5\xed\x9d\x46\x3a"
+	"\xb7\x46\x60\x29\x8b\xc4\x21\x50\x0a\x30\x9c\x57\x42\xe4\x35\xf8"
+	"\x12\x5c\x4f\xa2\x20\xc2\xc9\x43\xe3\x6d\x20\xbc\xdf\xb8\x37\x33"
+	"\x45\x43\x06\x4e\x08\x6f\x8a\xcd\x61\xc3\x1b\x05\x28\x82\xbe\xf0"
+	"\x48\x33\xe5\x93\xc9\x1a\x61\x16\x67\x03\x9d\x47\x9d\x74\xeb\xae"
+	"\x13\xf2\xb4\x1b\x09\x11\xf5\x15\xcb\x28\xfd\x50\xe0\xbc\x58\x36"
+	"\x38\x91\x2c\x07\x27\x1f\x49\x68\xf4\xce\xad\xf7\xba\xec\x5d\x3d"
+	"\xfd\x27\xe2\xcf\xf4\x56\xfe\x08\xa6\x11\x61\xcb\x6c\x9f\xf9\x3c"
+	"\x57\x0b\x8b\xaa\x00\x16\x18\xba\x1f\xe8\x4f\x01\xe2\x79\x2a\x0b"
+	"\xc1\xbd\x52\xef\xe6\xf7\x5a\x66\xfe\x07\x3b\x50\x6b\xbb\xcb\x39"
+	"\x3c\x94\xf6\x21\x0d\x68\x69\xa4\xed\x2e\xb5\x85\x03\x11\x38\x79"
+	"\xec\xb5\x22\x23\xdf\x9e\xad\xb4\xbe\xd7\xc7\xdf\xea\x30\x23\x8a"
+	"\xb7\x21\x0a\x9d\xbd\x99\x13\x7d\x5f\x7e\xaf\x28\x54\x3f\xca\x5e"
+	"\xf4\xfc\x05\x0d\x65\x67\xd8\xf6\x8e\x90\x9d\x0d\xcf\x62\x82\xd6"
+	"\x9f\x02\xf8\xca\xfa\x42\x24\x7f\x4d\xb7\xfc\x92\xa6\x4a\x51\xc4"
+	"\xd8\xae\x19\x87\xc6\xa3\x83\xbe\x7b\x6d\xc3\xf5\xb8\xad\x4a\x05"
+	"\x78\x84\x3a\x15\x2e\x40\xbe\x79\xa9\xc0\x12\xa1\x48\x39\xc3\xdb"
+	"\x47\x4f\x7d\xea\x6d\xc7\xfa\x2c\x4e\xe9\xa5\x85\x81\xea\x6c\xcd"
+	"\x8a\xe5\x74\x17\x76\x31\x31\x75\x96\x83\xca\x81\xbb\x5c\xa9\x79"
+	"\x2c\xbd\x09\xfe\xe4\x86\x0d\x8c\x76\x9c\xbc\xe8\x93\xe4\xd0\xe4"
+	"\x0f\xf8\xff\x24\x7e\x66\x61\x69\xfb\xe4\x46\x08\x94\x99\xa5\x53"
+	"\xd7\xe4\x29\x72\x86\x86\xe8\x1d\x37\xfa\xcb\xd0\x8d\x51\xd0\xbf"
+	"\x81\xcf\x55\xb9\xc5\x78\x8c\x74\xa0\x16\x3a\xd2\x19\x94\x29\x6a"
+	"\x5e\xec\xd3\x20\xa0\xb2\xfd\xce\xd4\x14\xa3\x39\x10\xa9\xf4\x4e"
+	"\xba\x21\x09\x5c\xe6\x61\x43\x51\xae\xc4\x71\xd7\x21\xef\x98\x39",
+	.b_public =
+	"\x30\x31\xbe\x43\xd0\x14\x22\x6b\x4b\x8c\x9a\xca\xc6\xdd\xe5\x99"
+	"\xce\xb8\x30\x23\xb6\xa8\x8c\x4d\xfa\xef\xad\xa6\x6a\x21\x50\xa6"
+	"\x45\x2d\x19\x2a\x29\x81\xc5\xac\xb4\xa8\x5f\x6d\x5b\xc8\x5f\x12"
+	"\x35\x21\xfb\x37\xaa\x0c\x79\xeb\xd4\x83\x01\xda\xa3\xf3\x51\x6e"
+	"\x17\xf9\xef\x3f\xbd\x2f\xd2\x43\x82\x12\x48\xeb\x61\x4c\x8e\xf2"
+	"\x6c\x76\xf9\x6d\x42\x2a\xcb\x10\x13\x3b\xf6\x9b\xcd\x46\x1e\xa2"
+	"\xa7\x2c\x08\x56\xd2\x42\xf5\x03\xf0\x3e\xef\xa2\xa2\xf2\x4c\xf2"
+	"\xdb\x4f\xeb\x40\x15\x53\x27\xf7\xd4\x8e\x58\x23\xf5\x2c\x88\x04"
+	"\x1e\xb1\xb6\xe3\xd6\x9c\x49\x08\xa1\x4b\xb8\x33\xe4\x75\x85\xa1"
+	"\x86\x97\xce\x1d\xe9\x9f\xe2\xd8\xf2\x7e\xad\xdc\x8a\x4d\xbd\x06"
+	"\x52\x00\x9a\x2c\x69\xdd\x02\x0c\x69\x5a\xf9\x1d\xfd\xdc\xfb\x82"
+	"\xb2\xe5\xf3\x24\xba\xd1\x09\x76\x90\xb5\x7a\x92\xa6\x6b\x97\xc0"
+	"\xce\x13\x9b\x4b\xbc\x30\x91\xb2\x13\x8b\x57\x6c\x8b\x66\x6e\x58"
+	"\x3e\x91\x50\xc7\x6c\xe1\x18\xec\xbf\x69\xcd\xcb\xa0\xbc\x0d\x05"
+	"\xc4\xf8\x45\x92\xe0\x05\xd3\x08\xb3\x30\x19\xc8\x80\xf8\x17\x9f"
+	"\x1e\x6a\x49\x8e\x43\xef\x7a\x49\xa5\x93\xd9\xed\xd1\x07\x03\xe4"
+	"\xa3\x55\xeb\x1e\x2f\x69\xd7\x40\x8f\x6e\x1c\xb6\x94\xfb\xba\x4e"
+	"\x46\xd0\x38\x71\x00\x88\x93\x6a\x55\xfc\x16\x95\x1f\xb1\xf6\x2f"
+	"\x26\x45\x50\x54\x30\x62\x62\xe8\x80\xe5\x24\x0b\xe4\x15\x6b\x32"
+	"\x16\xc2\x30\x9b\x56\xb4\xc9\x5e\x50\xb4\x27\x82\x86\x01\xda\x68"
+	"\x44\x4b\x15\x81\x31\x13\x52\xd8\x08\xbc\xae\xf3\xa5\x94\x1c\x81"
+	"\xe8\x42\xd6\x42\xd6\xff\x99\x58\x0f\x61\x3e\x82\x9e\x2d\x13\x03"
+	"\x54\x02\x74\xf4\x6b\x43\x43\xce\x54\x44\x36\x3f\x55\xfa\xb2\x56"
+	"\xdc\xac\xb5\x65\x89\xbe\x36\xd2\x58\x65\x79\x4c\xf3\xe2\x01\xf1"
+	"\x69\x96\x29\x20\x5d\xee\xf5\x8a\x8b\x9f\x72\xf7\x27\x02\xde\x3b"
+	"\xc7\x52\x19\xdc\x8e\x22\x36\x09\x14\x59\x07\xbb\x1e\x49\x69\x4f"
+	"\x00\x7b\x9a\x5d\x23\xe9\xbe\x0d\x52\x90\xa3\x0d\xde\xe7\x80\x57"
+	"\x53\x69\x39\xe6\xf8\x33\xeb\x92\x0d\x9e\x04\x8b\x16\x16\x16\x1c"
+	"\xa9\xe6\xe3\x0e\x0a\xc6\xf6\x61\xd1\x44\x2b\x3e\x5e\x02\xfe\xaa"
+	"\xe3\xf3\x8f\xf9\xc8\x20\x37\xad\xbc\x95\xb8\xc5\xe7\x95\xda\xfb"
+	"\x80\x5b\xf6\x40\x28\xae\xc1\x4c\x09\xde\xff\x1e\xbf\x51\xd2\xfe"
+	"\x08\xdc\xb0\x48\x21\xf5\x4c\x43\xdc\x7b\x69\x83\xc8\x69\x5c\xc4"
+	"\xa9\x98\x76\x4b\xc4\x4a\xac\x1d\xa5\x52\xe3\x35\x43\xdd\x30\xd4"
+	"\xa0\x51\x9c\xc2\x62\x4c\x7e\xa5\xfb\xd3\x2c\x8a\x09\x7f\x53\xa3"
+	"\xcd\xca\x58\x1b\x4c\xaf\xba\x21\x8b\x88\x1d\xc0\xe9\x0a\x17\x30"
+	"\x33\xd6\xa2\xa5\x49\x50\x61\x3b\xff\x37\x71\x66\xef\x61\xbc\xb2"
+	"\x53\x82\xe5\x70\xef\x32\xff\x9d\x97\xe0\x82\xe0\xbb\x49\xc2\x29"
+	"\x58\x89\xdd\xe9\x62\x52\xfb\xba\x22\xa6\xd9\x16\xfa\x55\xb3\x06"
+	"\xed\x6d\x70\x6e\xdc\x47\x7c\x67\x1a\xcc\x27\x98\xd4\xd7\xe6\xf0"
+	"\xf8\x9f\x51\x3e\xf0\xee\xad\xb6\x78\x69\x71\xb5\xcb\x09\xa3\xa6"
+	"\x3f\x29\x24\x46\xe0\x65\xbc\x9f\x6c\xe9\xf9\x49\x49\x96\x75\xe5"
+	"\xe1\xff\x82\x70\xf4\x7e\xff\x8f\xec\x47\x98\x6d\x5b\x88\x60\xee"
+	"\x43\xb1\xe2\x14\xc1\x49\x95\x74\x46\xd3\x3f\x73\xb2\xe9\x88\xe0"
+	"\xd3\xb1\xc4\x2c\xef\xee\xdd\x6c\xc5\xa1\x29\xef\x86\xd2\x36\x8a"
+	"\x2f\x7c\x9d\x28\x0a\x6d\xc9\x5a\xdb\xd4\x04\x06\x36\x96\x09\x03"
+	"\x71\x5d\x38\x67\xa2\x08\x2a\x04\xe7\xd6\x51\x5a\x19\x9d\xe7\xf1"
+	"\x5d\x6f\xe2\xff\x48\x37\xb7\x8b\xb1\x14\xb4\x96\xcd\xf0\xa7\xbd"
+	"\xef\x20\xff\x0a\x8d\x08\xb7\x15\x98\x5a\x13\xd2\xda\x2a\x27\x75",
+	.expected_a_public =
+	"\x45\x96\x5a\xb7\x78\x5c\xa4\x4d\x39\xb2\x5f\xc8\xc2\xaa\x1a\xf4"
+	"\xa6\x68\xf6\x6f\x7e\xa8\x4a\x5b\x0e\xba\x0a\x99\x85\xf9\x63\xd4"
+	"\x58\x21\x6d\xa8\x3c\xf4\x05\x10\xb0\x0d\x6f\x1c\xa0\x17\x85\xae"
+	"\x68\xbf\xcc\x00\xc8\x86\x1b\x24\x31\xc9\x49\x23\x91\xe0\x71\x29"
+	"\x06\x39\x39\x93\x49\x9c\x75\x18\x1a\x8b\x61\x73\x1c\x7f\x37\xd5"
+	"\xf1\xab\x20\x5e\x62\x25\xeb\x58\xd5\xfa\xc9\x7f\xad\x57\xd5\xcc"
+	"\x0d\xc1\x7a\x2b\x33\x2a\x76\x84\x33\x26\x97\xcf\x47\x9d\x72\x2a"
+	"\xc9\x39\xde\xa8\x42\x27\x2d\xdc\xee\x00\x60\xd2\x4f\x13\xe0\xde"
+	"\xd5\xc7\xf6\x7d\x8b\x2a\x43\x49\x40\x99\xc2\x61\x84\x8e\x57\x09"
+	"\x7c\xcc\x19\x46\xbd\x4c\xd2\x7c\x7d\x02\x4d\x88\xdf\x58\x24\x80"
+	"\xeb\x19\x3b\x2a\x13\x2b\x19\x85\x3c\xd8\x31\x03\x00\xa4\xd4\x57"
+	"\x23\x2c\x24\x37\xb3\x62\xea\x35\x29\xd0\x2c\xac\xfd\xbd\xdf\x3d"
+	"\xa6\xce\xfa\x0d\x5b\xb6\x15\x8b\xe3\x58\xe9\xad\x99\x87\x29\x51"
+	"\x8d\x97\xd7\xa9\x55\xf0\x72\x6e\x4e\x58\xcb\x2b\x4d\xbd\xd0\x48"
+	"\x7d\x14\x86\xdb\x3f\xa2\x5f\x6e\x35\x4a\xe1\x70\xb1\x53\x72\xb7"
+	"\xbc\xe9\x3d\x1b\x33\xc0\x54\x6f\x43\x55\x76\x85\x7f\x9b\xa5\xb3"
+	"\xc1\x1d\xd3\xfe\xe2\xd5\x96\x3d\xdd\x92\x04\xb1\xad\x75\xdb\x13"
+	"\x4e\x49\xfc\x35\x34\xc5\xda\x13\x98\xb8\x12\xbe\xda\x90\x55\x7c"
+	"\x11\x6c\xbe\x2b\x8c\x51\x29\x23\xc1\x51\xbc\x0c\x1c\xe2\x20\xfc"
+	"\xfe\xf2\xaa\x71\x9b\x21\xdf\x25\x1f\x68\x21\x7e\xe1\xc9\x87\xa0"
+	"\x20\xf6\x8d\x4f\x27\x8c\x3c\x0f\x9d\xf4\x69\x25\xaa\x49\xab\x94"
+	"\x22\x5a\x92\x3a\xba\xb4\xc2\x8c\x5a\xaa\x04\xbf\x46\xc5\xaa\x93"
+	"\xab\x0d\xe9\x54\x6c\x3a\x64\xa6\xa2\x21\x66\xee\x1c\x10\x21\x84"
+	"\xf2\x9e\xcc\x57\xac\xc2\x25\x62\xad\xbb\x59\xef\x25\x61\x6c\x81"
+	"\x38\x8a\xdc\x8c\xeb\x7b\x18\x1d\xaf\xa9\xc5\x9a\xf4\x49\x26\x8a"
+	"\x25\xc4\x3e\x31\x95\x28\xef\xf7\x72\xe9\xc5\xaa\x59\x72\x2b\x67"
+	"\x47\xe8\x6b\x51\x05\x24\xb8\x18\xb3\x34\x0f\x8c\x2b\x80\xba\x61"
+	"\x1c\xbe\x9e\x9a\x7c\xe3\x60\x5e\x49\x02\xff\x50\x8a\x64\x28\x64"
+	"\x46\x7b\x83\x14\x72\x6e\x59\x9b\x56\x09\xb4\xf0\xde\x52\xc3\xf3"
+	"\x58\x17\x6a\xae\xb1\x0f\xf4\x39\xcc\xd8\xce\x4d\xe1\x51\x17\x88"
+	"\xe4\x98\xd9\xd1\xa9\x55\xbc\xbf\x7e\xc4\x51\x96\xdb\x44\x1d\xcd"
+	"\x8d\x74\xad\xa7\x8f\x87\x83\x75\xfc\x36\xb7\xd2\xd4\x89\x16\x97"
+	"\xe4\xc6\x2a\xe9\x65\xc8\xca\x1c\xbd\x86\xaf\x57\x80\xf7\xdd\x42"
+	"\xc0\x3b\x3f\x87\x51\x02\x2f\xf8\xd8\x68\x0f\x3d\x95\x2d\xf1\x67"
+	"\x09\xa6\x5d\x0b\x7e\x01\xb4\xb2\x32\x01\xa8\xd0\x58\x0d\xe6\xa2"
+	"\xd8\x4b\x22\x10\x7d\x11\xf3\xc2\x4e\xb8\x43\x8e\x31\x79\x59\xe2"
+	"\xc4\x96\x29\x17\x40\x06\x0d\xdf\xdf\xc3\x02\x30\x2a\xd1\x8e\xf2"
+	"\xee\x2d\xd2\x12\x63\x5a\x1d\x3c\xba\x4a\xc4\x56\x90\xc6\x12\x0b"
+	"\xe0\x04\x3f\x35\x59\x8e\x40\x75\xf4\x4c\x10\x61\xb9\x30\x89\x7c"
+	"\x8d\x0e\x25\xb7\x5a\x6b\x97\x05\xc6\x37\x80\x6e\x94\x56\xa8\x5f"
+	"\x03\x94\x59\xc8\xc5\x3e\xdc\x23\xe5\x68\x4f\xd7\xbb\x6d\x7e\xc1"
+	"\x8d\xf9\xcc\x3f\x38\xad\x77\xb3\x18\x61\xed\x04\xc0\x71\xa7\x96"
+	"\xb1\xaf\x1d\x69\x78\xda\x6d\x89\x8b\x50\x75\x99\x44\xb3\xb2\x75"
+	"\xd1\xc8\x14\x40\xa1\x0a\xbf\xc4\x45\xc4\xee\x12\x90\x76\x26\x64"
+	"\xb7\x73\x2e\x0b\x0c\xfa\xc3\x55\x29\x24\x1b\x7a\x00\x27\x07\x26"
+	"\x36\xf0\x38\x1a\xe3\xb7\xc4\x8d\x1c\x9c\xa9\xc0\xc1\x45\x91\x9e"
+	"\x86\xdd\x82\x94\x45\xfa\xcd\x5a\x19\x12\x7d\xef\xda\x17\xad\x21"
+	"\x17\x89\x8b\x45\xa7\xf5\xed\x51\x9e\x58\x13\xdc\x84\xa4\xe6\x37",
+	.expected_ss =
+	"\x9a\x9c\x1c\xb7\x73\x2f\xf2\x12\xed\x59\x01\xbb\x75\xf7\xf5\xe4"
+	"\xa0\xa8\xbc\x3f\x3f\xb6\xf7\x74\x6e\xc4\xba\x6d\x6c\x4d\x93\x31"
+	"\x2b\xa7\xa4\xb3\x47\x8f\x77\x04\xb5\xa5\xab\xca\x6b\x5a\xe2\x86"
+	"\x02\x60\xca\xb4\xd7\x5e\xe0\x0f\x73\xdd\xa2\x38\x7c\xae\x0f\x5a"
+	"\x1a\xd7\xfd\xb6\xc8\x6f\xdd\xe0\x98\xd5\x07\xea\x1f\x2a\xbb\x9e"
+	"\xef\x01\x24\x04\xee\xf5\x89\xb1\x12\x26\x54\x95\xef\xcb\x84\xe9"
+	"\xae\x05\xef\x63\x25\x15\x65\x79\x79\x79\x91\xc3\x76\x72\xb4\x85"
+	"\x86\xd9\xd3\x03\xb0\xff\x04\x96\x05\x3c\xde\xbf\x47\x34\x76\x70"
+	"\x17\xd2\x24\x83\xb9\xbb\xcf\x70\x7c\xb8\xc6\x7b\x4e\x01\x86\x36"
+	"\xc7\xc5\xe5\x8b\x7c\x69\x74\x9a\xfe\x1f\x58\x85\x0f\x00\xf8\x4e"
+	"\xf1\x56\xdc\xd1\x11\x28\x2c\xcf\x6c\xb9\xc9\x57\x17\x2e\x19\x19"
+	"\x55\xb3\x4c\xd8\xfb\xe7\x6f\x70\x63\xf9\x53\x45\xdd\xd5\x62\x95"
+	"\xd3\x7d\x7e\xa0\x00\x1a\x62\x9f\x96\x0a\x5d\x0a\x25\x02\xbb\xff"
+	"\x5a\xe8\x9e\x5a\x66\x08\x93\xbc\x92\xaf\xd2\x28\x04\x97\xc1\x54"
+	"\xfe\xcc\x0a\x25\xa2\xf4\x1d\x5a\x9a\xb1\x3e\x9c\xba\x78\xe2\xcf"
+	"\x71\x70\xe3\x40\xea\xba\x69\x9b\x03\xdd\x99\x26\x09\x84\x9d\x69"
+	"\x4d\x3d\x0b\xe9\x3f\x51\xcd\x05\xe5\x00\xaf\x2c\xd3\xf6\xc0\x68"
+	"\xb5\x23\x53\x33\x14\xbd\x39\x1c\xbd\x1b\xe6\x72\x90\xcc\xc2\x86"
+	"\x1a\x42\x83\x55\xb3\xed\x0b\x62\x6d\x0e\xbb\x9e\x2a\x42\x32\x05"
+	"\x3f\xf2\x2c\xc8\x9f\x3c\xd2\xb1\x0b\xb6\x4c\xa0\x22\x36\xee\xb9"
+	"\x55\x23\x3e\x80\xc7\x28\x7c\x39\x11\xd3\x4a\x96\x2e\xef\x52\x34"
+	"\xf2\xda\xb1\xc6\xf5\x02\x10\xbf\x56\x6b\x50\x56\xcd\x2c\xfe\xe1"
+	"\x94\x14\x19\x24\x6e\x9a\xdf\x0c\xb8\xe2\xb8\xd5\xa3\xc1\x22\x8e"
+	"\x84\x92\x00\x16\xf1\x3f\x83\xf6\x36\x31\xa5\x38\xc6\xcf\xf8\x9b"
+	"\x03\xc7\x6f\xb9\xa1\x04\xdf\x20\x0f\x0b\x0f\x70\xff\x57\x36\x7f"
+	"\xb3\x6b\xcb\x8f\x48\xf7\xb2\xdb\x85\x05\xd1\xfe\x34\x05\xf6\x57"
+	"\xb4\x5b\xcc\x3f\x0e\xba\x36\x59\xb0\xfd\x4d\xf6\xf4\x5e\xd2\x65"
+	"\x1d\x98\x87\xb4\x5e\xff\x29\xaa\x84\x9b\x44\x0f\x06\x36\x61\xbd"
+	"\xdb\x51\xda\x56\xc2\xd6\x19\xe2\x57\x4f\xd0\x29\x71\xc8\xe4\xd6"
+	"\xfb\x8c\xd0\xfc\x4f\x25\x09\xa6\xfc\x67\xe2\xb8\xac\xd3\x88\x8f"
+	"\x1f\xf6\xa1\xe3\x45\xa6\x34\xe3\xb1\x6b\xb7\x37\x0e\x06\xc7\x63"
+	"\xde\xac\x3b\xac\x07\x91\x64\xcc\x12\x10\x46\x85\x14\x0b\x6b\x03"
+	"\xba\x4a\x85\xae\xc5\x8c\xa5\x9d\x36\x38\x33\xca\x42\x9c\x4b\x0c"
+	"\x46\xe1\x77\xe9\x1f\x80\xfe\xb7\x1d\x5a\xf4\xc6\x11\x26\x78\xea"
+	"\x81\x25\x77\x47\xed\x8b\x59\xc2\x6b\x49\xff\x83\x56\xec\xa5\xf0"
+	"\xe0\x8b\x15\xd4\x99\x40\x2a\x65\x2a\x98\xf4\x71\x35\x63\x84\x08"
+	"\x4d\xcd\x71\x85\x55\xbc\xa4\x1c\x90\x93\x03\x41\xde\xed\x78\x62"
+	"\x07\x30\x50\xac\x60\x21\x06\xc3\xab\xa4\x04\xc0\xc2\x32\x07\xc4"
+	"\x1f\x2f\xec\xe2\x32\xbf\xbe\x5e\x50\x5b\x2a\x19\x71\x44\x37\x76"
+	"\x8b\xbc\xdb\x73\x98\x65\x78\xc9\x33\x97\x7e\xdc\x60\xa8\x87\xf2"
+	"\xb5\x96\x55\x7f\x44\x07\xcb\x3b\xf3\xd7\x82\xfd\x77\x21\x82\x21"
+	"\x1a\x8b\xa2\xf5\x1f\x66\xd0\x57\x00\x4f\xa9\xa5\x33\xb8\x69\x91"
+	"\xe8\x2e\xf7\x73\x47\x89\x30\x9b\xb1\xfd\xe1\x5d\x11\xfd\x84\xd9"
+	"\xa2\x91\x1f\x8a\xa7\x7a\x77\x8e\x3b\x10\x1d\x0a\x59\x50\x34\xb0"
+	"\xc3\x90\x9f\x56\xb7\x43\xeb\x51\x99\x2b\x8e\x6d\x7b\x58\xe7\xc0"
+	"\x7f\x3d\xa0\x27\x50\xf2\x6e\xc8\x1e\x7f\x84\xb3\xe1\xf7\x09\x85"
+	"\xd2\x9b\x56\x6b\xba\xa5\x19\x2e\xec\xd8\x5c\xf5\x4e\x43\x36\x2e"
+	"\x89\x85\x41\x7f\x9c\x91\x2e\x62\xc3\x41\xcf\x0e\xa1\x7f\xeb\x50",
+	.secret_size = 784,
+	.b_public_size = 768,
+	.expected_a_public_size = 768,
+	.expected_ss_size = 768,
+	},
+	{
+	.secret =
+#ifdef __LITTLE_ENDIAN
+	"\x01\x00" /* type */
+	"\x10\x00" /* len */
+	"\x00\x00\x00\x00" /* key_size */
+	"\x00\x00\x00\x00" /* p_size */
+	"\x00\x00\x00\x00", /* g_size */
+#else
+	"\x00\x01" /* type */
+	"\x00\x10" /* len */
+	"\x00\x00\x00\x00" /* key_size */
+	"\x00\x00\x00\x00" /* p_size */
+	"\x00\x00\x00\x00", /* g_size */
+#endif
+	.b_secret =
+#ifdef __LITTLE_ENDIAN
+	"\x01\x00" /* type */
+	"\x10\x03" /* len */
+	"\x00\x03\x00\x00" /* key_size */
+	"\x00\x00\x00\x00" /* p_size */
+	"\x00\x00\x00\x00" /* g_size */
+#else
+	"\x00\x01" /* type */
+	"\x03\x10" /* len */
+	"\x00\x00\x03\x00" /* key_size */
+	"\x00\x00\x00\x00" /* p_size */
+	"\x00\x00\x00\x00" /* g_size */
+#endif
+	/* xa */
+	"\x63\x3e\x6f\xe0\xfe\x9f\x4a\x01\x62\x77\xce\xf1\xc7\xcc\x49\x4d"
+	"\x92\x53\x56\xe3\x39\x15\x81\xb2\xcd\xdc\xaf\x5e\xbf\x31\x1f\x69"
+	"\xce\x41\x35\x24\xaa\x46\x53\xb5\xb7\x3f\x2b\xad\x95\x14\xfb\xe4"
+	"\x9a\x61\xcd\x0f\x1f\x02\xee\xa4\x79\x2c\x9d\x1a\x7c\x62\x82\x39"
+	"\xdd\x43\xcc\x58\x9f\x62\x47\x56\x1d\x0f\xc2\x67\xbc\x24\xd0\xf9"
+	"\x0a\x50\x1b\x10\xe7\xbb\xd1\xc2\x01\xbb\xc4\x4c\xda\x12\x60\x0e"
+	"\x95\x2b\xde\x09\xd6\x67\xe1\xbc\x4c\xb9\x67\xdf\xd0\x1f\x97\xb4"
+	"\xde\xcb\x6b\x78\x83\x51\x74\x33\x01\x7f\xf6\x0a\x95\x69\x93\x00"
+	"\x2a\xc3\x75\x8e\xef\xbe\x53\x11\x6d\xc4\xd0\x9f\x6d\x63\x48\xc1"
+	"\x91\x1f\x7d\x88\xa7\x90\x78\xd1\x7e\x52\x42\x10\x01\xb4\x27\x95"
+	"\x91\x43\xcc\x82\x91\x86\x62\xa0\x9d\xef\x65\x6e\x67\xcf\x19\x11"
+	"\x35\x37\x5e\x94\x97\x83\xa6\x83\x1c\x7e\x8a\x3e\x32\xb0\xce\xff"
+	"\x20\xdc\x7b\x6e\x18\xd9\x6b\x27\x31\xfc\xc3\xef\x47\x8d\xbe\x34"
+	"\x2b\xc7\x60\x74\x3c\x93\xb3\x8e\x54\x77\x4e\x73\xe6\x40\x72\x35"
+	"\xb0\xf0\x06\x53\x43\xbe\xd0\xc3\x87\xcc\x38\x96\xa9\x10\xa0\xd6"
+	"\x17\xed\xa5\x6a\xf4\xf6\xaa\x77\x40\xed\x7d\x2e\x58\x0f\x5b\x04"
+	"\x5a\x41\x12\x95\x22\xcb\xa3\xce\x8b\x6d\x6d\x89\xec\x7c\x1d\x25"
+	"\x27\x52\x50\xa0\x5b\x93\x8c\x5d\x3f\x56\xb9\xa6\x5e\xe5\xf7\x9b"
+	"\xc7\x9a\x4a\x2e\x79\xb5\xca\x29\x58\x52\xa0\x63\xe4\x9d\xeb\x4c"
+	"\x4c\xa8\x37\x0b\xe9\xa0\x18\xf1\x86\xf6\x4d\x32\xfb\x9e\x4f\xb3"
+	"\x7b\x5d\x58\x78\x70\xbd\x56\xac\x99\x75\x25\x71\x66\x76\x4e\x5e"
+	"\x67\x4f\xb1\x17\xa7\x8b\x55\x12\x87\x01\x4e\xd1\x66\xef\xd0\x70"
+	"\xaf\x14\x34\xee\x2a\x76\x49\x25\xa6\x2e\x43\x37\x75\x7d\x1a\xad"
+	"\x08\xd5\x01\x85\x9c\xe1\x20\xd8\x38\x5c\x57\xa5\xed\x9d\x46\x3a"
+	"\xb7\x46\x60\x29\x8b\xc4\x21\x50\x0a\x30\x9c\x57\x42\xe4\x35\xf8"
+	"\x12\x5c\x4f\xa2\x20\xc2\xc9\x43\xe3\x6d\x20\xbc\xdf\xb8\x37\x33"
+	"\x45\x43\x06\x4e\x08\x6f\x8a\xcd\x61\xc3\x1b\x05\x28\x82\xbe\xf0"
+	"\x48\x33\xe5\x93\xc9\x1a\x61\x16\x67\x03\x9d\x47\x9d\x74\xeb\xae"
+	"\x13\xf2\xb4\x1b\x09\x11\xf5\x15\xcb\x28\xfd\x50\xe0\xbc\x58\x36"
+	"\x38\x91\x2c\x07\x27\x1f\x49\x68\xf4\xce\xad\xf7\xba\xec\x5d\x3d"
+	"\xfd\x27\xe2\xcf\xf4\x56\xfe\x08\xa6\x11\x61\xcb\x6c\x9f\xf9\x3c"
+	"\x57\x0b\x8b\xaa\x00\x16\x18\xba\x1f\xe8\x4f\x01\xe2\x79\x2a\x0b"
+	"\xc1\xbd\x52\xef\xe6\xf7\x5a\x66\xfe\x07\x3b\x50\x6b\xbb\xcb\x39"
+	"\x3c\x94\xf6\x21\x0d\x68\x69\xa4\xed\x2e\xb5\x85\x03\x11\x38\x79"
+	"\xec\xb5\x22\x23\xdf\x9e\xad\xb4\xbe\xd7\xc7\xdf\xea\x30\x23\x8a"
+	"\xb7\x21\x0a\x9d\xbd\x99\x13\x7d\x5f\x7e\xaf\x28\x54\x3f\xca\x5e"
+	"\xf4\xfc\x05\x0d\x65\x67\xd8\xf6\x8e\x90\x9d\x0d\xcf\x62\x82\xd6"
+	"\x9f\x02\xf8\xca\xfa\x42\x24\x7f\x4d\xb7\xfc\x92\xa6\x4a\x51\xc4"
+	"\xd8\xae\x19\x87\xc6\xa3\x83\xbe\x7b\x6d\xc3\xf5\xb8\xad\x4a\x05"
+	"\x78\x84\x3a\x15\x2e\x40\xbe\x79\xa9\xc0\x12\xa1\x48\x39\xc3\xdb"
+	"\x47\x4f\x7d\xea\x6d\xc7\xfa\x2c\x4e\xe9\xa5\x85\x81\xea\x6c\xcd"
+	"\x8a\xe5\x74\x17\x76\x31\x31\x75\x96\x83\xca\x81\xbb\x5c\xa9\x79"
+	"\x2c\xbd\x09\xfe\xe4\x86\x0d\x8c\x76\x9c\xbc\xe8\x93\xe4\xd0\xe4"
+	"\x0f\xf8\xff\x24\x7e\x66\x61\x69\xfb\xe4\x46\x08\x94\x99\xa5\x53"
+	"\xd7\xe4\x29\x72\x86\x86\xe8\x1d\x37\xfa\xcb\xd0\x8d\x51\xd0\xbf"
+	"\x81\xcf\x55\xb9\xc5\x78\x8c\x74\xa0\x16\x3a\xd2\x19\x94\x29\x6a"
+	"\x5e\xec\xd3\x20\xa0\xb2\xfd\xce\xd4\x14\xa3\x39\x10\xa9\xf4\x4e"
+	"\xba\x21\x09\x5c\xe6\x61\x43\x51\xae\xc4\x71\xd7\x21\xef\x98\x39",
+	.b_public =
+	"\x45\x96\x5a\xb7\x78\x5c\xa4\x4d\x39\xb2\x5f\xc8\xc2\xaa\x1a\xf4"
+	"\xa6\x68\xf6\x6f\x7e\xa8\x4a\x5b\x0e\xba\x0a\x99\x85\xf9\x63\xd4"
+	"\x58\x21\x6d\xa8\x3c\xf4\x05\x10\xb0\x0d\x6f\x1c\xa0\x17\x85\xae"
+	"\x68\xbf\xcc\x00\xc8\x86\x1b\x24\x31\xc9\x49\x23\x91\xe0\x71\x29"
+	"\x06\x39\x39\x93\x49\x9c\x75\x18\x1a\x8b\x61\x73\x1c\x7f\x37\xd5"
+	"\xf1\xab\x20\x5e\x62\x25\xeb\x58\xd5\xfa\xc9\x7f\xad\x57\xd5\xcc"
+	"\x0d\xc1\x7a\x2b\x33\x2a\x76\x84\x33\x26\x97\xcf\x47\x9d\x72\x2a"
+	"\xc9\x39\xde\xa8\x42\x27\x2d\xdc\xee\x00\x60\xd2\x4f\x13\xe0\xde"
+	"\xd5\xc7\xf6\x7d\x8b\x2a\x43\x49\x40\x99\xc2\x61\x84\x8e\x57\x09"
+	"\x7c\xcc\x19\x46\xbd\x4c\xd2\x7c\x7d\x02\x4d\x88\xdf\x58\x24\x80"
+	"\xeb\x19\x3b\x2a\x13\x2b\x19\x85\x3c\xd8\x31\x03\x00\xa4\xd4\x57"
+	"\x23\x2c\x24\x37\xb3\x62\xea\x35\x29\xd0\x2c\xac\xfd\xbd\xdf\x3d"
+	"\xa6\xce\xfa\x0d\x5b\xb6\x15\x8b\xe3\x58\xe9\xad\x99\x87\x29\x51"
+	"\x8d\x97\xd7\xa9\x55\xf0\x72\x6e\x4e\x58\xcb\x2b\x4d\xbd\xd0\x48"
+	"\x7d\x14\x86\xdb\x3f\xa2\x5f\x6e\x35\x4a\xe1\x70\xb1\x53\x72\xb7"
+	"\xbc\xe9\x3d\x1b\x33\xc0\x54\x6f\x43\x55\x76\x85\x7f\x9b\xa5\xb3"
+	"\xc1\x1d\xd3\xfe\xe2\xd5\x96\x3d\xdd\x92\x04\xb1\xad\x75\xdb\x13"
+	"\x4e\x49\xfc\x35\x34\xc5\xda\x13\x98\xb8\x12\xbe\xda\x90\x55\x7c"
+	"\x11\x6c\xbe\x2b\x8c\x51\x29\x23\xc1\x51\xbc\x0c\x1c\xe2\x20\xfc"
+	"\xfe\xf2\xaa\x71\x9b\x21\xdf\x25\x1f\x68\x21\x7e\xe1\xc9\x87\xa0"
+	"\x20\xf6\x8d\x4f\x27\x8c\x3c\x0f\x9d\xf4\x69\x25\xaa\x49\xab\x94"
+	"\x22\x5a\x92\x3a\xba\xb4\xc2\x8c\x5a\xaa\x04\xbf\x46\xc5\xaa\x93"
+	"\xab\x0d\xe9\x54\x6c\x3a\x64\xa6\xa2\x21\x66\xee\x1c\x10\x21\x84"
+	"\xf2\x9e\xcc\x57\xac\xc2\x25\x62\xad\xbb\x59\xef\x25\x61\x6c\x81"
+	"\x38\x8a\xdc\x8c\xeb\x7b\x18\x1d\xaf\xa9\xc5\x9a\xf4\x49\x26\x8a"
+	"\x25\xc4\x3e\x31\x95\x28\xef\xf7\x72\xe9\xc5\xaa\x59\x72\x2b\x67"
+	"\x47\xe8\x6b\x51\x05\x24\xb8\x18\xb3\x34\x0f\x8c\x2b\x80\xba\x61"
+	"\x1c\xbe\x9e\x9a\x7c\xe3\x60\x5e\x49\x02\xff\x50\x8a\x64\x28\x64"
+	"\x46\x7b\x83\x14\x72\x6e\x59\x9b\x56\x09\xb4\xf0\xde\x52\xc3\xf3"
+	"\x58\x17\x6a\xae\xb1\x0f\xf4\x39\xcc\xd8\xce\x4d\xe1\x51\x17\x88"
+	"\xe4\x98\xd9\xd1\xa9\x55\xbc\xbf\x7e\xc4\x51\x96\xdb\x44\x1d\xcd"
+	"\x8d\x74\xad\xa7\x8f\x87\x83\x75\xfc\x36\xb7\xd2\xd4\x89\x16\x97"
+	"\xe4\xc6\x2a\xe9\x65\xc8\xca\x1c\xbd\x86\xaf\x57\x80\xf7\xdd\x42"
+	"\xc0\x3b\x3f\x87\x51\x02\x2f\xf8\xd8\x68\x0f\x3d\x95\x2d\xf1\x67"
+	"\x09\xa6\x5d\x0b\x7e\x01\xb4\xb2\x32\x01\xa8\xd0\x58\x0d\xe6\xa2"
+	"\xd8\x4b\x22\x10\x7d\x11\xf3\xc2\x4e\xb8\x43\x8e\x31\x79\x59\xe2"
+	"\xc4\x96\x29\x17\x40\x06\x0d\xdf\xdf\xc3\x02\x30\x2a\xd1\x8e\xf2"
+	"\xee\x2d\xd2\x12\x63\x5a\x1d\x3c\xba\x4a\xc4\x56\x90\xc6\x12\x0b"
+	"\xe0\x04\x3f\x35\x59\x8e\x40\x75\xf4\x4c\x10\x61\xb9\x30\x89\x7c"
+	"\x8d\x0e\x25\xb7\x5a\x6b\x97\x05\xc6\x37\x80\x6e\x94\x56\xa8\x5f"
+	"\x03\x94\x59\xc8\xc5\x3e\xdc\x23\xe5\x68\x4f\xd7\xbb\x6d\x7e\xc1"
+	"\x8d\xf9\xcc\x3f\x38\xad\x77\xb3\x18\x61\xed\x04\xc0\x71\xa7\x96"
+	"\xb1\xaf\x1d\x69\x78\xda\x6d\x89\x8b\x50\x75\x99\x44\xb3\xb2\x75"
+	"\xd1\xc8\x14\x40\xa1\x0a\xbf\xc4\x45\xc4\xee\x12\x90\x76\x26\x64"
+	"\xb7\x73\x2e\x0b\x0c\xfa\xc3\x55\x29\x24\x1b\x7a\x00\x27\x07\x26"
+	"\x36\xf0\x38\x1a\xe3\xb7\xc4\x8d\x1c\x9c\xa9\xc0\xc1\x45\x91\x9e"
+	"\x86\xdd\x82\x94\x45\xfa\xcd\x5a\x19\x12\x7d\xef\xda\x17\xad\x21"
+	"\x17\x89\x8b\x45\xa7\xf5\xed\x51\x9e\x58\x13\xdc\x84\xa4\xe6\x37",
+	.secret_size = 16,
+	.b_secret_size = 784,
+	.b_public_size = 768,
+	.expected_a_public_size = 768,
+	.expected_ss_size = 768,
+	.genkey = true,
+	},
+	};
+
+static const struct kpp_testvec ffdhe8192_dh_tv_template[] __maybe_unused = {
+	{
+	.secret =
+#ifdef __LITTLE_ENDIAN
+	"\x01\x00" /* type */
+	"\x10\x04" /* len */
+	"\x00\x04\x00\x00" /* key_size */
+	"\x00\x00\x00\x00" /* p_size */
+	"\x00\x00\x00\x00" /* g_size */
+#else
+	"\x00\x01" /* type */
+	"\x04\x10" /* len */
+	"\x00\x00\x04\x00" /* key_size */
+	"\x00\x00\x00\x00" /* p_size */
+	"\x00\x00\x00\x00" /* g_size */
+#endif
+	/* xa */
+	"\x76\x6e\xeb\xf9\xeb\x76\xae\x37\xcb\x19\x49\x8b\xeb\xaf\xb0\x4b"
+	"\x6d\xe9\x15\xad\xda\xf2\xef\x58\xe9\xd6\xdd\x4c\xb3\x56\xd0\x3b"
+	"\x00\xb0\x65\xed\xae\xe0\x2e\xdf\x8f\x45\x3f\x3c\x5d\x2f\xfa\x96"
+	"\x36\x33\xb2\x01\x8b\x0f\xe8\x46\x15\x6d\x60\x5b\xec\x32\xc3\x3b"
+	"\x06\xf3\xb4\x1b\x9a\xef\x3c\x03\x0e\xcc\xce\x1d\x24\xa0\xc9\x08"
+	"\x65\xf9\x45\xe5\xd2\x43\x08\x88\x58\xd6\x46\xe7\xbb\x25\xac\xed"
+	"\x3b\xac\x6f\x5e\xfb\xd6\x19\xa6\x20\x3a\x1d\x0c\xe8\x00\x72\x54"
+	"\xd7\xd9\xc9\x26\x49\x18\xc6\xb8\xbc\xdd\xf3\xce\xf3\x7b\x69\x04"
+	"\x5c\x6f\x11\xdb\x44\x42\x72\xb6\xb7\x84\x17\x86\x47\x3f\xc5\xa1"
+	"\xd8\x86\xef\xe2\x27\x49\x2b\x8f\x3e\x91\x12\xd9\x45\x96\xf7\xe6"
+	"\x77\x76\x36\x58\x71\x9a\xb1\xdb\xcf\x24\x9e\x7e\xad\xce\x45\xba"
+	"\xb5\xec\x8e\xb9\xd6\x7b\x3d\x76\xa4\x85\xad\xd8\x49\x9b\x80\x9d"
+	"\x7f\x9f\x85\x09\x9e\x86\x5b\x6b\xf3\x8d\x39\x5e\x6f\xe4\x30\xc8"
+	"\xa5\xf3\xdf\x68\x73\x6b\x2e\x9a\xcb\xac\x0a\x0d\x44\xc1\xaf\xb2"
+	"\x11\x1b\x7c\x43\x08\x44\x43\xe2\x4e\xfd\x93\x30\x99\x09\x12\xbb"
+	"\xf6\x31\x34\xa5\x3d\x45\x98\xee\xd7\x2a\x1a\x89\xf5\x37\x92\x33"
+	"\xa0\xdd\xf5\xfb\x1f\x90\x42\x55\x5a\x0b\x82\xff\xf0\x96\x92\x15"
+	"\x65\x5a\x55\x96\xca\x1b\xd5\xe5\xb5\x94\xde\x2e\xa6\x03\x57\x9e"
+	"\x15\xe4\x32\x2b\x1f\xb2\x22\x21\xe9\xa0\x05\xd3\x65\x6c\x11\x66"
+	"\x25\x38\xbb\xa3\x6c\xc2\x0b\x2b\xd0\x7a\x20\x26\x29\x37\x5d\x5f"
+	"\xd8\xff\x2a\xcd\x46\x6c\xd6\x6e\xe5\x77\x1a\xe6\x33\xf1\x8e\xc8"
+	"\x10\x30\x11\x00\x27\xf9\x7d\x0e\x28\x43\xa7\x67\x38\x7f\x16\xda"
+	"\xd0\x01\x8e\xa4\xe8\x6f\xcd\x23\xaf\x77\x52\x34\xad\x7e\xc3\xed"
+	"\x2d\x10\x0a\x33\xdc\xcf\x1b\x88\x0f\xcc\x48\x7f\x42\xf0\x9e\x13"
+	"\x1f\xf5\xd1\xe9\x90\x87\xbd\xfa\x5f\x1d\x77\x55\xcb\xc3\x05\xaf"
+	"\x71\xd0\xe0\xab\x46\x31\xd7\xea\x89\x54\x2d\x39\xaf\xf6\x4f\x74"
+	"\xaf\x46\x58\x89\x78\x95\x2e\xe6\x90\xb7\xaa\x00\x73\x9f\xed\xb9"
+	"\x00\xd6\xf6\x6d\x26\x59\xcd\x56\xdb\xf7\x3d\x5f\xeb\x6e\x46\x33"
+	"\xb1\x23\xed\x9f\x8d\x58\xdc\xb4\x28\x3b\x90\x09\xc4\x61\x02\x1f"
+	"\xf8\x62\xf2\x6e\xc1\x94\x71\x66\x93\x11\xdf\xaa\x3e\xd7\xb5\xe5"
+	"\xc1\x78\xe9\x14\xcd\x55\x16\x51\xdf\x8d\xd0\x94\x8c\x43\xe9\xb8"
+	"\x1d\x42\x7f\x76\xbc\x6f\x87\x42\x88\xde\xd7\x52\x78\x00\x4f\x18"
+	"\x02\xe7\x7b\xe2\x8a\xc3\xd1\x43\xa5\xac\xda\xb0\x8d\x19\x96\xd4"
+	"\x81\xe0\x75\xe9\xca\x41\x7e\x1f\x93\x0b\x26\x24\xb3\xaa\xdd\x10"
+	"\x20\xd3\xf2\x9f\x3f\xdf\x65\xde\x67\x79\xdc\x76\x9f\x3c\x72\x75"
+	"\x65\x8a\x30\xcc\xd2\xcc\x06\xb1\xab\x62\x86\x78\x5d\xb8\xce\x72"
+	"\xb3\x12\xc7\x9f\x07\xd0\x6b\x98\x82\x9b\x6c\xbb\x15\xe5\xcc\xf4"
+	"\xc8\xf4\x60\x81\xdc\xd3\x09\x1b\x5e\xd4\xf3\x55\xcf\x1c\x16\x83"
+	"\x61\xb4\x2e\xcc\x08\x67\x58\xfd\x46\x64\xbc\x29\x4b\xdd\xda\xec"
+	"\xdc\xc6\xa9\xa5\x73\xfb\xf8\xf3\xaf\x89\xa8\x9e\x25\x14\xfa\xac"
+	"\xeb\x1c\x7c\x80\x96\x66\x4d\x41\x67\x9b\x07\x4f\x0a\x97\x17\x1c"
+	"\x4d\x61\xc7\x2e\x6f\x36\x98\x29\x50\x39\x6d\xe7\x70\xda\xf0\xc8"
+	"\x05\x80\x7b\x32\xff\xfd\x12\xde\x61\x0d\xf9\x4c\x21\xf1\x56\x72"
+	"\x3d\x61\x46\xc0\x2d\x07\xd1\x6c\xd3\xbe\x9a\x21\x83\x85\xf7\xed"
+	"\x53\x95\x44\x40\x8f\x75\x12\x18\xc2\x9a\xfd\x5e\xce\x66\xa6\x7f"
+	"\x57\xc0\xd7\x73\x76\xb3\x13\xda\x2e\x58\xc6\x27\x40\xb2\x2d\xef"
+	"\x7d\x72\xb4\xa8\x75\x6f\xcc\x5f\x42\x3e\x2c\x90\x36\x59\xa0\x34"
+	"\xaa\xce\xbc\x04\x4c\xe6\x56\xc2\xcd\xa6\x1c\x59\x04\x56\x53\xcf"
+	"\x6d\xd7\xf0\xb1\x4f\x91\xfa\x84\xcf\x4b\x8d\x50\x4c\xf8\x2a\x31"
+	"\x5f\xe3\xba\x79\xb4\xcc\x59\x64\xe3\x7a\xfa\xf6\x06\x9d\x04\xbb"
+	"\xce\x61\xbf\x9e\x59\x0a\x09\x51\x6a\xbb\x0b\x80\xe0\x91\xc1\x51"
+	"\x04\x58\x67\x67\x4b\x42\x4f\x95\x68\x75\xe2\x1f\x9c\x14\x70\xfd"
+	"\x3a\x8a\xce\x8b\x04\xa1\x89\xe7\xb4\xbf\x70\xfe\xf3\x0c\x48\x04"
+	"\x3a\xd2\x85\x68\x03\xe7\xfa\xec\x5b\x55\xb7\x95\xfd\x5b\x19\x35"
+	"\xad\xcb\x4a\x63\x03\x44\x64\x2a\x48\x59\x9a\x26\x43\x96\x8c\xe6"
+	"\xbd\xb7\x90\xd4\x5f\x8d\x08\x28\xa8\xc5\x89\x70\xb9\x6e\xd3\x3b"
+	"\x76\x0e\x37\x98\x15\x27\xca\xc9\xb0\xe0\xfd\xf3\xc6\xdf\x69\xce"
+	"\xe1\x5f\x6a\x3e\x5c\x86\xe2\x58\x41\x11\xf0\x7e\x56\xec\xe4\xc9"
+	"\x0d\x87\x91\xfb\xb9\xc8\x0d\x34\xab\xb0\xc6\xf2\xa6\x00\x7b\x18"
+	"\x92\xf4\x43\x7f\x01\x85\x2e\xef\x8c\x72\x50\x10\xdb\xf1\x37\x62"
+	"\x16\x85\x71\x01\xa8\x2b\xf0\x13\xd3\x7c\x0b\xaf\xf1\xf3\xd1\xee"
+	"\x90\x41\x5f\x7d\x5b\xa9\x83\x4b\xfa\x80\x59\x50\x73\xe1\xc4\xf9"
+	"\x5e\x4b\xde\xd9\xf5\x22\x68\x5e\x65\xd9\x37\xe4\x1a\x08\x0e\xb1"
+	"\x28\x2f\x40\x9e\x37\xa8\x12\x56\xb7\xb8\x64\x94\x68\x94\xff\x9f",
+	.b_public =
+	"\x26\xa8\x3a\x97\xe0\x52\x76\x07\x26\xa7\xbb\x21\xfd\xe5\x69\xde"
+	"\xe6\xe0\xb5\xa0\xf1\xaa\x51\x2b\x56\x1c\x3c\x6c\xe5\x9f\x8f\x75"
+	"\x71\x04\x86\xf6\x43\x2f\x20\x7f\x45\x4f\x5c\xb9\xf3\x90\xbe\xa9"
+	"\xa0\xd7\xe8\x03\x0e\xfe\x99\x9b\x8a\x1c\xbe\xa7\x63\xe8\x2b\x45"
+	"\xd4\x2c\x65\x25\x4c\x33\xda\xc5\x85\x77\x5d\x62\xea\x93\xe4\x45"
+	"\x59\xff\xa1\xd2\xf1\x73\x11\xed\x02\x64\x8a\x1a\xfb\xe1\x88\xa6"
+	"\x50\x6f\xff\x87\x12\xbb\xfc\x10\xcf\x19\x41\xb0\x35\x44\x7d\x51"
+	"\xe9\xc0\x77\xf2\x73\x21\x2e\x62\xbf\x65\xa5\xd1\x3b\xb1\x3e\x19"
+	"\x75\x4b\xb7\x8e\x03\xc3\xdf\xc8\xb2\xe6\xec\x2d\x7d\xa5\x6a\xba"
+	"\x93\x47\x50\xeb\x6e\xdb\x88\x05\x45\xad\x03\x8c\xf7\x9a\xe1\xc9"
"\x1e\x16\x96\x37\xa5\x3e\xe9\xb9\xa8\xdc\xb9\xa9\xf6\xa1\x3d\xed" 2536 + "\xbe\x12\x29\x8a\x3d\x3d\x90\xfc\x94\xfe\x66\x28\x1c\x1b\xa4\x89" 2537 + "\x47\x66\x4f\xac\x14\x00\x22\x2d\x5c\x03\xea\x71\x4d\x19\x7d\xd6" 2538 + "\x58\x39\x4c\x3d\x06\x2b\x30\xa6\xdc\x2c\x8d\xd1\xde\x79\x77\xfa" 2539 + "\x9c\x6b\x72\x11\x8a\x7f\x7d\x37\x28\x2a\x88\xbf\x0a\xdb\xac\x3b" 2540 + "\xc5\xa5\xd5\x7e\x25\xec\xa6\x7f\x5b\x53\x75\x83\x49\xd4\x77\xcc" 2541 + "\x7d\x7e\xd3\x3d\x30\x2c\x98\x3f\x18\x9a\x11\x8a\x37\xda\x99\x0f" 2542 + "\x3b\x06\xe1\x87\xd5\xe9\x4e\xe0\x9c\x0e\x39\x34\xe2\xdd\xf6\x58" 2543 + "\x60\x63\xa6\xea\xe8\xc0\xb4\xde\xdf\xa0\xbc\x21\xc3\x2d\xf4\xa4" 2544 + "\xc8\x6f\x62\x6c\x0f\x71\x88\xf9\xda\x2d\x30\xd5\x95\xe1\xfc\x6d" 2545 + "\x88\xc5\xc3\x95\x51\x83\xde\x41\x46\x6f\x7e\x1b\x10\x48\xad\x2b" 2546 + "\x82\x88\xa2\x6f\x57\x4d\x4a\xbd\x90\xc8\x06\x8f\x52\x5d\x6e\xee" 2547 + "\x09\xe6\xa3\xcb\x30\x9c\x14\xf6\xac\x66\x9b\x81\x0a\x75\x42\x6b" 2548 + "\xab\x27\xec\x76\xfb\x8d\xc5\xbf\x0e\x93\x81\x7b\x81\xd4\x85\xa6" 2549 + "\x90\x5a\xa6\xa2\x8b\xa9\xb7\x34\xe6\x15\x36\x93\x8b\xe2\x99\xc7" 2550 + "\xad\x66\x7e\xd6\x89\xa9\xc8\x15\xcb\xc5\xeb\x06\x85\xd4\x2f\x6e" 2551 + "\x9b\x95\x7a\x06\x6c\xfa\x31\x1d\xc4\xe5\x7d\xfb\x10\x35\x88\xc2" 2552 + "\xbe\x1c\x16\x5d\xc2\xf4\x0d\xf3\xc9\x94\xb2\x7e\xa7\xbd\x9c\x03" 2553 + "\x32\xaf\x8b\x1a\xc8\xcc\x82\xd8\x87\x96\x6e\x3d\xcc\x93\xd2\x43" 2554 + "\x73\xf9\xde\xec\x49\x49\xf4\x56\x2a\xc8\x6e\x32\x70\x48\xf8\x70" 2555 + "\xa3\x96\x31\xf4\xf2\x08\xc5\x12\xd2\xeb\xb6\xea\xa3\x07\x05\x61" 2556 + "\x74\xa3\x04\x2f\x17\x82\x40\x5e\x4c\xd1\x51\xb8\x10\x5b\xc8\x9f" 2557 + "\x87\x73\x80\x0d\x6f\xc6\xb9\xf6\x7c\x31\x0a\xcc\xd9\x03\x0f\x7a" 2558 + "\x47\x69\xb1\x55\xab\xe9\xb5\x75\x62\x9e\x95\xbe\x7b\xa9\x53\x6e" 2559 + "\x28\x73\xdc\xb3\xa4\x8a\x1c\x91\xf5\x8a\xf9\x32\x2b\xbd\xa5\xdc" 2560 + "\x07\xb5\xaf\x49\xdb\x9c\x35\xc9\x69\xde\xac\xb1\xd0\x86\xcb\x31" 2561 + "\x0b\xc4\x4f\x63\x4e\x70\xa7\x80\xe3\xbc\x0b\x73\x0e\xf2\x8c\x87" 2562 + 
"\x88\x7b\xa9\x6d\xde\x8a\x73\x14\xb9\x80\x55\x03\x2b\x29\x64\x6a" 2563 + "\xda\x48\x0e\x78\x07\x40\x48\x46\x58\xa9\x4e\x68\x1d\xd1\xc1\xc8" 2564 + "\x3b\x35\x53\x61\xd5\xe3\x0d\x4c\x42\x74\x10\x67\x85\x9f\x66\x2a" 2565 + "\xf7\x2b\x7b\x77\x8b\x6e\xda\x2c\xc1\x5a\x20\x34\x3f\xf5\x8b\x6f" 2566 + "\xe4\x61\xf5\x58\xab\x72\x1a\xf1\x8d\x28\xcc\xa5\x30\x68\xb5\x50" 2567 + "\x7b\x81\x43\x89\x8e\xa9\xac\x63\x3a\x4a\x78\x7b\xd2\x45\xe6\xe0" 2568 + "\xdc\x5d\xf2\x1a\x2b\x54\x50\xa5\x9d\xf6\xe7\x9f\x25\xaf\x56\x6a" 2569 + "\x84\x2a\x75\xa3\x9a\xc7\xfa\x94\xec\x83\xab\xa5\xaa\xe1\xf9\x89" 2570 + "\x29\xa9\xf6\x53\x24\x24\xae\x4a\xe8\xbc\xe8\x9e\x5c\xd7\x54\x7c" 2571 + "\x65\x20\x97\x28\x94\x76\xf9\x9e\x81\xcf\x98\x6a\x3a\x7b\xec\xf3" 2572 + "\x09\x60\x2e\x43\x18\xb5\xf6\x8c\x44\x0f\xf2\x0a\x17\x5b\xac\x98" 2573 + "\x30\xab\x6e\xd5\xb3\xef\x25\x68\x50\xb6\xe1\xc0\xe4\x5a\x63\x43" 2574 + "\xea\xca\xda\x23\xc1\xc2\xe9\x30\xec\xb3\x9f\xbf\x1f\x09\x76\xaf" 2575 + "\x65\xbc\xb5\xab\x30\xac\x0b\x05\xef\x5c\xa3\x65\x77\x33\x1c\xc5" 2576 + "\xdf\xc9\x39\xab\xca\xf4\x3b\x88\x25\x6d\x50\x87\xb1\x79\xc2\x23" 2577 + "\x9d\xb5\x21\x01\xaa\xa3\xb7\x61\xa3\x48\x91\x72\x3d\x54\x85\x86" 2578 + "\x91\x81\x35\x78\xbf\x8f\x27\x57\xcb\x9b\x34\xab\x63\x40\xf1\xbc" 2579 + "\x23\x5a\x26\x6a\xba\x57\xe2\x8f\x2a\xdc\x82\xe0\x3b\x7f\xec\xd3" 2580 + "\xd8\x9d\xd3\x13\x54\x70\x64\xc3\xfd\xbf\xa3\x46\xa7\x53\x42\x7f" 2581 + "\xc1\xbd\x7b\xb3\x13\x47\x2a\x45\x1e\x76\x2c\x0d\x6d\x46\x26\x24" 2582 + "\xa8\xc7\x00\x2b\x10\x7f\x2a\x6c\xfc\x68\x4e\x6e\x85\x53\x00\xaf" 2583 + "\xd5\xfb\x59\x64\xc7\x9b\x24\xd1\x05\xdc\x34\x53\x6d\x27\xa9\x79" 2584 + "\xff\xd7\x5e\x7a\x40\x81\x8e\xc3\xf2\x38\xc9\x8d\x87\xb5\x38\xda" 2585 + "\x43\x64\x1b\x59\x62\x88\xc1\x6e\x85\x84\x33\xcd\x6d\x7b\x62\x1d" 2586 + "\x60\xf9\x98\xf7\xd1\xb1\xd4\xbe\x56\x6e\xa8\x6f\xff\xe7\x8b\x60" 2587 + "\x53\x80\xc7\x7c\xe0\x78\x89\xa9\xab\x42\x8f\x8e\x4d\x92\xac\xa7" 2588 + "\xfd\x47\x11\xc7\xdb\x7c\x77\xfb\xa4\x1d\x70\xaf\x56\x14\x52\xb0", 2589 + 
.expected_a_public = 2590 + "\xa1\x6c\x9e\xda\x45\x4d\xf6\x59\x04\x00\xc1\xc6\x8b\x12\x3b\xcd" 2591 + "\x07\xe4\x3e\xec\xac\x9b\xfc\xf7\x6d\x73\x39\x9e\x52\xf8\xbe\x33" 2592 + "\xe2\xca\xea\x99\x76\xc7\xc9\x94\x5c\xf3\x1b\xea\x6b\x66\x4b\x51" 2593 + "\x90\xf6\x4f\x75\xd5\x85\xf4\x28\xfd\x74\xa5\x57\xb1\x71\x0c\xb6" 2594 + "\xb6\x95\x70\x2d\xfa\x4b\x56\xe0\x56\x10\x21\xe5\x60\xa6\x18\xa4" 2595 + "\x78\x8c\x07\xc0\x2b\x59\x9c\x84\x5b\xe9\xb9\x74\xbf\xbc\x65\x48" 2596 + "\x27\x82\x40\x53\x46\x32\xa2\x92\x91\x9d\xf6\xd1\x07\x0e\x1d\x07" 2597 + "\x1b\x41\x04\xb1\xd4\xce\xae\x6e\x46\xf1\x72\x50\x7f\xff\xa8\xa2" 2598 + "\xbc\x3a\xc1\xbb\x28\xd7\x7d\xcd\x7a\x22\x01\xaf\x57\xb0\xa9\x02" 2599 + "\xd4\x8a\x92\xd5\xe6\x8e\x6f\x11\x39\xfe\x36\x87\x89\x42\x25\x42" 2600 + "\xd9\xbe\x67\x15\xe1\x82\x8a\x5e\x98\xc2\xd5\xde\x9e\x13\x1a\xe7" 2601 + "\xf9\x9f\x8e\x2d\x49\xdc\x4d\x98\x8c\xdd\xfd\x24\x7c\x46\xa9\x69" 2602 + "\x3b\x31\xb3\x12\xce\x54\xf6\x65\x75\x40\xc2\xf1\x04\x92\xe3\x83" 2603 + "\xeb\x02\x3d\x79\xc0\xf9\x7c\x28\xb3\x97\x03\xf7\x61\x1c\xce\x95" 2604 + "\x1a\xa0\xb3\x77\x1b\xc1\x9f\xf8\xf6\x3f\x4d\x0a\xfb\xfa\x64\x1c" 2605 + "\xcb\x37\x5b\xc3\x28\x60\x9f\xd1\xf2\xc4\xee\x77\xaa\x1f\xe9\xa2" 2606 + "\x89\x4c\xc6\xb7\xb3\xe4\xa5\xed\xa7\xe8\xac\x90\xdc\xc3\xfb\x56" 2607 + "\x9c\xda\x2c\x1d\x1a\x9a\x8c\x82\x92\xee\xdc\xa0\xa4\x01\x6e\x7f" 2608 + "\xc7\x0e\xc2\x73\x7d\xa6\xac\x12\x01\xc0\xc0\xc8\x7c\x84\x86\xc7" 2609 + "\xa5\x94\xe5\x33\x84\x71\x6e\x36\xe3\x3b\x81\x30\xe0\xc8\x51\x52" 2610 + "\x2b\x9e\x68\xa2\x6e\x09\x95\x8c\x7f\x78\x82\xbd\x53\x26\xe7\x95" 2611 + "\xe0\x03\xda\xc0\xc3\x6e\xcf\xdc\xb3\x14\xfc\xe9\x5b\x9b\x70\x6c" 2612 + "\x93\x04\xab\x13\xf7\x17\x6d\xee\xad\x32\x48\xe9\xa0\x94\x1b\x14" 2613 + "\x64\x4f\xa1\xb3\x8d\x6a\xca\x28\xfe\x4a\xf4\xf0\xc5\xb7\xf9\x8a" 2614 + "\x8e\xff\xfe\x57\x6f\x20\xdb\x04\xab\x02\x31\x22\x42\xfd\xbd\x77" 2615 + "\xea\xce\xe8\xc7\x5d\xe0\x8e\xd6\x66\xd0\xe4\x04\x2f\x5f\x71\xc7" 2616 + 
"\x61\x2d\xa5\x3f\x2f\x46\xf2\xd8\x5b\x25\x82\xf0\x52\x88\xc0\x59" 2617 + "\xd3\xa3\x90\x17\xc2\x04\x13\xc3\x13\x69\x4f\x17\xb1\xb3\x46\x4f" 2618 + "\xa7\xe6\x8b\x5e\x3e\x95\x0e\xf5\x42\x17\x7f\x4d\x1f\x1b\x7d\x65" 2619 + "\x86\xc5\xc8\xae\xae\xd8\x4f\xe7\x89\x41\x69\xfd\x06\xce\x5d\xed" 2620 + "\x44\x55\xad\x51\x98\x15\x78\x8d\x68\xfc\x93\x72\x9d\x22\xe5\x1d" 2621 + "\x21\xc3\xbe\x3a\x44\x34\xc0\xa3\x1f\xca\xdf\x45\xd0\x5c\xcd\xb7" 2622 + "\x72\xeb\xae\x7a\xad\x3f\x05\xa0\xe3\x6e\x5a\xd8\x52\xa7\xf1\x1e" 2623 + "\xb4\xf2\xcf\xe7\xdf\xa7\xf2\x22\x00\xb2\xc4\x17\x3d\x2c\x15\x04" 2624 + "\x71\x28\x69\x5c\x69\x21\xc8\xf1\x9b\xd8\xc7\xbc\x27\xa3\x85\xe9" 2625 + "\x53\x77\xd3\x65\xc3\x86\xdd\xb3\x76\x13\xfb\xa1\xd4\xee\x9d\xe4" 2626 + "\x51\x3f\x83\x59\xe4\x47\xa8\xa6\x0d\x68\xd5\xf6\xf4\xca\x31\xcd" 2627 + "\x30\x48\x34\x90\x11\x8e\x87\xe9\xea\xc9\xd0\xc3\xba\x28\xf9\xc0" 2628 + "\xc9\x8e\x23\xe5\xc2\xee\xf2\x47\x9c\x41\x1c\x10\x33\x27\x23\x49" 2629 + "\xe5\x0d\x18\xbe\x19\xc1\xba\x6c\xdc\xb7\xa1\xe7\xc5\x0d\x6f\xf0" 2630 + "\x8c\x62\x6e\x0d\x14\xef\xef\xf2\x8e\x01\xd2\x76\xf5\xc1\xe1\x92" 2631 + "\x3c\xb3\x76\xcd\xd8\xdd\x9b\xe0\x8e\xdc\x24\x34\x13\x65\x0f\x11" 2632 + "\xaf\x99\x7a\x2f\xe6\x1f\x7d\x17\x3e\x8a\x68\x9a\x37\xc8\x8d\x3e" 2633 + "\xa3\xfe\xfe\x57\x22\xe6\x0e\x50\xb5\x98\x0b\x71\xd8\x01\xa2\x8d" 2634 + "\x51\x96\x50\xc2\x41\x31\xd8\x23\x98\xfc\xd1\x9d\x7e\x27\xbb\x69" 2635 + "\x78\xe0\x87\xf7\xe4\xdd\x58\x13\x9d\xec\x00\xe4\xb9\x70\xa2\x94" 2636 + "\x5d\x52\x4e\xf2\x5c\xd1\xbc\xfd\xee\x9b\xb9\xe5\xc4\xc0\xa8\x77" 2637 + "\x67\xa4\xd1\x95\x34\xe4\x6d\x5f\x25\x02\x8d\x65\xdd\x11\x63\x55" 2638 + "\x04\x01\x21\x60\xc1\x5c\xef\x77\x33\x01\x1c\xa2\x11\x2b\xdd\x2b" 2639 + "\x74\x99\x23\x38\x05\x1b\x7e\x2e\x01\x52\xfe\x9c\x23\xde\x3e\x1a" 2640 + "\x72\xf4\xff\x7b\x02\xaa\x08\xcf\xe0\x5b\x83\xbe\x85\x5a\xe8\x9d" 2641 + "\x11\x3e\xff\x2f\xc6\x97\x67\x36\x6c\x0f\x81\x9c\x26\x29\xb1\x0f" 2642 + "\xbb\x53\xbd\xf4\xec\x2a\x84\x41\x28\x3b\x86\x40\x95\x69\x55\x5f" 2643 + 
"\x30\xee\xda\x1e\x6c\x4b\x25\xd6\x2f\x2c\x0e\x3c\x1a\x26\xa0\x3e" 2644 + "\xef\x09\xc6\x2b\xe5\xa1\x0c\x03\xa8\xf5\x39\x70\x31\xc4\x32\x79" 2645 + "\xd1\xd9\xc2\xcc\x32\x4a\xf1\x2f\x57\x5a\xcc\xe5\xc3\xc5\xd5\x4e" 2646 + "\x86\x56\xca\x64\xdb\xab\x61\x85\x8f\xf9\x20\x02\x40\x66\x76\x9e" 2647 + "\x5e\xd4\xac\xf0\x47\xa6\x50\x5f\xc2\xaf\x55\x9b\xa3\xc9\x8b\xf8" 2648 + "\x42\xd5\xcf\x1a\x95\x22\xd9\xd1\x0b\x92\x51\xca\xde\x46\x02\x0d" 2649 + "\x8b\xee\xd9\xa0\x04\x74\xf5\x0e\xb0\x3a\x62\xec\x3c\x91\x29\x33" 2650 + "\xa7\x78\x22\x92\xac\x27\xe6\x2d\x6f\x56\x8a\x5d\x72\xc2\xf1\x5c" 2651 + "\x54\x11\x97\x24\x61\xcb\x0c\x52\xd4\x57\x56\x22\x86\xf0\x19\x27" 2652 + "\x76\x30\x04\xf4\x39\x7b\x1a\x5a\x04\x0d\xec\x59\x9a\x31\x4c\x40" 2653 + "\x19\x6d\x3c\x41\x1b\x0c\xca\xeb\x25\x39\x6c\x96\xf8\x55\xd0\xec", 2654 + .expected_ss = 2655 + "\xf9\x55\x4f\x48\x38\x74\xb7\x46\xa3\xc4\x2e\x88\xf0\x34\xab\x1d" 2656 + "\xcd\xa5\x58\xa7\x95\x88\x36\x62\x6f\x8a\xbd\xf2\xfb\x6f\x3e\xb9" 2657 + "\x91\x65\x58\xef\x70\x2f\xd5\xc2\x97\x70\xcb\xce\x8b\x78\x1c\xe0" 2658 + "\xb9\xfa\x77\x34\xd2\x4a\x19\x58\x11\xfd\x93\x84\x40\xc0\x8c\x19" 2659 + "\x8b\x98\x50\x83\xba\xfb\xe2\xad\x8b\x81\x84\x63\x90\x41\x4b\xf8" 2660 + "\xe8\x78\x86\x04\x09\x8d\x84\xd1\x43\xfd\xa3\x58\x21\x2a\x3b\xb1" 2661 + "\xa2\x5b\x48\x74\x3c\xa9\x16\x34\x28\xf0\x8e\xde\xe2\xcf\x8e\x68" 2662 + "\x53\xab\x65\x06\xb7\x86\xb1\x08\x4f\x73\x97\x00\x10\x95\xd1\x84" 2663 + "\x72\xcf\x14\xdb\xff\xa7\x80\xd8\xe5\xf2\x2c\x89\x37\xb0\x81\x2c" 2664 + "\xf5\xd6\x7d\x1b\xb0\xe2\x8e\x87\x32\x3d\x37\x6a\x79\xaa\xe7\x08" 2665 + "\xc9\x67\x55\x5f\x1c\xae\xa6\xf5\xef\x79\x3a\xaf\x3f\x82\x14\xe2" 2666 + "\xf3\x69\x91\xed\xb7\x9e\xc9\xde\xd0\x29\x70\xd9\xeb\x0f\xf5\xc7" 2667 + "\xf6\x7c\xa7\x7f\xec\xed\xe1\xbd\x13\xe1\x43\xe4\x42\x30\xe3\x5f" 2668 + "\xe0\xf3\x15\x55\x2f\x7a\x42\x17\x67\xcb\xc2\x4f\xd0\x85\xfc\x6c" 2669 + "\xec\xe8\xfc\x25\x78\x4b\xe4\x0f\xd4\x3d\x78\x28\xd3\x53\x79\xcb" 2670 + 
"\x2c\x82\x67\x9a\xdc\x32\x55\xd2\xda\xae\xd8\x61\xce\xd6\x59\x0b" 2671 + "\xc5\x44\xeb\x08\x81\x8c\x65\xb2\xb7\xa6\xff\xf7\xbf\x99\xc6\x8a" 2672 + "\xbe\xde\xc2\x17\x56\x05\x6e\xd2\xf1\x1e\xa2\x04\xeb\x02\x74\xaa" 2673 + "\x04\xfc\xf0\x6b\xd4\xfc\xf0\x7a\x5f\xfe\xe2\x74\x7f\xeb\x9b\x6a" 2674 + "\x8a\x09\x96\x5d\xe1\x91\xb6\x9e\x37\xd7\x63\xd7\xb3\x5c\xb5\xa3" 2675 + "\x5f\x62\x00\xdf\xc5\xbf\x85\xba\xa7\xa9\xb6\x1f\x76\x78\x65\x01" 2676 + "\xfe\x1d\x6c\xfe\x15\x9e\xf4\xb1\xbc\x8d\xad\x3c\xec\x69\x27\x57" 2677 + "\xa4\x89\x77\x46\xe1\x49\xc7\x22\xde\x79\xe0\xf7\x3a\xa1\x59\x8b" 2678 + "\x59\x71\xcc\xd6\x18\x24\xc1\x8a\x2f\xe3\xdf\xdd\x6c\xf7\x62\xaa" 2679 + "\x15\xaa\x39\x37\x3b\xaf\x7d\x6e\x88\xeb\x19\xa8\xa0\x26\xd3\xaa" 2680 + "\x2d\xcc\x5f\x56\x99\x86\xa9\xed\x4d\x02\x31\x40\x97\x70\x83\xa7" 2681 + "\x08\x98\x7e\x49\x46\xd9\x75\xb5\x7a\x6a\x40\x69\xa0\x6d\xb2\x18" 2682 + "\xc0\xad\x88\x05\x02\x95\x6f\xf7\x8f\xcb\xa2\xe4\x7b\xab\x4a\x0f" 2683 + "\x9a\x1b\xef\xcc\xd1\x6a\x5d\x1e\x6a\x2a\x8b\x5b\x80\xbc\x5f\x38" 2684 + "\xdd\xaf\xad\x44\x15\xb4\xaf\x26\x1c\x1a\x4d\xa7\x4b\xec\x88\x33" 2685 + "\x24\x42\xb5\x0c\x9c\x56\xd4\xba\xa7\xb9\x65\xd5\x76\xb2\xbc\x16" 2686 + "\x8e\xfa\x0c\x7a\xc0\xa2\x2c\x5a\x39\x56\x7d\xe6\xf8\xa9\xf4\x49" 2687 + "\xd0\x50\xf2\x5e\x4b\x0a\x43\xe4\x9a\xbb\xea\x35\x28\x99\x84\x83" 2688 + "\xec\xc1\xa0\x68\x15\x9a\x2b\x01\x04\x48\x09\x11\x1b\xb6\xa4\xd8" 2689 + "\x03\xad\xb6\x4c\x9e\x1d\x90\xae\x88\x0f\x75\x95\x25\xa0\x27\x13" 2690 + "\xb7\x4f\xe2\x3e\xd5\x59\x1a\x7c\xde\x95\x14\x28\xd1\xde\x84\xe4" 2691 + "\x07\x7c\x5b\x06\xd6\xe6\x9c\x8a\xbe\xd2\xb4\x62\xd1\x67\x8a\x9c" 2692 + "\xac\x4f\xfa\x70\xd6\xc8\xc0\xeb\x5e\xf6\x3e\xdc\x48\x8e\xce\x3f" 2693 + "\x92\x3e\x60\x77\x63\x60\x6b\x76\x04\xa5\xba\xc9\xab\x92\x4e\x0d" 2694 + "\xdc\xca\x82\x44\x5f\x3a\x42\xeb\x01\xe7\xe0\x33\xb3\x32\xaf\x4b" 2695 + "\x81\x35\x2d\xb6\x57\x15\xfe\x52\xc7\x54\x2e\x41\x3b\x22\x6b\x12" 2696 + "\x72\xdb\x5c\x66\xd0\xb6\xb4\xfe\x90\xc0\x20\x34\x95\xf9\xe4\xc7" 2697 + 
"\x7e\x71\x89\x4f\x6f\xfb\x2a\xf3\xdf\x3f\xe3\xcf\x0e\x1a\xd9\xf2" 2698 + "\xc1\x02\x67\x5d\xdc\xf1\x7d\xe8\xcf\x64\x77\x4d\x12\x03\x77\x2c" 2699 + "\xfb\xe1\x59\xf7\x2c\x96\x9c\xaf\x46\x9c\xc7\x67\xcf\xee\x94\x50" 2700 + "\xc7\xa1\x23\xe6\x9f\x4d\x73\x92\xad\xf9\x4a\xce\xdb\x44\xd5\xe3" 2701 + "\x17\x05\x37\xdb\x9c\x6c\xc5\x7e\xb7\xd4\x11\x4a\x8c\x51\x03\xaa" 2702 + "\x73\x4b\x16\xd9\x79\xf5\xf1\x67\x20\x9b\x25\xe5\x41\x52\x59\x06" 2703 + "\x8b\xf2\x23\x2f\x6e\xea\xf3\x24\x0a\x94\xbb\xb8\x7e\xd9\x23\x4a" 2704 + "\x9f\x1f\xe1\x13\xb5\xfe\x85\x2f\x4c\xbe\x6a\x66\x02\x1d\x90\xd2" 2705 + "\x01\x25\x8a\xfd\x78\x3a\x28\xb8\x18\xc1\x38\x16\x21\x6b\xb4\xf9" 2706 + "\x64\x0f\xf1\x73\xc4\x5c\xd1\x41\xf2\xfe\xe7\x26\xad\x79\x12\x75" 2707 + "\x49\x48\xdb\x21\x71\x35\xf7\xb7\x46\x5a\xa1\x81\x25\x47\x31\xea" 2708 + "\x1d\x76\xbb\x32\x5a\x90\xb0\x42\x1a\x47\xe8\x0c\x82\x92\x43\x1c" 2709 + "\x0b\xdd\xe5\x25\xce\xd3\x06\xcc\x59\x5a\xc9\xa0\x01\xac\x29\x12" 2710 + "\x31\x2e\x3d\x1a\xed\x3b\xf3\xa7\xef\x52\xc2\x0d\x18\x1f\x03\x28" 2711 + "\xc9\x2b\x38\x61\xa4\x01\xc9\x3c\x11\x08\x14\xd4\xe5\x31\xe9\x3c" 2712 + "\x1d\xad\xf8\x76\xc4\x84\x9f\xea\x16\x61\x3d\x6d\xa3\x32\x31\xcd" 2713 + "\x1c\xca\xb8\x74\xc2\x45\xf3\x01\x9c\x7a\xaf\xfd\xe7\x1e\x5a\x18" 2714 + "\xb1\x9d\xbb\x7a\x2d\x34\x40\x17\x49\xad\x1f\xeb\x2d\xa2\x26\xb8" 2715 + "\x16\x28\x4b\x72\xdd\xd0\x8d\x85\x4c\xdd\xf8\x57\x48\xd5\x1d\xfb" 2716 + "\xbd\xec\x11\x5d\x1e\x9c\x26\x81\xbf\xf1\x16\x12\x32\xc3\xf3\x07" 2717 + "\x0e\x6e\x7f\x17\xec\xfb\xf4\x5d\xe2\xb1\xca\x97\xca\x46\x20\x2d" 2718 + "\x09\x85\x19\x25\x89\xa8\x9b\x51\x74\xae\xc9\x1b\x4c\xb6\x80\x62", 2719 + .secret_size = 1040, 2720 + .b_public_size = 1024, 2721 + .expected_a_public_size = 1024, 2722 + .expected_ss_size = 1024, 2723 + }, 2724 + { 2725 + .secret = 2726 + #ifdef __LITTLE_ENDIAN 2727 + "\x01\x00" /* type */ 2728 + "\x10\x00" /* len */ 2729 + "\x00\x00\x00\x00" /* key_size */ 2730 + "\x00\x00\x00\x00" /* p_size */ 2731 + "\x00\x00\x00\x00", /* g_size */ 2732 
+ #else 2733 + "\x00\x01" /* type */ 2734 + "\x00\x10" /* len */ 2735 + "\x00\x00\x00\x00" /* key_size */ 2736 + "\x00\x00\x00\x00" /* p_size */ 2737 + "\x00\x00\x00\x00", /* g_size */ 2738 + #endif 2739 + .b_secret = 2740 + #ifdef __LITTLE_ENDIAN 2741 + "\x01\x00" /* type */ 2742 + "\x10\x04" /* len */ 2743 + "\x00\x04\x00\x00" /* key_size */ 2744 + "\x00\x00\x00\x00" /* p_size */ 2745 + "\x00\x00\x00\x00" /* g_size */ 2746 + #else 2747 + "\x00\x01" /* type */ 2748 + "\x04\x10" /* len */ 2749 + "\x00\x00\x04\x00" /* key_size */ 2750 + "\x00\x00\x00\x00" /* p_size */ 2751 + "\x00\x00\x00\x00" /* g_size */ 2752 + #endif 2753 + /* xa */ 2754 + "\x76\x6e\xeb\xf9\xeb\x76\xae\x37\xcb\x19\x49\x8b\xeb\xaf\xb0\x4b" 2755 + "\x6d\xe9\x15\xad\xda\xf2\xef\x58\xe9\xd6\xdd\x4c\xb3\x56\xd0\x3b" 2756 + "\x00\xb0\x65\xed\xae\xe0\x2e\xdf\x8f\x45\x3f\x3c\x5d\x2f\xfa\x96" 2757 + "\x36\x33\xb2\x01\x8b\x0f\xe8\x46\x15\x6d\x60\x5b\xec\x32\xc3\x3b" 2758 + "\x06\xf3\xb4\x1b\x9a\xef\x3c\x03\x0e\xcc\xce\x1d\x24\xa0\xc9\x08" 2759 + "\x65\xf9\x45\xe5\xd2\x43\x08\x88\x58\xd6\x46\xe7\xbb\x25\xac\xed" 2760 + "\x3b\xac\x6f\x5e\xfb\xd6\x19\xa6\x20\x3a\x1d\x0c\xe8\x00\x72\x54" 2761 + "\xd7\xd9\xc9\x26\x49\x18\xc6\xb8\xbc\xdd\xf3\xce\xf3\x7b\x69\x04" 2762 + "\x5c\x6f\x11\xdb\x44\x42\x72\xb6\xb7\x84\x17\x86\x47\x3f\xc5\xa1" 2763 + "\xd8\x86\xef\xe2\x27\x49\x2b\x8f\x3e\x91\x12\xd9\x45\x96\xf7\xe6" 2764 + "\x77\x76\x36\x58\x71\x9a\xb1\xdb\xcf\x24\x9e\x7e\xad\xce\x45\xba" 2765 + "\xb5\xec\x8e\xb9\xd6\x7b\x3d\x76\xa4\x85\xad\xd8\x49\x9b\x80\x9d" 2766 + "\x7f\x9f\x85\x09\x9e\x86\x5b\x6b\xf3\x8d\x39\x5e\x6f\xe4\x30\xc8" 2767 + "\xa5\xf3\xdf\x68\x73\x6b\x2e\x9a\xcb\xac\x0a\x0d\x44\xc1\xaf\xb2" 2768 + "\x11\x1b\x7c\x43\x08\x44\x43\xe2\x4e\xfd\x93\x30\x99\x09\x12\xbb" 2769 + "\xf6\x31\x34\xa5\x3d\x45\x98\xee\xd7\x2a\x1a\x89\xf5\x37\x92\x33" 2770 + "\xa0\xdd\xf5\xfb\x1f\x90\x42\x55\x5a\x0b\x82\xff\xf0\x96\x92\x15" 2771 + "\x65\x5a\x55\x96\xca\x1b\xd5\xe5\xb5\x94\xde\x2e\xa6\x03\x57\x9e" 2772 + 
"\x15\xe4\x32\x2b\x1f\xb2\x22\x21\xe9\xa0\x05\xd3\x65\x6c\x11\x66" 2773 + "\x25\x38\xbb\xa3\x6c\xc2\x0b\x2b\xd0\x7a\x20\x26\x29\x37\x5d\x5f" 2774 + "\xd8\xff\x2a\xcd\x46\x6c\xd6\x6e\xe5\x77\x1a\xe6\x33\xf1\x8e\xc8" 2775 + "\x10\x30\x11\x00\x27\xf9\x7d\x0e\x28\x43\xa7\x67\x38\x7f\x16\xda" 2776 + "\xd0\x01\x8e\xa4\xe8\x6f\xcd\x23\xaf\x77\x52\x34\xad\x7e\xc3\xed" 2777 + "\x2d\x10\x0a\x33\xdc\xcf\x1b\x88\x0f\xcc\x48\x7f\x42\xf0\x9e\x13" 2778 + "\x1f\xf5\xd1\xe9\x90\x87\xbd\xfa\x5f\x1d\x77\x55\xcb\xc3\x05\xaf" 2779 + "\x71\xd0\xe0\xab\x46\x31\xd7\xea\x89\x54\x2d\x39\xaf\xf6\x4f\x74" 2780 + "\xaf\x46\x58\x89\x78\x95\x2e\xe6\x90\xb7\xaa\x00\x73\x9f\xed\xb9" 2781 + "\x00\xd6\xf6\x6d\x26\x59\xcd\x56\xdb\xf7\x3d\x5f\xeb\x6e\x46\x33" 2782 + "\xb1\x23\xed\x9f\x8d\x58\xdc\xb4\x28\x3b\x90\x09\xc4\x61\x02\x1f" 2783 + "\xf8\x62\xf2\x6e\xc1\x94\x71\x66\x93\x11\xdf\xaa\x3e\xd7\xb5\xe5" 2784 + "\xc1\x78\xe9\x14\xcd\x55\x16\x51\xdf\x8d\xd0\x94\x8c\x43\xe9\xb8" 2785 + "\x1d\x42\x7f\x76\xbc\x6f\x87\x42\x88\xde\xd7\x52\x78\x00\x4f\x18" 2786 + "\x02\xe7\x7b\xe2\x8a\xc3\xd1\x43\xa5\xac\xda\xb0\x8d\x19\x96\xd4" 2787 + "\x81\xe0\x75\xe9\xca\x41\x7e\x1f\x93\x0b\x26\x24\xb3\xaa\xdd\x10" 2788 + "\x20\xd3\xf2\x9f\x3f\xdf\x65\xde\x67\x79\xdc\x76\x9f\x3c\x72\x75" 2789 + "\x65\x8a\x30\xcc\xd2\xcc\x06\xb1\xab\x62\x86\x78\x5d\xb8\xce\x72" 2790 + "\xb3\x12\xc7\x9f\x07\xd0\x6b\x98\x82\x9b\x6c\xbb\x15\xe5\xcc\xf4" 2791 + "\xc8\xf4\x60\x81\xdc\xd3\x09\x1b\x5e\xd4\xf3\x55\xcf\x1c\x16\x83" 2792 + "\x61\xb4\x2e\xcc\x08\x67\x58\xfd\x46\x64\xbc\x29\x4b\xdd\xda\xec" 2793 + "\xdc\xc6\xa9\xa5\x73\xfb\xf8\xf3\xaf\x89\xa8\x9e\x25\x14\xfa\xac" 2794 + "\xeb\x1c\x7c\x80\x96\x66\x4d\x41\x67\x9b\x07\x4f\x0a\x97\x17\x1c" 2795 + "\x4d\x61\xc7\x2e\x6f\x36\x98\x29\x50\x39\x6d\xe7\x70\xda\xf0\xc8" 2796 + "\x05\x80\x7b\x32\xff\xfd\x12\xde\x61\x0d\xf9\x4c\x21\xf1\x56\x72" 2797 + "\x3d\x61\x46\xc0\x2d\x07\xd1\x6c\xd3\xbe\x9a\x21\x83\x85\xf7\xed" 2798 + "\x53\x95\x44\x40\x8f\x75\x12\x18\xc2\x9a\xfd\x5e\xce\x66\xa6\x7f" 2799 + 
"\x57\xc0\xd7\x73\x76\xb3\x13\xda\x2e\x58\xc6\x27\x40\xb2\x2d\xef" 2800 + "\x7d\x72\xb4\xa8\x75\x6f\xcc\x5f\x42\x3e\x2c\x90\x36\x59\xa0\x34" 2801 + "\xaa\xce\xbc\x04\x4c\xe6\x56\xc2\xcd\xa6\x1c\x59\x04\x56\x53\xcf" 2802 + "\x6d\xd7\xf0\xb1\x4f\x91\xfa\x84\xcf\x4b\x8d\x50\x4c\xf8\x2a\x31" 2803 + "\x5f\xe3\xba\x79\xb4\xcc\x59\x64\xe3\x7a\xfa\xf6\x06\x9d\x04\xbb" 2804 + "\xce\x61\xbf\x9e\x59\x0a\x09\x51\x6a\xbb\x0b\x80\xe0\x91\xc1\x51" 2805 + "\x04\x58\x67\x67\x4b\x42\x4f\x95\x68\x75\xe2\x1f\x9c\x14\x70\xfd" 2806 + "\x3a\x8a\xce\x8b\x04\xa1\x89\xe7\xb4\xbf\x70\xfe\xf3\x0c\x48\x04" 2807 + "\x3a\xd2\x85\x68\x03\xe7\xfa\xec\x5b\x55\xb7\x95\xfd\x5b\x19\x35" 2808 + "\xad\xcb\x4a\x63\x03\x44\x64\x2a\x48\x59\x9a\x26\x43\x96\x8c\xe6" 2809 + "\xbd\xb7\x90\xd4\x5f\x8d\x08\x28\xa8\xc5\x89\x70\xb9\x6e\xd3\x3b" 2810 + "\x76\x0e\x37\x98\x15\x27\xca\xc9\xb0\xe0\xfd\xf3\xc6\xdf\x69\xce" 2811 + "\xe1\x5f\x6a\x3e\x5c\x86\xe2\x58\x41\x11\xf0\x7e\x56\xec\xe4\xc9" 2812 + "\x0d\x87\x91\xfb\xb9\xc8\x0d\x34\xab\xb0\xc6\xf2\xa6\x00\x7b\x18" 2813 + "\x92\xf4\x43\x7f\x01\x85\x2e\xef\x8c\x72\x50\x10\xdb\xf1\x37\x62" 2814 + "\x16\x85\x71\x01\xa8\x2b\xf0\x13\xd3\x7c\x0b\xaf\xf1\xf3\xd1\xee" 2815 + "\x90\x41\x5f\x7d\x5b\xa9\x83\x4b\xfa\x80\x59\x50\x73\xe1\xc4\xf9" 2816 + "\x5e\x4b\xde\xd9\xf5\x22\x68\x5e\x65\xd9\x37\xe4\x1a\x08\x0e\xb1" 2817 + "\x28\x2f\x40\x9e\x37\xa8\x12\x56\xb7\xb8\x64\x94\x68\x94\xff\x9f", 2818 + .b_public = 2819 + "\xa1\x6c\x9e\xda\x45\x4d\xf6\x59\x04\x00\xc1\xc6\x8b\x12\x3b\xcd" 2820 + "\x07\xe4\x3e\xec\xac\x9b\xfc\xf7\x6d\x73\x39\x9e\x52\xf8\xbe\x33" 2821 + "\xe2\xca\xea\x99\x76\xc7\xc9\x94\x5c\xf3\x1b\xea\x6b\x66\x4b\x51" 2822 + "\x90\xf6\x4f\x75\xd5\x85\xf4\x28\xfd\x74\xa5\x57\xb1\x71\x0c\xb6" 2823 + "\xb6\x95\x70\x2d\xfa\x4b\x56\xe0\x56\x10\x21\xe5\x60\xa6\x18\xa4" 2824 + "\x78\x8c\x07\xc0\x2b\x59\x9c\x84\x5b\xe9\xb9\x74\xbf\xbc\x65\x48" 2825 + "\x27\x82\x40\x53\x46\x32\xa2\x92\x91\x9d\xf6\xd1\x07\x0e\x1d\x07" 2826 + 
"\x1b\x41\x04\xb1\xd4\xce\xae\x6e\x46\xf1\x72\x50\x7f\xff\xa8\xa2" 2827 + "\xbc\x3a\xc1\xbb\x28\xd7\x7d\xcd\x7a\x22\x01\xaf\x57\xb0\xa9\x02" 2828 + "\xd4\x8a\x92\xd5\xe6\x8e\x6f\x11\x39\xfe\x36\x87\x89\x42\x25\x42" 2829 + "\xd9\xbe\x67\x15\xe1\x82\x8a\x5e\x98\xc2\xd5\xde\x9e\x13\x1a\xe7" 2830 + "\xf9\x9f\x8e\x2d\x49\xdc\x4d\x98\x8c\xdd\xfd\x24\x7c\x46\xa9\x69" 2831 + "\x3b\x31\xb3\x12\xce\x54\xf6\x65\x75\x40\xc2\xf1\x04\x92\xe3\x83" 2832 + "\xeb\x02\x3d\x79\xc0\xf9\x7c\x28\xb3\x97\x03\xf7\x61\x1c\xce\x95" 2833 + "\x1a\xa0\xb3\x77\x1b\xc1\x9f\xf8\xf6\x3f\x4d\x0a\xfb\xfa\x64\x1c" 2834 + "\xcb\x37\x5b\xc3\x28\x60\x9f\xd1\xf2\xc4\xee\x77\xaa\x1f\xe9\xa2" 2835 + "\x89\x4c\xc6\xb7\xb3\xe4\xa5\xed\xa7\xe8\xac\x90\xdc\xc3\xfb\x56" 2836 + "\x9c\xda\x2c\x1d\x1a\x9a\x8c\x82\x92\xee\xdc\xa0\xa4\x01\x6e\x7f" 2837 + "\xc7\x0e\xc2\x73\x7d\xa6\xac\x12\x01\xc0\xc0\xc8\x7c\x84\x86\xc7" 2838 + "\xa5\x94\xe5\x33\x84\x71\x6e\x36\xe3\x3b\x81\x30\xe0\xc8\x51\x52" 2839 + "\x2b\x9e\x68\xa2\x6e\x09\x95\x8c\x7f\x78\x82\xbd\x53\x26\xe7\x95" 2840 + "\xe0\x03\xda\xc0\xc3\x6e\xcf\xdc\xb3\x14\xfc\xe9\x5b\x9b\x70\x6c" 2841 + "\x93\x04\xab\x13\xf7\x17\x6d\xee\xad\x32\x48\xe9\xa0\x94\x1b\x14" 2842 + "\x64\x4f\xa1\xb3\x8d\x6a\xca\x28\xfe\x4a\xf4\xf0\xc5\xb7\xf9\x8a" 2843 + "\x8e\xff\xfe\x57\x6f\x20\xdb\x04\xab\x02\x31\x22\x42\xfd\xbd\x77" 2844 + "\xea\xce\xe8\xc7\x5d\xe0\x8e\xd6\x66\xd0\xe4\x04\x2f\x5f\x71\xc7" 2845 + "\x61\x2d\xa5\x3f\x2f\x46\xf2\xd8\x5b\x25\x82\xf0\x52\x88\xc0\x59" 2846 + "\xd3\xa3\x90\x17\xc2\x04\x13\xc3\x13\x69\x4f\x17\xb1\xb3\x46\x4f" 2847 + "\xa7\xe6\x8b\x5e\x3e\x95\x0e\xf5\x42\x17\x7f\x4d\x1f\x1b\x7d\x65" 2848 + "\x86\xc5\xc8\xae\xae\xd8\x4f\xe7\x89\x41\x69\xfd\x06\xce\x5d\xed" 2849 + "\x44\x55\xad\x51\x98\x15\x78\x8d\x68\xfc\x93\x72\x9d\x22\xe5\x1d" 2850 + "\x21\xc3\xbe\x3a\x44\x34\xc0\xa3\x1f\xca\xdf\x45\xd0\x5c\xcd\xb7" 2851 + "\x72\xeb\xae\x7a\xad\x3f\x05\xa0\xe3\x6e\x5a\xd8\x52\xa7\xf1\x1e" 2852 + "\xb4\xf2\xcf\xe7\xdf\xa7\xf2\x22\x00\xb2\xc4\x17\x3d\x2c\x15\x04" 2853 + 
"\x71\x28\x69\x5c\x69\x21\xc8\xf1\x9b\xd8\xc7\xbc\x27\xa3\x85\xe9" 2854 + "\x53\x77\xd3\x65\xc3\x86\xdd\xb3\x76\x13\xfb\xa1\xd4\xee\x9d\xe4" 2855 + "\x51\x3f\x83\x59\xe4\x47\xa8\xa6\x0d\x68\xd5\xf6\xf4\xca\x31\xcd" 2856 + "\x30\x48\x34\x90\x11\x8e\x87\xe9\xea\xc9\xd0\xc3\xba\x28\xf9\xc0" 2857 + "\xc9\x8e\x23\xe5\xc2\xee\xf2\x47\x9c\x41\x1c\x10\x33\x27\x23\x49" 2858 + "\xe5\x0d\x18\xbe\x19\xc1\xba\x6c\xdc\xb7\xa1\xe7\xc5\x0d\x6f\xf0" 2859 + "\x8c\x62\x6e\x0d\x14\xef\xef\xf2\x8e\x01\xd2\x76\xf5\xc1\xe1\x92" 2860 + "\x3c\xb3\x76\xcd\xd8\xdd\x9b\xe0\x8e\xdc\x24\x34\x13\x65\x0f\x11" 2861 + "\xaf\x99\x7a\x2f\xe6\x1f\x7d\x17\x3e\x8a\x68\x9a\x37\xc8\x8d\x3e" 2862 + "\xa3\xfe\xfe\x57\x22\xe6\x0e\x50\xb5\x98\x0b\x71\xd8\x01\xa2\x8d" 2863 + "\x51\x96\x50\xc2\x41\x31\xd8\x23\x98\xfc\xd1\x9d\x7e\x27\xbb\x69" 2864 + "\x78\xe0\x87\xf7\xe4\xdd\x58\x13\x9d\xec\x00\xe4\xb9\x70\xa2\x94" 2865 + "\x5d\x52\x4e\xf2\x5c\xd1\xbc\xfd\xee\x9b\xb9\xe5\xc4\xc0\xa8\x77" 2866 + "\x67\xa4\xd1\x95\x34\xe4\x6d\x5f\x25\x02\x8d\x65\xdd\x11\x63\x55" 2867 + "\x04\x01\x21\x60\xc1\x5c\xef\x77\x33\x01\x1c\xa2\x11\x2b\xdd\x2b" 2868 + "\x74\x99\x23\x38\x05\x1b\x7e\x2e\x01\x52\xfe\x9c\x23\xde\x3e\x1a" 2869 + "\x72\xf4\xff\x7b\x02\xaa\x08\xcf\xe0\x5b\x83\xbe\x85\x5a\xe8\x9d" 2870 + "\x11\x3e\xff\x2f\xc6\x97\x67\x36\x6c\x0f\x81\x9c\x26\x29\xb1\x0f" 2871 + "\xbb\x53\xbd\xf4\xec\x2a\x84\x41\x28\x3b\x86\x40\x95\x69\x55\x5f" 2872 + "\x30\xee\xda\x1e\x6c\x4b\x25\xd6\x2f\x2c\x0e\x3c\x1a\x26\xa0\x3e" 2873 + "\xef\x09\xc6\x2b\xe5\xa1\x0c\x03\xa8\xf5\x39\x70\x31\xc4\x32\x79" 2874 + "\xd1\xd9\xc2\xcc\x32\x4a\xf1\x2f\x57\x5a\xcc\xe5\xc3\xc5\xd5\x4e" 2875 + "\x86\x56\xca\x64\xdb\xab\x61\x85\x8f\xf9\x20\x02\x40\x66\x76\x9e" 2876 + "\x5e\xd4\xac\xf0\x47\xa6\x50\x5f\xc2\xaf\x55\x9b\xa3\xc9\x8b\xf8" 2877 + "\x42\xd5\xcf\x1a\x95\x22\xd9\xd1\x0b\x92\x51\xca\xde\x46\x02\x0d" 2878 + "\x8b\xee\xd9\xa0\x04\x74\xf5\x0e\xb0\x3a\x62\xec\x3c\x91\x29\x33" 2879 + "\xa7\x78\x22\x92\xac\x27\xe6\x2d\x6f\x56\x8a\x5d\x72\xc2\xf1\x5c" 2880 + 
+"\x54\x11\x97\x24\x61\xcb\x0c\x52\xd4\x57\x56\x22\x86\xf0\x19\x27"
+"\x76\x30\x04\xf4\x39\x7b\x1a\x5a\x04\x0d\xec\x59\x9a\x31\x4c\x40"
+"\x19\x6d\x3c\x41\x1b\x0c\xca\xeb\x25\x39\x6c\x96\xf8\x55\xd0\xec",
+		.secret_size = 16,
+		.b_secret_size = 1040,
+		.b_public_size = 1024,
+		.expected_a_public_size = 1024,
+		.expected_ss_size = 1024,
+		.genkey = true,
 	},
 };
 
 static const struct kpp_testvec curve25519_tv_template[] = {
···
 		.psize = 28,
 		.digest = "\xef\xfc\xdf\x6a\xe5\xeb\x2f\xa2\xd2\x74"
 			  "\x16\xd5\xf1\x84\xdf\x9c\x25\x9a\x7c\x79",
+		.fips_skip = 1,
 	}, {
 		.key = "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa",
 		.ksize = 20,
···
 			  "\x45\x69\x0f\x3a\x7e\x9e\x6d\x0f"
 			  "\x8b\xbe\xa2\xa3\x9e\x61\x48\x00"
 			  "\x8f\xd0\x5e\x44",
+		.fips_skip = 1,
 	}, {
 		.key = "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 		       "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
···
 			  "\x6a\x04\x24\x26\x08\x95\x75\xc7"
 			  "\x5a\x00\x3f\x08\x9d\x27\x39\x83"
 			  "\x9d\xec\x58\xb9\x64\xec\x38\x43",
+		.fips_skip = 1,
 	}, {
 		.key = "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 		       "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
···
 			  "\xe4\x2e\xc3\x73\x63\x22\x44\x5e"
 			  "\x8e\x22\x40\xca\x5e\x69\xe2\xc7"
 			  "\x8b\x32\x39\xec\xfa\xb2\x16\x49",
+		.fips_skip = 1,
 	}, {
 		.key = "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 		       "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
···
 			  "\x6d\x03\x4f\x65\xf8\xf0\xe6\xfd"
 			  "\xca\xea\xb1\xa3\x4d\x4a\x6b\x4b"
 			  "\x63\x6e\x07\x0a\x38\xbc\xe7\x37",
+		.fips_skip = 1,
 	}, {
 		.key = "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 		       "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
···
 			  "\x1b\x79\x86\x34\xad\x38\x68\x11"
 			  "\xc2\xcf\xc8\x5b\xfa\xf5\xd5\x2b"
 			  "\xba\xce\x5e\x66",
+		.fips_skip = 1,
 	}, {
 		.key = "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 		       "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
···
 			  "\x35\x96\xbb\xb0\xda\x73\xb8\x87"
 			  "\xc9\x17\x1f\x93\x09\x5b\x29\x4a"
 			  "\xe8\x57\xfb\xe2\x64\x5e\x1b\xa5",
+		.fips_skip = 1,
 	}, {
 		.key = "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 		       "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
···
 			  "\x3c\xa1\x35\x08\xa9\x32\x43\xce"
 			  "\x48\xc0\x45\xdc\x00\x7f\x26\xa2"
 			  "\x1b\x3f\x5e\x0e\x9d\xf4\xc2\x0a",
+		.fips_skip = 1,
 	}, {
 		.key = "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 		       "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
···
 			  "\xee\x7a\x0c\x31\xd0\x22\xa9\x5e"
 			  "\x1f\xc9\x2b\xa9\xd7\x7d\xf8\x83"
 			  "\x96\x02\x75\xbe\xb4\xe6\x20\x24",
+		.fips_skip = 1,
 	}, {
 		.key = "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 		       "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
crypto/xts.c | +1

···
 MODULE_DESCRIPTION("XTS block cipher mode");
 MODULE_ALIAS_CRYPTO("xts");
 MODULE_IMPORT_NS(CRYPTO_INTERNAL);
+MODULE_SOFTDEP("pre: ecb");
drivers/char/hw_random/Kconfig | +1 -1

···
 
 config HW_RANDOM_CAVIUM
 	tristate "Cavium ThunderX Random Number Generator support"
-	depends on HW_RANDOM && PCI && ARM64
+	depends on HW_RANDOM && PCI && ARCH_THUNDER
 	default HW_RANDOM
 	help
 	  This driver provides kernel-side support for the Random Number
drivers/char/hw_random/atmel-rng.c | +93 -59

···
 #include <linux/err.h>
 #include <linux/clk.h>
 #include <linux/io.h>
+#include <linux/iopoll.h>
 #include <linux/hw_random.h>
 #include <linux/of_device.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
 
 #define TRNG_CR 0x00
 #define TRNG_MR 0x04
 #define TRNG_ISR 0x1c
+#define TRNG_ISR_DATRDY BIT(0)
 #define TRNG_ODATA 0x50
 
 #define TRNG_KEY 0x524e4700 /* RNG */
···
 	struct clk *clk;
 	void __iomem *base;
 	struct hwrng rng;
+	bool has_half_rate;
 };
+
+static bool atmel_trng_wait_ready(struct atmel_trng *trng, bool wait)
+{
+	int ready;
+
+	ready = readl(trng->base + TRNG_ISR) & TRNG_ISR_DATRDY;
+	if (!ready && wait)
+		readl_poll_timeout(trng->base + TRNG_ISR, ready,
+				   ready & TRNG_ISR_DATRDY, 1000, 20000);
+
+	return !!ready;
+}
 
 static int atmel_trng_read(struct hwrng *rng, void *buf, size_t max,
 			   bool wait)
 {
 	struct atmel_trng *trng = container_of(rng, struct atmel_trng, rng);
 	u32 *data = buf;
+	int ret;
 
-	/* data ready? */
-	if (readl(trng->base + TRNG_ISR) & 1) {
-		*data = readl(trng->base + TRNG_ODATA);
-		/*
-		  ensure data ready is only set again AFTER the next data
-		  word is ready in case it got set between checking ISR
-		  and reading ODATA, so we don't risk re-reading the
-		  same word
-		*/
-		readl(trng->base + TRNG_ISR);
-		return 4;
-	} else
-		return 0;
+	ret = pm_runtime_get_sync((struct device *)trng->rng.priv);
+	if (ret < 0) {
+		pm_runtime_put_sync((struct device *)trng->rng.priv);
+		return ret;
+	}
+
+	ret = atmel_trng_wait_ready(trng, wait);
+	if (!ret)
+		goto out;
+
+	*data = readl(trng->base + TRNG_ODATA);
+	/*
+	 * ensure data ready is only set again AFTER the next data word is ready
+	 * in case it got set between checking ISR and reading ODATA, so we
+	 * don't risk re-reading the same word
+	 */
+	readl(trng->base + TRNG_ISR);
+	ret = 4;
+
+out:
+	pm_runtime_mark_last_busy((struct device *)trng->rng.priv);
+	pm_runtime_put_sync_autosuspend((struct device *)trng->rng.priv);
+	return ret;
 }
 
-static void atmel_trng_enable(struct atmel_trng *trng)
+static int atmel_trng_init(struct atmel_trng *trng)
 {
+	unsigned long rate;
+	int ret;
+
+	ret = clk_prepare_enable(trng->clk);
+	if (ret)
+		return ret;
+
+	if (trng->has_half_rate) {
+		rate = clk_get_rate(trng->clk);
+
+		/* if peripheral clk is above 100MHz, set HALFR */
+		if (rate > 100000000)
+			writel(TRNG_HALFR, trng->base + TRNG_MR);
+	}
+
 	writel(TRNG_KEY | 1, trng->base + TRNG_CR);
+
+	return 0;
 }
 
-static void atmel_trng_disable(struct atmel_trng *trng)
+static void atmel_trng_cleanup(struct atmel_trng *trng)
 {
 	writel(TRNG_KEY, trng->base + TRNG_CR);
+	clk_disable_unprepare(trng->clk);
 }
 
 static int atmel_trng_probe(struct platform_device *pdev)
···
 	if (!data)
 		return -ENODEV;
 
-	if (data->has_half_rate) {
-		unsigned long rate = clk_get_rate(trng->clk);
-
-		/* if peripheral clk is above 100MHz, set HALFR */
-		if (rate > 100000000)
-			writel(TRNG_HALFR, trng->base + TRNG_MR);
-	}
-
-	ret = clk_prepare_enable(trng->clk);
-	if (ret)
-		return ret;
-
-	atmel_trng_enable(trng);
+	trng->has_half_rate = data->has_half_rate;
 	trng->rng.name = pdev->name;
 	trng->rng.read = atmel_trng_read;
-
-	ret = devm_hwrng_register(&pdev->dev, &trng->rng);
-	if (ret)
-		goto err_register;
-
+	trng->rng.priv = (unsigned long)&pdev->dev;
 	platform_set_drvdata(pdev, trng);
 
-	return 0;
+#ifndef CONFIG_PM
+	ret = atmel_trng_init(trng);
+	if (ret)
+		return ret;
+#endif
 
-err_register:
-	clk_disable_unprepare(trng->clk);
+	pm_runtime_set_autosuspend_delay(&pdev->dev, 100);
+	pm_runtime_use_autosuspend(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
+
+	ret = devm_hwrng_register(&pdev->dev, &trng->rng);
+	if (ret) {
+		pm_runtime_disable(&pdev->dev);
+		pm_runtime_set_suspended(&pdev->dev);
+#ifndef CONFIG_PM
+		atmel_trng_cleanup(trng);
+#endif
+	}
+
 	return ret;
 }
 
···
 {
 	struct atmel_trng *trng = platform_get_drvdata(pdev);
 
-
-	atmel_trng_disable(trng);
-	clk_disable_unprepare(trng->clk);
+	atmel_trng_cleanup(trng);
+	pm_runtime_disable(&pdev->dev);
+	pm_runtime_set_suspended(&pdev->dev);
 
 	return 0;
 }
 
-#ifdef CONFIG_PM
-static int atmel_trng_suspend(struct device *dev)
+static int __maybe_unused atmel_trng_runtime_suspend(struct device *dev)
 {
 	struct atmel_trng *trng = dev_get_drvdata(dev);
 
-	atmel_trng_disable(trng);
-	clk_disable_unprepare(trng->clk);
+	atmel_trng_cleanup(trng);
 
 	return 0;
 }
 
-static int atmel_trng_resume(struct device *dev)
+static int __maybe_unused atmel_trng_runtime_resume(struct device *dev)
 {
 	struct atmel_trng *trng = dev_get_drvdata(dev);
-	int ret;
 
-	ret = clk_prepare_enable(trng->clk);
-	if (ret)
-		return ret;
-
-	atmel_trng_enable(trng);
-
-	return 0;
+	return atmel_trng_init(trng);
 }
 
-static const struct dev_pm_ops atmel_trng_pm_ops = {
-	.suspend = atmel_trng_suspend,
-	.resume = atmel_trng_resume,
+static const struct dev_pm_ops __maybe_unused atmel_trng_pm_ops = {
+	SET_RUNTIME_PM_OPS(atmel_trng_runtime_suspend,
+			   atmel_trng_runtime_resume, NULL)
+	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
+				pm_runtime_force_resume)
 };
-#endif /* CONFIG_PM */
 
 static const struct atmel_trng_data at91sam9g45_config = {
 	.has_half_rate = false,
···
 	.remove = atmel_trng_remove,
 	.driver = {
 		.name = "atmel-trng",
-#ifdef CONFIG_PM
-		.pm = &atmel_trng_pm_ops,
-#endif /* CONFIG_PM */
+		.pm = pm_ptr(&atmel_trng_pm_ops),
 		.of_match_table = atmel_trng_dt_ids,
 	},
 };
+1 -1
drivers/char/hw_random/cavium-rng-vf.c
··· 179 179 pdev = pci_get_device(PCI_VENDOR_ID_CAVIUM, 180 180 PCI_DEVID_CAVIUM_RNG_PF, NULL); 181 181 if (!pdev) { 182 - dev_err(&pdev->dev, "Cannot find RNG PF device\n"); 182 + pr_err("Cannot find RNG PF device\n"); 183 183 return -EIO; 184 184 } 185 185
+121 -40
drivers/char/hw_random/core.c
··· 32 32 /* the current rng has been explicitly chosen by user via sysfs */ 33 33 static int cur_rng_set_by_user; 34 34 static struct task_struct *hwrng_fill; 35 - /* list of registered rngs, sorted decending by quality */ 35 + /* list of registered rngs */ 36 36 static LIST_HEAD(rng_list); 37 37 /* Protects rng_list and current_rng */ 38 38 static DEFINE_MUTEX(rng_mutex); ··· 45 45 46 46 module_param(current_quality, ushort, 0644); 47 47 MODULE_PARM_DESC(current_quality, 48 - "current hwrng entropy estimation per 1024 bits of input"); 48 + "current hwrng entropy estimation per 1024 bits of input -- obsolete, use rng_quality instead"); 49 49 module_param(default_quality, ushort, 0644); 50 50 MODULE_PARM_DESC(default_quality, 51 51 "default entropy content of hwrng per 1024 bits of input"); 52 52 53 53 static void drop_current_rng(void); 54 54 static int hwrng_init(struct hwrng *rng); 55 - static void start_khwrngd(void); 55 + static void hwrng_manage_rngd(struct hwrng *rng); 56 56 57 57 static inline int rng_get_data(struct hwrng *rng, u8 *buffer, size_t size, 58 58 int wait); ··· 65 65 static void add_early_randomness(struct hwrng *rng) 66 66 { 67 67 int bytes_read; 68 - size_t size = min_t(size_t, 16, rng_buffer_size()); 69 68 70 69 mutex_lock(&reading_mutex); 71 - bytes_read = rng_get_data(rng, rng_buffer, size, 0); 70 + bytes_read = rng_get_data(rng, rng_fillbuf, 32, 0); 72 71 mutex_unlock(&reading_mutex); 73 72 if (bytes_read > 0) 74 - add_device_randomness(rng_buffer, bytes_read); 73 + add_device_randomness(rng_fillbuf, bytes_read); 75 74 } 76 75 77 76 static inline void cleanup_rng(struct kref *kref) ··· 161 162 reinit_completion(&rng->cleanup_done); 162 163 163 164 skip_init: 164 - current_quality = rng->quality ? 
: default_quality; 165 - if (current_quality > 1024) 166 - current_quality = 1024; 165 + if (!rng->quality) 166 + rng->quality = default_quality; 167 + if (rng->quality > 1024) 168 + rng->quality = 1024; 169 + current_quality = rng->quality; /* obsolete */ 167 170 168 - if (current_quality == 0 && hwrng_fill) 169 - kthread_stop(hwrng_fill); 170 - if (current_quality > 0 && !hwrng_fill) 171 - start_khwrngd(); 171 + hwrng_manage_rngd(rng); 172 172 173 173 return 0; 174 174 } ··· 297 299 298 300 static int enable_best_rng(void) 299 301 { 302 + struct hwrng *rng, *new_rng = NULL; 300 303 int ret = -ENODEV; 301 304 302 305 BUG_ON(!mutex_is_locked(&rng_mutex)); 303 306 304 - /* rng_list is sorted by quality, use the best (=first) one */ 305 - if (!list_empty(&rng_list)) { 306 - struct hwrng *new_rng; 307 - 308 - new_rng = list_entry(rng_list.next, struct hwrng, list); 309 - ret = ((new_rng == current_rng) ? 0 : set_current_rng(new_rng)); 310 - if (!ret) 311 - cur_rng_set_by_user = 0; 312 - } else { 307 + /* no rng to use? */ 308 + if (list_empty(&rng_list)) { 313 309 drop_current_rng(); 314 310 cur_rng_set_by_user = 0; 315 - ret = 0; 311 + return 0; 316 312 } 313 + 314 + /* use the rng which offers the best quality */ 315 + list_for_each_entry(rng, &rng_list, list) { 316 + if (!new_rng || rng->quality > new_rng->quality) 317 + new_rng = rng; 318 + } 319 + 320 + ret = ((new_rng == current_rng) ? 
0 : set_current_rng(new_rng)); 321 + if (!ret) 322 + cur_rng_set_by_user = 0; 317 323 318 324 return ret; 319 325 } ··· 339 337 } else { 340 338 list_for_each_entry(rng, &rng_list, list) { 341 339 if (sysfs_streq(rng->name, buf)) { 342 - cur_rng_set_by_user = 1; 343 340 err = set_current_rng(rng); 341 + if (!err) 342 + cur_rng_set_by_user = 1; 344 343 break; 345 344 } 346 345 } ··· 403 400 return sysfs_emit(buf, "%d\n", cur_rng_set_by_user); 404 401 } 405 402 403 + static ssize_t rng_quality_show(struct device *dev, 404 + struct device_attribute *attr, 405 + char *buf) 406 + { 407 + ssize_t ret; 408 + struct hwrng *rng; 409 + 410 + rng = get_current_rng(); 411 + if (IS_ERR(rng)) 412 + return PTR_ERR(rng); 413 + 414 + if (!rng) /* no need to put_rng */ 415 + return -ENODEV; 416 + 417 + ret = sysfs_emit(buf, "%hu\n", rng->quality); 418 + put_rng(rng); 419 + 420 + return ret; 421 + } 422 + 423 + static ssize_t rng_quality_store(struct device *dev, 424 + struct device_attribute *attr, 425 + const char *buf, size_t len) 426 + { 427 + u16 quality; 428 + int ret = -EINVAL; 429 + 430 + if (len < 2) 431 + return -EINVAL; 432 + 433 + ret = mutex_lock_interruptible(&rng_mutex); 434 + if (ret) 435 + return -ERESTARTSYS; 436 + 437 + ret = kstrtou16(buf, 0, &quality); 438 + if (ret || quality > 1024) { 439 + ret = -EINVAL; 440 + goto out; 441 + } 442 + 443 + if (!current_rng) { 444 + ret = -ENODEV; 445 + goto out; 446 + } 447 + 448 + current_rng->quality = quality; 449 + current_quality = quality; /* obsolete */ 450 + 451 + /* the best available RNG may have changed */ 452 + ret = enable_best_rng(); 453 + 454 + /* start/stop rngd if necessary */ 455 + if (current_rng) 456 + hwrng_manage_rngd(current_rng); 457 + 458 + out: 459 + mutex_unlock(&rng_mutex); 460 + return ret ? 
ret : len; 461 + } 462 + 406 463 static DEVICE_ATTR_RW(rng_current); 407 464 static DEVICE_ATTR_RO(rng_available); 408 465 static DEVICE_ATTR_RO(rng_selected); 466 + static DEVICE_ATTR_RW(rng_quality); 409 467 410 468 static struct attribute *rng_dev_attrs[] = { 411 469 &dev_attr_rng_current.attr, 412 470 &dev_attr_rng_available.attr, 413 471 &dev_attr_rng_selected.attr, 472 + &dev_attr_rng_quality.attr, 414 473 NULL 415 474 }; 416 475 ··· 490 425 491 426 static int hwrng_fillfn(void *unused) 492 427 { 428 + size_t entropy, entropy_credit = 0; /* in 1/1024 of a bit */ 493 429 long rc; 494 430 495 431 while (!kthread_should_stop()) { 432 + unsigned short quality; 496 433 struct hwrng *rng; 497 434 498 435 rng = get_current_rng(); ··· 503 436 mutex_lock(&reading_mutex); 504 437 rc = rng_get_data(rng, rng_fillbuf, 505 438 rng_buffer_size(), 1); 439 + if (current_quality != rng->quality) 440 + rng->quality = current_quality; /* obsolete */ 441 + quality = rng->quality; 506 442 mutex_unlock(&reading_mutex); 507 443 put_rng(rng); 444 + 445 + if (!quality) 446 + break; 447 + 508 448 if (rc <= 0) { 509 449 pr_warn("hwrng: no data available\n"); 510 450 msleep_interruptible(10000); 511 451 continue; 512 452 } 453 + 454 + /* If we cannot credit at least one bit of entropy, 455 + * keep track of the remainder for the next iteration 456 + */ 457 + entropy = rc * quality * 8 + entropy_credit; 458 + if ((entropy >> 10) == 0) 459 + entropy_credit = entropy; 460 + 513 461 /* Outside lock, sure, but y'know: randomness. 
*/ 514 462 add_hwgenerator_randomness((void *)rng_fillbuf, rc, 515 - rc * current_quality * 8 >> 10); 463 + entropy >> 10); 516 464 } 517 465 hwrng_fill = NULL; 518 466 return 0; 519 467 } 520 468 521 - static void start_khwrngd(void) 469 + static void hwrng_manage_rngd(struct hwrng *rng) 522 470 { 523 - hwrng_fill = kthread_run(hwrng_fillfn, NULL, "hwrng"); 524 - if (IS_ERR(hwrng_fill)) { 525 - pr_err("hwrng_fill thread creation failed\n"); 526 - hwrng_fill = NULL; 471 + if (WARN_ON(!mutex_is_locked(&rng_mutex))) 472 + return; 473 + 474 + if (rng->quality == 0 && hwrng_fill) 475 + kthread_stop(hwrng_fill); 476 + if (rng->quality > 0 && !hwrng_fill) { 477 + hwrng_fill = kthread_run(hwrng_fillfn, NULL, "hwrng"); 478 + if (IS_ERR(hwrng_fill)) { 479 + pr_err("hwrng_fill thread creation failed\n"); 480 + hwrng_fill = NULL; 481 + } 527 482 } 528 483 } 529 484 ··· 553 464 { 554 465 int err = -EINVAL; 555 466 struct hwrng *tmp; 556 - struct list_head *rng_list_ptr; 557 467 bool is_new_current = false; 558 468 559 469 if (!rng->name || (!rng->data_read && !rng->read)) ··· 566 478 if (strcmp(tmp->name, rng->name) == 0) 567 479 goto out_unlock; 568 480 } 481 + list_add_tail(&rng->list, &rng_list); 569 482 570 483 init_completion(&rng->cleanup_done); 571 484 complete(&rng->cleanup_done); 572 - 573 - /* rng_list is sorted by decreasing quality */ 574 - list_for_each(rng_list_ptr, &rng_list) { 575 - tmp = list_entry(rng_list_ptr, struct hwrng, list); 576 - if (tmp->quality < rng->quality) 577 - break; 578 - } 579 - list_add_tail(&rng->list, rng_list_ptr); 580 485 581 486 if (!current_rng || 582 487 (!cur_rng_set_by_user && rng->quality > current_rng->quality)) { ··· 720 639 unregister_miscdev(); 721 640 } 722 641 723 - module_init(hwrng_modinit); 642 + fs_initcall(hwrng_modinit); /* depends on misc_register() */ 724 643 module_exit(hwrng_modexit); 725 644 726 645 MODULE_DESCRIPTION("H/W Random Number Generator (RNG) driver");
+2 -2
drivers/char/hw_random/nomadik-rng.c
··· 65 65 out_release: 66 66 amba_release_regions(dev); 67 67 out_clk: 68 - clk_disable(rng_clk); 68 + clk_disable_unprepare(rng_clk); 69 69 return ret; 70 70 } 71 71 72 72 static void nmk_rng_remove(struct amba_device *dev) 73 73 { 74 74 amba_release_regions(dev); 75 - clk_disable(rng_clk); 75 + clk_disable_unprepare(rng_clk); 76 76 } 77 77 78 78 static const struct amba_id nmk_rng_ids[] = {
+10
drivers/crypto/Kconfig
··· 808 808 accelerator. Select this if you want to use the ZynqMP module 809 809 for AES algorithms. 810 810 811 + config CRYPTO_DEV_ZYNQMP_SHA3 812 + tristate "Support for Xilinx ZynqMP SHA3 hardware accelerator" 813 + depends on ZYNQMP_FIRMWARE || COMPILE_TEST 814 + select CRYPTO_SHA3 815 + help 816 + Xilinx ZynqMP has SHA3 engine used for secure hash calculation. 817 + This driver interfaces with SHA3 hardware engine. 818 + Select this if you want to use the ZynqMP module 819 + for SHA3 hash computation. 820 + 811 821 source "drivers/crypto/chelsio/Kconfig" 812 822 813 823 source "drivers/crypto/virtio/Kconfig"
+1 -1
drivers/crypto/Makefile
··· 47 47 obj-$(CONFIG_CRYPTO_DEV_BCM_SPU) += bcm/ 48 48 obj-$(CONFIG_CRYPTO_DEV_SAFEXCEL) += inside-secure/ 49 49 obj-$(CONFIG_CRYPTO_DEV_ARTPEC6) += axis/ 50 - obj-$(CONFIG_CRYPTO_DEV_ZYNQMP_AES) += xilinx/ 50 + obj-y += xilinx/ 51 51 obj-y += hisilicon/ 52 52 obj-$(CONFIG_CRYPTO_DEV_AMLOGIC_GXL) += amlogic/ 53 53 obj-y += keembay/
+3
drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
··· 11 11 * You could find a link for the datasheet in Documentation/arm/sunxi.rst 12 12 */ 13 13 14 + #include <linux/bottom_half.h> 14 15 #include <linux/crypto.h> 15 16 #include <linux/dma-mapping.h> 16 17 #include <linux/io.h> ··· 284 283 285 284 flow = rctx->flow; 286 285 err = sun8i_ce_run_task(ce, flow, crypto_tfm_alg_name(breq->base.tfm)); 286 + local_bh_disable(); 287 287 crypto_finalize_skcipher_request(engine, breq, err); 288 + local_bh_enable(); 288 289 return 0; 289 290 } 290 291
+3
drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
··· 9 9 * 10 10 * You could find the datasheet in Documentation/arm/sunxi.rst 11 11 */ 12 + #include <linux/bottom_half.h> 12 13 #include <linux/dma-mapping.h> 13 14 #include <linux/pm_runtime.h> 14 15 #include <linux/scatterlist.h> ··· 415 414 theend: 416 415 kfree(buf); 417 416 kfree(result); 417 + local_bh_disable(); 418 418 crypto_finalize_hash_request(engine, breq, err); 419 + local_bh_enable(); 419 420 return 0; 420 421 }
+3
drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
··· 11 11 * You could find a link for the datasheet in Documentation/arm/sunxi.rst 12 12 */ 13 13 14 + #include <linux/bottom_half.h> 14 15 #include <linux/crypto.h> 15 16 #include <linux/dma-mapping.h> 16 17 #include <linux/io.h> ··· 275 274 struct skcipher_request *breq = container_of(areq, struct skcipher_request, base); 276 275 277 276 err = sun8i_ss_cipher(breq); 277 + local_bh_disable(); 278 278 crypto_finalize_skcipher_request(engine, breq, err); 279 + local_bh_enable(); 279 280 280 281 return 0; 281 282 }
+2
drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
··· 30 30 static const struct ss_variant ss_a80_variant = { 31 31 .alg_cipher = { SS_ALG_AES, SS_ALG_DES, SS_ALG_3DES, 32 32 }, 33 + .alg_hash = { SS_ID_NOTSUPP, SS_ID_NOTSUPP, SS_ID_NOTSUPP, SS_ID_NOTSUPP, 34 + }, 33 35 .op_mode = { SS_OP_ECB, SS_OP_CBC, 34 36 }, 35 37 .ss_clks = {
+3
drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
··· 9 9 * 10 10 * You could find the datasheet in Documentation/arm/sunxi.rst 11 11 */ 12 + #include <linux/bottom_half.h> 12 13 #include <linux/dma-mapping.h> 13 14 #include <linux/pm_runtime.h> 14 15 #include <linux/scatterlist.h> ··· 443 442 theend: 444 443 kfree(pad); 445 444 kfree(result); 445 + local_bh_disable(); 446 446 crypto_finalize_hash_request(engine, breq, err); 447 + local_bh_enable(); 447 448 return 0; 448 449 }
+2
drivers/crypto/amlogic/amlogic-gxl-cipher.c
··· 265 265 struct skcipher_request *breq = container_of(areq, struct skcipher_request, base); 266 266 267 267 err = meson_cipher(breq); 268 + local_bh_disable(); 268 269 crypto_finalize_skcipher_request(engine, breq, err); 270 + local_bh_enable(); 269 271 270 272 return 0; 271 273 }
+1
drivers/crypto/atmel-aes.c
··· 2509 2509 2510 2510 /* keep only major version number */ 2511 2511 switch (dd->hw_version & 0xff0) { 2512 + case 0x700: 2512 2513 case 0x500: 2513 2514 dd->caps.has_dualbuff = 1; 2514 2515 dd->caps.has_cfb64 = 1;
+1
drivers/crypto/atmel-sha.c
··· 2508 2508 2509 2509 /* keep only major version number */ 2510 2510 switch (dd->hw_version & 0xff0) { 2511 + case 0x700: 2511 2512 case 0x510: 2512 2513 dd->caps.has_dma = 1; 2513 2514 dd->caps.has_dualbuff = 1;
+1
drivers/crypto/atmel-tdes.c
··· 1130 1130 1131 1131 /* keep only major version number */ 1132 1132 switch (dd->hw_version & 0xf00) { 1133 + case 0x800: 1133 1134 case 0x700: 1134 1135 dd->caps.has_dma = 1; 1135 1136 dd->caps.has_cfb_3keys = 1;
+6 -2
drivers/crypto/cavium/nitrox/nitrox_mbx.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/bitmap.h> 2 3 #include <linux/workqueue.h> 3 4 4 5 #include "nitrox_csr.h" ··· 121 120 122 121 void nitrox_pf2vf_mbox_handler(struct nitrox_device *ndev) 123 122 { 123 + DECLARE_BITMAP(csr, BITS_PER_TYPE(u64)); 124 124 struct nitrox_vfdev *vfdev; 125 125 struct pf2vf_work *pfwork; 126 126 u64 value, reg_addr; ··· 131 129 /* loop for VF(0..63) */ 132 130 reg_addr = NPS_PKT_MBOX_INT_LO; 133 131 value = nitrox_read_csr(ndev, reg_addr); 134 - for_each_set_bit(i, (const unsigned long *)&value, BITS_PER_LONG) { 132 + bitmap_from_u64(csr, value); 133 + for_each_set_bit(i, csr, BITS_PER_TYPE(csr)) { 135 134 /* get the vfno from ring */ 136 135 vfno = RING_TO_VFNO(i, ndev->iov.max_vf_queues); 137 136 vfdev = ndev->iov.vfdev + vfno; ··· 154 151 /* loop for VF(64..127) */ 155 152 reg_addr = NPS_PKT_MBOX_INT_HI; 156 153 value = nitrox_read_csr(ndev, reg_addr); 157 - for_each_set_bit(i, (const unsigned long *)&value, BITS_PER_LONG) { 154 + bitmap_from_u64(csr, value); 155 + for_each_set_bit(i, csr, BITS_PER_TYPE(csr)) { 158 156 /* get the vfno from ring */ 159 157 vfno = RING_TO_VFNO(i + 64, ndev->iov.max_vf_queues); 160 158 vfdev = ndev->iov.vfdev + vfno;
+1 -1
drivers/crypto/cavium/nitrox/nitrox_req.h
··· 440 440 /** 441 441 * struct ctx_hdr - Book keeping data about the crypto context 442 442 * @pool: Pool used to allocate crypto context 443 - * @dma: Base DMA address of the cypto context 443 + * @dma: Base DMA address of the crypto context 444 444 * @ctx_dma: Actual usable crypto context for NITROX 445 445 */ 446 446 struct ctx_hdr {
+35 -48
drivers/crypto/cavium/zip/zip_main.c
··· 55 55 { 0, } 56 56 }; 57 57 58 + static void zip_debugfs_init(void); 59 + static void zip_debugfs_exit(void); 60 + static int zip_register_compression_device(void); 61 + static void zip_unregister_compression_device(void); 62 + 58 63 void zip_reg_write(u64 val, u64 __iomem *addr) 59 64 { 60 65 writeq(val, addr); ··· 240 235 return 0; 241 236 } 242 237 238 + static void zip_reset(struct zip_device *zip) 239 + { 240 + union zip_cmd_ctl cmd_ctl; 241 + 242 + cmd_ctl.u_reg64 = 0x0ull; 243 + cmd_ctl.s.reset = 1; /* Forces ZIP cores to do reset */ 244 + zip_reg_write(cmd_ctl.u_reg64, (zip->reg_base + ZIP_CMD_CTL)); 245 + } 246 + 243 247 static int zip_probe(struct pci_dev *pdev, const struct pci_device_id *ent) 244 248 { 245 249 struct device *dev = &pdev->dev; ··· 296 282 if (err) 297 283 goto err_release_regions; 298 284 285 + /* Register with the Kernel Crypto Interface */ 286 + err = zip_register_compression_device(); 287 + if (err < 0) { 288 + zip_err("ZIP: Kernel Crypto Registration failed\n"); 289 + goto err_register; 290 + } 291 + 292 + /* comp-decomp statistics are handled with debugfs interface */ 293 + zip_debugfs_init(); 294 + 299 295 return 0; 296 + 297 + err_register: 298 + zip_reset(zip); 300 299 301 300 err_release_regions: 302 301 if (zip->reg_base) ··· 332 305 static void zip_remove(struct pci_dev *pdev) 333 306 { 334 307 struct zip_device *zip = pci_get_drvdata(pdev); 335 - union zip_cmd_ctl cmd_ctl; 336 308 int q = 0; 337 309 338 310 if (!zip) 339 311 return; 340 312 313 + zip_debugfs_exit(); 314 + 315 + zip_unregister_compression_device(); 316 + 341 317 if (zip->reg_base) { 342 - cmd_ctl.u_reg64 = 0x0ull; 343 - cmd_ctl.s.reset = 1; /* Forces ZIP cores to do reset */ 344 - zip_reg_write(cmd_ctl.u_reg64, (zip->reg_base + ZIP_CMD_CTL)); 318 + zip_reset(zip); 345 319 iounmap(zip->reg_base); 346 320 } 347 321 ··· 613 585 /* Root directory for thunderx_zip debugfs entry */ 614 586 static struct dentry *zip_debugfs_root; 615 587 616 - static void __init 
zip_debugfs_init(void) 588 + static void zip_debugfs_init(void) 617 589 { 618 590 if (!debugfs_initialized()) 619 591 return; ··· 632 604 633 605 } 634 606 635 - static void __exit zip_debugfs_exit(void) 607 + static void zip_debugfs_exit(void) 636 608 { 637 609 debugfs_remove_recursive(zip_debugfs_root); 638 610 } ··· 643 615 #endif 644 616 /* debugfs - end */ 645 617 646 - static int __init zip_init_module(void) 647 - { 648 - int ret; 649 - 650 - zip_msg("%s\n", DRV_NAME); 651 - 652 - ret = pci_register_driver(&zip_driver); 653 - if (ret < 0) { 654 - zip_err("ZIP: pci_register_driver() failed\n"); 655 - return ret; 656 - } 657 - 658 - /* Register with the Kernel Crypto Interface */ 659 - ret = zip_register_compression_device(); 660 - if (ret < 0) { 661 - zip_err("ZIP: Kernel Crypto Registration failed\n"); 662 - goto err_pci_unregister; 663 - } 664 - 665 - /* comp-decomp statistics are handled with debugfs interface */ 666 - zip_debugfs_init(); 667 - 668 - return ret; 669 - 670 - err_pci_unregister: 671 - pci_unregister_driver(&zip_driver); 672 - return ret; 673 - } 674 - 675 - static void __exit zip_cleanup_module(void) 676 - { 677 - zip_debugfs_exit(); 678 - 679 - /* Unregister from the kernel crypto interface */ 680 - zip_unregister_compression_device(); 681 - 682 - /* Unregister this driver for pci zip devices */ 683 - pci_unregister_driver(&zip_driver); 684 - } 685 - 686 - module_init(zip_init_module); 687 - module_exit(zip_cleanup_module); 618 + module_pci_driver(zip_driver); 688 619 689 620 MODULE_AUTHOR("Cavium Inc"); 690 621 MODULE_DESCRIPTION("Cavium Inc ThunderX ZIP Driver");
+1 -4
drivers/crypto/ccp/ccp-crypto-aes.c
··· 69 69 struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req); 70 70 struct scatterlist *iv_sg = NULL; 71 71 unsigned int iv_len = 0; 72 - int ret; 73 72 74 73 if (!ctx->u.aes.key_len) 75 74 return -EINVAL; ··· 103 104 rctx->cmd.u.aes.src_len = req->cryptlen; 104 105 rctx->cmd.u.aes.dst = req->dst; 105 106 106 - ret = ccp_crypto_enqueue_request(&req->base, &rctx->cmd); 107 - 108 - return ret; 107 + return ccp_crypto_enqueue_request(&req->base, &rctx->cmd); 109 108 } 110 109 111 110 static int ccp_aes_encrypt(struct skcipher_request *req)
+16
drivers/crypto/ccp/ccp-dmaengine.c
··· 632 632 return 0; 633 633 } 634 634 635 + static void ccp_dma_release(struct ccp_device *ccp) 636 + { 637 + struct ccp_dma_chan *chan; 638 + struct dma_chan *dma_chan; 639 + unsigned int i; 640 + 641 + for (i = 0; i < ccp->cmd_q_count; i++) { 642 + chan = ccp->ccp_dma_chan + i; 643 + dma_chan = &chan->dma_chan; 644 + tasklet_kill(&chan->cleanup_tasklet); 645 + list_del_rcu(&dma_chan->device_node); 646 + } 647 + } 648 + 635 649 int ccp_dmaengine_register(struct ccp_device *ccp) 636 650 { 637 651 struct ccp_dma_chan *chan; ··· 750 736 return 0; 751 737 752 738 err_reg: 739 + ccp_dma_release(ccp); 753 740 kmem_cache_destroy(ccp->dma_desc_cache); 754 741 755 742 err_cache: ··· 767 752 return; 768 753 769 754 dma_async_device_unregister(dma_dev); 755 + ccp_dma_release(ccp); 770 756 771 757 kmem_cache_destroy(ccp->dma_desc_cache); 772 758 kmem_cache_destroy(ccp->dma_cmd_cache);
+1 -1
drivers/crypto/ccp/sev-dev.c
··· 413 413 { 414 414 struct psp_device *psp = psp_master; 415 415 struct sev_device *sev; 416 - int rc, psp_ret; 416 + int rc, psp_ret = -1; 417 417 int (*init_function)(int *error); 418 418 419 419 if (!psp || !psp->sev_data)
+7
drivers/crypto/ccree/cc_buffer_mgr.c
··· 258 258 { 259 259 int ret = 0; 260 260 261 + if (!nbytes) { 262 + *mapped_nents = 0; 263 + *lbytes = 0; 264 + *nents = 0; 265 + return 0; 266 + } 267 + 261 268 *nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes); 262 269 if (*nents > max_sg_nents) { 263 270 *nents = 0;
+1 -1
drivers/crypto/ccree/cc_cipher.c
··· 257 257 &ctx_p->user.key_dma_addr); 258 258 259 259 /* Free key buffer in context */ 260 - kfree_sensitive(ctx_p->user.key); 261 260 dev_dbg(dev, "Free key buffer in context. key=@%p\n", ctx_p->user.key); 261 + kfree_sensitive(ctx_p->user.key); 262 262 } 263 263 264 264 struct tdes_keys {
+4 -2
drivers/crypto/gemini/sl3516-ce-cipher.c
··· 23 23 struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq); 24 24 struct sl3516_ce_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm); 25 25 struct sl3516_ce_dev *ce = op->ce; 26 - struct scatterlist *in_sg = areq->src; 27 - struct scatterlist *out_sg = areq->dst; 26 + struct scatterlist *in_sg; 27 + struct scatterlist *out_sg; 28 28 struct scatterlist *sg; 29 29 30 30 if (areq->cryptlen == 0 || areq->cryptlen % 16) { ··· 264 264 struct skcipher_request *breq = container_of(areq, struct skcipher_request, base); 265 265 266 266 err = sl3516_ce_cipher(breq); 267 + local_bh_disable(); 267 268 crypto_finalize_skcipher_request(engine, breq, err); 269 + local_bh_enable(); 268 270 269 271 return 0; 270 272 }
+2 -2
drivers/crypto/hisilicon/qm.c
··· 3840 3840 3841 3841 for (i = 0; i < qm->qp_num; i++) { 3842 3842 qp = &qm->qp_array[i]; 3843 - if (qp->is_resetting) 3843 + if (qp->is_in_kernel && qp->is_resetting) 3844 3844 memset(qp->qdma.va, 0, qp->qdma.size); 3845 3845 } 3846 3846 ··· 4295 4295 static int qm_vf_read_qos(struct hisi_qm *qm) 4296 4296 { 4297 4297 int cnt = 0; 4298 - int ret; 4298 + int ret = -EINVAL; 4299 4299 4300 4300 /* reset mailbox qos val */ 4301 4301 qm->mb_qos = 0;
+32 -11
drivers/crypto/hisilicon/sec2/sec_crypto.c
··· 42 42 #define SEC_DE_OFFSET_V3 9 43 43 #define SEC_SCENE_OFFSET_V3 5 44 44 #define SEC_CKEY_OFFSET_V3 13 45 + #define SEC_CTR_CNT_OFFSET 25 46 + #define SEC_CTR_CNT_ROLLOVER 2 45 47 #define SEC_SRC_SGL_OFFSET_V3 11 46 48 #define SEC_DST_SGL_OFFSET_V3 14 47 49 #define SEC_CALG_OFFSET_V3 4 ··· 65 63 #define SEC_AUTH_CIPHER 0x1 66 64 #define SEC_MAX_MAC_LEN 64 67 65 #define SEC_MAX_AAD_LEN 65535 66 + #define SEC_MAX_CCM_AAD_LEN 65279 68 67 #define SEC_TOTAL_MAC_SZ (SEC_MAX_MAC_LEN * QM_Q_DEPTH) 69 68 70 69 #define SEC_PBUF_SZ 512 ··· 240 237 241 238 if (unlikely(type != type_supported)) { 242 239 atomic64_inc(&dfx->err_bd_cnt); 243 - pr_err("err bd type [%d]\n", type); 240 + pr_err("err bd type [%u]\n", type); 244 241 return; 245 242 } 246 243 ··· 644 641 struct sec_cipher_ctx *c_ctx = &ctx->c_ctx; 645 642 646 643 c_ctx->fallback = false; 644 + 645 + /* Currently, only XTS mode need fallback tfm when using 192bit key */ 647 646 if (likely(strncmp(alg, "xts", SEC_XTS_NAME_SZ))) 648 647 return 0; 649 648 650 649 c_ctx->fbtfm = crypto_alloc_sync_skcipher(alg, 0, 651 650 CRYPTO_ALG_NEED_FALLBACK); 652 651 if (IS_ERR(c_ctx->fbtfm)) { 653 - pr_err("failed to alloc fallback tfm!\n"); 652 + pr_err("failed to alloc xts mode fallback tfm!\n"); 654 653 return PTR_ERR(c_ctx->fbtfm); 655 654 } 656 655 ··· 813 808 } 814 809 815 810 memcpy(c_ctx->c_key, key, keylen); 816 - if (c_ctx->fallback) { 811 + if (c_ctx->fallback && c_ctx->fbtfm) { 817 812 ret = crypto_sync_skcipher_setkey(c_ctx->fbtfm, key, keylen); 818 813 if (ret) { 819 814 dev_err(dev, "failed to set fallback skcipher key!\n"); ··· 1305 1300 cipher = SEC_CIPHER_DEC; 1306 1301 sec_sqe3->c_icv_key |= cpu_to_le16(cipher); 1307 1302 1303 + /* Set the CTR counter mode is 128bit rollover */ 1304 + sec_sqe3->auth_mac_key = cpu_to_le32((u32)SEC_CTR_CNT_ROLLOVER << 1305 + SEC_CTR_CNT_OFFSET); 1306 + 1308 1307 if (req->use_pbuf) { 1309 1308 bd_param |= SEC_PBUF << SEC_SRC_SGL_OFFSET_V3; 1310 1309 bd_param |= SEC_PBUF << 
SEC_DST_SGL_OFFSET_V3; ··· 1623 1614 sqe3->auth_mac_key |= cpu_to_le32((u32)SEC_AUTH_TYPE1); 1624 1615 sqe3->huk_iv_seq &= SEC_CIPHER_AUTH_V3; 1625 1616 } else { 1626 - sqe3->auth_mac_key |= cpu_to_le32((u32)SEC_AUTH_TYPE1); 1617 + sqe3->auth_mac_key |= cpu_to_le32((u32)SEC_AUTH_TYPE2); 1627 1618 sqe3->huk_iv_seq |= SEC_AUTH_CIPHER_V3; 1628 1619 } 1629 1620 sqe3->a_len_key = cpu_to_le32(c_req->c_len + aq->assoclen); ··· 2041 2032 struct skcipher_request *sreq, bool encrypt) 2042 2033 { 2043 2034 struct sec_cipher_ctx *c_ctx = &ctx->c_ctx; 2035 + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, c_ctx->fbtfm); 2044 2036 struct device *dev = ctx->dev; 2045 2037 int ret; 2046 2038 2047 - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, c_ctx->fbtfm); 2048 - 2049 2039 if (!c_ctx->fbtfm) { 2050 - dev_err(dev, "failed to check fallback tfm\n"); 2040 + dev_err_ratelimited(dev, "the soft tfm isn't supported in the current system.\n"); 2051 2041 return -EINVAL; 2052 2042 } 2053 2043 ··· 2227 2219 } 2228 2220 2229 2221 if (c_mode == SEC_CMODE_CCM) { 2222 + if (unlikely(req->assoclen > SEC_MAX_CCM_AAD_LEN)) { 2223 + dev_err_ratelimited(dev, "CCM input aad parameter is too long!\n"); 2224 + return -EINVAL; 2225 + } 2230 2226 ret = aead_iv_demension_check(req); 2231 2227 if (ret) { 2232 2228 dev_err(dev, "aead input iv param error!\n"); ··· 2268 2256 if (ctx->sec->qm.ver == QM_HW_V2) { 2269 2257 if (unlikely(!req->cryptlen || (!sreq->c_req.encrypt && 2270 2258 req->cryptlen <= authsize))) { 2271 - dev_err(dev, "Kunpeng920 not support 0 length!\n"); 2272 2259 ctx->a_ctx.fallback = true; 2273 2260 return -EINVAL; 2274 2261 } ··· 2295 2284 struct aead_request *aead_req, 2296 2285 bool encrypt) 2297 2286 { 2298 - struct aead_request *subreq = aead_request_ctx(aead_req); 2299 2287 struct sec_auth_ctx *a_ctx = &ctx->a_ctx; 2300 2288 struct device *dev = ctx->dev; 2289 + struct aead_request *subreq; 2290 + int ret; 2301 2291 2302 2292 /* Kunpeng920 aead mode not support input 0 size */ 2303 2293 if 
(!a_ctx->fallback_aead_tfm) { 2304 2294 dev_err(dev, "aead fallback tfm is NULL!\n"); 2305 2295 return -EINVAL; 2306 2296 } 2297 + 2298 + subreq = aead_request_alloc(a_ctx->fallback_aead_tfm, GFP_KERNEL); 2299 + if (!subreq) 2300 + return -ENOMEM; 2307 2301 2308 2302 aead_request_set_tfm(subreq, a_ctx->fallback_aead_tfm); 2309 2303 aead_request_set_callback(subreq, aead_req->base.flags, ··· 2317 2301 aead_req->cryptlen, aead_req->iv); 2318 2302 aead_request_set_ad(subreq, aead_req->assoclen); 2319 2303 2320 - return encrypt ? crypto_aead_encrypt(subreq) : 2321 - crypto_aead_decrypt(subreq); 2304 + if (encrypt) 2305 + ret = crypto_aead_encrypt(subreq); 2306 + else 2307 + ret = crypto_aead_decrypt(subreq); 2308 + aead_request_free(subreq); 2309 + 2310 + return ret; 2322 2311 } 2323 2312 2324 2313 static int sec_aead_crypto(struct aead_request *a_req, bool encrypt)
+4 -2
drivers/crypto/hisilicon/sec2/sec_crypto.h
··· 354 354 * akey_len: 9~14 bits 355 355 * a_alg: 15~20 bits 356 356 * key_sel: 21~24 bits 357 - * updata_key: 25 bits 358 - * reserved: 26~31 bits 357 + * ctr_count_mode/sm4_xts: 25~26 bits 358 + * sva_prefetch: 27 bits 359 + * key_wrap_num: 28~30 bits 360 + * update_key: 31 bits 359 361 */ 360 362 __le32 auth_mac_key; 361 363 __le32 salt;
+45 -14
drivers/crypto/hisilicon/sec2/sec_main.c
··· 90 90 SEC_USER1_WB_DATA_SSV) 91 91 #define SEC_USER1_SMMU_SVA (SEC_USER1_SMMU_NORMAL | SEC_USER1_SVA_SET) 92 92 #define SEC_USER1_SMMU_MASK (~SEC_USER1_SVA_SET) 93 + #define SEC_INTERFACE_USER_CTRL0_REG_V3 0x302220 94 + #define SEC_INTERFACE_USER_CTRL1_REG_V3 0x302224 95 + #define SEC_USER1_SMMU_NORMAL_V3 (BIT(23) | BIT(17) | BIT(11) | BIT(5)) 96 + #define SEC_USER1_SMMU_MASK_V3 0xFF79E79E 93 97 #define SEC_CORE_INT_STATUS_M_ECC BIT(2) 94 98 95 99 #define SEC_PREFETCH_CFG 0x301130 ··· 339 335 writel_relaxed(reg, qm->io_base + SEC_CONTROL_REG); 340 336 } 341 337 338 + static void sec_engine_sva_config(struct hisi_qm *qm) 339 + { 340 + u32 reg; 341 + 342 + if (qm->ver > QM_HW_V2) { 343 + reg = readl_relaxed(qm->io_base + 344 + SEC_INTERFACE_USER_CTRL0_REG_V3); 345 + reg |= SEC_USER0_SMMU_NORMAL; 346 + writel_relaxed(reg, qm->io_base + 347 + SEC_INTERFACE_USER_CTRL0_REG_V3); 348 + 349 + reg = readl_relaxed(qm->io_base + 350 + SEC_INTERFACE_USER_CTRL1_REG_V3); 351 + reg &= SEC_USER1_SMMU_MASK_V3; 352 + reg |= SEC_USER1_SMMU_NORMAL_V3; 353 + writel_relaxed(reg, qm->io_base + 354 + SEC_INTERFACE_USER_CTRL1_REG_V3); 355 + } else { 356 + reg = readl_relaxed(qm->io_base + 357 + SEC_INTERFACE_USER_CTRL0_REG); 358 + reg |= SEC_USER0_SMMU_NORMAL; 359 + writel_relaxed(reg, qm->io_base + 360 + SEC_INTERFACE_USER_CTRL0_REG); 361 + reg = readl_relaxed(qm->io_base + 362 + SEC_INTERFACE_USER_CTRL1_REG); 363 + reg &= SEC_USER1_SMMU_MASK; 364 + if (qm->use_sva) 365 + reg |= SEC_USER1_SMMU_SVA; 366 + else 367 + reg |= SEC_USER1_SMMU_NORMAL; 368 + writel_relaxed(reg, qm->io_base + 369 + SEC_INTERFACE_USER_CTRL1_REG); 370 + } 371 + } 372 + 342 373 static void sec_open_sva_prefetch(struct hisi_qm *qm) 343 374 { 344 375 u32 val; ··· 465 426 reg |= (0x1 << SEC_TRNG_EN_SHIFT); 466 427 writel_relaxed(reg, qm->io_base + SEC_CONTROL_REG); 467 428 468 - reg = readl_relaxed(qm->io_base + SEC_INTERFACE_USER_CTRL0_REG); 469 - reg |= SEC_USER0_SMMU_NORMAL; 470 - writel_relaxed(reg, qm->io_base + 
SEC_INTERFACE_USER_CTRL0_REG); 471 - 472 - reg = readl_relaxed(qm->io_base + SEC_INTERFACE_USER_CTRL1_REG); 473 - reg &= SEC_USER1_SMMU_MASK; 474 - if (qm->use_sva && qm->ver == QM_HW_V2) 475 - reg |= SEC_USER1_SMMU_SVA; 476 - else 477 - reg |= SEC_USER1_SMMU_NORMAL; 478 - writel_relaxed(reg, qm->io_base + SEC_INTERFACE_USER_CTRL1_REG); 429 + sec_engine_sva_config(qm); 479 430 480 431 writel(SEC_SINGLE_PORT_MAX_TRANS, 481 432 qm->io_base + AM_CFG_SINGLE_PORT_MAX_TRANS); 482 433 483 434 writel(SEC_SAA_ENABLE, qm->io_base + SEC_SAA_EN_REG); 484 435 485 - /* Enable sm4 extra mode, as ctr/ecb */ 486 - writel_relaxed(SEC_BD_ERR_CHK_EN0, 487 - qm->io_base + SEC_BD_ERR_CHK_EN_REG0); 436 + /* HW V2 enable sm4 extra mode, as ctr/ecb */ 437 + if (qm->ver < QM_HW_V3) 438 + writel_relaxed(SEC_BD_ERR_CHK_EN0, 439 + qm->io_base + SEC_BD_ERR_CHK_EN_REG0); 440 + 488 441 /* Enable sm4 xts mode multiple iv */ 489 442 writel_relaxed(SEC_BD_ERR_CHK_EN1, 490 443 qm->io_base + SEC_BD_ERR_CHK_EN_REG1);
+1
drivers/crypto/marvell/Kconfig
··· 47 47 select CRYPTO_SKCIPHER 48 48 select CRYPTO_HASH 49 49 select CRYPTO_AEAD 50 + select NET_DEVLINK 50 51 help 51 52 This driver allows you to utilize the Marvell Cryptographic 52 53 Accelerator Unit(CPT) found in OcteonTX2 series of processors.
+1 -4
drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
··· 1639 1639 { 1640 1640 struct cpt_device_desc *ldesc = (struct cpt_device_desc *) lptr; 1641 1641 struct cpt_device_desc *rdesc = (struct cpt_device_desc *) rptr; 1642 - struct cpt_device_desc desc; 1643 1642 1644 - desc = *ldesc; 1645 - *ldesc = *rdesc; 1646 - *rdesc = desc; 1643 + swap(*ldesc, *rdesc); 1647 1644 } 1648 1645 1649 1646 int otx_cpt_crypto_init(struct pci_dev *pdev, struct module *mod,
-1
drivers/crypto/marvell/octeontx/otx_cptvf_main.c
··· 204 204 205 205 /* per queue initialization */ 206 206 for (i = 0; i < cptvf->num_queues; i++) { 207 - c_size = 0; 208 207 rem_q_size = q_size; 209 208 first = NULL; 210 209 last = NULL;
+1
drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
··· 157 157 int otx2_cpt_attach_rscrs_msg(struct otx2_cptlfs_info *lfs); 158 158 int otx2_cpt_detach_rsrcs_msg(struct otx2_cptlfs_info *lfs); 159 159 int otx2_cpt_msix_offset_msg(struct otx2_cptlfs_info *lfs); 160 + int otx2_cpt_sync_mbox_msg(struct otx2_mbox *mbox); 160 161 161 162 #endif /* __OTX2_CPT_COMMON_H */
+14
drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c
··· 202 202 } 203 203 return ret; 204 204 } 205 + 206 + int otx2_cpt_sync_mbox_msg(struct otx2_mbox *mbox) 207 + { 208 + int err; 209 + 210 + if (!otx2_mbox_nonempty(mbox, 0)) 211 + return 0; 212 + otx2_mbox_msg_send(mbox, 0); 213 + err = otx2_mbox_wait_for_rsp(mbox, 0); 214 + if (err) 215 + return err; 216 + 217 + return otx2_mbox_check_rsp_msgs(mbox, 0); 218 + }
+15 -4
drivers/crypto/marvell/octeontx2/otx2_cptlf.h
··· 26 26 */ 27 27 #define OTX2_CPT_INST_QLEN_MSGS ((OTX2_CPT_SIZE_DIV40 - 1) * 40) 28 28 29 + /* 30 + * LDWB is getting incorrectly used when IQB_LDWB = 1 and CPT instruction 31 + * queue has less than 320 free entries. So, increase HW instruction queue 32 + * size by 320 and give 320 entries less for SW/NIX RX as a workaround. 33 + */ 34 + #define OTX2_CPT_INST_QLEN_EXTRA_BYTES (320 * OTX2_CPT_INST_SIZE) 35 + #define OTX2_CPT_EXTRA_SIZE_DIV40 (320/40) 36 + 29 37 /* CPT instruction queue length in bytes */ 30 - #define OTX2_CPT_INST_QLEN_BYTES (OTX2_CPT_SIZE_DIV40 * 40 * \ 31 - OTX2_CPT_INST_SIZE) 38 + #define OTX2_CPT_INST_QLEN_BYTES \ 39 + ((OTX2_CPT_SIZE_DIV40 * 40 * OTX2_CPT_INST_SIZE) + \ 40 + OTX2_CPT_INST_QLEN_EXTRA_BYTES) 32 41 33 42 /* CPT instruction group queue length in bytes */ 34 - #define OTX2_CPT_INST_GRP_QLEN_BYTES (OTX2_CPT_SIZE_DIV40 * 16) 43 + #define OTX2_CPT_INST_GRP_QLEN_BYTES \ 44 + ((OTX2_CPT_SIZE_DIV40 + OTX2_CPT_EXTRA_SIZE_DIV40) * 16) 35 45 36 46 /* CPT FC length in bytes */ 37 47 #define OTX2_CPT_Q_FC_LEN 128 ··· 189 179 { 190 180 union otx2_cptx_lf_q_size lf_q_size = { .u = 0x0 }; 191 181 192 - lf_q_size.s.size_div40 = OTX2_CPT_SIZE_DIV40; 182 + lf_q_size.s.size_div40 = OTX2_CPT_SIZE_DIV40 + 183 + OTX2_CPT_EXTRA_SIZE_DIV40; 193 184 otx2_cpt_write64(lf->lfs->reg_base, BLKADDR_CPT0, lf->slot, 194 185 OTX2_CPT_LF_Q_SIZE, lf_q_size.u); 195 186 }
+1
drivers/crypto/marvell/octeontx2/otx2_cptpf.h
··· 46 46 47 47 struct workqueue_struct *flr_wq; 48 48 struct cptpf_flr_work *flr_work; 49 + struct mutex lock; /* serialize mailbox access */ 49 50 50 51 unsigned long cap_flag; 51 52 u8 pf_id; /* RVU PF number */
+16 -9
drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
··· 140 140 141 141 vf = flr_work - pf->flr_work; 142 142 143 + mutex_lock(&pf->lock); 143 144 req = otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req), 144 145 sizeof(struct msg_rsp)); 145 - if (!req) 146 + if (!req) { 147 + mutex_unlock(&pf->lock); 146 148 return; 149 + } 147 150 148 151 req->sig = OTX2_MBOX_REQ_SIG; 149 152 req->id = MBOX_MSG_VF_FLR; ··· 154 151 req->pcifunc |= (vf + 1) & RVU_PFVF_FUNC_MASK; 155 152 156 153 otx2_cpt_send_mbox_msg(mbox, pf->pdev); 154 + if (!otx2_cpt_sync_mbox_msg(&pf->afpf_mbox)) { 157 155 158 - if (vf >= 64) { 159 - reg = 1; 160 - vf = vf - 64; 156 + if (vf >= 64) { 157 + reg = 1; 158 + vf = vf - 64; 159 + } 160 + /* Clear transaction pending register */ 161 + otx2_cpt_write64(pf->reg_base, BLKADDR_RVUM, 0, 162 + RVU_PF_VFTRPENDX(reg), BIT_ULL(vf)); 163 + otx2_cpt_write64(pf->reg_base, BLKADDR_RVUM, 0, 164 + RVU_PF_VFFLR_INT_ENA_W1SX(reg), BIT_ULL(vf)); 161 165 } 162 - /* Clear transaction pending register */ 163 - otx2_cpt_write64(pf->reg_base, BLKADDR_RVUM, 0, 164 - RVU_PF_VFTRPENDX(reg), BIT_ULL(vf)); 165 - otx2_cpt_write64(pf->reg_base, BLKADDR_RVUM, 0, 166 - RVU_PF_VFFLR_INT_ENA_W1SX(reg), BIT_ULL(vf)); 166 + mutex_unlock(&pf->lock); 167 167 } 168 168 169 169 static irqreturn_t cptpf_vf_flr_intr(int __always_unused irq, void *arg) ··· 474 468 goto error; 475 469 476 470 INIT_WORK(&cptpf->afpf_mbox_work, otx2_cptpf_afpf_mbox_handler); 471 + mutex_init(&cptpf->lock); 477 472 return 0; 478 473 479 474 error:
+20 -7
drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
··· 18 18 struct mbox_msghdr *msg; 19 19 int ret; 20 20 21 + mutex_lock(&cptpf->lock); 21 22 msg = otx2_mbox_alloc_msg(&cptpf->afpf_mbox, 0, size); 22 - if (msg == NULL) 23 + if (msg == NULL) { 24 + mutex_unlock(&cptpf->lock); 23 25 return -ENOMEM; 26 + } 24 27 25 28 memcpy((uint8_t *)msg + sizeof(struct mbox_msghdr), 26 29 (uint8_t *)req + sizeof(struct mbox_msghdr), size); ··· 32 29 msg->sig = req->sig; 33 30 msg->ver = req->ver; 34 31 35 - otx2_mbox_msg_send(&cptpf->afpf_mbox, 0); 36 - ret = otx2_mbox_wait_for_rsp(&cptpf->afpf_mbox, 0); 32 + ret = otx2_cpt_sync_mbox_msg(&cptpf->afpf_mbox); 33 + /* Error code -EIO indicate there is a communication failure 34 + * to the AF. Rest of the error codes indicate that AF processed 35 + * VF messages and set the error codes in response messages 36 + * (if any) so simply forward responses to VF. 37 + */ 37 38 if (ret == -EIO) { 38 - dev_err(&cptpf->pdev->dev, "RVU MBOX timeout.\n"); 39 + dev_warn(&cptpf->pdev->dev, 40 + "AF not responding to VF%d messages\n", vf->vf_id); 41 + mutex_unlock(&cptpf->lock); 39 42 return ret; 40 - } else if (ret) { 41 - dev_err(&cptpf->pdev->dev, "RVU MBOX error: %d.\n", ret); 42 - return -EFAULT; 43 43 } 44 + mutex_unlock(&cptpf->lock); 44 45 return 0; 45 46 } 46 47 ··· 211 204 if (err == -ENOMEM || err == -EIO) 212 205 break; 213 206 offset = msg->next_msgoff; 207 + /* Write barrier required for VF responses which are handled by 208 + * PF driver and not forwarded to AF. 209 + */ 210 + smp_wmb(); 214 211 } 215 212 /* Send mbox responses to VF */ 216 213 if (mdev->num_msgs) ··· 361 350 process_afpf_mbox_msg(cptpf, msg); 362 351 363 352 offset = msg->next_msgoff; 353 + /* Sync VF response ready to be sent */ 354 + smp_wmb(); 364 355 mdev->msgs_acked++; 365 356 } 366 357 otx2_mbox_reset(afpf_mbox, 0);
+55 -1
drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
··· 1076 1076 delete_engine_group(&pdev->dev, &eng_grps->grp[i]); 1077 1077 } 1078 1078 1079 + #define PCI_DEVID_CN10K_RNM 0xA098 1080 + #define RNM_ENTROPY_STATUS 0x8 1081 + 1082 + static void rnm_to_cpt_errata_fixup(struct device *dev) 1083 + { 1084 + struct pci_dev *pdev; 1085 + void __iomem *base; 1086 + int timeout = 5000; 1087 + 1088 + pdev = pci_get_device(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN10K_RNM, NULL); 1089 + if (!pdev) 1090 + return; 1091 + 1092 + base = pci_ioremap_bar(pdev, 0); 1093 + if (!base) 1094 + goto put_pdev; 1095 + 1096 + while ((readq(base + RNM_ENTROPY_STATUS) & 0x7F) != 0x40) { 1097 + cpu_relax(); 1098 + udelay(1); 1099 + timeout--; 1100 + if (!timeout) { 1101 + dev_warn(dev, "RNM is not producing entropy\n"); 1102 + break; 1103 + } 1104 + } 1105 + 1106 + iounmap(base); 1107 + 1108 + put_pdev: 1109 + pci_dev_put(pdev); 1110 + } 1111 + 1079 1112 int otx2_cpt_get_eng_grp(struct otx2_cpt_eng_grps *eng_grps, int eng_type) 1080 1113 { 1081 1114 ··· 1144 1111 struct otx2_cpt_engines engs[OTX2_CPT_MAX_ETYPES_PER_GRP] = { {0} }; 1145 1112 struct pci_dev *pdev = cptpf->pdev; 1146 1113 struct fw_info_t fw_info; 1114 + u64 reg_val; 1147 1115 int ret = 0; 1148 1116 1149 1117 mutex_lock(&eng_grps->lock); ··· 1223 1189 1224 1190 if (is_dev_otx2(pdev)) 1225 1191 goto unlock; 1192 + 1193 + /* 1194 + * Ensure RNM_ENTROPY_STATUS[NORMAL_CNT] = 0x40 before writing 1195 + * CPT_AF_CTL[RNM_REQ_EN] = 1 as a workaround for HW errata. 1196 + */ 1197 + rnm_to_cpt_errata_fixup(&pdev->dev); 1198 + 1226 1199 /* 1227 1200 * Configure engine group mask to allow context prefetching 1228 - * for the groups. 1201 + * for the groups and enable random number request, to enable 1202 + * CPT to request random numbers from RNM. 
1229 1203 */ 1230 1204 otx2_cpt_write_af_reg(&cptpf->afpf_mbox, pdev, CPT_AF_CTL, 1231 1205 OTX2_CPT_ALL_ENG_GRPS_MASK << 3 | BIT_ULL(16), ··· 1245 1203 */ 1246 1204 otx2_cpt_write_af_reg(&cptpf->afpf_mbox, pdev, CPT_AF_CTX_FLUSH_TIMER, 1247 1205 CTX_FLUSH_TIMER_CNT, BLKADDR_CPT0); 1206 + 1207 + /* 1208 + * Set CPT_AF_DIAG[FLT_DIS], as a workaround for HW errata, when 1209 + * CPT_AF_DIAG[FLT_DIS] = 0 and a CPT engine access to LLC/DRAM 1210 + * encounters a fault/poison, a rare case may result in 1211 + * unpredictable data being delivered to a CPT engine. 1212 + */ 1213 + otx2_cpt_read_af_reg(&cptpf->afpf_mbox, pdev, CPT_AF_DIAG, &reg_val, 1214 + BLKADDR_CPT0); 1215 + otx2_cpt_write_af_reg(&cptpf->afpf_mbox, pdev, CPT_AF_DIAG, 1216 + reg_val | BIT_ULL(24), BLKADDR_CPT0); 1217 + 1248 1218 mutex_unlock(&eng_grps->lock); 1249 1219 return 0; 1250 1220
+6 -9
drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
··· 1634 1634 { 1635 1635 int i, err = 0; 1636 1636 1637 - if (!IS_ENABLED(CONFIG_DM_CRYPT)) { 1638 - for (i = 0; i < ARRAY_SIZE(otx2_cpt_skciphers); i++) 1639 - otx2_cpt_skciphers[i].base.cra_flags &= 1640 - ~CRYPTO_ALG_DEAD; 1637 + for (i = 0; i < ARRAY_SIZE(otx2_cpt_skciphers); i++) 1638 + otx2_cpt_skciphers[i].base.cra_flags &= ~CRYPTO_ALG_DEAD; 1641 1639 1642 - err = crypto_register_skciphers(otx2_cpt_skciphers, 1643 - ARRAY_SIZE(otx2_cpt_skciphers)); 1644 - if (err) 1645 - return err; 1646 - } 1640 + err = crypto_register_skciphers(otx2_cpt_skciphers, 1641 + ARRAY_SIZE(otx2_cpt_skciphers)); 1642 + if (err) 1643 + return err; 1647 1644 1648 1645 for (i = 0; i < ARRAY_SIZE(otx2_cpt_aeads); i++) 1649 1646 otx2_cpt_aeads[i].base.cra_flags &= ~CRYPTO_ALG_DEAD;
+1 -1
drivers/crypto/mxs-dcp.c
··· 331 331 memset(key + AES_KEYSIZE_128, 0, AES_KEYSIZE_128); 332 332 } 333 333 334 - for_each_sg(req->src, src, sg_nents(src), i) { 334 + for_each_sg(req->src, src, sg_nents(req->src), i) { 335 335 src_buf = sg_virt(src); 336 336 len = sg_dma_len(src); 337 337 tlen += len;
+2 -2
drivers/crypto/nx/nx-common-pseries.c
··· 962 962 NULL, 963 963 }; 964 964 965 - static struct attribute_group nx842_attribute_group = { 965 + static const struct attribute_group nx842_attribute_group = { 966 966 .name = NULL, /* put in device directory */ 967 967 .attrs = nx842_sysfs_entries, 968 968 }; ··· 992 992 NULL, 993 993 }; 994 994 995 - static struct attribute_group nxcop_caps_attr_group = { 995 + static const struct attribute_group nxcop_caps_attr_group = { 996 996 .name = "nx_gzip_caps", 997 997 .attrs = nxcop_caps_sysfs_entries, 998 998 };
+1 -1
drivers/crypto/omap-aes.c
··· 1093 1093 NULL, 1094 1094 }; 1095 1095 1096 - static struct attribute_group omap_aes_attr_group = { 1096 + static const struct attribute_group omap_aes_attr_group = { 1097 1097 .attrs = omap_aes_attrs, 1098 1098 }; 1099 1099
+1 -1
drivers/crypto/omap-sham.c
··· 2045 2045 NULL, 2046 2046 }; 2047 2047 2048 - static struct attribute_group omap_sham_attr_group = { 2048 + static const struct attribute_group omap_sham_attr_group = { 2049 2049 .attrs = omap_sham_attrs, 2050 2050 }; 2051 2051
+13 -10
drivers/crypto/qat/qat_4xxx/adf_4xxx_hw_data.c
··· 6 6 #include <adf_common_drv.h> 7 7 #include <adf_gen4_hw_data.h> 8 8 #include <adf_gen4_pfvf.h> 9 + #include <adf_gen4_pm.h> 9 10 #include "adf_4xxx_hw_data.h" 10 11 #include "icp_qat_hw.h" 11 12 ··· 53 52 static int get_service_enabled(struct adf_accel_dev *accel_dev) 54 53 { 55 54 char services[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {0}; 56 - u32 ret; 55 + int ret; 57 56 58 57 ret = adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC, 59 58 ADF_SERVICES_ENABLED, services); ··· 230 229 void __iomem *csr = misc_bar->virt_addr; 231 230 232 231 /* Enable all in errsou3 except VFLR notification on host */ 233 - ADF_CSR_WR(csr, ADF_4XXX_ERRMSK3, ADF_4XXX_VFLNOTIFY); 232 + ADF_CSR_WR(csr, ADF_GEN4_ERRMSK3, ADF_GEN4_VFLNOTIFY); 234 233 } 235 234 236 235 static void adf_enable_ints(struct adf_accel_dev *accel_dev) ··· 257 256 addr = (&GET_BARS(accel_dev)[ADF_4XXX_PMISC_BAR])->virt_addr; 258 257 259 258 /* Temporarily mask PM interrupt */ 260 - csr = ADF_CSR_RD(addr, ADF_4XXX_ERRMSK2); 261 - csr |= ADF_4XXX_PM_SOU; 262 - ADF_CSR_WR(addr, ADF_4XXX_ERRMSK2, csr); 259 + csr = ADF_CSR_RD(addr, ADF_GEN4_ERRMSK2); 260 + csr |= ADF_GEN4_PM_SOU; 261 + ADF_CSR_WR(addr, ADF_GEN4_ERRMSK2, csr); 263 262 264 263 /* Set DRV_ACTIVE bit to power up the device */ 265 - ADF_CSR_WR(addr, ADF_4XXX_PM_INTERRUPT, ADF_4XXX_PM_DRV_ACTIVE); 264 + ADF_CSR_WR(addr, ADF_GEN4_PM_INTERRUPT, ADF_GEN4_PM_DRV_ACTIVE); 266 265 267 266 /* Poll status register to make sure the device is powered up */ 268 267 ret = read_poll_timeout(ADF_CSR_RD, status, 269 - status & ADF_4XXX_PM_INIT_STATE, 270 - ADF_4XXX_PM_POLL_DELAY_US, 271 - ADF_4XXX_PM_POLL_TIMEOUT_US, true, addr, 272 - ADF_4XXX_PM_STATUS); 268 + status & ADF_GEN4_PM_INIT_STATE, 269 + ADF_GEN4_PM_POLL_DELAY_US, 270 + ADF_GEN4_PM_POLL_TIMEOUT_US, true, addr, 271 + ADF_GEN4_PM_STATUS); 273 272 if (ret) 274 273 dev_err(&GET_DEV(accel_dev), "Failed to power up the device\n"); 275 274 ··· 355 354 hw_data->set_ssm_wdtimer = adf_gen4_set_ssm_wdtimer; 356 355 
hw_data->disable_iov = adf_disable_sriov; 357 356 hw_data->ring_pair_reset = adf_gen4_ring_pair_reset; 357 + hw_data->enable_pm = adf_gen4_enable_pm; 358 + hw_data->handle_pm_interrupt = adf_gen4_handle_pm_interrupt; 358 359 359 360 adf_gen4_init_hw_csr_ops(&hw_data->csr_ops); 360 361 adf_gen4_init_pf_pfvf_ops(&hw_data->pfvf_ops);
-24
drivers/crypto/qat/qat_4xxx/adf_4xxx_hw_data.h
··· 39 39 #define ADF_4XXX_NUM_RINGS_PER_BANK 2 40 40 #define ADF_4XXX_NUM_BANKS_PER_VF 4 41 41 42 - /* Error source registers */ 43 - #define ADF_4XXX_ERRSOU0 (0x41A200) 44 - #define ADF_4XXX_ERRSOU1 (0x41A204) 45 - #define ADF_4XXX_ERRSOU2 (0x41A208) 46 - #define ADF_4XXX_ERRSOU3 (0x41A20C) 47 - 48 - /* Error source mask registers */ 49 - #define ADF_4XXX_ERRMSK0 (0x41A210) 50 - #define ADF_4XXX_ERRMSK1 (0x41A214) 51 - #define ADF_4XXX_ERRMSK2 (0x41A218) 52 - #define ADF_4XXX_ERRMSK3 (0x41A21C) 53 - 54 - #define ADF_4XXX_VFLNOTIFY BIT(7) 55 - 56 42 /* Arbiter configuration */ 57 43 #define ADF_4XXX_ARB_CONFIG (BIT(31) | BIT(6) | BIT(0)) 58 44 #define ADF_4XXX_ARB_OFFSET (0x0) ··· 48 62 #define ADF_4XXX_ADMINMSGUR_OFFSET (0x500574) 49 63 #define ADF_4XXX_ADMINMSGLR_OFFSET (0x500578) 50 64 #define ADF_4XXX_MAILBOX_BASE_OFFSET (0x600970) 51 - 52 - /* Power management */ 53 - #define ADF_4XXX_PM_POLL_DELAY_US 20 54 - #define ADF_4XXX_PM_POLL_TIMEOUT_US USEC_PER_SEC 55 - #define ADF_4XXX_PM_STATUS (0x50A00C) 56 - #define ADF_4XXX_PM_INTERRUPT (0x50A028) 57 - #define ADF_4XXX_PM_DRV_ACTIVE BIT(20) 58 - #define ADF_4XXX_PM_INIT_STATE BIT(21) 59 - /* Power management source in ERRSOU2 and ERRMSK2 */ 60 - #define ADF_4XXX_PM_SOU BIT(18) 61 65 62 66 /* Firmware Binaries */ 63 67 #define ADF_4XXX_FW "qat_4xxx.bin"
+7
drivers/crypto/qat/qat_4xxx/adf_drv.c
··· 75 75 if (ret) 76 76 goto err; 77 77 78 + /* Temporarily set the number of crypto instances to zero to avoid 79 + * registering the crypto algorithms. 80 + * This will be removed when the algorithms will support the 81 + * CRYPTO_TFM_REQ_MAY_BACKLOG flag 82 + */ 83 + instances = 0; 84 + 78 85 for (i = 0; i < instances; i++) { 79 86 val = i; 80 87 bank = i * 2;
+1
drivers/crypto/qat/qat_common/Makefile
··· 12 12 adf_hw_arbiter.o \ 13 13 adf_gen2_hw_data.o \ 14 14 adf_gen4_hw_data.o \ 15 + adf_gen4_pm.o \ 15 16 qat_crypto.o \ 16 17 qat_algs.o \ 17 18 qat_asym_algs.o \
+2
drivers/crypto/qat/qat_common/adf_accel_devices.h
··· 184 184 void (*exit_arb)(struct adf_accel_dev *accel_dev); 185 185 const u32 *(*get_arb_mapping)(void); 186 186 int (*init_device)(struct adf_accel_dev *accel_dev); 187 + int (*enable_pm)(struct adf_accel_dev *accel_dev); 188 + bool (*handle_pm_interrupt)(struct adf_accel_dev *accel_dev); 187 189 void (*disable_iov)(struct adf_accel_dev *accel_dev); 188 190 void (*configure_iov_threads)(struct adf_accel_dev *accel_dev, 189 191 bool enable);
+37
drivers/crypto/qat/qat_common/adf_admin.c
··· 251 251 } 252 252 EXPORT_SYMBOL_GPL(adf_send_admin_init); 253 253 254 + /** 255 + * adf_init_admin_pm() - Function sends PM init message to FW 256 + * @accel_dev: Pointer to acceleration device. 257 + * @idle_delay: QAT HW idle time before power gating is initiated. 258 + * 000 - 64us 259 + * 001 - 128us 260 + * 010 - 256us 261 + * 011 - 512us 262 + * 100 - 1ms 263 + * 101 - 2ms 264 + * 110 - 4ms 265 + * 111 - 8ms 266 + * 267 + * Function sends to the FW the admin init message for the PM state 268 + * configuration. 269 + * 270 + * Return: 0 on success, error code otherwise. 271 + */ 272 + int adf_init_admin_pm(struct adf_accel_dev *accel_dev, u32 idle_delay) 273 + { 274 + struct adf_hw_device_data *hw_data = accel_dev->hw_device; 275 + struct icp_qat_fw_init_admin_resp resp = {0}; 276 + struct icp_qat_fw_init_admin_req req = {0}; 277 + u32 ae_mask = hw_data->admin_ae_mask; 278 + 279 + if (!accel_dev->admin) { 280 + dev_err(&GET_DEV(accel_dev), "adf_admin is not available\n"); 281 + return -EFAULT; 282 + } 283 + 284 + req.cmd_id = ICP_QAT_FW_PM_STATE_CONFIG; 285 + req.idle_filter = idle_delay; 286 + 287 + return adf_send_admin(accel_dev, &req, &resp, ae_mask); 288 + } 289 + EXPORT_SYMBOL_GPL(adf_init_admin_pm); 290 + 254 291 int adf_init_admin_comms(struct adf_accel_dev *accel_dev) 255 292 { 256 293 struct adf_admin_comms *admin;
+4
drivers/crypto/qat/qat_common/adf_common_drv.h
··· 102 102 int adf_init_admin_comms(struct adf_accel_dev *accel_dev); 103 103 void adf_exit_admin_comms(struct adf_accel_dev *accel_dev); 104 104 int adf_send_admin_init(struct adf_accel_dev *accel_dev); 105 + int adf_init_admin_pm(struct adf_accel_dev *accel_dev, u32 idle_delay); 105 106 int adf_init_arb(struct adf_accel_dev *accel_dev); 106 107 void adf_exit_arb(struct adf_accel_dev *accel_dev); 107 108 void adf_update_ring_arb(struct adf_etr_ring_data *ring); ··· 189 188 void *addr_ptr, u32 mem_size, char *obj_name); 190 189 int qat_uclo_set_cfg_ae_mask(struct icp_qat_fw_loader_handle *handle, 191 190 unsigned int cfg_ae_mask); 191 + int adf_init_misc_wq(void); 192 + void adf_exit_misc_wq(void); 193 + bool adf_misc_wq_queue_work(struct work_struct *work); 192 194 #if defined(CONFIG_PCI_IOV) 193 195 int adf_sriov_configure(struct pci_dev *pdev, int numvfs); 194 196 void adf_disable_sriov(struct adf_accel_dev *accel_dev);
+6
drivers/crypto/qat/qat_common/adf_ctl_drv.c
··· 419 419 if (adf_chr_drv_create()) 420 420 goto err_chr_dev; 421 421 422 + if (adf_init_misc_wq()) 423 + goto err_misc_wq; 424 + 422 425 if (adf_init_aer()) 423 426 goto err_aer; 424 427 ··· 443 440 err_pf_wq: 444 441 adf_exit_aer(); 445 442 err_aer: 443 + adf_exit_misc_wq(); 444 + err_misc_wq: 446 445 adf_chr_drv_destroy(); 447 446 err_chr_dev: 448 447 mutex_destroy(&adf_ctl_lock); ··· 454 449 static void __exit adf_unregister_ctl_device_driver(void) 455 450 { 456 451 adf_chr_drv_destroy(); 452 + adf_exit_misc_wq(); 457 453 adf_exit_aer(); 458 454 adf_exit_vf_wq(); 459 455 adf_exit_pf_wq();
+14
drivers/crypto/qat/qat_common/adf_gen4_hw_data.h
··· 122 122 #define ADF_WQM_CSR_RPRESETSTS_STATUS BIT(0) 123 123 #define ADF_WQM_CSR_RPRESETSTS(bank) (ADF_WQM_CSR_RPRESETCTL(bank) + 4) 124 124 125 + /* Error source registers */ 126 + #define ADF_GEN4_ERRSOU0 (0x41A200) 127 + #define ADF_GEN4_ERRSOU1 (0x41A204) 128 + #define ADF_GEN4_ERRSOU2 (0x41A208) 129 + #define ADF_GEN4_ERRSOU3 (0x41A20C) 130 + 131 + /* Error source mask registers */ 132 + #define ADF_GEN4_ERRMSK0 (0x41A210) 133 + #define ADF_GEN4_ERRMSK1 (0x41A214) 134 + #define ADF_GEN4_ERRMSK2 (0x41A218) 135 + #define ADF_GEN4_ERRMSK3 (0x41A21C) 136 + 137 + #define ADF_GEN4_VFLNOTIFY BIT(7) 138 + 125 139 void adf_gen4_set_ssm_wdtimer(struct adf_accel_dev *accel_dev); 126 140 void adf_gen4_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops); 127 141 int adf_gen4_ring_pair_reset(struct adf_accel_dev *accel_dev, u32 bank_number);
+9 -33
drivers/crypto/qat/qat_common/adf_gen4_pfvf.c
··· 9 9 #include "adf_pfvf_pf_proto.h" 10 10 #include "adf_pfvf_utils.h" 11 11 12 - #define ADF_4XXX_MAX_NUM_VFS 16 13 - 14 12 #define ADF_4XXX_PF2VM_OFFSET(i) (0x40B010 + ((i) * 0x20)) 15 13 #define ADF_4XXX_VM2PF_OFFSET(i) (0x40B014 + ((i) * 0x20)) 16 14 17 15 /* VF2PF interrupt source registers */ 18 - #define ADF_4XXX_VM2PF_SOU(i) (0x41A180 + ((i) * 4)) 19 - #define ADF_4XXX_VM2PF_MSK(i) (0x41A1C0 + ((i) * 4)) 20 - #define ADF_4XXX_VM2PF_INT_EN_MSK BIT(0) 16 + #define ADF_4XXX_VM2PF_SOU 0x41A180 17 + #define ADF_4XXX_VM2PF_MSK 0x41A1C0 21 18 22 19 #define ADF_PFVF_GEN4_MSGTYPE_SHIFT 2 23 20 #define ADF_PFVF_GEN4_MSGTYPE_MASK 0x3F ··· 38 41 39 42 static u32 adf_gen4_get_vf2pf_sources(void __iomem *pmisc_addr) 40 43 { 41 - int i; 42 44 u32 sou, mask; 43 - int num_csrs = ADF_4XXX_MAX_NUM_VFS; 44 - u32 vf_mask = 0; 45 45 46 - for (i = 0; i < num_csrs; i++) { 47 - sou = ADF_CSR_RD(pmisc_addr, ADF_4XXX_VM2PF_SOU(i)); 48 - mask = ADF_CSR_RD(pmisc_addr, ADF_4XXX_VM2PF_MSK(i)); 49 - sou &= ~mask; 50 - vf_mask |= sou << i; 51 - } 46 + sou = ADF_CSR_RD(pmisc_addr, ADF_4XXX_VM2PF_SOU); 47 + mask = ADF_CSR_RD(pmisc_addr, ADF_4XXX_VM2PF_MSK); 52 48 53 - return vf_mask; 49 + return sou & ~mask; 54 50 } 55 51 56 52 static void adf_gen4_enable_vf2pf_interrupts(void __iomem *pmisc_addr, 57 53 u32 vf_mask) 58 54 { 59 - int num_csrs = ADF_4XXX_MAX_NUM_VFS; 60 - unsigned long mask = vf_mask; 61 55 unsigned int val; 62 - int i; 63 56 64 - for_each_set_bit(i, &mask, num_csrs) { 65 - unsigned int offset = ADF_4XXX_VM2PF_MSK(i); 66 - 67 - val = ADF_CSR_RD(pmisc_addr, offset) & ~ADF_4XXX_VM2PF_INT_EN_MSK; 68 - ADF_CSR_WR(pmisc_addr, offset, val); 69 - } 57 + val = ADF_CSR_RD(pmisc_addr, ADF_4XXX_VM2PF_MSK) & ~vf_mask; 58 + ADF_CSR_WR(pmisc_addr, ADF_4XXX_VM2PF_MSK, val); 70 59 } 71 60 72 61 static void adf_gen4_disable_vf2pf_interrupts(void __iomem *pmisc_addr, 73 62 u32 vf_mask) 74 63 { 75 - int num_csrs = ADF_4XXX_MAX_NUM_VFS; 76 - unsigned long mask = vf_mask; 77 64 unsigned int val; 
78 - int i; 79 65 80 - for_each_set_bit(i, &mask, num_csrs) { 81 - unsigned int offset = ADF_4XXX_VM2PF_MSK(i); 82 - 83 - val = ADF_CSR_RD(pmisc_addr, offset) | ADF_4XXX_VM2PF_INT_EN_MSK; 84 - ADF_CSR_WR(pmisc_addr, offset, val); 85 - } 66 + val = ADF_CSR_RD(pmisc_addr, ADF_4XXX_VM2PF_MSK) | vf_mask; 67 + ADF_CSR_WR(pmisc_addr, ADF_4XXX_VM2PF_MSK, val); 86 68 } 87 69 88 70 static int adf_gen4_pfvf_send(struct adf_accel_dev *accel_dev,
+137
drivers/crypto/qat/qat_common/adf_gen4_pm.c
··· 1 + // SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only) 2 + /* Copyright(c) 2022 Intel Corporation */ 3 + #include <linux/bitfield.h> 4 + #include <linux/iopoll.h> 5 + #include "adf_accel_devices.h" 6 + #include "adf_common_drv.h" 7 + #include "adf_gen4_pm.h" 8 + #include "adf_cfg_strings.h" 9 + #include "icp_qat_fw_init_admin.h" 10 + #include "adf_gen4_hw_data.h" 11 + #include "adf_cfg.h" 12 + 13 + enum qat_pm_host_msg { 14 + PM_NO_CHANGE = 0, 15 + PM_SET_MIN, 16 + }; 17 + 18 + struct adf_gen4_pm_data { 19 + struct work_struct pm_irq_work; 20 + struct adf_accel_dev *accel_dev; 21 + u32 pm_int_sts; 22 + }; 23 + 24 + static int send_host_msg(struct adf_accel_dev *accel_dev) 25 + { 26 + void __iomem *pmisc = adf_get_pmisc_base(accel_dev); 27 + u32 msg; 28 + 29 + msg = ADF_CSR_RD(pmisc, ADF_GEN4_PM_HOST_MSG); 30 + if (msg & ADF_GEN4_PM_MSG_PENDING) 31 + return -EBUSY; 32 + 33 + /* Send HOST_MSG */ 34 + msg = FIELD_PREP(ADF_GEN4_PM_MSG_PAYLOAD_BIT_MASK, PM_SET_MIN); 35 + msg |= ADF_GEN4_PM_MSG_PENDING; 36 + ADF_CSR_WR(pmisc, ADF_GEN4_PM_HOST_MSG, msg); 37 + 38 + /* Poll status register to make sure the HOST_MSG has been processed */ 39 + return read_poll_timeout(ADF_CSR_RD, msg, 40 + !(msg & ADF_GEN4_PM_MSG_PENDING), 41 + ADF_GEN4_PM_MSG_POLL_DELAY_US, 42 + ADF_GEN4_PM_POLL_TIMEOUT_US, true, pmisc, 43 + ADF_GEN4_PM_HOST_MSG); 44 + } 45 + 46 + static void pm_bh_handler(struct work_struct *work) 47 + { 48 + struct adf_gen4_pm_data *pm_data = 49 + container_of(work, struct adf_gen4_pm_data, pm_irq_work); 50 + struct adf_accel_dev *accel_dev = pm_data->accel_dev; 51 + void __iomem *pmisc = adf_get_pmisc_base(accel_dev); 52 + u32 pm_int_sts = pm_data->pm_int_sts; 53 + u32 val; 54 + 55 + /* PM Idle interrupt */ 56 + if (pm_int_sts & ADF_GEN4_PM_IDLE_STS) { 57 + /* Issue host message to FW */ 58 + if (send_host_msg(accel_dev)) 59 + dev_warn_ratelimited(&GET_DEV(accel_dev), 60 + "Failed to send host msg to FW\n"); 61 + } 62 + 63 + /* Clear interrupt status */ 64 + 
ADF_CSR_WR(pmisc, ADF_GEN4_PM_INTERRUPT, pm_int_sts); 65 + 66 + /* Reenable PM interrupt */ 67 + val = ADF_CSR_RD(pmisc, ADF_GEN4_ERRMSK2); 68 + val &= ~ADF_GEN4_PM_SOU; 69 + ADF_CSR_WR(pmisc, ADF_GEN4_ERRMSK2, val); 70 + 71 + kfree(pm_data); 72 + } 73 + 74 + bool adf_gen4_handle_pm_interrupt(struct adf_accel_dev *accel_dev) 75 + { 76 + void __iomem *pmisc = adf_get_pmisc_base(accel_dev); 77 + struct adf_gen4_pm_data *pm_data = NULL; 78 + u32 errsou2; 79 + u32 errmsk2; 80 + u32 val; 81 + 82 + /* Only handle the interrupt triggered by PM */ 83 + errmsk2 = ADF_CSR_RD(pmisc, ADF_GEN4_ERRMSK2); 84 + if (errmsk2 & ADF_GEN4_PM_SOU) 85 + return false; 86 + 87 + errsou2 = ADF_CSR_RD(pmisc, ADF_GEN4_ERRSOU2); 88 + if (!(errsou2 & ADF_GEN4_PM_SOU)) 89 + return false; 90 + 91 + /* Disable interrupt */ 92 + val = ADF_CSR_RD(pmisc, ADF_GEN4_ERRMSK2); 93 + val |= ADF_GEN4_PM_SOU; 94 + ADF_CSR_WR(pmisc, ADF_GEN4_ERRMSK2, val); 95 + 96 + val = ADF_CSR_RD(pmisc, ADF_GEN4_PM_INTERRUPT); 97 + 98 + pm_data = kzalloc(sizeof(*pm_data), GFP_ATOMIC); 99 + if (!pm_data) 100 + return false; 101 + 102 + pm_data->pm_int_sts = val; 103 + pm_data->accel_dev = accel_dev; 104 + 105 + INIT_WORK(&pm_data->pm_irq_work, pm_bh_handler); 106 + adf_misc_wq_queue_work(&pm_data->pm_irq_work); 107 + 108 + return true; 109 + } 110 + EXPORT_SYMBOL_GPL(adf_gen4_handle_pm_interrupt); 111 + 112 + int adf_gen4_enable_pm(struct adf_accel_dev *accel_dev) 113 + { 114 + void __iomem *pmisc = adf_get_pmisc_base(accel_dev); 115 + int ret; 116 + u32 val; 117 + 118 + ret = adf_init_admin_pm(accel_dev, ADF_GEN4_PM_DEFAULT_IDLE_FILTER); 119 + if (ret) 120 + return ret; 121 + 122 + /* Enable default PM interrupts: IDLE, THROTTLE */ 123 + val = ADF_CSR_RD(pmisc, ADF_GEN4_PM_INTERRUPT); 124 + val |= ADF_GEN4_PM_INT_EN_DEFAULT; 125 + 126 + /* Clear interrupt status */ 127 + val |= ADF_GEN4_PM_INT_STS_MASK; 128 + ADF_CSR_WR(pmisc, ADF_GEN4_PM_INTERRUPT, val); 129 + 130 + /* Unmask PM Interrupt */ 131 + val = ADF_CSR_RD(pmisc, 
ADF_GEN4_ERRMSK2); 132 + val &= ~ADF_GEN4_PM_SOU; 133 + ADF_CSR_WR(pmisc, ADF_GEN4_ERRMSK2, val); 134 + 135 + return 0; 136 + } 137 + EXPORT_SYMBOL_GPL(adf_gen4_enable_pm);
+44
drivers/crypto/qat/qat_common/adf_gen4_pm.h
··· 1 + /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only) */ 2 + /* Copyright(c) 2022 Intel Corporation */ 3 + #ifndef ADF_GEN4_PM_H 4 + #define ADF_GEN4_PM_H 5 + 6 + #include "adf_accel_devices.h" 7 + 8 + /* Power management registers */ 9 + #define ADF_GEN4_PM_HOST_MSG (0x50A01C) 10 + 11 + /* Power management */ 12 + #define ADF_GEN4_PM_POLL_DELAY_US 20 13 + #define ADF_GEN4_PM_POLL_TIMEOUT_US USEC_PER_SEC 14 + #define ADF_GEN4_PM_MSG_POLL_DELAY_US (10 * USEC_PER_MSEC) 15 + #define ADF_GEN4_PM_STATUS (0x50A00C) 16 + #define ADF_GEN4_PM_INTERRUPT (0x50A028) 17 + 18 + /* Power management source in ERRSOU2 and ERRMSK2 */ 19 + #define ADF_GEN4_PM_SOU BIT(18) 20 + 21 + #define ADF_GEN4_PM_IDLE_INT_EN BIT(18) 22 + #define ADF_GEN4_PM_THROTTLE_INT_EN BIT(19) 23 + #define ADF_GEN4_PM_DRV_ACTIVE BIT(20) 24 + #define ADF_GEN4_PM_INIT_STATE BIT(21) 25 + #define ADF_GEN4_PM_INT_EN_DEFAULT (ADF_GEN4_PM_IDLE_INT_EN | \ 26 + ADF_GEN4_PM_THROTTLE_INT_EN) 27 + 28 + #define ADF_GEN4_PM_THR_STS BIT(0) 29 + #define ADF_GEN4_PM_IDLE_STS BIT(1) 30 + #define ADF_GEN4_PM_FW_INT_STS BIT(2) 31 + #define ADF_GEN4_PM_INT_STS_MASK (ADF_GEN4_PM_THR_STS | \ 32 + ADF_GEN4_PM_IDLE_STS | \ 33 + ADF_GEN4_PM_FW_INT_STS) 34 + 35 + #define ADF_GEN4_PM_MSG_PENDING BIT(0) 36 + #define ADF_GEN4_PM_MSG_PAYLOAD_BIT_MASK GENMASK(28, 1) 37 + 38 + #define ADF_GEN4_PM_DEFAULT_IDLE_FILTER (0x0) 39 + #define ADF_GEN4_PM_MAX_IDLE_FILTER (0x7) 40 + 41 + int adf_gen4_enable_pm(struct adf_accel_dev *accel_dev); 42 + bool adf_gen4_handle_pm_interrupt(struct adf_accel_dev *accel_dev); 43 + 44 + #endif
+6
drivers/crypto/qat/qat_common/adf_init.c
··· 181 181 if (hw_data->set_ssm_wdtimer) 182 182 hw_data->set_ssm_wdtimer(accel_dev); 183 183 184 + /* Enable Power Management */ 185 + if (hw_data->enable_pm && hw_data->enable_pm(accel_dev)) { 186 + dev_err(&GET_DEV(accel_dev), "Failed to configure Power Management\n"); 187 + return -EFAULT; 188 + } 189 + 184 190 list_for_each(list_itr, &service_table) { 185 191 service = list_entry(list_itr, struct service_hndl, list); 186 192 if (service->event_hld(accel_dev, ADF_EVENT_START)) {
+42
drivers/crypto/qat/qat_common/adf_isr.c
··· 16 16 #include "adf_transport_internal.h" 17 17 18 18 #define ADF_MAX_NUM_VFS 32 19 + static struct workqueue_struct *adf_misc_wq; 19 20 20 21 static int adf_enable_msix(struct adf_accel_dev *accel_dev) 21 22 { ··· 124 123 } 125 124 #endif /* CONFIG_PCI_IOV */ 126 125 126 + static bool adf_handle_pm_int(struct adf_accel_dev *accel_dev) 127 + { 128 + struct adf_hw_device_data *hw_data = accel_dev->hw_device; 129 + 130 + if (hw_data->handle_pm_interrupt && 131 + hw_data->handle_pm_interrupt(accel_dev)) 132 + return true; 133 + 134 + return false; 135 + } 136 + 127 137 static irqreturn_t adf_msix_isr_ae(int irq, void *dev_ptr) 128 138 { 129 139 struct adf_accel_dev *accel_dev = dev_ptr; ··· 144 132 if (accel_dev->pf.vf_info && adf_handle_vf2pf_int(accel_dev)) 145 133 return IRQ_HANDLED; 146 134 #endif /* CONFIG_PCI_IOV */ 135 + 136 + if (adf_handle_pm_int(accel_dev)) 137 + return IRQ_HANDLED; 147 138 148 139 dev_dbg(&GET_DEV(accel_dev), "qat_dev%d spurious AE interrupt\n", 149 140 accel_dev->accel_id); ··· 356 341 return ret; 357 342 } 358 343 EXPORT_SYMBOL_GPL(adf_isr_resource_alloc); 344 + 345 + /** 346 + * adf_init_misc_wq() - Init misc workqueue 347 + * 348 + * Function init workqueue 'qat_misc_wq' for general purpose. 349 + * 350 + * Return: 0 on success, error code otherwise. 351 + */ 352 + int __init adf_init_misc_wq(void) 353 + { 354 + adf_misc_wq = alloc_workqueue("qat_misc_wq", WQ_MEM_RECLAIM, 0); 355 + 356 + return !adf_misc_wq ? -ENOMEM : 0; 357 + } 358 + 359 + void adf_exit_misc_wq(void) 360 + { 361 + if (adf_misc_wq) 362 + destroy_workqueue(adf_misc_wq); 363 + 364 + adf_misc_wq = NULL; 365 + } 366 + 367 + bool adf_misc_wq_queue_work(struct work_struct *work) 368 + { 369 + return queue_work(adf_misc_wq, work); 370 + }
+2 -2
drivers/crypto/qat/qat_common/adf_pfvf_vf_msg.c
··· 96 96 int adf_vf2pf_get_capabilities(struct adf_accel_dev *accel_dev) 97 97 { 98 98 struct adf_hw_device_data *hw_data = accel_dev->hw_device; 99 - struct capabilities_v3 cap_msg = { { 0 }, }; 99 + struct capabilities_v3 cap_msg = { 0 }; 100 100 unsigned int len = sizeof(cap_msg); 101 101 102 102 if (accel_dev->vf.pf_compat_ver < ADF_PFVF_COMPAT_CAPABILITIES) ··· 141 141 142 142 int adf_vf2pf_get_ring_to_svc(struct adf_accel_dev *accel_dev) 143 143 { 144 - struct ring_to_svc_map_v1 rts_map_msg = { { 0 }, }; 144 + struct ring_to_svc_map_v1 rts_map_msg = { 0 }; 145 145 unsigned int len = sizeof(rts_map_msg); 146 146 147 147 if (accel_dev->vf.pf_compat_ver < ADF_PFVF_COMPAT_RING_TO_SVC_MAP)
+1
drivers/crypto/qat/qat_common/icp_qat_fw_init_admin.h
··· 16 16 ICP_QAT_FW_HEARTBEAT_SYNC = 7, 17 17 ICP_QAT_FW_HEARTBEAT_GET = 8, 18 18 ICP_QAT_FW_COMP_CAPABILITY_GET = 9, 19 + ICP_QAT_FW_PM_STATE_CONFIG = 128, 19 20 }; 20 21 21 22 enum icp_qat_fw_init_admin_resp_status {
+7
drivers/crypto/qat/qat_common/qat_crypto.c
··· 161 161 if (ret) 162 162 goto err; 163 163 164 + /* Temporarily set the number of crypto instances to zero to avoid 165 + * registering the crypto algorithms. 166 + * This will be removed when the algorithms will support the 167 + * CRYPTO_TFM_REQ_MAY_BACKLOG flag 168 + */ 169 + instances = 0; 170 + 164 171 for (i = 0; i < instances; i++) { 165 172 val = i; 166 173 snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_BANK_NUM, i);
+6 -3
drivers/crypto/qat/qat_common/qat_uclo.c
··· 387 387 page = image->page; 388 388 389 389 for_each_set_bit(ae, &ae_mask, handle->hal_handle->ae_max_num) { 390 - if (!test_bit(ae, (unsigned long *)&uof_image->ae_assigned)) 390 + unsigned long ae_assigned = uof_image->ae_assigned; 391 + 392 + if (!test_bit(ae, &ae_assigned)) 391 393 continue; 392 394 393 395 if (!test_bit(ae, &cfg_ae_mask)) ··· 666 664 continue; 667 665 668 666 for (i = 0; i < obj_handle->uimage_num; i++) { 669 - if (!test_bit(ae, (unsigned long *) 670 - &obj_handle->ae_uimage[i].img_ptr->ae_assigned)) 667 + unsigned long ae_assigned = obj_handle->ae_uimage[i].img_ptr->ae_assigned; 668 + 669 + if (!test_bit(ae, &ae_assigned)) 671 670 continue; 672 671 mflag = 1; 673 672 if (qat_uclo_init_ae_data(obj_handle, ae, i))
-1
drivers/crypto/rockchip/rk3288_crypto_skcipher.c
··· 506 506 .exit = rk_ablk_exit_tfm, 507 507 .min_keysize = DES3_EDE_KEY_SIZE, 508 508 .max_keysize = DES3_EDE_KEY_SIZE, 509 - .ivsize = DES_BLOCK_SIZE, 510 509 .setkey = rk_tdes_setkey, 511 510 .encrypt = rk_des3_ede_ecb_encrypt, 512 511 .decrypt = rk_des3_ede_ecb_decrypt,
+1 -1
drivers/crypto/ux500/cryp/cryp_core.c
··· 1264 1264 struct device *dev = &pdev->dev; 1265 1265 1266 1266 dev_dbg(dev, "[%s]", __func__); 1267 - device_data = devm_kzalloc(dev, sizeof(*device_data), GFP_ATOMIC); 1267 + device_data = devm_kzalloc(dev, sizeof(*device_data), GFP_KERNEL); 1268 1268 if (!device_data) { 1269 1269 ret = -ENOMEM; 1270 1270 goto out;
+1 -1
drivers/crypto/ux500/hash/hash_core.c
··· 1658 1658 struct hash_device_data *device_data; 1659 1659 struct device *dev = &pdev->dev; 1660 1660 1661 - device_data = devm_kzalloc(dev, sizeof(*device_data), GFP_ATOMIC); 1661 + device_data = devm_kzalloc(dev, sizeof(*device_data), GFP_KERNEL); 1662 1662 if (!device_data) { 1663 1663 ret = -ENOMEM; 1664 1664 goto out;
+4
drivers/crypto/vmx/Kconfig
··· 2 2 config CRYPTO_DEV_VMX_ENCRYPT 3 3 tristate "Encryption acceleration support on P8 CPU" 4 4 depends on CRYPTO_DEV_VMX 5 + select CRYPTO_AES 6 + select CRYPTO_CBC 7 + select CRYPTO_CTR 5 8 select CRYPTO_GHASH 9 + select CRYPTO_XTS 6 10 default m 7 11 help 8 12 Support for VMX cryptographic acceleration instructions on Power8 CPU.
+1
drivers/crypto/xilinx/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 obj-$(CONFIG_CRYPTO_DEV_ZYNQMP_AES) += zynqmp-aes-gcm.o 3 + obj-$(CONFIG_CRYPTO_DEV_ZYNQMP_SHA3) += zynqmp-sha.o
+264
drivers/crypto/xilinx/zynqmp-sha.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Xilinx ZynqMP SHA Driver. 4 + * Copyright (c) 2022 Xilinx Inc. 5 + */ 6 + #include <linux/cacheflush.h> 7 + #include <crypto/hash.h> 8 + #include <crypto/internal/hash.h> 9 + #include <crypto/sha3.h> 10 + #include <linux/crypto.h> 11 + #include <linux/device.h> 12 + #include <linux/dma-mapping.h> 13 + #include <linux/firmware/xlnx-zynqmp.h> 14 + #include <linux/init.h> 15 + #include <linux/io.h> 16 + #include <linux/kernel.h> 17 + #include <linux/module.h> 18 + #include <linux/of_device.h> 19 + #include <linux/platform_device.h> 20 + 21 + #define ZYNQMP_DMA_BIT_MASK 32U 22 + #define ZYNQMP_DMA_ALLOC_FIXED_SIZE 0x1000U 23 + 24 + enum zynqmp_sha_op { 25 + ZYNQMP_SHA3_INIT = 1, 26 + ZYNQMP_SHA3_UPDATE = 2, 27 + ZYNQMP_SHA3_FINAL = 4, 28 + }; 29 + 30 + struct zynqmp_sha_drv_ctx { 31 + struct shash_alg sha3_384; 32 + struct device *dev; 33 + }; 34 + 35 + struct zynqmp_sha_tfm_ctx { 36 + struct device *dev; 37 + struct crypto_shash *fbk_tfm; 38 + }; 39 + 40 + struct zynqmp_sha_desc_ctx { 41 + struct shash_desc fbk_req; 42 + }; 43 + 44 + static dma_addr_t update_dma_addr, final_dma_addr; 45 + static char *ubuf, *fbuf; 46 + 47 + static int zynqmp_sha_init_tfm(struct crypto_shash *hash) 48 + { 49 + const char *fallback_driver_name = crypto_shash_alg_name(hash); 50 + struct zynqmp_sha_tfm_ctx *tfm_ctx = crypto_shash_ctx(hash); 51 + struct shash_alg *alg = crypto_shash_alg(hash); 52 + struct crypto_shash *fallback_tfm; 53 + struct zynqmp_sha_drv_ctx *drv_ctx; 54 + 55 + drv_ctx = container_of(alg, struct zynqmp_sha_drv_ctx, sha3_384); 56 + tfm_ctx->dev = drv_ctx->dev; 57 + 58 + /* Allocate a fallback and abort if it failed. 
*/ 59 + fallback_tfm = crypto_alloc_shash(fallback_driver_name, 0, 60 + CRYPTO_ALG_NEED_FALLBACK); 61 + if (IS_ERR(fallback_tfm)) 62 + return PTR_ERR(fallback_tfm); 63 + 64 + tfm_ctx->fbk_tfm = fallback_tfm; 65 + hash->descsize += crypto_shash_descsize(tfm_ctx->fbk_tfm); 66 + 67 + return 0; 68 + } 69 + 70 + static void zynqmp_sha_exit_tfm(struct crypto_shash *hash) 71 + { 72 + struct zynqmp_sha_tfm_ctx *tfm_ctx = crypto_shash_ctx(hash); 73 + 74 + if (tfm_ctx->fbk_tfm) { 75 + crypto_free_shash(tfm_ctx->fbk_tfm); 76 + tfm_ctx->fbk_tfm = NULL; 77 + } 78 + 79 + memzero_explicit(tfm_ctx, sizeof(struct zynqmp_sha_tfm_ctx)); 80 + } 81 + 82 + static int zynqmp_sha_init(struct shash_desc *desc) 83 + { 84 + struct zynqmp_sha_desc_ctx *dctx = shash_desc_ctx(desc); 85 + struct zynqmp_sha_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); 86 + 87 + dctx->fbk_req.tfm = tctx->fbk_tfm; 88 + return crypto_shash_init(&dctx->fbk_req); 89 + } 90 + 91 + static int zynqmp_sha_update(struct shash_desc *desc, const u8 *data, unsigned int length) 92 + { 93 + struct zynqmp_sha_desc_ctx *dctx = shash_desc_ctx(desc); 94 + 95 + return crypto_shash_update(&dctx->fbk_req, data, length); 96 + } 97 + 98 + static int zynqmp_sha_final(struct shash_desc *desc, u8 *out) 99 + { 100 + struct zynqmp_sha_desc_ctx *dctx = shash_desc_ctx(desc); 101 + 102 + return crypto_shash_final(&dctx->fbk_req, out); 103 + } 104 + 105 + static int zynqmp_sha_finup(struct shash_desc *desc, const u8 *data, unsigned int length, u8 *out) 106 + { 107 + struct zynqmp_sha_desc_ctx *dctx = shash_desc_ctx(desc); 108 + 109 + return crypto_shash_finup(&dctx->fbk_req, data, length, out); 110 + } 111 + 112 + static int zynqmp_sha_import(struct shash_desc *desc, const void *in) 113 + { 114 + struct zynqmp_sha_desc_ctx *dctx = shash_desc_ctx(desc); 115 + struct zynqmp_sha_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); 116 + 117 + dctx->fbk_req.tfm = tctx->fbk_tfm; 118 + return crypto_shash_import(&dctx->fbk_req, in); 119 + } 120 + 121 + static 
int zynqmp_sha_export(struct shash_desc *desc, void *out) 122 + { 123 + struct zynqmp_sha_desc_ctx *dctx = shash_desc_ctx(desc); 124 + 125 + return crypto_shash_export(&dctx->fbk_req, out); 126 + } 127 + 128 + static int zynqmp_sha_digest(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) 129 + { 130 + unsigned int remaining_len = len; 131 + int update_size; 132 + int ret; 133 + 134 + ret = zynqmp_pm_sha_hash(0, 0, ZYNQMP_SHA3_INIT); 135 + if (ret) 136 + return ret; 137 + 138 + while (remaining_len != 0) { 139 + memzero_explicit(ubuf, ZYNQMP_DMA_ALLOC_FIXED_SIZE); 140 + if (remaining_len >= ZYNQMP_DMA_ALLOC_FIXED_SIZE) { 141 + update_size = ZYNQMP_DMA_ALLOC_FIXED_SIZE; 142 + remaining_len -= ZYNQMP_DMA_ALLOC_FIXED_SIZE; 143 + } else { 144 + update_size = remaining_len; 145 + remaining_len = 0; 146 + } 147 + memcpy(ubuf, data, update_size); 148 + flush_icache_range((unsigned long)ubuf, (unsigned long)ubuf + update_size); 149 + ret = zynqmp_pm_sha_hash(update_dma_addr, update_size, ZYNQMP_SHA3_UPDATE); 150 + if (ret) 151 + return ret; 152 + 153 + data += update_size; 154 + } 155 + 156 + ret = zynqmp_pm_sha_hash(final_dma_addr, SHA3_384_DIGEST_SIZE, ZYNQMP_SHA3_FINAL); 157 + memcpy(out, fbuf, SHA3_384_DIGEST_SIZE); 158 + memzero_explicit(fbuf, SHA3_384_DIGEST_SIZE); 159 + 160 + return ret; 161 + } 162 + 163 + static struct zynqmp_sha_drv_ctx sha3_drv_ctx = { 164 + .sha3_384 = { 165 + .init = zynqmp_sha_init, 166 + .update = zynqmp_sha_update, 167 + .final = zynqmp_sha_final, 168 + .finup = zynqmp_sha_finup, 169 + .digest = zynqmp_sha_digest, 170 + .export = zynqmp_sha_export, 171 + .import = zynqmp_sha_import, 172 + .init_tfm = zynqmp_sha_init_tfm, 173 + .exit_tfm = zynqmp_sha_exit_tfm, 174 + .descsize = sizeof(struct zynqmp_sha_desc_ctx), 175 + .statesize = sizeof(struct sha3_state), 176 + .digestsize = SHA3_384_DIGEST_SIZE, 177 + .base = { 178 + .cra_name = "sha3-384", 179 + .cra_driver_name = "zynqmp-sha3-384", 180 + .cra_priority = 300, 181 + 
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | 182 + CRYPTO_ALG_ALLOCATES_MEMORY | 183 + CRYPTO_ALG_NEED_FALLBACK, 184 + .cra_blocksize = SHA3_384_BLOCK_SIZE, 185 + .cra_ctxsize = sizeof(struct zynqmp_sha_tfm_ctx), 186 + .cra_alignmask = 3, 187 + .cra_module = THIS_MODULE, 188 + } 189 + } 190 + }; 191 + 192 + static int zynqmp_sha_probe(struct platform_device *pdev) 193 + { 194 + struct device *dev = &pdev->dev; 195 + int err; 196 + u32 v; 197 + 198 + /* Verify the hardware is present */ 199 + err = zynqmp_pm_get_api_version(&v); 200 + if (err) 201 + return err; 202 + 203 + 204 + err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(ZYNQMP_DMA_BIT_MASK)); 205 + if (err < 0) { 206 + dev_err(dev, "No usable DMA configuration\n"); 207 + return err; 208 + } 209 + 210 + err = crypto_register_shash(&sha3_drv_ctx.sha3_384); 211 + if (err < 0) { 212 + dev_err(dev, "Failed to register shash alg.\n"); 213 + return err; 214 + } 215 + 216 + sha3_drv_ctx.dev = dev; 217 + platform_set_drvdata(pdev, &sha3_drv_ctx); 218 + 219 + ubuf = dma_alloc_coherent(dev, ZYNQMP_DMA_ALLOC_FIXED_SIZE, &update_dma_addr, GFP_KERNEL); 220 + if (!ubuf) { 221 + err = -ENOMEM; 222 + goto err_shash; 223 + } 224 + 225 + fbuf = dma_alloc_coherent(dev, SHA3_384_DIGEST_SIZE, &final_dma_addr, GFP_KERNEL); 226 + if (!fbuf) { 227 + err = -ENOMEM; 228 + goto err_mem; 229 + } 230 + 231 + return 0; 232 + 233 + err_mem: 234 + dma_free_coherent(sha3_drv_ctx.dev, ZYNQMP_DMA_ALLOC_FIXED_SIZE, ubuf, update_dma_addr); 235 + 236 + err_shash: 237 + crypto_unregister_shash(&sha3_drv_ctx.sha3_384); 238 + 239 + return err; 240 + } 241 + 242 + static int zynqmp_sha_remove(struct platform_device *pdev) 243 + { 244 + sha3_drv_ctx.dev = platform_get_drvdata(pdev); 245 + 246 + dma_free_coherent(sha3_drv_ctx.dev, ZYNQMP_DMA_ALLOC_FIXED_SIZE, ubuf, update_dma_addr); 247 + dma_free_coherent(sha3_drv_ctx.dev, SHA3_384_DIGEST_SIZE, fbuf, final_dma_addr); 248 + crypto_unregister_shash(&sha3_drv_ctx.sha3_384); 249 + 250 + return 0; 251 + } 
252 + 253 + static struct platform_driver zynqmp_sha_driver = { 254 + .probe = zynqmp_sha_probe, 255 + .remove = zynqmp_sha_remove, 256 + .driver = { 257 + .name = "zynqmp-sha3-384", 258 + }, 259 + }; 260 + 261 + module_platform_driver(zynqmp_sha_driver); 262 + MODULE_DESCRIPTION("ZynqMP SHA3 hardware acceleration support."); 263 + MODULE_LICENSE("GPL v2"); 264 + MODULE_AUTHOR("Harsha <harsha.harsha@xilinx.com>");
+26
drivers/firmware/xilinx/zynqmp.c
··· 1121 1121 EXPORT_SYMBOL_GPL(zynqmp_pm_aes_engine); 1122 1122 1123 1123 /** 1124 + * zynqmp_pm_sha_hash - Access the SHA engine to calculate the hash 1125 + * @address: Address of the data/ Address of output buffer where 1126 + * hash should be stored. 1127 + * @size: Size of the data. 1128 + * @flags: 1129 + * BIT(0) - for initializing csudma driver and SHA3(Here address 1130 + * and size inputs can be NULL). 1131 + * BIT(1) - to call Sha3_Update API which can be called multiple 1132 + * times when data is not contiguous. 1133 + * BIT(2) - to get final hash of the whole updated data. 1134 + * Hash will be overwritten at provided address with 1135 + * 48 bytes. 1136 + * 1137 + * Return: Returns status, either success or error code. 1138 + */ 1139 + int zynqmp_pm_sha_hash(const u64 address, const u32 size, const u32 flags) 1140 + { 1141 + u32 lower_addr = lower_32_bits(address); 1142 + u32 upper_addr = upper_32_bits(address); 1143 + 1144 + return zynqmp_pm_invoke_fn(PM_SECURE_SHA, upper_addr, lower_addr, 1145 + size, flags, NULL); 1146 + } 1147 + EXPORT_SYMBOL_GPL(zynqmp_pm_sha_hash); 1148 + 1149 + /** 1124 1150 * zynqmp_pm_register_notifier() - PM API for register a subsystem 1125 1151 * to be notified about specific 1126 1152 * event/error.
+1
drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
··· 605 605 } else if (!(req->hdr.pcifunc & RVU_PFVF_FUNC_MASK)) { 606 606 /* Registers that can be accessed from PF */ 607 607 switch (offset) { 608 + case CPT_AF_DIAG: 608 609 case CPT_AF_CTL: 609 610 case CPT_AF_PF_FUNC: 610 611 case CPT_AF_BLK_RST:
+56 -28
include/asm-generic/xor.h
··· 8 8 #include <linux/prefetch.h> 9 9 10 10 static void 11 - xor_8regs_2(unsigned long bytes, unsigned long *p1, unsigned long *p2) 11 + xor_8regs_2(unsigned long bytes, unsigned long * __restrict p1, 12 + const unsigned long * __restrict p2) 12 13 { 13 14 long lines = bytes / (sizeof (long)) / 8; 14 15 ··· 28 27 } 29 28 30 29 static void 31 - xor_8regs_3(unsigned long bytes, unsigned long *p1, unsigned long *p2, 32 - unsigned long *p3) 30 + xor_8regs_3(unsigned long bytes, unsigned long * __restrict p1, 31 + const unsigned long * __restrict p2, 32 + const unsigned long * __restrict p3) 33 33 { 34 34 long lines = bytes / (sizeof (long)) / 8; 35 35 ··· 50 48 } 51 49 52 50 static void 53 - xor_8regs_4(unsigned long bytes, unsigned long *p1, unsigned long *p2, 54 - unsigned long *p3, unsigned long *p4) 51 + xor_8regs_4(unsigned long bytes, unsigned long * __restrict p1, 52 + const unsigned long * __restrict p2, 53 + const unsigned long * __restrict p3, 54 + const unsigned long * __restrict p4) 55 55 { 56 56 long lines = bytes / (sizeof (long)) / 8; 57 57 ··· 74 70 } 75 71 76 72 static void 77 - xor_8regs_5(unsigned long bytes, unsigned long *p1, unsigned long *p2, 78 - unsigned long *p3, unsigned long *p4, unsigned long *p5) 73 + xor_8regs_5(unsigned long bytes, unsigned long * __restrict p1, 74 + const unsigned long * __restrict p2, 75 + const unsigned long * __restrict p3, 76 + const unsigned long * __restrict p4, 77 + const unsigned long * __restrict p5) 79 78 { 80 79 long lines = bytes / (sizeof (long)) / 8; 81 80 ··· 100 93 } 101 94 102 95 static void 103 - xor_32regs_2(unsigned long bytes, unsigned long *p1, unsigned long *p2) 96 + xor_32regs_2(unsigned long bytes, unsigned long * __restrict p1, 97 + const unsigned long * __restrict p2) 104 98 { 105 99 long lines = bytes / (sizeof (long)) / 8; 106 100 ··· 137 129 } 138 130 139 131 static void 140 - xor_32regs_3(unsigned long bytes, unsigned long *p1, unsigned long *p2, 141 - unsigned long *p3) 132 + 
xor_32regs_3(unsigned long bytes, unsigned long * __restrict p1, 133 + const unsigned long * __restrict p2, 134 + const unsigned long * __restrict p3) 142 135 { 143 136 long lines = bytes / (sizeof (long)) / 8; 144 137 ··· 184 175 } 185 176 186 177 static void 187 - xor_32regs_4(unsigned long bytes, unsigned long *p1, unsigned long *p2, 188 - unsigned long *p3, unsigned long *p4) 178 + xor_32regs_4(unsigned long bytes, unsigned long * __restrict p1, 179 + const unsigned long * __restrict p2, 180 + const unsigned long * __restrict p3, 181 + const unsigned long * __restrict p4) 189 182 { 190 183 long lines = bytes / (sizeof (long)) / 8; 191 184 ··· 241 230 } 242 231 243 232 static void 244 - xor_32regs_5(unsigned long bytes, unsigned long *p1, unsigned long *p2, 245 - unsigned long *p3, unsigned long *p4, unsigned long *p5) 233 + xor_32regs_5(unsigned long bytes, unsigned long * __restrict p1, 234 + const unsigned long * __restrict p2, 235 + const unsigned long * __restrict p3, 236 + const unsigned long * __restrict p4, 237 + const unsigned long * __restrict p5) 246 238 { 247 239 long lines = bytes / (sizeof (long)) / 8; 248 240 ··· 308 294 } 309 295 310 296 static void 311 - xor_8regs_p_2(unsigned long bytes, unsigned long *p1, unsigned long *p2) 297 + xor_8regs_p_2(unsigned long bytes, unsigned long * __restrict p1, 298 + const unsigned long * __restrict p2) 312 299 { 313 300 long lines = bytes / (sizeof (long)) / 8 - 1; 314 301 prefetchw(p1); ··· 335 320 } 336 321 337 322 static void 338 - xor_8regs_p_3(unsigned long bytes, unsigned long *p1, unsigned long *p2, 339 - unsigned long *p3) 323 + xor_8regs_p_3(unsigned long bytes, unsigned long * __restrict p1, 324 + const unsigned long * __restrict p2, 325 + const unsigned long * __restrict p3) 340 326 { 341 327 long lines = bytes / (sizeof (long)) / 8 - 1; 342 328 prefetchw(p1); ··· 366 350 } 367 351 368 352 static void 369 - xor_8regs_p_4(unsigned long bytes, unsigned long *p1, unsigned long *p2, 370 - unsigned long 
*p3, unsigned long *p4) 353 + xor_8regs_p_4(unsigned long bytes, unsigned long * __restrict p1, 354 + const unsigned long * __restrict p2, 355 + const unsigned long * __restrict p3, 356 + const unsigned long * __restrict p4) 371 357 { 372 358 long lines = bytes / (sizeof (long)) / 8 - 1; 373 359 ··· 402 384 } 403 385 404 386 static void 405 - xor_8regs_p_5(unsigned long bytes, unsigned long *p1, unsigned long *p2, 406 - unsigned long *p3, unsigned long *p4, unsigned long *p5) 387 + xor_8regs_p_5(unsigned long bytes, unsigned long * __restrict p1, 388 + const unsigned long * __restrict p2, 389 + const unsigned long * __restrict p3, 390 + const unsigned long * __restrict p4, 391 + const unsigned long * __restrict p5) 407 392 { 408 393 long lines = bytes / (sizeof (long)) / 8 - 1; 409 394 ··· 442 421 } 443 422 444 423 static void 445 - xor_32regs_p_2(unsigned long bytes, unsigned long *p1, unsigned long *p2) 424 + xor_32regs_p_2(unsigned long bytes, unsigned long * __restrict p1, 425 + const unsigned long * __restrict p2) 446 426 { 447 427 long lines = bytes / (sizeof (long)) / 8 - 1; 448 428 ··· 488 466 } 489 467 490 468 static void 491 - xor_32regs_p_3(unsigned long bytes, unsigned long *p1, unsigned long *p2, 492 - unsigned long *p3) 469 + xor_32regs_p_3(unsigned long bytes, unsigned long * __restrict p1, 470 + const unsigned long * __restrict p2, 471 + const unsigned long * __restrict p3) 493 472 { 494 473 long lines = bytes / (sizeof (long)) / 8 - 1; 495 474 ··· 546 523 } 547 524 548 525 static void 549 - xor_32regs_p_4(unsigned long bytes, unsigned long *p1, unsigned long *p2, 550 - unsigned long *p3, unsigned long *p4) 526 + xor_32regs_p_4(unsigned long bytes, unsigned long * __restrict p1, 527 + const unsigned long * __restrict p2, 528 + const unsigned long * __restrict p3, 529 + const unsigned long * __restrict p4) 551 530 { 552 531 long lines = bytes / (sizeof (long)) / 8 - 1; 553 532 ··· 616 591 } 617 592 618 593 static void 619 - xor_32regs_p_5(unsigned 
long bytes, unsigned long *p1, unsigned long *p2, 620 - unsigned long *p3, unsigned long *p4, unsigned long *p5) 594 + xor_32regs_p_5(unsigned long bytes, unsigned long * __restrict p1, 595 + const unsigned long * __restrict p2, 596 + const unsigned long * __restrict p3, 597 + const unsigned long * __restrict p4, 598 + const unsigned long * __restrict p5) 621 599 { 622 600 long lines = bytes / (sizeof (long)) / 8 - 1; 623 601
+8 -2
include/crypto/algapi.h
··· 13 13 #include <linux/list.h> 14 14 #include <linux/types.h> 15 15 16 + #include <asm/unaligned.h> 17 + 16 18 /* 17 19 * Maximum values for blocksize and alignmask, used to allocate 18 20 * static buffers that are big enough for any combination of ··· 156 154 (size % sizeof(unsigned long)) == 0) { 157 155 unsigned long *d = (unsigned long *)dst; 158 156 unsigned long *s = (unsigned long *)src; 157 + unsigned long l; 159 158 160 159 while (size > 0) { 161 - *d++ ^= *s++; 160 + l = get_unaligned(d) ^ get_unaligned(s++); 161 + put_unaligned(l, d++); 162 162 size -= sizeof(unsigned long); 163 163 } 164 164 } else { ··· 177 173 unsigned long *d = (unsigned long *)dst; 178 174 unsigned long *s1 = (unsigned long *)src1; 179 175 unsigned long *s2 = (unsigned long *)src2; 176 + unsigned long l; 180 177 181 178 while (size > 0) { 182 - *d++ = *s1++ ^ *s2++; 179 + l = get_unaligned(s1++) ^ get_unaligned(s2++); 180 + put_unaligned(l, d++); 183 181 size -= sizeof(unsigned long); 184 182 } 185 183 } else {
+19 -7
include/crypto/dh.h
··· 24 24 * 25 25 * @key: Private DH key 26 26 * @p: Diffie-Hellman parameter P 27 - * @q: Diffie-Hellman parameter Q 28 27 * @g: Diffie-Hellman generator G 29 28 * @key_size: Size of the private DH key 30 29 * @p_size: Size of DH parameter P 31 - * @q_size: Size of DH parameter Q 32 30 * @g_size: Size of DH generator G 33 31 */ 34 32 struct dh { 35 - void *key; 36 - void *p; 37 - void *q; 38 - void *g; 33 + const void *key; 34 + const void *p; 35 + const void *g; 39 36 unsigned int key_size; 40 37 unsigned int p_size; 41 - unsigned int q_size; 42 38 unsigned int g_size; 43 39 }; 44 40 ··· 78 82 * Return: -EINVAL if buffer has insufficient size, 0 on success 79 83 */ 80 84 int crypto_dh_decode_key(const char *buf, unsigned int len, struct dh *params); 85 + 86 + /** 87 + * __crypto_dh_decode_key() - decode a private key without parameter checks 88 + * @buf: Buffer holding a packet key that should be decoded 89 + * @len: Length of the packet private key buffer 90 + * @params: Buffer allocated by the caller that is filled with the 91 + * unpacked DH private key. 92 + * 93 + * Internal function providing the same services as the exported 94 + * crypto_dh_decode_key(), but without any of those basic parameter 95 + * checks conducted by the latter. 96 + * 97 + * Return: -EINVAL if buffer has insufficient size, 0 on success 98 + */ 99 + int __crypto_dh_decode_key(const char *buf, unsigned int len, 100 + struct dh *params); 81 101 82 102 #endif
+158
include/crypto/internal/kpp.h
··· 10 10 #include <crypto/kpp.h> 11 11 #include <crypto/algapi.h> 12 12 13 + /** 14 + * struct kpp_instance - KPP template instance 15 + * @free: Callback getting invoked upon instance destruction. Must be set. 16 + * @s: Internal. Generic crypto core instance state properly layout 17 + * to alias with @alg as needed. 18 + * @alg: The &struct kpp_alg implementation provided by the instance. 19 + */ 20 + struct kpp_instance { 21 + void (*free)(struct kpp_instance *inst); 22 + union { 23 + struct { 24 + char head[offsetof(struct kpp_alg, base)]; 25 + struct crypto_instance base; 26 + } s; 27 + struct kpp_alg alg; 28 + }; 29 + }; 30 + 31 + /** 32 + * struct crypto_kpp_spawn - KPP algorithm spawn 33 + * @base: Internal. Generic crypto core spawn state. 34 + * 35 + * Template instances can get a hold on some inner KPP algorithm by 36 + * binding a &struct crypto_kpp_spawn via 37 + * crypto_grab_kpp(). Transforms may subsequently get instantiated 38 + * from the referenced inner &struct kpp_alg by means of 39 + * crypto_spawn_kpp(). 40 + */ 41 + struct crypto_kpp_spawn { 42 + struct crypto_spawn base; 43 + }; 44 + 13 45 /* 14 46 * Transform internal helpers. 15 47 */ ··· 65 33 return crypto_kpp_tfm(tfm)->__crt_alg->cra_name; 66 34 } 67 35 36 + /* 37 + * Template instance internal helpers. 38 + */ 39 + /** 40 + * kpp_crypto_instance() - Cast a &struct kpp_instance to the corresponding 41 + * generic &struct crypto_instance. 42 + * @inst: Pointer to the &struct kpp_instance to be cast. 43 + * Return: A pointer to the &struct crypto_instance embedded in @inst. 44 + */ 45 + static inline struct crypto_instance *kpp_crypto_instance( 46 + struct kpp_instance *inst) 47 + { 48 + return &inst->s.base; 49 + } 50 + 51 + /** 52 + * kpp_instance() - Cast a generic &struct crypto_instance to the corresponding 53 + * &struct kpp_instance. 54 + * @inst: Pointer to the &struct crypto_instance to be cast. 55 + * Return: A pointer to the &struct kpp_instance @inst is embedded in. 
56 + */ 57 + static inline struct kpp_instance *kpp_instance(struct crypto_instance *inst) 58 + { 59 + return container_of(inst, struct kpp_instance, s.base); 60 + } 61 + 62 + /** 63 + * kpp_alg_instance() - Get the &struct kpp_instance a given KPP transform has 64 + * been instantiated from. 65 + * @kpp: The KPP transform instantiated from some &struct kpp_instance. 66 + * Return: The &struct kpp_instance associated with @kpp. 67 + */ 68 + static inline struct kpp_instance *kpp_alg_instance(struct crypto_kpp *kpp) 69 + { 70 + return kpp_instance(crypto_tfm_alg_instance(&kpp->base)); 71 + } 72 + 73 + /** 74 + * kpp_instance_ctx() - Get a pointer to a &struct kpp_instance's implementation 75 + * specific context data. 76 + * @inst: The &struct kpp_instance whose context data to access. 77 + * 78 + * A KPP template implementation may allocate extra memory beyond the 79 + * end of a &struct kpp_instance instantiated from &crypto_template.create(). 80 + * This function provides a means to obtain a pointer to this area. 81 + * 82 + * Return: A pointer to the implementation specific context data. 83 + */ 84 + static inline void *kpp_instance_ctx(struct kpp_instance *inst) 85 + { 86 + return crypto_instance_ctx(kpp_crypto_instance(inst)); 87 + } 88 + 89 + /* 90 + * KPP algorithm (un)registration functions. 91 + */ 68 92 /** 69 93 * crypto_register_kpp() -- Register key-agreement protocol primitives algorithm 70 94 * ··· 143 55 * @alg: algorithm definition 144 56 */ 145 57 void crypto_unregister_kpp(struct kpp_alg *alg); 58 + 59 + /** 60 + * kpp_register_instance() - Register a KPP template instance. 61 + * @tmpl: The instantiating template. 62 + * @inst: The KPP template instance to be registered. 63 + * Return: %0 on success, negative error code otherwise. 64 + */ 65 + int kpp_register_instance(struct crypto_template *tmpl, 66 + struct kpp_instance *inst); 67 + 68 + /* 69 + * KPP spawn related functions. 
70 + */ 71 + /** 72 + * crypto_grab_kpp() - Look up a KPP algorithm and bind a spawn to it. 73 + * @spawn: The KPP spawn to bind. 74 + * @inst: The template instance owning @spawn. 75 + * @name: The KPP algorithm name to look up. 76 + * @type: The type bitset to pass on to the lookup. 77 + * @mask: The mask bismask to pass on to the lookup. 78 + * Return: %0 on success, a negative error code otherwise. 79 + */ 80 + int crypto_grab_kpp(struct crypto_kpp_spawn *spawn, 81 + struct crypto_instance *inst, 82 + const char *name, u32 type, u32 mask); 83 + 84 + /** 85 + * crypto_drop_kpp() - Release a spawn previously bound via crypto_grab_kpp(). 86 + * @spawn: The spawn to release. 87 + */ 88 + static inline void crypto_drop_kpp(struct crypto_kpp_spawn *spawn) 89 + { 90 + crypto_drop_spawn(&spawn->base); 91 + } 92 + 93 + /** 94 + * crypto_spawn_kpp_alg() - Get the algorithm a KPP spawn has been bound to. 95 + * @spawn: The spawn to get the referenced &struct kpp_alg for. 96 + * 97 + * This function as well as the returned result are safe to use only 98 + * after @spawn has been successfully bound via crypto_grab_kpp() and 99 + * up to until the template instance owning @spawn has either been 100 + * registered successfully or the spawn has been released again via 101 + * crypto_drop_spawn(). 102 + * 103 + * Return: A pointer to the &struct kpp_alg referenced from the spawn. 104 + */ 105 + static inline struct kpp_alg *crypto_spawn_kpp_alg( 106 + struct crypto_kpp_spawn *spawn) 107 + { 108 + return container_of(spawn->base.alg, struct kpp_alg, base); 109 + } 110 + 111 + /** 112 + * crypto_spawn_kpp() - Create a transform from a KPP spawn. 113 + * @spawn: The spawn previously bound to some &struct kpp_alg via 114 + * crypto_grab_kpp(). 115 + * 116 + * Once a &struct crypto_kpp_spawn has been successfully bound to a 117 + * &struct kpp_alg via crypto_grab_kpp(), transforms for the latter 118 + * may get instantiated from the former by means of this function. 
119 + * 120 + * Return: A pointer to the freshly created KPP transform on success 121 + * or an ``ERR_PTR()`` otherwise. 122 + */ 123 + static inline struct crypto_kpp *crypto_spawn_kpp( 124 + struct crypto_kpp_spawn *spawn) 125 + { 126 + return crypto_spawn_tfm2(&spawn->base); 127 + } 146 128 147 129 #endif
+28 -6
include/crypto/sm3.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 1 2 /* 2 3 * Common values for SM3 algorithm 4 + * 5 + * Copyright (C) 2017 ARM Limited or its affiliates. 6 + * Copyright (C) 2017 Gilad Ben-Yossef <gilad@benyossef.com> 7 + * Copyright (C) 2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com> 3 8 */ 4 9 5 10 #ifndef _CRYPTO_SM3_H ··· 35 30 u8 buffer[SM3_BLOCK_SIZE]; 36 31 }; 37 32 38 - struct shash_desc; 33 + /* 34 + * Stand-alone implementation of the SM3 algorithm. It is designed to 35 + * have as little dependencies as possible so it can be used in the 36 + * kexec_file purgatory. In other cases you should generally use the 37 + * hash APIs from include/crypto/hash.h. Especially when hashing large 38 + * amounts of data as those APIs may be hw-accelerated. 39 + * 40 + * For details see lib/crypto/sm3.c 41 + */ 39 42 40 - extern int crypto_sm3_update(struct shash_desc *desc, const u8 *data, 41 - unsigned int len); 43 + static inline void sm3_init(struct sm3_state *sctx) 44 + { 45 + sctx->state[0] = SM3_IVA; 46 + sctx->state[1] = SM3_IVB; 47 + sctx->state[2] = SM3_IVC; 48 + sctx->state[3] = SM3_IVD; 49 + sctx->state[4] = SM3_IVE; 50 + sctx->state[5] = SM3_IVF; 51 + sctx->state[6] = SM3_IVG; 52 + sctx->state[7] = SM3_IVH; 53 + sctx->count = 0; 54 + } 42 55 43 - extern int crypto_sm3_final(struct shash_desc *desc, u8 *out); 56 + void sm3_update(struct sm3_state *sctx, const u8 *data, unsigned int len); 57 + void sm3_final(struct sm3_state *sctx, u8 *out); 44 58 45 - extern int crypto_sm3_finup(struct shash_desc *desc, const u8 *data, 46 - unsigned int len, u8 *hash); 47 59 #endif
+9
include/linux/crypto.h
··· 133 133 #define CRYPTO_ALG_ALLOCATES_MEMORY 0x00010000 134 134 135 135 /* 136 + * Mark an algorithm as a service implementation only usable by a 137 + * template and never by a normal user of the kernel crypto API. 138 + * This is intended to be used by algorithms that are themselves 139 + * not FIPS-approved but may instead be used to implement parts of 140 + * a FIPS-approved algorithm (e.g., dh vs. ffdhe2048(dh)). 141 + */ 142 + #define CRYPTO_ALG_FIPS_INTERNAL 0x00020000 143 + 144 + /* 136 145 * Transform masks and values (for crt_flags). 137 146 */ 138 147 #define CRYPTO_TFM_NEED_KEY 0x00000001
+8
include/linux/firmware/xlnx-zynqmp.h
··· 93 93 PM_FPGA_LOAD = 22, 94 94 PM_FPGA_GET_STATUS = 23, 95 95 PM_GET_CHIPID = 24, 96 + PM_SECURE_SHA = 26, 96 97 PM_PINCTRL_REQUEST = 28, 97 98 PM_PINCTRL_RELEASE = 29, 98 99 PM_PINCTRL_GET_FUNCTION = 30, ··· 428 427 const u32 qos, 429 428 const enum zynqmp_pm_request_ack ack); 430 429 int zynqmp_pm_aes_engine(const u64 address, u32 *out); 430 + int zynqmp_pm_sha_hash(const u64 address, const u32 size, const u32 flags); 431 431 int zynqmp_pm_fpga_load(const u64 address, const u32 size, const u32 flags); 432 432 int zynqmp_pm_fpga_get_status(u32 *value); 433 433 int zynqmp_pm_write_ggs(u32 index, u32 value); ··· 599 597 } 600 598 601 599 static inline int zynqmp_pm_aes_engine(const u64 address, u32 *out) 600 + { 601 + return -ENODEV; 602 + } 603 + 604 + static inline int zynqmp_pm_sha_hash(const u64 address, const u32 size, 605 + const u32 flags) 602 606 { 603 607 return -ENODEV; 604 608 }
+14 -7
include/linux/raid/xor.h
··· 11 11 struct xor_block_template *next; 12 12 const char *name; 13 13 int speed; 14 - void (*do_2)(unsigned long, unsigned long *, unsigned long *); 15 - void (*do_3)(unsigned long, unsigned long *, unsigned long *, 16 - unsigned long *); 17 - void (*do_4)(unsigned long, unsigned long *, unsigned long *, 18 - unsigned long *, unsigned long *); 19 - void (*do_5)(unsigned long, unsigned long *, unsigned long *, 20 - unsigned long *, unsigned long *, unsigned long *); 14 + void (*do_2)(unsigned long, unsigned long * __restrict, 15 + const unsigned long * __restrict); 16 + void (*do_3)(unsigned long, unsigned long * __restrict, 17 + const unsigned long * __restrict, 18 + const unsigned long * __restrict); 19 + void (*do_4)(unsigned long, unsigned long * __restrict, 20 + const unsigned long * __restrict, 21 + const unsigned long * __restrict, 22 + const unsigned long * __restrict); 23 + void (*do_5)(unsigned long, unsigned long * __restrict, 24 + const unsigned long * __restrict, 25 + const unsigned long * __restrict, 26 + const unsigned long * __restrict, 27 + const unsigned long * __restrict); 21 28 }; 22 29 23 30 #endif
+1 -1
kernel/padata.c
···
 		goto out;
 
 	if (!cpumask_test_cpu(*cb_cpu, pd->cpumask.cbcpu)) {
-		if (!cpumask_weight(pd->cpumask.cbcpu))
+		if (cpumask_empty(pd->cpumask.cbcpu))
 			goto out;
 
 		/* Select an alternate fallback CPU and notify the caller. */
+6 -8
lib/crc32.c
···
 #else
 u32 __pure __weak crc32_le(u32 crc, unsigned char const *p, size_t len)
 {
-	return crc32_le_generic(crc, p, len,
-			(const u32 (*)[256])crc32table_le, CRC32_POLY_LE);
+	return crc32_le_generic(crc, p, len, crc32table_le, CRC32_POLY_LE);
 }
 u32 __pure __weak __crc32c_le(u32 crc, unsigned char const *p, size_t len)
 {
-	return crc32_le_generic(crc, p, len,
-			(const u32 (*)[256])crc32ctable_le, CRC32C_POLY_LE);
+	return crc32_le_generic(crc, p, len, crc32ctable_le, CRC32C_POLY_LE);
 }
 #endif
 EXPORT_SYMBOL(crc32_le);
···
 
 u32 __pure crc32_le_base(u32, unsigned char const *, size_t) __alias(crc32_le);
 u32 __pure __crc32c_le_base(u32, unsigned char const *, size_t) __alias(__crc32c_le);
+u32 __pure crc32_be_base(u32, unsigned char const *, size_t) __alias(crc32_be);
 
 /*
  * This multiplies the polynomials x and y modulo the given modulus.
···
 }
 
 #if CRC_BE_BITS == 1
-u32 __pure crc32_be(u32 crc, unsigned char const *p, size_t len)
+u32 __pure __weak crc32_be(u32 crc, unsigned char const *p, size_t len)
 {
 	return crc32_be_generic(crc, p, len, NULL, CRC32_POLY_BE);
 }
 #else
-u32 __pure crc32_be(u32 crc, unsigned char const *p, size_t len)
+u32 __pure __weak crc32_be(u32 crc, unsigned char const *p, size_t len)
 {
-	return crc32_be_generic(crc, p, len,
-			(const u32 (*)[256])crc32table_be, CRC32_POLY_BE);
+	return crc32_be_generic(crc, p, len, crc32table_be, CRC32_POLY_BE);
 }
 #endif
 EXPORT_SYMBOL(crc32_be);
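Making `crc32_be` weak (with a `crc32_be_base` alias) lets an architecture, here arm64, override it with an accelerated version while the generic code remains as fallback. For reference, big-endian CRC-32 processes each message byte MSB-first against the polynomial 0x04c11db7. A standalone sketch (function names are hypothetical, not the kernel's code): a bit-at-a-time routine in the style of the `CRC_BE_BITS == 1` path, cross-checked against a byte-at-a-time variant using a runtime-built 256-entry table, which is the strategy the tabled path above simplifies.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define CRC32_POLY_BE 0x04c11db7u

/* Bit-at-a-time big-endian CRC-32: shift out the top bit, fold in the
 * polynomial when that bit was set. */
static uint32_t crc32_be_bitwise(uint32_t crc, const unsigned char *p,
				 size_t len)
{
	int i;

	while (len--) {
		crc ^= (uint32_t)*p++ << 24;
		for (i = 0; i < 8; i++)
			crc = (crc << 1) ^
			      ((crc & 0x80000000u) ? CRC32_POLY_BE : 0);
	}
	return crc;
}

/* Byte-at-a-time variant: each table entry is the bitwise CRC of one
 * byte value, so the two functions must agree on any input. */
static uint32_t crc32_be_byte(uint32_t crc, const unsigned char *p,
			      size_t len)
{
	static uint32_t tab[256];
	uint32_t j, k;

	if (!tab[1]) {
		for (j = 0; j < 256; j++) {
			uint32_t e = j << 24;

			for (k = 0; k < 8; k++)
				e = (e << 1) ^
				    ((e & 0x80000000u) ? CRC32_POLY_BE : 0);
			tab[j] = e;
		}
	}
	while (len--)
		crc = (crc << 8) ^ tab[(crc >> 24) ^ *p++];
	return crc;
}
```

The table update `crc = (crc << 8) ^ tab[(crc >> 24) ^ byte]` follows from the linearity of CRC: folding eight bits at once through a precomputed entry is equivalent to eight single-bit steps.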
+1 -1
lib/crc32test.c
···
 
 	/* pre-warm the cache */
 	for (i = 0; i < 100; i++) {
-		bytes += 2*test[i].length;
+		bytes += test[i].length;
 
 		crc ^= __crc32c_le(test[i].crc, test_buf +
 		    test[i].start, test[i].length);
+3
lib/crypto/Kconfig
···
 config CRYPTO_LIB_SHA256
 	tristate
 
+config CRYPTO_LIB_SM3
+	tristate
+
 config CRYPTO_LIB_SM4
 	tristate
 
+3
lib/crypto/Makefile
···
 obj-$(CONFIG_CRYPTO_LIB_SHA256)		+= libsha256.o
 libsha256-y				:= sha256.o
 
+obj-$(CONFIG_CRYPTO_LIB_SM3)		+= libsm3.o
+libsm3-y				:= sm3.o
+
 obj-$(CONFIG_CRYPTO_LIB_SM4)		+= libsm4.o
 libsm4-y				:= sm4.o
 
+246
lib/crypto/sm3.c
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * SM3 secure hash, as specified by OSCCA GM/T 0004-2012 SM3 and described
+ * at https://datatracker.ietf.org/doc/html/draft-sca-cfrg-sm3-02
+ *
+ * Copyright (C) 2017 ARM Limited or its affiliates.
+ * Copyright (C) 2017 Gilad Ben-Yossef <gilad@benyossef.com>
+ * Copyright (C) 2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
+ */
+
+#include <linux/module.h>
+#include <asm/unaligned.h>
+#include <crypto/sm3.h>
+
+static const u32 ____cacheline_aligned K[64] = {
+	0x79cc4519, 0xf3988a32, 0xe7311465, 0xce6228cb,
+	0x9cc45197, 0x3988a32f, 0x7311465e, 0xe6228cbc,
+	0xcc451979, 0x988a32f3, 0x311465e7, 0x6228cbce,
+	0xc451979c, 0x88a32f39, 0x11465e73, 0x228cbce6,
+	0x9d8a7a87, 0x3b14f50f, 0x7629ea1e, 0xec53d43c,
+	0xd8a7a879, 0xb14f50f3, 0x629ea1e7, 0xc53d43ce,
+	0x8a7a879d, 0x14f50f3b, 0x29ea1e76, 0x53d43cec,
+	0xa7a879d8, 0x4f50f3b1, 0x9ea1e762, 0x3d43cec5,
+	0x7a879d8a, 0xf50f3b14, 0xea1e7629, 0xd43cec53,
+	0xa879d8a7, 0x50f3b14f, 0xa1e7629e, 0x43cec53d,
+	0x879d8a7a, 0x0f3b14f5, 0x1e7629ea, 0x3cec53d4,
+	0x79d8a7a8, 0xf3b14f50, 0xe7629ea1, 0xcec53d43,
+	0x9d8a7a87, 0x3b14f50f, 0x7629ea1e, 0xec53d43c,
+	0xd8a7a879, 0xb14f50f3, 0x629ea1e7, 0xc53d43ce,
+	0x8a7a879d, 0x14f50f3b, 0x29ea1e76, 0x53d43cec,
+	0xa7a879d8, 0x4f50f3b1, 0x9ea1e762, 0x3d43cec5
+};
+
+/*
+ * Transform the message X which consists of 16 32-bit-words. See
+ * GM/T 004-2012 for details.
+ */
+#define R(i, a, b, c, d, e, f, g, h, t, w1, w2)			\
+	do {							\
+		ss1 = rol32((rol32((a), 12) + (e) + (t)), 7);	\
+		ss2 = ss1 ^ rol32((a), 12);			\
+		d += FF ## i(a, b, c) + ss2 + ((w1) ^ (w2));	\
+		h += GG ## i(e, f, g) + ss1 + (w1);		\
+		b = rol32((b), 9);				\
+		f = rol32((f), 19);				\
+		h = P0((h));					\
+	} while (0)
+
+#define R1(a, b, c, d, e, f, g, h, t, w1, w2) \
+	R(1, a, b, c, d, e, f, g, h, t, w1, w2)
+#define R2(a, b, c, d, e, f, g, h, t, w1, w2) \
+	R(2, a, b, c, d, e, f, g, h, t, w1, w2)
+
+#define FF1(x, y, z)  (x ^ y ^ z)
+#define FF2(x, y, z)  ((x & y) | (x & z) | (y & z))
+
+#define GG1(x, y, z)  FF1(x, y, z)
+#define GG2(x, y, z)  ((x & y) | (~x & z))
+
+/* Message expansion */
+#define P0(x) ((x) ^ rol32((x), 9) ^ rol32((x), 17))
+#define P1(x) ((x) ^ rol32((x), 15) ^ rol32((x), 23))
+#define I(i)  (W[i] = get_unaligned_be32(data + i * 4))
+#define W1(i) (W[i & 0x0f])
+#define W2(i) (W[i & 0x0f] =				\
+		P1(W[i & 0x0f]				\
+			^ W[(i-9) & 0x0f]		\
+			^ rol32(W[(i-3) & 0x0f], 15))	\
+		^ rol32(W[(i-13) & 0x0f], 7)		\
+		^ W[(i-6) & 0x0f])
+
+static void sm3_transform(struct sm3_state *sctx, u8 const *data, u32 W[16])
+{
+	u32 a, b, c, d, e, f, g, h, ss1, ss2;
+
+	a = sctx->state[0];
+	b = sctx->state[1];
+	c = sctx->state[2];
+	d = sctx->state[3];
+	e = sctx->state[4];
+	f = sctx->state[5];
+	g = sctx->state[6];
+	h = sctx->state[7];
+
+	R1(a, b, c, d, e, f, g, h, K[0], I(0), I(4));
+	R1(d, a, b, c, h, e, f, g, K[1], I(1), I(5));
+	R1(c, d, a, b, g, h, e, f, K[2], I(2), I(6));
+	R1(b, c, d, a, f, g, h, e, K[3], I(3), I(7));
+	R1(a, b, c, d, e, f, g, h, K[4], W1(4), I(8));
+	R1(d, a, b, c, h, e, f, g, K[5], W1(5), I(9));
+	R1(c, d, a, b, g, h, e, f, K[6], W1(6), I(10));
+	R1(b, c, d, a, f, g, h, e, K[7], W1(7), I(11));
+	R1(a, b, c, d, e, f, g, h, K[8], W1(8), I(12));
+	R1(d, a, b, c, h, e, f, g, K[9], W1(9), I(13));
+	R1(c, d, a, b, g, h, e, f, K[10], W1(10), I(14));
+	R1(b, c, d, a, f, g, h, e, K[11], W1(11), I(15));
+	R1(a, b, c, d, e, f, g, h, K[12], W1(12), W2(16));
+	R1(d, a, b, c, h, e, f, g, K[13], W1(13), W2(17));
+	R1(c, d, a, b, g, h, e, f, K[14], W1(14), W2(18));
+	R1(b, c, d, a, f, g, h, e, K[15], W1(15), W2(19));
+
+	R2(a, b, c, d, e, f, g, h, K[16], W1(16), W2(20));
+	R2(d, a, b, c, h, e, f, g, K[17], W1(17), W2(21));
+	R2(c, d, a, b, g, h, e, f, K[18], W1(18), W2(22));
+	R2(b, c, d, a, f, g, h, e, K[19], W1(19), W2(23));
+	R2(a, b, c, d, e, f, g, h, K[20], W1(20), W2(24));
+	R2(d, a, b, c, h, e, f, g, K[21], W1(21), W2(25));
+	R2(c, d, a, b, g, h, e, f, K[22], W1(22), W2(26));
+	R2(b, c, d, a, f, g, h, e, K[23], W1(23), W2(27));
+	R2(a, b, c, d, e, f, g, h, K[24], W1(24), W2(28));
+	R2(d, a, b, c, h, e, f, g, K[25], W1(25), W2(29));
+	R2(c, d, a, b, g, h, e, f, K[26], W1(26), W2(30));
+	R2(b, c, d, a, f, g, h, e, K[27], W1(27), W2(31));
+	R2(a, b, c, d, e, f, g, h, K[28], W1(28), W2(32));
+	R2(d, a, b, c, h, e, f, g, K[29], W1(29), W2(33));
+	R2(c, d, a, b, g, h, e, f, K[30], W1(30), W2(34));
+	R2(b, c, d, a, f, g, h, e, K[31], W1(31), W2(35));
+
+	R2(a, b, c, d, e, f, g, h, K[32], W1(32), W2(36));
+	R2(d, a, b, c, h, e, f, g, K[33], W1(33), W2(37));
+	R2(c, d, a, b, g, h, e, f, K[34], W1(34), W2(38));
+	R2(b, c, d, a, f, g, h, e, K[35], W1(35), W2(39));
+	R2(a, b, c, d, e, f, g, h, K[36], W1(36), W2(40));
+	R2(d, a, b, c, h, e, f, g, K[37], W1(37), W2(41));
+	R2(c, d, a, b, g, h, e, f, K[38], W1(38), W2(42));
+	R2(b, c, d, a, f, g, h, e, K[39], W1(39), W2(43));
+	R2(a, b, c, d, e, f, g, h, K[40], W1(40), W2(44));
+	R2(d, a, b, c, h, e, f, g, K[41], W1(41), W2(45));
+	R2(c, d, a, b, g, h, e, f, K[42], W1(42), W2(46));
+	R2(b, c, d, a, f, g, h, e, K[43], W1(43), W2(47));
+	R2(a, b, c, d, e, f, g, h, K[44], W1(44), W2(48));
+	R2(d, a, b, c, h, e, f, g, K[45], W1(45), W2(49));
+	R2(c, d, a, b, g, h, e, f, K[46], W1(46), W2(50));
+	R2(b, c, d, a, f, g, h, e, K[47], W1(47), W2(51));
+
+	R2(a, b, c, d, e, f, g, h, K[48], W1(48), W2(52));
+	R2(d, a, b, c, h, e, f, g, K[49], W1(49), W2(53));
+	R2(c, d, a, b, g, h, e, f, K[50], W1(50), W2(54));
+	R2(b, c, d, a, f, g, h, e, K[51], W1(51), W2(55));
+	R2(a, b, c, d, e, f, g, h, K[52], W1(52), W2(56));
+	R2(d, a, b, c, h, e, f, g, K[53], W1(53), W2(57));
+	R2(c, d, a, b, g, h, e, f, K[54], W1(54), W2(58));
+	R2(b, c, d, a, f, g, h, e, K[55], W1(55), W2(59));
+	R2(a, b, c, d, e, f, g, h, K[56], W1(56), W2(60));
+	R2(d, a, b, c, h, e, f, g, K[57], W1(57), W2(61));
+	R2(c, d, a, b, g, h, e, f, K[58], W1(58), W2(62));
+	R2(b, c, d, a, f, g, h, e, K[59], W1(59), W2(63));
+	R2(a, b, c, d, e, f, g, h, K[60], W1(60), W2(64));
+	R2(d, a, b, c, h, e, f, g, K[61], W1(61), W2(65));
+	R2(c, d, a, b, g, h, e, f, K[62], W1(62), W2(66));
+	R2(b, c, d, a, f, g, h, e, K[63], W1(63), W2(67));
+
+	sctx->state[0] ^= a;
+	sctx->state[1] ^= b;
+	sctx->state[2] ^= c;
+	sctx->state[3] ^= d;
+	sctx->state[4] ^= e;
+	sctx->state[5] ^= f;
+	sctx->state[6] ^= g;
+	sctx->state[7] ^= h;
+}
+#undef R
+#undef R1
+#undef R2
+#undef I
+#undef W1
+#undef W2
+
+static inline void sm3_block(struct sm3_state *sctx,
+		u8 const *data, int blocks, u32 W[16])
+{
+	while (blocks--) {
+		sm3_transform(sctx, data, W);
+		data += SM3_BLOCK_SIZE;
+	}
+}
+
+void sm3_update(struct sm3_state *sctx, const u8 *data, unsigned int len)
+{
+	unsigned int partial = sctx->count % SM3_BLOCK_SIZE;
+	u32 W[16];
+
+	sctx->count += len;
+
+	if ((partial + len) >= SM3_BLOCK_SIZE) {
+		int blocks;
+
+		if (partial) {
+			int p = SM3_BLOCK_SIZE - partial;
+
+			memcpy(sctx->buffer + partial, data, p);
+			data += p;
+			len -= p;
+
+			sm3_block(sctx, sctx->buffer, 1, W);
+		}
+
+		blocks = len / SM3_BLOCK_SIZE;
+		len %= SM3_BLOCK_SIZE;
+
+		if (blocks) {
+			sm3_block(sctx, data, blocks, W);
+			data += blocks * SM3_BLOCK_SIZE;
+		}
+
+		memzero_explicit(W, sizeof(W));
+
+		partial = 0;
+	}
+	if (len)
+		memcpy(sctx->buffer + partial, data, len);
+}
+EXPORT_SYMBOL_GPL(sm3_update);
+
+void sm3_final(struct sm3_state *sctx, u8 *out)
+{
+	const int bit_offset = SM3_BLOCK_SIZE - sizeof(u64);
+	__be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+	__be32 *digest = (__be32 *)out;
+	unsigned int partial = sctx->count % SM3_BLOCK_SIZE;
+	u32 W[16];
+	int i;
+
+	sctx->buffer[partial++] = 0x80;
+	if (partial > bit_offset) {
+		memset(sctx->buffer + partial, 0, SM3_BLOCK_SIZE - partial);
+		partial = 0;
+
+		sm3_block(sctx, sctx->buffer, 1, W);
+	}
+
+	memset(sctx->buffer + partial, 0, bit_offset - partial);
+	*bits = cpu_to_be64(sctx->count << 3);
+	sm3_block(sctx, sctx->buffer, 1, W);
+
+	for (i = 0; i < 8; i++)
+		put_unaligned_be32(sctx->state[i], digest++);
+
+	/* Zeroize sensitive information. */
+	memzero_explicit(W, sizeof(W));
+	memzero_explicit(sctx, sizeof(*sctx));
+}
+EXPORT_SYMBOL_GPL(sm3_final);
+
+MODULE_DESCRIPTION("Generic SM3 library");
+MODULE_LICENSE("GPL v2");
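Outside the kernel, the same algorithm is easy to sanity-check. A single-block userspace sketch (the function name `sm3_once` is hypothetical; it handles only messages up to 55 bytes so one padded block suffices) that follows the GM/T 0004-2012 round function written out explicitly rather than via the macro-unrolled form above, and reproduces the standard "abc" test vector:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static uint32_t rol(uint32_t x, int n)
{
	n &= 31;
	return n ? (x << n) | (x >> (32 - n)) : x;
}

#define P0(x) ((x) ^ rol((x), 9) ^ rol((x), 17))
#define P1(x) ((x) ^ rol((x), 15) ^ rol((x), 23))

/* One-shot SM3 for messages of at most 55 bytes (one padded block). */
static void sm3_once(const uint8_t *msg, size_t len, uint8_t out[32])
{
	uint32_t V[8] = { 0x7380166f, 0x4914b2b9, 0x172442d7, 0xda8a0600,
			  0xa96f30bc, 0x163138aa, 0xe38dee4d, 0xb0fb0e4e };
	uint8_t blk[64] = { 0 };
	uint32_t W[68], Wp[64];
	uint32_t A, B, C, D, E, F, G, H;
	uint64_t bits = (uint64_t)len * 8;
	int j;

	/* Pad: message, 0x80, zeroes, 64-bit big-endian bit count. */
	memcpy(blk, msg, len);
	blk[len] = 0x80;
	for (j = 0; j < 8; j++)
		blk[63 - j] = (uint8_t)(bits >> (8 * j));

	/* Message expansion: W[0..67], then W'[j] = W[j] ^ W[j+4]. */
	for (j = 0; j < 16; j++)
		W[j] = ((uint32_t)blk[4 * j] << 24) | (blk[4 * j + 1] << 16) |
		       (blk[4 * j + 2] << 8) | blk[4 * j + 3];
	for (j = 16; j < 68; j++)
		W[j] = P1(W[j - 16] ^ W[j - 9] ^ rol(W[j - 3], 15)) ^
		       rol(W[j - 13], 7) ^ W[j - 6];
	for (j = 0; j < 64; j++)
		Wp[j] = W[j] ^ W[j + 4];

	A = V[0]; B = V[1]; C = V[2]; D = V[3];
	E = V[4]; F = V[5]; G = V[6]; H = V[7];

	/* 64 compression rounds; K[j] above equals rol(T, j). */
	for (j = 0; j < 64; j++) {
		uint32_t T = (j < 16) ? 0x79cc4519u : 0x7a879d8au;
		uint32_t SS1 = rol(rol(A, 12) + E + rol(T, j), 7);
		uint32_t SS2 = SS1 ^ rol(A, 12);
		uint32_t FF = (j < 16) ? (A ^ B ^ C)
				       : ((A & B) | (A & C) | (B & C));
		uint32_t GG = (j < 16) ? (E ^ F ^ G)
				       : ((E & F) | (~E & G));
		uint32_t TT1 = FF + D + SS2 + Wp[j];
		uint32_t TT2 = GG + H + SS1 + W[j];

		D = C; C = rol(B, 9); B = A; A = TT1;
		H = G; G = rol(F, 19); F = E; E = P0(TT2);
	}

	V[0] ^= A; V[1] ^= B; V[2] ^= C; V[3] ^= D;
	V[4] ^= E; V[5] ^= F; V[6] ^= G; V[7] ^= H;

	for (j = 0; j < 8; j++) {
		out[4 * j]     = (uint8_t)(V[j] >> 24);
		out[4 * j + 1] = (uint8_t)(V[j] >> 16);
		out[4 * j + 2] = (uint8_t)(V[j] >> 8);
		out[4 * j + 3] = (uint8_t)V[j];
	}
}
```

The kernel file implements exactly this schedule, but keeps only a 16-word sliding window for W (the `W1`/`W2` macros index modulo 16) and fuses the register rotation into the argument order of the `R1`/`R2` macros.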
+1
lib/mpi/mpi-bit.c
···
 	}
 	MPN_NORMALIZE(x->d, x->nlimbs);
 }
+EXPORT_SYMBOL_GPL(mpi_rshift);
 
 /****************
  * Shift A by COUNT limbs to the left
+1 -1
security/keys/dh.c
···
 #include <keys/user-type.h>
 #include "internal.h"
 
-static ssize_t dh_data_from_key(key_serial_t keyid, void **data)
+static ssize_t dh_data_from_key(key_serial_t keyid, const void **data)
 {
 	struct key *key;
 	key_ref_t key_ref;