Merge tag 'v6.1-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
 "API:
   - Feed untrusted RNGs into /dev/random
   - Allow HWRNG sleeping to be more interruptible
   - Create lib/utils module
   - Setting private keys no longer required for akcipher
   - Remove tcrypt mode=1000
   - Reorganised Kconfig entries

  Algorithms:
   - Load x86/sha512 based on CPU features
   - Add AES-NI/AVX/x86_64/GFNI assembler implementation of aria cipher

  Drivers:
   - Add HACE crypto driver aspeed"

* tag 'v6.1-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (124 commits)
  crypto: aspeed - Remove redundant dev_err call
  crypto: scatterwalk - Remove unused inline function scatterwalk_aligned()
  crypto: aead - Remove unused inline functions from aead
  crypto: bcm - Simplify obtain the name for cipher
  crypto: marvell/octeontx - use sysfs_emit() to instead of scnprintf()
  hwrng: core - start hwrng kthread also for untrusted sources
  crypto: zip - remove the unneeded result variable
  crypto: qat - add limit to linked list parsing
  crypto: octeontx2 - Remove the unneeded result variable
  crypto: ccp - Remove the unneeded result variable
  crypto: aspeed - Fix check for platform_get_irq() errors
  crypto: virtio - fix memory-leak
  crypto: cavium - prevent integer overflow loading firmware
  crypto: marvell/octeontx - prevent integer overflows
  crypto: aspeed - fix build error when only CRYPTO_DEV_ASPEED is enabled
  crypto: hisilicon/qm - fix the qos value initialization
  crypto: sun4i-ss - use DEFINE_SHOW_ATTRIBUTE to simplify sun4i_ss_debugfs
  crypto: tcrypt - add async speed test for aria cipher
  crypto: aria-avx - add AES-NI/AVX/x86_64/GFNI assembler implementation of aria cipher
  crypto: aria - prepare generic module for optimized implementations
  ...
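The ARIA items above (the new aria-avx driver and its tcrypt speed test) are reached through the regular skcipher API: with CONFIG_CRYPTO_ARIA_AESNI_AVX_X86_64 enabled, requesting "ctr(aria)" should resolve to the accelerated implementation. A minimal, hypothetical kernel-side sketch follows; the helper name and the trimmed error handling are illustrative, not part of this merge, and the key must be a valid 16/24/32-byte ARIA key.

#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>

/* Illustrative helper, not from the merge: one synchronous CTR pass. */
static int aria_ctr_oneshot(const u8 *key, unsigned int keylen, u8 *iv,
			    struct scatterlist *src, struct scatterlist *dst,
			    unsigned int len)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	tfm = crypto_alloc_skcipher("ctr(aria)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_skcipher_setkey(tfm, key, keylen);
	if (err)
		goto out_free_tfm;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}

	/* Wait synchronously; the backend may complete asynchronously. */
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, src, dst, len, iv);
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
out_free_tfm:
	crypto_free_skcipher(tfm);
	return err;
}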
Commit 3604a7f568 by Linus Torvalds, 2022-10-10 13:04:25 -07:00
116 changed files with 9140 additions and 3057 deletions

New file: Documentation/devicetree/bindings/crypto/aspeed,ast2500-hace.yaml (53 lines)
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/crypto/aspeed,ast2500-hace.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: ASPEED HACE hash and crypto Hardware Accelerator Engines

maintainers:
  - Neal Liu <neal_liu@aspeedtech.com>

description: |
  The Hash and Crypto Engine (HACE) is designed to accelerate the throughput
  of hash data digest, encryption, and decryption. Basically, HACE can be
  divided into two independent engines - Hash Engine and Crypto Engine.

properties:
  compatible:
    enum:
      - aspeed,ast2500-hace
      - aspeed,ast2600-hace

  reg:
    maxItems: 1

  clocks:
    maxItems: 1

  interrupts:
    maxItems: 1

  resets:
    maxItems: 1

required:
  - compatible
  - reg
  - clocks
  - interrupts
  - resets

additionalProperties: false

examples:
  - |
    #include <dt-bindings/clock/ast2600-clock.h>
    hace: crypto@1e6d0000 {
        compatible = "aspeed,ast2600-hace";
        reg = <0x1e6d0000 0x200>;
        interrupts = <4>;
        clocks = <&syscon ASPEED_CLK_GATE_YCLK>;
        resets = <&syscon ASPEED_RESET_HACE>;
    };
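For context, a devicetree node matching this binding is picked up by a platform driver keyed on the compatible strings above. The skeleton below is only a hedged sketch of that matching (the real driver is the new drivers/crypto/aspeed/ code added by this merge); the driver and function names here are hypothetical, and resource acquisition is left as a comment.

#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static int hace_example_probe(struct platform_device *pdev)
{
	/* A real driver would map "reg", get the clock and reset, and request the IRQ here. */
	return 0;
}

static const struct of_device_id hace_example_of_match[] = {
	{ .compatible = "aspeed,ast2500-hace" },
	{ .compatible = "aspeed,ast2600-hace" },
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, hace_example_of_match);

static struct platform_driver hace_example_driver = {
	.probe = hace_example_probe,
	.driver = {
		.name = "aspeed-hace-example",
		.of_match_table = hace_example_of_match,
	},
};
module_platform_driver(hace_example_driver);

MODULE_DESCRIPTION("Skeleton showing how the HACE compatible strings are matched");
MODULE_LICENSE("GPL");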

@@ -89,9 +89,8 @@ context. In a typical workflow, this command should be the first command issued.
 The firmware can be initialized either by using its own non-volatile storage or
 the OS can manage the NV storage for the firmware using the module parameter
-``init_ex_path``. The file specified by ``init_ex_path`` must exist. To create
-a new NV storage file allocate the file with 32KB bytes of 0xFF as required by
-the SEV spec.
+``init_ex_path``. If the file specified by ``init_ex_path`` does not exist or
+is invalid, the OS will create or override the file with output from PSP.
 
 Returns: 0 on success, -negative on error

MAINTAINERS:
@@ -3237,6 +3237,13 @@ S: Maintained
 F: Documentation/devicetree/bindings/usb/aspeed,ast2600-udc.yaml
 F: drivers/usb/gadget/udc/aspeed_udc.c
 
+ASPEED CRYPTO DRIVER
+M: Neal Liu <neal_liu@aspeedtech.com>
+L: linux-aspeed@lists.ozlabs.org (moderated for non-subscribers)
+S: Maintained
+F: Documentation/devicetree/bindings/crypto/aspeed,ast2500-hace.yaml
+F: drivers/crypto/aspeed/
+
 ASUS NOTEBOOKS AND EEEPC ACPI/WMI EXTRAS DRIVERS
 M: Corentin Chary <corentin.chary@gmail.com>
 L: acpi4asus-user@lists.sourceforge.net

arch/arm/Kconfig:
@@ -1850,8 +1850,4 @@ config ARCH_HIBERNATION_POSSIBLE
 endmenu
 
-if CRYPTO
-source "arch/arm/crypto/Kconfig"
-endif
-
 source "arch/arm/Kconfig.assembler"

@@ -262,6 +262,14 @@
 			quality = <100>;
 		};
 
+		hace: crypto@1e6e3000 {
+			compatible = "aspeed,ast2500-hace";
+			reg = <0x1e6e3000 0x100>;
+			interrupts = <4>;
+			clocks = <&syscon ASPEED_CLK_GATE_YCLK>;
+			resets = <&syscon ASPEED_RESET_HACE>;
+		};
+
 		gfx: display@1e6e6000 {
 			compatible = "aspeed,ast2500-gfx", "syscon";
 			reg = <0x1e6e6000 0x1000>;

@@ -323,6 +323,14 @@
 		#size-cells = <1>;
 		ranges;
 
+		hace: crypto@1e6d0000 {
+			compatible = "aspeed,ast2600-hace";
+			reg = <0x1e6d0000 0x200>;
+			interrupts = <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>;
+			clocks = <&syscon ASPEED_CLK_GATE_YCLK>;
+			resets = <&syscon ASPEED_RESET_HACE>;
+		};
+
 		syscon: syscon@1e6e2000 {
 			compatible = "aspeed,ast2600-scu", "syscon", "simple-mfd";
 			reg = <0x1e6e2000 0x1000>;

@@ -32,7 +32,6 @@ CONFIG_KERNEL_MODE_NEON=y
 CONFIG_PM_DEBUG=y
 CONFIG_PM_ADVANCED_DEBUG=y
 CONFIG_ENERGY_MODEL=y
-CONFIG_ARM_CRYPTO=y
 CONFIG_CRYPTO_SHA1_ARM_NEON=m
 CONFIG_CRYPTO_SHA256_ARM=m
 CONFIG_CRYPTO_SHA512_ARM=m

@@ -44,7 +44,6 @@ CONFIG_ARM_CPUIDLE=y
 CONFIG_VFP=y
 CONFIG_NEON=y
 CONFIG_KERNEL_MODE_NEON=y
-CONFIG_ARM_CRYPTO=y
 CONFIG_CRYPTO_SHA1_ARM_NEON=m
 CONFIG_CRYPTO_SHA1_ARM_CE=m
 CONFIG_CRYPTO_SHA2_ARM_CE=m

@@ -132,7 +132,6 @@ CONFIG_ARM_EXYNOS_CPUIDLE=y
 CONFIG_ARM_TEGRA_CPUIDLE=y
 CONFIG_ARM_QCOM_SPM_CPUIDLE=y
 CONFIG_KERNEL_MODE_NEON=y
-CONFIG_ARM_CRYPTO=y
 CONFIG_CRYPTO_SHA1_ARM_NEON=m
 CONFIG_CRYPTO_SHA1_ARM_CE=m
 CONFIG_CRYPTO_SHA2_ARM_CE=m

@@ -53,7 +53,6 @@ CONFIG_CPU_IDLE=y
 CONFIG_ARM_CPUIDLE=y
 CONFIG_KERNEL_MODE_NEON=y
 CONFIG_PM_DEBUG=y
-CONFIG_ARM_CRYPTO=y
 CONFIG_CRYPTO_SHA1_ARM_NEON=m
 CONFIG_CRYPTO_SHA256_ARM=m
 CONFIG_CRYPTO_SHA512_ARM=m

@@ -34,7 +34,6 @@ CONFIG_CPUFREQ_DT=m
 CONFIG_ARM_PXA2xx_CPUFREQ=m
 CONFIG_CPU_IDLE=y
 CONFIG_ARM_CPUIDLE=y
-CONFIG_ARM_CRYPTO=y
 CONFIG_CRYPTO_SHA1_ARM=m
 CONFIG_CRYPTO_SHA256_ARM=m
 CONFIG_CRYPTO_SHA512_ARM=m

arch/arm/crypto/Kconfig:

@ -1,92 +1,156 @@
# SPDX-License-Identifier: GPL-2.0 # SPDX-License-Identifier: GPL-2.0
menuconfig ARM_CRYPTO menu "Accelerated Cryptographic Algorithms for CPU (arm)"
bool "ARM Accelerated Cryptographic Algorithms"
depends on ARM
help
Say Y here to choose from a selection of cryptographic algorithms
implemented using ARM specific CPU features or instructions.
if ARM_CRYPTO config CRYPTO_CURVE25519_NEON
tristate "Public key crypto: Curve25519 (NEON)"
depends on KERNEL_MODE_NEON
select CRYPTO_LIB_CURVE25519_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CURVE25519
help
Curve25519 algorithm
Architecture: arm with
- NEON (Advanced SIMD) extensions
config CRYPTO_GHASH_ARM_CE
tristate "Hash functions: GHASH (PMULL/NEON/ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_HASH
select CRYPTO_CRYPTD
select CRYPTO_GF128MUL
help
GCM GHASH function (NIST SP800-38D)
Architecture: arm using
- PMULL (Polynomial Multiply Long) instructions
- NEON (Advanced SIMD) extensions
- ARMv8 Crypto Extensions
Use an implementation of GHASH (used by the GCM AEAD chaining mode)
that uses the 64x64 to 128 bit polynomial multiplication (vmull.p64)
that is part of the ARMv8 Crypto Extensions, or a slower variant that
uses the vmull.p8 instruction that is part of the basic NEON ISA.
config CRYPTO_NHPOLY1305_NEON
tristate "Hash functions: NHPoly1305 (NEON)"
depends on KERNEL_MODE_NEON
select CRYPTO_NHPOLY1305
help
NHPoly1305 hash function (Adiantum)
Architecture: arm using:
- NEON (Advanced SIMD) extensions
config CRYPTO_POLY1305_ARM
tristate "Hash functions: Poly1305 (NEON)"
select CRYPTO_HASH
select CRYPTO_ARCH_HAVE_LIB_POLY1305
help
Poly1305 authenticator algorithm (RFC7539)
Architecture: arm optionally using
- NEON (Advanced SIMD) extensions
config CRYPTO_BLAKE2S_ARM
bool "Hash functions: BLAKE2s"
select CRYPTO_ARCH_HAVE_LIB_BLAKE2S
help
BLAKE2s cryptographic hash function (RFC 7693)
Architecture: arm
This is faster than the generic implementations of BLAKE2s and
BLAKE2b, but slower than the NEON implementation of BLAKE2b.
There is no NEON implementation of BLAKE2s, since NEON doesn't
really help with it.
config CRYPTO_BLAKE2B_NEON
tristate "Hash functions: BLAKE2b (NEON)"
depends on KERNEL_MODE_NEON
select CRYPTO_BLAKE2B
help
BLAKE2b cryptographic hash function (RFC 7693)
Architecture: arm using
- NEON (Advanced SIMD) extensions
BLAKE2b digest algorithm optimized with ARM NEON instructions.
On ARM processors that have NEON support but not the ARMv8
Crypto Extensions, typically this BLAKE2b implementation is
much faster than the SHA-2 family and slightly faster than
SHA-1.
config CRYPTO_SHA1_ARM config CRYPTO_SHA1_ARM
tristate "SHA1 digest algorithm (ARM-asm)" tristate "Hash functions: SHA-1"
select CRYPTO_SHA1 select CRYPTO_SHA1
select CRYPTO_HASH select CRYPTO_HASH
help help
SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2) implemented SHA-1 secure hash algorithm (FIPS 180)
using optimized ARM assembler.
Architecture: arm
config CRYPTO_SHA1_ARM_NEON config CRYPTO_SHA1_ARM_NEON
tristate "SHA1 digest algorithm (ARM NEON)" tristate "Hash functions: SHA-1 (NEON)"
depends on KERNEL_MODE_NEON depends on KERNEL_MODE_NEON
select CRYPTO_SHA1_ARM select CRYPTO_SHA1_ARM
select CRYPTO_SHA1 select CRYPTO_SHA1
select CRYPTO_HASH select CRYPTO_HASH
help help
SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2) implemented SHA-1 secure hash algorithm (FIPS 180)
using optimized ARM NEON assembly, when NEON instructions are
available. Architecture: arm using
- NEON (Advanced SIMD) extensions
config CRYPTO_SHA1_ARM_CE config CRYPTO_SHA1_ARM_CE
tristate "SHA1 digest algorithm (ARM v8 Crypto Extensions)" tristate "Hash functions: SHA-1 (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON depends on KERNEL_MODE_NEON
select CRYPTO_SHA1_ARM select CRYPTO_SHA1_ARM
select CRYPTO_HASH select CRYPTO_HASH
help help
SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2) implemented SHA-1 secure hash algorithm (FIPS 180)
using special ARMv8 Crypto Extensions.
Architecture: arm using ARMv8 Crypto Extensions
config CRYPTO_SHA2_ARM_CE config CRYPTO_SHA2_ARM_CE
tristate "SHA-224/256 digest algorithm (ARM v8 Crypto Extensions)" tristate "Hash functions: SHA-224 and SHA-256 (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON depends on KERNEL_MODE_NEON
select CRYPTO_SHA256_ARM select CRYPTO_SHA256_ARM
select CRYPTO_HASH select CRYPTO_HASH
help help
SHA-256 secure hash standard (DFIPS 180-2) implemented SHA-224 and SHA-256 secure hash algorithms (FIPS 180)
using special ARMv8 Crypto Extensions.
Architecture: arm using
- ARMv8 Crypto Extensions
config CRYPTO_SHA256_ARM config CRYPTO_SHA256_ARM
tristate "SHA-224/256 digest algorithm (ARM-asm and NEON)" tristate "Hash functions: SHA-224 and SHA-256 (NEON)"
select CRYPTO_HASH select CRYPTO_HASH
depends on !CPU_V7M depends on !CPU_V7M
help help
SHA-256 secure hash standard (DFIPS 180-2) implemented SHA-224 and SHA-256 secure hash algorithms (FIPS 180)
using optimized ARM assembler and NEON, when available.
Architecture: arm using
- NEON (Advanced SIMD) extensions
config CRYPTO_SHA512_ARM config CRYPTO_SHA512_ARM
tristate "SHA-384/512 digest algorithm (ARM-asm and NEON)" tristate "Hash functions: SHA-384 and SHA-512 (NEON)"
select CRYPTO_HASH select CRYPTO_HASH
depends on !CPU_V7M depends on !CPU_V7M
help help
SHA-512 secure hash standard (DFIPS 180-2) implemented SHA-384 and SHA-512 secure hash algorithms (FIPS 180)
using optimized ARM assembler and NEON, when available.
config CRYPTO_BLAKE2S_ARM Architecture: arm using
bool "BLAKE2s digest algorithm (ARM)" - NEON (Advanced SIMD) extensions
select CRYPTO_ARCH_HAVE_LIB_BLAKE2S
help
BLAKE2s digest algorithm optimized with ARM scalar instructions. This
is faster than the generic implementations of BLAKE2s and BLAKE2b, but
slower than the NEON implementation of BLAKE2b. (There is no NEON
implementation of BLAKE2s, since NEON doesn't really help with it.)
config CRYPTO_BLAKE2B_NEON
tristate "BLAKE2b digest algorithm (ARM NEON)"
depends on KERNEL_MODE_NEON
select CRYPTO_BLAKE2B
help
BLAKE2b digest algorithm optimized with ARM NEON instructions.
On ARM processors that have NEON support but not the ARMv8
Crypto Extensions, typically this BLAKE2b implementation is
much faster than SHA-2 and slightly faster than SHA-1.
config CRYPTO_AES_ARM config CRYPTO_AES_ARM
tristate "Scalar AES cipher for ARM" tristate "Ciphers: AES"
select CRYPTO_ALGAPI select CRYPTO_ALGAPI
select CRYPTO_AES select CRYPTO_AES
help help
Use optimized AES assembler routines for ARM platforms. Block ciphers: AES cipher algorithms (FIPS-197)
Architecture: arm
On ARM processors without the Crypto Extensions, this is the On ARM processors without the Crypto Extensions, this is the
fastest AES implementation for single blocks. For multiple fastest AES implementation for single blocks. For multiple
@ -98,7 +162,7 @@ config CRYPTO_AES_ARM
such attacks very difficult. such attacks very difficult.
config CRYPTO_AES_ARM_BS config CRYPTO_AES_ARM_BS
tristate "Bit sliced AES using NEON instructions" tristate "Ciphers: AES, modes: ECB/CBC/CTR/XTS (bit-sliced NEON)"
depends on KERNEL_MODE_NEON depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER select CRYPTO_SKCIPHER
select CRYPTO_LIB_AES select CRYPTO_LIB_AES
@ -106,8 +170,13 @@ config CRYPTO_AES_ARM_BS
select CRYPTO_CBC select CRYPTO_CBC
select CRYPTO_SIMD select CRYPTO_SIMD
help help
Use a faster and more secure NEON based implementation of AES in CBC, Length-preserving ciphers: AES cipher algorithms (FIPS-197)
CTR and XTS modes with block cipher modes:
- ECB (Electronic Codebook) mode (NIST SP800-38A)
- CBC (Cipher Block Chaining) mode (NIST SP800-38A)
- CTR (Counter) mode (NIST SP800-38A)
- XTS (XOR Encrypt XOR with ciphertext stealing) mode (NIST SP800-38E
and IEEE 1619)
Bit sliced AES gives around 45% speedup on Cortex-A15 for CTR mode Bit sliced AES gives around 45% speedup on Cortex-A15 for CTR mode
and for XTS mode encryption, CBC and XTS mode decryption speedup is and for XTS mode encryption, CBC and XTS mode decryption speedup is
@ -116,58 +185,59 @@ config CRYPTO_AES_ARM_BS
believed to be invulnerable to cache timing attacks. believed to be invulnerable to cache timing attacks.
config CRYPTO_AES_ARM_CE config CRYPTO_AES_ARM_CE
tristate "Accelerated AES using ARMv8 Crypto Extensions" tristate "Ciphers: AES, modes: ECB/CBC/CTS/CTR/XTS (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER select CRYPTO_SKCIPHER
select CRYPTO_LIB_AES select CRYPTO_LIB_AES
select CRYPTO_SIMD select CRYPTO_SIMD
help help
Use an implementation of AES in CBC, CTR and XTS modes that uses Length-preserving ciphers: AES cipher algorithms (FIPS-197)
ARMv8 Crypto Extensions with block cipher modes:
- ECB (Electronic Codebook) mode (NIST SP800-38A)
- CBC (Cipher Block Chaining) mode (NIST SP800-38A)
- CTR (Counter) mode (NIST SP800-38A)
- CTS (Cipher Text Stealing) mode (NIST SP800-38A)
- XTS (XOR Encrypt XOR with ciphertext stealing) mode (NIST SP800-38E
and IEEE 1619)
config CRYPTO_GHASH_ARM_CE Architecture: arm using:
tristate "PMULL-accelerated GHASH using NEON/ARMv8 Crypto Extensions" - ARMv8 Crypto Extensions
depends on KERNEL_MODE_NEON
select CRYPTO_HASH config CRYPTO_CHACHA20_NEON
select CRYPTO_CRYPTD tristate "Ciphers: ChaCha20, XChaCha20, XChaCha12 (NEON)"
select CRYPTO_GF128MUL select CRYPTO_SKCIPHER
select CRYPTO_ARCH_HAVE_LIB_CHACHA
help help
Use an implementation of GHASH (used by the GCM AEAD chaining mode) Length-preserving ciphers: ChaCha20, XChaCha20, and XChaCha12
that uses the 64x64 to 128 bit polynomial multiplication (vmull.p64) stream cipher algorithms
that is part of the ARMv8 Crypto Extensions, or a slower variant that
uses the vmull.p8 instruction that is part of the basic NEON ISA.
config CRYPTO_CRCT10DIF_ARM_CE Architecture: arm using:
tristate "CRCT10DIF digest algorithm using PMULL instructions" - NEON (Advanced SIMD) extensions
depends on KERNEL_MODE_NEON
depends on CRC_T10DIF
select CRYPTO_HASH
config CRYPTO_CRC32_ARM_CE config CRYPTO_CRC32_ARM_CE
tristate "CRC32(C) digest algorithm using CRC and/or PMULL instructions" tristate "CRC32C and CRC32"
depends on KERNEL_MODE_NEON depends on KERNEL_MODE_NEON
depends on CRC32 depends on CRC32
select CRYPTO_HASH select CRYPTO_HASH
help
CRC32c CRC algorithm with the iSCSI polynomial (RFC 3385 and RFC 3720)
and CRC32 CRC algorithm (IEEE 802.3)
config CRYPTO_CHACHA20_NEON Architecture: arm using:
tristate "NEON and scalar accelerated ChaCha stream cipher algorithms" - CRC and/or PMULL instructions
select CRYPTO_SKCIPHER
select CRYPTO_ARCH_HAVE_LIB_CHACHA
config CRYPTO_POLY1305_ARM Drivers: crc32-arm-ce and crc32c-arm-ce
tristate "Accelerated scalar and SIMD Poly1305 hash implementations"
config CRYPTO_CRCT10DIF_ARM_CE
tristate "CRCT10DIF"
depends on KERNEL_MODE_NEON
depends on CRC_T10DIF
select CRYPTO_HASH select CRYPTO_HASH
select CRYPTO_ARCH_HAVE_LIB_POLY1305 help
CRC16 CRC algorithm used for the T10 (SCSI) Data Integrity Field (DIF)
config CRYPTO_NHPOLY1305_NEON Architecture: arm using:
tristate "NEON accelerated NHPoly1305 hash function (for Adiantum)" - PMULL (Polynomial Multiply Long) instructions
depends on KERNEL_MODE_NEON
select CRYPTO_NHPOLY1305
config CRYPTO_CURVE25519_NEON endmenu
tristate "NEON accelerated Curve25519 scalar multiplication library"
depends on KERNEL_MODE_NEON
select CRYPTO_LIB_CURVE25519_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CURVE25519
endif

arch/arm64/Kconfig:
@@ -2251,6 +2251,3 @@ source "drivers/acpi/Kconfig"
 source "arch/arm64/kvm/Kconfig"
 
-if CRYPTO
-source "arch/arm64/crypto/Kconfig"
-endif # CRYPTO

@@ -112,7 +112,6 @@ CONFIG_ACPI_APEI_MEMORY_FAILURE=y
 CONFIG_ACPI_APEI_EINJ=y
 CONFIG_VIRTUALIZATION=y
 CONFIG_KVM=y
-CONFIG_ARM64_CRYPTO=y
 CONFIG_CRYPTO_SHA1_ARM64_CE=y
 CONFIG_CRYPTO_SHA2_ARM64_CE=y
 CONFIG_CRYPTO_SHA512_ARM64_CE=m

arch/arm64/crypto/Kconfig:

@ -1,141 +1,282 @@
# SPDX-License-Identifier: GPL-2.0 # SPDX-License-Identifier: GPL-2.0
menuconfig ARM64_CRYPTO menu "Accelerated Cryptographic Algorithms for CPU (arm64)"
bool "ARM64 Accelerated Cryptographic Algorithms"
depends on ARM64
help
Say Y here to choose from a selection of cryptographic algorithms
implemented using ARM64 specific CPU features or instructions.
if ARM64_CRYPTO
config CRYPTO_SHA256_ARM64
tristate "SHA-224/SHA-256 digest algorithm for arm64"
select CRYPTO_HASH
config CRYPTO_SHA512_ARM64
tristate "SHA-384/SHA-512 digest algorithm for arm64"
select CRYPTO_HASH
config CRYPTO_SHA1_ARM64_CE
tristate "SHA-1 digest algorithm (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_HASH
select CRYPTO_SHA1
config CRYPTO_SHA2_ARM64_CE
tristate "SHA-224/SHA-256 digest algorithm (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_HASH
select CRYPTO_SHA256_ARM64
config CRYPTO_SHA512_ARM64_CE
tristate "SHA-384/SHA-512 digest algorithm (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_HASH
select CRYPTO_SHA512_ARM64
config CRYPTO_SHA3_ARM64
tristate "SHA3 digest algorithm (ARMv8.2 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_HASH
select CRYPTO_SHA3
config CRYPTO_SM3_ARM64_CE
tristate "SM3 digest algorithm (ARMv8.2 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_HASH
select CRYPTO_SM3
config CRYPTO_SM4_ARM64_CE
tristate "SM4 symmetric cipher (ARMv8.2 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_ALGAPI
select CRYPTO_SM4
config CRYPTO_SM4_ARM64_CE_BLK
tristate "SM4 in ECB/CBC/CFB/CTR modes using ARMv8 Crypto Extensions"
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_SM4
config CRYPTO_SM4_ARM64_NEON_BLK
tristate "SM4 in ECB/CBC/CFB/CTR modes using NEON instructions"
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_SM4
config CRYPTO_GHASH_ARM64_CE config CRYPTO_GHASH_ARM64_CE
tristate "GHASH/AES-GCM using ARMv8 Crypto Extensions" tristate "Hash functions: GHASH (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON depends on KERNEL_MODE_NEON
select CRYPTO_HASH select CRYPTO_HASH
select CRYPTO_GF128MUL select CRYPTO_GF128MUL
select CRYPTO_LIB_AES select CRYPTO_LIB_AES
select CRYPTO_AEAD select CRYPTO_AEAD
help
GCM GHASH function (NIST SP800-38D)
Architecture: arm64 using:
- ARMv8 Crypto Extensions
config CRYPTO_NHPOLY1305_NEON
tristate "Hash functions: NHPoly1305 (NEON)"
depends on KERNEL_MODE_NEON
select CRYPTO_NHPOLY1305
help
NHPoly1305 hash function (Adiantum)
Architecture: arm64 using:
- NEON (Advanced SIMD) extensions
config CRYPTO_POLY1305_NEON
tristate "Hash functions: Poly1305 (NEON)"
depends on KERNEL_MODE_NEON
select CRYPTO_HASH
select CRYPTO_ARCH_HAVE_LIB_POLY1305
help
Poly1305 authenticator algorithm (RFC7539)
Architecture: arm64 using:
- NEON (Advanced SIMD) extensions
config CRYPTO_SHA1_ARM64_CE
tristate "Hash functions: SHA-1 (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_HASH
select CRYPTO_SHA1
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: arm64 using:
- ARMv8 Crypto Extensions
config CRYPTO_SHA256_ARM64
tristate "Hash functions: SHA-224 and SHA-256"
select CRYPTO_HASH
help
SHA-224 and SHA-256 secure hash algorithms (FIPS 180)
Architecture: arm64
config CRYPTO_SHA2_ARM64_CE
tristate "Hash functions: SHA-224 and SHA-256 (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_HASH
select CRYPTO_SHA256_ARM64
help
SHA-224 and SHA-256 secure hash algorithms (FIPS 180)
Architecture: arm64 using:
- ARMv8 Crypto Extensions
config CRYPTO_SHA512_ARM64
tristate "Hash functions: SHA-384 and SHA-512"
select CRYPTO_HASH
help
SHA-384 and SHA-512 secure hash algorithms (FIPS 180)
Architecture: arm64
config CRYPTO_SHA512_ARM64_CE
tristate "Hash functions: SHA-384 and SHA-512 (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_HASH
select CRYPTO_SHA512_ARM64
help
SHA-384 and SHA-512 secure hash algorithms (FIPS 180)
Architecture: arm64 using:
- ARMv8 Crypto Extensions
config CRYPTO_SHA3_ARM64
tristate "Hash functions: SHA-3 (ARMv8.2 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_HASH
select CRYPTO_SHA3
help
SHA-3 secure hash algorithms (FIPS 202)
Architecture: arm64 using:
- ARMv8.2 Crypto Extensions
config CRYPTO_SM3_ARM64_CE
tristate "Hash functions: SM3 (ARMv8.2 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_HASH
select CRYPTO_SM3
help
SM3 (ShangMi 3) secure hash function (OSCCA GM/T 0004-2012)
Architecture: arm64 using:
- ARMv8.2 Crypto Extensions
config CRYPTO_POLYVAL_ARM64_CE config CRYPTO_POLYVAL_ARM64_CE
tristate "POLYVAL using ARMv8 Crypto Extensions (for HCTR2)" tristate "Hash functions: POLYVAL (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON depends on KERNEL_MODE_NEON
select CRYPTO_POLYVAL select CRYPTO_POLYVAL
help
POLYVAL hash function for HCTR2
config CRYPTO_CRCT10DIF_ARM64_CE Architecture: arm64 using:
tristate "CRCT10DIF digest algorithm using PMULL instructions" - ARMv8 Crypto Extensions
depends on KERNEL_MODE_NEON && CRC_T10DIF
select CRYPTO_HASH
config CRYPTO_AES_ARM64 config CRYPTO_AES_ARM64
tristate "AES core cipher using scalar instructions" tristate "Ciphers: AES, modes: ECB, CBC, CTR, CTS, XCTR, XTS"
select CRYPTO_AES select CRYPTO_AES
help
Block ciphers: AES cipher algorithms (FIPS-197)
Length-preserving ciphers: AES with ECB, CBC, CTR, CTS,
XCTR, and XTS modes
AEAD cipher: AES with CBC, ESSIV, and SHA-256
for fscrypt and dm-crypt
Architecture: arm64
config CRYPTO_AES_ARM64_CE config CRYPTO_AES_ARM64_CE
tristate "AES core cipher using ARMv8 Crypto Extensions" tristate "Ciphers: AES (ARMv8 Crypto Extensions)"
depends on ARM64 && KERNEL_MODE_NEON depends on ARM64 && KERNEL_MODE_NEON
select CRYPTO_ALGAPI select CRYPTO_ALGAPI
select CRYPTO_LIB_AES select CRYPTO_LIB_AES
help
Block ciphers: AES cipher algorithms (FIPS-197)
Architecture: arm64 using:
- ARMv8 Crypto Extensions
config CRYPTO_AES_ARM64_CE_BLK
tristate "Ciphers: AES, modes: ECB/CBC/CTR/XTS (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_AES_ARM64_CE
help
Length-preserving ciphers: AES cipher algorithms (FIPS-197)
with block cipher modes:
- ECB (Electronic Codebook) mode (NIST SP800-38A)
- CBC (Cipher Block Chaining) mode (NIST SP800-38A)
- CTR (Counter) mode (NIST SP800-38A)
- XTS (XOR Encrypt XOR with ciphertext stealing) mode (NIST SP800-38E
and IEEE 1619)
Architecture: arm64 using:
- ARMv8 Crypto Extensions
config CRYPTO_AES_ARM64_NEON_BLK
tristate "Ciphers: AES, modes: ECB/CBC/CTR/XTS (NEON)"
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_LIB_AES
help
Length-preserving ciphers: AES cipher algorithms (FIPS-197)
with block cipher modes:
- ECB (Electronic Codebook) mode (NIST SP800-38A)
- CBC (Cipher Block Chaining) mode (NIST SP800-38A)
- CTR (Counter) mode (NIST SP800-38A)
- XTS (XOR Encrypt XOR with ciphertext stealing) mode (NIST SP800-38E
and IEEE 1619)
Architecture: arm64 using:
- NEON (Advanced SIMD) extensions
config CRYPTO_CHACHA20_NEON
tristate "Ciphers: ChaCha (NEON)"
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_LIB_CHACHA_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CHACHA
help
Length-preserving ciphers: ChaCha20, XChaCha20, and XChaCha12
stream cipher algorithms
Architecture: arm64 using:
- NEON (Advanced SIMD) extensions
config CRYPTO_AES_ARM64_BS
tristate "Ciphers: AES, modes: ECB/CBC/CTR/XCTR/XTS modes (bit-sliced NEON)"
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_AES_ARM64_NEON_BLK
select CRYPTO_LIB_AES
help
Length-preserving ciphers: AES cipher algorithms (FIPS-197)
with block cipher modes:
- ECB (Electronic Codebook) mode (NIST SP800-38A)
- CBC (Cipher Block Chaining) mode (NIST SP800-38A)
- CTR (Counter) mode (NIST SP800-38A)
- XCTR mode for HCTR2
- XTS (XOR Encrypt XOR with ciphertext stealing) mode (NIST SP800-38E
and IEEE 1619)
Architecture: arm64 using:
- bit-sliced algorithm
- NEON (Advanced SIMD) extensions
config CRYPTO_SM4_ARM64_CE
tristate "Ciphers: SM4 (ARMv8.2 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_ALGAPI
select CRYPTO_SM4
help
Block ciphers: SM4 cipher algorithms (OSCCA GB/T 32907-2016)
Architecture: arm64 using:
- ARMv8.2 Crypto Extensions
- NEON (Advanced SIMD) extensions
config CRYPTO_SM4_ARM64_CE_BLK
tristate "Ciphers: SM4, modes: ECB/CBC/CFB/CTR (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_SM4
help
Length-preserving ciphers: SM4 cipher algorithms (OSCCA GB/T 32907-2016)
with block cipher modes:
- ECB (Electronic Codebook) mode (NIST SP800-38A)
- CBC (Cipher Block Chaining) mode (NIST SP800-38A)
- CFB (Cipher Feedback) mode (NIST SP800-38A)
- CTR (Counter) mode (NIST SP800-38A)
Architecture: arm64 using:
- ARMv8 Crypto Extensions
- NEON (Advanced SIMD) extensions
config CRYPTO_SM4_ARM64_NEON_BLK
tristate "Ciphers: SM4, modes: ECB/CBC/CFB/CTR (NEON)"
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_SM4
help
Length-preserving ciphers: SM4 cipher algorithms (OSCCA GB/T 32907-2016)
with block cipher modes:
- ECB (Electronic Codebook) mode (NIST SP800-38A)
- CBC (Cipher Block Chaining) mode (NIST SP800-38A)
- CFB (Cipher Feedback) mode (NIST SP800-38A)
- CTR (Counter) mode (NIST SP800-38A)
Architecture: arm64 using:
- NEON (Advanced SIMD) extensions
config CRYPTO_AES_ARM64_CE_CCM config CRYPTO_AES_ARM64_CE_CCM
tristate "AES in CCM mode using ARMv8 Crypto Extensions" tristate "AEAD cipher: AES in CCM mode (ARMv8 Crypto Extensions)"
depends on ARM64 && KERNEL_MODE_NEON depends on ARM64 && KERNEL_MODE_NEON
select CRYPTO_ALGAPI select CRYPTO_ALGAPI
select CRYPTO_AES_ARM64_CE select CRYPTO_AES_ARM64_CE
select CRYPTO_AEAD select CRYPTO_AEAD
select CRYPTO_LIB_AES select CRYPTO_LIB_AES
help
AEAD cipher: AES cipher algorithms (FIPS-197) with
CCM (Counter with Cipher Block Chaining-Message Authentication Code)
authenticated encryption mode (NIST SP800-38C)
config CRYPTO_AES_ARM64_CE_BLK Architecture: arm64 using:
tristate "AES in ECB/CBC/CTR/XTS/XCTR modes using ARMv8 Crypto Extensions" - ARMv8 Crypto Extensions
depends on KERNEL_MODE_NEON - NEON (Advanced SIMD) extensions
select CRYPTO_SKCIPHER
select CRYPTO_AES_ARM64_CE
config CRYPTO_AES_ARM64_NEON_BLK config CRYPTO_CRCT10DIF_ARM64_CE
tristate "AES in ECB/CBC/CTR/XTS/XCTR modes using NEON instructions" tristate "CRCT10DIF (PMULL)"
depends on KERNEL_MODE_NEON depends on KERNEL_MODE_NEON && CRC_T10DIF
select CRYPTO_SKCIPHER
select CRYPTO_LIB_AES
config CRYPTO_CHACHA20_NEON
tristate "ChaCha20, XChaCha20, and XChaCha12 stream ciphers using NEON instructions"
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_LIB_CHACHA_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CHACHA
config CRYPTO_POLY1305_NEON
tristate "Poly1305 hash function using scalar or NEON instructions"
depends on KERNEL_MODE_NEON
select CRYPTO_HASH select CRYPTO_HASH
select CRYPTO_ARCH_HAVE_LIB_POLY1305 help
CRC16 CRC algorithm used for the T10 (SCSI) Data Integrity Field (DIF)
config CRYPTO_NHPOLY1305_NEON Architecture: arm64 using
tristate "NHPoly1305 hash function using NEON instructions (for Adiantum)" - PMULL (Polynomial Multiply Long) instructions
depends on KERNEL_MODE_NEON
select CRYPTO_NHPOLY1305
config CRYPTO_AES_ARM64_BS endmenu
tristate "AES in ECB/CBC/CTR/XTS modes using bit-sliced NEON algorithm"
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_AES_ARM64_NEON_BLK
select CRYPTO_LIB_AES
endif

New file: arch/mips/crypto/Kconfig (74 lines)
# SPDX-License-Identifier: GPL-2.0
menu "Accelerated Cryptographic Algorithms for CPU (mips)"
config CRYPTO_CRC32_MIPS
tristate "CRC32c and CRC32"
depends on MIPS_CRC_SUPPORT
select CRYPTO_HASH
help
CRC32c and CRC32 CRC algorithms
Architecture: mips
config CRYPTO_POLY1305_MIPS
tristate "Hash functions: Poly1305"
depends on MIPS
select CRYPTO_ARCH_HAVE_LIB_POLY1305
help
Poly1305 authenticator algorithm (RFC7539)
Architecture: mips
config CRYPTO_MD5_OCTEON
tristate "Digests: MD5 (OCTEON)"
depends on CPU_CAVIUM_OCTEON
select CRYPTO_MD5
select CRYPTO_HASH
help
MD5 message digest algorithm (RFC1321)
Architecture: mips OCTEON using crypto instructions, when available
config CRYPTO_SHA1_OCTEON
tristate "Hash functions: SHA-1 (OCTEON)"
depends on CPU_CAVIUM_OCTEON
select CRYPTO_SHA1
select CRYPTO_HASH
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: mips OCTEON
config CRYPTO_SHA256_OCTEON
tristate "Hash functions: SHA-224 and SHA-256 (OCTEON)"
depends on CPU_CAVIUM_OCTEON
select CRYPTO_SHA256
select CRYPTO_HASH
help
SHA-224 and SHA-256 secure hash algorithms (FIPS 180)
Architecture: mips OCTEON using crypto instructions, when available
config CRYPTO_SHA512_OCTEON
tristate "Hash functions: SHA-384 and SHA-512 (OCTEON)"
depends on CPU_CAVIUM_OCTEON
select CRYPTO_SHA512
select CRYPTO_HASH
help
SHA-384 and SHA-512 secure hash algorithms (FIPS 180)
Architecture: mips OCTEON using crypto instructions, when available
config CRYPTO_CHACHA_MIPS
tristate "Ciphers: ChaCha20, XChaCha20, XChaCha12 (MIPS32r2)"
depends on CPU_MIPS32_R2
select CRYPTO_SKCIPHER
select CRYPTO_ARCH_HAVE_LIB_CHACHA
help
Length-preserving ciphers: ChaCha20, XChaCha20, and XChaCha12
stream cipher algorithms
Architecture: MIPS32r2
endmenu

New file: arch/powerpc/crypto/Kconfig (97 lines)
# SPDX-License-Identifier: GPL-2.0
menu "Accelerated Cryptographic Algorithms for CPU (powerpc)"
config CRYPTO_CRC32C_VPMSUM
tristate "CRC32c"
depends on PPC64 && ALTIVEC
select CRYPTO_HASH
select CRC32
help
CRC32c CRC algorithm with the iSCSI polynomial (RFC 3385 and RFC 3720)
Architecture: powerpc64 using
- AltiVec extensions
Enable on POWER8 and newer processors for improved performance.
config CRYPTO_CRCT10DIF_VPMSUM
tristate "CRC32T10DIF"
depends on PPC64 && ALTIVEC && CRC_T10DIF
select CRYPTO_HASH
help
CRC16 CRC algorithm used for the T10 (SCSI) Data Integrity Field (DIF)
Architecture: powerpc64 using
- AltiVec extensions
Enable on POWER8 and newer processors for improved performance.
config CRYPTO_VPMSUM_TESTER
tristate "CRC32c and CRC32T10DIF hardware acceleration tester"
depends on CRYPTO_CRCT10DIF_VPMSUM && CRYPTO_CRC32C_VPMSUM
help
Stress test for CRC32c and CRCT10DIF algorithms implemented with
powerpc64 AltiVec extensions (POWER8 vpmsum instructions).
Unless you are testing these algorithms, you don't need this.
config CRYPTO_MD5_PPC
tristate "Digests: MD5"
depends on PPC
select CRYPTO_HASH
help
MD5 message digest algorithm (RFC1321)
Architecture: powerpc
config CRYPTO_SHA1_PPC
tristate "Hash functions: SHA-1"
depends on PPC
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: powerpc
config CRYPTO_SHA1_PPC_SPE
tristate "Hash functions: SHA-1 (SPE)"
depends on PPC && SPE
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: powerpc using
- SPE (Signal Processing Engine) extensions
config CRYPTO_SHA256_PPC_SPE
tristate "Hash functions: SHA-224 and SHA-256 (SPE)"
depends on PPC && SPE
select CRYPTO_SHA256
select CRYPTO_HASH
help
SHA-224 and SHA-256 secure hash algorithms (FIPS 180)
Architecture: powerpc using
- SPE (Signal Processing Engine) extensions
config CRYPTO_AES_PPC_SPE
tristate "Ciphers: AES, modes: ECB/CBC/CTR/XTS (SPE)"
depends on PPC && SPE
select CRYPTO_SKCIPHER
help
Block ciphers: AES cipher algorithms (FIPS-197)
Length-preserving ciphers: AES with ECB, CBC, CTR, and XTS modes
Architecture: powerpc using:
- SPE (Signal Processing Engine) extensions
SPE is available for:
- Processor Type: Freescale 8500
- CPU selection: e500 (8540)
This module should only be used for low power (router) devices
without hardware AES acceleration (e.g. caam crypto). It reduces the
size of the AES tables from 16KB to 8KB + 256 bytes and mitigates
timining attacks. Nevertheless it might be not as secure as other
architecture specific assembler implementations that work on 1KB
tables or 256 bytes S-boxes.
endmenu

New file: arch/s390/crypto/Kconfig (135 lines)
# SPDX-License-Identifier: GPL-2.0
menu "Accelerated Cryptographic Algorithms for CPU (s390)"
config CRYPTO_CRC32_S390
tristate "CRC32c and CRC32"
depends on S390
select CRYPTO_HASH
select CRC32
help
CRC32c and CRC32 CRC algorithms
Architecture: s390
It is available with IBM z13 or later.
config CRYPTO_SHA512_S390
tristate "Hash functions: SHA-384 and SHA-512"
depends on S390
select CRYPTO_HASH
help
SHA-384 and SHA-512 secure hash algorithms (FIPS 180)
Architecture: s390
It is available as of z10.
config CRYPTO_SHA1_S390
tristate "Hash functions: SHA-1"
depends on S390
select CRYPTO_HASH
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: s390
It is available as of z990.
config CRYPTO_SHA256_S390
tristate "Hash functions: SHA-224 and SHA-256"
depends on S390
select CRYPTO_HASH
help
SHA-224 and SHA-256 secure hash algorithms (FIPS 180)
Architecture: s390
It is available as of z9.
config CRYPTO_SHA3_256_S390
tristate "Hash functions: SHA3-224 and SHA3-256"
depends on S390
select CRYPTO_HASH
help
SHA3-224 and SHA3-256 secure hash algorithms (FIPS 202)
Architecture: s390
It is available as of z14.
config CRYPTO_SHA3_512_S390
tristate "Hash functions: SHA3-384 and SHA3-512"
depends on S390
select CRYPTO_HASH
help
SHA3-384 and SHA3-512 secure hash algorithms (FIPS 202)
Architecture: s390
It is available as of z14.
config CRYPTO_GHASH_S390
tristate "Hash functions: GHASH"
depends on S390
select CRYPTO_HASH
help
GCM GHASH hash function (NIST SP800-38D)
Architecture: s390
It is available as of z196.
config CRYPTO_AES_S390
tristate "Ciphers: AES, modes: ECB, CBC, CTR, XTS, GCM"
depends on S390
select CRYPTO_ALGAPI
select CRYPTO_SKCIPHER
help
Block cipher: AES cipher algorithms (FIPS 197)
AEAD cipher: AES with GCM
Length-preserving ciphers: AES with ECB, CBC, XTS, and CTR modes
Architecture: s390
As of z9 the ECB and CBC modes are hardware accelerated
for 128 bit keys.
As of z10 the ECB and CBC modes are hardware accelerated
for all AES key sizes.
As of z196 the CTR mode is hardware accelerated for all AES
key sizes and XTS mode is hardware accelerated for 256 and
512 bit keys.
config CRYPTO_DES_S390
tristate "Ciphers: DES and Triple DES EDE, modes: ECB, CBC, CTR"
depends on S390
select CRYPTO_ALGAPI
select CRYPTO_SKCIPHER
select CRYPTO_LIB_DES
help
Block ciphers: DES (FIPS 46-2) cipher algorithm
Block ciphers: Triple DES EDE (FIPS 46-3) cipher algorithm
Length-preserving ciphers: DES with ECB, CBC, and CTR modes
Length-preserving ciphers: Triple DES EDED with ECB, CBC, and CTR modes
Architecture: s390
As of z990 the ECB and CBC mode are hardware accelerated.
As of z196 the CTR mode is hardware accelerated.
config CRYPTO_CHACHA_S390
tristate "Ciphers: ChaCha20"
depends on S390
select CRYPTO_SKCIPHER
select CRYPTO_LIB_CHACHA_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CHACHA
help
Length-preserving cipher: ChaCha20 stream cipher (RFC 7539)
Architecture: s390
It is available as of z13.
endmenu

New file: arch/sparc/crypto/Kconfig (90 lines)
# SPDX-License-Identifier: GPL-2.0
menu "Accelerated Cryptographic Algorithms for CPU (sparc64)"
config CRYPTO_DES_SPARC64
tristate "Ciphers: DES and Triple DES EDE, modes: ECB/CBC"
depends on SPARC64
select CRYPTO_ALGAPI
select CRYPTO_LIB_DES
select CRYPTO_SKCIPHER
help
Block cipher: DES (FIPS 46-2) cipher algorithm
Block cipher: Triple DES EDE (FIPS 46-3) cipher algorithm
Length-preserving ciphers: DES with ECB and CBC modes
Length-preserving ciphers: Triple DES EDE with ECB and CBC modes
Architecture: sparc64
config CRYPTO_CRC32C_SPARC64
tristate "CRC32c"
depends on SPARC64
select CRYPTO_HASH
select CRC32
help
CRC32c CRC algorithm with the iSCSI polynomial (RFC 3385 and RFC 3720)
Architecture: sparc64
config CRYPTO_MD5_SPARC64
tristate "Digests: MD5"
depends on SPARC64
select CRYPTO_MD5
select CRYPTO_HASH
help
MD5 message digest algorithm (RFC1321)
Architecture: sparc64 using crypto instructions, when available
config CRYPTO_SHA1_SPARC64
tristate "Hash functions: SHA-1"
depends on SPARC64
select CRYPTO_SHA1
select CRYPTO_HASH
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: sparc64
config CRYPTO_SHA256_SPARC64
tristate "Hash functions: SHA-224 and SHA-256"
depends on SPARC64
select CRYPTO_SHA256
select CRYPTO_HASH
help
SHA-224 and SHA-256 secure hash algorithms (FIPS 180)
Architecture: sparc64 using crypto instructions, when available
config CRYPTO_SHA512_SPARC64
tristate "Hash functions: SHA-384 and SHA-512"
depends on SPARC64
select CRYPTO_SHA512
select CRYPTO_HASH
help
SHA-384 and SHA-512 secure hash algorithms (FIPS 180)
Architecture: sparc64 using crypto instructions, when available
config CRYPTO_AES_SPARC64
tristate "Ciphers: AES, modes: ECB, CBC, CTR"
depends on SPARC64
select CRYPTO_SKCIPHER
help
Block ciphers: AES cipher algorithms (FIPS-197)
Length-preserving ciphers: AES with ECB, CBC, and CTR modes
Architecture: sparc64 using crypto instructions
config CRYPTO_CAMELLIA_SPARC64
tristate "Ciphers: Camellia, modes: ECB, CBC"
depends on SPARC64
select CRYPTO_ALGAPI
select CRYPTO_SKCIPHER
help
Block ciphers: Camellia cipher algorithms
Length-preserving ciphers: Camellia with ECB and CBC modes
Architecture: sparc64
endmenu

New file: arch/x86/crypto/Kconfig (484 lines)
# SPDX-License-Identifier: GPL-2.0
menu "Accelerated Cryptographic Algorithms for CPU (x86)"
config CRYPTO_CURVE25519_X86
tristate "Public key crypto: Curve25519 (ADX)"
depends on X86 && 64BIT
select CRYPTO_LIB_CURVE25519_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CURVE25519
help
Curve25519 algorithm
Architecture: x86_64 using:
- ADX (large integer arithmetic)
config CRYPTO_AES_NI_INTEL
tristate "Ciphers: AES, modes: ECB, CBC, CTS, CTR, XTR, XTS, GCM (AES-NI)"
depends on X86
select CRYPTO_AEAD
select CRYPTO_LIB_AES
select CRYPTO_ALGAPI
select CRYPTO_SKCIPHER
select CRYPTO_SIMD
help
Block cipher: AES cipher algorithms
AEAD cipher: AES with GCM
Length-preserving ciphers: AES with ECB, CBC, CTS, CTR, XTR, XTS
Architecture: x86 (32-bit and 64-bit) using:
- AES-NI (AES new instructions)
config CRYPTO_BLOWFISH_X86_64
tristate "Ciphers: Blowfish, modes: ECB, CBC"
depends on X86 && 64BIT
select CRYPTO_SKCIPHER
select CRYPTO_BLOWFISH_COMMON
imply CRYPTO_CTR
help
Block cipher: Blowfish cipher algorithm
Length-preserving ciphers: Blowfish with ECB and CBC modes
Architecture: x86_64
config CRYPTO_CAMELLIA_X86_64
tristate "Ciphers: Camellia with modes: ECB, CBC"
depends on X86 && 64BIT
select CRYPTO_SKCIPHER
imply CRYPTO_CTR
help
Block cipher: Camellia cipher algorithms
Length-preserving ciphers: Camellia with ECB and CBC modes
Architecture: x86_64
config CRYPTO_CAMELLIA_AESNI_AVX_X86_64
tristate "Ciphers: Camellia with modes: ECB, CBC (AES-NI/AVX)"
depends on X86 && 64BIT
select CRYPTO_SKCIPHER
select CRYPTO_CAMELLIA_X86_64
select CRYPTO_SIMD
imply CRYPTO_XTS
help
Length-preserving ciphers: Camellia with ECB and CBC modes
Architecture: x86_64 using:
- AES-NI (AES New Instructions)
- AVX (Advanced Vector Extensions)
config CRYPTO_CAMELLIA_AESNI_AVX2_X86_64
tristate "Ciphers: Camellia with modes: ECB, CBC (AES-NI/AVX2)"
depends on X86 && 64BIT
select CRYPTO_CAMELLIA_AESNI_AVX_X86_64
help
Length-preserving ciphers: Camellia with ECB and CBC modes
Architecture: x86_64 using:
- AES-NI (AES New Instructions)
- AVX2 (Advanced Vector Extensions 2)
config CRYPTO_CAST5_AVX_X86_64
tristate "Ciphers: CAST5 with modes: ECB, CBC (AVX)"
depends on X86 && 64BIT
select CRYPTO_SKCIPHER
select CRYPTO_CAST5
select CRYPTO_CAST_COMMON
select CRYPTO_SIMD
imply CRYPTO_CTR
help
Length-preserving ciphers: CAST5 (CAST-128) cipher algorithm
(RFC2144) with ECB and CBC modes
Architecture: x86_64 using:
- AVX (Advanced Vector Extensions)
Processes 16 blocks in parallel.
config CRYPTO_CAST6_AVX_X86_64
tristate "Ciphers: CAST6 with modes: ECB, CBC (AVX)"
depends on X86 && 64BIT
select CRYPTO_SKCIPHER
select CRYPTO_CAST6
select CRYPTO_CAST_COMMON
select CRYPTO_SIMD
imply CRYPTO_XTS
imply CRYPTO_CTR
help
Length-preserving ciphers: CAST6 (CAST-256) cipher algorithm
(RFC2612) with ECB and CBC modes
Architecture: x86_64 using:
- AVX (Advanced Vector Extensions)
Processes eight blocks in parallel.
config CRYPTO_DES3_EDE_X86_64
tristate "Ciphers: Triple DES EDE with modes: ECB, CBC"
depends on X86 && 64BIT
select CRYPTO_SKCIPHER
select CRYPTO_LIB_DES
imply CRYPTO_CTR
help
Block cipher: Triple DES EDE (FIPS 46-3) cipher algorithm
Length-preserving ciphers: Triple DES EDE with ECB and CBC modes
Architecture: x86_64
Processes one or three blocks in parallel.
config CRYPTO_SERPENT_SSE2_X86_64
tristate "Ciphers: Serpent with modes: ECB, CBC (SSE2)"
depends on X86 && 64BIT
select CRYPTO_SKCIPHER
select CRYPTO_SERPENT
select CRYPTO_SIMD
imply CRYPTO_CTR
help
Length-preserving ciphers: Serpent cipher algorithm
with ECB and CBC modes
Architecture: x86_64 using:
- SSE2 (Streaming SIMD Extensions 2)
Processes eight blocks in parallel.
config CRYPTO_SERPENT_SSE2_586
tristate "Ciphers: Serpent with modes: ECB, CBC (32-bit with SSE2)"
depends on X86 && !64BIT
select CRYPTO_SKCIPHER
select CRYPTO_SERPENT
select CRYPTO_SIMD
imply CRYPTO_CTR
help
Length-preserving ciphers: Serpent cipher algorithm
with ECB and CBC modes
Architecture: x86 (32-bit) using:
- SSE2 (Streaming SIMD Extensions 2)
Processes four blocks in parallel.
config CRYPTO_SERPENT_AVX_X86_64
tristate "Ciphers: Serpent with modes: ECB, CBC (AVX)"
depends on X86 && 64BIT
select CRYPTO_SKCIPHER
select CRYPTO_SERPENT
select CRYPTO_SIMD
imply CRYPTO_XTS
imply CRYPTO_CTR
help
Length-preserving ciphers: Serpent cipher algorithm
with ECB and CBC modes
Architecture: x86_64 using:
- AVX (Advanced Vector Extensions)
Processes eight blocks in parallel.
config CRYPTO_SERPENT_AVX2_X86_64
tristate "Ciphers: Serpent with modes: ECB, CBC (AVX2)"
depends on X86 && 64BIT
select CRYPTO_SERPENT_AVX_X86_64
help
Length-preserving ciphers: Serpent cipher algorithm
with ECB and CBC modes
Architecture: x86_64 using:
- AVX2 (Advanced Vector Extensions 2)
Processes 16 blocks in parallel.
config CRYPTO_SM4_AESNI_AVX_X86_64
tristate "Ciphers: SM4 with modes: ECB, CBC, CFB, CTR (AES-NI/AVX)"
depends on X86 && 64BIT
select CRYPTO_SKCIPHER
select CRYPTO_SIMD
select CRYPTO_ALGAPI
select CRYPTO_SM4
help
Length-preserving ciphers: SM4 cipher algorithms
(OSCCA GB/T 32907-2016) with ECB, CBC, CFB, and CTR modes
Architecture: x86_64 using:
- AES-NI (AES New Instructions)
- AVX (Advanced Vector Extensions)
Through two affine transforms,
we can use the AES S-Box to simulate the SM4 S-Box to achieve the
effect of instruction acceleration.
If unsure, say N.
config CRYPTO_SM4_AESNI_AVX2_X86_64
tristate "Ciphers: SM4 with modes: ECB, CBC, CFB, CTR (AES-NI/AVX2)"
depends on X86 && 64BIT
select CRYPTO_SKCIPHER
select CRYPTO_SIMD
select CRYPTO_ALGAPI
select CRYPTO_SM4
select CRYPTO_SM4_AESNI_AVX_X86_64
help
Length-preserving ciphers: SM4 cipher algorithms
(OSCCA GB/T 32907-2016) with ECB, CBC, CFB, and CTR modes
Architecture: x86_64 using:
- AES-NI (AES New Instructions)
- AVX2 (Advanced Vector Extensions 2)
Through two affine transforms,
we can use the AES S-Box to simulate the SM4 S-Box to achieve the
effect of instruction acceleration.
If unsure, say N.
config CRYPTO_TWOFISH_586
tristate "Ciphers: Twofish (32-bit)"
depends on (X86 || UML_X86) && !64BIT
select CRYPTO_ALGAPI
select CRYPTO_TWOFISH_COMMON
imply CRYPTO_CTR
help
Block cipher: Twofish cipher algorithm
Architecture: x86 (32-bit)
config CRYPTO_TWOFISH_X86_64
tristate "Ciphers: Twofish"
depends on (X86 || UML_X86) && 64BIT
select CRYPTO_ALGAPI
select CRYPTO_TWOFISH_COMMON
imply CRYPTO_CTR
help
Block cipher: Twofish cipher algorithm
Architecture: x86_64
config CRYPTO_TWOFISH_X86_64_3WAY
tristate "Ciphers: Twofish with modes: ECB, CBC (3-way parallel)"
depends on X86 && 64BIT
select CRYPTO_SKCIPHER
select CRYPTO_TWOFISH_COMMON
select CRYPTO_TWOFISH_X86_64
help
Length-preserving cipher: Twofish cipher algorithm
with ECB and CBC modes
Architecture: x86_64
Processes three blocks in parallel, better utilizing resources of
out-of-order CPUs.
config CRYPTO_TWOFISH_AVX_X86_64
tristate "Ciphers: Twofish with modes: ECB, CBC (AVX)"
depends on X86 && 64BIT
select CRYPTO_SKCIPHER
select CRYPTO_SIMD
select CRYPTO_TWOFISH_COMMON
select CRYPTO_TWOFISH_X86_64
select CRYPTO_TWOFISH_X86_64_3WAY
imply CRYPTO_XTS
help
Length-preserving cipher: Twofish cipher algorithm
with ECB and CBC modes
Architecture: x86_64 using:
- AVX (Advanced Vector Extensions)
Processes eight blocks in parallel.
config CRYPTO_ARIA_AESNI_AVX_X86_64
tristate "Ciphers: ARIA with modes: ECB, CTR (AES-NI/AVX/GFNI)"
depends on X86 && 64BIT
select CRYPTO_SKCIPHER
select CRYPTO_SIMD
select CRYPTO_ALGAPI
select CRYPTO_ARIA
help
Length-preserving cipher: ARIA cipher algorithms
(RFC 5794) with ECB and CTR modes
Architecture: x86_64 using:
- AES-NI (AES New Instructions)
- AVX (Advanced Vector Extensions)
- GFNI (Galois Field New Instructions)
Processes 16 blocks in parallel.
config CRYPTO_CHACHA20_X86_64
tristate "Ciphers: ChaCha20, XChaCha20, XChaCha12 (SSSE3/AVX2/AVX-512VL)"
depends on X86 && 64BIT
select CRYPTO_SKCIPHER
select CRYPTO_LIB_CHACHA_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CHACHA
help
Length-preserving ciphers: ChaCha20, XChaCha20, and XChaCha12
stream cipher algorithms
Architecture: x86_64 using:
- SSSE3 (Supplemental SSE3)
- AVX2 (Advanced Vector Extensions 2)
- AVX-512VL (Advanced Vector Extensions-512VL)
config CRYPTO_AEGIS128_AESNI_SSE2
tristate "AEAD ciphers: AEGIS-128 (AES-NI/SSE2)"
depends on X86 && 64BIT
select CRYPTO_AEAD
select CRYPTO_SIMD
help
AEGIS-128 AEAD algorithm
Architecture: x86_64 using:
- AES-NI (AES New Instructions)
- SSE2 (Streaming SIMD Extensions 2)
config CRYPTO_NHPOLY1305_SSE2
tristate "Hash functions: NHPoly1305 (SSE2)"
depends on X86 && 64BIT
select CRYPTO_NHPOLY1305
help
NHPoly1305 hash function for Adiantum
Architecture: x86_64 using:
- SSE2 (Streaming SIMD Extensions 2)
config CRYPTO_NHPOLY1305_AVX2
tristate "Hash functions: NHPoly1305 (AVX2)"
depends on X86 && 64BIT
select CRYPTO_NHPOLY1305
help
NHPoly1305 hash function for Adiantum
Architecture: x86_64 using:
- AVX2 (Advanced Vector Extensions 2)
config CRYPTO_BLAKE2S_X86
bool "Hash functions: BLAKE2s (SSSE3/AVX-512)"
depends on X86 && 64BIT
select CRYPTO_LIB_BLAKE2S_GENERIC
select CRYPTO_ARCH_HAVE_LIB_BLAKE2S
help
BLAKE2s cryptographic hash function (RFC 7693)
Architecture: x86_64 using:
- SSSE3 (Supplemental SSE3)
- AVX-512 (Advanced Vector Extensions-512)
config CRYPTO_POLYVAL_CLMUL_NI
tristate "Hash functions: POLYVAL (CLMUL-NI)"
depends on X86 && 64BIT
select CRYPTO_POLYVAL
help
POLYVAL hash function for HCTR2
Architecture: x86_64 using:
- CLMUL-NI (carry-less multiplication new instructions)
config CRYPTO_POLY1305_X86_64
tristate "Hash functions: Poly1305 (SSE2/AVX2)"
depends on X86 && 64BIT
select CRYPTO_LIB_POLY1305_GENERIC
select CRYPTO_ARCH_HAVE_LIB_POLY1305
help
Poly1305 authenticator algorithm (RFC7539)
Architecture: x86_64 using:
- SSE2 (Streaming SIMD Extensions 2)
- AVX2 (Advanced Vector Extensions 2)
config CRYPTO_SHA1_SSSE3
tristate "Hash functions: SHA-1 (SSSE3/AVX/AVX2/SHA-NI)"
depends on X86 && 64BIT
select CRYPTO_SHA1
select CRYPTO_HASH
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: x86_64 using:
- SSSE3 (Supplemental SSE3)
- AVX (Advanced Vector Extensions)
- AVX2 (Advanced Vector Extensions 2)
- SHA-NI (SHA Extensions New Instructions)
config CRYPTO_SHA256_SSSE3
tristate "Hash functions: SHA-224 and SHA-256 (SSSE3/AVX/AVX2/SHA-NI)"
depends on X86 && 64BIT
select CRYPTO_SHA256
select CRYPTO_HASH
help
SHA-224 and SHA-256 secure hash algorithms (FIPS 180)
Architecture: x86_64 using:
- SSSE3 (Supplemental SSE3)
- AVX (Advanced Vector Extensions)
- AVX2 (Advanced Vector Extensions 2)
- SHA-NI (SHA Extensions New Instructions)
config CRYPTO_SHA512_SSSE3
tristate "Hash functions: SHA-384 and SHA-512 (SSSE3/AVX/AVX2)"
depends on X86 && 64BIT
select CRYPTO_SHA512
select CRYPTO_HASH
help
SHA-384 and SHA-512 secure hash algorithms (FIPS 180)
Architecture: x86_64 using:
- SSSE3 (Supplemental SSE3)
- AVX (Advanced Vector Extensions)
- AVX2 (Advanced Vector Extensions 2)
config CRYPTO_SM3_AVX_X86_64
tristate "Hash functions: SM3 (AVX)"
depends on X86 && 64BIT
select CRYPTO_HASH
select CRYPTO_SM3
help
SM3 secure hash function as defined by OSCCA GM/T 0004-2012 SM3
Architecture: x86_64 using:
- AVX (Advanced Vector Extensions)
If unsure, say N.
config CRYPTO_GHASH_CLMUL_NI_INTEL
tristate "Hash functions: GHASH (CLMUL-NI)"
depends on X86 && 64BIT
select CRYPTO_CRYPTD
help
GCM GHASH hash function (NIST SP800-38D)
Architecture: x86_64 using:
- CLMUL-NI (carry-less multiplication new instructions)
config CRYPTO_CRC32C_INTEL
tristate "CRC32c (SSE4.2/PCLMULQDQ)"
depends on X86
select CRYPTO_HASH
help
CRC32c CRC algorithm with the iSCSI polynomial (RFC 3385 and RFC 3720)
Architecture: x86 (32-bit and 64-bit) using:
- SSE4.2 (Streaming SIMD Extensions 4.2) CRC32 instruction
- PCLMULQDQ (carry-less multiplication)
config CRYPTO_CRC32_PCLMUL
tristate "CRC32 (PCLMULQDQ)"
depends on X86
select CRYPTO_HASH
select CRC32
help
CRC32 CRC algorithm (IEEE 802.3)
Architecture: x86 (32-bit and 64-bit) using:
- PCLMULQDQ (carry-less multiplication)
config CRYPTO_CRCT10DIF_PCLMUL
tristate "CRCT10DIF (PCLMULQDQ)"
depends on X86 && 64BIT && CRC_T10DIF
select CRYPTO_HASH
help
CRC16 CRC algorithm used for the T10 (SCSI) Data Integrity Field (DIF)
Architecture: x86_64 using:
- PCLMULQDQ (carry-less multiplication)
endmenu

arch/x86/crypto/Makefile:
@@ -100,6 +100,9 @@ sm4-aesni-avx-x86_64-y := sm4-aesni-avx-asm_64.o sm4_aesni_avx_glue.o
 obj-$(CONFIG_CRYPTO_SM4_AESNI_AVX2_X86_64) += sm4-aesni-avx2-x86_64.o
 sm4-aesni-avx2-x86_64-y := sm4-aesni-avx2-asm_64.o sm4_aesni_avx2_glue.o
 
+obj-$(CONFIG_CRYPTO_ARIA_AESNI_AVX_X86_64) += aria-aesni-avx-x86_64.o
+aria-aesni-avx-x86_64-y := aria-aesni-avx-asm_64.o aria_aesni_avx_glue.o
+
 quiet_cmd_perlasm = PERLASM $@
       cmd_perlasm = $(PERL) $< > $@
 $(obj)/%.S: $(src)/%.pl FORCE

(File diff suppressed because it is too large.)

New file: arch/x86/crypto/aria-avx.h (16 lines)
/* SPDX-License-Identifier: GPL-2.0-or-later */
#ifndef ASM_X86_ARIA_AVX_H
#define ASM_X86_ARIA_AVX_H

#include <linux/types.h>

#define ARIA_AESNI_PARALLEL_BLOCKS 16
#define ARIA_AESNI_PARALLEL_BLOCK_SIZE  (ARIA_BLOCK_SIZE * 16)

struct aria_avx_ops {
	void (*aria_encrypt_16way)(const void *ctx, u8 *dst, const u8 *src);
	void (*aria_decrypt_16way)(const void *ctx, u8 *dst, const u8 *src);
	void (*aria_ctr_crypt_16way)(const void *ctx, u8 *dst, const u8 *src,
				     u8 *keystream, u8 *iv);
};
#endif
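The ops structure above is a small dispatch table. Presumably the glue code that follows fills it once at module init, preferring the GFNI entry points when the CPU advertises that feature. The helper below is only a sketch of that selection: aria_avx_select_ops() is a hypothetical name, while the assigned symbols are the asmlinkage declarations from the glue file.

#include <asm/cpufeature.h>

/* Illustrative only: pick the 16-way handlers based on CPU features. */
static void aria_avx_select_ops(struct aria_avx_ops *ops)
{
	if (boot_cpu_has(X86_FEATURE_GFNI)) {
		ops->aria_encrypt_16way = aria_aesni_avx_gfni_encrypt_16way;
		ops->aria_decrypt_16way = aria_aesni_avx_gfni_decrypt_16way;
		ops->aria_ctr_crypt_16way = aria_aesni_avx_gfni_ctr_crypt_16way;
	} else {
		ops->aria_encrypt_16way = aria_aesni_avx_encrypt_16way;
		ops->aria_decrypt_16way = aria_aesni_avx_decrypt_16way;
		ops->aria_ctr_crypt_16way = aria_aesni_avx_ctr_crypt_16way;
	}
}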

New file: arch/x86/crypto/aria_aesni_avx_glue.c (213 lines)
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Glue Code for the AVX/AES-NI/GFNI assembler implementation of the ARIA Cipher
*
* Copyright (c) 2022 Taehee Yoo <ap420073@gmail.com>
*/
#include <crypto/algapi.h>
#include <crypto/internal/simd.h>
#include <crypto/aria.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/types.h>
#include "ecb_cbc_helpers.h"
#include "aria-avx.h"
asmlinkage void aria_aesni_avx_encrypt_16way(const void *ctx, u8 *dst,
const u8 *src);
asmlinkage void aria_aesni_avx_decrypt_16way(const void *ctx, u8 *dst,
const u8 *src);
asmlinkage void aria_aesni_avx_ctr_crypt_16way(const void *ctx, u8 *dst,
const u8 *src,
u8 *keystream, u8 *iv);
asmlinkage void aria_aesni_avx_gfni_encrypt_16way(const void *ctx, u8 *dst,
const u8 *src);
asmlinkage void aria_aesni_avx_gfni_decrypt_16way(const void *ctx, u8 *dst,
const u8 *src);
asmlinkage void aria_aesni_avx_gfni_ctr_crypt_16way(const void *ctx, u8 *dst,
const u8 *src,
u8 *keystream, u8 *iv);
static struct aria_avx_ops aria_ops;
static int ecb_do_encrypt(struct skcipher_request *req, const u32 *rkey)
{
ECB_WALK_START(req, ARIA_BLOCK_SIZE, ARIA_AESNI_PARALLEL_BLOCKS);
ECB_BLOCK(ARIA_AESNI_PARALLEL_BLOCKS, aria_ops.aria_encrypt_16way);
ECB_BLOCK(1, aria_encrypt);
ECB_WALK_END();
}
static int ecb_do_decrypt(struct skcipher_request *req, const u32 *rkey)
{
ECB_WALK_START(req, ARIA_BLOCK_SIZE, ARIA_AESNI_PARALLEL_BLOCKS);
ECB_BLOCK(ARIA_AESNI_PARALLEL_BLOCKS, aria_ops.aria_decrypt_16way);
ECB_BLOCK(1, aria_decrypt);
ECB_WALK_END();
}
static int aria_avx_ecb_encrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct aria_ctx *ctx = crypto_skcipher_ctx(tfm);
return ecb_do_encrypt(req, ctx->enc_key[0]);
}
static int aria_avx_ecb_decrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct aria_ctx *ctx = crypto_skcipher_ctx(tfm);
return ecb_do_decrypt(req, ctx->dec_key[0]);
}
static int aria_avx_set_key(struct crypto_skcipher *tfm, const u8 *key,
unsigned int keylen)
{
return aria_set_key(&tfm->base, key, keylen);
}
static int aria_avx_ctr_encrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct aria_ctx *ctx = crypto_skcipher_ctx(tfm);
struct skcipher_walk walk;
unsigned int nbytes;
int err;
err = skcipher_walk_virt(&walk, req, false);
while ((nbytes = walk.nbytes) > 0) {
const u8 *src = walk.src.virt.addr;
u8 *dst = walk.dst.virt.addr;
while (nbytes >= ARIA_AESNI_PARALLEL_BLOCK_SIZE) {
u8 keystream[ARIA_AESNI_PARALLEL_BLOCK_SIZE];
kernel_fpu_begin();
aria_ops.aria_ctr_crypt_16way(ctx, dst, src, keystream,
walk.iv);
kernel_fpu_end();
dst += ARIA_AESNI_PARALLEL_BLOCK_SIZE;
src += ARIA_AESNI_PARALLEL_BLOCK_SIZE;
nbytes -= ARIA_AESNI_PARALLEL_BLOCK_SIZE;
}
while (nbytes >= ARIA_BLOCK_SIZE) {
u8 keystream[ARIA_BLOCK_SIZE];
memcpy(keystream, walk.iv, ARIA_BLOCK_SIZE);
crypto_inc(walk.iv, ARIA_BLOCK_SIZE);
aria_encrypt(ctx, keystream, keystream);
crypto_xor_cpy(dst, src, keystream, ARIA_BLOCK_SIZE);
dst += ARIA_BLOCK_SIZE;
src += ARIA_BLOCK_SIZE;
nbytes -= ARIA_BLOCK_SIZE;
}
if (walk.nbytes == walk.total && nbytes > 0) {
u8 keystream[ARIA_BLOCK_SIZE];
memcpy(keystream, walk.iv, ARIA_BLOCK_SIZE);
crypto_inc(walk.iv, ARIA_BLOCK_SIZE);
aria_encrypt(ctx, keystream, keystream);
crypto_xor_cpy(dst, src, keystream, nbytes);
dst += nbytes;
src += nbytes;
nbytes = 0;
}
err = skcipher_walk_done(&walk, nbytes);
}
return err;
}
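For orientation: ARIA is a 128-bit block cipher, so ARIA_AESNI_PARALLEL_BLOCK_SIZE works out to 256 bytes and each pass through the 16-way branch above costs one kernel_fpu_begin()/kernel_fpu_end() section. Assuming the walker hands back a 1000-byte request in one piece, it would be processed as three 256-byte 16-way calls, fourteen single-block calls into the generic aria_encrypt(), and one 8-byte partial tail handled by the final branch (the numbers are mine, for illustration only).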
static struct skcipher_alg aria_algs[] = {
{
.base.cra_name = "__ecb(aria)",
.base.cra_driver_name = "__ecb-aria-avx",
.base.cra_priority = 400,
.base.cra_flags = CRYPTO_ALG_INTERNAL,
.base.cra_blocksize = ARIA_BLOCK_SIZE,
.base.cra_ctxsize = sizeof(struct aria_ctx),
.base.cra_module = THIS_MODULE,
.min_keysize = ARIA_MIN_KEY_SIZE,
.max_keysize = ARIA_MAX_KEY_SIZE,
.setkey = aria_avx_set_key,
.encrypt = aria_avx_ecb_encrypt,
.decrypt = aria_avx_ecb_decrypt,
}, {
.base.cra_name = "__ctr(aria)",
.base.cra_driver_name = "__ctr-aria-avx",
.base.cra_priority = 400,
.base.cra_flags = CRYPTO_ALG_INTERNAL,
.base.cra_blocksize = 1,
.base.cra_ctxsize = sizeof(struct aria_ctx),
.base.cra_module = THIS_MODULE,
.min_keysize = ARIA_MIN_KEY_SIZE,
.max_keysize = ARIA_MAX_KEY_SIZE,
.ivsize = ARIA_BLOCK_SIZE,
.chunksize = ARIA_BLOCK_SIZE,
.walksize = 16 * ARIA_BLOCK_SIZE,
.setkey = aria_avx_set_key,
.encrypt = aria_avx_ctr_encrypt,
.decrypt = aria_avx_ctr_encrypt,
}
};
static struct simd_skcipher_alg *aria_simd_algs[ARRAY_SIZE(aria_algs)];
static int __init aria_avx_init(void)
{
const char *feature_name;
if (!boot_cpu_has(X86_FEATURE_AVX) ||
!boot_cpu_has(X86_FEATURE_AES) ||
!boot_cpu_has(X86_FEATURE_OSXSAVE)) {
pr_info("AVX or AES-NI instructions are not detected.\n");
return -ENODEV;
}
if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM,
&feature_name)) {
pr_info("CPU feature '%s' is not supported.\n", feature_name);
return -ENODEV;
}
if (boot_cpu_has(X86_FEATURE_GFNI)) {
aria_ops.aria_encrypt_16way = aria_aesni_avx_gfni_encrypt_16way;
aria_ops.aria_decrypt_16way = aria_aesni_avx_gfni_decrypt_16way;
aria_ops.aria_ctr_crypt_16way = aria_aesni_avx_gfni_ctr_crypt_16way;
} else {
aria_ops.aria_encrypt_16way = aria_aesni_avx_encrypt_16way;
aria_ops.aria_decrypt_16way = aria_aesni_avx_decrypt_16way;
aria_ops.aria_ctr_crypt_16way = aria_aesni_avx_ctr_crypt_16way;
}
return simd_register_skciphers_compat(aria_algs,
ARRAY_SIZE(aria_algs),
aria_simd_algs);
}
static void __exit aria_avx_exit(void)
{
simd_unregister_skciphers(aria_algs, ARRAY_SIZE(aria_algs),
aria_simd_algs);
}
module_init(aria_avx_init);
module_exit(aria_avx_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Taehee Yoo <ap420073@gmail.com>");
MODULE_DESCRIPTION("ARIA Cipher Algorithm, AVX/AES-NI/GFNI optimized");
MODULE_ALIAS_CRYPTO("aria");
MODULE_ALIAS_CRYPTO("aria-aesni-avx");
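The skciphers above are flagged CRYPTO_ALG_INTERNAL and wrapped by simd_register_skciphers_compat(), so callers never see the __ecb-aria-avx/__ctr-aria-avx names; they allocate plain "ecb(aria)" or "ctr(aria)" and the SIMD wrapper (which defers to cryptd when the FPU is not usable) wins by priority. A hedged sketch of such a caller — the function, buffer handling and error paths are invented for illustration and are not part of this merge:

#include <crypto/skcipher.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

static int example_aria_ctr_encrypt(u8 *buf, unsigned int len,
                                    const u8 *key, unsigned int keylen,
                                    u8 iv[16])
{
        struct crypto_skcipher *tfm;
        struct skcipher_request *req;
        struct scatterlist sg;
        DECLARE_CRYPTO_WAIT(wait);
        int err;

        /* Resolves to the AVX implementation via its simd wrapper when available. */
        tfm = crypto_alloc_skcipher("ctr(aria)", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        err = crypto_skcipher_setkey(tfm, key, keylen);
        if (err)
                goto out_free_tfm;

        req = skcipher_request_alloc(tfm, GFP_KERNEL);
        if (!req) {
                err = -ENOMEM;
                goto out_free_tfm;
        }

        /* buf must be a linearly mapped (e.g. kmalloc'd) buffer for sg_init_one(). */
        sg_init_one(&sg, buf, len);
        skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
                                      crypto_req_done, &wait);
        skcipher_request_set_crypt(req, &sg, &sg, len, iv);
        err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

        skcipher_request_free(req);
out_free_tfm:
        crypto_free_skcipher(tfm);
        return err;
}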


@ -36,6 +36,7 @@
#include <linux/types.h> #include <linux/types.h>
#include <crypto/sha2.h> #include <crypto/sha2.h>
#include <crypto/sha512_base.h> #include <crypto/sha512_base.h>
#include <asm/cpu_device_id.h>
#include <asm/simd.h> #include <asm/simd.h>
asmlinkage void sha512_transform_ssse3(struct sha512_state *state, asmlinkage void sha512_transform_ssse3(struct sha512_state *state,
@ -284,6 +285,13 @@ static int register_sha512_avx2(void)
ARRAY_SIZE(sha512_avx2_algs)); ARRAY_SIZE(sha512_avx2_algs));
return 0; return 0;
} }
static const struct x86_cpu_id module_cpu_ids[] = {
X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL),
X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL),
{}
};
MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
static void unregister_sha512_avx2(void) static void unregister_sha512_avx2(void)
{ {
@ -294,6 +302,8 @@ static void unregister_sha512_avx2(void)
static int __init sha512_ssse3_mod_init(void) static int __init sha512_ssse3_mod_init(void)
{ {
if (!x86_match_cpu(module_cpu_ids))
return -ENODEV;
if (register_sha512_ssse3()) if (register_sha512_ssse3())
goto fail; goto fail;

File diff suppressed because it is too large


@ -149,7 +149,7 @@ obj-$(CONFIG_CRYPTO_TEA) += tea.o
obj-$(CONFIG_CRYPTO_KHAZAD) += khazad.o obj-$(CONFIG_CRYPTO_KHAZAD) += khazad.o
obj-$(CONFIG_CRYPTO_ANUBIS) += anubis.o obj-$(CONFIG_CRYPTO_ANUBIS) += anubis.o
obj-$(CONFIG_CRYPTO_SEED) += seed.o obj-$(CONFIG_CRYPTO_SEED) += seed.o
obj-$(CONFIG_CRYPTO_ARIA) += aria.o obj-$(CONFIG_CRYPTO_ARIA) += aria_generic.o
obj-$(CONFIG_CRYPTO_CHACHA20) += chacha_generic.o obj-$(CONFIG_CRYPTO_CHACHA20) += chacha_generic.o
obj-$(CONFIG_CRYPTO_POLY1305) += poly1305_generic.o obj-$(CONFIG_CRYPTO_POLY1305) += poly1305_generic.o
obj-$(CONFIG_CRYPTO_DEFLATE) += deflate.o obj-$(CONFIG_CRYPTO_DEFLATE) += deflate.o


@ -120,6 +120,12 @@ static int akcipher_default_op(struct akcipher_request *req)
return -ENOSYS; return -ENOSYS;
} }
static int akcipher_default_set_key(struct crypto_akcipher *tfm,
const void *key, unsigned int keylen)
{
return -ENOSYS;
}
int crypto_register_akcipher(struct akcipher_alg *alg) int crypto_register_akcipher(struct akcipher_alg *alg)
{ {
struct crypto_alg *base = &alg->base; struct crypto_alg *base = &alg->base;
@ -132,6 +138,8 @@ int crypto_register_akcipher(struct akcipher_alg *alg)
alg->encrypt = akcipher_default_op; alg->encrypt = akcipher_default_op;
if (!alg->decrypt) if (!alg->decrypt)
alg->decrypt = akcipher_default_op; alg->decrypt = akcipher_default_op;
if (!alg->set_priv_key)
alg->set_priv_key = akcipher_default_set_key;
akcipher_prepare_alg(alg); akcipher_prepare_alg(alg);
return crypto_register_alg(base); return crypto_register_alg(base);
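The practical effect of the default handler is that an akcipher implementation no longer has to provide stubs for operations it cannot support; anything left unset now fails cleanly with -ENOSYS. A hypothetical public-key-only registration illustrating this — every name and stub body below is made up, and only the defaults visible in this hunk (encrypt, decrypt, set_priv_key) are relied on; a real driver would still have to provide or stub any other callback its users may invoke:

#include <crypto/internal/akcipher.h>
#include <linux/module.h>

struct example_pub_ctx {
        u8 key[64];
        unsigned int keylen;
};

/* Illustrative stubs; a real driver would do actual work here. */
static int example_verify(struct akcipher_request *req)
{
        return -EINVAL;
}

static int example_set_pub_key(struct crypto_akcipher *tfm,
                               const void *key, unsigned int keylen)
{
        return 0;
}

static unsigned int example_max_size(struct crypto_akcipher *tfm)
{
        return 64;
}

static struct akcipher_alg example_verify_only = {
        .verify      = example_verify,
        .set_pub_key = example_set_pub_key,
        .max_size    = example_max_size,
        /*
         * .set_priv_key, .encrypt and .decrypt may stay NULL;
         * crypto_register_akcipher() fills them with the defaults above.
         */
        .base = {
                .cra_name        = "example-verify-only",
                .cra_driver_name = "example-verify-only-driver",
                .cra_module      = THIS_MODULE,
                .cra_ctxsize     = sizeof(struct example_pub_ctx),
        },
};

Such an algorithm would then be registered from module init with crypto_register_akcipher(&example_verify_only).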


@ -997,77 +997,6 @@ void crypto_inc(u8 *a, unsigned int size)
} }
EXPORT_SYMBOL_GPL(crypto_inc); EXPORT_SYMBOL_GPL(crypto_inc);
void __crypto_xor(u8 *dst, const u8 *src1, const u8 *src2, unsigned int len)
{
int relalign = 0;
if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
int size = sizeof(unsigned long);
int d = (((unsigned long)dst ^ (unsigned long)src1) |
((unsigned long)dst ^ (unsigned long)src2)) &
(size - 1);
relalign = d ? 1 << __ffs(d) : size;
/*
* If we care about alignment, process as many bytes as
* needed to advance dst and src to values whose alignments
* equal their relative alignment. This will allow us to
* process the remainder of the input using optimal strides.
*/
while (((unsigned long)dst & (relalign - 1)) && len > 0) {
*dst++ = *src1++ ^ *src2++;
len--;
}
}
while (IS_ENABLED(CONFIG_64BIT) && len >= 8 && !(relalign & 7)) {
if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
u64 l = get_unaligned((u64 *)src1) ^
get_unaligned((u64 *)src2);
put_unaligned(l, (u64 *)dst);
} else {
*(u64 *)dst = *(u64 *)src1 ^ *(u64 *)src2;
}
dst += 8;
src1 += 8;
src2 += 8;
len -= 8;
}
while (len >= 4 && !(relalign & 3)) {
if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
u32 l = get_unaligned((u32 *)src1) ^
get_unaligned((u32 *)src2);
put_unaligned(l, (u32 *)dst);
} else {
*(u32 *)dst = *(u32 *)src1 ^ *(u32 *)src2;
}
dst += 4;
src1 += 4;
src2 += 4;
len -= 4;
}
while (len >= 2 && !(relalign & 1)) {
if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
u16 l = get_unaligned((u16 *)src1) ^
get_unaligned((u16 *)src2);
put_unaligned(l, (u16 *)dst);
} else {
*(u16 *)dst = *(u16 *)src1 ^ *(u16 *)src2;
}
dst += 2;
src1 += 2;
src2 += 2;
len -= 2;
}
while (len--)
*dst++ = *src1++ ^ *src2++;
}
EXPORT_SYMBOL_GPL(__crypto_xor);
unsigned int crypto_alg_extsize(struct crypto_alg *alg) unsigned int crypto_alg_extsize(struct crypto_alg *alg)
{ {
return alg->cra_ctxsize + return alg->cra_ctxsize +


@ -114,7 +114,7 @@ struct crypto_larval *crypto_larval_alloc(const char *name, u32 type, u32 mask)
larval->alg.cra_priority = -1; larval->alg.cra_priority = -1;
larval->alg.cra_destroy = crypto_larval_destroy; larval->alg.cra_destroy = crypto_larval_destroy;
strlcpy(larval->alg.cra_name, name, CRYPTO_MAX_ALG_NAME); strscpy(larval->alg.cra_name, name, CRYPTO_MAX_ALG_NAME);
init_completion(&larval->completion); init_completion(&larval->completion);
return larval; return larval;
@ -321,7 +321,7 @@ struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask)
/* /*
* If the internal flag is set for a cipher, require a caller to * If the internal flag is set for a cipher, require a caller to
* to invoke the cipher with the internal flag to use that cipher. * invoke the cipher with the internal flag to use that cipher.
* Also, if a caller wants to allocate a cipher that may or may * Also, if a caller wants to allocate a cipher that may or may
* not be an internal cipher, use type | CRYPTO_ALG_INTERNAL and * not be an internal cipher, use type | CRYPTO_ALG_INTERNAL and
* !(mask & CRYPTO_ALG_INTERNAL). * !(mask & CRYPTO_ALG_INTERNAL).


@ -16,6 +16,14 @@
#include <crypto/aria.h> #include <crypto/aria.h>
static const u32 key_rc[20] = {
0x517cc1b7, 0x27220a94, 0xfe13abe8, 0xfa9a6ee0,
0x6db14acc, 0x9e21c820, 0xff28b1d5, 0xef5de2b0,
0xdb92371d, 0x2126e970, 0x03249775, 0x04e8c90e,
0x517cc1b7, 0x27220a94, 0xfe13abe8, 0xfa9a6ee0,
0x6db14acc, 0x9e21c820, 0xff28b1d5, 0xef5de2b0
};
static void aria_set_encrypt_key(struct aria_ctx *ctx, const u8 *in_key, static void aria_set_encrypt_key(struct aria_ctx *ctx, const u8 *in_key,
unsigned int key_len) unsigned int key_len)
{ {
@ -25,7 +33,7 @@ static void aria_set_encrypt_key(struct aria_ctx *ctx, const u8 *in_key,
const u32 *ck; const u32 *ck;
int rkidx = 0; int rkidx = 0;
ck = &key_rc[(key_len - 16) / 8][0]; ck = &key_rc[(key_len - 16) / 2];
w0[0] = be32_to_cpu(key[0]); w0[0] = be32_to_cpu(key[0]);
w0[1] = be32_to_cpu(key[1]); w0[1] = be32_to_cpu(key[1]);
@ -163,8 +171,7 @@ static void aria_set_decrypt_key(struct aria_ctx *ctx)
} }
} }
static int aria_set_key(struct crypto_tfm *tfm, const u8 *in_key, int aria_set_key(struct crypto_tfm *tfm, const u8 *in_key, unsigned int key_len)
unsigned int key_len)
{ {
struct aria_ctx *ctx = crypto_tfm_ctx(tfm); struct aria_ctx *ctx = crypto_tfm_ctx(tfm);
@ -179,6 +186,7 @@ static int aria_set_key(struct crypto_tfm *tfm, const u8 *in_key,
return 0; return 0;
} }
EXPORT_SYMBOL_GPL(aria_set_key);
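The index change in aria_set_encrypt_key() above is the old two-dimensional lookup flattened: for key lengths of 16, 24 and 32 bytes, (key_len - 16) / 2 evaluates to 0, 4 and 8, i.e. the start of the first, second and third group of four round constants in the now one-dimensional key_rc[], exactly the rows the previous (key_len - 16) / 8 row index selected.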
static void __aria_crypt(struct aria_ctx *ctx, u8 *out, const u8 *in, static void __aria_crypt(struct aria_ctx *ctx, u8 *out, const u8 *in,
u32 key[][ARIA_RD_KEY_WORDS]) u32 key[][ARIA_RD_KEY_WORDS])
@ -235,14 +243,30 @@ static void __aria_crypt(struct aria_ctx *ctx, u8 *out, const u8 *in,
dst[3] = cpu_to_be32(reg3); dst[3] = cpu_to_be32(reg3);
} }
static void aria_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) void aria_encrypt(void *_ctx, u8 *out, const u8 *in)
{
struct aria_ctx *ctx = (struct aria_ctx *)_ctx;
__aria_crypt(ctx, out, in, ctx->enc_key);
}
EXPORT_SYMBOL_GPL(aria_encrypt);
void aria_decrypt(void *_ctx, u8 *out, const u8 *in)
{
struct aria_ctx *ctx = (struct aria_ctx *)_ctx;
__aria_crypt(ctx, out, in, ctx->dec_key);
}
EXPORT_SYMBOL_GPL(aria_decrypt);
static void __aria_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{ {
struct aria_ctx *ctx = crypto_tfm_ctx(tfm); struct aria_ctx *ctx = crypto_tfm_ctx(tfm);
__aria_crypt(ctx, out, in, ctx->enc_key); __aria_crypt(ctx, out, in, ctx->enc_key);
} }
static void aria_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) static void __aria_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{ {
struct aria_ctx *ctx = crypto_tfm_ctx(tfm); struct aria_ctx *ctx = crypto_tfm_ctx(tfm);
@ -263,8 +287,8 @@ static struct crypto_alg aria_alg = {
.cia_min_keysize = ARIA_MIN_KEY_SIZE, .cia_min_keysize = ARIA_MIN_KEY_SIZE,
.cia_max_keysize = ARIA_MAX_KEY_SIZE, .cia_max_keysize = ARIA_MAX_KEY_SIZE,
.cia_setkey = aria_set_key, .cia_setkey = aria_set_key,
.cia_encrypt = aria_encrypt, .cia_encrypt = __aria_encrypt,
.cia_decrypt = aria_decrypt .cia_decrypt = __aria_decrypt
} }
} }
}; };
@ -286,3 +310,4 @@ MODULE_DESCRIPTION("ARIA Cipher Algorithm");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_AUTHOR("Taehee Yoo <ap420073@gmail.com>"); MODULE_AUTHOR("Taehee Yoo <ap420073@gmail.com>");
MODULE_ALIAS_CRYPTO("aria"); MODULE_ALIAS_CRYPTO("aria");
MODULE_ALIAS_CRYPTO("aria-generic");


@ -189,7 +189,7 @@ static int test(int disks, int *tests)
} }
static int raid6_test(void) static int __init raid6_test(void)
{ {
int err = 0; int err = 0;
int tests = 0; int tests = 0;
@ -236,7 +236,7 @@ static int raid6_test(void)
return 0; return 0;
} }
static void raid6_test_exit(void) static void __exit raid6_test_exit(void)
{ {
} }


@ -72,12 +72,12 @@ static struct kpp_alg curve25519_alg = {
.max_size = curve25519_max_size, .max_size = curve25519_max_size,
}; };
static int curve25519_init(void) static int __init curve25519_init(void)
{ {
return crypto_register_kpp(&curve25519_alg); return crypto_register_kpp(&curve25519_alg);
} }
static void curve25519_exit(void) static void __exit curve25519_exit(void)
{ {
crypto_unregister_kpp(&curve25519_alg); crypto_unregister_kpp(&curve25519_alg);
} }


@ -893,7 +893,7 @@ static struct crypto_template crypto_ffdhe_templates[] = {};
#endif /* CONFIG_CRYPTO_DH_RFC7919_GROUPS */ #endif /* CONFIG_CRYPTO_DH_RFC7919_GROUPS */
static int dh_init(void) static int __init dh_init(void)
{ {
int err; int err;
@ -911,7 +911,7 @@ static int dh_init(void)
return 0; return 0;
} }
static void dh_exit(void) static void __exit dh_exit(void)
{ {
crypto_unregister_templates(crypto_ffdhe_templates, crypto_unregister_templates(crypto_ffdhe_templates,
ARRAY_SIZE(crypto_ffdhe_templates)); ARRAY_SIZE(crypto_ffdhe_templates));


@ -1703,7 +1703,7 @@ static int drbg_init_hash_kernel(struct drbg_state *drbg)
static int drbg_fini_hash_kernel(struct drbg_state *drbg) static int drbg_fini_hash_kernel(struct drbg_state *drbg)
{ {
struct sdesc *sdesc = (struct sdesc *)drbg->priv_data; struct sdesc *sdesc = drbg->priv_data;
if (sdesc) { if (sdesc) {
crypto_free_shash(sdesc->shash.tfm); crypto_free_shash(sdesc->shash.tfm);
kfree_sensitive(sdesc); kfree_sensitive(sdesc);
@ -1715,7 +1715,7 @@ static int drbg_fini_hash_kernel(struct drbg_state *drbg)
static void drbg_kcapi_hmacsetkey(struct drbg_state *drbg, static void drbg_kcapi_hmacsetkey(struct drbg_state *drbg,
const unsigned char *key) const unsigned char *key)
{ {
struct sdesc *sdesc = (struct sdesc *)drbg->priv_data; struct sdesc *sdesc = drbg->priv_data;
crypto_shash_setkey(sdesc->shash.tfm, key, drbg_statelen(drbg)); crypto_shash_setkey(sdesc->shash.tfm, key, drbg_statelen(drbg));
} }
@ -1723,7 +1723,7 @@ static void drbg_kcapi_hmacsetkey(struct drbg_state *drbg,
static int drbg_kcapi_hash(struct drbg_state *drbg, unsigned char *outval, static int drbg_kcapi_hash(struct drbg_state *drbg, unsigned char *outval,
const struct list_head *in) const struct list_head *in)
{ {
struct sdesc *sdesc = (struct sdesc *)drbg->priv_data; struct sdesc *sdesc = drbg->priv_data;
struct drbg_string *input = NULL; struct drbg_string *input = NULL;
crypto_shash_init(&sdesc->shash); crypto_shash_init(&sdesc->shash);
@ -1818,8 +1818,7 @@ static int drbg_init_sym_kernel(struct drbg_state *drbg)
static void drbg_kcapi_symsetkey(struct drbg_state *drbg, static void drbg_kcapi_symsetkey(struct drbg_state *drbg,
const unsigned char *key) const unsigned char *key)
{ {
struct crypto_cipher *tfm = struct crypto_cipher *tfm = drbg->priv_data;
(struct crypto_cipher *)drbg->priv_data;
crypto_cipher_setkey(tfm, key, (drbg_keylen(drbg))); crypto_cipher_setkey(tfm, key, (drbg_keylen(drbg)));
} }
@ -1827,8 +1826,7 @@ static void drbg_kcapi_symsetkey(struct drbg_state *drbg,
static int drbg_kcapi_sym(struct drbg_state *drbg, unsigned char *outval, static int drbg_kcapi_sym(struct drbg_state *drbg, unsigned char *outval,
const struct drbg_string *in) const struct drbg_string *in)
{ {
struct crypto_cipher *tfm = struct crypto_cipher *tfm = drbg->priv_data;
(struct crypto_cipher *)drbg->priv_data;
/* there is only component in *in */ /* there is only component in *in */
BUG_ON(in->len < drbg_blocklen(drbg)); BUG_ON(in->len < drbg_blocklen(drbg));


@ -200,7 +200,7 @@ static struct kpp_alg ecdh_nist_p384 = {
static bool ecdh_nist_p192_registered; static bool ecdh_nist_p192_registered;
static int ecdh_init(void) static int __init ecdh_init(void)
{ {
int ret; int ret;
@ -227,7 +227,7 @@ nist_p256_error:
return ret; return ret;
} }
static void ecdh_exit(void) static void __exit ecdh_exit(void)
{ {
if (ecdh_nist_p192_registered) if (ecdh_nist_p192_registered)
crypto_unregister_kpp(&ecdh_nist_p192); crypto_unregister_kpp(&ecdh_nist_p192);


@ -332,7 +332,7 @@ static struct akcipher_alg ecdsa_nist_p192 = {
}; };
static bool ecdsa_nist_p192_registered; static bool ecdsa_nist_p192_registered;
static int ecdsa_init(void) static int __init ecdsa_init(void)
{ {
int ret; int ret;
@ -359,7 +359,7 @@ nist_p256_error:
return ret; return ret;
} }
static void ecdsa_exit(void) static void __exit ecdsa_exit(void)
{ {
if (ecdsa_nist_p192_registered) if (ecdsa_nist_p192_registered)
crypto_unregister_akcipher(&ecdsa_nist_p192); crypto_unregister_akcipher(&ecdsa_nist_p192);


@ -543,7 +543,7 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
} }
/* record the driver name so we can instantiate this exact algo later */ /* record the driver name so we can instantiate this exact algo later */
strlcpy(ictx->shash_driver_name, hash_alg->base.cra_driver_name, strscpy(ictx->shash_driver_name, hash_alg->base.cra_driver_name,
CRYPTO_MAX_ALG_NAME); CRYPTO_MAX_ALG_NAME);
/* Instance fields */ /* Instance fields */


@ -327,7 +327,7 @@ static struct akcipher_alg rsa = {
}, },
}; };
static int rsa_init(void) static int __init rsa_init(void)
{ {
int err; int err;
@ -344,7 +344,7 @@ static int rsa_init(void)
return 0; return 0;
} }
static void rsa_exit(void) static void __exit rsa_exit(void)
{ {
crypto_unregister_template(&rsa_pkcs1pad_tmpl); crypto_unregister_template(&rsa_pkcs1pad_tmpl);
crypto_unregister_akcipher(&rsa); crypto_unregister_akcipher(&rsa);


@ -441,12 +441,12 @@ static struct akcipher_alg sm2 = {
}, },
}; };
static int sm2_init(void) static int __init sm2_init(void)
{ {
return crypto_register_akcipher(&sm2); return crypto_register_akcipher(&sm2);
} }
static void sm2_exit(void) static void __exit sm2_exit(void)
{ {
crypto_unregister_akcipher(&sm2); crypto_unregister_akcipher(&sm2);
} }


@ -66,17 +66,6 @@ static u32 num_mb = 8;
static unsigned int klen; static unsigned int klen;
static char *tvmem[TVMEMSIZE]; static char *tvmem[TVMEMSIZE];
static const char *check[] = {
"des", "md5", "des3_ede", "rot13", "sha1", "sha224", "sha256", "sm3",
"blowfish", "twofish", "serpent", "sha384", "sha512", "md4", "aes",
"cast6", "arc4", "michael_mic", "deflate", "crc32c", "tea", "xtea",
"khazad", "wp512", "wp384", "wp256", "xeta", "fcrypt",
"camellia", "seed", "rmd160", "aria",
"lzo", "lzo-rle", "cts", "sha3-224", "sha3-256", "sha3-384",
"sha3-512", "streebog256", "streebog512",
NULL
};
static const int block_sizes[] = { 16, 64, 128, 256, 1024, 1420, 4096, 0 }; static const int block_sizes[] = { 16, 64, 128, 256, 1024, 1420, 4096, 0 };
static const int aead_sizes[] = { 16, 64, 256, 512, 1024, 1420, 4096, 8192, 0 }; static const int aead_sizes[] = { 16, 64, 256, 512, 1024, 1420, 4096, 8192, 0 };
@ -1454,18 +1443,6 @@ static void test_cipher_speed(const char *algo, int enc, unsigned int secs,
false); false);
} }
static void test_available(void)
{
const char **name = check;
while (*name) {
printk("alg %s ", *name);
printk(crypto_has_alg(*name, 0, 0) ?
"found\n" : "not found\n");
name++;
}
}
static inline int tcrypt_test(const char *alg) static inline int tcrypt_test(const char *alg)
{ {
int ret; int ret;
@ -2228,6 +2205,13 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
NULL, 0, 16, 8, speed_template_16_24_32); NULL, 0, 16, 8, speed_template_16_24_32);
break; break;
case 229:
test_mb_aead_speed("gcm(aria)", ENCRYPT, sec, NULL, 0, 16, 8,
speed_template_16, num_mb);
test_mb_aead_speed("gcm(aria)", DECRYPT, sec, NULL, 0, 16, 8,
speed_template_16, num_mb);
break;
case 300: case 300:
if (alg) { if (alg) {
test_hash_speed(alg, sec, generic_hash_speed_template); test_hash_speed(alg, sec, generic_hash_speed_template);
@ -2648,6 +2632,17 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
speed_template_16); speed_template_16);
break; break;
case 519:
test_acipher_speed("ecb(aria)", ENCRYPT, sec, NULL, 0,
speed_template_16_24_32);
test_acipher_speed("ecb(aria)", DECRYPT, sec, NULL, 0,
speed_template_16_24_32);
test_acipher_speed("ctr(aria)", ENCRYPT, sec, NULL, 0,
speed_template_16_24_32);
test_acipher_speed("ctr(aria)", DECRYPT, sec, NULL, 0,
speed_template_16_24_32);
break;
case 600: case 600:
test_mb_skcipher_speed("ecb(aes)", ENCRYPT, sec, NULL, 0, test_mb_skcipher_speed("ecb(aes)", ENCRYPT, sec, NULL, 0,
speed_template_16_24_32, num_mb); speed_template_16_24_32, num_mb);
@ -2860,9 +2855,17 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
speed_template_8_32, num_mb); speed_template_8_32, num_mb);
break; break;
case 1000: case 610:
test_available(); test_mb_skcipher_speed("ecb(aria)", ENCRYPT, sec, NULL, 0,
speed_template_16_32, num_mb);
test_mb_skcipher_speed("ecb(aria)", DECRYPT, sec, NULL, 0,
speed_template_16_32, num_mb);
test_mb_skcipher_speed("ctr(aria)", ENCRYPT, sec, NULL, 0,
speed_template_16_32, num_mb);
test_mb_skcipher_speed("ctr(aria)", DECRYPT, sec, NULL, 0,
speed_template_16_32, num_mb);
break; break;
} }
return ret; return ret;
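As with the existing entries in this switch, the new cases are driven by loading the module with the matching mode number, e.g. modprobe tcrypt mode=519 sec=1 for the asynchronous ARIA speed tests; mode=229 and mode=610 cover the gcm(aria) AEAD and multibuffer skcipher variants, while mode=1000, which merely listed available algorithms, is removed.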


@ -3322,7 +3322,7 @@ out:
} }
static int test_acomp(struct crypto_acomp *tfm, static int test_acomp(struct crypto_acomp *tfm,
const struct comp_testvec *ctemplate, const struct comp_testvec *ctemplate,
const struct comp_testvec *dtemplate, const struct comp_testvec *dtemplate,
int ctcount, int dtcount) int ctcount, int dtcount)
{ {
@ -3417,6 +3417,21 @@ static int test_acomp(struct crypto_acomp *tfm,
goto out; goto out;
} }
#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
crypto_init_wait(&wait);
sg_init_one(&src, input_vec, ilen);
acomp_request_set_params(req, &src, NULL, ilen, 0);
ret = crypto_wait_req(crypto_acomp_compress(req), &wait);
if (ret) {
pr_err("alg: acomp: compression failed on NULL dst buffer test %d for %s: ret=%d\n",
i + 1, algo, -ret);
kfree(input_vec);
acomp_request_free(req);
goto out;
}
#endif
kfree(input_vec); kfree(input_vec);
acomp_request_free(req); acomp_request_free(req);
} }
@ -3478,6 +3493,20 @@ static int test_acomp(struct crypto_acomp *tfm,
goto out; goto out;
} }
#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
crypto_init_wait(&wait);
acomp_request_set_params(req, &src, NULL, ilen, 0);
ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
if (ret) {
pr_err("alg: acomp: decompression failed on NULL dst buffer test %d for %s: ret=%d\n",
i + 1, algo, -ret);
kfree(input_vec);
acomp_request_free(req);
goto out;
}
#endif
kfree(input_vec); kfree(input_vec);
acomp_request_free(req); acomp_request_free(req);
} }
@ -5801,8 +5830,11 @@ test_done:
driver, alg, driver, alg,
fips_enabled ? "fips" : "panic_on_fail"); fips_enabled ? "fips" : "panic_on_fail");
} }
WARN(1, "alg: self-tests for %s (%s) failed (rc=%d)", pr_warn("alg: self-tests for %s using %s failed (rc=%d)",
driver, alg, rc); alg, driver, rc);
WARN(rc != -ENOENT,
"alg: self-tests for %s using %s failed (rc=%d)",
alg, driver, rc);
} else { } else {
if (fips_enabled) if (fips_enabled)
pr_info("alg: self-tests for %s (%s) passed\n", pr_info("alg: self-tests for %s (%s) passed\n",


@ -71,8 +71,6 @@ static int smccc_trng_read(struct hwrng *rng, void *data, size_t max, bool wait)
MAX_BITS_PER_CALL); MAX_BITS_PER_CALL);
arm_smccc_1_1_invoke(ARM_SMCCC_TRNG_RND, bits, &res); arm_smccc_1_1_invoke(ARM_SMCCC_TRNG_RND, bits, &res);
if ((int)res.a0 < 0)
return (int)res.a0;
switch ((int)res.a0) { switch ((int)res.a0) {
case SMCCC_RET_SUCCESS: case SMCCC_RET_SUCCESS:
@ -88,6 +86,8 @@ static int smccc_trng_read(struct hwrng *rng, void *data, size_t max, bool wait)
return copied; return copied;
cond_resched(); cond_resched();
break; break;
default:
return -EIO;
} }
} }


@ -52,7 +52,7 @@ MODULE_PARM_DESC(default_quality,
static void drop_current_rng(void); static void drop_current_rng(void);
static int hwrng_init(struct hwrng *rng); static int hwrng_init(struct hwrng *rng);
static void hwrng_manage_rngd(struct hwrng *rng); static int hwrng_fillfn(void *unused);
static inline int rng_get_data(struct hwrng *rng, u8 *buffer, size_t size, static inline int rng_get_data(struct hwrng *rng, u8 *buffer, size_t size,
int wait); int wait);
@ -96,6 +96,15 @@ static int set_current_rng(struct hwrng *rng)
drop_current_rng(); drop_current_rng();
current_rng = rng; current_rng = rng;
/* if necessary, start hwrng thread */
if (!hwrng_fill) {
hwrng_fill = kthread_run(hwrng_fillfn, NULL, "hwrng");
if (IS_ERR(hwrng_fill)) {
pr_err("hwrng_fill thread creation failed\n");
hwrng_fill = NULL;
}
}
return 0; return 0;
} }
@ -167,8 +176,6 @@ skip_init:
rng->quality = 1024; rng->quality = 1024;
current_quality = rng->quality; /* obsolete */ current_quality = rng->quality; /* obsolete */
hwrng_manage_rngd(rng);
return 0; return 0;
} }
@ -454,10 +461,6 @@ static ssize_t rng_quality_store(struct device *dev,
/* the best available RNG may have changed */ /* the best available RNG may have changed */
ret = enable_best_rng(); ret = enable_best_rng();
/* start/stop rngd if necessary */
if (current_rng)
hwrng_manage_rngd(current_rng);
out: out:
mutex_unlock(&rng_mutex); mutex_unlock(&rng_mutex);
return ret ? ret : len; return ret ? ret : len;
@ -507,16 +510,14 @@ static int hwrng_fillfn(void *unused)
rng->quality = current_quality; /* obsolete */ rng->quality = current_quality; /* obsolete */
quality = rng->quality; quality = rng->quality;
mutex_unlock(&reading_mutex); mutex_unlock(&reading_mutex);
if (rc <= 0)
hwrng_msleep(rng, 10000);
put_rng(rng); put_rng(rng);
if (!quality) if (rc <= 0)
break;
if (rc <= 0) {
pr_warn("hwrng: no data available\n");
msleep_interruptible(10000);
continue; continue;
}
/* If we cannot credit at least one bit of entropy, /* If we cannot credit at least one bit of entropy,
* keep track of the remainder for the next iteration * keep track of the remainder for the next iteration
@ -533,22 +534,6 @@ static int hwrng_fillfn(void *unused)
return 0; return 0;
} }
static void hwrng_manage_rngd(struct hwrng *rng)
{
if (WARN_ON(!mutex_is_locked(&rng_mutex)))
return;
if (rng->quality == 0 && hwrng_fill)
kthread_stop(hwrng_fill);
if (rng->quality > 0 && !hwrng_fill) {
hwrng_fill = kthread_run(hwrng_fillfn, NULL, "hwrng");
if (IS_ERR(hwrng_fill)) {
pr_err("hwrng_fill thread creation failed\n");
hwrng_fill = NULL;
}
}
}
int hwrng_register(struct hwrng *rng) int hwrng_register(struct hwrng *rng)
{ {
int err = -EINVAL; int err = -EINVAL;
@ -570,6 +555,7 @@ int hwrng_register(struct hwrng *rng)
init_completion(&rng->cleanup_done); init_completion(&rng->cleanup_done);
complete(&rng->cleanup_done); complete(&rng->cleanup_done);
init_completion(&rng->dying);
if (!current_rng || if (!current_rng ||
(!cur_rng_set_by_user && rng->quality > current_rng->quality)) { (!cur_rng_set_by_user && rng->quality > current_rng->quality)) {
@ -617,6 +603,7 @@ void hwrng_unregister(struct hwrng *rng)
old_rng = current_rng; old_rng = current_rng;
list_del(&rng->list); list_del(&rng->list);
complete_all(&rng->dying);
if (current_rng == rng) { if (current_rng == rng) {
err = enable_best_rng(); err = enable_best_rng();
if (err) { if (err) {
@ -685,6 +672,14 @@ void devm_hwrng_unregister(struct device *dev, struct hwrng *rng)
} }
EXPORT_SYMBOL_GPL(devm_hwrng_unregister); EXPORT_SYMBOL_GPL(devm_hwrng_unregister);
long hwrng_msleep(struct hwrng *rng, unsigned int msecs)
{
unsigned long timeout = msecs_to_jiffies(msecs) + 1;
return wait_for_completion_interruptible_timeout(&rng->dying, timeout);
}
EXPORT_SYMBOL_GPL(hwrng_msleep);
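hwrng_msleep() is what makes the sleeping "more interruptible": instead of an uninterruptible msleep(), a driver now waits on the rng's dying completion and wakes immediately when the device is unregistered. A hedged sketch of a driver read callback using it — the example_rng type and its helpers are invented for illustration:

#include <linux/hw_random.h>
#include <linux/kernel.h>

struct example_rng {
        struct hwrng rng;
        void __iomem *base;
};

/* Invented helpers standing in for real register accessors. */
static bool example_rng_data_ready(struct example_rng *priv)
{
        return true;
}

static int example_rng_copy_data(struct example_rng *priv, void *data, size_t max)
{
        return min_t(size_t, max, 4);
}

static int example_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
{
        struct example_rng *priv = container_of(rng, struct example_rng, rng);

        while (!example_rng_data_ready(priv)) {
                if (!wait)
                        return 0;
                /*
                 * Sleep up to a second; a non-zero return means the wait was
                 * interrupted or the hwrng is being torn down, so give up.
                 */
                if (hwrng_msleep(rng, 1000))
                        return -ENODEV;
        }

        return example_rng_copy_data(priv, data, max);
}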
static int __init hwrng_modinit(void) static int __init hwrng_modinit(void)
{ {
int ret; int ret;


@ -245,7 +245,7 @@ static int imx_rngc_probe(struct platform_device *pdev)
if (IS_ERR(rngc->base)) if (IS_ERR(rngc->base))
return PTR_ERR(rngc->base); return PTR_ERR(rngc->base);
rngc->clk = devm_clk_get(&pdev->dev, NULL); rngc->clk = devm_clk_get_enabled(&pdev->dev, NULL);
if (IS_ERR(rngc->clk)) { if (IS_ERR(rngc->clk)) {
dev_err(&pdev->dev, "Can not get rng_clk\n"); dev_err(&pdev->dev, "Can not get rng_clk\n");
return PTR_ERR(rngc->clk); return PTR_ERR(rngc->clk);
@ -255,27 +255,14 @@ static int imx_rngc_probe(struct platform_device *pdev)
if (irq < 0) if (irq < 0)
return irq; return irq;
ret = clk_prepare_enable(rngc->clk);
if (ret)
return ret;
ver_id = readl(rngc->base + RNGC_VER_ID); ver_id = readl(rngc->base + RNGC_VER_ID);
rng_type = ver_id >> RNGC_TYPE_SHIFT; rng_type = ver_id >> RNGC_TYPE_SHIFT;
/* /*
* This driver supports only RNGC and RNGB. (There's a different * This driver supports only RNGC and RNGB. (There's a different
* driver for RNGA.) * driver for RNGA.)
*/ */
if (rng_type != RNGC_TYPE_RNGC && rng_type != RNGC_TYPE_RNGB) { if (rng_type != RNGC_TYPE_RNGC && rng_type != RNGC_TYPE_RNGB)
ret = -ENODEV; return -ENODEV;
goto err;
}
ret = devm_request_irq(&pdev->dev,
irq, imx_rngc_irq, 0, pdev->name, (void *)rngc);
if (ret) {
dev_err(rngc->dev, "Can't get interrupt working.\n");
goto err;
}
init_completion(&rngc->rng_op_done); init_completion(&rngc->rng_op_done);
@ -290,18 +277,25 @@ static int imx_rngc_probe(struct platform_device *pdev)
imx_rngc_irq_mask_clear(rngc); imx_rngc_irq_mask_clear(rngc);
ret = devm_request_irq(&pdev->dev,
irq, imx_rngc_irq, 0, pdev->name, (void *)rngc);
if (ret) {
dev_err(rngc->dev, "Can't get interrupt working.\n");
return ret;
}
if (self_test) { if (self_test) {
ret = imx_rngc_self_test(rngc); ret = imx_rngc_self_test(rngc);
if (ret) { if (ret) {
dev_err(rngc->dev, "self test failed\n"); dev_err(rngc->dev, "self test failed\n");
goto err; return ret;
} }
} }
ret = hwrng_register(&rngc->rng); ret = devm_hwrng_register(&pdev->dev, &rngc->rng);
if (ret) { if (ret) {
dev_err(&pdev->dev, "hwrng registration failed\n"); dev_err(&pdev->dev, "hwrng registration failed\n");
goto err; return ret;
} }
dev_info(&pdev->dev, dev_info(&pdev->dev,
@ -309,22 +303,6 @@ static int imx_rngc_probe(struct platform_device *pdev)
rng_type == RNGC_TYPE_RNGB ? 'B' : 'C', rng_type == RNGC_TYPE_RNGB ? 'B' : 'C',
(ver_id >> RNGC_VER_MAJ_SHIFT) & 0xff, ver_id & 0xff); (ver_id >> RNGC_VER_MAJ_SHIFT) & 0xff, ver_id & 0xff);
return 0; return 0;
err:
clk_disable_unprepare(rngc->clk);
return ret;
}
static int __exit imx_rngc_remove(struct platform_device *pdev)
{
struct imx_rngc *rngc = platform_get_drvdata(pdev);
hwrng_unregister(&rngc->rng);
clk_disable_unprepare(rngc->clk);
return 0;
} }
static int __maybe_unused imx_rngc_suspend(struct device *dev) static int __maybe_unused imx_rngc_suspend(struct device *dev)
@ -355,11 +333,10 @@ MODULE_DEVICE_TABLE(of, imx_rngc_dt_ids);
static struct platform_driver imx_rngc_driver = { static struct platform_driver imx_rngc_driver = {
.driver = { .driver = {
.name = "imx_rngc", .name = KBUILD_MODNAME,
.pm = &imx_rngc_pm_ops, .pm = &imx_rngc_pm_ops,
.of_match_table = imx_rngc_dt_ids, .of_match_table = imx_rngc_dt_ids,
}, },
.remove = __exit_p(imx_rngc_remove),
}; };
module_platform_driver_probe(imx_rngc_driver, imx_rngc_probe); module_platform_driver_probe(imx_rngc_driver, imx_rngc_probe);


@ -802,9 +802,7 @@ source "drivers/crypto/amlogic/Kconfig"
config CRYPTO_DEV_SA2UL config CRYPTO_DEV_SA2UL
tristate "Support for TI security accelerator" tristate "Support for TI security accelerator"
depends on ARCH_K3 || COMPILE_TEST depends on ARCH_K3 || COMPILE_TEST
select ARM64_CRYPTO
select CRYPTO_AES select CRYPTO_AES
select CRYPTO_AES_ARM64
select CRYPTO_ALGAPI select CRYPTO_ALGAPI
select CRYPTO_AUTHENC select CRYPTO_AUTHENC
select CRYPTO_SHA1 select CRYPTO_SHA1
@ -818,5 +816,6 @@ config CRYPTO_DEV_SA2UL
acceleration for cryptographic algorithms on these devices. acceleration for cryptographic algorithms on these devices.
source "drivers/crypto/keembay/Kconfig" source "drivers/crypto/keembay/Kconfig"
source "drivers/crypto/aspeed/Kconfig"
endif # CRYPTO_HW endif # CRYPTO_HW


@ -1,5 +1,6 @@
# SPDX-License-Identifier: GPL-2.0 # SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_CRYPTO_DEV_ALLWINNER) += allwinner/ obj-$(CONFIG_CRYPTO_DEV_ALLWINNER) += allwinner/
obj-$(CONFIG_CRYPTO_DEV_ASPEED) += aspeed/
obj-$(CONFIG_CRYPTO_DEV_ATMEL_AES) += atmel-aes.o obj-$(CONFIG_CRYPTO_DEV_ATMEL_AES) += atmel-aes.o
obj-$(CONFIG_CRYPTO_DEV_ATMEL_SHA) += atmel-sha.o obj-$(CONFIG_CRYPTO_DEV_ATMEL_SHA) += atmel-sha.o
obj-$(CONFIG_CRYPTO_DEV_ATMEL_TDES) += atmel-tdes.o obj-$(CONFIG_CRYPTO_DEV_ATMEL_TDES) += atmel-tdes.o


@ -235,7 +235,7 @@ static struct sun4i_ss_alg_template ss_algs[] = {
#endif #endif
}; };
static int sun4i_ss_dbgfs_read(struct seq_file *seq, void *v) static int sun4i_ss_debugfs_show(struct seq_file *seq, void *v)
{ {
unsigned int i; unsigned int i;
@ -266,19 +266,7 @@ static int sun4i_ss_dbgfs_read(struct seq_file *seq, void *v)
} }
return 0; return 0;
} }
DEFINE_SHOW_ATTRIBUTE(sun4i_ss_debugfs);
static int sun4i_ss_dbgfs_open(struct inode *inode, struct file *file)
{
return single_open(file, sun4i_ss_dbgfs_read, inode->i_private);
}
static const struct file_operations sun4i_ss_debugfs_fops = {
.owner = THIS_MODULE,
.open = sun4i_ss_dbgfs_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
/* /*
* Power management strategy: The device is suspended unless a TFM exists for * Power management strategy: The device is suspended unless a TFM exists for


@ -54,11 +54,9 @@ static int sun8i_ce_trng_read(struct hwrng *rng, void *data, size_t max, bool wa
goto err_dst; goto err_dst;
} }
err = pm_runtime_get_sync(ce->dev); err = pm_runtime_resume_and_get(ce->dev);
if (err < 0) { if (err < 0)
pm_runtime_put_noidle(ce->dev);
goto err_pm; goto err_pm;
}
mutex_lock(&ce->rnglock); mutex_lock(&ce->rnglock);
chan = &ce->chanlist[flow]; chan = &ce->chanlist[flow];


@ -177,7 +177,7 @@ static int meson_cipher(struct skcipher_request *areq)
if (areq->src == areq->dst) { if (areq->src == areq->dst) {
nr_sgs = dma_map_sg(mc->dev, areq->src, sg_nents(areq->src), nr_sgs = dma_map_sg(mc->dev, areq->src, sg_nents(areq->src),
DMA_BIDIRECTIONAL); DMA_BIDIRECTIONAL);
if (nr_sgs < 0) { if (!nr_sgs) {
dev_err(mc->dev, "Invalid SG count %d\n", nr_sgs); dev_err(mc->dev, "Invalid SG count %d\n", nr_sgs);
err = -EINVAL; err = -EINVAL;
goto theend; goto theend;
@ -186,14 +186,14 @@ static int meson_cipher(struct skcipher_request *areq)
} else { } else {
nr_sgs = dma_map_sg(mc->dev, areq->src, sg_nents(areq->src), nr_sgs = dma_map_sg(mc->dev, areq->src, sg_nents(areq->src),
DMA_TO_DEVICE); DMA_TO_DEVICE);
if (nr_sgs < 0 || nr_sgs > MAXDESC - 3) { if (!nr_sgs || nr_sgs > MAXDESC - 3) {
dev_err(mc->dev, "Invalid SG count %d\n", nr_sgs); dev_err(mc->dev, "Invalid SG count %d\n", nr_sgs);
err = -EINVAL; err = -EINVAL;
goto theend; goto theend;
} }
nr_sgd = dma_map_sg(mc->dev, areq->dst, sg_nents(areq->dst), nr_sgd = dma_map_sg(mc->dev, areq->dst, sg_nents(areq->dst),
DMA_FROM_DEVICE); DMA_FROM_DEVICE);
if (nr_sgd < 0 || nr_sgd > MAXDESC - 3) { if (!nr_sgd || nr_sgd > MAXDESC - 3) {
dev_err(mc->dev, "Invalid SG count %d\n", nr_sgd); dev_err(mc->dev, "Invalid SG count %d\n", nr_sgd);
err = -EINVAL; err = -EINVAL;
goto theend; goto theend;


@ -0,0 +1,48 @@
config CRYPTO_DEV_ASPEED
tristate "Support for Aspeed cryptographic engine driver"
depends on ARCH_ASPEED || COMPILE_TEST
select CRYPTO_ENGINE
help
Hash and Crypto Engine (HACE) is designed to accelerate the
throughput of hash data digest, encryption and decryption.
Select y here to have support for the cryptographic driver
available on Aspeed SoC.
config CRYPTO_DEV_ASPEED_DEBUG
bool "Enable Aspeed crypto debug messages"
depends on CRYPTO_DEV_ASPEED
help
Print Aspeed crypto debugging messages if you use this option.
Avoid enabling this option for production builds, to keep the
extra logging from perturbing driver timing.
config CRYPTO_DEV_ASPEED_HACE_HASH
bool "Enable Aspeed Hash & Crypto Engine (HACE) hash"
depends on CRYPTO_DEV_ASPEED
select CRYPTO_SHA1
select CRYPTO_SHA256
select CRYPTO_SHA512
select CRYPTO_HMAC
help
Select here to enable Aspeed Hash & Crypto Engine (HACE)
hash driver.
Supports multiple message digest standards, including
SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, and so on.
config CRYPTO_DEV_ASPEED_HACE_CRYPTO
bool "Enable Aspeed Hash & Crypto Engine (HACE) crypto"
depends on CRYPTO_DEV_ASPEED
select CRYPTO_AES
select CRYPTO_DES
select CRYPTO_ECB
select CRYPTO_CBC
select CRYPTO_CFB
select CRYPTO_OFB
select CRYPTO_CTR
help
Select here to enable Aspeed Hash & Crypto Engine (HACE)
crypto driver.
Supports AES/DES symmetric-key encryption and decryption
with ECB/CBC/CFB/OFB/CTR options.


@ -0,0 +1,7 @@
hace-hash-$(CONFIG_CRYPTO_DEV_ASPEED_HACE_HASH) := aspeed-hace-hash.o
hace-crypto-$(CONFIG_CRYPTO_DEV_ASPEED_HACE_CRYPTO) := aspeed-hace-crypto.o
obj-$(CONFIG_CRYPTO_DEV_ASPEED) += aspeed_crypto.o
aspeed_crypto-objs := aspeed-hace.o \
$(hace-hash-y) \
$(hace-crypto-y)

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -0,0 +1,284 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Copyright (c) 2021 Aspeed Technology Inc.
*/
#include <linux/clk.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_irq.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include "aspeed-hace.h"
#ifdef CONFIG_CRYPTO_DEV_ASPEED_DEBUG
#define HACE_DBG(d, fmt, ...) \
dev_info((d)->dev, "%s() " fmt, __func__, ##__VA_ARGS__)
#else
#define HACE_DBG(d, fmt, ...) \
dev_dbg((d)->dev, "%s() " fmt, __func__, ##__VA_ARGS__)
#endif
/* HACE interrupt service routine */
static irqreturn_t aspeed_hace_irq(int irq, void *dev)
{
struct aspeed_hace_dev *hace_dev = (struct aspeed_hace_dev *)dev;
struct aspeed_engine_crypto *crypto_engine = &hace_dev->crypto_engine;
struct aspeed_engine_hash *hash_engine = &hace_dev->hash_engine;
u32 sts;
sts = ast_hace_read(hace_dev, ASPEED_HACE_STS);
ast_hace_write(hace_dev, sts, ASPEED_HACE_STS);
HACE_DBG(hace_dev, "irq status: 0x%x\n", sts);
if (sts & HACE_HASH_ISR) {
if (hash_engine->flags & CRYPTO_FLAGS_BUSY)
tasklet_schedule(&hash_engine->done_task);
else
dev_warn(hace_dev->dev, "HASH no active requests.\n");
}
if (sts & HACE_CRYPTO_ISR) {
if (crypto_engine->flags & CRYPTO_FLAGS_BUSY)
tasklet_schedule(&crypto_engine->done_task);
else
dev_warn(hace_dev->dev, "CRYPTO no active requests.\n");
}
return IRQ_HANDLED;
}
static void aspeed_hace_crypto_done_task(unsigned long data)
{
struct aspeed_hace_dev *hace_dev = (struct aspeed_hace_dev *)data;
struct aspeed_engine_crypto *crypto_engine = &hace_dev->crypto_engine;
crypto_engine->resume(hace_dev);
}
static void aspeed_hace_hash_done_task(unsigned long data)
{
struct aspeed_hace_dev *hace_dev = (struct aspeed_hace_dev *)data;
struct aspeed_engine_hash *hash_engine = &hace_dev->hash_engine;
hash_engine->resume(hace_dev);
}
static void aspeed_hace_register(struct aspeed_hace_dev *hace_dev)
{
#ifdef CONFIG_CRYPTO_DEV_ASPEED_HACE_HASH
aspeed_register_hace_hash_algs(hace_dev);
#endif
#ifdef CONFIG_CRYPTO_DEV_ASPEED_HACE_CRYPTO
aspeed_register_hace_crypto_algs(hace_dev);
#endif
}
static void aspeed_hace_unregister(struct aspeed_hace_dev *hace_dev)
{
#ifdef CONFIG_CRYPTO_DEV_ASPEED_HACE_HASH
aspeed_unregister_hace_hash_algs(hace_dev);
#endif
#ifdef CONFIG_CRYPTO_DEV_ASPEED_HACE_CRYPTO
aspeed_unregister_hace_crypto_algs(hace_dev);
#endif
}
static const struct of_device_id aspeed_hace_of_matches[] = {
{ .compatible = "aspeed,ast2500-hace", .data = (void *)5, },
{ .compatible = "aspeed,ast2600-hace", .data = (void *)6, },
{},
};
static int aspeed_hace_probe(struct platform_device *pdev)
{
struct aspeed_engine_crypto *crypto_engine;
const struct of_device_id *hace_dev_id;
struct aspeed_engine_hash *hash_engine;
struct aspeed_hace_dev *hace_dev;
struct resource *res;
int rc;
hace_dev = devm_kzalloc(&pdev->dev, sizeof(struct aspeed_hace_dev),
GFP_KERNEL);
if (!hace_dev)
return -ENOMEM;
hace_dev_id = of_match_device(aspeed_hace_of_matches, &pdev->dev);
if (!hace_dev_id) {
dev_err(&pdev->dev, "Failed to match hace dev id\n");
return -EINVAL;
}
hace_dev->dev = &pdev->dev;
hace_dev->version = (unsigned long)hace_dev_id->data;
hash_engine = &hace_dev->hash_engine;
crypto_engine = &hace_dev->crypto_engine;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
platform_set_drvdata(pdev, hace_dev);
hace_dev->regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(hace_dev->regs))
return PTR_ERR(hace_dev->regs);
/* Get irq number and register it */
hace_dev->irq = platform_get_irq(pdev, 0);
if (hace_dev->irq < 0)
return -ENXIO;
rc = devm_request_irq(&pdev->dev, hace_dev->irq, aspeed_hace_irq, 0,
dev_name(&pdev->dev), hace_dev);
if (rc) {
dev_err(&pdev->dev, "Failed to request interrupt\n");
return rc;
}
/* Get clk and enable it */
hace_dev->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(hace_dev->clk)) {
dev_err(&pdev->dev, "Failed to get clk\n");
return -ENODEV;
}
rc = clk_prepare_enable(hace_dev->clk);
if (rc) {
dev_err(&pdev->dev, "Failed to enable clock 0x%x\n", rc);
return rc;
}
/* Initialize crypto hardware engine structure for hash */
hace_dev->crypt_engine_hash = crypto_engine_alloc_init(hace_dev->dev,
true);
if (!hace_dev->crypt_engine_hash) {
rc = -ENOMEM;
goto clk_exit;
}
rc = crypto_engine_start(hace_dev->crypt_engine_hash);
if (rc)
goto err_engine_hash_start;
tasklet_init(&hash_engine->done_task, aspeed_hace_hash_done_task,
(unsigned long)hace_dev);
/* Initialize crypto hardware engine structure for crypto */
hace_dev->crypt_engine_crypto = crypto_engine_alloc_init(hace_dev->dev,
true);
if (!hace_dev->crypt_engine_crypto) {
rc = -ENOMEM;
goto err_engine_hash_start;
}
rc = crypto_engine_start(hace_dev->crypt_engine_crypto);
if (rc)
goto err_engine_crypto_start;
tasklet_init(&crypto_engine->done_task, aspeed_hace_crypto_done_task,
(unsigned long)hace_dev);
/* Allocate DMA buffer for hash engine input used */
hash_engine->ahash_src_addr =
dmam_alloc_coherent(&pdev->dev,
ASPEED_HASH_SRC_DMA_BUF_LEN,
&hash_engine->ahash_src_dma_addr,
GFP_KERNEL);
if (!hash_engine->ahash_src_addr) {
dev_err(&pdev->dev, "Failed to allocate dma buffer\n");
rc = -ENOMEM;
goto err_engine_crypto_start;
}
/* Allocate DMA buffer for crypto engine context used */
crypto_engine->cipher_ctx =
dmam_alloc_coherent(&pdev->dev,
PAGE_SIZE,
&crypto_engine->cipher_ctx_dma,
GFP_KERNEL);
if (!crypto_engine->cipher_ctx) {
dev_err(&pdev->dev, "Failed to allocate cipher ctx dma\n");
rc = -ENOMEM;
goto err_engine_crypto_start;
}
/* Allocate DMA buffer for crypto engine input used */
crypto_engine->cipher_addr =
dmam_alloc_coherent(&pdev->dev,
ASPEED_CRYPTO_SRC_DMA_BUF_LEN,
&crypto_engine->cipher_dma_addr,
GFP_KERNEL);
if (!crypto_engine->cipher_addr) {
dev_err(&pdev->dev, "Failed to allocate cipher addr dma\n");
rc = -ENOMEM;
goto err_engine_crypto_start;
}
/* Allocate DMA buffer for crypto engine output used */
if (hace_dev->version == AST2600_VERSION) {
crypto_engine->dst_sg_addr =
dmam_alloc_coherent(&pdev->dev,
ASPEED_CRYPTO_DST_DMA_BUF_LEN,
&crypto_engine->dst_sg_dma_addr,
GFP_KERNEL);
if (!crypto_engine->dst_sg_addr) {
dev_err(&pdev->dev, "Failed to allocate dst_sg dma\n");
rc = -ENOMEM;
goto err_engine_crypto_start;
}
}
aspeed_hace_register(hace_dev);
dev_info(&pdev->dev, "Aspeed Crypto Accelerator successfully registered\n");
return 0;
err_engine_crypto_start:
crypto_engine_exit(hace_dev->crypt_engine_crypto);
err_engine_hash_start:
crypto_engine_exit(hace_dev->crypt_engine_hash);
clk_exit:
clk_disable_unprepare(hace_dev->clk);
return rc;
}
static int aspeed_hace_remove(struct platform_device *pdev)
{
struct aspeed_hace_dev *hace_dev = platform_get_drvdata(pdev);
struct aspeed_engine_crypto *crypto_engine = &hace_dev->crypto_engine;
struct aspeed_engine_hash *hash_engine = &hace_dev->hash_engine;
aspeed_hace_unregister(hace_dev);
crypto_engine_exit(hace_dev->crypt_engine_hash);
crypto_engine_exit(hace_dev->crypt_engine_crypto);
tasklet_kill(&hash_engine->done_task);
tasklet_kill(&crypto_engine->done_task);
clk_disable_unprepare(hace_dev->clk);
return 0;
}
MODULE_DEVICE_TABLE(of, aspeed_hace_of_matches);
static struct platform_driver aspeed_hace_driver = {
.probe = aspeed_hace_probe,
.remove = aspeed_hace_remove,
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = aspeed_hace_of_matches,
},
};
module_platform_driver(aspeed_hace_driver);
MODULE_AUTHOR("Neal Liu <neal_liu@aspeedtech.com>");
MODULE_DESCRIPTION("Aspeed HACE driver Crypto Accelerator");
MODULE_LICENSE("GPL");


@ -0,0 +1,298 @@
/* SPDX-License-Identifier: GPL-2.0+ */
#ifndef __ASPEED_HACE_H__
#define __ASPEED_HACE_H__
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/fips.h>
#include <linux/dma-mapping.h>
#include <crypto/aes.h>
#include <crypto/des.h>
#include <crypto/scatterwalk.h>
#include <crypto/internal/aead.h>
#include <crypto/internal/akcipher.h>
#include <crypto/internal/des.h>
#include <crypto/internal/hash.h>
#include <crypto/internal/kpp.h>
#include <crypto/internal/skcipher.h>
#include <crypto/algapi.h>
#include <crypto/engine.h>
#include <crypto/hmac.h>
#include <crypto/sha1.h>
#include <crypto/sha2.h>
/*****************************
* *
* HACE register definitions *
* *
* ***************************/
#define ASPEED_HACE_SRC 0x00 /* Crypto Data Source Base Address Register */
#define ASPEED_HACE_DEST 0x04 /* Crypto Data Destination Base Address Register */
#define ASPEED_HACE_CONTEXT 0x08 /* Crypto Context Buffer Base Address Register */
#define ASPEED_HACE_DATA_LEN 0x0C /* Crypto Data Length Register */
#define ASPEED_HACE_CMD 0x10 /* Crypto Engine Command Register */
/* G5 */
#define ASPEED_HACE_TAG 0x18 /* HACE Tag Register */
/* G6 */
#define ASPEED_HACE_GCM_ADD_LEN 0x14 /* Crypto AES-GCM Additional Data Length Register */
#define ASPEED_HACE_GCM_TAG_BASE_ADDR 0x18 /* Crypto AES-GCM Tag Write Buff Base Address Reg */
#define ASPEED_HACE_STS 0x1C /* HACE Status Register */
#define ASPEED_HACE_HASH_SRC 0x20 /* Hash Data Source Base Address Register */
#define ASPEED_HACE_HASH_DIGEST_BUFF 0x24 /* Hash Digest Write Buffer Base Address Register */
#define ASPEED_HACE_HASH_KEY_BUFF 0x28 /* Hash HMAC Key Buffer Base Address Register */
#define ASPEED_HACE_HASH_DATA_LEN 0x2C /* Hash Data Length Register */
#define ASPEED_HACE_HASH_CMD 0x30 /* Hash Engine Command Register */
/* crypto cmd */
#define HACE_CMD_SINGLE_DES 0
#define HACE_CMD_TRIPLE_DES BIT(17)
#define HACE_CMD_AES_SELECT 0
#define HACE_CMD_DES_SELECT BIT(16)
#define HACE_CMD_ISR_EN BIT(12)
#define HACE_CMD_CONTEXT_SAVE_ENABLE (0)
#define HACE_CMD_CONTEXT_SAVE_DISABLE BIT(9)
#define HACE_CMD_AES (0)
#define HACE_CMD_DES (0)
#define HACE_CMD_RC4 BIT(8)
#define HACE_CMD_DECRYPT (0)
#define HACE_CMD_ENCRYPT BIT(7)
#define HACE_CMD_ECB (0x0 << 4)
#define HACE_CMD_CBC (0x1 << 4)
#define HACE_CMD_CFB (0x2 << 4)
#define HACE_CMD_OFB (0x3 << 4)
#define HACE_CMD_CTR (0x4 << 4)
#define HACE_CMD_OP_MODE_MASK (0x7 << 4)
#define HACE_CMD_AES128 (0x0 << 2)
#define HACE_CMD_AES192 (0x1 << 2)
#define HACE_CMD_AES256 (0x2 << 2)
#define HACE_CMD_OP_CASCADE (0x3)
#define HACE_CMD_OP_INDEPENDENT (0x1)
/* G5 */
#define HACE_CMD_RI_WO_DATA_ENABLE (0)
#define HACE_CMD_RI_WO_DATA_DISABLE BIT(11)
#define HACE_CMD_CONTEXT_LOAD_ENABLE (0)
#define HACE_CMD_CONTEXT_LOAD_DISABLE BIT(10)
/* G6 */
#define HACE_CMD_AES_KEY_FROM_OTP BIT(24)
#define HACE_CMD_GHASH_TAG_XOR_EN BIT(23)
#define HACE_CMD_GHASH_PAD_LEN_INV BIT(22)
#define HACE_CMD_GCM_TAG_ADDR_SEL BIT(21)
#define HACE_CMD_MBUS_REQ_SYNC_EN BIT(20)
#define HACE_CMD_DES_SG_CTRL BIT(19)
#define HACE_CMD_SRC_SG_CTRL BIT(18)
#define HACE_CMD_CTR_IV_AES_96 (0x1 << 14)
#define HACE_CMD_CTR_IV_DES_32 (0x1 << 14)
#define HACE_CMD_CTR_IV_AES_64 (0x2 << 14)
#define HACE_CMD_CTR_IV_AES_32 (0x3 << 14)
#define HACE_CMD_AES_KEY_HW_EXP BIT(13)
#define HACE_CMD_GCM (0x5 << 4)
/* interrupt status reg */
#define HACE_CRYPTO_ISR BIT(12)
#define HACE_HASH_ISR BIT(9)
#define HACE_HASH_BUSY BIT(0)
/* hash cmd reg */
#define HASH_CMD_MBUS_REQ_SYNC_EN BIT(20)
#define HASH_CMD_HASH_SRC_SG_CTRL BIT(18)
#define HASH_CMD_SHA512_224 (0x3 << 10)
#define HASH_CMD_SHA512_256 (0x2 << 10)
#define HASH_CMD_SHA384 (0x1 << 10)
#define HASH_CMD_SHA512 (0)
#define HASH_CMD_INT_ENABLE BIT(9)
#define HASH_CMD_HMAC (0x1 << 7)
#define HASH_CMD_ACC_MODE (0x2 << 7)
#define HASH_CMD_HMAC_KEY (0x3 << 7)
#define HASH_CMD_SHA1 (0x2 << 4)
#define HASH_CMD_SHA224 (0x4 << 4)
#define HASH_CMD_SHA256 (0x5 << 4)
#define HASH_CMD_SHA512_SER (0x6 << 4)
#define HASH_CMD_SHA_SWAP (0x2 << 2)
#define HASH_SG_LAST_LIST BIT(31)
#define CRYPTO_FLAGS_BUSY BIT(1)
#define SHA_OP_UPDATE 1
#define SHA_OP_FINAL 2
#define SHA_FLAGS_SHA1 BIT(0)
#define SHA_FLAGS_SHA224 BIT(1)
#define SHA_FLAGS_SHA256 BIT(2)
#define SHA_FLAGS_SHA384 BIT(3)
#define SHA_FLAGS_SHA512 BIT(4)
#define SHA_FLAGS_SHA512_224 BIT(5)
#define SHA_FLAGS_SHA512_256 BIT(6)
#define SHA_FLAGS_HMAC BIT(8)
#define SHA_FLAGS_FINUP BIT(9)
#define SHA_FLAGS_MASK (0xff)
#define ASPEED_CRYPTO_SRC_DMA_BUF_LEN 0xa000
#define ASPEED_CRYPTO_DST_DMA_BUF_LEN 0xa000
#define ASPEED_CRYPTO_GCM_TAG_OFFSET 0x9ff0
#define ASPEED_HASH_SRC_DMA_BUF_LEN 0xa000
#define ASPEED_HASH_QUEUE_LENGTH 50
#define HACE_CMD_IV_REQUIRE (HACE_CMD_CBC | HACE_CMD_CFB | \
HACE_CMD_OFB | HACE_CMD_CTR)
struct aspeed_hace_dev;
typedef int (*aspeed_hace_fn_t)(struct aspeed_hace_dev *);
struct aspeed_sg_list {
__le32 len;
__le32 phy_addr;
};
struct aspeed_engine_hash {
struct tasklet_struct done_task;
unsigned long flags;
struct ahash_request *req;
/* input buffer */
void *ahash_src_addr;
dma_addr_t ahash_src_dma_addr;
dma_addr_t src_dma;
dma_addr_t digest_dma;
size_t src_length;
/* callback func */
aspeed_hace_fn_t resume;
aspeed_hace_fn_t dma_prepare;
};
struct aspeed_sha_hmac_ctx {
struct crypto_shash *shash;
u8 ipad[SHA512_BLOCK_SIZE];
u8 opad[SHA512_BLOCK_SIZE];
};
struct aspeed_sham_ctx {
struct crypto_engine_ctx enginectx;
struct aspeed_hace_dev *hace_dev;
unsigned long flags; /* hmac flag */
struct aspeed_sha_hmac_ctx base[0];
};
struct aspeed_sham_reqctx {
unsigned long flags; /* final update flag should no use*/
unsigned long op; /* final or update */
u32 cmd; /* trigger cmd */
/* walk state */
struct scatterlist *src_sg;
int src_nents;
unsigned int offset; /* offset in current sg */
unsigned int total; /* per update length */
size_t digsize;
size_t block_size;
size_t ivsize;
const __be32 *sha_iv;
/* remain data buffer */
u8 buffer[SHA512_BLOCK_SIZE * 2];
dma_addr_t buffer_dma_addr;
size_t bufcnt; /* buffer counter */
/* output buffer */
u8 digest[SHA512_DIGEST_SIZE] __aligned(64);
dma_addr_t digest_dma_addr;
u64 digcnt[2];
};
struct aspeed_engine_crypto {
struct tasklet_struct done_task;
unsigned long flags;
struct skcipher_request *req;
/* context buffer */
void *cipher_ctx;
dma_addr_t cipher_ctx_dma;
/* input buffer, could be single/scatter-gather lists */
void *cipher_addr;
dma_addr_t cipher_dma_addr;
/* output buffer, only used in scatter-gather lists */
void *dst_sg_addr;
dma_addr_t dst_sg_dma_addr;
/* callback func */
aspeed_hace_fn_t resume;
};
struct aspeed_cipher_ctx {
struct crypto_engine_ctx enginectx;
struct aspeed_hace_dev *hace_dev;
int key_len;
u8 key[AES_MAX_KEYLENGTH];
/* callback func */
aspeed_hace_fn_t start;
struct crypto_skcipher *fallback_tfm;
};
struct aspeed_cipher_reqctx {
int enc_cmd;
int src_nents;
int dst_nents;
struct skcipher_request fallback_req; /* keep at the end */
};
struct aspeed_hace_dev {
void __iomem *regs;
struct device *dev;
int irq;
struct clk *clk;
unsigned long version;
struct crypto_engine *crypt_engine_hash;
struct crypto_engine *crypt_engine_crypto;
struct aspeed_engine_hash hash_engine;
struct aspeed_engine_crypto crypto_engine;
};
struct aspeed_hace_alg {
struct aspeed_hace_dev *hace_dev;
const char *alg_base;
union {
struct skcipher_alg skcipher;
struct ahash_alg ahash;
} alg;
};
enum aspeed_version {
AST2500_VERSION = 5,
AST2600_VERSION
};
#define ast_hace_write(hace, val, offset) \
writel((val), (hace)->regs + (offset))
#define ast_hace_read(hace, offset) \
readl((hace)->regs + (offset))
void aspeed_register_hace_hash_algs(struct aspeed_hace_dev *hace_dev);
void aspeed_unregister_hace_hash_algs(struct aspeed_hace_dev *hace_dev);
void aspeed_register_hace_crypto_algs(struct aspeed_hace_dev *hace_dev);
void aspeed_unregister_hace_crypto_algs(struct aspeed_hace_dev *hace_dev);
#endif


@ -1712,7 +1712,7 @@ static int artpec6_crypto_prepare_crypto(struct skcipher_request *areq)
cipher_len = regk_crypto_key_256; cipher_len = regk_crypto_key_256;
break; break;
default: default:
pr_err("%s: Invalid key length %d!\n", pr_err("%s: Invalid key length %zu!\n",
MODULE_NAME, ctx->key_length); MODULE_NAME, ctx->key_length);
return -EINVAL; return -EINVAL;
} }
@ -2091,7 +2091,7 @@ static void artpec6_crypto_task(unsigned long data)
return; return;
} }
spin_lock_bh(&ac->queue_lock); spin_lock(&ac->queue_lock);
list_for_each_entry_safe(req, n, &ac->pending, list) { list_for_each_entry_safe(req, n, &ac->pending, list) {
struct artpec6_crypto_dma_descriptors *dma = req->dma; struct artpec6_crypto_dma_descriptors *dma = req->dma;
@ -2128,7 +2128,7 @@ static void artpec6_crypto_task(unsigned long data)
artpec6_crypto_process_queue(ac, &complete_in_progress); artpec6_crypto_process_queue(ac, &complete_in_progress);
spin_unlock_bh(&ac->queue_lock); spin_unlock(&ac->queue_lock);
/* Perform the completion callbacks without holding the queue lock /* Perform the completion callbacks without holding the queue lock
* to allow new request submissions from the callbacks. * to allow new request submissions from the callbacks.


@@ -1928,7 +1928,7 @@ static int ahash_enqueue(struct ahash_request *req)
 	/* SPU2 hardware does not compute hash of zero length data */
 	if ((rctx->is_final == 1) && (rctx->total_todo == 0) &&
 	    (iproc_priv.spu.spu_type == SPU_TYPE_SPU2)) {
-		alg_name = crypto_tfm_alg_name(crypto_ahash_tfm(tfm));
+		alg_name = crypto_ahash_alg_name(tfm);
 		flow_log("Doing %sfinal %s zero-len hash request in software\n",
 			 rctx->is_final ? "" : "non-", alg_name);
 		err = do_shash((unsigned char *)alg_name, req->result,
@@ -2029,7 +2029,7 @@ static int ahash_init(struct ahash_request *req)
 	 * supported by the hardware, we need to handle it in software
 	 * by calling synchronous hash functions.
 	 */
-	alg_name = crypto_tfm_alg_name(crypto_ahash_tfm(tfm));
+	alg_name = crypto_ahash_alg_name(tfm);
 	hash = crypto_alloc_shash(alg_name, 0, 0);
 	if (IS_ERR(hash)) {
 		ret = PTR_ERR(hash);


@@ -231,7 +231,7 @@ struct iproc_ctx_s {
 
 	/*
 	 * shash descriptor - needed to perform incremental hashing in
-	 * in software, when hw doesn't support it.
+	 * software, when hw doesn't support it.
 	 */
 	struct shash_desc *shash;


@@ -396,7 +396,7 @@ union cptx_vqx_misc_ena_w1s {
  * Word0
  * reserved_20_63:44 [63:20] Reserved.
  * dbell_cnt:20 [19:0](R/W/H) Number of instruction queue 64-bit words to add
- * to the CPT instruction doorbell count. Readback value is the the
+ * to the CPT instruction doorbell count. Readback value is the
  * current number of pending doorbell requests. If counter overflows
  * CPT()_VQ()_MISC_INT[DBELL_DOVF] is set. To reset the count back to
  * zero, write one to clear CPT()_VQ()_MISC_INT_ENA_W1C[DBELL_DOVF],


@@ -253,6 +253,7 @@ static int cpt_ucode_load_fw(struct cpt_device *cpt, const u8 *fw, bool is_ae)
 	const struct firmware *fw_entry;
 	struct device *dev = &cpt->pdev->dev;
 	struct ucode_header *ucode;
+	unsigned int code_length;
 	struct microcode *mcode;
 	int j, ret = 0;
 
@@ -263,11 +264,12 @@ static int cpt_ucode_load_fw(struct cpt_device *cpt, const u8 *fw, bool is_ae)
 	ucode = (struct ucode_header *)fw_entry->data;
 	mcode = &cpt->mcode[cpt->next_mc_idx];
 	memcpy(mcode->version, (u8 *)fw_entry->data, CPT_UCODE_VERSION_SZ);
-	mcode->code_size = ntohl(ucode->code_length) * 2;
-	if (!mcode->code_size) {
+	code_length = ntohl(ucode->code_length);
+	if (code_length == 0 || code_length >= INT_MAX / 2) {
 		ret = -EINVAL;
 		goto fw_release;
 	}
+	mcode->code_size = code_length * 2;
 
 	mcode->is_ae = is_ae;
 	mcode->core_mask = 0ULL;
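
The hunk above reads the big-endian length once, rejects zero and anything at or above INT_MAX / 2, and only then computes the doubled code size. A minimal userspace sketch of that guard, with invented names and values that are not taken from the driver sources:

/*
 * Illustrative only: validate an untrusted big-endian length field
 * before multiplying it, mirroring the bound used in the hunk above.
 */
#include <arpa/inet.h>
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

static int parse_code_size(uint32_t be_code_length, int *out_size)
{
	uint32_t code_length = ntohl(be_code_length);

	/* Reject zero and anything that would overflow "length * 2". */
	if (code_length == 0 || code_length >= INT_MAX / 2)
		return -1;

	*out_size = (int)code_length * 2;
	return 0;
}

int main(void)
{
	int size;

	if (parse_code_size(htonl(0x100), &size) == 0)
		printf("accepted, code_size = %d\n", size);	/* 512 */
	if (parse_code_size(htonl(0xffffffffu), &size) != 0)
		printf("rejected oversized length\n");
	return 0;
}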


@@ -198,22 +198,16 @@ static int zip_decompress(const u8 *src, unsigned int slen,
 /* Legacy Compress framework start */
 int zip_alloc_comp_ctx_deflate(struct crypto_tfm *tfm)
 {
-	int ret;
 	struct zip_kernel_ctx *zip_ctx = crypto_tfm_ctx(tfm);
 
-	ret = zip_ctx_init(zip_ctx, 0);
-
-	return ret;
+	return zip_ctx_init(zip_ctx, 0);
 }
 
 int zip_alloc_comp_ctx_lzs(struct crypto_tfm *tfm)
 {
-	int ret;
 	struct zip_kernel_ctx *zip_ctx = crypto_tfm_ctx(tfm);
 
-	ret = zip_ctx_init(zip_ctx, 1);
-
-	return ret;
+	return zip_ctx_init(zip_ctx, 1);
 }
 
 void zip_free_comp_ctx(struct crypto_tfm *tfm)
@@ -227,24 +221,18 @@ int zip_comp_compress(struct crypto_tfm *tfm,
 		      const u8 *src, unsigned int slen,
 		      u8 *dst, unsigned int *dlen)
 {
-	int ret;
 	struct zip_kernel_ctx *zip_ctx = crypto_tfm_ctx(tfm);
 
-	ret = zip_compress(src, slen, dst, dlen, zip_ctx);
-
-	return ret;
+	return zip_compress(src, slen, dst, dlen, zip_ctx);
 }
 
 int zip_comp_decompress(struct crypto_tfm *tfm,
 			const u8 *src, unsigned int slen,
 			u8 *dst, unsigned int *dlen)
 {
-	int ret;
 	struct zip_kernel_ctx *zip_ctx = crypto_tfm_ctx(tfm);
 
-	ret = zip_decompress(src, slen, dst, dlen, zip_ctx);
-
-	return ret;
+	return zip_decompress(src, slen, dst, dlen, zip_ctx);
 } /* Legacy compress framework end */
 
 /* SCOMP framework start */
@@ -298,22 +286,16 @@ int zip_scomp_compress(struct crypto_scomp *tfm,
 		       const u8 *src, unsigned int slen,
 		       u8 *dst, unsigned int *dlen, void *ctx)
 {
-	int ret;
 	struct zip_kernel_ctx *zip_ctx = ctx;
 
-	ret = zip_compress(src, slen, dst, dlen, zip_ctx);
-
-	return ret;
+	return zip_compress(src, slen, dst, dlen, zip_ctx);
 }
 
 int zip_scomp_decompress(struct crypto_scomp *tfm,
 			 const u8 *src, unsigned int slen,
 			 u8 *dst, unsigned int *dlen, void *ctx)
 {
-	int ret;
 	struct zip_kernel_ctx *zip_ctx = ctx;
 
-	ret = zip_decompress(src, slen, dst, dlen, zip_ctx);
-
-	return ret;
+	return zip_decompress(src, slen, dst, dlen, zip_ctx);
 } /* SCOMP framework end */


@@ -64,7 +64,6 @@ static int ccp_des3_crypt(struct skcipher_request *req, bool encrypt)
 	struct ccp_des3_req_ctx *rctx = skcipher_request_ctx(req);
 	struct scatterlist *iv_sg = NULL;
 	unsigned int iv_len = 0;
-	int ret;
 
 	if (!ctx->u.des3.key_len)
 		return -EINVAL;
@@ -100,9 +99,7 @@ static int ccp_des3_crypt(struct skcipher_request *req, bool encrypt)
 	rctx->cmd.u.des3.src_len = req->cryptlen;
 	rctx->cmd.u.des3.dst = req->dst;
 
-	ret = ccp_crypto_enqueue_request(&req->base, &rctx->cmd);
-
-	return ret;
+	return ccp_crypto_enqueue_request(&req->base, &rctx->cmd);
 }
 
 static int ccp_des3_encrypt(struct skcipher_request *req)


@@ -641,6 +641,10 @@ static void ccp_dma_release(struct ccp_device *ccp)
 	for (i = 0; i < ccp->cmd_q_count; i++) {
 		chan = ccp->ccp_dma_chan + i;
 		dma_chan = &chan->dma_chan;
+
+		if (dma_chan->client_count)
+			dma_release_channel(dma_chan);
+
 		tasklet_kill(&chan->cleanup_tasklet);
 		list_del_rcu(&dma_chan->device_node);
 	}
@@ -766,8 +770,8 @@ void ccp_dmaengine_unregister(struct ccp_device *ccp)
 	if (!dmaengine)
 		return;
 
-	dma_async_device_unregister(dma_dev);
-
 	ccp_dma_release(ccp);
+	dma_async_device_unregister(dma_dev);
 
 	kmem_cache_destroy(ccp->dma_desc_cache);
 	kmem_cache_destroy(ccp->dma_cmd_cache);


@ -211,18 +211,24 @@ static int sev_read_init_ex_file(void)
if (IS_ERR(fp)) { if (IS_ERR(fp)) {
int ret = PTR_ERR(fp); int ret = PTR_ERR(fp);
dev_err(sev->dev, if (ret == -ENOENT) {
"SEV: could not open %s for read, error %d\n", dev_info(sev->dev,
init_ex_path, ret); "SEV: %s does not exist and will be created later.\n",
init_ex_path);
ret = 0;
} else {
dev_err(sev->dev,
"SEV: could not open %s for read, error %d\n",
init_ex_path, ret);
}
return ret; return ret;
} }
nread = kernel_read(fp, sev_init_ex_buffer, NV_LENGTH, NULL); nread = kernel_read(fp, sev_init_ex_buffer, NV_LENGTH, NULL);
if (nread != NV_LENGTH) { if (nread != NV_LENGTH) {
dev_err(sev->dev, dev_info(sev->dev,
"SEV: failed to read %u bytes to non volatile memory area, ret %ld\n", "SEV: could not read %u bytes to non volatile memory area, ret %ld\n",
NV_LENGTH, nread); NV_LENGTH, nread);
return -EIO;
} }
dev_dbg(sev->dev, "SEV: read %ld bytes from NV file\n", nread); dev_dbg(sev->dev, "SEV: read %ld bytes from NV file\n", nread);
@ -231,7 +237,7 @@ static int sev_read_init_ex_file(void)
return 0; return 0;
} }
static void sev_write_init_ex_file(void) static int sev_write_init_ex_file(void)
{ {
struct sev_device *sev = psp_master->sev_data; struct sev_device *sev = psp_master->sev_data;
struct file *fp; struct file *fp;
@ -241,14 +247,16 @@ static void sev_write_init_ex_file(void)
lockdep_assert_held(&sev_cmd_mutex); lockdep_assert_held(&sev_cmd_mutex);
if (!sev_init_ex_buffer) if (!sev_init_ex_buffer)
return; return 0;
fp = open_file_as_root(init_ex_path, O_CREAT | O_WRONLY, 0600); fp = open_file_as_root(init_ex_path, O_CREAT | O_WRONLY, 0600);
if (IS_ERR(fp)) { if (IS_ERR(fp)) {
int ret = PTR_ERR(fp);
dev_err(sev->dev, dev_err(sev->dev,
"SEV: could not open file for write, error %ld\n", "SEV: could not open file for write, error %d\n",
PTR_ERR(fp)); ret);
return; return ret;
} }
nwrite = kernel_write(fp, sev_init_ex_buffer, NV_LENGTH, &offset); nwrite = kernel_write(fp, sev_init_ex_buffer, NV_LENGTH, &offset);
@ -259,18 +267,20 @@ static void sev_write_init_ex_file(void)
dev_err(sev->dev, dev_err(sev->dev,
"SEV: failed to write %u bytes to non volatile memory area, ret %ld\n", "SEV: failed to write %u bytes to non volatile memory area, ret %ld\n",
NV_LENGTH, nwrite); NV_LENGTH, nwrite);
return; return -EIO;
} }
dev_dbg(sev->dev, "SEV: write successful to NV file\n"); dev_dbg(sev->dev, "SEV: write successful to NV file\n");
return 0;
} }
static void sev_write_init_ex_file_if_required(int cmd_id) static int sev_write_init_ex_file_if_required(int cmd_id)
{ {
lockdep_assert_held(&sev_cmd_mutex); lockdep_assert_held(&sev_cmd_mutex);
if (!sev_init_ex_buffer) if (!sev_init_ex_buffer)
return; return 0;
/* /*
* Only a few platform commands modify the SPI/NV area, but none of the * Only a few platform commands modify the SPI/NV area, but none of the
@ -285,10 +295,10 @@ static void sev_write_init_ex_file_if_required(int cmd_id)
case SEV_CMD_PEK_GEN: case SEV_CMD_PEK_GEN:
break; break;
default: default:
return; return 0;
} }
sev_write_init_ex_file(); return sev_write_init_ex_file();
} }
static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret) static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
@ -361,7 +371,7 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
cmd, reg & PSP_CMDRESP_ERR_MASK); cmd, reg & PSP_CMDRESP_ERR_MASK);
ret = -EIO; ret = -EIO;
} else { } else {
sev_write_init_ex_file_if_required(cmd); ret = sev_write_init_ex_file_if_required(cmd);
} }
print_hex_dump_debug("(out): ", DUMP_PREFIX_OFFSET, 16, 2, data, print_hex_dump_debug("(out): ", DUMP_PREFIX_OFFSET, 16, 2, data,
@ -410,17 +420,12 @@ static int __sev_init_locked(int *error)
static int __sev_init_ex_locked(int *error) static int __sev_init_ex_locked(int *error)
{ {
struct sev_data_init_ex data; struct sev_data_init_ex data;
int ret;
memset(&data, 0, sizeof(data)); memset(&data, 0, sizeof(data));
data.length = sizeof(data); data.length = sizeof(data);
data.nv_address = __psp_pa(sev_init_ex_buffer); data.nv_address = __psp_pa(sev_init_ex_buffer);
data.nv_len = NV_LENGTH; data.nv_len = NV_LENGTH;
ret = sev_read_init_ex_file();
if (ret)
return ret;
if (sev_es_tmr) { if (sev_es_tmr) {
/* /*
* Do not include the encryption mask on the physical * Do not include the encryption mask on the physical
@ -439,7 +444,7 @@ static int __sev_platform_init_locked(int *error)
{ {
struct psp_device *psp = psp_master; struct psp_device *psp = psp_master;
struct sev_device *sev; struct sev_device *sev;
int rc, psp_ret = -1; int rc = 0, psp_ret = -1;
int (*init_function)(int *error); int (*init_function)(int *error);
if (!psp || !psp->sev_data) if (!psp || !psp->sev_data)
@ -450,8 +455,15 @@ static int __sev_platform_init_locked(int *error)
if (sev->state == SEV_STATE_INIT) if (sev->state == SEV_STATE_INIT)
return 0; return 0;
init_function = sev_init_ex_buffer ? __sev_init_ex_locked : if (sev_init_ex_buffer) {
__sev_init_locked; init_function = __sev_init_ex_locked;
rc = sev_read_init_ex_file();
if (rc)
return rc;
} else {
init_function = __sev_init_locked;
}
rc = init_function(&psp_ret); rc = init_function(&psp_ret);
if (rc && psp_ret == SEV_RET_SECURE_DATA_INVALID) { if (rc && psp_ret == SEV_RET_SECURE_DATA_INVALID) {
/* /*
@ -744,6 +756,11 @@ static int sev_update_firmware(struct device *dev)
struct page *p; struct page *p;
u64 data_size; u64 data_size;
if (!sev_version_greater_or_equal(0, 15)) {
dev_dbg(dev, "DOWNLOAD_FIRMWARE not supported\n");
return -1;
}
if (sev_get_firmware(dev, &firmware) == -ENOENT) { if (sev_get_firmware(dev, &firmware) == -ENOENT) {
dev_dbg(dev, "No SEV firmware file present\n"); dev_dbg(dev, "No SEV firmware file present\n");
return -1; return -1;
@ -776,6 +793,14 @@ static int sev_update_firmware(struct device *dev)
data->len = firmware->size; data->len = firmware->size;
ret = sev_do_cmd(SEV_CMD_DOWNLOAD_FIRMWARE, data, &error); ret = sev_do_cmd(SEV_CMD_DOWNLOAD_FIRMWARE, data, &error);
/*
* A quirk for fixing the committed TCB version, when upgrading from
* earlier firmware version than 1.50.
*/
if (!ret && !sev_version_greater_or_equal(1, 50))
ret = sev_do_cmd(SEV_CMD_DOWNLOAD_FIRMWARE, data, &error);
if (ret) if (ret)
dev_dbg(dev, "Failed to update SEV firmware: %#x\n", error); dev_dbg(dev, "Failed to update SEV firmware: %#x\n", error);
else else
@ -1285,8 +1310,7 @@ void sev_pci_init(void)
if (sev_get_api_version()) if (sev_get_api_version())
goto err; goto err;
if (sev_version_greater_or_equal(0, 15) && if (sev_update_firmware(sev->dev) == 0)
sev_update_firmware(sev->dev) == 0)
sev_get_api_version(); sev_get_api_version();
/* If an init_ex_path is provided rely on INIT_EX for PSP initialization /* If an init_ex_path is provided rely on INIT_EX for PSP initialization


@@ -274,7 +274,7 @@ static int cc_map_sg(struct device *dev, struct scatterlist *sg,
 	}
 
 	ret = dma_map_sg(dev, sg, *nents, direction);
-	if (dma_mapping_error(dev, ret)) {
+	if (!ret) {
 		*nents = 0;
 		dev_err(dev, "dma_map_sg() sg buffer failed %d\n", ret);
 		return -ENOMEM;


@@ -22,7 +22,8 @@ enum {
 	HPRE_CLUSTER0,
 	HPRE_CLUSTER1,
 	HPRE_CLUSTER2,
-	HPRE_CLUSTER3
+	HPRE_CLUSTER3,
+	HPRE_CLUSTERS_NUM_MAX
 };
 
 enum hpre_ctrl_dbgfs_file {
@@ -42,9 +43,6 @@ enum hpre_dfx_dbgfs_file {
 	HPRE_DFX_FILE_NUM
 };
 
-#define HPRE_CLUSTERS_NUM_V2		(HPRE_CLUSTER3 + 1)
-#define HPRE_CLUSTERS_NUM_V3		1
-#define HPRE_CLUSTERS_NUM_MAX		HPRE_CLUSTERS_NUM_V2
 #define HPRE_DEBUGFS_FILE_NUM	(HPRE_DEBUG_FILE_NUM + HPRE_CLUSTERS_NUM_MAX - 1)
 
 struct hpre_debugfs_file {
@@ -105,5 +103,5 @@ struct hpre_sqe {
 struct hisi_qp *hpre_create_qp(u8 type);
 int hpre_algs_register(struct hisi_qm *qm);
 void hpre_algs_unregister(struct hisi_qm *qm);
-
+bool hpre_check_alg_support(struct hisi_qm *qm, u32 alg);
 #endif


@ -51,6 +51,12 @@ struct hpre_ctx;
#define HPRE_ECC_HW256_KSZ_B 32 #define HPRE_ECC_HW256_KSZ_B 32
#define HPRE_ECC_HW384_KSZ_B 48 #define HPRE_ECC_HW384_KSZ_B 48
/* capability register mask of driver */
#define HPRE_DRV_RSA_MASK_CAP BIT(0)
#define HPRE_DRV_DH_MASK_CAP BIT(1)
#define HPRE_DRV_ECDH_MASK_CAP BIT(2)
#define HPRE_DRV_X25519_MASK_CAP BIT(5)
typedef void (*hpre_cb)(struct hpre_ctx *ctx, void *sqe); typedef void (*hpre_cb)(struct hpre_ctx *ctx, void *sqe);
struct hpre_rsa_ctx { struct hpre_rsa_ctx {
@ -147,7 +153,7 @@ static int hpre_alloc_req_id(struct hpre_ctx *ctx)
int id; int id;
spin_lock_irqsave(&ctx->req_lock, flags); spin_lock_irqsave(&ctx->req_lock, flags);
id = idr_alloc(&ctx->req_idr, NULL, 0, QM_Q_DEPTH, GFP_ATOMIC); id = idr_alloc(&ctx->req_idr, NULL, 0, ctx->qp->sq_depth, GFP_ATOMIC);
spin_unlock_irqrestore(&ctx->req_lock, flags); spin_unlock_irqrestore(&ctx->req_lock, flags);
return id; return id;
@ -488,7 +494,7 @@ static int hpre_ctx_init(struct hpre_ctx *ctx, u8 type)
qp->qp_ctx = ctx; qp->qp_ctx = ctx;
qp->req_cb = hpre_alg_cb; qp->req_cb = hpre_alg_cb;
ret = hpre_ctx_set(ctx, qp, QM_Q_DEPTH); ret = hpre_ctx_set(ctx, qp, qp->sq_depth);
if (ret) if (ret)
hpre_stop_qp_and_put(qp); hpre_stop_qp_and_put(qp);
@ -2002,55 +2008,53 @@ static struct kpp_alg dh = {
}, },
}; };
static struct kpp_alg ecdh_nist_p192 = { static struct kpp_alg ecdh_curves[] = {
.set_secret = hpre_ecdh_set_secret, {
.generate_public_key = hpre_ecdh_compute_value, .set_secret = hpre_ecdh_set_secret,
.compute_shared_secret = hpre_ecdh_compute_value, .generate_public_key = hpre_ecdh_compute_value,
.max_size = hpre_ecdh_max_size, .compute_shared_secret = hpre_ecdh_compute_value,
.init = hpre_ecdh_nist_p192_init_tfm, .max_size = hpre_ecdh_max_size,
.exit = hpre_ecdh_exit_tfm, .init = hpre_ecdh_nist_p192_init_tfm,
.reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, .exit = hpre_ecdh_exit_tfm,
.base = { .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ,
.cra_ctxsize = sizeof(struct hpre_ctx), .base = {
.cra_priority = HPRE_CRYPTO_ALG_PRI, .cra_ctxsize = sizeof(struct hpre_ctx),
.cra_name = "ecdh-nist-p192", .cra_priority = HPRE_CRYPTO_ALG_PRI,
.cra_driver_name = "hpre-ecdh-nist-p192", .cra_name = "ecdh-nist-p192",
.cra_module = THIS_MODULE, .cra_driver_name = "hpre-ecdh-nist-p192",
}, .cra_module = THIS_MODULE,
}; },
}, {
static struct kpp_alg ecdh_nist_p256 = { .set_secret = hpre_ecdh_set_secret,
.set_secret = hpre_ecdh_set_secret, .generate_public_key = hpre_ecdh_compute_value,
.generate_public_key = hpre_ecdh_compute_value, .compute_shared_secret = hpre_ecdh_compute_value,
.compute_shared_secret = hpre_ecdh_compute_value, .max_size = hpre_ecdh_max_size,
.max_size = hpre_ecdh_max_size, .init = hpre_ecdh_nist_p256_init_tfm,
.init = hpre_ecdh_nist_p256_init_tfm, .exit = hpre_ecdh_exit_tfm,
.exit = hpre_ecdh_exit_tfm, .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ,
.reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, .base = {
.base = { .cra_ctxsize = sizeof(struct hpre_ctx),
.cra_ctxsize = sizeof(struct hpre_ctx), .cra_priority = HPRE_CRYPTO_ALG_PRI,
.cra_priority = HPRE_CRYPTO_ALG_PRI, .cra_name = "ecdh-nist-p256",
.cra_name = "ecdh-nist-p256", .cra_driver_name = "hpre-ecdh-nist-p256",
.cra_driver_name = "hpre-ecdh-nist-p256", .cra_module = THIS_MODULE,
.cra_module = THIS_MODULE, },
}, }, {
}; .set_secret = hpre_ecdh_set_secret,
.generate_public_key = hpre_ecdh_compute_value,
static struct kpp_alg ecdh_nist_p384 = { .compute_shared_secret = hpre_ecdh_compute_value,
.set_secret = hpre_ecdh_set_secret, .max_size = hpre_ecdh_max_size,
.generate_public_key = hpre_ecdh_compute_value, .init = hpre_ecdh_nist_p384_init_tfm,
.compute_shared_secret = hpre_ecdh_compute_value, .exit = hpre_ecdh_exit_tfm,
.max_size = hpre_ecdh_max_size, .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ,
.init = hpre_ecdh_nist_p384_init_tfm, .base = {
.exit = hpre_ecdh_exit_tfm, .cra_ctxsize = sizeof(struct hpre_ctx),
.reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, .cra_priority = HPRE_CRYPTO_ALG_PRI,
.base = { .cra_name = "ecdh-nist-p384",
.cra_ctxsize = sizeof(struct hpre_ctx), .cra_driver_name = "hpre-ecdh-nist-p384",
.cra_priority = HPRE_CRYPTO_ALG_PRI, .cra_module = THIS_MODULE,
.cra_name = "ecdh-nist-p384", },
.cra_driver_name = "hpre-ecdh-nist-p384", }
.cra_module = THIS_MODULE,
},
}; };
static struct kpp_alg curve25519_alg = { static struct kpp_alg curve25519_alg = {
@ -2070,78 +2074,144 @@ static struct kpp_alg curve25519_alg = {
}, },
}; };
static int hpre_register_rsa(struct hisi_qm *qm)
static int hpre_register_ecdh(void)
{ {
int ret; int ret;
ret = crypto_register_kpp(&ecdh_nist_p192); if (!hpre_check_alg_support(qm, HPRE_DRV_RSA_MASK_CAP))
return 0;
rsa.base.cra_flags = 0;
ret = crypto_register_akcipher(&rsa);
if (ret) if (ret)
return ret; dev_err(&qm->pdev->dev, "failed to register rsa (%d)!\n", ret);
ret = crypto_register_kpp(&ecdh_nist_p256);
if (ret)
goto unregister_ecdh_p192;
ret = crypto_register_kpp(&ecdh_nist_p384);
if (ret)
goto unregister_ecdh_p256;
return 0;
unregister_ecdh_p256:
crypto_unregister_kpp(&ecdh_nist_p256);
unregister_ecdh_p192:
crypto_unregister_kpp(&ecdh_nist_p192);
return ret; return ret;
} }
static void hpre_unregister_ecdh(void) static void hpre_unregister_rsa(struct hisi_qm *qm)
{ {
crypto_unregister_kpp(&ecdh_nist_p384); if (!hpre_check_alg_support(qm, HPRE_DRV_RSA_MASK_CAP))
crypto_unregister_kpp(&ecdh_nist_p256); return;
crypto_unregister_kpp(&ecdh_nist_p192);
crypto_unregister_akcipher(&rsa);
}
static int hpre_register_dh(struct hisi_qm *qm)
{
int ret;
if (!hpre_check_alg_support(qm, HPRE_DRV_DH_MASK_CAP))
return 0;
ret = crypto_register_kpp(&dh);
if (ret)
dev_err(&qm->pdev->dev, "failed to register dh (%d)!\n", ret);
return ret;
}
static void hpre_unregister_dh(struct hisi_qm *qm)
{
if (!hpre_check_alg_support(qm, HPRE_DRV_DH_MASK_CAP))
return;
crypto_unregister_kpp(&dh);
}
static int hpre_register_ecdh(struct hisi_qm *qm)
{
int ret, i;
if (!hpre_check_alg_support(qm, HPRE_DRV_ECDH_MASK_CAP))
return 0;
for (i = 0; i < ARRAY_SIZE(ecdh_curves); i++) {
ret = crypto_register_kpp(&ecdh_curves[i]);
if (ret) {
dev_err(&qm->pdev->dev, "failed to register %s (%d)!\n",
ecdh_curves[i].base.cra_name, ret);
goto unreg_kpp;
}
}
return 0;
unreg_kpp:
for (--i; i >= 0; --i)
crypto_unregister_kpp(&ecdh_curves[i]);
return ret;
}
static void hpre_unregister_ecdh(struct hisi_qm *qm)
{
int i;
if (!hpre_check_alg_support(qm, HPRE_DRV_ECDH_MASK_CAP))
return;
for (i = ARRAY_SIZE(ecdh_curves) - 1; i >= 0; --i)
crypto_unregister_kpp(&ecdh_curves[i]);
}
static int hpre_register_x25519(struct hisi_qm *qm)
{
int ret;
if (!hpre_check_alg_support(qm, HPRE_DRV_X25519_MASK_CAP))
return 0;
ret = crypto_register_kpp(&curve25519_alg);
if (ret)
dev_err(&qm->pdev->dev, "failed to register x25519 (%d)!\n", ret);
return ret;
}
static void hpre_unregister_x25519(struct hisi_qm *qm)
{
if (!hpre_check_alg_support(qm, HPRE_DRV_X25519_MASK_CAP))
return;
crypto_unregister_kpp(&curve25519_alg);
} }
int hpre_algs_register(struct hisi_qm *qm) int hpre_algs_register(struct hisi_qm *qm)
{ {
int ret; int ret;
rsa.base.cra_flags = 0; ret = hpre_register_rsa(qm);
ret = crypto_register_akcipher(&rsa);
if (ret) if (ret)
return ret; return ret;
ret = crypto_register_kpp(&dh); ret = hpre_register_dh(qm);
if (ret) if (ret)
goto unreg_rsa; goto unreg_rsa;
if (qm->ver >= QM_HW_V3) { ret = hpre_register_ecdh(qm);
ret = hpre_register_ecdh(); if (ret)
if (ret) goto unreg_dh;
goto unreg_dh;
ret = crypto_register_kpp(&curve25519_alg); ret = hpre_register_x25519(qm);
if (ret) if (ret)
goto unreg_ecdh; goto unreg_ecdh;
}
return 0; return ret;
unreg_ecdh: unreg_ecdh:
hpre_unregister_ecdh(); hpre_unregister_ecdh(qm);
unreg_dh: unreg_dh:
crypto_unregister_kpp(&dh); hpre_unregister_dh(qm);
unreg_rsa: unreg_rsa:
crypto_unregister_akcipher(&rsa); hpre_unregister_rsa(qm);
return ret; return ret;
} }
void hpre_algs_unregister(struct hisi_qm *qm) void hpre_algs_unregister(struct hisi_qm *qm)
{ {
if (qm->ver >= QM_HW_V3) { hpre_unregister_x25519(qm);
crypto_unregister_kpp(&curve25519_alg); hpre_unregister_ecdh(qm);
hpre_unregister_ecdh(); hpre_unregister_dh(qm);
} hpre_unregister_rsa(qm);
crypto_unregister_kpp(&dh);
crypto_unregister_akcipher(&rsa);
} }


@ -53,9 +53,7 @@
#define HPRE_CORE_IS_SCHD_OFFSET 0x90 #define HPRE_CORE_IS_SCHD_OFFSET 0x90
#define HPRE_RAS_CE_ENB 0x301410 #define HPRE_RAS_CE_ENB 0x301410
#define HPRE_HAC_RAS_CE_ENABLE (BIT(0) | BIT(22) | BIT(23))
#define HPRE_RAS_NFE_ENB 0x301414 #define HPRE_RAS_NFE_ENB 0x301414
#define HPRE_HAC_RAS_NFE_ENABLE 0x3ffffe
#define HPRE_RAS_FE_ENB 0x301418 #define HPRE_RAS_FE_ENB 0x301418
#define HPRE_OOO_SHUTDOWN_SEL 0x301a3c #define HPRE_OOO_SHUTDOWN_SEL 0x301a3c
#define HPRE_HAC_RAS_FE_ENABLE 0 #define HPRE_HAC_RAS_FE_ENABLE 0
@ -79,8 +77,6 @@
#define HPRE_QM_AXI_CFG_MASK GENMASK(15, 0) #define HPRE_QM_AXI_CFG_MASK GENMASK(15, 0)
#define HPRE_QM_VFG_AX_MASK GENMASK(7, 0) #define HPRE_QM_VFG_AX_MASK GENMASK(7, 0)
#define HPRE_BD_USR_MASK GENMASK(1, 0) #define HPRE_BD_USR_MASK GENMASK(1, 0)
#define HPRE_CLUSTER_CORE_MASK_V2 GENMASK(3, 0)
#define HPRE_CLUSTER_CORE_MASK_V3 GENMASK(7, 0)
#define HPRE_PREFETCH_CFG 0x301130 #define HPRE_PREFETCH_CFG 0x301130
#define HPRE_SVA_PREFTCH_DFX 0x30115C #define HPRE_SVA_PREFTCH_DFX 0x30115C
#define HPRE_PREFETCH_ENABLE (~(BIT(0) | BIT(30))) #define HPRE_PREFETCH_ENABLE (~(BIT(0) | BIT(30)))
@ -122,6 +118,8 @@
#define HPRE_DFX_COMMON2_LEN 0xE #define HPRE_DFX_COMMON2_LEN 0xE
#define HPRE_DFX_CORE_LEN 0x43 #define HPRE_DFX_CORE_LEN 0x43
#define HPRE_DEV_ALG_MAX_LEN 256
static const char hpre_name[] = "hisi_hpre"; static const char hpre_name[] = "hisi_hpre";
static struct dentry *hpre_debugfs_root; static struct dentry *hpre_debugfs_root;
static const struct pci_device_id hpre_dev_ids[] = { static const struct pci_device_id hpre_dev_ids[] = {
@ -137,6 +135,38 @@ struct hpre_hw_error {
const char *msg; const char *msg;
}; };
struct hpre_dev_alg {
u32 alg_msk;
const char *alg;
};
static const struct hpre_dev_alg hpre_dev_algs[] = {
{
.alg_msk = BIT(0),
.alg = "rsa\n"
}, {
.alg_msk = BIT(1),
.alg = "dh\n"
}, {
.alg_msk = BIT(2),
.alg = "ecdh\n"
}, {
.alg_msk = BIT(3),
.alg = "ecdsa\n"
}, {
.alg_msk = BIT(4),
.alg = "sm2\n"
}, {
.alg_msk = BIT(5),
.alg = "x25519\n"
}, {
.alg_msk = BIT(6),
.alg = "x448\n"
}, {
/* sentinel */
}
};
static struct hisi_qm_list hpre_devices = { static struct hisi_qm_list hpre_devices = {
.register_to_crypto = hpre_algs_register, .register_to_crypto = hpre_algs_register,
.unregister_from_crypto = hpre_algs_unregister, .unregister_from_crypto = hpre_algs_unregister,
@ -147,6 +177,62 @@ static const char * const hpre_debug_file_name[] = {
[HPRE_CLUSTER_CTRL] = "cluster_ctrl", [HPRE_CLUSTER_CTRL] = "cluster_ctrl",
}; };
enum hpre_cap_type {
HPRE_QM_NFE_MASK_CAP,
HPRE_QM_RESET_MASK_CAP,
HPRE_QM_OOO_SHUTDOWN_MASK_CAP,
HPRE_QM_CE_MASK_CAP,
HPRE_NFE_MASK_CAP,
HPRE_RESET_MASK_CAP,
HPRE_OOO_SHUTDOWN_MASK_CAP,
HPRE_CE_MASK_CAP,
HPRE_CLUSTER_NUM_CAP,
HPRE_CORE_TYPE_NUM_CAP,
HPRE_CORE_NUM_CAP,
HPRE_CLUSTER_CORE_NUM_CAP,
HPRE_CORE_ENABLE_BITMAP_CAP,
HPRE_DRV_ALG_BITMAP_CAP,
HPRE_DEV_ALG_BITMAP_CAP,
HPRE_CORE1_ALG_BITMAP_CAP,
HPRE_CORE2_ALG_BITMAP_CAP,
HPRE_CORE3_ALG_BITMAP_CAP,
HPRE_CORE4_ALG_BITMAP_CAP,
HPRE_CORE5_ALG_BITMAP_CAP,
HPRE_CORE6_ALG_BITMAP_CAP,
HPRE_CORE7_ALG_BITMAP_CAP,
HPRE_CORE8_ALG_BITMAP_CAP,
HPRE_CORE9_ALG_BITMAP_CAP,
HPRE_CORE10_ALG_BITMAP_CAP
};
static const struct hisi_qm_cap_info hpre_basic_info[] = {
{HPRE_QM_NFE_MASK_CAP, 0x3124, 0, GENMASK(31, 0), 0x0, 0x1C37, 0x7C37},
{HPRE_QM_RESET_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0xC37, 0x6C37},
{HPRE_QM_OOO_SHUTDOWN_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0x4, 0x6C37},
{HPRE_QM_CE_MASK_CAP, 0x312C, 0, GENMASK(31, 0), 0x0, 0x8, 0x8},
{HPRE_NFE_MASK_CAP, 0x3130, 0, GENMASK(31, 0), 0x0, 0x3FFFFE, 0xFFFFFE},
{HPRE_RESET_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x3FFFFE, 0xBFFFFE},
{HPRE_OOO_SHUTDOWN_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x22, 0xBFFFFE},
{HPRE_CE_MASK_CAP, 0x3138, 0, GENMASK(31, 0), 0x0, 0x1, 0x1},
{HPRE_CLUSTER_NUM_CAP, 0x313c, 20, GENMASK(3, 0), 0x0, 0x4, 0x1},
{HPRE_CORE_TYPE_NUM_CAP, 0x313c, 16, GENMASK(3, 0), 0x0, 0x2, 0x2},
{HPRE_CORE_NUM_CAP, 0x313c, 8, GENMASK(7, 0), 0x0, 0x8, 0xA},
{HPRE_CLUSTER_CORE_NUM_CAP, 0x313c, 0, GENMASK(7, 0), 0x0, 0x2, 0xA},
{HPRE_CORE_ENABLE_BITMAP_CAP, 0x3140, 0, GENMASK(31, 0), 0x0, 0xF, 0x3FF},
{HPRE_DRV_ALG_BITMAP_CAP, 0x3144, 0, GENMASK(31, 0), 0x0, 0x03, 0x27},
{HPRE_DEV_ALG_BITMAP_CAP, 0x3148, 0, GENMASK(31, 0), 0x0, 0x03, 0x7F},
{HPRE_CORE1_ALG_BITMAP_CAP, 0x314c, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F},
{HPRE_CORE2_ALG_BITMAP_CAP, 0x3150, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F},
{HPRE_CORE3_ALG_BITMAP_CAP, 0x3154, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F},
{HPRE_CORE4_ALG_BITMAP_CAP, 0x3158, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F},
{HPRE_CORE5_ALG_BITMAP_CAP, 0x315c, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F},
{HPRE_CORE6_ALG_BITMAP_CAP, 0x3160, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F},
{HPRE_CORE7_ALG_BITMAP_CAP, 0x3164, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F},
{HPRE_CORE8_ALG_BITMAP_CAP, 0x3168, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F},
{HPRE_CORE9_ALG_BITMAP_CAP, 0x316c, 0, GENMASK(31, 0), 0x0, 0x10, 0x10},
{HPRE_CORE10_ALG_BITMAP_CAP, 0x3170, 0, GENMASK(31, 0), 0x0, 0x10, 0x10}
};
static const struct hpre_hw_error hpre_hw_errors[] = { static const struct hpre_hw_error hpre_hw_errors[] = {
{ {
.int_msk = BIT(0), .int_msk = BIT(0),
@ -262,6 +348,46 @@ static struct dfx_diff_registers hpre_diff_regs[] = {
}, },
}; };
bool hpre_check_alg_support(struct hisi_qm *qm, u32 alg)
{
u32 cap_val;
cap_val = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_DRV_ALG_BITMAP_CAP, qm->cap_ver);
if (alg & cap_val)
return true;
return false;
}
static int hpre_set_qm_algs(struct hisi_qm *qm)
{
struct device *dev = &qm->pdev->dev;
char *algs, *ptr;
u32 alg_msk;
int i;
if (!qm->use_sva)
return 0;
algs = devm_kzalloc(dev, HPRE_DEV_ALG_MAX_LEN * sizeof(char), GFP_KERNEL);
if (!algs)
return -ENOMEM;
alg_msk = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_DEV_ALG_BITMAP_CAP, qm->cap_ver);
for (i = 0; i < ARRAY_SIZE(hpre_dev_algs); i++)
if (alg_msk & hpre_dev_algs[i].alg_msk)
strcat(algs, hpre_dev_algs[i].alg);
ptr = strrchr(algs, '\n');
if (ptr)
*ptr = '\0';
qm->uacce->algs = algs;
return 0;
}
static int hpre_diff_regs_show(struct seq_file *s, void *unused) static int hpre_diff_regs_show(struct seq_file *s, void *unused)
{ {
struct hisi_qm *qm = s->private; struct hisi_qm *qm = s->private;
@ -330,14 +456,12 @@ MODULE_PARM_DESC(vfs_num, "Number of VFs to enable(1-63), 0(default)");
static inline int hpre_cluster_num(struct hisi_qm *qm) static inline int hpre_cluster_num(struct hisi_qm *qm)
{ {
return (qm->ver >= QM_HW_V3) ? HPRE_CLUSTERS_NUM_V3 : return hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CLUSTER_NUM_CAP, qm->cap_ver);
HPRE_CLUSTERS_NUM_V2;
} }
static inline int hpre_cluster_core_mask(struct hisi_qm *qm) static inline int hpre_cluster_core_mask(struct hisi_qm *qm)
{ {
return (qm->ver >= QM_HW_V3) ? return hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CORE_ENABLE_BITMAP_CAP, qm->cap_ver);
HPRE_CLUSTER_CORE_MASK_V3 : HPRE_CLUSTER_CORE_MASK_V2;
} }
struct hisi_qp *hpre_create_qp(u8 type) struct hisi_qp *hpre_create_qp(u8 type)
@ -457,7 +581,7 @@ static void hpre_open_sva_prefetch(struct hisi_qm *qm)
u32 val; u32 val;
int ret; int ret;
if (qm->ver < QM_HW_V3) if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps))
return; return;
/* Enable prefetch */ /* Enable prefetch */
@ -478,7 +602,7 @@ static void hpre_close_sva_prefetch(struct hisi_qm *qm)
u32 val; u32 val;
int ret; int ret;
if (qm->ver < QM_HW_V3) if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps))
return; return;
val = readl_relaxed(qm->io_base + HPRE_PREFETCH_CFG); val = readl_relaxed(qm->io_base + HPRE_PREFETCH_CFG);
@ -630,7 +754,8 @@ static void hpre_master_ooo_ctrl(struct hisi_qm *qm, bool enable)
val1 = readl(qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB); val1 = readl(qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
if (enable) { if (enable) {
val1 |= HPRE_AM_OOO_SHUTDOWN_ENABLE; val1 |= HPRE_AM_OOO_SHUTDOWN_ENABLE;
val2 = HPRE_HAC_RAS_NFE_ENABLE; val2 = hisi_qm_get_hw_info(qm, hpre_basic_info,
HPRE_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
} else { } else {
val1 &= ~HPRE_AM_OOO_SHUTDOWN_ENABLE; val1 &= ~HPRE_AM_OOO_SHUTDOWN_ENABLE;
val2 = 0x0; val2 = 0x0;
@ -644,21 +769,30 @@ static void hpre_master_ooo_ctrl(struct hisi_qm *qm, bool enable)
static void hpre_hw_error_disable(struct hisi_qm *qm) static void hpre_hw_error_disable(struct hisi_qm *qm)
{ {
/* disable hpre hw error interrupts */ u32 ce, nfe;
writel(HPRE_CORE_INT_DISABLE, qm->io_base + HPRE_INT_MASK);
ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CE_MASK_CAP, qm->cap_ver);
nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
/* disable hpre hw error interrupts */
writel(ce | nfe | HPRE_HAC_RAS_FE_ENABLE, qm->io_base + HPRE_INT_MASK);
/* disable HPRE block master OOO when nfe occurs on Kunpeng930 */ /* disable HPRE block master OOO when nfe occurs on Kunpeng930 */
hpre_master_ooo_ctrl(qm, false); hpre_master_ooo_ctrl(qm, false);
} }
static void hpre_hw_error_enable(struct hisi_qm *qm) static void hpre_hw_error_enable(struct hisi_qm *qm)
{ {
u32 ce, nfe;
ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CE_MASK_CAP, qm->cap_ver);
nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
/* clear HPRE hw error source if having */ /* clear HPRE hw error source if having */
writel(HPRE_CORE_INT_DISABLE, qm->io_base + HPRE_HAC_SOURCE_INT); writel(ce | nfe | HPRE_HAC_RAS_FE_ENABLE, qm->io_base + HPRE_HAC_SOURCE_INT);
/* configure error type */ /* configure error type */
writel(HPRE_HAC_RAS_CE_ENABLE, qm->io_base + HPRE_RAS_CE_ENB); writel(ce, qm->io_base + HPRE_RAS_CE_ENB);
writel(HPRE_HAC_RAS_NFE_ENABLE, qm->io_base + HPRE_RAS_NFE_ENB); writel(nfe, qm->io_base + HPRE_RAS_NFE_ENB);
writel(HPRE_HAC_RAS_FE_ENABLE, qm->io_base + HPRE_RAS_FE_ENB); writel(HPRE_HAC_RAS_FE_ENABLE, qm->io_base + HPRE_RAS_FE_ENB);
/* enable HPRE block master OOO when nfe occurs on Kunpeng930 */ /* enable HPRE block master OOO when nfe occurs on Kunpeng930 */
@ -708,7 +842,7 @@ static u32 hpre_cluster_inqry_read(struct hpre_debugfs_file *file)
return readl(qm->io_base + offset + HPRE_CLSTR_ADDR_INQRY_RSLT); return readl(qm->io_base + offset + HPRE_CLSTR_ADDR_INQRY_RSLT);
} }
static int hpre_cluster_inqry_write(struct hpre_debugfs_file *file, u32 val) static void hpre_cluster_inqry_write(struct hpre_debugfs_file *file, u32 val)
{ {
struct hisi_qm *qm = hpre_file_to_qm(file); struct hisi_qm *qm = hpre_file_to_qm(file);
int cluster_index = file->index - HPRE_CLUSTER_CTRL; int cluster_index = file->index - HPRE_CLUSTER_CTRL;
@ -716,8 +850,6 @@ static int hpre_cluster_inqry_write(struct hpre_debugfs_file *file, u32 val)
HPRE_CLSTR_ADDR_INTRVL; HPRE_CLSTR_ADDR_INTRVL;
writel(val, qm->io_base + offset + HPRE_CLUSTER_INQURY); writel(val, qm->io_base + offset + HPRE_CLUSTER_INQURY);
return 0;
} }
static ssize_t hpre_ctrl_debug_read(struct file *filp, char __user *buf, static ssize_t hpre_ctrl_debug_read(struct file *filp, char __user *buf,
@ -792,9 +924,7 @@ static ssize_t hpre_ctrl_debug_write(struct file *filp, const char __user *buf,
goto err_input; goto err_input;
break; break;
case HPRE_CLUSTER_CTRL: case HPRE_CLUSTER_CTRL:
ret = hpre_cluster_inqry_write(file, val); hpre_cluster_inqry_write(file, val);
if (ret)
goto err_input;
break; break;
default: default:
ret = -EINVAL; ret = -EINVAL;
@ -1006,15 +1136,13 @@ static void hpre_debugfs_exit(struct hisi_qm *qm)
static int hpre_qm_init(struct hisi_qm *qm, struct pci_dev *pdev) static int hpre_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
{ {
int ret;
if (pdev->revision == QM_HW_V1) { if (pdev->revision == QM_HW_V1) {
pci_warn(pdev, "HPRE version 1 is not supported!\n"); pci_warn(pdev, "HPRE version 1 is not supported!\n");
return -EINVAL; return -EINVAL;
} }
if (pdev->revision >= QM_HW_V3)
qm->algs = "rsa\ndh\necdh\nx25519\nx448\necdsa\nsm2";
else
qm->algs = "rsa\ndh";
qm->mode = uacce_mode; qm->mode = uacce_mode;
qm->pdev = pdev; qm->pdev = pdev;
qm->ver = pdev->revision; qm->ver = pdev->revision;
@ -1030,7 +1158,19 @@ static int hpre_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
qm->qm_list = &hpre_devices; qm->qm_list = &hpre_devices;
} }
return hisi_qm_init(qm); ret = hisi_qm_init(qm);
if (ret) {
pci_err(pdev, "Failed to init hpre qm configures!\n");
return ret;
}
ret = hpre_set_qm_algs(qm);
if (ret) {
pci_err(pdev, "Failed to set hpre algs!\n");
hisi_qm_uninit(qm);
}
return ret;
} }
static int hpre_show_last_regs_init(struct hisi_qm *qm) static int hpre_show_last_regs_init(struct hisi_qm *qm)
@ -1129,7 +1269,11 @@ static u32 hpre_get_hw_err_status(struct hisi_qm *qm)
static void hpre_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts) static void hpre_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
{ {
u32 nfe;
writel(err_sts, qm->io_base + HPRE_HAC_SOURCE_INT); writel(err_sts, qm->io_base + HPRE_HAC_SOURCE_INT);
nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
writel(nfe, qm->io_base + HPRE_RAS_NFE_ENB);
} }
static void hpre_open_axi_master_ooo(struct hisi_qm *qm) static void hpre_open_axi_master_ooo(struct hisi_qm *qm)
@ -1147,14 +1291,20 @@ static void hpre_err_info_init(struct hisi_qm *qm)
{ {
struct hisi_qm_err_info *err_info = &qm->err_info; struct hisi_qm_err_info *err_info = &qm->err_info;
err_info->ce = QM_BASE_CE; err_info->fe = HPRE_HAC_RAS_FE_ENABLE;
err_info->fe = 0; err_info->ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_QM_CE_MASK_CAP, qm->cap_ver);
err_info->ecc_2bits_mask = HPRE_CORE_ECC_2BIT_ERR | err_info->nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_QM_NFE_MASK_CAP, qm->cap_ver);
HPRE_OOO_ECC_2BIT_ERR; err_info->ecc_2bits_mask = HPRE_CORE_ECC_2BIT_ERR | HPRE_OOO_ECC_2BIT_ERR;
err_info->dev_ce_mask = HPRE_HAC_RAS_CE_ENABLE; err_info->dev_shutdown_mask = hisi_qm_get_hw_info(qm, hpre_basic_info,
HPRE_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
err_info->qm_shutdown_mask = hisi_qm_get_hw_info(qm, hpre_basic_info,
HPRE_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
err_info->qm_reset_mask = hisi_qm_get_hw_info(qm, hpre_basic_info,
HPRE_QM_RESET_MASK_CAP, qm->cap_ver);
err_info->dev_reset_mask = hisi_qm_get_hw_info(qm, hpre_basic_info,
HPRE_RESET_MASK_CAP, qm->cap_ver);
err_info->msi_wr_port = HPRE_WR_MSI_PORT; err_info->msi_wr_port = HPRE_WR_MSI_PORT;
err_info->acpi_rst = "HRST"; err_info->acpi_rst = "HRST";
err_info->nfe = QM_BASE_NFE | QM_ACC_DO_TASK_TIMEOUT;
} }
static const struct hisi_qm_err_ini hpre_err_ini = { static const struct hisi_qm_err_ini hpre_err_ini = {

File diff suppressed because it is too large.


@@ -17,6 +17,7 @@ struct sec_alg_res {
 	dma_addr_t a_ivin_dma;
 	u8 *out_mac;
 	dma_addr_t out_mac_dma;
+	u16 depth;
 };
 
 /* Cipher request of SEC private */
@@ -115,9 +116,9 @@ struct sec_cipher_ctx {
 /* SEC queue context which defines queue's relatives */
 struct sec_qp_ctx {
 	struct hisi_qp *qp;
-	struct sec_req *req_list[QM_Q_DEPTH];
+	struct sec_req **req_list;
 	struct idr req_idr;
-	struct sec_alg_res res[QM_Q_DEPTH];
+	struct sec_alg_res *res;
 	struct sec_ctx *ctx;
 	spinlock_t req_lock;
 	struct list_head backlog;
@@ -191,8 +192,37 @@ struct sec_dev {
 	bool iommu_used;
 };
 
+enum sec_cap_type {
+	SEC_QM_NFE_MASK_CAP = 0x0,
+	SEC_QM_RESET_MASK_CAP,
+	SEC_QM_OOO_SHUTDOWN_MASK_CAP,
+	SEC_QM_CE_MASK_CAP,
+	SEC_NFE_MASK_CAP,
+	SEC_RESET_MASK_CAP,
+	SEC_OOO_SHUTDOWN_MASK_CAP,
+	SEC_CE_MASK_CAP,
+	SEC_CLUSTER_NUM_CAP,
+	SEC_CORE_TYPE_NUM_CAP,
+	SEC_CORE_NUM_CAP,
+	SEC_CORES_PER_CLUSTER_NUM_CAP,
+	SEC_CORE_ENABLE_BITMAP,
+	SEC_DRV_ALG_BITMAP_LOW,
+	SEC_DRV_ALG_BITMAP_HIGH,
+	SEC_DEV_ALG_BITMAP_LOW,
+	SEC_DEV_ALG_BITMAP_HIGH,
+	SEC_CORE1_ALG_BITMAP_LOW,
+	SEC_CORE1_ALG_BITMAP_HIGH,
+	SEC_CORE2_ALG_BITMAP_LOW,
+	SEC_CORE2_ALG_BITMAP_HIGH,
+	SEC_CORE3_ALG_BITMAP_LOW,
+	SEC_CORE3_ALG_BITMAP_HIGH,
+	SEC_CORE4_ALG_BITMAP_LOW,
+	SEC_CORE4_ALG_BITMAP_HIGH,
+};
+
 void sec_destroy_qps(struct hisi_qp **qps, int qp_num);
 struct hisi_qp **sec_create_qps(void);
 int sec_register_to_crypto(struct hisi_qm *qm);
 void sec_unregister_from_crypto(struct hisi_qm *qm);
+u64 sec_get_alg_bitmap(struct hisi_qm *qm, u32 high, u32 low);
 #endif
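
The sec.h hunk above makes the queue structures runtime-sized (a per-queue depth instead of fixed QM_Q_DEPTH arrays) and describes algorithm support as paired 32-bit capability words combined by sec_get_alg_bitmap(). A rough userspace sketch of that low/high combination pattern, with invented names and register values that are not taken from the driver:

/*
 * Illustrative only: fold two 32-bit capability words into a 64-bit
 * algorithm bitmap and test individual bits.
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t get_alg_bitmap(uint32_t high, uint32_t low)
{
	return ((uint64_t)high << 32) | low;
}

int main(void)
{
	/* Example register pair; real values come from device capability registers. */
	uint64_t caps = get_alg_bitmap(0x00000000, 0x0001867f);

	if (caps & (1ULL << 1))
		printf("bit 1 advertised\n");
	if (!(caps & (1ULL << 40)))
		printf("bit 40 not advertised\n");
	return 0;
}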


@ -59,14 +59,14 @@
#define SEC_ICV_MASK 0x000E #define SEC_ICV_MASK 0x000E
#define SEC_SQE_LEN_RATE_MASK 0x3 #define SEC_SQE_LEN_RATE_MASK 0x3
#define SEC_TOTAL_IV_SZ (SEC_IV_SIZE * QM_Q_DEPTH) #define SEC_TOTAL_IV_SZ(depth) (SEC_IV_SIZE * (depth))
#define SEC_SGL_SGE_NR 128 #define SEC_SGL_SGE_NR 128
#define SEC_CIPHER_AUTH 0xfe #define SEC_CIPHER_AUTH 0xfe
#define SEC_AUTH_CIPHER 0x1 #define SEC_AUTH_CIPHER 0x1
#define SEC_MAX_MAC_LEN 64 #define SEC_MAX_MAC_LEN 64
#define SEC_MAX_AAD_LEN 65535 #define SEC_MAX_AAD_LEN 65535
#define SEC_MAX_CCM_AAD_LEN 65279 #define SEC_MAX_CCM_AAD_LEN 65279
#define SEC_TOTAL_MAC_SZ (SEC_MAX_MAC_LEN * QM_Q_DEPTH) #define SEC_TOTAL_MAC_SZ(depth) (SEC_MAX_MAC_LEN * (depth))
#define SEC_PBUF_SZ 512 #define SEC_PBUF_SZ 512
#define SEC_PBUF_IV_OFFSET SEC_PBUF_SZ #define SEC_PBUF_IV_OFFSET SEC_PBUF_SZ
@ -74,11 +74,11 @@
#define SEC_PBUF_PKG (SEC_PBUF_SZ + SEC_IV_SIZE + \ #define SEC_PBUF_PKG (SEC_PBUF_SZ + SEC_IV_SIZE + \
SEC_MAX_MAC_LEN * 2) SEC_MAX_MAC_LEN * 2)
#define SEC_PBUF_NUM (PAGE_SIZE / SEC_PBUF_PKG) #define SEC_PBUF_NUM (PAGE_SIZE / SEC_PBUF_PKG)
#define SEC_PBUF_PAGE_NUM (QM_Q_DEPTH / SEC_PBUF_NUM) #define SEC_PBUF_PAGE_NUM(depth) ((depth) / SEC_PBUF_NUM)
#define SEC_PBUF_LEFT_SZ (SEC_PBUF_PKG * (QM_Q_DEPTH - \ #define SEC_PBUF_LEFT_SZ(depth) (SEC_PBUF_PKG * ((depth) - \
SEC_PBUF_PAGE_NUM * SEC_PBUF_NUM)) SEC_PBUF_PAGE_NUM(depth) * SEC_PBUF_NUM))
#define SEC_TOTAL_PBUF_SZ (PAGE_SIZE * SEC_PBUF_PAGE_NUM + \ #define SEC_TOTAL_PBUF_SZ(depth) (PAGE_SIZE * SEC_PBUF_PAGE_NUM(depth) + \
SEC_PBUF_LEFT_SZ) SEC_PBUF_LEFT_SZ(depth))
#define SEC_SQE_LEN_RATE 4 #define SEC_SQE_LEN_RATE 4
#define SEC_SQE_CFLAG 2 #define SEC_SQE_CFLAG 2
@ -104,6 +104,16 @@
#define IV_CTR_INIT 0x1 #define IV_CTR_INIT 0x1
#define IV_BYTE_OFFSET 0x8 #define IV_BYTE_OFFSET 0x8
struct sec_skcipher {
u64 alg_msk;
struct skcipher_alg alg;
};
struct sec_aead {
u64 alg_msk;
struct aead_alg alg;
};
/* Get an en/de-cipher queue cyclically to balance load over queues of TFM */ /* Get an en/de-cipher queue cyclically to balance load over queues of TFM */
static inline int sec_alloc_queue_id(struct sec_ctx *ctx, struct sec_req *req) static inline int sec_alloc_queue_id(struct sec_ctx *ctx, struct sec_req *req)
{ {
@ -128,9 +138,7 @@ static int sec_alloc_req_id(struct sec_req *req, struct sec_qp_ctx *qp_ctx)
int req_id; int req_id;
spin_lock_bh(&qp_ctx->req_lock); spin_lock_bh(&qp_ctx->req_lock);
req_id = idr_alloc_cyclic(&qp_ctx->req_idr, NULL, 0, qp_ctx->qp->sq_depth, GFP_ATOMIC);
req_id = idr_alloc_cyclic(&qp_ctx->req_idr, NULL,
0, QM_Q_DEPTH, GFP_ATOMIC);
spin_unlock_bh(&qp_ctx->req_lock); spin_unlock_bh(&qp_ctx->req_lock);
if (unlikely(req_id < 0)) { if (unlikely(req_id < 0)) {
dev_err(req->ctx->dev, "alloc req id fail!\n"); dev_err(req->ctx->dev, "alloc req id fail!\n");
@ -148,7 +156,7 @@ static void sec_free_req_id(struct sec_req *req)
struct sec_qp_ctx *qp_ctx = req->qp_ctx; struct sec_qp_ctx *qp_ctx = req->qp_ctx;
int req_id = req->req_id; int req_id = req->req_id;
if (unlikely(req_id < 0 || req_id >= QM_Q_DEPTH)) { if (unlikely(req_id < 0 || req_id >= qp_ctx->qp->sq_depth)) {
dev_err(req->ctx->dev, "free request id invalid!\n"); dev_err(req->ctx->dev, "free request id invalid!\n");
return; return;
} }
@ -300,14 +308,15 @@ static int sec_bd_send(struct sec_ctx *ctx, struct sec_req *req)
/* Get DMA memory resources */ /* Get DMA memory resources */
static int sec_alloc_civ_resource(struct device *dev, struct sec_alg_res *res) static int sec_alloc_civ_resource(struct device *dev, struct sec_alg_res *res)
{ {
u16 q_depth = res->depth;
int i; int i;
res->c_ivin = dma_alloc_coherent(dev, SEC_TOTAL_IV_SZ, res->c_ivin = dma_alloc_coherent(dev, SEC_TOTAL_IV_SZ(q_depth),
&res->c_ivin_dma, GFP_KERNEL); &res->c_ivin_dma, GFP_KERNEL);
if (!res->c_ivin) if (!res->c_ivin)
return -ENOMEM; return -ENOMEM;
for (i = 1; i < QM_Q_DEPTH; i++) { for (i = 1; i < q_depth; i++) {
res[i].c_ivin_dma = res->c_ivin_dma + i * SEC_IV_SIZE; res[i].c_ivin_dma = res->c_ivin_dma + i * SEC_IV_SIZE;
res[i].c_ivin = res->c_ivin + i * SEC_IV_SIZE; res[i].c_ivin = res->c_ivin + i * SEC_IV_SIZE;
} }
@ -318,20 +327,21 @@ static int sec_alloc_civ_resource(struct device *dev, struct sec_alg_res *res)
static void sec_free_civ_resource(struct device *dev, struct sec_alg_res *res) static void sec_free_civ_resource(struct device *dev, struct sec_alg_res *res)
{ {
if (res->c_ivin) if (res->c_ivin)
dma_free_coherent(dev, SEC_TOTAL_IV_SZ, dma_free_coherent(dev, SEC_TOTAL_IV_SZ(res->depth),
res->c_ivin, res->c_ivin_dma); res->c_ivin, res->c_ivin_dma);
} }
static int sec_alloc_aiv_resource(struct device *dev, struct sec_alg_res *res) static int sec_alloc_aiv_resource(struct device *dev, struct sec_alg_res *res)
{ {
u16 q_depth = res->depth;
int i; int i;
res->a_ivin = dma_alloc_coherent(dev, SEC_TOTAL_IV_SZ, res->a_ivin = dma_alloc_coherent(dev, SEC_TOTAL_IV_SZ(q_depth),
&res->a_ivin_dma, GFP_KERNEL); &res->a_ivin_dma, GFP_KERNEL);
if (!res->a_ivin) if (!res->a_ivin)
return -ENOMEM; return -ENOMEM;
for (i = 1; i < QM_Q_DEPTH; i++) { for (i = 1; i < q_depth; i++) {
res[i].a_ivin_dma = res->a_ivin_dma + i * SEC_IV_SIZE; res[i].a_ivin_dma = res->a_ivin_dma + i * SEC_IV_SIZE;
res[i].a_ivin = res->a_ivin + i * SEC_IV_SIZE; res[i].a_ivin = res->a_ivin + i * SEC_IV_SIZE;
} }
@ -342,20 +352,21 @@ static int sec_alloc_aiv_resource(struct device *dev, struct sec_alg_res *res)
static void sec_free_aiv_resource(struct device *dev, struct sec_alg_res *res) static void sec_free_aiv_resource(struct device *dev, struct sec_alg_res *res)
{ {
if (res->a_ivin) if (res->a_ivin)
dma_free_coherent(dev, SEC_TOTAL_IV_SZ, dma_free_coherent(dev, SEC_TOTAL_IV_SZ(res->depth),
res->a_ivin, res->a_ivin_dma); res->a_ivin, res->a_ivin_dma);
} }
static int sec_alloc_mac_resource(struct device *dev, struct sec_alg_res *res) static int sec_alloc_mac_resource(struct device *dev, struct sec_alg_res *res)
{ {
u16 q_depth = res->depth;
int i; int i;
res->out_mac = dma_alloc_coherent(dev, SEC_TOTAL_MAC_SZ << 1, res->out_mac = dma_alloc_coherent(dev, SEC_TOTAL_MAC_SZ(q_depth) << 1,
&res->out_mac_dma, GFP_KERNEL); &res->out_mac_dma, GFP_KERNEL);
if (!res->out_mac) if (!res->out_mac)
return -ENOMEM; return -ENOMEM;
for (i = 1; i < QM_Q_DEPTH; i++) { for (i = 1; i < q_depth; i++) {
res[i].out_mac_dma = res->out_mac_dma + res[i].out_mac_dma = res->out_mac_dma +
i * (SEC_MAX_MAC_LEN << 1); i * (SEC_MAX_MAC_LEN << 1);
res[i].out_mac = res->out_mac + i * (SEC_MAX_MAC_LEN << 1); res[i].out_mac = res->out_mac + i * (SEC_MAX_MAC_LEN << 1);
@ -367,14 +378,14 @@ static int sec_alloc_mac_resource(struct device *dev, struct sec_alg_res *res)
static void sec_free_mac_resource(struct device *dev, struct sec_alg_res *res) static void sec_free_mac_resource(struct device *dev, struct sec_alg_res *res)
{ {
if (res->out_mac) if (res->out_mac)
dma_free_coherent(dev, SEC_TOTAL_MAC_SZ << 1, dma_free_coherent(dev, SEC_TOTAL_MAC_SZ(res->depth) << 1,
res->out_mac, res->out_mac_dma); res->out_mac, res->out_mac_dma);
} }
static void sec_free_pbuf_resource(struct device *dev, struct sec_alg_res *res) static void sec_free_pbuf_resource(struct device *dev, struct sec_alg_res *res)
{ {
if (res->pbuf) if (res->pbuf)
dma_free_coherent(dev, SEC_TOTAL_PBUF_SZ, dma_free_coherent(dev, SEC_TOTAL_PBUF_SZ(res->depth),
res->pbuf, res->pbuf_dma); res->pbuf, res->pbuf_dma);
} }
@ -384,10 +395,12 @@ static void sec_free_pbuf_resource(struct device *dev, struct sec_alg_res *res)
*/ */
static int sec_alloc_pbuf_resource(struct device *dev, struct sec_alg_res *res) static int sec_alloc_pbuf_resource(struct device *dev, struct sec_alg_res *res)
{ {
u16 q_depth = res->depth;
int size = SEC_PBUF_PAGE_NUM(q_depth);
int pbuf_page_offset; int pbuf_page_offset;
int i, j, k; int i, j, k;
res->pbuf = dma_alloc_coherent(dev, SEC_TOTAL_PBUF_SZ, res->pbuf = dma_alloc_coherent(dev, SEC_TOTAL_PBUF_SZ(q_depth),
&res->pbuf_dma, GFP_KERNEL); &res->pbuf_dma, GFP_KERNEL);
if (!res->pbuf) if (!res->pbuf)
return -ENOMEM; return -ENOMEM;
@ -400,11 +413,11 @@ static int sec_alloc_pbuf_resource(struct device *dev, struct sec_alg_res *res)
* So we need SEC_PBUF_PAGE_NUM numbers of PAGE * So we need SEC_PBUF_PAGE_NUM numbers of PAGE
* for the SEC_TOTAL_PBUF_SZ * for the SEC_TOTAL_PBUF_SZ
*/ */
for (i = 0; i <= SEC_PBUF_PAGE_NUM; i++) { for (i = 0; i <= size; i++) {
pbuf_page_offset = PAGE_SIZE * i; pbuf_page_offset = PAGE_SIZE * i;
for (j = 0; j < SEC_PBUF_NUM; j++) { for (j = 0; j < SEC_PBUF_NUM; j++) {
k = i * SEC_PBUF_NUM + j; k = i * SEC_PBUF_NUM + j;
if (k == QM_Q_DEPTH) if (k == q_depth)
break; break;
res[k].pbuf = res->pbuf + res[k].pbuf = res->pbuf +
j * SEC_PBUF_PKG + pbuf_page_offset; j * SEC_PBUF_PKG + pbuf_page_offset;
@ -470,13 +483,68 @@ static void sec_alg_resource_free(struct sec_ctx *ctx,
sec_free_mac_resource(dev, qp_ctx->res); sec_free_mac_resource(dev, qp_ctx->res);
} }
static int sec_alloc_qp_ctx_resource(struct hisi_qm *qm, struct sec_ctx *ctx,
struct sec_qp_ctx *qp_ctx)
{
u16 q_depth = qp_ctx->qp->sq_depth;
struct device *dev = ctx->dev;
int ret = -ENOMEM;
qp_ctx->req_list = kcalloc(q_depth, sizeof(struct sec_req *), GFP_KERNEL);
if (!qp_ctx->req_list)
return ret;
qp_ctx->res = kcalloc(q_depth, sizeof(struct sec_alg_res), GFP_KERNEL);
if (!qp_ctx->res)
goto err_free_req_list;
qp_ctx->res->depth = q_depth;
qp_ctx->c_in_pool = hisi_acc_create_sgl_pool(dev, q_depth, SEC_SGL_SGE_NR);
if (IS_ERR(qp_ctx->c_in_pool)) {
dev_err(dev, "fail to create sgl pool for input!\n");
goto err_free_res;
}
qp_ctx->c_out_pool = hisi_acc_create_sgl_pool(dev, q_depth, SEC_SGL_SGE_NR);
if (IS_ERR(qp_ctx->c_out_pool)) {
dev_err(dev, "fail to create sgl pool for output!\n");
goto err_free_c_in_pool;
}
ret = sec_alg_resource_alloc(ctx, qp_ctx);
if (ret)
goto err_free_c_out_pool;
return 0;
err_free_c_out_pool:
hisi_acc_free_sgl_pool(dev, qp_ctx->c_out_pool);
err_free_c_in_pool:
hisi_acc_free_sgl_pool(dev, qp_ctx->c_in_pool);
err_free_res:
kfree(qp_ctx->res);
err_free_req_list:
kfree(qp_ctx->req_list);
return ret;
}
static void sec_free_qp_ctx_resource(struct sec_ctx *ctx, struct sec_qp_ctx *qp_ctx)
{
struct device *dev = ctx->dev;
sec_alg_resource_free(ctx, qp_ctx);
hisi_acc_free_sgl_pool(dev, qp_ctx->c_out_pool);
hisi_acc_free_sgl_pool(dev, qp_ctx->c_in_pool);
kfree(qp_ctx->res);
kfree(qp_ctx->req_list);
}
static int sec_create_qp_ctx(struct hisi_qm *qm, struct sec_ctx *ctx, static int sec_create_qp_ctx(struct hisi_qm *qm, struct sec_ctx *ctx,
int qp_ctx_id, int alg_type) int qp_ctx_id, int alg_type)
{ {
struct device *dev = ctx->dev;
struct sec_qp_ctx *qp_ctx; struct sec_qp_ctx *qp_ctx;
struct hisi_qp *qp; struct hisi_qp *qp;
int ret = -ENOMEM; int ret;
qp_ctx = &ctx->qp_ctx[qp_ctx_id]; qp_ctx = &ctx->qp_ctx[qp_ctx_id];
qp = ctx->qps[qp_ctx_id]; qp = ctx->qps[qp_ctx_id];
@ -491,36 +559,18 @@ static int sec_create_qp_ctx(struct hisi_qm *qm, struct sec_ctx *ctx,
idr_init(&qp_ctx->req_idr); idr_init(&qp_ctx->req_idr);
INIT_LIST_HEAD(&qp_ctx->backlog); INIT_LIST_HEAD(&qp_ctx->backlog);
qp_ctx->c_in_pool = hisi_acc_create_sgl_pool(dev, QM_Q_DEPTH, ret = sec_alloc_qp_ctx_resource(qm, ctx, qp_ctx);
SEC_SGL_SGE_NR);
if (IS_ERR(qp_ctx->c_in_pool)) {
dev_err(dev, "fail to create sgl pool for input!\n");
goto err_destroy_idr;
}
qp_ctx->c_out_pool = hisi_acc_create_sgl_pool(dev, QM_Q_DEPTH,
SEC_SGL_SGE_NR);
if (IS_ERR(qp_ctx->c_out_pool)) {
dev_err(dev, "fail to create sgl pool for output!\n");
goto err_free_c_in_pool;
}
ret = sec_alg_resource_alloc(ctx, qp_ctx);
if (ret) if (ret)
goto err_free_c_out_pool; goto err_destroy_idr;
ret = hisi_qm_start_qp(qp, 0); ret = hisi_qm_start_qp(qp, 0);
if (ret < 0) if (ret < 0)
goto err_queue_free; goto err_resource_free;
return 0; return 0;
err_queue_free: err_resource_free:
sec_alg_resource_free(ctx, qp_ctx); sec_free_qp_ctx_resource(ctx, qp_ctx);
err_free_c_out_pool:
hisi_acc_free_sgl_pool(dev, qp_ctx->c_out_pool);
err_free_c_in_pool:
hisi_acc_free_sgl_pool(dev, qp_ctx->c_in_pool);
err_destroy_idr: err_destroy_idr:
idr_destroy(&qp_ctx->req_idr); idr_destroy(&qp_ctx->req_idr);
return ret; return ret;
@ -529,14 +579,8 @@ err_destroy_idr:
static void sec_release_qp_ctx(struct sec_ctx *ctx, static void sec_release_qp_ctx(struct sec_ctx *ctx,
struct sec_qp_ctx *qp_ctx) struct sec_qp_ctx *qp_ctx)
{ {
struct device *dev = ctx->dev;
hisi_qm_stop_qp(qp_ctx->qp); hisi_qm_stop_qp(qp_ctx->qp);
sec_alg_resource_free(ctx, qp_ctx); sec_free_qp_ctx_resource(ctx, qp_ctx);
hisi_acc_free_sgl_pool(dev, qp_ctx->c_out_pool);
hisi_acc_free_sgl_pool(dev, qp_ctx->c_in_pool);
idr_destroy(&qp_ctx->req_idr); idr_destroy(&qp_ctx->req_idr);
} }
@ -559,7 +603,7 @@ static int sec_ctx_base_init(struct sec_ctx *ctx)
ctx->pbuf_supported = ctx->sec->iommu_used; ctx->pbuf_supported = ctx->sec->iommu_used;
/* Half of queue depth is taken as fake requests limit in the queue. */ /* Half of queue depth is taken as fake requests limit in the queue. */
ctx->fake_req_limit = QM_Q_DEPTH >> 1; ctx->fake_req_limit = ctx->qps[0]->sq_depth >> 1;
ctx->qp_ctx = kcalloc(sec->ctx_q_num, sizeof(struct sec_qp_ctx), ctx->qp_ctx = kcalloc(sec->ctx_q_num, sizeof(struct sec_qp_ctx),
GFP_KERNEL); GFP_KERNEL);
if (!ctx->qp_ctx) { if (!ctx->qp_ctx) {
@ -1679,7 +1723,6 @@ static void sec_aead_callback(struct sec_ctx *c, struct sec_req *req, int err)
aead_req->out_mac, aead_req->out_mac,
authsize, a_req->cryptlen + authsize, a_req->cryptlen +
a_req->assoclen); a_req->assoclen);
if (unlikely(sz != authsize)) { if (unlikely(sz != authsize)) {
dev_err(c->dev, "copy out mac err!\n"); dev_err(c->dev, "copy out mac err!\n");
err = -EINVAL; err = -EINVAL;
@ -1966,7 +2009,6 @@ static int sec_aead_sha512_ctx_init(struct crypto_aead *tfm)
return sec_aead_ctx_init(tfm, "sha512"); return sec_aead_ctx_init(tfm, "sha512");
} }
static int sec_skcipher_cryptlen_ckeck(struct sec_ctx *ctx, static int sec_skcipher_cryptlen_ckeck(struct sec_ctx *ctx,
struct sec_req *sreq) struct sec_req *sreq)
{ {
@ -2126,67 +2168,80 @@ static int sec_skcipher_decrypt(struct skcipher_request *sk_req)
.min_keysize = sec_min_key_size,\ .min_keysize = sec_min_key_size,\
.max_keysize = sec_max_key_size,\ .max_keysize = sec_max_key_size,\
.ivsize = iv_size,\ .ivsize = iv_size,\
}, }
#define SEC_SKCIPHER_ALG(name, key_func, min_key_size, \ #define SEC_SKCIPHER_ALG(name, key_func, min_key_size, \
max_key_size, blk_size, iv_size) \ max_key_size, blk_size, iv_size) \
SEC_SKCIPHER_GEN_ALG(name, key_func, min_key_size, max_key_size, \ SEC_SKCIPHER_GEN_ALG(name, key_func, min_key_size, max_key_size, \
sec_skcipher_ctx_init, sec_skcipher_ctx_exit, blk_size, iv_size) sec_skcipher_ctx_init, sec_skcipher_ctx_exit, blk_size, iv_size)
static struct skcipher_alg sec_skciphers[] = { static struct sec_skcipher sec_skciphers[] = {
SEC_SKCIPHER_ALG("ecb(aes)", sec_setkey_aes_ecb, {
AES_MIN_KEY_SIZE, AES_MAX_KEY_SIZE, .alg_msk = BIT(0),
AES_BLOCK_SIZE, 0) .alg = SEC_SKCIPHER_ALG("ecb(aes)", sec_setkey_aes_ecb, AES_MIN_KEY_SIZE,
AES_MAX_KEY_SIZE, AES_BLOCK_SIZE, 0),
SEC_SKCIPHER_ALG("cbc(aes)", sec_setkey_aes_cbc, },
AES_MIN_KEY_SIZE, AES_MAX_KEY_SIZE, {
AES_BLOCK_SIZE, AES_BLOCK_SIZE) .alg_msk = BIT(1),
.alg = SEC_SKCIPHER_ALG("cbc(aes)", sec_setkey_aes_cbc, AES_MIN_KEY_SIZE,
SEC_SKCIPHER_ALG("xts(aes)", sec_setkey_aes_xts, AES_MAX_KEY_SIZE, AES_BLOCK_SIZE, AES_BLOCK_SIZE),
SEC_XTS_MIN_KEY_SIZE, SEC_XTS_MAX_KEY_SIZE, },
AES_BLOCK_SIZE, AES_BLOCK_SIZE) {
.alg_msk = BIT(2),
SEC_SKCIPHER_ALG("ecb(des3_ede)", sec_setkey_3des_ecb, .alg = SEC_SKCIPHER_ALG("ctr(aes)", sec_setkey_aes_ctr, AES_MIN_KEY_SIZE,
SEC_DES3_3KEY_SIZE, SEC_DES3_3KEY_SIZE, AES_MAX_KEY_SIZE, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE),
DES3_EDE_BLOCK_SIZE, 0) },
{
SEC_SKCIPHER_ALG("cbc(des3_ede)", sec_setkey_3des_cbc, .alg_msk = BIT(3),
SEC_DES3_3KEY_SIZE, SEC_DES3_3KEY_SIZE, .alg = SEC_SKCIPHER_ALG("xts(aes)", sec_setkey_aes_xts, SEC_XTS_MIN_KEY_SIZE,
DES3_EDE_BLOCK_SIZE, DES3_EDE_BLOCK_SIZE) SEC_XTS_MAX_KEY_SIZE, AES_BLOCK_SIZE, AES_BLOCK_SIZE),
},
SEC_SKCIPHER_ALG("xts(sm4)", sec_setkey_sm4_xts, {
SEC_XTS_MIN_KEY_SIZE, SEC_XTS_MIN_KEY_SIZE, .alg_msk = BIT(4),
AES_BLOCK_SIZE, AES_BLOCK_SIZE) .alg = SEC_SKCIPHER_ALG("ofb(aes)", sec_setkey_aes_ofb, AES_MIN_KEY_SIZE,
AES_MAX_KEY_SIZE, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE),
SEC_SKCIPHER_ALG("cbc(sm4)", sec_setkey_sm4_cbc, },
AES_MIN_KEY_SIZE, AES_MIN_KEY_SIZE, {
AES_BLOCK_SIZE, AES_BLOCK_SIZE) .alg_msk = BIT(5),
}; .alg = SEC_SKCIPHER_ALG("cfb(aes)", sec_setkey_aes_cfb, AES_MIN_KEY_SIZE,
AES_MAX_KEY_SIZE, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE),
static struct skcipher_alg sec_skciphers_v3[] = { },
SEC_SKCIPHER_ALG("ofb(aes)", sec_setkey_aes_ofb, {
AES_MIN_KEY_SIZE, AES_MAX_KEY_SIZE, .alg_msk = BIT(12),
SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE) .alg = SEC_SKCIPHER_ALG("cbc(sm4)", sec_setkey_sm4_cbc, AES_MIN_KEY_SIZE,
AES_MIN_KEY_SIZE, AES_BLOCK_SIZE, AES_BLOCK_SIZE),
SEC_SKCIPHER_ALG("cfb(aes)", sec_setkey_aes_cfb, },
AES_MIN_KEY_SIZE, AES_MAX_KEY_SIZE, {
SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE) .alg_msk = BIT(13),
.alg = SEC_SKCIPHER_ALG("ctr(sm4)", sec_setkey_sm4_ctr, AES_MIN_KEY_SIZE,
SEC_SKCIPHER_ALG("ctr(aes)", sec_setkey_aes_ctr, AES_MIN_KEY_SIZE, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE),
AES_MIN_KEY_SIZE, AES_MAX_KEY_SIZE, },
SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE) {
.alg_msk = BIT(14),
SEC_SKCIPHER_ALG("ofb(sm4)", sec_setkey_sm4_ofb, .alg = SEC_SKCIPHER_ALG("xts(sm4)", sec_setkey_sm4_xts, SEC_XTS_MIN_KEY_SIZE,
AES_MIN_KEY_SIZE, AES_MIN_KEY_SIZE, SEC_XTS_MIN_KEY_SIZE, AES_BLOCK_SIZE, AES_BLOCK_SIZE),
SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE) },
{
SEC_SKCIPHER_ALG("cfb(sm4)", sec_setkey_sm4_cfb, .alg_msk = BIT(15),
AES_MIN_KEY_SIZE, AES_MIN_KEY_SIZE, .alg = SEC_SKCIPHER_ALG("ofb(sm4)", sec_setkey_sm4_ofb, AES_MIN_KEY_SIZE,
SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE) AES_MIN_KEY_SIZE, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE),
},
SEC_SKCIPHER_ALG("ctr(sm4)", sec_setkey_sm4_ctr, {
AES_MIN_KEY_SIZE, AES_MIN_KEY_SIZE, .alg_msk = BIT(16),
SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE) .alg = SEC_SKCIPHER_ALG("cfb(sm4)", sec_setkey_sm4_cfb, AES_MIN_KEY_SIZE,
AES_MIN_KEY_SIZE, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE),
},
{
.alg_msk = BIT(23),
.alg = SEC_SKCIPHER_ALG("ecb(des3_ede)", sec_setkey_3des_ecb, SEC_DES3_3KEY_SIZE,
SEC_DES3_3KEY_SIZE, DES3_EDE_BLOCK_SIZE, 0),
},
{
.alg_msk = BIT(24),
.alg = SEC_SKCIPHER_ALG("cbc(des3_ede)", sec_setkey_3des_cbc, SEC_DES3_3KEY_SIZE,
SEC_DES3_3KEY_SIZE, DES3_EDE_BLOCK_SIZE,
DES3_EDE_BLOCK_SIZE),
},
}; };
static int aead_iv_demension_check(struct aead_request *aead_req)
@ -2380,90 +2435,135 @@ static int sec_aead_decrypt(struct aead_request *a_req)
.maxauthsize = max_authsize,\
}
static struct aead_alg sec_aeads[] = { static struct sec_aead sec_aeads[] = {
SEC_AEAD_ALG("authenc(hmac(sha1),cbc(aes))", {
sec_setkey_aes_cbc_sha1, sec_aead_sha1_ctx_init, .alg_msk = BIT(6),
sec_aead_ctx_exit, AES_BLOCK_SIZE, .alg = SEC_AEAD_ALG("ccm(aes)", sec_setkey_aes_ccm, sec_aead_xcm_ctx_init,
AES_BLOCK_SIZE, SHA1_DIGEST_SIZE), sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE,
AES_BLOCK_SIZE),
SEC_AEAD_ALG("authenc(hmac(sha256),cbc(aes))", },
sec_setkey_aes_cbc_sha256, sec_aead_sha256_ctx_init, {
sec_aead_ctx_exit, AES_BLOCK_SIZE, .alg_msk = BIT(7),
AES_BLOCK_SIZE, SHA256_DIGEST_SIZE), .alg = SEC_AEAD_ALG("gcm(aes)", sec_setkey_aes_gcm, sec_aead_xcm_ctx_init,
sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, SEC_AIV_SIZE,
SEC_AEAD_ALG("authenc(hmac(sha512),cbc(aes))", AES_BLOCK_SIZE),
sec_setkey_aes_cbc_sha512, sec_aead_sha512_ctx_init, },
sec_aead_ctx_exit, AES_BLOCK_SIZE, {
AES_BLOCK_SIZE, SHA512_DIGEST_SIZE), .alg_msk = BIT(17),
.alg = SEC_AEAD_ALG("ccm(sm4)", sec_setkey_sm4_ccm, sec_aead_xcm_ctx_init,
SEC_AEAD_ALG("ccm(aes)", sec_setkey_aes_ccm, sec_aead_xcm_ctx_init, sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE,
sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE),
AES_BLOCK_SIZE, AES_BLOCK_SIZE), },
{
SEC_AEAD_ALG("gcm(aes)", sec_setkey_aes_gcm, sec_aead_xcm_ctx_init, .alg_msk = BIT(18),
sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, .alg = SEC_AEAD_ALG("gcm(sm4)", sec_setkey_sm4_gcm, sec_aead_xcm_ctx_init,
SEC_AIV_SIZE, AES_BLOCK_SIZE) sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, SEC_AIV_SIZE,
AES_BLOCK_SIZE),
},
{
.alg_msk = BIT(43),
.alg = SEC_AEAD_ALG("authenc(hmac(sha1),cbc(aes))", sec_setkey_aes_cbc_sha1,
sec_aead_sha1_ctx_init, sec_aead_ctx_exit, AES_BLOCK_SIZE,
AES_BLOCK_SIZE, SHA1_DIGEST_SIZE),
},
{
.alg_msk = BIT(44),
.alg = SEC_AEAD_ALG("authenc(hmac(sha256),cbc(aes))", sec_setkey_aes_cbc_sha256,
sec_aead_sha256_ctx_init, sec_aead_ctx_exit, AES_BLOCK_SIZE,
AES_BLOCK_SIZE, SHA256_DIGEST_SIZE),
},
{
.alg_msk = BIT(45),
.alg = SEC_AEAD_ALG("authenc(hmac(sha512),cbc(aes))", sec_setkey_aes_cbc_sha512,
sec_aead_sha512_ctx_init, sec_aead_ctx_exit, AES_BLOCK_SIZE,
AES_BLOCK_SIZE, SHA512_DIGEST_SIZE),
},
}; };
static struct aead_alg sec_aeads_v3[] = { static void sec_unregister_skcipher(u64 alg_mask, int end)
SEC_AEAD_ALG("ccm(sm4)", sec_setkey_sm4_ccm, sec_aead_xcm_ctx_init, {
sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, int i;
AES_BLOCK_SIZE, AES_BLOCK_SIZE),
SEC_AEAD_ALG("gcm(sm4)", sec_setkey_sm4_gcm, sec_aead_xcm_ctx_init, for (i = 0; i < end; i++)
sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, if (sec_skciphers[i].alg_msk & alg_mask)
SEC_AIV_SIZE, AES_BLOCK_SIZE) crypto_unregister_skcipher(&sec_skciphers[i].alg);
}; }
static int sec_register_skcipher(u64 alg_mask)
{
int i, ret, count;
count = ARRAY_SIZE(sec_skciphers);
for (i = 0; i < count; i++) {
if (!(sec_skciphers[i].alg_msk & alg_mask))
continue;
ret = crypto_register_skcipher(&sec_skciphers[i].alg);
if (ret)
goto err;
}
return 0;
err:
sec_unregister_skcipher(alg_mask, i);
return ret;
}
static void sec_unregister_aead(u64 alg_mask, int end)
{
int i;
for (i = 0; i < end; i++)
if (sec_aeads[i].alg_msk & alg_mask)
crypto_unregister_aead(&sec_aeads[i].alg);
}
static int sec_register_aead(u64 alg_mask)
{
int i, ret, count;
count = ARRAY_SIZE(sec_aeads);
for (i = 0; i < count; i++) {
if (!(sec_aeads[i].alg_msk & alg_mask))
continue;
ret = crypto_register_aead(&sec_aeads[i].alg);
if (ret)
goto err;
}
return 0;
err:
sec_unregister_aead(alg_mask, i);
return ret;
}
int sec_register_to_crypto(struct hisi_qm *qm)
{
u64 alg_mask = sec_get_alg_bitmap(qm, SEC_DRV_ALG_BITMAP_HIGH, SEC_DRV_ALG_BITMAP_LOW);
int ret;
/* To avoid repeat register */ ret = sec_register_skcipher(alg_mask);
ret = crypto_register_skciphers(sec_skciphers,
ARRAY_SIZE(sec_skciphers));
if (ret)
return ret;
if (qm->ver > QM_HW_V2) { ret = sec_register_aead(alg_mask);
ret = crypto_register_skciphers(sec_skciphers_v3,
ARRAY_SIZE(sec_skciphers_v3));
if (ret)
goto reg_skcipher_fail;
}
ret = crypto_register_aeads(sec_aeads, ARRAY_SIZE(sec_aeads));
if (ret)
goto reg_aead_fail; sec_unregister_skcipher(alg_mask, ARRAY_SIZE(sec_skciphers));
if (qm->ver > QM_HW_V2) {
ret = crypto_register_aeads(sec_aeads_v3, ARRAY_SIZE(sec_aeads_v3));
if (ret)
goto reg_aead_v3_fail;
}
return ret;
reg_aead_v3_fail:
crypto_unregister_aeads(sec_aeads, ARRAY_SIZE(sec_aeads));
reg_aead_fail:
if (qm->ver > QM_HW_V2)
crypto_unregister_skciphers(sec_skciphers_v3,
ARRAY_SIZE(sec_skciphers_v3));
reg_skcipher_fail:
crypto_unregister_skciphers(sec_skciphers,
ARRAY_SIZE(sec_skciphers));
return ret;
}
void sec_unregister_from_crypto(struct hisi_qm *qm)
{
if (qm->ver > QM_HW_V2) u64 alg_mask = sec_get_alg_bitmap(qm, SEC_DRV_ALG_BITMAP_HIGH, SEC_DRV_ALG_BITMAP_LOW);
crypto_unregister_aeads(sec_aeads_v3,
ARRAY_SIZE(sec_aeads_v3));
crypto_unregister_aeads(sec_aeads, ARRAY_SIZE(sec_aeads));
if (qm->ver > QM_HW_V2) sec_unregister_aead(alg_mask, ARRAY_SIZE(sec_aeads));
crypto_unregister_skciphers(sec_skciphers_v3, sec_unregister_skcipher(alg_mask, ARRAY_SIZE(sec_skciphers));
ARRAY_SIZE(sec_skciphers_v3));
crypto_unregister_skciphers(sec_skciphers,
ARRAY_SIZE(sec_skciphers));
}
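
The hisi_sec2 change above replaces the flat sec_skciphers/sec_skciphers_v3 arrays with entries that each carry an alg_msk, so registration only happens for algorithms present in the device's capability bitmap, and a mid-loop failure is rolled back by index. Below is a minimal standalone sketch of that pattern; the stub register/unregister calls and the example mask are illustrative and not part of the commit.

#include <stdint.h>
#include <stdio.h>

struct alg_entry {
	uint64_t alg_msk;          /* capability bit(s) this algorithm needs */
	const char *name;          /* stand-in for the real skcipher_alg/aead_alg */
};

static struct alg_entry algs[] = {
	{ 1ULL << 0,  "ecb(aes)" },
	{ 1ULL << 1,  "cbc(aes)" },
	{ 1ULL << 23, "ecb(des3_ede)" },
};

/* Stubs standing in for crypto_register_skcipher()/crypto_unregister_skcipher(). */
static int register_alg(const char *name)    { printf("register %s\n", name); return 0; }
static void unregister_alg(const char *name) { printf("unregister %s\n", name); }

/* Unregister entries [0, end) whose mask bits are present, mirroring sec_unregister_skcipher(). */
static void unregister_algs(uint64_t alg_mask, int end)
{
	int i;

	for (i = 0; i < end; i++)
		if (algs[i].alg_msk & alg_mask)
			unregister_alg(algs[i].name);
}

/* Register only the algorithms the hardware advertises; roll back on failure. */
static int register_algs(uint64_t alg_mask)
{
	int i, ret;

	for (i = 0; i < (int)(sizeof(algs) / sizeof(algs[0])); i++) {
		if (!(algs[i].alg_msk & alg_mask))
			continue;
		ret = register_alg(algs[i].name);
		if (ret) {
			unregister_algs(alg_mask, i);
			return ret;
		}
	}
	return 0;
}

int main(void)
{
	/* Example bitmap: only bits 0 and 1 set, so ecb(aes) and cbc(aes) register. */
	return register_algs(0x3);
}

Passing the failing index (rather than the array size) to the unregister helper keeps the rollback limited to entries that were actually registered, which is what sec_register_skcipher() and sec_register_aead() do above.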


@ -27,7 +27,6 @@
#define SEC_BD_ERR_CHK_EN3 0xffffbfff #define SEC_BD_ERR_CHK_EN3 0xffffbfff
#define SEC_SQE_SIZE 128 #define SEC_SQE_SIZE 128
#define SEC_SQ_SIZE (SEC_SQE_SIZE * QM_Q_DEPTH)
#define SEC_PF_DEF_Q_NUM 256 #define SEC_PF_DEF_Q_NUM 256
#define SEC_PF_DEF_Q_BASE 0 #define SEC_PF_DEF_Q_BASE 0
#define SEC_CTX_Q_NUM_DEF 2 #define SEC_CTX_Q_NUM_DEF 2
@ -42,16 +41,11 @@
#define SEC_ECC_NUM 16 #define SEC_ECC_NUM 16
#define SEC_ECC_MASH 0xFF #define SEC_ECC_MASH 0xFF
#define SEC_CORE_INT_DISABLE 0x0 #define SEC_CORE_INT_DISABLE 0x0
#define SEC_CORE_INT_ENABLE 0x7c1ff
#define SEC_CORE_INT_CLEAR 0x7c1ff
#define SEC_SAA_ENABLE 0x17f
#define SEC_RAS_CE_REG 0x301050 #define SEC_RAS_CE_REG 0x301050
#define SEC_RAS_FE_REG 0x301054 #define SEC_RAS_FE_REG 0x301054
#define SEC_RAS_NFE_REG 0x301058 #define SEC_RAS_NFE_REG 0x301058
#define SEC_RAS_CE_ENB_MSK 0x88
#define SEC_RAS_FE_ENB_MSK 0x0 #define SEC_RAS_FE_ENB_MSK 0x0
#define SEC_RAS_NFE_ENB_MSK 0x7c177
#define SEC_OOO_SHUTDOWN_SEL 0x301014 #define SEC_OOO_SHUTDOWN_SEL 0x301014
#define SEC_RAS_DISABLE 0x0 #define SEC_RAS_DISABLE 0x0
#define SEC_MEM_START_INIT_REG 0x301100 #define SEC_MEM_START_INIT_REG 0x301100
@ -119,6 +113,16 @@
#define SEC_DFX_COMMON1_LEN 0x45 #define SEC_DFX_COMMON1_LEN 0x45
#define SEC_DFX_COMMON2_LEN 0xBA #define SEC_DFX_COMMON2_LEN 0xBA
#define SEC_ALG_BITMAP_SHIFT 32
#define SEC_CIPHER_BITMAP (GENMASK_ULL(5, 0) | GENMASK_ULL(16, 12) | \
GENMASK(24, 21))
#define SEC_DIGEST_BITMAP (GENMASK_ULL(11, 8) | GENMASK_ULL(20, 19) | \
GENMASK_ULL(42, 25))
#define SEC_AEAD_BITMAP (GENMASK_ULL(7, 6) | GENMASK_ULL(18, 17) | \
GENMASK_ULL(45, 43))
#define SEC_DEV_ALG_MAX_LEN 256
struct sec_hw_error { struct sec_hw_error {
u32 int_msk; u32 int_msk;
const char *msg; const char *msg;
@ -129,6 +133,11 @@ struct sec_dfx_item {
u32 offset; u32 offset;
}; };
struct sec_dev_alg {
u64 alg_msk;
const char *algs;
};
static const char sec_name[] = "hisi_sec2"; static const char sec_name[] = "hisi_sec2";
static struct dentry *sec_debugfs_root; static struct dentry *sec_debugfs_root;
@ -137,6 +146,46 @@ static struct hisi_qm_list sec_devices = {
.unregister_from_crypto = sec_unregister_from_crypto, .unregister_from_crypto = sec_unregister_from_crypto,
}; };
static const struct hisi_qm_cap_info sec_basic_info[] = {
{SEC_QM_NFE_MASK_CAP, 0x3124, 0, GENMASK(31, 0), 0x0, 0x1C77, 0x7C77},
{SEC_QM_RESET_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0xC77, 0x6C77},
{SEC_QM_OOO_SHUTDOWN_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0x4, 0x6C77},
{SEC_QM_CE_MASK_CAP, 0x312C, 0, GENMASK(31, 0), 0x0, 0x8, 0x8},
{SEC_NFE_MASK_CAP, 0x3130, 0, GENMASK(31, 0), 0x0, 0x177, 0x60177},
{SEC_RESET_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x177, 0x177},
{SEC_OOO_SHUTDOWN_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x4, 0x177},
{SEC_CE_MASK_CAP, 0x3138, 0, GENMASK(31, 0), 0x0, 0x88, 0xC088},
{SEC_CLUSTER_NUM_CAP, 0x313c, 20, GENMASK(3, 0), 0x1, 0x1, 0x1},
{SEC_CORE_TYPE_NUM_CAP, 0x313c, 16, GENMASK(3, 0), 0x1, 0x1, 0x1},
{SEC_CORE_NUM_CAP, 0x313c, 8, GENMASK(7, 0), 0x4, 0x4, 0x4},
{SEC_CORES_PER_CLUSTER_NUM_CAP, 0x313c, 0, GENMASK(7, 0), 0x4, 0x4, 0x4},
{SEC_CORE_ENABLE_BITMAP, 0x3140, 32, GENMASK(31, 0), 0x17F, 0x17F, 0xF},
{SEC_DRV_ALG_BITMAP_LOW, 0x3144, 0, GENMASK(31, 0), 0x18050CB, 0x18050CB, 0x187F0FF},
{SEC_DRV_ALG_BITMAP_HIGH, 0x3148, 0, GENMASK(31, 0), 0x395C, 0x395C, 0x395C},
{SEC_DEV_ALG_BITMAP_LOW, 0x314c, 0, GENMASK(31, 0), 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF},
{SEC_DEV_ALG_BITMAP_HIGH, 0x3150, 0, GENMASK(31, 0), 0x3FFF, 0x3FFF, 0x3FFF},
{SEC_CORE1_ALG_BITMAP_LOW, 0x3154, 0, GENMASK(31, 0), 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF},
{SEC_CORE1_ALG_BITMAP_HIGH, 0x3158, 0, GENMASK(31, 0), 0x3FFF, 0x3FFF, 0x3FFF},
{SEC_CORE2_ALG_BITMAP_LOW, 0x315c, 0, GENMASK(31, 0), 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF},
{SEC_CORE2_ALG_BITMAP_HIGH, 0x3160, 0, GENMASK(31, 0), 0x3FFF, 0x3FFF, 0x3FFF},
{SEC_CORE3_ALG_BITMAP_LOW, 0x3164, 0, GENMASK(31, 0), 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF},
{SEC_CORE3_ALG_BITMAP_HIGH, 0x3168, 0, GENMASK(31, 0), 0x3FFF, 0x3FFF, 0x3FFF},
{SEC_CORE4_ALG_BITMAP_LOW, 0x316c, 0, GENMASK(31, 0), 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF},
{SEC_CORE4_ALG_BITMAP_HIGH, 0x3170, 0, GENMASK(31, 0), 0x3FFF, 0x3FFF, 0x3FFF},
};
static const struct sec_dev_alg sec_dev_algs[] = { {
.alg_msk = SEC_CIPHER_BITMAP,
.algs = "cipher\n",
}, {
.alg_msk = SEC_DIGEST_BITMAP,
.algs = "digest\n",
}, {
.alg_msk = SEC_AEAD_BITMAP,
.algs = "aead\n",
},
};
static const struct sec_hw_error sec_hw_errors[] = { static const struct sec_hw_error sec_hw_errors[] = {
{ {
.int_msk = BIT(0), .int_msk = BIT(0),
@ -339,6 +388,16 @@ struct hisi_qp **sec_create_qps(void)
return NULL; return NULL;
} }
u64 sec_get_alg_bitmap(struct hisi_qm *qm, u32 high, u32 low)
{
u32 cap_val_h, cap_val_l;
cap_val_h = hisi_qm_get_hw_info(qm, sec_basic_info, high, qm->cap_ver);
cap_val_l = hisi_qm_get_hw_info(qm, sec_basic_info, low, qm->cap_ver);
return ((u64)cap_val_h << SEC_ALG_BITMAP_SHIFT) | (u64)cap_val_l;
}
static const struct kernel_param_ops sec_uacce_mode_ops = { static const struct kernel_param_ops sec_uacce_mode_ops = {
.set = uacce_mode_set, .set = uacce_mode_set,
.get = param_get_int, .get = param_get_int,
@ -415,7 +474,7 @@ static void sec_open_sva_prefetch(struct hisi_qm *qm)
u32 val; u32 val;
int ret; int ret;
if (qm->ver < QM_HW_V3) if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps))
return; return;
/* Enable prefetch */ /* Enable prefetch */
@ -435,7 +494,7 @@ static void sec_close_sva_prefetch(struct hisi_qm *qm)
u32 val; u32 val;
int ret; int ret;
if (qm->ver < QM_HW_V3) if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps))
return; return;
val = readl_relaxed(qm->io_base + SEC_PREFETCH_CFG); val = readl_relaxed(qm->io_base + SEC_PREFETCH_CFG);
@ -506,7 +565,8 @@ static int sec_engine_init(struct hisi_qm *qm)
writel(SEC_SINGLE_PORT_MAX_TRANS, writel(SEC_SINGLE_PORT_MAX_TRANS,
qm->io_base + AM_CFG_SINGLE_PORT_MAX_TRANS); qm->io_base + AM_CFG_SINGLE_PORT_MAX_TRANS);
writel(SEC_SAA_ENABLE, qm->io_base + SEC_SAA_EN_REG); reg = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_CORE_ENABLE_BITMAP, qm->cap_ver);
writel(reg, qm->io_base + SEC_SAA_EN_REG);
if (qm->ver < QM_HW_V3) { if (qm->ver < QM_HW_V3) {
/* HW V2 enable sm4 extra mode, as ctr/ecb */ /* HW V2 enable sm4 extra mode, as ctr/ecb */
@ -576,7 +636,8 @@ static void sec_master_ooo_ctrl(struct hisi_qm *qm, bool enable)
val1 = readl(qm->io_base + SEC_CONTROL_REG); val1 = readl(qm->io_base + SEC_CONTROL_REG);
if (enable) { if (enable) {
val1 |= SEC_AXI_SHUTDOWN_ENABLE; val1 |= SEC_AXI_SHUTDOWN_ENABLE;
val2 = SEC_RAS_NFE_ENB_MSK; val2 = hisi_qm_get_hw_info(qm, sec_basic_info,
SEC_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
} else { } else {
val1 &= SEC_AXI_SHUTDOWN_DISABLE; val1 &= SEC_AXI_SHUTDOWN_DISABLE;
val2 = 0x0; val2 = 0x0;
@ -590,25 +651,30 @@ static void sec_master_ooo_ctrl(struct hisi_qm *qm, bool enable)
static void sec_hw_error_enable(struct hisi_qm *qm) static void sec_hw_error_enable(struct hisi_qm *qm)
{ {
u32 ce, nfe;
if (qm->ver == QM_HW_V1) {
writel(SEC_CORE_INT_DISABLE, qm->io_base + SEC_CORE_INT_MASK);
pci_info(qm->pdev, "V1 not support hw error handle\n");
return;
}
ce = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_CE_MASK_CAP, qm->cap_ver);
nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver);
/* clear SEC hw error source if having */ /* clear SEC hw error source if having */
writel(SEC_CORE_INT_CLEAR, qm->io_base + SEC_CORE_INT_SOURCE); writel(ce | nfe | SEC_RAS_FE_ENB_MSK, qm->io_base + SEC_CORE_INT_SOURCE);
/* enable RAS int */ /* enable RAS int */
writel(SEC_RAS_CE_ENB_MSK, qm->io_base + SEC_RAS_CE_REG); writel(ce, qm->io_base + SEC_RAS_CE_REG);
writel(SEC_RAS_FE_ENB_MSK, qm->io_base + SEC_RAS_FE_REG); writel(SEC_RAS_FE_ENB_MSK, qm->io_base + SEC_RAS_FE_REG);
writel(SEC_RAS_NFE_ENB_MSK, qm->io_base + SEC_RAS_NFE_REG); writel(nfe, qm->io_base + SEC_RAS_NFE_REG);
/* enable SEC block master OOO when nfe occurs on Kunpeng930 */ /* enable SEC block master OOO when nfe occurs on Kunpeng930 */
sec_master_ooo_ctrl(qm, true); sec_master_ooo_ctrl(qm, true);
/* enable SEC hw error interrupts */ /* enable SEC hw error interrupts */
writel(SEC_CORE_INT_ENABLE, qm->io_base + SEC_CORE_INT_MASK); writel(ce | nfe | SEC_RAS_FE_ENB_MSK, qm->io_base + SEC_CORE_INT_MASK);
} }
static void sec_hw_error_disable(struct hisi_qm *qm) static void sec_hw_error_disable(struct hisi_qm *qm)
@ -939,7 +1005,11 @@ static u32 sec_get_hw_err_status(struct hisi_qm *qm)
static void sec_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
{
u32 nfe;
writel(err_sts, qm->io_base + SEC_CORE_INT_SOURCE);
nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver);
writel(nfe, qm->io_base + SEC_RAS_NFE_REG);
} }
static void sec_open_axi_master_ooo(struct hisi_qm *qm) static void sec_open_axi_master_ooo(struct hisi_qm *qm)
@ -955,14 +1025,20 @@ static void sec_err_info_init(struct hisi_qm *qm)
{ {
struct hisi_qm_err_info *err_info = &qm->err_info; struct hisi_qm_err_info *err_info = &qm->err_info;
err_info->ce = QM_BASE_CE; err_info->fe = SEC_RAS_FE_ENB_MSK;
err_info->fe = 0; err_info->ce = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_QM_CE_MASK_CAP, qm->cap_ver);
err_info->nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_QM_NFE_MASK_CAP, qm->cap_ver);
err_info->ecc_2bits_mask = SEC_CORE_INT_STATUS_M_ECC; err_info->ecc_2bits_mask = SEC_CORE_INT_STATUS_M_ECC;
err_info->dev_ce_mask = SEC_RAS_CE_ENB_MSK; err_info->qm_shutdown_mask = hisi_qm_get_hw_info(qm, sec_basic_info,
SEC_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
err_info->dev_shutdown_mask = hisi_qm_get_hw_info(qm, sec_basic_info,
SEC_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
err_info->qm_reset_mask = hisi_qm_get_hw_info(qm, sec_basic_info,
SEC_QM_RESET_MASK_CAP, qm->cap_ver);
err_info->dev_reset_mask = hisi_qm_get_hw_info(qm, sec_basic_info,
SEC_RESET_MASK_CAP, qm->cap_ver);
err_info->msi_wr_port = BIT(0); err_info->msi_wr_port = BIT(0);
err_info->acpi_rst = "SRST"; err_info->acpi_rst = "SRST";
err_info->nfe = QM_BASE_NFE | QM_ACC_DO_TASK_TIMEOUT |
QM_ACC_WB_NOT_READY_TIMEOUT;
} }
static const struct hisi_qm_err_ini sec_err_ini = { static const struct hisi_qm_err_ini sec_err_ini = {
@ -1001,11 +1077,41 @@ static int sec_pf_probe_init(struct sec_dev *sec)
return ret; return ret;
} }
static int sec_set_qm_algs(struct hisi_qm *qm)
{
struct device *dev = &qm->pdev->dev;
char *algs, *ptr;
u64 alg_mask;
int i;
if (!qm->use_sva)
return 0;
algs = devm_kzalloc(dev, SEC_DEV_ALG_MAX_LEN * sizeof(char), GFP_KERNEL);
if (!algs)
return -ENOMEM;
alg_mask = sec_get_alg_bitmap(qm, SEC_DEV_ALG_BITMAP_HIGH, SEC_DEV_ALG_BITMAP_LOW);
for (i = 0; i < ARRAY_SIZE(sec_dev_algs); i++)
if (alg_mask & sec_dev_algs[i].alg_msk)
strcat(algs, sec_dev_algs[i].algs);
ptr = strrchr(algs, '\n');
if (ptr)
*ptr = '\0';
qm->uacce->algs = algs;
return 0;
}
static int sec_qm_init(struct hisi_qm *qm, struct pci_dev *pdev) static int sec_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
{ {
int ret;
qm->pdev = pdev; qm->pdev = pdev;
qm->ver = pdev->revision; qm->ver = pdev->revision;
qm->algs = "cipher\ndigest\naead";
qm->mode = uacce_mode; qm->mode = uacce_mode;
qm->sqe_size = SEC_SQE_SIZE; qm->sqe_size = SEC_SQE_SIZE;
qm->dev_name = sec_name; qm->dev_name = sec_name;
@ -1028,7 +1134,19 @@ static int sec_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
qm->qp_num = SEC_QUEUE_NUM_V1 - SEC_PF_DEF_Q_NUM; qm->qp_num = SEC_QUEUE_NUM_V1 - SEC_PF_DEF_Q_NUM;
} }
return hisi_qm_init(qm); ret = hisi_qm_init(qm);
if (ret) {
pci_err(qm->pdev, "Failed to init sec qm configures!\n");
return ret;
}
ret = sec_set_qm_algs(qm);
if (ret) {
pci_err(qm->pdev, "Failed to set sec algs!\n");
hisi_qm_uninit(qm);
}
return ret;
}
static void sec_qm_uninit(struct hisi_qm *qm)
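
Two helpers carry most of the new sec_main.c logic: sec_get_alg_bitmap() composes a 64-bit algorithm bitmap from a high and a low 32-bit capability word, and sec_set_qm_algs() concatenates the names of the supported algorithm classes into the uacce algs string, trimming the final newline. A rough standalone sketch of both steps follows; the sample masks and register values are invented for illustration.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ALG_BITMAP_SHIFT 32

struct dev_alg {
	uint64_t alg_msk;
	const char *algs;
};

static const struct dev_alg dev_algs[] = {
	{ 0x3F,    "cipher\n" },   /* example masks only */
	{ 0xFC0,   "digest\n" },
	{ 0xC0000, "aead\n" },
};

/* Mirror of sec_get_alg_bitmap(): two 32-bit capability words form one 64-bit mask. */
static uint64_t get_alg_bitmap(uint32_t cap_high, uint32_t cap_low)
{
	return ((uint64_t)cap_high << ALG_BITMAP_SHIFT) | (uint64_t)cap_low;
}

int main(void)
{
	char algs[256] = "";
	uint64_t alg_mask = get_alg_bitmap(0x0, 0x3F | 0xC0000); /* made-up values */
	char *ptr;
	size_t i;

	for (i = 0; i < sizeof(dev_algs) / sizeof(dev_algs[0]); i++)
		if (alg_mask & dev_algs[i].alg_msk)
			strcat(algs, dev_algs[i].algs);

	/* Drop the trailing newline, as sec_set_qm_algs() does before the
	 * string is handed to qm->uacce->algs. */
	ptr = strrchr(algs, '\n');
	if (ptr)
		*ptr = '\0';

	printf("algs: %s\n", algs);
	return 0;
}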


@ -81,7 +81,8 @@ struct hisi_zip_sqe {
u32 rsvd1[4]; u32 rsvd1[4];
}; };
int zip_create_qps(struct hisi_qp **qps, int ctx_num, int node); int zip_create_qps(struct hisi_qp **qps, int qp_num, int node);
int hisi_zip_register_to_crypto(struct hisi_qm *qm); int hisi_zip_register_to_crypto(struct hisi_qm *qm);
void hisi_zip_unregister_from_crypto(struct hisi_qm *qm); void hisi_zip_unregister_from_crypto(struct hisi_qm *qm);
bool hisi_zip_alg_support(struct hisi_qm *qm, u32 alg);
#endif #endif


@ -39,6 +39,9 @@
#define HZIP_ALG_PRIORITY 300 #define HZIP_ALG_PRIORITY 300
#define HZIP_SGL_SGE_NR 10 #define HZIP_SGL_SGE_NR 10
#define HZIP_ALG_ZLIB GENMASK(1, 0)
#define HZIP_ALG_GZIP GENMASK(3, 2)
static const u8 zlib_head[HZIP_ZLIB_HEAD_SIZE] = {0x78, 0x9c}; static const u8 zlib_head[HZIP_ZLIB_HEAD_SIZE] = {0x78, 0x9c};
static const u8 gzip_head[HZIP_GZIP_HEAD_SIZE] = { static const u8 gzip_head[HZIP_GZIP_HEAD_SIZE] = {
0x1f, 0x8b, 0x08, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x03 0x1f, 0x8b, 0x08, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x03
@ -123,19 +126,19 @@ static int sgl_sge_nr_set(const char *val, const struct kernel_param *kp)
if (ret || n == 0 || n > HISI_ACC_SGL_SGE_NR_MAX) if (ret || n == 0 || n > HISI_ACC_SGL_SGE_NR_MAX)
return -EINVAL; return -EINVAL;
return param_set_int(val, kp); return param_set_ushort(val, kp);
} }
static const struct kernel_param_ops sgl_sge_nr_ops = { static const struct kernel_param_ops sgl_sge_nr_ops = {
.set = sgl_sge_nr_set, .set = sgl_sge_nr_set,
.get = param_get_int, .get = param_get_ushort,
}; };
static u16 sgl_sge_nr = HZIP_SGL_SGE_NR; static u16 sgl_sge_nr = HZIP_SGL_SGE_NR;
module_param_cb(sgl_sge_nr, &sgl_sge_nr_ops, &sgl_sge_nr, 0444); module_param_cb(sgl_sge_nr, &sgl_sge_nr_ops, &sgl_sge_nr, 0444);
MODULE_PARM_DESC(sgl_sge_nr, "Number of sge in sgl(1-255)"); MODULE_PARM_DESC(sgl_sge_nr, "Number of sge in sgl(1-255)");
static u16 get_extra_field_size(const u8 *start) static u32 get_extra_field_size(const u8 *start)
{ {
return *((u16 *)start) + GZIP_HEAD_FEXTRA_XLEN; return *((u16 *)start) + GZIP_HEAD_FEXTRA_XLEN;
} }
@ -167,7 +170,7 @@ static u32 __get_gzip_head_size(const u8 *src)
return size; return size;
} }
static size_t __maybe_unused get_gzip_head_size(struct scatterlist *sgl) static u32 __maybe_unused get_gzip_head_size(struct scatterlist *sgl)
{ {
char buf[HZIP_GZIP_HEAD_BUF]; char buf[HZIP_GZIP_HEAD_BUF];
@ -183,7 +186,7 @@ static int add_comp_head(struct scatterlist *dst, u8 req_type)
int ret; int ret;
ret = sg_copy_from_buffer(dst, sg_nents(dst), head, head_size); ret = sg_copy_from_buffer(dst, sg_nents(dst), head, head_size);
if (ret != head_size) { if (unlikely(ret != head_size)) {
pr_err("the head size of buffer is wrong (%d)!\n", ret); pr_err("the head size of buffer is wrong (%d)!\n", ret);
return -ENOMEM; return -ENOMEM;
} }
@ -193,11 +196,11 @@ static int add_comp_head(struct scatterlist *dst, u8 req_type)
static int get_comp_head_size(struct acomp_req *acomp_req, u8 req_type) static int get_comp_head_size(struct acomp_req *acomp_req, u8 req_type)
{ {
if (!acomp_req->src || !acomp_req->slen) if (unlikely(!acomp_req->src || !acomp_req->slen))
return -EINVAL; return -EINVAL;
if (req_type == HZIP_ALG_TYPE_GZIP && if (unlikely(req_type == HZIP_ALG_TYPE_GZIP &&
acomp_req->slen < GZIP_HEAD_FEXTRA_SHIFT) acomp_req->slen < GZIP_HEAD_FEXTRA_SHIFT))
return -EINVAL; return -EINVAL;
switch (req_type) { switch (req_type) {
@ -230,6 +233,8 @@ static struct hisi_zip_req *hisi_zip_create_req(struct acomp_req *req,
} }
set_bit(req_id, req_q->req_bitmap); set_bit(req_id, req_q->req_bitmap);
write_unlock(&req_q->req_lock);
req_cache = q + req_id; req_cache = q + req_id;
req_cache->req_id = req_id; req_cache->req_id = req_id;
req_cache->req = req; req_cache->req = req;
@ -242,8 +247,6 @@ static struct hisi_zip_req *hisi_zip_create_req(struct acomp_req *req,
req_cache->dskip = 0; req_cache->dskip = 0;
} }
write_unlock(&req_q->req_lock);
return req_cache; return req_cache;
} }
@ -254,7 +257,6 @@ static void hisi_zip_remove_req(struct hisi_zip_qp_ctx *qp_ctx,
write_lock(&req_q->req_lock); write_lock(&req_q->req_lock);
clear_bit(req->req_id, req_q->req_bitmap); clear_bit(req->req_id, req_q->req_bitmap);
memset(req, 0, sizeof(struct hisi_zip_req));
write_unlock(&req_q->req_lock); write_unlock(&req_q->req_lock);
} }
@ -339,7 +341,7 @@ static int hisi_zip_do_work(struct hisi_zip_req *req,
struct hisi_zip_sqe zip_sqe; struct hisi_zip_sqe zip_sqe;
int ret; int ret;
if (!a_req->src || !a_req->slen || !a_req->dst || !a_req->dlen) if (unlikely(!a_req->src || !a_req->slen || !a_req->dst || !a_req->dlen))
return -EINVAL; return -EINVAL;
req->hw_src = hisi_acc_sg_buf_map_to_hw_sgl(dev, a_req->src, pool, req->hw_src = hisi_acc_sg_buf_map_to_hw_sgl(dev, a_req->src, pool,
@ -365,7 +367,7 @@ static int hisi_zip_do_work(struct hisi_zip_req *req,
/* send command to start a task */ /* send command to start a task */
atomic64_inc(&dfx->send_cnt); atomic64_inc(&dfx->send_cnt);
ret = hisi_qp_send(qp, &zip_sqe); ret = hisi_qp_send(qp, &zip_sqe);
if (ret < 0) { if (unlikely(ret < 0)) {
atomic64_inc(&dfx->send_busy_cnt); atomic64_inc(&dfx->send_busy_cnt);
ret = -EAGAIN; ret = -EAGAIN;
dev_dbg_ratelimited(dev, "failed to send request!\n"); dev_dbg_ratelimited(dev, "failed to send request!\n");
@ -417,7 +419,7 @@ static void hisi_zip_acomp_cb(struct hisi_qp *qp, void *data)
atomic64_inc(&dfx->recv_cnt); atomic64_inc(&dfx->recv_cnt);
status = ops->get_status(sqe); status = ops->get_status(sqe);
if (status != 0 && status != HZIP_NC_ERR) { if (unlikely(status != 0 && status != HZIP_NC_ERR)) {
dev_err(dev, "%scompress fail in qp%u: %u, output: %u\n", dev_err(dev, "%scompress fail in qp%u: %u, output: %u\n",
(qp->alg_type == 0) ? "" : "de", qp->qp_id, status, (qp->alg_type == 0) ? "" : "de", qp->qp_id, status,
sqe->produced); sqe->produced);
@ -450,7 +452,7 @@ static int hisi_zip_acompress(struct acomp_req *acomp_req)
/* let's output compression head now */ /* let's output compression head now */
head_size = add_comp_head(acomp_req->dst, qp_ctx->qp->req_type); head_size = add_comp_head(acomp_req->dst, qp_ctx->qp->req_type);
if (head_size < 0) { if (unlikely(head_size < 0)) {
dev_err_ratelimited(dev, "failed to add comp head (%d)!\n", dev_err_ratelimited(dev, "failed to add comp head (%d)!\n",
head_size); head_size);
return head_size; return head_size;
@ -461,7 +463,7 @@ static int hisi_zip_acompress(struct acomp_req *acomp_req)
return PTR_ERR(req); return PTR_ERR(req);
ret = hisi_zip_do_work(req, qp_ctx); ret = hisi_zip_do_work(req, qp_ctx);
if (ret != -EINPROGRESS) { if (unlikely(ret != -EINPROGRESS)) {
dev_info_ratelimited(dev, "failed to do compress (%d)!\n", ret); dev_info_ratelimited(dev, "failed to do compress (%d)!\n", ret);
hisi_zip_remove_req(qp_ctx, req); hisi_zip_remove_req(qp_ctx, req);
} }
@ -478,7 +480,7 @@ static int hisi_zip_adecompress(struct acomp_req *acomp_req)
int head_size, ret; int head_size, ret;
head_size = get_comp_head_size(acomp_req, qp_ctx->qp->req_type); head_size = get_comp_head_size(acomp_req, qp_ctx->qp->req_type);
if (head_size < 0) { if (unlikely(head_size < 0)) {
dev_err_ratelimited(dev, "failed to get comp head size (%d)!\n", dev_err_ratelimited(dev, "failed to get comp head size (%d)!\n",
head_size); head_size);
return head_size; return head_size;
@ -489,7 +491,7 @@ static int hisi_zip_adecompress(struct acomp_req *acomp_req)
return PTR_ERR(req); return PTR_ERR(req);
ret = hisi_zip_do_work(req, qp_ctx); ret = hisi_zip_do_work(req, qp_ctx);
if (ret != -EINPROGRESS) { if (unlikely(ret != -EINPROGRESS)) {
dev_info_ratelimited(dev, "failed to do decompress (%d)!\n", dev_info_ratelimited(dev, "failed to do decompress (%d)!\n",
ret); ret);
hisi_zip_remove_req(qp_ctx, req); hisi_zip_remove_req(qp_ctx, req);
@ -498,7 +500,7 @@ static int hisi_zip_adecompress(struct acomp_req *acomp_req)
return ret; return ret;
} }
static int hisi_zip_start_qp(struct hisi_qp *qp, struct hisi_zip_qp_ctx *ctx, static int hisi_zip_start_qp(struct hisi_qp *qp, struct hisi_zip_qp_ctx *qp_ctx,
int alg_type, int req_type) int alg_type, int req_type)
{ {
struct device *dev = &qp->qm->pdev->dev; struct device *dev = &qp->qm->pdev->dev;
@ -506,7 +508,7 @@ static int hisi_zip_start_qp(struct hisi_qp *qp, struct hisi_zip_qp_ctx *ctx,
qp->req_type = req_type; qp->req_type = req_type;
qp->alg_type = alg_type; qp->alg_type = alg_type;
qp->qp_ctx = ctx; qp->qp_ctx = qp_ctx;
ret = hisi_qm_start_qp(qp, 0); ret = hisi_qm_start_qp(qp, 0);
if (ret < 0) { if (ret < 0) {
@ -514,15 +516,15 @@ static int hisi_zip_start_qp(struct hisi_qp *qp, struct hisi_zip_qp_ctx *ctx,
return ret; return ret;
} }
ctx->qp = qp; qp_ctx->qp = qp;
return 0; return 0;
} }
static void hisi_zip_release_qp(struct hisi_zip_qp_ctx *ctx) static void hisi_zip_release_qp(struct hisi_zip_qp_ctx *qp_ctx)
{ {
hisi_qm_stop_qp(ctx->qp); hisi_qm_stop_qp(qp_ctx->qp);
hisi_qm_free_qps(&ctx->qp, 1); hisi_qm_free_qps(&qp_ctx->qp, 1);
} }
static const struct hisi_zip_sqe_ops hisi_zip_ops_v1 = { static const struct hisi_zip_sqe_ops hisi_zip_ops_v1 = {
@ -594,18 +596,19 @@ static void hisi_zip_ctx_exit(struct hisi_zip_ctx *hisi_zip_ctx)
{ {
int i; int i;
for (i = 1; i >= 0; i--) for (i = 0; i < HZIP_CTX_Q_NUM; i++)
hisi_zip_release_qp(&hisi_zip_ctx->qp_ctx[i]); hisi_zip_release_qp(&hisi_zip_ctx->qp_ctx[i]);
} }
static int hisi_zip_create_req_q(struct hisi_zip_ctx *ctx) static int hisi_zip_create_req_q(struct hisi_zip_ctx *ctx)
{ {
u16 q_depth = ctx->qp_ctx[0].qp->sq_depth;
struct hisi_zip_req_q *req_q;
int i, ret;
for (i = 0; i < HZIP_CTX_Q_NUM; i++) {
req_q = &ctx->qp_ctx[i].req_q;
req_q->size = QM_Q_DEPTH; req_q->size = q_depth;
req_q->req_bitmap = bitmap_zalloc(req_q->size, GFP_KERNEL); req_q->req_bitmap = bitmap_zalloc(req_q->size, GFP_KERNEL);
if (!req_q->req_bitmap) { if (!req_q->req_bitmap) {
@ -613,7 +616,7 @@ static int hisi_zip_create_req_q(struct hisi_zip_ctx *ctx)
if (i == 0) if (i == 0)
return ret; return ret;
goto err_free_loop0; goto err_free_comp_q;
} }
rwlock_init(&req_q->req_lock); rwlock_init(&req_q->req_lock);
@ -622,19 +625,19 @@ static int hisi_zip_create_req_q(struct hisi_zip_ctx *ctx)
if (!req_q->q) { if (!req_q->q) {
ret = -ENOMEM; ret = -ENOMEM;
if (i == 0) if (i == 0)
goto err_free_bitmap; goto err_free_comp_bitmap;
else else
goto err_free_loop1; goto err_free_decomp_bitmap;
} }
} }
return 0; return 0;
err_free_loop1: err_free_decomp_bitmap:
bitmap_free(ctx->qp_ctx[HZIP_QPC_DECOMP].req_q.req_bitmap); bitmap_free(ctx->qp_ctx[HZIP_QPC_DECOMP].req_q.req_bitmap);
err_free_loop0: err_free_comp_q:
kfree(ctx->qp_ctx[HZIP_QPC_COMP].req_q.q); kfree(ctx->qp_ctx[HZIP_QPC_COMP].req_q.q);
err_free_bitmap: err_free_comp_bitmap:
bitmap_free(ctx->qp_ctx[HZIP_QPC_COMP].req_q.req_bitmap); bitmap_free(ctx->qp_ctx[HZIP_QPC_COMP].req_q.req_bitmap);
return ret; return ret;
} }
@ -651,6 +654,7 @@ static void hisi_zip_release_req_q(struct hisi_zip_ctx *ctx)
static int hisi_zip_create_sgl_pool(struct hisi_zip_ctx *ctx) static int hisi_zip_create_sgl_pool(struct hisi_zip_ctx *ctx)
{ {
u16 q_depth = ctx->qp_ctx[0].qp->sq_depth;
struct hisi_zip_qp_ctx *tmp; struct hisi_zip_qp_ctx *tmp;
struct device *dev; struct device *dev;
int i; int i;
@ -658,7 +662,7 @@ static int hisi_zip_create_sgl_pool(struct hisi_zip_ctx *ctx)
for (i = 0; i < HZIP_CTX_Q_NUM; i++) { for (i = 0; i < HZIP_CTX_Q_NUM; i++) {
tmp = &ctx->qp_ctx[i]; tmp = &ctx->qp_ctx[i];
dev = &tmp->qp->qm->pdev->dev; dev = &tmp->qp->qm->pdev->dev;
tmp->sgl_pool = hisi_acc_create_sgl_pool(dev, QM_Q_DEPTH << 1, tmp->sgl_pool = hisi_acc_create_sgl_pool(dev, q_depth << 1,
sgl_sge_nr); sgl_sge_nr);
if (IS_ERR(tmp->sgl_pool)) { if (IS_ERR(tmp->sgl_pool)) {
if (i == 1) if (i == 1)
@ -755,6 +759,28 @@ static struct acomp_alg hisi_zip_acomp_zlib = {
} }
}; };
static int hisi_zip_register_zlib(struct hisi_qm *qm)
{
int ret;
if (!hisi_zip_alg_support(qm, HZIP_ALG_ZLIB))
return 0;
ret = crypto_register_acomp(&hisi_zip_acomp_zlib);
if (ret)
dev_err(&qm->pdev->dev, "failed to register to zlib (%d)!\n", ret);
return ret;
}
static void hisi_zip_unregister_zlib(struct hisi_qm *qm)
{
if (!hisi_zip_alg_support(qm, HZIP_ALG_ZLIB))
return;
crypto_unregister_acomp(&hisi_zip_acomp_zlib);
}
static struct acomp_alg hisi_zip_acomp_gzip = { static struct acomp_alg hisi_zip_acomp_gzip = {
.init = hisi_zip_acomp_init, .init = hisi_zip_acomp_init,
.exit = hisi_zip_acomp_exit, .exit = hisi_zip_acomp_exit,
@ -769,27 +795,45 @@ static struct acomp_alg hisi_zip_acomp_gzip = {
} }
}; };
int hisi_zip_register_to_crypto(struct hisi_qm *qm) static int hisi_zip_register_gzip(struct hisi_qm *qm)
{ {
int ret; int ret;
ret = crypto_register_acomp(&hisi_zip_acomp_zlib); if (!hisi_zip_alg_support(qm, HZIP_ALG_GZIP))
if (ret) { return 0;
pr_err("failed to register to zlib (%d)!\n", ret);
return ret;
}
ret = crypto_register_acomp(&hisi_zip_acomp_gzip); ret = crypto_register_acomp(&hisi_zip_acomp_gzip);
if (ret) { if (ret)
pr_err("failed to register to gzip (%d)!\n", ret); dev_err(&qm->pdev->dev, "failed to register to gzip (%d)!\n", ret);
crypto_unregister_acomp(&hisi_zip_acomp_zlib);
} return ret;
}
static void hisi_zip_unregister_gzip(struct hisi_qm *qm)
{
if (!hisi_zip_alg_support(qm, HZIP_ALG_GZIP))
return;
crypto_unregister_acomp(&hisi_zip_acomp_gzip);
}
int hisi_zip_register_to_crypto(struct hisi_qm *qm)
{
int ret = 0;
ret = hisi_zip_register_zlib(qm);
if (ret)
return ret;
ret = hisi_zip_register_gzip(qm);
if (ret)
hisi_zip_unregister_zlib(qm);
return ret; return ret;
} }
void hisi_zip_unregister_from_crypto(struct hisi_qm *qm)
{
crypto_unregister_acomp(&hisi_zip_acomp_gzip); hisi_zip_unregister_zlib(qm);
crypto_unregister_acomp(&hisi_zip_acomp_zlib); hisi_zip_unregister_gzip(qm);
}
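
With the registration split per algorithm, the zlib and gzip acomp registrations above become no-ops when the hardware does not advertise the corresponding HZIP_ALG_ZLIB/HZIP_ALG_GZIP bits; support is a subset test on the driver algorithm bitmap, as hisi_zip_alg_support() in the main driver shows. A small standalone sketch of that check follows; the capability value below is an example, not a real register read.

#include <stdint.h>
#include <stdio.h>

/* Same bit assignments as HZIP_ALG_ZLIB / HZIP_ALG_GZIP in the driver. */
#define ALG_ZLIB  0x3U    /* GENMASK(1, 0) */
#define ALG_GZIP  0xCU    /* GENMASK(3, 2) */

/* Subset test: supported only if every requested bit is present. */
static int alg_supported(uint32_t cap_val, uint32_t alg)
{
	return (alg & cap_val) == alg;
}

/* Stub for crypto_register_acomp(); registration is skipped, not failed,
 * when the hardware does not advertise the algorithm. */
static int register_acomp(const char *name)
{
	printf("registered %s\n", name);
	return 0;
}

int main(void)
{
	uint32_t cap_val = 0x3;   /* example: device advertises zlib only */

	if (alg_supported(cap_val, ALG_ZLIB))
		register_acomp("zlib");
	if (alg_supported(cap_val, ALG_GZIP))
		register_acomp("gzip");
	return 0;
}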


@ -20,18 +20,6 @@
#define HZIP_QUEUE_NUM_V1 4096 #define HZIP_QUEUE_NUM_V1 4096
#define HZIP_CLOCK_GATE_CTRL 0x301004 #define HZIP_CLOCK_GATE_CTRL 0x301004
#define COMP0_ENABLE BIT(0)
#define COMP1_ENABLE BIT(1)
#define DECOMP0_ENABLE BIT(2)
#define DECOMP1_ENABLE BIT(3)
#define DECOMP2_ENABLE BIT(4)
#define DECOMP3_ENABLE BIT(5)
#define DECOMP4_ENABLE BIT(6)
#define DECOMP5_ENABLE BIT(7)
#define HZIP_ALL_COMP_DECOMP_EN (COMP0_ENABLE | COMP1_ENABLE | \
DECOMP0_ENABLE | DECOMP1_ENABLE | \
DECOMP2_ENABLE | DECOMP3_ENABLE | \
DECOMP4_ENABLE | DECOMP5_ENABLE)
#define HZIP_DECOMP_CHECK_ENABLE BIT(16) #define HZIP_DECOMP_CHECK_ENABLE BIT(16)
#define HZIP_FSM_MAX_CNT 0x301008 #define HZIP_FSM_MAX_CNT 0x301008
@ -69,20 +57,14 @@
#define HZIP_CORE_INT_STATUS_M_ECC BIT(1) #define HZIP_CORE_INT_STATUS_M_ECC BIT(1)
#define HZIP_CORE_SRAM_ECC_ERR_INFO 0x301148 #define HZIP_CORE_SRAM_ECC_ERR_INFO 0x301148
#define HZIP_CORE_INT_RAS_CE_ENB 0x301160 #define HZIP_CORE_INT_RAS_CE_ENB 0x301160
#define HZIP_CORE_INT_RAS_CE_ENABLE 0x1
#define HZIP_CORE_INT_RAS_NFE_ENB 0x301164 #define HZIP_CORE_INT_RAS_NFE_ENB 0x301164
#define HZIP_CORE_INT_RAS_FE_ENB 0x301168 #define HZIP_CORE_INT_RAS_FE_ENB 0x301168
#define HZIP_CORE_INT_RAS_FE_ENB_MASK 0x0
#define HZIP_OOO_SHUTDOWN_SEL 0x30120C #define HZIP_OOO_SHUTDOWN_SEL 0x30120C
#define HZIP_CORE_INT_RAS_NFE_ENABLE 0x1FFE
#define HZIP_SRAM_ECC_ERR_NUM_SHIFT 16 #define HZIP_SRAM_ECC_ERR_NUM_SHIFT 16
#define HZIP_SRAM_ECC_ERR_ADDR_SHIFT 24 #define HZIP_SRAM_ECC_ERR_ADDR_SHIFT 24
#define HZIP_CORE_INT_MASK_ALL GENMASK(12, 0) #define HZIP_CORE_INT_MASK_ALL GENMASK(12, 0)
#define HZIP_COMP_CORE_NUM 2
#define HZIP_DECOMP_CORE_NUM 6
#define HZIP_CORE_NUM (HZIP_COMP_CORE_NUM + \
HZIP_DECOMP_CORE_NUM)
#define HZIP_SQE_SIZE 128 #define HZIP_SQE_SIZE 128
#define HZIP_SQ_SIZE (HZIP_SQE_SIZE * QM_Q_DEPTH)
#define HZIP_PF_DEF_Q_NUM 64 #define HZIP_PF_DEF_Q_NUM 64
#define HZIP_PF_DEF_Q_BASE 0 #define HZIP_PF_DEF_Q_BASE 0
@ -92,6 +74,12 @@
#define HZIP_AXI_SHUTDOWN_ENABLE BIT(14) #define HZIP_AXI_SHUTDOWN_ENABLE BIT(14)
#define HZIP_WR_PORT BIT(11) #define HZIP_WR_PORT BIT(11)
#define HZIP_DEV_ALG_MAX_LEN 256
#define HZIP_ALG_ZLIB_BIT GENMASK(1, 0)
#define HZIP_ALG_GZIP_BIT GENMASK(3, 2)
#define HZIP_ALG_DEFLATE_BIT GENMASK(5, 4)
#define HZIP_ALG_LZ77_BIT GENMASK(7, 6)
#define HZIP_BUF_SIZE 22 #define HZIP_BUF_SIZE 22
#define HZIP_SQE_MASK_OFFSET 64 #define HZIP_SQE_MASK_OFFSET 64
#define HZIP_SQE_MASK_LEN 48 #define HZIP_SQE_MASK_LEN 48
@ -132,6 +120,26 @@ struct zip_dfx_item {
u32 offset; u32 offset;
}; };
struct zip_dev_alg {
u32 alg_msk;
const char *algs;
};
static const struct zip_dev_alg zip_dev_algs[] = { {
.alg_msk = HZIP_ALG_ZLIB_BIT,
.algs = "zlib\n",
}, {
.alg_msk = HZIP_ALG_GZIP_BIT,
.algs = "gzip\n",
}, {
.alg_msk = HZIP_ALG_DEFLATE_BIT,
.algs = "deflate\n",
}, {
.alg_msk = HZIP_ALG_LZ77_BIT,
.algs = "lz77_zstd\n",
},
};
static struct hisi_qm_list zip_devices = { static struct hisi_qm_list zip_devices = {
.register_to_crypto = hisi_zip_register_to_crypto, .register_to_crypto = hisi_zip_register_to_crypto,
.unregister_from_crypto = hisi_zip_unregister_from_crypto, .unregister_from_crypto = hisi_zip_unregister_from_crypto,
@ -187,6 +195,58 @@ struct hisi_zip_ctrl {
struct ctrl_debug_file files[HZIP_DEBUG_FILE_NUM]; struct ctrl_debug_file files[HZIP_DEBUG_FILE_NUM];
}; };
enum zip_cap_type {
ZIP_QM_NFE_MASK_CAP = 0x0,
ZIP_QM_RESET_MASK_CAP,
ZIP_QM_OOO_SHUTDOWN_MASK_CAP,
ZIP_QM_CE_MASK_CAP,
ZIP_NFE_MASK_CAP,
ZIP_RESET_MASK_CAP,
ZIP_OOO_SHUTDOWN_MASK_CAP,
ZIP_CE_MASK_CAP,
ZIP_CLUSTER_NUM_CAP,
ZIP_CORE_TYPE_NUM_CAP,
ZIP_CORE_NUM_CAP,
ZIP_CLUSTER_COMP_NUM_CAP,
ZIP_CLUSTER_DECOMP_NUM_CAP,
ZIP_DECOMP_ENABLE_BITMAP,
ZIP_COMP_ENABLE_BITMAP,
ZIP_DRV_ALG_BITMAP,
ZIP_DEV_ALG_BITMAP,
ZIP_CORE1_ALG_BITMAP,
ZIP_CORE2_ALG_BITMAP,
ZIP_CORE3_ALG_BITMAP,
ZIP_CORE4_ALG_BITMAP,
ZIP_CORE5_ALG_BITMAP,
ZIP_CAP_MAX
};
static struct hisi_qm_cap_info zip_basic_cap_info[] = {
{ZIP_QM_NFE_MASK_CAP, 0x3124, 0, GENMASK(31, 0), 0x0, 0x1C57, 0x7C77},
{ZIP_QM_RESET_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0xC57, 0x6C77},
{ZIP_QM_OOO_SHUTDOWN_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0x4, 0x6C77},
{ZIP_QM_CE_MASK_CAP, 0x312C, 0, GENMASK(31, 0), 0x0, 0x8, 0x8},
{ZIP_NFE_MASK_CAP, 0x3130, 0, GENMASK(31, 0), 0x0, 0x7FE, 0x1FFE},
{ZIP_RESET_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x7FE, 0x7FE},
{ZIP_OOO_SHUTDOWN_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x2, 0x7FE},
{ZIP_CE_MASK_CAP, 0x3138, 0, GENMASK(31, 0), 0x0, 0x1, 0x1},
{ZIP_CLUSTER_NUM_CAP, 0x313C, 28, GENMASK(3, 0), 0x1, 0x1, 0x1},
{ZIP_CORE_TYPE_NUM_CAP, 0x313C, 24, GENMASK(3, 0), 0x2, 0x2, 0x2},
{ZIP_CORE_NUM_CAP, 0x313C, 16, GENMASK(7, 0), 0x8, 0x8, 0x5},
{ZIP_CLUSTER_COMP_NUM_CAP, 0x313C, 8, GENMASK(7, 0), 0x2, 0x2, 0x2},
{ZIP_CLUSTER_DECOMP_NUM_CAP, 0x313C, 0, GENMASK(7, 0), 0x6, 0x6, 0x3},
{ZIP_DECOMP_ENABLE_BITMAP, 0x3140, 16, GENMASK(15, 0), 0xFC, 0xFC, 0x1C},
{ZIP_COMP_ENABLE_BITMAP, 0x3140, 0, GENMASK(15, 0), 0x3, 0x3, 0x3},
{ZIP_DRV_ALG_BITMAP, 0x3144, 0, GENMASK(31, 0), 0xF, 0xF, 0xF},
{ZIP_DEV_ALG_BITMAP, 0x3148, 0, GENMASK(31, 0), 0xF, 0xF, 0xFF},
{ZIP_CORE1_ALG_BITMAP, 0x314C, 0, GENMASK(31, 0), 0x5, 0x5, 0xD5},
{ZIP_CORE2_ALG_BITMAP, 0x3150, 0, GENMASK(31, 0), 0x5, 0x5, 0xD5},
{ZIP_CORE3_ALG_BITMAP, 0x3154, 0, GENMASK(31, 0), 0xA, 0xA, 0x2A},
{ZIP_CORE4_ALG_BITMAP, 0x3158, 0, GENMASK(31, 0), 0xA, 0xA, 0x2A},
{ZIP_CORE5_ALG_BITMAP, 0x315C, 0, GENMASK(31, 0), 0xA, 0xA, 0x2A},
{ZIP_CAP_MAX, 0x317c, 0, GENMASK(0, 0), 0x0, 0x0, 0x0}
};
enum { enum {
HZIP_COMP_CORE0, HZIP_COMP_CORE0,
HZIP_COMP_CORE1, HZIP_COMP_CORE1,
@ -343,12 +403,52 @@ int zip_create_qps(struct hisi_qp **qps, int qp_num, int node)
return hisi_qm_alloc_qps_node(&zip_devices, qp_num, 0, node, qps); return hisi_qm_alloc_qps_node(&zip_devices, qp_num, 0, node, qps);
} }
bool hisi_zip_alg_support(struct hisi_qm *qm, u32 alg)
{
u32 cap_val;
cap_val = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_DRV_ALG_BITMAP, qm->cap_ver);
if ((alg & cap_val) == alg)
return true;
return false;
}
static int hisi_zip_set_qm_algs(struct hisi_qm *qm)
{
struct device *dev = &qm->pdev->dev;
char *algs, *ptr;
u32 alg_mask;
int i;
if (!qm->use_sva)
return 0;
algs = devm_kzalloc(dev, HZIP_DEV_ALG_MAX_LEN * sizeof(char), GFP_KERNEL);
if (!algs)
return -ENOMEM;
alg_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_DEV_ALG_BITMAP, qm->cap_ver);
for (i = 0; i < ARRAY_SIZE(zip_dev_algs); i++)
if (alg_mask & zip_dev_algs[i].alg_msk)
strcat(algs, zip_dev_algs[i].algs);
ptr = strrchr(algs, '\n');
if (ptr)
*ptr = '\0';
qm->uacce->algs = algs;
return 0;
}
static void hisi_zip_open_sva_prefetch(struct hisi_qm *qm) static void hisi_zip_open_sva_prefetch(struct hisi_qm *qm)
{ {
u32 val; u32 val;
int ret; int ret;
if (qm->ver < QM_HW_V3) if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps))
return; return;
/* Enable prefetch */ /* Enable prefetch */
@ -368,7 +468,7 @@ static void hisi_zip_close_sva_prefetch(struct hisi_qm *qm)
u32 val; u32 val;
int ret; int ret;
if (qm->ver < QM_HW_V3) if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps))
return; return;
val = readl_relaxed(qm->io_base + HZIP_PREFETCH_CFG); val = readl_relaxed(qm->io_base + HZIP_PREFETCH_CFG);
@ -401,6 +501,7 @@ static void hisi_zip_enable_clock_gate(struct hisi_qm *qm)
static int hisi_zip_set_user_domain_and_cache(struct hisi_qm *qm) static int hisi_zip_set_user_domain_and_cache(struct hisi_qm *qm)
{ {
void __iomem *base = qm->io_base; void __iomem *base = qm->io_base;
u32 dcomp_bm, comp_bm;
/* qm user domain */ /* qm user domain */
writel(AXUSER_BASE, base + QM_ARUSER_M_CFG_1); writel(AXUSER_BASE, base + QM_ARUSER_M_CFG_1);
@ -438,8 +539,11 @@ static int hisi_zip_set_user_domain_and_cache(struct hisi_qm *qm)
} }
/* let's open all compression/decompression cores */ /* let's open all compression/decompression cores */
writel(HZIP_DECOMP_CHECK_ENABLE | HZIP_ALL_COMP_DECOMP_EN, dcomp_bm = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
base + HZIP_CLOCK_GATE_CTRL); ZIP_DECOMP_ENABLE_BITMAP, qm->cap_ver);
comp_bm = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
ZIP_COMP_ENABLE_BITMAP, qm->cap_ver);
writel(HZIP_DECOMP_CHECK_ENABLE | dcomp_bm | comp_bm, base + HZIP_CLOCK_GATE_CTRL);
/* enable sqc,cqc writeback */ /* enable sqc,cqc writeback */
writel(SQC_CACHE_ENABLE | CQC_CACHE_ENABLE | SQC_CACHE_WB_ENABLE | writel(SQC_CACHE_ENABLE | CQC_CACHE_ENABLE | SQC_CACHE_WB_ENABLE |
@ -458,7 +562,8 @@ static void hisi_zip_master_ooo_ctrl(struct hisi_qm *qm, bool enable)
val1 = readl(qm->io_base + HZIP_SOFT_CTRL_ZIP_CONTROL); val1 = readl(qm->io_base + HZIP_SOFT_CTRL_ZIP_CONTROL);
if (enable) { if (enable) {
val1 |= HZIP_AXI_SHUTDOWN_ENABLE; val1 |= HZIP_AXI_SHUTDOWN_ENABLE;
val2 = HZIP_CORE_INT_RAS_NFE_ENABLE; val2 = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
ZIP_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
} else { } else {
val1 &= ~HZIP_AXI_SHUTDOWN_ENABLE; val1 &= ~HZIP_AXI_SHUTDOWN_ENABLE;
val2 = 0x0; val2 = 0x0;
@ -472,6 +577,8 @@ static void hisi_zip_master_ooo_ctrl(struct hisi_qm *qm, bool enable)
static void hisi_zip_hw_error_enable(struct hisi_qm *qm) static void hisi_zip_hw_error_enable(struct hisi_qm *qm)
{ {
u32 nfe, ce;
if (qm->ver == QM_HW_V1) { if (qm->ver == QM_HW_V1) {
writel(HZIP_CORE_INT_MASK_ALL, writel(HZIP_CORE_INT_MASK_ALL,
qm->io_base + HZIP_CORE_INT_MASK_REG); qm->io_base + HZIP_CORE_INT_MASK_REG);
@ -479,17 +586,17 @@ static void hisi_zip_hw_error_enable(struct hisi_qm *qm)
return; return;
} }
nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver);
ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CE_MASK_CAP, qm->cap_ver);
/* clear ZIP hw error source if having */ /* clear ZIP hw error source if having */
writel(HZIP_CORE_INT_MASK_ALL, qm->io_base + HZIP_CORE_INT_SOURCE); writel(ce | nfe | HZIP_CORE_INT_RAS_FE_ENB_MASK, qm->io_base + HZIP_CORE_INT_SOURCE);
/* configure error type */ /* configure error type */
writel(HZIP_CORE_INT_RAS_CE_ENABLE, writel(ce, qm->io_base + HZIP_CORE_INT_RAS_CE_ENB);
qm->io_base + HZIP_CORE_INT_RAS_CE_ENB); writel(HZIP_CORE_INT_RAS_FE_ENB_MASK, qm->io_base + HZIP_CORE_INT_RAS_FE_ENB);
writel(0x0, qm->io_base + HZIP_CORE_INT_RAS_FE_ENB); writel(nfe, qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
writel(HZIP_CORE_INT_RAS_NFE_ENABLE,
qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
/* enable ZIP block master OOO when nfe occurs on Kunpeng930 */
hisi_zip_master_ooo_ctrl(qm, true); hisi_zip_master_ooo_ctrl(qm, true);
/* enable ZIP hw error interrupts */ /* enable ZIP hw error interrupts */
@ -498,10 +605,13 @@ static void hisi_zip_hw_error_enable(struct hisi_qm *qm)
static void hisi_zip_hw_error_disable(struct hisi_qm *qm) static void hisi_zip_hw_error_disable(struct hisi_qm *qm)
{ {
/* disable ZIP hw error interrupts */ u32 nfe, ce;
writel(HZIP_CORE_INT_MASK_ALL, qm->io_base + HZIP_CORE_INT_MASK_REG);
/* disable ZIP hw error interrupts */
nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver);
ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CE_MASK_CAP, qm->cap_ver);
writel(ce | nfe | HZIP_CORE_INT_RAS_FE_ENB_MASK, qm->io_base + HZIP_CORE_INT_MASK_REG);
/* disable ZIP block master OOO when nfe occurs on Kunpeng930 */
hisi_zip_master_ooo_ctrl(qm, false); hisi_zip_master_ooo_ctrl(qm, false);
} }
@ -586,8 +696,9 @@ static ssize_t hisi_zip_ctrl_debug_write(struct file *filp,
return len; return len;
tbuf[len] = '\0'; tbuf[len] = '\0';
if (kstrtoul(tbuf, 0, &val)) ret = kstrtoul(tbuf, 0, &val);
return -EFAULT; if (ret)
return ret;
ret = hisi_qm_get_dfx_access(qm); ret = hisi_qm_get_dfx_access(qm);
if (ret) if (ret)
@ -651,18 +762,23 @@ DEFINE_SHOW_ATTRIBUTE(hisi_zip_regs);
static int hisi_zip_core_debug_init(struct hisi_qm *qm) static int hisi_zip_core_debug_init(struct hisi_qm *qm)
{ {
u32 zip_core_num, zip_comp_core_num;
struct device *dev = &qm->pdev->dev; struct device *dev = &qm->pdev->dev;
struct debugfs_regset32 *regset; struct debugfs_regset32 *regset;
struct dentry *tmp_d; struct dentry *tmp_d;
char buf[HZIP_BUF_SIZE]; char buf[HZIP_BUF_SIZE];
int i; int i;
for (i = 0; i < HZIP_CORE_NUM; i++) { zip_core_num = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CORE_NUM_CAP, qm->cap_ver);
if (i < HZIP_COMP_CORE_NUM) zip_comp_core_num = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CLUSTER_COMP_NUM_CAP,
qm->cap_ver);
for (i = 0; i < zip_core_num; i++) {
if (i < zip_comp_core_num)
scnprintf(buf, sizeof(buf), "comp_core%d", i); scnprintf(buf, sizeof(buf), "comp_core%d", i);
else else
scnprintf(buf, sizeof(buf), "decomp_core%d", scnprintf(buf, sizeof(buf), "decomp_core%d",
i - HZIP_COMP_CORE_NUM); i - zip_comp_core_num);
regset = devm_kzalloc(dev, sizeof(*regset), GFP_KERNEL); regset = devm_kzalloc(dev, sizeof(*regset), GFP_KERNEL);
if (!regset) if (!regset)
@ -675,7 +791,7 @@ static int hisi_zip_core_debug_init(struct hisi_qm *qm)
tmp_d = debugfs_create_dir(buf, qm->debug.debug_root); tmp_d = debugfs_create_dir(buf, qm->debug.debug_root);
debugfs_create_file("regs", 0444, tmp_d, regset, debugfs_create_file("regs", 0444, tmp_d, regset,
&hisi_zip_regs_fops); &hisi_zip_regs_fops);
} }
return 0; return 0;
@ -795,10 +911,13 @@ static int hisi_zip_show_last_regs_init(struct hisi_qm *qm)
int com_dfx_regs_num = ARRAY_SIZE(hzip_com_dfx_regs); int com_dfx_regs_num = ARRAY_SIZE(hzip_com_dfx_regs);
struct qm_debug *debug = &qm->debug; struct qm_debug *debug = &qm->debug;
void __iomem *io_base; void __iomem *io_base;
u32 zip_core_num;
int i, j, idx; int i, j, idx;
debug->last_words = kcalloc(core_dfx_regs_num * HZIP_CORE_NUM + zip_core_num = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CORE_NUM_CAP, qm->cap_ver);
com_dfx_regs_num, sizeof(unsigned int), GFP_KERNEL);
debug->last_words = kcalloc(core_dfx_regs_num * zip_core_num + com_dfx_regs_num,
sizeof(unsigned int), GFP_KERNEL);
if (!debug->last_words) if (!debug->last_words)
return -ENOMEM; return -ENOMEM;
@ -807,7 +926,7 @@ static int hisi_zip_show_last_regs_init(struct hisi_qm *qm)
debug->last_words[i] = readl_relaxed(io_base); debug->last_words[i] = readl_relaxed(io_base);
} }
for (i = 0; i < HZIP_CORE_NUM; i++) { for (i = 0; i < zip_core_num; i++) {
io_base = qm->io_base + core_offsets[i]; io_base = qm->io_base + core_offsets[i];
for (j = 0; j < core_dfx_regs_num; j++) { for (j = 0; j < core_dfx_regs_num; j++) {
idx = com_dfx_regs_num + i * core_dfx_regs_num + j; idx = com_dfx_regs_num + i * core_dfx_regs_num + j;
@ -834,6 +953,7 @@ static void hisi_zip_show_last_dfx_regs(struct hisi_qm *qm)
{ {
int core_dfx_regs_num = ARRAY_SIZE(hzip_dump_dfx_regs); int core_dfx_regs_num = ARRAY_SIZE(hzip_dump_dfx_regs);
int com_dfx_regs_num = ARRAY_SIZE(hzip_com_dfx_regs); int com_dfx_regs_num = ARRAY_SIZE(hzip_com_dfx_regs);
u32 zip_core_num, zip_comp_core_num;
struct qm_debug *debug = &qm->debug; struct qm_debug *debug = &qm->debug;
char buf[HZIP_BUF_SIZE]; char buf[HZIP_BUF_SIZE];
void __iomem *base; void __iomem *base;
@ -847,15 +967,18 @@ static void hisi_zip_show_last_dfx_regs(struct hisi_qm *qm)
val = readl_relaxed(qm->io_base + hzip_com_dfx_regs[i].offset); val = readl_relaxed(qm->io_base + hzip_com_dfx_regs[i].offset);
if (debug->last_words[i] != val) if (debug->last_words[i] != val)
pci_info(qm->pdev, "com_dfx: %s \t= 0x%08x => 0x%08x\n", pci_info(qm->pdev, "com_dfx: %s \t= 0x%08x => 0x%08x\n",
hzip_com_dfx_regs[i].name, debug->last_words[i], val); hzip_com_dfx_regs[i].name, debug->last_words[i], val);
} }
for (i = 0; i < HZIP_CORE_NUM; i++) { zip_core_num = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CORE_NUM_CAP, qm->cap_ver);
if (i < HZIP_COMP_CORE_NUM) zip_comp_core_num = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CLUSTER_COMP_NUM_CAP,
qm->cap_ver);
for (i = 0; i < zip_core_num; i++) {
if (i < zip_comp_core_num)
scnprintf(buf, sizeof(buf), "Comp_core-%d", i); scnprintf(buf, sizeof(buf), "Comp_core-%d", i);
else else
scnprintf(buf, sizeof(buf), "Decomp_core-%d", scnprintf(buf, sizeof(buf), "Decomp_core-%d",
i - HZIP_COMP_CORE_NUM); i - zip_comp_core_num);
base = qm->io_base + core_offsets[i]; base = qm->io_base + core_offsets[i];
pci_info(qm->pdev, "==>%s:\n", buf); pci_info(qm->pdev, "==>%s:\n", buf);
@ -865,7 +988,8 @@ static void hisi_zip_show_last_dfx_regs(struct hisi_qm *qm)
val = readl_relaxed(base + hzip_dump_dfx_regs[j].offset); val = readl_relaxed(base + hzip_dump_dfx_regs[j].offset);
if (debug->last_words[idx] != val) if (debug->last_words[idx] != val)
pci_info(qm->pdev, "%s \t= 0x%08x => 0x%08x\n", pci_info(qm->pdev, "%s \t= 0x%08x => 0x%08x\n",
hzip_dump_dfx_regs[j].name, debug->last_words[idx], val); hzip_dump_dfx_regs[j].name,
debug->last_words[idx], val);
} }
} }
} }
@ -900,7 +1024,11 @@ static u32 hisi_zip_get_hw_err_status(struct hisi_qm *qm)
static void hisi_zip_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts) static void hisi_zip_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
{ {
u32 nfe;
writel(err_sts, qm->io_base + HZIP_CORE_INT_SOURCE); writel(err_sts, qm->io_base + HZIP_CORE_INT_SOURCE);
nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver);
writel(nfe, qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
} }
static void hisi_zip_open_axi_master_ooo(struct hisi_qm *qm) static void hisi_zip_open_axi_master_ooo(struct hisi_qm *qm)
@ -934,16 +1062,21 @@ static void hisi_zip_err_info_init(struct hisi_qm *qm)
{ {
struct hisi_qm_err_info *err_info = &qm->err_info; struct hisi_qm_err_info *err_info = &qm->err_info;
err_info->ce = QM_BASE_CE; err_info->fe = HZIP_CORE_INT_RAS_FE_ENB_MASK;
err_info->fe = 0; err_info->ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_QM_CE_MASK_CAP, qm->cap_ver);
err_info->nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
ZIP_QM_NFE_MASK_CAP, qm->cap_ver);
err_info->ecc_2bits_mask = HZIP_CORE_INT_STATUS_M_ECC; err_info->ecc_2bits_mask = HZIP_CORE_INT_STATUS_M_ECC;
err_info->dev_ce_mask = HZIP_CORE_INT_RAS_CE_ENABLE; err_info->qm_shutdown_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
ZIP_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
err_info->dev_shutdown_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
ZIP_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver);
err_info->qm_reset_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
ZIP_QM_RESET_MASK_CAP, qm->cap_ver);
err_info->dev_reset_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
ZIP_RESET_MASK_CAP, qm->cap_ver);
err_info->msi_wr_port = HZIP_WR_PORT; err_info->msi_wr_port = HZIP_WR_PORT;
err_info->acpi_rst = "ZRST"; err_info->acpi_rst = "ZRST";
err_info->nfe = QM_BASE_NFE | QM_ACC_WB_NOT_READY_TIMEOUT;
if (qm->ver >= QM_HW_V3)
err_info->nfe |= QM_ACC_DO_TASK_TIMEOUT;
} }
static const struct hisi_qm_err_ini hisi_zip_err_ini = { static const struct hisi_qm_err_ini hisi_zip_err_ini = {
@ -976,7 +1109,10 @@ static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip)
qm->err_ini = &hisi_zip_err_ini; qm->err_ini = &hisi_zip_err_ini;
qm->err_ini->err_info_init(qm); qm->err_ini->err_info_init(qm);
hisi_zip_set_user_domain_and_cache(qm); ret = hisi_zip_set_user_domain_and_cache(qm);
if (ret)
return ret;
hisi_zip_open_sva_prefetch(qm); hisi_zip_open_sva_prefetch(qm);
hisi_qm_dev_err_init(qm); hisi_qm_dev_err_init(qm);
hisi_zip_debug_regs_clear(qm); hisi_zip_debug_regs_clear(qm);
@ -990,12 +1126,10 @@ static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip)
static int hisi_zip_qm_init(struct hisi_qm *qm, struct pci_dev *pdev) static int hisi_zip_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
{ {
int ret;
qm->pdev = pdev; qm->pdev = pdev;
qm->ver = pdev->revision; qm->ver = pdev->revision;
if (pdev->revision >= QM_HW_V3)
qm->algs = "zlib\ngzip\ndeflate\nlz77_zstd";
else
qm->algs = "zlib\ngzip";
qm->mode = uacce_mode; qm->mode = uacce_mode;
qm->sqe_size = HZIP_SQE_SIZE; qm->sqe_size = HZIP_SQE_SIZE;
qm->dev_name = hisi_zip_name; qm->dev_name = hisi_zip_name;
@ -1019,7 +1153,19 @@ static int hisi_zip_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
qm->qp_num = HZIP_QUEUE_NUM_V1 - HZIP_PF_DEF_Q_NUM; qm->qp_num = HZIP_QUEUE_NUM_V1 - HZIP_PF_DEF_Q_NUM;
} }
return hisi_qm_init(qm); ret = hisi_qm_init(qm);
if (ret) {
pci_err(qm->pdev, "Failed to init zip qm configures!\n");
return ret;
}
ret = hisi_zip_set_qm_algs(qm);
if (ret) {
pci_err(qm->pdev, "Failed to set zip algs!\n");
hisi_qm_uninit(qm);
}
return ret;
}
static void hisi_zip_qm_uninit(struct hisi_qm *qm)
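
The zip_basic_cap_info[] table above pairs each capability with a register offset, a bit field (shift and mask), and one default per hardware generation, and the rest of the file reads those values through hisi_qm_get_hw_info() instead of hard-coded constants. The following is only a hedged sketch of how such a versioned lookup could be structured, not the actual hisi_qm implementation; the register value and the version handling are assumptions.

#include <stdint.h>
#include <stdio.h>

struct cap_info {
	uint32_t offset;       /* register offset holding the capability */
	uint32_t shift;        /* bit position of the field within the register */
	uint32_t mask;         /* field mask applied after shifting */
	uint32_t defaults[3];  /* fallback value per hardware version (v1..v3) */
};

/* Example row modelled on ZIP_CORE_NUM_CAP: reg 0x313C, bits [23:16]. */
static const struct cap_info core_num_cap = {
	.offset = 0x313C, .shift = 16, .mask = 0xFF,
	.defaults = { 0x8, 0x8, 0x5 },
};

/* Stand-in for an MMIO read; a real driver would use readl(io_base + offset). */
static uint32_t read_reg(uint32_t offset)
{
	(void)offset;
	return 0x5 << 16;   /* pretend the hardware reports 5 cores */
}

/* Prefer the register when the device exposes capability registers,
 * otherwise fall back to the per-version default. */
static uint32_t get_cap(const struct cap_info *cap, int hw_ver, int has_cap_regs)
{
	if (has_cap_regs)
		return (read_reg(cap->offset) >> cap->shift) & cap->mask;
	return cap->defaults[hw_ver - 1];
}

int main(void)
{
	printf("core num (from register): %u\n", get_cap(&core_num_cap, 3, 1));
	printf("core num (v2 default):    %u\n", get_cap(&core_num_cap, 2, 0));
	return 0;
}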


@ -63,7 +63,6 @@ struct safexcel_cipher_ctx {
u32 hash_alg; u32 hash_alg;
u32 state_sz; u32 state_sz;
struct crypto_cipher *hkaes;
struct crypto_aead *fback; struct crypto_aead *fback;
}; };
@ -642,10 +641,16 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv, int rin
safexcel_complete(priv, ring);
if (src == dst) {
dma_unmap_sg(priv->dev, src, sreq->nr_src, DMA_BIDIRECTIONAL); if (sreq->nr_src > 0)
dma_unmap_sg(priv->dev, src, sreq->nr_src,
DMA_BIDIRECTIONAL);
} else { } else {
dma_unmap_sg(priv->dev, src, sreq->nr_src, DMA_TO_DEVICE); if (sreq->nr_src > 0)
dma_unmap_sg(priv->dev, dst, sreq->nr_dst, DMA_FROM_DEVICE); dma_unmap_sg(priv->dev, src, sreq->nr_src,
DMA_TO_DEVICE);
if (sreq->nr_dst > 0)
dma_unmap_sg(priv->dev, dst, sreq->nr_dst,
DMA_FROM_DEVICE);
} }
/* /*
@ -737,23 +742,29 @@ static int safexcel_send_req(struct crypto_async_request *base, int ring,
max(totlen_src, totlen_dst)); max(totlen_src, totlen_dst));
return -EINVAL; return -EINVAL;
} }
dma_map_sg(priv->dev, src, sreq->nr_src, DMA_BIDIRECTIONAL); if (sreq->nr_src > 0)
dma_map_sg(priv->dev, src, sreq->nr_src,
DMA_BIDIRECTIONAL);
} else { } else {
if (unlikely(totlen_src && (sreq->nr_src <= 0))) { if (unlikely(totlen_src && (sreq->nr_src <= 0))) {
dev_err(priv->dev, "Source buffer not large enough (need %d bytes)!", dev_err(priv->dev, "Source buffer not large enough (need %d bytes)!",
totlen_src); totlen_src);
return -EINVAL; return -EINVAL;
} }
dma_map_sg(priv->dev, src, sreq->nr_src, DMA_TO_DEVICE);
if (sreq->nr_src > 0)
dma_map_sg(priv->dev, src, sreq->nr_src, DMA_TO_DEVICE);
if (unlikely(totlen_dst && (sreq->nr_dst <= 0))) { if (unlikely(totlen_dst && (sreq->nr_dst <= 0))) {
dev_err(priv->dev, "Dest buffer not large enough (need %d bytes)!", dev_err(priv->dev, "Dest buffer not large enough (need %d bytes)!",
totlen_dst); totlen_dst);
dma_unmap_sg(priv->dev, src, sreq->nr_src, ret = -EINVAL;
DMA_TO_DEVICE); goto unmap;
return -EINVAL;
} }
dma_map_sg(priv->dev, dst, sreq->nr_dst, DMA_FROM_DEVICE);
if (sreq->nr_dst > 0)
dma_map_sg(priv->dev, dst, sreq->nr_dst,
DMA_FROM_DEVICE);
} }
memcpy(ctx->base.ctxr->data, ctx->key, ctx->key_len); memcpy(ctx->base.ctxr->data, ctx->key, ctx->key_len);
@ -883,12 +894,18 @@ rdesc_rollback:
cdesc_rollback: cdesc_rollback:
for (i = 0; i < n_cdesc; i++) for (i = 0; i < n_cdesc; i++)
safexcel_ring_rollback_wptr(priv, &priv->ring[ring].cdr); safexcel_ring_rollback_wptr(priv, &priv->ring[ring].cdr);
unmap:
if (src == dst) { if (src == dst) {
dma_unmap_sg(priv->dev, src, sreq->nr_src, DMA_BIDIRECTIONAL); if (sreq->nr_src > 0)
dma_unmap_sg(priv->dev, src, sreq->nr_src,
DMA_BIDIRECTIONAL);
} else { } else {
dma_unmap_sg(priv->dev, src, sreq->nr_src, DMA_TO_DEVICE); if (sreq->nr_src > 0)
dma_unmap_sg(priv->dev, dst, sreq->nr_dst, DMA_FROM_DEVICE); dma_unmap_sg(priv->dev, src, sreq->nr_src,
DMA_TO_DEVICE);
if (sreq->nr_dst > 0)
dma_unmap_sg(priv->dev, dst, sreq->nr_dst,
DMA_FROM_DEVICE);
} }
return ret; return ret;
@ -2589,15 +2606,8 @@ static int safexcel_aead_gcm_setkey(struct crypto_aead *ctfm, const u8 *key,
ctx->key_len = len; ctx->key_len = len;
/* Compute hash key by encrypting zeroes with cipher key */ /* Compute hash key by encrypting zeroes with cipher key */
crypto_cipher_clear_flags(ctx->hkaes, CRYPTO_TFM_REQ_MASK);
crypto_cipher_set_flags(ctx->hkaes, crypto_aead_get_flags(ctfm) &
CRYPTO_TFM_REQ_MASK);
ret = crypto_cipher_setkey(ctx->hkaes, key, len);
if (ret)
return ret;
memset(hashkey, 0, AES_BLOCK_SIZE); memset(hashkey, 0, AES_BLOCK_SIZE);
crypto_cipher_encrypt_one(ctx->hkaes, (u8 *)hashkey, (u8 *)hashkey); aes_encrypt(&aes, (u8 *)hashkey, (u8 *)hashkey);
if (priv->flags & EIP197_TRC_CACHE && ctx->base.ctxr_dma) { if (priv->flags & EIP197_TRC_CACHE && ctx->base.ctxr_dma) {
for (i = 0; i < AES_BLOCK_SIZE / sizeof(u32); i++) { for (i = 0; i < AES_BLOCK_SIZE / sizeof(u32); i++) {
@ -2626,15 +2636,11 @@ static int safexcel_aead_gcm_cra_init(struct crypto_tfm *tfm)
ctx->xcm = EIP197_XCM_MODE_GCM; ctx->xcm = EIP197_XCM_MODE_GCM;
ctx->mode = CONTEXT_CONTROL_CRYPTO_MODE_XCM; /* override default */ ctx->mode = CONTEXT_CONTROL_CRYPTO_MODE_XCM; /* override default */
ctx->hkaes = crypto_alloc_cipher("aes", 0, 0); return 0;
return PTR_ERR_OR_ZERO(ctx->hkaes);
} }
static void safexcel_aead_gcm_cra_exit(struct crypto_tfm *tfm) static void safexcel_aead_gcm_cra_exit(struct crypto_tfm *tfm)
{ {
struct safexcel_cipher_ctx *ctx = crypto_tfm_ctx(tfm);
crypto_free_cipher(ctx->hkaes);
safexcel_aead_cra_exit(tfm); safexcel_aead_cra_exit(tfm);
} }

@@ -30,7 +30,7 @@ struct safexcel_ahash_ctx {
 	bool fb_init_done;
 	bool fb_do_setkey;
-	struct crypto_cipher *kaes;
+	struct crypto_aes_ctx *aes;
 	struct crypto_ahash *fback;
 	struct crypto_shash *shpre;
 	struct shash_desc *shdesc;
@@ -383,7 +383,7 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
 			u32 x;
 			x = ipad[i] ^ ipad[i + 4];
-			cache[i] ^= swab(x);
+			cache[i] ^= swab32(x);
 		}
 	}
 	cache_len = AES_BLOCK_SIZE;
@@ -821,10 +821,10 @@ static int safexcel_ahash_final(struct ahash_request *areq)
 			u32 *result = (void *)areq->result;
 			/* K3 */
-			result[i] = swab(ctx->base.ipad.word[i + 4]);
+			result[i] = swab32(ctx->base.ipad.word[i + 4]);
 		}
 		areq->result[0] ^= 0x80;			// 10- padding
-		crypto_cipher_encrypt_one(ctx->kaes, areq->result, areq->result);
+		aes_encrypt(ctx->aes, areq->result, areq->result);
 		return 0;
 	} else if (unlikely(req->hmac &&
 			    (req->len == req->block_sz) &&
@@ -2083,37 +2083,26 @@ static int safexcel_xcbcmac_setkey(struct crypto_ahash *tfm, const u8 *key,
 				   unsigned int len)
 {
 	struct safexcel_ahash_ctx *ctx = crypto_tfm_ctx(crypto_ahash_tfm(tfm));
-	struct crypto_aes_ctx aes;
 	u32 key_tmp[3 * AES_BLOCK_SIZE / sizeof(u32)];
 	int ret, i;
-	ret = aes_expandkey(&aes, key, len);
+	ret = aes_expandkey(ctx->aes, key, len);
 	if (ret)
 		return ret;
 	/* precompute the XCBC key material */
-	crypto_cipher_clear_flags(ctx->kaes, CRYPTO_TFM_REQ_MASK);
-	crypto_cipher_set_flags(ctx->kaes, crypto_ahash_get_flags(tfm) &
-				CRYPTO_TFM_REQ_MASK);
-	ret = crypto_cipher_setkey(ctx->kaes, key, len);
-	if (ret)
-		return ret;
-	crypto_cipher_encrypt_one(ctx->kaes, (u8 *)key_tmp + 2 * AES_BLOCK_SIZE,
-		"\x1\x1\x1\x1\x1\x1\x1\x1\x1\x1\x1\x1\x1\x1\x1\x1");
-	crypto_cipher_encrypt_one(ctx->kaes, (u8 *)key_tmp,
-		"\x2\x2\x2\x2\x2\x2\x2\x2\x2\x2\x2\x2\x2\x2\x2\x2");
-	crypto_cipher_encrypt_one(ctx->kaes, (u8 *)key_tmp + AES_BLOCK_SIZE,
-		"\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3");
+	aes_encrypt(ctx->aes, (u8 *)key_tmp + 2 * AES_BLOCK_SIZE,
+		    "\x1\x1\x1\x1\x1\x1\x1\x1\x1\x1\x1\x1\x1\x1\x1\x1");
+	aes_encrypt(ctx->aes, (u8 *)key_tmp,
+		    "\x2\x2\x2\x2\x2\x2\x2\x2\x2\x2\x2\x2\x2\x2\x2\x2");
+	aes_encrypt(ctx->aes, (u8 *)key_tmp + AES_BLOCK_SIZE,
+		    "\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3\x3");
 	for (i = 0; i < 3 * AES_BLOCK_SIZE / sizeof(u32); i++)
-		ctx->base.ipad.word[i] = swab(key_tmp[i]);
+		ctx->base.ipad.word[i] = swab32(key_tmp[i]);
-	crypto_cipher_clear_flags(ctx->kaes, CRYPTO_TFM_REQ_MASK);
-	crypto_cipher_set_flags(ctx->kaes, crypto_ahash_get_flags(tfm) &
-				CRYPTO_TFM_REQ_MASK);
-	ret = crypto_cipher_setkey(ctx->kaes,
-				   (u8 *)key_tmp + 2 * AES_BLOCK_SIZE,
-				   AES_MIN_KEY_SIZE);
+	ret = aes_expandkey(ctx->aes,
+			    (u8 *)key_tmp + 2 * AES_BLOCK_SIZE,
+			    AES_MIN_KEY_SIZE);
 	if (ret)
 		return ret;
@@ -2121,7 +2110,6 @@ static int safexcel_xcbcmac_setkey(struct crypto_ahash *tfm, const u8 *key,
 	ctx->key_sz = AES_MIN_KEY_SIZE + 2 * AES_BLOCK_SIZE;
 	ctx->cbcmac = false;
-	memzero_explicit(&aes, sizeof(aes));
 	return 0;
 }
@@ -2130,15 +2118,15 @@ static int safexcel_xcbcmac_cra_init(struct crypto_tfm *tfm)
 	struct safexcel_ahash_ctx *ctx = crypto_tfm_ctx(tfm);
 	safexcel_ahash_cra_init(tfm);
-	ctx->kaes = crypto_alloc_cipher("aes", 0, 0);
-	return PTR_ERR_OR_ZERO(ctx->kaes);
+	ctx->aes = kmalloc(sizeof(*ctx->aes), GFP_KERNEL);
+	return PTR_ERR_OR_ZERO(ctx->aes);
 }
 static void safexcel_xcbcmac_cra_exit(struct crypto_tfm *tfm)
 {
 	struct safexcel_ahash_ctx *ctx = crypto_tfm_ctx(tfm);
-	crypto_free_cipher(ctx->kaes);
+	kfree(ctx->aes);
 	safexcel_ahash_cra_exit(tfm);
 }
@@ -2178,31 +2166,23 @@ static int safexcel_cmac_setkey(struct crypto_ahash *tfm, const u8 *key,
 				unsigned int len)
 {
 	struct safexcel_ahash_ctx *ctx = crypto_tfm_ctx(crypto_ahash_tfm(tfm));
-	struct crypto_aes_ctx aes;
 	__be64 consts[4];
 	u64 _const[2];
 	u8 msb_mask, gfmask;
 	int ret, i;
-	ret = aes_expandkey(&aes, key, len);
+	/* precompute the CMAC key material */
+	ret = aes_expandkey(ctx->aes, key, len);
 	if (ret)
 		return ret;
 	for (i = 0; i < len / sizeof(u32); i++)
-		ctx->base.ipad.word[i + 8] = swab(aes.key_enc[i]);
+		ctx->base.ipad.word[i + 8] = swab32(ctx->aes->key_enc[i]);
-	/* precompute the CMAC key material */
-	crypto_cipher_clear_flags(ctx->kaes, CRYPTO_TFM_REQ_MASK);
-	crypto_cipher_set_flags(ctx->kaes, crypto_ahash_get_flags(tfm) &
-				CRYPTO_TFM_REQ_MASK);
-	ret = crypto_cipher_setkey(ctx->kaes, key, len);
-	if (ret)
-		return ret;
 	/* code below borrowed from crypto/cmac.c */
 	/* encrypt the zero block */
 	memset(consts, 0, AES_BLOCK_SIZE);
-	crypto_cipher_encrypt_one(ctx->kaes, (u8 *)consts, (u8 *)consts);
+	aes_encrypt(ctx->aes, (u8 *)consts, (u8 *)consts);
 	gfmask = 0x87;
 	_const[0] = be64_to_cpu(consts[1]);
@@ -2234,7 +2214,6 @@ static int safexcel_cmac_setkey(struct crypto_ahash *tfm, const u8 *key,
 	}
 	ctx->cbcmac = false;
-	memzero_explicit(&aes, sizeof(aes));
 	return 0;
 }

@@ -42,7 +42,7 @@ config CRYPTO_DEV_KEEMBAY_OCS_AES_SM4_CTS
 config CRYPTO_DEV_KEEMBAY_OCS_ECC
 	tristate "Support for Intel Keem Bay OCS ECC HW acceleration"
 	depends on ARCH_KEEMBAY || COMPILE_TEST
-	depends on OF || COMPILE_TEST
+	depends on OF
 	depends on HAS_IOMEM
 	select CRYPTO_ECDH
 	select CRYPTO_ENGINE
@@ -64,7 +64,7 @@ config CRYPTO_DEV_KEEMBAY_OCS_HCU
 	select CRYPTO_ENGINE
 	depends on HAS_IOMEM
 	depends on ARCH_KEEMBAY || COMPILE_TEST
-	depends on OF || COMPILE_TEST
+	depends on OF
 	help
 	  Support for Intel Keem Bay Offload and Crypto Subsystem (OCS) Hash
 	  Control Unit (HCU) hardware acceleration for use with Crypto API.

@@ -403,7 +403,7 @@ union otx_cptx_pf_exe_bist_status {
  * big-endian format in memory.
  * iqb_ldwb:1 [7:7](R/W) Instruction load don't write back.
  *   0 = The hardware issues NCB transient load (LDT) towards the cache,
- *   which if the line hits and is is dirty will cause the line to be
+ *   which if the line hits and is dirty will cause the line to be
  *   written back before being replaced.
  *   1 = The hardware issues NCB LDWB read-and-invalidate command towards
  *   the cache when fetching the last word of instructions; as a result the

@@ -97,7 +97,7 @@ static int dev_supports_eng_type(struct otx_cpt_eng_grps *eng_grps,
 static void set_ucode_filename(struct otx_cpt_ucode *ucode,
 			       const char *filename)
 {
-	strlcpy(ucode->filename, filename, OTX_CPT_UCODE_NAME_LENGTH);
+	strscpy(ucode->filename, filename, OTX_CPT_UCODE_NAME_LENGTH);
 }
 static char *get_eng_type_str(int eng_type)
@@ -138,7 +138,7 @@ static int get_ucode_type(struct otx_cpt_ucode_hdr *ucode_hdr, int *ucode_type)
 	u32 i, val = 0;
 	u8 nn;
-	strlcpy(tmp_ver_str, ucode_hdr->ver_str, OTX_CPT_UCODE_VER_STR_SZ);
+	strscpy(tmp_ver_str, ucode_hdr->ver_str, OTX_CPT_UCODE_VER_STR_SZ);
 	for (i = 0; i < strlen(tmp_ver_str); i++)
 		tmp_ver_str[i] = tolower(tmp_ver_str[i]);
@@ -286,6 +286,7 @@ static int process_tar_file(struct device *dev,
 	struct tar_ucode_info_t *tar_info;
 	struct otx_cpt_ucode_hdr *ucode_hdr;
 	int ucode_type, ucode_size;
+	unsigned int code_length;
 	/*
 	 * If size is less than microcode header size then don't report
@@ -303,7 +304,13 @@ static int process_tar_file(struct device *dev,
 	if (get_ucode_type(ucode_hdr, &ucode_type))
 		return 0;
-	ucode_size = ntohl(ucode_hdr->code_length) * 2;
+	code_length = ntohl(ucode_hdr->code_length);
+	if (code_length >= INT_MAX / 2) {
+		dev_err(dev, "Invalid code_length %u\n", code_length);
+		return -EINVAL;
+	}
+	ucode_size = code_length * 2;
 	if (!ucode_size || (size < round_up(ucode_size, 16) +
 	    sizeof(struct otx_cpt_ucode_hdr) + OTX_CPT_UCODE_SIGN_LEN)) {
 		dev_err(dev, "Ucode %s invalid size\n", filename);
@@ -886,6 +893,7 @@ static int ucode_load(struct device *dev, struct otx_cpt_ucode *ucode,
 {
 	struct otx_cpt_ucode_hdr *ucode_hdr;
 	const struct firmware *fw;
+	unsigned int code_length;
 	int ret;
 	set_ucode_filename(ucode, ucode_filename);
@@ -896,7 +904,13 @@ static int ucode_load(struct device *dev, struct otx_cpt_ucode *ucode,
 	ucode_hdr = (struct otx_cpt_ucode_hdr *) fw->data;
 	memcpy(ucode->ver_str, ucode_hdr->ver_str, OTX_CPT_UCODE_VER_STR_SZ);
 	ucode->ver_num = ucode_hdr->ver_num;
-	ucode->size = ntohl(ucode_hdr->code_length) * 2;
+	code_length = ntohl(ucode_hdr->code_length);
+	if (code_length >= INT_MAX / 2) {
+		dev_err(dev, "Ucode invalid code_length %u\n", code_length);
+		ret = -EINVAL;
+		goto release_fw;
+	}
+	ucode->size = code_length * 2;
 	if (!ucode->size || (fw->size < round_up(ucode->size, 16)
 	    + sizeof(struct otx_cpt_ucode_hdr) + OTX_CPT_UCODE_SIGN_LEN)) {
 		dev_err(dev, "Ucode %s invalid size\n", ucode_filename);
@@ -1328,7 +1342,7 @@ static ssize_t ucode_load_store(struct device *dev,
 	eng_grps = container_of(attr, struct otx_cpt_eng_grps, ucode_load_attr);
 	err_msg = "Invalid engine group format";
-	strlcpy(tmp_buf, buf, OTX_CPT_UCODE_NAME_LENGTH);
+	strscpy(tmp_buf, buf, OTX_CPT_UCODE_NAME_LENGTH);
 	start = tmp_buf;
 	has_se = has_ie = has_ae = false;

@@ -661,7 +661,7 @@ static ssize_t vf_type_show(struct device *dev,
 		msg = "Invalid";
 	}
-	return scnprintf(buf, PAGE_SIZE, "%s\n", msg);
+	return sysfs_emit(buf, "%s\n", msg);
 }
 static ssize_t vf_engine_group_show(struct device *dev,
@@ -670,7 +670,7 @@ static ssize_t vf_engine_group_show(struct device *dev,
 {
 	struct otx_cptvf *cptvf = dev_get_drvdata(dev);
-	return scnprintf(buf, PAGE_SIZE, "%d\n", cptvf->vfgrp);
+	return sysfs_emit(buf, "%d\n", cptvf->vfgrp);
 }
 static ssize_t vf_engine_group_store(struct device *dev,
@@ -706,7 +706,7 @@ static ssize_t vf_coalesc_time_wait_show(struct device *dev,
 {
 	struct otx_cptvf *cptvf = dev_get_drvdata(dev);
-	return scnprintf(buf, PAGE_SIZE, "%d\n",
+	return sysfs_emit(buf, "%d\n",
 			 cptvf_read_vq_done_timewait(cptvf));
 }
@@ -716,7 +716,7 @@ static ssize_t vf_coalesc_num_wait_show(struct device *dev,
 {
 	struct otx_cptvf *cptvf = dev_get_drvdata(dev);
-	return scnprintf(buf, PAGE_SIZE, "%d\n",
+	return sysfs_emit(buf, "%d\n",
 			 cptvf_read_vq_done_numwait(cptvf));
 }

@@ -159,12 +159,10 @@ static int cptvf_send_msg_to_pf_timeout(struct otx_cptvf *cptvf,
 int otx_cptvf_check_pf_ready(struct otx_cptvf *cptvf)
 {
 	struct otx_cpt_mbox mbx = {};
-	int ret;
 	mbx.msg = OTX_CPT_MSG_READY;
-	ret = cptvf_send_msg_to_pf_timeout(cptvf, &mbx);
-	return ret;
+	return cptvf_send_msg_to_pf_timeout(cptvf, &mbx);
 }
@@ -174,13 +172,11 @@ int otx_cptvf_check_pf_ready(struct otx_cptvf *cptvf)
 int otx_cptvf_send_vq_size_msg(struct otx_cptvf *cptvf)
 {
 	struct otx_cpt_mbox mbx = {};
-	int ret;
 	mbx.msg = OTX_CPT_MSG_QLEN;
 	mbx.data = cptvf->qsize;
-	ret = cptvf_send_msg_to_pf_timeout(cptvf, &mbx);
-	return ret;
+	return cptvf_send_msg_to_pf_timeout(cptvf, &mbx);
 }
@@ -208,14 +204,12 @@ int otx_cptvf_send_vf_to_grp_msg(struct otx_cptvf *cptvf, int group)
 int otx_cptvf_send_vf_priority_msg(struct otx_cptvf *cptvf)
 {
 	struct otx_cpt_mbox mbx = {};
-	int ret;
 	mbx.msg = OTX_CPT_MSG_VQ_PRIORITY;
 	/* Convey group of the VF */
 	mbx.data = cptvf->priority;
-	ret = cptvf_send_msg_to_pf_timeout(cptvf, &mbx);
-	return ret;
+	return cptvf_send_msg_to_pf_timeout(cptvf, &mbx);
 }
@@ -224,12 +218,10 @@ int otx_cptvf_send_vf_priority_msg(struct otx_cptvf *cptvf)
 int otx_cptvf_send_vf_up(struct otx_cptvf *cptvf)
 {
 	struct otx_cpt_mbox mbx = {};
-	int ret;
 	mbx.msg = OTX_CPT_MSG_VF_UP;
-	ret = cptvf_send_msg_to_pf_timeout(cptvf, &mbx);
-	return ret;
+	return cptvf_send_msg_to_pf_timeout(cptvf, &mbx);
 }
@@ -238,10 +230,8 @@ int otx_cptvf_send_vf_up(struct otx_cptvf *cptvf)
 int otx_cptvf_send_vf_down(struct otx_cptvf *cptvf)
 {
 	struct otx_cpt_mbox mbx = {};
-	int ret;
 	mbx.msg = OTX_CPT_MSG_VF_DOWN;
-	ret = cptvf_send_msg_to_pf_timeout(cptvf, &mbx);
-	return ret;
+	return cptvf_send_msg_to_pf_timeout(cptvf, &mbx);
 }

@@ -68,7 +68,7 @@ static int is_2nd_ucode_used(struct otx2_cpt_eng_grp_info *eng_grp)
 static void set_ucode_filename(struct otx2_cpt_ucode *ucode,
 			       const char *filename)
 {
-	strlcpy(ucode->filename, filename, OTX2_CPT_NAME_LENGTH);
+	strscpy(ucode->filename, filename, OTX2_CPT_NAME_LENGTH);
 }
 static char *get_eng_type_str(int eng_type)
@@ -126,7 +126,7 @@ static int get_ucode_type(struct device *dev,
 	int i, val = 0;
 	u8 nn;
-	strlcpy(tmp_ver_str, ucode_hdr->ver_str, OTX2_CPT_UCODE_VER_STR_SZ);
+	strscpy(tmp_ver_str, ucode_hdr->ver_str, OTX2_CPT_UCODE_VER_STR_SZ);
 	for (i = 0; i < strlen(tmp_ver_str); i++)
 		tmp_ver_str[i] = tolower(tmp_ver_str[i]);

@@ -191,7 +191,6 @@ int otx2_cptvf_send_kvf_limits_msg(struct otx2_cptvf_dev *cptvf)
 	struct otx2_mbox *mbox = &cptvf->pfvf_mbox;
 	struct pci_dev *pdev = cptvf->pdev;
 	struct mbox_msghdr *req;
-	int ret;
 	req = (struct mbox_msghdr *)
 	      otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req),
@@ -204,7 +203,5 @@ int otx2_cptvf_send_kvf_limits_msg(struct otx2_cptvf_dev *cptvf)
 	req->sig = OTX2_MBOX_REQ_SIG;
 	req->pcifunc = OTX2_CPT_RVU_PFFUNC(cptvf->vf_id, 0);
-	ret = otx2_cpt_send_mbox_msg(mbox, pdev);
-	return ret;
+	return otx2_cpt_send_mbox_msg(mbox, pdev);
 }

@@ -1494,7 +1494,7 @@ static void n2_unregister_algs(void)
  *
  * So we have to back-translate, going through the 'intr' and 'ino'
  * property tables of the n2cp MDESC node, matching it with the OF
- * 'interrupts' property entries, in order to to figure out which
+ * 'interrupts' property entries, in order to figure out which
  * devino goes to which already-translated IRQ.
  */
 static int find_devino_index(struct platform_device *dev, struct spu_mdesc_info *ip,

@@ -134,7 +134,6 @@ static int generate_b0(u8 *iv, unsigned int assoclen, unsigned int authsize,
 			unsigned int cryptlen, u8 *b0)
 {
 	unsigned int l, lp, m = authsize;
-	int rc;
 	memcpy(b0, iv, 16);
@@ -148,9 +147,7 @@ static int generate_b0(u8 *iv, unsigned int assoclen, unsigned int authsize,
 	if (assoclen)
 		*b0 |= 64;
-	rc = set_msg_len(b0 + 16 - l, cryptlen, l);
-	return rc;
+	return set_msg_len(b0 + 16 - l, cryptlen, l);
 }
 static int generate_pat(u8 *iv,

@@ -251,13 +251,13 @@ int adf_cfg_add_key_value_param(struct adf_accel_dev *accel_dev,
 		return -ENOMEM;
 	INIT_LIST_HEAD(&key_val->list);
-	strlcpy(key_val->key, key, sizeof(key_val->key));
+	strscpy(key_val->key, key, sizeof(key_val->key));
 	if (type == ADF_DEC) {
 		snprintf(key_val->val, ADF_CFG_MAX_VAL_LEN_IN_BYTES,
 			 "%ld", (*((long *)val)));
 	} else if (type == ADF_STR) {
-		strlcpy(key_val->val, (char *)val, sizeof(key_val->val));
+		strscpy(key_val->val, (char *)val, sizeof(key_val->val));
 	} else if (type == ADF_HEX) {
 		snprintf(key_val->val, ADF_CFG_MAX_VAL_LEN_IN_BYTES,
 			 "0x%lx", (unsigned long)val);
@@ -315,7 +315,7 @@ int adf_cfg_section_add(struct adf_accel_dev *accel_dev, const char *name)
 	if (!sec)
 		return -ENOMEM;
-	strlcpy(sec->name, name, sizeof(sec->name));
+	strscpy(sec->name, name, sizeof(sec->name));
 	INIT_LIST_HEAD(&sec->param_head);
 	down_write(&cfg->lock);
 	list_add_tail(&sec->list, &cfg->sec_list);

@@ -16,6 +16,9 @@
 #include "adf_cfg_common.h"
 #include "adf_cfg_user.h"
+#define ADF_CFG_MAX_SECTION 512
+#define ADF_CFG_MAX_KEY_VAL 256
 #define DEVICE_NAME "qat_adf_ctl"
 static DEFINE_MUTEX(adf_ctl_lock);
@@ -137,10 +140,11 @@ static int adf_copy_key_value_data(struct adf_accel_dev *accel_dev,
 	struct adf_user_cfg_key_val key_val;
 	struct adf_user_cfg_key_val *params_head;
 	struct adf_user_cfg_section section, *section_head;
+	int i, j;
 	section_head = ctl_data->config_section;
-	while (section_head) {
+	for (i = 0; section_head && i < ADF_CFG_MAX_SECTION; i++) {
 		if (copy_from_user(&section, (void __user *)section_head,
 				   sizeof(*section_head))) {
 			dev_err(&GET_DEV(accel_dev),
@@ -156,7 +160,7 @@ static int adf_copy_key_value_data(struct adf_accel_dev *accel_dev,
 		params_head = section.params;
-		while (params_head) {
+		for (j = 0; params_head && j < ADF_CFG_MAX_KEY_VAL; j++) {
 			if (copy_from_user(&key_val, (void __user *)params_head,
 					   sizeof(key_val))) {
 				dev_err(&GET_DEV(accel_dev),
@@ -363,7 +367,7 @@ static int adf_ctl_ioctl_get_status(struct file *fp, unsigned int cmd,
 	dev_info.num_logical_accel = hw_data->num_logical_accel;
 	dev_info.banks_per_accel = hw_data->num_banks
 					/ hw_data->num_logical_accel;
-	strlcpy(dev_info.name, hw_data->dev_class->name, sizeof(dev_info.name));
+	strscpy(dev_info.name, hw_data->dev_class->name, sizeof(dev_info.name));
 	dev_info.instance_id = hw_data->instance_id;
 	dev_info.type = hw_data->dev_class->type;
 	dev_info.bus = accel_to_pci_dev(accel_dev)->bus->number;

@@ -107,7 +107,7 @@ do { \
  * Timeout is in cycles. Clock speed may vary across products but this
  * value should be a few milli-seconds.
  */
-#define ADF_SSM_WDT_DEFAULT_VALUE	0x200000
+#define ADF_SSM_WDT_DEFAULT_VALUE	0x7000000ULL
 #define ADF_SSM_WDT_PKE_DEFAULT_VALUE	0x8000000
 #define ADF_SSMWDTL_OFFSET	0x54
 #define ADF_SSMWDTH_OFFSET	0x5C

@@ -96,7 +96,7 @@ int adf_ring_debugfs_add(struct adf_etr_ring_data *ring, const char *name)
 	if (!ring_debug)
 		return -ENOMEM;
-	strlcpy(ring_debug->ring_name, name, sizeof(ring_debug->ring_name));
+	strscpy(ring_debug->ring_name, name, sizeof(ring_debug->ring_name));
 	snprintf(entry_name, sizeof(entry_name), "ring_%02d",
 		 ring->ring_number);

@@ -86,7 +86,8 @@
 	 ICP_QAT_CSS_FWSK_MODULUS_LEN(handle) + \
 	 ICP_QAT_CSS_FWSK_EXPONENT_LEN(handle) + \
 	 ICP_QAT_CSS_SIGNATURE_LEN(handle))
-#define ICP_QAT_CSS_MAX_IMAGE_LEN	0x40000
+#define ICP_QAT_CSS_RSA4K_MAX_IMAGE_LEN	0x40000
+#define ICP_QAT_CSS_RSA3K_MAX_IMAGE_LEN	0x30000
 #define ICP_QAT_CTX_MODE(ae_mode) ((ae_mode) & 0xf)
 #define ICP_QAT_NN_MODE(ae_mode) (((ae_mode) >> 0x4) & 0xf)

@@ -673,11 +673,14 @@ static void qat_alg_free_bufl(struct qat_crypto_instance *inst,
 	dma_addr_t blpout = qat_req->buf.bloutp;
 	size_t sz = qat_req->buf.sz;
 	size_t sz_out = qat_req->buf.sz_out;
+	int bl_dma_dir;
 	int i;
+	bl_dma_dir = blp != blpout ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
 	for (i = 0; i < bl->num_bufs; i++)
 		dma_unmap_single(dev, bl->bufers[i].addr,
-				 bl->bufers[i].len, DMA_BIDIRECTIONAL);
+				 bl->bufers[i].len, bl_dma_dir);
 	dma_unmap_single(dev, blp, sz, DMA_TO_DEVICE);
@@ -691,7 +694,7 @@ static void qat_alg_free_bufl(struct qat_crypto_instance *inst,
 		for (i = bufless; i < blout->num_bufs; i++) {
 			dma_unmap_single(dev, blout->bufers[i].addr,
 					 blout->bufers[i].len,
-					 DMA_BIDIRECTIONAL);
+					 DMA_FROM_DEVICE);
 		}
 		dma_unmap_single(dev, blpout, sz_out, DMA_TO_DEVICE);
@@ -716,6 +719,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 	struct scatterlist *sg;
 	size_t sz_out, sz = struct_size(bufl, bufers, n);
 	int node = dev_to_node(&GET_DEV(inst->accel_dev));
+	int bufl_dma_dir;
 	if (unlikely(!n))
 		return -EINVAL;
@@ -733,6 +737,8 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 		qat_req->buf.sgl_src_valid = true;
 	}
+	bufl_dma_dir = sgl != sglout ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
 	for_each_sg(sgl, sg, n, i)
 		bufl->bufers[i].addr = DMA_MAPPING_ERROR;
@@ -744,7 +750,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 		bufl->bufers[y].addr = dma_map_single(dev, sg_virt(sg),
 						      sg->length,
-						      DMA_BIDIRECTIONAL);
+						      bufl_dma_dir);
 		bufl->bufers[y].len = sg->length;
 		if (unlikely(dma_mapping_error(dev, bufl->bufers[y].addr)))
 			goto err_in;
@@ -787,7 +793,7 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 			bufers[y].addr = dma_map_single(dev, sg_virt(sg),
 							sg->length,
-							DMA_BIDIRECTIONAL);
+							DMA_FROM_DEVICE);
 			if (unlikely(dma_mapping_error(dev, bufers[y].addr)))
 				goto err_out;
 			bufers[y].len = sg->length;
@@ -817,7 +823,7 @@ err_out:
 		if (!dma_mapping_error(dev, buflout->bufers[i].addr))
 			dma_unmap_single(dev, buflout->bufers[i].addr,
 					 buflout->bufers[i].len,
-					 DMA_BIDIRECTIONAL);
+					 DMA_FROM_DEVICE);
 	if (!qat_req->buf.sgl_dst_valid)
 		kfree(buflout);
@@ -831,7 +837,7 @@ err_in:
 		if (!dma_mapping_error(dev, bufl->bufers[i].addr))
 			dma_unmap_single(dev, bufl->bufers[i].addr,
 					 bufl->bufers[i].len,
-					 DMA_BIDIRECTIONAL);
+					 bufl_dma_dir);
 	if (!qat_req->buf.sgl_src_valid)
 		kfree(bufl);

@@ -332,14 +332,14 @@ static int qat_dh_compute_value(struct kpp_request *req)
 	qat_req->in.dh.in_tab[n_input_params] = 0;
 	qat_req->out.dh.out_tab[1] = 0;
 	/* Mapping in.in.b or in.in_g2.xa is the same */
-	qat_req->phy_in = dma_map_single(dev, &qat_req->in.dh.in.b,
-					 sizeof(qat_req->in.dh.in.b),
+	qat_req->phy_in = dma_map_single(dev, &qat_req->in.dh,
+					 sizeof(struct qat_dh_input_params),
 					 DMA_TO_DEVICE);
 	if (unlikely(dma_mapping_error(dev, qat_req->phy_in)))
 		goto unmap_dst;
-	qat_req->phy_out = dma_map_single(dev, &qat_req->out.dh.r,
-					  sizeof(qat_req->out.dh.r),
+	qat_req->phy_out = dma_map_single(dev, &qat_req->out.dh,
+					  sizeof(struct qat_dh_output_params),
 					  DMA_TO_DEVICE);
 	if (unlikely(dma_mapping_error(dev, qat_req->phy_out)))
 		goto unmap_in_params;
@@ -729,14 +729,14 @@ static int qat_rsa_enc(struct akcipher_request *req)
 	qat_req->in.rsa.in_tab[3] = 0;
 	qat_req->out.rsa.out_tab[1] = 0;
-	qat_req->phy_in = dma_map_single(dev, &qat_req->in.rsa.enc.m,
-					 sizeof(qat_req->in.rsa.enc.m),
+	qat_req->phy_in = dma_map_single(dev, &qat_req->in.rsa,
+					 sizeof(struct qat_rsa_input_params),
 					 DMA_TO_DEVICE);
 	if (unlikely(dma_mapping_error(dev, qat_req->phy_in)))
 		goto unmap_dst;
-	qat_req->phy_out = dma_map_single(dev, &qat_req->out.rsa.enc.c,
-					  sizeof(qat_req->out.rsa.enc.c),
+	qat_req->phy_out = dma_map_single(dev, &qat_req->out.rsa,
+					  sizeof(struct qat_rsa_output_params),
 					  DMA_TO_DEVICE);
 	if (unlikely(dma_mapping_error(dev, qat_req->phy_out)))
 		goto unmap_in_params;
@@ -875,14 +875,14 @@ static int qat_rsa_dec(struct akcipher_request *req)
 	else
 		qat_req->in.rsa.in_tab[3] = 0;
 	qat_req->out.rsa.out_tab[1] = 0;
-	qat_req->phy_in = dma_map_single(dev, &qat_req->in.rsa.dec.c,
-					 sizeof(qat_req->in.rsa.dec.c),
+	qat_req->phy_in = dma_map_single(dev, &qat_req->in.rsa,
+					 sizeof(struct qat_rsa_input_params),
 					 DMA_TO_DEVICE);
 	if (unlikely(dma_mapping_error(dev, qat_req->phy_in)))
 		goto unmap_dst;
-	qat_req->phy_out = dma_map_single(dev, &qat_req->out.rsa.dec.m,
-					  sizeof(qat_req->out.rsa.dec.m),
+	qat_req->phy_out = dma_map_single(dev, &qat_req->out.rsa,
+					  sizeof(struct qat_rsa_output_params),
 					  DMA_TO_DEVICE);
 	if (unlikely(dma_mapping_error(dev, qat_req->phy_out)))
 		goto unmap_in_params;

@@ -1367,6 +1367,48 @@ static void qat_uclo_ummap_auth_fw(struct icp_qat_fw_loader_handle *handle,
 	}
 }
+static int qat_uclo_check_image(struct icp_qat_fw_loader_handle *handle,
+				char *image, unsigned int size,
+				unsigned int fw_type)
+{
+	char *fw_type_name = fw_type ? "MMP" : "AE";
+	unsigned int css_dword_size = sizeof(u32);
+
+	if (handle->chip_info->fw_auth) {
+		struct icp_qat_css_hdr *css_hdr = (struct icp_qat_css_hdr *)image;
+		unsigned int header_len = ICP_QAT_AE_IMG_OFFSET(handle);
+
+		if ((css_hdr->header_len * css_dword_size) != header_len)
+			goto err;
+		if ((css_hdr->size * css_dword_size) != size)
+			goto err;
+		if (fw_type != css_hdr->fw_type)
+			goto err;
+		if (size <= header_len)
+			goto err;
+		size -= header_len;
+	}
+
+	if (fw_type == CSS_AE_FIRMWARE) {
+		if (size < sizeof(struct icp_qat_simg_ae_mode *) +
+		    ICP_QAT_SIMG_AE_INIT_SEQ_LEN)
+			goto err;
+		if (size > ICP_QAT_CSS_RSA4K_MAX_IMAGE_LEN)
+			goto err;
+	} else if (fw_type == CSS_MMP_FIRMWARE) {
+		if (size > ICP_QAT_CSS_RSA3K_MAX_IMAGE_LEN)
+			goto err;
+	} else {
+		pr_err("QAT: Unsupported firmware type\n");
+		return -EINVAL;
+	}
+	return 0;
+
+err:
+	pr_err("QAT: Invalid %s firmware image\n", fw_type_name);
+	return -EINVAL;
+}
+
 static int qat_uclo_map_auth_fw(struct icp_qat_fw_loader_handle *handle,
 				char *image, unsigned int size,
 				struct icp_qat_fw_auth_desc **desc)
@@ -1379,7 +1421,7 @@ static int qat_uclo_map_auth_fw(struct icp_qat_fw_loader_handle *handle,
 	struct icp_qat_simg_ae_mode *simg_ae_mode;
 	struct icp_firml_dram_desc img_desc;
-	if (size > (ICP_QAT_AE_IMG_OFFSET(handle) + ICP_QAT_CSS_MAX_IMAGE_LEN)) {
+	if (size > (ICP_QAT_AE_IMG_OFFSET(handle) + ICP_QAT_CSS_RSA4K_MAX_IMAGE_LEN)) {
 		pr_err("QAT: error, input image size overflow %d\n", size);
 		return -EINVAL;
 	}
@@ -1547,6 +1589,11 @@ int qat_uclo_wr_mimage(struct icp_qat_fw_loader_handle *handle,
 {
 	struct icp_qat_fw_auth_desc *desc = NULL;
 	int status = 0;
+	int ret;
+
+	ret = qat_uclo_check_image(handle, addr_ptr, mem_size, CSS_MMP_FIRMWARE);
+	if (ret)
+		return ret;
 	if (handle->chip_info->fw_auth) {
 		status = qat_uclo_map_auth_fw(handle, addr_ptr, mem_size, &desc);
@@ -2018,8 +2065,15 @@ static int qat_uclo_wr_suof_img(struct icp_qat_fw_loader_handle *handle)
 	struct icp_qat_fw_auth_desc *desc = NULL;
 	struct icp_qat_suof_handle *sobj_handle = handle->sobj_handle;
 	struct icp_qat_suof_img_hdr *simg_hdr = sobj_handle->img_table.simg_hdr;
+	int ret;
 	for (i = 0; i < sobj_handle->img_table.num_simgs; i++) {
+		ret = qat_uclo_check_image(handle, simg_hdr[i].simg_buf,
+					   simg_hdr[i].simg_len,
+					   CSS_AE_FIRMWARE);
+		if (ret)
+			return ret;
+
 		if (qat_uclo_map_auth_fw(handle,
 					 (char *)simg_hdr[i].simg_buf,
 					 (unsigned int)

@@ -450,8 +450,8 @@ qce_aead_async_req_handle(struct crypto_async_request *async_req)
 	if (ret)
 		return ret;
 	dst_nents = dma_map_sg(qce->dev, rctx->dst_sg, rctx->dst_nents, dir_dst);
-	if (dst_nents < 0) {
-		ret = dst_nents;
+	if (!dst_nents) {
+		ret = -EIO;
 		goto error_free;
 	}

@@ -97,14 +97,16 @@ static int qce_ahash_async_req_handle(struct crypto_async_request *async_req)
 	}
 	ret = dma_map_sg(qce->dev, req->src, rctx->src_nents, DMA_TO_DEVICE);
-	if (ret < 0)
-		return ret;
+	if (!ret)
+		return -EIO;
 	sg_init_one(&rctx->result_sg, qce->dma.result_buf, QCE_RESULT_BUF_SZ);
 	ret = dma_map_sg(qce->dev, &rctx->result_sg, 1, DMA_FROM_DEVICE);
-	if (ret < 0)
+	if (!ret) {
+		ret = -EIO;
 		goto error_unmap_src;
+	}
 	ret = qce_dma_prep_sgs(&qce->dma, req->src, rctx->src_nents,
 			       &rctx->result_sg, 1, qce_ahash_done, async_req);

@@ -124,15 +124,15 @@ qce_skcipher_async_req_handle(struct crypto_async_request *async_req)
 	rctx->dst_sg = rctx->dst_tbl.sgl;
 	dst_nents = dma_map_sg(qce->dev, rctx->dst_sg, rctx->dst_nents, dir_dst);
-	if (dst_nents < 0) {
-		ret = dst_nents;
+	if (!dst_nents) {
+		ret = -EIO;
 		goto error_free;
 	}
 	if (diff_dst) {
 		src_nents = dma_map_sg(qce->dev, req->src, rctx->src_nents, dir_src);
-		if (src_nents < 0) {
-			ret = src_nents;
+		if (!src_nents) {
+			ret = -EIO;
 			goto error_unmap_dst;
 		}
 		rctx->src_sg = req->src;

@@ -9,6 +9,7 @@
 #include <linux/crypto.h>
 #include <linux/io.h>
 #include <linux/iopoll.h>
+#include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/of.h>
 #include <linux/platform_device.h>
@@ -201,15 +202,13 @@ static int qcom_rng_remove(struct platform_device *pdev)
 	return 0;
 }
-#if IS_ENABLED(CONFIG_ACPI)
-static const struct acpi_device_id qcom_rng_acpi_match[] = {
+static const struct acpi_device_id __maybe_unused qcom_rng_acpi_match[] = {
 	{ .id = "QCOM8160", .driver_data = 1 },
 	{}
 };
 MODULE_DEVICE_TABLE(acpi, qcom_rng_acpi_match);
-#endif
-static const struct of_device_id qcom_rng_of_match[] = {
+static const struct of_device_id __maybe_unused qcom_rng_of_match[] = {
 	{ .compatible = "qcom,prng", .data = (void *)0},
 	{ .compatible = "qcom,prng-ee", .data = (void *)1},
 	{}

@@ -26,10 +26,10 @@
 #include <linux/kernel.h>
 #include <linux/kthread.h>
 #include <linux/module.h>
-#include <linux/mutex.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
 #include <linux/platform_device.h>
+#include <linux/spinlock.h>
 #define SHA_BUFFER_LEN	PAGE_SIZE
 #define SAHARA_MAX_SHA_BLOCK_SIZE	SHA256_BLOCK_SIZE
@@ -196,7 +196,7 @@ struct sahara_dev {
 	void __iomem	*regs_base;
 	struct clk	*clk_ipg;
 	struct clk	*clk_ahb;
-	struct mutex	queue_mutex;
+	spinlock_t	queue_spinlock;
 	struct task_struct	*kthread;
 	struct completion	dma_completion;
@@ -487,13 +487,13 @@ static int sahara_hw_descriptor_create(struct sahara_dev *dev)
 	ret = dma_map_sg(dev->device, dev->in_sg, dev->nb_in_sg,
 			 DMA_TO_DEVICE);
-	if (ret != dev->nb_in_sg) {
+	if (!ret) {
 		dev_err(dev->device, "couldn't map in sg\n");
 		goto unmap_in;
 	}
 	ret = dma_map_sg(dev->device, dev->out_sg, dev->nb_out_sg,
 			 DMA_FROM_DEVICE);
-	if (ret != dev->nb_out_sg) {
+	if (!ret) {
 		dev_err(dev->device, "couldn't map out sg\n");
 		goto unmap_out;
 	}
@@ -642,9 +642,9 @@ static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode)
 	rctx->mode = mode;
-	mutex_lock(&dev->queue_mutex);
+	spin_lock_bh(&dev->queue_spinlock);
 	err = crypto_enqueue_request(&dev->queue, &req->base);
-	mutex_unlock(&dev->queue_mutex);
+	spin_unlock_bh(&dev->queue_spinlock);
 	wake_up_process(dev->kthread);
@@ -1043,10 +1043,10 @@ static int sahara_queue_manage(void *data)
 	do {
 		__set_current_state(TASK_INTERRUPTIBLE);
-		mutex_lock(&dev->queue_mutex);
+		spin_lock_bh(&dev->queue_spinlock);
 		backlog = crypto_get_backlog(&dev->queue);
 		async_req = crypto_dequeue_request(&dev->queue);
-		mutex_unlock(&dev->queue_mutex);
+		spin_unlock_bh(&dev->queue_spinlock);
 		if (backlog)
 			backlog->complete(backlog, -EINPROGRESS);
@@ -1092,9 +1092,9 @@ static int sahara_sha_enqueue(struct ahash_request *req, int last)
 		rctx->first = 1;
 	}
-	mutex_lock(&dev->queue_mutex);
+	spin_lock_bh(&dev->queue_spinlock);
 	ret = crypto_enqueue_request(&dev->queue, &req->base);
-	mutex_unlock(&dev->queue_mutex);
+	spin_unlock_bh(&dev->queue_spinlock);
 	wake_up_process(dev->kthread);
@@ -1449,7 +1449,7 @@ static int sahara_probe(struct platform_device *pdev)
 	crypto_init_queue(&dev->queue, SAHARA_QUEUE_LENGTH);
-	mutex_init(&dev->queue_mutex);
+	spin_lock_init(&dev->queue_spinlock);
 	dev_ptr = dev;

Some files were not shown because too many files have changed in this diff.