Poly1305 is a fast message authenticator designed by Daniel J. Bernstein.
It is further defined in RFC7539 as a building block for the ChaCha20-Poly1305
AEAD for use in IETF protocols.
This is a portable C implementation of the algorithm without architecture
specific optimizations, based on public domain code by Daniel J. Bernstein and
Andrew Moon.
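For reference, the MAC being computed is the standard Poly1305 construction
(restated here from the RFC, not something added by this patch): the 32-byte
key is split into r (clamped as the RFC describes) and s, the message is
split into 16-byte chunks c_1..c_n, each read as a little-endian number with
a trailing 0x01 byte appended, and the tag is

    ((c_1*r^n + c_2*r^(n-1) + ... + c_n*r) mod (2^130 - 5) + s) mod 2^128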
Signed-off-by: Martin Willi <martin@strongswan.org>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
ChaCha20 is a high-speed stream cipher with a 256-bit key, designed by
Daniel J. Bernstein. It is further specified in RFC7539 for use in IETF
protocols as a building block for the ChaCha20-Poly1305 AEAD.
This is a portable C implementation without any architecture specific
optimizations. It uses a 16-byte IV consisting of the initial block counter
followed by the 12-byte ChaCha20 nonce. Some algorithms require an explicit
initial counter value, for example the AEAD construction mentioned above.
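A minimal sketch of that IV layout (illustrative only, not the kernel code;
it assumes the little-endian 32-bit counter of the RFC 7539 construction):

    #include <stdint.h>
    #include <string.h>

    /* Assemble the 16-byte IV: 4-byte little-endian initial block counter
     * followed by the 12-byte ChaCha20 nonce. */
    static void chacha20_build_iv(uint8_t iv[16], uint32_t counter,
                                  const uint8_t nonce[12])
    {
            iv[0] = counter & 0xff;
            iv[1] = (counter >> 8) & 0xff;
            iv[2] = (counter >> 16) & 0xff;
            iv[3] = (counter >> 24) & 0xff;
            memcpy(iv + 4, nonce, 12);
    }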
Signed-off-by: Martin Willi <martin@strongswan.org>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This reverts commit f858c7bcca as
the algif_aead interface has been switched over to the new AEAD
interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CPU Jitter RNG provides a source of good entropy by
collecting CPU execution time jitter. The entropy in the CPU
execution time jitter is magnified by the CPU Jitter Random
Number Generator, which uses the CPU execution timing jitter
to generate a bit stream that passes a variety of statistical
tests for randomness.
The CPU Jitter Random Number Generator delivers entropy which
satisfies information-theoretical requirements. Based on the
underlying analysis and the implementation, the caller can assume
that each bit of data extracted from the CPU Jitter Random Number
Generator holds one bit of entropy.
The CPU Jitter Random Number Generator provides a decentralized
source of entropy, i.e. every caller can operate on a private
state of the entropy pool.
The RNG does not have any dependencies on any other service
in the kernel. The RNG only needs a high-resolution time
stamp.
Further design details, the cryptographic assessment and a
large array of test results are documented at
http://www.chronox.de/jent.html.
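A minimal userspace illustration of the basic idea (a sketch only, not the
jitterentropy code; the amount of work per round and the folding are
arbitrary placeholders):

    #include <stdint.h>
    #include <time.h>

    static uint64_t timestamp_ns(void)
    {
            struct timespec ts;

            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    /* Time a fixed chunk of work repeatedly and fold the variation of the
     * measured deltas into a pool; the variation is the jitter. */
    uint64_t collect_jitter(unsigned int rounds)
    {
            uint64_t pool = 0, prev = timestamp_ns();
            unsigned int i, j;

            for (i = 0; i < rounds; i++) {
                    volatile uint64_t sink = 0;
                    uint64_t now;

                    for (j = 0; j < 100; j++)
                            sink += j;
                    now = timestamp_ns();
                    pool = (pool << 1) ^ (now - prev);
                    prev = now;
            }
            return pool;
    }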
CC: Andreas Steffen <andreas.steffen@strongswan.org>
CC: Theodore Ts'o <tytso@mit.edu>
CC: Sandy Harris <sandyinchina@gmail.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The newly added AEAD user-space interface isn't quite ready for prime time
just yet. In particular, it conflicts with the AEAD single
SG list interface change, so this patch disables it for now.
Once the SG list work is completely done we can then re-enable
this interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a new AEAD IV generator echainiv. It is intended
to replace the existing skcipher IV generator eseqiv.
If the underlying AEAD algorithm is using the old AEAD interface,
then echainiv will simply use its IV generator.
Otherwise, echainiv will encrypt a counter just like eseqiv, but
it will first XOR it against a previously stored IV, similar to
chainiv.
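A conceptual sketch of that scheme (block_encrypt() is a stand-in for
encrypting one block with the salted key; this is an illustration, not the
echainiv code):

    #include <stdint.h>
    #include <string.h>

    #define IVSIZE 16

    /* Stand-in: encrypt one block with the salted key. */
    void block_encrypt(uint8_t out[IVSIZE], const uint8_t in[IVSIZE]);

    static uint8_t prev_iv[IVSIZE];          /* chainiv-like stored IV */

    void generate_iv(uint8_t iv[IVSIZE], uint64_t seq)
    {
            uint8_t buf[IVSIZE] = { 0 };
            size_t i;

            memcpy(buf, &seq, sizeof(seq));  /* counter, as in eseqiv */
            for (i = 0; i < IVSIZE; i++)
                    buf[i] ^= prev_iv[i];    /* XOR against the stored IV */
            block_encrypt(iv, buf);          /* the encrypted result is the IV */
            memcpy(prev_iv, iv, IVSIZE);     /* remember it for the next request */
    }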
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the seqiv IV generator to work with the new
AEAD interface where IV generators are just normal AEAD algorithms.
Full backwards compatibility is paramount at this point since
no users have yet switched over to the new interface. Nor can
they switch to the new interface until IV generation is fully
supported by it.
So this means we are adding two versions of seqiv alongside the
existing one. The first one is the one that will be used when
the underlying AEAD algorithm has switched over to the new AEAD
interface. The second one handles the current case where the
underlying AEAD algorithm still uses the old interface.
Both versions export themselves through the new AEAD interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Change the crypto 842 compression alg to use the software 842 compression
and decompression library. Add the crypto driver_name as "842-generic".
Remove the fallback to LZO compression.
Previously, this crypto compression alg attempted 842 compression using
PowerPC hardware, and fell back to LZO compression and decompression if
the 842 PowerPC hardware was unavailable or failed. This should not
fall back to any other compression method, however; users of this crypto
compression alg can fall back if desired, and transparent fallback tricks
callers into thinking they are getting 842 compression when they actually
get LZO compression - the failure of the 842 hardware should not be
transparent to the caller.
The crypto compression alg for a hardware device also should not be located
in crypto/ so this is now a software-only implementation that uses the 842
software compression/decompression library.
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch fixes a spelling typo in crypto/Kconfig.
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This moves all Kconfig symbols defined in crypto/Kconfig that depend
on CONFIG_ARM to a dedicated Kconfig file in arch/arm/crypto, which is
where the code that implements those features resides as well.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Enable user to select OCTEON SHA1/256/512 modules.
Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Integrate the module into the kernel config tree.
Signed-off-by: Markus Stockhausen <stockhausen@collogia.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Enable compilation of the AEAD AF_ALG support by providing a Kconfig
option for it.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Integrate the module into the kernel config tree.
Signed-off-by: Markus Stockhausen <stockhausen@collogia.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Integrate the module into the kernel configuration.
Signed-off-by: Markus Stockhausen <stockhausen@collogia.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Integrate the module into the kernel config tree.
Signed-off-by: Markus Stockhausen <stockhausen@collogia.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Enable compilation of the RNG AF_ALG support by providing a Kconfig
option for it.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The SHA-512 NEON code works just fine under big endian, so remove the Kconfig
condition preventing it from being selected if CONFIG_CPU_BIG_ENDIAN is
set.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This tweaks the SHA-1 NEON code slightly so it works correctly under big
endian, and removes the Kconfig condition preventing it from being
selected if CONFIG_CPU_BIG_ENDIAN is set.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch introduces the multi-buffer crypto daemon which is responsible
for submitting crypto jobs in a work queue to the responsible multi-buffer
crypto algorithm. The idea of the multi-buffer algorithm is to put
data streams from multiple jobs in a wide (AVX2) register and then
take advantage of SIMD instructions to do crypto computation on several
buffers simultaneously.
The multi-buffer crypto daemon is also responsible for flushing the
remaining buffers to complete the computation if no new buffers arrive
for a while.
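A conceptual sketch of that flushing policy (all names and the timeout are
hypothetical; this is not the actual daemon code):

    #include <stdint.h>

    #define FLUSH_INTERVAL_NS (1000 * 1000)   /* assumed idle threshold */

    struct mb_queue {
            uint64_t last_submit_ns;          /* when the last new job arrived */
            unsigned int pending_lanes;       /* lanes holding partial work */
    };

    void flush_pending_lanes(struct mb_queue *q);   /* stand-in for the flush */

    /* Called periodically: if partially filled lanes are waiting and no new
     * buffers have arrived for a while, force the computation to finish. */
    void maybe_flush(struct mb_queue *q, uint64_t now_ns)
    {
            if (q->pending_lanes &&
                now_ns - q->last_submit_ns > FLUSH_INTERVAL_NS)
                    flush_pending_lanes(q);
    }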
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull ARM updates from Russell King:
"Included in this update:
- perf updates from Will Deacon:
The main changes are callchain stability fixes from Jean Pihet and
event mapping and PMU name rework from Mark Rutland
The latter is preparatory work for enabling some code re-use with
arm64 in the future.
- updates for nommu from Uwe Kleine-König:
Two different fixes for the same problem making some ARM nommu
configurations not boot since 3.6-rc1. The problem is that
user_addr_max returned the biggest available RAM address which
makes some copy_from_user variants fail to read from XIP memory.
- deprecate legacy OMAP DMA API, in preparation for its removal.
The popular drivers have been converted over, leaving a very small
number of rarely used drivers, which hopefully can be converted
during the next cycle with a bit more visibility (and hopefully
people popping out of the woodwork to help test)
- more tweaks for BE systems, particularly with the kernel image
format. In connection with this, I've cleaned up the way we
generate the linker script for the decompressor.
- removal of hard-coded assumptions of the kernel stack size, making
everywhere depend on the value of THREAD_SIZE_ORDER.
- MCPM updates from Nicolas Pitre.
- Make it easier to do proper CPU part number checks (which should
always include the vendor field).
- Assembly code optimisation - use the "bx" instruction when
returning from a function on ARMv6+ rather than "mov pc, reg".
- Save the last kernel misaligned fault location and report it via
the procfs alignment file.
- Clean up the way we create the initial stack frame, which is a
repeated pattern in several different locations.
- Support for 8-byte get_user(), needed for some DRM implementations.
- mcs locking from Will Deacon.
- Save and restore a few more Cortex-A9 registers (for errata
workarounds)
- Fix various aspects of the SWP emulation, and the ELF hwcap for the
SWP instruction.
- Update LPAE logic for pte_write and pmd_write to make it more
correct.
- Support for Broadcom Brahma15 CPU cores.
- ARM assembly crypto updates from Ard Biesheuvel"
* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (53 commits)
ARM: add comments to the early page table remap code
ARM: 8122/1: smp_scu: enable SCU standby support
ARM: 8121/1: smp_scu: use macro for SCU enable bit
ARM: 8120/1: crypto: sha512: add ARM NEON implementation
ARM: 8119/1: crypto: sha1: add ARM NEON implementation
ARM: 8118/1: crypto: sha1/make use of common SHA-1 structures
ARM: 8113/1: remove remaining definitions of PLAT_PHYS_OFFSET from <mach/memory.h>
ARM: 8111/1: Enable erratum 798181 for Broadcom Brahma-B15
ARM: 8110/1: do CPU-specific init for Broadcom Brahma15 cores
ARM: 8109/1: mm: Modify pte_write and pmd_write logic for LPAE
ARM: 8108/1: mm: Introduce {pte,pmd}_isset and {pte,pmd}_isclear
ARM: hwcap: disable HWCAP_SWP if the CPU advertises it has exclusives
ARM: SWP emulation: only initialise on ARMv7 CPUs
ARM: SWP emulation: always enable when SMP is enabled
ARM: 8103/1: save/restore Cortex-A9 CP15 registers on suspend/resume
ARM: 8098/1: mcs lock: implement wfe-based polling for MCS locking
ARM: 8091/2: add get_user() support for 8 byte types
ARM: 8097/1: unistd.h: relocate comments back to place
ARM: 8096/1: Describe required sort order for textofs-y (TEXT_OFFSET)
ARM: 8090/1: add revision info for PL310 errata 588369 and 727915
...
This patch removes the build-time test that ensures at least one RNG
is set. Instead we will simply not build drbg if no options are set
through Kconfig.
This also fixes a typo in the name of the Kconfig option CRYTPO_DRBG
(should be CRYPTO_DRBG).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Per further discussion with NIST, the requirements for FIPS state that
we only need to panic the system on failed kernel module signature checks
for crypto subsystem modules. This moves the fips-mode-only module
signature check out of the generic module loading code, into the crypto
subsystem, at points where we can catch both algorithm module loads and
mode module loads. At the same time, make CONFIG_CRYPTO_FIPS dependent on
CONFIG_MODULE_SIG, as this is entirely necessary for FIPS mode.
v2: remove extraneous blank line, perform checks in static inline
function, drop no longer necessary fips.h include.
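For illustration, the check can be as small as a helper of the following
shape (an assumed form that relies on the existing fips_enabled flag and
module_sig_ok() helper; not claimed to be the exact hunk from this patch):

    #include <linux/fips.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    static inline void crypto_check_module_sig(struct module *mod)
    {
            /* Only fatal in FIPS mode; built-in algorithms have mod == NULL. */
            if (fips_enabled && mod && !module_sig_ok(mod))
                    panic("Module %s signature verification failed in FIPS mode\n",
                          module_name(mod));
    }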
CC: "David S. Miller" <davem@davemloft.net>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Stephan Mueller <stephan.mueller@atsec.com>
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The different DRBG types (CTR, Hash, HMAC) can be enabled or disabled
at compile time. At least one DRBG type must be selected.
The default is the HMAC DRBG, as its code base is the smallest.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds an x86_64 AVX2 optimization of the SHA1
transform to the crypto subsystem. The patch has been tested with the
3.14.0-rc1 kernel.
On a Haswell desktop, with turbo disabled and all cpus running
at maximum frequency, tcrypt shows AVX2 performance improvement
from 3% for 256 bytes update to 16% for 1024 bytes update over
AVX implementation.
This patch adds sha1_avx2_transform() together with the glue, build and
configuration changes needed for the AVX2 optimization of the
SHA1 transform.
sha1-ssse3 is the single module that provides the optimized variants
(SSSE3/AVX/AVX2) of the low-level SHA1 transform function.
When a better optimization is available, the transform function is
overridden accordingly. In the AVX2 case, because performance varies
across data block sizes, either the AVX or the AVX2 transform function is
selected at run time, whichever suits best. The Makefile change therefore
appends the necessary objects to the linkage. Because of this, the patch
merely adds the AVX2 transform to the existing build mix and Kconfig
support and leaves the rest of the configuration support as is.
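A sketch of that run-time selection (the threshold and function names here
are assumptions for illustration, not the exact glue code):

    #include <linux/types.h>

    void sha1_transform_avx(u32 *digest, const char *data, unsigned int blocks);
    void sha1_transform_avx2(u32 *digest, const char *data, unsigned int blocks);

    #define SHA1_AVX2_MIN_BLOCKS 4   /* assumed cut-over point */

    /* Use AVX2 only when enough blocks are queued to make it worthwhile;
     * otherwise the AVX transform is the better performer. */
    static void sha1_apply_transform(u32 *digest, const char *data,
                                     unsigned int blocks)
    {
            if (blocks >= SHA1_AVX2_MIN_BLOCKS)
                    sha1_transform_avx2(digest, data, blocks);
            else
                    sha1_transform_avx(digest, data, blocks);
    }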
Signed-off-by: Chandramouli Narayanan <mouli@linux.intel.com>
Reviewed-by: Marek Vasut <marex@denx.de>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto update from Herbert Xu:
- Made x86 ablk_helper generic for ARM
- Phase out chainiv in favour of eseqiv (affects IPsec)
- Fixed aes-cbc IV corruption on s390
- Added constant-time crypto_memneq which replaces memcmp
- Fixed aes-ctr in omap-aes
- Added OMAP3 ROM RNG support
- Add PRNG support for MSM SoC's
- Add and use Job Ring API in caam
- Misc fixes
[ NOTE! This pull request was sent within the merge window, but Herbert
has some questionable email sending setup that makes him public enemy
#1 as far as gmail is concerned. So most of his emails seem to be
trapped by gmail as spam, resulting in me not seeing them. - Linus ]
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (49 commits)
crypto: s390 - Fix aes-cbc IV corruption
crypto: omap-aes - Fix CTR mode counter length
crypto: omap-sham - Add missing modalias
padata: make the sequence counter an atomic_t
crypto: caam - Modify the interface layers to use JR API's
crypto: caam - Add API's to allocate/free Job Rings
crypto: caam - Add Platform driver for Job Ring
hwrng: msm - Add PRNG support for MSM SoC's
ARM: DT: msm: Add Qualcomm's PRNG driver binding document
crypto: skcipher - Use eseqiv even on UP machines
crypto: talitos - Simplify key parsing
crypto: picoxcell - Simplify and harden key parsing
crypto: ixp4xx - Simplify and harden key parsing
crypto: authencesn - Simplify key parsing
crypto: authenc - Export key parsing helper function
crypto: mv_cesa: remove deprecated IRQF_DISABLED
hwrng: OMAP3 ROM Random Number Generator support
crypto: sha256_ssse3 - also test for BMI2
crypto: mv_cesa - Remove redundant of_match_ptr
crypto: sahara - Remove redundant of_match_ptr
...
Pull security subsystem updates from James Morris:
"In this patchset, we finally get an SELinux update, with Paul Moore
taking over as maintainer of that code.
Also a significant update for the Keys subsystem, as well as
maintenance updates to Smack, IMA, TPM, and Apparmor"
and since I wanted to know more about the updates to key handling,
here's the explanation from David Howells on that:
"Okay. There are a number of separate bits. I'll go over the big bits
and the odd important other bit, most of the smaller bits are just
fixes and cleanups. If you want the small bits accounted for, I can
do that too.
(1) Keyring capacity expansion.
KEYS: Consolidate the concept of an 'index key' for key access
KEYS: Introduce a search context structure
KEYS: Search for auth-key by name rather than target key ID
Add a generic associative array implementation.
KEYS: Expand the capacity of a keyring
Several of the patches are providing an expansion of the capacity of a
keyring. Currently, the maximum size of a keyring payload is one page.
Subtract a small header and then divide it up into pointers, and that only
gives you ~500 pointers on an x86_64 box. However, since the NFS idmapper uses
a keyring to store ID mapping data, that has proven to be insufficient to
the cause.
Whatever data structure I use to handle the keyring payload, it can only
store pointers to keys, not the keys themselves because several keyrings
may point to a single key. This precludes inserting, say, an rb_node
struct into the key struct for this purpose.
I could make an rbtree of records such that each record has an rb_node
and a key pointer, but that would use four words of space per key stored
in the keyring. It would, however, be able to use much existing code.
I selected instead a non-rebalancing radix-tree type approach as that
could have a better space-used/key-pointer ratio. I could have used the
radix tree implementation that we already have and insert keys into it by
their serial numbers, but that means any sort of search must iterate over
the whole radix tree. Further, its nodes are a bit on the capacious side
for what I want - especially given that key serial numbers are randomly
allocated, thus leaving a lot of empty space in the tree.
So what I have is an associative array that internally is a radix-tree
with 16 pointers per node where the index key is constructed from the key
type pointer and the key description. This means that an exact lookup by
type+description is very fast as this tells us how to navigate directly to
the target key.
I made the data structure general, in lib/assoc_array.c: as far as it is
concerned, its index key is just a sequence of bits that leads to a
pointer. It's possible that someone else will be able to make use of it
also. FS-Cache might, for example.
(2) Mark keys as 'trusted' and keyrings as 'trusted only'.
KEYS: verify a certificate is signed by a 'trusted' key
KEYS: Make the system 'trusted' keyring viewable by userspace
KEYS: Add a 'trusted' flag and a 'trusted only' flag
KEYS: Separate the kernel signature checking keyring from module signing
These patches allow keys carrying asymmetric public keys to be marked as
being 'trusted' and allow keyrings to be marked as only permitting the
addition or linkage of trusted keys.
Keys loaded from hardware during kernel boot or compiled into the kernel
during build are marked as being trusted automatically. New keys can be
loaded at runtime with add_key(). They are checked against the system
keyring contents and if their signatures can be validated with keys that
are already marked trusted, then they are marked trusted also and can
thus be added into the master keyring.
Patches from Mimi Zohar make this usable with the IMA keyrings also.
(3) Remove the date checks on the key used to validate a module signature.
X.509: Remove certificate date checks
It's not reasonable to reject a signature just because the key that it was
generated with is no longer valid datewise - especially if the kernel
hasn't yet managed to set the system clock when the first module is
loaded - so just remove those checks.
(4) Make it simpler to deal with additional X.509 certificates being loaded into the kernel.
KEYS: Load *.x509 files into kernel keyring
KEYS: Have make canonicalise the paths of the X.509 certs better to deduplicate
The builder of the kernel now just places files with the extension ".x509"
into the kernel source or build trees and they're concatenated by the
kernel build and stuffed into the appropriate section.
(5) Add support for userspace kerberos to use keyrings.
KEYS: Add per-user_namespace registers for persistent per-UID kerberos caches
KEYS: Implement a big key type that can save to tmpfs
Fedora switched, by default, to storing kerberos tickets and tokens in tmpfs.
We looked at storing it in keyrings instead as that confers certain
advantages such as tickets being automatically deleted after a certain
amount of time and the ability for the kernel to get at these tokens more
easily.
To make this work, two things were needed:
(a) A way for the tickets to persist beyond the lifetime of all a user's
sessions so that cron-driven processes can still use them.
The problem is that a user's session keyrings are deleted when the
session that spawned them logs out and the user's user keyring is
deleted when the UID is deleted (typically when the last log out
happens), so neither of these places is suitable.
I've added a system keyring into which a 'persistent' keyring is
created for each UID on request. Each time a user requests their
persistent keyring, the expiry time on it is set anew. If the user
doesn't ask for it for, say, three days, the keyring is automatically
expired and garbage collected using the existing gc. All the kerberos
tokens it held are then also gc'd.
(b) A key type that can hold really big tickets (up to 1MB in size).
The problem is that Active Directory can return huge tickets with lots
of auxiliary data attached. We don't, however, want to eat up huge
tracts of unswappable kernel space for this, so if the ticket is
greater than a certain size, we create a swappable shmem file and dump
the contents in there and just live with the fact we then have an
inode and a dentry overhead. If the ticket is smaller than that, we
slap it in a kmalloc()'d buffer"
* 'for-linus2' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (121 commits)
KEYS: Fix keyring content gc scanner
KEYS: Fix error handling in big_key instantiation
KEYS: Fix UID check in keyctl_get_persistent()
KEYS: The RSA public key algorithm needs to select MPILIB
ima: define '_ima' as a builtin 'trusted' keyring
ima: extend the measurement list to include the file signature
kernel/system_certificate.S: use real contents instead of macro GLOBAL()
KEYS: fix error return code in big_key_instantiate()
KEYS: Fix keyring quota misaccounting on key replacement and unlink
KEYS: Fix a race between negating a key and reading the error set
KEYS: Make BIG_KEYS boolean
apparmor: remove the "task" arg from may_change_ptraced_domain()
apparmor: remove parent task info from audit logging
apparmor: remove tsk field from the apparmor_audit_struct
apparmor: fix capability to not use the current task, during reporting
Smack: Ptrace access check mode
ima: provide hash algo info in the xattr
ima: enable support for larger default filedata hash algorithms
ima: define kernel parameter 'ima_template=' to change configured default
ima: add Kconfig default measurement list template
...
This patch provides a single place for information about hash algorithms,
such as hash sizes and kernel driver names, which will be used by IMA
and the public key code.
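For illustration, a consumer would index the new tables by the enum values
from the uapi header roughly as follows (the pr_info() user below is just an
example, not code from the patch):

    #include <crypto/hash_info.h>
    #include <linux/errno.h>
    #include <linux/kernel.h>

    static int describe_hash_algo(enum hash_algo algo)
    {
            if (algo >= HASH_ALGO__LAST)
                    return -EINVAL;
            pr_info("algo %s, digest size %d bytes\n",
                    hash_algo_name[algo], hash_digest_size[algo]);
            return 0;
    }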
Changelog:
- Fix sparse and checkpatch warnings
- Move hash algo enums to uapi for userspace signing functions.
Signed-off-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Bit sliced AES gives around 45% speedup on Cortex-A15 for encryption
and around 25% for decryption. This implementation of the AES algorithm
does not rely on any lookup tables so it is believed to be invulnerable
to cache timing attacks.
This algorithm processes up to 8 blocks in parallel in constant time. This
means that it is not usable by chaining modes that are strictly sequential
in nature, such as CBC encryption. CBC decryption, however, can benefit from
this implementation and runs about 25% faster. The other chaining modes
implemented in this module, XTS and CTR, can execute fully in parallel in
both directions.
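Illustrative scalar code for why CBC decryption parallelises (aes_decrypt()
and xor_block() are stand-ins; this is not the NEON implementation): each
output block needs only ciphertext that is already known, so the block
cipher invocations are independent, whereas CBC encryption must feed each
ciphertext block into the next encryption.

    #include <stddef.h>
    #include <stdint.h>

    void aes_decrypt(const void *key, uint8_t out[16], const uint8_t in[16]);
    void xor_block(uint8_t out[16], const uint8_t a[16], const uint8_t b[16]);

    void cbc_decrypt(const void *key, uint8_t *pt, const uint8_t *ct,
                     size_t nblocks, const uint8_t iv[16])
    {
            uint8_t tmp[16];
            size_t i;

            for (i = 0; i < nblocks; i++) {
                    /* D(K, C_i): independent of every other iteration */
                    aes_decrypt(key, tmp, ct + 16 * i);
                    /* P_i = D(K, C_i) ^ C_{i-1}, with C_0 being the IV */
                    xor_block(pt + 16 * i, tmp, i ? ct + 16 * (i - 1) : iv);
            }
    }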
The core code has been adopted from the OpenSSL project (in collaboration
with the original author, on cc). For ease of maintenance, this version is
identical to the upstream OpenSSL code, i.e., all modifications that were
required to make it suitable for inclusion into the kernel have been made
upstream. The original can be found here:
http://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=6f6a6130
Note to integrators:
While this implementation is significantly faster than the existing table
based ones (generic or ARM asm), especially in CTR mode, the effects on
power efficiency are unclear as of yet. This code does fundamentally more
work, by calculating values that the table based code obtains by a simple
lookup; only by doing all of that work in a SIMD fashion does it manage to
perform better.
Cc: Andy Polyakov <appro@openssl.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Move all users of ablk_helper under x86/ to the generic version
and delete the x86 specific version.
Acked-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Create a generic version of ablk_helper so it can be reused
by other architectures.
Acked-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch reinstates commits
67822649d739761214ee0b95a7f85731d939625a2d31e518a4
Now that module softdeps are in the kernel we can use that to resolve
the boot issue which caused the revert.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto fixes from Herbert Xu:
"This push fixes a memory corruption issue in caam, as well as
reverting the new optimised crct10dif implementation as it breaks boot
on initrd systems.
Hopefully crct10dif will be reinstated once the supporting code is
added so that it doesn't break boot"
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
Revert "crypto: crct10dif - Wrap crc_t10dif function all to use crypto transform framework"
crypto: caam - Fixed the memory out of bound overwrite issue
This reverts commits
67822649d739761214ee0b95a7f85731d939625a2d31e518a4
Unfortunately this change broke boot on some systems that use an
initrd which does not include the newly created crct10dif modules.
As these modules are required by sd_mod under certain configurations,
this is a serious problem.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add support for lz4 and lz4hc compression algorithm using the lib/lz4/*
codebase.
[akpm@linux-foundation.org: fix warnings]
Signed-off-by: Chanho Min <chanho.min@lge.com>
Cc: "Darrick J. Wong" <djwong@us.ibm.com>
Cc: Bob Pearson <rpearson@systemfabricworks.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Herbert Xu <herbert@gondor.hengli.com.au>
Cc: Yann Collet <yann.collet.73@gmail.com>
Cc: Kyungsik Lee <kyungsik.lee@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This reverts commit cf1521a1a5.
The instruction (vpgatherdd) that this implementation relied on turned out
to be a slow performer on real hardware (i5-4570). The previous 8-way
twofish/AVX implementation is therefore faster and this implementation
should be removed.
Converting this implementation to use the same method as in twofish/AVX for
table look-ups would give an additional ~3% speed-up vs twofish/AVX, but
would hardly be worth the added code and binary size.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This reverts commit 6048801070.
The instruction (vpgatherdd) that this implementation relied on turned out
to be a slow performer on real hardware (i5-4570). The previous 4-way
blowfish implementation is therefore faster and this implementation should
be removed.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Glue code that plugs the PCLMULQDQ accelerated CRC T10 DIF hash into the
crypto framework. The config CRYPTO_CRCT10DIF_PCLMUL should be turned
on to enable the feature. The crc_t10dif crypto library function will
use this faster algorithm when crct10dif_pclmul module is loaded.
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When CRC T10 DIF is calculated using the crypto transform framework, we
wrap the crc_t10dif function call to utilize it. This allows us to
take advantage of any accelerated CRC T10 DIF transform that is
plugged into the crypto framework.
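A simplified sketch of that wrapping (error handling and the long-lived
cached transform of the real code are omitted; this is an illustration, not
the actual lib/crc-t10dif.c change):

    #include <crypto/hash.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    static __u16 crc_t10dif_sketch(const unsigned char *buf, size_t len)
    {
            struct crypto_shash *tfm;
            struct shash_desc *desc;
            __u16 crc = 0;

            /* Picks whichever "crct10dif" driver has the highest priority,
             * e.g. crct10dif-pclmul when that module is loaded. */
            tfm = crypto_alloc_shash("crct10dif", 0, 0);
            desc = kzalloc(sizeof(*desc) + crypto_shash_descsize(tfm),
                           GFP_KERNEL);
            desc->tfm = tfm;
            crypto_shash_digest(desc, buf, len, (u8 *)&crc);
            kfree(desc);
            crypto_free_shash(tfm);
            return crc;
    }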
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Patch adds an AVX2/AES-NI/x86-64 implementation of the Camellia cipher,
requiring 32 parallel blocks for input (512 bytes). Compared to the AVX
implementation, this version is extended to use the 256-bit wide YMM
registers. For the AES-NI instructions, data is split into two 128-bit
registers and merged afterwards. Even with this additional handling,
performance should be higher compared to the AES-NI/AVX implementation.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Patch adds an AVX2/x86-64 implementation of the Serpent cipher, requiring 16
parallel blocks for input (256 bytes). The implementation is based on the AVX
implementation and extends it to use the 256-bit wide YMM registers. Since
Serpent does not use table look-ups, this implementation should be close to
two times faster than the AVX implementation.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Patch adds an AVX2/x86-64 implementation of the Twofish cipher, requiring 16
parallel blocks for input (256 bytes). Table look-ups are performed using the
vpgatherdd instruction directly from vector registers and thus should be
faster than in earlier implementations. The implementation also uses 256-bit
wide YMM registers, which should give an additional speed-up compared to the
AVX implementation.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Patch adds an AVX2/x86-64 implementation of the Blowfish cipher, requiring 32
parallel blocks for input (256 bytes). Table look-ups are performed using the
vpgatherdd instruction directly from vector registers and thus should be
faster than in earlier implementations.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>