Commit Graph

2577 Commits

Eric Biggers
edaf28e996 crypto: salsa20 - don't access already-freed walk.iv
If the user-provided IV needs to be aligned to the algorithm's
alignmask, then skcipher_walk_virt() copies the IV into a new aligned
buffer walk.iv.  But skcipher_walk_virt() can fail afterwards, and then
if the caller unconditionally accesses walk.iv, it's a use-after-free.

salsa20-generic doesn't set an alignmask, so currently it isn't affected
by this despite unconditionally accessing walk.iv.  However this is more
subtle than desired, and it was actually broken prior to the alignmask
being removed by commit b62b3db76f ("crypto: salsa20-generic - cleanup
and convert to skcipher API").

Since salsa20-generic does not update the IV and does not need any IV
alignment, update it to use req->iv instead of walk.iv.
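
A sketch of the resulting pattern (not the literal diff; salsa20_init() is
the helper used by salsa20-generic):

	err = skcipher_walk_virt(&walk, req, false);

	/* req->iv is owned by the caller and outlives the walk, so it is
	 * safe to read even if skcipher_walk_virt() failed; walk.iv may
	 * already have been freed at this point. */
	salsa20_init(state, ctx, req->iv);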

Fixes: 2407d60872 ("[CRYPTO] salsa20: Salsa20 stream cipher")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18 22:14:58 +08:00
Eric Biggers
aec286cd36 crypto: lrw - don't access already-freed walk.iv
If the user-provided IV needs to be aligned to the algorithm's
alignmask, then skcipher_walk_virt() copies the IV into a new aligned
buffer walk.iv.  But skcipher_walk_virt() can fail afterwards, and then
if the caller unconditionally accesses walk.iv, it's a use-after-free.

Fix this in the LRW template by checking the return value of
skcipher_walk_virt().
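
A minimal sketch of the fix:

	err = skcipher_walk_virt(&walk, req, false);
	if (err)
		return err;
	/* walk.iv is only guaranteed valid after a successful walk init */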

This bug was detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation.  When the extra
self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free
splat occurred during lrw(aes) testing.

Fixes: c778f96bf3 ("crypto: lrw - Optimize tweak computation")
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18 22:14:58 +08:00
Eric Biggers
eda69b0c06 crypto: testmgr - add panic_on_fail module parameter
Add a module parameter cryptomgr.panic_on_fail which causes the kernel
to panic if any crypto self-tests fail.

Use cases:

- More easily detect crypto self-test failures by boot testing,
  e.g. on KernelCI.
- Get a bug report if syzkaller manages to use the template system to
  instantiate an algorithm that fails its self-tests.

The command-line option "fips=1" already does this, but it also makes
other changes not wanted for general testing, such as disabling
"unapproved" algorithms.  panic_on_fail just does what it says.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08 14:42:55 +08:00
Eric Biggers
c31a871985 crypto: cts - don't support empty messages
My patches to make testmgr fuzz algorithms against their generic
implementation detected that the arm64 implementations of
"cts(cbc(aes))" handle empty messages differently from the cts template.
Namely, the arm64 implementations forbid (with -EINVAL) all messages
shorter than the block size, including the empty message; but the cts
template permits empty messages as a special case.

No user should be CTS-encrypting/decrypting empty messages, but we need
to keep the behavior consistent.  Unfortunately, as noted in the source
of OpenSSL's CTS implementation [1], there's no common specification for
CTS.  This makes it somewhat debatable what the behavior should be.

However, all CTS specifications seem to agree that messages shorter than
the block size are not allowed, and OpenSSL follows this in both CTS
conventions it implements.  It would also simplify the user-visible
semantics to have empty messages no longer be a special case.

Therefore, make the cts template return -EINVAL on *all* messages
shorter than the block size, including the empty message.
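
A minimal sketch of the new check (variable names illustrative):

	if (req->cryptlen < crypto_skcipher_blocksize(tfm))
		return -EINVAL;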

[1] https://github.com/openssl/openssl/blob/master/crypto/modes/cts128.c

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08 14:42:55 +08:00
Eric Biggers
c5c46887cf crypto: streebog - fix unaligned memory accesses
Don't cast the data buffer directly to streebog_uint512, as this
violates alignment rules.
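
The usual fix for this class of bug is to copy the data into a properly
aligned local object first; a sketch, not the exact patch:

	struct streebog_uint512 m;

	memcpy(&m, data, sizeof(m));	/* safe for any alignment of 'data' */
	/* ... operate on &m instead of casting 'data' ... */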

Fixes: fe18957e8e ("crypto: streebog - add Streebog hash function")
Cc: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08 14:42:55 +08:00
Eric Biggers
5e27f38f1f crypto: chacha20poly1305 - set cra_name correctly
If the rfc7539 template is instantiated with specific implementations,
e.g. "rfc7539(chacha20-generic,poly1305-generic)" rather than
"rfc7539(chacha20,poly1305)", then the implementation names end up
included in the instance's cra_name.  This is incorrect because it then
prevents all users from allocating "rfc7539(chacha20,poly1305)", if the
highest priority implementations of chacha20 and poly1305 were selected.
Also, the self-tests aren't run on an instance allocated in this way.

Fix it by setting the instance's cra_name from the underlying
algorithms' actual cra_names, rather than from the requested names.
This matches what other templates do.
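
Concretely, the instance name is now built roughly like this (a sketch;
variable and label names illustrative):

	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
		     "rfc7539(%s,%s)", chacha->base.cra_name,
		     poly->cra_name) >= CRYPTO_MAX_ALG_NAME)
		goto out_drop_alg;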

Fixes: 71ebc4d1b2 ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
Cc: <stable@vger.kernel.org> # v4.2+
Cc: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08 14:42:55 +08:00
Eric Biggers
dcaca01a42 crypto: skcipher - don't WARN on unprocessed data after slow walk step
skcipher_walk_done() assumes it's a bug if, after the "slow" path is
executed where the next chunk of data is processed via a bounce buffer,
the algorithm says it didn't process all bytes.  Thus it WARNs on this.

However, this can happen legitimately when the message needs to be
evenly divisible into "blocks" but isn't, and the algorithm has a
'walksize' greater than the block size.  For example, ecb-aes-neonbs
sets 'walksize' to 128 bytes and only supports messages evenly divisible
into 16-byte blocks.  If, say, 17 message bytes remain but they straddle
scatterlist elements, the skcipher_walk code will take the "slow" path
and pass the algorithm all 17 bytes in the bounce buffer.  But the
algorithm will only be able to process 16 bytes, triggering the WARN.

Fix this by just removing the WARN_ON().  Returning -EINVAL, as the code
already does, is the right behavior.
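
With the WARN_ON() gone, the check reduces to (per the code quoted in a
later commit in this log):

	if (err) {
		/* didn't process all bytes; can legitimately happen here */
		err = -EINVAL;
		goto finish;
	}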

This bug was detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation.

Fixes: b286d8b1a6 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08 14:42:55 +08:00
Eric Biggers
307508d107 crypto: crct10dif-generic - fix use via crypto_shash_digest()
The ->digest() method of crct10dif-generic reads the current CRC value
from the shash_desc context.  But this value is uninitialized, causing
crypto_shash_digest() to compute the wrong result.  Fix it.
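
Roughly, the ->digest() path must start from an initial CRC of 0 rather
than reading the uninitialized context (a sketch, assuming a finup-style
helper):

	static int chksum_digest(struct shash_desc *desc, const u8 *data,
				 unsigned int length, u8 *out)
	{
		return __chksum_finup(0, data, length, out); /* CRC starts at 0 */
	}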

Probably this wasn't noticed before because lib/crc-t10dif.c only uses
crypto_shash_update(), not crypto_shash_digest().  Likewise,
crypto_shash_digest() is not yet tested by the crypto self-tests because
those only test the ahash API which only uses shash init/update/final.

This bug was detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation.

Fixes: 2d31e518a4 ("crypto: crct10dif - Wrap crc_t10dif function all to use crypto transform framework")
Cc: <stable@vger.kernel.org> # v3.11+
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08 14:42:54 +08:00
Andi Kleen
61abc356bf crypto: aes - Use ____cacheline_aligned for aes data
The __cacheline_aligned attribute places data in a special section. Such
data cannot also be const, because that section is not read-only, so it
gets no MMU protection.

Mark the tables ____cacheline_aligned instead, which does not place them
in a special section but simply aligns them within .rodata.
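
For example (illustrative):

	/* aligned within .rodata; no special section, so const is honored */
	static const u32 aes_table[256] ____cacheline_aligned = {
		/* ... table contents ... */
	};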

Cc: herbert@gondor.apana.org.au
Suggested-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08 14:36:16 +08:00
Sebastian Andrzej Siewior
71052dcf4b crypto: scompress - Use per-CPU struct instead of multiple variables
Two per-CPU variables are allocated as pointers to per-CPU memory and
then used as scratch buffers.
Instead, use a per-CPU struct that already contains the pointers; then
only the scratch buffers themselves need to be allocated.
Add a lock to the struct. By doing so we can avoid the get_cpu()
statement and gain lockdep coverage (if enabled) to ensure that the lock
is always acquired in the right context. On non-preemptible kernels the
lock vanishes.
It is okay to use raw_cpu_ptr() in order to get a pointer to the struct
since it is protected by the spinlock.
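
The resulting per-CPU data looks roughly like this (a sketch matching
the description above):

	struct scomp_scratch {
		spinlock_t	lock;
		void		*src;
		void		*dst;
	};

	static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch) = {
		.lock = __SPIN_LOCK_UNLOCKED(scomp_scratch.lock),
	};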

The diffstat of this change is negative, and the output of size(1) on
scompress.o confirms it:
   text    data     bss     dec     hex filename
   1847     160      24    2031     7ef dbg_before.o
   1754     232       4    1990     7c6 dbg_after.o
   1799      64      24    1887     75f no_dbg-before.o
   1703      88       4    1795     703 no_dbg-after.o

The overall size difference is also negative. The data section grows by
only four bytes without lockdep.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08 14:36:16 +08:00
Sebastian Andrzej Siewior
6a4d1b18ef crypto: scompress - return proper error code for allocation failure
If scomp_acomp_comp_decomp() fails to allocate memory for the
destination then we never copy back the data we compressed.
It is probably best to return an error code instead of 0 in case of
failure.
I haven't found any user that uses acomp_request_set_params() without
the `dst' buffer, so there is probably no harm.
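
I.e. a sketch of the change (allocation call illustrative):

	if (!req->dst) {
		req->dst = sgl_alloc(req->dlen, GFP_ATOMIC, NULL);
		if (!req->dst) {
			ret = -ENOMEM;	/* previously fell through, returning 0 */
			goto out;
		}
	}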

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08 14:36:16 +08:00
Geert Uytterhoeven
d99324c226 crypto: fips - Grammar s/options/option/, s/to/the/
Fixes: ccb778e184 ("crypto: api - Add fips_enable flag")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Mukesh Ojha <mojha@codeaurora.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-28 13:55:34 +08:00
Ondrej Mosnacek
4e5180eb3d crypto: Kconfig - fix typos AEGSI -> AEGIS
Spotted while reviewing patches from Eric Biggers.

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 20:58:20 +08:00
Eric Biggers
f6fff17072 crypto: salsa20-generic - use crypto_xor_cpy()
In salsa20_docrypt(), use crypto_xor_cpy() instead of crypto_xor().
This avoids having to memcpy() the src buffer to the dst buffer.
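
That is, instead of copying and then XORing in place:

	memcpy(dst, src, bytes);
	crypto_xor(dst, keystream, bytes);

the copy and XOR are fused (the chacha-generic commit below makes the
identical change):

	crypto_xor_cpy(dst, src, keystream, bytes);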

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 20:57:28 +08:00
Eric Biggers
29d97dec22 crypto: chacha-generic - use crypto_xor_cpy()
In chacha_docrypt(), use crypto_xor_cpy() instead of crypto_xor().
This avoids having to memcpy() the src buffer to the dst buffer.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 20:57:28 +08:00
Eric Biggers
6570737c7f crypto: testmgr - test the !may_use_simd() fallback code
All crypto API algorithms are supposed to support the case where they
are called in a context where SIMD instructions are unusable, e.g. IRQ
context on some architectures.  However, this isn't tested for by the
self-tests, causing bugs to go undetected.

Now that all algorithms have been converted to use crypto_simd_usable(),
update the self-tests to test the no-SIMD case.  First, a bool
testvec_config::nosimd is added.  When set, the crypto operation is
executed with preemption disabled and with crypto_simd_usable() mocked
out to return false on the current CPU.

A bool test_sg_division::nosimd is also added.  For hash algorithms it's
honored by the corresponding ->update().  By setting just a subset of
these bools, the case where some ->update()s are done in SIMD context
and some are done in no-SIMD context is also tested.

These bools are then randomly set by generate_random_testvec_config().

For now, all no-SIMD testing is limited to the extra crypto self-tests,
because it might be a bit too invasive for the regular self-tests.
But this could be changed later.

This has already found bugs in the arm64 AES-GCM and ChaCha algorithms.
This would have found some past bugs as well.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 20:57:28 +08:00
Eric Biggers
8b8d91d4ce crypto: simd - convert to use crypto_simd_usable()
Replace all calls to may_use_simd() in the shared SIMD helpers with
crypto_simd_usable(), in order to allow testing the no-SIMD code paths.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 20:57:27 +08:00
Eric Biggers
b55e1a3954 crypto: simd,testmgr - introduce crypto_simd_usable()
So that the no-SIMD fallback code can be tested by the crypto
self-tests, add a macro crypto_simd_usable() which wraps may_use_simd(),
but also returns false if the crypto self-tests have set a per-CPU bool
to disable SIMD in crypto code on the current CPU.
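
The wrapper is roughly:

	#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
	DECLARE_PER_CPU(bool, crypto_simd_disabled_for_test);
	#define crypto_simd_usable() \
		(may_use_simd() && !this_cpu_read(crypto_simd_disabled_for_test))
	#else
	#define crypto_simd_usable() may_use_simd()
	#endif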

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 20:57:27 +08:00
Eric Biggers
7aceaaef04 crypto: chacha-generic - fix use as arm64 no-NEON fallback
The arm64 implementations of ChaCha and XChaCha are failing the extra
crypto self-tests following my patches to test the !may_use_simd() code
paths, which previously were untested.  The problem is as follows:

When !may_use_simd(), the arm64 NEON implementations fall back to the
generic implementation, which uses the skcipher_walk API to iterate
through the src/dst scatterlists.  Due to how the skcipher_walk API
works, walk.stride is set from the skcipher_alg actually being used,
which in this case is the arm64 NEON algorithm.  Thus walk.stride is
5*CHACHA_BLOCK_SIZE, not CHACHA_BLOCK_SIZE.

This unnecessarily large stride shouldn't cause an actual problem.
However, the generic implementation computes round_down(nbytes,
walk.stride).  round_down() assumes the round amount is a power of 2,
which 5*CHACHA_BLOCK_SIZE is not, so it gives the wrong result.
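
A worked example with walk.stride == 5 * CHACHA_BLOCK_SIZE == 320 and
nbytes == 448:

	round_down(448, 320) == (448 & ~(320 - 1)) == 192  /* not a multiple of 320 */
	rounddown(448, 320)  == 448 - (448 % 320)  == 320  /* correct, but divides */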

This causes the following case in skcipher_walk_done() to be hit,
causing a WARN() and failing the encryption operation:

	if (WARN_ON(err)) {
		/* unexpected case; didn't process all bytes */
		err = -EINVAL;
		goto finish;
	}

Fix it by rounding down to CHACHA_BLOCK_SIZE instead of walk.stride.

(Or we could replace round_down() with rounddown(), but that would add a
slow division operation every time, which I think we should avoid.)

Fixes: 2fe55987b2 ("crypto: arm64/chacha - use combined SIMD/ALU routine for more speed")
Cc: <stable@vger.kernel.org> # v5.0+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 20:57:27 +08:00
Eric Biggers
f808aa3f24 crypto: testmgr - remove workaround for AEADs that modify aead_request
Now that all AEAD algorithms (that I have the hardware to test, at
least) have been fixed to not modify the user-provided aead_request,
remove the workaround from testmgr that reset aead_request::tfm after
each AEAD encryption/decryption.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 20:57:26 +08:00
Eric Biggers
e151a8d28c crypto: x86/morus1280 - convert to use AEAD SIMD helpers
Convert the x86 implementations of MORUS-1280 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality.  This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 20:57:26 +08:00
Eric Biggers
477309580d crypto: x86/morus640 - convert to use AEAD SIMD helpers
Convert the x86 implementation of MORUS-640 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality.  This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 20:57:26 +08:00
Eric Biggers
b6708c2d8f crypto: x86/aegis256 - convert to use AEAD SIMD helpers
Convert the x86 implementation of AEGIS-256 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality.  This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 20:57:26 +08:00
Eric Biggers
d628132a5e crypto: x86/aegis128l - convert to use AEAD SIMD helpers
Convert the x86 implementation of AEGIS-128L to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality.  This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 20:57:26 +08:00
Eric Biggers
de272ca72c crypto: x86/aegis128 - convert to use AEAD SIMD helpers
Convert the x86 implementation of AEGIS-128 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality.  This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 20:57:26 +08:00
Eric Biggers
1661131a04 crypto: simd - support wrapping AEAD algorithms
Update the crypto_simd module to support wrapping AEAD algorithms.
Previously it only supported skciphers.  The code for each is similar.

I'll be converting the x86 implementations of AES-GCM, AEGIS, and MORUS
to use this.  Currently they each independently implement the same
functionality.  This will not only simplify the code, but it will also
fix the bug detected by the improved self-tests: the user-provided
aead_request is modified.  This is because these algorithms currently
reuse the original request, whereas the crypto_simd helpers build a new
request in the original request's context.
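
The AEAD entry points mirror the existing skcipher ones; roughly (names
as used by the later x86 conversions, shown here for illustration):

	struct simd_aead_alg *simd_aead_create_compat(struct aead_alg *alg,
						      const char *algname,
						      const char *drvname,
						      const char *basename);
	int simd_register_aeads_compat(struct aead_alg *algs, int count,
				       struct simd_aead_alg **simd_algs);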

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-22 20:57:26 +08:00
Dave Rodgman
45ec975efb lib/lzo: separate lzo-rle from lzo
To prevent any issues with persistent data, separate lzo-rle from lzo so
that it is treated as a separate algorithm, and lzo is still available.

Link: http://lkml.kernel.org/r/20190205155944.16007-3-dave.rodgman@arm.com
Signed-off-by: Dave Rodgman <dave.rodgman@arm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com>
Cc: Matt Sealey <matt.sealey@arm.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <nitingupta910@gmail.com>
Cc: Richard Purdie <rpurdie@openedhand.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Sonny Rao <sonnyrao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-03-07 18:32:03 -08:00
Linus Torvalds
63bdf4284c Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto update from Herbert Xu:
 "API:
   - Add helper for simple skcipher modes.
   - Add helper to register multiple templates.
   - Set CRYPTO_TFM_NEED_KEY when setkey fails.
   - Require neither or both of export/import in shash.
   - AEAD decryption test vectors are now generated from encryption
     ones.
   - New option CONFIG_CRYPTO_MANAGER_EXTRA_TESTS that includes random
     fuzzing.

  Algorithms:
   - Conversions to skcipher and helper for many templates.
   - Add more test vectors for nhpoly1305 and adiantum.

  Drivers:
   - Add crypto4xx prng support.
   - Add xcbc/cmac/ecb support in caam.
   - Add AES support for Exynos5433 in s5p.
   - Remove sha384/sha512 from artpec7 as hardware cannot do partial
     hash"

[ There is a merge of the Freescale SoC tree in order to pull in changes
  required by patches to the caam/qi2 driver. ]

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (174 commits)
  crypto: s5p - add AES support for Exynos5433
  dt-bindings: crypto: document Exynos5433 SlimSSS
  crypto: crypto4xx - add missing of_node_put after of_device_is_available
  crypto: cavium/zip - fix collision with generic cra_driver_name
  crypto: af_alg - use struct_size() in sock_kfree_s()
  crypto: caam - remove redundant likely/unlikely annotation
  crypto: s5p - update iv after AES-CBC op end
  crypto: x86/poly1305 - Clear key material from stack in SSE2 variant
  crypto: caam - generate hash keys in-place
  crypto: caam - fix DMA mapping xcbc key twice
  crypto: caam - fix hash context DMA unmap size
  hwrng: bcm2835 - fix probe as platform device
  crypto: s5p-sss - Use AES_BLOCK_SIZE define instead of number
  crypto: stm32 - drop pointless static qualifier in stm32_hash_remove()
  crypto: chelsio - Fixed Traffic Stall
  crypto: marvell - Remove set but not used variable 'ivsize'
  crypto: ccp - Update driver messages to remove some confusion
  crypto: adiantum - add 1536 and 4096-byte test vectors
  crypto: nhpoly1305 - add a test vector with len % 16 != 0
  crypto: arm/aes-ce - update IV after partial final CTR block
  ...
2019-03-05 09:09:55 -08:00
Gustavo A. R. Silva
91e14842f8 crypto: af_alg - use struct_size() in sock_kfree_s()
Make use of the struct_size() helper instead of an open-coded version
in order to avoid any potential type mistakes, in particular in the
context in which this code is being used.

So, change the following form:

sizeof(*sgl) + sizeof(sgl->sg[0]) * (MAX_SGL_ENTS + 1)

to :

struct_size(sgl, sg, MAX_SGL_ENTS + 1)

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-28 14:17:59 +08:00
Eric Biggers
333e664772 crypto: adiantum - add 1536 and 4096-byte test vectors
Add 1536 and 4096-byte Adiantum test vectors so that the case where
there are multiple NH hashes is tested.  This is already tested by the
nhpoly1305 test vectors, but it should be tested at the Adiantum level
too.  Moreover the 4096-byte case is especially important.

As with the other Adiantum test vectors, these were generated by the
reference Python implementation at https://github.com/google/adiantum
and then automatically formatted for testmgr by a script.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-22 12:47:27 +08:00
Eric Biggers
367ecc0731 crypto: nhpoly1305 - add a test vector with len % 16 != 0
This is needed to test that the end of the message is zero-padded when
the length is not a multiple of 16 (NH_MESSAGE_UNIT).  It's already
tested indirectly by the 31-byte Adiantum test vector, but it should be
tested directly at the nhpoly1305 level too.

As with the other nhpoly1305 test vectors, this was generated by the
reference Python implementation at https://github.com/google/adiantum
and then automatically formatted for testmgr by a script.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-22 12:47:27 +08:00
Eric Biggers
e674dbc088 crypto: testmgr - add iv_out to all CTR test vectors
Test that all CTR implementations update the IV buffer to contain the
next counter block, aka the IV to continue the encryption/decryption of
a larger message.  When the length processed is a multiple of the block
size, users may rely on this for chaining.

When the length processed is *not* a multiple of the block size, simple
chaining doesn't work.  However, as noted in commit 88a3f582be
("crypto: arm64/aes - don't use IV buffer to return final keystream
block"), the generic CCM implementation assumes that the CTR IV is
handled in some sane way, not e.g. overwritten with part of the
keystream.  Since this was gotten wrong once already, it's desirable to
test for it.  And, the most straightforward way to do this is to enforce
that all CTR implementations have the same behavior as the generic
implementation, which returns the *next* counter following the final
partial block.  This behavior also has the advantage that if someone
does misuse this case for chaining, then the keystream won't be
repeated.  Thus, this patch makes the tests expect this behavior.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-22 12:47:27 +08:00
Eric Biggers
cdc694699a crypto: testmgr - add iv_out to all CBC test vectors
Test that all CBC implementations update the IV buffer to contain the
last ciphertext block, aka the IV to continue the encryption/decryption
of a larger message.  Users may rely on this for chaining.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-22 12:47:27 +08:00
Eric Biggers
8efd972ef9 crypto: testmgr - support checking skcipher output IV
Allow skcipher test vectors to declare the value the IV buffer should be
updated to at the end of the encryption or decryption operation.

(This check actually used to be supported in testmgr, but it was never
used and therefore got removed except for the AES-Keywrap special case.
But it will be used by CBC and CTR now, so re-add it.)

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-22 12:47:27 +08:00
Eric Biggers
c9e1d48a11 crypto: testmgr - remove extra bytes from 3DES-CTR IVs
3DES only has an 8-byte block size, but the 3DES-CTR test vectors use
16-byte IVs.  Remove the unused 8 bytes from the ends of the IVs.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-22 12:47:26 +08:00
Mao Wenan
9060cb719e net: crypto set sk to NULL when af_alg_release.
KASAN has found a use-after-free in sockfs_setattr.
The existing commit 6d8c50dcb0 ("socket: close race condition between
sock_close() and sockfs_setattr()") fixed a similar issue, but it missed
that the crypto module forgets to set sk to NULL after af_alg_release.
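
A sketch of the fix:

	int af_alg_release(struct socket *sock)
	{
		if (sock->sk) {
			sock_put(sock->sk);
			sock->sk = NULL; /* keep sockfs_setattr() off a freed sk */
		}
		return 0;
	}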

The KASAN report details are below:
BUG: KASAN: use-after-free in sockfs_setattr+0x120/0x150
Write of size 4 at addr ffff88837b956128 by task syz-executor0/4186

CPU: 2 PID: 4186 Comm: syz-executor0 Not tainted xxx + #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
1.10.2-1ubuntu1 04/01/2014
Call Trace:
 dump_stack+0xca/0x13e
 print_address_description+0x79/0x330
 ? vprintk_func+0x5e/0xf0
 kasan_report+0x18a/0x2e0
 ? sockfs_setattr+0x120/0x150
 sockfs_setattr+0x120/0x150
 ? sock_register+0x2d0/0x2d0
 notify_change+0x90c/0xd40
 ? chown_common+0x2ef/0x510
 chown_common+0x2ef/0x510
 ? chmod_common+0x3b0/0x3b0
 ? __lock_is_held+0xbc/0x160
 ? __sb_start_write+0x13d/0x2b0
 ? __mnt_want_write+0x19a/0x250
 do_fchownat+0x15c/0x190
 ? __ia32_sys_chmod+0x80/0x80
 ? trace_hardirqs_on_thunk+0x1a/0x1c
 __x64_sys_fchownat+0xbf/0x160
 ? lockdep_hardirqs_on+0x39a/0x5e0
 do_syscall_64+0xc8/0x580
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x462589
Code: f7 d8 64 89 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 48 89 f8 48 89
f7 48 89 d6 48 89
ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3
48 c7 c1 bc ff ff
ff f7 d8 64 89 01 48
RSP: 002b:00007fb4b2c83c58 EFLAGS: 00000246 ORIG_RAX: 0000000000000104
RAX: ffffffffffffffda RBX: 000000000072bfa0 RCX: 0000000000462589
RDX: 0000000000000000 RSI: 00000000200000c0 RDI: 0000000000000007
RBP: 0000000000000005 R08: 0000000000001000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fb4b2c846bc
R13: 00000000004bc733 R14: 00000000006f5138 R15: 00000000ffffffff

Allocated by task 4185:
 kasan_kmalloc+0xa0/0xd0
 __kmalloc+0x14a/0x350
 sk_prot_alloc+0xf6/0x290
 sk_alloc+0x3d/0xc00
 af_alg_accept+0x9e/0x670
 hash_accept+0x4a3/0x650
 __sys_accept4+0x306/0x5c0
 __x64_sys_accept4+0x98/0x100
 do_syscall_64+0xc8/0x580
 entry_SYSCALL_64_after_hwframe+0x49/0xbe

Freed by task 4184:
 __kasan_slab_free+0x12e/0x180
 kfree+0xeb/0x2f0
 __sk_destruct+0x4e6/0x6a0
 sk_destruct+0x48/0x70
 __sk_free+0xa9/0x270
 sk_free+0x2a/0x30
 af_alg_release+0x5c/0x70
 __sock_release+0xd3/0x280
 sock_close+0x1a/0x20
 __fput+0x27f/0x7f0
 task_work_run+0x136/0x1b0
 exit_to_usermode_loop+0x1a7/0x1d0
 do_syscall_64+0x461/0x580
 entry_SYSCALL_64_after_hwframe+0x49/0xbe

Syzkaller reproducer:
r0 = perf_event_open(&(0x7f0000000000)={0x0, 0x70, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, @perf_config_ext}, 0x0, 0x0,
0xffffffffffffffff, 0x0)
r1 = socket$alg(0x26, 0x5, 0x0)
getrusage(0x0, 0x0)
bind(r1, &(0x7f00000001c0)=@alg={0x26, 'hash\x00', 0x0, 0x0,
'sha256-ssse3\x00'}, 0x80)
r2 = accept(r1, 0x0, 0x0)
r3 = accept4$unix(r2, 0x0, 0x0, 0x0)
r4 = dup3(r3, r0, 0x0)
fchownat(r4, &(0x7f00000000c0)='\x00', 0x0, 0x0, 0x1000)

Fixes: 6d8c50dcb0 ("socket: close race condition between sock_close() and sockfs_setattr()")
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-02-18 12:01:24 -08:00
Iuliana Prodan
bd30cf533b crypto: export arc4 defines
Some arc4 cipher algorithm defines show up in two places:
crypto/arc4.c and drivers/crypto/bcm/cipher.h.
Let's export them in a common header and update their users.
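
The constants in question (values as defined in crypto/arc4.c):

	#define ARC4_MIN_KEY_SIZE	1
	#define ARC4_MAX_KEY_SIZE	256
	#define ARC4_BLOCK_SIZE		1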

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-15 13:21:55 +08:00
Eric Biggers
a6e5ef9baa crypto: testmgr - check for aead_request corruption
Check that algorithms do not change the aead_request structure, as users
may rely on submitting the request again (e.g. after copying new data
into the same source buffer) without reinitializing everything.
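
Conceptually (a simplified sketch; testmgr actually compares the fields
individually and reports which one changed):

	struct aead_request saved = *req;
	int err = enc ? crypto_aead_encrypt(req) : crypto_aead_decrypt(req);

	if (memcmp(&saved, req, sizeof(saved)))
		pr_err("alg: aead: %s corrupted the aead_request\n", driver);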

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08 15:30:09 +08:00
Eric Biggers
fa353c9917 crypto: testmgr - check for skcipher_request corruption
Check that algorithms do not change the skcipher_request structure, as
users may rely on submitting the request again (e.g. after copying new
data into the same source buffer) without reinitializing everything.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08 15:30:09 +08:00
Eric Biggers
4cc2dcf95f crypto: testmgr - convert hash testing to use testvec_configs
Convert alg_test_hash() to use the new test framework, adding a list of
testvec_configs to test by default.  When the extra self-tests are
enabled, randomly generated testvec_configs are tested as well.

This improves hash test coverage mainly because now all algorithms have
a variety of data layouts tested, whereas before each algorithm was
responsible for declaring its own chunked test cases which were often
missing or provided poor test coverage.  The new code also tests both
the MAY_SLEEP and !MAY_SLEEP cases and buffers that cross pages.

This already found bugs in the hash walk code and in the arm32 and arm64
implementations of crct10dif.

I removed the hash chunked test vectors that were the same as
non-chunked ones, but left the ones that were unique.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08 15:30:09 +08:00
Eric Biggers
ed96804ff1 crypto: testmgr - convert aead testing to use testvec_configs
Convert alg_test_aead() to use the new test framework, using the same
list of testvec_configs that skcipher testing uses.

This significantly improves AEAD test coverage mainly because previously
there was only very limited test coverage of the possible data layouts.
Now the data layouts to test are listed in one place for all algorithms
and optionally are also randomly generated.  In fact, only one AEAD
algorithm (AES-GCM) even had a chunked test case before.

This already found bugs in all the AEGIS and MORUS implementations, the
x86 AES-GCM implementation, and the arm64 AES-CCM implementation.

I removed the AEAD chunked test vectors that were the same as
non-chunked ones, but left the ones that were unique.

Note: the rewritten test code allocates an aead_request just once per
algorithm rather than once per encryption/decryption, but some AEAD
algorithms incorrectly change the tfm pointer in the request.  It's
nontrivial to fix these, so to move forward I'm temporarily working
around it by resetting the tfm pointer.  But they'll need to be fixed.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08 15:30:09 +08:00
Eric Biggers
4e7babba30 crypto: testmgr - convert skcipher testing to use testvec_configs
Convert alg_test_skcipher() to use the new test framework, adding a list
of testvec_configs to test by default.  When the extra self-tests are
enabled, randomly generated testvec_configs are tested as well.

This improves skcipher test coverage mainly because now all algorithms
have a variety of data layouts tested, whereas before each algorithm was
responsible for declaring its own chunked test cases which were often
missing or provided poor test coverage.  The new code also tests both
the MAY_SLEEP and !MAY_SLEEP cases, different IV alignments, and buffers
that cross pages.

This has already found a bug in the arm64 ctr-aes-neonbs algorithm.
It would have easily found many past bugs.

I removed the skcipher chunked test vectors that were the same as
non-chunked ones, but left the ones that were unique.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08 15:30:09 +08:00
Eric Biggers
25f9dddb92 crypto: testmgr - implement random testvec_config generation
Add functions that generate a random testvec_config, in preparation for
using it for randomized fuzz tests.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08 15:30:09 +08:00
Eric Biggers
5b2706a4d4 crypto: testmgr - introduce CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
To achieve more comprehensive crypto test coverage, I'd like to add fuzz
tests that use random data layouts and request flags.

To be most effective these tests should be part of testmgr, so they
automatically run on every algorithm registered with the crypto API.
However, they will take much longer to run than the current tests and
therefore will only really be intended to be run by developers, whereas
the current tests have a wider audience.

Therefore, add a new kconfig option CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
that can be set by developers to enable these extra, expensive tests.

Similar to the regular tests, also add a module parameter
cryptomgr.noextratests to support disabling the tests.

Finally, another module parameter cryptomgr.fuzz_iterations is added to
control how many iterations the fuzz tests do.  Note: for now setting
this to 0 will be equivalent to cryptomgr.noextratests=1.  But I opted
for separate parameters to provide more flexibility to add other types
of tests under the "extra tests" category in the future.
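
For example, a developer run might use:

	CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y	(kconfig)
	cryptomgr.fuzz_iterations=100		(kernel command line)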

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08 15:30:09 +08:00
Eric Biggers
3f47a03df6 crypto: testmgr - add testvec_config struct and helper functions
Crypto algorithms must produce the same output for the same input
regardless of data layout, i.e. how the src and dst scatterlists are
divided into chunks and how each chunk is aligned.  Request flags such
as CRYPTO_TFM_REQ_MAY_SLEEP must not affect the result either.

However, testing of this currently has many gaps.  For example,
individual algorithms are responsible for providing their own chunked
test vectors.  But many don't bother to do this or test only one or two
cases, providing poor test coverage.  Also, other things such as
misaligned IVs and CRYPTO_TFM_REQ_MAY_SLEEP are never tested at all.

Test code is also duplicated between the chunked and non-chunked cases,
making it difficult to make other improvements.

To improve the situation, this patch series basically moves the chunk
descriptions into the testmgr itself so that they are shared by all
algorithms.  However, it's done in an extensible way via a new struct
'testvec_config', which describes not just the scaled chunk lengths but
also all other aspects of the crypto operation besides the data itself
such as the buffer alignments, the request flags, whether the operation
is in-place or not, the IV alignment, and for hash algorithms when to
do each update() and when to use finup() vs. final() vs. digest().
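
An abridged sketch of the struct (field names illustrative; see
testmgr.c for the real definition):

	struct testvec_config {
		const char *name;
		bool inplace;				/* src == dst? */
		u32 req_flags;				/* e.g. CRYPTO_TFM_REQ_MAY_SLEEP */
		struct test_sg_division src_divs[8];	/* scaled chunk layout */
		struct test_sg_division dst_divs[8];
		unsigned int iv_offset;			/* extra IV misalignment */
	};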

Then, this patch series makes skcipher, aead, and hash algorithms be
tested against a list of default testvec_configs, replacing the current
test code.  This improves overall test coverage, without reducing test
performance too much.  Note that the test vectors themselves are not
changed, except for removing the chunk lists.

This series also adds randomized fuzz tests, enabled by a new kconfig
option intended for developer use only, where skcipher, aead, and hash
algorithms are tested against many randomly generated testvec_configs.
This provides much more comprehensive test coverage.

These improved tests have already exposed many bugs.

To start it off, this initial patch adds the testvec_config and various
helper functions that will be used by the skcipher, aead, and hash test
code that will be converted to use the new testvec_config framework.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08 15:30:09 +08:00
Eric Biggers
77568e535a crypto: ahash - fix another early termination in hash walk
Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and
"michael_mic", fail the improved hash tests because they sometimes
produce the wrong digest.  The bug is that in the case where a
scatterlist element crosses pages, not all the data is actually hashed
because the scatterlist walk terminates too early.  This happens because
the 'nbytes' variable in crypto_hash_walk_done() is assigned the number
of bytes remaining in the page, then later interpreted as the number of
bytes remaining in the scatterlist element.  Fix it.

Fixes: 900a081f69 ("crypto: ahash - Fix early termination in hash walk")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08 15:30:08 +08:00
Eric Biggers
d644f1c874 crypto: morus - fix handling chunked inputs
The generic MORUS implementations all fail the improved AEAD tests
because they produce the wrong result with some data layouts.  The issue
is that they assume that if the skcipher_walk API gives 'nbytes' not
aligned to the walksize (a.k.a. walk.stride), then it is the end of the
data.  In fact, this can happen before the end.  Fix them.
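
The correct pattern only treats a short 'nbytes' as final when the walk
really is at the end (a sketch; the AEGIS fix below is the same):

	while (walk.nbytes) {
		unsigned int nbytes = walk.nbytes;

		if (nbytes < walk.total)
			nbytes = round_down(nbytes, walk.stride);

		/* ... process exactly 'nbytes' bytes ... */

		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
	}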

Fixes: 396be41f16 ("crypto: morus - Add generic MORUS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08 15:30:08 +08:00
Eric Biggers
0f533e67d2 crypto: aegis - fix handling chunked inputs
The generic AEGIS implementations all fail the improved AEAD tests
because they produce the wrong result with some data layouts.  The issue
is that they assume that if the skcipher_walk API gives 'nbytes' not
aligned to the walksize (a.k.a. walk.stride), then it is the end of the
data.  In fact, this can happen before the end.  Fix them.

Fixes: f606a88e58 ("crypto: aegis - Add generic AEGIS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08 15:30:08 +08:00
Christopher Diaz Riveros
e3d90e52ea crypto: testmgr - use kmemdup
Fixes Coccinelle alerts:

/crypto/testmgr.c:2112:13-20: WARNING opportunity for kmemdup
/crypto/testmgr.c:2130:13-20: WARNING opportunity for kmemdup
/crypto/testmgr.c:2152:9-16: WARNING opportunity for kmemdup
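
I.e. each open-coded allocate-and-copy pair:

	p = kmalloc(len, GFP_KERNEL);
	if (p)
		memcpy(p, src, len);

becomes:

	p = kmemdup(src, len, GFP_KERNEL);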

Signed-off-by: Christopher Diaz Riveros <chrisadr@gentoo.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08 15:29:48 +08:00
Milan Broz
a8a3441663 crypto: testmgr - mark crc32 checksum as FIPS allowed
CRC32 is not a cryptographic hash algorithm,
so the FIPS restrictions should not apply to it.
(The CRC32C variant is already allowed.)

This CRC32 variant is used in the dm-crypt legacy TrueCrypt
IV implementation (tcw); the issue was detected by a cryptsetup
test suite failure in FIPS mode.

Signed-off-by: Milan Broz <gmazyland@gmail.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-01 14:42:05 +08:00