If scomp_acomp_comp_decomp() fails to allocate memory for the
destination then we never copy back the data we compressed.
It is probably best to return an error code instead of 0 in case of
failure.
I haven't found any user that uses acomp_request_set_params()
without the `dst' buffer, so there is probably no harm.
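A minimal sketch of the intended fix, assuming the destination scatterlist
is allocated with sgl_alloc(); the variable names here are approximations,
not the literal patch:

	if (!req->dst) {
		req->dst = sgl_alloc(req->dlen, GFP_ATOMIC, NULL);
		if (!req->dst) {
			ret = -ENOMEM;	/* previously fell through and returned 0 */
			goto out;
		}
	}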
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Spotted while reviewing patches from Eric Biggers.
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In salsa20_docrypt(), use crypto_xor_cpy() instead of crypto_xor().
This avoids having to memcpy() the src buffer to the dst buffer.
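The shape of the change, as a sketch rather than the literal diff
(crypto_xor() and crypto_xor_cpy() are the real helpers from
<crypto/algapi.h>; dst/src/keystream/nbytes stand in for the local
variables):

	/* before: copy the plaintext, then XOR the keystream in place */
	memcpy(dst, src, nbytes);
	crypto_xor(dst, keystream, nbytes);

	/* after: XOR src with the keystream while writing to dst */
	crypto_xor_cpy(dst, src, keystream, nbytes);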
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In chacha_docrypt(), use crypto_xor_cpy() instead of crypto_xor().
This avoids having to memcpy() the src buffer to the dst buffer.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All crypto API algorithms are supposed to support the case where they
are called in a context where SIMD instructions are unusable, e.g. IRQ
context on some architectures. However, this isn't tested for by the
self-tests, causing bugs to go undetected.
Now that all algorithms have been converted to use crypto_simd_usable(),
update the self-tests to test the no-SIMD case. First, a bool
testvec_config::nosimd is added. When set, the crypto operation is
executed with preemption disabled and with crypto_simd_usable() mocked
out to return false on the current CPU.
A bool test_sg_division::nosimd is also added. For hash algorithms it's
honored by the corresponding ->update(). By setting just a subset of
these bools, the case where some ->update()s are done in SIMD context
and some are done in no-SIMD context is also tested.
These bools are then randomly set by generate_random_testvec_config().
For now, all no-SIMD testing is limited to the extra crypto self-tests,
because it might be a bit too invasive for the regular self-tests.
But this could be changed later.
This has already found bugs in the arm64 AES-GCM and ChaCha algorithms.
This would have found some past bugs as well.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace all calls to may_use_simd() in the shared SIMD helpers with
crypto_simd_usable(), in order to allow testing the no-SIMD code paths.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
So that the no-SIMD fallback code can be tested by the crypto
self-tests, add a macro crypto_simd_usable() which wraps may_use_simd(),
but also returns false if the crypto self-tests have set a per-CPU bool
to disable SIMD in crypto code on the current CPU.
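Roughly, the helper can be pictured like this (a sketch only; the exact
per-CPU variable name and config guards are whatever the patch uses):

	#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
	DECLARE_PER_CPU(bool, crypto_simd_disabled_for_test);
	#define crypto_simd_usable() \
		(may_use_simd() && !this_cpu_read(crypto_simd_disabled_for_test))
	#else
	#define crypto_simd_usable() may_use_simd()
	#endif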
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The arm64 implementations of ChaCha and XChaCha are failing the extra
crypto self-tests following my patches to test the !may_use_simd() code
paths, which previously were untested. The problem is as follows:
When !may_use_simd(), the arm64 NEON implementations fall back to the
generic implementation, which uses the skcipher_walk API to iterate
through the src/dst scatterlists. Due to how the skcipher_walk API
works, walk.stride is set from the skcipher_alg actually being used,
which in this case is the arm64 NEON algorithm. Thus walk.stride is
5*CHACHA_BLOCK_SIZE, not CHACHA_BLOCK_SIZE.
This unnecessarily large stride shouldn't cause an actual problem.
However, the generic implementation computes round_down(nbytes,
walk.stride). round_down() assumes the round amount is a power of 2,
which 5*CHACHA_BLOCK_SIZE is not, so it gives the wrong result.
This causes the following case in skcipher_walk_done() to be hit,
causing a WARN() and failing the encryption operation:
if (WARN_ON(err)) {
/* unexpected case; didn't process all bytes */
err = -EINVAL;
goto finish;
}
Fix it by rounding down to CHACHA_BLOCK_SIZE instead of walk.stride.
(Or we could replace round_down() with rounddown(), but that would add a
slow division operation every time, which I think we should avoid.)
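The power-of-2 assumption is easy to demonstrate in userspace with
simplified copies of the kernel macros (illustrative values only):

	#include <stdio.h>

	/* simplified copies of the helpers in include/linux/kernel.h */
	#define round_down(x, y)  ((x) & ~((__typeof__(x))((y) - 1)))	/* assumes power-of-2 y */
	#define rounddown(x, y)   (((x) / (y)) * (y))			/* any y, but uses division */

	#define CHACHA_BLOCK_SIZE 64

	int main(void)
	{
		unsigned int nbytes = 500;
		unsigned int stride = 5 * CHACHA_BLOCK_SIZE;	/* 320, not a power of 2 */

		printf("round_down(%u, %u) = %u  (wrong, not a multiple of 320)\n",
		       nbytes, stride, round_down(nbytes, stride));
		printf("rounddown(%u, %u)  = %u  (right, but needs a division)\n",
		       nbytes, stride, rounddown(nbytes, stride));
		printf("round_down(%u, %u)  = %u  (the fix: round to the block size)\n",
		       nbytes, CHACHA_BLOCK_SIZE, round_down(nbytes, CHACHA_BLOCK_SIZE));
		return 0;
	}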
Fixes: 2fe55987b2 ("crypto: arm64/chacha - use combined SIMD/ALU routine for more speed")
Cc: <stable@vger.kernel.org> # v5.0+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all AEAD algorithms (that I have the hardware to test, at
least) have been fixed to not modify the user-provided aead_request,
remove the workaround from testmgr that reset aead_request::tfm after
each AEAD encryption/decryption.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 implementations of MORUS-1280 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 implementation of MORUS-640 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 implementation of AEGIS-256 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 implementation of AEGIS-128L to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 implementation of AEGIS-128 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Update the crypto_simd module to support wrapping AEAD algorithms.
Previously it only supported skciphers. The code for each is similar.
I'll be converting the x86 implementations of AES-GCM, AEGIS, and MORUS
to use this. Currently they each independently implement the same
functionality. This will not only simplify the code, but it will also
fix the bug detected by the improved self-tests: the user-provided
aead_request is modified. This is because these algorithms currently
reuse the original request, whereas the crypto_simd helpers build a new
request in the original request's context.
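The key idea can be sketched like this (identifier names approximate;
'child' stands for whichever inner tfm the helper dispatches to): rather
than reusing the caller's request, a subrequest is built in its context.

	struct aead_request *subreq = aead_request_ctx(req);

	aead_request_set_tfm(subreq, child);
	aead_request_set_callback(subreq, req->base.flags,
				  req->base.complete, req->base.data);
	aead_request_set_ad(subreq, req->assoclen);
	aead_request_set_crypt(subreq, req->src, req->dst,
			       req->cryptlen, req->iv);
	return crypto_aead_encrypt(subreq);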
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To prevent any issues with persistent data, separate lzo-rle from lzo so
that it is treated as a separate algorithm, and lzo is still available.
Link: http://lkml.kernel.org/r/20190205155944.16007-3-dave.rodgman@arm.com
Signed-off-by: Dave Rodgman <dave.rodgman@arm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com>
Cc: Matt Sealey <matt.sealey@arm.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <nitingupta910@gmail.com>
Cc: Richard Purdie <rpurdie@openedhand.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Sonny Rao <sonnyrao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull crypto update from Herbert Xu:
"API:
- Add helper for simple skcipher modes.
- Add helper to register multiple templates.
- Set CRYPTO_TFM_NEED_KEY when setkey fails.
- Require neither or both of export/import in shash.
- AEAD decryption test vectors are now generated from encryption
ones.
- New option CONFIG_CRYPTO_MANAGER_EXTRA_TESTS that includes random
fuzzing.
Algorithms:
- Conversions to skcipher and helper for many templates.
- Add more test vectors for nhpoly1305 and adiantum.
Drivers:
- Add crypto4xx prng support.
- Add xcbc/cmac/ecb support in caam.
- Add AES support for Exynos5433 in s5p.
- Remove sha384/sha512 from artpec7 as hardware cannot do partial
hash"
[ There is a merge of the Freescale SoC tree in order to pull in changes
required by patches to the caam/qi2 driver. ]
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (174 commits)
crypto: s5p - add AES support for Exynos5433
dt-bindings: crypto: document Exynos5433 SlimSSS
crypto: crypto4xx - add missing of_node_put after of_device_is_available
crypto: cavium/zip - fix collision with generic cra_driver_name
crypto: af_alg - use struct_size() in sock_kfree_s()
crypto: caam - remove redundant likely/unlikely annotation
crypto: s5p - update iv after AES-CBC op end
crypto: x86/poly1305 - Clear key material from stack in SSE2 variant
crypto: caam - generate hash keys in-place
crypto: caam - fix DMA mapping xcbc key twice
crypto: caam - fix hash context DMA unmap size
hwrng: bcm2835 - fix probe as platform device
crypto: s5p-sss - Use AES_BLOCK_SIZE define instead of number
crypto: stm32 - drop pointless static qualifier in stm32_hash_remove()
crypto: chelsio - Fixed Traffic Stall
crypto: marvell - Remove set but not used variable 'ivsize'
crypto: ccp - Update driver messages to remove some confusion
crypto: adiantum - add 1536 and 4096-byte test vectors
crypto: nhpoly1305 - add a test vector with len % 16 != 0
crypto: arm/aes-ce - update IV after partial final CTR block
...
Make use of the struct_size() helper instead of an open-coded version
in order to avoid any potential type mistakes, in particular in the
context in which this code is being used.
So, change the following form:
sizeof(*sgl) + sizeof(sgl->sg[0]) * (MAX_SGL_ENTS + 1)
to:
struct_size(sgl, sg, MAX_SGL_ENTS + 1)
This code was detected with the help of Coccinelle.
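For reference, struct_size(p, member, n) (from include/linux/overflow.h)
evaluates to sizeof(*p) + n * sizeof(p->member[0]), saturating to SIZE_MAX
on overflow instead of wrapping, so the allocation site ends up roughly as
follows (a sketch, not the exact af_alg.c lines):

	sgl = sock_kmalloc(sk, struct_size(sgl, sg, MAX_SGL_ENTS + 1),
			   GFP_KERNEL);
	if (!sgl)
		return -ENOMEM;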
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add 1536 and 4096-byte Adiantum test vectors so that the case where
there are multiple NH hashes is tested. This is already tested by the
nhpoly1305 test vectors, but it should be tested at the Adiantum level
too. Moreover the 4096-byte case is especially important.
As with the other Adiantum test vectors, these were generated by the
reference Python implementation at https://github.com/google/adiantum
and then automatically formatted for testmgr by a script.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This is needed to test that the end of the message is zero-padded when
the length is not a multiple of 16 (NH_MESSAGE_UNIT). It's already
tested indirectly by the 31-byte Adiantum test vector, but it should be
tested directly at the nhpoly1305 level too.
As with the other nhpoly1305 test vectors, this was generated by the
reference Python implementation at https://github.com/google/adiantum
and then automatically formatted for testmgr by a script.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Test that all CTR implementations update the IV buffer to contain the
next counter block, aka the IV to continue the encryption/decryption of
a larger message. When the length processed is a multiple of the block
size, users may rely on this for chaining.
When the length processed is *not* a multiple of the block size, simple
chaining doesn't work. However, as noted in commit 88a3f582be
("crypto: arm64/aes - don't use IV buffer to return final keystream
block"), the generic CCM implementation assumes that the CTR IV is
handled in some sane way, not e.g. overwritten with part of the
keystream. Since this was gotten wrong once already, it's desirable to
test for it. And, the most straightforward way to do this is to enforce
that all CTR implementations have the same behavior as the generic
implementation, which returns the *next* counter following the final
partial block. This behavior also has the advantage that if someone
does misuse this case for chaining, then the keystream won't be
repeated. Thus, this patch makes the tests expect this behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Test that all CBC implementations update the IV buffer to contain the
last ciphertext block, aka the IV to continue the encryption/decryption
of a larger message. Users may rely on this for chaining.
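As an illustration of the chaining pattern this enforces (a sketch using
the standard skcipher request API; the split point and names are
arbitrary): encrypting a message in two halves with the same IV buffer
must produce the same ciphertext as a single call, because each call
leaves the last ciphertext block in the IV buffer.

	/* first half */
	skcipher_request_set_crypt(req, src_sg1, dst_sg1, half_len, iv);
	err = crypto_skcipher_encrypt(req);

	/* second half: simply reuses 'iv', which now holds the last
	 * ciphertext block of the first half */
	skcipher_request_set_crypt(req, src_sg2, dst_sg2, half_len, iv);
	err = crypto_skcipher_encrypt(req);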
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Allow skcipher test vectors to declare the value the IV buffer should be
updated to at the end of the encryption or decryption operation.
(This check actually used to be supported in testmgr, but it was never
used and therefore got removed except for the AES-Keywrap special case.
But it will be used by CBC and CTR now, so re-add it.)
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3DES only has an 8-byte block size, but the 3DES-CTR test vectors use
16-byte IVs. Remove the unused 8 bytes from the ends of the IVs.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some arc4 cipher algorithm defines show up in two places:
crypto/arc4.c and drivers/crypto/bcm/cipher.h.
Let's export them in a common header and update their users.
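The shared header presumably ends up looking something like this (the
values match what crypto/arc4.c defines today; treat this as a sketch of
the patch rather than its exact contents):

	/* include/crypto/arc4.h */
	#ifndef _CRYPTO_ARC4_H
	#define _CRYPTO_ARC4_H

	#define ARC4_MIN_KEY_SIZE	1
	#define ARC4_MAX_KEY_SIZE	256
	#define ARC4_BLOCK_SIZE		1

	#endif /* _CRYPTO_ARC4_H */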
Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Check that algorithms do not change the aead_request structure, as users
may rely on submitting the request again (e.g. after copying new data
into the same source buffer) without reinitializing everything.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Check that algorithms do not change the skcipher_request structure, as
users may rely on submitting the request again (e.g. after copying new
data into the same source buffer) without reinitializing everything.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert alg_test_hash() to use the new test framework, adding a list of
testvec_configs to test by default. When the extra self-tests are
enabled, randomly generated testvec_configs are tested as well.
This improves hash test coverage mainly because now all algorithms have
a variety of data layouts tested, whereas before each algorithm was
responsible for declaring its own chunked test cases which were often
missing or provided poor test coverage. The new code also tests both
the MAY_SLEEP and !MAY_SLEEP cases and buffers that cross pages.
This already found bugs in the hash walk code and in the arm32 and arm64
implementations of crct10dif.
I removed the hash chunked test vectors that were the same as
non-chunked ones, but left the ones that were unique.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert alg_test_aead() to use the new test framework, using the same
list of testvec_configs that skcipher testing uses.
This significantly improves AEAD test coverage mainly because previously
there was only very limited test coverage of the possible data layouts.
Now the data layouts to test are listed in one place for all algorithms
and optionally are also randomly generated. In fact, only one AEAD
algorithm (AES-GCM) even had a chunked test case before.
This already found bugs in all the AEGIS and MORUS implementations, the
x86 AES-GCM implementation, and the arm64 AES-CCM implementation.
I removed the AEAD chunked test vectors that were the same as
non-chunked ones, but left the ones that were unique.
Note: the rewritten test code allocates an aead_request just once per
algorithm rather than once per encryption/decryption, but some AEAD
algorithms incorrectly change the tfm pointer in the request. It's
nontrivial to fix these, so to move forward I'm temporarily working
around it by resetting the tfm pointer. But they'll need to be fixed.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert alg_test_skcipher() to use the new test framework, adding a list
of testvec_configs to test by default. When the extra self-tests are
enabled, randomly generated testvec_configs are tested as well.
This improves skcipher test coverage mainly because now all algorithms
have a variety of data layouts tested, whereas before each algorithm was
responsible for declaring its own chunked test cases which were often
missing or provided poor test coverage. The new code also tests both
the MAY_SLEEP and !MAY_SLEEP cases, different IV alignments, and buffers
that cross pages.
This has already found a bug in the arm64 ctr-aes-neonbs algorithm.
It would have easily found many past bugs.
I removed the skcipher chunked test vectors that were the same as
non-chunked ones, but left the ones that were unique.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add functions that generate a random testvec_config, in preparation for
using it for randomized fuzz tests.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To achieve more comprehensive crypto test coverage, I'd like to add fuzz
tests that use random data layouts and request flags.
To be most effective these tests should be part of testmgr, so they
automatically run on every algorithm registered with the crypto API.
However, they will take much longer to run than the current tests and
therefore will only really be intended to be run by developers, whereas
the current tests have a wider audience.
Therefore, add a new kconfig option CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
that can be set by developers to enable these extra, expensive tests.
Similar to the regular tests, also add a module parameter
cryptomgr.noextratests to support disabling the tests.
Finally, another module parameter cryptomgr.fuzz_iterations is added to
control how many iterations the fuzz tests do. Note: for now setting
this to 0 will be equivalent to cryptomgr.noextratests=1. But I opted
for separate parameters to provide more flexibility to add other types
of tests under the "extra tests" category in the future.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Crypto algorithms must produce the same output for the same input
regardless of data layout, i.e. how the src and dst scatterlists are
divided into chunks and how each chunk is aligned. Request flags such
as CRYPTO_TFM_REQ_MAY_SLEEP must not affect the result either.
However, testing of this currently has many gaps. For example,
individual algorithms are responsible for providing their own chunked
test vectors. But many don't bother to do this or test only one or two
cases, providing poor test coverage. Also, other things such as
misaligned IVs and CRYPTO_TFM_REQ_MAY_SLEEP are never tested at all.
Test code is also duplicated between the chunked and non-chunked cases,
making it difficult to make other improvements.
To improve the situation, this patch series basically moves the chunk
descriptions into the testmgr itself so that they are shared by all
algorithms. However, it's done in an extensible way via a new struct
'testvec_config', which describes not just the scaled chunk lengths but
also all other aspects of the crypto operation besides the data itself
such as the buffer alignments, the request flags, whether the operation
is in-place or not, the IV alignment, and for hash algorithms when to
do each update() and when to use finup() vs. final() vs. digest().
Then, this patch series makes skcipher, aead, and hash algorithms be
tested against a list of default testvec_configs, replacing the current
test code. This improves overall test coverage, without reducing test
performance too much. Note that the test vectors themselves are not
changed, except for removing the chunk lists.
This series also adds randomized fuzz tests, enabled by a new kconfig
option intended for developer use only, where skcipher, aead, and hash
algorithms are tested against many randomly generated testvec_configs.
This provides much more comprehensive test coverage.
These improved tests have already exposed many bugs.
To start it off, this initial patch adds the testvec_config and various
helper functions that will be used by the skcipher, aead, and hash test
code that will be converted to use the new testvec_config framework.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and
"michael_mic", fail the improved hash tests because they sometimes
produce the wrong digest. The bug is that in the case where a
scatterlist element crosses pages, not all the data is actually hashed
because the scatterlist walk terminates too early. This happens because
the 'nbytes' variable in crypto_hash_walk_done() is assigned the number
of bytes remaining in the page, then later interpreted as the number of
bytes remaining in the scatterlist element. Fix it.
Fixes: 900a081f69 ("crypto: ahash - Fix early termination in hash walk")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The generic MORUS implementations all fail the improved AEAD tests
because they produce the wrong result with some data layouts. The issue
is that they assume that if the skcipher_walk API gives 'nbytes' not
aligned to the walksize (a.k.a. walk.stride), then it is the end of the
data. In fact, this can happen before the end. Fix them.
Fixes: 396be41f16 ("crypto: morus - Add generic MORUS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The generic AEGIS implementations all fail the improved AEAD tests
because they produce the wrong result with some data layouts. The issue
is that they assume that if the skcipher_walk API gives 'nbytes' not
aligned to the walksize (a.k.a. walk.stride), then it is the end of the
data. In fact, this can happen before the end. Fix them.
Fixes: f606a88e58 ("crypto: aegis - Add generic AEGIS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CRC32 is not a cryptographic hash algorithm,
so the FIPS restrictions should not apply to it.
(The CRC32C variant is already allowed.)
This CRC32 variant is used in the dm-crypt legacy TrueCrypt
IV implementation (tcw); detected by a cryptsetup test suite
failure in FIPS mode.
Signed-off-by: Milan Broz <gmazyland@gmail.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instantiating "cryptd(crc32c)" causes a crypto self-test failure because
the crypto_alloc_shash() in alg_test_crc32c() fails. This is because
cryptd(crc32c) is an ahash algorithm, not a shash algorithm; so it can
only be accessed through the ahash API, unlike shash algorithms which
can be accessed through both the ahash and shash APIs.
As the test is testing the shash descriptor format which is only
applicable to shash algorithms, skip it for ahash algorithms.
(Note that it's still important to fix crypto self-test failures even
for weird algorithm instantiations like cryptd(crc32c) that no one
would really use; in fips_enabled mode unprivileged users can use them
to panic the kernel, and also they prevent treating a crypto self-test
failure as a bug when fuzzing the kernel.)
Fixes: 8e3ee85e68 ("crypto: crc32c - Test descriptor context format")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use kmemdup rather than duplicating its implementation
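The generic pattern being replaced (a sketch; the actual variable names
depend on the call site):

	/* before: open-coded duplication */
	p = kmalloc(len, GFP_KERNEL);
	if (!p)
		return -ENOMEM;
	memcpy(p, src, len);

	/* after */
	p = kmemdup(src, len, GFP_KERNEL);
	if (!p)
		return -ENOMEM;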
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CRYPTO_TFM_REQ_WEAK_KEY confuses newcomers to the crypto API because it
sounds like it is requesting a weak key. Actually, it is requesting
that weak keys be forbidden (for algorithms that have the notion of
"weak keys"; currently only DES and XTS do).
Also it is only one letter away from CRYPTO_TFM_RES_WEAK_KEY, with which
it can be easily confused. (This in fact happened in the UX500 driver,
though just in some debugging messages.)
Therefore, make the intent clear by renaming it to
CRYPTO_TFM_REQ_FORBID_WEAK_KEYS.
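For callers, usage stays the same apart from the name; a hedged example of
how a user asks for weak keys to be rejected before setting a DES key:

	crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
	err = crypto_skcipher_setkey(tfm, key, keylen);
	/* a weak key now fails setkey, with CRYPTO_TFM_RES_WEAK_KEY reported */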
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use crypto template array registering API to simplify the code.
Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use crypto template array registering API to simplify the code.
Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use crypto template array registering API to simplify the code.
Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use crypto template array registering API to simplify the code.
Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a helper to (un)register an array of templates. The
following patches will use this helper to simplify the code.
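A sketch of how a template file uses the helper after conversion (modelled
loosely on the later ctr.c change; exact identifiers may differ):

	static struct crypto_template crypto_ctr_tmpls[] = {
		{ .name = "ctr", .create = crypto_ctr_create, .module = THIS_MODULE },
		{ .name = "rfc3686", .create = crypto_rfc3686_create, .module = THIS_MODULE },
	};

	static int __init crypto_ctr_module_init(void)
	{
		return crypto_register_templates(crypto_ctr_tmpls,
						 ARRAY_SIZE(crypto_ctr_tmpls));
	}

	static void __exit crypto_ctr_module_exit(void)
	{
		crypto_unregister_templates(crypto_ctr_tmpls,
					    ARRAY_SIZE(crypto_ctr_tmpls));
	}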
Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The license boiler plate text is not ideal for machine parsing. The kernel
uses SPDX license identifiers for that purpose, which replace the boiler
plate text.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Ondrej Mosnacek <omosnacek@gmail.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Acked-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The license boiler plate text is not ideal for machine parsing. The kernel
uses SPDX license identifiers for that purpose, which replace the boiler
plate text.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Ondrej Mosnacek <omosnacek@gmail.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Acked-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Precise and non-ambiguous license information is important. The recently
added aegis header file has an SPDX license identifier, which is nice, but
at the same time it has a contradictory license boiler plate text.
SPDX-License-Identifier: GPL-2.0
versus
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version
Oh well.
As the other aegis related files are licensed under the GPL v2 or later,
it's assumed that the boiler plate code is correct, but the SPDX license
identifier is wrong.
Fix the SPDX identifier and remove the boiler plate as it is redundant.
Fixes: f606a88e58 ("crypto: aegis - Add generic AEGIS AEAD implementations")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Ondrej Mosnacek <omosnacek@gmail.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Acked-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently testmgr has separate encryption and decryption test vectors
for AEADs. That's massively redundant, since usually the decryption
tests are identical to the encryption tests, just with the input/result
swapped. Moreover, for some algorithms decryption test vectors were never
added, so currently only encryption is being tested for them.
Therefore, eliminate the redundancy by removing the AEAD decryption test
vectors and updating testmgr to test both AEAD encryption and decryption
using what used to be the encryption test vectors. Naming is adjusted
accordingly: each aead_testvec now has a 'ptext' (plaintext), 'plen'
(plaintext length), 'ctext' (ciphertext), and 'clen' (ciphertext length)
instead of an 'input', 'ilen', 'result', and 'rlen'. "Ciphertext" here
refers to the full ciphertext, including the authentication tag.
For now the scatterlist divisions are just given for the plaintext
length, not also the ciphertext length. For decryption, the last
scatterlist element is just extended by the authentication tag length.
In total, this removes over 5000 lines from testmgr.h, with no reduction
in test coverage since prior patches already copied the few unique
decryption test vectors into the encryption test vectors.
The testmgr.h portion of this patch was automatically generated using
the following awk script, except that I also manually updated the
definition of 'struct aead_testvec' and fixed the location of the
comment describing the AEGIS-128 test vectors.
BEGIN { OTHER = 0; ENCVEC = 1; DECVEC = 2; DECVEC_TAIL = 3; mode = OTHER }
/^static const struct aead_testvec.*_enc_/ { sub("_enc", ""); mode = ENCVEC }
/^static const struct aead_testvec.*_dec_/ { mode = DECVEC }
mode == ENCVEC {
	sub(/\.input[[:space:]]*=/, ".ptext\t=")
	sub(/\.result[[:space:]]*=/, ".ctext\t=")
	sub(/\.ilen[[:space:]]*=/, ".plen\t=")
	sub(/\.rlen[[:space:]]*=/, ".clen\t=")
	print
}
mode == DECVEC_TAIL && /[^[:space:]]/ { mode = OTHER }
mode == OTHER { print }
mode == ENCVEC && /^};/ { mode = OTHER }
mode == DECVEC && /^};/ { mode = DECVEC_TAIL }
Note that git's default diff algorithm gets confused by the testmgr.h
portion of this patch, and reports too many lines added and removed.
It's better viewed with 'git diff --minimal' (or 'git show --minimal'),
which reports "2 files changed, 1235 insertions(+), 6491 deletions(-)".
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
One "rfc4543(gcm(aes))" decryption test vector doesn't exactly match any of the
encryption test vectors with input and result swapped. In preparation
for removing the AEAD decryption test vectors and testing AEAD
decryption using the encryption test vectors, add this to the encryption
test vectors, so we don't lose any test coverage.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some "gcm(aes)" decryption test vectors don't exactly match any of the
encryption test vectors with input and result swapped. In preparation
for removing the AEAD decryption test vectors and testing AEAD
decryption using the encryption test vectors, add these to the
encryption test vectors, so we don't lose any test coverage.
In the case of the chunked test vector, I truncated the last scatterlist
element to the end of the plaintext.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some "ccm(aes)" decryption test vectors don't exactly match any of the
encryption test vectors with input and result swapped. In preparation
for removing the AEAD decryption test vectors and testing AEAD
decryption using the encryption test vectors, add these to the
encryption test vectors, so we don't lose any test coverage.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation for unifying the AEAD encryption and decryption test
vectors, skip AEAD test vectors with the 'novrfy' (verification failure
expected) flag set when testing encryption rather than decryption.
These test vectors only make sense for decryption.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
sk_alloc() already sets sock::sk_family to PF_ALG which is passed as the
'family' argument, so there's no need to set it again.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
af_alg_count_tsgl() iterates through a list without modifying it, so use
list_for_each_entry() rather than list_for_each_entry_safe(). Also make
the pointers 'const' to make it clearer that nothing is modified.
No actual change in behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some exported functions in af_alg.c aren't used outside of that file.
Therefore, un-export them and make them 'static'.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_cfg_mutex in crypto_user_stat.c is unused. Remove it.
Cc: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix an unaligned memory access in tgr192_transform() by using the
unaligned access helpers.
Fixes: 06ace7a9ba ("[CRYPTO] Use standard byte order macros wherever possible")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the declaration of crypto_nlsk into internal/cryptouser.h. This
fixes the following sparse warning:
crypto/crypto_user_base.c:41:13: warning: symbol 'crypto_nlsk' was not declared. Should it be static?
Cc: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crc32c context is in CPU endianness, whereas the final digest is
little endian. alg_test_crc32c() got this mixed up. Fix it.
The test passes both before and after, but this patch fixes the
following sparse warning:
crypto/testmgr.c:1912:24: warning: cast to restricted __le32
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
streebog_uint512::qword needs to be __le64, not u64. This fixes a large
number of sparse warnings:
crypto/streebog_generic.c:25:9: warning: incorrect type in initializer (different base types)
crypto/streebog_generic.c:25:9: expected unsigned long long
crypto/streebog_generic.c:25:9: got restricted __le64 [usertype]
[omitted many similar warnings]
No actual change in behavior.
Cc: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Include internal/rsa.h in rsa-pkcs1pad.c to get the declaration of
rsa_pkcs1pad_tmpl. This fixes the following sparse warning:
crypto/rsa-pkcs1pad.c:698:24: warning: symbol 'rsa_pkcs1pad_tmpl' was not declared. Should it be static?
Cc: Andrzej Zaborowski <andrew.zaborowski@intel.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In gcm_hash_len(), use be128 rather than u128. This fixes the following
sparse warnings:
crypto/gcm.c:252:19: warning: incorrect type in assignment (different base types)
crypto/gcm.c:252:19: expected unsigned long long [usertype] a
crypto/gcm.c:252:19: got restricted __be64 [usertype]
crypto/gcm.c:253:19: warning: incorrect type in assignment (different base types)
crypto/gcm.c:253:19: expected unsigned long long [usertype] b
crypto/gcm.c:253:19: got restricted __be64 [usertype]
No actual change in behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Before this, if an akcipher_testvec has `public_key_vec' set to true
(i.e. it only has a public key), only the sign/encrypt test is performed,
and the verify/decrypt test is skipped.
With a public key we can encrypt and verify, but signing and decrypting
require a private key.
This logic is correct for the encrypt/decrypt tests (decrypt is skipped if
there is no private key), but incorrect for the sign/verify tests: sign is
attempted even when there is no private key, while verify is skipped
whenever only a public key is provided.
Rework `test_akcipher_one' to arrange the tests properly depending on the
values of `public_key_vec' and `siggen_sigver_test'.
No tests were missed, since there is only one sign/verify test (which has
`siggen_sigver_test' set to true) and it has a private key, but future
tests could benefit from this improvement.
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_init_shash_ops_async() only gives the ahash tfm non-NULL
->export() and ->import() if the underlying shash alg has these
non-NULL. This doesn't make sense because when an shash algorithm is
registered, shash_prepare_alg() sets a default ->export() and ->import()
if the implementor didn't provide them. And elsewhere it's assumed that
all shash algs and ahash tfms have non-NULL ->export() and ->import().
Therefore, remove these unnecessary, always-true conditions.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Prevent registering shash algorithms that implement ->export() but not
->import(), or ->import() but not ->export(). Such cases don't make
sense and could confuse the check that shash_prepare_alg() does for just
->export().
I don't believe this affects any existing algorithms; this is just
preventing future mistakes.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some algorithms have a ->setkey() method that is not atomic, in the
sense that setting a key can fail after changes were already made to the
tfm context. In this case, if a key was already set the tfm can end up
in a state that corresponds to neither the old key nor the new key.
For example, in gcm.c, if the kzalloc() fails due to lack of memory,
then the CTR part of GCM will have the new key but GHASH will not.
It's not feasible to make all ->setkey() methods atomic, especially ones
that have to key multiple sub-tfms. Therefore, make the crypto API set
CRYPTO_TFM_NEED_KEY if ->setkey() fails, to prevent the tfm from being
used until a new key is set.
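Roughly, the generic setkey wrapper ends up doing something like the
following (an abridged sketch, not the literal diff):

	int crypto_aead_setkey(struct crypto_aead *tfm, const u8 *key,
			       unsigned int keylen)
	{
		int err;

		/* ... alignment handling elided ... */
		err = crypto_aead_alg(tfm)->setkey(tfm, key, keylen);
		if (unlikely(err)) {
			/* leave the tfm unusable until a good key is set */
			crypto_aead_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
			return err;
		}

		crypto_aead_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
		return 0;
	}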
[Cc stable mainly because when introducing the NEED_KEY flag I changed
AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG
previously didn't have this problem. So these "incompletely keyed"
states became theoretically accessible via AF_ALG -- though, the
opportunities for causing real mischief seem pretty limited.]
Fixes: dc26c17f74 ("crypto: aead - prevent using AEADs without setting key")
Cc: <stable@vger.kernel.org> # v4.16+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some algorithms have a ->setkey() method that is not atomic, in the
sense that setting a key can fail after changes were already made to the
tfm context. In this case, if a key was already set the tfm can end up
in a state that corresponds to neither the old key nor the new key.
For example, in lrw.c, if gf128mul_init_64k_bbe() fails due to lack of
memory, then priv::table will be left NULL. After that, encryption with
that tfm will cause a NULL pointer dereference.
It's not feasible to make all ->setkey() methods atomic, especially ones
that have to key multiple sub-tfms. Therefore, make the crypto API set
CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a
key, to prevent the tfm from being used until a new key is set.
[Cc stable mainly because when introducing the NEED_KEY flag I changed
AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG
previously didn't have this problem. So these "incompletely keyed"
states became theoretically accessible via AF_ALG -- though, the
opportunities for causing real mischief seem pretty limited.]
Fixes: f8d33fac84 ("crypto: skcipher - prevent using skciphers without setting key")
Cc: <stable@vger.kernel.org> # v4.16+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some algorithms have a ->setkey() method that is not atomic, in the
sense that setting a key can fail after changes were already made to the
tfm context. In this case, if a key was already set the tfm can end up
in a state that corresponds to neither the old key nor the new key.
It's not feasible to make all ->setkey() methods atomic, especially ones
that have to key multiple sub-tfms. Therefore, make the crypto API set
CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a
key, to prevent the tfm from being used until a new key is set.
Note: we can't set CRYPTO_TFM_NEED_KEY for OPTIONAL_KEY algorithms, so
->setkey() for those must nevertheless be atomic. That's fine for now
since only the crc32 and crc32c algorithms set OPTIONAL_KEY, and it's
not intended that OPTIONAL_KEY be used much.
[Cc stable mainly because when introducing the NEED_KEY flag I changed
AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG
previously didn't have this problem. So these "incompletely keyed"
states became theoretically accessible via AF_ALG -- though, the
opportunities for causing real mischief seem pretty limited.]
Fixes: 9fa68f6200 ("crypto: hash - prevent using keyed hashes without setting key")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It took me a while to notice the bug where the adiantum template left
crypto_spawn::inst == NULL, because this only caused problems in certain
cases where algorithms are dynamically loaded/unloaded.
More improvements are needed, but for now make crypto_init_spawn()
reject this case and WARN(), so this type of bug will be noticed
immediately in the future.
Note: I checked all callers and the adiantum template was the only place
that had this wrong. So this WARN shouldn't trigger anymore.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all "blkcipher" templates have been converted to "skcipher",
crypto_alloc_instance() is no longer used. And it's not useful any
longer as it creates an old-style weakly typed instance rather than a
new-style strongly typed instance. So remove it, and now that the name
is freed up rename crypto_alloc_instance2() to crypto_alloc_instance().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the "ecb-cipher_null" algorithm from the deprecated "blkcipher"
API to the "skcipher" API.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the "ecb(arc4)" algorithm from the deprecated "blkcipher" API to
the "skcipher" API.
(Note that this is really a stream cipher and not a block cipher in ECB
mode as the name implies, but that's a problem for another day...)
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The PCBC template just wraps a single block cipher algorithm, so
simplify it by converting it to use skcipher_alloc_instance_simple().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Following commit 944585a64f ("crypto: x86/aes-ni - remove special
handling of AES in PCBC mode"), it's no longer needed for the PCBC
template to support wrapping a cipher that has the CRYPTO_ALG_INTERNAL
flag set. Thus, remove this now-unused functionality to make PCBC
consistent with the other single block cipher templates.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The OFB template just wraps a single block cipher algorithm, so simplify
it by converting it to use skcipher_alloc_instance_simple().
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the keywrap template from the deprecated "blkcipher" API to the
"skcipher" API, taking advantage of skcipher_alloc_instance_simple() to
simplify it considerably.
Cc: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the ECB template from the deprecated "blkcipher" API to the
"skcipher" API, taking advantage of skcipher_alloc_instance_simple() to
simplify it considerably.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the CTR template from the deprecated "blkcipher" API to the
"skcipher" API, taking advantage of skcipher_alloc_instance_simple() to
simplify it considerably.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CFB template just wraps a single block cipher algorithm, so simplify
it by converting it to use skcipher_alloc_instance_simple().
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CBC template just wraps a single block cipher algorithm, so simplify
it by converting it to use skcipher_alloc_instance_simple().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The majority of skcipher templates (including both the existing ones and
the ones remaining to be converted from the "blkcipher" API) just wrap a
single block cipher algorithm. This includes cbc, cfb, ctr, ecb, kw,
ofb, and pcbc. Add a helper function skcipher_alloc_instance_simple()
that handles allocating an skcipher instance for this common case.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The memcpy()s in the PCBC implementation use walk->iv as both the source
and destination, which has undefined behavior. These memcpy()'s are
actually unneeded, because walk->iv is already used to hold the previous
plaintext block XOR'd with the previous ciphertext block. Thus,
walk->iv is already updated to its final value.
So remove the broken and unnecessary memcpy()s.
Fixes: 91652be5d1 ("[CRYPTO] pcbc: Add Propagated CBC template")
Cc: <stable@vger.kernel.org> # v2.6.21+
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix multiple bugs in the OFB implementation:
1. It stored the per-request state 'cnt' in the tfm context, which can be
used by multiple threads concurrently (e.g. via AF_ALG).
2. It didn't support messages not a multiple of the block cipher size,
despite being a stream cipher.
3. It didn't set cra_blocksize to 1 to indicate it is a stream cipher.
To fix these, set the 'chunksize' property to the cipher block size to
guarantee that when walking through the scatterlist, a partial block can
only occur at the end. Then change the implementation to XOR a block at
a time at first, then XOR the partial block at the end if needed. This
is the same way CTR and CFB are implemented. As a bonus, this also
improves performance in most cases over the current approach.
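A sketch of what the reworked encrypt/decrypt routine looks like under
this scheme (helper names such as crypto_ofb_cipher() are assumptions for
illustration; the walk and cipher calls are the standard APIs):

	static int crypto_ofb_crypt(struct skcipher_request *req)
	{
		struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
		struct crypto_cipher *cipher = crypto_ofb_cipher(tfm);	/* assumed accessor */
		const unsigned int bsize = crypto_cipher_blocksize(cipher);
		struct skcipher_walk walk;
		int err;

		err = skcipher_walk_virt(&walk, req, false);

		while (walk.nbytes >= bsize) {
			const u8 *src = walk.src.virt.addr;
			u8 *dst = walk.dst.virt.addr;
			u8 * const iv = walk.iv;
			unsigned int nbytes = walk.nbytes;

			do {
				/* Encrypt the IV to get the next keystream block and
				 * XOR it with the data; the keystream block itself
				 * becomes the next IV (output feedback). */
				crypto_cipher_encrypt_one(cipher, iv, iv);
				crypto_xor_cpy(dst, src, iv, bsize);
				dst += bsize;
				src += bsize;
			} while ((nbytes -= bsize) >= bsize);

			err = skcipher_walk_done(&walk, nbytes);
		}

		if (walk.nbytes) {
			/* chunksize == bsize guarantees a partial block can only
			 * occur at the very end; XOR just walk.nbytes bytes. */
			crypto_cipher_encrypt_one(cipher, walk.iv, walk.iv);
			crypto_xor_cpy(walk.dst.virt.addr, walk.src.virt.addr,
				       walk.iv, walk.nbytes);
			err = skcipher_walk_done(&walk, 0);
		}

		return err;
	}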
Fixes: e497c51896 ("crypto: ofb - add output feedback mode")
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The memcpy() in crypto_cfb_decrypt_inplace() uses walk->iv as both the
source and destination, which has undefined behavior. It is unneeded
because walk->iv is already used to hold the previous ciphertext block;
thus, walk->iv is already updated to its final value. So, remove it.
Also, note that in-place decryption is the only case where the previous
ciphertext block is not directly available. Therefore, as a related
cleanup I also updated crypto_cfb_encrypt_segment() to directly use the
previous ciphertext block rather than save it into walk->iv. This makes
it consistent with in-place encryption and out-of-place decryption; now
only in-place decryption is different, because it has to be.
Fixes: a7d85e06ed ("crypto: cfb - add support for Cipher FeedBack mode")
Cc: <stable@vger.kernel.org> # v4.17+
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Like some other block cipher mode implementations, the CFB
implementation assumes that while walking through the scatterlist, a
partial block does not occur until the end. But the walk is incorrectly
being done with a blocksize of 1, as 'cra_blocksize' is set to 1 (since
CFB is a stream cipher) but no 'chunksize' is set. This bug causes
incorrect encryption/decryption for some scatterlist layouts.
Fix it by setting the 'chunksize'. Also extend the CFB test vectors to
cover this bug as well as cases where the message length is not a
multiple of the block size.
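The crux of the fix is one property on the instance; roughly (field names
per struct skcipher_alg, with the surrounding create() code elided):

	inst->alg.base.cra_blocksize = 1;		/* CFB is a stream cipher */
	inst->alg.chunksize = alg->cra_blocksize;	/* walk in cipher-block units */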
Fixes: a7d85e06ed ("crypto: cfb - add support for Cipher FeedBack mode")
Cc: <stable@vger.kernel.org> # v4.17+
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
sm3_compress() calls rol32() with shift >= 32, which causes undefined
behavior. This is easily detected by enabling CONFIG_UBSAN.
Explicitly AND with 31 to make the behavior well defined.
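The underlying C issue is easy to reproduce in userspace; a small
illustration of the naive rotate versus the masked form (not the kernel's
rol32() implementation itself):

	#include <stdint.h>
	#include <stdio.h>

	/* naive rotate: undefined behaviour when shift == 0 or shift >= 32 */
	static uint32_t rol32_naive(uint32_t word, unsigned int shift)
	{
		return (word << shift) | (word >> (32 - shift));
	}

	/* masking keeps the shift in [0, 31], matching the intended rotation */
	static uint32_t rol32_masked(uint32_t word, unsigned int shift)
	{
		shift &= 31;
		return shift ? (word << shift) | (word >> (32 - shift)) : word;
	}

	int main(void)
	{
		/* SM3 rotates its round constant by the round index, which runs
		 * up to 63, so shifts >= 32 occur; UBSan flags the naive form. */
		printf("%08x %08x\n", rol32_naive(0x79cc4519, 3),
		       rol32_masked(0x79cc4519, 35));	/* both rotate by 3 */
		return 0;
	}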
Fixes: 4f0fc1600e ("crypto: sm3 - add OSCCA SM3 secure hash")
Cc: <stable@vger.kernel.org> # v4.15+
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_grab_*() doesn't set crypto_spawn::inst, so templates must set it
beforehand. Otherwise it will be left NULL, which causes a crash in
certain cases where algorithms are dynamically loaded/unloaded. E.g.
with CONFIG_CRYPTO_CHACHA20_X86_64=m, the following caused a crash:
insmod chacha-x86_64.ko
python -c 'import socket; socket.socket(socket.AF_ALG, 5, 0).bind(("skcipher", "adiantum(xchacha12,aes)"))'
rmmod chacha-x86_64.ko
python -c 'import socket; socket.socket(socket.AF_ALG, 5, 0).bind(("skcipher", "adiantum(xchacha12,aes)"))'
Fixes: 059c2a4d8e ("crypto: adiantum - add Adiantum support")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Keys for "authenc" AEADs are formatted as an rtattr containing a 4-byte
'enckeylen', followed by an authentication key and an encryption key.
crypto_authenc_extractkeys() parses the key to find the inner keys.
However, it fails to consider the case where the rtattr's payload is
longer than 4 bytes but not 4-byte aligned, and where the key ends
before the next 4-byte aligned boundary. In this case, 'keylen -=
RTA_ALIGN(rta->rta_len);' underflows to a value near UINT_MAX. This
causes a buffer overread and crash during crypto_ahash_setkey().
Fix it by restricting the rtattr payload to the expected size.
Reproducer using AF_ALG:
#include <linux/if_alg.h>
#include <linux/rtnetlink.h>
#include <sys/socket.h>
int main()
{
	int fd;
	struct sockaddr_alg addr = {
		.salg_type = "aead",
		.salg_name = "authenc(hmac(sha256),cbc(aes))",
	};
	struct {
		struct rtattr attr;
		__be32 enckeylen;
		char keys[1];
	} __attribute__((packed)) key = {
		.attr.rta_len = sizeof(key),
		.attr.rta_type = 1 /* CRYPTO_AUTHENC_KEYA_PARAM */,
	};

	fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(fd, (void *)&addr, sizeof(addr));
	setsockopt(fd, SOL_ALG, ALG_SET_KEY, &key, sizeof(key));
}
It caused:
BUG: unable to handle kernel paging request at ffff88007ffdc000
PGD 2e01067 P4D 2e01067 PUD 2e04067 PMD 2e05067 PTE 0
Oops: 0000 [#1] SMP
CPU: 0 PID: 883 Comm: authenc Not tainted 4.20.0-rc1-00108-g00c9fe37a7f27 #13
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-20181126_142135-anatol 04/01/2014
RIP: 0010:sha256_ni_transform+0xb3/0x330 arch/x86/crypto/sha256_ni_asm.S:155
[...]
Call Trace:
sha256_ni_finup+0x10/0x20 arch/x86/crypto/sha256_ssse3_glue.c:321
crypto_shash_finup+0x1a/0x30 crypto/shash.c:178
shash_digest_unaligned+0x45/0x60 crypto/shash.c:186
crypto_shash_digest+0x24/0x40 crypto/shash.c:202
hmac_setkey+0x135/0x1e0 crypto/hmac.c:66
crypto_shash_setkey+0x2b/0xb0 crypto/shash.c:66
shash_async_setkey+0x10/0x20 crypto/shash.c:223
crypto_ahash_setkey+0x2d/0xa0 crypto/ahash.c:202
crypto_authenc_setkey+0x68/0x100 crypto/authenc.c:96
crypto_aead_setkey+0x2a/0xc0 crypto/aead.c:62
aead_setkey+0xc/0x10 crypto/algif_aead.c:526
alg_setkey crypto/af_alg.c:223 [inline]
alg_setsockopt+0xfe/0x130 crypto/af_alg.c:256
__sys_setsockopt+0x6d/0xd0 net/socket.c:1902
__do_sys_setsockopt net/socket.c:1913 [inline]
__se_sys_setsockopt net/socket.c:1910 [inline]
__x64_sys_setsockopt+0x1f/0x30 net/socket.c:1910
do_syscall_64+0x4a/0x180 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
Fixes: e236d4a89a ("[CRYPTO] authenc: Move enckeylen into key itself")
Cc: <stable@vger.kernel.org> # v2.6.25+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'kconfig-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull Kconfig updates from Masahiro Yamada:
- support -y option for merge_config.sh to avoid downgrading =y to =m
- remove S_OTHER symbol type, and touch include/config/*.h files correctly
- fix file name and line number in lexer warnings
- fix memory leak when EOF is encountered in quotation
- resolve all shift/reduce conflicts of the parser
- warn no new line at end of file
- make 'source' statement more strict to take only string literal
- rewrite the lexer and remove the keyword lookup table
- convert to SPDX License Identifier
- compile C files independently instead of including them from zconf.y
- fix various warnings of gconfig
- misc cleanups
* tag 'kconfig-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (39 commits)
kconfig: surround dbg_sym_flags with #ifdef DEBUG to fix gconf warning
kconfig: split images.c out of qconf.cc/gconf.c to fix gconf warnings
kconfig: add static qualifiers to fix gconf warnings
kconfig: split the lexer out of zconf.y
kconfig: split some C files out of zconf.y
kconfig: convert to SPDX License Identifier
kconfig: remove keyword lookup table entirely
kconfig: update current_pos in the second lexer
kconfig: switch to ASSIGN_VAL state in the second lexer
kconfig: stop associating kconf_id with yylval
kconfig: refactor end token rules
kconfig: stop supporting '.' and '/' in unquoted words
treewide: surround Kconfig file paths with double quotes
microblaze: surround string default in Kconfig with double quotes
kconfig: use T_WORD instead of T_VARIABLE for variables
kconfig: use specific tokens instead of T_ASSIGN for assignments
kconfig: refactor scanning and parsing "option" properties
kconfig: use distinct tokens for type and default properties
kconfig: remove redundant token defines
kconfig: rename depends_list to comment_option_list
...
Pull crypto updates from Herbert Xu:
"API:
- Add 1472-byte test to tcrypt for IPsec
- Reintroduced crypto stats interface with numerous changes
- Support incremental algorithm dumps
Algorithms:
- Add xchacha12/20
- Add nhpoly1305
- Add adiantum
- Add streebog hash
- Mark cts(cbc(aes)) as FIPS allowed
Drivers:
- Improve performance of arm64/chacha20
- Improve performance of x86/chacha20
- Add NEON-accelerated nhpoly1305
- Add SSE2 accelerated nhpoly1305
- Add AVX2 accelerated nhpoly1305
- Add support for 192/256-bit keys in gcmaes AVX
- Add SG support in gcmaes AVX
- ESN for inline IPsec tx in chcr
- Add support for CryptoCell 703 in ccree
- Add support for CryptoCell 713 in ccree
- Add SM4 support in ccree
- Add SM3 support in ccree
- Add support for chacha20 in caam/qi2
- Add support for chacha20 + poly1305 in caam/jr
- Add support for chacha20 + poly1305 in caam/qi2
- Add AEAD cipher support in cavium/nitrox"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (130 commits)
crypto: skcipher - remove remnants of internal IV generators
crypto: cavium/nitrox - Fix build with !CONFIG_DEBUG_FS
crypto: salsa20-generic - don't unnecessarily use atomic walk
crypto: skcipher - add might_sleep() to skcipher_walk_virt()
crypto: x86/chacha - avoid sleeping under kernel_fpu_begin()
crypto: cavium/nitrox - Added AEAD cipher support
crypto: mxc-scc - fix build warnings on ARM64
crypto: api - document missing stats member
crypto: user - remove unused dump functions
crypto: chelsio - Fix wrong error counter increments
crypto: chelsio - Reset counters on cxgb4 Detach
crypto: chelsio - Handle PCI shutdown event
crypto: chelsio - cleanup:send addr as value in function argument
crypto: chelsio - Use same value for both channel in single WR
crypto: chelsio - Swap location of AAD and IV sent in WR
crypto: chelsio - remove set but not used variable 'kctx_len'
crypto: ux500 - Use proper enum in hash_set_dma_transfer
crypto: ux500 - Use proper enum in cryp_set_dma_transfer
crypto: aesni - Add scatter/gather avx stubs, and use them in C
crypto: aesni - Introduce partial block macro
..
Pull RCU updates from Ingo Molnar:
"The biggest RCU changes in this cycle were:
- Convert RCU's BUG_ON() and similar calls to WARN_ON() and similar.
- Replace calls of RCU-bh and RCU-sched update-side functions to
their vanilla RCU counterparts. This series is a step towards
complete removal of the RCU-bh and RCU-sched update-side functions.
( Note that some of these conversions are going upstream via their
respective maintainers. )
- Documentation updates, including a number of flavor-consolidation
updates from Joel Fernandes.
- Miscellaneous fixes.
- Automate generation of the initrd filesystem used for rcutorture
testing.
- Convert spin_is_locked() assertions to instead use lockdep.
( Note that some of these conversions are going upstream via their
respective maintainers. )
- SRCU updates, especially including a fix from Dennis Krein for a
bag-on-head-class bug.
- RCU torture-test updates"
* 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (112 commits)
rcutorture: Don't do busted forward-progress testing
rcutorture: Use 100ms buckets for forward-progress callback histograms
rcutorture: Recover from OOM during forward-progress tests
rcutorture: Print forward-progress test age upon failure
rcutorture: Print time since GP end upon forward-progress failure
rcutorture: Print histogram of CB invocation at OOM time
rcutorture: Print GP age upon forward-progress failure
rcu: Print per-CPU callback counts for forward-progress failures
rcu: Account for nocb-CPU callback counts in RCU CPU stall warnings
rcutorture: Dump grace-period diagnostics upon forward-progress OOM
rcutorture: Prepare for asynchronous access to rcu_fwd_startat
torture: Remove unnecessary "ret" variables
rcutorture: Affinity forward-progress test to avoid housekeeping CPUs
rcutorture: Break up too-long rcu_torture_fwd_prog() function
rcutorture: Remove cbflood facility
torture: Bring any extra CPUs online during kernel startup
rcutorture: Add call_rcu() flooding forward-progress tests
rcutorture/formal: Replace synchronize_sched() with synchronize_rcu()
tools/kernel.h: Replace synchronize_sched() with synchronize_rcu()
net/decnet: Replace rcu_barrier_bh() with rcu_barrier()
...
Remove dead code related to internal IV generators, which are no longer
used since they've been replaced with the "seqiv" and "echainiv"
templates. The removed code includes:
- The "givcipher" (GIVCIPHER) algorithm type. No algorithms are
registered with this type anymore, so it's unneeded.
- The "const char *geniv" member of aead_alg, ablkcipher_alg, and
blkcipher_alg. A few algorithms still set this, but it isn't used
anymore except to show via /proc/crypto and CRYPTO_MSG_GETALG.
Just hardcode "<default>" or "<none>" in those cases.
- The 'skcipher_givcrypt_request' structure, which is never used.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
salsa20-generic doesn't use SIMD instructions or otherwise disable
preemption, so passing atomic=true to skcipher_walk_virt() is
unnecessary.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
skcipher_walk_virt() can still sleep even with atomic=true, since that
only affects the later calls to skcipher_walk_done(). But,
skcipher_walk_virt() only has to allocate memory for some input data
layouts, so incorrectly calling it with preemption disabled can go
undetected. Use might_sleep() so that it's detected reliably.
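Concretely, this amounts to something along the lines of (sketch; the exact
condition follows the request's MAY_SLEEP flag):

  /* in skcipher_walk_virt(): complain if we may need to sleep but were
   * called from a context where sleeping is not allowed */
  might_sleep_if(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);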
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch removes unused dump functions for crypto_user_stats.
These are remnants of the copy/paste of crypto_user_base to
crypto_user_stat that I forgot to remove.
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The Kconfig lexer supports special characters such as '.' and '/' in
the parameter context. In my understanding, the reason is just to
support bare file paths in the source statement.
I do not see a good reason to complicate Kconfig merely to leave room for
this ambiguity.
The majority of code already surrounds file paths with double quotes,
and it makes sense since file paths are constant string literals.
Make it treewide consistent now.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Wolfram Sang <wsa@the-dreams.de>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
crypto_alg_mod_lookup() takes a reference to the hash algorithm but
crypto_init_shash_spawn() doesn't take ownership of it, hence the
reference needs to be dropped in adiantum_create().
Fixes: 059c2a4d8e ("crypto: adiantum - add Adiantum support")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CRYPTO_MSG_GETALG in NLM_F_DUMP mode sometimes doesn't return all
registered crypto algorithms, because it doesn't support incremental
dumps. crypto_dump_report() only permits itself to be called once, yet
the netlink subsystem allocates at most ~64 KiB for the skb being dumped
to. Thus only the first recvmsg() returns data, and it may only include
a subset of the crypto algorithms even if the user buffer passed to
recvmsg() is large enough to hold all of them.
Fix this by using one of the arguments in the netlink_callback structure
to keep track of the current position in the algorithm list. Then
userspace can do multiple recvmsg() on the socket after sending the dump
request. This is the way netlink dumps work elsewhere in the kernel;
it's unclear why this was different (probably just an oversight).
Also fix an integer overflow when calculating the dump buffer size hint.
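A minimal sketch of the idea (not the exact patch; locking is omitted and
crypto_report_alg() stands in for the existing per-algorithm report helper):

  static int crypto_dump_report(struct sk_buff *skb, struct netlink_callback *cb)
  {
          const unsigned long start_pos = cb->args[0];  /* saved by the previous call */
          unsigned long pos = 0;
          struct crypto_alg *alg;
          int err;

          list_for_each_entry(alg, &crypto_alg_list, cra_list) {
                  if (pos++ < start_pos)
                          continue;              /* already emitted in an earlier skb */
                  err = crypto_report_alg(alg, skb);
                  if (err == -EMSGSIZE)
                          break;                 /* skb full; resume here next time */
                  if (err)
                          return err;
          }

          cb->args[0] = pos;                     /* remember how far we got */
          return skb->len;
  }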
Fixes: a38f7907b9 ("crypto: Add userspace configuration API")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The 2018-11-28 revision of the Adiantum paper has revised some notation:
- 'M' was replaced with 'L' (meaning "Left", for the left-hand part of
the message) in the definition of Adiantum hashing, to avoid confusion
with the full message
- ε-almost-∆-universal is now abbreviated as ε-∆U instead of εA∆U
- "block" is now used only to mean block cipher and Poly1305 blocks
Also, Adiantum hashing was moved from the appendix to the main paper.
To avoid confusion, update relevant comments in the code to match.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The kernel's ChaCha20 uses the RFC7539 convention of the nonce being 12
bytes rather than 8, so actually I only appended 12 random bytes (not
16) to its test vectors to form 24-byte nonces for the XChaCha20 test
vectors. The other 4 bytes were just from zero-padding the stream
position to 8 bytes. Fix the comments above the test vectors.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There is a draft specification for XChaCha20 being worked on. Add the
XChaCha20 test vector from the appendix so that we can be extra sure the
kernel's implementation is compatible.
I also recomputed the ciphertext with XChaCha12 and added it there too,
to keep the tests for XChaCha20 and XChaCha12 in sync.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the x86_64 SIMD implementations of ChaCha20 and XChaCha20 have
been refactored to support varying the number of rounds, add support for
XChaCha12. This is identical to XChaCha20 except for the number of
rounds, which is 12 instead of 20. This can be used by Adiantum.
Reviewed-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add an XChaCha20 implementation that is hooked up to the x86_64 SIMD
implementations of ChaCha20. This can be used by Adiantum.
An SSSE3 implementation of single-block HChaCha20 is also added so that
XChaCha20 can use it rather than the generic implementation. This
required refactoring the ChaCha permutation into its own function.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a 64-bit AVX2 implementation of NHPoly1305, an ε-almost-∆-universal
hash function used in the Adiantum encryption mode. For now, only the
NH portion is actually AVX2-accelerated; the Poly1305 part is less
performance-critical so is just implemented in C.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a 64-bit SSE2 implementation of NHPoly1305, an ε-almost-∆-universal
hash function used in the Adiantum encryption mode. For now, only the
NH portion is actually SSE2-accelerated; the Poly1305 part is less
performance-critical so is just implemented in C.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the stream cipher implementation is asynchronous, then the Adiantum
instance must be flagged as asynchronous as well. Otherwise someone
asking for a synchronous algorithm can get an asynchronous algorithm.
There are no asynchronous xchacha12 or xchacha20 implementations yet
which makes this largely a theoretical issue, but it should be fixed.
Fixes: 059c2a4d8e ("crypto: adiantum - add Adiantum support")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In order to have better coverage of algorithms operating on block
sizes that are in the ballpark of a VPN packet, add 1472 to the
block_sizes array.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the crypto_stats_init() function.
This permits removing some ifdefs from __crypto_register_alg().
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all crypto stats are in their own structures, it is useless to
have the algorithm name in the err_cnt member.
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As was done for userspace, this patch splits the stats into multiple
structures, one for each algorithm class.
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The v64 intermediate variable is unnecessary, and removing it makes the
code more readable.
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some error counters use the wrong member name when reading this data.
But this has not caused any reporting problems, since all error counters are
shared in the same union.
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All crypto_stats functions use the struct xxx_request for feeding stats,
but in some cases this structure could already be freed.
To fix this, the needed parameters (len and alg) are now stored before the
request is executed.
Fixes: cac5818c25 ("crypto: user - Implement a generic crypto statistics")
Reported-by: syzbot <syzbot+6939a606a5305e9e9799@syzkaller.appspotmail.com>
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It is cleaner to have each stat in its own structure.
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All the 32-bit fields need to be 64-bit. In some cases, UINT32_MAX crypto
operations can be done in seconds.
Reported-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CRYPTO_STATS uses CRYPTO_USER functionality, so it should depend on it.
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Even if CRYPTO_STATS is set to n, some parts of CRYPTO_STATS are still
compiled.
This patch makes sure that no part of crypto_user_stat is compiled in that
case.
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since this user-space API is still undergoing significant changes,
this patch disables it for the current merge window.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull RCU changes from Paul E. McKenney:
- Convert RCU's BUG_ON() and similar calls to WARN_ON() and similar.
- Replace calls of RCU-bh and RCU-sched update-side functions
to their vanilla RCU counterparts. This series is a step
towards complete removal of the RCU-bh and RCU-sched update-side
functions.
( Note that some of these conversions are going upstream via their
respective maintainers. )
- Documentation updates, including a number of flavor-consolidation
updates from Joel Fernandes.
- Miscellaneous fixes.
- Automate generation of the initrd filesystem used for
rcutorture testing.
- Convert spin_is_locked() assertions to instead use lockdep.
( Note that some of these conversions are going upstream via their
respective maintainers. )
- SRCU updates, especially including a fix from Dennis Krein
for a bag-on-head-class bug.
- RCU torture-test updates.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
In multiple functions, algorithm fields are read after the algorithm's
reference has been dropped through crypto_mod_put. In that case, the
algorithm memory may already be freed, resulting in use-after-free bugs.
This patch delays the put operation until the algorithm is no longer used.
Fixes: 79c65d179a ("crypto: cbc - Convert to skcipher")
Fixes: a7d85e06ed ("crypto: cfb - add support for Cipher FeedBack mode")
Fixes: 043a44001b ("crypto: pcbc - Convert to skcipher")
Cc: <stable@vger.kernel.org>
Signed-off-by: Pan Bian <bianpan2016@163.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that synchronize_rcu() waits for bh-disable regions of code as
well as RCU read-side critical sections, the synchronize_rcu_bh() in
pcrypt_cpumask_change_notify() can be replaced by synchronize_rcu().
This commit therefore makes this change.
Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: <linux-crypto@vger.kernel.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Add support for the Adiantum encryption mode. Adiantum was designed by
Paul Crowley and is specified by our paper:
Adiantum: length-preserving encryption for entry-level processors
(https://eprint.iacr.org/2018/720.pdf)
See our paper for full details; this patch only provides an overview.
Adiantum is a tweakable, length-preserving encryption mode designed for
fast and secure disk encryption, especially on CPUs without dedicated
crypto instructions. Adiantum encrypts each sector using the XChaCha12
stream cipher, two passes of an ε-almost-∆-universal (εA∆U) hash
function, and an invocation of the AES-256 block cipher on a single
16-byte block. On CPUs without AES instructions, Adiantum is much
faster than AES-XTS; for example, on ARM Cortex-A7, on 4096-byte sectors
Adiantum encryption is about 4 times faster than AES-256-XTS encryption,
and decryption about 5 times faster.
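As a rough outline of the construction (notation loosely follows the paper;
T is the tweak, the plaintext is split into a long left part P_L and a final
16-byte block P_R, H is the εA∆U hash, and additions are mod 2^128):

  P_M = P_R  +  H(T, P_L)
  C_M = AES-256(P_M)                     (one block cipher invocation)
  C_L = P_L  XOR  XChaCha12 keystream    (nonce derived from C_M)
  C_R = C_M  -  H(T, C_L)
  ciphertext = C_L || C_R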
Adiantum is a specialization of the more general HBSH construction. Our
earlier proposal, HPolyC, was also a HBSH specialization, but it used a
different εA∆U hash function, one based on Poly1305 only. Adiantum's
εA∆U hash function, which is based primarily on the "NH" hash function
like that used in UMAC (RFC4418), is about twice as fast as HPolyC's;
consequently, Adiantum is about 20% faster than HPolyC.
This speed comes with no loss of security: Adiantum is provably just as
secure as HPolyC, in fact slightly *more* secure. Like HPolyC,
Adiantum's security is reducible to that of XChaCha12 and AES-256,
subject to a security bound. XChaCha12 itself has a security reduction
to ChaCha12. Therefore, one need not "trust" Adiantum; one need only
trust ChaCha12 and AES-256. Note that the εA∆U hash function is only
used for its proven combinatorial properties, so it cannot be "broken".
Adiantum is also a true wide-block encryption mode, so flipping any
plaintext bit in the sector scrambles the entire ciphertext, and vice
versa. No other such mode is available in the kernel currently; doing
the same with XTS scrambles only 16 bytes. Adiantum also supports
arbitrary-length tweaks and naturally supports any length input >= 16
bytes without needing "ciphertext stealing".
For the stream cipher, Adiantum uses XChaCha12 rather than XChaCha20 in
order to make encryption feasible on the widest range of devices.
Although the 20-round variant is quite popular, the best known attacks
on ChaCha are on only 7 rounds, so ChaCha12 still has a substantial
security margin; in fact, larger than AES-256's. 12-round Salsa20 is
also the eSTREAM recommendation. For the block cipher, Adiantum uses
AES-256, despite it having a lower security margin than XChaCha12 and
needing table lookups, due to AES's extensive adoption and analysis
making it the obvious first choice. Nevertheless, for flexibility this
patch also permits the "adiantum" template to be instantiated with
XChaCha20 and/or with an alternate block cipher.
We need Adiantum support in the kernel for use in dm-crypt and fscrypt,
where currently the only other suitable options are block cipher modes
such as AES-XTS. A big problem with this is that many low-end mobile
devices (e.g. Android Go phones sold primarily in developing countries,
as well as some smartwatches) still have CPUs that lack AES
instructions, e.g. ARM Cortex-A7. Sadly, AES-XTS encryption is much too
slow to be viable on these devices. We did find that some "lightweight"
block ciphers are fast enough, but these suffer from problems such as
not having much cryptanalysis or being too controversial.
The ChaCha stream cipher has excellent performance but is insecure to
use directly for disk encryption, since each sector's IV is reused each
time it is overwritten. Even restricting the threat model to offline
attacks only isn't enough, since modern flash storage devices don't
guarantee that "overwrites" are really overwrites, due to wear-leveling.
Adiantum avoids this problem by constructing a
"tweakable super-pseudorandom permutation"; this is the strongest
possible security model for length-preserving encryption.
Of course, storing random nonces along with the ciphertext would be the
ideal solution. But doing that with existing hardware and filesystems
runs into major practical problems; in most cases it would require data
journaling (like dm-integrity) which severely degrades performance.
Thus, for now length-preserving encryption is still needed.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a generic implementation of NHPoly1305, an ε-almost-∆-universal hash
function used in the Adiantum encryption mode.
CONFIG_NHPOLY1305 is not selectable by itself since there won't be any
real reason to enable it without also enabling Adiantum support.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Expose a low-level Poly1305 API which implements the
ε-almost-∆-universal (εA∆U) hash function underlying the Poly1305 MAC
and supports block-aligned inputs only.
This is needed for Adiantum hashing, which builds an εA∆U hash function
from NH and a polynomial evaluation in GF(2^{130}-5); this polynomial
evaluation is identical to the one the Poly1305 MAC does. However, the
crypto_shash Poly1305 API isn't very appropriate for this because its
calling convention assumes it is used as a MAC, with a 32-byte "one-time
key" provided for every digest.
But by design, in Adiantum hashing the performance of the polynomial
evaluation isn't nearly as critical as NH. So it suffices to just have
some C helper functions. Thus, this patch adds such functions.
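The intended shape of these helpers is roughly the following (illustrative
prototypes; exact names and signatures may differ):

  void poly1305_core_setkey(struct poly1305_key *key, const u8 raw_key[16]);
  void poly1305_core_init(struct poly1305_state *state);
  void poly1305_core_blocks(struct poly1305_state *state,
                            const struct poly1305_key *key,
                            const void *src, unsigned int nblocks);
  void poly1305_core_emit(const struct poly1305_state *state, void *dst);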
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation for exposing a low-level Poly1305 API which implements
the ε-almost-∆-universal (εA∆U) hash function underlying the Poly1305
MAC and supports block-aligned inputs only, create structures
poly1305_key and poly1305_state which hold the limbs of the Poly1305
"r" key and accumulator, respectively.
These structures could actually have the same type (e.g. poly1305_val),
but different types are preferable, to prevent misuse.
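A sketch of what these structures look like, assuming the same base-2^26
limb representation already used by the generic Poly1305 code (five 26-bit
limbs per 130-bit value):

  struct poly1305_key {
          u32 r[5];       /* key, in base 2^26 */
  };

  struct poly1305_state {
          u32 h[5];       /* accumulator, in base 2^26 */
  };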
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the generic implementation of ChaCha20 has been refactored to
allow varying the number of rounds, add support for XChaCha12, which is
the XSalsa construction applied to ChaCha12. ChaCha12 is one of the
three ciphers specified by the original ChaCha paper
(https://cr.yp.to/chacha/chacha-20080128.pdf: "ChaCha, a variant of
Salsa20"), alongside ChaCha8 and ChaCha20. ChaCha12 is faster than
ChaCha20 but has a lower, but still large, security margin.
We need XChaCha12 support so that it can be used in the Adiantum
encryption mode, which enables disk/file encryption on low-end mobile
devices where AES-XTS is too slow as the CPUs lack AES instructions.
We'd prefer XChaCha20 (the more popular variant), but it's too slow on
some of our target devices, so at least in some cases we do need the
XChaCha12-based version. In more detail, the problem is that Adiantum
is still much slower than we're happy with, and encryption still has a
quite noticeable effect on the feel of low-end devices. Users and
vendors push back hard against encryption that degrades the user
experience, which always risks encryption being disabled entirely. So
we need to choose the fastest option that gives us a solid margin of
security, and here that's XChaCha12. The best known attack on ChaCha
breaks only 7 rounds and has 2^235 time complexity, so ChaCha12's
security margin is still better than AES-256's. Much has been learned
about cryptanalysis of ARX ciphers since Salsa20 was originally designed
in 2005, and it now seems we can be comfortable with a smaller number of
rounds. The eSTREAM project also suggests the 12-round version of
Salsa20 as providing the best balance among the different variants:
combining very good performance with a "comfortable margin of security".
Note that it would be trivial to add vanilla ChaCha12 in addition to
XChaCha12. However, it's unneeded for now and therefore is omitted.
As discussed in the patch that introduced XChaCha20 support, I
considered splitting the code into separate chacha-common, chacha20,
xchacha20, and xchacha12 modules, so that these algorithms could be
enabled/disabled independently. However, since nearly all the code is
shared anyway, I ultimately decided there would have been little benefit
to the added complexity.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation for adding XChaCha12 support, rename/refactor
chacha20-generic to support different numbers of rounds. The
justification for needing XChaCha12 support is explained in more detail
in the patch "crypto: chacha - add XChaCha12 support".
The only difference between ChaCha{8,12,20} are the number of rounds
itself; all other parts of the algorithm are the same. Therefore,
remove the "20" from all definitions, structures, functions, files, etc.
that will be shared by all ChaCha versions.
Also make ->setkey() store the round count in the chacha_ctx (previously
chacha20_ctx). The generic code then passes the round count through to
chacha_block(). There will be a ->setkey() function for each explicitly
allowed round count; the encrypt/decrypt functions will be the same. I
decided not to do it the opposite way (same ->setkey() function for all
round counts, with different encrypt/decrypt functions) because that
would have required more boilerplate code in architecture-specific
implementations of ChaCha and XChaCha.
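A sketch of the resulting context and setkey wrappers (simplified; key-length
handling follows the existing chacha20 code):

  struct chacha_ctx {
          u32 key[8];
          int nrounds;
  };

  static int chacha_setkey(struct crypto_skcipher *tfm, const u8 *key,
                           unsigned int keysize, int nrounds)
  {
          struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
          int i;

          if (keysize != CHACHA_KEY_SIZE)
                  return -EINVAL;

          for (i = 0; i < ARRAY_SIZE(ctx->key); i++)
                  ctx->key[i] = get_unaligned_le32(key + i * sizeof(u32));

          ctx->nrounds = nrounds;
          return 0;
  }

  static int chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key,
                             unsigned int keysize)
  {
          return chacha_setkey(tfm, key, keysize, 20);
  }

  static int chacha12_setkey(struct crypto_skcipher *tfm, const u8 *key,
                             unsigned int keysize)
  {
          return chacha_setkey(tfm, key, keysize, 12);
  }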
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add support for the XChaCha20 stream cipher. XChaCha20 is the
application of the XSalsa20 construction
(https://cr.yp.to/snuffle/xsalsa-20081128.pdf) to ChaCha20 rather than
to Salsa20. XChaCha20 extends ChaCha20's nonce length from 64 bits (or
96 bits, depending on convention) to 192 bits, while provably retaining
ChaCha20's security. XChaCha20 uses the ChaCha20 permutation to map the
key and first 128 nonce bits to a 256-bit subkey. Then, it does the
ChaCha20 stream cipher with the subkey and remaining 64 bits of nonce.
We need XChaCha support in order to add support for the Adiantum
encryption mode. Note that to meet our performance requirements, we
actually plan to primarily use the variant XChaCha12. But we believe
it's wise to first add XChaCha20 as a baseline with a higher security
margin, in case there are any situations where it can be used.
Supporting both variants is straightforward.
Since XChaCha20's subkey differs for each request, XChaCha20 can't be a
template that wraps ChaCha20; that would require re-keying the
underlying ChaCha20 for every request, which wouldn't be thread-safe.
Instead, we make XChaCha20 its own top-level algorithm which calls the
ChaCha20 streaming implementation internally.
Similar to the existing ChaCha20 implementation, we define the IV to be
the nonce and stream position concatenated together. This allows users
to seek to any position in the stream.
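Roughly, each request then proceeds as follows (sketch; hchacha20_block()
denotes the single-block permutation helper, the init helper is illustrative,
and scatterlist walking is omitted):

  u32 state[16], subkey[8];
  u8 real_iv[16];

  /* derive the 256-bit subkey from the key and first 128 nonce bits */
  chacha20_init(state, ctx->key, req->iv);
  hchacha20_block(state, subkey);

  /* build the inner IV: stream position, then the last 64 nonce bits */
  memcpy(&real_iv[0], req->iv + 24, 8);
  memcpy(&real_iv[8], req->iv + 16, 8);

  /* run the ordinary ChaCha20 stream cipher with subkey and real_iv */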
I considered splitting the code into separate chacha20-common, chacha20,
and xchacha20 modules, so that chacha20 and xchacha20 could be
enabled/disabled independently. However, since nearly all the code is
shared anyway, I ultimately decided there would have been little benefit
to the added complexity of separate modules.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
chacha20-generic doesn't use SIMD instructions or otherwise disable
preemption, so passing atomic=true to skcipher_walk_virt() is
unnecessary.
Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some algorithms initialize their .cra_list prior to registration.
But this is unnecessary since crypto_register_alg() will overwrite
.cra_list when adding the algorithm to the 'crypto_alg_list'.
Apparently the useless assignment has just been copy+pasted around.
So, remove the useless assignments.
Exception: paes_s390.c uses cra_list to check whether the algorithm is
registered or not, so I left that as-is for now.
This patch shouldn't change any actual behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
ecc_point_mult is supposed to be used with a regularized scalar;
otherwise, it's possible to deduce the position of the top bit of the
scalar with a timing attack. This is important when the scalar is a
private key.
ecc_point_mult already uses a regular algorithm (i.e. one whose
operation flow is independent of the input scalar), but the
regularization step is not implemented.
Arrange scalar to always have fixed top bit by adding a multiple of the
curve order (n).
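A sketch of the regularization step, assuming the vli_* big-number helpers
already present in ecc.c (buffer sizing and surrounding code are elided):

  u64 sk[2][ECC_MAX_DIGITS];
  u64 carry;

  carry = vli_add(sk[0], scalar, curve->n, ndigits);   /* scalar + n  */
  vli_add(sk[1], sk[0], curve->n, ndigits);            /* scalar + 2n */
  scalar = sk[!carry];                                 /* skip the overflowed sum */
  num_bits = sizeof(u64) * 8 * ndigits + 1;            /* fixed top-bit position */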
References:
The constant time regularization step is based on micro-ecc by Kenneth
MacKay and also referenced in the literature (Bernstein, D. J., & Lange,
T. (2017). Montgomery curves and the Montgomery ladder. (Cryptology
ePrint Archive; Vol. 2017/293). s.l.: IACR. Chapter 4.6.2.)
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Cc: kernel-hardening@lists.openwall.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move CHACHAPOLY_IV_SIZE to header file, so it can be reused.
Signed-off-by: Cristian Stoica <cristian.stoica@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add testmgr and tcrypt tests and vectors for Streebog hash function
from RFC 6986 and GOST R 34.11-2012, for HMAC-Streebog vectors are
from RFC 7836 and R 50.1.113-2016.
Cc: linux-integrity@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Register Streebog hash function in Hash Info arrays to let IMA use
it for its purposes.
Cc: linux-integrity@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Mimi Zohar <zohar@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
cts(cbc(aes)) as used in the kernel has been added to NIST
standard as CBC-CS3. Document it as such.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Suggested-by: Stephan Mueller <smueller@chronox.de>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The currently used scalar multiplication algorithm (Matthieu Rivain, 2011)
produces invalid values for scalar == 1, n-1, and, for the regularized
version, n-2, which were previously not checked. Verify that these values
are not used as private keys.
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As per Sp800-38A addendum from Oct 2010[1], cts(cbc(aes)) is
allowed as a FIPS mode algorithm. Mark it as such.
[1] https://csrc.nist.gov/publications/detail/sp/800-38a/addendum/final
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There have been a pretty ridiculous number of issues with initializing
the report structures that are copied to userspace by NETLINK_CRYPTO.
Commit 4473710df1 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME
expansion") replaced some strncpy()s with strlcpy()s, thereby
introducing information leaks. Later two other people tried to replace
other strncpy()s with strlcpy() too, which would have introduced even
more information leaks:
- https://lore.kernel.org/patchwork/patch/954991/
- https://patchwork.kernel.org/patch/10434351/
Commit cac5818c25 ("crypto: user - Implement a generic crypto
statistics") also uses the buggy strlcpy() approach and therefore leaks
uninitialized memory to userspace. A fix was proposed, but it was
originally incomplete.
Seeing as how apparently no one can get this right with the current
approach, change all the reporting functions to:
- Start by memsetting the report structure to 0. This guarantees it's
always initialized, regardless of what happens later.
- Initialize all strings using strscpy(). This is safe after the
memset, ensures null termination of long strings, avoids unnecessary
work, and avoids the -Wstringop-truncation warnings from gcc.
- Use sizeof(var) instead of sizeof(type). This is more robust against
copy+paste errors.
For simplicity, also reuse the -EMSGSIZE return value from nla_put().
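The resulting pattern looks roughly like this for each reporting function
(one example sketched here; the others follow the same shape):

  static int crypto_report_cipher(struct sk_buff *skb, struct crypto_alg *alg)
  {
          struct crypto_report_cipher rcipher;

          memset(&rcipher, 0, sizeof(rcipher));   /* no uninitialized bytes leak */

          strscpy(rcipher.type, "cipher", sizeof(rcipher.type));

          rcipher.blocksize = alg->cra_blocksize;
          rcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
          rcipher.max_keysize = alg->cra_cipher.cia_max_keysize;

          return nla_put(skb, CRYPTOCFGA_REPORT_CIPHER,
                         sizeof(rcipher), &rcipher);
  }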
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The acomp, akcipher, and kpp algorithm types already have .report
methods defined, so there's no need to duplicate this functionality in
crypto_user itself; the duplicate functions are actually never executed.
Remove the unused code.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Passing the string 'name' as the format specifier is potentially hazardous,
because name could (although it is very unlikely to) have a format specifier
embedded in it, causing issues when the non-existent arguments are parsed.
Follow best practice by using the "%s" format string for the string 'name'.
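In other words (illustrative, not the exact diff), instead of letting the
untrusted string be interpreted as a format:

  kobject_add(kobj, parent, name);        /* name parsed as a format string */

pass it as an argument to a literal "%s" format:

  kobject_add(kobj, parent, "%s", name);  /* name treated as plain data */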
Cleans up clang warning:
crypto/pcrypt.c:397:40: warning: format string is not a string literal
(potentially insecure) [-Wformat-security]
Fixes: a3fb1e330d ("pcrypt: Added sysfs interface to pcrypt")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_cfb_decrypt_segment() incorrectly XOR'ed generated keystream with
IV, rather than with data stream, resulting in incorrect decryption.
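Conceptually, the per-block decryption loop should look like this simplified
sketch (the marked line is the one that was wrong):

  do {
          crypto_cfb_encrypt_one(tfm, iv, dst);  /* keystream = E_k(iv) */
          crypto_xor(dst, src, bsize);           /* XOR with ciphertext, not with iv */
          iv = src;                              /* next IV is this ciphertext block */

          src += bsize;
          dst += bsize;
  } while ((nbytes -= bsize) >= bsize);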
Test vectors will be added in the next patch.
Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make the ARM scalar AES implementation closer to constant-time by
disabling interrupts and prefetching the tables into L1 cache. This is
feasible because due to ARM's "free" rotations, the main tables are only
1024 bytes instead of the usual 4096 used by most AES implementations.
On ARM Cortex-A7, the speed loss is only about 5%. The resulting code
is still over twice as fast as aes_ti.c. Responsiveness is potentially
a concern, but interrupts are only disabled for a single AES block.
Note that even after these changes, the implementation still isn't
necessarily guaranteed to be constant-time; see
https://cr.yp.to/antiforgery/cachetiming-20050414.pdf for a discussion
of the many difficulties involved in writing truly constant-time AES
software. But it's valuable to make such attacks more difficult.
Much of this patch is based on patches suggested by Ard Biesheuvel.
Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the "aes-fixed-time" AES implementation, disable interrupts while
accessing the S-box, in order to make cache-timing attacks more
difficult. Previously it was possible for the CPU to be interrupted
while the S-box was loaded into L1 cache, potentially evicting the
cachelines and causing later table lookups to be time-variant.
In tests I did on x86 and ARM, this doesn't affect performance
significantly. Responsiveness is potentially a concern, but interrupts
are only disabled for a single AES block.
Note that even after this change, the implementation still isn't
necessarily guaranteed to be constant-time; see
https://cr.yp.to/antiforgery/cachetiming-20050414.pdf for a discussion
of the many difficulties involved in writing truly constant-time AES
software. But it's valuable to make such attacks more difficult.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
To prevent uninitialized data from being given to user-space (and thus
leaking potentially useful data), the crypto_stat structure must be
correctly initialized.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: cac5818c25 ("crypto: user - Implement a generic crypto statistics")
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
[EB: also fix it in crypto_reportstat_one()]
[EB: use sizeof(var) rather than sizeof(type)]
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All bytes of the NETLINK_CRYPTO report structures must be initialized,
since they are copied to userspace. The change from strncpy() to
strlcpy() broke this. As a minimal fix, change it back.
Fixes: 4473710df1 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME expansion")
Cc: <stable@vger.kernel.org> # v4.12+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The simd wrapper's skcipher request context structure consists
of a single subrequest whose size is taken from the subordinate
skcipher. However, in simd_skcipher_init(), the reqsize that is
retrieved is not from the subordinate skcipher but from the
cryptd request structure, whose size is completely unrelated to
the actual wrapped skcipher.
Reported-by: Qian Cai <cai@gmx.us>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Qian Cai <cai@gmx.us>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The sign operation can operate in a non-hashed mode by running the RSA
sign operation directly on the input. This assumes that the input is
less than key_size_in_bytes - 11. Since the TPM performs its own PKCS1
padding, it isn't possible to support 'raw' mode, only 'pkcs1'.
Alternatively, a hashed version is also possible. In this variant the
input is hashed (by userspace) via the selected hash function first.
Then this implementation takes care of converting the hash to ASN.1
format and the sign operation is performed on the result. This is
similar to the implementation inside crypto/rsa-pkcs1pad.c.
ASN1 templates were copied from crypto/rsa-pkcs1pad.c. There seems to
be no easy way to expose that functionality, but likely the templates
should be shared somehow.
The sign operation is implemented via TPM_Sign operation on the TPM.
It is assumed that the TPM wrapped key provided uses
TPM_SS_RSASSAPKCS1v15_DER signature scheme. This allows the TPM_Sign
operation to work on data up to key_len_in_bytes - 11 bytes long.
In theory, we could also use TPM_Unbind instead of TPM_Sign, but we would
have to manually pkcs1 pad the digest first.
Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
This patch implements the verify_signature operation. The public key
portion extracted from the TPM key blob is used. The operation is
performed entirely in software using the crypto API.
Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
This patch implements the pkey_decrypt operation using the private key
blob. The blob is first loaded into the TPM via tpm_loadkey2. Once the
handle is obtained, tpm_unbind operation is used to decrypt the data on
the TPM and the result is returned. The key loaded by tpm_loadkey2 is
then evicted via tpm_flushspecific operation.
This patch assumes that the SRK authorization is a well known 20-byte of
zeros and the same holds for the key authorization of the provided key.
Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
This patch exposes some common functionality needed to send TPM commands.
Several functions from keys/trusted.c are exposed for use by the new tpm
key subtype and a module dependency is introduced.
In the future, common functionality between the trusted key type and the
asym_tpm subtype should be factored out into a common utility library.
Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
This patch implements the pkey_encrypt operation. The public key
portion extracted from the TPM key blob is used. The operation is
performed entirely in software using the crypto API.
Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
This commit implements the pkey_query operation. This is accomplished
by utilizing the public key portion to obtain max encryption size
information for the operations that utilize the public key (encrypt,
verify). The private key size extracted from the TPM_Key data structure
is used to fill the information where the private key is used (decrypt,
sign).
The kernel uses a DER/BER format for public keys and does not support
setting the key via the raw binary form. To get around this a simple
DER/BER formatter is implemented which stores the DER/BER formatted key
and exponent in a temporary buffer for use by the crypto API.
The only exponent supported currently is 65537. This holds true for
other Linux TPM tools such as 'create_tpm_key' and
trousers-openssl_tpm_engine.
Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
For TPM based keys, the only standard seems to be described here:
http://david.woodhou.se/draft-woodhouse-cert-best-practice.html#rfc.section.4.4
Quote from the relevant section:
"Rather, a common form of storage for "wrapped" keys is to encode the
binary TCPA_KEY structure in a single ASN.1 OCTET-STRING, and store the
result in PEM format with the tag "-----BEGIN TSS KEY BLOB-----". "
This patch implements the above behavior. It is assumed that the PEM
encoding is stripped out by userspace and only the raw DER/BER format is
provided. This is similar to how PKCS7, PKCS8 and X.509 keys are
handled.
Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
The parsed BER/DER blob obtained from user space contains a TPM_Key
structure. This structure has some information about the key as well as
the public key portion.
This patch extracts this information for future use.
Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
The original pkcs1pad implementation allowed padding/unpadding of raw RSA
output. However, this was removed in:
commit c0d20d22e0 ("crypto: rsa-pkcs1pad - Require hash to be present")
This patch restores this ability, as it is needed by the asymmetric key
implementation.
Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: James Morris <james.morris@microsoft.com>
Implement PKCS#8 RSA Private Key format [RFC 5208] parser for the
asymmetric key type. For the moment, this will only support unencrypted
DER blobs. PEM and decryption can be added later.
PKCS#8 keys can be loaded like this:
openssl pkcs8 -in private_key.pem -topk8 -nocrypt -outform DER | \
keyctl padd asymmetric foo @s
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>
Implement the encrypt, decrypt and sign operations for the software
asymmetric key subtype. This mostly involves offloading the call to the
crypto layer.
Note that the decrypt and sign operations require a private key to be
supplied. Encrypt (and also verify) will work with either a public or a
private key. A public key can be supplied with an X.509 certificate and a
private key can be supplied using a PKCS#8 blob:
# j=`openssl pkcs8 -in ~/pkcs7/firmwarekey2.priv -topk8 -nocrypt -outform DER | keyctl padd asymmetric foo @s`
# keyctl pkey_query $j - enc=pkcs1
key_size=4096
max_data_size=512
max_sig_size=512
max_enc_size=512
max_dec_size=512
encrypt=y
decrypt=y
sign=y
verify=y
# keyctl pkey_encrypt $j 0 data enc=pkcs1 >/tmp/enc
# keyctl pkey_decrypt $j 0 /tmp/enc enc=pkcs1 >/tmp/dec
# cmp data /tmp/dec
# keyctl pkey_sign $j 0 data enc=pkcs1 hash=sha1 >/tmp/sig
# keyctl pkey_verify $j 0 data /tmp/sig enc=pkcs1 hash=sha1
#
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>
Put a flag in the public_key struct to indicate if the structure is holding
a private key. The private key must be held ASN.1 encoded in the format
specified in RFC 3447 A.1.2. This is the form required by crypto/rsa.c.
The software encryption subtype's verification and query functions then
need to select the appropriate crypto function to set the key.
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>
Provide a query function for the software public key implementation. This
permits information about such a key to be obtained using
query_asymmetric_key() or KEYCTL_PKEY_QUERY.
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>
Make the X.509 and PKCS7 parsers fill in the signature encoding type field
recently added to the public_key_signature struct.
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>
Provide the missing asymmetric key subops for new key type ops. This
include query, encrypt, decrypt and create signature. Verify signature
already exists. Also provided are accessor functions for this:
int query_asymmetric_key(const struct key *key,
struct kernel_pkey_query *info);
int encrypt_blob(struct kernel_pkey_params *params,
const void *data, void *enc);
int decrypt_blob(struct kernel_pkey_params *params,
const void *enc, void *data);
int create_signature(struct kernel_pkey_params *params,
const void *data, void *enc);
The public_key_signature struct gains an encoding field to carry the
encoding for verify_signature().
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: James Morris <james.morris@microsoft.com>
This reverts commit dd979b4df8.
This broke tcp_poll for SMC fallback: An AF_SMC socket establishes an
internal TCP socket for the initial handshake with the remote peer.
Whenever the SMC connection can not be established this TCP socket is
used as a fallback. All socket operations on the SMC socket are then
forwarded to the TCP socket. In case of poll, the file->private_data
pointer references the SMC socket because the TCP socket has no file
assigned. This causes tcp_poll to wait on the wrong socket.
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
After allocation, output and decomp_output both point to memory chunks of
size COMP_BUF_SIZE. Then, only the first few bytes are zeroed out, because
sizeof(COMP_BUF_SIZE) is passed as the length argument to memset(), and it
yields the size of the constant rather than the size of the allocated
memory.
Instead, the whole allocated memory is meant to be zeroed out. Use
COMP_BUF_SIZE as parameter to memset() directly in order to accomplish
this.
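That is (illustrative):

  /* wrong: zeroes only sizeof(int) bytes, the size of the integer constant */
  memset(output, 0, sizeof(COMP_BUF_SIZE));

  /* right: zeroes the whole allocated buffer */
  memset(output, 0, COMP_BUF_SIZE);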
Fixes: 336073840a ("crypto: testmgr - Allow different compression results")
Signed-off-by: Michael Schupikov <michael@schupikov.de>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the correct __le32 annotation and accessors to perform the
single round of AES encryption performed inside the AEGIS transform.
Otherwise, tcrypt reports:
alg: aead: Test 1 failed on encryption for aegis128-generic
00000000: 6c 25 25 4a 3c 10 1d 27 2b c1 d4 84 9a ef 7f 6e
alg: aead: Test 1 failed on encryption for aegis128l-generic
00000000: cd c6 e3 b8 a0 70 9d 8e c2 4f 6f fe 71 42 df 28
alg: aead: Test 1 failed on encryption for aegis256-generic
00000000: aa ed 07 b1 96 1d e9 e6 f2 ed b5 8e 1c 5f dc 1c
Fixes: f606a88e58 ("crypto: aegis - Add generic AEGIS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Omit the endian swabbing when folding the lengths of the assoc and
crypt input buffers into the state to finalize the tag. This is not
necessary given that the memory representation of the state is in
machine native endianness already.
This fixes an error reported by tcrypt running on a big endian system:
alg: aead: Test 2 failed on encryption for morus640-generic
00000000: a8 30 ef fb e6 26 eb 23 b0 87 dd 98 57 f3 e1 4b
00000010: 21
alg: aead: Test 2 failed on encryption for morus1280-generic
00000000: 88 19 1b fb 1c 29 49 0e ee 82 2f cb 97 a6 a5 ee
00000010: 5f
Fixes: 396be41f16 ("crypto: morus - Add generic MORUS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to an unfortunate interaction between commit fbe1a850b3
("crypto: lrw - Fix out-of bounds access on counter overflow") and
commit c778f96bf3 ("crypto: lrw - Optimize tweak computation"),
we ended up with a version of next_index() that always returns 127.
Fixes: c778f96bf3 ("crypto: lrw - Optimize tweak computation")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For historical reasons, the AES-NI based implementation of the PCBC
chaining mode uses a special FPU chaining mode wrapper template to
amortize the FPU start/stop overhead over multiple blocks.
When this FPU wrapper was introduced, it supported widely used
chaining modes such as XTS and CTR (as well as LRW), but currently,
PCBC is the only remaining user.
Since there are no known users of pcbc(aes) in the kernel, let's remove
this special driver, and rely on the generic pcbc driver to encapsulate
the AES-NI core cipher.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We already have OFB test vectors and tcrypt OFB speed tests.
Add OFB functional tests to tcrypt as well.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a generic version of output feedback mode. We already have support for
several hardware-based transformations of this mode and the needed test
vectors, but we somehow missed adding a generic software one. Fix this now.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add additional test vectors from "The SM4 Blockcipher Algorithm And Its
Modes Of Operations" draft-ribose-cfrg-sm4-10 and register cipher speed
tests for sm4.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch implements a generic way to get statistics about all crypto
usage.
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation for removal of VLAs due to skcipher requests on the stack
via SKCIPHER_REQUEST_ON_STACK() usage, this introduces the infrastructure
for the "sync skcipher" tfm, which is for handling the on-stack cases of
skcipher, which are always non-ASYNC and have a known limited request
size.
The crypto API additions:
struct crypto_sync_skcipher (wrapper for struct crypto_skcipher)
crypto_alloc_sync_skcipher()
crypto_free_sync_skcipher()
crypto_sync_skcipher_setkey()
crypto_sync_skcipher_get_flags()
crypto_sync_skcipher_set_flags()
crypto_sync_skcipher_clear_flags()
crypto_sync_skcipher_blocksize()
crypto_sync_skcipher_ivsize()
crypto_sync_skcipher_reqtfm()
skcipher_request_set_sync_tfm()
SYNC_SKCIPHER_REQUEST_ON_STACK() (with tfm type check)
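For illustration, a caller converted to this API might look roughly as
follows (a minimal sketch; the "cbc(aes)" algorithm, the in-place operation
and the error handling are illustrative assumptions, not part of this patch):

#include <crypto/skcipher.h>

static int example_encrypt(const u8 *key, unsigned int keylen, u8 *iv,
			   struct scatterlist *sg, unsigned int len)
{
	struct crypto_sync_skcipher *tfm;
	int err;

	tfm = crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_sync_skcipher_setkey(tfm, key, keylen);
	if (!err) {
		/* fixed-size request, no VLA */
		SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);

		skcipher_request_set_sync_tfm(req, tfm);
		skcipher_request_set_callback(req, 0, NULL, NULL);
		skcipher_request_set_crypt(req, sg, sg, len, iv);
		err = crypto_skcipher_encrypt(req);
		skcipher_request_zero(req);
	}

	crypto_free_sync_skcipher(tfm);
	return err;
}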
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The encryption mode of pkcs1pad never uses out_sg and out_buf, so
there's no need to allocate the buffer, which presently is not even
being freed.
CC: Herbert Xu <herbert@gondor.apana.org.au>
CC: linux-crypto@vger.kernel.org
CC: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Dan Aloni <dan@kernelim.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a test vector for lrw(aes) that triggers wrap-around of
the counter, which is a tricky corner case.
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the LRW block counter overflows, the current implementation returns
128 as the index to the precomputed multiplication table, which has 128
entries. This patch fixes it to return the correct value (127).
Fixes: 64470f1b85 ("[CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode")
Cc: <stable@vger.kernel.org> # 2.6.20+
Reported-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In commit 9f480faec5 ("crypto: chacha20 - Fix keystream alignment for
chacha20_block()"), I had missed that chacha20_block() can be called
directly on the buffer passed to get_random_bytes(), which can have any
alignment. So, while my commit didn't break anything, it didn't fully
solve the alignment problems.
Revert my solution and just update chacha20_block() to use
put_unaligned_le32(), so the output buffer need not be aligned.
This is simpler, and on many CPUs it's the same speed.
But, I kept the 'tmp' buffers in extract_crng_user() and
_get_random_bytes() 4-byte aligned, since that alignment is actually
needed for _crng_backtrack_protect() too.
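Roughly, the output step of chacha20_block() becomes the following (a sketch
of the idea described above, not the exact diff; 'x' is the permuted state):

	/* write each keystream word with an unaligned store, so the
	 * caller's output buffer needs no particular alignment */
	for (i = 0; i < 16; i++)
		put_unaligned_le32(x[i] + state[i], &stream[i * sizeof(u32)]);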
Reported-by: Stephan Müller <smueller@chronox.de>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since commit acb9b159c7 ("crypto: gf128mul - define gf128mul_x_* in
gf128mul.h"), the gf128mul_x_*() functions are very fast and therefore
caching the computed XTS tweaks has only negligible advantage over
computing them twice.
In fact, since the current caching implementation limits the size of
the calls to the child ecb(...) algorithm to PAGE_SIZE (usually 4096 B),
it is often actually slower than the simple recomputing implementation.
This patch simplifies the XTS template to recompute the XTS tweaks from
scratch in the second pass and thus also removes the need to allocate a
dynamic buffer using kmalloc().
As discussed at [1], the use of kmalloc causes deadlocks with dm-crypt.
PERFORMANCE RESULTS
I measured time to encrypt/decrypt a memory buffer of varying sizes with
xts(ecb-aes-aesni) using a tool I wrote ([2]) and the results suggest
that after this patch the performance is either better or comparable for
both small and large buffers. Note that there is a lot of noise in the
measurements, but the overall difference is easy to see.
Old code:
ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
xts(aes) 256 64 331 328
xts(aes) 384 64 332 333
xts(aes) 512 64 338 348
xts(aes) 256 512 889 920
xts(aes) 384 512 1019 993
xts(aes) 512 512 1032 990
xts(aes) 256 4096 2152 2292
xts(aes) 384 4096 2453 2597
xts(aes) 512 4096 3041 2641
xts(aes) 256 16384 9443 8027
xts(aes) 384 16384 8536 8925
xts(aes) 512 16384 9232 9417
xts(aes) 256 32768 16383 14897
xts(aes) 384 32768 17527 16102
xts(aes) 512 32768 18483 17322
New code:
ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
xts(aes) 256 64 328 324
xts(aes) 384 64 324 319
xts(aes) 512 64 320 322
xts(aes) 256 512 476 473
xts(aes) 384 512 509 492
xts(aes) 512 512 531 514
xts(aes) 256 4096 2132 1829
xts(aes) 384 4096 2357 2055
xts(aes) 512 4096 2178 2027
xts(aes) 256 16384 6920 6983
xts(aes) 384 16384 8597 7505
xts(aes) 512 16384 7841 8164
xts(aes) 256 32768 13468 12307
xts(aes) 384 32768 14808 13402
xts(aes) 512 32768 15753 14636
[1] https://lkml.org/lkml/2018/8/23/1315
[2] https://gitlab.com/omos/linux-crypto-bench
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Introduce a facility that can be used to receive a notification
callback when a new algorithm becomes available. This can be used by
existing crypto registrations to trigger a switch from a software-only
algorithm to a hardware-accelerated version.
A new CRYPTO_MSG_ALG_LOADED state is introduced to the existing crypto
notification chain, and the register/unregister functions are exported
so they can be called by subsystems outside of crypto.
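A rough illustration of the intended use (the my_* names and handler body are
hypothetical; the register/unregister helper names and the assumption that
the notifier payload is the new struct crypto_alg should be checked against
the actual patch):

static int my_alg_loaded(struct notifier_block *nb, unsigned long msg,
			 void *data)
{
	if (msg == CRYPTO_MSG_ALG_LOADED) {
		struct crypto_alg *alg = data;	/* assumed payload */

		pr_info("algorithm %s is now available\n", alg->cra_name);
		/* e.g. schedule a switch away from the software-only path */
	}
	return NOTIFY_OK;
}

static struct notifier_block my_alg_nb = {
	.notifier_call	= my_alg_loaded,
};

static int __init my_init(void)
{
	return crypto_register_notifier(&my_alg_nb);
}

static void __exit my_exit(void)
{
	crypto_unregister_notifier(&my_alg_nb);
}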
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As it turns out, the AVX2 multibuffer SHA routines are currently
broken [0], in a way that would have likely been noticed if this
code were in wide use. Since the code is too complicated to be
maintained by anyone except the original authors, and since the
performance benefits for real-world use cases are debatable to
begin with, it is better to drop it entirely for the moment.
[0] https://marc.info/?l=linux-crypto-vger&m=153476243825350&w=2
Suggested-by: Eric Biggers <ebiggers@google.com>
Cc: Megha Dey <megha.dey@linux.intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the quest to remove all stack VLA usage from the kernel[1], this uses
the newly defined max alignment to perform unaligned hashing to avoid
VLAs, and drops the helper function while adding sanity checks on the
resulting buffer sizes. Additionally, the __aligned_largest macro is
removed since this helper was the only user.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the quest to remove all stack VLA usage from the kernel[1], this
exposes a new general upper bound on crypto blocksize and alignmask
(higher than for the existing cipher limits) for VLA removal,
and introduces new checks.
At present, the highest cra_alignmask in the kernel is 63. The highest
cra_blocksize is 144 (SHA3_224_BLOCK_SIZE, 18 8-byte words). For the
new blocksize limit, I went with 160 (20 8-byte words).
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the quest to remove all stack VLA usage from the kernel[1], this
removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize())
by using the maximum allowable size (which is now more clearly captured
in a macro), along with a few other cases. Similar limits are turned into
macros as well.
A review of existing sizes shows that SHA512_DIGEST_SIZE (64) is the
largest digest size and that sizeof(struct sha3_state) (360) is the
largest descriptor size. The corresponding maximums are reduced.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the quest to remove all stack VLA usage from the kernel[1], this drops
AHASH_REQUEST_ON_STACK by preallocating the ahash request area combined
with the skcipher area (which are not used at the same time).
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the quest to remove all stack VLA usage from the kernel[1], this uses
the maximum blocksize and adds a sanity check. For xcbc, the blocksize
must always be 16, so use that, since it's already being enforced during
instantiation.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
These are unused, undesired, and have never actually been used by
anybody. The original authors of this code have changed their mind about
its inclusion. While originally proposed for disk encryption on low-end
devices, the idea was discarded [1] in favor of something else before
that could really get going. Therefore, this patch removes Speck.
[1] https://marc.info/?l=linux-crypto-vger&m=153359499015659
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Acked-by: Eric Biggers <ebiggers@google.com>
Cc: stable@vger.kernel.org
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This round brings a couple of framework changes, a new driver and the usual
driver updates:
- New managed helper for dmaengine framework registration
- Split dmaengine pause capability to pause and resume and allow drivers to
report that individually
- Update dma_request_chan_by_mask() to handle deferred probing
- Move imx-sdma to use virt-dma
- New driver for Actions Semi Owl family S900 controller
- Minor updates to intel, renesas, mv_xor, pl330 etc
Merge tag 'dmaengine-4.19-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull DMAengine updates from Vinod Koul:
"This round brings couple of framework changes, a new driver and usual
driver updates:
- new managed helper for dmaengine framework registration
- split dmaengine pause capability to pause and resume and allow
drivers to report that individually
- update dma_request_chan_by_mask() to handle deferred probing
- move imx-sdma to use virt-dma
- new driver for Actions Semi Owl family S900 controller
- minor updates to intel, renesas, mv_xor, pl330 etc"
* tag 'dmaengine-4.19-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (46 commits)
dmaengine: Add Actions Semi Owl family S900 DMA driver
dt-bindings: dmaengine: Add binding for Actions Semi Owl SoCs
dmaengine: sh: rcar-dmac: Should not stop the DMAC by rcar_dmac_sync_tcr()
dmaengine: mic_x100_dma: use the new helper to simplify the code
dmaengine: add a new helper dmaenginem_async_device_register
dmaengine: imx-sdma: add memcpy interface
dmaengine: imx-sdma: add SDMA_BD_MAX_CNT to replace '0xffff'
dmaengine: dma_request_chan_by_mask() to handle deferred probing
dmaengine: pl330: fix irq race with terminate_all
dmaengine: Revert "dmaengine: mv_xor_v2: enable COMPILE_TEST"
dmaengine: mv_xor_v2: use {lower,upper}_32_bits to configure HW descriptor address
dmaengine: mv_xor_v2: enable COMPILE_TEST
dmaengine: mv_xor_v2: move unmap to before callback
dmaengine: mv_xor_v2: convert callback to helper function
dmaengine: mv_xor_v2: kill the tasklets upon exit
dmaengine: mv_xor_v2: explicitly freeup irq
dmaengine: sh: rcar-dmac: Add dma_pause operation
dmaengine: sh: rcar-dmac: add a new function to clear CHCR.DE with barrier
dmaengine: idma64: Support dmaengine_terminate_sync()
dmaengine: hsu: Support dmaengine_terminate_sync()
...
Replace the use of a magic number that indicates that verify_*_signature()
should use the secondary keyring with a symbol.
Signed-off-by: Yannik Sembritzki <yannik@sembritzki.me>
Signed-off-by: David Howells <dhowells@redhat.com>
Cc: keyrings@vger.kernel.org
Cc: linux-security-module@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull integrity updates from James Morris:
"This adds support for EVM signatures based on larger digests, contains
a new audit record AUDIT_INTEGRITY_POLICY_RULE to differentiate the
IMA policy rules from the IMA-audit messages, addresses two deadlocks
due to either loading or searching for crypto algorithms, and cleans
up the audit messages"
* 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
EVM: fix return value check in evm_write_xattrs()
integrity: prevent deadlock during digsig verification.
evm: Allow non-SHA1 digital signatures
evm: Don't deadlock if a crypto algorithm is unavailable
integrity: silence warning when CONFIG_SECURITYFS is not enabled
ima: Differentiate auditing policy rules from "audit" actions
ima: Do not audit if CONFIG_INTEGRITY_AUDIT is not set
ima: Use audit_log_format() rather than audit_log_string()
ima: Call audit_log_string() rather than logging it untrusted
Make it return -EINVAL if crypto_dh_key_len() is incorrect rather than
overflowing the buffer.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
DH_KPP_SECRET_MIN_SIZE was not increased to include 'q_size',
causing an out-of-bounds write of 4 bytes in crypto_dh_encode_key(), and
an out-of-bounds read of 4 bytes in crypto_dh_decode_key(). Fix it, and
fix the lengths of the test vectors to match this.
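For reference, a sketch of the adjusted accounting (the actual definition
lives in crypto/dh_helper.c; the packed secret carries one length field per
component, so with Q it presumably becomes):

#define DH_KPP_SECRET_MIN_SIZE	(sizeof(struct kpp_secret) + 4 * sizeof(int))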
Reported-by: syzbot+6d38d558c25b53b8f4ed@syzkaller.appspotmail.com
Fixes: e3fe0ae129 ("crypto: dh - add public key verification test")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Like the skcipher_walk and blkcipher_walk cases:
scatterwalk_done() is only meant to be called after a nonzero number of
bytes have been processed, since scatterwalk_pagedone() will flush the
dcache of the *previous* page. But in the error case of
ablkcipher_walk_done(), e.g. if the input wasn't an integer number of
blocks, scatterwalk_done() was actually called after advancing 0 bytes.
This caused a crash ("BUG: unable to handle kernel paging request")
during '!PageSlab(page)' on architectures like arm and arm64 that define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
page-aligned as in that case walk->offset == 0.
Fix it by reorganizing ablkcipher_walk_done() to skip the
scatterwalk_advance() and scatterwalk_done() if an error has occurred.
Reported-by: Liu Chao <liuchao741@huawei.com>
Fixes: bf06099db1 ("crypto: skcipher - Add ablkcipher_walk interfaces")
Cc: <stable@vger.kernel.org> # v2.6.35+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Like the skcipher_walk case:
scatterwalk_done() is only meant to be called after a nonzero number of
bytes have been processed, since scatterwalk_pagedone() will flush the
dcache of the *previous* page. But in the error case of
blkcipher_walk_done(), e.g. if the input wasn't an integer number of
blocks, scatterwalk_done() was actually called after advancing 0 bytes.
This caused a crash ("BUG: unable to handle kernel paging request")
during '!PageSlab(page)' on architectures like arm and arm64 that define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
page-aligned as in that case walk->offset == 0.
Fix it by reorganizing blkcipher_walk_done() to skip the
scatterwalk_advance() and scatterwalk_done() if an error has occurred.
This bug was found by syzkaller fuzzing.
Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:
#include <linux/if_alg.h>
#include <sys/socket.h>
#include <unistd.h>
int main()
{
	struct sockaddr_alg addr = {
		.salg_type = "skcipher",
		.salg_name = "ecb(aes-generic)",
	};
	char buffer[4096] __attribute__((aligned(4096))) = { 0 };
	int fd;

	fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(fd, (void *)&addr, sizeof(addr));
	setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
	fd = accept(fd, NULL, NULL);
	write(fd, buffer, 15);
	read(fd, buffer, 15);
}
Reported-by: Liu Chao <liuchao741@huawei.com>
Fixes: 5cde0af2a9 ("[CRYPTO] cipher: Added block cipher type")
Cc: <stable@vger.kernel.org> # v2.6.19+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
scatterwalk_done() is only meant to be called after a nonzero number of
bytes have been processed, since scatterwalk_pagedone() will flush the
dcache of the *previous* page. But in the error case of
skcipher_walk_done(), e.g. if the input wasn't an integer number of
blocks, scatterwalk_done() was actually called after advancing 0 bytes.
This caused a crash ("BUG: unable to handle kernel paging request")
during '!PageSlab(page)' on architectures like arm and arm64 that define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
page-aligned as in that case walk->offset == 0.
Fix it by reorganizing skcipher_walk_done() to skip the
scatterwalk_advance() and scatterwalk_done() if an error has occurred.
This bug was found by syzkaller fuzzing.
Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:
#include <linux/if_alg.h>
#include <sys/socket.h>
#include <unistd.h>
int main()
{
	struct sockaddr_alg addr = {
		.salg_type = "skcipher",
		.salg_name = "cbc(aes-generic)",
	};
	char buffer[4096] __attribute__((aligned(4096))) = { 0 };
	int fd;

	fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(fd, (void *)&addr, sizeof(addr));
	setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
	fd = accept(fd, NULL, NULL);
	write(fd, buffer, 15);
	read(fd, buffer, 15);
}
Reported-by: Liu Chao <liuchao741@huawei.com>
Fixes: b286d8b1a6 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Setting 'walk->nbytes = walk->total' in skcipher_walk_first() doesn't
make sense because actually walk->nbytes needs to be set to the length
of the first step in the walk, which may be less than walk->total. This
is done by skcipher_walk_next() which is called immediately afterwards.
Also walk->nbytes was already set to 0 in skcipher_walk_skcipher(),
which is a better default value in case it's forgotten to be set later.
Therefore, remove the unnecessary assignment to walk->nbytes.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All callers pass chain=0 to scatterwalk_crypto_chain().
Remove this unneeded parameter.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The ALIGN() macro needs to be passed the alignment, not the alignmask
(which is the alignment minus 1).
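A tiny standalone demonstration of the difference (17 bytes rounded up to a
16-byte boundary; the ALIGN_UP macro mirrors the kernel's ALIGN() rounding):

#include <stdio.h>

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned long alignmask = 15;	/* mask for 16-byte alignment */

	/* correct: pass the alignment (mask + 1) */
	printf("%lu\n", ALIGN_UP(17UL, alignmask + 1));	/* prints 32 */

	/* buggy: passing the mask itself under-aligns the result */
	printf("%lu\n", ALIGN_UP(17UL, alignmask));	/* prints 17 */

	return 0;
}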
Fixes: b286d8b1a6 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Avoid RCU stalls in the case of non-preemptible kernel and lengthy
speed tests by rescheduling when advancing from one block size
to another.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The wait_address argument is always directly derived from the filp
argument, so remove it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make use of the swap macro and remove unnecessary variable *tmp*.
This makes the code easier to read and maintain.
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make use of the swap macro and remove unnecessary variable *tmp*.
This makes the code easier to read and maintain.
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix the b value to be compliant with FIPS 186-4 D.1.2.1. This fix is
required to make sure the SP800-56A public key test passes for P-192.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
By adding a zero byte-length for the DH parameter Q value, the public
key verification test is disabled for the given test.
Reported-by: Eric Biggers <ebiggers3@gmail.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CTR DRBG requires two SGLs pointing to input/output buffers for the
CTR AES operation. The used SGLs always have only one entry. Thus, the
SGL can be initialized during allocation time, preventing a
re-initialization of the SGLs during each call.
The performance is increased by about 1 to 3 percent depending on the
requested buffer size.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In case memory resources for *base* were allocated, release them
before return.
Addresses-Coverity-ID: 1471702 ("Resource leak")
Fixes: e3fe0ae129 ("crypto: dh - add public key verification test")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Reviewed-by: Stephan Müller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto fix from Herbert Xu:
"This fixes an allocation error-path bug in af_alg discovered by
syzkaller"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: af_alg - Initialize sg_num_bytes in error code path
When EVM attempts to appraise a file signed with a crypto algorithm the
kernel doesn't have support for, it will cause the kernel to trigger a
module load. If the EVM policy includes appraisal of kernel modules this
will in turn call back into EVM - since EVM is holding a lock until the
crypto initialisation is complete, this triggers a deadlock. Add a
CRYPTO_NOLOAD flag and skip module loading if it's set, and add that flag
in the EVM case in order to fail gracefully with an error message
instead of deadlocking.
Signed-off-by: Matthew Garrett <mjg59@google.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
The RX SGL currently being processed is already registered with the RX SGL
tracking list to support proper cleanup. The cleanup code path uses the
sg_num_bytes variable which must therefore be always initialized, even
in the error code path.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Reported-by: syzbot+9c251bdd09f83b92ba95@syzkaller.appspotmail.com
#syz test: https://github.com/google/kmsan.git master
CC: <stable@vger.kernel.org> #4.14
Fixes: e870456d8e ("crypto: algif_skcipher - overhaul memory management")
Fixes: d887c52d6a ("crypto: algif_aead - overhaul memory management")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The testmgr hash tests were testing init, digest, update and final
methods but not the finup method. Add a test for this one too.
While doing this, make sure we only run the partial tests once with
the digest tests and skip them with the final and finup tests since
they are the same.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some aead algorithms set .cra_flags = CRYPTO_ALG_TYPE_AEAD. But this is
redundant with the C structure type ('struct aead_alg'), and
crypto_register_aead() already sets the type flag automatically,
clearing any type flag that was already there. Apparently the useless
assignment has just been copy+pasted around.
So, remove the useless assignment from all the aead algorithms.
This patch shouldn't change any actual behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Many shash algorithms set .cra_flags = CRYPTO_ALG_TYPE_SHASH. But this
is redundant with the C structure type ('struct shash_alg'), and
crypto_register_shash() already sets the type flag automatically,
clearing any type flag that was already there. Apparently the useless
assignment has just been copy+pasted around.
So, remove the useless assignment from all the shash algorithms.
This patch shouldn't change any actual behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
sha512-generic and sha384-generic had a cra_priority of 0, so it wasn't
possible to have a lower priority SHA-512 or SHA-384 implementation, as
is desired for sha512_mb which is only useful under certain workloads
and is otherwise extremely slow. Change them to priority 100, which is
the priority used for many of the other generic algorithms.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
sha256-generic and sha224-generic had a cra_priority of 0, so it wasn't
possible to have a lower priority SHA-256 or SHA-224 implementation, as
is desired for sha256_mb which is only useful under certain workloads
and is otherwise extremely slow. Change them to priority 100, which is
the priority used for many of the other generic algorithms.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
sha1-generic had a cra_priority of 0, so it wasn't possible to have a
lower priority SHA-1 implementation, as is desired for sha1_mb which is
only useful under certain workloads and is otherwise extremely slow.
Change it to priority 100, which is the priority used for many of the
other generic algorithms.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
According to SP800-56A section 5.6.2.1, the public key to be processed
for the DH operation shall be checked for appropriateness. The check
shall cover the full verification test in case the domain parameter Q
is provided as defined in SP800-56A section 5.6.2.3.1. If Q is not
provided, the partial check according to SP800-56A section 5.6.2.3.2 is
performed.
The full verification test requires the presence of the domain parameter
Q. Thus, the patch adds the support to handle Q. It is permissible to
not provide the Q value as part of the domain parameters. This implies
that the interface is still backwards-compatible where so far only P and
G are to be provided. However, if Q is provided, it is imported.
Without the test, the NIST ACVP testing fails. After adding this check,
the NIST ACVP testing passes. Testing without providing the Q domain
parameter has been performed to verify the interface has not changed.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As of GCC 9.0.0 the build is reporting warnings like:
crypto/ablkcipher.c: In function ‘crypto_ablkcipher_report’:
crypto/ablkcipher.c:374:2: warning: ‘strncpy’ specified bound 64 equals destination size [-Wstringop-truncation]
strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "<default>",
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
sizeof(rblkcipher.geniv));
~~~~~~~~~~~~~~~~~~~~~~~~~
This means the strncpy might leave the string without NUL termination. Fix
this by explicitly performing '\0' termination.
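That is, a pattern along these lines (illustrative, not necessarily the
exact diff; the identifiers come from the warning quoted above):

	strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "<default>",
		sizeof(rblkcipher.geniv) - 1);
	rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0';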
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Eric Biggers <ebiggers3@gmail.com>
Cc: Nick Desaulniers <nick.desaulniers@gmail.com>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
According to SP800-56A section 5.6.2.1, the public key to be processed
for the ECDH operation shall be checked for appropriateness. When the
public key is considered to be an ephemeral key, the partial validation
test as defined in SP800-56A section 5.6.2.3.4 can be applied.
The partial verification test requires the presence of the field
elements of a and b. For the implemented NIST curves, b is defined in
FIPS 186-4 appendix D.1.2. The element a is implicitly given with the
Weierstrass equation given in D.1.2 where a = p - 3.
Without the test, the NIST ACVP testing fails. After adding this check,
the NIST ACVP testing passes.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function skcipher_walk_next is declared static yet marked
EXPORT_SYMBOL_GPL. It's a bit confusing for an internal function to be
exported: the area of visibility for such a function is its own .c file
plus all other modules, so other *.c files of the same module can't use it,
even though all other modules can. Since this is an internal function and
not a crucial part of the API, the patch just removes the EXPORT_SYMBOL_GPL
marking from skcipher_walk_next.
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Denis Efremov <efremov@linux.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the original version of the VMAC template that had the nonce
hardcoded to 0 and produced a digest with the wrong endianness. I'm
unsure whether this had users or not (there are no explicit in-kernel
references to it), but given that the hardcoded nonce made it wildly
insecure unless a unique key was used for each message, let's try
removing it and see if anyone complains.
Leave the new "vmac64" template that requires the nonce to be explicitly
specified as the first 16 bytes of data and uses the correct endianness
for the digest.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently the VMAC template uses a "nonce" hardcoded to 0, which makes
it insecure unless a unique key is set for every message. Also, the
endianness of the final digest is wrong: the implementation uses little
endian, but the VMAC specification has it as big endian, as do other
VMAC implementations such as the one in Crypto++.
Add a new VMAC template where the nonce is passed as the first 16 bytes
of data (similar to what is done for Poly1305's nonce), and the digest
is big endian. Call it "vmac64", since the old name of simply "vmac"
didn't clarify whether the implementation is of VMAC-64 or of VMAC-128
(which produce 64-bit and 128-bit digests respectively); so we fix the
naming ambiguity too.
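For illustration, userspace could exercise the new template through AF_ALG
roughly as follows (a sketch modelled on the reproducers elsewhere in this
log; the all-zero key/nonce, the single-write framing and the 8-byte digest
size are assumptions for the example, not taken from the patch):

#include <linux/if_alg.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_alg addr = {
		.salg_type = "hash",
		.salg_name = "vmac64(aes)",
	};
	unsigned char key[16] = { 0 };		/* AES-128 key (demo only) */
	unsigned char buf[16 + 5];		/* 16-byte nonce followed by the message */
	unsigned char digest[8];		/* assumed 64-bit tag */
	int tfmfd, opfd, i;

	memset(buf, 0, 16);			/* nonce: must be unique per message */
	memcpy(buf + 16, "hello", 5);

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfmfd, (void *)&addr, sizeof(addr));
	setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
	opfd = accept(tfmfd, NULL, NULL);

	write(opfd, buf, sizeof(buf));		/* nonce || message in one shot */
	read(opfd, digest, sizeof(digest));

	for (i = 0; i < (int)sizeof(digest); i++)
		printf("%02x", digest[i]);
	printf("\n");
	return 0;
}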
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
syzbot reported a crash in vmac_final() when multiple threads
concurrently use the same "vmac(aes)" transform through AF_ALG. The bug
is pretty fundamental: the VMAC template doesn't separate per-request
state from per-tfm (per-key) state like the other hash algorithms do,
but rather stores it all in the tfm context. That's wrong.
Also, vmac_final() incorrectly zeroes most of the state including the
derived keys and cached pseudorandom pad. Therefore, only the first
VMAC invocation with a given key calculates the correct digest.
Fix these bugs by splitting the per-tfm state from the per-request state
and using the proper init/update/final sequencing for requests.
Reproducer for the crash:
#include <linux/if_alg.h>
#include <sys/socket.h>
#include <unistd.h>
int main()
{
	int fd;
	struct sockaddr_alg addr = {
		.salg_type = "hash",
		.salg_name = "vmac(aes)",
	};
	char buf[256] = { 0 };

	fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(fd, (void *)&addr, sizeof(addr));
	setsockopt(fd, SOL_ALG, ALG_SET_KEY, buf, 16);
	fork();
	fd = accept(fd, NULL, NULL);
	for (;;)
		write(fd, buf, 256);
}
The immediate cause of the crash is that vmac_ctx_t.partial_size exceeds
VMAC_NHBYTES, causing vmac_final() to memset() a negative length.
Reported-by: syzbot+264bca3a6e8d645550d3@syzkaller.appspotmail.com
Fixes: f1939f7c56 ("crypto: vmac - New hash algorithm for intel_txt support")
Cc: <stable@vger.kernel.org> # v2.6.32+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The VMAC template assumes the block cipher has a 128-bit block size, but
it failed to check for that. Thus it was possible to instantiate it
using a 64-bit block size cipher, e.g. "vmac(cast5)", causing
uninitialized memory to be used.
Add the needed check when instantiating the template.
Fixes: f1939f7c56 ("crypto: vmac - New hash algorithm for intel_txt support")
Cc: <stable@vger.kernel.org> # v2.6.32+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The poll() changes were not well thought out, and completely
unexplained. They also caused a huge performance regression, because
"->poll()" was no longer a trivial file operation that just called down
to the underlying file operations, but instead did at least two indirect
calls.
Indirect calls are sadly slow now with the Spectre mitigation, but the
performance problem could at least be largely mitigated by changing the
"->get_poll_head()" operation to just have a per-file-descriptor pointer
to the poll head instead. That gets rid of one of the new indirections.
But that doesn't fix the new complexity that is completely unwarranted
for the regular case. The (undocumented) reason for the poll() changes
was some alleged AIO poll race fixing, but we don't make the common case
slower and more complex for some uncommon special case, so this all
really needs way more explanations and most likely a fundamental
redesign.
[ This revert is a revert of about 30 different commits, not reverted
individually because that would just be unnecessarily messy - Linus ]
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull security subsystem fixes from James Morris:
- Smack: fix a regression caused by 1bbc55131e
- X.509: fix a (usually un-seen) bug in RSA signature parsing
* 'fixes-v4.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
X.509: unpack RSA signatureValue field from BIT STRING
Smack: Mark inode instant in smack_task_to_inode
The signatureValue field of a X.509 certificate is encoded as a BIT STRING.
For RSA signatures this BIT STRING is of so-called primitive subtype, which
contains a u8 prefix indicating a count of unused bits in the encoding.
We have to strip this prefix from signature data, just as we already do for
key data in x509_extract_key_data() function.
This wasn't noticed earlier because this prefix byte is zero for RSA key
sizes divisible by 8. Since BIT STRING is a big-endian encoding, adding zero
prefixes has no bearing on its value.
The signature length, however was incorrect, which is a problem for RSA
implementations that need it to be exactly correct (like AMD CCP).
Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
Fixes: c26fd69fa0 ("X.509: Add a crypto key parser for binary (DER) X.509 certificates")
Cc: stable@vger.kernel.org
Signed-off-by: James Morris <james.morris@microsoft.com>
Pull crypto fixes from Herbert Xu:
- Fix use after free in chtls
- Fix RBP breakage in sha3
- Fix use after free in hwrng_unregister
- Fix overread in morus640
- Move sleep out of kernel_neon in arm64/aes-blk
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
hwrng: core - Always drop the RNG in hwrng_unregister()
crypto: morus640 - Fix out-of-bounds access
crypto: don't optimize keccakf()
crypto: arm64/aes-blk - fix and move skcipher_walk_done out of kernel_neon_begin, _end
crypto: chtls - use after free in chtls_pt_recvmsg()
Trivial fix to correct the indentation of a single statement.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the sha384 pre-computed 0-length hash so that device
drivers can use it when a hardware engine does not support computing a
hash from a 0-length input.
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the sha512 pre-computed 0-length hash so that device
drivers can use it when a hardware engine does not support computing a
hash from a 0-length input.
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the quest to remove VLAs from the kernel[1], this adjusts the
allocation of coefs and blocks to use the existing maximum values
(with one new define, MAX_DISKS for coefs, and a reuse of the
existing NDISKS for blocks).
[1] https://lkml.org/lkml/2018/3/7/621
Signed-off-by: Kyle Spiers <ksspiers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
As we move stuff around, some doc references are broken. Fix some of
them via this script:
./scripts/documentation-file-ref-check --fix
Manually checked if the produced result is valid, removing a few
false-positives.
Acked-by: Takashi Iwai <tiwai@suse.de>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Stephen Boyd <sboyd@kernel.org>
Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Acked-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Reviewed-by: Coly Li <colyli@suse.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Acked-by: Jonathan Corbet <corbet@lwn.net>
We must load the block from the temporary variable here, not directly
from the input.
Also add forgotten zeroing-out of the uninitialized part of the
temporary block (as is done correctly in morus1280.c).
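In other words, something along these lines (a sketch with illustrative
identifiers, not the actual morus640.c code):

	memcpy(tmp, src, size);
	memset(tmp + size, 0, MORUS640_BLOCK_SIZE - size);	/* the missing zeroing */
	crypto_morus640_load(&m, tmp);				/* load from tmp, not from src */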
Fixes: 396be41f16 ("crypto: morus - Add generic MORUS AEAD implementations")
Reported-by: syzbot+1fafa9c4cf42df33f716@syzkaller.appspotmail.com
Reported-by: syzbot+d82643ba80bf6937cd44@syzkaller.appspotmail.com
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
keccakf() is the only function in kernel that uses __optimize() macro.
__optimize() breaks the frame pointer unwinder as optimized code uses RBP,
and amusingly this always led to degraded performance as gcc does not
inline across different optimization levels, so keccakf() wasn't inlined
into its callers and keccakf_round() wasn't inlined into keccakf().
Drop __optimize() to resolve both problems.
Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Fixes: 83dee2ce1a ("crypto: sha3-generic - rewrite KECCAK transform to help the compiler optimize")
Reported-by: syzbot+37035ccfa9a0a017ffcf@syzkaller.appspotmail.com
Reported-by: syzbot+e073e4740cfbb3ae200b@syzkaller.appspotmail.com
Cc: linux-crypto@vger.kernel.org
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
- Use overflow helpers in 2-factor allocators (Kees, Rasmus)
- Introduce overflow test module (Rasmus, Kees)
- Introduce saturating size helper functions (Matthew, Kees)
- Treewide use of struct_size() for allocators (Kees)
Merge tag 'overflow-v4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux
Pull overflow updates from Kees Cook:
"This adds the new overflow checking helpers and adds them to the
2-factor argument allocators. And this adds the saturating size
helpers and does a treewide replacement for the struct_size() usage.
Additionally this adds the overflow testing modules to make sure
everything works.
I'm still working on the treewide replacements for allocators with
"simple" multiplied arguments:
*alloc(a * b, ...) -> *alloc_array(a, b, ...)
and
*zalloc(a * b, ...) -> *calloc(a, b, ...)
as well as the more complex cases, but that's separable from this
portion of the series. I expect to have the rest sent before -rc1
closes; there are a lot of messy cases to clean up.
Summary:
- Introduce arithmetic overflow test helper functions (Rasmus)
- Use overflow helpers in 2-factor allocators (Kees, Rasmus)
- Introduce overflow test module (Rasmus, Kees)
- Introduce saturating size helper functions (Matthew, Kees)
- Treewide use of struct_size() for allocators (Kees)"
* tag 'overflow-v4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
treewide: Use struct_size() for devm_kmalloc() and friends
treewide: Use struct_size() for vmalloc()-family
treewide: Use struct_size() for kmalloc()-family
device: Use overflow helpers for devm_kmalloc()
mm: Use overflow helpers in kvmalloc()
mm: Use overflow helpers in kmalloc_array*()
test_overflow: Add memory allocation overflow tests
overflow.h: Add allocation size calculation helpers
test_overflow: Report test failures
test_overflow: macrofy some more, do more tests for free
lib: add runtime test of check_*_overflow functions
compiler.h: enable builtin overflow checkers and add fallback code
Pull aio updates from Al Viro:
"Majority of AIO stuff this cycle. aio-fsync and aio-poll, mostly.
The only thing I'm holding back for a day or so is Adam's aio ioprio -
his last-minute fixup is trivial (missing stub in !CONFIG_BLOCK case),
but let it sit in -next for decency sake..."
* 'work.aio-1' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (46 commits)
aio: sanitize the limit checking in io_submit(2)
aio: fold do_io_submit() into callers
aio: shift copyin of iocb into io_submit_one()
aio_read_events_ring(): make a bit more readable
aio: all callers of aio_{read,write,fsync,poll} treat 0 and -EIOCBQUEUED the same way
aio: take list removal to (some) callers of aio_complete()
aio: add missing break for the IOCB_CMD_FDSYNC case
random: convert to ->poll_mask
timerfd: convert to ->poll_mask
eventfd: switch to ->poll_mask
pipe: convert to ->poll_mask
crypto: af_alg: convert to ->poll_mask
net/rxrpc: convert to ->poll_mask
net/iucv: convert to ->poll_mask
net/phonet: convert to ->poll_mask
net/nfc: convert to ->poll_mask
net/caif: convert to ->poll_mask
net/bluetooth: convert to ->poll_mask
net/sctp: convert to ->poll_mask
net/tipc: convert to ->poll_mask
...
This reverts commit eb772f37ae, as now the
x86 Salsa20 implementation has been removed and the generic helpers are
no longer needed outside of salsa20_generic.c.
We could keep this just in case someone else wants to add a new
optimized Salsa20 implementation. But given that we have ChaCha20 now
too, I think it's unlikely. And this can always be reverted back.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The x86 assembly implementations of Salsa20 use the frame base pointer
register (%ebp or %rbp), which breaks frame pointer convention and
breaks stack traces when unwinding from an interrupt in the crypto code.
Recent (v4.10+) kernels will warn about this, e.g.
WARNING: kernel stack regs at 00000000a8291e69 in syzkaller047086:4677 has bad 'bp' value 000000001077994c
[...]
But after looking into it, I believe there's very little reason to still
retain the x86 Salsa20 code. First, these are *not* vectorized
(SSE2/SSSE3/AVX2) implementations, which would be needed to get anywhere
close to the best Salsa20 performance on any remotely modern x86
processor; they're just regular x86 assembly. Second, it's still
unclear that anyone is actually using the kernel's Salsa20 at all,
especially given that now ChaCha20 is supported too, and with much more
efficient SSSE3 and AVX2 implementations. Finally, in benchmarks I did
on both Intel and AMD processors with both gcc 8.1.0 and gcc 4.9.4, the
x86_64 salsa20-asm is actually slightly *slower* than salsa20-generic
(~3% slower on Skylake, ~10% slower on Zen), while the i686 salsa20-asm
is only slightly faster than salsa20-generic (~15% faster on Skylake,
~20% faster on Zen). The gcc version made little difference.
So, the x86_64 salsa20-asm is pretty clearly useless. That leaves just
the i686 salsa20-asm, which based on my tests provides a 15-20% speed
boost. But that's without updating the code to not use %ebp. And given
the maintenance cost, the small speed difference vs. salsa20-generic,
the fact that few people still use i686 kernels, the doubt that anyone
is even using the kernel's Salsa20 at all, and the fact that a SSE2
implementation would almost certainly be much faster on any remotely
modern x86 processor yet no one has cared enough to add one yet, I don't
think it's worthwhile to keep.
Thus, just remove both the x86_64 and i686 salsa20-asm implementations.
Reported-by: syzbot+ffa3a158337bbc01ff09@syzkaller.appspotmail.com
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit 56e8e57fc3 ("crypto: morus - Add common SIMD glue code for
MORUS") accidetally consiedered the glue code to be usable by different
architectures, but it seems to be only usable on x86.
This patch moves it under arch/x86/crypto and adds 'depends on X86' to
the Kconfig options and also removes the prompt to hide these internal
options from the user.
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently testmgr has separate encryption and decryption test vectors
for symmetric ciphers. That's massively redundant, since with few
exceptions (mostly mistakes, apparently), all decryption tests are
identical to the encryption tests, just with the input/result flipped.
Therefore, eliminate the redundancy by removing the decryption test
vectors and updating testmgr to test both encryption and decryption
using what used to be the encryption test vectors. Naming is adjusted
accordingly: each cipher_testvec now has a 'ptext' (plaintext), 'ctext'
(ciphertext), and 'len' instead of an 'input', 'result', 'ilen', and
'rlen'. Note that it was always the case that 'ilen == rlen'.
AES keywrap ("kw(aes)") is special because its IV is generated by the
encryption. Previously this was handled by specifying 'iv_out' for
encryption and 'iv' for decryption. To make it work cleanly with only
one set of test vectors, put the IV in 'iv', remove 'iv_out', and add a
boolean that indicates that the IV is generated by the encryption.
In total, this removes over 10000 lines from testmgr.h, with no
reduction in test coverage since prior patches already copied the few
unique decryption test vectors into the encryption test vectors.
This covers all algorithms that used 'struct cipher_testvec', e.g. any
block cipher in the ECB, CBC, CTR, XTS, LRW, CTS-CBC, PCBC, OFB, or
keywrap modes, and Salsa20 and ChaCha20. No change is made to AEAD
tests, though we probably can eliminate a similar redundancy there too.
The testmgr.h portion of this patch was automatically generated using
the following awk script, with some slight manual fixups on top (updated
'struct cipher_testvec' definition, updated a few comments, and fixed up
the AES keywrap test vectors):
BEGIN { OTHER = 0; ENCVEC = 1; DECVEC = 2; DECVEC_TAIL = 3; mode = OTHER }
/^static const struct cipher_testvec.*_enc_/ { sub("_enc", ""); mode = ENCVEC }
/^static const struct cipher_testvec.*_dec_/ { mode = DECVEC }
mode == ENCVEC && !/\.ilen[[:space:]]*=/ {
	sub(/\.input[[:space:]]*=$/, ".ptext =")
	sub(/\.input[[:space:]]*=/, ".ptext\t=")
	sub(/\.result[[:space:]]*=$/, ".ctext =")
	sub(/\.result[[:space:]]*=/, ".ctext\t=")
	sub(/\.rlen[[:space:]]*=/, ".len\t=")
	print
}
mode == DECVEC_TAIL && /[^[:space:]]/ { mode = OTHER }
mode == OTHER { print }
mode == ENCVEC && /^};/ { mode = OTHER }
mode == DECVEC && /^};/ { mode = DECVEC_TAIL }
Note that git's default diff algorithm gets confused by the testmgr.h
portion of this patch, and reports too many lines added and removed.
It's better viewed with 'git diff --minimal' (or 'git show --minimal'),
which reports "2 files changed, 919 insertions(+), 11723 deletions(-)".
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
One "kw(aes)" decryption test vector doesn't exactly match an encryption
test vector with input and result swapped. In preparation for removing
the decryption test vectors, add this test vector to the encryption test
vectors, so we don't lose any test coverage.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
None of the four "ecb(tnepres)" decryption test vectors exactly match an
encryption test vector with input and result swapped. In preparation
for removing the decryption test vectors, add these to the encryption
test vectors, so we don't lose any test coverage.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
One "cbc(des)" decryption test vector doesn't exactly match an
encryption test vector with input and result swapped. It's *almost* the
same as one, but the decryption version is "chunked" while the
encryption version is "unchunked". In preparation for removing the
decryption test vectors, make the encryption one both chunked and
unchunked, so we don't lose any test coverage.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Two "ecb(des)" decryption test vectors don't exactly match any of the
encryption test vectors with input and result swapped. In preparation
for removing the decryption test vectors, add these to the encryption
test vectors, so we don't lose any test coverage.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crc32c has an unkeyed test vector but crc32 did not. Add the crc32c one
(which uses an empty input) to crc32 too, and also add a new one to both
that uses a nonempty input. These test vectors verify that crc32 and
crc32c implementations use the correct default initial state.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since testmgr uses a single tfm for all tests of each hash algorithm,
once a key is set the tfm won't be unkeyed anymore. But with crc32 and
crc32c, the key is really the "default initial state" and is optional;
those algorithms should have both keyed and unkeyed test vectors, to
verify that implementations use the correct default key.
Simply listing the unkeyed test vectors first isn't guaranteed to work
yet because testmgr makes multiple passes through the test vectors.
crc32c does have an unkeyed test vector listed first currently, but it
only works by chance because the last crc32c test vector happens to use
a key that is the same as the default key.
Therefore, teach testmgr to split hash test vectors into unkeyed and
keyed sections, and do all the unkeyed ones before the keyed ones.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The Blackfin CRC driver was removed by commit 9678a8dc53 ("crypto:
bfin_crc - remove blackfin CRC driver"), but the corresponding
"hmac(crc32)" test vectors were never removed. I see no point in keeping
them since nothing else appears to implement or use "hmac(crc32)", which
isn't an algorithm that makes sense anyway because HMAC is meant to be
used with a cryptographically secure hash function, which CRC's are not.
Thus, remove the unneeded test vectors.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The __crc32_le() wrapper function is pointless. Just call crc32_le()
directly instead.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crc32c-generic sets an alignmask, but actually its ->update() works with
any alignment; only its ->setkey() and outputting the final digest
assume an alignment. To prevent the buffer from having to be aligned by
the crypto API for just these cases, switch these cases over to the
unaligned access macros and remove the cra_alignmask. Note that this
also makes crc32c-generic more consistent with crc32-generic.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crc32-generic doesn't have a cra_alignmask set, which is desired as its
->update() works with any alignment. However, it incorrectly assumes
4-byte alignment in ->setkey() and when outputting the final digest.
Fix this by using the unaligned access macros in those cases.
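That is, the pattern is roughly the following (a sketch, not the exact diff;
'mctx', 'crcp' and 'out' stand in for the tfm-context key, the running CRC
and the digest output buffer):

	/* ->setkey(): read the user-supplied seed without assuming alignment */
	*mctx = get_unaligned_le32(key);

	/* emitting the final digest: likewise use an unaligned store */
	put_unaligned_le32(*crcp, out);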
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds optimized implementations of MORUS-640 and MORUS-1280,
utilizing the SSE2 and AVX2 x86 extensions.
For MORUS-1280 (which operates on 256-bit blocks) we provide both AVX2
and SSE2 implementation. Although SSE2 MORUS-1280 is slower than AVX2
MORUS-1280, it is comparable in speed to the SSE2 MORUS-640.
Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a common glue code for optimized implementations of
MORUS AEAD algorithms.
Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds test vectors for MORUS-640 and MORUS-1280. The test
vectors were generated using the reference implementation from
SUPERCOP (see code comments for more details).
Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the generic implementation of the MORUS family of AEAD
algorithms (MORUS-640 and MORUS-1280). The original authors of MORUS
are Hongjun Wu and Tao Huang.
At the time of writing, MORUS is one of the finalists in CAESAR, an
open competition intended to select a portfolio of alternatives to
the problematic AES-GCM:
https://competitions.cr.yp.to/caesar-submissions.html
https://competitions.cr.yp.to/round3/morusv2.pdf
Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds optimized implementations of AEGIS-128, AEGIS-128L,
and AEGIS-256, utilizing the AES-NI and SSE2 x86 extensions.
Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds test vectors for the AEGIS family of AEAD algorithms
(AEGIS-128, AEGIS-128L, and AEGIS-256). The test vectors were
generated using the reference implementation from SUPERCOP (see code
comments for more details).
Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the generic implementation of the AEGIS family of AEAD
algorithms (AEGIS-128, AEGIS-128L, and AEGIS-256). The original
authors of AEGIS are Hongjun Wu and Bart Preneel.
At the time of writing, AEGIS is one of the finalists in CAESAR, an
open competition intended to select a portfolio of alternatives to
the problematic AES-GCM:
https://competitions.cr.yp.to/caesar-submissions.html
https://competitions.cr.yp.to/round3/aegisv11.pdf
Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to a snafu, the "paes" testmgr tests were not ordered
lexicographically, which led to boot time warnings.
Reorder the tests as needed.
Fixes: a794d8d ("crypto: ccree - enable support for hardware keys")
Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Tested-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add variants of proc_create{,_data} that directly take a struct
seq_operations argument and drastically reduce the boilerplate code in
the callers. All trivial callers are converted over.
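A hedged sketch of a converted caller (the foo_* names are made up for
illustration):

    static const struct seq_operations foo_seq_ops = {
        .start = foo_seq_start, .next = foo_seq_next,
        .stop  = foo_seq_stop,  .show = foo_seq_show,
    };

    static int __init foo_proc_init(void)
    {
        /* replaces a foo_open() wrapper plus a whole file_operations
         * struct that only forwarded to seq_read/seq_lseek/seq_release */
        if (!proc_create_seq("foo", 0444, NULL, &foo_seq_ops))
            return -ENOMEM;
        return 0;
    }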
Signed-off-by: Christoph Hellwig <hch@lst.de>
In the quest to remove all stack VLA usage from the kernel[1], this
allocates the return code buffers before starting jiffie timers, rather
than using stack space for the array. Additionally, this cleans up
some exit paths and makes sure that the num_mb module_param() is used
only once per execution to avoid possible races if the value changes.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
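A rough sketch of the allocation pattern (variable names are
illustrative, not the exact tcrypt code):

    /* allocate the per-request return codes up front, outside the
     * timed region, instead of a stack VLA sized by num_mb */
    int *rc = kcalloc(num_mb, sizeof(*rc), GFP_KERNEL);
    if (!rc)
        return -ENOMEM;

    start = jiffies;
    /* issue num_mb requests, recording each completion status in rc[i] */
    end = jiffies;

    kfree(rc);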
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation for adding support for the SIMD-based arm64
implementation of SM4, which requires a fallback to non-SIMD code when invoked in
certain contexts, expose the generic SM4 encrypt and decrypt routines
to other drivers.
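Roughly the pattern the SIMD driver needs (function names follow
crypto/sm4.h of this era; treat the snippet as an illustrative sketch):

    static void sm4_ce_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
    {
        if (!may_use_simd()) {
            /* e.g. called from a context where NEON is unusable */
            crypto_sm4_encrypt(tfm, out, in);
            return;
        }
        kernel_neon_begin();
        /* NEON / Crypto Extensions implementation goes here */
        kernel_neon_end();
    }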
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Enable CryptoCell support for hardware keys.
Hardware keys are regular AES keys loaded into CryptoCell internal memory
via firmware, often from secure boot ROM or hardware fuses at boot time.
As such, they can be used for enc/dec purposes like any other key but
cannot (read: it is extremely hard to) be extracted, since they are
not available anywhere in RAM during runtime.
The mechanism has some similarities to s390 secure keys although the keys
are not wrapped or sealed, but simply loaded offline. The interface was
therefore modeled based on the s390 secure keys support.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There is no need to assign an error value to 'ret' prior
to calling mpi_read_raw_from_sgl() because in the case
of error the 'ret' variable will be assigned to the error
code inside the if block.
In the case of success, 'ret' will be overwritten
immediately afterwards, so remove the unneeded assignment.
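In other words (a sketch; the surrounding names are illustrative):

    ret = -ENOMEM;          /* dead store: removed by this patch */
    in = mpi_read_raw_from_sgl(req->src, req->src_len);
    if (!in) {
        ret = -ENOMEM;      /* the only assignment that is needed */
        goto err;
    }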
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The following error is triggered by the ThunderX ZIP driver
if the testmanager is enabled:
[ 199.069437] ThunderX-ZIP 0000:03:00.0: Found ZIP device 0 177d:a01a on Node 0
[ 199.073573] alg: comp: Compression test 1 failed for deflate-generic: output len = 37
The reason for this error is the verification of the compression
results. Verifying the compression result only works if all
algorithm parameters are identical to those of the software
implementation.
Different compression engines like the ThunderX ZIP coprocessor
might yield different compression results by tuning the
algorithm parameters. In our case the compressed result is
shorter than the test vector.
We should not forbid different compression results but only
check that compression -> decompression yields the same
result. This is done already in the acomp test. Do something
similar for test_comp().
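A sketch of the round-trip check (buffer names are illustrative):

    ret = crypto_comp_compress(tfm, input, ilen, comp_buf, &comp_len);
    if (ret)
        goto out;
    ret = crypto_comp_decompress(tfm, comp_buf, comp_len,
                                 decomp_buf, &decomp_len);
    if (ret)
        goto out;
    /* accept any valid compressed form; only require a faithful round trip */
    if (decomp_len != ilen || memcmp(decomp_buf, input, ilen))
        ret = -EINVAL;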
Signed-off-by: Mahipal Challa <mchalla@cavium.com>
Signed-off-by: Balakrishna Bhamidipati <bbhamidipati@cavium.com>
[jglauber@cavium.com: removed unrelated printk changes, rewrote commit msg,
fixed whitespace and unneeded initialization]
Signed-off-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation for the removal of VLAs[1] from crypto code.
We create two new compile-time constants: all ciphers implemented
in Linux have a block size less than or equal to 16 bytes, and the
most demanding hardware requires 16-byte alignment for the block
buffer.
We also enforce these limits in crypto_check_alg when a new
cipher is registered.
[1] http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
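A sketch of the idea (the values follow the limits described above;
the exact constant names and check placement in the tree may differ):

    #define MAX_CIPHER_BLOCKSIZE 16
    #define MAX_CIPHER_ALIGNMASK 15

    /* in crypto_check_alg(): refuse cipher algorithms that would not
     * fit the fixed-size stack buffers sized by these constants */
    if ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_CIPHER) {
        if (alg->cra_blocksize > MAX_CIPHER_BLOCKSIZE ||
            alg->cra_alignmask > MAX_CIPHER_ALIGNMASK)
            return -EINVAL;
    }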
Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In crypto_authenc_esn_setkey we save pointers to the authenc keys
in a local variable of type struct crypto_authenc_keys and we don't
zeroize it after use. Fix this and don't leak pointers to the
authenc keys.
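A sketch of the pattern (example_setkey is a stand-in name; the real
setkey also programs the sub-transforms):

    static int example_setkey(struct crypto_aead *tfm, const u8 *key,
                              unsigned int keylen)
    {
        struct crypto_authenc_keys keys;
        int err;

        err = crypto_authenc_extractkeys(&keys, key, keylen);
        if (err)
            goto out;

        /* program keys.authkey / keys.enckey into the sub-transforms here */

    out:
        /* don't leave pointers to the key material on the stack */
        memzero_explicit(&keys, sizeof(keys));
        return err;
    }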
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In crypto_authenc_setkey we save pointers to the authenc keys in
a local variable of type struct crypto_authenc_keys and we don't
zeroize it after use. Fix this and don't leak pointers to the
authenc keys.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Adds zstd support to crypto and scompress. Only supports the default
level.
Previously we held off on this patch, since there weren't any users.
Now zram is ready for zstd support, but depends on CONFIG_CRYPTO_ZSTD,
which isn't defined until this patch is in. I also see a patch adding
zstd to pstore [0], which depends on crypto zstd.
[0] lkml.kernel.org/r/9c9416b2dff19f05fb4c35879aaa83d11ff72c92.1521626182.git.geliangtang@gmail.com
Signed-off-by: Nick Terrell <terrelln@fb.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On the quest to remove all VLAs from the kernel[1], this avoids VLAs
by just using the maximum allocation size (4 bytes) for stack arrays.
All the VLAs in ecc were either 3 or 4 bytes (or a multiple), so just
make it 4 bytes all the time. Initialization routines are adjusted to
check that ndigits does not end up larger than the arrays.
This includes a removal of the earlier attempt at this fix from
commit a963834b4742 ("crypto/ecc: Remove stack VLA usage")
[1] https://lkml.org/lkml/2018/3/7/621
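Illustratively (ECC_MAX_DIGITS is the upper bound used by crypto/ecc,
where each digit is a u64; this is a sketch, not the literal diff):

    #define ECC_MAX_DIGITS 4

    u64 priv[ECC_MAX_DIGITS];       /* was: u64 priv[ndigits]; a VLA */

    if (ndigits > ECC_MAX_DIGITS)   /* enforced in the init routines */
        return -EINVAL;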
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
During freeing of the internal buffers used by the DRBG, set the pointer
to NULL. It is possible that the context with the freed buffers is
reused. In case of an error during initialization where the pointers
do not yet point to allocated memory, the NULL value prevents a double
free.
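The pattern, roughly (field names are illustrative):

    kzfree(drbg->Vbuf);
    drbg->Vbuf = NULL;   /* a later error path may free again; NULL is safe */
    kzfree(drbg->Cbuf);
    drbg->Cbuf = NULL;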
Cc: stable@vger.kernel.org
Fixes: 3cfc3b9721 ("crypto: drbg - use aligned buffers")
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Reported-by: syzbot+75397ee3df5c70164154@syzkaller.appspotmail.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit eb02c38f01 ("crypto: api - Keep failed instances alive") is
making allocating crypto transforms sometimes fail with ELIBBAD, when
multiple processes try to access encrypted files with fscrypt for the
first time since boot. The problem is that the "request larval" for the
algorithm is being mistaken for an algorithm which failed its tests.
Fix it by only returning ELIBBAD for "non-larval" algorithms. Also
don't leak a reference to the algorithm.
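A simplified sketch of the fixed lookup logic (not the literal code):

    alg = __crypto_alg_lookup(name, type | CRYPTO_ALG_TESTED,
                              mask | CRYPTO_ALG_TESTED);
    if (!alg) {
        alg = __crypto_alg_lookup(name, type, mask);
        if (alg && !crypto_is_larval(alg)) {
            /* a real algorithm that failed its self-tests */
            crypto_mod_put(alg);          /* and don't leak the reference */
            alg = ERR_PTR(-ELIBBAD);
        }
        /* a larval only means registration/testing is still in progress */
    }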
Fixes: eb02c38f01 ("crypto: api - Keep failed instances alive")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'kbuild-v4.17-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull more Kbuild updates from Masahiro Yamada:
- pass HOSTLDFLAGS when compiling single .c host programs
- build genksyms lexer and parser files instead of using shipped
versions
- rename *-asn1.[ch] to *.asn1.[ch] for suffix consistency
- let the top .gitignore globally ignore artifacts generated by flex,
bison, and asn1_compiler
- let the top Makefile globally clean artifacts generated by flex,
bison, and asn1_compiler
- use safer .SECONDARY marker instead of .PRECIOUS to prevent
intermediate files from being removed
- support -fmacro-prefix-map option to make __FILE__ a relative path
- fix # escaping to prepare for the future GNU Make release
- clean up deb-pkg by using debian tools instead of handrolled
source/changes generation
- improve rpm-pkg portability by supporting kernel-install as a
fallback of new-kernel-pkg
- extend Kconfig listnewconfig target to provide more information
* tag 'kbuild-v4.17-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
kconfig: extend output of 'listnewconfig'
kbuild: rpm-pkg: use kernel-install as a fallback for new-kernel-pkg
Kbuild: fix # escaping in .cmd files for future Make
kbuild: deb-pkg: split generating packaging and build
kbuild: use -fmacro-prefix-map to make __FILE__ a relative path
kbuild: mark $(targets) as .SECONDARY and remove .PRECIOUS markers
kbuild: rename *-asn1.[ch] to *.asn1.[ch]
kbuild: clean up *-asn1.[ch] patterns from top-level Makefile
.gitignore: move *-asn1.[ch] patterns to the top-level .gitignore
kbuild: add %.dtb.S and %.dtb to 'targets' automatically
kbuild: add %.lex.c and %.tab.[ch] to 'targets' automatically
genksyms: generate lexer and parser during build instead of shipping
kbuild: clean up *.lex.c and *.tab.[ch] patterns from top-level Makefile
.gitignore: move *.lex.c *.tab.[ch] patterns to the top-level .gitignore
kbuild: use HOSTLDFLAGS for single .c executables
Merge tag 'mips_4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/mips
Pull MIPS updates from James Hogan:
"These are the main MIPS changes for 4.17. Rough overview:
(1) generic platform: Add support for Microsemi Ocelot SoCs
(2) crypto: Add CRC32 and CRC32C HW acceleration module
(3) Various cleanups and misc improvements
More detailed summary:
Miscellaneous:
- hang more efficiently on halt/powerdown/restart
- pm-cps: Block system suspend when a JTAG probe is present
- expand make help text for generic defconfigs
- refactor handling of legacy defconfigs
- determine the entry point from the ELF file header to fix microMIPS
for certain toolchains
- introduce isa-rev.h for MIPS_ISA_REV and use to simplify other code
Minor cleanups:
- DTS: boston/ci20: Unit name cleanups and correction
- kdump: Make the default for PHYSICAL_START always 64-bit
- constify gpio_led in Alchemy, AR7, and TXX9
- silence a couple of W=1 warnings
- remove duplicate includes
Platform support:
Generic platform:
- add support for Microsemi Ocelot
- dt-bindings: Add vendor prefix for Microsemi Corporation
- dt-bindings: Add bindings for Microsemi SoCs
- add ocelot SoC & PCB123 board DTS files
- MAINTAINERS: Add entry for Microsemi MIPS SoCs
- enable crc32-mips on r6 configs
ath79:
- fix AR724X_PLL_REG_PCIE_CONFIG offset
BCM47xx:
- firmware: Use mac_pton() for MAC address parsing
- add Luxul XAP1500/XWR1750 WiFi LEDs
- use standard reset button for Luxul XWR-1750
BMIPS:
- enable CONFIG_BRCMSTB_PM in bmips_stb_defconfig for build coverage
- add STB PM, wake-up timer, watchdog DT nodes
Octeon:
- drop '.' after newlines in printk calls
ralink:
- pci-mt7621: Enable PCIe on MT7688"
* tag 'mips_4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/mips: (37 commits)
MIPS: BCM47XX: Use standard reset button for Luxul XWR-1750
MIPS: BCM47XX: Add Luxul XAP1500/XWR1750 WiFi LEDs
MIPS: Make the default for PHYSICAL_START always 64-bit
MIPS: Use the entry point from the ELF file header
MAINTAINERS: Add entry for Microsemi MIPS SoCs
MIPS: generic: Add support for Microsemi Ocelot
MIPS: mscc: Add ocelot PCB123 device tree
MIPS: mscc: Add ocelot dtsi
dt-bindings: mips: Add bindings for Microsemi SoCs
dt-bindings: Add vendor prefix for Microsemi Corporation
MIPS: ath79: Fix AR724X_PLL_REG_PCIE_CONFIG offset
MIPS: pci-mt7620: Enable PCIe on MT7688
MIPS: pm-cps: Block system suspend when a JTAG probe is present
MIPS: VDSO: Replace __mips_isa_rev with MIPS_ISA_REV
MIPS: BPF: Replace __mips_isa_rev with MIPS_ISA_REV
MIPS: cpu-features.h: Replace __mips_isa_rev with MIPS_ISA_REV
MIPS: Introduce isa-rev.h to define MIPS_ISA_REV
MIPS: Hang more efficiently on halt/powerdown/restart
FIRMWARE: bcm47xx_nvram: Replace mac address parsing
MIPS: BMIPS: Add Broadcom STB watchdog nodes
...
syzbot reported :
BUG: KMSAN: uninit-value in alg_bind+0xe3/0xd90 crypto/af_alg.c:162
We need to check addr_len before dereferencing sa (or uaddr)
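The added guard, roughly:

    struct sockaddr_alg *sa = (void *)uaddr;

    if (addr_len < sizeof(*sa))
        return -EINVAL;   /* don't read salg_type/salg_name past the copy */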
Fixes: bb30b8848c ("crypto: af_alg - whitelist mask and type")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: Stephan Mueller <smueller@chronox.de>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Our convention is to distinguish file types by suffixes with a period
as a separator.
*-asn1.[ch] is a different pattern from other generated sources such
as *.lex.c, *.tab.[ch], *.dtb.S, etc. More confusingly, files with
'-asn1.[ch]' are generated files, but '_asn1.[ch]' are checked-in
files:
net/netfilter/nf_conntrack_h323_asn1.c
include/linux/netfilter/nf_conntrack_h323_asn1.h
include/linux/sunrpc/gss_asn1.h
Rename generated files to *.asn1.[ch] for consistency.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Pull crypto updates from Herbert Xu:
"API:
- add AEAD support to crypto engine
- allow batch registration in simd
Algorithms:
- add CFB mode
- add speck block cipher
- add sm4 block cipher
- new test case for crct10dif
- improve scheduling latency on ARM
- scatter/gather support to gcm in aesni
- convert x86 crypto algorithms to skcipher
Drivers:
- hmac(sha224/sha256) support in inside-secure
- aes gcm/ccm support in stm32
- stm32mp1 support in stm32
- ccree driver from staging tree
- gcm support over QI in caam
- add ks-sa hwrng driver"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (212 commits)
crypto: ccree - remove unused enums
crypto: ahash - Fix early termination in hash walk
crypto: brcm - explicitly cast cipher to hash type
crypto: talitos - don't leak pointers to authenc keys
crypto: qat - don't leak pointers to authenc keys
crypto: picoxcell - don't leak pointers to authenc keys
crypto: ixp4xx - don't leak pointers to authenc keys
crypto: chelsio - don't leak pointers to authenc keys
crypto: caam/qi - don't leak pointers to authenc keys
crypto: caam - don't leak pointers to authenc keys
crypto: lrw - Free rctx->ext with kzfree
crypto: talitos - fix IPsec cipher in length
crypto: Deduplicate le32_to_cpu_array() and cpu_to_le32_array()
crypto: doc - clarify hash callbacks state machine
crypto: api - Keep failed instances alive
crypto: api - Make crypto_alg_lookup static
crypto: api - Remove unused crypto_type lookup function
crypto: chelsio - Remove declaration of static function from header
crypto: inside-secure - hmac(sha224) support
crypto: inside-secure - hmac(sha256) support
...
When we have an unaligned SG list entry where there is no leftover
aligned data, the hash walk code will incorrectly return zero as if
the entire SG list has been processed.
This patch fixes it by moving onto the next page instead.
Reported-by: Eli Cooper <elicooper@gmx.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The buffer rctx->ext contains potentially sensitive data and should
be freed with kzfree.
Cc: <stable@vger.kernel.org>
Fixes: 700cb3f5fe ("crypto: lrw - Convert to skcipher")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Deduplicate le32_to_cpu_array() and cpu_to_le32_array() by moving them
to the generic header.
No functional change implied.
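For reference, the shared helpers are essentially:

    static inline void le32_to_cpu_array(u32 *buf, unsigned int words)
    {
        while (words--) {
            __le32_to_cpus(buf);
            buf++;
        }
    }

    static inline void cpu_to_le32_array(u32 *buf, unsigned int words)
    {
        while (words--) {
            __cpu_to_le32s(buf);
            buf++;
        }
    }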
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch reverts commit 9c521a200b ("crypto: api - remove
instance when test failed") and fixes the underlying problem
in a different way.
To recap, prior to the reverted commit, an instance that fails
a self-test is kept around. However, it would satisfy any new
lookups against its name and therefore the system may accumulate
an unbounded number of failed instances for the same algorithm
name.
The reverted commit fixed it by unregistering the instance. However,
this still does not prevent the creation of the same failed instance
over and over again each time the name is looked up.
This patch fixes it by keeping the failed instance around, just as
we would if it were a normal algorithm. However, the lookup code
has been updated so that we do not attempt to create another
instance as long as this failed one is still registered. Of course,
you could still force a new creation by deleting the instance from
user-space.
A new error (ELIBBAD) has been commandeered for this purpose and
will be returned when all registered algorithms of a given name
have failed the self-test.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function crypto_alg_lookup is only used within the crypto API
and should not be exported to the modules. This patch marks
it as a static function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The lookup function in crypto_type was only used for the implicit
IV generators which have been completely removed from the crypto
API.
This patch removes the lookup function as it is now useless.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In order to be able to test yield support under preempt, add a test
vector for CRC-T10DIF that is long enough to take multiple iterations
(and thus possible preemption between them) of the primary loop of the
accelerated x86 and arm64 implementations.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On the quest to remove all VLAs from the kernel[1], this switches to
a pair of kmalloc regions instead of using the stack. This also moves
the get_random_bytes() after all allocations (and drops the needless
"nbytes" variable).
[1] https://lkml.org/lkml/2018/3/7/621
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Introduce the SM4 cipher algorithms (OSCCA GB/T 32907-2016).
SM4 (GBT.32907-2016) is a cryptographic standard issued by the
Organization of State Commercial Administration of China (OSCCA)
as an authorized cryptographic algorithm for use within China.
SMS4 was originally created for use in protecting wireless
networks, and is mandated in the Chinese National Standard for
Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure)
(GB.15629.11-2003).
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Apparently the ecdh use case was in bluetooth which always has single
element scatterlists, so the ecdh module was hard coded to expect
them. Now that we're using this in TPM, we need multi-element
scatterlists, so remove this limitation.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
TPM security routines require encryption and decryption with AES in
CFB mode, so add it to the Linux Crypto schemes. CFB is basically a
one time pad where the pad is generated initially from the encrypted
IV and then subsequently from the encrypted previous block of
ciphertext. The pad is XOR'd into the plain text to get the final
ciphertext.
https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#CFB
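Conceptually (an illustrative sketch of full-block CFB encryption, not
the kernel's skcipher code; encrypt_block() stands in for E_k):

    static void cfb_encrypt_sketch(const void *key, const u8 *iv,
                                   const u8 *pt, u8 *ct,
                                   unsigned int nblocks, unsigned int bs)
    {
        u8 pad[16];
        unsigned int i, j;

        memcpy(pad, iv, bs);
        for (i = 0; i < nblocks; i++) {
            encrypt_block(key, pad, pad);          /* pad = E_k(pad) */
            for (j = 0; j < bs; j++)
                ct[i * bs + j] = pt[i * bs + j] ^ pad[j];
            memcpy(pad, &ct[i * bs], bs);          /* feed back the ciphertext */
        }
    }

Decryption regenerates the same pad stream (using the block cipher in
encryption direction) and XORs it into the ciphertext.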
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All users of ablk_helper have been converted over to crypto_simd, so
remove ablk_helper.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all users of lrw_crypt() have been removed in favor of the LRW
template wrapping an ECB mode algorithm, remove lrw_crypt(). Also
remove crypto/lrw.h as that is no longer needed either; and fold
'struct lrw_table_ctx' into 'struct priv', lrw_init_table() into
setkey(), and lrw_free_table() into exit_tfm().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all users of xts_crypt() have been removed in favor of the XTS
template wrapping an ECB mode algorithm, remove xts_crypt().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the AESNI AVX and AESNI AVX2 implementations of Camellia from
the (deprecated) ablkcipher and blkcipher interfaces over to the
skcipher interface. Note that this includes replacing the use of
ablk_helper with crypto_simd.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 asm implementation of Camellia from the (deprecated)
blkcipher interface over to the skcipher interface.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The XTS template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic XTS code themselves via xts_crypt().
Remove the xts-camellia-asm algorithm which did this. Users who request
xts(camellia) and previously would have gotten xts-camellia-asm will now
get xts(ecb-camellia-asm) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-camellia-asm algorithm which did this. Users who request
lrw(camellia) and previously would have gotten lrw-camellia-asm will now
get lrw(ecb-camellia-asm) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-camellia-aesni-avx2 algorithm which did this. Users who
request lrw(camellia) and previously would have gotten
lrw-camellia-aesni-avx2 will now get lrw(ecb-camellia-aesni-avx2)
instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-camellia-aesni algorithm which did this. Users who
request lrw(camellia) and previously would have gotten
lrw-camellia-aesni will now get lrw(ecb-camellia-aesni) instead, which
is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 asm implementation of Triple DES from the (deprecated)
blkcipher interface over to the skcipher interface.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the x86 asm implementation of Blowfish from the (deprecated)
blkcipher interface over to the skcipher interface.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the AVX implementation of CAST6 from the (deprecated) ablkcipher
and blkcipher interfaces over to the skcipher interface. Note that this
includes replacing the use of ablk_helper with crypto_simd.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-cast6-avx algorithm which did this. Users who request
lrw(cast6) and previously would have gotten lrw-cast6-avx will now get
lrw(ecb-cast6-avx) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the AVX implementation of CAST5 from the (deprecated) ablkcipher
and blkcipher interfaces over to the skcipher interface. Note that this
includes replacing the use of ablk_helper with crypto_simd.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the AVX implementation of Twofish from the (deprecated)
ablkcipher and blkcipher interfaces over to the skcipher interface.
Note that this includes replacing the use of ablk_helper with
crypto_simd.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-twofish-avx algorithm which did this. Users who request
lrw(twofish) and previously would have gotten lrw-twofish-avx will now
get lrw(ecb-twofish-avx) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the 3-way implementation of Twofish from the (deprecated)
blkcipher interface over to the skcipher interface.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The XTS template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic XTS code themselves via xts_crypt().
Remove the xts-twofish-3way algorithm which did this. Users who request
xts(twofish) and previously would have gotten xts-twofish-3way will now
get xts(ecb-twofish-3way) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-twofish-3way algorithm which did this. Users who request
lrw(twofish) and previously would have gotten lrw-twofish-3way will now
get lrw(ecb-twofish-3way) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the AVX and AVX2 implementations of Serpent from the
(deprecated) ablkcipher and blkcipher interfaces over to the skcipher
interface. Note that this includes replacing the use of ablk_helper
with crypto_simd.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-serpent-avx algorithm which did this. Users who request
lrw(serpent) and previously would have gotten lrw-serpent-avx will now
get lrw(ecb-serpent-avx) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-serpent-avx2 algorithm which did this. Users who request
lrw(serpent) and previously would have gotten lrw-serpent-avx2 will now
get lrw(ecb-serpent-avx2) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert the SSE2 implementation of Serpent from the (deprecated)
ablkcipher and blkcipher interfaces over to the skcipher interface.
Note that this includes replacing the use of ablk_helper with
crypto_simd.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The XTS template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic XTS code themselves via xts_crypt().
Remove the xts-serpent-sse2 algorithm which did this. Users who request
xts(serpent) and previously would have gotten xts-serpent-sse2 will now
get xts(ecb-serpent-sse2) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-serpent-sse2 algorithm which did this. Users who request
lrw(serpent) and previously would have gotten lrw-serpent-sse2 will now
get lrw(ecb-serpent-sse2) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a function to crypto_simd that registers an array of skcipher
algorithms, then allocates and registers the simd wrapper algorithms for
them. It assumes the naming scheme where the names of the underlying
algorithms are prefixed with two underscores.
Also add the corresponding 'unregister' function.
Most of the x86 crypto modules will be able to use these.
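Sketch of how a module is expected to use the helpers (the algorithm
array itself is illustrative):

    static struct skcipher_alg my_algs[2];        /* cra_names like "__ecb(foo)" */
    static struct simd_skcipher_alg *my_simd_algs[2];

    static int __init my_mod_init(void)
    {
        /* registers my_algs[] and, for each "__xyz(foo)", allocates and
         * registers an "xyz(foo)" simd wrapper on top of it */
        return simd_register_skciphers_compat(my_algs, ARRAY_SIZE(my_algs),
                                              my_simd_algs);
    }

    static void __exit my_mod_exit(void)
    {
        simd_unregister_skciphers(my_algs, ARRAY_SIZE(my_algs), my_simd_algs);
    }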
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The asymmetric key type allows an X.509 certificate to be added even if
its signature's hash algorithm is not available in the crypto API. In
that case 'payload.data[asym_auth]' will be NULL. But the key
restriction code failed to check for this case before trying to use the
signature, resulting in a NULL pointer dereference in
key_or_keyring_common() or in restrict_link_by_signature().
Fix this by returning -ENOPKG when the signature is unsupported.
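The guard, in simplified form:

    const struct public_key_signature *sig = payload->data[asym_auth];

    if (!sig)
        return -ENOPKG;   /* signature's hash algorithm isn't available */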
Reproducer when all the CONFIG_CRYPTO_SHA512* options are disabled and
keyctl has support for the 'restrict_keyring' command:
keyctl new_session
keyctl restrict_keyring @s asymmetric builtin_trusted
openssl req -new -sha512 -x509 -batch -nodes -outform der \
| keyctl padd asymmetric desc @s
Fixes: a511e1af8b ("KEYS: Move the point of trust determination to __key_link()")
Cc: <stable@vger.kernel.org> # v4.7+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
The X.509 parser mishandles the case where the certificate's signature's
hash algorithm is not available in the crypto API. In this case,
x509_get_sig_params() doesn't allocate the cert->sig->digest buffer;
this part seems to be intentional. However,
public_key_verify_signature() is still called via
x509_check_for_self_signed(), which triggers the 'BUG_ON(!sig->digest)'.
Fix this by making public_key_verify_signature() return -ENOPKG if the
hash buffer has not been allocated.
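That is, roughly, in public_key_verify_signature():

    /* replaces the BUG_ON(!sig->digest) */
    if (!sig->digest)
        return -ENOPKG;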
Reproducer when all the CONFIG_CRYPTO_SHA512* options are disabled:
openssl req -new -sha512 -x509 -batch -nodes -outform der \
| keyctl padd asymmetric desc @s
Fixes: 6c2dc5ae4a ("X.509: Extract signature digest and make self-signed cert checks earlier")
Reported-by: Paolo Valente <paolo.valente@linaro.org>
Cc: Paolo Valente <paolo.valente@linaro.org>
Cc: <stable@vger.kernel.org> # v4.7+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
If none of the certificates in a SignerInfo's certificate chain match a
trusted key, nor is the last certificate signed by a trusted key, then
pkcs7_validate_trust_one() tries to check whether the SignerInfo's
signature was made directly by a trusted key. But it fails to
set the 'sig' variable correctly, so it actually verifies the last
signature seen. That will only be the SignerInfo's signature if the
certificate chain is empty; otherwise it will actually be the last
certificate's signature.
This is not by itself a security problem, since verifying any of the
certificates in the chain should be sufficient to verify the SignerInfo.
Still, it's not working as intended so it should be fixed.
Fix it by setting 'sig' correctly for the direct verification case.
Fixes: 757932e6da ("PKCS#7: Handle PKCS#7 messages that contain no X.509 certs")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
If there is a blacklisted certificate in a SignerInfo's certificate
chain, then pkcs7_verify_sig_chain() sets sinfo->blacklisted and returns
0. But, pkcs7_verify() fails to handle this case appropriately, as it
actually continues on to the line 'actual_ret = 0;', indicating that the
SignerInfo has passed verification. Consequently, PKCS#7 signature
verification ignores the certificate blacklist.
Fix this by not considering blacklisted SignerInfos to have passed
verification.
Also fix the function comment with regards to when 0 is returned.
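The essential change, in simplified form, inside pkcs7_verify()'s loop
over the SignerInfos:

    if (sinfo->blacklisted)
        continue;   /* never reach the "actual_ret = 0" success path */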
Fixes: 03bb79315d ("PKCS#7: Handle blacklisted certificates")
Cc: <stable@vger.kernel.org> # v4.12+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
When pkcs7_verify_sig_chain() is building the certificate chain for a
SignerInfo using the certificates in the PKCS#7 message, it is passing
the wrong arguments to public_key_verify_signature(). Consequently,
when the next certificate is supposed to be used to verify the previous
certificate, the next certificate is actually used to verify itself.
An attacker can use this bug to create a bogus certificate chain that
has no cryptographic relationship between the beginning and end.
Fortunately I couldn't quite find a way to use this to bypass the
overall signature verification, though it comes very close. Here's the
reasoning: due to the bug, every certificate in the chain beyond the
first actually has to be self-signed (where "self-signed" here refers to
the actual key and signature; an attacker might still manipulate the
certificate fields such that the self_signed flag doesn't actually get
set, and thus the chain doesn't end immediately). But to pass trust
validation (pkcs7_validate_trust()), either the SignerInfo or one of the
certificates has to actually be signed by a trusted key. Since only
self-signed certificates can be added to the chain, the only way for an
attacker to introduce a trusted signature is to include a self-signed
trusted certificate.
But, when pkcs7_validate_trust_one() reaches that certificate, instead
of trying to verify the signature on that certificate, it will actually
look up the corresponding trusted key, which will succeed, and then try
to verify the *previous* certificate, which will fail. Thus, disaster
is narrowly averted (as far as I could tell).
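In essence the fix is a one-liner in pkcs7_verify_sig_chain(), shown
schematically (p is the candidate issuer, x509 the certificate being
verified, following the function's local naming):

    /* verify the subject certificate's signature with the issuer's key */
    ret = public_key_verify_signature(p->pub, x509->sig);
    /* previously p->sig was passed, i.e. the issuer verified itself */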
Fixes: 6c2dc5ae4a ("X.509: Extract signature digest and make self-signed cert checks earlier")
Cc: <stable@vger.kernel.org> # v4.7+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>