This patch adds PKCS#1 v1.5 standard RSA padding as a separate template.
This way an RSA cipher with padding can be obtained by instantiating
"pkcs1pad(rsa)". The reason for adding this is that RSA is almost
never used without this padding (or OAEP) so it will be needed for
either certificate work in the kernel or the userspace, and I also hear
that it is likely implemented by hardware RSA in which case hardware
implementations of the whole of pkcs1pad(rsa) can be provided.
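As a rough usage sketch (not part of this patch; error handling and
the request setup are elided), the padded transform is obtained by
name through the usual akcipher API:

	struct crypto_akcipher *tfm;

	/* The template wraps the bare "rsa" implementation. */
	tfm = crypto_alloc_akcipher("pkcs1pad(rsa)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);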
Signed-off-by: Andrew Zaborowski <andrew.zaborowski@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a struct akcipher_instance and struct akcipher_spawn similar to
how AEAD declares them and the macros for converting to/from
crypto_instance/crypto_spawn. Also add register functions to
avoid exposing crypto_akcipher_type.
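A hedged sketch of how a template might use the new helpers (the
allocation and setup details are placeholders, not from this patch):

	static int example_create(struct crypto_template *tmpl,
				  struct rtattr **tb)
	{
		struct akcipher_instance *inst;

		inst = kzalloc(sizeof(*inst), GFP_KERNEL);
		if (!inst)
			return -ENOMEM;

		/* ... set up the spawn, instance name and akcipher_alg
		 * here; akcipher_crypto_instance() converts to the
		 * generic crypto_instance view and akcipher_instance()
		 * converts back ...
		 */

		return akcipher_register_instance(tmpl, inst);
	}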
Signed-off-by: Andrew Zaborowski <andrew.zaborowski@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The setkey function has been split into set_priv_key and set_pub_key.
Akcipher requests now take scatterlists for src and dst instead of
void *. Users of the API, i.e. the two existing RSA implementations
and the testmgr code, have been updated accordingly.
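A hedged sketch of the updated calling convention (key, buffers and
lengths are placeholders):

	struct scatterlist src, dst;
	int err;

	err = crypto_akcipher_set_pub_key(tfm, key, keylen);
	/* or: crypto_akcipher_set_priv_key(tfm, key, keylen); */

	sg_init_one(&src, in_buf, in_len);
	sg_init_one(&dst, out_buf, out_len);
	akcipher_request_set_crypt(req, &src, &dst, in_len, out_len);
	err = crypto_akcipher_encrypt(req);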
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch introduces the crypto skcipher interface which aims
to replace both blkcipher and ablkcipher.
It's very similar to the existing ablkcipher interface. The
main difference is the removal of the givcrypt interface. In
order to make the transition easier for blkcipher users, there
is a helper SKCIPHER_REQUEST_ON_STACK which can be used to place
a request on the stack for synchronous transforms.
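For a synchronous transform the helper can be used roughly like this
(a sketch; src, dst, len and iv are placeholders):

	SKCIPHER_REQUEST_ON_STACK(req, tfm);
	int err;

	skcipher_request_set_tfm(req, tfm);
	skcipher_request_set_callback(req, 0, NULL, NULL);
	skcipher_request_set_crypt(req, src, dst, len, iv);

	err = crypto_skcipher_encrypt(req);
	skcipher_request_zero(req);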
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the AEAD conversion is complete we can rip out the old
AEAD interface and associated code.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helpers aead_init_geniv and aead_exit_geniv
which are type-safe and intended to replace the existing geniv
init/exit helpers.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a type-safe function for freeing AEAD instances
to struct aead_instance. This replaces the existing free function
in struct crypto_template which does not know the type of the
instance that it's freeing.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently we free the default RNG when its use count hits zero.
This was OK when the IV generators would latch onto the RNG at
instance creation time and keep it until the instance is torn
down.
Now that IV generators only keep the RNG reference during init
time, this scheme causes the default RNG to come and go at a high
frequency. This is highly undesirable as we want to keep a single
RNG in use unless the admin wants it to be removed.
This patch changes the scheme so that the system RNG, once allocated,
is never removed unless specifically requested.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The struct aead_instance is meant to extend struct crypto_instance
by incorporating the extra members of struct aead_alg. However,
the current layout which is copied from shash/ahash does not specify
the struct fully. In particular only aead_alg is present.
For shash/ahash this works because users there add extra headroom
to sizeof(struct crypto_instance) when allocating the instance.
Unfortunately for aead, this bit was lost when the new aead_instance
was added.
Rather than fixing it like shash/ahash, this patch simply expands
struct aead_instance to contain what is supposed to be there, i.e.,
adding struct crypto_instance.
In order to not break existing AEAD users, this is done through an
anonymous union.
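The resulting layout then looks roughly like this (a simplified
sketch, not necessarily the verbatim declaration):

	struct aead_instance {
		union {
			/* generic view: pad up to where aead_alg embeds
			 * its struct crypto_alg, then overlay the
			 * crypto_instance on top of it
			 */
			struct {
				char head[offsetof(struct aead_alg, base)];
				struct crypto_instance base;
			} s;
			/* aead view */
			struct aead_alg alg;
		};
	};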
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a new generic software RSA implementation.
It implements only the cryptographic primitives.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Added select on ASN1.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add Public Key Encryption API.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Made CRYPTO_AKCIPHER invisible like other type config options.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
No new code should be using the return value of crypto_unregister_alg
as it will become void soon.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that type-safe init/exit functions exist, they often need
to access the underlying aead_instance. So this patch adds the
helper aead_alg_instance to access aead_instance from a crypto_aead
object.
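A hedged usage sketch (the context type is a placeholder):

	static int example_aead_init(struct crypto_aead *tfm)
	{
		struct aead_instance *inst = aead_alg_instance(tfm);
		struct example_ctx *ictx = aead_instance_ctx(inst);

		/* ... use the instance context to set up the tfm ... */
		return 0;
	}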
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds some common IV generation code currently duplicated
by seqiv and echainiv. For example, the setkey and setauthsize
functions are completely identical.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
AEAD algorithm implementors need to figure out a given algorithm's
IV size and maximum authentication size. During the transition
this is difficult to do as an algorithm could be new style or old
style.
This patch creates two helpers to make this easier.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the seqiv IV generator to work with the new
AEAD interface where IV generators are just normal AEAD algorithms.
Full backwards compatibility is paramount at this point since
no users have yet switched over to the new interface. Nor can
they switch to the new interface until IV generation is fully
supported by it.
So this means we are adding two versions of seqiv alongside the
existing one. The first one is the one that will be used when
the underlying AEAD algorithm has switched over to the new AEAD
interface. The second one handles the current case where the
underlying AEAD algorithm still uses the old interface.
Both versions export themselves through the new AEAD interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the basic structure of the new AEAD type. Unlike
the current version, there is no longer any concept of geniv. IV
generation will still be carried out by wrappers but they will be
normal AEAD algorithms that simply take the IPsec sequence number
as the IV.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper crypto_aead_maxauthsize to remove the
need to directly dereference aead_alg internals by AEAD implementors.
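For example, an implementor can now write something like the
following instead of reaching into aead_alg (a sketch):

	if (authsize > crypto_aead_maxauthsize(aead))
		return -EINVAL;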
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch is the first step in the introduction of a new AEAD
alg type. Unlike normal conversions this patch only renames the
existing aead_alg structure because there are external references
to it.
Those references will be removed after this patch.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the top-level aead interface to the new style.
All user-level AEAD interface code has been moved into crypto/aead.h.
The allocation/free functions have switched over to the new way of
allocating tfms.
This patch also removes the double indirection on setkey so the
indirection now exists only at the alg level.
Apart from these there are no user-visible changes.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper crypto_aead_set_reqsize so that people
don't have to directly access the aead internals to set the reqsize.
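A hedged sketch, called from an implementation's init function
(struct example_req_ctx is a placeholder per-request context):

	crypto_aead_set_reqsize(aead, sizeof(struct example_req_ctx));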
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all rng implementations have switched over to the new
interface, we can remove the old low-level interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helpers that allow the registration and removal
of multiple RNG algorithms.
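A hedged usage sketch, where example_rng_algs is a placeholder array
of struct rng_alg:

	/* registration, typically from module init: */
	err = crypto_register_rngs(example_rng_algs,
				   ARRAY_SIZE(example_rng_algs));

	/* and removal, typically from module exit: */
	crypto_unregister_rngs(example_rng_algs,
			       ARRAY_SIZE(example_rng_algs));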
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the function crypto_rng_set_entropy. It is only
meant to be used by testmgr when testing RNG implementations by
providing fixed entropy data in order to verify test vectors.
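A hedged sketch of the intended testmgr-style use (the vector fields
are placeholders):

	/* inject the fixed entropy, then reseed with it */
	crypto_rng_set_entropy(tfm, vec->entropy, vec->entropy_len);
	err = crypto_rng_reset(tfm, vec->pers, vec->pers_len);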
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts the low-level crypto_rng interface to the
"new" style.
This allows existing implementations to be converted over one-
by-one. Once that is complete we can then remove the old rng
interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch introduces the multi-buffer crypto daemon which is responsible
for submitting crypto jobs in a work queue to the responsible multi-buffer
crypto algorithm. The idea of the multi-buffer algorithm is to put
data streams from multiple jobs in a wide (AVX2) register and then
take advantage of SIMD instructions to do crypto computation on several
buffers simultaneously.
The multi-buffer crypto daemon is also responsible for flushing the
remaining buffers to complete the computation if no new buffers arrive
for a while.
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use skcipher_givcrypt_cast(crypto_dequeue_request(queue)) instead, which
does the same thing in a much cleaner way. skcipher_givcrypt_cast()
actually uses container_of() instead of messing around with offsetof(),
too.
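For reference, the resulting pattern is simply (a sketch):

	struct skcipher_givcrypt_request *req;

	req = skcipher_givcrypt_cast(crypto_dequeue_request(queue));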
Signed-off-by: Marek Vasut <marex@denx.de>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Cc: Pantelis Antoniou <panto@antoniou-consulting.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Although the existing hash walk interface has already been used
by a number of ahash crypto drivers, it turns out that none of
them were really asynchronous. They were all essentially polling
for completion.
That's why nobody has noticed until now that the walk interface
couldn't work with a real asynchronous driver since the memory
is mapped using kmap_atomic.
As we now have a use-case for a real ahash implementation on x86,
this patch creates a minimal ahash walk interface. Basically it
just calls kmap instead of kmap_atomic and does away with the
crypto_yield call. Real ahash crypto drivers don't need to yield
since by definition they won't be hogging the CPU.
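A hedged sketch of the walk loop from a driver's point of view
(process_chunk() is a placeholder for the driver's own logic):

	struct crypto_hash_walk walk;
	int nbytes;

	for (nbytes = crypto_ahash_walk_first(req, &walk); nbytes > 0;
	     nbytes = crypto_ahash_walk_done(&walk, 0)) {
		/* walk.data is mapped with kmap() here, so the buffer
		 * stays usable across a sleep in a truly async driver
		 */
		process_chunk(walk.data, nbytes);
	}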
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add crypto_[un]register_shashes() to allow simplifying init/exit code of shash
crypto modules that register multiple algorithms.
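A hedged sketch of the simplified module init/exit (example_algs is
a placeholder array of struct shash_alg):

	static int __init example_mod_init(void)
	{
		return crypto_register_shashes(example_algs,
					       ARRAY_SIZE(example_algs));
	}

	static void __exit example_mod_exit(void)
	{
		crypto_unregister_shashes(example_algs,
					  ARRAY_SIZE(example_algs));
	}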
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We look up algorithms with crypto_alg_mod_lookup() when instantiating via
crypto_add_alg(). However, algorithms that are wrapped by an IV generator
(e.g. aead or geniv type algorithms) need special care. The userspace
process hangs until it gets a timeout when we use crypto_alg_mod_lookup()
to look up these algorithms. So export the lookup functions for these
algorithms and use them in crypto_add_alg().
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (102 commits)
crypto: sha-s390 - Fix warnings in import function
crypto: vmac - New hash algorithm for intel_txt support
crypto: api - Do not displace newly registered algorithms
crypto: ansi_cprng - Fix module initialization
crypto: xcbc - Fix alignment calculation of xcbc_tfm_ctx
crypto: fips - Depend on ansi_cprng
crypto: blkcipher - Do not use eseqiv on stream ciphers
crypto: ctr - Use chainiv on raw counter mode
Revert crypto: fips - Select CPRNG
crypto: rng - Fix typo
crypto: talitos - add support for 36 bit addressing
crypto: talitos - align locks on cache lines
crypto: talitos - simplify hmac data size calculation
crypto: mv_cesa - Add support for Orion5X crypto engine
crypto: cryptd - Add support to access underlaying shash
crypto: gcm - Use GHASH digest algorithm
crypto: ghash - Add GHASH digest algorithm for GCM
crypto: authenc - Convert to ahash
crypto: api - Fix aligned ctx helper
crypto: hmac - Prehash ipad/opad
...
As struct skcipher_givcrypt_request includes struct crypto_request
at a non-zero offset, testing for NULL after converting the pointer
returned by crypto_dequeue_request does not work. This can result
in IPsec crashes when the queue is depleted.
This patch fixes it by doing the pointer conversion only when the
return value is non-NULL. In particular, we create a new function
__crypto_dequeue_request that does the pointer conversion.
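A sketch of the failure mode and the fix (simplified; the offset
arithmetic stands in for what container_of() generates):

	/* Broken: converting first means an empty queue's NULL return
	 * becomes NULL minus a non-zero offset, which is not NULL, so
	 * this check never fires:
	 */
	req = skcipher_givcrypt_cast(crypto_dequeue_request(queue));
	if (!req)
		return NULL;

	/* Fixed: __crypto_dequeue_request(queue, offset) dequeues
	 * first and applies the offset only to a non-NULL result.
	 */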
Reported-by: Brad Bosch <bradbosch@comcast.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch exports the finup operation where available and adds
a default finup operation for ahash. The operations final, finup
and digest will now also deal with unaligned result pointers by
copying the result. Finally, export/import operations will now be
exported too.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that all ahash implementations have been converted to the new
ahash type, we can remove old_ahash_alg and its associated support.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes crypto4xx to use the new style ahash type.
In particular, we now use ahash_alg to define ahash algorithms
instead of crypto_alg.
This is achieved by introducing a union that encapsulates the
new type and the existing crypto_alg structure. They're told
apart through a u32 field containing the type value.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helpers crypto_drop_ahash and crypto_drop_shash
so that these spawns can be dropped without ugly casts.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch converts crypto_ahash to the new style. The old ahash
algorithm type is retained until the existing ahash implementations
are also converted. All ahash users will automatically get the
new crypto_ahash type.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper crypto_ahash_set_reqsize so that
implementations do not directly access the crypto_ahash structure.
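A hedged sketch (struct example_req_ctx is a placeholder):

	static int example_cra_init(struct crypto_tfm *tfm)
	{
		crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
					 sizeof(struct example_req_ctx));
		return 0;
	}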
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch exports the async functions so that they can be reused
by cryptd when it switches over to using shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes descsize to a run-time attribute so that
implementations can change it in their init functions.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper shash_instance_ctx which is the shash
analogue of crypto_instance_ctx.
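A hedged one-line sketch (the context type is a placeholder):

	struct example_inst_ctx *ctx = shash_instance_ctx(inst);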
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds __crypto_shash_cast which turns a crypto_tfm
into crypto_shash. It's analogous to the other __crypto_*_cast
functions.
It hasn't been needed until now since no existing shash algorithms
have had an init function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds crypto_shash_ctx_aligned which will be needed
by hmac after its conversion to shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds shash_register_instance so that shash instances
can be registered without bypassing the shash checks applied to
normal algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the helper shash_attr_alg2 which locates a shash
algorithm based on the information in the given attribute.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds the functions needed to create and use shash
spawns, i.e., to use shash algorithms in a template.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds shash_instance and the associated alloc/free
functions. This is meant to be an instance with a shash
algorithm under it. Note that the instance itself doesn't have
to be shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>