Commit Graph

51 Commits

Author SHA1 Message Date
Linus Torvalds 2d706e790f Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fix from Herbert Xu:
 "This fixes a hash corruption bug in the marvell driver"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: marvell - Copy IVDIG before launching partial DMA ahash requests
2016-12-27 17:51:36 -08:00
Romain Perier 8759fec4af crypto: marvell - Copy IVDIG before launching partial DMA ahash requests
Currently, inner IV/DIGEST data are only copied once into the hash
engines and not set explicitly before launching a request that is not a
first frag. This is an issue especially when multiple ahash reqs are
computed in parallel or chained with a cipher request, as the state of
the request being computed is not updated in the hash engine. It leads
to non-deterministic, corrupted digest results.
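
A hedged sketch of the idea (not the exact upstream diff; the hook point
and the CESA_IVDIG() register helper are assumptions based on the
driver layout):

    /* Restore the partial IV/DIGEST before launching a non-first
     * fragment, so the engine continues from this request's own state.
     */
    if (!mv_cesa_mac_op_is_first_frag(&creq->op_tmpl)) {
        unsigned int i;

        for (i = 0; i < ARRAY_SIZE(creq->state); i++)
            writel_relaxed(creq->state[i],
                           engine->regs + CESA_IVDIG(i));
    }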

Fixes: commit 2786cee8e5 ("crypto: marvell - Move SRAM I/O operations to step functions")
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-12-16 19:59:39 +08:00
Linus Torvalds 0f1d6dfe03 Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
 "Here is the crypto update for 4.10:

  API:
   - add skcipher walk interface
   - add asynchronous compression (acomp) interface
   - fix algif_aead AIO handling of zero buffer

  Algorithms:
   - fix unaligned access in poly1305
   - fix DRBG output to large buffers

  Drivers:
   - add support for iMX6UL to caam
   - fix givenc descriptors (used by IPsec) in caam
   - accelerated SHA256/SHA512 for ARM64 from OpenSSL
   - add SSE CRCT10DIF and CRC32 to ARM/ARM64
   - add AEAD support to Chelsio chcr
   - add Armada 8K support to omap-rng"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (148 commits)
  crypto: testmgr - fix overlap in chunked tests again
  crypto: arm/crc32 - accelerated support based on x86 SSE implementation
  crypto: arm64/crc32 - accelerated support based on x86 SSE implementation
  crypto: arm/crct10dif - port x86 SSE implementation to ARM
  crypto: arm64/crct10dif - port x86 SSE implementation to arm64
  crypto: testmgr - add/enhance test cases for CRC-T10DIF
  crypto: testmgr - avoid overlap in chunked tests
  crypto: chcr - checking for IS_ERR() instead of NULL
  crypto: caam - check caam_emi_slow instead of re-lookup platform
  crypto: algif_aead - fix AIO handling of zero buffer
  crypto: aes-ce - Make aes_simd_algs static
  crypto: algif_skcipher - set error code when kcalloc fails
  crypto: caam - make caamalg_desc a proper module
  crypto: caam - pass key buffers with typesafe pointers
  crypto: arm64/aes-ce-ccm - Fix AEAD decryption length
  MAINTAINERS: add crypto headers to crypto entry
  crypto: doc - remove misleading mention of async API
  crypto: doc - fix header file name
  crypto: api - fix comment typo
  crypto: skcipher - Add separate walker for AEAD decryption
  ...
2016-12-14 13:31:29 -08:00
Romain Perier 9e5f7a149e crypto: marvell - Don't corrupt state of an STD req for re-stepped ahash
mv_cesa_hash_std_step() copies the creq->state into the SRAM at each
step, but this is only required on the first one. By doing that, we
overwrite the engine state, and get erroneous results when the crypto
request is split in several chunks to fit in the internal SRAM.

This commit changes the function to copy the state only on the first
step.

Fixes: commit 2786cee8e5 ("crypto: marvell - Move SRAM I/O op...")
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-12-07 19:55:17 +08:00
Romain Perier 68c7f8c1c4 crypto: marvell - Don't copy hash operation twice into the SRAM
No need to copy the template of a hash operation twice into the SRAM
from the step function.

Fixes: commit 85030c5168 ("crypto: marvell - Add support for chai...")
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-12-07 19:55:16 +08:00
Romain Perier f34dad1721 crypto: marvell - Don't break chain for computable last ahash requests
Currently, the driver breaks the chain for all kinds of hash requests in
order not to override intermediate states of partial ahash updates.
However, some final ahash requests can be directly processed by the
engine, and so without intermediate state. This is typically the case for
most of the HMAC requests processed via IPsec.

This commit adds a TDMA descriptor to copy the context of such requests
into the "op" dma pool, then allows these requests to be chained at the
DMA level. The 'complete' operation is also updated to retrieve the MAC
digest from the right location.
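
A heavily hedged sketch of the chaining decision;
mv_cesa_ahash_req_has_partial_state() is a hypothetical predicate, and
the flag/field names are borrowed from the chaining support added
earlier in this series:

    /* Only break the DMA chain for requests that carry intermediate
     * state the engine cannot keep; a final request the engine can
     * compute on its own stays chainable.
     */
    if (mv_cesa_ahash_req_has_partial_state(creq))
        creq->base.chain.last->flags |= CESA_TDMA_BREAK_CHAIN;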

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-10-21 11:03:40 +08:00
Romain Perier 4785620414 crypto: marvell - Don't hardcode block size in mv_cesa_ahash_cache_req
Don't use 64 'as is' as the max block size in mv_cesa_ahash_cache_req().
Use CESA_MAX_HASH_BLOCK_SIZE instead; this is better for readability.

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-08-09 18:47:31 +08:00
Romain Perier 57cfda1ac7 crypto: marvell - Don't overwrite default creq->state during initialization
Currently, in mv_cesa_{md5,sha1,sha256}_init creq->state is initialized
before the call to mv_cesa_ahash_init. This is wrong because that
function fills creq with zeros using memset, so the 'state' field that
contains the default DIGEST is overwritten. This commit fixes the issue
by initializing creq->state just after the call to mv_cesa_ahash_init.
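
A hedged sketch of the corrected ordering in mv_cesa_md5_init() (the
mv_cesa_ahash_init() signature is taken from this series; treat the
snippet as illustrative):

    mv_cesa_ahash_init(req, &tmpl, true);   /* zeroes creq via memset */

    /* Seed the default MD5 digest only after the memset. */
    creq->state[0] = MD5_H0;
    creq->state[1] = MD5_H1;
    creq->state[2] = MD5_H2;
    creq->state[3] = MD5_H3;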

Fixes: commit b0ef51067c ("crypto: marvell/cesa - initialize hash...")
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-08-09 18:47:31 +08:00
Thomas Petazzoni 6dc156f453 crypto: marvell - make mv_cesa_ahash_cache_req() return bool
The mv_cesa_ahash_cache_req() function always returns 0, which makes
its return value pretty much useless. However, in addition to
returning a useless value, it also returns a boolean in a variable
passed by reference to indicate if the request was already cached.

So, this commit changes mv_cesa_ahash_cache_req() to return this
boolean. It consequently simplifies the only call site of
mv_cesa_ahash_cache_req(), where the "ret" variable is no longer
needed.
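
A hedged sketch of the simplified call site (illustrative, not the
exact diff):

    /* Before: mv_cesa_ahash_cache_req(req, &cached) returned 0 and set
     * 'cached' by reference; now the boolean is returned directly.
     */
    if (mv_cesa_ahash_cache_req(req))
        return 0;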

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-08-09 18:47:30 +08:00
Thomas Petazzoni 3e5c66c9c3 crypto: marvell - turn mv_cesa_ahash_init() into a function returning void
The mv_cesa_ahash_init() function always returns 0, and the return
value is anyway never checked. Turn it into a function returning void.

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-08-09 18:47:29 +08:00
Thomas Petazzoni 2a8a78573b crypto: marvell - remove unused parameter in mv_cesa_ahash_dma_add_cache()
The dma_iter parameter of mv_cesa_ahash_dma_add_cache() is never used,
so get rid of it.

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-08-09 18:47:29 +08:00
Romain Perier 64ec6ccb76 crypto: marvell - Update cache with input sg only when it is unmapped
So far, the cache of the ahash requests was updated from the 'complete'
operation. This complete operation is called from mv_cesa_tdma_process
before the cleanup operation, which means that the content of req->src
can be read and copied when it is still mapped. This commit fixes the
issue by moving this cache update from mv_cesa_ahash_complete to
mv_cesa_ahash_req_cleanup, so the copy is done once the sglist is
unmapped.

Fixes: 1bf6682cb3 ("crypto: marvell - Add a complete operation for..")
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-28 13:04:44 +08:00
Romain Perier 85030c5168 crypto: marvell - Add support for chaining crypto requests in TDMA mode
The Cryptographic Engines and Security Accelerators (CESA) supports the
Multi-Packet Chain Mode. With this mode enabled, multiple tdma requests
can be chained and processed by the hardware without software
intervention. This mode was already activated; however, the crypto
requests were not chained together. By doing so, we significantly reduce
the number of IRQs. Instead of being interrupted at the end of each
crypto request, we are interrupted at the end of the last cryptographic
request processed by the engine.

This commit refactors the code, changes the code architecture and
adds the required data structures to chain cryptographic requests
together before sending them to an engine (stopped or possibly already
running).

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23 18:29:51 +08:00
Romain Perier bf8f91e711 crypto: marvell - Add load balancing between engines
This commit adds support for fine-grained load balancing on
multi-engine IPs. The engine is pre-selected based on its current load
and on the weight of the crypto request that is about to be processed.
The global crypto queue is also moved to each engine. These changes are
required to allow chaining crypto requests at the DMA level. By using
a crypto queue per engine, we make sure that we keep the state of the
tdma chain synchronized with the crypto queue. We also reduce contention
on 'cesa_dev->lock' and improve parallelism.
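
A hedged sketch of the pre-selection logic (field and helper names
follow the driver as far as known, but should be treated as
assumptions):

    static struct mv_cesa_engine *mv_cesa_select_engine(int weight)
    {
        struct mv_cesa_engine *selected = NULL;
        int i;

        /* Pick the least loaded engine, then charge it with the
         * weight of the incoming crypto request.
         */
        for (i = 0; i < cesa_dev->caps->nengines; i++) {
            struct mv_cesa_engine *engine = &cesa_dev->engines[i];

            if (!selected ||
                atomic_read(&engine->load) < atomic_read(&selected->load))
                selected = engine;
        }

        atomic_add(weight, &selected->load);

        return selected;
    }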

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23 18:29:51 +08:00
Romain Perier 2786cee8e5 crypto: marvell - Move SRAM I/O operations to step functions
Currently, crypto requests are sent to the engines sequentially.
This commit moves the SRAM I/O operations from the prepare functions to
the step functions. It provides flexibility for future work and allows
preparing a request while the engine is running.

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23 18:17:25 +08:00
Romain Perier 1bf6682cb3 crypto: marvell - Add a complete operation for async requests
So far, the 'process' operation was used to check if the current request
was correctly handled by the engine; if that was the case, it copied
information from the SRAM to the main memory. Now, we split this
operation. We keep the 'process' operation, which still checks if the
request was correctly handled by the engine or not, then we add a new
operation for completion. The 'complete' method copies the content of
the SRAM to memory. This will soon become useful if we want to call
the process and the complete operations from different locations
depending on the type of the request (different cleanup logic).
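
A hedged sketch of the resulting per-request operations (signatures
quoted from memory, so treat them as approximate):

    struct mv_cesa_req_ops {
        void (*prepare)(struct crypto_async_request *req,
                        struct mv_cesa_engine *engine);
        int (*process)(struct crypto_async_request *req, u32 status);
        void (*step)(struct crypto_async_request *req);
        void (*cleanup)(struct crypto_async_request *req);
        /* New: copy results (e.g. the SRAM digest) back to memory. */
        void (*complete)(struct crypto_async_request *req);
    };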

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23 18:17:24 +08:00
Romain Perier 53da740fed crypto: marvell - Move tdma chain out of mv_cesa_tdma_req and remove it
Currently, the only way to access the tdma chain is to use the 'req'
union from a mv_cesa_{ablkcipher,ahash}_req. This will soon become a
problem if we want to handle the TDMA chaining vs standard/non-DMA
processing in a generic way (with generic functions at the cesa.c level
detecting whether the request should be queued at the DMA level or not).
Hence the decision to move the chain field to the mv_cesa_req level, at
the expense of adding 2 void * fields to all request contexts (including
non-DMA ones). To limit the overhead, we get rid of the type field, which
can now be deduced from the req->chain.first value. Once these changes
are done the union is no longer needed, so remove it and move
mv_cesa_ablkcipher_std_req and mv_cesa_req into mv_cesa_ablkcipher_req
directly. There is also no need to keep the 'base' field in the union of
mv_cesa_ahash_req, so move it into the upper structure.

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23 18:17:23 +08:00
Romain Perier f62830886f crypto: marvell - Check engine is not already running when enabling a req
Add a BUG_ON() call when the driver tries to launch a crypto request
while the engine is still processing the previous one. This replaces
a silent system hang by a verbose kernel panic with the associated
backtrace to let the user know that something went wrong in the CESA
driver.
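
A hedged sketch of the added check (register and bit names are assumed
from the driver's existing accelerator-enable code):

    /* The accelerator must be idle before a new request is launched;
     * panic loudly instead of silently hanging if it is not.
     */
    BUG_ON(readl(engine->regs + CESA_SA_CMD) &
           CESA_SA_CMD_EN_CESA_SA_ACCL0);

    writel(CESA_SA_CMD_EN_CESA_SA_ACCL0, engine->regs + CESA_SA_CMD);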

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23 18:14:00 +08:00
Dan Carpenter 063327f54e crypto: marvell/cesa - remove unneeded condition
creq->cache[] is an array inside the struct, it's not a pointer and it
can't be NULL.
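
A short illustration of why the test was dead code (struct layout
summarised from the driver, where the cache was made an embedded array
by the earlier memory leak fix):

    struct mv_cesa_ahash_req {
        /* ... other members ... */
        u8 cache[CESA_MAX_HASH_BLOCK_SIZE];   /* embedded storage */
    };

    /* "if (!creq->cache)" can never be true: an array member decays to
     * the address of storage inside the struct, which is never NULL.
     */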

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-04-05 20:35:45 +08:00
Boris BREZILLON b0ef51067c crypto: marvell/cesa - initialize hash states
->export() might be called before we have done an update operation,
and in this case the ->state field is left uninitialized.
Put the correct default value when initializing the request.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-03-17 19:09:04 +08:00
Boris BREZILLON 7850c91b17 crypto: marvell/cesa - fix memory leak
Crypto requests are not guaranteed to be finalized (->final() call),
and can be freed at any moment, without getting any notification from
the core. This can lead to memory leaks of the ->cache buffer.

Make this buffer part of the request object, and allocate an extra buffer
from the DMA cache pool when doing DMA operations.

As a side effect, this patch also fixes another bug related to cache
allocation and DMA operations. When the core allocates a new request and
imports an existing state, a cache buffer can be allocated (depending
on the state). The problem is, at that very moment, we don't know yet
whether the request will use DMA or not, and since everything is
likely to be initialized to zero, mv_cesa_ahash_alloc_cache() thinks it
should allocate a buffer for standard operation. But when
mv_cesa_ahash_free_cache() is called, req->type has been set to
CESA_DMA_REQ in the meantime, thus leading to an invalid dma_pool_free()
call (the buffer passed as argument has not been allocated from the pool).

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Reported-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-03-17 19:09:04 +08:00
LABBE Corentin c22dafb3b1 crypto: marvell - check return value of sg_nents_for_len
The sg_nents_for_len() function can fail; this patch adds a check for
its return value.
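
A hedged sketch of what the added check looks like (surrounding field
names are illustrative):

    ret = sg_nents_for_len(req->src, req->nbytes);
    if (ret < 0) {
        dev_err(cesa_dev->dev, "Invalid number of src SG");
        return ret;
    }
    creq->src_nents = ret;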

Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-11-17 22:00:35 +08:00
Linus Torvalds ccc9d4a6d6 Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto update from Herbert Xu:
 "API:

   - Add support for cipher output IVs in testmgr
   - Add missing crypto_ahash_blocksize helper
   - Mark authenc and des ciphers as not allowed under FIPS.

Algorithms:

   - Add CRC support to 842 compression
   - Add keywrap algorithm
   - A number of changes to the akcipher interface:
      + Separate functions for setting public/private keys.
      + Use SG lists.

Drivers:

   - Add Intel SHA Extension optimised SHA1 and SHA256
   - Use dma_map_sg instead of custom functions in crypto drivers
   - Add support for STM32 RNG
   - Add support for ST RNG
   - Add Device Tree support to exynos RNG driver
   - Add support for mxs-dcp crypto device on MX6SL
   - Add xts(aes) support to caam
   - Add ctr(aes) and xts(aes) support to qat
   - A large set of fixes from Russell King for the marvell/cesa driver"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (115 commits)
  crypto: asymmetric_keys - Fix unaligned access in x509_get_sig_params()
  crypto: akcipher - Don't #include crypto/public_key.h as the contents aren't used
  hwrng: exynos - Add Device Tree support
  hwrng: exynos - Fix missing configuration after suspend to RAM
  hwrng: exynos - Add timeout for waiting on init done
  dt-bindings: rng: Describe Exynos4 PRNG bindings
  crypto: marvell/cesa - use __le32 for hardware descriptors
  crypto: marvell/cesa - fix missing cpu_to_le32() in mv_cesa_dma_add_op()
  crypto: marvell/cesa - use memcpy_fromio()/memcpy_toio()
  crypto: marvell/cesa - use gfp_t for gfp flags
  crypto: marvell/cesa - use dma_addr_t for cur_dma
  crypto: marvell/cesa - use readl_relaxed()/writel_relaxed()
  crypto: caam - fix indentation of close braces
  crypto: caam - only export the state we really need to export
  crypto: caam - fix non-block aligned hash calculation
  crypto: caam - avoid needlessly saving and restoring caam_hash_ctx
  crypto: caam - print errno code when hash registration fails
  crypto: marvell/cesa - fix memory leak
  crypto: marvell/cesa - fix first-fragment handling in mv_cesa_ahash_dma_last_req()
  crypto: marvell/cesa - rearrange handling for sw padded hashes
  ...
2015-11-04 09:11:12 -08:00
Russell King 0f3304dc18 crypto: marvell/cesa - use memcpy_fromio()/memcpy_toio()
Use the IO memcpy() functions when copying from/to MMIO memory.
These locations were found via sparse.
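
A hedged before/after example (the SRAM offset macro is the driver's
usual one, quoted from memory):

    /* before (sparse warning: plain memcpy on __iomem memory):
     *   memcpy(engine->sram + CESA_SA_DATA_SRAM_OFFSET, buf, len);
     */
    memcpy_toio(engine->sram + CESA_SA_DATA_SRAM_OFFSET, buf, len);
    memcpy_fromio(creq->cache,
                  engine->sram + CESA_SA_DATA_SRAM_OFFSET, len);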

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:13:57 +08:00
Russell King b150856152 crypto: marvell/cesa - use readl_relaxed()/writel_relaxed()
Use relaxed IO accessors where appropriate.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:13:55 +08:00
Boris Brezillon 8c07f3a8c4 crypto: marvell/cesa - fix memory leak
The local chain variable is not cleaned up if an error occurs in the middle
of DMA chain creation. Fix that by dropping the local chain variable and
using the dreq->chain field which will be cleaned up by
mv_cesa_dma_cleanup() in case of errors.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Reported-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:11:09 +08:00
Russell King 8efbc2c0f6 crypto: marvell/cesa - fix first-fragment handling in mv_cesa_ahash_dma_last_req()
When adding the software padding, this must be done using the first/mid
fragment mode, and any subsequent operation needs to be a mid-fragment.
Fix this.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:11:09 +08:00
Russell King ab270e7055 crypto: marvell/cesa - rearrange handling for sw padded hashes
Rearrange the last request handling for hashes which require software
padding.

We prepare the padding to be appended, and then append as much of the
padding to any existing data that's already queued up, adding an
operation block and launching the operation.

Any remainder is then appended as a separate operation.

This ensures that the hardware only ever sees multiples of the hash
block size to be operated on for software padded hashes, thus ensuring
that the engine always indicates that it has finished the calculation.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:11:08 +08:00
Russell King aee84a7e6e crypto: marvell/cesa - rearrange handling for hw finished hashes
Rearrange the last request handling for hardware finished hashes
by moving the generation of the fragment operation into this path.
This results in a simplified sequence to handle this case, and
allows us to move the software padded case further down into the
function.  Add comments describing these parts.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:11:08 +08:00
Russell King 58953e15ef crypto: marvell/cesa - rearrange last request handling
Move the test for the last request out of mv_cesa_ahash_dma_last_req()
to its caller, and move the mv_cesa_dma_add_frag() down into this
function.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:11:08 +08:00
Russell King e41bbebdde crypto: marvell/cesa - avoid adding final operation within loop
Avoid adding the final operation within the loop, but instead add it
outside.  We combine this with the handling for the no-data case.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:11:07 +08:00
Russell King bd274b1085 crypto: marvell/cesa - ensure iter.base.op_len is the full op length
When we process the last request of data, and the request contains user
data, the loop in mv_cesa_ahash_dma_req_init() marks the first data size
as being iter.base.op_len which does not include the size of the cache
data.  This means we end up hashing an insufficient amount of data.

Fix this by always including the cache size in the first operation
length of any request.

This has the effect that for a request containing no user data,

	iter.base.op_len === iter.src.op_offset === creq->cache_ptr

As a result, we include one further change to use iter.base.op_len in
the cache-but-no-user-data case to make the next change clearer.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:11:07 +08:00
Russell King d9bba4c3eb crypto: marvell/cesa - use presence of scatterlist to determine data load
Use the presence of the scatterlist to determine whether we should load
any new user data to the engine.  The following shall always be true at
this point:

	iter.base.op_len == 0 === iter.src.sg

In doing so, we can:

1. eliminate the test for iter.base.op_len inside the loop, which
   makes the loop operation more obvious and understandable.

2. move the operation generation for the cache-only case.

This prepares the code for the next step in its transformation, and also
uncovers a bug that will be fixed in the next patch.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:11:06 +08:00
Russell King 0971d09a85 crypto: marvell/cesa - move mv_cesa_dma_add_frag() calls
Move the calls to mv_cesa_dma_add_frag() into the parent function,
mv_cesa_ahash_dma_req_init().  This is in preparation to changing
when we generate the operation blocks, as we need to avoid generating
a block for a partial hash block at the end of the user data.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:11:06 +08:00
Russell King 2f396a91d1 crypto: marvell/cesa - always ensure mid-fragments after first-fragment
If we add a template first-fragment operation, always update the
template to be a mid-fragment.  This ensures that mid-fragments
always follow on from a first fragment in every case.

This means we can move the first to mid-fragment update code out of
mv_cesa_ahash_dma_add_data().

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:11:05 +08:00
Russell King 9621288673 crypto: marvell/cesa - factor out adding an operation and launching it
Add a helper to add the fragment operation block followed by the DMA
entry to launch the operation.

Although at the moment this pattern only strictly appears at one site,
two other sites can be factored as well by slightly changing the order
in which the DMA operations are performed.  This should be harmless as
the only thing which matters is to have all the data loaded into SRAM
prior to launching the operation.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:11:05 +08:00
Russell King 8651791e54 crypto: marvell/cesa - factor out first fragment decisions to helper
Multiple locations in the driver test the operation context fragment
type, checking whether it is a first fragment or not.  Introduce a
mv_cesa_mac_op_is_first_frag() helper, which returns true if the
fragment operation is for a first fragment.
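
A hedged sketch of the helper (the config accessor and bit-mask names
are assumed from the driver headers):

    static inline bool
    mv_cesa_mac_op_is_first_frag(const struct mv_cesa_op_ctx *op)
    {
        return (mv_cesa_get_op_cfg(op) & CESA_SA_DESC_CFG_FRAG_MSK) ==
               CESA_SA_DESC_CFG_FIRST_FRAG;
    }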

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:11:04 +08:00
Russell King d30cb2fa34 crypto: marvell/cesa - ensure template operation is initialised
Ensure that the template operation is fully initialised, otherwise we
end up loading data from the kernel stack into the engines, which can
upset the hash results.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:10:52 +08:00
Russell King 51954a968b crypto: marvell/cesa - fix the bit length endianness
The endianness of the bit length used in the final stage depends on the
endianness of the algorithm - md5 hashes need it to be in little endian
format, whereas SHA hashes need it in big endian format.  Use the
previously added algorithm endianness flag to control this.
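
A hedged sketch of the idea, using the algo_le flag introduced by
a9eb678f8a below (the padding-offset arithmetic is illustrative):

    /* The total message length in bits goes at the end of the padding:
     * little endian for MD5, big endian for SHA.
     */
    if (creq->algo_le) {
        __le64 bits = cpu_to_le64(creq->len << 3);

        memcpy(buf + padlen - 8, &bits, sizeof(bits));
    } else {
        __be64 bits = cpu_to_be64(creq->len << 3);

        memcpy(buf + padlen - 8, &bits, sizeof(bits));
    }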

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:10:51 +08:00
Russell King a9eb678f8a crypto: marvell/cesa - add flag to determine algorithm endianness
Rather than determining whether we're using a MD5 hash by looking at
the digest size, switch to a cleaner solution using a per-request flag
initialised by the method type.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:10:51 +08:00
Russell King 4c2b130c8a crypto: marvell/cesa - keep creq->state in CPU endian format at all times
Currently, we read/write the state in CPU endian, but on the final
request, we convert its endian according to the requested algorithm.
(md5 is little endian, SHA are big endian.)

Always keep creq->state in CPU native endian format, and perform the
necessary conversion when copying the hash to the result.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:10:50 +08:00
Russell King 80754539ec crypto: marvell/cesa - easier way to get the transform
There's an easier way to get at the hash transform - rather than
using crypto_ahash_tfm(ahash), we can get it directly from
req->base.tfm.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-20 22:10:49 +08:00
Russell King a6479ea487 crypto: marvell/cesa - factor out common import/export functions
As all the import functions and export functions are virtually
identical, factor out their common parts into a generic
mv_cesa_ahash_import() and mv_cesa_ahash_export() respectively.  This
performs the actual import or export, and we pass the data pointers and
length into these functions.

We have to switch a % const operation to do_div() in the common import
function to avoid provoking gcc to use the expensive 64-bit by 64-bit
modulus operation.
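
A hedged sketch of the do_div() usage in the common import path
(variable names are illustrative; do_div() comes from <asm/div64.h>):

    u64 len = creq->len;
    unsigned int cache_ptr;

    /* do_div() divides the u64 in place by a 32-bit divisor and
     * returns the remainder, avoiding a 64-by-64 modulus on 32-bit ARM.
     */
    cache_ptr = do_div(len, blocksize);   /* len now holds the quotient */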

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-14 22:23:18 +08:00
Russell King c3bf02a22c crypto: marvell/cesa - fix wrong hash results
Attempting to use the sha1 digest for openssh via openssl reveals that
the result from the hash is wrong: this happens when we export the
state from one socket and import it into another via calling accept().

The reason for this is because the operation is reset to "initial block"
state, whereas we may be past the first fragment of data to be hashed.

Arrange for the operation code to avoid the initialisation of the state,
thereby preserving the imported state.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-14 22:23:18 +08:00
Russell King e72f407ee7 crypto: marvell/cesa - initialise struct mv_cesa_ahash_req
When an AF_ALG fd is accepted a second time (hence hash_accept() is
used), hash_accept_parent() allocates a new private context using
sock_kmalloc().  This context is uninitialised.  After use of the new
fd, we eventually end up with the kernel complaining:

marvell-cesa f1090000.crypto: dma_pool_free cesa_padding, c0627770/0 (bad dma)

where c0627770 is a random address.  Poisoning the memory allocated by
the above sock_kmalloc() produces kernel oopses within the marvell hash
code, particularly the interrupt handling.

The following simplified call sequence occurs:

hash_accept()
  crypto_ahash_export()
    marvell hash export function
  af_alg_accept()
    hash_accept_parent()	<== allocates uninitialised struct hash_ctx
  crypto_ahash_import()
    marvell hash import function

hash_ctx contains the struct mv_cesa_ahash_req in its req.__ctx member,
and, as the marvell hash import function only partially initialises
this structure, we end up with a lot of members which are left with
whatever data was in memory prior to sock_kmalloc().

Add zero-initialisation of this structure.
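
A hedged sketch of the fix in the import path (exact placement is an
assumption):

    struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);

    /* The context behind an accept()ed AF_ALG fd comes from
     * sock_kmalloc() and may contain garbage: zero it before the
     * import fills in the exported fields.
     */
    memset(creq, 0, sizeof(*creq));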

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Boris Brezillon <boris.brezillon@free-electronc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-14 22:23:18 +08:00
Russell King 9f5594c91e crypto: marvell/cesa - fix stack smashing in marvell/hash.c
Several of the algorithms in marvell/hash.c have a statesize of zero.
When an AF_ALG accept() on an already-accepted file descriptor calls
into hash_accept(), this causes:

	char state[crypto_ahash_statesize(crypto_ahash_reqtfm(req))];

to be zero-sized, but we still pass this to:

	err = crypto_ahash_export(req, state);

which proceeds to write to 'state' as if it was a "struct md5_state",
"struct sha1_state" etc.  Add the necessary initialisers for the
.statesize member.
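
A hedged sketch of the kind of initialiser that was added (one
algorithm shown; the field path follows the usual ahash_alg layout):

    .halg = {
        .digestsize = MD5_DIGEST_SIZE,
        .statesize = sizeof(struct md5_state),
    },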

Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-14 22:23:17 +08:00
Thomas Petazzoni cfcd2271a9 crypto: marvell - properly handle CRYPTO_TFM_REQ_MAY_BACKLOG-flagged requests
The mv_cesa_queue_req() function calls crypto_enqueue_request() to
enqueue a request. In the normal case (i.e the queue isn't full), this
function returns -EINPROGRESS. The current Marvell CESA crypto driver
takes this into account and cleans up the request only if an error
occurred, i.e. if the return value is not -EINPROGRESS.

Unfortunately this causes problems with
CRYPTO_TFM_REQ_MAY_BACKLOG-flagged requests. When such a request is
passed to crypto_enqueue_request() and the queue is full,
crypto_enqueue_request() will return -EBUSY, but will keep the request
enqueued nonetheless. This situation was not properly handled by the
Marvell CESA driver, which was anyway cleaning up the request in such
a situation. When the request was later taken out of the backlog
and actually processed, a kernel crash occurred because the internal
driver data structures for this request had been cleaned up.

To avoid this situation, this commit adds a
mv_cesa_req_needs_cleanup() helper function which indicates if the
request needs to be cleaned up or not after a call to
crypto_enqueue_request(). This helper allows the cleanup to be done only in
the appropriate cases, and all call sites of mv_cesa_queue_req() are
fixed to use this new helper function.
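
A hedged, condensed sketch of the helper as described above (comments
paraphrase the reasoning in this commit message):

    static inline bool
    mv_cesa_req_needs_cleanup(struct crypto_async_request *req, int ret)
    {
        /* Queued normally: nothing to clean up. */
        if (ret == -EINPROGRESS)
            return false;

        /* Queue full, but the request went to the backlog and will be
         * processed later: it must not be cleaned up now.
         */
        if (ret == -EBUSY && req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
            return false;

        /* Anything else: the request was never queued. */
        return true;
    }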

Reported-by: Vincent Donnefort <vdonnefort@gmail.com>
Fixes: db509a4533 ("crypto: marvell/cesa - add TDMA support")
Cc: <stable@vger.kernel.org> # v4.2+
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Tested-by: Vincent Donnefort <vdonnefort@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-09-21 22:57:36 +08:00
Arnaud Ebalard f85a762e49 crypto: marvell/cesa - add SHA256 support
Add support for SHA256 operations.

Signed-off-by: Arnaud Ebalard <arno@natisbad.org>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-06-19 22:18:04 +08:00
Arnaud Ebalard 7aeef693d1 crypto: marvell/cesa - add MD5 support
Add support for MD5 operations.

Signed-off-by: Arnaud Ebalard <arno@natisbad.org>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-06-19 22:18:04 +08:00
Boris BREZILLON db509a4533 crypto: marvell/cesa - add TDMA support
The CESA IP supports CPU offload through a dedicated DMA engine (TDMA)
which can control the crypto block.
When you use this mode, all the required data (operation metadata and
payload data) are transferred using DMA, and the results are retrieved
through DMA when possible (hash results are not retrieved through DMA yet),
thus reducing the involvement of the CPU and providing better performance
in most cases (for small requests, the cost of DMA preparation might
exceed the performance gain).

Note that some CESA IPs do not embed this dedicated DMA, hence the
activation of this feature on a per-platform basis.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Arnaud Ebalard <arno@natisbad.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-06-19 22:18:03 +08:00