This is the 2nd part of fixing the usage of GFP_KERNEL for memory
allocations, taking care of all the places that haven't caused a real
problem / failure.
Again, the issue being fixed is that GFP_KERNEL should be used only when
the MAY_SLEEP flag is set; the MAY_BACKLOG flag is orthogonal to whether
sleeping is allowed.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Changes in the SW cts (ciphertext stealing) code in
commit 0605c41cc5 ("crypto: cts - Convert to skcipher")
revealed a problem in the CAAM driver:
when cts(cbc(aes)) is executed and cts runs in SW,
cbc(aes) is offloaded in CAAM; cts encrypts the last block
in atomic context and CAAM incorrectly decides to use GFP_KERNEL
for memory allocation.
Fix this by allowing GFP_KERNEL (sleeping) only when MAY_SLEEP flag is
set, i.e. remove MAY_BACKLOG flag.
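For illustration only, the resulting allocation-flag selection looks roughly
like the sketch below (edesc and the surrounding request handling are
placeholders, not the exact driver code):

/* Sketch: sleep only when the caller allows it; MAY_BACKLOG no longer
 * influences the choice of GFP flags.
 */
gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
	      GFP_KERNEL : GFP_ATOMIC;

edesc = kzalloc(sizeof(*edesc), flags);
if (!edesc)
	return ERR_PTR(-ENOMEM);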
We split the fix in two parts - the first is sent to -stable, while the
second is not (since there is no known failure case).
Link: http://lkml.kernel.org/g/20170602122446.2427-1-david@sigma-star.at
Cc: <stable@vger.kernel.org> # 4.8+
Reported-by: David Gstir <david@sigma-star.at>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
of_device_ids are not supposed to change at runtime. All functions
working with of_device_ids provided by <linux/of.h> work with const
of_device_ids. So mark the non-const structs as const.
File size before:
text data bss dec hex filename
2376 808 128 3312 cf0 drivers/crypto/caam/jr.o
File size after constifying caam_jr_match:
text data bss dec hex filename
2976 192 128 3296 ce0 drivers/crypto/caam/jr.o
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_akcipher_maxsize() asks for the output buffer size without
caring for errors. It always assumes that it will be called after a
valid setkey. Comply with that and return the size it expects.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CAAM RSA private key may have any of three representations.
1. The first representation consists of the pair (n, d), where the
components have the following meanings:
n      the RSA modulus
d      the RSA private exponent
2. The second representation consists of the triplet (p, q, d), where the
components have the following meanings:
p      the first prime factor of the RSA modulus n
q      the second prime factor of the RSA modulus n
d      the RSA private exponent
3. The third representation consists of the quintuple (p, q, dP, dQ, qInv),
where the components have the following meanings:
p      the first prime factor of the RSA modulus n
q      the second prime factor of the RSA modulus n
dP     the first factor's CRT exponent
dQ     the second factor's CRT exponent
qInv   the (first) CRT coefficient
The benefit of using the third or the second key form is lower
computational cost for the decryption and signature operations.
This patch adds support for the third RSA private key representation
and extends caampkc to use the fastest key when all related components
are present in the private key.
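A conceptual sketch of picking the fastest available form (structure and
field names are illustrative, not the exact caampkc definitions):

/* Sketch only - field names mirror the usual RSA component naming. */
struct rsa_priv_key {
	u8 *n, *d;		/* form 1 */
	u8 *p, *q;		/* forms 2 and 3 */
	u8 *dp, *dq, *qinv;	/* form 3 (CRT) */
};

static int rsa_priv_key_form(const struct rsa_priv_key *key)
{
	if (key->p && key->q && key->dp && key->dq && key->qinv)
		return 3;	/* cheapest: CRT representation */
	if (key->p && key->q && key->d)
		return 2;
	return 1;		/* always available: (n, d) */
}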
Signed-off-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com>
Signed-off-by: Radu Alexe <radu.alexe@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CAAM RSA private key may have any of three representations.
1. The first representation consists of the pair (n, d), where the
components have the following meanings:
n      the RSA modulus
d      the RSA private exponent
2. The second representation consists of the triplet (p, q, d), where the
components have the following meanings:
p      the first prime factor of the RSA modulus n
q      the second prime factor of the RSA modulus n
d      the RSA private exponent
3. The third representation consists of the quintuple (p, q, dP, dQ, qInv),
where the components have the following meanings:
p      the first prime factor of the RSA modulus n
q      the second prime factor of the RSA modulus n
dP     the first factor's CRT exponent
dQ     the second factor's CRT exponent
qInv   the (first) CRT coefficient
The benefit of using the third or the second key form is lower
computational cost for the decryption and signature operations.
This patch adds support for the second RSA private key
representation.
Signed-off-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com>
Signed-off-by: Radu Alexe <radu.alexe@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This function will be used in further patches.
Signed-off-by: Radu Alexe <radu.alexe@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function returns NULL if buf is composed only of zeros.
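As a rough illustration of that contract, a hypothetical helper could look
like this (not the driver's actual function):

/* Hypothetical sketch: skip leading zeros; an all-zero value yields
 * NULL, otherwise return a copy of what remains.
 */
static u8 *strip_leading_zeros(const u8 *buf, size_t *len)
{
	while (*len && !*buf) {
		buf++;
		(*len)--;
	}
	if (!*len)
		return NULL;

	return kmemdup(buf, *len, GFP_KERNEL);
}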
Signed-off-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Return the error code -ENOMEM from the kmem_cache_create() error
handling path instead of 0 (err is 0 there), as done elsewhere in this
function.
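The error path in question follows the usual pattern, roughly (cache name
and size are illustrative):

/* Sketch: kmem_cache_create() returns NULL (not an ERR_PTR) on failure,
 * so the error path has to set the error code explicitly.
 */
qi_cache = kmem_cache_create("caam_qi_cache", 512, 0, 0, NULL);
if (!qi_cache) {
	err = -ENOMEM;		/* was left as 0 before this fix */
	goto fail;
}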
Fixes: 67c2315def ("crypto: caam - add Queue Interface (QI) backend support")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Acked-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
RNG instantiation was previously fixed by
commit 62743a4145 ("crypto: caam - fix RNG init descriptor ret. code checking")
while deinstantiation was not addressed.
Since the descriptors used are similar, in the sense that they both end
with a JUMP HALT command, checking for errors should be similar too,
i.e. status code 7000_0000h should be considered successful.
Cc: <stable@vger.kernel.org> # 3.13+
Fixes: 1005bccd7a ("crypto: caam - enable instantiation of all RNG4 state handles")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In case caam_jr_alloc() fails, ctx->dev carries the error code,
thus accessing it with dev_err() is incorrect.
Cc: <stable@vger.kernel.org> # 4.8+
Fixes: 8c419778ab ("crypto: caam - add support for RSA algorithm")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The way Job Ring platform devices are created and released does not
allow for multiple create-release cycles.
JR0 Platform device creation error
JR0 Platform device creation error
caam 2100000.caam: no queues configured, terminating
caam: probe of 2100000.caam failed with error -12
The reason is that platform devices are created for each job ring:
for_each_available_child_of_node(nprop, np)
if (of_device_is_compatible(np, "fsl,sec-v4.0-job-ring") ||
of_device_is_compatible(np, "fsl,sec4.0-job-ring")) {
ctrlpriv->jrpdev[ring] =
of_platform_device_create(np, NULL, dev);
which sets OF_POPULATED on the device node, but then it cleans these up:
/* Remove platform devices for JobRs */
for (ring = 0; ring < ctrlpriv->total_jobrs; ring++) {
if (ctrlpriv->jrpdev[ring])
of_device_unregister(ctrlpriv->jrpdev[ring]);
}
which leaves OF_POPULATED set.
Use of_platform_populate / of_platform_depopulate instead.
This allows for a bit of driver clean-up, jrpdev is no longer needed.
Logic changes a bit too:
-exit in case of_platform_populate fails, since currently even QI backend
depends on JR; true, we no longer support the case when "some" of the JR
DT nodes are incorrect
-when cleaning up, caam_remove() would also depopulate RTIC in case
it would have been populated somewhere else - not the case for now
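Schematically, the probe/remove paths end up looking like this (a sketch,
not the exact driver code; caam_match stands for the driver's of_device_id
table):

/* probe: create platform devices for the job ring child nodes */
ret = of_platform_populate(nprop, caam_match, NULL, dev);
if (ret) {
	dev_err(dev, "JR platform devices creation error\n");
	return ret;
}

/* remove: tear them down again, clearing OF_POPULATED as well */
of_platform_depopulate(dev);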
Cc: <stable@vger.kernel.org>
Fixes: 313ea293e9 ("crypto: caam - Add Platform driver for Job Ring")
Reported-by: Russell King <rmk+kernel@armlinux.org.uk>
Suggested-by: Rob Herring <robh+dt@kernel.org>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add support to submit ablkcipher and authenc algorithms
via the QI backend:
-ablkcipher:
cbc({aes,des,des3_ede})
ctr(aes), rfc3686(ctr(aes))
xts(aes)
-authenc:
authenc(hmac(md5),cbc({aes,des,des3_ede}))
authenc(hmac(sha*),cbc({aes,des,des3_ede}))
caam/qi being a new driver, let's give it some time to settle down
without interfering with the existing caam/jr driver.
Accordingly, for now all caam/qi algorithms (caamalg_qi module) are
marked to be of lower priority than caam/jr ones (caamalg module).
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Alex Porosanu <alexandru.porosanu@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CAAM engine supports two interfaces for crypto job submission:
-job ring interface - already existing caam/jr driver
-Queue Interface (QI) - caam/qi driver added in current patch
QI is present in CAAM engines found on DPAA platforms.
QI gets its I/O (frame descriptors) from QMan (Queue Manager) queues.
This patch adds a platform device for accessing CAAM's queue interface.
The requests are submitted to CAAM using one frame queue per
cryptographic context. Each crypto context has one shared descriptor.
This shared descriptor is attached to frame queue associated with
corresponding driver context using context_a.
The driver hides the mechanics of FQ creation and initialisation from its
applications. Each cryptographic context needs to be associated with
driver context which houses the FQ to be used to transport the job to
CAAM. The driver provides API for:
(a) Context creation
(b) Job submission
(c) Context deletion
(d) Congestion indication - whether path to/from CAAM is congested
The driver supports affining its context to a particular CPU.
This means that any responses from CAAM for the context in question
would arrive at the given CPU. This helps in implementing one CPU
per packet round trip in IPsec application.
The driver processes CAAM responses under NAPI contexts.
NAPI contexts are instantiated only on cores with affined portals since
only cores having their own portal can receive responses from DQRR.
The responses from CAAM for all cryptographic contexts ride on a fixed
set of FQs. We use one response FQ per portal-owning core. The response
FQ is configured in each core's (and thus portal's) dedicated channel.
This gives the flexibility to direct CAAM's responses for a crypto
context to a given core.
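For illustration, a user of such an API would follow roughly this flow; the
function names below mirror the description above but should be read as
assumptions about the interface, not authoritative signatures:

/* (a) create a driver context affined to a CPU; the crypto context's
 *     shared descriptor gets attached to its frame queue via context_a
 */
drv_ctx = caam_drv_ctx_init(qidev, &cpu, sh_desc);
if (IS_ERR(drv_ctx))
	return PTR_ERR(drv_ctx);

/* (d) optionally back off while the path to/from CAAM is congested */
if (caam_drv_ctx_busy(drv_ctx))
	return -EBUSY;

/* (b) submit a job; the response arrives via NAPI on the affined CPU */
ret = caam_qi_enqueue(qidev, req);

/* (c) delete the context when it is no longer needed */
caam_drv_ctx_rel(drv_ctx);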
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Alex Porosanu <alexandru.porosanu@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix typos and add the following to the scripts/spelling.txt:
deintializing||deinitializing
deintialize||deinitialize
deintialized||deinitialized
Link: http://lkml.kernel.org/r/1481573103-11329-28-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If we register the DMA API debug notification chain to
receive platform bus events:
dma_debug_add_bus(&platform_bus_type);
we start receiving warnings after a simple test like "modprobe caam_jr &&
modprobe caamhash && modprobe -r caamhash && modprobe -r caam_jr":
platform ffe301000.jr: DMA-API: device driver has pending DMA allocations while released from device [count=1938]
One of leaked entries details: [device address=0x0000000173fda090] [size=63 bytes] [mapped with DMA_TO_DEVICE] [mapped as single]
It turns out there are several issues with handling buf_dma (mapping of buffer
holding the previous chunk smaller than hash block size):
-detection of buf_dma mapping failure occurs too late, after a job descriptor
using that value has been submitted for execution
-dma mapping leak - unmapping is not performed in all places, e.g.
in ahash_export or in most ahash_fin* callbacks (due to the current
back-to-back implementation of buf_dma unmapping/mapping)
Fix these by:
-calling dma_mapping_error() on buf_dma right after the mapping and providing
an error code if needed
-unmapping buf_dma during the "job done" (ahash_done_*) callbacks
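The first part of the fix is the standard DMA mapping pattern, roughly
(variable names are illustrative):

state->buf_dma = dma_map_single(jrdev, buf, buflen, DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, state->buf_dma)) {
	dev_err(jrdev, "unable to map buf\n");
	return -ENOMEM;		/* fail before the descriptor is submitted */
}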
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
caamhash uses double buffering for holding previous/current
and next chunks (data smaller than block size) to be hashed.
Add (inline) functions to abstract this mechanism.
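The helpers amount to something like the sketch below (field names are
assumptions about the hash state, kept here only to show the idea):

/* Sketch: two small buffers, one "current" and one "next" (alt);
 * switch_buf() flips the roles once the current chunk is consumed.
 */
static inline u8 *current_buf(struct caam_hash_state *state)
{
	return state->current_buf ? state->buf_1 : state->buf_0;
}

static inline u8 *alt_buf(struct caam_hash_state *state)
{
	return state->current_buf ? state->buf_0 : state->buf_1;
}

static inline void switch_buf(struct caam_hash_state *state)
{
	state->current_buf ^= 1;
}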
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In case ctx_dma dma mapping fails, ahash_unmap_ctx() tries to
dma unmap an invalid address:
map_seq_out_ptr_ctx() / ctx_map_to_sec4_sg() -> goto unmap_ctx ->
-> ahash_unmap_ctx() -> dma unmap ctx_dma
It is also possible to reach ahash_unmap_ctx() with ctx_dma
uninitialized, or to try to unmap the same address twice.
Fix these by setting ctx_dma = 0 where needed:
-initialize ctx_dma in ahash_init()
-clear ctx_dma in case of mapping error (instead of holding
the error code returned by the dma map function)
-clear ctx_dma after each unmapping
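The resulting pattern, as a sketch (field names illustrative):

/* map: never leave an error cookie in ctx_dma */
state->ctx_dma = dma_map_single(jrdev, state->caam_ctx, ctx_len,
				DMA_FROM_DEVICE);
if (dma_mapping_error(jrdev, state->ctx_dma)) {
	state->ctx_dma = 0;
	return -ENOMEM;
}

/* unmap: clear ctx_dma so a later unmap_ctx call becomes a no-op */
dma_unmap_single(jrdev, state->ctx_dma, ctx_len, DMA_FROM_DEVICE);
state->ctx_dma = 0;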
Fixes: 32686d34f8 ("crypto: caam - ensure that we clean up after an error")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
setkey() callback may be invoked multiple times for the same tfm.
In this case, DMA API leaks are caused by shared descriptors
(and key for caamalg) being mapped several times and unmapped only once.
Fix this by performing mapping / unmapping only in crypto algorithm's
cra_init() / cra_exit() callbacks and sync_for_device in the setkey()
tfm callback.
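Schematically, assuming a single shared-descriptor buffer per tfm (names
are illustrative):

/* cra_init: map the (still empty) shared descriptor buffer once */
ctx->sh_desc_enc_dma = dma_map_single(dev, ctx->sh_desc_enc,
				      desc_bytes_max, DMA_TO_DEVICE);

/* setkey: rewrite the descriptor in place, then make the device see
 * the new contents - no new mapping, hence no leak
 */
dma_sync_single_for_device(dev, ctx->sh_desc_enc_dma,
			   desc_bytes(ctx->sh_desc_enc), DMA_TO_DEVICE);

/* cra_exit: the one and only unmap */
dma_unmap_single(dev, ctx->sh_desc_enc_dma, desc_bytes_max,
		 DMA_TO_DEVICE);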
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Shared descriptors for hash algorithms are small enough
for (split) keys to be inlined in all cases.
Since the driver already does this, all that's left is to remove
the unused ctx->key_dma.
Fixes: 045e36780f ("crypto: caam - ahash hmac support")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
dma_map_sg() might coalesce S/G entries, so use the number of S/G
entries returned by it instead of what sg_nents_for_len() initially
returns.
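Roughly, the pattern becomes (a sketch):

src_nents = sg_nents_for_len(req->src, req->nbytes);
if (src_nents < 0)
	return src_nents;

/* dma_map_sg() may merge entries; use its return value when sizing
 * and filling the HW S/G table, not src_nents
 */
mapped_nents = dma_map_sg(dev, req->src, src_nents, DMA_TO_DEVICE);
if (unlikely(!mapped_nents))
	return -ENOMEM;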
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace internal sg_count() function and the convoluted logic
around it with the standard sg_nents_for_len() function.
src_nents, dst_nents now hold the number of SW S/G entries,
instead of the HW S/G table entries.
With this change, null (zero length) input data for the AEAD case
needs to be handled in a visible way. req->src is no longer
(un)mapped; the pointer address is set to 0 in the SEQ IN PTR command.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
sg_count() internally calls sg_nents_for_len(), which could fail
in case the required number of bytes is larger than the total
bytes in the S/G.
Thus, add checks to validate the input.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
HW S/G generation does not work properly when the following conditions
are met:
-src == dst
-src/dst is S/G
-IV is right before (contiguous with) the first src/dst S/G entry
since "iv_contig" is set to true (iv_contig is a misnomer here and
it actually refers to the whole output being contiguous)
Fix this by setting dst S/G nents equal to src S/G nents, instead of
leaving it set to init value (0).
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If one of the JRs failed at init, the next JR used
the failed JR's IO space. The patch fixes this bug.
Signed-off-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Setting the dma mask could fail, thus make sure it succeeds
before going further.
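For example (the exact mask depends on the SoC; the value and error
handling here are illustrative):

ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48));
if (ret) {
	dev_err(dev, "dma_set_mask_and_coherent failed (%d)\n", ret);
	goto cleanup;	/* unwind whatever probe has set up so far */
}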
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
intern.h, jr.h are not needed in error.c
error.h is not needed in ctrl.c
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pull crypto updates from Herbert Xu:
"Here is the crypto update for 4.10:
API:
- add skcipher walk interface
- add asynchronous compression (acomp) interface
- fix algif_aead AIO handling of zero buffer
Algorithms:
- fix unaligned access in poly1305
- fix DRBG output to large buffers
Drivers:
- add support for iMX6UL to caam
- fix givenc descriptors (used by IPsec) in caam
- accelerated SHA256/SHA512 for ARM64 from OpenSSL
- add SSE CRCT10DIF and CRC32 to ARM/ARM64
- add AEAD support to Chelsio chcr
- add Armada 8K support to omap-rng"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (148 commits)
crypto: testmgr - fix overlap in chunked tests again
crypto: arm/crc32 - accelerated support based on x86 SSE implementation
crypto: arm64/crc32 - accelerated support based on x86 SSE implementation
crypto: arm/crct10dif - port x86 SSE implementation to ARM
crypto: arm64/crct10dif - port x86 SSE implementation to arm64
crypto: testmgr - add/enhance test cases for CRC-T10DIF
crypto: testmgr - avoid overlap in chunked tests
crypto: chcr - checking for IS_ERR() instead of NULL
crypto: caam - check caam_emi_slow instead of re-lookup platform
crypto: algif_aead - fix AIO handling of zero buffer
crypto: aes-ce - Make aes_simd_algs static
crypto: algif_skcipher - set error code when kcalloc fails
crypto: caam - make aamalg_desc a proper module
crypto: caam - pass key buffers with typesafe pointers
crypto: arm64/aes-ce-ccm - Fix AEAD decryption length
MAINTAINERS: add crypto headers to crypto entry
crypt: doc - remove misleading mention of async API
crypto: doc - fix header file name
crypto: api - fix comment typo
crypto: skcipher - Add separate walker for AEAD decryption
...
Start with a clean slate before dealing with bit 16 (pointer size)
of Master Configuration Register.
This fixes the case of AArch64 boot loader + AArch32 kernel, when
the boot loader might set MCFGR[PS] and kernel would fail to clear it.
Cc: <stable@vger.kernel.org>
Reported-by: Alison Wang <alison.wang@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Reviewed-By: Alison Wang <Alison.wang@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The drivers/crypto/caam/ directory is entered during build only
for building modules when CONFIG_CRYPTO_DEV_FSL_CAAM=m, but
CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_DESC is defined as a
'bool' symbol, meaning that caamalg_desc.c is always compiled
into built-in code, or not at all, leading to a link failure:
ERROR: "cnstr_shdsc_xts_ablkcipher_decap" [drivers/crypto/caam/caamalg.ko] undefined!
ERROR: "cnstr_shdsc_xts_ablkcipher_encap" [drivers/crypto/caam/caamalg.ko] undefined!
ERROR: "cnstr_shdsc_aead_givencap" [drivers/crypto/caam/caamalg.ko] undefined!
ERROR: "cnstr_shdsc_aead_decap" [drivers/crypto/caam/caamalg.ko] undefined!
ERROR: "cnstr_shdsc_aead_encap" [drivers/crypto/caam/caamalg.ko] undefined!
ERROR: "cnstr_shdsc_aead_null_decap" [drivers/crypto/caam/caamalg.ko] undefined!
ERROR: "cnstr_shdsc_aead_null_encap" [drivers/crypto/caam/caamalg.ko] undefined!
ERROR: "cnstr_shdsc_rfc4106_decap" [drivers/crypto/caam/caamalg.ko] undefined!
ERROR: "cnstr_shdsc_rfc4106_encap" [drivers/crypto/caam/caamalg.ko] undefined!
...
Making caamalg_desc itself a loadable module fixes this configuration
by ensuring the driver gets built. Aside from making the symbol
'tristate', I'm adding appropriate module metadata here.
Fixes: 8cea7b66b8 ("crypto: caam - refactor encryption descriptors generation")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The 'key' field is defined as a 'u64' and used for two different
pieces of information: either to store a pointer or a dma_addr_t.
The former leads to a build error on 32-bit machines:
drivers/crypto/caam/caamalg_desc.c: In function 'cnstr_shdsc_aead_null_encap':
drivers/crypto/caam/caamalg_desc.c:67:27: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
drivers/crypto/caam/caamalg_desc.c: In function 'cnstr_shdsc_aead_null_decap':
drivers/crypto/caam/caamalg_desc.c:143:27: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
Using a union to provide the correct types gets rid of the warnings,
as well as a couple of redundant casts.
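The resulting layout is along these lines (a sketch of the idea; the
enclosing struct is the alginfo grouping from the commit named in the
Fixes: tag):

struct alginfo {
	u32 algtype;
	unsigned int keylen;
	unsigned int keylen_pad;
	union {
		dma_addr_t key_dma;	/* key referenced by DMA address */
		const void *key_virt;	/* key inlined via virtual pointer */
	};
	bool key_inline;
};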
Fixes: db57656b00 ("crypto: caam - group algorithm related params")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move ahash shared descriptor generation into a single function.
Currently there is no plan to support ahash on any other interface
besides the Job Ring, thus for now the functionality is not exported.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move split key length and padded length computation from caamalg.c
and caamhash.c to key_gen.c.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Refactor the generation of the authenc, ablkcipher shared descriptors
and export the functionality, such that it can be shared with the
upcoming caam/qi (Queue Interface) driver.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove dependency on CRYPTO_DEV_FSL_CAAM where superfluous:
depends on CRYPTO_DEV_FSL_CAAM && CRYPTO_DEV_FSL_CAAM_JR
is equivalent to
depends on CRYPTO_DEV_FSL_CAAM_JR
since CRYPTO_DEV_FSL_CAAM_JR depends on CRYPTO_DEV_FSL_CAAM.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
A few descriptor commands are generated using generic
inline append "append_cmd" function.
Rewrite them using specific inline append functions.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For authenc / stitched AEAD algorithms, check independently
each of the two (authentication, encryption) keys whether inlining
is possible.
Prioritize the inlining of the authentication key, since the length
of the (split) key is bigger than that of the encryption key.
For the other algorithms, compute only once per tfm the remaining
available bytes and decide whether key inlining is possible
based on this.
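Conceptually the decision is a simple byte-budget check, e.g. (sketch;
constant and field names are assumptions):

/* bytes left in the shared descriptor after the fixed commands */
rem_bytes = CAAM_DESC_BYTES_MAX - DESC_JOB_IO_LEN - base_desc_len;

/* try the (larger, split) authentication key first ... */
if (ctx->adata.keylen_pad <= rem_bytes) {
	ctx->adata.key_inline = true;
	rem_bytes -= ctx->adata.keylen_pad;
}
/* ... then the encryption key with whatever budget is left */
if (ctx->cdata.keylen <= rem_bytes)
	ctx->cdata.key_inline = true;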
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Information carried by alg_op can be deduced from adata->algtype
plus some fixed flags.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation of factoring out the shared descriptors,
struct alginfo is introduced to group the algorithm related
parameters.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
append_key_aead() is used in only one place, thus inline it.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Building the caam driver on arm64 produces a harmless warning:
drivers/crypto/caam/caamalg.c:140:139: warning: comparison of distinct pointer types lacks a cast
We can use min_t to tell the compiler which type we want it to use
here.
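For reference, the idiom is simply (operand names illustrative):

/* min() trips the strict type check when its two arguments have
 * different types; min_t() casts both to the named type first.
 */
len = min_t(size_t, sg->length, total_len);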
Fixes: 5ecf8ef910 ("crypto: caam - fix sg dump")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Shared descriptors used by ahash_final() and ahash_finup()
are identical, thus get rid of one of them (sh_desc_finup).
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The pointer to the descriptor buffer is not touched; it always points
to the start of the descriptor buffer. Thus, make it const.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
sec4_sg_entry structure is used only by helper functions in sg_sw_sec4.h.
Since SEC HW S/G entries are to be manipulated only indirectly, via these
functions, move sec4_sg_entry to the corresponding header.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This reverts commit 66d2e20280.
Quoting from Russell's findings:
https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg21136.html
[quote]
Okay, I've re-tested, using a different way of measuring, because using
openssl speed is impractical for off-loaded engines. I've decided to
use this way to measure the performance:
dd if=/dev/zero bs=1048576 count=128 | /usr/bin/time openssl dgst -md5
For the threaded IRQs case gives:
0.05user 2.74system 0:05.30elapsed 52%CPU (0avgtext+0avgdata 2400maxresident)k
0.06user 2.52system 0:05.18elapsed 49%CPU (0avgtext+0avgdata 2404maxresident)k
0.12user 2.60system 0:05.61elapsed 48%CPU (0avgtext+0avgdata 2460maxresident)k
=> 5.36s => 25.0MB/s
and the tasklet case:
0.08user 2.53system 0:04.83elapsed 54%CPU (0avgtext+0avgdata 2468maxresident)k
0.09user 2.47system 0:05.16elapsed 49%CPU (0avgtext+0avgdata 2368maxresident)k
0.10user 2.51system 0:04.87elapsed 53%CPU (0avgtext+0avgdata 2460maxresident)k
=> 4.95 => 27.1MB/s
which corresponds to an 8% slowdown for the threaded IRQ case. So,
tasklets are indeed faster than threaded IRQs.
[...]
I think I've proven from the above that this patch needs to be reverted
due to the performance regression, and that there _is_ most definitely
a deterimental effect of switching from tasklets to threaded IRQs.
[/quote]
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>