zhong jiang [Fri, 21 Sep 2018 13:30:15 +0000 (21:30 +0800)]
crypto: cavium - remove redundant null pointer check before kfree
kfree() already handles NULL pointers, so the explicit null pointer
check before kfree() is redundant and can be removed.
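The pattern being removed, for reference (kfree(NULL) is a safe no-op):

	if (ptr)
		kfree(ptr);

	/* becomes simply */
	kfree(ptr);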
Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Srikanth Jampala [Fri, 21 Sep 2018 11:38:02 +0000 (17:08 +0530)]
crypto: cavium/nitrox - updated debugfs information.
Updated debugfs to provide the device partname, frequency, etc.
New file "stats" shows the number of requests posted, dropped and
completed.
Signed-off-by: Srikanth Jampala <Jampala.Srikanth@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Srikanth Jampala [Fri, 21 Sep 2018 11:38:01 +0000 (17:08 +0530)]
crypto: cavium/nitrox - add support for per device request statistics.
Add per-device statistics such as the number of requests posted,
dropped and completed.
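One plausible shape for such counters; the struct and field names below
are my assumptions, not the driver's actual definitions:

	struct nitrox_stats {
		atomic64_t posted;	/* requests submitted to hw */
		atomic64_t completed;	/* requests completed by hw */
		atomic64_t dropped;	/* requests dropped on error */
	};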
Signed-off-by: Srikanth Jampala <Jampala.Srikanth@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Srikanth Jampala [Fri, 21 Sep 2018 11:38:00 +0000 (17:08 +0530)]
crypto: cavium/nitrox - added support to identify the NITROX device partname.
Get the device partname based on its capabilities, like
core frequency, number of cores and revision id.
Signed-off-by: Srikanth Jampala <Jampala.Srikanth@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gilad Ben-Yossef [Thu, 20 Sep 2018 13:18:40 +0000 (14:18 +0100)]
crypto: tcrypt - add OFB functional tests
We already have OFB test vectors and tcrypt OFB speed tests.
Add OFB functional tests to tcrypt as well.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gilad Ben-Yossef [Thu, 20 Sep 2018 13:18:39 +0000 (14:18 +0100)]
crypto: ofb - add output feedback mode
Add a generic version of output feedback mode. We already have support
for several hardware-based implementations of this mode and the needed
test vectors, but we somehow missed adding a generic software one. Fix
this now.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gilad Ben-Yossef [Thu, 20 Sep 2018 13:18:38 +0000 (14:18 +0100)]
crypto: testmgr - update sm4 test vectors
Add additional test vectors from "The SM4 Blockcipher Algorithm And Its
Modes Of Operations" draft-ribose-cfrg-sm4-10 and register cipher speed
tests for sm4.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
zhong jiang [Thu, 20 Sep 2018 09:57:16 +0000 (17:57 +0800)]
crypto: chtls - remove redundant null pointer check before kfree_skb
kfree_skb() already handles NULL pointers, so the explicit null pointer
check before kfree_skb() is redundant and can be removed.
Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Wed, 19 Sep 2018 14:54:21 +0000 (17:54 +0300)]
crypto: tcrypt - remove remnants of pcomp-based zlib
Commit 110492183c4b ("crypto: compress - remove unused pcomp interface")
removed the pcomp interface but missed cleaning up tcrypt.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Corentin Labbe [Wed, 19 Sep 2018 10:10:55 +0000 (10:10 +0000)]
crypto: tools - Add cryptostat userspace
This patch adds a userspace tool for displaying kernel crypto API
statistics.
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Corentin Labbe [Wed, 19 Sep 2018 10:10:54 +0000 (10:10 +0000)]
crypto: user - Implement a generic crypto statistics
This patch implements a generic way to get statistics about all crypto
uses.
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:11:00 +0000 (19:11 -0700)]
crypto: skcipher - Remove SKCIPHER_REQUEST_ON_STACK()
Now that all the users of the VLA-generating SKCIPHER_REQUEST_ON_STACK()
macro have been moved to SYNC_SKCIPHER_REQUEST_ON_STACK(), we can remove
the former.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:59 +0000 (19:10 -0700)]
crypto: picoxcell - Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
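The conversion follows the same pattern in every affected driver; a
minimal sketch (the variable names are illustrative, not picoxcell's
actual code):

	/* before: request size depends on the tfm => stack VLA */
	struct crypto_skcipher *fallback;
	SKCIPHER_REQUEST_ON_STACK(req, fallback);
	skcipher_request_set_tfm(req, fallback);

	/* after: sync-only tfm, fixed-size on-stack request */
	struct crypto_sync_skcipher *fallback;
	SYNC_SKCIPHER_REQUEST_ON_STACK(req, fallback);
	skcipher_request_set_sync_tfm(req, fallback);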
Cc: Jamie Iles <jamie@jamieiles.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:58 +0000 (19:10 -0700)]
crypto: omap-aes - Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:57 +0000 (19:10 -0700)]
crypto: mxs-dcp - Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:56 +0000 (19:10 -0700)]
crypto: chelsio - Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:55 +0000 (19:10 -0700)]
crypto: artpec6 - Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Lars Persson <lars.persson@axis.com>
Cc: linux-arm-kernel@axis.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Lars Persson <lars.persson@axis.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:54 +0000 (19:10 -0700)]
crypto: qce - Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: Himanshu Jha <himanshujha199640@gmail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:53 +0000 (19:10 -0700)]
crypto: sahara - Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:52 +0000 (19:10 -0700)]
crypto: cryptd - Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:51 +0000 (19:10 -0700)]
crypto: null - Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:50 +0000 (19:10 -0700)]
crypto: vmx - Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: "Leonidas S. Barbosa" <leosilva@linux.vnet.ibm.com>
Cc: Paulo Flabiano Smorigo <pfsmorigo@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:49 +0000 (19:10 -0700)]
crypto: ccp - Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Gary Hook <gary.hook@amd.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:48 +0000 (19:10 -0700)]
wusb: Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Felipe Balbi <felipe.balbi@linux.intel.com>
Cc: Johan Hovold <johan@kernel.org>
Cc: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
Cc: linux-usb@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:47 +0000 (19:10 -0700)]
rxrpc: Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: David Howells <dhowells@redhat.com>
Cc: linux-afs@lists.infradead.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:46 +0000 (19:10 -0700)]
ppp: mppe: Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: Paul Mackerras <paulus@samba.org>
Cc: linux-ppp@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:45 +0000 (19:10 -0700)]
libceph: Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: Ilya Dryomov <idryomov@gmail.com>
Cc: "Yan, Zheng" <zyan@redhat.com>
Cc: Sage Weil <sage@redhat.com>
Cc: ceph-devel@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:44 +0000 (19:10 -0700)]
block: cryptoloop: Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:43 +0000 (19:10 -0700)]
x86/fpu: Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: x86@kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:42 +0000 (19:10 -0700)]
s390/crypto: Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux-s390@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:41 +0000 (19:10 -0700)]
mac802154: Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: Alexander Aring <alex.aring@gmail.com>
Cc: Stefan Schmidt <stefan@datenfreihafen.org>
Cc: linux-wpan@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:40 +0000 (19:10 -0700)]
lib80211: Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: linux-wireless@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:39 +0000 (19:10 -0700)]
gss_krb5: Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Anna Schumaker <anna.schumaker@netapp.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Jeff Layton <jlayton@kernel.org>
Cc: YueHaibing <yuehaibing@huawei.com>
Cc: linux-nfs@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Wed, 19 Sep 2018 02:10:38 +0000 (19:10 -0700)]
crypto: skcipher - Introduce crypto_sync_skcipher
In preparation for removal of VLAs due to skcipher requests on the stack
via SKCIPHER_REQUEST_ON_STACK() usage, this introduces the infrastructure
for the "sync skcipher" tfm, which handles the on-stack cases of
skcipher; these are always non-ASYNC and have a known, limited request
size. A usage sketch follows the API list below.
The crypto API additions:
struct crypto_sync_skcipher (wrapper for struct crypto_skcipher)
crypto_alloc_sync_skcipher()
crypto_free_sync_skcipher()
crypto_sync_skcipher_setkey()
crypto_sync_skcipher_get_flags()
crypto_sync_skcipher_set_flags()
crypto_sync_skcipher_clear_flags()
crypto_sync_skcipher_blocksize()
crypto_sync_skcipher_ivsize()
crypto_sync_skcipher_reqtfm()
skcipher_request_set_sync_tfm()
SYNC_SKCIPHER_REQUEST_ON_STACK() (with tfm type check)
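A minimal usage sketch of the API above, assuming a registered
"cbc(aes)" implementation (error handling abbreviated; not taken from
any in-tree user):

	#include <linux/err.h>
	#include <crypto/skcipher.h>

	static int example_encrypt(struct scatterlist *src,
				   struct scatterlist *dst,
				   unsigned int len, u8 *iv,
				   const u8 *key, unsigned int keylen)
	{
		struct crypto_sync_skcipher *tfm;
		int err;

		tfm = crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		err = crypto_sync_skcipher_setkey(tfm, key, keylen);
		if (!err) {
			/* fixed-size request on the stack: no VLA */
			SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);

			skcipher_request_set_sync_tfm(req, tfm);
			skcipher_request_set_callback(req, 0, NULL, NULL);
			skcipher_request_set_crypt(req, src, dst, len, iv);
			err = crypto_skcipher_encrypt(req);
			skcipher_request_zero(req);
		}

		crypto_free_sync_skcipher(tfm);
		return err;
	}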
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Dan Aloni [Mon, 17 Sep 2018 17:24:32 +0000 (20:24 +0300)]
crypto: fix a memory leak in rsa-pkcs1pad's encryption mode
The encryption mode of pkcs1pad never uses out_sg and out_buf, so
there's no need to allocate the buffer, which presently is not even
being freed.
CC: Herbert Xu <herbert@gondor.apana.org.au>
CC: linux-crypto@vger.kernel.org
CC: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Dan Aloni <dan@kernelim.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Christoph Manszewski [Mon, 17 Sep 2018 15:09:30 +0000 (17:09 +0200)]
crypto: s5p-sss: Add aes-ctr support
Add support for the AES counter (CTR) block cipher mode of operation for
Exynos hardware. In contrast to the ECB and CBC modes, AES-CTR allows
encryption/decryption for request sizes that are not a multiple of 16
bytes.
The hardware requires block sizes that are a multiple of 16 bytes. In
order to achieve this, copy the request source and destination memory
and align its size up to 16 bytes. That way the hardware processes
additional bytes, which are omitted when copying the result back to its
original destination.
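The length round-up itself is the standard ALIGN() idiom (a sketch, not
the driver's actual code):

	#include <linux/kernel.h>
	#include <crypto/aes.h>

	/* smallest multiple of 16 (AES_BLOCK_SIZE) >= the request length */
	static unsigned int s5p_hw_len(unsigned int req_len)
	{
		return ALIGN(req_len, AES_BLOCK_SIZE);
	}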
Tested on Odroid-U3 with Exynos 4412 CPU, kernel 4.19-rc2 with crypto
run-time self test testmgr.
Signed-off-by: Christoph Manszewski <c.manszewski@samsung.com>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Kamil Konieczny <k.konieczny@partner.samsung.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Christoph Manszewski [Mon, 17 Sep 2018 15:09:29 +0000 (17:09 +0200)]
crypto: s5p-sss: Minor code cleanup
Modifications in s5p-sss.c:
- remove unnecessary 'goto' statements (making code shorter),
- change uint_8 and uint_32 to u8 and u32 types (for consistency in the
driver and making the code shorter).
Signed-off-by: Christoph Manszewski <c.manszewski@samsung.com>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Kamil Konieczny <k.konieczny@partner.samsung.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Christoph Manszewski [Mon, 17 Sep 2018 15:09:28 +0000 (17:09 +0200)]
crypto: s5p-sss: Fix argument list alignment
Fix misalignment of continued argument list.
Signed-off-by: Christoph Manszewski <c.manszewski@samsung.com>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Kamil Konieczny <k.konieczny@partner.samsung.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Christoph Manszewski [Mon, 17 Sep 2018 15:09:27 +0000 (17:09 +0200)]
crypto: s5p-sss: Fix race in error handling
Remove a race condition introduced by the error paths in the functions
s5p_aes_interrupt and s5p_aes_crypt_start. Setting the busy field of
struct s5p_aes_dev to false made it possible for s5p_tasklet_cb to
change the req field before s5p_aes_complete was called.
Change the first parameter of s5p_aes_complete to struct
ablkcipher_request. Before spin_unlock, make a copy of the currently
handled request, to ensure that s5p_aes_complete is called with the
correct request.
Signed-off-by: Christoph Manszewski <c.manszewski@samsung.com>
Acked-by: Kamil Konieczny <k.konieczny@partner.samsung.com>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Stefan Agner [Sun, 16 Sep 2018 04:38:25 +0000 (21:38 -0700)]
crypto: arm/crc32 - avoid warning when compiling with Clang
The table id (second) argument to MODULE_DEVICE_TABLE is often
referenced otherwise. This is not the case for CPU features. This
leads to a warning when building the kernel with Clang:
arch/arm/crypto/crc32-ce-glue.c:239:33: warning: variable
'crc32_cpu_feature' is not needed and will not be emitted
[-Wunneeded-internal-declaration]
static const struct cpu_feature crc32_cpu_feature[] = {
^
Avoid warnings by using __maybe_unused, similar to commit 1f318a8bafcf
("modules: mark __inittest/__exittest as __maybe_unused").
Fixes: 2a9faf8b7e43 ("crypto: arm/crc32 - enable module autoloading based on CPU feature bits")
Signed-off-by: Stefan Agner <stefan@agner.ch>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Stefan Agner [Sun, 16 Sep 2018 04:38:24 +0000 (21:38 -0700)]
cpufeature: avoid warning when compiling with clang
The table id (second) argument to MODULE_DEVICE_TABLE is often
referenced otherwise. This is not the case for CPU features. This
leads to warnings when building the kernel with Clang:
arch/arm/crypto/aes-ce-glue.c:450:1: warning: variable
'cpu_feature_match_AES' is not needed and will not be emitted
[-Wunneeded-internal-declaration]
module_cpu_feature_match(AES, aes_init);
^
Avoid warnings by using __maybe_unused, similar to commit 1f318a8bafcf
("modules: mark __inittest/__exittest as __maybe_unused").
Fixes: 67bad2fdb754 ("cpu: add generic support for CPU feature based module autoloading")
Signed-off-by: Stefan Agner <stefan@agner.ch>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Janakarajan Natarajan [Fri, 14 Sep 2018 22:32:04 +0000 (17:32 -0500)]
crypto: ccp - Allow SEV firmware to be chosen based on Family and Model
During PSP initialization, there is an attempt to update the SEV firmware
by looking in /lib/firmware/amd/. Currently, sev.fw is the expected name
of the firmware blob.
This patch will allow for firmware filenames based on the family and
model of the processor.
Model-specific firmware files are given the highest priority, followed
by firmware for a subset of models; failing the previous two options,
fall back to looking for sev.fw.
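The lookup order, sketched; the two specific file names below are
hypothetical placeholders, only "amd/sev.fw" is named by this change:

	const struct firmware *fw;
	const char *fw_name_model = "...";	/* hypothetical: family+model-specific */
	const char *fw_name_subset = "...";	/* hypothetical: model-subset */
	int ret = 0;

	/* try the most specific name first, then progressively fall back */
	if (request_firmware(&fw, fw_name_model, dev) &&
	    request_firmware(&fw, fw_name_subset, dev))
		ret = request_firmware(&fw, "amd/sev.fw", dev);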
Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Janakarajan Natarajan [Fri, 14 Sep 2018 22:32:03 +0000 (17:32 -0500)]
crypto: ccp - Fix static checker warning
Under certain configurations the SEV functions can be defined as no-ops,
in which case 'error' can be used uninitialized.
Initialize the variable to 0.
Cc: Dan Carpenter <Dan.Carpenter@oracle.com>
Reported-by: Dan Carpenter <Dan.Carpenter@oracle.com>
Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ondrej Mosnacek [Thu, 13 Sep 2018 08:51:34 +0000 (10:51 +0200)]
crypto: lrw - Do not use auxiliary buffer
This patch simplifies the LRW template to recompute the LRW tweaks from
scratch in the second pass and thus also removes the need to allocate a
dynamic buffer using kmalloc().
As discussed at [1], the use of kmalloc causes deadlocks with dm-crypt.
PERFORMANCE MEASUREMENTS (x86_64)
Performed using: https://gitlab.com/omos/linux-crypto-bench
Crypto driver used: lrw(ecb-aes-aesni)
The results show that the new code has about the same performance as the
old code. For 512-byte message it seems to be even slightly faster, but
that might be just noise.
Before:
ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
lrw(aes) 256 64 200 203
lrw(aes) 320 64 202 204
lrw(aes) 384 64 204 205
lrw(aes) 256 512 415 415
lrw(aes) 320 512 432 440
lrw(aes) 384 512 449 451
lrw(aes) 256 4096 1838 1995
lrw(aes) 320 4096 2123 1980
lrw(aes) 384 4096 2100 2119
lrw(aes) 256 16384 7183 6954
lrw(aes) 320 16384 7844 7631
lrw(aes) 384 16384 8256 8126
lrw(aes) 256 32768 14772 14484
lrw(aes) 320 32768 15281 15431
lrw(aes) 384 32768 16469 16293
After:
ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
lrw(aes) 256 64 197 196
lrw(aes) 320 64 200 197
lrw(aes) 384 64 203 199
lrw(aes) 256 512 385 380
lrw(aes) 320 512 401 395
lrw(aes) 384 512 415 415
lrw(aes) 256 4096 1869 1846
lrw(aes) 320 4096 2080 1981
lrw(aes) 384 4096 2160 2109
lrw(aes) 256 16384 7077 7127
lrw(aes) 320 16384 7807 7766
lrw(aes) 384 16384 8108 8357
lrw(aes) 256 32768 14111 14454
lrw(aes) 320 32768 15268 15082
lrw(aes) 384 32768 16581 16250
[1] https://lkml.org/lkml/2018/8/23/1315
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ondrej Mosnacek [Thu, 13 Sep 2018 08:51:33 +0000 (10:51 +0200)]
crypto: lrw - Optimize tweak computation
This patch rewrites the tweak computation to a slightly simpler method
that performs fewer bswaps. Based on performance measurements the new
code seems to provide slightly better performance than the old one.
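The core of the rewrite is an incremental update: when the 128-bit block
counter goes from i to i+1, the tweak changes by a single precomputed
multiple of the tweak key, selected by the lowest counter bit that flips
to one. A sketch of the index computation (close to, but not necessarily
identical with, the lrw.c hunk):

	#include <linux/bitops.h>

	static inline int next_index(u32 *counter)
	{
		int i, res = 0;

		for (i = 0; i < 4; i++) {
			if (counter[i] + 1 != 0)
				return res + ffz(counter[i]++);
			counter[i] = 0;
			res += 32;
		}
		/* counter wrapped from all-ones back to zero */
		return 127;
	}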
PERFORMANCE MEASUREMENTS (x86_64)
Performed using: https://gitlab.com/omos/linux-crypto-bench
Crypto driver used: lrw(ecb-aes-aesni)
Before:
ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
lrw(aes) 256 64 204 286
lrw(aes) 320 64 227 203
lrw(aes) 384 64 208 204
lrw(aes) 256 512 441 439
lrw(aes) 320 512 456 455
lrw(aes) 384 512 469 483
lrw(aes) 256 4096 2136 2190
lrw(aes) 320 4096 2161 2213
lrw(aes) 384 4096 2295 2369
lrw(aes) 256 16384 7692 7868
lrw(aes) 320 16384 8230 8691
lrw(aes) 384 16384 8971 8813
lrw(aes) 256 32768 15336 15560
lrw(aes) 320 32768 16410 16346
lrw(aes) 384 32768 18023 17465
After:
ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
lrw(aes) 256 64 200 203
lrw(aes) 320 64 202 204
lrw(aes) 384 64 204 205
lrw(aes) 256 512 415 415
lrw(aes) 320 512 432 440
lrw(aes) 384 512 449 451
lrw(aes) 256 4096 1838 1995
lrw(aes) 320 4096 2123 1980
lrw(aes) 384 4096 2100 2119
lrw(aes) 256 16384 7183 6954
lrw(aes) 320 16384 7844 7631
lrw(aes) 384 16384 8256 8126
lrw(aes) 256 32768 14772 14484
lrw(aes) 320 32768 15281 15431
lrw(aes) 384 32768 16469 16293
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ondrej Mosnacek [Thu, 13 Sep 2018 08:51:32 +0000 (10:51 +0200)]
crypto: testmgr - Add test for LRW counter wrap-around
This patch adds a test vector for lrw(aes) that triggers wrap-around of
the counter, which is a tricky corner case.
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ondrej Mosnacek [Thu, 13 Sep 2018 08:51:31 +0000 (10:51 +0200)]
crypto: lrw - Fix out-of-bounds access on counter overflow
When the LRW block counter overflows, the current implementation returns
128 as the index to the precomputed multiplication table, which has 128
entries. This patch fixes it to return the correct value (127).
Fixes: 64470f1b8510 ("[CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode")
Cc: <stable@vger.kernel.org> # 2.6.20+
Reported-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Wed, 12 Sep 2018 13:20:48 +0000 (16:20 +0300)]
crypto: tcrypt - fix ghash-generic speed test
ghash is a keyed hash algorithm, thus setkey needs to be called.
Otherwise the following error occurs (a sketch of the fix follows the
log below):
$ modprobe tcrypt mode=318 sec=1
testing speed of async ghash-generic (ghash-generic)
tcrypt: test 0 ( 16 byte blocks, 16 bytes per update, 1 updates):
tcrypt: hashing failed ret=-126
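The shape of the fix, sketched (the key value is arbitrary for a speed
test; the exact tcrypt hunk may differ):

	u8 key[16] = { 0 };	/* any key will do for benchmarking */

	if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
		crypto_ahash_setkey(tfm, key, sizeof(key));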
Cc: <stable@vger.kernel.org> # 4.6+
Fixes: 0660511c0bee ("crypto: tcrypt - Use ahash")
Tested-by: Franck Lenormand <franck.lenormand@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Wed, 12 Sep 2018 08:59:37 +0000 (11:59 +0300)]
arm64: defconfig: enable CAAM crypto engine on QorIQ DPAA2 SoCs
Enable CAAM (Cryptographic Accelerator and Assurance Module) driver
for QorIQ Data Path Acceleration Architecture (DPAA) v2.
It handles DPSECI (Data Path SEC Interface) DPAA2 objects that sit
on the Management Complex (MC) fsl-mc bus.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Wed, 12 Sep 2018 08:59:36 +0000 (11:59 +0300)]
crypto: caam/qi2 - add support for ahash algorithms
Add support for unkeyed and keyed (hmac) md5, sha algorithms.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Wed, 12 Sep 2018 08:59:35 +0000 (11:59 +0300)]
crypto: caam - export ahash shared descriptor generation
The caam/qi2 driver will support ahash algorithms,
so move the ahash descriptor generation to a shared location.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Wed, 12 Sep 2018 08:59:34 +0000 (11:59 +0300)]
crypto: caam/qi2 - add skcipher algorithms
Add support to submit the following skcipher algorithms
via the DPSECI backend:
cbc({aes,des,des3_ede})
ctr(aes), rfc3686(ctr(aes))
xts(aes)
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Wed, 12 Sep 2018 08:59:33 +0000 (11:59 +0300)]
crypto: caam/qi2 - add DPAA2-CAAM driver
Add CAAM driver that works using the DPSECI backend, i.e. manages
DPSECI DPAA2 objects sitting on the Management Complex (MC) fsl-mc bus.
Data transfers (crypto requests) are sent/received to/from CAAM crypto
engine via Queue Interface (v2), this being similar to existing caam/qi.
OTOH, configuration/setup (obtaining virtual queue IDs, authorization
etc.) is done by sending commands to the MC f/w.
Note that the CAAM accelerator included in DPAA2 platforms still has
Job Rings. However, the driver being added does not handle access
via this backend. Kconfig & Makefile are updated such that DPAA2-CAAM
(a.k.a. "caam/qi2") driver does not depend on caam/jr or caam/qi
backends - which rely on platform bus support (ctrl.c).
Support for the following aead and authenc algorithms is also added
in this patch:
-aead:
gcm(aes)
rfc4106(gcm(aes))
rfc4543(gcm(aes))
-authenc:
authenc(hmac({md5,sha*}),cbc({aes,des,des3_ede}))
echainiv(authenc(hmac({md5,sha*}),cbc({aes,des,des3_ede})))
authenc(hmac({md5,sha*}),rfc3686(ctr(aes))
seqiv(authenc(hmac({md5,sha*}),rfc3686(ctr(aes)))
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Wed, 12 Sep 2018 08:59:32 +0000 (11:59 +0300)]
crypto: caam - add Queue Interface v2 error codes
Add support to translate error codes returned by QI v2, i.e.
Queue Interface present on DataPath Acceleration Architecture
v2 (DPAA2).
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Wed, 12 Sep 2018 08:59:31 +0000 (11:59 +0300)]
crypto: caam - add DPAA2-CAAM (DPSECI) backend API
Add the low-level API that allows managing DPSECI DPAA2 objects
that sit on the Management Complex (MC) fsl-mc bus.
The API is compatible with MC firmware 10.2.0+.
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Wed, 12 Sep 2018 08:59:30 +0000 (11:59 +0300)]
crypto: caam - fix implicit casts in endianness helpers
Fix the following sparse endianness warnings:
drivers/crypto/caam/regs.h:95:1: sparse: incorrect type in return expression (different base types)
drivers/crypto/caam/regs.h:95:1:    expected unsigned int
drivers/crypto/caam/regs.h:95:1:    got restricted __le32 [usertype] <noident>
drivers/crypto/caam/regs.h:95:1: sparse: incorrect type in return expression (different base types)
drivers/crypto/caam/regs.h:95:1:    expected unsigned int
drivers/crypto/caam/regs.h:95:1:    got restricted __be32 [usertype] <noident>
drivers/crypto/caam/regs.h:92:1: sparse: cast to restricted __le32
drivers/crypto/caam/regs.h:92:1: sparse: cast to restricted __be32
Fixes: 261ea058f016 ("crypto: caam - handle core endianness != caam endianness")
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Wed, 12 Sep 2018 08:59:29 +0000 (11:59 +0300)]
soc: fsl: dpio: add congestion notification support
Add support for Congestion State Change Notifications (CSCN), which
allow DPIO users to be notified when a congestion group changes its
state (due to hitting the entrance / exit threshold).
Acked-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Wed, 12 Sep 2018 08:59:28 +0000 (11:59 +0300)]
soc: fsl: dpio: add frame list format support
Add support for dpaa2_fd_list format, i.e. dpaa2_fl_entry structure
and accessors.
Frame list entries (FLEs) are similar, but not identical to FDs:
+ "F" (final) bit
- FMT[b'01] is reserved
- DD, SC, DROPP bits (covered by "FD compatibility" field in FLE case)
- FLC[5:0] not used for stashing
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Acked-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Wed, 12 Sep 2018 08:59:27 +0000 (11:59 +0300)]
soc: fsl: dpio: add back some frame queue functions
This commit adds back functions removed in commit a211c8170b3c
("staging: fsl-mc/dpio: remove couple of unused functions"), since
the dpseci object will make use of them.
Acked-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Wed, 12 Sep 2018 08:59:26 +0000 (11:59 +0300)]
bus: fsl-mc: add support for dpseci device type
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Acked-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 12 Sep 2018 03:05:10 +0000 (20:05 -0700)]
crypto: chacha20 - Fix chacha20_block() keystream alignment (again)
In commit 9f480faec58c ("crypto: chacha20 - Fix keystream alignment for
chacha20_block()"), I had missed that chacha20_block() can be called
directly on the buffer passed to get_random_bytes(), which can have any
alignment. So, while my commit didn't break anything, it didn't fully
solve the alignment problems.
Revert my solution and just update chacha20_block() to use
put_unaligned_le32(), so the output buffer need not be aligned.
This is simpler, and on many CPUs it's the same speed.
But, I kept the 'tmp' buffers in extract_crng_user() and
_get_random_bytes() 4-byte aligned, since that alignment is actually
needed for _crng_backtrack_protect() too.
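The essence of the change, sketched (assumed shape; the real routine
lives in lib/chacha20.c):

	#include <asm/unaligned.h>

	/* emit keystream words without assuming the output is aligned */
	static void chacha20_store(u8 *stream, const u32 x[16])
	{
		int i;

		for (i = 0; i < 16; i++)
			put_unaligned_le32(x[i], &stream[i * sizeof(u32)]);
	}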
Reported-by: Stephan Müller <smueller@chronox.de>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ondrej Mosnacek [Tue, 11 Sep 2018 07:40:08 +0000 (09:40 +0200)]
crypto: xts - Drop use of auxiliary buffer
Since commit acb9b159c784 ("crypto: gf128mul - define gf128mul_x_* in
gf128mul.h"), the gf128mul_x_*() functions are very fast and therefore
caching the computed XTS tweaks has only negligible advantage over
computing them twice.
In fact, since the current caching implementation limits the size of
the calls to the child ecb(...) algorithm to PAGE_SIZE (usually 4096 B),
it is often actually slower than the simple recomputing implementation.
This patch simplifies the XTS template to recompute the XTS tweaks from
scratch in the second pass and thus also removes the need to allocate a
dynamic buffer using kmalloc().
As discussed at [1], the use of kmalloc causes deadlocks with dm-crypt.
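The recomputation is cheap because advancing the tweak by one block is a
single doubling in GF(2^128); a sketch using the helper named above (see
crypto/xts.c for the real walk):

	#include <crypto/gf128mul.h>

	/* tweak for block i+1 from tweak for block i, in place */
	static void xts_next_tweak(le128 *t)
	{
		gf128mul_x_ble(t, t);
	}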
PERFORMANCE RESULTS
I measured time to encrypt/decrypt a memory buffer of varying sizes with
xts(ecb-aes-aesni) using a tool I wrote ([2]) and the results suggest
that after this patch the performance is either better or comparable for
both small and large buffers. Note that there is a lot of noise in the
measurements, but the overall difference is easy to see.
Old code:
ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
xts(aes) 256 64 331 328
xts(aes) 384 64 332 333
xts(aes) 512 64 338 348
xts(aes) 256 512 889 920
xts(aes) 384 512 1019 993
xts(aes) 512 512 1032 990
xts(aes) 256 4096 2152 2292
xts(aes) 384 4096 2453 2597
xts(aes) 512 4096 3041 2641
xts(aes) 256 16384 9443 8027
xts(aes) 384 16384 8536 8925
xts(aes) 512 16384 9232 9417
xts(aes) 256 32768 16383 14897
xts(aes) 384 32768 17527 16102
xts(aes) 512 32768 18483 17322
New code:
ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
xts(aes) 256 64 328 324
xts(aes) 384 64 324 319
xts(aes) 512 64 320 322
xts(aes) 256 512 476 473
xts(aes) 384 512 509 492
xts(aes) 512 512 531 514
xts(aes) 256 4096 2132 1829
xts(aes) 384 4096 2357 2055
xts(aes) 512 4096 2178 2027
xts(aes) 256 16384 6920 6983
xts(aes) 384 16384 8597 7505
xts(aes) 512 16384 7841 8164
xts(aes) 256 32768 13468 12307
xts(aes) 384 32768 14808 13402
xts(aes) 512 32768 15753 14636
[1] https://lkml.org/lkml/2018/8/23/1315
[2] https://gitlab.com/omos/linux-crypto-bench
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Mon, 10 Sep 2018 14:41:15 +0000 (16:41 +0200)]
crypto: arm64/aes-blk - improve XTS mask handling
The Crypto Extension instantiation of the aes-modes.S collection of
skciphers uses only 15 NEON registers for the round key array, whereas
the pure NEON flavor uses 16 NEON registers for the AES S-box.
This means we have a spare register available that we can use to hold
the XTS mask vector, removing the need to reload it at every iteration
of the inner loop.
Since the pure NEON version does not permit this optimization, tweak
the macros so we can factor out this functionality. Also, replace the
literal load with a short sequence to compose the mask vector.
On Cortex-A53, this results in a ~4% speedup.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Mon, 10 Sep 2018 14:41:14 +0000 (16:41 +0200)]
crypto: arm64/aes-blk - add support for CTS-CBC mode
Currently, we rely on the generic CTS chaining mode wrapper to
instantiate the cts(cbc(aes)) skcipher. Due to the high performance
of the ARMv8 Crypto Extensions AES instructions (~1 cycle per byte),
any overhead in the chaining mode layers is amplified, and so it pays
off considerably to fold the CTS handling into the SIMD routines.
On Cortex-A53, this results in a ~50% speedup for smaller input sizes.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Mon, 10 Sep 2018 14:41:13 +0000 (16:41 +0200)]
crypto: arm64/aes-blk - revert NEON yield for skciphers
The reasoning of commit f10dc56c64bb ("crypto: arm64 - revert NEON yield
for fast AEAD implementations") applies equally to skciphers: the walk
API already guarantees that the input size of each call into the NEON
code is bounded to the size of a page, and so there is no need for an
additional TIF_NEED_RESCHED flag check inside the inner loop. So revert
the skcipher changes to aes-modes.S (but retain the mac ones).
This partially reverts commit 0c8f838a52fe9fd82761861a934f16ef9896b4e5.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Mon, 10 Sep 2018 14:41:12 +0000 (16:41 +0200)]
crypto: arm64/aes-blk - remove pointless (u8 *) casts
For some reason, the asmlinkage prototypes of the NEON routines take
u8[] arguments for the round key arrays, while the actual round keys
are arrays of u32, and so passing them into those routines requires
u8* casts at each occurrence. Fix that.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Srikanth Jampala [Mon, 10 Sep 2018 08:24:52 +0000 (13:54 +0530)]
crypto: cavium/nitrox - use dma_pool_zalloc()
Use dma_pool_zalloc() instead of dma_pool_alloc() with the __GFP_ZERO
flag. The crypto DMA pool is renamed to "nitrox-context".
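The change, in general form (not the driver's exact hunk):

	/* before */
	buf = dma_pool_alloc(pool, GFP_KERNEL | __GFP_ZERO, &dma);
	/* after: zeroing is implied */
	buf = dma_pool_zalloc(pool, GFP_KERNEL, &dma);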
Signed-off-by: Srikanth Jampala <Jampala.Srikanth@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Fri, 21 Sep 2018 05:22:37 +0000 (13:22 +0800)]
Merge git://git./linux/kernel/git/herbert/crypto-2.6
Merge crypto-2.6 to resolve caam conflict with skcipher conversion.
Horia Geantă [Fri, 14 Sep 2018 15:34:28 +0000 (18:34 +0300)]
crypto: caam/jr - fix ablkcipher_edesc pointer arithmetic
In some cases the zero-length hw_desc array at the end of the
ablkcipher_edesc struct requires 4 bytes of tail padding.
Due to tail padding and the way pointers to S/G table and IV
are computed:
edesc->sec4_sg = (void *)edesc + sizeof(struct ablkcipher_edesc) +
desc_bytes;
iv = (u8 *)edesc->hw_desc + desc_bytes + sec4_sg_bytes;
the first 4 bytes of the IV are overwritten by the S/G table.
Update computation of pointer to S/G table to rely on offset of hw_desc
member and not on sizeof() operator.
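The corrected computation, in general form (a sketch; the actual patch
may structure it differently):

	edesc->sec4_sg = (struct sec4_sg_entry *)((u8 *)edesc +
			offsetof(struct ablkcipher_edesc, hw_desc) +
			desc_bytes);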
Cc: <stable@vger.kernel.org> # 4.13+
Fixes: 115957bb3e59 ("crypto: caam - fix IV DMA mapping and updating")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Srikanth Jampala [Fri, 7 Sep 2018 07:01:18 +0000 (12:31 +0530)]
crypto: cavium/nitrox - Added support for SR-IOV configuration.
Added support to configure SR-IOV using the sysfs interface.
Supported VF modes are 16, 32, 64 and 128. Grouped the
hardware configuration functions to "nitrox_hal.h" file.
Changed driver version to "1.1".
Signed-off-by: Srikanth Jampala <Jampala.Srikanth@cavium.com>
Reviewed-by: Gadam Sreerama <sgadam@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Mikulas Patocka [Wed, 5 Sep 2018 13:18:43 +0000 (09:18 -0400)]
crypto: aesni - don't use GFP_ATOMIC allocation if the request doesn't cross a page in gcm
This patch fixes gcmaes_crypt_by_sg so that it won't use memory
allocation if the data doesn't cross a page boundary.
Authenticated encryption may be used by dm-crypt. If the encryption or
decryption fails, it results in an I/O error and filesystem corruption.
The function gcmaes_crypt_by_sg uses a GFP_ATOMIC allocation, which can
fail at any time. This patch fixes the logic so that it won't attempt
the failing allocation if the data doesn't cross a page boundary.
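A sketch of the single-page test (simplified; the real check in
gcmaes_crypt_by_sg also accounts for the assoc/data layout):

	/* 'data' and 'data_len' describe the contiguous input */
	if (((unsigned long)data & ~PAGE_MASK) + data_len <= PAGE_SIZE) {
		/* fits entirely in one page: process without allocating */
	} else {
		buf = kmalloc(sz, GFP_ATOMIC);	/* only the spanning case can fail */
	}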
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
kbuild test robot [Tue, 4 Sep 2018 17:52:44 +0000 (01:52 +0800)]
crc-t10dif: crc_t10dif_mutex can be static
Fixes: b76377543b73 ("crc-t10dif: Pick better transform if one becomes available")
Signed-off-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Tue, 7 Aug 2018 21:18:39 +0000 (14:18 -0700)]
dm: Remove VLA usage from hashes
In the quest to remove all stack VLA usage from the kernel[1], this uses
the new HASH_MAX_DIGESTSIZE from the crypto layer to allocate the upper
bounds on stack usage.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
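In practice the pattern looks like this (a sketch; 'tfm' is an assumed
shash handle, not dm's actual variable):

	u8 digest[HASH_MAX_DIGESTSIZE];

	if (crypto_shash_digestsize(tfm) > sizeof(digest))
		return -EINVAL;	/* guard against oversized digests */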
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ondrej Mosnacek [Wed, 5 Sep 2018 07:26:41 +0000 (09:26 +0200)]
crypto: x86/aegis,morus - Do not require OSXSAVE for SSE2
It turns out OSXSAVE needs to be checked only for AVX, not for SSE.
Without this patch the affected modules refuse to load on CPUs with SSE2
but without AVX support.
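The corrected gate, sketched (module init code; the exact hunk may
differ):

	/* SSE2 alone needs no OSXSAVE/XGETBV check; that is AVX-only */
	if (!boot_cpu_has(X86_FEATURE_XMM2))
		return -ENODEV;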
Fixes: 877ccce7cbe8 ("crypto: x86/aegis,morus - Fix and simplify CPUID checks")
Cc: <stable@vger.kernel.org> # 4.18
Reported-by: Zdenek Kaspar <zkaspar82@gmail.com>
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Brijesh Singh [Wed, 15 Aug 2018 21:11:25 +0000 (16:11 -0500)]
crypto: ccp - add timeout support in the SEV command
Currently, the CCP driver assumes that the SEV command issued to the PSP
will always return (i.e. it will never hang). But recently, firmware bugs
have shown that a command can hang. Since some of the SEV commands are
used in probe routines, this can cause boot hangs and/or loss of
virtualization capabilities.
To protect against firmware bugs, add a timeout in the SEV command
execution flow. If a command does not complete within the specified
timeout then return -ETIMEDOUT and stop the driver from executing any
further commands since the state of the SEV firmware is unknown.
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Gary Hook <Gary.Hook@amd.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Sat, 1 Sep 2018 07:17:07 +0000 (00:17 -0700)]
crypto: arm/chacha20 - faster 8-bit rotations and other optimizations
Optimize ChaCha20 NEON performance by:
- Implementing the 8-bit rotations using the 'vtbl.8' instruction.
- Streamlining the part that adds the original state and XORs the data.
- Making some other small tweaks.
On ARM Cortex-A7, these optimizations improve ChaCha20 performance from
about 12.08 cycles per byte to about 11.37 -- a 5.9% improvement.
There is a tradeoff involved with the 'vtbl.8' rotation method since
there is at least one CPU (Cortex-A53) where it's not fastest. But it
seems to be a better default; see the added comment. Overall, this
patch reduces Cortex-A53 performance by less than 0.5%.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Martin K. Petersen [Thu, 30 Aug 2018 15:00:16 +0000 (11:00 -0400)]
crc-t10dif: Allow current transform to be inspected in sysfs
Add a way to print the currently active CRC algorithm in:
/sys/module/crc_t10dif/parameters/transform
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Martin K. Petersen [Thu, 30 Aug 2018 15:00:15 +0000 (11:00 -0400)]
crc-t10dif: Pick better transform if one becomes available
The T10 CRC library is linked into the kernel because the block and SCSI
layers depend on it. The crypto accelerators are typically loaded later
as modules and are therefore not available when the T10 CRC library is
initialized.
Use the crypto notifier facility to trigger a switch to a better algorithm
if one becomes available after the initial hash has been registered. Use
RCU to protect the original transform while the new one is being set up.
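The swap itself follows the usual RCU publish pattern (a sketch with
illustrative names; crc_t10dif_mutex is the mutex mentioned in the
follow-up fix above):

	static struct crypto_shash __rcu *crct10dif_tfm;
	static DEFINE_MUTEX(crc_t10dif_mutex);

	struct crypto_shash *new, *old;

	new = crypto_alloc_shash("crct10dif", 0, 0);
	mutex_lock(&crc_t10dif_mutex);
	old = rcu_dereference_protected(crct10dif_tfm,
					lockdep_is_held(&crc_t10dif_mutex));
	rcu_assign_pointer(crct10dif_tfm, new);
	mutex_unlock(&crc_t10dif_mutex);
	synchronize_rcu();	/* wait out readers of the old tfm */
	crypto_free_shash(old);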
Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Martin K. Petersen [Thu, 30 Aug 2018 15:00:14 +0000 (11:00 -0400)]
crypto: api - Introduce notifier for new crypto algorithms
Introduce a facility that can be used to receive a notification
callback when a new algorithm becomes available. This can be used by
existing crypto registrations to trigger a switch from a software-only
algorithm to a hardware-accelerated version.
A new CRYPTO_MSG_ALG_LOADED state is introduced to the existing crypto
notification chain, and the register/unregister functions are exported
so they can be called by subsystems outside of crypto.
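A minimal consumer, modeled on the crc-t10dif user above (the callback
body is illustrative; that CRYPTO_MSG_ALG_LOADED is reachable via
crypto/algapi.h is my assumption):

	#include <linux/notifier.h>
	#include <crypto/algapi.h>

	static int example_notify(struct notifier_block *self,
				  unsigned long val, void *data)
	{
		struct crypto_alg *alg = data;

		if (val != CRYPTO_MSG_ALG_LOADED)
			return NOTIFY_DONE;

		pr_info("new algorithm available: %s\n", alg->cra_name);
		return NOTIFY_OK;
	}

	static struct notifier_block example_nb = {
		.notifier_call = example_notify,
	};

	/* crypto_register_notifier(&example_nb) at init,
	 * crypto_unregister_notifier(&example_nb) at exit */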
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Mon, 27 Aug 2018 15:38:12 +0000 (17:38 +0200)]
crypto: arm64/crct10dif - implement non-Crypto Extensions alternative
The arm64 implementation of the CRC-T10DIF algorithm uses the 64x64 bit
polynomial multiplication instructions, which are optional in the
architecture, and if these instructions are not available, we fall back
to the C routine which is slow and inefficient.
So let's reuse the 64x64 bit PMULL alternative from the GHASH driver that
uses a sequence of ~40 instructions involving 8x8 bit PMULL and some
shifting and masking. This is a lot slower than the original, but it is
still twice as fast as the current [unoptimized] C code on Cortex-A53,
and it is time invariant and much easier on the D-cache.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Mon, 27 Aug 2018 15:38:11 +0000 (17:38 +0200)]
crypto: arm64/crct10dif - preparatory refactor for 8x8 PMULL version
Reorganize the CRC-T10DIF asm routine so we can easily instantiate an
alternative version based on 8x8 polynomial multiplication in a
subsequent patch.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Mon, 27 Aug 2018 11:02:45 +0000 (13:02 +0200)]
crypto: arm64/crc32 - remove PMULL based CRC32 driver
Now that the scalar fallbacks have been moved out of this driver into
the core crc32()/crc32c() routines, we are left with a CRC32 crypto API
driver for arm64 that is based only on 64x64 polynomial multiplication,
which is an optional instruction in the ARMv8 architecture, and is less
and less likely to be available on cores that do not also implement the
CRC32 instructions, given that those are mandatory in the architecture
as of ARMv8.1.
Since the scalar instructions do not require the special handling that
SIMD instructions do, and since they turn out to be considerably faster
on some cores (Cortex-A53) as well, there is really no point in keeping
this code around so let's just remove it.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Thu, 23 Aug 2018 16:48:45 +0000 (17:48 +0100)]
crypto: arm64/aes-modes - get rid of literal load of addend vector
Replace the literal load of the addend vector with a sequence that
performs each add individually. This sequence is only 2 instructions
longer than the original, and 2% faster on Cortex-A53.
This is an improvement by itself, but also works around a Clang issue,
whose integrated assembler does not implement the GNU ARM asm syntax
completely, and does not support the =literal notation for FP registers
(more info at https://bugs.llvm.org/show_bug.cgi?id=38642)
Cc: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Thu, 23 Aug 2018 14:48:51 +0000 (15:48 +0100)]
crypto: arm/ghash-ce - implement support for 4-way aggregation
Speed up the GHASH algorithm based on 64-bit polynomial multiplication
by adding support for 4-way aggregation. This improves throughput by
~85% on Cortex-A53, from 1.7 cycles per byte to 0.9 cycles per byte.
When combined with AES into GCM, throughput improves by ~25%, from
3.8 cycles per byte to 3.0 cycles per byte.
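For reference, the identity behind 4-way aggregation (standard GHASH
algebra, not code from the patch):

	/*
	 * Y4 = ((((Y0 ^ C1)*H ^ C2)*H ^ C3)*H ^ C4)*H
	 *    = (Y0 ^ C1)*H^4 ^ C2*H^3 ^ C3*H^2 ^ C4*H
	 *
	 * so with H, H^2, H^3 and H^4 precomputed, four blocks need four
	 * independent multiplies and a single reduction at the end.
	 */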
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Wed, 22 Aug 2018 08:51:44 +0000 (10:51 +0200)]
crypto: x86 - remove SHA multibuffer routines and mcryptd
As it turns out, the AVX2 multibuffer SHA routines are currently
broken [0], in a way that would have likely been noticed if this
code were in wide use. Since the code is too complicated to be
maintained by anyone except the original authors, and since the
performance benefits for real-world use cases are debatable to
begin with, it is better to drop it entirely for the moment.
[0] https://marc.info/?l=linux-crypto-vger&m=153476243825350&w=2
Suggested-by: Eric Biggers <ebiggers@google.com>
Cc: Megha Dey <megha.dey@linux.intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tudor Ambarus [Tue, 21 Aug 2018 13:36:09 +0000 (16:36 +0300)]
crypto: atmel - switch to SPDX license identifiers
Adopt the SPDX license identifiers to ease license compliance
management.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Brijesh Singh [Wed, 15 Aug 2018 21:11:25 +0000 (16:11 -0500)]
crypto: ccp - add timeout support in the SEV command
Currently, the CCP driver assumes that the SEV command issued to the PSP
will always return (i.e. it will never hang). But recently, firmware bugs
have shown that a command can hang. Since some of the SEV commands are
used in probe routines, this can cause boot hangs and/or loss of
virtualization capabilities.
To protect against firmware bugs, add a timeout in the SEV command
execution flow. If a command does not complete within the specified
timeout then return -ETIMEDOUT and stop the driver from executing any
further commands since the state of the SEV firmware is unknown.
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Gary Hook <Gary.Hook@amd.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Tue, 7 Aug 2018 21:18:42 +0000 (14:18 -0700)]
crypto: shash - Remove VLA usage in unaligned hashing
In the quest to remove all stack VLA usage from the kernel[1], this uses
the newly defined max alignment to perform unaligned hashing to avoid
VLAs, and drops the helper function while adding sanity checks on the
resulting buffer sizes. Additionally, the __aligned_largest macro is
removed since this helper was the only user.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Tue, 7 Aug 2018 21:18:41 +0000 (14:18 -0700)]
crypto: qat - Remove VLA usage
In the quest to remove all stack VLA usage from the kernel[1], this uses
the new upper bound for the stack buffer. Also adds a sanity check.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Tue, 7 Aug 2018 21:18:40 +0000 (14:18 -0700)]
crypto: api - Introduce generic max blocksize and alignmask
In the quest to remove all stack VLA usage from the kernel[1], this
exposes a new general upper bound on crypto blocksize and alignmask
(higher than for the existing cipher limits) for VLA removal,
and introduces new checks.
At present, the highest cra_alignmask in the kernel is 63. The highest
cra_blocksize is 144 (SHA3_224_BLOCK_SIZE, 18 8-byte words). For the
new blocksize limit, I went with 160 (20 8-byte words).
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
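As I read it, the new bounds land in crypto/algapi.h roughly as follows
(names and values per this message; treat the exact spellings as an
assumption):

	#define MAX_ALGAPI_BLOCKSIZE	160	/* 20 x 8-byte words */
	#define MAX_ALGAPI_ALIGNMASK	63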
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Tue, 7 Aug 2018 21:18:38 +0000 (14:18 -0700)]
crypto: hash - Remove VLA usage
In the quest to remove all stack VLA usage from the kernel[1], this
removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize())
by using the maximum allowable size (which is now more clearly captured
in a macro), along with a few other cases. Similar limits are turned into
macros as well.
A review of existing sizes shows that SHA512_DIGEST_SIZE (64) is the
largest digest size and that sizeof(struct sha3_state) (360) is the
largest descriptor size. The corresponding maximums are reduced.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
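With those constants, the on-stack descriptor becomes fixed-size; a
simplified sketch (the in-tree macro body may differ slightly):

	#define HASH_MAX_DIGESTSIZE	64	/* SHA512_DIGEST_SIZE */
	#define HASH_MAX_DESCSIZE	(sizeof(struct shash_desc) + 360)

	#define SHASH_DESC_ON_STACK(shash, ctx)				\
		char __##shash##_desc[HASH_MAX_DESCSIZE]		\
			CRYPTO_MINALIGN_ATTR;				\
		struct shash_desc *shash = (struct shash_desc *)__##shash##_desc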
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Tue, 7 Aug 2018 21:18:37 +0000 (14:18 -0700)]
crypto: ccm - Remove VLA usage
In the quest to remove all stack VLA usage from the kernel[1], this drops
AHASH_REQUEST_ON_STACK by preallocating the ahash request area combined
with the skcipher area (which are not used at the same time).
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Tue, 7 Aug 2018 21:18:36 +0000 (14:18 -0700)]
crypto: cbc - Remove VLA usage
In the quest to remove all stack VLA usage from the kernel[1], this
uses the upper bounds on blocksize. Since this is always a cipher
blocksize, use the existing cipher max blocksize.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Kees Cook [Tue, 7 Aug 2018 21:18:35 +0000 (14:18 -0700)]
crypto: xcbc - Remove VLA usage
In the quest to remove all stack VLA usage from the kernel[1], this uses
the maximum blocksize and adds a sanity check. For xcbc, the blocksize
must always be 16, so use that, since it's already being enforced during
instantiation.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Jason A. Donenfeld [Tue, 7 Aug 2018 06:22:25 +0000 (08:22 +0200)]
crypto: speck - remove Speck
These are unused, undesired, and have never actually been used by
anybody. The original authors of this code have changed their mind about
its inclusion. While originally proposed for disk encryption on low-end
devices, the idea was discarded [1] in favor of something else before
that could really get going. Therefore, this patch removes Speck.
[1] https://marc.info/?l=linux-crypto-vger&m=153359499015659
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Acked-by: Eric Biggers <ebiggers@google.com>
Cc: stable@vger.kernel.org
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Mon, 6 Aug 2018 12:44:00 +0000 (15:44 +0300)]
crypto: caam/qi - ablkcipher -> skcipher conversion
Convert driver from deprecated ablkcipher API to skcipher.
Link: https://www.mail-archive.com/search?l=mid&q=20170728085622.GC19664@gondor.apana.org.au
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Mon, 6 Aug 2018 12:43:59 +0000 (15:43 +0300)]
crypto: caam/jr - ablkcipher -> skcipher conversion
Convert driver from deprecated ablkcipher API to skcipher.
Link: https://www.mail-archive.com/search?l=mid&q=20170728085622.GC19664@gondor.apana.org.au
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Mon, 6 Aug 2018 12:43:58 +0000 (15:43 +0300)]
crypto: caam/qi - remove ablkcipher IV generation
IV generation is done only at AEAD level.
Support in ablkcipher is not needed, thus remove the dead code.
Link: https://www.mail-archive.com/search?l=mid&q=20160901101257.GA3362@gondor.apana.org.au
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Horia Geantă [Mon, 6 Aug 2018 12:43:57 +0000 (15:43 +0300)]
crypto: caam/jr - remove ablkcipher IV generation
IV generation is done only at AEAD level.
Support in ablkcipher is not needed, thus remove the dead code.
Link: https://www.mail-archive.com/search?l=mid&q=20160901101257.GA3362@gondor.apana.org.au
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Linus Torvalds [Sun, 2 Sep 2018 21:37:30 +0000 (14:37 -0700)]
Linux 4.19-rc2