platform/kernel/linux-rpi.git
Gilad Ben-Yossef [Tue, 6 Mar 2018 09:44:42 +0000 (09:44 +0000)]
crypto: sm4 - introduce SM4 symmetric cipher algorithm

Introduce the SM4 cipher algorithm (OSCCA GB/T 32907-2016).

SM4 (GB/T 32907-2016) is a cryptographic standard issued by the
Organization of State Commercial Administration of China (OSCCA)
as an authorized cryptographic algorithm for use within China.

SMS4 was originally created for use in protecting wireless
networks, and is mandated in the Chinese National Standard for
Wireless LAN WAPI (WLAN Authentication and Privacy Infrastructure)
(GB.15629.11-2003).
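
For context, a minimal usage sketch (hypothetical, not part of this
patch) through the kernel's single-block cipher API; SM4 uses 16-byte
keys and 16-byte blocks:

    struct crypto_cipher *tfm = crypto_alloc_cipher("sm4", 0, 0);

    if (!IS_ERR(tfm)) {
            crypto_cipher_setkey(tfm, key, 16);        /* 128-bit key */
            crypto_cipher_encrypt_one(tfm, dst, src);  /* one 16-byte block */
            crypto_free_cipher(tfm);
    }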

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Harsh Jain [Tue, 6 Mar 2018 05:07:52 +0000 (10:37 +0530)]
crypto: chelsio - Split Hash requests for large scatter gather list

Send multiple work requests (WRs) to the hardware when the number of
entries received in the scatter list cannot be sent in a single request.

Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Harsh Jain [Tue, 6 Mar 2018 05:07:51 +0000 (10:37 +0530)]
crypto: chelsio - Fix iv passed in fallback path for rfc3686

We use ctr(aes) as the fallback for rfc3686(ctr) requests. Send the
updated IV to the fallback path.

Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Harsh Jain [Tue, 6 Mar 2018 05:07:50 +0000 (10:37 +0530)]
crypto: chelsio - Update IV before sending request to HW

CBC decryption requires the last ciphertext block as the IV. In case the
src/dst buffers are the same, the last block will be replaced by plain
text. This patch copies the last block before sending the request to HW.
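
A sketch of the idea (names illustrative, not the driver's exact code):
copy the final ciphertext block out of the source scatterlist before
the request is handed to the hardware, so an in-place operation cannot
clobber it:

    /* save the last ciphertext block to use as the next IV */
    sg_pcopy_to_buffer(req->src, sg_nents(req->src), reqctx->iv,
                       ivsize, req->cryptlen - ivsize);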

Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Harsh Jain [Tue, 6 Mar 2018 05:07:49 +0000 (10:37 +0530)]
crypto: chelsio - Fix src buffer dma length

The ulptx header cannot have a length > 64K. Adjust the length accordingly.

Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Harsh Jain [Tue, 6 Mar 2018 05:07:48 +0000 (10:37 +0530)]
crypto: chelsio - Use kernel round function to align lengths

Replace DIV_ROUND_UP with roundup or rounddown.
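
For reference, the helpers express the intent directly (illustrative
values):

    aligned = DIV_ROUND_UP(len, 16) * 16;   /* old, open-coded round-up */
    aligned = roundup(len, 16);             /* same result, clearer intent */
    floor   = rounddown(len, 16);           /* when truncation is wanted */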

Signed-off-by: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Vladimir Zapolskiy [Mon, 5 Mar 2018 22:21:00 +0000 (00:21 +0200)]
hwrng: mxc-rnga - add driver support on boards with device tree

The driver works well on i.MX31-powered boards with the device
description taken from the board device tree; the only change needed in
the driver is the missing OF device id. The affected list of included
headers and the indentation in the platform driver struct are also
beautified a little.

Signed-off-by: Vladimir Zapolskiy <vz@mleia.com>
Reviewed-by: Fabio Estevam <fabio.estevam@nxp.com>
Reviewed-by: Kim Phillips <kim.phillips@arm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Vladimir Zapolskiy [Mon, 5 Mar 2018 22:20:59 +0000 (00:20 +0200)]
dt-bindings: rng: Document Freescale i.MX21 and i.MX31 RNGA compatibles

Freescale i.MX21 and i.MX31 SoCs contain a Random Number Generator
Accelerator module (RNGA), which is replaced by the RNGB and RNGC
modules on later i.MX SoC series. The change adds a new compatible
property to describe the controller.

Since all versions of the Freescale RNG modules are legacy, the
documentation file apparently has no more potential for further
extension; nevertheless, generalize it by removing explicit RNGC
specifics.

Signed-off-by: Vladimir Zapolskiy <vz@mleia.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Reviewed-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Mon, 5 Mar 2018 19:17:07 +0000 (11:17 -0800)]
crypto: arm64/speck - add NEON-accelerated implementation of Speck-XTS

Add a NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
for ARM64.  This is ported from the 32-bit version.  It may be useful on
devices with 64-bit ARM CPUs that don't have the Cryptography
Extensions and so cannot do AES efficiently -- e.g. the Cortex-A53
processor on the Raspberry Pi 3.

It generally works the same way as the 32-bit version, but there are
some slight differences due to the different instructions, registers,
and syntax available in ARM64 vs. in ARM32.  For example, in the 64-bit
version there are enough registers to hold the XTS tweaks for each
128-byte chunk, so they don't need to be saved on the stack.

Benchmarks on a Raspberry Pi 3 running a 64-bit kernel:

   Algorithm                              Encryption     Decryption
   ---------                              ----------     ----------
   Speck64/128-XTS (NEON)                 92.2 MB/s      92.2 MB/s
   Speck128/256-XTS (NEON)                75.0 MB/s      75.0 MB/s
   Speck128/256-XTS (generic)             47.4 MB/s      35.6 MB/s
   AES-128-XTS (NEON bit-sliced)          33.4 MB/s      29.6 MB/s
   AES-256-XTS (NEON bit-sliced)          24.6 MB/s      21.7 MB/s

The code performs well on higher-end ARM64 processors as well, though
such processors tend to have the Crypto Extensions which make AES
preferred.  For example, here are the same benchmarks run on a HiKey960
(with CPU affinity set for the A73 cores), with the Crypto Extensions
implementation of AES-256-XTS added:

   Algorithm                              Encryption     Decryption
   ---------                              -----------    -----------
   AES-256-XTS (Crypto Extensions)        1273.3 MB/s    1274.7 MB/s
   Speck64/128-XTS (NEON)                  359.8 MB/s     348.0 MB/s
   Speck128/256-XTS (NEON)                 292.5 MB/s     286.1 MB/s
   Speck128/256-XTS (generic)              186.3 MB/s     181.8 MB/s
   AES-128-XTS (NEON bit-sliced)           142.0 MB/s     124.3 MB/s
   AES-256-XTS (NEON bit-sliced)           104.7 MB/s      91.1 MB/s

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Markus Elfring [Mon, 5 Mar 2018 12:50:13 +0000 (13:50 +0100)]
crypto: ccp - Use memdup_user() rather than duplicating its implementation

Reuse existing functionality from memdup_user() instead of keeping
duplicate source code.

This issue was detected by using the Coccinelle software.
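
The general shape of such a conversion (illustrative, not the exact
hunk):

    -       blob = kmalloc(len, GFP_KERNEL);
    -       if (!blob)
    -               return -ENOMEM;
    -       if (copy_from_user(blob, uaddr, len)) {
    -               kfree(blob);
    -               return -EFAULT;
    -       }
    +       blob = memdup_user(uaddr, len);
    +       if (IS_ERR(blob))
    +               return PTR_ERR(blob);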

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gary R Hook [Wed, 7 Mar 2018 17:37:42 +0000 (11:37 -0600)]
crypto: ccp - Fill the result buffer only on digest, finup, and final ops

Any change to the result buffer should only happen on final, finup
and digest operations. Changes to the buffer for update, import, export,
etc, are not allowed.

Fixes: 66d7b9f6175e ("crypto: testmgr - test misuse of result in ahash")
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Wu Fengguang [Fri, 2 Mar 2018 20:29:46 +0000 (04:29 +0800)]
crypto: x86/des3_ede - des3_ede_skciphers[] can be static

Fixes: 09c0f03bf8ce ("crypto: x86/des3_ede - convert to skcipher interface")
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
James Bottomley [Thu, 1 Mar 2018 22:37:42 +0000 (14:37 -0800)]
crypto: ecdh - fix to allow multi segment scatterlists

Apparently the ecdh use case was in bluetooth, which always has
single-element scatterlists, so the ecdh module was hard coded to expect
them.  Now that we're using this in TPM, we need multi-element
scatterlists, so remove this limitation.
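
One plausible shape of such a fix (illustrative): copy via the
scatterlist helpers, which walk every element, instead of assuming
element zero covers the whole buffer:

    -       copied = sg_copy_from_buffer(req->dst, 1, buf, nbytes);
    +       copied = sg_copy_from_buffer(req->dst, sg_nents(req->dst),
    +                                    buf, nbytes);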

Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
James Bottomley [Thu, 1 Mar 2018 22:36:17 +0000 (14:36 -0800)]
crypto: cfb - add support for Cipher FeedBack mode

TPM security routines require encryption and decryption with AES in
CFB mode, so add it to the Linux Crypto schemes.  CFB is basically a
one-time pad where the pad is generated initially from the encrypted
IV and then subsequently from the encrypted previous block of
ciphertext.  The pad is XOR'd into the plaintext to get the final
ciphertext.

https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#CFB
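
Illustratively, full-block CFB encryption looks like this, where E()
stands in for the block cipher with 16-byte blocks (decryption is
identical except the pad input is the received ciphertext):

    memcpy(pad_in, iv, 16);
    for (i = 0; i < nblocks; i++) {
            E(key, pad, pad_in);                     /* pad = E_k(pad_in) */
            crypto_xor_cpy(&ct[16 * i], &pt[16 * i], pad, 16);
            memcpy(pad_in, &ct[16 * i], 16);         /* chain the ciphertext */
    }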

Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Krzysztof Kozlowski [Thu, 1 Mar 2018 20:50:13 +0000 (21:50 +0100)]
crypto: s5p-sss - Constify pointed data (arguments and local variables)

Improve the code (safety and readability) by indicating that data passed
through pointers is not modified.  This adds the const keyword in many places,
most notably:
 - the driver data (pointer to struct samsung_aes_variant),
 - scatterlist addresses written as value to device registers,
 - key and IV arrays.

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Krzysztof Kozlowski [Thu, 1 Mar 2018 20:50:12 +0000 (21:50 +0100)]
crypto: s5p-sss - Remove useless check for non-null request

The ahash_request 'req' argument passed by the caller
s5p_hash_handle_queue() cannot be NULL here because it is obtained from
a non-NULL pointer via container_of().

This fixes smatch warning:
    drivers/crypto/s5p-sss.c:1213 s5p_hash_prepare_request() warn: variable dereferenced before check 'req' (see line 1208)

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Krzysztof Kozlowski [Thu, 1 Mar 2018 20:50:11 +0000 (21:50 +0100)]
crypto: omap-sham - Fix misleading indentation

Commit 8043bb1ae03c ("crypto: omap-sham - convert driver logic to use
sgs for data xmit") removed the if () clause, leaving the statement as
is. The intention in that case was to always finish the request, so the
goto instruction seems sensible.

Remove the indentation to fix Smatch warning:
    drivers/crypto/omap-sham.c:1761 omap_sham_done_task() warn: inconsistent indenting

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Krzysztof Kozlowski [Thu, 1 Mar 2018 20:50:10 +0000 (21:50 +0100)]
crypto: omap-sham - Remove useless check for non-null request

The ahash_request 'req' argument passed by the caller
omap_sham_handle_queue() cannot be NULL here because it is obtained from
a non-NULL pointer via container_of().

This fixes smatch warning:
    drivers/crypto/omap-sham.c:812 omap_sham_prepare_request() warn: variable dereferenced before check 'req' (see line 805)

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Atul Gupta [Wed, 28 Feb 2018 17:48:08 +0000 (23:18 +0530)]
crypto: chelsio - no csum offload for ipsec path

The Inline IPSec driver does not offload csum.

Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gregory CLEMENT [Wed, 28 Feb 2018 14:27:23 +0000 (15:27 +0100)]
hwrng: omap - Fix clock resource by adding a register clock

On Armada 7K/8K we need to explicitly enable the register clock. This
clock is optional because not all the SoCs using this IP need it, but at
least for Armada 7K/8K it is actually mandatory.

The binding documentation is updated accordingly.

Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gregory CLEMENT [Wed, 28 Feb 2018 14:27:22 +0000 (15:27 +0100)]
hwrng: omap - Remove useless test before clk_disable_unprepare

clk_disable_unprepare() already checks that the clock pointer is valid.
No need to test it before calling it.

Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tero Kristo [Tue, 27 Feb 2018 13:30:39 +0000 (15:30 +0200)]
crypto: omap-aes - make queue length configurable

Crypto driver queue size can now be configured from userspace. This
allows optimizing the queue usage based on use case. Default queue
size is still 10 entries.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tero Kristo [Tue, 27 Feb 2018 13:30:38 +0000 (15:30 +0200)]
crypto: omap-aes - make fallback size configurable

Crypto driver fallback size can now be configured from userspace. This
allows optimizing the DMA usage based on use case. Default fallback
size of 200 is still used.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tero Kristo [Tue, 27 Feb 2018 13:30:37 +0000 (15:30 +0200)]
crypto: omap-sham - make queue length configurable

Crypto driver queue size can now be configured from userspace. This
allows optimizing the queue usage based on use case. Default queue
size is still 10 entries.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tero Kristo [Tue, 27 Feb 2018 13:30:36 +0000 (15:30 +0200)]
crypto: omap-sham - make fallback size configurable

Crypto driver fallback size can now be configured from userspace. This
allows optimizing the DMA usage based on use case. Default fallback
size of 256 is still used.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tero Kristo [Tue, 27 Feb 2018 13:30:35 +0000 (15:30 +0200)]
crypto: omap-crypto - Verify page zone scatterlists before starting DMA

Certain platforms like DRA7xx, with memory > 2GB and LPAE enabled, have
the constraint that DMA can only be done within the initial 2GB, which
is marked as ZONE_DMA. But openssl, when used with cryptodev, does not
make sure that the input buffer is DMA capable. So add a check to verify
that the input buffer is capable of DMA.
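
A sketch of such a check (illustrative, not the patch's exact code),
walking the scatterlist and rejecting pages outside the DMA zone:

    static bool sg_dma_zone_ok(struct scatterlist *sgl, int nents)
    {
            struct scatterlist *sg;
            int i;

            for_each_sg(sgl, sg, nents, i)
                    if (page_zonenum(sg_page(sg)) != ZONE_DMA)
                            return false;   /* caller copies to a DMA-able buffer */
            return true;
    }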

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tero Kristo [Tue, 27 Feb 2018 13:30:34 +0000 (15:30 +0200)]
crypto: omap-sham - Verify page zone of scatterlists before starting DMA

Certain platforms like DRA7xx, with memory > 2GB and LPAE enabled, have
the constraint that DMA can only be done within the initial 2GB, which
is marked as ZONE_DMA. But openssl, when used with cryptodev, does not
make sure that the input buffer is DMA capable. So add a check to verify
that the input buffer is capable of DMA.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Reported-by: Aparna Balasubramanian <aparnab@ti.com>
Reviewed-by: Lokesh Vutla <lokeshvutla@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
LEROY Christophe [Mon, 26 Feb 2018 16:40:06 +0000 (17:40 +0100)]
crypto: talitos - do not perform unnecessary dma synchronisation

req_ctx->hw_context is mainly used only by the HW, so there is no need
to sync the HW and the CPU each time hw_context is DMA mapped.
This patch modifies the DMA mapping in order to limit synchronisation
to the necessary situations.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
LEROY Christophe [Mon, 26 Feb 2018 16:40:04 +0000 (17:40 +0100)]
crypto: talitos - don't persistently map req_ctx->hw_context and req_ctx->buf

Commit 49f9783b0cea ("crypto: talitos - do hw_context DMA mapping
outside the requests") introduced a persistent dma mapping of
req_ctx->hw_context.
Commit 37b5e8897eb5 ("crypto: talitos - chain in buffered data for ahash
on SEC1") introduced a persistent dma mapping of req_ctx->buf.

As there is no destructor for req_ctx (the request context), the
associated dma handles were set in ctx (the tfm context). This is
wrong as several hash operations can run with the same ctx.

This patch removes this persistent mapping.

Reported-by: Horia Geanta <horia.geanta@nxp.com>
Cc: <stable@vger.kernel.org>
Fixes: 49f9783b0cea ("crypto: talitos - do hw_context DMA mapping outside the requests")
Fixes: 37b5e8897eb5 ("crypto: talitos - chain in buffered data for ahash on SEC1")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Tested-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Colin Ian King [Mon, 26 Feb 2018 14:51:19 +0000 (14:51 +0000)]
hwrng: cavium - make two functions static

The functions cavium_rng_remove and cavium_rng_remove_vf are local to
the source file and do not need to be in global scope, so make them
static.

Cleans up sparse warnings:
drivers/char/hw_random/cavium-rng-vf.c:80:7: warning: symbol
'cavium_rng_remove_vf' was not declared. Should it be static?
drivers/char/hw_random/cavium-rng.c:65:7: warning: symbol
'cavium_rng_remove' was not declared. Should it be static?

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Antoine Tenart [Mon, 26 Feb 2018 13:45:12 +0000 (14:45 +0100)]
crypto: inside-secure - wait for the request to complete if in the backlog

This patch updates the safexcel_hmac_init_pad() function to also wait
for completion when the digest return code is -EBUSY, as it would mean
the request is in the backlog to be processed later.
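
For reference, the standard pattern for a synchronous caller of the
async hash API handles both return codes; a sketch using the generic
crypto_wait helpers (not necessarily the driver's exact plumbing):

    DECLARE_CRYPTO_WAIT(wait);

    ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                               crypto_req_done, &wait);
    /* crypto_wait_req() sleeps on both -EINPROGRESS and -EBUSY */
    ret = crypto_wait_req(crypto_ahash_digest(req), &wait);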

Fixes: 1b44c5a60c13 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
Suggested-by: Ofer Heifetz <oferh@marvell.com>
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Antoine Tenart [Mon, 26 Feb 2018 13:45:11 +0000 (14:45 +0100)]
crypto: inside-secure - move cache result dma mapping to request

In heavy traffic the DMA mapping is overwritten by multiple requests as
the DMA address is stored in a global context. This patch moves this
information to the per-hash request context so that it can't be
overwritten.

Fixes: 1b44c5a60c13 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ofer Heifetz [Mon, 26 Feb 2018 13:45:10 +0000 (14:45 +0100)]
crypto: inside-secure - move hash result dma mapping to request

In heavy traffic the DMA mapping is overwritten by multiple requests as
the DMA address is stored in a global context. This patch moves this
information to the per-hash request context so that it can't be
overwritten.

Fixes: 1b44c5a60c13 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
Signed-off-by: Ofer Heifetz <oferh@marvell.com>
[Antoine: rebased the patch, small fixes, commit message.]
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Brijesh Singh [Thu, 15 Feb 2018 19:34:45 +0000 (13:34 -0600)]
include: psp-sev: Capitalize invalid length enum

Commit 1d57b17c60ff ("crypto: ccp: Define SEV userspace ioctl and command
id") added the invalid length enum but we missed capitalizing it.

Fixes: 1d57b17c60ff (crypto: ccp: Define SEV userspace ioctl ...)
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
CC: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Brijesh Singh [Thu, 15 Feb 2018 19:34:44 +0000 (13:34 -0600)]
crypto: ccp - Fix sparse, use plain integer as NULL pointer

Fix the sparse warning: Using plain integer as NULL pointer. Replace the
assignment of 0 to the pointer with a NULL assignment.

Fixes: 200664d5237f (Add Secure Encrypted Virtualization ...)
Cc: Borislav Petkov <bp@suse.de>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Gary Hook <gary.hook@amd.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Maciej S. Szmigiero [Sat, 24 Feb 2018 16:03:21 +0000 (17:03 +0100)]
crypto: ccp - return an actual key size from RSA max_size callback

rsa-pkcs1pad uses the value returned from an RSA implementation's
max_size callback as the size of the input buffer passed to the RSA
implementation for encrypt and sign operations.

The CCP RSA implementation uses a hardware input buffer whose size
depends only on the current RSA key length, so it should return this
key length from the max_size callback, too.
This also matches what the kernel software RSA implementation does.

Previously, the value returned from this callback was always the maximum
RSA key size the CCP hardware supports.
This resulted in this huge buffer being passed by rsa-pkcs1pad to CCP even
for smaller key sizes and then in a buffer overflow when ccp_run_rsa_cmd()
tried to copy this large input buffer into a RSA key length-sized hardware
input buffer.
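
A sketch of the corrected callback (field names illustrative):

    static unsigned int ccp_rsa_max_size(struct crypto_akcipher *tfm)
    {
            const struct ccp_ctx *ctx = akcipher_tfm_ctx(tfm);

            /* current key size in bytes, not the HW maximum */
            return ctx->u.rsa.key_len;
    }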

Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
Fixes: ceeec0afd684 ("crypto: ccp - Add support for RSA on the CCP")
Cc: stable@vger.kernel.org
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Sebastian Andrzej Siewior [Fri, 23 Feb 2018 22:33:07 +0000 (23:33 +0100)]
crypto: ccp - don't disable interrupts while setting up debugfs

I don't know why we need to take a single write lock and disable
interrupts while setting up debugfs. This is what happens when we try
anyway:

|ccp 0000:03:00.2: enabling device (0000 -> 0002)
|BUG: sleeping function called from invalid context at kernel/locking/rwsem.c:69
|in_atomic(): 1, irqs_disabled(): 1, pid: 3, name: kworker/0:0
|irq event stamp: 17150
|hardirqs last  enabled at (17149): [<0000000097a18c49>] restore_regs_and_return_to_kernel+0x0/0x23
|hardirqs last disabled at (17150): [<000000000773b3a9>] _raw_write_lock_irqsave+0x1b/0x50
|softirqs last  enabled at (17148): [<0000000064d56155>] __do_softirq+0x3b8/0x4c1
|softirqs last disabled at (17125): [<0000000092633c18>] irq_exit+0xb1/0xc0
|CPU: 0 PID: 3 Comm: kworker/0:0 Not tainted 4.16.0-rc2+ #30
|Workqueue: events work_for_cpu_fn
|Call Trace:
| dump_stack+0x7d/0xb6
| ___might_sleep+0x1eb/0x250
| down_write+0x17/0x60
| start_creating+0x4c/0xe0
| debugfs_create_dir+0x9/0x100
| ccp5_debugfs_setup+0x191/0x1b0
| ccp5_init+0x8a7/0x8c0
| ccp_dev_init+0xb8/0xe0
| sp_init+0x6c/0x90
| sp_pci_probe+0x26e/0x590
| local_pci_probe+0x3f/0x90
| work_for_cpu_fn+0x11/0x20
| process_one_work+0x1ff/0x650
| worker_thread+0x1d4/0x3a0
| kthread+0xfe/0x130
| ret_from_fork+0x27/0x50

If any locking is required, a simple mutex will do it.
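
Something along these lines would suffice (sketch):

    static DEFINE_MUTEX(ccp_debugfs_lock);

    mutex_lock(&ccp_debugfs_lock);
    if (!ccp_debugfs_dir)
            ccp_debugfs_dir = debugfs_create_dir("ccp", NULL);
    mutex_unlock(&ccp_debugfs_lock);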

Cc: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Antoine Tenart [Fri, 23 Feb 2018 09:01:40 +0000 (10:01 +0100)]
crypto: atmel-aes - fix the keys zeroing on errors

The Atmel AES driver uses memzero_explicit on the keys on error, but the
variable zeroed isn't the right one because of a typo. Fix this by using
the right variable.

Fixes: 89a82ef87e01 ("crypto: atmel-authenc - add support to authenc(hmac(shaX), Y(aes)) modes")
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rui Miguel Silva [Thu, 22 Feb 2018 14:22:48 +0000 (14:22 +0000)]
crypto: caam - do not use mem and emi_slow clock for imx7x

i.MX7x SoCs use only two clocks for the CAAM module, so make sure we do
not try to use the mem and the emi_slow clocks when running on the imx7d
and imx7s machine types.

Cc: "Horia Geantă" <horia.geanta@nxp.com>
Cc: Aymen Sghaier <aymen.sghaier@nxp.com>
Cc: Fabio Estevam <fabio.estevam@nxp.com>
Cc: Peng Fan <peng.fan@nxp.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Lukas Auer <lukas.auer@aisec.fraunhofer.de>
Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rui Miguel Silva [Thu, 22 Feb 2018 14:22:47 +0000 (14:22 +0000)]
crypto: caam - Fix null dereference at error path

caam_remove() already removes the debugfs entry, so we need to drop the
duplicate removal done immediately before calling caam_remove().

This fixes a NULL dereference in the error paths when caam_probe() fails.

Fixes: 67c2315def06 ("crypto: caam - add Queue Interface (QI) backend support")

Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Cc: "Horia Geantă" <horia.geanta@nxp.com>
Cc: Aymen Sghaier <aymen.sghaier@nxp.com>
Cc: Fabio Estevam <fabio.estevam@nxp.com>
Cc: Peng Fan <peng.fan@nxp.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Lukas Auer <lukas.auer@aisec.fraunhofer.de>
Cc: <stable@vger.kernel.org> # 4.12+
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Rui Miguel Silva <rui.silva@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Brijesh Singh [Wed, 21 Feb 2018 14:41:39 +0000 (08:41 -0600)]
crypto: ccp - add check to get PSP master only when PSP is detected

Paulian reported the below kernel crash on Ryzen 5 system:

BUG: unable to handle kernel NULL pointer dereference at 0000000000000073
RIP: 0010:.LC0+0x41f/0xa00
RSP: 0018:ffffa9968003bdd0 EFLAGS: 00010002
RAX: ffffffffb113b130 RBX: 0000000000000000 RCX: 00000000000005a7
RDX: 00000000000000ff RSI: ffff8b46dee651a0 RDI: ffffffffb1bd617c
RBP: 0000000000000246 R08: 00000000000251a0 R09: 0000000000000000
R10: ffffd81f11a38200 R11: ffff8b52e8e0a161 R12: ffffffffb19db220
R13: 0000000000000007 R14: ffffffffb17e4888 R15: 5dccd7affc30a31e
FS:  0000000000000000(0000) GS:ffff8b46dee40000(0000) knlGS:0000000000000000
CR2: 0000000000000073 CR3: 000080128120a000 CR4: 00000000003406e0
Call Trace:
 ? sp_get_psp_master_device+0x56/0x80
 ? map_properties+0x540/0x540
 ? psp_pci_init+0x20/0xe0
 ? map_properties+0x540/0x540
 ? sp_mod_init+0x16/0x1a
 ? do_one_initcall+0x4b/0x190
 ? kernel_init_freeable+0x19b/0x23c
 ? rest_init+0xb0/0xb0
 ? kernel_init+0xa/0x100
 ? ret_from_fork+0x22/0x40

Since Ryzen does not support the PSP/SEV firmware, i->psp_data will be
NULL in all sp instances. In those cases, 'i' will point to the
list head after list_for_each_entry(), and dereferencing the head will
cause a kernel crash.

Add a check so that the master device lookup is performed only when
PSP/SEV is detected.
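
A sketch of the guard (illustrative):

    struct sp_device *master = NULL;

    list_for_each_entry(i, &sp_units, entry) {
            if (i->psp_data) {      /* PSP actually detected here */
                    master = i;
                    break;
            }
    }
    return master;                  /* NULL on PSP-less parts such as Ryzen 5 */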

Reported-by: Paulian Bogdan Marinca <paulian@marinca.net>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
CC: Gary R Hook <gary.hook@amd.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:28 +0000 (23:48 -0800)]
crypto: ablk_helper - remove ablk_helper

All users of ablk_helper have been converted over to crypto_simd, so
remove ablk_helper.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:27 +0000 (23:48 -0800)]
crypto: x86/glue_helper - rename glue_skwalk_fpu_begin()

There are no users of the original glue_fpu_begin() anymore, so rename
glue_skwalk_fpu_begin() to glue_fpu_begin() so that it matches
glue_fpu_end() again.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:26 +0000 (23:48 -0800)]
crypto: x86/glue_helper - remove blkcipher_walk functions

Now that all glue_helper users have been switched from the blkcipher
interface over to the skcipher interface, remove the versions of the
glue_helper functions that handled the blkcipher interface.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:25 +0000 (23:48 -0800)]
crypto: lrw - remove lrw_crypt()

Now that all users of lrw_crypt() have been removed in favor of the LRW
template wrapping an ECB mode algorithm, remove lrw_crypt().  Also
remove crypto/lrw.h as that is no longer needed either; and fold
'struct lrw_table_ctx' into 'struct priv', lrw_init_table() into
setkey(), and lrw_free_table() into exit_tfm().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:24 +0000 (23:48 -0800)]
crypto: xts - remove xts_crypt()

Now that all users of xts_crypt() have been removed in favor of the XTS
template wrapping an ECB mode algorithm, remove xts_crypt().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:23 +0000 (23:48 -0800)]
crypto: x86/camellia-aesni-avx, avx2 - convert to skcipher interface

Convert the AESNI AVX and AESNI AVX2 implementations of Camellia from
the (deprecated) ablkcipher and blkcipher interfaces over to the
skcipher interface.  Note that this includes replacing the use of
ablk_helper with crypto_simd.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:22 +0000 (23:48 -0800)]
crypto: x86/camellia - convert to skcipher interface

Convert the x86 asm implementation of Camellia from the (deprecated)
blkcipher interface over to the skcipher interface.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:21 +0000 (23:48 -0800)]
crypto: x86/camellia - remove XTS algorithm

The XTS template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic XTS code themselves via xts_crypt().

Remove the xts-camellia-asm algorithm which did this.  Users who request
xts(camellia) and previously would have gotten xts-camellia-asm will now
get xts(ecb-camellia-asm) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:20 +0000 (23:48 -0800)]
crypto: x86/camellia - remove LRW algorithm

The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().

Remove the lrw-camellia-asm algorithm which did this.  Users who request
lrw(camellia) and previously would have gotten lrw-camellia-asm will now
get lrw(ecb-camellia-asm) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:19 +0000 (23:48 -0800)]
crypto: x86/camellia-aesni-avx2 - remove LRW algorithm

The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().

Remove the lrw-camellia-aesni-avx2 algorithm which did this.  Users who
request lrw(camellia) and previously would have gotten
lrw-camellia-aesni-avx2 will now get lrw(ecb-camellia-aesni-avx2)
instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:18 +0000 (23:48 -0800)]
crypto: x86/camellia-aesni-avx - remove LRW algorithm

The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().

Remove the lrw-camellia-aesni algorithm which did this.  Users who
request lrw(camellia) and previously would have gotten
lrw-camellia-aesni will now get lrw(ecb-camellia-aesni) instead, which
is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:17 +0000 (23:48 -0800)]
crypto: x86/des3_ede - convert to skcipher interface

Convert the x86 asm implementation of Triple DES from the (deprecated)
blkcipher interface over to the skcipher interface.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:16 +0000 (23:48 -0800)]
crypto: x86/blowfish: convert to skcipher interface

Convert the x86 asm implementation of Blowfish from the (deprecated)
blkcipher interface over to the skcipher interface.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:15 +0000 (23:48 -0800)]
crypto: x86/cast6-avx - convert to skcipher interface

Convert the AVX implementation of CAST6 from the (deprecated) ablkcipher
and blkcipher interfaces over to the skcipher interface.  Note that this
includes replacing the use of ablk_helper with crypto_simd.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:14 +0000 (23:48 -0800)]
crypto: x86/cast6-avx - remove LRW algorithm

The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().

Remove the lrw-cast6-avx algorithm which did this.  Users who request
lrw(cast6) and previously would have gotten lrw-cast6-avx will now get
lrw(ecb-cast6-avx) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:13 +0000 (23:48 -0800)]
crypto: x86/cast5-avx - convert to skcipher interface

Convert the AVX implementation of CAST5 from the (deprecated) ablkcipher
and blkcipher interfaces over to the skcipher interface.  Note that this
includes replacing the use of ablk_helper with crypto_simd.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:12 +0000 (23:48 -0800)]
crypto: x86/cast5-avx - fix ECB encryption when long sg follows short one

With ecb-cast5-avx, if a 128+ byte scatterlist element followed a
shorter one, then the algorithm accidentally encrypted/decrypted only 8
bytes instead of the expected 128 bytes.  Fix it by setting the
encryption/decryption 'fn' correctly.

Fixes: c12ab20b162c ("crypto: cast5/avx - avoid using temporary stack buffers")
Cc: <stable@vger.kernel.org> # v3.8+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:11 +0000 (23:48 -0800)]
crypto: x86/twofish-avx - convert to skcipher interface

Convert the AVX implementation of Twofish from the (deprecated)
ablkcipher and blkcipher interfaces over to the skcipher interface.
Note that this includes replacing the use of ablk_helper with
crypto_simd.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:10 +0000 (23:48 -0800)]
crypto: x86/twofish-avx - remove LRW algorithm

The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().

Remove the lrw-twofish-avx algorithm which did this.  Users who request
lrw(twofish) and previously would have gotten lrw-twofish-avx will now
get lrw(ecb-twofish-avx) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:09 +0000 (23:48 -0800)]
crypto: x86/twofish-3way - convert to skcipher interface

Convert the 3-way implementation of Twofish from the (deprecated)
blkcipher interface over to the skcipher interface.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:08 +0000 (23:48 -0800)]
crypto: x86/twofish-3way - remove XTS algorithm

The XTS template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic XTS code themselves via xts_crypt().

Remove the xts-twofish-3way algorithm which did this.  Users who request
xts(twofish) and previously would have gotten xts-twofish-3way will now
get xts(ecb-twofish-3way) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:07 +0000 (23:48 -0800)]
crypto: x86/twofish-3way - remove LRW algorithm

The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().

Remove the lrw-twofish-3way algorithm which did this.  Users who request
lrw(twofish) and previously would have gotten lrw-twofish-3way will now
get lrw(ecb-twofish-3way) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:06 +0000 (23:48 -0800)]
crypto: x86/serpent-avx,avx2 - convert to skcipher interface

Convert the AVX and AVX2 implementations of Serpent from the
(deprecated) ablkcipher and blkcipher interfaces over to the skcipher
interface.  Note that this includes replacing the use of ablk_helper
with crypto_simd.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:05 +0000 (23:48 -0800)]
crypto: x86/serpent-avx - remove LRW algorithm

The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().

Remove the lrw-serpent-avx algorithm which did this.  Users who request
lrw(serpent) and previously would have gotten lrw-serpent-avx will now
get lrw(ecb-serpent-avx) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:04 +0000 (23:48 -0800)]
crypto: x86/serpent-avx2 - remove LRW algorithm

The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().

Remove the lrw-serpent-avx2 algorithm which did this.  Users who request
lrw(serpent) and previously would have gotten lrw-serpent-avx2 will now
get lrw(ecb-serpent-avx2) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:03 +0000 (23:48 -0800)]
crypto: x86/serpent-sse2 - convert to skcipher interface

Convert the SSE2 implementation of Serpent from the (deprecated)
ablkcipher and blkcipher interfaces over to the skcipher interface.
Note that this includes replacing the use of ablk_helper with
crypto_simd.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:02 +0000 (23:48 -0800)]
crypto: x86/serpent-sse2 - remove XTS algorithm

The XTS template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic XTS code themselves via xts_crypt().

Remove the xts-serpent-sse2 algorithm which did this.  Users who request
xts(serpent) and previously would have gotten xts-serpent-sse2 will now
get xts(ecb-serpent-sse2) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:01 +0000 (23:48 -0800)]
crypto: x86/serpent-sse2 - remove LRW algorithm

The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().

Remove the lrw-serpent-sse2 algorithm which did this.  Users who request
lrw(serpent) and previously would have gotten lrw-serpent-sse2 will now
get lrw(ecb-serpent-sse2) instead, which is just as fast.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:48:00 +0000 (23:48 -0800)]
crypto: x86/glue_helper - add skcipher_walk functions

Add ECB, CBC, and CTR functions to glue_helper which use skcipher_walk
rather than blkcipher_walk.  This will allow converting the remaining
x86 algorithms from the blkcipher interface over to the skcipher
interface, after which we'll be able to remove the blkcipher_walk
versions of these functions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 20 Feb 2018 07:47:59 +0000 (23:47 -0800)]
crypto: simd - allow registering multiple algorithms at once

Add a function to crypto_simd that registers an array of skcipher
algorithms, then allocates and registers the simd wrapper algorithms for
them.  It assumes the naming scheme where the names of the underlying
algorithms are prefixed with two underscores.

Also add the corresponding 'unregister' function.

Most of the x86 crypto modules will be able to use these.
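
Typical usage from a glue module would then look something like this
(sketch; assume algs[] is the array of "__"-prefixed skcipher_alg
entries and simd_algs[] a matching array of wrapper pointers):

    /* register the underlying skciphers plus their simd wrappers */
    ret = simd_register_skciphers_compat(algs, ARRAY_SIZE(algs),
                                         simd_algs);
    if (ret)
            return ret;
    /* ... and on module exit: */
    simd_unregister_skciphers(algs, ARRAY_SIZE(algs), simd_algs);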

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gilad Ben-Yossef [Mon, 19 Feb 2018 14:51:24 +0000 (14:51 +0000)]
crypto: ccree - replace memset+kfree with kzfree

Replace memset to 0 followed by kfree with kzfree for
simplicity.
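
The shape of the change (illustrative):

    -       memset(ctx->key, 0, key_len);
    -       kfree(ctx->key);
    +       kzfree(ctx->key);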

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gilad Ben-Yossef [Mon, 19 Feb 2018 14:51:23 +0000 (14:51 +0000)]
crypto: ccree - add support for older HW revs

Add support for the legacy CryptoCell 630 and 710 revs.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gilad Ben-Yossef [Mon, 19 Feb 2018 14:51:22 +0000 (14:51 +0000)]
dt-bindings: Add DT bindings for ccree 710 and 630p

Add device tree bindings for Arm CryptoCell 710 and 630p hardware
revisions.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gilad Ben-Yossef [Mon, 19 Feb 2018 14:51:21 +0000 (14:51 +0000)]
crypto: ccree - remove unused definitions

Remove enum definitions which are not used by the REE interface
driver.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Robin Murphy [Mon, 19 Feb 2018 13:55:36 +0000 (13:55 +0000)]
crypto: marvell/cesa - Clean up redundant #include

The inclusion of dma-direct.h was only needed temporarily to prevent
breakage from the DMA API rework, since the actual CESA fix making it
redundant was merged in parallel. Now that both have landed, it can go.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
lionel.debieve@st.com [Thu, 15 Feb 2018 13:03:12 +0000 (14:03 +0100)]
hwrng: stm32 - rework read timeout calculation

Increase the timeout delay to support the longer timings involved in
RNG initialization. The measurement is now based on a timer instead of
counting instructions per iteration, which is not accurate on all
targets.

Signed-off-by: Lionel Debieve <lionel.debieve@st.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
lionel.debieve@st.com [Thu, 15 Feb 2018 13:03:11 +0000 (14:03 +0100)]
dt-bindings: rng: add clock detection error for stm32

Add an optional property to enable the clock detection error on the
RNG block. It is used to allow a slow clock source which still gives
correct entropy for the RNG.

Signed-off-by: Lionel Debieve <lionel.debieve@st.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
lionel.debieve@st.com [Thu, 15 Feb 2018 13:03:10 +0000 (14:03 +0100)]
hwrng: stm32 - allow disable clock error detection

Add a new property that allows disabling the (non-mandatory) clock
error detection, which is needed when the selected clock source is out
of specification.

Signed-off-by: Lionel Debieve <lionel.debieve@st.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
lionel.debieve@st.com [Thu, 15 Feb 2018 13:03:09 +0000 (14:03 +0100)]
dt-bindings: rng: add reset node for stm32

Add an optional resets property for the RNG.

Signed-off-by: Lionel Debieve <lionel.debieve@st.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
lionel.debieve@st.com [Thu, 15 Feb 2018 13:03:08 +0000 (14:03 +0100)]
hwrng: stm32 - add reset during probe

Avoid an issue when probing the RNG without a reset if a bad status has
been detected previously.

Signed-off-by: Lionel Debieve <lionel.debieve@st.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fengguang Wu [Thu, 15 Feb 2018 16:40:13 +0000 (00:40 +0800)]
crypto: ccree - fix memdup.cocci warnings

drivers/crypto/ccree/cc_cipher.c:629:15-22: WARNING opportunity for kmemdep

 Use kmemdup rather than duplicating its implementation

Generated by: scripts/coccinelle/api/memdup.cocci

Fixes: 63ee04c8b491 ("crypto: ccree - add skcipher support")
CC: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Markus Elfring [Thu, 15 Feb 2018 10:38:30 +0000 (11:38 +0100)]
crypto: atmel - Delete error messages for a failed memory allocation in six functions

Omit extra messages for a memory allocation failure in these functions.

This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Markus Elfring [Wed, 14 Feb 2018 21:05:11 +0000 (22:05 +0100)]
crypto: bcm - Delete an error message for a failed memory allocation in do_shash()

Omit an extra message for a memory allocation failure in this function.

This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Markus Elfring [Wed, 14 Feb 2018 20:34:54 +0000 (21:34 +0100)]
crypto: bfin_crc - Delete an error message for a failed memory allocation in bfin_crypto_crc_probe()

Omit an extra message for a memory allocation failure in this function.

This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 14 Feb 2018 18:42:23 +0000 (10:42 -0800)]
crypto: speck - add test vectors for Speck64-XTS

Add test vectors for Speck64-XTS, generated in userspace using C code.
The inputs were borrowed from the AES-XTS test vectors, with key lengths
adjusted.

xts-speck64-neon passes these tests.  However, they aren't currently
applicable for the generic XTS template, as that only supports a 128-bit
block size.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 14 Feb 2018 18:42:22 +0000 (10:42 -0800)]
crypto: speck - add test vectors for Speck128-XTS

Add test vectors for Speck128-XTS, generated in userspace using C code.
The inputs were borrowed from the AES-XTS test vectors.

Both xts(speck128-generic) and xts-speck128-neon pass these tests.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 14 Feb 2018 18:42:21 +0000 (10:42 -0800)]
crypto: arm/speck - add NEON-accelerated implementation of Speck-XTS

Add an ARM NEON-accelerated implementation of Speck-XTS.  It operates on
128-byte chunks at a time, i.e. 8 blocks for Speck128 or 16 blocks for
Speck64.  Each 128-byte chunk goes through XTS preprocessing, then is
encrypted/decrypted (doing one cipher round for all the blocks, then the
next round, etc.), then goes through XTS postprocessing.

The performance depends on the processor but can be about 3 times faster
than the generic code.  For example, on an ARMv7 processor we observe
the following performance with Speck128/256-XTS:

    xts-speck128-neon:     Encryption 107.9 MB/s, Decryption 108.1 MB/s
    xts(speck128-generic): Encryption  32.1 MB/s, Decryption  36.6 MB/s

In comparison to AES-256-XTS without the Cryptography Extensions:

    xts-aes-neonbs:        Encryption  41.2 MB/s, Decryption  36.7 MB/s
    xts(aes-asm):          Encryption  31.7 MB/s, Decryption  30.8 MB/s
    xts(aes-generic):      Encryption  21.2 MB/s, Decryption  20.9 MB/s

Speck64/128-XTS is even faster:

    xts-speck64-neon:      Encryption 138.6 MB/s, Decryption 139.1 MB/s

Note that as with the generic code, only the Speck128 and Speck64
variants are supported.  Also, for now only the XTS mode of operation is
supported, to target the disk and file encryption use cases.  The NEON
code also only handles the portion of the data that is evenly divisible
into 128-byte chunks, with any remainder handled by a C fallback.  Of
course, other modes of operation could be added later if needed, and/or
the NEON code could be updated to handle other buffer sizes.

The XTS specification is only defined for AES which has a 128-bit block
size, so for the GF(2^64) math needed for Speck64-XTS we use the
reducing polynomial 'x^64 + x^4 + x^3 + x + 1' given by the original XEX
paper.  Of course, when possible users should use Speck128-XTS, but even
that may be too slow on some processors; Speck64-XTS can be faster.
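
Concretely, multiplying a tweak by x under that polynomial reduces with
the constant 0x1B, which encodes x^4 + x^3 + x + 1 (illustrative C):

    static inline u64 speck64_xts_mul_x(u64 t)
    {
            /* GF(2^64) doubling mod x^64 + x^4 + x^3 + x + 1 */
            return (t << 1) ^ ((t & (1ULL << 63)) ? 0x1B : 0);
    }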

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 14 Feb 2018 18:42:20 +0000 (10:42 -0800)]
crypto: speck - export common helpers

Export the Speck constants and transform context and the ->setkey(),
->encrypt(), and ->decrypt() functions so that they can be reused by the
ARM NEON implementation of Speck-XTS.  The generic key expansion code
will be reused because it is not performance-critical and is not
vectorizable, while the generic encryption and decryption functions are
needed as fallbacks and for the XTS tweak encryption.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Wed, 14 Feb 2018 18:42:19 +0000 (10:42 -0800)]
crypto: speck - add support for the Speck block cipher

Add a generic implementation of Speck, including the Speck128 and
Speck64 variants.  Speck is a lightweight block cipher that can be much
faster than AES on processors that don't have AES instructions.
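
For a sense of why ARX ciphers are fast in software, here is the Speck128
round function from the published specification, as a minimal
self-contained C sketch (the identifier names are ours):

    #include <stdint.h>

    static inline uint64_t ror64(uint64_t v, int n)
    {
            return (v >> n) | (v << (64 - n));
    }

    static inline uint64_t rol64(uint64_t v, int n)
    {
            return (v << n) | (v >> (64 - n));
    }

    /* One Speck128 round: nothing but add, rotate, and xor, all of
     * which are single cheap instructions on virtually any CPU. */
    static void speck128_round(uint64_t *x, uint64_t *y, uint64_t k)
    {
            *x = (ror64(*x, 8) + *y) ^ k;
            *y = rol64(*y, 3) ^ *x;
    }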

We are planning to offer Speck-XTS (probably Speck128/256-XTS) as an
option for dm-crypt and fscrypt on Android, for low-end mobile devices
with older CPUs such as ARMv7 which don't have the Cryptography
Extensions.  Currently, such devices are unencrypted because AES is not
fast enough, even when the NEON bit-sliced implementation of AES is
used.  Other AES alternatives such as Twofish, Threefish, Camellia,
CAST6, and Serpent aren't fast enough either; it seems that only a
modern ARX cipher can provide sufficient performance on these devices.

This is a replacement for our original proposal
(https://patchwork.kernel.org/patch/10101451/) which was to offer
ChaCha20 for these devices.  However, the use of a stream cipher for
disk/file encryption with no space to store nonces would have been much
more insecure than we thought initially, given that it would be used on
top of flash storage as well as potentially on top of F2FS, neither of
which is guaranteed to overwrite data in-place.

Speck has been somewhat controversial due to its origin.  Nevertheless,
it has a straightforward design (it's an ARX cipher), and it currently
appears to be the leading software-optimized lightweight block cipher,
with the most cryptanalysis.  It's also easy to implement without side
channels, unlike AES.  Moreover, we only intend Speck to be used when
the status quo is no encryption, due to AES not being fast enough.

We've also considered a novel length-preserving encryption mode based on
ChaCha20 and Poly1305.  While theoretically attractive, such a mode
would be a brand new crypto construction and would be more complicated
and difficult to implement efficiently in comparison to Speck-XTS.

There is confusion about the byte and word orders of Speck, since the
original paper doesn't specify them.  But we have implemented it using
the orders the authors recommended in correspondence with them.  The
test vectors are taken from the original paper but were mapped to byte
arrays using the recommended byte and word orders.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: aesni - Update aesni-intel_glue to use scatter/gather
Dave Watson [Wed, 14 Feb 2018 17:40:58 +0000 (09:40 -0800)]
crypto: aesni - Update aesni-intel_glue to use scatter/gather

Add a gcmaes_crypt_by_sg routine that does the crypto directly on the
scatter/gather lists.  Either src or dst may contain multiple buffers,
so iterate over both at the same time if they are different.  If the
input is the same as the output, iterate over only one.

Currently both the AAD and TAG must be linear, so copy them out with
scatterwalk_map_and_copy().  If the first buffer contains the entire
AAD, we can optimize and skip the copy.  Since the AAD can be any
size, a copy must live on the heap.  The TAG can stay on the stack
since it is never more than 16 bytes.
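
A hedged sketch of the copy-out path (scatterwalk_map_and_copy() is the
existing kernel helper for copying between a linear buffer and a
scatterlist; the surrounding names are illustrative):

    u8 tagbuf[16];          /* TAG fits on the stack: at most 16 bytes */
    u8 *assoc;

    /* AAD can be any size, so a copy has to go on the heap */
    assoc = kmalloc(assoclen, GFP_ATOMIC);
    if (!assoc)
            return -ENOMEM;
    /* gather the possibly-scattered AAD into the linear buffer */
    scatterwalk_map_and_copy(assoc, req->src, 0, assoclen, 0);
    /* ... run the by-sg crypt routines ... */
    /* on encryption, write the computed tag back out to dst */
    scatterwalk_map_and_copy(tagbuf, req->dst,
                             req->assoclen + req->cryptlen, authsize, 1);
    kfree(assoc);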

Only the SSE routines are updated so far, so keep the previous
gcmaes_en/decrypt routines and branch to the scatter/gather ones when
the key size is inappropriate for AVX or the machine is SSE-only.

Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: aesni - Introduce scatter/gather asm function stubs
Dave Watson [Wed, 14 Feb 2018 17:40:47 +0000 (09:40 -0800)]
crypto: aesni - Introduce scatter/gather asm function stubs

The asm macros are all set up now; introduce the entry points.

GCM_INIT and GCM_COMPLETE now have their arguments supplied explicitly,
so the new scatter/gather entry points take only the arguments they
actually need.

Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: aesni - Add fast path for > 16 byte update
Dave Watson [Wed, 14 Feb 2018 17:40:31 +0000 (09:40 -0800)]
crypto: aesni - Add fast path for > 16 byte update

We can fast-path any read of fewer than 16 bytes when the full message
is more than 16 bytes: do one full 16-byte read that ends at the tail
and shift over by the appropriate amount.  Usually we are reading more
than 16 bytes, so in the average case this should be faster than the
READ_PARTIAL macro introduced in b20209c91e2.
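
In C terms the trick looks like this (a hedged userspace sketch; the asm
does the equivalent with XMM loads and byte shifts):

    #include <string.h>

    /* Read the last 'rem' bytes of msg (rem < 16, total >= 16) by
     * doing one full 16-byte read that overlaps already-read data,
     * then shifting the wanted tail to the front, instead of
     * assembling the tail byte by byte. */
    static void read_tail(unsigned char block[16],
                          const unsigned char *msg,
                          size_t total, size_t rem)
    {
            memcpy(block, msg + total - 16, 16);
            memmove(block, block + 16 - rem, rem);
    }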

Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: aesni - Introduce partial block macro
Dave Watson [Wed, 14 Feb 2018 17:40:19 +0000 (09:40 -0800)]
crypto: aesni - Introduce partial block macro

Before this patch, multiple calls to GCM_ENC_DEC would succeed, but
only if the length passed to each call was a multiple of 16 bytes.

Handle partial blocks at the start of GCM_ENC_DEC, and update
aadhash as appropriate.

The data offset %r11 is also updated after the partial block.
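
In C terms the bookkeeping is roughly this (a hedged sketch; the real
logic lives in asm, ghash_update() and pblock are illustrative names,
and the actual macro also encrypts buffered data with the saved
keystream):

    if (ctx->pblocklen) {   /* tail left over from the previous call */
            unsigned int fill =
                    min_t(unsigned int, 16 - ctx->pblocklen, len);

            memcpy(ctx->pblock + ctx->pblocklen, src, fill);
            ctx->pblocklen += fill;
            src += fill;
            len -= fill;
            if (ctx->pblocklen == 16) {   /* block complete: fold it in */
                    ghash_update(ctx->aadhash, ctx->pblock);
                    ctx->pblocklen = 0;
            }
    }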

Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: aesni - Move HashKey computation from stack to gcm_context
Dave Watson [Wed, 14 Feb 2018 17:40:10 +0000 (09:40 -0800)]
crypto: aesni - Move HashKey computation from stack to gcm_context

HashKey computation only needs to happen once per scatter/gather
operation, so save it between calls in the gcm_context struct instead
of on the stack.  Since the asm no longer stores anything on the stack,
we can use %rsp directly and clean up the frame save/restore macros a
bit.

Hash keys actually need to be calculated only once per key and could
be computed when set_key is called; however, the current glue code
falls back to the generic AES code if the FPU is disabled.
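
For context, GCM's HashKey is simply the block cipher applied to the
all-zero block, H = AES_K(0^128), which is why it depends only on the
key; a hedged sketch of the caching (aes_encrypt_block() and the
hash_key fields are hypothetical names):

    static const u8 zeroes[16];

    if (!ctx->hash_key_valid) {
            /* H = AES_K(0^128); safe to cache, it depends only on K */
            aes_encrypt_block(key, ctx->hash_key, zeroes);
            ctx->hash_key_valid = true;
    }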

Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: aesni - Move ghash_mul to GCM_COMPLETE
Dave Watson [Wed, 14 Feb 2018 17:39:55 +0000 (09:39 -0800)]
crypto: aesni - Move ghash_mul to GCM_COMPLETE

Prepare to handle partial blocks between scatter/gather calls.
For the last partial block, we only want to calculate the aadhash
in GCM_COMPLETE, and a new partial block macro will handle both
aadhash update and encrypting partial blocks between calls.

Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: aesni - Fill in new context data structures
Dave Watson [Wed, 14 Feb 2018 17:39:45 +0000 (09:39 -0800)]
crypto: aesni - Fill in new context data structures

Fill in aadhash, aadlen, pblocklen, and curcount with appropriate values.
pblocklen, aadhash, and pblockenckey are also updated at the end
of each scatter/gather operation, to be carried over to the next
operation.

Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: aesni - Split AAD hash calculation to separate macro
Dave Watson [Wed, 14 Feb 2018 17:39:36 +0000 (09:39 -0800)]
crypto: aesni - Split AAD hash calculation to separate macro

AAD hash only needs to be calculated once for each scatter/gather operation.
Move it to its own macro, and call it from GCM_INIT instead of
INITIAL_BLOCKS.

Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: aesni - Introduce gcm_context_data
Dave Watson [Wed, 14 Feb 2018 17:39:23 +0000 (09:39 -0800)]
crypto: aesni - Introduce gcm_context_data

Introduce a gcm_context_data struct that will be used to pass context
data between scatter/gather update calls.  It is passed as the second
argument (after the crypto keys); the other arguments are renumbered.
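
A hedged sketch of the state such a struct carries, using the field
names mentioned elsewhere in this series (exact types, ordering, and
any additional fields are assumptions; the asm depends on the real
layout):

    struct gcm_context_data {
            u8  aadhash[16];      /* running GHASH value */
            u64 aadlen;           /* total AAD length, for the final
                                   * lengths block */
            u8  curcount[16];     /* current counter block */
            u8  pblockenckey[16]; /* keystream for an in-flight
                                   * partial block */
            u64 pblocklen;        /* bytes buffered in the partial block */
    };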

Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
6 years agocrypto: aesni - Merge encode and decode to GCM_ENC_DEC macro
Dave Watson [Wed, 14 Feb 2018 17:39:10 +0000 (09:39 -0800)]
crypto: aesni - Merge encode and decode to GCM_ENC_DEC macro

Make a macro for the main encode/decode routine.  Only a small handful
of lines differ between enc and dec.  This will also become the main
scatter/gather update routine.

Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>