platform/kernel/linux-starfive.git
crypto: marvell/cesa - Fix use of sg_pcopy on iomem pointer
Herbert Xu [Thu, 21 Jan 2021 05:16:46 +0000 (16:16 +1100)]
crypto: marvell/cesa - Fix use of sg_pcopy on iomem pointer

The cesa driver mixes use of iomem pointers and normal kernel
pointers.  Sometimes it uses memcpy_toio/memcpy_fromio on both
while at other times it uses straight memcpy on both, through
the sg_pcopy_* helpers.

This patch fixes this by adding a new field sram_pool to the engine
for the normal pointer case which then allows us to use the right
interface depending on the value of engine->pool.
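
A minimal sketch of the resulting dispatch (helper usage illustrative;
the driver's actual copy paths are more involved):

    if (engine->pool) {
            /* SRAM backed by normal memory: plain copies are fine */
            sg_pcopy_to_buffer(req->src, sg_nents(req->src),
                               engine->sram_pool, len, offset);
    } else {
            /* SRAM is ioremap()ed: must go through the iomem accessors */
            memcpy_toio(engine->sram, buf, len);
    }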

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: talitos - Fix ctr(aes) on SEC1
Christophe Leroy [Wed, 20 Jan 2021 18:57:25 +0000 (18:57 +0000)]
crypto: talitos - Fix ctr(aes) on SEC1

While ctr(aes) requires the use of a special descriptor on SEC2 (see
commit 70d355ccea89 ("crypto: talitos - fix ctr-aes-talitos")), that
special descriptor doesn't work on SEC1, see commit e738c5f15562
("powerpc/8xx: Add DT node for using the SEC engine of the MPC885").

However, the common nonsnoop descriptor works properly on SEC1 for
ctr(aes).

Add a second template for ctr(aes) that will be registered
only on SEC1.

Fixes: 70d355ccea89 ("crypto: talitos - fix ctr-aes-talitos")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: talitos - Work around SEC6 ERRATA (AES-CTR mode data size error)
Christophe Leroy [Wed, 20 Jan 2021 18:57:24 +0000 (18:57 +0000)]
crypto: talitos - Work around SEC6 ERRATA (AES-CTR mode data size error)

Talitos Security Engine AESU considers any input
data size that is not a multiple of 16 bytes to be an error.
This is not a problem in general, except for Counter mode,
which is a stream cipher and can have an input of any size.

The test manager for ctr(aes) fails on the 4th test vector, which
has a length of 499 bytes, while all previous vectors, whose lengths
are multiples of 16 bytes, succeed.

As suggested by Freescale, round up the input data length to the
nearest 16 bytes.
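
The workaround amounts to something like this (a sketch using the
kernel's ALIGN() helper; the actual patch adjusts the descriptor
length fields):

    /* round the CTR payload up to the AESU's 16-byte granularity */
    len = ALIGN(cryptlen, AES_BLOCK_SIZE);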

Fixes: 5e75ae1b3cef ("crypto: talitos - add new crypto modes")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: hisilicon/hpre - add ecc algorithm inquiry for uacce device
Hui Tang [Mon, 18 Jan 2021 08:18:19 +0000 (16:18 +0800)]
crypto: hisilicon/hpre - add ecc algorithm inquiry for uacce device

Let uacce SysFS support inquiry of more algorithms, such as
'ecdh/ecdsa/sm2/x25519/x448'.

Signed-off-by: Hui Tang <tanghui20@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: hisilicon/hpre - add two RAS correctable errors processing
Hui Tang [Mon, 18 Jan 2021 08:17:25 +0000 (16:17 +0800)]
crypto: hisilicon/hpre - add two RAS correctable errors processing

1. One CE error is a timeout detected while generating a random number.
2. The other is a timeout detected while SVA prefetches an address.

Signed-off-by: Hui Tang <tanghui20@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: hisilicon/hpre - delete ECC 1bit error reported threshold
Hui Tang [Mon, 18 Jan 2021 08:15:40 +0000 (16:15 +0800)]
crypto: hisilicon/hpre - delete ECC 1bit error reported threshold

Delete the 'HPRE_RAS_ECC1BIT_TH' register setting of hpre,
since the 'QM_RAS_CE_THRESHOLD' register of the qm already does this work.

Signed-off-by: Hui Tang <tanghui20@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: aesni - release FPU during skcipher walk API calls
Ard Biesheuvel [Sat, 16 Jan 2021 16:48:10 +0000 (17:48 +0100)]
crypto: aesni - release FPU during skcipher walk API calls

Taking ownership of the FPU in kernel mode disables preemption, and
this may result in excessive scheduling blackouts if the size of the
data being processed on the FPU is unbounded.

Given that taking and releasing the FPU is cheap these days on x86, we
can limit the impact of this issue easily for skcipher implementations,
by moving the FPU begin/end calls inside the skcipher walk processing
loop. Considering that skcipher walks operate on at most one page at a
time, doing so fully mitigates this issue.

This also permits the skcipher walk logic to use non-atomic kmalloc()
calls etc so we can change the 'atomic' bool argument in the calls to
skcipher_walk_virt() to false as well.
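
The resulting loop shape is roughly the following (a sketch, not the
exact driver code):

    while ((nbytes = walk.nbytes) > 0) {
            kernel_fpu_begin();
            /* at most one page is processed per FPU section */
            aesni_cbc_enc(ctx, walk.dst.virt.addr, walk.src.virt.addr,
                          nbytes & AES_BLOCK_MASK, walk.iv);
            kernel_fpu_end();
            err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
    }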

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: aesni - replace CTR function pointer with static call
Ard Biesheuvel [Sat, 16 Jan 2021 16:48:09 +0000 (17:48 +0100)]
crypto: aesni - replace CTR function pointer with static call

Indirect calls are very expensive on x86, so use a static call to set
the system-wide AES-NI/CTR asm helper.
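
The shape of the change is roughly (sketch; the AVX helper name is
illustrative):

    DEFINE_STATIC_CALL(aesni_ctr_enc_tfm, aesni_ctr_enc);

    /* at module init, once CPU features are known */
    if (boot_cpu_has(X86_FEATURE_AVX))
            static_call_update(aesni_ctr_enc_tfm, aesni_ctr_enc_avx_tfm);

    /* fast path: a patched direct call instead of an indirect one */
    static_call(aesni_ctr_enc_tfm)(ctx, out, in, len, iv);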

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: keembay - use 64-bit arithmetic for computing bit_len
Ovidiu Panait [Fri, 15 Jan 2021 20:46:05 +0000 (22:46 +0200)]
crypto: keembay - use 64-bit arithmetic for computing bit_len

src_size and aad_size are defined as u32, so the following expressions are
currently being evaluated using 32-bit arithmetic:

bit_len = src_size * 8;
...
bit_len = aad_size * 8;

However, bit_len is used afterwards in a context that expects a valid
64-bit value (the lower and upper 32-bit words of bit_len are extracted
and written to hw).

In order to make sure the correct bit length is generated and the 32-bit
multiplication does not wrap around, cast src_size and aad_size to u64.
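
After the change, the multiplications are evaluated in 64 bits:

    bit_len = (u64)src_size * 8;
    ...
    bit_len = (u64)aad_size * 8;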

Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
Acked-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: lib/chacha20poly1305 - define empty module exit function
Jason A. Donenfeld [Fri, 15 Jan 2021 19:30:12 +0000 (20:30 +0100)]
crypto: lib/chacha20poly1305 - define empty module exit function

With no mod_exit function, users are unable to unload the module after
use. I'm not aware of any reason why module unloading should be
prohibited for this one, so this commit simply adds an empty exit
function.
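
The addition is just the usual empty stub (sketch; the function name
is illustrative):

    static void __exit chacha20poly1305_exit(void)
    {
    }
    module_exit(chacha20poly1305_exit);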

Reported-and-tested-by: John Donnelly <john.p.donnelly@oracle.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: octeontx2 - register with linux crypto framework
Srujana Challa [Fri, 15 Jan 2021 13:52:27 +0000 (19:22 +0530)]
crypto: octeontx2 - register with linux crypto framework

The CPT offload module utilises the linux crypto framework to offload
crypto processing. This patch registers supported algorithms by
calling registration functions provided by the kernel crypto API.

The module currently supports:
- AES block cipher in CBC, ECB and XTS modes.
- 3DES block cipher in CBC and ECB modes.
- AEAD algorithms.
  authenc(hmac(sha1),cbc(aes)),
  authenc(hmac(sha256),cbc(aes)),
  authenc(hmac(sha384),cbc(aes)),
  authenc(hmac(sha512),cbc(aes)),
  authenc(hmac(sha1),ecb(cipher_null)),
  authenc(hmac(sha256),ecb(cipher_null)),
  authenc(hmac(sha384),ecb(cipher_null)),
  authenc(hmac(sha512),ecb(cipher_null)),
  rfc4106(gcm(aes)).

Signed-off-by: Suheil Chandran <schandran@marvell.com>
Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com>
Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: octeontx2 - add support to process the crypto request
Srujana Challa [Fri, 15 Jan 2021 13:52:26 +0000 (19:22 +0530)]
crypto: octeontx2 - add support to process the crypto request

Attach LFs to CPT VF to process the crypto requests and register
LF interrupts.

Signed-off-by: Suheil Chandran <schandran@marvell.com>
Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com>
Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: octeontx2 - add virtual function driver support
Srujana Challa [Fri, 15 Jan 2021 13:52:25 +0000 (19:22 +0530)]
crypto: octeontx2 - add virtual function driver support

Add support for the Marvell OcteonTX2 CPT virtual function
driver. This patch includes probe, PCI specific initialization
and interrupt handling.

Signed-off-by: Suheil Chandran <schandran@marvell.com>
Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com>
Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: octeontx2 - add support to get engine capabilities
Srujana Challa [Fri, 15 Jan 2021 13:52:24 +0000 (19:22 +0530)]
crypto: octeontx2 - add support to get engine capabilities

Adds support to get engine capabilities and adds a new mailbox
to share capabilities with the VF driver.

Signed-off-by: Suheil Chandran <schandran@marvell.com>
Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: octeontx2 - add LF framework
Srujana Challa [Fri, 15 Jan 2021 13:52:23 +0000 (19:22 +0530)]
crypto: octeontx2 - add LF framework

CPT RVU Local Functions (LFs) need to be attached to the
PF/VF to submit instructions to CPT.
This patch adds the interface to initialize and attach
the LFs. It also adds an interface to register the LFs'
interrupts.

Signed-off-by: Suheil Chandran <schandran@marvell.com>
Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com>
Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: octeontx2 - load microcode and create engine groups
Srujana Challa [Fri, 15 Jan 2021 13:52:22 +0000 (19:22 +0530)]
crypto: octeontx2 - load microcode and create engine groups

CPT includes microcoded GigaCypher symmetric engines (SEs), IPsec
symmetric engines (IEs), and asymmetric engines (AEs).
Each engine receives CPT instructions from the engine groups it has
subscribed to. This patch loads microcode, configures three engine
groups (one for SEs, one for IEs and one for AEs), and configures
all engines.

Signed-off-by: Suheil Chandran <schandran@marvell.com>
Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com>
Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: octeontx2 - enable SR-IOV and mailbox communication with VF
Srujana Challa [Fri, 15 Jan 2021 13:52:21 +0000 (19:22 +0530)]
crypto: octeontx2 - enable SR-IOV and mailbox communication with VF

Adds 'sriov_configure' to enable/disable virtual functions (VFs).
Also initializes VF<=>PF mailbox IRQs and registers handlers for
processing these mailbox messages.

Admin function (AF) handles resource allocation and configuration for
PFs and their VFs. PFs request the AF directly, via mailboxes.
Unlike PFs, VFs cannot send a mailbox request directly. A VF sends
mailbox messages to its parent PF, with which it shares a mailbox
region. The PF then forwards these messages to the AF. After handling
the request, the AF sends a response back to the VF, through the PF.

This patch adds support for this 'VF <=> PF <=> AF' mailbox
communication.

Signed-off-by: Suheil Chandran <schandran@marvell.com>
Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com>
Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: octeontx2 - add mailbox communication with AF
Srujana Challa [Fri, 15 Jan 2021 13:52:20 +0000 (19:22 +0530)]
crypto: octeontx2 - add mailbox communication with AF

In the resource virtualization unit (RVU), each PF and the AF
(admin function) share a 64KB reserved memory region for
communication. This patch initializes PF <=> AF mailbox IRQs and
registers handlers for processing these communication messages.

Signed-off-by: Suheil Chandran <schandran@marvell.com>
Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com>
Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: marvell - add Marvell OcteonTX2 CPT PF driver
Srujana Challa [Fri, 15 Jan 2021 13:52:19 +0000 (19:22 +0530)]
crypto: marvell - add Marvell OcteonTX2 CPT PF driver

Adds a skeleton for the Marvell OcteonTX2 CPT physical function
driver, which includes probe, PCI specific initialization and
hardware register defines.
RVU defines are present in the AF driver
(drivers/net/ethernet/marvell/octeontx2/af); header files from the
AF driver are included here to avoid duplication.

Signed-off-by: Suheil Chandran <schandran@marvell.com>
Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com>
Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/sha - add missing module aliases
Ard Biesheuvel [Thu, 14 Jan 2021 18:10:10 +0000 (19:10 +0100)]
crypto: arm64/sha - add missing module aliases

The accelerated, instruction based implementations of SHA1, SHA2 and
SHA3 are autoloaded based on CPU capabilities, given that the code is
modest in size, and widely used, which means that resolving the algo
name, loading all compatible modules and picking the one with the
highest priority is taken to be suboptimal.

However, if these algorithms are requested before this CPU feature
based matching and autoloading occurs, these modules are not even
considered, and we end up with suboptimal performance.

So add the missing module aliases for the various SHA implementations.
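
The aliases are plain MODULE_ALIAS_CRYPTO() annotations, e.g. (an
illustrative subset):

    MODULE_ALIAS_CRYPTO("sha1");
    MODULE_ALIAS_CRYPTO("sha224");
    MODULE_ALIAS_CRYPTO("sha256");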

Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: bcm - Fix sparse warnings
Herbert Xu [Thu, 14 Jan 2021 06:39:58 +0000 (17:39 +1100)]
crypto: bcm - Fix sparse warnings

This patch fixes a number of sparse warnings in the bcm driver.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto - shash: reduce minimum alignment of shash_desc structure
Ard Biesheuvel [Wed, 13 Jan 2021 09:11:35 +0000 (10:11 +0100)]
crypto - shash: reduce minimum alignment of shash_desc structure

Unlike many other structure types defined in the crypto API, the
'shash_desc' structure is permitted to live on the stack, which
implies its contents may not be accessed by DMA masters. (This is
due to the fact that the stack may be located in the vmalloc area,
which requires a different virtual-to-physical translation than the
one implemented by the DMA subsystem)

Our definition of CRYPTO_MINALIGN_ATTR is based on ARCH_KMALLOC_MINALIGN,
which may take DMA constraints into account on architectures that support
non-cache coherent DMA such as ARM and arm64. In this case, the value is
chosen to reflect the largest cacheline size in the system, in order to
ensure that explicit cache maintenance as required by non-coherent DMA
masters does not affect adjacent, unrelated slab allocations. On arm64,
this value is currently set at 128 bytes.

This means that applying CRYPTO_MINALIGN_ATTR to struct shash_desc is both
unnecessary (as it is never used for DMA), and undesirable, given that it
wastes stack space (on arm64, performing the alignment costs 112 bytes in
the worst case, and the hole between the 'tfm' and '__ctx' members takes
up another 120 bytes, resulting in an increased stack footprint of up to
232 bytes.) So instead, let's switch to the minimum SLAB alignment, which
does not take DMA constraints into account.

Note that this is a no-op for x86.
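
Concretely, the context member's alignment changes along these lines
(sketch):

    struct shash_desc {
            struct crypto_shash *tfm;
            void *__ctx[] __aligned(ARCH_SLAB_MINALIGN);
    };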

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: keembay-ocs-hcu - Add dependency on HAS_IOMEM and ARCH_KEEMBAY
Daniele Alessandrelli [Wed, 6 Jan 2021 15:27:33 +0000 (15:27 +0000)]
crypto: keembay-ocs-hcu - Add dependency on HAS_IOMEM and ARCH_KEEMBAY

Add the following additional dependencies for CRYPTO_DEV_KEEMBAY_OCS_HCU:

- HAS_IOMEM to prevent build failures

- ARCH_KEEMBAY to prevent asking the user about this driver when
  configuring a kernel without Intel Keem Bay platform support.

Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: keembay-ocs-hcu - Fix a WARN() message
Dan Carpenter [Wed, 6 Jan 2021 09:25:08 +0000 (12:25 +0300)]
crypto: keembay-ocs-hcu - Fix a WARN() message

The first argument to WARN() is a condition and the message is the
second argument, so this WARN() will only display the __func__ part
of the message.
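
The fix is to supply a real condition as the first argument so that
the format string is actually printed, e.g. (message text illustrative):

    WARN(1, "%s: unexpected HMAC state\n", __func__);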

Fixes: ae832e329a8d ("crypto: keembay-ocs-hcu - Add HMAC support")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86 - use local headers for x86 specific shared declarations
Ard Biesheuvel [Tue, 5 Jan 2021 16:48:09 +0000 (17:48 +0100)]
crypto: x86 - use local headers for x86 specific shared declarations

The Camellia, Serpent and Twofish related header files only contain
declarations that are shared between different implementations of the
respective algorithms residing under arch/x86/crypto, and none of their
contents should be used elsewhere. So move the header files into the
same location, and use local #includes instead.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86 - remove glue helper module
Ard Biesheuvel [Tue, 5 Jan 2021 16:48:08 +0000 (17:48 +0100)]
crypto: x86 - remove glue helper module

All dependencies on the x86 glue helper module have been replaced by
local instantiations of the new ECB/CBC preprocessor helper macros, so
the glue helper module can be retired.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/twofish - drop dependency on glue helper
Ard Biesheuvel [Tue, 5 Jan 2021 16:48:07 +0000 (17:48 +0100)]
crypto: x86/twofish - drop dependency on glue helper

Replace the glue helper dependency with implementations of ECB and CBC
based on the new CPP macros, which avoid the need for indirect calls.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/cast6 - drop dependency on glue helper
Ard Biesheuvel [Tue, 5 Jan 2021 16:48:06 +0000 (17:48 +0100)]
crypto: x86/cast6 - drop dependency on glue helper

Replace the glue helper dependency with implementations of ECB and CBC
based on the new CPP macros, which avoid the need for indirect calls.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/cast5 - drop dependency on glue helper
Ard Biesheuvel [Tue, 5 Jan 2021 16:48:05 +0000 (17:48 +0100)]
crypto: x86/cast5 - drop dependency on glue helper

Replace the glue helper dependency with implementations of ECB and CBC
based on the new CPP macros, which avoid the need for indirect calls.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/serpent - drop dependency on glue helper
Ard Biesheuvel [Tue, 5 Jan 2021 16:48:04 +0000 (17:48 +0100)]
crypto: x86/serpent - drop dependency on glue helper

Replace the glue helper dependency with implementations of ECB and CBC
based on the new CPP macros, which avoid the need for indirect calls.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/camellia - drop dependency on glue helper
Ard Biesheuvel [Tue, 5 Jan 2021 16:48:03 +0000 (17:48 +0100)]
crypto: x86/camellia - drop dependency on glue helper

Replace the glue helper dependency with implementations of ECB and CBC
based on the new CPP macros, which avoid the need for indirect calls.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86 - add some helper macros for ECB and CBC modes
Ard Biesheuvel [Tue, 5 Jan 2021 16:48:02 +0000 (17:48 +0100)]
crypto: x86 - add some helper macros for ECB and CBC modes

The x86 glue helper module is starting to show its age:
- It relies heavily on function pointers to invoke asm helper functions that
  operate on fixed input sizes that are relatively small. This means the
  performance is severely impacted by retpolines.
- It goes to great lengths to amortize the cost of kernel_fpu_begin()/end()
  over as much work as possible, which is no longer necessary now that FPU
  save/restore is done lazily, and doing so may cause unbounded scheduling
  blackouts due to the fact that enabling the FPU in kernel mode disables
  preemption.
- The CBC mode decryption helper makes backward strides through the input, in
  order to avoid a single block size memcpy() between chunks. Consuming the
  input in this manner is highly likely to defeat any hardware prefetchers,
  so it is better to go through the data linearly, and perform the extra
  memcpy() where needed (which is turned into direct loads and stores by the
  compiler anyway). Note that benchmarks won't show this effect, given that
  the memory they use is always cache hot.
- It implements blockwise XOR in terms of le128 pointers, which imply an
  alignment that is not guaranteed by the API, violating the C standard.

GCC does not seem to be smart enough to elide the indirect calls when the
function pointers are passed as arguments to static inline helper routines
modeled after the existing ones. So instead, let's create some CPP macros
that encapsulate the core of the ECB and CBC processing, so we can wire
them up for existing users of the glue helper module, i.e., Camellia,
Serpent, Twofish and CAST6.
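
A user of the macros then ends up looking roughly like this (Serpent
shown; macro and helper names approximate):

    static int ecb_encrypt(struct skcipher_request *req)
    {
            ECB_WALK_START(req, SERPENT_BLOCK_SIZE, SERPENT_PARALLEL_BLOCKS);
            ECB_BLOCK(SERPENT_PARALLEL_BLOCKS, serpent_ecb_enc_8way_avx);
            ECB_BLOCK(1, __serpent_encrypt);
            ECB_WALK_END();
    }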

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/blowfish - drop CTR mode implementation
Ard Biesheuvel [Tue, 5 Jan 2021 16:48:01 +0000 (17:48 +0100)]
crypto: x86/blowfish - drop CTR mode implementation

Blowfish in counter mode is never used in the kernel, so there
is no point in keeping an accelerated implementation around.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/des - drop CTR mode implementation
Ard Biesheuvel [Tue, 5 Jan 2021 16:48:00 +0000 (17:48 +0100)]
crypto: x86/des - drop CTR mode implementation

DES or Triple DES in counter mode is never used in the kernel, so there
is no point in keeping an accelerated implementation around.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/glue-helper - drop CTR helper routines
Ard Biesheuvel [Tue, 5 Jan 2021 16:47:59 +0000 (17:47 +0100)]
crypto: x86/glue-helper - drop CTR helper routines

The glue helper's CTR routines are no longer used, so drop them.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/twofish - drop CTR mode implementation
Ard Biesheuvel [Tue, 5 Jan 2021 16:47:58 +0000 (17:47 +0100)]
crypto: x86/twofish - drop CTR mode implementation

Twofish in CTR mode is never used by the kernel directly, and is highly
unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop
the accelerated CTR mode implementation, and instead, rely on the CTR
template and the bare cipher.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/cast6 - drop CTR mode implementation
Ard Biesheuvel [Tue, 5 Jan 2021 16:47:57 +0000 (17:47 +0100)]
crypto: x86/cast6 - drop CTR mode implementation

CAST6 in CTR mode is never used by the kernel directly, and is highly
unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop
the accelerated CTR mode implementation, and instead, rely on the CTR
template and the bare cipher.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/cast5 - drop CTR mode implementation
Ard Biesheuvel [Tue, 5 Jan 2021 16:47:56 +0000 (17:47 +0100)]
crypto: x86/cast5 - drop CTR mode implementation

CAST5 in CTR mode is never used by the kernel directly, and is highly
unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop
the accelerated CTR mode implementation, and instead, rely on the CTR
template and the bare cipher.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/serpent - drop CTR mode implementation
Ard Biesheuvel [Tue, 5 Jan 2021 16:47:55 +0000 (17:47 +0100)]
crypto: x86/serpent - drop CTR mode implementation

Serpent in CTR mode is never used by the kernel directly, and is highly
unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop
the accelerated CTR mode implementation, and instead, rely on the CTR
template and the bare cipher.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/camellia - drop CTR mode implementation
Ard Biesheuvel [Tue, 5 Jan 2021 16:47:54 +0000 (17:47 +0100)]
crypto: x86/camellia - drop CTR mode implementation

Camellia in CTR mode is never used by the kernel directly, and is highly
unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop
the accelerated CTR mode implementation, and instead, rely on the CTR
template and the bare cipher.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/glue-helper - drop XTS helper routines
Ard Biesheuvel [Tue, 5 Jan 2021 16:47:53 +0000 (17:47 +0100)]
crypto: x86/glue-helper - drop XTS helper routines

The glue helper's XTS routines are no longer used, so drop them.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/twofish - switch to XTS template
Ard Biesheuvel [Tue, 5 Jan 2021 16:47:52 +0000 (17:47 +0100)]
crypto: x86/twofish - switch to XTS template

Now that the XTS template can wrap accelerated ECB modes, it can be
used to implement Twofish in XTS mode as well, which turns out to
be at least as fast, and sometimes even faster.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/serpent - switch to XTS template
Ard Biesheuvel [Tue, 5 Jan 2021 16:47:51 +0000 (17:47 +0100)]
crypto: x86/serpent - switch to XTS template

Now that the XTS template can wrap accelerated ECB modes, it can be
used to implement Serpent in XTS mode as well, which turns out to
be at least as fast, and sometimes even faster.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/cast6 - switch to XTS template
Ard Biesheuvel [Tue, 5 Jan 2021 16:47:50 +0000 (17:47 +0100)]
crypto: x86/cast6 - switch to XTS template

Now that the XTS template can wrap accelerated ECB modes, it can be
used to implement CAST6 in XTS mode as well, which turns out to
be at least as fast, and sometimes even faster.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/camellia - switch to XTS template
Ard Biesheuvel [Tue, 5 Jan 2021 16:47:49 +0000 (17:47 +0100)]
crypto: x86/camellia - switch to XTS template

Now that the XTS template can wrap accelerated ECB modes, it can be
used to implement Camellia in XTS mode as well, which turns out to
be at least as fast, and sometimes even faster.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: marvell/cesa - Fix a spelling s/fautly/faulty/ in comment
Bhaskar Chowdhury [Tue, 5 Jan 2021 10:01:08 +0000 (15:31 +0530)]
crypto: marvell/cesa - Fix a spelling s/fautly/faulty/ in comment

s/fautly/faulty/p

Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: hisilicon/sec - register SEC device to uacce
Kai Ye [Tue, 5 Jan 2021 06:16:44 +0000 (14:16 +0800)]
crypto: hisilicon/sec - register SEC device to uacce

Register SEC device to uacce framework for user space.

Signed-off-by: Kai Ye <yekai13@huawei.com>
Reviewed-by: Zhou Wang <wangzhou1@hisilicon.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: hisilicon/hpre - register HPRE device to uacce
Kai Ye [Tue, 5 Jan 2021 06:16:43 +0000 (14:16 +0800)]
crypto: hisilicon/hpre - register HPRE device to uacce

Register HPRE device to uacce framework for user space.

Signed-off-by: Kai Ye <yekai13@huawei.com>
Reviewed-by: Zhou Wang <wangzhou1@hisilicon.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: hisilicon - add ZIP device using mode parameter
Kai Ye [Tue, 5 Jan 2021 06:16:42 +0000 (14:16 +0800)]
crypto: hisilicon - add ZIP device using mode parameter

Add a 'uacce_mode' parameter for ZIP, which can be set to 0 (default) or 1.
'0' means ZIP is only registered to kernel crypto, and '1' means it's
registered to both kernel crypto and UACCE.
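
A sketch of the knob (the driver wires it through a module parameter
with validation):

    static int uacce_mode; /* 0: kernel crypto only (default); 1: also UACCE */
    module_param(uacce_mode, int, 0444);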

Signed-off-by: Kai Ye <yekai13@huawei.com>
Reviewed-by: Zhou Wang <wangzhou1@hisilicon.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: hisilicon/qm - fix SVA bug on Kunpeng920
Kai Ye [Tue, 5 Jan 2021 06:12:03 +0000 (14:12 +0800)]
crypto: hisilicon/qm - fix SVA bug on Kunpeng920

Kunpeng920 SEC/HPRE/ZIP cannot support running user space SVA and kernel
Crypto at the same time. Therefore, the algorithms should not be registered
to Crypto when user space SVA is enabled.

Signed-off-by: Kai Ye <yekai13@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Reviewed-by: Zhou Wang <wangzhou1@hisilicon.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: bcm - Rename struct device_private to bcm_device_private
Jiri Olsa [Mon, 4 Jan 2021 23:02:37 +0000 (00:02 +0100)]
crypto: bcm - Rename struct device_private to bcm_device_private

Renaming 'struct device_private' to 'struct bcm_device_private',
because it clashes with 'struct device_private' from
'drivers/base/base.h'.

While it's not a functional problem, it's causing two distinct
type hierarchies in BTF data. It also breaks build with options:
  CONFIG_DEBUG_INFO_BTF=y
  CONFIG_CRYPTO_DEV_BCM_SPU=y

as reported by Qais Yousef [1].

[1] https://lore.kernel.org/lkml/20201229151352.6hzmjvu3qh6p2qgg@e107158-lin/

Fixes: 9d12ba86f818 ("crypto: brcm - Add Broadcom SPU driver")
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: qat - reduce size of mapped region
Adam Guerin [Mon, 4 Jan 2021 17:21:59 +0000 (17:21 +0000)]
crypto: qat - reduce size of mapped region

Restrict size of field to what is required by the operation.

This issue was detected by smatch:

    drivers/crypto/qat/qat_common/qat_asym_algs.c:328 qat_dh_compute_value() error: dma_map_single_attrs() '&qat_req->in.dh.in.b' too small (8 vs 64)

Signed-off-by: Adam Guerin <adam.guerin@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: qat - change format string and cast ring size
Adam Guerin [Mon, 4 Jan 2021 17:21:58 +0000 (17:21 +0000)]
crypto: qat - change format string and cast ring size

Cast ADF_SIZE_TO_RING_SIZE_IN_BYTES() so it can return a 64 bit value.

This issue was detected by smatch:

    drivers/crypto/qat/qat_common/adf_transport_debug.c:65 adf_ring_show() warn: should '(1 << (ring->ring_size - 1)) << 7' be a 64 bit type?
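
i.e. roughly (sketch; the seq_file handle name is illustrative):

    seq_printf(sfile, "ring size %lld bytes\n",
               (long long)ADF_SIZE_TO_RING_SIZE_IN_BYTES(ring->ring_size));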

Signed-off-by: Adam Guerin <adam.guerin@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: qat - fix potential spectre issue
Adam Guerin [Mon, 4 Jan 2021 17:21:57 +0000 (17:21 +0000)]
crypto: qat - fix potential spectre issue

Sanitize ring_num value coming from configuration (and potentially
from user space) before it is used as index in the banks array.

This issue was detected by smatch:

    drivers/crypto/qat/qat_common/adf_transport.c:233 adf_create_ring() warn: potential spectre issue 'bank->rings' [r] (local cap)
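
The usual pattern for this is array_index_nospec() (sketch; the bound
name is illustrative):

    ring_num = array_index_nospec(ring_num, num_rings_per_bank);
    ring = &bank->rings[ring_num];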

Signed-off-by: Adam Guerin <adam.guerin@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: qat - configure arbiter mapping based on engines enabled
Wojciech Ziemba [Mon, 4 Jan 2021 16:55:46 +0000 (16:55 +0000)]
crypto: qat - configure arbiter mapping based on engines enabled

The hardware specific function adf_get_arbiter_mapping() modifies
the static array thrd_to_arb_map to disable mappings for AEs
that are disabled. This static array is used for each device
of the same type. If the AE mask is not identical for all devices
of the same type, then the arbiter mapping returned by
adf_get_arbiter_mapping() may be wrong.

This patch fixes this problem by ensuring the static arbiter
mapping is unchanged and the device arbiter mapping is re-calculated
each time based on the static mapping.

Signed-off-by: Wojciech Ziemba <wojciech.ziemba@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: aesni - replace function pointers with static branches
Ard Biesheuvel [Mon, 4 Jan 2021 15:55:50 +0000 (16:55 +0100)]
crypto: aesni - replace function pointers with static branches

Replace the function pointers in the GCM implementation with static branches,
which are based on code patching that occurs only at module load time.
This avoids the severe performance penalty caused by the use of retpolines.

In order to retain the ability to switch between different versions of the
implementation based on the input size on cores that support AVX and AVX2,
use static branches instead of static calls.
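
The shape of the change (sketch; helper names illustrative):

    static DEFINE_STATIC_KEY_FALSE(gcm_use_avx);

    /* module init: patch the branch once, based on CPU features */
    if (boot_cpu_has(X86_FEATURE_AVX))
            static_branch_enable(&gcm_use_avx);

    /* fast path: a patched jump instead of a retpolined indirect call */
    if (static_branch_likely(&gcm_use_avx))
            gcm_crypt_avx(ctx, data, len);
    else
            gcm_crypt_generic(ctx, data, len);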

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: aesni - refactor scatterlist processing
Ard Biesheuvel [Mon, 4 Jan 2021 15:55:49 +0000 (16:55 +0100)]
crypto: aesni - refactor scatterlist processing

Currently, the gcm(aes-ni) driver open codes the scatterlist handling
that is encapsulated by the skcipher walk API. So let's switch to that
instead.

Also, move the handling at the end of gcmaes_crypt_by_sg() that is
dependent on whether we are encrypting or decrypting into the callers,
which always do one or the other.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: aesni - clean up mapping of associated data
Ard Biesheuvel [Mon, 4 Jan 2021 15:55:48 +0000 (16:55 +0100)]
crypto: aesni - clean up mapping of associated data

The gcm(aes-ni) driver is only built for x86_64, which does not make
use of highmem. So testing for PageHighMem is pointless and can be
omitted.

While at it, replace GFP_ATOMIC with the appropriate runtime decided
value based on the context.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: aesni - drop unused asm prototypes
Ard Biesheuvel [Mon, 4 Jan 2021 15:55:47 +0000 (16:55 +0100)]
crypto: aesni - drop unused asm prototypes

Drop some prototypes that are declared but never called.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: aesni - prevent misaligned buffers on the stack
Ard Biesheuvel [Mon, 4 Jan 2021 15:55:46 +0000 (16:55 +0100)]
crypto: aesni - prevent misaligned buffers on the stack

The GCM mode driver uses 16 byte aligned buffers on the stack to pass
the IV to the asm helpers, but unfortunately, the x86 port does not
guarantee that the stack pointer is 16 byte aligned upon entry in the
first place. Since the compiler is not aware of this, it will not emit
the additional stack realignment sequence that is needed, and so the
alignment is not guaranteed to be more than 8 bytes.

So instead, allocate some padding on the stack, and realign the IV
pointer by hand.
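
The realignment looks like this (sketch, per the description above):

    u8 iv_buf[16 + (AESNI_ALIGN - 8)] __aligned(8);
    u8 *iv = PTR_ALIGN(&iv_buf[0], AESNI_ALIGN); /* 16-byte aligned view */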

Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: qat - replace CRYPTO_AES with CRYPTO_LIB_AES in Kconfig
Marco Chiappero [Mon, 4 Jan 2021 15:35:15 +0000 (15:35 +0000)]
crypto: qat - replace CRYPTO_AES with CRYPTO_LIB_AES in Kconfig

Use CRYPTO_LIB_AES in place of CRYPTO_AES in the dependencies for the QAT
common code.

Fixes: c0e583ab2016 ("crypto: qat - add CRYPTO_AES to Kconfig dependencies")
Reported-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Marco Chiappero <marco.chiappero@intel.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: stm32 - Fix last sparse warning in stm32_cryp_check_ctr_counter
Herbert Xu [Mon, 4 Jan 2021 06:15:45 +0000 (17:15 +1100)]
crypto: stm32 - Fix last sparse warning in stm32_cryp_check_ctr_counter

This patch changes the cast in stm32_cryp_check_ctr_counter from
u32 to __be32 to match the prototype of stm32_cryp_hw_write_iv
correctly.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: vmx - Move extern declarations into header file
Herbert Xu [Sat, 2 Jan 2021 21:56:18 +0000 (08:56 +1100)]
crypto: vmx - Move extern declarations into header file

This patch moves the extern algorithm declarations into a header
file so that a number of compiler warnings are silenced.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/aes-ni-xts - rewrite and drop indirections via glue helper
Ard Biesheuvel [Thu, 31 Dec 2020 16:41:55 +0000 (17:41 +0100)]
crypto: x86/aes-ni-xts - rewrite and drop indirections via glue helper

The AES-NI driver implements XTS via the glue helper, which consumes
a struct with sets of function pointers which are invoked on chunks
of input data of the appropriate size, as annotated in the struct.

Let's get rid of this indirection, so that we can perform direct calls
to the assembler helpers. Instead, let's adopt the arm64 strategy, i.e.,
provide a helper which can consume inputs of any size, provided that the
penultimate, full block is passed via the last call if ciphertext stealing
needs to be applied.

This also allows us to enable the XTS mode for i386.

Tested-by: Eric Biggers <ebiggers@google.com> # x86_64
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/aes-ni-xts - use direct calls and a 4-way stride
Ard Biesheuvel [Thu, 31 Dec 2020 16:41:54 +0000 (17:41 +0100)]
crypto: x86/aes-ni-xts - use direct calls and a 4-way stride

The XTS asm helper arrangement is a bit odd: the 8-way stride helper
consists of back-to-back calls to the 4-way core transforms, which
are called indirectly, based on a boolean that indicates whether we
are performing encryption or decryption.

Given how costly indirect calls are on x86, let's switch to direct
calls, and given how the 8-way stride doesn't really add anything
substantial, use a 4-way stride instead, and make the asm core
routine deal with any multiple of 4 blocks. Since 512 byte sectors
or 4 KB blocks are the typical quantities XTS operates on, increase
the stride exported to the glue helper to 512 bytes as well.

As a result, the number of indirect calls is reduced from 3 per 64 bytes
of in/output to 1 per 512 bytes of in/output, which produces a 65% speedup
when operating on 1 KB blocks (measured on an Intel(R) Core(TM) i7-8650U CPU).

Fixes: 9697fa39efd3f ("x86/retpoline/crypto: Convert crypto assembler indirect jumps")
Tested-by: Eric Biggers <ebiggers@google.com> # x86_64
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: picoxcell - Remove PicoXcell driver
Rob Herring [Thu, 10 Dec 2020 20:03:14 +0000 (14:03 -0600)]
crypto: picoxcell - Remove PicoXcell driver

PicoXcell has had nothing but treewide cleanups for at least the last 8
years and no signs of activity. The most recent activity is a yocto vendor
kernel based on v3.0 in 2015.

Cc: Jamie Iles <jamie@jamieiles.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Rob Herring <robh@kernel.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm/blake2b - add NEON-accelerated BLAKE2b
Eric Biggers [Wed, 23 Dec 2020 08:10:03 +0000 (00:10 -0800)]
crypto: arm/blake2b - add NEON-accelerated BLAKE2b

Add a NEON-accelerated implementation of BLAKE2b.

On Cortex-A7 (which these days is the most common ARM processor that
doesn't have the ARMv8 Crypto Extensions), this is over twice as fast as
SHA-256, and slightly faster than SHA-1.  It is also almost three times
as fast as the generic implementation of BLAKE2b:

Algorithm            Cycles per byte (on 4096-byte messages)
===================  =======================================
blake2b-256-neon     14.0
sha1-neon            16.3
blake2s-256-arm      18.8
sha1-asm             20.8
blake2s-256-generic  26.0
sha256-neon          28.9
sha256-asm           32.0
blake2b-256-generic  38.9

This implementation isn't directly based on any other implementation,
but it borrows some ideas from previous NEON code I've written as well
as from chacha-neon-core.S.  At least on Cortex-A7, it is faster than
the other NEON implementations of BLAKE2b I'm aware of (the
implementation in the BLAKE2 official repository using intrinsics, and
Andrew Moon's implementation which can be found in SUPERCOP).  It does
only one block at a time, so it performs well on short messages too.

NEON-accelerated BLAKE2b is useful because there is interest in using
BLAKE2b-256 for dm-verity on low-end Android devices (specifically,
devices that lack the ARMv8 Crypto Extensions) to replace SHA-1.  On
these devices, the performance cost of upgrading to SHA-256 may be
unacceptable, whereas BLAKE2b-256 would actually improve performance.

Although BLAKE2b is intended for 64-bit platforms (unlike BLAKE2s which
is intended for 32-bit platforms), on 32-bit ARM processors with NEON,
BLAKE2b is actually faster than BLAKE2s.  This is because NEON supports
64-bit operations, and because BLAKE2s's block size is too small for
NEON to be helpful for it.  The best I've been able to do with BLAKE2s
on Cortex-A7 is 18.8 cpb with an optimized scalar implementation.

(I didn't try BLAKE2sp and BLAKE3, which in theory would be faster, but
they're more complex as they require running multiple hashes at once.
Note that BLAKE2b already uses all the NEON bandwidth on the Cortex-A7,
so I expect that any speedup from BLAKE2sp or BLAKE3 would come only
from the smaller number of rounds, not from the extra parallelism.)

For now this BLAKE2b implementation is only wired up to the shash API,
since there is no library API for BLAKE2b yet.  However, I've tried to
keep things consistent with BLAKE2s, e.g. by defining
blake2b_compress_arch() which is analogous to blake2s_compress_arch()
and could be exported for use by the library API later if needed.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: blake2b - update file comment
Eric Biggers [Wed, 23 Dec 2020 08:10:02 +0000 (00:10 -0800)]
crypto: blake2b - update file comment

The file comment for blake2b_generic.c makes it sound like it's the
reference implementation of BLAKE2b with only minor changes.  But it's
actually been changed a lot.  Update the comment to make this clearer.

Reviewed-by: David Sterba <dsterba@suse.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: blake2b - sync with blake2s implementation
Eric Biggers [Wed, 23 Dec 2020 08:10:01 +0000 (00:10 -0800)]
crypto: blake2b - sync with blake2s implementation

Sync the BLAKE2b code with the BLAKE2s code as much as possible:

- Move a lot of code into new headers <crypto/blake2b.h> and
  <crypto/internal/blake2b.h>, and adjust it to be like the
  corresponding BLAKE2s code, i.e. like <crypto/blake2s.h> and
  <crypto/internal/blake2s.h>.

- Rename constants, e.g. BLAKE2B_*_DIGEST_SIZE => BLAKE2B_*_HASH_SIZE.

- Use a macro BLAKE2B_ALG() to define the shash_alg structs.

- Export blake2b_compress_generic() for use as a fallback.

This makes it much easier to add optimized implementations of BLAKE2b,
as optimized implementations can use the helper functions
crypto_blake2b_{setkey,init,update,final}() and
blake2b_compress_generic().  The ARM implementation will use these.

But this change is also helpful because it eliminates unnecessary
differences between the BLAKE2b and BLAKE2s code, so that the same
improvements can easily be made to both.  (The two algorithms are
basically identical, except for the word size and constants.)  It also
makes it straightforward to add a library API for BLAKE2b in the future
if/when it's needed.

This change does make the BLAKE2b code slightly more complicated than it
needs to be, as it doesn't actually provide a library API yet.  For
example, __blake2b_update() doesn't really need to exist yet; it could
just be inlined into crypto_blake2b_update().  But I believe this is
outweighed by the benefits of keeping the code in sync.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
wireguard: Kconfig: select CRYPTO_BLAKE2S_ARM
Eric Biggers [Wed, 23 Dec 2020 08:10:00 +0000 (00:10 -0800)]
wireguard: Kconfig: select CRYPTO_BLAKE2S_ARM

When available, select the new implementation of BLAKE2s for 32-bit ARM.
This is faster than the generic C implementation.

Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm/blake2s - add ARM scalar optimized BLAKE2s
Eric Biggers [Wed, 23 Dec 2020 08:09:59 +0000 (00:09 -0800)]
crypto: arm/blake2s - add ARM scalar optimized BLAKE2s

Add an ARM scalar optimized implementation of BLAKE2s.

NEON isn't very useful for BLAKE2s because the BLAKE2s block size is too
small for NEON to help.  Each NEON instruction would depend on the
previous one, resulting in poor performance.

With scalar instructions, on the other hand, we can take advantage of
ARM's "free" rotations (like I did in chacha-scalar-core.S) to get an
implementation that runs much faster than the C implementation.

Performance results on Cortex-A7 in cycles per byte using the shash API:

4096-byte messages:
blake2s-256-arm:     18.8
blake2s-256-generic: 26.0

500-byte messages:
blake2s-256-arm:     20.3
blake2s-256-generic: 27.9

100-byte messages:
blake2s-256-arm:     29.7
blake2s-256-generic: 39.2

32-byte messages:
blake2s-256-arm:     50.6
blake2s-256-generic: 66.2

Except on very short messages, this is still slower than the NEON
implementation of BLAKE2b which I've written; that is 14.0, 16.4, 25.8,
and 76.1 cpb on 4096, 500, 100, and 32-byte messages, respectively.
However, optimized BLAKE2s is useful for cases where BLAKE2s is used
instead of BLAKE2b, such as WireGuard.

This new implementation is added in the form of a new module
blake2s-arm.ko, which is analogous to blake2s-x86_64.ko in that it
provides blake2s_compress_arch() for use by the library API as well as
optionally register the algorithms with the shash API.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: blake2s - include <linux/bug.h> instead of <asm/bug.h>
Eric Biggers [Wed, 23 Dec 2020 08:09:58 +0000 (00:09 -0800)]
crypto: blake2s - include <linux/bug.h> instead of <asm/bug.h>

Address the following checkpatch warning:

WARNING: Use #include <linux/bug.h> instead of <asm/bug.h>

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: blake2s - adjust include guard naming
Eric Biggers [Wed, 23 Dec 2020 08:09:57 +0000 (00:09 -0800)]
crypto: blake2s - adjust include guard naming

Use the full path in the include guards for the BLAKE2s headers to avoid
ambiguity and to match the convention for most files in include/crypto/.
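
i.e. (sketch):

    #ifndef _CRYPTO_BLAKE2S_H
    #define _CRYPTO_BLAKE2S_H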

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: blake2s - add comment for blake2s_state fields
Eric Biggers [Wed, 23 Dec 2020 08:09:56 +0000 (00:09 -0800)]
crypto: blake2s - add comment for blake2s_state fields

The first three fields of 'struct blake2s_state' are used in assembly
code, which isn't immediately obvious, so add a comment to this effect.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: blake2s - optimize blake2s initialization
Eric Biggers [Wed, 23 Dec 2020 08:09:55 +0000 (00:09 -0800)]
crypto: blake2s - optimize blake2s initialization

If no key was provided, then don't waste time initializing the block
buffer, as its initial contents won't be used.

Also, make crypto_blake2s_init() and blake2s() call a single internal
function __blake2s_init() which treats the key as optional, rather than
conditionally calling blake2s_init() or blake2s_init_key().  This
reduces the compiled code size, as previously both blake2s_init() and
blake2s_init_key() were being inlined into these two callers, except
when the key size passed to blake2s() was a compile-time constant.

These optimizations aren't that significant for BLAKE2s.  However, the
equivalent optimizations will be more significant for BLAKE2b, as
everything is twice as big in BLAKE2b.  And it's good to keep things
consistent rather than making optimizations for BLAKE2b but not BLAKE2s.
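
A sketch of the shared helper (IV/parameter setup abridged):

    static inline void __blake2s_init(struct blake2s_state *state,
                                      size_t outlen, const void *key,
                                      size_t keylen)
    {
            /* h/t/f and outlen setup common to both cases elided */
            if (key) {
                    /* only keyed use pre-loads the block buffer */
                    memcpy(state->buf, key, keylen);
                    memset(&state->buf[keylen], 0,
                           BLAKE2S_BLOCK_SIZE - keylen);
                    state->buflen = BLAKE2S_BLOCK_SIZE;
            }
            /* unkeyed: state->buf is deliberately left untouched */
    }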

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: blake2s - share the "shash" API boilerplate code
Eric Biggers [Wed, 23 Dec 2020 08:09:54 +0000 (00:09 -0800)]
crypto: blake2s - share the "shash" API boilerplate code

Add helper functions for shash implementations of BLAKE2s to
include/crypto/internal/blake2s.h, taking advantage of
__blake2s_update() and __blake2s_final() that were added by the previous
patch to share more code between the library and shash implementations.

crypto_blake2s_setkey() and crypto_blake2s_init() are usable as
shash_alg::setkey and shash_alg::init directly, while
crypto_blake2s_update() and crypto_blake2s_final() take an extra
'blake2s_compress_t' function pointer parameter.  This allows the
implementation of the compression function to be overridden, which is
the only part that optimized implementations really care about.

The new functions are inline functions (similar to those in sha1_base.h,
sha256_base.h, and sm3_base.h) because this avoids needing to add a new
module blake2s_helpers.ko, they aren't *too* long, and this avoids
indirect calls which are expensive these days.  Note that they can't go
in blake2s_generic.ko, as that would require selecting CRYPTO_BLAKE2S
from CRYPTO_BLAKE2S_X86, which would cause a recursive dependency.

Finally, use these new helper functions in the x86 implementation of
BLAKE2s.  (This part should be a separate patch, but unfortunately the
x86 implementation used the exact same function names like
"crypto_blake2s_update()", so it had to be updated at the same time.)

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: blake2s - move update and final logic to internal/blake2s.h
Eric Biggers [Wed, 23 Dec 2020 08:09:53 +0000 (00:09 -0800)]
crypto: blake2s - move update and final logic to internal/blake2s.h

Move most of blake2s_update() and blake2s_final() into new inline
functions __blake2s_update() and __blake2s_final() in
include/crypto/internal/blake2s.h so that this logic can be shared by
the shash helper functions.  This will avoid duplicating this logic
between the library and shash implementations.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: blake2s - remove unneeded includes
Eric Biggers [Wed, 23 Dec 2020 08:09:52 +0000 (00:09 -0800)]
crypto: blake2s - remove unneeded includes

It doesn't make sense for the generic implementation of BLAKE2s to
include <crypto/internal/simd.h> and <linux/jump_label.h>, as these are
things that would only be useful in an architecture-specific
implementation.  Remove these unnecessary includes.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: x86/blake2s - define shash_alg structs using macros
Eric Biggers [Wed, 23 Dec 2020 08:09:51 +0000 (00:09 -0800)]
crypto: x86/blake2s - define shash_alg structs using macros

The shash_alg structs for the four variants of BLAKE2s are identical
except for the algorithm name, driver name, and digest size.  So, avoid
code duplication by using a macro to define these structs.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: blake2s - define shash_alg structs using macros
Eric Biggers [Wed, 23 Dec 2020 08:09:50 +0000 (00:09 -0800)]
crypto: blake2s - define shash_alg structs using macros

The shash_alg structs for the four variants of BLAKE2s are identical
except for the algorithm name, driver name, and digest size.  So, avoid
code duplication by using a macro to define these structs.
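
The macro-based definition looks roughly like this (sketch; priority,
flags and context size fields elided):

    #define BLAKE2S_ALG(name, driver_name, digest_size)                   \
            {                                                             \
                    .base.cra_name        = name,                         \
                    .base.cra_driver_name = driver_name,                  \
                    .base.cra_blocksize   = BLAKE2S_BLOCK_SIZE,           \
                    .base.cra_module      = THIS_MODULE,                  \
                    .digestsize           = digest_size,                  \
                    .setkey               = crypto_blake2s_setkey,        \
                    .init                 = crypto_blake2s_init,          \
                    .update               = crypto_blake2s_update_generic, \
                    .final                = crypto_blake2s_final_generic,  \
                    .descsize             = sizeof(struct blake2s_state),  \
            }

    static struct shash_alg blake2s_algs[] = {
            BLAKE2S_ALG("blake2s-128", "blake2s-128-generic",
                        BLAKE2S_128_HASH_SIZE),
            BLAKE2S_ALG("blake2s-256", "blake2s-256-generic",
                        BLAKE2S_256_HASH_SIZE),
    };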

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
hwrng: ingenic - Fix a resource leak in an error handling path
Christophe JAILLET [Sat, 19 Dec 2020 07:52:07 +0000 (08:52 +0100)]
hwrng: ingenic - Fix a resource leak in an error handling path

In case of error, we should call 'clk_disable_unprepare()' to undo a
previous 'clk_prepare_enable()' call, as already done in the remove
function.
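
i.e. the probe error path gains the usual cleanup (sketch; label and
field names illustrative):

    ret = hwrng_register(&trng->rng);
    if (ret)
            goto err_unprepare_clk;
    return 0;

    err_unprepare_clk:
            clk_disable_unprepare(trng->clk);
            return ret;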

Fixes: 406346d22278 ("hwrng: ingenic - Add hardware TRNG for Ingenic X1830")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Tested-by: 周琰杰 (Zhou Yanjie) <zhouyanjie@wanyeetech.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agohwrng: iproc-rng200 - Move enable/disable in separate function
Matthias Brugger [Fri, 18 Dec 2020 10:57:08 +0000 (11:57 +0100)]
hwrng: iproc-rng200 - Move enable/disable in separate function

We call the same code to enable and disable the block in various
parts of the driver. Put that code into a new function to reduce code
duplication.
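
A sketch of such a helper (built from the driver's existing register
and mask defines; exact shape may differ):

    static void iproc_rng200_enable_set(void __iomem *rng_base,
                                        bool enable)
    {
            u32 val;

            /* the one place that flips the RBG enable bits */
            val = ioread32(rng_base + RNG_CTRL_OFFSET);
            val &= ~RNG_CTRL_RNG_RBGEN_MASK;
            if (enable)
                    val |= RNG_CTRL_RNG_RBGEN_ENABLE;
            iowrite32(val, rng_base + RNG_CTRL_OFFSET);
    }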

Signed-off-by: Matthias Brugger <mbrugger@suse.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agohwrng: iproc-rng200 - Fix disable of the block.
Matthias Brugger [Fri, 18 Dec 2020 10:57:07 +0000 (11:57 +0100)]
hwrng: iproc-rng200 - Fix disable of the block.

When trying to disable the block, we bitwise-OR the control register
with the value zero. This is confusing, as a bitwise OR with zero has
no effect at all. Drop it, as we already clear the enable bit by
applying the inverted RNG_RBGEN_MASK.
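
Roughly, the change amounts to:

    val = ioread32(rng_base + RNG_CTRL_OFFSET);
    val &= ~RNG_CTRL_RNG_RBGEN_MASK;
    /* dropped: val |= RNG_CTRL_RNG_RBGEN_DISABLE;
     * OR-ing in the "disable" value, which is 0, changed nothing */
    iowrite32(val, rng_base + RNG_CTRL_OFFSET);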

Signed-off-by: Matthias Brugger <mbrugger@suse.com>
Acked-by: Scott Branden <scott.branden@broadcom.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agocrypto: arm64/aes-ctr - improve tail handling
Ard Biesheuvel [Thu, 17 Dec 2020 18:55:16 +0000 (19:55 +0100)]
crypto: arm64/aes-ctr - improve tail handling

Counter mode is a stream cipher chaining mode that is typically used
with inputs of arbitrary length, so a tail block smaller than a full
AES block is the rule rather than the exception.

The current ctr(aes) implementation for arm64 always makes a separate
call into the assembler routine to process this tail block, which is
suboptimal, given that it requires reloading of the AES round keys,
and prevents us from handling this tail block using the 5-way stride
that we use for better performance on deep pipelines.

So let's update the assembler routine so it can handle any input
size, using NEON permutation instructions and overlapping loads and
stores to handle the tail block. This results in a ~16% speedup for
1420 byte blocks on cores with deep pipelines such as ThunderX2.
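
The heavy lifting happens in assembler, but the overlapping idea can
be modelled in C roughly as follows (ctr_keystream_block() is a
hypothetical helper that writes the 16 keystream bytes of one counter
block; this is a model, not the kernel code):

    static void ctr_xor(const u8 iv[16], const u8 *src, u8 *dst,
                        size_t len)
    {
            size_t nblocks = len / 16, tail = len % 16, i, j;
            u8 ks[32];

            for (i = 0; i < nblocks; i++) {
                    ctr_keystream_block(iv, i, ks);
                    for (j = 0; j < 16; j++)
                            dst[16 * i + j] = src[16 * i + j] ^ ks[j];
            }
            if (tail) {
                    /* Redo the final 16 bytes ending at len (assumes
                     * len >= 16).  They straddle keystream blocks
                     * nblocks-1 and nblocks, so build the matching
                     * 16-byte keystream window by concatenating the
                     * two blocks -- the job the NEON permute (tbl)
                     * instructions do in the assembler routine. */
                    size_t off = len - 16;

                    ctr_keystream_block(iv, nblocks - 1, ks);
                    ctr_keystream_block(iv, nblocks, ks + 16);
                    for (j = 0; j < 16; j++)
                            dst[off + j] = src[off + j] ^ ks[tail + j];
            }
    }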

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agocrypto: arm64/aes-ce - really hide slower algos when faster ones are enabled
Ard Biesheuvel [Thu, 17 Dec 2020 18:55:15 +0000 (19:55 +0100)]
crypto: arm64/aes-ce - really hide slower algos when faster ones are enabled

Commit 69b6f2e817e5b ("crypto: arm64/aes-neon - limit exposed routines if
faster driver is enabled") intended to hide modes from the plain NEON
driver that are also implemented by the faster bit sliced NEON one if
both are enabled. However, the defined() CPP function does not detect
if the bit sliced NEON driver is enabled as a module. So instead, let's
use IS_ENABLED() here.
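
Using the bit sliced driver's Kconfig symbol for illustration:

    /* before: false when CRYPTO_AES_ARM64_BS=m, since only
     * CONFIG_CRYPTO_AES_ARM64_BS_MODULE is defined in that case */
    #if defined(CONFIG_CRYPTO_AES_ARM64_BS)

    /* after: true for both =y and =m */
    #if IS_ENABLED(CONFIG_CRYPTO_AES_ARM64_BS)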

Fixes: 69b6f2e817e5b ("crypto: arm64/aes-neon - limit exposed routines if ...")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agoMAINTAINERS: Add maintainers for Keem Bay OCS HCU driver
Daniele Alessandrelli [Wed, 16 Dec 2020 11:46:39 +0000 (11:46 +0000)]
MAINTAINERS: Add maintainers for Keem Bay OCS HCU driver

Add maintainers for the Intel Keem Bay Offload Crypto Subsystem (OCS)
Hash Control Unit (HCU) crypto driver.

Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Acked-by: Declan Murphy <declan.murphy@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agocrypto: keembay-ocs-hcu - Add optional support for sha224
Daniele Alessandrelli [Wed, 16 Dec 2020 11:46:38 +0000 (11:46 +0000)]
crypto: keembay-ocs-hcu - Add optional support for sha224

Add optional support for sha224 and hmac(sha224).

Co-developed-by: Declan Murphy <declan.murphy@intel.com>
Signed-off-by: Declan Murphy <declan.murphy@intel.com>
Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agocrypto: keembay-ocs-hcu - Add HMAC support
Daniele Alessandrelli [Wed, 16 Dec 2020 11:46:37 +0000 (11:46 +0000)]
crypto: keembay-ocs-hcu - Add HMAC support

Add HMAC support to the Keem Bay OCS HCU driver, thus making it provide
the following additional transformations:
- hmac(sha256)
- hmac(sha384)
- hmac(sha512)
- hmac(sm3)

The Keem Bay OCS HCU hardware does not allow "context-switching" for
HMAC operations, i.e., it does not support computing a partial HMAC,
saving its state and then continuing later. Therefore, full hardware
acceleration is provided only when possible (e.g., when
crypto_ahash_digest() is called); in all other cases hardware
acceleration is only partial (OPAD and IPAD calculation is done in
software, while the hashing itself is hardware-accelerated).
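
For reference, the software part is the standard RFC 2104
construction (sketch; the key is assumed to be already padded or
hashed down to the block size):

    /* derive the inner and outer padded keys in software */
    for (i = 0; i < block_size; i++) {
            ipad[i] = key[i] ^ 0x36;
            opad[i] = key[i] ^ 0x5c;
    }
    /* inner = H(ipad || message), outer = H(opad || inner):
     * the two H() passes are what the HCU still accelerates */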

Co-developed-by: Declan Murphy <declan.murphy@intel.com>
Signed-off-by: Declan Murphy <declan.murphy@intel.com>
Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agocrypto: keembay - Add Keem Bay OCS HCU driver
Declan Murphy [Wed, 16 Dec 2020 11:46:36 +0000 (11:46 +0000)]
crypto: keembay - Add Keem Bay OCS HCU driver

Add support for the Hashing Control Unit (HCU) included in the Offload
Crypto Subsystem (OCS) of the Intel Keem Bay SoC, thus enabling
hardware-accelerated hashing on the Keem Bay SoC for the following
algorithms:
- sha256
- sha384
- sha512
- sm3

The driver is composed of two files:

- 'ocs-hcu.c' which interacts with the hardware and abstracts it by
  providing an API following the usual paradigm used in hashing drivers
  / libraries (e.g., hash_init(), hash_update(), hash_final(), etc.).
  NOTE: this API can block and sleep, since completions are used to wait
  for the HW to complete the hashing.

- 'keembay-ocs-hcu-core.c' which exports the functionality provided by
  'ocs-hcu.c' as an ahash crypto driver. The crypto engine is used to
  provide asynchronous behavior. 'keembay-ocs-hcu-core.c' also takes
  care of the DMA mapping of the input sg list.

The driver passes crypto manager self-tests, including the extra tests
(CRYPTO_MANAGER_EXTRA_TESTS=y).
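
A sketch of the asynchronous hand-off described above (context and
field names illustrative):

    static int kmb_ocs_hcu_digest(struct ahash_request *req)
    {
            struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
            struct ocs_hcu_ctx *ctx = crypto_ahash_ctx(tfm);

            /* queue the request; the engine later runs a
             * do_one_request callback that may sleep waiting for the
             * HW completion */
            return crypto_transfer_hash_request_to_engine(
                            ctx->hcu_dev->engine, req);
    }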

Signed-off-by: Declan Murphy <declan.murphy@intel.com>
Co-developed-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Acked-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agodt-bindings: crypto: Add Keem Bay OCS HCU bindings
Declan Murphy [Wed, 16 Dec 2020 11:46:35 +0000 (11:46 +0000)]
dt-bindings: crypto: Add Keem Bay OCS HCU bindings

Add device-tree bindings for the Intel Keem Bay Offload Crypto Subsystem
(OCS) Hashing Control Unit (HCU) crypto driver.

Signed-off-by: Declan Murphy <declan.murphy@intel.com>
Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Acked-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agocrypto: sun4i-ss - add SPDX header and remove blank lines
Corentin Labbe [Mon, 14 Dec 2020 20:02:32 +0000 (20:02 +0000)]
crypto: sun4i-ss - add SPDX header and remove blank lines

This patch fixes some remaining style issues.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agocrypto: sun4i-ss - enabled stats via debugfs
Corentin Labbe [Mon, 14 Dec 2020 20:02:31 +0000 (20:02 +0000)]
crypto: sun4i-ss - enabled stats via debugfs

This patch enables access to usage statistics for each algorithm.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agocrypto: sun4i-ss - fix kmap usage
Corentin Labbe [Mon, 14 Dec 2020 20:02:30 +0000 (20:02 +0000)]
crypto: sun4i-ss - fix kmap usage

With the recent kmap change, some tests which were conditional on
CONFIG_DEBUG_HIGHMEM are now enabled by default.
This made it possible to detect a problem in sun4i-ss's usage of kmap.

sun4i-ss uses two kmaps via sg_miter (one for input, one for output),
but using two kmaps at the same time is hard:
"the ordering has to be correct and with sg_miter that's probably hard to get
right." (quoting tglx)

So the easiest solution is to never have two sg_miter/kmap open at the
same time. After each use of sg_miter, store the current index, so
that sg_miter can be resumed at the right place.
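
Schematically (index variables illustrative):

    /* input: open a short-lived miter, resume at the saved index,
     * and stop it (dropping its kmap) before touching the output */
    sg_miter_start(&mi, areq->src, sg_nents(areq->src),
                   SG_MITER_FROM_SG);
    sg_miter_skip(&mi, oi);
    sg_miter_next(&mi);
    /* ... read from mi.addr, advance oi by the bytes consumed ... */
    sg_miter_stop(&mi);

    /* output: same pattern, never open at the same time as above */
    sg_miter_start(&mo, areq->dst, sg_nents(areq->dst),
                   SG_MITER_TO_SG);
    sg_miter_skip(&mo, oo);
    sg_miter_next(&mo);
    /* ... write to mo.addr, advance oo ... */
    sg_miter_stop(&mo);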

Fixes: 6298e948215f ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator")
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agocrypto: sun4i-ss - initialize need_fallback
Corentin Labbe [Mon, 14 Dec 2020 20:02:29 +0000 (20:02 +0000)]
crypto: sun4i-ss - initialize need_fallback

The need_fallback flag is never initialized and so seems to always be
true at runtime, meaning all hardware operations are always bypassed.

Fixes: 0ae1f46c55f87 ("crypto: sun4i-ss - fallback when length is not multiple of blocksize")
Cc: <stable@vger.kernel.org>
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agocrypto: sun4i-ss - handle BigEndian for cipher
Corentin Labbe [Mon, 14 Dec 2020 20:02:28 +0000 (20:02 +0000)]
crypto: sun4i-ss - handle BigEndian for cipher

Ciphers produce invalid results on big-endian systems: the key and IV
need to be written to the hardware in little-endian order.
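
One way to express the fix is to read each word explicitly as
little-endian before writing it out (register name illustrative):

    for (i = 0; i < op->keylen; i += 4)
            writel(get_unaligned_le32((u8 *)op->key + i),
                   ss->base + SS_KEY0 + i);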

Fixes: 6298e948215f2 ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator")
Cc: <stable@vger.kernel.org>
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agocrypto: sun4i-ss - IV register does not work on A10 and A13
Corentin Labbe [Mon, 14 Dec 2020 20:02:27 +0000 (20:02 +0000)]
crypto: sun4i-ss - IV register does not work on A10 and A13

Allwinner A10 and A13 SoCs have a version of the SS which produces an
invalid IV in the IVx register.

Instead of adding a variant for those, let's convert the SS driver to
produce the IV directly from the data.

Fixes: 6298e948215f2 ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator")
Cc: <stable@vger.kernel.org>
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agocrypto: sun4i-ss - checking sg length is not sufficient
Corentin Labbe [Mon, 14 Dec 2020 20:02:26 +0000 (20:02 +0000)]
crypto: sun4i-ss - checking sg length is not sufficient

The optimized cipher function needs a length that is a multiple of
4 bytes, but it sometimes gets an odd length. This is because SG data
can be stored with an offset.

So the fix is to also check that the offset is aligned to 4 bytes.
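
Roughly:

    /* a 4-byte aligned length is not enough: each SG entry's offset
     * must be 4-byte aligned as well */
    if (!IS_ALIGNED(sg->offset, 4) || !IS_ALIGNED(sg->length, 4))
            need_fallback = true;
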
Fixes: 6298e948215f2 ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator")
Cc: <stable@vger.kernel.org>
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agocrypto: sun4i-ss - linearize buffers content must be kept
Corentin Labbe [Mon, 14 Dec 2020 20:02:25 +0000 (20:02 +0000)]
crypto: sun4i-ss - linearize buffers content must be kept

When running the non-optimized cipher function, the SS produces
partially random output. This is due to the linearize buffers being
reset after each loop iteration.

To preserve stack space, instead of moving them back to the start of
the function, move them into sun4i_ss_ctx.
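
That is, roughly (field names and sizes illustrative):

    struct sun4i_ss_ctx {
            /* ... existing fields ... */
            /* kept here rather than on the stack so their contents
             * survive across loop iterations */
            u32 bufi[16];   /* linearize buffer for the input SG */
            u32 bufo[16];   /* linearize buffer for the output SG */
    };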

Fixes: 8d3bcb9900ca ("crypto: sun4i-ss - reduce stack usage")
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agocrypto: inside-secure - fix platform_get_irq.cocci warnings
Tian Tao [Mon, 14 Dec 2020 11:44:40 +0000 (19:44 +0800)]
crypto: inside-secure - fix platform_get_irq.cocci warnings

Remove dev_err() messages after platform_get_irq*() failures.
Line 1161 of drivers/crypto/inside-secure/safexcel.c is redundant
because platform_get_irq() already prints an error.

Generated by: scripts/coccinelle/api/platform_get_irq.cocci

Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
Acked-by: Antoine Tenart <atenart@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 years agocrypto: remove cipher routines from public crypto API
Ard Biesheuvel [Fri, 11 Dec 2020 12:27:15 +0000 (13:27 +0100)]
crypto: remove cipher routines from public crypto API

The cipher routines in the crypto API are mostly intended for templates
implementing skcipher modes generically in software, and shouldn't be
used outside of the crypto subsystem. So move the prototypes and all
related definitions to a new header file under include/crypto/internal.
Also, let's use the new module namespace feature to move the symbol
exports into a new namespace CRYPTO_INTERNAL.
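
The mechanics look like this (using one of the cipher symbols as an
example):

    /* in the crypto core: export into the CRYPTO_INTERNAL namespace */
    EXPORT_SYMBOL_NS_GPL(crypto_cipher_setkey, CRYPTO_INTERNAL);

    /* any module still using the cipher routines must opt in */
    MODULE_IMPORT_NS(CRYPTO_INTERNAL);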

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>