Jason A. Donenfeld [Mon, 6 Jan 2020 03:40:49 +0000 (22:40 -0500)]
crypto: {arm,arm64,mips}/poly1305 - remove redundant non-reduction from emit
This appears to be some kind of copy and paste error, and is actually
dead code.
Pre: f = 0 ⇒ (f >> 32) = 0
    f = (f >> 32) + le32_to_cpu(digest[0]);
Post: 0 ≤ f < 2³²
    put_unaligned_le32(f, dst);
Pre: 0 ≤ f < 2³² ⇒ (f >> 32) = 0
    f = (f >> 32) + le32_to_cpu(digest[1]);
Post: 0 ≤ f < 2³²
    put_unaligned_le32(f, dst + 4);
Pre: 0 ≤ f < 2³² ⇒ (f >> 32) = 0
    f = (f >> 32) + le32_to_cpu(digest[2]);
Post: 0 ≤ f < 2³²
    put_unaligned_le32(f, dst + 8);
Pre: 0 ≤ f < 2³² ⇒ (f >> 32) = 0
    f = (f >> 32) + le32_to_cpu(digest[3]);
Post: 0 ≤ f < 2³²
    put_unaligned_le32(f, dst + 12);
Therefore this sequence is redundant. And Andy's code appears to handle
misalignment acceptably.
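For illustration, here is a minimal standalone sketch (not the kernel glue itself; the helper name is invented) of the carry chain the proof above describes: once f is known to fit in 32 bits, the (f >> 32) term is always zero, so the removed sequence degenerates into a plain copy of the digest words.
    #include <stdint.h>

    /* Illustrative only: emit four 32-bit digest words little-endian.
     * Because f < 2^32 holds after every step, (f >> 32) is always 0,
     * which is exactly why the removed sequence was dead code. */
    static void emit_words(uint8_t dst[16], const uint32_t digest[4])
    {
            uint64_t f = 0;

            for (int i = 0; i < 4; i++) {
                    f = (f >> 32) + digest[i];      /* carry term is provably 0 */
                    dst[4 * i + 0] = (uint8_t)(f >> 0);
                    dst[4 * i + 1] = (uint8_t)(f >> 8);
                    dst[4 * i + 2] = (uint8_t)(f >> 16);
                    dst[4 * i + 3] = (uint8_t)(f >> 24);
            }
    }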
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Jason A. Donenfeld [Mon, 6 Jan 2020 03:40:48 +0000 (22:40 -0500)]
crypto: x86/poly1305 - wire up faster implementations for kernel
These x86_64 vectorized implementations support AVX, AVX-2, and AVX-512F.
The AVX-512F implementation is disabled on Skylake, due to throttling,
but it is quite fast on >= Cannon Lake.
On the left are cycle counts on a Core i7 6700HQ using the AVX-2
codepath, comparing this implementation ("new") to the implementation in
the current crypto api ("old"). On the right are benchmarks on a Xeon
Gold 5120 using the AVX-512 codepath. The new implementation is faster
on all benchmarks.
AVX-2 AVX-512
--------- -----------
size old new size old new
---- ---- ---- ---- ---- ----
0 70 68 0 74 70
16 92 90 16 96 92
32 134 104 32 136 106
48 172 120 48 184 124
64 218 136 64 218 138
80 254 158 80 260 160
96 298 174 96 300 176
112 342 192 112 342 194
128 388 212 128 384 212
144 428 228 144 420 226
160 466 246 160 464 248
176 510 264 176 504 264
192 550 282 192 544 282
208 594 302 208 582 300
224 628 316 224 624 318
240 676 334 240 662 338
256 716 354 256 708 358
272 764 374 272 748 372
288 802 352 288 788 358
304 420 366 304 422 370
320 428 360 320 432 364
336 484 378 336 486 380
352 426 384 352 434 390
368 478 400 368 480 408
384 488 394 384 490 398
400 542 408 400 542 412
416 486 416 416 492 426
432 534 430 432 538 436
448 544 422 448 546 432
464 600 438 464 600 448
480 540 448 480 548 456
496 594 464 496 594 476
512 602 456 512 606 470
528 656 476 528 656 480
544 600 480 544 606 498
560 650 494 560 652 512
576 664 490 576 662 508
592 714 508 592 716 522
608 656 514 608 664 538
624 708 532 624 710 552
640 716 524 640 720 516
656 770 536 656 772 526
672 716 548 672 722 544
688 770 562 688 768 556
704 774 552 704 778 556
720 826 568 720 832 568
736 768 574 736 780 584
752 822 592 752 826 600
768 830 584 768 836 560
784 884 602 784 888 572
800 828 610 800 838 588
816 884 628 816 884 604
832 888 618 832 894 598
848 942 632 848 946 612
864 884 644 864 896 628
880 936 660 880 942 644
896 948 652 896 952 608
912 1000 664 912 1004 616
928 942 676 928 954 634
944 994 690 944 1000 646
960 1002 680 960 1008 646
976 1054 694 976 1062 658
992 1002 706 992 1012 674
1008 1052 720 1008 1058 690
This commit wires in the prior implementation from Andy, and makes the
following changes to be suitable for kernel land.
- Some cosmetic and structural changes, like renaming labels to
.Lname, constants, and other Linux conventions, as well as making
the code easy for us to maintain moving forward.
- CPU feature checking is done in C by the glue code.
- We avoid jumping into the middle of functions, to appease objtool,
and instead parameterize shared code.
- We maintain frame pointers so that stack traces make sense.
- We remove the dependency on the perl xlate code, which transforms
the output for assemblers we don't care about.
Importantly, none of our changes affect the arithmetic or core code, but
just involve the differing environment of kernel space.
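As a rough illustration of the second bullet above (moving the CPU feature check into C glue), here is a sketch along the lines of other x86 crypto glue modules; the static key and function names are made up, and the real glue checks more than this.
    #include <linux/cache.h>
    #include <linux/jump_label.h>
    #include <linux/module.h>
    #include <asm/cpufeature.h>

    /* Sketch only: decide once at module load, in C, which code path the
     * glue will take, rather than branching on CPU features in assembly. */
    static __ro_after_init DEFINE_STATIC_KEY_FALSE(poly1305_use_avx2);

    static int __init poly1305_simd_mod_init(void)
    {
            if (boot_cpu_has(X86_FEATURE_AVX) && boot_cpu_has(X86_FEATURE_AVX2))
                    static_branch_enable(&poly1305_use_avx2);
            return 0;
    }
    module_init(poly1305_simd_mod_init);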
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
Co-developed-by: Samuel Neves <sneves@dei.uc.pt>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Jason A. Donenfeld [Mon, 6 Jan 2020 03:40:47 +0000 (22:40 -0500)]
crypto: x86/poly1305 - import unmodified cryptogams implementation
These x86_64 vectorized implementations come from Andy Polyakov's
CRYPTOGAMS implementation, and are included here in raw form without
modification, so that subsequent commits that fix these up for the
kernel can see how it has changed.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Jason A. Donenfeld [Mon, 6 Jan 2020 03:40:46 +0000 (22:40 -0500)]
crypto: poly1305 - add new 32 and 64-bit generic versions
These two C implementations from Zinc -- a 32x32 one and a 64x64 one,
depending on the platform -- come from Andrew Moon's public domain
poly1305-donna portable code, modified for usage in the kernel. The
precomputation in the 32-bit version and the use of 64x64 multiplies in
the 64-bit version make these perform better than the code it replaces.
Moon's code is also very widespread and has received many eyeballs of
scrutiny.
There's a bit of interference with the x86 implementation, which
relies on internal details of the old scalar implementation. In the next
commit, the x86 implementation will be replaced with a faster one that
doesn't rely on this, so none of this matters much. But for now, to keep
this passing the tests, we inline the bits of the old implementation
that the x86 implementation relied on. Also, since we now support a
slightly larger key space, via the union, some offsets had to be fixed
up.
Nonce calculation was folded in with the emit function, to take
advantage of 64x64 arithmetic. However, Adiantum appeared to rely on no
nonce handling in emit, so this path was conditionalized. We also
introduced a new struct, poly1305_core_key, to represent the precise
amount of space that particular implementation uses.
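For a rough idea of the shape this takes (an illustrative sketch, not the exact kernel definitions), the union lets the 32x32 and 64x64 versions share one key structure, which is also why some offsets had to move:
    #include <linux/types.h>

    /* Sketch only: one key representation usable by either C version. */
    struct poly1305_key_sketch {
            union {
                    u32 r[5];       /* limbs used by the 32x32 version (base 2^26) */
                    u64 r64[3];     /* limbs used by the 64x64 version */
            };
    };

    /* The "core" key is what poly1305_core_setkey()-style code operates
     * on: the clamped r plus the precomputation mentioned above. */
    struct poly1305_core_key_sketch {
            struct poly1305_key_sketch key;
            struct poly1305_key_sketch precomputed_s;
    };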
Testing with kbench9000, depending on the CPU, the update function for
the 32x32 version has been improved by 4%-7%, and for the 64x64 by
19%-30%. The 32x32 gains are small, but I think there's great value in
having a parallel implementation to the 64x64 one so that the two can be
compared side-by-side as nice stand-alone units.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Thu, 16 Jan 2020 07:17:08 +0000 (15:17 +0800)]
Merge git://git./linux/kernel/git/herbert/crypto-2.6
Merge crypto tree to pick up hisilicon patch.
Krzysztof Kozlowski [Sat, 4 Jan 2020 15:20:59 +0000 (16:20 +0100)]
crypto: exynos-rng - Rename Exynos to lowercase
Fix up inconsistent usage of upper and lowercase letters in "Exynos"
name.
"EXYNOS" is not an abbreviation but a regular trademarked name.
Therefore it should be written with lowercase letters starting with
capital letter.
The lowercase "Exynos" name is promoted by its manufacturer Samsung
Electronics Co., Ltd., in advertisement materials and on website.
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ayush Sawal [Fri, 3 Jan 2020 04:56:51 +0000 (10:26 +0530)]
crypto: chelsio - Resetting crypto counters during the driver unregister
Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 04:04:40 +0000 (20:04 -0800)]
crypto: algapi - enforce that all instances have a ->free() method
All instances need to have a ->free() method, but people could forget to
set it and then not notice if the instance is never unregistered. To
help detect this bug earlier, don't allow an instance without a ->free()
method to be registered, and complain loudly if someone tries to do it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 04:04:39 +0000 (20:04 -0800)]
crypto: algapi - remove crypto_template::{alloc,free}()
Now that all templates provide a ->create() method which creates an
instance, installs a strongly-typed ->free() method directly to it, and
registers it, the older ->alloc() and ->free() methods in
'struct crypto_template' are no longer used. Remove them.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 04:04:38 +0000 (20:04 -0800)]
crypto: shash - convert shash_free_instance() to new style
Convert shash_free_instance() and its users to the new way of freeing
instances, where a ->free() method is installed to the instance struct
itself. This replaces the weakly-typed method crypto_template::free().
This will allow removing support for the old way of freeing instances.
Also give shash_free_instance() a more descriptive name to reflect that
it's only for instances with a single spawn, not for any instance.
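Roughly, the resulting helper looks like this (a sketch reconstructed from the description above, not quoted from the patch):
    #include <crypto/internal/hash.h>
    #include <linux/slab.h>

    /* Sketch: a strongly-typed free method for shash instances that have
     * exactly one spawn; installed via inst->free before registration. */
    static void shash_free_singlespawn_instance(struct shash_instance *inst)
    {
            crypto_drop_spawn(shash_instance_ctx(inst));
            kfree(inst);
    }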
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 04:04:37 +0000 (20:04 -0800)]
crypto: cryptd - convert to new way of freeing instances
Convert the "cryptd" template to the new way of freeing instances, where
a ->free() method is installed to the instance struct itself. This
replaces the weakly-typed method crypto_template::free().
This will allow removing support for the old way of freeing instances.
Note that the 'default' case in cryptd_free() was already unreachable.
So, we aren't missing anything by keeping only the ahash and aead parts.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 04:04:36 +0000 (20:04 -0800)]
crypto: geniv - convert to new way of freeing instances
Convert the "seqiv" template to the new way of freeing instances where a
->free() method is installed to the instance struct itself. Also remove
the unused implementation of the old way of freeing instances from the
"echainiv" template, since it's already using the new way too.
In doing this, also simplify the code by making the helper function
aead_geniv_alloc() install the ->free() method, instead of making seqiv
and echainiv do this themselves. This is analogous to how
skcipher_alloc_instance_simple() works.
This will allow removing support for the old way of freeing instances.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 04:04:35 +0000 (20:04 -0800)]
crypto: hash - add support for new way of freeing instances
Add support to shash and ahash for the new way of freeing instances
(already used for skcipher, aead, and akcipher) where a ->free() method
is installed to the instance struct itself. These methods are more
strongly-typed than crypto_template::free(), which they replace.
This will allow removing support for the old way of freeing instances.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:59:08 +0000 (19:59 -0800)]
crypto: algapi - fold crypto_init_spawn() into crypto_grab_spawn()
Now that crypto_init_spawn() is only called by crypto_grab_spawn(),
simplify things by moving its functionality into crypto_grab_spawn().
In the process of doing this, also be more consistent about when the
spawn and instance are updated, and remove the crypto_spawn::dropref
flag since now it's always set.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:59:07 +0000 (19:59 -0800)]
crypto: ahash - unexport crypto_ahash_type
Now that all the templates that need ahash spawns have been converted to
use crypto_grab_ahash() rather than look up the algorithm directly,
crypto_ahash_type is no longer used outside of ahash.c. Make it static.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:59:06 +0000 (19:59 -0800)]
crypto: algapi - remove obsoleted instance creation helpers
Remove lots of helper functions that were previously used for
instantiating crypto templates, but are now unused:
- crypto_get_attr_alg() and similar functions looked up an inner
algorithm directly from a template parameter. These were replaced
with getting the algorithm's name, then calling crypto_grab_*().
- crypto_init_spawn2() and similar functions initialized a spawn, given
an algorithm. Similarly, these were replaced with crypto_grab_*().
- crypto_alloc_instance() and similar functions allocated an instance
with a single spawn, given the inner algorithm. These aren't useful
anymore since crypto_grab_*() need the instance allocated first.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:59:05 +0000 (19:59 -0800)]
crypto: cipher - make crypto_spawn_cipher() take a crypto_cipher_spawn
Now that all users of single-block cipher spawns have been converted to
use 'struct crypto_cipher_spawn' rather than the less specifically typed
'struct crypto_spawn', make crypto_spawn_cipher() take a pointer to a
'struct crypto_cipher_spawn' rather than a 'struct crypto_spawn'.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:59:04 +0000 (19:59 -0800)]
crypto: xcbc - use crypto_grab_cipher() and simplify error paths
Make the xcbc template use the new function crypto_grab_cipher() to
initialize its cipher spawn.
This is needed to make all spawns be initialized in a consistent way.
This required making xcbc_create() allocate the instance directly rather
than use shash_alloc_instance().
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:59:03 +0000 (19:59 -0800)]
crypto: vmac - use crypto_grab_cipher() and simplify error paths
Make the vmac64 template use the new function crypto_grab_cipher() to
initialize its cipher spawn.
This is needed to make all spawns be initialized in a consistent way.
This required making vmac_create() allocate the instance directly rather
than use shash_alloc_instance().
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:59:02 +0000 (19:59 -0800)]
crypto: cmac - use crypto_grab_cipher() and simplify error paths
Make the cmac template use the new function crypto_grab_cipher() to
initialize its cipher spawn.
This is needed to make all spawns be initialized in a consistent way.
This required making cmac_create() allocate the instance directly rather
than use shash_alloc_instance().
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:59:01 +0000 (19:59 -0800)]
crypto: cbcmac - use crypto_grab_cipher() and simplify error paths
Make the cbcmac template use the new function crypto_grab_cipher() to
initialize its cipher spawn.
This is needed to make all spawns be initialized in a consistent way.
This required making cbcmac_create() allocate the instance directly
rather than use shash_alloc_instance().
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:59:00 +0000 (19:59 -0800)]
crypto: skcipher - use crypto_grab_cipher() and simplify error paths
Make skcipher_alloc_instance_simple() use the new function
crypto_grab_cipher() to initialize its cipher spawn.
This is needed to make all spawns be initialized in a consistent way.
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:59 +0000 (19:58 -0800)]
crypto: chacha20poly1305 - use crypto_grab_ahash() and simplify error paths
Make the rfc7539 and rfc7539esp templates use the new function
crypto_grab_ahash() to initialize their ahash spawn.
This is needed to make all spawns be initialized in a consistent way.
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:58 +0000 (19:58 -0800)]
crypto: ccm - use crypto_grab_ahash() and simplify error paths
Make the ccm and ccm_base templates use the new function
crypto_grab_ahash() to initialize their ahash spawn.
This is needed to make all spawns be initialized in a consistent way.
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:57 +0000 (19:58 -0800)]
crypto: gcm - use crypto_grab_ahash() and simplify error paths
Make the gcm and gcm_base templates use the new function
crypto_grab_ahash() to initialize their ahash spawn.
This is needed to make all spawns be initialized in a consistent way.
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:56 +0000 (19:58 -0800)]
crypto: authencesn - use crypto_grab_ahash() and simplify error paths
Make the authencesn template use the new function crypto_grab_ahash() to
initialize its ahash spawn.
This is needed to make all spawns be initialized in a consistent way.
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:55 +0000 (19:58 -0800)]
crypto: authenc - use crypto_grab_ahash() and simplify error paths
Make the authenc template use the new function crypto_grab_ahash() to
initialize its ahash spawn.
This is needed to make all spawns be initialized in a consistent way.
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:54 +0000 (19:58 -0800)]
crypto: hmac - use crypto_grab_shash() and simplify error paths
Make the hmac template use the new function crypto_grab_shash() to
initialize its shash spawn.
This is needed to make all spawns be initialized in a consistent way.
This required making hmac_create() allocate the instance directly rather
than use shash_alloc_instance().
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:53 +0000 (19:58 -0800)]
crypto: cryptd - use crypto_grab_shash() and simplify error paths
Make the cryptd template (in the hash case) use the new function
crypto_grab_shash() to initialize its shash spawn.
This is needed to make all spawns be initialized in a consistent way.
This required making cryptd_create_hash() allocate the instance directly
rather than use cryptd_alloc_instance().
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:52 +0000 (19:58 -0800)]
crypto: adiantum - use crypto_grab_{cipher,shash} and simplify error paths
Make the adiantum template use the new functions crypto_grab_cipher()
and crypto_grab_shash() to initialize its cipher and shash spawns.
This is needed to make all spawns be initialized in a consistent way.
Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:51 +0000 (19:58 -0800)]
crypto: cipher - introduce crypto_cipher_spawn and crypto_grab_cipher()
Currently, "cipher" (single-block cipher) spawns are usually initialized
by using crypto_get_attr_alg() to look up the algorithm, then calling
crypto_init_spawn(). In one case, crypto_grab_spawn() is used directly.
The former way is different from how skcipher, aead, and akcipher spawns
are initialized (they use crypto_grab_*()), and for no good reason.
This difference introduces unnecessary complexity.
The crypto_grab_*() functions used to have some problems, like not
holding a reference to the algorithm and requiring the caller to
initialize spawn->base.inst. But those problems are fixed now.
Also, the cipher spawns are not strongly typed; e.g., the API requires
that the user manually specify the flags CRYPTO_ALG_TYPE_CIPHER and
CRYPTO_ALG_TYPE_MASK. Though the "cipher" algorithm type itself isn't
yet strongly typed, we can start by making the spawns strongly typed.
So, let's introduce a new 'struct crypto_cipher_spawn', and functions
crypto_grab_cipher() and crypto_drop_cipher() to grab and drop them.
Later patches will convert all cipher spawns to use these, then make
crypto_spawn_cipher() take 'struct crypto_cipher_spawn' as well, instead
of a bare 'struct crypto_spawn' as it currently does.
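Put together with the rest of the series, a template's ->create() then looks roughly like this (a sketch, not copied from any one template; the mask computation and algorithm setup are elided, and the example_-prefixed names are invented):
    #include <crypto/algapi.h>
    #include <crypto/internal/hash.h>
    #include <linux/slab.h>

    /* Sketch: free routine first, so the single error path below can use it. */
    static void example_free_inst(struct shash_instance *inst)
    {
            crypto_drop_cipher(shash_instance_ctx(inst));
            kfree(inst);
    }

    static int example_create(struct crypto_template *tmpl, struct rtattr **tb,
                              u32 mask)
    {
            struct shash_instance *inst;
            struct crypto_cipher_spawn *spawn;
            int err;

            inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
            if (!inst)
                    return -ENOMEM;
            spawn = shash_instance_ctx(inst);

            /* Takes the instance directly and accepts an ERR_PTR() name,
             * so crypto_attr_alg_name() can be passed straight through. */
            err = crypto_grab_cipher(spawn, shash_crypto_instance(inst),
                                     crypto_attr_alg_name(tb[1]), 0, mask);
            if (err)
                    goto err_free_inst;

            /* ... fill in inst->alg, set inst->free, register ... */
            return 0;

    err_free_inst:
            /* Dropping a never-initialized spawn is now a no-op, so one
             * error path covers every failure point. */
            example_free_inst(inst);
            return err;
    }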
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:50 +0000 (19:58 -0800)]
crypto: ahash - introduce crypto_grab_ahash()
Currently, ahash spawns are initialized by using ahash_attr_alg() or
crypto_find_alg() to look up the ahash algorithm, then calling
crypto_init_ahash_spawn().
This is different from how skcipher, aead, and akcipher spawns are
initialized (they use crypto_grab_*()), and for no good reason. This
difference introduces unnecessary complexity.
The crypto_grab_*() functions used to have some problems, like not
holding a reference to the algorithm and requiring the caller to
initialize spawn->base.inst. But those problems are fixed now.
So, let's introduce crypto_grab_ahash() so that we can convert all
templates to the same way of initializing their spawns.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:49 +0000 (19:58 -0800)]
crypto: shash - introduce crypto_grab_shash()
Currently, shash spawns are initialized by using shash_attr_alg() or
crypto_alg_mod_lookup() to look up the shash algorithm, then calling
crypto_init_shash_spawn().
This is different from how skcipher, aead, and akcipher spawns are
initialized (they use crypto_grab_*()), and for no good reason. This
difference introduces unnecessary complexity.
The crypto_grab_*() functions used to have some problems, like not
holding a reference to the algorithm and requiring the caller to
initialize spawn->base.inst. But those problems are fixed now.
So, let's introduce crypto_grab_shash() so that we can convert all
templates to the same way of initializing their spawns.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:48 +0000 (19:58 -0800)]
crypto: algapi - pass instance to crypto_grab_spawn()
Currently, crypto_spawn::inst is first used temporarily to pass the
instance to crypto_grab_spawn(). Then crypto_init_spawn() overwrites it
with crypto_spawn::next, which shares the same union. Finally,
crypto_spawn::inst is set again when the instance is registered.
Make this less convoluted by just passing the instance as an argument to
crypto_grab_spawn() instead.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:47 +0000 (19:58 -0800)]
crypto: akcipher - pass instance to crypto_grab_akcipher()
Initializing a crypto_akcipher_spawn currently requires:
1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_akcipher().
But there's no reason for these steps to be separate, and in fact this
unneeded complication has caused at least one bug, the one fixed by
commit
6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst")
So just make crypto_grab_akcipher() take the instance as an argument.
To keep the function call from getting too unwieldy due to this extra
argument, also introduce a 'mask' variable into pkcs1pad_create().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:46 +0000 (19:58 -0800)]
crypto: aead - pass instance to crypto_grab_aead()
Initializing a crypto_aead_spawn currently requires:
1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_aead().
But there's no reason for these steps to be separate, and in fact this
unneeded complication has caused at least one bug, the one fixed by
commit
6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst")
So just make crypto_grab_aead() take the instance as an argument.
To keep the function calls from getting too unwieldy due to this extra
argument, also introduce a 'mask' variable into the affected places
which weren't already using one.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:45 +0000 (19:58 -0800)]
crypto: skcipher - pass instance to crypto_grab_skcipher()
Initializing a crypto_skcipher_spawn currently requires:
1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_skcipher().
But there's no reason for these steps to be separate, and in fact this
unneeded complication has caused at least one bug, the one fixed by
commit
6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst")
So just make crypto_grab_skcipher() take the instance as an argument.
To keep the function calls from getting too unwieldy due to this extra
argument, also introduce a 'mask' variable into the affected places
which weren't already using one.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:44 +0000 (19:58 -0800)]
crypto: ahash - make struct ahash_instance be the full size
Define struct ahash_instance in a way analogous to struct
skcipher_instance, struct aead_instance, and struct akcipher_instance,
where the struct is defined to include both the algorithm structure at
the beginning and the additional crypto_instance fields at the end.
This is needed to allow allocating ahash instances directly using
kzalloc(sizeof(*inst) + sizeof(*ictx), ...) in the same way as skcipher,
aead, and akcipher instances. In turn, that's needed to make spawns be
initialized in a consistent way everywhere.
Also take advantage of the addition of the base instance to struct
ahash_instance by simplifying the ahash_crypto_instance() and
ahash_instance() functions.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:43 +0000 (19:58 -0800)]
crypto: shash - make struct shash_instance be the full size
Define struct shash_instance in a way analogous to struct
skcipher_instance, struct aead_instance, and struct akcipher_instance,
where the struct is defined to include both the algorithm structure at
the beginning and the additional crypto_instance fields at the end.
This is needed to allow allocating shash instances directly using
kzalloc(sizeof(*inst) + sizeof(*ictx), ...) in the same way as skcipher,
aead, and akcipher instances. In turn, that's needed to make spawns be
initialized in a consistent way everywhere.
Also take advantage of the addition of the base instance to struct
shash_instance by simplifying the shash_crypto_instance() and
shash_instance() functions.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:42 +0000 (19:58 -0800)]
crypto: algapi - make crypto_grab_spawn() handle an ERR_PTR() name
To allow further simplifying template ->create() functions, make
crypto_grab_spawn() handle an ERR_PTR() name by passing back the error.
For most templates, this will allow the result of crypto_attr_alg_name()
to be passed directly to crypto_grab_*(), rather than first having to
assign it to a variable [where it can then potentially be misused, as it
was in the rfc7539 template prior to commit
5e27f38f1f3f ("crypto:
chacha20poly1305 - set cra_name correctly")] and check it for error.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Fri, 3 Jan 2020 03:58:41 +0000 (19:58 -0800)]
crypto: algapi - make crypto_drop_spawn() a no-op on uninitialized spawns
Make crypto_drop_spawn() do nothing when the spawn hasn't been
initialized with an algorithm yet. This will allow simplifying error
handling in all the template ->create() functions, since on error they
will be able to just call their usual "free instance" function, rather
than having to handle dropping just the spawns that have been
initialized so far.
This does assume the spawn starts out zero-filled, but that's always the
case since instances are allocated with kzalloc(). And some other code
already assumes this anyway.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gary R Hook [Thu, 2 Jan 2020 19:57:03 +0000 (13:57 -0600)]
crypto: ccp - Update MAINTAINERS for CCP driver
Remove Gary R Hook as CCP maintainer.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Christian Lamparter [Wed, 1 Jan 2020 22:27:02 +0000 (23:27 +0100)]
crypto: crypto4xx - use GFP_KERNEL for big allocations
The driver should use GFP_KERNEL for the bigger allocation
during the driver's crypto4xx_probe() and not GFP_ATOMIC in
my opinion.
Signed-off-by: Christian Lamparter <chunkeey@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Christian Lamparter [Wed, 1 Jan 2020 22:27:01 +0000 (23:27 +0100)]
crypto: crypto4xx - reduce memory fragmentation
With recent kernels (>5.2), the driver fails to probe, as the
allocation of the driver's scatter buffer fails with -ENOMEM.
This happens in crypto4xx_build_sdr(), where the driver tries
to get 512 KiB (= PPC4XX_SD_BUFFER_SIZE * PPC4XX_NUM_SD) of
contiguous memory. This big chunk is by design, since the driver
uses this circumstance in crypto4xx_copy_pkt_to_dst() to
its advantage:
"all scatter-buffers are all neatly organized in one big
continuous ringbuffer; So scatterwalk_map_and_copy() can be
instructed to copy a range of buffers in one go."
The PowerPC arch does not have support for DMA_CMA. Hence,
this patch reorganizes the order in which the memory
allocations are done, since the driver itself is responsible
for some of the issues.
Signed-off-by: Christian Lamparter <chunkeey@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 31 Dec 2019 03:19:38 +0000 (21:19 -0600)]
crypto: remove propagation of CRYPTO_TFM_RES_* flags
The CRYPTO_TFM_RES_* flags were apparently meant as a way to make the
->setkey() functions provide more information about errors. But these
flags weren't actually being used or tested, and in many cases they
weren't being set correctly anyway. So they've now been removed.
Also, if someone ever actually needs to start better distinguishing
->setkey() errors (which is somewhat unlikely, as this has been unneeded
for a long time), we'd be much better off just defining different return
values, like -EINVAL if the key is invalid for the algorithm vs.
-EKEYREJECTED if the key was rejected by a policy like "no weak keys".
That would be much simpler, less error-prone, and easier to test.
So just remove CRYPTO_TFM_RES_MASK and all the unneeded logic that
propagates these flags around.
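If such a distinction is ever wanted, it would look something like this (a sketch of the suggested direction, with a made-up algorithm and a trivial stand-in policy check):
    #include <linux/errno.h>
    #include <linux/string.h>
    #include <crypto/skcipher.h>

    /* Sketch: report distinct setkey() failures via return values instead
     * of tfm result flags. The all-zero check stands in for a real
     * "no weak keys" policy. */
    static int example_setkey(struct crypto_skcipher *tfm, const u8 *key,
                              unsigned int keylen)
    {
            static const u8 zeroes[32];

            if (keylen != sizeof(zeroes))
                    return -EINVAL;         /* key invalid for the algorithm */
            if (!memcmp(key, zeroes, sizeof(zeroes)))
                    return -EKEYREJECTED;   /* key rejected by policy */
            return 0;
    }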
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 31 Dec 2019 03:19:37 +0000 (21:19 -0600)]
crypto: remove CRYPTO_TFM_RES_WEAK_KEY
The CRYPTO_TFM_RES_WEAK_KEY flag was apparently meant as a way to make
the ->setkey() functions provide more information about errors.
However, no one actually checks for this flag, which makes it pointless.
There are also no tests that verify that all algorithms actually set (or
don't set) it correctly.
This is also the last remaining CRYPTO_TFM_RES_* flag, which means that
it's the only thing still needing all the boilerplate code which
propagates these flags around from child => parent tfms.
And if someone ever needs to distinguish this error in the future (which
is somewhat unlikely, as it's been unneeded for a long time), it would
be much better to just define a new return value like -EKEYREJECTED.
That would be much simpler, less error-prone, and easier to test.
So just remove this flag.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 31 Dec 2019 03:19:36 +0000 (21:19 -0600)]
crypto: remove CRYPTO_TFM_RES_BAD_KEY_LEN
The CRYPTO_TFM_RES_BAD_KEY_LEN flag was apparently meant as a way to
make the ->setkey() functions provide more information about errors.
However, no one actually checks for this flag, which makes it pointless.
Also, many algorithms fail to set this flag when given a bad length key.
Reviewing just the generic implementations, this is the case for
aes-fixed-time, cbcmac, echainiv, nhpoly1305, pcrypt, rfc3686, rfc4309,
rfc7539, rfc7539esp, salsa20, seqiv, and xcbc. But there are probably
many more in arch/*/crypto/ and drivers/crypto/.
Some algorithms can even set this flag when the key is the correct
length. For example, authenc and authencesn set it when the key payload
is malformed in any way (not just a bad length), the atmel-sha and ccree
drivers can set it if a memory allocation fails, and the chelsio driver
sets it for bad auth tag lengths, not just bad key lengths.
So even if someone actually wanted to start checking this flag (which
seems unlikely, since it's been unused for a long time), there would be
a lot of work needed to get it working correctly. But it would probably
be much better to go back to the drawing board and just define different
return values, like -EINVAL if the key is invalid for the algorithm vs.
-EKEYREJECTED if the key was rejected by a policy like "no weak keys".
That would be much simpler, less error-prone, and easier to test.
So just remove this flag.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 31 Dec 2019 03:19:35 +0000 (21:19 -0600)]
crypto: remove CRYPTO_TFM_RES_BAD_BLOCK_LEN
The flag CRYPTO_TFM_RES_BAD_BLOCK_LEN is never checked for, and it's
only set by one driver. And even that single driver's use is wrong
because the driver is setting the flag from ->encrypt() and ->decrypt()
with no locking, which is unsafe because ->encrypt() and ->decrypt() can
be executed by many threads in parallel on the same tfm.
Just remove this flag.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 31 Dec 2019 03:19:34 +0000 (21:19 -0600)]
crypto: remove unused tfm result flags
The tfm result flags CRYPTO_TFM_RES_BAD_KEY_SCHED and
CRYPTO_TFM_RES_BAD_FLAGS are never used, so remove them.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 31 Dec 2019 03:19:33 +0000 (21:19 -0600)]
crypto: atmel-sha - fix error handling when setting hmac key
HMAC keys can be of any length, and atmel_sha_hmac_key_set() can only
fail due to -ENOMEM. But atmel_sha_hmac_setkey() incorrectly treated
any error as a "bad key length" error. Fix it to correctly propagate
the -ENOMEM error code and not set any tfm result flags.
Fixes: 81d8750b2b59 ("crypto: atmel-sha - add support to hmac(shaX)")
Cc: Nicolas Ferre <nicolas.ferre@microchip.com>
Cc: Alexandre Belloni <alexandre.belloni@bootlin.com>
Cc: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 31 Dec 2019 03:19:32 +0000 (21:19 -0600)]
crypto: artpec6 - return correct error code for failed setkey()
->setkey() is supposed to return -EINVAL for invalid key lengths, not -1.
Fixes: a21eb94fc4d3 ("crypto: axis - add ARTPEC-6/7 crypto accelerator driver")
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Lars Persson <lars.persson@axis.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Lars Persson <lars.persson@axis.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 31 Dec 2019 03:19:31 +0000 (21:19 -0600)]
crypto: chelsio - fix writing tfm flags to wrong place
The chelsio crypto driver is casting 'struct crypto_aead' directly to
'struct crypto_tfm', which is incorrect because the crypto_tfm isn't the
first field of 'struct crypto_aead'. Consequently, the calls to
crypto_tfm_set_flags() are modifying some other field in the struct.
Also, the driver is setting CRYPTO_TFM_RES_BAD_KEY_LEN in
->setauthsize(), not just in ->setkey(). This is incorrect since this
flag is for bad key lengths, not for bad authentication tag lengths.
Fix these bugs by removing the broken crypto_tfm_set_flags() calls from
->setauthsize() and by fixing them in ->setkey().
Fixes: 324429d74127 ("chcr: Support for Chelsio's Crypto Hardware")
Cc: <stable@vger.kernel.org> # v4.9+
Cc: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Mon, 30 Dec 2019 19:41:15 +0000 (13:41 -0600)]
crypto: skcipher - remove skcipher_walk_aead()
skcipher_walk_aead() is unused and is identical to
skcipher_walk_aead_encrypt(), so remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Arnd Bergmann [Tue, 7 Jan 2020 20:08:58 +0000 (21:08 +0100)]
crypto: hisilicon/sec2 - Use atomics instead of __sync
The use of __sync functions for atomic memory access is not
supported in the kernel, and can result in a link error depending
on configuration:
ERROR: "__tsan_atomic32_compare_exchange_strong" [drivers/crypto/hisilicon/sec2/hisi_sec2.ko] undefined!
ERROR: "__tsan_atomic64_fetch_add" [drivers/crypto/hisilicon/sec2/hisi_sec2.ko] undefined!
Use the kernel's own atomic interfaces instead. This way the
debugfs interface actually reads the counter atomically.
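For illustration, the kernel-native equivalent looks roughly like this (a sketch with invented names, not the driver's actual counters):
    #include <linux/atomic.h>
    #include <linux/types.h>

    /* Sketch: a 64-bit counter maintained with the kernel's atomic API
     * instead of the unsupported __sync_* builtins. */
    static atomic64_t sec_recv_cnt = ATOMIC64_INIT(0);

    static void sec_count_recv(void)
    {
            atomic64_inc(&sec_recv_cnt);
    }

    static u64 sec_read_recv_cnt(void)
    {
            /* A debugfs read then sees one consistent value. */
            return atomic64_read(&sec_recv_cnt);
    }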
Fixes: 416d82204df4 ("crypto: hisilicon - add HiSilicon SEC V2 driver")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rijo Thomas [Fri, 27 Dec 2019 05:24:03 +0000 (10:54 +0530)]
Documentation: tee: add AMD-TEE driver details
Update tee.txt with AMD-TEE driver details. The driver is written
to communicate with AMD's TEE.
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
Co-developed-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Rijo Thomas <Rijo-john.Thomas@amd.com>
Reviewed-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rijo Thomas [Fri, 27 Dec 2019 05:24:02 +0000 (10:54 +0530)]
tee: amdtee: check TEE status during driver initialization
The AMD-TEE driver should check if TEE is available before
registering itself with TEE subsystem. This ensures that
there is a TEE which the driver can talk to before proceeding
with tee device node allocation.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
Co-developed-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Rijo Thomas <Rijo-john.Thomas@amd.com>
Reviewed-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rijo Thomas [Fri, 27 Dec 2019 05:24:01 +0000 (10:54 +0530)]
tee: add AMD-TEE driver
Adds AMD-TEE driver.
* targets AMD APUs which have an AMD Secure Processor with software-based
Trusted Execution Environment (TEE) support
* registers with TEE subsystem
* defines tee_driver_ops function callbacks
* kernel allocated memory is used as shared memory between normal
world and secure world.
* acts as REE (Rich Execution Environment) communication agent, which
uses the services of AMD Secure Processor driver to submit commands
for processing in TEE environment
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
Co-developed-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Rijo Thomas <Rijo-john.Thomas@amd.com>
Reviewed-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rijo Thomas [Fri, 27 Dec 2019 05:24:00 +0000 (10:54 +0530)]
tee: allow compilation of tee subsystem for AMD CPUs
Allow compilation of tee subsystem for AMD's CPUs which have a dedicated
AMD Secure Processor for Trusted Execution Environment (TEE).
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
Co-developed-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Rijo Thomas <Rijo-john.Thomas@amd.com>
Reviewed-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eneas U de Queiroz [Fri, 20 Dec 2019 19:02:18 +0000 (16:02 -0300)]
crypto: qce - allow building only hashes/ciphers
Allow the user to choose whether to build support for all algorithms
(default), hashes-only, or skciphers-only.
The QCE engine does not appear to scale as well as the CPU to handle
multiple crypto requests. While the ipq40xx chips have 4-core CPUs, the
QCE handles only 2 requests in parallel.
IPsec throughput seems to improve when disabling either family of
algorithms, sharing the load with the CPU. Enabling skciphers-only
appears to work best.
Signed-off-by: Eneas U de Queiroz <cotequeiroz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eneas U de Queiroz [Fri, 20 Dec 2019 19:02:17 +0000 (16:02 -0300)]
crypto: qce - initialize fallback only for AES
Adjust cra_flags to add CRYPTO_ALG_NEED_FALLBACK only for AES ciphers,
where AES-192 is not handled by the qce hardware, and don't allocate &
free the fallback skcipher for other algorithms.
Signed-off-by: Eneas U de Queiroz <cotequeiroz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eneas U de Queiroz [Fri, 20 Dec 2019 19:02:16 +0000 (16:02 -0300)]
crypto: qce - update the skcipher IV
Update the IV after the completion of each cipher operation.
Signed-off-by: Eneas U de Queiroz <cotequeiroz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eneas U de Queiroz [Fri, 20 Dec 2019 19:02:15 +0000 (16:02 -0300)]
crypto: qce - save a sg table slot for result buf
When ctr-aes-qce is used for gcm-mode, an extra sg entry for the
authentication tag is present, causing trouble when the qce driver
prepares the dst-results sg table for dma.
It computes the number of entries needed with sg_nents_for_len, leaving
out the tag entry. Then it creates a sg table with that number plus
one, used to store a result buffer.
When copying the sg table, there's no limit to the number of entries
copied, so the extra slot is filled with the authentication tag sg.
When the driver tries to add the result sg, the list is full, and it
returns -EINVAL.
By limiting the number of sg entries copied to the dest table, the slot
for the result buffer is guaranteed to be unused.
Signed-off-by: Eneas U de Queiroz <cotequeiroz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eneas U de Queiroz [Fri, 20 Dec 2019 19:02:14 +0000 (16:02 -0300)]
crypto: qce - fix xts-aes-qce key sizes
XTS-mode uses two keys, so the keysizes should be doubled in
skcipher_def, and halved when checking if it is AES-128/192/256.
Signed-off-by: Eneas U de Queiroz <cotequeiroz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eneas U de Queiroz [Fri, 20 Dec 2019 19:02:13 +0000 (16:02 -0300)]
crypto: qce - fix ctr-aes-qce block, chunk sizes
Set the blocksize of ctr-aes-qce to 1, so it can operate as a stream
cipher, adding the definition for chunksize instead, where the
underlying block size belongs.
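In terms of the skcipher_alg definition, the change amounts to something like this (an illustrative fragment with most fields omitted, not the driver's full table entry):
    #include <crypto/aes.h>
    #include <crypto/internal/skcipher.h>

    /* Sketch: a CTR mode entry declared as a stream cipher, with the
     * underlying block size carried by .chunksize instead. */
    static struct skcipher_alg ctr_aes_sketch = {
            .base.cra_name          = "ctr(aes)",
            .base.cra_blocksize     = 1,                /* stream cipher */
            .chunksize              = AES_BLOCK_SIZE,   /* underlying block size */
            /* remaining callbacks, key sizes, etc. omitted */
    };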
Signed-off-by: Eneas U de Queiroz <cotequeiroz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Fri, 20 Dec 2019 05:29:40 +0000 (13:29 +0800)]
crypto: skcipher - Add skcipher_ialg_simple helper
This patch introduces the skcipher_ialg_simple helper which fetches
the crypto_alg structure from a simple skcipher instance's spawn.
This allows us to remove the third argument from the function
skcipher_alloc_instance_simple.
In doing so the reference count to the algorithm is now maintained
by the Crypto API and the caller no longer needs to drop the alg
refcount.
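Usage in a simple mode template then looks roughly like this (a sketch of the pattern, not an actual template's code; the callbacks are omitted):
    #include <crypto/internal/skcipher.h>
    #include <linux/err.h>

    /* Sketch: the helper returns the inner crypto_alg; its refcount is now
     * held by the API, so no explicit crypto_mod_put() is needed here. */
    static int example_mode_create(struct crypto_template *tmpl,
                                   struct rtattr **tb)
    {
            struct skcipher_instance *inst;
            struct crypto_alg *alg;
            int err;

            inst = skcipher_alloc_instance_simple(tmpl, tb);
            if (IS_ERR(inst))
                    return PTR_ERR(inst);

            alg = skcipher_ialg_simple(inst);
            inst->alg.ivsize = alg->cra_blocksize;  /* e.g. one block of IV */

            /* ... set the encrypt/decrypt callbacks here ... */

            err = skcipher_register_instance(tmpl, inst);
            if (err)
                    inst->free(inst);
            return err;
    }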
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Vinay Kumar Yadav [Thu, 19 Dec 2019 10:51:48 +0000 (16:21 +0530)]
crypto: chtls - Fixed memory leak
Freed work request skbs when connection terminates.
enqueue_wr()/ dequeue_wr() is shared between softirq
and application contexts, should be protected by socket
lock. Moved dequeue_wr() to appropriate file.
Signed-off-by: Vinay Kumar Yadav <vinay.yadav@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Vinay Kumar Yadav [Thu, 19 Dec 2019 10:51:47 +0000 (16:21 +0530)]
crypto: chtls - Add support for AES256-GCM based ciphers
Added support for setting a 256-bit key to the hardware from
setsockopt() for AES256-GCM based ciphers.
Signed-off-by: Vinay Kumar Yadav <vinay.yadav@chelsio.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Adam Ford [Wed, 18 Dec 2019 13:06:14 +0000 (07:06 -0600)]
crypto: caam - Add support for i.MX8M Mini
The i.MX8M Mini uses the same crypto engine as the i.MX8MQ, but
the driver is restricting the check to just the i.MX8MQ.
This patch expands the check for either i.MX8MQ or i.MX8MM.
Signed-off-by: Adam Ford <aford173@gmail.com>
Tested-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Reviewed-by: Iuliana Prodan <iuliana.prodan@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Wed, 18 Dec 2019 07:53:01 +0000 (15:53 +0800)]
crypto: api - Retain alg refcount in crypto_grab_spawn
This patch changes crypto_grab_spawn to retain the reference count
on the algorithm. This is because the caller needs to access the
algorithm parameters and without the reference count the algorithm
can be freed at any time.
The reference count will be subsequently dropped by the crypto API
once the instance has been registered. The helper crypto_drop_spawn
will also conditionally drop the reference count depending on whether
it has been registered.
Note that the code is actually added to crypto_init_spawn. However,
unless the caller activates this by setting spawn->dropref beforehand
then nothing happens. The only caller that sets dropref is currently
crypto_grab_spawn.
Once all legacy users of crypto_init_spawn disappear, then we can
kill the dropref flag.
Internally each instance will maintain a list of its spawns prior
to registration. This memory used by this list is shared with
other fields that are only used after registration. In order for
this to work a new flag spawn->registered is added to indicate
whether spawn->inst can be used.
Fixes: d6ef2f198d4c ("crypto: api - Add crypto_grab_spawn primitive")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ben Dooks (Codethink) [Tue, 17 Dec 2019 11:30:24 +0000 (11:30 +0000)]
crypto: sun4i-ss - make unexported sun4i_ss_pm_ops static
The sun4i_ss_pm_ops is not referenced outside the driver
except via a pointer, so make it static to avoid the following
warning:
drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c:276:25: warning: symbol 'sun4i_ss_pm_ops' was not declared. Should it be static?
Signed-off-by: Ben Dooks (Codethink) <ben.dooks@codethink.co.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Peter Ujfalusi [Tue, 17 Dec 2019 07:35:06 +0000 (09:35 +0200)]
crypto: stm32/hash - Use dma_request_chan() instead dma_request_slave_channel()
dma_request_slave_channel() is a wrapper on top of dma_request_chan()
eating up the error code.
By using dma_request_chan() directly the driver can support deferred
probing against DMA.
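A sketch of the pattern (illustrative only; the channel name and wrapper function are not the driver's actual ones):
    #include <linux/dmaengine.h>
    #include <linux/err.h>

    /* Sketch: dma_request_chan() hands back an ERR_PTR(), so an
     * -EPROBE_DEFER from the DMA provider reaches the caller instead of
     * being swallowed by dma_request_slave_channel(). */
    static int example_dma_init(struct device *dev, struct dma_chan **chan)
    {
            *chan = dma_request_chan(dev, "in");
            if (IS_ERR(*chan))
                    return PTR_ERR(*chan);  /* may be -EPROBE_DEFER */
            return 0;
    }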
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Peter Ujfalusi [Tue, 17 Dec 2019 07:33:05 +0000 (09:33 +0200)]
crypto: img-hash - Use dma_request_chan instead dma_request_slave_channel
dma_request_slave_channel() is a wrapper on top of dma_request_chan()
eating up the error code.
By using dma_request_chan() directly the driver can support deferred
probing against DMA.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Jason A. Donenfeld [Mon, 16 Dec 2019 18:53:26 +0000 (19:53 +0100)]
crypto: lib/curve25519 - re-add selftests
Somehow these were dropped when Zinc was being integrated, which is
problematic, because testing the library interface for Curve25519 is
important. This commit simply adds them back and wires them in in the
same way that the blake2s selftests are wired in.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Chen Zhou [Mon, 16 Dec 2019 10:58:48 +0000 (18:58 +0800)]
crypto: api - remove unneeded semicolon
Fixes coccicheck warning:
./include/linux/crypto.h:573:2-3: Unneeded semicolon
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Chen Zhou [Mon, 16 Dec 2019 10:57:04 +0000 (18:57 +0800)]
crypto: allwinner - remove unneeded semicolon
Fixes coccicheck warning:
./drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c:558:52-53: Unneeded semicolon
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Sun, 15 Dec 2019 23:51:19 +0000 (15:51 -0800)]
crypto: algapi - make unregistration functions return void
Some of the algorithm unregistration functions return -ENOENT when asked
to unregister a non-registered algorithm, while others always return 0
or always return void. But no users check the return value, except for
two of the bulk unregistration functions which print a message on error
but still always return 0 to their caller, and crypto_del_alg() which
calls crypto_unregister_instance() which always returns 0.
Since unregistering a non-registered algorithm is always a kernel bug
but there isn't anything callers should do to handle this situation at
runtime, let's simplify things by making all the unregistration
functions return void, and moving the error message into
crypto_unregister_alg() and upgrading it to a WARN().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Mark Brown [Fri, 13 Dec 2019 15:49:10 +0000 (15:49 +0000)]
crypto: arm64 - Use modern annotations for assembly functions
In an effort to clarify and simplify the annotation of assembly functions
in the kernel, new macros have been introduced. These replace ENTRY and
ENDPROC and also add a new annotation for static functions which previously
had no ENTRY equivalent. Update the annotations in the crypto code to the
new macros.
There are a small number of files imported from OpenSSL where the assembly
is generated using perl programs, these are not currently annotated at all
and have not been modified.
Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tudor Ambarus [Fri, 13 Dec 2019 14:45:44 +0000 (14:45 +0000)]
crypto: atmel-aes - Fix CTR counter overflow when multiple fragments
The CTR transfer works in fragments of data of maximum 1 MByte because
of the 16 bit CTR counter embedded in the IP. Fix the CTR counter
overflow handling for messages larger than 1 MByte.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: 781a08d9740a ("crypto: atmel-aes - Fix counter overflow in CTR mode")
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ayush Sawal [Fri, 13 Dec 2019 11:38:52 +0000 (17:08 +0530)]
crypto: chelsio - calculating tx_channel_id as per the max number of channels
The chcr driver was not using the number of channels from the LLD and
was assuming that there are always two channels available. With the
following patch, chcr will use the number of channels as passed by
cxgb4.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tudor Ambarus [Fri, 13 Dec 2019 09:54:56 +0000 (09:54 +0000)]
crypto: atmel-{aes,tdes} - Update the IV only when the op succeeds
Do not update the IV in case of errors.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tudor Ambarus [Fri, 13 Dec 2019 09:54:54 +0000 (09:54 +0000)]
crypto: atmel-{sha,tdes} - Print warn message even when deferring
Even when deferring, we would like to know what caused it.
Update dev_warn to dev_err because if the DMA init fails,
the probe is stopped.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tudor Ambarus [Fri, 13 Dec 2019 09:54:49 +0000 (09:54 +0000)]
crypto: atmel-{aes,sha,tdes} - Stop passing unused argument in _dma_init()
pdata is not used.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tudor Ambarus [Fri, 13 Dec 2019 09:54:46 +0000 (09:54 +0000)]
crypto: atmel-{aes,sha,tdes} - Drop duplicate init of dma_slave_config.direction
The 'direction' member of the dma_slave_config will be going away
as it duplicates the direction given in the prepare call.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tudor Ambarus [Fri, 13 Dec 2019 09:54:42 +0000 (09:54 +0000)]
crypto: atmel-{aes,sha} - Fix incorrect use of dmaengine_terminate_all()
device_terminate_all() is used to abort all pending and ongoing
transfers on the channel; it should be used only in the error path.
Also, dmaengine_terminate_all() is deprecated and one should use either
dmaengine_terminate_async() or dmaengine_terminate_sync(). Since the
method is not called from atomic context here, use
dmaengine_terminate_sync().
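An illustrative usage sketch (the surrounding context is hypothetical;
the dmaengine calls are the real API):
#include <linux/dmaengine.h>

static int demo_dma_xfer(struct dma_chan *chan,
                         struct dma_async_tx_descriptor *desc)
{
        dma_cookie_t cookie;
        int err;

        cookie = dmaengine_submit(desc);
        err = dma_submit_error(cookie);
        if (err)
                goto err_abort;

        dma_async_issue_pending(chan);
        return 0;

err_abort:
        /*
         * Error path only, and not in atomic context: prefer the
         * synchronous variant over the deprecated
         * dmaengine_terminate_all().
         */
        dmaengine_terminate_sync(chan);
        return err;
}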
A secondary aspect of this patch is that it luckily avoids a deadlock
between atmel_aes and at_hdmac.c: while in its tasklet with the lock
held, the DMA controller invokes the client callback, which in turn
calls dmaengine_terminate_all() and tries to take the same lock. The
proper at_hdmac fix would be to drop the lock before invoking the
client callback; a fix for at_hdmac will follow.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Brendan Higgins [Wed, 11 Dec 2019 19:27:40 +0000 (11:27 -0800)]
crypto: amlogic - add unspecified HAS_IOMEM dependency
Currently CONFIG_CRYPTO_DEV_AMLOGIC_GXL=y implicitly depends on
CONFIG_HAS_IOMEM=y; consequently, on architectures without IOMEM we get
the following build error:
ld: drivers/crypto/amlogic/amlogic-gxl-core.o: in function `meson_crypto_probe':
drivers/crypto/amlogic/amlogic-gxl-core.c:240: undefined reference to `devm_platform_ioremap_resource'
Fix the build error by adding the unspecified dependency.
Reported-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
Acked-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Brendan Higgins [Wed, 11 Dec 2019 19:27:39 +0000 (11:27 -0800)]
crypto: inside-secure - add unspecified HAS_IOMEM dependency
Currently CONFIG_CRYPTO_DEV_SAFEXCEL=y implicitly depends on
CONFIG_HAS_IOMEM=y; consequently, on architectures without IOMEM we get
the following build error:
ld: drivers/crypto/inside-secure/safexcel.o: in function `safexcel_probe':
drivers/crypto/inside-secure/safexcel.c:1692: undefined reference to `devm_platform_ioremap_resource'
Fix the build error by adding the unspecified dependency.
Reported-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pascal van Leeuwen [Wed, 11 Dec 2019 16:32:37 +0000 (17:32 +0100)]
crypto: inside-secure - Fix hang case on EIP97 with basic DES/3DES ops
This patch fixes another hang case on the EIP97 caused by sending
invalidation tokens to the hardware when doing basic (3)DES ECB/CBC
operations. Invalidation tokens are an EIP197 feature and are neither
needed nor supported by the EIP97, so they should not be sent to that
device.
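Sketch of the guard (the structure and flag are hypothetical):
struct demo_priv {                      /* illustrative */
        int is_eip197;
};

/* Invalidation tokens are an EIP197 record-cache feature; the EIP97
 * has no such cache, so it must never receive them. */
static int demo_needs_invalidation_token(const struct demo_priv *priv)
{
        return priv->is_eip197;
}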
Signed-off-by: Pascal van Leeuwen <pvanleeuwen@rambus.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pascal van Leeuwen [Wed, 11 Dec 2019 16:32:36 +0000 (17:32 +0100)]
crypto: inside-secure - Fix hang case on EIP97 with zero length input data
The EIP97 hardware cannot handle zero-length input data and will
(usually) hang when presented with it anyway. This patch converts any
zero-length input into a 1-byte dummy input to prevent this hang.
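A minimal sketch of the workaround (names are illustrative):
static const unsigned char demo_zero_pad[1];

/* The EIP97 must never see a zero-length source: substitute a 1-byte
 * dummy buffer instead. */
static void demo_fixup_input(const unsigned char **src, unsigned int *len)
{
        if (*len == 0) {
                *src = demo_zero_pad;
                *len = 1;
        }
}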
Signed-off-by: Pascal van Leeuwen <pvanleeuwen@rambus.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Pascal van Leeuwen [Wed, 11 Dec 2019 16:32:35 +0000 (17:32 +0100)]
crypto: inside-secure - Fix Unable to fit even 1 command desc error w/ EIP97
Due to the addition of support for modes like AES-CCM and AES-GCM,
which require large command tokens, the size of the descriptor has
grown such that it no longer fits into the descriptor cache of a
standard EIP97. This means that the driver no longer works on the
Marvell Armada 3700LP chip (as used on e.g. the Espressobin) that it
has always supported.
Additionally, performance on EIP197s like the Marvell A8K may also
degrade because fewer descriptors fit in the on-chip cache.
Putting these tokens into the descriptor was really a hack and not how
the design was supposed to be used - resource allocation did not
account for it. So what this patch does is move the command token out
of the descriptor.
To avoid having to allocate buffers on the fly for these command
tokens, they are stuffed into a "shadow ring": a circular buffer of
fixed-size blocks that runs in lock-step with the descriptor ring, i.e.
there is one token block per descriptor. The descriptor ring itself is
then pre-populated with the pointers to these token blocks, so they do
not need to be filled in when the descriptors are built later.
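An illustrative sketch of the shadow-ring setup (sizes, types and names
are made up; only the dma_alloc_coherent() call is the real API):
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/gfp.h>

#define DEMO_RING_SIZE   512            /* illustrative */
#define DEMO_TOKEN_BYTES 128            /* fixed-size token block per descriptor */

struct demo_desc {                      /* illustrative descriptor layout */
        dma_addr_t token_dma;
};

/*
 * The "shadow ring" of token blocks runs in lock-step with the
 * descriptor ring; every descriptor is pre-pointed at its token block,
 * so no allocation is needed when descriptors are built later.
 */
static int demo_ring_init(struct device *dev, struct demo_desc *ring)
{
        dma_addr_t token_base;
        void *tokens;
        int i;

        tokens = dma_alloc_coherent(dev, DEMO_RING_SIZE * DEMO_TOKEN_BYTES,
                                    &token_base, GFP_KERNEL);
        if (!tokens)
                return -ENOMEM;

        for (i = 0; i < DEMO_RING_SIZE; i++)
                ring[i].token_dma = token_base + i * DEMO_TOKEN_BYTES;

        return 0;
}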
Signed-off-by: Pascal van Leeuwen <pvanleeuwen@rambus.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Daniel Jordan [Wed, 11 Dec 2019 16:21:20 +0000 (11:21 -0500)]
padata: update documentation file path in MAINTAINERS
It's changed since the recent RST conversion.
Fixes: bfcdcef8c8e3 ("padata: update documentation")
Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Wed, 11 Dec 2019 02:50:11 +0000 (10:50 +0800)]
crypto: api - fix unexpectedly getting generic implementation
When CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y, the first lookup of an
algorithm that needs to be instantiated using a template will always get
the generic implementation, even when an accelerated one is available.
This happens because the extra self-tests for the accelerated
implementation allocate the generic implementation for comparison
purposes, and then crypto_alg_tested() for the generic implementation
"fulfills" the original request (i.e. sets crypto_larval::adult).
This patch fixes this by only fulfilling the original request if
we are currently the best outstanding larval as judged by the
priority. If we're not the best then we will ask all waiters on
that larval request to retry the lookup.
Note that this patch introduces a behaviour change when the module
providing the new algorithm is unregistered during the process.
Previously we would have failed with ENOENT; after the patch we will
instead redo the lookup.
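In rough pseudocode (all names are made up; this is not the actual
crypto API internals):
struct demo_alg;                        /* opaque for the sketch */

struct demo_larval {
        struct demo_alg *adult;         /* set when the request is fulfilled */
        int best_prio;                  /* highest priority still under test */
        int retry;                      /* tell waiters to redo the lookup */
};

static void demo_wake_waiters(struct demo_larval *l)
{
        (void)l;                        /* would complete the larval here */
}

static void demo_alg_tested(struct demo_larval *l, struct demo_alg *alg,
                            int prio)
{
        if (prio >= l->best_prio)
                l->adult = alg;         /* best outstanding candidate wins */
        else
                l->retry = 1;           /* better candidate pending: retry */

        demo_wake_waiters(l);
}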
Fixes: 9a8a6b3f0950 ("crypto: testmgr - fuzz hashes against...")
Fixes: d435e10e67be ("crypto: testmgr - fuzz skciphers against...")
Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against...")
Reported-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Andrei Botila [Mon, 9 Dec 2019 16:59:56 +0000 (18:59 +0200)]
crypto: caam/qi2 - remove double buffering for ahash
Previously, double buffering was used to store the previous and next
"less-than-block-size" bytes. Double buffering can be removed by
copying the next "less-than-block-size" bytes only after the current
request has been executed by the HW.
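A generic sketch of the single-buffer scheme (block size, types and the
HW hand-off are placeholders, not the caam code):
#include <string.h>

#define DEMO_BLOCK 64                   /* illustrative block size */

struct demo_hash_state {                /* illustrative, not the caam state */
        unsigned char buf[DEMO_BLOCK];
        unsigned int buflen;
};

/* Placeholder: hands st->buf followed by 'inlen' bytes of 'in' to the HW. */
static int demo_send_to_hw(struct demo_hash_state *st,
                           const unsigned char *in, unsigned int inlen)
{
        (void)st; (void)in; (void)inlen;
        return 0;
}

static int demo_update(struct demo_hash_state *st,
                       const unsigned char *in, unsigned int len)
{
        unsigned int total = st->buflen + len;
        unsigned int leftover = total % DEMO_BLOCK;
        int err;

        if (total < DEMO_BLOCK) {       /* nothing to hash yet: just append */
                memcpy(st->buf + st->buflen, in, len);
                st->buflen = total;
                return 0;
        }

        /* Hash st->buf plus the block-aligned prefix of 'in'. */
        err = demo_send_to_hw(st, in, len - leftover);
        if (err)
                return err;

        /* Only now overwrite the single buffer with the new tail. */
        memcpy(st->buf, in + len - leftover, leftover);
        st->buflen = leftover;
        return 0;
}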
Signed-off-by: Andrei Botila <andrei.botila@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Andrei Botila [Mon, 9 Dec 2019 16:59:55 +0000 (18:59 +0200)]
crypto: caam - remove double buffering for ahash
Previously, double buffering was used to store the previous and next
"less-than-block-size" bytes. Double buffering can be removed by
copying the next "less-than-block-size" bytes only after the current
request has been executed by the HW.
Signed-off-by: Andrei Botila <andrei.botila@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Chuhong Yuan [Mon, 9 Dec 2019 16:21:44 +0000 (00:21 +0800)]
crypto: picoxcell - adjust the position of tasklet_init and fix missed tasklet_kill
Since the tasklet must be initialized before the IRQ handler is
registered, move tasklet_init earlier to fix the wrong order.
Besides, to fix the missed tasklet_kill, this patch adds a helper
function and uses devm_add_action to kill the tasklet automatically.
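An illustrative ordering sketch (only the kernel APIs named in the
patch are real; everything else is a placeholder):
#include <linux/device.h>
#include <linux/interrupt.h>

static void demo_tasklet_kill(void *data)
{
        tasklet_kill(data);
}

static int demo_setup_irq(struct device *dev, int irq, irq_handler_t handler,
                          struct tasklet_struct *t,
                          void (*fn)(unsigned long), unsigned long fn_data)
{
        int err;

        /* The tasklet must exist before the IRQ handler that schedules it. */
        tasklet_init(t, fn, fn_data);

        /* Guarantees tasklet_kill() on driver removal without a manual
         * cleanup path. */
        err = devm_add_action(dev, demo_tasklet_kill, t);
        if (err) {
                tasklet_kill(t);
                return err;
        }

        return devm_request_irq(dev, irq, handler, 0, "demo", t);
}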
Fixes: ce92136843cb ("crypto: picoxcell - add support for the picoxcell crypto engines")
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Greg Kroah-Hartman [Mon, 9 Dec 2019 15:21:51 +0000 (16:21 +0100)]
crypto: hisilicon - still no need to check return value of debugfs_create functions
Just like in
4a97bfc79619 ("crypto: hisilicon - no need to check return
value of debugfs_create functions"), there still is no need to ever
check the return value. The function can work or not, but the code
logic should never do something different based on this.
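For example (names are illustrative):
#include <linux/debugfs.h>

static struct dentry *demo_dbg_root;

static void demo_debugfs_init(void *priv, const struct file_operations *fops)
{
        /* No error checking: the driver behaves the same whether or not
         * debugfs is available. */
        demo_dbg_root = debugfs_create_dir("demo_crypto", NULL);
        debugfs_create_file("regs", 0444, demo_dbg_root, priv, fops);
}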
Cc: Zhou Wang <wangzhou1@hisilicon.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rijo Thomas [Wed, 4 Dec 2019 06:19:03 +0000 (11:49 +0530)]
crypto: ccp - provide in-kernel API to submit TEE commands
Extend the functionality of the AMD Secure Processor (SP) driver by
providing an in-kernel API to submit commands to the TEE ring buffer
for processing by the Trusted OS running on the AMD Secure Processor.
The following TEE commands are supported by the Trusted OS:
* TEE_CMD_ID_LOAD_TA : Load Trusted Application (TA) binary into
TEE environment
* TEE_CMD_ID_UNLOAD_TA : Unload TA binary from TEE environment
* TEE_CMD_ID_OPEN_SESSION : Open session with loaded TA
* TEE_CMD_ID_CLOSE_SESSION : Close session with loaded TA
* TEE_CMD_ID_INVOKE_CMD : Invoke a command with loaded TA
* TEE_CMD_ID_MAP_SHARED_MEM : Map shared memory
* TEE_CMD_ID_UNMAP_SHARED_MEM : Unmap shared memory
The Linux AMD-TEE driver will use this API to submit command buffers
for processing in the Trusted Execution Environment. The AMD-TEE
driver will be introduced in a separate patch.
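For illustration, the shape of such an interface might look like this
(prototypes and names are hypothetical, not necessarily the exported
API):
#include <linux/types.h>

enum demo_tee_cmd {                     /* mirrors the command list above */
        DEMO_TEE_CMD_LOAD_TA,
        DEMO_TEE_CMD_UNLOAD_TA,
        DEMO_TEE_CMD_OPEN_SESSION,
        DEMO_TEE_CMD_CLOSE_SESSION,
        DEMO_TEE_CMD_INVOKE_CMD,
        DEMO_TEE_CMD_MAP_SHARED_MEM,
        DEMO_TEE_CMD_UNMAP_SHARED_MEM,
};

/* Submit one command buffer to the TEE ring and wait for the Trusted OS
 * to process it; 'status' receives the TEE-side result code. */
int demo_tee_process_cmd(enum demo_tee_cmd cmd_id, void *buf, size_t len,
                         u32 *status);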
Cc: Jens Wiklander <jens.wiklander@linaro.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Co-developed-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Rijo Thomas <Rijo-john.Thomas@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rijo Thomas [Wed, 4 Dec 2019 06:19:02 +0000 (11:49 +0530)]
crypto: ccp - add TEE support for Raven Ridge
Adds a PCI device entry for Raven Ridge. Raven Ridge is an APU with a
dedicated AMD Secure Processor having Trusted Execution Environment (TEE)
support. The TEE provides a secure environment for running Trusted
Applications (TAs) which implement security-sensitive parts of a feature.
This patch configures AMD Secure Processor's TEE interface by initializing
a ring buffer (shared memory between Rich OS and Trusted OS) which can hold
multiple command buffer entries. The TEE interface is facilitated by a set
of CPU to PSP mailbox registers.
The next patch will address how commands are submitted to the ring buffer.
Cc: Jens Wiklander <jens.wiklander@linaro.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Co-developed-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Rijo Thomas <Rijo-john.Thomas@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rijo Thomas [Wed, 4 Dec 2019 06:19:01 +0000 (11:49 +0530)]
crypto: ccp - check whether PSP supports SEV or TEE before initialization
Read PSP feature register to check for TEE (Trusted Execution Environment)
support.
If neither SEV nor TEE is supported by PSP, then skip PSP initialization.
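A rough sketch of the check (register offset and bit layout are
hypothetical):
#include <linux/io.h>

#define DEMO_PSP_FEATURE_REG    0x48    /* hypothetical offset */
#define DEMO_PSP_FEATURE_SEV    (1 << 0)
#define DEMO_PSP_FEATURE_TEE    (1 << 1)

static void demo_sev_init(void) { }     /* stand-ins for the sub-inits */
static void demo_tee_init(void) { }

static int demo_psp_init(void __iomem *io_regs)
{
        unsigned int caps = ioread32(io_regs + DEMO_PSP_FEATURE_REG);

        /* Neither SEV nor TEE: nothing to initialize. */
        if (!(caps & (DEMO_PSP_FEATURE_SEV | DEMO_PSP_FEATURE_TEE)))
                return 0;

        if (caps & DEMO_PSP_FEATURE_SEV)
                demo_sev_init();
        if (caps & DEMO_PSP_FEATURE_TEE)
                demo_tee_init();

        return 0;
}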
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Jens Wiklander <jens.wiklander@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Co-developed-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Rijo Thomas <Rijo-john.Thomas@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rijo Thomas [Wed, 4 Dec 2019 06:19:00 +0000 (11:49 +0530)]
crypto: ccp - move SEV vdata to a dedicated data structure
The PSP can support both the SEV and the TEE interface. Therefore, move
the SEV-specific registers to a dedicated data structure. Registers
specific to the TEE interface will be added in a later patch.
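Illustrative layout (field names approximate the idea, not the exact
driver structures):
struct demo_sev_vdata {                 /* SEV-only register offsets */
        unsigned int cmdresp_reg;
        unsigned int cmdbuff_addr_lo_reg;
        unsigned int cmdbuff_addr_hi_reg;
};

struct demo_psp_vdata {                 /* per-device PSP description */
        const struct demo_sev_vdata *sev;   /* NULL when SEV is absent */
        unsigned int feature_reg;
        unsigned int inten_reg;
        unsigned int intsts_reg;
};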
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Jens Wiklander <jens.wiklander@linaro.org>
Co-developed-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Rijo Thomas <Rijo-john.Thomas@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rijo Thomas [Wed, 4 Dec 2019 06:18:59 +0000 (11:48 +0530)]
crypto: ccp - create a generic psp-dev file
The PSP (Platform Security Processor) provides support for key management
commands in Secure Encrypted Virtualization (SEV) mode, along with
software-based Trusted Execution Environment (TEE) to enable third-party
Trusted Applications.
Therefore, introduce psp-dev.c and psp-dev.h files, which can invoke
SEV (or TEE) initialization based on platform feature support.
TEE interface support will be introduced in a later patch.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Jens Wiklander <jens.wiklander@linaro.org>
Co-developed-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Signed-off-by: Rijo Thomas <Rijo-john.Thomas@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>