x86/mm: Prepare sme_encrypt_kernel() for PAGE aligned encryption
author Tom Lendacky <thomas.lendacky@amd.com>
Wed, 10 Jan 2018 19:26:26 +0000 (13:26 -0600)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tue, 23 Jan 2018 18:58:15 +0000 (19:58 +0100)
commit 33e4ca36aea63c87f15e393d34a0fa52d8337ea0
tree 0e12cb5ee6e9358a3bc38b5af5eaed706baeb59b
parent 69a39cf36ec2094912d493dfac6747f00f8c2320
x86/mm: Prepare sme_encrypt_kernel() for PAGE aligned encryption

commit cc5f01e28d6c60f274fd1e33b245f679f79f543c upstream.

In preparation for encrypting more than just the kernel, the encryption
support in sme_encrypt_kernel() needs to support 4KB page aligned
encryption instead of just 2MB large page aligned encryption.

Update the routines that populate the PGD to support non-2MB aligned
addresses.  This is done by creating PTE page tables for the start
and end portions of the address range that fall outside of the 2MB
alignment.  This results in, at most, two extra pages to hold the
PTE entries for each mapping of a range.
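
As an illustration of the split described above, here is a minimal,
self-contained sketch (not the actual kernel routine) of breaking a
PAGE aligned range into a 4KB-mapped head, a 2MB large-page middle and
a 4KB-mapped tail.  sme_map_4k() and sme_map_2m() are hypothetical
stand-ins for the PGD/PTE population routines this patch updates, and
the size macros are local to the sketch:

  /*
   * Sketch only: split [vaddr, vaddr_end) into a 4KB-mapped head,
   * a 2MB-mapped middle and a 4KB-mapped tail.  The mapping helpers
   * are hypothetical; only the splitting logic is shown.
   */
  #define SKETCH_PAGE_SIZE	0x1000UL	/* 4KB */
  #define SKETCH_PMD_SIZE	0x200000UL	/* 2MB */

  static void sme_map_4k(unsigned long vaddr) { (void)vaddr; /* populate one PTE */ }
  static void sme_map_2m(unsigned long vaddr) { (void)vaddr; /* populate one large-page PMD */ }

  static void sme_map_range_sketch(unsigned long vaddr, unsigned long vaddr_end)
  {
  	/* Head: 4KB pages until vaddr reaches a 2MB boundary. */
  	while (vaddr < vaddr_end && (vaddr & (SKETCH_PMD_SIZE - 1))) {
  		sme_map_4k(vaddr);
  		vaddr += SKETCH_PAGE_SIZE;
  	}

  	/* Middle: whole 2MB large pages while a full PMD still fits. */
  	while (vaddr_end - vaddr >= SKETCH_PMD_SIZE) {
  		sme_map_2m(vaddr);
  		vaddr += SKETCH_PMD_SIZE;
  	}

  	/* Tail: the remaining 4KB pages. */
  	while (vaddr < vaddr_end) {
  		sme_map_4k(vaddr);
  		vaddr += SKETCH_PAGE_SIZE;
  	}
  }

The head and the tail each need at most one new page of PTEs, which is
where the "at most two extra pages" per mapped range comes from.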

Tested-by: Gabriel Craciunescu <nix.or.die@gmail.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20180110192626.6026.75387.stgit@tlendack-t1.amdoffice.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/x86/mm/mem_encrypt.c
arch/x86/mm/mem_encrypt_boot.S