x86/fpu: Use unsigned long long shift in xfeature_uncompacted_offset()
author Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Wed, 28 Nov 2018 22:20:07 +0000 (23:20 +0100)
committer Borislav Petkov <bp@suse.de>
Mon, 3 Dec 2018 17:40:28 +0000 (18:40 +0100)
commit d08452390179710dc7989242605e3c1faa62b64f
tree 204df6086b841f1de5979fd5f53085f0cb9ab34f
parent 2595646791c319cadfdbf271563aac97d0843dc7
x86/fpu: Use unsigned long long shift in xfeature_uncompacted_offset()

The xfeature mask is 64-bit, so a shift from a feature number to its mask
must use the ULL suffix, or else bits above position 31 are lost. This is
not a problem now, but should XFEATURE_MASK_SUPERVISOR gain a bit above 31,
this check would no longer catch it.

Use BIT_ULL() to compute a mask from a number.
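
For illustration, a minimal sketch of the check in question (the exact
context in xfeature_uncompacted_offset() may differ slightly; xfeature_nr
is the feature bit number being tested):

	/* Before: 1 << xfeature_nr is an int shift, so for a feature
	 * number above 31 the mask bit is lost (and the shift itself is
	 * undefined behaviour), silently defeating the check.
	 */
	if (XFEATURE_MASK_SUPERVISOR & (1 << xfeature_nr))
		return -1;

	/* After: BIT_ULL(xfeature_nr) expands to (1ULL << xfeature_nr),
	 * so all 64 bits of the supervisor mask can be tested.
	 */
	if (XFEATURE_MASK_SUPERVISOR & BIT_ULL(xfeature_nr))
		return -1;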

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Rik van Riel <riel@surriel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kvm ML <kvm@vger.kernel.org>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20181128222035.2996-2-bigeasy@linutronix.de
arch/x86/kernel/fpu/xstate.c