x86/uaccess: Implement macros for CMPXCHG on user addresses
author Peter Zijlstra <peterz@infradead.org>
Wed, 2 Feb 2022 00:49:42 +0000 (00:49 +0000)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fri, 29 Jul 2022 15:25:25 +0000 (17:25 +0200)
commit 823424905d03eefdaafa4c194ca775999998b2ba
tree e45167b46c77b82b35a5839b21379f2e98943efc
parent 1062cfb47e565792b6fd72d7e36b0f7b542ec78a
x86/uaccess: Implement macros for CMPXCHG on user addresses

[ Upstream commit 989b5db215a2f22f89d730b607b071d964780f10 ]

Add support for CMPXCHG loops on userspace addresses.  Provide both an
"unsafe" version for tight loops that do their own uaccess begin/end, and
a "safe" version for use cases where the CMPXCHG is not buried in a
loop, e.g. KVM will resume the guest instead of looping when emulation
of a guest atomic access fails the CMPXCHG.
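
For illustration, a minimal caller-side sketch of the "unsafe" variant
(macro names per the diff to arch/x86/include/asm/uaccess.h; the
user_inc() helper and the increment it performs are hypothetical):

	/*
	 * Hypothetical: atomically increment a u32 at a userspace address.
	 * The caller owns the uaccess window, so the "unsafe" variant can
	 * be used in a tight loop without re-toggling STAC/CLAC.  A fault
	 * on any access jumps to the supplied 'efault' label.
	 */
	static int user_inc(u32 __user *uptr)
	{
		u32 old, new;

		if (!user_access_begin(uptr, sizeof(*uptr)))
			return -EFAULT;
		unsafe_get_user(old, uptr, efault);
		do {
			/* On CMPXCHG failure, 'old' is refreshed to the current value. */
			new = old + 1;
		} while (!unsafe_try_cmpxchg_user(uptr, &old, new, efault));
		user_access_end();
		return 0;
	efault:
		user_access_end();
		return -EFAULT;
	}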

Provide 8-byte versions for 32-bit kernels so that KVM can do CMPXCHG on
guest PAE PTEs, which are accessed via userspace addresses.
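
The same macros take an 8-byte operand on 32-bit kernels, where the
sizeof() switch dispatches to a CMPXCHG8B-based helper.  A hypothetical
one-shot update of a guest PAE PTE with the "safe" variant, which
brackets the access itself and "returns" 0 on success, 1 if the compare
failed, or -EFAULT if the access faulted:

	/* Hypothetical: CONFIG_X86_32, 'ptep' is a userspace mapping of the PTE. */
	static int update_guest_pae_pte(u64 __user *ptep, u64 *old_pte, u64 new_pte)
	{
		/* The 'efault' label token is declared by the macro itself. */
		return __try_cmpxchg_user(ptep, old_pte, new_pte, efault);
	}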

Guard the asm_volatile_goto() variant with CC_HAS_ASM_GOTO_TIED_OUTPUT,
as the "+m" constraint fails on some compilers that otherwise support
CC_HAS_ASM_GOTO_OUTPUT.
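
The shape of the guarded variant, abridged from the diff (size-dispatch
plumbing is elided; treat this as a sketch, not the verbatim
implementation):

	#ifdef CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT
	#define __try_cmpxchg_user_asm(itype, ltype, _ptr, _pold, _new, label)	({ \
		bool success;							\
		__typeof__(_ptr) _old = (__typeof__(_ptr))(_pold);		\
		__typeof__(*(_ptr)) __old = *_old;				\
		__typeof__(*(_ptr)) __new = (_new);				\
		asm_volatile_goto("\n"						\
			     "1: " LOCK_PREFIX "cmpxchg"itype" %[new], %[ptr]\n"\
			     _ASM_EXTABLE_UA(1b, %l[label])			\
			     : CC_OUT(z) (success),				\
			       [ptr] "+m" (*_ptr),  /* the tied "+m" output */	\
			       [old] "+a" (__old)				\
			     : [new] ltype (__new)				\
			     :: label);						\
		if (unlikely(!success))						\
			*_old = __old;	/* hand back the current value */	\
		likely(success);						\
	})
	#endif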

Cc: stable@vger.kernel.org
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220202004945.2540433-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
arch/x86/include/asm/uaccess.h