locking/atomic: arm: fix sync ops
authorMark Rutland <mark.rutland@arm.com>
Mon, 5 Jun 2023 07:00:58 +0000 (08:00 +0100)
committerPeter Zijlstra <peterz@infradead.org>
Mon, 5 Jun 2023 07:57:13 +0000 (09:57 +0200)
commitdda5f312bb09e56e7a1c3e3851f2000eb2e9c879
tree2c5d77a688caffdffb4b516c1e6b00baeadb1259
parent497cc42bf53b55185ab3d39c634fbf09eb6681ae
locking/atomic: arm: fix sync ops

The sync_*() ops on arch/arm are defined in terms of the regular bitops
with no special handling. This is not correct, as UP kernels elide
barriers for the fully-ordered operations, and so the required ordering
is lost when such UP kernels are run under a hypervisor on an SMP
system.

Fix this by defining sync ops with the required barriers.

Note: On 32-bit arm, the sync_*() ops are currently only used by Xen,
which requires ARMv7, but the semantics can be implemented for ARMv6+.

Fixes: e54d2f61528165bb ("xen/arm: sync_bitops")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-2-mark.rutland@arm.com
arch/arm/include/asm/assembler.h
arch/arm/include/asm/sync_bitops.h
arch/arm/lib/bitops.h
arch/arm/lib/testchangebit.S
arch/arm/lib/testclearbit.S
arch/arm/lib/testsetbit.S