arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()
author Will Deacon <will.deacon@arm.com>
Mon, 5 Sep 2016 10:56:05 +0000 (11:56 +0100)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sat, 24 Sep 2016 08:07:41 +0000 (10:07 +0200)
commit 554b0ee1e89295d87c3f4bb5bcfb2d288ccde455
tree ae184a5d53185d074604180fedc375ef257f93c2
parent c16cfc6688ac17ec78f018ea43345bc9f85dd70a
arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()

commit 872c63fbf9e153146b07f0cece4da0d70b283eeb upstream.

smp_mb__before_spinlock() is intended to upgrade a spin_lock() operation
to a full barrier, such that prior stores are ordered with respect to
loads and stores occurring inside the critical section.

Unfortunately, the core code defines the barrier as smp_wmb(), which
is insufficient to provide the required ordering guarantees when used in
conjunction with our load-acquire-based spinlock implementation.
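For illustration, the required ordering is a store-buffering-style
pattern. In the sketch below (x, y, r0, r1 and lock are hypothetical
names chosen only to illustrate the point), forbidding the outcome
r0 == 0 && r1 == 0 requires the store to x on CPU 0 to be ordered
before its load from y:

	CPU 0				CPU 1

	WRITE_ONCE(x, 1);		WRITE_ONCE(y, 1);
	smp_mb__before_spinlock();	smp_mb();
	spin_lock(&lock);		r1 = READ_ONCE(x);
	r0 = READ_ONCE(y);

smp_wmb() only orders stores against later stores, and the
load-acquire implementing spin_lock() only orders the lock load
against accesses appearing after it, so with the smp_wmb()-based
definition the store to x can be delayed past the load from y and
both r0 and r1 can observe zero.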

This patch overrides the arm64 definition of smp_mb__before_spinlock()
to map to a full smp_mb().
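The override itself is a one-line definition in
arch/arm64/include/asm/spinlock.h; the comment below is a sketch of
the rationale rather than the exact in-tree wording:

	/*
	 * Accesses appearing before spin_lock() can be reordered into
	 * the critical section because arch_spin_lock() is built from
	 * a load-acquire, which only orders against later accesses.
	 * Mapping smp_mb__before_spinlock() to a full barrier restores
	 * the store->load ordering that callers such as
	 * try_to_wake_up() depend on.
	 */
	#define smp_mb__before_spinlock()	smp_mb()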

Cc: Peter Zijlstra <peterz@infradead.org>
Reported-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/arm64/include/asm/spinlock.h