From: Will Deacon
Date: Thu, 24 Jan 2013 13:47:38 +0000 (+0100)
Subject: ARM: 7632/1: spinlock: avoid exclusive accesses on unlock() path
X-Git-Tag: v3.9-rc1~143^2^2~15
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=20e260b6f4f717c100620122f626a2c06a4cfd72;p=platform%2Fkernel%2Flinux-exynos.git

ARM: 7632/1: spinlock: avoid exclusive accesses on unlock() path

When unlocking a spinlock, all we need to do is increment the owner
field of the lock. Since only one CPU can be performing an unlock()
operation for a given lock, this doesn't need to be exclusive.

This patch simplifies arch_spin_unlock to use non-exclusive accesses
when updating the owner field of the lock.

Signed-off-by: Will Deacon
Signed-off-by: Russell King
---

diff --git a/arch/arm/include/asm/spinlock.h b/arch/arm/include/asm/spinlock.h
index b4ca707..6220e9f 100644
--- a/arch/arm/include/asm/spinlock.h
+++ b/arch/arm/include/asm/spinlock.h
@@ -119,22 +119,8 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)

 static inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-	unsigned long tmp;
-	u32 slock;
-
 	smp_mb();
-
-	__asm__ __volatile__(
-"	mov	%1, #1\n"
-"1:	ldrex	%0, [%2]\n"
-"	uadd16	%0, %0, %1\n"
-"	strex	%1, %0, [%2]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (slock), "=&r" (tmp)
-	: "r" (&lock->slock)
-	: "cc");
-
+	lock->tickets.owner++;
 	dsb_sev();
 }
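
The rationale above generalizes beyond the ARM assembly: in a ticket lock,
only the current holder ever writes the owner field, so the unlock path can
use a plain store instead of an exclusive (load-linked/store-conditional)
read-modify-write. Below is a minimal illustrative sketch in portable C11,
not the kernel code; the type and function names (struct ticket_lock,
ticket_lock, ticket_unlock) are made up for the example.

	/*
	 * Minimal ticket-lock sketch (illustrative only). 'next' needs an
	 * atomic RMW because many CPUs race to take tickets; 'owner' is only
	 * written by the current lock holder, so unlock can be a plain store
	 * of owner + 1, mirroring lock->tickets.owner++ in the patch above.
	 */
	#include <stdatomic.h>
	#include <stdint.h>

	struct ticket_lock {
		_Atomic uint16_t next;	/* next ticket to hand out */
		_Atomic uint16_t owner;	/* ticket currently being served */
	};

	static void ticket_lock(struct ticket_lock *lock)
	{
		/* Taking a ticket must be exclusive: many CPUs contend here. */
		uint16_t ticket = atomic_fetch_add_explicit(&lock->next, 1,
							    memory_order_relaxed);

		/* Spin until our ticket is being served. */
		while (atomic_load_explicit(&lock->owner,
					    memory_order_acquire) != ticket)
			;	/* a real lock would add a wait/relax hint here */
	}

	static void ticket_unlock(struct ticket_lock *lock)
	{
		/*
		 * No exclusive access needed: we are the sole writer of
		 * 'owner' while holding the lock, so a non-RMW store of the
		 * incremented value suffices. The release ordering keeps
		 * critical-section accesses from moving past the unlock,
		 * the role played by smp_mb() in the patch above.
		 */
		uint16_t owner = atomic_load_explicit(&lock->owner,
						      memory_order_relaxed);
		atomic_store_explicit(&lock->owner, (uint16_t)(owner + 1),
				      memory_order_release);
	}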