From 068de8d1be48a04b92fd97f76bb7e113b7be82a8 Mon Sep 17 00:00:00 2001
From: Will Deacon
Date: Thu, 8 Jul 2010 10:58:06 +0100
Subject: [PATCH] ARM: 6211/1: atomic ops: fix register constraints for
 atomic64_add_unless

The atomic64_add_unless function compares an atomic variable with a
given value and, if they are not equal, adds another given value to the
atomic variable. The function returns zero if the addition did not
occur and non-zero otherwise.

On ARM, the return value is initialised to 1 in C code. Inline assembly
code then performs the atomic64_add_unless operation, setting the
return value to 0 iff the addition does not occur. This means that
when the addition *does* occur, the value of ret must be preserved
across the inline assembly and therefore requires a "+r" constraint
rather than the current one of "=&r".

Thanks to Nicolas Pitre for helping to spot this.

Cc: stable@kernel.org
Reviewed-by: Nicolas Pitre
Signed-off-by: Will Deacon
Signed-off-by: Russell King
---
 arch/arm/include/asm/atomic.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index a0162fa..e9e56c0 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -440,7 +440,7 @@ static inline int atomic64_add_unless(atomic64_t *v, u64 a, u64 u)
 "	teq	%2, #0\n"
 "	bne	1b\n"
 "2:"
-	: "=&r" (val), "=&r" (ret), "=&r" (tmp)
+	: "=&r" (val), "+r" (ret), "=&r" (tmp)
 	: "r" (&v->counter), "r" (u), "r" (a)
 	: "cc");
-- 
2.7.4
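
[Editor's note, not part of the patch: the sketch below is a minimal
stand-alone demonstration of the constraint distinction the commit
message describes. The demo() function and its body are hypothetical
and do not appear in the kernel; only the "+r"/"=&r" semantics are
taken from the patch. It assumes a 32-bit ARM GCC toolchain, e.g.
arm-linux-gnueabi-gcc.]

/*
 * Demo: a value initialised in C and only *conditionally* written by
 * inline assembly must use the "+r" (read-write) constraint. With
 * "=&r" (write-only, early-clobber), GCC treats the C-level initial
 * value as dead, so the register may hold garbage on the path where
 * the asm does not write it.
 */
#include <stdio.h>

static int demo(int flag)
{
	int ret = 1;	/* initial value set in C, as ret is in
			 * atomic64_add_unless() */

	/*
	 * The asm writes ret only when flag is zero; otherwise the
	 * initial value (1) must survive the asm. "+r" tells GCC that
	 * ret is both an input and an output of the asm block.
	 */
	asm volatile(
	"	teq	%1, #0\n"	/* sets condition flags */
	"	bne	1f\n"		/* flag != 0: skip the write */
	"	mov	%0, #0\n"	/* flag == 0: clear ret */
	"1:"
	: "+r" (ret)
	: "r" (flag)
	: "cc");			/* teq clobbers the flags */

	return ret;
}

int main(void)
{
	/* With "+r": prints "0 1". With "=&r" instead, demo(1) would
	 * return whatever happened to be in the output register. */
	printf("%d %d\n", demo(0), demo(1));
	return 0;
}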