From: Will Deacon
Date: Thu, 26 Sep 2013 16:27:00 +0000 (+0100)
Subject: lockref: allow relaxed cmpxchg64 variant for lockless updates
X-Git-Tag: v4.9.8~7731
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=d2212b4dce596fee83e5c523400bf084f4cc816c;p=platform%2Fkernel%2Flinux-rpi3.git

lockref: allow relaxed cmpxchg64 variant for lockless updates

The 64-bit cmpxchg operation on the lockref is ordered by virtue of
hazarding between the cmpxchg operation and the reference count
manipulation. On weakly ordered memory architectures (such as ARM), it
can be of great benefit to omit the barrier instructions where they are
not needed.

This patch moves the lockless lockref code over to a cmpxchg64_relaxed
operation, which doesn't provide barrier semantics. If the operation
isn't defined, we simply #define it as the usual 64-bit cmpxchg macro.

Cc: Waiman Long
Signed-off-by: Will Deacon
Signed-off-by: Linus Torvalds
---

diff --git a/lib/lockref.c b/lib/lockref.c
index 677d036c..e294ae4 100644
--- a/lib/lockref.c
+++ b/lib/lockref.c
@@ -4,6 +4,14 @@
 #ifdef CONFIG_CMPXCHG_LOCKREF
 
 /*
+ * Allow weakly-ordered memory architectures to provide barrier-less
+ * cmpxchg semantics for lockref updates.
+ */
+#ifndef cmpxchg64_relaxed
+# define cmpxchg64_relaxed cmpxchg64
+#endif
+
+/*
  * Note that the "cmpxchg()" reloads the "old" value for the
  * failure case.
  */
@@ -14,8 +22,9 @@
 	while (likely(arch_spin_value_unlocked(old.lock.rlock.raw_lock))) {	\
 		struct lockref new = old, prev = old;				\
 		CODE								\
-		old.lock_count = cmpxchg64(&lockref->lock_count,		\
-					   old.lock_count, new.lock_count);	\
+		old.lock_count = cmpxchg64_relaxed(&lockref->lock_count,	\
+						   old.lock_count,		\
+						   new.lock_count);		\
 		if (likely(old.lock_count == prev.lock_count)) {		\
 			SUCCESS;						\
 		}								\
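
To see why relaxed semantics are safe in this retry loop, below is a
minimal userspace sketch of the same pattern using C11 atomics. The
struct sketch_lockref type, its lock/count bit layout, and the
sketch_lockref_get() helper are illustrative assumptions, not the
kernel's actual API; memory_order_relaxed stands in for
cmpxchg64_relaxed. The key property is that the compare-and-swap needs
no barriers of its own: a failed compare simply reloads the current
value and the loop retries.

/*
 * Minimal sketch (not kernel code) of a lockless lockref-style update.
 * Assumed encoding: lock word in the low 32 bits (0 == unlocked),
 * reference count in the high 32 bits.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

struct sketch_lockref {
	_Atomic uint64_t lock_count;	/* lock + count in one 64-bit word */
};

static bool sketch_lock_is_free(uint64_t v)
{
	return (uint32_t)v == 0;	/* low 32 bits: 0 means unlocked */
}

static bool sketch_lockref_get(struct sketch_lockref *ref)
{
	uint64_t old = atomic_load_explicit(&ref->lock_count,
					    memory_order_relaxed);

	while (sketch_lock_is_free(old)) {
		/* Increment the count in the high 32 bits. */
		uint64_t new = old + ((uint64_t)1 << 32);

		/* On failure, 'old' is reloaded with the current value,
		 * mirroring how cmpxchg64() returns the observed value. */
		if (atomic_compare_exchange_weak_explicit(&ref->lock_count,
							  &old, new,
							  memory_order_relaxed,
							  memory_order_relaxed))
			return true;	/* lockless fast path succeeded */
	}
	return false;	/* lock held: caller falls back to the spinlock */
}

On a weakly ordered architecture such as ARM, dropping the implied
barriers avoids memory-barrier instructions around every
reference-count update; on architectures that provide no relaxed
variant, the #ifndef fallback in the patch keeps the original
fully-ordered cmpxchg64 behaviour.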