locking/rwbase: Optimize rwbase_read_trylock
author		Davidlohr Bueso <dave@stgolabs.net>
		Mon, 20 Sep 2021 05:20:30 +0000 (22:20 -0700)
committer	Peter Zijlstra <peterz@infradead.org>
		Thu, 7 Oct 2021 11:51:07 +0000 (13:51 +0200)
Instead of a full barrier around the RMW instruction, micro-optimize
for weakly ordered archs such that we only provide the required
ACQUIRE semantics when taking the read lock.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Link: https://lkml.kernel.org/r/20210920052031.54220-2-dave@stgolabs.net
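
To see what the change buys, here is a minimal sketch of the read-lock
fast path in portable C11 atomics (illustrative only, not the kernel
code): the kernel's atomic_try_cmpxchg_acquire() corresponds to a
compare-exchange with ACQUIRE ordering on success, where the previous
fully-ordered variant implied a full barrier. read_trylock_sketch() is
a hypothetical name.

#include <stdatomic.h>
#include <stdbool.h>

static bool read_trylock_sketch(atomic_int *readers)
{
	int r = atomic_load_explicit(readers, memory_order_relaxed);

	/* readers < 0 is the reader-biased (no writer) state. */
	while (r < 0) {
		/* ACQUIRE on success; RELAXED on failure is enough,
		 * since the failed CAS refreshes r and we just retry. */
		if (atomic_compare_exchange_strong_explicit(readers, &r, r + 1,
							    memory_order_acquire,
							    memory_order_relaxed))
			return true;
	}
	return false;
}

On strongly ordered architectures such as x86 the two variants cost the
same; weakly ordered architectures can drop the heavier barrier.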
kernel/locking/rwbase_rt.c

index 15c81100f0e2659dd4857c545df02ab94bfe61a6..6fd3162e4098ffa60795c38d9394c0459512708e 100644 (file)
@@ -59,8 +59,7 @@ static __always_inline int rwbase_read_trylock(struct rwbase_rt *rwb)
         * set.
         */
        for (r = atomic_read(&rwb->readers); r < 0;) {
-               /* Fully-ordered if cmpxchg() succeeds, provides ACQUIRE */
-               if (likely(atomic_try_cmpxchg(&rwb->readers, &r, r + 1)))
+               if (likely(atomic_try_cmpxchg_acquire(&rwb->readers, &r, r + 1)))
                        return 1;
        }
        return 0;
@@ -187,7 +186,7 @@ static inline void __rwbase_write_unlock(struct rwbase_rt *rwb, int bias,
 
        /*
         * _release() is needed in case that reader is in fast path, pairing
-        * with atomic_try_cmpxchg() in rwbase_read_trylock(), provides RELEASE
+        * with atomic_try_cmpxchg_acquire() in rwbase_read_trylock().
         */
        (void)atomic_add_return_release(READER_BIAS - bias, &rwb->readers);
        raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
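
The second hunk documents the pairing requirement. A sketch of that
unlock side in the same C11 terms (again illustrative, not the kernel
code): the RELEASE fetch-add models atomic_add_return_release() and
pairs with the ACQUIRE compare-exchange above, so a reader that sees
the restored bias also sees every store the writer made while holding
the lock. READER_BIAS_SKETCH and bias are hypothetical stand-ins for
the kernel's READER_BIAS bookkeeping; the wait_lock handling is
omitted.

#include <stdatomic.h>

#define READER_BIAS_SKETCH	(1 << 30)	/* stand-in for READER_BIAS */

static void write_unlock_sketch(atomic_int *readers, int bias)
{
	/* RELEASE publishes the writer's critical section before
	 * readers can re-enter via the ACQUIRE CAS in the sketch above. */
	atomic_fetch_add_explicit(readers, READER_BIAS_SKETCH - bias,
				  memory_order_release);
}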