locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath()
authorPaul E. McKenney <paulmck@linux.vnet.ibm.com>
Tue, 18 Dec 2018 17:13:51 +0000 (18:13 +0100)
committerGreg Kroah-Hartman <gregkh@linuxfoundation.org>
Fri, 21 Dec 2018 13:13:08 +0000 (14:13 +0100)
commit 548095dea63ffc016d39c35b32c628d033638aca upstream.

Queued spinlocks are not used by DEC Alpha, and furthermore operations
such as READ_ONCE() and release/relaxed RMW atomics are being changed
to imply smp_read_barrier_depends().  This commit therefore removes the
now-redundant smp_read_barrier_depends() from queued_spin_lock_slowpath(),
and adjusts the comments accordingly.

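As a rough user-space analogue of the argument above (C11 atomics and C11
threads, not the kernel's actual READ_ONCE() implementation; all names here
are illustrative), a pointer published by a release store can be picked up
by a dependency-ordered load and dereferenced with no separate read barrier
in between:

#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

struct node {
	int data;
};

static struct node n = { 0 };
static _Atomic(struct node *) published = NULL;

static int producer(void *arg)
{
	(void)arg;
	n.data = 42;
	/* Release store: orders the initialization of n before the publish. */
	atomic_store_explicit(&published, &n, memory_order_release);
	return 0;
}

static int consumer(void *arg)
{
	struct node *p;

	(void)arg;
	/*
	 * Dependency-ordered load: the later dereference of p is ordered
	 * after this load, so no explicit barrier is needed in between.
	 */
	while (!(p = atomic_load_explicit(&published, memory_order_consume)))
		thrd_yield();
	printf("data = %d\n", p->data);	/* guaranteed to observe 42 */
	return 0;
}

int main(void)
{
	thrd_t c, pr;

	thrd_create(&c, consumer, NULL);
	thrd_create(&pr, producer, NULL);
	thrd_join(pr, NULL);
	thrd_join(c, NULL);
	return 0;
}

Current compilers promote memory_order_consume to memory_order_acquire,
which is still sufficient for the ordering shown here.
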
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
kernel/locking/qspinlock.c

index 50dc42aeaa569ca202595a30df1ff8c61f0234f4..5541acb79e152911b4c2997d8568c532e0f7fb85 100644
@@ -170,7 +170,7 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
  * @tail : The new queue tail code word
  * Return: The previous queue tail code word
  *
- * xchg(lock, tail)
+ * xchg(lock, tail), which heads an address dependency
  *
  * p,*,* -> n,*,* ; prev = xchg(lock, node)
  */
@@ -417,13 +417,11 @@ queue:
        if (old & _Q_TAIL_MASK) {
                prev = decode_tail(old);
                /*
-                * The above xchg_tail() is also a load of @lock which generates,
-                * through decode_tail(), a pointer.
-                *
-                * The address dependency matches the RELEASE of xchg_tail()
-                * such that the access to @prev must happen after.
+                * The above xchg_tail() is also a load of @lock which
+                * generates, through decode_tail(), a pointer.  The address
+                * dependency matches the RELEASE of xchg_tail() such that
+                * the subsequent access to @prev happens after.
                 */
-               smp_read_barrier_depends();
 
                WRITE_ONCE(prev->next, node);
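
A minimal user-space sketch of the enqueue pattern in the hunk above,
assuming a plain pointer exchange on the tail (the kernel's xchg_tail()
additionally encodes CPU/index bits that decode_tail() turns back into a
pointer, which is omitted here; all names are illustrative):

#include <stdatomic.h>
#include <stddef.h>

struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_int locked;
};

static _Atomic(struct mcs_node *) tail = NULL;

void mcs_enqueue(struct mcs_node *node)
{
	struct mcs_node *prev;

	atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&node->locked, 0, memory_order_relaxed);

	/*
	 * The exchange publishes @node (release) and also loads the old
	 * tail.  That load heads the address dependency: the store to
	 * prev->next below dereferences the loaded pointer, so it must
	 * come after the exchange.
	 */
	prev = atomic_exchange_explicit(&tail, node, memory_order_acq_rel);
	if (prev)
		atomic_store_explicit(&prev->next, node, memory_order_relaxed);
	/* ... the caller would then spin on node->locked ... */
}

Portable C11 needs the acquire half of the exchange (or memory_order_consume)
to get this guarantee; the kernel instead relies on the address dependency
itself, which READ_ONCE() and the release/relaxed RMW atomics now honour on
Alpha as well, hence the removal of the explicit smp_read_barrier_depends().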