powerpc: qspinlock: Mark accesses to qnode lock checks
author Rohan McLure <rmclure@linux.ibm.com>
Wed, 10 May 2023 03:31:07 +0000 (13:31 +1000)
committer Michael Ellerman <mpe@ellerman.id.au>
Wed, 21 Jun 2023 05:13:57 +0000 (15:13 +1000)
The powerpc implementation of qspinlocks will both poll and spin on the
bitlock guarding a qnode. Mark these accesses with READ_ONCE to convey
to KCSAN that polling is intentional here.

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230510033117.1395895-2-rmclure@linux.ibm.com
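The patch marks the qnode polling reads with READ_ONCE() so that KCSAN treats the concurrent, racy read as intentional rather than reporting it. As background only, here is a minimal userspace sketch of that pattern; it is not taken from the kernel. The READ_ONCE()/WRITE_ONCE() macros below are simplified volatile-cast approximations of the kernel's, and the qnode/waiter/releaser harness is hypothetical, assumed purely for illustration.

#include <pthread.h>
#include <stdio.h>

/*
 * Simplified stand-ins for the kernel's READ_ONCE()/WRITE_ONCE():
 * volatile accesses that tell the compiler (and a race detector)
 * the racy access is intentional and must not be hoisted or torn.
 */
#define READ_ONCE(x)		(*(const volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v)	(*(volatile typeof(x) *)&(x) = (v))

struct qnode {
	int locked;	/* set to 1 when the predecessor hands over the lock */
};

static struct qnode node;

/* Waiter side: poll the per-node flag until the predecessor releases us. */
static void *waiter(void *arg)
{
	while (!READ_ONCE(node.locked))
		;	/* spin; each iteration re-reads memory */
	printf("waiter: handed the lock\n");
	return NULL;
}

/* Releaser side: hand the lock to the waiter with a marked store. */
static void *releaser(void *arg)
{
	WRITE_ONCE(node.locked, 1);
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, waiter, NULL);
	pthread_create(&r, NULL, releaser, NULL);
	pthread_join(r, NULL);
	pthread_join(w, NULL);
	return 0;
}

Without the marked read, the compiler would be free to cache node.locked in a register and never re-read it inside the loop, and a data-race detector such as KCSAN would flag the unannotated concurrent access; the marked read documents that the polling is deliberate.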
arch/powerpc/lib/qspinlock.c

index e4bd145..b76c1f6 100644
@@ -435,7 +435,7 @@ yield_prev:
 
        smp_rmb(); /* See __yield_to_locked_owner comment */
 
-       if (!node->locked) {
+       if (!READ_ONCE(node->locked)) {
                yield_to_preempted(prev_cpu, yield_count);
                spin_begin();
                return preempted;
@@ -584,7 +584,7 @@ static __always_inline void queued_spin_lock_mcs_queue(struct qspinlock *lock, b
 
                /* Wait for mcs node lock to be released */
                spin_begin();
-               while (!node->locked) {
+               while (!READ_ONCE(node->locked)) {
                        spec_barrier();
 
                        if (yield_to_prev(lock, node, old, paravirt))