locking/core: Fix deadlock during boot on systems with GENERIC_LOCKBREAK
author    Will Deacon <will.deacon@arm.com>
          Tue, 28 Nov 2017 18:42:18 +0000 (18:42 +0000)
committer Ingo Molnar <mingo@kernel.org>
          Tue, 12 Dec 2017 10:24:01 +0000 (11:24 +0100)
Commit:

  a8a217c22116 ("locking/core: Remove {read,spin,write}_can_lock()")

removed the definition of raw_spin_can_lock(), causing the GENERIC_LOCKBREAK
spin_lock() routines to poll the ->break_lock field when waiting on a lock.
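
For reference, with CONFIG_GENERIC_LOCKBREAK=y the BUILD_LOCK_OPS() macro in
kernel/locking/spinlock.c generates the slowpath below, shown here expanded
for the spin case (a rough sketch of the pre-fix expansion, not the verbatim
source):

    void __lockfunc __raw_spin_lock(raw_spinlock_t *lock)
    {
            for (;;) {
                    preempt_disable();
                    if (likely(do_raw_spin_trylock(lock)))
                            break;
                    preempt_enable();

                    if (!(lock)->break_lock)
                            (lock)->break_lock = 1;
                    /* Waiters poll the flag they set themselves: */
                    while ((lock)->break_lock)
                            arch_spin_relax(&lock->raw_lock);
            }
            /* Only the CPU that wins the lock clears the flag: */
            (lock)->break_lock = 0;
    }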

This has been reported to cause a deadlock during boot on s390, because
the ->break_lock field is also set by the waiters, and can potentially
remain set indefinitely if no other CPUs come in to take the lock after
it has been released.
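
Concretely, a two-CPU schedule of the following shape (hypothetical, for
illustration only) produces the hang:

    CPU0                                CPU1
    ----                                ----
    spin_lock(lock);   /* owner */
                                        spin_lock(lock);
                                        /* trylock fails      */
                                        lock->break_lock = 1;
                                        while (lock->break_lock)
                                                relax();
    spin_unlock(lock);
    /* unlock never touches
       ->break_lock */
                                        /* flag still set: CPU1
                                           never re-runs the
                                           trylock and spins
                                           forever */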

This patch removes the explicit spinning on ->break_lock from the waiters,
instead relying on the outer trylock() operation to determine when the
lock is available.
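
With the change applied, the retry step in the expansion sketched above
reduces to the following (taken from the hunks below): each waiter relaxes
once and immediately falls back to the outer trylock(), so a released lock
is picked up on the next iteration and a stale ->break_lock can no longer
wedge a waiter.

            if (!(lock)->break_lock)
                    (lock)->break_lock = 1;

            arch_spin_relax(&lock->raw_lock);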

Reported-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Tested-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: a8a217c22116 ("locking/core: Remove {read,spin,write}_can_lock()")
Link: http://lkml.kernel.org/r/1511894539-7988-2-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/locking/spinlock.c

index 1fd1a75..0ebb253 100644
@@ -68,8 +68,8 @@ void __lockfunc __raw_##op##_lock(locktype##_t *lock)                 \
                                                                        \
                if (!(lock)->break_lock)                                \
                        (lock)->break_lock = 1;                         \
-               while ((lock)->break_lock)                              \
-                       arch_##op##_relax(&lock->raw_lock);             \
+                                                                       \
+               arch_##op##_relax(&lock->raw_lock);                     \
        }                                                               \
        (lock)->break_lock = 0;                                         \
 }                                                                      \
@@ -88,8 +88,8 @@ unsigned long __lockfunc __raw_##op##_lock_irqsave(locktype##_t *lock)        \
                                                                        \
                if (!(lock)->break_lock)                                \
                        (lock)->break_lock = 1;                         \
-               while ((lock)->break_lock)                              \
-                       arch_##op##_relax(&lock->raw_lock);             \
+                                                                       \
+               arch_##op##_relax(&lock->raw_lock);                     \
        }                                                               \
        (lock)->break_lock = 0;                                         \
        return flags;                                                   \