lockdep: Fix nr_unused_locks accounting
author		Peter Zijlstra <peterz@infradead.org>
		Tue, 27 Oct 2020 12:48:34 +0000 (13:48 +0100)
committer	Peter Zijlstra <peterz@infradead.org>
		Fri, 30 Oct 2020 16:07:18 +0000 (17:07 +0100)
Chris reported that commit 24d5a3bffef1 ("lockdep: Fix
usage_traceoverflow") breaks the nr_unused_locks validation code
triggered by /proc/lockdep_stats.

By fully splitting LOCK_USED and LOCK_USED_READ, the LOCK_USED bit
alone becomes a bad indicator for nr_unused_locks accounting; simplify
by treating any first usage bit as the class becoming used.
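
For illustration only (not part of the patch), a minimal userspace
sketch of the fixed accounting rule; the names below are simplified
stand-ins for the lockdep internals, not the real kernel API:

/*
 * Illustration only: a userspace sketch of the fixed accounting.
 * All names are simplified stand-ins, not the real lockdep API.
 */
#include <stdio.h>

#define LOCK_USED_BIT		0	/* stand-in for LOCK_USED */
#define LOCK_USED_READ_BIT	1	/* stand-in for LOCK_USED_READ */

static int nr_unused_locks = 1;		/* one lock class, not used yet */
static unsigned long usage_mask;	/* per-class usage bits */

static void mark_usage(int new_bit)
{
	unsigned long new_mask = 1UL << new_bit;

	if (usage_mask & new_mask)
		return;			/* bit already recorded */

	/*
	 * Any first bit marks the class as used, so the counter
	 * drops exactly once per class, regardless of whether a
	 * read or a write usage is recorded first.
	 */
	if (!usage_mask)
		nr_unused_locks--;

	usage_mask |= new_mask;
}

int main(void)
{
	mark_usage(LOCK_USED_READ_BIT);	/* first use is a read */
	mark_usage(LOCK_USED_BIT);	/* must not decrement again */
	printf("nr_unused_locks = %d\n", nr_unused_locks);	/* 0 */
	return 0;
}

Because the decrement happens before usage_mask is updated, a class
whose first recorded usage is LOCK_USED_READ is no longer left counted
as unused, and the dedicated LOCK_USED case becomes redundant.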

Fixes: 24d5a3bffef1 ("lockdep: Fix usage_traceoverflow")
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://lkml.kernel.org/r/20201027124834.GL2628@hirez.programming.kicks-ass.net
kernel/locking/lockdep.c
index 1102849..b71ad8d 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4396,6 +4396,9 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
        if (unlikely(hlock_class(this)->usage_mask & new_mask))
                goto unlock;
 
+       if (!hlock_class(this)->usage_mask)
+               debug_atomic_dec(nr_unused_locks);
+
        hlock_class(this)->usage_mask |= new_mask;
 
        if (new_bit < LOCK_TRACE_STATES) {
@@ -4403,19 +4406,10 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
                        return 0;
        }
 
-       switch (new_bit) {
-       case 0 ... LOCK_USED-1:
+       if (new_bit < LOCK_USED) {
                ret = mark_lock_irq(curr, this, new_bit);
                if (!ret)
                        return 0;
-               break;
-
-       case LOCK_USED:
-               debug_atomic_dec(nr_unused_locks);
-               break;
-
-       default:
-               break;
        }
 
 unlock: