sched: Change wait_task_inactive()s match_state
author: Peter Zijlstra <peterz@infradead.org>
Mon, 22 Aug 2022 11:18:19 +0000 (13:18 +0200)
committer: Peter Zijlstra <peterz@infradead.org>
Wed, 7 Sep 2022 19:53:48 +0000 (21:53 +0200)
Make wait_task_inactive()'s @match_state work like ttwu()'s @state.

That is, instead of an equality comparison, use it as a mask. This
allows matching multiple block conditions.

(This also removes the unlikely annotation; it makes no sense for it
to cover only part of the condition.)

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220822114648.856734578@infradead.org
kernel/sched/core.c

index 1630181..43d71c6 100644
@@ -3294,7 +3294,7 @@ unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state
                 * is actually now running somewhere else!
                 */
                while (task_on_cpu(rq, p)) {
-                       if (match_state && unlikely(READ_ONCE(p->__state) != match_state))
+                       if (match_state && !(READ_ONCE(p->__state) & match_state))
                                return 0;
                        cpu_relax();
                }
@@ -3309,7 +3309,7 @@ unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state
                running = task_on_cpu(rq, p);
                queued = task_on_rq_queued(p);
                ncsw = 0;
-               if (!match_state || READ_ONCE(p->__state) == match_state)
+               if (!match_state || (READ_ONCE(p->__state) & match_state))
                        ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
                task_rq_unlock(rq, p, &rf);