sched/core: Micro-optimize ttwu_runnable()
author	Chengming Zhou <zhouchengming@bytedance.com>
	Fri, 23 Dec 2022 10:32:56 +0000 (18:32 +0800)
committer	Ingo Molnar <mingo@kernel.org>
	Sat, 7 Jan 2023 09:48:38 +0000 (10:48 +0100)
ttwu_runnable() is used as a fast wakeup path when the wakee task is
either running on a CPU or runnable on its RQ; in both cases we can
just set its state to TASK_RUNNING to prevent it from going to sleep.
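
For context, the fast path is entered from try_to_wake_up() roughly as
follows (a simplified sketch of the surrounding scheduler core, not part
of this patch; the exact code may differ):

	/*
	 * Simplified: if the task is still on a runqueue, try the
	 * ttwu_runnable() fast path and skip the full ttwu_queue() path.
	 */
	smp_rmb();	/* order the p->on_rq load against the earlier state checks */
	if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
		goto unlock;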

If the wakee task is on_cpu (currently running), we don't need to
update_rq_clock() or check_preempt_curr().

But if the wakee task is on_rq && !on_cpu (e.g. an IRQ hit before
the task got to schedule() and the task has since been preempted), we
should check_preempt_curr() to see if it can preempt the currently
running task.
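
For reference, task_on_cpu() essentially reports p->on_cpu; a simplified
sketch of the helper (the exact definition in kernel/sched/sched.h may
differ):

	static inline bool task_on_cpu(struct rq *rq, struct task_struct *p)
	{
	#ifdef CONFIG_SMP
		return p->on_cpu;
	#else
		return task_current(rq, p);
	#endif
	}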

This also removes the class->task_woken() callback from ttwu_runnable(),
which wasn't required per the RT/DL implementations: any required push
operation would have been queued during class->set_next_task() when p
got preempted.
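
For RT, for instance, the push is queued as a balance callback when the
preempting task is set as current; roughly (a simplified sketch of
kernel/sched/rt.c, the exact code may differ):

	static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool first)
	{
		...
		if (!first)
			return;
		...
		rt_queue_push_tasks(rq);
	}

	static inline void rt_queue_push_tasks(struct rq *rq)
	{
		if (!has_pushable_tasks(rq))
			return;

		/* run push_rt_tasks() once the rq lock is about to be released */
		queue_balance_callback(rq, &per_cpu(rt_push_head, rq->cpu), push_rt_tasks);
	}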

ttwu_runnable() also loses the update to rq->idle_stamp, as by definition
the rq cannot be idle in this scenario.
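
For reference, before this change the fast path went through
ttwu_do_wakeup(), whose SMP-only tail performed both of the dropped
operations; roughly (a simplified sketch, the exact code may differ):

	/* tail of the old ttwu_do_wakeup(), no longer run from ttwu_runnable(): */
	#ifdef CONFIG_SMP
		if (p->sched_class->task_woken) {
			rq_unpin_lock(rq, rf);
			p->sched_class->task_woken(rq, p);
			rq_repin_lock(rq, rf);
		}

		if (rq->idle_stamp) {
			u64 delta = rq_clock(rq) - rq->idle_stamp;
			u64 max = 2*rq->max_idle_balance_cost;

			update_avg(&rq->avg_idle, delta);

			if (rq->avg_idle > max)
				rq->avg_idle = max;

			rq->idle_stamp = 0;
		}
	#endif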

Suggested-by: Valentin Schneider <vschneid@redhat.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Link: https://lore.kernel.org/r/20221223103257.4962-1-zhouchengming@bytedance.com
kernel/sched/core.c

index f99ee69..255a318 100644
@@ -3720,9 +3720,16 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
 
        rq = __task_rq_lock(p, &rf);
        if (task_on_rq_queued(p)) {
-               /* check_preempt_curr() may use rq clock */
-               update_rq_clock(rq);
-               ttwu_do_wakeup(rq, p, wake_flags, &rf);
+               if (!task_on_cpu(rq, p)) {
+                       /*
+                        * When on_rq && !on_cpu the task is preempted, see if
+                        * it should preempt the task that is current now.
+                        */
+                       update_rq_clock(rq);
+                       check_preempt_curr(rq, p, wake_flags);
+               }
+               WRITE_ONCE(p->__state, TASK_RUNNING);
+               trace_sched_wakeup(p);
                ret = 1;
        }
        __task_rq_unlock(rq, &rf);