sched/fair: use lsub_positive in cpu_util_next()
Author:     Vincent Donnefort <vincent.donnefort@arm.com>
AuthorDate: Thu, 25 Feb 2021 08:36:12 +0000 (08:36 +0000)
Commit:     Ingo Molnar <mingo@kernel.org>
CommitDate: Sat, 6 Mar 2021 11:40:22 +0000 (12:40 +0100)
lsub_positive(), the local-variable version of sub_positive(), saves an
explicit load-store and is sufficient for the cpu_util_next() usage,
where util is a local variable.

Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Quentin Perret <qperret@google.com>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Link: https://lkml.kernel.org/r/20210225083612.1113823-3-vincent.donnefort@arm.com
kernel/sched/fair.c

index b994db9d06063f262eb7ce6cc8444b8f60832354..7b2fac0d446d97b689616c0e5dfd0a983fc7f431 100644 (file)
@@ -6471,7 +6471,7 @@ static unsigned long cpu_util_next(int cpu, struct task_struct *p, int dst_cpu)
         * util_avg should already be correct.
         */
        if (task_cpu(p) == cpu && dst_cpu != cpu)
-               sub_positive(&util, task_util(p));
+               lsub_positive(&util, task_util(p));
        else if (task_cpu(p) != cpu && dst_cpu == cpu)
                util += task_util(p);