From: Peter Zijlstra
Date: Wed, 6 Sep 2017 10:51:31 +0000 (+0200)
Subject: sched/fair: Fix wake_affine_llc() balancing rules
X-Git-Tag: v4.14-rc1~33^2~6
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=a731ebe6f17bff9e7ca12ef227f9da4d5bdf8425;p=platform%2Fkernel%2Flinux-rpi.git

sched/fair: Fix wake_affine_llc() balancing rules

Chris Wilson reported that the SMT balance rules got the +1 on the
wrong side, resulting in a bias towards the current LLC, which the
load-balancer would then try to undo.

Reported-by: Chris Wilson
Tested-by: Chris Wilson
Signed-off-by: Peter Zijlstra (Intel)
Cc: Andy Lutomirski
Cc: Linus Torvalds
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: linux-kernel@vger.kernel.org
Fixes: 90001d67be2f ("sched/fair: Fix wake_affine() for !NUMA_BALANCING")
Link: http://lkml.kernel.org/r/20170906105131.gqjmaextmn3u6tj2@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar
---

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8d58687..9dd2ce1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5435,7 +5435,7 @@ wake_affine_llc(struct sched_domain *sd, struct task_struct *p,
 		return false;
 
 	/* if this cache has capacity, come here */
-	if (this_stats.has_capacity && this_stats.nr_running < prev_stats.nr_running+1)
+	if (this_stats.has_capacity && this_stats.nr_running+1 < prev_stats.nr_running)
 		return true;
 
 	/*
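
[ Editor's illustration, not part of the commit: a minimal, standalone C
  sketch of why the +1 belongs on the side that receives the task. The
  simplified struct llc_stats and the pull_to_this() helper are hypothetical;
  the "stay on prev" test mirrors the companion rule that sits just above
  the changed line in kernel/sched/fair.c, and the real function falls
  through to an effective-load comparison that is elided here. ]

/* Standalone sketch, not kernel code: simplified LLC stats. */
#include <stdbool.h>
#include <stdio.h>

struct llc_stats {
	int nr_running;
	bool has_capacity;
};

/*
 * The waking task adds one runnable task to whichever side it lands on,
 * so the +1 must be charged to "this" in both tests:
 *
 *   stay on prev:  prev.nr_running     < this.nr_running + 1
 *   come to this:  this.nr_running + 1 < prev.nr_running
 *
 * The pre-fix code used "this.nr_running < prev.nr_running + 1" for the
 * second test, which also passes on a tie and therefore pulls wakeups
 * towards the current LLC even when nothing is gained.
 */
static bool pull_to_this(struct llc_stats this_s, struct llc_stats prev_s)
{
	if (prev_s.has_capacity && prev_s.nr_running < this_s.nr_running + 1)
		return false;			/* stay on prev */
	if (this_s.has_capacity && this_s.nr_running + 1 < prev_s.nr_running)
		return true;			/* come here */
	return false;				/* defer to the load comparison */
}

int main(void)
{
	/* Tie case: prev LLC is full, both LLCs run 4 tasks. */
	struct llc_stats this_s = { .nr_running = 4, .has_capacity = true };
	struct llc_stats prev_s = { .nr_running = 4, .has_capacity = false };

	/*
	 * Fixed rule: 4+1 < 4 is false, so no pull (prints 0).
	 * Buggy rule: 4 < 4+1 is true, so the task would have been pulled.
	 */
	printf("pull_to_this: %d\n", pull_to_this(this_s, prev_s));
	return 0;
}

With the fix, a tie leaves the task where it was and the "come here" rule
fires only when moving strictly reduces the imbalance, so the wakeup path
no longer creates placements the periodic load-balancer must undo.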