sched/fair: Reduce minimal imbalance threshold
author    Vincent Guittot <vincent.guittot@linaro.org>
          Mon, 21 Sep 2020 07:24:22 +0000 (09:24 +0200)
committer Peter Zijlstra <peterz@infradead.org>
          Fri, 25 Sep 2020 12:23:26 +0000 (14:23 +0200)
The 25% default imbalance threshold for the DIE and NUMA domains is
large enough to generate significant unfairness between threads. A
typical example is 11 threads running on 2x4 CPUs. The 20% imbalance
between the 2 groups of 4 cores is just low enough not to trigger
load balancing between the 2 groups. We always end up with the same
6 threads on one group of 4 CPUs and the other 5 threads on the other
group of CPUs. With fair time sharing within each group, the threads
in the group of 5 end up with +20% running time.
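
A simplified userspace sketch of the check involved (this illustrates
the "100 * busiest_load <= imbalance_pct * local_load" comparison used
for the overloaded case in find_busiest_group(), not the kernel code
itself; loads are assumed proportional to the number of runnable
threads in each group):

    #include <stdio.h>

    /* Return 1 when the groups are considered balanced (no migration). */
    static int considered_balanced(unsigned long busiest_load,
                                   unsigned long local_load,
                                   unsigned int imbalance_pct)
    {
            return 100 * busiest_load <= imbalance_pct * local_load;
    }

    int main(void)
    {
            unsigned long busiest = 6 * 1024;   /* group running 6 threads */
            unsigned long local   = 5 * 1024;   /* group running 5 threads */

            /* 614400 <= 640000 -> stays "balanced", the 6/5 split persists */
            printf("pct=125: balanced=%d\n",
                   considered_balanced(busiest, local, 125));

            /* 614400 >  599040 -> imbalance detected, a task can be pulled */
            printf("pct=117: balanced=%d\n",
                   considered_balanced(busiest, local, 117));

            return 0;
    }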

Consider decreasing the imbalance threshold for the overloaded case,
where we use the load to balance tasks and to ensure fair time sharing.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Phil Auld <pauld@redhat.com>
Acked-by: Hillf Danton <hdanton@sina.com>
Link: https://lkml.kernel.org/r/20200921072424.14813-3-vincent.guittot@linaro.org
kernel/sched/topology.c

index 249bec7b0a4c8824c703987ef5e611f8f291703d..41df62884cea91043822a4cfc8abbbdd8b46d423 100644
@@ -1349,7 +1349,7 @@ sd_init(struct sched_domain_topology_level *tl,
                .min_interval           = sd_weight,
                .max_interval           = 2*sd_weight,
                .busy_factor            = 32,
-               .imbalance_pct          = 125,
+               .imbalance_pct          = 117,
 
                .cache_nice_tries       = 0,