sched/topology: Remove redundant cpumask_and() in init_overlap_sched_group()
Author:     Barry Song <song.bao.hua@hisilicon.com>
AuthorDate: Thu, 25 Mar 2021 02:31:40 +0000 (15:31 +1300)
Commit:     Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 25 Mar 2021 10:41:23 +0000 (11:41 +0100)
mask is built in build_balance_mask() by iterating for_each_cpu(i, sg_span),
so every bit it can set is already set in sg_span; mask must therefore be a
subset of sched_group_span(sg).

The cpumask_and() with sched_group_span(sg) is thus a no-op - remove it by
using cpumask_first() instead of cpumask_first_and().

[ mingo: Adjusted the changelog a bit. ]

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com>
Link: https://lore.kernel.org/r/20210325023140.23456-1-song.bao.hua@hisilicon.com
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index f2066d6..d1aec24 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -934,7 +934,7 @@ static void init_overlap_sched_group(struct sched_domain *sd,
        int cpu;
 
        build_balance_mask(sd, sg, mask);
-       cpu = cpumask_first_and(sched_group_span(sg), mask);
+       cpu = cpumask_first(mask);
 
        sg->sgc = *per_cpu_ptr(sdd->sgc, cpu);
        if (atomic_inc_return(&sg->sgc->ref) == 1)