From: Lin Ming
Date: Thu, 15 Jan 2009 16:17:15 +0000 (+0100)
Subject: sched: sched_slice() fixlet
X-Git-Tag: v3.12-rc1~16193^2
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=6272d68cc6a5f90c6b1a2228cf0f67b895305d17;p=kernel%2Fkernel-generic.git

sched: sched_slice() fixlet

Mike's change 0a582440f ("sched: fix sched_slice()") broke group
scheduling by forgetting to reload cfs_rq on each loop iteration.

This patch fixes the aim7 regression, and the specjbb2005 regression
becomes less than 1.5% on an 8-core Stokley.

Signed-off-by: Lin Ming
Signed-off-by: Peter Zijlstra
Tested-by: Jayson King
Signed-off-by: Ingo Molnar
---

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 16b419b..5cc1c16 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -429,7 +429,10 @@ static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
 
 	for_each_sched_entity(se) {
-		struct load_weight *load = &cfs_rq->load;
+		struct load_weight *load;
+
+		cfs_rq = cfs_rq_of(se);
+		load = &cfs_rq->load;
 
 		if (unlikely(!se->on_rq)) {
 			struct load_weight lw = cfs_rq->load;
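
[Editor's note] To illustrate the bug class this hunk fixes, here is a minimal,
self-contained userspace sketch. It is not the kernel's sched_slice(); all names
(struct entity, struct rq, walk_slice) and numbers are made up for this example.
It only shows why the per-level runqueue must be reloaded on every step of the
hierarchy walk, which is exactly what the added cfs_rq = cfs_rq_of(se) does.

/*
 * Sketch: walking up a scheduling-entity hierarchy and scaling a period
 * by the entity's share of its runqueue's load at every level.  If the
 * runqueue pointer is not reloaded per level, every level is wrongly
 * divided by the leaf runqueue's load.
 */
#include <stdio.h>

struct rq {
	unsigned long load;		/* total weight queued at this level */
};

struct entity {
	unsigned long weight;		/* this entity's weight on its rq */
	struct rq *rq;			/* runqueue this entity is queued on */
	struct entity *parent;		/* NULL at the top of the hierarchy */
};

/* Scale 'period' by the entity's share at every level of the hierarchy. */
static unsigned long walk_slice(struct entity *se, unsigned long period)
{
	unsigned long slice = period;

	for (; se; se = se->parent) {
		struct rq *rq = se->rq;	/* the fix: reload the rq per level */

		slice = slice * se->weight / rq->load;
	}
	return slice;
}

int main(void)
{
	struct rq top = { .load = 3072 }, child = { .load = 2048 };
	struct entity group = { .weight = 1024, .rq = &top,   .parent = NULL   };
	struct entity task  = { .weight = 1024, .rq = &child, .parent = &group };

	/* With a stale rq, both levels would divide by child.load (2048). */
	printf("slice = %lu\n", walk_slice(&task, 12000UL));	/* prints 2000 */
	return 0;
}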