sched/fair: Have task_move_group_fair() also detach entity load from the old runqueue
author		Byungchul Park <byungchul.park@lge.com>
		Thu, 20 Aug 2015 11:21:58 +0000 (20:21 +0900)
committer	Ingo Molnar <mingo@kernel.org>
		Sun, 13 Sep 2015 07:52:47 +0000 (09:52 +0200)
Since we attach the entity's load to the new runqueue, we should also
detach the entity's load from the old runqueue, otherwise load can
accumulate.
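
For illustration, a minimal userspace model of the accumulation (this is
not kernel code; the struct and helper names are stand-ins for the
per-runqueue load tracked in cfs_rq and for the
detach_entity_load_avg()/attach_entity_load_avg() pair): without the
detach step, the old runqueue keeps the migrated entity's contribution,
so every group move leaves stale load behind.

#include <stdio.h>

/* Toy stand-in for a runqueue's aggregate load average. */
struct rq_model { long load_avg; };

/*
 * Move an entity's load contribution between queues. With do_detach == 0
 * this mimics the pre-patch behaviour: the old queue keeps stale load.
 */
static void move_load(struct rq_model *old_rq, struct rq_model *new_rq,
		      long se_load, int do_detach)
{
	if (do_detach)
		old_rq->load_avg -= se_load;	/* the detach step */
	new_rq->load_avg += se_load;		/* the attach step */
}

int main(void)
{
	struct rq_model a = { 100 }, b = { 0 };	/* entity load lives on 'a' */

	move_load(&a, &b, 100, 0);	/* buggy: no detach */
	printf("no detach:   a=%ld b=%ld (stale load left on a)\n",
	       a.load_avg, b.load_avg);

	a.load_avg = 100; b.load_avg = 0;
	move_load(&a, &b, 100, 1);	/* patched: detach first */
	printf("with detach: a=%ld b=%ld\n", a.load_avg, b.load_avg);
	return 0;
}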

Signed-off-by: Byungchul Park <byungchul.park@lge.com>
[ Rewrote the changelog. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1440069720-27038-4-git-send-email-byungchul.park@lge.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 959b2ea..1e1fe7f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8037,8 +8037,12 @@ static void task_move_group_fair(struct task_struct *p, int queued)
        if (!queued && (!se->sum_exec_runtime || p->state == TASK_WAKING))
                queued = 1;
 
+       cfs_rq = cfs_rq_of(se);
        if (!queued)
-               se->vruntime -= cfs_rq_of(se)->min_vruntime;
+               se->vruntime -= cfs_rq->min_vruntime;
+
+       /* Synchronize task with its prev cfs_rq */
+       detach_entity_load_avg(cfs_rq, se);
        set_task_rq(p, task_cpu(p));
        se->depth = se->parent ? se->parent->depth + 1 : 0;
        cfs_rq = cfs_rq_of(se);
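
The hunk is truncated here; note the design symmetry: after set_task_rq()
switches the task to the new group's runqueue, the function goes on to
re-attach the entity's load to that new cfs_rq, so the detach added above
pairs with the existing attach against the new queue.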