[PORT FROM R4] sched: Fix cgroup movement of waking process
author     Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
           Thu, 15 Dec 2011 05:37:41 +0000 (14:37 +0900)
committer  buildbot <buildbot@intel.com>
           Wed, 27 Jun 2012 04:21:30 +0000 (21:21 -0700)
BZ: 42195

(backport of upstream commit 62af3783e4f, merged in v3.3-rc1)

There is a small race between try_to_wake_up() and sched_move_task(),
which is trying to move the process being woken up.

    try_to_wake_up() on CPU0       sched_move_task() on CPU1
 --------------------------------+---------------------------------
  raw_spin_lock_irqsave(p->pi_lock)
  task_waking_fair()
    ->p.se.vruntime -= cfs_rq->min_vruntime
  ttwu_queue()
    ->send reschedule IPI to CPU1
  raw_spin_unlock_irqrestore(p->pi_lock)
                                   task_rq_lock()
                                     -> trying to acquire both p->pi_lock and
                                        rq->lock with IRQ disabled
                                   task_move_group_fair()
                                     -> p.se.vruntime
                                          -= (old)cfs_rq->min_vruntime
                                          += (new)cfs_rq->min_vruntime
                                   task_rq_unlock()

                                   (via IPI)
                                   sched_ttwu_pending()
                                     raw_spin_lock(rq->lock)
                                     ttwu_do_activate()
                                       ...
                                       enqueue_entity()
                                         child.se->vruntime += cfs_rq->min_vruntime
                                     raw_spin_unlock(rq->lock)

As a result, the vruntime of the process becomes far bigger than min_vruntime
if (new)cfs_rq->min_vruntime >> (old)cfs_rq->min_vruntime: the old
min_vruntime is subtracted twice (once in task_waking_fair(), once in
task_move_group_fair()), and the new min_vruntime is added twice (once in
task_move_group_fair(), once in enqueue_entity()).
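
A standalone userspace sketch of that skew, with made-up numbers (the variable
names and values below are purely illustrative, not kernel code):

    #include <stdio.h>

    int main(void)
    {
            /* hypothetical values, chosen only to make the skew visible */
            long long vruntime = 2000;      /* task's vruntime on the old cfs_rq    */
            long long old_min  = 900;       /* (old)cfs_rq->min_vruntime            */
            long long new_min  = 5000000;   /* (new)cfs_rq->min_vruntime, far ahead */
            long long expected = 2000 - 900 + 5000000;  /* what the fix produces    */

            vruntime -= old_min;    /* task_waking_fair() on CPU0                   */
            vruntime -= old_min;    /* task_move_group_fair() on CPU1, racing       */
            vruntime += new_min;    /*   ... same function, re-bases on new cfs_rq  */
            vruntime += new_min;    /* sched_ttwu_pending() -> enqueue_entity()     */

            printf("racy vruntime = %lld\n", vruntime);            /* 10000200 */
            printf("expected      = %lld\n", expected);            /*  5001100 */
            printf("excess        = %lld\n", vruntime - expected); /*  4999100 = new_min - old_min */
            return 0;
    }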

This patch fixes the problem by simply skipping the vruntime adjustment for
such a process in task_move_group_fair(), because its vruntime has already
been normalized in task_waking_fair().
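
For reference, the normalization relied on here happens on the waking side; a
simplified paraphrase of task_waking_fair() (a sketch only: the real code in
kernel/sched_fair.c of this kernel also contains a 32-bit min_vruntime_copy
retry loop, which is omitted here) looks like:

    static void task_waking_fair(struct task_struct *p)
    {
            struct sched_entity *se = &p->se;
            struct cfs_rq *cfs_rq = cfs_rq_of(se);

            /*
             * The vruntime is made relative to the old cfs_rq before the
             * rq lock is taken, which is why a concurrent
             * task_move_group_fair() must not re-adjust it.
             */
            se->vruntime -= cfs_rq->min_vruntime;
    }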

Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Tejun Heo <tj@kernel.org>
Link: http://lkml.kernel.org/r/20111215143741.df82dd50.nishimura@mxp.nes.nec.co.jp
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Orig-Change-Id: I5eabfb717750908a7045a35d8c10a1fcf7706052
Signed-off-by: German Monroy <german.monroy@intel.com>
Conflicts:
        kernel/sched_fair.c

Change-Id: I2b32a77e3958f4b83ad99b3e74b898439b1e9da7
Reviewed-on: http://android.intel.com:8080/54413
Reviewed-by: Monroy, German <german.monroy@intel.com>
Reviewed-by: Yang, Fei <fei.yang@intel.com>
Tested-by: Ng, Cheon-woei <cheon-woei.ng@intel.com>
Reviewed-by: buildbot <buildbot@intel.com>
Tested-by: buildbot <buildbot@intel.com>
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 9cd6b99..dca7f2e 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -4274,11 +4274,13 @@ static void task_move_group_fair(struct task_struct *p, int on_rq)
         *
         * - Moving a forked child which is waiting for being woken up by
         *   wake_up_new_task().
+        * - Moving a task which has been woken up by try_to_wake_up() and
+        *   waiting for actually being woken up by sched_ttwu_pending().
         *
         * To prevent boost or penalty in the new cfs_rq caused by delta
         * min_vruntime between the two cfs_rqs, we skip vruntime adjustment.
         */
-       if (!on_rq && !p->se.sum_exec_runtime)
+       if (!on_rq && (!p->se.sum_exec_runtime || p->state == TASK_WAKING))
                on_rq = 1;
 
        if (!on_rq)