sched/fair: Prevent unlimited runtime on throttled group
author Vincent Guittot <vincent.guittot@linaro.org>
Tue, 14 Jan 2020 14:13:56 +0000 (15:13 +0100)
committer Ingo Molnar <mingo@kernel.org>
Tue, 28 Jan 2020 20:36:58 +0000 (21:36 +0100)
When a running task is moved to a throttled task group and there is no
other task enqueued on the CPU, the task can keep running using 100% of
the CPU, regardless of the bandwidth allocated to the group and even
though its cfs_rq is throttled. Furthermore, the group entity of the
cfs_rq and its parents are not enqueued, but only set as curr on their
respective cfs_rqs.

We have the following sequence:

sched_move_task
  -dequeue_task: dequeue the task and its group_entities.
  -put_prev_task: put the task and its group_entities.
  -sched_change_group: move the task to the new group.
  -enqueue_task: enqueue only the task, but not the group_entities,
    because the cfs_rq is throttled.
  -set_next_task: set the task and its group_entities as the current
    sched_entity of their cfs_rq.
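
For reference, this is roughly the shape of sched_move_task() in the
kernel this patch is based on (a sketch only, with details elided; the
hunk below shows the exact context being changed):

	void sched_move_task(struct task_struct *tsk)
	{
		int queued, running, queue_flags =
			DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
		struct rq_flags rf;
		struct rq *rq;

		rq = task_rq_lock(tsk, &rf);
		update_rq_clock(rq);

		running = task_current(rq, tsk);
		queued = task_on_rq_queued(tsk);

		if (queued)
			dequeue_task(rq, tsk, queue_flags);
		if (running)
			put_prev_task(rq, tsk);

		/* The task may land in a throttled task group here. */
		sched_change_group(tsk, TASK_MOVE_GROUP);

		if (queued)
			enqueue_task(rq, tsk, queue_flags); /* group_entities skipped: cfs_rq throttled */
		if (running)
			set_next_task(rq, tsk);             /* entities still marked as curr */

		task_rq_unlock(rq, tsk, &rf);
	}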

Another impact is that the runnable_load_avg of the root cfs_rq stays
null because the group_entities are not enqueued. This situation will
persist until an "external" event triggers a reschedule. Let's trigger
one immediately instead.
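
For context, resched_curr() only flags the current task for reschedule
(and kicks the remote CPU if needed); the actual switch away from the
throttled task then happens at the next scheduling point. A rough
sketch, not a verbatim copy of the kernel function:

	void resched_curr(struct rq *rq)
	{
		struct task_struct *curr = rq->curr;
		int cpu = cpu_of(rq);

		if (test_tsk_need_resched(curr))
			return;

		if (cpu == smp_processor_id()) {
			/* Local CPU: set TIF_NEED_RESCHED and the preempt hint. */
			set_tsk_need_resched(curr);
			set_preempt_need_resched();
			return;
		}

		/* Remote CPU: set the flag and send a reschedule IPI. */
		if (set_nr_and_not_polling(curr))
			smp_send_reschedule(cpu);
	}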

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Ben Segall <bsegall@google.com>
Link: https://lkml.kernel.org/r/1579011236-31256-1-git-send-email-vincent.guittot@linaro.org
kernel/sched/core.c

index a8a5d5b6f5cf7ed2168380c3d4f45dc5881c8de4..89e54f3ed571a6b743f1cbd832150024ae7fb15f 100644 (file)
@@ -7072,8 +7072,15 @@ void sched_move_task(struct task_struct *tsk)
 
        if (queued)
                enqueue_task(rq, tsk, queue_flags);
-       if (running)
+       if (running) {
                set_next_task(rq, tsk);
+               /*
+                * After changing group, the running task may have joined a
+                * throttled one but it's still the running task. Trigger a
+                * resched to make sure that task can still run.
+                */
+               resched_curr(rq);
+       }
 
        task_rq_unlock(rq, tsk, &rf);
 }