sched/rt: Remove redundant nr_cpus_allowed test
author Shawn Bohrer <sbohrer@rgmadvisors.com>
Fri, 4 Oct 2013 19:24:53 +0000 (14:24 -0500)
committer Jiri Slaby <jslaby@suse.cz>
Wed, 12 Mar 2014 12:25:40 +0000 (13:25 +0100)
commit 6bfa687c19b7ab8adee03f0d43c197c2945dd869 upstream.

In 76854c7e8f3f4172fef091e78d88b3b751463ac6 ("sched: Use
rt.nr_cpus_allowed to recover select_task_rq() cycles") an
optimization was added to select_task_rq_rt() that immediately
returns when p->nr_cpus_allowed == 1 at the beginning of the
function.

This makes the later p->nr_cpus_allowed > 1 check redundant,
so it can now be removed.
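
For reference, here is a minimal user-space sketch of the control flow in
question; struct task, select_cpu() and this_cpu are invented stand-ins for
illustration, not the actual kernel identifiers:

#include <stdio.h>

struct task {                   /* stand-in for struct task_struct */
	int nr_cpus_allowed;
	int prio;               /* lower value means higher priority */
};

static int select_cpu(const struct task *p, const struct task *curr,
		      int this_cpu)
{
	/* Early return added by 76854c7e8f3f: a task allowed on only
	 * one CPU can run nowhere but where it already is. */
	if (p->nr_cpus_allowed == 1)
		return this_cpu;

	/* Beyond this point p->nr_cpus_allowed is guaranteed to be
	 * greater than 1, so re-testing it here would be dead logic. */
	if (curr->nr_cpus_allowed < 2 || curr->prio <= p->prio)
		return -1;      /* placeholder for find_lowest_rq(p) */

	return this_cpu;
}

int main(void)
{
	struct task pinned = { .nr_cpus_allowed = 1, .prio = 10 };
	struct task curr   = { .nr_cpus_allowed = 4, .prio = 5  };

	/* The pinned task takes the early return and never reaches
	 * the second test. */
	printf("selected cpu: %d\n", select_cpu(&pinned, &curr, 0));
	return 0;
}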

Signed-off-by: Shawn Bohrer <sbohrer@rgmadvisors.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Mike Galbraith <mgalbraith@suse.de>
Cc: tomk@rgmadvisors.com
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1380914693-24634-1-git-send-email-shawn.bohrer@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 0c7886ff263ede19639f314396beec630cfd3949..ff04e1a06412f25db4a379a6593cc0b6d36c0f76 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1229,8 +1229,7 @@ select_task_rq_rt(struct task_struct *p, int sd_flag, int flags)
         */
        if (curr && unlikely(rt_task(curr)) &&
            (curr->nr_cpus_allowed < 2 ||
-            curr->prio <= p->prio) &&
-           (p->nr_cpus_allowed > 1)) {
+            curr->prio <= p->prio)) {
                int target = find_lowest_rq(p);
 
                if (target != -1)