sched: Replace call_rcu_sched() with call_rcu()
author	Paul E. McKenney <paulmck@linux.ibm.com>
Wed, 7 Nov 2018 03:10:53 +0000 (19:10 -0800)
committer	Paul E. McKenney <paulmck@linux.ibm.com>
Fri, 25 Jan 2019 23:28:22 +0000 (15:28 -0800)
Now that call_rcu()'s callback is not invoked until after all
preempt-disable regions of code have completed (in addition to explicitly
marked RCU read-side critical sections), call_rcu() can be used in place
of call_rcu_sched().  This commit therefore makes that change.
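
For illustration only, here is a minimal sketch of the reader side that
this guarantee covers (not part of this patch; example_peek_rd() is a
hypothetical function):

	/* Hypothetical reader of the root domain, for illustration only. */
	static void example_peek_rd(int cpu)
	{
		struct root_domain *rd;

		preempt_disable();		/* now also an RCU reader */
		rd = cpu_rq(cpu)->rd;
		/*
		 * rd may be concurrently detached by rq_attach_root(), but
		 * it cannot be freed before preempt_enable():
		 * free_rootdomain() is invoked via call_rcu(), whose grace
		 * period now waits for preempt-disabled regions as well as
		 * rcu_read_lock() read-side critical sections.
		 */
		(void)cpumask_test_cpu(cpu, rd->span);
		preempt_enable();
	}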

While in the area, this commit also updates an outdated header comment
for for_each_domain().

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
kernel/sched/sched.h
kernel/sched/topology.c

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d04530b..6665b9c 100644
@@ -1260,7 +1260,7 @@ extern void sched_ttwu_pending(void);
 
 /*
  * The domain tree (rq->sd) is protected by RCU's quiescent state transition.
- * See detach_destroy_domains: synchronize_sched for details.
+ * See destroy_sched_domains: call_rcu for details.
  *
  * The domain tree of any CPU may only be accessed from within
  * preempt-disabled sections.
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 3f35ba1..7d905f5 100644
@@ -442,7 +442,7 @@ void rq_attach_root(struct rq *rq, struct root_domain *rd)
        raw_spin_unlock_irqrestore(&rq->lock, flags);
 
        if (old_rd)
-               call_rcu_sched(&old_rd->rcu, free_rootdomain);
+               call_rcu(&old_rd->rcu, free_rootdomain);
 }
 
 void sched_get_rd(struct root_domain *rd)
@@ -455,7 +455,7 @@ void sched_put_rd(struct root_domain *rd)
        if (!atomic_dec_and_test(&rd->refcount))
                return;
 
-       call_rcu_sched(&rd->rcu, free_rootdomain);
+       call_rcu(&rd->rcu, free_rootdomain);
 }
 
 static int init_rootdomain(struct root_domain *rd)