rcu: Remove redundant call to rcu_boost_kthread_setaffinity()
author    Zqiang <qiang1.zhang@intel.com>
          Wed, 21 Dec 2022 19:15:43 +0000 (11:15 -0800)
committer Paul E. McKenney <paulmck@kernel.org>
          Thu, 12 Jan 2023 19:30:11 +0000 (11:30 -0800)
The rcu_boost_kthread_setaffinity() function is invoked at
rcutree_online_cpu() and rcutree_offline_cpu() time, early in the online
timeline and late in the offline timeline, respectively.  It is also
invoked from rcutree_dead_cpu(); however, in the absence of userspace
manipulations (for which userspace must take responsibility), this call
is redundant with the one from rcutree_offline_cpu().  This redundancy
can be demonstrated by printing out the relevant cpumasks.
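For example, a hypothetical debug-only helper along the following lines
(not part of this commit; the helper name is made up for illustration)
could be invoked right after each call to rcu_boost_kthread_setaffinity()
to print the boost kthread's resulting CPU affinity:

	/*
	 * Hypothetical debug sketch: print the boost kthread's allowed
	 * CPUs for this rcu_node, assuming ->boost_kthread_task and the
	 * task's ->cpus_ptr mask.
	 */
	static void rcu_boost_affinity_debug(struct rcu_node *rnp, int outgoingcpu)
	{
		struct task_struct *t = READ_ONCE(rnp->boost_kthread_task);

		if (!t)
			return;
		pr_info("rcu: rnp %d-%d outgoingcpu %d boost kthread affinity %*pbl\n",
			rnp->grplo, rnp->grphi, outgoingcpu,
			cpumask_pr_args(t->cpus_ptr));
	}

With such instrumentation, the affinity reported at rcutree_dead_cpu()
time would be expected to match the one already set at
rcutree_offline_cpu() time.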

This commit therefore removes the call to rcu_boost_kthread_setaffinity()
from rcutree_dead_cpu().

Signed-off-by: Zqiang <qiang1.zhang@intel.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 80b84ae..89313c7 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4076,15 +4076,10 @@ static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
  */
 int rcutree_dead_cpu(unsigned int cpu)
 {
-       struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
-       struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rdp & rnp. */
-
        if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
                return 0;
 
        WRITE_ONCE(rcu_state.n_online_cpus, rcu_state.n_online_cpus - 1);
-       /* Adjust any no-longer-needed kthreads. */
-       rcu_boost_kthread_setaffinity(rnp, -1);
        // Stop-machine done, so allow nohz_full to disable tick.
        tick_dep_clear(TICK_DEP_BIT_RCU);
        return 0;