rcu/nocb: Use appropriate rcu_nocb_lock_irqsave()
author	Frederic Weisbecker <frederic@kernel.org>
Tue, 19 Oct 2021 00:08:12 +0000 (02:08 +0200)
committer	Paul E. McKenney <paulmck@kernel.org>
Wed, 8 Dec 2021 00:24:44 +0000 (16:24 -0800)
Instead of open-coding local_irq_save() followed by rcu_nocb_lock(),
use the consolidated rcu_nocb_lock_irqsave() API (and fix a comment, as
per Valentin Schneider's suggestion).
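
For reference, a minimal sketch of what the consolidated helper can look
like, assuming the CONFIG_RCU_NOCB_CPU variant from kernel/rcu/tree_nocb.h
of this era (an illustration of the pattern, not necessarily the verbatim
kernel macro):

	/*
	 * With callback offloading enabled, ->cblist is protected by
	 * ->nocb_lock, so disabling IRQs alone is not enough; without
	 * offloading, local_irq_save() suffices on its own.
	 */
	#define rcu_nocb_lock_irqsave(rdp, flags)			\
	do {								\
		local_irq_save(flags);					\
		if (rcu_segcblist_is_offloaded(&(rdp)->cblist))		\
			raw_spin_lock(&(rdp)->nocb_lock);		\
	} while (0)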

Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
kernel/rcu/tree.c

index 4cbfc4e4fa9edcaf89b4a9b984cb3ade34efa969..20587d035d03b1d5391cd91122a4166a9223ce05 100644 (file)
@@ -2478,12 +2478,11 @@ static void rcu_do_batch(struct rcu_data *rdp)
        }
 
        /*
-        * Extract the list of ready callbacks, disabling to prevent
+        * Extract the list of ready callbacks, disabling IRQs to prevent
         * races with call_rcu() from interrupt handlers.  Leave the
         * callback counts, as rcu_barrier() needs to be conservative.
         */
-       local_irq_save(flags);
-       rcu_nocb_lock(rdp);
+       rcu_nocb_lock_irqsave(rdp, flags);
        WARN_ON_ONCE(cpu_is_offline(smp_processor_id()));
        pending = rcu_segcblist_n_cbs(&rdp->cblist);
        div = READ_ONCE(rcu_divisor);
@@ -2546,8 +2545,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
                }
        }
 
-       local_irq_save(flags);
-       rcu_nocb_lock(rdp);
+       rcu_nocb_lock_irqsave(rdp, flags);
        rdp->n_cbs_invoked += count;
        trace_rcu_batch_end(rcu_state.name, count, !!rcl.head, need_resched(),
                            is_idle_task(current), rcu_is_callbacks_kthread());
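
Both hunks above take the lock with rcu_nocb_lock_irqsave(), and
rcu_do_batch() later releases it with the matching
rcu_nocb_unlock_irqrestore(rdp, flags). A sketch of that counterpart,
under the same assumption about the tree_nocb.h variant of the era
(details may differ from the exact kernel source):

	/*
	 * Undo rcu_nocb_lock_irqsave(): drop ->nocb_lock if offloading
	 * made us take it, then restore the saved IRQ state either way.
	 */
	static void rcu_nocb_unlock_irqrestore(struct rcu_data *rdp,
					       unsigned long flags)
	{
		if (rcu_segcblist_is_offloaded(&rdp->cblist))
			raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
		else
			local_irq_restore(flags);
	}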