rcu: Make need_resched() respond to urgent RCU-QS needs
author Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Mon, 9 Jul 2018 20:47:30 +0000 (13:47 -0700)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sat, 1 Dec 2018 08:43:00 +0000 (09:43 +0100)
commit 92aa39e9dc77481b90cbef25e547d66cab901496 upstream.

The per-CPU rcu_dynticks.rcu_urgent_qs variable communicates an urgent
need for an RCU quiescent state from the force-quiescent-state processing
within the grace-period kthread to context switches and to cond_resched().
Unfortunately, such urgent needs are not communicated to need_resched(),
which is sometimes used to decide when to invoke cond_resched(), one
example being within the KVM vcpu_run() function.  As of v4.15, this
can result in synchronize_sched() being delayed by up to ten seconds,
which can be problematic, to say nothing of annoying.
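
To make the failure mode concrete, the loop below is a simplified,
hypothetical sketch of the need_resched()-gated pattern in question;
should_stop() and do_unit_of_work() are placeholders, not actual KVM
functions, and need_resched()/cond_resched() come from <linux/sched.h>.
Because cond_resched() is reached only when need_resched() already
returns true, an urgent quiescent-state request that never sets the
need-resched flag cannot shorten the loop's delay:

	/* Hypothetical sketch only; not the actual vcpu_run() loop. */
	static void example_run_loop(void)
	{
		while (!should_stop()) {	/* placeholder exit condition */
			do_unit_of_work();	/* placeholder, e.g. one guest entry/exit */
			if (need_resched())	/* an urgent RCU QS must set this ... */
				cond_resched();	/* ... or this is never reached */
		}
	}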

This commit therefore checks rcu_dynticks.rcu_urgent_qs from within
rcu_check_callbacks(), which is invoked from the scheduling-clock
interrupt handler.  If the current task is not an idle task and is
not executing in usermode, a context switch is forced, and either way,
the rcu_dynticks.rcu_urgent_qs variable is set to false.  If the current
task is an idle task, then RCU's dyntick-idle code will detect the
quiescent state, so no further action is required.  Similarly, if the
task is executing in usermode, other code in rcu_check_callbacks() and
its called functions will report the corresponding quiescent state.
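
For reference, the load-acquire added by this patch pairs with a
store-release performed on the force-quiescent-state side.  The snippet
below is an illustrative sketch of that pairing, not the exact call site
in the grace-period kthread; the variable cpu naming the target CPU is
assumed here:

	/* Writer side (FQS processing), illustrative sketch: publish the
	 * urgent-QS request to the target CPU with release semantics. */
	bool *ruqp = per_cpu_ptr(&rcu_dynticks.rcu_urgent_qs, cpu);
	smp_store_release(ruqp, true);

	/* Reader side, as added in the hunk below: the acquire load observes
	 * the request together with all writes ordered before the paired
	 * store-release, after which the flag is cleared. */
	if (smp_load_acquire(this_cpu_ptr(&rcu_dynticks.rcu_urgent_qs)))
		__this_cpu_write(rcu_dynticks.rcu_urgent_qs, false);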

Reported-by: Marius Hillenbrand <mhillenb@amazon.de>
Reported-by: David Woodhouse <dwmw2@infradead.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
[ paulmck: Backported to make patch apply cleanly on older versions. ]
Tested-by: Marius Hillenbrand <mhillenb@amazon.de>
Cc: <stable@vger.kernel.org> # 4.12.x - 4.19.x
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
kernel/rcu/tree.c

index 3e3650e..710ce1d 100644
@@ -2772,6 +2772,15 @@ void rcu_check_callbacks(int user)
                rcu_bh_qs();
        }
        rcu_preempt_check_callbacks();
+       /* The load-acquire pairs with the store-release setting to true. */
+       if (smp_load_acquire(this_cpu_ptr(&rcu_dynticks.rcu_urgent_qs))) {
+               /* Idle and userspace execution already are quiescent states. */
+               if (!rcu_is_cpu_rrupt_from_idle() && !user) {
+                       set_tsk_need_resched(current);
+                       set_preempt_need_resched();
+               }
+               __this_cpu_write(rcu_dynticks.rcu_urgent_qs, false);
+       }
        if (rcu_pending())
                invoke_rcu_core();
        if (user)