rcu: Make synchronize_sched_expedited() better at work sharing
author	Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Thu, 22 Sep 2011 20:18:44 +0000 (13:18 -0700)
committer	Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Sun, 11 Dec 2011 18:31:22 +0000 (10:31 -0800)
When synchronize_sched_expedited() takes its second and subsequent
snapshots of sync_sched_expedited_started, it subtracts 1.  This
means that even if the concurrent caller of synchronize_sched_expedited()
whose increment produced that value sees our successful completion, it
will not be able to take advantage of it.  This restriction is
pointless, given that our full expedited grace period would have
happened after the other caller started, and thus should be able to
serve as a proxy for that caller successfully executing
try_stop_cpus().

This commit therefore removes the subtraction of 1.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 7986053..708dc57 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -1910,7 +1910,7 @@ void synchronize_sched_expedited(void)
                 * grace period works for us.
                 */
                get_online_cpus();
-               snap = atomic_read(&sync_sched_expedited_started) - 1;
+               snap = atomic_read(&sync_sched_expedited_started);
                smp_mb(); /* ensure read is before try_stop_cpus(). */
        }
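
For readers unfamiliar with this code path, below is a minimal userspace
sketch of the counter-based work-sharing pattern this change tunes.  It is
an illustration only, not the kernel implementation: the function name
do_expedited_grace_period() and the use of C11 atomics are assumptions
made for the sketch, and the real code in kernel/rcutree_plugin.h also
handles CPU hotplug, memory ordering, and try_stop_cpus() retries.

	/*
	 * Simplified sketch of the snapshot-based work sharing used by
	 * synchronize_sched_expedited().  Illustration only.
	 */
	#include <stdatomic.h>

	static atomic_uint started;	/* callers that have begun an expedited GP */
	static atomic_uint done;	/* latest "started" value known to be covered */

	/* Stand-in for actually forcing an expedited grace period. */
	static void do_expedited_grace_period(void)
	{
	}

	static void synchronize_expedited_sketch(void)
	{
		unsigned int firstsnap, snap, s;

		/* Record that we have started; remember where we started. */
		firstsnap = atomic_fetch_add(&started, 1) + 1;

		/* Did a concurrent caller's grace period already cover us? */
		s = atomic_load(&done);
		if ((int)(s - firstsnap) >= 0)
			return;

		do_expedited_grace_period();

		/*
		 * Re-snapshot "started": everyone who incremented it before
		 * this point is covered by the grace period we just completed.
		 * The commit above drops a "- 1" from this snapshot so that
		 * the most recent starter can also take advantage of our work.
		 */
		snap = atomic_load(&started);

		/* Advance "done" to our snapshot, but never move it backward. */
		s = atomic_load(&done);
		while ((int)(snap - s) > 0 &&
		       !atomic_compare_exchange_weak(&done, &s, snap))
			;
	}

In this sketch, removing the "- 1" simply means the final snapshot includes
the most recent increment of "started", so that caller's later check of
"done" succeeds and it can skip its own expedited grace period.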