rcu-tasks: *_ONCE() for rcu_tasks_cbs_head
author		Paul E. McKenney <paulmck@kernel.org>
		Mon, 6 Jan 2020 19:59:58 +0000 (11:59 -0800)
committer	Paul E. McKenney <paulmck@kernel.org>
		Fri, 21 Feb 2020 00:00:45 +0000 (16:00 -0800)
The RCU tasks list of callbacks, rcu_tasks_cbs_head, is sampled locklessly
by rcu_tasks_kthread() when waiting for work to do.  This commit therefore
applies READ_ONCE() to that lockless sampling and WRITE_ONCE() to the
single potential store that occurs outside of rcu_tasks_kthread().

This data race was reported by KCSAN.  This commit is not appropriate for
backporting because the failure mode is unlikely.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
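
To make the pattern concrete, here is a minimal userspace sketch of the
same shape: a producer appends callbacks under a lock but marks the head
store with WRITE_ONCE(), while a consumer samples the list head locklessly
with READ_ONCE().  The macro definitions below are simplified volatile-cast
stand-ins for the kernel's versions, and all names and scaffolding are
illustrative rather than taken from kernel/rcu/update.c.

/*
 * Minimal userspace analogue of the pattern in this commit (illustrative
 * only).  The volatile casts force exactly one untorn access per
 * evaluation and keep the compiler from fusing or re-reading them.
 */
#include <pthread.h>
#include <stdio.h>

#define READ_ONCE(x)		(*(const volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v)	(*(volatile typeof(x) *)&(x) = (v))

struct cb {
	struct cb *next;
	int val;
};

static struct cb *cbs_head;		/* sampled locklessly by consumer */
static struct cb **cbs_tail = &cbs_head;
static pthread_mutex_t cbs_lock = PTHREAD_MUTEX_INITIALIZER;

static void enqueue_cb(struct cb *cbp)
{
	cbp->next = NULL;
	pthread_mutex_lock(&cbs_lock);
	/* Lock-protected store, but the consumer reads without the
	 * lock, so the store must still be marked. */
	WRITE_ONCE(*cbs_tail, cbp);
	cbs_tail = &cbp->next;
	pthread_mutex_unlock(&cbs_lock);
}

static void *consumer(void *arg)
{
	struct cb *list;

	/* Lockless polling, like rcu_tasks_kthread() waiting for work. */
	while (!READ_ONCE(cbs_head))
		;
	pthread_mutex_lock(&cbs_lock);
	list = cbs_head;
	cbs_head = NULL;
	cbs_tail = &cbs_head;
	pthread_mutex_unlock(&cbs_lock);
	while (list) {
		printf("callback %d\n", list->val);
		list = list->next;
	}
	return NULL;
}

int main(void)
{
	struct cb c1 = { .val = 1 }, c2 = { .val = 2 };
	pthread_t tid;

	enqueue_cb(&c1);
	enqueue_cb(&c2);
	pthread_create(&tid, NULL, consumer, NULL);
	pthread_join(tid, NULL);
	return 0;
}

Build with gcc -pthread; the macros rely on the GNU typeof extension,
just as the kernel's own versions do.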
kernel/rcu/update.c

index 6c4b862..a27df76 100644
@@ -528,7 +528,7 @@ void call_rcu_tasks(struct rcu_head *rhp, rcu_callback_t func)
        rhp->func = func;
        raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
        needwake = !rcu_tasks_cbs_head;
-       *rcu_tasks_cbs_tail = rhp;
+       WRITE_ONCE(*rcu_tasks_cbs_tail, rhp);
        rcu_tasks_cbs_tail = &rhp->next;
        raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
        /* We can't create the thread unless interrupts are enabled. */
@@ -658,7 +658,7 @@ static int __noreturn rcu_tasks_kthread(void *arg)
                /* If there were none, wait a bit and start over. */
                if (!list) {
                        wait_event_interruptible(rcu_tasks_cbs_wq,
-                                                rcu_tasks_cbs_head);
+                                                READ_ONCE(rcu_tasks_cbs_head));
                        if (!rcu_tasks_cbs_head) {
                                WARN_ON(signal_pending(current));
                                schedule_timeout_interruptible(HZ/10);
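
The READ_ONCE() lands inside the condition argument because
wait_event_interruptible() is a macro that re-evaluates its condition
expression on every wakeup, with no lock held.  A grossly simplified
sketch of that loop follows; the real ___wait_event() machinery in
include/linux/wait.h handles signals and wakeup races more carefully,
so treat this as an illustration rather than the actual expansion:

	/* Illustrative only -- assumes DEFINE_WAIT(wait); not the
	 * real macro expansion. */
	for (;;) {
		prepare_to_wait(&rcu_tasks_cbs_wq, &wait, TASK_INTERRUPTIBLE);
		if (READ_ONCE(rcu_tasks_cbs_head))	/* the lockless sample */
			break;
		schedule();
	}
	finish_wait(&rcu_tasks_cbs_wq, &wait);

With this commit, both sides of the race are marked: the WRITE_ONCE() in
call_rcu_tasks() and the READ_ONCE() in the wait condition, which is what
resolves the KCSAN report.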