RDMA/addr: Fix race with netevent_callback()/rdma_addr_cancel()
author	Jason Gunthorpe <jgg@nvidia.com>
Wed, 30 Sep 2020 07:20:07 +0000 (10:20 +0300)
committer	Jason Gunthorpe <jgg@nvidia.com>
Wed, 30 Sep 2020 18:29:05 +0000 (15:29 -0300)
This three-thread race can result in the work being run after the callback
has already been set to NULL:

       CPU1                 CPU2                   CPU3
 netevent_callback()
                      process_one_req()       rdma_addr_cancel()
                       [..]
   spin_lock_bh()
     set_timeout()
   spin_unlock_bh()

                                              spin_lock_bh()
                                              list_del_init(&req->list);
                                              spin_unlock_bh()

                      req->callback = NULL
                      spin_lock_bh()
                        if (!list_empty(&req->list))
                              // Skipped!
                          // cancel_delayed_work(&req->work);
                      spin_unlock_bh()

                      process_one_req() // again
                       req->callback() // BOOM
                                              cancel_delayed_work_sync()
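
For context, the requeue path referenced above looks roughly like the sketch
below (simplified from drivers/infiniband/core/addr.c of this era; lock,
req_list and addr_wq are the driver's file-scope globals, and details may
differ from the exact source). The key point is that set_timeout() re-arms
req->work through mod_delayed_work(), which can happen while
process_one_req() is already executing:

  /* Sketch only - not the verbatim driver code */
  static void set_timeout(struct addr_req *req, unsigned long time)
  {
          unsigned long delay;

          delay = time - jiffies;
          if ((long)delay < 0)
                  delay = 0;

          /* Re-arms req->work even if process_one_req() already started */
          mod_delayed_work(addr_wq, &req->work, delay);
  }

  static int netevent_callback(struct notifier_block *self,
                               unsigned long event, void *ctx)
  {
          struct addr_req *req;

          if (event == NETEVENT_NEIGH_UPDATE) {
                  struct neighbour *neigh = ctx;

                  if (neigh->nud_state & NUD_VALID) {
                          spin_lock_bh(&lock);
                          list_for_each_entry(req, &req_list, list)
                                  set_timeout(req, jiffies);
                          spin_unlock_bh(&lock);
                  }
          }
          return 0;
  }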

The solution is to always cancel the work once the request is completed, so
that a set_timeout() issued in between cannot cause it to run again.
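
For context, a rough sketch of rdma_addr_cancel() (again simplified, details
may differ from the exact source) shows why the skipped cancel is fatal: the
re-armed work can run and call the now-NULL req->callback before CPU3 ever
reaches cancel_delayed_work_sync():

  /* Sketch only - not the verbatim driver code */
  void rdma_addr_cancel(struct rdma_dev_addr *addr)
  {
          struct addr_req *req, *temp_req;
          struct addr_req *found = NULL;

          spin_lock_bh(&lock);
          list_for_each_entry_safe(req, temp_req, &req_list, list) {
                  if (req->addr == addr) {
                          /* Removing from the list means we own the req */
                          list_del_init(&req->list);
                          found = req;
                          break;
                  }
          }
          spin_unlock_bh(&lock);

          if (!found)
                  return;

          /*
           * This fences work queued before the list removal, but in the
           * race above the re-armed work can already be running by now.
           */
          cancel_delayed_work_sync(&found->work);
          kfree(found);
  }

With the hunk below, process_one_req() cancels the work unconditionally while
holding the same lock that every set_timeout() caller takes, and once the req
has been removed from req_list nothing can re-arm it, so no queued instance
survives request completion.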

Cc: stable@vger.kernel.org
Fixes: 44e75052bc2a ("RDMA/rdma_cm: Make rdma_addr_cancel into a fence")
Link: https://lore.kernel.org/r/20200930072007.1009692-1-leon@kernel.org
Reported-by: Dan Aloni <dan@kernelim.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
drivers/infiniband/core/addr.c

index 3a98439bba832b8139c58140134a50cbb69bd90d..0abce004a9591e7cd3dd56d33705dae3cc0ee459 100644
@@ -647,13 +647,12 @@ static void process_one_req(struct work_struct *_work)
        req->callback = NULL;
 
        spin_lock_bh(&lock);
+       /*
+        * Although the work will normally have been canceled by the workqueue,
+        * it can still be requeued as long as it is on the req_list.
+        */
+       cancel_delayed_work(&req->work);
        if (!list_empty(&req->list)) {
-               /*
-                * Although the work will normally have been canceled by the
-                * workqueue, it can still be requeued as long as it is on the
-                * req_list.
-                */
-               cancel_delayed_work(&req->work);
                list_del_init(&req->list);
                kfree(req);
        }