nvmet-auth: complete a request only after freeing the dhchap pointers
Author:     Maurizio Lombardi <mlombard@redhat.com>
AuthorDate: Tue, 17 Oct 2023 08:28:45 +0000 (10:28 +0200)
Commit:     Keith Busch <kbusch@kernel.org>
CommitDate: Wed, 18 Oct 2023 21:08:39 +0000 (14:08 -0700)
It may happen that the work to destroy a queue
(for example nvmet_tcp_release_queue_work()) is started while
an auth-send or auth-receive command is still completing.

nvmet_sq_destroy() will block, waiting for all references to the sq
to be dropped; the last reference is dropped only when
nvmet_req_complete() is called.

When this happens, both nvmet_sq_destroy() and
nvmet_execute_auth_send()/_receive() free the dhchap pointers by
calling nvmet_auth_sq_free().
Since no lock is held, the two threads may race against each other,
causing double frees and memory corruption, as reported by KASAN.

Reproduced by stress-running blktests nvme/041, nvme/042 and nvme/043.

 nvme nvme2: qid 0: authenticated with hash hmac(sha512) dhgroup ffdhe4096
 ==================================================================
 BUG: KASAN: double-free in kfree+0xec/0x4b0

 Call Trace:
  <TASK>
  kfree+0xec/0x4b0
  nvmet_auth_sq_free+0xe1/0x160 [nvmet]
  nvmet_execute_auth_send+0x482/0x16d0 [nvmet]
  process_one_work+0x8e5/0x1510

 Allocated by task 191846:
  __kasan_kmalloc+0x81/0xa0
  nvmet_auth_ctrl_sesskey+0xf6/0x380 [nvmet]
  nvmet_auth_reply+0x119/0x990 [nvmet]

 Freed by task 143270:
  kfree+0xec/0x4b0
  nvmet_auth_sq_free+0xe1/0x160 [nvmet]
  process_one_work+0x8e5/0x1510

Fix this bug by calling nvmet_req_complete() only after freeing the
pointers: the sq reference is then held across the free, which
prevents nvmet_sq_destroy() from freeing them concurrently.
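
For illustration, a minimal user-space C sketch of the ordering this
fix relies on. This is not the nvmet code: fake_sq, fake_auth_sq_free(),
completer() and destroyer() are made-up names standing in for the sq,
nvmet_auth_sq_free(), the auth command handler and nvmet_sq_destroy().
Freeing the key and clearing the pointer before dropping the reference
guarantees that the teardown side, which waits for the last reference,
only ever sees an already-cleared pointer.

  /* Illustrative sketch only -- build with: cc -pthread sketch.c */
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  struct fake_sq {
          atomic_int refs;        /* stands in for the sq reference count */
          char *dhchap_key;       /* stands in for the dhchap pointers    */
  };

  /* Frees the key and NULLs it, so a later call finds nothing to free. */
  static void fake_auth_sq_free(struct fake_sq *sq)
  {
          free(sq->dhchap_key);
          sq->dhchap_key = NULL;
  }

  /* Command completion path: free *before* dropping the reference. */
  static void *completer(void *arg)
  {
          struct fake_sq *sq = arg;

          fake_auth_sq_free(sq);
          /* analogue of nvmet_req_complete() dropping the sq reference */
          atomic_fetch_sub_explicit(&sq->refs, 1, memory_order_release);
          return NULL;
  }

  /* Teardown path: may only clean up once the last reference is gone. */
  static void *destroyer(void *arg)
  {
          struct fake_sq *sq = arg;

          /* analogue of nvmet_sq_destroy() waiting for the references */
          while (atomic_load_explicit(&sq->refs, memory_order_acquire) != 0)
                  ;
          fake_auth_sq_free(sq);  /* sees NULL: free(NULL) is a no-op */
          return NULL;
  }

  int main(void)
  {
          struct fake_sq sq = { .refs = 1, .dhchap_key = strdup("secret") };
          pthread_t a, b;

          pthread_create(&a, NULL, completer, &sq);
          pthread_create(&b, NULL, destroyer, &sq);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          puts("single free, no race");
          return 0;
  }

With the old ordering (drop the reference first, then free) the two
free calls could run concurrently, which is exactly the double free
KASAN reports above.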

V2: remove redundant code

Fixes: db1312dd9548 ("nvmet: implement basic In-Band Authentication")
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
diff --git a/drivers/nvme/target/fabrics-cmd-auth.c b/drivers/nvme/target/fabrics-cmd-auth.c
index 586458f..1d98544 100644
--- a/drivers/nvme/target/fabrics-cmd-auth.c
+++ b/drivers/nvme/target/fabrics-cmd-auth.c
@@ -333,19 +333,21 @@ done:
                         __func__, ctrl->cntlid, req->sq->qid,
                         status, req->error_loc);
        req->cqe->result.u64 = 0;
-       nvmet_req_complete(req, status);
        if (req->sq->dhchap_step != NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2 &&
            req->sq->dhchap_step != NVME_AUTH_DHCHAP_MESSAGE_FAILURE2) {
                unsigned long auth_expire_secs = ctrl->kato ? ctrl->kato : 120;
 
                mod_delayed_work(system_wq, &req->sq->auth_expired_work,
                                 auth_expire_secs * HZ);
-               return;
+               goto complete;
        }
        /* Final states, clear up variables */
        nvmet_auth_sq_free(req->sq);
        if (req->sq->dhchap_step == NVME_AUTH_DHCHAP_MESSAGE_FAILURE2)
                nvmet_ctrl_fatal_error(ctrl);
+
+complete:
+       nvmet_req_complete(req, status);
 }
 
 static int nvmet_auth_challenge(struct nvmet_req *req, void *d, int al)
@@ -514,11 +516,12 @@ void nvmet_execute_auth_receive(struct nvmet_req *req)
        kfree(d);
 done:
        req->cqe->result.u64 = 0;
-       nvmet_req_complete(req, status);
+
        if (req->sq->dhchap_step == NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2)
                nvmet_auth_sq_free(req->sq);
        else if (req->sq->dhchap_step == NVME_AUTH_DHCHAP_MESSAGE_FAILURE1) {
                nvmet_auth_sq_free(req->sq);
                nvmet_ctrl_fatal_error(ctrl);
        }
+       nvmet_req_complete(req, status);
 }