nvmet-tcp: fix a race condition between release_queue and io_work
Author:     Maurizio Lombardi <mlombard@redhat.com>
AuthorDate: Tue, 16 Nov 2021 15:49:18 +0000 (16:49 +0100)
Commit:     Christoph Hellwig <hch@lst.de>
CommitDate: Tue, 23 Nov 2021 16:19:25 +0000 (17:19 +0100)
If the initiator executes a reset controller operation while
performing I/O, the target kernel will crash because of a race
condition between release_queue and io_work:
nvmet_tcp_uninit_data_in_cmds() may be executed while io_work is
still running, and calling flush_work() is not sufficient to prevent
this because io_work can requeue itself.

Fix this bug by using cancel_work_sync() to prevent io_work from
requeuing itself, and by setting rcv_state to NVMET_TCP_RECV_ERR to
make sure we don't receive any more data from the socket.
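
The receive path dispatches on queue->rcv_state, which is why marking
the queue with NVMET_TCP_RECV_ERR stops any further consumption of
socket data.  A simplified sketch of that dispatch (paraphrased from
drivers/nvme/target/tcp.c, not a verbatim excerpt of the function):

  static int nvmet_tcp_try_recv_one(struct nvmet_tcp_queue *queue)
  {
          /* Release in progress (or a fatal error): read nothing more. */
          if (unlikely(queue->rcv_state == NVMET_TCP_RECV_ERR))
                  return 0;

          switch (queue->rcv_state) {
          case NVMET_TCP_RECV_PDU:
                  return nvmet_tcp_try_recv_pdu(queue);
          case NVMET_TCP_RECV_DATA:
                  return nvmet_tcp_try_recv_data(queue);
          case NVMET_TCP_RECV_DDGST:
                  return nvmet_tcp_try_recv_ddgst(queue);
          default:
                  return 0;
          }
  }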

Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: John Meneghini <jmeneghi@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
drivers/nvme/target/tcp.c

index 84c387e..18f3625 100644
@@ -1437,7 +1437,9 @@ static void nvmet_tcp_release_queue_work(struct work_struct *w)
        mutex_unlock(&nvmet_tcp_queue_mutex);
 
        nvmet_tcp_restore_socket_callbacks(queue);
-       flush_work(&queue->io_work);
+       cancel_work_sync(&queue->io_work);
+       /* stop accepting incoming data */
+       queue->rcv_state = NVMET_TCP_RECV_ERR;
 
        nvmet_tcp_uninit_data_in_cmds(queue);
        nvmet_sq_destroy(&queue->nvme_sq);