From 851eb780decb7180bcf09fad0035cba9aae669df Mon Sep 17 00:00:00 2001
From: Jens Axboe
Date: Thu, 22 Sep 2022 11:41:51 -0600
Subject: [PATCH] nvme: enable batched completions of passthrough IO

Now that the normal passthrough end_io path doesn't need the request
anymore, we can kill the explicit blk_mq_free_request() and just pass
back RQ_END_IO_FREE instead. This enables the batched completion path
to free batches of requests in one go.

This brings passthrough IO performance at least on par with bdev based
O_DIRECT with io_uring. With this and batched allocations, peak
performance goes from 110M IOPS to 122M IOPS. For IRQ based completions,
passthrough is now also about 10% faster than before, going from ~61M
to ~67M IOPS.

Reviewed-by: Anuj Gupta
Reviewed-by: Sagi Grimberg
Reviewed-by: Keith Busch
Co-developed-by: Stefan Roesch
Signed-off-by: Jens Axboe
---
 drivers/nvme/host/ioctl.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index f9d1f7e..914b142b 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -430,8 +430,7 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io(struct request *req,
 	else
 		io_uring_cmd_complete_in_task(ioucmd, nvme_uring_task_cb);
 
-	blk_mq_free_request(req);
-	return RQ_END_IO_NONE;
+	return RQ_END_IO_FREE;
 }
 
 static enum rq_end_io_ret nvme_uring_cmd_end_io_meta(struct request *req,
--
2.7.4
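
For context, here is a condensed sketch of the consumer side of this
contract: how a blk-mq batched completion loop interprets the
rq_end_io_ret value that nvme_uring_cmd_end_io() now returns. The
identifiers used (struct io_comp_batch, rq_list_for_each_safe,
RQ_END_IO_NONE, RQ_END_IO_FREE) are real blk-mq ones as of this series,
but the loop body is simplified for illustration and is not the verbatim
blk_mq_end_request_batch().

#include <linux/blk-mq.h>

/*
 * Illustrative only: a condensed stand-in for the batched completion
 * loop in block/blk-mq.c, showing the ownership rule this patch
 * relies on.
 */
static void demo_end_request_batch(struct io_comp_batch *iob)
{
	struct request *req, *next;

	rq_list_for_each_safe(&iob->req_list, req, next) {
		/*
		 * RQ_END_IO_NONE means the end_io handler kept ownership
		 * of the request and must free it itself - the old
		 * behavior of nvme_uring_cmd_end_io() via an explicit
		 * blk_mq_free_request().
		 */
		if (req->end_io && req->end_io(req, 0) == RQ_END_IO_NONE)
			continue;

		/*
		 * RQ_END_IO_FREE hands ownership back to the block
		 * layer, which can now release requests in batches. The
		 * real code amortizes the freeing across the whole
		 * batch; a direct blk_mq_free_request() stands in here
		 * for brevity.
		 */
		blk_mq_free_request(req);
	}
}

With passthrough requests taking the RQ_END_IO_FREE branch, they become
eligible for the same batched freeing that bdev based O_DIRECT
completions already use, which is where the IOPS gains quoted above
come from.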