nvme-rdma: limit the maximal queue size for RDMA controllers
authorMax Gurtovoy <mgurtovoy@nvidia.com>
Wed, 22 Sep 2021 21:55:35 +0000 (00:55 +0300)
committerChristoph Hellwig <hch@lst.de>
Wed, 20 Oct 2021 17:16:01 +0000 (19:16 +0200)
The current limit of 1024 isn't valid for some RDMA based ctrls. If the
target exposes a capability for a larger number of entries (e.g. 1024),
the initiator may fail to create a QP of that size. Thus limit the queue
size to a value that works for all RDMA adapters.

A future general solution should use the RDMA/core API to calculate this
size according to the device capabilities and the number of WRs needed
per NVMe I/O request.
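As a rough illustration of that future direction, the limit could be derived from the device's `max_qp_wr` attribute (a real field of `struct ib_device_attr`) divided by the number of work requests each NVMe I/O consumes, then capped at the NVMe spec maximum. The helper below is a hypothetical user-space sketch, not kernel code; its name and the `wrs_per_io` parameter are illustrative assumptions:

```c
#include <assert.h>

/* Hypothetical sketch: compute a queue-size limit from the RDMA device's
 * max_qp_wr capability and the number of WRs needed per NVMe I/O request,
 * instead of the fixed NVME_RDMA_MAX_QUEUE_SIZE constant. */
static unsigned int nvme_rdma_max_queue_size(unsigned int max_qp_wr,
					     unsigned int wrs_per_io)
{
	unsigned int limit = max_qp_wr / wrs_per_io;

	/* never exceed the NVMe maximum queue depth of 1024 entries */
	return limit < 1024 ? limit : 1024;
}
```

With, say, `max_qp_wr = 4096` and 3 WRs per I/O this yields 1024 (spec-capped), while a device advertising only 256 WRs with 2 WRs per I/O would be limited to 128.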

Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
drivers/nvme/host/rdma.c
include/linux/nvme-rdma.h

index 1624da3..027ee57 100644 (file)
@@ -1112,6 +1112,13 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
                        ctrl->ctrl.opts->queue_size, ctrl->ctrl.sqsize + 1);
        }
 
+       if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
+               dev_warn(ctrl->ctrl.device,
+                       "ctrl sqsize %u > max queue size %u, clamping down\n",
+                       ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
+               ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
+       }
+
        if (ctrl->ctrl.sqsize + 1 > ctrl->ctrl.maxcmd) {
                dev_warn(ctrl->ctrl.device,
                        "sqsize %u > ctrl maxcmd %u, clamping down\n",
index 3ec8e50..4dd7e6f 100644 (file)
@@ -6,6 +6,8 @@
 #ifndef _LINUX_NVME_RDMA_H
 #define _LINUX_NVME_RDMA_H
 
+#define NVME_RDMA_MAX_QUEUE_SIZE       128
+
 enum nvme_rdma_cm_fmt {
        NVME_RDMA_CM_FMT_1_0 = 0x0,
 };