nvmet-rdma: implement get_max_queue_size controller op
author		Max Gurtovoy <mgurtovoy@nvidia.com>
		Wed, 22 Sep 2021 21:55:37 +0000 (00:55 +0300)
committer	Christoph Hellwig <hch@lst.de>
		Wed, 20 Oct 2021 17:16:01 +0000 (19:16 +0200)
commit		c7d792f9b8b0502c807ecda57aeb5eac70cc7ab9
tree		1df12a019dbab8aaee33392b22600a30ad6e29c1
parent		6d1555cc41c088d738b4968009b32aaeda8542a3

Limit the maximum queue size for RDMA controllers. Today the target
reports a limit of 1024 entries, but this limit is not valid for some
RDMA-based controllers. For now, limit the RDMA transport to 128 entries
(the maximum queue depth configured for the Linux NVMe/RDMA host).

A more general future solution should use the RDMA/core API to calculate
this size according to the device capabilities and the number of WRs
needed per NVMe I/O request.

Reported-by: Mark Ruijter <mruijter@primelogic.nl>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
drivers/nvme/target/rdma.c
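
For reference, a minimal sketch of the rdma.c change described above,
assuming NVME_RDMA_MAX_QUEUE_SIZE is defined as 128 in
include/linux/nvme-rdma.h and that struct nvmet_fabrics_ops already
carries a get_max_queue_size hook (presumably introduced by the parent
commit); the rest of nvmet_rdma_ops is elided:

#include <linux/nvme-rdma.h>	/* NVME_RDMA_MAX_QUEUE_SIZE, assumed 128 */
#include "nvmet.h"		/* struct nvmet_ctrl, struct nvmet_fabrics_ops */

/* Report the RDMA transport's queue-size cap instead of the generic 1024. */
static u16 nvmet_rdma_get_max_queue_size(const struct nvmet_ctrl *ctrl)
{
	return NVME_RDMA_MAX_QUEUE_SIZE;
}

static const struct nvmet_fabrics_ops nvmet_rdma_ops = {
	.owner			= THIS_MODULE,
	.type			= NVMF_TRTYPE_RDMA,
	/* ... existing ops unchanged ... */
	.get_max_queue_size	= nvmet_rdma_get_max_queue_size,
};

With this op in place, the generic target code can report the
transport-specific limit in CAP.MQES rather than the default 1024, so
hosts never negotiate a queue deeper than the RDMA transport supports.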