RDMA/rtrs: Change MAX_SESS_QUEUE_DEPTH
author    Gioh Kim <gi-oh.kim@cloud.ionos.com>
          Fri, 28 May 2021 11:30:03 +0000 (13:30 +0200)
committer Jason Gunthorpe <jgg@nvidia.com>
          Fri, 28 May 2021 23:52:58 +0000 (20:52 -0300)
Max IB immediate data size is 2^28 (MAX_IMM_PAYL_BITS)
and the minimum chunk size is 4096 (2^12).
Therefore the maximum sess_queue_depth is 65536 (2^16).

Link: https://lore.kernel.org/r/20210528113018.52290-6-jinpu.wang@ionos.com
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
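
A minimal userspace sketch of the arithmetic above (not part of the patch; the constants are copied from the commit message and the program is only illustrative):

    #include <assert.h>
    #include <stdio.h>

    /* Constants quoted in the commit message: the IB immediate payload is
     * 2^28 (MAX_IMM_PAYL_BITS) and the minimum chunk size is 4096 (2^12).
     */
    #define MAX_IMM_PAYL_BITS  28
    #define MIN_CHUNK_SIZE     4096UL

    int main(void)
    {
            unsigned long max_imm_payload = 1UL << MAX_IMM_PAYL_BITS; /* 2^28 */
            unsigned long max_queue_depth = max_imm_payload / MIN_CHUNK_SIZE;

            /* 2^28 / 2^12 = 2^16 = 65536, the new MAX_SESS_QUEUE_DEPTH */
            assert(max_queue_depth == 65536);
            printf("maximum sess_queue_depth: %lu\n", max_queue_depth);
            return 0;
    }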
drivers/infiniband/ulp/rtrs/rtrs-pri.h

index 86e65cf..d957bbf 100644
@@ -47,12 +47,15 @@ enum {
        MAX_PATHS_NUM = 128,
 
        /*
-        * With the size of struct rtrs_permit allocated on the client, 4K
-        * is the maximum number of rtrs_permits we can allocate. This number is
-        * also used on the client to allocate the IU for the user connection
-        * to receive the RDMA addresses from the server.
+        * Max IB immediate data size is 2^28 (MAX_IMM_PAYL_BITS)
+        * and the minimum chunk size is 4096 (2^12).
+        * So the maximum sess_queue_depth is 65536 (2^16) in theory.
+        * But mempool_create, create_qp and ib_post_send fail with
+        * "cannot allocate memory" error if sess_queue_depth is too big.
+        * Therefore the practical max value of sess_queue_depth is
+        * somewhere between 1 and 65536, depending on the system.
         */
-       MAX_SESS_QUEUE_DEPTH = 4096,
+       MAX_SESS_QUEUE_DEPTH = 65536,
 
        RTRS_HB_INTERVAL_MS = 5000,
        RTRS_HB_MISSED_MAX = 5,
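
The updated comment notes that the theoretical maximum may not be reachable in practice, because mempool_create(), create_qp() and ib_post_send() can fail with "cannot allocate memory" on a given system. The sketch below only illustrates that idea of a system-dependent practical limit; try_alloc_queue() and the halving policy are hypothetical and are not something this patch adds to the rtrs driver:

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_SESS_QUEUE_DEPTH 65536

    /* Hypothetical stand-in for the chain of allocations (mempool, QP,
     * posted sends) that can run out of memory when the queue depth is
     * too large for the running system.
     */
    static bool try_alloc_queue(unsigned int depth)
    {
            return depth <= 8192;   /* pretend this system can back 8192 entries */
    }

    int main(void)
    {
            unsigned int depth = MAX_SESS_QUEUE_DEPTH;

            /* Start from the theoretical maximum and halve until the
             * allocations succeed; the result depends on the system.
             */
            while (depth > 1 && !try_alloc_queue(depth))
                    depth /= 2;

            printf("practical sess_queue_depth: %u\n", depth);
            return 0;
    }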