rdma: Enable ib_alloc_cq to spread work over a device's comp_vectors
authorChuck Lever <chuck.lever@oracle.com>
Mon, 29 Jul 2019 17:22:09 +0000 (13:22 -0400)
committerDoug Ledford <dledford@redhat.com>
Mon, 5 Aug 2019 15:50:32 +0000 (11:50 -0400)
commit20cf4e026730104892fa1268de0371a631cee294
tree08a7e60c303ff468d50a33d52e2bc98eab9b1b30
parent31d0e6c149b8c9a9bddc6d68f8600918bb771cb9
rdma: Enable ib_alloc_cq to spread work over a device's comp_vectors

Send and Receive completion is handled on a single CPU selected at
the time each Completion Queue is allocated. Typically this is when
an initiator instantiates an RDMA transport, or when a target
accepts an RDMA connection.

Some ULPs cannot open a connection per CPU to spread completion
workload across available CPUs and MSI vectors. For such ULPs,
provide an API that allows the RDMA core to select a completion
vector based on the device's complement of available comp_vecs.

ULPs that invoke ib_alloc_cq() with only comp_vector 0 are converted
to use the new API so that their completion workloads interfere less
with each other.

Suggested-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Cc: <linux-cifs@vger.kernel.org>
Cc: <v9fs-developer@lists.sourceforge.net>
Link: https://lore.kernel.org/r/20190729171923.13428.52555.stgit@manet.1015granger.net
Signed-off-by: Doug Ledford <dledford@redhat.com>
drivers/infiniband/core/cq.c
drivers/infiniband/ulp/srpt/ib_srpt.c
fs/cifs/smbdirect.c
include/rdma/ib_verbs.h
net/9p/trans_rdma.c
net/sunrpc/xprtrdma/svc_rdma_transport.c
net/sunrpc/xprtrdma/verbs.c