platform/kernel/linux-starfive.git
4 years agoRDMA: Remove 'max_fmr'
Jason Gunthorpe [Thu, 28 May 2020 19:45:54 +0000 (16:45 -0300)]
RDMA: Remove 'max_fmr'

Now that FMR support is gone, this attribute can be deleted from all
places.

Link: https://lore.kernel.org/r/12-v3-f58e6669d5d3+2cf-fmr_removal_jgg@mellanox.com
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Bernard Metzler <bmt@zurich.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/core: Remove FMR device ops
Max Gurtovoy [Thu, 28 May 2020 19:45:53 +0000 (16:45 -0300)]
RDMA/core: Remove FMR device ops

After removing FMR support from all the RDMA ULPs and providers, there
is no need to keep FMR operation for IB devices.

Link: https://lore.kernel.org/r/11-v3-f58e6669d5d3+2cf-fmr_removal_jgg@mellanox.com
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/rdmavt: Remove FMR memory registration
Max Gurtovoy [Thu, 28 May 2020 19:45:52 +0000 (16:45 -0300)]
RDMA/rdmavt: Remove FMR memory registration

Use FRWR method to register memory by default and remove the ancient and
unsafe FMR method.

Link: https://lore.kernel.org/r/10-v3-f58e6669d5d3+2cf-fmr_removal_jgg@mellanox.com
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Tested-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Acked-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/mthca: Remove FMR support for memory registration
Max Gurtovoy [Thu, 28 May 2020 19:45:51 +0000 (16:45 -0300)]
RDMA/mthca: Remove FMR support for memory registration

Remove the ancient and unsafe FMR method.

Link: https://lore.kernel.org/r/9-v3-f58e6669d5d3+2cf-fmr_removal_jgg@mellanox.com
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/mlx4: Remove FMR support for memory registration
Max Gurtovoy [Thu, 28 May 2020 19:45:50 +0000 (16:45 -0300)]
RDMA/mlx4: Remove FMR support for memory registration

HCAs that are driven by the mlx4 driver support the FRWR method to register
memory. Remove the ancient and unsafe FMR method.

Link: https://lore.kernel.org/r/8-v3-f58e6669d5d3+2cf-fmr_removal_jgg@mellanox.com
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/i40iw: Remove FMR leftovers
Jason Gunthorpe [Thu, 28 May 2020 19:45:49 +0000 (16:45 -0300)]
RDMA/i40iw: Remove FMR leftovers

The ibfmr member is never referenced, remove it.

Link: https://lore.kernel.org/r/7-v3-f58e6669d5d3+2cf-fmr_removal_jgg@mellanox.com
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/bnxt_re: Remove FMR leftovers
Jason Gunthorpe [Thu, 28 May 2020 19:45:48 +0000 (16:45 -0300)]
RDMA/bnxt_re: Remove FMR leftovers

The bnxt_re_fmr struct is never referenced and the max_fmr items
in bnxt_qplib_dev_attr are never read.

Link: https://lore.kernel.org/r/6-v3-f58e6669d5d3+2cf-fmr_removal_jgg@mellanox.com
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/mlx5: Remove FMR leftovers
Gal Pressman [Thu, 28 May 2020 19:45:47 +0000 (16:45 -0300)]
RDMA/mlx5: Remove FMR leftovers

Remove a few leftovers from FMR functionality which are no longer used.

Link: https://lore.kernel.org/r/5-v3-f58e6669d5d3+2cf-fmr_removal_jgg@mellanox.com
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Acked-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/core: Remove FMR pool API
Max Gurtovoy [Thu, 28 May 2020 19:45:46 +0000 (16:45 -0300)]
RDMA/core: Remove FMR pool API

This ancient and unsafe method for memory registration is no longer used
by any RDMA based ULP. Remove the FMR pool API from the core driver.

Link: https://lore.kernel.org/r/4-v3-f58e6669d5d3+2cf-fmr_removal_jgg@mellanox.com
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/rds: Remove FMR support for memory registration
Max Gurtovoy [Thu, 28 May 2020 19:45:45 +0000 (16:45 -0300)]
RDMA/rds: Remove FMR support for memory registration

Use FRWR method for memory registration by default and remove the ancient
and unsafe FMR method.

Link: https://lore.kernel.org/r/3-v3-f58e6669d5d3+2cf-fmr_removal_jgg@mellanox.com
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/srp: Remove support for FMR memory registration
Max Gurtovoy [Thu, 28 May 2020 19:45:44 +0000 (16:45 -0300)]
RDMA/srp: Remove support for FMR memory registration

FMR is not supported on the most recent RDMA devices (which use the fast
memory registration mechanism instead). Also, FMR was recently removed from
the NFS/RDMA ULP.

Link: https://lore.kernel.org/r/2-v3-f58e6669d5d3+2cf-fmr_removal_jgg@mellanox.com
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/iser: Remove support for FMR memory registration
Israel Rukshin [Thu, 28 May 2020 19:45:43 +0000 (16:45 -0300)]
RDMA/iser: Remove support for FMR memory registration

FMR is not supported on the most recent RDMA devices (which use the fast
memory registration mechanism instead). Also, FMR was recently removed from
the NFS/RDMA ULP.

Link: https://lore.kernel.org/r/1-v3-f58e6669d5d3+2cf-fmr_removal_jgg@mellanox.com
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/core: Introduce shared CQ pool API
Yamin Friedman [Wed, 27 May 2020 08:34:53 +0000 (11:34 +0300)]
RDMA/core: Introduce shared CQ pool API

Allow a ULP to ask the core to provide a completion queue based on a
least-used search of the per-device CQ pools. The device CQ pools grow in a
lazy fashion when more CQs are requested.

This feature reduces the number of interrupts when using many QPs.  Using
shared CQs allows for more efficient completion handling. It also reduces
the amount of overhead needed for CQ contexts.
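
A minimal sketch of how a ULP might consume the pool, assuming the
ib_cq_pool_get()/ib_cq_pool_put() names proposed by this series (the exact
signatures may differ):

  /* ask the pool for a CQ with room for nr_cqe entries */
  cq = ib_cq_pool_get(ibdev, nr_cqe, comp_vector_hint, IB_POLL_SOFTIRQ);
  if (IS_ERR(cq))
          return PTR_ERR(cq);

  /* ... attach QPs and run I/O ... */

  /* return the CQ (and its nr_cqe slots) to the pool */
  ib_cq_pool_put(cq, nr_cqe);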

Test setup:
Intel(R) Xeon(R) Platinum 8176M CPU @ 2.10GHz servers.
Running NVMeoF 4KB read IOs over ConnectX-5EX across Spectrum switch.
TX-depth = 32. The patch was applied in the nvme driver on both the target
and initiator. Four controllers are accessed from each core. In the
current test case we have exposed sixteen NVMe namespaces using four
different subsystems (four namespaces per subsystem) from one NVM port.
Each controller allocated X queues (RDMA QPs) and attached to Y CQs.
Before this series we had X == Y, i.e. for four controllers we've created a
total of 4X QPs and 4X CQs. In the shared case, we've created 4X QPs and
only X CQs, which means that we have four controllers that share a
completion queue per core. Up to fourteen cores there is no significant
change in performance and the number of interrupts per second is less than
a million in the current case.
==================================================
|Cores|Current KIOPs  |Shared KIOPs  |improvement|
|-----|---------------|--------------|-----------|
|14   |2332           |2723          |16.7%      |
|-----|---------------|--------------|-----------|
|20   |2086           |2712          |30%        |
|-----|---------------|--------------|-----------|
|28   |1971           |2669          |35.4%      |
==================================================
|Cores|Current avg lat|Shared avg lat|improvement|
|-----|---------------|--------------|-----------|
|14   |767us          |657us         |14.3%      |
|-----|---------------|--------------|-----------|
|20   |1225us         |943us         |23%        |
|-----|---------------|--------------|-----------|
|28   |1816us         |1341us        |26.1%      |
========================================================
|Cores|Current interrupts|Shared interrupts|improvement|
|-----|------------------|-----------------|-----------|
|14   |1.6M/sec          |0.4M/sec         |72%        |
|-----|------------------|-----------------|-----------|
|20   |2.8M/sec          |0.6M/sec         |72.4%      |
|-----|------------------|-----------------|-----------|
|28   |2.9M/sec          |0.8M/sec         |63.4%      |
====================================================================
|Cores|Current 99.99th PCTL lat|Shared 99.99th PCTL lat|improvement|
|-----|------------------------|-----------------------|-----------|
|14   |67ms                    |6ms                    |90.9%      |
|-----|------------------------|-----------------------|-----------|
|20   |5ms                     |6ms                    |-10%       |
|-----|------------------------|-----------------------|-----------|
|28   |8.7ms                   |6ms                    |25.9%      |
====================================================================

Performance improvement with sixteen disks (sixteen CQs per core) is
comparable.

Link: https://lore.kernel.org/r/1590568495-101621-3-git-send-email-yaminf@mellanox.com
Signed-off-by: Yamin Friedman <yaminf@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/core: Add protection for shared CQs used by ULPs
Yamin Friedman [Wed, 27 May 2020 08:34:52 +0000 (11:34 +0300)]
RDMA/core: Add protection for shared CQs used by ULPs

A pre-step for adding shared CQs. Add the infrastructure to prevent shared
CQ users from altering the CQ configurations. For now all CQs are marked
as private (non-shared). The core driver should use the new force
functions to perform resize/destroy/moderation changes that are not
allowed for users of shared CQs.

Link: https://lore.kernel.org/r/1590568495-101621-2-git-send-email-yaminf@mellanox.com
Signed-off-by: Yamin Friedman <yaminf@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/core: Fix several reference count leaks.
Qiushi Wu [Thu, 28 May 2020 03:02:30 +0000 (22:02 -0500)]
RDMA/core: Fix several reference count leaks.

kobject_init_and_add() takes a reference even when it fails.  If this
function returns an error, kobject_put() must be called to properly clean
up the memory associated with the object. A previous
commit b8eb718348b8 ("net-sysfs: Fix reference count leak in
rx|netdev_queue_add_kobject") fixed a similar problem.

Link: https://lore.kernel.org/r/20200528030231.9082-1-wu000273@umn.edu
Signed-off-by: Qiushi Wu <wu000273@umn.edu>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/hfi1: Fix spelling mistake "enought" -> "enough"
Colin Ian King [Thu, 28 May 2020 11:07:09 +0000 (12:07 +0100)]
IB/hfi1: Fix spelling mistake "enought" -> "enough"

There is a spelling mistake in an error message. Fix it.

Link: https://lore.kernel.org/r/20200528110709.400935-1-colin.king@canonical.com
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/core: Use offsetofend() instead of open coding
Jason Gunthorpe [Wed, 27 May 2020 17:18:45 +0000 (14:18 -0300)]
RDMA/core: Use offsetofend() instead of open coding

No reason to open code this.
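
For illustration, with a made-up struct, the open-coded form and the helper
from <linux/stddef.h> are equivalent:

  struct example {
          u32 a;
          u64 b;
  };

  /* open coded */
  size_t end1 = offsetof(struct example, b) + sizeof(((struct example *)0)->b);

  /* with the helper */
  size_t end2 = offsetofend(struct example, b);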

Link: https://lore.kernel.org/r/0-v1-0bc346e08476+585-drop_offsetofend_jgg@mellanox.com
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: remove duplicate assignment to pointer raq
Colin Ian King [Thu, 28 May 2020 15:04:27 +0000 (16:04 +0100)]
RDMA/hns: remove duplicate assignment to pointer raq

The pointer raq is being assigned twice. Fix this by removing one of the
redundant assignments.

Fixes: 14ba87304bf9 ("RDMA/hns: Remove redundant type cast for general pointers")
Link: https://lore.kernel.org/r/20200528150427.420624-1-colin.king@canonical.com
Addresses-Coverity: ("Evaluation order violation")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/mlx5: Support TX port affinity for VF drivers in LAG mode
Mark Zhang [Wed, 27 May 2020 05:50:14 +0000 (08:50 +0300)]
RDMA/mlx5: Support TX port affinity for VF drivers in LAG mode

The mlx5 VF driver doesn't set QP tx port affinity because it doesn't know
whether the lag is active, since "lag_active" works only for PF interfaces.
As a result, only one lag port is used for VF interfaces, which causes a
performance issue.

Add a lag_tx_port_affinity CAP bit; when it is enabled and
"num_lag_ports > 1", the driver always sets QP tx affinity, regardless of
lag state.

Link: https://lore.kernel.org/r/20200527055014.355093-1-leon@kernel.org
Signed-off-by: Mark Zhang <markz@mellanox.com>
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/srpt: Increase max_send_sge
Bart Van Assche [Mon, 25 May 2020 17:22:12 +0000 (10:22 -0700)]
RDMA/srpt: Increase max_send_sge

The ib_srpt driver limits max_send_sge to 16. Since that is a workaround
for an mlx4 bug that has been fixed, increase max_send_sge. See also
commit f95ccffc715b ("IB/mlx4: Use 4K pages for kernel QP's WQE buffer").

Link: https://lore.kernel.org/r/20200525172212.14413-5-bvanassche@acm.org
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/srpt: Reduce max_recv_sge to 1
Bart Van Assche [Mon, 25 May 2020 17:22:11 +0000 (10:22 -0700)]
RDMA/srpt: Reduce max_recv_sge to 1

Since srpt_post_recv() always sets num_sge to 1, reduce the max_recv_sge
parameter that is used at queue pair allocation time to 1.

Link: https://lore.kernel.org/r/20200525172212.14413-4-bvanassche@acm.org
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/srpt: Make debug output more detailed
Bart Van Assche [Mon, 25 May 2020 17:22:10 +0000 (10:22 -0700)]
RDMA/srpt: Make debug output more detailed

Since the session name by itself is not sufficient to uniquely identify a
queue pair, include the queue pair number. Show the ASCII channel state
name instead of the numeric value. This change makes the ib_srpt debug
output more consistent.

Link: https://lore.kernel.org/r/20200525172212.14413-3-bvanassche@acm.org
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/srp: Make the channel count configurable per target
Bart Van Assche [Mon, 25 May 2020 17:22:09 +0000 (10:22 -0700)]
RDMA/srp: Make the channel count configurable per target

Increase the flexibility of the SRP initiator driver by making the channel
count configurable per target instead of only providing a kernel module
parameter for configuring the channel count.

Link: https://lore.kernel.org/r/20200525172212.14413-2-bvanassche@acm.org
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/mlx5: Return ECE data after modify QP
Leon Romanovsky [Tue, 26 May 2020 11:54:40 +0000 (14:54 +0300)]
RDMA/mlx5: Return ECE data after modify QP

After the user sets the ECE option, FW returns the agreed/supported bits
through the output structures of the modify QP stages for regular QPs, or
through create QP for the DCT.

Link: https://lore.kernel.org/r/20200526115440.205922-9-leon@kernel.org
Reviewed-by: Mark Zhang <markz@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/mlx5: Set ECE options during modify QP
Leon Romanovsky [Tue, 26 May 2020 11:54:39 +0000 (14:54 +0300)]
RDMA/mlx5: Set ECE options during modify QP

The most common way to set ECE option will be during modify QP command in
INIT2RTR, RTR2RTS and RTS2RTS stages, so update mlx5 to support it.

The new bit in the comp_mask is needed to mark that the kernel supports ECE
and can receive data in place of the "reserved" field in struct
mlx5_ib_modify_qp.

Link: https://lore.kernel.org/r/20200526115440.205922-8-leon@kernel.org
Reviewed-by: Mark Zhang <markz@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/mlx5: Convert modify QP to use MLX5_SET macros
Leon Romanovsky [Tue, 26 May 2020 11:54:38 +0000 (14:54 +0300)]
RDMA/mlx5: Convert modify QP to use MLX5_SET macros

Instead of the hand-crafted mlx5_qp_context and mlx5_qp_path structures,
use the common MLX5_SET() macros.
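
A rough sketch of the conversion (the exact fields touched by the patch may
differ):

  /* before: fields packed by hand into struct mlx5_qp_context */
  context->mtu_msgmax = (attr->path_mtu << 5) | log_msg_max;

  /* after: the same information set through the FW-description macros */
  MLX5_SET(qpc, qpc, mtu, attr->path_mtu);
  MLX5_SET(qpc, qpc, log_msg_max, log_msg_max);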

Link: https://lore.kernel.org/r/20200526115440.205922-7-leon@kernel.org
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Reviewed-by: Mark Zhang <markz@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/mlx5: Remove manually crafted QP context from the query call
Leon Romanovsky [Tue, 26 May 2020 11:54:37 +0000 (14:54 +0300)]
RDMA/mlx5: Remove manually crafted QP context from the query call

As a preparation for removing the hand-crafted mlx5_qp_context, convert
query_qp_attr() to use the proper MLX5_GET() macros.

Link: https://lore.kernel.org/r/20200526115440.205922-6-leon@kernel.org
Reviewed-by: Mark Zhang <markz@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/mlx5: Use direct modify QP implementation
Leon Romanovsky [Tue, 26 May 2020 11:54:36 +0000 (14:54 +0300)]
RDMA/mlx5: Use direct modify QP implementation

As a preparation for removing the hand-crafted mlx5_qp_context, convert the
counter code to use mlx5_cmd_exec_in() directly.

Link: https://lore.kernel.org/r/20200526115440.205922-5-leon@kernel.org
Reviewed-by: Mark Zhang <markz@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/mlx5: Set ECE options during QP create
Leon Romanovsky [Tue, 26 May 2020 11:54:35 +0000 (14:54 +0300)]
RDMA/mlx5: Set ECE options during QP create

Allow users to request creation of QPs with specific ECE options.  Setting
the options this early, even before the RDMA-CM connection is established,
is useful if the user knows exactly which options are needed.

Link: https://lore.kernel.org/r/20200526115440.205922-4-leon@kernel.org
Reviewed-by: Mark Zhang <markz@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/mlx5: Get ECE options from FW during create QP
Leon Romanovsky [Tue, 26 May 2020 11:54:34 +0000 (14:54 +0300)]
RDMA/mlx5: Get ECE options from FW during create QP

Supported ECE options are returned from FW in the create_qp phase, and zero
means that the field is not valid. Such a default value allows us to reuse
the reserved field without worrying about comp_mask.

Update create QP API to return ECE options.

Link: https://lore.kernel.org/r/20200526115440.205922-3-leon@kernel.org
Reviewed-by: Mark Zhang <markz@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/cma: Provide ECE reject reason
Leon Romanovsky [Tue, 26 May 2020 10:33:04 +0000 (13:33 +0300)]
RDMA/cma: Provide ECE reject reason

IBTA declares a "vendor option not supported" reject reason in REJ messages
for when the passive side doesn't want to accept the proposed ECE options.

Because ECE is managed by userspace, users need a way to provide such a
reject reason.

Link: https://lore.kernel.org/r/20200526103304.196371-7-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/cma: Connect ECE to rdma_accept
Leon Romanovsky [Tue, 26 May 2020 10:33:03 +0000 (13:33 +0300)]
RDMA/cma: Connect ECE to rdma_accept

rdma_accept() is called by both the passive and active sides of a CMID
connection to mark readiness to start data transfer. For the passive side,
this is called explicitly; for the active side, it is called implicitly
while receiving the REP message.

Provide the ECE data to rdma_accept(), which the passive side needs in
order to send that REP message.

Link: https://lore.kernel.org/r/20200526103304.196371-6-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/cm: Send and receive ECE parameter over the wire
Leon Romanovsky [Tue, 26 May 2020 10:33:02 +0000 (13:33 +0300)]
RDMA/cm: Send and receive ECE parameter over the wire

ECE parameters are exchanged through REQ->REP/SIDR_REP messages; this
patch adds the data to provide to the other side of the CMID communication
channel.

Link: https://lore.kernel.org/r/20200526103304.196371-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/ucma: Deliver ECE parameters through UCMA events
Leon Romanovsky [Tue, 26 May 2020 10:33:01 +0000 (13:33 +0300)]
RDMA/ucma: Deliver ECE parameters through UCMA events

The passive side of a CMID connection receives the ECE request through the
REQ message and needs to respond with the relevant REP message, which will
be forwarded to the active side.

The UCMA events interface is responsible for such communication with
user space (librdmacm). Extend it to provide the ECE wire data.

Link: https://lore.kernel.org/r/20200526103304.196371-4-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/ucma: Extend ucma_connect to receive ECE parameters
Leon Romanovsky [Tue, 26 May 2020 10:33:00 +0000 (13:33 +0300)]
RDMA/ucma: Extend ucma_connect to receive ECE parameters

The active side of a CMID initiates a connection through librdmacm's
rdma_connect() and the kernel's ucma_connect(). Extend the UCMA interface
to handle the new ECE parameters.

Link: https://lore.kernel.org/r/20200526103304.196371-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/cm: Add Enhanced Connection Establishment (ECE) bits
Leon Romanovsky [Tue, 26 May 2020 10:32:59 +0000 (13:32 +0300)]
RDMA/cm: Add Enhanced Connection Establishment (ECE) bits

Extend the REQ (request for communication), REP (reply to request for
communication), reject reason and SIDR_REP (service ID resolution
response) structures with hardware vendor ID bits according to IBTA v1.4.

Link: https://lore.kernel.org/r/20200526103304.196371-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoMerge branch 'mellanox/mlx5-next' into rdma.git for/next
Jason Gunthorpe [Wed, 27 May 2020 19:01:17 +0000 (16:01 -0300)]
Merge branch 'mellanox/mlx5-next' into rdma.git for/next

From the mlx5-next branch at
  git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux

Required for dependencies in following patches

* branch 'mellanox/mlx5-next':
  net/mlx5: Add ability to read and write ECE options
  net/mlx5: Add support for RDMA TX FT headers modifying
  net/mlx5: Move iseg access helper routines close to mlx5_core driver
  net/mlx5: Cleanup mlx5_ifc_fte_match_set_misc2_bits

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/mlx5: Fix DEVX support for MLX5_CMD_OP_INIT2INIT_QP command
Mark Zhang [Wed, 27 May 2020 13:57:03 +0000 (16:57 +0300)]
IB/mlx5: Fix DEVX support for MLX5_CMD_OP_INIT2INIT_QP command

The commit cited in the Fixes line wasn't complete and solved only part of
the problem. Update mlx5_ib to properly support the
MLX5_CMD_OP_INIT2INIT_QP command in DEVX, which is required when modifying
the QP tx_port_affinity.

Fixes: 819f7427bafd ("RDMA/mlx5: Add init2init as a modify command")
Link: https://lore.kernel.org/r/20200527135703.482501-1-leon@kernel.org
Signed-off-by: Mark Zhang <markz@mellanox.com>
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/core: Use sizeof_field() helper
Gustavo A. R. Silva [Wed, 27 May 2020 14:41:52 +0000 (09:41 -0500)]
RDMA/core: Use sizeof_field() helper

Make use of the sizeof_field() helper instead of an open-coded version.
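
For illustration with a made-up struct (sizeof_field() lives in
<linux/stddef.h>):

  struct example {
          u16 flags;
  };

  /* open coded */
  size_t len1 = sizeof(((struct example *)0)->flags);

  /* with the helper */
  size_t len2 = sizeof_field(struct example, flags);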

Link: https://lore.kernel.org/r/20200527144152.GA22605@embeddedor
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agonet/mlx5: Add ability to read and write ECE options
Leon Romanovsky [Mon, 9 Mar 2020 14:44:25 +0000 (16:44 +0200)]
net/mlx5: Add ability to read and write ECE options

The end result of the RDMA-CM ECE handshake is a set of ECE options, which
needs to be used while configuring data QPs. Such options can come in any
QP state, so add in/out fields to set and query ECE options.

OUT fields:
* create_qp() - default ECE options for that type of QP.
* modify_qp() - enabled ECE options after QP state transition.

IN fields:
* create_qp() - create QP with this ECE option.
* modify_qp() - requested options. For unconnected QPs, the FW
will return an error if ECE is already configured with any options
that are not equal to those previously set.

Reviewed-by: Mark Zhang <markz@mellanox.com>
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
4 years agoRDMA/ipoib: Remove can_sleep parameter from ipoib_mcast_alloc
Kamal Heib [Mon, 25 May 2020 13:03:05 +0000 (16:03 +0300)]
RDMA/ipoib: Remove can_sleep parameter from ipoib_mcast_alloc

can_sleep is always 0 when ipoib_mcast_alloc() is called, so remove it and
use GFP_ATOMIC instead of GFP_KERNEL.

Link: https://lore.kernel.org/r/20200525130305.171509-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/iw_cxgb4: cleanup device debugfs entries on ULD remove
Potnuri Bharat Teja [Sun, 24 May 2020 19:08:14 +0000 (00:38 +0530)]
RDMA/iw_cxgb4: cleanup device debugfs entries on ULD remove

Remove device-specific debugfs entries immediately when the LLD detaches a
particular ULD device in case of fatal PCI errors.

Link: https://lore.kernel.org/r/20200524190814.17599-1-bharat@chelsio.com
Signed-off-by: Potnuri Bharat Teja <bharat@chelsio.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Make the end of sge process more clear
Yixian Liu [Fri, 22 May 2020 13:02:59 +0000 (21:02 +0800)]
RDMA/hns: Make the end of sge process more clear

Using the sge number of the wr instead of i makes the comparison clearer:
when the number of sges in the wr is smaller than the maximum number of
sges supported by the queue, a stop sge needs to be filled in at the end of
the wr's sges.

Link: https://lore.kernel.org/r/1590152579-32364-5-git-send-email-liweihang@huawei.com
Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Simplify process related to poll cq
Lang Cheng [Fri, 22 May 2020 13:02:58 +0000 (21:02 +0800)]
RDMA/hns: Simplify process related to poll cq

Make hns_roce_v2_cq_set_ci() an inline function and remove the unnecessary
next_cqe_sw_v2().

Link: https://lore.kernel.org/r/1590152579-32364-4-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Remove redundant parameters from free_srq/qp_wrid()
Wenpeng Liang [Fri, 22 May 2020 13:02:57 +0000 (21:02 +0800)]
RDMA/hns: Remove redundant parameters from free_srq/qp_wrid()

The redundant parameter "hr_dev" needs to be removed from
free_kernel_wrid() and free_srq_wrid().

Link: https://lore.kernel.org/r/1590152579-32364-3-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Remove redundant type cast for general pointers
Weihang Li [Fri, 22 May 2020 13:02:56 +0000 (21:02 +0800)]
RDMA/hns: Remove redundant type cast for general pointers

There is no need to do a type cast on general pointers; they can be
assigned to any type of variable. In addition, optimize the initialization
of some variables and adjust their order.

Link: https://lore.kernel.org/r/1590152579-32364-2-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Optimize the usage of MTR
Xi Wang [Wed, 20 May 2020 13:53:19 +0000 (21:53 +0800)]
RDMA/hns: Optimize the usage of MTR

Currently, the MTR region is configured before hns_roce_mtr_map() is
invoked, but in some scenarios the region is configured at MTR creation and
the caller needs to store this config and call hns_roce_mtr_map() later. So
optimize the usage by wrapping the MTR region config into the MTR.

Link: https://lore.kernel.org/r/1589982799-28728-10-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Refactor the QP context filling process related to WQE buffer configuration
Xi Wang [Wed, 20 May 2020 13:53:18 +0000 (21:53 +0800)]
RDMA/hns: Refactor the QP context filling process related to WQE buffer configuration

Split the code related to WQE buffer configuration out of the QPC filling
process into two functions: config_qp_sq_buf() and config_qp_rq_buf(). This
will make the code more readable.

Link: https://lore.kernel.org/r/1589982799-28728-9-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Change variables representing quantity to unsigned
Weihang Li [Wed, 20 May 2020 13:53:17 +0000 (21:53 +0800)]
RDMA/hns: Change variables representing quantity to unsigned

The number of sges/eqes is always non-negative, so they should be defined
as unsigned types.

Link: https://lore.kernel.org/r/1589982799-28728-8-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Change all page_shift to unsigned
Weihang Li [Wed, 20 May 2020 13:53:16 +0000 (21:53 +0800)]
RDMA/hns: Change all page_shift to unsigned

page_shift is used to calculate the page size; it's always non-negative
and should be an unsigned type.

Link: https://lore.kernel.org/r/1589982799-28728-7-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Rename QP buffer related function
Xi Wang [Wed, 20 May 2020 13:53:15 +0000 (21:53 +0800)]
RDMA/hns: Rename QP buffer related function

Rename the function related to QP buffer to make the code more readable.

Link: https://lore.kernel.org/r/1589982799-28728-6-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Remove unused code about assert
Yangyang Li [Wed, 20 May 2020 13:53:14 +0000 (21:53 +0800)]
RDMA/hns: Remove unused code about assert

The code related to assert is no longer used and needs to be deleted.

Link: https://lore.kernel.org/r/1589982799-28728-5-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Optimize post and poll process
Lang Cheng [Wed, 20 May 2020 13:53:13 +0000 (21:53 +0800)]
RDMA/hns: Optimize post and poll process

Add unlikely() and likely() to optimize main I/O process code.

Link: https://lore.kernel.org/r/1589982799-28728-4-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Add CQ flag instead of independent enable flag
Lang Cheng [Wed, 20 May 2020 13:53:12 +0000 (21:53 +0800)]
RDMA/hns: Add CQ flag instead of independent enable flag

It's easier to understand and maintain the enable flags of a CQ using a
single u32 field than by defining a field for every flag in struct
hns_roce_cq, and new feature flags can be added more conveniently in the
future.

Link: https://lore.kernel.org/r/1589982799-28728-3-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Let software PI/CI grow naturally
Lang Cheng [Wed, 20 May 2020 13:53:11 +0000 (21:53 +0800)]
RDMA/hns: Let software PI/CI grow naturally

The hardware can truncate the PI/CI when posting or polling, so the driver
does not need to do the truncation. Therefore simply keep the software's
PI/CI consistent with the hardware's.

Link: https://lore.kernel.org/r/1589982799-28728-2-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/rtrs: Get rid of the do_next_path while_next_path macros
Danil Kipnis [Fri, 22 May 2020 05:39:24 +0000 (07:39 +0200)]
RDMA/rtrs: Get rid of the do_next_path while_next_path macros

The macros do_each_path/while_each_path lead to a smatch warning:

drivers/infiniband/ulp/rtrs/rtrs-clt.c:1196 rtrs_clt_failover_req() warn: inconsistent indenting
drivers/infiniband/ulp/rtrs/rtrs-clt.c:2890 rtrs_clt_request() warn: inconsistent indenting

Also checkpatch complains:
ERROR: Macros with multiple statements should be enclosed in a do - while loop

The macros are used only in two places: for a normal IO path and for the
failover path triggered after errors.

Get rid of the macros and just use a for loop iterating over the list of
paths in both places. It is easier to read and results in fewer lines of
code.

Fixes: 6a98d71daea1 ("RDMA/rtrs: client: main functionality")
Link: https://lore.kernel.org/r/20200522053924.528980-1-danil.kipnis@cloud.ionos.com
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/rtrs: server: Use already dereferenced rtrs_sess structure
Md Haris Iqbal [Fri, 22 May 2020 08:28:33 +0000 (08:28 +0000)]
RDMA/rtrs: server: Use already dereferenced rtrs_sess structure

The rtrs_sess structure has already been extracted above from the
rtrs_srv_sess structure. Use that to avoid redundant dereferencing.

Fixes: 9cb837480424 ("RDMA/rtrs: server: main functionality")
Link: https://lore.kernel.org/r/20200522082833.1480551-1-haris.phnx@gmail.com
Signed-off-by: Md Haris Iqbal <haris.phnx@gmail.com>
Acked-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/rnbd: Fix compilation error when CONFIG_MODULES is disabled
Danil Kipnis [Thu, 21 May 2020 18:59:09 +0000 (20:59 +0200)]
RDMA/rnbd: Fix compilation error when CONFIG_MODULES is disabled

The module_is_live() function is only defined when CONFIG_MODULES is
enabled. Use try_module_get() instead to check whether the module is being
removed.

When module unload and manual unmapping happen in parallel, we can try
removing the symlink twice: rnbd_client_exit vs. rnbd_clt_unmap_dev_store.

This is probably not the best way to deal with this race in general, but
for now this fixes the compilation issue when CONFIG_MODULES is disabled
and has no functional impact. Regression tests passed.
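
Roughly, the approach looks like this (an illustrative sketch, not the
exact rnbd code):

  /* module_is_live() is unavailable without CONFIG_MODULES; instead take a
   * temporary reference, which fails while the module is being removed */
  if (!try_module_get(THIS_MODULE))
          return -ENODEV;

  /* ... remove the sysfs symlink exactly once ... */

  module_put(THIS_MODULE);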

Fixes: 1eb54f8f5dd8 ("block/rnbd: client: sysfs interface functions")
Link: https://lore.kernel.org/r/20200521185909.457245-1-danil.kipnis@cloud.ionos.com
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Suggested-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/cma: Fix ports memory leak in cma_configfs
Maor Gottlieb [Thu, 21 May 2020 07:26:50 +0000 (10:26 +0300)]
IB/cma: Fix ports memory leak in cma_configfs

The allocated ports structure is never freed. The free function should be
called by release_cma_ports_group, but the group is never released since we
don't remove its default group.

Remove default groups when device group is deleted.

Fixes: 045959db65c6 ("IB/cma: Add configfs for rdma_cm")
Link: https://lore.kernel.org/r/20200521072650.567908-1-leon@kernel.org
Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoblock/rnbd: Fix an IS_ERR() vs NULL check in find_or_create_sess()
Dan Carpenter [Tue, 19 May 2020 12:03:47 +0000 (15:03 +0300)]
block/rnbd: Fix an IS_ERR() vs NULL check in find_or_create_sess()

The alloc_sess() function returns error pointers, it never returns NULL.
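
The corrected check, as a sketch (the argument and calling context are
illustrative):

  sess = alloc_sess(rtrs_dev);    /* returns ERR_PTR() on failure, never NULL */
  if (IS_ERR(sess))
          return sess;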

Fixes: f7a7a5c228d4 ("block/rnbd: client: main functionality")
Link: https://lore.kernel.org/r/20200519120347.GD42765@mwanda
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/uverbs: Introduce create/destroy QP commands over ioctl
Yishai Hadas [Tue, 19 May 2020 07:27:11 +0000 (10:27 +0300)]
IB/uverbs: Introduce create/destroy QP commands over ioctl

Introduce create/destroy QP commands over the ioctl interface to let it
be extended to get an asynchronous event FD.

Link: https://lore.kernel.org/r/20200519072711.257271-8-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/uverbs: Introduce create/destroy WQ commands over ioctl
Yishai Hadas [Tue, 19 May 2020 07:27:10 +0000 (10:27 +0300)]
IB/uverbs: Introduce create/destroy WQ commands over ioctl

Introduce create/destroy WQ commands over the ioctl interface to let it
be extended to get an asynchronous event FD.

Link: https://lore.kernel.org/r/20200519072711.257271-7-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/uverbs: Introduce create/destroy SRQ commands over ioctl
Yishai Hadas [Tue, 19 May 2020 07:27:09 +0000 (10:27 +0300)]
IB/uverbs: Introduce create/destroy SRQ commands over ioctl

Introduce create/destroy SRQ commands over the ioctl interface to let it
be extended to get an asynchronous event FD.

Link: https://lore.kernel.org/r/20200519072711.257271-6-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/uverbs: Move QP, SRQ, WQ type and flags to UAPI
Yishai Hadas [Tue, 19 May 2020 07:27:08 +0000 (10:27 +0300)]
IB/uverbs: Move QP, SRQ, WQ type and flags to UAPI

These constants are going to be used in the ioctl interface in coming
patches so they are part of the UAPI, place them in the correct header
for clarity.

Link: https://lore.kernel.org/r/20200519072711.257271-5-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/uverbs: Extend CQ to get its own asynchronous event FD
Yishai Hadas [Tue, 19 May 2020 07:27:07 +0000 (10:27 +0300)]
IB/uverbs: Extend CQ to get its own asynchronous event FD

Extend CQ to get its own asynchronous event FD.
The event FD is an optional attribute; in case it wasn't given, the ufile
event FD will be used.

Link: https://lore.kernel.org/r/20200519072711.257271-4-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/uverbs: Refactor related objects to use their own asynchronous event FD
Yishai Hadas [Tue, 19 May 2020 07:27:06 +0000 (10:27 +0300)]
IB/uverbs: Refactor related objects to use their own asynchronous event FD

Refactor related objects to use their own asynchronous event FD.
The ufile event FD will be the default in case an object doesn't have its
own event FD.

Link: https://lore.kernel.org/r/20200519072711.257271-3-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/core: Allow the ioctl layer to abort a fully created uobject
Jason Gunthorpe [Tue, 19 May 2020 07:27:05 +0000 (10:27 +0300)]
RDMA/core: Allow the ioctl layer to abort a fully created uobject

While creating a uobject every create reaches a point where the uobject is
fully initialized. For ioctls that go on to copy_to_user this means they
need to open code the destruction of a fully created uobject - ie the
RDMA_REMOVE_DESTROY sort of flow.

Open coding this creates bugs, eg the CQ does not properly flush the
events list when it does its error unwind.

Provide a uverbs_finalize_uobj_create() function which indicates that the
uobject is fully initialized and that abort should call to destroy_hw to
destroy the uobj->object and related.

Methods can call this function if they go on to have error cases after
setting uobj->object. Once done those error cases can simply do return,
without an error unwind.

Link: https://lore.kernel.org/r/20200519072711.257271-2-leon@kernel.org
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoMerge tag 'v5.7-rc6' into rdma.git for-next
Jason Gunthorpe [Thu, 21 May 2020 20:07:21 +0000 (17:07 -0300)]
Merge tag 'v5.7-rc6' into rdma.git for-next

Linux 5.7-rc6

Conflict in drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
resolved by deleting dr_cq_event, matching how netdev resolved it.

Required for dependencies in the following patches.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/hfi1: Enable the transmit side of the datagram ipoib netdev
Piotr Stankiewicz [Mon, 11 May 2020 16:07:13 +0000 (12:07 -0400)]
IB/hfi1: Enable the transmit side of the datagram ipoib netdev

This patch hooks the transmit side of the datagram netdev into ipoib by
setting the rdma_netdev_get_params function for the hfi1 ib_device_ops
structure. It also enables the receiving side by adding the AIP capability
into the default capabilities.

Link: https://lore.kernel.org/r/20200511160712.173205.65700.stgit@awfm-01.aw.intel.com
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Piotr Stankiewicz <piotr.stankiewicz@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/ipoib: Add capability to switch between datagram and connected mode
Gary Leshner [Mon, 11 May 2020 16:07:06 +0000 (12:07 -0400)]
IB/ipoib: Add capability to switch between datagram and connected mode

This is the prerequisite modification to the ipoib ulp to allow an rdma
netdev to obtain the default ndo ops for init/uninit/open/close.

This is accomplished by setting the netdev ops field within the callback
function passed to the netdev allocation routine, which in turn was passed
into the rdma netdev allocation routine.

This allows the rdma netdev to call back into the ulp to create the
resources required for connected mode operation.

Additionally as the ulp is not re-entrant, when switching modes,
the number of real tx queues is set to 1 for the connected mode.

For datagram mode the number of real tx queues is set to the
actual number of tx queues specified at the netdev's allocation.

For the internal ulp netdev the number of tx queues defaults to 1.

It is up to the rdma netdev to specify the actual number it can support.

When the driver does not support a rdma netdev for acceleration,
(-ENOTSUPPORTED return code or the verbs function for allocation is
NULL) the ipoib ulp functions are unaffected by using the internal
netdev allocated by the ipoib ulp.

Link: https://lore.kernel.org/r/20200511160706.173205.19086.stgit@awfm-01.aw.intel.com
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Gary Leshner <Gary.S.Leshner@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/hfi1: Add packet histogram trace event
Grzegorz Andrejczuk [Mon, 11 May 2020 16:07:01 +0000 (12:07 -0400)]
IB/hfi1: Add packet histogram trace event

Add a simple trace event that takes the context number and builds a simple
histogram to print the packet distribution between contexts.

Link: https://lore.kernel.org/r/20200511160700.173205.84270.stgit@awfm-01.aw.intel.com
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/{hfi1, ipoib, rdma}: Broadcast ping sent packets which exceeded mtu size
Gary Leshner [Mon, 11 May 2020 16:06:55 +0000 (12:06 -0400)]
IB/{hfi1, ipoib, rdma}: Broadcast ping sent packets which exceeded mtu size

When in connected mode ipoib sent broadcast pings which exceeded the mtu
size for broadcast addresses.

Add an mtu attribute to the rdma_netdev structure which ipoib sets to its
mcast mtu size.

The RDMA netdev uses this value to determine if the skb length is too long
for the specified mtu and, if it is, drops the packet and logs an error
about the errant packet.
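
On the transmit path this amounts to a check along these lines (the field
and stat names are illustrative):

  if (unlikely(skb->len > rn->mtu)) {
          /* plus a (rate-limited) error print about the errant packet */
          ++dev->stats.tx_dropped;
          dev_kfree_skb_any(skb);
          return NETDEV_TX_OK;
  }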

Link: https://lore.kernel.org/r/20200511160655.173205.14546.stgit@awfm-01.aw.intel.com
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Gary Leshner <Gary.S.Leshner@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/hfi1: Activate the dummy netdev
Grzegorz Andrejczuk [Mon, 11 May 2020 16:06:49 +0000 (12:06 -0400)]
IB/hfi1: Activate the dummy netdev

As described in earlier patches, ipoib netdev will share receive
contexts with existing VNIC netdev through a dummy netdev. The
following changes are made to achieve that:
- Set up netdev receive contexts after user contexts. A function is
  added to count the available netdev receive contexts.
- Add functions to set/get receive map table free index.
- Rename NUM_VNIC_MAP_ENTRIES as NUM_NETDEV_MAP_ENTRIES.
- Let the dummy netdev own the receive contexts instead of VNIC.
- Allocate the dummy netdev when the hfi1 device is added and free it
  when the device is removed.
- Initialize AIP RSM rules when the IpoIb rxq is initialized and
  remove the rules when it is de-initialized.
- Convert VNIC to use the dummy netdev.

Link: https://lore.kernel.org/r/20200511160649.173205.4626.stgit@awfm-01.aw.intel.com
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Sadanand Warrier <sadanand.warrier@intel.com>
Signed-off-by: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/hfi1: Add rx functions for dummy netdev
Grzegorz Andrejczuk [Mon, 11 May 2020 16:06:43 +0000 (12:06 -0400)]
IB/hfi1: Add rx functions for dummy netdev

This patch adds the rx functions for the dummy netdev:
- Functions to allocate/free the dummy netdev.
- Functions to allocate/free receiving contexts for the netdev.
- Functions to initialize/de-initialize the receive queue.
- Functions to enable/disable the receive queue.

Link: https://lore.kernel.org/r/20200511160643.173205.75087.stgit@awfm-01.aw.intel.com
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Sadanand Warrier <sadanand.warrier@intel.com>
Signed-off-by: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/hfi1: Add interrupt handler functions for accelerated ipoib
Grzegorz Andrejczuk [Mon, 11 May 2020 16:06:37 +0000 (12:06 -0400)]
IB/hfi1: Add interrupt handler functions for accelerated ipoib

This patch adds the interrupt handler function, the NAPI poll
function, and its associated helper functions for receiving
accelerated ipoib packets. While we are here, fix the formats
of two error printouts.

Link: https://lore.kernel.org/r/20200511160637.173205.64890.stgit@awfm-01.aw.intel.com
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Sadanand Warrier <sadanand.warrier@intel.com>
Signed-off-by: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/hfi1: Add functions to receive accelerated ipoib packets
Kaike Wan [Mon, 11 May 2020 16:06:31 +0000 (12:06 -0400)]
IB/hfi1: Add functions to receive accelerated ipoib packets

Ipoib netdev will share receive contexts with existing VNIC netdev.
To achieve that, a dummy netdev is allocated with hfi1_devdata to
own the receive contexts, and ipoib and VNIC netdevs will be put
on top of it. Each receive context is associated with a single
NAPI object.

This patch adds the functions to receive incoming packets for
accelerated ipoib.

Link: https://lore.kernel.org/r/20200511160631.173205.54184.stgit@awfm-01.aw.intel.com
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Sadanand Warrier <sadanand.warrier@intel.com>
Signed-off-by: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/hfi1: Rename num_vnic_contexts as num_netdev_contexts
Grzegorz Andrejczuk [Mon, 11 May 2020 16:06:25 +0000 (12:06 -0400)]
IB/hfi1: Rename num_vnic_contexts as num_netdev_contexts

Rename num_vnic_contexts as num_netdev_contexts since VNIC and ipoib will
share the same set of receive contexts.

Link: https://lore.kernel.org/r/20200511160625.173205.53306.stgit@awfm-01.aw.intel.com
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Sadanand Warrier <sadanand.warrier@intel.com>
Signed-off-by: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/ipoib: Increase ipoib Datagram mode MTU's upper limit
Kaike Wan [Mon, 11 May 2020 16:06:18 +0000 (12:06 -0400)]
IB/ipoib: Increase ipoib Datagram mode MTU's upper limit

Currently the ipoib UD mtu is restricted to 4K bytes. Remove this
limitation so that the IPOIB module can potentially use an MTU (in UD
mode) that is bounded by the MTU of the underlying device. A field is
added to the ib_port_attr structure to indicate the maximum physical
MTU the underlying device supports.

Link: https://lore.kernel.org/r/20200511160618.173205.23053.stgit@awfm-01.aw.intel.com
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Sadanand Warrier <sadanand.warrier@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/hfi1: RSM rules for AIP
Grzegorz Andrejczuk [Mon, 11 May 2020 16:06:12 +0000 (12:06 -0400)]
IB/hfi1: RSM rules for AIP

This is an implementation of the RSM rule for AIP packets.
The AIP rule will use rule RSM2 and will match a standard
InfiniBand packet containing a BTH (LNH==BTH) and
having a Dest QPN prefixed with the value 0x81. Spreading
between receive contexts will be done using source QPN bits.

VNIC and AIP will share receive contexts, so their rules
will point to the same RMT entries and their shared
code is moved to separate functions.
If any of the rules is active, RMT mapping will be skipped
for the latter.

Changed the function hfi1_vnic_is_rsm_full to be more general
and moved it from the main header to chip.c.

Changed the order of the RSM rules because the AIP rule, as
the more specific one, needs to be placed before the more
general QOS rule. The rules occupy the two last RSM
registers.

Link: https://lore.kernel.org/r/20200511160612.173205.73002.stgit@awfm-01.aw.intel.com
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/{rdmavt, hfi1}: Implement creation of accelerated UD QPs
Gary Leshner [Mon, 11 May 2020 16:06:07 +0000 (12:06 -0400)]
IB/{rdmavt, hfi1}: Implement creation of accelerated UD QPs

Adds capability to create a qpn to be recognized as an accelerated
UD QP for ipoib.

This is accomplished by reserving 0x81 in byte[0] of the qpn as the
prefix for these qp types and reserving qpns between 0x810000 and
0x81ffff.

The hfi1 capability mask already contained a flag for the VNIC netdev.
This has been renamed and extended to include both VNIC and ipoib.

The rvt code to allocate qps now recognizes this flag and sets 0x81
into byte[0] of the qpn.

The code to allocate qpns is modified to reset the qpn numbering when it
detects that a value is present in byte[0] for a UD QP and the qpn is being
requested for netdev use. If it is a regular UD QP then it is allowable to
have bits set in byte[0] of the qpn, preserving the previously normal
behavior.

The code to free the qpn now checks for the AIP prefix value of 0x81 and
removes it from the qpn before it is freed, so that the lower 16 bit number
can be reused.

This patch requires minor changes in the IB core and ipoib to facilitate
the creation of accelerated UD QPs.
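
In other words, an accelerated-IP qpn can be recognized by its top byte; a
sketch (the macro and function names are hypothetical):

  #define AIP_QP_PREFIX   0x81    /* qpns 0x810000 - 0x81ffff are reserved */

  static inline bool qpn_is_aip(u32 qpn)
  {
          /* byte[0] of the 24-bit qpn carries the 0x81 prefix */
          return (qpn >> 16) == AIP_QP_PREFIX;
  }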

Link: https://lore.kernel.org/r/20200511160607.173205.11757.stgit@awfm-01.aw.intel.com
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Gary Leshner <Gary.S.Leshner@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/hfi1: Remove module parameter for KDETH qpns
Gary Leshner [Mon, 11 May 2020 16:06:00 +0000 (12:06 -0400)]
IB/hfi1: Remove module parameter for KDETH qpns

The module parameter for KDETH qpns is being removed in favor
of always using the default value of 0x80 as the qpn prefix.
Defines have been added for various KDETH values including
the prefix of 0x80.
The reserved range now starts at the base value for KDETH
qpns (0x80) and extends up to and including the last qpn for
other reserved QP prefixed types.
Adjust other QP prefixed define names to match KDETH defined
names.

Link: https://lore.kernel.org/r/20200511160600.173205.27508.stgit@awfm-01.aw.intel.com
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Gary Leshner <Gary.S.Leshner@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/hfi1: Add the transmit side of a datagram ipoib RDMA netdev
Gary Leshner [Mon, 11 May 2020 16:05:54 +0000 (12:05 -0400)]
IB/hfi1: Add the transmit side of a datagram ipoib RDMA netdev

This implements the transmit side of the multiple transmit queue RDMA
netdev used to accelerate ipoib.  The receive side remains the ipoib
internal implementation.

The init/uninit/open/stop netdev operations are saved off and called by
the versions within the hfi1 netdev in order to initialize the connected
mode resources present in ipoib, thus allowing us to switch modes between
datagram and connected.
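
The save-and-wrap approach for those operations looks roughly like the
sketch below (structure and accessor names are assumptions; only
net_device_ops and ndo_open are standard kernel identifiers):

struct hfi1_ipoib_dev_priv {
    const struct net_device_ops *netdev_ops; /* ipoib's original ops, saved off */
    /* ... accelerated datagram tx queue state ... */
};

static int hfi1_ipoib_dev_open(struct net_device *dev)
{
    struct hfi1_ipoib_dev_priv *priv = hfi1_ipoib_priv(dev); /* hypothetical accessor */
    int ret;

    /* let ipoib's saved open initialize its connected mode resources */
    ret = priv->netdev_ops->ndo_open(dev);
    if (ret)
        return ret;

    /* ... then bring up the accelerated datagram tx queues ... */
    return 0;
}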

The datagram queue pair instantiated by the ipoib ulp is used by this
implementation for its queue pair number and to register with multicast.

The above queue pair is not used on transmit other than for its qpn,
since the verbs layer is skipped and packets are submitted directly to
the sdma engines.

Link: https://lore.kernel.org/r/20200511160554.173205.1369.stgit@awfm-01.aw.intel.com
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Gary Leshner <Gary.S.Leshner@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/hfi1: Add functions to transmit datagram ipoib packets
Gary Leshner [Mon, 11 May 2020 16:05:48 +0000 (12:05 -0400)]
IB/hfi1: Add functions to transmit datagram ipoib packets

This patch implements the mechanism to accelerate the transmit side of
a multiple transmit queue RDMA netdev by submitting the packets to
the SDMA engine directly instead of sending through the verbs layer.

This patch also changes the UD/SEND_ONLY op to output the entropy value
in byte 0 of deth[1]. UD/SEND_ONLY_WITH_IMMEDIATE uses the previous
behavior with no entropy value being output.
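
Since DETH dword 1 carries an 8-bit reserved field above the 24-bit
source QP number, placing the entropy in byte 0 of deth[1] amounts to
something like the following (a sketch, not the hfi1 header-building
code):

/* deth[1] layout: [ entropy (8 bits) | source QP number (24 bits) ] */
__be32 deth1 = cpu_to_be32(((u32)entropy << 24) | (src_qpn & 0x00ffffff));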

Upon successful submission of a tx request, the code in the ipoib rdma
netdev calls trace_sdma_output_ibhdr to output the ibhdr to the trace
buffer.

Link: https://lore.kernel.org/r/20200511160548.173205.45616.stgit@awfm-01.aw.intel.com
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Gary Leshner <Gary.S.Leshner@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoIB/hfi1: Add accelerated IP capability bit
Kaike Wan [Mon, 11 May 2020 16:05:41 +0000 (12:05 -0400)]
IB/hfi1: Add accelerated IP capability bit

The accelerated IP capability bit is added to allow users to control
whether the feature is enabled or disabled.

Link: https://lore.kernel.org/r/20200511160541.173205.96870.stgit@awfm-01.aw.intel.com
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/efa: Report host information to the device
Gal Pressman [Tue, 12 May 2020 15:22:04 +0000 (18:22 +0300)]
RDMA/efa: Report host information to the device

The host info feature allows the driver to inform the EFA device
firmware of the system configuration for debugging and troubleshooting
purposes.

The host info buffer is passed as an admin command DMA mapped control
buffer, and is unmapped and freed once the command CQE is consumed.

Currently, the host info is set for each device on its probe. Failing
to set the host info for the device shall not disturb the probe flow;
any errors are discarded.
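
The lifetime of the control buffer follows the usual DMA-coherent
pattern; a generic sketch (not the EFA driver code, and the fill/issue
helpers are hypothetical):

buf = dma_alloc_coherent(&pdev->dev, size, &dma_addr, GFP_KERNEL);
if (!buf)
    return;                      /* never fail the probe over host info */

fill_host_info(buf);             /* hypothetical: OS, kernel, driver data */
err = set_host_info_feature(dev, dma_addr, size); /* hypothetical admin cmd */
/* the buffer is unmapped and freed once the command CQE is consumed */
dma_free_coherent(&pdev->dev, size, buf, dma_addr);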

Link: https://lore.kernel.org/r/20200512152204.93091-3-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/efa: Fix setting of wrong bit in get/set_feature commands
Gal Pressman [Tue, 12 May 2020 15:22:03 +0000 (18:22 +0300)]
RDMA/efa: Fix setting of wrong bit in get/set_feature commands

When using a control buffer, the ctrl_data bit should be set to
indicate that the control buffer address is valid, not
ctrl_data_indirect, which is used when the control buffer itself is
indirect.
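
In other words, the bit that should be set is the one marking a valid
direct control buffer, roughly as below (the mask names are assumptions
about the EFA admin definitions):

/* correct: mark the control buffer address as valid */
cmd.aq_common_descriptor.flags |= EFA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_MASK;

/* wrong: this bit means the control buffer itself is indirect */
/* cmd.aq_common_descriptor.flags |=
 *         EFA_ADMIN_AQ_COMMON_DESC_CTRL_DATA_INDIRECT_MASK; */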

Fixes: e9c6c5373088 ("RDMA/efa: Add common command handlers")
Link: https://lore.kernel.org/r/20200512152204.93091-2-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/mlx5: Add init2init as a modify command
Aharon Landau [Wed, 13 May 2020 09:55:50 +0000 (12:55 +0300)]
RDMA/mlx5: Add init2init as a modify command

A missing INIT2INIT entry in the list of modify commands caused DEVX
applications to be unable to perform modify_qp for this transition
state. Add the MLX5_CMD_OP_INIT2INIT_QP opcode to the list of allowed
DEVX opcodes.
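
The fix is essentially one more case in the allow-list of modify
opcodes; a sketch of the shape of the change, with neighbouring cases
shown only for context (this is not a literal diff):

case MLX5_CMD_OP_RST2INIT_QP:
case MLX5_CMD_OP_INIT2INIT_QP:    /* previously missing */
case MLX5_CMD_OP_INIT2RTR_QP:
case MLX5_CMD_OP_RTR2RTS_QP:
    return true;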

Fixes: e662e14d801b ("IB/mlx5: Add DEVX support for modify and query commands")
Link: https://lore.kernel.org/r/20200513095550.211345-1-leon@kernel.org
Signed-off-by: Aharon Landau <aharonl@mellanox.com>
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Reserve one sge in order to avoid local length error
Lijun Ou [Fri, 8 May 2020 09:45:59 +0000 (17:45 +0800)]
RDMA/hns: Reserve one sge in order to avoid local length error

When the rq/srq sge length is smaller than the sq sge length, a local
length error is produced and may cause the bus to hang. Therefore, for
rq wqes and srq wqes, one reserved sge pointing to a reserved mr is used
to avoid this error.

Link: https://lore.kernel.org/r/1588931159-56875-10-git-send-email-liweihang@huawei.com
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Rename macro for defining hns hardware page size
Xi Wang [Fri, 8 May 2020 09:45:58 +0000 (17:45 +0800)]
RDMA/hns: Rename macro for defining hns hardware page size

Rename the PAGE_ADDR_SHIFT as HNS_HW_PAGE_SHIFT to make code more
readable.

Link: https://lore.kernel.org/r/1588931159-56875-9-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Remove redundant memcpy()
Weihang Li [Fri, 8 May 2020 09:45:57 +0000 (17:45 +0800)]
RDMA/hns: Remove redundant memcpy()

srq_context is a local variable and is only used to get some fields from
the mailbox buffer. It is meaningless to copy the mailbox buffer's
contents back into it.

Link: https://lore.kernel.org/r/1588931159-56875-8-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Store mr len information into mr obj
Lang Cheng [Fri, 8 May 2020 09:45:56 +0000 (17:45 +0800)]
RDMA/hns: Store mr len information into mr obj

The length information should be stored in the struct ib_mr object,
otherwise the length value of a valid mr object would always be 0.

Link: https://lore.kernel.org/r/1588931159-56875-7-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Fix error with to_hr_hem_entries_count()
Weihang Li [Fri, 8 May 2020 09:45:55 +0000 (17:45 +0800)]
RDMA/hns: Fix error with to_hr_hem_entries_count()

For ilog2(x), if x is 0 and not a compile-time constant, it will return
-1, and an error like the following is produced:

 hns3 0000:7d:00.0 hns_0: Local work queue 0x8 catast error, sub_event type is: 2

So modify to_hr_hem_entries_shift() to return 0 if count is 0.
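
The guard amounts to returning early for a zero count; a sketch,
assuming the helper simply wraps ilog2() around the entry count:

static u32 to_hr_hem_entries_shift(u32 count, u32 buf_shift)
{
    if (!count)
        return 0;

    return ilog2(to_hr_hem_entries_count(count, buf_shift));
}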

Fixes: 54d6638765b0 ("RDMA/hns: Optimize WQE buffer size calculating process")
Link: https://lore.kernel.org/r/1588931159-56875-6-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Fix wrong assignment of SRQ's max_wr
Weihang Li [Fri, 8 May 2020 09:45:54 +0000 (17:45 +0800)]
RDMA/hns: Fix wrong assignment of SRQ's max_wr

The srq's max_wr attribute should be 1 less than the total wqe count.
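
In code terms this is a one-liner (the field names are assumed for
illustration):

attr->max_wr = srq->wqe_cnt - 1; /* max_wr is 1 less than the total wqe count */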

Fixes: ffb1308b88b6 ("RDMA/hns: Move SRQ code to the reasonable place")
Link: https://lore.kernel.org/r/1588931159-56875-5-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Fix assignment to ba_pg_sz of eqe
Wenpeng Liang [Fri, 8 May 2020 09:45:53 +0000 (17:45 +0800)]
RDMA/hns: Fix assignment to ba_pg_sz of eqe

When allocating the eq buffer, the base address page size should be
defined by eqe_ba_pg_sz instead of srqwqe_ba_pg_sz.

Fixes: 477a0a387072 ("RDMA/hns: Optimize 0 hop addressing for EQE buffer")
Link: https://lore.kernel.org/r/1588931159-56875-4-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Fix cmdq parameter of querying pf timer resource
Lang Cheng [Fri, 8 May 2020 09:45:52 +0000 (17:45 +0800)]
RDMA/hns: Fix cmdq parameter of querying pf timer resource

The firmware has reduced the number of descriptors for the
HNS_ROCE_OPC_QUERY_PF_TIMER_RES command to 1. The driver needs to
adapt; otherwise the hardware will report error 4 (CMD_NEXT_ERR).

Fixes: 0e40dc2f70cd ("RDMA/hns: Add timer allocation support for hip08")
Link: https://lore.kernel.org/r/1588931159-56875-3-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/hns: Bugfix for querying qkey
Lijun Ou [Fri, 8 May 2020 09:45:51 +0000 (17:45 +0800)]
RDMA/hns: Bugfix for querying qkey

The qkey queried through the query ud qp verb is currently a fixed
value; it should instead be read from the qp context.

Fixes: 926a01dc000d ("RDMA/hns: Add QP operations support for hip08 SoC")
Link: https://lore.kernel.org/r/1588931159-56875-2-git-send-email-liweihang@huawei.com
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/siw: Replace one-element array and use struct_size() helper
Gustavo A. R. Silva [Tue, 19 May 2020 23:30:18 +0000 (18:30 -0500)]
RDMA/siw: Replace one-element array and use struct_size() helper

The current codebase makes use of one-element arrays in the following
form:

struct something {
    int length;
    u8 data[1];
};

struct something *instance;

instance = kmalloc(sizeof(*instance) + size, GFP_KERNEL);
instance->length = size;
memcpy(instance->data, source, size);

but the preferred mechanism to declare variable-length types such as
these ones is a flexible array member[1][2], introduced in C99:

struct foo {
        int stuff;
        struct boo array[];
};

By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kind of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on. So, replace
the one-element array with a flexible-array member.

Also, make use of the new struct_size() helper to properly calculate the
size of struct siw_pbl.
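
For instance, with the flexible-array member in place, the siw_pbl
allocation can be expressed as follows (a sketch assuming a layout
along these lines):

struct siw_pbl {
    unsigned int num_buf;
    unsigned int max_buf;
    struct siw_pble pbe[];        /* flexible-array member */
};

pbl = kzalloc(struct_size(pbl, pbe, num_pbe), GFP_KERNEL);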

This issue was found with the help of Coccinelle and audited and fixed
_manually_.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Link: https://lore.kernel.org/r/20200519233018.GA6105@embeddedor
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agornbd/rtrs: Pass max segment size from blk user to the rdma library
Danil Kipnis [Tue, 19 May 2020 11:14:19 +0000 (13:14 +0200)]
rnbd/rtrs: Pass max segment size from blk user to the rdma library

When the block device layer is disabled, BLK_MAX_SEGMENT_SIZE is
undefined. rtrs is a transport library and should compile independently
of the block layer, so the desired max segment size should be passed
down by the user.

Introduce a max_segment_size parameter for the rtrs_clt_open() call.

Fixes: f7a7a5c228d4 ("block/rnbd: client: main functionality")
Fixes: 6a98d71daea1 ("RDMA/rtrs: client: main functionality")
Fixes: cb80329c9434 ("RDMA/rtrs: client: private header with client structs and functions")
Fixes: b5c27cdb094e ("RDMA/rtrs: public interface header to establish RDMA connections")
Link: https://lore.kernel.org/r/20200519111419.924170-1-danil.kipnis@cloud.ionos.com
Signed-off-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/rtrs: server: Fix some error return code
Wei Yongjun [Tue, 19 May 2020 09:19:12 +0000 (09:19 +0000)]
RDMA/rtrs: server: Fix some error return code

Fix to return the negative error code -ENOMEM from some error handling
cases instead of 0, as done elsewhere in this function.
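
The fix follows the usual unwind pattern of setting err before jumping
to the error label; a generic sketch (labels and names are
illustrative):

buf = kcalloc(n, sizeof(*buf), GFP_KERNEL);
if (!buf) {
    err = -ENOMEM;               /* previously err was left as 0 here */
    goto err_unwind;
}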

Fixes: 9cb837480424 ("RDMA/rtrs: server: main functionality")
Fixes: 91b11610af8d ("RDMA/rtrs: server: sysfs interface functions")
Link: https://lore.kernel.org/r/20200519091912.134358-1-weiyongjun1@huawei.com
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Danil Kipnis <danil.kipnis@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
4 years agoRDMA/rtrs: client: Fix function return on success
Gustavo A. R. Silva [Tue, 19 May 2020 16:36:12 +0000 (11:36 -0500)]
RDMA/rtrs: client: Fix function return on success

Remove the if-statement and return the value contained in _err_,
unconditionally.
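
The simplification amounts to the following (sketched from the
description above):

/* before */
if (err)
    return err;
return 0;

/* after */
return err;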

Link: https://lore.kernel.org/r/20200519163612.GA6043@embeddedor
Addresses-Coverity-ID: 1493753 ("Identical code for different branches")
Fixes: 6a98d71daea1 ("RDMA/rtrs: client: main functionality")
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>