RDMA/mlx5: Don't set tx affinity when lag is in hash mode
author	Liu, Changcheng <jerrliu@nvidia.com>
Wed, 7 Sep 2022 23:36:26 +0000 (16:36 -0700)
committer	Saeed Mahameed <saeedm@nvidia.com>
Tue, 27 Sep 2022 19:50:27 +0000 (12:50 -0700)
commit	a83bb5df2ac604ab418fbe0a8720f55de46652eb
tree	33308b93f47884b42dc1961cd1a08e2a6795ee82
parent	8d1ac895fff96a228db20db92243e93687659ef7
RDMA/mlx5: Don't set tx affinity when lag is in hash mode

In hash mode, without setting tx affinity explicitly, the port select
flow table decides which port is used for the traffic.
If the port_select_flow_table_bypass capability is supported and tx
affinity is set explicitly for a QP/TIS, that QP/TIS is added to the
explicit affinity table in FW, which is then consulted to decide which
port carries the traffic.
1. An overloaded explicit affinity table may hurt performance.
   To avoid this, do not set tx affinity explicitly by default.
2. Packets of the same flow need to be transmitted on the same port.
   Because packets of the same flow use different QPs in the slow and
   fast paths, tx affinity should not be set explicitly for these QPs.
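
For illustration only, a minimal sketch of the RDMA-driver-side check the
description above implies. The helper mlx5_lag_mode_is_hash() and the
MLX5_CAP_PORT_SELECTION() capability accessor are named here as assumptions
to mirror the text, not quoted verbatim from the patch:

	/* drivers/infiniband/hw/mlx5/mlx5_ib.h (sketch) */
	static inline bool
	mlx5_ib_lag_should_assign_affinity(struct mlx5_ib_dev *dev)
	{
		/*
		 * In hash mode with port_select_flow_table_bypass supported,
		 * skip explicit tx affinity so the QP/TIS is not added to the
		 * explicit affinity table in FW (assumed helper names).
		 */
		if (dev->lag_active &&
		    mlx5_lag_mode_is_hash(dev->mdev) &&
		    MLX5_CAP_PORT_SELECTION(dev->mdev,
					    port_select_flow_table_bypass))
			return false;

		return dev->lag_active ||
		       (MLX5_CAP_GEN(dev->mdev, num_lag_ports) > 1 &&
			MLX5_CAP_GEN(dev->mdev, lag_tx_port_affinity));
	}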

Signed-off-by: Liu, Changcheng <jerrliu@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
drivers/infiniband/hw/mlx5/mlx5_ib.h
drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
include/linux/mlx5/driver.h
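
The file list suggests the hash-mode query is exported from the LAG core and
declared in driver.h. A rough sketch of such a helper; the lock name, mode
flag, and mlx5_lag_dev() accessor are assumptions for illustration:

	/* drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c (sketch) */
	bool mlx5_lag_mode_is_hash(struct mlx5_core_dev *dev)
	{
		struct mlx5_lag *ldev;
		unsigned long flags;
		bool res = false;

		/* Look up the LAG device under the lag lock and test the
		 * hash-based mode flag (names assumed).
		 */
		spin_lock_irqsave(&lag_lock, flags);
		ldev = mlx5_lag_dev(dev);
		if (ldev)
			res = test_bit(MLX5_LAG_MODE_FLAG_HASH_BASED,
				       &ldev->mode_flags);
		spin_unlock_irqrestore(&lag_lock, flags);

		return res;
	}
	EXPORT_SYMBOL(mlx5_lag_mode_is_hash);

	/* include/linux/mlx5/driver.h (sketch) */
	bool mlx5_lag_mode_is_hash(struct mlx5_core_dev *dev);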