From: Maxim Mikityanskiy
Date: Fri, 30 Sep 2022 16:28:50 +0000 (-0700)
Subject: net/mlx5e: Introduce wqe_index_mask for legacy RQ
X-Git-Tag: v6.6.17~6504^2~22^2~13
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=a064c609849bf71adc7484b030539568cd2a5155;p=platform%2Fkernel%2Flinux-rpi.git

net/mlx5e: Introduce wqe_index_mask for legacy RQ

When fragments of different WQEs share the same page, mlx5e_post_rx_wqes
must wait until the old WQE stops using the page; only then can the new
WQE allocate a new page. Essentially, this means that if WQE index i is
still in use, the allocation must stop before `i % bulk`, where bulk is
the number of WQEs that may share the same page. As bulk is always a
power of two, `i % bulk = i & (bulk - 1)`, and the new wqe_index_mask
field equals `bulk - 1`.

At the same time, wqe_bulk remains for optimization purposes and stores
`max(bulk, 8)`, which allows skipping the allocation until at least
8 WQEs are free.

Signed-off-by: Maxim Mikityanskiy
Reviewed-by: Tariq Toukan
Signed-off-by: Saeed Mahameed
Signed-off-by: Jakub Kicinski
---

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 95a232fb2127..8e174a7f7c25 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -660,6 +660,7 @@ struct mlx5e_rq_frags_info {
 	u8 num_frags;
 	u8 log_num_frags;
 	u8 wqe_bulk;
+	u8 wqe_index_mask;
 };
 
 struct mlx5e_dma_info {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index 68bc66cbd8a5..49306a68b3b5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -586,7 +586,14 @@ static int mlx5e_build_rq_frags_info(struct mlx5_core_dev *mdev,
 		info->arr[0].frag_size = byte_count;
 		info->arr[0].frag_stride = frag_stride;
 		info->num_frags = 1;
-		info->wqe_bulk = PAGE_SIZE / frag_stride;
+
+		/* N WQEs share the same page, N = PAGE_SIZE / frag_stride. The
+		 * first WQE in the page is responsible for allocation of this
+		 * page, this WQE's index is k*N. If WQEs [k*N+1; k*N+N-1] are
+		 * still not completed, the allocation must stop before k*N.
+		 */
+		info->wqe_index_mask = (PAGE_SIZE / frag_stride) - 1;
+
 		goto out;
 	}
 
@@ -635,11 +642,21 @@ static int mlx5e_build_rq_frags_info(struct mlx5_core_dev *mdev,
 		i++;
 	}
 	info->num_frags = i;
-	/* number of different wqes sharing a page */
-	info->wqe_bulk = 1 + (info->num_frags % 2);
+
+	/* The last fragment of WQE with index 2*N may share the page with the
+	 * first fragment of WQE with index 2*N+1 in certain cases. If WQE 2*N+1
+	 * is not completed yet, WQE 2*N must not be allocated, as it's
+	 * responsible for allocating a new page.
+	 */
+	info->wqe_index_mask = info->num_frags % 2;
+
 out:
-	info->wqe_bulk = max_t(u8, info->wqe_bulk, 8);
+	/* Bulking optimization to skip allocation until at least 8 WQEs can be
+	 * allocated in a row. At the same time, never start allocation when
+	 * the page is still used by older WQEs.
+	 */
+	info->wqe_bulk = max_t(u8, info->wqe_index_mask + 1, 8);
+
 	info->log_num_frags = order_base_2(info->num_frags);
 
 	return 0;
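
Editor's note: the sketch below is a minimal, stand-alone user-space illustration of the
power-of-two arithmetic the commit message relies on; it is not driver code. The names
bulk and group_start, and the example value of 4 WQEs per page, are hypothetical. It only
demonstrates that for a power-of-two bulk, i % bulk == i & (bulk - 1), so storing
wqe_index_mask = bulk - 1 gives both a WQE's offset within its page group and, via ~mask,
the first WQE of the group, which owns the page allocation.

#include <assert.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical example: 4 WQEs share one page (e.g. PAGE_SIZE / frag_stride). */
	unsigned int bulk = 4;
	unsigned int wqe_index_mask = bulk - 1;	/* what the patch stores */

	for (unsigned int i = 0; i < 16; i++) {
		/* Power-of-two modulo computed with the mask. */
		assert((i % bulk) == (i & wqe_index_mask));

		/* First WQE of i's page group: the one responsible for allocating the page. */
		unsigned int group_start = i & ~wqe_index_mask;

		printf("wqe %2u: group starts at %2u, offset in group %u\n",
		       i, group_start, i & wqe_index_mask);
	}

	/* wqe_bulk keeps the batching optimization described in the message: max(bulk, 8). */
	unsigned int wqe_bulk = bulk > 8 ? bulk : 8;
	printf("wqe_bulk = %u\n", wqe_bulk);

	return 0;
}

Built with a plain cc invocation, this prints each index's group boundary and offset. In the
driver, the same arithmetic is what lets mlx5e_post_rx_wqes stop refilling before it reaches
a page group whose older WQE is still in use, as described in the commit message.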