net/mlx5e: RX, Fix flush and close release flow of regular rq for legacy rq
author	Dragos Tatulea <dtatulea@nvidia.com>
	Mon, 22 May 2023 18:18:53 +0000 (21:18 +0300)
committer	Saeed Mahameed <saeedm@nvidia.com>
	Wed, 5 Jul 2023 17:57:03 +0000 (10:57 -0700)
Regular (non-XSK) RQs get flushed on XSK setup and re-activated on XSK
close. If the same regular RQ is closed (a config change for example)
soon after the XSK close, a double release occurs because the missing
wqes get released a second time.

Fixes: 3f93f82988bc ("net/mlx5e: RX, Defer page release in legacy rq for better recycling")
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c

index 704b022cd1f04bcac0513643c479d9d479ae46ef..a9575219e4555e967136aa14d73a2a33ce0ff5f6 100644 (file)
@@ -390,10 +390,18 @@ static void mlx5e_dealloc_rx_wqe(struct mlx5e_rq *rq, u16 ix)
 {
        struct mlx5e_wqe_frag_info *wi = get_frag(rq, ix);
 
-       if (rq->xsk_pool)
+       if (rq->xsk_pool) {
                mlx5e_xsk_free_rx_wqe(wi);
-       else
+       } else {
                mlx5e_free_rx_wqe(rq, wi);
+
+               /* Avoid a second release of the wqe pages: dealloc is called
+                * for the same missing wqes on regular RQ flush and on regular
+                * RQ close. This happens when XSK RQs come into play.
+                */
+               for (int i = 0; i < rq->wqe.info.num_frags; i++, wi++)
+                       wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
+       }
 }
 
 static void mlx5e_xsk_free_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)