net/mlx5: Fix error print in case of IRQ request failed
author	Shay Drory <shayd@nvidia.com>
	Wed, 24 Nov 2021 21:10:57 +0000 (23:10 +0200)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
	Wed, 5 Jan 2022 11:42:34 +0000 (12:42 +0100)
[ Upstream commit aa968f922039706f6d13e8870b49e424d0a8d9ad ]

When the IRQ layer fails to find or to request an IRQ, the driver
prints the first CPU of the provided affinity mask as part of the
error message. An empty affinity mask is valid input to the IRQ
layer, but calling cpumask_first() on an empty mask is an error.

Remove the first-CPU print from the error message and report the
error code instead.
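
For illustration only, outside the patch itself: a minimal sketch of why
the old print was misleading. The helper report_irq_failure() and its
arguments are hypothetical; on an empty mask, cpumask_first() returns a
value >= nr_cpu_ids, i.e. a CPU number that does not exist, so any CPU
printed from an empty affinity would be bogus.

	#include <linux/cpumask.h>
	#include "mlx5_core.h"	/* mlx5_core_err(), struct mlx5_core_dev */

	/* Hypothetical helper, not part of the driver. */
	static void report_irq_failure(struct mlx5_core_dev *dev,
				       const struct cpumask *affinity, long err)
	{
		if (cpumask_empty(affinity)) {
			/* No meaningful CPU to report here: cpumask_first()
			 * would return >= nr_cpu_ids. Print the error code,
			 * as the patch does.
			 */
			mlx5_core_err(dev, "Didn't find a matching IRQ. err = %ld\n",
				      err);
			return;
		}

		/* Only a known non-empty mask makes cpumask_first() safe to print. */
		mlx5_core_err(dev, "Didn't find IRQ for cpu = %u, err = %ld\n",
			      cpumask_first(affinity), err);
	}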

Fixes: c36326d38d93 ("net/mlx5: Round-Robin EQs over IRQs")
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c

index 763c83a..11f3649 100644
@@ -346,8 +346,8 @@ static struct mlx5_irq *irq_pool_request_affinity(struct mlx5_irq_pool *pool,
        new_irq = irq_pool_create_irq(pool, affinity);
        if (IS_ERR(new_irq)) {
                if (!least_loaded_irq) {
-                       mlx5_core_err(pool->dev, "Didn't find IRQ for cpu = %u\n",
-                                     cpumask_first(affinity));
+                       mlx5_core_err(pool->dev, "Didn't find a matching IRQ. err = %ld\n",
+                                     PTR_ERR(new_irq));
                        mutex_unlock(&pool->lock);
                        return new_irq;
                }
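
For context, a sketch of the ERR_PTR convention the error path above
relies on: irq_pool_create_irq() returns either a valid pointer or an
encoded error, and PTR_ERR() recovers the negative errno for logging.
The caller request_one_irq() below is hypothetical; the real call chain
in pci_irq.c differs.

	#include <linux/err.h>	/* IS_ERR(), PTR_ERR() */

	/* Hypothetical caller, for illustration only. */
	static int request_one_irq(struct mlx5_irq_pool *pool,
				   const struct cpumask *affinity)
	{
		struct mlx5_irq *irq;

		irq = irq_pool_request_affinity(pool, affinity);
		if (IS_ERR(irq))
			/* PTR_ERR() yields the negative errno encoded in the
			 * pointer, the same value the new print logs with %ld.
			 */
			return PTR_ERR(irq);

		return 0;
	}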