Recent changes have ensured that MAX_SKB_FRAGS is never less than 16. As a
result, we were seeing issues on systems with 64K pages: DESC_NEEDED grew to
68, and over 136 descriptors had to be free before clean_tx_irq would wake
the queue.

This patch defines DESC_NEEDED as MAX_SKB_FRAGS + 4 unconditionally. This
should prevent any possible deadlocks on systems with 64K pages, since we now
only require 42 free descriptors to wake the queue.
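
For reference, the arithmetic above can be reproduced with a small userspace
sketch. The constants used here (IXGBE_MAX_DATA_PER_TXD = 16K, MAX_SKB_FRAGS
= 16 on a 64K-page build, and a wake threshold of twice DESC_NEEDED) are
assumptions chosen to illustrate the point, not values pulled from the driver
headers; the exact numbers depend on how a given kernel configuration defines
MAX_SKB_FRAGS.

/*
 * Standalone sketch of the descriptor math (not driver code).  The
 * constant values here are assumptions for illustration only.
 */
#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

#define PAGE_SIZE		65536U	/* 64K pages (assumed) */
#define MAX_SKB_FRAGS		16U	/* assumed minimum after the recent change */
#define IXGBE_MAX_DATA_PER_TXD	16384U	/* assumed per-descriptor data limit */

#define TXD_USE_COUNT(S)	DIV_ROUND_UP((S), IXGBE_MAX_DATA_PER_TXD)

/* Old worst case: every fragment may span a whole 64K page. */
#define DESC_NEEDED_OLD	((MAX_SKB_FRAGS * TXD_USE_COUNT(PAGE_SIZE)) + 4)
/* New formula: one descriptor per fragment plus a small fixed overhead. */
#define DESC_NEEDED_NEW	(MAX_SKB_FRAGS + 4)

int main(void)
{
	/* clean_tx_irq wakes the queue once roughly 2 * DESC_NEEDED
	 * descriptors are free, which is where the 136 figure comes from.
	 */
	printf("old: DESC_NEEDED = %u, wake at %u free\n",
	       DESC_NEEDED_OLD, 2 * DESC_NEEDED_OLD);
	printf("new: DESC_NEEDED = %u, wake at %u free\n",
	       DESC_NEEDED_NEW, 2 * DESC_NEEDED_NEW);
	return 0;
}
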
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

 /* Tx Descriptors needed, worst case */
 #define TXD_USE_COUNT(S) DIV_ROUND_UP((S), IXGBE_MAX_DATA_PER_TXD)
-#define DESC_NEEDED ((MAX_SKB_FRAGS * TXD_USE_COUNT(PAGE_SIZE)) + 4)
+#define DESC_NEEDED (MAX_SKB_FRAGS + 4)
 /* wrapper around a pointer to a socket buffer,
  * so a DMA handle can be stored along with the buffer */