> stack backtrace:
>  [<f9099d31>] rtl8139_start_xmit+0xd9/0xff [8139too]
>  [<c11ad5ea>] netpoll_send_skb+0x98/0xea
This seems to be a real deadlock...

So netpoll_send_skb() takes the _xmit_lock, which is fine in itself,
but then rtl8139_start_xmit() comes around while that lock is held and
does

	spin_unlock_irq(&tp->lock);

which unconditionally enables interrupts and softirqs. This is quite
bad, because the xmit lock is also taken in softirq context by the
watchdog, like this:
 [<c1200376>] _spin_lock+0x23/0x32
 [<c11af282>] dev_watchdog+0x14/0xb1
 [<c101dab2>] run_timer_softirq+0xf2/0x14a
 [<c101a691>] __do_softirq+0x55/0xb0
 [<c1004a8d>] do_softirq+0x58/0xbd
That path now deadlocks: spin_unlock_irq() has re-enabled
irqs/softirqs while the _xmit_lock is still held, so the watchdog
softirq can run on the same CPU and spin on that lock forever.
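To make the ordering concrete, here is a minimal sketch of the hazard
(illustrative only, not driver code; "outer" stands in for
dev->_xmit_lock and "inner" for tp->lock):

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(outer);	/* like dev->_xmit_lock; also taken
					 * by the softirq watchdog */
	static DEFINE_SPINLOCK(inner);	/* like tp->lock */

	static void xmit_path(void)	/* like rtl8139_start_xmit() */
	{
		spin_lock_irq(&inner);
		/* ... program the hardware ... */
		spin_unlock_irq(&inner);	/* BUG: unconditionally re-enables
						 * IRQs, and with them softirqs */
	}

	static void netpoll_path(void)	/* like netpoll_send_skb() */
	{
		unsigned long flags;

		local_irq_save(flags);
		spin_lock(&outer);	/* "xmit lock" held, IRQs off */
		xmit_path();		/* IRQs are back on here: on IRQ exit
					 * the timer softirq can run the
					 * watchdog, which spins on "outer"
					 * on this CPU -> deadlock */
		spin_unlock(&outer);
		local_irq_restore(flags);
	}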
The patch below turns this into an irqsave/irqrestore pair, so that
interrupts don't get enabled unconditionally.
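For reference, the irqsave variants preserve whatever IRQ state the
caller had; a minimal sketch of the same path with the fix applied
(again illustrative, reusing the placeholder "inner" lock from above):

	static void xmit_path_fixed(void)
	{
		unsigned long flags;

		spin_lock_irqsave(&inner, flags);
		/* ... program the hardware ... */
		spin_unlock_irqrestore(&inner, flags);	/* IRQs stay disabled
							 * if the caller had
							 * them disabled */
	}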
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
 	void __iomem *ioaddr = tp->mmio_addr;
 	unsigned int entry;
 	unsigned int len = skb->len;
+	unsigned long flags;
 
 	/* Calculate the next Tx descriptor entry. */
 	entry = tp->cur_tx % NUM_TX_DESC;
 
 		return 0;
 	}
 
-	spin_lock_irq(&tp->lock);
+	spin_lock_irqsave(&tp->lock, flags);
 	RTL_W32_F (TxStatus0 + (entry * sizeof (u32)),
 		   tp->tx_flag | max(len, (unsigned int)ETH_ZLEN));
 
 	if ((tp->cur_tx - NUM_TX_DESC) == tp->dirty_tx)
 		netif_stop_queue (dev);
-	spin_unlock_irq(&tp->lock);
+	spin_unlock_irqrestore(&tp->lock, flags);
 
 	if (netif_msg_tx_queued(tp))
 		printk (KERN_DEBUG "%s: Queued Tx packet size %u to slot %d.\n",