Copying large files to an NFS-mounted host was taking an absurdly long
time.
Turns out that TX BD reclaim had a subtle bug.
The loop starts off from the @txbd_dirty cursor and stops when it hits a
BD still in use by the controller. When it stops, it needs to leave the
cursor at that very BD so the next iteration resumes scanning from there.
However it was erroneously incrementing the cursor past that BD, causing
the subsequent scan(s) to fail as well, unless the BD chain was completely
drained out.
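For reference, here is a minimal sketch of the reclaim loop with the
cursor handled correctly. It is a simplified illustration, not the
driver code: the struct names (arc_emac_priv, arc_emac_bd) and the
helper name are assumed for the sketch, and the skb unmap/free work in
the middle of the loop is elided.

/* Sketch only: the cursor is advanced after the BD is confirmed
 * reclaimed, so breaking on a busy BD leaves @txbd_dirty pointing at
 * that very BD for the next pass.
 */
static void tx_reclaim_sketch(struct arc_emac_priv *priv)  /* hypothetical helper */
{
	unsigned int i;

	for (i = 0; i < TX_BD_NUM; i++) {
		unsigned int *txbd_dirty = &priv->txbd_dirty;
		struct arc_emac_bd *txbd = &priv->txbd[*txbd_dirty];
		unsigned int info = le32_to_cpu(txbd->info);

		/* BD still owned by the controller: stop here and
		 * resume at this descriptor on the next invocation.
		 */
		if ((info & FOR_EMAC) || !txbd->data)
			break;

		/* ... unmap the DMA buffer and free the skb ... */

		txbd->data = 0;
		txbd->info = 0;

		/* Advance only once this BD is fully reclaimed. */
		*txbd_dirty = (*txbd_dirty + 1) % TX_BD_NUM;
	}
}

With the increment after the ownership check, a BD still in flight is
revisited on the next pass instead of being skipped forever.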
[ARCLinux]$ ls -l -sh /disk/log.txt
17976 -rw-r--r-- 1 root root 17.5M Sep /disk/log.txt
========== Before =====================
[ARCLinux]$ time cp /disk/log.txt /mnt/.
real 31m 7.95s
user 0m 0.00s
sys 0m 0.10s
========== After =====================
[ARCLinux]$ time cp /disk/log.txt /mnt/.
real 0m 24.33s
user 0m 0.00s
sys 0m 0.19s
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Alexey Brodkin <abrodkin@synopsys.com> (commit_signer:3/4=75%)
Cc: "David S. Miller" <davem@davemloft.net> (commit_signer:3/4=75%)
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: arc-linux-dev@synopsys.com
Signed-off-by: David S. Miller <davem@davemloft.net>
 		struct sk_buff *skb = tx_buff->skb;
 		unsigned int info = le32_to_cpu(txbd->info);
-		*txbd_dirty = (*txbd_dirty + 1) % TX_BD_NUM;
-
 		if ((info & FOR_EMAC) || !txbd->data)
 			break;
 		txbd->data = 0;
 		txbd->info = 0;
+		*txbd_dirty = (*txbd_dirty + 1) % TX_BD_NUM;
+
 		if (netif_queue_stopped(ndev))
 			netif_wake_queue(ndev);
 	}