Nikolay Aleksandrov [Thu, 11 Sep 2014 20:49:22 +0000 (22:49 +0200)]
bonding: 3ad: clean up curr_slave_lock usage
Remove the read_lock in bond_3ad_lacpdu_recv() since when the slave is
being released its rx_handler is removed before 3ad unbind, so even if
packets arrive, they won't see the slave in an inconsistent state.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rusty Russell [Thu, 11 Sep 2014 00:47:38 +0000 (10:17 +0930)]
virtio_ring: unify direct/indirect code paths.
virtqueue_add() populates the virtqueue descriptor table from the sgs
given. If it uses an indirect descriptor table, then it puts a single
descriptor in the descriptor table pointing to the kmalloc'ed indirect
table where the sg is populated.
Previously vring_add_indirect() did the allocation and the simple
linear layout. We replace that with alloc_indirect() which allocates
the indirect table then chains it like the normal descriptor table so
we can reuse the core logic.
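In the spirit of the change, a minimal sketch of the new helper (simplified; error handling elided):

/* Allocate the indirect table with its entries pre-chained via ->next,
 * mirroring the ring's own layout so one fill loop serves both paths. */
static struct vring_desc *alloc_indirect(unsigned int total_sg, gfp_t gfp)
{
        struct vring_desc *desc;
        unsigned int i;

        desc = kmalloc(total_sg * sizeof(struct vring_desc), gfp);
        if (!desc)
                return NULL;

        for (i = 0; i < total_sg; i++)
                desc[i].next = i + 1;
        return desc;
}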
This slows down pktgen (which uses direct descriptors) by less than
half a percent, and also slows down vring_bench, but it's far neater.
vring_bench before:
1061485790-1104800648(1.08254e+09+/-6.6e+06)ns
vring_bench after:
1125610268-1183528965(1.14172e+09+/-8e+06)ns
pktgen before:
787781-796334(793165+/-2.4e+03)pps 365-369(367.5+/-1.2)Mb/sec (365530384-369498976(3.68028e+08+/-1.1e+06)bps) errors: 0
pktgen after:
779988-790404(786391+/-2.5e+03)pps 361-366(364.35+/-1.3)Mb/sec (361914432-366747456(3.64885e+08+/-1.2e+06)bps) errors: 0
Now, if we force indirect descriptors by turning off any_header_sg
in virtio_net.c:
pktgen before:
713773-721062(718374+/-2.1e+03)pps 331-334(332.95+/-0.92)Mb/sec (331190672-334572768(3.33325e+08+/-9.6e+05)bps) errors: 0
pktgen after:
710542-719195(714898+/-2.4e+03)pps 329-333(331.15+/-1.1)Mb/sec (329691488-333706480(3.31713e+08+/-1.1e+06)bps) errors: 0
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rusty Russell [Thu, 11 Sep 2014 00:47:37 +0000 (10:17 +0930)]
virtio_ring: assume sgs are always well-formed.
We used to have several callers which just used arrays. They're
gone, so we can use sg_next() everywhere, simplifying the code.
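For illustration, the unified traversal looks roughly like this (a sketch, assuming a well-formed, properly terminated scatterlist):

/* count entries in a well-terminated scatterlist; sg_next() now
 * works for chained and array-backed lists alike */
static unsigned int sg_count(struct scatterlist *sgs)
{
        struct scatterlist *sg;
        unsigned int count = 0;

        for (sg = sgs; sg; sg = sg_next(sg))
                count++;        /* e.g. descriptors to reserve */
        return count;
}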
On my laptop, this slowed down vring_bench by 15%:
vring_bench before:
936153354-967745359(9.44739e+08+/-6.1e+06)ns
vring_bench after:
1061485790-1104800648(1.08254e+09+/-6.6e+06)ns
However, a more realistic test using pktgen on an AMD FX(tm)-8320 saw
a few percent improvement:
pktgen before:
767390-792966(785159+/-6.5e+03)pps 356-367(363.75+/-2.9)Mb/sec (356068960-367936224(3.64314e+08+/-3e+06)bps) errors: 0
pktgen after:
787781-796334(793165+/-2.4e+03)pps 365-369(367.5+/-1.2)Mb/sec (365530384-369498976(3.68028e+08+/-1.1e+06)bps) errors: 0
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rusty Russell [Thu, 11 Sep 2014 00:47:36 +0000 (10:17 +0930)]
virtio_net: pass well-formed sgs to virtqueue_add_*()
This is the only driver which doesn't hand virtqueue_add_inbuf and
virtqueue_add_outbuf a well-formed, well-terminated sg. Fix it,
so we can make virtio_add_* simpler.
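The generic idiom for building a well-formed sg looks like this (a sketch, not the actual hunk; hdr, data and len are illustrative):

struct scatterlist sg[2];

sg_init_table(sg, 2);                  /* zeroes entries, marks sg[1] as the end */
sg_set_buf(&sg[0], &hdr, sizeof(hdr));
sg_set_buf(&sg[1], data, len);
/* sg is now well-formed and well-terminated for virtqueue_add_outbuf() */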
pktgen results:
modprobe pktgen
echo 'add_device eth0' > /proc/net/pktgen/kpktgend_0
echo nowait 1 > /proc/net/pktgen/eth0
echo count 1000000 > /proc/net/pktgen/eth0
echo clone_skb 100000 > /proc/net/pktgen/eth0
echo dst_mac 4e:14:25:a9:30:ac > /proc/net/pktgen/eth0
echo dst 192.168.1.2 > /proc/net/pktgen/eth0
for i in `seq 20`; do echo start > /proc/net/pktgen/pgctrl; tail -n1 /proc/net/pktgen/eth0; done
Before:
746547-793084(786421+/-9.6e+03)pps 346-367(364.4+/-4.4)Mb/sec (346397808-367990976(3.649e+08+/-4.5e+06)bps) errors: 0
After:
767390-792966(785159+/-6.5e+03)pps 356-367(363.75+/-2.9)Mb/sec (356068960-367936224(3.64314e+08+/-3e+06)bps) errors: 0
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Sep 2014 16:43:24 +0000 (12:43 -0400)]
Merge branch 'master' of git://git./linux/kernel/git/jkirsher/net-next
Jeff Kirsher says:
====================
Intel Wired LAN Driver Updates 2014-09-12
This series contains updates to e1000, ixgbe and ixgbevf.
Mark provides two fixes to reduce compile warnings produced by ixgbe
and ixgbevf.
Alex provides two patches for ixgbe. The first removes the receive buffer
allocation at the end of ixgbe_clean_rx_irq(), to avoid the extra
latency introduced by the MMIO write.
The second patch addresses several issues in the current ixgbe implementation
of busy poll sockets. It was possible for frames to be delivered out of
order if they were held in GRO, so this is addressed by flushing the GRO
buffers before releasing the q_vector back to the idle state. Also, we
were having to take a spinlock when changing the state to and from idle;
to resolve this, the state value is replaced with an atomic, using
atomic_cmpxchg to change the value from idle and a simple atomic set
to restore it back to idle after we have acquired it. This allows us
to use a locked operation only on acquiring the vector, without the need
for a locked operation to release it.
Florian Westphal provides several patches for e1000 which do some
cleanup and updating of the driver. He moved e1000_tbi_adjust_stats()
so that he could make the function static, added a helper function
to deal with the tbi workaround that was located in two different
Rx clean functions, and added an e1000_rx_buffer struct for use on
receive, since transmit and receive have different requirements. He also
updates e1000 to use the napi_gro_frags API.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Sep 2014 16:30:33 +0000 (12:30 -0400)]
Merge branch 'sched_rcu'
John Fastabend says:
====================
net/sched rcu classifiers and tcf
This series converts the tcf_proto usage to RCU.
This requires updating each classifier individually to handle the
new copy/update requirement and also to update the core list
traversals. This makes the assumption that updates to the tables
are infrequent in comparison to the packets per second being
classified. On a 10Gbps link running near line rate we can easily
produce 12+ million packets per second, so IMO this is a reasonable
assumption. The updates are serialized by RTNL.
I have done some basic testing on this series and do not see any
immediate splats or issues. The patch series has been running
on my dev systems for a month or so now and I've not seen any
issues, although my configurations are not overly complicated.
My test cases at this point cover all the filters with a
tight loop to add/remove filters, some basic estimator tests
where I add an estimator to the qdisc and verify the statistics
are accurate using pktgen, and finally a small script to
exercise the 'tc actions' interface. Feel free to send me more
tests off list and I can run them.
This is prep work to drop the qdisc lock with the first
target being the ingress qdisc. To be done is making the
tc actions RCU safe and statistics per cpu. These patches
are in the works.
Comments:
- Checkpatch is still giving errors on some >80 char lines; I know
about this. IMO the way to fix this is to restructure the sched
code to avoid being so heavily indented, but doing that here
bloats the patchset, and anyway there are already lots of >80
char lines in these files. I would prefer to keep the patches as is,
but let me know if others think I should fix these and I will.
A follow-up patch set could restructure the code and fix this
throughout the code blocks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:10:24 +0000 (20:10 -0700)]
net: sched: rcu'ify cls_bpf
This patch makes the cls_bpf classifier RCU safe. The tcf_lock
was being used to protect a list of cls_bpf_prog; now this list
is RCU safe and updates occur with rcu_replace.
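The update pattern is roughly this (a sketch; the free callback name is assumed):

/* writers hold RTNL; readers walk the list under RCU */
list_replace_rcu(&oldprog->link, &newprog->link);
call_rcu(&oldprog->rcu, cls_bpf_delete_prog_rcu);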
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:09:49 +0000 (20:09 -0700)]
net: sched: rcu'ify cls_rsvp
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:09:16 +0000 (20:09 -0700)]
net: sched: make cls_u32 lockless
Make the cls_u32 classifier safe to run without holding the lock. This patch
converts the statistics kept in the read section of u32_classify into
per cpu counters.
This patch was tested with a tight u32 filter add/delete loop while
generating traffic with pktgen. By running pktgen on vlan devices
created on top of a physical device we can hit the qdisc layer
correctly. For ingress qdiscs a loopback cable was used.
for i in {1..100}; do
q=`echo $i%8|bc`;
echo -n "u32 tos: iteration $i on queue $q";
tc filter add dev p3p2 parent $p prio $i u32 match ip tos 0x10 0xff \
action skbedit queue_mapping $q;
sleep 1;
tc filter del dev p3p2 prio $i;
echo -n "u32 tos hash table: iteration $i on queue $q";
tc filter add dev p3p2 parent $p protocol ip prio $i handle 628: u32 divisor 1
tc filter add dev p3p2 parent $p protocol ip prio $i u32 \
match ip protocol 17 0xff link 628: offset at 0 mask 0xf00 shift 6 plus 0
tc filter add dev p3p2 parent $p protocol ip prio $i u32 \
ht 628:0 match ip tos 0x10 0xff action skbedit queue_mapping $q
sleep 2;
tc filter del dev p3p2 prio $i
sleep 1;
done
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:08:47 +0000 (20:08 -0700)]
net: sched: make cls_u32 per cpu
This uses per cpu counters in cls_u32 in preparation
to convert over to rcu.
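In outline, the counters are allocated per cpu at knode setup and bumped locklessly in the classify fast path (a sketch; field names as assumed from the patch context):

/* setup, RTNL held */
n->pf = __alloc_percpu(size, __alignof__(struct tc_u32_pcnt));

/* u32_classify() fast path: no shared cacheline, no lock */
__this_cpu_inc(n->pf->rcnt);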
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:08:20 +0000 (20:08 -0700)]
net: sched: RCU cls_tcindex
Make cls_tcindex RCU safe.
This patch adds a new RCU routine, rcu_dereference_bh_rtnl(), to check
that the caller holds either the rcu read lock or RTNL. This is needed to
handle the case where tcindex_lookup() is called in both contexts.
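The helper is a checked dereference along these lines:

/* valid under either rcu_read_lock_bh() or RTNL */
#define rcu_dereference_bh_rtnl(p) \
        rcu_dereference_bh_check(p, lockdep_rtnl_is_held())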
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:07:50 +0000 (20:07 -0700)]
net: sched: RCU cls_route
RCUify the route classifier. For now, however, spinlocks are used to
protect the fastmap cache.
The issue here is the fastmap may be read by one CPU while the
cache is being updated by another. An array of pointers could be
one possible solution.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:07:22 +0000 (20:07 -0700)]
net: sched: fw use RCU
RCU'ify fw classifier.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:06:55 +0000 (20:06 -0700)]
net: sched: cls_flow use RCU
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:06:26 +0000 (20:06 -0700)]
net: sched: cls_cgroup use RCU
Make cgroup classifier safe for RCU.
Also drop the calls in the classify routine that were doing
rcu_read_lock()/rcu_read_unlock(). If the rcu_read_lock() isn't held
entering this routine we have issues with deleting the classifier
chain, so remove the unnecessary rcu_read_lock()/rcu_read_unlock()
pair, noting that all paths AFAIK hold rcu_read_lock.
If there is a case where classify is called without the rcu read lock
then an rcu splat will occur and we can correct it.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:05:59 +0000 (20:05 -0700)]
net: sched: cls_basic use RCU
Enable basic classifier for RCU.
Dereferencing tp->root may look a bit strange here but it is needed
by my accounting because it is allocated at init time and needs to
be kfree'd at destroy time. However because it may be referenced in
the classify() path we must wait an RCU grace period before free'ing
it. We use kfree_rcu() and rcu_ APIs to enforce this. This pattern
is used in all the classifiers.
Also the hgenerator can be incremented without concern because it
is always incremented under RTNL.
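The resulting lifecycle, in sketch form (struct simplified):

struct basic_head {
        struct list_head flist;
        struct rcu_head rcu;    /* for deferred free */
};

/* destroy path: classify() may still walk head under RCU,
 * so defer the kfree by one grace period */
kfree_rcu(head, rcu);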
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:05:27 +0000 (20:05 -0700)]
net: rcu-ify tcf_proto
rcu'ify tcf_proto; this allows calling tc_classify() without holding
any locks. Updaters are protected by RTNL.
This patch prepares the core net_sched infrastructure for running
the classifier/action chains without holding the qdisc lock, however
it does nothing to ensure cls_xxx and act_xxx types also work without
locking. Additional patches are required to address the fallout.
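The classify walk then becomes an RCU traversal, roughly (a sketch of the shape, not the exact hunk):

const struct tcf_proto *tp;

/* reader side: no qdisc lock, callers hold rcu_read_lock_bh() */
for (tp = rcu_dereference_bh(q->filter_list); tp;
     tp = rcu_dereference_bh(tp->next)) {
        /* ... tp->classify(skb, tp, res) ... */
}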
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:04:52 +0000 (20:04 -0700)]
net: qdisc: use rcu prefix and silence sparse warnings
Add __rcu notation to qdisc handling; by doing this we can make
smatch output more legible. And anyway some of the cases should
be using rcu_dereference(), see qdisc_all_tx_empty(),
qdisc_tx_changing(), and so on.
Also, the *wake_queue() API is commonly called from driver timer routines
without the rcu lock or rtnl lock, so I added rcu_read_lock() blocks
around netif_wake_subqueue and netif_tx_wake_queue.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sowmini Varadhan [Thu, 11 Sep 2014 13:57:22 +0000 (09:57 -0400)]
sunvnet: Avoid sending superfluous LDC messages.
When sending out a burst of packets across multiple descriptors,
it is sufficient to send one LDC "start" trigger for
the first descriptor, so do not send an LDC "start" for every
pass through vnet_start_xmit. Similarly, it is sufficient to send
one "DRING_STOPPED" trigger for the last dring (and if that
fails, hold off and send the trigger later).
Optimizing the number of LDC messages helps avoid
filling up the LDC channel with superfluous LDC messages
that risk triggering flow-control on the channel,
and also boosts performance.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Raghuram Kothakota <raghuram.kothakota@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Subbaraya Sundeep Bhatta [Thu, 11 Sep 2014 09:23:33 +0000 (14:53 +0530)]
net: axienet: remove unnecessary ether_setup after alloc_etherdev
Calling ether_setup is redundant since alloc_etherdev calls it.
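For illustration (a sketch; the private struct is the driver's own):

ndev = alloc_etherdev(sizeof(struct axienet_local));
if (!ndev)
        return -ENOMEM;
/* ether_setup(ndev); -- redundant, alloc_etherdev() already did it */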
Signed-off-by: Subbaraya Sundeep Bhatta <sbhatta@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Varka Bhadram [Thu, 11 Sep 2014 07:20:50 +0000 (12:50 +0530)]
ethernet: amd: use pr_info_once()
Use pr_info_once() to print the version info of the
driver in the probe function only once. There is no need for the static
variable here.
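The pattern replaced, in sketch form:

/* before: manual one-shot guard */
static int version_printed;
if (!version_printed++)
        pr_info("%s", version);

/* after: pr_info_once() keeps the one-shot state internally */
pr_info_once("%s", version);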
Signed-off-by: Varka Bhadram <varkab@cdac.in>
Signed-off-by: David S. Miller <davem@davemloft.net>
Scott Wood [Thu, 11 Sep 2014 02:23:18 +0000 (21:23 -0500)]
udp: Fix inverted NAPI_GRO_CB(skb)->flush test
Commit 2abb7cdc0d ("udp: Add support for doing checksum unnecessary
conversion") caused napi_gro_cb structs with the "flush" field zero to
conversion") caused napi_gro_cb structs with the "flush" field zero to
take the "udp_gro_receive" path rather than the "set flush to 1" path
that they would previously take. As a result I saw booting from an NFS
root hang shortly after starting userspace, with "server not
responding" messages.
This change to the handling of "flush == 0" packets appears to be
incidental to the goal of adding new code in the case where
skb_gro_checksum_validate_zero_check() returns zero. Based on that and
the fact that it breaks things, I'm assuming that it is unintentional.
Fixes: 2abb7cdc0d ("udp: Add support for doing checksum unnecessary conversion")
Cc: Tom Herbert <therbert@google.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 12 Sep 2014 21:51:32 +0000 (17:51 -0400)]
Merge branch 'sock_queue_err_skb'
Alexander Duyck says:
====================
Address reference counting issues with sock_queue_err_skb
After looking over the code for skb_clone_sk after some comments made by
Eric Dumazet I have come to the conclusion that skb_clone_sk is taking the
correct approach in how to handle the sk_refcnt when creating a buffer that
is eventually meant to be returned to the socket via the sock_queue_err_skb
function.
However upon review of other callers I found what I believe to be a
possible reference count issue in the path for handling "wifi ack" packets.
To address this I have applied the same logic that is currently in place so
that the sk_refcnt will be forced to stay at least 1, or we will not
provide an skb to return in the sk_error_queue.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Wed, 10 Sep 2014 22:05:42 +0000 (18:05 -0400)]
mac80211: Resolve sk_refcnt/sk_wmem_alloc issue in wifi ack path
There is a possible issue with the use, or lack thereof, of sk_refcnt and
sk_wmem_alloc in the wifi ack status functionality.
Specifically, if a socket were to request acknowledgements, and the socket
were to have sk_refcnt drop to 0, resulting in it waiting on sk_wmem_alloc
to reach 0, it would be possible to have sock_queue_err_skb orphan the last
buffer, resulting in __sk_free being called on the socket. After this the
buffer is enqueued on sk_error_queue; however, the queue has already been
flushed, resulting in at least a memory leak, if not data corruption.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Wed, 10 Sep 2014 22:05:26 +0000 (18:05 -0400)]
skb: Add documentation for skb_clone_sk
This change adds some documentation to the call skb_clone_sk. This is
meant to help clarify the purpose of the function for other developers.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sébastien Barré [Wed, 10 Sep 2014 16:20:23 +0000 (18:20 +0200)]
Revert "ipv4: Clarify in docs that accept_local requires rp_filter."
This reverts commit c801e3cc1925 ("ipv4: Clarify in docs that accept_local
requires rp_filter."). It is not needed anymore since commit 1dced6a85482
("ipv4: Restore accept_local behaviour in fib_validate_source()").
Suggested-by: Julian Anastasov <ja@ssi.bg>
Cc: Gregory Detal <gregory.detal@uclouvain.be>
Cc: Christoph Paasch <christoph.paasch@uclouvain.be>
Cc: Hannes Frederic Sowa <hannes@redhat.com>
Cc: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: Sébastien Barré <sebastien.barre@uclouvain.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Westphal [Wed, 3 Sep 2014 13:34:42 +0000 (13:34 +0000)]
e1000: switch to napi_gro_frags api
napi_gro_frags allows skb re-use in case GRO can merge payload pages
into an skb on the GRO lists.
netperf TCP_STREAM, kvm-e1000 emulation, mtu 9k:
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
old: 87380 16384 16384 30.00 8985.78
new: 87380 16384 16384 30.00 9907.05
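The receive path with this API has roughly this shape (a sketch; driver details elided, names illustrative):

skb = napi_get_frags(&adapter->napi);
if (unlikely(!skb))
        return;         /* drop/recycle on allocation failure */
/* attach the received page as a frag, then hand off for GRO merging */
skb_fill_page_desc(skb, 0, page, offset, length);
skb->len += length;
skb->data_len += length;
skb->truesize += length;        /* simplified accounting */
napi_gro_frags(&adapter->napi);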
Signed-off-by: Florian Westphal <fw@strlen.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Florian Westphal [Wed, 3 Sep 2014 13:34:36 +0000 (13:34 +0000)]
e1000: convert to build_skb
Instead of preallocating Rx skbs, allocate them right before sending
the inbound packet up the stack.
e1000-kvm, mtu1500, netperf TCP_STREAM:
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
old: 87380 16384 16384 60.00 4532.40
new: 87380 16384 16384 60.00 4599.05
Signed-off-by: Florian Westphal <fw@strlen.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Florian Westphal [Wed, 3 Sep 2014 13:34:31 +0000 (13:34 +0000)]
e1000: rename struct e1000_buffer to e1000_tx_buffer
and remove *page, since it's only used for Rx.
Signed-off-by: Florian Westphal <fw@strlen.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Florian Westphal [Wed, 3 Sep 2014 13:34:26 +0000 (13:34 +0000)]
e1000: add and use e1000_rx_buffer info for Rx
e1000 uses the same metadata struct for Rx and Tx. But Tx and Rx have
different requirements.
For Rx, we only need to store a buffer and a DMA address.
A follow-up patch will remove the skb for Rx, bringing rx_buffer_info down
to 16 bytes on x86_64.
[ buffer_info is 48 bytes ]
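Conceptually the Rx metadata then reduces to something like this (a sketch; the real struct differs in detail until the follow-up lands):

struct e1000_rx_buffer {
        void *data;             /* receive buffer */
        dma_addr_t dma;         /* its mapped address */
};                              /* 16 bytes on x86_64, vs 48 before */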
Signed-off-by: Florian Westphal <fw@strlen.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Florian Westphal [Wed, 3 Sep 2014 13:34:21 +0000 (13:34 +0000)]
e1000: perform copybreak ahead of DMA unmap
Currently we unmap the DMA range, then copy to a new skb.
Change this so we can keep the mapping in case the data is copied.
Signed-off-by: Florian Westphal <fw@strlen.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Florian Westphal [Wed, 3 Sep 2014 13:34:15 +0000 (13:34 +0000)]
e1000: move tbi workaround code into helper function
It's the same in both handlers.
Signed-off-by: Florian Westphal <fw@strlen.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Florian Westphal [Wed, 3 Sep 2014 13:34:10 +0000 (13:34 +0000)]
e1000: move e1000_tbi_adjust_stats to where it's used
... and make it static.
Signed-off-by: Florian Westphal <fw@strlen.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Sat, 26 Jul 2014 02:42:44 +0000 (02:42 +0000)]
ixgbe: Refactor busy poll socket code to address multiple issues
This change addresses several issues in the current ixgbe implementation of
busy poll sockets.
First was the fact that it was possible for frames to be delivered out of
order if they were held in GRO. This is addressed by flushing the GRO buffers
before releasing the q_vector back to the idle state.
The other issue was the fact that we were having to take a spinlock on
changing the state to and from idle. To resolve this I have replaced the
state value with an atomic and use atomic_cmpxchg to change the value from
idle, and a simple atomic set to restore it back to idle after we have
acquired it. This allows us to only use a locked operation on acquiring the
vector without a need for a locked operation to release it.
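A sketch of the acquire/release discipline described above (state names assumed):

/* acquire: one locked op; fails if the vector is already owned */
if (atomic_cmpxchg(&q_vector->state, IXGBE_QV_STATE_IDLE,
                   IXGBE_QV_STATE_NAPI) != IXGBE_QV_STATE_IDLE)
        return false;
/* ... poll ... */
/* release: plain atomic store, no locked instruction needed */
atomic_set(&q_vector->state, IXGBE_QV_STATE_IDLE);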
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Sat, 26 Jul 2014 02:42:39 +0000 (02:42 +0000)]
ixgbe: Drop Rx alloc at end of Rx cleanup
This change removes the Rx buffer allocation at the end of ixgbe_clean_rx_irq.
The reason for removing this is to avoid the extra latency introduced by the
MMIO write. This can amount to somewhere around an extra 100ns of latency and
one extra message worth of PCIe bus overhead.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Mark Rustad [Thu, 24 Jul 2014 06:19:29 +0000 (06:19 +0000)]
ixgbevf: Resolve missing-field-initializers warnings
Resolve missing-field-initializers warnings by using
designated initialization.
Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Mark Rustad [Thu, 24 Jul 2014 06:19:24 +0000 (06:19 +0000)]
ixgbe: Resolve warnings produced in W=2 builds
This patch resolves warnings produced by ixgbe in W=2 kernel
builds. There are missing-field-initializers warnings and shadow
warnings. None of these point to any deeper problem, so just
resolve them so any new warnings get analyzed.
Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Daniel Borkmann [Wed, 10 Sep 2014 13:01:02 +0000 (15:01 +0200)]
net: bpf: only build bpf_jit_binary_{alloc, free}() when jit selected
Since BPF JIT depends on the availability of module_alloc() and
module_free() helpers (HAVE_BPF_JIT and MODULES), we better build
that code only in case we have BPF_JIT in our config enabled, just
like with other JIT code. Fixes builds for arm/marzen_defconfig
and sh/rsk7269_defconfig.
====================
kernel/built-in.o: In function `bpf_jit_binary_alloc':
/home/cwang/linux/kernel/bpf/core.c:144: undefined reference to `module_alloc'
kernel/built-in.o: In function `bpf_jit_binary_free':
/home/cwang/linux/kernel/bpf/core.c:164: undefined reference to `module_free'
make: *** [vmlinux] Error 1
====================
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Fixes: 738cbe72adc5 ("net: bpf: consolidate JIT binary allocator")
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 10 Sep 2014 21:02:37 +0000 (14:02 -0700)]
Merge branch 'cxgb4-next'
Hariprasad Shenai says:
====================
cxgb4: Allow FW size up to 1MB, support for S25FL032P flash and misc. fixes
This patch series adds support to allow FW sizes up to 1MB and support for
the S25FL032P flash. It fixes t4_flash_erase_sectors() to throw an error
when the sectors to erase aren't in the flash, and adds a warning message
for adapters with flashes smaller than 2Mb.
It also adds the device ID of a new adapter and removes the device ID of a
debug adapter.
The patch series is created against the 'net-next' tree and includes
patches for the cxgb4 and cxgb4vf drivers.
We have included all the maintainers of the respective drivers. Kindly
review the changes and let us know in case of any review comments.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Hariprasad Shenai [Wed, 10 Sep 2014 12:14:31 +0000 (17:44 +0530)]
cxgb4/cxgb4vf: Add device ID for new adapter and remove for dbg adapter
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hariprasad Shenai [Wed, 10 Sep 2014 12:14:30 +0000 (17:44 +0530)]
cxgb4: Add warning msg when attaching to adapters which have FLASHes smaller than 2Mb
Based on original work by Casey Leedom <leedom@chelsio.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hariprasad Shenai [Wed, 10 Sep 2014 12:14:29 +0000 (17:44 +0530)]
cxgb4: Fix t4_flash_erase_sectors() to throw an error when requested to erase sectors which aren't in the FLASH
Based on original work by Casey Leedom <leedom@chelsio.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hariprasad Shenai [Wed, 10 Sep 2014 12:14:28 +0000 (17:44 +0530)]
cxgb4: Add support to S25FL032P flash
Add support for Spansion S25FL032P flash
Based on original work by Dimitris Michailidis
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hariprasad Shenai [Wed, 10 Sep 2014 12:14:27 +0000 (17:44 +0530)]
cxgb4: Allow T4/T5 firmware sizes up to 1MB
Based on original work by Casey Leedom <leedom@chelsio.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Erik Hugne [Wed, 10 Sep 2014 12:02:50 +0000 (14:02 +0200)]
tipc: fix sparse warnings
This fixes the following sparse warning:
sparse: symbol 'tipc_update_nametbl' was not declared. Should it be static?
Also, the function is changed to return bool upon success, rather than a
potentially freed pointer.
Signed-off-by: Erik Hugne <erik.hugne@ericsson.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Romain Perier [Wed, 10 Sep 2014 07:51:13 +0000 (07:51 +0000)]
net: ethernet: arc: Don't free Rockchip resources before disconnect from phy
Freeing resources before being disconnected from the phy and calling the
core driver is wrong and should not happen. The fix also avoids a delay of
4-5s caused by the timeout of phy_disconnect().
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 10 Sep 2014 19:46:32 +0000 (12:46 -0700)]
Merge git://git./linux/kernel/git/pablo/nf-next
Pablo Neira Ayuso says:
====================
nf-next pull request
The following patchset contains Netfilter/IPVS updates for your
net-next tree. Regarding nf_tables, most updates focus on consolidating
the NAT infrastructure and adding support for masquerading. More
specifically, they are:
1) use __u8 instead of u_int8_t in arptables header, from
Mike Frysinger.
2) Add support to match by skb->pkttype to the meta expression, from
Ana Rey.
3) Add support to match by cpu to the meta expression, also from
Ana Rey.
4) Fix a smatch warning about IPSET_ATTR_MARKMASK validation, patch from
Vytas Dauksa.
5) Fix range support for IPv4 in the netnet and netportnet hash types,
from Sergey Popovich.
6) Fix missing-field-initializer warnings, from Mark Rustad.
7) Fix possible integer overflows in ipset reported by Dan Carpenter,
patch from Jozsef Kadlecsik.
8) Filter out accounting objects in nfacct by type, so you can
selectively reset quotas, from Alexey Perevalov.
9) Move specific NAT IPv4 functions to the core so x_tables and
nf_tables can share the same NAT IPv4 engine.
10) Use the new NAT IPv4 functions from nft_chain_nat_ipv4.
11) Move specific NAT IPv6 functions to the core so x_tables and
nf_tables can share the same NAT IPv6 engine.
12) Use the new NAT IPv6 functions from nft_chain_nat_ipv6.
13) Refactor code to add nft_delrule(), which can be reused in the
enhancement of the NFT_MSG_DELTABLE to remove a table and its
content, from Arturo Borrero.
14) Add a helper function to unregister chain hooks, from
Arturo Borrero.
15) A cleanup to rename to nft_delrule_by_chain for consistency with
the new nft_*() functions, also from Arturo.
16) Add support to match devgroup to the meta expression, from Ana Rey.
17) Reduce stack usage for IPVS socket option, from Julian Anastasov.
18) Remove unnecessary textsearch state initialization in xt_string,
from Bojan Prtvar.
19) Add several helper functions to nf_tables, more work to prepare
the enhancement of NFT_MSG_DELTABLE, again from Arturo Borrero.
20) Enhance NFT_MSG_DELTABLE to delete a table and its content, from
Arturo Borrero.
21) Support NAT flags in the nat expression to indicate the flavour,
eg. random fully, from Arturo.
22) Add missing audit code to ebtables when replacing tables, from
Nicolas Dichtel.
23) Generalize the IPv4 masquerading code to allow its re-use from
nf_tables, from Arturo.
24) Generalize the IPv6 masquerading code, also from Arturo.
25) Add the new masq expression to support IPv4/IPv6 masquerading
from nf_tables, also from Arturo.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Joe Perches [Wed, 10 Sep 2014 04:17:32 +0000 (21:17 -0700)]
netfilter: Convert pr_warning to pr_warn
Use the more common pr_warn.
Other miscellanea:
o Coalesce formats
o Realign arguments
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Joe Perches [Wed, 10 Sep 2014 04:17:31 +0000 (21:17 -0700)]
iucv: Convert pr_warning to pr_warn
Use the more common pr_warn.
Coalesce formats.
Realign arguments.
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Joe Perches [Wed, 10 Sep 2014 04:17:30 +0000 (21:17 -0700)]
pktgen: Convert pr_warning to pr_warn
Use the more common pr_warn.
Realign arguments.
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Joe Perches [Wed, 10 Sep 2014 04:17:28 +0000 (21:17 -0700)]
atm: Convert pr_warning to pr_warn
Use the more common pr_warn.
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 10 Sep 2014 04:29:50 +0000 (21:29 -0700)]
Merge branch 'ipip_sit_gro'
Tom Herbert says:
====================
net: enable GRO for IPIP and SIT
This patch populates the IPIP and SIT offload structures with
gro_receive and gro_complete functions, which enables use of GRO
for these. Also, it fixes a problem in IPv6 where we were not properly
initializing flush_id.
Performance results are below. Note that these tests were done on bnx2x
which doesn't provide RX checksum offload of IPIP or SIT (i.e. does
not give CHECKSUM_COMPLETE). Also, we don't get a 4-tuple hash for RSS,
only 2-tuple in this case, so all the packets between two hosts wind
up on the same queue. The net result is that the interrupting CPU is
the bottleneck in GRO (checksumming every packet there).
Testing:
netperf TCP_STREAM between two hosts using bnx2x.
* Before fix
IPIP
1 connection
6.53% CPU utilization
6544.71 Mbps
20 connections
13.79% CPU utilization
9284.54 Mbps
SIT
1 connection
6.68% CPU utilization
5653.36 Mbps
20 connections
18.88% CPU utilization
9154.61 Mbps
* After fix
IPIP
1 connection
5.73% CPU utilization
9279.53 Mbps
20 connections
7.14% CPU utilization
7279.35 Mbps
SIT
1 connection
2.95% CPU utilization
9143.36 Mbps
20 connections
7.09% CPU utilization
6255.3 Mbps
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Herbert [Tue, 9 Sep 2014 18:23:16 +0000 (11:23 -0700)]
sit: Add gro callbacks to sit_offload
Add ipv6_gro_receive and ipv6_gro_complete to sit_offload to
support GRO.
Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Herbert [Tue, 9 Sep 2014 18:23:15 +0000 (11:23 -0700)]
ipip: Add gro callbacks to ipip offload
Add inet_gro_receive and inet_gro_complete to ipip_offload to
support GRO.
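The resulting hookup is essentially (modulo fields already present):

static const struct net_offload ipip_offload = {
        .callbacks = {
                .gso_segment    = inet_gso_segment,
                .gro_receive    = inet_gro_receive,
                .gro_complete   = inet_gro_complete,
        },
};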
Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Herbert [Tue, 9 Sep 2014 18:23:14 +0000 (11:23 -0700)]
ipv6: Clear flush_id to make GRO work
In TCP gro we check flush_id, which is derived from the IP identifier.
In the IPv4 gro path the flush_id is set with the expectation that every
matched packet increments the IP identifier. In IPv6, the flush_id is
never set and thus is uninitialized. What's worse is that in IPv6
over IPv4 encapsulation, the IP identifier is taken from the outer
header, which is currently not incremented on every packet by the Linux
stack, so GRO in this case never matches packets (the identifier is
not increasing).
This patch clears flush_id every time for a matched packet in
IPv6 gro_receive. We need to do this each time to overwrite the
setting that would be done in IPv4 gro_receive per the outer
header in IPv6 over IPv4 encapsulation.
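The core of the fix is a one-liner in the ipv6 gro_receive match loop, roughly:

/* p matched this flow; IPv6 has no IP id, so neutralize any
 * flush_id set from an outer IPv4 header */
NAPI_GRO_CB(p)->flush_id = 0;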
Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Joe Perches [Wed, 10 Sep 2014 03:27:44 +0000 (20:27 -0700)]
drivers/net: Convert remaining uses of pr_warning to pr_warn
Use the much more common pr_warn instead of pr_warning.
Other miscellanea:
o Typo fixes submiting/submitting
o Coalesce formats
o Realign arguments
o Add missing terminating '\n' to formats
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Westphal [Tue, 9 Sep 2014 23:08:46 +0000 (01:08 +0200)]
net: use kfree_skb_list() helper in more places
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 9 Sep 2014 15:29:12 +0000 (08:29 -0700)]
ipv4: udp4_gro_complete() is static
net/ipv4/udp_offload.c:339:5: warning: symbol 'udp4_gro_complete' was
not declared. Should it be static?
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tom Herbert <therbert@google.com>
Fixes: 57c67ff4bd92 ("udp: additional GRO support")
Acked-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 9 Sep 2014 15:24:53 +0000 (08:24 -0700)]
netns: remove one sparse warning
net/core/net_namespace.c:227:18: warning: incorrect type in argument 1
(different address spaces)
net/core/net_namespace.c:227:18: expected void const *<noident>
net/core/net_namespace.c:227:18: got struct net_generic [noderef]
<asn:4>*gen
We can use rcu_access_pointer() here as read-side access to the pointer
was removed at least one grace period ago.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 9 Sep 2014 15:16:17 +0000 (08:16 -0700)]
ipv6: udp6_gro_complete() is static
net/ipv6/udp_offload.c:159:5: warning: symbol 'udp6_gro_complete' was
not declared. Should it be static?
Signed-off-by: Eric Dumazet <edumazet@google.com>
Fixes: 57c67ff4bd92 ("udp: additional GRO support")
Cc: Tom Herbert <therbert@google.com>
Acked-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 9 Sep 2014 15:11:41 +0000 (08:11 -0700)]
ipv4: rcu cleanup in ip_ra_control()
Remove one sparse warning :
net/ipv4/ip_sockglue.c:328:22: warning: incorrect type in assignment (different address spaces)
net/ipv4/ip_sockglue.c:328:22: expected struct ip_ra_chain [noderef] <asn:4>*next
net/ipv4/ip_sockglue.c:328:22: got struct ip_ra_chain *[assigned] ra
And replace one rcu_assign_pointer() by RCU_INIT_POINTER() where applicable.
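The substitution is the standard one: publishing NULL (or a pointer no reader can yet reach) needs no memory barrier, so for example:

/* rcu_assign_pointer(ptr, NULL);  -- full barrier, unnecessary here */
RCU_INIT_POINTER(ptr, NULL);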
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Tue, 9 Sep 2014 11:07:32 +0000 (13:07 +0200)]
ipv6: mcast: remove dead debugging defines
It's not used anywhere, so just remove these.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andy Shevchenko [Tue, 9 Sep 2014 08:48:29 +0000 (11:48 +0300)]
irda: vlsi_ir: use %*ph specifier
Instead of looping in the code, let's use the kernel extension to dump small
buffers.
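For example (illustrative only), the field width carries the byte count, up to 64:

u8 buf[16];
pr_debug("vlsi buf: %*ph\n", (int)sizeof(buf), buf);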
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
hayeswang [Tue, 9 Sep 2014 03:40:28 +0000 (11:40 +0800)]
r8152: use usleep_range
Replace mdelay with usleep_range to avoid busy loop.
Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Willem de Bruijn [Mon, 8 Sep 2014 23:58:58 +0000 (19:58 -0400)]
net-timestamp: optimize sock_tx_timestamp default path
Few packets have timestamping enabled. Exit sock_tx_timestamp quickly
in this common case.
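The shape of the fast path, in sketch form (flag handling simplified):

static inline void sock_tx_timestamp(const struct sock *sk, __u8 *tx_flags)
{
        *tx_flags = 0;
        if (unlikely(sk->sk_tsflags))
                __sock_tx_timestamp(sk, tx_flags);      /* rare, slow path */
}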
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Westphal [Mon, 8 Sep 2014 21:33:01 +0000 (23:33 +0200)]
net_sched: sfq: remove unused macro
Not used anymore since ddecf0f ("net_sched: sfq: add optional RED on top of SFQ").
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rick Jones [Tue, 9 Sep 2014 21:43:27 +0000 (14:43 -0700)]
sfc: Convert the normal transmit complete path to dev_consume_skb_any()
Convert the normal transmit completion path from dev_kfree_skb_any()
to dev_consume_skb_any() to help keep dropped packet profiling
meaningful.
Signed-off-by: Rick Jones <rick.jones2@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 10 Sep 2014 00:31:43 +0000 (17:31 -0700)]
Merge branch 'bond_lock_removal'
Nikolay Aleksandrov says:
====================
bonding: get rid of bond->lock
This patch-set removes the last users of bond->lock and converts the places
that needed it for sync to use curr_slave_lock or RCU as appropriate.
I've run this with lockdep and have stress-tested it via loading/unloading
and enslaving/releasing in parallel while outputting bond's proc, and I
didn't see any issues. Please pay special attention to the procfs change; I've
done about an hour of stress-testing on it and have checked that the event
that causes the bonding to delete its proc entry (NETDEV_UNREGISTER) is
called before ndo_uninit() and the freeing of the dev so any readers will
sync with that. Also ran sparse checks and there were no splats.
v2: Add patch 0001/cxgb4 bond->lock removal, RTNL should be held in the
notifier call, the other patches are the same. Also tested with
allmodconfig to make sure there're no more users of bond->lock.
Changes from the RFC:
use RCU in procfs instead of RTNL since RTNL might lead to a deadlock with
unloading and also is much slower. The bond destruction syncs with proc
via the proc locks. There's one new patch that converts primary_slave to
use RCU as it was necessary to fix a longstanding bugs in sysfs and
procfs and to make it easy to migrate bond's procfs to RCU. And of course
rebased on top of net-next current.
This is the first patch-set in a series that should simplify the bond's
locking requirements and will make it easier to define the locking
conditions necessary for the various paths. The goal is to rely on RTNL
and rcu alone; an extra lock would be needed in a few special cases that
would be documented very well.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Tue, 9 Sep 2014 21:17:03 +0000 (23:17 +0200)]
bonding: remove last users of bond->lock and bond->lock itself
The usage of bond->lock in bond_main.c was completely unnecessary as it
didn't help to sync with anything; most of the spots already had RTNL.
Since there are no more users of bond->lock, remove it.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Tue, 9 Sep 2014 21:17:02 +0000 (23:17 +0200)]
bonding: options: remove bond->lock usage
We're safe to remove the bond->lock use from the arp targets because
arp_rcv_probe no longer acquires bond->lock, only rcu_read_lock.
Also setting the primary slave is safe because no one uses bond->lock
as a syncing mechanism for that anymore.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Tue, 9 Sep 2014 21:17:01 +0000 (23:17 +0200)]
bonding: procfs: clean bond->lock usage and use RCU
Use RCU to protect against slave release; the proc show function will sync
with the bond destruction by the proc locks and the fact that the bond is
released after NETDEV_UNREGISTER, which causes the bonding to remove the
proc entry.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Tue, 9 Sep 2014 21:17:00 +0000 (23:17 +0200)]
bonding: convert primary_slave to use RCU
This is necessary mainly for two bonding call sites, procfs and
sysfs, as it was dereferenced without any real protection.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Tue, 9 Sep 2014 21:16:59 +0000 (23:16 +0200)]
bonding: alb: clean bond->lock
We can remove the lock/unlock as it's no longer necessary since
RTNL should be held while calling bond_alb_set_mac_address().
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Tue, 9 Sep 2014 21:16:58 +0000 (23:16 +0200)]
bonding: 3ad: use curr_slave_lock instead of bond->lock
In 3ad mode the only syncing needed by bond->lock is for the wq
and the recv handler, so change them to use curr_slave_lock.
There are no locking dependencies here, as 3ad doesn't use
curr_slave_lock at all.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Tue, 9 Sep 2014 21:16:57 +0000 (23:16 +0200)]
cxgb4: remove bond->lock
RTNL should be already held in the notifier call so the slave list can
be traversed without a problem, remove the unnecessary bond->lock.
CC: Hariprasad S <hariprasad@chelsio.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Romain Perier [Mon, 8 Sep 2014 17:14:50 +0000 (17:14 +0000)]
ARM: dts: Enable emac node on the rk3188-radxarock boards
This enables EMAC Rockchip support on radxa rock boards.
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Romain Perier [Mon, 8 Sep 2014 17:14:49 +0000 (17:14 +0000)]
ARM: dts: Add emac nodes to the rk3188 device tree
This adds support for EMAC Rockchip driver on RK3188 SoCs.
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Romain Perier [Mon, 8 Sep 2014 17:14:48 +0000 (17:14 +0000)]
dt-bindings: Document EMAC Rockchip
This adds the necessary binding documentation for the EMAC Rockchip platform
driver found in RK3066 and RK3188 SoCs.
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Romain Perier [Mon, 8 Sep 2014 17:14:47 +0000 (17:14 +0000)]
ethernet: arc: Add support for Rockchip SoC layer device tree bindings
This patch defines a platform glue layer for Rockchip SoCs which
support the arc-emac driver. It ensures that the regulator for the rmii is on
before trying to connect to the ethernet controller, and it applies the right
speed and mode changes to the grf when ethernet settings change.
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 9 Sep 2014 23:59:03 +0000 (16:59 -0700)]
Merge branch 'bpf-next'
Daniel Borkmann says:
====================
BPF updates
[ Set applies on top of current net-next but also on top of
Alexei's latest patches. Please see individual patches for
more details. ]
Changelog:
v1->v2:
- Removed paragraph in 1st commit message
- Rest stays the same
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Mon, 8 Sep 2014 06:04:49 +0000 (08:04 +0200)]
net: bpf: be friendly to kmemcheck
Reported by Mikulas Patocka, kmemcheck currently barks out a
false positive since we don't have special kmemcheck annotation
for bitfields used in the bpf_prog structure.
We currently have jited:1, len:31, and thus when accessing len
with CONFIG_KMEMCHECK enabled, kmemcheck throws a warning that
we're reading uninitialized memory.
As we don't need the whole bit universe for pages member, we
can just split it to u16 and use a bool flag for jited instead
of a bitfield.
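The layout change, roughly (surrounding fields elided):

struct bpf_prog {
        u16     pages;  /* number of allocated pages */
        bool    jited;  /* was the filter JITed? */
        u32     len;    /* instruction count, no longer a bitfield */
        /* ... */
};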
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Mon, 8 Sep 2014 06:04:48 +0000 (08:04 +0200)]
net: bpf: arm: address randomize and write protect JIT code
This is the ARM variant for 314beb9bcab ("x86: bpf_jit_comp: secure bpf
jit against spraying attacks").
It is now possible to implement it due to commits 75374ad47c64 ("ARM: mm:
Define set_memory_* functions for ARM") and dca9aa92fc7c ("ARM: add
DEBUG_SET_MODULE_RONX option to Kconfig") which added infrastructure for
this facility.
Thus, this patch makes sure the BPF generated JIT code is marked RO, like
other kernel text sections, and also lets the generated JIT code start
at a pseudo random offset instead of on a page boundary. The holes are
filled with illegal instructions.
JIT tested on armv7hl with BPF test suite.
Reference: http://mainisusuallyafunction.blogspot.com/2012/11/attacking-hardened-linux-systems-with.html
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Acked-by: Mircea Gherzan <mgherzan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Mon, 8 Sep 2014 06:04:47 +0000 (08:04 +0200)]
net: bpf: consolidate JIT binary allocator
Introduced in commit 314beb9bcabf ("x86: bpf_jit_comp: secure bpf jit
against spraying attacks") and later on replicated in aa2d2c73c21f
("s390/bpf,jit: address randomize and write protect jit code") for the
s390 architecture, write protection for BPF JIT images got added along
with a random start address for the JIT code, so that it's not on a page
boundary anymore.
Since both use a very similar allocator for the BPF binary header,
we can consolidate this code into the BPF core as it's mostly JIT
independent anyway.
This will also allow future archs that support DEBUG_SET_MODULE_RONX
to just reuse it instead of reimplementing it.
JIT tested on x86_64 and s390x with BPF test suite.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 8 Sep 2014 15:06:07 +0000 (08:06 -0700)]
tcp: remove dst refcount false sharing for prequeue mode
Alexander Duyck reported high false sharing on the dst refcount in the tcp
stack when prequeue is used. prequeue is the mechanism used when a thread is
blocked in recvmsg()/read() on a TCP socket, using a blocking model
rather than the select()/poll()/epoll() non-blocking one.
We already try to use RCU in the input path as much as possible, but we were
forced to take a refcount on the dst when the skb escaped the RCU protected
region. When/if the user thread runs on a different cpu, dst_release()
will then touch the dst refcount again.
Commit 093162553c33 ("tcp: force a dst refcount when prequeue packet")
was an example of a race fix.
It turns out the only remaining usage of skb->dst for a packet stored
in a TCP socket prequeue is IP early demux.
We can add logic to detect when IP early demux is probably going
to use skb->dst. Because we do an optimistic check rather than duplicate
existing logic, we need to guard inet_sk_rx_dst_set() and
inet6_sk_rx_dst_set() from using a NULL dst.
Many thanks to Alexander for providing a nice bug report, git bisection,
and reproducer.
Tested using Alexander script on a 40Gb NIC, 8 RX queues.
Hosts have 24 cores, 48 hyper threads.
echo 0 >/proc/sys/net/ipv4/tcp_autocorking
for i in `seq 0 47`
do
for j in `seq 0 2`
do
netperf -H $DEST -t TCP_STREAM -l 1000 \
-c -C -T $i,$i -P 0 -- \
-m 64 -s 64K -D &
done
done
Before patch : ~6Mpps and ~95% cpu usage on receiver
After patch : ~9Mpps and ~35% cpu usage on receiver.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Stephen Rothwell [Tue, 9 Sep 2014 23:37:11 +0000 (16:37 -0700)]
ath5k: Add missing vmalloc.h include.
After merging the wireless-next tree, today's linux-next build (powerpc
allyesconfig) failed like this:
drivers/net/wireless/ath/ath5k/debug.c: In function 'open_file_eeprom':
drivers/net/wireless/ath/ath5k/debug.c:933:2: error: implicit declaration of function 'vmalloc' [-Werror=implicit-function-declaration]
buf = vmalloc(eesize);
^
drivers/net/wireless/ath/ath5k/debug.c:933:6: warning: assignment makes pointer from integer without a cast
buf = vmalloc(eesize);
^
drivers/net/wireless/ath/ath5k/debug.c:960:2: error: implicit declaration of function 'vfree' [-Werror=implicit-function-declaration]
vfree(buf);
^
Caused by commit db906eb2101b ("ath5k: added debugfs file for dumping
eeprom"). Also reported by Guenter Roeck.
I have used Geert Uytterhoeven's suggested fix of including vmalloc.h
and so added this patch for today:
From: Stephen Rothwell <sfr@canb.auug.org.au>
Date: Mon, 8 Sep 2014 18:39:23 +1000
Subject: [PATCH] ath5k: fix debugfs addition
Reported-by: Guenter Roeck <linux@roeck-us.net>
Suggested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Varka Bhadram [Mon, 8 Sep 2014 03:58:19 +0000 (09:28 +0530)]
ethernet: ti: remove unwanted THIS_MODULE macro
It removes the owner field update from the driver structure; it will
be updated automatically by module_platform_driver().
Signed-off-by: Varka Bhadram <varkab@cdac.in>
Signed-off-by: David S. Miller <davem@davemloft.net>
Li RongQing [Sat, 6 Sep 2014 11:06:11 +0000 (19:06 +0800)]
openvswitch: change the data type of error status to atomic_long_t
Change the data type of the error status from u64 to atomic_long_t, use
atomic operations, and then remove the lock which was used to protect the
error status. Atomic operations may be faster than a spin lock.
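The conversion pattern, in sketch form (field names assumed):

atomic_long_t tx_errors;        /* was: u64 guarded by a spinlock */

/* writers: lock-free */
atomic_long_inc(&vport->err_stats.tx_errors);

/* readers: a tear-free snapshot, no lock needed */
stats->tx_errors = atomic_long_read(&vport->err_stats.tx_errors);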
Cc: Pravin Shelar <pshelar@nicira.com>
Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rami Rosen [Sat, 6 Sep 2014 10:08:08 +0000 (13:08 +0300)]
bridge: Cleanup of unnecessary check.
This patch removes an unnecessary check in the br_afspec() method of
br_netlink.c.
Signed-off-by: Rami Rosen <ramirose@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>