Florian Westphal [Mon, 2 May 2016 16:39:55 +0000 (18:39 +0200)]
netfilter: conntrack: use a single hashtable for all namespaces
We already include netns address in the hash and compare the netns pointers
during lookup, so even if namespaces have overlapping addresses entries
will be spread across the table.
Assuming a 64k bucket size, this change saves 0.5 MB per namespace on a
64-bit system.
The NAT bysrc and expectation hashes are still per namespace; those will
be changed soon, too.
Future patch will also make conntrack object slab cache global again.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Mon, 2 May 2016 22:25:58 +0000 (00:25 +0200)]
netfilter: conntrack: make netns address part of hash
Once we place all conntracks into a global hash table we want them to be
spread across the entire hash table, even if namespaces have overlapping IP
addresses.
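As an illustration, a minimal sketch of the idea (not the exact patch): fold a
per-namespace value into the seed of the tuple hash, using the existing
net_hash_mix() and jhash2() helpers.
static u32 hash_conntrack_raw(const struct nf_conntrack_tuple *tuple,
                              const struct net *net)
{
        unsigned int n;

        /* hash the direction-invariant part of the tuple */
        n = (sizeof(tuple->src) + sizeof(tuple->dst.u3)) / sizeof(u32);

        /* mixing net_hash_mix(net) into the seed spreads identical tuples
         * from different namespaces over different buckets
         */
        return jhash2((u32 *)tuple, n,
                      nf_conntrack_hash_rnd ^ net_hash_mix(net));
}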
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Thu, 28 Apr 2016 17:13:45 +0000 (19:13 +0200)]
netfilter: conntrack: check netns when comparing conntrack objects
Once we place all conntracks in the same hash table we must also compare
the netns pointer to skip conntracks that belong to a different namespace.
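A minimal sketch of the comparison, assuming the nf_ct_key_equal() helper grows
a net argument (simplified from the real code):
static inline bool nf_ct_key_equal(struct nf_conntrack_tuple_hash *h,
                                   const struct nf_conntrack_tuple *tuple,
                                   const struct nf_conntrack_zone *zone,
                                   const struct net *net)
{
        struct nf_conn *ct = nf_ct_tuplehash_to_ctrack(h);

        /* a tuple match in the shared table is only a real match if the
         * entry also belongs to the namespace we are looking up for
         */
        return nf_ct_tuple_equal(tuple, &h->tuple) &&
               nf_ct_zone_equal(ct, zone, NF_CT_DIRECTION(h)) &&
               net_eq(net, nf_ct_net(ct));
}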
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Thu, 28 Apr 2016 17:13:44 +0000 (19:13 +0200)]
netfilter: conntrack: small refactoring of conntrack seq_printf
The iteration process is lockless, so we test if the conntrack object is
eligible for printing (e.g. is AF_INET) after obtaining the reference
count.
Once we put all conntracks into the same hash table we might see more
entries that need to be skipped.
So add a helper and first perform the test in a lockless fashion
for fast skip.
Once we obtain the reference count, just repeat the check.
Note that this refactoring also adds a previously missing check for
unconfirmed conntrack entries (which can show up due to slab rcu object
re-use); they need to be skipped since they are not part of the listing.
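The shape of the refactoring, as a hedged sketch (helper name as in the
description above, body simplified):
        if (ct_seq_should_skip(ct, hash))
                return 0;

        if (unlikely(!atomic_inc_not_zero(&ct->ct_general.use)))
                return 0;

        /* after obtaining a reference, repeat the check: the RCU-protected
         * slot may have been recycled for an entry from another namespace
         * or for a not-yet-confirmed conntrack in the meantime
         */
        if (ct_seq_should_skip(ct, hash)) {
                nf_ct_put(ct);
                return 0;
        }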
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Thu, 28 Apr 2016 17:13:43 +0000 (19:13 +0200)]
netfilter: conntrack: use nf_ct_key_equal() in more places
This prepares for an upcoming change that places all conntracks into a
single, global table. For this to work we will need to also compare the
net pointer during lookup. To avoid open-coding such a check, use the
nf_ct_key_equal helper and later extend it to also consider net_eq.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Thu, 28 Apr 2016 17:13:42 +0000 (19:13 +0200)]
netfilter: conntrack: don't attempt to iterate over empty table
Once we place all conntracks into the same table, iteration becomes more
costly because the table contains conntracks that we are not interested
in (belonging to other netns).
So don't bother scanning if the current namespace has no entries.
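A minimal sketch of the early exit, assuming the per-netns entry counter that
already exists in net->ct.count:
        /* this namespace owns no conntrack entries at all, so there is
         * nothing to find in the (soon to be shared) table
         */
        if (atomic_read(&net->ct.count) == 0)
                return;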
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Thu, 28 Apr 2016 17:13:41 +0000 (19:13 +0200)]
netfilter: conntrack: fix lookup race during hash resize
When resizing the conntrack hash table at runtime via
echo 42 > /sys/module/nf_conntrack/parameters/hashsize, we are racing with
the conntrack lookup path -- reads can happen in parallel and nothing
prevents readers from observing the newly allocated hash but the old
size (or vice versa).
So access to hash[bucket] can trigger an OOB read in case the table got
expanded and we saw the new size but the old hash pointer (or it got shrunk
and we saw the new hash pointer but the size of the old, larger table):
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] SMP KASAN
CPU: 0 PID: 3 Comm: ksoftirqd/0 Not tainted 4.6.0-rc2+ #107
[..]
Call Trace:
[<ffffffff822c3d6a>] ? nf_conntrack_tuple_taken+0x12a/0xe90
[<ffffffff822c3ac1>] ? nf_ct_invert_tuplepr+0x221/0x3a0
[<ffffffff8230e703>] get_unique_tuple+0xfb3/0x2760
Use a generation counter to obtain the address/length of the same table.
Also add a synchronize_net before freeing the old hash.
AFAICS, without it we might access ct_hash[bucket] after ct_hash has been
freed, provided that a lockless reader got delayed by another event:
CPU1                        CPU2
seq_begin
seq_retry
<delay>                     resize occurs
                            free oldhash
for_each(oldhash[size])
Note that resize is only supported in init_netns; it took over 2 minutes
of constant resizing + flooding to produce the warning, so this isn't a
big problem in practice.
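The lookup side then re-reads size and pointer under the generation (sequence)
counter until it observes a consistent pair; roughly (simplified sketch, names
follow the later single-table layout, the per-netns variant is analogous):
static void nf_conntrack_get_ht(struct hlist_nulls_head **hash,
                                unsigned int *hsize)
{
        struct hlist_nulls_head *hptr;
        unsigned int sequence, hsz;

        do {
                sequence = read_seqcount_begin(&nf_conntrack_generation);
                hsz = nf_conntrack_htable_size;
                hptr = nf_conntrack_hash;
        } while (read_seqcount_retry(&nf_conntrack_generation, sequence));

        /* hptr and hsz are now guaranteed to describe the same table */
        *hash = hptr;
        *hsize = hsz;
}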
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Thu, 28 Apr 2016 17:13:40 +0000 (19:13 +0200)]
netfilter: conntrack: keep BH enabled during lookup
No need to disable BH here anymore:
stats are switched to the _ATOMIC variant (== this_cpu_inc()), which
nowadays generates the same code as the non-_ATOMIC NF_STAT, at least on x86.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Tue, 26 Apr 2016 09:59:53 +0000 (11:59 +0200)]
netfilter: nftables: add connlabel set support
Conntrack labels are currently sized depending on the iptables
ruleset, i.e. if we're asked to test or set bits 1, 2, and 65 then we
would allocate enough room to store at least bit 65.
However, with nft, the input is just a register with arbitrary runtime
content.
We therefore ask for the upper ceiling we currently have, which is
enough room to store 128 bits.
Alternatively, we could alter nf_connlabel_replace to increase
net->ct.label_words at run time, but since 128 bits is not that
big we'd only save sizeof(long) so it doesn't seem worth it for now.
This follows a similar approach to the one the xtables 'connlabel'
match uses, so when the user inputs
ct label set bar
then we will set the bit used by the 'bar' label and leave the rest alone.
This is done by passing the sreg content to nf_connlabels_replace
as both the value and the mask argument.
Label bits that are already set thus cannot be reset to zero, but
the xtables connlabel match does not support that either.
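As a hedged illustration of the set path (the wrapper name and the hard-coded
word count for the 128-bit ceiling are mine, not the patch's):
static void nft_ct_labels_set(struct nf_conn *ct, const u32 *sreg_data)
{
        /* sreg content is used as both value and mask: only bits set in
         * the register are turned on, already-set label bits stay untouched
         */
        nf_connlabels_replace(ct, sreg_data, sreg_data,
                              128 / 32 /* words, the 128-bit ceiling */);
}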
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Pablo Neira Ayuso [Fri, 29 Apr 2016 08:39:34 +0000 (10:39 +0200)]
netfilter: fix IS_ERR_VALUE usage
This is a forward-port of the original patch from Andrzej Hajda,
he said:
"IS_ERR_VALUE should be used only with unsigned long type.
Otherwise it can work incorrectly. To achieve this function
xt_percpu_counter_alloc is modified to return unsigned long,
and its result is assigned to temporary variable to perform
error checking, before assigning to .pcnt field.
The patch follows conclusion from discussion on LKML [1][2].
[1]: http://permalink.gmane.org/gmane.linux.kernel/2120927
[2]: http://permalink.gmane.org/gmane.linux.kernel/2150581"
Original patch from Andrzej is here:
http://patchwork.ozlabs.org/patch/582970/
This patch has clashed with input validation fixes for x_tables.
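The resulting pattern, roughly (sketch based on the description above, 'e' being
the table entry under validation):
        unsigned long pcnt;

        /* keep the value in an unsigned long so IS_ERR_VALUE() is valid,
         * and only then assign it to the differently-typed .pcnt field
         */
        pcnt = xt_percpu_counter_alloc();
        if (IS_ERR_VALUE(pcnt))
                return -ENOMEM;
        e->counters.pcnt = pcnt;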
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Pablo Neira Ayuso [Mon, 25 Apr 2016 13:35:53 +0000 (15:35 +0200)]
Merge tag 'ipvs-for-v4.7' of https://git./linux/kernel/git/horms/ipvs-next
Simon Horman says:
====================
IPVS Updates for v4.7
please consider these enhancements to IPVS. They allow SIP connections
originating from real servers to be load balanced by the SIP persistence
engine, as is already implemented in the other direction, and they improve
one-packet scheduling (OPS) performance.
====================
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Liping Zhang [Fri, 22 Apr 2016 09:56:57 +0000 (02:56 -0700)]
netfilter: ip6t_SYNPROXY: unnecessary to check whether ip6_route_output returns NULL
ip6_route_output() will never return a NULL pointer, so there's no need
to check it.
Signed-off-by: Liping Zhang <liping.zhang@spreadtrum.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Pablo Neira Ayuso [Thu, 14 Apr 2016 15:13:41 +0000 (17:13 +0200)]
netfilter: nf_ct_helper: disable automatic helper assignment
Four years ago we introduced a new sysctl knob to disable automatic
helper assignment in commit 72110dfaa907 ("netfilter: nf_ct_helper:
disable automatic helper assignment"). This knob kept the behaviour
enabled by default to remain conservative.
This measure was introduced to provide a secure way to configure
iptables and connection tracking helpers through explicit rules.
Given the time we have waited for this, let's turn this off by default
now; worst case, users can still recover the former behaviour by
explicitly enabling it back through sysctl.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Pablo Neira Ayuso [Tue, 12 Apr 2016 21:50:37 +0000 (23:50 +0200)]
netfilter: nft_rbtree: allow adjacent intervals with dynamic updates
This patch fixes dynamic element updates for adjacent intervals in the
rb-tree representation.
Since elements are sorted in the rb-tree, in case of adjacent nodes with
the same key, the assumption is that an interval end node must be placed
before an interval opening.
In tree lookup operations, the idea is to search for the closest element
that is smaller than the one we're searching for. Given that we'll have
two possible matches, we have to take the interval opening in case of
adjacent nodes.
Range merges are not trivial with the current representation,
specifically we have to check if node extensions are equal and make sure
we keep the existing internal states around.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Pablo Neira Ayuso [Tue, 12 Apr 2016 21:50:36 +0000 (23:50 +0200)]
netfilter: nft_rbtree: introduce nft_rbtree_interval_end() helper
Add this new nft_rbtree_interval_end() helper function to check if the
interval end flag is set.
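The helper boils down to a flags test on the element's extension area, roughly:
static bool nft_rbtree_interval_end(const struct nft_rbtree_elem *rbe)
{
        /* true if this element marks the end of an interval */
        return nft_set_ext_exists(&rbe->ext, NFT_SET_EXT_FLAGS) &&
               (*nft_set_ext_flags(&rbe->ext) & NFT_SET_ELEM_INTERVAL_END);
}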
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Pablo Neira Ayuso [Tue, 12 Apr 2016 21:50:35 +0000 (23:50 +0200)]
netfilter: nf_tables: parse element flags from nft_del_setelem()
Parse flags and pass them to the set via ->deactivate() to check if we
remove the right element from the intervals.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Pablo Neira Ayuso [Tue, 12 Apr 2016 21:50:34 +0000 (23:50 +0200)]
netfilter: nf_tables: introduce nft_setelem_parse_flags() helper
This function parses the set element flags, thus, we can reuse the same
handling when deleting elements.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Mon, 18 Apr 2016 14:17:01 +0000 (16:17 +0200)]
netfilter: conntrack: use get_random_once for conntrack hash seed
As an earlier commit removed accesses to the hash from other files, we can
also make it static.
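The seed thus becomes file-local and lazily initialized on first use, along
the lines of:
static unsigned int nf_conntrack_hash_rnd __read_mostly;

/* inside the hash computation, instead of seeding at module init time: */
        get_random_once(&nf_conntrack_hash_rnd, sizeof(nf_conntrack_hash_rnd));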
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Mon, 18 Apr 2016 14:17:00 +0000 (16:17 +0200)]
netfilter: conntrack: use get_random_once for nat and expectations
Use a private seed and init it using get_random_once.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Mon, 18 Apr 2016 14:16:59 +0000 (16:16 +0200)]
netfilter: conntrack: move generation seqcnt out of netns_ct
We only allow rehashing in the init namespace, so we only use
init_ns.generation. And even if we did allow it, it would make no sense
as the conntrack locks are global; any ongoing rehash prevents insert/
delete.
So make this private to nf_conntrack_core instead.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Haiyang Zhang [Thu, 21 Apr 2016 23:13:01 +0000 (16:13 -0700)]
hv_netvsc: Fix the list processing for network change event
The RNDIS_STATUS_NETWORK_CHANGE event is handled as two "half events" --
media disconnect & connect. The second half should be added to the list
head, not to the tail, so that all events are processed in the normal order.
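A hedged sketch of the intended queueing (structure and field names are
illustrative, not the driver's exact ones):
        spin_lock_irqsave(&net_device_ctx->lock, flags);
        /* the synthetic "connect" half must be processed right after the
         * "disconnect" half, so queue it at the head, not at the tail
         */
        list_add(&event->list, &net_device_ctx->reconfig_events);
        spin_unlock_irqrestore(&net_device_ctx->lock, flags);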
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: K. Y. Srinivasan <kys@microsoft.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Thu, 21 Apr 2016 17:55:23 +0000 (10:55 -0700)]
tcp-tso: do not split TSO packets at retransmit time
The Linux TCP stack painfully segments all TSO/GSO packets before retransmits.
This was fine back in the days when TSO/GSO were emerging, with their
bugs, but we believe the dark age is over.
Keeping big packets in the write queues, and also during stack traversal,
has a lot of benefits:
- Less memory overhead, because write queues have less skbs
- Less cpu overhead at ACK processing.
- Better SACK processing, as a lot of studies mentioned how
awful Linux was at this ;)
- Less cpu overhead to send the rtx packets
(IP stack traversal, netfilter traversal, drivers...)
- Better latencies in presence of losses.
- Smaller spikes in fq like packet schedulers, as retransmits
are not constrained by TCP Small Queues.
1 % packet losses are common today, and at 100Gbit speeds, this
translates to ~80,000 losses per second.
Losses are often correlated, and we see many retransmit events
leading to trains of 1-MSS packets, at a time when hosts are already
under stress.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parthasarathy Bhuvaragan [Thu, 21 Apr 2016 13:51:13 +0000 (15:51 +0200)]
tipc: fix stale links after re-enabling bearer
Commit 42b18f605fea ("tipc: refactor function tipc_link_timeout()")
introduced a bug which prevents sending of probe messages during the
link synchronization phase. This leads to hanging links if the
bearer is disabled/enabled after links are up.
In this commit, we send the probe messages correctly.
Fixes: 42b18f605fea ("tipc: refactor function tipc_link_timeout()")
Acked-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 24 Apr 2016 18:06:56 +0000 (14:06 -0400)]
Merge branch 'tcp-tcstamp_ack-frag-coalesce'
Martin KaFai Lau says:
====================
tcp: Handle txstamp_ack when fragmenting/coalescing skbs
This patchset is to handle the txstamp-ack bit when
fragmenting/coalescing skbs.
The second patch depends on the recently posted series
for the net branch:
"tcp: Merge timestamp info when coalescing skbs"
A BPF prog is used to kprobe sock_queue_err_skb()
and print out the value of serr->ee.ee_data. The BPF
prog (runnable from bcc) is attached here:
BPF prog used for testing:
~~~~~
from __future__ import print_function
from bcc import BPF
bpf_text = """
int trace_err_skb(struct pt_regs *ctx)
{
        struct sk_buff *skb = (struct sk_buff *)ctx->si;
        struct sock *sk = (struct sock *)ctx->di;
        struct sock_exterr_skb *serr;
        u32 ee_data = 0;
        if (!sk || !skb)
                return 0;
        serr = SKB_EXT_ERR(skb);
        bpf_probe_read(&ee_data, sizeof(ee_data), &serr->ee.ee_data);
        bpf_trace_printk("ee_data:%u\\n", ee_data);
        return 0;
};
"""
b = BPF(text=bpf_text)
b.attach_kprobe(event="sock_queue_err_skb", fn_name="trace_err_skb")
print("Attached to kprobe")
b.trace_print()
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Martin KaFai Lau [Wed, 20 Apr 2016 05:50:48 +0000 (22:50 -0700)]
tcp: Merge txstamp_ack in tcp_skb_collapse_tstamp
When collapsing skbs, txstamp_ack also needs to be merged.
Retrans Collapse Test:
~~~~~~
0.200 accept(3, ..., ...) = 4
+0 setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0
0.200 write(4, ..., 730) = 730
+0 setsockopt(4, SOL_SOCKET, 37, [2688], 4) = 0
0.200 write(4, ..., 730) = 730
+0 setsockopt(4, SOL_SOCKET, 37, [2176], 4) = 0
0.200 write(4, ..., 11680) = 11680
0.200 > P. 1:731(730) ack 1
0.200 > P. 731:1461(730) ack 1
0.200 > . 1461:8761(7300) ack 1
0.200 > P. 8761:13141(4380) ack 1
0.300 < . 1:1(0) ack 1 win 257 <sack 1461:2921,nop,nop>
0.300 < . 1:1(0) ack 1 win 257 <sack 1461:4381,nop,nop>
0.300 < . 1:1(0) ack 1 win 257 <sack 1461:5841,nop,nop>
0.300 > P. 1:1461(1460) ack 1
0.400 < . 1:1(0) ack 13141 win 257
BPF Output Before:
~~~~~
<No output due to missing SCM_TSTAMP_ACK timestamp>
BPF Output After:
~~~~~
<...>-2027 [007] d.s. 79.765921: : ee_data:1459
Sacks Collapse Test:
~~~~~
0.200 accept(3, ..., ...) = 4
+0 setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0
0.200 write(4, ..., 1460) = 1460
+0 setsockopt(4, SOL_SOCKET, 37, [2688], 4) = 0
0.200 write(4, ..., 13140) = 13140
+0 setsockopt(4, SOL_SOCKET, 37, [2176], 4) = 0
0.200 > P. 1:1461(1460) ack 1
0.200 > . 1461:8761(7300) ack 1
0.200 > P. 8761:14601(5840) ack 1
0.300 < . 1:1(0) ack 1 win 257 <sack 1461:14601,nop,nop>
0.300 > P. 1:1461(1460) ack 1
0.400 < . 1:1(0) ack 14601 win 257
BPF Output Before:
~~~~~
<No output due to missing SCM_TSTAMP_ACK timestamp>
BPF Output After:
~~~~~
<...>-2049 [007] d.s. 89.185538: : ee_data:14599
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Tested-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Martin KaFai Lau [Wed, 20 Apr 2016 05:50:47 +0000 (22:50 -0700)]
tcp: Carry txstamp_ack in tcp_fragment_tstamp
When a tcp skb is sliced into two smaller skbs (e.g. in
tcp_fragment() and tso_fragment()), it does not carry
the txstamp_ack bit to the newly created skb if it is needed.
The end result is that a timestamping event (SCM_TSTAMP_ACK) will
be missing from the sk->sk_error_queue.
This patch carries this bit to the new skb2
in tcp_fragment_tstamp().
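The core of the change, roughly: when timestamp ownership moves to the second
fragment in tcp_fragment_tstamp(), the txstamp_ack bit moves with it.
                /* transfer the ack-timestamp request to the fragment that
                 * now carries the last byte of the original skb
                 */
                TCP_SKB_CB(skb2)->txstamp_ack = TCP_SKB_CB(skb)->txstamp_ack;
                TCP_SKB_CB(skb)->txstamp_ack = 0;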
BPF Output Before:
~~~~~~
<No output due to missing SCM_TSTAMP_ACK timestamp>
BPF Output After:
~~~~~~
<...>-2050 [000] d.s. 100.928763: : ee_data:14599
Packetdrill Script:
~~~~~~
+0 `sysctl -q -w net.ipv4.tcp_min_tso_segs=10`
+0 `sysctl -q -w net.ipv4.tcp_no_metrics_save=1`
+0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+0 bind(3, ..., ...) = 0
+0 listen(3, 1) = 0
0.100 < S 0:0(0) win 32792 <mss 1460,sackOK,nop,nop,nop,wscale 7>
0.100 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 7>
0.200 < . 1:1(0) ack 1 win 257
0.200 accept(3, ..., ...) = 4
+0 setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0
+0 setsockopt(4, SOL_SOCKET, 37, [2688], 4) = 0
0.200 write(4, ..., 14600) = 14600
+0 setsockopt(4, SOL_SOCKET, 37, [2176], 4) = 0
0.200 > . 1:7301(7300) ack 1
0.200 > P. 7301:14601(7300) ack 1
0.300 < . 1:1(0) ack 14601 win 257
0.300 close(4) = 0
0.300 > F. 14601:14601(0) ack 1
0.400 < F. 1:1(0) ack 16062 win 257
0.400 > . 14602:14602(0) ack 2
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Tested-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 24 Apr 2016 04:12:08 +0000 (00:12 -0400)]
Merge git://git./linux/kernel/git/pablo/nf-next
Pablo Neira Ayuso says:
====================
Netfilter updates for net-next
The following patchset contains Netfilter updates for your net-next
tree, mostly from Florian Westphal: they sort out the lack of sufficient
validation in x_tables and add connlabel preparation patches for upcoming
nf_tables support. They are:
1) Ensure we don't go over the ruleset blob boundaries in
mark_source_chains().
2) Validate that target jumps land on an existing xt_entry. This extra
sanitization comes with a performance penalty when loading the ruleset.
3) Introduce xt_check_entry_offsets() and use it from {arp,ip,ip6}tables.
4) Get rid of the smallish check_entry() functions in {arp,ip,ip6}tables.
5) Enforce the minimal possible target size in x_tables.
6) Similar to #3, add xt_compat_check_entry_offsets() for compat code.
7) Check that standard target size is valid.
8) More sanitization to ensure that the target_offset field is correct.
9) Add xt_check_entry_match() to validate that matches are well-formed.
10-12) Three patches to reduce the number of parameters in
translate_compat_table() for {arp,ip,ip6}tables by using a container
structure.
13) No need to return value from xt_compat_match_from_user(), so make
it void.
14) Consolidate translate_table() so it can be used by compat code too.
15) Remove obsolete check for compat code, so we keep consistent with
what was already removed in the native layout code (back in 2007).
16) Get rid of target jump validation from mark_source_chains(),
obsoleted by #2.
17) Introduce xt_copy_counters_from_user() to consolidate counter
copying, and use it from {arp,ip,ip6}tables.
18,22) Get rid of unnecessary explicit inlining in ctnetlink for dump
functions.
19) Move nf_connlabel_match() to xt_connlabel.
20) Skip event notification if connlabel did not change.
21) Update of nf_connlabels_get() to make the upcoming nft connlabel
support easier.
23) Remove spinlock to read protocol state field in conntrack.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 24 Apr 2016 00:13:29 +0000 (20:13 -0400)]
Merge branch 'nla_align-more'
Nicolas Dichtel says:
====================
netlink: align attributes when needed (patchset #1)
This is the continuation of the work done to align netlink attributes
when these attributes contain some 64-bit fields.
David, if the third patch is too big (or maybe the series), I can split it.
Just tell me what you prefer.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:24 +0000 (17:31 +0200)]
taskstats: use the libnl API to align nlattr on 64-bit
The goal of this patch is to use the new libnl API to align netlink attributes
when needed.
The layout of the netlink message will be a bit different after the patch,
because the padattr (TASKSTATS_TYPE_STATS) will be inside the nested
attribute instead of before it.
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:23 +0000 (17:31 +0200)]
xfrm: align nlattr properly when needed
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:22 +0000 (17:31 +0200)]
libnl: add nla_put_u64_64bit() helper
With this function, nla_data() is aligned on a 64-bit area.
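Usage is a hedged one-liner; the attribute names below are illustrative, and
the last argument is the padding attribute type that keeps the payload 64-bit
aligned:
        if (nla_put_u64_64bit(skb, MY_ATTR_VALUE, value, MY_ATTR_PAD))
                goto nla_put_failure;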
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:21 +0000 (17:31 +0200)]
libnl: nla_put_msecs(): align on a 64-bit area
nla_data() is now aligned on a 64-bit area.
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:20 +0000 (17:31 +0200)]
libnl: nla_put_s64(): align on a 64-bit area
nla_data() is now aligned on a 64-bit area.
In fact, there is no user of this function.
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:19 +0000 (17:31 +0200)]
libnl: nla_put_net64(): align on a 64-bit area
nla_data() is now aligned on a 64-bit area.
The temporary function nla_put_be64_32bit() is removed in this patch.
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:18 +0000 (17:31 +0200)]
libnl: nla_put_be64(): align on a 64-bit area
nla_data() is now aligned on a 64-bit area.
A temporary version (nla_put_be64_32bit()) is added for nla_put_net64().
This function is removed in the next patch.
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:17 +0000 (17:31 +0200)]
libnl: nla_put_le64(): align on a 64-bit area
nla_data() is now aligned on a 64-bit area.
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:16 +0000 (17:31 +0200)]
libnl: fix help of _64bit functions
Fix a typo and describe 'padattr'.
Fixes: 089bf1a6a924 ("libnl: add more helpers to align attributes on 64-bit")
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 23 Apr 2016 22:26:24 +0000 (18:26 -0400)]
Merge git://git./linux/kernel/git/davem/net
Conflicts were two cases of simple overlapping changes,
nothing serious.
In the UDP case, we need to add a hlist_add_tail_rcu()
to linux/rculist.h, because we've moved UDP socket handling
away from using nulls lists.
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Torvalds [Thu, 21 Apr 2016 22:41:13 +0000 (15:41 -0700)]
Merge tag 'rtc-4.6-3' of git://git./linux/kernel/git/abelloni/linux
Pull RTC fixes from Alexandre Belloni:
"A few fixes for the RTC subsystem. The documentation fix already
missed 4.5 so I think it is worth taking it now:
A documentation fix for s3c and two fixes for the ds1307"
* tag 'rtc-4.6-3' of git://git.kernel.org/pub/scm/linux/kernel/git/abelloni/linux:
rtc: ds1307: Use irq when available for wakeup-source device
rtc: ds1307: ds3231 temperature s16 overflow
rtc: s3c: Document in binding that only s3c6410 needs a src clk
Linus Torvalds [Thu, 21 Apr 2016 21:29:34 +0000 (14:29 -0700)]
Merge tag 'pm+acpi-4.6-rc5' of git://git./linux/kernel/git/rafael/linux-pm
Pull power management fixes from Rafael Wysocki:
"Two fixes for issues introduced recently, one for an intel_pstate
driver problem uncovered by the recent switch over from using timers
and the other one for a potential cpufreq core problem related to
system suspend/resume.
Specifics:
- Fix an intel_pstate driver problem causing CPUs to get stuck in the
highest P-state when completely idle uncovered by the recent switch
over from using timers (Rafael Wysocki).
- Avoid attempts to get the current CPU frequency when all devices
(like I2C controllers that may be needed for that purpose) have
been suspended during system suspend/resume (Rafael Wysocki)"
* tag 'pm+acpi-4.6-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
cpufreq: Abort cpufreq_update_current_freq() for cpufreq_suspended set
intel_pstate: Avoid getting stuck in high P-states when idle
Nishanth Menon [Tue, 19 Apr 2016 16:23:54 +0000 (11:23 -0500)]
rtc: ds1307: Use irq when available for wakeup-source device
With commit 8bc2a40730ec ("rtc: ds1307: add support for the DT property
'wakeup-source'") we lost rtc irq functionality for devices that are
actually hooked to a real IRQ line and are capable of wakeup as well.
This is not the expected behavior. So, instead of just not requesting the
IRQ, skip the IRQ requirement only if no interrupts are defined for the
device.
Fixes: 8bc2a40730ec ("rtc: ds1307: add support for the DT property 'wakeup-source'")
Reported-by: Tony Lindgren <tony@atomide.com>
Cc: Michael Lange <linuxstuff@milaw.biz>
Cc: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Zhuang Yuyao [Mon, 18 Apr 2016 00:21:42 +0000 (09:21 +0900)]
rtc: ds1307: ds3231 temperature s16 overflow
While retrieving the temperature from the ds3231, the result may overflow
since s16 is too small for a multiplication by 250.
I.e., if temp_buf[0] == 0x2d, the result (s16 temp) will be negative.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Tested-by: Michael Tatarinov <kukabu@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Linus Torvalds [Thu, 21 Apr 2016 19:57:34 +0000 (12:57 -0700)]
Merge git://git./linux/kernel/git/davem/net
Pull networking fixes from David Miller:
1) Fix memory leak in iwlwifi, from Matti Gottlieb.
2) Add missing registration of netfilter arp_tables into initial
namespace, from Florian Westphal.
3) Fix potential NULL deref in DecNET routing code.
4) Restrict NETLINK_URELEASE to truly bound sockets only, from Dmitry
Ivanov.
5) Fix dst ref counting in VRF, from David Ahern.
6) Fix TSO segmenting limits in i40e driver, from Alexander Duyck.
7) Fix heap leak in PACKET_DIAG_MCLIST, from Mathias Krause.
8) Revalidate IPv6 datagram socket cached routes properly, particularly
with UDP, from Martin KaFai Lau.
9) Fix endian bug in RDS dp_ack_seq handling, from Qing Huang.
10) Fix stats typing in bcmgenet driver, from Eric Dumazet.
11) Openvswitch needs to orphan SKBs before ipv6 fragmentation handling,
from Joe Stringer.
12) SPI device reference leak in spi_ks8895 PHY driver, from Mark Brown.
13) atl2 doesn't actually support scatter-gather, so don't advertise the
feature. From Ben Hutchings.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (72 commits)
openvswitch: use flow protocol when recalculating ipv6 checksums
Driver: Vmxnet3: set CHECKSUM_UNNECESSARY for IPv6 packets
atl2: Disable unimplemented scatter/gather feature
net/mlx4_en: Split SW RX dropped counter per RX ring
net/mlx4_core: Don't allow to VF change global pause settings
net/mlx4_core: Avoid repeated calls to pci enable/disable
net/mlx4_core: Implement pci_resume callback
net: phy: spi_ks8895: Don't leak references to SPI devices
net: ethernet: davinci_emac: Fix platform_data overwrite
net: ethernet: davinci_emac: Fix Unbalanced pm_runtime_enable
qede: Fix single MTU sized packet from firmware GRO flow
qede: Fix setting Skb network header
qede: Fix various memory allocation error flows for fastpath
tcp: Merge tx_flags and tskey in tcp_shifted_skb
tcp: Merge tx_flags and tskey in tcp_collapse_retrans
drivers: net: cpsw: fix wrong regs access in cpsw_ndo_open
tcp: Fix SOF_TIMESTAMPING_TX_ACK when handling dup acks
openvswitch: Orphan skbs before IPv6 defrag
Revert "Prevent NUll pointer dereference with two PHYs on cpsw"
VSOCK: Only check error on skb_recv_datagram when skb is NULL
...
David S. Miller [Thu, 21 Apr 2016 19:36:04 +0000 (15:36 -0400)]
Merge branch 'geneve-vxlan-deps'
Hannes Frederic Sowa says:
====================
net: network drivers should not depend on geneve/vxlan
This patchset removes the dependency of network drivers on vxlan or
geneve, so those don't get autoloaded when the NIC driver is loaded.
The code was also audited to ensure that vxlan_get_rx_port and geneve_get_rx_port
are not called without the rtnl lock held.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa [Mon, 18 Apr 2016 19:19:48 +0000 (21:19 +0200)]
geneve: break dependency with netdev drivers
Equivalent to "vxlan: break dependency with netdev drivers", don't
autoload the geneve module when a NIC driver is loaded. Instead, make the
coupling weaker by using netdevice notifiers as a proxy.
Cc: Jesse Gross <jesse@kernel.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa [Mon, 18 Apr 2016 19:19:47 +0000 (21:19 +0200)]
vxlan: break dependency with netdev drivers
Currently all drivers depend on and autoload the vxlan module because of
how vxlan_get_rx_port is linked into them. Remove this dependency:
by using a new event type in the netdevice notifier call chain, we proxy
the request from the drivers to flush and re-set-up the vxlan ports, not
directly via a function call but through the already existing netdevice
notifier call chain.
I added a separate new event type, NETDEV_OFFLOAD_PUSH_VXLAN, to do so.
We don't need to save those ids, as the event type field is an unsigned
long, and using specialized event types for this purpose seemed to be a
more elegant way. This also comes in handy if in the future we want to
add offloading knobs for vxlan.
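A hedged sketch of the proxy shape (handler name is illustrative; the event
name is the one introduced by this patch):
/* driver-facing helper: no direct call into the vxlan module anymore */
static inline void vxlan_get_rx_port(struct net_device *netdev)
{
        ASSERT_RTNL();
        call_netdevice_notifiers(NETDEV_OFFLOAD_PUSH_VXLAN, netdev);
}

/* in the vxlan module's netdevice notifier callback */
        if (event == NETDEV_OFFLOAD_PUSH_VXLAN)
                vxlan_push_rx_ports(dev);   /* re-announce offloaded UDP ports */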
Cc: Jesse Gross <jesse@kernel.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa [Mon, 18 Apr 2016 19:19:46 +0000 (21:19 +0200)]
qlcnic: protect qlcnic_attach_func with rtnl_lock
qlcnic_attach_func requires rtnl_lock to be held.
Cc: Dept-GELinuxNICDev@qlogic.com
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa [Mon, 18 Apr 2016 19:19:45 +0000 (21:19 +0200)]
ixgbe: protect vxlan_get_rx_port in ixgbe_service_task with rtnl_lock
vxlan_get_rx_port requires rtnl_lock to be held.
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Shannon Nelson <shannon.nelson@intel.com>
Cc: Carolyn Wyborny <carolyn.wyborny@intel.com>
Cc: Don Skidmore <donald.c.skidmore@intel.com>
Cc: Bruce Allan <bruce.w.allan@intel.com>
Cc: John Ronciak <john.ronciak@intel.com>
Cc: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa [Mon, 18 Apr 2016 19:19:44 +0000 (21:19 +0200)]
mlx4: protect mlx4_en_start_port in mlx4_en_restart with rtnl_lock
mlx4_en_start_port requires rtnl_lock to be held.
Cc: Eugenia Emantayev <eugenia@mellanox.com>
Cc: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa [Mon, 18 Apr 2016 19:19:43 +0000 (21:19 +0200)]
fm10k: protect fm10k_open in fm10k_io_resume with rtnl_lock
fm10k_open requires rtnl_lock to be held.
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Shannon Nelson <shannon.nelson@intel.com>
Cc: Carolyn Wyborny <carolyn.wyborny@intel.com>
Cc: Don Skidmore <donald.c.skidmore@intel.com>
Cc: Bruce Allan <bruce.w.allan@intel.com>
Cc: John Ronciak <john.ronciak@intel.com>
Cc: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa [Mon, 18 Apr 2016 19:19:42 +0000 (21:19 +0200)]
benet: be_resume needs to protect be_open with rtnl_lock
be_open calls down to functions which expect the rtnl lock to be held.
Cc: Sathya Perla <sathya.perla@broadcom.com>
Cc: Ajit Khaparde <ajit.khaparde@broadcom.com>
Cc: Padmanabh Ratnakar <padmanabh.ratnakar@broadcom.com>
Cc: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Cc: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Simon Horman [Thu, 21 Apr 2016 01:49:15 +0000 (11:49 +1000)]
openvswitch: use flow protocol when recalculating ipv6 checksums
When using masked actions, the ipv6_proto field of an action
to set IPv6 fields may be zero rather than the prevailing protocol,
which will result in skipping checksum recalculation.
This patch resolves the problem by relying on the protocol
in the flow key rather than that in the set field action.
Fixes: 83d2b9ba1abc ("net: openvswitch: Support masked set actions.")
Cc: Jarno Rajahalme <jrajahalme@nicira.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shrikrishna Khare [Thu, 21 Apr 2016 01:12:29 +0000 (18:12 -0700)]
Driver: Vmxnet3: set CHECKSUM_UNNECESSARY for IPv6 packets
For IPv6, if the device indicates that the checksum is correct, set
CHECKSUM_UNNECESSARY.
Reported-by: Subbarao Narahari <snarahari@vmware.com>
Signed-off-by: Shrikrishna Khare <skhare@vmware.com>
Signed-off-by: Jin Heo <heoj@vmware.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ben Hutchings [Wed, 20 Apr 2016 22:23:08 +0000 (23:23 +0100)]
atl2: Disable unimplemented scatter/gather feature
atl2 includes NETIF_F_SG in hw_features even though it has no support
for non-linear skbs. This bug was originally harmless since the
driver does not claim to implement checksum offload and that used to
be a requirement for SG.
Now that SG and checksum offload are independent features, if you
explicitly enable SG *and* use one of the rare protocols that can use
SG without checksum offload, this potentially leaks sensitive
information (before you notice that it just isn't working). Therefore
this obscure bug has been designated CVE-2016-2117.
Reported-by: Justin Yackoski <jyackoski@crypto-nite.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Fixes: ec5f06156423 ("net: Kill link between CSUM and SG features.")
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Wed, 20 Apr 2016 20:51:00 +0000 (16:51 -0400)]
net: Add support for IP ID mangling TSO in cases that require encapsulation
This patch adds support for NETIF_F_TSO_MANGLEID if a given tunnel supports
NETIF_F_TSO. This way, if needed, a device can later enable TSO
with IP ID mangling, and the tunnels on top of that device can then also
make use of the IP ID mangling as well.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 21 Apr 2016 19:09:06 +0000 (15:09 -0400)]
Merge branch 'mlx5-next'
Saeed Mahameed says:
====================
Mellanox 100G mlx5 driver receive path optimizations
Changes from V2:
- Rebased to 46e7b8d8d53b ("net: dsa: kill circular reference with slave priv")
- Updated: ("net/mlx5e: Support RX multi-packet WQE (Striding RQ)")
* Per Eric Dumazet's comment we changed the driver memory handling scheme to
work with order-0 pages rather than order-5 via split_page().
* This means that an mlx5e rx skb can now hold one or more (in case of HW LRO)
skb frags, each pointing to a 4K order-0 page rather than one frag with an order-5 page.
- Updated: ("net/mlx5e: Add fragmented memory support for RX multi packet WQE")
* Code refactoring and code reuse due the split_page() mechanism,
now the MPWQE and fragmented MPWQE handling almost look the same,
and share most of the code.
- In some cases we see a 2%-3% packet rate degradation in comparison to the order-5 pages approach,
due to split_page() cpu consumption, but we still see a 3%-10% improvement in comparison to the
current linear SKB approach.
- We do believe that the driver memory scheme is now significantly less vulnerable
to the memory DoS attack Eric pointed out.
Changes from V1:
- Rebased to efde611b0afa ("Merge branch 'nfp-next'")
- Dropped: ("net/mlx5: Refactor mlx5_core_mr to mkey")
Already merged into 4.6 from rdma tree.
- Dropped: ("net/mlx5_core: Add ConnectX-5 to list of supported devices")
Will be pushed to net as we want it in 4.6 release.
- Dropped: ("net/mlx5e: Change RX moderation period to be based on CQE")
Will be pushed in a later series with full software based adaptive moderation.
- Added: ("net/mlx5e: Delay skb->data access")
Small trivial optimization.
- Updated: ("net/mlx5e: Support RX multi-packet WQE (Striding RQ)")
Changed Striding RQ defaults to:
> NUM WQEs = 16
> Strides Per WQE = 1024
> Stride Size = 128
- Updated: ("net/mlx5e: Use napi_alloc_skb for RX SKB allocations")
Consider the IP packet alignment already done in napi_alloc_skb.
Changes from V0:
- Fixed a typo in commit message reported by Sergei
- Align SKB fragments truesize to stride size
- Use skb_add_rx_frag and remove the use of SKB_TRUESIZE
- Fix: # MTTs alignment on Power PC
- Fix: Free original (unaligned) pointer of MTT array
- Use dev_alloc_pages and dev_alloc_page
- Extend the stats.buff_alloc_err counter
- Reform the copying of packet header into skb linear data
- Add compiler hints for conditional statements
- Prefetch skd->data prior to copying packet header into it
- Rework: mlx5e_complete_rx_fragmented_mpwqe
- Handle SKB fragments before linear data
- Dropped ("net/mlx5e: Prefetch next RX CQE") for now
- Added a small patch that Adds ConnectX-5 devices to the list of supported devices
- Rebased to 1cdba5505555 ("Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next")
This series includes some RX modifications and optimizations for
the mlx5 Ethernet driver.
From Rana, we have one patch that adds support for ConnectX-4
queue counters.
From Tariq, several patches that are centered around improving
RX path message rate and CPU and memory utilization; in each patch's
commit message you will find the performance improvement numbers
related to that specific patch.
In the 2nd patch we used a queue counter to report the "out of buffer"
dropped packet count, "Dropped packets due to lack of software resources".
The 3rd patch modifies the driver's default RSS value to spread only over the
close NUMA node cores, for a better out-of-the-box experience.
In the 4th and 5th patches we utilized RX multi-packet WQE
(Striding RQ) for better memory utilization, especially in case hardware
LRO is enabled, and for a better message rate for small packets.
In the 6th and 7th patches we added a fallback mechanism to use fragmented
memory when allocating large WQE strides fails, using UMR
(User Memory Registration) and ICO (Internal Control Operations) SQs.
In the 8th to 11th patches we made some small modifications which show some small
extra improvements.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Tariq Toukan [Wed, 20 Apr 2016 19:02:19 +0000 (22:02 +0300)]
net/mlx5e: Add ethtool counter for RX buffer allocation failures
Counts the number of RX buffer allocation failures and shows it
in ethtool statistics.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Saeed Mahameed [Wed, 20 Apr 2016 19:02:18 +0000 (22:02 +0300)]
net/mlx5e: Delay skb->data access
Move mlx5e_handle_csum and eth_type_trans to the end of
mlx5e_build_rx_skb to gain some more time before accessing
skb->data, to reduce cache misses.
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tariq Toukan [Wed, 20 Apr 2016 19:02:17 +0000 (22:02 +0300)]
net/mlx5e: Remove redundant barrier
The bit-op operation one line before is an explicit barrier
by itself.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tariq Toukan [Wed, 20 Apr 2016 19:02:16 +0000 (22:02 +0300)]
net/mlx5e: Use napi_alloc_skb for RX SKB allocations
Instead of netdev_alloc_skb, we use the napi_alloc_skb function,
which is designed to allocate skbuffs for RX in a
channel-specific NAPI instance, and which implies the IP packet alignment.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tariq Toukan [Wed, 20 Apr 2016 19:02:15 +0000 (22:02 +0300)]
net/mlx5e: Add fragmented memory support for RX multi packet WQE
If the allocation of a linear (physically contiguous) MPWQE fails,
we allocate a fragmented MPWQE.
This is implemented via the device's UMR (User Memory Registration),
which allows registering multiple memory fragments into the ConnectX
hardware as one contiguous buffer.
UMR registration is an asynchronous operation and is done via
ICO SQs.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tariq Toukan [Wed, 20 Apr 2016 19:02:14 +0000 (22:02 +0300)]
net/mlx5e: Added ICO SQs
Added ICO (Internal Control Operations) SQ per channel to be used
for driver internal operations such as memory registration for
fragmented memory and nop requests upon ifconfig up.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tariq Toukan [Wed, 20 Apr 2016 19:02:13 +0000 (22:02 +0300)]
net/mlx5e: Support RX multi-packet WQE (Striding RQ)
Introduce the feature of multi-packet WQE (RX Work Queue Element)
referred to as (MPWQE or Striding RQ), in which WQEs are larger
and serve multiple packets each.
Every WQE consists of many strides of the same size, every received
packet is aligned to a beginning of a stride and is written to
consecutive strides within a WQE.
In the regular approach, each WQE is big enough to be capable
of serving one received packet of any size up to the MTU, or 64K in case
device LRO is enabled, making it very wasteful when dealing with
small packets or when device LRO is enabled.
For its flexibility, MPWQE allows better memory utilization
(implying improvements in CPU utilization and packet rate) as packets
consume strides according to their size, preserving the rest of
the WQE to be available for other packets.
MPWQE default configuration:
Num of WQEs = 16
Strides Per WQE = 2048
Stride Size = 64 byte
The default WQEs memory footprint went from 1024*mtu (~1.5MB) to
16 * 2048 * 64 = 2MB per ring.
However, HW LRO can now be supported at no additional cost in memory
footprint, and hence we turn it on by default and get even better
performance.
Performance tested on ConnectX4-Lx 50G.
To isolate the feature under test, the numbers below were measured with
HW LRO turned off. We verified that the performance just improves when
LRO is turned back on.
* Netperf single TCP stream:
- BW raised by 10-15% for representative packet sizes:
default, 64B, 1024B, 1478B, 65536B.
* Netperf multi TCP stream:
- No degradation, line rate reached.
* Pktgen: packet rate raised by 2-10% for traffic of different message
sizes: 64B, 128B, 256B, 1024B, and 1500B.
* Pktgen: packet loss in bursts of small messages (64byte),
single stream:
- | num packets | packets loss before | packets loss after
| 2K | ~ 1K | 0
| 8K | ~ 6K | 0
| 16K | ~13K | 0
| 32K | ~28K | 0
| 64K | ~57K | ~24K
This is expected, as the driver can now receive as many small packets (<=64B) as
the total number of strides in the ring (default = 2048 * 16), vs. 1024
(the default ring size, regardless of packet size) before this feature.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tariq Toukan [Wed, 20 Apr 2016 19:02:12 +0000 (22:02 +0300)]
net/mlx5e: Use function pointers for RX data path handling
In preparation for Striding RQ feature, which will need its own
RX handlers.
This patch does not change any functionality.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tariq Toukan [Wed, 20 Apr 2016 19:02:11 +0000 (22:02 +0300)]
net/mlx5e: Use only close NUMA node for default RSS
Distribute default RSS table uniformly over the rings of the
close NUMA node, instead of all available channels.
This way we enforce the preference of close rings over far ones.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rana Shahout [Wed, 20 Apr 2016 19:02:10 +0000 (22:02 +0300)]
net/mlx5e: Allocate set of queue counters per netdev
Connect all netdev RQs to this set of queue counters.
Also, add an "rx_out_of_buffer" counter to ethtool,
which indicates RX packet drops due to lack of receive
buffers.
Signed-off-by: Rana Shahout <ranas@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tariq Toukan [Wed, 20 Apr 2016 19:02:09 +0000 (22:02 +0300)]
net/mlx5: Introduce device queue counters
A queue counter can collect several statistics for one or more
hardware queues (QPs, RQs, etc ..) that the counter is attached to.
For Ethernet it will provide an "out of buffer" counter which
collects the number of all packets that are dropped due to lack
of software buffers.
Here we add device commands to alloc/query/dealloc queue counters.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Rana Shahout <ranas@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 21 Apr 2016 19:06:05 +0000 (15:06 -0400)]
Merge branch 'bcmsysport-napi-updates'
Florian Fainelli says:
====================
net: bcmsysport: utilize newer NAPI APIs
These two patches are very analogous to what was already submitted for
BCMGENET and switch the SYSTEMPORT driver to utilizing __napi_schedule_irqoff()
and napi_complete_done() for the RX NAPI context.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Fainelli [Wed, 20 Apr 2016 18:37:09 +0000 (11:37 -0700)]
net: bcmsysport: use napi_complete_done()
By using napi_complete_done(), we allow fine tuning of
/sys/class/net/ethX/gro_flush_timeout for higher GRO aggregation
efficiency for a Gbit NIC.
Check commit 24d2e4a50737 ("tg3: use napi_complete_done()") for details.
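A hedged sketch of the poll-completion side (the ring-processing helper and
struct layout are illustrative):
static int bcm_sysport_poll(struct napi_struct *napi, int budget)
{
        struct bcm_sysport_priv *priv =
                container_of(napi, struct bcm_sysport_priv, napi);
        unsigned int work_done;

        work_done = bcm_sysport_desc_rx(priv, budget);  /* illustrative */

        if (work_done < budget) {
                /* reporting the actual work lets the gro_flush_timeout
                 * heuristic decide whether to keep GRO sessions open
                 */
                napi_complete_done(napi, work_done);
                /* re-enable RX interrupts here */
        }

        return work_done;
}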
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Fainelli [Wed, 20 Apr 2016 18:37:08 +0000 (11:37 -0700)]
net: bcmsysport: use __napi_schedule_irqoff()
Both bcm_sysport_tx_isr() and bcm_sysport_rx_isr() run in hard irq
context, we do not need to block irq again.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 21 Apr 2016 19:02:41 +0000 (15:02 -0400)]
Merge branch 'mlx4-fixes'
Or Gerlitz says:
====================
Mellanox 40G driver fixes for 4.6-rc
With the fix for the ARM bug still in the works, these are a
few other fixes for mlx4 we have ready to go.
Eran addressed the problematic/wrong reporting of dropped packets, Daniel
fixed some matters related to PPC EEH, and Jenny's patch makes sure
VFs can't change the port's pause settings.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Eran Ben Elisha [Wed, 20 Apr 2016 13:01:18 +0000 (16:01 +0300)]
net/mlx4_en: Split SW RX dropped counter per RX ring
Count SW packet drops per RX ring instead of a global counter. This
will allow monitoring the number of rx drops per ring.
In addition, the SW rx_dropped counter was overwritten by the HW rx_dropped
counter; sum both of them instead to show the accurate value.
Fixes: a3333b35da16 ('net/mlx4_en: Moderate ethtool callback to [...] ')
Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Reported-by: Brenden Blanco <bblanco@plumgrid.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Reported-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eugenia Emantayev [Wed, 20 Apr 2016 13:01:17 +0000 (16:01 +0300)]
net/mlx4_core: Don't allow VF to change global pause settings
Currently changing global pause settings is done via SET_PORT
command with input modifier GENERAL. This command is allowed
for each VF since MTU setting is done via the same command.
Change the above to the following scheme: before passing the
request to the FW, the PF will check whether it was issued
by a slave. If so, don't change the global pause settings and warn;
otherwise, change to the requested value and store it for
further reference.
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Jurgens [Wed, 20 Apr 2016 13:01:16 +0000 (16:01 +0300)]
net/mlx4_core: Avoid repeated calls to pci enable/disable
Maintain the PCI status and provide wrappers for enabling and disabling
the PCI device. Performing either action more than once without doing
its opposite results in warning logs.
This occurred when EEH hotplugged the device, causing a warning for
disabling an already disabled device.
Fixes: 2ba5fbd62b25 ('net/mlx4_core: Handle AER flow properly')
Signed-off-by: Daniel Jurgens <danielj@mellanox.com>
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Jurgens [Wed, 20 Apr 2016 13:01:15 +0000 (16:01 +0300)]
net/mlx4_core: Implement pci_resume callback
Move resume related activities to a new pci_resume function instead of
performing them in mlx4_pci_slot_reset. This change is needed to avoid
a hotplug during EEH recovery due to commit f2da4ccf8bd4 ("powerpc/eeh:
More relaxed hotplug criterion").
Fixes: 2ba5fbd62b25 ('net/mlx4_core: Handle AER flow properly')
Signed-off-by: Daniel Jurgens <danielj@mellanox.com>
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Mark Brown [Wed, 20 Apr 2016 11:54:05 +0000 (12:54 +0100)]
net: phy: spi_ks8895: Don't leak references to SPI devices
The ks8895 driver is using spi_dev_get() apparently just to take a copy
of the SPI device used to instantiate it, but it never calls spi_dev_put()
to free it. Since the device is guaranteed to exist between probe() and
remove(), there should be no need for the driver to take an extra
reference to it, so fix the leak by just using a straight assignment.
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Neil Armstrong [Wed, 20 Apr 2016 08:56:45 +0000 (10:56 +0200)]
net: ethernet: davinci_emac: Fix platform_data overwrite
When the DaVinci emac driver is removed and re-probed, the actual
pdev->dev.platform_data is populated with an unwanted valid pointer saved by
the previous davinci_emac_of_get_pdata() call, causing a kernel crash when
calling priv->int_disable() in emac_int_disable().
Unable to handle kernel paging request at virtual address c8622a80
...
[<c0426fb4>] (emac_int_disable) from [<c0427700>] (emac_dev_open+0x290/0x5f8)
[<c0427700>] (emac_dev_open) from [<c04c00ec>] (__dev_open+0xb8/0x120)
[<c04c00ec>] (__dev_open) from [<c04c0370>] (__dev_change_flags+0x88/0x14c)
[<c04c0370>] (__dev_change_flags) from [<c04c044c>] (dev_change_flags+0x18/0x48)
[<c04c044c>] (dev_change_flags) from [<c052bafc>] (devinet_ioctl+0x6b4/0x7ac)
[<c052bafc>] (devinet_ioctl) from [<c04a1428>] (sock_ioctl+0x1d8/0x2c0)
[<c04a1428>] (sock_ioctl) from [<c014f054>] (do_vfs_ioctl+0x41c/0x600)
[<c014f054>] (do_vfs_ioctl) from [<c014f2a4>] (SyS_ioctl+0x6c/0x7c)
[<c014f2a4>] (SyS_ioctl) from [<c000ff60>] (ret_fast_syscall+0x0/0x1c)
Fixes: 42f59967a091 ("net: ethernet: davinci_emac: add OF support")
Cc: Brian Hutchinson <b.hutchman@gmail.com>
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Neil Armstrong [Wed, 20 Apr 2016 08:56:13 +0000 (10:56 +0200)]
net: ethernet: davinci_emac: Fix Unbalanced pm_runtime_enable
In order to avoid an "Unbalanced pm_runtime_enable" warning in the DaVinci
emac driver when the device is removed and re-probed, add a
pm_runtime_disable() call in davinci_emac_remove().
Actually, using unbind/bind on a TI DM8168 SoC gives:
$ echo 4a120000.ethernet > /sys/bus/platform/drivers/davinci_emac/unbind
net eth1: DaVinci EMAC: davinci_emac_remove()
$ echo 4a120000.ethernet > /sys/bus/platform/drivers/davinci_emac/bind
davinci_emac 4a120000.ethernet: Unbalanced pm_runtime_enable
Cc: Brian Hutchinson <b.hutchman@gmail.com>
Fixes: 3ba97381343b ("net: ethernet: davinci_emac: add pm_runtime support")
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rafael J. Wysocki [Thu, 21 Apr 2016 18:57:46 +0000 (20:57 +0200)]
Merge branch 'pm-cpufreq-fixes'
* pm-cpufreq-fixes:
cpufreq: Abort cpufreq_update_current_freq() for cpufreq_suspended set
intel_pstate: Avoid getting stuck in high P-states when idle
David S. Miller [Thu, 21 Apr 2016 18:51:29 +0000 (14:51 -0400)]
Merge branch 'qed-fixes'
Manish Chopra says:
====================
qede: Bug fixes
This series fixes:
* various memory allocation failure flows for the fastpath
* issues with respect to driver GRO packet handling
V1->V2:
* Send the series against net instead of net-next.
Please consider applying this series to "net".
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Manish Chopra [Wed, 20 Apr 2016 07:03:29 +0000 (03:03 -0400)]
qede: Fix single MTU sized packet from firmware GRO flow
In the firmware-assisted GRO flow there could be a single MTU-sized
segment arriving due to a firmware aggregation timeout/last segment
in an aggregation flow, which is not expected to be an actual gro
packet. So if an skb has zero frags from the GRO flow then simply
push it into the stack as a non-gso skb.
Signed-off-by: Manish Chopra <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <yuval.mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
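A minimal sketch of the described check in the aggregation-completion path
(function and field names are illustrative, not the exact driver code):
~~~~~
/* Hedged sketch: a lone MTU-sized segment (aggregation timeout or last
 * segment) carries no frags, so it is not a real GRO aggregate and must
 * not be marked GSO; hand it to the stack as a plain skb. */
static void qede_gro_complete_sketch(struct qede_dev *edev,
				     struct qede_fastpath *fp,
				     struct sk_buff *skb, u16 vlan_tag)
{
	if (skb_shinfo(skb)->nr_frags == 0) {
		qede_skb_receive(edev, fp, skb, vlan_tag); /* non-GSO path */
		return;
	}

	/* ... otherwise fill gso_size/gso_type and napi_gro_receive() ... */
}
~~~~~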
Manish Chopra [Wed, 20 Apr 2016 07:03:28 +0000 (03:03 -0400)]
qede: Fix setting Skb network header
Skb's network header needs to be set before extracting IPv4/IPv6
headers from it.
Signed-off-by: Manish Chopra <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <yuval.mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
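A minimal sketch of the ordering requirement (context illustrative): the
network header offset must be set before ip_hdr()/ipv6_hdr() are
dereferenced to pick the GSO type.
~~~~~
static void qede_set_gro_params_sketch(struct sk_buff *skb)
{
	/* must come first, otherwise ip_hdr()/ipv6_hdr() point at garbage */
	skb_reset_network_header(skb);

	/* the version nibble sits in the same place for IPv4 and IPv6 */
	if (ip_hdr(skb)->version == 4)
		skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
	else
		skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6;
}
~~~~~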
Manish Chopra [Wed, 20 Apr 2016 07:03:27 +0000 (03:03 -0400)]
qede: Fix various memory allocation error flows for fastpath
This patch handles memory allocation failures for fastpath
gracefully in the driver.
Signed-off-by: Manish Chopra <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <yuval.mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 21 Apr 2016 18:40:56 +0000 (14:40 -0400)]
Merge branch 'tcp-coalesce-merge-timestamps'
Martin KaFai Lau says:
====================
tcp: Merge timestamp info when coalescing skbs
This series is separated from the RFC series related to
tcp_sendmsg(MSG_EOR) and is targeted at the net branch.
This patchset is focusing on fixing cases where TCP
timestamp could be lost after coalescing skbs.
A BPF prog is used to kprobe sock_queue_err_skb()
and print out the value of serr->ee.ee_data. The BPF
prog (runnable from bcc) is attached here:
BPF prog used for testing:
~~~~~
from __future__ import print_function
from bcc import BPF
bpf_text = """
#include <uapi/linux/ptrace.h>
#include <net/sock.h>
#include <bcc/proto.h>
#include <linux/errqueue.h>
#ifdef memset
#undef memset
#endif
int trace_err_skb(struct pt_regs *ctx)
{
struct sk_buff *skb = (struct sk_buff *)ctx->si;
struct sock *sk = (struct sock *)ctx->di;
struct sock_exterr_skb *serr;
u32 ee_data = 0;
if (!sk || !skb)
return 0;
serr = SKB_EXT_ERR(skb);
bpf_probe_read(&ee_data, sizeof(ee_data), &serr->ee.ee_data);
bpf_trace_printk("ee_data:%u\\n", ee_data);
return 0;
};
"""
b = BPF(text=bpf_text)
b.attach_kprobe(event="sock_queue_err_skb", fn_name="trace_err_skb")
print("Attached to kprobe")
b.trace_print()
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Martin KaFai Lau [Wed, 20 Apr 2016 05:39:29 +0000 (22:39 -0700)]
tcp: Merge tx_flags and tskey in tcp_shifted_skb
After receiving sacks, tcp_shifted_skb() will collapse
skbs if possible. tx_flags and tskey also have to be
merged.
This patch reuses the tcp_skb_collapse_tstamp() to handle
them.
BPF Output Before:
~~~~~
<no-output-due-to-missing-tstamp-event>
BPF Output After:
~~~~~
<...>-2024 [007] d.s. 88.644374: : ee_data:14599
Packetdrill Script:
~~~~~
+0 `sysctl -q -w net.ipv4.tcp_min_tso_segs=10`
+0 `sysctl -q -w net.ipv4.tcp_no_metrics_save=1`
+0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+0 bind(3, ..., ...) = 0
+0 listen(3, 1) = 0
0.100 < S 0:0(0) win 32792 <mss 1460,sackOK,nop,nop,nop,wscale 7>
0.100 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 7>
0.200 < . 1:1(0) ack 1 win 257
0.200 accept(3, ..., ...) = 4
+0 setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0
0.200 write(4, ..., 1460) = 1460
+0 setsockopt(4, SOL_SOCKET, 37, [2688], 4) = 0
0.200 write(4, ..., 13140) = 13140
0.200 > P. 1:1461(1460) ack 1
0.200 > . 1461:8761(7300) ack 1
0.200 > P. 8761:14601(5840) ack 1
0.300 < . 1:1(0) ack 1 win 257 <sack 1461:14601,nop,nop>
0.300 > P. 1:1461(1460) ack 1
0.400 < . 1:1(0) ack 14601 win 257
0.400 close(4) = 0
0.400 > F. 14601:14601(0) ack 1
0.500 < F. 1:1(0) ack 14602 win 257
0.500 > . 14602:14602(0) ack 2
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Tested-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
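A minimal sketch of the reuse described above (the call site inside
tcp_shifted_skb() is illustrative):
~~~~~
/* When part or all of skb is shifted into prev after SACK processing,
 * carry the timestamp request over before skb is consumed, exactly as the
 * retransmit-collapse path does. */
static void tcp_shifted_skb_tstamp_sketch(struct sk_buff *prev,
					  struct sk_buff *skb)
{
	tcp_skb_collapse_tstamp(prev, skb);
}
~~~~~
tcp_skb_collapse_tstamp() merges the tx_flags and takes over the tskey,
which is exactly what the SACK-shift path needs here.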
Martin KaFai Lau [Wed, 20 Apr 2016 05:39:28 +0000 (22:39 -0700)]
tcp: Merge tx_flags and tskey in tcp_collapse_retrans
If two skbs are merged/collapsed during retransmission, the current
logic does not merge the tx_flags and tskey. The end result is
the SCM_TSTAMP_ACK timestamp could be missing for a packet.
The patch:
1. Merge the tx_flags
2. Overwrite the prev_skb's tskey with the next_skb's tskey
BPF Output Before:
~~~~~~
<no-output-due-to-missing-tstamp-event>
BPF Output After:
~~~~~~
packetdrill-2092 [001] d.s. 453.998486: : ee_data:1459
Packetdrill Script:
~~~~~~
+0 `sysctl -q -w net.ipv4.tcp_min_tso_segs=10`
+0 `sysctl -q -w net.ipv4.tcp_no_metrics_save=1`
+0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+0 bind(3, ..., ...) = 0
+0 listen(3, 1) = 0
0.100 < S 0:0(0) win 32792 <mss 1460,sackOK,nop,nop,nop,wscale 7>
0.100 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 7>
0.200 < . 1:1(0) ack 1 win 257
0.200 accept(3, ..., ...) = 4
+0 setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0
0.200 write(4, ..., 730) = 730
+0 setsockopt(4, SOL_SOCKET, 37, [2688], 4) = 0
0.200 write(4, ..., 730) = 730
+0 setsockopt(4, SOL_SOCKET, 37, [2176], 4) = 0
0.200 write(4, ..., 11680) = 11680
+0 setsockopt(4, SOL_SOCKET, 37, [2688], 4) = 0
0.200 > P. 1:731(730) ack 1
0.200 > P. 731:1461(730) ack 1
0.200 > . 1461:8761(7300) ack 1
0.200 > P. 8761:13141(4380) ack 1
0.300 < . 1:1(0) ack 1 win 257 <sack 1461:2921,nop,nop>
0.300 < . 1:1(0) ack 1 win 257 <sack 1461:4381,nop,nop>
0.300 < . 1:1(0) ack 1 win 257 <sack 1461:5841,nop,nop>
0.300 > P. 1:1461(1460) ack 1
0.400 < . 1:1(0) ack 13141 win 257
0.400 close(4) = 0
0.400 > F. 13141:13141(0) ack 1
0.500 < F. 1:1(0) ack 13142 win 257
0.500 > . 13142:13142(0) ack 2
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Tested-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
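A minimal sketch of the two steps listed above, assuming the timestamp
request lives in skb_shared_info's tx_flags/tskey (close in spirit to
tcp_skb_collapse_tstamp(), but not the literal upstream code):
~~~~~
static void collapse_tstamp_sketch(struct sk_buff *prev_skb,
				   const struct sk_buff *next_skb)
{
	const struct skb_shared_info *next_shinfo = skb_shinfo(next_skb);
	u8 tsflags = next_shinfo->tx_flags & SKBTX_ANY_TSTAMP;

	if (tsflags) {
		struct skb_shared_info *prev_shinfo = skb_shinfo(prev_skb);

		prev_shinfo->tx_flags |= tsflags;		/* 1. merge tx_flags  */
		prev_shinfo->tskey = next_shinfo->tskey;	/* 2. take over tskey */
	}
}
~~~~~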
Grygorii Strashko [Tue, 19 Apr 2016 18:09:49 +0000 (21:09 +0300)]
drivers: net: cpsw: fix wrong regs access in cpsw_ndo_open
The cpsw_ndo_open() could try to access CPSW registers before
calling pm_runtime_get_sync(). This will trigger an L3 error:
WARNING: CPU: 0 PID: 21 at drivers/bus/omap_l3_noc.c:147 l3_interrupt_handler+0x220/0x34c()
44000000.ocp:L3 Custom Error: MASTER M2 (64-bit) TARGET L4_FAST (Idle): Data Access in Supervisor mode during Functional access
and CPSW will stop functioning.
Hence, fix it by moving pm_runtime_get_sync() before the first access
to CPSW registers in cpsw_ndo_open().
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
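A minimal sketch of the reordering (field and call names are illustrative):
~~~~~
static int cpsw_ndo_open_sketch(struct net_device *ndev)
{
	struct cpsw_priv *priv = netdev_priv(ndev);
	int ret;

	/* moved up: resume the hardware before any CPSW register access */
	ret = pm_runtime_get_sync(&priv->pdev->dev);
	if (ret < 0) {
		pm_runtime_put_noidle(&priv->pdev->dev);
		return ret;
	}

	/* ... version read, soft reset, ALE setup and the rest of open()
	 * only happen after the get_sync above ... */
	return 0;
}
~~~~~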
David S. Miller [Thu, 21 Apr 2016 18:22:14 +0000 (14:22 -0400)]
Merge branch 'nlattr_align'
Nicolas Dichtel says:
====================
libnl: enhance API to ease 64bit alignment for attribute
Here is a proposal to add more helpers to libnetlink to manage 64-bit
alignment issues.
Note that this series was only tested on x86 by tweaking
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS and adding some traces.
The first patch adds helpers for 64bit alignment and other patches
use them.
We could also add helpers for nla_put_u64() and its variants if needed.
v1 -> v2:
- remove patch #1
- split patch #2 (now #1 and #2)
- add nla_need_padding_for_64bit()
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Thu, 21 Apr 2016 16:58:27 +0000 (18:58 +0200)]
ip6mr: align RTA_MFC_STATS on 64-bit
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Thu, 21 Apr 2016 16:58:26 +0000 (18:58 +0200)]
ipmr: align RTA_MFC_STATS on 64-bit
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Thu, 21 Apr 2016 16:58:25 +0000 (18:58 +0200)]
rtnl: use the new API to align IFLA_STATS*
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Thu, 21 Apr 2016 16:58:24 +0000 (18:58 +0200)]
libnl: add more helpers to align attributes on 64-bit
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
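A minimal sketch of the alignment pattern these helpers support, using
nla_need_padding_for_64bit() from the cover letter (the surrounding function
and the padattr handling are assumptions, not the exact API added):
~~~~~
static int put_mfc_stats_sketch(struct sk_buff *skb, int attrtype,
				int padattr, const struct rta_mfc_stats *st)
{
	/* On arches without efficient unaligned access, emit a zero-length
	 * pad attribute first so the u64 counters that follow start on an
	 * 8-byte boundary. */
	if (nla_need_padding_for_64bit(skb) &&
	    !nla_reserve(skb, padattr, 0))
		return -EMSGSIZE;

	return nla_put(skb, attrtype, sizeof(*st), st);
}
~~~~~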
Alexander Duyck [Tue, 19 Apr 2016 18:02:26 +0000 (14:02 -0400)]
veth: Update features to include all tunnel GSO types
This patch adds support for the checksum enabled versions of UDP and GRE
tunnels. With this change we should be able to send and receive GSO frames
of these types over the veth pair without needing to segment the packets.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Tue, 19 Apr 2016 18:02:19 +0000 (14:02 -0400)]
netdev_features: Fold NETIF_F_ALL_TSO into NETIF_F_GSO_SOFTWARE
This patch folds NETIF_F_ALL_TSO into the bitmask for NETIF_F_GSO_SOFTWARE.
The idea is to avoid duplication of defines since the only difference
between the two was the GSO_UDP bit.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
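The fold boils down to one define; a sketch under the assumption, stated in
the patch, that the two masks differed only in the UDP fragmentation bit
(NETIF_F_UFO at the time):
~~~~~
/* Hedged sketch: NETIF_F_GSO_SOFTWARE expressed in terms of NETIF_F_ALL_TSO
 * instead of repeating the individual TSO bits. */
#define NETIF_F_GSO_SOFTWARE	(NETIF_F_ALL_TSO | NETIF_F_UFO)
~~~~~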
Dan Carpenter [Tue, 19 Apr 2016 14:30:56 +0000 (17:30 +0300)]
geneve: testing the wrong variable in geneve6_build_skb()
We intended to test "err" and not "skb".
Fixes: aed069df099c ('ip_tunnel_core: iptunnel_handle_offloads returns int and doesn't free skb')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
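A minimal sketch of the corrected check (context illustrative; since
aed069df099c the helper returns an int and no longer frees or replaces the
skb):
~~~~~
static int geneve_offload_sketch(struct sk_buff *skb, bool udp_sum)
{
	int type = udp_sum ? SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL;
	int err;

	err = iptunnel_handle_offloads(skb, type);
	if (err)	/* the bug tested the skb pointer instead of err */
		return err;

	return 0;
}
~~~~~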
Peter Heise [Tue, 19 Apr 2016 11:34:28 +0000 (13:34 +0200)]
NLA_BINARY misuse bug in HSR
Removed .type field from NLA to do proper length checking.
Reported by Daniel Borkmann and Julia Lawall.
Signed-off-by: Peter Heise <peter.heise@airbus.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
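A minimal sketch of the policy change (the attribute name is an assumption):
with .type = NLA_BINARY, .len is only an upper bound, so a short attribute
could slip through; dropping .type turns .len into a minimum-length
requirement, making it safe to read ETH_ALEN bytes.
~~~~~
static const struct nla_policy hsr_policy_sketch[IFLA_HSR_MAX + 1] = {
	/* was: { .type = NLA_BINARY, .len = ETH_ALEN }, max length only */
	[IFLA_HSR_SUPERVISION_ADDR] = { .len = ETH_ALEN },
};
~~~~~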
Xin Long [Tue, 19 Apr 2016 07:10:01 +0000 (15:10 +0800)]
net: use jiffies_to_msecs to replace EXPIRES_IN_MS in inet/sctp_diag
The EXPIRES_IN_MS macro comes from net/ipv4/inet_diag.c and dates
back to before jiffies_to_msecs() was introduced.
Now we can remove it and use jiffies_to_msecs() instead.
Suggested-by: Jakub Sitnicki <jkbs@redhat.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Jakub Sitnicki <jkbs@redhat.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
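A minimal sketch of the replacement (the macro body is reconstructed from
its description and may differ from the removed one):
~~~~~
/* before: per-file macro doing the jiffies -> ms conversion by hand */
#define EXPIRES_IN_MS(tmo)	DIV_ROUND_UP(((tmo) - jiffies) * 1000, HZ)

/* after: the generic helper does the same conversion */
static inline u32 expires_in_ms_sketch(unsigned long tmo)
{
	return jiffies_to_msecs(tmo - jiffies);
}
~~~~~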
Alexei Starovoitov [Tue, 19 Apr 2016 03:11:50 +0000 (20:11 -0700)]
perf, bpf: minimize the size of perf_trace_() tracepoint handler
move trace_call_bpf() into helper function to minimize the size
of perf_trace_*() tracepoint handlers.
    text     data     bss      dec     hex filename
10541679  5526646 2945024 19013349 1221ee5 vmlinux_before
10509422  5526646 2945024 18981092 121a0e4 vmlinux_after
It may seem that perf_fetch_caller_regs() can also be moved,
but that is incorrect, since ip/sp will be wrong.
bpf+tracepoint performance is not affected, since
perf_swevent_put_recursion_context() is now inlined.
The EXPORT_SYMBOL_GPL for it can also be dropped.
No measurable change in normal perf tracepoints.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Martin KaFai Lau [Mon, 18 Apr 2016 22:39:53 +0000 (15:39 -0700)]
tcp: Fix SOF_TIMESTAMPING_TX_ACK when handling dup acks
With SOF_TIMESTAMPING_TX_ACK on, receiving dup acks could make the
stack incorrectly conclude that an skb has already
been acked and queue a SCM_TSTAMP_ACK cmsg to the
sk->sk_error_queue.
In tcp_ack_tstamp(), it checks
'between(shinfo->tskey, prior_snd_una, tcp_sk(sk)->snd_una - 1)'.
If prior_snd_una == tcp_sk(sk)->snd_una like the following packetdrill
script, between() returns true but the tskey is actually not acked.
e.g. try between(3, 2, 1).
The fix is to replace between() with one before() and one !before().
By doing this, the -1 offset on the tcp_sk(sk)->snd_una can also be
removed.
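A minimal sketch of the corrected range check (the surrounding
tcp_ack_tstamp() context is omitted):
~~~~~
/* acked iff prior_snd_una <= tskey < snd_una (mod 2^32); a dup ack leaves
 * prior_snd_una == snd_una, so the range is empty and this returns false,
 * whereas between(tskey, prior_snd_una, snd_una - 1) wrongly returned true. */
static bool tskey_acked_sketch(u32 tskey, u32 prior_snd_una, u32 snd_una)
{
	return !before(tskey, prior_snd_una) && before(tskey, snd_una);
}
~~~~~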
A packetdrill script is used to reproduce the dup ack scenario.
Due to the lack of cmsg support in packetdrill (or maybe I just
could not find it), a BPF prog is used to kprobe
sock_queue_err_skb() and print out the value of
serr->ee.ee_data.
Both the packetdrill and the bcc BPF script is attached at the end of
this commit message.
BPF Output Before Fix:
~~~~~~
<...>-2056 [001] d.s. 433.927987: : ee_data:1459 #incorrect
packetdrill-2056 [001] d.s. 433.929563: : ee_data:1459 #incorrect
packetdrill-2056 [001] d.s. 433.930765: : ee_data:1459 #incorrect
packetdrill-2056 [001] d.s. 434.028177: : ee_data:1459
packetdrill-2056 [001] d.s. 434.029686: : ee_data:14599
BPF Output After Fix:
~~~~~~
<...>-2049 [000] d.s. 113.517039: : ee_data:1459
<...>-2049 [000] d.s. 113.517253: : ee_data:14599
BCC BPF Script:
~~~~~~
#!/usr/bin/env python
from __future__ import print_function
from bcc import BPF
bpf_text = """
#include <uapi/linux/ptrace.h>
#include <net/sock.h>
#include <bcc/proto.h>
#include <linux/errqueue.h>
#ifdef memset
#undef memset
#endif
int trace_err_skb(struct pt_regs *ctx)
{
struct sk_buff *skb = (struct sk_buff *)ctx->si;
struct sock *sk = (struct sock *)ctx->di;
struct sock_exterr_skb *serr;
u32 ee_data = 0;
if (!sk || !skb)
return 0;
serr = SKB_EXT_ERR(skb);
bpf_probe_read(&ee_data, sizeof(ee_data), &serr->ee.ee_data);
bpf_trace_printk("ee_data:%u\\n", ee_data);
return 0;
};
"""
b = BPF(text=bpf_text)
b.attach_kprobe(event="sock_queue_err_skb", fn_name="trace_err_skb")
print("Attached to kprobe")
b.trace_print()
Packetdrill Script:
~~~~~~
+0 `sysctl -q -w net.ipv4.tcp_min_tso_segs=10`
+0 `sysctl -q -w net.ipv4.tcp_no_metrics_save=1`
+0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+0 bind(3, ..., ...) = 0
+0 listen(3, 1) = 0
0.100 < S 0:0(0) win 32792 <mss 1460,sackOK,nop,nop,nop,wscale 7>
0.100 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 7>
0.200 < . 1:1(0) ack 1 win 257
0.200 accept(3, ..., ...) = 4
+0 setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0
+0 setsockopt(4, SOL_SOCKET, 37, [2688], 4) = 0
0.200 write(4, ..., 1460) = 1460
0.200 write(4, ..., 13140) = 13140
0.200 > P. 1:1461(1460) ack 1
0.200 > . 1461:8761(7300) ack 1
0.200 > P. 8761:14601(5840) ack 1
0.300 < . 1:1(0) ack 1 win 257 <sack 1461:2921,nop,nop>
0.300 < . 1:1(0) ack 1 win 257 <sack 1461:4381,nop,nop>
0.300 < . 1:1(0) ack 1 win 257 <sack 1461:5841,nop,nop>
0.300 > P. 1:1461(1460) ack 1
0.400 < . 1:1(0) ack 14601 win 257
0.400 close(4) = 0
0.400 > F. 14601:14601(0) ack 1
0.500 < F. 1:1(0) ack 14602 win 257
0.500 > . 14602:14602(0) ack 2
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Soheil Hassas Yeganeh <soheil.kdev@gmail.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Tested-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vivien Didelot [Mon, 18 Apr 2016 22:24:04 +0000 (18:24 -0400)]
net: dsa: remove tag_protocol from dsa_switch
Having the tag protocol in dsa_switch_driver for setup time and in
dsa_switch_tree for runtime is enough, so remove the copy held in dsa_switch.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>