John Fastabend [Tue, 16 Oct 2018 18:08:04 +0000 (11:08 -0700)]
bpf: sockmap, support for msg_peek in sk_msg with redirect ingress
This adds support for the MSG_PEEK flag when doing redirect to ingress
and receiving on the sk_msg psock queue. Previously the flag was
being ignored, which could confuse applications that expected the
flag to behave normally.
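For illustration, a minimal user-space sketch of the now-working behaviour (the descriptor and buffer size are hypothetical, not part of the patch):
#include <stdio.h>
#include <sys/socket.h>
/* 'fd' is assumed to be a TCP socket attached to a sockmap whose BPF
 * program redirects msgs to ingress.
 */
static void peek_then_read(int fd)
{
        char buf[256];
        ssize_t n;
        /* MSG_PEEK: data is copied out but stays on the psock queue */
        n = recv(fd, buf, sizeof(buf), MSG_PEEK);
        if (n > 0)
                printf("peeked %zd bytes\n", n);
        /* normal read: the same data is now consumed */
        n = recv(fd, buf, sizeof(buf), 0);
        if (n > 0)
                printf("read %zd bytes\n", n);
}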
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
John Fastabend [Tue, 16 Oct 2018 18:07:59 +0000 (11:07 -0700)]
bpf: skmsg, improve sk_msg_used_element to work in cork context
Currently sk_msg_used_element is only called in the zerocopy context,
where cork is not possible; if corking is needed we fall back to copy
mode. However, the helper is more useful if it works in all contexts.
This patch resolves the case where end == head, which can indicate
either a full or an empty ring; previously the helper always reported
an empty ring. To fix this, add a test for the full-ring case so that
a full ring is not reported as having 0 elements. This additional
functionality will be used in the next patches from the recvmsg
context, where end == head with a full ring is a valid case.
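A standalone sketch of the counting idea (illustrative only, not the kernel's sk_msg helper; the names and ring size are made up):
#define RING_SLOTS 16
struct ring {
        unsigned int start;     /* first used slot */
        unsigned int end;       /* first free slot */
        unsigned int size;      /* total bytes queued */
};
static unsigned int ring_used_elements(const struct ring *r)
{
        if (r->end == r->start)
                /* ambiguous case: full ring if bytes are still queued */
                return r->size ? RING_SLOTS : 0;
        if (r->end > r->start)
                return r->end - r->start;
        return r->end + RING_SLOTS - r->start;
}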
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
John Fastabend [Tue, 16 Oct 2018 17:36:01 +0000 (10:36 -0700)]
bpf: sockmap, fix skmsg recvmsg handler to track size correctly
When converting sockmap to the new generic skmsg data structures we
missed that the recvmsg handler did not correctly use sg.size and
instead was using the individual element lengths. The result is that
if a sock is closed with outstanding data we omit the call to
sk_mem_uncharge() and can get the warning below.
[ 66.728282] WARNING: CPU: 6 PID: 5783 at net/core/stream.c:206 sk_stream_kill_queues+0x1fa/0x210
To fix this, correct the redirect handler to transfer the size along
with the scatterlist and also decrement the size from the recvmsg
handler. Now when a sock is closed the remaining 'size' will be
decremented with sk_mem_uncharge().
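A rough sketch of the accounting idea (simplified names, not the exact sockmap code):
#include <net/sock.h>
/* 'size' mirrors msg->sg.size: total bytes queued across all elements */
struct msg_acct {
        unsigned int size;
};
/* recvmsg path: subtract what was copied to the user */
static void msg_acct_copied(struct msg_acct *a, unsigned int copied)
{
        a->size -= copied;
}
/* close path: whatever is still queued is uncharged in one place */
static void msg_acct_close(struct sock *sk, struct msg_acct *a)
{
        sk_mem_uncharge(sk, a->size);
        a->size = 0;
}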
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Tue, 16 Oct 2018 22:23:59 +0000 (15:23 -0700)]
Merge branch 'nfp-improve-bpf-offload'
Jakub Kicinski says:
====================
this set adds checks to make sure offload behaviour is correct.
First, when atomic counters are used, we must make sure the map
does not already contain data we did not prepare for holding
atomics.
The second patch double-checks vNIC capabilities for program offload
in case the program is shared by multiple vNICs with different
constraints.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jakub Kicinski [Tue, 16 Oct 2018 22:19:10 +0000 (15:19 -0700)]
nfp: bpf: double check vNIC capabilities after object sharing
The program translation stage checks that the program can be offloaded
to the netdev which was passed during the load (bpf_attr->prog_ifindex).
After program sharing was introduced, however, the netdev on which the
program is loaded can theoretically be different, and therefore
we should recheck the program size and max stack size at load time.
This was found by code inspection; AFAIK all vNICs today have
identical caps.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jakub Kicinski [Tue, 16 Oct 2018 22:19:09 +0000 (15:19 -0700)]
nfp: bpf: protect against mis-initializing atomic counters
Atomic operations on the NFP are currently always in big endian.
The driver keeps track of regions of memory storing atomic values
and byte swaps them accordingly. There are corner cases where
the map values may be initialized before the driver knows they
are used as atomic counters. This can happen either when the
datapath is performing the update and the stack contents are
unknown or when map is updated before the program which will
use it for atomic values is loaded.
To avoid the situation where a user initializes the value to 0 1 2 3
and then, after loading a program which uses the word as an atomic
counter, starts reading 3 2 1 0, only allow atomic counters to be
initialized to endian-neutral values.
For updates from the datapath the stack information may not be
as precise, so just allow initializing such values to 0.
Example code which would break:
struct bpf_map_def SEC("maps") rxcnt = {
        .type = BPF_MAP_TYPE_HASH,
        .key_size = sizeof(__u32),
        .value_size = sizeof(__u64),
        .max_entries = 1,
};
int xdp_prog1()
{
        __u64 nonzeroval = 3;
        __u32 key = 0;
        __u64 *value;
        value = bpf_map_lookup_elem(&rxcnt, &key);
        if (!value)
                bpf_map_update_elem(&rxcnt, &key, &nonzeroval, BPF_ANY);
        else
                __sync_fetch_and_add(value, 1);
        return XDP_PASS;
}
$ offload bpftool map dump
key: 00 00 00 00 value: 00 00 00 03 00 00 00 00
should be:
$ offload bpftool map dump
key: 00 00 00 00 value: 03 00 00 00 00 00 00 00
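The driver-side check boils down to something like the following sketch (illustrative, not the exact nfp code):
#include <linux/swab.h>
#include <linux/types.h>
/* A 64-bit initializer is safe for an atomic counter only if it reads
 * the same in either endianness, i.e. it equals its own byte-swapped
 * image; 0 trivially qualifies.
 */
static bool word_is_endian_neutral(u64 val)
{
        return val == swab64(val);
}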
Reported-by: David Beckett <david.beckett@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrey Ignatov [Tue, 16 Oct 2018 05:50:34 +0000 (22:50 -0700)]
libbpf: Per-symbol visibility for DSO
Make global symbols in libbpf DSO hidden by default with
-fvisibility=hidden and export symbols that are part of ABI explicitly
with __attribute__((visibility("default"))).
This is common practice that should prevent accidentally exporting
a symbol that is not supposed to be part of the ABI, which, in turn,
improves both the libbpf developer and user experience. See [1] for
more details.
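The pattern looks roughly as follows (a sketch; the two function names are hypothetical, only the visibility handling reflects the approach):
/* build with -fvisibility=hidden so symbols are hidden by default */
#define LIBBPF_API __attribute__((visibility("default")))
LIBBPF_API int some_public_abi_function(void);  /* exported from the DSO */
int some_internal_helper(void);                 /* stays hidden */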
Export control becomes more important since more and more projects use
libbpf.
The patch doesn't export a bunch of netlink related functions since,
as agreed in [2], they'll be reworked. That doesn't break bpftool since
bpftool links libbpf statically.
[1] https://www.akkadia.org/drepper/dsohowto.pdf (2.2 Export Control)
[2] https://www.mail-archive.com/netdev@vger.kernel.org/msg251434.html
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Tue, 16 Oct 2018 13:59:36 +0000 (15:59 +0200)]
bpf, tls: add tls header to tools infrastructure
Andrey reported a build error for the BPF kselftest suite when compiled on
a machine which does not have tls related header bits installed natively:
test_sockmap.c:120:23: fatal error: linux/tls.h: No such file or directory
#include <linux/tls.h>
^
compilation terminated.
Fix it by adding the header to the tools include infrastructure and
adding definitions such as SOL_TLS that could potentially be missing.
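The fallback definitions are along these lines (values taken from the kernel uapi headers; shown as an illustration rather than the exact hunk):
#ifndef SOL_TLS
#define SOL_TLS 282             /* include/linux/socket.h */
#endif
#ifndef TCP_ULP
#define TCP_ULP 31              /* include/uapi/linux/tcp.h */
#endif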
Fixes:
e9dd904708c4 ("bpf: add tls support for testing in test_sockmap")
Reported-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
David S. Miller [Tue, 16 Oct 2018 07:14:18 +0000 (00:14 -0700)]
Merge branch 'net-Kernel-side-filtering-for-route-dumps'
David Ahern says:
====================
net: Kernel side filtering for route dumps
Implement kernel side filtering of route dumps by protocol (e.g., which
routing daemon installed the route), route type (e.g., unicast), table
id and nexthop device.
iproute2 has been doing this filtering in userspace for years; pushing
the filters to the kernel side reduces the amount of data the kernel
sends and reduces wasted cycles on both sides processing unwanted data.
These initial options provide a huge improvement for efficiently
examining routes on large scale systems.
v2
- better handling of requests for a specific table. Rather than walking
the hash of all tables, lookup the specific table and dump it
- refactor mr_rtm_dumproute moving the loop over the table into a
helper that can be invoked directly
- add hook to return NLM_F_DUMP_FILTERED in DONE message to ensure
it is returned even when the dump returns nothing
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Tue, 16 Oct 2018 01:56:51 +0000 (18:56 -0700)]
net/ipv4: Bail early if user only wants prefix entries
Unlike IPv6, IPv4 does not have routes marked with RTF_PREFIX_RT. If the
flag is set in the dump request, just return.
In the process of this change, move the CLONE check to use the new
filter flags.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Tue, 16 Oct 2018 01:56:50 +0000 (18:56 -0700)]
net/ipv6: Bail early if user only wants cloned entries
Similar to IPv4, the IPv6 FIB no longer contains cloned routes. If a
user requests a route dump for only cloned entries, there is no sense
in walking the FIB and returning everything.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Tue, 16 Oct 2018 01:56:49 +0000 (18:56 -0700)]
net/mpls: Handle kernel side filtering of route dumps
Update the dump request parsing in MPLS for the non-INET case to
enable kernel side filtering. If INET is disabled the only filters
that make sense for MPLS are protocol and nexthop device.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Tue, 16 Oct 2018 01:56:48 +0000 (18:56 -0700)]
net: Enable kernel side filtering of route dumps
Update parsing of route dump request to enable kernel side filtering.
Allow filtering results by protocol (e.g., which routing daemon installed
the route), route type (e.g., unicast), table id and nexthop device. These
amount to the low hanging fruit, yet a huge improvement, for dumping
routes.
ip_valid_fib_dump_req is called with RTNL held, so __dev_get_by_index can
be used to look up the device index without taking a reference. From
there filter->dev is only used during dump loops with the lock still held.
Set NLM_F_DUMP_FILTERED in the answer_flags so the user knows the results
have been filtered should no entries be returned.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Tue, 16 Oct 2018 01:56:47 +0000 (18:56 -0700)]
net: Plumb support for filtering ipv4 and ipv6 multicast route dumps
Implement kernel side filtering of routes by egress device index and
table id. If the table id is given in the filter, lookup table and
call mr_table_dump directly for it.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Tue, 16 Oct 2018 01:56:46 +0000 (18:56 -0700)]
ipmr: Refactor mr_rtm_dumproute
Move per-table loops from mr_rtm_dumproute to mr_table_dump and export
mr_table_dump for dumps by specific table id.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Tue, 16 Oct 2018 01:56:45 +0000 (18:56 -0700)]
net/mpls: Plumb support for filtering route dumps
Implement kernel side filtering of routes by egress device index and
protocol. MPLS uses only a single table and route type.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Tue, 16 Oct 2018 01:56:44 +0000 (18:56 -0700)]
net/ipv6: Plumb support for filtering route dumps
Implement kernel side filtering of routes by table id, egress device
index, protocol, and route type. If the table id is given in the filter,
lookup the table and call fib6_dump_table directly for it.
Move the existing route flags check for prefix only routes to the new
filter.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Tue, 16 Oct 2018 01:56:43 +0000 (18:56 -0700)]
net/ipv4: Plumb support for filtering route dumps
Implement kernel side filtering of routes by table id, egress device index,
protocol and route type. If the table id is given in the filter, lookup the
table and call fib_table_dump directly for it.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Tue, 16 Oct 2018 01:56:42 +0000 (18:56 -0700)]
net: Add struct for fib dump filter
Add struct fib_dump_filter for options on limiting which routes are
returned in a dump request. The current list is table id, protocol,
route type, rtm_flags and nexthop device index. struct net is needed
to look up the net_device from the index.
Declare the filter for each route dump handler and plumb the new
arguments from dump handlers to ip_valid_fib_dump_req.
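A sketch of such a filter struct (field names approximate, based on the commit text rather than the final header):
struct fib_dump_filter {
        u32                     table_id;       /* dump only this table */
        unsigned char           protocol;       /* e.g. RTPROT_* */
        unsigned char           rt_type;        /* e.g. RTN_UNICAST */
        unsigned int            flags;          /* rtm_flags to match */
        struct net_device       *dev;           /* nexthop device, looked up via struct net */
};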
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Tue, 16 Oct 2018 01:56:41 +0000 (18:56 -0700)]
netlink: Add answer_flags to netlink_callback
With dump filtering we need a way to ensure the NLM_F_DUMP_FILTERED
flag is set on a message back to the user if the data returned is
influenced by some input attributes. Normally this can be done as
messages are added to the skb, but if the filter results in no data
being returned, the user could be confused as to why.
This patch adds answer_flags to the netlink_callback, allowing dump
handlers to set NLM_F_DUMP_FILTERED, at a minimum in the
NLMSG_DONE message, ensuring the flag gets back to the user.
The netlink_callback space is initialized to 0 via a memset in
__netlink_dump_start, so init of the new answer_flags is covered.
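A dump handler then does something along these lines (sketch, assuming the field is named answer_flags as described above):
static int example_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
        /* ... walk the table, skipping entries that fail the filter ... */
        /* make sure NLMSG_DONE carries NLM_F_DUMP_FILTERED even if no
         * entries were emitted
         */
        cb->answer_flags = NLM_F_DUMP_FILTERED;
        return skb->len;
}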
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 16 Oct 2018 06:21:07 +0000 (23:21 -0700)]
Merge git://git./linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:
====================
pull-request: bpf-next 2018-10-16
The following pull-request contains BPF updates for your *net-next* tree.
The main changes are:
1) Convert BPF sockmap and kTLS to both use a new sk_msg API and enable
sk_msg BPF integration for the latter, from Daniel and John.
2) Enable BPF syscall side to indicate for maps that they do not support
a map lookup operation as opposed to just missing key, from Prashant.
3) Add bpftool map create command which after map creation pins the
map into bpf fs for further processing, from Jakub.
4) Add bpftool support for attaching programs to maps allowing sock_map
and sock_hash to be used from bpftool, from John.
5) Improve syscall BPF map update/delete path for map-in-map types to
wait a RCU grace period for pending references to complete, from Daniel.
6) Couple of follow-up fixes for the BPF socket lookup to get it
enabled also when IPv6 is compiled as a module, from Joe.
7) Fix a generic-XDP bug to handle the case when the Ethernet header
was mangled and thus update skb's protocol and data, from Jesper.
8) Add a missing BTF header length check between header copies from
user space, from Wenwen.
9) Minor fixups in libbpf to use __u32 instead of u32 types and include
proper perf_event.h uapi header instead of perf internal one, from Yonghong.
10) Allow to pass user-defined flags through EXTRA_CFLAGS and EXTRA_LDFLAGS
to bpftool's build, from Jiri.
11) BPF kselftest tweaks to add LWTUNNEL to config fragment and to install
with_addr.sh script from flow dissector selftest, from Anders.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Mon, 15 Oct 2018 19:25:13 +0000 (21:25 +0200)]
net: phy: merge phy_start_aneg and phy_start_aneg_priv
After commit
9f2959b6b52d ("net: phy: improve handling delayed work")
the sync parameter isn't needed any longer in phy_start_aneg_priv().
This allows merging phy_start_aneg() and phy_start_aneg_priv().
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Haiyang Zhang [Mon, 15 Oct 2018 19:06:15 +0000 (19:06 +0000)]
hv_netvsc: fix vf serial matching with pci slot info
The VF device's serial number is saved as a string in PCI slot's
kobj name, not the slot->number. This patch corrects the netvsc
driver, so the VF device can be successfully paired with synthetic
NIC.
Fixes:
00d7ddba1143 ("hv_netvsc: pair VF based on serial number")
Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 16 Oct 2018 05:56:42 +0000 (22:56 -0700)]
Merge branch 'tcp-second-round-for-EDT-conversion'
Eric Dumazet says:
====================
tcp: second round for EDT conversion
First round of EDT patches left the TCP stack in a non-optimal state.
- High speed flows suffered from loss of performance, addressed
by the first patch of this series.
- Second patch brings pacing to the current state of networking,
since we now reach ~100 Gbit on a single TCP flow.
- Third patch implements a mitigation for scheduling delays,
like the one we did in sch_fq in the past.
- Fourth patch removes one special case in sch_fq for ACK packets.
- Fifth patch removes a serious performance cost for TCP internal
pacing. We should set up the high resolution timer only if
really needed.
- Sixth patch fixes a typo in BBR.
- Last patch is one minor change in cdg congestion control.
Neal Cardwell also has a patch series fixing BBR after
EDT adoption.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 15 Oct 2018 16:37:58 +0000 (09:37 -0700)]
tcp: cdg: use tcp high resolution clock cache
We store in the tcp socket a cache of the most recent high resolution
clock sample; there is no need to call local_clock() again, since
this cache is good enough.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Neal Cardwell [Mon, 15 Oct 2018 16:37:57 +0000 (09:37 -0700)]
tcp_bbr: fix typo in bbr_pacing_margin_percent
There was a typo in this parameter name.
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 15 Oct 2018 16:37:56 +0000 (09:37 -0700)]
tcp: optimize tcp internal pacing
When TCP implements its own pacing (when no fq packet scheduler is used),
it arms a high resolution timer after a packet is sent.
But in many cases (like TCP_RR kind of workloads), this high resolution
timer expires before the application attempts to write the following
packet. This overhead also happens when the flow is ACK clocked and
cwnd limited instead of being limited by the pacing rate.
This leads to extra overhead (a high number of IRQs).
Now that tcp_wstamp_ns is reserved for the pacing timer only
(after commit "tcp: do not change tcp_wstamp_ns in tcp_mstamp_refresh"),
we can set up the timer only when a packet is about to be sent,
and only if tcp_wstamp_ns is in the future.
This leads to a ~10% performance increase in TCP_RR workloads.
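Conceptually the change is the following (sketch, not the exact tcp_output.c diff):
static void tcp_maybe_arm_pacing_timer(struct sock *sk)
{
        struct tcp_sock *tp = tcp_sk(sk);
        /* arm the hrtimer only when the pacing clock is in the future */
        if (tp->tcp_wstamp_ns > tcp_clock_ns())
                hrtimer_start(&tp->pacing_timer,
                              ns_to_ktime(tp->tcp_wstamp_ns),
                              HRTIMER_MODE_ABS_PINNED_SOFT);
        /* otherwise the packet is not pacing limited: no timer, no IRQ */
}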
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 15 Oct 2018 16:37:55 +0000 (09:37 -0700)]
net_sched: sch_fq: no longer use skb_is_tcp_pure_ack()
With the new EDT model, sch_fq no longer has to special
case TCP pure acks, since their skb->tstamp will allow them
to be sent without pacing delay.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 15 Oct 2018 16:37:54 +0000 (09:37 -0700)]
tcp: mitigate scheduling jitter in EDT pacing model
In commit
fefa569a9d4b ("net_sched: sch_fq: account for schedule/timers
drifts") we added a mitigation for scheduling jitter in fq packet scheduler.
This patch does the same in the TCP stack, now that it uses the EDT model.
Note that this mitigation is valid for both external (fq packet scheduler)
and internal TCP pacing.
This uses the same strategy as the above commit, allowing
a time credit of half the packet currently sent.
Consider the following case:
An skb is sent after an idle period of 300 usec.
The air-time (skb->len/pacing_rate) is 500 usec.
Instead of setting the pacing timer to now+500 usec,
it will use now+min(500/2, 300) -> now+250usec
This is like having a token bucket with a depth of half
an skb.
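A compressed sketch of the credit computation (illustrative helper, not the exact tcp code); for the example above it gives 500 - min(250, 300) = 250 usec:
#include <linux/kernel.h>
#include <linux/types.h>
static u64 edt_next_gap_ns(u64 air_time_ns, u64 idle_credit_ns)
{
        /* shorten the pacing gap by up to half this skb's air-time,
         * bounded by how long we were actually idle
         */
        return air_time_ns - min(air_time_ns / 2, idle_credit_ns);
}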
Tested:
tc qdisc replace dev eth0 root pfifo_fast
Before :
netperf -P0 -H remote -- -q 1000000000 # 8000Mbit
540000 262144 262144 10.00 7710.43
After :
netperf -P0 -H remote -- -q 1000000000 # 8000 Mbit
540000 262144 262144 10.00 7999.75 # Much closer to 8000Mbit target
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 15 Oct 2018 16:37:53 +0000 (09:37 -0700)]
net: extend sk_pacing_rate to unsigned long
sk_pacing_rate was introduced as a u32 field in 2013,
effectively limiting per flow pacing to 34Gbit.
We believe it is time to allow TCP to pace high speed flows
on 64bit hosts, as we now can reach 100Gbit on one TCP flow.
This patch adds no cost for 32bit kernels.
The tcpi_pacing_rate and tcpi_max_pacing_rate were already
exported as 64bit, so the iproute2 ss command requires no changes.
Unfortunately the SO_MAX_PACING_RATE socket option will stay
32bit and we will need to add a new option to let applications
control high pacing rates.
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 1787144 10.246.9.76:49992 10.246.9.77:36741
timer:(on,003ms,0) ino:91863 sk:2 <->
skmem:(r0,rb540000,t66440,tb2363904,f605944,w1822984,o0,bl0,d0)
ts sack bbr wscale:8,8 rto:201 rtt:0.057/0.006 mss:1448
rcvmss:536 advmss:1448
cwnd:138 ssthresh:178 bytes_acked:256699822585 segs_out:177279177
segs_in:3916318 data_segs_out:177279175
bbr:(bw:31276.8Mbps,mrtt:0,pacing_gain:1.25,cwnd_gain:2)
send 28045.5Mbps lastrcv:73333
pacing_rate 38705.0Mbps delivery_rate 22997.6Mbps
busy:73333ms unacked:135 retrans:0/157 rcv_space:14480
notsent:2085120 minrtt:0.013
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 15 Oct 2018 16:37:52 +0000 (09:37 -0700)]
tcp: do not change tcp_wstamp_ns in tcp_mstamp_refresh
In the EDT design, I made the mistake of using tcp_wstamp_ns
to store both the last tcp_clock_ns() sample and the
pacing virtual timer.
This causes major regressions at high speed flows.
Introduce tcp_clock_cache to store last tcp_clock_ns().
This is needed because some arches have slow high-resolution
kernel time service.
tcp_wstamp_ns is only updated when a packet is sent.
Note that we can remove tcp_mstamp in the future since
tcp_mstamp is essentially tcp_clock_cache/1000, so the
apparent socket size increase is temporary.
Fixes:
9799ccb0e984 ("tcp: add tcp_wstamp_ns socket field")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Li RongQing [Mon, 15 Oct 2018 11:00:31 +0000 (19:00 +0800)]
net: bridge: fix a possible memory leak in __vlan_add
After the addition of per-port vlan stats, the vlan stats should be
released when we fail to add a vlan.
Fixes:
9163a0fc1f0c0 ("net: bridge: add support for per-port vlan stats")
CC: bridge@lists.linux-foundation.org
cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
CC: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: Zhang Yu <zhangyu31@baidu.com>
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Howells [Mon, 15 Oct 2018 10:31:03 +0000 (11:31 +0100)]
rxrpc: Add /proc/net/rxrpc/peers to display peer list
Add /proc/net/rxrpc/peers to display the list of peers currently active.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Wei Yongjun [Mon, 15 Oct 2018 03:07:16 +0000 (03:07 +0000)]
fore200e: fix missing unlock on error in bsq_audit()
Add the missing unlock before return from function bsq_audit()
in the error handling case.
Fixes:
1d9d8be91788 ("fore200e: check for dma mapping failures")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 16 Oct 2018 05:44:33 +0000 (22:44 -0700)]
Merge branch 'bnxt_en-Add-support-for-new-57500-chips'
Michael Chan says:
====================
bnxt_en: Add support for new 57500 chips.
This patch-set is larger than normal because I wanted a complete series
to add basic support for the new 57500 chips. The new chips have the
following main differences compared to legacy chips:
1. Requires the PF driver to allocate DMA context memory as a backing
store.
2. New NQ (notification queue) for interrupt events.
3. One or more CP rings can be associated with an NQ.
4. 64-bit doorbells.
Most other structures and firmware APIs are compatible with legacy
devices with some exceptions. For example, ring groups are no longer
used and RSS table format has changed.
The patch-set includes the usual firmware spec. update, some refactoring
and restructuring, and adding the new code to add basic support for the
new class of devices.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:59 +0000 (07:02 -0400)]
bnxt_en: Add PCI ID for BCM57508 device.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:58 +0000 (07:02 -0400)]
bnxt_en: Add new NAPI poll function for 57500 chips.
Add a new poll function that polls for NQ events. If the NQ event is
a CQ notification, we locate the CP ring from the cq_handle and call
__bnxt_poll_work() to handle RX/TX events on the CP ring.
Add a new has_more_work field in struct bnxt_cp_ring_info to indicate
budget has been reached. __bnxt_poll_cqs_done() is called to update or
ARM the CP rings depending on whether the budget has been reached. If budget
has been reached, the next bnxt_poll_p5() call will continue to poll
from the CQ rings directly. Otherwise, the NQ will be ARMed for the
next IRQ.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:57 +0000 (07:02 -0400)]
bnxt_en: Refactor bnxt_poll_work().
Separate the CP ring polling logic in bnxt_poll_work() into 2 separate
functions __bnxt_poll_work() and __bnxt_poll_work_done(). Since the logic
is separated, we need to add tx_pkts and events fields to struct bnxt_napi
to keep track of the events to handle between the 2 functions. We also
add had_work_done field to struct bnxt_cp_ring_info to indicate whether
some work was performed on the CP ring.
This is needed to better support the 57500 chips. We need to poll up to
2 separate CP rings before we update or ARM the CP rings on the 57500 chips.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:56 +0000 (07:02 -0400)]
bnxt_en: Add coalescing setup for 57500 chips.
On legacy chips, the CP ring may be shared between RX and TX and so only
setup the RX coalescing parameters in such a case. On 57500 chips, we
always have a dedicated CP ring for TX so we can always set up the
TX coalescing parameters in bnxt_hwrm_set_coal().
Also, the min_timer coalescing parameter applies to the NQ on the new
chips and a separate firmware call needs to be made to set it up.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:55 +0000 (07:02 -0400)]
bnxt_en: Use bnxt_cp_ring_info struct pointer as parameter for RX path.
In the RX code path, we currently use the bnxt_napi struct pointer to
identify the associated RX/CP rings. Change it to use the struct
bnxt_cp_ring_info pointer instead since there are now up to 2
CP rings per MSIX.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:54 +0000 (07:02 -0400)]
bnxt_en: Add RSS support for 57500 chips.
RSS context allocation and RSS indirection table setup are very different
on the new chip. Refactor bnxt_setup_vnic() to call 2 different functions
to set up RSS for the vnic based on chip type. On the new chip, the
number of RSS contexts and the indirection table size depends on the
number of RX rings. Each indirection table entry is also different
on the new chip since ring groups are no longer used.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:53 +0000 (07:02 -0400)]
bnxt_en: Increase RSS context array count and skip ring groups on 57500 chips.
On the new 57500 chips, we need to allocate one RSS context for every
64 RX rings. In previous chips, only one RSS context per vnic is
required regardless of the number of RX rings. So increase the max
RSS context array count to 8.
Hardware ring groups are not used on the new chips. Note that the
software ring group structure is still maintained in the driver to
keep track of the rings associated with the vnic.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:52 +0000 (07:02 -0400)]
bnxt_en: Allocate/Free CP rings for 57500 series chips.
On the new 57500 chips, we allocate/free one CP ring for each RX ring or
TX ring separately. Using separate CP rings for RX/TX is an improvement
as TX events will no longer be stuck behind RX events.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:51 +0000 (07:02 -0400)]
bnxt_en: Modify bnxt_ring_alloc_send_msg() to support 57500 chips.
Firmware ring allocation semantics are slightly different for most
ring types on 57500 chips. Allocation/deallocation for NQ rings are
also added for the new chips.
A CP ring handle is also added so that from the NQ interrupt event,
we can locate the CP ring.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:50 +0000 (07:02 -0400)]
bnxt_en: Add helper functions to get firmware CP ring ID.
On the new 57500 chips, getting the CP ring ID associated with
an RX ring or TX ring is different than before. On the legacy chips,
we find the associated ring group and look up the CP ring ID. On the
57500 chips, each RX ring and TX ring has a dedicated CP ring even if
they share the MSIX. Use these helper functions at appropriate places
to get the CP ring ID.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:49 +0000 (07:02 -0400)]
bnxt_en: Allocate completion ring structures for 57500 series chips.
On 57500 chips, the original bnxt_cp_ring_info struct now refers to the
NQ. bp->cp_nr_rings refers to the number of NQs on 57500 chips. There
are now 2 pointers for the CP rings associated with RX and TX rings.
Modify bnxt_alloc_cp_rings() and bnxt_free_cp_rings() accordingly.
With multiple CP rings per NAPI, we need to add a pointer in
bnxt_cp_ring_info struct to point back to the bnxt_napi struct.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:48 +0000 (07:02 -0400)]
bnxt_en: Modify the ring reservation functions for 57500 series chips.
The ring reservation functions have to be modified for P5 chips in the
following ways:
- bnxt_cp_ring_info structs map to internal NQs as well as CP rings.
- Ring groups are not used.
- 1 CP ring must be available for each RX or TX ring.
- Number of RSS contexts to reserve is 1 per 64 RX rings.
- RFS currently not supported.
Also, RX AGG rings are only used for jumbo frames, so we need to
unconditionally call bnxt_reserve_rings() in __bnxt_open_nic()
to see if we need to reserve AGG rings in case MTU has changed.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:47 +0000 (07:02 -0400)]
bnxt_en: Adjust MSIX and ring groups for 57500 series chips.
Store the maximum MSIX capability in PCIe config. space earlier. When
we call firmware to query capability, we need to compare the PCIe
MSIX max count with the firmware count and use the smaller one as
the MSIX count for 57500 (P5) chips.
The new chips don't use ring groups. But previous chips do and
the existing logic limits the available rings based on resource
calculations including ring groups. Setting the max ring groups to
the max rx rings will work on the new chips without changing the
existing logic.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:46 +0000 (07:02 -0400)]
bnxt_en: Re-structure doorbells.
The 57500 series chips have a new 64-bit doorbell format. Use a new
bnxt_db_info structure to unify the new and the old 32-bit doorbells.
Add a new bnxt_set_db() function to set up the doorbell addresses and
doorbell keys ahead of time. Modify and introduce new doorbell
helpers to help abstract and unify the old and new doorbells.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:45 +0000 (07:02 -0400)]
bnxt_en: Add 57500 new chip ID and basic structures.
57500 series is a new chip class (P5) that requires some driver changes
in the next several patches. This adds basic chip ID, doorbells, and
the notification queue (NQ) structures. Each MSIX is associated with an
NQ instead of a CP ring in legacy chips. Each NQ has up to 2 associated
CP rings for RX and TX. The same bnxt_cp_ring_info struct will be used
for the NQ.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:44 +0000 (07:02 -0400)]
bnxt_en: Configure context memory on new devices.
Call firmware to configure the DMA addresses of all context memory
pages on new devices requiring context memory.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:43 +0000 (07:02 -0400)]
bnxt_en: Check context memory requirements from firmware.
The new devices require host context memory as a backing store. Call
firmware to check for context memory requirements and store the
parameters. Allocate host pages accordingly.
We also need to move the call bnxt_hwrm_queue_qportcfg() earlier
so that all the supported hardware queues and the IDs are known
before checking and allocating context memory.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:42 +0000 (07:02 -0400)]
bnxt_en: Add new flags to setup new page table PTE bits on newer devices.
Newer chips require the PTU_PTE_VALID bit to be set for every page
table entry for context memory and rings. Additional bits are also
required for page table entries for all rings. Add a flags field to
bnxt_ring_mem_info struct to specify these additional bits to be used
when setting up the page tables as needed.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:41 +0000 (07:02 -0400)]
bnxt_en: Refactor bnxt_ring_struct.
Move the DMA page table and vmem fields in bnxt_ring_struct to a new
bnxt_ring_mem_info struct. This will allow context memory management
for a new device to re-use some of the existing infrastructure.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:40 +0000 (07:02 -0400)]
bnxt_en: Update interrupt coalescing logic.
The new firmware spec. allows interrupt coalescing parameters, such as
maximums, timer units, and supported features, to be queried. Update
the driver to make use of the new call to query these parameters
and provide the legacy defaults if the call is not available.
Replace the hard-coded values with these parameters.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:39 +0000 (07:02 -0400)]
bnxt_en: Add maximum extended request length fw message support.
Support the max_ext_req_len field from the HWRM_VER_GET_RESPONSE.
If this field is valid and greater than the mailbox size, use the
short command format to send firmware messages greater than the
mailbox size. Newer devices use this method to send larger messages
to the firmware.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:38 +0000 (07:02 -0400)]
bnxt_en: Add additional extended port statistics.
Latest firmware spec. has some additional rx extended port stats and new
tx extended port stats added. We now need to check the size of the
returned rx and tx extended stats and determine how many counters are
valid. New counters added include CoS byte and packet counts for rx
and tx.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Oct 2018 11:02:37 +0000 (07:02 -0400)]
bnxt_en: Update firmware interface spec. to 1.10.0.3.
Among the new changes are trusted VF support, 200Gbps support, and new
API to dump ring information on the new chips.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 16 Oct 2018 05:37:28 +0000 (22:37 -0700)]
Merge branch 'selftests-pmtu-Add-test-choice-and-captures'
Stefano Brivio says:
====================
selftests: pmtu: Add test choice and captures
This series adds a couple of features useful for debugging: 1/2
allows selecting single tests and 2/2 adds optional traffic
captures.
Semantics for current invocation of test script are preserved.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Stefano Brivio [Fri, 12 Oct 2018 21:54:14 +0000 (23:54 +0200)]
selftests: pmtu: Add optional traffic captures for single tests
If --trace is passed as an option and tcpdump is available,
capture traffic for all relevant interfaces to per-test pcap
files named <test>_<interface>.pcap.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Stefano Brivio [Fri, 12 Oct 2018 21:54:13 +0000 (23:54 +0200)]
selftests: pmtu: Allow selection of single tests
As number of tests is growing, it's quite convenient to allow
single tests to be run.
Display usage when the script is run with any invalid argument,
keep existing semantics when no arguments are passed so that
automated runs won't break.
Instead of just looping on the list of requested tests, if any,
check first that they exist, and go through them in a nested
loop to keep the existing way to display test descriptions.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Fri, 12 Oct 2018 21:30:52 +0000 (23:30 +0200)]
r8169: remove unneeded call to netif_stop_queue in rtl8169_net_suspend
netif_device_detach() stops all tx queues already, so we don't need
this call.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Fri, 12 Oct 2018 21:23:57 +0000 (23:23 +0200)]
r8169: simplify rtl8169_set_magic_reg
Simplify this function, no functional change intended.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Arnd Bergmann [Fri, 12 Oct 2018 19:52:22 +0000 (21:52 +0200)]
octeontx2-af: remove unused cgx_fwi_link_change
The newly added driver causes a warning about a function that is
not used anywhere:
drivers/net/ethernet/marvell/octeontx2/af/cgx.c:320:12: error: 'cgx_fwi_link_change' defined but not used [-Werror=unused-function]
Remove it for now, until a user gets added. If we want to use this
function from another module, we also need a declaration in a header
file, which is currently missing, so it would have to change anyway.
Fixes:
1463f382f58d ("octeontx2-af: Add support for CGX link management")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ryan C Goodfellow [Fri, 12 Oct 2018 18:09:01 +0000 (11:09 -0700)]
nfp: devlink port split support for 1x100G CXP NIC
This commit makes it possible to use devlink to split the 100G CXP
Netronome into two 40G interfaces. Currently when you ask for 2
interfaces, the math in src/nfp_devlink.c:nfp_devlink_port_split
calculates that you want 5 lanes per port because for some reason
eth_port.port_lanes=10 (shouldn't this be 12 for CXP?). What we really
want when asking for 2 breakout interfaces is 4 lanes per port. This
commit makes that happen by calculating based on 8 lanes if 10 are
present.
Signed-off-by: Ryan C Goodfellow <rgoodfel@isi.edu>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Greg Weeks <greg.weeks@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 16 Oct 2018 05:23:20 +0000 (22:23 -0700)]
Merge branch 'dpaa2-eth-code-cleanup'
Ioana Ciornei says:
====================
dpaa2-eth: code cleanup
There are no functional changes in this patch set, only some cleanup
changes such as: unused parameters, uninitialized variables and
unnecessary Kconfig dependencies.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Radulescu [Fri, 12 Oct 2018 16:27:40 +0000 (16:27 +0000)]
dpaa2-eth: remove unused FD field
According to the hardware ArchDef, the PTV1 field in FD[CTRL]
is ignored by WRIOP, so setting it for Tx FDs is pointless.
Remove all references to it from the code.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Fri, 12 Oct 2018 16:27:38 +0000 (16:27 +0000)]
dpaa2-eth: mark unused parameter in dpaa2_eth_tx_conf
The ch parameter is never used in the dpaa2_eth_tx_conf function but
since its prototype must match the type defined in the consume field of
struct dpaa2_eth_fq, just mark it as __always_unused.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Fri, 12 Oct 2018 16:27:35 +0000 (16:27 +0000)]
dpaa2-eth: remove unused priv parameter
The priv parameter is never used in the build_linear_skb and
drain_channel functions. Remove it from the function definitions.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Fri, 12 Oct 2018 16:27:33 +0000 (16:27 +0000)]
dpaa2-eth: fix uninitialized variable warnings
All 3 cases of possible uninitialized variables are false
positives since they are used only as output parameters.
Nonetheless, fix the warnings.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Fri, 12 Oct 2018 16:27:29 +0000 (16:27 +0000)]
dpaa2-eth: make dpaa2_eth_set_dist_key static
The dpaa2_eth_set_dist_key function is only used in a single file.
Make it static.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Radulescu [Fri, 12 Oct 2018 16:27:25 +0000 (16:27 +0000)]
dpaa2-eth: Fix Kconfig dependencies
Both ARCH_LAYERSCAPE and COMPILE_TEST dependencies are already implied
through the FSL_MC_BUS dep, so there's no need to state it explicitly.
Also, the fsl-mc bus depends on COMPILE_TEST only for some
architectures (arm, arm64, ppc, x86), so it's not correct to
claim build support unconditionally.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ivan Khoronzhuk [Fri, 12 Oct 2018 16:06:29 +0000 (19:06 +0300)]
net: ethernet: ti: cpsw: use for mcast entries only host port
In dual-emac mode the cpsw driver sends directed packets, which means
that packets go to the directed port, but an ALE lookup is performed
only to determine untagged egress. That means there is no need to add
the port bit to the ALE mcast entry mask on the tx side; basically the
ALE entry for port identification is needed only on the rx side.
So, add only the host port in dual_emac mode, since directed
transmission is used and no additional port is needed. For single port
boards and switch mode all ports are used, as usual, so no changes for
them. It also simplifies further changes.
In other words, mcast entries for dual-emac should behave exactly
like unicast. This can also help avoid leaking packets between ports
with the same vlan at the h/w level if the ports become members of the
same vid.
So now, for instance, if mcast address 33:33:00:00:00:01 is added then
entries in ALE table:
vid = 1, addr = 33:33:00:00:00:01, port_mask = 0x1
vid = 2, addr = 33:33:00:00:00:01, port_mask = 0x1
Instead of:
vid = 1, addr = 33:33:00:00:00:01, port_mask = 0x3
vid = 2, addr = 33:33:00:00:00:01, port_mask = 0x5
With the same considerations, set only the host port for unregistered
mcast in dual-emac mode when IFF_ALLMULTI is set, exactly as is done
in cpsw_ale_set_allmulti().
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 16 Oct 2018 05:21:28 +0000 (22:21 -0700)]
Merge branch 'net-ethernet-ti-cpsw-fix-mcast-packet-lost'
Ivan Khoronzhuk says:
====================
net: ethernet: ti: cpsw fix mcast packet lost
The patchset omits redundant refreshes of the mcast address table and
prevents mcast packet loss.
Based on net-next/master
tested on am572x evm
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ivan Khoronzhuk [Fri, 12 Oct 2018 15:28:15 +0000 (18:28 +0300)]
net: ethernet: ti: cpsw: fix lost of mcast packets while rx_mode update
Whenever the kernel or user decides to trigger an rx mode update, it
clears every multicast entry from the forwarding table and re-adds it
some time later. This window can be enough to drop incoming multicast
packets. That's why we now clear only stale multicast entries and
update or add new ones afterwards.
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ivan Khoronzhuk [Fri, 12 Oct 2018 15:28:14 +0000 (18:28 +0300)]
net: ethernet: ti: cpsw_ale: use const for API having pointer on mac address
This allows the function to be used in callbacks with the same const
qualifier on the mac address, for further changes.
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 16 Oct 2018 05:06:38 +0000 (22:06 -0700)]
Merge branch 'net-phy-improve-and-simplify-state-machine'
Heiner Kallweit says:
====================
net: phy: improve and simplify state machine
Improve / simplify handling of states PHY_RUNNING and PHY_RESUMING in
phylib state machine.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Thu, 11 Oct 2018 20:37:38 +0000 (22:37 +0200)]
net: phy: simplify handling of PHY_RESUMING in state machine
Simplify code for handling state PHY_RESUMING, no functional change
intended.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Thu, 11 Oct 2018 20:36:56 +0000 (22:36 +0200)]
net: phy: improve handling of PHY_RUNNING in state machine
Handling of state PHY_RUNNING seems to be more complex than it needs
to be. If not polling, then we don't have to do anything, we'll
receive an interrupt and go to state PHY_CHANGELINK once the link
goes down. If polling and the link is down, we don't have to go the
extra mile over PHY_CHANGELINK and call phy_read_status() again
but can set state PHY_NOLINK directly.
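A sketch of the simplified polling handling (illustrative, not the exact phylib diff):
static void handle_phy_running(struct phy_device *phydev)
{
        if (!phy_polling_mode(phydev))
                return;         /* an interrupt will report the link change */
        if (phy_read_status(phydev))
                return;
        if (!phydev->link) {
                phydev->state = PHY_NOLINK;
                phy_link_down(phydev, true);
        }
}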
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Roopa Prabhu [Thu, 11 Oct 2018 19:35:13 +0000 (12:35 -0700)]
vxlan: support NTF_USE refresh of fdb entries
This makes use of NTF_USE in vxlan driver consistent
with bridge driver.
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Justin.Lee1@Dell.com [Thu, 11 Oct 2018 18:07:37 +0000 (18:07 +0000)]
net/ncsi: Extend NC-SI Netlink interface to allow user space to send NC-SI command
The new command (NCSI_CMD_SEND_CMD) is added to allow a user space
application to send NC-SI commands to the network card.
Also, add a new attribute (NCSI_ATTR_DATA) for transferring request and response.
The work flow is as below.
Request:
User space application
-> Netlink interface (msg)
-> new Netlink handler - ncsi_send_cmd_nl()
-> ncsi_xmit_cmd()
Response:
Response received - ncsi_rcv_rsp()
-> internal response handler - ncsi_rsp_handler_xxx()
-> ncsi_rsp_handler_netlink()
-> ncsi_send_netlink_rsp ()
-> Netlink interface (msg)
-> user space application
Command timeout - ncsi_request_timeout()
-> ncsi_send_netlink_timeout ()
-> Netlink interface (msg with zero data length)
-> user space application
Error:
Error detected
-> ncsi_send_netlink_err ()
-> Netlink interface (err msg)
-> user space application
Signed-off-by: Justin Lee <justin.lee1@dell.com>
Reviewed-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Thu, 11 Oct 2018 17:31:47 +0000 (19:31 +0200)]
net: phy: trigger state machine immediately in phy_start_machine
When starting the state machine there may be work to be done
immediately, e.g. if the initial state is PHY_UP then the state
machine may trigger an autonegotiation. Having said that, I see no need
to wait a second until the state machine is run for the first time.
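In phylib terms the change boils down to the following (sketch, assuming the helper names in use at the time):
void phy_start_machine(struct phy_device *phydev)
{
        /* was: queue_delayed_work(system_power_efficient_wq,
         *                         &phydev->state_queue, HZ);
         */
        phy_trigger_machine(phydev, true);
}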
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 16 Oct 2018 04:58:46 +0000 (21:58 -0700)]
Merge branch 'veth-XDP-stats-improvement'
Toshiaki Makita says:
====================
veth: XDP stats improvement
ndo_xdp_xmit in veth did not update packet counters as described in [1].
Also, current implementation only updates counters on tx side so rx side
events like XDP_DROP were not collected.
This series implements the missing accounting as well as support for
ethtool per-queue stats in veth.
Patch 1: Update drop counter in ndo_xdp_xmit.
Patch 2: Update packet and byte counters for all XDP path, and drop
counter on XDP_DROP.
Patch 3: Support per-queue ethtool stats for XDP counters.
Note that counters are maintained on per-queue basis for XDP but not
otherwise (per-cpu and atomic as before). This is because 1) tx path in
veth is essentially lockless so we cannot update per-queue stats on tx,
and 2) rx path is net core routine (process_backlog) which cannot update
per-queue based stats when XDP is disabled. On the other hand there are
real rxqs and napi handlers for veth XDP, so update per-queue stats on
rx for XDP packets, and use them to calculate tx counters as well,
contrary to the existing non-XDP counters.
[1] https://patchwork.ozlabs.org/cover/953071/#1967449
====================
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
Toshiaki Makita [Thu, 11 Oct 2018 09:36:50 +0000 (18:36 +0900)]
veth: Add ethtool statistics support for XDP
Expose per-queue stats for ethtool -S.
As there are only rx queues, and rx queues are used only when XDP is
used, per-queue counters are only rx XDP ones.
Example:
$ ethtool -S veth0
NIC statistics:
peer_ifindex: 11
rx_queue_0_xdp_packets: 28601434
rx_queue_0_xdp_bytes: 1716086040
rx_queue_0_xdp_drops: 28601434
rx_queue_1_xdp_packets: 17873050
rx_queue_1_xdp_bytes: 1072383000
rx_queue_1_xdp_drops: 17873050
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
Toshiaki Makita [Thu, 11 Oct 2018 09:36:49 +0000 (18:36 +0900)]
veth: Account for XDP packet statistics on rx side
On XDP path veth has napi handler so we can collect statistics on
per-queue basis for XDP.
With this change we can now collect the XDP_DROP drop count as well as
packets and bytes coming through ndo_xdp_xmit. Packet counters shown by
"ip -s link", sysfs stats or /proc/net/dev are now correct for XDP.
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
Toshiaki Makita [Thu, 11 Oct 2018 09:36:48 +0000 (18:36 +0900)]
veth: Account for packet drops in ndo_xdp_xmit
Use existing atomic drop counter. Since drop path is really an
exceptional case here, I'm thinking atomic ops would not hurt the
performance.
XDP packets and bytes are not counted in ndo_xdp_xmit, but will be
accounted on rx side by the following commit.
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hoang Le [Thu, 11 Oct 2018 01:43:08 +0000 (08:43 +0700)]
tipc: support binding to specific ip address when activating UDP bearer
INADDR_ANY is hard-coded when activating a UDP bearer, so we could not
bind to a specific IP address, even in replicast mode where a given
remote ip address is used instead of a multicast ip address.
In this commit, we fix it by checking for and switching to the
appropriate local ip address.
before:
$netstat -plu
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address
udp 0 0 **0.0.0.0:6118** 0.0.0.0:*
after:
$netstat -plu
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address
udp 0 0 **10.0.0.2:6118** 0.0.0.0:*
Acked-by: Ying Xue <ying.xue@windriver.com>
Acked-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Hoang Le <hoang.h.le@dektech.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 16 Oct 2018 04:49:56 +0000 (21:49 -0700)]
Merge tag 'mlx5e-updates-2018-10-10' of git://git./linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5e-updates-2018-10-10
IPoIB netlink support and mlx5e pre-allocated netdevice initialization
IP link was broken due to the changes in IPoIB for the rdma_netdev
support after commit
cd565b4b51e5
("IB/IPoIB: Support acceleration options callbacks").
This patchset fixes IPoIB pkey creation and removal using rtnetlink by
adding support in both IPoIB ULP layer and mlx5 layer:
From Jason and Denis:
1) Introduces changes in the RDMA netdev code in order to
allow allocation of the netdev to be done by the rtnl netdev code.
2) Reworks IPoIB initialization to use the two step rdma_netdev
creation.
From Feras and Saeed, mlx5e netdev layer refactoring to allow accepting
pre-allocated netdevs:
3) Adds support to initialize/cleanup netdevs that are not created
by mlx5 driver.
4) Change mlx5e netdevice layer to accept the pre-allocated netdevice
queue number.
5) Initialize mlx5e generic structures in one place to be used for all
netdev types NIC/representors/IPoIB (both mlx5 allocated and
pre-allocated).
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 16 Oct 2018 04:46:07 +0000 (21:46 -0700)]
Merge branch 'defza-fddi'
Maciej W. Rozycki says:
====================
FDDI: DEC FDDIcontroller 700 TURBOchannel adapter support
This is an update to <http://patchwork.ozlabs.org/patch/342737/>. I
believe I have addressed all the requests made in the previous review
round.
There is still one `checkpatch.pl' warning remaining:
WARNING: quoted string split across lines
+ pr_info("%s: ROM rev. %.4s, firmware rev. %.4s, RMC rev. %.4s, "
+ "SMT ver. %u\n", fp->name, rom_rev, fw_rev, rmc_rev, smt_ver);
total: 0 errors, 1 warnings, 2458 lines checked
however I think the value of staying within 80 columns is higher than the
value of having the string on a single line. This is because with all the
formatting specifiers there it is not directly greppable based on the
final output produced to the kernel log on one hand, e.g.:
tc2: ROM rev. 1.0, firmware rev. 1.2, RMC rev. A, SMT ver. 1
while it can be easily tracked down by grepping for an obvious substring
such as "RMC rev" on the other.
The issue with MMIO barriers I discussed in the course of the original
review turned out mostly irrelevant to this driver, because as I have
learnt in a recent Alpha/Linux discussion starting here:
<https://marc.info/?i=alpine.LRH.2.02.1808161556450.13597%20()%20file01%20!%20intranet%20!%20prod%20!%20int%20!%20rdu2%20!%20redhat%20!%20com>
our MMIO API mandates the `readX' and `writeX' accessors to be strongly
ordered with respect to each other, even if that is not implicitly
enforced by hardware.
Consequently I have removed all the explicit ordering barriers and
instead submitted a fix for MIPS MMIO implementation, which currently does
not guarantee strong ordering (the MIPS architecture does not define bus
ordering rules except in terms of SYNC barriers), as recorded here:
<https://patchwork.linux-mips.org/project/linux-mips/list/?series=1538>.
Enforcing strong MMIO ordering can be costly however and is often
unnecessary, e.g. when using PIO to access network frame data in onboard
packet memory. I have therefore retained the information that would be
lost by the removal of barriers, by defining accessor wrappers suffixed by
`_o' and `_u', for accesses that have to be ordered and can be unordered
respectively.
If we ever have an API defined for weakly-ordered MMIO accesses, then
these wrappers can be redefined accordingly. Right now they all expand to
the respective `_relaxed' accessors, because, again, enforcing the
ordering WRT DMA transfers can be costly and we don't need it here except
in one place, where I chose to use explicit `dma_rmb' instead.
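For illustration, a sketch of such wrappers consistent with the description
above (the exact set of access widths used by the driver may differ):
/* Ordered ('_o') and unordered ('_u') MMIO accessor wrappers; both
 * currently expand to the relaxed accessors, as explained above.
 */
#define readw_o		readw_relaxed
#define readl_o		readl_relaxed
#define writew_o	writew_relaxed
#define writel_o	writel_relaxed

#define readw_u		readw_relaxed
#define readl_u		readl_relaxed
#define writew_u	writew_relaxed
#define writel_u	writel_relaxed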
Similarly I have replaced the completion barriers with a read back from
the respective MMIO location (all adapter MMIO registers can be read with
no side effects incurred), which will serve its purpose on the basis of
MMIO being strongly ordered (although a read from TURBOchannel is going to
be slower than `iob', making the delay incurred unnecessarily longer).
And last but not least, I have split off the SMT Tx network tap support
to a separate change, 2/2 in this series, so that it does not block the
driver proper and can be discussed separately.
I think it has value in that it makes the view of the outgoing network
traffic complete, as if one actually physically tapped into the outgoing
line of the ring, between the station being examined and its downstream
neighbour. Without this part only traffic passed from applications
through the whole protocol stack can be captured and this is only a part
of the view.
With the `dev_queue_xmit_nit' interface now exported it's only
`ptype_all' that remains private, and to define a properly abstracted API
I propose to provide an exported `dev_nit_active' predicate that tells
whether any taps are active. This predicate is then used accordingly.
NB if there is a long-term maintenance concern about the `dev_nit_active'
predicate, then well, corresponding inline code currently present in
`xmit_one' has to be maintained anyway, and if the resulting changes
require `defza' to be updated accordingly, then I am going to handle it;
after some 20 years with Linux it's not that I am going to disappear
anywhere anytime. And once I am dead, which is inevitably going to happen
sooner or later, then the driver can simply be ripped from the kernel.
Though I suspect that at that point no DECstation Linux users may survive
anymore, even though hardware, being as sturdy as it is, likely will.
I have a patch for `tcpdump' to actually decode SMT frames, which I plan
to upstream sometime. Here's a sample of SMT traffic captured through the
`defza' driver in a small network of 4 stations and no concentrators,
printed in the most verbose mode:
01:16:59.138381 4f 00:60:b0:58:41:e7 00:60:b0:58:41:e7 73: SMT NIF ann vid:1 tid:00000270 sid:00-00-00-60-b0-58-41-e7 len:40: UNA: 00 00 00 06 0d 1a 02 ae StationDescr: 00 01 02 00 StationState: 00 00 30 00 MACFrameStatusFunctions.3: 00 00 00 01
01:17:00.332750 4f 08:00:2b:a3:a3:29 08:00:2b:a3:a3:29 73: SMT NIF ann vid:1 tid:0000013b sid:00-00-08-00-2b-a3-a3-29 len:40: UNA: 00 00 00 06 0d 1a 82 e7 StationDescr: 00 01 02 00 StationState: 00 00 30 00 MACFrameStatusFunctions.3: 00 00 00 01
01:17:00.354479 4f 00:60:b0:58:40:75 00:60:b0:58:40:75 73: SMT NIF ann vid:1 tid:0000029c sid:00-00-00-60-b0-58-40-75 len:40: UNA: 00 00 10 00 d4 74 b6 ae StationDescr: 00 01 02 00 StationState: 00 00 31 00 MACFrameStatusFunctions.3: 00 00 00 01
01:17:00.442175 4f 00:60:b0:58:41:e7 Broadcast 73: SMT NIF req vid:1 tid:00000271 sid:00-00-00-60-b0-58-41-e7 len:40: UNA: 00 00 00 06 0d 1a 02 ae StationDescr: 00 01 02 00 StationState: 00 00 30 00 MACFrameStatusFunctions.3: 00 00 00 01
01:17:00.448657 41 08:00:2b:a3:a3:29 00:60:b0:58:41:e7 73: SMT NIF rsp vid:1 tid:00000271 sid:00-00-08-00-2b-a3-a3-29 len:40: UNA: 00 00 00 06 0d 1a 82 e7 StationDescr: 00 01 02 00 StationState: 00 00 30 00 MACFrameStatusFunctions.3: 00 00 00 01
01:17:01.015152 4f 08:00:2b:a3:a3:29 Broadcast 73: SMT NIF req vid:1 tid:0000013c sid:00-00-08-00-2b-a3-a3-29 len:40: UNA: 00 00 00 06 0d 1a 82 e7 StationDescr: 00 01 02 00 StationState: 00 00 30 00 MACFrameStatusFunctions.3: 00 00 00 01
01:17:01.111644 41 08:00:2b:2e:6d:75 08:00:2b:a3:a3:29 73: SMT NIF rsp vid:1 tid:0000013c sid:00-00-08-00-2b-2e-6d-75 len:40: UNA: 00 00 10 00 d4 c5 c5 94 StationDescr: 00 01 01 00 StationState: 00 00 11 00 MACFrameStatusFunctions.2: 00 00 00 01
01:17:04.814603 4f 08:00:2b:2e:6d:75 Broadcast 73: SMT NIF req vid:1 tid:0000013c sid:00-00-08-00-2b-2e-6d-75 len:40: UNA: 00 00 10 00 d4 c5 c5 94 StationDescr: 00 01 01 00 StationState: 00 00 11 00 MACFrameStatusFunctions.2: 00 00 00 01
01:17:04.814939 4f 08:00:2b:2e:6d:75 Broadcast 73: SMT NIF req vid:1 tid:0000013c sid:00-00-08-00-2b-2e-6d-75 len:40: UNA: 00 00 10 00 d4 c5 c5 94 StationDescr: 00 01 01 00 StationState: 00 00 11 00 MACFrameStatusFunctions.2: 00 00 00 01
01:17:04.820960 4f 08:00:2b:2e:6d:75 08:00:2b:2e:6d:75 73: SMT NIF ann vid:1 tid:0000013b sid:00-00-08-00-2b-2e-6d-75 len:40: UNA: 00 00 10 00 d4 c5 c5 94 StationDescr: 00 01 01 00 StationState: 00 00 11 00 MACFrameStatusFunctions.2: 00 00 00 01
Questions, comments? Otherwise, please apply.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Maciej W. Rozycki [Tue, 9 Oct 2018 22:57:49 +0000 (23:57 +0100)]
FDDI: defza: Support capturing outgoing SMT traffic
DEC FDDIcontroller 700 (DEFZA) uses a Tx/Rx queue pair to communicate
SMT frames with adapter's firmware. Any SMT frame received from the RMC
via the Rx queue is queued back by the driver to the SMT Rx queue for
the firmware to process. Similarly the firmware uses the SMT Tx queue
to supply the driver with SMT frames which are queued back to the Tx
queue for the RMC to send to the ring.
When a network tap is attached to an FDDI interface handled by `defza'
any incoming SMT frames captured are queued to our usual processing of
network data received, which in turn delivers them to any listening
taps.
However the outgoing SMT frames produced by the firmware bypass our
network protocol stack and are therefore not delivered to taps. This in
turn means that taps are missing a part of network traffic sent by the
adapter, which may make it more difficult to track down network problems
or do general traffic analysis.
Therefore call `dev_queue_xmit_nit' in the SMT Tx path, having first
checked that a network tap is attached with a newly-created
`dev_nit_active' helper wrapping the usual condition used in the
transmit path.
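A minimal sketch of that hook in the SMT Tx path (the surrounding skb
setup and queue handling are omitted):
/* Deliver the firmware-generated SMT frame to any attached taps. */
if (dev_nit_active(dev))
	dev_queue_xmit_nit(skb, dev);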
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maciej W. Rozycki [Tue, 9 Oct 2018 22:57:43 +0000 (23:57 +0100)]
FDDI: defza: Add support for DEC FDDIcontroller 700 TURBOchannel adapter
Add support for the DEC FDDIcontroller 700 (DEFZA), Digital Equipment
Corporation's first-generation FDDI network interface adapter, made for
TURBOchannel and based on a discrete version of what eventually became
Motorola's widely used CAMEL chipset.
The CAMEL chipset is present for example in the DEC FDDIcontroller
TURBOchannel, EISA and PCI adapters (DEFTA/DEFEA/DEFPA) that we support
with the `defxx' driver, however the host bus interface logic and the
firmware API are different in the DEFZA and hence a separate driver is
required.
There isn't much to say about the driver except that it works, but there
is one peculiarity to mention. The adapter implements two Tx/Rx queue
pairs.
Of these one pair is the usual network Tx/Rx queue pair, in this case
used by the adapter to exchange frames with the ring, via the RMC (Ring
Memory Controller) chip. The Tx queue is handled directly by the RMC
chip and resides in onboard packet memory. The Rx queue is maintained
via DMA in host memory by adapter's firmware copying received data
stored by the RMC in onboard packet memory.
The other pair is used to communicate SMT frames with adapter's
firmware. Any SMT frame received from the RMC via the Rx queue must be
queued back by the driver to the SMT Rx queue for the firmware to
process. Similarly the firmware uses the SMT Tx queue to supply the
driver with SMT frames that must be queued back to the Tx queue for the
RMC to send to the ring.
This solution was chosen because the designers ran out of PCB space and
could not squeeze onto the board the extra logic that would be required
to handle this SMT frame traffic without involving the driver, as the
later DEFTA/DEFEA/DEFPA adapters do.
Finally the driver does some Frame Control byte decoding, so to avoid
magic numbers some macros are added to <linux/if_fddi.h>.
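As an illustration of the decoding involved (the macro names below are
hypothetical and not necessarily those added to <linux/if_fddi.h>), the
FDDI Frame Control byte is laid out as CLFF ZZZZ, i.e. a class bit, an
address-length bit, two format bits and four control bits:
/* Hypothetical helpers; the header may define different names. */
#define FDDI_FC_CLASS(fc)	((fc) & 0x80)	/* synchronous/asynchronous */
#define FDDI_FC_ADDR_LEN(fc)	((fc) & 0x40)	/* 48-bit vs. 16-bit addressing */
#define FDDI_FC_FORMAT(fc)	((fc) & 0x30)	/* frame format bits */
#define FDDI_FC_CONTROL(fc)	((fc) & 0x0f)	/* control/priority bits */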
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Serhey Popovych [Tue, 9 Oct 2018 18:21:01 +0000 (21:21 +0300)]
tun: Consistently configure generic netdev params via rtnetlink
Configuring generic network device parameters on tun will fail in the
presence of an IFLA_INFO_KIND attribute in the IFLA_LINKINFO nested
attribute, since tun_validate() always returns failure.
This can be visualized with the following ip-link(8) command sequences:
# ip link set dev tun0 group 100
# ip link set dev tun0 group 100 type tun
RTNETLINK answers: Invalid argument
with contrast to dummy and veth drivers:
# ip link set dev dummy0 group 100
# ip link set dev dummy0 type dummy
# ip link set dev veth0 group 100
# ip link set dev veth0 group 100 type veth
Fix by returning zero in tun_validate() when @data is NULL, which is
always the case since rtnl_link_ops->maxtype is zero in the tun driver.
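A sketch of the resulting validator (not a verbatim copy of the patch):
static int tun_validate(struct nlattr *tb[], struct nlattr *data[],
			struct netlink_ext_ack *extack)
{
	/* With rtnl_link_ops->maxtype == 0 no IFLA_INFO_DATA is ever
	 * parsed, so @data is always NULL and the request is fine.
	 */
	if (!data)
		return 0;

	NL_SET_ERR_MSG(extack, "tun/tap creation via rtnetlink is not supported.");
	return -EOPNOTSUPP;
}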
Fixes:
f019a7a594d9 ("tun: Implement ip link del tunXXX")
Signed-off-by: Serhey Popovych <serhe.popovych@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 15 Oct 2018 23:30:36 +0000 (16:30 -0700)]
tools: bpftool: add map create command
Add a way of creating maps from user space. The command takes
as parameters most of the attributes of the map creation system
call command. After the map is created it is pinned to bpffs. This
makes it possible to easily and dynamically (without rebuilding
programs) test various corner cases related to map creation.
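Roughly, the new command feeds the same attributes one would pass through
libbpf and then pins the result; a sketch of the equivalent calls (the map
parameters and pin path below are illustrative):
struct bpf_create_map_attr attr = {
	.name		= "test_map",
	.map_type	= BPF_MAP_TYPE_HASH,
	.key_size	= 4,
	.value_size	= 8,
	.max_entries	= 64,
};
int fd = bpf_create_map_xattr(&attr);

if (fd >= 0)
	bpf_obj_pin(fd, "/sys/fs/bpf/test_map");	/* pin to bpffs */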
Map type names are taken from bpftool's array used for printing.
In general these days we try to make use of libbpf type names, but
there are no map type names in libbpf as of today.
As with most features I add, the motivation is testing (offloads) :)
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Mon, 15 Oct 2018 23:13:15 +0000 (16:13 -0700)]
Merge branch 'bpftool_sockmap'
John Fastabend says:
====================
The first patch adds support for attaching programs to maps. This is
needed to support sock{map|hash} use from bpftool. Currently, I carry
around custom code to do this so doing it using standard bpftool will
be great.
The second patch adds a compat mode to ignore non-zero entries in
the map def. This allows using bpftool with maps that have extra
fields that the user knows can be ignored.
correctly with maps being loaded by other tools or directly via
syscalls.
v3: add bash completion and doc updates for --mapcompat
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
John Fastabend [Mon, 15 Oct 2018 18:19:55 +0000 (11:19 -0700)]
bpf: bpftool, add flag to allow non-compat map definitions
Multiple map definition structures exist and users may have non-zero
fields in their definition that are not recognized by bpftool and
libbpf. The normal behavior is then to fail loading the map. Although
this is a good default behavior, users may still want to load the map
for debugging or other reasons. This patch adds a --mapcompat flag
that can be used to override the default behavior and allow loading
the map even when it has additional non-zero fields.
For now the only user is 'bpftool prog'; we can switch over other
subcommands as needed. The library now exposes an API that consumes
a flags field, but I kept the original API around as well in case
users of the API don't want to expose this. The flags field is an
int in case we need more control over how the API call handles
errors/features/etc in the future.
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
John Fastabend [Mon, 15 Oct 2018 18:19:50 +0000 (11:19 -0700)]
bpf: bpftool, add support for attaching programs to maps
Sock map/hash introduce support for attaching programs to maps. To
date I have been doing this with custom tooling, but that is less than
ideal as we shift to using bpftool as the single CLI for our BPF uses.
This patch adds new subcommands 'attach' and 'detach' to the 'prog'
command to attach programs to maps and then detach them.
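Underneath, the attach boils down to the existing BPF_PROG_ATTACH command
with the map as the target; a sketch via libbpf (the pinned paths are
illustrative and error handling is omitted):
int prog_fd = bpf_obj_get("/sys/fs/bpf/sk_msg_prog");
int map_fd  = bpf_obj_get("/sys/fs/bpf/sock_map");

/* attach the verdict program to the sockmap/sockhash */
bpf_prog_attach(prog_fd, map_fd, BPF_SK_MSG_VERDICT, 0);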
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Mon, 15 Oct 2018 23:08:39 +0000 (16:08 -0700)]
Merge branch 'ipv6_sk_lookup_fixes'
Joe Stringer says:
====================
This series includes a couple of fixups for the IPv6 socket lookup
helper, to make the API more consistent (always supply all arguments in
network byte-order) and to allow its use when IPv6 is compiled as a
module.
====================
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Joe Stringer [Mon, 15 Oct 2018 17:27:46 +0000 (10:27 -0700)]
bpf: Fix IPv6 dport byte-order in bpf_sk_lookup
Commit
6acc9b432e67 ("bpf: Add helper to retrieve socket in BPF")
mistakenly passed the destination port in network byte-order to the IPv6
TCP/UDP socket lookup functions, which meant that BPF writers would need
to manually swap the byte-order of this field, or else IPv6 sockets could
not be located via this helper.
Fix the issue by swapping the byte-order appropriately in the helper.
This also makes the API more consistent with the IPv4 version.
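A simplified sketch of the UDP leg of the helper after the fix (src6/dst6
are cast from the BPF tuple; the IPv6 lookup expects the destination port
in host byte order):
sk = __udp6_lib_lookup(net, src6, tuple->ipv6.sport,
		       dst6, ntohs(tuple->ipv6.dport),
		       dif, sdif, &udp_table, NULL);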
Fixes:
6acc9b432e67 ("bpf: Add helper to retrieve socket in BPF")
Signed-off-by: Joe Stringer <joe@wand.net.nz>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Joe Stringer [Mon, 15 Oct 2018 17:27:45 +0000 (10:27 -0700)]
bpf: Allow sk_lookup with IPv6 module
This is a more complete fix than
d71019b54bff ("net: core: Fix build
with CONFIG_IPV6=m"), so that IPv6 sockets may be looked up if the IPv6
module is loaded (not just if it's compiled in).
Signed-off-by: Joe Stringer <joe@wand.net.nz>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Mon, 15 Oct 2018 19:23:20 +0000 (12:23 -0700)]
Merge branch 'sockmap_and_ktls'
Daniel Borkmann says:
====================
This work adds a generic sk_msg layer and converts both sockmap
and later ktls over to make use of it as a common data structure
for application data (similarly as sk_buff for network packets).
With that in place the sk_msg framework spans across the ULP layer
in the kernel and allows for introspection or filtering of L7
data with the help of BPF programs operating on a common input
context.
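As an illustration of the kind of program that operates on this common
context, a minimal sk_msg verdict program could look as follows (a sketch,
not taken from this series; bpf_helpers.h refers to the header shipped
with the kernel samples/selftests):
#include <linux/bpf.h>
#include "bpf_helpers.h"

SEC("sk_msg")
int msg_verdict(struct sk_msg_md *msg)
{
	/* A real policy would inspect msg->data / msg->data_end and the
	 * socket 4-tuple here; this one simply lets everything pass.
	 */
	return SK_PASS;
}

char _license[] SEC("license") = "GPL";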
In a second step, we enable the latter for ktls, which was previously
not possible, meaning ktls and sk_msg verdict programs were
mutually exclusive in the ULP layer, which created challenges for
the orchestrator when trying to apply TCP based policy, for
example. Leveraging the prior consolidation we can finally overcome
this limitation.
Note, there's no change in behavior when ktls is not used in
combination with BPF, and also no change in behavior for standalone
sockmap. The kselftest suites for ktls, sockmap and ktls with
sockmap combined also run through successfully. For further
details please see the individual patches.
Thanks!
v1 -> v2:
- Removed leftover comment spotted by Alexei
- Improved commit messages, rebase
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>